Properties and Numerical Modeling-Simulation of Phase Change Materials
Introduction
There exists a domain of models that is principally classified into the linear and non-linear fields of modelling. In the field of non-linear modelling, significant progress has been made since the 1960s thanks to widely and regularly available computer technology. This dynamic development has influenced a large number of problems, including the description of the physical behaviour of non-trivial tasks. Non-linear models are solved in the material, spatial, and time domains. However, certain non-linear model domains are not sufficiently developed or regularly used for the analysis of simpler tasks. This group includes task models using non-linear discontinuous characteristics of materials, exemplified by the change of state of a material during heating or cooling. In this section, we use several descriptive examples to expose the problem of solving thermal tasks that involve phase change materials (PCMs) (Gille, T. et al. 2007, Volle, F. et al. 2010, Shi, L.P. et al. 2006). These are mostly coupled tasks (Fiala, P. December 1998). Within the specification of different aspects of the solution process, emphasis is placed on the final accuracy of the results of numerical analyses; therefore, rather than giving a complete description of the models, the text accentuates problematic spots in the solution of such tasks. The PCM characteristics are demonstrated on the tasks of designing a low-temperature accumulator, an efficient cooler of electronic components, and a separator of impurities in an industrial oil emulsion.
Energy, transformation, accumulation
Within the last decade, scientific interest in the fields of basic and applied research has focused more intensively on the problem of increasing the share of renewable sources of energy in total energy consumption per capita (Solar energy 2010). In this context, we have seen major development in the field of energy harvesting (Murat Kenisarin & Khamid Mahkamov May 2006, John Greenman et al. May 2005, Junrui Liang & Wei-Hsin Liao 2010, Vijay Raghunathan et al. April 2005, Jirku T. et al. May 2010), or the acquirement of energy from hitherto unused forms. The reason for such processes in technology naturally consists in the fact that the reserves of classical primary sources of energy and fossil fuels (Behunek, I. 2002, World Energy Statistics from the IEA 2002) available to current industrial society are limited. Moreover, such classification applies also to the possibilities of utilizing the energy of water and wind. A large number of countries have committed themselves to the reduction of greenhouse gas emissions and the related increase of the share of renewable sources in total energy consumption (Ministry of Industry and Trade of the Czech Republic, State Energy Conception 2004, Ministry of Industry and Trade of the Czech Republic 2000). However, the effort to comply with these commitments may be realized in absurd ways, such as an uncontrolled surge in the number of constructed solar photovoltaic systems, which is further aggravated by the related problem of their integration into the energy production system of a country, Fig. 1.

Fig. 1. A photovoltaic power plant design, the Czech Republic.

One of the applicable alternative source solutions consists in utilizing solar radiation (Solar energy 2010) within its entire spectrum. Among the advantages of this energy harvesting method there are mainly the low cost of the impinging energy, the unimpeded availability of the source in many regions of the Earth, and the pollution-free operation. Conversely, the related disadvantages can be identified in the low density of the impinging radiation power flow in the visible and near-visible spectrum (λ ∈ <440, 780> nm), the comparatively low efficiency of transformation into other forms of energy (considering the currently used photovoltaic elements), and the fact that the cost of a produced energy unit is often rather high when compared to other clean sources of electrical energy, such as nuclear power plants (Kleczek, J. 1981). In consequence of the uneven power flow density of the solar source within the daily or yearly cycle, and owing to weather changes, the solar energy application method is affected by the problems of effective utilization, regulation in power systems, and the necessity of accumulating the energy acquired from solar radiation. A feasible technique of energy accumulation seems to consist in the direct exploitation of the physical effects of material properties as related to metals, liquids and gases (Gille, T. 2007, Shi, L.P. 2006). Accumulators facilitate power take-off during any time period depending on the needs of the consumer or the power system operator, which provides for the balance in the cost/power take-off relation within the required time interval. Thus, the power distribution network stability is improved, with a substantial reduction of the probability of black-out (Black-out 2003). There occurs a compensation of the time disproportion between the potential of the sources of electrical energy on the one hand and the consumption in time and place on the other. Solar energy accumulation can be technologically realized through a wide variety of methods; research in the field is also being consistently developed (Juodkazis, Saulius et al. November 2004, Zhen Ren et al. January 2010, Liu, Y.-T. et al. 2008). Some of the proposed approaches are based on classical solutions. These include the accumulation of energy utilizing the potential energy of mass in a gravitational field (water), the kinetic energy of mass (flywheels), and the non-linearities in the state of a mass phase (the compression of gases). Another group of accumulators is based on solutions utilizing the energy of an electromagnetic field; in this case, the most common is the application of electric accumulators or microbiological systems. Yet another one of the fields to be quoted comprises energy accumulation using the properties of chemical bonds of non-trivial chemical systems (the production of synthetic fuels), electrochemical bonds, and the utilization of photochemical energy (the accumulation of low-potential heat in solar-powered systems). The process of designing chemical accumulator forms utilizes the physical effects of the non-linear behaviour of materials at phase changes (Behunek, I. April 2004, Behunek I. & Fiala P. Jun 2007).
Heat accumulation
Principles are known (Baylin, F. 1979) for the utilization of the characteristics of chemical-physical effects, and in this context there exist four basic methods of thermal energy accumulation. The first method consists in the utilization of the specific thermal capacity of substances (sensible heat), the second one is built on the application of a change in the state of substances (latent heat), the third one lies in thermochemical reactions, and the fourth one applies the sorption and desorption of gas/water vapour. Generally, the thermochemical reaction method provides a higher density of accumulated energy than the sensible heat or phase change options (Mar, R.W. & Bramletta, T.T. 1980). An endothermic reaction product contains energy in the form of a chemical bond that is released retroactively during an exothermic reaction. The energy release occurs through the action of a catalyst, which is a suitable characteristic for long-term accumulation. Other advantages of thermochemical accumulation include the possibility of transporting the products over long distances, the possibility of product storage at both low (with a low rate of loss) and very high temperatures (Goldstein, M. 1961), the low cost, and the fact that products of the reaction can be used as the medium in thermodynamic cycles (The Australian National University 2004). Research is currently conducted in this field (Mar, R.W. 1978, Mar, R.W. 1980). In order to accumulate energy, we can also utilize the heat balance at the sorption/desorption of moisture in the working substance. The difference with respect to other types of heat accumulation consists in the fact that sorption does not directly depend on the temperature, but rather on the relative humidity of the surrounding air. Therefore, the described method of accumulation may be realized at a constant temperature, which is an aspect utilizable in discharging the accumulator. In the progress of charging, the relative humidity of the air is decreased to the required level through heating the air to a higher temperature (Close, D.J. & Dunkle, R.V. 1977, Verdonschet, J.K.M. 1981).
Classical heat accumulation methods
The classical accumulation of heat utilizes the so-called sensible heat of substances (Kleczek, J. 1981). Being the simplest of all the methods, this approach was historically used in the first place. Traditional materials applied for the accumulation of heat are water and gravel.
The weight and specific heat capacity of these materials indicate the accumulable quantity of heat. This quantity is given by the calorimetric equation Q = m c (T_2 - T_1), where m is the mass, c the specific heat capacity, T_1 the temperature at the beginning, and T_2 the temperature at the end of charging.
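As a quick illustration of the calorimetric relation above, the following Python sketch compares the sensible heat stored per cubic metre of water and gravel over the same temperature swing; the property values are typical textbook figures inserted here as assumptions, not data taken from this chapter.

```python
# Sensible-heat storage per unit volume, Q = rho * V * c * (T2 - T1).
# Property values below are typical textbook figures (assumed, not from this chapter).
materials = {
    # name: (density [kg/m^3], specific heat capacity [J/(kg*K)])
    "water":  (1000.0, 4186.0),
    "gravel": (1800.0,  840.0),
}

T1, T2 = 20.0, 50.0   # charging start / end temperature [degC]
V = 1.0               # storage volume [m^3]

for name, (rho, c) in materials.items():
    Q = rho * V * c * (T2 - T1)          # accumulated heat [J]
    q_kwh = Q / 3.6e6                    # energy density [kWh/m^3]
    print(f"{name:6s}: Q = {Q/1e6:6.1f} MJ  ({q_kwh:.1f} kWh/m^3)")
```

With these assumed values, water stores roughly three times more heat per cubic metre than gravel over the same temperature interval, which is why gravel reservoirs need larger dimensions.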
Water heat reservoirs
If water is utilized in the process of accumulation, it is usually held in a suitable container during heating (Garg, H.P. et al. 1985). Even though the application of water has proved to be advantageous in many respects, there are also many drawbacks, especially for the preservation of low-potential heat. Water has the highest specific heat capacity of all known substances (Vohlídal, J. et al. 1999). It can be applied as both the accumulation and the working medium (exchangers are not necessary); charging and discharging can be simulated in an exact manner. If water is applied, heat storage or offtake causes temperature fluctuation and the thermal potential is lost (namely, the accumulator is charged at a sufficiently high input temperature T, which is averaged in the reservoir to a mean temperature T_mean and, during the subsequent heat offtake, the original temperature T cannot be reached) (Fisher, L.S. 1976, Lavan, Z. & Thomson, J. 1977). With water accumulators, liquid photothermal collectors need to be applied; this means that expensive and rather complicated technologies and installation methods are used, as opposed to the hot-air option.
Gravel heat accumulators
For multi-day or seasonal accumulation (Behunek, I. April 2004), heat reservoirs utilizing gravel are preferred; here, the air used as the heating medium is heated in hot-air collectors. This system eliminates some of the disadvantages of the previously described method. In regular realizations, Figure 2, heat transfer by conduction is also minimal (the individual gravel pieces touch one another only at the edges); here, however, the characteristics include the low heat capacity of the crushed stone (excessive dimensions of the reservoir) as well as a very difficult (even impossible as per Garg, H.P. et al. 1985) simulation of charging and discharging.
The accumulated heat is given by relation (3), Q = ρV [c_s (T_m - T_0) + Δh_m + c_l (T_e - T_m)], where ρ is the density, V the volume, c_s and c_l the specific heats of the solid and the liquid phase, Δh_m the melting enthalpy, Q the heat, and T_m, T_e the temperatures according to Figure 3. If heat is supplied to the material, there occurs the transformation from the solid into the liquid state. The phase transition appears when the crystal lattice is disrupted, namely when the amplitude of the oscillation of the crystal lattice particles is comparable with the relative distance between the particles. At this moment, the oscillation energy rises above the value of the crystal binding energy, the bond is broken, and the crystal transforms into the liquid phase. However, if heat is removed from the substance, there occurs the solidification (crystallization) of the material. During crystallization, the orderly motion of molecules gradually assumes the character of thermal oscillations around certain middle positions, namely a crystal lattice is formed. In pure crystalline substances, melting and solidification proceed at a constant temperature T_m, which does not vary during the phase transition. In amorphous substances, the phase transition temperature is not constant and the state change occurs within a certain range of temperatures, Figure 4. In simplified terms, for a macroscopic description of the numerical model, the phase change of a material is understood as a state in which the material changes its physical characteristics on the basis of (external) variations of its thermodynamic system. This state is often accompanied by a non-linear effect, Figure 3. The effect involves the energy Q supplied to the thermodynamic system of the material, the temperature T, the latent energy ΔQ necessary to change the external macroscopic state of the material, the initial state temperature T_0, the phase change temperature T_m, and the temperature T_e limiting the low-temperature mode.
Requirements placed on the PCM, reservoirs and casings
PCM materials applicable for the accumulation of heat utilizing a state change ought to meet the following criteria (Behunek, I. 2002): physical (a suitable phase diagram in the transition area, a suitable phase transition temperature, small changes of volume during the change of state, high density of the substance, supercooling tolerance, high specific melting heat, good thermal conductivity), chemical (non-flammability, non-toxicity, chemical stability, anticorrosive properties), and economical (low asking price, availability, low cost of a suitable accumulator). The structure of PCM reservoirs must conform to standard requirements placed on thermal containers. In general, with respect to the provision of a suitable speed of heat transfer, it is necessary to encase the actual PCM material and insert the resulting containers that hold the PCM into an external envelope; this insertion should be realized in such a way that, through its circulation, the heating medium ensures an optimum transfer of heat energy in both directions (during charging and discharging), Figure 5. Consequently, there exists a substantial similarity to caloric reservoirs containing crushed stone (aggregate), and therefore the rules governing the construction of these reservoirs can be applied.
Basic classification of PCM materials
If we are to further consider the properties of PCMs, it is necessary to describe at least their basic classification and their properties at phase changes, relation (3).
a. The advantages of inorganic substances (Garg, H.P. et al. 1985, Vener, C. 1997) mainly consist in the high value of specific melting heat, good thermal conductivity, non-flammability, and low cost. The negative characteristics include the corrosivity of the substances to most metals, decomposition, loss of hygroscopic water, and the possibility of supercooling. Examples of inorganic PCMs are as follows: CaCl2·6H2O, Na2SO4·10H2O, Na2CO3·10H2O, MgCl2·6H2O, CaBr2·6H2O, Mg(NO3)2·6H2O, LiNO3·3H2O, KF·4H2O, Na2HPO4·12H2O.
b. Organic substances (Garg, H.P. et al. 1985, Vener, C. 1997) offer advantages such as a high value of specific melting heat, chemical stability, elimination of supercooling, and no corrosivity. The disadvantages consist in the inferior thermal conductivity, relatively significant variations of volume during the change of state, and flammability. Examples of organic PCMs include paraffin, wax, polyethylene glycol, high-density polyethylene, stearic acid (C17H35COOH), and palmitic acid (C15H31COOH).
c. Other substances include compounds, combinations of amorphous and crystalline substances, clathrates, and other items.
The advantage of low-potential heat accumulation in PCM application consists in its variability. A comparison of PCM and classical materials, together with a listing of several PCMs (Lane, G.A. 1983, Garg, H.P. et al. 1985, Vener, C. 1997, Favier, A. 1999), is provided in Table 1. The elementary reference quantity is the density of accumulated energy. We assume the initial charging temperature T_0 = 20 °C and the final temperature T_e = 50 °C. The course of accumulator loading with various types of filling (charge) is shown in Figure 4; a realization example of a PCM-based accumulator is provided in Figure 5.
Table 1. Materials used for accumulation in the heat reservoir and the corresponding accumulated energy density q [kWh·m-3].
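To make the comparison behind Table 1 concrete, the short Python sketch below evaluates relation (3) for water (sensible heat only) and for CaCl2·6H2O (sensible plus latent heat) over the 20-50 °C interval assumed above. The property values are typical literature figures inserted as assumptions; they are not taken from the chapter's own measurements.

```python
# Accumulated energy density per relation (3):
#   Q = rho * V * (c_s*(T_m - T_0) + dh_m + c_l*(T_e - T_m))
# Property values are typical literature figures (assumed for illustration).

def energy_density_kwh_m3(rho, c_s, c_l, dh_m, T_m, T_0=20.0, T_e=50.0):
    """Return accumulated energy density in kWh/m^3 for charging from T_0 to T_e."""
    if T_0 < T_m < T_e:                      # phase change lies inside the interval
        q = rho * (c_s*(T_m - T_0) + dh_m + c_l*(T_e - T_m))
    else:                                    # sensible heat only
        q = rho * c_l * (T_e - T_0)
    return q / 3.6e6                         # J/m^3 -> kWh/m^3

# Water: no phase change between 20 and 50 degC
print("water      :", round(energy_density_kwh_m3(1000, 4186, 4186, 0.0, T_m=0.0), 1), "kWh/m^3")

# CaCl2.6H2O: T_m ~ 29.6 degC, dh_m ~ 190 kJ/kg, rho ~ 1700 kg/m^3 (assumed values)
print("CaCl2.6H2O :", round(energy_density_kwh_m3(1700, 1400, 2200, 1.9e5, T_m=29.6), 1), "kWh/m^3")
```

The latent-heat contribution is what gives the PCM its advantage over water for the same temperature swing, even though the assumed specific heats of the salt are much lower.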
Calcium chloride hexahydrate and its modification
In Figure 6, the phase change of CaCl2·6H2O during heating and cooling is shown. The dashed lines show the theoretical behaviour under the condition that melting and freezing are realized at a constant temperature T_m (the case of pure crystalline substances). Impurities and the methodology of measuring are the main causes of the variations (the probe has to be placed only in small amounts of the hexahydrate). During solidification, supercooling occurred owing to weak nucleation. Crystallization was initiated thanks to a solid particle of the PCM added to the measured sample; otherwise, the crystallization would not have occurred. The group of materials for the encasing of the hexahydrate may include plastics, mild steel or copper; aluminium or stainless steel are not suitable. In some cases, temperature fluctuation above T_m may occur during solidification (Figure 7). The explanation was found in the binary phase diagram (Figure 4).
Numerical model of heat accumulators
The effectiveness of transferring heat to the active elements in the accumulator consists in the optimum setting of the dimensions and shapes for the circulation of the medium that transfers energy in the accumulator. Therefore, a necessary precondition of the design consisted in solving the air circulation model under the condition of a change in its temperature and of thermodynamic variations in the PCM material. The actual model of the active elements and its temperature characteristic is not fundamental to this task; the characteristic is known and realizable through commonly applied methods.
A geometric model of one layer of the accumulator is shown in Figure 10. In the flow equations, A is the external acceleration, υ the kinematic viscosity, and (grad v) has the dimension of a tensor. In equation (7), K are the suppressed pressure losses, f the resistance coefficient, D_h the hydraulic diameter of the ribs, C the air permeability of the system, μ the dynamic viscosity, and u_x,y,z the unit vectors of the Cartesian coordinate system. The resistance coefficient is obtained from the Boussinesq theorem, where Re is the Reynolds number and a, b are coefficients from [40]. The model of the short deformation field is formulated from the condition of steady-state stability, where f are the specific forces in domain Ω, and t the pressures, tensions and shear stresses on the interface area Γ. By means of the transformation into local coordinates, we obtain the differential form of the static equilibrium, where div2 stands for the div operator of a tensor quantity and T_v is the tensor of internal tension whose components X, Y, Z are the stresses acting on elements of the area. It is possible to add a form of the specific force from (4)-(7) to the condition of static equilibrium. The form of the specific force is obtained by means of an external acceleration A, on the condition that the pressure losses and shear stresses τ are given, where F_l are the discrete forces and div2 is the divergence operator of a tensor. The resulting model covers the forces, viscosity, and pressure losses. We can prepare the discretization of equation (7) by means of the approximation of the velocity v and acceleration a (Behunek I. & Fiala P. Jun 2007). Boundary and initial conditions are defined on the interface: the initial temperature of the inlet air is 50 °C, the initial velocity of the air is 0.4 m·s-1, the outlet pressure is 101.3 kPa + 10 Pa, and the initial temperature of the air inside the accumulator, the PVC and the CaCl2·6H2O is 20 °C. The distribution of velocity values is indicated in Figures 11 and 12, and further results for the distribution of turbulent kinetic energy, dissipation, temperature and pressure follow in Figure 13. The calculation of the thermal model (finite element method (FEM), finite volume method (FVM), Ansys User's Manual) was realized under the same conditions as the previous turbulence model.
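The chapter does not reproduce the phase-change treatment inside the thermal solver, but a common way to include latent heat in a transient FEM/FVM calculation of this kind is the apparent (effective) heat capacity method, in which Δh_m is smeared over a narrow melting interval around T_m. The Python sketch below illustrates that idea for CaCl2·6H2O; the melting interval width and property values are assumptions for illustration, not parameters taken from the chapter.

```python
# Apparent (effective) heat capacity for a PCM: the latent heat dh_m is smeared
# over a small melting interval [T_m - dT, T_m + dT] so that a standard transient
# thermal solver can step through the phase change.
# Values below are illustrative assumptions for CaCl2.6H2O.

T_m  = 29.6      # melting temperature [degC]
dT   = 1.0       # half-width of the smeared melting interval [K] (assumed)
c_s  = 1400.0    # specific heat, solid [J/(kg*K)] (assumed)
c_l  = 2200.0    # specific heat, liquid [J/(kg*K)] (assumed)
dh_m = 1.9e5     # latent heat of fusion [J/kg] (assumed)

def c_apparent(T):
    """Effective heat capacity c_eff(T) used in  rho*c_eff*dT/dt = div(k grad T) + q."""
    if T < T_m - dT:
        return c_s
    if T > T_m + dT:
        return c_l
    # inside the melting interval: mean sensible capacity + latent contribution
    return 0.5 * (c_s + c_l) + dh_m / (2.0 * dT)

for T in (20.0, 29.0, 29.6, 30.5, 40.0):
    print(f"T = {T:5.1f} degC  ->  c_eff = {c_apparent(T):9.1f} J/(kg*K)")
```

The sharp rise of c_eff inside the melting interval is what reproduces the temperature plateau seen in the measured charging and discharging curves.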
Figure 13 shows the time dependence of the temperature in CaCl2·6H2O in the pipe marked with a black cross (Figure 11). We can compare the result of the numerical simulation with the measurement. The differences between the simulation and the measurement are caused by the inaccuracy of the model with respect to reality: we used tabular values of pure CaCl2·6H2O, whereas the pipes contain hexahydrate modified with 1.2% of BaCO3.
Cooling system
The PCM may be used for active or passive electronic cooling applications with high power at the package level (see Figure 14).
Analytical description and solution of heat transfer and phase change
We analyze the problem of heat transfer in a 1D body during the melting and freezing process with an external heat flux or heat convection given by the boundary conditions. The solution of this problem is known for the solidification of metals; we tried to apply this theory to the melting of crystalline salts. The 1D body can be a semi-infinite plane, a cylinder or a sphere. As the solid and the liquid part of the PCM have different temperatures, heat transfer occurs on the interface. According to Fig. 16, the origin of x is the axis of the pipe, the centre of the sphere, or the origin of the plate. The liquid starts to solidify if the surface is cooled by the flowing fluid (T_w < T_m). The equation describing the solid state is written for the plate with n = 0, the cylinder with n = 1 and the sphere with n = 2; a_s is the thermal diffusion coefficient in the solid state. For x = x_0 we can assume the following boundary conditions: constant temperature, or convective cooling, where q_w is the specific heat flux and λ_s is the thermal conductivity coefficient. The initial condition (t = 0) is prescribed for (24), and a matching condition is written for the interface between the solid and the liquid. The analytical solution is exact, but we consider several simplifying assumptions. The most important of these is that we can solve the solidification of the PCM only in a one-dimensional body. We consider a semi-infinite mass of liquid PCM at initial temperature T_0, which is cooled by a sudden drop of the surface temperature to T_p = 0 °C; this temperature is constant during the whole process of solidification. The simplifying assumptions are as follows: the body is a semi-infinite plane, the heat flux is one-dimensional along the x-axis, the interface between the solid and the liquid is planar, there is an ideal contact on the interface, the temperature of the surface is constant (T_p = 0 °C), the crystallization of the PCM proceeds at a constant temperature T_m, the thermophysical properties of the solid and the liquid are different but independent of the temperature, and there is no natural convection in the liquid. The initial and boundary conditions involve the initial temperature T_0 for x ≥ 0 at time 0, and the temperature equals T_m on the interface between the solid and the liquid. The latent heat evolved during the interface motion (the thickness of the volume element ds, area 1 m2, time 1 s) leads to the parabolic law of solidification, where ε is the root of the equation describing the freezing. If we solve the Fourier relations of heat conduction under the above-given conditions for the solid and the liquid, we obtain equations which allow for the calculation of the temperatures in the solid and liquid PCM as well as the location of the interface. The results are shown in Figure 17 (Behunek I. & Fiala P. Jun 2007).
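The parabolic law of solidification mentioned above can be made concrete with the classical one-phase Stefan (Neumann) solution, in which the interface position grows as s(t) = 2ε√(a_s t) and ε is the root of a transcendental equation. The Python sketch below solves that equation numerically for illustrative CaCl2·6H2O-like property values; the property values and the one-phase simplification (liquid initially at T_m) are assumptions and do not reproduce the chapter's full two-phase result.

```python
import math

# One-phase Stefan problem: solid grows into liquid initially at T_m,
# surface suddenly held at T_p < T_m.  Interface position: s(t) = 2*eps*sqrt(a_s*t),
# where eps solves  eps*exp(eps**2)*erf(eps) = St/sqrt(pi),  St = c_s*(T_m - T_p)/dh_m.
# Property values are illustrative assumptions for a CaCl2.6H2O-like salt.

c_s  = 1400.0         # specific heat of solid [J/(kg*K)]
dh_m = 1.9e5          # latent heat [J/kg]
k_s  = 1.0            # thermal conductivity of solid [W/(m*K)]
rho  = 1700.0         # density [kg/m^3]
T_m, T_p = 29.6, 0.0  # melting and surface temperature [degC]

a_s = k_s / (rho * c_s)                   # thermal diffusivity [m^2/s]
St  = c_s * (T_m - T_p) / dh_m            # Stefan number

def f(eps):
    return eps * math.exp(eps**2) * math.erf(eps) - St / math.sqrt(math.pi)

# simple bisection for the root eps of the transcendental equation
lo, hi = 1e-6, 5.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
eps = 0.5 * (lo + hi)

for t in (60.0, 600.0, 3600.0):           # seconds
    s = 2.0 * eps * math.sqrt(a_s * t)    # solidified thickness [m]
    print(f"t = {t:7.0f} s  ->  interface at {1e3*s:6.2f} mm  (eps = {eps:.3f})")
```

With these assumed values the solidified layer grows to a few millimetres within minutes and only to a few centimetres within an hour, which illustrates why the low thermal conductivity of the salt is listed among its drawbacks.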
Numerical analysis of heat transfer and phase change
If we compare the results of the analytical solution with the experimental measurement of the materials (Behunek, I. 2002), we can see good agreement. Outside domain Ω, the air velocity and pressure are zero. We can write the form for an element of the mesh related to the Cu cooler with PCM (Behunek I. & Fiala P. Jun 2007). For the description of different turbulent models, see (Piszachich, W.S. 1985, Wilcox, D.C. 1994). The numerical solution consists of two parts. Firstly, we solve the turbulence model and obtain the heat transfer coefficients on the surface of the ribs. These results constitute the input for the second part, in which the thermal model is calculated; we obtain the time dependence of the temperature distribution in the PCM. A geometric model of the copper cooler is shown in Figure 19. The CaCl2·6H2O is enclosed inside the bottom plate (see Figure 14). The size of the plate is 30x30x5 mm, and the ribs are 20 mm in height. The PCM volume is approximately 3.8·10-6 m3. The plate takes up the heat from the processor and the crystalline salt starts to melt at T_m. The air flows through the ribs and extracts heat from the cooler. In Figure 20, the distribution of the air velocity module is indicated; we can see an effective rise of the air flow velocity at the bottom of the ribs (detail A in Figures 19 and 20). The temperature distribution in the ribs is shown in Figure 20. Figure 21 compares the results of the numerical simulation with the measurement in the middle of the PCM enclosure (casing); we measured the temperature by means of a probe. The differences between the simulation and the measurement are due to the inaccuracy of the model with respect to reality. We used tabular values of pure CaCl2·6H2O, but the hexahydrate was modified with 1.2% of BaCO3 to avoid supercooling and deformation of the cooling curves after more cycles of melting and freezing. In order to obtain exact results, we would nevertheless need exact knowledge of the temperature dependence of the thermal conductivity, specific heat and density during the phase change (see Figure 22).
Separator of impurities in an industrial oil emulsion
The described separators may utilize the classical properties of H2O and oil (mechanical, fluid-based separation); alternatively, heating or, for example, microwave heating may be applied. In order to use this variant, however, we need to know the process of the material phase change from H2O to vapour (steam), and the related diversion of the vapour from the separator. Here, the numerical model proved to be superior to all experiments, as it enabled us to examine the details of behaviour and states within individual operating modes of the separator. By means of this method, it is possible to model various states of the emulsion as well as fault conditions in the apparatus; thus, we may identify critical sections of the separator design and perform a sensitivity analysis of the system. The reactor exploiting active porous substances was designed to enable oil preparation. The reactor is fed with an industrially produced mixture of oil and water; the desired reaction proceeds in the ceramic porous material. To achieve the desired reaction condition, it is necessary to heat the material and, simultaneously, remove the products of the reaction. After the reaction of the water, further heating is undesirable with respect to side reactions. Considering the above-mentioned requirements, microwave heating was chosen; the microwave heating effect is selective for the reaction of the water. The designed reactor operates at the frequency f = 2.4 GHz, with a magnetron output power of P = 800 W. This allows selective heating in the active porous material of the chamber. The basic scheme of the reactor is shown in Figure 23.
Mathematical model
In the field equations, E and H are the electric field intensity vector and the magnetic field intensity vector, D and B are the electric flux density vector and the magnetic flux density vector, J is the current density vector of the sources, ρ is the electric charge density, σ is the electric conductivity of the material, and Ω is the definition area of the model. The model is given in the manual (Ansys User's Manual). The set of equations (27) is independent of time and gives E.
For the transient vector E we can write the corresponding time-dependent form. The results were obtained by the solution of the non-linear thermal model with a phase change of the medium; the phase change occurs via the conversion of water to steam.
Figure 28 shows the phase-change time characteristic of water. The thermal model is based on the first law of thermodynamics, which can be written as ρ c (∂T/∂t + v·grad T) = div(k grad T) + q, where q is the specific heat source, ρ is the specific weight, c is the specific heat capacity, T is the temperature, t is the time, k is the thermal conductivity coefficient, and v is the medium flow velocity. If we consider Snell's principle, the model can be simplified. The solution was obtained with the help of the ANSYS solver; the iteration algorithm (FEM/FVM) was realized using the APDL language as the main program. A simplified description of the algorithm is shown in Figure 25. Figure 26 shows, for the instant t = 10.8 s, the distribution of the elements in which the phase change occurred, namely the exsiccation and separation of the water from the oil, with the magnetron output of P = 800 W and frequency f = 2.4 GHz. Figure 31 shows the distribution of the temperature rise in the process of exsiccation; here, the indicated aspects include the heat generated through the Joule loss in the material and through dielectric heating for the instant of time t = 3.6 s. The distribution of temperature was, for the individual instants, compared with the laboratory measurement.
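Figure 25 is only summarized verbally here, so the following Python-style pseudocode sketches one plausible form of the coupled high-frequency/thermal iteration loop it describes: an electromagnetic solve provides the Joule and dielectric heat sources, a transient thermal step with the water-to-steam phase change updates the temperatures, and the temperature-dependent material properties are fed back to the next electromagnetic solve. Function names such as solve_em_harmonic and solve_thermal_step are hypothetical placeholders, not ANSYS/APDL calls.

```python
# Hedged sketch of the coupled EM-thermal iteration (cf. Figure 25).
# solve_em_harmonic(), joule_and_dielectric_losses(), solve_thermal_step() and
# update_material_properties() are hypothetical placeholders for the solver calls.

def run_separator_model(T0, t_end, dt, f=2.4e9, P_magnetron=800.0):
    T = T0                                   # temperature field at t = 0
    props = update_material_properties(T)    # eps'', sigma, k, rho*c vs. temperature
    t = 0.0
    while t < t_end:
        # 1) time-harmonic EM solve at the magnetron frequency -> field E
        E = solve_em_harmonic(f, P_magnetron, props)

        # 2) volumetric heat sources: Joule loss + dielectric heating
        q = joule_and_dielectric_losses(E, props)

        # 3) transient thermal step including the water -> steam phase change
        #    (latent heat handled e.g. via an enthalpy / apparent-capacity model)
        T = solve_thermal_step(T, q, dt, props)

        # 4) feed the new temperatures back into the material data and repeat
        props = update_material_properties(T)
        t += dt
    return T
```

The essential design choice is the outer feedback loop: because the dielectric losses depend strongly on the local water content and temperature, the heat sources must be recomputed after every thermal step rather than fixed once at the start.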
Results of the thermal analysis
We may conclude that, in relation to the desiccated emulsion system, the numerical model enabled us to identify weak points of the design. These drawbacks consist in the fact that, under certain conditions, local spots might occur where an uncontrolled temperature rise could result in an explosion of the equipment; this status was experimentally verified. Different variants of the reactor design were analyzed with the aim of evaluating the process and time of desiccation.
Even though modelling realized with the application of PCMs produces certain theoretical complications, the actual model is reliable for the assumed boundary and initial conditions if the PCM macroscopic and microscopic properties are respected. This fact was verified experimentally and in laboratory conditions.
Acknowledgement
The research described in the paper was financially supported by the FRVS grant, research plan No MSM 0021630516, No MSM 0021630513.
Fig. 2. Schematic description of a heat accumulator using gravel.
Fig. 10. Geometric model of a layer with the mesh of elements.
Fig. 15. Heat transfer on the interface between the solid and the liquid parts.
Fig. 17. Position of the interface between the solid and the liquid PCM.
Fig. 21. Temperature distribution in the cooler (cross section).
It is possible to carry out an analysis of the MG model as a numerical solution by means of the FEM. The electromagnetic part of the model is based on the solution of the full set of Maxwell's equations.
Fig. 25. The simplified iteration algorithm of the model evaluation.
Fig. 26. The geometric model of the HF chamber.
Fig. 27. The distribution of the electric field intensity vector module E.
The basic geometrical model of the separator is shown in Figure 26; in Figures 26 and 27 we can see the distribution of the electric field vector modules with intensities E as well as the heat generated through the Joule loss in the material, W_jh. Figure 29 shows the heating Θ [°C].
Fig. 28. The distribution of the Joule heat module W_jh.
Figure 4 indicates the binary phase diagram of calcium chloride and water. The hexahydrate contains 50.66 wt% CaCl2, and the tetrahydrate 60.63 wt%. The melting point of the hexahydrate is 29.6 °C, with that of the tetrahydrate being 45.3 °C. The hexahydrate-α tetrahydrate peritectic point is at 49.62 wt% CaCl2-50.38 wt% H2O and 29.45 °C. In addition to the stable form, there are two monotropic polymorphs of the tetrahydrate salt, β and γ. The latter two are rarely encountered when dealing with the hexahydrate composition; however, the α tetrahydrate is stable from its liquidus temperature, 32.78 °C, down to the peritectic point, 29.45 °C, thus showing a span of 3.33 °C. When liquid CaCl2·6H2O is cooled at equilibrium, CaCl2·4H2O can begin to crystallize at 32.78 °C. When the peritectic is reached at 29.45 °C, the tetrahydrate hydrates further to form the hexahydrate, and the material freezes. The maximum amount of tetrahydrate which can be formed is 9.45 wt%, calculated by the lever rule. This process is reversed when solid CaCl2·6H2O is heated at equilibrium. At 29.45 °C the peritectic reaction occurs, forming 9.45% of CaCl2·4H2O and the liquid of the peritectic composition. With increasing temperature, the tetrahydrate melts, disappearing completely at 32.78 °C. Under actual freezing and melting conditions, the equilibrium processes described above may occur only partially or not at all. Supercooling of the tetrahydrate may lead to initial crystallization of the hexahydrate at 29.6 °C (or lower if this phase also supercools). It is possible to conduct modification by additives. From a number of potential candidates, Ba(OH)2, BaCO3 and Sr(OH)2 were chosen as they seemed to be feasible. When we used Ba(OH)2 and Sr(OH)2 at 1% part by weight, there was no supercooling. We were able to increase the stability of the equilibrium condition by adding these nucleating additives.
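The 9.45 wt% figure quoted above follows directly from the lever rule applied at the peritectic temperature, as the short check below shows; the compositions used are exactly those stated in the text.

```python
# Lever rule at the peritectic (29.45 degC), using the compositions quoted above.
x_overall      = 50.66  # overall composition: hexahydrate, wt% CaCl2
x_liquid       = 49.62  # peritectic liquid, wt% CaCl2
x_tetrahydrate = 60.63  # alpha tetrahydrate, wt% CaCl2

# mass fraction of tetrahydrate = (overall - liquid) / (tetrahydrate - liquid)
w_tetra = (x_overall - x_liquid) / (x_tetrahydrate - x_liquid)
print(f"maximum tetrahydrate fraction: {100*w_tetra:.2f} wt%")   # ~9.45 wt%
```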
"Materials Science"
] |
Continuum absorption in the vicinity of the toroidicity-induced Alfvén gap
Excitation of Alfvén modes is commonly viewed as a concern for energetic particle confinement in burning plasmas. The 3.5 MeV alpha particles produced by fusion may be affected as well as other fast ions in both present and future devices. Continuum damping of such modes is one of the key factors that determine their excitation thresholds and saturation levels. This work examines the resonant dissipative response of the Alfvén continuum to an oscillating driving current when the driving frequency is slightly outside the edges of the toroidicity-induced spectral gap. The problem is largely motivated by the need to describe the continuum absorption in the frequency sweeping events. A key element of this problem is the negative interference of the two closely spaced continuum crossing points. We explain why the lower and upper edges of the gap can have very different continuum absorption features. The difference is associated with an eigenmode whose frequency can be arbitrarily close to the upper edge of the gap whereas the lower edge of the gap is always a finite distance away from the closest eigenmode.
Introduction
The physics of continuum absorption derives from the classical resonant absorption problem, in which a driven mechanical oscillator absorbs energy efficiently when the frequency of the driving force matches the oscillatorʼs natural frequency. Continuum absorption is very common for electromagnetic waves in nonuniform media, where the wave frequency can locally match another natural frequency of the medium. A typical example is the magnetic beach [1]. When a low frequency magnetohydrodynamic (MHD) wave propagates along the weakening magnetic field, the local ion cyclotron frequency eventually becomes equal to the wave frequency, resulting in complete absorption of the wave due to ion cyclotron resonance. Another well-known example is the resonant absorption of laser light in a nonuniform plasma target [2].
In fusion devices, continuum absorption is one of the key damping mechanisms that determine excitation thresholds and saturation levels for Alfvén modes driven by energetic ions such as fusion-product alpha particles or fast ions generated via neutral beam injection and rf heating. The potentially unstable Alfvén waves cause undesirable losses of the fast ion population. A careful evaluation of continuum absorption is an essential element of the fast ion stability assessment. It is also essential for understanding nonlinear consequences of the fast ion driven instabilities. The absorption takes place near a magnetic surface where the driving frequency matches the local shear Alfvén frequency ω = k∥ v_A, where v_A is the local Alfvén velocity. Figure 1 shows a typical radial profile of the shear Alfvén frequency in a tokamak; this profile represents the so-called Alfvén continuum.
The studies of Alfvénic instabilities are largely focused on discrete spectral lines within the frequency gap in the continuum [3][4][5]. Continuum absorption can then occur when the 'tails' of such Toroidicity-induced Alfvén Eigenmodes (TAEs) cross the continuum. In this case, the continuum spectrum near the crossing point can be approximated by a linear function, and the resulting absorption introduces a small damping rate for the mode.
This damping rate can be calculated similarly to Landau damping of plasma oscillations [6], and it is known to be inversely proportional to the slope of the continuum at the crossing point.
However, the constant slope approximation breaks down at the edges of the toroidicity-induced gap. The gap forms at r = r_m, where the local dispersion relation is satisfied for the m and m+1 poloidal components simultaneously, so that k∥m(r_m) = -k∥m+1(r_m). At the edges of the gap, the continuum is nearly flat and forms two tips. This aspect needs special attention, because the situation is now different from the constant slope case. The need to evaluate continuum absorption at the tips becomes apparent when energetic-particle-driven modes chirp away from the TAE frequency and hit one of the tips. Recalling the constant slope picture, one might then expect a very strong continuum absorption at the tip. Yet, a more careful investigation presented herein shows that this is actually not the case.
In order to solve the tip absorption problem, we use a formalism that probes the MHD response of the plasma to an external current. The external current enters the linearized MHD equations as an oscillating source term, and we examine the response as a function of the source frequency. This source mimics the energetic particle current in the chirping event. In addition, we introduce a small dissipative term that prevents singularity in the MHD response. The dissipative term can be viewed as a friction force acting on the plasma flow. The resulting dissipative power is quite informative: it has a narrow peak inside the gap at the eigenmode frequency, and it represents continuum absorption at other frequencies provided that the friction force is sufficiently small. We have modified the ideal MHD eigenvalue code Adaptive EiGenfunction Independent Solution (AEGIS) [7] to implement this approach numerically. The adaptive grid used in AEGIS and the iterative scheme to search for the continuum crossings assures proper resolution near the tip frequency. Analytically, we choose a low shear setup and calculate the dissipative power by solving the MHD equations for shear Alfvén perturbations via asymptotic matching. Our result shows that continuum absorption vanishes at the lower tip and scales as a square root of the frequency deviation from the tip when the frequency is slightly below the tip. By comparison, the absorption near the upper tip can vary considerably due to an eigenmode that can form arbitrarily close to the upper tip depending on system parameters. These findings agree with our numerical results, and they resolve the outstanding mystery that the two tips have very different absorption features [8].
The paper is organized as follows. Section 2 introduces our basic equations and a reduced version of these equations in the limit of large aspect ratio and low magnetic shear. Section 3 presents an analytical consideration of continuum absorption within the reduced model. The numerical scheme and benchmark of the modified AEGIS code is described in section 4, followed by numerical solution of the unabridged equations. Section 5 summarizes our results.
Basic equations and ε versus s ordering
Assuming zero compressibility, we use a linearized ideal MHD equation for a cold plasma, in which ξ is the perturbed plasma displacement and B is the equilibrium magnetic field. To mimic the energetic particle drive, we introduce an external current δJ_A with a tunable frequency ω. In what follows, we assume this current to be localized on a single magnetic surface away from the toroidicity-induced gap. We also add a dissipative (friction) term proportional to a small coefficient γ to resolve the singularity at the continuum crossing, so that equation (1) now describes a forced oscillating system with frictional damping. We use a Fourier representation of the plasma displacement, ξ_ω. This gives an expression for the time-averaged power Q dissipated in the plasma volume due to the friction force. Although the power is formally proportional to γ, it actually remains finite in the limit γ → 0, because ξ_ω is large at the continuum crossing points. This allows us to choose a sufficiently small γ and scan the frequency to study the continuum absorption near the tip as the frequency changes.
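The statement that Q stays finite as γ → 0 can be illustrated with a toy model that is not part of the paper: a dense set of independent driven, damped oscillators standing in for the discretized continuum. As γ shrinks, each oscillator's response narrows but grows, and the summed dissipated power converges to a finite value whenever the drive frequency lies inside the "continuum". The Python sketch below is purely illustrative and uses arbitrary normalizations.

```python
import numpy as np

# Toy model: a continuum of natural frequencies w_n in [0.5, 1.5], each a driven,
# damped oscillator.  Time-averaged dissipated power of one oscillator with unit
# drive:  P_n ~ gamma*w**2 / ((w_n**2 - w**2)**2 + (gamma*w)**2).
# The integral over the continuum stays finite as gamma -> 0 (Lorentzian -> delta).

w_n = np.linspace(0.5, 1.5, 2_000_001)    # discretized "continuum"
dw  = w_n[1] - w_n[0]
w   = 1.0                                 # driving frequency, inside the continuum

for gamma in (1e-2, 1e-3, 1e-4, 1e-5):
    P = gamma * w**2 / ((w_n**2 - w**2)**2 + (gamma * w)**2)
    Q = P.sum() * dw                      # integral over the continuum
    print(f"gamma = {gamma:.0e}  ->  Q = {Q:.4f}")   # converges to ~pi/2 = 1.5708
```

Running the scan shows Q saturating near π/2 instead of diverging or vanishing, which is the behaviour exploited when the friction coefficient is chosen small in the AEGIS scans.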
In order to examine the absorption analytically, we consider equation (2) in the large-aspect-ratio (r/R ≪ 1) and low-magnetic-shear (s = d ln q / d ln r ≪ 1) limit, which is a common approximation for tokamaks. The asymptotic matching technique of TAE theory [9] will then allow us to evaluate the continuum damping rate for TAEs as well as the absorption away from the eigenmode frequency.
To start with, we use the following plasma displacement representation: in which Φ represents the shear Alfvén perturbation and dominates in the shear Alfvén frequency range. The potential, Φ, can be expressed as , where m and n are the poloidal and toroidal mode numbers. In the limit of large aspect ratio, low shear and high toroidal mode number, equation ( is the normalized frequency and The quantities ò, D ¢, and η are evaluated at r m , and all three of them have the same order of magnitude: r R = is the inverse aspect ratio; D ¢ is the radial derivative of the Shafranov shift, and 2 ( ) h = + D ¢ . Without loss of generality, the external current is assumed to be localized at y y 0 = with y 0 0 > , and we also assume that the external current flows along the equilibrium magnetic field, with j m d representing the mth poloidal component of the current. It is easy to see that 1 2 h W = + are the lower and upper edges of the frequency gap in the Alfvén continuum (i.e. the tip frequencies), and that both tips are located at y = 0. In the absence of the source terms, these wave equations describe bound states (TAEs) within the gap. We now recall [10][11][12] and remind some features of the TAEs that are essential for our subsequent steps. The wave equations contain two-dimensionless parameters: the inverse aspect ratio ò and the magnetic shear s. Depending on their relative values, the gap accommodates one, two, or multiple eigenmodes. There is only one TAE mode in the gap when s 2 . This mode is symmetric ( m m 1 f f » + ) and its frequency is slightly above the lower tip of the gap [11]. The second mode appears when ò becomes comparable to s 2 . The frequency of this mode lies slightly below the upper tip of the gap, and the mode is antisymmetric ( m m 1 f f » -+ ) [12]. For even larger values of ò, the gap contains multiple (more than two) modes. This happens when ò is comparable to or greater than s [10].
To simplify the subsequent analysis, we restrict ourselves to the case when s . We thereby exclude multiple modes from our consideration. However, we still intend to consider the s 2 ~range, which means that we need to take two modes into account: the ever-present symmetric mode near the lower tip and the antisymmetric mode that may emerge near the upper tip. We note that the condition s makes it allowable to neglect the first derivative terms on the righthandside (RHS) of equation (4). Following [11], we introduce the symmetric and antisymmetric combinations: We also take into account that y 1 , which simplifies equation (4) to . These equations describe two eigenmodes within the gap. The frequency parameter of the lower (nearly symmetric) mode is This mode is always present since 2 2 h -D ¢ = -D ¢ is positive. By comparison, the antisymmetric eigenmode near upper tip can only exist when s 2 -D ¢ > , and its frequency parameter is: These expressions for g 1 and g 2 follow from the disscussion of TAEs in [11,12] .
Analytical consideration
The limiting case of s involves a separation of scales in the solution of equation (5). The large difference between the outer region ( y s | |~) and the inner region ( y | | ) allows us to connect the outer and inner solutions via asympotic matching.
We first consider the vicinity of the lower tip ( g 1 1 | | + ), and only keep the dominant terms of equation (5) in the inner region (the first term on the LHS and the first term on the RHS). This simplification enables integration of equation (5) where y g 1 i 1 ( ) h n = + + and the branch of the square root is specified by the condition that the imaginary part of y 1 is positive.
In the outer region ( y s | |~), the first term on the LHS in equation (5a) scales as g y S , and we observe that this term is much smaller than the second term, which scales as s S Equation (5a) can therefore be simplified to: It now follows from equation (10) Note that S is an almost even function, except for the jump near the origin due to the source: S C D = .
Given the solution for S, we treat the RHS of equation ( The +¥ and -¥ integration limits in these expressions ensure that A vanishes at infinity, whereas the coefficient R 2 in front of K 0 is chosen to match A y d d at small values of y to the inner solution. As seen from equation (12), the jump in A across the origin is describes the contribution of the source. By equating the jumps in S in the outer and inner solutions, we find R yC.
where g 1 , defined by equation (6), is the frequency parameter of the lower TAE. As seen from equation (13), the dissipative power has a sharp peak inside the gap at the TAE frequency (g g 1 = ) with Q 1 ñ . This feature is characteristic for a simple forced oscillator with small friction, since there is no continuum absorption in the gap.
When the frequency of the source current is somewhat below the gap, the corresponding value of g is less than −1, and the quantity y 1 is predominantly imaginary. We can then simplify equation (13) to As we scan the source frequency downward from the tip, the quantities jand C remain nearly constant (they do not change significantly in the vicinity of the tip). For the first term in equation (14), the denominator contains a constant contribution and the g 1 | ( )| h + term. The constant part is finite at the lower tip, which shows that the total dissipative power Q scales as g 1 | ( )| h + downward from the lower tip. We can roughly estimate the range of g values for the g 1 | ( )| h + scaling as: which is of the same order as the frequency difference between the upper TAE and the tip. In contrast with the lower tip case, the threshold frequency can be very close to the tip because may change sign and become very small as parameter changes. In the limiting case when , and results in a large divergent part in the total dissipative power. Figure 2 shows the behavior of the first term in equation (16) versus frequency for various parameters, which demonstrates that the absorption can be large at the tip, and is sensitive to the parameters when We can now summarize the different scenarios for continuum absorption near the two tips in the s limit. Near the lower tip, where there is an ever-present neighboring eigenmode that never touches the tip, the absorption always vanishes at the tip. When ò increases, the separation between the eigenfrequency and the lower tip will increase as well as the range for the g 1 | ( )| h + scaling of Q below the lower tip. For the upper tip, where there is no neighboring eigenmode until ò becomes comparable to s 2 , the range of the g 1 | ( )| hscaling shrinks when ò approaches the mode-existence threshold; consequently, the g 1 | ( )| hscaling breaks down and we may expect a large continuum absorption at the tip when the upper eigenmode is just about to appear. As we further increase ò, the range of the g 1 | ( )| hscaling of Q grows in step with the distance between the eigenmode frequency and the tip. These features are responsible for significant asymmetry in continuum absorption at the tips.
Numerical scheme and benchmark
We use the AEGIS code to study continuum absorption numerically. AEGIS is a linear MHD eigenvalue code with an adaptive mesh in the radial direction (here toroidal magnetic flux ψ is used as the radial coordinate), and Fourier decomposition in the poloidal (θ) and toroidal (ζ) directions. The plasma displacement vector (x), which is orthogonal to the equilibrium magnetic field under incompressibility condition, is represented by two functions ( s x and x y ) as represents the external current. The definitions of matrices F, K, and G are given in [7]. We add a small positive imaginary part (i 2 g ) to the frequency ω in the expressions for F, K, and G, to capture the effect of friction. To solve for m x y in equation (17), AEGIS divides the radial computational domain into multiple regions, and matches the independent solutions at the interfaces of these regions. Consequently, 0 y can be set as one of the interfaces between the adjacent regions in AEGIS so that the source contribution will only affect the matching condition across 0 y . The adaptive mesh in AEGIS allows us to input the resonance point and pack the nearby grid points exponentially, which ensures proper resolution near the continuum crossing for small values of γ.
With the modified code, we first study the continuum absorption of the n = 1 TAE as a test case. Since the continuum crossing is away from the gap in this case, toroidicity-induced coupling can be neglected near the crossing, which means that the matrices F, K, and G are almost diagonal there. Suppose mth equation in (17) has singularity (F 0 mm » ) near the crossing. We can then keep the highest derivative term of m x y and simplify the equation to where C 1 is an integration constant. By linear expansion of F mm at the crossing 0 y , we find Here * g is the imaginary part of F mm , which is propotional to γ. The total dissipative power is which shows that Q is inversely propotional to the slope of the continuum spectrum near the crossing when γ is sufficiently small. We choose a low beta tokamak equilibrium that has nearly circular cross section (see figure 3), with the equilibrium pressure and safety factor plotted in figure 4. By slightly varying the density profile near the plasma edge, the gap can be either open or closed without changing the TAE frequency significantly. Figure 5 shows two density profiles we used ('density profile I' and 'density profile II') and the corresponding continuum spectra. For 'density profile I', the gap is open and we find the n = 1 TAE (whose frequency is labeled in figure 5). On the other hand, 'density profile II' closes the gap and introduces continuum absorption at the edge for the TAE. We scan the source frequency for 'density profile II' for different values of γ and plot the total dissipative power in figure 6. This figure shows that the dissipative power has a peak near the eigenfrequency, and it converges for small values of γ away from the eigenfrequency, as we expect analytically. To check how well the mode structure is resolved at the continuum crossing, we plot the plasma response when dissipative power has a peak and compare it to the TAE structure when the gap is open in figure 7. We see that the plasma displacement in the two cases agrees quite well away from the continuum crossing. In addition, we compare the calculated plasma response near the continuum crossing with the analytical solution, which is obtained after integration of equation (18): The code output is found to be in close agreement with this expression. Near the eigenfrequency, the bulk plasma response is large and it can still contribute considerably to the total dissipative power even when γ is relatively small. As a result, the total dissipative power is greater than the continuum absorption at the crossing. To single out the continuum absorption at eigenfrequency, we define the Here we use the fact that the total energy of the mode, E tot , is contributed equally from the bulk plasma kinetic energy and potential energy, and we choose * d g to determine the integration limits. For small values of γ, the bulk plasma energy is well-separated from the continuum crossing and d g . We calculate the continuum damping rate for different values of γ for the n = 1 TAE. The continuum damping rate converges as γ decreases, and its value agrees with the result obtained via analytic continuation in [13].
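Equations (18)-(21) are only referenced above, but the key statement, that the dissipated power at an isolated continuum crossing becomes independent of γ and inversely proportional to the local slope of the continuum, can be checked with a simple numerical integral of the Lorentzian response. The snippet below is an illustrative check, not AEGIS output; the slope value Fp is arbitrary.

```python
import numpy as np

# Near an isolated crossing, F_mm(psi) ~ Fp*(psi - psi0) + i*gamma_star, and the local
# dissipated power density goes like  gamma_star / (Fp**2*(psi - psi0)**2 + gamma_star**2).
# Integrating across the crossing gives pi/|Fp|, independent of gamma_star.

Fp   = 3.7                                    # arbitrary slope of the continuum at the crossing
psi  = np.linspace(-1.0, 1.0, 2_000_001)      # radial coordinate relative to the crossing
dpsi = psi[1] - psi[0]

for gamma_star in (1e-2, 1e-3, 1e-4):
    integrand = gamma_star / ((Fp * psi)**2 + gamma_star**2)
    Q = integrand.sum() * dpsi
    print(f"gamma* = {gamma_star:.0e}:  Q = {Q:.5f}   (pi/|Fp| = {np.pi/abs(Fp):.5f})")
```

The printed values stay pinned at π/|Fp| as γ* is reduced, which is the convergence behaviour reported for the total dissipative power away from the eigenfrequency.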
Tip absorption results
In the studies of continuum absorption at the tips, there are two closely spaced crossings that are equally important. To resolve the field structure at the two crossings, γ must be very small and the points for grid setting need to be chosen carefully. To meet the numerical requirement, we search for the continuum crossings iteratively. Previous studies of tip absorption in a reversed-shear configuration [8] provide a good guidance for our simulation since the numerical requirements are similar despite the differences in the physics picture.
For better comparison with the analytical result, we study the n = 5 case using the same tokamak equilibrium (with 'density profile II') as in section 4.2. Figure 8 is the corresponding n = 5 continuum spectrum. We set the boundary at ψ = 0.4 to focus on the tip absorption in the first gap. In this way the gap is open and TAEs appear both near the upper and the lower tip (labeled in figure 8). The two TAEs are strongly asymmetric (the upper TAE is much closer to the upper tip than the lower TAE to the lower tip).
We scan the source frequency from below the lower tip to the upper tip and plot the total dissipative power in figure 9. Inside the gap, the continuum absorption is almost zero except for the peaks at the TAE eigenfrequencies. Outside the gap, the dissipative power shows good convergence for different values of γ, and we observe a significant difference between the upper and lower tips. Figure 10 is a zoom-in of the dissipative power near the lower tip. It shows that the absorption almost vanishes at the tip and follows the √Δω scaling below the lower tip, where Δω measures the frequency difference from the lower tip.

Figure 9. Plots of the total dissipative power Q versus frequency for different values of γ for the n = 5 case, from below the lower tip to above the upper tip. The dissipative power is negligible in the gap, except for the TAE peaks. The peaks at the lower TAE (ω = 0.437) are outside the frame. The peak values are Q = 1333 for γ = 1e-5, Q = 745 for γ = 2e-5 and Q = 327 for γ = 5e-5 (they roughly scale as 1/γ). Note that Q is insensitive to γ outside the gap.

Figure 10. A zoom of the total dissipative power Q versus frequency near the lower tip for γ = 1e-6. The plot shows a clear √Δω scaling when the frequency is below the lower tip.

Figure 11. A zoom of the total dissipative power Q versus frequency near the upper tip for γ = 1e-6, in which the absorption peak in the gap is well resolved. Above the upper tip, the power has a small range of √Δω scaling and quickly grows to a finite value.

To study the tip absorption as system parameters change, we investigate a set of eight equilibrium cases, which are generated by varying the strength of the equilibrium current. All these equilibrium cases have negligible pressure and similar safety factor profiles. Yet, the safety factor value decreases monotonically from case 1 to case 8, which can be seen from figure 12. Thus, as one moves from case 1 to case 8, the position of the gap moves outwards and the ratio of ε to s² also decreases monotonically. With these equilibria, we observe the expected sensitivity of the upper tip absorption to variation of the aspect ratio and magnetic shear. From case 1 to case 8, the frequency interval between the lower TAE and the lower tip does not change significantly. Consequently, the lower tip absorption turns out to be nearly the same. Yet, the upper TAE and the upper tip absorption are much more sensitive. Figure 13 shows the absorption near the upper tip as the equilibrium changes. It is apparent that the TAE frequency moves closer to the upper tip and the range of the √Δω scaling narrows from case 1 to case 4. As the q profile decreases from case 5 to case 8, the TAE disappears and the range of the √Δω scaling grows. This feature of the upper tip absorption agrees with the analytical expectation presented in figure 2, and it also appears in our simulations for the n = 1 case.
Summary and discussion
To summarize, we have examined the continuum absorption around the edges of the toroidicity-induced gap in the Alfvén continuum spectrum. To solve the problem, we introduce a driving current and a small dissipative term into the ideal MHD equations.
(Figure caption fragment: the frequency is normalized by the upper tip frequency ω_t; note that the pattern changes dramatically as the upper TAE, which is outside the frame for case 1, moves upward and merges into the continuum.)
| 5,968.2 | 2015-11-18T00:00:00.000 | [ "Physics" ] |
Effects of Color Modifier on Degree of Monomer Conversion, Biaxial Flexural Strength, Surface Microhardness, and Water Sorption/Solubility of Resin Composites
Color modifiers can be mixed with resin composites to mimic the shade of a severely discolored tooth. The aim of this study was to assess the effects of a color modifier on the physical and mechanical properties of a resin composite. The composite was mixed with a color modifier at 0 wt% (group 1), 1 wt% (group 2), 2.5 wt% (group 3), or 5 wt% (group 4). The degree of monomer conversion (DC) was examined after light curing for 20 or 40 s. Biaxial flexural strength (BFS)/modulus (BFM), surface microhardness (SH), and water sorption (Wsp)/solubility (Wsl) were also tested. The DC of group 1 was significantly higher than that of groups 3 and 4. The increase in curing time from 20 to 40 s increased the DC by ~10%. The BFS, BFM, Wsp, and Wsl of all the groups were comparable. A negative correlation was detected between the concentration of color modifier and the BFS and DC, while a positive correlation was observed with Wsp. In conclusion, the color modifier reduced the DC of composites, but the conversion was improved by extending the curing time. The increase in color modifier concentration also correlated with a reduction in strength and an increase in the water sorption of the composites.
Introduction
The systemic administration of tetracycline during skeletal and tooth development leads to the deposition of the drug into the tissues, causing irreversible intrinsic discoloration [1]. The severity of tetracycline-induced tooth discoloration varies from yellow to dark brown, which is a major challenge in restorative dentistry. A common method for managing the lesions or masking the discolored teeth is the use of indirect veneers [2]. The technique generally requires the removal of the tooth surface, followed by the placement of a desirable shade of ceramic to mask the underlying discoloration. The placement of ceramic veneers provides excellent esthetic outcomes [3], but the technique is invasive and requires great experience from the operator [4].
A minimally invasive approach to restoring tetracycline-induced tooth discoloration is the use of direct resin composites [5]. Additionally, the use of direct composites to manage poor esthetics in anterior teeth was facilitated by the substantial improvement in the physical and mechanical properties of resin composites. However, the shade of most commercial composites is unable to mimic the discolored tooth. The application of light-cured characterizing materials or color modifiers under or between the incremental layers of composite may help to mask the discoloration and produce a natural appearance or desirable restoration shade [6]. Color modifiers consist of light-curable, low-viscosity methacrylate monomers, colorants, and pigments that are available in various colors, such as brown, black, red, or white. The materials contain a low filler content to aid the flowability and adaptation to the surface. The purpose of using a color modifier is to mimic the shade, natural appearance, or characteristics of the tooth [7,8]. However, a study showed that the placement of color modifiers between the composite layers reduced the cohesive strength of the composite, which may affect the longevity of the restoration [7]. Another minimally invasive and simplified method to restore a tetracycline-induced discolored tooth is the use of a composite mixed with a color modifier to mimic the shade of the discolored tooth (Figure 1).
Figure 1. Example of using a composite mixed with a color modifier for restoring a severe tetracycline-induced discolored tooth. (A) A patient willing to restore the fracture of the lower left central incisor using direct resin composite; (B) resin composite mixed with the color modifier (grey shade), which was used to mimic the discolored dentin; (C) final outcome of the composite restoration that exhibits the natural appearance of the discolored tooth.
The incorporation of a color modifier into composites may reduce the physical and mechanical properties of the materials. The dark pigments from the color modifier may reduce light transmission, which could decrease the degree of monomer conversion in the materials [9]. The low conversion may reduce the polymer cross-linking and the rigidity of the polymer network. This may subsequently promote water sorption/solubility and the release of monomers from the material [10,11]. Furthermore, unreacted monomers identified in composites have been shown to induce cytotoxic, genotoxic, mutagenic, carcinogenic, and allergenic effects in in vitro studies [12]. Darker composites reach their maximum polymerization after light curing more slowly than composites with lighter shades, which leads to a lower degree of monomer conversion [13,14]. Additionally, darker-shade composites tend to absorb more light and require a longer exposure time compared with lighter-shade composites [9]. Furthermore, the incorporation of low-molecular-weight monomers from the color modifier may decrease the mechanical properties of the composites [15,16], which could potentially reduce the longevity of the restoration.
At present, the evidence explaining the effect of the incorporation of a color modifier on the physical and mechanical properties of resin composites is limited. The aim of the current study was, therefore, to assess the effect of the incorporation of a color modifier on the degree of monomer conversion (after light-curing for 20 or 40 s), surface microhardness, biaxial flexural strength/modulus, and the water sorption/solubility of the composite material. The null hypothesis was that the addition of a color modifier at different concentrations would have no significant effect on the physical/mechanical properties of the material.
Materials Preparation
A commercial resin composite (Harmonize shade A3, Kerr Corporation, Orange, CA, USA) was mixed with a color modifier (Kolor Plus, Kerr Corporation, Orange, CA, USA) at 0 wt% (group 1 or control), 1 wt% (group 2), 2.5 wt% (group 3), and 5 wt% (group 4). The materials were weighed using a four-figure balance and hand-mixed within 20 s in a dark box. The compositions of the commercial materials are presented in Table 1. A schematic explaining the protocol used in the current study is presented in Figure 2.
Table 1. The composition of the commercial materials used in the current study. The exact amount is not provided by the manufacturer.
Degree of Monomer Conversion (DC)
The DC was measured using an attenuated total reflection Fourier-transform infrared spectrometer (ATR-FTIR, Nicolet i5, Thermo Fisher Scientific, Waltham, MA, USA) (n = 5) [17]. The composite and color modifier were weighed and hand-mixed within 20 s. The mixed paste was placed in a metal ring (1 mm thickness) on the ATR diamond. The paste was covered and pressed with an acetate sheet so that the thickness of the composite was fixed at 1 mm. The specimens were light-cured using an LED light-curing unit (irradiance of 1200 mW/cm², SmartLite Focus Pen Style, DENTSPLY Sirona, York, PA, USA) from the top surface for 20 or 40 s (Figure 1). The curing time of 20 or 40 s is clinically relevant and commonly used in curing protocols for resin composites [18]. FTIR spectra were obtained in the region of 700-1800 cm⁻¹ at the bottom of the specimen before and after curing. The test was conducted at room temperature (25 ± 1 °C). The DC (%) of the specimen was then calculated using the following equation:
DC (%) = [1 − (ΔA_t / ΔA_0)] × 100
where ΔA_0 and ΔA_t represent the absorbance of the C-O peak (1320 cm⁻¹) above the background level at 1335 cm⁻¹ before and after curing at time t, respectively. The peak at 1320 cm⁻¹ [ν(C-O)] of the methacrylate group was used to calculate the DC due to the lower variation in the result compared to that obtained from the peak at 1636 cm⁻¹ [ν(C=C)] [19].
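As a minimal illustration of the calculation step (not the authors' own script, and with hypothetical absorbance values), the two-point formula reconstructed above can be evaluated as follows:

```python
# Illustrative sketch: degree of conversion from baseline-corrected absorbance
# of the 1320 cm^-1 C-O peak measured before (uncured) and after (cured) light curing.

def degree_of_conversion(delta_a_uncured: float, delta_a_cured: float) -> float:
    """DC (%) = (1 - dA_cured / dA_uncured) * 100."""
    return (1.0 - delta_a_cured / delta_a_uncured) * 100.0

# Hypothetical absorbance values for one specimen
print(degree_of_conversion(delta_a_uncured=0.210, delta_a_cured=0.120))  # ~42.9 %
```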
Surface Microhardness (SH)
Disc specimens (n = 5) were prepared according to the previous section. They were immersed in 10 mL of deionized water at 37 °C for 24 h before the test. The Vickers surface microhardness of the specimens was tested using a microhardness tester (FM-800, Future-Tech Corp, Kanagawa, Japan) at room temperature (25 ± 1 °C), with an indenter load of 50 g for an indentation time of 15 s [20,21]. The results were recorded as the Vickers hardness number (VHN). The obtained hardness value of each specimen was the average of values measured from four areas on the surface.
Biaxial Flexural Strength (BFS) and Modulus (BFM)
The composites and color modifier were weighed and mixed within 20 s. The mixed pastes were loaded into a metal circlip (10 mm in diameter and 1 mm in thickness, Springmasters, Redditch, UK). The specimens were covered with an acetate sheet and glass slabs on the top and bottom surfaces. They were light-cured using the LED light-curing unit for 20 s on the top and bottom sides to produce disc specimens (Figure 1). The specimens were left at room temperature for 24 h to allow the curing process to complete. Then, the specimens were removed from the circlip and any excess was trimmed. They were placed in tubes containing 5 mL of deionized water. The tubes were incubated at 37 °C for 24 h before the test.
The biaxial flexural strength (BFS) test was conducted at room temperature (25 ± 1 °C). The disc specimen was placed on a ball-on-ring testing jig under a mechanical testing frame (AGS-X, Shimadzu, Kyoto, Japan). The load cell (500 N) was applied on the jig at a crosshead speed of 1 mm/min until the specimen fractured. The load at failure was then recorded. The BFS (Pa) was then calculated according to the following equation [22]:
BFS = (F/d²) × {(1 + ν)[0.485 ln(r/d) + 0.52] + 0.48}
where F is the load at failure (N), d is the specimen's thickness (m), r is the radius of the circular support (m), and ν is Poisson's ratio (0.3). Then, the biaxial flexural modulus (BFM, Pa) was obtained using the following equation [23]: where ΔH/ΔW_c is the rate of change of the load with respect to the central deflection, i.e., the gradient of the force-displacement curve (N/m), β_c is the center deflection function (0.5024), and q is the ratio of the support radius to the radius of the disc.
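A brief numerical sketch of the strength calculation follows; it assumes the commonly used ball-on-ring expression reconstructed above (Equation [22]) and uses hypothetical specimen values, so the numbers are purely illustrative:

```python
import math

def biaxial_flexural_strength(load_n: float, thickness_m: float,
                              support_radius_m: float, poisson: float = 0.3) -> float:
    """Ball-on-ring BFS (Pa), assuming the standard form with constants 0.485/0.52/0.48."""
    return (load_n / thickness_m ** 2) * (
        (1 + poisson) * (0.485 * math.log(support_radius_m / thickness_m) + 0.52) + 0.48
    )

# Hypothetical specimen: 1 mm thick, 4 mm support radius, failure at 80 N
print(biaxial_flexural_strength(80, 1e-3, 4e-3) / 1e6)  # ~162 MPa
```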
Water Sorption (W sp ) and Water Solubility (W sl )
Disc specimens were prepared (n = 5). They were placed in the first desiccator with a controlled temperature of 37 ± 1 °C for 22 h. Then, the specimens were moved to the second desiccator with a controlled temperature of 25 ± 1 °C for 2 h. The mass of the specimens was then measured using a four-figure balance. These procedures were repeated until a constant mass (conditioned mass, m1) was obtained.
The specimens were then placed in a tube containing 10 mL of deionized water. They were placed in an incubator with a controlled temperature of 37 ± 1 °C for 7 days. Then, the specimens were removed and blotted dry. The mass of the specimens was recorded after 7 days (m2).
The specimens were then reconditioned following the procedure described above for m1. The reconditioning was repeated until a constant mass was obtained (m3). The water sorption (Wsp, µg/mm³) and water solubility (Wsl, µg/mm³) of the materials were calculated using the following equations [24]:
Wsp = (m2 − m3)/V
Wsl = (m1 − m3)/V
where m1 is the conditioned mass of the specimen (µg), m2 is the mass of the specimen after immersion in water for 7 days (µg), m3 is the reconditioned mass of the specimen after immersion in water (µg), and V is the volume of the specimen (mm³).
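A small worked example of these ISO 4049-style definitions is given below; the masses and disc volume are hypothetical and only meant to show the arithmetic:

```python
def water_sorption_solubility(m1_ug: float, m2_ug: float, m3_ug: float,
                              volume_mm3: float) -> tuple[float, float]:
    """Wsp = (m2 - m3)/V and Wsl = (m1 - m3)/V, in micrograms per cubic millimetre."""
    wsp = (m2_ug - m3_ug) / volume_mm3
    wsl = (m1_ug - m3_ug) / volume_mm3
    return wsp, wsl

# Hypothetical disc (10 mm diameter, 1 mm thick -> ~78.5 mm^3)
print(water_sorption_solubility(m1_ug=160_000, m2_ug=162_100, m3_ug=159_850,
                                volume_mm3=78.5))  # ~(28.7, 1.9) ug/mm^3
```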
Statistical Analysis
The numerical data presented in the current study are means ± SD. The data were analyzed using Prism 9.2 (GraphPad Software LLC., San Diego, CA, USA). The normality of the data was assessed using the Shapiro-Wilk test. Then, data were analyzed using a one-way ANOVA, followed by Tukey's multiple comparisons. Additionally, the difference in DC upon curing for 20 or 40 s was examined using a repeated-measures ANOVA and Tukey's post hoc multiple comparisons test. Pearson's correlation analysis was additionally performed to examine the correlation between the concentration of color modifier and the DC, SH, BFS/BFM, W sp , and W sl of composites. All p-values lower than 0.05 were considered statistically significant. Power analysis was performed using G * Power 3.1 (University of Dusseldorf, Germany) [25] based on the results from previously published studies [20][21][22]. The results from G * Power suggested that five samples per group were required to obtain a power greater than 0.95 in a one-way ANOVA (α = 0.05).
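The analysis pipeline described above can be sketched as follows. This is not the authors' script; the group labels mirror the study design, but the replicate values are hypothetical, and the repeated-measures comparison of the two curing times is omitted for brevity:

```python
# Illustrative sketch: normality check, one-way ANOVA, and Tukey's post hoc test
# for one outcome (e.g., DC at 20 s) with n = 5 specimens per group.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {
    "0 wt%":   [42.8, 41.5, 44.1, 43.0, 42.6],   # hypothetical replicate values
    "1 wt%":   [37.3, 35.9, 39.8, 36.5, 38.1],
    "2.5 wt%": [20.1, 18.7, 22.3, 19.5, 21.0],
    "5 wt%":   [3.3, 1.2, 6.8, 2.4, 4.0],
}

# Shapiro-Wilk normality test per group, then one-way ANOVA across groups
for name, values in groups.items():
    print(name, stats.shapiro(values).pvalue)
print("ANOVA p =", stats.f_oneway(*groups.values()).pvalue)

# Tukey's multiple comparisons (alpha = 0.05)
data = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(data, labels, alpha=0.05))
```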
Degree of Monomer Conversion (DC)
A reduction in the peak at 1320 cm⁻¹ was observed after light-curing (Figure 3). The reduction was larger when the light-curing time was extended from 20 to 40 s. However, the reduction in the peak was less evident in groups 3 and 4. Group 1 exhibited the highest DC after curing for 20 s (42.8 ± 1.6%) and 40 s (49.1 ± 1.0%) compared with the other groups (Table 2). Group 4 showed the lowest degree of monomer conversion at 20 s (3.3 ± 3.7%) and 40 s (7.9 ± 7.6%). The appearance of the specimens in each group after light-curing is presented in Figure 4. A higher concentration of the color modifier led to a darker shade of the specimens. The conversion in group 1 at 20 and 40 s was not significantly different from that of group 2 (20 s, 37.3 ± 3.2%; 40 s, 45.2 ± 3.1%) (p > 0.05). The conversion in all groups after light-curing for 40 s was significantly higher than that at 20 s (p < 0.05). Additionally, a negative correlation was detected between the concentration of color modifier and the degree of monomer conversion at 20 and 40 s (p < 0.01) (Figure 5).
Table 2. The results (mean and SD) from each group. The same lowercase letters indicate significant differences (p < 0.05) between groups in the same column. The same uppercase letters indicate significant differences (p < 0.05) in DC in the same group after curing for 20 or 40 s.
Water Sorption (W sp ) and Water Solubility (W sl )
The highest and lowest mean Wsp were observed in group 4 (28.8 ± 2.3 µg/mm³) and group 3 (24.6 ± 3.0 µg/mm³), respectively (Table 2). Additionally, the highest and lowest mean Wsl were observed in group 2 (2.7 ± 1.7 µg/mm³) and group 4 (1.7 ± 1.4 µg/mm³), respectively. No significant differences were detected in Wsp and Wsl among the groups (p > 0.05). Furthermore, no correlation was observed between the concentration of color modifier and Wsl (p = 0.5275). However, a positive correlation was detected between the concentration of color modifier and Wsp (p = 0.0487).
Discussion
The aim of the current study was to assess the effect of using different concentrations of color modifier on the physical and mechanical properties of the composites. The use of the color modifier significantly reduced the degree of monomer conversion of the composites. Hence, the null hypothesis was partially rejected. It should be mentioned that the current study is an in vitro study. Hence, the related clinical significance should be carefully interpreted.
The degree of monomer conversion is primarily governed by the chemical structures of monomers [26], the concentration and type of photoinitiators [27], translucency and shade of materials [28], and the irradiance of light-curing units [29]. A high degree of monomer conversion after light curing may generally help ensure good physical and mechanical properties in the restored composites [30]. This may additionally decrease the risk of releasing toxic, unreacted monomers [31,32]. In general, the DC, after a sufficient light-curing time of conventional composites, ranged from 50 to 70% [18,30].
The increase in color modifier concentration significantly reduced the DC of the composites on the inner surface. The current study showed that composites with an added color modifier of greater than 1 wt% exhibited DC values lower than 40%, even after being light-cured for 40 s. It is known that the maximum curing depth in light-activated free-radical polymerization is limited by the attenuation of the curing light. This can be described using the Beer-Lambert law [33,34] (Equation (6)):
I = I₀ · e^(−γd)
where I and I₀ are the light intensity at depth d and the light intensity entering the specimen surface, respectively, and γ is the Napierian absorption coefficient of the medium. The reduction in light intensity in the composites upon the addition of a color modifier may be due to light absorption and the increase in light scattering caused by fillers and other additives [34].
This could subsequently lead to a limited curing depth and the production of free radicals in the materials. Additionally, the transmission of light energy into the composites may be diminished by the increase in darkness of the composite shade [35] (Figure 7). Furthermore, dark pigments of a color modifier may block the light penetration or increase the light scattering due to the increase in refractive index mismatch in the composites [9,36,37]. It was reported that the color pigments may act as the light-scattering centers, which could reduce light penetration into the composites in a dose-dependent manner [38]. The reduction in DC may lead to the release of unreacted monomers from the composites. Future work should, therefore, investigate the monomer elution using HPLC. The results of the current study also suggest that the DC of composites mixed with the color modifier was increased by~10% after extending the light-curing time from 20 to 40 s. This could be due to the increase in the radiant exposure, which could promote the production of free radicals [39] to enhance the DC of the materials [40,41]. Another method to enhance the polymerization could be the use of a high-irradiance light-curing unit [42].
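To make the attenuation argument concrete, the Beer-Lambert expression above can be evaluated for a 1 mm increment; the absorption coefficients used here are hypothetical placeholders chosen only to show how strongly a darker paste cuts the light reaching the bottom surface:

```python
import math

def transmitted_fraction(gamma_per_mm: float, depth_mm: float) -> float:
    """Beer-Lambert attenuation: I / I0 = exp(-gamma * d)."""
    return math.exp(-gamma_per_mm * depth_mm)

# Hypothetical Napierian coefficients: doubling gamma (e.g., by adding dark pigment)
# sharply reduces the light reaching the bottom of a 1 mm increment.
print(transmitted_fraction(1.0, 1.0))  # ~0.37 of the incident intensity
print(transmitted_fraction(2.0, 1.0))  # ~0.14 of the incident intensity
```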
A negative correlation was detected between the concentration of color modifier and biaxial flexural strength, and a positive correlation was detected with water sorption. It is known that water sorption is generally associated with the DC, the hydrophilicity/hydrophobicity of the polymers, and the structure of the polymer network [43]. The reduction in DC due to the addition of a color modifier may reduce the polymer cross-linking of the composites. This may subsequently decrease the rigidity of the polymer network and increase water sorption into the materials. Additionally, the primary methacrylate monomer of the color modifier is triethylene glycol dimethacrylate (TEGDMA). It was demonstrated that poly-TEGDMA absorbed more water than other dimethacrylate polymers [44]. This could be due to the heterogeneity of poly-TEGDMA, which contains microporosities or clusters inside the polymer network. The space created between the clusters may accommodate a large quantity of water. Additionally, the high flexibility of poly-TEGDMA, due to its low molecular weight (TEGDMA monomer = 286.3 g/mol), may allow for swelling of the polymer chain by water. The absorbed water can act as a plasticizer that increases polymer plasticization, thus reducing the strength of the composites [45][46][47].
It should be mentioned that no significant differences were detected in the strength, surface microhardness, and water sorption/solubility of the composites among the groups. This could be due to the fact that the composite specimens were light-cured on both sides, following the protocol used in BS ISO 4049 (Dentistry-polymer-based restorative materials) [24]. This may enhance the physical and mechanical strength of the specimens. The main limitation of the current study was therefore that the specimen preparation did not represent the actual clinical situation, where the composites can only be light-cured on the outer surface. Future work may therefore need to prepare specimens light-cured from only one side to mimic clinical reality.
Conclusions
Within the limits of the current in vitro study, it is possible to draw the following conclusions: the addition of the color modifier reduced the degree of monomer conversion of the composite, but the conversion was improved by extending the curing time from 20 to 40 s; the biaxial flexural strength/modulus, surface microhardness, and water sorption/solubility were comparable among the groups; and an increase in color modifier concentration correlated with a reduction in strength and an increase in the water sorption of the composite.
Funding: The current study was partially supported by the Faculty of Dentistry, Mahidol University and the Faculty of Dentistry, Thammasat University.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: The consent for publishing the clinical images (Figure 1) was obtained from the representative of the patient.
Data Availability Statement: The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.
| 4,671.8 | 2021-11-01T00:00:00.000 | [ "Materials Science" ] |
Salient semantics
Semantic features are components of concepts. In philosophy, there is a predominant focus on those features that are necessary (and jointly sufficient) for the application of a concept. Consequently, the method of cases has been the paradigm tool among philosophers, including experimental philosophers. However, whether a feature is salient is often far more important for cognitive processes like memory, categorization, recognition and even decision-making than whether it is necessary. The primary objective of this paper is to emphasize the significance of researching salient features of concepts. I thereby advocate the use of semantic feature production tasks, which not only enable researchers to determine whether a feature is salient, but also provide a complementary method for studying ordinary language use. I will discuss empirical data on three concepts, conspiracy theory, female/male professor, and life, to illustrate that semantic feature production tasks can help philosophers (a) identify those salient features that play a central role in our reasoning about and with concepts, (b) examine socially relevant stereotypes, and (c) investigate the structure of concepts.
in whether a feature has the attribute of necessity, i.e., whether the feature is a necessary component of a concept. Advocates of prototype and exemplar theories care more about whether a feature is typical or prototypical, i.e., whether most objects that fall under a concept have the property referred to by that feature, or whether a particularly noteworthy exemplar has that feature.
Necessity and typicality are not the only attributes that play an important role in our representation of kinds. A feature might be universal, i.e., all objects falling under a concept have the property referred to by that feature, but not necessarily so. A feature might also be salient, which means, very roughly, that some objects falling under a concept have a property referred to by that feature that is striking, i.e., it stands out from other properties in our representation of the kind.2 Here is an example to illustrate the differences between those attributes. The concept of shark might be characterized by the following set of features: <is a fish>, <has 5-7 pairs of gills>, <is predatory>, <attacks humans>. Sharks are necessarily fish; <is a fish> is thus a necessary feature of shark. All sharks have 5-7 pairs of gills, but sharks might evolve to have more than 7 pairs of gills. The feature <has 5-7 pairs of gills> is thus universal without being necessary. Most sharks are predatory, but not all are. <is predatory> is thus only a typical, but not a universal or even necessary, feature of shark. And hardly any sharks attack humans, but <attacks humans> is a highly salient feature, most likely because the potential danger of sharks plays an important role in people's reasoning about sharks.
Although I have provided a tentative characterization of salience above and have discussed an example of a salient feature of a concept, readers might still be unclear about what exactly is meant by salience. Unfortunately, there is no accepted definition of salience. Instead, researchers seem to either
• provide paraphrases that are likely to be uninformative and possibly circular (Sloman et al., 1998, my own characterization above);
• state technical definitions (Del Pinal & Spaulding, 2018; Fischer & Engelhardt, 2020; Sloman et al., 1998); or
• operationalize instead of providing a definition (McRae et al., 2005).
Let me quickly take these three approaches in turn.First, stating that features are salient if and only if the property they refer to is striking (see my characterization above) seems to only trade one word for a synonymous term.More worryingly, if one is pressed to say what it is for a property to be striking, one is easily led into a circle by saying that a striking property is salient in our representation of the kind.Sloman et al.'s (1998) claim that salient features are those that are 'prominent' in our representation does not fare any better, unless an informative definition of 'prominent' is given, which is at least absent in Sloman et al.'s discussion.
Second, Sloman et al. also provide a more technical definition of salience: Salience refers to the intensity of a feature, the extent to which it presents a high amplitude signal in relation to background noise, in a way that is fairly independent of context.For example, the brightness of a bright light or the redness of a fire engine are salient features.(1998, p. 193) It seems that Sloman et al. take the analogy with perceptual salience quite literally.However, while perceptual salience of a stimulus and the salience of a feature of a concept might share some commonalities, the analogy breaks down fairly quickly when we consider functional and abstract features.For example, it is hard to tell what the signal-to-noise ratio for the feature <attacks humans> is supposed to be, or how to cash out the salience of features for abstract concepts like conspiracy theory or life in terms of its intensity or high amplitude signal.In a range of projects, Fischer and Engelhardt investigate reasoning processes that are influenced by what they call a linguistic salience bias.While their focus is more strongly on stereotypical inferences due to dominant uses of words (see e.g., Fischer & Engelhardt, 2016, Fischer & Engelhardt, 2019, see also Fischer & Sytsma, 2021), they also provide an extended characterization of salience: Salience (in this sense) is a function of exposure frequency, that is, of how often the language user encounters the word in this sense.It is further modulated by prototypicality (Rosch, 1978), where a sense of a polysemous word (e.g., "see") is more or less prototypical depending upon whether it stands for more or less prototypical examples of the relevant category (e.g., more or less prototypical cases of seeing).The more salient a use is for a hearer, the more rapidly and strongly the situation schema associated with it gets activated.(Fischer & Engelhardt, 2020, p. 418) Cashing out salience as a function of frequency and prototypicality seems very promising, especially given the results from empirical studies (see below).A possible downside of this characterization is that prototypicality is usually considered to be itself a function of frequency, and thus, we would need to know more about Fischer & Engelhardt's definition of prototypicality.Additionally, their work focuses mostly on polysemous uses of terms, for which we can expect other factors to play a more important role for salience.
Other researchers (e.g., McRae et al., 2005) take the salience of a feature to be what is revealed by certain experimental tasks, e.g., memory retrieval tasks. However, this approach operationalizes the concept of salience rather than providing an independent definition. Consequently, we cannot tell whether a feature is salient because it is quickly retrieved from memory or whether a feature is quickly retrieved from memory because it is salient. With this not entirely satisfying state of affairs, let us consider the role that salience plays for traditional philosophers, experimental philosophers and psychologists.
(Experimental) Philosophers and psychologists
In the practice of conceptual analysis, philosophers typically focus on identifying components that are necessary for applying a concept. This approach often leads them to adopt the classical theory of concepts. According to this theory, concepts are understood as collections of features that are both necessary and jointly sufficient for their application.3, 4 The dominant method of conceptual analysis for determining the necessary and jointly sufficient features of a concept is the method of cases: philosophers devise thought experiments that constitute possible or actual counterexamples to the proposed definition of a concept.
Experimental philosophers have challenged the idea that a small number of experts, i.e., professional philosophers, can reliably and robustly reveal whether a putative counterexample works or whether it fails (Brun & Reuter, 2022; Mallon et al., 2009). Most experimental philosophers, however, rarely challenge the predominance of the search for necessary and jointly sufficient features of concepts. In other words, the method of cases is also a prevalent approach among experimental philosophers. While vignette studies do not need to be designed to test for necessary and jointly sufficient features, they often are.5 Why have philosophers been less intrigued by other attributes of features, especially by the attribute of salience? Arguably, the issue of feature salience has been largely neglected in philosophical debates because philosophers commonly hold that their main objective is to probe deeper than the obvious or 'salient' aspects that are immediately accessible to most individuals. Instead, their goal is to venture past these initial impressions to uncover the more intricate and profound essence of various phenomena. However, this paper argues that disregarding the salience of features as irrelevant to philosophical inquiry is a significant oversight. It will demonstrate that acknowledging and examining the salience of features is not just beneficial but essential for a comprehensive and accurate understanding of various concepts, offering a richer and more complete perspective in philosophical explorations.6 Several scholars, including Machery (2017) and Isaac (2021a, 2021b), have underscored the importance of adopting psychological approaches in analyzing concepts and highlighting areas for conceptual change. Despite this, the influence of feature salience on our engagement with and understanding of philosophical concepts remains largely uncharted territory. A notable exception can be found in the works of Fischer and his colleagues, who investigate the effects of what they term "linguistic salience bias" on reasoning processes, as seen in Fischer and Engelhardt (2016, 2020) and Fischer and Sytsma (2021). Aside from Fischer and colleagues' work on the linguistic salience bias, the importance of the salience of a feature of a concept has been noted in the study of generics, see, in particular, Leslie (2008). According to Leslie's view, if a particular feature (like <attacks humans>) is highly salient in a concept like shark, people are likely to formulate a generic statement based on that feature ("Sharks attack humans"), even if the characteristic is not statistically prevalent in the category of sharks. Despite the significance of feature salience for the study of generics, little empirical work has in fact been conducted to determine the salient features of concepts.
Psychologists, in contrast to both traditional and experimental philosophers, are less impressed by the classical theory of concepts and the search for necessary and jointly sufficient features.Not only are there surprisingly few success stories of the classical theory (Laurence & Margolis, 1999), many studies have shown that our categorization and reasoning processes are often best explained by prototypical representations and encoded exemplars (Rosch, 1978;Hampton, 1995).Whether or not the classical theory can somehow accommodate this research is a matter of ongoing debate (see e.g., Lakoff, 2007;Machery, 2009).I am not taking any sides in this discussion.What can be said though with certainty is that psychologists take very seriously and investigate thoroughly the salience, typicality, centrality, and diagnosticity of features, whereas philosophers consider those attributes of a concept's features to a much smaller extent.
Despite psychologists' interest in salient and typical features of concepts, there is relatively little work on the salient and typical features of individual concepts, especially for philosophically relevant concepts. As an example, take the concept of lie, for which philosophers and psychologists have advanced our knowledge hand-in-hand. The case of lie underscores my claim that even experimental philosophers are usually strongly invested in the traditional program of finding necessary and jointly sufficient conditions. Coleman and Kay (1981) empirically studied and developed a prototype semantics for the concept lie. Experimental philosophers have contributed widely to the literature on lie within the last 10 years. Although some of this research belongs both philosophically and methodologically to the best of experimental philosophy, those experimental philosophers have, unfortunately in my opinion, 'gone back' to frame their results in terms of necessary and sufficient conditions (Rutschmann & Wiegmann, 2017; Turri & Turri, 2015; Wiegmann & Willemsen, 2017).7 Thus, on the one hand, we have scholars, the psychologists, who are interested in typical and salient features, but mostly in order to understand how people reason with and about concepts more generally. On the other hand, we have scholars, the philosophers, who are interested in individual concepts like truth, lie, knowledge, conspiracy theory, etc., but then mostly attend to those features of these concepts that might turn out to be necessary for their application.
The semantic feature production task
The method of cases is not an uncontroversial method, but it surely is the dominant paradigm for exploring which features of a concept are considered to be necessary. What about salient features? How can we explore which features are the salient features of a concept? Probably the most widely used method to determine salient features of concepts is the so-called semantic feature production task, also known as the feature listing task (for some early examples, see e.g., Hampton, 1979; Barsalou, 1983; for an in-depth discussion see, e.g., Machery, 2017).8 Interestingly, there is no prescribed way of conducting a semantic feature production task. So, let's look more closely at how one of the most influential papers (McRae et al., 2005) on semantic feature production tasks goes about doing this (see also Wu & Barsalou, 2009). This is what they presented their participants with: "We want to know how people read words for meaning. Please fill in features of the word that you can think of. Examples of different types of features would be: how it looks, sounds, smells, feels, or tastes; what it is made of; what it is used for; and where it comes from. Here is an example: duck: is a bird, is an animal, waddles, flies, migrates, lays eggs, quacks, swims, has wings, has a beak, has webbed feet, has feathers, lives in ponds, lives in water, hunted by people, is edible. Complete this questionnaire reasonably quickly, but try to list at least a few properties for each word." (McRae et al., 2005) After participants were presented with these instructions, they were simply given a list of words and fields in which to fill in features that came to mind. These features largely fell into four categories: sensory (e.g., has fur), functional (e.g., you can sit on it), encyclopedic (e.g., lives in woods), and taxonomic (e.g., is a vegetable).
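To show how responses from such a task are typically turned into salience estimates, here is a minimal sketch of the aggregation step. The responses below are hypothetical and hand-normalised; real studies additionally standardise wording before counting:

```python
from collections import Counter

# Hypothetical, hand-normalised responses from a feature production task for "duck"
responses = [
    ["is a bird", "quacks", "swims"],
    ["has feathers", "quacks", "lives in ponds"],
    ["is a bird", "has a beak", "quacks"],
]

counts = Counter(feature for r in responses for feature in r)
n_participants = len(responses)

# Production frequency: share of participants naming each feature
for feature, c in counts.most_common():
    print(f"{feature}: {c / n_participants:.0%}")
```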
The results of their semantic feature production tasks for 571 items allow for two important observations. First, typicality is arguably the single most important predictor of the frequency with which a feature is named, where typicality is understood as the frequency with which members of a certain category possess the property referred to by the feature.9 To illustrate, consider the case of a knife: not all knives are sharp, are used for cutting, and have a handle. But many, and perhaps even most, knives are. Thus, being sharp, being used for cutting, and having a handle are highly frequent properties of knives. Rather unsurprisingly, then, <sharp>, <used for cutting>, and <has a handle> are typical features and among the most common features named in a semantic feature production task for knife.
Second, "participants' responses are somewhat biased toward information that distinguishes among concepts-that is, the pieces of information that enable people to distinguish a concept from other, similar concepts" (2005, p. 549).For example, take the feature <attacks humans> of the concept shark.Sharks hardly ever attack humans.However, other fish are even more unlikely to attack humans.Thus, among all fish, we can distinguish sharks from other fish easily by their propensity to attack humans, even if the propensity is very low.On the flip side, people are very unlikely to state the feature <has a kidney> for the concept human, although all humans have a kidney.<Has a kidney> simply does not allow us to distinguish humans from many other animals.Rosch (1978) made popular the term 'cue validity' to refer to the conditional probability of an object falling in a particular category given a particular property.Cue validity is greater the more the feature is considered to apply to members of the category in question, and the less the feature is considered to apply to members of other categories.Thus, <attacks humans> has high cue validity because we associate the property of attacking humans strongly with sharks and with hardly any other fish.And <has a kidney> has relatively low cue validity despite it being universal for humans, because we also associate the property of having a kidney with many other animals.
The more typical and the more cue-valid a feature is, the more likely it is to be stated frequently in semantic feature production tasks. If we take semantic feature production tasks to reveal some of the most salient features of concepts, then typicality and cue validity seem to be the two most important predictors of salience. Features derived from semantic feature production tasks have been shown to be crucial for cognitive processes like memory, categorization, recognition and even decision-making (Ashcraft, 1978; Cree et al., 1999; Hampton, 1979; Smith et al., 1988; Solomon & Barsalou, 2001; Vigliocco et al., 2004). Thus, the importance of salient features of concepts for various cognitive processes can hardly be overstated.
Proof of concept
So far, I have introduced the notion of salience (Sect. 1.1). I have then argued that whereas philosophers are interested in individual concepts but rarely in the salient features of those concepts, psychologists are interested in the salient features of concepts, but rarely in any individual concepts that are also relevant for philosophers (Sect. 1.2). Lastly, I discussed the semantic feature production task as one of the primary methods to reveal the salient features of concepts (Sect. 1.3).
Perhaps salient features of concepts have not been discussed very much by philosophers because salient features are simply not particularly interesting when individual concepts are at stake. Thus, the burden of proof is certainly on those like me who argue that we should care about whether a feature of a concept is salient. In the next section, I will therefore go through three empirical studies to try to make the point that we have been wrong in neglecting salient semantics.
Before proceeding, it is important to address a further theoretical question: Should the study of salient features be classified within the domain of semantics?Using the term 'salient semantics' indeed marks a departure from traditional truth-conditional semantics.Nonetheless, there are several compelling reasons why 'salient semantics' is an appropriate term.First, this approach resonates with Putnam's (1970) suggestion that prototype structures are an integral part of a term's meaning, even though they don't directly determine the word's reference.Second, the exploration of salient features is fundamentally different from pragmatic analysis.While pragmatics deals with the use of language in context and the implications of that use, the study of salient features focuses on the inherent characteristics of concepts as they are understood independently of specific contexts.Semantic feature production tasks typically involve collecting features of concepts in a context-independent manner, i.e., participants are asked to name features of concepts without having first read a vignette, or having been primed about a specific subject.Third, semantics is fundamentally concerned with the meaning and interpretation of words and phrases in language.Salient features of concepts play a crucial role in how we understand and ascribe meaning to various terms and concepts.By examining these features, we gain insights into how meanings are constructed, interpreted, and conveyed in language.Or so I hope to show in the next section.
Empirical studies
In this section of the paper, I aim to demonstrate how an analysis of the salient features of concepts enables us to achieve three key objectives: (a) pinpoint the features that are crucial in our reasoning about and with concepts, (b) scrutinize socially pertinent stereotypes, and (c) explore the intrinsic structure of concepts. To illustrate these points, I will engage in detailed discussions of three recent empirical studies, each focusing on a different concept: conspiracy theory, female/male professor, and life. Through these case studies, we will see how an investigation into the salient features not only enriches our understanding of these specific concepts but also offers broader insights into the dynamics of conceptual analysis.
Study 1: When salience trumps necessity
There are many exciting philosophical projects which aim at revealing the necessary features of philosophically interesting concepts.My aim is not to undermine these efforts, which are integral to mainstream analytic philosophy.My overall point is different: understanding the salient features of concepts is crucial, as it enables us to address several philosophically important questions.Therefore, research that aims to identify necessary features and research targeting salient features should not only coexist but also enrich each other.That said, there are concepts, where it seems that salience trumps necessity.Let's take a look at such a concept.
The case of conspiracy theory
The prevailing view among analytic philosophers is that conspiracy theories are fundamentally theories about conspiracies.This viewpoint is supported by a range of scholars, including Basham and Dentith (2016), Cassam (2019), Coady (2003), Cohnitz (2018), Feldman (2011), Harris (2018), Keeley (1999), Pigden (2007), and Räikkä (2018), although it is important to note that these scholars don't necessarily agree on a singular definition of 'conspiracy theory'.According to this dominant view, conspiracy theory is not a negative evaluative concept but rather seen as a descriptive concept.This implies that features like <deficient>, <crazy>, or simply <bad> are not necessary features of conspiracy theory.While some philosophers recognize that conspiracy theories are often perceived negatively, such evaluative aspects are considered to be at most pragmatic features (Pigden, 2007).
One argument for proposing a descriptive account of conspiracy theory is straightforward: by applying the method of cases, philosophers can illustrate that certain theories are identified as conspiracy theories without evaluating them negatively. Take, for instance, the theories surrounding the Watergate scandal. These are seemingly aptly categorized as conspiracy theories, given Nixon's involvement in the conspiracy and subsequent cover-up. Yet such theories are not regarded as epistemically flawed, irrational, or morally reprehensible. Hence, it is reasonable to consider the Watergate scandal as a compelling counterexample to the notion that the term 'conspiracy theory' is inherently evaluative.10 Even though this putative counterexample challenges the idea that <being deficient> or a similar evaluative aspect is intrinsic to the concept conspiracy theory, it is still intriguing to explore how salient negative evaluations factor into our understanding of conspiracy theories. In this context, Napolitano and Reuter (2023) conducted a study where they gathered people's responses using a modified semantic feature production task. Their findings indicate that negative evaluative elements are indeed salient in our representation of conspiracy theories. However, their approach diverged from the standard methodology; they asked participants to identify features they deemed necessary for a concept to qualify as a conspiracy theory. To build on this, I implemented another semantic feature production task with a straightforward prompt: "Please tell us: Which features are characteristic of a conspiracy theory?" In this task, 40 participants were given three fields to input three features, aiming to gain further insight into the salient features of conspiracy theory.
In this study, one participant listed only examples of conspiracy theories, such as 'flat earth', 'hollow earth', and 'lizard', rather than identifying features. Among the remaining 39 participants, 34 (87% of the sample) included at least one negative evaluative term in their response. These terms ranged from 'ambiguous evidence' to descriptors like 'far-fetched', 'confusing', 'misleading', 'outlandish', 'self-importance', 'gossip', 'arrogance', and 'lies', indicating that negative evaluation is a highly salient aspect in how people conceive of conspiracy theories. The responses of the first 15 participants, as shown in Table 1, further illustrate this trend, providing a detailed view of how these theories are commonly characterized.11
Discussion
The results of the semantic feature production task on conspiracy theory demonstrate that an overwhelming majority think that negative features like <far-fetched> and <outlandish> are characteristic features of conspiracy theory. Thus, taking semantic feature production tasks to be in the business of revealing salient features, a highly salient feature in our representation of conspiracy theories is something akin to <is epistemically deficient>. However, let us not forget that the concept conspiracy theory does not seem to necessarily be an evaluatively negative concept. The case of the Watergate scandal provides an intuitively compelling example that conspiracy theories can be true, justified and rational theories. So, what shall we do with our findings?
To truly understand and interpret people's attitudes towards conspiracy theories, it is important to grasp the salient features of conspiracy theory. Similarly, comprehending how individuals reason with the term 'conspiracy theory' requires an understanding of these salient features. For the purpose of conceptual engineering, particularly in relation to the everyday understanding of 'conspiracy theory', identifying these salient features is crucial. Simply analyzing specific instances, such as the Watergate scandal, falls short of providing the necessary insights for these explorations. Consequently, being aware of the necessary features required to apply the term 'conspiracy theory' has limited utility in offering constructive responses. Can corpus analysis uncover the salient features of a concept? As it stands, the extent to which this method can yield comprehensive insights into concept characterization remains an open area for investigation.
Study 2: On stereotypes
Exploring salient features not only provides insights into how people reason and understand various concepts, but it is also crucial for understanding how certain stereotypes take shape.Both psychological and philosophical research extensively examine aspects such as (a) identifying prevalent stereotypes and biases, (b) tracing their origins, (c) analyzing their ethical implications, and (d) exploring potential interventions.Philosophers, in particular, are adept at addressing the ethical consequences of stereotypes and biases within the context of broader societal injustices and specific instances of discrimination.Additionally, philosophers across various disciplines are trained to critically assess the vehicles of thought, i.e., the concepts and language we employ to describe and think about the world.While modifying the language and concepts we use might not always be the most direct or effective method to combat harmful stereotypes, the prevailing consensus is increasingly acknowledging that our words and concepts are not neutral and require careful consideration and, potentially, change.
Stereotypical thinking often arises from the salient features of the concepts we hold, a phenomenon also highlighted in the research by Fischer and Engelhardt (2020).It is common for these stereotypes to manifest in beliefs* such as Asians excelling in mathematics or male Italians being sexist.However, these beliefs, as indicated by the asterisk, need not be explicitly endorsed; they are often held implicitly, as discussed by Holroyd et al. (2017) and Schwitzgebel (2010).Importantly, these stereotypical features are not viewed as necessary for the application of a concept.For instance, no one genuinely believes that being proficient in math is a necessary characteristic of the concept Asian person.Rather, such stereotypes reflect the traits that are conceived of as salient within certain groups, like Asians or male Italians.These conceptions highlight how stereotypes inform our understanding of different social groups.The study presented here emphasizes the critical importance of examining salient features in discussions surrounding stereotypes.
The case of female and male professor
In an influential article, Leslie et al. (2015) show that women are under-represented in fields where brilliance is believed to be a more important determinant of success than hard work. They also provide an explanation for this finding. They hypothesize that "women are stereotyped as not possessing such talent" (2015, p. 262). This hypothesis would indeed explain the empirical finding that women are under-represented in fields such as mathematics, physics, etc.
Surprisingly, despite the availability of experimental methods, Leslie et al.'s hypothesis hasn't been directly tested.To address this, Del Pinal et al. ( 2017) conducted a semantic feature production task to identify the most salient features associated with the concepts of female professor and male professor.If Leslie and colleagues' assertion is accurate-that women are stereotypically believed to lack brilliance-then this belief should be reflected in the salient features identified for female professor.This approach offers a direct empirical test of the hypothesis, seeking evidence for the stereotype in question.A total of 312 participants were recruited via Amazon Mechanical Turk.Participants were requested to write down features for specific social categories.They were randomly assigned either to one of the two test conditions (female professor or male professor), or to one of four control conditions (female baker, male baker, female actress, female actor).The target stimuli were as follows: Imagine that Mary/Jack is a professor at a university [an actor/actress; a baker].Please list five features that you think are typical of Mary/Jack.(from Del Pinal et al., 2017) The results are displayed in Table 2 and reveal two noteworthy findings: First, contrary to the hypothesis that female professors are stereotypically viewed as less brilliant, the participants in this semantic feature production task attributed terms such as 'smart' and 'intelligent' equally to both female and male professors.This finding provides little support for the explanation proposed by Leslie and colleagues regarding the stereotype of female professors' brilliance.Second, although there was no significant difference in the frequency of terms like 'smart' and 'intelligent' being associated with both genders, a notable difference emerged in the proportion of participants who used terms synonymous with 'hardworking'.Intriguingly, the term 'hardworking' and its synonyms were almost twice as likely to be associated with female professors compared to male professors.
In a subsequent study, Del Pinal and colleagues offered an empirically supported alternative explanation for the perceived association between gender and brilliance.Their study focused on exploring gender-specific associations between being 'smart' and 'hardworking'.Participants were asked to estimate the number of hours per week a female and a male professor would need to work.The findings were telling: when the smartness of female professors was emphasized, they were perceived as needing to work more hours compared to the control condition.In contrast, this increase in perceived work hours was not observed for male professors when their smartness was highlighted.Overall, the research by Del Pinal et al. suggests that the stereotype linking brilliance and gender exists within the dependency networks of the concepts female professor and male professor, but this stereotype is not evident at the level of feature salience.
Discussion
Semantic feature production tasks not only enable the identification of a concept's most salient features (as demonstrated in Study 1), they also provide insights into the extent and nature of stereotypical thought.As such, these tasks offer a relatively efficient and direct method for examining various hypotheses related to the stereotypes held by people.In the specific instance of Del Pinal et al.'s study, this approach was instrumental in casting doubt on Leslie's hypothesis that women are stereotypically perceived as having less intelligence.
Study 3: On the structure of concepts
In the previous two subsections, we have seen that detecting the salient features of concepts allows us (a) to identify those features that are likely to dominate people's thinking in various cognitive processes, and (b) to examine those salient features that are likely to play a crucial role in stereotypical thinking. In the following study (Study 3), I aim to show that investigating salient features through semantic feature production tasks can provide us with insights not only into the meaning of concepts but also into the structure of concepts.
The case of LIFE
The concept life has been highlighted as particularly resistant to a (classical) definition that receives widespread agreement. Chyba and McDonald (1995, p. 216), for example, claim that "it is now a commonplace that the various proposed definitions [of life] virtually all fail". While Chyba and McDonald might be right in their assessment, they do not provide an answer as to why that is the case. In trying to make sense of the state of confusion, and drawing on Wittgenstein's notion of family resemblance, Pennock (2012, p. 5) claims that "life is a cluster concept with fuzzy boundaries". However, conclusions regarding the structure of the concept life seem to be at best premature without any empirical evidence on the matter. Beisbart and Reuter (2021) presented laypeople with a semantic feature production task. More specifically, they asked 102 participants (65 female, 36 male, 1 non-identified) to write down up to three answers to one of the following two versions of a semantic feature production task:
• First version: "Which features are characteristic of species of living beings? You can name up to three features."
• Second version: "Which features do you think distinguish species of living beings from non-living entities?"
The answers of the first 15 participants are displayed in Table 3. The most frequently named responses fell into the categories <growth> (47%), <breathing> (46%), <reproduction> (35%) and <nutrition> (31%). People gave responses indicative of what "living beings do at the level of a whole living being, and most of these features are observable for many life forms" (2021). People hardly gave answers that Beisbart and Reuter (2022) classified as referring to the material (organic matter) or the structure of the underlying material (cells, 10%).
In a second study, Beisbart and Reuter aimed to find out which features of living beings people consider to be universal, i.e., which features are thought to hold for all species of life. In order to examine which features people consider to be universal, they asked them "What percentage of species consist of living beings that [feature]?" People's answers were measured on a scale ranging from 0 to 100% in steps of 1%. In contrast to the semantic feature production task, an entirely different outcome emerged. Whereas only 10% of the participants named <cells> or <material> in the semantic feature production task, a whopping 68% thought that 100% of species of living beings are made of organic material, and 64% considered all species to be made of cells. In contrast, <growth> and <nutrition> received much lower numbers.
Discussion
The research conducted by Beisbart and Reuter indicates that focusing solely on necessary features overlooks critical elements of the concept of life. A key finding from their empirical studies is the distinction between salient features on the one hand, and universal features on the other. This raises the question: How can we interpret these differences? Beisbart and Reuter suggest that this division mirrors the inherent structure of the concept of life, which they believe is a natural kind concept embodying both an essence and observable surface properties. Their proposal outlines three aspects: First, they posit that life is conceived of as a natural kind, underpinned by an essence, such as cellular or organic composition. Second, they argue that people identify this natural kind through salient macroscopic features, like <growth> and <nutrition uptake>. Third, they contend that the essence can be identified by current scientific knowledge.
Claims according to which life cannot be defined (Machery, 2012), or that life is a family resemblance concept (Pennock, 2012), were not based on empirical data that took into account the importance of salient features. However, this case shows that finding out about the structure of a concept often requires investigating the salient features of that concept. The results of such investigations can then help us to capture more precisely the semantics of our concepts, as well as help develop more reliable and empirically grounded theories.
General discussion
The identification of the salient features of concepts has so far played at most a minor role in philosophical studies. Whenever philosophers are concerned with analyzing a certain concept, they are likely to focus on what the necessary and jointly sufficient features of a concept are. Experimental philosophers have certainly departed from the obsession with necessary features in many philosophical projects. However, even experimental philosophers seem to remain strongly focused on the traditional program of uncovering necessary and sufficient conditions.
Psychologists have uncovered a range of cognitive processes where the salience of features plays a more crucial role than their necessity for the application of a concept. These processes encompass memory, recognition, categorization, and reasoning, among others. While psychologists have given considerable attention to the salience of features in concepts within their field, there's a noticeable lack of focus on features of concepts that hold philosophical significance. This oversight is understandable, as concepts deemed important in philosophy may not always align with those considered relevant in psychological studies. Therefore, the divergence in interest between these two disciplines, particularly regarding concept features, is not surprising.
Despite the importance of salience for various cognitive processes, philosophers might still be justified in disregarding research into the salient features of concepts. Why is that? If our philosophical interests are only peripherally, if at all, affected by the role that salient features play, then research time and mental effort are better put into other projects. It seems, for example, that questions about how the salience of features of concepts such as free will influence how those concepts are retrieved from memory, are philosophically not overly exciting. The primary aim of this paper was therefore to make a compelling case that philosophers should be interested in salient semantics.
In the empirical part of this paper, I have presented studies that underline the importance of salient semantics for the concepts conspiracy theory, female professor, and life.12 It seems to me that the results of the semantic feature production tasks are indeed relevant for a range of philosophical debates, both at the level of those individual concepts as well as at the level of more general questions, like: how is evaluative content encoded in our concepts?, how do stereotypes work?, how are our concepts structured?, etc. That said, there are a lot of issues and questions one can raise about the role of the semantic feature production task. We still know relatively little about the cognitive processes involved when retrieving features of abstract concepts in a semantic feature production task. As most philosophically relevant concepts are fairly abstract, it might well be that we cannot make reliable inferences from the results of studies on concrete concepts (e.g., McRae et al., 2005; Vinson et al., 2008; Buchanan, 2019) to abstract concepts. Perhaps most worryingly, the lack of knowledge about the cognitive processes involved in semantic feature production tasks for abstract concepts raises the very real possibility that the experiments tap into different properties of these concepts.13 I would like to close by briefly considering the relevance of salient semantics for two of the most central research methods in philosophy, conceptual engineering and conceptual analysis.
Conceptual engineering
Conceptual Engineering has emerged as an umbrella term for explication and amelioration. While explication projects aim to improve our concepts to make them more fruitful for scientific purposes, ameliorative projects aim to improve our concepts (a) for better public discourse and reasoning, (b) to eliminate sources of misunderstanding and confusion, and (c) to reduce discrimination. I am certainly not the first to argue that amelioration shouldn't rely too much on a classical conception of concepts. Other researchers (Machery, 2017: chap. 7; Fischer, 2020; Isaac, 2021a, 2021b) have highlighted the need for more "psychological" approaches, both for identifying which aspects of our concepts need improvement and for determining how new concepts can be more successfully implemented. The results of the semantic feature production tasks for conspiracy theory and female professor reinforce this claim. Without knowing how salient the evaluative content of conspiracy theory is, we do not know, for example, the degree to which people may talk past each other when discussing conspiracy theories, the degree to which the use of the term 'conspiracy theory' has the potential to disparage certain theories and advocates of those theories, etc. Furthermore, semantic feature production tasks are likely to allow insights into how new or redefined concepts inherit unwanted features from related concepts. Thus, when it comes to conceptual engineering, the need for salient semantics seems eminently plausible.
Conceptual analysis
Conceptual analysis is traditionally conceived to be the process of (i) providing sets of necessary and jointly sufficient features of concepts. Successful analyses are often taken to be (ii) referentially invariant and (iii) feasibly performed by individual reflection on cases. There is a sense in which salient semantics has no bearing whatsoever on conceptual analysis, given its focus on the necessity of features. That said, conceptual analysis, as traditionally conceived, has received severe criticism. One strand of criticism takes the underlying assumption of the classical theory of concepts to be misguided (Chalmers & Jackson, 2001). A second strand takes experimental-philosophical studies to show huge variation in the reference of concepts between people on an individual level, as well as between groups of people on a cultural level (Machery et al., 2017; Reuter & Sytsma, 2020; Weinberg et al., 2001). A third strand looks for additional methods to circumvent various problems with the method of cases, e.g., corpus-analytic approaches (Andow, 2015; Fischer et al., 2015; Hansen et al., 2021; Reuter, 2011; Sytsma et al., 2019). As a consequence of these objections, we find that many philosophers entertain a looser concept of conceptual analysis that is not tied to the classical theory of concepts, not tied to referentially invariant concepts, and not tied to the method of cases. Although nowadays many philosophers don't do conceptual analysis as traditionally conceived, they, of course, still analyze concepts. Once we adopt a wider and more liberal perspective on what it means to analyze concepts, there is no good reason to exclude the investigation of salient features of concepts from the philosophical task of analyzing concepts.
Conclusion
The primary objective of this paper was to emphasize the significance of researching the salient features of concepts. To underscore this point, I have detailed three case studies, each exemplifying a unique context in which understanding salient features is not just philosophically intriguing, but also essential for advancing knowledge in specific areas. First, the case of conspiracy theory shows the importance of identifying salient features, which are pivotal in shaping our understanding and reasoning about conspiracy theories. Second, examining the concept of female/male professor highlights how salient features are instrumental in analyzing socially relevant stereotypes. Finally, the study of life demonstrates the need to pinpoint salient features in order to uncover the structure of concepts. While these three examples provide a glimpse into the substantial philosophical utility of salient semantics, they represent just a fraction of its potential applications and impacts.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Table 1
Responses of the first 15 participants in the semantic feature production task
Table 3
Responses of the first 15 participants to the semantic feature task. Beisbart and Reuter used eight categories to classify the responses: breathing, cells (no response among the first 15 participants), evolution, growth, movement, nutrition, perception/consciousness, reproduction. The cells left white indicate responses that were not categorized. Table taken from
"Philosophy"
] |
ASYMMETRIC CUT AND CHOOSE GAMES
Abstract We investigate a variety of cut and choose games, their relationship with (generic) large cardinals, and show that they can be used to characterize a number of properties of ideals and of partial orders: certain notions of distributivity, strategic closure, and precipitousness.
Definition 1.1. Let κ be a regular cardinal, and let γ ≤ κ be a limit ordinal. Let U(κ, γ) denote the following game of length γ on κ. Initially starting with all of κ, two players, Cut and Choose, take turns to make moves as follows. In each move, Cut divides a given subset of κ into two disjoint pieces, and then Choose answers by picking one of them. In the next round, this set is then divided by Cut into two disjoint pieces, one of which is picked by Choose, etc. At limit stages, intersections are taken. In the end, Choose wins in case the final intersection of their choices contains at least two distinct elements, and Cut wins otherwise.1 Let us provide the following basic observation, which shows that the consideration of winning strategies for Cut in games of the form U(κ, γ) is not particularly interesting. It is probably essentially due to Stephen Hechler, however has never been published, and is vaguely mentioned in a footnote to [8, Theorem 1].
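Schematically (using X_α only as a local label for the set remaining after α rounds, and A_α, B_α for the two pieces that Cut offers at stage α — this notation is not used elsewhere in the text), a run of U(κ, γ) looks as follows:
\[
\kappa = X_0,\qquad X_\alpha = A_\alpha \,\dot\cup\, B_\alpha,\qquad X_{\alpha+1}\in\{A_\alpha,B_\alpha\},\qquad X_\lambda=\bigcap_{\alpha<\lambda}X_\alpha \ \text{ for limit } \lambda,
\]
and Choose wins the run just in case
\[
\Big|\bigcap_{\alpha<\gamma}X_\alpha\Big|\;\ge\;2 .
\]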
Observation 1.2. Let γ ≤ κ be a limit ordinal. Then, Cut has a winning strategy in the game U(κ, γ) if and only if κ ≤ 2^|γ|.
Proof. First, assume for a contradiction that Cut has a winning strategy σ, however κ > 2^|γ|. The strategy σ can be identified with a full binary tree T of height γ, where the root of the tree is labelled with κ, and if a node of the tree is labelled with y, then its immediate successor nodes (in the natural ordering of the tree, which is by end-extension) are labelled with the sets from the partition that is the response of σ to the sequence of cuts and choices leading up to the choice of y, and limit nodes are labelled with the intersection of the labels of their predecessors. Since σ is a winning strategy, the intersection of labels along any branch of T has cardinality at most one.2 But note that the union over all these intersections has to be κ, which clearly contradicts our assumption, for the number of branches is 2^|γ| < κ. Now assume that κ ≤ 2^|γ|. We may thus identify κ with a subset X of the higher Cantor space ^γ2 of functions from γ to 2.3 The winning strategy for Cut is to increasingly partition the space ^γ2 (and thus also X) in γ-many steps using basic open sets of the form [f] for functions f : δ → 2, with increasingly large δ < γ. In the end, the intersection of all of the sets that Choose picked in a run of the game can clearly only contain at most one element, so that Cut wins, as desired.
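As an instance of the observation (the same instance is used again before Observation 3.6 below): since ω_1 ≤ 2^{ℵ_0} = 2^|ω|, Cut has a winning strategy in U(ω_1, ω). Concretely, identifying the countable ordinals with distinct elements of ^ω2, Cut can play at stage n the split by the n-th binary digit,
\[
\{\,x\in X : x(n)=0\,\},\qquad \{\,x\in X : x(n)=1\,\},
\]
so that after ω rounds at most one element survives in the intersection of choices.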
By the below remark, for a fixed γ, what is interesting is the least κ such that Choose has a winning strategy in the game U(κ, γ).
Remark 1.3. If there is some cardinal κ such that Choose has a winning strategy in the game U(κ, γ), then they have a winning strategy in the game U(λ, γ) whenever λ > κ as well: they simply pick their choices according to their intersection with κ.
The games U(κ, γ) are closely tied to large cardinals. If κ is measurable, then it is easy to see that Choose has a winning strategy for U(κ, ω), and in fact for U(κ, γ) whenever γ < κ (see Observation 3.2). And actually, measurable cardinals are necessary in some way: If Choose has a winning strategy in U(κ, ω), then there exists a measurable cardinal in an inner model (see Theorem 2.5). Furthermore, variants of this game can be used to characterize weakly compact cardinals (see Observation 3.4), various notions of distributivity (Section 6), strategic closure of posets and precipitousness of ideals (Section 7).
Various other interesting classes of games can be obtained from the above cut and choose games by the following adjustments, several of which have been studied in the set theoretic literature before.
(1) Winning conditions:
(a) Final requirements. Instead of the requirement that the final intersection cannot have size at most 1, this requirement should hold in each round, while the final intersection is only required to be nonempty. This is the weakest possible cut and choose game in the sense that it is easiest for Choose to win. We study this variant in Section 2.
(b) Notions of smallness. The family of subsets of κ of size at most 1 is replaced by an arbitrary monotone family, i.e., a family of subsets of κ that is closed under subsets. A canonical choice is the bounded ideal bd_κ on κ, or other <κ-complete ideals on κ that extend bd_κ. We study such generalizations in Section 3.
(2) Types of moves:
(a) Partitions. Each move of Cut is a partition of κ into a number of pieces which are disjoint only modulo a <κ-complete ideal I in Section 5. This leads to characterizations of various notions of distributivity of I in Section 6 and precipitousness of I in Section 7.
(b) Poset games. The moves of Cut are maximal antichains in a poset, of which Choose picks one element. In order for Choose to win, their choices need to have lower bounds in the poset. This is used in Section 6 to characterize notions of distributivity.
Remark 1.4.Note that poset games can have arbitrary length, and after Definition 6.1, we will also briefly consider games of length ≥κ as in (2a).
A natural extension of the games in (1a) to games of length ≥κ, which we do not study in this paper, is obtained in filter games by weakening the winning condition for Choose to the requirement that the set of their choices generates a <κ-complete filter. Introducing delays in this game, i.e., allowing Cut to make κ-many moves in each single round before it is Choose's turn to make κ-many choices, leads to characterisations of the α-Ramsey cardinals defined in [11] (see also [7, 20]).
We shall provide an overview of results on the existence of winning strategies for various cut and choose games, and their connections with generic large cardinals, and combinatorial properties of ideals and posets.This includes a number of previously unpublished proofs, extensions of known results to more general settings and new results.
We will show the following results concerning the above types of games. In Section 2, we show that Choose having a winning strategy in the games in (1a) has the consistency strength of a measurable cardinal. In Section 3, we show that certain instances of generic measurability of κ suffice in order for Choose to win games defined relative to ideals on κ as in (1b). In Section 4, we show that starting from a measurable cardinal, one can force to obtain a model in which the least cardinal κ such that Choose wins U(κ, ω) is a non-weakly compact inaccessible cardinal. In Section 6, we investigate the close connections between the existence of winning strategies for Cut in certain cut and choose games and various notions of distributivity. In particular, Theorem 6.5 partially answers a question of Dobrinen [3]. In Section 7, we investigate connections with Banach-Mazur games on partial orders, showing in particular that these Banach-Mazur games, which will be defined in Section 7, are equivalent to certain cut and choose games. In Section 8, we make some final remarks and provide some open questions.

§2. The weakest cut and choose game.

Regarding Definition 1.1, it may seem somewhat odd to require two elements in the final intersection of choices in order for Choose to win games of the form U(κ, γ). But note that if we required only one element in this intersection, then Choose easily wins any of these games by fixing some ordinal α < κ in advance, and then simply picking the set that contains α as an element in each of their moves, for this α will then clearly be contained as an element in the final intersection of their choices as well. By requiring two elements in the final intersection of their choices, this strategy is not applicable as soon as Cut plays a partition of the form ⟨{α}, Y⟩.
In this section, we will be considering canonical variants of the games U(κ, γ). Among the cut and choose games of length γ that we consider in this paper, these are the easiest for Choose to win. They have (or rather, an equivalent form of them has) already been considered in unpublished work of Galvin (see [21, Section 3]).

Definition 2.1. Let κ be a regular uncountable cardinal, and let γ ≤ κ be a limit ordinal. Let U(κ, ≤γ) denote the following game of length (at most) γ on the cardinal κ. As in the game U(κ, γ), starting with all of κ, players Cut and Choose take turns, with Cut dividing a given subset of κ in two, and Choose picking one of the pieces and returning it to Cut for their next move. Cut wins and the game immediately ends if Choose ever picks a singleton. At limit stages, intersections are taken. If the game lasts for γ-many stages, Choose wins in case the final intersection of their choices is nonempty. Otherwise, Cut wins.
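In the same local notation as in the display after Definition 1.1, the winning condition of this variant contrasts with that of U(κ, γ) as follows:
\[
\text{Choose wins } U(\kappa,\le\gamma)\ \iff\ \text{no choice } X_{\alpha+1} \text{ is a singleton, and } \bigcap_{\alpha<\gamma}X_\alpha\neq\emptyset,
\]
whereas in U(κ, γ) nothing is required of the intermediate choices, but the final intersection must contain at least two elements.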
Note that unlike in the games U(κ, γ), fixing one element α ∈ κ at the beginning of the game, and picking the set which contains α as an element in each of their moves, is not a winning strategy in the games U(κ, ≤γ), since Cut can play a partition of the form ⟨{α}, X⟩ at some point, so that Choose would pick {α}, and would thus immediately lose such a run.
The games U(κ, ≤γ) behave somewhat differently with respect to the existence of winning strategies for Cut. At least the forward direction in the following observation is attributed to unpublished work of Galvin, and independently to Hechler in [21]. We do not know of any published proof of this result. For arbitrary ordinals γ, we let 2^<γ = sup{2^δ | δ < γ is a cardinal}.
Observation 2.2. If κ is a regular uncountable cardinal and γ < κ is a limit ordinal, then Cut has a winning strategy in the game U(κ, ≤γ) if and only if κ ≤ 2^<γ.
Proof. Assume first that Cut has a winning strategy σ in the game U(κ, ≤γ). The strategy σ can be identified with a binary tree T of height γ, where the root of the tree is labelled with κ, and if a node of the tree is labelled with a set X which is not a singleton, then its immediate successor nodes are labelled with the sets from the partition that is the response of σ to the sequence of cuts and choices leading up to the choice of X, and limit nodes are labelled with the intersection of the labels of their predecessors.
σ being a winning strategy means that the intersection of labels along any branch of T of length γ is empty. Thus, for each ordinal α < κ there has to be a node labelled with {α}, for this is the only reason why α would not appear in an intersection of choices along some branch of T. However, there are only at most 2^<γ-many nodes in this tree, hence κ ≤ 2^<γ. Now assume that κ ≤ 2^<γ. Let X ⊆ ^γ2 be such that for y ∈ ^γ2, we have y ∈ X if and only if there is α < γ such that for all β < γ, By our assumption on κ, we may identify κ with a subset Y of the space X. The winning strategy for Cut is to increasingly partition the space Y in γ-many steps using sets of the form [f] ∩ Y for functions f : δ → 2 with increasingly large δ < γ. That is, during a run of U(κ, ≤γ), Cut and Choose work towards constructing a function F : γ → 2, the only possible element of the intersection of all choices of Choose, fixing one digit in each round of the game. Now in any even round, Choose cannot possibly pick the digit 1, for this would correspond to picking a set [f] ∩ Y that is only a singleton, by the definition of the set X. But this means that either Cut already wins at some stage less than γ, or that F : γ → 2 is not an element of X. But this again means that Cut wins, for it implies that the intersection of all choices of Choose is in fact empty.
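For comparison of the two games, consider the case γ = ω. Here
\[
2^{<\omega}=\sup\{\,2^{n}\mid n<\omega\,\}=\aleph_0,
\]
so by Observation 2.2, Cut has no winning strategy in U(κ, ≤ω) for any regular uncountable κ, whereas by Observation 1.2, Cut does have one in U(κ, ω) whenever κ ≤ 2^{ℵ_0}. Together with Theorem 2.10 below, this yields the non-determinacy of U(κ, ≤ω) for uncountable κ ≤ 2^{ℵ_0} that is pointed out before that theorem.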
Choose having a winning strategy in U(κ, ≤γ) for some cardinal κ has the consistency strength of a measurable cardinal. A slightly weaker version of this result for the game U(κ, γ) with essentially the same proof, that is due to Silver and Solovay, appears in [18, p. 249]. However, the proof that is presented there is somewhat incomplete (in particular, the argument for what would correspond to Claim 2.7 is missing), and we do not know of any other published proof of this result. For this reason, even though it is just a minor adaptation of a classic result, we would like to provide a complete argument for the below.

Definition 2.3. We say that κ is generically measurable as witnessed by the notion of forcing P if in every P-generic extension, there is a V-normal V-ultrafilter on κ that induces a well-founded (generic) ultrapower of V. Equivalently, in every P-generic extension V[G], there is an elementary embedding j : V → M with critical point κ for some transitive class M.

We will make use of the following standard fact. We include its short proof for the benefit of the reader.

Fact 2.4. Assume that U is a nonprincipal V-ultrafilter on κ in a P-generic extension of the universe, that U yields a wellfounded ultrapower of V, and that j is the generic embedding induced by U. Let ν ≤ κ be the largest cardinal such that U is V-<ν-complete. Then, crit j = ν.

Proof. Let h : κ → ν in V witness that U is not V-<ν⁺-complete, that is, h⁻¹({α}) ∉ U for every α < ν. Using that U is nonprincipal, and letting c_α denote the function with domain κ and constant value α for any ordinal α, we have that [c_α]_U < [h]_U < [c_ν]_U for every α < ν. It follows that j(ν) > ν, and by the V-<ν-completeness of U, we obtain that crit j = ν, as desired.
Theorem 2.5. If γ < κ are regular cardinals, and Choose has a winning strategy in the game U(κ, ≤γ), then there exists a generically measurable cardinal less than or equal to κ, as witnessed by <γ-closed forcing.
Proof. Let us generically Lévy collapse 2^κ to become of size γ, by the <γ-closed notion of forcing Coll(γ, 2^κ). In the generic extension, we perform a run of the game U(κ, ≤γ) with Choose following their ground model winning strategy σ, and with the moves of Cut following an enumeration of P(κ)^V in order-type γ. More precisely, let ⟨x_δ | δ < γ⟩ be an enumeration of P(κ)^V in our generic extension. At any stage δ < γ, assume that ⟨D_α | α < δ⟩ denotes the sequence of choices of Choose so far, and let D = ⋂_{α<δ} D_α. Let Cut play the partition C_δ = ⟨D ∩ x_δ, D \ x_δ⟩ at stage δ, and let D_δ denote the response of Choose. Note that since the Lévy collapse is <γ-closed, any proper initial segment of this run is in the ground model V, and therefore it is possible for Choose to apply their strategy σ in each step. Having finished the above run of U(κ, ≤γ), let U be the collection of all x_δ's such that D_δ = D ∩ x_δ. Equivalently, for any x ⊆ κ, x ∈ U if and only if D_δ ⊆ x for all sufficiently large δ < γ.
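Spelled out in the notation of this run (with D_δ the choice of Choose at stage δ), the two equivalent descriptions of U given above read:
\[
U \;=\; \Big\{\, x_\delta \;\Big|\; \delta<\gamma,\ \ D_\delta = x_\delta \cap \bigcap_{\alpha<\delta} D_\alpha \,\Big\}
\;=\; \Big\{\, x \in P(\kappa)^V \;\Big|\; \exists\,\beta<\gamma\ \forall\,\delta\in[\beta,\gamma)\colon\ D_\delta \subseteq x \,\Big\}.
\]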
Claim 2.6. U is a nonprincipal V-<γ-complete V-ultrafilter on P(κ)^V.

Proof. It is easy to check that U is an ultrafilter on P(κ)^V. Let us check that U is V-<γ-complete. If δ < γ, and ⟨A_i | i < δ⟩ ∈ V is a sequence of elements of U, assume for a contradiction that ⋂_{i<δ} A_i ∉ U. Using the regularity of γ and our above characterization of U, we thus find an ordinal β < γ so that the intersection of choices of Choose up to stage β would be ∅, contradicting that Choose follows their winning strategy σ.
In order to show non-principality of U, note that for any β < κ, some x_δ is equal to {β}, hence C_δ = ⟨D ∩ {β}, D \ {β}⟩, and D_δ = D \ {β} since σ is a winning strategy, and therefore {β} ∉ U.
Claim 2.7.The generic ultrapower of V by U is well-founded.
Proof. Assume for a contradiction that this is not the case. We may thus assume that γ = ω, for otherwise U is <ω_1-complete in a σ-closed forcing extension of the universe V and therefore yields a well-founded ultrapower of V. Let T be the tree of tuples of the form ⟨f, n, t⟩ with the following properties:
(1) t = ⟨A_i, B_i | i < k⟩ is a partial run of the game U(κ, ≤ω) of length k < ω that is consistent with σ, where A_i denotes the partition played by Cut, and B_i denotes the choice of Choose at stage i for every i < k.
(2) n = ⟨n_j | j < l⟩ is a strictly increasing sequence of natural numbers for some l ≤ k, and if l > 0 then n_{l-1} = k - 1.
(3) f = ⟨f_j | j < l⟩ is such that f_j : B_{n_j} → Ord for each j < l.
(4) f_{j+1}(α) < f_j(α) for all α ∈ B_{n_{j+1}} whenever j + 1 < l.
The ordering relation on T is componentwise extension of sequences, that is, for ⟨f, n, t⟩ and ⟨f', n', t'⟩ both in T, we have ⟨f, n, t⟩ <_T ⟨f', n', t'⟩ if f' extends f, n' extends n, and t' extends t as a sequence.

Claim 2.8. In V[G], the tree T is ill-founded, that is, it has an infinite branch.

Proof. Using our assumption of ill-foundedness, pick a decreasing sequence of ordinals in the generic ultrapower of V by U, represented by functions g_j for j < ω, and consider the run of the game constructed above, in which Choose plays according to σ, and in which Cut plays based on the enumeration ⟨x_i | i < ω⟩ of P(κ)^V. We define sequences ⟨n_j | j < ω⟩ and ⟨f_j | j < ω⟩ inductively. Let n_0 = 0, and let f_0 = g_0 ↾ B_0. Given n_j and f_j, let n_{j+1} be least above n_j such that B_{n_{j+1}} ⊆ U_j, where U_j = {α < κ | g_{j+1}(α) < g_j(α)} — note that such n_{j+1} must exist for U_j ∈ U. Let f_{j+1} = g_{j+1} ↾ B_{n_{j+1}}. It is now straightforward to check that the sequence of tuples ⟨⟨f_j | j < l⟩, ⟨n_j | j < l⟩, ⟨A_i, B_i | i ≤ n_{l-1}⟩⟩ for l < ω is strictly increasing in T, so that T is ill-founded.

By the absoluteness of well-foundedness, T thus has a branch in V. Such a branch yields a run ⟨A_i, B_i | i < ω⟩ of the game U(κ, ≤ω) in which Choose follows their winning strategy, hence there is some β ∈ ⋂_{n<ω} B_n ≠ ∅. This branch also yields a strictly increasing sequence ⟨n_i | i < ω⟩ of natural numbers, and a sequence of functions ⟨f_i | i < ω⟩ so that for each i < ω, f_i : B_{n_i} → Ord, and f_{i+1}(α) < f_i(α) whenever α ∈ B_{n_{i+1}}. But then, our choice of β yields a decreasing ω-sequence ⟨f_i(β) | i < ω⟩ of ordinals, which is a contradiction, as desired.

By Fact 2.4, it follows that γ ≤ crit j ≤ κ, and hence by the weak homogeneity of the Lévy collapse, it follows that crit j ≤ κ is generically measurable, as witnessed by <γ-closed forcing.4 In particular thus, using standard results from inner model theory, it follows from Theorem 2.5 that if Choose has a winning strategy in the game U(κ, ≤γ), then there is an inner model with a measurable cardinal, for the existence of such an inner model follows from having a generically measurable cardinal. In more detail, suppose there is an elementary embedding j : V → W in some generic extension V[G] of V. Furthermore, we may assume that there is no inner model with a measurable cardinal of order 1.5 So the canonical least iterable structure 0‡ with a sharp for a measure of order 1 does not exist (see [27, Section 6.5]). Then 0‡ also does not exist in V[G] [27, Lemma 6.5.6]. Therefore, the core model K for measures of order 0 can be constructed in V and V[G] (see [27, Section 7.3]) and K^V = K^{V[G]} by generic absoluteness of K [27, Theorem 7.4.11]. In V[G], the restriction j↾K is an elementary embedding from K to a transitive class. Every such embedding comes from a simple iteration [27, Theorem 7.4.8], i.e., there are no truncations of iterates of K (see [27, Section 4.2]). Hence K has a measurable cardinal.
Note that the analogue of Remark 1.3 clearly applies to games of the form U(κ, ≤γ) as well. As another corollary of Theorem 2.5, we can show that starting from a measurable cardinal, it can consistently be the case that a measurable cardinal κ is least so that Choose wins U(κ, ≤ω). The same holds for U(κ, ω).
Observation 2.9. Starting from a measurable cardinal κ, there is a model of set theory in which κ is measurable, so that Choose has a winning strategy in the game U(κ, γ) whenever γ < κ, however Choose doesn't have a winning strategy in the game U(λ, ω) for any λ < κ.
Proof. Let U be a normal measurable ultrafilter on κ and work in L[U]. Choose has a winning strategy in the game U(κ, γ) whenever γ < κ. Assume for a contradiction that there were some λ < κ for which Choose had a winning strategy in the game U(λ, ω). By Theorem 2.5, there is a generically measurable cardinal ν ≤ λ. But then, by standard inner model theory results (see the discussion before this observation), ν < κ would be measurable in some inner model of our universe L[U]. Let u be a normal measurable ultrafilter on ν in that model, and consider the model L[u]. By classical results of Kunen (see [17, Theorem 20.12]), L[U] can be obtained by iterating the measure u over the model L[u]. But then, u ∈ L[U] contradicts the fact that the ultrafilter u could not be an element of its induced ultrapower of L[u] (see [17, Proposition 5.7(e)]).
Next, we show that we can obtain a weak version of Observation 1.2 for games of the form U(κ, ≤γ). Together with Observation 2.2, this shows in particular that U(κ, ≤γ) is not determined when 2^<γ < κ ≤ 2^γ.
Theorem 2.10. If γ ≤ κ is regular and κ ≤ 2^γ, then Choose does not have a winning strategy in the game U(κ, ≤γ).
Proof. Fix X ⊆ ^γ2 of size κ that does not contain a continuous and injective image of ^γ2.6 Let U(X, ≤γ) be the variant of U(κ, ≤γ) where we play on the underlying set X rather than κ. Noting that these two games are equivalent, assume for a contradiction that Choose had a winning strategy σ for the game U(X, ≤γ). We consider the following quasistrategy τ for Cut:7
• In each even round 2i, given a set A ⊆ ^γ2, Cut splits it into the sets {x ∈ A | x(i) = 0} and {x ∈ A | x(i) = 1}.
• In each odd round 2i + 1, Cut splits off some singleton {x_i}, i.e., presents a partition of the form ⟨{x_i}, X_i⟩.
Note that if Choose wins a run of the game U(X, ≤γ) in which Cut plays according to their quasistrategy τ, then by the definition of τ at even stages, if x ∈ ^γ2 is in the intersection of choices made by Choose in such a run, x(i) has been fixed for every i < γ; that is, the intersection of these choices will only have a single element.
Claim 2.11. Suppose that t is a partial play of U(X, ≤γ) of length less than γ according to both σ and τ. Then, there are partial plays t_0, t_1 of successor length, both extending t and according to both σ and τ, such that the final choices of Choose in t_0 and t_1 are disjoint.
Proof. If not, take an arbitrary run of U(X, ≤γ) extending t and according to both σ and τ, such that only x is in the intersection of choices along this run. Now consider a different run that starts with t as well, however in which Cut splits off {x} at the next odd stage. If Choose made all the same 0/1-choices at even stages in this run as before, then the intersection of their choices would now be empty, contradicting that σ is a winning strategy for Choose. This means that at some stage in those two runs, the respective choices of Choose according to σ have to be disjoint, and we may pick t_0 and t_1 to be suitable initial segments of these runs.
Using the above claim, since σ is a winning strategy for Choose and γ is regular, we can construct a full binary tree T of height γ of partial plays t such that partial plays on the same level of T have the same length, and such that the final choices made by Choose in any two such partial plays of successor length which are on the same level of T will be disjoint. Let π be an order-preserving isomorphism from ^{<γ}2 to T, and for a ∈ ^γ2, let π(a) denote the induced branch through T. Since σ is a winning strategy for Choose, the intersection of choices from any run of U(X, ≤γ) is nonempty, and thus, using the way the quasistrategy τ was defined at even stages, this yields a continuous and injective map f : ^γ2 → X, letting f(a) = x whenever a ∈ ^γ2, b = π(a) is a branch through T, and x is the unique element of the intersection of choices of Choose in the run b. This shows that X contains a continuous and injective image of ^γ2, contradicting our choice of X.

§3. Ideal cut and choose games.

We want to introduce a larger class of generalized cut and choose games on regular and uncountable cardinals κ, in which the winning condition is dictated by a monotone family I on κ, that is, a family of subsets of κ that is closed under subsets, which in many cases will be a <κ-complete ideal I ⊇ bd_κ. Before we introduce this class, let us observe that the games that we considered so far proceeded as progressions of cuts and choices, so that the chosen pieces would then be further cut into pieces. Equivalently however, we could require Cut to repeatedly cut the starting set κ of these games into pieces and Choose to pick one of those pieces, in each of their moves, simply because we only evaluate intersections of choices in order to determine who wins a run of a game, so whatever happens outside of the intersection of choices made in a run of any of our games up to some stage is irrelevant for this evaluation (and every partition of κ canonically induces a partition of any of its subsets X, plus every partition of some X ⊆ κ can be extended to a partition of κ, for example, by adding all of κ \ X to one of its parts). Our generalized cut and choose games will be based on the idea of Cut repeatedly partitioning the starting set of our cut and choose games. Fix a regular uncountable cardinal κ and a family I of subsets of κ that is monotone, i.e., closed under subsets, throughout this section. Let I⁺ denote the collection of all subsets of κ which are not elements of I (the I-positive sets).

Definition 3.1. Let X ∈ I⁺, and let γ < κ be a limit ordinal. Let U(X, I, γ) denote the following game of length γ. Starting with the set X, two players, Cut and Choose, take turns to make moves as follows. In each move, Cut divides the set X into two pieces, and then Choose answers by picking one of them. Choose wins in case the final intersection of their choices is I-positive, and Cut wins otherwise.
U(X, I, ≤γ) denotes the variant of the above game which Choose wins just in case the intersection of their choices is I-positive up to all stages δ < γ, and nonempty at the final stage γ, and Cut wins otherwise.
Let us also introduce the variant U(X, I, <γ) for γ ≤ κ of the above game: It proceeds in the same way for γ-many moves, however Choose already wins in case for all δ < γ, the intersection of their first δ-many choices is I-positive, and Cut wins otherwise.
Note that for the games defined above, we could let them end immediately (with a win for Cut) in case at any stage δ < γ, the intersection of choices of Choose up to that stage is in I. Note also that if I ⊆ J are monotone families on κ, X ∈ J⁺, and Choose has a winning strategy in the game U(X, J, γ) for some limit ordinal γ, then they clearly also have a winning strategy in the game U(X, I, γ). Moreover, if S denotes the monotone family {∅} ∪ {{α} | α < κ}, then U(κ, ≤γ) corresponds to U(κ, S, ≤γ) and U(κ, γ) corresponds to U(κ, S, γ). We have thus in fact generalized the basic cut and choose games from our earlier sections.
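In symbols, with S as just defined:
\[
S=\{\emptyset\}\cup\big\{\{\alpha\}\mid\alpha<\kappa\big\},\qquad U(\kappa,\gamma)=U(\kappa,S,\gamma),\qquad U(\kappa,\le\gamma)=U(\kappa,S,\le\gamma).
\]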
Let us start with some minor extensions of observations from Section 1. We refer to a non-principal <κ-complete ultrafilter on a measurable cardinal κ as a measurable ultrafilter on κ.

Observation 3.2. If κ is measurable, and I is contained in the complement of some measurable ultrafilter U on κ, then Choose wins U(X, I, <κ) whenever X ∈ U.
Proof.They simply win by picking their choices according to U.
Observation 3.3. If κ is 2^κ-strongly compact and I is a <κ-complete ideal on κ, then Choose wins U(X, I, <κ) whenever X ∈ I⁺.

Proof. The 2^κ-strong compactness of κ allows us to extend I to a <κ-complete prime ideal, the complement of which thus is a measurable ultrafilter containing X as an element. The result then follows from Observation 3.2.
We next present an observation on when Cut wins generalized cut and choose games. This is in close correspondence to our earlier observations for games of length κ, but it also shows that Cut not winning certain games of length κ has large cardinal strength.8

Observation 3.4. Let γ < κ be a limit ordinal, and let I be a monotone family such that κ cannot be written as a <κ-union of elements of I. Then, the following hold:

Proof. The proof of (1) is analogous to the proof of Observation 1.2, and the proof of (2) is analogous to the proof of Observation 2.2, making use of the fact that κ cannot be written as a <κ-union of elements of I in the forward directions. For (3), assume for a contradiction that κ > 2^<γ, however Cut has a winning strategy σ for the game U(κ, I, <γ). σ can be identified with a binary tree T of height at most γ, where the root of the tree is labelled with κ, and if a node of the tree is labelled with y ∈ I⁺, then its immediate successor nodes are labelled with the sets from the partition that is the response of σ to the sequence of cuts and choices leading up to the choice of y, and limit nodes are labelled with the intersection of the labels of their predecessors. If a node is labelled with a set in I, then it does not have any successors, and it means that Choose has lost at such a point. σ being a winning strategy means that T has no branch of length γ. Thus, the union of all the labels of the leaves of T has to be κ, which clearly contradicts our assumption on I, for the number of leaves of T is at most 2^<γ < κ.
Regarding (4), assume first that κ is weakly compact, however Cut has a winning strategy σ in the game U(κ, I, <κ). Let θ be a sufficiently large regular cardinal, and let M ⊇ (κ + 1) be an elementary substructure of H(θ) of size κ that is closed under <κ-sequences and with σ ∈ M. Using that κ is weakly compact, let U be a uniform <κ-complete M-ultrafilter on κ. Let us consider a run of the game U(κ, I, <κ) in which Cut follows their winning strategy σ, and Choose responds according to U. This is possible for proper initial segments of such a run will be elements of M by the <κ-closure of M, and hence can be used as input for σ in M, yielding the individual moves of Cut to be in M as well. But since U is uniform and <κ-complete, all choices of Choose will be in U and therefore I-positive. This means that Choose wins against σ, which is our desired contradiction.
In the other direction, assume that Cut does not have a winning strategy in the game U(κ, I, <κ). We verify that under the assumptions of our observation, κ has the filter property (as in [11, Definition 2.3]) and is thus weakly compact. Recall that the filter property states that for every subset X of P(κ) of size at most κ, there is a <κ-complete filter F on κ which measures X. Note that the definition of <κ-complete filters in [11, Definition 2.2] is nonstandard, but every such filter induces a <κ-complete filter in the usual sense as its upwards closure, and every <κ-complete filter in the usual sense is <κ-complete in the sense of [11, Definition 2.2]. Therefore the filter property in [11, Definition 2.3] is equivalent to the standard one. A proof of the equivalence of weak compactness and the filter property can be found in [1, Theorem 1.1.3]. To verify the filter property for κ, let A = ⟨A_i | i < κ⟩ be a collection of subsets of κ. We need to find a <κ-complete filter F measuring A, that is, <κ-sized subsets of F need to have κ-sized intersections. At any stage i < κ, let Cut play the partition ⟨A_i, κ \ A_i⟩ of κ. Since Cut does not have a winning strategy in the game U(κ, I, <κ), there is a sequence ⟨B_i | i < κ⟩ of choices of Choose in such a run such that for every δ < κ, ⋂_{i<δ} B_i ∈ I⁺. But now we may clearly define our desired filter F by letting F consist of all X ⊆ κ with ⋂_{i<δ} B_i ⊆ X for some δ < κ.

Let us next observe that instead of measurability, it is sufficient for κ to be generically measurable as witnessed by sufficiently closed forcing in order for Choose to win cut and choose games at κ. This is a property that can be satisfied by small cardinals, and this thus shows that Choose can win cut and choose games at small cardinals. It is well-known how to produce small cardinals that are generically measurable: For example, if κ is measurable, as witnessed by some ultrapower embedding j : V → M with crit j = κ, and for some nonzero n < ω, P denotes the Lévy collapse Coll(ℵ_{n-1}, <κ) to make κ become ℵ_n in the generic extension, then in any P-generic extension V[G] with P-generic filter G, κ is generically measurable, as witnessed by the notion of forcing that is the Lévy collapse in the sense of [Theorem 10.2], which works for the ℵ_n's as well.
Together with Theorem 2.5, the next observation will also show that assumptions of the existence of winning strategies for Choose in cut and choose games of increasing length form a hierarchy which is interleaved with assumptions of generic measurability, as witnessed by forcing notions with increasing closure properties.

Observation 3.5. Assume that γ ≤ κ is regular, and that κ is generically measurable, as witnessed by some <γ-closed notion of forcing P.9 Let U̇ be a P-name for a uniform V-normal V-ultrafilter on κ, and let I be the hopeless ideal with respect to U̇, that is, the collection of all A ⊆ κ such that no condition in P forces that Ǎ ∈ U̇. Then, I ⊇ bd_κ is a normal ideal on κ and for any X ∈ I⁺, Choose wins U(X, I, <γ).
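Written out (and this is the form in which the proof below uses it), the hopeless ideal and its positive sets are:
\[
I=\big\{\,A\subseteq\kappa \;\big|\; \Vdash_P \check A\notin \dot U\,\big\},
\qquad
I^{+}=\big\{\,A\subseteq\kappa \;\big|\; \exists\,p\in P\ \ p\Vdash_P \check A\in \dot U\,\big\}.
\]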
Proof. It is straightforward to check that I ⊇ bd_κ is normal, using that U̇ is forced to be uniform and V-normal. We will describe a winning strategy for Choose in the game U(X, I, <γ). At each stage α, Choose not only decides for a set C_α to actually respond with, but they also pick a condition p_α ∈ P forcing that Č_α ∈ U̇, such that these conditions form a decreasing sequence of conditions.
At stage 0, assume that Cut presents the partition ⟨A_0, B_0⟩ of X. Since some condition forces that X̌ ∈ U̇, Choose may pick C_0 to either be A_0 or B_0, and pick a condition p_0 forcing that Č_0 ∈ U̇. At successor stages α + 1, we proceed essentially in the same way. Assume that Cut presents the partition ⟨A_{α+1}, B_{α+1}⟩ of C_α. Since p_α ⊩ Č_α ∈ U̇, Choose may pick C_{α+1} to either be A_{α+1} or B_{α+1}, and pick a condition p_{α+1} ≤ p_α forcing that Č_{α+1} ∈ U̇.
At limit stages α < γ, Cut presents a partition ⟨A_α, B_α⟩ of ⋂_{δ<α} C_δ. Since the forcing notion P is <γ-closed, we may let q_α be a lower bound of ⟨p_δ | δ < α⟩. Then, q_α forces that ⋂_{δ<α} Č_δ ∈ U̇, and Choose may pick C_α to either be A_α or B_α, and pick a condition p_α ≤ q_α forcing that Č_α ∈ U̇.
The following result, which is also a consequence of our above results, is attributed to Richard Laver in [8, comment (4) after the proof of Theorem 4]: It is consistent for Choose to have a winning strategy in the game U(ω_2, I, ω) for some uniform normal ideal I on ω_2, and in particular, it is consistent for Choose to have a winning strategy in the game U(ω_2, ω). By Observation 1.2, ω_2 will clearly be the least cardinal κ so that Choose has a winning strategy in the game U(κ, ω), for Cut has a winning strategy in the game U(ω_1, ω). We can now show that for either of the games U(λ, γ) and U(λ, ≤γ), any small successor cardinal of a regular and uncountable cardinal can be least so that Choose wins. Note that the assumptions of the following observation are met in models of the form L[U], when U is a measurable ultrafilter on a measurable cardinal ν, as we argue in the proof of Observation 2.9.
Observation 3.6. If ν is measurable with no generically measurable cardinals below it, and given some regular and uncountable κ < ν, then in the Lévy collapse extension by the notion of forcing Coll(κ, <ν), making ν become κ⁺, Choose has a winning strategy in the game U(ν, γ) whenever γ < κ, however Choose does not have a winning strategy in the game U(λ, ≤γ) for any λ < ν.
Proof. Apply the Lévy collapse forcing to make ν become κ⁺, which is <κ-closed. Work in a generic extension for this forcing. As we argued above, κ⁺ is generically measurable as witnessed by <κ-closed forcing. Thus by Observation 3.5, Choose has a winning strategy in the game U(ν, γ) whenever γ < κ. Assume for a contradiction that there were some λ with γ < λ < ν for which Choose had a winning strategy in the game U(λ, ≤γ). By Theorem 2.5, we obtain a generically measurable cardinal μ ≤ λ. But then clearly, μ is also generically measurable in our ground model, contradicting our assumption.

§4. Cut and choose games at small inaccessibles.

In Observation 2.9, we observed that a measurable cardinal can be the least cardinal at which Choose wins cut and choose games, and in Observation 3.6, we argued that consistently, Choose can first win cut and choose games at successors of regular cardinals. In this section, we want to show that it is consistent for Choose to first win cut and choose games at small inaccessible cardinals, that is, inaccessible cardinals which are not measurable, and as we will see, not even weakly compact. The key result towards this will be the following, which is an adaptation of Kunen's technique [19] of killing the weak compactness of a measurable cardinal by adding a homogeneous Suslin tree T, and then resurrecting measurability by forcing with T. Our presentation is based on the presentation of this result that is provided by Gitman and Welch in [9, Section 6]. The difference in our result below will be that we need our homogeneous Suslin tree T to have additional closure properties, and that this will require us to do a little extra work at some points in the argument.
Theorem 4.1. Given a measurable cardinal κ, and a regular cardinal λ < κ, one can force to obtain a model in which κ is still inaccessible (in fact, Mahlo) but not weakly compact anymore, however generically measurable, as witnessed by <λ⁺-closed forcing. Hence, in particular, by Observation 3.5, Choose wins the game U(κ, I, λ) for some normal ideal I ⊇ bd_κ on κ, and thus also U(κ, λ) and U(κ, ≤λ).10

Proof. We first force with a reverse Easton iteration, adding a Cohen subset to every inaccessible cardinal below κ. Let us consider the generic extension thus obtained as our ground model in the following. By well-known standard arguments (similar to those in the proof of Silver's theorem about violating the GCH at a measurable cardinal (see [14, Theorem 21.4])), adding a Cohen subset of κ to that model will resurrect the measurability of κ in the extension. We will force to add a Suslin tree T to κ that is closed under ascending <λ⁺-sequences, and show that κ is generically measurable in that extension, as witnessed by forcing with that Suslin tree (with its reversed ordering), which now is a <λ⁺-closed notion of forcing. Here, for an ordinal α, a normal α-tree is a tree T with the following properties:
• Each t ∈ T is a function t : δ → 2 for some δ < α, and T is ordered by end-extension.
• T is closed under initial segments.
• If δ + 1 < α and t : δ → 2 is in T, then t⌢0 and t⌢1 are both in T.
• If δ < α and t : δ → 2 is in T, then for every ε with δ < ε < α, there is some s : ε → 2 in T that extends t (this property is abbreviated by saying that T is pruned).
Lemma 4.4. If κ is inaccessible, and λ < κ is a regular cardinal, then there is a <κ-strategically closed notion of forcing P_κ of size κ that adds a κ-Suslin tree within which every increasing sequence of length at most λ has an upper bound.
Proof. Fix λ and κ, and let P_κ be the following notion of forcing Q, consisting of conditions of the form ⟨t, f⟩, for which the following hold:
• t is a normal (α + 1)-tree that is closed under increasing unions of length at most λ, for some α < κ,
• Aut(t) acts transitively on t,11 and
• f : β → Aut(t) is an injective enumeration of Aut(t) for some ordinal β.
Conditions are ordered naturally, that is, ⟨t_1, f_1⟩ ≤ ⟨t_0, f_0⟩ when t_1 end-extends t_0,12 and for all β ∈ dom(f_0), f_1(β) extends f_0(β).

Claim 4.5. Q is <κ-strategically closed.

Proof. We imagine two players, Player I and Player II, taking turns for κ-many steps to play increasingly strong conditions in Q. Player I has to start by playing the weakest condition of Q, and is allowed to play at each limit stage of the game. The moves of Player I will be conditions denoted as ⟨t_i, f_i⟩, and the moves of Player II will be conditions denoted as ⟨t'_i, f'_i⟩. In order to show that Q is <κ-strategically closed, Player I has to ensure that at the end of the game, the decreasing sequence of conditions that has been produced by the above run has a lower bound in Q. We will see in the argument below that it is only at limit steps when Player I has to be careful about their choice of play.
Let ⟨t_0, f_0⟩ = ⟨{∅}, id⟩ be the weakest condition of Q. Given ⟨t_i, f_i⟩ for some i < κ, let ⟨t'_i, f'_i⟩ ≤ ⟨t_i, f_i⟩ be the response of Player II, and let Player I respond by any condition ⟨t_{i+1}, f_{i+1}⟩ ≤ ⟨t'_i, f'_i⟩ in Q.
At limit stages δ ≤ κ, we let t̄_δ be the union of the t'_ε for ε < δ, and we let f̄_δ be the coordinate-wise union of the f'_ε for ε < δ. We define the next move ⟨t_δ, f_δ⟩ of Player I as follows. In order to obtain t_δ, we add a top level to t̄_δ — we do so by simply adding unions for all branches through t̄_δ. The enumeration f_δ is then canonically induced by t_δ and by the f'_ε's.
This process can be continued for κ-many steps, showing that Q is <κ-strategically closed, as desired.
Note that it is easy to extend conditions in Q to have arbitrary height below κ. A crucial property of Q is the following.

Claim 4.6 (Sealing). Suppose p ∈ Q, Ṫ is the canonical Q-name for the generic tree added as the union of the first components of conditions in the generic filter, and p ⊩ "Ȧ is a maximal antichain of Ṫ". Then, there is q ≤ p in Q forcing that Ȧ is (level-wise) bounded in Ṫ. This means that Ṫ is forced to be a κ-Suslin tree.
Proof. Suppose p = ⟨t_0, f_0⟩, with dom(f_0) = β_0. Choose some M ≺ H(κ⁺) of size less than κ containing Q, p, Ṫ and Ȧ as elements, such that M is closed under λ-sequences, and such that Ord ∩ M ∩ κ is equal to some strong limit cardinal δ < κ of cofinality greater than λ. Let ϕ : κ → κ be a function in M which enumerates each ordinal below κ unboundedly often. Working entirely inside of M, we carry out a construction in κ-many steps (so this construction only has δ-many steps from the point of view of V). By possibly strengthening p, we may without loss of generality assume that there is some a ∈ t_0 such that ⟨t_0, f_0⟩ ⊩ ǎ ∈ Ȧ. Let B_0 be any branch through a in t_0. Let b_0 be the top node of B_0. The node b_0 begins the branch we will try to construct.
Given ⟨t_i, f_i⟩ ∈ Q, with dom(f_i) = β_i, and given b_i, for some i < κ, let ⟨t_{i+1}, f_{i+1}⟩ ∈ Q strengthen ⟨t_i, f_i⟩, such that dom(f_{i+1}) = β_{i+1}, and with the property that for every s ∈ t_i, there is a_s ∈ t_{i+1} that is compatible with s and such that ⟨t_{i+1}, f_{i+1}⟩ forces that ǎ_s ∈ Ȧ. It is straightforward to obtain such a condition in |t_0|-many steps, making use of Claim 4.5. Now, say ϕ(i) = ε. If ε ≥ β_i, let b_{i+1} be a node on the top level of t_{i+1} extending b_i. Otherwise, let s = f_i(ε)(b_i). Let s' be on the top level of t_{i+1} above both s and a_s, and let b_{i+1} = f_{i+1}(ε)⁻¹(s'). This will have the effect that whenever ⟨t, f⟩ ≤_Q ⟨t_{i+1}, f_{i+1}⟩, ⟨t, f⟩ will force f(ε̌)(b̌_{i+1}) to be above an element of Ȧ in this latter case. At limit stages δ', we let t̄_{δ'} be the union of the t_i for i < δ', and we let f̄_{δ'} be the coordinate-wise union of the f_i for i < δ'. Let b_{δ'} = ⋃_{i<δ'} b_i. Now, in order to obtain t_{δ'}, we add a top level to t̄_{δ'}. If δ' has cofinality larger than λ, we pick this top level of t_{δ'} to consist of the images c[{b_i | i < δ'}] for c ∈ range(f̄_{δ'}). Note that since the identity map is an element of range(f̄_{δ'}), it follows in particular that b_{δ'} ∈ t_{δ'}. If δ' has cofinality at most λ, we pick this top level to consist of all unions of branches through t̄_{δ'} (note that by the closure properties of M, these are the same in M and in V, and we thus obtain an actual condition in Q). Finally, f_{δ'} is canonically induced by t_{δ'} and the f_i in each case. It is easy to check that ⟨t_{δ'}, f_{δ'}⟩ is a condition in Q in each case, however note that having f_i act transitively on t_i for each i < δ' is needed to ensure that t_{δ'} is pruned.

In V, after δ-many steps, we build q = ⟨t, f⟩ = ⟨t_δ, f_δ⟩ by unioning up the sequence of conditions ⟨t_i, f_i | i < δ⟩, adding a top level to t̄_δ = ⋃_{i<δ} t_i, and extending f as in the limit ordinal case above. Note that since δ has cofinality greater than λ, we will be in the case when we only include top level nodes in t above certain branches of t̄_δ.

We finally need to show that ⟨t, f⟩ forces Ȧ to be bounded in Ṫ. We will do so by showing that it forces Ȧ to be a maximal antichain of ť (in fact, of t̄_δ). Let b be a branch of t̄_δ, induced by some node on the top level of t. This node will have to be of the form c[{b_i | i < δ}], where c = f(ε) for some ε ∈ dom(f). But then, using that ϕ ∈ M, and that δ = M ∩ κ, it follows that ϕ(i) = ε for unboundedly many ordinals i < δ. Pick one such i for which ε < β_i, noting that ⋃_{i<δ} β_i = dom(f). By our remark made at the end of the successor ordinal step of our above construction, it now follows that ⟨t, f⟩ forces č[{b̌_i | i < δ}] to meet Ȧ (within ť; in fact, within t̄_δ).
By the above claim, it is immediate that Ṫ is forced to be a κ-Suslin tree. By the definition of Q, it is also immediate that every increasing sequence of length at most λ in Ṫ is forced to have an upper bound in Ṫ (note that by its <κ-strategic closure, Q does not add any new <κ-sequences of elements of Ṫ).
Observation 4.7.If we let Ṫ be the canonical name for the κ-Suslin tree added by forcing with Q, then Q * Ṫ is equivalent to κ-Cohen forcing, where the ordering of the notion of forcing Ṫ is the reverse tree ordering.
Proof. It suffices to argue that Q * Ṫ has a dense subset of conditions that is <κ-closed. Our dense set will be conditions of the form ⟨t, f, b⟩ where b is a node on the top level of t. Given a decreasing sequence ⟨t_i, f_i, b_i | i < δ⟩ of conditions in this dense set of length δ < κ, we may find a lower bound as in the limit stage case in the proof of Claim 4.6, with the sequence of b_i's inducing a branch through the union of the t_i's.
It thus follows that after forcing with Q * Ṫ, κ is measurable, and thus Q forces that κ is generically measurable as witnessed by the notion of forcing Ṫ, which is <λ⁺-closed, as desired. It is also clear that κ is Mahlo after forcing with Q, for otherwise it could not be measurable in the further Ṫ-generic extension.
Note that in particular, if in the starting model there are no generically measurable cardinals below κ, then in our forcing extension above, arguing as in the proof of Observation 2.9, κ is the least cardinal μ such that Choose has a winning strategy in the game U(μ, ω). The same holds for U(μ, ≤ω).

§5. Cutting into a larger number of pieces.

Let us start by considering variants of cut and choose games in which we allow Cut to cut into a larger number of pieces in each of their moves. We again fix a regular and uncountable cardinal κ and a monotone family I on κ throughout.

Definition 5.1. For any cardinal ν < κ, and any limit ordinal γ < κ, we introduce the following variants U_ν(X, I, γ), U_ν(X, I, ≤γ), and U_ν(X, I, <γ) of the games U(X, I, γ), U(X, I, ≤γ), and U(X, I, <γ), allowing also for γ = κ in U_ν(X, I, <γ): In each move, Cut is allowed to cut X into up to ν-many rather than just two pieces, and as before, Choose will pick one of them. For any cardinal ν ≤ κ, we also introduce variants U_{<ν}(X, I, γ), U_{<ν}(X, I, ≤γ), and U_{<ν}(X, I, <γ): Cut is now allowed to cut X into any number of less than ν-many pieces in each of their moves. The winning conditions for each of these variants are the same as for the corresponding games defined above.
If I is a <κ-complete ideal, then in the games U (X, I, ) and U (X, I, < ) above, and their variants where is replaced by < , we could equivalently require Cut to cut the starting set X into I-positive sets in each of their moves: Choose will clearly lose if they ever decide for a set in I, but it is also pointless for Cut to cut off pieces in I, using that either our games have length less than κ, or in the case of games of length κ, the winning conditions only depend on properties of proper initial segments of its runs, and that I is <κ-complete.
The following generalizes Observation 3.4, showing that it is still not very interesting to consider winning strategies for Cut in these games.
Observation 5.2.Let < κ be a limit ordinal, let < κ be a regular cardinal, let I be a monotone family such that κ can not be written as a <κ-union of elements of I, and let X ∈ I + .Let < = sup{ | < is a cardinal}.Then, the following hold : Proof.The proofs of (1)-( 3) and ( 5) are analogous to those in Observation 3.4.The argument for (4) is a minor adaption of that for (3).Observation 3.5 easily generalizes to the following, using that in the notation of that observation, U is forced to be V -<κ-complete.Observation 5.3.Assume that ≤ κ is regular, and that κ is generically measurable, as witnessed by some < -closed notion of forcing P. Let U be a P-name for a V-normal V-ultrafilter on κ, and let I be the hopeless ideal with respect to U .Then, for any X ∈ I + , Choose has a winning strategy in the game U <κ (X, I, < ).
We also want to define cut and choose games on a cardinal κ where Cut can cut into κ-many pieces.A little bit of care has to be taken in doing so however.One thing to note is that we do have to require Cut to actually cut into I-positive pieces, for otherwise, given that I contains all singletons, they could cut any set X into singletons in any of their moves, making it impossible for Choose to win.Another observation is that if I ⊇ bd κ is <κ-complete, then any disjoint partition W of an I-positive set X into less than κ-many I-positive sets is maximal: there cannot be an I-positive A ⊆ X such that for any B ∈ W , A ∩ B ∈ I .This is clearly not true anymore for partitions of size κ.However, as the following observation shows, in many cases, maximality is needed in order for such cut and choose games to be of any interest.
Observation 5.4. If I ⊇ bd κ is <κ-complete and has the property that any I-positive set can be partitioned into κ-many disjoint I-positive sets, and the game U κ (X, I, ) were defined as the games U (X, I, ) in Definition 5.1, however letting = κ while additionally requiring Cut to always provide partitions into I-positive pieces, and X ∈ I + , then Cut has a winning strategy in the game U κ (X, I, ).
Proof. Write X as a disjoint union of I-positive sets X i for i < . At any stage n < , let Cut play a disjoint partition A n α | α < κ of X into I-positive sets such that each A n α contains exactly one element of X n , and such that A n α ∩ X m ∈ I + whenever m > n. Choose has to pick some B n = A n α . Let A n α ∩ X n = {α n }. Note that the above defines a strategy for Cut which ensures that for any i < , X i ∩ n< B n contains at most one element, and hence the intersection n< B n of the choices of Choose is countable, showing this strategy to be a winning strategy for Cut, as desired.

We will need the following.

Definition 5.5. Let I be a monotone family on a regular and uncountable cardinal κ.
• If X ∈ I + , then an I-partition of X is a maximal collection W ⊆ P(X ) ∩ I + so that A ∩ B ∈ I whenever A, B ∈ W are distinct.• An I-partition W is disjoint if any two of its distinct elements are.
In the light of the above, we now define cut and choose games in which Cut can cut into κ-many, or even more pieces in each of their moves as follows.
Definition 5.6.Let κ be a regular uncountable cardinal, let I be a monotone family on κ, let < κ be a limit ordinal, let X ∈ I + , and let be a cardinal, or = ∞.
• G (X, I, ) denotes the variant of the game U (X, I, ) where in each move, Cut may play an I-partition of size at most of X, or of arbitrary size if = ∞, and Choose has to pick one of its elements.Choose wins in case the intersection of all of their choices is I-positive, and Cut wins otherwise.
• In a similar fashion (using I-partitions rather than disjoint partitions), we also define games G (X, I, ≤ ) and G (X, I, < ) as variants of U (X, I, ≤ ) and U (X, I, < ), allowing also for = κ for the latter.• If is a cardinal, we also define games G < (X, I, ), G < (X, I, ≤ ), and G < (X, I, < ) in the obvious way.
By the below observation, these games actually generalize the games that we introduced in Definition 5.1.
Observation 5.7.If I is a monotone family, then the U -games introduced in Definition 5.1 are equivalent to their corresponding G-games introduced in Definition 5.6, that is, for any choice of parameters X, I, , and that are suitable for Definition 5.1, the games G (X, I, ) and U (X, I, ) are equivalent, the games G (X, I, ≤ ) and U (X, I, ≤ ) are equivalent, etc.
Proof.We only treat the equivalence between games of the form G (X, I, ) and U (X, I, ) when < κ is a cardinal and < κ is a limit ordinal, for the other equivalences are analogous.Making use of the comments after Definition 5.1, if Cut wins U (X, I, ), then Cut wins G (X, I, ), because every disjoint partition of an I-positive set X into less than κ-many I-positive sets is an I-partition of X. Analogously, if Choose wins G (X, I, ), then Choose wins U (X, I, ).
Suppose that is a winning strategy for Cut in G (X, I, ).To define a winning strategy for Cut in U (X, I, ), we use an auxiliary run of G (X, I, ) in which Cut plays according to .Given a move W α of Cut in round α < in G (X, I, ), we let perform two consecutive moves in U (X, I, ).The first one is the full disjointification W α of W α .The second one splits X into W α and X \ W α .Choose first picks an element Y α of W α and then they have to pick W α .By the definition of full disjointifications, there is some X α ∈ W α with Y α ∩ W α ⊆ X α .We let Choose play such an X α in G (X, I, ), and Cut again responds in the next round by using .Since is a winning strategy for Cut, it follows that α< (Y α ∩ W α ) ⊆ α< X α ∈ I , and therefore that is a winning strategy for Cut, as desired.
We now argue that a winning strategy for Choose in U (X, I, ) yields a winning strategy for Choose in the game G (X, I, ), making use of an auxiliary run of U (X, I, ) in which Choose plays according to .Given a move W α of Cut in the game G (X, I, ), we let Cut perform two consecutive moves in the game U (X, I, ): The first one is the full disjointification W α of W α , and the second one splits X into W α and X \ W α .The strategy will pick some element Y α ∈ W α , and then decides for W α .We let the next move of Choose according to be some X α ∈ W α for which Y α ∩ W α ⊆ X α .Since is a winning strategy for Choose, it follows that α< (Y α ∩ W α ) ⊆ α< X α ∈ I + , and therefore that is a winning strategy for Choose, as desired.
Up to some point, increasing the possible size of I-partitions that Cut may play actually does not make a difference (in terms of the existence of winning strategies for either player) for our generalized cut and choose games of the form G (X, I, ) or G (X, I, < ). This will follow as a special case of Theorem 6.2, noting that if is a cardinal and I is a < + -complete ideal on a cardinal κ, then the partial order P(κ)/I is a < + -complete Boolean algebra.

§6. Poset games and distributivity.

A very natural further generalization is to consider analogues of the above games played on posets. On Boolean algebras, such games of length were considered by Veličković [26], and such games of arbitrary length were considered by Dobrinen [3], who also mentions a generalization to partial orders. We assume that each poset Q has domain Q and a maximal element 1 Q .

Definition 6.1. If Q is a poset with X ∈ Q, is a limit ordinal, and is a cardinal, or = ∞, G (X, Q, ) denotes the game of length in which players Cut and Choose take turns, where in each move, Cut plays a maximal antichain of Q below X of size at most , or of arbitrary size if = ∞, and Choose responds by picking one of its elements. Choose wins in case the sequence of all of their choices has a lower bound in Q, and Cut wins otherwise. We also introduce obvious variants with < and/or < in place of and , respectively; if the final parameter is of the form < , we only ask for lower bounds in Q for all proper initial segments of the sequence of their choices in order for Choose to win.
Let κ be a regular uncountable cardinal, and let I be a <κ-complete ideal on κ.It is easily observed that for X ∈ I + , any limit ordinal < κ, and any cardinal , or = ∞, the games G (X, I, ) and G ([X ] I , P(κ)/I, ) are essentially the same game (and are in particular equivalent), as are G (X, I, < ) and G ([X ] I , P(κ)/I, < ).But note that Definition 6.1 can also be taken to provide a natural definition of G (X, I, ), and its variants with < and/or < , which also works for ≥ κ: We could take them to be G ([X ] I , P(κ)/I, ) and its variants, and we observe that this corresponds to requiring the existence of an I-positive set that is I-almost contained in every choice of Choose in order for Choose to win, rather than an I-positive intersection of those choices, in Definition 5.6.
We first want to show a result that we already promised (for games with respect to ideals) in Section 5, namely that up to some point, increasing the possible size of partitions provided by Cut still yields equivalent games.Given a cardinal , we say that a partial order Q is < -complete in case it has suprema and infima for all of its subsets of size less than , under the assumption that those subsets have a lower bound for the latter.Theorem 6.2.Let and be cardinals, let < be a cardinal, let Q be a separative partial order with domain Q, and let q ∈ Q.
Proof.The idea of the arguments for the above is that we may simulate a single move of Cut, in the games where they are allowed to play larger antichains, by less than -many moves in the corresponding games where they are only allowed to play antichains of size at most in each of their moves.Let us go through some of the details of one of those equivalences in somewhat more detail.For example, let us assume that in (1), Cut has a winning strategy in the game G (X, Q, ).Assume that in one of their moves, they play a maximal antichain of Q below X of the form W = {x r | r ∈ }.Let Cut make -many moves in the game G (X, Q, ), playing maximal antichains W i below X for i < , with and hence the collection of these infima for different r ∈ forms a maximal antichain of Q below X.Thus, by the separativity of Q, it follows that in fact x r = inf{w r(i) i | i < }.Let Choose respond to W by picking x r ∈ W . Cut will win this run of the game G (X, Q, ) for they are using their winning strategy, but then they will also win the above run of G (X, Q, ), for the responses of Choose in this run will be cofinal in the sequence of their corresponding responses in the run of G (X, Q, ), and thus the set of responses of Choose in either game will not have a lower bound.We have thus produced a winning strategy for Cut in the game G (X, I, ) in this way, as desired.
The remaining arguments for (1) are very similar to the above.Item (2) follows directly from the argument for (1), and ( 3) is an immediate consequence of (2).
Let us recall and introduce two notions of distributivity for posets.Definition 6.3 (Distributivity).Let Q be a poset with underlying set Q, let be a limit ordinal and let be a regular cardinal, or = ∞.
each of size at most , or of arbitrary size in case = ∞, then there is a sequence X α | α < of conditions so that for each α < , X α ∈ W α and the sequence X | < α has a lower bound in Q.We call such a sequence α < is a sequence of maximal antichains of Q below X, each of size at most , or of arbitrary size in case = ∞, then there is a sequence X for every X ∈ Q. 19 • Let I be an ideal on a regular and uncountable cardinal κ.We say that I is ( , )-distributive or uniformly (< , )-distributive if the poset P(κ)/I is.
For complete Boolean algebras Q, it is easy to see that ( , )-distributivity implies ( , )-distributivity, since adding no new functions from to by forcing with Q is clearly equivalent to adding no new functions from to . 20The following is a version of this observation with weaker completeness assumptions that seems to require a different kind of argument.This lemma and its proof are closely related to Theorem 6.2.
Lemma 6.4.If Q is a ( , )-distributive poset, where , and ≤ are cardinals, then the following statements hold : In analogy to the above, uniform (< , )-distributivity implies higher levels of uniform distributivity as well.
Proof.(1): Suppose that W j | j < is a sequence of maximal antichains in Q, each of size ≤ .For each j < , we define W j i | i < as in the proof of Theorem 6.2.Since Q is ( , )-distributive, there exists a positive branch through W j i | ≺i, j ∈ with a lower bound p, where ≺i, j denotes the standard pairing function applied to i and j.As in the proof of Theorem 6.2, p induces a positive branch through W j | j < .
(2): Suppose that W j | j < is a sequence of maximal antichains in Q, each of size ≤ < .Fix a cofinal sequence i | i < cof( ) in .For each j < , we partition W j into subsets W j,i | i < cof( ) such that W j,i has size ≤ i for each i < cof( ).We can extend each W j,i to a maximal antichain W j,i by adding a single condition, namely sup(W j \ W j,i ), since Q is <( < ) + -complete.As in the proof of (1), we then replace each W j,i by a sequence W j,i k | k < i such that W j,i k has size ≤ .Let Wl | l < enumerate all the W j,i k in order-type .Since Q is ( , )-distributive, there exists a positive branch through Wl | l < .This is easily seen to induce a positive branch through W j | j < , as required.
(3): We proceed as in the proof of (2), except when W j,i is extended to a maximal antichain W j,i : Since |W j,i | ≤ i < < and Q is <( < )complete, sup(W j,i ) exists, and thus, using that Q is a Boolean algebra, also sup(W j \ W j,i ) = ¬sup(W j,i ) exists.
For Boolean algebras, the case = in (1) and ( 2) below was proved by Thomas Jech in [13, Theorem 2] (and (3) is nontrivial only for uncountable ).A more general result for arbitrary cardinals was then shown by Dobrinen in [3, Theorem 1.4].In the theorem below, (1) and ( 2)-(2b) are essentially due to Dobrinen.We will present a somewhat different and simpler argument for these, and furthermore present additional results which partially answer a question of Dobrinen [3, paragraph after Theorem 1.4] by showing in (2d) that a <( < ) + -complete Boolean algebra Q is ( , )distributive if and only if Cut does not have a winning strategy in the game G (X, Q, ). of Q of elements below sequences of possible first α-many choices of Choose, allowing us to drop the completeness assumption on Q).Note that by our assumption that < = , these antichains will always have size at most .Use ( , )-distributivity with respect to X to obtain a positive branch through the sequence of W α 's, which yields a way for Choose to win while Cut is following their supposed winning strategy, which is a contradiction.
(3) follows by exactly the same arguments as ( 1) and ( 2) using the instances of Lemma 6.4 about uniform distributivity.Definition 6.6.Let I be an ideal on a regular and uncountable cardinal κ.Let be a limit ordinal and let be a regular cardinal, or = ∞.I is (≤ , )-distributive if whenever X ∈ I + and W α | α < is a sequence of I-partitions of X, each of size at most , or of arbitrary size in case = ∞, then there is a sequence X α | α < so that for each α < , X α ∈ W α and for every < , < X ∈ I + , and The proof of the following theorem essentially proceeds like the proof of Theorem 6.5 (1) and (2a)-(2c), and we will thus omit presenting the argument.Theorem 6.7.Let I be an ideal on a regular and uncountable cardinal κ, let be a cardinal or = ∞, and let X ∈ I + .
(1) if < κ is a limit ordinal and Cut does not have a winning strategy in the game G (X, I, ≤ ), then I is (≤ , )-distributive with respect to X. (2) If I is < -complete and (≤ , )-distributive with respect to X, and either = ∞, is a cardinal and < = , or < = , then Cut does not have a winning strategy in the game G (X, I, ≤ ).
Recall that a nonprincipal ideal I is precipitous if its generic ultrapower is forced to be wellfounded.It is a well-known standard result that (in our above terminology) an ideal I is precipitous if and only if it is (≤ , ∞)distributive (see, for example, [14, Lemma 22.19]).It thus follows by the above that precipitousness of an ideal can be described via the non-existence of winning strategies for Cut in suitable cut and choose games.We will say more about the relationship between precipitousness and cut and choose games in Section 7.
With respect to Footnote 8, let us also remark that a <κ-complete ideal I ⊇ bd κ is a WC ideal (as defined by Johnson in [15]) if and only if I is uniformly (<κ, κ)-distributive.

§7. Banach-Mazur games and strategic closure.

In this section, we want to show how winning strategies for Banach-Mazur games on partial orders relate to winning strategies for certain cut and choose games.

Definition 7.1 (Banach-Mazur games). Let κ be a regular uncountable cardinal, let I be a monotone family on κ, let be a limit ordinal, and let Q be a poset with domain Q.
• If < κ, let B(I, ) denote the following game of length .Two players, Empty and Nonempty take turns to play I-positive sets, forming a ⊆decreasing sequence, with Empty starting the game and with Nonempty playing first at each limit stage of the game.If at any limit stage < , Nonempty cannot make a valid move, then Empty wins and the game ends.Nonempty wins if the game proceeds for -many rounds and the intersection of the sets that were played is nonempty.Otherwise, Empty wins.• B(Q, ) denotes the following game of length on the poset Q.Two players, Empty and Nonempty take turns to play elements of Q, forming a ≤-decreasing sequence, with Empty starting the game and with Nonempty playing first at each limit stage of the game.If at any limit stage < , Nonempty cannot make a valid move, then Empty wins and the game ends.Nonempty wins if the game proceeds for -many rounds and the collection of the sets that were played has a lower bound in Q. • We let B + (I, ) denote the game B(P(κ)/I, ).
Clearly, if Nonempty has a winning strategy in a game B + (I, ) for some < κ and I contains all singletons, then the same strategy makes them win B(I, ).Let us observe that by a similar proof to that of Observation 1.2, it easily follows that if κ ≤ 2 , then Empty has a winning strategy in the game B + (I, ) for any monotone family I on κ that contains all singletons.Note that an ideal I is precipitous if and only if Empty has no winning strategy in the game B(I, ), and that a poset Q is < + -strategically closed 21 if and only if Nonempty has a winning strategy in the game B(Q, ).We recall a classical result, which is verified when κ = 1 as [8, Theorem 4], and it is easy to see that the proof of [8, Theorem 4] in fact shows the following, replacing 1 by an arbitrary regular and uncountable cardinal κ.This parallels Observation 3.5 and the comments preceding it.Theorem 7.2 (Galvin, Jech, and Magidor).Let κ be a regular and uncountable cardinal, and let < κ be regular.If we Lévy collapse a measurable cardinal above κ to become κ + , then in the generic extension, there is a uniform normal ideal I on κ + such that Nonempty has a winning strategy in the game B + (I, ).
In the following, we want to compare the above games with the cut and choose games from our earlier sections.When = , (1) and ( 2) below are essentially due to Jech in [12, 13].For larger , (2) below follows from [6, theorem on page 718] and [3, Theorem 1.4] (we presented the latter in Theorem 6.5):That is, in his [6], Matthew Foreman showed that Empty not winning B(Q, ) is equivalent to the ( , ∞)-distributivity of Q.We will provide an argument that directly connects these types of games.
Theorem 7.3.Let κ be a regular uncountable cardinal, let I be an ideal on κ, let < κ be a limit ordinal, and let Q be a poset with domain Q.Then, the following hold : (1) Empty wins B(I, ) if and only if Cut wins G ∞ (X, I, ≤ ) for some Proof.Let us provide a proof of (1), and remark that (2) is verified in complete analogy.By Theorem 6.7, Cut having a winning strategy in the game G ∞ (X, I, ≤ ) is equivalent to I not being (≤ , ∞)-distributive with respect to X.
Assuming that I is not (≤ , ∞)-distributive with respect to X, we pick a sequence W i | i < of I-partitions of X witnessing this.Let us describe a winning strategy for Empty in the game B(I, ).In their first move, let Empty play the set x 0 = X .At any stage i < , given the last move y ∈ I + of Nonempty, pick x i ∈ W i such that y ∩ x i ∈ I + , which exists by the maximality of W i .Let Empty play x i .It follows that i< x i = ∅, as desired.
On the other hand, assume that Empty has a winning strategy in the game B(I, ).Let x 0 be the first move of Empty according to .We will describe a winning strategy for Cut in the game G ∞ (x 0 , I, ≤ ), making use of an auxiliary run of B(I, ) according to .Given a play of B(I, ) in which the moves of Empty are x i | i < j for some j < , and Nonempty is to move next, for every possible next move q ∈ I + of Nonempty, has a response r ⊆ q in I + , which provides us with a dense set D j of such responses r below i<j x i in P(κ)/I .Noting that maximal antichains in P(κ)/I are exactly I-partitions, let Wj ⊆ D j be an I-partition of i<j x i .Let Cut play an Ipartition W j of x 0 extending Wj in their jth move.Choose will pick some element w j ∈ Wj , and we let Nonempty play some I-positive q j in their next move, such that Empty answers this by playing x j = w j in their next move, according to .In this way, all choices of Choose are also moves of Empty, hence i< w i = ∅, since Empty is following their winning strategy .
The next theorem will show that we in fact obtain instances of equivalent Banach-Mazur games and cut and choose games.The forward direction when = in Item (2) below is due to Jech in [13], and the reverse direction of (2) for = is due to Veličković in [26].The full proof of (2) below is due to Dobrinen [4, Theorem 29].
Theorem 7.4.Let κ be a regular uncountable cardinal, let I be an ideal on κ, let < κ be a limit ordinal, and let Q be a poset with domain Q.Then, the following hold : (1) Nonempty wins B(I, ) if and only if Choose wins G ∞ (X, I, ≤ ) for all Proof.We provide a proof of (1), and remark that ( 2) is verified in complete analogy.For the forward direction, let be a winning strategy for Nonempty in B(I, ), and let X ∈ I + .We describe a winning strategy for Choose in G ∞ (X, I, ≤ ), making use of an auxiliary run of B(I, ) according to .Suppose that Cut starts the game by playing an I-partition W 0 of X.Let Empty play x 0 = X , and let y 0 be the response of .Using the maximality of W 0 , let Choose pick w 0 ∈ W 0 such that w 0 ∩ y 0 ∈ I + as their next move.At any stage 0 < i < , assume Cut plays an I-partition W i of X, and let y i be the last move of Nonempty according to .Let Choose pick w i ∈ W i such that w i ∩ y i ∈ I + as their next move.Let Empty play w i ∩ y i , and let Nonempty respond with y i+1 using .At limit stages, let Nonempty make a move according to .Since y i+1 ⊆ w i , we have i< w i ⊇ i< y i = ∅, showing that we have indeed described a winning strategy for Choose, as desired.
For the reverse direction, suppose that Empty starts a run of the game B(I, ) by playing some x 0 ∈ I + .Let be a winning strategy for Choose in the game G ∞ (x 0 , I, ≤ ).We can identify with a function F which on input W i | i ≤ for some < considers the partial run in which the moves of Cut are given by the W i , the moves of Choose at stages below are given by the strategy , and F ( W i | i ≤ ) produces a response w ∈ W for Choose to this partial run.We describe a winning strategy for Nonempty in the game B(I, ), making use of an auxiliary run of G ∞ (x 0 , I, ≤ ) according to .
For the first move, consider the set and note that there is an I-positive y 0 ⊆ x 0 such that P(y 0 ) ∩ I + ⊆ Σ ∅ , for otherwise the complement of Σ ∅ is dense in I + below x 0 , and hence there is an I-partition W of x 0 that is disjoint from Σ ∅ , however F ( W ) ∈ W ∩ Σ ∅ , which is a contradiction.Let Nonempty pick such a y 0 as their response to Empty's first move x 0 .
In the next round, suppose that Empty plays x 1 ⊆ y 0 .Let Cut play an I-partition W 0 of x 0 such that F ( W 0 ) = x 1 as their first move in the game G ∞ (x 0 , I, ≤ ).Consider the set As before, there is y 1 ⊆ x 1 in I + such that P(y 1 ) ∩ I + ⊆ Σ W 0 , and we let Nonempty respond with such y 1 .We proceed in the same way at arbitrary successor stages.At any limit stage 0 < i < , let W = W j | j < i , and let Nonempty pick y i such that P(y i ) ∩ I + ⊆ Σ W = {F ( W W ) | W is an I-partition of x 0 } by an argument as above.
In this way, the choices of Choose are exactly the choices of Empty in the above, and hence their intersection is nonempty, for Choose was following their winning strategy .This shows that we have just described a winning strategy for Nonempty in the game B(I, ), as desired.
We showed in Theorem 6.5 that a poset Q is ( , ∞)-distributive if and only if for all X ∈ Q, Cut does not win the game G ∞ (X, Q, ).The next characterization follows from Theorem 7.4, since a poset Q is < +strategically closed (by the very definition of this property) if and only if Choose has a winning strategy in B(Q, ).
Corollary 7.5.Let κ be a regular uncountable cardinal, let I be an ideal on κ, let < κ be a limit ordinal, and let Q be a poset with domain Q.Then, Q is < + -strategically closed if and only if Choose wins G ∞ (X, Q, ) for all X ∈ Q.
Let us close with some complementary remarks on the games studied in this section.We first argue that allowing for arbitrary large partitions is important in the above characterisations of precipitous ideals.For instance, a restriction to partitions of size <κ does not lead to equivalent games.To see this, note that assuming the consistency of a measurable cardinal and picking some regular cardinal below, it is consistent to have an ideal I on a cardinal κ such that Choose has a winning strategy in the game G <κ (κ, I, ≤ ) for any < κ, but I is not precipitous.Simply take I = bd κ when κ is either measurable, or in a model obtained from Theorem 7.2.It is well-known that the bounded ideal is never precipitous (see [8, p. 1]).
This can also hold for normal ideals.For example, work in a model of the form L[U ] with a measurable cardinal κ with a solitary normal ultrafilter U on κ.Let J be the ideal on κ dual to U. The cardinal κ is also completely ineffable, and we let I be the completely ineffable ideal on κ, as introduced by Johnson in [16] (see also [10]).Since J equals the measurable ideal on κ, that is defined as the intersection of the complements of all normal ultrafilters on κ in [10], we have I ⊆ J by [10, Theorems 1.4(5) and 1.5 (11)].As in Observation 3.2, Choose has a winning strategy in the game G <κ (κ, I, ) whenever < κ.But by a result of Johnson [16, Theorem 1.6], the completely ineffable ideal is never precipitous, that is Empty has a winning strategy in B(I, ).
In general, it is harder for Cut to win G ∞ (X, I, ≤ ) than to win G ∞ (X, I, ).To see this, note that for any precipitous ideal I on 1 , Cut does not win G ∞ (X, I, ≤ ) by Theorem 7.3 (1).However, Cut wins G (X, I, ) by Observation 5.2.
Building on results of Galvin, Jech, and Magidor [8], Johnson [15, Theorem 4] shows that for κ < ℵ , if J ⊇ bd κ is a <κ-complete ideal on κ such that Nonempty wins the game B(J, ), then J is ( , κ)-distributive. Let I be a normal precipitous ideal on 1 . Since I is not ( , 1 )-distributive, there exists X ∈ I + such that Choose does not win G ∞ (X, I, ≤ ) by the above and by Theorem 7.4 (1). By the combination of these two observations, it is consistent that there exists a normal ideal K on 1 such that G ∞ ( 1 , K, ≤ ) is undetermined. This is still possible for ℵ 2 . To see this, note that Shelah has shown that precipitousness of I does not imply ( , κ)-distributivity of I even if κ = 2 , CH holds, and I is normal (see [15, Theorem 2 and comments before Theorem 4] and [24, Theorem 6.4(2)]). In this situation, there exists some X ∈ I + such that Cut wins G 2 (X, I, ) by Theorem 6.5, but for all Y ∈ I + , Cut does not win G ∞ (Y, I, ≤ ) by Theorem 7.4. Using [15, Theorem 4] and Theorem 7.4(1) as above, there exists X ∈ I + such that Choose does not win G ∞ (X, I, ≤ ). Hence it is consistent that there exists a normal ideal K on 2 such that G ∞ ( 2 , K, ≤ ) is undetermined.

§8. Final remarks and open questions.

We have seen that for most cut and choose games, the existence of a winning strategy for Cut has a precise characterization as in Figure 1 below (where ( * ) indicates that the stated equivalence is only known to hold under certain additional completeness and cardinal arithmetic assumptions). Let κ be an uncountable regular cardinal, < κ a limit ordinal, a regular cardinal, and I ⊇ bd κ a <κ-complete ideal on κ. For the proofs of the first three rows of Figure 1, see Observations 1.2, 2.2, and 3.4, and for the remaining ones Observation 5.2(2) and (1), Theorems 6.5, 6.7, and 7.3 (1). Recall the equivalence between U-games (where Cut presents disjoint partitions of size < κ) and G-games proved in Observation 5.7.
Regarding winning strategies for Choose, we have seen in Corollary 7.5 that Choose wins G ∞ (X, Q, ) for all X ∈ Q if and only if Q is < + -strategically closed.However, it is often much harder to characterize the existence of winning strategies for Choose.For instance, it is open in many cases whether
Cut wins Characterization
I is not uniformly (< , )-distributive ( * ) ∀X G (X, I, ≤ ) I is not (≤ , )-distributive ( * ) ∀X G (X, I, ) I is not ( , )-distributive ( * ) ∀X G ∞ (X, I, ≤ ) I is not precipitous the existence of winning strategies for Choose in different cut and choose games can be separated.
Question 8.1.Is it possible to separate the existence of winning strategies for Choose in U (κ, ≤ ) and G ∞ (κ, bd κ , ) for some limit ordinal < κ?Besides these two extreme cases, the previous question is also open for games in between the above ones, such as U (κ, ), or the variant of U (κ, ) with a different (finite or infinite) number of elements required in the final intersection.An obvious candidate for a model to answer Question 8.1 positively would be the model obtained in the proof of Theorem 4.1.Question 8.2.In the model obtained in the proof of Theorem 4.1, where κ is inaccessible but not weakly compact, and in which Choose has a winning strategy in the game U (X, I, ≤ ) whenever < κ and X ∈ I + , with I being a hopeless ideal as obtained from the generic measurability of κ in that model, does Nonempty have a winning strategy in the precipitous game B(I, ) (and hence in the game G ∞ (X, I, ≤ ) for every X ∈ I + by Theorem 7.4)?
A related natural question regarding the ideal games studied in Section 3 is whether the existence of a winning strategy for Choose can ever depend on the choice of ideal.We formulate our question for ideal games of the form U (κ, I, ) in the below, however analogous questions could clearly be asked regarding all sorts of variants of these games that we study in our paper.Question 8.3.Is it consistent that there exist <κ-closed ideals I, J ⊇ bd κ on κ so that Choose has a winning strategy for U (κ, I, ), but not for U (κ, J, )?Note that by Observation 3.3, this cannot happen if κ is 2 κ -strongly compact.This question is equivalent to the question whether the existence of winning strategies for Choose in any of the ideal cut and choose games introduced in this paper can depend on the choice of starting set X ∈ I + , when I ⊇ bd κ is a <κ-complete ideal on κ.If Question 8.3 had a positive answer, then we could easily form an ideal K that is generated by isomorphic copies of such ideals I and J on two disjoint subsets of κ such that the existence of a winning strategy of Choose in the game U (X, K, ) depends on the starting set X.In the other direction, if the existence of winning strategies for Choose in some ideal cut and choose game related to I can depend on the choice of starting set X ∈ I + , then we can consider the games with respect to the ideals obtained by restricting I to those starting sets, exactly one of which will be won by Choose.
In Section 4, we have seen that it is consistent that Choose wins U (κ, bd κ , ) for a small inaccessible cardinal κ.The cardinal studied there is not weakly compact, however Mahlo.We do not know if the latter is necessary.Question 8.4.Is it consistent that Choose wins U (κ, ≤ ), where κ is the least inaccessible cardinal?Theorem 2.5 and Observation 3.5 show that if < κ is regular and Choose wins U (κ, ≤ ), then κ is generically measurable as witnessed by some < -closed notion of forcing, which in turn implies that Choose wins U (κ, bd κ , < ).We ask if either of these implications can be reversed.Question 8.5.Let < κ be regular.
(1) If κ is generically measurable as witnessed by some < -closed notion of forcing, does it follow that Choose wins U (κ, ≤ )? (2) If Choose wins U (κ, bd κ , < ), does it follow that κ is generically measurable as witnessed by some < -closed notion of forcing?
This latter question makes sense also if = κ: (3) If Choose wins U (κ, bd κ , <κ), does it follow that κ is generically measurable as witnessed by some <κ-closed notion of forcing?
Regarding the characterisations of distributivity in Section 6, we have partially answered a question of Dobrinen [3, paragraph after Theorem 1.4], however the following is still open to some extent.Question 8.6.Which degree of completeness of a poset Q is necessary in order to show that the ( , )-distributivity of Q implies that Cut does not have a winning strategy in the game G (X, Q, )?Note that <( < ) + -completeness suffices by Theorem 6.5.In conjunction with Lemma 6.4, this shows that the existence of a winning strategy for Cut in the games G (X, Q, ) and G < (X, Q, ) are equivalent.In most cases, an analogous result holds with respect to the existence of winning strategies for Choose by Theorem 6.2, however we do not know whether this is the case when is a limit cardinal such that < < whenever < .In this direction, Theorem 6.2(2) only shows that G (X, Q, ) and G <( < ) (X, Q, ) are equivalent.
Question 8.7.Is the existence of a winning strategy for Choose in the games G (X, Q, ) and G < (X, Q, ) equivalent, assuming that Q is <( < ) +complete?
The following questions concern the relationship between cut and choose games and Banach-Mazur games.The first one is connected with the separation of the existence of winning strategies for Choose for different games.For instance, is it consistent that Choose wins U ( 2 , ≤ ), but Cut wins G ∞ ( 2 , I, ≤ ) for arbitrary <κ-complete ideals I ⊇ bd κ ?This is equivalent to the following question.Question 8.8.Is it consistent that Choose wins the game U ( 2 , ≤ ), but there are no precipitous ideals on 2 ?
Similarly, one can ask about situations in which certain cut and choose games are undetermined.Note that for all cut and choose games studied in this paper, the existence of a winning strategy for Choose implies that there is an inner model with a measurable cardinal.Therefore, the statements listed in Figure 1 show that all games of length where Cut plays partitions of size < κ can be simultaneously undetermined, for instance, when κ = 2 and CH holds.Concerning larger partitions, we have seen at the end of Section 7 that it is consistent that each of G ∞ ( 1 , I, ≤ ) and G ∞ ( 2 , I, ≤ ) is undetermined, and in fact, the former holds for any normal precipitous ideal I on 1 .This leads to the following question, which is left open by the discussion at the end of Section 7. Question 8.9.Can the game G ∞ (κ, I, ) be undetermined for some <κcomplete ideal I ⊇ bd κ on κ?
This leaves open whether the games U ( 2 , ≤ ) and G ∞ ( 2 , I, ≤ ) can be simultaneously undetermined for some <κ-complete ideal I ⊇ bd κ .This would follow from a positive answer to the next question, which is closely connected to Question 8.8.Question 8.10.Is it consistent that there is a precipitous ideal on 2 , however Choose does not win U ( 2 , )?
We would finally like to mention a few more natural question about Banach-Mazur games.We defined B(I, ) for > so that at limit stages, Nonempty goes first.Can we separate the existence of winning strategies for either player between this game and its variant where we let Empty go first at limit stages?Moreover, the game B + (I, ) depends only on the isomorphism type of P(κ)/I .Is this also the case for B(I, )?For instance, if I and J are ideals on κ with P(κ)/I ∼ = P(κ)/J , does Empty win B(I, ) if and only if Empty wins B(J, )?
Definition 4.2. A collection G of automorphisms of a tree T acts transitively on T if for every a and b on the same level of T, there is an automorphism in G that maps a to b.

Definition 4.3. A normal α-tree is a tree T of height α with the following properties:

Figure 1. Characterizations of the existence of winning strategies for Cut in various cut and choose games. | 23,493.4 | 2023-07-28T00:00:00.000 | [ "Mathematics" ] |
Macroeconomic Effects of EU Energy Efficiency Regulations on Household Dishwashers, Washing Machines and Washer Dryers
: Testing the relationship between economic performance and energy consumption is of utmost importance in nearly all countries. Taking the European Union as scope, this paper analyses the impacts of energy efficiency legislation on a selection of household appliances. In particular, it analyses the employment and value added impacts of the stricter energy efficiency requirements for dishwashers, washing machines, and washer dryers. To do so, this paper combines a bottom-up stock model with a macro-econometric dynamic general equilibrium model (FIDELIO) to quantify the direct and indirect value added and employment impacts in the European Union. The analysis shows that stricter energy efficiency requirements on household dishwashers, washing machines, and washer dryers have a net negative macroeconomic impact on value added (roughly 0.01 % of the total European Union value added) and a slightly net positive impact on employment. In fact, the regulations cause a shift in the composition of the household consumption basket that seems to favor labor-intensive industries.
Introduction
This paper (The views expressed are purely those of the authors and may not in any circumstances be regarded as stating an official position of the European Commission) focuses on the relationship between energy efficiency policies and macroeconomic performance. A better understanding of this relationship is of key importance due to the increasing volume of regulated energy related products, as well as the number of countries implementing energy efficiency standards. In particular, the aim of the analysis is to add new empirical evidence regarding the macroeconomic impact of specific energy efficiency regulations on three household appliances: dishwashers, washing machines, and washer dryers in the European Union (EU). These appliances contribute approximately 2.8% to the residential energy consumption of the EU [1].
Nowadays, energy efficiency policies are one of the key measures to reduce the impact of human activities on the environment and the climate. Looking at the EU current growth strategy, climate change and energy sustainability priorities establish, for instance, the need of a 32.5% energy efficiency improvement for 2030. The improvement of energy efficiency has key impacts on the efforts that the EU undertakes to reduce energy consumption and greenhouse gas (GHG) emissions [2].
Theoretically speaking, energy efficiency policies are expected to have impacts not only on the environmental footprint of human activities, but also on the economic system through different channels [3]. In a first instance, these policies create incentives for certain economic sectors. There is finally a third group of studies that combine the two approaches using hybrid models. These analyses integrate detailed bottom-up technical descriptions of specific industries affected by the policies with a broader economic perspective provided by the macroeconomic framework. Barker et al. (2007) [20] analyze the UK Climate Change Agreements and related energy efficiency policies for energy-intensive industrial sectors. They combine bottom-up estimates of the effects of these policies with the dynamic econometric model of the UK economy, the Multisectoral Dynamic Model-Energy-Environment-Economy (MDM-E3). They find a reduction in final energy use and a slight increase in economic growth through improved international competitiveness. Ringel et al. (2016) [21] use a hybrid approach, combining a bottom-up model with the Assessment of Transport Strategies (ASTRA) model, to analyze the environmental and socio-economic impacts of Germany's latest energy efficiency and climate strategies for the year 2020. They find that enhanced green energy policies bring about economic benefits in terms of GDP and employment, even in the short term. Additionally, the European Commission uses a hybrid approach in the impact assessment of the proposal for the revision of the energy efficiency directive [22]. The analysis uses bottom-up models (the Price-Induced Market Equilibrium System (PRIMES), Greenhouse Gas-Air Pollution Interactions and Synergies (GAINS), the Global Biosphere Management Model with the Global Forest Model (GLOBIOM-G4M), Prometheus, and the Common Agricultural Policy Regional Impact model (CAPRI)) together with two different general equilibrium macroeconomic models: the computational general equilibrium model GEM-E3 and the dynamic econometric global model E3ME. While the E3ME model presents positive impacts on GDP and employment in all analyzed scenarios, the results for the GEM-E3 model are mixed. Finally, Hartwig et al. (2017) [3] present a case study for Germany, where a scenario including ambitious energy efficiency measures for buildings, household appliances, industry, and the service sector is compared to a reference scenario with respect to the macroeconomic impacts. The energy demand models Forecast and Invert/EE-Lab are connected with the macroeconomic model ASTRA-D to analyze the effects of the policy scenarios. The authors conclude that ambitious energy efficiency policies in Germany have considerable positive impacts on GDP and on employment, particularly in the sectors that produce energy efficiency technologies, in construction and manufacturing, and in real estate and consulting.
In this paper, we follow the third approach to use the technical information that is offered by bottom-up models, while going beyond the direct impacts on employment and value added of the energy efficiency regulations to analyze their impacts across industries and countries. In particular, we use a hybrid framework combining the detailed bottom-up energy demand stock based model that Boyano et al. developed [15,16], together with the macro-econometric dynamic general equilibrium model FIDELIO [23]. The resulting hybrid model is used to provide new empirical evidence on a specific energy efficiency policy, which is the revision of the energy efficiency regulatory framework for dishwashers, washing machines, and washer dryers. In particular, the overarching regulatory framework for energy efficiency products is the combination of two policies: The Energy Labelling Regulation (EU) 2017/1369 [24], which defines the process of determination of energy label to be displayed in new appliances, and the Eco-design Directive 2009/125/EC [25], which specifies the process of defining minimum energy performance levels. Acting in combination as a pull-push effect, these regulatory measures have improved the average energy efficiency of the household appliance stock over the years in the EU [26]. Regarding the analyzed appliances, their specific regulations are: (1) for washing machines, Regulation EU No 1061/2010 on energy label and Regulation EU No 1015/2010 on Eco-design requirements, (2) for washer dryers Regulation EU No 96/60/EC on energy label, and (3) for dishwashers, Regulation EU No 1060/2010 on energy label and Regulation EU No 1016/2010 on Eco-design requirements. These regulations have been revised in 2014-2018, and adopted in 2019. The revision introduces stricter energy efficiency requirements of the products that enter the EU market from March 2021, and a rescaling of the energy efficiency classes. Additionally, some changes in the testing programs are introduced to bring the energy efficiency developments of the products closer to the end-user behavior. This paper quantifies the EU-wide economic (in terms of value added) and employment impacts between 2020 and 2030 of such proposed changes to the EU energy efficiency regulations on the aforementioned household's appliances. The analysis provides results by countries and for most of the economic sectors in the EU.
The paper is organized as follows. Section 2 provides an overview of the methods and materials that were used in the study. Section 3 presents the impacts from changes in the EU regulations estimated using FIDELIO. Section 4 discusses a range of implications of the empirical results, while Section 5 draws some concluding remarks.
Methods and Materials
The modelling tool proposed in this analysis has two main components: a bottom-up approach used together with the top-down macro-economic Fully Interregional Dynamic Econometric Long-term Input-Output model (FIDELIO). In this Section, we first describe the two models (in Subsections 2.1 and 2.2, respectively). Next, Subsection 2.3 describes the further steps necessary to link the two approaches and offers a graphical representation of the methodological proposal.
Bottom-Up Approach
We use a bottom-up energy demand model covering all EU countries. In this model, the EU electricity consumption and sale prices of dishwashers, washing machines, and washer dryers are based on the related energy efficiency technological improvements, which are estimated while using an engineering approach.
The technological improvements triggered by the implementation of the new regulations have an effect on the sales price of the appliances as well as on the electricity consumption of the overall stock at EU level. The sales price of each machine is estimated based on the manufacturing costs, the manufacturers' and retailers' mark-ups, the value added tax, and, wherever appropriate, the additional costs of the improvement options that are added to the basic models to achieve a better energy efficiency and, therefore, a better energy efficiency label classification. The manufacturing costs are provided by the manufacturers and are assumed to decrease over time according to the experience curve [27], i.e., the experience gained by the manufacturer in producing the machines. This correction is applied to the sale prices beyond 2015.
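As an illustration of this price build-up, the following minimal sketch (not the study's actual implementation) combines manufacturing costs, mark-ups, VAT, the cost of efficiency-improvement options, and an experience-curve correction; the mark-up, VAT, and learning-rate values are assumptions chosen purely for the example.

```python
import math

def sale_price(manufacturing_cost, option_cost, manufacturer_markup=1.25,
               retailer_markup=1.30, vat=0.21):
    """Sales price of one appliance: costs, mark-ups along the supply chain, then VAT."""
    pre_tax = (manufacturing_cost + option_cost) * manufacturer_markup * retailer_markup
    return pre_tax * (1.0 + vat)

def experience_adjusted_cost(base_cost, cumulative_units, base_units, learning_rate=0.10):
    """Experience curve: each doubling of cumulative production lowers cost by `learning_rate`."""
    b = math.log(1.0 - learning_rate, 2)   # progress exponent (negative)
    return base_cost * (cumulative_units / base_units) ** b

# Example: an assumed 2015 manufacturing cost declines as cumulative output doubles,
# and the adjusted cost feeds into the sales price together with an efficiency option.
cost_2015 = 180.0                                        # EUR, assumed
cost_later = experience_adjusted_cost(cost_2015, cumulative_units=2.0e8, base_units=1.0e8)
print(round(sale_price(cost_later, option_cost=15.0), 2))
```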
The annual electricity consumption of the overall stock at the EU level is estimated based on the average unitary electricity consumption of one appliance. Data regarding the energy consumption of the appliances are based on the performance data provided by the manufacturers, data from consumer surveys on how the appliances are used, and the evolution of technology. This unitary electricity consumption also depends on the sales distribution over the energy efficiency classes of the year when the appliance is purchased. The annual market share of each energy efficiency class is based on the historical data series and the influence that labelling has on the investment decisions of consumers, directing preference towards more energy efficiency appliances [28].
The way this bottom-up model calculates the energy consumption is similar to that used by Yilmaz et al. [6], as both models estimate the number of replaced machines through a Weibull distribution. However, Yilmaz et al. [6] disregard the effect of user behavior when the appliances are used and assume the testing-program energy consumption as the average value. For a complete description of the calculations and the assumptions considered in this bottom-up stock model, see [15,16].
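The stock logic described above can be sketched as follows: a Weibull survival curve governs how past sales cohorts retire, and each cohort carries the average unitary consumption of the efficiency classes sold in its purchase year. The Weibull parameters, sales figures, and unitary consumptions below are hypothetical placeholders rather than the study's data.

```python
# Illustrative stock model: Weibull survival of past sales cohorts, each cohort carrying
# the sales-weighted average unitary consumption of the efficiency classes sold that year.
# All parameters and data below are hypothetical, not the values of the underlying study.
import math

def weibull_survival(age, shape=2.2, scale=14.0):
    """Fraction of a sales cohort still in use `age` years after purchase."""
    return math.exp(-((age / scale) ** shape))

# sales[year] = units sold; unit_kwh[year] = average kWh/year of appliances sold that year
sales    = {2015: 1.0e7, 2016: 1.0e7, 2017: 1.1e7, 2018: 1.1e7, 2019: 1.2e7, 2020: 1.2e7}
unit_kwh = {2015: 250.0, 2016: 245.0, 2017: 240.0, 2018: 235.0, 2019: 230.0, 2020: 220.0}

def stock_consumption(target_year):
    """Annual electricity use (kWh) of all surviving appliances in `target_year`."""
    total = 0.0
    for year, units in sales.items():
        surviving = units * weibull_survival(target_year - year)
        total += surviving * unit_kwh[year]
    return total

print(stock_consumption(2020) / 1e9, "TWh")   # 1 TWh = 1e9 kWh
```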
A variety of studies have addressed the rebound effects for appliances, including an increase in operation hours, appliance size, or ownership rate [11,29,30]. The direct rebound effect related to dishwashers, washing machines, and washer dryers has been estimated to be negligible. The most recent user surveys [10,13] report that consumers continue to use these appliances in the same way, regardless of their energy efficiency class, and that this behavior has been stable in recent years. For example, the annual number of cycles of use of the appliances is stable and mainly depends on the household size. The laundry load has remained constant over recent years at approx. 3.4 kg of laundry per cycle, and the number of washing machines per household has remained constant, which indicates a saturated market.
Top-Down Model: FIDELIO
FIDELIO is based on a neo-Keynesian, demand-driven, non-optimization macroeconomic framework in line with the E3ME (Cambridge Econometrics) model. This family of models is frequently compared to another set of macroeconomic models often used for policy and environmental analysis, namely computational general equilibrium (CGE) models. One of the main broad differences between the two types of models is that CGE models are based on neoclassical assumptions in line with the economic theory of optimization. Prices adjust to market clearing, aggregate demand adjusts to meet potential supply, and output is determined by available capacity. Instead, macro-econometric models assume that agents lack perfect knowledge and do not optimize their decisions. They provide a more empirically grounded approach, and the alternative assumption governing agents' choices is represented by econometric estimations. The parameters are estimated from time-series databases; therefore, they are validated against historical relationships: agents behave as they did in the past. Unlike in CGE models, market imperfections exist and the economy is not assumed to be in equilibrium. There is no guarantee that all available resources are used. The level of output is a function of the level of demand and may be less than potential supply.
Besides offering a relatively strong empirical grounding, the use of FIDELIO offers two additional advantages for the analysis carried out. First, the model offers a fairly high level of geographical and sectoral disaggregation. FIDELIO covers 35 regions (the 28 EU Member States plus Brazil, China, India, Japan, Russia, Turkey, and the United States), each of them disaggregated into 56 industries and products (see Appendix A, Table A1, for the list of industries available in FIDELIO). Second, the model offers a useful instrument to analyze policies that influence household consumption. In fact, while the supply side is described in a relatively simple way (an input-output core enlarged with nested constant-elasticity-of-substitution production functions), the household block is modelled in relatively high detail. In FIDELIO, households receive three sources of income: wages, a share of the firms' gross operating surplus, and some government transfers. This income, after taxes, is either used for consumption or saved. In particular, households consume different categories of products: durable products (housing rents and vehicles) and non-durable products, such as appliances, electricity, heating, fuel for private transport, public transport, food, clothing, furniture and equipment, health, communication, recreation and accommodation, financial services, and other products. For almost all consumption categories, including appliances and electricity consumption, demand is characterized through econometric estimations, with different consumption categories modelled with different functional forms. For a complete description of the characteristics, assumptions, and equations of the FIDELIO model, see [23]. Appendix B offers a short description of the data sources needed to build the FIDELIO database.
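For readers unfamiliar with nested constant-elasticity-of-substitution structures of the kind mentioned above, the sketch below shows a generic two-input CES aggregator and a simple two-level nest; the share and elasticity parameters are illustrative assumptions and are not FIDELIO's estimated values.

```python
# Generic two-input CES aggregator, the building block of nested CES production
# functions such as the one described for FIDELIO's supply side. All parameter
# values below are illustrative, not the model's estimates.

def ces(x1, x2, share=0.6, sigma=0.7, scale=1.0):
    """CES output for inputs x1, x2 with substitution elasticity `sigma`."""
    if sigma == 1.0:                                   # Cobb-Douglas limit
        return scale * (x1 ** share) * (x2 ** (1.0 - share))
    rho = (sigma - 1.0) / sigma
    return scale * (share * x1 ** rho + (1.0 - share) * x2 ** rho) ** (1.0 / rho)

# A two-level nest: energy and materials are combined first, and the resulting bundle
# is then combined with value added, as in typical nested production structures.
energy_materials = ces(x1=50.0, x2=120.0, share=0.3, sigma=0.5)
gross_output     = ces(x1=energy_materials, x2=200.0, share=0.4, sigma=0.8)
print(round(gross_output, 1))
```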
Bridging Bottom-Up and Top-Down Approaches
Introducing the shock values into the FIDELIO model requires additional information. In fact, both shocks-to the sale prices of appliances and to the electricity requirements-are separately estimated for each specific appliance: dishwashers, washing machines, and washer dryers. However, the FIDELIO model only operates with one single household consumption category, which includes these and all other appliances together. Therefore, additional information is required to weigh the estimated shocks and calculate the corresponding equivalent shocks that are to be introduced in the FIDELIO model.
As regards the shock of the sale prices of appliances, we use the penetration rates that were estimated in [31] for dishwashers, washing machines, and washer dryers to weigh each (exogenously estimated) sales price shock and compute the single weighted equivalent sales price shock that includes all three household appliances. Next, we use information from the 2010 Household Budget Survey (HBS) micro-data produced by Eurostat in the COICOP (Classification of Individual Consumption According to Purpose) classification at the five-digit level. In particular, the survey provides information on the household total consumption of appliances and the household consumption of "clothes washing machines, clothes drying machines, and dish washing machines". Using this information, for each EU country, we compute the share of "clothes washing machines, clothes drying machines, and dish washing machines" over the broader category "household appliances". Eventually, we compute the sales price variations that are to be used in the FIDELIO model by using the weighted equivalent price shock for washing machines, washer dryers, and dishwashers, and the weights based on the HBS data. These price variations are introduced as shock parameters into the endogenously determined prices of appliances of FIDELIO. We implicitly assume that the shock in the sales price affects both domestic and imported products since the price of appliances in FIDELIO is computed as an average of the price of the domestic products and the price of the imported products. This is how the policy is actually expected to operate.
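A minimal sketch of this two-step weighting is given below; the appliance-level price shocks, penetration rates, and HBS budget share are placeholder numbers, since the actual figures come from [31] and the Household Budget Survey micro-data.

```python
# Sketch of the two-step weighting used to turn appliance-specific price shocks into
# the single shock applied to FIDELIO's "household appliances" category.
# All numbers are placeholders, not the study's data.

price_shock = {"dishwasher": 0.04, "washing_machine": 0.03, "washer_dryer": 0.05}  # relative
penetration = {"dishwasher": 0.45, "washing_machine": 0.90, "washer_dryer": 0.10}  # ownership

# Step 1: penetration-weighted equivalent shock across the three regulated appliances
w_total = sum(penetration.values())
equivalent_shock = sum(price_shock[a] * penetration[a] for a in price_shock) / w_total

# Step 2: scale by the HBS share of these appliances within total appliance spending
hbs_share = 0.35   # e.g. share of washing, drying, and dishwashing machines in one country
appliance_category_shock = equivalent_shock * hbs_share
print(round(appliance_category_shock, 4))
```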
We use a similar approach to combine the electricity consumption shock related to each appliance into an aggregate shock to the value of the household total electricity consumption. First, we use the weights based on the penetration rates previously described to compute a weighted variation in the value of electricity consumed by dishwasher, washing machine, and washer-dryer appliances. Next, to determine the share of household electricity consumption devoted to dishwashers, washing machines, and washer dryers within total household electricity consumption, we use data from the European Environment Agency [32] and from the ODYSSEE database [33]. These databases distinguish among different uses of household electricity (large electrical appliances, other appliances, lighting, space heating, water heating, cooking, and air cooling). These shares are used to compute the final variations in household total electricity consumption used as a shock in FIDELIO. In FIDELIO, household electricity consumption depends on the stock of appliances, the electricity price, an exogenous index capturing the efficiency of appliances, the previous year's electricity consumption and stock of appliances, and the demand for energy needed for heating. To simulate the electricity consumption variation computed through the bottom-up model, we impose a shock on the efficiency parameter that is equivalent, ceteris paribus, to the exogenously computed shock in the electricity consumed. Appendix C (Tables C1-C5) presents the cost variations and the household total electricity consumption variations introduced in FIDELIO, and the weights used to compute them.
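Analogously, the aggregation of the electricity shock can be sketched as follows; the savings, penetration rates, and appliance share of household electricity are again placeholders standing in for the bottom-up estimates and the EEA/ODYSSEE shares.

```python
# Sketch of how appliance-level electricity savings are converted into the aggregate
# shock on household electricity consumption (mimicked in FIDELIO by an equivalent,
# ceteris paribus, shift of the exogenous appliance-efficiency index).
# Shares and savings are illustrative, not the EEA/ODYSSEE figures used in the study.

elec_saving = {"dishwasher": -0.12, "washing_machine": -0.10, "washer_dryer": -0.15}
penetration = {"dishwasher": 0.45, "washing_machine": 0.90, "washer_dryer": 0.10}

w_total = sum(penetration.values())
weighted_saving = sum(elec_saving[a] * penetration[a] for a in elec_saving) / w_total

# Share of total household electricity used by these three appliances (assumed)
share_of_household_electricity = 0.08

household_electricity_shock = weighted_saving * share_of_household_electricity
print(round(household_electricity_shock, 4))
```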
Given how consumers' choices are described in FIDELIO, the model takes indirect rebound effects into account. By reducing their energy consumption, households might use their additional savings to buy other goods and services that require additional use of electricity, partially offsetting the initial electricity reduction. Figure 1 provides a graphical description of the two models used for the analysis and of the input (in yellow) and outputs (in green) flows between the two models. As the Figure shows, the revised ecodesign requirements and the new energy efficiency classes are inputs for the bottom-up model that computes the shock in the appliances sale prices and household electricity consumption. These outputs of the bottom-up model are inputs for FIDELIO that simulates the policy revision impacts on employment and value added.
Results
We shock the model in 2020 and run it up to 2030 for a baseline scenario (no new regulatory measures) and for a scenario with the proposed stricter EU energy efficiency regulations. The results presented hereafter correspond to variations of value added and employment with respect to the baseline scenario in 2030; the results are very similar for the other years. While Subsection 3.1 focuses on the macroeconomic impact of the policies, Subsection 3.2 provides an overview of the environmental impact in terms of CO2-equivalent emissions. Table 1 shows the variations in value added and in employment in the EU economy as a whole. While value added decreases by 1.9 billion euros, employment increases by around 24,000 jobs. In relative terms, at around 0.01%, neither of these results represents a significant share of total EU value added or total EU employment. Subsections 3.1.1 and 3.1.2 provide more detailed results at the industry and country level, respectively, in order to give more insight into the EU (negative) value added effects and the EU (positive) employment effects.
Industry Level Analysis
Input-output based models typically provide results that are broken down by economic activities or industries (also sometimes denoted as sectors). Even if the analyzed policies aim at influencing the energy efficiency of some specific household appliances, produced by some specific industries, FIDELIO simulates the indirect impacts that these policies would also have on other industries. These impacts can be caused, for example, by an indirect effect of the regulations, such as the reduction of electricity consumption, by changes in the quantity of intermediate inputs necessary to produce the appliances, or by changes in the bundle of goods and services consumed by households. Table 2 shows the absolute and relative variations in EU value added, broken down by industry. The left-hand side of Table 2 shows industries with value added increases, while the right-hand side shows industries that are worse off. On both sides, the industries are ranked by their share of the total variation in value added, in decreasing order. Even though the total value added decreases, it does not shrink in all industries. The value added of the household appliances producer industry (electrical equipment) increases by 90 million euros. However, the positive impact in other industries is even greater. In fact, 60% of the value added growth takes place in accommodation and food services (360 million euros), retail trade (343 million euros), food and beverages (253 million euros), and other services (290 million euros), the latter including activities such as repairing services, art, entertainment, and recreation services, among others.
A possible reason why these sectors increase their production and, consequently, their value added, can be found in the way consumers' choices are modelled in FIDELIO: households increase their demand for these products mainly thanks to the savings made on electricity consumption. In fact, 50% of the value added reduction takes place in the electricity production industry (corresponding to around two billion euros). However, this decrease represents only 0.69% of the value added of the electricity industry.
Regarding employment, the results show a positive effect on the EU economy, in contrast to the value added decrease. The reason lies in the fact that the industries that show an increase in production, value added, and employment are more labor intensive than the industries that are worse off. Table 3 shows the absolute and relative variations in employment at the EU level, broken down by industry (with the same structure as Table 2). The positive impact on employment (ca. 72,000 jobs) in some sectors more than compensates for the negative impact (ca. 48,000 jobs) in others. The positive impacts mainly come from the agricultural sector, accommodation and food services, retail trade, and other services. On the negative side, the electricity industry suffers most in terms of employment (25% of the total impact), although much less than in terms of value added (50% of the total impact). Other industries that show a decrease in employment are, for example, construction, mining and quarrying, and forestry and logging.
Country-Level Analysis
In addition to the analysis at the industry level, in this Subsection we analyze how the impact is distributed among the different EU countries. Figure 2 shows the absolute variation in value added and employment by country. All EU countries would see their total value added reduced, except Hungary and Italy. However, these reductions are very small in relative terms with respect to total national value added, with most of them being close to zero; the largest relative reduction is -0.05%, for Lithuania, Latvia, and Slovakia.
In absolute terms, the countries absorbing most of the value added decrease are Germany, Spain, France, the United Kingdom, and Poland. For these countries, between 31% and 54% of the value added decrease occurs in the electricity industry. However, this corresponds to only about 1% of the value added of the electricity industry in France, Poland, Spain, and the United Kingdom. Other industries contributing to the overall value added decrease are the mining sector and the construction sector.
The positive impact in Italy and Hungary is driven by industries such as accommodation, repairing, retail trade, agriculture, and the manufacture of food products.
The employment effects are positive in most of the countries. The three countries showing the biggest employment increase are Germany (ca. 6100 jobs), Italy (ca. 8600 jobs), and Romania (ca. 8300 jobs). In Germany and Italy, the industries that mostly drive the employment increase are accommodation, retail trade, agriculture, and manufacture of food products. For Romania, 70% of the employment increase is in the agricultural sector, followed by the manufacture of food products and the manufacture of textiles.
For some countries, such as Estonia, France, Lithuania, Latvia, the Netherlands, Slovenia, and Slovakia, employment decreases slightly, by fewer than 1,000 jobs in all cases. The exception is the United Kingdom, where the employment decrease is around 1,300 jobs.
Environmental Impact
In FIDELIO, the monetary value of the energy consumed by firms and households is linked to energy consumption and is then used to compute the emissions of carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O) related to energy production and consumption (see [23] for a description of the conversion factors from primary energy consumption to emissions). To describe the environmental impact of the proposed stricter energy efficiency requirements, we look at the variation of GHG emissions. In particular, to obtain a synthetic measure of the GHG effect, the emissions are converted into CO2-equivalent units using the Global Warming Potential (GWP), as in [34]. The conversion factors are 1 for CO2, 265 for N2O, and 28 for CH4.
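As a small illustration of the conversion step, the sketch below aggregates hypothetical emission variations into CO2-equivalent tonnes with the GWP factors quoted above; the input numbers are placeholders, not results from the model:

```python
# Minimal sketch of the CO2-equivalent aggregation (GWP factors from the text).

GWP = {"CO2": 1.0, "CH4": 28.0, "N2O": 265.0}

def co2_equivalent(emissions_tonnes):
    """Convert a dict of gas emissions (in tonnes) into CO2-equivalent tonnes."""
    return sum(GWP[gas] * tonnes for gas, tonnes in emissions_tonnes.items())

# Hypothetical emission variations for one industry (tonnes)
delta = {"CO2": -1_000_000.0, "CH4": -5_000.0, "N2O": -200.0}
print(co2_equivalent(delta))   # -1,193,000 tonnes CO2-equivalent
```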
The emission increase is, in absolute terms, smaller than the emission reduction driven by the stricter energy efficiency requirements (as expected), and the net effect is a decrease in GHG emissions of 1.5 million tonnes. This reduction is driven by the electricity production industry, which is responsible for 70% of the total reduction, followed by mining and quarrying (15%), sewerage and waste collection and treatment (10%), and the manufacture of coke and refined petroleum products (3%). Figure 3 shows the industries responsible for the resulting increase in emissions and those responsible for emission reductions. The three industries with the biggest shares of emission increases are agriculture, the manufacture of food and beverages, and accommodation services. Together, they would cause GHG emissions to increase by around half a million tonnes, of which 95% would correspond to the agriculture industry. This effect can be considered a rebound effect, i.e., a reduction in the expected environmental gains of the regulation caused by behavioral responses. Figure 4 shows the distribution of the emission decrease among EU countries. Germany, Poland, and the United Kingdom are responsible for half of the total reduction of GHG emissions caused by the proposed stricter energy efficiency requirements on dishwashers, washing machines, and washer dryers.
Discussion
One of the main findings of our analysis is that energy efficiency regulations on household dishwashers, washing machines, and washer dryers have a net negative macroeconomic impact on value added (roughly 0.01% of total EU value added) and a slightly positive net impact on employment (ca. 24,000 jobs). In both cases, energy efficiency regulations have positive and negative economic effects, depending on the industry analyzed. Most of the negative impact comes from the reduction in energy consumption due to the implementation of more energy-efficient technologies in household appliances. Positive impacts derive from the new investments in more efficient technologies and from the shift in the composition of the household consumption basket, away from energy and toward goods and services produced by more labor-intensive industries.
While most of the studies analyzed in the literature review find positive impacts of energy efficiency policies on both GDP and employment, these analyses usually consider the impact of a bundle of policies together. By contrast, our results are consistent with the findings of Barker et al. (2016) [19]. The authors test whether energy efficiency measures contribute to closing the 2020 emissions gap without a loss in GDP and employment, and they present results disaggregated by policy measure. One of the policy measures they analyze is the use of more efficient appliances and lighting in residential and commercial buildings. For this specific measure, in line with our results, they find negative economic impacts for the EU, particularly in Germany, and positive economic impacts in Italy. These authors also report net positive employment effects. Therefore, it seems useful to analyze individual energy efficiency policies specifically, given that different measures may have different impacts on the economy.
In any case, it would be important to validate the results obtained using alternative theoretical frameworks, in order to check their sensitivity to the assumptions underlying our approach. For instance, following the findings of [11,29,30], in the bottom-up approach we assume no relevant direct rebound effects induced by the revised regulations. A further extension of the analysis might examine how the results change when this assumption is relaxed. It is also important to note that one of the strengths of the FIDELIO model is its capacity to re-allocate household consumption across different goods and services in reaction to price changes, based on a relatively simple description of the electricity market and the electricity production function. Thus, the model is able to consider neither the strategic choices that the electricity industry might make to accommodate the new energy efficiency policies nor the possible incentives towards innovative business models. Therefore, it would be interesting to deepen the analysis with complementary approaches, for instance through microeconomic analyses of the electricity sector or through energy models.
The revised energy efficiency regulations on household appliances studied in this paper are part of the EU initiatives to reach the EU targets on energy efficiency and GHG emission reduction. Even if the main aim of the regulation is environmental, the political discussion needs to be informed about the economic and social impacts as well.
In particular, energy efficiency policies constrain producers' and consumers' intertemporal choices. Whenever technological improvements are already available but not used, stricter regulations create incentives for firms to anticipate investments that bring future revenues. For consumers, the revision of the regulations implies an increase in the purchase cost of the appliances, but a decrease in future spending on electricity. Further research could examine the alternative investments not realized by producers and consumers as a consequence of the forced investments in energy efficiency.
The approach used here to quantify the indirect impacts of product energy efficiency policies has the potential to expand and nuance the policy-making discussion. Assuming that the proposed changes are required to reach the EU environmental targets, and are therefore appropriate per se, the analysis shows that they are expected to have a negative but small economic impact (a 0.01% decrease in EU GDP) and a small positive social impact on employment. Indeed, the economic brake that a reduction in the use of energy could cause is compensated by other trade-offs in the economic system, such as a change in the household consumption basket and the economic growth driven by the new investments induced by the revised regulations. The presented hybrid approach provides quantitative information that complements the policy discussion on energy saving policy, enriching it with knowledge of the indirect impacts and trade-offs between economic sectors and countries/regions in the EU. Being able to highlight which sectors and countries benefit most, and which bear most of the weight of the policy, adds relevant information. This information can be used during the policy process to adjust the reform, for example by introducing a compensation mechanism for countries or sectors that are more disadvantaged.
Further research might look more in depth at distributional issues across EU countries and regions, as well as at non-EU trade issues. Another important aspect to be investigated is the impact of these regulations not only on the energy efficiency requirements, but also on the lifetime of the appliances. In fact, some papers demonstrate that an extension of the lifetime of durable goods has a positive effect on overall lifecycle energy consumption and GHG emissions, as long as the production and end-of-life stages require less energy than the use phase [35][36][37][38]. Moreover, against a background of progressive greening and de-carbonization of the economy, the incentives behind energy efficiency policy are changing. In a new metric of social welfare, carbon footprint and carbon intensity might deserve further analysis to articulate the narrative for and/or against stricter product efficiency policies.
Conclusions
Many studies support a positive correlation between income growth and energy consumption [39]. This implies that policies aimed at improving energy efficiency and decreasing energy consumption can have a negative impact on economic growth. With respect to this topic, the study that is reported in this article provides additional insights and granularity on the energy-growth relationship, at EU level, and it suggests two main outcomes.
First, the analysis shows that stricter energy efficiency requirements for three household appliances (washing machines, dishwashers, and washer dryers) have a negative macroeconomic impact on value added, a decrease of around two billion euros. The order of magnitude of the changes introduced by the regulations is small compared to the whole EU economy, since these appliances account for around 20% of total household appliance energy use. In terms of value added, at 0.01% of total EU value added, the impact is very small. The reasons behind this result are manifold. Firstly, the stricter requirements are expected to cause an increase in the cost of manufacturing and, consequently, in the sales price of appliances, which rises by ca. 10%. Secondly, the result is also due to a change in the composition of household spending: households' savings resulting from a reduction in energy demand can be used to consume other goods and services. Although the energy market is a key industry for economic development, the negative impact that the proposed changes in the EU regulations are expected to have on this industry is partially compensated by the increase in other industries' production.
The second main outcome is that while the impact on value added is negative, the impact on employment is small but positive. Again, the shift in the composition of the household consumption basket seems to mainly favor industries that use relatively more labor than the industries that are negatively affected by the analyzed policies.
In terms of industry distribution, the sector that suffers the greatest negative impact on value added is the electricity industry, and this effect is quite homogeneous among European countries. This result is straightforward: policies that aim at controlling emissions by improving energy efficiency have a negative impact on electricity production, which is one of the main sectors responsible for GHG emissions. According to the European Environment Agency, in 2017 electricity production generated the largest share of GHG emissions (23.3% of total GHG emissions) [40].
Finally, our results suggest that the proposed changes in the EU regulations would reduce emissions by around 1.5 million tonnes of CO2-equivalent. As expected, this reduction is driven by the electricity production industry, but 30% of the total GHG emission reduction comes from other industries, such as mining and quarrying, sewerage and waste collection and treatment, or the manufacture of coke and refined petroleum products.
Acknowledgments:
We would like to thank Frédéric Reynes and Jinxue Hu for their fruitful comments and support.
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A Table A1. List of NACE Rev. 2 industries in FIDELIO.
Sector: Description
A01: Crop and animal production, hunting and related service activities
A02: Forestry and logging
A03: Fishing and aquaculture
B: Mining and quarrying
C10T12: Manufacture of food products, beverages and tobacco products
C13T15: Manufacture of textiles, wearing apparel and leather products
C16: Manufacture of wood and of products of wood and cork, except furniture; manufacture of articles of straw and plaiting materials
C17: Manufacture of paper and paper products
C18: Printing and reproduction of recorded media
C19: Manufacture of coke and refined petroleum products
C20: Manufacture of chemicals and chemical products
C21: Manufacture of basic pharmaceutical products and pharmaceutical preparations
C22: Manufacture of rubber and plastic products
C23: Manufacture of other non-metallic mineral products
C24: Manufacture of basic metals
C25: Manufacture of fabricated metal products, except machinery and equipment
C26: Manufacture of computer, electronic and optical products
C27: Manufacture of electrical equipment
C28: Manufacture of machinery and equipment n.e.c.
C29: Manufacture of motor vehicles, trailers and semi-trailers
C30: Manufacture of other transport equipment
C31_32: Manufacture of furniture; other manufacturing
C33: Repair and installation of machinery and equipment
D35: Electricity, gas, steam and air conditioning supply
E36: Water collection, treatment and supply
E37T39: Sewerage; waste collection, treatment and disposal activities; materials recovery; remediation activities and other waste management services
F: Construction
G45: Wholesale and retail trade and repair of motor vehicles and motorcycles
G46: Wholesale trade, except of motor vehicles and motorcycles
G47: Retail trade, except of motor vehicles and motorcycles
H49: Land transport and transport via pipelines
H50: Water transport
H51: Air transport
H52: Warehousing and support activities for transportation
H53: Postal and courier activities
I: Accommodation; food and beverage service activities
J58: Publishing activities
J59_60: Motion picture, video and television program production, sound recording and music publishing activities; programming and broadcasting activities
J61: Telecommunications
J62_63: Computer programming, consultancy and related activities; information service activities
K64: Financial service activities, except insurance and pension funding
K65: Insurance, reinsurance and pension funding, except compulsory social security
K66: Activities auxiliary to financial services and insurance activities
L68: Real estate activities
M69_70: Legal and accounting activities; activities of head offices; management consultancy activities
M71: Architectural and engineering activities; technical testing and analysis
M72: Scientific research and development
M73: Advertising and market research
M74_75: Other professional, scientific and technical activities; veterinary activities
N: Administrative and support service activities
O84: Public administration and defense; compulsory social security
P85: Education
Q: Human health and social work activities
R-S: Arts, entertainment and recreation. Other service activities
T: Activities of households as employers; undifferentiated goods and services producing activities of households for own use
U: Activities of extraterritorial organizations and bodies
Appendix B
To build the FIDELIO database, many different data sources are used. The core of the database is the IO data, which is the main source of information feeding the production block and the trade block. The IO core is built mainly using the World Input Output Database [41] (WIOD, 2016 release).
In particular, the model uses the WIOD international and national supply and use tables. Whenever the WIOD database does not provide all the necessary information, other databases, such as Eurostat supply and use tables or OECD data, are used.
Besides the IO core, other databases are necessary in order to compile data for the other blocks of the model. For the household block, the main data sources come from Eurostat. The main datasets used are "non-financial transactions of households and non-profit institutions serving households" (nasa_10_nf_tr), "final consumption expenditure of households by consumption purpose - COICOP 3 digit" (nama_10_co3_p3), "heating degree-days by NUTS 2 regions - annual data under Energy statistics" (nrg_esdgr_a) and "financial balance sheets of households and non-profit institutions serving households" (nasa_10_f_bs). Other datasets come from the OECD - "simplified non-financial accounts" (no. 13), "final consumption expenditure of households" (no. 5) and "financial balance sheets - consolidated" (no. 710) - and from the National Statistical Institutes of Belgium, China, Czech Republic, Hungary, India, Slovakia, Turkey and the United Kingdom. Some data on household energy consumption are taken from the EU Reference Scenario 2016 on energy, transport and GHG emissions containing trends to 2050 [42]; population demographics come from United Nations projections [43]. As regards the government block, the main sources are Eurostat datasets - "non-financial transactions" (nasa_10_nf_tr), "government revenue, expenditure and main aggregates" (gov_10a_main) and "government deficit/surplus, debt and associated data" (gov_10dd_edpt1) - WIOD data, the OECD dataset on general government debt - "general government debt - Maastricht" (no. 750) - and data from the World Bank. The labor market is described using the WIOD social accounts and other data from Eurostat, the World Bank, and CEDEFOP. Finally, the data for the energy block come from the WIOD (2019) energy accounts (https://europa.eu/!Un47Cp), the POLES model, the Eurostat table on "air emissions accounts by NACE Rev. 2 activity" (env_ac_ainah_r2), the ODYSSEE database [33], the EXIOBASE database [44] and the KLEMS database [45]. A detailed description of all the data sources and methods used to build the FIDELIO database can be found in [23].
Appendix C
This Appendix shows the information used to weight the exogenous shocks in the price of each appliance and the exogenous shocks in the total electricity consumption, as described in Subsection 2.2.
"Economics"
] |
Topology change, emergent symmetries and compact star matter
Topology effects have been extensively studied and confirmed in strongly correlated condensed matter physics. In the large color number limit of QCD, baryons can be regarded as topological objects -- skyrmions -- and baryonic matter can be regarded as skyrmion matter. We review in this paper the generalized effective field theory for dense compact-star matter constructed with the robust inputs obtained from the skyrmion approach to dense nuclear matter, relying on possible ``emergent'' scale and local flavor symmetries at high density. All nuclear matter properties from the saturation density $n_0$ up to several times $n_0$ can be fairly well described. A uniquely novel -- and unorthodox -- feature of this theory is the precocious appearance of the pseudo-conformal sound velocity $v^2_{s}/c^2 \approx 1/3$, with the non-vanishing trace of the energy-momentum tensor of the system. The topology change encoded in the density scaling of the low energy constants is interpreted as the quark-hadron continuity, in the sense of the Cheshire Cat Principle (CCP), at density $\gtrsim 2n_0$ in accessing massive compact stars. We confront the approach with the data from GW170817 and GW190425.
I. INTRODUCTION
The structure of dense nuclear matter relevant to compact stars has been investigated for several decades but still remains largely uncharted. Unlike at high temperature, the physics at high density can so far be accessed by neither terrestrial experiments nor lattice simulation. Recently, the observation of massive neutron stars with mass ≳ 2.0 M_⊙ and the detection of gravitational waves from neutron star mergers provide indirect information on nuclear matter at low temperature and high density, say, up to ∼ 10 times the normal nuclear matter density n_0 ≈ 0.16 fm^{-3} [1][2][3][4][5][6]. These new developments offer powerful means to explore the nuclear matter in the interior of compact stars, for example, the patterns of the symmetries involved therein and what is in the core of the stars, say, baryons and/or quarks or a combination thereof. For recent discussions on these aspects, we suggest, e.g., Refs. [7][8][9][10][11][12] and relevant references therein.
The study of nuclear matter in the literature has largely relied on either phenomenological approaches anchored on density functionals or effective field theoretical models implemented with assumed QCD symmetries and degrees of freedom appropriate for the cutoff to which the theory is applicable. For finite nuclei as well as infinite nuclear matter up to ∼ n_0, the physics can be described very well by the nuclear effective theory with or without the pion, in addition to the nucleon [7,13] (denoted as sχEFT). However, in the dense system relevant to compact stars at ∼ 10 n_0, the applicability of sχEFT is questionable, and we resort to symmetries that are not explicit in the matter-free space but that, it seems reasonable to think, get (partially) restored in the dense system. At least there is nothing glaringly at odds with the presently available observations. Prior to QCD, Skyrme suggested that baryons can be described by topological soliton solutions of a mesonic theory, skyrmions [20]. After the arrival of QCD, it was argued that when the number of colors N_c is infinitely large, baryons in the constituent quark model share the same N_c scaling properties as skyrmions [21,22]. Since then, the Skyrme(-type) model^1 has become one of the models used in the study of the nucleon, nuclei, as well as nuclear matter [23][24][25].
In the skyrmion approach to dense nuclear matter, obtained by putting skyrmions on a crystal lattice, a robust observation independent of the model and the crystal structure (at least for what has been checked so far) is the topology change in which skyrmions with integer winding number transit to half-skyrmions with half-integer winding number. The density at which this takes place is denoted as n_{1/2}. This model-independent topology change gives rise to several interesting density dependences of hadron properties that have not been found in other approaches.
Although the Skyrme model approach can describe the nucleon, nuclei, as well as nuclear matter in a unified way, it is a daunting task to put this approach fully into practice, since the calculation depends on the efficiency of the computer and the results are valid in the large N_c limit. Therefore, in practice, one resorts to chiral effective models that incorporate baryons as explicit degrees of freedom. In our GnEFT, we incorporate the robust characteristics of topology in the low energy constants of the model. The effect of the change of the degrees of freedom is formulated in terms of the possible topology change at a density n_{1/2}, encoded in the behavior of the parameters of the GnEFT Lagrangian as one moves from below to above the changeover density n_{1/2}. Applying the V_lowk RG approach, which implements the strategy of the Wilsonian renormalization group flow [26], we construct the pseudo-conformal model (PCM) of dense nuclear matter [27,28] (see Ref. [10] for a review).
The PCM that satisfies all the constraints from astrophysics turns out to have a peculiar feature that has not been found in any other approach: the sound velocity approaches the conformal limit v_s^2/c^2 ≈ 1/3 at the densities relevant to compact stars, although the trace of the energy-momentum tensor does not vanish. This is in stark contrast to the standard scenario favored in the field [29]. This conceptually novel approach predicts that the core of massive compact stars is populated by confined quasi-fermions of fractional baryon charge [30], not the "deconfined quarks" expected in perturbative QCD [31]. We suggest that this phenomenon, together with the "quenched g_A problem" in nuclei, shows that symmetries hidden in the matter-free vacuum of QCD emerge in nuclear dynamics [32].

^1 Hereafter, for convenience, we use the Skyrme model with the pion field only to represent the Skyrme model and its extensions.
II. TOPOLOGY CHANGE AND HADRON-QUARK CONTINUITY
It has long been discussed that in the limit of a large number of colors N_c, baryons can be regarded as topological objects (solitons), namely skyrmions. In the skyrmion approach, dense nuclear matter can be accessed by putting skyrmions onto a crystal lattice [25,33,34].
Here we exploit the Skyrme model with a Lagrangian connected to QCD in the sense of the Weinberg "folk theorem" on effective field theories [35]. For a development quite different in spirit from ours, we refer to, e.g., the review [36] and the references therein.
A. Topology change
Topology change is a novel phenomenon that has not been observed in any approach other than the skyrmion crystal approach to dense nuclear matter.
To have an intuitive idea, let us look at the distribution of the baryon number density in a specific lattice, say, the face-centered cubic crystal. The distribution of the baryon number density viewed along an axis is illustrated in the left panel of Fig. 1. The winding number is 1 if one integrates over the blue volume. Now, squeeze the system. One finds that, after a critical density n_{1/2} (or equivalently, below a crystal size L_{1/2}), the distribution of the baryon number density changes to that in the right panel of Fig. 1. What happens is that when the increasing matter density surpasses n_{1/2} (or the crystal size drops below L_{1/2}), the constituents of the matter shown in the blue square transit from winding number-1 objects (left panel) to winding number-1/2 objects, half-skyrmions (right panel). (How this happens in the numerical simulation can be seen in [34].) In the half-skyrmion configuration, as a consequence of symmetry, the space average of φ_0 ≡ (1/2) Tr U_0 vanishes, ⟨φ_0⟩ = 0, where U_0 is the static configuration of the chiral field U = exp(2iπ^a T^a/f_π) with T^a = σ^a/2. This means that the quark condensate ⟨q̄q⟩ vanishes when averaged over space. Therefore, one can use this quantity as a signal of the skyrmion-half-skyrmion transition.
It should be noted that the location of n_{1/2} cannot be pinned down theoretically because it is model-dependent. Since nuclear dynamics at low density can be well described by sχEFT, we set n_{1/2} ≳ 2.0 n_0. Later, we will see that astrophysical observations indicate 2.0 n_0 ≲ n_{1/2} ≲ 4.0 n_0.
B. Implications of topology change

Chiral symmetry breaking.- In the skyrmion crystal approach to dense nuclear matter, the pion decay constant can be calculated through the axial-vector current correlator iG^{ab}_{μν}(x) [37]. At the leading order of the fluctuations, the medium-modified pion decay constant scales with the space-averaged chiral field, f_π^{*2} ∝ ⟨φ_0^2⟩. In the skyrmion phase, since ⟨φ_0^2⟩ decreases with density, f_π^{*2} decreases with density. After passing n_{1/2} from below, although ⟨φ_0⟩ = 0 in the chiral limit, ⟨φ_0^2⟩ ≠ 0; thus f_π^{*} goes to a nonzero constant even though ⟨φ_0⟩ = 0. This argument is supported by explicit numerical calculation. In terms of current algebra, the generalized Gell-Mann-Oakes-Renner relation [38] relates f_π^{*2} m_π^{*2} to the quark condensate plus a term F_n standing for the contribution from multiquark condensation, where for convenience we have kept the current quark mass. Since the pion mass scales little with density, f_π^{*} remains non-vanishing when going to the half-skyrmion matter, which is in qualitative agreement with the result from the skyrmion crystal calculation. Equation (4) means that the chiral symmetry is only partially restored in the half-skyrmion matter and we are still in the Nambu-Goldstone phase. This means that the skyrmion-half-skyrmion transition is not a Landau-Ginzburg-type phase transition. Although it is not a paradigmatic phase change, in what follows, we will use the term "half-skyrmion phase" for simplicity.
Chiral doublet structure.- It is found that when the system goes into the half-skyrmion medium, the nucleon mass becomes a density-independent constant [39]. Therefore, one can decompose the nucleon mass as m_N = m_0 + Δ(⟨q̄q⟩), where Δ(⟨q̄q⟩) is the part of the nucleon mass coming from the quark condensate, which becomes zero in the half-skyrmion medium, and m_0 is the part of the nucleon mass independent of ⟨q̄q⟩, with a magnitude of about (50-70)% of the nucleon mass in vacuum. The existence of m_0 ≠ 0 implies that there is a part of the nucleon mass that is chiral invariant. It should be noted that the decomposition (7) can also be inferred from other approaches. Lattice calculations found that, when the chiral symmetry is unbroken, baryons are still massive and one should not expect a drop of the mass in dense medium [40]. The same behavior was found in Ref. [41] in a renormalization group (RG) analysis of a hidden local symmetric Lagrangian with baryons. Moreover, in Ref. [42], using a chiral effective model with parity doublers, it was found that, in order to reproduce nuclear matter around the saturation density, the nucleon mass should have a sizable chiral-invariant component. So far, it is not clear to us whether m_0 reflects a fundamental feature of QCD or an emergent symmetry generated by correlations in the medium, as in condensed matter and as indicated in this crystal calculation.
Symmetry energy.- The symmetry energy of nuclear matter E_sym(n), which plays the most important role in the equation of state (EoS) for compact stars, is not under control at the densities relevant to compact stars [43,44]. It is given by the term proportional to α^2 in the energy per nucleon E(n, α),

E(n, α) = E(n, α = 0) + E_sym(n) α^2 + O(α^4),   (8)

where α = (N − P)/(N + P) with P (N) the number of protons (neutrons).
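As a small numerical illustration of Eq. (8), the sketch below extracts E_sym(n) as the coefficient of the quadratic α-dependence of a toy energy-per-nucleon function; the parametrization is a placeholder, not the GnEFT result:

```python
# Toy extraction of the symmetry energy from the quadratic term of E(n, alpha).

def energy_per_nucleon(n, alpha, n0=0.16):
    """Toy E(n, alpha) in MeV with a quadratic isospin-asymmetry dependence."""
    e_symmetric = -16.0 + 13.0 * (n / n0 - 1.0) ** 2   # toy symmetric-matter part
    e_sym = 30.0 * (n / n0) ** 0.7                      # toy symmetry energy
    return e_symmetric + e_sym * alpha ** 2

def symmetry_energy(n, d_alpha=1e-3):
    """E_sym(n) = (1/2) d^2E/dalpha^2 at alpha = 0, by central difference."""
    e_plus = energy_per_nucleon(n, +d_alpha)
    e_zero = energy_per_nucleon(n, 0.0)
    e_minus = energy_per_nucleon(n, -d_alpha)
    return 0.5 * (e_plus - 2.0 * e_zero + e_minus) / d_alpha ** 2

print(round(symmetry_energy(0.16), 2))   # ~30 MeV at saturation for this toy model
```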
Since the symmetry energy arises from the proton-neutron asymmetry, to calculate it in the skyrmion crystal approach the crystal lattice should be rotated through a single set of collective coordinates [45]. A tedious but straightforward calculation yields E_sym ≃ 1/(8λ_I), where λ_I is the isospin moment of inertia of the crystal. The density dependence of the symmetry energy obtained from the skyrmion crystal approach is schematically plotted as the dotted curve in Fig. 2. What is interesting is the appearance of a cusp structure locked at n_{1/2}: the symmetry energy first decreases with density and then increases once the density passes n_{1/2}. To understand this density dependence, we consider the expression for λ_I [46], which at leading order is proportional to f_π^2 ⟨1 − φ_0^2⟩ plus a contribution from the Skyrme term, where ⟨⋯⟩ indicates the space average of the quantity inside. As discussed above, with increasing density ⟨φ_0^2⟩ decreases to zero, so 1/λ_I, or equivalently E_sym, decreases going toward n_{1/2}. After n_{1/2}, the tendency of E_sym is more involved. Since at n ≳ n_{1/2} we have ⟨φ_0^2⟩ ≈ 0, the density dependence from the quartic term in the Skyrme model, which represents massive excitations such as the vector mesons in HLS models, intervenes. It gives the cusp structure.
It should be stressed that the crystal description of baryonic matter at low density is not reliable, so the density dependence of the symmetry energy obtained at density n ≲ n_0 should not be taken seriously. The cusp structure at n_{1/2} is present in nuclear correlations, as shown below in terms of nuclear tensor forces. What is important in the skyrmion crystal calculation is that the symmetry energy decreases toward the cusp density, after which it increases. We will see later that this cusp sheds light on the medium-modified hadron properties.
Nuclear tensor force.- We have shown that the robust characteristic in the skyrmion crystal approach is the existence of the cusp structure in the symmetry energy. A natural question is what the implication of this cusp is in GnEFT, which includes the nucleon as an explicit degree of freedom, or equivalently, how to reproduce this cusp in GnEFT. To address this question, we consider the tensor force between nucleons mediated by one-boson exchange.
The symmetry energy is dominated by the nuclear tensor force V_T and can be written in the closure approximation as [47]

E_sym ≈ c ⟨V_T^2(r)⟩ / Ē,   (11)

with Ē ≈ 200 MeV the average excitation energy and c a constant. Therefore, the behavior of the symmetry energy is controlled by the magnitude of the tensor force between nucleons carried by the exchanged mesons. For the present purpose, it suffices to consider the one-pion and one-ρ contributions to the two-body tensor forces. The scalar meson, here the dilaton, does not contribute directly at tree level but affects the scaling relations of the masses and coupling constants in the Lagrangian indirectly. In the non-relativistic limit, the tensor forces are given by

V_T^M(r) = S_M (f*_{NM})^2/(4π) m_M τ_1·τ_2 S_{12} [1/(m_M r)^3 + 1/(m_M r)^2 + 1/(3 m_M r)] e^{−m_M r},   (12)

where M = π, ρ, S_{ρ(π)} = +1(−1), and S_{12} is the tensor operator built from the Pauli matrices τ^i and σ^i for the isospin and spin of the nucleons, with i = 1, 2, 3. The density dependence enters through the scaling parameters in the in-medium quantities marked with an asterisk [48]. The strength f*_{NM} scales with the in-medium meson and nucleon masses and the effective meson-nucleon couplings g_{MNN}. What is significant in Eq. (12) is that, given the same radial dependence, the two forces (through the pion and ρ meson exchanges) come with opposite signs and therefore cancel each other.
As discussed in Ref. [49], if the hadron properties scale from low to high densities with no topology change, the tensor force decreases monotonically with density. There would then be no cusp in the symmetry energy. This feature would be in conflict with what happens in Nature.
Now let us see what happens if there is a topology change at n_{1/2}. For illustration we take R*_ρ ≈ Φ^2 at n > n_{1/2}, with all other quantities the same as in the case without topology change. The results are plotted in Fig. 3. They show that the effect of the topology change is dramatic. Due to the cancellation between the two tensor forces, in the range relevant for the nuclear interaction, r ≳ 1 fm, the magnitude of the net force first decreases and then, after passing n_{1/2}, increases, with the force from the ρ meson nearly totally suppressed. Then, from Eq. (11), one concludes that going toward n_{1/2} from below the symmetry energy drops, turns over more or less abruptly at n_{1/2}, and then increases beyond n_{1/2}. This reproduces precisely the cusp predicted in the crystal calculation. As a result, the cusp structure in E_sym, a consequence of the topology change with the onset of the half-skyrmion phase, signals the different density scaling of the gauge coupling for n ≤ n_{1/2} and n > n_{1/2}.
In summary, the topology change found in the skyrmion crystal approach to dense nuclear matter indicates that the hadron properties, such as the nucleon mass, meson masses, pion decay constant, hidden gauge coupling, and so on, have different density scalings in the skyrmion and half-skyrmion phases. We will see later that this observation has a drastic effect on the dense nuclear matter relevant to compact stars.
We should mention here that the higher-order correlation corrections brought in by the V_lowk renormalization-group flow calculation "smoothen" the cusp into the form represented by the solid line in Fig. 2.
C. Quark-hadron continuity
We have argued that the topology change is a robust feature of the skyrmion crystal approach to dense nuclear matter. The question is whether or how the topology change represents the "quark deconfinement" process in QCD. There is no clear answer at present, so we can only offer a conjecture on how one can establish the connection in the sense of the Cheshire Cat Principle (CCP), based on the chiral bag model of the nucleon.
For the number of flavors N_f ≥ 2, baryons can be described by chiral bags [14,15]. Inside the bag, the degrees of freedom are quarks and gluons, and the baryon number is carried by the quarks. Outside of the bag, mesons are the relevant degrees of freedom, and the baryon number is carried by the topological winding number. When the bag is shrunk, all the quarks drop into the infinite hotel and turn into skyrmions, with only the Cheshire Cat smile remaining. That the physics should not depend on the bag size is the CCP.
In the case of a single flavor, N_f = 1, the situation is quite different because there is no N_f = 1 skyrmion. It turns out that the baryon should be a soliton resembling a pancake [50] or pita [51] with a fractional quantum Hall (FQH) topological structure. There is a Cheshire Cat description for this in terms of an anomaly flow [52]. What is puzzling is that there are two Cheshire Cats, one involving a 3D ball and the other 2D sheets. It seems very plausible that at low density baryonic matter is made of skyrmions in 3D, with the metastable 2D FQH pancakes/pitas suppressed. However, at high density it seems indispensable that the FQH topological structure be taken into account. This is because at high density, where the chiral transition takes place, the vector mesons in hidden local symmetry become Chern-Simons fields (via a Seiberg-type duality). This part of the high-density story is not yet understood, so we can only say that we really do not understand what happens at high density. In what we have done, we simply assume that the Chern-Simons fields do not figure importantly in the range of compact-star densities. We will simply ignore this "dichotomy problem." This aspect of the problem is discussed in [53,54].
The topology change in the skyrmion crystal approach appears at the density at which the profiles of the solitons overlap and the valence quarks inside the baryons rearrange to form different clusters, here configurations with baryon number 1/2. This picture resembles the quarkyonic matter proposed in [8,55] and the hard-core realization of the deconfinement from nuclear to quark matter phrased in Ref. [56].
As mentioned above and discussed further below, owing to the topology change implemented in the parameters of the GnEFT Lagrangian, the symmetry energy E_sym softens as it approaches n_{1/2} from slightly below and hardens after passing n_{1/2}. This generates a spike in the density dependence of the sound velocity. In Ref. [57], this spike was attributed to the enhancement and then suppression of the ω_0 condensate in the low and high density regions, respectively. We suppose that this behavior of the ω_0 condensate can be naturally explained using the scale-chiral effective theory beyond the leading order in scale symmetry, in which not only the ω meson mass but also the ω-N-N coupling scales with density [58].
III. EMERGENT SYMMETRIES
After the discussion of the topology change, which serves as one of the key ingredients of the PCM, let us now turn to two other essential ingredients, the hidden local gauge symmetry and the hidden scale symmetry, which are invisible in the vacuum of QCD. Our approach is to exploit the possible emergence of these symmetries as the density increases to the regime relevant to compact stars, say, ≲ 10 n_0. We use these symmetries to include the higher-energy degrees of freedom: the lowest-lying vector mesons V = (ρ, ω) and the scalar meson f_0(500). Here, we focus on the points directly relevant to the PCM construction, leaving the details to [10,59].

A. Emergent hidden local symmetry

To bring the lowest-lying vector mesons ρ and ω into the chiral effective theory, we adopt the strategy of hidden local symmetry (HLS) [16][17][18], which at low density is gauge equivalent to the nonlinear sigma model, the basis of sχEFT.
By decomposing the chiral field U(x) as U(x) = ξ†_L ξ_R, one can introduce a redundant local symmetry h(x) under which ξ_{L,R} transform as ξ_{L,R}(x) → h(x) ξ_{L,R}(x), keeping the chiral properties of U(x) intact. When a chiral effective theory is expressed in terms of ξ_{L,R}, the gauge fields V(x) of the local symmetry h(x) enter the theory. After Higgsing the gauge symmetry, the gauge fields V(x) obtain masses. In HLS, the field content depends on the symmetry h(x); choosing h(x) ∈ U(2) includes both ρ ∈ SU(2) and ω ∈ U(1). It is assumed that the kinetic terms of V_μ(x) can be generated by the underlying dynamics of QCD or by quantum corrections, so that V_μ(x) become dynamical gauge bosons [16]. Compared with other approaches to vector mesons, HLS allows one to establish a systematic power counting by treating the vector mesons on the same footing as the Nambu-Goldstone bosons, the pions [18]. Now, coming back to nuclear matter: at low density, where the nucleons are far from each other, the vector mesons are massive objects that can be exchanged between them. Using the equations of motion of the vector mesons, their effects are accounted for as a two-pion exchange effect, i.e., a one-loop contribution in sχEFT. The question is in what sense the vector mesons can be regarded as hidden local gauge fields. Suzuki's theorem [60] states that "when a gauge-invariant local field theory is written in terms of matter fields alone, a composite gauge boson or bosons must be inevitably formed dynamically." If we assume the "vector manifestation (VM)" [18,61], namely that m_ρ^2 ∝ f_π^2 g_ρ^2 → 0 since g_ρ → 0 at a certain scale, valid at some theoretically unknown high density n_vm, the hidden local gauge symmetry emerges in the dense system. We will see below that n_vm ≳ 25 n_0 is indicated for the emergence of the pseudo-conformal sound velocity in stars.
Moreover, it has been argued that the HLS fields could be (Seiberg-)dual to the gluons [62][63][64], the intrinsic degrees of freedom of QCD. At this moment, we do not know how this could happen. But, if this is right, we believe it means that the HLS gets un-hidden at high density. Since this duality indicates a Higgs-phase-to-topological-phase transition coinciding with quark deconfinement at asymptotic density, it is most likely irrelevant to the compact stars we are concerned with [53].
B. Emergent scale symmetry
It is well known that the scalar meson f_0(500) is essential for providing the attractive force between nucleons. In our approach, it figures as the Nambu-Goldstone boson of scale symmetry, the dilaton χ. Actually, in Ref. [65], the trace anomaly was used as a source of the scalar meson to construct an effective model of the scalar meson by means of anomaly matching. Here, we adopt the "genuine dilaton" (GD) structure proposed in [19].
The key premise of the GD idea is the existence of an infrared fixed point (IRFP) with beta function β(α_IR) = 0 for flavor number N_f = 3, with the physical world lying slightly away from this IRFP. Both the distance from the IRFP and the current quark masses account for the dilaton mass. Explicitly, the dilaton mass is controlled by the deviation Δ_IR = α_IR − α_s from the IRFP (with α_s the QCD coupling), up to contributions from the quark mass and higher orders in Δ_IR. This is in analogy to the Gell-Mann-Oakes-Renner relation in the pseudoscalar meson sector. Since, unlike the unflavored hadrons, the effective masses of strange hadrons do not drop much in dense medium, we will not consider strangeness here.
Whether the proposed IRFP exists in QCD is still under debate. In Ref. [66], it was argued that in the IR region there is a nonperturbative scale invariance different from that in the UV region. This is argued to lead to the possibility of massless glueballs in the fluid; what may be significant is the possible zero-mass glueball excitation. If we simply assume this picture works in the dense system, it can be regarded as indirect support for our theme. In any case, we have not found any contradiction with nature in using the GD idea.
IV. PSEUDO-CONFORMAL MODEL OF COMPACT STAR MATTER
Using the GnEFT discussed above, we are now in a position to calculate the nuclear matter properties. We shall focus on the EoS of the baryonic matter, leaving out such basic issues as corrections to gravity, dark matter, etc. Unless otherwise stated, the role of the leptons (electrons, muons, neutrinos, etc.) is included in the EoS. Hereafter, we mainly focus on the effect of the topology change. For other aspects, we refer to [10].
A. Density scaling
In the construction of the PCM, we incorporate the medium-modified hadron properties (dubbed "intrinsic density dependence" (IDD)) into the GnEFT constructed above, using Brown-Rho scaling [48] for n ≤ n_{1/2} (region R-I) and the topological inputs for n > n_{1/2} (region R-II).
Density scaling in R-I.- In R-I, only one parameter Φ in Eq. (18) fixes all the IDDs. To the leading order in the chiral-scale counting [67], the density scaling in R-I can be written as [10]

m_N^*/m_N ≈ m_V^*/m_V ≈ m_χ^*/m_χ ≈ f_π^*/f_π ≈ f_χ^*/f_χ ≈ Φ_I(n),   (18)

where V = (ρ, ω). Since there is no first-principle information on this quantity, for convenience we fix it by taking the form

Φ_I = 1/(1 + c_I n/n_0),   (19)

with c_I a constant. The range of c_I that gives a good fit to the nuclear matter properties shown in Table I is found to be [27,49]

c_I ≈ 0.13 − 0.20,   (20)

with the upper value giving the measured pion decay constant [68]. Of course, it is to be expected, as would be agreed by all nuclear physicists, that a certain fine-tuning of the parameters is required for the ground-state properties of nuclear matter. Density scaling in R-II.- Due to the topology change at n_{1/2} > n_0, the density dependence of some of the parameters is drastically different from that in R-I.
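A minimal sketch of the scaling function in Eq. (19), evaluated for the range of c_I quoted above (the density grid is arbitrary):

```python
# Sketch of the R-I scaling function Phi_I(n) = 1 / (1 + c_I n/n0), Eq. (19).

def phi_I(n_over_n0, c_I):
    """Intrinsic density dependence of the R-I parameters (Eq. (18))."""
    return 1.0 / (1.0 + c_I * n_over_n0)

for c_I in (0.13, 0.20):
    values = [round(phi_I(u, c_I), 3) for u in (0.5, 1.0, 2.0)]
    print(f"c_I = {c_I}: Phi_I at n/n0 = 0.5, 1, 2 -> {values}")
```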
Since the hidden local gauge coupling g_ρ and the ρ meson mass are related to each other through the KSRF relation, we take the simplest form for the scaling function Φ_ρ ≡ g_ρ^*/g_ρ, interpolating between its value at n_{1/2} and zero at the putative VM fixed-point density n_VM. How Φ_ρ is joined to Φ_I for n ≤ n_{1/2} is discussed in Ref. [27]. To have a result consistent with that from the skyrmion crystal approach discussed above and with the mean-field approach based on the leading-order scale symmetry (LOSS) [72], we take n_VM ≳ 25 n_0. The density scaling of the ω meson is more involved and differs from that of the ρ meson, which flows to the VM fixed point [41,49]. It has to be fine-tuned to match the well-constrained nuclear matter properties around the saturation density. Here, we take a separate scaling function Φ_ω for the U(1) gauge coupling g_ω, controlled by a parameter d ≈ 0.05. This reflects the predicted breakdown in R-II of the flavor U(2) symmetry for the vector mesons, which holds well in R-I. As for the other parameters, we simply adopt the inputs from the skyrmion crystal approach, namely that the pion decay constant and the nucleon mass become (approximately) density-independent in R-II, f_π^*/f_π ≈ m_N^*/m_N ≈ κ, with κ a constant. The dilaton mass also scales in proportion to the dilaton condensate; this follows from the partially conserved dilatation current (PCDC) [19], m_σ^*/m_σ ≈ κ. It also follows from the low-energy theorems for the dilaton.
It follows also from low-energy theorems that The dilaton coupling to nucleon and other fields is unscaling to the leading order in scale-chiral symmetry, so it is a constant in R-II as in R-I. V lowk renormalization group approach.-Equipped with the IDD, we are ready to calculate the EoS of nuclear matter. Here, to take into account the hadron fluctuation effects, we apply the V lowk renormalization group technique [26] which accounts for higher-order corrections to the Landau Fermi-liquid approximations [10]. In this procedure, in addition to the IDD implemented in the density scaling of the parameters, the induced density dependence from the nucleon correlation denoted as DD induced 2 is also included. Therefore, the density dependence in the obtained EoS includes both IDD and DD induced . We denote the sum of IDD and DD induced as IDD.
We would like to point out that, owing to the IDD of the two-nucleon potentials, our calculation amounts to doing roughly N^3LO sχEFT including the chiral three-body potentials, which are essential for nuclear matter to be stabilized at the proper equilibrium density [73]. The same mechanism has been found to work for the C-14 dating Gamow-Teller matrix element, where the three-body potential effect in sχEFT is reproduced by the IDD.
B. The pseudo-conformal model of dense nuclear matter
Using the density scaling discussed above, we can now calculate the nuclear matter properties. First, we see from Table I that the empirical values of the normal nuclear matter properties are well reproduced. Now, going to higher density, the topology change at n_{1/2} induces a drastic change in the scaling of the parameters of GnEFT, with a qualitative impact on the structure of the EoS. So far, there is no theoretical argument to pin down n_{1/2}. Phenomenologically, we can estimate its range as 2.0 n_0 < n_{1/2} < 4.0 n_0 by using the various astrophysical observations available, such as the maximum mass, the gravitational-wave data, and especially the star's sound speed. Sound velocity.- One of the most striking predictions, in stark contrast to the conventional picture, is the precocious appearance of the conformal sound velocity in compact-star matter. From Fig. 4, one can see that while the sound speed increases steadily and overshoots the conformal velocity at the presumed n_{1/2} = 2 n_0, it then comes down and converges to v_s^2 ≈ 1/3. It should be noted that the appearance of the conformal sound velocity at some high density is not in itself peculiar: some reasonable sχEFT results resemble this picture more or less, but they show much broader and bigger bumps, not exceeding the causality bound v_s = 1, before converging to the conformal speed v_s^2 = 1/3 at an asymptotic density ≳ 50 n_0 [29]. After all, the convergence to the conformal speed at asymptotically high density is expected in perturbative QCD. What is striking, and in a way unorthodox, is the precocious onset of, and convergence to, v_s^2 ≈ 1/3 well before reaching asymptotically high density, despite the fact that the trace of the energy-momentum tensor is nonzero (see below). It is somewhat like the "quenched g_A" going to 1 in light nuclei [32], reflecting the pervasive imprint of hidden scale symmetry.
In our approach, the conformal sound speed follows as a logical outcome of the propositions of [10], different from the parameter scanning done in [74]. These propositions imply that, going toward the dilaton-limit fixed point (DLFP) [41], the trace of the energy-momentum tensor ⟨θ^μ_μ⟩ is a function of only the dilaton condensate ⟨χ⟩*. Now, if the condensate goes to a constant ∼ m_0 due to the emergence of parity doubling, as we learned happens after the topology change, then ⟨θ^μ_μ⟩ becomes (more or less) independent of density. In this case, we have

∂⟨θ^μ_μ⟩/∂n = 0.   (27)

Since θ^μ_μ = ε − 3P, this implies

∂⟨θ^μ_μ⟩/∂n = (1 − 3 v_s^2) ∂ε/∂n = 0,   (28)

where v_s^2 = (∂P(n)/∂n)/(∂ε(n)/∂n), and ε and P are, respectively, the energy density and the pressure. If we assume ∂ε(n)/∂n ≠ 0, i.e., no Lee-Wick-type states in the range of densities involved, we can then conclude

v_s^2 = 1/3.   (29)

This means that the dilaton condensate ⟨χ⟩* goes to the density-independent constant ∼ m_0 due to the parity doubling for n_vm ≳ 25 n_0. This suggests that the parity doubling at high density is linked to the ρ decoupling from the nucleon together with the vector manifestation [41].
The above chain of reasoning is confirmed in the full V_lowk RG formalism, specifically for the case n_{1/2} = 2 n_0. Figure 5 shows the trace of the energy-momentum tensor (left panel), which gives the conformal velocity for n ≳ 3 n_0 (right panel). This feature of both the TEMT and the sound velocity is expected to hold for any n_{1/2} at which the topology change sets in, i.e., within the range 2 ≲ n_{1/2}/n_0 ≲ 4. Equation of state.- We now focus on the EoS of compact stars. It turns out that at density n ≥ n_{1/2}, the conformality of the sound velocity can be captured by a simple two-parameter formula for the energy per particle,

E/A = −m_N + X^α (n/n_0)^{1/3} + Y^α (n/n_0)^{−1},   (30)

where X^α and Y^α are parameters to be fixed.
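Since the explicit form of Eq. (30) is written here as a reconstruction from the stated two-parameter pseudo-conformal behavior, the sketch below checks numerically that this form indeed gives v_s^2 = 1/3 and a density-independent trace of the energy-momentum tensor; the values of X, Y, and m_N are illustrative, not fitted:

```python
# Numerical check of the pseudo-conformal form E/A = -m_N + X u^(1/3) + Y/u,
# u = n/n0: constant theta = eps - 3P and v_s^2 = 1/3. Illustrative parameters.
m_N, n0 = 939.0, 0.16        # MeV, fm^-3
X, Y = 760.0, 20.0           # MeV, illustrative

def eps(n):
    """Energy density (MeV/fm^3), including the nucleon rest mass."""
    u = n / n0
    return n * (m_N + (-m_N + X * u ** (1.0 / 3.0) + Y / u))

def pressure(n, h=1e-7):
    """P = n^2 d(eps/n)/dn, evaluated by central difference."""
    f = lambda x: eps(x) / x
    return n ** 2 * (f(n + h) - f(n - h)) / (2.0 * h)

def v_s_squared(n, h=1e-7):
    return (pressure(n + h) - pressure(n - h)) / (eps(n + h) - eps(n - h))

for u in (2.0, 4.0, 6.0):
    n = u * n0
    theta = eps(n) - 3.0 * pressure(n)
    print(f"n = {u} n0: v_s^2 = {v_s_squared(n):.4f}, theta = {theta:.2f} MeV/fm^3")
```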
What we refer to as the pseudo-conformal model (PCM for short) for the EoS is then E/A given by the union of the V_lowk result in R-I (n < n_1/2) and Eq. (30) in R-II (n ≥ n_1/2), with the parameters X_α and Y_α fixed by the continuity of the chemical potential and the pressure, μ_I = μ_II and P_I = P_II, at n = n_1/2.
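A minimal numerical sketch of this construction is given below. The specific two-parameter form used for the high-density region, E/A = −m_N + X(n/n_0)^{1/3} + Y(n/n_0)^{−1}, is an assumption introduced here only because it has the two properties the text requires of Eq. (30): it contains exactly two constants to be fixed by the matching at n_1/2, and it gives v_s^2 = 1/3 identically. The numerical values of X and Y, and the omission of the V_lowk result in R-I, are illustrative only.

```python
import numpy as np

# Assumed pseudo-conformal parametrization for R-II (n >= n_1/2):
#   E/A(n) = -m_N + X*(n/n0)**(1/3) + Y*(n/n0)**(-1)
# Any EoS of this form has sound speed v_s^2 = dP/d(eps) = 1/3 exactly.
m_N = 939.0   # nucleon mass [MeV]
n0 = 0.16     # saturation density [fm^-3]

def energy_per_particle(n, X, Y):
    x = n / n0
    return -m_N + X * x**(1.0 / 3.0) + Y / x

def energy_density(n, X, Y):            # eps = n*(m_N + E/A)
    return n * (m_N + energy_per_particle(n, X, Y))

def pressure(n, X, Y):                  # P = n^2 * d(E/A)/dn
    x = n / n0
    dEdn = (X / (3.0 * n0)) * x**(-2.0 / 3.0) - Y * n0 / n**2
    return n**2 * dEdn

def sound_speed_squared(n, X, Y, h=1e-6):
    dP = pressure(n + h, X, Y) - pressure(n - h, X, Y)
    dE = energy_density(n + h, X, Y) - energy_density(n - h, X, Y)
    return dP / dE

# Toy values of (X, Y); in the PCM they would instead be solved for from the
# continuity conditions mu_I = mu_II and P_I = P_II at n = n_1/2.
X, Y = 300.0, 100.0
for n in n0 * np.array([2.0, 3.0, 4.0, 6.0]):
    print(f"n = {n/n0:.1f} n0 :  v_s^2 = {sound_speed_squared(n, X, Y):.4f}")
```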
This formulation is found to work very well for both α = 0 and 1 in the entire range of densities appropriate for massive compact stars, say up to n ∼ (6-7)n_0, for the case n_1/2 = 2n_0 where the full V_lowk RG calculation is available [27]. We apply this PCM formalism to the cases where n_1/2 > 2n_0. Since the tidal deformability obtained for a 1.4 M_⊙ neutron star with n_1/2 = 2.0n_0 is Λ_1.4 ≈ 790 [27,28], which corresponds to the upper bound set by the gravitational-wave data, we take n_1/2 ≥ 2n_0 as the lower bound for the topology-change density (32).
FIG. 5. θ^μ_μ (upper panel) and v_s (lower panel) vs. density for α = 0 (nuclear matter) and α = 1 (neutron matter) in V_lowk RG for n_1/2 = 2n_0.
Next, let us see how the sound velocity comes out for n_1/2/n_0 = 3 and 4 [28,75]. (The case n_1/2 = 2n_0 was given in Fig. 5.) The results for neutron matter are summarized in Fig. 6. It is clear from Fig. 6 that, when n_1/2 = 4n_0, the sound velocity violates the causality bound v_s^2 < 1. The spike structure could very well be an artifact of the sharp connection made at the boundary. It may also reflect the different behaviour of the ω_0 condensation at low and high densities [57]. What is physical, however, is the rapid increase of the sound speed at the transition point, signaling the changeover of the degrees of freedom. Significantly, together with the lower bound (32), this allows us to pinpoint the region of the topology change, 2n_0 ≲ n_1/2 ≲ 4n_0 (33). Later, we will explore whether, or how, the waveforms of the gravitational waves emitted from binary neutron star mergers respond to the location of n_1/2, which in our formulation corresponds to the point of hadron-quark continuity in QCD. At this moment, we cannot obtain a more precise constraint than (33). The important point is that it is an order of magnitude lower than the asymptotic density ≳ 50n_0 that perturbative QCD predicts, and it signals the precocious emergence of pseudo-conformality in compact stars. However, the robustness of the topological inputs figuring in the formulation convinces us that the precocious onset of the pseudo-conformal structure can be trusted at least qualitatively. In this connection, a recent detailed analysis of currently available data in the quarkyonic model is consistent with a possible onset density of v_s^2 ≈ 1/3 at ∼ 4n_0 [76].
FIG. 7. Predicted pressure for neutron matter (α = 1) vs. density compared with the available experimental bound (shaded) given by Ref. [77] and the bound at 6n_0 (blue band).
Shown in Fig. 7 is the predicted pressure P vs. density for n_1/2/n_0 = 3 and 4, compared with the presently available heavy-ion data [77]. The case n_1/2 = 4n_0, while consistent with the bound at n ∼ 6n_0, goes outside the presently available experimental bound at n ∼ 4n_0. This may again be an artifact of the sharp matching, but that it violates the causality bound seems to put it in tension with Nature. Nonetheless, without a better understanding of the cusp singularity present in the symmetry energy mentioned above, it would be too hasty to rule out the threshold density n_1/2 = 4n_0.
V. STAR PROPERTIES AND GRAVITATIONAL WAVES
Star mass.-The solution of the TOV equation, with the pressure of the leptons in beta equilibrium duly taken into account as in Ref. [27], yields the results for the star mass M vs. the radius R and the central density n_cent given in Fig. 8. The maximum mass comes out to be roughly 2.04 M_⊙ to 2.23 M_⊙ for 2.0 ≤ n_1/2/n_0 ≤ 4.0; the higher the n_1/2, the greater the maximum mass. This is consistent with the observations of the massive neutron stars M = 1.908 ± 0.016 M_⊙ for PSR J1614−2230 [1], 2.01 ± 0.04 M_⊙ for PSR J0348+0432 [2], and 2.14 (+0.10/−0.09) M_⊙ for PSR J0740+6620 [3]. Note that this is not at odds with the conclusion of Ref. [78], since in our model the sound velocity exceeds the conformal limit at intermediate density. Fig. 8 shows that, when n_1/2 ≥ 3.0n_0, changing the position of n_1/2 affects only the compact stars with mass ≳ 2.0 M_⊙, although the mass-radius relation is affected by the topology change when 2.0n_0 ≤ n_1/2 ≤ 3.0n_0.
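For completeness, the sketch below shows how a mass-radius curve of the kind discussed here is obtained by integrating the Tolman-Oppenheimer-Volkoff equations for a given equation of state. The polytropic EoS and the central densities used are placeholders, not the PCM EoS with beta-equilibrated leptons used in Ref. [27].

```python
import numpy as np

# Toy polytropic EoS P = K * eps**Gamma in units G = c = M_sun = 1
# (a stand-in for the PCM EoS; the length unit is 1.4766 km).
K, Gamma = 100.0, 2.0

def eps_of_P(P):                        # invert P = K * eps**Gamma
    return (P / K) ** (1.0 / Gamma)

def tov_rhs(r, y):
    P, m = y
    if P <= 0.0:
        return np.array([0.0, 0.0])
    eps = eps_of_P(P)
    dPdr = -(eps + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))
    dmdr = 4.0 * np.pi * r**2 * eps
    return np.array([dPdr, dmdr])

def solve_star(eps_c, dr=1e-3, r_max=20.0):
    """RK4 integration outward from the centre until the pressure vanishes."""
    r = dr
    y = np.array([K * eps_c**Gamma, (4.0 / 3.0) * np.pi * r**3 * eps_c])
    while y[0] > 1e-12 and r < r_max:
        k1 = tov_rhs(r, y)
        k2 = tov_rhs(r + 0.5 * dr, y + 0.5 * dr * k1)
        k3 = tov_rhs(r + 0.5 * dr, y + 0.5 * dr * k2)
        k4 = tov_rhs(r + dr, y + dr * k3)
        y = y + (dr / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        r += dr
    return r * 1.4766, y[1]             # radius in km, gravitational mass in M_sun

for eps_c in (0.8e-3, 1.28e-3, 2.0e-3, 3.0e-3):   # toy central energy densities
    R_km, M = solve_star(eps_c)
    print(f"eps_c = {eps_c:.2e} :  R = {R_km:5.1f} km,  M = {M:.2f} M_sun")
```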
Tidal deformability.-Next, we confront our theory with what came out of the LIGO/Virgo gravitational-wave observations, the dimensionless tidal deformability Λ. We consider the dimensionless tidal deformability Λ_i for the star of mass M_i and the combined quantity Λ̃ defined by Λ̃ = (16/13) [(M_1 + 12M_2) M_1^4 Λ_1 + (M_2 + 12M_1) M_2^4 Λ_2] / (M_1 + M_2)^5, (35) with M_1 and M_2 constrained to the well-measured "chirp mass" of the event considered, GW170817 [5] or GW190425 [6]. We plot our predictions for Λ̃ in Fig. 9 and for Λ_1 vs. Λ_2 in Fig. 10, and compare them with the results obtained with the parametrization of the EoS from the sound-velocity constraints [74]. As it stands, our prediction with n_1/2 ≳ 2n_0 is compatible with the LIGO/Virgo constraint. Although there seems to be some tension with the pressure, the result for n_1/2 = 4n_0 is of quality comparable to that of n_1/2 = 2n_0. A detailed analysis of the difference between the PCM and [74] will be made later.
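For reference, the combined dimensionless tidal deformability in Eq. (35) is the standard mass-weighted combination used in the LIGO/Virgo analyses; a direct transcription is given below, with purely illustrative component masses and deformabilities (not the PCM predictions).

```python
def lambda_tilde(m1, m2, lam1, lam2):
    """Combined dimensionless tidal deformability, Eq. (35)."""
    num = (m1 + 12.0 * m2) * m1**4 * lam1 + (m2 + 12.0 * m1) * m2**4 * lam2
    return (16.0 / 13.0) * num / (m1 + m2) ** 5

def chirp_mass(m1, m2):
    """Chirp mass (m1*m2)^(3/5) / (m1+m2)^(1/5)."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# Illustrative numbers only: masses consistent with the GW170817 chirp mass.
m1, m2 = 1.48, 1.26          # component masses [M_sun]
lam1, lam2 = 500.0, 1000.0   # assumed component deformabilities
print(f"chirp mass   = {chirp_mass(m1, m2):.3f} M_sun")
print(f"Lambda_tilde = {lambda_tilde(m1, m2, lam1, lam2):.0f}")
```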
Massive star composition.-Recently, combining astrophysical observations and model-independent theoretical ab initio calculations, Annala et al. arrived at the conclusion that the core of massive stars is populated by "deconfined" quarks [31]. This is based on the observation that, in the core of the maximally massive stars, v_s approaches the conformal limit v_s/c → 1/√3 and the polytropic index takes a value γ < 1.75, close to the minimal one obtained in hadronic models.
FIG. 9. The PCM prediction with n_1/2 = 2.0n_0 is plotted as a solid line and those of [74] as dashed and dot-dashed lines (see Ref. [74] for notation). The grey band in the upper panel is the constraint from the low-spin Λ̃ = 300 (+500/−190) obtained from GW170817 [5], and that in the lower panel is Λ̃ ≤ 600 from GW190425.
We have seen above that, in the PCM, the predicted pseudo-conformal speed sets in precociously at n ≈ 3n_0 and stays constant in the interior of the star. In addition, it is found that the polytropic index γ drops, again rapidly, below 1.75 at ∼ 3n_0 and approaches 1 at n ≳ 6n_0 [30]. This can be seen from Fig. 11. Microscopic descriptions such as the quarkyonic model [80] typically exhibit more complex structures at the putative hadron-quark transition density than our description, which is not unexpected given that our picture is a coarse-grained macroscopic description whereas the quarkyonic model is a microscopic rendition of what is going on.
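The two diagnostics used in this comparison, the sound speed and the polytropic index γ = d ln P/d ln ε, can be extracted from any tabulated EoS as in the sketch below, which reuses the assumed pseudo-conformal form of the earlier sketch; with those toy parameters γ falls below 1.75 between 2n_0 and 3n_0 and tends to 1 at high density, qualitatively as described in the text.

```python
import numpy as np

# gamma = d(ln P)/d(ln eps) and v_s^2 = dP/d(eps) from a tabulated EoS.
# eps(n), P(n) below use the *assumed* pseudo-conformal form of the earlier
# sketch (toy X, Y); replace them with any EoS table of interest.
n0, X, Y = 0.16, 300.0, 100.0
n = np.linspace(2.0 * n0, 7.0 * n0, 200)
x = n / n0
eps = n0 * (X * x ** (4.0 / 3.0) + Y)            # energy density [MeV fm^-3]
P = n0 * (X * x ** (4.0 / 3.0) / 3.0 - Y)        # pressure       [MeV fm^-3]

vs2 = np.gradient(P, eps)
gamma = np.gradient(np.log(P), np.log(eps))

for frac in (2, 4, 6):
    i = np.argmin(np.abs(n - frac * n0))
    print(f"n = {frac} n0 :  v_s^2 = {vs2[i]:.3f},  gamma = {gamma[i]:.3f}")
```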
To understand the origin of the similarity and difference between [31] and the PCM, we compare in Fig. 12 our prediction for P/ε with the conformality band obtained by the SV interpolation method [31]. We see that our prediction is close to, and parallel with, the conformality band. There are, however, basic differences between the two. First of all, in our theory conformality is broken (though perhaps only slightly at high density) in the system, which can be seen from the deviation from the conformal band. Most importantly, the constituents of the matter after the topology change in our theory are not (perturbatively) "deconfined" quarks. They are quasiparticles of fractional baryon charge, neither purely baryonic nor purely quarkonic; in fact they can be anyonic, lying on a (2+1)-dimensional sheet [53,54]. That the predicted P/ε deviates from the conformal band indicates that the scale symmetry the EoS of our theory is probing is some distance away from the IR fixed point, with a non-vanishing dilaton mass.
FIG. 10. Tidal deformabilities Λ_1 and Λ_2 associated with the high-mass M_1 and low-mass M_2 components of the binary neutron star systems GW170817, with chirp mass 1.188 M_⊙ (upper panel), and GW190425, with chirp mass 1.44 M_⊙ (lower panel). The constraint from GW170817 at the 90% probability contour is also indicated [79].
Gravitational wave.-We finally apply our theory to the description of the waveforms of gravitational waves [81]. The purpose is to explore whether one can probe the possible continuous crossover from hadrons to quarks, represented in terms of the topology change. For this purpose, we consider the typical values n_1/2 = 2n_0 and 3n_0 and a neutron star mass of 1.5 M_⊙.
The dominant mode of the GW strain, h^+_22, multiplied by the distance R of the observer from the origin, from BNS mergers is plotted in Fig. 13. The plot shows that the location of the topology change affects the number of inspiral orbits, i.e., the number of peaks in the inspiral phase, which is the number of peaks before the merger, defined as the maximum of the amplitude of the GWs. Explicitly, the larger the n_1/2, the larger the number of peaks. This could be within the detection ability of the on-going and up-coming facilities, especially the ground-based facilities [82]. The effect of the topology change on the waveforms can be understood from the evolution of the matter distribution in the BNS merger shown in Fig. 14. It is found that the matter evolves faster when n_1/2 = 2n_0 (the EoS is softer) than when n_1/2 = 3n_0 (the EoS is stiffer). Therefore the stars merge more easily, with a shorter inspiral period. This indicates that the waveforms of the gravitational waves emitted from the merger process, as well as the matter evolution, could be sensitive to the EoS of compact stars (see, e.g., [83]). This observation explains the waveforms of Fig. 13. There is, however, a caveat: given that no qualitatively striking differences are predicted for all other astrophysical observables so far studied for n_1/2/n_0 = 2 and 3, it appears unnatural that the waveforms come out so different for only slightly different locations of the topology change. Furthermore, since the transition involves no obvious phase change, at least within the framework, the seemingly different impact of the topology-change density (which is a coarse-grained description of the phenomenon) seems puzzling. It would be interesting to see whether the "microscopic" models that simulate the quark degrees of freedom for hadron-quark continuity show a similar sensitivity to the transition point. If the waveforms were indeed very sensitive to the precise location of the crossover, it would be extremely interesting.
VI. SUMMARY AND PERSPECTIVE
In this work we reviewed the effect of the topology change, representing the putative hadron-quark continuity, on dense nuclear matter. The hadron and nuclear matter properties obtained from the skyrmion-crystal approach, supplemented with the presumed emergent scale and flavor symmetries, inspired the construction of the pseudo-conformal model of dense nuclear matter relevant to compact stars. Locked to the density dependence of hadron properties effected by the topology change at n_1/2, the trace of the energy-momentum tensor of the model turns out to be a nonzero, density-independent quantity and induces the precocious appearance of the pseudo-conformal limit v_s^2 = 1/3, in stark contrast to what is widely accepted in the field [29].
So far, the pseudo-conformal model can describe the nuclear matter properties from low density to high density in a unified way. The nuclear matter properties calculated around the saturation density, the star properties such as the maximum mass, the mass-radius relation, the tidal deformability and so on all satisfy more or less satisfactorily the constraints from terrestrial experiments, astrophysical observations and gravitational wave detection.
Finally we state the possible caveats in and extensions of the model.
One can see explicitly from the above that, although the tidal deformability predicted in the approach satisfies the currently cited constraint from gravitational-wave detection, it lies at the upper bound. Should the bound turn out to move to a substantially lower value than what is given presently, the description of the cusp structure of the symmetry energy would need a serious revamping. In the present framework, the tidal deformability probes the density regime slightly below the topology-change density n_1/2, where the EoS is softer, and this is the density regime that is the hardest to control quantitatively in the coarse-grained approach. It would require a more refined V_lowk renormalization-group treatment than what has been done so far in [27], including the approximation made for the anomaly effect in the GD (genuine dilaton) scheme and the role of strangeness mentioned below.
One possible way to resolve the above caveat is to include the corrections to the LOSS, applied so far, in such a way that, in addition to the mass parameters, the coupling constants also carry IDDs. This procedure may change the property of the EOS in the vicinity of n 1/2 . As a consequence the sound velocity after the topology change may also expose bumps, i.e., fluctuations from the conformal limit, because of the explicit breaking of the conformal symmetry. However if the corrections from the explicit breaking of the conformal limit are taken as chiral-scale perturbation, the global picture of the compact star discussed would remain more or less intact.
Another point is the density at which the hidden scale and local flavor symmetries emerge. This is encoded in the IDDs of the hadron parameters such as pion decay constant, dilaton decay constant, ρ-N-N coupling and meson masses. By checking the effect of the location of the emergent symmetries on the star properties, one can also extract the information on the emergent symmetries and the phase structure of QCD at low temperature.
Lastly we have left out the strangeness in the present discussion. It seems to have worked well without it in our approach up to now. But there is of course no strong reason to ignore it. It could very well be that strangeness does play a crucial role but indirectly, buried in the coarse-graining in the approach. Or it could also be that strangeness does not play a significant role up to the density involved in compact stars. The chiral-scale effective theory that the pseudo-conformal description relies on is based on three flavor QCD with the scalar f 0 (500) taken on the same footing as the pseudoscalar mesons pion and kaon. There is however a good reason to believe that in nuclear dynamics, the dilaton scalar is strongly affected by medium whereas the kaon is not. Implementing the strangeness in our approach would require doing the V lowk RG for 3-flavor systems with the hyperons treated on the Fermi sea together with the nucleons as Fermi-liquid theory. This would then involve the kaon condensation as well as the hyperons as bound states of skyrmions and kaons. As argued in [84], it could postpone the role of strangeness to a much higher density than relevant to the most massive compact stars stable against gravitational collapse. What happens beyond, such as color-flavor-locking, would be irrelevant to the problem.
One excuse for ignoring the strangeness could be that the whole thing works without it, so why not adopt the spirit "Damn the torpedoes! Full speed ahead!" 3 and proceed until hit by a torpedo? | 11,719.4 | 2021-03-01T00:00:00.000 | [
"Physics"
] |
Application of Data Mining in English Online Learning Platform
English is becoming more and more important in our life, and English learning is now conducted anytime and anywhere. With high-tech products becoming more and more popular, learning English through mobile phones and similar products is very convenient. There are numerous platforms for English online learning, but they provide only a single, uniform kind of learning content. All learners, no matter what their learning purposes are, receive the same learning content, and problems follow from this. Based on an analysis of the current situation, this paper puts forward solutions, a case analysis and conclusions. The application of data mining technology to an English online learning platform provides 80% of the ideas for the construction of the online learning platform. Statistics show that nearly 70 million people study online every year.
Introduction
English is the most widely used language in the world. In our country, English education starts from kindergarten, and English learning has become a compulsory course for students. English is also very important in our work and life. Because of the importance of English, adults who have already joined the workforce can no longer enter school to learn English and can only learn it through various platforms, which allows them to learn wherever and whenever they are.
The application of data mining in English online learning platforms has attracted the interest of many experts and has been studied by many academic teams. For example, some have found that many employees learn English to meet their personal needs rather than for training's sake, but they often see single, unhelpful learning content on the learning platform, assembled only to complete certain training tasks and stacked together at random [1]. Some have found that there is less and less theoretical and practical research focusing on learners' learning support services. Many scholars and researchers in China attach importance to the study of learner support services and consult the literature; there are countless articles in this respect, but they remain at the research stage and involve little practice. The platform construction of every network college is supposed to support the learner's learning support services, but this often stays at the slogan stage and real services are rare. In contrast, the various commercial online learning platforms provide relatively more learning support services because they are run for profit: only by providing more and better services to learners can they attract and retain them [2]. Many scholars have found that data mining has been effectively applied in many industries, and its application in education is more and more extensive, but its application to online English learning in China is almost zero. Many studies on English online learning do not mention data mining. As English online learning becomes more and more widespread, a large amount of online learning data piles up, and it is necessary to apply data mining technology to online learning. Data mining technology can help us to find problems in online learning from the point of view of data, objectively reflect the problems in the online learning platform, and improve the guidance of online learning quality [3]. Other teams have found that the development of online learning abroad is relatively mature and the platforms for online learning are relatively complete; the better-known platforms include the online learning platforms of British universities, which many domestic scholars and researchers have studied [4]. At the same time, governments, schools, scientific research institutes and training institutions attach great importance to the construction of online learning platforms, fully analyze the needs of learners, and meet the individualized learning needs of adult learners by developing rich online learning content. The evaluation of online learning is also emphasized to ensure its quality [5]. Foreign online learning platforms attach great importance to learners' learning support services. For example, some online learning platforms have a module, "Learning Support Service", which details the learning support services provided by the platform and how different types of learners should study on it [6]. According to the research of foreign scholars, learning support services can be divided into academic and non-academic support services. In terms of academic support, foreign online learning platforms mainly provide services for the problems encountered by learners in course learning. These services are very mature abroad and are recognized by many online learners [7]. Although these research results are very rich, there are still some shortcomings.
After more than ten years of development, data mining technology has made a lot of research results in foreign countries. At the same time, more and more large and medium-sized enterprises have begun to use this technology to analyze and excavate the current situation of their own companies. Assist in decision-making on major issues. In China, data mining has gradually changed from simple research to comprehensive research. In the application stage, the demand for data mining technology in China is increasing. However, the application of data mining technology in the field of education, especially in English online learning platform, still has a lot of room for growth. This paper analyzes and discusses the application of data mining in English online learning platform.
Calculation of Fitness Values
In genetic algorithms, the value of the fitness function is often used to evaluate the quality of individuals in a population. The fitness function is obtained by a transformation of the objective function; the larger its value, the better the individual. The fitness function is given by formula (1), where e_i is the deviation between the expected and the actual value distribution of attributes such as the numbering range and the theme discussed above, and w_i is the weight of each deviation. It can be seen from the formula that, when the constraint error of a content individual with respect to the content organization is small, the fitness value is larger, which indicates that the extracted content individual is closer to the desired content organization [8].
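Since formula (1) itself is not reproduced in this excerpt, the sketch below uses one common reciprocal form with the stated behaviour (smaller weighted deviation, larger fitness); the deviation and weight values are invented for illustration.

```python
def fitness(deviations, weights):
    """One possible fitness of the form described in the text: the smaller the
    weighted deviation between expected and actual attribute distributions,
    the larger the fitness.  The exact formula (1) is not reproduced in the
    excerpt, so this reciprocal form is an illustrative assumption."""
    weighted_error = sum(w * e for w, e in zip(weights, deviations))
    return 1.0 / (1.0 + weighted_error)

# e_i: deviations for attributes such as number range, type, grade, scope, theme
deviations = [0.10, 0.05, 0.20, 0.00, 0.15]
weights    = [0.30, 0.20, 0.20, 0.15, 0.15]   # proportions w_i, summing to 1
print(f"fitness = {fitness(deviations, weights):.3f}")
```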
Differential Coefficient Analysis
The coefficient of difference (coefficient of variation) reflects, through the amount of variation in a data sample, the trend of population separation, that is, the degree of differentiation. This coefficient reflects the different needs of students in foreign-language teaching. It weights the standard deviation by the average, i.e., it uses the average score as the reference for the difference. Formula (2) gives the coefficient of difference, CV = S/V, where CV is the coefficient of difference, S the standard deviation and V the average. Experience shows that CV values typically range from 5% to 35%. If CV > 35%, one may question whether the average is meaningful; if CV < 5%, one should question whether the calculated value is wrong. In educational evaluation, teachers and school administrators also need to analyze differences in order to judge the learning differences between students across different subjects and within the same subject. Empirical markers of differentiation are: if CV < 9%, there is little differentiation; if CV > 20%, differentiation is severe; if 9% < CV < 20%, there are signs of differentiation [9].
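The calculation and the empirical thresholds described above can be expressed compactly as follows; the score list is invented for illustration.

```python
import statistics

def coefficient_of_variation(scores):
    """CV = S / V (standard deviation over mean), reported as a percentage."""
    s = statistics.stdev(scores)   # sample standard deviation S
    v = statistics.mean(scores)    # average V
    return 100.0 * s / v

def interpret(cv):
    # Thresholds quoted in the text (empirical markers of differentiation).
    if cv < 5:
        return "check whether the calculated value is wrong"
    if cv < 9:
        return "little differentiation"
    if cv <= 20:
        return "signs of differentiation"
    if cv <= 35:
        return "severe differentiation"
    return "question whether the average is meaningful"

scores = [62, 71, 55, 88, 93, 47, 76, 69, 81, 58]   # illustrative test scores
cv = coefficient_of_variation(scores)
print(f"CV = {cv:.1f}%  ->  {interpret(cv)}")
```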
Assessment Scale Method
In the investigation, the five-segment evaluation is called the assessment scale method. Through the graph structure, the five-segment evaluation data are analyzed by the structural analysis method and a structural equation model is established. The relationship between latent variables such as learning personalization and learning satisfaction is described by the structural equations; the mathematical representation of the measurement model follows [10]. According to the task requirements, in order to mine the strong association rules of each functional module in the test version of the mobile learning platform, it is necessary to use the Apriori algorithm to collect and organize the data before mining. The minimum support count is therefore given by formula (5), 135 = 10% × 1346 (5), and the minimum confidence is 80%.
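The support-counting step that precedes Apriori rule mining, together with the minimum-support threshold quoted above (135 = 10% × 1346 for the paper's data), can be sketched as below on a toy transaction set; the module names are placeholders.

```python
from collections import Counter
from itertools import combinations

# Toy usage records of platform modules (placeholder names).
transactions = [
    {"listening", "reading"},
    {"listening", "speaking", "writing"},
    {"reading", "writing"},
    {"listening", "reading", "writing"},
    {"speaking"},
]

# The paper uses a 10% minimum support, giving 135 = 10% * 1346 records;
# a larger ratio is used here only so that the toy example filters something.
min_support_ratio = 0.4
min_count = min_support_ratio * len(transactions)

# First pass of the Apriori algorithm: count every 1-itemset and 2-itemset.
counts = Counter()
for t in transactions:
    for item in t:
        counts[frozenset([item])] += 1
    for pair in combinations(sorted(t), 2):
        counts[frozenset(pair)] += 1

frequent = {s: c for s, c in counts.items() if c >= min_count}
for itemset, c in sorted(frequent.items(), key=lambda kv: -kv[1]):
    print(set(itemset), c)
```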
Source of Experimental Data
The main research subjects of this paper are adults who learn English for work, life or other needs. Most of them learn online, in training institutions or on online training platforms within enterprises. A total of 3000 questionnaires were issued and 2600 were collected. According to the principle of complete and accurate information, 2531 valid questionnaires were retained and entered into an Excel form. Using SPSS 22.0, the reliability test gave a Cronbach's alpha of 0.938 and a KMO value of 0.955, and Bartlett's test was passed, so the 2531 questionnaires have acceptable reliability and validity.
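For readers who wish to reproduce the reliability check, Cronbach's alpha can be computed from the item-score matrix as in the sketch below; the responses shown are invented and are not the survey data of this study.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Tiny illustrative 5-point Likert responses (not the paper's survey data).
responses = [
    [4, 5, 4, 4], [3, 3, 4, 3], [5, 5, 5, 4],
    [2, 3, 2, 3], [4, 4, 5, 4], [3, 2, 3, 3],
]
print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")
```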
Experimental Design
Collection of data, data preprocessing, selection of analysis methods, and analysis of results.
Establishing Learning Content Indicators
At present, the learning content of English online learning platforms is uniform: all learners learn the same content, and content is not provided according to their own needs, which leads to poor learning outcomes for many learners. Therefore, this paper aims to provide a suitable set of learning content for each English learner to help them master English. At present, the learning content of many English learning platforms is presented through exercises, which are stored in a database and shown to students in the form of web pages, so it is very important to define the attributes of the learning content. Combined with the analysis and design of the first two sections of this chapter, the learning content index system defined in this paper includes the number, type, grade, scope and theme, as shown in Table 1 below. These indicators are the key to organizing the learning content, and the problem we need to solve is to find the optimal combination for each learner. (1) The number is the identifier of a learning content item in the database; the number of each learning content item in the content library is unique. (2) The type refers to the type of exercise in English learning. The case considered in this paper is the EF online learning system, so for the learning content of this system the specific exercise types can be divided into writing questions, oral exercises, matching questions and sorting questions. (3) The grade indicates which English level each learning content item belongs to. (4) The scope refers to which English module (listening, speaking, reading or writing) the specific learning content belongs to; different learning content cultivates different English abilities of the learners. (5) The theme refers to which topic the specific learning content exercises are about, such as reading comprehension on climate topics. The themes in this paper are defined according to the student career scope defined by EF online learning.
Collecting Data
In this paper, the data from students' level tests are extracted from the EF online learning platform for cluster analysis. The level test is divided into four modules: listening, speaking, reading and writing. At the same time, the data used for comparative analysis are extracted from the learners' basic data as a basis for interpreting the cluster mining results, including age, sex, occupation, position and learning reasons. Because the research in this paper mainly concerns online learning, the subjects are adults over 20 years old, and occupation, position and learning reasons are taken from the EF English online learning platform.
Data Mining
As the first step of the Apriori algorithm, we first perform technical statistics on each item set of previously preprocessed data sources, and the results are shown in figure 1 below.
Differentiation Coefficient
Because the index distribution alone cannot reflect individuals' clear cognition of English online learning (it only covers the stages of learning knowledge, learning environment conditions, learners' knowledge, skills and abilities, and learners' motivation), a difference-coefficient analysis of the survey data for the above four standard items is carried out, and the results are shown in Figure 2 below.
Conclusion
Aiming at the problem that current English online learning systems provide only a single kind of learning content, this paper develops, from the point of view of data mining, a tool to provide individualized learning content for adult learners, and provides guidance and help for teachers' online teaching. The main research results of this paper are as follows. 1. According to the clustering analysis algorithm, the evaluation results of learners before learning are analyzed, the learners are clustered, their English scores are determined, and teachers are guided to arrange learning content and assign study groups for them. By clustering students, we can understand the students' English level more clearly, lay the foundation for the subsequent learning content, and provide more individualized learning content for learners. 2. In the analysis of learning content, English learning is divided into four modules: listening, speaking, reading and writing. Through association-rule analysis, the relevance of each English module is analyzed and the association rules between contents are obtained, so teachers can know which module's problems lead to low English proficiency. The results of the association rules can also be used as a basis for teachers to provide individualized learning content for learners. There are still some shortcomings in this study, so there remain many problems that need further study. | 3,001.6 | 2021-04-01T00:00:00.000 | [
"Computer Science",
"Education"
] |
Demonstration of magnetic force in the process of studying physics
A conductor with current flowing through it and placed in a magnetic field is acted upon by a force. It is called magnetic force. This paper describes two simple small-sized devices to demonstrate this force
Introduction
The force f that acts on a charged particle moving in a magnetic field is called the Lorentz force. It can be calculated using the following formula [1]: f = q[v × B], where q is the charge of the particle, v is its velocity and B is the magnetic field induction (vector quantities are indicated in bold).
If an electric current of magnitude I flows through a conductor, then all free carriers in it move with a directed speed v. During the time Δt, the carriers pass through a section of the conductor of length Δl. The magnitude of the current I is then equal to I = ΔQ/Δt, where ΔQ is the total charge flowing during the time Δt through the cross section of the conductor.
All particles move with speed v = Δl/Δt, and the Lorentz force acting on each particle is equal to f = (q/Δt)[Δl × B]. In this formula, the small section of the conductor of length Δl is written in vector form; the direction of this vector coincides with the direction of movement of the positively charged particles.
The total force ΔF acting on all particles in this section of the conductor is ΔN times greater, where ΔN is the number of particles: ΔF = ΔN·f = (ΔQ/Δt)[Δl × B] = I[Δl × B], where ΔQ is the total charge of all particles moving directionally with speed Δl/Δt in the piece of conductor of length Δl.
If we are given a straight conductor of non-negligible length L, then all the forces ΔF acting on the small sections Δl that make up the conductor are directed in the same direction. In this case, the total Lorentz force acting on all charged particles in this conductor (and as a result on the entire conductor as a whole) is equal to F = I[L × B]. This force F is called the magnetic force (it is sometimes also called the Lorentz or Ampere force). This force is quite small. As can be seen from the last formula, its value is proportional to the current flowing through the conductor. Therefore, in the classic version, when demonstrating this in a large classroom, a current of tens of amperes is used, passed through a copper foil tape. The tape is easily deflected under the influence of a small force, and its area provides good cooling when a high current flows.
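For a uniform field, the magnitude of this force is F = B·I·L·sin α. A small sketch with values of the order used later in the article (B of tens to a hundred millitesla near a flat neodymium magnet, currents of a few to a few tens of amperes) shows why the deflection is modest; the 3 cm effective wire length above the magnet is an assumption.

```python
import math

def magnetic_force(B, I, L, alpha_deg=90.0):
    """Magnitude of the force F = B*I*L*sin(alpha) on a straight conductor
    of length L carrying current I in a uniform field of induction B."""
    return B * I * L * math.sin(math.radians(alpha_deg))

for B in (0.04, 0.16):           # tesla, range measured near the flat magnets
    for I in (5.0, 25.0):        # amperes, roughly the two device versions
        F = magnetic_force(B, I, L=0.03)   # assumed 3 cm of wire in the field
        print(f"B = {B*1e3:5.0f} mT, I = {I:4.1f} A  ->  F = {F*1e3:6.1f} mN")
```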
At the same time, in such a demonstration experiment it is possible to use, as the conductor deflected in the magnetic field, a small section of an ordinary stranded installation wire. Since in this case the dimensions of the conductor, as well as its deflection, are small, when demonstrating the experiment to a large classroom one needs to use a document projector with a screen. The device itself for conducting the experiment can then be very small.
Device structures
When designing a mini-device to demonstrate magnetic force, it was decided to use one AA alkaline battery as a current source.This choice ensures the low cost of the device, its simplicity, as well as its autonomy during the experiment.The electronic circuit of such a simple device is shown in Figure 1.
Figure 1 Electronic circuit of the first version of the device
An insulated stranded wire was used as the conductor L deflecting in the magnetic field. Its cross-section was 0.2 mm² and its total length was 26 cm. The conductor was installed between two clamp terminals XS3 and XS4. To achieve greater elasticity and mobility when the magnetic force acts, the ends of the conductor were twisted into a spring. The measured resistance of the conductor L was 20 milliohms.
A small permanent magnet M (see Fig. 1) was placed under the middle of the conductor. Using a push-button switch SB2, an electric current was passed through the conductor for a short time (0.2…1.0 s). The source of the magnetic field was a flat neodymium magnet. Table 1 shows the measured values of the average magnetic induction near the surface of some flat neodymium magnets at the author's disposal.
As we see, the magnetic induction, depending on the size of the magnet, is 40…160 mT. As can be seen from Figure 1, the current through the conductor L is practically a short-circuit current, which depends primarily on the internal resistance of the battery. Table 2 shows the average internal resistance values of fresh AA batteries of four types (brands). The internal battery resistances were measured with a load of 2.7 ohms. The electromotive force of all batteries was in the range of 1.56…1.62 volts. There were five copies of each brand. The data presented in Table 2 show only the internal resistance of the batteries at the author's disposal (they cannot be considered statistical data for the power sources of the listed brands).
Thus, from the point of view of the internal resistance of AA batteries, the current through the conductor cannot exceed approximately 8 amperes. In reality it will be even less. Contributing to the total resistance of the circuit during a short circuit are the resistance of conductor L, equal to 20 milliohms, the contact resistances of XS1…XS4, as well as the contact resistance of the high-current switch SB2. Measurements show that the resistance of this switch can reach several tens of milliohms. Thus, the maximum current will already be only 5…6 amperes. In addition, due to oxidation of the contacts, it may turn out to be unstable.
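The estimate can be reproduced with the simple series-resistance model below; the battery internal resistance and the terminal contact resistances are assumptions chosen to match the ~8 A and 5…6 A figures quoted above.

```python
# Rough estimate of the current through conductor L in the battery-only circuit.
emf = 1.6              # V, fresh AA cell
r_internal = 0.20      # ohm, assumed battery internal resistance (Table 2 order)
r_conductor = 0.020    # ohm, measured resistance of conductor L
r_switch = 0.040       # ohm, high-current push-button SB2 (tens of milliohms)
r_contacts = 0.030     # ohm, assumed total for terminals XS1...XS4

i_ideal = emf / r_internal
i_real = emf / (r_internal + r_conductor + r_switch + r_contacts)
print(f"battery-limited current : {i_ideal:.1f} A")
print(f"with circuit resistances: {i_real:.1f} A")
```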
Figure 2 Electronic circuit of an improved version of the device
Push-button switch SB1 is used to measure battery voltage with a voltmeter PV.
As experiments show, for this version of the device the deflection of the central part of the conductor is insignificant and, as a rule, does not exceed several millimeters. This, in turn, may affect students' perception of the magnetic force.
The following device, the electronic circuit of which is shown in Figure 2, helps to eliminate the influence of the internal resistance of the battery, battery socket terminal resistance, and the contact resistance of the switch SB2.
The device uses an energy store in the form of a supercapacitor and an electronic switch based on a MOSFET transistor. The capacitance of the supercapacitor is 40 farads with a maximum permissible voltage not exceeding 2.7 volts. Its internal resistance does not exceed 30 milliohms [2]. A MOSFET transistor IRLB3034 is used as the electronic switch; its drain-source resistance at a gate voltage of 5 volts does not exceed 2 milliohms [3]. To increase the voltage from 1.5 volts to 5 volts, a DC-DC module is used which, at an input voltage of 0.9…5 volts, provides an output voltage of 5 volts. Note that in this circuit the switch SB2 may be a low-current one.
When the SB1 is turned on, the supercapacitor begins to charge.The voltage on it is controlled using a voltmeter PV.
It takes some time, no more than one minute, to fully charge the supercapacitor. Then, with the flat magnet located under the middle of the conductor L, the push-button SB2 is pressed for 0.2…1.0 s. Depending on the elongation (elasticity) of the conductor, the deflection of its middle should be from several millimeters to two centimeters.
The dependence of the current in the conductor on time (when the switch SB2 is pressed for 0.4 sec) is shown in Figure 3.The capacitor was in this case charged to a voltage of 1.5 volts.
Figure 3 Dependence of current strength in a conductor on time
In this case, the initial current reaches a value of 25 amperes, which is approximately five times more than in the initial, simplest version of the device. Thus the total resistance of the discharge circuit is approximately 60 milliohms. Of this value, 20 milliohms is due to the resistance of conductor L, approximately 30 milliohms to the internal resistance of the supercapacitor, and the rest to the resistance of the transistor, the resistance of the XS3 and XS4 terminals and the remaining wires of the circuit.
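Treating the discharge loop as a simple RC circuit with the values quoted above reproduces the initial 25 A current; the model ignores the wiring inductance and the MOSFET switching time.

```python
import math

# RC discharge of the supercapacitor through the conductor.
U0 = 1.5        # V, capacitor charged to the battery voltage
C = 40.0        # F, supercapacitor capacitance
R = 0.060       # ohm, total discharge-loop resistance (~60 milliohms)

def current(t):
    """Discharge current I(t) = (U0/R) * exp(-t / (R*C))."""
    return (U0 / R) * math.exp(-t / (R * C))

for t in (0.0, 0.1, 0.2, 0.4):
    print(f"t = {t:.1f} s : I = {current(t):5.1f} A")
```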
It should be noted here that the presence of magnetic force can be demonstrated without waiting for the supercapacitor to charge to the full battery voltage.You can conduct an experiment with the voltage of a charged capacitor, starting from 0.8 … 0.9 volt.Naturally, the deflection of the conductor will decrease somewhat.The appearance of the second (improved) version of the device is shown in Figure 4.
If you want to increase the current in the conductor, you need to increase the capacitance of the supercapacitor. In this case, the internal resistance of the supercapacitor decreases and, consequently, the total resistance of the discharge circuit also decreases. For example, when the capacitance of the supercapacitor was increased from 40 farads to 90 farads by the parallel connection of an additional capacitor (all supercapacitors were of the JGNE brand), the initial current in the discharge pulse increased from 25 amperes to 33 amperes (of course, this also roughly doubles the charging time of the supercapacitor).
Conclusion
In conclusion, we can say that the described device has been used repeatedly in physics classes in gymnasiums and high schools to demonstrate the action of the magnetic force. Knowing the direction of the current in the conductor and the direction of the magnetic field induction vector, we can use formula (1) to predict the direction of the magnetic force acting on the conductor and confirm it experimentally. The device has a simple design, small weight and size, and does not require external power. It is also characterized by great reliability. One AA battery is enough for a large number of demonstrations.
Figure 4
Figure 4 Appearance of the second version of the device
Table 1
Magnetic induction near the surface of flat magnets
Table 2
Average internal resistance of AA batteries | 2,228.8 | 2024-03-30T00:00:00.000 | [
"Physics",
"Engineering"
] |
Design and Molecular Docking Study of Antimycin A 3 Analogues as Inhibitors of Anti-Apoptotic Bcl-2 of Breast Cancer
In this paper, we report the design and molecular docking study of analogues of antimycin A3 as inhibitors of the anti-apoptotic Bcl-2 of breast cancer. Twenty designed compounds and the original antimycin A3 were docked based on their interaction with the breast tumor receptor binding target Bcl-2. The docking resulted in five top-ranked compounds, namely compounds 11, 14, 15, 16, and 20, which have a lower ∆G binding energy, better affinity and stronger hydrogen bonding interactions with the active site of Bcl-2 than antimycin A3. Among these five top-ranked compounds, the analogues 11 and 14, which have an 18-membered tetralactone core and an 18-membered tetraol core, respectively, exhibited the strongest hydrogen bond interactions, formed the most stable conformations, and demonstrated the greatest inhibitory activity on the catalytic site of Bcl-2.
Introduction
Breast cancer is the most prevalent cancer among women in both the developed and the developing world. Approximately 30% of the women diagnosed with early-stage disease eventually progress to metastatic breast cancer, for which treatment with anti-breast cancer therapeutic agents is needed. Although many current anti-breast cancer therapies can alter tumor growth, in most cases the effect is not long-lasting. Cancer drug resistance is thought to seriously reduce the effectiveness of current anti-breast cancer therapies, causing around 50% of all treated patients to relapse [1]-[3]. This indicates a need for new agents that are safer, more effective, and potentially able to extend the survival of breast cancer patients.
Antimycin A 3 , a mixture of the two nine-membered dilactones A 3a and A 3b isolated from Streptomyces sp., is an active agent that inhibits the electron transfer activity of ubiquinol-cytochrome c oxidoreductase and prevents the growth of human cancer cells (Figure 1).Antimycin A 3 was also found to induce apoptosis of cancer cells by selectively killing the cancer cells that expressed high levels of anti-apoptotic Bcl-2 with IC 50 of 50 µM on Hela cells [4]- [6].While Bcl-2 is known to be over-expressed in 70% of breast cancer cells [7], it is reasonable to expect antimycin A 3 to induce apoptosis in those cells.Thus, it is also quite reasonable to expect its analogue to have a similar or higher anti-breast cancer activities.In this work, antimycin A 3 analogues (Figure 2), are designed and subsequently simulated based on their interactions with receptor binding target Bcl-2 by a computational molecular docking approach.The top-ranked compounds showing stronger interaction, better affinity, as well as a greater inhibitory activity than antimycin A 3 against breast tumor receptor binding target Bcl-2, may become lead compounds in our next synthesis project.
Studies on the structure-activity relationship of antimycin A3 by Miyoshi et al. in 1995 revealed that the nine-membered dilactone core in antimycin A3 was less effective for anticancer activity than the 3-formamidosalicylyl moiety [8]. Pettit et al. (2007) reported that respirantin, which has an 18-membered polylactone core instead of the nine-membered dilactone core of antimycin A3, showed stronger cytotoxicity than antimycin A3 on mouse leukemia P-388 cells and breast MCF-7 cells [9] (Figure 1). It has also been reported that the presence of hydroxyl groups in bioactive compounds significantly increases their biological activities due to the enhancement of solubility in water, which is one of the important factors influencing the efficacy of drugs [10]. These facts suggest that it is quite possible to design novel antimycin A3 analogues by replacing the nine-membered dilactone core in antimycin A3 with either an 18-membered polylactone core or a polyhydroxylated 18-membered polylactone core, which contributes to the improvement of its anticancer activity.
In a recent study, we succeeded in synthesizing of novel polyhydroxylated 18-membered analogue of antimycin A 3 (compound 14), which showed a potent anticancer activity against breast MDA-MB-231 cells [11].It revealed that the polyhydroxylated 18-membered core was very important for anti-breast cancer activity.Therefore, in this work, the nine-membered dilactone core of antimycin A 3 was replaced by an 18-membered tetralactone core in analogues 10, 11 and 12, and was replaced by polyhydroxylated 18-membered tetralactone core in 13 -20.To study how the stereochemistry can affect the binding capability, we designed four hydroxyl groups with bottom facial stereochemistry on the 18-membered core in 13 -16, and, in contrast to those analogues, with the top facial stereochemistry in 17 -20.Subsequently, in order to increase the anticancer activity, we introduce two parts of 3-formamidosalicylyl moiety in 11, 14, and 18.To explore how the simple substitutions on 3formamidosalicylyl moiety can influence the binding capability on receptor target Bcl-2, we substitute the existing hydroxyl group in formamidosalicylyl moiety with benzyloxy group in 10, 13, and 17.Whereas 12, 15, and 19 were designed by replacing the hydroxyl group in 3-formamidosalicylyl moiety with methoxy group.Furthermore, 3-formamidosalicylyl moiety was modified into 3-N-methylformamido-2-methoxy-benzoyl moiety in 16 and 20.In this work, we also investigated the interaction of some benzoic acid ring segments (compound 3 -9), the 18-membered tetralactone (1), and 18-membered tetraol (2) on receptor binding target Bcl-2 of breast cancer.
Methodology
In this research, we simulated some analogue compounds based on their interactions with Bcl-2 breast cancer, using computer software applications (Molecular method) [12] to determine the best compounds [13].Analysis and screening were based on Gibbs Free energy (∆G) values, affinity, conformation of the structure, and hydrogen bonding interaction between compounds and the target proteins [14].
Sequence Alignment and Homology Modelling
Target protein sequences were selected and downloaded from NCBI (http://www.ncbi.nlm.nih.gov/protein/133893254?report=fasta). The multiple sequence alignment was based on the Clustal W2 program (www.ebi.ac.uk/Tools/clustalw2/index.html). Homology modeling was performed using the Swiss Model, which can be accessed through http://www.swissmodel.expasy.org/SWISS-MODEL.html. The Swiss Model showed that Bcl-2 is structurally homologous to a target protein with template PDB code 1g5mA (target region 3-204, 88.00% sequence identity).
Structural Analysis of Target Protein
Validation of the 3D structure from homology modeling was performed using the Protein Geometry program, and superposition was performed using the superpose program in the MOE 2009.10 software. Based on the superposition, the RMSD was calculated to determine the structural similarity between the template model and the 3D structure from homology modeling. The catalytic site of the target protein was identified using the Site Finder program in the MOE 2009.10 software.
Optimization and Minimization of 3D Structure
Optimization and minimization of three-dimensional structure of the enzyme were conducted using the software of MOE 2009.10 with addition of hydrogen atoms.Protonation was employed with protonating 3D programs.Furthermore, partial charges and force field were employed with MMFF94x.Solvation of enzymes was performed in the form of a gas phase with a fixed charge, RMS gradient of 0.05 kcal/A 0 mol, and other parameters using the standard in MOE 2009.10 software.
Preparation of Compounds
Some antimycin A3 analogues were designed using the ACD Labs software, with which the analogues were built into three-dimensional structures. The three-dimensional shape was obtained by saving the analogue in the 3D viewer in ACD Labs. The output format was then converted to MDL Molfile format using the Vegazz software to conform to the docking process. The compounds were washed with the compute program, their partial charges were adjusted, and partial-charge optimization was performed using the MMFF94x force field. The conformational energy of the compounds was minimized using an RMS energy gradient of 0.001 kcal/(Å·mol). Other parameters were in accordance with the default settings of the software.
Molecular Docking
The docking process began with docking preparation, which was carried out using the docking program of the MOE 2009.10 software. Docking simulations were performed with the Compute-Simulation dock program. The placement method used was the Triangle Matcher, with 1,000,000 repetition energy readings for each position; other parameters were in accordance with the default settings of the MOE software. The scoring function used was London dG, with force-field refinement of the configurations and a population of 1000. The first run was repeated 100 times, and the second setting was applied only to the single best result.
Results and Discussion
The twenty designed compounds, including the analogue compounds, 18-membered polylactones, and simple benzoic acid ring segments, were simulated using molecular docking on target protein of Bcl-2 breast cancer.The results are displayed in Table 1.The top-ranked compounds were selected based on low ∆G binding energy, high pK i affinity, and number of hydrogen acceptor/hydrogen donors (hydrogen bonding interaction) to the catalytic site of Bcl-2 target protein.
As shown in Table 1 (where the blue color marks the top-ranked compounds), compared to antimycin A3 and respirantin, the 18-membered tetraol (2) exhibited higher binding energy, affinity, and hydrogen bond interaction with Bcl-2 of breast cancer cells, indicating that tetraol 2 has a stronger inhibitory activity against the receptor target Bcl-2. In contrast, all of the benzoic acid ring segments, 3-9, showed a smaller number of hydrogen bonds than antimycin A3 and respirantin, suggesting that the benzoic acid ring segment by itself interacts only weakly with the protein target Bcl-2 of breast cancer. The docking of the analogue compounds 10-20 produced the five top-ranked compounds, namely compounds 11, 14, 15, 16, and 20, which showed lower ∆G binding energy values and a larger number of hydrogen bonding interactions than the other compounds. The ∆G values of compounds 11, 14, 15, 16 and 20 are −16.4486, −15.9491, −15.0703, −17.1838 and −17.1553 kcal/mol, respectively, which are better than those of antimycin A3 and respirantin, with a ∆G value of −11.4295 kcal/mol. These results show that, compared to antimycin A3, those five top-ranked compounds will form a more stable complex with Bcl-2 and will be better able to inhibit and reduce the activity of Bcl-2. The pKi values of the five top-ranked compounds are higher than that of antimycin A3, indicating that
they have a higher affinity and interact effectively with the target Bcl-2. Moreover, all five top-ranked compounds have a larger number of hydrogen acceptor/hydrogen donor interactions than antimycin A3, which indicates greater inhibitory activity on the receptor target Bcl-2.
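The connection between the two ranking criteria can be illustrated with the standard thermodynamic relation ΔG = RT ln Ki, i.e. pKi = −ΔG/(2.303·R·T). Note that the pKi values reported by MOE's scoring need not coincide exactly with this conversion, so the numbers below, computed from the ΔG values quoted in the text, are only indicative.

```python
import math

R_KCAL = 1.987e-3      # gas constant [kcal/(mol*K)]
T = 298.15             # assumed temperature [K]

def pki_from_dg(dg):
    """pKi estimated from the binding free energy via dG = R*T*ln(Ki),
    i.e. pKi = -dG / (2.303*R*T); a more negative dG means a higher affinity."""
    return -dg / (math.log(10.0) * R_KCAL * T)

# Binding energies quoted in the text (kcal/mol).
dg_values = {
    "antimycin A3": -11.4295,
    "compound 11": -16.4486,
    "compound 14": -15.9491,
    "compound 15": -15.0703,
    "compound 16": -17.1838,
    "compound 20": -17.1553,
}

# Rank from most to least negative dG and report the corresponding pKi estimate.
for name, dg in sorted(dg_values.items(), key=lambda kv: kv[1]):
    print(f"{name:>13s}: dG = {dg:8.4f} kcal/mol  ->  pKi ~ {pki_from_dg(dg):.2f}")
```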
The catalytic site of the Bcl-2 breast cancer cells are Arg10, Glu11, Met14, Trp28, Asp29, Ala30, Gly31, Asp32, Val34, Glu46, Asn37, Asp168, and Ala171.If a compound interacts with the catalytic site of the protein target, it will reduce the activity of the target protein, and change the protein conformation.Generally, the interaction of the compound with the complex protein target is the hydrogen bond.The quantities of hydrogen bond interactions of the compound with the catalytic site of the target protein indicate its ability to inhibit the protein target.Figure 3 displays the ligand complex interaction of the five top-ranked compounds (11, 14, 15, 16, and 20) and antimycin A 3 with the receptor target Bcl-2.As shown, all the five of top-ranked compounds could change the conformation of the receptor target cavity, and were able to enter the binding site of the receptor target Bcl-2.In addition, compared to antimycin A 3 , those five top-ranked compounds showed more hydrogen binding interaction against Bcl-2.Hydrogen bond interactions between amino acid residues of Bcl-2 breast cancer with 11, 14, 15, 16, 20 and antimycin A 3 are summarized in Table 2.
As shown in Table 2, all the analogue compounds 11, 14, 15, 16 and 20 have a higher number of hydrogen bonds to the protein target Bcl-2 than that of the original antimycin A 3 .Compared to respirantin, compound 11, 14 and 20 have a higher number of hydrogen bonds to the Bcl-2.Both compound 11 and 14 which form four The red color represents the catalytic site of Bcl-2.hydrogen bonds to the active site of Bcl-2, exhibited the strongest inhibitory activity.Compound 11 binds to the Bcl-2 catalytic site at the Glu11, Asp32, Asp168 and Arg10 residue whereas, 14 binds to the Bcl-2 catalytic site at the Glu11, Asp32 (two hydrogen bonds) and Glu46 residues.The three dimensional conformation of the two best analogues 11 and 14 on the catalytic site Bcl-2 are given in Figure 4.The docking results in Figure 4 revealed 11 which bears an 18-membered tetralactone core and two parts of 3-formamidosalicylyl moiety as a ligand, has more binding interaction, a more stable conformation and a stronger inhibitory activity on the catalytic site of Bcl-2 than antimycin A 3 .Similar to 11, compound 14 bearing the18-membered core with four hydroxyl groups at the bottom facial stereochemistry as a ligand, also showed stable conformation and strongly inhibited the activity of the Bcl-2 catalytic site.Consistent with the previous in-vitro assay [11], the docking result of synthesized analogue 14 confirmed that introducing two parts of 3-formamidosalicylyl moiety and replacing the nine-membered dilactone core of antimycin A 3 with the 18-membered tetraol core in 14 could remarkably increase its anti-breast cancer activity.Moreover, replacing the nine-membered dilactone core of antimycin A 3 with the 18-membered tetralactone core in analogue 11, could also greatly improve its inhibitory activity against the receptor target Bcl-2 of breast cancer.Thus, compound 11 and 14 are promising candidates for new anti-breast cancer agents, and should be considered as the lead compounds in the next synthesis project.
Conclusion
In conclusion, we have simulated twenty designed compounds by molecular docking approach.Among them, the analogues 11 and 14 which have an 18-membered tetralactonecore and 18-membered tetraol core, respectively, demonstrated stronger inhibitory activity and greater interaction with amino acid residues in the catalytic site of Bcl-2 breast cancer compared to the original antimycin A 3 .
Table 1 .
The properties of twenty designed compounds and antimycin A 3 on the catalytic site of Bcl-2. | 3,060.4 | 2014-09-23T00:00:00.000 | [
"Medicine",
"Chemistry"
] |
High-speed multi-objective Fourier ptychographic microscopy
The ability of a microscope to rapidly acquire wide-field, high-resolution images is limited by both the optical performance of the microscope objective and the bandwidth of the detector. The use of multiple detectors can increase electronic-acquisition bandwidth, but the use of multiple parallel objectives is problematic since phase coherence is required across the multiple apertures. We report a new synthetic-aperture microscopy technique based on Fourier ptychography, where both the illumination and image-space numerical apertures are synthesized, using a spherical array of low-power microscope objectives that focus images onto mutually incoherent detectors. Phase coherence across apertures is achieved by capturing diffracted fields during angular illumination and using ptychographic reconstruction to synthesize wide-field, high-resolution, amplitude and phase images. Compared to conventional Fourier ptychography, the use of multiple objectives reduces image acquisition times by increasing the area for sampling the diffracted field. We demonstrate the proposed scalable architecture with a nine-objective microscope that generates an 89-megapixel, 1.1 µm resolution image nine times faster than can be achieved with a single-objective Fourier-ptychographic microscope. New calibration procedures and reconstruction algorithms enable the use of low-cost 3D-printed components for longitudinal biological sample imaging. Our technique offers a route to high-speed, gigapixel microscopy, for example, imaging the dynamics of large numbers of cells at scales ranging from sub-micron to centimetre, with an enhanced possibility of capturing rare phenomena.
Introduction
There is an unmet need to record wide-field, high-resolution microscopic images of dynamic events at high frame rates [1][2][3][4]. Examples include subcellular imaging in high-throughput digital pathology, and imaging of rare and dynamic events, such as cell divisions within large in vitro cancer cell cultures. The ability of a microscope to record wide-field, high-resolution images is, however, fundamentally limited by diffraction and optical aberrations [5][6][7]. Diffraction limits the minimum resolvable feature size to λ/(NA_obj + NA_ill), where λ is the wavelength of light, and NA_obj + NA_ill is the sum of the numerical apertures of the objective and illumination. A typical high-resolution microscope with NA_obj ∼ 0.9 offers a lateral resolution of ∼0.3 µm (λ = 550 nm), but only within a commensurately small depth of field of 0.7 µm [8]. That is, a microscope that is able to resolve sub-cellular features can do so only within a layer that is much thinner than the cell [8]. Optical aberrations of high-NA lenses further limit the field of view to typically 0.65 × 0.65 mm, yielding an image with a maximum space-bandwidth product (SBP) (resolution × field of view) of around 5 megapixels [5,6,9].
High-SBP images may be constructed in time sequence by stitching together a mosaic of images recorded while stepping the sample through the field of view of a high-resolution microscope. Although this can yield a very high SBP, image acquisition is slow and requires high-cost, high-precision mechanical scanning.
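The figures quoted above follow directly from the standard estimates, as the short calculation below shows (resolution λ/(NA_obj + NA_ill) with matched illumination, depth of field approximately λ/NA_obj², and SBP counted as resolvable spots across the field of view).

```python
def resolution_um(wavelength_nm, na_obj, na_ill):
    """Lateral resolution limit lambda / (NA_obj + NA_ill), in micrometres."""
    return wavelength_nm * 1e-3 / (na_obj + na_ill)

def depth_of_field_um(wavelength_nm, na_obj):
    """Approximate diffraction-limited depth of field lambda / NA_obj**2."""
    return wavelength_nm * 1e-3 / na_obj**2

def space_bandwidth_product(fov_mm, res_um):
    """Number of resolvable spots across a square field of view."""
    return (fov_mm * 1e3 / res_um) ** 2

# Numbers from the text: a 0.9-NA objective at 550 nm, matched illumination NA,
# and a 0.65 x 0.65 mm field of view.
res = resolution_um(550, 0.9, 0.9)
dof = depth_of_field_um(550, 0.9)
sbp = space_bandwidth_product(0.65, res)
print(f"resolution ~ {res:.2f} um, depth of field ~ {dof:.2f} um, SBP ~ {sbp/1e6:.1f} Mpx")
```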
High-SBP images may be constructed in time sequence by stitching together a mosaic of images recorded while stepping the sample through the field of view of a high-resolution microscope. Although this can yield very high SBP, image acquisition is slow and requires high-cost, high-precision sample translation. In our approach, a spherical array of nine low-power objectives captures light diffracted by the sample, each forming distinct low-resolution images on associated detectors. The solid angle subtended by the nine objectives proportionately increases both the optical étendue and the instantaneous SBP of image acquisition. More generally, for n parallel objectives capturing images for each of m illumination angles, a total of n · m low-resolution images are recorded. Achieving equivalent SBP with a single-objective FPM would also require n · m images, but at the expense of n · m illumination angles. Compared to MOFPM, a conventional FPM therefore has an n-times larger image-acquisition overhead for a given SBP. Notably, there are no fundamental obstacles to the implementation of a full hemispherical array of objectives with a 100% fill factor, analogous to that demonstrated in gigapixel photography [9]. Consequently, the MOFPM architecture enables the maximum possible étendue and an arbitrarily high SBTP. In ptychographic imaging, the goal is to collect multiple images encoding various spatial frequencies of the sample. Which spatial frequencies pass through the pass-band of the optical system and are detected depends either on the illumination angle or on the position of the aperture. In multi-objective Fourier ptychography, a combination of illumination angles and aperture positions is used to design the optical system. The addition of multiple apertures enables the number of illumination angles to be reduced without loss of reconstructed image quality or resolution. Order-of-magnitude improvements in image-capture speed can be achieved through parallelised image capture, enabling an experimental configuration providing near-snapshot gigapixel imaging. Fig. 1. A picture of our nine-camera experimental prototype is shown on the left, with a CAD design on the right.
Although we report the first demonstration of FPM with multiple objectives used in parallel, the geometry has common features with previous investigations into the feasibility of MOFPM [31] and in wide-field digital holography [32]. In particular, these articles report the scanning of a single objective through the diffraction pattern of the sample to enable the realisation of an increased SBP by time-sequential aggregation of data. Additionally, in conventional single-objective FPM, and in [31,32], all low-resolution images are recorded with identical imaging distortion and aberrations, which enables relatively straightforward Fourier-ptychographic aggregation of the image spectra into a single high-resolution spectrum. Hence, the use of a single objective enables considerable simplification of calibration and image recovery using conventional FPM algorithms. Lastly, in these proof-of-principle experiments, the longitudinal stability of the instrument was never an issue. However, our use of multiple mutually tilted objectives and multiple dissimilar sensors poses significant challenges for the computational reconstruction and calibration algorithms, which we have addressed in this manuscript.
In MOFPM, the multiple objectives exhibit dissimilar imaging distortions and optical aberrations that vary substantially between the n low-resolution images due to variations in geometry and manufacturing imperfections. To provide in-focus off-axis imaging across the field of view and to minimize off-axis aberrations, we used the Scheimpflug imaging configuration [33] for the off-axis objectives. Furthermore, the off-axis cameras in MOFPM record darkfield images only, unlike conventional single-objective FPM where both brightfield and darkfield images are present. The lack of high-signal-to-noise information in brightfield images makes aberration recovery and computational convergence extremely challenging, especially in noisy imaging conditions. With the improved reconstruction algorithms presented here, we can recover the aberrations of the off-axis cameras from darkfield images without the need for additional calibration data, unlike the methods in [31]. We also developed new calibration and image-recovery algorithms that compensate for the dissimilar distortions and dissimilar field-dependent aberrations during Fourier-ptychographic image synthesis.
We report a practical demonstration of the MOFPM concept with a nine-objective MOFPM that is able to record an 89-megapixel, 1.1 µm-resolution image in 1 s of image acquisition (although latency in the camera readout electronics increased this time to 3 s in our implementation). To achieve identical SBP without multiplexing and multiple cameras, our setup would require a 15× longer image acquisition time of 45 seconds. This improvement is higher than the 10-fold acquisition-time reduction offered by the highest-SBTP FPM demonstration [1]. With the experimental design, reconstruction and calibration techniques outlined in this manuscript, we demonstrate the feasibility of scaling this new architecture to high-resolution microscopy with arbitrarily high SBP and SBTP.
In the next section, we introduce the theoretical principle of MOFPM, followed by experimental quantification of resolution and demonstration of reconstructed images using a histology sample and time-resolved imaging of Dictyostelium cell dynamics. A detailed explanation of the automatic self-calibration and reconstruction algorithms is included in the Supplementary material.
Principle of multi-objective FPM
Given a thin sample with a transmission function o(r) illuminated by a plane wave, the diffraction pattern in the Fourier plane can be expressed as O(k − k_i) [12], where r and k denote space and spatial-frequency coordinates respectively. The wave vector k_i corresponds to the angular illumination by LED i, which translates the diffraction pattern with respect to the optical system. The translated sample spectrum is intercepted by the objective lens of a microscope, defined by its pupil function P_c(k), where the subscript c refers to the "camera" index of a multi-camera system, and the optical aberrations in P_c(k) are unique to each lens. In MOFPM, the frequency spectrum is intercepted by multiple cameras simultaneously (see Fig. 2) and is low-pass filtered by the aperture to produce a spectrum O(k − k_i − k_c)P_c(k). The wave vector k_c indicates the position of each camera with respect to the diffracted spectrum for the given illumination angle. Multiple frequency bands are recorded in parallel, reducing the number of time-sequential illuminations required by a factor equal to the number of cameras. It should be emphasized that, unlike other multi-camera FPM implementations [16], in MOFPM each camera images the same area of the sample. The SBP and resolution are then computationally increased by synthesis of an increased image-space NA. The low-pass filtered spectrum transmitted through each objective pupil is focused onto the corresponding sensor, yielding an intensity image I_{i,c}(r) for each camera and illumination angle (see Supplementary Material S2). The additional operator T_c, which is not required in conventional, single-camera FP, describes coordinate transformations and image distortion due to a tilted, off-axis Scheimpflug imaging geometry, which varies from camera to camera. The Scheimpflug configuration [33,34] involves tilting the sample, lens and detector planes with respect to each other to minimise defocus and distortion effects. Residual distortions vary from camera to camera and are incorporated into the image-construction algorithm. In single-camera FPM, the phase recovery of the constructed high-bandwidth diffraction pattern at the objective pupil plane involves the correct phasing of all diffracted fields recorded for each illumination angle. In MOFPM, the co-phasing requirement also extends to the diffracted fields captured by the multiple cameras. With ptychographic reconstruction algorithms, we can aggregate the diffracted fields coherently in the Fourier domain from intensity-only measurements. While phase retrieval is inherently ill-posed [20], a stable solution is possible provided that the multiple diffracted measurements overlap in the Fourier domain. To use existing ptychographic reconstruction algorithms [18,20,21] in MOFPM, the coordinate distortion must be accounted for to remove T_c from the image-formation model. While this could in principle be achieved through careful experimental calibration, we developed a robust and fully automated self-calibration strategy that removes the need for precise multi-camera alignment. We used image-registration algorithms that correct for sensor tilts (which cause perspective distortions) and field-of-view mismatches between the cameras. We also correct for LED-array and aperture/lens displacements of each camera prior to the reconstruction process with an algorithm (described in the Supplementary Material S3) based on the Fourier-ptychographic position-misalignment method [35].
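To make the forward model above concrete, the following minimal NumPy sketch simulates a single intensity image I_{i,c}(r) for one camera and one LED under the thin-sample approximation; the pixel-shift treatment of k_i and k_c, the grid size and the pupil radius are illustrative simplifications of our own, and the distortion operator T_c is omitted.

```python
import numpy as np

def mofpm_forward(obj_spectrum, pupil, k_i, k_c):
    """Illustrative forward model for one camera c and one LED i:
    shift the sample spectrum by the illumination and camera wave vectors,
    low-pass filter with the camera pupil, and detect the intensity.
    k_i, k_c are integer pixel shifts in the Fourier plane (a simplification)."""
    shifted = np.roll(obj_spectrum,
                      shift=(k_i[0] + k_c[0], k_i[1] + k_c[1]), axis=(0, 1))
    filtered = shifted * pupil                         # O(k - k_i - k_c) * P_c(k)
    field = np.fft.ifft2(np.fft.ifftshift(filtered))   # field at the sensor
    return np.abs(field) ** 2                          # intensity image I_{i,c}(r)

# Toy example: phase-only object and a circular pupil
n = 256
obj = np.exp(1j * np.random.rand(n, n))
O = np.fft.fftshift(np.fft.fft2(obj))
ky, kx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
pupil = (kx ** 2 + ky ** 2 < (0.1 * n) ** 2).astype(float)
image = mofpm_forward(O, pupil, k_i=(10, 0), k_c=(30, 0))
```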
The outlined calibration algorithm enables high-quality image reconstruction using low-cost lenses, low-precision and low-stability 3D-printed components and alignment by hand. Despite the seemingly complicated experimental design, the computational correction of misalignment allows for a relatively simple experimental implementation without the need for high-precision alignment.
Following preprocessing, the MOFPM forward model can be simplified (by eliminating T_c) to the form derived in Supplementary Material S3. Apart from variations in P_c(k) between the cameras, the forward model is identical to that used in conventional FPM. Consequently, established FPM reconstruction algorithms can be modified and utilized for construction of a broad image spectrum from multiple low-resolution intensity measurements, as shown in Supplementary Material S4. The off-axis cameras typically capture dark-field images representing high spatial frequencies of the sample, whereas the bright-field images are captured by the on-axis camera only. The lack of bright-field conditions within images and the associated lower signal-to-noise ratio degrade reconstruction convergence, especially without a priori knowledge of optical aberrations. As in the computational calibration, the central camera can act as a "guide star" for image construction by the off-axis cameras, and this ensures a robust high-SBP image reconstruction without prior knowledge of optical aberrations or distortions. Lastly, we also utilize LED-multiplexed FPM [1,2], where multiple LEDs are illuminated in parallel during capture of a single image, to provide a further improvement in the speed of image acquisition. Each captured image-intensity spectrum then contains multiple overlapping frequency bands, which introduces additional challenges for the convergence of the computational reconstruction. Nevertheless, LED multiplexing has enabled enhanced frame rates for live-cell imaging using FPM [1]. While both MOFPM and LED-multiplexed FPM aim to improve the speed of image acquisition, MOFPM achieves this through parallelization of the detection NA, while illumination multiplexing parallelises the synthesis of the illumination NA. Due to the mutual orthogonality between the two processes, we are able to demonstrate parallelization of both illumination (through multiplexing) and detection (through increased image-space NA) in the same image acquisition. This combined parallelization offers the fastest possible FPM data capture.
Experimental results
In this section, we describe experimental results obtained with our nine-camera MOFPM system. A single MOFPM frame required for ptychographic reconstruction is regarded as a complete data set recorded by nine cameras for each of the 49 illumination angles, yielding 441 unique diffracted spectral bands. The recorded MOFPM dataset is equivalent to conventional acquisition using a single camera and 441 illumination angles (instead of 49), but is a factor of nine faster. The image quality can be as high as conventional FPM only if the image reconstruction is not degraded during the fusing of the spectra from the nine dissimilar cameras. Thus, we employ single-camera FPM images as a gold-standard reference to evaluate MOFPM. We show that we can reduce the image acquisition time from 45 s to 5 s without loss of resolution or reconstruction quality. We also demonstrate a further reduction in acquisition time to 3 s by use of LED-multiplexed MOFPM.
Using the calibration and reconstruction algorithms outlined in the Supplementary Material S3 and S4, the MOFPM reconstruction provides robust convergence in the presence of deviations between the ideal forward model and the experimental implementation. Deviations include chromatic aberration of the microscope objectives, spatially varying illumination intensity and spatially varying aberrations. Moreover, MOFPM does not require knowledge of optical aberrations; instead, aberrations are recovered iteratively together with the complex fields. This is especially useful for calibration of the unique aberrations of all nine cameras. Full-field reconstructions of the lung carcinoma sample validate the robustness of our calibration and reconstruction algorithms, which were performed without any a priori aberration knowledge.
Lastly, for imaging live cells such as the Dictyostelium described below, scattering is weak and the cells exhibit negligible contrast under bright-field illumination. This makes reconstruction of phase-only samples much more challenging compared to cells with good amplitude contrast [1,36]. The lower signal-to-noise ratio for weakly scattering samples also inhibits the use of LED multiplexing, which is known to be sensitive to Poisson noise. For this reason, and given the higher noise of our low-cost sensors, we did not use LED multiplexing for imaging weakly scattering samples.
Enhanced image resolution and space-bandwidth-time product using MOFPM
From reconstructed images of a resolution test target, we can quantitatively demonstrate a nine-fold greater SBTP using MOFPM without sacrificing image quality. In Fig. 3(a) we show the raw image of a USAF test target recorded with a single-camera microscope (the central camera of our nine-camera array, illuminated by 7 × 7 LEDs simultaneously), demonstrating a spatial resolution of 8 µm. Conventional FPM, using a single camera recording for 441 illumination angles, yields the reconstruction shown in Fig. 3(b), which exhibits resolution enhancement to 1.1 µm. The resolution estimate is based on the ability to resolve group 9, element 6, and is in agreement with theoretical calculations for an illumination wavelength of 430 nm. The total data acquisition time is 45 s. We reduced this acquisition time by a factor of nine to 5 s by using nine-fold fewer illumination angles (i.e., 49) together with the nine-camera MOFPM. The reconstructed image is shown in Fig. 3(d) and can be seen to exhibit the same resolution and image quality as the 'gold-standard' single-camera FPM image shown in Fig. 3(b). For reference, we also show in Fig. 3(c) an image reconstructed from a 5 s single-camera recording using 49 illumination angles, which exhibits the expected intermediate image resolution. LED multiplexing enables the number of captured images to be further reduced from 49 to 29, and the image acquisition time from 5 s to 3 s, while maintaining image quality and resolution, as can be seen in the image shown in Fig. 3(e).
Wide-field histology
In this section, we demonstrate the application of MOFPM for wide-field imaging in histology. Figure 4 shows a reconstructed image of a lung carcinoma sample with a field of view of 5.63 mm × 4.71 mm = 26 mm². Given that the maximum resolving power of our microscope is 1.1 µm, this corresponds to an SBP of 88.5 megapixels, a 40× increase over the 2-megapixel SBP of the raw images shown in Fig. 4(b2,c2,d2). Since SBP is determined only by the imaged FOV and the reconstructed pixel size (determined by the synthetic NA), SBP calculations are independent of the sample scattering strength [12]. Comparison of Fig. 4(c1) and (c3) demonstrates that LED multiplexing can be used to increase acquisition speed without discernible degradation in resolution or image quality. The reduction in acquisition time to 3 s for the 88.5-megapixel image corresponds to an SBTP of ∼30 megapixels per second. This is limited by latency in our cameras. Removal of this latency would enable an increase in frame rate from 10 Hz to 30 Hz and an SBTP of ∼90 megapixels per second. Lastly, we also show quantitative phase imaging (QPI) in Fig. 4(c4) as a result of MOFPM reconstruction, corresponding to 630 nm illumination. This demonstrates the suitability of MOFPM for quantitative, label-free digital pathology applications.
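A back-of-the-envelope check of the quoted SBP, assuming Nyquist sampling at half the resolved feature size (an assumption on our part), reproduces the stated value.

```python
# Rough space-bandwidth-product estimate for the figures quoted above,
# assuming Nyquist sampling at half the resolved feature size.
fov_mm2 = 5.63 * 4.71                 # field of view, mm^2 (~26.5 mm^2)
resolution_um = 1.1                   # smallest resolved feature, um
pixel_um = resolution_um / 2          # Nyquist pixel-pitch assumption

sbp_megapixels = fov_mm2 * 1e6 / pixel_um ** 2 / 1e6
print(f"SBP ~ {sbp_megapixels:.0f} megapixels")   # on the order of 88 megapixels
```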
Longitudinal imaging of cell dynamics
High frame rate is essential for live-cell imaging, especially for ptychographic imaging techniques, which cannot cope with sample motion during data acquisition. We show that MOFPM is sufficiently stable, even using 3D-printed components, to enable high-SBTP, wide-field imaging of cell dynamics over several hours. Stability of the illumination angles is necessary for accurate positioning of spatial frequencies during image fusion, while sample and sensor stability ensures that the spectral content being sampled matches the theoretical model. Furthermore, high-quality imaging is maintained even when the focus varies during image capture, for example due to evaporation of cell-growth media during imaging (we did not use temperature- or humidity-controlled sample stages). Our self-calibration algorithms can correct for movement of all mechanical components, and the reconstruction algorithm provides digital re-focusing through recovery of the defocus aberration.
Fig. 3. A USAF target was imaged for quantitative assessment of resolution and image quality using MOFPM. A conventional microscope image (a) and FPM reconstructions using 441 LEDs (b) and 49 LEDs (c) illustrate the trade-off between speed of image acquisition and resolution of the reconstructed image. With MOFPM (d) we achieve a resolution of 1.1 µm, equal to that obtained using 441 LEDs with conventional FPM, but with only 5 s of data capture due to the use of only 49 LEDs and parallel data capture. LED multiplexing enables the image acquisition time to be reduced even further to <3 s while maintaining the same reconstructed image resolution (e).
Fig. 4 (caption fragment). Sections (b1,c1,d1). The reconstruction quality is significantly improved compared to raw data with 2-megapixel SBP (b2,c2,c3). We also demonstrate compatibility with LED multiplexing (c3) and the possibility of quantitative phase imaging (c4).
We demonstrate MOFPM for capturing video sequences of the collective motion of large numbers of Dictyostelium cells. These social amoebae are a model organism used to study coordinated cell migration and cell differentiation; in response to starvation, they aggregate and morph into large migratory slugs several millimetres in length [37]. Investigating such cell evolution is a challenge for conventional microscopy, requiring a range of lenses for small-scale cell-to-cell interactions (∼60−100× magnification, NA ≳ 0.6) and large-scale cell migration (∼4× magnification, NA ≲ 0.1) [38][39][40]. With MOFPM, we are able to resolve individual cells 5-15 µm in diameter across a wide field of 26 mm².
Reconstruction of an 84-minute time-lapse image sequence of Dictyostelium cells, extracted from a 10-hour sequence, is shown in Fig. 5. These weakly scattering cells exhibit very low amplitude contrast, and so we show only quantitative phase reconstructions. Raw images in Fig. 5(b1-c1), obtained using illumination from a single LED, exhibit high contrast, which would not be the case with illumination from an extended source (for example, when multiple LEDs are illuminated) [41]. With MOFPM it is possible to resolve individual cells and their formation into migratory slugs, as can be seen in Fig. 5(b2-c2) and the linked video sequence (see figure caption). The extended FoV of 26 mm² enables tracking of the movement of large slugs; however, during cell aggregation the sample thickness can violate the thin-sample approximation assumed in FPM, which can be overcome with multi-slice reconstruction techniques [42,43]. In summary, we were able to successfully demonstrate the algorithmic robustness and stability of our calibration algorithms over extended timeframes.
Discussion
In this section, we explain the aspects of MOFPM that offer unique enhancements over other high-speed ptychographic imaging techniques, while also highlighting the synergy with existing high-speed imaging methods that offers a facile route to further increases in SBTP.
The fastest FPM demonstration to date is LED multiplexed FPM [1,2], which was used to collect a complete FPM dataset in 1 second. Image-acquisition time is reduced by simultaneously illuminating the sample with multiple LEDs. Consequently, images corresponding to differing passbands are superimposed at the detector and this is incorporated into the forward model for the FPM reconstruction. While an increased SBTP is achieved, there is a limit on the number of LEDs that can be illuminated in parallel, before image recovery is degraded. We demonstrate below that LED multiplexing can be successfully combined with MOFPM to provide an even greater enhancement in SBTP.
MOFPM and multiplexed FPM both involve redundant sampling of spatial-frequency bands, although for MOFPM there is the added advantage that redundant passbands are encoded in separate images captured by different cameras. In this way, the computational burden is reduced because spatial-frequency decomposition is performed directly by the multiple cameras prior to detection, rather than through complex computations. Moreover, in our scalable architecture there is no fundamental limit to the number of cameras that can be used in parallel. Most importantly, LED multiplexing does not work well with weakly scattering samples, since the consequent mixing of dark-field and bright-field images leads to the weak dark-field signals being overwhelmed by shot noise from bright-field images [1,20]. MOFPM does not suffer from this limitation, since darkfield and brightfield images are captured by different imaging sensors.
Fortunately, MOFPM and LED multiplexing are mutually complementary, as was demonstrated in Sec. 3. In fact, MOFPM is complementary with all previously reported Fourier ptychographic implementations, overcoming the limitations imposed by the use of a single sensor and/or lens. The MOFPM image-formation model that we describe should be considered as a generalisation of FPM for increasing both the illumination and detection NA, and which can be integrated into the design of various optical systems. Further speed improvement is possible however by ab initio optimisation of optical design specifically for multiplexing, as was demonstrated by data-driven approaches [44,45]. For example, a non-uniform camera arrangement would need to be optimized for a given LED-multiplexing pattern to achieve the fastest, non-redundant diffracted field sampling given the LED multiplexing constraints [2].
Lastly, MOFPM also has promise for imaging of samples that are optically thick. When the illumination is transmitted through a thin sample, there is a one-to-one relationship between the k-space vectors k_i and the LED positions r_i. This is no longer valid for a thick sample [42,46,47], because refraction within the sample varies with the illumination angle. It has been demonstrated in aperture-scanning Fourier ptychography [46,47] that diffraction can be accurately modelled by keeping the illumination direction constant while scanning the aperture in the Fourier domain. More generally, MOFPM can be regarded as a combination of both illumination- and aperture-scanning FPM. For thin samples, conventional image reconstruction can be used, and for thick samples the multi-camera arrangement can be used to deduce the scattering geometry, similar to tomographic imaging. These adaptations are left as future work.
Conclusion
Fourier ptychography has demonstrated quite emphatically how computational image construction can reconfigure the problem of wide-field high-resolution microscopy from the design and manufacture of complex high-cost optics to, instead, the computational integration of multiple band-pass images acquired with simple low-cost optics. Because FPM images can be recorded with a single objective of low SBP, the overall cost and complexity can be massively reduced. The quid pro quo however is that the time-sequential construction of high-SBP images requires long image acquisition times, which reduces the attractiveness of FPM in a wide range of applications: from high-throughput digital pathology to imaging of biological dynamics. One route to increased speed is to use lenses and detectors with higher SBP, but the higher cost of these components reduces the cost-benefit of FPM -which is fundamentally its greatest asset.
Multi-objective FPM offers a new scalable architecture for increased speed of acquisition: the SBP of the detector and objective can be optimally selected to meet system requirements and reduced cost. Zheng et al. introduced the concept of synthesising an increased illumination NA from discrete illumination angles in FPM [12]. Our use of multiple objectives achieves an equivalent increase in NA for imaging, but in parallel. MOFPM can thus be considered as a generalisation of the Fourier ptychography concept to both illumination and imaging domains that provides both enhanced SBP and enhanced SBTP.
We have demonstrated that techniques developed previously for calibration of LED illumination and a single objective in conventional FPM can be extended to calibrate also the positions and aberrations of a multi-objective array. Our prototype enables construction of a wide-field (85-megapixel), high-resolution (1.1 µm) image, captured in 3 seconds. This is a dramatic improvement compared to raw images containing 2-megapixel SBP and 8 µm resolution. However, the concept can be scaled to almost arbitrarily high SBP and SBTP.
Optical design
In a regular microscope, the object, lens and image planes are all mutually parallel. The use of multiple objectives means this is no longer possible, but by use of the so-called Scheimpflug configuration [33] it is possible to tilt the lens and image planes such that sharp focus is retained across an extended field. The Scheimpflug condition states that the sample, lens and detector planes must meet at a single point, called the Scheimpflug intersection, as illustrated in Fig. 6. This imaging technique was designed for aerial photography to remove perspective distortions and is also used for corneal imaging, because in both cases either the lens or the imaging sensor is tilted with respect to the sample. The Scheimpflug configuration has also been suggested for off-axis FPM imaging due to minimised off-axis aberrations [13,31,34]. While it is possible to correct aberrations (e.g., coma, astigmatism and defocus) computationally, this minimisation of defocus reduces the burden on the reconstruction algorithms. Additional advantages include minimised spatially varying magnification and the ability to use a curved lens array (for multi-objective systems), which increases the maximum attainable resolution compared to a planar lens array. Our experimental Scheimpflug-based MOFPM, shown in Fig. 1, employs nine imaging sensors with a curved lens array. Lenses were located in a 3D-printed holder and detector arrays were mounted in three-axis kinematic stages capable of tip-tilt-axial adjustment. Camera holders were manufactured out of aluminium to cope with the heat generated by the cameras. Based on the diagram in Fig. 6, a set of equations can be derived from the Scheimpflug criterion that yields a given constant magnification across the cameras, where f is the focal length of the objective lenses, M is the magnification of each camera (microscope), and θ_D and θ_c are the tilts of the detector and of the lens, respectively. The lens tilt θ_c depends on the position of the lenses r_c, which in turn defines the spectral band sampled by each camera. With this prototype we aimed to demonstrate the speed enhancement of a nine-camera MOFPM compared to a single-camera FPM using 441 LEDs. Since the illumination angles of the LEDs define the total frequency coverage, each camera must cover frequencies equivalent to 441/9 = 49 LEDs, for which we employ an array of 7 × 7 LEDs. Examples of the spectra covered by single and multiple cameras are illustrated in Fig. 6(b). To construct a high-speed MOFPM we select the desired frequency coverage for a given LED array and use the reciprocal relationship from Fig. 6(c-d) to compute the lens positions r_c (based on the desired LED positions r_i). This defines the separations between the lenses and, in turn, the angles θ_D and θ_c via Eqn. 3. The experimental parameters for our final prototype are summarised in Table 1. Sequential single-LED illumination used the 32 × 32 Adafruit LED array; however, this array allows the illumination of only a single LED at a time. For LED-multiplexed illumination, which requires simultaneous illumination by multiple LEDs, the Tindie LED array was used. Both LED arrays provide the same light intensity, and they were positioned such that the spatial-frequency overlap (60%) remains the same, providing equivalent illumination conditions in both experiments. The total cost of the microscope components was estimated to be approximately $6000, which is significantly lower than the commercial microscope systems used for other high-speed FPM applications [1,2].
Further applications could utilize an array of lower cost cameras, such as the ones used for the $150 FPM-based microscope [14].
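The Scheimpflug geometry described above can be sketched numerically as follows; the thin-lens spacing, the tangent relation between lens and detector tilt, and all numerical values are illustrative assumptions of ours rather than the prototype's Eqn. 3 or Table 1 parameters.

```python
import numpy as np

# Illustrative sketch (not the paper's Eqn. 3): choose the lens tilt from the lens
# position on the array and derive the detector tilt from a commonly quoted
# Scheimpflug-type relation tan(theta_D) = M * tan(theta_c).
# f, M and the lens offsets below are example values, not the prototype's.
f = 0.05                     # focal length of each objective lens, m (assumed)
M = 2.0                      # assumed magnification of each camera
z_lens = f * (1 + 1 / M)     # object-to-lens distance for magnification M (thin lens)

lens_offsets = np.linspace(-0.04, 0.04, 9)       # lateral lens positions r_c, m (assumed)
theta_c = np.arctan2(lens_offsets, z_lens)        # lens tilt toward the sample
theta_D = np.arctan(M * np.tan(theta_c))          # detector tilt (assumed relation)

for rc, tc, td in zip(lens_offsets, theta_c, theta_D):
    print(f"r_c = {rc:+.3f} m  theta_c = {np.degrees(tc):+5.1f} deg  "
          f"theta_D = {np.degrees(td):+5.1f} deg")
```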
Image acquisition
Time-sequential acquisition of a complete dataset required for image reconstruction constitutes a single frame containing 51 images captured for each of the nine cameras: 49 images captured for illumination by each of the 49 LEDs, a darkframe captured without any LEDs, and one brightfield image used for image registration to enable calibration for possible microscope drift during imaging. A one-off correction of LED-position misalignment requires ∼9 additional images to be captured for each camera once, prior to longitudinal imaging. Only the 49 brightfield/darkfield images must be captured in quick succession, whereas the remainder can be obtained while the cameras are idle (in between longitudinal frame captures). Lastly, all colour images were obtained by capturing separate frames for illumination by red, green or blue LEDs. Stacking these monochrome reconstructions yields a single RGB colour image. When all nine cameras are used, the frame rate is reduced from 38 FPS (for an isolated camera) to 10 FPS. For our experiments, we connected 2−3 cameras per USB PCIe card on a standard tabletop computer and used Python scripts for image acquisition. All images were captured at the maximum available frame rate of 10 FPS, but upgrading the USB PCIe cards and using native image-acquisition code would enable an almost four-fold increase in image acquisition speed.
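A short bookkeeping sketch, using only the image counts and the 10 FPS frame rate quoted above, reproduces the roughly 5 s of time-critical capture per frame; the assumption that only the 49 LED images are time-critical follows the preceding paragraph.

```python
# Back-of-envelope acquisition-time check for one MOFPM frame
# (frame rate and image counts taken from the text).
images_per_frame = 49 + 1 + 1        # 49 LED images + darkframe + registration brightfield
frame_rate_fps = 10                  # all nine cameras running in parallel

acquisition_time_s = 49 / frame_rate_fps   # only the 49 LED images are time-critical
print(f"images per frame:      {images_per_frame}")
print(f"time-critical capture: {acquisition_time_s:.1f} s")   # ~5 s, as quoted
```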
LED multiplexed image acquisition
In MOFPM, each camera can be considered a standalone conventional FPM microscope with tilted optical components. Given the equivalence between MOFPM and FPM, the LED array can be multiplexed using the same principles and constraints as conventional FPM: captured diffracted fields should not overlap in the spectrum, and LED illumination from within the objective NA (resulting in brightfield images) should not be mixed with LED illumination from outside the NA (resulting in darkfield images) [1,2]. In our system only the central camera can capture brightfield images, whereas the off-axis cameras capture only darkfield images. In total, 49 LEDs are used for data acquisition, 9 of which result in brightfield images on the central camera. Since all of the 9 brightfield LEDs overlap in the spectral domain, they could not be multiplexed. Hence, brightfield images were captured in time sequence, while the remaining 40 darkfield images were captured using either 2-LED or 4-LED multiplexing. The total number of captured images was thereby reduced by about half: from 49 to 29 (2 LEDs in parallel) or to 19 (4 LEDs in parallel), reducing the capture time from 5 s to 3 s and below, respectively. However, the reconstruction-quality requirements were satisfied only for 2-LED multiplexing, which was therefore used for data reconstruction.
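The image-count arithmetic described above can be summarised in a few lines (the LED counts are those quoted in the text).

```python
# Image-count bookkeeping for LED-multiplexed MOFPM as described above.
brightfield_leds = 9      # central-camera brightfield LEDs, captured one at a time
darkfield_leds = 40       # remaining LEDs, eligible for multiplexing

for leds_in_parallel in (1, 2, 4):
    n_images = brightfield_leds + darkfield_leds // leds_in_parallel
    print(f"{leds_in_parallel}-LED multiplexing: {n_images} captured images")
# -> 49, 29 and 19 images, matching the acquisition counts in the text
```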
Image reconstruction
All images were reconstructed using the quasi-Newton engine [20], whose convergence was accelerated with adaptive moment estimation (ADAM) [48]. We obtained a low-resolution reconstruction of the sample and the pupil from the central camera, which was used as an initial estimate for the MOFPM reconstructions. Central-camera reconstructions required up to 250 iterations to recover the optical aberrations. Afterwards, up to 100 iterations per camera were performed to reach convergence. Reconstructions were performed by splitting the 2448 × 2048 pixel image FoV into 80 segments (256 × 256 pixels each) to mitigate the issue of spatially varying aberrations. Reconstruction of a single FOV segment took 5 minutes using an NVIDIA GeForce 1080 Ti GPU, producing a 2048 × 2048 pixel image. Since our microscope was finite-conjugate, the non-telecentric geometry produced a phase curvature in the sample plane. To avoid artefacts in the reconstruction, we used a phase-curvature correction method [49]. Since the reconstructed image segments in MOFPM are visually identical to those reconstructed using conventional FPM, the same stitching methods can be used to produce a single wide-field image. All image segments were blended together in ImageJ [50] to produce full-FOV images without visible discontinuities.
| 7,693 | 2022-07-12T00:00:00.000 | ["Physics"] |
Estimating a planetary magnetic field with time-dependent global MHD simulations using an adjoint approach
The interaction of the solar wind with a planetary magnetic field causes electrical currents that modify the magnetic field distribution around the planet. We present an approach to estimating the planetary magnetic field from in situ spacecraft data using a magnetohydrodynamic (MHD) simulation approach. The method is developed with respect to the upcoming BepiColombo mission to planet Mercury aimed at determining the planet’s magnetic field and its interior electrical conductivity distribution. In contrast to the widely used empirical models, global MHD simulations allow the calculation of the strongly time-dependent interaction process of the solar wind with the planet. As a first approach, we use a simple MHD simulation code that includes time-dependent solar wind and magnetic field parameters. The planetary parameters are estimated by minimizing the misfit of spacecraft data and simulation results with a gradient-based optimization. As the calculation of gradients with respect to many parameters is usually very time-consuming, we investigate the application of an adjoint MHD model. This adjoint MHD model is generated by an automatic differentiation tool to compute the gradients efficiently. The computational cost for determining the gradient with an adjoint approach is nearly independent of the number of parameters. Our method is validated by application to THEMIS (Time History of Events and Macroscale Interactions during Substorms) magnetosheath data to estimate Earth’s dipole moment.
Introduction
Planets with an intrinsically generated magnetic field, such as Earth or Mercury, interact with the solar wind. This causes electrical currents that modify the planetary magnetic field. The properties of the interaction not only depend on the planetary magnetic field but also on the continuously varying solar wind conditions. A spacecraft orbiting a planet in such a highly variable environment measures the modified magnetic field distribution.
In 2025 the BepiColombo mission (Benkhoff et al., 2010) of the ESA and the Japan Aerospace Exploration Agency (JAXA) is expected to reach planet Mercury. In contrast to the previous MESSENGER (Mercury Surface, Space Environment, Geochemistry, and Ranging) mission (Solomon et al., 2001), two spacecraft will simultaneously measure the magnetic field distribution around the planet. The planetary magnetic field at Mercury is about 100 times weaker than the field of Earth. Therefore, the magnetosheath is much closer to the surface of the planet. As a consequence, the magnetic field of the electric currents of the interaction is not negligible, even in the immediate proximity of the planet (e.g., Glassmeier, 2000). Furthermore, electromagnetic induction effects within the planet might be important (e.g., Grosser et al., 2004; Jia et al., 2015). To estimate the planetary magnetic field precisely, the time-dependent interaction needs to be determined. With its two spacecraft, the BepiColombo mission is most suitable for determining the interaction because of simultaneous observations of the magnetic field distribution in the magnetosphere and the solar wind. If both spacecraft are within the interaction region, the solar wind reconstruction method by Nabert et al. (2015) can be used to estimate the time-varying solar wind conditions from the observations of one spacecraft. The data of the other spacecraft then still provide observations within the interaction region while the solar wind conditions are known.
So far, the planetary magnetic field of Mercury has been determined using empirical models of the interaction between the solar wind and the planetary magnetic field with spacecraft data from MESSENGER or Mariner 10 (e.g., Korth et al., 2004; Alexeev et al., 2010; Johnson et al., 2012). The electrical current density of the interaction in empirical models is parametrized by prescribed functional relations. Typically, the current system is described as a superposition of localized electrical currents such as the magnetopause current, which is parametrized by its subsolar location and ellipsoidal shape. Taking only a few parameters into account, these prescribed functional relations do not include, for example, effects such as magnetic pile-up, which correspond to a distribution of the magnetopause current within the entire magnetosheath. Furthermore, the parameters are distinguished between only a few discrete solar wind scenarios such as strong and weak solar wind pressure. If more parameters or solar wind scenarios are considered to parametrize the current system more accurately, it is not always possible to determine all parameters with small statistical error due to the finite data coverage. This is especially true if strongly time-dependent nonlinear phenomena occur.
Mercury's magnetic field close to the subsolar magnetopause has a strength of about 60 nT (Johnson et al., 2012). Using an average solar wind velocity of 430 km s⁻¹, this corresponds to a gyroradius of the interaction of about 37.5 km. Compared to global structures of the interaction, such as a subsolar magnetosheath thickness of about 1220 km (Winslow et al., 2013), a magnetohydrodynamic (MHD) treatment therefore seems to be a valid approximation. The inverse gyrofrequency is about 0.5 s, which limits the time resolution of this approximation. In regions dominated by heavy ions, a kinetic approach might be necessary. Here, we restrict our considerations to the MHD approximation. Taking the observations of the two spacecraft of the BepiColombo mission into account, the interaction can be calculated fully time-dependently with an MHD model. We investigate a procedure to estimate the planetary magnetic field in the strongly modified magnetic environment of the planet using a global MHD simulation. In contrast to empirical models, an MHD simulation requires only parameters of the solar wind conditions, the planetary magnetic field, and the plasma properties. This approach calculates the interaction self-consistently and does not contain parameters to fit electrical currents. Note that such a model also allows taking a conductivity distribution of the planet into account. The parameters of the planet's interior conductivity can then be estimated in addition to the planetary magnetic field parameters in a further step.
As a first approach, we consider a simple MHD simulation code based on the MHD code presented by Ogino (1993) to examine our method. A cost function quantifies the misfit of the spacecraft observations in the magnetosphere to the corresponding MHD simulation results. The cost function needs to be minimized with respect to the planetary magnetic field parameters to estimate these planetary parameters. Different methods can be used to minimize the cost function. Methods such as downhill simplex or Markov chain Monte Carlo algorithms are usually used if derivatives of the cost function cannot be calculated directly. If the gradient can be calculated, gradient-based minimization algorithms can be used, which often offer faster convergence. However, these methods are restricted to finding a local minimum in parameter space instead of the global minimum. Here, we expect a global minimum that is not superposed by local minima, so that a gradient-based optimization procedure is considered. The gradient-based methods can provide fast convergence only if the gradient can be determined quickly. However, the calculation of the gradient with respect to several parameters using, for example, finite difference quotients can be very time-consuming. Thus, an adjoint approach is considered, which can theoretically compute gradients at a cost nearly independent of the number of parameters (e.g., Jameson, 1988; Giles and Pierce, 2000).
In this paper we investigate the applicability of an adjoint approach to an MHD simulation code using automatic differentiation (Wengert, 1964). Although the adjoint approach can be much faster than using finite differences, it requires larger memory capacities. An adjoint approach using automatic differentiation was successfully applied to a reduced MHD model, the magnetosheath model by Nabert et al. (2013), to estimate the solar wind parameters of the model (Nabert et al., 2015). The reduced MHD model uses series expansions of the MHD quantities along the bow shock and magnetopause geometry. This transfers the stationary partial differential MHD equations into a set of ordinary differential equations. Close to the stagnation streamline, only low-order series expansions are necessary to obtain a valid representation of the interaction. Not only is the numerical effort for solving the corresponding ordinary differential equations significantly lower than for solving the full MHD system, the required storage capacity is also much lower. Therefore, the automatic differentiation procedure could be applied without regard to memory limitations. Here, an automatic differentiation tool is applied to a full MHD simulation code and thus special emphasis needs to be put on memory consumption.
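To illustrate why the adjoint approach scales so favourably, the following toy example computes the gradient of a data-misfit cost for a simple time-stepping model both by a hand-coded reverse (adjoint) sweep and by finite differences; the relaxation model, its parameters and the synthetic data are purely illustrative stand-ins for the MHD simulation and the spacecraft observations.

```python
import numpy as np

def forward(p, x0, d, dt):
    """Toy time-dependent 'simulation': relaxation x' = p0 - p1*x,
    plus the data-misfit cost J (a stand-in for the MHD model and its cost)."""
    p0, p1 = p
    x = np.empty(len(d) + 1)
    x[0] = x0
    for l in range(len(d)):
        x[l + 1] = x[l] + dt * (p0 - p1 * x[l])
    return x, np.sum((x[1:] - d) ** 2)

def adjoint_gradient(p, x, d, dt):
    """Reverse (adjoint) sweep: one backward pass yields dJ/dp for all parameters."""
    p0, p1 = p
    L = len(d)
    lam = np.zeros(L + 1)              # lam[l] = dJ/dx_l
    g = np.zeros(2)
    for l in range(L, 0, -1):
        lam[l] += 2.0 * (x[l] - d[l - 1])          # misfit contribution at step l
        g[0] += dt * lam[l]                        # via d x_l / d p0
        g[1] += -dt * x[l - 1] * lam[l]            # via d x_l / d p1
        lam[l - 1] += (1.0 - dt * p1) * lam[l]     # propagate adjoint backwards
    return g

rng = np.random.default_rng(0)
dt, x0, p = 0.1, 0.0, np.array([1.0, 0.5])
d = rng.normal(1.0, 0.1, size=50)                  # synthetic "observations"
x, J = forward(p, x0, d, dt)
g_adj = adjoint_gradient(p, x, d, dt)              # one backward sweep
eps = 1e-6                                          # finite differences: one run per parameter
g_fd = np.array([(forward(p + eps * e, x0, d, dt)[1] - J) / eps for e in np.eye(2)])
print(g_adj, g_fd)                                  # the two gradients agree to O(eps)
```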
Our approach to estimating planetary parameters using data from a multi-spacecraft mission is validated with the THEMIS (Time History of Events and Macroscale Interactions during Substorms) mission (Angelopoulos, 2008) at Earth with its well-known planetary magnetic field. The five spacecraft of the mission (THA, THB, THC, THD, and THE) provide simultaneous observations of the interaction region and the solar wind. However, in contrast to the situation at Mercury, the interaction of the solar wind near the planet's surface is negligible at Earth. Due to the weak magnetic field at Mercury, the interaction region of the solar wind is much closer to the planet (Winslow et al., 2013). In particular, the subsolar bow shock distance to the center of the planet is on average about 1.89 R_M at Mercury and 13 R_E at Earth (R_E = 6371 km). The average distance of the subsolar magnetopause is 1.45 R_M at Mercury and 10 R_E at Earth. As a consequence, only close to the magnetosheath region are the modifications of Earth's magnetic field comparable to the strong modifications throughout the magnetosphere of Mercury. To validate our procedure with respect to the future measurements of the BepiColombo mission, THEMIS data from the terrestrial magnetosheath are used. However, in its final application for the BepiColombo mission, spacecraft data of the entire interaction region including the magnetosphere will be taken into account to estimate the planetary magnetic field.
MHD simulation code
The interaction of the planetary magnetic field with the solar wind is computed by a MHD simulation code. The MHD simulation has to be efficient to perform the time-consuming estimation procedure of the planetary parameters. Furthermore, the simulation code should be simple in its numerical implementation structure to simplify the application of the adjoint approach using automatic differentiation. For these reasons, as a first approach, a simple MHD simulation code is developed, which is based on the simulation code described by Ogino (1993). The MHD simulation code described by Ogino (1993) was already used in studies of magnetospheric convection, for example, depending on the solar wind magnetic field (Ogino et al., 1985) or field-aligned currents (Ogino, 1986). The code is modified and extended for the application to the parameter estimation process as explained in the following paragraphs. Furthermore, some details about the numerical implementation of the simulation code are summarized to understand the application of the adjoint method via automatic differentiation, which is explained in the next section.
Planetary magnetic field
The magnetic field in the simulation code by Ogino (1993) is restricted to a dipole along the planet's axis of rotation. Here, a more general representation of the magnetic field is required. The planetary magnetic field can be represented by a multipole expansion using a spherical harmonic analysis (Gauss, 1839; Glassmeier and Tsurutani, 2014). Note that this part of the magnetic field does not contain contributions due to the interaction with the solar wind, such as induction or magnetopause currents. As a consequence, the planetary magnetic field outside the planet can be represented by a scalar potential V_pot, i.e., B_P = -∇V_pot (Eq. 1). Thereby, the scalar potential V_pot satisfies a Laplace equation. Using spherical coordinates (r, λ, θ), the solution outside the planet (r > R_P), where R_P denotes the planet's radius, is given by V_pot(r, λ, θ) = R_P Σ_{l=1}^{∞} Σ_{m=0}^{l} (R_P/r)^{l+1} [g_l^m cos(mλ) + h_l^m sin(mλ)] P_l^m(cos θ), with the Gauss coefficients g_l^m, h_l^m and the Schmidt semi-normalized associated Legendre polynomials P_l^m(cos θ), e.g., P_1^0 = cos θ or P_2^1 = √3 cos θ sin θ (e.g., Langel, 1987; Clauser, 2016).
The lowest-order coefficients for l = 1 are associated with the dipole moment, corresponding to the magnetic field vector B_dipole. The simulation code uses a Cartesian representation of the magnetic field. For the dipole moment, this is B_dipole(r) = (µ_0/4π) [3 (m · e_r) e_r - m]/r³, with the radial unit vector e_r = r/r. Here, m = (m_x, m_y, m_z)^T is the vector of the dipole moment, which is related to the Gauss coefficients via m = (4π/µ_0) R_P³ (g_1^1, h_1^1, g_1^0)^T. The Gauss coefficients for Earth's magnetic field in 2010 were published in the International Geomagnetic Reference Field (IGRF) by Finlay et al. (2010). Thereby, the geographic coordinate system, a body-fixed coordinate system, is used, with its z axis along the axis of rotation. Magnetic field data from spacecraft close to Earth's surface as well as from ground stations were used to determine the coefficients. The influence of external currents due to the interaction of the solar wind with the planetary magnetic field was neglected. The magnetic field of Earth outside the planet is dominated by the dipole coefficients, which are summarized in Table 1. Note that a similar estimation procedure at Mercury leads to large errors because of insufficient data coverage in the southern hemisphere of the planet. Furthermore, the solar wind interaction has a strong influence on the magnetic field distribution. Similar to the dipole, higher-order moments of the planetary magnetic field can be taken into account. Thereby, the tensor structure of the Cartesian representation becomes more complex for higher orders. For example, the quadrupole can be expressed by a symmetric, traceless matrix Q (e.g., Vogt and Glassmeier, 2000; Stadelmann et al., 2010), from which the quadrupole contribution B_quadrupole to the magnetic field follows. The simulation code according to Ogino (1993) uses normalizations for the physical quantities. The normalization constants for the additional magnetic field parameters are m_norm = 8.07 × 10^15 T m³ for a dipole component and 5.14 × 10^22 T m⁴ for a quadrupole component.
The simulations take the dipole and quadrupole moments into account. Thus, the resulting planetary magnetic field is the sum of the dipole and quadrupole contributions, B_P = B_dipole + B_quadrupole.
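For illustration, the dipole part of such a planetary field can be evaluated as follows; the relation between the dipole moment and the l = 1 Gauss coefficients uses the standard convention quoted above, and the coefficient values in the example are rough, illustrative numbers rather than the Table 1 values.

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7    # vacuum permeability, H/m
R_E = 6371e3              # Earth radius, m

def dipole_moment_from_gauss(g10, g11, h11, r_p=R_E):
    """Cartesian dipole moment vector from the l = 1 Gauss coefficients (in tesla)."""
    return (4 * np.pi / MU0) * r_p ** 3 * np.array([g11, h11, g10])

def dipole_field(m, r):
    """Magnetic field of a point dipole m (A m^2) at position r (m) from the planet centre."""
    r = np.asarray(r, dtype=float)
    rn = np.linalg.norm(r)
    rhat = r / rn
    return MU0 / (4 * np.pi) * (3 * np.dot(m, rhat) * rhat - m) / rn ** 3

# Example with illustrative, IGRF-like l = 1 coefficients (order-of-magnitude values only)
m = dipole_moment_from_gauss(g10=-29500e-9, g11=-1500e-9, h11=4800e-9)
print(dipole_field(m, [10 * R_E, 0.0, 0.0]))   # field of a few tens of nT at 10 R_E
```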
MHD equations and boundary conditions
The interaction of the solar wind with the planetary magnetic field is calculated by solving the MHD equations. Thereby, these equations are solved within a box sketched in Fig. 1.
The simulation uses a model solar wind planet (MSP) coordinate system, whereby the origin is in the planet's center. The x axis is along the unperturbed solar wind velocity vector, the z axis is parallel to the rotation axis, and the y axis completes a right-handed coordinate system. The lengths of the simulation box are x_L in the x direction, y_L in the y direction, and z_L in the z direction. The MHD equations provide solutions for the mass density ρ, the plasma velocity v := (v_x, v_y, v_z)^T, the pressure p, and the magnetic field B := (B_x, B_y, B_z)^T. The solutions are summarized in the vector u := (ρ, v_x, v_y, v_z, p, B_x, B_y, B_z)^T.
Figure 1. The simulation box and its numerical grid contain the planet with its magnetic field. The origin of the coordinate system is in the planet's center and the x axis is along the unperturbed solar wind velocity.
The MHD simulation code solves the MHD equations for the mass density, velocity, pressure, and magnetic field, each supplemented with a diffusion term. Here, D_ρ, D_v, D_p, and D_B are the diffusion coefficients of the density, the velocity, the pressure, and the magnetic field, respectively. The magnetic diffusion coefficient is related to the electrical resistivity η by D_B := η/µ_0, with the vacuum permeability µ_0 := 4π × 10⁻⁷ H m⁻¹. The current density j is calculated with Ampère's law, neglecting the displacement current, j = (1/µ_0) ∇ × B. According to Ogino (1993), the MHD equations are solved using a two-step Lax-Wendroff method (Lax and Wendroff, 1960), which has an accuracy of second order in space and time. This numerical scheme uses finite-difference approximations, which require the solution to be described on a discrete grid. The discretization of the MSP coordinates (x, y, z) is related to the indices (i, j, k), with i for x, j for y, and k for z. Valid values for the indices are i = 1, ..., i_max + 2; j = 1, ..., j_max + 2; and k = 1, ..., k_max + 2. The number of spatial grid points is therefore N_grid = (i_max + 2)(j_max + 2)(k_max + 2). The boundaries of the simulation box along the x direction are located at (i = 1, j, k) and (i = i_max + 2, j, k). Along the y direction, the boundaries are located at (i, j = 1, k) and (i, j = j_max + 2, k), and along the z direction at (i, j, k = 1) and (i, j, k = k_max + 2). The distance between grid points is Δx = x_L/(i_max + 1) in the x direction, Δy = y_L/(j_max + 1) in the y direction, and Δz = z_L/(k_max + 1) in the z direction. Within the grid, the planet is located at (i_P, j_P, k_P), with i_P = (i_max + 1)/2, j_P = (j_max + 1)/2, and k_P = (k_max + 1)/2. The grid points (i, j, k) are related to positions (x, y, z) through these grid spacings. The time t is discretized by the index l with l = 0, ..., l_max, whereby l = 0 corresponds to t = 0 and l_max to t = t_E. This corresponds to a constant time step Δt = t_E/l_max. The spatial and time-dependent solution of the MHD equations u(t, x, y, z), defined by Eq. (8), is represented by u^n_{l,i,j,k}, whereby n = 1, ..., N_var refers to a component of the vector u. Here, the number of MHD variables is N_var = 8.
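The two-step Lax-Wendroff scheme mentioned above can be illustrated on a one-dimensional advection equation; this is only a didactic sketch of the numerical method, not the three-dimensional MHD implementation of the simulation code.

```python
import numpy as np

def lax_wendroff_step(u, c, dt, dx):
    """One two-step (Richtmyer) Lax-Wendroff update for 1-D advection u_t + c u_x = 0.
    Illustrates the type of scheme used (in 3-D, for the full MHD system) by the code."""
    # Predictor: half-step values at the cell interfaces
    u_half = 0.5 * (u[1:] + u[:-1]) - 0.5 * c * dt / dx * (u[1:] - u[:-1])
    # Corrector: full step using the interface values
    u_new = u.copy()
    u_new[1:-1] = u[1:-1] - c * dt / dx * (u_half[1:] - u_half[:-1])
    return u_new          # boundary cells left unchanged (crude boundary handling)

# Advect a Gaussian pulse; the scheme is second-order accurate in space and time
x = np.linspace(0.0, 1.0, 201)
u = np.exp(-200 * (x - 0.3) ** 2)
dt, dx, c = 0.002, x[1] - x[0], 1.0        # CFL number c*dt/dx = 0.4 < 1
for _ in range(100):
    u = lax_wendroff_step(u, c, dt, dx)
```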
Boundary conditions are required to solve the MHD equations. The inflow boundary conditions at (i = 1, j, k) are determined by the solar wind conditions. The solar wind velocity vector is restricted to the x axis, perpendicular to the planet's rotation axis. In contrast to the simpler inflow boundary conditions of Ogino (1993), we use time-varying solar wind conditions. Instead of the mass density, the ion number density N_SW = ρ_SW/m_P can be used as well, with the proton mass m_P = 1.672621898 × 10⁻²⁷ kg. In general, the physical properties at grid points (i = 1, j, k) are replaced by the solar wind conditions in every time step. The solar wind vector discretized in time is u_SW,l := u_SW(l Δt). All other outer boundaries are outflow boundaries according to Ogino (1993). In addition to the boundary conditions, our simulation requires initial conditions. Therefore, at time step l = 0, the physical quantities have to be specified in the entire simulation domain. The velocity v is assumed to be zero, so that u^2_{1,i,j,k} = u^3_{1,i,j,k} = u^4_{1,i,j,k} = 0. The density ρ and pressure p are initialized by their solar wind values, ρ_SW(0) and p_SW(0), respectively; thus, u^1_{1,i,j,k} = ρ_SW(0) and u^5_{1,i,j,k} = p_SW(0) are used. The initial values of the magnetic field are determined by the planetary magnetic field. Taking only the dipole and the quadrupole moments into account, the planetary magnetic field can be calculated by Eq. (7) with Eqs. (2) and (5). The initial conditions determine a stationary solution at a certain time step l = l_st with 0 < l_st < l_max. The solar wind conditions for l < l_st are set to the values at l_st. For l > l_st, time-dependent solar wind conditions from spacecraft observations are applied in the simulation and the results are compared to spacecraft observations. The extended magnetic field geometry, especially the arbitrarily aligned dipole moment, can cause a complex motion of the plasma in the magnetosphere, for example due to co-rotation of the plasma or magnetic reconnection. Therefore, different from the simulation described by Ogino (1993), the simulation requires appropriate inner boundary conditions to allow a stable simulation over long time intervals. The planetary surface is approximated by a spherical surface at distance R_P from the planet's center. The distance of a grid point (i, j, k) to the center is r_{i,j,k} = sqrt(x_i² + y_j² + z_k²), where (x_i, y_j, z_k) is the position of the grid point. The velocity of the plasma inside the planet is set to zero, u^n_{l,i,j,k} = 0 for r_{i,j,k} < R_P, n = 2, 3, 4.
It is also possible to set only the normal component of the velocity to zero. Density and pressure gradients between the planet's interior and the plasma outside must not cause forces on the plasma. However, the MHD equations are solved within the entire simulation domain. This can lead to an interaction as sketched in Fig. 2. The values at grid points inside the planet, which have at least one neighboring grid point outside, are replaced in every time step by average values of the surrounding neighboring grid points outside the planet, i.e., the non-boundary neighbors. Neighboring grid points of a grid point (i P , j P , k P ) are {(i P ± 1, j P ± 1, k P ± 1)}. This procedure suppresses the interaction of density and pressure gradients across the planet.
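A minimal sketch of this near-surface averaging is given below; the array layout and the interpretation of the neighbourhood as the 26 surrounding grid points are our assumptions about the implementation.

```python
import numpy as np

def smooth_planet_interior(q, r, r_p):
    """Replace values of a scalar field q at interior grid points that have at least
    one neighbour outside the planet by the mean over their exterior neighbours.
    q and r are 3-D arrays of the field and of the distance to the planet centre."""
    inside = r < r_p
    q_new = q.copy()
    ni, nj, nk = q.shape
    offsets = [(di, dj, dk) for di in (-1, 0, 1) for dj in (-1, 0, 1) for dk in (-1, 0, 1)
               if (di, dj, dk) != (0, 0, 0)]
    for i in range(1, ni - 1):
        for j in range(1, nj - 1):
            for k in range(1, nk - 1):
                if not inside[i, j, k]:
                    continue
                outside_vals = [q[i + di, j + dj, k + dk] for di, dj, dk in offsets
                                if not inside[i + di, j + dj, k + dk]]
                if outside_vals:                     # at least one exterior neighbour
                    q_new[i, j, k] = np.mean(outside_vals)
    return q_new

# Toy usage on a small grid centred on the planet
n = 21
x = np.linspace(-2.0, 2.0, n)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
r = np.sqrt(X ** 2 + Y ** 2 + Z ** 2)
rho = np.where(r < 1.0, 0.1, 1.0)                   # artificial density jump at the surface
rho_smoothed = smooth_planet_interior(rho, r, r_p=1.0)
```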
In contrast to the density and gas pressure, the magnetic field can interact with the planet due to electromagnetic induction, which is additionally implemented. Time-dependent variations in the magnetic field inside the planet are calculated by Eq. (12). We assume the velocity inside the planet to be zero, not considering the detailed time-dependent dynamo action. This is justified due to the very different timescales of magnetospheric and dynamo action. Concerning a possible coupling between magnetosphere and dynamo, see Glassmeier et al. (2007) and Heyner et al. (2011). Thus, the induction equation simplifies to a diffusion equation for the magnetic field. The resistivity distribution in the simulation box is modeled piecewise: the planet's interior consists of two regions with different resistivity, a core with η_Core = 1/σ_Core and a mantle with η_Mantle = 1/σ_Mantle. The resistivity outside the planet is assumed to be constant with the value η_A. As a consequence, the interaction due to diffusion is allowed depending on the electrical resistivity of the planet. This is of particular importance if Mercury is considered; however, it is of minor importance for Earth.
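The piecewise resistivity model can be sketched as a simple radial profile; the structure follows the description above, but the arguments are placeholders, not the conductivities actually used.

```python
def resistivity(r, R_core, R_P, eta_core, eta_mantle, eta_ambient):
    """Radial resistivity profile: core value for r < R_core, mantle value for
    R_core <= r < R_P, constant ambient value outside the planet.
    All numerical values are left to the caller (placeholders)."""
    if r < R_core:
        return eta_core          # eta_Core = 1 / sigma_Core
    if r < R_P:
        return eta_mantle        # eta_Mantle = 1 / sigma_Mantle
    return eta_ambient           # eta_A outside the planet
```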
Using spacecraft data
The simulation uses solar wind parameters as boundary conditions. With the two spacecraft of the BepiColombo mission, simultaneous observations of the solar wind as well as of the magnetic field close to the planet will be available in the future. Thereby, the solar wind conditions can be determined either directly by in situ measurements or by using the reconstruction method of Nabert et al. (2015) from data within the interaction region. This allows a precise determination of the solar wind conditions with a high time resolution. The THEMIS mission provides data from similar spacecraft constellations at Earth. The solar wind conditions observed by a spacecraft need to be transferred to the inflow boundary of the simulation box. Therefore, the solar wind data are shifted by a time t_SC/in, where n_P is the solar wind's phase plane normal vector and r_SC/in := r_SC − r_in is the distance vector between the spacecraft's position r_SC and the center of the inflow boundary r_in.
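The time shift can be sketched as the usual planar phase-front propagation estimate; the exact expression used above is not reproduced, so the formula below is an assumption consistent with the quantities it introduces.

```python
import numpy as np

def solar_wind_time_shift(r_sc, r_in, v_sw, n_p):
    """Time by which solar wind data observed at r_sc are shifted so that they
    apply at the center of the inflow boundary r_in.  Planar phase-front
    propagation along the normal n_p is assumed."""
    r_sc_in = np.asarray(r_sc, float) - np.asarray(r_in, float)   # r_SC/in
    return np.dot(n_p, r_sc_in) / np.dot(n_p, v_sw)

# Example: spacecraft upstream of the inflow plane, 400 km/s antisunward flow.
R_E = 6371.0  # km
print(solar_wind_time_shift([60 * R_E, 0, 0], [25 * R_E, 0, 0],
                            [-400.0, 0, 0], [1.0, 0, 0]))   # shift in seconds
```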
For a comparison between simulation results and spacecraft data, the data need to be transferred into MSP coordinates. Therefore, the THEMIS data are first transferred into geographic (GEO) coordinates. Vectors in these coordinates can be transferred into MSP coordinates by the rotation matrices R_y(θ_K) and R_z(λ_K). The rotation angles θ_K and λ_K are determined by the solar wind velocity vector. Here, v_SW,GEO = (v_x,SW,GEO, v_y,SW,GEO, v_z,SW,GEO)^T is the solar wind velocity vector in GEO coordinates. A vector in GEO coordinates g_GEO can then be transferred into a vector in MSP coordinates g_MSP by applying these rotation matrices. Applying the coordinate transformation, the solar wind velocity becomes parallel to the x axis. For the validation of the code, the known planetary dipole moment components of Earth according to Table 1 are used. To include the rotation of the planetary magnetic field due to the planet's rotation, the magnetic moments of the magnetic field are modified according to Eq. (23). The angles of the transformation vary continuously along the spacecraft's trajectory. The rotation of the planetary magnetic field is performed every 200 time steps by subtracting the planetary field contribution from the total magnetic field at the time step considered and adding the planetary magnetic field corresponding to the new angles θ_K and λ_K.
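The transformation can be sketched as follows. The sign conventions for θ_K and λ_K below are an assumption, chosen so that the rotated solar wind velocity lies along the x axis; the exact definitions used above may differ by signs.

```python
import numpy as np

def R_y(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[ c, 0, s],
                     [ 0, 1, 0],
                     [-s, 0, c]])

def R_z(lam):
    c, s = np.cos(lam), np.sin(lam)
    return np.array([[ c, s, 0],
                     [-s, c, 0],
                     [ 0, 0, 1]])

def geo_to_msp_matrix(v_sw_geo):
    """Rotation that maps the solar wind velocity (GEO) onto the x axis.
    The angle conventions are an assumption for illustration."""
    vx, vy, vz = v_sw_geo
    lam_K = np.arctan2(vy, vx)
    theta_K = np.arctan2(vz, np.hypot(vx, vy))
    return R_y(theta_K) @ R_z(lam_K)

v_sw_geo = np.array([-400.0, 30.0, -20.0])     # km/s, illustrative values
M = geo_to_msp_matrix(v_sw_geo)
print(M @ v_sw_geo)   # expected: (|v|, ~0, ~0) with this convention
```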
Validation of the simulation code
To validate the modified simulation code, we compare a simulation using the known dipole moment of Earth according to Table 1 with THEMIS magnetosheath data from 24 August 2008 measured by THC (Angelopoulos, 2008). Solar wind conditions are observed by THB during the magnetosheath transition. The size of the simulation box is x L = 50.0 R E , y L = 60.0 R E , and z L = 60.0 R E , with the planet in the center. The simulation uses a grid with i max = 200, j max = 150, and k max = 150. Furthermore, the values of the diffusion coefficients were chosen according to Ogino (1993) for a stable simulation at Earth. The data and corresponding simulation results on the spacecraft's trajectory are presented in Fig. 3.
The bow shock is observed at about 00:30 UT and the magnetopause at about 03:30 UT in accordance with the simulation results. Most physical quantities show a good agreement between actual observations and simulation results. Only the ion density in the magnetosheath is observed to be higher than in the simulation. Furthermore, the magnetic field in the magnetosphere is about 15 nT weaker than measured by the spacecraft. The magnetopause thickness is observed to be smaller than in the simulation, which is related to the diffusion coefficients required for a stable simulation. These differences between simulation result and data are mainly caused by numerical errors. This can impact an estimation of the planetary magnetic field. The lower magnetospheric magnetic field will tend to overestimate the planetary magnetic field strength. However, this overestimation is limited due to the magnetopause location. A much stronger dipole moment will increase the magnetopause distance and the magnetic field will increase in the magnetosheath, which is not in accord with the observations. In general, the MHD simulation results agree well with the observations made. In a future step, the simulation code might be improved to reduce differences between simulation results and observations. An adaptive mesh refinement should be introduced to enhance the accuracy close to the magnetopause and reduce numerical errors.
3 Data assimilation
Cost function and its minimization
In the previous section, spacecraft data were qualitatively compared to the results of the MHD simulation. To quantify the deviations, a cost function is introduced. Therefore, the method of least squares is used. The sum of squared residuals, FQ, of M measured data values y_m at points x_m with a model f depending on the parameters s is FQ(s) = Σ_{m=1}^{M} (y_m − f(x_m, s))². The parameters s of the MHD model are related to a vector space P. Here, for simplification, we consider only the planetary magnetic field parameters of the dipole and quadrupole. Thus, the parameters are s = (m, Q)^T = (m_x, m_y, m_z, Q_xx, Q_xy, Q_xz, Q_yy, Q_yz)^T, (26) with Q := (Q_xx, Q_xy, Q_xz, Q_yy, Q_yz)^T. The vector space corresponding to these parameters is named P_D,Q. The parameters of the model s are estimated by minimizing the sum of squared residuals FQ. Transferred to the magnetic field observations B_data := (B_x,data, B_y,data, B_z,data)^T and MHD simulation results B_simu := (B_x,simu, B_y,simu, B_z,simu)^T with the spacecraft's position in the orbit r_SC,m, the cost function K is the sum of the squared differences between the observed and simulated magnetic field vectors over all measurement points, K(s) = Σ_{m=1}^{M} |B_data(r_SC,m) − B_simu(r_SC,m, s)|². (27) A gradient-based optimization can be used to minimize the cost function with respect to the parameters of the model s. Starting from a point s_0 in parameter space, new points s_k = (m_k, Q_k)^T are determined with every kth gradient calculation. This optimization problem is without constraints and can be solved using a quasi-Newton method. We use the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm (Press et al., 1992) to minimize the cost function.
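The structure of the minimization can be illustrated with a toy stand-in for the MHD forward model. The linear model below only shows how the least-squares cost function and the BFGS call fit together; it is not the actual simulation, and all names and values are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def cost(s, b_data, forward_model):
    """Sum of squared residuals between observed and modeled magnetic field
    along the orbit; forward_model(s) stands in for a full MHD run."""
    b_simu = forward_model(s)                 # shape (M, 3)
    return np.sum((b_data - b_simu) ** 2)

# Toy forward model: a linear map from 8 parameters to 10 field vectors.
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 8))
s_true = rng.normal(size=8)
b_data = (A @ s_true).reshape(10, 3)
forward_model = lambda s: (A @ s).reshape(10, 3)

result = minimize(lambda s: cost(s, b_data, forward_model),
                  x0=np.zeros(8), method="BFGS")
print(np.allclose(result.x, s_true, atol=1e-3))   # parameters are recovered
```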
The algorithm requires the gradient of the cost function K with respect to the parameters of the model at points s_k in parameter space. There are different possibilities to compute these gradients. For example, the gradient can be approximated by difference quotients, ∂_s K(s) ≈ Σ_l e_l [K(s + Δs_l e_l) − K(s)]/Δs_l, (28) where e_l denotes the lth unit vector and Δs_l is the corresponding step size in parameter space P. The sum of Eq. (28) includes all dimensions of parameter space (N_P := dim(P)). Note that the gradient ∂_s K(s) is used as a column vector. The step sizes Δs_l need to be adequately small to approximate the gradient sufficiently well.
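A short sketch of the difference-quotient gradient of Eq. (28) is given below; the default step size is a placeholder that would have to be tuned in practice.

```python
import numpy as np

def gradient_difference_quotient(cost, s, ds=1e-4):
    """Forward-difference approximation of the gradient of the cost function
    (Eq. 28): one extra cost evaluation, i.e., one extra forward run, per
    parameter dimension."""
    s = np.asarray(s, dtype=float)
    k0 = cost(s)
    grad = np.zeros_like(s)
    for l in range(s.size):
        e_l = np.zeros_like(s)
        e_l[l] = 1.0
        grad[l] = (cost(s + ds * e_l) - k0) / ds
    return grad
```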
Automatic differentiation and adjoint method
Each calculation of the cost function for a certain set of parameters requires a full global MHD simulation of the data along the spacecraft's trajectory. Thus, (N P + 1) simulations have to be performed in the calculation of the gradient according to Eq. (28). In general, the calculation is extremely time-consuming because of the nonlinearity of the MHD equations.
Another possibility to calculate the gradient ∂_s K(s) is the differentiation using analytical expressions, as explained in detail in the following. The solution of the MHD simulation depending on space and time coordinates u(t, x) can be represented by a vector u_{t,x} on a numerical grid. This vector contains the solution at all time steps and discrete positions in space for all physical quantities in its components. Thus, the number of components of the vector u_{t,x} is N_v = N_var · N_grid · (l_max + 1), with the constants as defined in the previous chapter. The simulation code calculates the time- and spatially dependent solution of the MHD quantities u(t, x) iteratively. The iteration is implemented by a time loop in the simulation code, whereby u(t = l Δt, x) is computed from the results of the previous time step u(t = (l−1) Δt, x) and the boundary conditions. Equivalently, the time iteration of l can be considered as an iteration that steadily improves the approximation of the final solution vector u_{t,x}, sketched in Fig. 4. Thereby, after the lth iteration step, the vector q^l_{t,x} contains the valid solution for all time steps that satisfy t ≤ l Δt. The final solution q^{l_max}_{t,x} = u_{t,x} is obtained after l_max iteration steps. At the 0th iteration step, the simulation needs to be initialized. The simulation code calculates the solution in the lth iteration step from the previous approximation by a function F. Thus, the lth iteration step can be expressed by q^l_{t,x} = F(q^{l−1}_{t,x}). (30) The cost function K(q^{l_max}_{t,x}(s)) depends implicitly on the parameters s. With respect to the nested dependences of the solution in Eq. (30), the gradient of the cost function can be expressed by the chain rule: ∂K(s)/∂s = [∂K/∂q^{l_max}_{t,x}] · [∂q^{l_max}_{t,x}/∂q^{l_max−1}_{t,x}] ⋯ [∂q^{1}_{t,x}/∂q^{0}_{t,x}] · [∂q^{0}_{t,x}(s)/∂s]. (31)
Figure 4. The left column sketches the time iteration of the MHD simulation code starting from the initial state at t = 0 determined by the planetary magnetic field parameters s. At a certain time step, a stationary solution is obtained and the time iteration continues using time-dependent solar wind conditions until the simulation ends at u_{t_E,x}. In the middle column, the corresponding interpretation of updating the complete time- and spatially dependent solution vector q_{t,x} is presented. The cost function K is calculated from the final vector. On the right side, the automatic differentiation gradient calculation is presented, starting from the bottom and multiplying each factor according to Eq. (31).
This expression contains the stationary solution at l = l_st because 0 < l_st < l_max. Although the cost function is evaluated only after the stationary state has been obtained, i.e., for l > l_st, the cost function also depends implicitly on prior time steps because the stationary solution emerges from the initial state. The magnetic field components of the initial state vector u^0_{t,x} depend on the parameters s because the initial magnetic field distribution in the simulation is created by the dipole and quadrupole parameters. Equation (31) can be written in the compact form of Eq. (32) by abbreviating its factors as matrices A^{-1}_l. The function F in Eq. (30) is determined by the Lax-Wendroff scheme of the differential equations, which is related to the MHD equations and the boundary conditions. Therefore, the derivative matrices in Eq. (31) can be determined by analytical expressions. The time iteration of the simulation code starts at l = 0 and ends at l_max. After the lth iteration step, the corresponding matrix A^{-1}_l can be calculated. Starting from a unit matrix, the matrix containing the derivatives is multiplied after every time step to the left side. Finally, after l_max iterations, the gradient ∂K(s)/∂s is obtained.
This procedure is called forward differentiation because the gradient is calculated in parallel to the execution of the time loop in the simulation code. The advantage over the computation of the gradient using difference quotients according to Eq. (28) is that no errors due to finite step sizes occur. Forward differentiation can be applied by hand to the simulation code or, alternatively, by an automatic differentiation (AD) tool (Wengert, 1964). Therefore, the cost function K and its dependent parameters s need to be declared in the code. The AD tool identifies all implicit dependences. The required analytical expressions for the derivatives are taken from a library of the AD tool and inserted at the correct positions in the code. Note that the library contains elementary analytical derivatives of all important expressions, such as ∂_x sin(x) = cos(x). According to Eq. (31), the inserted expressions are related to each other such that the required gradient is computed. Several different AD tools have been developed during the last decades. Here, the Transformation of Algorithms in Fortran (TAF) tool (Giering and Kaminski, 2003) from the company FastOpt was used (http://www.FastOpt.com).
An AD tool is able to differentiate a numerical code automatically, i.e., the tool can be applied without considering details of the implementation. However, for complex numerical codes, such as MHD simulation codes, problems might occur. For example, codes using parallel-computing function calls via the message-passing interface (MPI), as our simulation code does, usually need further treatment. The analytical forward differentiation with an AD tool is also called automatic forward differentiation.
The computational costs for the calculation of the gradient with difference quotients or using automatic forward differentiation do not differ much. However, the latter procedure leads to a more efficient approach, the adjoint method. The adjoint method is extensively used for optimization problems in fluid dynamics, e.g., drag minimization by variations in surface geometry (e.g., Jameson, 1988; Othmer, 2008, 2014; Meader and Martins, 2012), or in seismology (e.g., Fichtner et al., 2006).
The adjoint method can be introduced with systems of linear equations, as described briefly in the following (e.g., Giles and Pierce, 2000; McNamara et al., 2004; Nabert et al., 2015). The symbols used for variables, vectors, and matrices refer to the previous considerations and are marked by an asterisk as an index for distinction. We consider the system of equations A_* X_* = L_* (34) with the matrix of coefficients A_*, the solution X_*, and the inhomogeneity L_*. All elements of the matrices are real numbers. The scalar product of a vector g_* with the matrix X_*, g_*^T · X_*, (35) should be calculated. This scalar product can be computed by solving Eq. (34) first and then calculating the product of the vector g_* with the solution X_*. This approach is called forward calculation. Another possibility is to use the adjoint method. To deduce the method, the product of a vector y_*^T with both sides of the system of linear equations (34) is considered: y_*^T · A_* · X_* = y_*^T · L_*. (36) The vector y_* is defined by y_*^T · A_* = g_*^T. (37) This equation is transposed, which leads to the adjoint system of equations A_*^T · y_* = g_*. (38) Using Eqs. (36) and (37), the scalar product (Eq. 35) can be written as g_*^T · X_* = y_*^T · A_* · X_* = y_*^T · L_*. (39) If the adjoint system of Eq. (38) is solved, y_*^T · L_* can be computed, which is nothing other than the scalar product (Eq. 35), as seen in Eq. (39).
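The equivalence of the forward and adjoint calculations of this scalar product can be checked numerically with small random matrices; this is only a toy check of the linear-algebra identity, not the MHD code.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_p = 50, 8
A_star = rng.normal(size=(n, n))
L_star = rng.normal(size=(n, n_p))     # inhomogeneity with n_p column vectors
g_star = rng.normal(size=n)

# Forward calculation: solve A X = L (n_p linear systems), then take g^T X.
X_star = np.linalg.solve(A_star, L_star)
forward = g_star @ X_star

# Adjoint calculation: solve a single system A^T y = g, then take y^T L.
y_star = np.linalg.solve(A_star.T, g_star)
adjoint = y_star @ L_star

print(np.allclose(forward, adjoint))   # the two calculations agree
```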
The computational costs are mainly determined by the number of multiplications and differ between the two possibilities of calculating the scalar product (Eq. 35). Only in the case of a column-vector inhomogeneity L_* is the number of multiplications equal. If the matrix L_* consists of N_*,P column vectors, N_*,P systems of linear equations with a vector inhomogeneity need to be solved in the forward calculation. The adjoint method is independent of N_*,P and only a single system of linear equations needs to be solved. Therefore, the latter approach requires N_*,P times fewer multiplications.
The adjoint approach can be applied to the calculation of the gradient in Eq. (32). The product of all matrices A^{-1}_l with the remaining factors of Eq. (32) is written out in Eq. (40). The second product on the right side of Eq. (40) can be substituted accordingly, and this can be related to the system of linear equations (34) by identifying the quantities marked with an asterisk as an index.
During the analytical forward differentiation, the gradient is computed successively using the chain rule from the right to the left. This corresponds to a procedure where, at first, the system of linear equations (34) is solved with respect to X_* and then the scalar product (Eq. 35) is calculated. If the matrix products of Eq. (40) are computed from left to right, the leftmost product is determined first. This corresponds to solving the adjoint Eq. (38) with respect to y_*. The scalar product (Eq. 35) is then determined by y_*^T · L_*, which is related to the corresponding multiplication y^T · L in the calculation of the cost function gradient. Thus, the adjoint method for the gradient calculation of the cost function can be identified with the execution of the multiplications from the left to the right in Eq. (40).
The dimensions of the vectors and matrices involved follow from the definitions above. The calculation of the gradient by computing the matrices from the right to the left in Eq. (40) requires N_rl multiplications of components, whereas, if the gradient is calculated from the left to the right in Eq. (40), N_lr multiplications of components are performed. The limit N_P = 1 leads to N_rl = N_lr. Usually, one can assume N_P ≪ N_v because the number of grid points exceeds the dimension of parameter space, which is eight for the dipole and quadrupole parameters. Then, Eq. (44) simplifies accordingly. Thus, the multiplication of the matrices in Eq. (40) from the left to the right, the adjoint approach, is more efficient for many parameters and requires about N_P times fewer multiplications. The evaluation procedure for the simulation code is depicted in Fig. 4. However, the numerical implementation of the adjoint method is more difficult than the analytical forward differentiation. As described, the calculation of the gradient with the analytical forward differentiation runs in parallel to the execution of the time loop in the simulation code. In contrast, the solution at the last time iteration, q^{l_max}_{t,x}, has to be known to calculate g^T · A^{-1}_{l_max}. Thus, at first, the simulation needs to be performed once, whereby all calculation results that are required for the matrix multiplications are stored temporarily. Then, the gradient can be computed according to the adjoint approach.
There are AD tools that can differentiate codes not only according to forward differentiation but also according to the adjoint method. However, the available memory on a computer is often too small to store all the required results in the central memory. The memory consumption M_Memory can be estimated by multiplying the number of grid points of the simulation box N_grid according to Eq. (14) with the number of time steps l_max, the number of MHD variables N_var, and the size of an MHD variable M_var. The number of variables of the MHD simulation is N_var = 8 and the size of such a variable is M_var = 4 bytes if a float variable is assumed. This gives a memory consumption of about 1600 GB for a simulation grid with i_max = j_max = k_max = 100 and l_max = 5 × 10^5. The central memory is often much smaller, so that a certain portion of the variables needs to be stored on the hard disk. However, the access time of the central memory is much shorter than that of the hard disk, and thus the runtime of the algorithm becomes longer when data are stored on the hard disk.
To minimize the access to the hard disk, checkpointing can be used. Thereby, the main iteration loop of the algorithm is split at certain checkpoints into smaller loops. Each smaller loop then iterates over N_loop,check iterations instead of the complete time loop with l_max iterations, which reduces the memory requirement of such a loop correspondingly. The variables during the calculation of such a smaller loop can be stored within the central memory. After the execution of the smaller loop, the results are written to the hard disk in order to combine the results of all smaller loops. However, using smaller loops, the adjoint method can only be applied within these smaller loops. Thus, checkpointing reduces the slow memory accesses, but the adjoint approach is restricted to a smaller part of the algorithm. In total, this reduces the runtime of the algorithm, but the performance is below the theoretically possible performance of the adjoint approach with unlimited central memory space. Note that instead of using only observations of a single spacecraft, simultaneous measurements from multiple spacecraft at different locations can be included in Eq. (27) as well. This can be done without additional computational costs and memory capacity because the solution of the MHD simulation is calculated in the entire simulation domain and stored anyway.
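The checkpointing idea can be sketched schematically as splitting the time loop into chunks whose intermediate states fit into central memory. The sketch below only illustrates the control flow and is not the TAF-generated adjoint code; the function names are placeholders.

```python
def checkpointed_run(state0, l_max, n_loop_check, step, offload):
    """Advance a simulation for l_max steps in chunks of n_loop_check steps.
    Within a chunk, all intermediate states are kept in memory; after each
    chunk they are handed to offload() (e.g., written to disk) so that the
    adjoint sweep can later be performed chunk by chunk.  Schematic only."""
    state = state0
    for start in range(0, l_max, n_loop_check):
        n_steps = min(n_loop_check, l_max - start)
        states = [state]
        for _ in range(n_steps):
            state = step(state)
            states.append(state)
        offload(start, states)      # move chunk results out of central memory
    return state

# Tiny usage example with a scalar "state" and a print-based offload.
final = checkpointed_run(1.0, l_max=10, n_loop_check=4,
                         step=lambda s: 0.9 * s,
                         offload=lambda start, states: print(start, len(states)))
```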
Adjoint MHD simulation code
The AD tool applied for an automatic backward differentiation transfers a numerical code for the calculation of a cost function into an adjoint code, which can compute the gradient according to the adjoint approach. This was done for the MHD simulation code presented in the previous chapter by the TAF tool of the company FastOpt. Thereby, the parameter space of the dipole and quadrupole parameters P_D,Q was considered. Thus, the adjoint code computes the gradient of the cost function (27) with respect to the parameters (26).
Figure 5. The relative errors for gradients determined by difference quotients and the adjoint method for the dipole components. On the left side, different points in parameter space are considered. On the right side, the dependence of the error on a different number of time iteration steps is shown.
Figure 6. The runtime of calculating gradients using the adjoint method t_Adj and using difference quotients t_DQ. On the left side, the dependence on the number of time iteration steps is presented. On the right side, the ratio of the runtimes is shown.
To validate the adjoint MHD simulation code, the gradients produced by the adjoint code are compared to those calculated by difference quotients according to Eq. (28). Therefore, at first, the interaction of the solar wind with the planetary magnetic field is neglected and only the planetary magnetic field, represented by its dipole and quadrupole moments, is taken into account. Gradients at certain points s_0 = (0, 0, m_z, 0, 0, 0, 0, 0)^T in parameter space are considered. Thereby, the m_z component varies between 0.7 and 1.2 m_norm with a step size of 0.1 m_norm. The spacecraft data B_data on a trajectory r_SC, required to calculate the cost function, are generated synthetically along the x axis between 20.2 and 9 R_E with a step size of 0.42 R_E. The gradient of the cost function is calculated using difference quotients, ∂_{s_0}K_DQ, and the adjoint method, ∂_{s_0}K_Adj, for different s_0. The relative error of the ith component of the gradient is defined as the magnitude of the difference of the two gradient components, normalized by the larger of their magnitudes. Here, the result of the maximum function max(a, b) is the larger value of a and b, and e_i denotes the ith unit vector. The relative error of the dipole moment for different s_0 is depicted in Fig. 5. The error is smaller than 10^−4, i.e., both gradients agree for different values of m_z. Now, the interaction of the planet with the solar wind is taken into account. Thereby, the gradients calculated by the adjoint method can be compared to gradients computed by difference quotients for a different number of time iterations. The corresponding relative errors of the dipole components of the gradient are shown in Fig. 5 on the right side. It is seen that the gradients agree very well.
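The comparison can be expressed as a small helper. Normalizing by the larger of the two component magnitudes follows the max() described above; the exact definition used in the validation may differ slightly.

```python
import numpy as np

def relative_gradient_error(grad_dq, grad_adj):
    """Component-wise relative error between the difference-quotient and the
    adjoint gradients, normalized by the larger magnitude of the two."""
    grad_dq = np.asarray(grad_dq, dtype=float)
    grad_adj = np.asarray(grad_adj, dtype=float)
    denom = np.maximum(np.abs(grad_dq), np.abs(grad_adj))
    denom[denom == 0] = 1.0          # avoid 0/0 when both components vanish
    return np.abs(grad_dq - grad_adj) / denom
```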
To determine the runtime of the adjoint code, the gradient calculations are performed on a test computer for different numbers of time iteration steps. The test computer uses 64 GB of central memory and has an Intel Xeon E5 processor with 12 cores and 24 threads at 2.5 GHz. The results are shown in Fig. 6 on the left side. The runtime of calculating the gradients increases linearly with the number of iteration steps performed, as expected. The plot on the right side presents the ratio of the runtime of the adjoint code t_Adj and the runtime using difference quotients t_DQ. On the test computer, the adjoint method calculates the gradient about 33 % faster than using the difference quotients. According to the previous argumentation, eight parameters require nine MHD simulation calls to determine the gradient with difference quotients of Eq. (28). The adjoint method needs to run the simulation once to store all necessary results and another simulation run to calculate the gradient. In Fig. 4, the first simulation run is shown on the left side from top to bottom, storing all results in the vector representation presented in the middle. Using these results, the automatic differentiation procedure calculates the gradient as sketched on the right from bottom to top, which corresponds to another simulation run. Thus, in theory, the adjoint method can be up to 78 % faster than the difference quotient calculation. However, our adjoint code uses checkpointing because the central memory is too small, which increases the runtime. Consequently, the test computer configuration is not optimal to achieve the best performance. The performance can be improved by using a computer cluster with distributed memory space: each core can then access its own memory space and checkpointing can be avoided. Furthermore, it should be noted that without additional computational costs and memory requirements, more parameters can be introduced in the estimation process of the adjoint approach, such as octupole planetary magnetic field parameters.
4 Estimation of planetary magnetic field parameters
4.1 Using synthetic data
At first, the results of data assimilation using synthetically produced data are considered, neglecting the interaction of the solar wind with the planetary magnetic field. The simulation box has a length of 60.2 R_E in every dimension with the planet in its center. The number of grid points is i_max = j_max = k_max = 300. Synthetic spacecraft data B_data are calculated from the magnetic field distributions of certain dipole and quadrupole parameters s_Planet along a trajectory r_SC(x). The initialization s_0 for the estimation procedure of the planetary parameters differs from these moments. Starting from this initialization, the cost function is minimized.
The first trajectory considered here is r SC (x) = (x, 10.1 R E − x, 0) T , which is diagonal within the xy plane. The spacecraft's magnetic field data B data are generated by a dipole along the z axis, i.e., s Planet = (0, 0, 1, 0, 0, 0, 0, 0) T m norm .
In parameter space, the starting point of the estimation procedure is s_0 = (1, 0, 0, 0, 0, 0, 0, 0)^T m_norm, which is nothing other than a dipole along the x axis. The BFGS algorithm iteratively computes new gradients that determine the direction in which the cost function (27) is minimized. The corresponding dipole parameters during the minimization, depending on the iteration step of calculating new gradients, are presented in Fig. 7 in the top left panel. The vector of the dipole moment s_Planet is reconstructed very well after 15 iteration steps.
Next, a different trajectory is considered to produce the synthetic data: r_SC = (x, 30.1 R_E − x, 0)^T. This orbit is farther away from the planet than the previous trajectory. The results of the estimation process are depicted in Fig. 7 in the top right panel. Again, the dipole vector is reconstructed very well; however, about twice as many iterations are required. This is related to the variations in the magnetic field strength, which are relatively smaller on the trajectory farther out. The plot on the bottom shows the iterative assimilation of the MHD simulation to the THC data (blue) before the first iteration (black) and after the 13th iteration (red).
The simultaneous estimation of dipole and quadrupole parameters is considered as well by using magnetic field data B data generated by s Planet = (0, 0, 1, 1, 0, 0, 0, 0) T m norm . Thereby, the trajectory r SC (x) = (x, 10.1 R E − x, 0) T is used. The reconstruction of the planetary magnetic field, starting from s 0 = (0, 0, 0, 0, 0, 0, 0, 0) T m norm , is shown in Fig. 7 in the bottom left panel. Additionally, the estimation process from a different point in parameter space s 0 = (1, 0, 0, 0, 0, 0, 0, 0) T m norm is realized. The results are presented in Fig. 7 in the bottom right panel. In both situations, the moments s Planet were correctly determined. Thereby, the estimation starting in parameter space farther away from the solution required 15 more iteration steps. Altogether, it is seen that the dipole as well as the quadrupole parameters can be reconstructed from synthetic data, whereby larger magnetic field variations along the trajectory or a starting point s 0 closer to the minimum speed up the estimation process.
Using THEMIS data
After proving the general functionality of the algorithm, it is applied using THEMIS spacecraft data at Earth. Thereby, data from the magnetosheath, a region strongly influenced by the interaction process of the solar wind, are considered. In contrast to the situation at Earth, spacecraft magnetic field observations in Mercury's magnetosphere are strongly modified due to the magnetosphere's small size. Here, we restrict our approach to Earth's magnetosheath data to consider a strongly disturbed magnetic environment comparable to the situation at Mercury. However, in the final application, magnetospheric data will be used as well to reduce errors. Due to the weak components of the quadrupole at Earth, their influence is negligible close to the magnetopause. The largest quadrupole moment corresponds to a magnetic field of < 0.1 nT at 10 R_E. This is very small compared to the contribution of the dipole's z component of about 30 nT. Thus, only the dipole moment is considered in the estimation at Earth. The estimation process starts from s_0 = (0.25, 0, −1.2, 0, 0, 0, 0, 0)^T m_norm in parameter space. Subsequently, the cost function is minimized iteratively, whereby every new calculation of a gradient denotes a new iteration step.
The magnetosheath transition observed by THC on 24 August 2008, presented in Fig. 3, is used for a first estimation of the dipole parameters. Thereby, the solar wind measurements of the THB spacecraft determine the inflow boundary conditions of the MHD simulations. The results of the estimation process are depicted in Fig. 8, showing the values of the dipole moment as well as of the cost function. The dipole components vary mainly during the first iterations. The value of the cost function is strongly reduced from iteration steps 0 to 1 and 6 to 7. After the eighth iteration step, the cost function and the components of the dipole moment do not change much. Finally, the solution vector after 13 iteration steps is s_13 = (−0.072, 0.084, −1.078, 0, 0, 0, 0, 0)^T m_norm. Thereby, all components are closer to the values of the dipole moment of Earth according to Eq. (24) than the initial values. The relative errors of the dipole components are Δm_x = (s_13,1 − m_x,E)/m_x,E = −0.44, Δm_y = (s_13,2 − m_y,E)/m_y,E = −0.47, and Δm_z = (s_13,3 − m_z,E)/m_z,E = 0.14. The relative error of the z component is the smallest because Earth's dipole is mainly along the z direction. Considering the magnitude, the relative error is 0.13. The panels for B_x, B_y, and B_z in Fig. 8 show the magnetic field distribution of the MHD simulation corresponding to s_0 and s_13.
Conclusions
We introduced an approach to estimating planetary parameters using global MHD simulations of the interaction of the solar wind with a planet. A simple MHD simulation code was introduced and prepared for an automatic differentiation tool to obtain an adjoint MHD simulation code. The differences of spacecraft data and corresponding simulation results are quantified by a cost function, which is minimized by a gradient-based optimization. The adjoint code computes the gradient with lower computational effort compared to a difference quotient calculation.
Our approach is designed to estimate planetary magnetic fields, especially if the field strength is weak, so that the interaction strongly modifies the magnetic field of the planet's environment. We used THEMIS data of Earth's magnetosheath to simulate such an environment and to test our approach. The results of the estimation process can be affected by statistical and systematic errors. Statistical errors, however, will not contribute to the mean values of the estimated planetary magnetic field if a sufficiently large number of magnetosheath transitions is considered. For example, the solar wind density can be measured incorrectly due to a spacecraft potential (McFadden et al., 2008). However, the density is usually equally overestimated and underestimated. Considering a single magnetosheath transition, the estimated dipole magnitude of Earth differs by about 13 % from the expected value. Based on this approach, further transitions can be considered to minimize the errors. Note that including magnetospheric data at Mercury will further reduce the statistical error. The runtime of the parameter estimation on the test computer is about 1 week for the 5 h of magnetosheath data. This calculation procedure is fast enough to allow taking considerably more data into account, especially if supercomputers are used.
We used a simple MHD simulation code to investigate the automatic differentiation procedure. As a next step, the simulation code needs to be improved, e.g., by an adaptive mesh refinement, to reduce numerical errors. Kinetic or hybrid simulation codes can also be considered and treated with an automatic differentiation tool. The limiting factor for applying the automatic differentiation is not the complexity of the code but the memory consumption. Using our test computer, the adjoint approach was about 33 % faster than a finite difference approach. Although the adjoint MHD code does not calculate the gradient very much faster than using difference quotients, it has the advantage that further parameters, such as higher-order magnetic field moments or parameters of the planet's conductivity, can be included with nearly no additional computational costs. Nonetheless, the performance of the adjoint code is, owing to memory limitations of our test computer, much lower than expected from theory. Thus, as a further step, the test computer configuration needs to be modified to increase performance. It is beneficial for the adjoint approach that each core has access to its own memory, which is not the case on our test computer. Thus, instead of using traditional supercomputers with fewer but more powerful computers, a computer cluster using many commodity computers with their own memory should be considered. Such computer configurations recently became very popular in big data analysis using Google's well-known MapReduce technique (Dean and Ghemawat, 2004). A similar configuration might be more suitable for the adjoint code and increase its performance. The ability of our approach to perform on clusters with many cores depends on the parallelization of the MHD simulation code. Although this can be limited to a certain number of cores, another possibility to parallelize the estimation process is to split the data into subsets and perform the calculations on these subsets in parallel. Each data set will provide an individual estimator of the planetary parameters, which can be combined in an ensemble averaging technique to reduce errors.
Data availability. Data from the THEMIS mission are publicly available and can be obtained from http://themis.ssl.berkeley.edu/data/themis from the University of California Berkeley (Angelopoulos, 2008).
Competing interests. The authors declare that they have no conflict of interest.
"Physics",
"Environmental Science"
] |
Second harmonic generation in the bulk of silicon induced by an electric field of a high power terahertz pulse
The experimental findings on second harmonic generation (SHG) in centrosymmetric crystalline silicon are reported. The SHG is induced by the extremely high electric field (up to 15 MV/cm), parallel to the crystal surface, of a short terahertz (THz) pulse, while probing with an infrared femtosecond optical pulse. SHG under such unique conditions is reported for the first time. At electric field amplitudes above 8 MV/cm, the quadratic dependence of the SHG yield, integrated over the THz pulse duration, on the electric field is violated, and the SHG yield does not change with a further increase of the THz field. The saturation of the SHG intensity at high electric fields is explained in terms of a carrier density increase due to impact ionization and destructive interference of the electric-field-induced and current-induced nonlinear polarizations.
Results
The silicon sample used in the experiments, being p-type with a low electron density, is also non-absorbing for THz radiation, in contrast to n-type silicon with a high electron density 11,12 .
Transmittance measurements of the silicon wafer as a function of the electric field of the THz pulse incident on the sample were carried out in order to interpret the experimental findings accurately and to estimate the electric field strength of the THz pulse at the backside of the silicon wafer (this field serves as the pump for SHG, see Methods).
The electric field strength of the incident THz pulse was changed by adjusting the laser pump energy incident on the THz crystal. The transmittance was determined as a ratio of the THz pulse energy transmitted through the silicon sample to the THz pulse energy incident on the sample.
The transmittance of the silicon wafer, taking into account the Fresnel reflection and absorption losses, is shown in Fig. 1. One can see that the transmittance stays nearly the same at electric field strengths in the range from 2 to 8 MV/cm. However, with an increasing THz electric field strength above 10 MV/cm, the transmittance of silicon begins to decrease, by up to 40% at 22 MV/cm. In addition, the measured dependence of the THz radiation reflectivity on its electric field strength did not show any changes. Therefore, it might be supposed that the decrease in transmittance is caused solely by an increase in absorption.
On the assumption that the changes in absorption are due to impact ionization and the subsequent increase in free carrier density, the electron density estimated at 1 THz within the frame of the Drude model does not exceed 10^16 cm^−3.
Since at this concentration the plasma frequency is much lower than the optical probe radiation frequency, no significant changes in the dielectric function and coherence length occur for radiation at 1240 and 620 nm.
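An order-of-magnitude check of this statement can be made with the free-electron plasma frequency. Using the bare electron mass and ignoring the background permittivity and effective mass of silicon is a simplification, but it does not change the conclusion that the plasma frequency stays in the THz range, far below the optical probe frequency.

```python
import numpy as np

e    = 1.602176634e-19      # elementary charge, C
m_e  = 9.1093837015e-31     # electron mass, kg (effective mass neglected)
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m
c    = 2.99792458e8         # speed of light, m/s

n = 1e16 * 1e6              # carrier density: 1e16 cm^-3 -> m^-3
omega_p = np.sqrt(n * e**2 / (eps0 * m_e))
f_p = omega_p / (2 * np.pi)          # ~1e12 Hz (THz range)
f_probe = c / 1240e-9                # ~2.4e14 Hz at 1240 nm

print(f"plasma frequency ~ {f_p:.2e} Hz, probe frequency ~ {f_probe:.2e} Hz")
```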
Contour maps of the electric field temporal profile of the transmitted THz pulse as a function of its energy and of the temporal profile of the second harmonic radiation during THz irradiation as a function of the THz electric field strength are shown in Fig. 2. The SHG measurements were carried out at a constant 100 fs laser pulse energy of 1 μJ, which corresponds to an intensity of 0.5 · 10^12 W/cm^2.
The electric field waveforms of the transmitted THz pulse (Fig. 2(a)) were measured using the conventional electro-optical sampling (EOS) method in a GaP crystal; a detailed description of the setup is presented in 12 .
As follows from the experimental results, the temporal profiles of the electric field and of the second harmonic intensity are very different. Indeed, the input THz pulse consists mainly of a large positive maximum at t = 0.7 ps surrounded by two negative maxima with approximately three times lower amplitude at t = 0.4 and 1 ps. The profile does not change much (it consists of parallel lines in Fig. 2(a)) while propagating through the sample. This means that we did not observe strong self-action of the THz pulse. For SHG, the situation is different. Although for the lowest THz field the main maxima positions of the THz field and of the SHG intensity are quite close, an increase of the THz field results in a shift of the main SHG maximum on the time scale towards the first (smaller, negative) maximum of the THz field. A quantitative analysis of these dependences is quite difficult because the SHG intensity and the THz field are measured by different techniques, both with their own zeroes on the delay time scale. The dynamics of the second harmonic radiation and its correlation with the THz radiation waveform are a subject of separate scientific research. In the further analysis, the second harmonic radiation energy integrated over the whole pulse will be considered as a function of the electric field amplitude of the THz pulse.
The dependence of the second harmonic radiation energy coming out of the sample on the electric field strength of the THz pulse is shown in Fig. 3; the values are obtained by integrating the second harmonic intensity temporal profiles over time in the 0-2 ps range (Fig. 2(b)). The electric field strength of the THz pulse in Fig. 3 was determined as the product of the incident electric field strength and the corresponding transmittance coefficient from Fig. 1.
As can be seen from the graph, the dependence of the second harmonic radiation energy is well approximated by a second-order power law in the electric field strength range from 2 to 8 MV/cm. However, at higher fields, in the 8-14 MV/cm range, the second harmonic radiation yield appears to be independent of the THz pulse electric field. As one can see in Fig. 1, the transmittance of silicon also changes in the same range.
Discussion
For a centrosymmetric crystal like silicon, the even-order bulk dipole nonlinear susceptibility tensors are zero, and bulk dipole SHG is absent. In the absence of external factors that could break the symmetry (an electric field or mechanical deformation), there is a weak contribution to the SHG signal attributed to quadrupole effects. The symmetry can also be broken at the interface, in a thin near-surface layer several interatomic distances in thickness, which also has a second-order nonlinear susceptibility 13-15 . However, bulk quadrupole effects are small in comparison to the bulk dipole contribution 16 that we consider in this paper. In the experiments we did not observe an SHG signal in the absence of the THz field, or that signal was below the detector sensitivity.
When a DC electric field is applied to a silicon sample, the SHG is determined by the third-order nonlinear susceptibility; this is referred to as electric-field-induced second harmonic (EFISH) 17,18 , or TEFISH if the electric field of a THz pulse is applied 4-10 . The corresponding nonlinear polarization has the form P(2ω) ∝ χ^(3) E_ω E_ω E_Ω, where χ^(3) is the phenomenological tensor of the bulk dipole cubic susceptibility of the medium, which does not vanish in centrosymmetric media, and E_ω and E_Ω are the electric fields of the infrared laser and of the THz pulses, respectively, both taken inside the material.
For a noncentrosymmetric medium, the nonlinear polarization is written as P = χ^(2) E_ω E_ω, with χ^(2) depending on the electric field E_Ω of the THz pulse. Phenomenologically, for a low enough electric field (this condition is valid in our experiments), this dependence can be expanded in a Taylor series, and the first term gives a linear dependence of the polarization (equation 1) on the THz electric field and a corresponding quadratic dependence of the SHG intensity. It should be emphasized that both bound and free electrons may contribute to χ^(3) in semiconductors 19 .
With an increase of the external field, this linear dependence and the resulting quadratic law may be violated by different effects. Such effects have been studied extensively since the 1980s, in semiconductors as well as in other materials, at optical frequencies for second and third harmonic generation, sum-frequency generation, and the optical Kerr effect. Both bound and free electrons may contribute to χ^(3) in semiconductors 19,20 , the latter providing a dependence of χ^(3) on the carrier density. For an electric field at THz frequencies, quasi-static conditions with a high applied electric field can be considered.
As shown in 11,21-24 , high electric fields lead to impact ionization, which in turn results in a short-time occupation of the conduction band by electrons and in the generation of electron-hole pairs. Electrons and holes moving in the field of a THz pulse create a transient current that exists only during the action of the THz pulse. As presented in 13,25 , a current in a semiconductor is a source of nonlinearity due to the arising asymmetry in the occupation of the valence band, and the nonlinear susceptibility becomes dependent on the current density. Analogously to 13,25 , we consider a linear dependence of the second-order nonlinear susceptibility on the current density. Then we can write the total susceptibility as a sum of two contributions: a purely field-dependent term and a current-dependent term. For electric fields lower than the threshold value for impact ionization, E_Ω^th, only the first term exists. For E_Ω > E_Ω^th, the second term comes into play. Estimations of these two terms are required in order to justify the suggested mechanisms.
For estimating the current density, the free carrier concentration at E_Ω = 14 MV/cm is needed. This estimation of the free carrier concentration was obtained from the experimental results (Fig. 1) using the changes in absorption of the THz pulse. Let us compare the ratio of the field and current contributions to the overall nonlinear susceptibility. For the 1 ps duration of the THz pulse, and even more so for the 0.3 ps with a field of one sign, the electron motion can be considered ballistic. In the expression for the current density j, the free carrier velocity ν_e,h is determined through the electron and hole mobilities μ_e,h and the electric field strength of the THz pulse E_Ω. For the estimations we used μ_e = 1400 cm^2/(V s) and μ_h = 450 cm^2/(V s) 26 . For an electric field strength of 2 MV/cm (in the region of quadratic dependence of the second harmonic yield on the electric field strength), the current density is j = 1.4 · 10^5 A/cm^2, and for 14 MV/cm (in the saturation region) j = 1.2 · 10^7 A/cm^2.
In DC experiments, the current-induced nonlinear susceptibility of silicon has been measured at a known current density. Thus, in this estimation, for the lower THz field the electric-field-induced term substantially dominates over the current-induced term. With a further increase of the THz power, the current-induced term grows much faster due to impact ionization and reaches the same order of magnitude as the electric-field-induced term.
The presence of these two contributions can probably explain the saturation of the SH intensity at high electric fields in terms of destructive interference of the electric-field-induced and current-induced nonlinear polarizations. It is also important to note that the field-induced nonlinearity is connected mostly with bound electrons, whose concentration is not changed during the impact ionization process because it is much higher (10^22 cm^−3) than the free electron density.
From the point of view of the polarization and current directions, it is worth noting that the condition of optimal geometry is fulfilled intrinsically, since the current is collinear with the driving field. The experimentally observed saturation of the second harmonic intensity indicates factors that confine the carrier density and current at the high carrier densities and high currents induced by impact ionization.
The suggested approach allows a qualitative explanation of the SHG temporal-field behavior encoded in Fig. 2. Initially, for low fields, the simple EFISH scenario takes place. At the main maximum of the THz oscillation, the field increase up to 8 MV/cm (t_del = 0.7 ps, black dashed line below the cross) results in a corresponding increase of the SHG. The same is true for the first oscillation of the THz field (the "negative maximum" at t_del = 0.47 ps, white line). For higher fields, impact ionization occurs, a current-induced SHG comes into play, and the SHG intensity decreases (see the right panel, black line, above the cross). The current-induced SHG field is apparently shifted in phase relative to the EFISH field in a way that decreases the total SHG signal. For larger time delays, such a simple picture does not work: for t_del = 1.23 ps the SHG signal first increases but then starts to decrease at a much lower field of about 3-4 MV/cm. This might be due to some cumulative effects, which require consideration of a precise impact ionization model.
Conclusions
In this paper, the results are reported, for the first time, of experimental studies of the second harmonic generation of femtosecond chromium-forsterite laser radiation in the bulk of a p-type silicon crystal, induced by the strong electric field (up to 15 MV/cm) of a THz pulse. It is shown that the dependence of the second harmonic yield, integrated over the THz pulse duration, on the pulse electric field is quadratic in the range up to 8 MV/cm. It is shown, for the first time, that the second harmonic yield does not depend on the THz field amplitude in the range from 8 MV/cm to 15 MV/cm, and that the temporal shape of the second harmonic intensity does not correlate with the temporal shape of the THz pulse. In the same range, a decrease in the THz radiation transmission through the silicon sample is observed. It is assumed that the experimentally observed saturation of the SH output is due to the short-term population of the conduction band with electrons and the generation of electron-hole pairs as a result of impact ionization. The electron-hole pairs, moving in the field of the THz wave, create a transient current, which is a source of nonlinearity. The saturation of the SHG yield in strong electric fields can be explained in terms of the destructive interference of the nonlinear polarizations caused by the electric field and by the current.
Methods
A unique Cr:forsterite femtosecond laser system that provides 100 fs laser pulses at a central wavelength of 1240 nm, with above 40 mJ pulse energy and a 10 Hz repetition rate, was used in the experiments 28 . High-power THz pulses could be generated with a high efficiency using this laser system 29,30 . The experimental scheme for studying high-power terahertz-electric-field-induced second harmonic generation in silicon at 620 nm is shown in Fig. 4. The terahertz beam was expanded with a telescope and then focused onto the sample to a 260-μm spot (at the 1/e^2 level) 31,32 using a 90° off-axis parabolic mirror with a focal length of 50.8 mm and a diameter of 50.8 mm. The terahertz pulse energy at the focal plane was measured by means of a calibrated Golay cell, resulting in 106 ± 10 μJ, which corresponds to an incident electric field strength of 22 ± 1 MV/cm. The estimation of the electric field strength was made in accordance with Poynting's theorem, stating the dependence between the electric field amplitude and the time-averaged intensity of an electromagnetic wave, I = ⟨S⟩ = c ε_0 E^2/2. A more detailed description of the estimation and verification of the THz electric field strength value is presented in our previous studies 11,12,31,32 .
The time-delayed probe beam at 1240 nm propagated coaxially with the THz pump and was focused, using a lens, through an aperture in the off-axis parabola to a spot of 50 μm in diameter.
In the experiments we utilized THz radiation with linear polarization parallel to that of the probe radiation. The THz and probe beams were incident normally onto the sample surface. This geometry is optimal for conversion of the probe radiation into the second harmonic during the action of the THz field, as it is for other electro-optic effects 33 .
Polarization attenuators consisting of a Glan-Thompson prism and a half-wave plate were used to adjust the energy of the optical probe pulses and of the THz crystal pump pulses (THz pulse energy control).
The second harmonic radiation at 620 nm coming out from the sample's backside was collimated with a positive lens and then recorded using the photomultiplier tube. The laser radiation at 1240 nm transmitted through the sample was attenuated by means of two narrow-band interference filters at 620 nm with an overall attenuation coefficient of 10 10 .
Experiments were carried out with a polished p-doped Si wafer 245 μm thick with a crystallographic orientation (100), a free carrier density of 1.6 · 10^15 cm^−3, and a mobility of 325 cm^2/(V · s) (from Hall effect measurements). A distinctive feature of the experiments on optical second harmonic generation is related to the use of 1240 nm (0.98 eV) laser radiation, for which silicon with a band-gap energy of 1.1 eV is transparent. At the same time, in silicon the 620 nm second harmonic radiation decays with a penetration depth of l_penetr ≈ 2 μm 34 , while the coherence length L_c = λ/(4Δn), being the characteristic length of maximum energy transfer from the fundamental frequency to the second harmonic and vice versa, is about 0.8 μm.
"Physics"
] |
Optimizing photoacoustic image reconstruction using cross-platform parallel computation
Three-dimensional (3D) image reconstruction involves the computation of an extensive amount of data, which leads to tremendous processing times. Therefore, optimization is crucially needed to improve the performance and efficiency. With the widespread use of graphics processing units (GPU), parallel computing is transforming this arduous reconstruction process for numerous imaging modalities, and photoacoustic computed tomography (PACT) is not an exception. Existing works have investigated GPU-based optimization of photoacoustic microscopy (PAM) and PACT reconstruction using the compute unified device architecture (CUDA) on either C++ or MATLAB only. However, our study is the first that uses cross-platform GPU computation. It maintains the simplicity of MATLAB while improving the speed through CUDA/C++-based compiled MATLAB functions called MEXCUDA. Compared to a purely MATLAB-with-GPU approach, our cross-platform method improves the speed five times. Because MATLAB is widely used in PAM and PACT, this study will open up new avenues for photoacoustic image reconstruction and relevant real-time imaging applications.
Background
Photoacoustic imaging is an emerging modality, which is well known for overcoming the light diffusion limit by converting light absorption into sound [1]. Upon irradiation by laser or radiofrequency pulses, tissue experiences thermo-elastic expansion, which generates acoustic waves to be detected by transducers. Capitalizing on non-ionizing light illumination and rich optical contrasts, photoacoustic imaging possesses advantages in terms of safety, penetration depth, and tissue contrast [2]. Photoacoustic computed tomography (PACT), in particular, employs higher-energy pulses and wide-field scanning and is capable of capturing 3D structures over a wide range of scales, from vasculatures to organs [3,4]. This characteristic gives PACT an outstanding advantage over other tomography modalities [5].
Despite its immense possibilities [6][7][8], PACT is limited by the extensive 3D computation. For example, to reconstruct a 200 × 430 × 200 matrix, it takes half an hour with GPU-based MATLAB on our PC with an NVIDIA Titan X, using the focal-line-based 3D reconstruction algorithm [9]. Even accounting for the fact that MATLAB is not time-efficient, this processing time is still considerable, placing a burden on the "pipeline" of 3D PA studies. Since reconstruction is the very "front door" component in this "pipeline", a long reconstruction time delays the overall research process.
Current efforts on shortening PA reconstruction range from algorithm development to hardware improvement. In terms of algorithm development, fast Fourier transform-based (FFT) reconstruction [10] has succeeded at improving reconstruction speed. On the other hand, in terms of hardware enhancement, with the recent boom in graphics processing units (GPU), parallel computation has been widely used in various medical tomography modalities, such as PA [11][12][13][14], CT and MRI [15][16][17][18]. Because PA reconstruction involves mostly linear computation, which is straightforward to parallelize, the GPU becomes a suitable solution for improving the computation time [11][12][13][14]. Kang et al. [11] combined FFT reconstruction with GPU computing to show a significant improvement of 60 times compared to a single-thread CPU on optical-resolution photoacoustic microscopy (OR-PAM) with 500 × 500 pixels. Impressive performance improvements have also been demonstrated in 3D reconstruction. For instance, Wang et al. implemented GPU-based image reconstruction in C and demonstrated an improvement of 1000 times in comparison with the CPU [19]. Luis et al. even managed to perform 4D PA imaging with 120 × 120 × 100 voxels and achieved a speed of 51 frames per second [20]. However, all these studies focused on the reconstruction efficiency and neglected the front-end simplicity of user interaction, which is also important in PA studies.
To improve the user-friendliness of image reconstruction, here we propose a cross-platform image reconstruction approach. Our solution differs from previous studies in the sense that it spans two programming platforms: MATLAB and C++ on the CUDA API. This MATLAB/C++/CUDA code (MCCC) combines the simplicity of MATLAB and the time-efficiency of C++. It can greatly assist PA research because most current PA systems depend heavily on MATLAB. In detail, the reconstruction code is back-projection-based, with pre- and post-processing steps performed in MATLAB and reconstruction loops executed in CUDA/C++ through MEXCUDA functions. Validation images are then reconstructed using the MATLAB/CUDA-without-C++ code (MCC), the MATLAB-without-GPU code (MWGC), and our MCCC. MCC does not perform any computation in C++, and MWGC processes all steps on the CPU only; both serve as baselines against which MCCC is compared to assess whether our cross-platform method reduces the reconstruction time. Our solution shortens the processing time to one-fifth while keeping the same image quality as MCC.
Reconstruction method: focal-line-based back-projection algorithm
In PACT, universal back-projection (UBP) is frequently used for 3D image reconstruction [21]. This reconstruction method is described by the following formula:

$$p_0(\mathbf{r}) \;=\; \int_{\Omega_0} \left[\, 2\,p(\mathbf{r}_d, t) \;-\; 2t\,\frac{\partial p(\mathbf{r}_d, t)}{\partial t} \,\right]_{t = |\mathbf{r}_d - \mathbf{r}|/v_s} \frac{d\Omega}{\Omega_0}.$$

Here, $p_0(\mathbf{r})$ is the initial PA pressure at $\mathbf{r}$, $p(\mathbf{r}_d, t)$ is the acoustic pressure at $\mathbf{r}_d$, and the delay time $t$ is calculated from the travel time $|\mathbf{r}_d - \mathbf{r}|/v_s$, in which $v_s$ is the speed of sound in tissue (1.54 mm/μs). $\Omega_0$ is the solid angle spanned by the transducer surface $S$. The universal back-projection algorithm is developed for point-like transducers and is inaccurate for focused transducers, such as linear transducer arrays with a focus along the axial direction. In this case, because of the element aperture, the time delay cannot be computed directly from the point source to the center of the element. The focal-line reconstruction algorithm addresses this issue by utilizing a focal line that goes through the foci of all transducer elements. The travel path (time of arrival) of any point in 3D space is quantified based on its intersection with the focal line: only the path that crosses the focal line gives the strongest response in the transducer. Detailed descriptions of this method can be found in [9,22].
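At its core, any back-projection-type reconstruction reduces to a delay-and-sum over detector channels: for each voxel, the time of flight to every element selects a sample from the recorded RF data, and the selected samples are accumulated. The following minimal CUDA kernel is a sketch of this plain delay-and-sum step only, not the focal-line variant used in this work (which additionally intersects each travel path with the focal line); the function and argument names, data layout, and indexing are illustrative assumptions rather than the actual implementation.

    // Sketch of a plain delay-and-sum back-projection kernel: one thread per voxel.
    // Assumed layout: rfData is channel-major [nElem * nSamples], elemPos holds
    // (x, y, z) per element, and the voxel grid is isotropic with spacing dx.
    __global__ void delayAndSum(const float *rfData, const float *elemPos,
                                float *volume, int nx, int ny, int nz,
                                int nElem, int nSamples,
                                float dx, float x0, float y0, float z0,
                                float fs, float vs)
    {
        int idx = blockIdx.x * blockDim.x + threadIdx.x;
        if (idx >= nx * ny * nz) return;

        // Recover voxel coordinates from the linear thread index.
        int ix = idx % nx, iy = (idx / nx) % ny, iz = idx / (nx * ny);
        float vx = x0 + ix * dx, vy = y0 + iy * dx, vz = z0 + iz * dx;

        float sum = 0.0f;
        for (int e = 0; e < nElem; ++e) {
            float ex = elemPos[3 * e], ey = elemPos[3 * e + 1], ez = elemPos[3 * e + 2];
            float d  = sqrtf((vx - ex) * (vx - ex) +
                             (vy - ey) * (vy - ey) +
                             (vz - ez) * (vz - ez));
            int s = __float2int_rn(d / vs * fs);   // time of flight -> sample index
            if (s >= 0 && s < nSamples)
                sum += rfData[e * nSamples + s];
        }
        volume[idx] = sum;
    }

Because every voxel is independent, this loop parallelizes trivially, which is exactly why GPU execution pays off for 3D PACT grids.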
MEXCUDA function generation
As mentioned above, MATLAB is used as the main platform for pre- and post-processing the data, while all the extensive computation is performed in C++. Therefore, we need to establish a "gateway" between CUDA/C++ and MATLAB. The MEXCUDA function offers a convenient solution for this connection: it takes inputs from MATLAB into C++, performs the calculation in C++, and returns the output to MATLAB. In detail, MEXCUDA is an extension of the MATLAB mex function, which uses C/C++ for execution through the C++ MEX API. The difference between mex and MEXCUDA is that MEXCUDA is compiled by the NVIDIA CUDA compiler (nvcc), enabling GPU execution in C++ for improved performance. We first need to generate a MEXCUDA function before calling it in MATLAB. The source code for the MEXCUDA function is a CU file written in C++ for CUDA. The CU file has the following main building blocks. The first block is initialization, with two purposes: it prepares the code for MathWorks' GPU library by calling mxInitGPU from the mxGPU API, and it creates mxGPUArray objects (mxGPUArray is a CUDA class that contains GPU arrays) to store gpuArray inputs from MATLAB and an output matrix "pa_img" representing the reconstructed image. The next block is parallel computation. It contains several kernel functions in the device code that calculate pa_img from the input mxGPUArray objects in parallel. The last block of the CU code is finalization. It includes functions to deliver pa_img back to the MATLAB code and to destroy the GPU matrices to free memory. From this source code, we create the compiled MEXCUDA function by using the mexcuda command in MATLAB. The final MEXCUDA function is of mexw64 type, i.e., nvcc-compiled code for the 64-bit Windows operating system, and can be called directly in MATLAB as a subfunction. The overall workflow is illustrated in Fig. 1. First, in the MATLAB front-end code, users load raw data, convert CPU-based matrices into GPU matrices, and set reconstruction parameters. Then, users pass these inputs to the MEXCUDA function. After executing the building blocks mentioned above, this function returns the output, the final reconstructed image, to MATLAB. Finally, with post-processing steps in MATLAB, users can visualize and examine the reconstructed 3D structure.
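As a concrete illustration of the three building blocks described above, the sketch below shows the skeleton of a CU source file for a MEXCUDA function. The kernel here is a trivial placeholder (it merely scales the input) rather than the actual focal-line reconstruction loops, and the names (scaleKernel, pa_img, n) are illustrative; only the mxGPU API calls themselves (mxInitGPU, mxGPUCreateFromMxArray, mxGPUCreateGPUArray, mxGPUCreateMxArrayOnGPU, mxGPUDestroyGPUArray) reflect the real interface.

    #include "mex.h"
    #include "gpu/mxGPUArray.h"

    // Placeholder device kernel standing in for the reconstruction loops.
    __global__ void scaleKernel(double const *in, double *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = 2.0 * in[i];
    }

    void mexFunction(int nlhs, mxArray *plhs[], int nrhs, mxArray const *prhs[])
    {
        // Initialization block: prepare the MathWorks GPU library and wrap the input.
        mxInitGPU();
        mxGPUArray const *in = mxGPUCreateFromMxArray(prhs[0]);   // gpuArray input
        const mwSize *dims = mxGPUGetDimensions(in);
        mxGPUArray *pa_img = mxGPUCreateGPUArray(mxGPUGetNumberOfDimensions(in),
                                                 dims, mxDOUBLE_CLASS, mxREAL,
                                                 MX_GPU_DO_NOT_INITIALIZE);
        double const *d_in  = (double const *)mxGPUGetDataReadOnly(in);
        double       *d_out = (double *)mxGPUGetData(pa_img);
        int n = (int)mxGPUGetNumberOfElements(in);

        // Parallel computation block: launch the device kernel(s).
        int threads = 256;
        int blocks  = (n + threads - 1) / threads;
        scaleKernel<<<blocks, threads>>>(d_in, d_out, n);

        // Finalization block: return pa_img to MATLAB and release GPU wrappers.
        plhs[0] = mxGPUCreateMxArrayOnGPU(pa_img);
        mxFree((void *)dims);
        mxGPUDestroyGPUArray(in);
        mxGPUDestroyGPUArray(pa_img);
    }

Compiling this file with the mexcuda command in MATLAB produces the mexw64 function, which the front-end script then calls like any other subfunction.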
Heterogeneous computing in CUDA/C++
The process flow executed in C++ employs a widely known programming strategy called heterogeneous computing in order to maximize performance. The GPU, despite its excellent ability to compute each matrix value in parallel, does not handle traditional serial, CPU-bound tasks effectively, such as checking input compatibility, pre-allocating memory, and creating output arrays. The CPU, on the other hand, is faster at these steps and is therefore better suited for pre- and post-processing the data. Accordingly, the CPU is employed in the initialization and finalization blocks, while the GPU is exploited in the parallel computation block. This processing flow is presented in Fig. 2.
Validating experiments
To evaluate the efficiency of the optimized code, we scanned the breast of a healthy volunteer to acquire 3D vascular data. The human imaging study was performed in compliance with the University at Buffalo IRB protocols. The PACT imaging system contains three main parts: a 10-ns-pulsed Nd:YAG laser with 10 Hz pulse repetition rate and 1064 nm output wavelength, a customized linear array with 128 elements and 2.25 MHz central frequency, and a Verasonics Vantage data acquisition system with 128 receive channels. The light illumination was achieved through a bifurcated fiber bundle with a 1.1-cm-diameter circular input and two 7.5-cm-length line outputs (Light CAM #2, Schott Fostec). During the experiment, the input laser energy was around 800 mJ/pulse and the efficiency of the fiber bundle was 60%, so the laser output from the fiber bundle was around 480 mJ/pulse. Since the size of the laser beam on the object's surface was approximately 2.5 cm × 8.0 cm, the laser fluence was about 30 mJ/cm², which is much lower than the safety limit of 100 mJ/cm² [23]. The transducer was scanned along the elevation direction over 40 mm at a 0.1 mm step size. The entire imaging area was 8.6 cm (lateral width of the probe) × 4 cm (scanning distance). A schematic of the experimental setup is illustrated in Fig. 3. Following data collection, we performed 3D focal-line reconstruction with MCCC, MCC, and MWGC for comparison.
Results
We reconstruct the image using MCC, MCCC and MWGC methods for comparison of processing time and image quality. All the reconstructions are carried out in our PC with an NVIDIA Titan X GPU (Pascal architecture) and Intel Core i5-6400 CPU.
In terms of reconstruction time, even though it is already supported by GPU for parallel programming, MCC reconstruction still shows a costly computing time. It takes more than 30 min with a resolution factor (RF) of five for a volume of 200 × 430 × 200 voxels. Here, RF is the reciprocal of the voxel size (in mm). Reducing this number can reduce the reconstruction time to 400 s as shown in Fig. 4 with the loss of resolution as a tradeoff.
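As a quick sanity check of the resolution factor definition, using only numbers quoted elsewhere in this paper: an RF of 5 corresponds to a voxel size of 1/5 = 0.2 mm, so the reconstructed 200 × 430 × 200 grid spans

$$200 \times 0.2\ \mathrm{mm} = 4\ \mathrm{cm}, \qquad 430 \times 0.2\ \mathrm{mm} = 8.6\ \mathrm{cm}, \qquad 200 \times 0.2\ \mathrm{mm} = 4\ \mathrm{cm},$$

which matches the stated imaging area of 8.6 cm (probe width) × 4 cm (scanning distance).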
The results clearly show that C++ plays a vital role in shortening the reconstruction time. Overall, for an RF of 5, the computing time of MCCC is reduced almost five-fold compared with MCC, from 33 to 7 min. As expected, both codes with GPU support (MCC and MCCC) outperform the reconstruction without GPU (MWGC), which takes up to 1376 min, as shown in Fig. 4(a). At an RF of 2, MCCC is ten times faster than MCC, as shown in Fig. 4(b). In terms of image quality, because the reconstruction method is the same, the images created by MCC and MCCC are identical, as indicated by Fig. 5. This proves that there is no tradeoff between reconstruction accuracy and processing time. From the users' perspective, as mentioned above, there is no need to modify parameters or calculations in the C++ source code because it is used as a predefined subfunction, so the simplicity of the front-end MATLAB code is maintained.
Discussion and conclusion
To summarize, in this paper we propose a novel way to optimize 3D reconstruction for PACT using cross-platform MATLAB/C++ code on CUDA. Our approach, based on a C++/GPU reconstruction function, reduces the reconstruction time fivefold compared with the MATLAB/GPU code while maintaining the simplicity of user interaction on the MATLAB front-end side. Our method paves the way for future 3D reconstruction optimization for PACT on cross-platform MATLAB/C++, benefiting PACT research that depends heavily on MATLAB. Future work will focus on further decreasing the reconstruction time by cutting down the number of iterations in the source code. The current reconstruction is still processed through a significant number of loops determined by the number of transducer elements and scanning lines. Instead of going through 128 (number of elements) × 400 (number of lines) loops, a solution could perform the calculation all at once if possible; for example, the input data for each loop could be allocated across all available GPU memory and executed in parallel. The limitation of this approach is that a huge amount of memory would need to be deployed, making it viable only for small 3D PA structures. Alternatively, the number of iterations could be reduced by keeping only either the 128 loops over transducer elements or the 400 loops over scanning lines.
"Engineering",
"Computer Science",
"Physics"
] |
Electron Injection via Modified Diffusive Shock Acceleration in High-Mach-number Collisionless Shocks
The ability of collisionless shocks to efficiently accelerate nonthermal electrons via diffusive shock acceleration (DSA) is thought to require an injection mechanism capable of preaccelerating electrons to high enough energy where they can start crossing the shock front potential. We propose, and show via fully kinetic plasma simulations, that in high-Mach-number shocks electrons can be effectively injected by scattering in kinetic-scale magnetic turbulence produced near the shock transition by the ion Weibel, or current filamentation, instability. We describe this process as a modified DSA mechanism where initially thermal electrons experience the flow velocity gradient in the shock transition and are accelerated via a first-order Fermi process as they scatter back and forth. The electron energization rate, diffusion coefficient, and acceleration time obtained in the model are consistent with particle-in-cell simulations and with the results of recent laboratory experiments where nonthermal electron acceleration was observed. This injection model represents a natural extension of DSA and could account for electron injection in high-Mach-number astrophysical shocks, such as those associated with young supernova remnants and accretion shocks in galaxy clusters.
Introduction
Astrophysical observations have long shown the ability of high-Mach-number collisionless shocks to accelerate nonthermal electrons, from supernova remnants (SNRs) to accretion shocks in galaxy clusters (Völk et al. 2005; Molnar et al. 2009; Ha et al. 2023). These shocks are mediated by plasma instabilities, which produce and amplify magnetic fields and dissipate the flow energy by heating the plasma and accelerating particles. The dominant particle acceleration mechanism thought to operate at astrophysical shock waves is diffusive shock acceleration (DSA; Axford et al. 1977; Krymskii 1977; Bell 1978a, 1978b; Blandford & Ostriker 1978), which corresponds to a first-order Fermi process. Particles gain energy by repeatedly crossing the shock front while scattering off converging magnetic field turbulence on both sides of the shock. DSA is well understood in the test-particle limit and, for strong shocks, produces power-law energy spectra with a universal slope ε^-2 (with ε the particle energy; Drury 1983; Blandford & Eichler 1987), which can be consistent with the ε^-2.7 spectrum of galactic cosmic rays when nonlinear corrections (Blandford & Eichler 1987; Diesing & Caprioli 2021) and diffusion in the galactic medium are taken into account (Blasi & Amato 2012). However, in order for particles to cross the shock front they must undergo preacceleration, so that their gyroradii reach a size comparable to or greater than the shock transition size, which is typically dictated by the gyroradius of the inflowing ions. This is known as the "injection problem" and it poses a significant challenge, particularly for electrons, due to their relatively small mass (Treumann 2009).
Two main injection mechanisms are often invoked: shock drift acceleration (SDA; Hudson & Kahn 1965;Begelman & Kirk 1990) and shock surfing acceleration (SSA; Sagdeev 1966;Katsouleas & Dawson 1983).In SDA, particles gyrating in the ambient magnetic field gain energy as their guiding center moves along the convective electric field due to drifts associated with the magnetic field gradient at the shock.Conservation of magnetic moment limits the energy gain during the interaction of the particles with the shock.In SSA, particles reflected by the shock potential can be trapped between the shock front and the upstream, and gain energy by the convective electric field.
Spacecraft observations have been shaping our understanding of planetary shocks, suggesting that both electrons and ions can be efficiently accelerated via SDA in the region where the shock is quasi-perpendicular (Sarris & Krimigis 1985; Johlander et al. 2021). However, observations of planetary shocks are typically limited to low/moderate Mach numbers (M ≲ 10); electron injection in very high-Mach-number shocks (M ≳ 100) remains an important challenge that is critical to understanding particle acceleration in SNR and galaxy cluster accretion shocks.
In the last decades, kinetic simulations have played an important role in the characterization of particle acceleration at shocks by providing a self-consistent description of the shock dynamics. Most numerical studies of high-Mach-number shocks have focused on quasi-perpendicular configurations and indicate that electrons can gain energy through a combination of different effects, including scattering off whistler waves (Levinson 1992), SSA in electrostatic modes driven by the Buneman instability (Hoshino & Shimada 2002; Matsumoto et al. 2012), a combination of SDA and SSA (Amano & Hoshino 2007, 2008), and magnetic reconnection (Matsumoto et al. 2015; Bohdan et al. 2020). In addition, microturbulence produced at the shock has been suggested to help electron injection directly via second-order Fermi acceleration (Bohdan et al. 2017; Ha et al. 2023) or by increasing the efficiency of SDA through the increased time over which particles remain confined close to the shock front, in a process termed stochastic SDA (SSDA; Matsumoto et al. 2017; Katou & Amano 2019; Amano et al. 2020). There is still no consensus on whether these mechanisms can effectively operate in high-Mach-number astrophysical shocks and how they control electron injection.
Laboratory experiments are opening new, complementary paths to investigate the microphysics governing astrophysical shocks at kinetic scales, which are not accessible by observations. While the scales associated with laboratory experiments are vastly different from those of astrophysical shocks, it is becoming possible to drive energetic plasmas where the dimensionless parameters that control the microphysical behavior (e.g., Mach number, flow velocity normalized to the speed of light, and collisional mean free path normalized to the system size) match those of astrophysical plasmas, enabling formal scaling of the processes between both systems (Ryutov et al. 2012). Recently, experiments at the National Ignition Facility (NIF) have used energetic lasers to drive high-Mach-number plasma flows, demonstrating the formation of collisionless shocks in conditions relevant to young SNRs (Fiuza et al. 2020). In these experiments, shock formation follows from the interpenetration of two counterstreaming flows with velocity ∼1000 km s⁻¹ produced by laser ablation of two solid targets. Measurements of the electron spectra show the acceleration of nonthermal electrons up to ∼500 keV, which is more than 100 times the measured shocked electron thermal energy T_e ≈ 3 keV (Fiuza et al. 2020). These results offer a unique opportunity to benchmark numerical simulations and help validate models of electron injection in shocks.
In this paper, we present the results of kinetic simulations of the NIF experimental conditions that indicate that kinetic-scale turbulence produced by the ion Weibel (Weibel 1959), or current filamentation (Fried 1959), instability leads to effective electron injection via a first-order Fermi process occurring within the shock transition.We introduce a description of this process as a modified version of DSA, here termed modified-DSA (m-DSA), in which we account for the flow velocity profile at the shock transition, in contrast to conventional DSA in which the shock front is considered an infinitely sharp transition.This model is consistent with the simulation results and represents a natural mechanism for electron injection in high-Mach-number shocks.
Simulations of NIF Experiments
The experimental observation of nonthermal electron acceleration to >100× their thermal energy at the shock, within a few nanoseconds of the shock evolution, can help benchmark models of electron injection in high-Mach-number shocks. In order to gain further insight into the shock structure and to identify the electron acceleration mechanism, we perform 2D3V fully kinetic simulations of the experimental conditions with the particle-in-cell (PIC) code OSIRIS 4.0 (Fonseca et al. 2002, 2008). We consider two nonrelativistic counterpropagating electron-ion plasmas with nonuniform velocity and density profiles matching the NIF laser-produced flows (see Fiuza et al. 2020 and Grassi & Fiuza 2021 for more details). These profiles are in good agreement with hydrodynamic simulations (Marinak et al. 2001), with measurements of the plasma density at the midplane from the NIF experiments, and with the well-established self-similar solution (Gurevich et al. 1966), which predicts a flow velocity that decreases with time as ∝t⁻¹ at the midplane. We consider initially unmagnetized plasmas, as appropriate for the experiments and given that we are interested in studying the high-Mach-number limit. The sonic Mach number is M_s ≈ 35 for the initial velocity of the counterstreaming flows. We model the interaction between the two flows in a box of extension 11,000 × 800 (c/ω_pe)², where c is the speed of light in vacuum, ω_pe = (4π n_0 e²/m_e)^(1/2) is the electron plasma frequency, n_0 the maximum density of the overlapped plasmas, and −e and m_e the electron charge and mass. We used a spatial resolution of ≈0.4 c/ω_pe and 50 macro-particles per cell with a third-order particle interpolation scheme for improved numerical accuracy. We have tested the convergence of the numerical results by varying the resolution in the range (0.07-0.4) c/ω_pe, the number of particles per cell between 25 and 200, and the ion-to-electron mass ratio between 128 and 512, observing overall agreement in the shock structure and nonthermal electron spectrum. The boundary conditions for both particles and fields are open along the flow direction and periodic transversely. To maintain a reasonable computational cost, we employed a reduced ion mass-to-charge ratio m_i/(m_e Z) = [128-512], with m_i the ion mass and Z the ion charge number, and an initial peak flow velocity |u(t ≈ 0)| ≈ 0.25c, which is ≈30× higher than the experimental one. For this scaled-up velocity, the interaction remains nonrelativistic and dominated by electromagnetic instabilities, so that the properties of the shock structure and magnetic fields can be formally scaled to the experimental conditions (Ryutov et al. 2012), as also shown by Bohdan et al. (2021).
The main simulation results are shown in Figure 1.As the plasma flows interact, strong B-fields perpendicular to the propagation direction are produced through the Weibel, or current-filamentation, instability (Fried 1959;Weibel 1959;Medvedev & Loeb 1999), with a spatial scale comparable to the ion skin depth, c/ω pi , and a few percent of the flow kinetic energy is converted into electromagnetic energy (Spitkovsky 2007;Kato & Takabe 2008;Lemoine et al. 2019;Swadling et al. 2020).During the nonlinear evolution of the instability, the spatial scale of the magnetic field structures increases and becomes turbulent, as shown in Figure 1(a).As the coherent length of the amplified magnetic field becomes comparable to the gyroradius of the incoming flow ions in the local field, the ions are effectively slowed down and compressed.Two collisionless shocks are formed, whose front widths are comparable to the local ion gyroradius and propagate outward with respect to the midplane (as highlighted by the black arrows).
The temporal evolution of the electron energy spectrum in the downstream region clearly shows the development of a nonthermal tail that extends to ∼100 k_B T_e, where T_e is the shocked electron temperature computed in a downstream region of extension ≈200 c/ω_pi around the center of interaction (x = 0). This is consistent with the experimental observations (Figure 1(b)). The separation between the thermal and nonthermal electron populations occurs around ε_NT ≈ 4 k_B T_e. We note that at late times electrons reach high enough energy to be injected into DSA, namely the electron gyroradius becomes comparable to the inflowing ion gyroradius, r_L,e ≈ r_L,i, in the local magnetic field. In general, this injection condition is given by v_e γ_e > u m_i/(Z m_e). Inspection of the magnetic power spectrum in this region reveals that the initial ion-kinetic-scale magnetic fields evolve into a broad turbulent spectrum ranging from sub-c/ω_pi scales to ∼10 c/ω_pi. Most importantly, we observe that the gyroradius of electrons with ε_NT is close to c/ω_pi, allowing for efficient scattering of these electrons. We have verified this for different values of the ion-to-electron mass ratio, as illustrated in Figure 1(c). This suggests that magnetic turbulence induced by the ion Weibel instability can impact the diffusion of thermal electrons.
To investigate the mechanism responsible for the acceleration of electrons to nonthermal energies we track, in the PIC simulation performed with m_i/(Z m_e) = 128, ≈3500 electrons with high temporal resolution, i.e., equal to the time resolution of the PIC simulation, Δt = 0.285 ω_pe⁻¹. These electrons are selected from the nonthermal component at t_sel so as to have even statistics in 15 log-spaced bins with energies ε > 10 k_B T_e.
Electrons are found to gain energy via multiple reflections along the flow direction (i.e., the x-axis), as shown for a typical trajectory in Figure 2(a). This happens while the particle is confined in the shock transition layer, which is the region with strong B-field turbulence, as confirmed by the transversely averaged B-field 〈|B_z|〉_y (red solid line). We also observe (Figure 2(a); red dotted line) that the effective (mean) flow velocity profile in this shock transition, defined as 〈u〉_y = (n₊u₊ + n₋u₋)/(n₊ + n₋), where the index +(−) refers to the flow moving in the positive (negative) x-direction, is approximately linear. In order to confirm whether SDA or SSDA is responsible for the electron energization, we have computed the magnetic moment of the electrons while they are accelerated, μ = p_⊥²/(2 m_e B), where, in the configuration of our 2D3V simulation, ⊥ identifies the component in the xy-plane transverse to the B_z field produced by the Weibel instability, and all quantities are averaged over the local gyroperiod of the particle. We observe that μ is not conserved. This is illustrated in Figure 2(b), where we focus on one energization event associated with a reflection in the x-direction. This can be understood from the fact that the gyroradius of the electrons is comparable to the scale of the magnetic turbulence produced at the shock. To further confirm this, we have calculated the energy gain experienced by electrons due to the grad-B drift associated with SDA as ∫E_y v_∇B,y dt with v_∇B = (−μ/e) B × ∇B/B². We can see that both in the case illustrated in Figure 2(b) and for all the selected nonthermal particles (Figure 2(c)) the grad-B drift does not contribute to the nonthermal particle acceleration. We can thus exclude mechanisms such as SDA or SSDA from playing a dominant role in this case.
This analysis suggests that in such a high-Mach-number shock the small-scale magnetic turbulence produced at the shock transition is key in controlling particle scattering and the ability of the shock to energize electrons up to injection into DSA.This injection process will be described in the following as a modified-DSA mechanism that occurs while the particles are still confined within the shock front; for thermal electrons, the finite nature of the shock transition must be considered, given that the shock transition is larger than the electron gyroradius, in contrast with the standard DSA.
Electron Injection via Modified-DSA
The transport and acceleration of electrons in the self-generated turbulence at the shock transition can be described by a Fokker-Planck-type equation (Skilling 1975; Blandford & Eichler 1987; Petrosian 2012). In the diffusive limit, this can be reduced to a diffusion-convection equation for the isotropic part of the distribution f_0(x, p), Equation (1) (Reville & Bell 2013), where D is the spatial diffusion coefficient. As shown in Katou & Amano (2019) and Amano & Hoshino (2022), the component proportional to the gradient of the magnetic field in the energization term (first term on the right-hand side) corresponds to SDA. Since we have shown that this is not the dominant contribution in our configuration, we will focus on the energization term proportional to the gradient of the fluid velocity, which corresponds to first-order Fermi acceleration.
From Equation (1) we can then obtain the equation describing standard DSA by assuming that the shock seen by the particles is an infinitely sharp transition, such that ∇·u = (u_u − u_d) δ(x), where x = 0 identifies the shock front position and u_u and u_d the upstream and downstream velocities, respectively. For nonrelativistic steady-state shocks, u_sh ≪ c, considering that electrons are subject to elastic scatterings in the rest frame of the scattering centers, which are moving at u_u and u_d, this gives an average energization rate (Bell 1978a, 1978b)

⟨dε/dt⟩_DSA = Δε/Δt ≈ (4/3) [(u_u − u_d)/v_e] (ε/Δt),    (2)

where Δε is the average energy gain in a cycle around the shock front (i.e., upstream-downstream-upstream) over the time Δt that a particle takes to scatter back and forth across it, v_e and ε are the electron velocity and energy, and Δt ≡ t_scatt^DSA is the scattering time. This is related to the spatial diffusion coefficients in the upstream, D_u, and downstream, D_d, as

t_scatt^DSA ≈ (4/v_e) (D_u/u_u + D_d/u_d).    (3)

The typical acceleration time t_acc, defined through dε/dt ∼ ε/t_acc, then reads

t_acc^DSA ≈ [3/(u_u − u_d)] (D_u/u_u + D_d/u_d).    (4)

This description leads to the well-known power-law energy spectrum with an index dependent only on the shock compression factor (Blandford & Eichler 1987).
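As a one-line consistency check of the expressions above (a sketch using the standard test-particle DSA relations), dividing the fractional energy gain per cycle by the cycle time recovers the acceleration time:

$$t_{\rm acc}^{\rm DSA}
=\frac{\varepsilon}{\langle d\varepsilon/dt\rangle_{\rm DSA}}
=\frac{\Delta t}{\Delta\varepsilon/\varepsilon}
=\frac{(4/v_e)\left(D_u/u_u+D_d/u_d\right)}{(4/3)\,(u_u-u_d)/v_e}
=\frac{3}{u_u-u_d}\left(\frac{D_u}{u_u}+\frac{D_d}{u_d}\right),$$

in agreement with Equation (4).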
For low, near-thermal electron energies, as relevant to the injection process considered here, the approximation of an infinitely sharp shock transition cannot be made, since the electron gyroradius is much smaller than the shock transition. Indeed, as we saw in the simulation analysis above, electron energization happens through scattering within the shock transition itself, and thus we must account for the spatial dependence of the different quantities within the shock front. There, the upstream flow slows down and the average flow velocity decreases roughly linearly with the distance from the unperturbed upstream (Figure 2(a), dotted red line). The variation of the flow velocity in the region where the electrons are accelerating modifies the average energy gain Δε and consequently the energization rate. Specifically, the difference of the velocities seen at consecutive reflections, which in Equation (2) is a constant equal to u_u − u_d, has to be replaced by a generic Δu(ε) that can be a function of the particle energy. Indeed, particles with higher energies can, in principle, explore larger portions of the shock front transition through their diffusive motion, and this corresponds to larger values of Δu, as suggested by Figure 2(a) (dotted red line). The extension of the longitudinal electron motion depends on the properties of the diffusion in the self-generated magnetic field turbulence. Given the large amplitude of the magnetic turbulence at the electron gyroradius scales (Figure 2(c)), we consider Bohm diffusion to be a reasonable approximation (which we will later verify with the help of PIC simulations), leading to a spatial diffusion coefficient D_Bohm = r_L,e v_e/3. Analogously to Equation (3), we expect the scattering time for the modified-DSA to scale as

t_scatt^m-DSA ∝ r_L,e/v_e.    (5)

Within the assumption of Bohm diffusion, the longitudinal diffusive length scales as L_diff ∝ r_L,e, and consequently Δu ∝ r_L,e, because of the linear behavior suggested by Figure 2(a). The typical energization rate and acceleration time for the modified-DSA process hence scale as

⟨dε/dt⟩_m-DSA ∝ v_e ε,  with t_acc^m-DSA independent of ε.    (6)

This suggests that, contrary to what is expected for standard DSA, the acceleration time is independent of energy for high-energy (relativistic) electrons, and therefore the energization rate increases linearly with the particle energy, up to injection into DSA. Note that in high-Mach-number shocks with typical velocity ∼1000 km s⁻¹, the electrons need to be relativistic to reach the threshold for injection. Considering that the shock transition is set by the gyroradius of the incoming protons, the condition r_L,e ≈ r_L,i entails γ_e ≳ 6.
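As a back-of-the-envelope check of the quoted threshold (a sketch assuming protons with Z = 1, relativistic electrons with v_e ≈ c, and the ∼1000 km s⁻¹ shock velocity mentioned above):

$$r_{L,e} \simeq r_{L,i}
\;\Longleftrightarrow\;
\frac{\gamma_e m_e v_e}{eB} \simeq \frac{m_i u}{Z e B}
\;\Longrightarrow\;
\gamma_e \simeq \frac{m_i}{Z m_e}\,\frac{u}{v_e}
\approx 1836 \times \frac{10^{3}\ \mathrm{km\,s^{-1}}}{3\times 10^{5}\ \mathrm{km\,s^{-1}}}
\approx 6,$$

consistent with the injection condition v_e γ_e > u m_i/(Z m_e) stated earlier.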
Validation of Electron Injection Model with PIC Simulations
To test the proposed injection model, the physical quantities that identify the modified-DSA process are extracted from the trajectories of the electrons tracked in the PIC simulation of Section 2. We first analyze the dependence of the energization rate on the electron energy. We have checked that for nonthermal particles the energization is dominated by the perpendicular electric field E_y, while the thermal component is primarily heated by the longitudinal E_x component. Thus, our theoretical description of the nonthermal particle acceleration accounts for the work W_y of the E_y component of the electric field, which is predominantly produced by the advection of the magnetic field B_z along the x-direction. The average energization rate dW_y/dt, shown in Figure 3(a), is measured considering the trajectories of each tracked particle up to t_sel, in a region of ±10³ c/ω_pe around the midplane, which contains both shock fronts and the shocked downstream at all times t < t_sel. The best fit confirms a near-linear dependence on energy, dW_y/dt ∝ ε^α with α ≈ 1.2 ± 0.1, and consequently an acceleration time t_acc^m-DSA independent of the particle energy. This is consistent with the scaling predicted in Equation (6), i.e., an energization rate ∝ v_e ε ∝ ε^1.1 when considering mildly relativistic particles, as appropriate for the simulation.
To further confirm our interpretation of this scaling, we study the dependence on the particle energy of each term on the right side of Equation (6). Since the scattering time is strictly related to the diffusion properties, we extract the spatial diffusion coefficient as D = 〈Δx²〉/(2Δt), where Δt is a time interval larger than the local gyroperiod for all particle energies and Δx the corresponding displacement along the x-direction (Figure 3(b)). From the fit D(ε) ∝ ε^β (dashed line), we obtain β = 0.85 ± 0.02, which confirms that the diffusion coefficient follows quite accurately the scaling predicted by Bohm diffusion in the mildly relativistic regime (dotted line), which is D_Bohm ∝ r_L,e v_e ∝ ε^0.85. Following Equation (5), we see that the scattering time increases as the gyroradius of the accelerating electron. We verify this with an independent analysis, in which we identify reflection events in the electron trajectories, averaged over the local gyroperiod, and t_scatt^m-DSA is computed as the interval between two consecutive reflections. The obtained scattering time compares well with the expected theoretical scaling, Equation (5), as shown in Figure 3(c) (orange line). We also report the reference value of the half gyroperiod in the B-field seen by the electrons at the reflection (black dotted line), to highlight that we are excluding simple rotations in the averaged field from our analysis.
To conclude the investigation of the energy dependence in Equation (6), we verify that Δu, the velocity difference seen by an electron at consecutive reflections, scales as r_L,e. To do so, we considered the same reflection events used to compute t_scatt^m-DSA and extracted Δε, considering only the work of the E_y component of the electric field. The fit of Δε ∝ ε^δ (dashed line in Figure 3(d)) gives δ = 1.8 ± 0.2, from which Δu ∝ ε^(0.7±0.2). This is in reasonable agreement with the energy dependence of the electron gyroradius, which in this range of energies is ∝ε^0.77. This analysis confirms that particles with higher energy can explore a larger portion of the shock front extension and, therefore, a larger variation of the advection velocity. We note that in the standard Fermi acceleration picture Δε/ε ∝ (ū/c)^N, where N is the order of the Fermi process. This relation is generally used to distinguish between first-order acceleration in nonrelativistic shocks and second-order acceleration in turbulence (Petrosian 2012). In our configuration, the average flow velocity within the shock front, where the acceleration process takes place, decreases from ū = 0.125c at early times to ū = 0.0025c at t_sel. Considering all the reflections happening in this interval of time, one can use the time-averaged velocity, giving Δε/ε ∝ (0.064)^N. This value is consistent with a linear fit of the data in Figure 3(d), which gives ū = (0.05 ± 0.01)c, and is much larger than the squared-velocity (normalized to c) dependence even for the early times. Hence, this suggests the predominance of a first- over second-order Fermi process operating as the injection mechanism.
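As a quick arithmetic check of this first- versus second-order distinction, using only the numbers quoted above:

$$\left(\frac{\bar u}{c}\right)^{N}\Big|_{N=1} \approx 0.064,
\qquad
\left(\frac{\bar u}{c}\right)^{N}\Big|_{N=2} \approx 4\times 10^{-3},$$

so the fitted value of 0.05 ± 0.01 is compatible with the N = 1 (first-order) estimate and lies an order of magnitude above the N = 2 (second-order) one.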
Given that, while confined in the shock transition region, electrons experience a velocity jump (or equivalently a density compression) between consecutive reflections that is smaller than the total flow velocity variation across the shock front, the energy spectrum produced by this injection mechanism is expected to have a steeper slope than that associated with standard DSA.Because high-energy particles will experience a larger flow velocity jump, the spectrum shape can also become slightly concave (Ellison et al. 2000;Amato & Blasi 2006).Above the injection energy (i.e., when the gyroradius of the electrons becomes comparable to that of the inflowing upstream ions) the spectrum should then evolve to the standard DSA spectrum.
Conclusions
We have shown that ion-kinetic-scale magnetic turbulence produced by the Weibel instability in high-Mach-number shocks can effectively inject electrons into a nonthermal population via a first-order Fermi process. Results of kinetic simulations for the conditions of recent NIF experiments confirm the ability of this process to accelerate electrons to >100 k_B T_e, consistent with the experimental results. We have proposed that this injection process can be described as a modified-DSA mechanism, in which, within the shock transition, electrons interact with converging magnetic turbulence, undergoing Bohm-like diffusion and accelerating via multiple scatterings in a first-order Fermi process. The obtained scaling laws are consistent with the statistical analysis of the accelerated electrons in the self-consistent kinetic simulations. This modified-DSA mechanism represents a natural extension of conventional DSA and could enable the injection of electrons in high-Mach-number astrophysical shocks, such as those associated with young supernova remnants and accretion shocks in galaxy clusters.
As the shocks move away from each other, we have confirmed that electrons are accelerated by interacting with only one shock front, performing multiple reflections as they remain trapped within it, similarly to the particle trajectory illustrated in Figure 2(a); i.e., particles do not gain energy by interacting with both shocks.
Figure 1. (a) Magnetic field B_z at t_sel = 2.5 × 10³ ω_pi⁻¹. The shock front positions and their propagation direction are highlighted by the black lines and arrows. (b) Energy spectrum of the electrons in a downstream region extending over ≈ ±100 c/ω_pi at t_sel = 2.5 × 10³ ω_pi⁻¹ (blue solid line) and t = 5 × 10³ ω_pi⁻¹ (orange solid line). The thermal component given by the Maxwellian fit (black dashed line) and the nonthermal components (obtained by subtracting the thermal component from the full distribution) are shown at both times (blue and orange dotted lines, respectively). (c) Magnetic power spectrum in the shock downstream −22 < x ω_pi/c < 22 at t ≈ 1260 ω_pi⁻¹ (solid lines) and inverse gyroradii for downstream electrons with energy ε_NT ≈ 4 k_B T_e (dashed vertical lines), considering the average B-field amplitude in the downstream, for simulations using m_i/(m_e Z) = 128 (blue lines) and 512 (orange lines).
Figure 2. (a) Typical particle energization trajectory as a function of its longitudinal position (i.e., along the x-axis) and time (color bar), magnetic field amplitude (red solid line), and flow velocity (red dotted line) averaged along the transverse y-direction at t ≈ 7125 ω_pe⁻¹. (b) Evolution in time of the electron energy (black line), of the work done by the electric field assuming the particle motion to be described by the gradient drift approximation (green line), and of the magnetic moment (red line) averaged over the local gyroperiod, during a typical energization event. (c) Evolution of the average particle energy (black line) and the average work within the gradient drift approximation (green line). Shaded gray lines correspond to the energy evolution in time of all particles.
Figure 3. Analysis of the tracked particles for t < t_sel in the region ±10³ c/ω_pe. (a) Average contribution to the energization of W_y, the work of the E_y component of the electric field. The best fit (dashed line) confirms the linear dependence on the particle energy, dW_y/dt ∝ ε^α with α = 1.2 ± 0.1. (b) Spatial diffusion coefficient D = 〈Δx²〉/(2Δt). The dashed line corresponds to the best fit (D ∝ ε^β with β = 0.85 ± 0.02) and the dotted line to Bohm diffusion in the average B-field seen by the particles along their trajectories; both show the same dependence on energy. (c) Average scattering time computed as the interval between two consecutive reflections in the particle trajectories, time-averaged over the local gyroperiod; value of half the electron gyroperiod (black dotted line) and theoretical prediction (orange line) assuming Bohm diffusion, Equation (5). (d) Average energy gain due to the work of the E_y component of the electric field during the reflection events. The fit of Δε ∝ ε^δ (dashed line) gives δ = 1.8 ± 0.2.
"Physics"
] |
Bethe Ansatz and Bethe Vectors Scalar Products
An integral presentation for the scalar products of nested Bethe vectors for the quantum integrable models associated with the quantum affine algebra $U_q(\hat{\mathfrak{gl}}_3)$ is given. This result is obtained in the framework of the universal Bethe ansatz, using a presentation of the universal Bethe vectors in terms of the total currents of a "new" realization of the quantum affine algebra $U_q(\hat{\mathfrak{gl}}_3)$.
Introduction
The problem of computing correlation functions is one of the most challenging problems in the field of quantum integrable models, starting from the establishment of the Bethe ansatz method in [1]. For the models where the algebraic Bethe ansatz [2,3,4,5] is applicable, this problem can be reduced to the calculation of scalar products of off-shell Bethe vectors. The latter are Bethe vectors whose Bethe parameters are not constrained to obey the Bethe ansatz equations. For gl_2-based integrable models, these scalar products were calculated in [6,7,8] and are given by sums over partitions of two sets of Bethe parameters. Later, it was shown by N. Slavnov [9] that if one set of Bethe parameters satisfies the Bethe equations (which guarantees that the Bethe vectors are eigenvectors of the transfer matrix), then the formula for the scalar products can be written in determinant form. This form is very useful for obtaining an integral presentation of correlation functions [10,11,12,13] in the thermodynamic limit.
There is a wide class of quantum integrable models associated with the algebra gl_N (N > 2). The algebraic Bethe ansatz for these types of models is called hierarchical (or nested) and was introduced by P. Kulish and N. Reshetikhin [14]. This method is based on a recursive procedure which reduces the eigenvalue problem for the transfer matrix of the model with gl_N symmetry to an analogous problem for the model with gl_{N-1} symmetry. Assuming that the problem for N = 2 is solved, this method allows one to find the hierarchical Bethe equations. Explicit formulas for the hierarchical Bethe vectors in terms of the matrix elements of the gl_N monodromy matrix can be found in [15], but these complicated expressions are very difficult to handle.
The solution of this problem, namely formulas for the off-shell Bethe vectors in terms of the monodromy matrix, was found in [16]. These vectors are called universal, because they have the same structure for the different models sharing the same hidden symmetry. This construction requires a very complicated procedure of calculating the trace of projected tensor powers of the monodromy matrix. It was performed in [17], but only at the level of the evaluation representation of the U_q( gl_N ) monodromy matrix.
There is a new, alternative approach to the construction of universal Bethe vectors for gl_N symmetry models, which uses the current realizations of the quantum affine algebras [18] and the Ding-Frenkel isomorphism between the current and L-operator realizations of the quantum affine algebra U_q( gl_N ) [19]. This approach allows one to obtain explicit formulas for the universal Bethe vectors in terms of the current generators of the quantum affine algebra U_q( gl_N ) for arbitrary highest weight representations. It was proved in [20] that the two methods of construction of the universal Bethe vectors coincide at the level of the evaluation representations. Furthermore, it was shown in [21] that the eigenvalue property of the hierarchical universal Bethe vectors can be reformulated as a problem of ordering the current generators in the product of the universal transfer matrix and the universal Bethe vectors. It was proved that the eigenvalue property appears only if the parameters of the universal Bethe vectors satisfy the universal Bethe equations of the analytical Bethe ansatz [22].
The universal Bethe vectors in terms of current generators have an integral presentation, as an integral transform with some kernel of the product of the currents. In the U_q( sl_2 ) case, this integral representation immediately produces an integral formula for the scalar product of off-shell Bethe vectors [23], which is equivalent to the Izergin-Korepin formula. In this article, we present an integral presentation of the universal off-shell Bethe vectors based on the quantum affine algebra U_q( gl_3 ). These integral formulas lead to integral formulas, with some kernel, for the scalar products. The corresponding formula (5.6) is the main result of our paper. The problem left open is to transform the integral form we have obtained into a determinant form, which would be very useful for applications to quantum integrable models associated with the gl_N symmetry algebra.
2 Universal Bethe vectors in terms of the L-operator
2.1 U_q( gl_3 ) in the L-operator formalism
Let E_ij ∈ End(C³) be the matrix whose only nonzero entry, equal to 1, is at the intersection of the i-th row and j-th column, and let R(u, v)
be a trigonometric R-matrix associated with the vector representation of gl_3. Let q be a nonzero complex parameter which is not a root of unity. The algebra U_q( gl_3 ) (with zero central charge and the gradation operator dropped out) is an associative algebra with unit, generated by the modes of the L-operators L^±(z) subject to the relations (2.1)-(2.3). Actually, we will not impose the condition (2.3), since the universal Bethe vectors will be constructed from only one L-operator, say L^+(z). The subalgebras formed by the modes L^±[n] of the L-operators L^±(z) are the standard Borel subalgebras U_q(b_±) ⊂ U_q( gl_N ). These Borel subalgebras are Hopf subalgebras of U_q( gl_N ). Their coalgebraic structure is given by the formulae
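For orientation, one commonly used form of such a trigonometric R-matrix is sketched below; this is an illustrative standard choice only, and the precise normalization and conventions fixed by the (omitted) relations (2.1)-(2.3) of the original may differ.

$$R(u,v) \;=\; \sum_{1\le i\le 3} E_{ii}\otimes E_{ii}
\;+\; \frac{u-v}{qu-q^{-1}v}\sum_{1\le i<j\le 3}\bigl(E_{ii}\otimes E_{jj}+E_{jj}\otimes E_{ii}\bigr)
\;+\; \frac{q-q^{-1}}{qu-q^{-1}v}\sum_{1\le i<j\le 3}\bigl(u\,E_{ij}\otimes E_{ji}+v\,E_{ji}\otimes E_{ij}\bigr).$$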
Universal off-shell Bethe vectors
We will follow the construction of the off-shell Bethe vectors due to [16].
Baxter commutation relation with an R-matrix R(u, v). We use the notation L^(k)(z) ∈ (C³)^⊗M ⊗ U_q(b_+) for the L-operator acting nontrivially on the k-th tensor factor in the product (C³)^⊗M, for 1 ≤ k ≤ M. Consider a series in M variables, Equation (2.5). In the ordered product of R-matrices (2.5), the R^(ji) factor is to the left of the R^(ml) factor if j > m, or if j = m and i > l. Consider the set of variables (t,s). Following [16], let B(t,s) be given by (2.6). The element T(t,s) in (2.6) is given by (2.4) with the obvious identification. The coefficients of B(t,s) are elements of the Borel subalgebra U_q(b_+).
We call a vector v a right weight singular vector if it is annihilated by any positive mode L_{i,j}[n], i > j, n ≥ 0, of the matrix elements of the L^+(z) operator and is an eigenvector of the diagonal matrix entries L^+_{i,i}(z). For any right U_q( gl_3 )-module V with a right singular vector v, denote by B_V(t,s) the corresponding vector-valued function; it was called in [16,17] the universal off-shell Bethe vector. We call a vector v′ a left weight singular vector if it is annihilated by any positive mode L_{i,j}[n], i < j, n ≥ 0, of the matrix elements of the L-operator L^+(z) and is an eigenvector of the diagonal matrix entries L^+_{i,i}(z). Our goal is to calculate the scalar product of the left and right universal Bethe vectors, Equation (2.11). There is a direct way to solve this problem, using the exchange relations of the L-operator matrix elements and the definitions of the singular weight vectors; however, this approach leads to a highly complicated combinatorial problem. Instead, we will use another presentation of the universal Bethe vectors, given recently in the papers [20,24], which relies on the current realization of the quantum affine algebra U_q( gl_3 ) and the method of projections introduced in [25] and developed in [26].
3 Current realization of U q ( gl 3 )
Gauss decompositions of L-operators
The relation between the L-operator realization of U_q( gl_3 ) and its current realization [18] has been known since the work [19]. To build an isomorphism between these two realizations, one considers the Gauss decomposition of the L-operators and identifies linear combinations of certain Gauss coordinates with the total currents of U_q( gl_3 ) corresponding to the simple roots of gl_3.
Recently, it was shown in [24] that there are two different but isomorphic current realizations of U_q( gl_3 ). They correspond to different embeddings of smaller algebras into bigger ones and to different types of Gauss decompositions of the fundamental L-operators. These two current realizations have different commutation relations, different current comultiplications, and different associated projections onto intersections of the current and Borel subalgebras of U_q( gl_3 ). Our way to calculate the scalar product of Bethe vectors (2.11) is to use an alternative form of the expressions (2.8) and (2.10) for the universal Bethe vectors. It is written in terms of projections of products of currents onto intersections of the current and Borel subalgebras of U_q( gl_3 ). In this case, the universal Bethe vectors can be written as certain integrals, and the calculation of the scalar product is reduced to the calculation of an integral of some rational function.
For the L-operators fixed by the relations (2.1) and (2.2), we consider the following decompositions into Gauss coordinates F^±_{j,i}(t), E^±_{i,j}(t), j > i, and k^±_i(t): Using the arguments of [19], we may obtain, for the linear combinations of the Gauss coordinates (i = 1, 2) and for the Cartan currents k^±_i(t), the commutation relations of the quantum affine algebra U_q( gl_3 ) with zero central charge and the gradation operator dropped out. In terms of the total currents F_i(t), E_i(t) and of the Cartan currents k^±_i(t), these commutation relations, together with the Serre relations, are given by formulae (3.4)-(3.14), which should be considered as formal series identities describing the infinite set of relations between the modes of the currents. The symbol δ(z) entering these relations is the formal series δ(z) = Σ_{n∈Z} z^n.
Borel subalgebras and projections on their intersections
We consider two types of Borel subalgebras in the algebra U q ( gl 3 ). Borel subalgebras U q (b ± ) ⊂ U q ( gl 3 ) are generated by the modes of the L-operators L (±) (z) respectively. For the generators in these subalgebras, we can use instead modes of the Gauss coordinates (3.1)- Other types of Borel subalgebras are related to the current realizations of U q ( gl N ) given in the previous subsection. We consider first the current Borel subalgebras generated by the modes of the currents In the following, we will be interested in the intersections and will describe properties of the projections on these intersections. We call U F and U E the current Borel subalgebras.
In [18], the current Hopf structure for the algebra U_q( gl_3 ) was defined, Equation (3.15). With respect to the current Hopf structure, the current Borel subalgebras are Hopf subalgebras of U_q( gl_3 ). One may construct the whole algebra U_q( gl_3 ) from one of its current Borel subalgebras using the current Hopf structure and the Hopf pairing (3.16). Formulas (3.16) can be obtained from the commutation relations (3.4)-(3.14) using the commutator rules (5.4) in the quantum double. One can check [26,27] that the intersections U^-_f and U^+_F, respectively U^+_e and U^-_E, are subalgebras and coideals with respect to the Drinfeld coproduct (3.15), and that the multiplication m in U_q( gl_3 ) induces an isomorphism of vector spaces. According to the general theory presented in [26], we introduce the projection operators (3.18) and (3.19), defined by the corresponding prescriptions. It was proved in [28] that the projections P^±_f and P^±_e are adjoint with respect to the Hopf pairing (3.16): ⟨e, P^±_f(f)⟩ = ⟨P^∓_e(e), f⟩.
Denote by U F an extension of the algebra U F formed by infinite sums of monomials that are ordered products . Denote by U E an extension of the algebra U E formed by infinite sums of monomials that are ordered products . It was proved in [26] that (1) the action of the projections (3.18) can be extended to the algebra U F ; the action of the projections (3.19) can be extended to the algebra U E ;
Definition of the composed currents
We introduce the composed currents E_{1,3}(w) and F_{3,1}(y), which are defined by the formulas (3.20), where the contour integrals ∮_{C_{0,∞}} (dz/z) g(z) are understood as integrals around the points zero and infinity, respectively. The composed currents E_{1,3}(w) and F_{3,1}(y) belong to the completed algebras U_E and U_F, respectively. Let us recall that, according to these completions, the product of currents E_2(z)E_1(w) has to be understood as an analytical 'function' without singularities in the domain |z| ≪ |w|. Analogously, the product F_1(y)F_2(v) is an analytical 'function' in the domain |y| ≫ |v|. For practical calculations, the contour integrals in the definitions (3.20) can be understood as the formal integrals of a Laurent series g(z) = Σ_{k∈Z} g[k] z^{-k}, picking up its zero mode g[0]. Deforming the contours in the defining formulas for the composed currents, we may rewrite them differently, as (3.21) and (3.22). Formulas (3.21) are convenient for the presentation of the composed currents as products of simple root currents, while formulas (3.22) are convenient for calculating the commutation relations between total and half-currents; this will be done later. First, we calculate the formal integrals in the formulas (3.20) to obtain (3.23). Here we used the corresponding series expansion. Introducing now the half-currents, and using the decomposition of the algebra U_q( gl_3 ) into its standard positive and negative Borel subalgebras and the definition of the screening operators, we may write the composed currents in terms of half-currents and screening operators. To obtain the latter relation, we used formulas (3.23) and the relation between total and half-currents.
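As a small worked illustration of the formal-integral prescription just described (with the overall 2πi normalization absorbed into the symbol ∮ dz/z, as is implicit above): for a Laurent series g(z) = Σ_{k∈Z} g[k] z^{-k},

$$\oint_{C_0}\frac{dz}{z}\,g(z)\;=\;\sum_{k\in\mathbb{Z}} g[k]\oint_{C_0}\frac{dz}{z}\,z^{-k}\;=\;g[0],$$

since only the k = 0 term contributes a residue at z = 0; the integral around infinity is treated analogously and picks up the zero mode of the expansion valid near z = ∞.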
Universal Bethe vectors and projections
The goal of this section is to obtain the representations for the left and right universal Bethe vectors in terms of the integral transform of the products of the total currents. This will generalize the results obtained for U q ( sl 3 ) in the paper [28]. The calculation of the scalar product after that will be reduced to the calculation of the exchange relations between products of total currents.
Universal Bethe vectors through currents
It was shown in the papers [20,24] that the universal right Bethe vectors can be identified with certain projections of products of total currents. Using the same method, one may prove that the left Bethe vectors can also be identified with projections of products of total currents. The problem of calculating the scalar product C_{V′}(τ,σ), B_V(t,s) is thus reduced to the exchange relations between projections. Fortunately, to perform this exchange, we only have to calculate modulo the ideals in the algebra U_q( gl_3 ) which are annihilated by the left/right singular vectors. One could calculate these projections and present them in the form of a sum of products of projections of simple and composed root currents (see formulas (4.4) and (4.5) below). However, this calculation has the same level of difficulty as the exchange relations of the Bethe vectors in terms of L-operators. The idea of the present paper is to rewrite the projection formulas (4.4) and (4.5) in terms of integrals of total simple root currents, and then to compute the exchange of products of total currents. In this way, we obtain an integral representation for the scalar product of the off-shell Bethe vectors. This calculation is much easier, since the commutation relations of the simple root total currents are rather simple.
Calculation of the universal off-shell Bethe vectors
Before presenting the formulas for the universal off-shell Bethe vectors in terms of the current generators, we have to introduce the following notation. Consider the permutation group S_n and its action on formal series of n variables, defined for the elementary transpositions σ_{i,i+1} as follows. The q-dependent factor in this formula is chosen in such a way that each product F_a(t_1) · · · F_a(t_n) is invariant under this action. Summing the action over the whole group of permutations, we obtain the operator Sym_t = Σ_{σ∈S_n} π(σ), acting as follows. The product is taken over all pairs (ℓ, ℓ′) such that the conditions ℓ < ℓ′ and σ(ℓ) > σ(ℓ′) are satisfied simultaneously.
According to the results of the papers [26,27], the calculation of the universal off-shell Bethe vectors is reduced to the calculation of the projections (4.2) for the right Bethe vectors and (4.3) for the left Bethe vectors. The calculation was detailed in [28]. Here, we present the result of these calculations and give several comments on how they were performed.
Proposition 1. The projections (4.2) and (4.3) are given by the series (4.4) and (4.5), respectively. Note that the kernels (4.6) are defined in such a way that they have only k simple poles, at the points t_1, . . . , t_k, with respect to the variable x_k, k = 1, . . . , n. These kernels appear in the integral presentation of the projections of products of the same simple root currents (see (4.14) below).
The proof of the formulas (4.4) and (4.5) is similar to the proof presented in the paper [28]. We will not repeat these calculations here, but for completeness we collect all the necessary formulas. As a first step, we present the products of currents F_2(s_1) · · · F_2(s_b) and E_2(σ_b) · · · E_2(σ_1) in normal-ordered form, using the properties of the projections given at the end of Subsection 3.2. To evaluate the projections in formulas (4.4) and (4.5), we commute the negative projections P^-_f(F_2(s_1) · · · F_2(s_k)) to the left through the product of the total currents F_1(t_1) · · · F_1(t_a) in the case of (4.4), and commute the negative projections P^-_e(E_2(s_m) · · · E_2(s_1)) to the right through the product of the total currents E_1(τ_a) · · · E_1(τ_1) in (4.5). To perform this commutation we use the corresponding relations. The expressions involved are linear combinations of the half-currents, while φ^s_ℓ(s_1; s_2, . . . , s_k), ℓ = 2, . . . , k, are rational functions satisfying the normalization conditions φ^s_j(s_i; s_2, . . . , s_k) = δ_ij, i, j = 2, . . . , k. One also needs the commutation relations between the negative half-currents and the total currents, valid for arbitrary permutations ω and ω′ of the sets s and t, respectively.
Integral presentation of the projections (4.4) and (4.5)
The projections (4.4) and (4.5) are given as products of projections of currents. As already mentioned, this form is not convenient for obtaining scalar products. We give a new representation (4.8) in terms of multiple integrals over products of simple root currents, with kernels $\mathcal{E}(\bar\tau,\bar\sigma;\bar\mu,\bar\nu)$ and $\mathcal{F}(\bar t,\bar s;\bar x,\bar y)$ given by series built from the kernels $Z(\tau_a,\ldots,\tau_1;\mu_a,\ldots,\mu_1)$; see (4.9) and (4.10). The proof of these formulas is given in the next subsection. Let us explain the meaning of the integral formulas (4.8) for the projections. There is a preferable order of integration in these formulas. First, we have to calculate the integrals over the variables $\nu_i$ and $y_i$, $i=1,\ldots,b$, respectively, and then calculate the integrals over $\mu_j$ and $x_j$, $j=1,\ldots,a$. Example 1. Let us illustrate how this works in the simplest example $a=b=1$, for the projection $P^+_f(F_1(t)F_2(s))$. Integration over $y$ with the first term of the kernel is evaluated using the commutation relations (4.19), while integration over $y$ with the second term is evaluated according to the formulas (3.24). Finally, integration over $x$ in both terms produces the result for the projection in this simplest case. The general case can be treated analogously. Of course, one can first integrate over $x$ and then over $y$; however, in this case the calculation of the integrals for the projection becomes more involved and requires more complicated commutation relations between half-currents.
Proof of the integral presentation of the projections (4.4) and (4.5)
Integral representations for the projections of products of the same type of currents, $P^+_f(F_i(s_1)\cdots F_i(s_b))$ and $P^+_e(E_i(\sigma_b)\cdots E_i(\sigma_1))$, $i=1,2$, were obtained in [28]. They follow from the calculation of these projections in terms of the elements $F^+_i(s_k;s_{k-1},\ldots,s_1)$ and $E^+_i(\sigma_k;\sigma_{k-1},\ldots,\sigma_1)$, which are linear combinations of the half-currents with coefficients given by rational functions. There is a very simple analytical proof of the formulas (4.11), given in [28].
Example 2. Let us illustrate this method with one example: the first relation in (4.11) with $b=2$. From the commutation relation of the total currents $F_i(s_1)$ and $F_i(s_2)$, and due to the integral presentation of the negative half-currents (4.12), we know the answer up to an unknown algebraic element $X(s_1)$ which depends only on the spectral parameter $s_1$. This element can be uniquely determined from the relation (4.13) by setting $s_1=s_2$ and using the fact that $F_i(s)^2=0$. The general case can be treated analogously (see details in [28]). Formulas (4.7) can be proved in the same way.
Using the integral form of the half-currents, one can easily obtain the integral formulas (4.14) for (4.11). According to the structure of the kernels (4.6), the integrands in (4.14) have only simple poles with respect to the integration variables $y_1$ and $\nu_1$, at the points $s_1$ and $\sigma_1$ respectively, while with respect to the variables $y_b$ and $\nu_b$ they have simple poles at the points $s_j$ and $\sigma_j$, $j=1,\ldots,b$. Due to the $q$-symmetric prefactors in the integrals (4.14), the integrals themselves are symmetric with respect to the spectral parameters $s_j$ and $\sigma_j$, $j=1,\ldots,b$, respectively. The integral form for the projections of the strings is a more delicate question. To present them as integrals, we use the arguments of [28] and the formulas (3.24). The point is that the analytical properties of the reverse strings $P^+_f(F_{3,1}(t_a)\cdots F_{3,1}(t_{a-k+1})F_1(t_{a-k})\cdots F_1(t_1))$ and of their $E$-counterparts are the same as the analytical properties of the products of simple root currents $F_1(t_a)\cdots F_1(t_1)$ and $E_1(\tau_1)\cdots E_1(\tau_a)$. Therefore the calculation of the projection of the reverse string can be done along the same steps as for the product of simple root currents. In order to relate the projection of the string and the projection of the reverse string, we need the corresponding commutation relations and the fact (proved in [28]) that under the projections we can freely exchange currents without taking into account the $\delta$-function terms. As a result, we obtain multiple integrals over the variables $\mu_1,\ldots,\mu_a$ with the kernel $Z(\tau_a,\ldots,\tau_1;\mu_a,\ldots,\mu_1)$. Here we used the notation for products of non-commutative terms and the identities $P^+_e(E_{1,3}(\tau)) = P^+_e(E_2(E_1(\tau))) = E_2\,P^+_e(E_1(\tau)) = E_2\,E^+_1(\tau)$, based on the commutativity of the screening operators and the projections proved in [28].
The last step before obtaining integral formulas for the universal Bethe vectors is to present products of screening operators acting on total currents as integrals, using formulas (3.24). The presentation follows from a chain of equalities which relies on the commutation relations between the currents involved. Note that these commutation formulas are crucial for the integral formulas given below in (4.20) and (4.21). One can see that the right hand sides of these formulas are not ordered, while the left hand sides are.
Example 3. Let us check the first equality in (4.19) in the simplest case. To calculate this exchange relation, we start from the definition of the composed current $F_{3,1}(x)$ given in (3.22) and apply to this relation an integral transformation in the auxiliary variable $y$. To calculate the resulting integral, we decompose the kernel of the integrand.
Commutation of products of total currents
Formulas (4.8) show that in order to calculate the scalar product of the universal Bethe vectors, one has to commute the product of total currents $\mathcal{E}(\bar\mu,\bar\nu)=E_1(\mu_1)\cdots E_1(\mu_a)\,E_2(\nu_b)\cdots E_2(\nu_1)$ with the corresponding product of total currents $\mathcal{F}(\bar x,\bar y)$ built from $F_1(x_j)$ and $F_2(y_i)$. According to the decomposition of the quantum affine algebra $U_q(\widehat{\mathfrak{gl}}_3)$ used in this paper, the modes of the total currents $F_i[n]$, $E_i[n+1]$, $k^+_j[n]$, $n\ge 0$, and the $q$-commutator $E_{1,3}[1]$ belong to the Borel subalgebra $U_q(\mathfrak{b}^+)\subset U_q(\widehat{\mathfrak{gl}}_3)$. We define the following ideals in this Borel subalgebra. Definition 1. We denote by $J$ the left ideal of $U_q(\mathfrak{b}^+)$ generated by all elements of the form $U_q(\mathfrak{b}^+)\cdot E_i[n]$, $n>0$, and $U_q(\mathfrak{b}^+)\cdot E_{1,3}[1]$. Equalities in $U_q(\mathfrak{b}^+)$ modulo elements of the ideal $J$ are denoted by the symbol '$\sim_J$'. Definition 2. Let $I$ be the right ideal of $U_q(\mathfrak{b}^+)$ generated by all elements of the form $F_i[n]\cdot U_q(\mathfrak{b}^+)$ with $n\ge 0$. We denote equalities modulo elements of the ideal $I$ by the symbol '$\sim_I$'.
We also define the following ideal in $U_q(\widehat{\mathfrak{gl}}_3)$. Definition 3. We denote by $K$ the two-sided ideal of $U_q(\widehat{\mathfrak{gl}}_3)$ generated by the elements which contain at least one mode $k^-_j[n]$, $n\le 0$, of the negative Cartan currents $k^-_j(t)$. Equalities in $U_q(\widehat{\mathfrak{gl}}_3)$ modulo elements of the ideal $K$ are denoted by the symbol '$\sim_K$'.
Equalities in $U_q(\widehat{\mathfrak{gl}}_3)$ modulo the right ideal $I$, the left ideal $J$ and the two-sided ideal $K$ will be denoted by the symbol '$\approx$'.
A right weight singular vector $v$ defined by the relations (2.7) is annihilated by the right action of any positive mode $E_i[n]$, $n>0$, and of the element $E_{1,3}[1]$, and is a right eigenvector of $k^+_j(t)$ with eigenvalues $\Lambda_j(\tau)$, which are meromorphic functions decomposed as power series in $\tau^{-1}$. A left weight singular vector $v'$ defined by the relation (2.9) is annihilated by the left action of any non-negative mode $F_i[n]$, $n\ge 0$, and is a left eigenvector of $k^+_j(t)$ with eigenvalues $\Lambda'_j(t)$, which are also meromorphic functions. These facts follow from the relation between the projections of the currents and the Gauss coordinates of the L-operator (3.1)-(3.3).
We observe that these vectors belong to modules over the quantum affine algebra $U_q(\widehat{\mathfrak{gl}}_3)$ from the categories of highest weight and lowest weight representations, respectively. This is in accordance with the definition of the completions $U_E$ and $U_F$ and the corresponding projections given above. We assume the existence of a nondegenerate pairing $\langle v',v\rangle$, and by the scalar product of the left and right universal Bethe vectors we will understand the coefficient $S(\bar\tau,\bar\sigma;\bar t,\bar s)$ in front of the pairing $\langle v',v\rangle$ in the right hand side of the equality
$$\big\langle\, v'\cdot P^+_e\big(E_2(\sigma_b)\cdots E_2(\sigma_1)E_1(\tau_a)\cdots E_1(\tau_1)\big),\ P^+_f\big(F_1(t_1)\cdots F_1(t_a)F_2(s_1)\cdots F_2(s_b)\big)\cdot v\,\big\rangle = S(\tau_1,\ldots,\tau_a,\sigma_1,\ldots,\sigma_b;\,t_1,\ldots,t_a,s_1,\ldots,s_b)\,\langle v',v\rangle \qquad (5.3)$$
It is clear that the scalar product (2.11) differs from (5.3) only by an overall product factor. The problem of calculating the scalar product of the universal Bethe vectors (5.3) is equivalent to the commutation of the projections entering the definitions of the vectors (5.1) and (5.2) modulo the left ideal $J$ and the right ideal $I$. To calculate this commutation, we use the integral presentation of the projections (4.8), commute the total currents and then calculate the integrals. Since both projections belong to the positive Borel subalgebra $U_q(\mathfrak{b}^+)$, we can neglect the terms which contain the negative Cartan currents $k^-_i(t)$ and perform the commutation of the total currents modulo the two-sided ideal $K$. In fact, in commuting the total currents, we will be interested only in the terms which are products of combinations of the $U_q(\widehat{\mathfrak{gl}}_3)$ positive Cartan currents (3.17). All other terms will be annihilated by the weight singular vectors.
Let us recall that the elements $\mathcal{E}(\bar\mu,\bar\nu)$ and $\mathcal{F}(\bar x,\bar y)$ belong to the completed algebras $U_E$ and $U_F$, which are dual subalgebras of $U_q(\widehat{\mathfrak{gl}}_3)$ considered as a quantum double. There is a nondegenerate Hopf pairing between these subalgebras, given by the formulas (3.16). For any elements $a\in A$ and $b\in B$ of two dual Hopf subalgebras $A$ and $B$ of the quantum double $D(A)=A\oplus B$, there is a relation (5.4) [26] expressing the product $a\cdot b$ through pairings of the coproduct components, where $\Delta_A(a)=a^{(1)}\otimes a^{(2)}$ and $\Delta_B(b)=b^{(1)}\otimes b^{(2)}$. Let us apply formula (5.4) to $a=\mathcal{E}(\bar\mu,\bar\nu)=\mathcal{E}$ and $b=\mathcal{F}(\bar x,\bar y)=\mathcal{F}$. Using the current coproduct (3.15), we conclude that $\mathcal{E}$ decomposes into a part $\mathcal{E}'$ satisfying $\varepsilon(\mathcal{E}')=0$, a part $\mathcal{E}''$ containing at least one negative Cartan current $k^-_i(\tau)$, and a Cartan factor $K^+$ whose form is given in (5.5). The left hand side of the relation (5.4) then has the form $\langle\mathcal{E},\mathcal{F}\rangle\cdot K^+$ modulo the ideal $\widetilde J$, where $\widetilde J$, similar to the ideal $J$, is the left ideal of $U_q(\widehat{\mathfrak{gl}}_3)$ generated by the elements $U_q(\widehat{\mathfrak{gl}}_3)\cdot E_i[n]$, $i=1,2$, $n\in\mathbb{Z}$. One can check that after the integration in (4.8), the terms of the ideal $\widetilde J$ which have non-positive modes of the currents $E_1(\mu_k)$ and $E_2(\nu_m)$ on the right disappear and can be neglected. Alternatively, one can argue that these terms are irrelevant using the cyclic ordering of the current or Cartan-Weyl generators, as was done in the papers [26,21]. As a result, for the given elements $a=\mathcal{E}(\bar\mu,\bar\nu)$ and $b=\mathcal{F}(\bar x,\bar y)$, the general equality (5.4) states that $\mathcal{E}(\bar\mu,\bar\nu)\cdot\mathcal{F}(\bar x,\bar y)$ equals the pairing $\langle\mathcal{E}(\bar\mu,\bar\nu),\mathcal{F}(\bar x,\bar y)\rangle$ multiplied by a product of positive Cartan currents (containing the factors $\psi^+_2(y_j)$), modulo the ideals $K$ and $J$. This relation shows that instead of calculating the exchange relations for the product of the currents $\mathcal{E}(\bar\mu,\bar\nu)$ and $\mathcal{F}(\bar x,\bar y)$, it is enough to calculate the pairing between them.
Pairing and integral formula for scalar products
To calculate the pairing, we use the basic properties of the pairing between the dual Hopf subalgebras $A=U_E$ and $B=U_F$. From these properties, and using the definition of the scalar product of the universal Bethe vectors (5.3) together with the integral presentations of the projections (4.8), we arrive at the integral formula (5.6) for the scalar product, in which the integrand contains the factors $\psi^+_2(y_j)$ and the rational series $\mathcal{E}$ and $\mathcal{F}$ given in (4.9) and (4.10).
Conclusions
The kernels entering the formulas (4.8) can be $q$-symmetrized over the integration variables due to the $q$-symmetry properties of the products of total currents. In the $\mathfrak{gl}_2$ case, this leads to a determinant representation of the kernel, in which the determinant on the right hand side is an Izergin determinant. It is equal (up to a scalar factor) to the partition function of the XXZ model with domain wall boundary conditions [8].
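A schematic, hedged form of this object, written in the multiplicative parametrization commonly used in the trigonometric case (normalization and the precise placement of the $q$-factors are convention dependent and are an assumption here), is

$$Z_n(\bar t;\bar s)\;\propto\;\frac{\prod_{i,j=1}^{n}(t_i-s_j)\,(q\,t_i-q^{-1}s_j)}{\prod_{i<j}(t_i-t_j)\,\prod_{i<j}(s_j-s_i)}\;\det_{1\le i,j\le n}\left[\frac{1}{(t_i-s_j)(q\,t_i-q^{-1}s_j)}\right].$$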
The challenge is to obtain determinant formulas for the q-symmetrized kernels (4.9) and (4.10) as sums of determinants, and then to use these determinant formulas to obtain a determinant formula for the scalar products. Work in this direction is in progress.
"Mathematics",
"Physics"
] |
Strain and atomic stacking of bismuth thin film in its quasi-van der Waals epitaxy on (111) Si substrate
We report on the structural properties of Bi thin films grown on (111) Si substrates with a thickness of 22–30 BL. HRXRD and EBSD measurements show that these Bi films are mainly composed of twinning grains in the (0003) direction. The grain size can be as large as tens of microns. From a double-peak (01$\bar{1}$4) φ-scan, we found two pairs of twinning phases coexisting with a rotation angle of ~ 3.6°. We proposed a coincidence site lattice model based on preferential close-packed sites for Bi atoms on the Si (111) surface to explain the coexistence of the rotation phases in the quasi-van der Waals epitaxy. From the measured lattice constants c and a of our samples, along with the data from the literature, we derived a c–a relation: (c–c0) = − 2.038(a–a0), where c0 and a0 are the values of bulk Bi. The normalized position of the second basis atom in the unit cell, x, in these strained Bi films is found very close to that of bulk Bi, indicating that the strain does not disturb the Peierls distortion of the lattice. The fixed ratio of bilayer thickness to lattice constant c reveals that the elastic properties of the covalently bonded bilayer dominate those of Bi crystal.
Bulk bismuth is a unique semi-metal with a small band overlap and very small effective mass in certain orientations in its conduction bands and valence bands 1,2. These unique properties make bismuth a versatile material that can be manipulated between semi-metal and semiconductor through the quantum size effect. Recent studies on ultra-thin bismuth films also revealed peculiar surface bands resulting from strong spin-orbit interaction [3][4][5][6][7]. These properties have drawn attention to Bi thin films, which are promising for functional electronic and magnetic devices.
The hexagonal lattice structure of bulk crystal Bi is shown in Fig. 1. The lattice shows three bilayers (BLs), stacking in an ABC close-packed sequence along the trigonal direction (c-axis). In each bilayer, Bi atoms in the lower layer (in light blue) covalently bond to three Bi atoms in the upper layer (in light orange), and vice versa. Bi atoms between adjacent bilayers also have covalent charges and form a much weaker "semi-covalent" bonding 8 or van der Waals bonding 9. A green trigonal unit cell, containing two basis atoms, is also shown in the figure. The position of the second basis atom (in light orange) is normalized to the body diagonal of the unit cell c, and is denoted as x. The trigonal cell is distorted from a rock-salt cubic cell through a so-called Peierls-Jones mechanism [10][11][12]. The distortion shifts the normalized position x of the second basis atom from 0.5, the value for the rock-salt lattice, destroying the cubic symmetry and dimerizing the atoms along the c-axis. Such a distortion introduces a small band gap to stabilize the system energy. However, the stable x position is liable to be perturbed by external influences 11. The relief of the Peierls distortion by highly injected electrons has been demonstrated 11,12.
For the growth of Bi thin films on (111) Si substrates, Nagao et al. and Kammler et al. have demonstrated a method of preparing ultra-thin Bi films on a clean Si (111) 7 × 7 reconstruction surface at room temperature 8,13,14. These previous works utilized the in-situ low-energy electron diffraction technique to observe a structure transition from textured pseudo-cubic Bi (110) grains into hexagonal Bi (111) grains when the coverage of Bi is about 5-7 monolayers. The Bi (111) grains have perfect azimuthal alignment with the Si (111) substrate with a relation of 6a_Bi = 7a_Si. After the transition, the Bi (111) grains gradually coalesce and develop into a continuous layer.
In our previous work 15, we found that the lattice constants of a ~ 80-nm-thick Bi film are all close to those determined from bulk Bi 16,17, indicating that the growth is fully relaxed despite the large lattice mismatch between Bi and Si. In this work, we studied the properties of Bi thin films with thicknesses in the range of ~ 10 nm. Through high-resolution X-ray diffraction (HRXRD) measurements, the lattice parameters c and a were determined, and the films were found to be strained. From the φ-scan of (01$\bar{1}$4) planes, we observed the coexistence of two phases with a small rotation angle. A coincidence site lattice model was proposed to explain the phenomena. The normalized position x of the second basis atom was also determined and was found unchanged.
Results and discussions

Figure 2a-d show the Ω-2θ scans for the (0003), (0006), (0009), and (00012) planes of sample S1, respectively. The linewidths of these peaks are broad, which is due to the small film thickness, i.e., Scherrer broadening 18. We used an interference function for XRD to fit the line shapes. The function 18 contains the angle θ between the incident X-ray and the plane, the Bragg angle θ_B of the plane, the layer number M, and a constant intensity I_0. In addition to the line-shape function, the fitting also considered a parabolic background to reduce the effect of background noise, and the fitting results are summarized in Table 1. The average c is 11.932 Å, which is longer than the value reported for bulk Bi 16,17, suggesting the existence of vertical strain in the layer. The layer numbers M can be converted to the number of bilayers (BL) to estimate the thickness. Figure 2e shows the cross-sectional TEM lattice image of the Bi thin film. The thickness obtained by TEM is 31-32 atomic layers, which is close to the 30 BL obtained by the interference-function fitting for the (0003) reflection. The SAED patterns of this sample (not shown) are similar to those of the 80-nm-thick Bi film reported previously 15. Detailed analysis of the epitaxial relationship between Bi and the Si substrate will be provided in the subsequent HRXRD φ-scan experiments.
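A hedged reconstruction of this line-shape expression, consistent with the variables named above, is the standard kinematical (Laue) interference function; ref. 18 may use an equivalent parametrization:

$$I(\theta)\;=\;I_0\,\frac{\sin^{2}\!\big(M\pi\,\sin\theta/\sin\theta_B\big)}{\sin^{2}\!\big(\pi\,\sin\theta/\sin\theta_B\big)},$$

which reaches its maximum value $I_0M^{2}$ at $\theta=\theta_B$ and produces the thickness fringes whose spacing determines the layer number $M$.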
To study the in-plane structure, we performed XRD measurements on the tilted plane (01$\bar{1}$4). A typical Ω-2θ scan for the tilted plane (01$\bar{1}$4) is shown in Fig. 3a, and the (01$\bar{1}$4) φ-scans of three Bi samples are shown in Fig. 3b-d. For comparison, the (220) φ-scan of their Si substrates is also shown in the figures. Detailed procedures for XRD measurements on tilted planes have been reported previously 15. The lattice constant a can be resolved from the plane-spacing equation, where d_(hklm) is the interplanar spacing of the plane (h k l m), and h = 0, k = 1, and m = 4 for the plane (01$\bar{1}$4). We performed a series of (01$\bar{1}$4) measurements; the resulting values of a are listed in Table 2. The average a of the Bi films is in all cases smaller than the reported values for bulk Bi 16,17.
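We assume the elided relation is the standard plane-spacing equation for a hexagonal lattice, which for the indices given above yields a directly from the measured d and c:

$$\frac{1}{d_{(hklm)}^{2}}\;=\;\frac{4}{3}\,\frac{h^{2}+hk+k^{2}}{a^{2}}+\frac{m^{2}}{c^{2}}
\quad\Longrightarrow\quad
a\;=\;\sqrt{\frac{4/3}{\,1/d_{(01\bar{1}4)}^{2}-16/c^{2}\,}}\ \ \text{for }(h,k,m)=(0,1,4).$$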
Figure 3b shows the (01$\bar{1}$4) φ-scan of sample S2. As shown in the figure, there are three stronger peaks interleaved by three weaker peaks. Note that the (01$\bar{1}$4) plane has threefold symmetry. The three stronger (01$\bar{1}$4) peaks belong to a twinning phase resembling the stacking sequence of the Si substrate, and their φ angles are aligned to those of Si (220) with an angle difference of only ~ 1.4°. The three weaker peaks belong to the other twinning phase. The lower intensity suggests that it is less preferential than the former one, which is supported by the EBSD IPF X image shown as the inset of the figure. In Fig. 3c, the φ-scan of sample S3 is shown; the six (01$\bar{1}$4) peaks are at the same intensity level, indicating that the two twinning phases have equal preference. However, three of the peaks match the (220) peaks of the Si substrate well, with an angle difference of less than 0.1°. The average grain sizes of samples S2 and S3, estimated from their EBSD IPF X images, are 1 ~ 2 μm. The (01$\bar{1}$4) φ-scan of sample S1, whose average grain size is in the tens of microns, is shown in Fig. 3d. We can see that all six (01$\bar{1}$4) signals split into two peaks, suggesting that, in addition to the two twinning phases, a pair of misaligned twinning phases appears in this sample. The Si (220) peaks align well with the left split peaks of the primary twinning peaks, and the right split peaks are ~ 3.6° apart from the Si (220) peaks. For the other twinning phase, the angle difference between the split peaks is ~ 5°.
To explain the coexistence of the two Bi grains, we consider the atomic stacking at the Bi/Si interface. Figure 4a shows an atomic arrangement based on the 6 Bi to 7 Si registry reported previously 13,14. In this model, we assume that 6a_Bi = 7a_Si and that the preferential sites for Bi atoms are the A, B, and C sites of the close-packed hexagonal lattice of the Si (111) surface layer. The A site is the position right on top of the Si surface atoms, and the B and C sites are the other two possible sites for a close-packed hexagonal layer. Notice that a Bi atom has two different bonds, i.e., covalent and semi-covalent bonds, with different bond lengths in the Bi crystal. This behavior could allow Bi atoms to stand directly on top of a Si atom (A site) or to occupy the other two close-packed sites (B site and C site). As shown in the figure, the Bi atoms sitting on A, B, or C sites form a 2√3 × 2√3 (a_Bi) supercell, indicated by a dashed hexagon. The hexagon has its six corner atoms alternately falling on the B site and the C site of the Si lattice, and the central atom is at the A site. Note that the Bi atoms on A sites form a 6 × 6 (a_Bi) coincidence site lattice with the bottom Si 7 × 7 (a_Si) lattice. Figure 4b shows the misaligned atomic arrangement. In this model, the Bi lattice is slightly rotated to shift the sixth Bi atom from the original preferential site B6 of the well-registered model (Fig. 4a) to the nearest preferential site C6. The lattice constant a_Bi is slightly extended to 6a_Bi = 7.024a_Si, and the rotation angle is ~ 4.7°. As shown in the figure, the Bi atoms sitting on A, B, or C sites form a 3 × 3 (a_Bi) supercell, indicated by a dashed hexagon. As in the case of Fig. 4a, the hexagon has its six corner atoms alternately falling on the B site and the C site of the Si lattice, and the central atom is at the A site. The Bi atoms on A sites form a 3√3 × 3√3 (a_Bi) coincidence site lattice at the Bi/Si interface, and the bottom Si coincidence site lattice is √37 × √37 (a_Si). Yaginuma et al. suggested that the formation of the 6 × 6 (a_Bi) and Si 7 × 7 (a_Si) coincidence site lattice is the key step for the subsequent highly crystallized Bi lattice 13. They used DFT theory to calculate the a_Bi of a freestanding Bi slab with a thickness ranging from 1 to 8 hexagonal bilayers. When the thickness reaches 3 bilayers, where the hexagonal nuclei stabilize on the Si substrate, the calculated a_Bi = (7/6) a_Si, in good agreement with the value obtained from SPA-LEED measurements. This lattice-match information is then conveyed to the growing film through the hexagonal nuclei. We believe that, in addition to the 6 × 6 (a_Bi) coincidence site lattice shown in Fig. 4a, the 3√3 × 3√3 (a_Bi) coincidence site lattice shown in Fig. 4b plays the same role. In fact, Kammler et al. mentioned the observation of a second preferred Bi grain with a 4.7° rotation relative to the direction of the main Bi grain when the Bi coverage is 7 ML 14. In this early stage, their reported rotation angle almost matches the value predicted by the model we propose.
The thickness of our samples is within 25-30 BL. These films have undergone grain coalescence into a continuous layer and may have begun to relax. In the coalescence process and the subsequent relaxation process, shear stress resulting from the formation of grain boundaries or from relaxation could further rotate the in-plane orientation. As a result, the angles we observed are slightly different from the predicted values. We have tried other misaligned arrangements by shifting the sixth Bi atom from the B6 site to other adjacent B or C sites, and found that the lattice constants a_Bi of these cases deviate from that of the well-registered case by at least 4.9%, and are thus unlikely to be the cases for the Bi lattices.
Figure 5 shows the relationship between the lattice constants c and a for our four samples, along with several data points reported in the literature 7,8,15,17,19. Except for the 6-BL-thick Bi film grown on a Bi2Te3 substrate 7, all the Bi thin films grown on Si substrates lie between the left and right vertical dotted lines, which represent a_Bi = 7a_Si/6 = 4.480 Å and a_Bi = 4.546 Å, respectively. The former is the value at which the coincidence lattice matches the Si substrate in the nuclei-stabilizing stage 13, which can be regarded as the starting point of the growth. The latter is the bulk value reported by Barrett 16,17, which was obtained from a zone-refined single-crystal ingot, free from strain; therefore, it can be regarded as the end point of the growth. As can be seen in the figure, there is an empirical solid line, and almost all the points are close to or on this line. On the assumption of biaxial strain, the line can be represented by a linear relation between the strain ε_zz along the trigonal axis and the in-plane strain ε_xx, where c_0 and a_0 are the fully relaxed or freestanding lattice constants and the C_ij are the six Voigt stiffness constants 20. Here, c_0 and a_0 are selected to be the values of Barrett, and the strain ratio −2C_13/C_33 derived from the solid line is −0.781, which deviates from the values −1.286, calculated from Eckstein's stiffness constants 20, and −0.960, calculated from Bridgman's elastic constants 21. Both Eckstein's constants and Bridgman's constants were obtained from bulk Bi crystal bars. According to Yaginuma et al.'s DFT calculation, the a_Bi of a freestanding Bi slab increases with the thickness and gradually approaches the bulk Bi value 13. Their calculation covered 1 to 8 BL, and the corresponding "strain" with respect to bulk a_Bi ranged from −5.3 to −0.8%. Here, strain needs a clear definition. The strain with respect to the bulk a_Bi is called "apparent strain", and the strain resulting from stress is called "effective strain", defined as the strain with respect to the thickness-dependent freestanding a_Bi. Different freestanding lattice constants may result in different c-a relations. Since the solid line in Fig. 5 contains Bi films with different thicknesses ranging from 6 to 200 BL, we assume that all the freestanding points stand on the same line. For the Bi film grown on a Si substrate, in the early nuclei stage, the freestanding a_Bi of the nuclei makes the coincidence site lattice match the Si lattice. However, as the film thickness increases, the increasing freestanding a_Bi builds up effective strain and stress. Consequently, the strain relaxation moves the lattice constants of the film toward the freestanding a_Bi along the solid line. Although our results follow the solid line, their positions are not in order of thickness, which could arise from the complex granular structures and the internal atomic deviation, especially of the atoms not at the preferential sites in the lattice. In ref. 8, Nagao et al. calculated the cohesive energy of the hexagonal bilayer. From their result, when the thickness is larger than 8 BL, the cohesive energy per atom is below 25 meV. For growth close to room temperature, it is easy for Bi atoms to deviate from the lattice positions. The deviation may induce lattice distortion, which not only causes extra effective strain but also enlarges the uncertainty of the measured a_Bi.
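A hedged reconstruction of the equation of the solid line, consistent with the strain ratio −2C13/C33 quoted above and with the relation reported in the Conclusion, is the standard biaxial-strain condition for a stress-free surface of a trigonal film:

$$\varepsilon_{zz}\;=\;\frac{c-c_{0}}{c_{0}}\;=\;-\,\frac{2C_{13}}{C_{33}}\,\varepsilon_{xx}\;=\;-\,\frac{2C_{13}}{C_{33}}\,\frac{a-a_{0}}{a_{0}},$$

which follows from setting $\sigma_{zz}=C_{13}(\varepsilon_{xx}+\varepsilon_{yy})+C_{33}\varepsilon_{zz}=0$ with $\varepsilon_{xx}=\varepsilon_{yy}$.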
In the Bi unit cell shown in Fig. 1, the diagonal along the c-axis contains three BLs. In each BL, the atoms in the top and bottom layers belong to different basis atoms, and the bilayer thickness is b = c(x − 1/3). The normalized basis position x affects the structure factor of the lattice and thus the integrated XRD intensity. By comparing the integrated XRD intensities of different (0 0 0 m) planes, one can find x. However, the integrated intensity is also a function of the Bragg angle θ and of the Debye-Waller factor e^(−2M). Considering only the terms relevant to the normalized position x, the Bragg angle θ, the Debye-Waller factor e^(−2M), and the Miller indices of the (0 0 0 m) plane, the integrated XRD intensity I can be expressed as in Eq. (1), where A is a constant independent of the Bragg angle and orientation, L is the Lorentz factor, f_Bi is the atomic scattering factor, μ is the absorption coefficient, and t is the layer thickness. In the Debye-Waller factor, λ is the wavelength of the X-ray, ⟨r_z²⟩ is the mean-square atomic displacement along the z direction, and B_z = 8π²⟨r_z²⟩. For the D8 HRXRD system, the X-ray source is Cu-Kα with a Ge (220) first crystal; the Lorentz factor L includes the polarization factor and the angular-velocity factor, where θ_M is the Bragg angle for Ge (220) and cos 2θ_M = 0.7033. The absorption coefficient of Bi is μ = 2391/cm. For the measurement at the TPS09A beamline, the X-ray source is horizontally polarized with a photon energy of 13.3 keV; the Lorentz factor contains only the angular-velocity factor, L = 1/sin 2θ, and the absorption coefficient is μ = 660.3/cm. Finally, we divide Eq. (1) by the measured X-ray intensity I_exp and define the result as F(x). The F value should equal A for all planes if we select the correct values of x and M. Since A, x, and M are three unknown variables, at least three planes are needed to solve for them. Here, the (0006), (0009), and (00012) planes were used; the (0003) plane was not chosen because of the extinction effect resulting from its small Bragg angle 16. For sample S1, a plot of F/A as a function of x for these planes is shown in Fig. 6. The three curves intersect at a point at x = 0.4677, and the solved B_z = 3.012 Å². The method was applied to the other samples, and the results are summarized in Table 3. Compared with the previous, thicker films of about 80 nm from our laboratory 15 and the bulk material reported in 16, the difference in x is within 0.2%, showing that the strain does not affect the x value. In Fig. 5, the Bi film on Bi2Te3 7 has the largest strain; the apparent strain is as large as −3.3%. In the reference, Hirahara et al.
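The elided expressions for M and L can be reconstructed, as an assumption consistent with the quantities named above, in their standard forms:

$$M=\frac{B_{z}\sin^{2}\theta}{\lambda^{2}},\qquad B_{z}=8\pi^{2}\langle r_{z}^{2}\rangle,
\qquad
L=\frac{1+\cos^{2}2\theta_{M}\,\cos^{2}2\theta}{\sin 2\theta}\ \ \text{(up to a constant normalization)},$$

the latter reducing to $L=1/\sin 2\theta$ for the horizontally polarized synchrotron measurement, as stated above.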
also gave both the intra-bilayer thickness b and the inter-bilayer thickness c/3 − b, using the LEED technique 7. The x value calculated from these two thicknesses is 0.4680, in very good agreement with the bulk value as well as with our values listed in Table 3. Note that the x value indicates a stabilization of the Peierls distortion applied to a rock-salt lattice (x = 0.5), which introduces a small band gap over an extended region of the Brillouin zone. Disturbing this stabilization by high electronic excitation has been reported. Our findings suggest that the strain does not affect x, implying that the change of the band-gap structure by thickness reduction does not affect the stabilization, even down to a thickness of 6 BL. A fixed value of x also implies a fixed relationship between b and c. It also means that the elastic properties of the Bi crystal are dominated by the elastic properties of the covalently bonded bilayer network. The inter-bilayer semi-covalent bonds play a minor role; their long-range effect causes a change in the freestanding lattice constant when the bilayer number is small.
Conclusion
In conclusion, we have studied the structural properties of Bi thin films grown on (111) Si substrates with a thickness of 22-30 BL. The HRXRD and EBSD measurements showed that the epilayers are mainly composed of twinning grains in the (0003) direction and that the grain size can be as large as tens of microns. In the HRXRD φ-scan of S1, we observed double peaks with an angle difference of ~ 3.6°, suggesting the coexistence of two twinning phases. We proposed a coincidence site lattice model based on preferential close-packed sites for Bi atoms on the Si (111) surface to explain the two phases in the quasi-van der Waals epitaxy. From the measured lattice constants c and a of our samples, along with data from the literature, we derived a c-a relation: (c/c_0 − 1) = −0.781(a/a_0 − 1), where c_0 and a_0 are the values of bulk Bi. The normalized position x of the second basis atom in the unit cell was also determined in these strained Bi films. We found that all the x values are very close to that of bulk Bi, indicating that neither the strain nor the geometrically thin structure disturbs the Peierls distortion of the lattice. The fixed ratio of bilayer thickness to lattice constant c reveals that the elastic properties of the covalently bonded bilayer dominate those of the Bi crystal.
MBE of nanoscale Bi
An SVTA solid-source molecular beam epitaxy system was used to grow the Bi thin films on n-type Si (111) substrates. The Si substrates were immersed in acetone, methanol, and isopropanol for 2 min each and then soaked in 2% HF solution for 1 min to remove the native oxide. Afterward, the substrates were loaded into the MBE chamber.
Figure 2. Ω-2θ scans for the (0003), (0006), (0009), and (00012) planes of sample S1, fitted with the interference function (see text).
Figure 1. Schematic diagram of the rhombohedral Bi lattice drawn in a hexagonal lattice. The two basis atoms are represented by light blue and light orange balls, respectively. Parameters b and x are the bilayer thickness and the normalized distance of the second basis atom, respectively.
Figure 3. (a) A typical HRXRD Ω-2θ scan of the tilted Bi (01$\bar{1}$4) plane for sample S1. (b-d) HRXRD φ scans of the Bi (01$\bar{1}$4) and Si (220) planes for samples S2, S3 and S1, respectively. The EBSD inverse pole figures (IPF X) are shown as insets in their panels. The crystal orientation is indicated by the colored sector on the right of (b).
Table 2. Lattice parameters c and a. a: monolayer thickness, equal to c/3, obtained from Fig. 2k of ref. 8. b: intra-bilayer thickness, equal to b. c: inter-bilayer thickness, equal to (c/3) − b.
Figure 4. (a) Sketch of the Bi atomic arrangement that satisfies 6a_Bi = 7a_Si. The Bi atoms form a 2√3 × 2√3 (a_Bi) supercell indicated by a dashed hexagon. (b) Sketch of the misaligned atomic arrangement. The Bi atoms form a 3 × 3 (a_Bi) supercell indicated by a dashed hexagon.
Figure 6. Logarithmic plot of F(x) versus x of the (000m) planes for sample S1.
Table 1. Lattice parameters of S1, fitted with the interference function.
Table 3. Lattice parameters x and B_z. a: calculated from the intra-bilayer and inter-bilayer thicknesses.
"Materials Science",
"Physics"
] |
Image-based Classification of Tumor Type and Growth Rate using Machine Learning: a preclinical study
Medical images such as magnetic resonance (MR) imaging provide valuable information for cancer detection, diagnosis, and prognosis. In addition to the anatomical information these images provide, machine learning can identify texture features from these images to further personalize treatment. This study aims to evaluate the use of texture features derived from T1-weighted post-contrast scans to classify different types of brain tumors and predict tumor growth rate in a preclinical mouse model. To optimize the prediction models, this study uses varying gray-level co-occurrence matrix (GLCM) sizes, tumor region selection, and different machine learning models. Using a random forest classification model with a GLCM of size 512 resulted in 92%, 91%, and 92% specificity, and 89%, 85%, and 73% sensitivity for GL261 (mouse glioma), U87 (human glioma) and Daoy (human medulloblastoma), respectively. A tenfold cross-validation of the classifier resulted in 84% accuracy when using the entire tumor volume for feature extraction and 74% accuracy for the central tumor region. A two-layer feedforward neural network using the same features is able to predict tumor growth with 16% mean squared error. Broadly applicable, these predictive models can use standard medical images to classify tumor type and predict tumor growth, with model performance varying as a function of GLCM size, tumor region, and tumor type.
Currently in the clinic, imaging is primarily used for anatomical information such as assessing tumor volume and location, determining feasibility of surgical resection, and assessing response to treatment. Additional information can be extracted from these images using radiomics by mining the images for quantitative image features that are not intuitively observable, such as variance in neighboring pixel values 13,14. Recent studies have found that these features can be informative of the tumor's underlying molecular processes when integrated with machine learning techniques to provide valuable diagnostic, prognostic and predictive information [15][16][17][18][19][20][21][22]. In this work, we focused on texture features derived from the gray-level co-occurrence matrix (GLCM), which are commonly used in many different texture analyses. The GLCM is a square matrix that captures the frequency with which a combination of grayscale intensities occurs, with its dimensions determined by the number of gray levels 23. Features derived from this matrix are informative of the spatial relationships between grayscale intensities, such as the amount of variation, disorder, or contrast within an image 23. These features, however, can be sensitive to image processing, which includes acquisition, reconstruction protocols, and inter-scanner variability [24][25][26][27][28]. Independent of these systematic variations, the values of these features can also be affected by the GLCM size, or the number of gray levels, which is determined a priori, before feature extraction. For a given image, the number of gray levels (pixel intensities) depends on the bit depth of that image, with an 8-bit image having 256 possible values and a 16-bit image having 65,536 possible values. For a higher-bit image it is not practical to construct a 65,536 by 65,536 GLCM; therefore, the image is often rescaled to a more manageable bit size, which can ultimately affect the values of the texture features derived from the matrix. There have been few studies on the impact of GLCM size on texture feature values, even though these features are commonly used in radiomic studies and are included in image analysis software tools [29][30][31]. Preclinical studies can be valuable for investigating the influence GLCM size has on classifier performance, since imaging parameters can be controlled and confounding factors can be eliminated.
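For illustration, the rescaling step and the resulting GLCM features can be sketched as follows. This is a hedged sketch using scikit-image as a stand-in for the in-house MATLAB routine described later; the function names, the min-max binning choice and the selected feature set are assumptions, not the authors' implementation. (Older scikit-image versions name these functions greycomatrix/greycoprops.)

```python
# Hedged sketch: requantize a higher-bit MR slice to a chosen number of gray levels
# and extract a few offset-averaged GLCM texture features with scikit-image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi, levels=512):
    """Return offset-averaged texture features of `roi` for a given GLCM size `levels`."""
    roi = roi.astype(np.float64)
    # Map the intensity range of this ROI onto 0 .. levels-1 (one possible binning choice).
    scaled = np.floor((roi - roi.min()) / (np.ptp(roi) + 1e-12) * (levels - 1)).astype(np.uint16)
    glcm = graycomatrix(scaled,
                        distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    # Averaging over the four offsets gives rotation-insensitive feature values.
    return {p: graycoprops(glcm, p).mean() for p in ("contrast", "homogeneity", "correlation")}
```

Running the same ROI through this function with different `levels` values makes the GLCM-size dependence of individual features directly visible.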
The aim of this work is to classify brain tumor type and predict tumor growth rate using texture features from T1-weighted post-contrast MR scans in a preclinical model. Tumor regions were segmented using in-house software, with GLCMs constructed for a single tumor slice or for the entire tumor volume. We investigated the sensitivity of texture feature values to different GLCM sizes and how this affects the performance of different classifiers. Our preclinical model also allowed for the opportunity to systematically follow the growth of these tumors. To further explore the potential application of texture features derived from diagnostic images, we used these features to predict tumor growth rate. Using a shallow neural network, image features were used to predict the α and β values of an exponential function, a simple growth model that assumes that growth is proportional to the cell population, where the α and β values describe the initial volume and the growth rate, respectively. Using machine learning, we assess whether radiomics approaches have the potential to classify tumor type and predict tumor growth rate noninvasively, allowing clinicians to make better-informed treatment decisions using standard medical images.
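For concreteness, we take the one-term exponential referred to here and in the Methods to be the following (a reconstruction consistent with the description of α as the initial-volume parameter and β as the growth rate; the elided equation may differ in notation):

$$V(t)=\alpha\,e^{\beta t}.$$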
Results
To construct our classification and prediction models, texture features were first extracted from the tumor region using an in-house MATLAB program for three different types of tumors: GL261 (mouse glioma), U87 (human glioma) and Daoy (human medulloblastoma). The program consists of a graphical user interface (GUI) which allows users to import DICOM files from T1-weighted post-contrast scans either in batch or as a single image slice and to perform manual segmentation of the region of interest (ROI). Image features were extracted from three different tumor regions (central, middle and edge) as well as from the average of the entire tumor. Once segmentation is completed, the program calculates first- and second-order image features. First-order features are derived from the grayscale intensity histogram of the selected ROI. Second-order features are derived from ten different GLCM sizes. The extracted image features from the different GLCM sizes for each tumor type are used as inputs to train the three classifiers and the tumor growth prediction models (Fig. 1).
Tumor type classification. Overall, the three different classification models, decision tree, random forest and support vector machine, had similar performance. Figure 2 shows the validation accuracy for the different GLCM sizes for each classifier and tumor region. We observed an overall trend of increasing validation accuracy with the number of gray levels or GLCM size; however, the increase was not proportional and varied between classification models. The only instance in which increasing the GLCM size directly increased validation accuracy was when the entire tumor volume was used with a decision tree model (Fig. 2a). The selection of the tumor region used for GLCM construction also impacted the classifiers' accuracy. Using the entire tumor volume resulted in higher accuracy compared to a single image slice. The edge of the tumor was the least predictive, while the central and middle regions were comparable (Fig. 2).
The best performing classification model was random forest with 512 gray levels, equivalent to a GLCM size of 512, which achieved 84% validation accuracy. With the same model using the central, middle and edge tumor regions, the validation accuracies were 74%, 74% and 52%, respectively. The model resulted in an area under the curve (AUC) = 0.92, 0.88 and 0.85 for GL261, U87 and Daoy, respectively (Fig. 2). While there was high specificity and sensitivity for both GL261 and U87 tumor classifications, sensitivity was only 0.77 for Daoy tumors (Tables 1-3). Even though the GLCM size of 256 had similar accuracy, the classifier performance was not as uniform across all tumor types as with the GLCM size of 512. Therefore, all subsequent analyses were performed on features extracted from the GLCM of size 512. We also identified four texture features dependent on GLCM size: autocorrelation, cluster prominence, sum of squares and sum variance (p ≤ 0.01-0.0001) (see Supplementary Fig. S1).
Random forest is an ensemble learning technique which grows many decision trees, where the final class is determined by a majority "vote" from all the decision trees. Analysis of the estimated feature importance reveals skewness (0.91) to be the most important feature, followed by cluster shade (0.84), kurtosis (0.55), median intensity (0.40), mean intensity (0.30), sum average (0.30), sum variance (0.289), autocorrelation (0.26) and several others.

Growth curve prediction. Using a shallow feedforward two-layer neural network, the tumor growth curve could be established with a mean squared error of 16% (Fig. 4a). The neural network was trained to predict the α and β values of an exponential growth model. The inputs used to train the network were image features from similar tumor volumes and the α and β values from the fitted exponential of the experimental data (Fig. 4b). The experimental data showed U87 to be highly aggressive with a rapid growth rate (α = 1.2 ± 0.8, β = 0.21 ± 0.03), with Daoy having a slower growth rate (α = 0.9 ± 0.5, β = 0.08 ± 0.04) (Fig. 4c). GL261 tumors had higher variance in tumor growth compared to the other tumor types (α = 0.7 ± 0.9, β = 0.15 ± 0.04) (Fig. 4d). Overall, the neural network was able to predict tumor growth more accurately for U87 and Daoy tumors than for GL261. Additional results can be found in Supplementary Fig. S2.
Discussion
In this study we have demonstrated that, by using texture analysis, standard medical images can be used to classify tumors and predict tumor growth rate in preclinical models. Furthermore, we have shown that the size of the GLCM, or number of gray levels, impacts the performance of the classification models, and we have identified a classifier that outperformed all others. The best performing classifier was a random forest model with texture features derived from the 512-gray-level GLCM using the entire tumor volume. These derived texture features can be used to predict tumor growth using a shallow two-layer neural network. Assuming an exponential growth curve, the neural network is able to predict the α and β values using only the diagnostic scan. These classification and prediction models can provide an additional tool for tumor diagnosis and could be used to better personalize treatment planning without additional burden to the patient.
Previous works have shown that image features can be used for brain tumor classification and grading [32][33][34][35]. Studies aimed at using images to classify tumor type used different classification models, parameters and scan sequences, with varying degrees of success (77 to 91% accuracy) 32,34,36. Our result of 84% accuracy is comparable to these other models after optimizing the GLCM size and classifier. Taking a more reductionist approach, our study focused on the robustness of texture features for different GLCM sizes and how this in turn affects the performance of different classifiers, while controlling for external confounding factors such as machine variance by using our preclinical tumor models. Changing the distance between neighbors used to construct the GLCM is another variable that could be optimized. However, our preclinical tumor volume is limited, and therefore extending the distance between neighboring pixels would not produce meaningful results. Chen et al. have shown in texture analysis of breast lesions that extending the distance between neighbors does not significantly affect the GLCM, a finding which may be applicable to brain tumor studies 37. We have identified four features (autocorrelation, cluster prominence, sum of squares, and sum variance) that were dependent on the number of gray levels, while the other texture features were independent. Cluster prominence, which is a measure of the GLCM asymmetry, was the only feature that was significantly lower in the glioma (GL261 and U87) tumors than in the medulloblastoma. The observed lower local variance, as defined by cluster prominence, in the glioma tumor models can be attributed to the well-defined tumor border in gliomas, unlike the diffuse borders of medulloblastomas (Fig. 1). This finding agrees partially with Brynolfsson et al.'s work using apparent diffusion coefficient (ADC) maps of glioma and prostate data 38. However, that study identified more features being influenced by GLCM size and noted a greater effect of GLCM size on the texture feature values 38. This seems to indicate that the influence of the GLCM size on texture feature values may vary between image acquisition protocols. Our results indicate that GLCM size is an important parameter to consider when constructing classification models. Therefore, to allow meaningful comparison across acquisition protocols, the GLCM size should be standardized and not be part of the optimization process.
Classifier performance was impacted by the GLCM size, but the selection of the ROI for the derivation of the GLCM had the greatest impact on performance. Using the entire tumor volume resulted in higher accuracy than using a single image slice alone. This may be due to an increase in counting statistics and less sparse GLCMs, which allow for the extraction of meaningful features. This was the case at the edge of tumors, where the ROI was small, which resulted in poor performance for all three classifiers. Furthermore, using the entire tumor region better approximates the tumor's heterogeneity.
To our knowledge, this is the first demonstration of the use of texture features to predict tumor growth using a neural network. While there are many classical models of tumor growth, we chose to fit the experimental data to an exponential growth curve since the model is straightforward (with only two parameters) and provides a good fit to our data 39. More complex mathematical models of tumor growth have also been proposed, driven by the tumor's biological processes; however, the majority of these models are derived and optimized for gliomas [40][41][42]. Using a simple two-parameter growth model with minimal assumptions about the underlying tumor biology allows us to test the predictive power of image features with different tumor types. Overall, the neural network is able to predict the α and β values with better performance for U87 and Daoy tumors, with 16% mean squared error. The prediction model performed best for tumors that had lower growth variance, which helps reinforce the network. The high variance in the growth curve of GL261, a mouse glioma cell line, may be due to immunological factors in the animal model (Fig. 4). In the GL261 model, cells are implanted into an immunocompetent animal, unlike the human U87 or Daoy cells, potentially leading to an immune response that could delay tumor growth to varying degrees.
While there are other image features that could be used, in this study we are limited to features derived from the signal intensity histogram (first order) and features derived from the GLCM (second order). Differences in image processing between our in-house MATLAB program and other publicly available software tools mean that the extracted feature values are not necessarily identical or universally comparable, even though the same mathematical equations were used [29][30][31]. As such, the use of image features from other software can impact the performance of the classifiers we developed. The classifiers were also trained on scans that were acquired from the same animals at different time points, with the assumption that these scans are not necessarily equivalent. This assumption allowed the classification models to be constructed with the use of large cohorts; however, this assumption should be investigated in future studies. Another limitation of our work is the use of preclinical brain tumor models, which prevents the findings from this study from being directly extrapolated to patient data. Since we are using supervised machine learning algorithms, the classifier relies heavily on the training dataset. Preclinical models can generate these datasets in a timely manner while controlling for confounding factors such as acquisition and reconstruction protocols, which are not always readily available in clinical data sets. While there are datasets available for adult gliomas, including glioblastoma and low-grade glioma 43,44, and multi-institutional efforts to manually annotate data from The Cancer Imaging Archive (TCIA) 45, to our knowledge there are no publicly available imaging datasets, at this time, for medulloblastomas. As datasets become available, the established workflow can be implemented to test clinical data. Furthermore, with a preclinical model, we can track tumor growth and establish the growth curve, which is not possible with patient data. This work provides proof-of-concept that texture features can classify tumor types with high accuracy and that, when interpreted by neural networks, these features can map out tumor growth.
These findings demonstrate that image features extracted from standard medical images have the ability to make diagnoses and even predict tumor growth rate. For patients who are not eligible for a biopsy or tumor resection, with further validation using clinical data, this modeling can be an alternate source of information to help clinicians make better informed treatment plans. Furthermore, for patients who may not be immediately eligible for treatments such as radiation therapy, mapping out the growth curve of these tumors can help clinicians identify critical time points when planning the course of treatment. More importantly, this study adds to the previous body of work on the impact of GLCM size on texture feature value.
Conclusion
Features derived from standard-of-care images can be used to classify tumor type and map tumor growth using machine learning algorithms. The number of gray levels used in the construction of the GLCM influenced the performance of the predictive models. A GLCM size of 512 with a random forest classification model yielded an accuracy of 84% based on a tenfold cross-validation in our preclinical glioma and medulloblastoma tumor models. These results are promising for achieving a noninvasive marker for tumor type classification. These texture features were also found to be informative of tumor growth. Using a two-layer neural network, the α and β values were predicted from the image features. The neural network had a mean squared error of 16.02%, with better performance for U87 and Daoy tumors compared to GL261 tumors. The performance of these models can be greatly improved with the addition of new data sets, since both the random forest and the neural network relied heavily on the training data sets. Finally, standardization of feature extraction and exploration of deep learning techniques can contribute to a more accurate prediction of tumor type and growth curve, including a standardized GLCM size that can allow for meaningful comparison of texture features.
Method

Texture feature extraction. Animals were imaged either weekly or biweekly depending on tumor type, with a T1-weighted (relaxation time = 1500 ms, echo time = 8.5 ms, matrix size 256 × 256, pixel spacing 0.117 × 0.117 mm, slice thickness 0.5 mm) post-contrast scan on a 9.4 T magnet (Bruker BioSpin). Gadolinium (Magnevist®, 0.1 µL/g diluted in sterile saline) was administered intravenously 10 minutes prior to the start of image acquisition. The acquired axial scans (n = 87 scans) were used for texture feature extraction. Due to tumor burden, not all animals were imaged the same number of times.
Image features were extracted using a custom program developed in MATLAB 2016 (The MathWorks Inc., Natick, MA). The program allows the users to import the image files and manually select a region of interest (ROI) using a graphical user interface (GUI). The GUI provides visualization of the image, and the user segments the ROI by manually defining the perimeter of the tumor. Tumor region segmentation was performed on the central, middle, edge and entire tumor regions. The central slice was defined as the slice with the largest cross section (n = 73); the edge of the tumor was designated as the second-to-last slice where the tumor was visible (n = 69); and the middle tumor region was defined as halfway between the center and the tumor edge (n = 34). The middle and edge slices were random slices selected from either side of the tumor. If the tumor was too small, we did not include the edge and/or middle tumor region in the analysis; hence there were differences in the number of slices analyzed for each tumor region. For the GLCM representing the entire tumor region, the texture features were calculated by averaging the GLCM from each image slice. Tumor segmentation was performed manually by selecting the tumor border as delineated by the enhancement from the imaging contrast agent. The GLCM was constructed using MATLAB's built-in graycomatrix function, which creates a GLCM from an image with the specified number of gray levels and offset. In this study, the gray-level intensity binning for the segmented tumor region was performed over the entire image.
Once the ROI or tumor region was segmented, 33 different image features were automatically extracted, as defined by Haralick et al., with the corresponding unique code provided by the international Image Biomarker Standardization Initiative (IBSI)23,46 (see Supplementary Table S1). These image features include both first- and second-order features. Second-order features were derived from the gray-level co-occurrence matrix (GLCM). The GLCM is an N × N matrix that represents the frequency with which combinations of grayscale intensities occur. For a given matrix size N, pixel distance δ, and direction α, the GLCM can be written as P_{δ,α}(i, j) = |{(p, q) : I(p) = i, I(q) = j, q = p + δ·(cos α, sin α)}|, where N is the number of discrete gray-level intensities and each (i, j) element represents the frequency with which intensity levels i and j co-occur separated by pixel distance δ in direction α. First-order features were derived from the grayscale intensity distribution histogram of the pixels from the selected ROI (tumor region). Second-order image features were derived from GLCMs constructed with 10 different sizes (N = 8, 16, …)23,47,48. The final texture features were extracted from the normalized GLCM by averaging over the four different offsets (α = 0°, 45°, 90°, and 135°, with symmetry and pixel distance δ = 1). The list of image features can be found in Supplementary Table S1.
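To make the construction concrete, the following Python sketch reproduces the same pipeline with scikit-image instead of MATLAB's graycomatrix: binning the ROI intensities into a chosen number of gray levels, building a symmetric, normalized GLCM for the four offsets, and averaging each second-order feature over the directions. Function and variable names are illustrative rather than the authors' code, and for brevity the binning here uses the ROI's own intensity range rather than the whole image.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi: np.ndarray, levels: int = 512) -> dict:
    """Compute direction-averaged second-order texture features from a 2-D ROI."""
    # Bin the ROI intensities into `levels` discrete gray levels (0 .. levels-1).
    bins = np.linspace(roi.min(), roi.max(), levels)
    binned = (np.digitize(roi, bins) - 1).astype(np.uint16)
    # GLCM for pixel distance 1 and offsets 0, 45, 90, 135 degrees, symmetric and normalized.
    glcm = graycomatrix(binned, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    # Average each feature over the four directions, as done for the final features.
    return {p: graycoprops(glcm, p).mean()
            for p in ("contrast", "homogeneity", "energy", "correlation")}
```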
Tumor type classification model. The classification models were constructed in MATLAB with three classes (GL261, U87, and Daoy) and the image features as inputs. In this study, three different classification models were investigated: decision tree, random forest, and support vector machine. Each model used the extracted image features as inputs to predict the three tumor classes or tumor types. Decision tree models were constructed using the fitctree function in MATLAB, which fits binary decision trees for multiclass classification, with the default settings and the split criterion set to Gini's diversity index. Hyperparameters were optimized to minimize the cross-validation error for all eligible parameters, which included the maximum number of splits and the minimum number of observations at each node. Random forest models were constructed using the TreeBagger function in MATLAB, which grows the decision trees by bootstrapping samples of the dataset and selecting a random subset of predictors to use at each split. Default settings were used, with the exception of using an ensemble of 500 decision trees. Support vector machine models were constructed using the fitcecoc function in MATLAB, which produces a multiclass support vector machine model using a Gaussian kernel function. The models were trained using default settings, and hyperparameters were optimized to minimize the cross-validation error. The following parameters were optimized: box constraint (the penalty imposed on samples lying outside the margins), kernel scale, the polynomial kernel order used to compute the Gram matrix, and the standardization of the input features. To evaluate each algorithm's performance and prevent overfitting during the training phase, a 10-fold cross-validation was performed, in which 90% of the data was randomly sampled for training and 10% withheld for testing. The training and testing datasets were partitioned based on individual imaging scans and not by animal. All hyperparameters were optimized using Bayesian optimization. Feature importance for the random forest was determined by summing the changes in error due to node removal and normalizing by the number of branching points.
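A minimal Python analogue of the random forest with tenfold cross-validation (the paper used MATLAB's TreeBagger with 500 trees) might look as follows; the feature matrix and labels are placeholders standing in for the extracted texture features and tumor types, and the folds split by scan, as in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data: one row of 33 texture features per scan, one tumor-type label per scan.
rng = np.random.default_rng(0)
X = rng.random((87, 33))
y = rng.choice(["GL261", "U87", "Daoy"], size=87)

model = RandomForestClassifier(n_estimators=500, random_state=0)  # 500-tree ensemble
scores = cross_val_score(model, X, y, cv=10)                      # tenfold cross-validation
print(f"mean accuracy: {scores.mean():.2f}")
```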
Tumor growth rate prediction. Tumor growth curves from each individual tumor were first fitted to a one-term exponential of the form y = α·e^(βt). The α and β values were fitted using the Trust-Region algorithm with default settings in MATLAB for each tumor. The values found from the fit were used as the target values for the neural network, with image features derived from early scans (first imaging session) used as inputs. The two-layer neural network consisted of 33 hidden neurons with a sigmoid transfer function in the hidden layer and a linear transfer function in the output layer. The network was trained with the Levenberg-Marquardt back-propagation algorithm, where 60% of the data was used for training, 35% for validation, and 5% for testing (n = 6, 2 test cases for each tumor type from a separate cohort). Training and testing samples were divided to have similar distributions and equal representation of all three tumor types.
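As an illustration of the first step, the sketch below fits a one-term exponential (assumed here to be of the form y = α·e^(βt)) to a hypothetical tumor growth curve using SciPy's trust-region least-squares backend; the fitted α and β would then serve as the regression targets for the network. The time points and volumes are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_term_exponential(t, alpha, beta):
    """Assumed growth model: volume = alpha * exp(beta * t)."""
    return alpha * np.exp(beta * t)

# Hypothetical volumes (mm^3) measured at weekly imaging time points (days).
t = np.array([7.0, 14.0, 21.0, 28.0])
v = np.array([2.1, 5.3, 13.8, 34.0])

# Trust-region reflective least squares, analogous in role to MATLAB's Trust-Region fit.
(alpha, beta), _ = curve_fit(one_term_exponential, t, v, p0=(1.0, 0.1), method="trf")
print(alpha, beta)  # these fitted values become the neural-network targets
```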
Statistical analysis and model performance evaluation. Statistical analysis was performed using GraphPad Prism (GraphPad Software, La Jolla, CA). Model performance was assessed with the following metrics, computed from the confusion matrix for each individual tumor class, where TP = true positive, TN = true negative, FP = false positive, and FN = false negative: accuracy = (TP + TN)/(TP + TN + FP + FN), sensitivity = TP/(TP + FN), specificity = TN/(TN + FP), and F-score = 2TP/(2TP + FP + FN). Analysis of variance (ANOVA) was used to compare multiple means for each image feature between different tumor types. Bonferroni correction was used for multiple comparisons, with P-values presented as multiplicity-adjusted p-values, alpha set to 0.05, and the threshold for significance = 0.000505051, to determine statistical significance of the image features. This stringent correction accounts for 99 potential hypotheses over the three tumor types and thirty-three image-based features. P-values were not additionally adjusted for GLCM size, which was set during the choice of the optimal model.
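For reference, the per-class metrics can be computed directly from the one-vs-rest confusion-matrix counts; the helper below uses the standard definitions stated above and is not taken from the study's code, and the example counts are invented.

```python
def class_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Standard per-class metrics from confusion-matrix counts."""
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "f_score":     2 * tp / (2 * tp + fp + fn),
    }

print(class_metrics(tp=20, tn=55, fp=5, fn=7))  # illustrative counts only
```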
Data Availability
The datasets generated and/or analyzed during the current study are available on https://github.com/tien-tang/tumor-classification_growth-rate.
ParaQooba: A Fast and Flexible Framework for Parallel and Distributed QBF Solving
Over the last years, innovative parallel and distributed SAT solving techniques were presented that could impressively exploit the power of modern hardware and cloud systems. Two approaches were particularly successful: (1) search-space splitting in a Divide-and-Conquer (D&C) manner and (2) portfolio-based solving. The latter executes different solvers or configurations of solvers in parallel. For quantified Boolean formulas (QBFs), the extension of propositional logic with quantifiers, there is surprisingly little recent work in this direction compared to SAT. In this paper, we present ParaQooba, a novel framework for parallel and distributed QBF solving which combines D&C parallelization and distribution with portfolio-based solving. Our framework is designed in such a way that it can be easily extended and arbitrary sequential QBF solvers can be integrated out of the box, without any programming effort. We show how ParaQooba orchestrates the collaboration of different solvers for joint problem solving by performing an extensive evaluation on benchmarks from QBFEval'22, the most recent QBF competition.
the actual solving in which certain redundancies of a formula are eliminated in a satisfiability-preserving way with the aim to make it easier for the solver [10].
Despite the vivid development in sequential QBF solving, only a few approaches have been presented for parallel and distributed QBF solving [18]. The most recent parallel QBF solvers are HordeQBF [1], which integrates sequential QCDCL-based solvers to obtain a parallel QBF solver, and, more recently, a basic implementation of a QBF module based on the parallel SAT solver ParaCooba [6] with DepQBF as its only backend solver. To the best of our knowledge, besides these two approaches no other parallel QBF solver has recently been presented. The situation in SAT is different: several very powerful parallel and distributed SAT solvers like Mallob [24], Painless [5], and the aforementioned solver ParaCooba [7] have been released. They impressively show the potential of parallel and distributed approaches by solving hard SAT instances, for example from multiplier verification [15].
In this paper, we present ParaQooba, a novel framework for parallel and distributed QBF solving that integrates search-space splitting based on the Divide-and-Conquer paradigm with portfolio solving. Our framework is built on top of the ParaCooba SAT solving framework and extends its basic non-portfolio QBF solving module. ParaQooba reuses most of ParaCooba's modules that provide management and distribution of solver tasks. In addition, we implemented a very generic interface that allows the easy integration of any QBF solver binary into our framework.
Our main contributions are as follows: we present a new flexible framework for parallel and distributed QBF solving that combines D&C search-space splitting with portfolio solving; we show how different QBF solvers that are based on different solving approaches can be integrated seamlessly into our framework; we provide our framework as open-source project; we perform an extensive evaluation that demonstrates the power of our approach on various kinds of benchmarks.
ParaQooba is integrated into ParaCooba's repository and is available on GitHub: https://github.com/maximaximal/paracooba. This paper is structured as follows: First, we introduce some preliminaries required for the rest of the paper in the following section. We continue with related work in section 3. After that, section 4 summarizes the concepts of the ParaCooba solver framework used in our work. Then we introduce how we apply Divide-and-Conquer to solving QBF in section 5. Having introduced the background, we present our portfolio ParaQooba module in detail in section 6 and provide an extensive evaluation in section 7. Finally, we summarize our findings and conclude in section 8.
Preliminaries
We consider QBFs Q.φ in prenex conjunctive normal form (PCNF), where the prefix Q is of the form Q₁x₁ . . . Qₙxₙ with Qᵢ ∈ {∀, ∃}. The matrix φ is a propositional formula over the variables x₁, . . . , xₙ in conjunctive normal form (CNF). A formula in CNF is a conjunction (∧) of clauses. A clause is a disjunction (∨) of literals. A literal is a variable x, a negated variable ¬x, or a (possibly negated) truth constant ⊤ (true) or ⊥ (false). For a literal l, the expression l̄ denotes x if l = ¬x and it denotes ¬x otherwise. We sometimes write a clause as a set of literals and a CNF formula as a set of clauses. Further, it is often convenient to partition the quantifier prefix into quantifier blocks, i.e., maximal sets of consecutive variables with the same quantifier type. For example, for the QBF ∀x₁∀x₂∃y₁∃y₂.φ we also write ∀X∃Y.φ with X = {x₁, x₂} and Y = {y₁, y₂}. With upper-case letters X, Y, . . . (possibly subscripted) we usually denote sets of variables, while with lower-case letters x, y, . . . (also possibly subscripted) we denote variables. If φ is a CNF formula, then φ_{x←t} is the CNF formula obtained from φ by replacing all occurrences of variable x by the truth constant t ∈ {⊤, ⊥}. Depending on the value of t, variable x is either set to true (if t is ⊤) or to false (if t is ⊥). We define the semantics of QBFs as follows: a QBF ∃xQ.φ is true iff Q.φ_{x←⊤} or Q.φ_{x←⊥} is true, a QBF ∀xQ.φ is true iff both Q.φ_{x←⊤} and Q.φ_{x←⊥} are true, and a QBF with an empty prefix is true iff its variable-free matrix evaluates to true. Note that we assume that all variables of a QBF are quantified, i.e., we are considering closed formulas only. Further, we use the standard semantics of conjunction, disjunction, negation, and the truth constants. For example, the QBF ϕ₁ = ∀x∃y.((x ∨ y) ∧ (¬x ∨ ¬y)) is true, while ϕ₂ = ∃y∀x.((x ∨ y) ∧ (¬x ∨ ¬y)) is false. As this small example already shows, the semantics impose an ordering on the variables w.r.t. the prefix. Given a QBF Q.φ, we say that x <_Q y iff x occurs before y in the prefix. If clear from the context, we write x < y. In ϕ₁ we have x < y, while in ϕ₂ we have y < x.
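The recursive semantics can be made concrete with a small (and deliberately naive) Python evaluator that expands both branches of every quantifier. The clause representation with signed integer literals is an assumption of this sketch and is not how ParaQooba represents formulas.

```python
from typing import FrozenSet, List, Tuple

Prefix = List[Tuple[str, int]]        # e.g. [("A", 1), ("E", 2)] for the prefix ∀x1 ∃x2
Matrix = List[FrozenSet[int]]         # clauses as sets of signed literals (-x means ¬x)

def substitute(matrix: Matrix, var: int, value: bool) -> Matrix:
    """phi_{x<-t}: drop satisfied clauses, remove falsified literals."""
    true_lit, false_lit = (var, -var) if value else (-var, var)
    return [frozenset(l for l in c if l != false_lit)
            for c in matrix if true_lit not in c]

def evaluate(prefix: Prefix, matrix: Matrix) -> bool:
    """Naive full expansion of the QBF semantics (assumes a closed PCNF;
    exponential, for illustration only)."""
    if not matrix:                           # all clauses satisfied
        return True
    if any(not c for c in matrix):           # an empty clause was derived
        return False
    q, x = prefix[0]
    branches = (evaluate(prefix[1:], substitute(matrix, x, v)) for v in (True, False))
    return any(branches) if q == "E" else all(branches)

phi = [frozenset({1, 2}), frozenset({-1, -2})]     # (x ∨ y) ∧ (¬x ∨ ¬y) with x = 1, y = 2
print(evaluate([("A", 1), ("E", 2)], phi))         # ∀x∃y: True
print(evaluate([("E", 2), ("A", 1)], phi))         # ∃y∀x: False
```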
Related Work
In practical QBF solving, attempts to parallelize and distribute QBF solvers have a long history (cf. [18] for a survey). Already more than 20 years back, the first distributed QBF solver PQSolve [4] was presented, at a time when QCDCL had not been invented yet. With the advent of QCDCL, several attempts have been made to build parallel QCDCL solvers and to implement knowledge-sharing mechanisms for learned clauses and cubes. One example of such a solver is PAQuBE [16]. Unfortunately, the code of most of the early approaches is not available anymore. Following the success of Cube-and-Conquer-based search-space splitting, the QBF solver MPIDepQBF was presented [14]. While MPIDepQBF does not implement any sophisticated look-ahead mechanisms, it could demonstrate that even without knowledge sharing considerable speedup can be achieved. These results serve as motivation for the approach presented in this paper. Unfortunately, MPIDepQBF is implemented in an older version of OCaml that does not run on recent systems and relies on now deprecated libraries, making a comparison impossible. As indicated by its name, it is tailored around the sequential QBF solver DepQBF [17]. Another recent MPI-based QBF solver is HordeQBF [1], which implements knowledge sharing for QCDCL solvers. It is designed in such a way that it allows the integration of any QCDCL solver. In order to integrate a solver, it requires the solver to implement a certain interface, i.e., programming effort is necessary to add a new solver. To the best of our knowledge, it includes the QBF solver DepQBF only. HordeQBF does not perform search-space splitting; rather, it is a parallel portfolio solver with clause and cube sharing. It diversifies the parallel solver instances by different parameter settings. This is different from sequential portfolio solvers as presented in [12], which select among different solvers based on some properties of the input formula. Overall, a very strong focus on QCDCL-based solvers can be observed for parallel QBF solving frameworks. Because of this, many chances for better solving performance are missed, as nowadays there are many other solvers of orthogonal strength. With ParaQooba we provide a simple way of exploiting the power of the different solving approaches without any integration effort.
ParaCooba
Our novel framework ParaQooba (with q in the middle of its name) builds on top of the SAT solver ParaCooba (with c in the middle of its name). In this section, we describe the parts of ParaCooba that are relevant for our extension of ParaCooba to ParaQooba in the remainder of this work. ParaQooba will be made available publicly during the artifact evaluation under the MIT license, similar to ParaCooba [7,6], which is publicly available on GitHub, also under the MIT license. ParaCooba is a distributed Cube-and-Conquer (C&C) solver that implements a proprietary peer-to-peer based load-balancing protocol. In contrast to standard D&C solvers, the splitting of the search space can be done either upfront, by using a look-ahead solver that produces n cubes, or online during solving, by look-ahead or other heuristics. Amongst other information, the cubes are stored in a binary tree, the solve tree.

Solver module. A solver module manages the sequential solver that is responsible for solving a subproblem. Different solver modules have different code bases, but they generally share common concepts. A solver module implements a parser task, which is created directly after the module was initiated and serves as its starting point. It parses the input formula in its own worker thread and instantiates a solver manager based on the fully parsed formula. The parser task also creates the first solver task as the root of the solve tree.

Solver Tasks. For ParaCooba, solver tasks are paths in the solve tree, with a parser task being used to generate the tree's root. Solver tasks are usually started as children of other tasks, saving references to their parents, with the root solver task being the only exception. A task's depth in the solve tree represents its priority to be worked on: the greater the depth, the more important a task is to be solved locally and the less important it is to be offloaded to other compute nodes by the broker module. Only tasks that were created locally may be distributed.
Broker module. The broker module handles relations between solver tasks and processes their results. While the solver module generates tasks, the broker schedules them based on their priorities (their depths) and offloads them if a different compute node has less load than the current node. A task result is propagated upwards across compute nodes; there is no conceptual difference between locally and remotely solved tasks. The broker module is generic and does not rely on a specific solver module; instead, it provides the environment a solver module works in. It is already provided by ParaCooba and stays the same for different solver modules.
Cube Sources. For generating concrete subproblems, cube sources provide assumption literals to leaf solver tasks. A cube source decides whether a given solver task should split again, based on the current configuration (mainly the splitting depth) and the given formula. Every solver module can implement its own cube source, hence there are different kinds of cube sources for different solver modules. On this basis, very flexible mechanisms for the selection of splitting variables can be implemented, ranging from a simple count of literal occurrences to advanced look-ahead heuristics.

Task Tree. The task tree is built lazily, i.e., only once a leaf is visited, the leaf is either expanded into a sub-tree or solved. We picture such a tree in Figure 1. This tree has a depth of 1, because the path from the tree's root solver task to the leaf solver tasks has a length of 1. Once the active cube source stops further splits from being carried out, the tree's maximum depth is reached. The worker thread currently executing a task then lends a solver instance from the solver manager's central store. Each solver instance is created on-the-fly once per worker thread (normally initialized based on the parser task), which can also happen for multiple worker threads in parallel. After a solver instance has been created, all other tasks solved by the same worker thread use the same solver instance.
Guiding Paths. The cubes that are given to solver instances as assumptions are called guiding paths. They are generated from the path to the leaf being solved. The solver instance then handles the solving internally, blocking the worker thread until either a result is generated or the task is terminated. Results are not returned to parents but are instead handled by the broker module, which then traverses the solve tree upwards as far as possible, based on the results already in the tree. Different kinds of evaluations can be defined on every level using a user-defined assessment function. Once the result has been processed by the broker module, the solver task finishes and the worker thread can take on the next task, based on the next-highest priority. The broker may delete the solver task after it has finished processing, if the result was already used somewhere above it in the tree and no information from the original solver task structure is required anymore. Once the broker module has enough information to solve the root task, the result of the formula has been computed successfully.
Solver Handle. A solver handle wraps instances of a given solver. It must be able to receive an Assume event, directly followed by a Solve event. While processing these events, a correctly working handle must block its calling thread until a result is found. Additionally, it must be fully re-entrant after it finishes processing, so that the next solver task can apply new assumptions. On top of this, a handle must also be able to process a Terminate event, stopping the solver and returning control early to its calling thread. Such a termination event may happen at any time, as it is generated by other solver tasks. This possibility of random terminations was an issue for our extension to ParaQooba, as it complicated the synchronization of all involved threads.
QBF Solver Module. ParaCooba already provided a basic QBF solver module similar to the approach seen in MPIDepQBF. It implemented a QDIMACS parser in a new solver module based on the SAT module. It realizes a simple cube source that returns the variable at the nth position in the prefix, with n being the current depth of a solver task. The solve tree is built using two adapted assessment functions: one for variables quantified with ∀ (requiring all sub-trees to be true) and one for ∃ (requiring at least one sub-tree to be true). The assessment functions also use ParaCooba's cancellation support to terminate unneeded siblings once the obtained results already satisfy the respective subproblem. As backend solver, it exclusively uses DepQBF, which provides an incremental API (which, to the best of our knowledge, no other recent solver provides).
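The following sketch mimics this simple cube source: it splits on the first variables of the prefix, one per tree level, and enumerates the resulting assumption cubes (the guiding paths that would reach the leaf solvers). Names are illustrative; ParaCooba builds these paths lazily inside its solve tree rather than enumerating them up front.

```python
from itertools import product

def guiding_paths(prefix_vars: list[int], max_depth: int):
    """Enumerate the assumption cubes of a split over the first max_depth prefix variables."""
    split_vars = prefix_vars[:max_depth]
    for signs in product((True, False), repeat=len(split_vars)):
        # positive literal if the variable is assumed true, negative literal otherwise
        yield [v if s else -v for v, s in zip(split_vars, signs)]

# A depth-2 split on the prefix of ∀x1 ∀x2 ∃y1 ∃y2 . φ yields four guiding paths.
for cube in guiding_paths([1, 2, 3, 4], max_depth=2):
    print(cube)   # [1, 2], [1, -2], [-1, 2], [-1, -2]
```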
Summary. With its already existing tree-based QBF solving module together with its support for distributed solving, ParaCooba provides a stable basis for building an advanced parallel QBF solver. While the existing QBF module is rather uncompetitive with a few exceptions that indicate its potential, its core infrastructure turned out to be very useful to build our novel framework ParaQooba that offers built-in portfolio support.
The networking support mentioned above enables combining multiple compute nodes by giving each peer a connection to the main node. This is achieved by setting the --known-remote option. With this feature it becomes possible to easily distribute larger problem instances on a cluster or in the cloud.
Architecture of ParaQooba: Combining Divide-and-Conquer and Portfolio Solving
Our framework ParaQooba combines Divide-and-Conquer (D&C) search space splitting with portfolio solving. The key feature of ParaQooba compared to ParaCooba is to allow portfolio solving at different search depths. The idea is illustrated in Figure 1. Both approaches are widely used to realize parallel and distributed SAT and QBF solvers. The D&C approach has been especially successful for hard combinatorial SAT problems [11] in a variant called Cube-and-Conquer (C&C). The C&C approach relies on powerful, but expensive lookahead solvers that heuristically decide which variables shall be considered for splitting. In its original SAT version, ParaCooba builds upon this idea [7].
For QBFs, though, the possible choices for variable selection are more restricted because of the quantifier prefix. In general, only variables from the outermost quantifier block Q₁X may be considered, because otherwise the value of the formula might change. Jordan et al. [14] observed that for QBF, following the sequential order of the variables in the first quantifier block already leads to improvements compared to the sequential implementation of DepQBF. The already existing QBF solver module of ParaCooba (see section 4) relied on this observation: it traverses the prefix of a PCNF and splits each visited leaf into two sub-trees, respecting both universal and existential quantifiers, until a pre-defined maximum depth is reached. Hence, it re-implements the approach of MPIDepQBF in ParaCooba.
Our framework ParaQooba generalizes the previous QBF module of ParaCooba in two ways: the interface is generalized in such a manner that any QBF solver can be easily integrated as a backend solver (without programming effort), and it is now also possible to run several solvers in the leaves, as shown in Figure 2 for one split. Overall, ParaQooba realizes the following approach. The search space is split according to the variable ordering of the prefix until a given depth. Once one of the sub-trees of an existentially quantified variable split is found to be true, the other sibling is terminated; only when both siblings return false does the whole split return false. Universal splits work in a dual manner: the result is true only if both sub-trees are found to be true, and false otherwise. This property of QBF enables efficient termination of sub-tasks.
In ParaQooba, we now also parallelize each solver call over several QBF solvers with orthogonal strategies. Compared to prior approaches [18], we run a portfolio of multiple solvers in the leaves of the solve tree instead of only parallelizing its root. Having just one tree leads to several advantages: we are more flexible and may also call a preprocessor (e.g., Bloqqer) before each solve call, and we only instantiate the tree once, saving memory and enabling early termination of sibling solver tasks.
Implementation
This section describes the extension of the SAT solver ParaCooba (for an overview see section 4) to our QBF solving framework ParaQooba. As ParaCooba was originally not designed for portfolio support, several modifications and extensions were necessary. To this end, we first present the new QBF module of ParaQooba, followed by a discussion of novel search-space pruning facilities.

Fig. 1: Divide-and-Conquer with arbitrary-many levels of splitting and subformulas on the leaves solved by a portfolio of different sequential solvers.
The ParaQooba QBF Module
We generalized the already existing QBF solver handle to become an abstract base class, which now can be either a single solver handle or a portfolio handle. The latter unifies multiple handles into one, emulating a blocking and re-entrant interface. Once a portfolio handle is initialized, it starts one thread per internally wrapped handle. Each such thread implements a small state machine, waiting for events on a shared queue. Once the portfolio handle receives an assumption (a temporary truth assignment of a variable for one solver call), it is forwarded to all internal threads and is worked on by each wrapped solver in parallel. If a portfolio handle was terminated before a solve call was issued, the internal handles would enter an invalid state. To circumvent this situation, an assumption event also directly triggers the internal state machine to continue into the solve state. Once the solve request actually arrives, it is just translated to an empty event, which, after it finished processing, indicates that a result was computed. A termination event is forwarded to the internal solver handles, but is limited to only one event per solve cycle.
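Conceptually, the portfolio handle behaves like the following Python sketch: every wrapped solver works on the same subproblem in its own thread, the first result wins, and the siblings are asked to stop before the handle returns to its caller. The solver interface (a solve call taking a formula, assumptions, and a stop event) is a simplification invented for this illustration, not the actual ParaQooba event API.

```python
import queue
import threading

def solve_portfolio(solvers, formula, assumptions):
    """First-result-wins portfolio over several wrapped solvers (simplified sketch)."""
    results = queue.Queue()
    stop = threading.Event()

    def run(solver):
        res = solver.solve(formula, assumptions, stop)  # solver is expected to poll `stop`
        if res is not None:                             # None means terminated / unknown
            results.put(res)

    threads = [threading.Thread(target=run, args=(s,)) for s in solvers]
    for t in threads:
        t.start()
    winner = results.get()    # block until the first solver reports a result
    stop.set()                # ask the sibling solvers to terminate
    for t in threads:
        t.join()              # wait until every handle is back in a known state
    return winner
```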
The first internal solver handle to compute a result returns and sends a termination event to all sibling solvers. The result is saved, and the portfolio handle waits for all internal handles to be ready to receive the next assumption, i.e., all solvers are returned to a known state. Once every internal handle has reached that state, the portfolio handle finally returns to its calling thread, forwarding the result of the inner handle. Because of thread scheduling and fast solving of trivial subproblems, a result can be forwarded even before the other sibling has been started, letting the broker module complete a task before it has even created both child tasks. This effect led to some issues and had to be mitigated by adding checks for a task already being terminated even though it did not yet run to completion. Because a task will only be scheduled after the initial call to its assessment function, not many such checks were needed.
As many QBF solvers lack APIs, we have to work with their binaries, which generally only read QDIMACS files. For this, we use the QuAPI interfacing library, which adds well-performing assumption-based reasoning support to generic solver binaries [9]. By not relying on specialized modifications of a solver's source code, we are able to plug in generic third-party solvers, completely composable at runtime. Our ParaQooba module provides the --quapisolver parameter, which either directly specifies the leaf solver to be used or automatically generates a portfolio handle to wrap multiple parallel leaf solvers. Note that our approach works for QBFs starting with existential as well as with universal quantification.
In its standard configuration, ParaQooba returns whether a given instance is found to be true or false. When trace output is enabled using -t, it also supports printing the specific solver and the subproblem (including its guiding path) that produced a result. Using this machinery, one obtains an environment to experiment with benchmarks and to see how multiple solvers complement each other on the generated sub-formulas. The trace output is also useful when fully expanding a QBF formula by specifying a tree-depth of -1. While not advised for any real formulas, this was a well-received debugging aid for stress-testing new features. The opposite can also be done by applying a tree-depth of 0. This directly solves the root task, without splitting the formula. This is also how the configuration PQ Portfolio with depth 0 (as discussed in the experimental evaluation below) was executed.
Search-Space Pruning
Preprocessing in the leaves. We modified the QBF preprocessor Bloqqer to allow forwarding output directly into a given solver binary by adding a -p argument. Internally, this writes the complete formula with added assumptions into the standard input of Bloqqer's preprocessing pipeline.
To plug a solver such as Caqe into such a processing chain and then into ParaQooba, one may use our QBF solver module's command line option --quapisolver bloqqer-popen@-p=caqe. Deferring preprocessing until the leaves are solved preserves the original structure of the formula during the split phase. We discuss the effects of this later in subsection 7.4.
Integer-Split Reduction. In many planning and verification encodings, the variables of a quantifier block QX are interpreted as bitvectors representing m nodes of a graph. Assume that n = |X| bits with m ≤ 2^n are used for modeling the states of the graph. Then 2^n − m assignments to X are not relevant, but as a solver is agnostic of this information, it has to consider all assignments.
If m is known to the user, ParaQooba can be called with the option --intsplit (once or multiple times, once for each layer). One integer split is counted as one layer in the task tree, so a tree-depth of two would split another quantifier into two more tasks for each state encoded in the previous integer-based split. To provide an example: setting --intsplit 5 creates 5 child tasks in the task tree, spanning the first ⌈log₂ 5⌉ = 3 Boolean variables from the quantifier prefix. Without an integer-based split, these 3 variables would have to be expanded over 3 layers in the task tree, each inner task being split into two child tasks, resulting in 8 leaves as opposed to the 5 from before. Thus, integer-based splits require fewer intermediate splitting tasks to model the same formula, reducing the work to be done by the load-balancing mechanism in the broker module. These integer splits are efficiently distributed over the network by relying on both the config system and an extended QBF cube source. The cube source always saves the current guiding path, applying new splits, and in turn new assumptions, by appending to that path. The cube source itself is automatically serialized when a task is chosen to be offloaded to another compute node. While the possible savings are large, one has to exert great caution when using this feature, as it might change the semantics of a formula.
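The saving is easy to quantify: an integer-based split over m valid states covers ⌈log₂ m⌉ prefix variables with exactly m leaves, whereas plain binary splitting over the same variables produces 2^⌈log₂ m⌉ leaves. The helper below (illustrative only, not part of ParaQooba) reproduces the example from the text.

```python
import math

def split_costs(m: int) -> tuple[int, int, int]:
    """Return (variables covered, leaves with an integer split, leaves with plain binary splits)."""
    bits = math.ceil(math.log2(m))      # prefix variables spanned by the integer split
    return bits, m, 2 ** bits           # m leaves instead of 2**bits leaves

print(split_costs(5))   # (3, 5, 8): 5 child tasks over 3 variables instead of 8 leaves
```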
Evaluation
In this section, we evaluate ParaQooba on recent benchmarks and compare it to (sequential) state-of-the-art QBF solvers. As sequential backend solvers, we use the latest versions of DepQBF [17] as QCDCL solver, Caqe [23] as clausal-abstraction solver, and RaReQs [13] as recursive abstraction refinement solver. For preprocessing, we use Bloqqer [3] (version 31). All of these solvers were top-ranked in the most recent edition of QBFEval'22 [22]. For our experiments we used the benchmarks of the PCNF track of this competition. The main questions we want to answer with our evaluation are as follows: (1) how does the parallel portfolio-leaf approach of ParaQooba perform in comparison to the individual sequential solvers? (2) how does it perform in comparison to the virtual portfolio solver of the sequential solvers? (3) what is the impact of performing the preprocessing in the leaves instead of on the original input formula?
We ran our experiments on machines with dual-socket 16-core AMD EPYC 7313 processors with 3.7 GHz sustained boost clock speed and 256 GB main memory. Each task was assigned as many physical cores as its setup required, except for tasks with more than 32 concurrent threads, which were exclusively assigned a whole node each so as not to be slowed down by other loads. The effects of over-committing in the case of three concurrent portfolio solvers (48 threads running in parallel with only 32 physical cores available) are discussed below in subsection 7.3. Please note that in this evaluation we do not use the networking features provided by ParaCooba, as we focus on applicability to QBF and not on the already presented scalability of the networking component (for details see [3]).
Overall Performance Comparison
In order to exploit our hardware with 32 physical cores and 64 logical cores in the best possible way, we mainly focus on a splitting depth of four in the following. With this depth, 16 worker threads are generated for each problem, and with three sequential backend solvers, overall 48 processes are started. We call this configuration PQ Portfolio, Depth 4. To understand the impact of splitting, we also consider other depths. With PQ Portfolio, Depth 0 we refer to the configuration in which splitting is disabled. This configuration is particularly interesting because, compared to the virtual best solver (VBS), it reveals the overhead introduced by our framework (see also the discussion below). In order to show the improvements of ParaQooba compared to the QBF module without portfolio solving that was already available in ParaCooba [6], we also included the configuration PQ DepQBF, Depth 4. Figure 3 shows the overall results of our evaluation without preprocessing. Both configurations of ParaQooba, PQ Portfolio, Depth 0 and PQ Portfolio, Depth 4, are considerably better than the single sequential solvers as well as the basic non-portfolio QBF module of ParaCooba solving only with DepQBF (PQ DepQBF, Depth 4). However, compared to the virtual portfolio, 28 fewer instances are solved in total (for an explanation see below). On the positive side, 33 formulas can be solved by our new approach that could not be solved by any sequential solver. The situation changes when preprocessing is applied (cf. Figure 4). Now ParaQooba in configuration PQ Portfolio Preprocessed Formulas, Depth 4 is able to solve the most formulas. It even solves more formulas than the Preprocessed Virtual Portfolio, indicating the potential of our approach.
A detailed analysis is given in Figure 5. By comparing the number of solved instances to the solve time of individual (preprocessed) problem instances, we see a small average speedup when using ParaQooba with depth 4 compared to a virtual portfolio solver in Figure 5a. The more trivial instances tend to be solved quicker using a sequential solver, while the harder to solve instances tend to be solved faster with the Divide-and-Conquer approach of ParaQooba.
Next, we used the preprocessed-leaves functionality introduced in subsection 6.2. Here ParaQooba generates its guiding paths using the original formula and applies Bloqqer only in the leaves of the solve tree. In this configuration, some problem instances take longer to solve than when preprocessing the full formula, while others can be solved quicker. We present these results in Figure 5b. Such a result was expected, as it is conceptually similar to inprocessing [20,25].

(Figure caption fragment: benchmarks from the QBFEval'22 benchmark set; also compared to HordeQBF [1] as the available state-of-the-art parallel QBF solver.)
When considering the formulas that were exclusively solved by ParaQooba, the variant with preprocessing the full formula up-front performed best, followed by the variant with preprocessing in the leaves. These formulas include verification and synthesis benchmarks with 2-3 quantifier alternations as well as many encodings of the game Hex with 13, 15, or 17 quantifier alternations. Table 1 in the appendix lists all 48 instances that were only solved by some variant of ParaQooba. It also lists which variant was the fastest.
Family-Based Analysis
To understand which formula families benefit most from our Divide-and-Conquer solving strategy, we compared the (wall-clock) solve time of ParaQooba to the virtual portfolio solver. We calculated the speedup by dividing the solve time of the sequential solver by the solve time of ParaQooba. The instances with the highest speedups were some reachability queries (up to 18.09), the Hex game planning family (17.64), multipliers (16.46), and the formula_add family (15.16). More detailed results are appended in Table 2. Together with the number of Hex instances only ParaQooba solved (21), this makes Hex game planning the benchmark family with the best overall results in our evaluation. A comparison between ParaQooba and other solvers is shown in Figure 6.
Scalability of our Approach
As already discussed above, using 16 workers leads to over-committing cores when solving with a portfolio of more than two solvers. To quantify this, we performed a scalability experiment with different worker counts. Because the Hex planning benchmarks had the most predictable performance, we focused this experiment on these formulas. Figure 7 shows the scalability graph, where the X-axis has been multiplied by the number of workers used, to visualize the cost of increased CPU time compared to reduced wall-clock solve time. The impact of over-committing CPU cores can be clearly observed in the results of the portfolio with depth 4. This configuration solves more instances than the others but takes longer to solve the first 140 instances, until the curves become more similar again.
Preprocessed Leaves compared to Preprocessed Formulas
We compared preprocessing the whole formula at once using Bloqqer to calling Bloqqer via bloqqer-popen in each leaf after first splitting on the unchanged formula. The first variant modifies the original prefix, including the quantifier ordering. Because the splitting algorithm generates guiding paths by following this quantifier ordering, the two approaches lead to vastly different results. Figure 5c visualizes these differences by plotting both variants against each other. Looking at the specific benchmarks benefiting from the two variants, we often observed that a whole family favors one of the variants. This strongly suggests that adaptive preprocessing and inprocessing techniques could further improve solving performance, even without otherwise changing the solvers themselves.
Lessons Learned
One would expect that for any given problem, parallel portfolio solvers are as fast as the fastest used solver. While this statement is conceptually true, we encountered some formulas where PQ-Portfolio gave comparatively bad results, while a solver alone could solve the same formula quicker or even instantly.
We investigated this in more detail and found several segmentation faults in Caqe and API inconsistencies in DepQBF that were triggered by certain corner-case structures of the generated subproblems (e.g., by enforcing the values of certain variables). We reported these issues to the solver developers and hope to obtain fixes soon. Having these issues fixed would lead to a more performant general solution and to a more robust user experience. In sequential execution of these solvers, we did not encounter any problems on the unmodified competition benchmarks without added unit clauses.
Currently, we adopt the following work-around. Segmentation faults of the sequential solvers are handled in our QBF module using the indirection provided by QuAPI. Once an unrecoverable error occurs in the solver child process, it exits and returns the error up through QuAPI's factory process and into the solver handle. There, such a result is interpreted as Unknown, which is invalid and therefore ignored, letting the portfolio wait for other results. We provide all affected formulas that we found in the artifact submitted alongside this paper.
We also observed that calling a solver via its API might lead to a considerably different behavior than calling a solver from the command line, i.e., different optimizations are activated when calling a solver through its API compared to using the command-line binary. Such behavior can be mitigated by not using the API directly, and instead relying on QuAPI, even if an API would be available. This fixes the issues with DepQBF, which solves some formulas (with assumptions supplied as unit clauses) in under one second if used as a solver binary, but not when applying assumptions through its API. We also supply all found formulas that triggered this issue in the submitted artifact.
Conclusions
We presented ParaQooba, a parallel and distributed QBF solving framework that combines search-space splitting with portfolio solving. We designed the framework in such a way that any sequential QBF solver binary can be easily integrated without any implementation effort. Our experiments demonstrate that this approach, in combination with sequential preprocessing, leads to considerable performance improvements for certain formula families.
With our framework, we provide a stable infrastructure that has the potential for many future extensions. For example, we did not incorporate any advanced splitting heuristics as in modern Cube-and-Conquer solvers. We expect that with more advanced heuristics, combined with adaptive but possibly non-deterministic re-splitting of leaves, even more speedups could be achieved.
In addition to the presented experiments, we also evaluated the novel integer-split feature (cf. subsection 6.2) with the Hex benchmark family. By providing the number of valid game states to ParaQooba, we could increase the splitting depth as well as the number of solved instances. We see much potential in providing encoding-specific or domain-specific knowledge to the solver and will investigate this in future work.
E-Business Curricula and Cybercrime: A Continuing Error of Omission?
The growth of e-business has been accompanied by even faster increases in losses from security breaches, legal problems, and cybercrime. These unnecessary costs inhibit the growth and efficiency of e-business worldwide. Professional education in e-business can help address these problems by providing students with coursework aimed at them. The present study extends research begun in 2007 on course coverage of law, security, and ethics in e-business master’s programs. Data were collected from university web sites in 2010 on 104 e-business master’s curricula worldwide and compared with 2007 data. Results suggest no significant coverage changes and a majority of program curricula still lack courses in law, security, or ethics. Coverage of these topics did not apparently increase from 2007 to 2010 despite the rapid acceleration of cybercrime during the same period. However the change in coverage of topics related to cybercrime varied by region of the world in which e-business degree programs are based.
Introduction
The financial impact of cybercrime on e-business and business in general has escalated dramatically in recent years; it is apparently the fastest growing type of crime (1,2). Businesses all over the world are disrupted, lose sensitive information, and experience productivity declines as a result of cybercrime (3). Results of the 2010/2011 Computer Crime and Security Survey indicate that 67 percent of respondents experienced malware infection (2). According to the Internet Crime Complaint Center 2012 report (4), losses from Internet crime are up 8.3 percent over the previous year. Despite the cost and pervasiveness of cybercrime, a survey of over 9,300 business and technology executives in 128 countries found that businesses' use of key technology safeguards is on the decline and fewer than half have security awareness training programs for employees (5). Zhao and Zhao (6) identified various security vulnerabilities in up to one-third of Fortune 500 corporations' retail e-commerce. It is therefore not surprising that 40 percent of e-commerce customers sampled in a recent study expressed concerns about the security of web sites (7). This may reduce online shopping (8).
The 2011 Cyber Security Watch Survey report (9) concluded that employees need greater skills to address such problems. The Comprehensive National Cybersecurity Initiative (10) of the U.S. government has been expanded by President Obama to include a major goal of increasing education focused on security (Initiative 8). In a review of legal harmonization on cybercrime, Clough (11) argued that education and technological solutions are as important as legal regulation in addressing the challenges of cybercrime. The paper's next section reviews the literature on the extent to which topics aimed at addressing cybercrime have been covered by educational programs intended to prepare professionals for e-business.
Literature Review
Law (12), security (13), and ethics (14) are necessary foundations for effective e-business. These factors can help to promote trust among e-consumers and prevent cybercrime. The Economist Intelligence Unit report in 2009 (15) recognized the importance of law by adding a legal component to its estimates of countries' e-readiness. Blythe (16) tracked the development and spread of electronic signature law around the world to promote the security of Internet transactions. Abyad (17) contended that trust is more critical in e-commerce than in traditional shopping because it involves more uncertainty and risk. E-business security measures can help to build consumer trust (17,18,19). Shahibi and Fakeh (13) reported that technology concerns such as virus protection during online transactions and safe online payment were viewed by consumers as important determinants of e-commerce providers' security.
Ethics in e-business can take the form of privacy policies and clearly stated return policies (17). Maury and Kleiner (20) pointed out the need to build ethical values into e-business to improve consumer confidence. Bruce and Edgington (21) reported that ethics education in MBA programs can influence students' beliefs and behavior. Harris et al. (22) argued that "instruction in ethics should be a core component of the curriculum" (p. 187) and include a particular focus on cybercrime. Together, these studies suggest that ethics education may be basic to achieving the goals of improved security practices, legal compliance, and customer trust. Moreover, coursework in the areas of law, security, and ethics may serve to create an awareness of the dimensions of the cybercrime problem as well as prevent it. Fusilier and Penrod (23) reported that education programs for e-business professionals do not appear to provide sufficient preparation for preventing e-business legal and security problems, even though evidence indicates that law, security, and ethics are necessary for enabling e-business.
Law, Security, and Ethics in e-business Education
Cybercrime is not viewed with the same ethical certainty as other types of crime such as robbery or assault (24). Incorporating cyber ethics into curricula for youth is increasingly seen as a means of prevention (24). McCrohan, Engel, and Harvey (25) demonstrated that it is possible to change the security behaviors of Internet users through training. In spite of this, evidence suggests that e-business degree programs in higher education include: (a) ethics courses in fewer than five percent of curricula (26,27), (b) law and/or security courses in zero to 50 percent of curricula (28,29,30,31), and (c) e-business security courses in only 54 percent of graduate and undergraduate e-business programs (32). A contrary finding was reported by Mechitov, Moshkovich, and Olson (33): 70 percent of the master of science in electronic commerce programs sampled had security and law course(s). However, that result was based on a sample of only 10 programs. In a study of 163 e-business master's programs, Fusilier and Penrod (23) found law, security, and ethics courses offered in 47, 33, and 10 percent of e-business master's programs, respectively. Two studies took a different approach to assessing curriculum content by coding e-business course syllabi (34,35). Results suggested that fewer than half of the syllabi included topics on legal issues, ethics, or privacy. Slightly over half of the syllabi included security as a topic. No previous studies were found that reported the existence of courses on prevention of cybercrime.
Purpose of the Study
It seems that the increase in cybercrime in recent years would necessitate more courses in e-business curricula to address the problem. Previous literature indicates a baseline underrepresentation of courses in e-business curricula focused on addressing cybercrime. Has this changed in recent years? The present study explored the responsiveness of universities to the wave of cybercrime. A repeated-measures analysis was conducted to explore the extent of law, security, and ethics courses in master's degree e-business programs worldwide from 2007 to 2010. The purpose is to investigate whether cybercrime education has improved during this period as cybercrime has dramatically increased. Only e-business programs were studied, which provides a focused test of curricular efforts regarding factors related to cybercrime. This type of program seems likely to include cybercrime prevention coursework because cybercrime is a serious threat to online business and to the consumer trust so critical for e-business to thrive.
e-business Programs
The list of e-business master's programs used in the Fusilier and Penrod (23) study was obtained. The list included e-business master's programs that existed in 2007. The web sites of the schools offering the programs were checked in 2010 to determine whether each program was still being offered at that time. It is not unusual for e-business programs to be revised or canceled (36). Fifty-nine of the 163 programs included in the earlier study were no longer listed on the web sites of the schools that had offered them in 2007. This suggested that 59 programs had been canceled. The present study therefore focused on the remaining 104 programs that had been analyzed by Fusilier and Penrod (23) in 2007 and were still operational in 2010. The list of courses comprising the curriculum of each program was accessed on the web site of the school offering it. Web sites in languages other than English were translated either by the authors or by an Internet translation site.
Course Categories
The online title and description for every course in the 104 e-business curricula were examined to determine whether the course appeared to cover the topic of (1) business law, (2) e-business law, (3) ethics, or (4) security. Each of these topics served as a category for courses. A course was included in a category if its title and description indicated that it mainly covered the topic of the category. For example, if a course mainly covered business law, it would be assigned to the business law category. Courses that did not cover any of the four topics of interest to the present study were not included in the categories. For example, a general finance course would not fit with any of the categories because its main focus was not on any of the four topics of interest.
After assigning courses to categories, the total numbers of courses in each of the four categories could be tabulated for all of the 104 e-business program curricula. The specific procedure used for placing courses into categories was:
1. Business Law: This category was for course titles and descriptions that indicated coverage of the topic of general business law. Examples of titles of courses that were included in this category were "Survey of Business Law" and "Legal Environment of Business." In a few cases, legal issues and ethics were included in the same course (four incidents). Descriptions of these combination courses indicated a predominant emphasis on legal topics. They were therefore coded as business law or e-business law, whichever was most appropriate. No courses were counted in more than one coding category.
2. e-business Law: Courses were assigned to this category if their title and description indicated that they covered law in the context of e-business. Examples of such course titles included "Cyber law" and "e-business Intellectual Property."
3. Ethics: Courses with titles that concerned ethics were coded into this category. "Business Ethics and Society" and "Applied Ethics" were examples of course titles that were included in the ethics category.
4. Security: The security category was for courses that covered topics such as security, cryptography, encryption technology, risk analysis, firewall technology, intrusion detection systems, handling computer viruses, etc. Two example course titles are "Electronic Payment and Security" and "Computer Security for e-commerce."
The procedure of using course titles as a measure of topic coverage is consistent with previous research (23,28,32,30,37). Courses in the present sample were also designated as being required or elective in the e-business curriculum of which they were a part.
Results
Of the 104 e-business master's programs analyzed in the present study, 46 were based in North America (Canada [5], USA [40], and Mexico [1]), 23 in Australia/New Zealand (21 and 2, respectively), 27 in Europe, and 8 in Asia. Table 1 shows the current findings on the representation of law (business or e-business), security, and ethics courses in the master's e-business curricula as well as the previous results of Fusilier and Penrod (23). For each of the three categories, fewer than half of the programs included even one course, required or elective. The 2010 data suggest that 61 percent of the programs did not include a single business law course, while 64 percent did not offer a course dedicated to security. Nearly 90 percent of the programs did not include a course clearly focused on ethics.
Interactions
Interaction terms were computed to explore changes in course offerings across the years by world region. Results of this analysis could address the question of whether the findings differ according to where the programs are located. Programs based in North America were compared to those based in the rest of the world. This approach was used because there were more programs based in North America than on any other continent. Dividing the sample this way allowed more equal sized groups to be formed. Statistically significant interaction terms were detected for required courses in business law, e-business law, and security. Each is explained below.
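As an illustration of how such an interaction could be tested, the sketch below fits a linear model with a year-by-region interaction term using Python's statsmodels on toy data; it is not the study's analysis (which used repeated measures on the actual course counts), and the data-frame values are invented placeholders.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Toy long-format data: one row per program and year with the number of
# required business law courses; values are invented for illustration.
df = pd.DataFrame({
    "courses": [1, 0, 0, 1, 0, 0, 1, 1],
    "year":    ["2007", "2007", "2010", "2010", "2007", "2007", "2010", "2010"],
    "region":  ["NorthAmerica", "Rest", "NorthAmerica", "Rest"] * 2,
})

# A significant year-by-region interaction would indicate that the 2007-to-2010
# change in course offerings differs between North America and the rest of the world.
model = smf.ols("courses ~ C(year) * C(region)", data=df).fit()
print(anova_lm(model, typ=2))
```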
Results for required business law courses are displayed in Table 2 and graphed in Figure 2. A different result can be seen for required e-business law courses in Table 3.
Discussion
Previous research has documented low representation of law, security, and ethics coursework in e-business education from the beginning of the past decade (e.g., 30,31). The present study's data do not indicate improvements in this low-representation pattern: changes in the percentages of programs including law, security, or ethics courses from 2007 to 2010 are not significant. If instruction in these topics is strongly related to awareness, prevention, and remedies for cybercrime, then their absence from university e-business programs might help to explain the growth of cybercrime with the spread of e-business.
The present data suggest that e-business programs in North America increased their e-business law and security course offerings from 2007 to 2010; however, this increase is not enough: the average number of courses per program is still less than 1.0. The decrease in business law courses in the sample's North American programs might be explained by a decision to focus more courses specifically on e-business law. The business law course requirement may have been shifted to e-business law, accounting for the increase in the latter area.
There is a clear need for cyber-security education. The present study found that 63.5 percent of the programs in the 2010 sample included no security course at all, either required or elective. This corroborates the Kim et al. (32) conclusion of a general deficiency in security coursework. The apparent lack of security coverage in degree programs may strongly contribute to the widespread security problems in e-business today. Liu and Mackie (38) provide details for teaching security for e-commerce.
Ethics coursework also appears needed. Evidence suggests that requiring an ethics course in an MBA curriculum contributes to students' views of effective coverage of ethics in the program (21). However 88.5 percent of the programs reviewed in the present study offered no courses obviously devoted to ethics.
E-business has tremendous potential for positive economic impact and appears more recession resistant than retail sales in general. Despite the global economic downturn, e-commerce is currently surging, with an increase of more than 15.2 percent over last year's sales (39). It has shown consistent rapid global growth and improved delivery of goods and services while potentially reducing the impact of doing business on the natural environment (40). However, worldwide cybercrime can undermine these benefits. Technology changes rapidly, and teaching students to have an orientation toward security, ethics, and legal issues will allow them to stay in touch with developments regarding cybercrime. Such instruction may also help e-business professionals to build ethical organizational cultures that can curtail internal criminal activity. It is incumbent on universities to include coursework on this issue. Academic studies recommending revision of curricula, however, do not seem to be enough. More than ten years of research findings and recommendations appear to have had no significant impact on e-business curricula regarding law, security, and ethics coursework.
Fusilier and Penrod (23) used regulatory focus theory to explain the relative neglect of courses in business curricula that focus on loss prevention, such as law, security, and ethics. These authors argued that business schools tend to emphasize coursework that promotes students' ability to attain gains, such as higher revenue or market share. Coursework focused on gains, such as marketing, was found to be more prevalent in business school curricula than the prevention-focused courses of law, security, and ethics. However, ongoing economic uncertainty may impose pressures on business schools to provide more balanced coverage between activities that address attainment of gain and prevention of loss.
University characteristics may also be partially to blame. Financial pressures and bureaucratic, unresponsive organizational structures may result in curricula that do not address the needs of society and students. Armour (41) contended that e-business programs are being designed more to attract enrollment rather than to effectively equip students to cope with the demands of an e-business career. This is consistent with Gumport's (42) observation that many universities are abdicating their roles as social institutions and instead taking on a profit orientation through increased enrollments. Unfortunately, such a role shift might result in neglect of universities' social and educational responsibilities. Universities must address cybercrime as leading social institutions regardless of economic imperatives. Activities consistent with this goal should include instruction in areas that bear on the issue.
Limitations of the Study and Directions for Future Research
The present study did not address the connection of university instruction in law, security, and ethics to actual behaviors directed at preventing and managing cybercrime. Future research could relate curriculum coverage of these topics to graduates' behaviors directed at addressing cybercrime. Another potential shortcoming concerns the extent to which course coding based solely on course titles and descriptions represents actual course content. Studies could investigate course content under a variety of course titles to determine the extent to which law, ethics, and security might be infused into other academic subjects. King et al. (34) and Rezaee et al. (35) have already developed coding procedures for e-commerce syllabi, and these procedures could be applied in studies of course content. Additionally, degree programs in related areas, such as computer science, might be studied to see if topics related to cybercrime are meaningfully covered in such curricula. If so, e-business degree programs could use these practices as a guide for improvement.
Conclusion
Tracking more than 100 e-business programs over a recent three-year period revealed no discernible changes in university education in response to the growing incidence and seriousness of cybercrime. Accordingly, the present findings strongly suggest including more law, security, and ethics courses in e-business master's programs. Generally, it appears that curricula should be more responsive to stakeholder needs and the events that can impact e-business. Educational initiatives to address cybercrime can add to the other management and prevention approaches currently being implemented, such as legal remedies that extend across national borders. Although governmental and administrative limitations often constrain master's programs on the numbers of courses they can include, having even one course in each area would be an improvement for a majority of the programs. Offering courses dedicated to each topic could assure coverage of these vital subjects and also streamline program assessment. | 4,180.4 | 2013-08-01T00:00:00.000 | [
"Law",
"Business",
"Computer Science",
"Education"
] |
Cyborgian Approach of Eco-interaction Design Based on Machine Intelligence and Embodied Experience
The proliferation of digital technology has swelled the amount of time people spend in cyberspace and weakened our sensibility of the physical world. Human beings in this digital era are already cyborgs, as smart devices have become an integral part of our lives. Imagining a future where humans totally give up mobile phones and embrace nature is neither realistic nor reasonable. What we should aim to explore is the opportunities and capabilities of digital technology in terms of fighting against its own negative effect, cyber addiction, and working as a catalyst that re-embeds humans into the outdoor world. Cyborgian systems behave through intelligence embedded in the environment and discrete wearable devices for humans. In this way, the cyborgian approach enables designers to take advantage of digital technologies to achieve two objectives: one is to improve the quality of the environment by enhancing our understanding of non-human creatures; the other is to encourage a proper level of human participation without disturbing the eco-balance. Finally, this paper proposes a cyborgian eco-interaction design model which combines top-down and bottom-up logics and is organized by the Internet of Things, so as to provide a possible solution to the concern that technologies are isolating human and nature.
"Cyborg" is a rejection of human-machine dualism by obscuring the rigid boundary and advocating the man-machine symbiosis.
By the end of the 1980s, the concept of the "cyborg" had already become widespread in science fiction such as Ghost in the Shell and Blade Runner. The world of tomorrow predicted in these futurist works, with a highly hybrid cyber-organ relationship as an essential feature, is becoming today's reality.
Attempts and practices of cyborg individuals have been conducted in various areas, starting at a relatively basic and safe level, as substitutes for lost or damaged body parts. Then, continuous technological breakthroughs propelled the development and acceptance of "enhancement prosthetics"; for example, British artist Neil Harbisson has had a cyborg antenna implanted in his head that allows him to extend his perception of colors beyond the human visual spectrum [17]. However, implanted cyborgs remain a controversial issue. Opponents are concerned that this technology would aggravate social polarization and impair social order and ethics.
In a broad sense, implant surgery is not necessary for becoming a cyborg. Everyone holding a smartphone is a cyborg, because the external apparatus has become an integral part of us, a cyber-extension of our organic corporeity. Embedded and external devices share the same purpose: to enhance the perception, communication, interconnection and control of everything (Fig. 1).
The Importance of the Presence and the Bodily Experience
Embodied cognition is a promising theory that has been developing rapidly since the "postcognitivism revolution" [3] in the middle of the last century. Embodied cognition challenges traditional theories such as Connectionism and Computationalism, which hold the notions of Disembodiment and Mind-body Dualism [25]. It opens a new chapter of cognitive psychology with emphasis on the indispensability of the human body in the process of cognition.
In 1945, the phenomenological philosopher Maurice Merleau-Ponty claimed that "the body is our general medium for having a world" [14]. In 1979, James J. Gibson, who fathered the school of ecological psychology, expressed a similar idea: that we acquire information in the environment through our active body [8]. It is widely acknowledged that emotions influence behaviours, while according to the theory of embodied cognition, the reverse also holds (Fig. 2).
"A designer and a cognitive scientist seem like an unlikely pair … both are trying to decode how humans interact with the world" [13]. Relational art/aesthetics is a mode or tendency in fine art practice with embodied cognition as one of its theoretical foundations. It is defined as a set of artistic practices which depart from the concerns of human relations and social context, instead of independent and private spaces [2]. Relational art values the encounter between an audience and an artwork, and the encounters between people.
On the basis of these two theories, a question arises: since corporeity is the main source of knowing the world and others, when our body is augmented by advancing cyber technologies, how would human cognitive abilities and experiences be improved with the help of machine intelligence, and how would this be beneficial for the construction of a user-experience-oriented interactive environment? (Fig. 3). To encourage human-nature interaction, one way is to make plants more sensible, intelligent and interactive. This chapter researches how the cyborgian approach influences the stages of a plant's behaviour, namely sensing (input), thinking (algorithm and feedback) and actuating (output).
How They Sense
For a long time, we have underestimated plants' sensing abilities and their feelings. Plants "are just very slow animals" [20]. In addition to the five senses that animals have, scientists believe plants have at least 15 other ways to feel the world. For example, they can perceive and calculate gravity, electromagnetic fields, moisture, and chemical substances [9]. Plants' sensing mechanisms are already talented and exquisite; the question, therefore, is how to make use of their powerful but implicit sensing capability for interactive functions. We focus on two aspects: the input and the output of the sensing process.
The former means making them more sensitive to human behavioural inputs, which are difficult for plants to understand without a cyborgian medium. It works by exaggerating voice or motion signals or interpreting them into another type of signal that plants are more sensitive to, such as electronic signals (Fig. 4).
The latter means externalizing and visualizing plants' internal biological processes. With the aid of cyber-devices, plants are augmented as sensors by visualizing their invisible sensing process. In the case of Cyborg Botany by the MIT Media Lab, electronics are transferred into the plant. Internal wires are connected to sampling instrumentation, turning a plant into an inconspicuous sensor that detects motion and more [18]. These cyborg plant sensors have been applied in many interesting scenarios; for example, when a motion-sensitive rosebush senses a cat running out of the door, it sends an alert to its owner's computer.
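As a purely illustrative sketch of the alerting logic such a plant-as-sensor scenario implies (the signal source, thresholds and notification step are hypothetical placeholders, not the published implementation):

```python
# Hypothetical sketch: treating a plant's electrical signal as a motion detector.
# The sampling source and the notification step are illustrative placeholders only.

def detect_motion(window, baseline, threshold=3.0):
    """Flag motion when the leaf potential deviates strongly from its resting baseline."""
    return any(abs(sample - baseline) > threshold for sample in window)

def monitor(signal_windows, baseline=0.0):
    for window in signal_windows:               # each window: a list of sampled voltages
        if detect_motion(window, baseline):
            print("ALERT: motion detected near the plant")   # stand-in for notifying the owner

# A quiet window followed by a disturbed one
monitor([[0.1, 0.2, 0.1], [0.1, 4.2, 3.9]])
```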
How They Think
Recent research on plant intelligence has proved that plants are capable of certain levels of intelligent behaviours including communication, learning and memory.
Plants, trees in particular, communicate with each other relying on underground fungal networks [11]. Cyborgian plants, gifted with cyber characteristics, become nodes in a communication network that is more efficient than the biological one. These two forms of networks work cooperatively, sharing a large dataset of information and thus enhancing the adaptiveness of plants. Cyborgian plants resonate and are able to make predictions because they are in a network where they can talk to neighbours and understand what is happening at the other end of the network. By simulating the process in the backend, they can prepare themselves for upcoming changes.
There is another project by the MIT Media Lab: a pair of coupled plants that can feel and respond to each other even when they are far apart. When one plant is gently poked, the other wiggles [18]. The cyber network enables them to "say hello" across distance. In the Resonating Forest at Jewel Changi Airport by teamLab, when someone passes by a tree, its light color changes and a new tone resonates out. This information is transmitted to nearby plants, spreading continuously as if the plants were discussing the presence and locations of people.
As for learning and memory, a team from the University of Western Australia has shown that plants can build up classical conditioning through training and form memories through experience [7]. The cyborgian approach imparts machine intelligence to plants, featuring an incredible capability for data processing and memory storage. By mass-analysing human behaviours as training inputs, cyborg plants learn about users' habits and preferences so as to better indulge them in nature.
How They Actuate
Plants are always sensing, thinking and responding to our voices and movements, but in an extremely subtle way. Now that we understand their sensing and thinking mechanisms, we are able to guide and supervise their actuation by controlling what they sense and how they think.
By applying external stimuli, such as changing light intensity and direction, humans are able to guide the growth and movement of plants by taking advantage of phototaxis or other biological tropisms. But the growth of plants is too inconspicuous to be noticed over a short time; therefore, their responses are usually transformed into other forms and visualized through external devices which can be seen as their extended cyber-body. For example, the MIT Media Lab created a robotic plant called Elowan. When there is light nearby, electronic signals within the leaves are detected by embedded wires, and the wheels of the robotic planter are triggered to move autonomously toward the light [19]. In the case of Breeze, an ambient robot inhabits the body of a Japanese maple, allowing her to sense and reach out to nearby people [5]. One difference between the two cases is that the bio-corporeity of Elowan itself acts as a sensor, while in the project Breeze there are embedded sensors around it (Fig. 5). Either way, all roads lead to the same purpose: to make human behaviours sensible for plants while making their responses visible to us.
In summary, the cyborgian approach would be instrumental in the construction of a more interactive environment because: 1. Rather than augmenting the way plants sense, it augments the way we understand and benefit from their gifted sensing abilities for interactive functions. 2. On the basis of big data flowing in the cyber-network, it augments plant intelligence, including communication, learning and memory performance, thus improving the capability for cross-species interaction. 3. It guides the growth and movement of plants by controlling input stimuli, and even guides the process of evolution to a certain extent, towards the objective of the most appropriate level of interactivity.
How Does the Cyborgian Approach Encourage Human Participation?
To encourage human-nature interaction, the other way is to improve the bodily interaction experience. This chapter explores how a cyborgian system could play a valuable role within the framework of an experience hierarchy and assessment matrix (Fig. 6).
Experience Level
Rational Level. Communication is the most basic and rational need in the interaction with nature. Embedded intelligence makes plants cyborgs, while wearable devices arm humans as cyborgs, so we are able to communicate in a common language: binary machine language. Take a self-caring planting pot linked to a smartphone as an example: embedded technology gives plants the ability to respond to environmental changes in order to better survive, and enables humans to better understand their living conditions and feelings.
Sensational Level. "Pleasure" and "the sense of alien" are the two dimensions used to describe sensational stimuli [23]. Digital technologies expand the concept of reality. They allow us to explore reality beyond human limits, to see the familiar world from an alien perspective, which can be quite interesting and motivating. For example, the VR project Marshmallow Laser Feast by B. C. Steel provides the alien experience of discovering forests through the eyes of different animals.
Emotional Level. What eco-interaction aims to achieve is not only sensational pleasure but, more importantly, emotional bonding and deep reflection. For example, in the project Talking Tree by EOS magazine, anthropomorphic plants post their living conditions and feelings online. The purpose is to build an empathetic bond which lasts longer and goes deeper than purely rational or sensational memories, and which inspires reflection on our relationship with nature. This "reflective level of emotions", which Donald Norman described as the supreme level, can be achieved through aesthetic immersion as well (Fig. 7). Steel hopes to bridge the gap between science and art. For teamLab, the concept and purpose behind the aesthetic enjoyment is to cherish the balance between technology and nature and to create a global beauty culture.
Experience Assessment
"Interaction design is all about user experience" [1]. Experience of all users (not just human) should be the key assessment indicators for a design project. Cyborgian approach would be advantageous in human-nature bonding process. Its application and efficacy are investigated from four aspects: maneuverability, immersibility, joviality, and attractivity.
Maneuverability. It focuses on the quality and convenience of the operating process, or in other words, the pleasure of the behavioural level [16]. Maneuverability includes fluency and freedom. Fluency concerns whether users operate naturally and smoothly with a clear logic. Freedom refers to the range and types of interactive inputs that users can play with. For example, in the interactive musical plants project Akousmaflore, by applying multi-touch interaction technology, every leaf becomes a tiny instrument. In the project Forest Entrance by teamLab, users get involved by walking around, as the project adopts somatosensory devices to capture and respond to users' position changes. The interaction mode in Forest Entrance is more fluent, but with less freedom of control, as there is only one type of corresponding input and output. The development of human-machine interfaces (HMI) is a process of increasing naturalness, fluency and fitness in line with human cognitive and usage habits. Currently, the somatosensory device is one of the most cutting-edge HMI interfaces. Its innovation lies in the realization of contactless operation, which completely liberates people's hands and fingers and enables participants to freely control machines with their whole bodies. This development brings new opportunities for the interaction between man and nature. Interaction projects that respond to human bodily behaviour are more welcome because it is the presence of humans in the outdoor environment that really matters.
Immersibility. Immersion is the engagement level of interactions. Immersibility is related to sensational experience and mental state. Immersive quality can be improved by enhancing multi-sensory experiences and by balancing the relationship between "challenge" and "skill" based on feedback.
Sensationally. Although 83% of human perception of the outside world comes from the visual sense, the importance of the other senses should not be neglected. For perceiving the world, the whole is greater than the sum of its parts [15]. High immersibility can be achieved not only by ensuring a wide scope of the environment with minimal visual distractions, but also by multi-sense activation, including shape, smell, texture, softness and roughness, with the help of perceptual augmentation devices.
Emotionally. According to Csikszentmihalyi, there are eight mental states in terms of the relationship between skill and challenge level: flow, control, relaxation, boredom, apathy, worry, anxiety and arousal (Fig. 10). These relationships can be constantly evaluated and adjusted with the aid of real-time concentration analysis and massive personal feedback, so as to reach the "Flow State" or "Peak Experience" [6], namely a state in which fulfilment and enjoyment come out of high concentration and total indulgence [12] (Fig. 9).
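A coarse sketch of how a system might place a user among these states from estimated skill and challenge levels; the 2x2 simplification (collapsing the eight states to four) and the 0.5 threshold are illustrative assumptions, not Csikszentmihalyi's full model:

```python
# Coarse 2x2 simplification of the skill-challenge model; thresholds are illustrative.
def mental_state(skill, challenge, mid=0.5):
    if challenge >= mid and skill >= mid:
        return "flow"          # high challenge met by high skill
    if challenge >= mid:
        return "anxiety"       # high challenge, low skill
    if skill >= mid:
        return "relaxation"    # low challenge, high skill
    return "apathy"            # low challenge, low skill

print(mental_state(skill=0.8, challenge=0.8))   # -> flow
print(mental_state(skill=0.2, challenge=0.9))   # -> anxiety: the system should lower difficulty
```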
Joviality. The assessment of joviality is a composite index of all positive emotions in an interaction process. Electronic devices nowadays are not only able to monitor physical data such as heart rate and running pace, but are also capable of sensing and recognizing mental states (Fig. 8). Smart treadmills automatically adjust their slope and speed in accordance with users' physical conditions. Similarly, an intelligent interaction system can adjust in real time its difficulty level, surprise level, etc., according to users' emotional conditions in order to ensure a constantly positive experience. In this way, emotion is no longer just the output or by-product of an interaction process. It participates in the feedback loop and becomes an input parameter that affects the final output. There are already biometric sensors that can recognize micro-emotions after deep learning. In the case of the Emotional Design Language Orb, various emotions are visualized by different colors and forms of the orb (Fig. 11).
Attractivity. It can be interpreted as "seductiveness" and "replayability." It evaluates the appeal for first-time users as well as the capability of encouraging re-participation without losing fun. Attractivity derives from beauty and creativity, which can be enhanced by the cyborgian approach. For instance, in the project Bio-responsive Garden, botanicals were fitted with microelectronics to make them physically animated. Creativity emerges from the unusual combination of "plants" + "dance". Another example is the Forest Entrance mentioned before. The resonating artwork is rendered in real time in response to human behaviours. It is neither pre-recorded nor imagery on loop [22]. Its attractiveness emerges from the non-repeatable organic aesthetics empowered by machine intelligence.
In summary, cyborgian approach would be instrumental as it allows us to better understand the physical world, improve bodily experience and add more joy and attractiveness to the interaction with nature. The analysis above provides anchor points for machine intelligence to intervene, facilitate and encourage human-nature interaction by making use of its sensing and computational capabilities.
Design an Interactive Outdoor Environment
As a matter of fact, many of the projects mentioned above are installed indoors. What hinders outdoor interactions? This chapter aims to figure out current constraints of outdoor eco-interaction projects and the opportunities provided by cyborgian technologies.
Challenges and Opportunities of Outdoor Interaction
One of the biggest technical obstacles of outdoor interactions is the more complex environment compared with indoor ones, which calls for better performance of input and output equipment. To address this problem, personal wearable devices which equip users as cyborgs would be a key tool. With fixed sensors only, it would be difficult to accurately capture motions and track positions of all users. Smart wearable devices greatly increase the accuracy of somatosensory input. Subtle movements and gestures are likely to be recognized, thus improving operational experience.
Besides, personal wearable devices increase the feasibility of multisensory enhancement in outdoor scenarios. As the name suggests, multisensory enhancement technologies aim to create alluring sensational experiences with the aid of various equipment, such as stereos, blowers, perfume machines, tactile gloves, etc. It is like upgrading a 2D movie into a 3D movie (visually enhanced only) or even a 4D movie (multisensorially enhanced). People need more attractive and comprehensive sensory experiences to indulge themselves in a distracting outdoor environment.
Another challenge, as well as a principle, of outdoor eco-interaction is to minimize the impact on voiceless ecological entities. The cyborgian approach would be helpful here because the living conditions of plants can always be monitored, adjusted and guaranteed by their cyber apparatus, and our cyber extensions work as the medium for harmless virtual interactions that bring about genuine feelings. For plants, behavioural responses to human inputs may be a negative interference. That is why, in general, designers apply non-material digital technologies such as light and sound which have no physical impact on the environment, turning nature into living art without harming it. However, due to the requirements on light and sound conditions, the suitable time for outdoor interaction is limited and unpredictable. The application of AR devices eliminates time limits by mixing interactions that happen in cyberspace with reality in physical space, under suitable lighting and volume settings.
Additionally, distributed wearable devices enable mass-customized multi-user interactions, which encourage not only physical but also social activities in the natural environment. Although there may be hundreds of users involved simultaneously, interactions can be tailor-made according to each individual's physical and mental state. Even the viewpoint can be set to be unique, for example, experiencing the world from the perspective of a fish or a bird, which brings alien experiences that can be attractive for potential users.
One weakness of the cyborgian approach is that current wearable devices are not light or user-friendly enough to be totally ignored while in use, which may hinder users' movements and experience. There is always a gap between the real action (e.g., pressing a button) and the action executed in the virtual world (e.g., picking up something). Similarly, the rationality of the virtual viewpoint should be ensured. The sense of inconsistency and the learning process keep reminding users of the existence of a physical interface. Technologies are being optimized to reduce such inconsistencies of viewpoints and actions and to minimize the perceptibility of external devices.
In summary, the cyborgian approach, supplemented by multi-sensory enhancement technology, would work as a powerful toolkit for the challenges of outdoor interactions, with enhanced immersibility, customized experience and minimal impact on the ecology.
A Cyborgian Eco-interaction Design Model
Building on the research and discussions above, this paper proposes a cyborgian eco-interaction design model that depends on a distributed network of cyborgian intelligence.
On the one hand, non-human users are indispensable parts of an eco-interaction project, which means the ecological inputs that represent their status and interaction experience should not be neglected. Environmental data and plant living conditions that are monitored and evaluated by ubiquitous sensors have a decisive impact on the system, because the health and balance of the ecosystem is the premise of human interaction.
On the other hand, for human users, their behavioural inputs, including motions, gestures and voice commands, are recognised by wearable devices and stationary sensors. Their interactive experiences are also essential input parameters in an intelligent interactive system. As discussed before, users' feelings and experiences are no longer by-products of the interaction; rather, they become inputs that trigger self-tuning mechanisms based on feedback loops.
The algorithms at the back end of the interface are not fixed; instead, they are determined by user experience. Users' experiences are evaluated from the four aspects mentioned before. The system automatically customises its difficulty and intensity level according to individuals' feedback in order to help them reach and maintain the "flow state". If the system notices that relying on the bottom-up self-tuning process alone is not efficient enough to enhance user experiences, then human designers are required to get involved in order to modify the interactive design in a top-down way. This procedure is similar to what Kevin Kelly called the "control of control" [10]. In this way, the system provides customized and optimized experiences which in turn increase attractivity (Fig. 12). In sum, what distinguishes this model from the others is that it concerns itself more with the user experience at all levels for all ecological entities. The capability of enhancing user experience by applying a synesthetic approach to reality, and the capability of mass-customising and self-evolving based on sensed user experience, are both gifted by the cyborgian methodology.
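A minimal sketch of the bottom-up self-tuning loop described above; the scalar experience score, the hill-climbing step and the hand-over condition are all simplifying assumptions rather than the model's definitive form:

```python
# Sketch of the bottom-up self-tuning loop: user experience feeds back into difficulty.
# experience_score is a stand-in for the aggregated four-aspect assessment.

def tune(difficulty, experience_score, step=0.1, max_iter=20):
    """Greedy hill-climb on difficulty to maximise the measured experience."""
    best = experience_score(difficulty)
    for _ in range(max_iter):
        for candidate in (difficulty + step, difficulty - step):
            if experience_score(candidate) > best:
                best, difficulty = experience_score(candidate), candidate
                break
        else:
            # No bottom-up improvement found: hand over to top-down redesign by human designers.
            return difficulty, best
    return difficulty, best

# Toy model: this hypothetical user enjoys moderate difficulty the most.
print(tune(0.1, lambda d: 1.0 - abs(d - 0.5)))   # converges towards difficulty ~0.5
```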
Conclusion and Outlook
The argument throughout this paper is to explore new opportunities and capabilities that are imparted by advancing digital technologies when it comes to establishing an empathetic relationship with the natural environment that we inhabit. The cyborgian theory supports the skeleton of the whole paper, while the embodied cognition theory lays a solid foundation.
The cyborgian approach is a hybrid of machine intelligence and biological intelligence, as well as a mixture of top-down and bottom-up design methodologies. It allows designers to take advantage of both methods: the ability to secure efficiency and to manage complexity.
Embedded intelligence turns plants and other ecological entities into cyborgs while wearable devices transform human into cyborgs as well. Input data from human and nonhuman users are collected from the interface, processed at the backend and the output can be given back to any interface in this network. In this way human and environment are communicating and interacting in "the third space" [21] which is organized by the network of interface and augmented by digital technologies.
It is revealed by this paper that cyborgian approach would be instrumental in the construction of a more successful human-nature interactive system. This can be justified by the facts that cyborgian intelligence is gifted at: facilitating the whole process of cross-species interaction, namely sensing, processing, actuation and feedback; breaking the limitations of outdoor interaction; improving experiences of all participants which determines the quality of interaction.
The cyborgian eco-interaction design model this paper proposed may be far from satisfactory, but meaningfully, it is expected to work as a minnow that is thrown out to catch a whale, to broach the subject for further concerns and investigations.
Opportunities always come along with threats. Advancing technology is facilitating our life while challenging our relationship with nature. Insightful designers should be keenly aware of the pros and cons of technology, trying to come up with a new paradigm to rehabilitate this relationship, which would then be more than a design paradigm, but even become a new lifestyle in the upcoming digital future.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. | 5,748.4 | 2021-01-01T00:00:00.000 | [
"Computer Science",
"Environmental Science"
] |
Mendelian randomization: genetic anchors for causal inference in epidemiological studies
Observational epidemiological studies are prone to confounding, reverse causation and various biases and have generated findings that have proved to be unreliable indicators of the causal effects of modifiable exposures on disease outcomes. Mendelian randomization (MR) is a method that utilizes genetic variants that are robustly associated with such modifiable exposures to generate more reliable evidence regarding which interventions should produce health benefits. The approach is being widely applied, and various ways to strengthen inference given the known potential limitations of MR are now available. Developments of MR, including two-sample MR, bidirectional MR, network MR, two-step MR, factorial MR and multiphenotype MR, are outlined in this review. The integration of genetic information into population-based epidemiological studies presents translational opportunities, which capitalize on the investment in genomic discovery research.
INTRODUCTION
Many examples exist of apparently robust observational associations between behavioural, pharmacological or physiological measures and disease risk which, when subjected to randomized controlled trials (RCTs), do not deliver the anticipated health benefits (1). These include many nutritional factors (e.g. several vitamins), pharmacological agents (e.g. hormone replacement therapy) and circulating biomarkers (e.g. HDL cholesterol) (1-4). Confounding, reverse causation and various biases can generate the associations, and even with careful study design and statistical adjustment, incorrect causal inference is possible (1,5). The recognition of these problematic aspects of epidemiological investigation has led to the application of a series of methods aimed at improving causal inference (6,7). A successful approach is to use genetic variants as exposure indicators that are not subject to the influences that vitiate conventional study designs, an approach known as Mendelian randomization (MR) (8,9). We will not repeat the many detailed reviews of MR that now exist (8,10-15), nor summarize the hundreds of empirical studies applying the technique to a wide range of exposures and disease outcomes; rather, after a brief summary of the foundational principles, we will outline recent developments and potential future directions of the field.
BASIC PRINCIPLES OF MENDELIAN RANDOMIZATION
Inferring the causal direction between correlated variables is a pervasive issue in biology that simple regression analysis cannot answer. The association between two variables could reflect a causal relationship, but the direction of causality (e.g. A causing B or B causing A) is not clear. Furthermore, there may be unobserved factors that influence both variables and lead to their association (confounding) (Fig. 1). In the latter scenario, the effect of the independent variable on the outcome may be zero. Even if the hypothesized causal direction were correctly specified, if the independent variable is correlated with some unobserved or imprecisely measured confounders then the estimate of its causal effect could be biased. Mendelian randomization is a technique aimed at unbiased detection of causal effects and, where possible, estimation of their magnitude.
Suppose that trait A and trait B are correlated; it follows that if this correlation arises because A is causing B, then any variable that influences trait A should also influence trait B. The key to inferring a causal relationship between A and B is to identify an 'instrument' that is reliably associated with A in a known direction. Biologists are in a privileged position in this regard because virtually all traits of interest are at least partially influenced by genetic effects, and genetic effects can serve as excellent instruments for a number of reasons. First, in a genetic association, the direction of causation is from the genetic polymorphism to the trait of interest, and not vice versa. Second, conventionally measured environmental exposures are often associated with a wide range of behavioural, social and physiological factors that confound associations with outcomes (16). Genetic variants, on the other hand, can serve as unconfounded indicators of particular trait values (16). Third, genetic variants and their effects are subject to relatively little measurement error or bias. Fourth, the actual causal variant for the trait is not required; a marker in linkage disequilibrium (LD) with the causal variant will satisfy the conditions for MR. Finally, in the era of genome-wide association studies (GWAS) and high-throughput genomic technologies, genetic data are routinely available in large, well-phenotyped studies.
ANALOGY BETWEEN MENDELIAN RANDOMIZATION AND RANDOMIZED CONTROLLED TRIALS
An intuitive way to understand how MR can be used to infer causality is by analogy with RCTs. In RCTs, the study participants are randomly allocated to one or another treatment, avoiding potential confounding between treatment and outcome, and causal inference is unambiguous. MR creates a similar scenario for us. Suppose a particular allele is robustly related to trait A, and trait A causes trait B. Alleles are largely passed from parents to offspring independently of environment, and people who inherit the allele are, in effect, being assigned a higher on-average dosage of trait A, whereas those who do not inherit the allele are assigned a lower on-average dosage. As in RCTs, groups defined by genotype will experience an on-average difference in exposure to trait A, whilst not differing with respect to confounding factors. Thus, a by-genotype analysis is equivalent to an intention-to-treat analysis in an RCT, in which individuals are analysed according to the group they were randomized into, independently of whether they complied with the treatment regimen or not. This form of analysis ensures that confounding is not reintroduced through reclassification of exposure status after randomization.
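A small simulated illustration of this analogy (the effect sizes are invented for the sketch, not drawn from any study): grouping individuals by genotype produces groups that differ in exposure but, like the arms of an RCT, remain balanced on the confounder.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
genotype = rng.binomial(2, 0.3, n)                 # allele count, assigned independently of lifestyle
confounder = rng.normal(size=n)                    # e.g. socioeconomic position
exposure = 0.5 * genotype + confounder + rng.normal(size=n)
outcome = 0.3 * exposure + confounder + rng.normal(size=n)   # true causal effect of exposure = 0.3

carriers = genotype > 0
print("exposure difference by genotype:  ",
      round(exposure[carriers].mean() - exposure[~carriers].mean(), 3))
print("confounder difference by genotype:",
      round(confounder[carriers].mean() - confounder[~carriers].mean(), 3))
```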
Empirical evidence that there is a general lack of confounding of genetic variants with factors that confound exposures in conventional observational epidemiological studies is extensive (16,17), although it is important to take appropriate measures to avoid introducing confounding through population stratification.
To date, MR has been successfully applied to a wide range of observational associations, covering applications to the causal effects of biomarkers on disease, understanding the correlation between physiological measures, estimating the causal effects of various behaviours and specifying maternal intrauterine influences (Table 1). In certain circumstances, it is possible to perform an instrumental variable analysis to obtain an estimate of the magnitude of the causal effect of the exposure of interest on the outcome under investigation, and we outline this in Box 1. There are a number of limitations to MR that should be considered when using this approach (Table 2), which have been discussed at length elsewhere (8,10-15). Pleiotropy (Box 2) is particularly problematic in this regard. The remainder of this review will outline recent developments in MR, some of which explicitly seek to address these limitations.
Use of multiple variants to increase power and test assumptions
Ideally, MR is performed using a single variant whose biological effect on the trait for which it is an instrument is understood. However, even this situation is subject to a few potential
limitations, which can be partially mitigated by increasing the number of variants used as instruments.
First, the genetic effect may not be particularly large, resulting in a weak instrument and the requirement for very large sample sizes. By increasing the number of variants, the proportion of variance explained by the instrument increases, thus improving precision in two-stage least-squares regression (Box 1) (50). Combining these into a weighted allele score is generally the optimal approach in this context (51).
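A minimal sketch of a weighted allele score, assuming per-individual SNP dosages and externally estimated per-allele effect sizes; the numbers are illustrative only:

```python
import numpy as np

# Rows: individuals; columns: SNPs (0, 1 or 2 copies of the exposure-increasing allele)
dosages = np.array([[0, 1, 2],
                    [1, 1, 0],
                    [2, 0, 1]])
weights = np.array([0.10, 0.05, 0.20])   # illustrative per-allele effects on the exposure (e.g. from GWAS)

allele_score = dosages @ weights          # one score per individual, used as a single instrument
print(allele_score)                       # [0.45 0.15 0.4 ]
```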
Second, the variant could be pleiotropic or in LD with a variant that affects the outcome, violating the conditions for being a valid instrument. This potential caveat can be interrogated by using multiple instruments. For example, it would be increasingly improbable that two, three or more independent instruments all result in the same conclusion, owing to perfectly balancing pleiotropic effects on both traits. For a convincing example demonstrating the causal influence of low-density lipoprotein cholesterol (LDL-C) on coronary heart disease (CHD), see Figure 2, where nine polymorphisms from six genes independently lead to very similar predicted causal effects of LDL-C, using instrumental variables analyses (52).
Third, multiple variants can also provide some evidence regarding the problematic issue of the complexity of associations in MR studies (see Box 3). If multiple variants that relate to a particular intermediate phenotype through different mechanisms all relate to the disease outcome in the manner predicted by their association with the intermediate phenotype (as in the case of multiple variants related to LDL-C and CHD, discussed above), then a causal interpretation is strengthened.
[Table 1 excerpt: Higher BMI increases the risk of symptomatic gallstone disease (27). Maternal influences (corrected for genetic correlation between mother and child): the observational finding that moderate maternal alcohol intake is associated with more favourable school performance is due to confounding, and the causal association is in the opposite direction (28); fat mass in children aged 9-11 is not strongly influenced by the BMI of mothers during pregnancy (29).]
Box 1. Application of instrumental variable approaches to MR studies
Conventional instrumental variable (IV) analysis requires that the instruments are valid, and in order to be valid, they must meet three conditions. An instrument for trait A must be: 1. reliably associated with trait A; 2. associated with the outcome (trait B) only through trait A and 3. independent of unobserved confounders that influence traits A and B after conditioning on observed confounders. In MR, condition (1) is straightforward to test, but (2) and (3) cannot be established unequivocally. For example, if the variant is pleiotropic (see Box 2), or if it is in LD with a genetic variant that influences the outcome through a different mechanism, this can lead to erroneous causal estimation. If the above-mentioned conditions are met, then the unbiased estimate of the effect of trait A on the outcome, B, can be made using two-stage least-squares (2SLS) regression.
In stage 1, a predictor for A is constructed from its instrument, and in stage 2, the effect of the predictor for A on the outcome B is estimated. The intuition here is that A is potentially associated with B owing to many confounding effects, and we wish to estimate the effect of A on B that occurs only via the component of A associated with the instrument. Thus, if the predictor for A is associated with B in the estimate from stage 2, then this is only occurring through a path which has no confounding.
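A minimal numpy sketch of the two-stage logic on simulated data (in practice one of the dedicated IV routines mentioned below would be used); the effect sizes are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
z = rng.binomial(2, 0.3, n).astype(float)             # genetic instrument
u = rng.normal(size=n)                                 # unobserved confounder
a = 0.4 * z + u + rng.normal(size=n)                   # exposure (trait A)
b = 0.25 * a + u + rng.normal(size=n)                  # outcome (trait B); true causal effect = 0.25

# Stage 1: regress A on the instrument and keep the fitted values
X1 = np.column_stack([np.ones(n), z])
a_hat = X1 @ np.linalg.lstsq(X1, a, rcond=None)[0]

# Stage 2: regress B on the fitted values of A
X2 = np.column_stack([np.ones(n), a_hat])
beta_2sls = np.linalg.lstsq(X2, b, rcond=None)[0][1]

beta_ols = np.linalg.lstsq(np.column_stack([np.ones(n), a]), b, rcond=None)[0][1]
print("naive OLS (confounded):", round(beta_ols, 3), "  2SLS:", round(beta_2sls, 3))
```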
Several software implementations exist for performing various type of MR analysis. The 'ivregress' package in STATA, and the 'systemfit' package in R each have functions for performing 2SLS. The general case of IV estimation, including when the number of instruments is greater than the number of explanatory variables, can be performed using the generalized method of moments using the 'gmm' package in R (30). Few software examples exist for the specific types of MR that have been described in this review, but STATA routines for performing subsample and two-sample IV estimation are provided by Pierce and Burgess (31). Typically, genetic variants are only used as instruments if they are reliably detected and replicated in GWAS. However, predictive power may be improved when SNPs that do not reach
Box 2. Consequences of pleiotropy for the interpretation of MR
Pleiotropy is the phenomenon by which a single locus influences multiple phenotypes (32). Depending on the form it takes, pleiotropy may be a potential limitation to the interpretation of MR, so distinguishing between its different types is important. In the context of MR, there are two mechanisms by which pleiotropy occurs: a single process leading to a cascade of events (e.g. a locus influences one particular protein product, and this causes perturbations in many other phenotypes); or a single locus directly influencing multiple phenotypes (33,34). Amongst its many names, the former has been termed 'spurious pleiotropy' (35,36), 'mediated pleiotropy' (37) or 'type II pleiotropy' (36); the latter 'biological pleiotropy' (37) or 'type I pleiotropy' (36). Type II pleiotropy is not only unproblematic for MR, it is the very essence of the approach, in which the downstream effects of a perturbed phenotype are estimated through the use of genetic variants that relate to this phenotype. Thus, the instrument of common variation in FTO, known to influence BMI (38), probably through influencing caloric intake (39,40), is associated with a wide range of downstream phenotypes: blood pressure and hypertension (41), coronary heart disease (42), fasting insulin, glucose, HDL cholesterol and triglycerides (43), bone mineral density (44), chronic renal disease (45) and diabetes (38). These associations are expected, as higher BMI influences these traits, and it would be an error to consider these to be 'pleiotropic' effects of FTO variation that vitiate MR investigations. Type I pleiotropy, however, is problematic for the interpretation of MR. Estimates of the degree of pleiotropy suggest that type II pleiotropy is the more pervasive form (36,46), with type I pleiotropy being more pronounced at the level of the gene than at the level of single SNPs (36,47). Greater pleiotropic effects are seen for mutations with larger effects on the primary trait (48,49), as would be anticipated for type II pleiotropic influences that are downstream effects of considerable perturbation of the primary trait.
Potentially erroneous causal inference owing to type I pleiotropy can be minimized by restricting instruments to genetic effects which plausibly act directly on the trait (e.g. genetic instruments for CRP levels located within the promoter region of the CRP gene). When less well-characterized variants, or combinations of variants, are utilized, then the ways of exploring the potential contribution of pleiotropy detailed in this review and elsewhere (15) need to be implemented.
significance thresholds are also included, the rationale being that these will include false negatives owing to small effect sizes (56). This approach can improve the power of MR, but considerable caution should be applied, owing to the increased chance of introducing pleiotropic effects (Box 2) (57).
Two-sample Mendelian randomization
It is often the case that an observational association between two variables exists, but high measurement costs or a lack of appropriate biospecimens leads to relatively small datasets with intermediate phenotypes and genetic instruments. Methods have been developed to perform IV analysis when the intermediate phenotype and the outcome variable are measured in two independent datasets (58), and these can be applied in the MR context (31). This approach can be particularly valuable when applied to the very large datasets that exist relating GWAS data to disease outcomes, but which lack intermediate phenotype data. Another scenario in which two-sample MR can be used is if the dataset in which MR is being performed is the same as that being used to identify instruments. GWAS is known to lead to overestimation of genetic effect sizes owing to the phenomenon of the winner's curse, and this can lead to bias in MR. Dividing the dataset into two (or more) samples for estimation and testing can mitigate this problem. This method has been applied in a study of physical activity and childhood adiposity (59).
[Figure 2 caption, after (52): Boxes represent the proportional risk reduction (1-OR) of CHD for each exposure allele plotted against the absolute magnitude of lower LDL-C associated with that allele (measured in mg/dl). SNPs are plotted in order of increasing absolute magnitude of associations with lower LDL-C. The line (forced to pass through the origin) represents the increase in proportional risk reduction of CHD per unit lower long-term exposure to LDL-C.]
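As a sketch of how two-sample MR proceeds from summary statistics alone, the per-SNP ratio (Wald) estimates can be combined with inverse-variance weights; the summary values below are illustrative, not taken from any study:

```python
import numpy as np

# SNP-exposure effects from sample 1 and SNP-outcome effects from sample 2 (illustrative values)
beta_exposure = np.array([0.12, 0.08, 0.20])
beta_outcome  = np.array([0.030, 0.022, 0.048])
se_outcome    = np.array([0.010, 0.012, 0.015])

wald = beta_outcome / beta_exposure              # per-SNP causal estimates
weights = (beta_exposure / se_outcome) ** 2      # first-order inverse variance of each ratio
ivw_estimate = np.sum(wald * weights) / np.sum(weights)

print("per-SNP Wald ratios:", np.round(wald, 3), "  pooled estimate:", round(ivw_estimate, 3))
```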
Box 3. Complexity of associations
In MR studies, genetic variants are taken to be proxy indicators of modifiable factors that potentially influence disease risk. The manner in which the variants relate to such factors can lead to misleading interpretations, however. For example, antioxidants are potentially protective against CHD risk, so increasing circulating levels of the natural antioxidant extracellular superoxide dismutase (EC-SOD, a scavenger of superoxide anions) might be hypothesized to decrease CHD risk. However, a genetic variant associated with higher circulating EC-SOD is associated with substantially increased CHD risk (53). An explanation for this apparent paradox is that the genetic variant may influence circulating levels of EC-SOD by reducing the levels of EC-SOD in arterial walls; thus, the in situ anti-oxidative activity is lower, whereas the circulating levels are higher. A naive interpretation of the genetic studies, that higher levels of antioxidant increase the risk of CHD, would be misleading. Similarly, it has been suggested that the interpretation of MR studies purporting to show that elevated uric acid levels do not increase the risk of hypertension (20,54) is rendered problematic by the fact that the main genetic variant utilized in such studies, whilst increasing circulating uric acid levels, does not increase the intracellular level of uric acid, and the latter may be the important factor with respect to hypertension (55).
Bidirectional and network Mendelian randomization
A major limitation of MR is that it can be difficult to distinguish between an exposure causing an outcome and an outcome causing the exposure, because genetic variants could have their primary influence on either variable. For example, atheroma and body mass index (BMI) influence C-reactive protein (CRP) levels, and apparently misleading causal effects can be generated if a genetic variant that primarily influences atheroma or BMI is mistaken as being a variant with a primary influence on CRP (60). With a focus on instruments for which there exists some degree of biological understanding, bi-directional MR can be
Box 4. Two-step and two-sample, two-step MR
Genetic variants can be used as instrumental variables in a two-step framework to establish whether particular DNA methylation profiles are on the causal pathway between exposure and disease. In step 1, a SNP is used as a proxy for the environmentally modifiable exposure of interest (e.g. smoking) to examine how this exposure influences DNA methylation. In step 2, a different SNP (which is not related to the exposure), preferably a cis variant, is used as a proxy for this specific DNA methylation difference and to relate it to the disease outcome under investigation.
Two-sample, two-step MR can be utilized to interrogate tissue-specific DNA methylation as a potential causal intermediate phenotype.
In the smaller first sample, the association of the exposure to tissue-specific DNA methylation is established using an MR approach (with the exposure-related SNP1; A) and a cis variant associated with the same methylation difference but not related to the exposure is identified (SNP2; B). In the larger second sample, the exposure is shown to influence the outcome through the use of SNP1, either through relating SNP1 to both the exposure (if data are available on this) and the outcome, or if exposure data are not available, then simply relating SNP1 to the outcome (C). Finally, exposure-related methylation is shown to influence the outcome through the use of SNP2, which is related directly to the outcome (D).
applied in these circumstances. Here, instruments are required for both variables, and MR is performed in both directions (Fig. 1). If trait A causes trait B, then the instrument, Z_A, will be associated with both A and B. However, a second instrument specific to trait B, Z_B, will be associated with trait B, and not with trait A. This method is only valid on the condition that the two instruments are not marginally associated with each other (e.g. there is no LD between instruments for A and B). This method has been used to demonstrate that BMI influences CRP levels (61,62), vitamin D (63), uric acid (20,64) and fetuin-A (65), and not vice versa. Extracting data from different studies can also be utilized in this context; for example, MR studies suggest that IL-6 influences CRP levels, but not vice versa (18,22,23). When utilizing variants with little understanding of their biological effects, bidirectional MR can be potentially misleading, as it is obvious that if trait A influences trait B then GWAS studies with adequate statistical power will identify a variant with a primary influence on trait A as being associated with trait B. This reflects 'spurious' or 'type II' pleiotropy (Box 2), and many examples of this exist. For example, FTO variation was initially identified in relation to type 2 diabetes, with subsequent recognition that this was because the genetic variant related to BMI, which in turn increased the risk of type 2 diabetes (38). Similarly, genetic variants with a primary influence on BMI appear amongst the top hits in GWAS of CRP (66) but obviously cannot be utilized as instruments for CRP levels. Use of allele scores in bidirectional MR studies will increase the likelihood of incorrectly including a variant primarily influencing trait A as one that primarily influences trait B, with consequent misinterpretation, and findings from such studies need to be treated with caution (59). Utilizing multiple single and composite instruments can help interrogate such situations, because if trait A influences trait B, and not vice versa, then all variants related to trait A will relate to trait B, but the reverse will not be the case.
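A simulated sketch of the association pattern expected under bidirectional MR when trait A causes trait B and not the reverse (all effect sizes invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
z_a = rng.binomial(2, 0.3, n).astype(float)    # instrument with a primary effect on trait A
z_b = rng.binomial(2, 0.3, n).astype(float)    # instrument with a primary effect on trait B
a = 0.4 * z_a + rng.normal(size=n)
b = 0.3 * a + 0.4 * z_b + rng.normal(size=n)   # A causes B; B does not cause A

def r(x, y):
    return round(np.corrcoef(x, y)[0, 1], 3)

print("Z_A with A:", r(z_a, a), "  Z_A with B:", r(z_a, b))   # both non-zero
print("Z_B with B:", r(z_b, b), "  Z_B with A:", r(z_b, a))   # only the first is non-zero
```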
Bidirectional MR is applied in two-variable settings, but clearly this can be scaled up to explore the causal directions within a network of a larger number of correlated variables (67). Such 'network MR' is an area of current active development, with parallel logic to the application of genetic anchors in the causal dissection of networks of gene interactions (68,69).
Mediation and two-step Mendelian randomization
Networks will often contain cases of mediation, in which the association between an exposure and an outcome may act through an intermediary factor. For example, higher BMI may increase the risk of CHD in part through its effect on blood pressure. Conventional mediation analysis in the epidemiological field, solely utilizing phenotypic measurements, is problematic, because it is highly dependent on the measurement characteristics of the variables and on reliable identification of causal effects (70-72). In such situations, it may be possible to obtain causal estimates from MR studies for all steps in the chain. In the above-mentioned example, MR studies have shown that greater adiposity leads to higher blood pressure (41), and in turn higher blood pressure increases the risk of coronary heart disease (73). More reliable specification of the quantitative contribution of the mediator (blood pressure) to the causal link between the exposure (BMI) and the outcome (CHD) could be made with such data.
MR approaches can be applied to mediation in situations of high-dimensional potential mediator data, as, for example, in the delineation of mediation by specific epigenetic processes between environmental exposures and disease. This has been referred to as two-step MR (74). Intermediate phenotypes, such as DNA methylation, can show tissue specificity, in that both genetic and phenotypic associations can differ between tissues, and assays of easily accessible samples (such as methylation of DNA extracted from blood) may not be representative of DNA methylation in the tissue that is responsible for disease development (75,76). Obtaining tissue-specific data on large numbers of individuals is challenging, but a combined two-sample and two-step MR approach could be applied. First, the causal associations of both the exposure on methylation and of a cis SNP on methylation in the tissue of interest could be established, and then, in a larger population-based sample, the SNP associations with exposure and disease outcome delineated. Box 4 illustrates the logic of these more complex approaches.
Factorial Mendelian randomization
The manner by which causes of disease act together to increase disease risk can have important public health implications, as above-additive effects lead to the clustering of risk factors, generating a greater burden of disease in the population. For example, evidence exists that the combined influence of obesity and heavy alcohol consumption on the risk of liver disease is greater than multiplicative (77). It is difficult to estimate such effects, however, as confounding can be magnified when examining two already confounded risk factors. Factorial RCTs overcome this issue by randomizing each treatment independently, allowing characterization of interactions between them (78). Likewise, combinations of genetic variants can be used to perform factorial MR studies to obtain unconfounded estimates of the effect of co-occurrence of the two risk factors for disease.
Multiphenotype Mendelian randomization
In some situations, genetic variants tend to be associated with multiple intermediate phenotypes, and estimating the causal effect of one particular intermediate phenotype is problematic. For example, HDL cholesterol and triglycerides are observationally associated with coronary heart disease, but they are also highly (inversely) correlated, and observational studies cannot reliably separate their effects (79). Many of the genetic variants related to HDL-C and triglycerides, of which there are a large number, associate with both measures (80), in what appear to be examples of type I pleiotropy (Box 2). Whereas factorial MR can be applied to multiphenotype relationships when different SNPs can be taken to be instrumental variables for each phenotype, this is not feasible here because an instrument that relates purely to one of these phenotypes cannot currently be constructed. An initial way of interrogating this problem is to use regression methods to attempt to separate the effects, and two independent studies utilizing this approach have recently suggested that the causal influence of triglycerides was robust, whereas the apparent protective effect of HDL-C was not (81,82). The appropriateness of different statistical approaches and whether reliable answers can be obtained in the multiphenotype context remain areas of active investigation.
Hypothesis-free Mendelian randomization
The majority of MR studies have been focused on testing hypotheses that arose from associations between traits seen in observational studies. But is this only the tip of the iceberg? An illustrative example of there being vastly more potential associations than those already known was presented by Blair et al., who, after mining the medical records of 110 million patients, uncovered 2909 associations between Mendelian diseases and complex traits, the majority of which were previously unreported (83). As high-throughput 'omics technologies continue to fall in time and financial cost, datasets with comprehensive genotyping and phenotyping are destined to grow, and in principle, it should be possible to construct instruments for many exposures and, through data mining, obtain evidence regarding outcomes caused by these exposures (57). More speculatively, generating instruments from within the data and performing split-sample or jackknife IV analysis, including bidirectional analysis, could allow resolution of causal direction within networks of phenotypes, without advance specification of which exposure or outcome is being examined (67).
Conclusion
Resolving observational correlations into causal relationships is an elusive problem at the heart of biological understanding, pharmaceutical development, prevention of disease and medical practice. MR is a potentially robust method that can support this endeavour, and its scope for application will widen as the cost of data generation continues to fall. Findings from MR studies need to be interpreted in the context of other evidence related to the particular issue under investigation, and as such, MR will contribute to the application of 'inference to the best explanation' (84) approaches to strengthening causal inference. Identifying the most promising targets for intervention, for example through pharmacotherapy, can be enhanced through the application of MR and thus lead to a more rational approach to prioritizing treatments for evaluation in RCTs.
| 6,241.2 | 2014-07-04T00:00:00.000 | ["Medicine", "Biology"] |
Prediction of blast-induced air overpressure using a hybrid machine learning model and gene expression programming (GEP): A case study from an iron ore mine
Mine blasting can have a destructive effect on the environment. Among these effects, air overpressure (AOp) is a major concern. Therefore, a careful assessment of the AOp intensity should be conducted before any blasting operation in order to minimize the associated environmental detriment. Several empirical models have been established to predict and control AOp. However, the current empirical methods have many limitations, including low accuracy, poor generalizability, consideration only of linear relationships among influencing parameters, and investigation of only a few influencing parameters. Thus, the current research presents a hybrid model which combines an extreme gradient boosting algorithm (XGB) with grey wolf optimization (GWO) for accurately predicting AOp. Furthermore, an empirical model and gene expression programming (GEP) were used to assess the validity of the hybrid model (XGB-GWO). An analysis of 66 blasting events with their corresponding AOp values and influential parameters was conducted to achieve the goals of this research. The efficiency of AOp prediction methods was evaluated in terms of mean absolute error (MAE), coefficient of determination (R2), and root mean square error (RMSE). Based on the calculations, the XGB-GWO model has performed as well as the empirical and GEP models. Next, the most significant parameters for predicting AOp were determined using a sensitivity analysis. Based
by engineers. Uncontrollable parameters in the second group consist of the rock mass's RQD and compressive, shear and tensile strengths. These mainly depend on the complex geological and geotechnical conditions of the rock mass.
Remennikov and Rose [10] have proposed solutions such as improving the structure of buildings and glass doors or using barriers as shields to reduce the effects of AOp. However, their efficacy was not significantly proven [11]. With similar objectives, empirical methods have also been presented to calculate AOp. Also, Khandelwal and Kankar [12] and Armaghani et al. [13] showed that empirical equations have poor performance. In addition to low accuracy, conventional tools also have many limitations, including being restricted to specific areas, considering only linear relationships between influencing parameters, and focusing only on explosive charge per delay and monitoring distance [13].
Over the years, machine learning and artificial intelligence (AI) methods have been demonstrated to be acceptable and reliable for solving different engineering problems, especially for prediction and optimization purposes [14][15][16][17][18][19]. For example, AI can be used to analyze the results of a blast, using machine learning algorithms to identify patterns, and this information can then be used to adjust the blast design for future operations [20,21]. Prediction of AOp has also been conducted using these methods. The artificial neural network (ANN) proposed by Khandelwal and Singh [22] was compared with regression methods for predicting AOp. The results showed that the ANN predicted AOp with higher precision than the regression methods. Khandelwal and Kankar [12] investigated and confirmed the efficacy of the support vector machine (SVM) for predicting AOp. Nguyen et al. [23] evaluated and compared different ANN systems, including ANN, BRNN and HYFIS, to predict blast-induced AOp. Similarly, a heuristic algorithm intended to optimize ANNs for predicting AOp was used successfully by Armaghani et al. [24]. Also, Hasanipanah et al. [25] developed a hybrid SVR model that was optimized with PSO to predict AOp.
Several soft computing models were presented by Bui et al. [4] for predicting AOp, including SVM, ANN, boosted regression trees, k-nearest neighbors and RF. In another study, Zhou et al. [26] developed a hybrid model which combines a fuzzy system (FS) and the firefly algorithm (FFA). Their study demonstrated that FFA-FS could be used to predict AOp efficiently. Tran et al. [27] have investigated the effect of meteorology on AOp, using AOp prediction models. Their results show that meteorological conditions, especially wind speed and air humidity, have a noteworthy impact on blast-induced AOp. Zeng et al. [28] have developed an efficient method based on the Levenberg-Marquardt (LM) algorithm and cascaded forward neural network (CFNN) to predict AOp. In addition, the accuracy level of this technique has been tested against the generalized regression neural network (GRNN) and extreme learning machine (ELM). Table 1 summarizes some of the relevant previous studies conducted by various researchers.
Blasting operations have a range of effects on the environment, including air overpressure (AOp) [29][30][31]. Therefore, AOp prediction with high accuracy is essential to determining the safe zone around an operation site. In many studies, machine learning has been applied to evaluate and predict the adverse consequences of blasting. However, these studies have not addressed the evaluation and prediction of air overpressure via hybrid extreme gradient boosting (XGB). Thus, this article develops a hybrid model which combines extreme gradient boosting (XGB) with grey wolf optimization (GWO) to predict AOp in open-pit iron mines accurately. Furthermore, gene expression programming (GEP) and an empirical model were used to assess the validity of the hybrid model (XGB-GWO). Additionally, unlike other methods, GEP can create a simple mathematical expression that can be applied to different mining conditions, which is another advantage of this study.
Methodology
Machine learning models require their parameters to be tuned on the available dataset. Many studies have shown that heuristic algorithms improve machine learning accuracy and stability [40,41]. Therefore, this study applied a hybrid predictive approach based on the extreme gradient boosting framework (XGB) combined with a metaheuristic algorithm, namely grey wolf optimization (GWO). Then, gene expression programming (GEP) and empirical models were employed to assess the validity of the optimized model (XGB-GWO). GWO is used to find the optimal values of the hyperparameters of the regression model; by adjusting three key parameters of the XGB model (n_estimator, maximum_depth and learning_rate), the optimization algorithm achieves higher accuracy. N_estimator, maximum_depth and learning_rate represent the number of trees, the maximum depth of a tree and the shrinkage coefficient of the trees, respectively.
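A minimal sketch of how such GWO-based tuning of the three XGB hyperparameters could look is given below. It uses synthetic stand-in data rather than the study's 66 blasting records, and the search bounds, pack size and iteration count are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic stand-in data (the real study used 66 blasting records with 9 inputs).
rng = np.random.default_rng(0)
X = rng.normal(size=(66, 9))
y = 120 + X @ rng.normal(size=9) + rng.normal(scale=2.0, size=66)  # synthetic AOp (dB)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

# Illustrative search bounds for (n_estimators, max_depth, learning_rate).
lb = np.array([50.0, 2.0, 0.01])
ub = np.array([500.0, 10.0, 0.5])

def fitness(pos):
    """Validation MSE of an XGB model built from one wolf's position."""
    model = XGBRegressor(n_estimators=int(pos[0]), max_depth=int(pos[1]),
                         learning_rate=float(pos[2]), verbosity=0)
    model.fit(X_tr, y_tr)
    return mean_squared_error(y_val, model.predict(X_val))

n_wolves, n_iter, dim = 8, 20, 3
wolves = lb + rng.random((n_wolves, dim)) * (ub - lb)
scores = np.array([fitness(w) for w in wolves])

for t in range(n_iter):
    a = 2.0 - 2.0 * t / n_iter                    # convergence factor shrinks 2 -> 0
    alpha, beta, delta = wolves[np.argsort(scores)[:3]]
    for i in range(n_wolves):
        new_pos = np.zeros(dim)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2.0 * a * r1 - a, 2.0 * r2
            D = np.abs(C * leader - wolves[i])
            new_pos += (leader - A * D) / 3.0     # average of the three guided moves
        wolves[i] = np.clip(new_pos, lb, ub)
        scores[i] = fitness(wolves[i])

best = wolves[np.argmin(scores)]
print("best (n_estimators, max_depth, learning_rate):", best, "MSE:", scores.min())
```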
Extreme gradient boosting (XGB)
Chen et al. [42] proposed a method based on gradient boosting [43][44][45][46]. Many engineering fields have implemented XGBoost for classifying and predicting problems [47]. It has performed well because of the advantages of parallel processing, regularization and efficient tree pruning. XGBoost can solve a wide variety of data science problems quickly and accurately with parallel tree boosting. The core of this algorithm is optimizing the objective function [48].
In each iteration, XGBoost calibrates the previous predictor using the residual. Loss function optimization (LOF) is involved in this process. However, regularization is applied to the objective function during calibration to reduce overfitting. Using this description, Equation (1) describes the objective function as two parts: regularization and training loss.
In Equation (1), Θ and Ω represent the parameters trained from the data and the regularization term, respectively. Regularization controls the complexity of the model in order to avoid overfitting [49]. L denotes the training loss function, which measures the model's fit to the training data. Complexity can be defined in a variety of ways. Equation (2), however, is often used to calculate the complexity of each tree.
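The equations themselves do not survive in this extract; as a hedged reconstruction, Equations (1) and (2) presumably take the standard XGBoost forms
$$\mathrm{Obj}(\Theta) = L(\Theta) + \Omega(\Theta), \qquad \Omega(f) = \gamma T + \tfrac{1}{2}\,\lambda \lVert \omega \rVert^{2},$$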
where γ denotes each leaf's complexity, T represents the number of DT leaves, ω is the vector of scores on the leaves and λ scales the penalty. Next, XGBoost combines the general gradient-boosting loss function with a second-order Taylor expansion. When the mean squared error is used as the loss, the objective function can be obtained from Equation (3).
The first- and second-order derivatives of the MSE loss function are g_i and h_i, respectively. Also, the q function is used to assign data points to leaves. As a final step, we calculate the XGBoost objective function by referring to Equation (4).
Here the per-leaf terms are independent of each other. The definitions of the two terms, G_j and H_j, are given in Equation (5).
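As a further hedged reconstruction of the missing Equations (3)-(5), the standard second-order XGBoost derivation gives
$$\mathrm{Obj}^{(t)} \approx \sum_{i}\Big[g_i f_t(x_i) + \tfrac{1}{2}\, h_i f_t^{2}(x_i)\Big] + \Omega(f_t), \qquad G_j = \sum_{i \in I_j} g_i, \quad H_j = \sum_{i \in I_j} h_i,$$
$$\mathrm{Obj}^{*} = -\tfrac{1}{2}\sum_{j=1}^{T} \frac{G_j^{2}}{H_j + \lambda} + \gamma T,$$
where $I_j = \{\, i \mid q(x_i) = j \,\}$ is the set of training points assigned to leaf $j$ by the mapping $q$.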
In general, finding the minimum of a quadratic function can be applied to optimizing the objective function. The objective function is used to evaluate the performance of the model after splitting a particular node in the DT. If the model performs better than before, the split is accepted; otherwise, splitting comes to a halt. Owing to the inclusion of the regularization term, XGBoost can prevent overfitting more effectively [50].
Grey wolf optimization (GWO)
GWO is a metaheuristic optimization algorithm proposed by Mirjalili et al. [51] which reflects grey wolf family social systems in nature. Social hierarchy is very strict in the grey wolf community. Usually, wolf levels are divided into four categories: α, β, δ and ω. α is primarily responsible for making overall decisions at the first level, and the β wolf assists the α wolf at the second level. At the third level, δ must follow the decisions of the α and β wolves. Furthermore, the lowest rank for wolves is ω; this group must follow wolves with a higher rank. Mathematical models can be developed, and optimal solutions can be found, based on the wolves' hunting process and social hierarchy.
α represents the optimal solution, β denotes the second-best, and δ the third-best solution. All other candidate solutions are denoted by ω. In the wolf pack, D indicates the distance between the prey and the individual (Equations (6) and (7)).
where C, t and X_P(t) are the step-length coefficient, the current iteration number and the prey position, respectively. Also, r_1 and X are a random number in (0,1) and the position of a grey wolf, respectively.
A pack of individuals tries to shorten the distance between their prey and themselves by constantly updating according to Equations (8) and (9), where r_2 and A are a random number in (0,1) and the convergence influence factor, respectively. A decreases linearly from 2 to 0 with the number of iterations. Since α, β and δ possess high levels of intelligence, they carry more information about where the prey can be found, which enables them to lead the pack gradually towards the hunting grounds. As a result, the three best solutions guide the remaining wolves, which either discard their current solutions or search for new ones, and the pack gradually converges on the global optimum based on the three best solutions, as outlined in Equations (10) to (12).
How the remaining individuals in the pack move is decided jointly by α, β and δ. The position is then updated as shown in Equation (13).
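The update rules referred to as Equations (6)-(13) are not reproduced in this extract; their standard forms, following Mirjalili et al. (the labeling of the random numbers r_1 and r_2 may differ from the original paper), are
$$D = \lvert C \cdot X_P(t) - X(t) \rvert, \qquad X(t+1) = X_P(t) - A \cdot D,$$
$$A = 2a\,r_1 - a, \qquad C = 2\,r_2, \qquad a \text{ decreasing linearly from } 2 \text{ to } 0,$$
$$X(t+1) = \frac{X_1 + X_2 + X_3}{3},$$
where $X_1$, $X_2$ and $X_3$ are the candidate positions computed with respect to the α, β and δ wolves, respectively.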
It is possible to summarize the grey wolf optimization algorithm in this way: during the optimization process, it constantly updates the solution area for the location search problem and finds the ideal solution at the end (Figure 1).
Gene expression programming (GEP)
The GEP method was introduced by Ferreira [52] and combines genetic programming (GP) and genetic algorithms (GA). As an evolutionary AI approach, GEP has corrected some of GA's and GP's weaknesses, such as the handling of tree structures. The two methods differ in how individuals, or candidate solutions, are represented: individuals in a GA are defined as binary fixed-length chromosomes, whereas solutions from the GP method can be tree structures of varying sizes. Because GEP combines GA and GP, it produces chromosomes of fixed length together with tree structures of different sizes and shapes, known as expression trees (ETs). The GEP algorithm's structure consists of several elements, including functions, terminals, operators, fitness and stop criteria [52].
Chromosomes are composed of two parts, a head and a tail, making them of fixed length. Functions and terminals are included in the head part, while only terminals are included in the tail part. Because of the complexity of the problems, a specific equation cannot be used to calculate the head length; it is treated as an input of the GEP method, and trial and error is one way to choose it [53]. Furthermore, the tail length can be calculated using Equation (14).
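The formula itself is not reproduced in this extract; assuming the standard GEP head-tail relation, Equation (14) presumably reads
$$t = h\,(n - 1) + 1,$$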
where n, h and t represent the maximum number of arguments of the functions, the head length and the tail length, respectively. Individual chromosomes in the initial population are evaluated according to a fitness function developed for the problem at hand. A number of genetic operators are used to adapt the considered chromosomes. Depending on the problem conditions, each chromosome may contain functions, constants and terminals [54,55]. The following is a general description of the GEP algorithm:
Step 1: Depending on the conditions (size) of the problem being studied, a certain number of chromosomes is generated randomly.
Step 2: Expression trees and mathematical equations are used to select the initial population chromosomes.
Step 3: The chromosomes are fitted according to the overall fitness function (RMSE or R2). Alternative methods, such as the roulette wheel method, are used if the stopping criterion is not met.
Step 4: During this step, the GEP algorithm's genetic operators, known as the core, must be linked to the rest of the chromosomes.
Step 5: Lastly, the process of creating the next generation begins, and the process is repeated until new structures are created.
In order to decode the programs on the chromosomes, Karva notation (K-expression) was invented to express the codes. Inversions, mutations, triple recombination operators and triple transposition operators have been introduced so far as genetic operators that are used for chromosome modification [54]. The structure of the GEP method is illustrated in Figure 2.
The equation recommended by the USBM has been widely adopted by researchers and is expressed in the manner given below [57]. AOp, in decibels (dB), is obtained by regression analysis, and a and b are the site factors.
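The USBM equation itself is missing from this extract; a commonly cited cube-root scaled-distance form, which the site factors reported later (173 and 0.1) appear consistent with, is assumed here:
$$\mathrm{AOp} = a \left( \frac{D}{\sqrt[3]{MC}} \right)^{-b},$$
with AOp in dB, D the monitoring distance, MC the maximum charge per delay, and a and b the site factors obtained by regression.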
Model verification and evaluation
Evaluation and verification of models are essential steps during model development. In order to determine whether a model is of high quality and whether the results produced are adequate for the goals pursued, it is necessary to test it. In this study, training and testing sets are used to train and verify the predictive models. Predicted and actual values are compared using relevant evaluation indicators.
In this study, the hybrid models will be evaluated to ensure their reliability [58]. R2, RMSE and MAE are the evaluation indicators.
As defined in Equation (17), R2 represents the square of the correlation between actual and predicted values. Further, the RMSE represents the standard deviation of the differences between predicted and actual values (Equation (18)). Equation (19) defines the mean absolute error as the measure of error between paired observations describing the same phenomenon [59].
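The indicators referred to as Equations (17)-(19) are not reproduced here; their conventional definitions are
$$R^{2} = 1 - \frac{\sum_{i}(y_i - \hat{y}_i)^{2}}{\sum_{i}(y_i - \bar{y})^{2}}, \qquad \mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i}(y_i - \hat{y}_i)^{2}}, \qquad \mathrm{MAE} = \frac{1}{n}\sum_{i}\lvert y_i - \hat{y}_i \rvert,$$
where $y_i$, $\hat{y}_i$ and $\bar{y}$ denote the measured values, the predicted values and the mean of the measured values; an equivalent squared-correlation form of $R^2$, as described above, is also common.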
Case study and data collection
This study analyzes the case of Chadormalu, a large open-pit iron mine located 120 kilometers northeast of Yazd, Iran (Figure 3). In the current study, input and output parameters were used in developing the model. The influential input parameters used for air overpressure (AOp) prediction are given in Table 2. Burden (B), number of holes (NH), hole depth (H), spacing (S), powder factor (PF), distance (D), rock quality designation (RQD), stemming (ST) and the maximum charge per delay (MC) have been employed as the input parameters of the models to predict air overpressure (AOp).
Data were collected as follows: D was determined by handheld GPS, and AOp values from blasting operations were recorded with a sound level meter (SLM). The remaining parameters were taken from the blast designs. Table 2 shows the influential input and output parameters' details, symbols and statistical descriptions.
For model development, 66 data pairs were used to create a database related to air overpressure (AOp). From the organized database, 20% of the data set was selected to test the model to ensure consistency.
Results and discussion
Predicting air overpressure requires the preparation of a database. This database was divided into training and test sets using the most common division ratio of 80%-20% [60]. The air overpressure prediction model was evaluated using several performance indicators, including R2, RMSE and MAE. All predictive models use the same training and test data sets.
Hybrid model result (XGB-GWO)
In order to avoid complexity in XGBoost modeling, three stopping criteria were considered, namely n_estimators, learning rate and maximum depth. If excessively large values are chosen for these parameters, overfitting can occur. Therefore, the XGB parameters have been optimally determined using grey wolf optimization (GWO). Figure 1 illustrates the method used to develop models based on the XGB. The XGB model's parameters were then set. The optimization algorithm's parameters are shown in Table 3. Additionally, Table 3 shows the optimal parameters of the model based on the optimization process.
Table 3. The optimization algorithm's parameters and the optimal parameters for the hybrid model.
GEP model result
The flowchart shown in Figure 2 illustrates the gene expression programming (GEP) modeling process. The GEP was designed using the same testing and training datasets as the previous sections. The final relationship between the initial data and air overpressure was analyzed and determined by GeneXproTools (version 5.0).
To build an efficient model, it is imperative to consider the fitting parameters. The number of chromosomes determines how long it takes for the model to run, which is crucial to the model's performance. For a suitable architecture, the size of the head and the number of genes must also be considered. Each component's complexity and the number of related equations are determined by the head size and the number of genes. To achieve the most suitable GEP model, a trial-and-error mechanism was applied (Table 4). Using the R2, RMSE and MAE indices, the prediction performance of the GEP models was evaluated for the testing and training data sets. Of the ten models stated in Table 4, the five that predicted air overpressure most accurately were selected (Table 5). Based on the results in Table 5, model No. 6 was the most accurate model among all models built with the GEP method. Figure 5 illustrates the scatter plot of the predicted and measured air overpressure in the selected GEP model for the training and test data. The number of genes and the head size in model No. 6 are 8 and 4, respectively. The expression tree (sub-ETs of each gene) of this model is illustrated in Figure 6. In addition to the expression trees, linking functions were used to join the sub-trees into one large and complex ET. Expressions (20) to (23) are the mathematical expressions of each of the genes. Finally, the developed GEP equation for predicting air overpressure is shown in Equation (24).
Empirical model result
In the empirical model, the site factors a and b were computed from 52 blasting events (the training data set). A multivariate regression analysis was performed using Microsoft Excel 2019. The optimal values of a and b for the USBM model for predicting AOp are 173 and 0.1, respectively, which specify the fitted USBM model for this case. Also, Figure 7 illustrates the scatter plot of the predicted and measured air overpressure in the empirical (USBM) model for the training and test data.
Comparison of models and validation performance
The efficiency of the predictive models is evaluated in this section. In this study, R2, RMSE and MAE (Equations 15 to 17) were employed to assess the predictive models' performance. Regarding Table 6, the above statistical criteria and the performance comparison of the techniques were determined for the testing and training data sets. Based on these results, the correctness rate and performance of the hybrid XGBoost (XGB-GWO) method are preferable to those of the GEP and empirical models. In the next step, the selected AOp prediction models' accuracies are compared, as shown in Figure 8, which presents the performance of the models in predicting air overpressure on the testing and training data sets. Based on Figure 8, the GWO-XGB technique gives the most reliable and steady results in AOp prediction among the GEP and empirical models. As shown in Figure 9, the box plots show the distribution functions corresponding to the measured and predicted AOp values. Based on an exhaustive analysis of Figure 9, the XGB-GWO approach yielded the most promising performance relative to the GEP model, owing to its probability distribution being similar to the observational results. A Taylor diagram illustrating the performance of the predictive models is presented in this subsection. Taylor diagrams are used to assess the accuracy of models by showing them in two dimensions [61]. Indicators of the relationship between the actual and predicted observations are R, RMSE and the standard deviation. Each model is denoted by a point in the Taylor diagram. In an ideal model, the position of the point coincides with the reference point. Figure 10 illustrates the predictive models developed in this study. While both models are highly accurate at predicting air overpressure, Figure 9 shows that the hybrid XGB-GWO model is better.
Decay of air overpressure
Understanding the decay of overpressures during the explosion of an explosive charge in an open pit mine is crucial in ensuring the safety of personnel and equipment. The pressure waves generated during an explosion can cause significant damage to structures and equipment, as well as pose a risk to the health and safety of personnel. By examining the decay of overpressures, it is possible to determine the safe distance for personnel and equipment from the explosion site, as well as the appropriate level of protective measures required. This information can also inform the design of blast patterns and other explosives-related procedures in order to minimize the risk of harm. Therefore, studying the decay of overpressures during explosions is an important aspect of ensuring the safe and effective use of explosives in open pit mines [62].
In this case, only two distances have been measured, and it is necessary to use the empirical and GEP methods to estimate the decay of overpressures at different distances. The resulting plot can provide valuable insights into the behavior of air overpressure as it decays over distance, allowing us to identify any unexpected trends or anomalies that may require further investigation. In this regard, Figure 11 plots the measured, empirical and GEP-predicted AOp values at different distances from the blasting site. Analyzing the plot, we can see that the measured AOp values are highest at the closest distance to the explosion site (300 m) and decrease as the distance increases. Both the empirical and GEP-predicted AOp values follow similar trends, with the GEP method initially predicting AOp values that are generally higher than the empirical values. The plot also shows that the GEP method generally provides a reasonable estimate of the AOp values at different distances from the explosion site, although it tends to underestimate the AOp values at shorter distances.
Sensitivity analysis
During the final modeling stage, the output of the model is analyzed for its sensitivity to the input parameters. Analyzing the sensitivity of the input parameters can provide insights into how they affect the model output (objective function). The cosine amplitude method (CAM) is one method for performing such a sensitivity analysis [63][64][65]. Equation (26) describes the CAM method in terms of an n-dimensional space.
Each variable is represented by an m-dimensional vector X_i, which is part of the array X:
X_i = {x_i1, x_i2, x_i3, ..., x_im} (27)
This implies that the different dimensions of X_i are interrelated with those of X_j, and the degree of correlation between them, r_ij, can be expressed mathematically as shown in the following equation. Input parameters have a greater influence on the output as r_ij approaches one. It has been shown that input parameters have a significant effect on the output when this parameter is above 0.9 [63]. In Figure 12, the sensitivity of the regression prediction parameters is analyzed. According to Figure 12, stemming length and RQD had the most significant effect on air overpressure among the input parameters.
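For reference, the CAM correlation strength invoked above (presumably the missing Equation (28)) conventionally takes the form
$$r_{ij} = \frac{\sum_{k=1}^{m} x_{ik}\, x_{jk}}{\sqrt{\left(\sum_{k=1}^{m} x_{ik}^{2}\right)\left(\sum_{k=1}^{m} x_{jk}^{2}\right)}},$$
where $x_{ik}$ and $x_{jk}$ are the $k$-th components of the vectors $X_i$ and $X_j$.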
Conclusions
Blasting is a commonly used method in open-pit mines for breaking down rocks. However, it can result in adverse effects, such as air overpressure (AOp), ground vibration, flyrock, backbreak and dust, which can have a negative impact on the surrounding environment. To address this issue, it is essential to predict and control the effects of blasting. In this study, an efficient and practical hybrid machine learning model (XGB-GWO) was proposed for predicting AOp values, and its performance was evaluated against gene expression programming (GEP) and an empirical model. The accuracy of the predictive models was assessed using standard criteria such as R2, RMSE and MAE. Additionally, a sensitivity analysis was conducted using the cosine amplitude method (CAM) to determine the influence of the input parameters on AOp. Overall, this study highlights the significance of developing effective models to predict the impact of blasting and minimize its adverse effects on the environment. In conclusion, the hybrid XGB models proposed in this study demonstrate considerable potential in predicting AOp, with GWO aiding XGB in adjusting its hyperparameters. The performance of the predictive models on the test data falls within the following ranges: XGB-GWO (MAE: 0.69; RMSE: 1.42; R2: 0.983), GEP (MAE: 0.63; RMSE: 1.04; R2: 0.989) and empirical (MAE: 5.92; RMSE: 6.68; R2: 0.53). It is worth noting that the XGB-GWO hybrid model outperforms the other models in terms of overall performance. Furthermore, the cosine amplitude sensitivity analysis was used to determine the importance of the input variables, which revealed that the stemming length and RQD had the most significant impact on air overpressure. This study demonstrates that the proposed XGB-GWO model is robust and effective in predicting blast-induced AOp, indicating its potential for practical applications in the mining industry.
Figure 1. Structure of the hybrid method based on XGB and GWO for predicting air overpressure.
Figure 2. Structure of the GEP method for prediction of air overpressure.
2.4. Empirical
In open-pit mines, the blast-induced AOp has been estimated using empirical methods. The technique proposed by the United States Bureau of Mines (USBM) is the most widely used empirical equation for predicting AOp among all available methods. Kuzu et al. [56] calculated the scaled-distance (SD) factor for AOp predictions. The USBM approach is an empirical method for predicting AOp based on site factors, the maximum charge per delay (MC) and the monitoring distance (D) in open-pit mines. SD values determine the relationship between MC and D [56].
Figure 3. The location of the Chadormalu mine in Iran relative to Tehran (the capital) and Yazd city. The mine is estimated to hold about 400 million tons of mineable ore reserves; according to mineralogical studies, magnetite and hematite are the main components of the deposit, and mine blasting is primarily carried out with ANFO explosives.
For the training data set, Figure 4 shows the correlation between the actual values and the predicted values. The training data are largely scattered near the closest-fit line for the intelligent model, showing that the training effects remain relatively favorable. The proposed XGB-based optimization technique demonstrates strong training performance, with an R2 value of 0.96. After the hybrid intelligent model has been trained, it is verified and evaluated using the testing data set. Figure 4 shows that the test data are mostly distributed near the perfectly fitted line, so the predicted and actual AOp values can be regarded as well correlated. The hybrid model achieves a high level of prediction accuracy, with an R2 value of 0.98.
Figure 4. Actual and predicted values on the testing and training datasets by the XGB-GWO method.
Figure 5. Measured and predicted air overpressure on the training and test datasets by the selected GEP method.
Figure 6. Sub-ETs of each gene for the best GEP model with addition as a linking function.
Figure 7. Measured and predicted air overpressure on the training and test datasets by the empirical method.
Figure 8. Predictive models with prediction values on the training and testing datasets.
Figure 9. The box plot of the predictive models (distribution function).
Figure 10. Performance comparison of the predictive models with testing and training data in Taylor diagrams.
Figure 11. Comparison of measured, empirical and GEP-predicted AOp values at different distances from the blasting site.
Figure 12. The sensitivity analysis and effect of the input data on air overpressure.
Table 1. Some recent work with machine learning techniques for air overpressure prediction. (Note: symbols are explained in the Abbreviations section.)
Table 2. Input and output data with details, symbols and statistical descriptions.
Table 5. Performance indices of GEP models.
Table 6. Statistics of predictive models and performance comparison. | 6,381.2 | 2023-01-01T00:00:00.000 | ["Engineering", "Environmental Science"] |
BLADE-ON-PETIOLE proteins act in an E3 ubiquitin ligase complex to regulate PHYTOCHROME INTERACTING FACTOR 4 abundance
Both light and temperature have dramatic effects on plant development. Phytochrome photoreceptors regulate plant responses to the environment in large part by controlling the abundance of PHYTOCHROME INTERACTING FACTOR (PIF) transcription factors. However, the molecular determinants of this essential signaling mechanism still remain largely unknown. Here, we present evidence that the BLADE-ON-PETIOLE (BOP) genes, which have previously been shown to control leaf and flower development in Arabidopsis, are involved in controlling the abundance of PIF4. Genetic analysis shows that BOP2 promotes photomorphogenesis and modulates thermomorphogenesis by suppressing PIF4 activity, through a reduction in PIF4 protein level. In red-light-grown seedlings PIF4 ubiquitination was reduced in the bop2 mutant. Moreover, we found that BOP proteins physically interact with both PIF4 and CULLIN3A and that a CULLIN3-BOP2 complex ubiquitinates PIF4 in vitro. This shows that BOP proteins act as substrate adaptors in a CUL3 BOP1/BOP2 E3 ubiquitin ligase complex, targeting PIF4 proteins for ubiquitination and subsequent degradation.
Introduction
A key element of plant adaptive responses is their ability to make morphological changes by adjusting the regulatory processes controlling their growth and development patterns in response to environmental stimuli such as changes in light and temperature. Members of the phytochrome light receptor family have the unique ability to sense red (R) and far-red (FR) light. They control germination, early seedling development, stem and internode elongation, the balance between leaf lamina and petiole formation (part of the shade avoidance syndrome) and the transition to flowering (Franklin and Quail, 2010). All higher plants possess multiple phytochromes (phyA-phyE in Arabidopsis) with phyB being the primary photoreceptor mediating seedling de-etiolation in red light. A major function of phyB is to prevent the shade-avoidance syndrome (SAS) in sunlight, an environment that is rich in red light and thus leading to phyB activation. In Arabidopsis phyB controls the SAS with major contributions from phyD and phyE (Devlin et al., 1998). Recently, it was also shown that phyB can integrate light and temperature signals by acting as a thermosensor (Jung et al., 2016;Legris et al., 2016).
Once activated, phytochromes are transported from the cytosol to the nucleus, where they interact with a group of transcription factors from the Phytochrome Interacting Factor (PIF) family (Leivar and Monte, 2014). PIFs act as inhibitors of phytochrome-induced responses in a partially redundant manner and the phytochromes promote these responses by inhibiting the PIFs (Lorrain et al., 2009;Leivar et al., 2008b;Leivar et al., 2012). Different members of the PIF family display different functions in light- and temperature-regulated development. For instance, PIF1 is a repressor of seed germination (Oh et al., 2004), PIF4 plays a crucial role in the response to high temperature (Koini et al., 2009) and PIF4 and PIF5 induce leaf senescence in Arabidopsis (Sakuraba et al., 2014). PIF4, PIF5 and PIF7 promote elongation of hypocotyls and petioles in response to shade cues (low R:FR and low blue light) (Lorrain et al., 2008;Keller et al., 2011;Li et al., 2012;de Wit et al., 2016). PIFs also display different modes of regulation at the transcriptional and post-transcriptional levels. For instance, phosphorylation of all studied PIFs is light regulated and in most cases leads to rapid protein degradation (Leivar and Monte, 2014;Ni et al., 2013;Leivar et al., 2008a). The nature of the protein kinase(s) and the ubiquitin E3 ligase(s) involved in the regulation of PIF stability in response to light has just started to be explored. A recent paper suggests that phytochromes may phosphorylate the PIFs (Shin et al., 2016). Also, the BR signaling kinase BRASSINOSTEROID-INSENSITIVE2 (BIN2) has been shown to phosphorylate PIF4 and subsequently affect PIF4 protein abundance, but in a non-light-inducible manner (Bernardo-García et al., 2014). As for ubiquitin E3 ligases, the degradation of PIF3 by light was reported to be mediated by the Light-Response Bric-a-Brac/Tramtrack/Broad (BTB) proteins (LRBs) interacting with Cullin3 (CUL3), while PIF1 degradation is controlled by a CUL4-COP1-SPA complex (Ni et al., 2014;Zhu et al., 2015). These findings suggest that the degradation of different PIF proteins might be controlled by specific E3 ligase complexes. However, which E3 ligase controls PIF4 degradation has so far been unknown.
The CUL3-based E3 ligase complexes are composed of a CUL3 backbone, an E2-Ub-docking RING Box1 (RBX1) protein and a member from the large family of BTB-domain containing proteins that serve as target-recognition adaptors (Genschik et al., 2013). The Arabidopsis genome contains two CUL3 genes, called CUL3A and CUL3B and about 80 genes encoding BTB domain proteins, which could be possible interactors with CUL3A and CUL3B (Genschik et al., 2013;Gingerich et al., 2005).
Besides LRBs proteins, two other BTB domain containing proteins, NONEXPRESSER OF PR GENES3 and 4 (NPR3 and 4), were shown to function as substrate adaptors in a CUL3 E3 ubiquitin ligase complex to mediate degradation of the co-transcription factor NPR1 (Fu et al., 2012). Two close homologs of NPR proteins, BOP1 and BOP2, were previously shown to redundantly regulate leaf development (Ha et al., 2003;Hepworth et al., 2005;Norberg et al., 2005). In Arabidopsis bop1 bop2 double mutants the leaf lamina extends along the petioles and leaves one and two become massively elongated (Ha et al., 2003;Hepworth et al., 2005;Norberg et al., 2005), phenotypic alterations that are reminiscent of responses to changes in light quality and light intensity (Franklin and Quail, 2010). bop1 bop2 double mutants are also defective in the suppression of bract formation, do not form floral organ abscission zones and display alterations in floral organ identity and positioning (Hepworth et al., 2005;Norberg et al., 2005;McKim et al., 2008). The molecular function of BOP proteins is not known but they have been suggested to act as co-transcription factors (Khan et al., 2014) or to inhibit transport of transcription factors to the nucleus (Shimada et al., 2015). In this study, we report that BOP proteins act as substrate adaptors in a CUL3 E3 ligase complex to control the degradation of PIF4. This regulatory activity has a strong influence on the role of PIF4 during responses to light and temperature.
BOP2 promotes photomorphogenesis in red light
First, we explored the possibility that BOP genes may be involved in the regulation of light signaling by analyzing their role in the light-dependent suppression of hypocotyl elongation (Figure 1a). Plants were grown in constant monochromatic red, far-red, blue or white light, at a range of fluence rates. In all light qualities, the bop1 mutant, bop1-5 (Norberg et al., 2005), responded identically to the wild type (Col-0) control (Figure 1a). In contrast, the bop2 mutant, bop2-2 (Hepworth et al., 2005;Norberg et al., 2005), showed increased elongation in red light compared to Col-0, while it displayed slightly increased elongation in white light at lower intensity and no hyposensitivity to blue and far-red light (Figure 1a,b; Figure 1-figure supplement 1a-c). In all these experiments, the bop1 bop2 double mutant behaved identically to the bop2 single mutant, suggesting that only BOP2 plays a significant role in the light-dependent suppression of hypocotyl elongation.
To further investigate the function of the BOP genes in response to red light, we performed kinematic assays for the bop mutants in response to red light (Boutté et al., 2013). In these assays, hook unfolding, cotyledon separation, and hypocotyl elongation were measured in a time course. The bop2 and bop1 bop2 mutants displayed significantly decreased cotyledon separation in red light, with a much stronger alteration in the double mutant ( Figure 1c). The bop1 bop2 double mutant showed a slightly delayed hook opening compared to wild type and the single mutants ( Figure 1-figure supplement 1d). These results suggest a partially redundant role of BOP1 and BOP2 in cotyledon separation and hook opening. Interestingly, plants overexpressing BOP genes had the opposite phenotype with both 35S:: BOP1 and 35S::BOP2 seedlings displaying reduced hypocotyl elongation at all red light fluence rates tested, and also in darkness (Figure 1-figure supplement 2a,b). Kinematic analysis also showed an opposite cotyledon separating phenotype in the bop1-6D mutant, a strong activation-tagged line (Norberg et al., 2005), with a slightly decreased hook folding in darkness (Figure 1-figure supplement 2c,d). The 35S::BOP2 seedlings had a striking hook folding defect in darkness (Figure 1-figure supplement 2d), showing that upon over-expression BOP2 can promote photomorphogenesis in the absence of light. Collectively, our data provide genetic evidence that BOP2, and to a lesser extent BOP1, suppress hypocotyl elongation and promotes cotyledon opening, especially in red light conditions.
BOP2 genetically interacts with PIF4 in response to light
We then characterized the genetic interaction between the red light photoreceptor mutant phyB and bop2. The phyB bop2 double mutant showed the same hyposensitive response as the phyB single mutant, suggesting that BOP2 acts in the phyB pathway to suppress hypocotyl elongation in red light ( Figure 1-figure supplement 3a). In order to test whether the bop2 phenotype is due to reduced phyB levels we analyzed the levels of the photoreceptor by western blotting. We found that phyB accumulated to normal levels in bop2 suggesting that BOP2 is rather involved in phyB signaling ( Figure 1-figure supplement 3b,c).
PIF transcription factors are important mediators of phytochrome signaling and PIF4 plays a prominent role during de-etiolation in red light (Huq and Quail, 2002). We therefore investigated the genetic relationship between bop2 and pif4, which show opposite phenotypes in red light (Figure 2a) (Huq and Quail, 2002). Interestingly, both the pif4bop2 double mutants and the pif4bop1bop2 triple mutants had the same short hypocotyl as the pif4 single mutant (Figure 2a). In the kinematic analysis, compared to wild type the pif4 mutants showed more separated cotyledons in response to red light (Figure 2c). As observed for hypocotyl elongation, the pif4bop1bop2 triple mutants had the same cotyledon separation phenotype as the pif4 single mutant (Figure 2c). These results show that pif4 is epistatic over bop2 and that PIF4 is necessary for the BOP2-mediated suppression of hypocotyl elongation and promotion of cotyledon separation.
BOP2 promotes red-light-induced reduction of PIF4 levels
A possible reason for the epistatic effect of pif4 over bop2 is that bop2 mutants contain increased levels of PIF4. Therefore, we assessed the possibility that the BOPs regulate PIF4 protein accumulation. We generated plants expressing PIF4-HA under its endogenous promoter (PIF4p::PIF4-HA) in the pif4 and pif4bop2 mutant backgrounds. PIF4p::PIF4-HA complemented the pif4 mutant phenotype (Figure 3a,b). As previously observed, the level of PIF4-HA rapidly decreased in response to red light (Figure 3c) (Lorrain et al., 2008;Nozue et al., 2007). Interestingly, in the pif4bop2 mutant background red light led to a slower decline of PIF4-HA levels than in pif4 (Figure 3c). After 10 min of treatment about 30% of PIF4-HA protein remained in the pif4 background compared to darkness, while more than 80% of PIF4-HA remained in the pif4 bop2 mutant (Figure 3d). These changes in PIF4-HA accumulation in pif4bop2 were not due to an effect on PIF4 transcription, as PIF4 transcript levels in the same experimental conditions were the same in both genotypes (Figure 3e). In agreement with a role of BOP2 in the control of PIF4 protein accumulation we also observed an enhanced phenotype of 35S:PIF4-HA in bop2 (Figure 3-figure supplement 1). Collectively, these results suggest that BOP2 controls PIF4 protein abundance in particular during the transition from dark to light conditions.
BOP2 physically interacts with PIF4 and CUL3A
PIF4 undergoes proteasome-mediated degradation when etiolated seedlings are transferred into the light (Lorrain et al., 2009). Moreover, BOP proteins contain BTB domains, a domain which in other proteins mediates the formation of CUL3-BTB complexes (Geyer et al., 2003;Moon et al., 2004;Pintard et al., 2004). We therefore hypothesized that BOP2 might be part of a CUL3 BOP1/ BOP2 E3 ubiquitin ligase complex controlling PIF4 degradation. As a first step to test this hypothesis, we used co-immunoprecipitation studies in Arabidopsis protoplasts. HA-CUL3A co-immunoprecipitated together with both myc-BOP1 and myc-BOP2 ( Figure 4a) and CUL3A could also be co-immunoprecipitated from 35S::myc-BOP2 plants (Figure 4-figure supplement 1a) indicating that the proteins indeed interact. In general, CUL3A interacted more strongly with BOP2 than with BOP1 ( Figure 4a). These findings indicate that BOP1 and BOP2 may act as substrate adaptors in a CUL3 BOP1/BOP2 E3 ubiquitin ligase complex.
To determine whether the putative CUL3 BOP1/BOP2 E3 ubiquitin ligase can directly interact with PIF4, we tested if HA-PIF4 could interact with myc-BOP1 or myc-BOP2. HA-PIF4 was found to coimmunoprecipitate with both myc-BOP1 and myc-BOP2 ( Figure 4b). The interaction between BOP2 and PIF4 was further confirmed in vivo using Bimolecular Fluorescence Complementation (Figure 4c; Figure 4-figure supplement 1b). A strong signal could be seen in nuclear bodies which were previously observed in PIF4 and phytochrome localization experiments (Chen, 2008). In order to test whether PIF4 and BOP2 could interact in the absence of other plant proteins we performed yeast two hybrid assays. The BOP2-PIF4 interaction was detected using both LacZ and histidine auxotrophy as reporters of the interaction (Figure 4d). In order to determine whether these proteins directly interact with each other we used in vitro pulldown assays with purified recombinant proteins. This experiment showed that glutathione S-transferase (GST) tagged PIF4 directly interacted with maltose-binding protein (MBP) tagged BOP2, but not with MBP alone (Figure 4e). Finally, we tested whether BOP2 could mediate the interaction of PIF4 to a CUL3 complex by coexpressing HA-CUL3A, myc-BOP2 and HA-PIF4 in protoplasts followed by immunoprecipitation with anti-myc antibodies (Covance, Princeton, USA). Both HA-CUL3A and HA-PIF4 could be pulled down by BOP2 (Figure 4f), suggesting that all three proteins may act in the same complex. Taken together, these results demonstrate that the BOP proteins physically interact with PIF4 and serve as substrate adaptors in a CUL3 BOP1/BOP2 E3 ubiquitin ligase complex potentially targeting PIF4 for ubiquitination and subsequent degradation.
A CUL3-BOP complex mediates the polyubiquitination of PIF4
To determine whether BOP2 can direct PIF4 polyubiquitination in vivo, we performed pull-down assays with a Tandem Ubiquitin Binding Entities (TUBEs) approach to detect polyubiquitinated PIF4-HA proteins in plants expressing PIF4-HA under its native promoter.
Figure 5 legend (partial): PIF4-HA seedlings. 3-day-old dark-grown seedlings were irradiated with 6 μmol·m−2·s−1 red light for 2 min followed by 8 min in the dark before harvesting. Total ubiquitinated proteins from dark and red-light-treated samples were immunoprecipitated with agarose-TUBE2, then analyzed by western blotting with anti-HA antibodies for detection of PIF4-HA and anti-ubiquitin antibodies as loading controls. Control agarose beads that were not TUBE-conjugated were used as negative controls. Anti-RPN6 antibodies were used as loading controls for input samples. (b) Quantification of ubiquitinated PIF4-HA protein levels relative to total ubiquitinated proteins. The result is shown in a box-and-whiskers plot. Statistical significance was determined using Student's t-test (two-sided) between the pif4;PIF4p::PIF4-HA and pif4 bop2;PIF4p::PIF4-HA lines. Circles represent each measured sample from six independent experiments, **p<0.01. (c) In vitro ubiquitination assays. A Cullin3 E3 ubiquitin ligase complex was assembled with recombinant MBP-BOP2 and incubated with GST-PIF4. The reactions were then pulled down with Glutathione Sepharose 4B beads followed by western blotting analysis using anti-GST and anti-MBP antibodies. Streptavidin-HRP was used for detection of biotin-labeled ubiquitinated protein. MBP-GFP and GST-GFP were used as negative controls. (DOI: https://doi.org/10.7554/eLife.26759.012)
To investigate whether BOP2 can directly recruit PIF4 for ubiquitination, we then assembled a Cullin3-based E3 ligase in vitro using recombinant MBP-BOP2, GST-PIF4, and human Cullin3/RBX1 proteins (Ubiquigent, Dundee, UK), and performed in vitro ubiquitination assays using biotinylated ubiquitin as the substrate (R&D Systems, Abingdon, UK). After incubation with the E3 complex, GST-PIF4 protein was precipitated with Glutathione Sepharose 4B beads (GE Healthcare, Uppsala, Sweden) and subsequently analyzed by western blotting using anti-GST antibodies (Santa Cruz Biotechnology, Dallas, USA) and horseradish peroxidase (HRP)-conjugated streptavidin (Sigma, St. Louis, USA). The result showed the presence of more slowly migrating, ubiquitinated PIF4 isoforms following the in vitro assays (Figure 5c). Furthermore, ubiquitinated GST-PIF4 was only observed in the presence of MBP-BOP2. These data demonstrate that BOP2 possesses the capacity to directly ubiquitinate PIF4. Collectively, our results indicate that the CUL3 BOP1/BOP2 E3 ubiquitin ligase complex controls PIF4 protein levels (Figures 3-5).
BOP2 modulates PIF4 abundance in response to temperature
In addition to its role in light signaling, PIF4 is known to play a key role in the response to warm temperatures (Koini et al., 2009;Johansson et al., 2014). We therefore tested whether BOPs modulate this response by controlling PIF4 levels. In constant white light, hypocotyl length of all tested mutants, pif4, bop1, bop2, and bop1bop2, was indistinguishable from the wild type at 22˚C (Figure 6a). However, at 28˚C the hypocotyls of bop1 and bop2 mutants were longer than the wild type while in pif4 mutants the temperature response was largely abolished (Figure 6a). On the contrary, overexpression of BOP1 or BOP2 strongly suppressed hypocotyl elongation at 28˚C (Figure 6a). Interestingly, the bop1bop2 double mutant displayed an enhanced phenotype compared to bop1 and bop2 single mutants, indicating that, in contrast to the light response, BOP1 and BOP2 function in a partially redundant manner in response to temperature. Similar results were observed under 12 hr light/12 hr dark growth conditions, except that under these conditions bop mutants showed longer hypocotyls also at 22˚C (Figure 6-figure supplement 1a). This phenotype is likely due to the higher levels of PIF4 accumulated in dark compared to in constant light ( Figure 3c). Importantly, as observed for the red light responses, the pif4 phenotype was epistatic over the bop2 single mutant and the bop1bop2 double mutant in both growth conditions (Figure 6a, Figure 6-figure supplement 1a), suggesting that the longer hypocotyls in bop mutants is due to elevated PIF4 levels.
We then tested the PIF4-HA protein abundance in pif4;PIF4p::PIF4-HA and pif4bop2;PIF4p::PIF4-HA lines in constant white light (Figure 6b,c; Figure 6-figure supplement 1b). As previously observed, the level of PIF4-HA dramatically increased in response to high temperature (Figure 6b,c) (Johansson et al., 2014;Oh et al., 2012). Consistent with the red light response, higher abundance of PIF4-HA was observed in the pif4 bop2 background compared to the pif4 background at both 22˚C and 28˚C (Figure 6b,c). We also showed that the longer hypocotyls of bop1-5 mutants under 12/12 hr growth conditions (Figure 6-figure supplement 1a) could be linked to increased levels of PIF4 (Figure 6-figure supplement 1c,d), confirming that BOP1 and BOP2 have partially overlapping functions during these conditions. One recent study has shown that high temperature increases the rate of reversion from the active Pfr form to the inactive Pr form of phyB (Jung et al., 2016;Legris et al., 2016). As PfrB (phyB in the Pfr form) promotes PIF4 degradation (Leivar et al., 2008b;Leivar et al., 2012), our findings suggest that the longer hypocotyls in bop mutants at high temperature result from enhanced PIF4 accumulation due to reduced PfrB and reduced BOP-mediated degradation. All together, these data suggest that BOP-mediated control of PIF4 abundance is important to control not only the light-mediated but also the temperature-mediated growth response in Arabidopsis.
Discussion
For light signaling it has been known for quite some time that this involves the regulation of PIF4 stability through ubiquitination targeting the protein for degradation (Lorrain et al., 2009;Lorrain et al., 2008). However, the nature of the E3 ligase complex mediating this ubiquitination has been unknown. Here we show that the BOP proteins can act as substrate adaptors in a CUL3 BOP1/BOP2 E3 ubiquitin ligase complex that can ubiquitinate and target PIF4 for degradation. This makes a significant contribution to regulating the levels of PIF4, particularly during de-etiolation in red light and when growing plants at elevated temperatures (Figure 3c,d, Figure 5a, Figure 6b,c). However, other E3 ligases or proteases are likely to contribute, since PIF4 ubiquitination and degradation is not completely abolished in bop mutants (Figure 3c,d, Figure 5a). In a recent study, the E3 ubiquitin ligase HIGH EXPRESSION OF OSMOTICALLY RESPONSIVE GENES 1 (HOS1) was shown to interact with both phyB and PIF4. However, instead of mediating PIF4 protein degradation, HOS1 was found to suppress the transcriptional activity of PIF4 without affecting its protein abundance (Kim et al., 2017). In this study, we show that the degradation of PIF4 is mediated by a novel Cullin3-based E3 ubiquitin ligase. All these findings suggest a complexity of the regulation of PIF protein abundance by different E3 ubiquitin ligases. Interestingly, apart from the light-triggered protein degradation, the CUL4-COP1-SPA E3 ligase was suggested to be involved in PIF1 degradation in darkness. Recent reports also show that DELLA proteins in the GA signaling pathway mediate PIF3 degradation in a non-light-inducible manner (Li et al., 2016), and that PIF4 protein abundance can be regulated in a non-light-inducible manner (Bernardo-García et al., 2014). In line with this, we observed that PIF4 protein abundance is higher in the bop2 mutant background in darkness (Figure 3c), suggesting that the CUL3-BOP E3 ubiquitin ligase might also be involved in the regulation of PIF4 protein stability in the dark.
In previous studies, BOP proteins have been suggested to act as co-transcription factors (Khan et al., 2014) and to have a role in directing BZR1 protein translocation from the cytosol to the nuclei (Shimada et al., 2015). Our finding that the BOP proteins can act as substrate adaptors in E3 ubiquitin ligase complexes raises the interesting question whether BOP proteins act in the same way in these previously described processes and, if so, what are their targets? (Hepworth et al., 2005;Shimada et al., 2015;Xu et al., 2016).
BOP1/2 belong to the BTB-ankyrin protein family containing 6 members, including NPR1/2/3/4 (Khan et al., 2014). NPR1 was also first identified as a co-transcription factor, acting as a key regulator of systemic acquired resistance (SAR) (Mou et al., 2003). Further studies found that NPR1 and its paralogs NPR3/4 are actually the receptors of the immune signal salicylic acid (SA) (Fu et al., 2012;Wu et al., 2012). Moreover, NPR3/4 were shown to serve as substrate adaptors in a Cullin3 E3 ligase complex targeting NPR1 for degradation (Fu et al., 2012). Our findings provide further evidence of E3 ligase activity in this gene family. However, compared to NPR proteins, BOP1/2 play roles in multiple developmental processes.
The BTB-ankyrin family belongs to the BTB domain containing protein super family with about 80 members in Arabidopsis (Gingerich et al., 2005). Another subfamily of the BTB proteins, including LIGHT-RESPONSE BTB1, 2, and 3 (LRB1, 2, and 3), was shown to interact with CUL3 and has been suggested to be part of a CUL3 E3 ligase complex with probable targets in the phyB and phyD signaling pathway (Christians et al., 2012). A recent study proved that LRB1 and LRB2 can target PIF3 for degradation in a light-dependent manner (Ni et al., 2014). The degradation of PIF1 was shown to be mediated by a CUL4-COP1-SPA E3 ubiquitin ligase complex. Intriguingly, both spa and lrb mutants are hypersensitive to light, which is somewhat counterintuitive given that they control the degradation of PIFs, which promote elongation. One explanation for those phenotypes is that LRBs and SPAs also control the degradation of other targets that play an opposite role in the control of hypocotyl elongation. Indeed, the LRB-PIF3 interaction induced the degradation of phyB, and SPAs also control the degradation of proteins such as HFR1 and HY5 which promote deetiolation (Ni et al., 2014;Sheerin et al., 2015). In contrast, bop2 mutants are hyposensitive to red light, associated with enhanced accumulation of the BOP2 target PIF4, and show no alterations in phyB levels (Figure 1-figure supplement 3b,c), suggesting that BOP2 plays a less pleiotropic role in the control of de-etiolation than LRBs and SPAs.
It is common among substrate adaptors in Cullin E3 ligase complexes to form both homo-and hetero-dimers, something that can potentially increase their target range (Bosu and Kipreos, 2008;Hua and Vierstra, 2011). One can speculate, as has been done for other BTB substrate adaptors (Ni et al., 2014;Christians et al., 2012), that BOPs sometimes work as homodimers, which would explain the unique role of BOP2 in controlling red light suppression of hypocotyl elongation (Figure 1), while sometimes BOP1 and BOP2 can work as heterodimers to control other environmental response, which would explain the role of both BOP1 and BOP2 in controlling high temperature induction of hypocotyl elongation ( Figure 6).
Collectively, these findings suggest that E3 ligases with different substrate adaptors target proteins for degradation at different steps in the phytochrome/PIF signaling pathway. This working model is also confirmed from our studies of phyB/BOP interactions during later stages of development. When grown in constant red light phyB mutants displayed a constitutive shade-avoidance response with long petioles but maintained a normal rosette habit with no internodal elongation ( Figure 6-figure supplement 2). In contrast, dramatic elongation of rosette internodes was observed in all combinations of phyB and bop mutants ( Figure 6-figure supplement 2). The phyB bop mutant combinations appeared similar to phyB phyD phyE triple mutants, which also show rosette internodal elongation (Devlin et al., 1998). This suggests that BOP1 and BOP2 are involved in phyB/D/E-mediated suppression of the shade avoidance syndrome in red light and, in contrast to the regulation of hypocotyl elongation, BOP1 and BOP2 are needed together in this suppression. This suggests that the BOP proteins can also affect other phytochrome signal transduction pathways, besides the phyB pathway.
One outstanding question relates to the role of phosphorylation in the BOP-directed PIF4 ubiquitination and degradation. Our data indicate that PIF4 phosphorylation is not a pre-requisite for BOP2-mediated PIF4 degradation (Figure 4e, Figure 5c). Interestingly, we also observed a very low level of PIF4-HA polyubiquitination in dark, which was clearly reduced in the pif4 bop2 background compared to the pif4 background ( Figure 5-figure supplement 1). These data indicate that although BOP-mediated PIF4 degradation is stronger in the light, it also occurs in darkness. However, the physiological consequences of this regulatory mechanism are particularly strong during deetiolation and growth at elevated temperatures. It has been shown that protein phosphorylation is absolutely required for the interaction of PIF3 with its E3 ligase LRB2 (20). Phosphorylation also affects the stability of PIF4 during brassinolide signaling (Bernardo-García et al., 2014), although it is unclear if this is absolutely required for PIF4 degradation. Our in vitro results using recombinant (unphosphorylated) E. coli-produced proteins suggest that phosphorylation is not required for BOP2 binding to, and ubiquitination of, PIF4. This is then in contrast to the situation with PIF3. This does not exclude that phosphorylation will further enhance BOP binding to PIF4 in vivo, or that it has an effect on ubiquitination and degradation following BOP binding. This will be an important question for future research. Our results show that BOP controls PIF4 stability in both light-and temperature responses suggesting that BOP might serve as a more general regulator of PIF4 accumulation, also affecting other pathways acting through PIF4.
Plants were grown in 16 hr light/8 hr dark, 12 hr light/12 hr dark, or 8 hr light/16 hr dark cycles on either soil mixed with vermiculite (3:1) or on ½ MS 0.8% agar plates without sugars. Plants grown on ½ MS were surface-sterilised and stratified for 5 days at 4˚C in darkness then subjected to 1 hr of white light to induce germination. After a further 23 hr in darkness at 22˚C they were placed in constant white light or monochromatic red, blue or far red light of different fluence rates. The light intensities were measured with a spectroradiometer. For each genotype in each experiment, only the five longest of 25 five-day-old seedlings were measured to minimize germination effects. The experiments were then repeated five times to give a total of 25 measurements. Kinematic assays were done as previously described with a small modification (Boutté et al., 2013). Seedlings were grown in the dark for 72 hr then transferred to 6 µmol m⁻² s⁻¹ red light. Photos were taken every 3 hr. Hypocotyl lengths, hook angles, and cotyledon angles were measured using ImageJ software. Statistical analysis and box-whisker plots were done using the GraphPad Prism software.
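As a minimal illustration of the measurement scheme described above (five independent experiments, 25 seedlings each, keeping only the five longest per genotype), the following Python sketch shows one way such data could be pooled and summarised. The numbers and function names are hypothetical and not taken from the study.

```python
import random
from statistics import mean, stdev

def pooled_hypocotyl_lengths(experiments, n_longest=5):
    """Pool the n longest hypocotyl measurements from each experiment.

    `experiments` is a list of lists, one list of hypocotyl lengths (mm)
    per independent experiment (here, 25 seedlings each)."""
    pooled = []
    for lengths in experiments:
        pooled.extend(sorted(lengths, reverse=True)[:n_longest])
    return pooled

# Hypothetical example: five experiments of 25 simulated measurements each.
random.seed(1)
experiments = [[random.gauss(8.0, 1.5) for _ in range(25)] for _ in range(5)]
pooled = pooled_hypocotyl_lengths(experiments)          # 25 values in total
print(len(pooled), round(mean(pooled), 2), round(stdev(pooled), 2))
```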
PIF4-HA stability assay
For the protein stability assays in response to red light, three-day-old pif4;PIF4p::PIF4-HA and pif4 bop2;PIF4p::PIF4-HA seedlings were grown in the dark on ½ MS medium without sugar and then subjected to 10 µmol m⁻² s⁻¹ of red light at 22˚C. The seedlings were harvested in a time course after being subjected to red light. For the PIF4 protein stability assays between Col-0 and bop1-5 plants, 8-day-old soil-grown seedlings in 12 hr light/12 hr dark conditions at 22˚C were harvested. Proteins were extracted from 20 seedlings for each line in a PBS buffer containing 0.1% w/v SDS, 0.1% v/v Triton X-100, 1 mM phenylmethylsulfonyl fluoride (Sigma, St. Louis, USA), 14 mM 2-mercaptoethanol, and 2x complete protease cocktail (Roche, Basel, Switzerland). Each extract was cleared by centrifugation at 4˚C at full speed for 10 min. Protein concentration was measured in a spectrophotometer with Lowry dye reagent (Bio-Rad, Hercules, USA). Around 30 µg of total protein was loaded on an 8% SDS-PAGE gel and blotted onto an Immobilon-P PVDF transfer membrane (Millipore, Billerica, USA). The resulting immunoblot was probed with the 16B12 anti-HA-POD monoclonal antibody (Roche, Basel, Switzerland) for detection of PIF4-HA protein or anti-PIF4 antibody from goat (Agrisera) for detection of PIF4 native protein. Band signals were visualized by the SuperSignal Western Blotting system (Thermo Scientific, Waltham, USA). The intensities of Western blot band signals were collected from the LAS-3000 Imaging System (Fuji, Minato, Japan) and were measured using ImageJ. Quantification was performed with three biological replicates for each line using anti-tubulin (T5168, Sigma, St. Louis, USA) or anti-RPN6 (26S proteasome non-ATPase regulatory subunit) antibodies (BML-PW8370, ENZO Life Sciences, New York, USA) as loading controls.
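The quantification step described above (target band normalised to a tubulin or RPN6 loading control, averaged over three biological replicates) reduces to a simple calculation. The sketch below uses hypothetical intensity values and is only meant to illustrate the normalisation, not the authors' exact analysis pipeline.

```python
from statistics import mean, stdev

def relative_abundance(target_intensities, loading_intensities):
    """Normalise target-band intensities (e.g. PIF4-HA) to the loading
    control (e.g. tubulin or RPN6) measured on the same blot."""
    return [t / c for t, c in zip(target_intensities, loading_intensities)]

# Hypothetical ImageJ band intensities from three biological replicates.
pif4_ha = [1520.0, 1380.0, 1610.0]
tubulin = [980.0, 1010.0, 995.0]
ratios = relative_abundance(pif4_ha, tubulin)
print(round(mean(ratios), 2), round(stdev(ratios), 2))
```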
PhyB stability assay
40 seeds per genotype/point were plated on ½ MS and cold-treated for 3 days in the dark. Germination was induced by 3 hr of 50 µmol m⁻² s⁻¹ red light. After this time, plates were placed into different light conditions (red, 0.1 or 15 µmol m⁻² s⁻¹) for 5 days. Seedlings were ground in liquid nitrogen (Qiagen TissueLyser) and resuspended in 150 µl of hot 2x FSB buffer. 15 µl of each sample was separated by SDS-PAGE (10% gel). Quantification was performed as described in Trupkin et al. (2007) using DET3 as a loading control.
Protoplast transfection and co-immunoprecipitation
Arabidopsis thaliana ecotype Columbia cell suspension cultures were used for protoplast isolation. Isolation and transfection of Arabidopsis protoplasts were performed essentially as described previously (Cruz-Ramírez et al., 2012). In brief, 5 × 10⁵ cells were transfected with 3 µg each of myc-BOP1, myc-BOP2 and HA-PIF4 or 3-9 µg of HA-CUL3A expression constructs. Transfected cells were cultured for 16 hr at RT, then collected by centrifugation and lysed in 50 µl of extraction buffer (EB) containing 25 mM Tris-HCl pH 7.8, 10 mM MgCl2, 5 mM EGTA, 75 mM NaCl, 60 mM beta-glycerophosphate, 2 mM DTT, 0.2% Igepal CA 630, 0.1 mM Na3VO4, 1 mM benzamidine and protease inhibitors (Protease Inhibitor Cocktail for plant cells, Sigma). For in vivo BOP2-CUL3A interaction analysis, 4-day-old dark-grown seedlings were harvested from Col-0 and 35S::myc-BOP2 transgenic plants. For co-immunoprecipitation assays, 50-75 µg of total proteins were incubated for 2 hr at 4˚C with 1.5 µg of antibody against the c-Myc epitope (Covance, clone 9E10) in a total volume of 100 µl of EB buffer supplemented with 150 mM NaCl and 0.2 mg ml⁻¹ BSA. Immunocomplexes were then adsorbed on 10 µl of Protein G-Sepharose matrix (GE Healthcare), washed three times with Tris buffered saline containing 50 mM Tris-HCl pH 8.0, 150 mM NaCl, 5% glycerol, 0.1% Igepal CA-630 and eluted by boiling in 30 µl of 1.5x Laemmli sample buffer. Proteins were then resolved by SDS-PAGE and blotted to PVDF transfer membrane (Immobilon-P, Millipore). The epitope-tagged proteins were probed with anti-HA-peroxidase (3F10, Roche) or chicken anti-c-Myc antibodies (A2128, Invitrogen, Carlsbad, USA) and detected with the SuperSignal West Pico Chemiluminescent Substrate. The native Cul3A proteins were probed with an anti-AtCul3 polyclonal antibody (Enzo Life Science). To assess abundance of tagged proteins in the supernatants of immunoprecipitation reactions, proteins were precipitated with 10% trichloroacetic acid, resolved by SDS-PAGE and immunoblotted with anti-HA-peroxidase antibody.
Bimolecular fluorescence complementation assay
The full-length coding sequences (CDS) of AtBOP2 and AtPIF4 were amplified and cloned into the pDONR207 vector by BP Clonase II (Invitrogen) to construct the entry clones pENTR207AtBOP2 and pENTR207AtPIF4. Then pENTR207AtBOP2 and pENTR207AtPIF4 were recombined into the pDEST-VYCE(R)GW and pDEST-VYNE(R)GW destination vectors, respectively (Gehl et al., 2009). The binary vectors expressing the fusion genes, VenusC-BOP2 and VenusN-PIF4, were transferred into Agrobacterium tumefaciens strain GV3101 (pMP90). The constructs expressing the fusion genes VenusN-CNX6 and VenusC-CNX6 were transferred into the same strain and used as controls. Then, the fusion genes were co-transfected in different combinations into 4-week-old Nicotiana benthamiana leaves by agroinfiltration as previously described (Gehl et al., 2009). Fluorescence of the lower epidermis of leaf discs 2 days after infiltration was visualized with a TCS SP2 confocal microscope (Leica, Wetzlar, Germany) and a 488 nm Ar/Kr laser line. Venus N/C fluorescence was detected with the excitation/emission combination 514 nm/525-535 nm. The chlorophyll auto-fluorescence was detected with emission at 685-700 nm.
Yeast two-hybrid analysis
The full-length CDS of AtBOP2 and AtPIF4 were cloned into pDEST32 and pDEST22 (Invitrogen, Carlsbad, USA) using the Gateway system to generate the GAL4-DB-BOP2 bait and GAL4-AD-PIF4 prey constructs, respectively. Then both constructs were co-transformed into the MaV203 yeast strain, which contains single copies of each of three reporter genes (HIS3, URA3 and lacZ). Yeast cells harboring the two constructs were grown on leucine and tryptophan dropout media for transformant selection, and on leucine, tryptophan and histidine dropout media for interaction selection. β-galactosidase quantitative assays were performed as described in the Clontech Yeast Protocols Handbook (http://www.clontech.com/SE/Support/ProductDocuments?sitex=10023:22372:US). The combinations with the two empty vectors, pDEST32 and pDEST22, were used as negative controls.
In vitro pulldown assay
His-GST-PIF4 and His-MBP-BOP2 proteins were generated from Escherichia coli BL21 cells and purified using Ni-NTA Agarose (Qiagen, Hilden, Germany) according to the manufacturer's protocol. 1 µg of GST (Santa Cruz Biotechnology, Dallas, USA) or GST-PIF4 was first incubated with 10 µl Glutathione Sepharose 4B beads for 1 hr at 4˚C in PBS containing 0.5% Triton X-100. Then the beads were blocked for 30 min in PBS buffer containing 1% milk powder and 1% Triton X-100. After 5 min of centrifugation at 500 × g, the supernatant was discarded. The beads were then washed two times with PBS buffer containing 1% Triton X-100 and incubated with PBS buffer containing 2% BSA and 1% Triton X-100 for another 30 min. After washing with PBS buffer containing 1% Triton X-100, 1 µg of MBP (New England Biolabs, Ipswich, USA) or MBP-BOP2 was added to the beads and incubated in 300 µl binding buffer (50 mM Tris-Cl pH 7.6, 100 mM NaCl, 1 mM EDTA, 1% Triton X-100, 0.5 mM DTT, 5% glycerol) at 4˚C for 2 hr. After washing three times with the binding buffer, proteins were eluted with 2x Laemmli loading buffer and then subjected to Western blot analysis with anti-MBP antibodies (New England Biolabs, Ipswich, USA).
TUBEs analysis
The immunoprecipitation of ubiquitinated proteins from pif4;PIF4p::PIF4-HA and pif4 bop2;PIF4p::PIF4-HA seedlings using Tandem Ubiquitin Binding Entities (TUBEs) agarose (tebu-bio, Le Perray-en-Yvelines, France) was performed as previously described with slight modification (Ni et al., 2014). 3-day-old dark-grown seedlings were irradiated with 6 µmol m⁻² s⁻¹ red light for 2 min followed by 8 min in the dark before harvesting. Proteins were extracted with a buffer containing 100 mM MOPS pH 7.6, 150 mM NaCl, 0.1% NP40, 1% Triton X-100, 0.1% SDS, 20 mM iodoacetamide, 1 mM PMSF, 2 mg/l aprotinin, 40 µM MG132, 5 µM PR-619, 1 mM 1,10-phenanthroline, and 2X Complete protease inhibitor cocktail and PhosStop cocktail (Roche). 30 µl Agarose-TUBE2 was incubated with 2 mg total protein from each sample for 6 hr at 4˚C. The agarose beads were washed with extraction buffer four times and eluted with 2x Laemmli loading buffer, then subjected to Western blot analysis with the 16B12 anti-HA-POD antibodies (Roche) for detection of PIF4-HA and anti-ubiquitin antibodies (sc-8017, Santa Cruz Biotechnology) for detection of ubiquitinated protein. The intensities of Western blot band signals were collected from the LAS-3000 Imaging System (Fuji) and measured using ImageJ. Quantification was performed with the measurements of 6 independent experiments using anti-ubiquitin antibodies as loading controls. Statistical analysis and box-whisker plots were done using the GraphPad Prism software.
In vitro ubiquitination assays
His-GST-PIF4, His-GST-GFP, His-MBP-BOP2, and His-MBP-GFP were generated from Escherichia coli BL21 cells and purified using Ni-NTA Agarose from Qiagen according to the manufacturer's protocol. Human Cullin3/Rbx1 recombinant proteins were purchased from Ubiquigent (UK). Neddylation of Cullin3 was performed as previously described using a NEDD8 Conjugation Reaction Buffer Kit from R&D Systems Europe (Abingdon, UK) (Duda et al., 2008). The recombinant human ubiquitin-activating enzyme (UBE1) and ubiquitin-conjugating enzyme (UbcH5b) were purchased from R&D Systems Europe. Ubiquitination reactions were performed as described previously with slight modification (Ni et al., 2014). About 100 nM UBE1, 1 µM UbcH5b, 100 nM Cul3/Rbx1, 290 nM GST-PIF4, and 500 nM MBP-BOP2 were incubated at 30˚C for 1 hr in a buffer containing 40 µM biotin-ubiquitin, 5 mM Mg-ATP, 50 mM Tris-HCl pH 7.6, 200 mM NaCl, 10 mM MgCl2, 1 unit of inorganic pyrophosphatase, and 1 mM DTT. Reactions were then pulled down with 10 µl Glutathione Sepharose 4B beads. GST-PIF4 and ubiquitinated proteins were detected by Western blot analysis using anti-GST antibodies and streptavidin-HRP conjugates, respectively. Anti-MBP antibodies were used for detection of the MBP proteins after pulldown. The reactions without E1, E2, or Cul3/Rbx1, and the reactions with MBP-GFP or GST-GFP, were used as negative controls. The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication. | 8,818.6 | 2017-08-22T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Applications of Information Theory in Rock Engineering
Rock engineering relies heavily on empirical systems to identify significant parameters influencing rock mass behaviour. The empirical and inductive nature of rock engineering design is such that it is not possible to eliminate uncertainty. One way of managing uncertainty during the design process is by collecting good quality data in a standardized and objective manner. However, difficulties arise when defining and determining what constitutes good quality data. We believe that information theory and the concept of Shannon’s entropy could be effectively used to better audit rock engineering data. This paper builds on established concepts by expanding and refining the application of information theory to rock mass classification systems, specifically the rock mass rating and the Q-system. One of the objectives is to provide and showcase a method whereby information auditing is used to flag uncertain (or poor quality) data. In the future it is not difficult to envision data collection processes that include improved core logging and data processing where imaging technologies are coupled with machine learning processing capability. Such an approach requires more quantitative and objective rock mass descriptions; in this context it is easy to appreciate the role that information theory might have in the future in rock engineering.
Introduction
Uncertainty in rock engineering is unavoidable, whether geological, parameter, model, and/or human uncertainty. As such, it is imperative that rock engineers develop methods to manage uncertainty during the design process, especially as the digitalization trends increase. One such method is to collect data and then quantitatively determine what constitutes good quality data. Information theory, such as the concept of Shannon's entropy [1], can be applied to rock engineering as a way to better audit rock engineering data and determine the quality of data. The field of information theory was originally developed for communications and has been extended to other fields, such as computer science and machine learning; however, its use in rock engineering has been limited. This paper will (i) provide a review of information theory concepts relevant to rock engineering and (ii) build upon the concepts and examples introduced in [2] by providing and showcasing a method whereby information auditing and assessment are used to flag uncertain (or poor quality) rock mass classification values, specifically the rock mass rating (RMR) and Q-system.
Review of information theory concepts
The field of information theory began with Claude Shannon's 1948 paper titled "A Mathematical Theory of Communication", where he introduced the fundamental concepts of information theory in the context of communication. One of the goals of the paper was to "find a measure of how much choice is involved in the selection of the event or how uncertain we are of the outcomes" [1]. This measure was defined as entropy, and is expressed as:

H(X) = -∑ pi log(pi), summed over i = 1, ..., n    (1)

where H(X) is the entropy of a set of n possible events of random variable X whose probabilities of occurrence are p1, p2, …, pn. As defined in Equation (1), entropy becomes a measure of the uncertainty associated with the random variable X and a higher entropy indicates that there is more uncertainty.
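Equation (1) is straightforward to evaluate numerically. The following Python sketch (an illustration, not part of the original paper) computes the entropy of a discrete probability distribution; a distribution with one certain outcome gives H = 0, and a uniform distribution over n events gives the maximum log(n).

```python
import math

def shannon_entropy(probabilities, base=2):
    """Entropy H(X) = -sum(p_i * log(p_i)) of a discrete distribution.
    Zero-probability events contribute nothing to the sum."""
    if abs(sum(probabilities) - 1.0) > 1e-9:
        raise ValueError("probabilities must sum to 1")
    return -sum(p * math.log(p, base) for p in probabilities if p > 0)

print(shannon_entropy([0.5, 0.5]))    # 1.0 bit
print(shannon_entropy([0.25] * 4))    # 2.0 bits: maximum for 4 equally likely events
```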
Note that H(X) is not a function of the random variable X; it is strictly a function of probability and the X denotes that it is the entropy of random variable X. Additionally, Equation (1) is only applicable to a set of discrete probabilities. Shannon's concept of informatic entropy is similar to the concept of entropy in statistical mechanics, such as Boltzmann's theorem for entropy.
The properties of entropy are outlined in [1] and are summarized below: 1. If one of the probabilities pi is 1 and the remaining are 0 in a set of possible events, there is no entropy because we are certain of the outcome (i.e., H = 0). 2. H is never negative. 3. The most uncertain situation occurs when the probabilities in a set of possible events are the same (i.e., in a set of n events and subsequently n probabilities, H is maximum when the probability of each event is 1/n). The maximum value of H is equal to log(n).
4. The joint entropy of two discrete random variables X and Y, where X has m possibilities and Y has n possibilities, is shown in Equations 2 and 3:

H(X,Y) = -∑ p(i,j) log p(i,j), summed over all pairs (i,j)    (2)

H(X,Y) ≤ H(X) + H(Y)    (3)

where p(i,j) is the joint probability of occurrence of i for random variable X and j for random variable Y. If X and Y are independent, their joint entropy is the summation of their individual entropies (the equality, or maximum, case in Equation 3). 5. Building on point 3 above, "averaging" (or equalizing) the probabilities pi increases H. 6. The conditional entropy between two discrete random variables X and Y (similar to point 5) that are not necessarily independent measures how uncertain we are of Y when we know X. The conditional entropy of Y, which is the average entropy of Y for each value of X, is shown in Equation (4) below:

HX(Y) = -∑ p(i,j) log pi(j), summed over all pairs (i,j)    (4)

where pi(j) is the conditional probability of j given i.
In other words, the conditional entropy of Y is also the difference between the joint entropy of X and Y and the entropy of X, i.e., HX(Y) = H(X,Y) - H(X). 7. Having knowledge of random variable X will never increase the entropy (uncertainty) of random variable Y. Rather, knowledge of random variable X will either a) decrease the entropy (uncertainty) of random variable Y if X and Y are not independent of each other or b) leave it unchanged if X and Y are independent of each other. Based on the above properties of H, the entropy of a system of variables that are all independent of one another is the summation of the entropy of each variable.
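The joint and conditional entropy relations in points 4-7 can be checked numerically. The sketch below uses an arbitrary example joint distribution of two dependent binary variables and verifies that HX(Y) = H(X,Y) - H(X) and that conditioning does not increase uncertainty.

```python
import math

def entropy(ps):
    return -sum(p * math.log(p, 2) for p in ps if p > 0)

# Example joint distribution p(i, j) of two dependent binary variables X and Y.
p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

h_xy = entropy(p_xy.values())            # joint entropy H(X,Y)
h_x = entropy([0.5, 0.5])                # marginal entropy H(X): p(X=0) = p(X=1) = 0.5
h_y_given_x = h_xy - h_x                 # conditional entropy H_X(Y) = H(X,Y) - H(X)
print(round(h_xy, 3), round(h_x, 3), round(h_y_given_x, 3))
# H_X(Y) is about 0.722 bits, below H(Y) = 1 bit: knowing X reduces uncertainty about Y.
```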
Review of previous work
The use of information theory in rock engineering design has so far been limited. [2] published a paper outlining various uses of information theory in rock engineering design, including the analysis of the uncertainty remaining about a variable being estimated with an empirical relation. They applied information theory to determine the most efficient site investigation methodology for rock mass characterization. The underlying theme of their applications is using Shannon's concept of entropy to quantify the level of uncertainty in geotechnical parameters as a method of information auditing.
With respect to rock mass characterization, [2] provided an example of applying entropy to Bieniawski's rock mass rating (RMR) system to determine the information content (i.e., how much uncertainty) of each RMR value. Using a Monte Carlo analysis, they were able to randomly generate combinations of RMR and compute their entropy. This allowed for the generation of a plot of the maximum and minimum entropy value of each RMR, which they recommended should be used as a reference for checking the entropy of a specific RMR combination against the overall range of entropies. Of significance is the variability in the maximum and minimum entropy values for RMR values; the variability is much greater for RMR values between 20 and 80. This can be attributed to the higher number of combinations (or pathways) for these RMR values.
While information theory has had limited direct use in rock engineering design, its prevalence in machine learning has resulted in more frequent indirect use as rock engineering moves to digitalization. Shannon's entropy is commonly found in loss functions -a measure of how good a prediction is using a supervised machine learning model (i.e., how far the model prediction is from its label). Examples include cross-entropy and Kullback-Leibler divergence, both of which are employed in neural networks [3]. Additionally, a decision tree can use "information gain" to control how it makes decisions at its nodes; using "information gain" tells the decision tree to go with the split that decreases the entropy of the label before and after the split [4].
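As an illustration of the "information gain" criterion mentioned above, the short sketch below computes the entropy reduction produced by a candidate decision-tree split; the labels are hypothetical rock mass classes and the code is not tied to any particular machine learning library.

```python
import math

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    counts = {lab: labels.count(lab) for lab in set(labels)}
    return -sum((c / n) * math.log(c / n, 2) for c in counts.values())

def information_gain(parent, left, right):
    """Entropy reduction achieved by splitting `parent` into two children."""
    n = len(parent)
    weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(parent) - weighted

parent = ["good"] * 5 + ["poor"] * 5
left, right = ["good"] * 4 + ["poor"], ["good"] + ["poor"] * 4
print(round(information_gain(parent, left, right), 3))   # positive gain: the split reduces entropy
```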
The increased use of machine learning in rock engineering has highlighted the importance of data quality and the need for a unified and quantitative method of determining the quality of our data. One solution is to utilize the concept of entropy from information theory to quantify the level of uncertainty in our measurements as done by [2].
Rock mass classification systems and implications for machine learning
Building upon the work by [2], a site-specific method for auditing rock mass classification values, specifically the rock mass rating (RMR) system proposed by [5], [6], and [7] and the Q-system by [8], is proposed in the following section. The RMR system proposed by is the summation of ratings given to 5 parameters: 1. Strength of intact rock 2. Rock quality designation (RQD) 3. Spacing of discontinuities 4. Condition of discontinuities 5. Groundwater The difference between the 1973, 1976, and 1989 versions of RMR lie in the weights of the 5 parameters and the ratings used. The RMR value can also be adjusted for the discontinuity orientation with respect to the engineering structure (tunnels/mines, foundations, slopes) being designed. The Q-system proposed by [8]
Q= RQD Jn
Jr Ja
Jw SRF
Where RQD is the rock quality designation, Jn is the joint set number, Jr is the joint roughness number, Ja is the joint alteration number, Jw is the joint water reduction factor, and SRF is the stress reduction factor. As outlined by [2], there are several ways to obtain the same RMR value. This also applies to the Q-system; each RMR or Q value has several different combinations of the ratings of their respective parameters. Because both the RMR and Q-system are a system of variables, the entropy of each RMR or Q value is the summation of the entropy of their respective parameters. For RMR and Q values with multiple combinations, they will have multiple different entropy values corresponding to a specific combination.
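For concreteness, both indices can be computed directly from their parameter ratings, as in the following sketch; the example ratings are hypothetical and no orientation adjustment is applied to RMR.

```python
def rmr_value(strength, rqd, spacing, condition, groundwater):
    """RMR as the sum of the five parameter ratings (orientation adjustment omitted)."""
    return strength + rqd + spacing + condition + groundwater

def q_value(rqd, jn, jr, ja, jw, srf):
    """Q-system value: (RQD/Jn) * (Jr/Ja) * (Jw/SRF)."""
    return (rqd / jn) * (jr / ja) * (jw / srf)

# Hypothetical ratings for a single mapped interval.
print(rmr_value(7, 13, 10, 20, 10))                   # RMR = 60
print(round(q_value(75, 9, 1.5, 1.0, 1.0, 1.0), 1))   # Q = 12.5
```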
The site-specific method for auditing RMR and Q values is outlined below: 1. Determine all possible combinations of RMR and Q, keeping in mind the geological constraints and the discrete nature of the ratings of the parameters. 2. Determine the discrete probability distributions of the ratings for each parameter in the site data. Update the combinations of RMR and Q determined from the previous step by removing combinations with a probability of zero. 3. Determine the entropy of each classification system parameter based on the probabilities determined in the previous step. 4. Add the entropy of the parameters of each classification system to determine the overall entropy of that specific classification value. This step has been simplified by assuming that all parameters are independent of one another, resulting in the maximal entropy for that combination. 5. Once the entropy for all combinations has been determined, the maximum and minimum entropy for each classification value are plotted, similar to what was done in Mazzoccola et al. (1997). 6. The entropy of the site data can be compared against the plot of maximum and minimum entropy for all possible combinations at that site to determine if it plots close to the maximum or minimum entropy. Those site data-points that plot close to the maximum entropy have a higher uncertainty and should be examined closely and used with caution. A schematic implementation of these steps is sketched after this list.
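The sketch below implements the enumeration and auditing steps for a reduced two-parameter example. It assumes that each combination is scored by summing the -p·log2(p) terms of its observed ratings, which is one possible reading of the entropy summation described above; the parameter ratings and site probabilities are hypothetical.

```python
import math
from itertools import product
from collections import defaultdict

def entropy_term(p):
    return -p * math.log(p, 2) if p > 0 else 0.0

def audit_rmr(combinations, rating_probs):
    """Track the min/max entropy reached by each RMR value.

    `combinations` - iterable of tuples of ratings (one per parameter)
    `rating_probs` - list of dicts mapping rating -> site probability
    """
    envelope = defaultdict(lambda: [float("inf"), float("-inf")])   # RMR -> [min, max]
    for combo in combinations:
        if any(rating_probs[i].get(r, 0.0) == 0.0 for i, r in enumerate(combo)):
            continue                     # drop combinations not observed at the site
        h = sum(entropy_term(rating_probs[i][r]) for i, r in enumerate(combo))
        rmr = sum(combo)
        envelope[rmr][0] = min(envelope[rmr][0], h)
        envelope[rmr][1] = max(envelope[rmr][1], h)
    return envelope

# Toy example with two parameters only and hypothetical site probabilities.
probs = [{4: 0.7, 7: 0.3}, {8: 0.2, 13: 0.8}]
combos = product([4, 7], [8, 13])
for rmr, (h_min, h_max) in sorted(audit_rmr(combos, probs).items()):
    print(rmr, round(h_min, 3), round(h_max, 3))
```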
An example analysis using field mapping data was performed and is shown below. Combinations for RMR76 and Q were generated using a Python script, keeping in mind the relationship between RQD and discontinuity spacing, the relationship between RQD and the number of joint sets, the rock wall contact, and the discrete nature of the classification ratings. Figure 1 show the distributions of all possible combinations of RMR76 and Q with respect to their respective rock mass classes. Similar to the combination analysis in [2], the distribution of RMR follows a normal distribution, with "Fair" and "Good" rock mass classes having the highest number of combinations. Unlike RMR, the Q-system follows a negative exponential distribution, with "Extremely poor" and "Very poor" rock mass classes having the highest number of combinations. Using site-specific data to determine the discrete probabilities of the ratings of each RMR and Q parameter, the plots in Figure 1 are then updated to reflect the site conditions by showing the distribution of all possible combinations of RMR76 and Q for this example site, shown in Figure 2. The site-specific data used were obtained from window mapping and include detailed rock mass classification information. The next step is to calculate the entropy of each parameter rating of all possible RMR76 and Q combinations for the example site by using Equation 1. The entropy of each RMR76 and Q value is the summation of the entropy of their respective parameter ratings. Following the calculation of the entropy of the RMR76 and Q values, the maximum and minimum entropy for each value are plotted, as shown in Figures 3 and 4. The entropy of RMR and Q values obtained during the site investigation can be checked against these plots to determine their entropy relative to the maximum and minimum entropy for that specific classification value. RMR or Q values with a high entropy (close to the maximum entropy for that specific value) have a high uncertainty; at least one of their parameters has a high entropy (unexpected value). These unexpected RMR and Q values are "flagged" and should be 1) re-examined to identify the parameter(s) with high entropy and determine if the unexpectedness is due to human/lab errors instead of unexpected rock mass conditions and 2) treated with caution if the high entropy is due to unexpected rock mass conditions. Figures 3 and 4 on the following page show that the greatest difference in entropy for an RMR or Q value corresponds to those values with the greatest number of combinations. For RMR, the greatest variation in entropy can be found in the "Fair" rock mass class, while the greatest variation in entropy for Q can be found in "Poor" and "Fair" rock mass classes. Additionally, both Figures 3 and 4 show oscillations in both the minimum and maximum entropy values for an RMR or Q value; however, Figure 4 (Q values) shows much greater oscillations. The oscillations are greatest for entropy values of Q values between 1 -10, which corresponds to the same range with the greatest difference in entropy. These oscillations indicate that even within rock mass classes, certain rock mass classification values are much more uncertain than the others. Incorporating this information auditing process during the design process could make it easier to flag uncertain data, which is becoming increasingly important as rock engineering moves to machine learning and other advanced data analysis techniques. 
One of the major limitations of any computer model, especially machine learning models, is the data quality. In the context of machine learning, a model trained with poor quality data will output poor quality results; the success and quality of the model is dependent on the quality of the data used. As a result, it is imperative that rock engineers focus on collecting better quality data in a less subjective manner. The information auditing process outlined above provides a method to quantify uncertainty, making it easier to find and resolve uncertain and poor quality data during pre-processing.
Additional benefits of using entropy to quantify the uncertainty in rock mass classification values include identifying "extreme" (i.e., unexpected) rock mass conditions and shifting the focus from the classification value to the actual parameters in classification systems.
Limitations in rock engineering
The method and analysis outlined in the previous section are only applicable when RMR and the Q-system are used in a fully standardized manner consistent with how they were originally devised. Variations in how RMR and the Q-system are applied - for example treating RMR rating values as continuous instead of discrete, interpolating between ratings, or using company guidelines that introduce modifications to the original system - render the entropy approach, along with machine learning algorithms that use it, void. The fault lies with the inherent subjectivity which is present from rock mass characterisation through rock mass classification. A clear example is given by personal preferences that indicate which version of the RMR table is to be used; under these circumstances different entropy values would be determined for the same rock mass when using RMR76 and RMR89. Because entropy is a measure of uncertainty, the difference between the entropy values associated with RMR76 and RMR89 (for a given rock mass) would then be a manifestation of human uncertainty, impossible to remove regardless of the quality of the data that have been collected.
Conclusions
The shift to digitalisation in rock engineering has highlighted the importance of data quality and the ability to determine the quality of the data in a quantitative manner. This paper has presented an updated approach for auditing of rock mass classification values to help identify uncertain (poor quality) data by using the concept of Shannon's entropy from information theory. By quantifying the uncertainty of rock mass classification values, uncertain data can be flagged to be (i) re-examined to identify the parameter(s) with high entropy and determine if the unexpectedness is due to human/lab errors instead of unexpected rock mass conditions and (ii) treated with caution if the high entropy is due to an unexpected rock mass. This method also emphasizes the importance of focusing on the parameters in rock mass classification systems rather than solely on the classification value, allowing for a better understanding of the rock mass quality. As data collection processes improve and begin to include imaging technologies and machine learning, it is necessary for rock mass descriptions to become more quantitative and less subjective; in this context it is easy to appreciate the role that information theory might have in future rock engineering.
"Geology",
"Computer Science"
] |
The Undrained Strength of Soft Clays Determined from Unconventional and Conventional Tests
The laboratory fall cone test, considered an unconventional test, was performed to estimate the undrained shear strength of undisturbed samples of Brazilian coastal soft clays with different plasticity index values. The undrained shear strength determined by the laboratory fall cone test was compared with the strength determined by conventional field and laboratory tests commonly used to estimate this parameter in cohesive soils: piezocone test, field vane test, unconfined compression test, unconsolidated undrained triaxial compression test and laboratory vane test. The fall cone test undrained shear strength results presented good agreement with the laboratory vane test strength results and reasonable agreement with the unconfined compression test strength results. The strength results obtained by laboratory tests were compared with the continuous strength profile estimated from the piezocone test calibrated using the field vane test, and presented good agreement with the fall cone test and laboratory vane test strength results. The normalised undrained shear strength was compared with some empirical correlations reported in the literature based on the plasticity index, and some similarity in behaviour was verified.
Introduction
The properties of the soil are crucial to perform a geotechnical engineering design.Estimating geotechnical parameters is complex because of the difficulty in obtaining reliable experimental data and because of the natural variability of the subsoil.In soft cohesive soils, the determination of these parameters is considered to be even more complex, as it is necessary to understand not only the soils strength properties but also its deformability properties and hydraulic conductivity.For short-term stability analyses in these soils, the undrained shear strength S u is the most important design parameter (Shogaki, 2006).
Many factors affect the shear strength of clays, such as the types of minerals, humidity, stress history, draining during shear, load rate and soil structure, and it is not justifiable to attempt to attribute a unique shear-strength value to any given clay (Sridharan et al., 1971).Moreover, according to Lunne et al. (1997b), there is no unique value for S u in situ; this value depends on the mode of rupture, the anisotropy of the soil, the deformation rate and the stress history.
The standard tests to determine the shear strength of soils are typically classified as either laboratory or field tests.Field tests generally supply measurements of the soil strength that can be acquired more rapidly and in greater quantity than the measurements afforded by laboratory tests.However, they provide less precise measurements and, in some cases, are based on empirical correlations (Alshibli et al., 2011).
The conventional tests to determine S u in the laboratory are unconfined compression test (UCT), unconsolidated undrained triaxial compression test (UUT) and laboratory vane test (LVT), and in situ are piezocone test (CPTU), field vane test (FVT) and pressuremeter test (Kempfert & Gebreselassie, 2010).S u depends on the testing method, among other factors, thus to understand the relations between the strengths determined by each test and the reliability of these determinations is important when S u is a relevant parameter (Watabe & Tsuchida, 2001).
The fall cone test (FCT), considered unconventional test in many countries, was developed between 1914 and 1922 by the Geotechnical Commission of the Swedish State Railways and, compared with other test methods, it is considered to be a very simple method, which has led to its extensive use in Scandinavia (Hansbo, 1957).Although it was originally developed to estimate the strength of remoulded cohesive soils, it became widely used as a standard method of determining the liquid limit of clays (Koumoto & Houlsby, 2001), having already been included in the British, Swedish, Canadian and Japanese standards (Claveveau-Mallet et al., 2012;Feng, 2000;Tanaka et al., 2012).
The present study shows the result of five conventional and commonly applied tests for the S u determination -CPTU, FVT, UCT, UUT and LVT -and compares these results with those of the fall cone test (FCT), also known as the Swedish cone test.
The strength results obtained by laboratory tests were compared with the strength profile obtained from CPTU with the cone factors (N kt and N Du ) calibrated using the FVT, considered a referential test to obtain reliable values of S u (Schnaid & Odebrecht, 2012).The CPTU was adopted because it supplies a continuous profile of S u .Also it has a strong theoretical foundation and several well-known and comprehensive publications are available concerning its interpretation (Robertson, 2009).
This study also compares the normalised undrained shear strength results with some empirical correlations reported in the literature based on the plasticity index (I P ).
Soil
The investigated site is located in the city of Vila Velha, Espirito Santo State, in the coastal region of Brazil, near to Rio de Janeiro, composed of recent fluvial, fluvial-marine and fluvial-lacustrine sediments.The soft clay deposits in Brazil found all along the coast-line were originated in the Quaternary period.The local subsoil was formed by cycles of erosion and sedimentation which occurred during periods of regression and transgression of sea level, between the Pleistocene, 123000 years ago, and the Holocene, 5100 years before present (Suguio, 2010).
The investigated deposit is formed of a thick layer of soft clay, situated in an area near to a highway construction site, whose subsoil underwent rupture during the embankment operations.Standard penetration tests (SPT) and piezocone test (CPTU) performed locally indicate that the site (Fig. 1) is composed of a subsurface layer of a very soft organic clay, with water level 0.50 m below the surface, over a layer of very soft marine clay with thickness of 15.0 m, followed by a layer of sand.Fig. 1 also presents the clay layer SPT blow count (N value ) of zero values, low values of q t and f s , obtained from CPTU, and water content (w n ) values above the liquid limits (w L ) determined by characterization tests in SPT samples.
Testing program
Piezocone test (CPTU) and field vane tests (FVT) were performed near to the Standard Penetration Test location whose results are indicated in Fig. 1. CPTU was performed between the depths of 0.50 and 20.0 m, with three dissipation tests being performed at depths of 6, 7 and 12 m. The field vane tests (FVT) were performed at depths between 7.0 and 12.0 m, and six undisturbed samples were also collected. The sampling procedures, packaging and transport of the undisturbed samples followed the requirements of the Brazilian standard ABNT (1997).
The undisturbed sampling tubes were segmented as illustrated in Fig. 2, allowing the FCT tests to be performed on the faces of all segments.The FCT and LVT were performed with the soil sample kept in the segmented sampling tube.Subsequently the sample was extracted for moulding the specimens to UUT, UCT and oedometer (OCT) tests.
Fall cone test (FCT)
The test consists of dropping a standard cone onto the soil under its own weight and, after 5 seconds, measuring the penetration depth of the cone into the soil. From the penetration depth, the undrained shear strength in both undisturbed (S u) and remoulded (S ur) conditions can be estimated by the following equation:

S u = K W / d²    (1)

where W is the mass of the cone in grams, d is the penetration depth of the cone in the soil in mm, and K is an empirical constant that depends on the cone tip angle (β) and on the cone roughness (x). Hansbo (1957) estimated the value of K by comparing the FCT results with FVT and LVT, with K equal to 1.0 and 0.30 for cone angles of 30° and 60°, respectively, values that are used in the Canadian standard CAN (2006) to estimate S u and S ur (Claveveau-Mallet et al., 2012). Wood (1985, apud Wood, 1990) found mean K values of 0.85 and 0.29 for cone angles of 30° and 60°, respectively, by comparing results between FCT and LVT. The European standard ISO (2004) indicates K values of 0.80 to 1.0 for a cone angle of 30° and 0.27 for 60°. Houlsby (1982) presented a theoretical analysis of the cone test for strengths in the same range as those that had already been determined empirically. This analysis reinforces the use of empirical correlations and the relevance of certain variables in the determination of the constant K, such as the cone tip angle and its roughness. Koumoto & Houlsby (2001) analysed the cone penetration mechanism into the soil, introducing the concept of dynamic strength for static results. They compared their theoretical K values with those obtained experimentally by other authors, concluding that there was good agreement in the results obtained for a cone with an angle of 60°, whereas for a 30° cone the theoretical values were slightly higher than those obtained experimentally.
The fall cone tests were performed on the faces of the soil sample kept in the segmented sampling tube following the recommendations of the European standard ISO (2004). The cone has a weight (W) of 80 g, a cone tip angle of 30° and a mean roughness of 0.4 mm. Five measurements of the depth (d) were performed on each face of the segmented sample indicated in Fig. 2, keeping at least 25 mm distance between each point and from the edge of the sampler. Measurements deviating by more than 10% from the mean value were excluded from the estimate of S u, and a K value of 0.80 was adopted, as recommended by the standard ISO (2004).
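The fall cone evaluation described above (Equation 1 with K = 0.80 and an 80 g cone, discarding outlying readings) can be sketched as follows. The factor g used to express the result in kPa and the coding of the exclusion rule as a 10% deviation from the mean are assumptions of this illustration, not statements from the paper.

```python
from statistics import mean

def fall_cone_su(penetrations_mm, cone_mass_g=80.0, k=0.80, g=9.81):
    """Undrained shear strength (kPa) from a set of fall cone penetrations.

    Applies S_u = K * g * W / d^2 with W in grams and d in mm, after discarding
    readings that deviate more than 10% from the mean of the five measurements."""
    d_mean = mean(penetrations_mm)
    kept = [d for d in penetrations_mm if abs(d - d_mean) <= 0.10 * d_mean]
    return mean(k * g * cone_mass_g / d ** 2 for d in kept)

print(round(fall_cone_su([6.1, 5.9, 6.3, 6.0, 7.5]), 1))   # the 7.5 mm reading is discarded
```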
Laboratory vane test (LVT)
The procedure to perform the LVT followed the recommendations of the American standard ASTM (2010), including those concerning the calibration of the springs. The vane has a height of 25.4 mm and a diameter of 12.7 mm, corresponding to the 2:1 ratio that is recommended to reduce the effects of anisotropy on the shear strength. The vane was inserted into the soil sample kept in the segmented sampling tube, to a depth equal to twice its height, for measuring the undisturbed strength (S u). Two tests were performed for each segmented sample indicated in Fig. 2, on opposite faces. The remoulded conditions were created after the peak strength was reached: the vane was manually rotated through ten complete turns, and the test was then repeated. The S u and S ur values were estimated based on the following equation, valid for a vane height equal to twice the diameter:

S u = 0.86 T / (π D³)    (2)

where T is the maximum torque applied by the spring and D is the diameter of the vane, in units consistent with the strength.
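Equation (2) can be applied directly to the calibrated torque readings. The sketch below uses the laboratory vane dimensions quoted above and a hypothetical torque value.

```python
import math

def vane_su(torque_nm, diameter_m):
    """Undrained shear strength (Pa) for a vane with H = 2D: S_u = 0.86 T / (pi D^3)."""
    return 0.86 * torque_nm / (math.pi * diameter_m ** 3)

# Laboratory vane of D = 12.7 mm; a hypothetical peak torque of 0.12 N*m gives Su in kPa:
print(round(vane_su(0.12, 0.0127) / 1000, 1))   # about 16 kPa
```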
The relationship between vane torque T and spring deflection measurement in the test was established through the calibration procedure.
Unconfined compression test (UCT)
The UCT was performed following the recommendations of the American standard ASTM (2006).Specimens were moulded for each segmented sampling tube indicated in Fig. 2, except for sampling from 7.0 m depth that was highly fissured and was discarded.They were prepared with a constant height to diameter ratio of 2 and unconfined compression tests with controlled strain were performed.The S u(UCT) value was calculated as half of the unconfined compression strength (q u ).
Unconsolidated Undrained Triaxial Compression Test (UUT)
The UUT was performed in accordance with the recommendations of the American standard ASTM (2003). Specimens were moulded for each segmented sampling tube indicated in Fig. 2 with a constant height to diameter ratio of 2 and wrapped in a membrane. The specimen was inserted into a triaxial cell for the application of confining pressure followed by the application of an axial load. The S u(UUT) value was calculated as half of the deviator stress (σd), calculated without correction for membrane effects.
Field vane test (FVT)
The field vane tests were performed in accordance with the Brazilian standard ABNT (1989) using a steel vane retracted in the protective shoe for advancement without pre-drilling and the instrument is equipped with slip coupling.The vane prescribed by the Brazilian standard has a diameter of 65 mm, a height of 130 mm, and a vane thickness of 2 mm.The vane retracted in the protective shoe was inserted into the soil and once the desired depth was reached, it was pushed into the soil 0.50 m from the lower part of the protective shoe.Immediately was applied torque at a speed of 6 ± 0.6°/min and the torque curve vs. the applied rotation was recorded to determinate S u .The remoulded conditions were created by rotating the vane rapidly through ten revolutions and the test repeated to determine S ur .The S u(FVT) and S ur(FVT) values were estimated using Eq. 2, where T is the maximum value of torque corrected for rod friction measured by slip coupling.
Piezocone test (CPTU)
The cone test with porewater pressure measurements was performed following the recommendations of the American standard ASTM (2012). The penetrometer has a cross-sectional area of 10 cm² and the filter element located at the base of the cone (measurement of u 2). The penetration was performed at a constant speed of 20 ± 5 mm/s, taking automatic measurements of the following parameters: cone resistance (q c), friction sleeve resistance (f s) and porewater pressure (u 2). The corrected cone total resistance (q t) was calculated using the following equation:

q t = q c + u 2 (1 - a n)    (3)

where a n is the ratio between the areas obtained through calibration, which, in this case, was equal to 0.75. A large number of studies concerning the interpretation of the CPTU to obtain the undrained strength of clays can be found in the literature, representing two different interpretation approaches: one based on theoretical solutions and another based on empirical correlations, the latter generally preferred as reported by Lunne et al. (1997b). The empirical approaches estimate S u by three empirical cone factors, N kt, N Du and N ke, generally used in combination with FVT data, being given by the following equations (Danziger & Schnaid, 2000):

S u = (q t - σ vo) / N kt    (4)

S u = (u 2 - u 0) / N Du    (5)

S u = (q t - u 2) / N ke    (6)

where σ vo is the total vertical stress and u 0 is the hydrostatic porewater pressure. In geotechnical engineering practice in Brazil, Eq. 4 is the most used (Danziger & Schnaid, 2000; Almeida & Marques, 2014; Coutinho & Schnaid, 2010). In very soft clays, Eq. 5 tends to be more accurate, since u 2 and u 0 are measured with greater accuracy than q t (Robertson & Cabal, 2015).
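Equations (4) and (5) reduce to one-line calculations once the cone factors are fixed. The sketch below uses the N kt = 20 and N Du = 10 values adopted later for this deposit, together with hypothetical CPTU readings; it is an illustration rather than part of the original study.

```python
def su_from_qt(q_t, sigma_v0, n_kt=20.0):
    """S_u = (q_t - sigma_v0) / N_kt, Eq. (4); all stresses in kPa."""
    return (q_t - sigma_v0) / n_kt

def su_from_u2(u_2, u_0, n_du=10.0):
    """S_u = (u_2 - u_0) / N_Du, Eq. (5); all pressures in kPa."""
    return (u_2 - u_0) / n_du

# Hypothetical readings at roughly 10 m depth:
print(round(su_from_qt(q_t=350.0, sigma_v0=160.0), 1))   # about 9.5 kPa
print(round(su_from_u2(u_2=190.0, u_0=95.0), 1))          # about 9.5 kPa
```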
Field test results
The undisturbed and remoulded strengths obtained through the field vane test (FVT) are presented in Table 1.According to Skempton & Northey (1952) classification, the soil deposit can be considered sensitive.
The N kt and N Du values obtained by Eqs. 4 and 5 and calibrated using the FVT are shown in Fig. 3(a).Typically N kt varies from 10 to 20 (Lunne et al., 1997b;Robertson, 2009).For Brazilian soft clays, Coutinho & Schnaid (2010) reported N kt values between 9 and 18, Schnaid & Odebrecht (2012) between 10 and 20 for normally consolidated or slightly overconsolidated clays and Baroni (2016) between 6 and 18 for soft clays of Rio de Janeiro.
Although N kt(FVT) values of the studied deposit vary between 17 and 37, the values at depths of 7 and 8 m do not have good agreement with the range reported in the literature, so N kt equal to 20 was adopted as representative of the deposit, being slightly higher than that reported for Brazilian clays. Almeida et al. noted the variability of the Brazilian coast soils and the importance of estimating the N kt value for each deposit. Robertson & Cabal (2015) reported N Du values between 4 and 10. For Brazilian soft clays, Coutinho & Schnaid (2010) reported N Du values between 7 and 9.5 and Coutinho & Bello (2014) between 7.5 and 11 for Recife soft clays. N Du(FVT) equal to 10 was adopted as representative of the deposit, being similar to Brazilian reported clays.
Figure 3(b) shows the undrained shear strength values estimated by in situ tests.The S u estimate from CPTU used the adopted cone factors N kt and N Du , respectively, equal to 20 and 10.From 9.0 m depth there was a good agreement between the S u(CPTU) estimates by the two cone factors and between S u(CPTU) and S u(FVT) estimates.
Laboratory test results
Table 2 shows the quality classification of undisturbed samples based on the ratio between variation in the void ratio (De) and initial void ratio (e o ), proposed by Lunne et al. (1997a) modified by Coutinho (2007) for Brazilian clays.It is observed that samples numbers 2, 5 and 6 presented poor quality and numbers 3 and 4 presented good to excellent quality.
Overall, the results of tests performed on low-quality samples tend to underestimate S u. Tanaka (1994, 2008) observed for LVT tests performed on poor samples that the quality of the sample has little influence on the results of S u, but for UCT tests S u was underestimated.
A summary of the soil properties obtained from undisturbed samples is presented in Table 3.The natural water content values are closer to the liquid limit and the samples can be subdivided into three groups depending on the I P value: (1°) I P greater than 60% and less than 100%, samples 1 and 2; (2°) I P greater than 100%, samples 3 and 4; and (3°) I P less than 50%, samples 5 and 6.
The X-ray diffraction measurements indicated that kaolinite and muscovite are the predominant clay minerals, being also detected the presence of quartz, illite and montmorillonite.
Strength results and comparison
Figure 4 shows the relationship between undrained shear strengths estimated from FCT and from conventional laboratories tests: LVT, UCT and UUT.For FCT and LVT it can be concluded that there was good agreement between the results, with a tendency for the S u(LVT) values to be slightly lower than those of the S u(FCT) , as shown by regression lines (R 2 = 0.84).The same behaviour has also been observed by Rajasekaran & Narasimha Rao (2004) on marine clays treated with lime.Those authors concluded that the FCT test is a good alternative for estimating the undrained strength of clays.
The S u(LVT) /S u(FCT) ratio had a mean of 0.92 with standard deviation of 0.17 and variation coefficient of 1.3%.
For FCT and UCT there was reasonable agreement between the S u results, with a tendency for the S u(UCT) values to be higher than the S u(FCT) , as shown by regression lines (R 2 = 0.62).The S u(UCT) /S u(FCT) ratio had a mean of 1.14 with standard deviation of 0.34 and variation coefficient of 30%.
Tanaka et al. (2012) have compared S u data estimated by UCT and FCT from four sites that have been extensively investigated in Japan (Atsuma, Takuhofu, Y-Ariake and H-Osaka). These sites exhibit different characteristics but similar undrained shear strengths, varying between 20 and 80 kPa. In that study, a tendency was recognised for the S u(UCT) values to be lower than the S u(FCT) values, except for the Y-Ariake site. It was observed that the differences could not be attributed to the quality of the samples, consistent with the study of Horng et al. (2011), which concluded that the effects of disturbances in the samples are similar for UCT and FCT.
Unexpectedly, the UUT results did not demonstrate good agreement with the FCT, as shown by regression lines (R 2 = -0.85).And the S u(UUT) /S u(FCT) ratio had a mean of 1.51 with standard deviation of 0.51 and a variation coefficient of 34%.
Figure 5 shows the correlation between undrained shear strengths estimated from FCT and conventional in situ tests: FVT and CPTU.For these correlations there was observed larger discrepancy between S u results and it was not possible to establish an adequate linear regression.
The S u(FVT) /S u(FCT) ratio had a mean of 0.92 with standard deviation of 0.49 and variation coefficient of 53%.Despite the larger discrepancy, it was observed a tendency for the S u(FCT) values to be higher than the S u(FVT) values, similarly to Tanaka et al. (2012) results.
The S u(CPTU-Nkt) /S u(FCT) ratio had a mean of 1.25 with standard deviation of 0.52 and variation coefficient of 41%.The S u(CPTU-NDu) /S u(FCT) ratio had a mean of 1.11 with standard deviation of 0.59 and variation coefficient of 53%.
It is difficult to judge whether variation in soil properties is caused by human factors or by the natural variability in the properties (Tanaka, 2008).The discrepancy observed in Figs. 4 and 5 can be considered to be predominantly attributable to the disturbance of the samples, as their quality was generally classified as poor.However, others important factors were observed in the samples and should be considered, such as the large vertical variability, indicated by liquid limit variations, the presence of shells (Fig. 6a), concretionary materials (Fig. 6b) and thin layers of sand and mica (Fig. 6c).These factors may have influenced the laboratory test results, particularly the FCT and LVT, and are also an indicative of horizontal variability.Figure 7 may help to understand the larger discrepancy between S u results presented in Fig. 5.It is observed that S u estimated by CPTU ranged between 3.6 kPa to 17.6 kPa with considerable variations at certain depths, such as between 7.0 and 7.2 m.For correlations presented in Fig. 7, the mean value of S u(CPTU) in 1.0 m range was considered, thus it was not possible to verify by this analysis if there was good agreement between the FCT and CPTU results.
Despite the heterogeneity of the deposit, and discarding the very discrepant values of S u and S u(UUT) , it can be observed visually in Fig. 7 that between the depths of 7.0 and 11.0 m there is good agreement between the laboratory S u results and the S u(CPTU) estimated with the N kt cone factor. Between depths of 11.0 and 13.0 m the S u(FCT) and S u(LVT) do not present good agreement with S u(CPTU) . These samples presented the lowest plasticity index (mean of 46) and the highest percentage of sand (mean of 42%); perhaps a drained behaviour can explain this greater variation among the S u results. Larsson et al. (1987) observed that S u(FCT) values measured in specimens from depths greater than 10 to 15 m are often too low, and that the same behaviour occurs in clays of low plasticity and high sensitivity, which can also explain the lower S u(FCT) results between depths of 11.0 and 13.0 m.
Empirical correlations
Attempts to develop simple methods for estimating the undrained shear strength of soils based on physical indices, such as correlations based on plasticity index, have been conducted since the beginning of Soil Mechanics (Kempfert & Gebreselassie, 2010).However, several of the most well-known empirical correlations were established using data from soils obtained in countries of northern Europe and America, where the sediments were strongly influenced by the glaciers of the ice age period (Tanaka, 2000;Tanaka et al., 2001).
Although correlations developed in a given geological context are not universally applicable and should be used with caution, as well as calibrated locally (Larsson & Ahnberg, 2005; Leroueil et al., 2001), empirical correlations between the undrained shear strength (S u ) and the plasticity index (I P ) can be used to support and complement strength determinations (Larsson et al., 1987). Some of these correlations are presented in Table 4. Many of them indicate a tendency for S u /s' P or S u /s' vo to increase with increasing I P . Figures 8 and 9 illustrate the relation between S u /s' vo and S u /s' P and I P using the in situ test data (FVT) and laboratory test data (FCT, LVT, UUT, UCT) obtained in this study, together with the empirical correlations presented in Table 4.
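As an illustration of how such correlations are applied, the sketch below evaluates the classical Skempton-type relation S u /s' vo = 0.11 + 0.0037 I P (with I P in %) against measured strength ratios. Whether this particular expression is among those listed in Table 4 is not shown in this text, and the sample values are placeholders rather than data from this study.

```python
def skempton_ratio(ip_percent):
    """Skempton-type correlation: S_u / s'_vo = 0.11 + 0.0037 * I_P (I_P in %)."""
    return 0.11 + 0.0037 * ip_percent

# Placeholder inputs for illustration only
samples = [
    # (I_P [%], measured S_u [kPa], effective vertical stress s'_vo [kPa])
    (60, 14.0, 55.0),
    (75, 18.0, 62.0),
    (90, 21.0, 70.0),
]

for ip, su, svo in samples:
    measured = su / svo
    predicted = skempton_ratio(ip)
    print(f"I_P={ip:3d}%  measured S_u/s'_vo={measured:.2f}  "
          f"Skempton-type prediction={predicted:.2f}")
```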
Despite the large dispersion, it can be observed in Fig. 8 that there is a tendency for S u /s' vo to increase with increasing I P , as shown by the linear correlation (R 2 = 0.38); this is similar to the Skempton linear correlation and is more accentuated for I P lower than 70%. Baroni (2016), in contrast, reported for Rio de Janeiro soft clays that there is no tendency for S u /s' vo to increase with I P .
Figure 9 was elaborated with the S u /s' P results of samples 2 to 5 and presents the same behaviour as Fig. 8, i.e. a tendency for S u /s' P to increase with increasing I P , more accentuated for I P lower than 70%. The regression line presented the same behaviour as the Bjerrum & Simons potential correlation. Similarly, Futai et al. (2008) observed for clay deposits in Rio de Janeiro that the S u /s' P ratio tends to increase with increasing I P , a behaviour similar to that of Canadian clays.
As reported by Tanaka (1994), the S u /s' P ratios determined for various Japanese marine clays ranged between 0.25 and 0.35 and did not exhibit any significant relationship to I P , being S u values estimated by FVT and I P values ranging between 20% and 150%.Chung et al. (2007) had also concluded for a specific Japanese marine clay deposit that the S u /s' P ratio did not depend on I P .
Conclusions
In the present study, the undrained shear strength (S u ) results from the laboratory fall cone test (FCT) were compared with the S u results from conventional field and laboratory tests commonly used in geotechnical engineering to estimate this parameter in cohesive soils: CPTU, FVT, UCT, UUT and LVT. The normalised undrained shear strength estimates were compared with some empirical correlations based on plasticity index. The following conclusions result from this study:
• The S u values determined by FCT presented good agreement with the S u determined by LVT, with S u(LVT) /S u(FCT) = 0.98 obtained by linear regression (coefficient of determination R 2 = 0.84).
• For FCT and UCT there was reasonable agreement between the S u results, with a tendency for the S u(UCT) values to be higher than the S u(FCT) .The S u(UCT) /S u(FCT) = 1.14 was obtained by linear regression (coefficient of determination R 2 = 0.62).
• FCT and UUT did not demonstrate good agreement between the S u results, with a variation coefficient of 34% for the S u(UUT) /S u(FCT) ratio.
• The S u values determined by FCT did not present good agreement with the S u determined by FVT, with a variation coefficient of 53% for the S u(FVT) /S u(FCT) ratio.
• Despite the considerable variations between the S u(CPTU) values estimated with the N kt cone factor for certain depth ranges, there was good agreement with S u(FCT) and S u(LVT) for depths down to 11.0 m.
• The difference between S u values determined through laboratory and in situ tests can be attributed to other important factors, such as the large vertical variability indicated by liquid limit variations and the presence of shells, concretionary materials and thin layers of sand and mica, which may have influenced the laboratory test results, particularly the FCT and LVT.
• The normalised undrained shear strength data, S u /s' P and S u /s' vo , determined using the various test methods presented a tendency to increase with increasing I P , similar to some empirical correlations reported in the literature.
As a final contribution of this study, considering the simplicity and flexibility of the fall cone test (FCT) application and possibility to collect a greater number of data, it would be appropriate to use this method to support and complement other strength determinations.
Figure 2 - Undisturbed sampling tube segmentation for laboratory tests. Values in mm.
Figure 3 - (a) N kt and N Du values with depth; (b) strength estimates by FVT and CPTU.
Figure 4 - Correlation between S u values from FCT and conventional laboratory tests (LVT, UCT and UUT).
Figure 5 - Correlation between S u values from FCT and conventional in situ tests (FVT and CPTU).
Figure 7 - Undrained shear strength estimated by conventional and unconventional tests.
Figure 8 - Relationship between normalised undrained shear strength with effective vertical stress (S u /s' vo ) and plasticity index (I P ).
Figure 9 - Relationship between normalised undrained shear strength with preconsolidation stress (S u /s' P ) and plasticity index (I P ).
The Undrained Strength of Soft Clays Determined from Unconventional and Conventional Tests
Table 3 - Properties of the soil studied.
| 6,577.2 | 2017-12-20T00:00:00.000 | ["Geology"] |
Distribution of soil corrosion grade in Southern Hebei Province
Transmission towers and substations are used in a variety of natural environments and, coupled with interference from the surrounding production and living activities, face the test of soil corrosion. Based on the experimental data of 72 soil samples from five cities in Southern Hebei Province, the soil resistivity, soil pH value and soil moisture content were investigated, and the soil corrosion grade and a soil corrosion grade distribution map were obtained. This provides a reference for the collection of soil corrosivity data in power transmission and transformation projects.
Introduction
With the rapid development of China's economy, the scale of power construction has increased sharply. As an important part of power grid operation, the service safety of transmission towers and substations has attracted more and more attention. The above equipment is used in a variety of natural environments, coupled with the interference of the surrounding production and life, it is very likely that the materials will be damaged and failed in advance due to soil corrosion, which will affect the safe and efficient operation of the key equipment of power transmission and transformation. Hebei Southern Power Grid (including Shijiazhuang, Baoding, Cangzhou, Hengshui, Xingtai and Handan) covers a complex environment with different soil corrosion risks. Therefore, it is necessary to collect soil corrosivity data for power transmission and transformation project.
Experiment
In this study, samples from 72 stations in five cities of Southern Hebei Province were collected following the principle of on-site sampling and packaging with centralized measurement in the laboratory. The soil resistivity was measured with a FUZRR FR3010E resistivity meter. At each collection point, a pit with a depth of about 60 cm and a diameter of 40 cm was dug, and a 1 kg sample was taken.
Take about 20 g of sample soil, crush it, quickly put it into a large aluminium box of known mass M 0 , cover it tightly, wipe the surface of the aluminium box clean, and weigh it to obtain the initial mass M 1 . Remove the lid of the box, place it under the box, and dry the sample in an oven preheated to 105 ± 2 °C for 12 h. Take it out, cover it, cool it to room temperature in a desiccator (about 30 min), and weigh it immediately to obtain M 2 [1]. The soil moisture content can then be obtained from Formula 1.
A (%) = (M 1 - M 2 ) / (M 2 - M 0 ) × 100 (1)
Take an appropriate amount of the soil sample, spread it evenly, air-dry it at room temperature for 24 hours, grind it and pass it through a sieve with 1 mm openings. Take 10 g of the air-dried, sieved soil sample, put it into a 50 ml beaker, add 25 ml of deionized water, stir for 1 minute so that the soil particles disperse fully, and then measure the pH with a pH meter after the suspension has stood for 30 minutes [2].
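A minimal sketch of the gravimetric moisture calculation in Formula 1 follows; the mass values are placeholders for illustration, not measurements from this survey.

```python
def moisture_content(m0_box, m1_wet, m2_dry):
    """Gravimetric soil moisture content A (%) per Formula 1.

    m0_box : mass of the empty aluminium box (g)
    m1_wet : mass of box + wet soil before drying (g)
    m2_dry : mass of box + soil after oven drying (g)
    """
    water_mass = m1_wet - m2_dry        # mass of evaporated water
    dry_soil_mass = m2_dry - m0_box     # mass of the dry soil
    return 100.0 * water_mass / dry_soil_mass

# Placeholder masses for illustration only
print(f"A = {moisture_content(m0_box=30.0, m1_wet=52.4, m2_dry=48.9):.1f} %")
```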
The physical and chemical properties of the soil samples were compiled, and the degree of soil corrosion was graded using these three kinds of data. The three-index soil corrosion evaluation system is shown in Figure 1. According to the results, the soil is divided into five grades of gradually increasing corrosivity.
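The grading itself combines the three indices. The sketch below shows one way such a rule-based combination could be coded; the thresholds and the scoring scheme are purely hypothetical placeholders, since the actual grade boundaries are those of the evaluation system in Figure 1, which is not reproduced in this text.

```python
def corrosion_grade(resistivity_ohm_m, ph, moisture_percent):
    """Toy three-index grading: returns a grade from I (mild) to V (severe).

    All thresholds below are hypothetical placeholders, not the values of the
    evaluation system used in the study.
    """
    score = 0
    # Lower resistivity -> generally more corrosive (for neutral/alkaline soil)
    if resistivity_ohm_m < 20:
        score += 2
    elif resistivity_ohm_m < 50:
        score += 1
    # Moisture near ~20% is the most aggressive range
    if 12 <= moisture_percent <= 25:
        score += 2
    # Strongly alkaline (or strongly acidic) soils score higher
    if ph >= 9 or ph <= 5:
        score += 2
    elif ph >= 8:
        score += 1
    grades = ["I", "II", "III", "IV", "V"]
    return grades[min(score, len(grades) - 1)]

print(corrosion_grade(resistivity_ohm_m=35, ph=8.5, moisture_percent=18))
```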
Soil moisture content
The moisture content of soil is an important factor affecting the corrosion of metal materials. [3] When the moisture content is very high or very low, the corrosion rate is the lowest. The corrosion rate of metals increases first and then decreases with the increase of water content. When the soil water content is about 20%, the corrosion rate is the highest. [4][5][6][7] In addition, the size of soil moisture also has an impact on soil resistivity. [8][9] The moisture content of soil samples at each station is shown in Figure 2. It can be seen that the moisture content of most stations is concentrated in the range of 12% -25%, and the soil corrosion capacity is increased due to the moisture content in this range.
Soil resistivity
For common neutral to alkaline soil, the soil corrosivity increases with the decrease of resistivity. For acidic soil, however, corrosion can be serious even when the soil resistivity is high. The soil resistivity of each station is shown in Figure 3. It can be seen that the soil resistivity of most stations is concentrated in the range of 20-50 Ω·m.
Soil pH value
The pH test results of the soil samples at each site are shown in Figure 4. It can be seen that the pH value of most sites is in the range of weak alkalinity to alkalinity. With the increase of soil pH, the concentration of hydroxide ions in the soil increases, and the depolarization of oxygen becomes the main cathodic reaction. When the depolarization of O 2 becomes the rate-controlling step, the cathodic reaction rate accelerates, the dissolution rate of the anode metal accelerates, and corrosion is aggravated. The pH value of some stations is above 9, where the soil corrosion ability is strong. Using the three-index evaluation system, the data and the map are combined to obtain the results shown in Figure 5. Most of the sites are in the third grade of soil corrosion, which has a certain correlation with the industrial distribution of the local cities: sampling sites located in industrial concentration areas show strong soil corrosivity. In coastal areas the corrosion is grade III, which is related to the soil moisture content and the strong alkalinity.
Conclusion
There are many factors affecting the soil corrosion rate. This paper focuses on soil moisture content, soil resistivity and soil pH value. Based on the analysis of 72 soil samples from five cities in Southern Hebei Province, it is found that the soil corrosion in this area is characterized as "mild in the north and severe in the south". Locally, the areas with more serious soil corrosion are related to the concentration of heavy industry around the sampling points. The grade of soil corrosion in coastal areas is grade III, with soil water content and soil alkalinity playing an important role. For substations located in areas with a high soil corrosion rate, the protection of the grounding grid should be strengthened. | 1,432.2 | 2021-01-01T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
Integrating Scientific Publication into an Applied Gaming Ecosystem
—The European (EU)-based industry for non-leisure games (so-called Applied Games, AGs) is an emerging business. As such it is still fragmented and needs to achieve critical mass to compete globally. Nevertheless, its growth potential is widely recognized and even suggested to exceed the growth potential of the leisure games market. The European project Realizing an Applied Gaming Ecosystem (RAGE) is aiming at supporting this challenge. RAGE will help to seize these opportunities by making available an interoperable set of advanced Applied Game (AG) technology assets, as well as proven practices of using such AG assets in various real-world contexts. As described in [1], RAGE will finally provide centralized access to a wide range of applied gaming software modules, relevant information, knowledge and community services, and related scientific documents, taxonomies, media, and educational resources within an online community portal called the RAGE Ecosystem. Besides this, an integration between the RAGE Ecosystem and relevant social network interaction spaces that arrange and facilitate the collaboration underlying Research and Development (R&D), as well as market-oriented innovation and exploitation, will be created in order to support community building, as well as collaborative asset exploitation of User Generated Contents (UGCs) of the RAGE Ecosystem. In this paper, we will describe the integration of the Scientific Publication Platform (SPP) Mendeley [2] into the RAGE Ecosystem. This will allow for automating repetitive tasks, reducing errors, and speeding up time-consuming tasks. On the other hand, it will support information, UGC, and knowledge sharing, as well as persistency of social interaction threads within Social Networking Sites (SNSs) and Groupware Systems (GWSs) that are connected to the RAGE Ecosystem. The paper reviews relevant use cases and scenarios, as well as related authentication, access, and information integration challenges. In this way, on the one hand a qualitative evaluation regarding an optimal technical integration is facilitated, while on the other hand design approaches for supporting features of resulting user interfaces are initiated.
I. INTRODUCTION AND MOTIVATION
The EU-based industry for Applied Games (AGs) is an emerging business. As such it is still fragmented and needs to achieve critical mass to compete globally. Nevertheless, its growth potential is widely recognized and even suggested to exceed the growth potential of the leisure games market. The RAGE project [3] is aiming at supporting this challenge. RAGE will help to seize these opportunities by making available an interoperable set of advanced technology assets, tuned to applied gaming, as well as proven practices of using asset-based applied games in various real-world contexts. This will be achieved by enabling a centralized access to a wide range of applied gaming software modules, information, knowledge and community services, as well as related document, publication, media, and educational resources within the RAGE Ecosystem. Furthermore, the RAGE project aims to boost the collaboration of diverse actors in the AG environment. Therefore, the main objectives of the RAGE Ecosystem are to allow its participants to get hold of advanced, usable gaming assets (technology push), to get access to the associated business cases (commercial opportunity), to create bonds with peers, suppliers, and customers (alliance formation), to advocate their expertise and demands (publicity), to develop and publish their own assets (trade), and to contribute to creating a joint agenda and road-map (harmonization and focus).
This means that seen as a whole, the RAGE project is a technology and know-how driven research and innovation project. Its main driver is to be able to equip industry players (e.g., game developers) with a set of AG technology resources (so-called Assets) and strategies (i.e., know-how being provided by means of information services and knowledge resources) to strengthen their capacities to penetrate a market (non-leisure), which is new for most of them, and to consolidate a competitive position in it. Fig. 1 represents the positioning of the project in the spectrum from 'theory to application'.
In consequence, the RAGE Ecosystem and its integration with social networks of game-research-, game-developing-, gaming-, and AG communities will on the one hand become an enabler to harvest community knowledge and on the other hand it will support the access of such communities to the RAGE Ecosystem as an information and knowledge resource.
Building on the results of the Social Networking Sites (SNSs) and Groupware Systems (GWSs) integration with the RAGE Ecosystem, including corresponding SNS-enabled content and knowledge management, the RAGE Ecosystem will in the future also support Social Network Analysis (SNA) by means of applying technologies for Natural Language Analysis (NLA) for discourse analysis, as well as Named Entity Recognition (NER) and Semantic Representation and Annotation (SRA) of its results [4]. This will, e.g., enable users to utilize the envisioned Ecosystem with features of a social mediation engine going beyond content syndication, i.e., it will serve as a social space that mediates collaboration partners, while content remains the main attractor. Finally, an interactive map of supply-and demand-side stakeholders and resources will be provided for domain and community orientation, as well as visual access support.
In the remainder of this paper, section II provides a brief introduction of a set of exemplar target communities that are present in SNSs and GWSs. Furthermore, section III describes related research activities of these communities. Section IV is about state of the art in science and technology. Section V, more specifically, reviews the integration possibility of Mendeley and its Application Programming Interface (API) that supports integration with the RAGE Ecosystem. Furthermore, this section will investigate how to support a bi-directional access to resources, assets and community information between the RAGE Ecosystem and such SNSs and GWSs. Next, section VI will outline design approaches towards supporting users in the target communities by services provided by the RAGE Ecosystem by means of outlining several use case scenarios for using Social Networking Features (SNF) and Groupware Features (GWF) within the RAGE Ecosystem user interfaces. Finally, the paper will present conclusions and future work.
II. TARGET SNS AND GWS USER COMMUNITIES AND CORRESPONDING EXEMPLAR USER STEREOTYPES
As outlined above, the EU-based industry for AG is an emerging business, which is still fragmented and needs to achieve critical mass for global competition. The AG industry and developer groups want to keep their developments innovative, i.e., attractive and technologically in good condition. These groups already have a very good understanding of their competitive advantage and corresponding assets (e.g., software, documents, and social media objects, etc.). However, they also need innovative ideas to develop innovative AGs in order to stay competitive. Therefore, they look for possibilities to cooperate with AG Research and Development (R&D) groups. Besides this, the AGs that researchers create within research projects produce a lot of AG research assets and prototypes, which need to be fully developed and deployed by AG software developers to become marketable. Apart from AG developers and researchers, there are also AG customers and players who on the one hand want to learn about or contract the development of AGs and on the other hand can also contribute to the development of AG usage scenarios. Many of these communities (AG developers, researchers, customers and players) are already present in a fragmented way within several groups in several SNSs and GWSs. In [1] we have presented some examples of AG research, as well as industry and developer communities in, e.g., LinkedIn and Twitter. The Applied Games and Gamification (AGG) LinkedIn group [5] has over 4,500 members and has been running since 2011. The group claims to be one of the largest collective of creators, developers, researchers, and users of applied games and gamification globally. The typical users can be distinguished roughly into those from industry and those from academia, i.e., from professors and recent graduates in gaming and related technologies, to CEOs, founders and directors of a wide variety of organisation that work or research the domain. The majority of discussion posts are promotions of products, methodologies for design, reposts of other interesting blogs on the topic and individuals' thoughts on implications of games and gamification for learning, training and behaviour change. The most prolific posters tend to be consultants and individuals representing organisation that are looking to showcase their abilities to a more business oriented community toward winning more business. Many posts do not garner comments or discussion as they are often pointing to other resources; however posts which pose interesting questions do receive attention and lead to interesting discussions from the more active members. Similarly the Serious Games Group (SGG) on LinkedIn [6] has over 5200 members and has been running since 2008. Another AG research group example is the Game Research (GR) Mendeley group which has more than 140 members and more than 200 papers. The group memberships somewhat overlap with the applied games and gamification, however the audience tends to be more focused on the learning solutions and learning providers, with fewer CEOs and marketing directors, and more game designers as compared to the AGG, although the mode of use are very similar.
RAGE will help to overcome this fragmentation and aims to support the capturing, as well as the representation, management, sharing, and exchange of social media produced content and knowledge resources through its Ecosystem. Therefore, the integration of SNSs and GWSs hosting such target communities with the RAGE-Ecosystem and at the same time enabling the connectivity between SNSs, GWSs, and the RAGE-Ecosystem will connect research-, gaming industry-, intermediary-, education provider-, policy maker-and end-user communities. Furthermore, it will facilitate the centralized access to the valuable assets beyond the SNSs and GWSs.
As a whole take-up of RAGE results will generate impacts that will be visible through multiple enhancements in the performance of European Applied Game industries, especially in terms of reducing the current fragmentation, improving their innovation capacity and fostering their progress towards global technological leadership. By offering reusable Applied Games assets, the RAGE Ecosystem infrastructure and marketplace will play a key role in support of applied research and technology development, including demand driven research and productification activities, easing technology transfer and field validation of novel products and services, on a broad collaborative basis. The combined effects will allow end-toend Applied Games value chain players to dramatically improve their competitive position.
III. RELATED WORK
The work presented in this paper is related to a number of topics in research. The RAGE Ecosystem will be built upon the Educational Portal (EP) technology and application solution, which was developed by the software company GLOBIT [7] that already was used in the Alliance Permanent Access to the Records of Science in Europe Network (APARSEN) [8]. APARSEN was an EU-funded project within the digital preservation area with the goal to create a virtual research center in digital preservation in Europe. The so-called EP tool-suite offers a wide variety of tools and is currently extended by Research Institute for Telecommunication and Cooperation (FTK) within the RAGE project into an Ecosystem Portal (EP) tool suite [9]. This includes a web based, user-friendly User and Community Management (UCM) including an advanced Contact and Role Management (CRM) based on MythCRM [10], as well as knowledge management support in the form of Taxonomy Management (TM) support and semiautomatic taxonomy-based Content Classification (CC) support [4], [11], as well as a Learning Management System (LMS) based on Moodle [12] and an advanced Course Authoring Tool (CAT) [13]. In this way, the Content & Knowledge Management (C&KM) tools of the EP tool suite support the management of documents in a taxonomysupported Digital Library, the management of multimedia objects in a taxonomy-supported Media Archive and the management of Learning Objects in a competence-based LMS [14]. Furthermore, one of its additional purposes is to support Continuous Professional Education (CPE) and training of practitioners, experts, and scientists, which are members of professional communities of practice or scientific communities. Fig. 2 displays the components and services in the EP tool suite as described in [4]. EP was built based on Typo3 [7] and, therefore, can be extended with the help of Typo3 extensions. Evgeny, Bogdanov et al [15] extend a social media platform in higher education with lightweight tools (widgets) aimed for collaborative learning and competence development. Our work will establish the new EP module Community & Social Network Support (CSNS) on the basis of a so-called Agile Application Programming Interface (AAPI), which facilitates the connectivity to a wide range of SNSs and GWSs.
IV. RELEVANT STARTING POINTS WITHIN THE STATE OF THE ART IN SCIENCE AND TECHNOLOGY
SNSs and GWSs have changed the way of information sharing and learning processes by adding innovative features to social communication. SNSs were defined as "Internet or mobile-device based social spaces designed to facilitate communication, collaboration, and content sharing across networks of contacts. SNS allows its users to become content creators and content consumers at the same time, thus allowing instant participation, sharing of thoughts or information and personalised communication" [16]. Therefore, SNSs and GWSs are becoming increasingly important. This holds especially true for various Social Networking Features (SNFs) and Groupware Features (GWFs) like, e.g., rating, commenting, tagging, chatting, liking, posting new Social Media (SM) and User Generated Content (UGC), following actors or celebrities, playing games etc. These SNFs and GWFs are not only entertaining and exciting but also useful for learning and for information enrichment. Research has shown that distance education courses are often more successful when they develop communities of practice [17]. Besides, SM content becomes increasingly important in business and research. Kaplan [20] "The Social Semantic Web as a vision of a Web where all of the different collaborative systems and social network services, are connected together through the addition of semantics, allowing people to traverse across these different types of systems, reusing and porting their data between systems as required." RAGE will use Semantic Web technologies in order to describe in an interoperable way users' profiles, social connections, and social media creation and sharing across different SNSs and GWSs, as well as within the RAGE Ecosystem. Therefore, RAGE will be able to deliver well-grounded recommendation and mediation features AG R&D communities.
Today, most SNSs and GWSs provide so-called Application Programming Interfaces (APIs) for developers to integrate the SNSs and GWSs into their systems. Although, the SNSs and GWSs are different in their functionality, i.e., their social networking feature support, their software architecture for the communication with distributed other systems is similar. Most of the SNSs and GWSs offer REST APIs like [2], [21], [22] which can be used for integration with other systems.
In the following, the Mendeley API software architecture, as described in [2], [23], is used as an illustrative and representative example.
In summary, it is a big advantage to aim at supporting the integration of SNSs and GWSs including relevant SNFs and GWFs, as well as UGC capturing, management, sharing, and dissemination support through their REST API into the RAGE Ecosystem. This will on the one hand facilitate to extend the envisioned RAGE Ecosystem with features of a social mediation engine going beyond content syndication, i.e., it can serve a social space that mediates collaboration partners, while content remains the main attractor. On the other hand it focuses on identifying collaboration opportunities between individuals and among groups, to support matchmaking and collaboration between stakeholders, and to identify and provide support for innovation opportunities and creativity efforts. That allows communities (such as technology providers, game developers and educators, game industries, researchers) to create their own assets and post them to the Ecosystem's repository without major effort. Besides this, t h e above approach enables follow-up work in the area of social network analysis and discourse analysis, which can then be conducted and used to provide feedback, recommendations, mediations, and relevant information to the communities. This feedback can e.g., help gaming companies to develop new markets in applied gaming.
V. INTEGRATION APPROACH AND IMPLEMENTATION
The following section presents the main technical integration possibilities in the backend, as well as in frontend. In this way, our integration approach and methodology is enabling us to differentiate between how to get access to resources and assets in the RAGE Ecosystem from external SNSs and GWSs communities and how to push contents from the RAGE Ecosystem to the external SNSs and GWSs in order to improve user acceptance of services provided by the RAGE Ecosystem. Fig. 3 displays the concept of a bi-directional integration approach of the RAGE Ecosystem with SNSs (e.g. LinkedIn and Twitter) and GWS (e.g. Mendeley and GitHub) using a REST API. Corresponding to this bi-directional integration approach, the Tight and Loose Coupling methodologies, as described in [24], will be considered for achieving an integration of SNSs and GWSs to the RAGE Ecosystem and vice versa.
As an example, the Mendeley API and its software architecture, as described in [2], will be used as an illustrative and representative example of the loose and tight coupling between the RAGE Ecosystem and a GWS.
The Mendeley API is based upon the OAuth 2.0 standard. The Mendeley JavaScript SDK [23] facilitates the tight coupling integration of Mendeley within the RAGE Ecosystem. This SDK includes code for the implicit grant and authorization code flows. For the RAGE Ecosystem we used the authorization code flow in order to acquire an access token. The RAGE server will do the token exchange and set the access token cookie. From the client-side point of view, the flow could be started as shown in the following code.
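The original client-side listing is not reproduced in this text. As a hedged illustration, the Python sketch below shows the two generic steps of a standard OAuth 2.0 authorization code flow: building the authorization URL to which the client is redirected, and exchanging the returned code for an access token on the server. The endpoint URLs, scope and credentials are assumptions and placeholders to be checked against the current Mendeley API documentation; this is not the Mendeley SDK code itself.

```python
# Hedged sketch of an OAuth 2.0 authorization-code flow (not the Mendeley SDK code).
# Endpoint URLs, scope and credentials are assumptions/placeholders.
import secrets
from urllib.parse import urlencode

import requests

AUTH_URL = "https://api.mendeley.com/oauth/authorize"   # assumed endpoint
TOKEN_URL = "https://api.mendeley.com/oauth/token"      # assumed endpoint
CLIENT_ID = "your-client-id"                              # placeholder
CLIENT_SECRET = "your-client-secret"                      # placeholder
REDIRECT_URI = "https://example.org/rage/oauth/callback"  # placeholder

def build_authorization_url():
    """Step 1: URL to which the browser is redirected to ask the user for consent."""
    state = secrets.token_urlsafe(16)   # anti-CSRF state, to be checked on return
    params = {"client_id": CLIENT_ID, "redirect_uri": REDIRECT_URI,
              "response_type": "code", "scope": "all",   # scope value assumed
              "state": state}
    return f"{AUTH_URL}?{urlencode(params)}", state

def exchange_code_for_token(code):
    """Step 2 (server side): exchange the authorization code for an access token."""
    resp = requests.post(
        TOKEN_URL,
        data={"grant_type": "authorization_code", "code": code,
              "redirect_uri": REDIRECT_URI},
        auth=(CLIENT_ID, CLIENT_SECRET),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```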
Corresponding to this authorization code flow, Fig. 4 displays the successful login user interface through the RAGE Portal.
The Mendeley JavaScript SDK is available as an Asynchronous Module Definition (AMD) or a standalone library. The following code shows an example using the standalone library for capturing data for the user after the authentication is completed. Each call will either resolve with some data or reject with the original request and the API response [23].
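The SDK listing is likewise not reproduced here. As a rough illustration of what capturing data for the user after authentication amounts to at the HTTP level, the Python sketch below requests the user's document list with the previously obtained bearer token; the endpoint path and the versioned media type are assumptions to be checked against the Mendeley API documentation.

```python
import requests

API_BASE = "https://api.mendeley.com"   # assumed base URL

def fetch_user_documents(access_token, limit=20):
    """Retrieve the authenticated user's documents (assumed /documents endpoint)."""
    resp = requests.get(
        f"{API_BASE}/documents",
        headers={"Authorization": f"Bearer {access_token}",
                 # Versioned media type as commonly required by the Mendeley API
                 "Accept": "application/vnd.mendeley-document.1+json"},
        params={"limit": limit},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()   # list of document metadata dictionaries

# Usage (token obtained via the authorization code flow sketched above):
# docs = fetch_user_documents(token)
# print([d.get("title") for d in docs])
```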
VI. SNF USAGE SCENARIOS AND DESIGN CONCEPT
In addition to outlining our SNS integration approach and methodology, Fig. 6 displays how the SNS usage scenarios can be integrated into the RAGE Ecosystem itself. RAGE Ecosystem users can visit the content and knowledge management support within the RAGE Ecosystem's Digital Library, Media Archive, Software Repository (which is currently under development based on [17]), and LMS. Here, users have the opportunity to:
a. Rate (1), like (2), and comment (3): these Social Networking Features (SNFs) are, e.g., important for the recommendation system (also currently under development [16]) to produce more useful suggestions.
b. Tell a friend (4): users can send links to selected content (or the content itself) through email. Email addresses can be selected either from the RAGE address book or from users' address books located in SNSs.
c. Share and post (5): users can share the selected content to one of their favourite SNSs, or on the fly to more than one, by selecting them from the share button. Users also have the possibility to publish content to a repository (e.g., a GitHub repository) or to cloud storage (e.g., Dropbox).
d. Favourite (6): users can add content to their favourite lists, which makes it easy to later, e.g., share/post their entire favourite list to a community.
e. Share and post to RAGE Communities (7) within the RAGE Ecosystem, and also from any other platforms outside the RAGE Ecosystem; a RAGE Share-Button can be released and, e.g., be integrated by developers
VII. CONCLUSION AND OUTLOOK
In this paper, we have introduced the RAGE Ecosystem supporting community-based content and knowledge management. In detail it will support the collection, sharing, access, and re-use of AG R&D assets, including UGC resources, as well as academic, industry, and end user best practice knowhow represented in corresponding knowledge resources. In this way, the RAGE Ecosystem will provide AG communities, and therefore SNSs and GWSs communities, too, an opportunity to interact, share and re-use content and UCG including corresponding knowledge resources, as well as communicate and collaborate using the RAGE Ecosystem as a back-end community content and knowledge management portal in addition to their favorite SNSs and GWSs. Besides this introduction, we have presented how a technical integration between the RAGE Ecosystem and SNSs and GWSs can be achieved to reduce the fragmentation and to increase the knowledge exchange among AG communities (such as AG developers, researchers, customers, and players). The RAGE Ecosystem and its SNSs, SNFs, GWSs and GWFs integration are currently under development. In the future, RAGE is aiming at increasing outreach and take-up of the RAGE Ecosystem through further SNSs and GWSs integration and SNFs and GWFs implementation. For example, the SNA and discourse analysis will be used for collecting, analyzing, and presenting data about various patterns of relationships among people, objects, and knowledge flows within the RAGE Ecosystem and will provide additional functionality and sophisticated services for end-users, enhancing the emergence of communities. In particular, future developments will focus on identifying collaboration opportunities among individuals and groups, to support matchmaking and collaboration among main stakeholders, and to identify and provide support for innovation opportunities and creativity efforts. In this way, the RAGE project currently anticipates the following tools and services: a) The RAGE Diagnostic tool based on various metrics for analyzing the usage of resources, the formation of different users groups, the level of social interactions, etc., b) the RAGE awareness tool can increase participation of different target groups in the Ecosystem, c) the RAGE Knowledge Mapping tool builds and analyses knowledge maps for all kind of resources available in the Ecosystem. d) the RAGE Professional support tool will support the users by letting them know whom or where to ask for support in different situations, e) the RAGE Community detection tool will use available clustering algorithms (also called ''community detection algorithms'') that automatically identify and locate existing communities, in order to enhance the communication between gaming practitioners, f) the RAGE Ecosystem analysis tool will apply network analysis including many algorithms for identifying the most important, or central in some sense, nodes within a network, g) the RAGE Recommendation may generate value interventions towards stimulating the participation of users. Such interventions include suggesting connections among users, setting up groups, closing the gaps in people's knowledge of other members' expertise and experience, and strengthening the cohesiveness within existing teams. 
Social media data such as tags, comments, purchasing patterns, and ratings can be used to link related gaming assets and users together into networks. The RAGE Social Learning tool applies SNA to online learning environments as well, focusing on the structural relationships between all learning objects and users that support learning communities.
With the design and development of a comprehensive approach as pursued with the RAGE Ecosystem, ethical issues need to be taken into account. The integration of users' SN profiles from different SNSs, as well as the use of features carrying out analyses on top of Ecosystem user data have ethical implications in terms of privacy and data protection and require appropriate information and consent in the terms and conditions of use, as well as compliance to national and international data protection regulations. The same is for any use of log data for the purpose of system evaluation, or for UGC and user actions shared among different SNSs and the RAGE Ecosystem. In addition, with UGC questions related to verification and validation of contributions, as well as to copyright ownership and infringement become relevant. The consideration of such ethical and legal requirements shall be incorporated in the system design and development process in terms of an ethics-by-design approach [26]. This means that data protection and privacy is already taken into account when the system is being designed. Design principles, such as purpose binding, would ensure that personal information is only accessible, if there is a need for it when performing a certain action. The system can also control data access by respecting personal settings which data should be available to others or the public. Other ethics-enabled features include the modification or deletion of personal data.
ACKNOWLEDGEMENTS AND DISCLAIMER
This publication has been produced in the context of the RAGE project. The project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 644187. However, this paper reflects only the author's view and the European Commission is not responsible for any use that may be made of the information it contains. | 5,678.4 | 2016-09-27T00:00:00.000 | [
"Business",
"Computer Science"
] |
Forbidden transitions in nuclear weak processes relevant to neutrino detection, nucleosynthesis and evolution of stars
. The distribution of the spin-dipole strengths in 16 O and neutrino-induced reactions on 16 O are investigated by shell-model calculations with new shell-model Hamiltonians. Charged-current and neutral-current reaction cross sections are evaluated in various particle and γ emission channels as well as the total ones at neutrino energies up to E ν ≈ 100 MeV. Effects of multiparticle emission channels, especially the αp emission channels, on nucleosynthesis of 11 B and 11 C in core-collapse supernova explosions are investigated. The MSW neutrino oscillation effects on charged-current reaction cross sections are investigated for future supernova burst. Electron capture rates for a forbidden transition 20 Ne (0 +g.s. ) → 20 F (2 +g.s. ) in stellar environments are evaluated by the multipole expansion method with the use of shell model Hamiltonians, and compared with those obtained by a prescription that treats the transition as an allowed Gamow-Teller (GT) transition. Different electron energy dependence of the transition strengths between the two methods is found to lead to sizable differences in the weak rates of the two methods
Introduction
Roles of Gamow-Teller (GT) transitions in nuclear weak rates in stellar environments have been investigated in various astrophysical processes. Electron-capture and β-decay rates in sd-shell nuclei have been updated and applied to nuclear URCA processes in the O-Ne-Mg cores of stars with M = 8-10 M ☉ [1][2][3]. Those in fp-shell nuclei have also been updated with GXPF1J [4] and used to study the synthesis of iron-group elements in Type Ia supernovae (SN) [5]. Neutrino-nucleus reaction cross sections for 12 C [6,7], 13 C [8], 40 Ar [9], 56 Fe and 56 Ni [10] have been updated and applied to study ν-process nucleosynthesis [6,7,11] and ν properties [7,11]. In β-decays of N=126 isotones, an important role of first-forbidden transitions in enhancing the rates compared with the FRDM model [12] was pointed out [13,14], and the short half-lives were used to study r-process nucleosynthesis in core-collapse SN and binary neutron star mergers [14]. Here, we focus on forbidden transitions in 16 O and 20 Ne. We discuss spin-dipole strengths in 16 O and ν-induced reactions on 16 O in Sect. 2. In Sect. 3, e-capture rates for a second-forbidden transition in 20 Ne are evaluated with the multipole expansion method.
2 Spin-dipole strengths in 16 O
ν-induced reactions on 12 C were studied with a shell-model Hamiltonian, SFO [15], which can reproduce the GT strength in 12 C and the magnetic moments of p-shell nuclei systematically. The configuration space of the SFO is the p-sd shell, and the quenching factor q = g A eff /g A is found to be close to 1, q = 0.95, in contrast to the case of pure p-shell configurations such as the Cohen-Kurath Hamiltonian [16]. The monopole term in the spin-isospin flip channel is enhanced in the SFO. In the case of 16 O, the GT strength is small and the spin-dipole strength is the dominant contribution to spin-dependent transitions. Therefore, the p-sd cross-shell matrix elements in the SFO are improved by taking into account the tensor and two-body spin-orbit components properly: the tensor and two-body spin-orbit components are replaced by those of π+ρ meson exchanges and σ+ρ+ω meson exchanges, respectively. A new Hamiltonian thus obtained, SFO-tls [17], can reproduce the low-lying energy levels of spin-dipole states in 16 O. The calculated spin-dipole strength in 16 O is shown in Fig. 1. The sum value B(SDλ) - is nearly proportional to 2λ + 1, while the averaged energy position is the lowest (highest) for 2 - (1 - ), as explained below.
The energy-weighted sum (EWS) of the strength can be formulated as the expectation value of the double commutator of the spin-dipole operator with the Hamiltonian. The EWS λ is determined by the kinetic energy, the one-body spin-orbit potential and the two-body spin-dependent interactions [18]. For 16 O, taken to be an LS-closed core, the EWS λ evaluated with the kinetic energy and the one-body spin-orbit potential V LS = ξ Σ i ℓ i ・σ i contains a spin-orbit contribution proportional to f λ , where f λ = 2, 1, and -1 for λ = 0, 1, and 2, respectively; this term splits the averaged energies of the three multipoles. The tensor interaction is attractive (repulsive) for 0 - and 2 - (1 - ) [19]. The averaged energy defined by <E λ > = EWS λ /B(SDλ) therefore results in the order <E 2 > < <E 0 > < <E 1 >.
Fig. 1. Spin-dipole strength in 16 O obtained with the SFO-tls. (Taken from Ref. [18])
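A schematic form of the double-commutator sum rule referred to above is sketched below; the operator definition and overall factors follow a standard convention and may differ from those of Ref. [18] by normalisation.

\[
\hat O_{\lambda\mu}=\sum_k r_k\,[Y_1(\hat r_k)\otimes \vec\sigma_k]^{(\lambda)}_{\mu}\,t_-(k),
\qquad
\sum_f (E_f-E_i)\Big[\,|\langle f|\hat O_{\lambda}|i\rangle|^2+|\langle f|\hat O_{\lambda}^{\dagger}|i\rangle|^2\Big]
=\langle i|\,[\hat O_{\lambda}^{\dagger},[H,\hat O_{\lambda}]]\,|i\rangle .
\]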
ν-induced cross sections on 16 O
Charged-and neutral-current ν-nucleus reaction cross sections on 16 O are evaluated by shell-model calculations with the SFO-tls [18].The quenching for g A is taken to be q=0.95.The total μ-capture rate on 16 O obtained with q=0.95 is λ= 11.20×10 4 s -1 (10.21×10 4 s -1 ) for SFO-tls (SFO), which is close to the experimental value, λ= 11.26×10 4 s -1 [20].Total charge-exchange cross sections for 16 O (ν e , e -) 16 F at E ν < 100 MeV obtained for SFO-tls as well as for SFO and previous continuum-random-phase approximation (CRPA) calculation [21] are shown in Fig. 2. The multipolarities up to J=4 are taken into account.Dominant contributions come from the transitions with J π = 2 -and 1 -.The cross sections for SFO-tls are enhanced compared with those of SFO, and found to be close to those of the CRPA except at E ν < 30 MeV.Neutralcurrent cross sections for SFO-tls are also close to those of the CRPA.Partial cross sections for various particle and γ emission channels, including multi-particle emissions in addition to single-particle ones, are evaluated by the Hauser-Feshbach model.Cross sections for p, pp, 3 He, α and αp emission channels for the excitations of 2 - states are shown in Fig. 3 for SFO-tls.The proton emission channel gives the dominant contribution, while αp and αemission channels become important at higher excita-tion energies at E x ~30 MeV.Fig. 2. Calculated total cross sections for 16 O (ν e , e -) 16 F obtained by shell-model calculations with the SFO-tls and SFO as well as CRPA calculation [21].(Taken from Ref. [18]) Fig. 3. Calculated partial cross sections for 16 O (ν e , e -X ), with X = p, pp, 3 He, α, αp, via excitations of 2 -states in 16 F obtained by shell-model calculation with the SFO-tls.(Taken from Ref. [18]) A large branching ratio for the αp channel leads to the production of 11 B and 11 C by 16 O (ν, ν'αp) 11 B and 16 O (ν e , e -αp) 11 C reactions, respectively, in addition to the ordinary reaction channels 12 C (ν, ν'p) 11 B and 12 C (ν e , e -p) 11 C.The production yields of 11 B + 11 C in supernovae with 15 (20) M ☉ are found to be enhanced by about 13 (12)% compared with those without the multi-particle emission channels [18] (see Table 1).
ν oscillations and detection of supernova ν
The MSW matter resonance oscillations [22] occur in the C-He layer of supernovae for the normal (inverted) mass hierarchy in charged-current reactions induced by ν e (ν̄ e ). Event spectra of neutrino- 16 O charged-current reactions at Super-Kamiokande are evaluated for future supernova neutrino bursts. The cross sections of the 16 O (ν e , e − ) X and 16 O (ν̄ e , e + ) X reactions for each nuclear state with a different excitation energy are evaluated, and the dependence of the cross sections on the mass hierarchy is examined [23]. An enhancement of the expected event numbers for a supernova 10 kpc away from the Earth is predicted for the ν e (ν̄ e )-induced reaction in the case of the normal (inverted) hierarchy [23].
3 Electron-capture rates for a second-forbidden transition in 20 Ne
Evolution of O-Ne-Mg cores in stars
The evolution and final fate of a star with 8-10 M ☉ are sensitive to the nuclear weak rates as well as to its mass. The cooling of the O-Ne-Mg core of the star produced after carbon burning occurs at certain densities by nuclear URCA processes for pairs of nuclei with A = 25 and 23 [1][2][3]. In the later stage of the evolution, the core gets heated by γ emissions in double e-capture processes on 24 Mg and 20 Ne. It was pointed out that a second-forbidden transition in 20 Ne, 20 Ne (0 g.s.+ ) → 20 F (2 1 + ), becomes important at high densities, log 10 (ρY e ) = 9.3-9.6, where Y e is the lepton-to-baryon ratio, and at temperatures log 10 (T) < 9.0 in e-capture reactions [24]. However, the transition strength has not been accurately determined [25]. The e-capture rates for the second-forbidden transition in 20 Ne can be important to estimate the heating of the core and to determine the final fate of stars with 8-10 M ☉ .
The β decay for the inverse transition, 20 F (2 1 + ) → 20 Ne (0 g.s. + ), was measured, and the log ft value was obtained to be 10.47±0.11 [26], which is close to the lower-limit value given in NNDC [25]. The e-capture rates for the forbidden transition in 20 Ne were previously calculated assuming the transition to be an allowed GT transition [24]. Here, we evaluate the e-capture rates accurately with the multipole expansion method [27]. In this method, there are contributions from the Coulomb, longitudinal, transverse electric and axial magnetic terms with the multipolarity J π = 2 + . The transition strength then depends on the electron energy, as in the case of first-forbidden transitions, in contrast to an allowed GT transition.
Calculated e-capture rates for the forbidden transition obtained with the USDB Hamiltonian are shown in Fig. 4 for the temperature log 10 T = 8.6. Results obtained with the prescription assuming an allowed GT transition, with the strength determined from the experimental log ft value [26], are also shown for comparison.
Sizable difference is found between the two methods [28].The rates obtained by the GT prescription are found to be enhanced (reduced) compared with those with the USDB at log 10 (ρY e ) < (>) 9.9.This comes from the difference in the electron energy dependence of the transition strengths between the two methods.The strength for the USDB is reduced (enhanced) at E e < (>) 9.9 MeV compared with the GT prescription.At log 10 (ρY e ) < 9.8, where the electron chemical potential is below 10 MeV, the shell-model strength is smaller than the GT one, and the rates for USDB also remain smaller than the GT one.At log 10 (ρY e ) ≥ 9.9, on the other hand, the electron energy larger than 10 MeV can contribute to the rates due to the enhancement of the chemical potential, and the rates for USDB begin to exceed the GT rate.
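The qualitative effect described here, an energy-dependent transition strength weighted by the degenerate electron gas, can be illustrated with a schematic phase-space integral. The sketch below compares a constant (GT-like) shape factor with one that grows with electron energy; all numerical inputs (threshold, chemical potentials, temperature and the assumed energy dependence) are arbitrary placeholders, Coulomb corrections are neglected, and no attempt is made to reproduce the USDB rates of Ref. [28].

```python
# Schematic comparison of e-capture rates for a constant (GT-like) shape factor
# and an electron-energy-dependent one.  Units: m_e c^2 = 1; Coulomb corrections
# and overall normalisation constants are omitted.  All parameters are placeholders.
import numpy as np

def capture_rate(q_value, mu_e, kT, shape):
    """Relative e-capture rate for a transition with threshold q_value.

    shape(E) is the (dimensionless) squared transition strength at electron energy E.
    """
    e_th = max(1.0, q_value)                     # capture threshold
    E = np.linspace(e_th, mu_e + 30.0 * kT, 4000)
    p = np.sqrt(E**2 - 1.0)                      # electron momentum
    e_nu = E - q_value                           # emitted neutrino energy
    fermi_dirac = 1.0 / (np.exp((E - mu_e) / kT) + 1.0)
    integrand = shape(E) * p * E * e_nu**2 * fermi_dirac
    return np.trapz(integrand, E)

q, kT = 7.0, 0.07                                # placeholder threshold and temperature
gt_like = lambda E: 1.0                          # constant strength (GT prescription)
forbidden = lambda E: (E / 20.0)**2              # assumed growth, crossing 1 near E = 20

for mu_e in (8.0, 20.0, 30.0):                   # increasing density / chemical potential
    ratio = capture_rate(q, mu_e, kT, forbidden) / capture_rate(q, mu_e, kT, gt_like)
    print(f"mu_e = {mu_e:5.1f} m_e c^2 : forbidden/GT rate ratio = {ratio:.2f}")
```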
The Coulomb effects, that is, the screening effects on both electrons and ions [29][30][31][32], are also investigated. The Coulomb effects reduce the e-capture rates and shift them toward the higher-density region due to an increase of the threshold energy by ΔQ C = μ C (Z-1) - μ C (Z), where μ C (Z) is the Coulomb chemical potential of the nucleus with charge number Z due to the interaction of the ion with other ions in the electron background (see more details in Ref. [28]).
Fig. 4. Calculated e-capture rates for 20 Ne (e - , ν e ) 20 F (2 1 + ) obtained by shell-model calculations with the USDB at log 10 T = 8.6. Results obtained by the prescription assuming an allowed GT transition are also shown.
Heating of the O-Ne-Mg core due to γ emissions succeeding the double e-capture reactions, 20 Ne (e − , ν e ) 20 F (e − , ν e ) 20 O, is important in the final stage of the evolution of the core.Study of the evolution of the high density electron-degenerate core with the use of the present shell-model rates is in progress [33].
Summary
Forbidden transitions in 16 O and 20 Ne are studied by shell-model calculations with the use of new shellmodel Hamiltonians.Spin-dipole strength in 16 O is investigated based on non-energy-weighted and energyweighted sum rules.Total and partial ν-induced reactions on 16 O in various channels including multiparticle emissions are obtained with the SFO-tls, which can reproduce low-lying spin-dipole states in 16 O.Total cross sections are found to be close to those of a standard CRPA calculation, while a large branching ratio is noticed for the αp emission channel.This results in an enhancement of production yields of 11 B and 11 C in supernova explosion compared with the case without the multi-particle emission channels.Possible measurement of future supernovae at Super-Kamiokande by charged-current reactions on 16 O is discussed, and dependence of the cross sections on the ν mass hierarchies in MSW ν oscillations is investigated.Electron-capture rates for a second-forbidden transition 20 Ne (0 g.s.+ ) → 20 F (2 1 + ) are evaluated by shellmodel calculations with the USDB, and compared with those of the GT prescription, where the transition is treated as an allowed GT transition.Sizable difference is noticed in the e-capture rates in stellar environments, which is caused by the difference in the electron energy dependence of the transition strengths between the two methods. | 3,019.8 | 2019-01-01T00:00:00.000 | [
"Physics"
] |
Detrending Technique for Denoising in CW Radar
A detrending technique is proposed for continuous-wave (CW) radar to remove the effects of direct current (DC) offset, including DC drift, which is a very slow noise that appears near DC. DC drift is mainly caused by unwanted vibrations (generated by the radar itself, target objects, or surroundings) or characteristic changes in components in the radar owing to internal heating. It reduces the accuracy of the circle fitting method required for I/Q imbalance calibration and DC offset removal. The proposed technique effectively removes DC drift from the time-domain waveform of the baseband signals obtained for a certain time using polynomial fitting. The accuracy improvement in the circle fitting by the proposed technique using a 5.8 GHz CW radar decreases the error in the displacement measurement and increases the signal-to-noise ratio (SNR) in vital signal detection. The measurement results using a 5.8 GHz radar show that the proposed technique using a fifth-order polynomial fitting decreased the displacement error from 1.34 mm to 0.62 mm on average when the target was at a distance of 1 m. For a subject at a distance of 0.8 m, the measured SNR improved by 7.2 dB for respiration and 6.6 dB for heartbeat.
Introduction
A continuous-wave (CW) radar, which can be implemented with a simple hardware configuration, can detect movement; speed; and vital signals, such as respiration and heartbeat, from the Doppler frequency [1][2][3][4][5][6][7][8][9]. The CW radar can be extended to a frequencyshift keying (FSK) radar based on a similar hardware configuration. This can measure an absolute distance in a short range by using the phase differences between the transmitted and received signals obtained by continuously switching between two or more CW signals with different frequencies [10][11][12].
The phase-detection accuracy of CW/FSK radars determines the detection performance of the displacement measurement with sub-wavelength precision and the vital signal detection obtained from the movement of the surface of the human body [13,14]. A circle fitting method has been proposed to reduce the error in detecting the phase difference caused by the non-ideal characteristics of clutter in the surroundings or radar hardware and the imbalance between in-phase (I) and quadrature (Q) signals [15][16][17]. This method can increase the accuracy of the phase measurement in the baseband by approximating the trajectory of the measured I/Q signals to a circle [18]. In this method, the direct current (DC) offset can be removed by moving the center of the estimated circle to the center of the complex plane, and the I/Q imbalance can be corrected by compensating the ellipse trajectory with a circle [19,20]. Previous studies on the implementation of the circle fitting method generally assume that the DC offset and I/Q imbalance are almost constant while drawing the trajectory of the circle in the baseband of the CW radar [21][22][23]. This assumption can be accepted in general for an I/Q imbalance caused by hardware imperfections but not for a DC offset generated by clutters in the surroundings and device characteristics in the radar [24][25][26]. In particular, changes in the characteristics of components due to heat generation, vibrations at a very low frequency, and unintentional movements of a subject with limited movement can generate the so-called DC drift phenomenon, which is the slow movement of the DC offset [27,28]. DC drift lowers the phase-detection accuracy in CW/FSK radars because it makes it difficult for the DC offset to be removed in the circle fitting method [29]. The signal-to-noise ratio (SNR) of the vital signal detection, the accuracy of the displacement measurement in the CW radar, and the accuracy of the range finding in the FSK radar decrease due to the lowered accuracy in phase detection.
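As an illustration of the kind of circle fitting referred to above, the sketch below uses an algebraic (Kasa-type) least-squares fit to estimate the center and radius of the I/Q trajectory. It is a generic implementation of that idea, not necessarily the specific estimator of Refs. [15-20], and the synthetic data are placeholders.

```python
import numpy as np

def fit_circle(i_sig, q_sig):
    """Algebraic (Kasa) least-squares circle fit to I/Q samples.

    Solves x^2 + y^2 = 2*a*x + 2*b*y + c for (a, b, c); the center is (a, b)
    and the radius is sqrt(c + a^2 + b^2).
    """
    x, y = np.asarray(i_sig, float), np.asarray(q_sig, float)
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return (a, b), np.sqrt(c + a**2 + b**2)

# Synthetic arc with a DC offset (placeholder values)
phi = np.linspace(0.2, 2.5, 500)
i_sig = 0.12 + 1.0 * np.cos(phi) + 0.005 * np.random.randn(phi.size)
q_sig = -0.08 + 1.0 * np.sin(phi) + 0.005 * np.random.randn(phi.size)

center, radius = fit_circle(i_sig, q_sig)
i_cal, q_cal = i_sig - center[0], q_sig - center[1]   # DC offset removed
phase = np.unwrap(np.arctan2(q_cal, i_cal))           # phase after calibration
print(center, radius)
```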
In this study, a detrending technique that can compensate for the effect of DC drift is proposed to improve the phase-detection accuracy of the circle fitting method in CW radars. The proposed technique models and removes DC drift in the baseband signals obtained for a certain period of time by polynomial fitting. The simulation results for respiration detection using CW radar indicate that the proposed technique can remove the effect of DC drift, which is present in signals with a frequency 10 times lower than the desired signal. The measurement results of the displacement measurement of 0.3 m for a target object located at 1 m and the respiration and heartbeat detection for the subject at 0.8 m show that the proposed technique can eliminate DC drift in CW radars. The effect of DC drift in CW/FSK radars and the principle of the proposed detrending technique are described in Section 2. The simulation results in Section 3 show that the proposed detrending technique effectively cancels the DC drift modeled to periodic signals with a very low frequency. Section 4 presents the experimental setup for the validation of the proposed technique with displacement measurement and vital signal detection in the CW radar. The effects of the proposed technique are discussed using the CW radar measurement results in Section 5. The conclusions are presented in Section 6.
DC Offset and Drift in the CW Radar
When the residual phase noise in each channel is neglected owing to the range correlation effect of the radar, the baseband I/Q signals I(t) and Q(t) in the CW radar can be expressed as
I(t) = A_I cos(4π(d_0 + x(t))/λ) + DC_I(t), (1)
Q(t) = A_Q sin(4π(d_0 + x(t))/λ + φ_E) + DC_Q(t), (2)
where λ is the wavelength of the operating frequency, A_I and A_Q represent the amplitudes of the I/Q signals, d_0 is the distance from the radar to the target, φ_E is the phase offset between the I and Q channels, and x(t) is the displacement of the target object. The quantity measured by the CW radar is x(t), which is obtained from the phase change of the trigonometric functions of the I/Q signals as they vary with time. The FSK radar can additionally detect d_0 from the change in phase difference obtained using two frequencies [10]. The DC offset in each channel, DC_I(t) and DC_Q(t), can be expressed with two terms depending on their variation with time as follows:
DC_I(t) = I_DC + i_DC(t), (3)
DC_Q(t) = Q_DC + q_DC(t), (4)
where i_DC(t) and q_DC(t) are the time-variant DC offset voltages, and I_DC and Q_DC are the static DC offset voltages in the I/Q baseband signals. I_DC and Q_DC, or any i_DC(t) and q_DC(t) that can be averaged over a certain period of time, can easily be removed using the calibration process in the digital domain, which moves the center of a circle or arc in the complex plane [30]. In addition, if the desired signal can be distinguished from the time-variant DC offset by its frequency, the DC offset can easily be removed by a high-pass filter in the baseband. However, DC offset removal by a high-pass filter can attenuate the level of a desired signal located at a low frequency, such as respiration and heartbeat signals. When the DC offset has periodicity at a very low frequency or does not converge within a certain time, the center of the circle or arc appears to move, and the accuracy of extracting the phase variation decreases. DC drift is defined as a phenomenon in which the DC offset moves in a certain direction in the complex plane owing to a very slow change. DC drift appears in two cases, as shown in Figure 1, depending on the magnitude of x(t) compared with the wavelength of the operating frequency. The two cases might be understood as the same phenomenon, in which the centers of the circles and arcs move in one direction at a very slow speed, but the algorithm to remove the effect of the DC drift cannot be the same for both. In Figure 1a, the DC drift appears only as a movement of the circle's center because x(t) is long compared with the wavelength, and it can be compensated using a method that estimates the vector of the center movement. However, when that method is used to remove the DC drift, the accuracy of the displacement or phase measurement deteriorates because of the time variation of the arc length generated by the DC drift, as shown in Figure 1b, and this variation in arc length is difficult to estimate in general. The variation in arc length due to DC drift also occurs in the case of Figure 1a, but it does not need to be estimated there because the circle fitting method is accurate when a complete circular trajectory is drawn. The phenomenon shown in Figure 1b is generated not only when the desired displacement of the target is short compared with the wavelength of the operating frequency but also by characteristic changes in the radar components due to heat generation and by ambient vibration in the very low frequency band [27,29].
In particular, the detection accuracy of the vital sign detection using the CW/FSK radar can be lowered by the DC drift generated by undesired movement of the subject in the radar sensor. Therefore, a generalized compensation technique for DC drift removal is required to improve the measurement accuracy of CW/FSK radars caused by changes at very slow frequencies or non-converged characteristics regardless of the displacement magnitude of the target object.
Detrending Technique Based on Polynomial Fitting in Time Domain
The detrending technique is defined as removing the effect of DC drift in the I/Q baseband signals. As shown in Figure 2, the DC drift with a very low frequency component appears to change more slowly than the I/Q baseband signals in the time domain. DC drift in the form of non-convergence also appears similar to the case of the DC drift with a very low frequency component when the baseband signals, including DC drift, are not saturated at the input of the analog-to-digital converter (ADC). Figure 2a shows that different DC offsets and drifts occur in each channel due to the imperfections of the radar hardware module and the effects of the parasitic components in the module. As shown in Figure 2b, the proposed detrending technique effectively removes the DC or near DC components in the I and Q channels by obtaining the model equation using polynomial fitting and calibrating the original signals with the equation. The calibrated I/Q signals without DC or near DC components can greatly improve the accuracy of phase extraction through simple digital signal processing with a circle fitting algorithm.
The generated DC drift in each channel of a single radar module is not exactly the same because the drift appears individually in each channel, but the DC drifts of the I and Q signals are correlated because their common causes lie within the radar module. This correlation indicates that the DC drift in each channel can be modeled by polynomial fitting with the same order for a specific time period. The polynomial fitting for the input x can be expressed by the fitting equation f(x) as follows:
f(x) = β_0 + β_1 x + β_2 x² + ... + β_n x^n,
where n is the maximum order of the polynomial fitting, and β_i is the coefficient of the ith-order term. The time period and the maximum order of the polynomial are the most important factors for estimating the DC drift in the baseband signals. These parameters are closely related to each other in the implementation of the signal processing because the signal components to be removed by the proposed detrending technique are mainly located in the frequency band below 0.1 Hz. When the polynomial fitting is performed with a low order on the sampled baseband signals, the DC drift cannot be effectively extracted, as shown in Figure 3a. When a high-order polynomial is used to fit the DC drift, as shown in Figure 3c, the desired signals, especially at the beginning and end of the data, can be lost in the signal processing. Therefore, it is important for the proposed detrending technique to determine the best fitting order of the polynomial, one that removes only the DC drift, as shown in Figure 3b. Considering the measurement environment, target object, and radar hardware components in the CW radar, this study assumed that the time period of the polynomial fitting in the proposed detrending technique was normally 60 s and at most 120 s. The polynomial order in the proposed technique was adjusted from the second to the ninth order to determine the optimum order in a given time period.
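To make the fitting step concrete, the following short Python sketch fits and subtracts a polynomial trend from one baseband channel. It is a minimal illustration of the detrending idea; the function name, sampling rate, and default order are assumptions for this sketch rather than the authors' MATLAB implementation.
import numpy as np

def detrend_polynomial(signal, fs, order=5):
    # Remove a slow DC drift by fitting and subtracting a polynomial trend.
    # signal : 1-D array of baseband samples from one channel (I or Q)
    # fs     : sampling rate in samples per second
    # order  : polynomial order used to model the drift (2 to 9 in the paper)
    t = np.arange(len(signal)) / fs          # time axis in seconds
    coeffs = np.polyfit(t, signal, order)    # least-squares fit of the drift model
    trend = np.polyval(coeffs, t)            # evaluate the fitted drift
    return signal - trend                    # detrended channel

# Example (illustrative): detrend 60 s of I/Q data sampled at 1000 S/s
# i_det = detrend_polynomial(i_raw, fs=1000, order=5)
# q_det = detrend_polynomial(q_raw, fs=1000, order=5)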
Signal Processing of the CW Radar, Including the Proposed Technique
The phase difference between the transmitting (Tx) and receiving (Rx) signals is obtained from the baseband signals in the I/Q channels of the CW radar using digital signal processing, including the proposed detrending technique. It is theoretically necessary for accurate phase extraction to sample the I/Q baseband signals simultaneously, and a synchronous ADC is generally used in the data acquisition (DAQ) so that the phase variation between signal channels due to the switching time in the input channel of the ADC can be neglected. When the phase variation in the baseband signals is sufficiently slow compared to the sampling rate of the ADC, a multichannel ADC with switching operations in the input channels can be used because the switching time delay between the input channels is not significant. The sampled signals were low-pass filtered using a cut-off frequency of approximately 5 Hz in the processing for vital sign detection and small displacement measurements. The cut-off frequency of the filter can be determined based on the frequency of the desired signals in the CW radar. The model equation for the DC drift was extracted by polynomial fitting with the second or higher order using the data obtained for less than 60 s. DC drift was removed by subtracting the value obtained from the model equation from each sampled raw signal value. After processing with the proposed detrending technique, the amplitude and phase mismatches of the I/Q signals due to hardware imperfections were calibrated by using the Gram-Schmidt procedure, which produces the two orthonormal vectors I_ort(t) and Q_ort(t), the calibrated baseband signals [31]. The calibrated I/Q baseband signals are displayed on the complex plane as a circle or an arc, depending on the length of the movement of the target object compared to the wavelength of the radar operating frequency. Circle fitting based on the Levenberg-Marquardt (LM) method can be used to check whether the proposed technique is effective in removing DC drift [18,32]. The correlation between the center of the generated circle and the origin of the complex plane showed that the DC drift was sufficiently removed by the proposed detrending technique. The phase information contained in the I/Q baseband signals is obtained using arcsine demodulation, which is a nonlinear demodulation method [33]. Figure 4 summarizes the overall flowchart of the proposed signal processing method for phase extraction in the CW radar.
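The sketch below outlines, in Python, one common data-driven form of Gram-Schmidt I/Q calibration followed by arcsine demodulation. The exact equations used in [31,33] are not reproduced in the text above, so the normalization choice, the circle-radius argument, and the function names here are illustrative assumptions only.
import numpy as np

def gram_schmidt_iq(i_sig, q_sig):
    # One data-driven form of Gram-Schmidt I/Q imbalance compensation:
    # remove the component of Q correlated with I (phase imbalance) and
    # equalize the channel amplitudes so the trajectory becomes circular.
    i0 = i_sig - i_sig.mean()
    q0 = q_sig - q_sig.mean()
    q_res = q0 - (np.dot(q0, i0) / np.dot(i0, i0)) * i0   # orthogonalize Q against I
    q_ort = q_res * (np.std(i0) / np.std(q_res))           # match RMS amplitudes
    return i0, q_ort

def arcsine_demodulation(q_cal, radius, wavelength):
    # Recover small displacements x(t) from the calibrated Q channel.
    # radius is the circle radius estimated by circle fitting; the arcsine
    # form is valid only while the trajectory stays on a short arc.
    phase = np.arcsin(np.clip(q_cal / radius, -1.0, 1.0))  # radians
    return wavelength * phase / (4.0 * np.pi)              # same unit as wavelength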
Simulation Results
The proposed detrending technique was verified by simulation using MATLAB. The I/Q baseband signals with different amplitudes were generated by using (1) and (2), modeled with vital signs represented by a respiration signal at a frequency of 0.3 Hz and a heartbeat signal at a frequency of 1.3 Hz. In the model functions (1) and (2), the distance d_0 and the phase error φ_E were assumed to be 0.5 m and 0°, respectively. Without distinguishing the origin of the DC drift, the DC drift in the simulation was assumed to be a trigonometric function with a frequency of 0.03 Hz in (3) and (4) to describe the change in the DC offset at a very low frequency, and the static DC offset in each channel was set to an arbitrary value within the dynamic range of the ADC. Each baseband signal was generated for 60 s, and a data set for each channel was constructed with a sampling rate of 1000 samples per second. Figure 5 shows the I/Q baseband signals generated from the simulation, displayed in the complex plane and the time domain, respectively. Because the displacement is short compared with the wavelength of the 5.8 GHz radar frequency, the baseband signals are displayed as arcs distorted by DC drift on the I/Q plot, as shown in Figure 5a, and as time-variant signals modulated with a very low frequency in the time domain, as shown in Figure 5b.
Figure 6 shows the simulation results of the comparison between the conventional DC offset cancellation and the proposed detrending technique. The data obtained using the conventional method, which only removes a static DC offset, did not converge to the trajectory of a single circle, and the center of each circle was not the same as the origin of the complex plane. However, the data obtained using the proposed technique converged to a single trajectory of a single circle centered at the origin. A comparison of the characteristics of the proposed technique depending on the fitting order shows that the fitted data using the fifth- and seventh-order polynomials were located more accurately on the trajectory of the single circle than the data using the third- and ninth-order polynomials. The arcs obtained using the proposed technique with the third-order polynomial fitting are distributed near the trajectory of the single circle, as shown in Figure 6a, showing that the DC drift was not removed as thoroughly as with the other orders. This means that the DC drift generated over 60 s in the simulation cannot be sufficiently eliminated by a third-order polynomial model. Both ends of the trajectory obtained using the ninth-order polynomial were wider than the midpoint of the trajectory, as shown in Figure 6d, which can be attributed to over-fitting distortion. The trajectories generated using the fifth- and seventh-order polynomials in Figure 6b,c have high accuracy and similarity. The simulation results, which depend on the order of the polynomial fitting in Figure 6, show that the proposed detrending technique has an optimum order for DC drift cancellation. However, the optimum order of the polynomial fitting can change with the DC drift waveform, the data acquisition time (T_DAQ), and the sampling rate because the best fitting polynomial depends on them.
The simulation results in the frequency domain, which were obtained using the fast Fourier transform (FFT) as shown in Figure 7, show that the SNR of the vital signal detection modeled in the simulation can be improved by the proposed detrending technique. The proposed detrending technique increased the normalized magnitude of the desired signal compared to the conventional method, because the noise related to DC drift is significantly reduced by the proposed technique. The SNRs of the respiration and heartbeat signals were increased by 6.2 dB and 6.9 dB, respectively. The SNR improvement did not show a significant difference in the proposed technique when third-order or higher polynomial fitting was used. The simulation results showing the SNR improvement regardless of the order of the polynomial fitting demonstrate that the proposed technique using polynomials of a certain order or higher is sufficiently useful in the application without finding the best fitting order.
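A minimal Python sketch of this kind of simulation is given below. The vital-sign amplitudes, drift magnitudes, and static offsets are arbitrary assumed values chosen only to show how a fifth-order detrend suppresses the low-frequency residual relative to static offset removal; this script is not the MATLAB code used for Figures 5-7.
import numpy as np

fs, dur = 1000, 60                        # 1000 S/s for 60 s, as in the simulation
t = np.arange(fs * dur) / fs
lam = 3e8 / 5.8e9                         # wavelength of the 5.8 GHz carrier (m)

# Displacement: respiration (0.3 Hz) plus heartbeat (1.3 Hz); amplitudes assumed.
x = 4e-3 * np.sin(2 * np.pi * 0.3 * t) + 0.3e-3 * np.sin(2 * np.pi * 1.3 * t)
phase = 4 * np.pi * (0.5 + x) / lam       # d_0 = 0.5 m, phase error assumed zero

# Slow sinusoidal drift (0.03 Hz) plus static offsets; magnitudes assumed.
drift_i = 0.20 * np.sin(2 * np.pi * 0.03 * t) + 0.10
drift_q = 0.15 * np.sin(2 * np.pi * 0.03 * t) - 0.05
i_sig = 1.0 * np.cos(phase) + drift_i
q_sig = 0.9 * np.sin(phase) + drift_q

def spectrum(sig):
    # Windowed magnitude spectrum for a rough comparison of residual drift.
    win = np.hanning(len(sig))
    return np.abs(np.fft.rfft((sig - sig.mean()) * win))

f = np.fft.rfftfreq(len(t), 1 / fs)
conv = q_sig - q_sig.mean()                               # static offset removal only
prop = q_sig - np.polyval(np.polyfit(t, q_sig, 5), t)     # fifth-order detrend

for name, sig in [("static offset only", conv), ("5th-order detrend", prop)]:
    s = spectrum(sig)
    print(name, "low-frequency residual (<0.1 Hz):", s[f < 0.1].max().round(2))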
CW Radar Sensor Module
The radar front-end circuit was implemented on an FR4 printed circuit board (PCB) with a thickness of 0.6 mm as shown in Figure 8a. Figure 8b shows the block diagram of the CW radar sensor operating at the 5.8 GHz ISM band. A 5.8 GHz signal was generated by a voltage-controlled oscillator (VCO) (HMC431LP4, Analog Devices Inc.) with an output power of 2 dBm and amplified by a power amplifier (HMV407MS8G, Analog Devices Inc.) with a power gain of 15 dB. A power divider (PD4859J5050S2HF, Anaren Inc.) was used to distribute the transmitted and local oscillator (LO) signals. The output power at the transmitter port was measured to be 10 dBm due to path loss. The receiver path consisted of a low-noise amplifier (HMC717ALP3E, Analog Devices Inc.), a down-conversion mixer (HMC951A, Analog Device Inc.), and low-pass filters (0500LP15A500, Johanson Technology). The total noise figure and gain of the receiver path were calculated to be 1.37 dB and 27.5 dB, respectively. The antenna gain of the patch antenna designed on an FR4 PCB with a thickness of 1 mm was measured to be 4.4 dBi [34]. The module size was set to 35 mm × 55 mm, excluding the Tx/Rx patch antennas.
Data Acquisition from the Radar and Reference Sensors
The baseband I/Q signals were captured by a synchronous multichannel DAQ board (NI USB-6366, National Instruments) with sampling rates of 1k samples per second in each channel. The sampling rate was fixed in the experiment based on the simulation results of the proposed method, and the T_DAQ of the proposed detrending method was initially set to approximately 60 s considering the characteristics of the target object. Digital signal processing, including the proposed detrending technique, was implemented using MATLAB on a PC. The performance of the CW radar using the conventional method and the proposed technique was demonstrated by a displacement measurement using a linearly moving stage and vital signal (respiration and heartbeat signals) detection for the subject.
In the displacement measurement, the target object periodically traveled within the range of 26 mm, which is half of the wavelength at 5.8 GHz, at a distance of 1 m by the moving stage with a range resolution of 1 µm as shown in Figure 9a. The target object and the antennas were placed on the same horizontal plane at a height of 1 m from the floor in order for the radar to measure the same displacement controlled by the moving stage. The range finder (ILR-1182, Micro-Epsilon) based on a laser with a resolution of 0.1 mm was used as the reference for the displacement measurement. The measured data in the radar were synchronized after capturing in each DAQ board due to the different sampling rates between the radar and reference sensors. The T_DAQ for processing the proposed technique in the displacement measurement was modified to 120 s, which is longer than that in the simulation, because it was expected that the DC drift can be effectively removed by the proposed detrending as the time increased. It is difficult to determine the performance degradation in the DC drift caused by unwanted movement in the displacement measurement using the radar due to the precise displacement control of the moving stage. Therefore, the displacement was set to a half-wavelength of the operating frequency for the 5.8 GHz radar to draw a circle in the I/Q plot, and the long T_DAQ was set such that the DC drift in the radar module itself could be generated due to heat or vibration.
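As a hedged illustration of this synchronization step, the short Python sketch below resamples the reference-sensor record onto the radar time base by linear interpolation; the sensor sampling rates, variable names, and interpolation choice are assumptions, and the authors' actual post-processing may differ.
import numpy as np

def synchronize(radar_t, radar_x, ref_t, ref_x):
    # Resample the reference-sensor data onto the radar time base so that
    # both displacement records can be compared sample by sample.
    ref_on_radar = np.interp(radar_t, ref_t, ref_x)   # linear interpolation
    return radar_x, ref_on_radar

# Example (illustrative): radar at 1000 S/s, laser range finder at 100 S/s
# radar_t = np.arange(len(radar_x)) / 1000.0
# ref_t   = np.arange(len(ref_x)) / 100.0
# radar_x, ref_aligned = synchronize(radar_t, radar_x, ref_t, ref_x)
# error_mm = np.mean(np.abs(radar_x - ref_aligned)) * 1000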
The respiration and heartbeat signals were measured for a subject trying to maintain a still state by using the radar module at a distance of 0.8 m. Two contact sensors were used to determine the measurement accuracy of the vital signal detection: the respiration belt GDX-RB (Vernier Software & Technology), which sampled the respiration data at a rate of 20 Hz, and the three-electrode ECG sensor EKG-BTA (Vernier Software & Technology), which monitored the heart rate at a rate of 200 Hz. The detection accuracy was compared by synchronizing the data in post-processing using the same method as in the displacement measurement because the sampling rates of the CW radar and the reference sensors are different. Despite the subject's effort to maintain the posture for the measurement time, microscopic movements of the subject, such as moving the body back and forth or wiggling, cannot be avoided, and these movements can generate a DC drift with a very low frequency in the CW radar [28]. The proposed technique should be verified using a shorter T_DAQ in the vital signal detection than in the displacement measurement because the DC drift generated by the unintentional movement of the human body is unpredictable and uncontrollable. Therefore, the T_DAQ in the vital signal detection was arbitrarily set to 40 s, which is shorter than the 60 s verified in the simulation results.
Displacement Measurement
The accuracy of the displacement measurement shows that the proposed detrending technique can remove DC drift more effectively than the conventional method, because DC drift reduces the accuracy of detecting the phase difference between the Tx and Rx signals in the CW radar. The baseband signals obtained from an ideal single target with a half-wavelength displacement theoretically draw an I/Q plot as a single circle of constant radius centered on the origin of the complex plane. The I/Q plot displayed with the measurement data might form several circles with different radii or deviated center points. To compare the performance of the conventional DC offset cancelation and the proposed detrending technique, the centers of the generated circles were fixed at the origin, and the accuracies of the displacement measurement using both methods were analyzed with the standard deviation (SD) of the radius. The I/Q plots comparing the proposed technique with the conventional method in Figure 10 show that the proposed technique provides a more accurate single-circle fit by reducing the effect of the DC drift regardless of the fitting order.
It can be regarded that the signal processing method with the smallest SD of the circle, that is, the technique with the highest accuracy of the circle fitting, offers the best performance in removing DC drift because the target in the measurement moves repeatedly in the range for the T_DAQ. Figure 11 shows the standard deviation of the circle trajectory generated by the circle fitting method after processing the measurement data with the conventional static DC offset cancelation (no fitting, equivalent to order 0 in Figure 11) and the proposed technique (polynomial fitting with second to ninth order). Compared with the conventional method, the low SD of the proposed technique indicates that it is useful for improving the accuracy of phase detection by generating a highly accurate circle trajectory. In addition, Figure 11 shows that one polynomial fitting order of the proposed technique was most effective in removing the DC drift. The best fit in the displacement measurement was achieved using the fifth-order polynomial, a similar order to that found in the simulation. Although the simulation and measurement results are based on data captured over different time durations, the similar polynomial orders show that the proposed detrending technique using fifth-order polynomial fitting can be used generally to remove DC drift under the simulation conditions and the experimental setup of the CW radar.
The measurement error in the displacement obtained by the radar, depending on the DC offset cancelation, is shown in Figure 12. The proposed technique using the fifth-order polynomial fitting, which shows the smallest SD in Figure 11, had the lowest error of 0.62 mm on average. These results show that the fifth-order polynomial was the best fit in the proposed technique. The average error in the displacement measurement was 1.34 mm when the conventional method was used. This was higher than the errors obtained in the proposed technique with the fifth- and seventh-order (0.88 mm) polynomials but lower than those obtained with the third- (1.42 mm) and ninth-order (2.08 mm) polynomials. The increase in error using the third- and ninth-order polynomials indicates that the performance variation of the proposed technique might increase depending on the displacement when the DC drift is estimated with a polynomial of too low or too high an order. Therefore, the displacement measurement results demonstrate that choosing the optimum order is important in the polynomial fitting of the proposed technique, and the fifth-order polynomial is the most effective at removing DC drift in the experimental setup described in Section 4.
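The following Python sketch shows one way to quantify such a radius SD. It uses an algebraic (Kasa) least-squares circle fit as a simple stand-in for the Levenberg-Marquardt circle fitting used in the paper, so the fitting method itself, not just the names, is an assumption of this sketch.
import numpy as np

def fit_circle(x, y):
    # Algebraic (Kasa) least-squares circle fit: solve x^2 + y^2 = a*x + b*y + c,
    # then center = (a/2, b/2) and radius = sqrt(c + (a/2)^2 + (b/2)^2).
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x**2 + y**2
    a, bb, c = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = a / 2.0, bb / 2.0
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r

def radius_std(i_cal, q_cal):
    # Standard deviation of the sample radii about the fitted circle center;
    # a smaller value indicates a more accurate single-circle trajectory.
    cx, cy, _ = fit_circle(i_cal, q_cal)
    radii = np.hypot(i_cal - cx, q_cal - cy)
    return np.std(radii)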
Figure 10. I/Q plots in the displacement measurement obtained by the circle fitting method after processing the conventional static DC offset cancellation and the proposed detrending technique depending on the order of the polynomial fitting: using the polynomial with (a) third order; (b) fifth order; (c) seventh order; (d) ninth order.
Figure 11. Standard deviation for the radius of the generated circle trajectory in the I/Q plots shown in Figure 10.
Vital Signal Detection
The effect of DC drift on vital signal detection using the CW radar is shown in Figure 13. The DC drift in the baseband I/Q signals is represented as a low-frequency modulation signal in the time domain, which is caused by the complex combination of an undesired very slow motion from the radar or the target, temporal change in DC offset in the radar, and noise floor increase due to heat generation. When DC drift is reduced by the proposed technique, the overall noise characteristics in the baseband can be reduced, not just the low-frequency noise, resulting in an improved SNR of respiration and heartbeat signals as shown in Figure 13.
Figure 13. DC drift in the respiration and heartbeat detection using the CW radar.
Figure 14 shows the frequency spectrum of the measured respiration and heartbeat signals using the 5.8 GHz radar with the conventional DC offset cancelation and the proposed technique. The proposed technique using only the fifth-order polynomial fitting was used for vital signal detection because the simulation and displacement measurement results showed that the fifth-order polynomial fitting was the best for removing DC drift in the proposed method. The measurement results in the frequency domain show that the proposed technique reduces the voltage magnitude due to DC offset and DC drift more than twice as much as the conventional method, and the decrease in the signals at DC and near DC coincides with the description in Figure 13. However, the magnitudes of respiration and heartbeat signals by the proposed technique increased in the measurement results compared to the conventional method, contrary to the description in Figure 13. The performance improvement of the proposed method, shown in Figure 14, can be explained by the improvement in the accuracy of the phase extraction in digital signal processing. The signal processing in Figure 4 shows that the phase extraction was performed after the DC offset and drift removal. The respiration and heartbeat signals in the raw data obtained from the baseband channels are independent of the method used to remove the DC offset and drift, but the accuracy of the phase extraction after I/Q calibration can vary depending on the degree of the DC offset and drift removal as mentioned in Section 2.
The vital signals measured using the CW radar with the proposed technique had an accuracy of 98.89% in respiration and 99.54% in heartbeat as compared to the reference sensors. The accuracy difference depending on the polynomial order was not indicated in the experimental results because the same frequency peak of the vital signals was detected in the measurement regardless of the polynomial order in the proposed technique. The difference in the SNR between the conventional method and the proposed detrending technique can be expressed as the ratio of the magnitude of each vital signal shown in Figure 14, because the difference in the noise level due to DC offset outside the frequency band of interest is negligible. The SNR improvement of the vital signal detection was measured as 7.2 dB in respiration and 6.6 dB in heart rate, and the SNR of the respiration displayed in a lower frequency band was improved more than that of the heartbeat by the proposed technique. This is because the respiration signal was closer to the DC.
The simulation and measurement results show an improvement in the accuracy of the displacement measurement and the SNR in the vital signal detection by using the proposed technique in the CW radar. The results show that for a data acquisition time from 40 s to 120 s and a detection distance within 1 m, a polynomial fitting of the fifth order shows the best performance in the proposed technique. However, it is thought that the polynomial fitting order of the proposed technique can be sufficiently varied depending on the target object, measurement conditions, and surroundings because there is no theoretical basis that the fifth order would always be the best order of the polynomial fittings. It will be a good further research study for us to show whether the best order converges in more diverse targets and environments.
Conclusions
A detrending technique was proposed to remove the noise generated by the static and time-variant DC offset in a CW radar. DC drift, which is the time-variant DC offset, should be removed for accurate phase detection in CW radars. The proposed technique, which models DC drift using polynomial fitting, can remove DC drift in the time domain more effectively compared to the conventional method, which removes only the static DC offset or uses a high-pass filter. The simulation results for vital signal detection using the CW radar showed that the proposed technique effectively removes DC drift modeled with a frequency that is 10 times less than that of the respiration, and the effect of the DC drift removal can vary depending on the fitting order of the polynomial in the proposed technique. In the measurement using a 5.8 GHz CW radar, the accuracy of the displacement measurement and the SNR of the vital signal detection were improved by the proposed technique compared to the conventional method for removing the static DC offset. Compared to the conventional method, the proposed technique using the fifth-order polynomial fitting at a distance of 1 m or less showed an average improvement of 3.7 times in terms of the error in the displacement measurement, and the SNRs of the vital signal detections were increased by 7.2 dB and 6.6 dB for respiration and heartbeat, respectively. In addition, the simulation and measurement results showed that there is a best fitting order in the proposed technique, and the fifth-order fitting in the given simulation and measurement environment removed the effect of the DC drift most effectively.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to the privacy of the subjects.
Conflicts of Interest:
The authors declare no conflict of interest. | 12,962.6 | 2021-09-24T00:00:00.000 | [
"Engineering",
"Physics"
] |
Aquifer potential investigation applying vertical electrical sounding in Bango sub-catchment area
The freshwater requirement in the Bango sub-catchment is rising along with land use change. The main source of freshwater that can be used is groundwater in the aquifer layer. Research on the aquifer potential was conducted using the vertical electrical sounding (VES) method. Vertical electrical sounding is a geoelectric method for measuring the resistivity of rocks, and it is used here to obtain subsurface information about aquifer depth. Measurements were taken at fourteen points, with a maximum track length of 4 km for each point. The layer with the potential to act as an aquifer is a tuff layer, with resistivity values at each measurement point varying from 20.2 to 78.16 Ωm. The results show that the rock lithology is influenced by volcanic activity and that there is a layer suspected of being a potential shallow aquifer. The potential shallow groundwater in the Bango sub-catchment is approximately 5.73 billion cubic meters.
Introduction
Water is one of the sources of power and energy on this earth. Water is one of the basic needs of living things, especially humans, for survival [1]. Based on its existence, water can be divided into 2 types, namely surface water and groundwater. In general, humans use groundwater both for direct needs such as drinking water, industrial water, and sanitation and for indirect needs such as irrigation, animal husbandry, hydroelectric power plants, and other purposes. In life, groundwater has a greater role because of its good water quality, relatively low investment costs, and easy utilization, which can be done in situ [2].
In recent decades there has been a change in land use in the Bango watershed, where land previously used for agricultural activities has been converted into residential areas. The conversion of land into residential areas shows that there is population growth in the Bango watershed area. This, of course, becomes a new issue related to the increasing need for clean water to meet the daily needs of the population. Population growth is followed by economic and industrial growth, such as the construction of factories and the planned Kawasan Ekonomi Khusus program that will be built in the Bango watershed area. Broadly speaking, the area that covers the Bango sub-watershed requires a fairly high supply of clean water. This supply of clean water comes from the groundwater present in the aquifer layer. The aquifer is a porous, permeable, and saturated layer, such as unconsolidated sand, which can transmit and store groundwater. Therefore, the aquifer layer is an important source of water for wells and springs [3].
Aquifers can be located at various depths, ranging from shallow aquifers close to the ground surface to deep aquifers located at depths of hundreds to thousands of meters below the surface. Estimation of subsurface conditions, especially those related to the presence of groundwater, is carried out by estimating specific resistance (resistivity) using the Vertical Electrical Sounding (VES) method [4]. The detection of aquifers to determine the distribution and depth of groundwater is one of the efforts to fulfill clean water needs in the region.
Time and Place of Research
This study used a quantitative survey approach where measurements were taken in the Bango watershed area that crosses the administrative areas of Malang Regency and Malang City in August 2023. Astronomically, the Bango watershed is located at 07°45'52" S - 07°59'40" S and 112°33'05" E - 112°46'55" E. The measurement points were determined using the Systematic Grid Sampling method through the ArcGIS application, with the coordinates of the measurement points shown in Figure 1.
Figure 1. Geoelectric Sounding Point Location Map
The geoelectric measurement points are spread across several different land uses including forests, agricultural land in the form of fields and paddy fields, and yards in residential areas.
Tools and materials
The materials needed in the research are as follows: 1) Bango watershed boundary map of Malang Regency, 2) Geological Map of the Malang Sheet at a scale of 1:100,000 produced by the Geological Research and Development Center in 1992, 3) Hydrogeological Map of East Java Province at a horizontal scale of 1:125,000 and a vertical scale of 1:50,000 produced by the Directorate of Environmental Geology. The resistivity meter is connected to a battery as the energy source and to electrodes that conduct electric current into the earth.
Methods
The resistivity geoelectric method is one of the most widely used geophysical techniques for obtaining information about subsurface conditions and potential groundwater aquifer layers [5], [6]. The method relies on the flow of electric current through rocks, which is heavily influenced by the presence of groundwater and the salts dissolved in it [7]. The measurement technique used in this research is VES (Vertical Electrical Sounding) with a Schlumberger configuration, which significantly reduces both the time needed to carry out a sounding and its cost.
The first stage of geoelectric measurement is injecting electric current using two current electrodes (AB) and two potential electrodes (MN) arranged in a straight line, with the potential-electrode spacing smaller than the current-electrode spacing [7]. The resistivity meter is read at the sounding point each time the electrode spacing is changed, starting from a small spacing and then gradually increasing it [8]. The longer the AB electrode distance, the deeper the electric current penetrates into the rock layers [9], [10]. The measured current (I) and voltage (V) are then processed to obtain the apparent resistivity value using the following equation:

ρa = K (ΔV / I) (1)

where ρa is the apparent resistivity (ohm-m), K is the geometry factor, ΔV is the potential difference (volts), and I is the current strength (ampere). For the Schlumberger configuration the geometry factor is determined by the electrode spacings, K = π[(AB/2)² − (MN/2)²] / MN. The next stage is the inversion process using the IPI2WIN application to reconstruct subsurface conditions from the measurement data. Inversion, often called curve fitting, is done by finding model parameters that produce a response matching the field observations [11]. The results of the inversion with IPI2WIN are resistivity values in ohm-meters (ρ), layer thicknesses in meters, and layer depths in meters (d). The inversion results are then interpreted on the basis of geological maps and resistivity values to determine the lithology of each layer. The range of resistivity values for different rock types shown in Table 1 helps in converting the resistivity log into a lithology log [12]. Using the RockWorks 16 application, the groundwater volume can be estimated and the lithology cross-section visualized.
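To make equation (1) concrete, the short Python sketch below computes the apparent resistivity for a single Schlumberger reading. The function name and the example spacings, voltage and current are illustrative placeholders only and are not taken from the survey data.

import math

def schlumberger_apparent_resistivity(ab_half, mn, delta_v, current):
    # Apparent resistivity (ohm-m) for one Schlumberger reading.
    # ab_half : half-spacing AB/2 of the current electrodes (m)
    # mn      : spacing MN of the potential electrodes (m)
    # delta_v : measured potential difference (V)
    # current : injected current (A)
    k = math.pi * (ab_half ** 2 - (mn / 2) ** 2) / mn  # geometry factor K
    return k * delta_v / current

# Illustrative reading (placeholder values, not field data)
rho_a = schlumberger_apparent_resistivity(ab_half=50.0, mn=5.0, delta_v=0.12, current=0.5)
print(f"apparent resistivity = {rho_a:.1f} ohm-m")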
Results and Discussion
There are 14 geoelectric measurement points with an interval of 4 km between points, spread over 2 points in Malang City and 12 points in Malang Regency. This research focuses on the potential of shallow groundwater, so geoelectric data were collected only to a depth of 50-60 m. The geoelectric measurement data were processed with the IPI2WIN software to obtain the smallest error (RMS error). Analysis based on differences in resistivity values yielded 5 to 7 layers. Geological formations in the Bango watershed are dominated by the Arjuno Welirang volcanic rock formation, the Malang Tuff Formation, and the Gendis and Buring volcanic formations. The range of resistivity values is very large, from 4.35 Ωm to 3889 Ωm. The low resistivity values are interpreted as clay and tuff layers, whereas solid, hard breccia to lava layers are interpreted from the high resistivity values. The next step is to interpret the geoelectric measurement data at each point. At geoelectric points 7 and 8, the identified rock layers consist of topsoil, clay, and tuff, with resistivity values ranging from 10.8 Ωm to 73.7 Ωm. The topsoil layer lies at a depth of 0-1.56 m, and the underlying rock is dominated by interbedded clay and tuff layers. The clay layer is identified at depths of 1.5-7.72 m, 14.2-30 m, and 52.3-60 m; the tuff layer at depths of 7.72-14.2 m and 30-52.3 m. The A-A' profile (Figure 5) was constructed to show the rock lithology profile at the research site by combining geoelectric points TGL 4, TGL 5, TGL 6, and TGL 14, which are located in the Arjuno Welirang volcanic rock formation, the Malang Tuff Formation, and the Gendis volcanic formation. The geoelectric points used for the cross-section profile were chosen on the basis of the number of points lying on one straight track and their ability to represent lithological differences.
Figure 5. Lithology profile of the A -A' section
Based on the A-A' profile, the surface layer at points TGL 04 and TGL 06 is dominated by hard lava rock and breccia to volcanic breccia. Point TGL 14, on the opposite side, is dominated by breccia to volcanic breccia at the surface, followed by clay and tuff. Point TGL 06, which is located in the lowland, is dominated by clay and tuff layers.
Based on the hydrogeological map, the Bango watershed is an area with diverse and widespread levels of aquifer productivity. Areas with small productivity are found in Tawangargo Village. Aquifers with local productivity are located in Donowarih Village and Sumbul Village, where groundwater is generally not utilized because of the small discharge and deep water table. Boreholes are mostly found in the upstream and middle sections, with depths of more than 60 m, while dug wells are found in the downstream section, with depths of about 14 m. Of the rock types in the area, tuff is the best aquifer rock: it has a fairly large pore structure in the downstream part and is detected at depths between 3.83 and 57.05 m. In the upstream and middle parts, the aquifer layer takes the form of fractures between volcanic breccia rocks and lava at depths of more than 60 m. The resulting shallow groundwater volume in the Bango watershed area is 5.73 billion cubic meters.
Conclusion
In general, the rock lithology in the Bango watershed area is dominated by volcanic rocks consisting of clay, tuff, breccia, volcanic breccia, and lava: the upstream part is dominated by breccia to lava, the middle part by volcanic breccia, and the downstream part by tuff. Shallow aquifers are identified in the downstream area, within the Malang Tuff formation, with tuff as the aquifer layer at a depth of 3.83-57.05 m. In the upstream and middle areas of the volcanic rock formations, the aquifers take the form of fractures between volcanic breccia rocks and lava at depths of more than 60 m. The Bango sub-watershed has good groundwater potential, with a shallow groundwater volume of about 5.73 billion cubic meters. The results of this study can be used as a reference for the utilization and conservation of groundwater in the Bango sub-watershed area.
Figure 6. 3D visual of Bango watershed rock lithology
Table 2. Types of Rocks in Each Layer in Malang City
Table 3. Rock Type of Each Groundwater Layer in Malang District
Table 3 shows that the geoelectric points in Malang Regency are located in the geological formations of Arjuna Welirang Volcano (Qvaw), Middle Quaternary Volcano (Qp), and Malang Tuff (Qvtm). The resistivity values range from 5.45 Ωm to 3889 Ωm, with rock layers consisting of topsoil, clay, tuff, breccia, volcanic breccia, and lava. | 2,630.6 | 2024-03-01T00:00:00.000 | [
"Environmental Science",
"Geology"
] |
Arabic Static and Dynamic Gestures Recognition Using Leap Motion
Across the world, several million people use sign language as their main way of communicating with their society; daily they face many obstacles with their families, teachers, neighbours and employers. According to the most recent statistics of the World Health Organization, there are 360 million persons in the world with disabling hearing loss (5.3% of the world's population), around 13 million of them in the Middle East. Hence, the development of automated systems capable of translating sign languages into words and sentences becomes a necessity. We propose a model to recognize both static gestures, such as numbers and letters, and dynamic gestures, which involve movement and motion in performing the signs. Additionally, we propose a segmentation method to segment a sequence of continuous signs in real time based on tracking the palm velocity, which is useful for translating not only pre-segmented signs but also continuous sentences. We use an affordable and compact device called the Leap Motion controller, which detects and tracks the hands' and fingers' motion and position accurately. The proposed model applies several machine learning algorithms, namely Support Vector Machine (SVM), K-Nearest Neighbour (KNN), Artificial Neural Network (ANN) and Dynamic Time Warping (DTW), on two different feature sets. This research will increase the chance for Arabic hearing-impaired and deaf persons to communicate easily using Arabic Sign Language (ArSL). The proposed model works as an interface between hearing-impaired persons and normal persons who are not familiar with Arabic sign language, bridging the gap between them, and it is also valuable for social respect. The proposed model is applied to Arabic signs with 38 static gestures (28 letters, the numbers 1-10 and 16 static words) and 20 dynamic gestures. A feature-selection process yields two different feature sets. For static gestures, the KNN model dominates the other models for both the palm feature set and the bone feature set, with accuracies of 99% and 98% respectively. For dynamic gestures, the DTW model dominates the other models for both the palm feature set and the bone feature set, with accuracies of 97.4% and 96.4% respectively.
Introduction
Sign language is the most common and important way for deaf and hearing-impaired people to communicate and integrate with their society. It is a kind of visual language that consists of a sequence of grammatically structured human gestures (Quesada et al., 2015). A large sector of the Arab community suffers from deafness and hearing impairment. In Egypt, the number of deaf people according to the last study of the "Central Agency for Public Mobilization and Statistics" is around 2 million, and by 2012 it had increased to close to 4 million (http://www.who.int/mediacentre/factsheets/fs300/en/). Unfortunately, most of these people cannot read or write the Arabic language and 80% of them are illiterate; they are isolated from their society. They are a large part of society and cannot be neglected, yet they still cannot communicate normally with their community because of the constraints of the only language known to them: most people are not familiar with sign language and cannot understand it. Thus, they remain far from their own society, depressed and living a lonely life. These restrictions must be broken because they prevent deaf persons from enjoying their full rights and obtaining their opportunities for full citizenship. For example, a deaf person must be able to express himself and what he wants easily. Hence, it is important to develop an automated system capable of translating sign languages into words and sentences. This will help normal people communicate effectively with the deaf and the hearing-impaired, and it will act as an interface between a normal person who does not know sign language and a deaf person.
Sign recognition approaches can be categorized into sensor-based and image-based approaches. In the sensor-based approach the deaf person needs to wear external instruments, such as electronic gloves containing a number of sensors, while performing the signs, in order to detect the various hand and finger motions. In image-based systems, camera(s) are used to acquire images of the hand during the motion (Samir Elons et al., 2013). Both approaches have advantages and disadvantages: in the sensor-based approach, data acquisition and processing are simple and the readings and results are accurate, but the user is forced to wear external gloves, which makes interaction difficult and movement inflexible. The image-processing approach allows natural interaction for the user, but it requires a specific background, environmental conditions and light intensity to achieve high accuracy, and it needs complex computation to process and analyse the captured images (Vijay et al., 2012).
In this study, a new model for Arabic Sign Language Recognition (ArSLR) was developed for both static and dynamic gestures. It was built using the recently introduced sensor called the Leap Motion Controller (LMC), developed by an American company called Leap Motion. It detects and tracks the position and motion of hands, fingers and their joints at a rate of approximately 200 frames per second. The captured frames contain information about how many hands are detected, together with vectors describing the position and rotation of hands and fingers based on a skeletal model of the hand. The proposed models depend on two different sets of features, called the Palm-Features set and the Bone-Features set, which share some common features. Several experiments were performed to test the proposed models; they include both static and dynamic gestures, i.e., the Arabic alphabet, Arabic numerals and the common signs used with a dentist for static gestures, and common verbs and nouns using one hand and two hands for dynamic gestures. Static gestures are fixed gestures that do not involve any motion or movement, whereas dynamic gestures contain movement and motion in performing them. Machine learning algorithms such as Support Vector Machine (SVM), K-Nearest Neighbor (KNN), Artificial Neural Network (ANN) and Dynamic Time Warping (DTW) are used to classify the datasets. The system also includes a method to segment a sequence of signs in order to recognize continuous sentences, which makes the system more reliable and practical.
The main aim of this paper is to propose a recognition model for Arabic sign language using the Leap Motion Controller (LMC) and apply it to static and dynamic gestures by choosing the optimal features with acceptable accuracy. The rest of this paper is organized as follows: section 2 presents related work on Arabic sign language recognition; section 3 introduces the Leap Motion Controller; sections 4 and 5 present the methodology and experimental results, respectively; the segmentation part is presented in section 5; finally, the conclusion and future work are presented in section 6.
Related Work
Research on Arabic Sign Language Recognition (ArSLR) is recent compared with the work carried out on other sign languages. Several approaches and techniques have been developed, but the field remains a big challenge and is still an open area for further research, especially for ArSL: most sign languages have databases and dictionaries that help researchers in their work, whereas ArSLR research is still recent and no such datasets exist. Mohandes (2001) developed a model to recognize Arabic sign language alphabets; a support vector machine fed with moment invariants and features extracted using Hu moments was used for classification, and the recognition accuracy was 87%. Al-Jarrah and Halawani (2001) proposed a translator for 30 Arabic manual alphabets based on a neuro-fuzzy inference system; the system depends on an image-based approach in which the captured images are segmented, processed, analyzed and converted into a set of features, and it achieved an accuracy of 93.55%. Assaleh and Al-Rousan (2005) presented a sensor-based system in which the signer wears a glove with six different colors, five for the fingertips and one for the wrist region; the model uses a polynomial classifier to recognize alphabet signs. The recognition rate was around 93.4%, evaluated on a database of more than 200 samples for 42 gestures. Mohandes and Buraiky (2007) used cost-effective instrumented gloves to implement a robust and accurate ArSLR system; statistical features were extracted from the acquired signals and the gestures were classified with an SVM classifier. The model was evaluated on a database of 120 signs and the recognition accuracy was over 90%. Maraqa and Abu-Zaiter (2008) built a model for alphabet recognition using recurrent neural networks; this model covered 30 gestures with a training dataset of 900 samples collected from two users wearing colored gloves, and the recognition rate reached up to 95.11%. El-Bendary et al. (2010) presented an automatic translation system for the manual alphabet gestures of Arabic sign language. The system depends on an image-based approach whose main steps are a preprocessing phase, a frame detection phase, a category detection phase, a feature extraction phase and a classification phase; the single nearest neighbor technique is used for classification. The system was able to recognize the 30 Arabic alphabets with an accuracy of 91.3%. Hemayed and Hassanien (2010) introduced a new image-based model for hand gesture recognition that recognizes the Arabic sign language alphabet and finally converts it into the corresponding voice. The input color image of the hand motion is converted to the YCbCr color space, the skin region is extracted, and a Prewitt edge detector extracts the edges of the segmented hand gesture. In the classification phase, Principal Component Analysis (PCA) is used with a K-Nearest Neighbor (KNN) algorithm. They applied the technique to more than 150 signs and their accuracy was close to 97% in real-time tests with three different signers. Shanableh and Assaleh (2011) presented a user-independent Arabic sign recognition system using a video-based approach; the test experiment included 3450 video segments covering 23 isolated gestures from three signers.
The signers used colored gloves, and the color information was used in the preprocessing phase. The input was an image sequence processed by successive image differencing, and the output was a set of extracted features used to capture the motion information. The classifier was KNN and the achieved recognition rate was 87%. Mohandes et al. (2012) developed a vision-based system for Arabic sign recognition that used a Hidden Markov Model (HMM) to identify pre-segmented Arabic signs from images; the experiments were done on a dataset of 500 samples of 300 signs and achieved a recognition accuracy of 95%. In Mohandes (2013), two-handed Arabic sign recognition was introduced; the evaluation database consisted of 20 samples of each of 100 two-handed signs performed by two signers, and an SVM classifier achieved an accuracy of 99.6% on the 100 signs. Guesmi et al. (2016) presented an automatic system for Arabic sign language recognition that recognizes static hand gestures of Arabic sign language in real time and then converts them into Arabic text. The system has two main phases, hand detection and hand gesture recognition, and uses two classifiers: the Fast Wavelet Network Classifier (FWNC) and the Separator Wavelet Network Classifier (SWNC). The hand detection experiment contained 100 images, with results of 99.20% for FWNC and 92.36% for SWNC; the hand recognition experiment contained 28 signs of the Arabic alphabet, with results of 93.21% for FWNC and 71.07% for SWNC.
Aly et al. (2016) developed a new system for Arabic finger spelling recognition. The system is a type of image-based approach. The input to the system is the two images which are captured from Softkinect camera. The information that were extracted from the images such as color and depth of the images are used to classify each hand spell. The accuracy achieved was 99.5%.
In addition to the different image-based and glove-based systems currently in use, some new systems for facilitating human-machine interaction have been introduced lately. Microsoft Kinect and the Leap Motion Controller (LMC) have attracted special attention. The Microsoft Kinect system uses an infrared emitter and depth sensors, in addition to a high-resolution video camera.
In 2011, depth information from the Kinect sensor was used by Biswas and Basu (2011) to recognize signs in Japanese Sign Language (JSL); they used low-level features and achieved more than 90% accuracy on 8 signs. Aliyu et al. (2016) proposed an Arabic sign language recognition system based on the Microsoft Kinect; the developed system was tested with 20 signs from the Arabic sign language dictionary. As stated before, the sensor-based approach forces the user to wear external hardware (i.e., gloves), which makes interaction difficult and movement inflexible; there is also a calibration issue, because different people have different hand sizes and finger length/thickness. Likewise, the image-based approach faces the challenge of environmental conditions such as lighting, image background and different types of noise. The Microsoft Kinect gave good results, does not force the user to wear any external hardware, and the collected data are independent of the environmental conditions, but it cannot detect the details of the hand and fingers, so it fails to recognize gestures performed by the hands. The Leap Motion controller, by contrast, is very accurate in tracking the hand and its details, so we use it in this research for hand gesture recognition.
The Leap Motion Controller (LMC)
Recently, new systems and devices have appeared to make human-machine interaction easier, such as the Leap Motion Controller (LMC), which was developed by an American company called Leap Motion (http://dartmouthbusinessjournal.com/2013/08/the-leap-motion-controller-and-touchless-technology). It detects and tracks the position and motion of hands, fingers and their joints, and operates at a rate of approximately 200 frames per second. These frames contain information about how many hands/tools are detected, together with vectors describing the position and rotation of hands and fingers based on a skeletal model of the hand, represented in Fig. 1. The effective range of the LMC extends from 1 inch to 2 feet above the device (http://phys.org/news/2013-08-motion-readyprime.html). The LMC uses two high-precision infrared cameras and three Light-Emitting Diodes (LEDs) to capture hand information within its active range (https://www.leapmotion.com); it is precise and handy in everyday situations and also low cost. Figure 2 shows the Leap Motion controller. Several studies in ArSL have used the Leap Motion controller and the results were promising (Elons et al., 2014).
Methodology
The overall workflow of the proposed model is composed of five steps, as shown in Fig. 3. The first step is the pre-processing phase, which starts up the Leap Motion service in order to receive the hand gestures as input and perform some advanced algorithms on the received raw sensory data. The second step is the tracking phase, in which the tracking layer matches the data to extract tracking information such as fingers and tools. The third step concerns feature extraction: the data obtained from the LMC are analyzed to extract robust features that can be used to identify the signs. In the fourth step, these features are introduced as a vector to a classifier, which is trained to identify the signs. Finally, the last step uses the trained classifier to recognize the user's gesture.
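As a rough illustration of this five-step workflow, the following Python skeleton strings the phases together. The class, the method names and the dictionary fields of a tracked frame are assumptions made for this sketch; they do not reproduce the Leap Motion SDK or the authors' implementation.

from collections import Counter
from typing import List, Sequence

class ArSLRPipeline:
    # Illustrative skeleton of the workflow: preprocessing, tracking,
    # feature extraction, classification and recognition.
    def __init__(self, classifier):
        self.classifier = classifier  # a fitted KNN/SVM/ANN model

    def preprocess(self, raw_frames: Sequence) -> List:
        # Background/lighting compensation is handled by the Leap Motion
        # service; here we simply drop frames without a detected hand.
        return [f for f in raw_frames if f.get("hands")]

    def track(self, frames: Sequence) -> List:
        # Keep the tracked hand (positions, directions) of each frame.
        return [f["hands"][0] for f in frames]

    def extract_features(self, hand: dict) -> List[float]:
        # Concatenate palm and fingertip descriptors into one vector
        # (see the feature-set definitions in the next subsection).
        return (list(hand["palm_position"]) + list(hand["palm_direction"])
                + [v for tip in hand["fingertip_directions"] for v in tip])

    def recognize(self, raw_frames: Sequence) -> str:
        hands = self.track(self.preprocess(raw_frames))
        vectors = [self.extract_features(h) for h in hands]
        labels = list(self.classifier.predict(vectors))
        # One frame is enough for a static sign; for a dynamic sign the
        # per-frame labels are combined by a simple majority vote.
        return Counter(labels).most_common(1)[0][0]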
Preprocessing
The Leap Motion Service is the software on the computer that applies a set of processes and mathematical operations to the received images. The preprocessing phase consists of several steps, such as compensation for background objects (for example, heads) and other environmental factors such as lighting, followed by infrared scanning of the received images to construct a 3D representation of what the device sees as hands and tools, and finally the creation of a digital version of the hand in real time.
Tracking
In this phase, algorithms and mathematical operations are used to interpret the 3D data and infer the positions of the detected objects. The results are returned as a series of frames that contain information about the hand motion and the positions of hands, fingers and their joints in 3D coordinates.
Features Extraction
The features used in the model belong to the Palm-Features set and the Bone-Features set. Some features are common to both sets, such as the palm position in the three directions (x, y, z), the palm direction in (x, y, z), the fingertip direction in (x, y, z) for every finger (Thumb, Index, Middle, Ring, Pinky) (Fig. 4), the Pitch, Yaw and Roll angles (Fig. 5), and the vector formed by the angles between the fingers (Fig. 6).
The palm vector P_set contains six scalar values: the palm position and the palm direction in (x, y, z). The fingertips direction vector F_Tip_set contains fifteen scalar values: the (x, y, z) direction components of each of the five fingertips. Together with the arm direction angles Arm_Dir_set (Pitch, Yaw, Roll) and the angles between the fingers Finger_Angles_set, the combination of all of these vectors contains 85 scalar values and represents the feature set S1:

S1 = H_set = {P_set, F_Tip_set, Arm_Dir_set, Finger_Angles_set}
The Bone-Features set contains, in addition to the previous vectors (P_set for palm position and palm direction, F_Tip_set for fingertip direction, Arm_Dir_set for the arm direction angles Pitch, Yaw, Roll, and Finger_Angles_set for the angles between fingers), a vector obtained by subtracting each phalanx's start position from its end position for every finger, giving the resulting value in (x, y, z); this vector contains 42 scalar values (Fig. 8).
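A minimal sketch of how the two feature vectors could be assembled from one tracked frame is shown below. The dictionary field names are assumptions for illustration and do not correspond to the actual Leap Motion API; only the grouping of the descriptors follows the sets described above.

import numpy as np

def palm_feature_vector(frame: dict) -> np.ndarray:
    # Palm-Features set S1: palm position/direction, fingertip directions,
    # arm angles and inter-finger angles (85 scalar values in the paper).
    p_set = np.concatenate([frame["palm_position"], frame["palm_direction"]])
    f_tip_set = np.ravel(frame["fingertip_directions"])   # 5 fingers x (x, y, z)
    arm_dir_set = np.asarray(frame["arm_angles"])          # pitch, yaw, roll
    finger_angles_set = np.asarray(frame["finger_angles"])
    return np.concatenate([p_set, f_tip_set, arm_dir_set, finger_angles_set])

def bone_feature_vector(frame: dict) -> np.ndarray:
    # Bone-Features set: the same descriptors plus, for every phalanx,
    # the (end position - start position) vector in x, y, z.
    bone_vectors = np.ravel(np.asarray(frame["bone_end_positions"])
                            - np.asarray(frame["bone_start_positions"]))
    return np.concatenate([palm_feature_vector(frame), bone_vectors])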
Single-Sign Classification
There are two types of gestures: static and dynamic. The group of static gestures includes fixed gestures that do not take changes over time into account, such as alphabets and numbers, as well as pointing gestures (Fig. 9) that indicate a spatial location or a specific object, static semaphoric gestures (Fig. 10) such as the thumbs-up approval symbol, and iconic gestures (Fig. 11) used to represent the shape, size or curvature of objects or entities (Benko et al., 2012).
Dynamic gestures contain movement and motion while being performed, such as pointing and manipulation gestures, which are used to guide movement in a short feedback loop (Fig. 12), and dynamic iconic gestures, which are often used to describe paths or shapes, such as moving the hand in circles to mean "the circle" (Fig. 13).
Static single sign Classification
For a static single sign, the classifier takes as input a single frame that represents the sign. We tested and compared several classifiers: the Support Vector Machine (SVM), one of the most common machine learning classifiers, widely used in object detection and recognition, as well as KNN and ANN. These methods were chosen because they represent state-of-the-art methods for many different applications and gave the best results.
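As an illustration of this single-frame classification step, the snippet below trains KNN and SVM classifiers on placeholder feature vectors with scikit-learn; the random data merely stand in for the extracted 85-value palm feature vectors and are not the authors' dataset.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Placeholder data: each row is a per-frame feature vector (e.g. the
# 85-value palm feature set) and each label is a sign name.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(400, 85))
y_train = rng.choice(["alef", "ba", "ta"], size=400)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
svm = SVC(kernel="rbf").fit(X_train, y_train)

X_test = rng.normal(size=(10, 85))
print(knn.predict(X_test))   # predicted sign for each test frame
print(svm.predict(X_test))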
Dynamic Single Sign Classification
The second type of signs is the dynamic single sign which contains movement and motion in performing it. For this type, there are two suggested methods.
In the first method, the classifier takes the sequence of frames that compose a single sign; each frame is classified individually using SVM, KNN or ANN, and the result is selected by simple majority, i.e., the class with the most frames assigned to it (Fig. 14).
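A minimal sketch of this frame-wise majority vote is given below; frame_classifier stands for any of the fitted KNN, SVM or ANN models, and the function name is illustrative.

from collections import Counter

def classify_dynamic_sign(frame_features, frame_classifier):
    # First method: classify every frame of the dynamic sign individually,
    # then return the label assigned to the most frames (simple majority).
    per_frame_labels = list(frame_classifier.predict(frame_features))
    return Counter(per_frame_labels).most_common(1)[0][0]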
In the second method, Dynamic Time Warping (DTW) is used. DTW is an optimal alignment algorithm between two given sequences: it measures the similarity between two sequences that vary in time or speed, and it has been used in various fields such as speech recognition, data mining and movement recognition. DTW is very suitable for sign recognition applications because it copes with different sign execution speeds (https://en.wikipedia.org/wiki/Dynamic_time_warping). In this case, the set of frames of the test sign is compared with the sets of frames in the training set using DTW, and each set is treated as a signal or pattern. The sequences being compared are "warped" non-linearly in the time dimension to determine their similarity independently of certain non-linear variations in time. DTW determines the most similar group in the training set according to the calculated distance: the most similar sequence to the test sequence is the training sequence with the smallest DTW distance, and its label is assigned to the test sign. The model for dynamic sign recognition is represented in Fig. 15.
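The following sketch shows a straightforward DTW distance and a nearest-neighbour assignment over labelled training sequences; it is a generic implementation of the technique, not the authors' code, and the function names are illustrative.

import numpy as np

def dtw_distance(seq_a, seq_b):
    # Dynamic Time Warping distance between two sequences of feature vectors.
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(np.asarray(seq_a[i - 1]) - np.asarray(seq_b[j - 1]))
            cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                 cost[i, j - 1],       # deletion
                                 cost[i - 1, j - 1])   # match
    return cost[n, m]

def classify_by_dtw(test_sequence, labelled_training_sequences):
    # Assign the label of the training sequence with the smallest DTW distance.
    best_label, best_dist = None, np.inf
    for label, reference in labelled_training_sequences:
        dist = dtw_distance(test_sequence, reference)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label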
Experiments Results
We performed several experiments to test the developed models. The experiments include the Arabic alphabet, Arabic numerals, the common signs used with a dentist, common verbs and nouns using one hand, and common verbs and nouns using two hands.
Arabic Alphabets
The developed system is used to recognize the twenty-eight Arabic alphabet signs from أ to ى, as shown in Fig. 16. It should be noted that all Arabic alphabet signs are static gestures and are performed using a single hand. The training set was about 400 frames for each sign (alphabet letter), collected from two different users, and it was tested with 200 frames from a third user. The number of features is 85 for the palm feature set and 70 for the bone feature set.
The results of classifications using KNN, SVM and ANN are shown below in Fig. 17.
Arabic Numbers
Our developed system is also used to recognize the Arabic signs that represent the numbers from 0 to 10, shown in Fig. 18. These are also static one-hand gestures. The training set was about 400 frames for each sign (number), collected from two different users, and it was tested with 200 frames from a third user.
The results of classifications using KNN, SVM and ANN are shown below in Fig. 19.
Common Dentist Signs
The group of common dentist signs consists of eight signs; some are static and some are dynamic, but all are performed by a single hand. They are shown in Fig. 20.
The equivalent English meanings are given in Table 1.
For static gestures, the training dataset in the classification method contains 50 samples for each sign from two different users and is tested with 20 samples from another user; the comparison between the classifiers (KNN, SVM and ANN) for the static dentist signs is shown in Fig. 21. As explained before, dynamic gestures are represented by a set of frames, so for this type we suggested "Classification + Majority" and DTW (Dynamic Time Warping). The comparison between the classifiers (KNN, SVM and ANN) + Majority for the dynamic dentist signs is shown in Fig. 22.
The result of the second method "DTW" is listed in Table 2.
Common Verbs and Nouns with One Hand
The group in this experiment contains common words and verbs, twenty signs in total. Some of them are static gestures, such as the signs for Phone, Marriage, Success, Love, Father, Mother, Grandfather and Children, and some are dynamic, as shown in Fig. 23; the equivalent English meanings are given in Table 3. For static gestures, the training dataset in the classification method contains 50 samples for each sign from two different users and is tested with 20 samples from another user; the comparison between the classifiers (KNN, SVM and ANN) for the static common verbs and nouns with one hand is shown in Fig. 24.
For dynamic gestures, the results of the first method "Classification + Majority" and the comparison between the classifiers (KNN, SVM and ANN) + Majority are shown in Fig. 25.
For dynamic gestures, the results of the second method are represented in Table 4.
Common Verbs and Nouns with Two Hands
This experiment was performed with signs that use two hands; the set contains both static gestures, such as the sign for "Mix", and dynamic gestures, all performed with two hands, as shown in Fig. 26. In this case, the number of features is 170 for the Palm feature set and 140 for the Bone feature set. The training set contains 50 samples for each sign from two different users and is tested with 20 samples from another user for both dynamic and static gestures. The equivalent English meanings are given in Table 5.
For static gestures, the results of the first method "Classification + Majority" and the comparison between the classifiers (KNN, SVM and ANN) + Majority are represented in Fig. 27.
For dynamic gestures, the results of the first method "Classification +Majority", the comparisons between the used classifiers (KNN, SVM and ANN) +Majority are represented in Figure 28 and the results of the second method "DTW" are represented in Table 6.
Segmentation
We also worked on solving the problem of segmentation in real time in order to recognize continuous sentences. After some experiments, we decided to use the palm speed during the motion as the segmenter. As shown in Fig. 29, there is a difference in velocity whenever a transition between gestures or signs occurs; the segmenter detects the changes in velocity and performs the segmentation accordingly. It is noticeable that the velocity decreases to less than 30 mm per second
when a gesture ends and then increases again to over 100 mm per second when the next gesture is being performed. The segmenter keeps track of the palm velocity and segments the sequence of frames into groups, each group containing the set of frames that represents a particular sign. Once the segmentation is done, classification is run on each group individually to obtain the corresponding sign. The segmentation method was applied to sequences of numbers, as in Fig. 29, which shows the palm speed over the sequence "523", represented by the three signs "5", "2" and "3". It was also applied to finger spelling, i.e., spelling words with hand movements, as in Fig. 30, which shows the palm speed over the word "Playground" (ملعب), represented by the four signs "م", "ل", "ع" and "ب". Finally, we applied the segmentation method to sentences consisting of several signs, as in Fig. 31, which shows the palm speed over the sentence "This is my Father's Phone", represented by four signs. We tested the segmentation on over 30 sentences of different lengths; the test concerned the ability of this method to correctly segment sentences composed of several signs, and the results were over 95%.
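A minimal sketch of this velocity-based segmentation is given below, using the 30 mm/s and 100 mm/s thresholds mentioned above; the frame structure (a "palm_speed" field) and the function name are assumptions made for illustration.

def segment_by_palm_speed(frames, low=30.0, high=100.0):
    # A segment is closed when the palm speed drops below `low` (mm/s) and a
    # new one is opened once the speed rises above `high` again, mirroring
    # the thresholds observed in Fig. 29.
    segments, current, in_sign = [], [], False
    for frame in frames:
        speed = frame["palm_speed"]
        if not in_sign and speed > high:         # the next gesture has started
            in_sign, current = True, [frame]
        elif in_sign:
            current.append(frame)
            if speed < low:                      # the current gesture has ended
                segments.append(current)
                in_sign, current = False, []
    if current:
        segments.append(current)
    return segments  # each segment is then classified individually (e.g. with DTW)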
Conclusion and Future Work
In this study, we developed a model for Arabic sign recognition using the Leap Motion Controller (LMC). We applied the model to both static and dynamic gestures. Our experiments include the 28 Arabic alphabet letters from أ to ى, the Arabic numbers from 0 to 10, eight common Arabic signs used at the dentist, 20 common nouns and verbs used in different aspects of life, and finally 10 signs performed with two hands.
For static gestures, we used classification methods and compared the performance of three classifiers; Support Vector Machine (SVM) with poly kernel and RBF, K-Nearest Neighbor (KNN) and Artificial Neural Network (ANN) with a Multilayer Perceptron. Applying mentioned algorithms on two different features sets palm features set and bone features set.
For dynamic gestures, we suggested two methods, "Classification with Simple Majority" and "Dynamic Time Warping", which resulted in good performance and an accuracy of 98%. The paper also suggested a simple and effective solution for segmenting a series of continuous signs, which is the main problem in continuous recognition. This method depends on the motion speed; it works effectively in real time and gave an accuracy of 95%.
For future work, we plan to improve the accuracy of recognition by making additional features engineering and using deep learning with large samples of full sentences. | 6,587 | 2017-08-12T00:00:00.000 | [
"Computer Science"
] |
Autophagy in Neuroinflammation: A Focus on Epigenetic Regulation
Neuroinflammation, characterized by the secretion of abundant inflammatory mediators, pro-inflammatory polarization of microglia, and the recruitment of infiltrating myeloid cells to foci of inflammation, drives or exacerbates the pathological processes of central nervous system disorders, especially neurodegenerative diseases. Autophagy plays an essential role in neuroinflammatory processes, and the underlying physiological mechanisms are closely correlated with neuroinflammation-related signals. Inhibition of mTOR and activation of AMPK and FOXO1 enhance autophagy and thereby suppress NLRP3 inflammasome activity and apoptosis, leading to relief of the neuroinflammatory response. Autophagy mitigates neuroinflammation mainly by promoting the polarization of microglia from a pro-inflammatory to an anti-inflammatory state, reducing the production of pro-inflammatory mediators, and up-regulating the levels of anti-inflammatory factors. Notably, epigenetic modifications are intimately associated with autophagy and with the onset and progression of various brain diseases. Non-coding RNAs, including microRNAs, circular RNAs and long non-coding RNAs, and histone acetylation have been reported to adjust autophagy-related gene and protein expression to alleviate inflammation in neurological diseases. The present review primarily focuses on the role and mechanisms of autophagy in neuroinflammatory responses, as well as epigenetic modifications of autophagy in neuroinflammation, to reveal potential therapeutic targets in central nervous system diseases.
Introduction
Neuroinflammation refers to an inflammatory immune response to a variety of endogenous or exogenous stimuli, such as pathogens and misfolded proteins, within the central nervous system (CNS), and it contributes critically to the pathogenicity and progression of almost all neurological disorders, such as Alzheimer's disease (AD) and Parkinson's disease (PD) [1][2][3][4]. The principal characteristic of neuroinflammation is the overproduction of inflammatory factors in the CNS, including cytokines like TNF-α, IL-18 and IL-6, chemokines such as CCL2, CCL5 and CXCL3, and small-molecule messengers including NO and ROS [5][6][7]. Immune cells, particularly microglia, the primary resident innate immune cells in the CNS, are the main sources of inflammatory mediators. Excessive pro-inflammatory mediators trigger excitotoxicity, apoptosis and necrosis, inducing neuronal damage, and disrupt the permeability of the blood-brain barrier, leading to neutrophil infiltration, thus causing or further aggravating CNS diseases [8,9]. Moreover, hyperstimulation of microglia, with a reduced ability to phagocytose and degrade misfolded proteins, promotes the aggregation of α-synuclein, β-amyloid (Aβ) and tau to form Lewy bodies, amyloid plaques and tau neurofibrillary tangles [10][11][12]. At present, neuroinflammatory inhibitors have been widely explored as potential tools for treating neurological diseases [13], mainly focusing on targets such as TLRs, NF-κB and the NLRP3 inflammasome [14][15][16]. Notably, microglia can recruit Aβ to autophagic vacuoles for degradation and inhibit Aβ-induced activation of the NLRP3 inflammasome, indicating that the ability of microglia to phagocytose and degrade abnormal proteins is closely associated with autophagy-related mechanisms in neuroinflammation [17,18]. Recently, many studies have reported the participation of autophagy in neuroinflammation and support an autophagic regulatory role in the pathogenesis of neuroinflammation-related CNS disorders [19,20].
Autophagy is a strictly regulated process of cellular degradation that maintains cellular and intracellular homeostasis and can be activated by stress conditions such as hypoxia, insufficient nutrition and infection [21]. Through autophagy, cytoplasmic components, such as intracellular pathogens, impaired organelles and misfolded proteins, are transported to the lysosome for degradation [22,23]. Although autophagy exhibits cytoprotective properties that prevent a variety of diseases, such as neurodegenerative diseases, in some cases it also exerts harmful, cytocidal effects on normal cells [24]. Depending on the way in which cargoes are transported to the lysosome, autophagy in eukaryotic cells is commonly categorized into macroautophagy, microautophagy and chaperone-mediated autophagy [21]. In macroautophagy (hereafter termed autophagy) (Fig. 1), cytoplasmic cargo is captured by the phagophore and forms a double-membrane autophagosome that subsequently fuses with the lysosome for degradation [25]. In this process, autophagy is initiated by ULK1 complex-mediated phosphorylation of the Beclin1-Vps34 complex, promoting formation of the phagophore and autophagosome, whose membranes are formed by LC3-II binding to the lipid phosphatidylethanolamine (PE) and autophagy-associated proteins (ATG) such as ATG5, ATG12 and ATG16L1 [23]. Autophagy plays a vital role in neurological disorders, like AD, PD and multiple sclerosis, probably through mediating inflammatory signals [26]. In vitro, defective autophagy of microglia exacerbates the production of pro-inflammatory mediators induced by lipopolysaccharide (LPS) or Aβ and leads to apoptosis of neurons in co-culture. In vivo, impaired microglial autophagy similarly exacerbates neuron loss and the neuroinflammatory response caused by 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) or Aβ fibrils [18,27]. Moreover, autophagy deficiency exacerbates CNS diseases through NLRP3 inflammasome activation, inflammatory cytokine release and accumulation of abnormal proteins [18,27,28]. Thus, the role of autophagy as a core mediator of neuroinflammation in CNS diseases deserves more attention. Epigenetic modification of autophagy is a critical new regulator that maintains cellular homeostasis by modifying autophagy-related gene and protein expression, thereby influencing subsequent autophagic flux, while epigenetic dysregulation leads to autophagic dysfunction that induces a variety of CNS diseases [24,29]. In addition, various epigenetic changes, including DNA methylation, histone modifications and non-coding RNAs, are involved in CNS diseases such as PD, stroke and neuropathic pain by influencing multiple inflammatory signals to modulate microglia polarization and the production of inflammatory mediators [30][31][32][33]. However, the epigenetic mechanisms of autophagy that regulate neuroinflammation and related disorders have not been systematically described so far. Therefore, in this review, the role and mechanisms of autophagy in neuroinflammatory responses, as well as epigenetic modifications of autophagy in neuroinflammation-related neurological disorders, are systematically reviewed to unveil potential therapeutic targets.
mTOR and autophagy
mTOR is a classic kinase that modulates cell growth and proliferation with a critical effect in the autophagy regulation of microglia phenotype.In LPS-stimulated N9 microglia, pro-inflammatory mediators like TNF-α, IL-6, and iNOS levels were up-regulated, and autophagy were suppressed by enhancing p62 expression and reducing LC3-II/I ratio and ATG5, Beclin1 and Vps34 expression with activation of PI3k/AKT/mTOR signaling pathway.However, mTOR inhibitor rapamycin alleviated LPSinduced neuroinflammation and promoted autophagy in N9 microglia and hippocampus and cortex from neuroinflammatory mice [34].Additionally, ablation of neural sphingosine-1-phosphate (S1P) lyase 1 (SGPL1) markedly activated primary microglia, increased proinflammatory cytokines levels and inhibited autophagy via mTOR signal.And LPS further enhanced IL-6 release and autophagy inhibition, which was reversed by rapamycin and SIPR2 inhibitor in primary microglia from SGPL1 ablated mice [35].Another study reported that in hypoxia-ischemia and LPS-induced cerebral palsy mouse pups, rapamycin suppressed microglia activation, decreased HIF-1α and cleaved Caspase-3 expression, upregulated Beclin1 expression and LC3-II/I ratio and suppressed phosphorylation of p70S6K and S6 in the brain tissue [36].Interestingly, inhibition of mTOR signaling not only inhibited microglia activation, but also promoted polarization of microglia toward an antiinflammatory state (M2) by activating autophagy.TNF-α, an important inflammatory cytokine, significantly reduced the expression of M2 markers, increased the expression of M1 markers, promoted the phosphorylation of AKT and mTOR, and impaired autophagic flux in microglia, which were reversed by AKT inhibitor perifosine.Moreover, autophagy activation by rapamycin enhanced M2 markers and reduced M1 markers' gene expression, whereas autophagy inhibition with 3methyladenine (3-MA) or ATG5 siRNA aggravated TNFα-stimulated M1 polarization in microglia [37].Besides, in primary microglia from SOD1-G93A mice, P2X7 activation using 2'-3'-O-(benzoyl-benzoyl) ATP increased M2 markers expression and activated autophagy via inhibiting mTOR phosphorylation [38].Another study reported that Sestrin2 promoted microglial M2 polarization to provide neuroprotection after oxygen and glucose deprivation (OGD) and re-oxygenation (OGD/R) in BV2 cells, characterized by restoration of autophagic flux with up-regulation of LAMP2 and inhibition of mTOR phosphorylation, which were further confirmed in the middle cerebral artery occlusion (MCAO)-induced mice [39].In sum, inhibition of mTOR signaling might suppress the activation of microglia and promote microglia polarization toward M2 state by activating autophagy.
Multiple antagonists or inhibitors also promote autophagy through mTOR signal to alleviating neuroinflammation in CNS diseases.Metabotropic glutamate receptor 5 (mGluR5) antagonist MPEP alleviated neuroinflammatory response, promoted autophagy with increased expression of LC3-II and Beclin1, and decreased expression of p62 via PI3K/AKT/mTOR signal in AD mice [40].BAY61-3606, a spleen tyrosine kinase (SYK) inhibitor, increased autophagic flux as well as inhibited mTOR phosphorylation, resulting in reduced neuronal and synaptic loss and pro-inflammatory cytokines secretion in Tau P301S mice [41].Moreover, MSDC-0160, a mitochondrial pyruvate carrier (MPC) inhibitor, improved motor dysfunction, protected nigrostriatal neurons from inflammation in MPTP-stimulated PD mice and engrailed1 genetic PD mice, evidenced by inhibiting mTOR signal and restoring autophagy [42].Altogether, blockade of mTOR signaling pathway observably attenuates neuroinflammation and related brain diseases via promoting autophagy.
AMPK and autophagy
AMPK is demonstrated to directly promote autophagy by phosphorylating autophagy-associated proteins, such as mTOR, ULK1 and Beclin1-Vps34 complexes.LKB1-AMPK pathway was suppressed in LPS-induced primary microglia, accompanied by the activation of M1 microglia, up-regulation of pro-inflammatory cytokines levels, and inhibition of autophagy with decreased p62 expression and increased LC3-II/I ratio, as well as Beclin1 and ATG5 expression.Whereas knockdown of PPARγ and its antagonist activated LKB-AMPK signal to inhibit LPS-induced changes in primary microglia [43].Additionally, AMPK-mTOR-p70S6K axis participated in the anti-neuroinflammatory effects of PNU282987, an alpha7 nicotinic acetylcholine receptor (α7nAChR) activator.PNU282987 down-regulated the levels of proinflammatory cytokines through increasing autophagy flux in LPS-induced BV2 cells and in spinal cord and spleen tissue from experimental autoimmune encephalomyelitis (EAE) mice.Nevertheless, an AMPK inhibitor compound C as well as blockade of autophagy by ATG5 siRNA, bafilomycin A1 and 3-MA attenuated the effects of PNU282987 [44].Moreover, inhibition of AMPK pathway also activated microglia and astrocytes, increased the production of TNF-α, NLRP3 and ASC, and reduced autophagy flux in the hippocampus from mice with high-fat feeding-triggered depression, while autophagy activator attenuated the above changes caused by high fat feeding [45].Thus, these results suggest that activation of AMPK signal could enhance autophagy, thereby inhibiting microglia M1 polarization and NLRP3 inflammasome activation, and subsequently reducing the expression of pro-inflammatory mediators in neuroinflammation-related CNS disorders.
HMGB1 and autophagy
HMGB1, as an autophagy sensor, is a DNA-binding protein in the nucleus or translocated into the cytoplasm during cellular stress, which were secreted from necrotic neurons to activate microglia and other immune cells to drive neuroinflammation in CNS diseases [46].Knockdown of HMGB1 exacerbated the decrease of cell viability, increased apoptosis and cleaved Caspase-3 production, and reduced autophagic flux with upregulation of p62 and down-regulation of LC3-II/I ratio via perturbing Beclin1-Bcl-2 complex formation in emulsified isoflurane-stimulated SH-SY5Y and PC12 cells [47].Additionally, HMGB1 siRNA decreased proinflammatory cytokines secretion and the number of LC3positive cells, and reduced the expression of TLR4, MyD88, Beclin1, and ATG5 in ipsilateral striatum of rats after intracerebral hemorrhage (ICH) [48].Moreover, exogenous disulfide HMGB1 impaired mitophagy flux and activated NF-κB signaling pathway, which were reversed by rapamycin in microglia.In vivo, HMGB1 siRNA attenuated microglia M1 activation and reduced the transcription and translation of RAGE, a HMGB1 receptor, in the rostral ventrolateral medulla of stressinduced hypertension in mice.And microglia-specific deletion RAGE suppressed the activation of M1 microglia, reduced autophagosomes accumulation and the expression of mitochondrial reactive oxygen species (mtROS), and impaired lysosomal function in stressed mice [49].To sum up, HMGB1 might have a dual regulatory role with regulating autophagy to neural injury in CNS diseases.
NLRP3 inflammasome and autophagy
NLRP3 inflammasome has been illustrated to be activated in neuroinflammation and related CNS diseases, especially in PD.NLRP3 gene knockout decreased proinflammatory cytokines secretion and Caspase1 expression, improved nigral autophagy, that manifested by an increase in LC3-II expression and a decrease in p62 level, and suppressed nigral dopaminergic degeneration, striatal dopamine (DA) deletion, glial activation and nigral α-synuclein aggregation in chronic MPTP-treated PD mice [50].ATG5 and ATG7 are among the core mediators of autophagosome biogenesis, and their absence induced NLRP3 inflammasome activation.Deletion of ATG5 worsened LPS-induced neuroinflammation, including enhancing the level of IL-1β, iNOS and TNF-α, and reducing the expression of Arg-1 and Ym-1 in BV2 cells, and also promoted the activation of NLRP3 inflammasome in LPS plus ATP-stimulated microglia.And microglia-specific knockdown of ATG5 aggravated motor dysfunction and dopaminergic neurodegeneration, enhanced microgliosis and astrogliosis, exacerbated NLRP3 inflammasome activation in the substantia nigra (SN) of MPTP-induced PD mice [27].Moreover, ATG5 knockout in microglia caused PD-like motor and cognitive dysfunction, and induced neuronal damage, increased the production of pro-inflammatory mediators, as well as promoted NLRP3 inflammasome activation and PDE10A expression in the striatum of mice, while inhibition of NLRP3 using MCC950 rescued microglia activation and neuronal loss [51].Apart from ATG5, ATG7 knockout also promoted the secretion of IL-1β, IL-6 and TNF-α, and cleaved Caspase-1 and ASC expression in BV2 cells.And in ATG7 conditional knockout mice, loss of ATG7 function triggered microgliosis and increased pro-inflammatory cytokines genes levels in sorted microglia from the brain of mice [52].Taken together, the results of in vitro and in vivo models of PD indicate that autophagy interacts with NLRP3 inflammasome, and the enhancement of autophagy in microglia might inactivate NLRP3 inflammasome, thus inhibiting the production of proinflammatory mediators to alleviate the symptoms of PD.
Autophagy attenuates neuroinflammatory response by inhibiting the activation of NLPR3 inflammasome in subarachnoid hemorrhage (SAH), migraine and other CNS disorders.LPS plus ATP stimulation increased NLRP3 inflammasomes activation, and inhibited autophagy with a reduction of Beclin1 and LC3-II proteins expression in primary microglia [53].NLRP3 inflammasomes were also activated in SAH, with increased levels of IL-1β and IL-18, and neurological dysfunction via suppressing autophagy, which were partly reversed by growth arrest-specific 6, a MerTK activator [54].Besides, autophagy inducer, rapamycin and P2X7R antagonist, Brilliant Blue G, suppressed microglia and NLRP3 inflammasome activation, decreased levels of pro-inflammatory cytokines and promoted autophagy in the trigeminal nucleus caudalis from nitroglycerininduced chronic migraine mice [55].Additionally, NLRP3 inhibitor, MCC950 and mitophagy inducer, rapamycin, mitigated neurological damage, decreased pro-inflammatory factors levels, reduced the expression of NLRP3, ASC and Caspase-1, promoted the expression of PINK1 and LC3-II, and attenuated mitochondrial dysfunction with increased mitochondrial DNA, mitochondrial membrane potential and ATP level, as well as decreased ROS production in OGD-induced primary cortical neuron and cortical tissue of traumatic brain injury (TBI) mice [56].Thus, enhanced autophagy of microglia and mitophagy of neurons could notably inhibit NLRP3 inflammasome activation thereby downregulating pro-inflammatory mediators' levels in migraine, TBI and other CNS disorders.
Additionally, autophagic inhibitor 3-MA treatment worsened ethanol-induced memory impairment, while autophagy activation with rapamycin suppressed IL-1β and IL-18 genes transcription and NLPR3, pro-Caspase-1 and Caspase-1 proteins expression in ethanol-induced microglia and mice [19].Moreover, NLRP3 inflammasomes were activated by anesthesia and surgery, along with cognitive impairment, microglia activation, increased pro-inflammatory cytokines expression, and impaired mitophagic flux, which were inverse by triggering receptor expressed on myeloid cells 2 (TREM2) in mice [57].Besides, LPS interfered mitochondrial function and mitophagy via activating NLRP3 inflammasomes in N9 microglia and hippocampus of db/db mice, but exendin-4, a GLP-1R activator, alleviated above changes and relieved depression-like behaviors in diabetic mice [58].Furthermore, autophagy agonist rapamycin improved cognitive dysfunction, inhibited the activation of prefrontal cortex microglia, decreased generation of proinflammatory cytokines and inactivated NLRP3 inflammasome via enhancing autophagy in sevoflurane exposure-induced rats [59].In general, autophagy might inhibit the activation of NLRP3 inflammasome and reduce the production of pro-inflammatory mediators to alleviate neuroinflammation and thus attenuate memory and cognitive behavioral deficits.
TLRs and autophagy
TLRs, abundantly expressed in CNS diseases, mainly activate neuroinflammatory response by targeting autophagy signals [60,61].TLR2 triggered by peptidoglycan significantly induced over-activation of autophagy with increased ratio of LC3-II/I and Beclin1 expression in BV2 cells.While both the TLR2 gene knockout and its antagonist CU-CPT22 dramatically inhibited autophagy levels and promoted microglia M2 polarization, including augmentation of CD206, Arg-1 and IL-10 in peptidoglycan-induced BV2.Further, TLR2 gene knockout also inhibited autophagy and elevated microglia survival in the hippocampus from peptidoglycan-induced mice [62].Besides, erythrocyte lysis and LPS increased the secretion of pro-inflammatory factors and TLR4 expression, and induced microglial autophagy in primary microglia, which were inhibited by knockdown of TLR4 gene [63].Moreover, TLR4 knockdown improved TBI-induced brain damage and neuroinflammation via inhibiting excessive autophagy in the hippocampus, which were abolished by autophagy inducer rapamycin [64].Other study also found that TLR4 antagonist improved behavioral impairment, brain edema and neurons loss, and suppressed MyD88/NF-κB signal and autophagy in the hippocampus of TBI rats [65].Altogether, these investigations indicate that the inhibition of TLRs, including TLR2 and TLR4, might suppress the over-activation of autophagy to alleviate neuroinflammation in CNS diseases.
FOXO1 and autophagy
Recently, some evidence has suggested that FOXO1 may regulate autophagy to improve neuroinflammation. In the brain cortex of mice, LPS exposure inhibited FOXO1 signaling, promoted microglial M1-state activation, and suppressed autophagy with decreased transcription and translation of LC3, Beclin-1, ATG5, ATG7 and ATG12. LPS also resulted in an imbalance of the brain renin-angiotensin system, including up-regulation of the expression of angiotensin II, angiotensin-converting enzyme (ACE), angiotensin type (AT) 1, AT2 and MasR proteins, as well as down-regulation of ACE2 expression. AVE, a selective MasR agonist, reversed these LPS-induced changes and exhibited neuroprotective actions, while a FOXO1 inhibitor or an autophagy inhibitor compromised the anti-neuroinflammatory actions of AVE in mice [66]. Collectively, this indicates that the interaction between FOXO1 and autophagy promotes M1-to-M2 polarization of microglia to alleviate the neuroinflammatory response.
Others
In in vitro and in vivo models of cerebral ischemia, in addition to mTOR and the NLRP3 inflammasome, PRNP, sphingosine kinase 1 (SphK1), DJ-1 and other signals participate in the regulation of neuroinflammation through autophagy as well. In primary microglia from wild-type mice, autophagy inhibition with 3-MA and bafilomycin A1 further aggravated the OGD/R-induced inflammatory response, induced microglial M1 polarization and increased the secretion of pro-inflammatory cytokines. Knockout of PRNP, the gene encoding the prion protein, suppressed microglial M2 polarization, further aggravated M1 polarization and shortened the accumulation of LC3-II, while microglia overexpressing PRNP showed an increased level of LAMP1 [67], suggesting that PRNP attenuated OGD/R-induced inflammation by augmenting and extending autophagy activation. Besides, OGD/R induced the activation of autophagy and an increase of SphK1 in primary microglia, while SphK1 knockdown significantly inhibited OGD/R-induced autophagy and thereby protected against neuronal damage via suppressing TRAF2 expression. Further, SphK1 knockdown reduced apoptotic neuronal death and microglial autophagy in peri-infarct cortical tissue [68]. Moreover, DJ-1 siRNA decreased the expression of Sirt1 and suppressed ATG5-ATG12-ATG16L1 complex expression and conjugation to promote microglia polarization from the M2 to the M1 state during MCAO/reperfusion-stimulated I/R injury, and the inhibition of Sirt1 exacerbated the inhibitory effects of DJ-1 interference [69].
Autophagy is impaired in AD, PD and TBI, and could be restored by the inhibition of 12/15-lipoxygenase (12/15LO), LRRK2 and STING. Over-expressed 12/15LO worsened behavioral deficits, increased Aβ peptide levels, and reduced the LC3-II/I ratio and the expression of ATG7 and ATG12-ATG5, resulting in serious neuroinflammation in 3xTg mice [70]. Besides, LPS induced dopaminergic neuron loss, α-synuclein accumulation and autophagic impairment, with more lysosomes and autolysosomes, a continuous increase in p62 and an increase followed by a decrease in LC3-II and HDAC6, in the midbrain of PD mice [71]. Additionally, following manganese exposure or LPS stimulation in mice, LRRK2 siRNA inhibited the loss of dopaminergic neurons and alleviated neuroinflammation and autophagy dysfunction [72]. Moreover, the expression of STING was elevated in post-mortem human TBI brain and in TBI mouse brain, and genetic ablation of STING reduced lesion volumes, glial activation and pro-inflammatory mediator gene expression, inhibited type-I IFN signaling, including down-regulation of IFN-α, IFN-β and IRF3 expression, and promoted autophagic flux in the brain of TBI mice [73]. Additionally, pifithrin-α, a p53 inactivator, improved neurological functional outcomes, inhibited astrocyte and microglia activation, and down-regulated the levels of pro-inflammatory cytokines via enhancing autophagy in striatal tissues of TBI rats [74].
Intriguingly, autophagy is over-activated in some CNS disorders, so its inhibition might mitigate neuroinflammation instead. During S. pneumoniae infection, BV2 cells showed increased LDH, IL-6 and IL-18 levels and up-regulation of NOD2 expression, and ablation of NOD2 partly reversed these changes to improve neuroinflammation via suppressing TAK1/NF-κB signaling and hyperactivated autophagy [75]. Besides, autophagy was induced, accompanied by an accumulation of LC3 dots and autophagosomes and an increased expression of LC3-II protein, which were reversed by siRNA targeting ATG5, Beclin1, ATG9 and ATG12 in Streptococcus suis serotype 2-infected BV2 cells and mice [76]. Moreover, ULK1 gene knockout alleviated TBI-induced behavioral impairment and reduced neuroinflammation and apoptosis, along with inhibition of microglia and astrocyte activation, an increase in Bcl-2 expression, and a decrease in pro-inflammatory cytokines and in the expression of Bax, cytochrome c, cleaved Caspase-3 and cleaved PARP in the hippocampus. ULK1 gene knockout also inhibited the over-activation of autophagy via inhibiting the phosphorylation of p38 and JNK in the hippocampus of TBI-stimulated mice, which was further confirmed in LPS-treated primary astrocytes with or without ULK1 siRNA transfection [77]. Inhibition of autophagy with bafilomycin A1 ameliorated chronic unpredictable mild stress-induced depressive-like behaviors, inhibited apoptosis, reduced pro-inflammatory cytokine levels, and increased the expression of the synaptic plasticity-associated proteins SYP and PSD95 in the brains of depression rats [78].
In summary, apart from mTOR, AMPK and NLRP3, other signals such as SphK1, DJ-1 and LRRK2 could modulate autophagy to participate in microglia polarization as well as inflammatory mediator production, thus affecting neuroinflammatory processes in cerebral ischemia, AD, TBI and other CNS diseases.
Non-Coding RNAs Modification
Non-coding RNAs, such as microRNAs (miRNAs), circular RNAs (circRNAs) and long non-coding RNAs (lncRNAs), represent key regulators of autophagy and are able to mediate the occurrence and progression of brain disorders by modulating autophagy [79]. The roles of non-coding RNA modifications in modulating autophagy in neuroinflammation-associated CNS diseases are summarized in Table 1.
miRNAs
A study reported that miR-124 inhibition aggravated the LPS-induced increase in TNF-α and IL-1β, whereas miR-124 overexpression attenuated LPS-induced neuroinflammation by directly targeting p62 and p38 and upregulating the LC3-II/I ratio in BV2 cells. Further, exogenous delivery of miR-124 also alleviated the neuroinflammatory response via activating autophagy in the substantia nigra pars compacta (SNpc) of MPTP-induced PD mice [80]. Besides, inhibition of miR-3473b suppressed pro-inflammatory cytokine levels via directly targeting TREM2 and increasing the phosphorylation of ULK1 to promote autophagy in LPS-stimulated BV2 cells. In the SNpc of MPTP-treated PD mice, MPTP up-regulated the expression of miR-3473b and inhibited autophagy, and exogenous delivery of a miR-3473b antagomir reversed these changes [81]. Moreover, Aβ degradation was decreased in primary microglia from 5xFAD mice, in autophagy-inhibited microglia and in the NBR1-knockdown human microglia line HMC3. The expression of NBR1 and ATG7 was suppressed in 5xFAD microglia and in microglia of human AD brain slices, mainly in regions of high Aβ burden, whereas miR-17 expression was increased in human AD brain slices. Further studies showed that miR-17 inhibition promoted autophagy and Aβ degradation and up-regulated the expression of NBR1 in primary microglia from 5xFAD mice [82]. Another study reported that miR-223 deficiency further increased the number of autophagic vacuoles and GFP-LC3 accumulation, promoted LC3 lipidation and decreased TIMM23 expression in LPS-induced BV2 cells, effects that were blocked by miR-223 mimics via directly targeting the ATG16L1 3′-UTR. In EAE mice, miR-223 knockout alleviated EAE symptoms and CNS neuroinflammation, inhibited microglia activation and augmented autophagy in brain microglia [83]. Interestingly, in some CNS disorders, autophagy over-activation leads to increased inflammation, and microRNAs reduce inflammation via inhibiting autophagy. Microglia-derived IL-6 promoted PC12 neuronal apoptosis and induced autophagy via promoting STAT3 phosphorylation and suppressing miR-30d, which directly targets ATG5 [84]. In addition, miR-144 directly interacted with the mTOR 3′-UTR to inhibit mTOR expression and further promoted autophagy-mediated neuroinflammation in hemoglobin-stimulated microglia and in perihematomal brain from ICH mice [85,86]. Another study reported that miR-23b overexpression downregulated pro-inflammatory mediator expression in hemin-induced BV2 cells and prevented apoptosis of co-cultured HT22 cells via directly targeting inositol polyphosphate multikinase (IPMK) and activating the AKT/mTOR pathway to suppress autophagic flux.
Further, miR-23b overexpression also relieved ICH-triggered brain injury, inhibited neuronal apoptosis and microglia activation, and reduced pro-inflammatory factor expression in the brain of ICH rats [87]. Moreover, overexpression of miR-27a ameliorated TBI-triggered neurological deficiency and inhibited autophagy via targeting the 3′-UTR of FOXO3a mRNA to suppress the expression of FOXO3a in the hippocampus of TBI rats [88]. To sum up, miRNAs could reduce neuronal apoptosis, promote the degradation of abnormal proteins, and inhibit the production of pro-inflammatory mediators to alleviate CNS diseases by regulating autophagy.
circRNAs and lncRNAs
In methamphetamine-induced neuroinflammation in astrocytes, the expression of GFAP, LC3-II and DDIT3 was increased; these changes were further aggravated by the autophagy inhibitor 3-MA and were reversed by the endoplasmic reticulum stress inhibitor salubrinal and the autophagy inducer rapamycin. Methamphetamine also inhibited miR124-2HG expression and promoted Sigmar1 expression. In turn, miR124-2HG down-regulated Sigmar1 gene and protein expression, suppressed autophagy and endoplasmic reticulum stress, and inhibited astrocyte activation in methamphetamine-induced astrocytes, and inhibition of circHIPK2, which binds miR124-2HG, had effects similar to those of miR124-2HG. In vivo, anti-miR124-2HG accelerated the expression of Sigmar1 and GFAP in the hippocampus, which was inhibited by Sigmar1 knockout. Besides, in methamphetamine-induced neuroinflammation in mice, circHIPK2 knockdown also suppressed astrocyte activation and restrained Sigmar1 and GFAP protein expression [89]. Additionally, circRS-7 inhibition reduced chronic constriction injury (CCI)-triggered neuropathic pain, inhibited microglia activation, reduced the production of pro-inflammatory cytokines, and suppressed autophagy via targeting miR-135a-5p in CCI rats [90]. The above studies indicate that circRNAs inhibit the production of related miRNAs to regulate autophagy and thus reduce inflammation in CNS diseases.
Knockdown of long intergenic noncoding RNA Cox2 (lincRNA-Cox2) decreased the IL-1β level, prevented NLRP3 inflammasome activation, and increased the LC3-II/I ratio; these effects were further up-regulated by NLRP3, ASC and Caspase-1 siRNA and weakened by ATG5 and TIR-domain-containing adapter-inducing interferon-β (TRIF) siRNA, via suppression of NF-κB signaling, in LPS-induced microglial cells. In vivo, lincRNA-Cox2 knockdown attenuated EAE injury and neuroinflammation, while inducing autophagy of microglia in the spinal cords of EAE mice [91]. Furthermore, in MPP+-induced SK-N-SH cells, lncRNA-HOTAIR knockdown reversed MPP+-induced cell damage, including enhanced cell viability, inhibition of apoptosis, and down-regulation of pro-inflammatory mediator levels. Absence of lncRNA-HOTAIR also up-regulated miR-874-5p expression and reduced ATG10 protein expression, while the effects of HOTAIR knockdown on MPP+-induced neuronal injury were reversed by miR-874-5p inhibition [92]. In summary, lncRNAs might reduce inflammation in neurological diseases by regulating autophagy to inhibit the production of pro-inflammatory mediators, activation of the NLRP3 inflammasome and apoptosis.
Histone Modification
The histone deacetylase inhibitor suberoylanilide hydroxamic acid increased the levels of H3 and H4 histone acetylation and the LC3-II/I ratio, and reduced p62 expression in sevoflurane-exposed primary neurons. The histone deacetylase inhibitor also improved cognitive impairment, up-regulated histone acetylation levels, ameliorated autophagy impairments and suppressed the activation of the NLRP3 inflammasome in the hippocampus of sevoflurane-stimulated aged mice [93]. Moreover, anacardic acid, a histone acetyltransferase inhibitor, decreased pro-inflammatory mediator levels, suppressed autophagy and down-regulated the expression of H3K9ac in calcitonin gene-related peptide (CGRP)-stimulated astrocyte cells. In CCI rats, a CGRP antagonist and the histone acetyltransferase inhibitor prevented neuropathic pain and astrocyte activation and inhibited the expression levels of H3K9ac in the spinal dorsal horn [94]. In summary, histone modification may inhibit the activation of glial cells and the NLRP3 inflammasome through regulating autophagy, as shown in Table 1.
Conclusion and perspectives
Inflammation in the CNS is a hallmark of neurological diseases, and therapeutic strategies targeting neuroinflammation have been extensively investigated. Autophagy, a tightly controlled cellular decomposition pathway, eliminates pathogens, damaged organelles and other cargoes to maintain the homeostasis of the intracellular environment under various stimuli. Autophagy can influence CNS diseases by modulating neuroinflammation. According to recent studies, numerous signals, including cellular metabolism, apoptosis and the inflammasome, are involved in the effects of autophagy on neuroinflammation (Fig. 2). Specifically, enhancement of autophagy through inhibiting the mTOR pathway and promoting the AMPK and FOXO1 pathways is able to suppress microglia activation, facilitate M2 microglia polarization, promote the production of anti-inflammatory mediators, reduce the levels of pro-inflammatory mediators, alleviate apoptosis and inactivate the NLRP3 inflammasome, thereby treating a variety of CNS diseases, such as AD, PD and TBI. Additionally, since epigenetic deficiencies occur during the early stages of CNS disorders, interventions targeting epigenetic modifications have been proposed as prevention strategies. Notably, epigenetic modifications also affect autophagy in neuroinflammation, particularly non-coding RNAs and histone acetylation, which may adjust autophagy-related genes, thus impacting their transcription and subsequent autophagy to alleviate inflammation in CNS diseases (Fig. 3). Despite extensive research on the regulation of autophagy in neuroinflammation and related CNS diseases, the current studies still have some limitations. Firstly, the role of autophagy in neuroinflammation in AD, PD, TBI and cerebral ischemia has attracted most investigators, while other CNS diseases such as multiple sclerosis, migraine and ICH have received less attention and need to be further studied. Secondly, current reports mainly focus on microglia, with less research on other important brain cells such as astrocytes and neurons. Thirdly, the mechanisms by which autophagy regulates neuroinflammation are mainly concerned with mTOR and the NLRP3 inflammasome, while other pathways are less well studied. In addition, epigenetic modifications, mainly including DNA methylation, histone modifications and non-coding RNAs, modulate neuroinflammation and related brain diseases [95,96]. However, non-coding RNAs and histone modification have so far received extensive attention in the context of autophagy regulation of neuroinflammation, whereas DNA methylation has hardly been reported. DNA methylation mediates transcriptional silencing of downstream genes through recruitment of repressive transcription factors, which in turn leads to reduced gene and protein expression [97]. DNA methylation may also regulate neuroinflammation and associated CNS disorders by regulating the expression of autophagy-related genes and proteins, which needs to be further studied. Overall, these data indicate that the development of therapeutic agents that specifically target the epigenetic mechanisms of autophagy and the signaling connected to autophagy will have a significant impact on the treatment of CNS diseases linked to neuroinflammation.
Figure 1 .
Figure 1. Process of autophagy. Under stress conditions, the Beclin1-Vps34 complex, stimulated by the ULK1 complex, induces the initiation of autophagy and promotes formation of the phagophore. Phagophores surround autophagic cargoes, then expand to form autophagosomes and fuse with lysosomes to form autolysosomes, where sequestered cargoes are digested and degraded.
Figure 2 .
Figure 2. Overview of the mechanisms of autophagy in neuroinflammation. The inhibitory effect of autophagy in neuroinflammation involves multiple signaling pathways, including cellular metabolism, apoptosis and the inflammasome.
Figure 3 .
Figure 3. Epigenetic regulation of autophagy in neuroinflammation. Autophagy in neuroinflammation is influenced by epigenetic modifications, including non-coding RNAs and histone modification.
Table 1 .
Epigenetic regulation of autophagy in neuroinflammation and related CNS diseases.
"Medicine",
"Biology"
] |
Cloud shadow removal for optical satellite data
Abstract. An improved cloud shadow removal algorithm for high spatial resolution optical satellite data over land is presented. The method is based on the matched filter method, which consists of the calculation of a covariance matrix and the corresponding zero-reflectance matched filter vector and the computation of the shadow function. The new additions consist of the usage of an improved cloud shadow map and further evaluations performed on the shadow function. The performance of the cloud shadow removal algorithm incorporated in the software package Python-based atmospheric correction (PACO) is compared to the deshadowing algorithm in atmospheric correction (ATCOR) on a set of 25 Sentinel-2 scenes distributed over the globe covering a wide variety of environments and climates. Furthermore, an evaluation of the relative ratio between clear and shadow pixels with and without deshadowing is performed. The visual, spectral, and statistical results show that the new additions to the deshadowing algorithm improve on the cloud shadow removal performance of the method used so far.
Introduction
For optical remote sensing of the Earth's surface, clouds and their shadows have always been a major disadvantage, since a lot of remote sensing applications are impacted by their presence. These applications involve, for example, radiation, image classification, the calculation of surface reflectance or land surface temperature, vegetation indices, etc. 1,2 The annual cloud coverage of the Earth lies around ∼70%. 3 Therefore, it is inevitable that observations of a specific location on Earth will not be continuously cloud- and cloud shadow-free, and the information that can be extracted from a scene will have a high percentage of degradation. 4 This means that scientists have to find ways to work around or with the presence of clouds and cloud shadows. This is especially crucial for land applications for which the amount of usable data per scene and specific timing is of high importance, for example for crop yield estimation. 5 Furthermore, the use of cloud and cloud shadow free images enables the determination of ground properties of the Earth's surface [6][7][8][9] and facilitates crop monitoring tasks. 2 Even geological applications 10 are disturbed if clouds and their shadows cover parts of high spatial resolution optical satellite data. This shows how important it is nowadays to have a correct and exact masking of clouds and cloud shadows as a preprocessing step for atmospheric correction (ATCOR) and shadow removal of multi-spectral imagery. Hence, in the past years, more and more cloud and cloud shadow detection and removal approaches have been developed [11][12][13][14][15][16][17][18] and used to enable various applications. Before a cloud shadow removal algorithm can be applied, the clouds and their shadows have to be detected and mapped. The detection of clouds can be done by studying each satellite scene separately using a mono-temporal approach [19][20][21][22][23][24][25][26] or through a multi-temporal methodology 27,28 and hence studying a time series of images. For the detection of the correct location and geometry of a cloud shadow, the direction of observation is crucial, since shadows represent projections of clouds in an image. 1 In this paper, a mono-temporal cloud shadow detection approach, called the thresholds, indices, projections (TIP) method, 29 is used as a preprocessing step for cloud shadow removal. Since remotely sensed optical imagery of the Earth's surface is contaminated by clouds and cloud shadows, the surface information underneath a cloud-covered region cannot be retrieved with optical sensors. The surface information underneath a cloud shadow, on the other hand, can be retrieved, since the ground-reflected solar radiance is a small non-zero signal. The total radiation signal that is measured at the sensor is composed of a direct beam and a diffuse, reflected skylight component. This means that even if there is no direct solar beam arriving at the sensor from the shadow region, there will still be some information arriving from the reflected diffuse flux. 30 The proposed shadow removal method works with this knowledge and uses the estimate of the fraction of direct solar irradiance for a fully or partially shadowed pixel as the basis for the removal algorithm. The aim is to provide an improved cloud shadow removal algorithm based on the current version of the matched filter proposed by ATCOR.
As opposed to the IDL ATCOR algorithm, 30 the new cloud shadow removal algorithm is implemented in the Python-based atmospheric correction (PACO) software.
In this paper, the multispectral instruments (MSIs) of the Copernicus Sentinel-2 satellites are used. 31 The MSIs are sensors on board the satellites, which allow free access to the data and a high revisit time. 32 The 13 spectral bands of a Sentinel-2 scene are composed of 4 bands at 10 m, 6 bands at 20 m and 3 bands at 60 m spatial resolution. Furthermore, the PACO atmospheric processor is used. This is the Python-based version of ATCOR. For PACO, the input data are L1C radiances in units of [mW/(cm² sr μm)]. If the input scene is given in terms of top of atmosphere (TOA) reflectance, it has to be converted into TOA radiance. PACO performs the atmospheric correction using the spectral information in all bands by resampling them to a 20-m cube, yielding an image cube of 13 bands with a size of 5490 × 5490 pixels. This so-called "merged cube" is the Sentinel-2 TOA cube considered in the rest of this study.
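The text does not specify the exact conversion convention used by PACO; the following is a minimal sketch of the usual TOA reflectance-to-radiance conversion, assuming the standard definition of TOA reflectance. The inputs e_sun (band solar irradiance) and d_au (Earth-Sun distance) are values the user would have to supply.

```python
import numpy as np

def toa_reflectance_to_radiance(rho_toa, e_sun, sun_zenith_deg, d_au):
    """Convert TOA reflectance to TOA radiance using the usual definition
    rho_toa = pi * L * d^2 / (E_sun * cos(theta_s)), solved here for L.

    rho_toa        : TOA reflectance (scalar or per-band array)
    e_sun          : band solar irradiance at 1 AU, e.g. in mW/(cm^2 um)
    sun_zenith_deg : solar zenith angle in degrees
    d_au           : Earth-Sun distance in astronomical units
    Returns radiance in the irradiance units per steradian.
    """
    mu_s = np.cos(np.deg2rad(sun_zenith_deg))
    return rho_toa * e_sun * mu_s / (np.pi * d_au ** 2)
```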
Radiation Components and Surface Reflectance
The radiation signal in the solar region (0.35 to 2.5 μm) arriving at the sensor is due to four different components: 33,34
• Path radiance, L path: from photons that did not have contact with the ground and are scattered into the field-of-view of the sensor.
• Ground reflected radiation from a pixel, L ground: the fraction of the diffuse and direct solar radiation incident on the pixel that is reflected from the surface.
• Reflected radiation from the surrounding, L adj: the fraction of the solar radiation reflected from the neighborhood and scattered by the air volume into the field-of-view of the sensor. This radiation is also called adjacency radiation.
• Reflected terrain radiance from opposite mountains, L terrain.
Figure 1 shows the four different components arriving at the sensor. For a full evaluation of the radiation components in rugged terrain, please refer to Ref. 34 and Sec. 6.2 in Ref. 30.
From the four components, only the reflected radiation from a pixel contains the necessary information about the viewed pixel. Hence, in ATCOR, it is important to remove the other components and to retrieve the correct ground reflectance from the pixel of interest.
If we now combine all four components, the total radiation arriving at the sensor can be written as [see Eq. (1)]

L_total = L_path + L_ground + L_adj + L_terrain.   (1)

Methods
MF method: surface reflectance and covariance matrix
Deshadowing is the compensation process, which uses an estimate of the fraction of direct solar irradiance for a fully or partially shadowed pixel. The MF method needs at least one channel in the visible and at least one spectral band in the NIR. The bands used in the MF are: blue, green, red, near-infrared (NIR), short-wave infrared 1 (SWIR1), and short-wave infrared 2 (SWIR2), if available.
The method starts with the calculation of the surface reflectance image cube, ρ. The surface reflectance, ρ, is computed with the assumption of full solar illumination, excluding water and clouds. Then the covariance matrix, C(ρ), is calculated, where ρ represents the surface reflectance vector of the three selected bands [see Eq. (2)]:

C(ρ) = (1/N) Σ_{i=1..N} (ρ_i − ρ̄)(ρ_i − ρ̄)^T.   (2)

The MF vector, V_MF, is tuned to a certain target reflectance spectrum, ρ_t, to be detected; ρ̄ is the scene-average spectrum without water and cloud pixels. For the shadow target, a target reflectance spectrum of ρ_t = 0 is selected, which gives the simplified version of the shadow MF vector, V_sh, as follows [see Eq. (3)]: 35

V_sh = −C^(−1) ρ̄ / (ρ̄^T C^(−1) ρ̄).   (3)
MF method: unscaled shadow function
The MF shadow vector, V_sh, can be applied to the non-water and non-cloud part of the scene to give the un-normalized values, Φ, also called the unscaled shadow function. Φ(x, y) gives a relative measure of the fractional direct illumination for each pixel (x, y) [see Eq. (4)]:

Φ(x, y) = V_sh^T (ρ(x, y) − ρ̄).   (4)
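The following is a minimal numpy sketch of Eqs. (2)-(4), not the PACO implementation; the function name, the mask handling and the use of np.cov are assumptions for illustration.

```python
import numpy as np

def shadow_matched_filter(rho, valid_mask):
    """Minimal sketch of the shadow matched filter (Eqs. 2-4).

    rho        : (rows, cols, bands) surface reflectance cube
    valid_mask : (rows, cols) boolean array, True for non-water/non-cloud pixels
    Returns the unscaled shadow function Phi, NaN outside the mask.
    """
    pixels = rho[valid_mask]                    # (N, bands) spectra of valid pixels
    rho_mean = pixels.mean(axis=0)              # scene-average spectrum
    cov = np.cov(pixels, rowvar=False)          # covariance matrix C, Eq. (2)
    cov_inv = np.linalg.inv(cov)
    # Shadow MF vector for target reflectance rho_t = 0, Eq. (3)
    v_sh = -(cov_inv @ rho_mean) / (rho_mean @ cov_inv @ rho_mean)
    # Unscaled shadow function Phi(x, y) = v_sh^T (rho(x, y) - rho_mean), Eq. (4)
    phi = np.full(rho.shape[:2], np.nan)
    phi[valid_mask] = (pixels - rho_mean) @ v_sh
    return phi
```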
MF method: rescaling and scaled shadow function
The MF calculates a minimum RMS shadow target abundance for the entire scene. The values of Φ can be both positive and negative. Therefore, Φ is rescaled into the physical range from 0 (full shadow) to 1 (full direct illumination). The histogram of the unscaled shadow function is used for rescaling, and an illustration can be found in Fig. 3 of Ref. 36. The first peak of the histogram of Φ represents the shadow pixels. On the other hand, the highest peak of the histogram, Φ max, represents the fully illuminated areas.
The rescaling of Φ is done by linear mapping of the Φ values from the unscaled interval (Φ min, Φ max) onto the physically scaled interval (0, 1). Hence, the scaled shadow function, Φ*, is calculated as follows [see Eq. (5)]:

Φ*(x, y) = (Φ(x, y) − Φ_min) / (Φ_max − Φ_min).   (5)

The scaling and normalizing of the MF vector into the (0, 1) interval is based on the assignment of:
• the min and max direct sun fraction in the shadow regions (a min and a max; the defaults are a min = 0.20 and a max = 0.95), 36
• the corresponding shadow thresholds Φ min and Φ max obtained from the normalized histogram of Φ.
The normalized and scaled shadow function, Φ n, is then obtained by mapping Φ* onto the interval (a min, a max) [see Eq. (6)]:

Φ_n(x, y) = a_min + Φ*(x, y) (a_max − a_min),   (6)

with Φ_n = 1 assigned to pixels outside the shadow range (Φ > Φ_max).
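A compact sketch of the rescaling step, reusing the Φ array from the previous sketch; the extraction of Φ_min and Φ_max from the histogram is not reproduced here, and the clipping of non-shadow pixels to 1 is an assumption consistent with the description above.

```python
import numpy as np

def scale_shadow_function(phi, phi_min, phi_max, a_min=0.20, a_max=0.95):
    """Rescale the unscaled shadow function (Eqs. 5 and 6).

    phi_min, phi_max : shadow thresholds derived from the histogram of phi
    a_min, a_max     : minimum/maximum direct sun fraction in shadow regions
    """
    phi_star = (phi - phi_min) / (phi_max - phi_min)   # Eq. (5): map onto (0, 1)
    phi_star = np.clip(phi_star, 0.0, 1.0)
    phi_n = a_min + phi_star * (a_max - a_min)          # Eq. (6): map onto (a_min, a_max)
    phi_n = np.where(phi > phi_max, 1.0, phi_n)         # fully illuminated pixels
    return phi_n
```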
MF method: iteration
The potential shadow pixels are those which satisfy ϕ n < 1, but as can be seen from Eq. (6), the value strongly depends on Φ max . Therefore, an iterative strategy is applied and the exact steps of the ATCOR MF iteration method can be found in Ref. 36.
Deshadowing reflectance equation
The scaled shadow function, Φ n , represents the fraction of the direct illumination for each pixel in the surface reflectance vector, ρ. The MF method tries to find the core shadows and then subsequently expands these core regions. This enables a smooth shadow to clear transition.
The scaled shadow function is only applied to the pixels in the final mask. The core shadow mask is defined by the pixels with Φ(x, y) < Φ T, where Φ T is a threshold that is set in the neighbourhood of the local minimum of the histogram. The final deshadowing is performed by multiplying the direct illumination, E dir, with the pixel-dependent Φ n. This reduces the direct solar term and increases the brightness of a shadow pixel, since it is located in the denominator of the deshadowing equation [see Eq. (7)].
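Equation (7) itself is not reproduced in the text above; the sketch below only illustrates, in a simplified flat-terrain form, how the pixel-dependent Φ_n weights the direct irradiance in the denominator and thereby brightens shadow pixels. The symbols (path radiance L_path, direct and diffuse irradiances E_dir and E_dif, ground-to-sensor transmittance tau_v) and the equation form are assumptions for illustration, not the exact PACO/ATCOR formulation.

```python
import numpy as np

def deshadow_reflectance(L, L_path, E_dir, E_dif, tau_v, phi_n, sun_zenith_deg):
    """Simplified, illustrative reflectance retrieval with deshadowing.

    The direct irradiance is weighted by the scaled shadow function phi_n,
    which appears in the denominator: a smaller phi_n yields a larger
    retrieved reflectance, i.e. shadow pixels are brightened.
    """
    mu_s = np.cos(np.deg2rad(sun_zenith_deg))
    e_global = phi_n * E_dir * mu_s + E_dif      # shadow-weighted global irradiance
    return np.pi * (L - L_path) / (tau_v * e_global)
```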
Proposed Method: Cloud Shadow Removal MF Method with New Additions
The proposed cloud shadow removal algorithm was created for multispectral and hyperspectral imagery acquired by satellite/airborne sensors. It is designed as a fully automated shadow removal algorithm and is based on the main concept of the MF method. 30,36 The MF method was first implemented from the IDL ATCOR version into the Python-based PACO software. The new cloud shadow removal algorithm incorporates the main equations from the MF method used by the ATCOR deshadowing algorithm. 30 To improve the results, additions have been made and are proposed in this paper.
Figure 2 shows a flow chart of the eight main steps performed during the new cloud shadow removal algorithm. The steps that have to be performed are: the calculation of the surface reflectance; the evaluation of the cloud shadow map with the TIP method; 29 the calculation of the MF vector, which gives the unscaled shadow function, Φ; the calculation of the scaled shadow function, Φ*; the calculation of the minimum direct sun fraction, a min, and the normalized scaled shadow function, Φ n; and the final step of the shadow removal.
The first step of the new method is the calculation of the surface reflectance for the bands required to perform the MF evaluation (blue, green, red, NIR, SWIR1, and SWIR2, if available). Constant atmospheric conditions, a standard atmosphere and a fixed visibility are assumed. Hence, the default of the visibility is set to 30 km, which corresponds to an aerosol optical thickness at 550 nm of 0.32 for sea level.
The new additions are the cloud shadow map from the TIP method developed in Ref. 29 and a newly defined iteration over the minimum direct sun fraction, a min.
The TIP cloud shadow detection is based on thresholds, indices and projections and has been able to improve the results of the previous ATCOR cloud shadow map calculation. The detailed evaluation of the TIP cloud shadow map calculation can be found in Ref. 29. Including a better cloud shadow map improves the results of the shadow removal algorithm.
The second newly implemented addition is performed within the computation of the matched filter shadow function Φ. The normalization of Φ into the physical range between 0 and 1 is evaluated with the minimum and maximum direct sun fraction in the cloud shadow area, a min and a max, and their corresponding shadow thresholds Φ min and Φ max. Φ min and Φ max are obtained from the normalized histogram of Φ. The default value of a max is set to 0.95, and as opposed to the previous method, the starting default value of a min is set to 0.01. Then the normalized scaled shadow function is calculated using Eq. (6).
The proposed iteration calculates the mean reflectance of all shadow pixels (Φ sh) and all sun-lit pixels (Φ sun) for all bands, j, and terminates when the overall absolute difference of these two values, Σ_j D(j), is less than the difference of the previous iteration, Σ_j D(j)_previous [see Eq. (8)]:

Σ_j D(j) = Σ_j |Φ_sh(j) − Φ_sun(j)| < Σ_j D(j)_previous.   (8)

When this limit is reached, the two signals are as close as possible. If the condition is not reached for a min in the range 0.01 to 0.3, then the iteration stops and takes the upper limit of a min = 0.3 as the default value. The iteration condition is evaluated for the total reflectance, hence taking all bands into account.
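A possible realization of this iteration, reusing the scale_shadow_function sketch above; the step size, the stand-in deshadowing step and the reading of the convergence test of Eq. (8) (iterate while the band-summed difference keeps decreasing over the 0.01-0.3 search range) are assumptions, as the text only specifies the test itself, the search range and the fallback value of 0.3.

```python
import numpy as np

def iterate_a_min(phi, rho, shadow_mask, sun_mask, phi_min, phi_max,
                  a_max=0.95, a_start=0.01, a_stop=0.30, a_step=0.01):
    """Sketch of the iteration over the minimum direct sun fraction a_min."""

    def deshadow(r, p):
        # Stand-in for the full Eq. (7) retrieval: brighten shadowed pixels
        # by dividing each band by the scaled shadow function.
        return r / p[..., None]

    best_a, d_prev = a_stop, np.inf
    for a_min in np.arange(a_start, a_stop + a_step, a_step):
        phi_n = scale_shadow_function(phi, phi_min, phi_max, a_min, a_max)
        rho_corr = deshadow(rho, phi_n)
        d = np.abs(rho_corr[shadow_mask].mean(axis=0)
                   - rho_corr[sun_mask].mean(axis=0)).sum()   # band-summed difference, Eq. (8)
        if d >= d_prev:                                        # stop once D no longer decreases
            break
        best_a, d_prev = a_min, d
    return best_a
```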
The final calculation of the reflectance is done using Eq. (7) with the corresponding scaled shadow function.
Results
The results of the deshadowing algorithm presented in this paper are first shown through three selected scenes from the data set where a visual and spectral comparison is performed (see Sec. 4.2). Additionally, a metric quantitative comparison for all evaluated scenes is shown and discussed in Sec. 4.3.
Data and Material for Training Set
To test the new cloud shadow removal method on a set of data, 25 Sentinel-2 (S2) scenes were chosen. A list of the investigated Sentinel-2 A and B scenes is given in Table 1. The scenes were selected to cover a wide variety of regions over the entire globe (see Fig. 3). This enables validation of the shadow removal algorithm for different continents, climates, seasons, weather conditions, and land cover classes. Furthermore, they have been selected to represent flat and mountainous sites with a cloud cover from 3% to 80%, and they include the presence of cumulus, thin and thick cirrus clouds. The land cover types represented are: desert, urban, cropland, grass, forest, wetlands, and sand and coastal areas. The range of solar zenith angles is from 27 deg to 67 deg.
Cloud Shadow Removal Results
In the following section, three scenes (scene IDs 18, 16, and 10 from Table 1) are chosen to show the results of the cloud shadow removal algorithm. For each scene, the deshadowing results are given for a subset to better evaluate the results spectrally and visually. In each subset, a clear pixel and a shadow pixel are selected that are located close by and represent the same ground properties. The visual and spectral comparison is done between the original scene, the newly presented cloud shadow removal algorithm from PACO and the cloud shadow removal algorithm as given by ATCOR. Figure 4 shows the first scene results to be analyzed in this paper. It is scene number 18 of Table 1. To compare the new method visually with the previous version, Fig. 4 provides the subset from the original scene and the two deshadowed subsets from PACO (new version) and ATCOR (old version) in the middle row, respectively. For each subset the same zoom is provided (see Fig. 4, bottom row). In this image zoom, a clear pixel and a cloud shadow pixel are chosen. To not only provide visual results, the spectra of the clear pixel, the cloud shadow pixel, the PACO deshadowed cloud shadow pixel, and the ATCOR deshadowed cloud shadow pixel are presented in Fig. 5.
Netherlands, Amsterdam (scene ID 18)
The black curve of Fig. 5 represents the reflectance spectrum obtained from the cloud shadow pixel without correction. The pink curve represents the chosen clear pixel close by. The orange curve represents the reflectance spectrum of the cloud shadow pixel after deshadowing with the new PACO version. Finally, the blue curve represents the cloud shadow pixel after being deshadowed with ATCOR. As can be seen from Fig. 5, both methods nicely deshadow the cloud shadow pixel.
Table 1. Sentinel-2 level L1C test scenes. Information on scene climate, main surface cover, and rural/urban. (SZA = solar zenith angle).
Morocco, Quarzazate (scene ID 16)
In order to prove the promising results of the cloud shadow removal algorithm presented in this paper, a second scene is illustrated in Fig. 6. Figure 6 shows scene number 16, located in Quarzazate, Morocco. The scene was taken on the 30th of August and has a zenith angle of 27.2 deg. The top left image of Fig. 6 shows the RGB = 665/560/490 nm true color composite of Quarzazate and the top right image the chosen subset from this scene.
The same evaluation as done in Sec. 4.2.1 for scene ID 18 is performed on the Morocco example. Figure 6 shows the scene subsets of the original, PACO deshadowed and ATCOR deshadowed scene in the middle row and a zoom of the subset in the bottom row.
In Fig. 7, the reflectance spectra for the clear and cloud shadow pixel of the original scene, the PACO deshadowed scene, and the ATCOR deshadowed scene are given. The reflectance spectra again show how the ATCOR deshadowing algorithm rather overcompensates the cloud shadow pixel, whereas the PACO deshadowing algorithm follows the reflectance spectrum of the clear pixel very nicely for low wavelengths and then slowly becomes smaller.
France (scene ID 10)
As a final example, scene ID 10 is chosen. It represents a scene from France taken on the 16th of January 2016 and has a zenith angle of 66.8 deg. The top row of Fig. 8 shows on the left the original scene stretched into the RGB = 665/560/490 nm bands for better optical comparison and the chosen subset on the right. For this example, two image zooms of the subset were taken.
As performed in Secs. 4.2.1 and 4.2.2, Figs. 8 and 10 show the scene subsets of the original, PACO deshadowed and ATCOR deshadowed scene in the middle row and the first zoom of the subset in the bottom row. In Fig. 9, the reflectance spectra for the clear and cloud shadow pixel of the original scene, the PACO deshadowed scene and the ATCOR deshadowed scene are given for zoom number 1. As can be seen from the visual zoom in Fig. 8 and the reflectance spectra given in Fig. 9, ATCOR is not able to deshadow this area of the cloud shadow. The new version, on the other hand, performs similarly to the previous examples and nicely recovers most of the reflectance spectrum. Figure 10 shows the scene subsets of the original, PACO deshadowed and ATCOR deshadowed scene in the middle row and the second zoom of the subset in the bottom row. In Fig. 11, the reflectance spectra for the clear and cloud shadow pixel of the original scene, the PACO deshadowed scene and the ATCOR deshadowed scene are given for zoom number 2. In this example from France, ATCOR performs better visually, but looking at the reflectance spectra in Fig. 11, the new deshadowing algorithm gives better results.
Validation of Dataset
To evaluate the data set, a metric for the comparison of the surface reflectance retrieval without deshadowing and the deshadowed reflectance is calculated. This metric is represented by the relative ratio of the mean reflectance vectors over all spectral bands with deshadowing and without deshadowing. Hence, the mean reflectance of all the clear pixels is divided by the mean reflectance of all cloud shadow pixels. For perfect deshadowing, the value of the relative ratio should lie as close as possible to the perfect value of +1. Depending on the disagreement between clear pixels and cloud shadow pixels, the relative ratio will deviate from +1.
The computation is done with the following three steps and is performed for each scene and for PACO and ATCOR.
• Calculation of the relative ratio R1: ratio of the mean reflectance vector of the clear scene pixels with ATCOR versus the mean reflectance of the shadow pixels, but without deshadowing.
• Calculation of the relative ratio R2 for PACO and ATCOR: ratio of the mean reflectance vector of the clear scene pixels versus the mean reflectance of the shadow pixels after deshadowing.
• Comparison of R1 and R2 (see Table 2).
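A minimal sketch of how R1 and R2 could be computed; the exact order of averaging (ratio of band means followed by a band average) is an assumption, since the text only states that the metric is the relative ratio of the mean reflectance vectors over all spectral bands.

```python
import numpy as np

def relative_ratio(rho, clear_mask, shadow_mask):
    """Relative ratio of mean clear-pixel to mean shadow-pixel reflectance,
    averaged over all spectral bands; values close to +1 indicate that shadow
    pixels are spectrally consistent with clear pixels."""
    mean_clear = rho[clear_mask].mean(axis=0)      # mean reflectance vector, clear pixels
    mean_shadow = rho[shadow_mask].mean(axis=0)    # mean reflectance vector, shadow pixels
    return float((mean_clear / mean_shadow).mean())

# R1: atmospherically corrected scene without deshadowing
# R2: the same scene after deshadowing (PACO or ATCOR)
# R1 = relative_ratio(rho_no_deshadow, clear_mask, shadow_mask)
# R2 = relative_ratio(rho_deshadowed, clear_mask, shadow_mask)
```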
For an improvement in the reflectance vector after deshadowing, the relative ratio R2 should be closer to the value +1 than the relative ratio R1. Table 2 gives the values of R1 and R2 for each scene. R2 was evaluated for the new method, PACO, and for the previous ATCOR method. The bold face numbers of Table 2 indicate the relative ratio with a value closest to +1 and hence the best performance. No cloud shadows are present in scene IDs 5 and 15, hence no values for R1 and R2 are obtained.
As can be deduced from Table 2, all of the relative ratios are improved by the deshadowing algorithm for PACO, apart from the case of scene ID 8. The best performance of PACO is obtained with scene ID 1, located in Gobabeb (Africa), with a value of R2 = 0.999. The worst performance of PACO is obtained with scene ID 23, located in Barrax (Spain), with a value of R2 = 0.596. This is because the value of R1 for this scene is the worst outlier, so it represents a hard scene to deshadow.
For the case of scene ID 8, located in Arcachon (France), a relative ratio close to +1 for R1 is obtained, since the scene is covered by a film of haze. Hence, the overall scene appears brighter in the reflectance spectrum, even the shadows. This results in a ratio of R1 close to +1. When the deshadowing is done for a cloud shadow with a bit of haze on top, the corrected deshadowed image does not have a film of haze covering the deshadowed area. Hence the reflectance spectrum of this cloud shadow will appear lower. As a consequence, the value of R2 is slightly lower than the value of R1.
In the case of the deshadowing algorithm in ATCOR, no values were obtained for scenes 8, 9, 11, 12, and 17. For these scenes, the MF deshadowing was turned off during the atmospheric correction due to an insufficient number of pixels or a problem in the matrix inversion. To summarize this section, it can be seen that the PACO deshadowing algorithm, with the TIP cloud shadow masking and its additions to the MF method, greatly improves the deshadowing of the Sentinel-2 data.
Discussion
The improved deshadowing algorithm implemented in the Python ATCOR (PACO) has shown very promising results and was able to improve on the previous matched filter version implemented in ATCOR through its additions. Relative ratios close to the value of +1 are reached, as shown in Table 2, with a range between 0.596 and 1.245 with deshadowing, R2 (PACO), and between 0.469 and 12.434 without deshadowing, R1. The high values that are reached without the cloud shadow removal algorithm can be explained by the presence of haze covering the scene (scene ID 8) or by scenes with many water bodies that are part of the clear pixels (scene ID 17). The cloud shadow removal algorithm performed within PACO is done without any correction of the haze particles. Hence, one additional step that could further improve the deshadowing algorithm is to fully correct the scene for the visibility and then perform the cloud shadow removal. The visual and spectral comparison of the deshadowing results obtained through PACO and ATCOR is able to show the achieved improvements, but also the weaknesses that will have to be addressed in future work. One of the weaknesses of the presented cloud shadow removal algorithm is the correction at the borders of the shadows. The visual results show that ATCOR is able to better correct the transition region between shadow pixels and clear pixels. Nevertheless, better results are obtained overall for PACO, and this is also proven by the reflectance spectra shown.
The advantages of the deshadowing method presented in this paper are that it is performed through a fully automatic algorithm which is based on the matched filter method, the TIP cloud shadow masking and an iterative process over the scaled shadow function. It works for multispectral and hyperspectral imagery over land acquired by satellite/airborne sensors. Since the deshadowing algorithm relies on the TIP method, 29 it must be noted that it is applicable for VNIR-SWIR sensors, such as EnMAP, 37 PRISMA, 38 Landsat-8, 39 and Landsat-9, 40 but for VNIR sensors, such as for example the DESIS sensor, 41 the method can only be implemented partially.
So far, the new deshadowing algorithm has been tested for Sentinel-2 scenes having a geometric resolution of 20 m. Nevertheless, it can be assumed that the method will perform similarly for Landsat-8, which has a resolution of 30 m. For sensors with a ground sampling distance of less than 5 m, additional problems will arise, since the TIP method still has to be tested for these cases.
Conclusions
An improved cloud shadow removal algorithm for high spatial resolution optical satellite data was presented. It is based on the matched filter method with the addition of an improved cloud shadow masking and an iterative process for the final reflectance value calculation. Through visual and spectral inspection it was shown that the new method improves the previous deshadowing algorithm. This was furthermore demonstrated through an evaluation of the relative ratio between the reflectance of clear and cloud shadow pixels, which showed promising values for the new method. Future work will have to include the cloud shadow detection improvements of the TIP method and the evaluation of the cloud shadow border correction.
"Computer Science",
"Environmental Science"
] |
Insights into Desiccation and Self-Healing of Bentonite in Geosynthetic Clay Liners under Thermal Loads
Geosynthetic Clay Liners (GCLs) are widely used for protecting groundwater from pollution sources at the surface, including applications in which they are subject to significant thermal gradients. Hence, sodium bentonite in the GCL may undergo significant dehydration and cracking, and the GCL might fail as a result. The paper presents outcomes of a set of recent experimental and numerical investigations exploring the propensity of bentonite to desiccate and self-heal, as well as means of mitigating the effect of thermal gradients on the hydraulic conductivity of GCLs. An elasto-plastic thermo-hydro-mechanical model was found to yield reasonable predictions of experimental behaviour, except for the transient phase of preheating hydration. Introducing an airgap between the GCL and the heat source can reduce the extent of desiccation and its effects on hydraulic conductivity. However, the effectiveness of the solution will depend on other factors including subgrade, magnitude of thermal and mechanical loads and type of GCL.
Introduction
Geosynthetic Clay Liners (GCLs) are widely used in barrier systems around the world to protect groundwater from surface pollutants. They are made of a thin layer of sodium bentonite sandwiched between cover and carrier geotextiles and are relatively easy to transport and install. GCLs are typically placed over a layer of natural soil, which provides a crucial source of hydration for the GCL and helps ensure its bentonite maintains low hydraulic conductivity. They are then covered with a high-density polyethylene geomembrane (GMB).
In several engineering applications, GCLs are exposed to significant thermal gradients, which can cause dehydration and desiccation of the bentonite, with consequent loss of performance [1][2][3][4][5][6][7][8][9][10]. These applications include heat-generating organic waste in municipal waste landfills, incineration ash and other industrial waste in hazardous-waste landfills, and solar ponds and brine ponds, especially on coal-seam gas extraction sites. Hence, it is important to understand the behaviour of GCLs in such applications in order to ensure adequate protection of underlying aquifers.
Past research on GCLs under thermal loads has led to important insights into their behaviour in single [1,2] and double [3,4] composite liner systems in waste landfills, where moderate thermal gradients and high overburden loads (>200 kPa) occur. More recently, behaviour under higher thermal gradients and lower loads has been studied numerically [5,6] and experimentally [7][8][9][10]. However, significant uncertainties remain about the ability of multi-phase theories of behaviour, including critical-state soil mechanics, to predict bentonite dehydration, desiccation and self-healing under these conditions. In addition, the effectiveness of changes to the design of liner systems in preventing or mitigating desiccation is of interest and remains an open question. This paper reports key findings generated over the last twelve months by a program of research at the University of Sydney. Our focus here is on three research questions. Can thermo-hydro-mechanical models predict experimental observations? Does higher mass per unit area of bentonite reduce desiccation? Does an airgap separating the liner system from the heat source reduce the failure risk? Special attention is paid to bentonite dehydration, desiccation and self-healing.
The paper is structured as follows. First, complexities pertaining to research on GCLs are discussed and placed in the broader context of geotechnical research on bentonite clay. Second, materials and methods are described. Next, key research findings are discussed. Finally, important remaining questions and ongoing investigations are briefly presented.
Complexities of GCL Research
Five factors add complexity to the study of GCLs. First, GCLs are used under a wide range of environmental and operational conditions and theories describing their behaviour must be able to provide a reasonable coverage of this range. For example, adequate hydration of GCLs by the subsoil, up to gravimetric water contents of 90% or more, prior to exposure to leachate, is critical. Several factors influence hydration including, amongst others, type and initial water content of underlying soil [e.g., 11, 12], chemical composition of the leachate and soil water [e.g., 13-16], temperature [e.g., 17, 18], and overburden loads [e.g., 19, 20]. These factors vary widely across the different applications in which GCLs are found.
Second, the composite nature of the GCL is such that it is often difficult to infer its behaviour from that of its most important component, namely the thin layer of bentonite. For example, the soil water characteristic curve (SWCC) of the bentonite and that of the geotextiles are very different and, as a result, the SWCC of the GCL is usually markedly different to that of its bentonite. Much knowledge has accrued about the behaviour of bentonite over the last few decades, especially as a result of studies focussed on the deep burial of radioactive waste [e.g., 21-23]. However, clay in these repositories is subject to much higher overburden pressures than those typically encountered by a GCL. Hence, while research on bentonite can be useful for understanding GCL behaviour, a GCL must be considered as a new material with its own properties.
Third, there is significant variability in the bentonite and geotextiles used to manufacture GCLs and empirical observations for one type of GCLs may or may not be relevant to others [24,25]. This variability is due to the variety of products on the market and a large and increasing number of manufacturers around the world. The diversity of products and usages is likely to continue, with one market research study estimating 4% annual growth rate of GCL usage worldwide up to 2022 [26]. For example, either granular or powder bentonite, with different properties, are used in different products and different techniques for holding the components together have been employed (stitch bonding, adhesion, needle-punching). In addition, even for the same product and sometimes within the same roll of GCL, variability of key properties such as mass of bentonite per unit surface area can be high. One important consequence of this variability is that generalising experimental findings must be done with great caution.
Fourth, given the thinness of the bentonite layer (typically 7 to 10 mm), it is difficult to make direct, non-destructive measurements of key mechanical and hydraulic variables in real time. While a GCL can be weighed and its height measured to infer basic volumetric and gravimetric data, it is much more difficult to track changes in real time. As a result, experimental investigations of GCLs typically instrument the subsoil beneath it, or a layer above it, and infer the behaviour of the GCL indirectly.
Finally, key behavioural patterns of GCLs, such as hydration and dehydration, consolidation and thermal loads, occur under conditions of partial, rather than full saturation [27]. Given the fast-evolving state of research in unsaturated soil mechanics, critical theoretical questions remain unanswered, especially in relation to the selection of stress state variables and characterisation and interpretation of soil water characteristic curves. Inevitably, therefore, our understanding of GCLs and our ability to model their behaviour is bound to evolve with the field of partially-saturated soil mechanics.
GCLs and Subgrade
Commercially available GCLs, Elcoseal X-2000 and X-3000 (Geofabrics Australia), were used. The needle-punched, thermally-treated GCLs were made of powder Na-bentonite held together by nonwoven polypropylene geotextile covers, with a woven scrim reinforcement added to the carrier side. Key GCL properties are shown in Table 1. The main difference between the two GCLs is the higher mass per unit area of GCL_B. The subsoil was a well-graded sand (SW) found in Sydney, Australia (see Table 2). A coarse-grained soil has relatively low water retention, and therefore a lower ability to rehydrate the heated GCL, and was hence more likely to approximate a worst-case scenario.
Instrumented Laboratory Soil Columns
The one-dimensional column test apparatus was designed to test composite liner systems under controlled thermal gradients and overburden loads (see Fig. 1). The 600-mm column was made of polytetrafluoroethylene and held the subsoil and composite liner system, all placed between two temperature control cells. Four TDRs for measuring water content and three temperature sensors were installed along the subsoil depth at 150 mm intervals. The column was equipped with a loading frame and an LVDT sensor to monitor vertical deformation of the system. For a detailed description, readers are referred to Yu and El-Zein (in press) [28].
All column tests were conducted as follows. First, the GCL specimens with as-received water contents were installed on top of the subsoil, covered by the GMB, then left to hydrate from the subsoil, under isothermal conditions (20±1 °C) and 20 kPa overburden load. Once bentonite swelling ceased, indicating adequate levels of hydration (44 to 56 days), heating was applied with top and bottom temperatures set at 78±1 °C and 20±1 °C, respectively (average thermal gradient of 96 °C/m).
Fig. 1. Laboratory column test diagram
Heating continued until no further shrinkage of GCLs was recorded (28 to 39 days). Specimens were then carefully taken out, rehydrated and their permeability to distilled water measured. X-ray images before and after the permeation tests were taken.
This procedure approximated key aspects of site conditions since, in the field, especially under brine ponds, composite liners can be exposed to temperatures up to 80 °C, under a few metres of water as overburden load, and leaks in the GMB may allow brine water to reach and rehydrate the desiccated GCL. Hence, a key measure of performance is the saturated hydraulic conductivity of the rehydrated GCL.
Several aspects of site conditions, however, were not replicated, most importantly the possibility that heating and leakage might occur concurrently rather than successively and that GCLs are permeated with brine rather than distilled water. The implications of these limitations will be discussed later.
Column tests were first conducted for a standard composite liner design (GMB+GCL) using GCL_A (lower mass per unit area of bentonite). Next, two variables were tested for: effects of higher mass per unit area of bentonite (GCL_B) and effects of an air gap between the source of heat and the composite liner, by inserting one or two layers of geo-composites (see Fig. 2). The idea for an air gap was first suggested by Bouazza et al. [7] to reduce thermal load on the GCL, using the low thermal conductivity of air. However, the effectiveness of the solution was not evaluated.
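The expected reduction in thermal load can be rationalized with a simple one-dimensional steady-state conduction model in which the airgap acts as an additional thermal resistance in series. The sketch below is illustrative only: the layer thicknesses and thermal conductivities are hypothetical values, not calibrated to the measurements reported later in this paper.

```python
def interface_temperatures(layers, t_top, t_bottom):
    """Steady-state 1D conduction through layers in series.

    layers : list of (thickness_m, conductivity_W_per_mK), ordered from the hot
             (top) boundary downwards
    Returns the temperature at each layer interface, starting at t_top.
    """
    resistances = [t / k for t, k in layers]    # R_i = thickness / conductivity
    q = (t_top - t_bottom) / sum(resistances)    # heat flux (W/m^2)
    temps, t = [t_top], t_top
    for r in resistances:
        t -= q * r
        temps.append(t)
    return temps

# Hypothetical example: geocomposite airgap, GMB, GCL and sand subgrade
layers = [(0.010, 0.06), (0.002, 0.40), (0.008, 0.80), (0.590, 1.50)]
print(interface_temperatures(layers, t_top=78.0, t_bottom=20.0))
```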
In each test, the following experimental data sets were established in real-time (time series): temperature profile in the subsoil, water content profile in the subsoil, swelling/shrinkage of the bentonite. All column tests were replicated and only minor differences in behaviour were found between original and repeated tests.
Numerical Modelling
The goal of the numerical modelling component of the investigation was to assess whether a multiphase, thermo-hydro-mechanical theory within unsaturated critical state soil mechanics can reproduce experimental observations. The simulations were conducted with CODE-BRIGHT, in 2D axisymmetric mode, and its associated Barcelona Basic Model (BBM) [29]. Nonlinear elastic and elasto-plastic constitutive equations were used alternately. Solid displacements, liquid and gas pressures, and temperature were adopted as primary state variables. Key model assumptions included small strains and strain rates, pressure-and temperaturedependent constitutive parameters and water retention dependent on void ratio and temperature.
All constitutive parameters of the GCL and the subsoil (water retention curves; thermal and hydraulic conductivities as functions of degrees of saturation or suction; mechanical constitutive parameters etc.) were determined independently of the experimental soil column datasets. To ensure high-quality of experimental validation, and despite the large number of constitutive and boundary-condition parameters in the model, only one parameter (lateral thermal flux) was back fitted to column test experimental results (detailed below).
No-flux boundary conditions for liquid and gas were applied at all boundaries. An overburden load of 20 kPa was specified at the top and zero displacement at all other boundaries. The top and bottom boundary conditions replicated the isothermal hydration stage followed by the heating stage, as described earlier. Only one boundary-condition parameter used in the model was back-fitted to measured temperatures, as follows.
The soil columns were made of low-conductivity material and wrapped in insulation foil. However, it was not possible to completely prevent lateral thermal losses. Hence, a constant-flux thermal boundary condition was applied in the simulation at the lateral boundaries, and the value of the flux was determined by back-fitting temperature predictions to experimental measurements (a minimal sketch of this one-parameter fit is given below). The reader is referred to Ghavam-Nasiri, El-Zein, Airey et al. (in press) [30] for more details.

The figures show reasonable agreement between experimental and numerical results. The steady-state temperature (Fig. 3a) and water content (Fig. 3b) in the subsoil are well captured, but the transient responses less so. The temperature inaccuracy is likely due to the assumption of a constant lateral heat flux, when in fact the heat flux is likely to be time-dependent. Predictions of shrinkage of the bentonite in the GCL were much better when elasto-plastic equations were used (Fig. 3c). However, neither model predicted the slow onset of swelling during the hydration stage. Finally, in the elasto-plastic model, tensile net stresses developed about 10 days after heating started (Fig. 3d), hence qualitatively predicting the desiccation revealed by Fig. 3e.

Fig. 4 shows x-ray images of desiccated GCL_B specimens after the heating stage. GCL_B has a higher mass of bentonite per unit area than GCL_A. Both specimens have undergone significant dehydration and desiccation. The residual water contents after heating were slightly higher in GCL_B (8.1 and 9.2%) than in the GCL_A specimens (7.8 and 8.4%). In Fig. 4, a denser fissuring pattern is seen in the GCL_A specimens. This is confirmed by Table 3, which shows results from image analysis software quantifying the cracking networks, with a smaller proportion of cracks and a larger average crack width in the GCL_B specimens. Nevertheless, the results indicate that, under the conditions enacted in the column experiments, a higher mass per unit area of bentonite does not appear to reduce desiccation in a significant way.
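Because only a single scalar is free, the back-fit of the lateral flux can be done with any one-dimensional optimizer. The sketch below illustrates the idea with a toy analytical profile standing in for the CODE-BRIGHT forward run; the function run_column_model, the sensor depths and the synthetic "measurements" are all placeholders, not the study's model or data.

```python
# Sketch of back-fitting a single constant lateral thermal flux to measured
# temperatures.  A toy analytical profile stands in for the real forward THM
# simulation; depths, temperatures and flux bounds are assumed values.
import numpy as np
from scipy.optimize import minimize_scalar

depths = np.linspace(0.0, 0.6, 7)                 # thermocouple depths, m (assumed)

def run_column_model(lateral_flux, t_top=78.0, t_bot=20.0):
    """Toy stand-in for the forward simulation: a linear conductive profile,
    cooled in proportion to the lateral flux (purely illustrative)."""
    linear = t_top + (t_bot - t_top) * depths / depths[-1]
    return linear - 0.1 * lateral_flux * np.sin(np.pi * depths / depths[-1])

# Synthetic "measurements" generated with a known flux, to exercise the fit.
t_measured = run_column_model(lateral_flux=6.0)

def misfit(flux):
    return np.sum((run_column_model(flux) - t_measured) ** 2)

result = minimize_scalar(misfit, bounds=(0.0, 50.0), method="bounded")
print(f"back-fitted lateral flux ~ {result.x:.2f} (true value 6.0)")
```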
Effects of Air Gaps
Case 1 represents the control design with no air gaps. Cases 2 and 3 include one and two 10-mm thick geocomposites, respectively (left and right-hand sides of Fig. 2). The tests were done with GCL_B.
The presence of air gaps in Cases 2 and 3 lowered the temperature on top of the GCL specimens from 78±1 °C to 51.7 °C and 43.8 °C, respectively. Correspondingly, the average thermal gradient reduced from 95 °C/m for Case 1 to 52 and 39 °C/m for Cases 2 and 3, respectively. Fig. 5 shows x-ray images before and after rehydration of the specimens. Image analysis did not reveal any significant decrease in crack area proportions (~30%). However, the average crack widths of Cases 2 and 3 were 22~28% smaller than those of Case 1 (a sketch of these image metrics follows). Examining the post-rehydration x-ray images with enhanced contrast, there is evidence of more effective self-healing under lower thermal gradients. This is confirmed by the measurements of hydraulic conductivities for the three cases shown in Fig. 6. The figure shows that the steady-state hydraulic conductivity of the specimen from Case 1 is 2.5 times greater than that of the intact specimen (hydrated, unheated). On the other hand, the hydraulic conductivities of the Case 3 and intact specimens are virtually indistinguishable from each other.
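The crack area proportion and average crack width can be computed from a binarized x-ray image. The following is a minimal sketch under the assumption that cracks have already been segmented; the study used dedicated image-analysis software, whose exact algorithms are not reproduced here.

```python
# Sketch of the two crack metrics discussed above, computed from a binarized
# x-ray image (True = crack pixel).  Segmentation is assumed to be done
# already; the width estimate is a crude per-row average.
import numpy as np

def crack_metrics(binary_img, pixel_size_mm):
    """Return crack area proportion (%) and a rough mean crack width (mm)."""
    area_fraction = 100.0 * binary_img.mean()
    widths = []
    for row in binary_img:
        # count contiguous runs of crack pixels in this row
        edges = np.diff(np.concatenate(([0], row.astype(int), [0])))
        n_segments = np.count_nonzero(edges == 1)
        if n_segments:
            widths.append(row.sum() / n_segments * pixel_size_mm)
    mean_width = float(np.mean(widths)) if widths else 0.0
    return area_fraction, mean_width

# Tiny synthetic example: a single 4-pixel-wide vertical "crack"
img = np.zeros((100, 100), dtype=bool)
img[:, 40:44] = True
print(crack_metrics(img, pixel_size_mm=0.05))   # -> (4.0, 0.2)
```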
Discussion and Ongoing Research
The results presented in the previous section show that, using critical state soil mechanics, the thermal, hydraulic and mechanical behaviours of GMB-GCL composite lining systems, when subjected to high thermal gradients and low overburden pressure, can be predicted with reasonable accuracy. Non-linear elasticity predictions of GCL shrinkage upon dehydration were poor but much better results were obtained by an elasto-plastic model. However, neither framework was able to predict the transient deformation response of the GCL during the hydration phase (prior to heating), namely the delayed swelling of bentonite. There is evidence in the literature that vapour transfer plays an important role in GCL hydration [31], which may explain the lack of swelling in the early stages. Another possible line of enquiry is bentonite's complex porous structure in which water at the early stages of hydration may not have access to the interlayer space of clay particles where significant osmotic swelling takes place. Numerical investigations are currently underway to explore this question.
Two design approaches have been explored (a higher mass per unit area of bentonite and air gaps above the liner), neither of which was successful in preventing desiccation under high thermal gradients and low overburden pressure with sand as the subsoil. However, the bentonite in the GCL revealed a remarkable capability for self-healing. When a 20 mm air gap was introduced, the effect of desiccation on hydraulic conductivity to distilled water practically disappeared upon rehydration. Hence, design features that separate the thermal load from the top of the liner appear promising.
Three other lines of enquiry are possible. First, the composite liner may behave differently depending on the chronological order of desiccation and leakage from the GMB. In this paper, we assumed that thermal dehydration of bentonite occurs before leakage, and rehydration was conducted under isothermal conditions. The response may be different under different scenarios.
Second, rehydration and permeation tests were conducted with distilled water rather than brine or landfill leachate. The well-known chemical reactivity of sodium bentonite, especially its susceptibility to cation exchange, may lead to significant increases in hydraulic conductivity. We are currently measuring the hydraulic conductivities of the specimens with brine as a permeant. In addition, the effect of salt on GCL water retention curves is not well understood and requires further study.
Finally, amending the sandy subsoil to increase its water retention and therefore its ability to rehydrate the GCL may offer the most straightforward solution to the problem and is worth investigating. However, the impact of such modification on the speed and extent of initial hydration of GCL, prior to heat exposure, must be carefully considered before such a solution is adopted.
Research has been partly funded by Australian Research Council Discovery project DP170104192. The authors are grateful to professors R Kerry Rowe, David Airey and Malek Bouazza for their insights and roles in parts of the investigations referred to in this paper. | 4,139.8 | 2019-01-01T00:00:00.000 | [
"Geology"
] |
Polynomial upper bounds for the instability of the Nonlinear Schr\"odinger equation below the energy norm
We continue the study (initiated in \cite{ckstt:7}) of the orbital stability of the ground state cylinder for focussing non-linear Schr\"odinger equations in the $H^s(\R^n)$ norm for $1-\eps<s<1$, for small $\eps$. In the $L^2$-subcritical case we obtain a polynomial bound for the time required to move away from the ground state cylinder. If one is only in the $H^1$-subcritical case then we cannot show this, but for defocussing equations we obtain global well-posedness and polynomial growth of $H^s$ norms for $s$ sufficiently close to 1.
Introduction
We consider the Cauchy problem for the non-linear Schrödinger equation
$$ i u_t + \Delta u = F(u), \qquad u(x,0) = u_0(x), \tag{1} $$
where $u(x,t)$ is a complex-valued function on $\R^n \times \R$ for some $n \geq 1$, $u_0(x)$ lies in the Sobolev space $H^s(\R^n)$ for some $s \in \R$, and the non-linearity $F(u)$ is the power non-linearity $F(u) := \pm |u|^{p-1} u$ for some $p > 1$ and sign $\pm$. We refer to the $+$ sign as defocusing and the $-$ sign as focusing. The long-time behavior of this equation has been extensively studied in the energy class (see e.g. [34], [35], [21], [29], [4]); however, in this paper we shall be interested in regularities $1-\eps < s < 1$ slightly weaker than the energy class. When $p$ is an odd integer, $F$ is algebraic and there are a number of results addressing the long-time behavior in $H^s$ in this case ([32], [12], [15], [16], [17], [2], [4], [3]). One of the main purposes of this paper is to demonstrate that these techniques can be partially extended to the non-algebraic case.
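As a purely numerical aside (not part of the paper's analysis), equation (1) is easy to integrate on a periodic 1D grid with a split-step Fourier scheme; the sketch below uses the defocusing cubic case $F(u) = +|u|^2 u$ and an assumed Gaussian initial datum, simply to make the objects above concrete.

```python
# Purely illustrative: split-step Fourier integration of (1) on a periodic
# 1-D grid for the defocusing cubic nonlinearity F(u) = +|u|^2 u.
# Grid size, time step and initial datum are arbitrary choices.
import numpy as np

n_x, L, dt, n_steps = 512, 40.0, 1e-3, 2000
x = np.linspace(-L / 2, L / 2, n_x, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n_x, d=L / n_x)       # Fourier frequencies

u = np.exp(-x**2).astype(complex)                     # smooth initial datum u_0
half_kinetic = np.exp(-1j * k**2 * dt / 2)            # exact half-step of i u_t + u_xx = 0

for _ in range(n_steps):
    u = np.fft.ifft(half_kinetic * np.fft.fft(u))     # linear half step
    u *= np.exp(-1j * dt * np.abs(u)**2)              # nonlinear phase rotation
    u = np.fft.ifft(half_kinetic * np.fft.fft(u))     # linear half step

mass = np.sum(np.abs(u)**2) * (L / n_x)               # discrete L^2 norm squared
print("L^2 mass (conserved up to scheme error):", mass)
```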
We review the known local and global well-posedness theory for this equation. It is known (see e.g. [1]) that the Cauchy problem (1) is locally well-posed in $H^s$ for $s \geq \max(0, s_c)$, where $s_c := \frac{n}{2} - \frac{2}{p-1}$ is the scaling-critical regularity. In the sub-critical case $s > s_c$ the time of existence depends only on the $H^s$ norm of the initial data. We refer to the case $s_c < 1$ as the $H^1$-subcritical case, and the case $s_c < 0$ as the $L^2$-subcritical case. In particular, local well-posedness below $H^1$ is only known in the $H^1$-subcritical case.

[Acknowledgement footnote: T.T. is a Clay Prize Fellow and is supported in part by grants from the Packard Foundation.]

[Footnote 1: When $p$ is not an odd integer one also needs the constraint $\lfloor s \rfloor < p-1$ because of the limited regularity of $F$. We shall gloss over this technicality since we will primarily be concerned with the regime $0 < s < 1$.]
These local well-posedness results are known to be sharp in the focusing case (see [31]); for instance, one has blowup in arbitrarily small time when s < s c . In light of the recent work in [6] it is likely that these results are also sharp in the defocusing case (at least if one wants to have a fairly strong notion of well-posedness).
In the H 1 -subcritical case it is known [1] that the Cauchy problem (1) is globally well-posed in H s for all s ≥ 1; indeed, one also has scattering in the defocusing case [21], [29]. The global well-posedness is an easy consequence of the local theory, conservation of the L 2 norm and Hamiltonian combined with the Gagliardo-Nirenberg inequality. Similarly, in the L 2 -subcritical case s c < 0 one has global well-posedness in H s for all s ≥ 0.
This leaves open the question of the global well-posedness in H s in the intermediate regime 0 ≤ s c ≤ s < 1. When the initial data has small norm or is localized in space then global well-posedness is known (see e.g. [31]) but the general case is still open 2 .
For simplicity we restrict ourselves to the defocusing case. The cases when p is an odd integer have been extensively studied; we summarize the known results in Table 1.
Our first main result is to extend these results (footnote 3) to fractional $p$ in the regime $0 \leq s_c < 1$.
Theorem 1.1. Suppose that we are in the defocusing case with $s_c < 1$. Then the Cauchy problem (1) is globally well-posed in $H^s$ whenever $s > 1 - \eps(n,p)$, for some $\eps(n,p) > 0$. Furthermore, one has a polynomial (in $T$) growth bound on the $H^s$ norm for all times $T \in \R$.

[Footnote 2: In the $L^2$-critical case $s_c = 0$ one might expect global well-posedness for large $L^2$ data by combining $L^2$ conservation and the local well-posedness theory. However a subtlety arises because the $L^2$ norm cannot be scaled to be small, and indeed in the focusing case the $L^2$ mass can concentrate to a point singularity. In the defocusing case the large $L^2$ global well-posedness remains an important open problem (even in the radial case).]

[Footnote 3: Similar results have been obtained for the non-linear wave equation in [25]. However the argument in [25] cannot be extended to NLS because it relies on the gain in regularity inherent in the wave Strichartz estimates, which are not present for the Schrödinger Strichartz estimates. In these results the non-linearity is assumed to be defocusing.]
One can probably modify these arguments and exponents to handle the focusing case, especially in the L 2 -subcritical case s c ≤ 0, but we shall not do so here. In the L 2 -subcritical case one has global well-posedness for all s ≥ 0 thanks to L 2 norm conservation, but the question of polynomial growth of H s norms for s close to 0 remains open.
We have not explicitly calculated ε(n, p); the exponents given by our arguments are significantly weaker than those in the results previously mentioned.
Our approach is based on the "I-method" in [24], [12], [13], [14], [15], [16], [17] (see also [23]). The idea is to replace the conserved quantity H(u), which is no longer available when s < 1, with an "almost conserved" variant H(Iu), where I is a smoothing operator of order 1 − s which behaves like the identity for low frequencies (the exact definition of "low frequencies" will depend ultimately on the time T ). Since p is not necessarily an odd integer, we cannot use the multilinear calculus (or X s,b spaces) in previous papers, and must rely instead on more rudimentary tools such as Taylor expansion (and Strichartz spaces). In particular there does not appear to be an easy way to improve the exponent ε(n, p) by adding correction terms to H(Iu) (cf. [13], [15], [17]). Also, we will avoid the use of L 2 conservation law as much as possible as this norm can be critical or supercritical.
As a partial substitute we shall use the subcritical L p+1 norm which we can control from (2) (cf. [24]).
In the cases when p is an odd power, smoothing estimates such as the bilinear Strichartz estimate of Bourgain (see e.g. [4]) are very useful for these types of results. However we will not use any sort of smoothing estimates in our analysis 4 , and rely purely on Strichartz estimates instead. (One of the advantages of the I-method is that one can use commutator estimates involving the operator I as a substitute for smoothing estimates even when the nonlinearity has no smoothing properties).
Our second result concerns the orbital stability of ground states in the focusing case. For this result we shall restrict ourselves 5 to the L 2 -subcritical case s c < 0.
When $s_c < 0$ there exists a unique radial positive Schwartz function $Q(x)$ which solves the equation $\Delta Q - Q = F(Q)$ on $\R^n$ (see [8], [27], [26]). We refer to $Q$ as the canonical ground state at energy 1 (footnote 6). The Cauchy problem (1) with initial data $u_0 = Q$ then has the explicit solution $u(t) = e^{it} Q$. More generally, for any $x_0 \in \R^n$ and $e^{i\theta} \in S^1$, the Cauchy problem with initial data $u_0(x) = e^{i\theta} Q(x - x_0)$ has the explicit solution $e^{i(\theta+t)} Q(x - x_0)$. If we thus define the ground state cylinder $\Sigma$ as the collection of all such translated and rotated ground states $e^{i\theta} Q(x - x_0)$, we see that the non-linear flow (1) preserves $\Sigma$. We now investigate how the non-linear flow (1) behaves on neighborhoods of $\Sigma$.
In [34], [35] Weinstein showed that in the L 2 -subcritical case s c < 0 and when n = 1, 3, the ground state cylinder Σ was H 1 -stable. More precisely, he showed an estimate of the form for all solutions u 0 to (1) and all times t ∈ R. In other words, solutions which started close a ground state in H 1 stayed close to a ground state for all time (though the nearby ground state may vary in time).
To prove (4), Weinstein employed the Lyapunov functional which is well-defined for all u ∈ H 1 . Since this quantity is a combination of the Hamiltonian (2) and the L 2 norm, it is clearly an invariant of the flow (1). The ground states in Σ then turn out to minimize L, so that L(u) ≥ L(Q) for all u ∈ H 1 . More precisely, we have the inequality see [34], [35]. The stability estimate (4) then follows easily from (6) and the conservation of L.
Weinstein's proof of (6) requires the uniqueness of the ground state $Q$, which at the time was only proven for $n = 1, 3$ [8]. However, this uniqueness result has since been extended to all dimensions $n$ [26] (with an earlier partial result in [27]). Thus (6) (and hence (4)) holds for all dimensions $n$ (always assuming that we are in the $L^2$-subcritical case $s_c < 0$, of course).

[Footnote 5: In the $L^2$-critical or $L^2$-supercritical cases the ground state is known to be unstable; indeed one can have blowup in finite time even for data arbitrarily close to a ground state in smooth norms. See [28].]

[Footnote 6: Other energies $E$ are possible but can be easily obtained from the energy-1 state by scaling.]
In [18] the H 1 orbital stability result was partially extended to regularities H s , 0 < s < 1, in the special case n = 1, p = 3 (which among other things is completely integrable). The second main result of this paper is a partial extension of the results of [18] to arbitrary L 2 -subcritical NLS: In other words, if the initial data stays close to the ground state cylinder in H s norm, then the solution stays inside a ball of bounded radius in H s for a fairly long period of time. (After this time, one can use Theorem 1.1 to give polynomial bounds on the growth of the H s norm). Note that if one were to try to naively use perturbation theory to prove this theorem, one would only be able to keep u(t) inside this ball for times t = O(log(1/σ)) (because after each time interval of length ∼ 1, the distance to the ground state cylinder might conceivably double).
Of course, when s = 0 or s = 1 one can use the conservation laws to obtain Theorem 1.2 for all time t. However there does not seem to be any easy way to interpolate these endpoint results to cover the 0 < s < 1 case, since the flow (1) is neither linear nor complex-analytic.
The proof of Theorem 1.2 also proceeds via the "I-method". The main idea is to show that the modified Lyapunov functional L(Iu) is "almost conserved". It should be possible to refine this method by approximating u(t) carefully by a ground state as in [18] and obtain a more precise estimate of the form for all time t. (In [18] this is achieved in the model case n = 1, p = 3). However there seem to be some technical difficulties in making this approach viable, mainly due to the lack of regularity 7 of the non-linearity F , and so we will not pursue this matter.
It should be possible to remove the constraint $s > 1 - \eps(n,p)$ and prove Theorem 1.2 for all $s > 0$ (as in [18]). This may however require some additional assumptions on $p$ (e.g. one may need $p < 1 + \frac{2}{n}$), as it becomes difficult to control the modified Hamiltonian for $s$ close to zero otherwise.
The authors thank Monica Visan for some helpful corrections.

[Footnote 7: The specific obstacle is as follows. In our current argument we must estimate commutator expressions such as $IF(u) - F(Iu)$. To utilize the ground state cylinder as in [18] one would also consider expressions such as $F(Q+w) - F(Q)$. To use both simultaneously one needs to estimate a double difference such as $I(F(Q+w) - F(Q)) - (F(Q+Iw) - F(Q))$. However, when $p < 2$, $F$ is not twice differentiable, and so correct estimation of the double difference seems very subtle.]
Preliminaries: Notation
Throughout the paper $n, p, s$ are considered to be fixed. We will always have the implicit assumption "$1 - \eps(n,p) < s < 1$ for some sufficiently small $\eps(n,p) > 0$" in our arguments. We let $A \lesssim B$ or $A = O(B)$ denote the estimate $A \leq CB$, where $C$ is a positive constant which depends only on $n, p, s$.
We write $F'$ for the vector $(F_z, F_{\bar z})$, and adopt the associated notation for its pairing with gradients. In particular we observe the chain rule $\nabla F(u) = \nabla u \cdot F'(u)$. We also observe the useful Hölder continuity estimate $|F'(z) - F'(w)| \lesssim |z - w|^{\theta}$ for all complex $z, w$, where $\theta := \min(p-1, 1)$ is a number in the interval $(0, 1]$. In a similar spirit we record a companion estimate for $F$ itself, valid for all complex $z, w$.
We define the spatial Fourier transform $\hat u(\xi)$ on $\R^n$ in the usual way. For $N > 1$, we define the Fourier multiplier $I = I_N$ as the operator with symbol $m(\xi/N)$, where $m$ is a smooth radial function which equals $1$ for $|\xi| \leq 1$ and equals $|\xi|^{s-1}$ for $|\xi| \geq 2$. Thus $I_N$ is an operator which behaves like the Identity for low frequencies $|\xi| \leq N$, and behaves like a (normalized) Integration operator of order $1-s$ for high frequencies $|\xi| \gtrsim N$. In particular, $I$ maps $H^s$ to $H^1$ (but with a large operator norm, roughly $N^{1-s}$). This operator will be crucial in allowing us to access the $H^1$ theory at the regularity of $H^s$. We make the useful observation that $I$ has a bounded convolution kernel and is therefore bounded on every translation-invariant Banach space.
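For readers who like to see the multiplier act on concrete data, here is a discrete one-dimensional illustration of $I_N$; the smooth transition region of $m$ is replaced by a crude piecewise rule, and the grid, datum and parameters are arbitrary choices made only for the demonstration.

```python
# Discrete 1-D illustration of the smoothing operator I_N: a Fourier multiplier
# equal to 1 for |xi| <= N and decaying like (|xi|/N)^(s-1) for larger |xi|.
# The smooth transition of the paper's m is replaced by a crude piecewise rule.
import numpy as np

def apply_I(u, dx, N, s):
    xi = 2 * np.pi * np.fft.fftfreq(u.size, d=dx)       # discrete frequencies
    m = np.ones_like(xi)
    high = np.abs(xi) > N
    m[high] = (np.abs(xi[high]) / N) ** (s - 1)          # s < 1, so this decays
    return np.fft.ifft(m * np.fft.fft(u))

x = np.linspace(-10.0, 10.0, 1024, endpoint=False)
u = np.exp(-x**2) * np.exp(1j * 50 * x)                  # bump at high frequency ~50
Iu = apply_I(u, dx=x[1] - x[0], N=10.0, s=0.9)
print(np.abs(Iu).max() / np.abs(u).max())                # high frequencies are damped
```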
We also define the fractional differentiation operators $|\nabla|^\alpha$ for real $\alpha$ in the usual way, and then define the inhomogeneous Sobolev spaces $H^s$ and homogeneous Sobolev spaces $\dot H^s$. We shall frequently use the fact (from elementary Littlewood-Paley theory, see e.g. [30]) that if $u$ has Fourier transform supported on a set $|\xi| \lesssim M$, then one can freely replace positive powers of derivatives $\nabla$ by the corresponding powers of $M$ in $L^p$ norms for $1 < p < \infty$. In particular, for any $u$, $u - Iu$ has Fourier support on the region $|\xi| \geq N$, and hence obeys the corresponding bound for any $\eps > 0$. This fact (and others similar to it) will be key in extracting crucial negative powers of $N$ in our estimates.
Preliminaries: Strichartz spaces
In this section we introduce the H 1 Strichartz spaces we will use for the semilinear equation (1), and derive the necessary nonlinear estimates for our analysis.
In particular, we obtain nonlinear commutator estimates involving the fractional nonlinearity F (u), which is the main new technical advance in this paper.
We will always assume we are in the $H^1$-subcritical case $s_c < 1$. Let $t_0$ be a time and $0 < \delta \leq 1$. In what follows we restrict spacetime to the slab $\R^n \times [t_0, t_0 + \delta]$. We define the spacetime norms $L^q_t L^r_x$ by
$$ \|u\|_{L^q_t L^r_x} := \Big( \int \|u(t)\|_{L^r_x}^q \, dt \Big)^{1/q}. $$
We shall often abbreviate $\|u\|_{L^q_t L^r_x}$ as $\|u\|_{q,r}$.
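As a concrete aside, the mixed norm just defined is straightforward to evaluate on sampled data; the following small helper (with arbitrary toy data) is only meant to make the definition tangible and is not part of the paper.

```python
# Discrete version of the mixed norm ||u||_{L^q_t L^r_x}: take the L^r norm in
# x for each time slice, then the L^q norm in t of the result.
import numpy as np

def mixed_norm(u, dx, dt, q, r):
    """u: 2-D array indexed as u[time, space]."""
    spatial = (np.sum(np.abs(u) ** r, axis=1) * dx) ** (1.0 / r)  # ||u(t)||_{L^r_x}
    return (np.sum(spatial ** q) * dt) ** (1.0 / q)

u = np.random.randn(64, 128)               # toy space-time samples
print(mixed_norm(u, dx=0.1, dt=0.05, q=4, r=6))
```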
We shall need a space $L^{q_0}_t L^{r_0}_x$ to hold the solution $u$, another space $L^{q_1}_t L^{r_1}_x$ to hold the derivative $\nabla u$ (or $I\nabla u$), and a dual space $L^{q'}_t L^{r'}_x$ to hold the derivative nonlinearity $\nabla F(u)$ (or $I\nabla F(u)$). (Here of course $q'$ denotes the exponent such that $1/q + 1/q' = 1$.) We also need a space $L^{q_0/(p-1)}_t L^{r_1/(p-1)}_x$ to hold $F'(u)$. To choose the four exponents $q_0, r_0, q_1, r_1$ we use the following lemma (cf. [1]):

Lemma 3.1. Let $p$ be as above. There exist exponents $2 < q_0, r_0, q_1, r_1 < \infty$ and $0 < \beta < 1$ obeying the relations (9)-(12).

Proof. We first choose $\beta$ appropriately; such a $\beta$ exists from (9). Next, we choose $2 < q_0 < \infty$ and $p+1 < r_0 < \infty$ so that (11) holds; such a pair $q_0, r_0$ exists. Next, we choose $r_1$ so that (12) holds. Finally we choose $q_1$ so that (10) holds.
Henceforth $q_0, r_0, q_1, r_1$ are assumed to be chosen as above. We define the spacetime norm $X$ accordingly. This $X$ norm looks rather artificial, but it is easy to estimate for solutions of (1), and it can be used to control various spacetime norms of $u$. Indeed, we recall that the hypotheses (10), (11) imply the (scale-invariant) Strichartz estimate (see e.g. [22]).

Lemma 3.2. We have $\|\nabla u\|_{q_1, r_1} + \| |\nabla|^\beta u \|_{q_0, r_0} \lesssim \|u\|_X$.
In future applications we shall need to control $\|u\|_{q_0, r_0}$ and $\|u\|_{2p, 2p}$ in addition to the norms already controlled by Lemma 3.2. These norms cannot be controlled purely by the $X$ norm; however, from the conservation of the Hamiltonian we will also be able to control the $L^\infty_t L^{p+1}_x$ norm, and by combining these estimates we shall be able to estimate everything we need. More precisely, we have

Lemma 3.3. Suppose that we are working on the slab $\R^n \times [t_0, t_0 + \delta]$ and that $u$ is a function on this slab obeying the estimates $\|u\|_X + \|u\|_{\infty, p+1} \lesssim 1$.
Proof. We introduce the frequency cutoff $\lambda := \|u\|_X^{\eps}$ for some $\eps > 0$ to be determined later, and smoothly divide $u = u_{\mathrm{low}} + u_{\mathrm{high}}$, where $u_{\mathrm{low}}$ has Fourier support in the region $|\xi| \lesssim \lambda$ and $u_{\mathrm{high}}$ has Fourier support in the region $|\xi| \gtrsim \lambda$.
Consider the contribution of $u_{\mathrm{high}}$. Then $\nabla$ is bounded by $\lambda^{-1}|\nabla|$, and so (16) follows from Lemma 3.2. To prove (17), we let $r$ be the exponent such that $\frac{2}{2p} + \frac{n}{r} = \frac{n}{2}$.
The claim (17) then follows from Sobolev embedding and the high frequency assumption $|\xi| \gtrsim \lambda$.
Now consider the contribution of $u_{\mathrm{low}}$. Then the $\nabla$ can be discarded. Since $r_0$ and $2p$ are both strictly larger than $p+1$, we see from Bernstein's inequality (or Sobolev embedding) that the desired bound holds up to a factor of $\lambda^{c}$ for some $c > 0$. But by hypothesis we have $\|u_{\mathrm{low}}\|_{\infty, p+1} \lesssim 1$. The claim then follows from Hölder in time.
We now use these estimates to prove some non-linear estimates involving F and I. We begin with a bound on F ′ (Iu).
Lemma 3.4. Suppose that
In particular we have the non-gaining version of the estimate. To get the additional $\eps$ of regularity we shall use Hölder norms. Since Sobolev norms control Hölder norms (see e.g. [30]), we have a bound on the translation differences of $u$, where $u_y$ is the translation of $u$ in space by $y$. From (7) and Hölder's inequality we thus have a corresponding bound, where $\theta := \min(p-1, 1)$. Using (18) and the observation that $F'(u_y) = F'(u)_y$, we thus have a Hölder bound on $F'(u)$. This yields the desired Sobolev regularity bound for any $0 < \eps < \beta\theta$ (see [30]).
From this Lemma one can already recover the proof (from [1]) of $\dot H^1$ local well-posedness of the NLS equation (1). Indeed, if $u$ solves (1), then from (15) and the chain rule one obtains an estimate which, by Hölder, (12), (13), and Lemma 3.4 (discarding the epsilon gain of regularity), yields a bound which (together with a similar inequality for differences of iterates of (1)) allows one to obtain well-posedness if the $\dot H^1$ norm of the initial data is sufficiently small. (Large data can then be handled by a scaling argument.)
By a variant of the argument just described, we can obtain bounds on I∇F (u) and the related commutator expression ∇(IF (u) − F (Iu)): Lemma 3.5. Suppose that we are working on the slab R n × [t 0 , t 0 + δ] and that u is a function on this slab obeying the estimates Then for some c > 0. Furthermore, we have for some α > 0. (The quantities c, α depend of course on n, p, s, ε).
One can think of (20) as a type of fractional chain rule for the differentiation operator I∇. The additional gain of N −α in (21) arises from the spare epsilon of regularity in Lemma 3.4 and the fact that I is the identity for frequencies N ; this gain is crucial to all the results in this paper.
Proof From Lemma 3.2 and Lemma 3.4 we have
and To utilize (22), (23) we use the following bilinear estimates.
Lemma 3.6. If $s$ is sufficiently close to 1 (depending on $\eps$), then we have two bilinear estimates, for some $\alpha > 0$ depending on $s$ and $\eps$, valid for any $f, g$ on the slab $\R^n \times [t_0, t_0 + \delta]$.
Proof. From (13) and Hölder's inequality in time it will suffice to prove the corresponding spatial estimates (24) and (25). From (12) and Hölder's inequality we have a bound by $\|If\|_{r_1} \|g\|_{r_0/(p-1)}$, so (24) follows from (25).
It remains to prove (25). By applying a Littlewood-Paley decomposition to $g$, and lowering $\eps$ if necessary, we may assume that $\hat g$ is supported in the region $|\xi| \sim M$ for some $M \geq 1$.
Fix $M$, and suppose that $\hat f$ is supported on the region $|\xi| \lesssim M$. If $M \ll N$ then the left-hand side vanishes (since $I$ is then the identity on both $f$ and $fg$), so we may assume $M \gtrsim N$. Then by (12) and Hölder's inequality (discarding all the $I$s) we obtain a bound which is acceptable if $s$ is close to 1 and $\alpha$ is less than $1 - s$.
It remains to consider the case when $\hat f$ is supported on the region $|\xi| \gg M$. By dyadic decomposition we may assume that $\hat f$ is supported on the region $|\xi| \sim 2^k M$ for some $k \gg 1$, as long as we get some exponential decay in $k$ in our estimate.
Fix $k$. We compute the Fourier transform of $I(fg) - (If)g$. From our Fourier support assumptions we may assume $\xi_1 \sim 2^k M$ and $\xi_2 \sim M$. We may also assume that $2^k M \gtrsim N$, since the integrand vanishes otherwise. From the mean-value theorem we obtain a bound on $m(\xi_1 + \xi_2) - m(\xi_1)$; from this, combined with similar bounds on derivatives of $m(\xi_1 + \xi_2) - m(\xi_1)$, we obtain the required estimate by standard paraproduct estimates (see e.g. [5]). Since $2^k M \gtrsim N$, the claim (25) follows for $\alpha$ sufficiently small (note that we have exponential decay in $k$, so we can safely sum in $k$).
Since $I\nabla F(u) = I(\nabla u \cdot F'(u))$, we see that (20) follows from (22), (23) and the first part of Lemma 3.6. Now we prove (21). By the chain rule we decompose the expression into the two terms (26) and (27). The contribution of (26) is acceptable from (22), (23), and the second part of Lemma 3.6, so we turn to (27). From (19) and Lemma 3.3 we have $\||\nabla|^\beta u\|_{q_0, r_0} \lesssim 1$, which in particular implies the bound we need. The claim then follows (if $\alpha$ is sufficiently small) from (7) and Hölder's inequality.
Proof of Theorem 1.1
We now prove Theorem 1.1. As in the earlier paper [18] in this series, we break the argument up into a standard series of steps.
Step 0. Preliminaries; introduction of the modified energy.
It suffices to show the polynomial growth bound (3), since the global well-posedness then follows from the local well-posedness theory in [1]. By another application of the local well-posedness and regularity theory and standard limiting arguments, it suffices to prove (3) for global smooth solutions u which are rapidly decreasing in space.
Fix $u, s, T$. We shall allow the implicit constant in $A \lesssim B$ to depend on the quantity $\|u_0\|_{H^s}$. By time reversal symmetry we may take $T > 0$. We will in fact assume $T \gtrsim 1$, since the case $T \ll 1$ follows from the local well-posedness theory.
Let I = I N be as in Section 2.
We claim a bound on the Hamiltonian of the rescaled, smoothed initial data. For the kinetic energy component $\|I u_0^{\lambda}\|_{\dot H^1}^2$ of the Hamiltonian, this follows from a direct computation. For the potential energy component $\|I u_0^{\lambda}\|_{p+1}^{p+1}$, we use Sobolev embedding, where the exponent $s_c < s' < 1$ is determined from the Sobolev embedding theorem as $s' := \frac{n}{2} - \frac{n}{p+1}$.
If s is sufficiently close to 1, then we have and so this bound is acceptable.
We now choose N and λ so that In other words, we choose the parameters so that I(u 0 ) λ has small Hamiltonian.
In Steps 2-4 we shall prove the following almost conservation law on $H(Iu)$:

Lemma 4.1. Suppose that $u$ is a smooth, rapidly decreasing solution to (1), and that $H(Iu(t_0)) \lesssim 1$ for some time $t_0 \geq 0$. Then we have $H(Iu(t)) = H(Iu(t_0)) + O(N^{-\alpha})$ for all $t_0 \leq t \leq t_0 + \delta$, where $\alpha = \alpha(n,p) > 0$ is a quantity depending only on $n$ and $p$, and $\delta > 0$ depends only on $n$, $p$, and the bound on $H(Iu(t_0))$.
Let us assume Lemma 4.1 for the moment and deduce (3).
We apply the lemma to $u^\lambda$. Iterating the lemma about $N^\alpha$ times we obtain a uniform bound on $H(Iu^\lambda(t))$. Since we are in the defocusing case, this in turn bounds $\|Iu^\lambda(t)\|_{\dot H^1}$. On the other hand, by scaling the $L^2$ conservation law we obtain a bound on $\|u^\lambda(t)\|_2$. Combining these two estimates using Plancherel we obtain $\|u^\lambda(t)\|_{\dot H^s} \lesssim \lambda^{-(1-s)s_c}$ for all $0 \leq t \ll N^\alpha$.
Combining this with the L 2 conservation law we obtain If s is sufficiently close to 1, we can choose N, λ ≫ 1 obeying (29) such that The claim (3) then follows by unraveling the exponents.
It only remains to prove Lemma 4.1. This will be done in the next three steps.
Step 2. Control u at time t 0 .
Let $u$ be as in Lemma 4.1. The hypothesis $H(Iu(t_0)) \lesssim 1$ immediately implies the bounds on $u(t_0)$ that we shall need.

Step 3. Control u on the time interval $[t_0, t_0 + \delta]$.
We will make the a priori assumption that $\sup_{t_0 \leq t \leq t_0 + \delta} H(Iu(t)) \leq C$ for some large constant $C$; this assumption can then be removed by the usual limiting arguments. In particular we obtain corresponding bounds on $Iu$, since we are in the defocusing case. Here and in the rest of this section, we adopt the convention that all spacetime norms are over the region $\R^n \times [t_0, t_0 + \delta]$.
The next step is to prove the spacetime estimate (33), where $X$ is the space defined in Section 3. By another continuity argument we may make the a priori assumption that $\|Iu\|_X < C$ for some large constant $C$. A direct computation then shows that (33) follows from (31) and Lemma 3.5, if $\delta$ is chosen sufficiently small.
From (33) and (32) we see that (19) holds, so that Lemma 3.5 is now available.
Step 4. Control the increment of the modified energy.
It remains to deduce (30) from (33) and (32). By the fundamental theorem of calculus it suffices to bound the time derivative of $H(Iu)$. From an integration by parts we have
$$ \partial_t H(Iu) = \operatorname{Re} \int \overline{Iu_t}\, \big( -\Delta Iu + F(Iu) \big) \, dx. $$
Expanding $Iu_t$ and integrating by parts, it thus suffices to prove the estimates (34) and (35). We first prove (34). From (33) and Lemma 3.2 we have $\|I\nabla u\|_{q_1, r_1} \lesssim 1$, and the claim then follows from Hölder's inequality and Lemma 3.5.
Now we prove (35). By Cauchy-Schwarz we may estimate the left-hand side as If s is sufficiently close to 1 we have This concludes the proof of (30). The proof of Theorem 1.1 is thus complete.
Proof of Theorem 1.2
We now begin the proof of Theorem 1.2. The idea is to modify the previous argument to use the Lyapunov functional L instead of the Hamiltonian H.
Let σ, u 0 , u, p be as in Theorem 1.2. Again, we use time reversal symmetry to restrict ourselves to the case t > 0.
From the global well-posedness theory we know that $u(t)$ is in $H^s$ globally in time; by limiting arguments we may also assume a priori that $u(t)$ is smooth and rapidly decreasing. From $L^2$ norm conservation we have $\|u(t)\|_2 = \|u_0\|_2$ for all $t$. In particular, since $u_0$ is close in $H^s$ to a ground state, we have $\|u(t)\|_2 = \|u_0\|_2 \lesssim 1$.
We can now run the argument in the proof of Lemma 4.1 to prove that H(Iu(t)) = H(Iu(t 0 )) + O(N −α ); admittedly we are in the focusing case rather than the defocusing case, but an inspection of the argument shows that this does not matter thanks to the bounds (39).
By iterating (38) we see that L(Iu(t)) is bounded for all 0 ≤ t ≪ N α . By applying | 7,142.8 | 2002-12-01T00:00:00.000 | [
"Mathematics"
] |
New Presentation of the Acupuncture Law of Five Elements Considering the Shape of the Human NEMF
The law of five elements in ancient acupuncture reflects the order in which the organs in the body become active. This needs to be taken into consideration when acupuncture treatments are done. This article discusses the Quantum Computer in the subconscious, which works with the waves of our torus-shaped NEMF and, through the waves of the NEMF, rules and regulates all the organs in the body. Based on this, the article offers a new presentation of the ancient acupuncture law of five elements, one that considers the shape of the human nonlinear electromagnetic field (NEMF). The torus shape of the human NEMF and its dynamic were presented as two intersecting pyramids, one with top up and the other with top down, in dynamic equilibrium. The upside-down Yang pyramid represents the five hollow (Yang) organs, which are more active (Yang = active). The upright pyramid represents the five solid (Yin) organs, which are less active (Yin = passive). The coupled organs with strong functional dependence then lie on the vertical lines connecting the bases of the two pyramids and on the vertical line connecting the two tops.
consciously aware of its existence. Our NEMF has the shape of a torus (Figure 1) [6]. The six alternating vortices and anti-vortices of this NEMF lie along the backbone of our body, which is the axis of spinning of the torus. These six alternating vortices and anti-vortices rule the six endocrine glands, and through them the NEMF rules and regulates everything in the body. The torus shape of our NEMF and its dynamic equilibrium are what give us the opportunity to interpret in a new way the classical acupuncture law of five elements, which reflects the order in which our organs become active.
The ancient law of five elements
A specific acupuncture meridian represents each organ on the surface of the skin. Ancient acupuncture [7] considers two types of organs: hollow organs and solid organs.
a. Hollow organs are: stomach, bladder, gall bladder, large intestines, and small intestines. Being hollow, these organs are more active, and since "Yang" means "active", the hollow organs are Yang organs. They are represented on the surface of the skin by Yang meridians, which run on the back of the body and on the outer (lateral) sides of the arms and legs.
b. Being solid, the solid organs are less active, and since "Yin" means "passive", they are Yin organs.
The energy Chi runs in the body in the order in which the organs are listed.
a. From midnight to dawn the Liver is most active.
b. From dawn to noon the Heart is most active.
c. From noon to evening the lungs are most active.
d. From evening to midnight the kidneys are most active.
Usually, in the ancient books [7], a pentagram was used to represent the cycle of organ activity. However, since the ancient books state that the spleen is in the center, some newer books use a three-dimensional octahedral form, which places the Earth (SP-ST) below and the Heaven (GV-CV) above, with the four couples of organs on the horizontal plane in between.
The new interpretation of the law of five elements
The torus shape of our NEMF and its dynamic can be represented as two intersecting pyramids, one with top up and the other with top down, in dynamic equilibrium. In the ancient texts [9], the pyramid with top down is the Light pyramid. Since "Light" is "Yang", the pyramid with top down is the Yang pyramid. In the ancient texts [8], the pyramid with top up is the Dark (Malhut) pyramid. Since "Darkness" is "Yin", the pyramid with top up is the Yin pyramid.
Conclusion
This article offers a new interpretation of the acupuncture law of five elements, one that considers the torus shape of the human NEMF, through which acupuncture cures. Acupuncture cures by correcting the regulating mechanisms, which rule and regulate all the organs in the body through the waves of our NEMF. Our Quantum Computer, which works with the waves of our NEMF, is located in the subconscious, and we have no conscious awareness that it exists. However, from the subconscious this Quantum Computer rules and regulates the function of all organs. Considering the torus shape of our NEMF, we have offered a new presentation of the ancient law of five elements used in acupuncture practice.
"Physics"
] |
Lysyl hydroxylase 2 mediated collagen post-translational modifications and functional outcomes
Lysyl hydroxylase 2 (LH2) is a member of the LH family that catalyzes the hydroxylation of lysine (Lys) residues on collagen, and this particular isozyme has been implicated in various diseases. While its function as a telopeptidyl LH is generally accepted, several fundamental questions remain unanswered: 1. Does LH2 catalyze the hydroxylation of all telopeptidyl Lys residues of collagen? 2. Is LH2 involved in helical Lys hydroxylation? 3. What are the functional consequences when LH2 is completely absent? To answer these questions, we generated LH2-null MC3T3 cells (LH2KO) and extensively characterized the type I collagen phenotypes in comparison with controls. Cross-link analysis demonstrated that the hydroxylysine-aldehyde (Hyl ald )-derived cross-links were completely absent from LH2KO collagen, with concomitant increases in the Lys ald -derived cross-links. Mass spectrometric analysis revealed that, in LH2KO type I collagen, telopeptidyl Lys hydroxylation was completely abolished at all sites, while helical Lys hydroxylation was slightly diminished in a site-specific manner. Moreover, di-glycosylated Hyl was diminished at the expense of mono-glycosylated Hyl. LH2KO collagen was highly soluble and digestible, fibril diameters were diminished, and mineralization was impaired when compared to controls. Together, these data underscore the critical role of LH2-catalyzed collagen modifications in collagen stability, organization and mineralization in MC3T3 cells.
Other modifying enzymes and associated proteins. We then analyzed the protein levels of LH1 and LH3 in the KO clones by Western blot analysis (Fig. 2, Supplementary Fig. S3). The results showed that both LH1 and LH3 were comparable to controls (p > 0.05), though the former tended to be slightly lower in the KO clones (Fig. 2). The collagen galactosyltransferase, GLT25D1, was significantly lower in the KO clones when compared to controls (Fig. 2). The reason for this is unclear, but the reduced level of GLT25D1 in KO could be partially compensated by unknown mechanisms, since the total levels of G- + GG-Hyl in KO collagen were only slightly lower (< 10%) than those of controls at all glycosylation sites analyzed (see below). The LH2-specific chaperone FK506-binding protein 65 (FKBP65) 10 and an additional potential binding partner, cyclophilin B (CypB) 33 , were slightly but significantly lower (~70% of controls) or at a similar level (~90%), respectively, in KO clones when compared to controls (Fig. 2). Other LH2-associated proteins, heat shock protein 47 (Hsp47) and immunoglobulin heavy-chain-binding protein (Bip) 34 , were also significantly lower in KO than in controls (Fig. 2).
All these measurements were conducted using cells cultured with the standard medium, thus, further analyses using those with the differentiation and mineralization media (see below) are necessary to obtain more comprehensive information.
Collagen type. We first examined collagen types by mass spectrometric analysis 35 . The data revealed that type I collagen is by far the predominant collagen type, with a small amount of type III, in all of the culture samples, which is consistent with our previous report 36 . The percentages of type I, calculated as I/(I + III) × 100, were all > 96%, and the difference between MC and KOs was within a ~2% range.
Lys hydroxylation determined by high performance liquid chromatography (HPLC). In KO clones, levels of Lys hydroxylation in collagen were slightly but significantly decreased compared with those from MC and EV (Table 2). We then analyzed Lys modifications at specific molecular loci in type I collagen (see below).
Lys modifications at specific molecular loci in type I collagen. Relative abundance of unmodified Lys and its modified forms (Hyl, G-Hyl and GG-Hyl) at specific molecular sites of type I collagen are summarized in Table 3. Based on these values, the extent of Lys hydroxylation (%) was calculated as [Hyl/(Lys + Hyl) × 100], where Hyl is the sum of the non-glycosylated, G- and GG-Hyl forms. A small sketch of this bookkeeping is given below.
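The following is a minimal sketch of the site-specific percentage calculations just described; the input abundances are placeholders, not values from Table 3.

```python
# Sketch of the site-specific calculations described above: percent Lys
# hydroxylation, Hyl/(Lys + Hyl) x 100 with Hyl summing the non-, G- and
# GG-glycosylated forms, plus the split of glycosylated Hyl into G and GG.
def site_stats(lys, hyl, g_hyl, gg_hyl):
    hyl_all = hyl + g_hyl + gg_hyl
    pct_hydroxylation = 100.0 * hyl_all / (lys + hyl_all)
    glyco = g_hyl + gg_hyl
    pct_gg_of_glyco = 100.0 * gg_hyl / glyco if glyco else 0.0
    return pct_hydroxylation, pct_gg_of_glyco

# hypothetical site: 2% Lys, 10% free Hyl, 30% G-Hyl, 58% GG-Hyl
print(site_stats(2.0, 10.0, 30.0, 58.0))   # -> (98.0, ~65.9)
```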
Lys hydroxylation in the telopeptides. None of the telopeptidyl Hyl is glycosylated (Table 3). Lys hydroxylation in the telopeptides of type I collagen, i.e. N-telo (α1 Lys-9 N and α2 Lys-5 N ) and C-telo (α1 Lys-16 C ) (note: α2 C-telo lacks Lys), is shown in Fig. 3a. The values of MC and EV were essentially identical with no statistical difference, i.e. ~ 55.4% at α1 Lys-9 N , ~ 22.7% at α2 Lys-5 N and ~ 56.8% at α1 Lys-16 C (Fig. 3a). In the KO type I collagen, however, none of the Lys residues was hydroxylated at any of these sites (Table 3, Fig. 3a). These results unequivocally demonstrate that LH2 is responsible for Lys hydroxylation in all telopeptides of type I collagen and that other LHs cannot compensate for this function.
Lys modifications in the helical domain. We then analyzed Lys modifications in the helical domain of type I collagen by using tryptic digests of collagen 8,33,37 (Table 3, Fig. 3b). In the helical domain, modified Lys residues were identified at 11 sites. The values in Fig. 3 represent percentages calculated as [Hyl/(Hyl + Lys) × 100], where Hyl includes the glycosylated (G- and GG-) and non-glycosylated forms (Table 3). First, we examined the helical cross-linking sites, i.e. α1 Lys-87, α1 Lys-930, α2 Lys-87 and α2 Lys-933. At α1 Lys-87, a highly hydroxylated and the most heavily glycosylated site of type I collagen 16,17 , ~98% of Lys was hydroxylated in the controls, MC and EV. In KO collagen, it was also almost fully hydroxylated, being only 2-4% less hydroxylated than controls (Table 3). For α1 Lys-930, using the collagenase-pepsin digest 37 , we analyzed Lys hydroxylation in the peptide containing α1 Lys-918/930 (GDKGETGEQGDRGIKGHR). In controls, these Lys residues were at least 87-89% hydroxylated (Hyl + Hyl), and those in KO were comparable (Fig. 3b). These data indicate that the contribution of LH2 towards helical Lys hydroxylation is low, especially at the cross-linking sites, and site-specific at non-cross-linking sites (Table 4).

When calculated as percentages of the non-glycosylated Hyl and glycosylated (G- and GG-) forms in total Hyl, the relative abundance of glycosylated Hyl at α1 Lys-87, the major glycosylation site, was slightly but significantly lower (2-10%) and non-glycosylated Hyl significantly higher in KO collagen compared to controls (Table 4). At all other sites, i.e. α1 Lys-99, α1 Lys-174, α1 Lys-564, α2 Lys-174, and α2 Lys-219, the same phenomenon, i.e. a lower level of glycosylation of Hyl in KO collagen, was observed (Table 4), with the exception of KO-3, which exhibited levels of non-glycosylated and glycosylated Hyl similar to controls at some sites (p > 0.05, respectively). These data indicate that LH2 deficiency may cause diminished glycosylation at several sites. Interestingly, when the percentage of the two glycosylation forms (G- + GG- = 100%) was calculated, the GG-form was lower and the G-form higher at most sites in KO collagen when compared to controls (Table 5). These data suggest that LH2 deficiency causes a relative decrease of galactosylhydroxylysyl glucosyltransferase (GGT) activity, leading to a relative increase in the G-Hyl form, which is consistent with our recent report 6 .

Fig. 2 legend. Values represent mean ± S.D. (n = 3) from three independent experiments. Statistical differences were determined by the method described above (Fig. 1 legend). *p < 0.05, **p < 0.01, and ***p < 0.001 between MC and KO; # p < 0.05, ## p < 0.01, and ### p < 0.001 between EV and KO, respectively. Original blots are presented in Supplementary Fig. S3. LH, lysyl hydroxylase; GLT25D1, glycosyltransferase 25 domain containing 1; CypB, cyclophilin B; FKBP65, FK506-binding protein 65; Hsp47, heat shock protein 47; Bip, immunoglobulin heavy-chain-binding protein; Ab, antibody; MC, MC3T3-E1; EV, empty vector; KO, knock-out.

Collagen cross-link analysis. Control groups (MC and EV) showed essentially identical cross-link patterns (Fig. 4), with no statistical difference in any of the cross-links. The amounts of cross-links of control and KO collagens are summarized in Table 7. In the control groups, the major cross-link was DHLNL (Hyl ald × Hyl), representing ~67% of the total cross-links.
The rest includes HLNL (Hyl ald × Lys or Lys ald × Hyl), Pyr (Hyl ald × Hyl ald × Hyl) and HHMD (Lys ald × Lys ald × His × Hyl). In KO collagen, none of the Hyl ald -derived cross-links (DHLNL, Pyr) were detected, while the Lys ald -derived cross-links, HLNL and HHMD, were both significantly increased, by ~44 and ~400%, respectively. Though HLNL can be derived from Hyl ald or Lys ald , since Lys at the helical cross-linking sites is almost fully hydroxylated and telopeptidyl Lys is not hydroxylated in KO collagen (Table 3), it should be derived from Lys ald × Hyl in KO. In contrast to the striking difference in the type of cross-links, the difference in the total number of aldehydes involved in cross-linking is small (0.1-0.2 mol/mole of collagen) between control and KO collagens. This indicates that LOX/LOXL activities are not significantly affected in KO clones; the aldehyde bookkeeping behind this comparison is sketched below.
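The aldehyde accounting follows the relation given with the Table 7 legend (total aldehydes = DHLNL + HLNL + 2 × Pyr + 2 × HHMD, in moles per mole of collagen); the sketch below applies it to placeholder numbers, not the measured cross-link amounts.

```python
# Aldehyde bookkeeping per the Table 7 relation:
#   total aldehydes = DHLNL + HLNL + 2*Pyr + 2*HHMD   (mol / mol collagen)
# The example inputs are placeholders, not the measured values.
def total_aldehydes(dhlnl, hlnl, pyr, hhmd):
    return dhlnl + hlnl + 2.0 * pyr + 2.0 * hhmd

control = total_aldehydes(dhlnl=1.20, hlnl=0.30, pyr=0.05, hhmd=0.10)  # placeholders
ko      = total_aldehydes(dhlnl=0.00, hlnl=0.45, pyr=0.00, hhmd=0.50)  # placeholders
print(f"control {control:.2f}, KO {ko:.2f}, difference {control - ko:+.2f} mol/mol")
```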
Pro 3-hydroxylation.
Collagen solubility, fibrillogenesis and matrix mineralization. We then evaluated the biochemical, morphological, and functional outcomes of LH2KO. First, we found that LH2KO resulted in a marked increase in collagen solubility. Next, we examined fibrillogenesis (Fig. 5). The fibrils in KO clones were generally circular in shape and overall similar to those of MC and EV. However, the collagen fibril diameters in all KO clones were smaller than those of MC and EV (Fig. 5), indicating defective lateral growth of fibrils in KO collagen. Lastly, we assessed the effects of LH2KO on in vitro mineralization. The controls (MC and EV) and KO clones (1-3) were cultured for 28 days and subjected to a mineralization assay using Alizarin red S staining (Fig. 6). In the controls (MC and EV), mineralized nodules were well formed at this point; however, no nodules were observed in the KO clones at this time point (Fig. 6a,b), demonstrating that the lack of LH2 results in defective matrix mineralization.
Discussion
In this study, by generating LH2KO clones, we extensively characterized the molecular phenotypes of type I collagen. The lack of LH2 resulted in complete absence of Lys hydroxylation in all telopeptides, i.e. N-(9 N ) and C-telo (16 C ) of an α1 and N-telo (5 N ) of an α2 chains, thus, LH2 is solely responsible for hydroxylation of all Lys residues in telopeptides. Consistent with these data, the Hyl ald -derived cross-links, the major cross-links in www.nature.com/scientificreports/ MC/EV collagen, were completely absent from KO collagen and were replaced with Lys ald -derived cross-links. Moreover, our data indicated that LH2 may also be involved in helical Lys hydroxylation in a site-specific manner. The lack of LH2-catalyzed modification has significant impact on collagen solubility, collagen fibrillogenesis and matrix mineralization. In addition, LH2 could be involved in glucosylation of galactosyl Hyl. It should be noted, however, that this study was conducted using osteoblastic MC3T3 cells, thus, the phenotypes observed could be due in part to attributes of these cells.
Though the role of LH2 as a telopeptidyl LH has been widely accepted 28 , the evidence reported thus far was not complete, due mainly to the lack of appropriate models and analytical tools. Since LH2 KO mice die at an early embryonic stage 32 , we generated LH2 KO clones using MC cells. MC cells are derived from normal mouse calvaria 16,40 . These characteristics make MC cells an excellent model to investigate the biological functions of Lys modifications by manipulating specific LH gene expression and characterizing its effects on type I collagen 41 . Our current data unequivocally demonstrate that all Lys residues in telopeptides are hydroxylated solely by LH2, and that neither LH1 nor LH3 can compensate for this function. It is not clear at this point what determines such substrate specificity for LH2. However, considering the fact that an acidic amino acid, Glu or Asp, is positioned next/close to the telopeptidyl Lys residues (i.e. -Glu-Lys-Ser- in the N- and C-telo of an α1 chain in both mouse and human, and -Asp-Lys-Gly- or -Asp-Gly-Lys-Gly- in the N-telo of the mouse or human α2 chain, respectively), the presence of two basic Arg residues adjacent to the catalytic site of LH2 (R680 and R682) is likely important in determining such specificity 6 . Notably, these Arg residues are absent in LH1 or LH3, which explains their inability to compensate for LH2 deficiency. It is also interesting to note that, in MC/EV type I collagen, both the N- and C-telo Lys residues of an α1 chain are ~50% hydroxylated, while the N-telo Lys 34 . In contrast, Syx et al. stated that a mutant Hsp47, which showed reduced binding to type I collagen, resulted in decreased LH2 44 . These inconsistent data suggest that Hsp47 may act as a positive or negative regulator of LH2 in a context-dependent manner. Interestingly, our present study showed that Fkbp65, Hsp47 and Bip protein levels were reduced in KO clones compared to MC (Fig. 2), suggesting that this chaperone complex may be destabilized by the lack of LH2.

Table 3. Summary of site-specific modification analysis by mass spectrometry of non-cross-linked, hydroxylated and glycosylated residues in type I collagen from controls (MC and EV) and KO clones. Lys hydroxylation and its glycosylation (%) represent the relative levels of Lys, Hyl, G-Hyl, and GG-Hyl (Lys + Hyl + G-Hyl + GG-Hyl = 100%). Values represent mean ± S.D. (n = 3) of triplicate analysis for each group. Statistical differences were determined by Kruskal-Wallis one-way analysis of variance and means comparison with the controls by Dunnett's method. Lys, lysine; Hyl, hydroxylysine; G-, galactosyl-; GG-, glucosylgalactosyl-; MC, MC3T3-E1; EV, empty vector; KO, knock-out. *p < 0.05, **p < 0.01, and ***p < 0.001 between MC and KO; ## p < 0.01 and ### p < 0.001 between EV and KO, respectively.

Fig. 3 legend (fragment; refers to Table 2). Values represent percentages of Lys hydroxylation calculated as Hyl/(Lys + Hyl) × 100. Hyl in the helical domain is the sum of the non-glycosylated, G-, and GG-Hyl forms (see Table 3). Lys, lysine; Hyl, hydroxylysine; G-, galactosyl-; GG-, glucosylgalactosyl-; MC, MC3T3-E1; EV, empty vector; KO, knock-out. Values represent mean ± S.D. (n = 3) of triplicates for each group. Statistical differences were determined by the method described above (Fig. 1 legend). *p < 0.05, ... respectively. The relative levels at α1(I)K918/930 show the percentage of "Hyl + Hyl". See Table 3.
It has been speculated that LH2 also catalyzes helical Lys hydroxylation, based on its ability to hydroxylate the Lys residues in the synthetic (Ile-Lys-Gly)3 peptide and on the data from the LH2/proα1(I) co-expression in an insect cell system 22 . The results indicate that, in this system, LH2 may function as a helical LH when LH1 and LH3 are absent. However, the effect of LH2 expression on Lys hydroxylation at the specific molecular loci in an α1 chain, including its C-telo domain, or in an α2 chain, including its N-telo domain, was not investigated. Recently, Gistelinck et al. reported that, in the bone from a 4-year-old patient carrying a PLOD2 heterozygous mutation, Lys in the α1(I) telopeptides was severely underhydroxylated while Lys at the helical cross-linking sites in type I collagen was normally hydroxylated 25 . Their findings are consistent with our current cell-based study showing that, when LH2 is absent, in contrast to the changes in Lys hydroxylation in the telopeptides, the extent of Lys hydroxylation in the helical domain was only minimally affected (Table 3, Fig. 3). It is important to note that, when these percentage differences are converted to the number of Hyl residues in a collagen molecule, the difference between MC/EV and KO is less than ±0.03 residues at the cross-linking sites (α1-87, α1-918/930, α2-87, α2-933) and 0-0.2 residues at the non-cross-linking sites. Since Lys hydroxylation at the helical cross-linking sites is predominantly catalyzed by LH1 and its complex partners such as prolyl 3-hydroxylase 3 (P3H3), Synaptonemal Complex 65 (SC65) and CypB 9,33,45,46 , it is not surprising that the absence of LH2 does not significantly affect Lys hydroxylation at these functionally critical sites in the helical domain. The significance of Lys hydroxylation at other sites in the helical domain is not well defined but, possibly, it may affect the interaction between collagen and collagen-binding proteins such as small leucine-rich proteoglycans and/or cell surface receptors such as integrins and discoidin domain receptor 2 47 .
Recently, Ishikawa and co-workers reported that the cooperation between LH1 and P3H3 is required for Lys hydroxylation in the helical domain of type I collagen, and that P3H3 may function as helical LH at specific cross-linking sites 46 . They also reported that LH2 level remained unchanged in LH1 null mice 46 . In the present study, we did not find a significant change of LH1 protein in LH2 KO clones. These findings suggest that there is no apparent direct interaction between LH1 and LH2. Thus, LH2 deficiency caused only a minute change in Lys hydroxylation in the helical domain of type I collagen.
One of the intriguing findings in the current study was that the absence of LH2 affects the Hyl glycosylation pattern. When the percentages of the G- and GG-forms in the total glycosylation forms (G- + GG-) are calculated, KO collagen showed that, at most sites, the GG-form was decreased in favor of the G-form in KO type I collagen (Table 5). Recently, we have reported that LH2 potentially has galactosylhydroxylysyl glucosyltransferase (GGT) activity 6 , and the current data (Tables 5, 9) support this notion.

Table 6. Summary of site-specific modification analysis by mass spectrometry of prolyl 3-hydroxylation in type I collagen from controls (MC and EV) and KO clones.

48 . The deficiency of any of these components severely affects this modification, leading to severe forms of recessive osteogenesis imperfecta [49][50][51] . It has been reported that α1 Pro-986, the major site for this modification, is hydroxylated by P3H1 49 , and another modification site, α1/2 Pro-707, mainly by P3H2 52 . In the present study, we found that the extent of P3H at these sites was slightly increased in KO clones, suggesting that LH2 may interact with the P3H complex for prolyl 3-hydroxylation at these sites (Table 6). Since LH2 interacts with CypB 33 , a P3H complex member, these slight changes could occur through the lack of this interaction.
The impact of LH2 deficiency on collagen stability, fibrillogenesis and mineralization was striking. First, collagen solubility with dilute acid and pepsin digestion were markedly increased in KO collagen, i.e. > 90% of KO collagen was solubilized by these treatments while it was only ~ 30% in control groups. The marked increases in solubility in KO collagen can be explained by the differences in the nature of the cross-links. In KO collagen, since telopeptidyl Lys is not hydroxylated, the cross-links formed are all Lys ald -derived, aldimine cross-links such as deH-HLNL and deH-HHMD. The aldimine bond is known to be labile to dilute acids, thus, readily dissociated 53 . In contrast, the Hyl ald -derived bifunctional aldimine cross-links are spontaneously rearranged to ketoamines that are stable to dilute acids. The collagens containing the stable Hyl ald -derived cross-links are also more resistant against enzymatic degradation than those with the Lys ald -derived cross-links 54,55 . Since the total number of aldehydes involved in cross-linking is only ~ 8% lower in KO collagen compared to the control, the data implies that the Hyl ald -derived cross-linking is critical to confer insolubility on type I collagen. This is likely the reason why collagen enriched in the Hyl ald -derived cross-links accumulates without being readily degraded by proteolytic enzymes in fibrosis 28,56,57 and also in desmoplastic tumors such as pancreatic ductal adenocarcinoma 58 , lung cancer 29 , breast cancer 31,59 and oral cancer 30 . Such stiffened collagen matrix may not only form a shelter for cancer cells to protect them from immune cells and anti-cancer drugs but also serve as a means for cancer cells to attach, migrate and metastasize efficiently 41,60,61 . Second, based on one experiment, fibrillogenesis in LH2 KO collagen is also affected showing smaller fibril diameters compared to those of controls. This could be due to several factors including: 1. since KO collagen is more susceptible to degradation (see above), collagen fibrils may not be able to grow, 2. altered Lys modifications (hydroxylation and glycosylation) of KO collagen may favor the association with collagen-binding proteins, such as decorin, that is known to inhibit collagen fibrillogenesis 62-64 , 3. altered post-translational modifications in KO collagen may inherently limit the growth of molecular packing into a fibril. Notably, when LH2 is overexpressed in MC cells, collagen fibrils are also smaller than controls 20 . This may indicate that the extent of LH2-mediated post-translational modifications should be kept at a certain range to establish an appropriate size of collagen fibrils in this cell culture system.
In bone, fibrillar type I collagen functions as an organizer of mineral deposition and growth 65-67 . Since initial mineralization appears to occur in the intermolecular channel formed by contiguous hole zones in the collagen fibril 68 , the pattern of intermolecular cross-linking formed at the edge of the hole zones should be critical for organizing mineralization 69 . The LH2 KO collagen fibrils, which contain abnormal cross-links, are highly soluble and are smaller in size, may not serve well as a stable template to accommodate and organize matrix mineralization. This may result in defective bone formation as seen in Bruck syndrome types 1 and 2, which are caused by mutations in the genes encoding the LH2 chaperone FKBP65 and LH2, respectively. In addition to this structural function, LH2 may regulate cellular activities through its action on integrin β1 70 , which may also impact the mineralization process. Recently, we reported the bone phenotypes of LH2 heterozygous mice (LH2 +/− ), in which LH2 expression levels are only ~ 50% of those of wild-type mice (LH2 +/+ ). In this animal model, LH2 +/− femurs showed lower bone mineral density and inferior bone mechanical properties compared to those of LH2 +/+ mice 71 . When cultured, LH2 +/− osteoblastic cells mineralized poorly, which is consistent with our current study. Thus, while we cannot determine to what extent the LH2-catalyzed modification directly contributes to collagen mineralization, such modification appears to play a critical role in this process.
Table 7. Levels of immature reducible cross-links (DHLNL and HLNL) and mature non-reducible cross-links (Pyr and HHMD) from MC, EV, and KO clones. Total aldehydes = DHLNL + HLNL + 2 × Pyr + 2 × HHMD. Values represent mean moles/mole collagen ± S.D. (n = 3) of triplicate analyses of the hydrolysates. Statistical differences were determined by Kruskal-Wallis one-way analysis of variance, with means compared to the controls by Dunnett's method. DHLNL, dihydroxylysinonorleucine; HLNL, hydroxylysinonorleucine; HHMD, histidinohydroxymerodesmosine; Pyr, pyridinoline; MC, MC3T3-E1; EV, empty vector; KO, knock-out. ***p < 0.001 between MC and KO; #p < 0.05, ##p < 0.01, and ###p < 0.001 between EV and KO, respectively.
LH2 has two isoforms: one with an additional 63 bp exon 13A (LH2b) and the other without it (LH2a) 5 . It is generally accepted that LH2b is the telopeptidyl LH, but it has recently been reported that LH2a is also capable of catalyzing Lys hydroxylation in the telopeptides 6 . Inducing these isoforms separately in the LH2 KO cells and characterizing their collagen molecular phenotypes will provide valuable insights into their distinct and/or overlapping functions. This work is now underway in our laboratory and will be the subject of a separate publication.
In conclusion, this study demonstrates that the major function of LH2 is to hydroxylate the N- (α1 and α2 chains) and C-telopeptidyl (α1 chain) Lys residues of type I collagen. The deficiency of LH2 profoundly affects collagen cross-linking, solubility, fibrillogenesis, and mineralization. These results underscore the pivotal role of the LH2-mediated post-translational modifications in the formation and function of fibrillar collagen in bone.
Oligonucleotide pairs containing these gRNA sequences (Supplementary Fig. S4) were cloned into pX335 (Addgene), which contains the D10A mutant Cas9 (Cas9n) 74 , to produce pX335-mLH2-1 and -2.
Evaluation of off-target effect. The specificity of the various gRNAs used in this study and their potential off-target cleavage probabilities were initially evaluated using two different algorithms, the online CRISPR RGEN Tools and Off-Spotter design tools, prior to deploying them in MC cells. We also used the Plod2 sgRNAs as queries to search for similar sequences in the mouse genome using Cas-OFFinder (http://www.rgenome.net/cas-offinder/). To test whether the Plod2 sgRNAs target other genomic loci by mispairing, we performed experimental testing of five candidates selected from a search allowing up to two mismatches and two bulges. DNA primers were designed to amplify the DNA sequences containing the potential off-target sites in KO-1 using the standard PCR method (Supplementary Table S1). PCR products were separated by gel electrophoresis, and the DNA bands were isolated and extracted for Sanger sequencing. Sequencing results were aligned with the mouse genome using the blastn algorithm to identify potential sequence variants.
Quantitative real-time PCR.
To determine the expression of Plod2, MC, EV and KO clones were plated at a density of 2 × 10^5 cells/35-mm dish. After 48 h, total RNA was extracted with TRIzol reagent (Invitrogen). Expression levels of Plod2 mRNA were assessed by one-step quantitative reverse transcription polymerase chain reaction (RT-PCR) with an ABI Prism 7500 (Applied Biosystems). The specific probe and primer set for Plod2 was purchased from ThermoFisher Scientific (TaqMan Gene Expression Assay, Mm00478767_m1); it amplifies the exon 4-5 boundary and thus detects both Plod2a and Plod2b. The mRNA expression levels were normalized to beta-actin (Actb; Mm01205647_g1) and analyzed by the 2^−ΔΔCT method 75 .
Western blot analysis.
To determine the protein level, the KO clones and controls were plated onto 35-mm dishes at a density of 3 × 10^5 cells/dish. After culturing for 7 days, the cells were washed with phosphate-buffered saline (PBS), lysed with radio-immunoprecipitation assay (RIPA) lysis buffer (50 mM Tris-HCl, 150 mM NaCl, 0.5% sodium deoxycholate, 0.1% SDS, and 1% NP-40), and centrifuged at 12,000×g, and the supernatant was collected. The total protein concentration was measured with the Pierce BCA Protein Assay Kit (Pierce Biotechnology, Rockford, IL, USA) according to the manufacturer's protocol. The cell lysates were then subjected to Western blot analysis. Horseradish peroxidase (HRP)-conjugated anti-rabbit IgG (Cell Signaling Technology) was used as a secondary antibody, and HRP-conjugated anti-β-actin rabbit monoclonal antibody (13E5, Cell Signaling Technology) was used as an internal control for protein loading. HRP reactivity was detected with SuperSignal West Pico Chemiluminescent Substrate (Thermo Fisher Scientific), and the chemiluminescence was scanned using an Odyssey Infrared Imaging System (LI-COR Biosciences). Proteins were quantified using Image Studio software version 4.0 (LI-COR), normalized to β-actin levels, and expressed as the change relative to the protein level in MC, set to 1.0.
Collagen preparation for biochemical analysis. MC, KO and EV clones were cultured in α-minimum essential media (Invitrogen) containing 10% FBS, 100 units/ml penicillin, and 100 μg/ml streptomycin. When the cells grew to confluence, the medium was replaced with that containing 50 μg/ml of ascorbic acid. After 2 weeks of culture, the cells/matrix layers were scraped, thoroughly washed with PBS and cold distilled water several times by repeated centrifugation at 4000×g, and lyophilized.
Collagen type analysis. Collagen was extracted and purified from lyophilized cell/matrix layer of MC, EV, and KO clones by digestion with pepsin (Sigma-Aldrich, St. Louis, MO, USA; 5 mg/ml in 0.5 M acetic acid) and salt precipitation (0.7 M NaCl in 0.5 M acetic acid) as described previously 36 . Type I and III collagens were quantified by LC-MS using SI-collagen as an internal standard 35 . In brief, SI-collagen was first mixed into the purified collagen samples, and the samples were digested with sequencing grade trypsin (Promega, Madison, WI, USA; 1:50 enzyme/substrate ratio) in 100 mM Tris-HCl/1 mM CaCl 2 (pH 7.6) at 37 °C for 16 h after heat denaturation at 60 °C for 30 min. Generated marker peptides of type I and III collagens (two peptides for each α chain; stable isotopically heavy and light ones) were monitored by LC-QqQ-MS on a 3200 QTRAP hybrid QqQ/ linear ion trap mass spectrometer (AB Sciex, Foster City, CA, USA) with an Agilent 1200 Series HPLC system (Agilent Technologies, Palo Alto, CA, USA) using a BIOshell A160 Peptide C18 HPLC column (5 µm particle size, L × I.D. 150 mm × 2.1 mm; Supelco, Bellefonte, PA, USA) to determine the concentrations of type I and type III collagens.
Reduction with NaB 3 H 4 .
Lyophilized cell/matrix samples (~ 2.0 mg each) were suspended in a buffer containing 0.15 M N-tris(hydroxymethyl)methyl-2-aminoethanesulfonic acid and 0.05 M Tris-HCl, pH 7.4, and reduced with standardized NaB 3 H 4 . The specific activity of the NaB 3 H 4 was determined by the method previously reported 76 .
The reduced samples were washed with cold distilled water several times by repeated centrifugation at 4000×g and lyophilized.
Quantification of Hyl by HPLC.
Reduced collagen was hydrolyzed with 6 N HCl and subjected to amino acid analysis 77 . The level of total Hyl was calculated as residues per collagen molecule, based on the value of 300 residues of Hyp per collagen molecule 14 .
Site-specific characterization of post-translational modifications of type I collagen. The purified collagen samples were digested with trypsin as described above to analyze the Lys post-translational modifications at the specific molecular sites within the triple helical domain of type I collagen 37 . In addition, to analyze Lys hydroxylation at the telopeptide domains of type I collagen, the lyophilized cell/matrix samples were sequentially digested with bacterial collagenase and pepsin as previously reported 37 . In brief, the samples were digested with 0.01 mg/ml of collagenase from Grimontia hollisae (Nippi, Tokyo, Japan) 78 in 100 mM Tris-HCl/5 mM CaCl 2 (pH 7.5) at 37 °C for 16 h after heating at 60 °C for 30 min. After addition of acetic acid (final 0.5 M), the collagenase-digests were further digested with 0.01 mg/ml of pepsin (Sigma-Aldrich) at 37 °C for 16 h. The trypsin- or collagenase/pepsin-digests were subjected to LC-QTOF-MS analysis on an ultra-high resolution QTOF mass spectrometer (maXis II, Bruker Daltonics, Bremen, Germany) coupled to a Shimadzu Prominence UFLC-XR system (Shimadzu, Kyoto, Japan) using an Ascentis Express C18 HPLC column (5 µm particle size, L × I.D. 150 mm × 2.1 mm; Supelco) 37 . Site occupancy of Lys hydroxylation/glycosylation (Lys, Hyl, G-Hyl, and GG-Hyl) was calculated using the peak area ratio of extracted ion chromatograms (mass precision range = ± 0.05) of peptides containing the respective molecular species as previously reported 8,33,37,79 .
"Biology",
"Medicine"
] |
From Biomedical Signal Processing Techniques to fMRI Parcellation
In this paper, a comparison between a number of digital signal processing techniques is introduced, with particular attention to their applications to biosignals such as the ECG, EEG and EMG. These techniques are used for extracting data from biosignals, processing them and analyzing their main characteristics; the advantages and disadvantages of each technique are also discussed. Multivariate analysis is one of the most important of these techniques; it has wide application in the biomedical field and can be applied to many medical signals and images. For example, it is commonly used in the analysis of functional Magnetic Resonance Imaging (fMRI), where it can identify technical and physiological artifacts. The second part of this paper presents a short survey of fMRI parcellation techniques, especially those based on data-driven approaches. Brain parcellations divide the brain's spatial domain into a set of non-overlapping regions or modules, and these parcellations are often derived from specific data-driven or clustering algorithms applied to brain images. To the authors' knowledge, this is the first paper to survey the use of different DSP techniques with a variety of biosignals, to analyze these biomedical signals, and to introduce one of the most important applications of multivariate methods to fMRI.
A biosignal is any signal sensed from biological tissue or a medical source. Such signals are contaminated by noise from different sources, and several time- or frequency-domain digital signal processing (DSP) techniques can be used to remove this noise; DSP also covers adjusting signal characteristics, spectral estimation, multiplying two signals to perform modulation or correlation, filtering, and averaging.
DSP deals with processing signals in the digital domain. The ECG, EEG, EMG, ERG and EOG, among others, are examples of biosignals, and all of them have been examined in the frequency domain. The frequency domain often gives more useful information than the time domain, and obtaining the frequency-domain representation of a biosignal is called spectral analysis.
Signal processing has many applications in the biomedical field, including echo cancellation, noise cancellation, spectrum analysis, detection, correlation, filtering, computer graphics, image processing, data compression, machine vision, sonar, array processing, guidance and robotics.
This paper provides a survey of the digital signal processing techniques most often used with biomedical signals to remove noise from biological signals, extract characteristic parameters and process them. The techniques introduced in this paper are [John L. et al., 2004]: spectral analysis; digital filters (FIR and IIR filters); cross-correlation and coherence analysis; ensemble averaging; modern techniques of spectral analysis (parametric and non-parametric approaches); time-frequency analysis methods such as the short-term Fourier transform STFT (the spectrogram), the Wigner-Ville distribution (a special case of Cohen's class), and the Choi-Williams and other distributions; wavelet analysis; optimal filters (Wiener filters); adaptive filters and adaptive noise cancellation (ANC); and multivariate analyses: principal component analysis (PCA) and independent component analysis (ICA).
Multivariate analysis is one of the most important of these techniques; it has wide application in the biomedical field and can be applied to many medical signals and images. For example, it is commonly used in functional Magnetic Resonance Imaging (fMRI) analysis, where it can identify technical and physiological artifacts.
The fMRI technique scans the whole or part of the brain repeatedly and generates a sequence of 3-D images. The voxels that represent real brain activity are very difficult to detect because of a weak signal-to-noise ratio (SNR), the presence of artifacts and nonlinear properties. Because of these difficulties, and when it is difficult to predict what will occur during acquisition, data mining becomes important as a complement to, or a replacement for, the classical methods.
Parcellation approaches use brain activity and clustering methods to divide the brain into many parcels with some degree of homogeneous characteristics. The brain is thus divided into defined regions with some degree of signal homogeneity, which helps in the analysis and interpretation of neuroimaging data as well as in mining these data, since the amount of fMRI data is huge. Independent component analysis (ICA) and principal component analysis (PCA) algorithms are able to separate the fMRI signals into a group of defined components; ICA is regarded as the reference method for extracting underlying networks from resting-state fMRI [Beckmann and Smith, 2004].
The second part of this paper introduces a short survey of fMRI parcellation techniques, especially those based on a data-driven approach. Brain parcellations divide the brain's spatial domain into a set of non-overlapping regions or modules, and these parcellations are often derived from specific data-driven or clustering algorithms applied to brain images.
Biomedical signal processing techniques
The techniques mentioned above are explained in detail in this section, together with their use for extracting data from biosignals, analyzing their main characteristics and processing them; finally, Table 1, which summarizes the advantages and limitations of the techniques and compares them, is presented together with the conclusion.
Spectral Analysis (Classical Methods)
There are many different techniques for performing spectral analysis, and each has its own strengths and weaknesses. These methods can be divided into two categories: classical and modern. The former are based on the Fourier transform; the latter include methods based on the estimation of model parameters. To determine an accurate waveform spectrum, the signal must be periodic (or of finite length) and noise free. In many biomedical applications, the problem is that the biosignal waveform is either infinite or long enough that only a portion of it is available for analysis. These biosignals are often contaminated with noise, and if only a part of the actual waveform can be analyzed and/or the biosignal includes noise, then all spectral analysis techniques yield approximations and are therefore only estimates of the correct spectrum. Most spectral analysis techniques try to improve the estimation accuracy of specific spectral features.
Spectral analysis is probably the most widely used DSP technique in biosignal analysis, so it is important to know how different phenomena affect the spectrum of the desired signal in order to interpret it correctly. [Marple S.L., 1987] presented the long and rich history of the approaches developed for spectral decomposition. [Abboud et al., 1989] introduced a study of interference cancellation in the FECG (fetal electrocardiogram) signal recorded from the mother's abdomen and calculated the spectral curves of the averaged fetal and maternal ECG (MECG). These power spectra were calculated after subtraction of an averaged MECG waveform, using the cross-correlation function and the fast Fourier transform algorithm. This technique is not effective because of the weak SNR, the high degree of similarity between the maternal and fetal ECG, and the overlap of the frequency spectra of the signal and noise components.
The classical Fourier transform (FT) approach is the most popular technique used for spectral estimation.
An advantage of the Fourier transform (FT) method is that sinusoidal signals have energy at only one frequency, so the signal can be decomposed into a series of cosines and sines of different frequencies whose amplitudes are proportional to the frequency content of the signal at those frequencies. The averaged periodogram is an approach for estimating the power spectrum of a waveform in which the Fourier-transform-based spectrum is followed by averaging. The Welch method is one of the most important ways of computing the averaged periodogram: it divides the data into several epochs, overlapping or non-overlapping, performs an FFT on each epoch, calculates the magnitude squared (i.e., the power spectrum) of each, and then averages these spectra.
For example, in Fig. 1 the basic Fourier transform routine (fft) was applied to EEG data, while in Fig. 2 the Welch power spectral method was applied to the same array of data used to produce the spectrum of Fig. 1. A triangular window was applied, the segment length was 150 points, and the segments overlapped by 50%.
Comparing the spectrum in Fig. 2 with that of Fig. 1 shows that the background noise is greatly reduced and the spectrum is smoother.
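The following MATLAB lines are only a minimal sketch of this comparison, not the authors' original script; the synthetic "EEG" signal, the sampling rate and the FFT length are assumptions introduced here, while the 150-point triangular window and 50% overlap follow the description of Fig. 2.

% Minimal sketch (assumed synthetic data): raw FFT periodogram vs. Welch method
fs  = 250;                                   % assumed sampling rate (Hz)
t   = (0:1/fs:4-1/fs)';
eeg = sin(2*pi*10*t) + 0.5*sin(2*pi*22*t) + randn(size(t));  % stand-in for EEG data
N   = length(eeg);
X   = fft(eeg);                              % raw periodogram from the standard fft routine
Pfft = abs(X(1:N/2)).^2 / N;
f1   = (0:N/2-1)*fs/N;
% Welch method: 150-point triangular windows with 50% overlap, as in Fig. 2
[Pw, f2] = pwelch(eeg, triang(150), 75, 256, fs);
subplot(2,1,1); plot(f1, Pfft); title('FFT periodogram');
subplot(2,1,2); plot(f2, Pw);   title('Welch average periodogram');
xlabel('Frequency (Hz)');

As in the figures described above, the averaged spectrum trades some frequency resolution for a much smoother, less noisy estimate.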
Digital Filters
Most signal waveforms are narrowband with respect to noise, which is mostly broadband; white noise is the broadest-band noise, with a flat spectrum. This is the reason filters are used. Since the aim of filtering is to reshape the spectrum in order to provide some improvement in SNR, filters are closely related to spectral analysis. Filters can be divided into two groups according to the way they achieve this reshaping: finite impulse response (FIR) filters and infinite impulse response (IIR) filters. IIR filters are more efficient in terms of computing time and memory than FIR filters; this is the disadvantage of FIR filters. The advantages of FIR filters are, firstly, that they are stable and have linear phase shifts and, secondly, that they have initial transient responses of limited length. FIR adaptive filters were developed by [Widrow et al., 1975; 1976a], who showed that high-order filters must be used to obtain good performance, especially at low SNR. [Widrow et al., 1976b] introduced a comparison of three adaptive algorithms (LMS, differential steepest descent and linear random search) and showed that the FIR structure offers advantages such as guaranteed stability and simplicity, even though the resulting filter is not optimal. [Benjamin, 1982] introduced a study using a signal-modelling approach and reported a general form of noise cancellation combining two infinite impulse response (IIR) filters, namely a noise canceller and a line enhancer. Further applications of digital filtering to biomedical signals are described in [Fergjallah et al., 1990]. FIR filters are also termed nonrecursive because, in the general difference equation (1), only the input (the first term) is used in the filter algorithm, not the output:
$$y(n) = \sum_{k=0}^{K-1} b(k)\,x(n-k) - \sum_{\ell=1}^{L} a(\ell)\,y(n-\ell) \qquad (1)$$
This leads to an impulse response that is finite, hence the name FIR. FIR filters can be designed and applied using only convolution and the FFT, because the impulse response of the filter is identical to the FIR coefficient function.
Fig. 3 shows the design of an FIR bandpass filter and its application to an EEG signal, using convolution to apply the filter and the FFT to evaluate the filter's frequency response.
First, in order to show the range of the filter's operation with respect to the frequency spectrum of the EEG data, a spectral analysis was performed on both the EEG data and the filter, using the FFT without windowing or averaging; Fig. 4 shows the result of applying this FIR bandpass filter to the EEG signal. Unlike FIR filters, IIR filters use both the input and the output terms of Eq. (1), so their design is not as simple as that of FIR filters, although most of the principles of analog filter design can be applied to them. IIR filters can also reproduce all the well-known analog filter types (Butterworth, Chebyshev Type I and II, and elliptic). An IIR filter can meet a given frequency specification (cutoff sharpness or slope) with a lower filter order; this is the essential advantage of IIR over FIR filters. Their main disadvantage is that they have nonlinear phase characteristics.
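A minimal MATLAB sketch of the two designs is given below; it is not the authors' code, and the synthetic EEG-like signal, the 8-12 Hz passband and the filter orders are assumptions chosen only for illustration.

% Minimal sketch (assumed signal and band edges): FIR vs. IIR band-pass filtering
fs  = 250; t = (0:1/fs:4-1/fs)';
eeg = sin(2*pi*10*t) + sin(2*pi*40*t) + randn(size(t));   % stand-in for EEG data
% FIR band-pass, 8-12 Hz, order 64 (window design)
bFIR = fir1(64, [8 12]/(fs/2), 'bandpass');
yFIR = filter(bFIR, 1, eeg);                  % FIR filtering = convolution with bFIR
[hFIR, f] = freqz(bFIR, 1, 512, fs);          % FFT-based frequency response of the filter
% IIR Butterworth band-pass: similar cutoff sharpness at a much lower order
[bIIR, aIIR] = butter(4, [8 12]/(fs/2), 'bandpass');
yIIR = filter(bIIR, aIIR, eeg);
plot(f, abs(hFIR)); xlabel('Frequency (Hz)'); ylabel('|H(f)|');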
Cross-Correlation and Coherence Analysis
These two techniques can be used with any pair of signals (stochastic, deterministic or multichannel), and they are very effective in determining the relationships between pairs of biomedical signals.
In 1986, a cross-correlation analysis procedure was introduced by [Levine et al., 1986] to remove ECG contamination from the diaphragmatic electromyogram (EMGdi).
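A minimal sketch of both measures is shown below; it is not taken from any of the cited studies, and the two synthetic channels, the 0.1 s delay and all window parameters are assumptions introduced for illustration.

% Minimal sketch (assumed signals): cross-correlation and magnitude-squared coherence
fs = 250; t = (0:1/fs:4-1/fs)';
x = sin(2*pi*8*t) + 0.5*randn(size(t));                 % first channel
y = [zeros(25,1); x(1:end-25)] + 0.5*randn(size(t));    % delayed, noisy copy (0.1 s lag)
[rxy, lags] = xcorr(x, y, 'coeff');                     % normalized cross-correlation
[~, imax] = max(rxy);
fprintf('Estimated delay: %.3f s\n', -lags(imax)/fs);   % peak lag reveals the relative delay
[Cxy, f] = mscohere(x, y, hamming(256), 128, 512, fs);  % coherence as a function of frequency
plot(f, Cxy); xlabel('Frequency (Hz)'); ylabel('Coherence');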
Ensemble Averages
Ensemble averaging deals with sets of data, especially for random processes whose probability density function is not known. Such data are obtained when many records of the signal are available; these multiple records could, for example, be taken from many sensors.
[Hae-Jeong Park et al., 2002] used a two-step process for interference cancellation in the EEG signal: detection of the ECG artifact using the energy interval histogram method, followed by ECG artifact removal through an adjusted ensemble-average subtraction.
In many biosignals, the multiple records are obtained from repeated responses recorded by the same sensor in the same place. These sets of data are then aligned and added point by point. If the ensemble contains x records (for example), the signal-to-noise ratio (SNR) improves by a factor of √x.
We can conclude that in ensemble averaging the signal need not be periodic, but it must be repetitive, while the noise must be random and uncorrelated with the signal. In addition, the position of each transient response must be known.
The multiple signals recorded during eye movement are shown in the upper traces of Fig. 5, from which an ensemble of individual responses was taken. In the lower trace, the ensemble average is formed by taking the average, at each point in time, of the individual responses. The vertical line extending through the upper and lower traces marks one such instant, at which the lower trace is the average of the individual responses.
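A minimal sketch of point-by-point ensemble averaging follows; the evoked-response shape, the number of trials and the noise level are assumptions, not the data of Fig. 5.

% Minimal sketch (assumed synthetic trials): point-by-point ensemble averaging
fs = 250; t = (0:1/fs:1-1/fs);
nTrials  = 100;                                   % number of aligned records
response = exp(-((t-0.3)/0.05).^2);               % assumed underlying transient response
trials   = repmat(response, nTrials, 1) + randn(nTrials, length(t));  % one noisy record per row
single1  = trials(1, :);                          % a single record (poor SNR)
avgAll   = mean(trials, 1);                       % ensemble average over all records
% SNR improves roughly by sqrt(nTrials) when the noise is random and uncorrelated
plot(t, single1, t, avgAll, 'LineWidth', 1.5);
legend('single trial', sprintf('average of %d trials', nTrials));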
Modern Techniques of Spectral Analysis
These techniques are designed to overcome some of the distortions produced by the classical techniques, especially when the available data segments are short. All the classical approaches are based on the Fourier transform, so any waveform outside the data window is tacitly assumed to be zero; this assumption is rarely true, and it can distort the estimate, in addition to the distortions caused by the various data windows (including the rectangular window). Modern approaches fall into two broad classes: parametric (model-based) and nonparametric (eigen-decomposition) methods.
[Marple S.L., 1987] gave a rigorous treatment of the Fourier transform, parametric modelling methods (including AR and ARMA), and eigenanalysis-based techniques. [Shiavi R., 1999] emphasizes the spectral analysis of signals buried in noise, with excellent coverage of Fourier analysis and autoregressive methods, as well as a good introduction to statistical signal processing concepts.
Parametric Approach
The need for windowing in the classical techniques is removed by the parametric approach, which can improve spectral resolution and fidelity, especially when the waveform contains a large amount of noise; however, it provides only magnitude information, in the form of the power spectrum, and requires more judgment in its application than the classical methods. In general, a parametric method refers the power-spectrum estimate to a linear model (process) that is assumed to be driven by white noise. The model output is compared with the waveform of interest, and the model parameters are adjusted for the best match, thereby giving the best estimate of the waveform's spectrum.
Various model types are used in this approach, distinguished by their transfer functions; the three most common are the autoregressive (AR), moving average (MA), and autoregressive moving-average (ARMA) models. Selecting the most appropriate model requires some knowledge of the probable shape of the spectrum. The AR model is particularly suited to estimating spectra with sharp peaks but no deep valleys; its transfer function has a constant in the numerator and a polynomial in the denominator, so it is sometimes called an all-pole model, analogous to an IIR filter with a constant numerator. Conversely, the MA model is useful for estimating spectra with valleys but no sharp peaks; its transfer function has only a numerator polynomial, so it is called an all-zero model and is analogous to an FIR filter. The ARMA model combines AR and MA features and is used for spectra that contain both sharp peaks and valleys; its transfer function has both numerator and denominator polynomials and is called a pole-zero model.
A weakness of the MA model that restricts its usefulness in power spectral estimation of biosignals is its inability to model narrowband spectra well; thus only the AR model is commonly used in power spectral analysis. According to the way the algorithms process data, AR spectral estimation techniques can be divided into two classes: those that process block data (when the entire waveform is available in memory) and those that process data sequentially (when incoming data must be estimated rapidly for real-time operation). Only block-processing algorithms are considered here, as they find the major applications in biomedical engineering and are the only algorithms provided in the MATLAB Signal Processing Toolbox. There are several approaches for estimating the AR model coefficients and the power spectrum directly from the waveform; the four that have received the most interest are the Yule-Walker, Burg, covariance and modified covariance methods.
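A minimal sketch comparing these four block-processing estimators is given below; the test signal, the model order and the FFT length are assumptions introduced only for illustration.

% Minimal sketch (assumed signal and order): the four AR spectral estimators
fs = 250; t = (0:1/fs:4-1/fs)';
x  = sin(2*pi*10*t) + sin(2*pi*19*t) + randn(size(t));   % two closely spaced peaks in noise
order = 16; nfft = 512;                                  % assumed AR model order
[Pyw,  f] = pyulear(x, order, nfft, fs);   % Yule-Walker method
[Pbg,  ~] = pburg(x,  order, nfft, fs);    % Burg method
[Pcv,  ~] = pcov(x,   order, nfft, fs);    % covariance method
[Pmcv, ~] = pmcov(x,  order, nfft, fs);    % modified covariance method
plot(f, 10*log10([Pyw Pbg Pcv Pmcv]));
legend('Yule-Walker', 'Burg', 'Covariance', 'Modified covariance');
xlabel('Frequency (Hz)'); ylabel('Power (dB)');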
Non Parametric Approach
To obtain better frequency estimation characteristics and better resolution, especially at high noise levels, eigenanalysis (frequency estimation) spectral methods are preferred: they can remove much of the noise contribution and are especially effective in identifying sinusoidal, exponential, or other narrowband processes in white noise. When the noise is colored (i.e., not white but containing spectral features of its own), their performance can degrade; they are also not well suited to estimating spectra that contain broader-band features.
The main idea of eigenvector approaches is to divide the information contained in the data waveform or autocorrelation function into two subspaces: a signal subspace and a noise subspace. The eigendecomposition yields eigenvalues in decreasing order together with orthonormal eigenvectors, and if the eigenvectors considered to belong to the noise subspace are removed, the effect of that noise is functionally cancelled. Functions can then be computed from either the signal or the noise subspace and plotted in the frequency domain, where they show sharp peaks wherever sinusoids or narrowband processes exist. The main difficulty in applying eigenvector spectral analysis is therefore the selection of a suitable dimension for the signal (or noise) subspace.
Generally, the parametric methods are more complicated than the nonparametric ones, but the latter are not considered true power spectral estimators, because the nonparametric approach does not preserve signal power and the autocorrelation sequence cannot be reconstructed by applying the Fourier transform to the estimates. The approach is therefore better termed a frequency estimator, because it presents spectra in relative units. In MATLAB, two popular eigenanalysis-based frequency estimators are available in the Signal Processing Toolbox: the MUltiple SIgnal Classification (MUSIC) and the Pisarenko harmonic decomposition (PHD) algorithms.
Fig. 6 shows the spectra obtained by three different methods applied to an EEG data waveform, comparing the classical FFT-based approach (the Welch method, with no window and no overlap), an autoregressive approach (the modified covariance spectrum), and an eigenanalysis approach (the MUSIC method, with no window and no overlap); panel (e) plots the singular values from the eigenvector routine as determined by pmusic with a high subspace dimension, so that many singular values are obtained.
From this figure we conclude that the method that most strongly and clearly identifies the signal components is the eigenvector method (d). The spectrum produced by the classical Welch method does detect the peaks, but it also shows a number of pseudo peaks in response to the noise, so it would be difficult to locate the signal peaks definitively, or to determine their exact frequencies, from this spectrum. The AR method also detects the peaks and smooths the noise, whereas the eigenanalysis methods fix the peak frequencies with excellent frequency resolution. Plot (e) in this figure shows the singular values determined from the eigenvalues, which can be used to estimate the dimensionality of multivariate data: the curve is examined for a break point (a change in slope) between a steep and a shallow region, and this break point is taken as the dimensionality of the signal subspace. The idea is that a gradual decrease in the singular values is associated with the noise, while a faster decrease indicates singular values associated with the signal. Regrettably, a well-defined break point is not always found when real data are involved.
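A minimal MUSIC sketch in the spirit of Fig. 6(d) is given below; the test signal, the signal-subspace dimension and the FFT length are assumptions introduced here for illustration.

% Minimal sketch (assumed signal): MUSIC pseudospectrum of sinusoids in white noise
fs = 250; t = (0:1/fs:4-1/fs)';
x  = sin(2*pi*10*t) + sin(2*pi*19*t) + randn(size(t));  % two sinusoids in white noise
p  = 4;                 % signal-subspace dimension: 2 real sinusoids = 4 complex exponentials
[Pmu, f] = pmusic(x, p, 512, fs);           % MUSIC pseudospectrum
plot(f, 10*log10(Pmu)); xlabel('Frequency (Hz)'); ylabel('Pseudospectrum (dB)');

Note that the vertical axis is in relative units: as discussed above, the result is a frequency estimator rather than a true power spectrum.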
Time-Frequency Analysis
The main concern in many biomedical signals and medical images is timing information, because many of these waveforms are not stationary. For example, EEG signals vary greatly depending on the internal state of the subject, such as eyes closed, meditation or sleep. While the classical and modern spectral analysis methods described above work well for waveforms that are stationary, many approaches have been developed to extract both frequency and time information from a waveform (the Fourier transform alone is unable to describe both the time and the frequency characteristics of the waveform). Essentially these approaches can be divided into two groups: time-frequency methods and time-scale methods (wavelet analysis). [Boashash B., 1992] provides, in its early chapters, a very useful introduction to time-frequency analysis, followed by a number of medical applications.
The Fourier transform provides a good representation of the frequencies in a waveform but not of their timing; the timing is encoded in the phase portion of the transform, but this encoding is difficult to interpret and to use.
Short-Term Fourier Transform (The Spectrogram)
This was the first straightforward approach developed to extract both frequency and time information from a waveform. It is based on slicing the waveform into a number of short segments and performing the analysis on each segment using the standard Fourier transform. A window function is applied to each data segment before the Fourier transform, in order to isolate that segment from the rest of the waveform. Since the Fourier transform is applied to a segment of data that is shorter than the overall waveform, the result is termed the short-term Fourier transform (STFT), or the spectrogram. The spectrogram has two main problems: (1) the selection of an optimal window length may not be possible when the data segments have varying features, and (2) the time-frequency trade-off: shortening the data length (a smaller window) to improve time resolution reduces frequency resolution, and vice versa; it can also cause the loss of low frequencies that are no longer fully contained in the data segment. This trade-off has been quantified: the product of time and frequency resolution must be greater than some minimum.
Despite these restrictions, the spectrogram or STFT has been used successfully in a wide variety of cases, especially in biomedical applications where only high-frequency components are of interest and frequency resolution is not critical.
The spectrogram can be generated in MATLAB either by using the standard fft function or by using a dedicated function of the Signal Processing Toolbox.
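A minimal sketch using the toolbox function is shown below; the chirp test signal, window length, overlap and FFT length are assumptions chosen to make the time-frequency trade-off visible, and do not correspond to any figure in this paper.

% Minimal sketch (assumed chirp signal): STFT/spectrogram with a short Hamming window
fs = 500; t = (0:1/fs:2-1/fs)';
x  = chirp(t, 5, 2, 60);                     % frequency sweeps from 5 to 60 Hz over 2 s
win = 128; noverlap = 120; nfft = 256;       % short window: good time, coarse frequency resolution
spectrogram(x, hamming(win), noverlap, nfft, fs, 'yaxis');
title('Spectrogram (STFT magnitude)');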
Wigner-Ville Distribution (A Special Case of Cohen's Class)
To overcome the trade-off between time and frequency resolution in the spectrogram or STFT, and some of its other shortcomings, a number of other time-frequency methods have been developed; the first of these was the Wigner-Ville distribution. Its dual name reflects its history: it was first used in physics by Wigner and later applied to signal processing by Ville. It is regarded as a special case of the wide variety of transformations in Cohen's class of distributions. [Boashash et al., 1987] gave practical information on calculating the Wigner-Ville distribution, and [Cohen L., 1989] is the classic review article on the various time-frequency methods in Cohen's class of distributions. The Wigner-Ville distribution, and all of Cohen's class of distributions, use an autocorrelation-based approach to calculating the power spectrum, but with a variation of the autocorrelation function in which time remains in the result, unlike the standard autocorrelation function, in which time is summed or integrated out so that the result is a function only of the shift or lag. In the classical approach, the power spectrum is obtained by taking the Fourier transform of the standard autocorrelation function, which is constructed by comparing the waveform with itself at all possible relative lags or shifts.
For these distributions, this variant of the autocorrelation is given here in both continuous and discrete form. The waveform is still compared with itself at all possible lags, but the comparison is made for all values of time instead of integrating over time. The result is called the instantaneous autocorrelation function, Eqs. (4) and (5); it retains both the lag and the time variables and is therefore a two-dimensional function.
$$R_x(t,\tau) = x\!\left(t+\tfrac{\tau}{2}\right)\,x^{*}\!\left(t-\tfrac{\tau}{2}\right) \qquad (4)$$
$$R_x(n,k) = x(n+k)\,x^{*}(n-k) \qquad (5)$$
where τ and k are the time lags, as in autocorrelation, and * denotes the complex conjugate of the signal x.
Most actual signals are real, so Eq. (3) can be applied either to the real signal itself or to a complex version of the signal known as the analytic signal.
This distribution has a number of advantages and shortcomings relative to the STFT. Its greatest strength is that it produces a sharp, well-defined picture of the time-frequency structure. The Wigner-Ville distribution shares many characteristics with the STFT, and it has one property the STFT lacks: finite support in time (the distribution is zero before the signal starts and after it ends) and in frequency (the distribution contains no frequencies beyond the range of the input signal). Because of the cross products, however, the distribution is not necessarily zero whenever the signal is zero; a distribution that is zero whenever the signal is zero is said, in Cohen's terminology, to have strong finite support.
The most serious shortcoming is the production of cross products, which create time-frequency energies that do not exist in the original signal, although they are contained within the time and frequency boundaries of the original signal. This property has been the main impulse for the development of other distributions that apply different filters to the instantaneous autocorrelation function in order to mitigate the damage caused by the cross products.
In addition, the Wigner-Ville distribution can have negative regions that have no physical meaning, and it has weak noise properties because of cross products with the noise, especially when the noise is distributed across all time and frequency. In some cases a window can be used to reduce the cross products and the influence of noise; the chosen window function is applied to the lag dimension of the instantaneous autocorrelation function. Windowing is thus a compromise between reducing the cross products and losing frequency resolution.
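The lines below are a minimal, illustrative sketch of a discrete Wigner-Ville distribution built directly from the instantaneous autocorrelation of Eq. (5); they are not the authors' implementation, and the chirp test signal, FFT length and plotting details are assumptions introduced here. The analytic signal (see the next subsections) is used to reduce cross products.

% Minimal sketch (assumed signal): Wigner-Ville distribution via Eq. (5) and an FFT over lags
fs = 500; t = (0:1/fs:1-1/fs)';
x  = chirp(t, 10, 1, 100);             % assumed test signal: 10-100 Hz chirp
xa = hilbert(x);                       % analytic signal reduces cross products
N  = length(xa); nfft = 256;
WV = zeros(nfft, N);
for n = 1:N
    kmax = min([n-1, N-n, round(nfft/2)-1]);       % largest admissible lag at this time point
    r = zeros(nfft, 1);
    for k = -kmax:kmax
        % instantaneous autocorrelation R(n,k) = x(n+k) * conj(x(n-k))
        r(mod(k, nfft) + 1) = xa(n+k) * conj(xa(n-k));
    end
    WV(:, n) = real(fft(r, nfft));     % Fourier transform over the lag dimension
end
imagesc(t, (0:nfft-1)*fs/(2*nfft), abs(WV)); axis xy
xlabel('Time (s)'); ylabel('Frequency (Hz)'); title('Wigner-Ville distribution');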
The Choi-Williams and Other Distributions
The general equation for determining a time-frequency distribution from Cohen's class of distributions is given by Eq. (6); this equation is rather formidable but can be simplified in practice:
$$\rho(t,f) = \iiint g(v,\tau)\, e^{j2\pi v(u-t)}\, x\!\left(u+\tfrac{\tau}{2}\right) x^{*}\!\left(u-\tfrac{\tau}{2}\right) e^{-j2\pi f\tau}\, dv\, du\, d\tau \qquad (6)$$
where g(v,τ) is known as the kernel and provides the two-dimensional filtering of the instantaneous autocorrelation.
These other distributions are also defined by Eq. (6), but the kernel g(v,τ) is no longer equal to 1. Eq. (6) can be simplified in two different ways: a) the integration with respect to the variable ...
Analytic Signal
The analytic signal is a modified version of the waveform, and all transformations in Cohen's class of distributions produce better results when applied to it. While the real signal can be used, the analytic signal (a complex version of the real signal) has several advantages. One of the most important is that the analytic signal contains no negative frequencies, so its use reduces the number of cross products. The sampling rate can also be reduced when the analytic signal is used. The analytic signal can be constructed by several approaches.
Wavelet Analysis
The wavelet transform is another method for describing the properties of, or processing, biomedical images and nonstationary biosignal waveforms (signals that change over time). The wavelet transform divides the waveform into segments of scale rather than sections of time, and it is based on a set of orthogonal basis functions obtained by contractions, dilations and shifts of a prototype wavelet.
The main difference between wavelet transforms and Fourier-transform-based methods is that the latter use windows of constant width, while the wavelet transform uses windows that are frequency dependent. By using narrow windows for high-frequency components, wavelet transforms achieve arbitrarily good time resolution there, and by using broad windows for low-frequency components they achieve arbitrarily good frequency resolution there.
The continuous wavelet transform (CWT) can be written mathematically as Eq. (7):
$$W(a,b) = \frac{1}{\sqrt{|a|}} \int_{-\infty}^{\infty} x(t)\, \psi^{*}\!\left(\frac{t-b}{a}\right) dt \qquad (7)$$
where x(t) is the given signal, a is the scale factor, b is the time shift, and * denotes the complex conjugate. The probing function ψ is called a "wavelet"; it can be any of a number of different functions, but it always takes an oscillatory form. The prototype wavelet function ψ(t) is termed the mother wavelet; when b = 0 and a = 1 the wavelet is in its natural form, that is, ψ1,0(t) = ψ(t). The family of wavelets is generated from the mother wavelet as
$$\psi_{a,b}(t) = \frac{1}{\sqrt{|a|}}\, \psi\!\left(\frac{t-b}{a}\right) \qquad (8)$$
By adjusting the scale factor, the window duration can be changed arbitrarily for different frequencies: if a is greater than one, the wavelet function is stretched along the time axis, whereas if it is less than one (but still positive) it contracts the function. Negative values of a simply flip the probing function on the time axis.
Because of the redundancy in the CWT coefficients, recovery of the original waveform from them is rarely performed; the more parsimonious discrete wavelet transform (DWT) is used when reconstruction of the original waveform is desired. The redundancy of the CWT is not a problem in analysis applications, but it is costly when the application needs to recover the original signal, because recovery requires all of the coefficients (many more are generated, through oversampling, than are strictly needed to specify the signal uniquely), and the computational effort can become excessive.
Even the DWT may retain some redundancy in producing a bilateral (invertible) transform, unless the wavelet is carefully chosen so that it leads to an orthogonal family or basis, in which case a nonredundant bilateral transform is obtained. [Wickerhauser, 1994] presented a rigorous and extensive treatment of wavelet analysis.
[Aldroubi et al., 1996] presented a variety of applications of wavelet analysis to biomedical engineering. In 1996 a new wavelet analysis method was applied by [Ye Datian et al., 1996] to the detection of the FECG from the abdominal signal, while in the same year [Echeverria et al., 1996] used wavelet multiresolution decomposition and a pattern-matching procedure, but the output still contained the MECG. [Rao et al., 1998] gave a good development of wavelet analysis, including both the continuous and the discrete wavelet transforms. Also in that year, a new method called Wavelet Analysis and Pattern Matching (WA-PM) was developed by [Echeverria et al., 1998].
This procedure was used for processing the abdominal ECG off-line, so its disadvantage is that it is time consuming, although the authors showed it to be reliable. [Conforto et al., 1999] compared four techniques for removing motion artifacts from the EMG signal (an 8th-order Chebyshev high-pass filter, a moving average (MA) filter, a moving median filter, and an adaptive filter based on orthogonal Meyer wavelets) and found that the wavelet filter gives excellent performance in preserving the input and in the temporal detection of EMG impulses contaminated with artifacts. In 2000 a wavelet-transform-based method was developed by [Khamene et al., 2000] to extract the FECG from the composite abdominal signal.
[Mochimaru et al., 2002] used wavelet theory to detect the FECG and concluded that this method gives good time resolution but weak frequency resolution at high frequencies, and good frequency resolution but weak time resolution at low frequencies. Also in that year, a wavelet-based denoising technique was proposed by [Tatjana Zikov et al., 2002] for removing ocular artifacts from the EEG signal; this method does not rely on a reference EOG or on visual inspection. In the same year, statistical wavelet thresholding was used by [Browne et al., 2002]; it is capable of distinguishing the EEG from artifact signals and of separating, in the time-frequency domain, artifacts that are localized or whose spectrum is uncharacteristic of the EEG.
The authors concluded that this method compares well with specialized artifact-elimination methods in some cases, but it has the disadvantage that it fails to improve the elimination of baseline drift, eye movement and step artifacts.
[Grujic et al., 2004] compared wavelet-based and classical digital filtering for denoising surface electromyogram (SEMG) signals; the results show that the main advantages of the wavelet technique are that the filtered signal contains no artificially inserted information and that the signal components may be thresholded separately to generate the filtered signal, while the main disadvantage is that the mother wavelet must be defined a priori and this selection may affect the final results. In the same year, [Azzerboni et al., 2004] proposed a method combining the wavelet transform and ICA for removing artifacts from the surface EMG; according to this study, user interaction is needed to identify the artifact. Also in that year, [Inan Güler et al., 2004] proposed a new approach based on the Adaptive Neuro-Fuzzy Inference System (ANFIS) for the detection of ECG changes in patients with partial epilepsy, implemented in two steps: feature extraction using the wavelet transform, followed by the ANFIS. The authors concluded that the proposed ANFIS is effective in detecting ECG changes in patients with partial epilepsy.
[Jafari et al., 2005] used Blind Source Separation (BSS) in the wavelet domain, addressing the problem of FECG extraction especially when the environment is noisy and time varying. In the same year, [Ping Zhou et al., 2005] compared the performance of different methods used for ECG artifact removal, namely high-pass filtering, spike clipping, template subtraction, wavelet thresholding and adaptive filtering; they examined the removal of ECG artifacts from myoelectric prosthesis control signals taken from the reinnervated pectoralis muscles of a patient with bilateral amputations at the shoulder disarticulation level.
Filter Banks
DWT-based analysis, for most signal and image processing applications, is easier to understand and implement using filter banks.
[Strang et al., 1997] provided thorough coverage of wavelet filter banks, including extensive mathematical background.
Subband coding uses a group of filters to divide a signal into different spectral components. Fig. 15 shows the most commonly used basic implementation of the DWT as a simple filter bank consisting of only two filters (lowpass and highpass) applied to the same waveform, so the filter outputs consist of a lowpass and a highpass subband. The analysis filter bank decomposes the original signal, while the synthesis filter bank reconstructs it. FIR filters are used throughout because they are stable and easier to implement.
Wavelet analysis is particularly good for signals that have long durations of low-frequency components and short durations of high-frequency components, such as EEG signals or sequences of interbeat (R-R) interval differences.
Wavelet analysis based on filter-bank decomposition is especially useful for detecting small discontinuities in a waveform; it is sensitive to small changes even when they appear only in the higher derivatives. This feature is also useful in image processing.
Fig. 16 shows the ECG signal decomposed by the analysis filter bank, with the top-most plot showing the outputs of the first set of filters (the finest resolution), the next plot showing the outputs of the second set of filters, and so on. Only the lowest (i.e., smoothest) lowpass subband signal is included in the output of the filter bank; the others are used only in the determination of the highpass subbands. The lowest plots show the frequency characteristics of the highpass and lowpass filters.
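A minimal sketch of such an analysis/synthesis filter bank, using the Wavelet Toolbox rather than an explicit two-filter implementation, is given below; the synthetic ECG-like signal, the db4 wavelet and the number of levels are assumptions introduced here and do not reproduce Fig. 16.

% Minimal sketch (assumed signal and wavelet): DWT analysis and synthesis filter banks
fs = 250; t = (0:1/fs:2-1/fs)';
ecg = sin(2*pi*1.2*t) + 0.2*sin(2*pi*25*t) + 0.05*randn(size(t));  % stand-in for an ECG record
levels = 4;
[C, Lbook] = wavedec(ecg, levels, 'db4');     % analysis filter bank (Daubechies 4)
approx = appcoef(C, Lbook, 'db4', levels);    % smoothest lowpass subband
d1     = detcoef(C, Lbook, 1);                % finest highpass subband (level-1 details)
ecgRec = waverec(C, Lbook, 'db4');            % synthesis filter bank reconstructs the signal
fprintf('Max reconstruction error: %.2e\n', max(abs(ecg(:) - ecgRec(:))));

With an orthogonal wavelet such as db4, the reconstruction error is at machine-precision level, illustrating the nonredundant bilateral transform mentioned above.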
Advanced Signal Processing Techniques: Optimal and Adaptive Filters
The first methods used for interference cancellation were non-adaptive techniques, and Wiener optimal filtering was one of the most widely used of these. The drawbacks of the Wiener filter are that it requires the autocorrelation matrix and the cross-correlation vector, and that computing the solution is time consuming because it involves matrix inversion. Because of these limitations, Adaptive Interference Cancellation (AIC) has been used to overcome the restrictions of Wiener optimal filtering.
Optimal Signal Processing, Wiener Filters
When designing FIR and IIR filters, the user cannot know in advance which frequency characteristics are best, or whether a given type of filtering will be effective in separating noise from signal; in this case the user has to rely on knowledge of the signal or source features, or on trial and error.
For this reason, optimal filter theory was developed to select the most suitable frequency characteristics for processing, using various approaches that depend on the signal and noise properties. The Wiener filter is a well-developed and popular type of optimal filter that can be used when a representation of the desired signal is available. A linear process (either an FIR or an IIR filter) operates on an input waveform containing signal plus noise; FIR filters are the more popular choice because of their stability. The basic concept of Wiener filter theory is to reduce the difference between the filtered output and some desired output; this reduction is based on the least mean square approach, which sets the filter coefficients to minimize the square of the difference between the desired and actual waveforms after filtering. [Haykin, 1991] is a definitive text on adaptive filters, including Wiener filters and gradient-based algorithms.
Besides standard filtering, the Wiener-Hopf approach has other applications such as interference cancelling, system identification, and inverse modelling or deconvolution.
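A minimal sketch of an FIR Wiener filter obtained from the Wiener-Hopf equations is shown below; the desired signal, the noise level and the filter order are assumptions introduced for illustration, and a reference copy of the desired signal is assumed to be available, as required by the theory above.

% Minimal sketch (assumed signals): FIR Wiener filter from the Wiener-Hopf equations
fs = 500; t = (0:1/fs:2-1/fs)';
d = sin(2*pi*7*t);                    % assumed desired (reference) signal
x = d + 0.8*randn(size(d));           % observed input = desired signal + broadband noise
L = 32;                               % assumed filter order
rxx = xcorr(x, L-1, 'biased');        % autocorrelation of the input
rdx = xcorr(d, x, L-1, 'biased');     % cross-correlation between desired signal and input
R = toeplitz(rxx(L:end));             % autocorrelation matrix (lags 0..L-1)
p = rdx(L:end);                       % cross-correlation vector (lags 0..L-1)
b = R \ p;                            % Wiener-Hopf solution for the FIR weights
y = filter(b, 1, x);                  % optimal (least-mean-square) estimate of d

Note that the whole record is used at once to form R and p, which is the block nature of the Wiener filter contrasted with adaptive filtering in the next subsection.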
Adaptive Signal Processing
Unlike classical spectral analysis methods, FIR and IIR filters, and the Wiener filter, which cannot respond to changes that might occur during the course of the signal, adaptive filters are able to modify their properties according to selected characteristics of the analyzed signal; i.e., in an adaptive filter the coefficients are adjusted and applied on an ongoing basis, while in the Wiener filter (a block approach) the analysis is applied to the complete signal and the resulting optimal filter coefficients are then applied to the complete signal.
An ideal adaptive filter is designed to make the filter's output as close as possible to some desired response, reducing the error to a minimum by modifying the filter coefficients, based on some signal property, through a feedback process. The stability of FIR filters makes them effective in optimal filtering and adaptive applications [Ingle et al., 2000]; for this reason, the adaptive filter is usually realized as a set of FIR filter coefficients.
The similarities between optimal and adaptive filters are, firstly, the nature of the desired response, which depends on the given problem and its formulation and is regarded as the most difficult part of the adaptive system specification [Stearns et al., 1996]; and, secondly, the formulation as an error minimization between the input and some desired output response, where the minimized quantity is the squared error with respect to the desired signal.
The LMS recursive algorithm is a simpler and more popular approach based on gradient optimization, obtained by adapting the same Wiener-Hopf equations for use in an adaptive environment [John L. et al., 2004]. The LMS algorithm uses a recursive gradient method, also called the steepest-descent method, to find the filter coefficients that produce the minimum sum of squared error. Its advantage is the simplicity and ease of its mathematical computation, while its drawbacks are the influence of non-stationary interference on the signal, the influence of the signal component on the interference, computer word-length requirements, coefficient drift, a slow convergence rate and a higher steady-state error.
Adaptive filters have a number of applications in biomedical signal processing. For example, they can be used to eliminate a narrowband noise source, such as 60 Hz line interference, that contaminates a broadband signal, or, conversely, to eliminate broadband noise from a narrowband signal; the latter process is called adaptive line enhancement (ALE) or adaptive interference suppression. In the ALE the narrowband component is the signal, while in adaptive interference suppression it is the noise. Adaptive filters can also be used for the same applications as the Wiener filter, such as inverse modelling, system identification and adaptive noise cancellation (ANC), which is the most important application in biomedical signal processing; ANC requires a suitable reference source that is correlated with the noise but not with the signal of interest.
[Suzuki et al., 1995] developed a real-time adaptive filter for cancelling the surrounding noise during lung sound measurements. [He et al., 2004] proposed a method based on adaptive filtering for cancelling ocular artifacts by separately recording two reference inputs (vertical and horizontal EOG signals) that are then subtracted from the original EEG; a Recursive Least Squares (RLS) algorithm was used to track the non-stationary portion of the EOG signals, and the authors concluded that this method is easy to implement, stable, converges quickly and is suitable for on-line removal of EOG artifacts. [Marque et al., 2005] developed an adaptive filtering algorithm specifically for removing the ECG signal contaminating the surface electromyogram (SEMG); in this study an ECG with a shape similar to that found in the contaminated SEMGs was first recorded, and the competing algorithms were then tested on 28 erector spinae SEMG recordings, with the best results obtained using a simplified formulation of a fast Recursive Least Squares algorithm.
Fig. 17. ALE application on ECG signal
Fig. 17 shows an application of the adaptive line enhancer to an ECG signal, where the LMS recursive algorithm is used to implement the ALE filter in order to eliminate broadband noise from a narrowband signal.
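The following lines are a minimal, illustrative sketch of an LMS-based ALE written directly from the update rule described above; they are not the code behind Fig. 17, and the test signal, decorrelation delay, filter order and step size are assumptions introduced here.

% Minimal sketch (assumed parameters): LMS adaptive line enhancer (ALE)
fs = 500; t = (0:1/fs:2-1/fs)';
x  = sin(2*pi*10*t) + randn(size(t));   % narrowband signal buried in broadband noise
L  = 32;                                % assumed adaptive FIR filter order
mu = 0.001;                             % assumed LMS step size
delay = 5;                              % decorrelation delay (samples) for the reference input
b  = zeros(L, 1);                       % adaptive FIR coefficients
y  = zeros(size(x)); e = zeros(size(x));
for n = L+delay:length(x)
    r = x(n-delay:-1:n-delay-L+1);      % reference vector = delayed copy of the input
    y(n) = b' * r;                      % filter output: enhanced narrowband component
    e(n) = x(n) - y(n);                 % error signal drives the adaptation
    b = b + 2*mu*e(n)*r;                % LMS (steepest-descent) coefficient update
end
% y approximates the sinusoid; e approximates the broadband noise component.

In an ANC configuration the same loop is used, but the reference vector is taken from a separate channel correlated with the noise, and the error e(n) becomes the desired, noise-cancelled output.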
Adaptive Noise Cancellation (ANC)
Removing 60 Hz noise from an ECG signal was one of the earliest biosignal processing applications of adaptive noise cancellation (ANC) [Widrow et al., 1975]; it has also been used to minimize the interference from the mother's ECG in fetal ECG (FECG) measurement.
In the ANC approach, a reference channel carries a signal that is correlated with the interfering noise but not with the signal of interest. The approach uses an adaptive filter that operates on the reference signal to produce an estimate of the noise, which is then subtracted from the signal channel to produce the desired output signal.
[Widrow et al., 1960] developed the Least Mean Square (LMS) adaptive algorithm and the Adaline network at Stanford University, and these came to be used simply and widely. [Widrow, 1964] introduced one of the earliest applications of adaptive noise cancellation, the elimination of 60 Hz noise from an ECG signal. [Koford et al., 1966] worked on adaptive systems to design a linear optimal pattern classifier. [Huhta et al., 1973] built an ANC system at Stanford University to cancel the 60 Hz interference in the ECG. The effectiveness of ANC in reducing additive (periodic or stationary) random interference in periodic and random signals was demonstrated by [Widrow et al., 1976; Glover, 1977]. [Griffiths, 1978] presented a gradient adaptive lattice noise canceller. [Ferrara et al., 1982] is an early application of adaptive noise cancellation to a biomedical engineering problem by one of the founders of the field. [Widrow et al., 1985] demonstrated the Adaline, Madaline and LMS methods for ANC. [Park et al., 1988] implemented an adaptive multi-channel noise canceller for real-time FECG monitoring using the TMS32020 programmable digital signal processor. In the same year, [Widrow et al., 1988] described neural networks and the practical applications of the adaptive linear combiner in signal processing.
[Akkiraju et al., 1992] proposed the ANC technique for reducing the interfering cardiac activity in the recorded myoelectric activity of respiratory muscles. [Chen et al., 1994] used adaptive cancellation of ECG artifacts in diaphragm EMG signals recorded through intraesophageal electrodes during swallowing and inspiration. [Yilmaz et al., 1996] presented some aspects of using dynamic neural networks (adaptive non-linear filtering) for filtering ECG signals, in comparison with adaptive linear filters. [Selvan et al., 1999] presented two common adaptive filtering techniques, namely adaptive noise cancellation and adaptive signal enhancement, using a single recurrent neural network for the elimination of ocular artifacts from the EEG signal. In the same year, [Hee-Kyoung Park et al., 1999] proposed a neuro-fuzzy controller for the adaptive cancellation of noise in a duct by producing a controlled sound pressure with the same amplitude and opposite phase to the noise to be eliminated. The results showed that the presented ANC system effectively removes the noise.
[Grieve et al., 2000] examined the use of an ANC filter with a neural network as the adaptive element for filtering the stimulus artifact in Somatosensory Evoked Potential (SEP) measurements. [Zarzoso et al., 2001] presented a comparison between the Blind Source Separation (BSS) procedure based on higher-order statistics and Widrow's multireference adaptive noise cancelling approach for noninvasive FECG extraction. [Ziarani et al., 2002] proposed a nonlinear adaptive method for removing power-line interference in ECG signals.
In the same year, [Hyung-Min Park et al., 2002] proposed ANC based on Independent Component Analysis (ICA) using higher-order statistics. Also in the same year, [Ng et al., 2002] used Back-Propagation (BPN) neural networks to diagnose breast cancer. [Martens et al., 2004] used a simple ANC with an internal reference signal for power-line reduction in ECG; they succeeded in removing the power-line harmonics but did not succeed in suppressing the interference in the case of significant line-frequency variations. In the same year, [Yin H. et al., 2004] proposed an application of ANC with an artificial neural network (ANN) based fuzzy inference system (FIS) for fast estimation of visual evoked potentials. [Martens et al., 2006] improved an adaptive canceller for reducing the main power-line interference component and its harmonics in ECG. With this method, the calculated signal-to-power-line-interference ratio is 30 dB, which is higher than the output of a notch filter.
Multivariate Analyses
Principal component analysis (PCA) and independent component analysis (ICA) belong to a branch of statistics known as multivariate analysis. The name "multivariate" means the analysis of multiple measurements or variables, which are nevertheless treated as a single entity, i.e. variables from various measurements made on the same system or process; these different variables are therefore often represented as a single vector variable that contains the multiple variables.
The aim of multivariate analysis is to apply transformations that reduce the dimensionality of a multivariate data set, producing a smaller data group that is simpler to interpret, by transforming one set of variables into a new, smaller one.
One application in biomedical signal processing is EEG analysis, where many signals are recorded from electrodes placed around the head over the cortex; these multiple signals originate from a smaller number of neural sources, whose combinations form the EEG signals.
The two techniques, PCA and ICA, differ in their objectives and in the criteria applied to the transformation.
In PCA, the purpose is to transform the data set into a new, smaller set of uncorrelated variables, i.e. to minimize the dimensionality of the data (not necessarily to obtain more meaningful variables) by rotating the data in M-dimensional space. In ICA the purpose is slightly different: to find new variables or components that are both statistically independent and non-Gaussian.
Principal Component Analysis (PCA)
PCA is a statistical procedure that uses an orthogonal transformation to convert a set of possibly correlated variables into a set of linearly uncorrelated variables called principal components. The number of principal components is less than or equal to the number of original variables.
This technique is intended for reducing the number of variables in a data set without loss of information and, as far as possible, for producing more meaningful variables, although these are not usually easy to interpret. Data reduction is the most important feature that has made PCA successful in image compression, but in many applications PCA is used only to provide information on the true dimensionality of a data set.
[Berg et al., 1994] presented a new multiple source eye correction (MSEC) approach to eye-artifact treatment based on multiple source analysis. The Principal Component Analysis (PCA) method requires precise modeling of the propagation paths of the signals involved. The results showed that the PCA method cannot perfectly separate the ocular artifact from the EEG when both waveforms have similar voltage magnitudes. [Lagerlund et al., 1997] presented PCA with Singular Value Decomposition (SVD) for spatial filtering of multichannel ECG. This method requires the distribution of the signal sources to be orthogonal, and its performance is limited to decorrelating signals, so it cannot deal with higher-order statistical dependencies.
PCA works by transforming a set of correlated variables into a new set of uncorrelated variables called the principal components. If the variables in a data set are already uncorrelated, PCA is of no value. The principal components are orthogonal and are ordered in terms of the variability they represent, so the first principal component represents the largest amount of variability in the original data set.
The PCA operation can be performed in different ways, but the most straightforward one is a geometrical interpretation. While PCA is applicable to data sets containing any number of variables, it is simpler to describe using only two variables, since this leads to easily visualized graphs.
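As a small, purely illustrative complement to the two-variable description above, the following NumPy sketch computes the principal components of a synthetic two-variable data set via the eigen-decomposition of the covariance matrix; the data themselves are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two correlated variables (e.g. two channels), 500 observations each.
x1 = rng.normal(size=500)
x2 = 0.8 * x1 + 0.2 * rng.normal(size=500)
X = np.column_stack([x1, x2])

Xc = X - X.mean(axis=0)                  # 1. centre the data
C = np.cov(Xc, rowvar=False)             # 2. covariance matrix of the variables
eigvals, eigvecs = np.linalg.eigh(C)     # 3. eigen-decomposition
order = np.argsort(eigvals)[::-1]        # 4. order by decreasing variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

scores = Xc @ eigvecs                    # principal components (uncorrelated)
explained = eigvals / eigvals.sum()
print("variance explained by the first principal component:", explained[0])
```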
Independent Component Analysis (ICA)
PCA shows that uncorrelated data are not sufficient to produce independent variables, at least when the variables have non-Gaussian distributions.
ICA is a computational method for separating a multivariate signal into collective subcomponents.These subcomponents are assumed to be non-Gaussian signals and statistically independent from each other.
The purpose of ICA is to transform the original data set into a number of independent variables in order to find more meaningful variables, not to minimize the dimensionality of the data set; when data-set reduction is also required, PCA is used for preprocessing the data. This problem appears in biosignal processing, for example in EEG, where the recorded signal is a mixture of the underlying neural sources in the head.
The main computational difference between ICA and PCA is that PCA uses only second order statistics while ICA uses higher order statistics.
Most signals do not have a Gaussian distribution, so they have non-zero higher-order moments, whereas variables with a Gaussian distribution have zero statistical moments above second order; this is why higher-order statistical properties are useful in ICA.
The similarity between ICA and PCA lies in how the components are determined: both start by removing the mean values of the variables (centering the data) and then whitening the data (sphering the data). The whitened data are uncorrelated and all components have unit variance.
ICA must meet two conditions: the source variables must be independent and non-Gaussian (the distributions of the source variables do not need to be known), and these two conditions work together when the sources are real signals. A third restriction is that the mixing matrix must be square, i.e. the number of sources should equal the number of measured signals; this is not a real limitation, because PCA can be applied to reduce the dimension of the data set so that it equals the number of sources. [Comon, 1994] presented ICA as an extension of PCA that not only decorrelates the data but can also deal with higher-order statistical dependencies. [Lee et al., 1996] used Infomax (an optimization principle for artificial neural networks). [Vigário, 1997] introduced ICA for extracting the EOG signal from EEG recordings. Extended Infomax, the most popular ICA algorithm for denoising EEG, as well as JADE (Joint Approximation Diagonalisation of Eigen-matrices, another source separation technique), were used by [Cardoso, 1998]. [Vigon et al., 2000] made a quantitative evaluation of different techniques used for ocular artifact cancellation in EEG signals; the results showed that the signal separation techniques JADE and extended ICA are more effective than EOG subtraction and PCA for filtering ocular artifacts from EEG waveforms. In the same year, [Jung et al., 2000] presented the successful application of ICA for removing EEG artifacts; the results showed that a number of different artifacts were cancelled and separated successfully from EEG and magnetoencephalogram (MEG) recordings. The disadvantages of this method are, firstly, that visual examination of the extracted sources is needed to carry out artifact removal and, secondly, that there is undesirable data loss when complete trials are removed.
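A minimal sketch of this centering/whitening/unmixing pipeline is given below, assuming scikit-learn is available and using invented mixtures of two non-Gaussian sources; it is only an illustration, not the algorithms used in the studies cited above.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two non-Gaussian sources and a square (2 x 2) mixing matrix.
t = np.linspace(0, 8, 2000)
s1 = np.sign(np.sin(2 * np.pi * 3 * t))       # square wave
s2 = np.sin(2 * np.pi * 0.7 * t)              # sine wave
S = np.column_stack([s1, s2])

A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                    # as many "sensors" as sources
X = S @ A.T                                   # observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)                  # centres, whitens, then unmixes
print(S_est.shape)                            # (2000, 2) estimated sources
```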
[Hyung-Min Park et al., 2002] proposed ANC based on Independent Component Analysis (ICA) using higher-order statistics. [Nicolaou et al., 2004] proposed the application of TDSEP (Temporal Decorrelation Source Separation, a specific extension of ICA) for automatic artifact removal from EEG signals; this analysis has the advantage of separating signals with a Gaussian amplitude distribution (because separation is based on the temporal correlation of the sources). [Azzerboni et al., 2004] proposed a method combining the wavelet transform and ICA for removing artifacts in surface EMG; according to this study, user interaction is needed in order to identify the artifact. [Yong Hu et al., 2005] introduced a denoising method using ICA and a high-pass filter (HPF) for removing the ECG interference in SEMG recorded from trunk muscles.
In biomedical image processing, ICA has been used to uncover active neural areas in functional magnetic resonance imaging, to estimate the underlying neural sources of the EEG signal, and to detect the underlying neural control components in an eye-movement motor control system. One of the most important applications of PCA and ICA is their ability to search for components related to blood-flow dynamics or artifacts in functional Magnetic Resonance Imaging (fMRI). fMRI is a technique for measuring brain activity that has become a major tool for imaging human brain function by detecting the changes in blood oxygenation and flow that occur in response to neural activity. fMRI was developed in the 1990s and became one of the most popular neuroimaging methods, which is mostly attributed to Seiji Ogawa and Ken Kwong. PCA and ICA were applied to an fMRI image with a rectangular active area (Fig. 18) in order to recognize the artifact and signal components in a region with active neurons, as shown in Figs. 19-20. The patterns obtained with ICA (Fig. 19) are better separated than those obtained with PCA (Fig. 20).
FMRI Parcellation
To obtain the best performance for whole-brain functional connectivity analysis, the brain must be divided into ROIs to be used as network nodes. ROIs are normally defined at the level of many voxels constituting a possibly small brain region, and rarely at the level of a single voxel. Several methods have been proposed for defining ROIs and studying function beyond the voxel level; they include three strategies: (1) randomly splitting the brain into anatomical or functional regions of interest (ROIs), (2) using an anatomical brain atlas, (3) brain parcellation using data-driven or clustering methods on functional data.
For randomly splitting the brain into ROIs, the selection of these regions depends on background knowledge and long experimentation; because of the cancellation problem, any signal lying outside the ROI will be ignored and the final results will not fit new data well. This is regarded as a limitation of the first strategy for defining ROIs.
The anatomical brain atlas provides a set of ROIs that cover the whole brain volume.
Brain parcellations are either anatomical or functional. The parcels in anatomical parcellations must be defined with the most appropriate atlas, while functional parcellations can be derived from resting-state functional Magnetic Resonance Images (rs-fMRI), activation data or other analyses, and they rely on data-driven or clustering methods applied to functional data. Parcellation approaches use brain activity and clustering to divide the brain into many parcels or regions with some degree of homogeneous characteristics. The brain is thus divided into defined regions with some degree of signal homogeneity, which helps in the analysis and interpretation of neuroimaging data, as well as in mining these data, because the amount of fMRI data is huge.
Parcellation with BOLD Shape
The parcellation method is another approach used for fMRI data analysis; it is used to overcome the mis-registration problem and to deal with the limitations of spatial normalization. As explained previously, parcellations are either anatomical or functional, and functional parcellation of the human cerebral cortex is one of the most important areas of neuroscience. Parcellation of the human brain was first done by Brodmann in the early 20th century, based on the brain cytoarchitecture, dividing it into 52 different fields. a) Flandin et al. (2002) used a brain parcellation technique to overcome the shortcomings of spatial normalization for model-driven fMRI data analysis. Using GLM parameters and group analysis, they functionally parcellated the brain of each subject into about 1000 homogeneous parcels. b) Thirion et al. (2006) used a multi-subject whole-brain parcellation technique to overcome the shortcomings of spatial normalization of fMRI datasets. Using GLM parameter analysis, they parcellated the whole brain into a given number of parcels: they pooled voxels from all subjects together and then derived parcel prototypes by applying the C-means clustering algorithm to the GLM parameters.
Parcellation Based on a Data Driven Approach
Data-driven analysis is widely used in fMRI data processing to overcome the limitations associated with the assumed shape of the Hemodynamic Response Function (HRF) and of the task-related signal changes that must be specified in advance. Besides the assumptions about the shape of the BOLD model, there is another limitation related to the subject's behaviour during the task. Data-driven analysis is therefore combined with the parcellation technique for fMRI data processing, in which the detection of brain activation is obtained from the information in the fMRI signal only. a) Yongnan Ji et al. (2009) introduced a parcellation approach for fMRI datasets based on Independent Component Analysis (ICA) and Partial Least Squares (PLS) instead of the GLM, and used spectral clustering of the PLS latent variables to parcellate the data of all subjects. b) Thomas B. et al. (2013) proposed a novel computational strategy to divide the cerebral cortex into disjoint, spatially neighbouring and functionally homogeneous parcels, using hierarchical clustering parcellation of the brain with resting-state fMRI. c) Thirion B. et al. (2014) studied the criteria of accuracy of fit and reproducibility of the parcellation across bootstrap samples, on both simulated and two task-based fMRI datasets, for the Ward, spectral and k-means clustering algorithms; they addressed the question of which clustering technique is appropriate and how to optimize the corresponding model, and their experimental results showed that Ward's clustering performed best among the alternative clustering methods.
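Purely as an illustration of the data-driven idea (and not the exact pipeline of any of the studies above), the sketch below parcellates synthetic voxel time series into a fixed number of parcels with Ward's hierarchical clustering; the array sizes, the number of parcels and the absence of a spatial constraint are all simplifying assumptions.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(2)
n_voxels, n_timepoints, n_parcels = 3000, 120, 50

# Synthetic stand-in for preprocessed fMRI data: one time series per voxel.
voxel_timeseries = rng.normal(size=(n_voxels, n_timepoints))

# Ward's hierarchical clustering groups voxels with similar time courses.
ward = AgglomerativeClustering(n_clusters=n_parcels, linkage="ward")
labels = ward.fit_predict(voxel_timeseries)    # a parcel index for every voxel

# A parcel's representative signal can then be taken as the mean of its voxels.
parcel_signals = np.vstack([voxel_timeseries[labels == k].mean(axis=0)
                            for k in range(n_parcels)])
print(parcel_signals.shape)                    # (50, 120)
```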
CONCLUSION
DSP techniques deal with processing signals in the digital domain and can be used for removing noise, adjusting signal characteristics, spectral estimation, multiplying two signals to perform modulation or correlation, filtering and averaging. ECG, EEG, EMG, ERG, EOG and others are examples of biosignals, and all of these biosignals have been examined in the frequency domain. There are many applications of signal processing in the biomedical field, covering different techniques: echo cancellation, noise cancellation, spectrum analysis, detection, correlation, filtering, computer graphics, image processing, data compression, machine vision, sonar, array processing, guidance, robotics and so on.
From this article we conclude that there is a growing interest and motivation in using digital signal processing techniques with biosignals like ECG, EEG and EMG; for example, the majority of the literature reviewed here was published within recent years.
From the second part of this article we conclude that, in spite of the obvious usefulness of brain atlases, existing atlases are limited because they are defined inconsistently and may not fit the data well.
Unlike brain atlases, data-driven parcellations used to define regions of interest can be derived from various image modalities reflecting different neurobiological information.
Besides the most popular parcellation techniques, which depend on assumptions about the shape of the BOLD model or on the subject's behaviour during the task, parcellations can also be obtained from data-driven analyses such as independent component analysis (ICA) and variants of principal component analysis (PCA), which rely on a linear mixing approach that changes the nature of the problem and implies other probabilistic models.
This work is of vital importance for researchers who want to learn about DSP applications with biomedical signals, and it is also a good starting point for researchers interested in fMRI brain-activation extraction techniques. In the future, this work can be extended to present results for an fMRI parcellation approach dealing with resting-state or task data.
Fig. 1. The application of the basic Fourier transform routine (fft) to an EEG signal to produce its spectrum.
Frequency-domain digital filtering techniques have been applied to the ECG for power-line interference removal. [Outram et al., 1995] presented two techniques, novel optimum curve fitting and digital filtering, to improve the quality of the FECG with minimum distortion and to measure its features, such as time constants, amplitudes and areas. [Stearns et al., 1996] gave a good treatment of the classical Fourier transform and digital filters (all basic digital filters can be interpreted as linear digital filters) and also covered the LMS adaptive filter algorithm. [Ingle et al., 2000] provided an excellent treatment of classical signal processing methods, including the Fourier transform and both FIR and IIR digital filters. In 2003, a variable step-size LMS for possible improvements in the performance of adaptive FIR filters in non-stationary environments was introduced by [Joonwan Kim et al., 2003].
Fig. 3. Frequency response of the FIR bandpass filter.
Fig. 4. The frequency spectrum of an EEG signal obtained using the FFT.
Fig. 5. Multiple recorded signals (top), forming an ensemble of individual responses; the ensemble average (bottom) is obtained by taking the average at each point in time across the individual responses [John L. et al., 2004].
Fig. 6. Spectra obtained from three different methods applied to (a) an EEG data waveform: comparison between (b) classical, (c) AR and (d) eigenanalysis spectral analysis methods; (e) shows the singular values determined from the eigenvalues.
a) The integration over the variable v can be performed in advance, since the rest of the transform (i.e., the signal portion) is not a function of v, for any given kernel. b) Or use can be made of an intermediate function, called the ambiguity function. One popular distribution is the Choi-Williams (exponential distribution, ED), because it has an exponential-type kernel; it has reduced cross products as well as better noise characteristics than the Wigner-Ville. [Boudreaux et al., 1995] presented an exhaustive, or very nearly so, compilation of Cohen's class of time-frequency distributions.
The different members of Cohen's class of distributions can now be implemented by a general routine that starts with the instantaneous autocorrelation function, evaluates the appropriate determining function, filters the instantaneous autocorrelation function by the determining function using convolution, and then takes the Fourier transform of the result. In this paper the routine is set up to evaluate an EEG signal treated with four different distributions: the Wigner-Ville distribution (as the default), Choi-Williams, Born-Jordan-Cohen and Rihaczek-Margenau, Figs. 7-14.
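The routine just described can be condensed into the following minimal NumPy/SciPy sketch of the (default) Wigner-Ville member of the class; the kernel-filtering step that produces the Choi-Williams and other distributions is only indicated by a comment, and normalization details are simplified.

```python
import numpy as np
from scipy.signal import hilbert

def wigner_ville(x):
    """Minimal discrete Wigner-Ville distribution (rows: frequency, columns: time)."""
    z = hilbert(x)                              # analytic signal reduces cross terms
    N = len(z)
    acf = np.zeros((N, N), dtype=complex)       # instantaneous autocorrelation R(t, tau)
    for n in range(N):
        kmax = min(n, N - 1 - n)
        for k in range(-kmax, kmax + 1):
            acf[k % N, n] = z[n + k] * np.conj(z[n - k])
    # For Choi-Williams, Born-Jordan-Cohen, etc., the lag-time plane `acf` would be
    # filtered (convolved) with the chosen determining function at this point.
    return np.real(np.fft.fft(acf, axis=0))     # Fourier transform over the lag axis

tfd = wigner_ville(np.sin(2 * np.pi * 50 * np.arange(0, 0.2, 1 / 1000)))
```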
Fig. 13. The Choi-Williams determining function.
The orthogonal basis functions, denoted by ψ*a,b(t), are obtained by scaling and shifting the prototype wavelet ψ(t) by a scale a and a time shift b respectively, Eq. (8):
ψ*a,b(t) = (1/√|a|) ψ*((t − b)/a)
Table 1. Comparison of Different Signal Processing Techniques

Spectral Analysis (Classical Techniques)
• A most useful method for frequency-based analysis of systems or responses, whether the signals are periodic or aperiodic.
• Divided into two categories: classical and modern methods; the classical methods are based on the Fourier transform, while the modern methods are based on the estimation of model parameters.
• Among the DSP techniques most used in biosignal analysis; most spectral analysis techniques try to improve the estimation accuracy of specific spectral features.
• The classical Fourier transform (FT) approach is the most popular technique used for spectral estimation; one advantage of the FT method is that sinusoidal signals have energy at only one frequency.
• The averaged periodogram is the approach used for estimating the power spectrum of a biosignal, and the Welch method is one of the most important methods for computing it.
• Cannot be used with transient responses of limited length or with systems having nonzero initial conditions.
• The waveform outside the data window is implicitly zero; this assumption can lead to distortion, and there are additional distortions due to the various data windows (including the rectangular window).
• Cannot respond to changes that might occur during the signal.

Digital Filters
• Used to reshape the spectrum in order to provide some improvement in SNR; these filters are closely related to spectral analysis.
• Divided into two groups, FIR and IIR filters, according to the way they achieve the reshaping of the spectrum and their approach; IIR filters are more efficient in terms of computing time and memory than FIR filters, which is the disadvantage of FIR filters.
• The user cannot know in advance which frequency characteristic or type of filtering will be most effective at separating the noise from the signal, and therefore depends on knowledge of the signal or source and on trial and error.
• Cannot respond to changes that might occur during the signal.

Cross-correlation and Coherence Analysis
• Used with any pairs of signals (stochastic, deterministic and multichannel signals).
• Very effective in determining the relationships between pairs of biomedical signals.

Ensemble Averages
• Deal with sets of data, especially from a random process; these sets of data are obtained when many records of the signal are available, possibly from many sensors.
• In many biosignals, the multiple records are obtained from repeated responses of the same sensor in the same place.
• The signal does not need to be periodic, but it must be repetitive.

Modern Techniques of Spectral Analysis
• Overcome some of the distortions generated by the classical techniques, especially when the available data segments are short.
• No need for windowing as in the classical methods.
• Divided into two wide classes: parametric and nonparametric; the parametric approach commonly comprises three model types: AR, MA and ARMA.
• AR is used for estimating spectra with sharp peaks but no deep valleys; conversely, the MA model is useful for estimating spectra with valleys but no sharp peaks, while the ARMA model combines both AR and MA features and is used with spectra that contain both sharp peaks and valleys.
• Parametric methods are methodologically and computationally more complicated than nonparametric ones because they require an a priori selection of the model order and structure; the eigenvector (nonparametric) techniques, in turn, are not considered true power spectral estimators because they do not preserve signal power and the autocorrelation sequence cannot be reconstructed by applying the Fourier transform to the estimators.
• Eigenvector (nonparametric) methods are not well suited to estimating spectra that contain broader-band features, but are good at identifying sinusoids in noise.

Time-Frequency Analysis
• Used to extract both frequency and time information from a nonstationary waveform.
• Unable to completely solve the time-frequency problem.

Time-Scale Methods (Wavelet Analyses)
• Describe the properties of a nonstationary waveform, dividing it into segments of scale rather than sections of time.
• Applied using a set of orthogonal basis functions obtained by contractions, dilations and shifts of a prototype wavelet; uses windows that are frequency dependent.
• A good technique especially for signals that have long durations of low-frequency components and short durations of high-frequency components, such as EEG signals or interbeat (R-R) interval series.

Multivariate Analyses
• In principal component analysis, the purpose is to transform the data set into a new, smaller set of uncorrelated variables, i.e. to minimize the dimensionality of the data (not necessarily to obtain more meaningful variables) by rotating the data in M-dimensional space.
• In independent component analysis the purpose is slightly different: to find new variables or components that are both statistically independent and non-Gaussian.
 | 15,415.4 | 2015-01-01T00:00:00.000 | [
"Medicine",
"Engineering",
"Computer Science"
] |
Behavioural response of female Culex pipiens pallens to common host plant volatiles and synthetic blends
Most mosquito species need to obtain sugar from host plants. Little is known about the chemical cues that Culex pipiens pallens use during their orientation to nectar host plants. In this study, we investigated the behavioural responses of female Cx. pipiens pallens to common floral scent compounds and their blends. The behavioural responses of female Cx. pipiens pallens to 18 individual compounds at different concentrations were determined in olfactometer bioassays. A synthetic blend composed of the behaviourally active compounds was formulated, and its attractiveness to mosquitoes was tested. The most attractive compounds were combined into a reduced blend, and its attractiveness was tested against the solvent and the full blend, respectively. Mosquito response in the olfactometer was analyzed by comparing the percentages of mosquitoes caught in the two arms with the χ² test (observed versus expected). Fifteen of the 18 compounds were attractive to female Cx. pipiens pallens in the dose-dependent bioassays, with the exception of β-pinene, acetophenone and nonanal. (68.00 ± 2.49) % of the mosquitoes responded to the full blend composed of these 15 compounds at their optimal doses when tested against the solvent, with a preference index of 46.11 ± 3.57. Six individual compounds whose preference indices were over 40 constituted the reduced blend, which attracted (68.00 ± 1.33) % of the mosquitoes when tested against the solvent, with a preference index of 42.00 ± 3.54. When tested against the full blend simultaneously in the olfactometer, the reduced blend attracted (45.00 ± 2.69) % of the released mosquitoes, making it as attractive as the full blend. Our results demonstrate that female Cx. pipiens pallens are differentially attracted by a variety of compounds at different concentrations. Altering the concentration strongly affects the attractiveness of the synthetic blend. Several floral scent volatiles might be universal olfactory cues for various mosquito species to locate their nectar host plants, and they could potentially be used in trapping devices for their surveillance and control.
Background
Culex pipiens pallens Coquillett, the most prevalent Culex species in Northeastern Asia, is the primary vector of lymphatic filariasis and epidemic encephalitis [1,2]. The application of large quantities of insecticides to control this mosquito species has resulted in the resistance of Cx. pipiens pallens to most types of insecticides, which makes its control increasingly difficult [3]. Therefore, the development of odour-bait technology has been advocated as a novel and more environmentally friendly method against mosquitoes, and it will be an important component of integrated vector management strategies [4,5].
Sugar feeding is a common behaviour of adult mosquitoes, and most mosquito species need to obtain sugar as a source of energy [6,7]. Floral and extrafloral nectaries are the primary sugar sources for mosquitoes [6]. Mounting evidence indicates that feeding on different kinds of nectar sources affects the flight performance, survival and fecundity of adult mosquitoes [8][9][10]. Plant-sugar feeding behaviours of Anopheles gambiae, Aedes albopictus and Cx. pipiens have been observed in laboratory and field experiments [11][12][13], and the results reveal that mosquitoes show differential preference for certain host plants [11][12][13][14][15][16]. However, the cues responsible for the differential attraction of mosquitoes to them are not well understood [6,17].
Previous studies indicated that visual cues and volatile compounds released by flowers were important cues for mosquitoes to discriminate and locate nectar host plants [6,7]. Various volatile compounds have been identified from the preferential plant hosts of some mosquito species, and most of them are aromatics, monoterpenes and fatty acid derivatives [16][17][18][19][20][21][22]. Several individual compounds could elicit electrophysiological and behavioural responses in An. gambiae, Cx. pipiens molestus and Ae. aegypti, and the attractiveness of compound blends on basis of their mean ratios in the scent profiles to mosquitoes has been evaluated [16,17,[19][20][21]. Moreover, the combination of vertebrate kairomones and synthetic plant odours as attractants in traps has shown a synergistic attraction in trapping outdoor populations of malaria vectors [4], which exhibits extensive application prospect in surveillance and control of mosquito populations.
Floral volatile organic compounds are mainly composed of four chemical groups: aromatics, monoterpenes, sesquiterpenes and fatty acid derivatives [23]. In the present study, we investigated the behavioural responses of Cx. pipiens pallens females to 18 individual compounds from different groups which are distributed in the volatiles of mosquitoes' host plants in the dual-choice olfactometer. The attractiveness of a full blend composed of 15 behaviourally active compounds was also tested. Six most attractive compounds were mixed and its attractiveness was tested against the solvent and the full blend, respectively. Our study demonstrated that several odours might be universal chemical cues mediating mosquitoes' orientation to host plants. By optimizing the dose and constituent of their mixture, we aimed to develop a synthetic floral odorant blend which could be used as attractants in traps.
Mosquitoes
Mosquitoes used in this study were collected from water pools in urban residential areas in Hangzhou, Zhejiang, China, and had been kept in the insectary for over 50 generations. They were reared at 25 ± 1 °C with a photoperiod of 14:10 (L:D) and 75 % relative humidity. The adults were maintained on a diet of mouse blood for 3 consecutive days from the 5th day after emergence, with 5 % sucrose solution continuously available. Fully engorged females were allowed to lay eggs in oviposition cups (5 cm diameter, 7 cm depth) inside the cages. Eggs were transferred into plastic basins (30 cm diameter, 11 cm depth) filled to a depth of 8 cm with dechlorinated tap water, and commercial rat food was provided for the mosquito larvae. Pupae were collected and transferred to mesh-covered cages (35 × 35 × 35 cm). Newly emerged female adults intended for use in bioassays were given access to water only. Experiments were conducted 12-24 h after emergence.
Ethics statement
Although mosquitoes were fed upon the mouse blood, this was performed only as a routine method for mosquito maintenance and it was approved by Zhejiang University ethical review committee.
Dual-choice olfactometer assays
All behavioural assays of female Cx. pipiens pallens to the individual compounds and their synthetic blends were conducted in a modified dual-choice Y-shape olfactometer, as shown in Fig. 1. The olfactometer was made of glass and it consisted of three parts: the release part, the flight part and the olfactometer arms. The release part (diameter 5 cm, length 25 cm) with a demountable glass air outlet was at the downwind end of the olfactometer. To allow the mosquitoes to acclimatize in the release part prior to each trial, a demountable fabric mesh screen was placed between the release part and the flight part. A built-in polycarbonate funnel-shaped outlet (length 4 cm, narrower opening upwind with diameter 2 cm) was located at the upwind of the flight part (diameter 5 cm, length 40 cm) to avoid mosquitoes passing through the tunnel hastily. The angle between two olfactometer arms (diameter 5 cm, length 30 cm) was 90°. Odorants from the glass jars (diameter 8 cm, height 15 cm) entered the olfactometer arms through two rubber plugs (not shown in the schematic drawing), which were inserted on the upwind of the arms, respectively.
Eight doses of each compound were prepared, at concentrations of 0.25, 0.5, 0.75, 1, 1.25, 1.5, 1.75 and 2 µg/mL in pentane. The stimulus and the solvent were released by placing 550 µL onto 100 mg medical dental rolls in petri dishes, respectively. They were left for 10 min at room temperature to allow the solvent to evaporate, and were then placed into the glass jars, respectively.
Bioassays were conducted during the first 2 h of the dark period, and one 15-W red fluorescent bulb was placed above the olfactometer to illuminate the test arena evenly [17]. The temperature and humidity were adjusted to 25-28 °C and 65-75 %, respectively. Air flow from a compressed air pump was first charcoal-filtered and humidified by passing it through distilled water. The purified air entered each glass jar at a flow rate of 0.7 m³/h, and the wind speed at the centre of the flight part was 10 cm/s. Groups of 10 robust females were released into the release part for 10 min to acclimatize prior to each test. Dental rolls containing the stimulus and the solvent were then placed into the glass jars simultaneously. Mosquitoes in the release part were released 30 s later by removing the mesh screen, and the numbers of mosquitoes in the two olfactometer arms were counted 5 min later. Mosquitoes in the olfactometer were removed immediately with an aspirator after each trial. The trials for each dose of the individual compounds were replicated 10 times. Mosquitoes and dental rolls were replaced with new ones in each trial. The positions of the treatment and control arms were alternated between trials to eliminate positional biases.
All equipment was washed with ethanol and placed in an oven at 100 °C for 6 h. Surgical gloves were worn during the experiments to avoid contamination.
According to the attractiveness of the 18 individual compounds at the different concentrations tested above, a full-component blend comprising 15 behaviourally active compounds at their optimal doses was prepared at different concentrations (4-fold, 2-fold, 1×, 0.5× and 0.25× the optimal doses). The attractiveness of the full blend at each of these concentrations was tested against the solvent in the olfactometer. The bioassay procedures were the same as described above.
In order to simplify the composition of the full-component blend, a reduced blend composed of the 6 most attractive compounds at their respective optimal concentrations was prepared. Its attractiveness to female Cx. pipiens pallens was tested against the solvent and the full blend in the dual-choice assays, respectively. All bioassays were carried out as described above and replicated 10 times.
Statistical analysis
The percentages of mosquitoes responding to the test odour at each concentration and to its paired control odour were calculated after the trials had been replicated 10 times. For each compound/blend at each concentration, the χ² observed versus expected test (based on the percentages of responding mosquitoes in the two olfactometer arms) was used to assess the attractiveness of the tested odours, with the significance threshold at 0.05 [19]. A preference index (hereafter referred to as PI) was also used to evaluate the attractiveness of the individual compounds and their blends tested in the dual-choice bioassays. It was calculated from NP, the number of mosquitoes responding to the test odours, and NC, the number of mosquitoes responding to the control odours [17,19]. A positive value indicates that the majority of the mosquitoes responded to the tested odorants, while a negative value indicates the converse [17]. All statistical analyses were carried out using SPSS v. 16 (SPSS Inc., Chicago, USA).
Fig. 1. A schematic drawing of the Y-tube olfactometer (not drawn to scale). Dental rolls containing the stimulus and the solvent were placed in the petri dishes, respectively. Mosquitoes entered the flight part when the fabric mesh screen was removed and made their behavioural choice at the funnel-shaped outlet.
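A minimal sketch of the χ² test and preference-index computation described above, for one set of trials, is shown below; the counts are invented, and the preference-index expression used in the code, PI = (NP − NC)/(NP + NC) × 100, is a common convention assumed for illustration, since the paper's exact formula is not reproduced above.

```python
from scipy.stats import chisquare

# Invented example counts pooled over 10 replicates (10 mosquitoes released per trial).
n_test, n_control, n_released = 62, 28, 100

# Chi-square test, observed vs. expected, assuming responders split 50:50 by chance.
chi2, p = chisquare([n_test, n_control])
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")

percent_response = 100 * n_test / n_released   # % of released mosquitoes in the test arm

# Preference index (assumed convention): positive values mean that more
# mosquitoes chose the test odour than the control odour.
pi = 100 * (n_test - n_control) / (n_test + n_control)
print(f"response = {percent_response:.1f} %, PI = {pi:.1f}")
```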
Behavioural responses of Cx. pipiens pallens to individual compounds
When the selected 18 compounds were prepared at concentrations ranging from 0.25 µg/mL to 2 µg/mL, 15 compounds were attractive to female Cx. pipiens pallens compared with the solvent, with the exception of β-pinene, acetophenone and nonanal (Table 1, see Additional file 1: Figure S1 for release rates). Of the tested monoterpenes, α-pinene and D-limonene were attractive to mosquitoes only at concentrations of 0.5 µg/mL and 0.25 µg/mL, respectively (Table 1, Additional file 2: Figure S2A). Linalool and linalool oxide were optimally attractive at 0.75 µg/mL and 0.5 µg/mL, respectively (Table 1, Additional file 2: Figure S2A). (E)-β-ocimene was attractive to mosquitoes in a dose-dependent manner, with an optimal dose of 1 µg/mL (Table 1, Additional file 2: Figure S2A). All the selected aromatic compounds were attractive to Cx. pipiens pallens females, with the exception of acetophenone. Phenyl acetaldehyde was attractive to mosquitoes at 0.5 µg/mL (Table 1, Additional file 2: Figure S2B). The optimal dose of methyl salicylate for attraction was 1.25 µg/mL (Table 1, Additional file 2: Figure S2B). Multiple concentrations of benzaldehyde, benzyl alcohol and phenylethyl alcohol attracted significantly more Cx. pipiens pallens than the solvent, and they were optimally attractive at 0.5 µg/mL, 1 µg/mL and 0.75 µg/mL (Table 1, Additional file 2: Figure S2B), respectively.
Among the 6 tested fatty acid derivatives, female Cx. pipiens pallens were attracted by 5 compounds, with the exception of nonanal (Table 1, Additional file 2: Figure S2C). (Z)-3-hexenyl acetate and (E)-2-hexenal were both attractive to mosquitoes at 0.75 µg/mL and 1 µg/mL, with the optimal dose at 1 µg/mL (Table 1, Additional file 2: Figure S2C). (Z)-3-hexen-1-ol and hexanal were attractive to mosquitoes at low concentrations, and they were optimally attractive at 0.5 µg/mL and 0.75 µg/mL (Table 1, Additional file 2: Figure S2C), respectively. The attractiveness of hexanol to mosquitoes varied between concentrations, with the optimal dose at 1 µg/mL (Table 1, Additional file 2: Figure S2C). (Note to Table 1: the trials for each dose of each compound were replicated 10 times, and the percentages of mosquitoes responding to the compounds and their pairwise controls were calculated and compared by χ² test, observed vs. expected.)
Attraction of Cx. pipiens pallens to the behaviourally active compound blends
The mixture composed of these 15 behaviourally active compounds was formulated at different concentrations, and the results showed that mosquitoes responded to the compound blend in a dose-dependent manner. (68.00 ± 2.49) % of Cx. pipiens pallens females were strongly attracted by the blend composed of the compounds at their optimal doses (χ² = 8.05, df = 1, P < 0.01), with a PI of 46.11 ± 3.57 (Fig. 2, see Additional file 3: Figure S3 for release rate). When the concentrations decreased to 50 % and 25 % of the optimal doses, respectively, the attractiveness of the blend to mosquitoes decreased and was not significantly different from that of the solvent (χ² = 1.66, df = 1, P > 0.05 and χ² = 1.83, df = 1, P > 0.05, respectively) (Fig. 2). When the concentrations increased to twice and 4-fold the optimal doses, respectively, most of the tested mosquitoes responded to the solvent rather than to the compound blend (χ² = 0.184, df = 1, P > 0.05 and χ² = 1.25, df = 1, P > 0.05, respectively) (Fig. 2).
Discussion
In the present study, we modified the Y-tube olfactometer and tested the behavioural responses of female Cx. pipiens pallens to common host plant volatiles and their synthetic blends. The results showed that most mosquitoes could respond to the tested odours in the olfactometer within 5 min, indicating that it is an excellent device for testing the behavioural responses of mosquitoes to volatile compounds. Besides, consistent with previously reported results in other mosquito species [17][18][19][20][21], our results showed that Cx. pipiens pallens females can discriminate and respond to different floral scent odours, and that their behavioural responses are closely related to the concentrations of the tested compounds. Most individual compounds were attractive to Cx. pipiens pallens at the lower end of the concentration range, and there was a clear repellent effect for many of the compounds at the higher doses. Increases or dilutions of the full blend's concentration significantly affected its attractiveness, indicating a concentration-dependent reversal of the response. Previous reports have demonstrated that mosquitoes show differential preference for various host plants in laboratory and field experiments [14][15][16], whereas how mosquitoes successfully select and locate their nectar host plants is still not well understood. The structure of the inflorescence, the accessibility of sugar and the related nutritional benefits from plants are considered important factors influencing mosquitoes to select different flowering plants and seed pods as their sugar sources [6,10,16,24], whereas the colour of flowers, as well as the floral scent compounds, is thought to be an important cue for mosquitoes to discriminate and locate various plants [6]. The compounds tested in the present study have been identified from volatiles and pentane extracts of Silene otites (Caryophyllaceae), Asclepias syriaca (Asclepiadaceae) and other host plants of mosquitoes, and most of them are electrophysiologically and/or behaviourally active in various mosquito species [17][18][19][20][21]. In the present study, newly emerged female Cx. pipiens pallens were differentially attracted by these compounds, which indicated that several volatile compounds might be universal olfactory cues in various mosquitoes' orientation to sugar sources [7,22].
Fig. 2. Percentage of Cx. pipiens pallens responding to a mixture of 15 compounds at different dilutions. Asterisks (**) denote significant differences at P < 0.01 by χ² test (observed vs. expected).
Most of the selected compounds were behaviourally active to female Cx. pipiens pallens at concentrations ranging from 0.25 µg/mL to 2 µg/mL, with the exception of β-pinene, acetophenone and nonanal. β-pinene is a common constituent of floral scents, and it has been detected in the volatiles of flowering plants that are attractive to An. gambiae and Cx. pipiens [16,17,21]. β-pinene elicits a strong electrophysiological response in An. gambiae but, consistent with the present results, it is not attractive to An. gambiae either [17]. Acetophenone occurs in the floral scent profiles of S. otites, and it elicits a weak electrophysiological response in Cx. pipiens molestus [19,20]. In the present study, Cx. pipiens pallens were not attracted by acetophenone at concentrations of 0.25-0.75 µg/mL, and it had a repellent effect at 1.00-1.75 µg/mL. However, this result is contrary to that demonstrated in Cx. pipiens molestus, which is highly attracted by acetophenone [19]. Nonanal has been detected in the volatiles of Senna occidentalis flowers and in pentane extracts of A. syriaca flowers, both of which are attractive to An. gambiae [16,21]. However, the synthetic blend of its host-plant volatiles is as attractive as the reduced blend without nonanal in subtractive bioassays, which indicates that nonanal does not contribute to the attractiveness of the blend of plant volatiles [21]. Besides, nonanal is an important olfactory cue for mosquitoes searching for blood hosts [25,26], and olfactory receptors for nonanal have been identified in Cx. pipiens quinquefasciatus [26]. In the present study, newly emerged Cx. pipiens pallens were not attracted by nonanal at low concentrations and were repelled by it at high concentrations. Thus, we speculate that nonanal may have an important role in mediating female mosquitoes' search for blood hosts rather than for sugar meals.
In the present study, we formulated a reduced blend composed of the six compounds with the highest average preference indices, and its attractiveness to mosquitoes was tested against the solvent and the full blend, respectively. The results showed that the full blend and the reduced blend attracted (68.00 ± 2.49) % and (68.00 ± 2.13) % of the mosquitoes, respectively, when tested against the solvent. When tested against each other, the full blend was as attractive as the reduced blend (percent response: 47 % versus 45 %). However, the mean preference indices of the full blend and the reduced blend were 46.00 and 42.00, respectively, which is higher than that of most individual compounds but similar to or lower than that of (E)-β-ocimene and phenylethyl alcohol (mean PI: 52.73 and 46.00, respectively). Similar results have also been observed in bioassays of Cx. pipiens molestus with synthetic blends, where a mixture of the four most attractive compounds attracted more mosquitoes than the mixture of 14 compounds but was as attractive as phenyl acetaldehyde alone [19]. Therefore, in the present study, the presence of other compounds in the blends could not significantly strengthen their attractiveness, and the lack of an additive effect suggests that the reduced blend could be further simplified without loss of attraction.
Based on the sugar-feeding behaviour of adult mosquitoes, the newly developed attractive toxic sugar baits (ATSB) have been successfully applied in controlling a variety of mosquito species [27][28][29]. As part of the ATSB optimization process, an increasing number of attractive flowering plants and seed pods have been identified as potential sugar sources for a variety of mosquito species [13][14][15][16], but little is known about the olfactory basis of the floral preference of mosquitoes among their nectar host plants. The present study exploited the attractiveness of synthetic floral volatile compounds to Cx. pipiens pallens, which could potentially be used in trapping devices for the surveillance and control of mosquito populations. Compared with vertebrate kairomone-based attractants, phytochemical attractants could attract both males and females in all gonotrophic states [7]. Given that different mosquito species might use different chemical cues for orientation to nectar host plants, and that they usually visit multiple flowering plant species for sugar feeding, more volatile compounds that are behaviourally active to mosquitoes remain to be identified. Besides, how to optimize the release rate, how to combine kairomones with phytochemicals and how to develop the trap devices deserve further research in order to maximize the capture rate [7].
(Figure note: asterisks (**) denote significant differences at P < 0.01 by χ² test, observed vs. expected; NS denotes no significant difference.) | 4,877.6 | 2015-11-17T00:00:00.000 | [
"Environmental Science",
"Biology"
] |
Evaluating Google Speech-to-Text API’s Performance for Romanian e-Learning Resources
This paper presents a way of performing ASR on multimedia e-learning resources available in Romanian with the usage of the Google Cloud Speech-to-Text API. The material presents the history of ASR systems together with the main approaches used by the algorithms behind these systems. The cloud computing providers, that offer ASR solutions via SaaS, are analyzed as well. After performing a short literature review, the author focuses on applying the Google Cloud Speech-to-Text API on various video e-learning resources available online on YouTube. By doing this, the resources can be easily indexed and transformed into searchable materials. The WER score is used in order to measure the accuracy of the model and to compare it with similar works. The results are more than satisfying, thus the proposed model can be used as a method of automating the indexing of multimedia e-learning resources.
Introduction
Automatic Speech Recognition, abbreviated ASR, is not a new topic in the Computer Science field. In fact, the first steps were taken in the '50s, when a digit recognition system was developed by AT&T Bell Labs [1]. However, it was in the 2000s that speech recognition systems successfully started to understand large vocabularies in uncontrolled environments. Table 1 summarizes the evolution of ASR systems; its final stage, still to be reached, is real-time recognition with 100 % accuracy of all words that are intelligibly spoken by any person, independent of vocabulary size, noise, speaker characteristics or accent. Most probably, in the following years, humanity will see real-time speech recognition systems that achieve an accuracy similar to that of a native speaker [2]. But, until then, let us look at the existing ASR-related algorithms. There are several approaches, in terms of algorithms, that were or are still used for performing ASR [2]: template-based approaches, which focus on matching unknown speech with pieces of already known information; knowledge-based approaches, where variations in speech are stored in the system and more complex rules are deduced by an inference engine; neural network-based approaches, which use neural network AI algorithms to automatically detect speech based on training data; dynamic time warping-based approaches, which focus on matching sequences of the same speech that vary in time or speed; and statistical approaches, which use automatic learning procedures on large amounts of training data; the most popular statistical algorithm is the Hidden Markov Model (abbreviated HMM), which is the current state of the art. Nevertheless, the scope of this paper is not to develop an ASR algorithm for Romanian. There have been several attempts in this direction, such as [3] and [4], but because Romanian is categorized as an under-resourced language [5] it would be relatively hard to develop a complete ASR algorithm with satisfying performance. Instead, we will focus in the next part of this chapter on the ASR algorithms available in the cloud as part of the main cloud computing providers' portfolios. There are five major players in this field, each of them with its own personal assistant: Google with the so-called Google Assistant, Apple with Siri, Microsoft with Cortana, Amazon with Alexa and IBM with Watson. Of these five players, Apple does not have a cloud service that exposes the technology behind Siri, and IBM does not have an assistant per se, Watson being used as a term with larger connotations. Below are presented the main cloud computing providers that offer ASR algorithms in the form of SaaS. Amazon provides a cloud service called Amazon Transcribe that focuses on ASR. It was launched in 2018 and supports five languages in different dialects: English, Spanish, French, Italian and Portuguese [6]. Unfortunately, support for Romanian has not been announced yet, and it is still questionable whether it will be. Microsoft offers a Speech to Text component in its Azure Cloud Service [7]. It has support for six languages in different dialects (English, Chinese, French, German, Italian and Spanish), but again, Romanian is not among them. IBM has also integrated a speech-to-text algorithm into the Watson Cloud Services. It supports the largest number of languages among the systems presented so far (Portuguese, French, German, Japanese, Korean, Mandarin Chinese, Modern Standard Arabic, Spanish, UK English and US English), but still no support for Romanian [8].
Google is the only cloud provider that supports Romanian among the languages available in the Cloud Speech-to-Text component. The component offers ASR support for around 120 languages and dialects [9] and it will be the one that we will focus on in the following chapters of this paper.
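As an illustration of how such a request can look, here is a minimal sketch using the google-cloud-speech Python client with Romanian selected as the recognition language; the file name is hypothetical, authentication is assumed to be configured via a service-account key, and this is not the exact pipeline used later in the paper.

```python
# Assumes the google-cloud-speech client library (v2.x) is installed and that
# GOOGLE_APPLICATION_CREDENTIALS points to a valid service-account key.
from google.cloud import speech

def transcribe_romanian(local_audio_path: str) -> str:
    """Send one short (< 1 min) FLAC file to the Cloud Speech-to-Text API."""
    client = speech.SpeechClient()

    with open(local_audio_path, "rb") as f:
        audio = speech.RecognitionAudio(content=f.read())

    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.FLAC,
        sample_rate_hertz=16000,
        language_code="ro-RO",       # Romanian, the language evaluated in this paper
    )

    response = client.recognize(config=config, audio=audio)
    # Keep the highest-confidence alternative of every result.
    return " ".join(result.alternatives[0].transcript for result in response.results)

if __name__ == "__main__":
    # Hypothetical file name: the first minute of one corpus video, converted to FLAC.
    print(transcribe_romanian("video_01_first_minute.flac"))
```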
Related Work
This chapter focuses on a literature review in the field of ASR in general, of ASR applied to Romanian in particular, and of alternatives to the Google Speech Recognition API. The authors of [5] survey the under-resourced languages for speech recognition. Romanian is identified as one of them, among other European languages like Croatian, Icelandic, Latvian, Lithuanian and Maltese. Another problem raised by this paper, which affects Romanian too, is the use of diacritics: even if human readers can easily understand texts without diacritics, such texts cannot be used as training data for an ASR algorithm. The paper also proposes WER (Word Error Rate) as the main metric for evaluating ASR performance. Paper [3] presents some approaches for performing ASR for the Romanian language. The authors use a training dataset of 3300 phrases uttered by 11 speakers, 7 males and 4 females. They use WRR (Word Recognition Rate) as the main metric and obtain the highest recognition rate, 90.41 %, by using cepstral analysis. This is one of the best results obtained so far by an ASR algorithm for Romanian and will be the point of reference for our approach. Paper [2] presents the main challenges related to ASR. A classification of speech recognition algorithms is given, together with a presentation of the evolution of ASR systems (both topics are already covered in the introduction). Also, the results obtained in [3] are listed there as the main accomplishment for Romanian. In [4] the authors try to enhance ASR for Romanian by using machine-translated and web-based text corpora. They use a training dataset of 644 phrases uttered by 21 speakers, or, in terms of time, more than 6.5 hours of speech. The lowest WER obtained was 20.7 %, when the largest dataset (europarl + 9am.DRS2 + hotnews.DRS2) was used. We will not focus on this approach in the current paper, first of all because we are not developing an ASR algorithm per se and, second, because Google already uses web-based text corpora for training its speech-to-text algorithm. The result can, however, be useful for comparison purposes. Paper [10] measures the accuracy of the Google Web Speech API in recognizing and transcribing words spoken by Japanese learners of English. The tests are performed on simple English phrases. The results show that Google's API has a mean accuracy of 89.4 for native speakers and 65.7 for non-native speakers. No alternatives to Google's API are presented in that paper, and because the authors do not specify whether standard metrics (WER, WRR) are used for evaluating the algorithm's performance and what they mean by accuracy, it is relatively hard to compare this approach with similar ones. In [11] the authors try to outperform the Google Speech Recognition API by using Sphinx, a group of open-source speech recognition systems developed at Carnegie Mellon University. The corpus consists of 3000 sentences related to bus schedule information. Sphinx obtained a WER of 51.2 % and outperformed Google by 3.3 %. The relevance of these results is arguable because, first of all, the difference is not significant and, second, because Google does not train its algorithm to be domain-specific (Pittsburgh bus stations in this case). At the same time, a similar approach for Romanian could take a lot of development time, most of it spent on gathering training data.
Since the results obtained in that paper when comparing the two systems' accuracy are more or less similar, we considered using the out-of-the-box Google algorithm for measuring ASR performance on Romanian e-learning resources. Future work can also focus on testing Sphinx's performance for Romanian on a larger corpus of YouTube videos than the one used in this paper. To conclude this chapter, there are several experiments that have tried to perform ASR for Romanian, but none of them targeted e-learning resources. The added value of this paper in comparison with similar works that use the Google Speech-to-Text API is, first of all, the fact that we are doing this on an under-resourced language and, second of all, the fact that we will try to outperform the existing attempts. By taking advantage of its servers and of the already existing machine learning algorithms, Google created the first large-scale ASR system and is considered the pioneer of modern ASR algorithms [12]. The Cloud Speech-to-Text API (figure 1) was launched in April 2017 and included from the beginning support for 90 languages, with 30 more added in August 2017 [13]. Besides basic ASR, the cloud service offers additional features like phrase hints, real-time streaming, language auto-detection, inappropriate content filtering, automatic punctuation, multichannel recognition and others [14]. "In ancient times having power meant having access to data, today having power means knowing what to ignore," said Yuval Noah Harari in his book "Homo Deus". We are living in a decade where information is everywhere. The problem that arises is how one can know whether the e-learning resources that he or she is interested in are the proper ones, especially if the resources are in video format. With YouTube gaining more and more traction, humanity has to deal with 300 hours of video uploaded to this platform every minute [15].
Among those are, of course, valuable resources, but how can one determine which videos to watch and which to ignore, which ones are valuable e-learning materials and which ones are just cat videos? The problem exists in large repositories of e-learning data as well. How can one determine quickly whether a video contains the information that he or she is interested in? Even if the subject of the video or the title suggests so, the content may not be the desired one. One possible way of resolving this is to automatically transcribe the voice from multimedia e-learning materials. By doing this, classical text-based search algorithms can be applied to the newly obtained text resource. The search can be done in a statistical manner, based on keywords, or by using Named Entity Recognition algorithms, which can determine the taxonomy of the content within the video, together with different relations between the identified entities. From this point on, semantic approaches can be used to filter the data [16]. ASR algorithms can help automate the task of transcribing the speech from video (tutorials, webinars, online lessons) or audio (e-books, podcasts) e-learning resources, a task that we will focus on in greater detail in the next chapter.
Evaluating Google Speech Recognition API's Performance for Romanian
In order to evaluate the Google Cloud Speech-to-Text API's performance on Romanian e-learning resources, a corpus of YouTube videos was selected. The dataset is presented in full in table 2; in broad terms, it consisted of 20 e-learning videos from the YouTube platform, 10 of them presented by male speakers (5 different speakers) and 10 of them by female speakers (5 different speakers). Because of the limitations imposed by the free version of the Google Cloud Speech-to-Text API, we considered and analyzed just the first minute of each video. The correct transcription for each video was manually created prior to the speech-to-text analysis, yielding a corpus of 3100 words divided into 210 sentences.

Video  Sentences  Words  Subject                Speaker gender
 1        10       142   Entrepreneurship       male
 2        18       140   Entrepreneurship       male
 3         8       144   Programming            male
 4        10       134   Programming            male
 5        12       209   IT                     male
 6        18       195   Lifestyle              male
 7         6        57   Auto                   male
 8        16       186   Media                  male
 9        18       109   Lifestyle              male
10         7       160   Lifestyle              male
11         7       151   Lifestyle              female
12         7       172   Personal development   female
13         6       151   Parenting              female
14         6       119   Lifestyle              female
15         7       167   Lifestyle              female
16         7       162   Lifestyle              female
17        13       217   Beauty                 female
18        11       195   Lifestyle              female
19         9       134   Beauty                 female
20         8       162   Lifestyle              female

The most widely used measurement of speech recognition performance, which we also used to evaluate our proposed system's accuracy, is the Word Error Rate (WER) [2], computed as WER = (S + D + I) / N, where S is the number of substitutions, D is the number of deletions, I is the number of insertions and N is the total number of words. The pseudocode for computing WER is presented in figure 2. The implementation of the WER algorithm in Python from [17] was used for the dataset. The WER value obtained for the whole corpus was 30.96%, which places our approach below the results obtained in [3], which had a WER of just 9.59% (WER = 1 - WRR), and in [4], which had a WER of 20.7%, but above the results obtained by the authors of [11], where the WER was 54.5%. Considering that both approaches (ours and [11]) use the Google Speech Recognition API, we can consider our results more than satisfying.
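To make the metric concrete, here is a minimal Python sketch of the standard Levenshtein-based WER computation. It illustrates the formula above and is not the exact implementation from [17]:

```python
# Minimal WER sketch: edit distance over word sequences, WER = (S+D+I)/N.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    n, m = len(ref), len(hyp)
    # d[i][j] = minimum edits turning the first i reference words
    # into the first j hypothesis words.
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i                      # i deletions
    for j in range(m + 1):
        d[0][j] = j                      # j insertions
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + sub)   # substitution or match
    return d[n][m] / max(n, 1)

print(wer("acesta este un test", "acesta este un text"))  # 0.25
```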
Another aspect worth mentioning is that our results are also influenced by the quality of the videos (more precisely, the quality of the audio recordings), by the speaker's accent and speech clarity, by background noise and by other external factors. Most probably, the better results of other algorithms were obtained in controlled environments using high-quality audio recordings. To emphasize this, one of the videos, in which the female speaker had a clear voice and the recording was made with a high-quality microphone, obtained a WER of just 9.93%, which is comparable with the results obtained in [3]. Table 3 and table 4 illustrate the results obtained per speaker gender and per video subject. We can see from these tables that the best results were obtained by female speakers and that the subject with the lowest WER was Parenting, even though a single video is not enough to validate this hypothesis. Another interesting fact is that the subjects with the highest WER were Programming and Beauty. This may be due to the fact that both domains use a lot of English terms to describe some of the entities, tools or actions involved, terms that cannot be easily translated into Romanian and are thus unidentifiable by a Romanian-focused ASR system.
Conclusions and future work
The main objective of this research paper was to obtain a sufficiently accurate ASR model that can process multimedia e-learning resources containing Romanian speech. To achieve this, the ASR term was first defined and a short history of speech recognition systems was presented. The main algorithms used to perform ASR, together with the cloud providers that offer ASR solutions as SaaS, were analyzed as well. After a short literature review which emphasized the added value of the current research, the paper shifted to applying the Google Cloud Speech-to-Text API to a corpus of various video e-learning resources available online on YouTube.
The main reason behind this approach was to transform the multimedia resources into text-based resources, with the aim of making them more searchable by classic keyword-based algorithms or by semantic solutions. Another side effect of this approach is that it can help automate subtitle generation for e-learning videos, a laborious process that is in most cases (at least for Romanian) done manually, and whose automation can help people with hearing problems. Thus, a dataset of 3100 words divided into 210 sentences was used, uttered by 10 speakers (5 males and 5 females). In order to measure the accuracy of the model, the Word Error Rate indicator was computed. The Google Cloud Speech-to-Text API obtained a WER of 30.96% for the used dataset, which is a better result than the one obtained by similar works that use the Google Speech Recognition API, but worse than other solutions developed for Romanian in controlled environments with homogeneous datasets. Even so, some videos obtained promising results, with a WER of just 9.93%, which gives us hope that by tuning the system properly and by using higher-quality audio recordings, the current model has the potential of obtaining better results. Future work will focus on using a larger corpus with higher-quality audio files, with the goal of obtaining a WER under 10%. If this objective is not met, we are also considering using Sphinx's implementation in Python to check whether it can outperform Google's algorithm. Another future objective is to use the results obtained by the ASR algorithm as input data for a Named Entity Recognition solution that will semantically index the multimedia e-learning resources of interest. By taking this approach, similar e-learning materials can be aggregated into a domain ontology and semantic searches can be made.
"Computer Science"
] |
Separability and Killing Tensors in Kerr-Taub-NUT-de Sitter Metrics in Higher Dimensions
A generalisation of the four-dimensional Kerr-de Sitter metrics to include a NUT charge is well known, and is included within a class of metrics obtained by Plebanski. In this paper, we study a related class of Kerr-Taub-NUT-de Sitter metrics in arbitrary dimensions D ≥ 6, which contain three non-trivial continuous parameters, namely the mass, the NUT charge, and a (single) angular momentum. We demonstrate the separability of the Hamilton-Jacobi and wave equations, we construct a closely-related rank-2 Stäckel-Killing tensor, and we show how the metrics can be written in a double Kerr-Schild form. Our results encompass the case of the Kerr-de Sitter metrics in arbitrary dimension, with all but one rotation parameter vanishing. Finally, we consider the real Euclidean-signature continuations of the metrics, and show how in a limit they give rise to certain recently-obtained complete non-singular compact Einstein manifolds.
Introduction
Four-dimensional solutions of the Einstein equations have been extensively studied for many decades. In relatively recent times, since the discovery of supergravity and superstring theory, solutions of the Einstein equations, or the coupled Einstein-matter equations, in higher dimensions have also been found to be of physical interest. An encyclopaedic classification of all the known four-dimensional solutions can be found in [1], but the higher-dimensional cases have been less extensively investigated. Various classes of higher-dimensional solution have been obtained, including black holes that generalise the four-dimensional Schwarzschild, Reissner-Nordström and Kerr solutions.
There are few general methods available for solving the Einstein equations. Almost always, one is forced to make a symmetry assumption. If the isometry group has orbits of codimension one, the problem then reduces to solving non-linear ordinary differential equations. A much more challenging task results if the orbits have higher codimension, since then non-linear partial differential equations are involved. Two techniques have proved useful in the past for tackling cases like these, where the number of independent variables is two or greater. One technique is to adopt the Kerr-Schild ansatz, which in effect reduces the Einstein equations to linear equations, and this has proved useful recently in obtaining the general higher-dimensional version of the Kerr-de Sitter metrics [2]. Another technique, pioneered by Carter, is to require of the metric that it admit separation of variables for the Hamilton-Jacobi equation, or for the wave equation or Laplace equation [3]. This has the further advantage that having obtained the metric, one is actually in a position to do something with it; namely, to study its geodesics and eigenfunctions explicitly.
In a remarkable paper, Carter exploited this idea to obtain a general class of metrics in four dimensions which include the Kerr-Taub-NUT-de Sitter solutions [3]. This general class, in the formalism given by Plebanski [4], takes the form given in (1), in terms of structure functions X(p) and Y(q). These metrics are solutions of the coupled Einstein-Maxwell equations with a cosmological constant λ, and with electric and magnetic charges given by e and g. We shall restrict attention to the case of pure Einstein metrics in this paper, and so we set e = g = 0. The remaining constants (γ, m, ℓ, ǫ) effectively comprise 3 real continuous parameters and one discrete parameter, since one can always make coordinate scaling transformations to absorb the magnitude of, say, the dimensionless constant ǫ. Thus one may view (γ, m, ℓ) as continuous parameters, and take ǫ = +1, −1 or 0. The constants (γ, m, ℓ) are related to the angular momentum, mass and NUT charge. Special cases of (1) include the Kerr-de Sitter solution and the Taub-NUT-de Sitter solution.
These four-dimensional metrics have a simple higher-dimensional generalisation [5], which, as we shall show, also has the property that both the Hamilton-Jacobi equation and the wave equation may be solved by separation of variables. Thus, for these very special metrics, which encompass the Kerr-de Sitter metrics in arbitrary dimension with all but one rotation parameter vanishing, the geodesic flow on the cotangent bundle is a completely integrable system in the sense of Liouville. Such integrable dynamical systems are comparatively rare and, when arising from a metric, are associated with the existence of special tensor fields, called Stäckel-Killing tensor fields, on the manifold. We shall construct these explicitly for the higher-dimensional metrics. Although the Stäckel-Killing tensor in the four-dimensional metrics admits a Yano-Killing tensor square root, it appears rather unlikely that this feature will extend to the higher-dimensional generalisations.
A feature of the four-dimensional metrics is that they may, upon analytic continuation to (2, 2) metric signature, be cast into a double Kerr-Schild form, and this played some rôle in their construction, and in that of more general metrics, by Plebanski. We find that the higher-dimensional metrics may also be cast in double Kerr-Schild form and that, although the double Kerr-Schild form does not in general render the Einstein equations linear, in our case it does result in linear equations.
An important application of the original four-dimensional metrics was the first construction of a complete, non-singular, compact, inhomogeneous Einstein manifold [6]. This was done by analytically continuing the metrics to positive-definite (Euclidean) signature, and making a careful study of regularity conditions near coordinate singularities. We show that the same can be done for the higher-dimensional metrics discussed in this paper.
Higher-dimensional Generalisation
In this paper, we consider higher-dimensional generalisations of the form given in (3), with the structure functions given in (4), where dΩ²_k = g_ij dx^i dx^j is an Einstein metric on a space of dimension k, normalised so that its Ricci tensor satisfies R_ij = (k − 1) g_ij. One might, for example, take dΩ²_k to be the metric on the unit sphere S^k. It was shown in [5] that these metrics satisfy the D = k + 4 dimensional Einstein equations (5). The verification of the Einstein equations can be performed rather straightforwardly in a coordinate basis. From (3), and decomposing the coordinate indices as M = (µ, i), we may write the components of the D-dimensional metric ĝ_MN in block form, where g_µν is the four-dimensional Plebanski-type metric with the modified functions X and Y given in (4). It is easily seen that the non-vanishing components of the affine connection Γ^M_NP can be written down explicitly; here, the explicit index values 1 and 2 refer to the q and p coordinates respectively. From these expressions, it is straightforward to substitute into the expression for the curvature, and hence to verify that (3) satisfies (5).
The arbitrary-dimensional Kerr-de Sitter metrics with a single rotation parameter a, which were obtained in [7], arise as special cases of the more general Einstein metrics (3).
Specifically, if we take the parameters in (4) appropriately and define new coordinates accordingly, with Ξ ≡ 1 + λ a², then (3) reduces precisely to the metrics obtained in [7]. The more general solutions that we have obtained include the NUT charge ℓ as an additional non-trivial parameter when D ≠ 5.
The case D = 5 is somewhat degenerate in the above construction, in that the ostensibly additional NUT parameter ℓ is fictitious in this case. This is easily seen from the expressions for the metric functions X and Y in (4) when k = 1: one can absorb the parameter ℓ by means of additive shifts in the constants γ and m. Since a constant scaling of the S^k metric dΩ²_k in (3) is irrelevant when k = 1, the upshot is that our construction in D = 5 is equivalent to the one where the NUT charge ℓ is set to zero, thus reducing to a case already considered in [7]. In all other dimensions D ≥ 4, the NUT charge is a non-trivial additional parameter.
Separability
The covariant Hamiltonian function on the cotangent bundle of the metrics (3) is quadratic in the canonical momenta. The coordinates τ and σ are ignorable, and their conjugate momenta P_τ and P_σ are constants. The Hamilton-Jacobi equation has additively separable solutions, in which the three quantities µ², c and κ are separation constants, associated with the three mutually Poisson-commuting functions H, C and K. If equations (14) hold, then C takes the value c and K takes the value κ. The function W satisfies the Hamilton-Jacobi equation governing geodesic motion on the k-dimensional Einstein manifold with metric dΩ²_k. For a general Einstein manifold, this is as far as one can go with finding the geodesics. However, in special cases, such as the sphere S^k, there will be further constants of the motion. If there are k−1 such constants arising from k−1 mutually-commuting independent functions on the Einstein manifold, then the geodesic flow on the (k + 4)-dimensional Einstein manifold will be completely integrable. Another way to say this is that while the product of two manifolds with completely-integrable geodesics gives a new manifold with completely-integrable geodesics, this property will not in general be true for warped products such as we are considering here. However, for the very special choice of warp function p²q²/γ which arises in the metrics (3), together with the simple k-dependent modifications of the X and Y functions in (4), the property of complete integrability is maintained.
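Since the display equations for the separation ansatz were lost in extraction, the following LaTeX sketch indicates the generic additive form that such a separable solution takes. It follows Carter's standard construction and is only a sketch; the paper's own equations (13)-(15) may differ in normalisations and signs:

```latex
% Schematic Hamilton-Jacobi separation ansatz (a sketch, not the paper's
% numbered equations; signs and prefactors are conventions):
\frac{1}{2}\, g^{MN} \partial_M S \,\partial_N S = \frac{1}{2}\,\mu^2 ,
\qquad
S = P_\tau\,\tau + P_\sigma\,\sigma + S_p(p) + S_q(q) + W(x^i) .
```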
These complete integrability properties may be viewed as the classical limit of the quantum-mechanical statement that the Schrödinger equation ∇²ψ = µ²ψ is also separable. This can be seen from the explicit form of the Laplacian: multiplying ∇²ψ = µ²ψ by (p² + q²) immediately reveals the separability.
Associated with K is a rank-2 symmetric Stäckel-Killing tensor K^MN, given by K = ½ K^MN P_M P_N, where K is given in (15). It satisfies the Killing-tensor equation by virtue of the fact that K Poisson-commutes with the Hamiltonian. We may also then define the associated second-order differential operator K̂. General theory [3] shows that K̂ and Ĥ commute and, moreover, they obviously commute with the operator Ĉ. In the Carter class of four-dimensional Kerr-Taub-NUT-de Sitter metrics, it is known [8] that the Stäckel-Killing tensor K_MN can be written as the square of a Yano-Killing 2-form Y_MN. In four dimensions, i.e. k = 0, one has from (15) an explicit expression for Y, whence *Y = q dq ∧ (dτ − p² dσ) − p dp ∧ (dτ + q² dσ).
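For the reader's convenience, the defining conditions invoked in this passage can be written out; these are the standard definitions, stated here because the corresponding display equations were lost, not reconstructions of the paper's own numbered formulas:

```latex
% Standard definitions: Staeckel-Killing tensor, Yano-Killing tensor,
% and the "square" relation between them.
\nabla_{(M} K_{NP)} = 0 , \qquad
\nabla_{(M} Y_{N)P} = 0 , \qquad
K_{MN} = Y_{M}{}^{P} \, Y_{NP} .
```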
Thus one can write K_MN as the square of this Yano-Killing 2-form, as in [8]. One might wonder whether in the higher-dimensional metrics (3), the Stäckel-Killing tensor we have found might also be expressible as the square of a Yano-Killing tensor.
For a general dimension D = 4 + k this looks unlikely, because there is no obvious 2-form available on the additional k-dimensional Einstein manifold. If, however, the higher-dimensional manifold is Kähler-Einstein, the Kähler form J_ij becomes available, and an obvious generalisation of (21), incorporating the Kähler form, presents itself for Ŷ. This is, by construction, one possible square root of the Killing tensor K_MN. However, a simple calculation shows that, for example, ∇_µ Ŷ_jk + ∇_j Ŷ_µk ≠ 0, and thus Ŷ_MN is not a Yano-Killing tensor. There is no other obvious candidate for Ŷ_MN that might yield a generalisation of the four-dimensional Yano-Killing tensor.
It is interesting to note that the more general class of accelerating type-D metrics of Plebanski and Demianski [9] does not admit a separation of variables, for either the Hamilton-Jacobi equation or the wave equation. These metrics take the form of a Plebanski-type metric multiplied by the conformal prefactor (1 − pq)^−2, where X and Y are certain polynomial functions of p and q respectively [9]. The conformal prefactor (1 − pq)^−2 spoils the separability of the Hamilton-Jacobi equation, except in the massless case. Thus the Plebanski-Demianski metrics admit conformal Stäckel-Killing tensors but not Stäckel-Killing tensors in the strict sense.
Double Kerr-Schild Metric
It is of interest to note that the higher-dimensional metrics (3) may be cast in a double Kerr-Schild form. This is most conveniently done by analytically continuing to a real form of the metric with signature (2, 2 + k). This continued metric can then be written in the form (26), where the fiducial "base" metric ds̄² is the de Sitter metric, and k_M and l_M are two linearly-independent, mutually-orthogonal, affinely-parameterised null geodesic congruences satisfying the conditions (27). Note that the indices on k_M and l_M can be raised with either ĝ^MN or ḡ^MN.
Specifically, the analytically-continued metric is obtained from (3) by an appropriate continuation of the coordinates and parameters, resulting in the metric (29). If we now define new coordinates τ̃ and σ̃, then a straightforward calculation shows that (29) can be written as (26), with the functions U and V and the null one-forms k_M dx^M and l_M dx^M read off accordingly. It is easily verified that k_M and l_M satisfy (27); as vectors, k^M and l^M take a correspondingly simple form. The metric ds̄² appearing in (32) is the (4 + k)-dimensional de Sitter metric (since it is the Kerr-Taub-NUT-de Sitter metric with the mass and NUT parameters m and ℓ set to zero).
Its inverse is given simply in closed form, and the inverse of the full metric ĝ^MN then follows. In a metric of single Kerr-Schild form, where dŝ² = ds̄² + U (k_M dx^M)² and k_M is a null geodesic congruence, a straightforward calculation shows that the Ricci tensor of the full metric, written with mixed indices R̂^M_N, is given exactly by the expression (35), which is linear in U [1,10]. In other words, the "linearised approximation" is exact in this case. In the double Kerr-Schild case this is not automatic; however, if U and V involve arbitrary functions f(p) and g(q) in the manner arising here, then the Ricci tensor R̂^M_N is again given exactly by (35).
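The following standard single-Kerr-Schild relations, which the lost display equations presumably expressed in the paper's own notation, summarise the setup; the exactness of the inverse metric is a textbook consequence of k^M being null:

```latex
% Single Kerr-Schild form and its exact inverse (standard relations,
% not the paper's equation (35) itself):
\hat g_{MN} = \bar g_{MN} + U\, k_M k_N , \qquad
\bar g^{MN} k_M k_N = 0 , \qquad
k^M \bar\nabla_M k_N = 0 ,
\qquad
\hat g^{MN} = \bar g^{MN} - U\, k^M k^N .
```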
It is perhaps worth remarking that one can always choose to view a double Kerr-Schild metric of the form (26) as a single Kerr-Schild metric, by including one or other of the added null terms U (k_M dx^M)² or V (l_M dx^M)² as part of the fiducial "base" metric ds̄². Thus in our present example one can view the higher-dimensional Kerr-Taub-NUT-de Sitter metrics (3) either as Kerr-de Sitter with the NUT charge added via a Kerr-Schild term, or else as massless Kerr-Taub-NUT-de Sitter with the mass added via a Kerr-Schild term.
Euclidean-signature Metrics
It is also of interest to consider Einstein metrics of positive-definite signature. We can perform such a "Euclideanisation" of the metric (3) by making an appropriate analytic continuation of the coordinates and parameters, whereupon the metric (3) becomes the Euclidean-signature metric (38), with correspondingly continued structure functions. It was shown recently in [11] that after Euclideanisation, the higher-dimensional Kerr-de Sitter metrics with a single rotation parameter that were found in [7] yield, in a special limiting case, complete Einstein metrics that extend smoothly onto non-singular manifolds.
Since our more general Einstein metrics encompass those in [7], they certainly admit the same non-singular complete metrics as special limiting cases. However, the additional NUT charge parameter that we have in our new metrics in D ≥ 6 provides additional possibilities for obtaining complete, compact Einstein metrics, as we shall now show.
The metrics (38) are of cohomogeneity two, since the metric functions depend on both of the coordinates p and q. (We are not concerned here with any cohomogeneity that might be associated with the k-dimensional Einstein metric dΩ²_k if it were not taken to be a sphere or any other homogeneous metric. For simplicity, and without losing any essential generality, we shall consider dΩ²_k to be the round metric on S^k in what follows.) In a compact metric, the endpoints of the ranges p₁ ≤ p ≤ p₂ and q₁ ≤ q ≤ q₂ of the p and q coordinates will be defined by degenerations of the metric, corresponding to collapsing of the principal orbits. This can occur at zeros of ∆_p or ∆_q, or at p = 0 or q = 0. Although the possibility of obtaining compact non-singular metrics of cohomogeneity two cannot be immediately excluded, it is certainly the case that non-singularity is most easily achieved by reducing the cohomogeneity to degree one, and so we shall make this assumption in the discussion that follows. The reduction of cohomogeneity can be achieved by choosing the parameters so that the coordinate range for either p or q shrinks to zero; i.e. p₁ = p₂, or q₁ = q₂. It turns out that for regular solutions, we should arrange to shrink the coordinate range for q, by choosing the parameters so that ∆_q has two roots that coalesce at q = q₀, i.e. the double-root conditions (40), ∆_q(q₀) = 0 and ∆′_q(q₀) = 0, hold, where ∆′_q denotes the derivative of ∆_q with respect to q. It is useful to re-express (γ, m, ℓ, λ) in terms of dimensionless parameters (g̃, m̃, l̃, L). The conditions (40) for the double root can conveniently be used to solve for ǫ and m̃ in terms of q₀. Moving slightly away from the case of the double root, by displacing m̃ slightly from m̃₀, we now define a small displacement parameter δ, with c⁻¹ ≡ (k + 3)L − (k − 1)g̃, and then send δ → 0. We find that in this limit the metric becomes a cohomogeneity-one metric ds². This metric can be recognised as the n = 1 special case of the class of cohomogeneity-one Einstein metrics obtained in [12], having a base Einstein-Kähler manifold K_2n of dimension 2n, with fibres that involve a complex line bundle over K_2n and an additional S^k warped-product factor. The conditions for regularity of such metrics were analysed in detail in [12].
Applied to our n = 1 case, where K₂ = S², these results show that compact non-singular Einstein metrics can be achieved by choosing the parameters so that r ranges either from 0 to r₀, where ∆_p(r₀) = 0, or else so that r ranges between two distinct positive roots r₁ and r₂ of ∆_p. The former requires l̃ = 0, and was in fact obtained in [11] as a limit of the single-rotation Kerr-de Sitter metrics; in this case, it is essential for regularity that the metric dΩ²_k be the round metric on S^k. The latter requires l̃ ≠ 0. It was obtained in [12] by starting with a cohomogeneity-one metric ansatz, but here we have shown how it arises as a limiting case of the more general higher-dimensional Kerr-Taub-NUT-de Sitter metrics that we have constructed in this paper. (In this case, since the coefficient of dΩ²_k is everywhere non-vanishing, one can choose any regular Einstein metric for dΩ²_k.)
Conclusions
In this paper, we have studied some properties of a class of higher-dimensional generalisations of the Kerr-Taub-NUT-de Sitter metrics. We have shown that they share many, but not all, of the remarkable properties of their four-dimensional progenitors. For example, they admit separation of variables for both the Hamilton-Jacobi and wave equations, and we have exhibited the associated second-rank Stäckel-Killing tensors. By contrast with the four-dimensional case, however, the Stäckel-Killing tensor appears not to have a Yano-Killing tensor square root. It should be emphasised that the separability of the higher-dimensional solutions, which leads in many cases to completely-integrable geodesic flows, is a rather non-trivial consequence of the detailed form of the solutions, which is mandated by the Einstein equations. A recent study of separability in the higher-dimensional Kerr-de Sitter metrics showed that this is possible at least in the special case of all rotation parameters equal, when there is an enhanced isometry group [13]. In our case, the equations can be separated in situations where there is only a single rotation parameter, together with a NUT charge. Previous results on separability, in the absence of the NUT charge and cosmological constant, were obtained in [14].
Like their four-dimensional progenitors, the metrics (3) may be cast in double Kerr-Schild form, which may provide a fruitful ansatz for the further study of higher-dimensional solutions of the Einstein equations. This is because the solutions cast in this form may be regarded as their own linear approximations. Unlike in the single Kerr-Schild case, this is not a general feature of double Kerr-Schild metrics, but it does hold for the metrics we have considered. More generally, the double Kerr-Schild ansatz leads to quartic non-linearity in the Einstein equations.
The double Kerr-Schild form requires us to consider an analytically-continued form of the metric with two time directions. Analytic continuation will also produce metrics of positive-definite (Euclidean) signature. This we have done, and by considering special limiting cases, we have obtained complete non-singular compact Einstein manifolds that were previously found in [12].
"Mathematics"
] |
Two Novel Quassinoid Glycosides with Antiviral Activity from the Samara of Ailanthus altissima
Phytochemical investigations of Ailanthus altissima (Mill.) Swingle, a Simaroubaceae plant that is recognized as a traditional herbal medicine, have afforded various natural products, among which the C20 quassinoids are the most attractive for their significant and diverse pharmacological and biological activities. Our continuing study has led to the isolation of two novel quassinoid glycosides, named chuglycosides J and K, together with fourteen known lignans, from the samara of A. altissima. The new structures were elucidated based on comprehensive spectral data analysis. All of the compounds were evaluated for their anti-tobacco mosaic virus activity, among which chuglycosides J and K exhibited inhibitory effects against virus multiplication with half maximal inhibitory concentration (IC50) values of 56.21 ± 1.86 and 137.74 ± 3.57 μM, respectively.
Extraction, Isolation and Structure Elucidation

The air-dried and milled samara of A. altissima was extracted with methanol to afford a crude extract, which was resuspended in water and fractionated by liquid-liquid partition. From the trichloromethane partition, a total of ten lignans were isolated, while compounds 1 and 2, together with 4, 6, 9 and 16, were purified from the n-butyl alcohol partition (Figure 1).

Compound 1 was isolated as a colorless crystal, and its molecular formula was established as C26H36O12 by high resolution electrospray ionization mass spectrometry (HRESIMS) (m/z 564.2175 [M + Na + H]+, calcd for C26H37O12Na 564.2177) (Figure S1). Its IR spectrum (Figure S2) displayed absorption bands indicating the presence of hydroxyl (3427 cm−1), δ-lactone (1731 cm−1) and double bond (1640 cm−1) groups. The 1H-NMR (Figure S3), 13C-NMR (Figure S4) and distortionless enhancement by polarization transfer (DEPT) (Figure S5) spectra revealed that compound 1 has 26 carbons, including two carbonyl carbons (δC 207.7 and 169.1), two olefinic carbons (δC 134.6 and 124.2), one hemiketal carbon (δC 106.9), six saccharide-type carbons (δC 105.2, 76.7, 76.3, 74.2, 70.0 and 61.1), as well as three methyl, three methylene, seven methine, and two quaternary carbons. The above 1H and 13C-NMR data were similar to those of the quassinoid glycosides isolated from the same plant material, as reported previously in our paper [13]. Specifically, a keto group was attached at the C-12 position, as indicated by the observed HMBC correlations (Figure 2).

Compound 2 was purified as a colorless crystal with a molecular formula of C32H46O17, as shown by a sodiated molecular ion peak at m/z 725.2656 (calcd for C32H46O17Na 725.2627) observed by HRESIMS (Figure S10). The IR spectrum (Figure S11) displayed absorption bands indicating the presence of hydroxyl (3398 cm−1), δ-lactone (1718 cm−1) and double bond (1648 cm−1) groups. Its 13C-NMR and DEPT spectra (Figures S13 and S14) showed 32 carbon resonances, including two methyl, six methylene, eighteen methine, and six quaternary carbons.
Compound 2 was also identified as a quassinoid glycoside, as deduced from a comparison of its 1H and 13C-NMR data with those of compound 1, as well as from the analysis of its DEPT, COSY, HSQC, HMBC, and NOESY spectra (Figures S13-S18). Signals of a terminal double bond [δH 5.22 (br s, 2H, H-21); δC 120.3 (C-21)] were observed in its 1H-NMR and 13C-NMR spectra (Figures S12 and S13), together with HMBC correlations (Figure 3) between H-12 [δH 3.89 (1H, s)] and C-21, between H-14 [δH 2.80 (dd, J = 13.5, 5.4 Hz, 1H)] and C-21, between H-21 and C-12 (δC 81.0), and between H-21 and C-14 (δC 48.0). Two anomeric protons appearing at δH 4.60 (d, J = 7.8 Hz, 1H, H-1′) and 4.56 (d, J = 7.9 Hz, 1H, H-1″) in the 1H-NMR spectra indicated the presence of two glucopyranosyl units, both of which must be β-anomers, as suggested by their coupling constants. Acid hydrolysis of 2 afforded only D-glucose, which was identified by thin-layer chromatography (TLC) comparison with sugar standards; the HMBC correlations between the anomeric proton H-1′ and C-2 (δC 84.1), between H-2 [δH 4.19 (1H, m)] and C-1′ (δC 105.0), between the anomeric proton H-1″ and C-3′ (δC 88.1), and between H-3′ [δH 3.58 (1H, t, J = 8.9 Hz)] and C-1″ (δC 105.2) confirmed that the two glucopyranosyl units were connected via a (1→3) linkage and that the saccharide moiety was attached at the C-2 position. The NOESY cross-peaks (
Antiviral Activities against the Replication of Tobacco Mosaic Virus

The isolated compounds were tested for their inhibitory activities against the replication of tobacco mosaic virus using the leaf-disc method. The lignans obtained (3-16) showed only weak or no inhibitory effect at a concentration of 0.5 mM (Table 1). Both of the quassinoid glycosides obtained, chuglycosides J (1) and K (2), exhibited antiviral activities with IC50 values of 56.21 ± 1.86 and 137.74 ± 3.57 µM, while the commercial antiviral agents ningnanmycin and ribavirin possessed IC50 values of 183.31 ± 4.26 and 255.19 ± 4.57 µM, respectively.
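As an illustration of how IC50 values such as those above are typically extracted from leaf-disc dose-response data, here is a minimal Python sketch fitting a logistic (Hill-type) curve. The concentrations and inhibition percentages are hypothetical placeholders, not the authors' data:

```python
# Hypothetical IC50 estimation sketch: fit a Hill curve to dose-response data.
import numpy as np
from scipy.optimize import curve_fit

def hill(c, ic50, h):
    """Logistic dose-response with bottom fixed at 0% and top at 100%."""
    return 100.0 / (1.0 + (ic50 / c) ** h)

conc = np.array([12.5, 25.0, 50.0, 100.0, 200.0, 400.0])   # uM, hypothetical
inhib = np.array([15.0, 28.0, 47.0, 66.0, 81.0, 90.0])     # %, hypothetical

params, _ = curve_fit(hill, conc, inhib, p0=[50.0, 1.0])
ic50, h = params
print(f"IC50 = {ic50:.2f} uM, Hill slope = {h:.2f}")
```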
Discussion
Quassinoids, a kind of degraded triterpenoid derivative with multiple bioactivities such as anticancer, antimalarial, antimicrobial, antidiabetic, antiviral, and anti-inflammatory effects, are widely distributed in the family Simaroubaceae and are the characteristic secondary metabolites of this family [26-28]. Quassinoids can generally be classified into five groups according to their basic skeletons, comprising the C18, C19, C20, C22 and C25 types. By the year 2004, more than 200 natural quassinoids obtained from 34 species in 14 genera had been reported, and the structural characteristics of a further 190 quassinoids were reported between 2004 and 2018 [27,28]. Thus far, more than 50 quassinoids have been isolated from A. altissima, most of which belong to the C20 class bearing a δ-lactone moiety [13,26]. Pharmacological and clinical investigations have revealed that C20 quassinoids from plants of the Ailanthus genus are very promising for medical use, with antitumor, antimalarial, antiviral and antiparasitic properties, etc. [4-6]. We have previously reported the identification of eighteen C20 quassinoids, including nine new quassinoid glycosides named chuglycosides A-I, from the samara of A. altissima. The generated name chuglycoside arises from 'chu', the Chinese phonetic rendering of one of the variant names of A. altissima, which is commonly used in the classics of traditional Chinese medicine. The quassinoids previously obtained from the samara of A. altissima can generally be classified as the C-6 or C-15 substituted derivatives of chaparrinone and ailanthone, as well as their monoglycosides [13]. Ailanthone is the best-known bioactive secondary metabolite isolated from A. altissima, displaying multiple pharmacological properties, in particular significant antitumor effects against a variety of cancer cell lines in vitro [29,30]. Two more C20 quassinoid glycosides are reported here based on our current findings, among which chuglycoside J bears a keto group at C-12, a position which, in quassinoids from A. altissima, usually carries a hydroxy group, while chuglycoside K is the only diglycoside we obtained from the extract of the samara of A. altissima. These findings suggest that phytochemical investigations to reveal the structural diversity of quassinoids synthesized by A. altissima are still worth undertaking.
In the field of modern agriculture, plant diseases caused by viruses are one of the major causes of biological disasters in agriculture and horticulture, resulting in dramatic losses every year all over the world. The research and development of efficient antiviral agents characterized by low pesticide resistance, eco-friendliness and novel mechanisms of action are urgently and continuously needed [31,32]. Tobacco mosaic virus, the type member of the Tobamovirus group, is a positive-strand RNA virus that infects more than 400 plant species belonging to 36 families, such as tobacco, tomato, potato, and cucumber [33-37]. The tobacco mosaic virus/tobacco system is employed as a useful model in studies designed to clarify the antiviral properties and mechanisms of action of novel antiviral agents. Our continuous efforts in the discovery of novel natural products against phytopathogenic viruses, and in studies of their mechanisms of antiviral action, have shown that quassinoids from A. altissima can inhibit the coat protein expression and systemic spread of tobacco mosaic virus in tobacco [13]; this evidence indicates that quassinoids from A. altissima could be considered as lead structures for antiviral agent design and development.
Extraction and Isolation
The air-dried samara of Ailanthus altissima was collected in October 2013 at Muyang, Jiangsu, China. The voucher specimen was deposited at the Key Laboratory of Biopesticide and Chemical Biology, Ministry of Education, Fujian Agriculture and Forestry University, under the accession number MF131001.
The plant material was extracted and partitioned as described previously in our paper [13]. In brief, the milled air-dried samara of A. altissima (sample MF131001, 7500 g) was extracted three times with a total of 25 L methanol at room temperature. The dried extract was then resuspended in water and successively partitioned with n-hexane, trichloromethane, and n-butyl alcohol.
The n-butyl alcohol partition (90 g) was fractionated by silica gel column chromatography, eluted with mixtures of 0 to 100% methanol in chloroform, to give fourteen fractions (fractions C1-C14). Fraction C2 was subjected to MCI gel column chromatography and eluted with a gradient of 15-100% water in MeOH to afford fractions C10a-C10e. Fraction C10e was chromatographed over a silica gel column and eluted with CHCl3-MeOH (v/v 97:3) to yield 16 (5.5 mg). Fraction C3 (3.5 g) was separated by MCI gel column chromatography and eluted with a gradient of 15-100% methanol in water to afford fractions C3a-C3p. Fraction C3i was chromatographed over a silica gel column using a mixture of CHCl3-MeOH (96:4) as eluent to yield 9 (22.0 mg). Fraction C5 (2.2 g) was chromatographed on an MCI gel column and eluted with a gradient of 15-100% methanol in water to afford fractions C5a-C5e. Fraction C5c was purified by RP-18 gel column chromatography with 30% methanol in water as eluent to give 2 (5.0 mg). Fraction C8 (10.1 g) was subjected to MCI gel column chromatography and eluted with a gradient of 5-100% water in MeOH to afford fractions C8a-C8k. Fraction C8c was subjected to RP-18 gel column chromatography and eluted with 30% MeOH in water to give 4 (70.9 mg). Fraction C10 (10.1 g) was subjected to MCI gel column chromatography and eluted with a gradient of 5-100% methanol in water to afford fractions C10a-C10h. Fraction C10e was further purified by RP-18 gel column chromatography, eluted with 30% methanol in water, to afford 6 (11.5 mg). Fraction C11 (10.3 g) was chromatographed on an MCI gel column and eluted with a gradient of 5-100% methanol in water to afford fractions C11a-C11h. Fraction C11e was then separated by RP-18 gel column chromatography, eluted with 45% MeOH in water, and finally chromatographed over a silica gel column, eluted with CHCl3-MeOH-H2O (v/v/v 80:20:2), to yield 1 (
Acid Hydrolysis of Compounds 1 and 2
Compound 1 or 2 (2 mg each) was hydrolyzed at 95 °C for 2 h in 2 mL of 1 M HCl (dioxane-H2O, v/v 1:1). After being evaporated to dryness, the reaction mixtures were diluted in water and extracted three times with 2 mL ethyl ether. The aqueous layer was neutralized with NaHCO3 and evaporated under vacuum to furnish a neutral residue for thin-layer chromatography (TLC) analysis, which indicated the presence of only D-glucose (Rf 0.40; eluted with MeCOEt-isoPrOH-Me2CO-H2O, v/v 20:10:7:6).
Antiviral Assay
The isolated quassinoids and lignans were dissolved in dimethyl sulfoxide (DMSO) and diluted to the required concentrations before the test. Two commercial agents, ningnanmycin and ribavirin, were used as positive controls, while a solution of 0.01 M phosphate-buffered saline (PBS) containing 1% DMSO was used as negative control. Purified tobacco mosaic virus (TMV), U1 strain, was obtained from the Institute of Plant Virology, Fujian Agriculture and Forestry University. Nicotiana tabacum cv. K326 plants, cultivated to the 5-6 leaf stage in an insect-free greenhouse, were used for the anti-TMV assay.
The antiviral assay was conducted using the leaf-disc method as previously described in our papers [5,7,13,38,39]. In brief, the growing leaves of tobacco were mechanically inoculated and infected with the target virus. Six hours later, leaf discs of 1 cm diameter were punched out and floated on the test solutions, while leaf discs from healthy leaves were used as mock controls. Six replicates were carried out for each sample. The test solutions with leaf discs were kept in a Petri dish and incubated for 48 h at 25 °C in a culture chamber; the leaf discs were then ground with the addition of 0.01 M pH 9.6 carbonate coating buffer (500 µL for each leaf disc) and centrifuged. The supernatant of each sample (200 µL) was transferred to a 96-well plate, which was then used to perform a standard indirect enzyme-linked immunosorbent assay as described in the literature [34,35]. The virus concentration was calculated according to a standard curve constructed from the optical density at 405 nm (OD405) values of a series of diluted solutions of purified TMV.
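A minimal sketch of this last calculation step, reading virus concentration off the OD405 standard curve and deriving an inhibition percentage, is shown below. All numeric values are hypothetical placeholders, and a linear standard curve is assumed purely for simplicity:

```python
# Hypothetical ELISA read-out sketch: linear OD405 standard curve,
# then inhibition relative to the untreated infected control.
import numpy as np

std_conc = np.array([1.0, 2.0, 4.0, 8.0, 16.0])     # purified TMV, ug/mL
std_od = np.array([0.12, 0.21, 0.40, 0.78, 1.50])   # OD405 (hypothetical)

slope, intercept = np.polyfit(std_conc, std_od, 1)  # fit OD = a*conc + b

def od_to_conc(od: float) -> float:
    return (od - intercept) / slope

control = od_to_conc(1.10)   # discs floated on negative-control solution
treated = od_to_conc(0.45)   # discs floated on the test compound
inhibition = 100.0 * (control - treated) / control
print(f"inhibition = {inhibition:.1f} %")
```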
"Medicine",
"Chemistry"
] |
Occurrence of L1014F and L1014S mutations in insecticide-resistant Culex quinquefasciatus from filariasis endemic districts of West Bengal, India
Lymphatic filariasis causes long-term morbidity and hampers socio-economic status. Apart from the available treatments and medication, control of the vector population of Culex quinquefasciatus Say through the use of chemical insecticides is a widely applied strategy. However, the unrestrained application of these insecticides over many decades has led to resistance development in the vectors. In order to determine the insecticide susceptibility/resistance status of Cx. quinquefasciatus from two filariasis endemic districts of West Bengal, India, wild mosquito populations were collected and assayed against six different insecticides, the L1014F and L1014S kdr mutations in the voltage-gated sodium channel gene were screened, and synergists were used to evaluate the role of major detoxifying enzymes in resistance development. kdr mutations were present in the studied populations, with a higher L1014F frequency indicating its association with the observed resistance to pyrethroids and DDT. This study reports the L1014S mutation in Cx. quinquefasciatus for the first time.
Methods
In order to determine the insecticide susceptibility/resistance status of Cx. quinquefasciatus from two filariasis endemic districts of West Bengal, India, wild mosquito populations were collected and assayed against six different insecticides; the presence of the L1014F and L1014S kdr mutations in the voltage-gated sodium channel gene was also screened, along with the use of synergists to evaluate the role of major detoxifying enzymes in resistance development.
Results
The collected mosquito populations showed severe resistance to the insecticides, and the two synergists used, PBO (piperonyl butoxide) and TPP (triphenyl phosphate), were unable to restore the susceptibility status of the vector, thereby pointing towards a minor role of metabolic enzymes. kdr mutations were present in the studied populations in varying percentages, with a higher L1014F frequency indicating its association with the observed resistance to pyrethroids and DDT. This study reports the L1014S mutation in Cx. quinquefasciatus for the first time.
Introduction
Culex quinquefasciatus, commonly known as the southern house mosquito, is widely distributed in tropical and subtropical regions and is the most abundant mosquito species in the sub-Himalayan region of West Bengal [1,2]. This mosquito species is opportunistic and may breed in and inhabit any temporary collection of stagnant water, apart from its other natural breeding habitats: drains, stagnant puddles of water, cemented channels, muddy pools and water-filled artificial containers. Culex quinquefasciatus serves as a vector for many diseases like lymphatic filariasis, West Nile fever and Saint Louis encephalitis [3], and even acts as a bridge transporting sylvatic arboviruses from birds to mammals [4,5]. Some studies suggest its role in the transmission of Zika virus [6] and of Plasmodium relictum, which causes avian malaria [7]. In Southeast Asia, Cx. quinquefasciatus is a primary vector of lymphatic filariasis, one of the most important Neglected Tropical Diseases (NTDs), ranking second in causing long-term morbidity in human society [8]. The negative impact of the disease on the socio-economic status of an individual is also non-negligible. Although drugs and treatments are available to combat lymphatic filariasis, and the disease was targeted for eradication by the year 2020 [8], management of the vector population through the use of chemical insecticides is still one of the major strategies of disease control. In India, 257 districts in 21 states and Union Territories are endemic for filariasis, putting approximately 650 million people at risk [9]. The Ministry of Health and Family Welfare, Government of India, has designed 'twin pillar strategies' that include Mass Drug Administration (MDA) for the prevention of disease transmission and Morbidity Management and Disability Prevention (MMDP) for the care of infected patients, aimed at the elimination of filariasis in India. In the state of West Bengal, 12 districts are reported to be endemic for the disease, and Coochbehar and Malda are the only two districts of northern West Bengal that are endemic for lymphatic filariasis [9]. Apart from disease control programmes and strategies like mass drug administration (MDA) and proper sanitation and hygiene to check the spread of the disease, vector control and management also form an important aspect of controlling the proliferation of mosquito-borne diseases in these two districts. Synthetic insecticides, in the form of indoor residual spraying, insecticide-impregnated bed nets and outdoor fogging, have been in use for many decades to control vector-borne diseases and the nuisance caused by mosquitoes. WHO has approved the use of 4 classes of insecticides, i.e., pyrethroids, organophosphates, organochlorines and carbamates, to be applied against mosquito vectors [10]. However, the continuous exploitation of these insecticides on mosquito vectors has led to the development of insecticide resistance.
Of the four mechanisms of insecticide resistance development in mosquitoes, metabolic detoxifying enzymes and target-site insensitivity have been widely studied and are known as the prime mechanisms of resistance in vector populations [10]. Resistance involving the major detoxifying enzymes occurs either through enhanced enzyme action or through a qualitative increase in the number of isozymes [11,12]. Target-site insensitivity, on the other hand, occurs as a result of mutation in the target receptor at the neurological site of an insect to which a particular insecticide binds [13]. Carbamate and organophosphate insecticides target the acetylcholinesterase enzyme (AChE) [14], while the other two groups, synthetic pyrethroids and DDT, attack the voltage-gated sodium channel in the neuronal membrane, which results in prolonged opening of the channel, causing involuntary muscle spasms and death, a condition termed the knock-down effect [14,15]. With the rising problem of DDT resistance in mosquito vectors, synthetic pyrethroids in the form of deltamethrin, cypermethrin and permethrin were introduced for mosquito vector control in the mid-1970s. However, the excessive use of DDT in the past, and of synthetic pyrethroids at present, to combat agricultural pests and vectors carrying many human diseases has resulted in the development of resistance in pest populations known as knock-down resistance (kdr), through point mutation in the sodium channel gene, thereby rendering the channel unfit for insecticide binding. Mutation at position 1014 of the voltage-gated sodium channel gene from leucine to phenylalanine (L1014F) is the most common kdr mutation found in Culex sp. and, until now, the only kdr mutation found in Cx. quinquefasciatus. In Culex sp., the L1014S mutation (leucine to serine) has so far been found only in Cx. pallens and Cx. pipiens.
The objective of this study was to map the resistance status of Cx. quinquefasciatus from two lymphatic filariasis endemic districts of West Bengal against different insecticides and to screen for the presence of kdr mutations in the sodium channel gene associated with insecticide resistance. The involvement of metabolic enzymes in resistance development was also evaluated through synergist assays.
Ethics statement
The Institutional Animal Ethics Committee (IAEC), Department of Zoology, University of North Bengal (Regn no. 840/GO/Re/S/04/CPCSEA), granted a waiver for ethics approval as there were no human trials or higher vertebrates involved in the present study. The IAEC also approved the use of rats for blood feeding (approval no. IAEC/NBU/2019/19). All procedures were performed in accordance with the relevant guidelines of the IAEC and ARRIVE.
Study area
Field populations of Cx. quinquefasciatus for the study were collected from two districts of West Bengal, India: Coochbehar and Malda (Fig 1). These two districts, out of a total of eight districts in northern West Bengal, are endemic for the disease lymphatic filariasis. The districts have a tropical climate with four seasons: the dry season (March-April), the rainy season (May-September), autumn (October) and winter (November-February). The northern part of West Bengal receives an annual rainfall of 200-400 cm and has an average temperature of about 30 °C during summers. Coochbehar district has a population of about 2.82 million and Malda about 3.9 million; both share international borders with Bangladesh [16]. Insecticide susceptibility mapping of wild Cx. quinquefasciatus was therefore carried out in these two districts owing to their disease endemicity, dense population, and poor sanitation and infrastructure.
Mosquito collection
Mosquito larvae and pupae were collected from 3 densely populated sites in each of the Coochbehar and Malda districts (Table 1 and Fig 1) and labeled as F0. Sampling was conducted from January 2019 to February 2020 from the natural breeding habitats of Culex sp.: drains, stagnant water, pools, plastic containers, sewers and cemented channels. A 500 ml plastic beaker was used for the purpose, and 8-10 dips were made at each sampling site. The average larval density was calculated for each sampling site. Prior permission was obtained from land owners when sampling from private areas and from the Officer-in-charge when sampling from Government areas. In the laboratory, Cx. quinquefasciatus larvae and pupae were identified following standard mosquito identification keys [17] and reared to the F1 generation under controlled laboratory conditions. The adult bioassay tests and kdr mutation screening were performed on the F1 generation in order to maintain population homogeneity.
Laboratory rearing of susceptible mosquito population
A susceptible population of Cx. quinquefasciatus was reared in the laboratory following the protocol in [18]. Larvae and pupae of Cx. quinquefasciatus were collected from drains and stagnant puddles in and around the medicinal garden of the University of North Bengal (26.71˚N, 88.35˚E), located in a rural area of Darjeeling district. The University medicinal garden is organically maintained; therefore, the mosquito larvae collected had not previously been exposed to insecticides. The samples were brought to the laboratory and kept in enamel trays. Pupae were separated into 1000 ml glass beakers to avoid over-crowding and covered with a mosquito net. Larvae were provided with ground fish feed (Optimum mini pellets; made in Thailand) and newly emerged adults with cotton balls soaked in 5% sucrose solution (Merck cat. no. 61806905001730) as food sources. Three- to four-day-old female mosquitoes were blood-fed on a trimmed rat kept in a cage inside the rearing setup. Tap water boiled with hay and then cooled to room temperature was placed in a glass beaker and served as the egg-laying apparatus. Egg rafts were transferred to another enamel tray, where the eggs hatched into first instar larvae. During the entire rearing period, a temperature of 25 ± 2 °C and a relative humidity of 70-80% were maintained. Adults from the tenth generation were used as the susceptible population (LAB strain) in this study.
Insecticide susceptibility tests
Adult bioassay tests with six different insecticides were performed following WHO guidelines [10]. For each population and the LAB strain, 25-30 non-blood-fed adults were exposed to insecticide-impregnated papers for an hour and then shifted to retention tubes. Adult mosquitoes from the assay were maintained under laboratory conditions and provided with 5% sucrose solution to feed upon. After 24 hours of insecticide exposure, mortality percentages were calculated; each experiment was run three times. In the control, 25-30 adult mosquitoes were exposed to filter paper sprayed with acetone and carrier oil. For the calculation of the knock-down times (KDT) for the synthetic pyrethroids and DDT, knocked-down mosquitoes were counted every 10 minutes during the 1 hour exposure to insecticides.
Synergist assay
The synergist assays were conducted using the two most commonly used synergists: piperonyl butoxide (PBO), an inhibitor of cytochrome P450s (CYP450s), and triphenyl phosphate (TPP), an inhibitor of carboxylesterases (CCEs). The test was conducted to assess how effectively the synergists increase adult mosquito mortality against insecticides by blocking these detoxifying enzymes. Synergists were used at their sub-lethal doses, i.e., 4% PBO and 10% TPP. Thirty non-blood-fed adults (from each study site and the LAB strain) were exposed to synergist-impregnated paper for one hour and subsequently to insecticide-impregnated paper for one hour. The mosquitoes were then transferred to retention tubes as in the adult bioassay test, and mortality was counted after 24 hours. The adult bioassay data served as the positive control, and exposure to insecticide-free filter paper was regarded as the negative control for the synergist assay.
DNA extraction
Genomic DNA was extracted from 20 adult mosquitoes from each of the six study sites that survived 24 hours after the bioassay against synthetic pyrethroids and DDT, following the high-salt protocol [19] with minor modifications. Each mosquito was homogenized in digestion buffer in a 1.5 ml micro-centrifuge tube. Proteinase K (20 μl) was added and the samples were incubated at 55-60°C in a water bath for at least 2 hours. Chloroform and sodium chloride solution were then added and the samples centrifuged at 14,000 g for 15 min. The supernatant was transferred to a new micro-centrifuge tube, chilled 70% ethanol was added, and the tube was centrifuged at 10,000 g for 5 min. The supernatant was discarded, and the pellet was resuspended in autoclaved distilled water and stored at -20°C for further use. The same protocol was followed to extract DNA from 20 adults of the LAB strain with no prior exposure to insecticides.
Detection of kdr mutation
Allele-specific PCR (AS-PCR) reactions were performed on the extracted genomic DNA of individual mosquitoes to detect two kdr mutations in the voltage-gated sodium channel gene, L1014F and L1014S, following the standard protocol [20,21] with minor modifications. Primers Cgd1 (5'-GTGGAACTTCACCGACTTC-3'), Cgd2 (5'-GCAAGGCTAAGAAAAGGTTAAG-3'), Cgd3 (5'-CCACCGTAGTGATAGGAAATTTA-3') and Cgd4 (5'-CCACCGTAGTGATAGGAAATTTT-3') were used to detect the L1014F mutation [20,21], and an additional primer Cgd5 (5'-CCACCGTAGTGATAGGAAATTC-3') was used for L1014S. Four PCR reactions were run in parallel: Cgd1 and Cgd2 were combined in the first reaction, Cgd2 and Cgd3 in the second, Cgd2 and Cgd4 in the third, and Cgd2 and Cgd5 in the fourth. PCR conditions were an initial denaturation at 95°C for 15 min, followed by 30 cycles of 94°C for 45 s, 49°C for 45 s and 72°C for 45 s, and a final extension at 72°C for 10 min. The amplified fragments were analysed on ethidium bromide-stained 3% agarose gels under UV light.
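As an illustration of how the four parallel reactions can be turned into genotype calls, a minimal sketch is given below. The mapping of reactions to alleles (reaction 1 as control, reactions 2-4 as specific to the L, F and S alleles) is an assumption made for illustration only; the actual specificities are those of the cited protocols [20,21].

```python
def call_genotype(bands):
    """bands: dict of reaction name -> True/False amplification.
    Reaction r1 (Cgd1+Cgd2) is treated as the positive control; r2, r3 and
    r4 are treated as specific to the wild-type L, the 1014F and the 1014S
    alleles, respectively (assumed mapping, for illustration only)."""
    if not bands.get("r1", False):
        return "failed"                       # control fragment absent
    alleles = [a for a, rxn in (("L", "r2"), ("F", "r3"), ("S", "r4"))
               if bands.get(rxn, False)]
    if len(alleles) == 1:
        return alleles[0] + "/" + alleles[0]  # homozygote
    if len(alleles) == 2:
        return "/".join(alleles)              # heterozygote
    return "ambiguous"

# Example: bands in the control, wild-type and 1014F reactions -> "L/F"
print(call_genotype({"r1": True, "r2": True, "r3": True, "r4": False}))
```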
Calculations
Mortality percentages against each insecticide were calculated, and the mosquito populations were classified as resistant (<90% mortality), susceptible (98-100% mortality) or of unconfirmed resistance status (90-98% mortality) accordingly [10]. Where control mortality exceeded 10%, the data were corrected using Abbott's formula. Mortality percentages from the adult bioassays were subjected to one-way ANOVA at the 95% confidence level in SPSS software version 21.0. KDT50 and KDT90 values were also calculated at the 95% confidence level by subjecting the knockdown counts to probit analysis in SPSS version 21.0.
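A minimal sketch of these two calculations (Abbott's correction and the WHO-based classification) is given below; the thresholds follow the values stated above, and the function names are illustrative rather than taken from any particular package.

```python
def abbott_correction(observed_pct, control_pct):
    """Correct observed mortality (%) for control mortality (%) using
    Abbott's formula."""
    return (observed_pct - control_pct) / (100.0 - control_pct) * 100.0

def who_status(mortality_pct):
    """Classify a population from its (corrected) mortality percentage."""
    if mortality_pct >= 98.0:
        return "susceptible"
    if mortality_pct < 90.0:
        return "resistant"
    return "unconfirmed resistance"

# Example: 72% observed mortality with 12% control mortality (> 10%),
# so Abbott's correction is applied before classification.
corrected = abbott_correction(72.0, 12.0)
print(round(corrected, 1), who_status(corrected))   # 68.2 resistant
```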
Mosquito collection
A total of 17,341 mosquito larvae and pupae were collected from the six sites of the two districts. Immature stages of Anopheles sp., chironomids and drain flies were also found associated with Cx. quinquefasciatus larvae in most of the breeding habitats. Details of the mosquito collections, the nature of each sampling site, its larval density and the co-existing species are provided in Table 1.
Adult bioassay
Culex quinquefasciatus adults from the six sampling sites were found to be resistant to multiple insecticides, showing low mortality percentages against all six insecticides used in the study (Table 2).
Synergist assay
Results of the synergist assays showed a non-significant increase in the mortality rate of Cx. quinquefasciatus adults against the insecticides used (Table 4). Susceptibility of the Cx. quinquefasciatus populations to synthetic pyrethroids and DDT could not be restored with the use of the two synergists, although the mortality rate increased. Likewise, mortality percentages against malathion and propoxur increased compared with the adult bioassay test, but the observed resistant status could not be reverted to susceptible, indicating only a minor involvement of the major detoxifying enzymes in the resistance observed in the two districts. Thus, CYP450s and CCEs are probably not the major mechanism of resistance in the populations under study.
Detection of kdr
The PCR analysis of the kdr allele showed five genotypes at varying frequencies across the study areas. The L1014F mutation was found at all study sites, with the maximum resistant homozygote (F/F) genotype frequency in Malda (30%), followed by MEK (25%) and HCP (25%) (Table 5). TFG showed the highest heterozygote (L/F) genotype frequency of 35%, followed by COB (25%). The homozygote wild-type (L/L) genotype frequency was comparatively higher, ranging from 35-50%. Averaged over the studied populations, the resistant allele (F) frequency was 28.75%, with Malda showing the highest allele frequency of 37.5%. No kdr mutation was observed in the LAB strain (susceptible population). PCR analysis of the L1014S mutation in the voltage-gated sodium channel gene also showed the presence of this mutation, but to a lower extent than the L1014F mutation in the same populations (Table 5). The highest heterozygote (L/S) genotype frequency was shown by TFG (30%), and the homozygote mutated (S/S) genotype frequency ranged from 5-10% in all of the populations under study. The COB and TFG populations showed the highest wild-type allele (L) frequencies.
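For reference, the genotype and allele frequencies quoted above follow directly from the genotype counts; a small sketch of the calculation is shown below with made-up counts (not the study data).

```python
def allele_frequencies(n_ll, n_lf, n_ff):
    """Genotype counts (L/L, L/F, F/F) -> wild-type and resistant allele
    frequencies plus genotype frequencies."""
    n = n_ll + n_lf + n_ff                       # genotyped individuals
    f_resistant = (2 * n_ff + n_lf) / (2.0 * n)  # frequency of allele F
    f_wild = 1.0 - f_resistant                   # frequency of allele L
    genotype_freq = {"L/L": n_ll / n, "L/F": n_lf / n, "F/F": n_ff / n}
    return f_wild, f_resistant, genotype_freq

# Example with 20 genotyped individuals: 9 L/L, 5 L/F and 6 F/F
f_l, f_f, geno = allele_frequencies(9, 5, 6)
print(f_f)    # 0.425 -> 42.5 % resistant allele frequency
print(geno)   # {'L/L': 0.45, 'L/F': 0.25, 'F/F': 0.3}
```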
Discussion
The prime objective of the present study was to assess the insecticide susceptibility status of Cx. quinquefasciatus, a vector of lymphatic filariasis, in two filariasis-endemic districts of northern West Bengal against four classes of insecticides, and to detect the presence of kdr mutations in the vector population and their association with resistance to synthetic pyrethroids and DDT. The high larval density observed in both districts can be attributed to the ample breeding habitat available to mosquitoes together with little or no control measures taken against this vector of lymphatic filariasis. The field mosquito populations showed severe resistance to propoxur, a carbamate insecticide (Table 2). The World Health Organisation Pesticide Evaluation Scheme (WHOPES) recommends a 2-hour exposure period to propoxur for Cx. quinquefasciatus; however, in this study a 1-hour exposure time was used for all six insecticides to keep the experiments homogeneous. To date, there is no report of propoxur being used as a mosquitocide in India [9], indicating indirect exposure of Cx. quinquefasciatus to other household insect repellents that contain propoxur. The indoor resting habit of this vector might have added to its exposure to such repellents. Similar findings on the resistance of Cx. quinquefasciatus to carbamate insecticides have been reported by other researchers [10,22]. Likewise, the adult bioassays of field-caught Cx. quinquefasciatus showed severe resistance to malathion, with three of the six populations (TFG, SAM, HCP) showing zero mortality and MLT showing the highest mortality rate of 8%. Malathion, an organophosphate insecticide, is not applied directly against mosquitoes as part of mosquito control programmes but is applied on a large scale in the agricultural sector. The severe resistance to malathion observed in the present study may therefore be linked to indirect exposure of the vector population to malathion residues in agricultural run-off, which accumulate in the nearby drains and channels that harbor Cx. quinquefasciatus populations. The two districts under study, Coochbehar and Malda, depend largely on agriculture: Coochbehar cultivates mainly paddy, tobacco and jute, while the economy of Malda depends on orchard crops such as mango, banana and litchi. Contamination of Cx. quinquefasciatus breeding habitats by excess malathion seeping from agricultural fields into adjoining drains thus exposes Cx. quinquefasciatus indirectly to the insecticide, driving the onset of resistance development in the vector population [23,24]. Resistance to malathion in Cx. quinquefasciatus has been reported to be associated with increased production of non-specific esterases [10,25,26], and resistance to carbamates primarily with increased CCE activity and only rarely with CYP450 and GST activity [27]. The absence of a significant increase in Cx. quinquefasciatus mortality in the synergism test with TPP, except for the MEK population, shows that CCEs were not the major mechanism of resistance in the studied mosquito populations of the Coochbehar and Malda districts.
Similarly, the results obtained with PBO suggest little involvement of CYP450s in resistance development, hinting at the presence of other mechanisms, mainly target-site mutations, behind the observed resistance to malathion and propoxur. Organophosphates and carbamates are acetylcholinesterase-inhibiting insecticides that target the ace-1 gene [12]. Further work on the molecular mechanisms underlying resistance to these two insecticide classes should therefore focus on mapping mutations in the ace-1 gene, which impair the proper functioning of acetylcholinesterase.
Low mortality rates in the studied mosquito populations against the three synthetic pyrethroids (deltamethrin, lambda-cyhalothrin, permethrin) indicate a severe level of resistance to this insecticide class. The resistance observed in Cx. quinquefasciatus populations of the two filariasis-endemic districts of West Bengal is of immense concern, because synthetic pyrethroids are the only insecticide class used in insecticide-impregnated bed nets for malaria control, as recommended by WHO in West Bengal and around the globe [28], and are also used for indoor spraying against mosquitoes owing to their rapid action and safety to humans. Several studies have already reported the inefficiency of pyrethroid-treated bed nets against Cx. quinquefasciatus and the malaria vector Anopheles species [29,30]. Although there are no reports of pyrethroid insecticides being applied directly against Cx. quinquefasciatus in West Bengal, the domestic use of synthetic pyrethroids to control household pests and the nuisance of mosquito biting may be the most probable cause of the pyrethroid resistance observed in the current study. Products containing synthetic pyrethroids, such as mosquito coils, repellent oils, fumigants and sprays, contribute to resistance development in Cx. quinquefasciatus because this vector is anthropophilic [31] and its adults rest indoors, together with Aedes mosquitoes, and therefore come into direct contact with this insecticide class. Moreover, the use of pyrethroids together with organophosphates in agriculture in the Coochbehar and Malda districts may create insecticide selection pressure on the vector population. Such secondary resistance in non-target mosquito populations has also been reported in a previous study from sub-Himalayan West Bengal [32]. Similarly, the resistance observed against DDT might be linked to secondary exposure from the widespread use of DDT in vector management programs [33]. Apart from being endemic to filariasis, the two districts are also endemic to dengue, which increases the probability of untargeted exposure to insecticides aimed at controlling Aedes mosquito populations but applied mostly to drains, the natural habitat of Culex mosquitoes.
The higher KDT50 and KDT90 values observed in this study (Table 3) show the slower action of synthetic pyrethroids and DDT on Cx. quinquefasciatus from all six study sites. As pyrethroids are known mainly for their rapid knockdown effect on the target, the longer knockdown times of the vector populations point to an alteration of the target site that impairs insecticide-receptor binding in the mosquito vector [34]. This observation is well supported by the results of the synergist assay, in which exposure to PBO or TPP prior to DDT and pyrethroid exposure did not significantly increase mortality and was unable to revert the resistance status of Cx. quinquefasciatus (Table 4). PBO and TPP are chemical synergists which, when combined with insecticides, inhibit the major detoxifying enzymes of the vector, thereby rendering the vector population susceptible to the insecticides. Resistance to synthetic pyrethroids in insects is caused by quantitatively increased levels of CYP450 metabolic enzymes [35,36], which the synergist PBO inhibits [37]; the present results therefore suggest a major role for resistance mechanisms other than metabolic enzymes in Cx. quinquefasciatus from the two districts of West Bengal. Skovmand et al., 2018 [38] also reported a similarly negligible change in the mortality of Cx. quinquefasciatus against pyrethroids with the use of PBO, although other studies contrast with such findings [39,40]. The practice of incorporating PBO into pyrethroid-treated long-lasting insecticide-impregnated bed nets (LLINs) [37] may therefore yield below-expected results in controlling the vector population in these two districts.
AS-PCR analysis of the L1014F mutation in the Cx. quinquefasciatus populations of the two districts of West Bengal showed homozygote resistant genotype frequencies ranging from 0-30%. In MLT, 30% of the tested mosquitoes carried the homozygous resistant genotype; HCP in the same district and MEK in Coochbehar district showed 25% resistant genotype frequency. This finding is of prime concern because of the high number of resistant homozygotes in the populations. Although kdr is a recessive trait, with the resistant phenotype expressed only in individuals carrying two mutant alleles, COB, TFG and SAM should not be neglected despite their low homozygous resistant genotype frequencies, because these populations show high heterozygote (L/F) genotype frequencies. Moreover, a low frequency of the susceptible wild-type genotype may cause problems for long-term vector management with chemical insecticides: under high insecticide selection pressure, the lack of susceptible mosquitoes passing their genes to the next generation may lead to an irreversible state of insecticide resistance [13].
Mutation of the 1014 codon of the voltage-gated sodium channel gene from leucine to phenylalanine is the most common and most widely studied kdr mutation in insects, although L1014S (leucine to serine), L1014C (leucine to cysteine) and L1014H (leucine to histidine) mutations have also been reported [41]. The presence of the L1014F mutation in Cx. quinquefasciatus has been reported previously from sub-Himalayan West Bengal [42], from India [21] and from different regions of the world [11,13,20,39,43,44]. However, the L1014S mutation in the voltage-gated sodium channel gene of Cx. quinquefasciatus has not been reported earlier. The higher survival of the studied mosquito populations against synthetic pyrethroids and DDT, together with the inability of the synergists to restore susceptibility and the high frequency of the L1014F mutation, indicates an association of the kdr mutation with insecticide resistance in Cx. quinquefasciatus from the six study sites of the two filariasis-endemic districts. Similar correlations between kdr mutation and the inefficiency of insecticides in controlling vector populations have been reported [45-47]. In contrast, a few studies differ from these findings, with mosquito vectors showing high kdr frequency yet high mortality when treated with pyrethroid insecticides [48,49]. Thus, a kdr mutation at the DNA level alone is not sufficient to produce a resistant phenotype unless it is also expressed at the RNA transcript level [50].
The comparatively higher survival of Cx. quinquefasciatus against DDT than against synthetic pyrethroids might be linked to the presence of the L1014S mutation, as this mutation is reported to confer higher resistance to DDT than to pyrethroids [11,20,47]. However, the low frequency of the L1014S mutation relative to the L1014F mutation suggests that detoxifying enzymes, in addition to kdr mutations, play a role in the development of DDT resistance in these mosquito populations [11,35]. Cross-resistance between DDT and synthetic pyrethroids also cannot be ruled out, given the high frequency of the L1014F mutation in the studied populations [39]. Different insecticide selection pressures combined with environmental factors influence which type of kdr mutation is present in a vector population [34], and secondary mutations in the cytoplasmic portion of the sodium channel can further increase the resistance level associated with the mutation at codon 1014 [51].
Conclusion
This study is the first to report the resistance status of wild Cx. quinquefasciatus populations to commonly used insecticides in filariasis-endemic districts of northern West Bengal, together with the presence of two kdr mutations associated with the observed resistance. To our knowledge, this is also the first report of the L1014S mutation in Cx. quinquefasciatus; prior to this study, the L1014S mutation had been reported only from Cx. pallens and Cx. pipiens. The observed resistance can be linked to the presence of the kdr mutations L1014F and L1014S in the sodium channel gene, whereas no involvement of metabolic resistance was found in the studied populations. Although the presence of kdr mutations indicates a resistant status in vectors, other resistance mechanisms and several co-factors act in combination to determine the insecticide resistance level. Resistance development at a particular site depends on the insect's biology, the dominant mechanisms of resistance and the history of previous vector control strategies. Studying and monitoring site-specific resistance intensity, along with the mechanisms involved, is therefore important. Transmission of vector-borne diseases, especially those carried by Culex mosquitoes, is expected to increase in the years ahead owing to their opportunistic behavior, adaptation to climate change and poor sanitary conditions [52], further underlining the need to map insecticide resistance of wild mosquito populations in different sub-regions of tropical and sub-tropical countries.
| 6,514.2 | 2022-01-01T00:00:00.000 | [ "Biology", "Medicine" ] |
Intermodal and cross-polarization four-wave mixing in large-core hybrid photonic crystal fibers
Degenerate four-wave mixing is considered in large mode area hybrid photonic crystal fibers, combining photonic bandgap guidance and index guidance. Co- and orthogonally polarized pump, signal and idler fields are considered numerically, by calculating the parametric gain, and experimentally, by spontaneous degenerate four-wave mixing. Intermodal and birefringence-assisted intramodal phase matching is observed, with good agreement between calculations and experimental observations. Intermodal four-wave mixing is achieved experimentally with a conversion efficiency of 17 %.
Introduction
Rare-earth doped fiber lasers and amplifiers are undergoing rapid development, and several efforts have been made to utilize the full spectral potential of the emission cross sections of the available rare-earth dopants. However, spectral regions remain where no rare-earth emission is available, so these dopants alone cannot cover the full optical spectrum. To achieve full spectral coverage, different mechanisms must be considered, and one approach is to utilize the nonlinear response of silica.
Four-wave mixing (FWM) in silica fibers can be used to convert optical output frequencies from those easily achievable with rare-earth doped fiber lasers to frequencies that are less accessible. FWM requires phase matching; thus, fiber dispersion properties are crucial. The two contributions to fiber dispersion are material and waveguide dispersion. The material dispersion of silica is intrinsic; therefore, only the waveguide dispersion is useful in tailoring the fiber dispersion. Sufficiently large waveguide dispersion can be achieved by confining the fiber mode very tightly, as in for example photonic crystal fibers (PCFs) [1]. However, large mode area (LMA) fiber designs are necessary for power scaling, and material dispersion often becomes the dominant contribution for larger core sizes.
There is a growing interest in FWM in photonic bandgap (PBG) guiding fibers [2][3][4][5][6]. Fiber modes near the PBG edges are strongly affected by waveguide dispersion, thus the PBG effect can be used to tailor fiber dispersion, even in LMA fibers. Birefringence-assisted FWM in polarization maintaining (PM) PCFs has also been investigated [7][8][9]. Phase matching relies on the different phase velocities of the waves propagating in the two polarization modes of a PM PCF. This method can also be applied to achieve control of the phase matching in LMA fibers, which was recently demonstrated [10]. Intermodal FWM has been known for many years [11,12], and it has also been proposed for phase matching in LMA fibers [13]. In multimode fibers each mode has distinct dispersion properties, and phase matching can be achieved by choosing the modes with appropriate dispersion in the FWM process.
In this work we consider FWM in a LMA hybrid PCF with a core diameter of 36 µm, where index-guiding and guidance through the PBG effect are simultaneously present and different phase matching mechanisms are utilized. Fiber birefringence gives rise to intramodal FWM in the same transmission band as that of the pump laser. Intermodal FWM across transmission bands is also observed, where phase matching is achieved for modes having dispersion properties which are both affected and unaffected by the waveguide dispersion arising from the PBG effect. Various phase matching techniques in hybrid PCFs provide control of the generated spectral components in a FWM process, demonstrating that FWM in a LMA hybrid PCF is a viable method by which it is possible to increase the spectral coverage of high power fiber lasers and amplifiers.
Hybrid photonic crystal fibers
In Fig. 1(a) a microscope image of the cross section of the LMA hybrid PCF considered in this work is shown. The cladding consists of airholes positioned in a hexagonal lattice with a hole-to-hole spacing of 9.3 µm and an airhole diameter of 1.6 µm. Seven missing airholes define the core area, which has a diameter of 36 µm. Eight of the airholes are replaced by high-index Germanium-doped silica rods (Ge-rods) with a diameter of 6.6 µm and numerical aperture of 0.29. The diameter of the fiber is 437 µm, not taking the polymer coating into account. The combination of airholes and Ge-rods in the hybrid PCF gives rise to two different guiding mechanisms; the core modes are confined by index-guiding caused by the airholes and by the photonic bandgap effect arising from the presence of the high-index Ge-rods. The number of modes guided by the core can be controlled through the airhole diameter, while the spectral position of the transmission band is controlled by the Ge-rod diameter [14][15][16].
Measured fiber properties
In Fig. 1(b) a spectrum of unpolarized white light transmitted through a 6 m section of hybrid PCF coiled with a diameter of 50 cm is shown. Three transmission bands are observed: one ranging from 775 nm to 900 nm, a second ranging from 975 nm to 1200 nm, and a third starting at 1400 nm. These transmission bands correspond to the 4th, 3rd, and 2nd bandgaps, respectively, with the numbering following the standard notation for PBG fibers [17]. Furthermore, an image of the hybrid PCF output for a launched wavelength of 1064 nm is shown in the inset of Fig. 1. The twofold symmetry of the hybrid PCF gives rise to birefringence. The fiber has two principal axes: the slow axis oriented along the Ge-rods and the fast axis oriented orthogonally to the Ge-rods. The total birefringence is the sum of the geometrical birefringence and the stress-induced birefringence. The stress-induced birefringence is generated during the fabrication process of the fiber by the difference in the thermal expansion coefficients of the Ge-rods and pure silica. Since the Ge-rods are present only along one axis of the fiber, the stress distribution in the fiber core is non-symmetric, resulting in stress-induced birefringence, which is substantially larger than the geometrical birefringence in the hybrid PCF.
In Fig. 2(a) a measurement of the group birefringence in a 1.5 m length of the hybrid PCF, coiled with a diameter of 30 cm, using the scanning wavelength method is shown [18]. Polarized white light is launched into the fiber core at a polarization angle of 45° with respect to the principal axes of the fiber, and the output is collected through a polarizer also oriented at 45° with respect to the principal axes. The resulting beat spectrum is shown in Fig. 2(a); its period is characteristic of the difference in the propagation constants of the two principal axes for the given fiber length. An unpolarized white-light transmission measurement in the hybrid PCF is also shown, to indicate the values of the group birefringence with respect to the transmission band edges. The group birefringence is larger than 2 × 10⁻⁵ in the spectral range 1025-1130 nm.
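For reference, in the scanning wavelength method the group birefringence at a given wavelength is commonly estimated from the local period of the beat spectrum and the fiber length; the short sketch below uses this standard relation with purely illustrative numbers.

```python
def group_birefringence(lam_m, beat_period_m, fiber_length_m):
    """Standard scanning-wavelength estimate: G ~ lambda^2 / (L * dlambda)."""
    return lam_m ** 2 / (fiber_length_m * beat_period_m)

# Example: a 1.5 m fiber with a 36 nm beat period around 1064 nm gives a
# group birefringence of about 2.1e-5, comparable to the values reported above.
G = group_birefringence(1064e-9, 36e-9, 1.5)
print(f"{G:.2e}")
```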
Birefringence also impacts the transmission bands of the fiber. In Fig. 2(b) a white light transmission measurement of light polarized along the slow and fast axes in a 6 m length of hybrid PCF coiled with a diameter of 50 cm is shown. The transmission band is slightly red shifted when the polarization is oriented along the slow axis. At 975 nm the transmission band edge for the slow axis is red shifted by 12 nm as compared to the transmission band edge for the fast axis.
Calculated fiber properties
Properties of the hybrid PCF are also considered numerically. The propagation constants and field distributions of the modes in the hybrid PCF are calculated with a full-vector modal solver based on the finite element method [19]. The airhole and fiber diameters used in the calculations are identical to those of the fabricated fiber, but the Ge-rod diameter has been increased by 6 % in the calculation to obtain better agreement between calculated and measured transmission properties. The discrepancies between the calculated and measured transmission properties are probably due to uncertainties in the refractive indices and physical dimensions of the fabricated fiber. The refractive index profile of the Ge-rods is approximated by a staircase function [14]. The stress birefringence, induced by thermal expansion during the fabrication process of the fiber, has to be taken into account in the calculation to obtain correct properties for the modes polarized along the slow and fast axes. This stress birefringence is determined using a plane strain approximation, assuming that the strain along the longitudinal fiber direction is zero. The local anisotropic refractive index is thus given by n_i = n_0 − C_1 σ_i − C_2 (σ_j + σ_k), where σ_x, σ_y, and σ_z are the calculated stress distributions over the fiber cross section, n_0 is the refractive index of the unstressed material, C_1 and C_2 are the first and second stress-optical coefficients, respectively, and (i, j, k) is a cyclic permutation of (x, y, z). The stress-induced birefringence is calculated with the 2D Solid Mechanics interface in [19]. The mechanical and optical properties used in the stress birefringence calculations are shown in Table 1. The properties correspond to those of pure silica and are applied identically to the Ge-doped silica regions, in agreement with approximations used in previous calculations of thermally induced stress birefringence [20]. In the calculation, the stress birefringence arises purely from the difference in the thermal expansion coefficients, which are set to 0.4 × 10⁻⁶ 1/K and 1.7 × 10⁻⁶ 1/K for silica and the Ge-rods, respectively.
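A direct implementation of the anisotropic index expression above is sketched below; the stress values and the stress-optical coefficients in the example are placeholders (the actual coefficients are those listed in Table 1, which is not reproduced here).

```python
def anisotropic_index(n0, C1, C2, sigma):
    """n_i = n0 - C1*sigma_i - C2*(sigma_j + sigma_k) at one point of the
    cross section; sigma = (sigma_x, sigma_y, sigma_z) in Pa, obtained in
    practice from the 2D solid-mechanics solution."""
    sx, sy, sz = sigma
    nx = n0 - C1 * sx - C2 * (sy + sz)
    ny = n0 - C1 * sy - C2 * (sz + sx)
    nz = n0 - C1 * sz - C2 * (sx + sy)
    return nx, ny, nz

# Example with assumed stress-optical coefficients and stress values
# (illustrative numbers only, not the Table 1 data):
nx, ny, nz = anisotropic_index(1.45, 6.5e-13, 4.2e-12, (2e6, -1e6, 0.0))
print(nx - ny)   # local contribution to the material birefringence
```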
The fundamental mode (FM) and the LP11 modes are considered in the calculations. The calculated field distributions of the modes at 1064 nm are shown in Fig. 3. The LP11 mode with a minimum in the field distribution along the slow axis is labeled HOM11, see Fig. 3(b), while the LP11 mode with a minimum along the fast axis is labeled HOM12. The calculated overlap integrals of the FM and LP11 modes with the core region are shown in Fig. 4(a). Large values of the overlap integral over a wide spectral range can be interpreted as a transmission band. When several closely spaced dips in the overlap integral are observed in a spectral region, the region falls outside a transmission band and optical confinement in the core is not expected. Three transmission bands are observed for the FM in the considered spectral region: one ranging from 770 nm to 900 nm, a second ranging from 975 nm to 1200 nm, and a third with its blue edge at 1370 nm, all in good agreement with the experimental results. Three transmission bands are also observed for the LP11 modes; however, the overlap integral values of the LP11 modes are lower than those of the FM. For the FM the values are close to 0.9 in the transmission bands, while for the LP11 modes the values are closer to 0.7. HOM content is therefore expected in the hybrid PCF, but the LP11 modes will experience a higher loss than the FM due to the lower confinement.
The calculated group velocity dispersion (GVD) is shown in Fig. 4(b). The group velocity dispersion of bulk silica is also shown in the figure for reference, which is calculated from the Sellmeier equation of fused silica. The GVD is very similar to that of bulk silica in the transmission bands of the hybrid PCF, but near the transmission band edges the GVD diverges substantially, and large values of normal and anomalous dispersion are present. This behavior is related to the guiding properties of the hybrid PCF. Within a transmission band the core mode is well confined and material dispersion dominates. Waveguide dispersion is the dominant contribution to the GVD near the band edges, and the GVD is thus very different from that of bulk silica.
The calculated mode birefringence given by the effective refractive index along the slow axis, n eff,s , minus the effective refractive index along the fast axis, n eff,f , is shown in Fig. 4(c) for the FM and LP 11 modes. The thermally induced stress birefringence is largest in the proximity of the Ge-rods. HOM12 is thus affected by the thermally induced stress to a larger degree than HOM11 due to the different field distributions.
Calculated group birefringence of the FM, given by the group index along the slow axis, n gs , minus the group index along the fast axis, n gf , is shown in Fig. 4(d). The reference temperature stated in Table 1 was chosen to achieve a reasonable match between the measured and calculated group birefringence. Furthermore, the FM transmission band from approximately 975 nm to 1200 nm is shown for both polarizations. The transmission band of the slow axis is red shifted compared to the transmission band of the fast axis; at 975 nm the transmission band edge is shifted by 9 nm. Similar behaviour is observed in the experimental and calculated group birefringence, see Figs. 2 and 4(d). The group birefringence has the highest values for shorter wavelengths and decreases for longer wavelengths. Furthermore a similar spectral red shift of the transmission band for light polarized along the slow axis compared to the transmission band for light polarized along the fast axis is observed both in the measurement and in the calculation.
In the following a pump laser with a wavelength of 1064 nm will be considered in the FWM processes. According to Fig. 4(b) the GVD at 1064 nm is −26 ps/(nm·km) for both polarizations of the FM, and the pump wavelength is thus located within the normal dispersion regime. The birefringence for the FM is 2.12 × 10 −5 at 1064 nm.
Four wave mixing theory
In a degenerate FWM process two pump photons of equal frequency are annihilated and a signal photon and an idler photon are generated. Energy conservation has to be fulfilled, which can be stated through the relation 2ω₁ = ω₂ + ω₃, where ω₁, ω₂ and ω₃ are the angular frequencies of the pump, signal and idler fields, respectively. Furthermore, the phase matching condition must be fulfilled. Phase matching contributions arise from both linear and nonlinear dispersion; the linear contribution is given by Δβ = β₂ + β₃ − 2β₁, where β₂, β₃, and β₁ are the propagation constants of the signal, idler, and pump fields, respectively, and the nonlinear contribution arises from the Kerr effect.
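The two conditions above fix the idler frequency once the pump and signal are chosen; the small sketch below evaluates them numerically, with beta() standing in for the propagation constants that would come from the mode solver.

```python
def idler_wavelength(lam_pump, lam_signal):
    """Energy conservation 2*w1 = w2 + w3, i.e. 2/lam_p = 1/lam_s + 1/lam_i."""
    return 1.0 / (2.0 / lam_pump - 1.0 / lam_signal)

def linear_phase_mismatch(beta, lam_pump, lam_signal):
    """beta: callable giving the propagation constant of the relevant mode,
    e.g. taken from the finite-element mode solver."""
    lam_idler = idler_wavelength(lam_pump, lam_signal)
    return beta(lam_signal) + beta(lam_idler) - 2.0 * beta(lam_pump)

# Example: a 1064 nm pump and an 848 nm signal imply an idler near 1427 nm,
# consistent with the intermodal FWM components reported later in the paper.
print(idler_wavelength(1064e-9, 848e-9) * 1e9)
```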
The parametric gain of a FWM process can be derived from the wave equation. Assuming that the process occurs in a non-magnetic isotropic medium with no free charges or currents, the wave equation for the electric field, E, can be expressed in terms of the nonlinear polarization, P_NL, as [21] ∇²E − (n²/c²) ∂²E/∂t² = μ₀ ∂²P_NL/∂t², where n is the linear refractive index, c is the speed of light, and μ₀ is the vacuum permeability.
For a degenerate FWM process, E and P_NL can be decomposed into components oscillating at the pump, signal and idler frequencies (Eqs. (2) and (3)), and the nonlinear polarization can be written in terms of the third-order susceptibility [22] (Eq. (4)). By combining Eqs. (2) and (4), a complete expression for the nonlinear polarization is obtained; this is a rather comprehensive expression with 6³ terms. To evaluate the equations for the pump, signal and idler fields, only the terms oscillating at the pump, signal and idler frequencies need be considered (Eqs. (5)-(7)). In the undepleted pump approximation it is assumed that the pump field is much stronger than the signal and idler fields. Eqs. (5)-(7) can be simplified in this approximation, since several terms become negligible compared to terms containing the square or higher powers of the pump field magnitude, and the expressions reduce to Eqs. (8)-(10). The fields E_j can be represented in terms of their Jones vectors, A_j(z), to account for polarization effects; the Jones vector is a two-dimensional column vector representing the components of the electric field in the x-y plane, so that E_j is written as A_j(z) multiplied by the fiber mode profile F_j(x, y) in the transverse plane (Eq. (11)), where in the following A_j is implicitly assumed to depend on z. It is often assumed that the mode profiles are nearly the same for all considered fields, leading to the assumption of the same effective mode area for all fields. This approximation is not valid for fibers with frequency-dependent field distributions [3], such as the hybrid PCF considered in this work, and is therefore not used. By inserting Eq. (11) into Eq. (2), and Eqs. (8)-(10) into Eq. (3), expressions for all the terms in the wave equation, Eq. (1), are obtained. First, the pump field is considered, i.e. the terms oscillating at the pump frequency are collected (Eq. (12)). The first four terms on the left-hand side of Eq. (12) are recognized as the solution to the linear wave equation, i.e. the case where the nonlinear polarization is not considered, and these terms sum to zero; Eq. (12) thus reduces to Eq. (13). Applying the slowly-varying envelope approximation, introducing the nonlinear refractive index n₂ = 3χ_xxxx/(8n), multiplying both sides by F₁*(x, y) and integrating over the transverse coordinates, Eq. (14) is obtained. A change of variables is then applied such that |A₁|² represents the optical power, by replacing A₁ with A₁ [½ ε₀ c n ∫∫ F₁(x, y)F₁*(x, y) dx dy]^(1/2) and the nonlinear refractive index with 2n₂/(ε₀ c n). The resulting Eq. (15) describes the dynamics of the pump field in the FWM process in the undepleted pump regime. In the same manner the equations for the signal and idler fields, Eqs. (16) and (17), can be derived, with the overlap integrals defined in Eqs. (18) and (19). The parametric gain can be calculated from Eqs. (15)-(17). In the following, two different cases are considered: signal and idler fields generated with the same polarization state as the pump field, and signal and idler fields generated with polarization states orthogonal to that of the pump field. The effective phase mismatch and parametric gain are derived for both cases.
Co-polarized pump, signal and idler fields
First, co-polarized pump, signal and idler fields are considered. If the pump, signal and idler fields are all assumed to be polarized along x, the Jones vectors reduce to the field amplitudes multiplied by ê_x (Eq. (20)), where ê_x is the unit vector along x and |A_j|² is the optical power. By inserting Eq. (20) into Eqs. (15)-(17), following an approach similar to that in [21], and assuming n₂ω_l/c ≈ n₂ω₁/c for l = 2, 3, expressions for the effective phase mismatch, κ_∥ (Eq. (21)), and the parametric gain, g_∥ (Eq. (22)), can be obtained, where P_p = |A_p(0)|² is the pump power at z = 0.
Orthogonally polarized pump and signal and idler fields
In this case, a pump field orthogonally polarized to the signal and idler fields is considered. Assuming the pump field is x-polarized, the Jones vectors can be written accordingly (Eq. (23)). Inserting Eq. (23) into Eqs. (15)-(17), following an approach similar to that in [21] and assuming n₂ω_l/c ≈ n₂ω₁/c for l = 2, 3, expressions for the effective phase mismatch, κ_⊥ (Eq. (24)), and the parametric gain, g_⊥ (Eq. (25)), can be found for the pump field orthogonally polarized to the signal and idler fields, where P_p = |A_p(0)|² is the pump power at z = 0. Eqs. (21) and (24) can be simplified if the overlap integrals are approximated by 1/A_eff, where A_eff is the effective mode area. The effective phase mismatches are then given by κ_∥,approx = Δβ + 2 n₂ω₁P_p/(A_eff c) and κ_⊥,approx = Δβ − (2/3) n₂ω₁P_p/(A_eff c), in agreement with [22]. An interesting difference between the two cases is observed: for co-polarized pump, signal and idler fields the nonlinear contribution to the effective phase mismatch is positive, whereas for orthogonally polarized pump, signal and idler fields it is negative. In the following, the field overlap integrals are used to calculate the parametric gain, since the 1/A_eff approximation does not apply to fibers with strongly frequency-dependent field distributions such as the hybrid PCF.
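The approximate expressions above can be evaluated directly; the sketch below does so for the pump parameters used later in the text, with an assumed effective mode area (the actual calculation in the paper uses the mode-specific overlap integrals instead).

```python
import math

C = 299792458.0      # speed of light, m/s

def kappa_approx(delta_beta, n2, lam_pump, a_eff, p_pump, co_polarized=True):
    """Approximate effective phase mismatch (1/m) in the 1/A_eff form."""
    w1 = 2.0 * math.pi * C / lam_pump
    nonlinear = n2 * w1 * p_pump / (a_eff * C)
    if co_polarized:
        return delta_beta + 2.0 * nonlinear          # kappa_parallel
    return delta_beta - (2.0 / 3.0) * nonlinear      # kappa_perpendicular

# Example with 25 kW at 1064 nm and n2 = 2.7e-16 cm^2/W = 2.7e-20 m^2/W;
# the effective area of 500 um^2 is an assumption for illustration.
for co in (True, False):
    print(kappa_approx(0.0, 2.7e-20, 1064e-9, 500e-12, 25e3, co))
```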
Calculated parametric gain
The parametric gain of a degenerate FWM process in the hybrid PCF is calculated for two cases: signal and idler fields generated with the same polarization state as the pump field, and signal and idler fields generated with polarization states orthogonal to that of the pump field, using Eqs. (22) and (25), respectively. A pump power of 25 kW and a pump wavelength of 1064 nm are used in the calculations, and a value of 2.7 × 10⁻¹⁶ cm²/W is used for the nonlinear index coefficient n₂ [23].
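Since Eqs. (22) and (25) are not reproduced here, the sketch below uses the standard undepleted-pump expression for the degenerate FWM gain, g = sqrt((γP_p)² − (κ/2)²), as a stand-in; the paper's own calculation evaluates the gain with the mode-specific overlap integrals.

```python
import math

def parametric_gain(kappa, gamma, p_pump):
    """Gain (1/m); zero where the square-root argument is negative."""
    arg = (gamma * p_pump) ** 2 - (kappa / 2.0) ** 2
    return math.sqrt(arg) if arg > 0.0 else 0.0

def gain_spectrum(signal_wavelengths, kappa_of_lambda, gamma, p_pump):
    """kappa_of_lambda: callable returning the effective phase mismatch (1/m)
    at each signal wavelength, e.g. built from the mode-solver output."""
    return [parametric_gain(kappa_of_lambda(lam), gamma, p_pump)
            for lam in signal_wavelengths]

# Toy example: a linearly varying mismatch crossing zero near 995 nm and a
# nonlinear coefficient gamma ~ 3e-4 1/(W*m); the peak gain is ~ gamma*P.
lams = [990e-9 + i * 1e-10 for i in range(100)]
toy_kappa = lambda lam: 4.0e9 * (lam - 995e-9)
print(max(gain_spectrum(lams, toy_kappa, 3.0e-4, 25e3)))
```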
In Fig. 5 the calculated parametric gain is shown. The gain is calculated for the pump in the FM polarized along the slow axis, Fig. 5(a) and Fig. 5(c), and for the pump in the FM polarized along the fast axis, Fig. 5(b) and Fig. 5(d), as indicated by the arrow in the "Input" microscope image inset. The gain is calculated for the signal and idler generated in the same mode, which is either one of the two polarizations of the FM, HOM11, or HOM12. The polarization of the generated FWM components is indicated by the arrow in the "Output" microscope image inset. The legend in the bottom right graph applies to all graphs. A zoom of the FWM components generated in the HOM11 and HOM12 modes is shown in each graph, and a zoom of the FWM components generated in the FM polarized along the fast axis for an input pump polarized along the slow axis, observed in Fig. 5(a), is shown in Fig. 6(a).
Several narrow gain peaks are observed in Fig. 5. For all polarization combinations, intermodal FWM gain peaks are observed for the signal and idler generated in HOM11 and HOM12. The positions of these gain peaks differ between the combinations, but the peaks are observed at 830-850 nm and 1425-1485 nm, and their widths are rather small, on the order of a few nm. The FWM components at 830-850 nm lie well within the 4th bandgap, and their dispersion properties are not affected by the PBG effect, see Fig. 4(b). The HOM11 FWM components at 1425-1485 nm lie well within the 2nd HOM11 bandgap, so their dispersion properties are likewise unaffected by the PBG effect; the phase matching for HOM11 is thus based on intermodal phase matching. The dispersion properties of HOM12 in the 2nd HOM12 bandgap are affected by the PBG effect for wavelengths shorter than 1480 nm in the slow axis and shorter than 1470 nm in the fast axis, see Fig. 4(b). The phase matching for HOM12 is thus based on intermodal phase matching combined with dispersion tailoring through the PBG effect in Fig. 5(a), 5(b) and 5(d). However, for orthogonally polarized pump, signal and idler fields with the pump polarized along the slow axis, shown in Fig. 5(c), the dispersion properties of HOM12 are affected only to a very small degree by the PBG effect.
The different relative spectral positions of the gain peaks can be explained by considering Δβ, including the birefringence terms −Δn₂ω₂/c − Δn₃ω₃/c + 2Δn₁ω₁/c, where Δn_j, j = 1, 2, 3, are the birefringence values at the pump, signal and idler wavelengths. If Δβ decreases for a given combination of input and output polarization states, the spectral distance from the pump to the generated FWM pair increases: the short-wavelength FWM component is blue shifted, and the long-wavelength FWM component is red shifted. In the co-polarized case, the spectral distance between the pump and the signal and idler generated in HOM11 is larger for FWM components generated in the slow axis, Fig. 5(a), than for FWM components generated in the fast axis, Fig. 5(d); thus −Δn₂ω₂/c − Δn₃ω₃/c + 2Δn₁ω₁/c > 0. For the co-polarized case with the signal and idler generated in HOM12 the situation is the opposite: the spectral distance between the pump and the signal and idler is larger for FWM components generated in the fast axis, Fig. 5(d), than for those generated in the slow axis, Fig. 5(a); thus −Δn₂ω₂/c − Δn₃ω₃/c + 2Δn₁ω₁/c < 0. Δn₂ and Δn₃ are much higher for HOM12 than for HOM11, since HOM12 is affected by the stress-induced birefringence to a larger degree than HOM11, see Fig. 4(c), which supports the observed behaviour. For orthogonally polarized pump, signal and idler, the spectral distance between the pump and the signal and idler generated in both HOM11 and HOM12 is larger for the pump polarized along the slow axis, Fig. 5(c), than for the pump polarized along the fast axis, Fig. 5(b). Intramodal FWM, where the signal and idler are generated in the FM, is only observed for orthogonally polarized pump, signal and idler fields with the pump polarization along the slow axis, as shown in Fig. 5(c). The parametric gain for the intramodal FWM is 1.6 times larger than the gain for the intermodal FWM, where the signal and idler are generated in the LP11 modes, for the same input and output polarization states. In general, the gain is largest for the co-polarized pump, signal and idler configuration, as predicted by Eqs. (22) and (25).
In practice, the gain peaks may be broader and of lower magnitude than observed in Fig. 5. Small fluctuations in the physical dimensions of the fabricated hybrid PCF in the longitudinal direction of the fiber may alter the phase matching criteria, and thus the spectral position of the maximum of the parametric gain can change slightly throughout the length of the hybrid PCF.
Comparison with a simulated polarization maintaining large mode area fiber
The birefringence-assisted phase matching in the hybrid PCF can be compared with the phase matching in a LMA PM fiber with no PBG elements. For this purpose, a simulated LMA PM fiber with a mode field diameter of 28 µm (LMA-PM-28) is considered. The waveguide dispersion in the LMA-PM-28 is small compared to the material dispersion, so the dispersion properties are assumed to be the same as for bulk silica. The mode indices of the slow and fast axes are calculated from the Sellmeier equation for fused silica, but a constant negative offset is introduced in the fast axis to represent the birefringence. The parametric gain of birefringence-assisted FWM in the LMA-PM-28 is calculated for a pump wavelength of 1064 nm and a pump power of 25 kW, for birefringence values between 1 × 10⁻⁵ and 3 × 10⁻⁵ in steps of 1 × 10⁻⁷. The calculated maximum gain varies between 1.9 m⁻¹ and 2.2 m⁻¹ over the considered range of birefringence values, and the spectral widths of the gain peaks are less than 2 nm. The spectral location of the maximum gain is shown in Fig. 6(b). The pump is polarized along the slow axis, and the signal and idler are orthogonally polarized along the fast axis. The phase matching over the full spectral range from 800 nm to 1500 nm is shown in the inset of the figure.
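A minimal numerical sketch of this comparison model is given below: both axes use the Malitson Sellmeier index of fused silica, the fast axis receives a constant negative offset equal to the birefringence, and the phase-matched signal wavelength is found by a brute-force scan of the linear mismatch (the nonlinear contribution is neglected here for brevity).

```python
import math

def n_silica(lam_m):
    """Malitson Sellmeier equation for fused silica (wavelength in metres)."""
    l2 = (lam_m * 1e6) ** 2
    return math.sqrt(1.0
        + 0.6961663 * l2 / (l2 - 0.0684043 ** 2)
        + 0.4079426 * l2 / (l2 - 0.1162414 ** 2)
        + 0.8974794 * l2 / (l2 - 9.896161 ** 2))

def beta(lam_m, offset=0.0):
    """Propagation constant with a constant index offset (fast axis: -B)."""
    return 2.0 * math.pi * (n_silica(lam_m) + offset) / lam_m

def phase_matched_signal(lam_pump, birefringence):
    """Scan the short-wavelength branch for the minimum |linear mismatch|,
    with the pump on the slow axis and signal/idler on the fast axis."""
    def mismatch(lam_s):
        lam_i = 1.0 / (2.0 / lam_pump - 1.0 / lam_s)   # energy conservation
        return abs(beta(lam_s, -birefringence)
                   + beta(lam_i, -birefringence)
                   - 2.0 * beta(lam_pump))
    grid = [800e-9 + i * 0.05e-9 for i in range(4200)]  # 800-1010 nm
    return min(grid, key=mismatch)

# Example: B ~ 2.2e-5 gives a short-wavelength component near 990-995 nm,
# of the same order as the values reported in the text.
print(phase_matched_signal(1064e-9, 2.2e-5) * 1e9)
```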
As shown in Fig. 6(b), the phase matched wavelengths are very sensitive to the birefringence. For values between 1 × 10 −5 and 3 × 10 −5 the phase matched wavelengths are shifted from 1015 nm and 1119 nm to 982 nm and 1162 nm. In Fig. 6(a) the parametric gain of the LMA-PM-28 with a birefringence of 2.21 × 10 −5 is compared to the intramodal parametric gain of the hybrid PCF for orthogonally polarized pump, signal and idler, with the pump polarized along the slow axis. The birefringence of the hybrid PCF is 2.12 × 10 −5 at 1064 nm. The gain is slightly higher in the hybrid PCF, but the spectral positions of the gain peaks are identical. This comparison underlines that the intramodal FWM is due to birefringence assisted phase matching in the hybrid PCF. Note that the parametric gain of the simulated LMA-PM-28 is
Measurements
In Fig. 7 a schematic illustration of the measurement setup is shown. A linearly polarized Ytterbium-doped 40 ps, 1064 nm fiber laser with a repetition rate of 1 MHz is used as the pump laser. To ensure stable laser performance the average output power of the laser is fixed, and a half-wave plate, λ/2, together with a polarizing beam splitter is used to adjust the input power. Another half-wave plate, λ/2, is used to align the polarization with the principal axes of the hybrid PCF. A 1064 nm laser line filter is inserted before the fiber input to remove spontaneous emission from the laser.
On the output side of the fiber a polarizer is inserted to measure the light polarized along the Ge-doped rods and the light polarized orthogonally to the Ge-rods. In practice the polarizations are separated by a polarizing beam splitter. The light is collected by an integrating sphere connected to an optical spectrum analyzer (OSA) through a multimode fiber with core diameter 100 μm. Two lengths of the hybrid PCF are considered, 2 m and 6 m. The hybrid PCFs are coiled with a diameter of approximately 50 cm.
Spontaneous degenerate four-wave mixing
In Figs. 8(a) and 8(b) the output spectra of the 2 m and 6 m hybrid PCFs, respectively, are shown for different pump peak powers. The pump laser is launched in the FM of the hybrid PCF, polarized along either the slow axis or the fast axis; the input polarization is indicated by the arrow in the "Input" microscope image inset. The output spectra of the two principal axes of the fibers have been collected, and the polarization of the collected spectra is indicated by the arrow in the "Output" microscope image inset. Note that the spectral component at 1064 nm observed in the spectra with output polarization orthogonal to the input pump polarization stems from cross-coupling between the two polarizations. The pump peak powers are stated in the legends, and the legends shown in the bottom graphs also apply to the graphs directly above them, which have the same input polarization state.
In Fig. 9(a) an image of the output from the 2 m long hybrid PCF is shown for a given input peak power. (Figure panel captions: (a) 2 m; (b) 6 m.) For low input powers the relationship between the output and input power corresponds to the coupling efficiency. Different spectral components are observed in Fig. 8. Raman scattering is observed at the first Stokes wave at approximately 1120 nm in the same output polarization as the input polarization, both in the 2 m and in the 6 m length of hybrid PCF. The Raman threshold is substantially lower in the 6 m PCF, and thus a larger amount of Raman scattering is observed in the longer fiber. Furthermore, Raman scattering is observed at the second Stokes wave at approximately 1180 nm in the longer fiber. Raman scattering is also observed in the polarization orthogonal to the input polarization in the longer fiber, but at much lower magnitude; this is attributed to cross-coupling between the two principal fiber axes, since the Raman gain is substantially lower for orthogonally polarized pump and Stokes waves [21].
Intermodal FWM between the pump in the FM and the signal and idler in the LP11 mode is observed at 848 nm and 1425 nm in Fig. 8(a), in good agreement with the calculated parametric gain. The relative difference between the spectral locations of the calculated and experimentally observed FWM components is less than 4 % for all considered polarization state combinations. The 848 nm component is also visible for three of the configurations in Fig. 8(b) at high pump powers. Loss is expected for the LP11 mode, and the output average power relative to the input average power decreases for higher pump peak powers, both in the 2 m and 6 m long fibers, after the onset of the intermodal FWM, as observed in Fig. 10. For the highest pump power the output average power relative to the input average power is less than 30 % in the 6 m hybrid PCF for the pump polarized along the slow and the fast axis, as shown in Figs. 10(b) and 10(d); this loss is due to intermodal FWM into the lossy LP11 mode. For a pump peak power of approximately 150 kW the intermodal FWM components at 848 nm and 1425 nm are clearly observed in the 2 m long hybrid PCF, as shown in Fig. 8(a), but they are not observed in the 6 m long hybrid PCF, as shown in Fig. 8(b). For the pump polarized along the slow axis, birefringence-assisted intramodal FWM, where the signal and idler are generated in the FM polarized along the fast axis, is observed in the 3rd bandgap for both hybrid PCF lengths. The FWM components are generated at 995 nm and 1145 nm in this process, in good agreement with the calculated parametric gain; the relative difference between the spectral locations of the calculated and experimentally observed intramodal FWM components is less than 1 %. In the 6 m long hybrid PCF the 995 nm and 1145 nm components are also observed in the output polarized along the slow axis (note that the 1145 nm component falls within the same spectral region as the Raman scattering). However, their magnitude is much smaller than in the output polarized along the fast axis, and the intramodal FWM components observed in this polarization are attributed to cross-coupling between the two polarizations. In Fig. 10(b) a relatively large amount of power appears in the fast axis for peak powers above 50 kW; this is the power in the intramodal FWM components.
Generation efficiency of 848 nm
In the 2 m long hybrid PCF a fairly strong generation of light at 848 nm is observed for copolarized pump, signal and idler, with the pump polarized along the slow axis. The generation efficiency of the 848 nm component is measured in the 2 m fiber piece, by positioning the fiber in a half-coil instead of the fully coiled fiber considered in Fig. 8(a), to decrease the bend loss at 848 nm. The FWM components are divided spatially through a prism, as in Fig. 9(a). Thereby it is possible to measure the power of the 848 nm component only, by physically blocking the other components.
In Fig. 11 the conversion efficiency of the 848 nm component is shown, given by the average output power at 848 nm measured after the prism divided by the pump laser average input power measured directly after the 1064 nm filter, see Fig. 7. The conversion efficiency is maximum for an input peak power of 165 kW; the maximum conversion efficiency is 17.3 %, corresponding to an average output power of 1.2 W at 848 nm. The conversion efficiency decreases for input powers beyond 165 kW, since cascaded FWM sets in. Light at 705 nm was observed at higher power levels; an image of the fiber output at this wavelength is shown in the inset of Fig. 11. For input powers beyond 165 kW the 848 nm component acted as a pump for a FWM process generating components at 1064 nm and 705 nm. The mode at 705 nm is very lossy, and a red glow from the fiber cladding along the longitudinal direction could be observed with the naked eye.
Conclusion
Degenerate FWM in a LMA hybrid PCF has been considered. Expressions for the effective phase mismatch and the parametric gain have been derived for co- and orthogonally polarized pump, signal and idler fields, and were used to calculate the parametric gain in the hybrid PCF. Intramodal birefringence-assisted phase matching and intermodal phase matching were observed. The parametric gain of a simulated PM LMA fiber with a MFD of 28 µm was also calculated, assuming that the refractive mode indices of the slow and fast axes are given by the Sellmeier equation with a negative offset corresponding to the fiber birefringence introduced in the fast axis. Similar spectral positions of the parametric gain were obtained in the hybrid PCF and in the simulated LMA fiber of similar birefringence, underlining that the intramodal FWM observed in the hybrid PCF relies on birefringence-assisted phase matching.
Spontaneous degenerate FWM was measured in 2 m and 6 m lengths of the hybrid PCF for a linearly polarized pump at 1064 nm. The FWM processes for co- and orthogonally polarized pump, signal and idler fields were considered. Intramodal birefringence-assisted phase matching and intermodal phase matching were observed, in good agreement with the numerical results. The conversion efficiency of the intermodal FWM process was measured for the short-wavelength component, and a conversion efficiency of 17 % was obtained. This work characterizes the different phase matching techniques in the hybrid PCF and demonstrates the potential of using LMA hybrid PCFs to extend the spectral coverage of high power fiber light sources.
| 8,881.6 | 2015-03-09T00:00:00.000 | [ "Physics" ] |
Baroreflex Sensitivity Measured by Pulse Photoplethysmography
Novel methods for assessing baroreflex sensitivity (BRS) using only pulse photoplethysmography (PPG) signals are presented. Proposed methods were evaluated with a data set containing electrocardiogram (ECG), blood pressure (BP), and PPG signals from 17 healthy subjects during a tilt table test. The methods are based on a surrogate of α index, which is defined as the power ratio of RR interval variability (RRV) and that of systolic arterial pressure series variability (SAPV). The proposed α index surrogates use pulse-to-pulse interval series variability (PPV) as a surrogate of RRV, and different morphological features of the PPG pulse which have been hypothesized to be related to BP, as series surrogates of SAPV. A time-frequency technique was used to assess BRS, taking into account the non-stationarity of the protocol. This technique identifies two time-varying frequency bands where RRV and SAPV (or their surrogates) are expected to be coupled: the low frequency (LF, inside 0.04–0.15 Hz range), and the high frequency (HF, inside 0.15–0.4 Hz range) bands. Furthermore, time-frequency coherence is used to identify the time intervals when the RRV and SAPV (or their surrogates) are coupled. Conventional α index based on RRV and SAPV was used as Gold Standard. Spearman correlation coefficients between conventional α index and its PPG-based surrogates were computed and the paired Wilcoxon statistical test was applied in order to assess whether the indices can find significant differences (p < 0.05) between different stages of the protocol. The highest correlations with the conventional α index were obtained by the α-index-surrogate based on PPV and pulse up-slope (PUS), with 0.74 for LF band, and 0.81 for HF band. Furthermore, this index found significant differences between rest stages and tilt stage in both LF and HF bands according to the paired Wilcoxon test, as the conventional α index also did. These results suggest that BRS changes induced by the tilt test can be assessed with high correlation by only a PPG signal using PPV as RRV surrogate, and PPG morphological features as SAPV surrogates, being PUS the most convenient SAPV surrogate among the studied ones.
INTRODUCTION
The baroreflex system plays an important role in regulating short-term fluctuations of arterial blood pressure (BP) (La Rovere et al., 2008;Robertson et al., 2012). Arterial baroreceptors (placed in the wall of the carotid sinuses and aortic arch) sense changes in BP and modulate efferent autonomic neural activity to the central nervous system accordingly. A rise in sensed BP leads to an increase of vagal neurons discharge and a decrease in the discharge of sympathetic neurons, resulting in decreased heart rate (HR), cardiac contractility and peripheral vascular resistance. On the contrary, decreased BP enhances sympathetic and inhibits vagal activity, leading to increased HR, cardiac contractility and peripheral vascular resistance.
Cardiovascular diseases are frequently associated with an impairment of baroreflex mechanisms, resulting in chronic adrenergic activation. Reduced baroreflex control of HR has been reported in coronary artery disease, heart failure, hypertension and myocardial infarction (La Rovere et al., 2008; Pinna et al., 2017). Assessment of baroreflex in humans is usually approached by measuring the changes in HR in response to changes in BP, the so-called baroreflex sensitivity (BRS). Alternatively, spontaneous beat-to-beat fluctuations of systolic arterial pressure and RR interval can be analyzed, allowing BRS assessment during daily life. A wide spectrum of techniques has been used for spontaneous beat-to-beat BRS assessment. Traditional approaches, such as the sequence technique and those based on the spectral analysis of systolic arterial pressure and RR interval series (α index), were reviewed in La Rovere et al. (2008).
In order to deal with the nonstationary nature of cardiovascular variability, methods based on the wavelet transform (Nowak et al., 2008; Keissar et al., 2010) and quadratic time-frequency representations (Orini et al., 2011) have been proposed. In Orini et al. (2012) a framework for nonstationary BRS assessment, based on a time-frequency distribution, was presented, taking into account the strength and prevalent direction of local coupling between RR variability (RRV) and systolic arterial pressure variability (SAPV) series. Alternatively, in Chen et al. (2011) dynamic assessment of BRS is accomplished based on a closed-loop model within a point process framework. A critical review of clinical studies using spontaneous BRS was reported in Pinna et al. (2017). Despite some limitations, such as the lack of standards and the poor measurability in some patient populations, published studies support spontaneous BRS as a powerful tool for prognostic prediction in diseases such as hypertension, myocardial infarction, chronic heart failure and diabetes (La Rovere et al., 2008; Di Rienzo et al., 2009; de Moura-Tonello et al., 2016).
Spontaneous BRS assessment and monitoring during daily life is limited by the requirement of continuous BP recording, which is usually accomplished by the volume-clamp method or tonometry method, neither of them being suitable for ubiquitous monitoring (Mukkamala et al., 2015). This limitation may be overcome by using a surrogate of systolic arterial pressure which does not require the BP recording. Many works have attempted BP estimation based on pulse transit time (PTT), which is the time delay for the pressure wave to travel between two arterial sites. Most of these approaches, reviewed in Mukkamala et al. (2015), are based on models of arterial wall mechanics and wave propagation in the artery. Due to ease of measurement, pulse arrival time (PAT), which is the time delay between the electrocardiogram (ECG) waveform and a distal arterial waveform, has been widely used instead of PTT for BP estimation. PAT is the sum of PTT and the pre-ejection period (PEP), which varies beat-to-beat depending on ventricular and arterial pressures, short-term physiologic control and medication. Although the effect of PEP modulation makes PAT more inconvenient than PTT for BP estimation, half of the studies reviewed in Mukkamala et al. (2015) used PAT as a surrogate of PTT. Some of these methods have been used for BRS assessment. For instance, in Abe et al. (2015) it was proposed to evaluate baroreflex function using the maximum normalized cross-correlation between the LF components of HRV and PAT, derived from ECG and pulse photoplethysmographic (PPG) signals.
In Liu et al. (2011) it was suggested that PAT can track BP variations in the HF range, but is inadequate to follow the LF variations. To overcome this limitation, Ding et al. (2016) proposed to estimate BP by combining PAT with a new index, the photoplethysmogram intensity ratio (PIR), which can reflect changes in arterial diameter due to arterial vasomotion. In order to avoid the PEP influence in BP estimation, PTT has been derived from impedance plethysmography recorded at the wrist and PPG at the finger (Huynh et al., 2018), or from a ballistocardiogram and PPG at the foot (Martin et al., 2016). Alternatively, PTT was estimated from two PPG signals recorded at the ear and toe in Chen et al. (2009) and at the forearm and wrist in Wang et al. (2018). Some works have investigated the correlation between PAT and PTT estimated from PPG signals at the finger and forehead at rest (Liu et al., 2015) and during a tilt test (Lázaro et al., 2016). In Li et al. (2014) different PPG indices were investigated for BP estimation; the time ratio of systole to diastole, time span of the PPG cycle, diastolic time duration, and area ratio of systole to diastole are at least as good as PTT for BP estimation, and can be derived from just one PPG signal.
The PPG signal can be acquired with a sensor placed at many locations on the body. Furthermore, its recording is very simple, economical, and comfortable for the subject. Thus, the PPG is a very interesting signal for ambulatory scenarios and wearable devices, and assessing BRS from the PPG signal may have a significant impact in such applications. Moreover, several studies have compared pulse rate variability (PRV), derived from the PPG, to HRV derived from the ECG, reporting good agreement even in non-stationary situations and during abrupt autonomic nervous system changes (Gil et al., 2010; Wong et al., 2012; Posada-Quintero et al., 2013; Schäfer and Vagedes, 2013). In this work we investigate the feasibility of assessing BRS solely from one PPG signal. The proposed approach is based on using PPG-based surrogates of the RRV and SAPV series. On one hand, the pulse-to-pulse variability (PPV) series was used as a surrogate of the RRV series. On the other hand, different PPG morphological features which are hypothesized to be related to BP were used to generate series that served as surrogates of SAPV. The ability of the proposed methods to capture changes in autonomic nervous system control was evaluated in a tilt-test database.
Data and Preprocessing
A data set containing ECG, BP, and PPG recordings from 17 healthy subjects (11 men), aged 28.5 ± 2.5 years, during a tilt table test was used for method evaluation. The protocol started with 4 min in supine position (Rest1), followed by 5 min in a 70° tilt-up position (Tilt), and ended with 4 min back in supine position (Rest2). The table took 18 s for the automatic transitions between stages.
ECG lead V4 was recorded by Biopac ECG100C with a sampling rate of 1,000 Hz, BP signal (x BP (n)) was recorded by Finometer system with a sampling rate of 250 Hz, and PPG signal was recorded from the index finger by BIOPAC OXY100C with a sampling rate of F s = 250 Hz. A low-pass filter with a cut-off frequency of 35 Hz was applied to the PPG in order to attenuate noise. This preprocessed PPG signal is denoted x PPG (n) in this paper. Several points were measured over the PPG pulses. Some of them were measured directly over the pulse as those described in section 2.1.1, and others over the waves extracted from the pulse by the pulse decomposition analysis (PDA) technique described in section 2.1.2.
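As an illustration of the preprocessing step just described, the sketch below applies a 35 Hz low-pass filter to a raw PPG record sampled at 250 Hz. This is a minimal sketch, not the authors' implementation: the filter order and the zero-phase (filtfilt) design are assumptions, since only the cut-off frequency is stated above.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250.0  # PPG sampling rate in Hz, as stated above

def preprocess_ppg(x_raw: np.ndarray, fc: float = 35.0, order: int = 4) -> np.ndarray:
    """Attenuate high-frequency noise in a raw PPG record with a low-pass filter."""
    b, a = butter(order, fc / (FS / 2.0), btype="low")
    return filtfilt(b, a, x_raw)  # zero-phase filtering keeps pulse timing intact

# Example with a synthetic 10 s record (illustrative data only)
if __name__ == "__main__":
    t = np.arange(0, 10, 1 / FS)
    x_raw = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)  # ~72 bpm pulse + noise
    x_ppg = preprocess_ppg(x_raw)
```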
Pulse Delineation
Several points of the ith PPG pulse were detected in order to take different morphological measurements; all of them are illustrated in Figure 1. First, PPG pulses were detected by an algorithm based on a low-pass derivative and a time-varying threshold (Lázaro et al., 2014). This algorithm detects the maximum up-slope point (n U i), which is later used to detect the pulse apex point (n A i) and the pulse basal point (n B i). Subsequently, n A i and n B i are used to compute the medium-amplitude point (n M i), which is considered a robust measure of PPG pulse location because it is located within the interval of steepest slope of the PPG pulse. The pulse onset (n O i) and end (n E i) points were detected based on the first derivative (Lázaro et al., 2013), and the pulse up-slope end (n SE i) was detected in a similar way: letting x′ PPG (n) be the first derivative of x PPG (n), computed by successive differences after a 5-Hz low-pass filter, n SE i is set using a threshold η on x′ PPG (n), where η was set to 0.05 as in the case of n O i and n E i (Lázaro et al., 2013).
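Because the delineation equations themselves are not reproduced above, the following is a rough sketch of how the apex, basal, and medium-amplitude points could be located once the maximum up-slope point n U i is known. The 300 ms search window and the half-amplitude criterion are assumptions for illustration, not the original rules of Lázaro et al. (2014).

```python
import numpy as np

def delineate_pulse(x_ppg: np.ndarray, n_u: int, fs: float = 250.0):
    """Locate apex (n_a), basal (n_b) and medium-amplitude (n_m) points of one pulse."""
    win = int(0.3 * fs)                                  # assumed 300 ms search window
    n_a = n_u + int(np.argmax(x_ppg[n_u:n_u + win]))     # apex: maximum after the up-slope point
    start = max(n_u - win, 0)
    n_b = start + int(np.argmin(x_ppg[start:n_u + 1]))   # basal: minimum before the up-slope point
    # medium-amplitude point: first up-slope sample reaching half the basal-to-apex excursion
    half = x_ppg[n_b] + 0.5 * (x_ppg[n_a] - x_ppg[n_b])
    above = np.nonzero(x_ppg[n_b:n_a + 1] >= half)[0]
    n_m = n_b + int(above[0]) if above.size else n_u
    return n_a, n_b, n_m
```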
Pulse Decomposition Analysis
PDA is a field in PPG signal processing that consists of modeling the PPG pulse as a main wave superposed with several reflected waves, increasing the robustness of some morphological measurements and even allowing others that would not be possible directly over the pulse. Several models can be found in the literature, based on different shapes including Gaussian (Baruch et al., 2011), log-normal (Huotari et al., 2011), and Rayleigh (Goswami et al., 2010) functions. In this work, a modification of the PDA technique presented in Lázaro et al. (2018) is proposed. The main difference of this technique with respect to other PDA techniques in the literature is that the waves are extracted one by one, instead of fitting a several-wave model at once. The modification proposed in this paper consists of not assuming a specific shape for the superposed waves, although it is assumed that they are symmetrical.
First, the baseline of the PPG signal was estimated by cubic-spline interpolation of x PPG (n B i) and subsequently subtracted from x PPG (n). This baseline-removed version of the PPG signal is denoted x b PPG (n) in this manuscript. Then, the beginning and the end of the ith PPG pulse were considered to be n B i and n B i+1, respectively. Note that this criterion ensures that each PPG pulse begins and ends with zero amplitude, as the subtracted baseline was estimated at those n B i. Later, the algorithm extracts the jth inner wave of the pulse recursively by the following steps:
1. Set the beginning of the up-slope of the jth wave (n b SO j,i) as the sample previous to the first non-zero-amplitude sample. Note that in the case of j = 1 (the main wave), this corresponds to n B i.
2. Set the end of the up-slope of the jth wave (n b SE j,i) as the first relative maximum.
3. Estimate the jth wave y b j,i (n) by concatenating the up-slope with itself horizontally flipped, assuming that the wave is symmetric.
4. Subtract y b j,i (n) from x b PPG (n) and go back to step 1 to continue extracting the (j + 1)th wave.
Once the desired number of waves has been extracted, they can be modeled in order to measure morphological features. In this work, three waves were extracted per PPG pulse. Subsequently, these y b j,i (n) were normalized to unit amplitude and to 1,000 samples by spline interpolation, and then they were modeled as Gaussian waves, each one defined by an amplitude, a mean, and a standard deviation (SD). Once these values were estimated, they were re-converted to the original amplitude and time scales. An illustration of the steps of this algorithm can be observed in Figure 2.
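The following is a minimal sketch of the wave-extraction loop described above for a single baseline-removed pulse. The tolerance used to define "non-zero amplitude" and the handling of the mirrored tail at the pulse border are assumptions, and the subsequent Gaussian fit is omitted.

```python
import numpy as np

def extract_waves(x_b: np.ndarray, n_waves: int = 3, eps: float = 1e-6):
    """Recursively extract up to `n_waves` symmetric waves from a baseline-removed pulse."""
    residual = np.asarray(x_b, dtype=float).copy()
    waves = []
    for _ in range(n_waves):
        nz = np.nonzero(residual > eps)[0]
        if nz.size == 0:
            break
        n_so = max(int(nz[0]) - 1, 0)                 # step 1: start of the up-slope
        n_se = n_so + 1                               # step 2: first relative maximum
        while n_se + 1 < residual.size and residual[n_se + 1] > residual[n_se]:
            n_se += 1
        upslope = residual[n_so:n_se + 1]
        wave = np.zeros_like(residual)
        wave[n_so:n_se + 1] = upslope                 # step 3: mirror the up-slope (symmetry)
        down = upslope[::-1][1:]
        end = min(n_se + 1 + down.size, residual.size)
        wave[n_se + 1:end] = down[:end - (n_se + 1)]
        waves.append(wave)
        residual = residual - wave                    # step 4: subtract and repeat
    return waves
```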
Systolic Arterial Pressure Variability Surrogates Based on Pulse Signal
Four pulse morphological features that have been related to BP and/or to arterial stiffness in the literature were measured from each PPG pulse: amplitude (PA), width (PW), up-slope (PUS), and slope transit time (PSTT). Pulse amplitude and width were measured as in Lázaro et al. (2013): the pulse amplitude corresponds to the amplitude reached at n A i with respect to n B i, and the pulse width was measured as the time interval between n O i and n E i. The pulse up-slope was measured as the value of the first derivative at n U i, and PSTT was measured as the time interval between n O i and n SE i. Later, the PA, PW, PUS, and PSTT series were constructed as impulse trains located at the pulse occurrence times (formally written using the Kronecker delta function δ(·)); the superscript "u" denotes that these signals are unevenly sampled, as the PPG pulses occur unevenly in time. A median-absolute-deviation outlier-rejection rule (Bailón et al., 2006) was applied to each one of these series, rejecting those points that lie outside the boundaries defined as the median ± 5 times the SD of the previous 50 points. Subsequently, a 4-Hz evenly sampled version of each series was obtained by linear interpolation. The resulting signals are denoted using the same nomenclature, this time without the superscript "u" [e.g., d PA (n)].
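A rough sketch of how one such series (here PUS) could be cleaned and resampled is given below, assuming per-pulse feature values and their occurrence times are already available. The warm-up requirement of a few accepted past samples before the rule is applied, and the variable names, are illustrative assumptions.

```python
import numpy as np

def feature_series(values, times, fs_out=4.0, win=50, k=5.0):
    """Causal outlier rejection (median +/- k*SD of the previous `win` accepted points)
    followed by linear resampling of an unevenly sampled feature series to `fs_out` Hz."""
    values = np.asarray(values, dtype=float)
    times = np.asarray(times, dtype=float)
    keep = np.ones(values.size, dtype=bool)
    for i in range(values.size):
        past = values[max(i - win, 0):i][keep[max(i - win, 0):i]]
        if past.size >= 5:                            # assumed warm-up before applying the rule
            lo = np.median(past) - k * np.std(past)
            hi = np.median(past) + k * np.std(past)
            keep[i] = lo <= values[i] <= hi
    t_even = np.arange(times[0], times[-1], 1.0 / fs_out)
    return t_even, np.interp(t_even, times[keep], values[keep])

# Example: d_PUS(n) from per-pulse up-slope values `pus_i` at pulse times `t_i` (seconds)
# t_even, d_pus = feature_series(pus_i, t_i)
```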
Systolic Arterial Pressure Variability Surrogates Based on Pulse Decomposition Analysis
Seven morphological features were extracted from each PDA-based modeled PPG pulse. Specifically, the amplitude, mean, and twice the SD of the Gaussian function fitted to the main wave were studied (m A1, m B1, and m C1, respectively). Moreover, the feature related to twice the SD of the first reflected wave was also studied (m C2), as well as the time delays between the main wave occurrence m B1 and those of the reflected waves m B2 and m B3 (m T12 and m T13, respectively). Furthermore, the percentage of amplitude lost in the first reflection was also estimated (m A12). Figure 3 illustrates these measures. These features extracted from the PDA are also hypothesized to be related to BP and/or to arterial stiffness, since they are related to amplitude, to the relative position between the waves, and to wave dispersion through the SD. Their associated series were computed analogously to those in section 2.2.1. The outliers of these series were rejected by the same median-absolute-deviation-based rule applied to the features measured directly over the pulse (see section 2.2.1), and similarly, they were linearly interpolated, obtaining a 4-Hz evenly sampled version of each of them, denoted without the superscript "u".
Baroreflex Sensitivity Indices
The BRS indices were computed based on the α index, which is obtained from a spectral analysis of RRV and SAPV. Several α-index surrogates based on the PPG signal were computed, using PPV as the RRV surrogate and the SAPV surrogates described above. The PPV was estimated using n M i as the fiducial point, i.e., from the series of intervals between consecutive n M i. These series were also outlier-rejected and linearly interpolated to an even sampling rate of 4 Hz.

FIGURE 2 | Example of the steps of the pulse decomposition analysis that lead to the extraction of the main wave (first row), first reflected wave (second row), and second reflected wave (third row). In addition, the subsequent modeling of the extracted waves can be observed in the third column in magenta, red, and blue for the main, first reflected, and second reflected waves, respectively.

Then, a power spectrum was computed from d PPV (n), obtaining S PPV (f), and from the kth SAPV surrogate, obtaining S k (f), for each one of the stages of the protocol, where k can be PA, PW, PUS, PSTT, A1, B1, C1, C2, T12, T13, and A12. These power spectra were obtained by the Welch periodogram, using a 2-min Hamming window and 50% overlap. The PPG-based surrogates of the α index were then extracted from these spectra as the square root of the ratio between the power of d PPV (n) and the power of d k (n) within the LF and HF bands.

In addition, in order to take into account the non-stationarity of the protocol, the BRS indices were also computed using a time-frequency technique for instantaneous measurement of the α index, described in Orini et al. (2012). A time-frequency distribution was applied to d PPV (n), obtaining S PPV (n, f), and to each one of the PPG-morphology series used as SAPV surrogates, obtaining S k (n, f). In addition, a cross time-frequency spectrum S PPV,k (n, f) was also computed as in Orini et al. (2012). The instantaneous frequencies of the main components of S PPV,k (n, f) within [0.04, 0.15] Hz [for the LF band, f LF (n)] and [0.15, 0.4] Hz [for the HF band, f HF (n)] were computed as the frequencies where S PPV,k (n, f) is maximum within those bands. Then, LF (n) and HF (n) were defined as the frequency bands centered at f LF (n) and f HF (n), respectively, with a bandwidth equal to the frequency resolution of the used time-frequency distribution. The PPG-based surrogate of the α index was then computed for each S k (n, f) as the square root of the ratio between the powers of d PPV (n) (as a surrogate of RRV) and d k (n), for each one of the defined bands, obtaining α LF k (n) and α HF k (n) during the protocol.

For BRS assessment, it is convenient to measure these indices only when the PPV and k series are coupled. In order to detect these time courses, a time-frequency coherence γ PPV,k (n, f) was computed, and the PPV and k series were considered to be coupled in those areas where γ PPV,k (n, f) is over a significance level. The indices α LF k (n) and α HF k (n) measured only when γ PPV,k (n, f) is significant within LF and HF are denoted α LF γ k (n) and α HF γ k (n), respectively.

For validation purposes, the conventional α index was also computed from the RRV and the SAPV, denoted with no subindex [α {LF,HF,LF γ,HF γ} (n)], and taken as reference. The RRV was computed by the interval function using the R points (n R i) determined from the ECG with the detector of Martínez et al. (2004). The SAPV was computed from the maxima of the BP pulses (n A i), which were detected similarly to the PPG pulses (see section 2.1.1).
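As a minimal sketch (not the authors' implementation), the Welch-periodogram variant of the surrogate α index described above could be computed as follows. Band integration by the trapezoidal rule and the exact windowing arguments are assumptions; only the 2-min Hamming window, 50% overlap, the 4 Hz series rate, and the band limits are taken from the text.

```python
import numpy as np
from scipy.signal import welch

FS_SERIES = 4.0                     # both series are evenly resampled at 4 Hz
WIN = int(120 * FS_SERIES)          # 2-min Hamming window; 50 % overlap below

def band_power(x, band):
    f, pxx = welch(x, fs=FS_SERIES, window="hamming",
                   nperseg=min(WIN, len(x)), noverlap=None)  # noverlap=None -> 50 %
    m = (f >= band[0]) & (f <= band[1])
    return np.trapz(pxx[m], f[m])

def alpha_surrogate(d_ppv, d_k):
    """Return (alpha_LF, alpha_HF) for one stage: sqrt of the PPV-to-surrogate power ratio."""
    lf, hf = (0.04, 0.15), (0.15, 0.4)
    return (np.sqrt(band_power(d_ppv, lf) / band_power(d_k, lf)),
            np.sqrt(band_power(d_ppv, hf) / band_power(d_k, hf)))
```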
Performance Metrics
A unique value per subject and stage of the protocol (Rest1, Tilt, and Rest2) was obtained for each one of the three studied α-index estimation methods:
1. Welch-periodogram approach (α {LF,HF}): as it is based on a non-time-frequency technique, a unique value per subject and stage is directly available.
2. Time-frequency approach (ᾱ {LF,HF}): the median of α {LF,HF} (n) within each stage and each subject was taken as the unique value per subject and stage.
3. Time-frequency-coherence approach (ᾱ {LF γ,HF γ}): the median of α {LF γ,HF γ} (n) within each stage and each subject was taken as the unique value per subject and stage.
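Each approach therefore yields one α value per subject and stage. A rough sketch of the statistical evaluation applied to these values, as detailed in the next paragraph (Spearman correlation against the conventional α index, paired Wilcoxon tests between stages, and relative variation between consecutive stages), is given below; the array layout and variable names are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import spearmanr, wilcoxon

def evaluate(alpha_ref, alpha_sur):
    """alpha_ref, alpha_sur: arrays of shape (n_subjects, 3) with columns Rest1, Tilt, Rest2."""
    rho, _ = spearmanr(alpha_ref.ravel(), alpha_sur.ravel())          # correlation with reference
    p_rest1_tilt = wilcoxon(alpha_sur[:, 0], alpha_sur[:, 1]).pvalue  # paired test Rest1 vs Tilt
    p_rest2_tilt = wilcoxon(alpha_sur[:, 2], alpha_sur[:, 1]).pvalue  # paired test Rest2 vs Tilt
    # relative variation between consecutive stages (here Rest1 -> Tilt), per subject, in %
    dv = 100.0 * (alpha_sur[:, 1] - alpha_sur[:, 0]) / alpha_sur[:, 0]
    return rho, p_rest1_tilt, p_rest2_tilt, dv
```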
Then, the correlation between the indices (α and α k) obtained from the 17 subjects and the 3 stages of the protocol (Rest1, Tilt, and Rest2) was computed. The distributions of these indices were found not to be normal by the Kolmogorov-Smirnov test; thus, the Spearman correlation coefficient was used. Furthermore, the Wilcoxon signed-rank test was applied to assess whether the indices can find significant (p < 0.05) differences between the different stages of the protocol. Like the SAPV surrogates, the α-index surrogates have different units and magnitudes than the classical α index. Thus, these surrogates cannot be directly compared to the classical α index, but their evolution can be compared. In order to do this, the relative variation of the α index between consecutive stages was computed for each subject, where α S1 and α S2 represent the studied index within stages S1 and S2, respectively. A linear regression of the α-index surrogates which obtained the best results in terms of correlation (those based on PUS, as can be observed in section 3) was performed, obtaining units similar to those of the conventional α index (ms/mmHg); this linear regression was performed in order to compare those indices in a Bland-Altman plot. In addition, a multiple linear regression was performed using all the studied α-index surrogates in order to study whether their information is complementary or redundant. The combined α-index surrogates are denoted α {LF,HF} (Welch-periodogram approach), ᾱ {LF,HF} (time-frequency approach), and ᾱ {LF γ,HF γ} (time-frequency-coherence approach).

RESULTS

The highest correlations were obtained for the α-index surrogates based on PUS. A scatterplot of these indices is shown in Figure 5. In addition, a Bland-Altman plot of these indices and their associated conventional α indices is shown in Figure 6, after a linear regression in order to obtain similar units and magnitudes. The obtained limits of agreement (mean of the two values ± 1.96 × SD) were 0.94 ± 21.90 and −8.99E-15 ± 60.90 ms/mmHg for the Welch-periodogram approach (LF and HF bands, respectively), 1.29 ± 20.13 and 4.56E-15 ± 40.49 ms/mmHg for the time-frequency approach, and 1.40 ± 18.43 and 5.92E-15 ± 46.89 ms/mmHg for the time-frequency-coherence approach.

FIGURE 6 | Bland-Altman plots of α vs. α PUS indices (Welch-periodogram approach, first column), of ᾱ vs. ᾱ PUS indices (time-frequency approach, second column), and of ᾱ γ vs. ᾱ γ PUS indices (time-frequency-coherence approach, third column), after a linear regression to convert all units to ms/mmHg. Note that scales are not the same for the LF band (first row) as for the HF band (second row).

FIGURE 7 | Bland-Altman plots of α vs. its multiple-linear-regression-based combination of surrogates (Welch-periodogram approach, first column), of ᾱ vs. its combination of surrogates (time-frequency approach, second column), and of ᾱ γ vs. its combination of surrogates (time-frequency-coherence approach, third column). Note that scales are not the same for the LF band (first row) as for the HF band (second row).
DISCUSSION
Novel methods for measuring BRS using a PPG signal have been presented. They are based on surrogates of the α index, defined as the ratio of the power of the RRV series and the power of the SAPV series. In this work, PPV is used as a surrogate of RRV, and SAPV is surrogated by different morphological features of the PPG pulse which have been related to BP in the literature. Some of these features are based on a novel PDA technique presented in this paper. Many modeling functions have been applied to fit PPG pulses in the literature; the novelty of the proposed PDA technique is that the chosen modeling function does not affect the decomposition, as it is applied individually to the already extracted waves. It is worth noting that the goal in this paper is not to obtain a very accurate measure of the studied morphological features, but to derive a measure which is proportional to those features (as only their variability is needed). Keeping this in mind, a Gaussian function was used because it satisfies this goal while being a simple function that makes sense from the physiological point of view. Three approaches were studied for estimating the α index from RRV and SAPV (or their surrogates): one based on the Welch periodogram (α {LF,HF}), and two based on a time-frequency distribution which takes into account the non-stationarity of the protocol (Orini et al., 2012). This method redefines both the LF and HF bands, making them time-varying and following the dominant frequencies in those bands (ᾱ {LF,HF}). Alternatively, this method computes a time-frequency coherence between RRV and SAPV (or their surrogates), and estimates the α index restricted to areas where the obtained coherence is statistically significant, i.e., areas evidencing that RRV and SAPV (or their surrogates) are coupled (ᾱ {LF γ,HF γ}).
The correlation analysis shows how the PPG-based surrogates of the α index track the changes of the conventional (ECG-and-BP-based) α index. Five out of the eleven SAPV surrogates led to α-index surrogates with at least moderate correlation (>0.5). Those SAPV surrogates are, in order of the correlation obtained: PUS, A1, PA, A12, and T12. Specifically, the α-index surrogates based on PUS obtained high correlation (>0.7) in all cases. The α-index surrogates based on A1 also obtained high correlation in four out of the six cases (α LF A1, α HF A1, ᾱ HF A1, and ᾱ HF γ A1), while the remaining two (ᾱ LF A1 and ᾱ LF γ A1) were very close to it (the correlation was 0.69 in both cases).
Results regarding the BRS assessment are shown in Table 2 for the Welch-periodogram approach (α {LF,HF}), Table 3 for the time-frequency approach (ᾱ {LF,HF}), and Table 4 for the time-frequency-coherence approach (ᾱ {LF γ,HF γ}). The conventional α indices showed significant differences between both rest stages and Tilt within both LF and HF, and for the three approaches; the highest difference was observed between Rest1 and Tilt within the HF band. The only SAPV surrogate which led to α-index surrogates showing the same behavior as the conventional α index was PUS. None of the other SAPV surrogates led to α-index surrogates finding significant differences between Rest2 and Tilt within LF (the smallest observed change), with the exception of PA when using the Welch-periodogram approach (α LF PA). However, the PUS-, A1-, PA-, and A12-based α-index surrogates found significant differences between Rest1 and Tilt within both the LF and HF bands, and between Rest2 and Tilt within the HF band, for the three approaches. The T12-based α-index surrogates also found these differences for both the time-frequency and the time-frequency-coherence approaches, while they found significant differences only within the HF band for the Welch-periodogram approach. In general, the PPG-based α-index surrogates exploiting the pulse amplitude (PA, A1, and A12) obtained better results for BRS assessment than those exploiting the pulse dispersion (PSTT, B1, C1, C2, T12, and T13), with the exception of T12. However, the best results were obtained for the index derived from PUS, which exploits both the PPG amplitude and the pulse dispersion. Another possible reason for the better results obtained by PUS is that it is measured at the beginning of the pulse, the part related to a single wave (the main wave, before the superposition of reflections), where the BP information may be better expressed than in the reflected waves.
The Bland-Altman plots (Figure 6) for the PUS-based α-index surrogates (after converting units to ms/mmHg by a linear regression) are wider for HF (±60.90, ±40.49, and ±46.89 ms/mmHg for the Welch-periodogram, time-frequency, and time-frequency-coherence approaches, respectively) than for LF (±21.90, ±20.13, and ±18.43 ms/mmHg, respectively). When combining all the PPG-based α-index surrogates by a multiple linear regression, these limits of agreement are narrower, especially within HF (±25.48, ±20.00, and ±15.80 ms/mmHg for the Welch-periodogram, time-frequency, and time-frequency-coherence approaches, respectively). These results suggest that there is complementary information among the SAPV surrogates and thus they could be combined to improve the α-index surrogate. However, this combination may require a calibration process which may be subject-specific in a final application. Further studies, including data from the same subjects on different days, must be conducted in order to explore techniques to combine the information of the different α-index surrogates. Comparing the correlations obtained by the PUS-based α-index surrogates among the three α-index estimation approaches, the highest correlation within LF was obtained when using the Welch-periodogram approach (0.81), while the highest correlation within HF was obtained when using the time-frequency-coherence approach (0.81). However, given the intrinsic non-stationarity of the cardiovascular system, our recommendation is to use the time-frequency-coherence approach (Orini et al., 2012), because it takes into account the time-varying dominant frequencies and the strength of the coupling between RRV and SAPV (or their surrogates), and thus its estimates are more closely related to the BRS than the estimates from the other two approaches.
Based on these results, our recommendation for PPG-based BRS assessment is ᾱ {LF γ,HF γ} PUS. First, ᾱ LF γ PUS presented a significant decrease of more than 100% in median in tilt with respect to supine, which is in concordance with the decrease in the reference ᾱ LF γ. Second, ᾱ HF γ PUS also presented a significant decrease in tilt with respect to supine, in this case around 2 times lower with respect to Rest1 (26.90%) than to Rest2 (61.31%), and these results are also in accordance with the reference ᾱ HF γ (with 35.37% and 57.92%, respectively). It is worth noting that the best α surrogate may not be derived from the best SAPV surrogate, because PPV was used as the RRV surrogate while it is the sum of RRV and PAT variability (PATV) (Gil et al., 2010). Thus, in order to obtain an exact surrogate for the ratio RRV/SAPV using PPV as the numerator of the ratio, the best denominator is not exactly SAPV, but SAPV×(1+PATV/RRV).
These results support the potential value of the proposed index as a surrogate of BRS to monitor baroreflex impairment in certain applications. For example, in de Moura-Tonello et al. (2016) the square root of the ratio of RR and systolic BP series power (α index) at rest was significantly reduced (around 50%) in type 2 diabetes mellitus patients without cardiovascular autonomic neuropathy with respect to healthy controls of similar age and anthropometric characteristics. In Ranucci et al. (2017), preoperative BRS was evaluated in 150 patients undergoing coronary surgery and related to postoperative complications such as atrial fibrillation, renal function impairment, and low cardiac output syndrome. The α index was significantly lower (around 30% in median) in patients experiencing postoperative acute kidney dysfunction, as well as in patients with a low cardiac output state (around 50% in median). However, clinical studies have to be conducted in order to evaluate the proposed indices in different applications. To the best of our knowledge, this is the first time that these indices have been studied for BRS assessment, so healthy volunteers with presumably efficient baroreflex were used in order to observe actual changes along the protocol. Different results may be obtained with patients with different diseases, especially taking into account that coherence is reduced in heart disease patients.
Results reported in this work suggest that BRS can be assessed with high correlation using only a PPG signal, based on PPV (as the RRV surrogate) and on PPG-amplitude-based and/or PPG-dispersion-based features (as SAPV surrogates), with PUS being the most convenient SAPV surrogate for BRS assessment. The PPG signal recording is simple, economical, and comfortable for the subject. Moreover, the PPG signal can be acquired at many places on the body. Thus, these results are very interesting for ambulatory scenarios and for wearable devices. Future studies may include a surrogate of the α index using a combination of different PPG-based SAPV surrogates, especially amplitude- and dispersion-based features.
ETHICS STATEMENT
This study was carried out in accordance with the recommendations of the Comité Ético de Investigación Clínica de Aragón (CEICA) with written informed consent from all subjects. All subjects gave written informed consent in accordance with the Declaration of Helsinki. This study was exempt from approval of the ethical committee because no new data were registered for its development. | 8,173.2 | 2019-04-18T00:00:00.000 | [
"Computer Science"
] |
Summary of Known Genetic and Epigenetic Modification Contributed to Hypertension
Hypertension is a multifactorial disease arising from a complex interaction among genetic, epigenetic, and environmental factors. Characterized by raised blood pressure (BP), it is responsible for more than 7 million deaths per annum, acting as a leading preventable risk factor for cardiovascular disease. Reports suggest that genetic factors are estimated to be involved in approximately 30 to 50% of BP variation, and epigenetic marks are known to contribute to the initiation of the disease by influencing gene expression. Consequently, elucidating the genetic and epigenetic mediators associated with hypertension is essential for a better discernment of its pathophysiology. Deciphering the molecular basis of hypertension could help to unravel an individual's predisposition to the disease, which could eventually lead to potential strategies for prevention and therapy. In the present review, we discuss known genetic and epigenetic drivers that contribute to the development of hypertension and summarize the novel variants that have recently been identified. The effect of these molecular alterations on endothelial function is also presented.
Introduction
Hypertension, defined by persistently elevated systolic blood pressure (SBP) and diastolic blood pressure (DBP) above 140 mmHg and 90 mmHg, respectively, remains a major global health challenge. An estimated 31.1% of the world's adult population has hypertension, with a prevalence of 28.5% in high-income countries (HICs) and 31.5% in low-/middle-income countries (LMICs) [1-3]. This proportion is known to have doubled from 1990 to 2019 and is expected to continue rising in the upcoming years [4]. Impairments in the integrated control systems of BP have been identified as the main driver of the disease [1,5]. Nevertheless, the pathophysiology of hypertension is still not fully elucidated [6]. Whereas identifiable causes such as parenchymal renal disease, aldosteronism, and renovascular hypertension are known to be involved in 5% of the cases, 90-95% of all hypertension cases remain of undetermined cause [7,8].
Studies have found that the prevalence of hypertension differs among ethnicities and families. Moreover, family history has been identified as a nonmodifiable risk factor for the development of hypertension, demonstrated by a positive relationship of BP among siblings and between parents and offspring, suggesting a multifactorial nature of the disease [9-12]. Considering hypertension as a major risk factor for several lethal states of cardiovascular disease (CVD), such as heart attack, congestive heart failure, stroke, and peripheral vascular disease (PVD) [13], researchers have largely focused on searching for feasible molecular mechanisms that may be associated with hypertension and on determining how genetic variation may alter the BP architecture [14]. With the advancement of gene detection and mapping, a growing body of evidence has successfully demonstrated that hypertension results from a complex interaction among genetic, epigenetic, and environmental elements [15].
To date, approximately 30 to 50% of BP variation is known to be influenced by genetic factors, and epigenetic modifications are evidently associated with the development of hypertension [7,16-18]. Consequently, identifying genetic and epigenetic variants that are linked to the disease is important for a better discernment of the pathophysiology of hypertension [7]. Scrutinizing epigenetic aspects could particularly elucidate processes that cannot be described by classic Mendelian inheritance [19]. Finally, elucidating the pathogenetic mechanisms that initiate hypertension development is essential to determine the potential effort required to prevent and cure the disease [20]. Unravelling its genetic basis could also contribute to the knowledge of an individual's predisposition to hypertension and thus accelerate the early enactment of preventive efforts and therapeutic plans [15].
The current review aims to provide an overview of known genetic and epigenetic modifications that contribute to hypertension development and to summarize the currently identified novel variants. To the best of our knowledge, this is the first review to discuss both genetic and epigenetic factors associated with hypertension in conjunction.
Monogenic Hypertension.
Although it is widely accepted that hypertension is an outcome of a complex multifactorial interaction, approximately 30% of cases are inherited through a single genetic mutation, generally conforming to Mendelian inheritance [21,22]. This monogenic form is commonly associated with electrolyte disturbances and manifested as refractory hypertension, hypokalaemia, metabolic alkalosis, and low renin levels [23]. Also, the mechanisms recognized to explain the physiopathology of identified monogenic hypertension are excessive sodium ion reabsorption, excessive steroid synthesis, and excessive mineralocorticoid synthesis [24].
Apparent Mineralocorticoid Excess (AME)
AME is an ultrarare autosomal recessive disorder caused by a deficiency of the 11 beta-hydroxysteroid dehydrogenase type 2 (HSD11B2) enzyme due to mutation of the HSD11B2 gene. Patients with AME display low birth weight, postnatal failure to thrive, severe juvenile low-renin hypertension of early onset, and hypokalaemia [25]. Primarily expressed in sodium-transporting epithelia, the HSD11B2 enzyme facilitates the metabolic conversion of cortisol to its inactive form, cortisone, at aldosterone binding sites, thus protecting mineralocorticoid receptors (MRs) from cortisol excess [25]. When HSD11B2 activity is compromised as a result of mutation, the MRs are overstimulated by cortisol, causing intense water and sodium retention, hypokalaemia, and hypertension [26]. To date, about 40 causative mutations in the HSD11B2 gene have been recognized [27], and several novel mutations associated with AME have recently been identified.
Utilizing next-generation sequencing (NGS) and Sanger sequencing, Fan et al. [26] found a paternally inherited c.343_348del variant and a maternally inherited c.1099_1101del variant in exons 2 and 5 of a teenager with AME. These mutations result in Glu115 deletion, Leu116 deletion, production of a truncated 11βHSD2 protein, and Phe367 deletion. Moreover, Bertulli et al. [28] reported a novel homozygous frameshift variant in exon 5, c.900dup, in a child with AME. In six affected patients from an Omani family, Yau et al. [27] revealed a homozygous c.799A > G mutation within exon 4 of the HSD11B2 gene causing NAD misalignment through the p.T267A substitution. Meanwhile, Pizzolo et al. [29] and Alzahrani et al. [30] demonstrated that AME is associated with a homozygous c.C662G variant and a missense biallelic mutation c.G526A in exon 3. The c.C662G mutation was identified to change alanine to glycine at position 221, leading to a chemical bond disturbance due to a decline in enzymatic affinity, whereas the c.G526A mutation was shown to alter aspartic acid (GAT) to asparagine (AAT) at codon 176.
Liddle's Syndrome
LS, also known as pseudohyperaldosteronism, is a rare autosomal dominant inherited disorder clinically manifested by an early onset of hypertension, metabolic alkalosis, hypokalaemia, low renin levels, and suppressed aldosterone production [31,32]. Genetic studies showed that this syndrome is caused by mutations in the terminal ends of the epithelial sodium channel (ENaC) [33]. ENaC is a membrane-bound ion channel primarily present in the apical portion of aldosterone-sensitive epithelial cells such as the distal nephron, lung, and collecting duct. Composed of α, β, and γ homologous subunits containing a proline-tyrosine (PY) motif, encoded by the sodium channel epithelial 1 subunit alpha (SCNN1A), sodium channel epithelial 1 subunit beta (SCNN1B), and sodium channel epithelial 1 subunit gamma (SCNN1G) genes, respectively, this channel is responsible for the Na+ reabsorption that provides the rate-limiting step for fluid intake into the bloodstream [34,35]. The PY motif (PPxY) is a conserved ubiquitination site of Nedd4 that mediates both internalization and degradation of ENaC. A point mutation in the β or γ subunit of the SCNN1B or SCNN1G gene has been characterized as the cause of the progression of the disease through a change in the PY configuration [32,36].
Mutations in the PY motif are known to truncate the cytoplasmic COOH terminus of the ENaC β and γ subunits, impair the effective binding of Nedd4 to ENaC, and increase ENaC function, which raises sodium reabsorption, blood volume, and BP [32,37]. Since the first nonsense p.Arg566* substitution reported by Liddle et al. in 1963, various causative mutations have been reported in recent years. In a total of thirteen individuals from a Czech family, Mareš et al. [38] reported a novel nonsense mutation, c.C1988A, in the 13th exon of the SCNN1B gene of the ENaC β subunit, identified in 7 individuals and causing p.Tyr604*, which induces a premature termination codon at amino acid position 604 and shortens the β subunit from 640 to 603 amino acids. Meanwhile, a frameshift mutation in exon 13 of SCNN1G, p.Arg586Valfs*598, was identified by Fan et al. [39], resulting in a premature stop codon at position 598 and deleting the PY motif. Furthermore, Ding et al. [40] found a new deletion, c.1721delC, of the SCNN1B gene associated with LS due to p.Pro574HisfsX675.
Glucocorticoid-Remediable Aldosteronism (GRA).
GRA is a rare hereditary cause of primary aldosteronism in which aldosterone is regulated by adrenocorticotrophic hormone (ACTH) instead of the renin-angiotensin system. Patients with GRA are characterized by the clinical presentation of early-onset severe hypertension with low plasma renin activity (PRA) and mild hypernatremia, in which synthetic glucocorticoid administration suppresses the mineralocorticoid excess [41,42]. Identified as the most common cause of monogenic hypertension, GRA is an autosomal dominant inherited trait due to a chimeric gene duplication arising from unequal crossing over between the 5′ adrenocorticotropin-responsive regulatory sequences of the 11β-hydroxylase (CYP11B1) gene and the 3′ coding sequences of the aldosterone synthase (CYP11B2) gene. The fusion leads to ectopic expression of aldosterone synthase in the zona fasciculata of the adrenal cortex, whereby aldosterone is released under ACTH control [43,44].
Gordon's Syndrome
GS, also known as type II pseudo-hypoaldosteronism or familial hyperkalaemia and hypertension, is an autosomal dominant inherited form of arterial hypertension characterized by elevated BP, hyperkalaemia, and metabolic acidosis. Individuals with GS usually respond effectively to low-dose thiazide diuretics or sodium restriction, suggesting that altered thiazide-sensitive NaCl cotransporter (NCCT) function in the distal convoluted tubules is involved in the initiation of the disease [45]. GS is interestingly distinguished from other syndromic forms by its normo- or hyperkalaemia feature; thus, serum potassium is a useful determiner of the illness [46,47]. Mutations in genes related to the regulation of the NaCl cotransporter NCCT (WNK1, WNK4, CUL3, and KLHL3) have later been discovered to be associated with GS and its varying severity [46].
Both WNK1 and WNK4 belong to the WNK (with no lysine kinase) family, which lacks a conserved lysine residue for ATP docking. Studies demonstrate that wild-type WNK4 is a natural inhibitor of the thiazide-sensitive NCCT, reducing membrane expression of NCCT and repressing the renal outer medullary potassium channel (ROMK). Missense mutations in WNK4 abolish its suppression of NCCT and ROMK, leading to Na+ and K+ retention. Meanwhile, the biological function of WNK1 with respect to NCCT is still unclear; nevertheless, WNK1 is known to modulate the WNK4-NCCT interaction, and thus deletion in WNK1 appears to act as a gain of function through increased expression [48-51]. On the other hand, the Kelch-like 3 and Cullin 3 proteins, encoded by the KLHL3 and CUL3 genes, respectively, assemble to form an E3 ubiquitin complex which, in the basal state, interacts with the NCCT, ENaC, and ROMK as a regulator to maintain normokalaemia and normotension [47]. Reports suggest that the CUL3-KLHL3 E3 complex regulates BP through its ability to ubiquitylate WNK proteins and that its connection with GS is due to the accumulation of WNK4 [52].
In addition to the known causative variants, various novel mutations associated with GS have been identified. Sakoh et al. [53] reported that a novel missense mutation, D564N, in the acidic motif of WNK4 was identified in the mother and the proband of a Japanese family, leading to the diagnosis of GS. The mutation is hypothesized to result in disruption of WNK4 binding, leading to WNK4 protein excess. Moreover, Kelly et al. [54] found a missense mutation, c.1492C > T (p.His498Tyr), in the fifth kelch motif of the protein, affecting exon 13 of KLHL3. This finding supports the existing knowledge that the CUL3 and KLHL3 proteins are important regulators of the NCCT and thus a potential target for novel antihypertensive drugs. The known mutations that contribute to monogenic hypertension are summarized in Table 1; in the table, the asterisk (*) symbol represents a translation termination (stop) codon.
The Renin-Angiotensin-Aldosterone System (RAAS).
The renin-angiotensin-aldosterone system (RAAS) is a vital homeostatic regulator of arterial pressure, tissue perfusion, and extracellular volume, acting by modulating blood volume, sodium reabsorption, potassium secretion, water reabsorption, and vascular tone. As the name indicates, renin and angiotensin are two critical components of the system [55,56]. Renin is produced in specialized kidney granular cells called juxtaglomerular (JG) cells as its 406-amino-acid-long precursor, prorenin. In response to JG cell activation due to low arterial BP, low sodium chloride, or sympathetic nervous system activity, prorenin is converted to renin, which in the bloodstream acts on its target, angiotensinogen, and cleaves it into angiotensin I [57,58]. In the vascular endothelium of the lungs and kidney, the decapeptide angiotensin I is further converted by the endothelium-bound angiotensin-converting enzyme (ACE) into the octapeptide hormone angiotensin II, a potent vasoconstrictor and the primary active product of the RAAS [55,59].
The RAAS is one of the best-elucidated systems with respect to genetic predisposition to hypertension [7]. Among all its components, genetic polymorphisms in angiotensinogen (AGT) and the angiotensin-1-converting enzyme (ACE) are the most reliable candidate risk factors for hypertension, given consistent findings. The candidacy of the AGT gene in hypertension was first described by Jeunemaitre and coworkers in Utah and Paris families. Through a sibship analysis, it was shown that AGT variants are associated with hypertension, and plasma concentrations of angiotensinogen were significantly different among hypertensive subjects with different AGT genotypes, providing a possible mechanism for the genetic links [60]. The role of AGT in hypertension was strengthened by observations in animal models, in which transgenic mice overexpressing a rat angiotensinogen gene develop hypertension, while knockout mice with a disrupted corresponding gene product exhibit normal BP [61,62]. Among all the identified polymorphisms of AGT, the M235T and T174M variants showed the most significant association with hypertension in different populations [63,64].
Although the contribution of the ACE gene to hypertension remains controversial, consistent findings on the implication of the insertion/deletion (I/D) polymorphism within intron 16 (II homozygote, ID heterozygote, and DD homozygote genotypes) of the ACE gene in the etiology of hypertension have been widely reported [65]. A case-control study among hypertensive patients and normotensive controls found that the DD genotype and D allele of the ACE gene had a strong association with a high risk of hypertension in the study population [66]. Furthermore, a recent meta-analysis demonstrated that the ACE I/D polymorphism is strongly associated with elevated BP in the dominant, recessive, and homozygous codominance models as well as in the allele contrast model. Individuals carrying the D allele were shown to have a 1.49 times higher risk of developing hypertension than individuals with the I allele. In accordance, subjects with the DD genotype were at a 2.17 times greater risk of hypertension compared to the codominant genotype II, dominant (DD + ID vs. II), and recessive models [65]. Interestingly, some studies showed that genetic variants in ACE are gender-specific, in that they influence BP more in males than in females [67,68].
G-Protein Coupled Receptor (GPCR) Kinases.
G protein-coupled receptors (GPCRs) are the largest superfamily of integral membrane receptors and act as key transducers in a variety of signal transduction systems for normal physiological processes. A variety of GPCRs are involved in the regulation of BP and the maintenance of normal cardiac function [69]. Upon stimulation, specific binding of the G protein to an activated GPCR triggers a rapid GDP → GTP nucleotide exchange and dissociates the GPCR-G-protein complex as well as the G protein into free functional subunits, Gα and Gβγ, which stimulate the activation of downstream effectors. In the vasculature, GPCRs mediate BP by modulating the equilibrium between vasoconstriction and vasodilatation. The angiotensin II (Ang II) type 1 receptor (AT1R), α-adrenergic receptor, endothelin A receptor, and neuropeptide Y receptor are the vasoconstriction drivers among GPCRs, whereas acetylcholine receptors, the β-adrenergic receptor, the endothelin B receptor, and the dopamine receptor are GPCR mediators promoting vasodilation. The balance between these prohypertensive and antihypertensive shifts is crucial to maintaining normotensive states [70-72].
Recently, a cytosine-to-thymine change (C825T) in the β3 subunit of the G protein gene (GNB3) has been discovered in individuals with essential hypertension and is considered a candidate mutation for arterial hypertension. Zha et al. [73] reported that a decrease in the ubiquitination levels of GRK2 (G-protein-coupled receptor kinase 2), a protein vital for the GPCR desensitization process, and an increase in GRK2 protein levels were identified in Gβ3 825T allele carriers, resulting in the loss of 41 amino-acid residues, which disturbs the Gβ3-DDB1 binding and alters the action of Gβ3s on GRK2 ubiquitination [73]. Moreover, El Din Hemimi et al. [74] demonstrated that the C825T allele is a risk factor for essential hypertension in the Egyptian population, whereas a significant gender-specific effect of the GNB3 C825T polymorphism on serum sE-selectin levels was observed in males, associated with CVD outcomes [74,75].
Immune System and Inflammation.
The implication of the immune response in the genesis of hypertension has been identified by a large number of investigations over decades. The initial study aimed at examining the role of immune cells in hypertension was conducted in the 1960s in rats with partial renal infarction. At the time, the investigators found that immunosuppression attenuates hypertension in the animal model and that transfer of lymphocytes from rats with renal infarction causes hypertension in normotensive recipient rats [76,77]. To date, several immune mediators have been recognized for their possible role in the initiation of hypertension.
A study on subjects with pulmonary arterial hypertension (PAH) in China suggests that the −572C/G promoter polymorphism in the interleukin-6 (IL-6) gene is associated with serum IL-6 levels and the risk of hypertension [78]. Moreover, a meta-analysis showed that the −308G/A polymorphism in tumor necrosis factor-α (TNF-α) increases the risk of essential hypertension in the Asian population [79]. Another meta-analysis involving 1,092 patients with essential hypertension and 1,152 controls found a significant association between the TNF-α G308A gene polymorphism and hypertension [80]. Furthermore, a recent review indicated that there is a correlation between serum TGF-β1 levels, polymorphisms of the TGF-β1 gene, and the severity of hypertension [81]. Table 2 summarizes the known genetic variants that contribute to hypertension.
Epigenetics of Hypertension
Epigenetics refers to heritable modifications of the regulation of gene activity without changes to the primary DNA sequence or genotype. In addition to genetic variants, epigenetic modifications have been demonstrated to play significant roles in the pathogenesis of hypertension [19].
DNA Methylation.
DNA methylation is a stable and inheritable epigenetic modification controlling gene expression. It primarily occurs at cytosines in cytosine-guanine dinucleotide (CpG) islands and involves the biochemical addition of a methyl group (CH3), derived from S-adenosyl-l-methionine, to carbon 5 of the pyrimidine ring of the cytosine residue to form 5-methylcytosine (5mC). CpG islands are short sequences of palindromic DNA typically located within the promoter region or 5′-end of a gene [17,19]. In normal cells, CpG islands are mostly methylated, except for those located in promoters, which are normally maintained unmethylated by unknown mechanisms [82]. Catalyzed by DNA methyltransferase (DNMT) enzymes, DNA methylation is a cellular mechanism used to suppress gene transcription, by which hypermethylation results in gene silencing [83]. Several gene-specific DNA methylation changes related to hypertension have been reported.
AT1aR and AT1bR Methylation
Angiotensin II (Ang II) is a key effector of the RAAS involved in BP and fluid volume control, mediated by tissue-specific membrane receptors: the angiotensin type 1 receptor (AT1R) and the angiotensin type 2 receptor (AT2R). Expressed in multiple organs, the AT1R is composed of two functional subtypes, the AT1a receptor (AT1aR) and the AT1b receptor (AT1bR). Activation of AT1R initiates the chronic hypertensive effect of Ang II on the renal vasculature [84-86]. A study in male spontaneously hypertensive rats (SHRs) and age-matched Wistar-Kyoto (WKY) rats showed that the mRNA and protein expression of AT1aR was significantly higher in SHRs in comparison with WKY. A bisulfite sequencing analysis further demonstrated that the AT1aR promoter in the aorta and mesenteric artery of the SHRs was hypomethylated in contrast to the control rats, suggesting that AT1aR promoter hypomethylation might be responsible for the elevation of AT1aR expression in SHRs [86].
In accordance, in a study aimed at investigating the efficacy of losartan as a prehypertensive therapy, the mRNA and protein expression levels of AT1aR were significantly increased in high-fat-fed SHR rats, and five out of seven promoter regions were significantly hypomethylated [87]. On the other hand, using a rat model of a maternal low-protein diet, Bogdarina and coworkers [88] demonstrated that, in the first week of life, the AT1b receptor gene is overexpressed in the adrenal gland, leading to upregulated receptor protein expression and enhanced angiotensin responsiveness. Pyrosequencing and in vitro analyses demonstrated, respectively, that the proximal promoter of the AT1b gene in the adrenal gland shows low methylation and that AT1b gene expression is strongly influenced by promoter methylation [88].
ACE Gene Methylation.
Angiotensin-converting enzyme (ACE) is a key enzyme in the RAAS that cleaves the C-terminal of the inactive angiotensin I (Ang I) to form the active Ang II [59]. A cross-sectional study in low-birth-weight children showed a significant negative correlation between the level of DNA methylation and both ACE activity and BP [89]. Moreover, 24-h average exposure to fine particulate matter (PM2.5) among students in China was significantly associated with decreasing ACE methylation, increasing ACE protein, and increased SBP and DBP. ACE hypomethylation was shown to mediate the upregulation of ACE protein, and the elevation in ACE protein was found to be associated with elevated BP [90]. In addition, a meta-analysis found that overexpression of the ACE II gene due to DNA methylation results in vasoconstriction, increased peripheral resistance, and hypertension [91].
TLR4 and IL-6 Methylation.
Toll-like receptors (TLRs) are important mediators of inflammatory pathways that play a major role in immune responses by recognizing a wide variety of pathogen-associated ligands derived from various microbial components [92]. Meanwhile, interleukin-6 (IL-6) is a proinflammatory cytokine that plays a pivotal role in T-cell-mediated immunity and in the proliferation and differentiation of nonimmune cells [93]. In accordance with hypertension, Mao et al. [94] demonstrated that hypomethylation at CpG6 of the TLR2 promoter was significantly associated with the risk of hypertension [94]. Moreover, a cross-over trial of controlled human exposure to concentrated ambient particles showed that reduced TLR4 methylation was associated with higher postexposure SBP and DBP [95]. Furthermore, a significant correlation was discovered between the risk of essential hypertension and hypomethylation at CpG sites of the IL-6 promoter, in which the degree of DNA methylation differed between genders [96].
Histone
Modification. Inside the cell, nuclear DNA is packaged into a highly condensed nucleosome with the help of two copies each of the histone proteins H2A, H2B, H3, and H4. Among all histone proteins, the N-terminal tails of H3 and H4 are subject to a variety of posttranslational modifications (PTMs) that control chromatin structure and gene expression. Acetylation of lysine residues, methylation of arginine residues, and phosphorylation of serine or threonine residues are the most common PTM products of histone modification [17,19]. In a salt-sensitive hypertension model of Wistar rats, Han et al. [97] demonstrated that upregulation of histone H3K27me3 due to deoxycorticosterone acetate (DOCA) administration was significantly correlated with relief of hypertension features after resveratrol intake [97]. Moreover, polymorphisms in the DOT1L (rs2269879) and MLLT3 (rs12350051) genes were significantly associated with greater SBP and DBP, possibly mediated by hypermethylation of the H3 histone [98]. Meanwhile, Mu et al. found that down-regulation of the WNK4 gene was associated with increased H3 and H4 acetylation, leading to upregulation of NCC, which thereby promotes hypertension [99].
3.6. MicroRNA (miRNA). miRNA is a single-stranded noncoding RNA that is involved in various cellular functions by serving as a posttranscriptional regulator of gene expression [100,101]. It is transcribed in the nucleus from DNA sequences into primary miRNA (pri-miRNA) by the RNA polymerase II enzyme and is then processed into precursor miRNA (pre-miRNA) by a microprocessor complex consisting of the RNase III enzyme Drosha and the DiGeorge critical region 8 (DGCR8) protein [102]. After being exported to the cytoplasm by the exportin 5 (XPO5)/RanGTP complex, pre-miRNA is cleaved by an RNase III endonuclease, resulting in ∼18- to 23-nucleotide-long mature miRNA duplexes [103]. miRNA interrupts gene expression by binding to the untranslated region of messenger RNA (mRNA), resulting in protein synthesis alteration, mRNA destabilization, and gene expression repression. Since its first discovery, the correspondence of miRNA with hypertension has been widely reported [103][104][105]. Using a microarray, Li et al. [106] demonstrated that, out of the total miRNAs detected in the experiment, 46 miRNAs were profoundly expressed in hypertensive patients, suggesting a contribution of miRNA to the pathogenesis of hypertension [106]. Furthermore, dysregulation of miR-126-3p, miR-182-5p, and miR-30a-5p expression has been implicated in hypertension progression in a South African population, while dysregulation of two novel miRNAs, miR-novel-chr1_36178 and miR-novel-chr15_18383, was significantly correlated with the SBP and DBP of the same population [104,107]. In addition, elevated circulating miR-505 plasma levels were found in spontaneously hypertensive rats and angiotensin II-infused mice, related to endothelial dysfunction and inflammation [105]. Understanding the role of miRNAs in hypertension is thus essential, as it could add basic knowledge about the disease prognosis and possible therapeutic strategies.
Effect of Genetic and Epigenetic Alteration on Endothelial Function
The main untoward health consequence of hypertension is the increased risk of cardiovascular disease (CVD). It is estimated that about 54% of strokes and 47% of coronary heart disease cases are attributable to elevated BP. Although the exact cascade from hypertension to cardiovascular events remains to be elucidated, the occurrence of CVD is known to be associated with compromised endothelial function [108,109]. In the basal state, healthy endothelium is essential for the cardiovascular system, as it is a key regulator of vascular tone through the release of endothelium-dependent relaxing factors such as nitric oxide (NO) and prostaglandins [110]. Moreover, it also exhibits antioxidant, anti-inflammatory, and antithrombotic properties that contribute to blood pressure control. In hypertension, the function of the vascular endothelium is altered, characterized by attenuated endothelium-dependent vasodilatation and endothelial inflammatory activation, which result in the initiation of cardiovascular disease states [110,111]. Accordingly, modifications in the genetic and epigenetic architecture have been reported to result in endothelial dysfunction. In activatable Cul3∆9 and E-Cul3∆9 transgenic mice, the CUL3 mutation was reported to be associated with hypertension through reduced endothelial NO bioavailability, renovascular dysfunction, and increased salt-sensitivity of BP [112]. Furthermore, mice with 11βHSD2 knockout were shown to have endothelial dysfunction causing enhanced constriction to norepinephrine due to impaired NO activity. Also, diminished relaxation responses to endothelium-dependent and -independent vasodilators and unaltered renal mineralocorticoid excess after fludrocortisone treatment were reported [113]. In terms of the epigenetic basis, a growing body of evidence has demonstrated that endothelial function is regulated by epigenetic mechanisms in which some genes need to be activated or silenced to achieve homeostasis. For instance, trimethylation of histone 3 lysine 27 (H3K27me3) has been shown to contribute to endothelial gene expression alteration through gene silencing [114]. Moreover, histone acetyltransferase 7 (KAT7) appears to be involved in endothelial function by regulating the transcription of vascular endothelial growth factors (VEGFs) [115]. Finally, the association between genetic and epigenetic modification and hypertension is summarized in Figure 1.
Conclusion
This review has discussed the currently known genetic and epigenetic modifications contributing to the pathogenesis of hypertension, derived from both animal reports and human experiments. Development of the genetic and epigenetic basis of hypertension could result in improved diagnosis, prognosis, and treatment of the disease.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon reasonable request.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 6,505.8 | 2023-05-09T00:00:00.000 | [
"Biology",
"Medicine"
] |
Efficient and Secure Data Transmission Approach in Cloud-MANET-IoT Integrated Framework
Internet of Things (IoT) devices have the capability to interact and communicate in 5G heterogeneous networks. IoT devices can also form a network with neighboring devices without a centralized approach; this network is called a mobile ad hoc network (MANET). Through an infrastructure-less Internet of Things environment, the MANET enables IoT nodes to interact with one another. These IoT nodes can connect, communicate, and share knowledge among several nodes. The role of the cloud in this structure is to store and interpret information from the IoT nodes. Communication security is also introduced as one of the techniques for addressing the data transmission security issue, which can increase the performance of cloud consumption and ubiquity. Our purpose in this research is to establish a communication system among IoT nodes in an integrated Cloud and MANET structure. The main goal of this research is to create an efficient and secure approach for communication in the Cloud-MANET-IoT integrated framework. This approach is implemented and tested.
INTRODUCTION
IoT-connected devices are devices that collaborate and communicate with each other or with other machines. They are expected to increase significantly, to approximately 22 billion by 2025 [1]. As shown in Figure 1, the number of IoT-connected devices increases continuously between 2015 and 2025, even though the production of IoT devices does not hinge on the size of the human population. Although their popularity is increasing exponentially, one of the most exhaustive issues these devices face is the surveillance and monitoring of portable devices [2]. The concept of the Internet of Things can be defined as "an omnipotent as well as an ubiquitous structure, which further enables monitoring control of a physical planet through gathering, processing, as well evaluating data received from sensors". The IoT framework allows its subscribers to initiate an association with a network of computing resources in a seamless fashion, in which subscribers can rapidly scale requirements up or down with minimal communication with the supplier. Recently, mobile-based technology has become very popular, as the number of connected devices has been rising exponentially day by day [3]. The connected devices are capable of transmitting information to any and all connected devices throughout the network [4]. Figure 2 shows the Cloud-MANET-IoT framework. Whenever a wireless device does not have adequate information, it communicates the query to its nearby devices. Communication network protection is the most essential part; it is predicated on cryptography and encryption keys and provides security in the intelligent interaction of ad-hoc networks [5]. Connected devices contribute strengths such as context awareness, enhanced processing capabilities, and autonomy from power sources. In 2008, the development of the IoT began by connecting physical things to the World Wide Web [6]. The sensory elements were linked to an intelligent repository with a bundle of intelligent data. The structure has image-processing innovation for business as well as consumer verification of physical things, buildings, persons, sources, destinations, etc. Then, the IoT shifted from data-based technology to operable-based technology, i.e., from IPv4 (man to machine) to IPv6 (machine to machine) [7]. It integrates detectors, connected devices, and modules such as the Smart Grid. Growing numbers of detectors and sensor networks are connected to the World Wide Web. Machine-to-machine (M2M) interaction has been the main innovation to sustain sensor data transfer [8]. There are many techniques for transforming integrated M2M interaction paradigms into the proposed systems. In such a broader context, each of the prior end-users will have their own issues about weaknesses in cloud services and obstacles that could deter them from accomplishing their goals. The components of IoT are: (i) identifiers, (ii) detectors, (iii) surveillance, (iv) calculations, (v) utilities, (vi) terminology, etc.
[Figure 1: IoT-connected devices, in billions.]
There are three techniques that have contributed to the development of the Cloud-MANET-IoT system. Ubiquitous computation enables smart physical things to perform data transmission among physical devices. IPv6 utilizes pervasive computation that covers the area of the proposed architecture and supports machine-to-machine talking. The IPv4 Internet has the disadvantage that it cannot add billions of smart appliances around each other; however, this is feasible in the IPv6 Internet, as it enables the IoT to wirelessly connect billions of smart devices around the world [9]. Association in omnipresent computation uses the specified mobility via sensor interconnection. Such techniques must be continually improved to allow the advancement of the internet of smart devices, including multi-sensor structures, to store, calculate, evaluate, and own mechanism abilities with smaller size or lower power consumption [10].
The rest of the paper is organized as follows. Section II represents the related works, Section III represents the methodologies, Section IV represents the data transmission approach in Cloud-MANET-IoT framework, Section V shows the results and discussions and Section VI shows the conclusion.
II. RELATED WORKS
A decentralized infrastructure is referred to as a MANET. IoT devices form a MANET with nearby devices. The MANET is a set of many endpoints that transmit information using a peer-to-peer mechanism. Therefore, a specifically assigned device is independent of any existing framework, such as wireless devices or wireless networks. Although a self-created network is usually helpful to create a MANET, a MANET connected with the Internet is significantly more attractive. The Internet is considered a key part of the daily life of many people by providing an extensive scope of services. Access points are used to integrate the MANET with the connected outside networks.
In the article [11], the authors discuss ways to minimize the limitations connected to IoT deployment and implementation in a smart city environment using a multi-mediator scheme. They also developed a data transmission scheme in Vehicular Ad-Hoc Networks (VANETs) for traffic congestion in smart cities [11].
In article [12], the authors proposed an efficient energy-aware routing scheme for cloud-based MANETs in the 5G network. They obtained positive results and demonstrated better performance in terms of lower energy consumption compared to previous studies [12].
In article [13], the authors proposed an approach to secure communication in mobile ad-hoc networks of Android devices. In this approach, they developed an algorithm for secure communication in the MANET of smart devices [13].
In article [14], the authors provided a solution for secure data transfer among smart devices and IoT-based cloud services.
In article [15], the authors described the roles of the Cloud-MANET framework in the Internet of Things. They developed an algorithm for secure communication among smart devices. They also compared previous approaches and obtained positive results [15].
III. METHODOLOGIES
The research contributes a special wireless communication approach that uses cloud computing and MANET techniques in the Internet of Things. The following key challenges of the centralized system motivate the proposed research.
1. It is not simple to set up data from billions of detectors in a centrally controlled structure of smart devices.
2. Handling computing resources in a wide network that must retrieve environmental data from the centrally controlled structure is not easy.
3. It is very difficult to control detectors operating on the same kind of information stored in the centrally controlled structure.
Nowadays, smart devices are becoming a requirement of our daily life. These devices can connect with each other. The time is not far away when millions of physical objects will relate to each other in real time [16]. They will interact, send data, and process their required information in the cloud [17]. However, there seems to be a lack of a scientifically standardized security point of view on the architecture of the IoT. Cloud computing has been regarded as one of the most famous computing concepts. It also resulted from innovations in previous computing frameworks that integrate simultaneous computation, grid computation, distributed computation, and other computing frameworks [18]. Cloud computing offers its consumers three important types of service: SaaS, PaaS, and IaaS. Software as a Service (SaaS) is specifically intended for consumers that must use software as part of their routine practices [19]. Platform as a Service (PaaS) is specifically intended for software developers that need technologies to establish their software or methodology. Infrastructure as a Service (IaaS) is specifically intended for network designers that need development abilities and must deal with interaction, protection difficulties, and cyberattacks on smart devices [20]. The authors have identified some security issues and attacks in this article. The first threat is the interruption of operation due to attacks [21]. Recently, existing attacks in distributed systems could be made accountable for serious security violations. The second challenge is denial-of-service threats, which appear as distinctive, numerous, or simple attacks. A challenge occurs in differentiating between unlawful packets and legal packets [22].
Clearly, such attacks require little coordination, so even inexperienced cybercriminals can launch them because they only operate simple codes and techniques [23]. As a matter of fact, packets can pass through the selected providers. Restraint of these attacks can be attempted via emerging technologies; among such innovations are intrusion protection systems [24].
The Cloud-MANET-IoT layered structure (Figure 3) is a novel methodology for connected-device interaction in the Internet of Things that discovers and integrates neighboring devices without any centrally controlled technology. The current mobile network does not enable smart devices to be linked without centrally controlled technology, even when they are very near to one another [25].
The suggested methodology would be very helpful in M2M networks because there are many machines close to one another in an M2M network. Developing the MANET framework integrated with IoT and the cloud could be very effective and beneficial for saving energy and improving speed performance. Cloud-based services in a MANET designed for user-to-device connectivity might be a very valid solution for exploiting the strengths of smart devices [26]. Users of devices can employ cloud storage services to explore devices, reduce valuable information in big data, and process videos, pictures, messages, and recordings [27]. In this paper, the authors suggest a new structure to improve the ability of smart devices on the internet for MANET and cloud computing, which could be helpful in highly diverse 5G services.
The following procedure is used to connect a device with the Cloud-MANET-IoT framework.
i. Register the IoT node in the cloud. The cloud server will generate the node id as well as login details.
ii. Enter the node id and login details in the cloud.
iii. Install the configuration file on the newly connected device.
iv. Start the MANET service on the IoT nodes using this file.
Step 1: Discover all IoT nodes in each direction of the selected IoT node.
Step 2: Find a Communication session using the formula.
Step 3: Calculate speed and directions.
Step 5: Find entropy per symbol.
Step 6: Find the transmission in bits/sec (a minimal sketch of this connection and discovery procedure is given below). MANET mobile devices can interact with one another in the Cloud-MANET-IoT framework, but at least one smart device should be tied to a wireless or Wi-Fi network. All MANET smart devices are independently enrolled in the cloud. The proposed system can be accessed in a disjointed style. Whenever a MANET is detected, the cloud provider is triggered in real time and provides services to the MANET's devices. The smart devices submit a report to the cloud for a communication session. The cloud offers better interaction with smart devices. The communication life is defined as a mathematical function in equation (1). The integral expression would be 0 if the limit tends to 0. To compute the session life using the above deterministic function, every device must compute the values of π and μ. These two variables are connected to the link established between the MANET and the cloud service and can be calculated by the connected devices using the preceding function, e^{μ(1/2)σ²}.
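To make the connection procedure concrete, the following minimal Python sketch mimics steps i–iv and Step 1 (registration, login, configuration, and neighbor discovery). All names used here (CloudRegistry, IoTNode, the radio-range test) are illustrative assumptions rather than the authors' implementation, and the session-life, speed, and entropy computations of Steps 2–6 are omitted because their formulas are not reproduced above.

import uuid

class CloudRegistry:
    """Toy stand-in for the cloud service; all names here are illustrative."""
    def __init__(self):
        self.devices = {}

    def register(self, device_name):
        # The cloud generates a node id and login details (step i).
        node_id = str(uuid.uuid4())
        credentials = {"user": device_name, "token": uuid.uuid4().hex}
        self.devices[node_id] = {"name": device_name, "credentials": credentials}
        return node_id, credentials

    def login(self, node_id, credentials):
        # Step ii: the node presents its id and login details to the cloud.
        entry = self.devices.get(node_id)
        return entry is not None and entry["credentials"] == credentials

class IoTNode:
    def __init__(self, name):
        self.name = name
        self.node_id = None
        self.neighbors = []

    def join(self, cloud):
        # Steps i-iv: register, log in, "install" the configuration, start the MANET service.
        self.node_id, creds = cloud.register(self.name)
        assert cloud.login(self.node_id, creds)
        self.config = {"node_id": self.node_id, "manet": True}   # stands in for the config file
        return self.config

    def discover(self, nearby_nodes, radio_range, positions):
        # Step 1: discover all IoT nodes within radio range of this node.
        x0, y0 = positions[self.name]
        self.neighbors = [n for n in nearby_nodes
                          if n is not self and
                          ((positions[n.name][0] - x0) ** 2 +
                           (positions[n.name][1] - y0) ** 2) ** 0.5 <= radio_range]
        return [n.name for n in self.neighbors]

# Usage: three nodes join the cloud and node "a" discovers its MANET neighbors.
cloud = CloudRegistry()
nodes = [IoTNode(n) for n in ("a", "b", "c")]
for n in nodes:
    n.join(cloud)
positions = {"a": (0, 0), "b": (30, 40), "c": (200, 0)}
print(nodes[0].discover(nodes, radio_range=100, positions=positions))   # -> ['b']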
Whenever a smart device estimates the communication life between the MANET and the cloud, it can efficiently transfer or receive information. A link is then triggered with high stability. It is assumed that, when generating a cloud session, every smart device is guaranteed to create a path between the MANET and the cloud. Using the Gauss-Markov mobility model, the devices can move from one area to the next at a velocity of 20 meters per second. Equations (2) and (3) can be used to model the movement and velocity of a smart device inside the MANET scope.
The quantity δ is a random degree of the computed velocity and trajectory of the smart device in a period t. During the time interval [t_{i−1}, t_i], the transmission (t_k) of data (I_k) between the smart devices can be calculated. Within the MANET, the smart devices can move and connect directly to the cloud storage service using the multifaceted function of t in equation (4).
where k = 0, 1, 2, 3, ..., ∞ (positive). When electronic devices move outside of the MANET, k becomes a negative number. Here, it is assumed that the data transfer takes place concurrently. We understand that the probability is proportional to the inverse of the data.
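The mobility model referred to above (equations (2) and (3)) is not reproduced in this text; as a hedged stand-in, the sketch below implements the textbook Gauss-Markov mobility update, which is the standard form of the model named in the paper. The tuning parameter alpha and the direction-noise level are illustrative choices, not the authors' values.

import math
import random

def gauss_markov_step(speed, direction, mean_speed, mean_dir, alpha, dt):
    """One update of the standard Gauss-Markov mobility model.

    alpha in [0, 1] tunes memory: 0 -> random walk, 1 -> constant velocity.
    This is the textbook form of the model, not the paper's exact equations (2)-(3).
    """
    speed_new = (alpha * speed + (1 - alpha) * mean_speed
                 + math.sqrt(1 - alpha ** 2) * random.gauss(0, 1))
    dir_new = (alpha * direction + (1 - alpha) * mean_dir
               + math.sqrt(1 - alpha ** 2) * random.gauss(0, 0.2))
    # The position update uses the previous speed and direction.
    dx = speed * math.cos(direction) * dt
    dy = speed * math.sin(direction) * dt
    return speed_new, dir_new, dx, dy

# Example: a device moving with a mean speed of 20 m/s inside the MANET scope.
speed, direction, x, y = 20.0, 0.0, 0.0, 0.0
for _ in range(10):                      # ten 1-second steps
    speed, direction, dx, dy = gauss_markov_step(
        speed, direction, mean_speed=20.0, mean_dir=0.0, alpha=0.75, dt=1.0)
    x, y = x + dx, y + dy
print(f"position after 10 s: ({x:.1f} m, {y:.1f} m)")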
The probability density function for transmission is calculated mathematically as follows.
where the probabilities sum to 1. We then divide the probability density function over all associations and use the entropy per symbol for all smart devices at their 3-dimensional locations.
Here, χ²(x, y, z) is the chi-square distribution used for congruence. We then measure all the probabilities and entropies for each direction and, finally, create the transition matrix from the probabilities of all smart devices.
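As a rough illustration of Steps 5–6 and of the transition-matrix construction, the following sketch computes the Shannon entropy per symbol from a per-device direction-probability distribution and stacks these distributions into a row-stochastic transition matrix. The probability values are purely illustrative, and the probability-density and chi-square expressions above are assumed rather than re-derived here.

import numpy as np

def entropy_per_symbol(probs):
    """Shannon entropy H = -sum p_i log2 p_i of one device's direction distribution."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]                      # ignore zero-probability directions
    return float(-(p * np.log2(p)).sum())

def transition_matrix(direction_probs):
    """Stack per-device direction probabilities into a row-stochastic matrix."""
    m = np.asarray(direction_probs, dtype=float)
    return m / m.sum(axis=1, keepdims=True)

# Example: three devices, each with probabilities of moving toward three neighbors.
probs = [[0.5, 0.3, 0.2],
         [0.1, 0.6, 0.3],
         [0.25, 0.25, 0.5]]
for i, p in enumerate(probs):
    print(f"device {i}: entropy per symbol = {entropy_per_symbol(p):.3f} bits")
print("transition matrix:\n", transition_matrix(probs))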
The authors calculated the throughput of the devices in the proposed model using 5, 15, and 25 devices at 5 m/s, 10 m/s, and 25 m/s (Figures 4, 5, and 6). The data gathered at the time of testing are summarized in Tables 1, 2, and 3.
The proposed framework was implemented in three stages. Firstly, all connected devices were discovered throughout the self-created MANET network. Secondly, the IoT devices communicated with the cloud, and thirdly, the IoT devices communicated with MANET devices, established sessions, and transmitted data from one IoT device to the next using the cloud as a provider. We tested the above approach on different numbers of IoT smart devices. The methodology was operated on numerous MANETs and utilized the cloud as a provider. The study was performed using smart devices with the suggested formulation of the calculation as well as with other existing methodologies built in the same setup. The results show that the output of the proposed approach was stronger than that of the other approaches.
V. RESULT AND DISCUSSION
Secure data transmission among IoT nodes has been introduced in this research using a framework named Cloud-MANET-IoT. Two mobile apps were developed: one for smart-device communication and another for the administrator to monitor and control. Next, the framework was tested using these mobile apps on three Android devices; two of them were supported by 4G networks and one by a 3G network. Amazon Web Services (AWS) has been used to execute the cloud services. The cloud service communicated with the MANETs, and the applications were deployed on every device. Each device was also enrolled in the cloud once the mobile applications were set up on it. The cloud produced a device id for each device, and every device was approved in the cloud. Using cloud-based services, the devices were able to interact with other devices within the area of the same MANET or of any other MANET. The mobile apps configure a smart IoT device.
When the smart IoT node started, it automatically connected to Amazon's cloud service and began to connect with other IoT nodes. The Amazon cloud produced database schema utilities to store information regarding devices, queries, communicated texts, and information about devices in the neighborhood.
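The database schema itself is not specified above; the sketch below is a purely illustrative, in-memory stand-in for the kind of records described (devices, queries, messages, and neighborhood information), not the actual AWS schema used in the experiments.

import time
import uuid

# Illustrative in-memory stand-in for the cloud database described above.
# Table and field names are hypothetical, not the authors' AWS schema.
cloud_db = {"devices": {}, "queries": [], "messages": [], "neighbors": {}}

def enroll_device(name, network):
    device_id = str(uuid.uuid4())
    cloud_db["devices"][device_id] = {"name": name, "network": network,
                                      "enrolled_at": time.time(), "approved": True}
    cloud_db["neighbors"][device_id] = []
    return device_id

def send_message(src_id, dst_id, text):
    cloud_db["messages"].append({"from": src_id, "to": dst_id,
                                 "text": text, "ts": time.time()})

# Usage: enroll two devices (4G and 3G) and exchange a message through the cloud.
a = enroll_device("phone-A", "4G")
b = enroll_device("phone-B", "3G")
cloud_db["neighbors"][a].append(b)
send_message(a, b, "hello from MANET")
print(len(cloud_db["devices"]), "devices,", len(cloud_db["messages"]), "message(s)")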
VI. CONCLUSION
The Cloud-MANET-IoT framework has the ability to play an important role in diverse 5G communication. The framework has been developed to enhance interaction efficiency. Because the cloud framework is predicated on a diversified structure, many precautions against security risks linked to distributed concepts must be taken. Data transmission security issues and obstacles underlying cloud computing were examined. Moreover, many of these risks are amplified in most cloud frameworks. The security requirements as well as the communication security problems between all connected devices throughout the cloud services were also analysed. The proposed framework has been constructed, implemented, and tested.
"Computer Science",
"Engineering",
"Environmental Science"
] |
Spin transport and spin manipulation in GaAs (110) and (111) quantum wells
Spin dephasing via the spin–orbit interaction is a major mechanism limiting the electron spin lifetime in zincblende III–V quantum wells (QWs). QWs grown along the non‐conventional crystallographic directions [111] and [110] offer new interesting perspectives for the control of spin–orbit (SO) related spin dephasing mechanisms due to the special symmetries of the SO fields in these structures. In this contribution, we show that the combination of such special symmetries with the transport of carriers by the type II modulation accompanying a surface acoustic wave allows the transport of spin-polarized carriers over distances of tens of μm in GaAs(110) QWs. In the case of GaAs(111), the Rashba contribution to the SO field generated by an electric field perpendicular to the QW plane is used to compensate the Dresselhaus contribution at low temperatures, leading to spin lifetimes of up to 100 ns. The compensation mechanism is less effective at high temperatures due to nonlinear terms of the Dresselhaus contribution. Perspectives to overcome this limitation via the combination of (111) structures with the transport of spins by surface acoustic waves are discussed.
Introduction
The manipulation of spins in semiconductor materials has become an active area of investigation, in particular after different proposals for spin-based electronic information processing have been put forward [1][2][3]. Device functionalities based on electron spins require processes for the generation, storage, detection, and transport of spins as well as interaction mechanisms to efficiently manipulate the spin vector. Advanced applications further presuppose the interaction between two spins in order to realize spin-spin gates. In this respect, two main challenges are (i) the enhancement of the electron spin coherence times in intrinsic III-V bulk semiconductors, which is typically of only one ns, thus restricting the number of spin manipulation steps that can be realized before decoherence effects set in, and (ii) the development of spin manipulation techniques that do not compromise the spin lifetime.
While isolated electron spins can be efficiently manipulated using a magnetic field, electron spins in a crystal interact with other spins from electrons (the exchange interaction) or nuclei (the hyperfine interaction) as well as with the lattice potential. These interactions can be used for the realization of spin control gates, but at the same time they may also lead to spin relaxation if not appropriately controlled. The electron-hole exchange interaction leads to the so-called Bir-Aronov-Pikus (BAP) spin dephasing mechanism [4], which is particularly important for excitons [5] as well as in highly p-type doped III-V semiconductors at low temperatures [6]. Spin scattering via the hyperfine-interaction is normally negligible for free carriers, but becomes important for localized electrons in quantum dots.
In this contribution, we investigate spin transport and manipulation in III-V semiconductor quantum well (QW) structures grown along non-conventional crystallographic directions, such as the [110] and [111] directions. The motivation for these studies arises from the special characteristics of the interaction of spins with the lattice potential in these structures, the well-known spin-orbit (SO) interaction. The SO-interaction arises from the fact that an electron experiences a varying electric field while moving through a noncentrosymmetric crystal (such as the zinc-blende semiconductors). The latter translates into a momentum ( k) dependent effective magnetic field B SO in the electron reference frame, which acts on its spin. The SO-coupling can be controlled by external electric [1,7] and strain fields [8].
As a result, it provides an interesting mechanism for spin generation and detection (i.e., optical orientation [9]) as well as for spin manipulation without the application of magnetic fields. Examples are the generation of non-equilibrium spin populations using electric currents [10][11][12][13][14] , as well as the spin-galvanic [15,16] and spin Hall effects [13,17,18]. The SO-interaction can also lead to spin dephasing in an electron ensemble, since electrons with different ks will experience SO-fields of different strengths and directions. Their spins will then precess at different rates, leading to a reduction of the resulting spin ensemble, an effect known as the Dyakonov-Perel (DP) spin dephasing mechanism [19][20][21]. The SO-interaction can also couple different spin states during electron scattering processes. One example is the Elliott-Yafet spin dephasing mechanism [22], which describes spin-flip transitions induced during electron scattering by impurities or phonons and is expected to be influential in highly doped, low-bandgap materials. Finally, the SO-coupling is also behind the intersubband spin relaxation mechanism [23] in QW structures, where electron scattering between two subbands is accompanied by a spin flip.
Two main approaches have been proposed to control the relaxation and to manipulate electron spins in semiconductors. The first, which will be discussed in detail in Section 2, exploits the dependence of the SO-interaction on the symmetry of QW structures. The second relies on the use of confinement potentials of quantum wires and dots (QDs). Confinement isolates spins from the semiconductor matrix, thus ensuring the long spin coherence times required for the coherent manipulation by external fields [24,25]. Confinement can also be efficiently employed to control SO-effects via motional narrowing. The latter relies on the fact that the DP spin scattering rate is inversely proportional to the momentum scattering rate [19]: frequent momentum scattering at the borders of the confinement potential randomizes the effective SO-field B SO experienced by the spins, thus leading to longer lifetimes [26,27]. The required confinement dimensions are dictated by the SO length ℓ SO , which is defined as the typical distance required for a one-rad spin precession under B SO . ℓ SO can reach a few μm in wide GaAs QWs, thus making it possible to employ mesoscopic confinement potentials for spin control. Finally, no DP relaxation is expected in one-dimensional systems (1D) since the axis of the fictive SO-field B SO is fixed and its direction reversible with k [28,29].
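As a rough orientation (an estimate assumed here, not a value quoted in this work), the SO length for a linear-in-k SO term with coefficient α_SO follows from the spin precession angle accumulated per unit path length:

\theta(L) = \frac{2 m^{*} \alpha_{\mathrm{SO}}}{\hbar^{2}} L \quad\Longrightarrow\quad \ell_{\mathrm{SO}} \sim \frac{\hbar^{2}}{2 m^{*} \alpha_{\mathrm{SO}}} ,

so that weak SO coefficients and light effective masses yield the μm-scale values mentioned above.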
The highest degree of spin control and manipulation has so far been achieved in QDs defined either by Stranski-Krastanov growth or by metal gates [25]. In these systems, single electron spins in QDs have been manipulated using a variety of mechanisms including the electrical control of the exchange interaction [30] as well as electron spin resonance (ESR) induced by a varying magnetic field induced by a strip-line [31] or by a periodically varying B SO [32]. In the last case, the effective magnetic field associated with the SO-interaction can be varied through an oscillatory spin motion induced by an electric field [33,34]. Proposals for coherent spin control using ESR generated through carrier motion in a magnetic field gradient are also available [35,36].
A further challenge toward spin-based information processing schemes is the realization of scalable systems consisting of several spin subsystems. These processes normally require the transport of spins, which cannot be easily combined with 3D-confinement. In this contribution, we explore the use of moving piezoelectric potentials created by surface acoustic waves (SAWs) for spin transport and manipulation. These moving fields provide a way of combining the advantage of confinement with the long range transport required to couple remote spin systems.
In this work, we review recent results on the control of the SO-interaction and acoustic transport in GaAs (110) and (111) QWs. We start in Section 2 with a theoretical description of SO-effects arising from the symmetry of these QWs, and how they can be controlled using spatial confinement or external electric and strain fields. In (110) QWs, the lifetime enhancement associated with the crystallographic symmetry is restricted to spins aligned with the growth direction [23,37]. Higher lifetimes for other orientations can be achieved by lateral confinement. (111) QWs are particularly interesting because SO-effects can be suppressed for all spin orientations via the application of a vertical (i.e., perpendicular to the QW plane) electric field [38,39]. Experimental exploitation of the non-conventional QW orientations requires the development of epitaxial growth procedures to create (Al,Ga)As structures with quality similar to the conventional (001) ones. Recent results about the growth via MBE of high quality QWs as well as QWs embedded in microcavities are presented in Section 3.
The experimental studies of the spin dynamics in the QWs were carried out by combining spectroscopic techniques with spin transport by SAWs. The use of SAWs for the transport of carriers and spins in (Al,Ga)As(001) structures is reviewed in Section 4. The application of SAWs for the transport of spins in (110) QWs is discussed in Section 5. Emphasis is placed on the combination of lateral confinement with acoustic transport in order to achieve efficient spin transport at temperatures above 100 K. Section 6 addresses the spin dynamics in GaAs(111) QWs. Here, we demonstrated that an electric field applied across the QW can suppress SOeffects, leading to lifetimes exceeding 100 ns. Approaches for acoustic spin transport are discussed in Section 7, which also summarizes the main conclusions of this work.
Symmetry effects on the spin-orbit interaction
2.1 The Dresselhaus contribution
Table 1: Spin precession frequencies Ω i for electron spins in III-V (001), (110), and (111) QWs with in-plane wave vector (k x , k y ) induced by the bulk inversion asymmetry (Dresselhaus contribution [43], Ω D ) and by structural inversion anisotropies created by the application of a vertical electric field E z (Rashba contribution [44], Ω R ). Also listed are the SIA contributions due to the biaxial strain induced by lattice mismatch ( Ω B ) [41] and to the strain field of a Rayleigh SAW ( Ω S ). The strain-induced contribution of BIA-type [41], which is considered to be small, is not taken into account. d eff denotes the effective QW thickness, while γ, r 41 , and C 3 are constants describing the dependence of the spin splittings due to the Dresselhaus, Rashba, and strain contributions. The parameter γ relates the splitting between the energetic spin eigenstates in the bulk III-V material to the electron wave vector (cf. Ref. [45]). The parameter r 41 for the Rashba term follows the convention in Winkler [46]. For the biaxial strain, the non-vanishing strain components are assumed to be u xx , u yy = u xx , and u zz . The SAW calculations assume that electrons propagate with a wave vector k x = m e v SAW /ℏ (where m e and v SAW denote the electron mass and the SAW propagation velocity, respectively) under the strain field of a Rayleigh SAW, which has non-vanishing strain components u xx , u zz , u xz (see Section 4 for details).
(a) Calculated for a biaxial strain with in-plane component u yy = u xx and out-of-plane component u zz ; (b) calculated including only the SIA-type strain contribution [41,42]; (c) the SAW propagates along the x direction (cf. Section 4).
The SO-interaction in III-V QWs is governed by two major contributions. The first one is of intrinsic nature and associated with the bulk inversion asymmetry (BIA) of the III-V zinc-blende lattice. The effective SO-magnetic field B D (k) = ℏΩ D /(gμ B ) associated with this contribution, which is normally denoted as the Dresselhaus term, can be expressed in terms of a wave vector-dependent spin-splitting energy ℏΩ D . Here, μ B is the Bohr magneton and g the electron g-factor. Expressions for Ω D (k) are summarized for QWs with different growth directions in Table 1. Ω D (k) depends on the spatial extent d eff of the electronic wave function along the growth direction, as well as on the spin-splitting parameter γ for the QW material. The expressions listed in Table 1 assume the axes listed in the first row of the table (where x and y lie in the QW plane and z is along the growth direction) and apply for small k-vectors (i.e., k x , k y ≪ π/d eff ). The first row of plots in Fig. 1 sketches the orientation of B D (k) for QWs grown along different crystallographic directions. B D (k) lies in the QW plane for (001) and (111) QWs, its amplitude and orientation depending on k. For (110) QWs, in contrast, B D (k) has a single component along z that only depends on k y . As a result, DP relaxation does not affect z-oriented spins, thus leading to long spin lifetimes [23,37,40].
The Ω D component along z leads, however, to short relaxation times for spins oriented in the xy plane, thus preventing efficient spin manipulation. The lifetime of these spins can be enhanced by using lateral confinement to force the carriers to move along a single direction. Note, in particular, that B SO only depends on the ŷ component of the carrier momentum and vanishes for k y = 0. As a result, the SO-field vanishes for transport channels along the x̂ direction. This effect will be explored for spin transport at high temperature to be described in Section 5.2.
The Rashba contribution
The second important contribution to the SO-interaction arises from structural inversion asymmetries (SIA) in the QW potential introduced by an external field along z. In most cases, the SIA is generated by an electric field E z perpendicular to the QW plane, leading to the so-called Rashba SO-contribution [44]. Expressions for this contribution to the SO-precession frequency, Ω R (k), in QWs of different symmetry are listed in the 3rd row of Table 1. Ω R (k) lies always in the QW plane and perpendicular to k. Since its strength can be electrically controlled, the Rashba effect provides a powerful approach for the dynamic manipulation of moving spins [1].
In (110) QWs, the Rashba effect can be used to rotate spins around axes perpendicular to z [47]. For the other orientations, the Rashba field can also compensate the BIA contributions. In (001) QWs, Ω D (k) + Ω R (k) vanishes for spins propagating along the x-direction ([110] in cartesian coordinates) provided that the Rashba and Dresselhaus coefficients are matched [48]; a similar condition, but with a negative sign, applies for k ∥ [1̄10]. Under compensation, long-living spin eigenstates in the form of a persistent spin helix exist, as recently verified by different groups [49][50][51].
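Although the original expression is not reproduced here, with the small-k parameters and conventions of Table 1 the compensation condition commonly quoted in the persistent-spin-helix literature takes the form (the sign depending on the propagation direction):

| r_{41} E_z | = \gamma \left( \frac{\pi}{d_{\mathrm{eff}}} \right)^{2} .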
(111) QWs are particularly interesting for the investigation of SO-effects because the Rashba and Dresselhaus contributions have the same symmetry (cf. Fig. 1). As a result, the total SO-interaction precession field Ω D (k) + Ω R (k) vanishes for all (small) k-vectors when an appropriately chosen electric field is applied. This compensation mechanism was originally proposed in the theoretical works by Cartoixà et al. [38] and Vurgaftman and Meyer [39]. Numerical calculations have also been carried out for doped QWs [52]. Recent experimental verification has been provided by Balocchi et al. [53] and Hernández-Mínguez et al. [54] (see also Refs. [55][56][57]). These experiments will be reviewed in Section 6.
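The corresponding condition for (111) QWs is again set by balancing the Rashba coefficient against the linear Dresselhaus coefficient of Table 1; schematically (the numerical prefactor of order unity and the sign depend on the conventions used there, with 2/√3 often quoted in the literature),

r_{41} E_z = -\beta_{111} , \qquad \beta_{111} \propto \gamma \left( \frac{\pi}{d_{\mathrm{eff}}} \right)^{2} .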
Strain effects
In addition to electric fields [1,7], strain fields also affect the QW symmetry, thereby introducing a SIA contribution to the SO-coupling. Epitaxial layered structures are normally subjected to biaxial strain induced by the mismatch between the lattice parameters of the individual layers. Theoretical predictions for the dynamics of spins in strained QWs have been presented in Refs. [42,60]; experimental results demonstrating spin precession induced by a strain field in (001) QWs are given in Refs. [8,59,61].
Expressions for biaxial strain contribution ( Ω B ) in QWs with different orientations are summarized in the 4th row of Table 1. As in the case of the Dresselhaus and Rashba mechanisms, the expressions are only valid for small electron wave vectors k. In this approximation, Ω B vanishes in (001) QWs. In (110) and (111) QWs, in contrast, Ω B has the same symmetry as the Dresselhaus contribution [42]. Lattice strain induced effects are expected to play a prominent role in (In,Ga)As structures grown on GaAs substrates.
Higher order terms in k
The expressions for the Dresselhaus contribution in Table 1 are valid in the low-temperature regime, where the thermal expectation values of k 2 x and k 2 y are much smaller than k 2 z = (π/d eff ) 2 . Outside this regime, one has to take into account contributions of higher order in k in the Dresselhaus term. In (110) QWs and in the absence of many-body effects, these contributions do not change the symmetry properties of the SO-interaction [62]. We will examine here only the case of (111) QWs, where a correction including terms up to third order in k = (k x , k y ) has to be added to Ω D [38]. An additional SO-contribution arises from the fact that the GaAs QWs are normally grown on surfaces slightly tilted away from the [111] direction (see Section 3.1). By calculating the Dresselhaus contribution for this new orientation, one can show that the tilting, δθ, introduces an extra SO-term. Note that this contribution lifts the degeneracy between the x and y components of the precession frequency given by the other Dresselhaus contributions. The lifetimes τ i for spins oriented along the axis i (i = x, y, z) can be estimated from the total precession frequency Ω SO D (k); here, the prefactor τ * p depends on the momentum scattering time. The thermal averages indicated by ⟨. . .⟩ can be calculated by taking the thermal expectation values of the wave-vector components. However, a better approximation is obtained by integrating the previous expression over k(E) = (2m * E/ℏ 2 ) 1/2 , where the electron energy E is assumed to follow a Boltzmann distribution.
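A minimal numerical sketch of such a Boltzmann-weighted thermal average is given below. It assumes a non-degenerate 2D electron gas with a constant density of states, the GaAs effective mass, and a 20 nm well width; these are illustrative assumptions, not the exact procedure or parameters of the cited works.

import numpy as np

HBAR = 1.054571817e-34      # J s
KB   = 1.380649e-23         # J / K
M_E  = 9.1093837015e-31     # kg
M_EFF = 0.067 * M_E         # GaAs conduction-band effective mass

def thermal_average(f_of_k, T, n_grid=20000, e_max_kT=30.0):
    """Boltzmann-weighted average <f(k)> for a non-degenerate 2D electron gas.

    Assumes a constant 2D density of states, so the weight is simply exp(-E/kT).
    f_of_k takes the wave-vector magnitude k (1/m) and returns a number.
    """
    E = np.linspace(1e-30, e_max_kT * KB * T, n_grid)     # energies in J
    k = np.sqrt(2.0 * M_EFF * E) / HBAR                   # k(E) = sqrt(2 m* E) / hbar
    w = np.exp(-E / (KB * T))
    return np.trapz(f_of_k(k) * w, E) / np.trapz(w, E)

# Example: thermal <k_parallel^2> compared with <k_z^2> = (pi/d_eff)^2 for a
# 20 nm QW.  When the ratio approaches 1, the cubic-in-k Dresselhaus terms can
# no longer be neglected.
d_eff = 20e-9
kz2 = (np.pi / d_eff) ** 2
for T in (10.0, 100.0, 300.0):
    k2 = thermal_average(lambda k: k**2, T)
    print(f"T = {T:5.0f} K   <k^2>/<k_z^2> = {k2 / kz2:.2f}")

The ratio grows roughly linearly with temperature, indicating when the small-k expressions of Table 1 cease to be a good approximation.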
Equation (5) yields the lifetime of spins oriented along a certain crystallographic direction. For spins precessing around an in-plane magnetic field with a precession period short compared with the spin lifetime, the average lifetime τ R becomes [23] 1/τ R = (1/τ x + 1/τ z )/2, where the magnetic field is oriented along the y direction.
Since τ x ≈ τ y , the same expression also applies for spins precessing around x.
Epitaxial growth of GaAs(110) and GaAs(111)B quantum wells
As was pointed out in the introduction, QWs grown in other than the most commonly used [001] direction are predicted to be advantageous for electron spin control. From the growth point of view, molecular beam epitaxy (MBE) on GaAs(110) and GaAs(111) appears very interesting and different from growth on GaAs(001). Already a comparison of the surface energies for different substrate orientations suggests that epitaxial growth on (111)B and (110) surfaces is distinctly different from epitaxy on common (001) substrates. The surface energy of the (001) surface can be estimated to be 2.9 J m −2 and is thus much larger than the surface energies derived for the (110) and (111) surfaces, which amount to 2.1 and 1.7 J m −2 , respectively [63]. The surface energy can be assumed to be inversely proportional to the lifetime of Ga adatoms on the growing GaAs surface [64]. Therefore, Ga adatoms migrating on the (001) surface are expected to be much more efficiently incorporated into the crystal than on (110) and (111) surfaces.
The previous growth model ignores the distinct surface reconstructions occurring during growth on different GaAs surfaces. Growth on GaAs(001) is usually accomplished under As-rich conditions on a 2 × 4 reconstructed surface, where As dimers are typically arranged in parallel rows along [110], separated by left-out trenches in between [65,66]. The nucleation process starts on a ridge of As dimers when a trapped Ga atom bonds to another As dimer. This nucleus then grows as a two-dimensional island by capturing further Ga atoms and As dimers, thus also filling the trenches. When this island is big enough, the uppermost As dimers split to form the stable 2 × 4 reconstructed surface once again [67]. In contrast, the nucleation behavior on (110) and (111) surfaces can be described by conventional two-dimensional growing islands without the need to consider any site specificity [67]. Thus, the widespread growth on GaAs(001) has to be considered as a well-studied but rather unique scenario, which does not necessarily apply to other substrate orientations.
MBE growth of (Al,In,Ga)As(110) QWs
The GaAs(110) surface is unique in the sense that it is the only stable surface of GaAs that does not show any surface reconstruction. It is further characterized by the same number of Ga and As atoms in each monolayer. The Ga-As bonds run symmetrically to the (110) plane, leading to the non-polar nature of this surface. When homo-epitaxial growth is accomplished on GaAs(110) under the conditions typically used for growth on GaAs(001), strong facetting is observed, leading primarily to elongated triangular features with {100} side-facets [68].
During the hetero-epitaxial growth required for the fabrication of QW structures on GaAs(110), effective strain relaxation by dislocations moving on slip planes is restricted to the [001] direction. In contrast, this mechanism is possible along both orthogonal <110> surface directions of hetero-epitaxial layers grown on GaAs(001) [69]. Furthermore, the elastic properties of GaAs(110) and GaAs(001) are different, resulting in a strongly reduced critical thickness for hetero-epitaxial growth on GaAs(110) compared to growth on GaAs(001) [70].
The (110) samples used in this study were grown at a relatively low temperature of 490 °C under high As 4 pressure in order to suppress facetting and to ensure a mirror-like surface. This is illustrated in Fig. 2, where three Nomarski micrographs of the surfaces of nominally identical (Al,Ga)As(110) microcavity structures are shown. The only varied parameter was the As 4 background pressure (p As 4 ), which was increased from (a) p As 4 = 6.0 × 10 −7 mbar to (c) p As 4 = 1.4 × 10 −6 mbar. Only the sample grown under the highest p As 4 shows a mirror-like surface. The reduced mobility of the group-III adatoms due to the low growth temperature and the high As flux seems to promote smooth surfaces on GaAs(110) based heterostructures. Under these conditions, QWs with good optical qualities can be grown. Figure 3 shows a photoluminescence (PL) spectrum taken at T = 5 K on a sample containing one GaAs QW and three (In,Ga)As/(Al,Ga)As QWs. The thickness of each QW is 20 nm (the In contents x and the full width at half maximum, FWHM, of the lines are indicated in the plot). The PL line of the GaAs QW is characterized by a narrow linewidth of only 0.5 meV, which is not much broader than the best reported values. To increase the efficiency of photon to electron-hole pair interconversion during optical probing of the spin dynamics, (110) single QWs were grown embedded in a microcavity structure (cf. Section 5.2). In such cases, all λ/4-thick mirror layers were grown as short-period superlattices (SPSLs), as it has been shown that the numerous interfaces of SPSLs hinder the propagation of misfit dislocations through the epilayers [70]. The matching of microcavity resonance wavelength and QW emission wavelength was ensured by applying in situ continuous spectral reflectivity measurements [71].
MBE growth of (Al,Ga)As(111) QWs
The GaAs(111) surface is characterized by its threefold symmetry with alternating ⟨211⟩- and ⟨110⟩-type directions separated by 30°. Unfortunately, this threefold symmetry leads to the formation of pyramidal structures on the surface under usual growth conditions [72]. However, mirror-like morphologies without any pyramidal features can be achieved when substrates with a slight off-orientation (≤ 5°) are used [73]. Epilayers with surfaces free of pyramids can also be deposited on exactly oriented GaAs(111)B substrates if the V/III flux ratio and growth temperature are accurately chosen [74]. Under these conditions, a static √19 × √19 R23° surface reconstruction can be observed [72,[74][75][76][77]. Correspondingly, the (111)B samples used in this study were grown at a temperature of 600 °C on √19 × √19 R23° reconstructed surfaces of slightly off-oriented (1–3°) GaAs(111)B substrates to ensure mirror-like surfaces without pyramidal features.
Due to the threefold symmetry of the (111) surface, opposing directions are not equivalent, an important point to take into account when choosing the direction of the off-orientation. The enormous impact of the direction of the off-orientation on the epitaxial growth is illustrated in Fig. 4. Nomarski micrographs display the morphologies of two nominally identical (Al,Ga)As microcavity structures grown on (111)B substrates with a 3° off-orientation toward two opposing ⟨211⟩-type directions, (a) and (b). Whereas in the latter case a mirror-like surface is obtained (panel b), an off-orientation toward the opposing direction leads to zig-zag shaped growth steps. Even when the growth parameters are chosen to yield mirror-like surfaces on slightly off-oriented (111)B substrates, the next challenge is then to adjust the level of step bunching according to the desired application. Although Ren and Nishinaga [78] demonstrated a clear correlation of step bunching with growth temperature, the controlled tuning of the step bunching level is experimentally difficult to realize. This is illustrated in Fig. 5, which displays three atomic force micrographs taken at different positions of a 2-inch substrate. In our growth setup, there is a slight lateral temperature gradient from the centre of the wafer (highest temperature) to the edge of the wafer (lowest temperature). Thus, for the highest temperature (panel a) we observe a rather regular step bunching with step heights of about eight monolayers (MLs) and terrace widths on the order of 70-80 nm. In the area midway between centre and edge (medium temperature, panel b) we see some long range step bunching with step heights on the order of about 30 MLs and large terrace widths of about 300 nm. Finally, in the area near to the wafer edge (coldest region, panel c) the surface is characterized by double ML steps, thus indicating that step bunching was strongly suppressed in this area. Depending on the intended application, step bunching might or might not be favorable and, correspondingly, a tight control of growth temperature can play an important role.
Although there is a good understanding of the epitaxial growth processes on GaAs(111)B substrates, the optical quality of the QWs is still inferior to that of QWs grown on the GaAs(001) surface. This is illustrated in Fig. 6, where we compare our best PL spectra obtained from GaAs(001) and GaAs(111)B QWs.
Acoustic transport of carriers and spins
This section reviews recent results on carrier and spin transport by SAWs in III-V semiconductor nanostructures. SAWs are elastic vibrations propagating along a surface. In a piezoelectric material (such as the III-V compounds), these waves are accompanied by a piezoelectric potential Φ SAW , which enables their electric generation by interdigital transducers (IDTs) placed on the surface of the material (cf. Fig. 7). As illustrated in the inset of the figure, Φ SAW creates a moving modulation of the conduction and valence band edges, which can capture electrons and holes and transport them with the acoustic velocity.
The initial investigations of the acoustic carrier transport in semiconductors date back to the seventies, when acoustically induced electron transport [79,80], as well as the role of photogenerated carriers in the SAW attenuation was established [81]. By reducing the lateral dimensions of the potentials, the acoustic transport of single-electrons [82][83][84] has been demonstrated. The type II potential modulation can trap and transport photoexcited electrons and holes over hundreds of μm [85]. This approach has also been used to create acoustically pumped single-photon sources [86].
The mobile character of the piezoelectric potential is specially suitable for the transport and manipulation of photoexcited spins using the scheme illustrated in Fig. 7. Here, optically oriented electron spins are captured by Φ SAW and transported along the SAW propagation direction. Magnetic fields or electric gates based on the SO-interaction can then be used to control the spin orientation during transport. The spin state can be probed during transport via PL spectrometry [87,88] or Kerr reflectometry [89,90]. After transport, the electrons and holes can be forced to recombine, leading to the emission of circularly polarized light.
Figure 7: Transport and manipulation of spins in GaAs QW structures using surface acoustic waves. Electrons and holes generated by light pulses are spatially separated and transported by the type II modulation of the SAW field (cf. inset). During the acoustic transport, the spins interact with magnetic and electric fields created by gate electrodes. Spin polarization is detected either by Kerr rotation of the polarization axis of a second laser pulse, or by the carrier recombination induced by metal stripes (M) placed beyond the interaction region, where the SAW potential quenches. The polarization of the luminescence is used to determine the spin state.
The acoustic potentials have the following favorable properties: (i) the spatial separation of electrons and holes by the type II potential increases the recombination lifetime and suppresses spin dephasing via the electron-hole exchange interaction (the excitonic or BAP mechanism) [5,87,88]; (ii) motional narrowing effects induced by mesoscopic (μm-sized) confinement potentials reduce spin dephasing within an electron spin ensemble. Very long spin lifetimes (> 25 ns) and coherent transport lengths (> 100 μm) have been measured during acoustic transport by mobile potential dots (dynamic quantum dots, DQDs) produced by acoustic fields in GaAs (001) QWs [88,91]; (iii) the spins are transported with a well-defined average momentum ℏk SAW = m * v SAW along the SAW propagation direction, where m * is the electron effective mass and v SAW the acoustic propagation velocity. This is specially interesting for studies of SO-effects in view of the dependence of B SO on electron momentum. The controlled precession of spins during acoustic transport has been demonstrated in Ref. [88]. The dependence of the precession frequency on QW thickness has been used for the direct determination of the SO-splitting constant in GaAs QWs (see Section 2) [88,91,92]. Finally, Sanada et al. [93] have recently demonstrated the full control of the spin vector via the SO-interaction by tailoring the shape of the acoustic transport channel;
(iv) the dynamic strain field of a SAW can also be used to induce a SO-field for spin control [90,94,95]. We briefly consider here the effects of the strain field of a Rayleigh SAW. For a Rayleigh SAW propagating along x, the strain has three non-vanishing components with amplitudes u_xx, u_zz, and u_xz, which vary in time and space with the acoustic frequency (f_SAW) and wavelength (λ_SAW), respectively. If the carriers are transported close to the minima of the electronic piezoelectric energy, they experience a constant strain field during transport. The last row of Table 1 summarizes the k-dependence of the SAW strain contribution Ω_S for QWs with different orientations. For a SAW along x (i.e., for k_y = 0), Ω_S is proportional to the Rashba contribution and can, therefore, be compensated by an applied electric field. This becomes particularly interesting for the acoustic transport of long-living z-oriented spins in (110) QWs, where precession due to the strain field can be avoided by the application of an electric field.
Most of the previously mentioned investigations have been carried out in (Al,Ga)As(001) QW structures. In the following sections, we review recent results on spin transport in (110) QWs as well as future perspectives for the spin transport and manipulation in (111) structures.
Acoustic spin transport in GaAs(110) QWs
This section reviews results on the acoustic transport of electron spins in GaAs(110) structures. We first analyze spin transport by SAWs along a single QW at low temperature and compare the transport dynamics to the predictions of Section 2. The long lifetimes of spins in these QWs make them good candidates for spintronic devices also at higher temperatures, where spin dephasing is more pronounced. In the second part of this section, we address acoustic transport at higher temperatures in piezoelectrically defined channels created in GaAs(110) microcavity structures. In these channels, the electron propagation is confined within mesoscopic 1D channels, which, according to the discussion in Section 2.1, provides further suppression of the spin dephasing mechanisms. This, together with the enhanced optical generation and detection of electron spin polarization in QWs embedded in microcavities, allows for spin transport over long distances at temperatures exceeding T = 100 K.
Low temperature spin transport
The studies were carried out on a 20 nm-thick undoped GaAs(110) QW with Al_0.15Ga_0.85As barriers located 400 nm below the surface. In order to enhance the SAW piezoelectric field, the samples were coated with a 424 nm-thick piezoelectric ZnO film. SAWs propagating along the x ∥ [001] surface direction were generated by IDTs designed for an acoustic wavelength λ_SAW = 5.6 μm and fabricated by optical lithography [40].
The experiments for the optical detection of spin transport were carried out at 20 K in a cold-finger cryostat with an optical window and coaxial connections for the application of radio-frequency (rf) signals to the IDTs. An external coil applies in-plane magnetic fields, B_ext, of up to 150 mT. The acoustic transport of spins was studied in this sample by spatially and time-resolved PL as well as magneto-optic Kerr reflectometry [96,97]. The latter technique has been successfully applied for spin transport studies in GaAs QWs involving a single acoustic wave [89], as well as moving potential dots generated by the interference of two perpendicular SAW waves [90]. Here, a circularly polarized pump laser pulse focused on the SAW path (15 μm diameter spot) optically generates out-of-plane polarized spins. They are then probed during transport by measuring the rotation of the polarization angle, δθ_K, of a weaker, linearly polarized probe pulse reflected from the sample (spot of typically 10 μm diameter). δθ_K is proportional to the projection of the average spin vector along the direction perpendicular to the QW plane (the z direction). The probe pulse can be displaced by x along the SAW path and time-delayed by t with respect to the pump pulse. Both pump and probe energies are tuned to the electron heavy-hole exciton energy of the QW (λ_hh ≈ 813.1 nm). Further details about the experimental setup can be found in Ref. [89].
The SAWs were generated by applying a radio-frequency signal of f SAW = 524 MHz with a nominal power P rf = 20 dBm to the IDT. Although the cryostat nominal temperature was 20 K, additional heating induced by the rf power increases it to ≈53 K, as estimated from the energetic shifts of the electron heavy-hole emission line.
The two-dimensional plots of Fig. 8 compare the spatial-temporal evolution of the out-of-plane spin polarization generated by the pump in the absence and presence of a SAW (left and right panels) under in-plane magnetic fields B_ext. Except for carrier diffusion shortly after the pump pulse (i.e., for times < 0.1 ns), no transport takes place in the absence of SAWs. δθ_K in this case evolves according to Eq. (7), a Larmor precession at the frequency ν_L damped by the effective spin relaxation time T_2*, where h is Planck's constant entering through the Larmor relation hν_L ∝ B_ext [23]. By fitting the data along the line x = 0 (dashed line) in Fig. 8a to Eq. (7), we obtained T_2* = 1.15 ns for non-precessing spins. As the direction of the optically excited spins is parallel to the effective magnetic field associated with the SO-interaction, DP spin dephasing is not expected in this case. The spin lifetime is mainly limited by the BAP scattering [4] induced by the high carrier density created by the pump pulse. Under precession around B_ext (Fig. 8b), T_2* reduces to 0.78 ns. The latter is attributed to the activation of DP dephasing for rotating spins moving with k_y = 0 (cf. Table 1).
In our undoped samples, T_2* is related both to the intrinsic spin lifetime τ_s and to the carrier recombination time, τ_r, by the expression 1/T_2* = 1/τ_s + 1/τ_r. We have estimated τ_r from the time dependence of the probe reflectance on the sample surface, which yields 1.84 ns in the absence of a SAW. The situation changes dramatically when SAWs are applied to the QW. In this case, the photo-excited electrons and holes are stored and transported at different phases of the SAW field (cf. upper inset of Fig. 7), thus reducing the overlap of their wave functions and increasing τ_r to values above 40 ns. The electrons maintain the spin polarization over transport distances of more than 15 μm, as illustrated in Fig. 8c [23,40]. This result is also consistent with a short hole spin relaxation time (in the tens of ps range) [98], which makes the polarization dynamics during acoustic transport entirely determined by the electron spins. Finally, the fact that T_2* increases under a SAW indicates that the contribution Ω_S from the SAW strain (cf. Table 1, last row) is negligible for the present experimental conditions.
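For the precessing-spin case, the extraction of T_2* amounts to fitting an exponentially damped cosine to the measured δθ_K(t) at fixed position and then removing the recombination contribution through the relation 1/T_2* = 1/τ_s + 1/τ_r. The following sketch illustrates the procedure on synthetic data; the functional form, noise level and parameter values are illustrative assumptions and do not reproduce the actual measurement or the exact form of Eq. (7).

```python
import numpy as np
from scipy.optimize import curve_fit

def kerr_trace(t, amp, nu_L, T2_star):
    """Exponentially damped Larmor precession model for delta-theta_K(t)."""
    return amp * np.cos(2 * np.pi * nu_L * t) * np.exp(-t / T2_star)

# Synthetic "measurement": 0.78 ns dephasing, 0.35 GHz precession (illustrative values)
t = np.linspace(0, 4e-9, 400)                      # pump-probe delay (s)
rng = np.random.default_rng(0)
data = kerr_trace(t, 1.0, 0.35e9, 0.78e-9) + 0.02 * rng.standard_normal(t.size)

popt, _ = curve_fit(kerr_trace, t, data, p0=(1.0, 0.3e9, 1e-9))
amp, nu_L, T2_star = popt

# Remove the recombination contribution: 1/T2* = 1/tau_s + 1/tau_r
tau_r = 1.84e-9                                    # recombination time from the reflectance decay (s)
tau_s = 1.0 / (1.0 / T2_star - 1.0 / tau_r)

print(f"T2* = {T2_star * 1e9:.2f} ns, intrinsic tau_s = {tau_s * 1e9:.2f} ns")
```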
High temperature transport in GaAs (110) microcavity structures
Both the efficiency of the optical processes for the generation and detection of spins and the spin lifetimes normally decrease at high temperatures. The reduction in optical efficiency can be partially compensated by embedding the QWs within optical microcavities. Spin scattering can be minimized by combining the special properties of (110) QWs with lateral confinement.
We argued in Section 2.1 that electron transport along 1D channels fixes the rotation axis of the SO-field. If it is further assumed that these fields depend only linearly on k, spin dephasing by means of the DP mechanism becomes totally suppressed [28,99,100]. The enhancement of the spin lifetimes of electrons in etched (In,Ga)As channels [on GaAs(001)] with decreasing channel width has been observed experimentally at T = 5 K by Holleitner et al. [29]. Here, we use a similar technique to transport spins in (110) QWs embedded in microcavities at temperatures above 100 K.
Instead of defining the channels by chemical etching, which might induce spin dephasing by electron scattering at the rough sidewalls (i.e., via the Elliot-Yafet spin dephasing mechanism [22]), our approach employs piezoelectrically defined acoustic transport channels. The experimental setup is depicted in Fig. 9a: the transport channel is defined by a thin metal layer containing a narrow slit (3-5 μm-wide) oriented along the [001] surface direction of a (110) QW structure grown on an (Al,Ga)As Bragg mirror stack (Fig. 9b). The whole structure is then coated by a piezoelectric ZnO/a-SiO 2 layer stack, which simultaneously acts as the upper Bragg mirror of an optical microcavity. A narrow (approx. 30 μm wide) SAW beam with wavelength λ SAW = 5.6 μm (corresponding to a frequency f SAW = 536 MHz) is then launched along the slit direction by a focusing IDT. The metal layer screens the piezoelectric potential generated by the SAW everywhere in the QW plane except underneath the slit. The latter then defines a narrow channel for the ambipolar acoustic transport of electrons and holes. The transport is blocked at the end of the slit, where the quenching of Φ SAW by the metal forces the recombination of the carriers.
The lower Bragg mirror and the cavity layers (including the 20 nm-thick GaAs QW) in Fig. 9b were grown by MBE. The 12 nm-thick Ti layer was evaporated and photolithographically patterned to form slits with nominal widths (L_W) of 3 and 4 μm. The upper ZnO/SiO_2 Bragg mirror was rf-sputtered on top of the patterned Ti layer. The vertical distance between the metallic layer and the QW is only 110 nm, thus enhancing the intensity of the stray fields created by the piezoelectric layers. The composition and thicknesses of the layers were optimized to increase the SAW field close to the QW structure. SAWs result from the confinement of the acoustic fields due to the lower acoustic velocity near the surface. This waveguiding effect is reduced if a GaAs substrate is overgrown with an Al_xGa_{1−x}As layer, since the average acoustic velocity near the surface increases with the Al content x. The latter has been minimized in Fig. 9b by reducing the average Al content (≤56%) of the Bragg mirror layers and by the insertion of 3λ/4 layers with low x in the lower Bragg mirror. For the same reason, the upper Bragg mirror consists of only two mirror layer pairs on top of a λ/2 ZnO layer, leading to a cavity quality factor (Q) of only about 100. A significantly enhanced QW PL has nevertheless been observed when the temperature-dependent cavity resonance wavelength matches the QW heavy-hole exciton wavelength at T = 105 K (not shown here). Figure 10 shows a calculated snapshot of the moving profile of the piezoelectric energy, −eΦ_SAW, at the depth of the QW in the structure depicted in Fig. 9. This calculation was carried out for L_W = 3 μm, λ_SAW = 5.6 μm, and an acoustic linear power density (P_SAW) of 200 W m⁻¹. Due to the type II-like modulation of the piezoelectric potential, electrons and holes (symbolized by red and blue dots, respectively) are transported spatially separated from each other. The orange lines mark the borders of the semitransparent Ti layer. The amplitude of −eΦ_SAW is also plotted in Fig. 9a as a dashed line. Based on this calculation, the acoustic transport takes place with the carriers confined within a 1-2 μm wide channel. Furthermore, the efficient screening of −eΦ_SAW at the end of the slit should stop the acoustic transport and induce recombination.
The optical detection of spin transport was carried out using the scheme displayed in Fig. 11a. A circularly polarized laser (denoted hν in the figure) focused onto the SAW channel generates carriers in the QW with out-of-plane polarized spins at varying distances x from the end of the channel. The two-dimensional intensity images of the total PL (i.e., the sum of the two circular polarization components, I_σ+ + I_σ−) for different x are depicted in Fig. 11b. At the generation point, the high concentration of photo-generated charge carriers partially screens Φ_SAW, thus reducing the transport efficiency. As a result, some of the carriers recombine radiatively, giving rise to a weak PL signal. Away from the generation point, the electrons and holes are efficiently captured and transported by the SAW field, thereby making the recombination (e.g., via trapping sites) negligible. In contrast, at the end of the channel, where Φ_SAW is screened by the Ti layer, considerable PL intensities are recorded and the electron spin polarization can be analyzed. By scanning x in small steps (approx. 2 μm) one can determine the electron spin polarization as a function of the transport length [ρ_s(x)]. Taking into account the well-defined SAW velocity v_SAW, the electron spin lifetime τ_s can be determined from the decay ρ_s(t) = ρ_s(t = 0)·exp(−x/(v_SAW τ_s)). The propagation delay t and distance x are related to each other by t = x/v_SAW. Figure 12 displays τ_s measured in samples with nominal channel widths of 3 and 4 μm under varying acoustic power levels. Considering the large range of applied acoustic power values, τ_s can be regarded as power independent. However, electron spin lifetimes measured in the narrower channel (τ_s ≈ 10-11 ns) are clearly larger than the ones deduced for the wider channel (τ_s ≈ 7-8 ns). This enhancement of electron spin lifetimes is expected, as the channel widths of the investigated samples are comparable to the SO precession length, which is approx. 4 μm for a 20 nm-thick GaAs QW.
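A minimal version of this evaluation is sketched below: the logarithm of ρ_s(x) is fitted linearly against the transport distance and converted into a lifetime with the SAW velocity. The polarization values are made-up placeholders, and the SAW velocity is simply taken as f_SAW·λ_SAW ≈ 3.0 km s⁻¹; only the procedure, not the numbers, reflects the experiment.

```python
import numpy as np

# Spin polarization recorded at the channel end for different generation positions x
# (placeholder values, not the measured data set)
x = np.array([4, 8, 12, 16, 20, 24]) * 1e-6           # transport distance (m)
rho_s = np.array([0.21, 0.18, 0.155, 0.135, 0.115, 0.10])

v_saw = 3.0e3                                          # f_SAW * lambda_SAW = 536 MHz * 5.6 um (m/s)

# rho_s(x) = rho_s(0) * exp(-x / (v_saw * tau_s))  ->  linear fit of ln(rho_s) versus x
slope, intercept = np.polyfit(x, np.log(rho_s), 1)
tau_s = -1.0 / (slope * v_saw)

print(f"tau_s = {tau_s * 1e9:.1f} ns over {x.max() * 1e6:.0f} um of transport")
```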
Figure 13 Spectroscopically resolved PL as a function of the bias voltage, V_b, applied to the SQW sample and the corresponding vertical electric field (E_z) estimated from the fitting of the observed Stark shift (white broken line). The inset shows a scheme of the n-i-p structure in which the SQW is embedded.
Suppression of the spin-orbit interaction in GaAs(111)B quantum wells
The special characteristics of the SO-fields in GaAs(111) QWs makes them very good candidates for spintronics. As mentioned in Section 2, an electric field applied across a (111) QW can suppress spin dephasing mechanisms associated with the SO-interaction. The studies of the spin dynamics in GaAs(111) QWs as a function of electric and magnetic fields, as well as temperature presented in this section provide experimental evidence for the electric suppression of spin dephasing.
Electric control of the spin lifetime
The experiments were carried out in QWs embedded in the intrinsic region of a n-i-p structure (cf. inset of Fig. 13). A bias voltage (V b ) applied between the top n-doped layer and the p-doped substrate generates the vertical electric field (E z ) necessary for SO-compensation. Two kinds of samples were studied: a GaAs single quantum well (SQW) with a thickness of 25 nm and a multiple quantum well (MQW) composed of 20 QWs, each 25 nm-thick, separated by (Al,Ga)As barriers. The position of the QWs between the n-doped layer and the p-doped substrate is the same in both samples. To confine the applied electric field along the z-direction, the n-doped layer and part of the top (Al,Ga)As spacer at the intrinsic region were chemically etched into mesa structures with a diameter of 300 μm.
We determined the field applied to the QWs by measuring the PL under different voltage biases (cf. Fig. 13 for the SQW sample; note that forward and reverse biases correspond to V_b > 0 and V_b < 0, respectively). The quantum confined Stark effect (QCSE) induced by V_b modifies the energy, line width, and intensity of the PL line. By comparing the energy shift induced by V_b under reverse bias with numerical simulations of the field distribution, we determined the relationship between V_b and E_z indicated in the upper and lower horizontal axes of the figure.
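As a rough illustration of the V_b-to-E_z conversion (the actual relationship was obtained from the measured Stark shift and numerical simulations of the field distribution), a fully depleted intrinsic region can be approximated by a parallel-plate expression. The built-in potential and intrinsic-layer thickness below are assumed values, chosen only so that the estimate lands near the compensation field quoted later in the text; they are not parameters of the real structure.

```python
def vertical_field(v_bias, v_built_in=1.5, d_intrinsic=2.4e-6):
    """Parallel-plate estimate of E_z (V/m) across a fully depleted n-i-p intrinsic region.

    v_bias      : applied bias (V); reverse bias is negative
    v_built_in  : built-in potential of the junction (V), assumed
    d_intrinsic : intrinsic-region thickness (m), assumed
    """
    return (v_built_in - v_bias) / d_intrinsic

# Example: a reverse bias of -2.1 V with the assumed geometry gives roughly 15 kV/cm
print(f"E_z ~ {vertical_field(-2.1) / 1e5:.1f} kV/cm")
```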
The n-i-p structures were designed to reach SO-compensation under a reverse bias given by Eq. (2). For low reverse bias (V_b ≥ −0.9 V), the PL intensity is relatively high (cf. Fig. 13) and the spin polarization ρ_s can be determined from the PL circular polarization using Eq. (10). The dependence of ρ_s on reverse bias obtained in the MQW sample using this approach is illustrated by the profiles in Fig. 14a, while the corresponding spin lifetimes are plotted as black dots in Fig. 14c. The spin lifetime increases with the amplitude of the reverse bias, from barely 1 ns for V_b = 0.3 V up to 60 ns for V_b = −0.9 V. Control experiments carried out in a p-i-n structure on GaAs(111)B, where the reverse electric field has the opposite orientation relative to the [111] axis, have shown the opposite behavior [53,57]. Since in both cases the bias leads to the spatial separation of the carriers along the growth direction, the bias dependence of ρ_s cannot be accounted for by the BAP mechanism.
For reverse voltages V b < −1 V, the overlap between the electron and hole wave functions in the QW as well as the PL reduce significantly, thereby hindering the detection of the spin dynamics from the PL polarization [57]. We overcome this limitation by carrying out experiments under pulsed reverse bias [54]. Here, laser and bias pulses with the same repetition frequency are synchronized with each other so that the laser pulse hits the sample at a time instant t = 0 shortly after the application of a reverse bias pulse of amplitude V b . The photoexcited carriers are then driven toward opposite interfaces of the QW, where they remain stored until the pulse is removed. The stored carriers quickly recombine when the bias pulse is removed giving rise to a short PL pulse. Figure 15 shows time-resolved PL traces of the MQW sample submitted to t p = 40 ns long bias pulses with a repetition period of 80 ns. For V b < −1 V, the QCSE prevents the recombination of the photoexcited carriers during the bias pulse. Electrons and holes remain then stored in the QWs during a time t p . Information about the stored carrier density and spin polarization is extracted at the end of the bias pulse, when a small forward bias (V b = 1 V) is applied to induce carrier recombination. As shown in Fig. 15, the amplitude of the PL pulses after the bias pulse increases for V b < −1 V, since for these biases the carrier lifetime exceeds the pulse width. The PL rise time is determined by the falling time of the bias pulses of 2 ns. The time-integrated PL after bias pulses with V b < −2 V corresponds to 60% of the PL emitted right after the laser pulse under a bias of 0 V (dark blue curve in Fig. 15). This means that the QWs can efficiently store a high density of carriers over long times. The reduction of the retrieved PL intensities for pulse voltages V b < −3.5 V is attributed to field-induced carrier extraction to the contacts during the storage time, as this bias range also coincides with the onset of the breakdown current of the structures.
The spin dynamics for large reverse biases (V_b < −1 V) can be obtained from the circular polarization of the PL pulses emitted at the end of the bias pulses; the corresponding ρ_s profiles for the MQW sample are shown in Fig. 14b. We have determined the out-of-plane spin lifetimes τ_z for each V_b by measuring ρ_s at the end of voltage pulses of different durations. The results for the MQW sample, indicated by the open squares in Fig. 14c, cover the range of large reverse bias and are thus complementary to those obtained for lower fields from Fig. 14a [black dots]. The spin lifetime increases with reverse bias until it reaches a maximum of about 100 ns for a bias V_b ≈ −2.1 V, which corresponds to a vertical electric field of about E_c = 15 kV cm⁻¹ (the compensation field). Beyond this value the spin lifetime decays. The observation of a maximum in Fig. 14 unambiguously establishes the BIA/SIA compensation as the mechanism for bias-induced spin lifetime enhancement in (111) QWs. Furthermore, the peak spin lifetimes exceeding 100 ns are among the longest values reported for undoped GaAs structures. Finally, it is important to note that the Rashba contribution in the n-i-p structures can be electrically increased to values up to approximately 1.5-2 times the Dresselhaus contribution. This range is limited by the onset of field-induced carrier extraction from the QW for V_b < −3.6 V.
Spin lifetime under an external magnetic field
The reverse field is also expected to increase the lifetime τ_R of spins precessing around an externally applied in-plane magnetic field B_ext ∥ y. In order to avoid perturbations related to the non-uniformity of the vertical field distribution across the QWs in MQW samples, the investigations under B_ext were performed on the SQW sample. Figure 16a shows the two circularly polarized PL components, I_σ+(t) and I_σ−(t), under V_b = −1.2 V and B_ext = 60 mT for this sample. Quantum beats accompanying the PL intensity decay are clearly observed, which are attributed to the Larmor precession of the electron spins around the external magnetic field [101]. Figure 16b shows the spin polarization, ρ_s, obtained from Eq. (10). Due to the fact that the temporal width of the laser pulse (600 ps) is comparable to the spin precession period (3 ns), the initial spin polarization is smaller than the theoretically expected value of approx. 0.25. The g-factor obtained from a fit to an exponentially decaying cosine function (pink broken line) is |g| ≈ 0.42, in good agreement with the expected value for such a wide QW. As the bias voltage increases, |g| shifts toward smaller values at a rate of ≈ 0.013 V⁻¹.
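The quoted g-factor follows directly from the Larmor relation hν_L = |g|μ_B B_ext. A quick consistency check with the rounded numbers given in the text (a precession period of about 3 ns at B_ext = 60 mT) is shown below.

```python
from scipy.constants import h, physical_constants

mu_B = physical_constants["Bohr magneton"][0]   # J/T

B_ext = 60e-3        # in-plane magnetic field (T)
T_L = 3e-9           # Larmor precession period (s), approximate value quoted in the text
nu_L = 1.0 / T_L     # Larmor frequency (Hz)

g = h * nu_L / (mu_B * B_ext)
print(f"|g| ~ {g:.2f}")   # ~0.40, close to the fitted |g| ~ 0.42 given the rounded period
```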
Figure 16c compares the electric field dependence of the spin lifetime for out-of-plane (black squares) and precessing (red circles and green triangles) spins in the SQW. The electric field dependence of τ_z and τ_R can be divided into two regions: for electric fields away from E_c, τ_R is longer than τ_z. This behavior is due to the fact that, at low temperatures (that is, for small k-vectors), Ω_SO,z(k) ≪ Ω_SO,x(k) ≈ Ω_SO,y(k) (cf. Section 2). Under this condition, spins along one of the in-plane directions (e.g., x) only feel the dephasing SO-field along the orthogonal in-plane direction (i.e., Ω_SO,y). Spins along z, in contrast, will see two dephasing fields (associated with Ω_SO,x and Ω_SO,y) and decay faster. Therefore, τ_x ≈ τ_y = 2τ_z and, according to Eq. (6), τ_R ≈ 4τ_z/3.
The situation reverses close to E c : here, the in-plane linear Dresselhaus contribution can be compensated by the Rashba term (cf. Table 1), but the z-component Ω SO,z (cf. Eqs. 3 and 4) not, thus leading to τ R < τ z . A reduced lifetime for precessing spins (relative to z-oriented ones) has been observed over a wide range of magnetic fields (cf. inset of Fig. 16c, with data measured for V b = −1 V). Finally, at the intersection of the two regions all three components of Ω SO are equal, and the spin lifetime becomes isotropic.
The measured spin lifetimes τ z and τ R can be compared to the expected values from the model explained in Section 2. According to the model, the ratio between the Dresselhaus and Rashba constants, γ/r 41 , and the momentum scattering time τ * p , determine the compensation field E c (cf. Eq. 2) and the spin lifetime at compensation (cf. Eq. (5)). These two parameters were varied in Eq. (5) to fit the measured results. The broken lines in Fig. 16c show the result of our calculations for γ/r 41 = 2.86 VÅ, and t * p = 5 ps. The model predicts a much stronger field dependence than the measured one. The reason for these discrepancies will be further discussed below. The qualitative features, however, are very well reproduced, including the reduction of the spin lifetime around E c for precessing spins as well as opposite behavior away from the compensation field.
Spin lifetime dependence on temperature
According to the theoretical model, the Rashba term compensates the Dresselhaus one when the condition expressed by Eq. (11) is fulfilled. At very low temperatures, the k² term, which comes from the nonlinear terms of the Dresselhaus contribution (cf. Eq. (3)), can be neglected. In this case, compensation takes place at a vertical electric field fully determined by γ and r_41 (cf. Eq. (2)). When the temperature increases, however, a larger range of electron k-space states above the conduction band minimum is occupied, and the k² term in Eq. (11) cannot be neglected any more. As a consequence, SO-compensation does not occur simultaneously for all wave vectors at a fixed external electric field. The inclusion of higher-order terms in k leads to a temperature dependence of the maximum spin lifetime, as well as of E_c. Figure 17 shows the dependence of τ_z on E_z and temperature measured in the SQW sample. As expected, τ_z around E_c decreases from 50 ns to barely 10 ns as T increases from 33 to 60 K. The inset shows the values of E_c obtained for the measured temperatures. The decrease of E_c with T is a clear confirmation that, in this temperature range, the cubic terms in the Dresselhaus contribution cannot be neglected. The broken lines in Fig. 17 show the results of our model for a ratio γ/r_41 = 2.86 VÅ, assuming the same momentum scattering time (τ*_p = 5 ps) for all temperatures. As in the case of the magnetic field dependence, the simulations reproduce reasonably well the decrease of the maximum spin lifetime and the shift of E_c with increasing temperature. They predict, however, a much stronger dependence of the spin lifetime on the electric field amplitude and on temperature. This discrepancy is due to the fact that we assume the same value of τ*_p for all electrons, independently of their energy and temperature.
Finally, we briefly discuss additional possible mechanisms for the different field dependences of the calculated and measured profiles. One obvious mechanism is the non-uniformity of the field distribution. This mechanism can be discarded for the SQW sample: within the range of reverse bias around the compensation field, the PL shift is consistent with an essentially uniform field across the single QW (cf. Fig. 13). An alternative mechanism to account for the discrepancy between calculation and measurements is the SO-contribution induced by strain fields (cf. Section 2.3). The biaxial strain due to the difference between the elastic constants of GaAs and (Al,Ga)As (which was estimated to be below 3×10⁻⁶) is expected to have negligible effects on the spins. In addition, this type of strain induces a SO-contribution in GaAs(111) QWs with the same symmetry as the Dresselhaus term in Table 1, and can thus be compensated by the applied bias. Another possibility is uniaxial strain fields induced, for instance, by sample mounting.
7 Conclusions and future perspectives The special symmetry of the SO-interaction in III-V semiconductor QWs grown along the non-conventional crystallographic directions [110] and [111] allows the detection of electron spins with long decoherence times. In (110) QWs, the combination of such SO-symmetries with the special properties of SAWs leads to acoustic transport of electron spins with out-of-plane polarization over distances of several micrometers at low temperatures. Acoustic spin transport also reduces BAP spin dephasing via the spatial separation of electrons and holes by the type II modulation potential. Furthermore, by embedding these QWs within optical microcavities compatible with SAW generation, we have demonstrated acoustic spin transport within narrow piezoelectric channels over distances of several tens of μm at temperatures above 100 K. Further developments of these structures should enable spin transport also at higher temperatures.
In the case of GaAs(111) QWs, the application of vertical electric fields enables the efficient suppression of the SO-interaction at low temperatures. As a result, spin lifetimes of up to 100 ns have been reached. These lifetimes arise from the compensation of the terms of the SO-interaction that are linear in the electron momentum k. At higher temperatures, higher-order terms in k become important and prevent the compensation effect that leads to such long lifetimes. According to the results obtained from their (110) counterparts, acoustic spin transport within narrow channels appears to be a promising candidate to overcome this limitation: the well-defined velocity and direction of the acoustic wave allow the spatial confinement of spins and their transport along special crystallographic directions, for which the higher-order terms of the SO-fields in (111) QWs are minimized. Figure 18 shows a device proposal combining SAWs with vertical electric fields. This type of device was designed based on our experience with (110) QW structures for acoustic transport (see, for instance, Fig. 9). The QW is embedded within a piezoelectric ZnO/SiO_2 microcavity for operation at relatively high temperatures. The gate placed along the SAW path allows the application of vertical electric fields for the suppression of the SO-interaction in the QW. In addition, the narrow channel defined by the gate enhances the piezoelectric potential Φ_SAW underneath it, thereby laterally confining the electrons and holes moving in the SAW field.

Figure 18 (a) Top view and (b) cross-section along the transport path of a QW inserted within a hybrid microcavity for acoustic spin transport. The (111) QW is sandwiched between two distributed Bragg reflectors (Bragg mirrors DBR_1 and DBR_2). The stopper and the gate for the application of the vertical field E_z consist of a semitransparent metal layer. (c) Depth profile of the piezoelectric potential (Φ_SAW) calculated underneath the gate (red curve, cf. vertical dashed line in (b)) and on the free surface (green curve). Note the larger amplitudes of Φ_SAW underneath the gate [102]. These profiles were calculated for a Rayleigh SAW with λ_SAW = 5.6 μm and a linear power density of 38.6 W m⁻¹ propagating along the [112] surface direction of a (111) structure. In the diagram, the microcavity is grown on a 1 μm-thick GaAs spacer deposited on the doped GaAs substrate.

This confinement effect,
which is discussed in detail in Ref. [102] and illustrated by the calculated depth profiles for Φ_SAW in Fig. 18c, arises from the fact that Φ_SAW near the interface between the ZnO/SiO_2 stack and the GaAs structure increases when the surface of the sample is coated with a conductive layer. The metal structure placed underneath the ZnO/SiO_2 DBR at the end of the path (named "stopper" in Fig. 18) quenches the SAW piezoelectric field at the end of the transport path, thereby forcing the carrier recombination required for the optical retrieval of the spin information. The realization of a device with such characteristics is expected to be one of our main activities during the next years.

| 14,200.2 | 2014-09-01T00:00:00.000 | ["Physics"] |
Direct Simulation Monte Carlo Method for Acoustic Agglomeration under Standing Wave Condition
Acoustic agglomeration proves promising for preconditioning fine particles (i.e., PM2.5) as it significantly improves the efficiency of conventional particulate removal devices. However, a good understanding of the mechanisms underlying the acoustic agglomeration in the standing wave is largely lacking. In this study, a model that accounts for all of the important particle interactions, e.g., orthokinetic interaction, gravity sedimentation, Brownian diffusion, mutual radiation pressure effect and acoustic wake effect, is developed to investigate the acoustic agglomeration dynamics of PM2.5 in the standing wave based on the framework of direct simulation Monte Carlo (DSMC) method. The results show that the combination of orthokinetic interaction and gravity sedimentation dominates the acoustic agglomeration process. Compared with Brownian diffusion and the mutual radiation pressure effect, the acoustic wake plays a relatively more important role in governing the particle agglomeration. The phenomenon of particle agglomeration becomes more pronounced when the acoustic frequency and intensity are increased. The model is shown to be capable of accurately predicting the dynamic acoustic agglomeration process in terms of the detailed evolution of particle size and spatial distribution, which in turn allows for the visualization of important features such as “orthokinetic drift”. The prediction results are in good agreement with the experimental data.
INTRODUCTION
Particles with an aerodynamic diameter not greater than 2.5 µm are referred to as fine particles, or PM 2.5. These particles are generated mainly from coal-fired power plants, industrial processes and vehicles (Ehrlich et al., 2007; Li et al., 2013; Pui et al., 2014). Due to the small particle size, it is extremely difficult for conventional devices, e.g., bag filters, electrostatic precipitators (ESPs), cyclones and wet scrubbers, to effectively remove PM 2.5 from the flue gas. Consequently, a large amount of PM 2.5 is emitted into the atmosphere, causing a broad range of adverse effects on human health. It has been reported that the collection efficiency of PM 2.5 using conventional devices can be improved by means of preconditioning technologies such as acoustic agglomeration (Hoffmann, 2000; Liu et al., 2009; Fan et al., 2013), electric agglomeration (Chang et al., 2015) and heterogeneous condensation (Fan et al., 2009; Yang et al., 2010). Amongst these technologies, acoustic agglomeration, which applies an intense acoustic field to manipulate the motion, collision and hence agglomeration of PM 2.5, has been recognized as a promising method for the removal of PM 2.5. Under the acoustic field, the fine particles coagulate into large agglomerates which can then be collected using the conventional devices.
Experiments have been carried out in the past decades to determine the factors affecting the agglomeration behavior of PM 2.5 under the acoustic field (Volk et al., 1976;Rajendran et al., 1979;Tiwary et al., 1984;Hoffmann et al., 1993;Kashkoush and Busnaina, 1993;Sharifi et al., 1994;Capéran et al., 1995;Manoucheri and Ezekoye, 1996;Gallego-Juárez et al., 1999;Spengler and Jekel, 2000;De Sarabia et al., 2003;Komarov et al., 2004;Liu et al., 2009;Liu et al., 2011;Wang et al., 2011;Yan et al., 2015).The experimental results indicated that the removal efficiency using acoustic agglomeration depends on the acoustic frequency and intensity, particle size and concentration, and residence time.However, a good understanding of the mechanisms underlying the acoustic agglomeration is still largely lacking.Attempts through theoretical analysis have also been directed towards the explanation of the occurrence of the acoustic agglomeration phenomenon (Shaw and Tu, 1979;Chou and Shaw, 1981;Chou et al., 1982).Different mechanisms of acoustic agglomeration, e.g., orthokinetic and hydrodynamic interactions and acoustically induced turbulent deposition, have been reported.The orthokinetic interaction is described as the relative motion of particles induced by the viscous entrainment of the acoustic wave.The acoustic particle entrainment associated with orthokinetic interaction has been reported in many studies (e.g., Hoffmann and Koopmann, 1996;Hoffmann and Koopmann, 1997;González et al., 2000;Cleckler et al., 2012).The hydrodynamic interaction consists of the mutual radiation pressure effect that represents Bernoulli's hydrodynamic principle and the acoustic wake effect caused by asymmetric flow fields around the particles under the Oseen flow condition (Hoffmann and Koopmann, 1996).The particle interactions due to the mutual radiation pressure effect and the acoustic wake effect have been observed in both experiments and model simulations (Hoffmann and Koopmann, 1996;Hoffmann and Koopmann, 1997;González et al., 2001;González et al., 2002;González et al., 2003).Generally, the acoustic turbulence occurs at an acoustic intensity higher than 160 dB (Chou et al., 1982;Tiwary et al., 1984;Chen et al., 2008).Therefore, for an acoustic intensity less than 160 dB, the acoustic particle interactions are mainly caused by the orthokinetic interaction, the mutual radiation pressure effect and the acoustic wake effect.
As the computer technologies advance, investigations of acoustic agglomeration using numerical models become increasingly popular.Three numerical simulation methods, namely sectional method (Ezekoye and Wibowo, 1999;Zhang et al., 2012), method of moment (Zhang et al., 2011) and direct simulation Monte Carlo (DSMC) method (Funcke and Frohn, 1995;Sheng and Shen, 2006;Sheng and Shen, 2007) have been generally employed in the literature for predicting the particle agglomeration.However, the DSMC method has shown great advantages over the other two methods (Zhang and Fan, 2012;Wei, 2013).The sectional method and the method of moment solve mathematical equations for particle agglomeration through an appropriate discretization scheme or by quadrature.The numerical implementation of these methods is complicated, causing difficulties in programing.Moreover, discrete errors using these methods are generally large.The DSMC method describes directly the dynamic evolution of particle position, size and number in a dispersed system through an amount of random samples from the system.The DSMC method itself has discrete nature, making it easier to program and better to reflect the physical process compared with the former methods.Furthermore, in the DSMC method, the real particles are replaced by the sample particles, leading to a much less number of particles to be simulated and hence a substantial reduction in the computational time.Previous studies using the DSMC method for acoustic agglomeration have shed some insight into the role of orthokinetic interaction in the standing wave (Funcke and Frohn, 1995) and the roles of orthokinetic interaction and mutual radiation pressure effect in the travelling wave (Sheng and Shen, 2006;Sheng and Shen, 2007).However, questions remain as to whether the acoustic wake effect should be included or not.As the standing wave is much more efficient than the travelling wave (Rajendran et al., 1979;Fan, 2008), it is necessary to develop an adaptable method by which our understanding of acoustic agglomeration, particularly in the standing wave, can be further improved.Sheng andShen (2006, 2007) combined the DSMC model with the general dynamic equation for particle coagulation (Friedlander, 1997).Unfortunately, their method was unable to track the particle trajectories and accordingly failed to capture the evolution of the spatial distribution of particles.Recently, the DSMC method has been adopted by our group to investigate the particle collision rate due to the orthokinetic interaction in the standing wave (Fan et al., 2013).The trajectories of sample particles have been tracked and the detailed dynamic process of acoustic collision has been demonstrated.
In the present work, the DSMC method is extended to study the acoustic agglomeration dynamics with the inclusion of the mutual radiation pressure effect and the acoustic wake effect.Benchmarked experimental data is used to validate the model developed in this study.The model is then applied to further investigate the phenomenon of acoustic agglomeration under the standing wave conditions.Specifically, the relevant importance of different mechanisms in governing the process of acoustic agglomeration has been discussed.Moreover, the effects of key operating conditions (i.e., acoustic frequency, acoustic intensity and initial particle size distribution) on the acoustic agglomeration have been investigated.The objective is to achieve a fundamental understanding of the mechanisms and physics underlying the phenomenon of acoustic agglomeration.
MODEL AND METHOD
The acoustic agglomeration occurring in a chamber is considered.The standing wave is generated by the superposition of a sinusoidal incident wave propagating along the horizontal direction and its own reflected wave.For clarity, let's take the direction of the wave motion as x, the gravitational direction as z, and the direction perpendicular to x and z as y.Prior to the application of the acoustic field, the particles are carried along by the gas flow in the y direction.
Equations of Wave Motion
The equation of wave motion for the standing wave in a non-rotational and inviscid gas can be derived as (Bruneau, 2006) u_gx(x, t) = u_a sin(kx) sin(2πft) (1), where u_gx(x,t) is the oscillating velocity of the gas at position x and time t; u_a is the velocity amplitude; k is the wave number; and f is the frequency. The viscosity and rotation of the flow field around a particle can be neglected if the relations d_p/λ << 1 (2) and Re_p << 1 (3) are satisfied (Cleckler et al., 2012), where d_p is the particle diameter; λ is the wavelength; Re_p is the particle Reynolds number; ρ_g is the gas density; u_g and u_p are the gas and particle velocities, respectively; µ_g is the dynamic viscosity of the gas; and c is the speed of sound. While it can be easily verified that relations (2) and (3) are well satisfied, the particle velocity needed to compute Re_p is not directly available. However, Re_p can be estimated using Re_p ≈ ρ_g|u_gx − u_px|d_p/µ_g ≤ ρ_g u_a(1 − η)d_p/µ_g, where η is the particle entrainment coefficient, expressed as a function of the acoustic frequency and the particle relaxation time (Hoffmann and Koopmann, 1996; González et al., 2000). For the PM 2.5 subjected to the acoustic field considered in this work, Re_p ≤ 0.13 is obtained. Therefore, Eq. (1) can be used as an approximate solution to describe the gas velocity induced by the acoustic wave.
The acoustic intensity, described by the sound pressure level L (dB), is often used to characterize the acoustic field. It can be expressed as a function of the gas velocity amplitude u_a, where P_r is the reference sound pressure, P_r = 2 × 10⁻⁵ Pa.
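A common way to make this relation explicit is through the plane-wave impedance p = ρ_g c u, so that the rms pressure P_r·10^(L/20) maps onto a velocity amplitude u_a = √2·P_r·10^(L/20)/(ρ_g c). The sketch below uses this standard relation, which may differ in prefactors from the exact expression used by the authors, together with Eq. (1) for the standing-wave gas velocity.

```python
import numpy as np

P_REF = 2e-5          # reference sound pressure (Pa)
RHO_G = 1.2           # gas density (kg/m^3), air at room temperature (assumed)
C_SOUND = 343.0       # speed of sound (m/s), assumed

def velocity_amplitude(spl_db):
    """Velocity amplitude u_a from the sound pressure level, via the plane-wave relation p = rho*c*u."""
    p_rms = P_REF * 10.0 ** (spl_db / 20.0)
    return np.sqrt(2.0) * p_rms / (RHO_G * C_SOUND)

def gas_velocity(x, t, spl_db, f):
    """Oscillating gas velocity of the standing wave, Eq. (1): u_a*sin(kx)*sin(2*pi*f*t)."""
    u_a = velocity_amplitude(spl_db)
    k = 2.0 * np.pi * f / C_SOUND          # wave number
    return u_a * np.sin(k * x) * np.sin(2.0 * np.pi * f * t)

# Example: a 155 dB, 5000 Hz standing wave
print(f"u_a = {velocity_amplitude(155):.2f} m/s at 155 dB")
```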
Equations of Particle Motion
Forces acting on a particle in the acoustic field include the gravitational force, the buoyancy force, the drag force, and the unsteady forces (e.g., the Basset force, the virtual mass force and the pressure gradient force). Since previous work has shown that the unsteady forces are negligible compared with the drag force (Cleckler et al., 2012), the equations of particle motion can be written as Eqs. (6), where m_p is the particle mass; ρ_p is the particle density; g is the acceleration of gravity; and the subscripts x, y and z represent the components along the x, y and z directions, respectively. C_c is the Cunningham slip correction coefficient, a function of the Knudsen number Kn, which is defined as Kn = 2λ_g/d_p with λ_g being the mean free path of gas molecules.
Note that the first terms on the RHS of Eqs. ( 6) are the components of the drag force in the Stokes flow regime (Re p << 1).For increasing Re p , the Stokes drag force (F s ) can be extended to Oseen drag force (F o ) to include the first-order inertial effect in the form of F o = F s (1 + 3Re p /16) (Dianov et al., 1968;González et al., 2001).
The particle velocity can be solved from Eqs. (6). Subsequently, the particle displacement at time t + Δt is calculated from the velocity according to Eq. (8). Δt is the time step, and it is often substantially smaller than both the sound period 1/f and the particle relaxation time τ = ρ_p d_p² C_c/(18µ_g), in order to achieve a high computational accuracy.
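A minimal explicit time-stepping of Eqs. (6) and (8) for one velocity component is sketched below. The Cunningham correction is written in its usual empirical form, C_c = 1 + Kn(1.257 + 0.4·exp(−1.1/Kn)), which is assumed here because the excerpt does not reproduce the coefficients; the particle density and gas properties are placeholder values.

```python
import numpy as np

MU_G = 1.8e-5        # gas dynamic viscosity (Pa s), air (assumed)
RHO_P = 2000.0       # particle density (kg/m^3), assumed
LAMBDA_G = 6.8e-8    # mean free path of gas molecules (m), assumed

def cunningham(d_p):
    """Cunningham slip correction in its standard empirical form (coefficients assumed)."""
    kn = 2.0 * LAMBDA_G / d_p
    return 1.0 + kn * (1.257 + 0.4 * np.exp(-1.1 / kn))

def step_particle(x_p, u_p, d_p, u_g, dt):
    """One explicit Euler step of Eqs. (6) and (8) for a single velocity component.

    x_p, u_p : particle position and velocity component
    u_g      : local gas velocity component (Eq. (1) for x; 0 for y and z)
    For the z (gravity) component, add -(1 - rho_g/RHO_P)*9.81 to the acceleration.
    dt must be much smaller than 1/f and the relaxation time tau for accuracy.
    """
    m_p = RHO_P * np.pi * d_p ** 3 / 6.0
    drag = 3.0 * np.pi * MU_G * d_p * (u_g - u_p) / cunningham(d_p)   # Stokes drag with slip
    u_p_new = u_p + (drag / m_p) * dt
    x_p_new = x_p + u_p_new * dt
    return x_p_new, u_p_new
```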
DSMC Method
The DSMC method is employed to predict the particle collision and the subsequent agglomeration behavior. The computational domain is divided into a number of small cells. It is assumed that the collision is binary and occurs only between particles residing in the same cell. The probability of particle i colliding with other particles in the time interval Δt' is expressed in terms of the pairwise probabilities, where P_i is the collision probability between sample particle i and all other particles; P_ij is the collision probability between particles i and j; N is the number of the sample particles within the cell of particle i; β_ij is the collision rate between particles i and j; w_j is the number weight of the sample particle j (i.e., the number of real particles the sample particle j represents); and V_ci is the volume of the cell in which particle i resides.
The collision rate (also called the agglomeration kernel) β_ij is used to represent the effect of particle interactions on the acoustic agglomeration. Besides the acoustically induced particle interactions, Brownian diffusion, resulting from the particles' random motions, and gravity sedimentation, due to the difference in the particle settling velocities, can in principle induce particle agglomeration. Therefore, all of the important particle interaction mechanisms, including the Brownian diffusion, the gravity sedimentation, the orthokinetic interaction, the mutual radiation pressure effect and the acoustic wake effect, are considered in the present work. The overall collision rate β_ij is obtained from the mechanisms mentioned above using a simple additive method, i.e., as the sum of the collision rates due to the Brownian diffusion, the gravity sedimentation, the orthokinetic interaction, the mutual radiation pressure effect and the acoustic wake effect, respectively.
The combined effect of the orthokinetic interaction and the gravity sedimentation can be obtained from the relative motion of particles i and j based on the numerical solutions of Eqs. (6), where u_pij is the relative velocity between particles i and j. When submicron- or micron-sized particles are considered, agglomeration caused by Brownian diffusion may be significant. The collision rate between particles i and j due to Brownian diffusion is given by the classical Brownian coagulation kernel (Sheng and Shen, 2006), in which k_B is the Boltzmann constant and T is the temperature. The collision rates resulting from the mutual radiation pressure effect and the acoustic wake effect in the travelling wave have been derived by Song (1990) and Dong et al. (2006). Here, the collision rate is extended to suit the standing wave condition using the local velocity amplitude, where g_ij(r) is the hydrodynamic interaction function based on the mutual radiation pressure effect, which is a function of the separation distance between the particles, r. g_ij(r) depends also on the longitudinal wave strength parameter, the viscous wave strength parameter, the longitudinal wavenumber, and the viscous wavenumber. For details of g_ij(r) see Song (1990). Moreover, l is the slip coefficient. To include all collision events, the time interval Δt' must be sufficiently small to ensure P_i < 1 for each sample particle; according to Eq. (11), this sets an upper bound on Δt'. The modified Nanbu method (Tsuji et al., 1998) is used to judge the occurrence of a collision between a pair of sample particles. A random number R with a uniform distribution in the interval [0, 1) is generated. Subsequently, a candidate collision partner of sample particle i (i.e., the sample particle j) is selected from the sample particles in the same cell using int[R × N], the integer part of R × N.
The sample particle i is then considered to collide with the selected sample particle j during the time interval Δt' if R satisfies the Nanbu acceptance criterion (i.e., if R falls within P_ij of the upper edge of the interval associated with j). It is assumed that all of the collisions lead to agglomeration. To reflect the consequence of the agglomeration event, the number weight, the volume and the velocity of the colliding sample particles are adjusted (Zhao et al., 2005).
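A compact sketch of one collision sweep is given below, combining the pairwise probability (written here in the standard form P_ij = β_ij·w_j·Δt'/V_ci, consistent with the quantities defined above) with the modified Nanbu partner selection and acceptance. The acceptance inequality in the code is the standard form of the Nanbu criterion and is stated as an assumption, since the original equations are not reproduced in the excerpt; the kernel β is passed in as a callable so that any of the mechanisms described above can be plugged in.

```python
import numpy as np

def dsmc_collision_step(particles, beta, w, v_cell, dt_coll, rng):
    """One DSMC collision sweep over the sample particles of a single cell.

    particles : list of particle property dicts (diameter, velocity, ...)
    beta(i,j) : overall collision kernel between sample particles i and j (m^3/s)
    w         : number weights of the sample particles
    v_cell    : cell volume (m^3)
    dt_coll   : collision time interval dt' (s); must keep P_i < 1
    Returns a list of index pairs (i, j) that collide during dt_coll.
    """
    n = len(particles)
    pairs = []
    for i in range(n):
        r = rng.random()
        j = int(r * n)                               # modified Nanbu partner selection
        if j == i:
            continue                                 # self-pairing produces no collision
        p_ij = beta(i, j) * w[j] * dt_coll / v_cell  # pairwise collision probability
        # assumed standard Nanbu acceptance: collide if R lies within p_ij of the top of bin j
        if r > (j + 1) / n - p_ij:
            pairs.append((i, j))
    return pairs
```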
The velocity of the formed agglomerate, u_agg, is given by the conservation of momentum, u_agg = (m_i⁰ u_i⁰ + m_j⁰ u_j⁰)/(m_i⁰ + m_j⁰), where the superscript 0 indicates the value just before the collision.
The post-collisional number weight, volume, and velocity of the colliding sample particles are updated according to the weighted-particle scheme of Zhao et al. (2005) and Zhang and Fan (2012), Eqs. (24)-(26), where the superscript "new" indicates the post-collisional value.
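One way to implement this bookkeeping is the differential-weight update sketched below: the agglomerate velocity follows from momentum conservation, the smaller-weight sample becomes the agglomerate, and the larger-weight sample keeps the leftover weight. This is an assumed variant in the spirit of Zhao et al. (2005), not a transcription of Eqs. (24)-(26).

```python
def agglomerate(p_i, p_j):
    """Merge two colliding sample particles (dicts with keys 'w', 'vol', 'mass', 'u').

    'mass' and 'vol' refer to a single real particle represented by the sample.
    Assumed differential-weight variant: the smaller-weight sample turns into the
    agglomerate, the larger-weight sample keeps the remaining weight unchanged.
    (The degenerate case of exactly equal weights would need special handling.)
    """
    # agglomerate velocity from momentum conservation: u_agg = (m_i*u_i + m_j*u_j)/(m_i + m_j)
    m_tot = p_i["mass"] + p_j["mass"]
    u_agg = [(p_i["mass"] * ui + p_j["mass"] * uj) / m_tot
             for ui, uj in zip(p_i["u"], p_j["u"])]

    small, large = (p_i, p_j) if p_i["w"] <= p_j["w"] else (p_j, p_i)
    w_small = small["w"]

    # the smaller-weight sample becomes the agglomerate
    small["vol"] = p_i["vol"] + p_j["vol"]
    small["mass"] = m_tot
    small["u"] = u_agg

    # the larger-weight sample keeps its properties but loses the paired weight
    large["w"] = large["w"] - w_small
    return p_i, p_j
```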
Simulation Procedure
A three-dimensional domain with a volume of λ × 10 mm × 10 mm is considered. The domain is evenly divided into 100 cells along the x direction. Initially, the sample particles with a number weight w_0 are uniformly distributed in the domain. The initial average velocity components of the sample particles in the x, y and z directions are 0, 0.5 m s⁻¹ and 0, respectively, and each velocity component has a random fluctuation in the range from −0.02 m s⁻¹ to 0.02 m s⁻¹. Based on Eqs. (9), (10) and (20), the time step Δt for solving the particle motion and the time interval Δt' for calculating the particle collision can be estimated. The computational parameters used in the simulations are listed in Table 1.
Steps of the numerical simulation are detailed as follows:
(1) The gas flow field and the particle distribution are initialized. Sample particles are generated according to the initial particle size distribution. Meanwhile, the number weight, the velocity and the position of the sample particles are specified.
(2) New velocities and positions of all the sample particles after the time step Δt are calculated using the equations of particle motion (i.e., Eqs. (6) and (8)).
(3) The locations of the particles are updated. For particles not in the simulation domain, the periodic boundary condition is applied to relocate the particles.
(4) Steps (2) and (3) are repeated for Δt/Δt' times.
(5) The collision pairs are searched using the DSMC method. If two particles collide with each other, the agglomeration events are handled by changing the number weight, volume, and velocity according to Eqs. (24)-(26).
(6) Steps (2)-(5) are repeated until the time reaches a specified value.
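These steps map onto a short driver loop. The sketch below reuses the helper functions from the previous sketches (step_particle, dsmc_collision_step, agglomerate) for a single cell; the gas velocity is supplied as a callable (e.g., a wrapper around the standing-wave expression of Eq. (1)), and the number of motion sub-steps per collision interval is written here as Δt'/Δt on the assumption that the collision interval is the longer of the two. Cell bookkeeping, the full size distribution and the parameter values of Table 1 are omitted.

```python
DOMAIN_LENGTH = 0.0686   # one acoustic wavelength at 5000 Hz (m), as in the simulation domain

def run_simulation(particles, t_end, dt, dt_coll, cell_volume, beta, u_gas, rng):
    """Schematic single-cell DSMC driver following steps (1)-(6) of the text."""
    n_move = max(1, int(round(dt_coll / dt)))       # motion sub-steps per collision interval
    t = 0.0
    while t < t_end:
        for _ in range(n_move):                     # steps (2)-(3), repeated as in step (4)
            for p in particles:
                p["x"], p["u"][0] = step_particle(p["x"], p["u"][0], p["d"], u_gas(p["x"], t), dt)
                p["x"] %= DOMAIN_LENGTH             # periodic boundary along x
            t += dt
        w = [p["w"] for p in particles]             # step (5): collision search in the cell
        pairs = dsmc_collision_step(particles, beta, w, cell_volume, dt_coll, rng)
        for i, j in pairs:                          # agglomeration events, cf. the update sketch
            agglomerate(particles[i], particles[j])
    return particles

# Example wiring of the gas velocity callable, using the standing-wave sketch given earlier:
# u_gas = lambda x, t: gas_velocity(x, t, 155, 5000)
```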
The numerical simulations are performed on a computer equipped with 4 CPUs (Intel i5) and RAM of 8192 MB.It takes 4-8 days to run a simulation case, depending on the number of computational particles, which ranges from 7650 to 30600.
Model Validation
The experimental data on acoustic agglomeration of coalfired PM 2.5 by Zhao (2007) is used to validate the model.For consistency, the acoustic wave used in the simulation is kept exactly the same as that used in the experiment (i.e., a standing wave with f = 1000 Hz and L = 158.5 dB).The prediction results with the Stokes drag force as well as the Oseen drag force and the experimental data of the particle size distribution at t = 3.2 s are shown in Fig. 1.It can be seen that the difference between the particle size distributions after acoustic agglomeration predicted with the two drag forces is negligibly small, because Re p is not big enough to produce obvious differences in the drag forces and in the consequent agglomeration rates.It can be also seen that the prediction results agree reasonably with the experimental data.Both the prediction results and the experimental data show that the particle number concentration decreases under the standing wave.The area under the particle size distribution curve obtained in the experiments is slightly smaller than that in the simulation, implying particles agglomerate to a greater extent in the experiments.This can be understood by the fact that the particles are assumed to be spheres in the simulation.In reality, the aggregates generated during the acoustic agglomeration of solid particles cannot be spherical.Instead, chain-like structures (Hoffmann and Koopmann, 1996;Komarov et al., 2004;Liu et al., 2009) are often observed.Compared with the equivalent spherical particle, the chain-like aggregate can be entrained more easily by the acoustic wave (Yang and Fan, 2015).Accordingly, the volume the aggregate sweeps during its oscillating motion is larger.Therefore, the particle interactions involving the aggregates could be underestimated if the aggregates are considered spherical.
Comparison of Different Agglomeration Mechanisms
Particle agglomeration in the acoustic field is governed by several mechanisms including the orthokinetic interaction, the gravity sedimentation, the Brownian diffusion, the mutual radiation pressure effect and the acoustic wake effect.The effect of the orthokinetic interaction on the acoustic agglomeration has been studied under both the travelling wave (Ezekoye and Wibowo, 1999;De Sarabia et al., 2003;Sheng and Shen, 2006;Sheng and Shen, 2007;Zhang et al., 2011;Zhang et al., 2012) and the standing wave (Funcke and Frohn, 1995) conditions.The mutual radiation pressure effect under the travelling wave condition has also been reported in previous work (Ezekoye and Wibowo, 1999;Sheng and Shen, 2006;Sheng and Shen, 2007).However, investigation on acoustic agglomeration due to the mechanisms other than the orthokinetic interaction under the standing wave condition is very scarce.Therefore, it is necessary to examine the effect of the hydrodynamic particle interactions, specifically the mutual radiation pressure effect and the acoustic wake effect, on the acoustic agglomeration dynamics.
The simulation results of the acoustic agglomeration under different particle interaction mechanisms are given in Fig. 2. The particle size distribution before and after acoustic agglomeration is shown in Fig. 2(a) and the evolution of particle number concentration during acoustic agglomeration is shown in Fig. 2(b).The acoustic intensity and the frequency used in the simulations are 155 dB and 5000 Hz, respectively, and the residence time is 3 s.From Fig. 2(a) it can be seen that the particle number concentration in the submicron size range decreases remarkably and that in the micron size range increases slightly, indicating the occurrence of particle agglomeration.It is also noted that the particle number concentration in the micron size range is slightly higher when all of the particle interaction mechanisms are included than that obtained when only the orthokinetic interaction and the gravity sedimentation are considered.This suggests that other mechanisms apart from the orthokinetic interaction and the gravity sedimentation that drive acoustic agglomeration exist.Fig. 2(b) clearly demonstrates the effect of different mechanisms on the acoustic agglomeration.The combined effect of the orthokinetic interaction and the gravity sedimentation plays a dominant role in the particle agglomeration, whilst Brownian diffusion is negligible.With respect to the hydrodynamic interactions, the acoustic wake effect is more influential on the particle agglomeration behavior than the mutual radiation pressure effect.The result simply suggests that the acoustic wake effect should be taken into account in the numerical simulation or theoretical analysis of the acoustic agglomeration phenomenon.
Dynamic Process of Acoustic Agglomeration
Fig. 3 shows a sequence of snapshots during the acoustic agglomeration. It can be seen that an increasing number of particles drift to and gather at the wave node as the residence time increases. The drift motion of a single PM 2.5 particle under the standing wave has been theoretically described by Czyz (1987, 1990) and numerically analyzed by Song and Fan (2016). It has been recognized that particles with diameters above 0.5 µm are more prone to drift. The drift velocity decreases rapidly with decreasing particle diameter when the diameter is less than 0.5 µm.
The drift-to-node phenomenon results from the asymmetric motion of the gas in the standing wave.The drift velocity varies with the particle position and size (Song and Fan, 2016), thus the relative drift velocity may contribute to particle interaction and agglomeration.It is worth noting that the effect of the acoustic particle drift has not been modeled in previous simulations of acoustic agglomeration.Nevertheless, this effect is naturally included in the model developed in the present work in the equations of particle motion and in the collision rate of orthokinetic interaction.
To distinguish the contributions of the relative drift motion and the oscillatory motion between the particles to the acoustic agglomeration, the former is henceforth referred to as "orthokinetic drift" and the latter as "orthokinetic oscillation". On this basis, the orthokinetic interaction involved in acoustic agglomeration under the standing wave condition actually comprises orthokinetic drift and orthokinetic oscillation. In addition to the orthokinetic drift, the drift-to-node motion of particles also leads to an increase of the local particle number concentration at the node, which in turn may contribute to the particle agglomeration. In our model, the effect of increased particle number concentration on the acoustic agglomeration is directly taken into account in the collision probability, since a higher particle number concentration leads to a greater collision probability as given by Eq. (11). The evolutions of the particle size distribution and total particle number concentration are shown in Fig. 4. It can be seen that the concentration of small particles decreases and the concentration of large particles increases with time due to the formation of agglomerates by acoustic agglomeration. It is also found that the total particle number concentration decreases monotonically, albeit slowly, over a long period of time, which is in accordance with the experimental findings (Manoucheri and Ezekoye, 1996; Liu et al., 2009). These results indicate that the present model is capable of accurately capturing the phenomenon of acoustic agglomeration.
Effect of Acoustic Frequency
Fig. 5 shows the effect of the acoustic frequency on the particle agglomeration. The acoustic intensity is constant at 155 dB and the residence time is 3 s. It is clearly shown that, as the acoustic frequency increases, the peak of the particle number concentration decreases and the particle number concentration above 1 µm increases. This result is contrary to the common belief that acoustic agglomeration should be suppressed with increasing acoustic frequency because the particle oscillatory motion is damped at higher frequencies; here, the particle agglomeration evidently becomes more pronounced at a higher acoustic frequency. For instance, the total particle number concentrations are reduced to 72.5%, 61.3% and 49.4% of the initial concentration after acoustic agglomeration at frequencies of 2000 Hz, 5000 Hz and 8000 Hz, respectively. This can be explained by the orthokinetic drift (i.e., the drift-to-node motion of the particles) in a standing wave. On the one hand, for the acoustic frequency range used here, the drift velocity increases with increasing frequency (Czyz, 1987, 1990; Song and Fan, 2016). On the other hand, increasing the frequency shortens the wavelength. As a result, particles accumulate more rapidly, leading to more concentrated regions in the volume, which in turn induces more intense particle interactions and agglomeration due to the orthokinetic effect, the acoustic wake effect, and the mutual radiation pressure effect. It is also worth noting that the effect of the acoustic frequency is not robust; instead, it strongly depends on the particle size distribution (Gallego-Juárez et al., 1999; Liu et al., 2011). Therefore, numerical and experimental investigations are still required to explore the effects of frequency on the acoustic agglomeration for different particle size distributions.
Effect of Acoustic Intensity
Fig. 6 gives the effect of the acoustic intensity on the acoustic agglomeration at f = 5000 Hz and t = 3 s. As expected, both the peak of the particle size distribution and the total particle number concentration decrease when the acoustic intensity increases. Specifically, at acoustic intensities of 145 dB, 150 dB and 155 dB, the total particle number concentrations decrease by 20.6%, 30.9%, and 38.7%, respectively. This is justifiable, as a higher acoustic intensity produces a larger amplitude of the particle oscillatory motion and a greater velocity of the particle drift-to-node motion, providing more opportunities for particles to collide due to the orthokinetic interaction. Meanwhile, faster drift-to-node motion leads to smaller separations between the particles, resulting in a higher collision probability caused by both the acoustic wake effect and the mutual radiation pressure effect. The simulation results are well supported by previous experimental results, which show that particle agglomeration is enhanced as the acoustic intensity increases (Gallego-Juárez et al., 1999; Komarov et al., 2004; Liu et al., 2009; Yan et al., 2015).
Effect of Particle Size Distribution
The effect of the particle size distribution on the acoustic agglomeration is shown in Fig. 7. The initial particle size distributions can be approximately described using lognormal functions with the same geometric standard deviation but different geometric mean diameters (i.e., 0.15 µm, 0.3 µm and 0.5 µm). An acoustic frequency of 5000 Hz, an acoustic intensity of 155 dB and a residence time of 3 s are used in the simulations. It can be seen that the particle size distribution has a significant influence on the acoustic agglomeration. For initial distribution 1, most of the particles are fully entrained by the acoustic wave; only a small number of relatively large particles exhibit different entrainment rates of oscillatory motion along with obvious drift-to-node motion. The motion of these relatively large particles gives rise to the orthokinetic interaction and the other acoustically induced interactions resulting from the entrainment motion of the particles. As the geometric mean diameter increases, for example to 0.3 µm as represented by initial distribution 2, fewer particles can be fully entrained and more particles exhibit different entrainment rates. In this case, particles move with a greater relative velocity, which promotes particle interaction and agglomeration. When the geometric mean diameter of the particles increases to 0.5 µm, as given by distribution 3, much more intense acoustic agglomeration occurs. Correspondingly, the peak of the particle size distribution decreases dramatically and the total particle number concentration decreases to 5% of its initial value within 1 s. The particle size distribution is therefore an important factor affecting the acoustic agglomeration. Experimental investigations have demonstrated that, with large additional particles as a second mode or with particle enlargement by heterogeneous condensation, the removal of particles from the small-particle range can be significantly enhanced (Hoffmann et al., 1993; Wang et al., 2011; Yan et al., 2015).
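For readers who want to reproduce the three initial conditions described above, the short sketch below evaluates number-weighted lognormal size distributions with the stated geometric mean diameters; the geometric standard deviation value is an assumption, since it is not specified here.

```python
import numpy as np

def lognormal_number_distribution(d, d_g, sigma_g):
    """Number-weighted lognormal distribution dN/dlnD for geometric mean
    diameter d_g and geometric standard deviation sigma_g."""
    return np.exp(-(np.log(d / d_g)) ** 2 / (2 * np.log(sigma_g) ** 2)) \
        / (np.sqrt(2 * np.pi) * np.log(sigma_g))

d = np.logspace(-2, 1, 400)                               # diameters in micrometres
for i, d_g in enumerate((0.15, 0.30, 0.50), start=1):     # the three initial distributions
    n = lognormal_number_distribution(d, d_g, sigma_g=1.6)  # sigma_g = 1.6 is assumed
    print(f"distribution {i}: modal diameter about {d[np.argmax(n)]:.2f} um")
```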
CONCLUSIONS
The acoustic agglomeration process of PM2.5 in a standing wave has been modeled using the DSMC method. The effects of the drag force, which gives rise to the orthokinetic interaction, and of the gravitational force, which leads to gravity sedimentation, are taken into account in the particle motion, so the particle trajectories can be tracked. The contributions of other mechanisms, such as Brownian diffusion, the acoustic radiation pressure effect and the acoustic wake effect, are included by simply adding the relevant collision rates to the rate obtained from the relative velocity based on the particle motion.
The simulation results are validated against experimental data. On this basis, the acoustic agglomeration process is studied by considering different acoustic mechanisms and by varying key operating parameters such as the residence time, the acoustic frequency, the acoustic intensity and the particle size distribution. The simulation results show that the combined effect of the orthokinetic interaction and gravity sedimentation plays a dominant role in governing the acoustic agglomeration, while the effect of Brownian diffusion on the particle agglomeration is marginal. Compared with the mutual radiation pressure effect, the acoustic wake effect appears more influential on the acoustic agglomeration. The visualization of the acoustic agglomeration process reveals the critical feature of "orthokinetic drift" involved in the orthokinetic interaction and shows the high-concentration regions resulting from this drift. It is also found that the particle agglomeration is more pronounced for higher acoustic frequencies, higher acoustic intensities and larger particles. This work provides valuable data and information for the future optimization of acoustic agglomeration in the standing wave.
Fig. 2. Comparison of the effects of agglomeration mechanisms. L = 155 dB, f = 5000 Hz, t = 3 s. (a) Particle size distribution before and after acoustic agglomeration. (b) Evolution of total particle number concentration with time.
Fig. 4. Evolution of particle size and concentration with time. L = 155 dB, f = 5000 Hz. (a) Evolution of particle size distribution with time. (b) Evolution of total particle number concentration with time.
Fig. 6. Effect of acoustic intensity on acoustic agglomeration. f = 5000 Hz, t = 3 s. (a) Particle size distribution before and after acoustic agglomeration. (b) Evolution of total particle number concentration with time.
Table 1. Computational parameters used in the simulation.
"Physics"
] |
Advancing Patient Care: How Artificial Intelligence Is Transforming Healthcare
Artificial Intelligence (AI) has emerged as a transformative technology with immense potential in the field of medicine. By leveraging machine learning and deep learning, AI can assist in diagnosis, treatment selection, and patient monitoring, enabling more accurate and efficient healthcare delivery. The widespread implementation of AI in healthcare has the potential to revolutionize patient outcomes and transform the way healthcare is practiced, leading to improved accessibility, affordability, and quality of care. This article explores the diverse applications of AI and reviews the current state of its adoption in healthcare. It concludes by emphasizing the need for collaboration between physicians and technology experts to harness the full potential of AI.
Introduction
Artificial intelligence is increasingly being used as a virtual tool in many countries around the world. With its ability to mimic human cognitive functions, AI has revolutionized industries, improved efficiency, and unlocked new possibilities. During the past few years, governments have adopted a variety of smart applications that use AI and its subsets to provide predictions and recommendations in various fields, such as healthcare, finance, agriculture, education, social media, and data security.
Since the outbreak of COVID-19 in 2019, AI technologies have experienced accelerated adoption and utilization across various domains within the healthcare sector. In response to the pandemic, AI has emerged as a valuable tool and is being used for disease detection and diagnosis, medical imaging and analysis, treatment planning and personalized medicine, drug discovery and development, predictive analytics, and risk assessment. In 2018, Loh E. [1] stated that AI has the potential to significantly transform physicians' roles and revolutionize the practice of medicine, and that it is important for all doctors, in particular those in positions of leadership within the health system, to anticipate the potential changes, forecast their impact, and make strategic plans for the medium to long term. In comparison, in 2021, Mistry C. et al. [2] assessed that deploying advanced digital devices has become a requirement for offering augmented customer satisfaction, permitting tracking, checking health status, and achieving better drug adherence.
The field of AI is continuously evolving, and researchers are exploring various avenues to create intelligent systems with different capabilities. The authors employed a visual representation, in the form of Figure 1, to illustrate the diverse subtypes of AI. Table 1 provides an overview of the definitions of terms related to AI and their integration within the healthcare sector.
Term Definition
Artificial Intelligence (AI): The first definition was given in 1950 by Alan Turing, the founding father of AI, as the science and engineering of making intelligent machines, especially intelligent computer programs [3]. According to Salto-Tellez M. et al. [4], AI represents a range of advanced machine technologies that can derive meaning and understanding from extensive data inputs in ways that mimic human capabilities. In the present context of medical practice, a specific definition may be a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation [5].
Machine Learning (ML): ML, a subset of artificial intelligence, exhibits the experiential "learning" associated with human intelligence, while also having the capacity to learn and improve its analyses through the use of computational algorithms [6,7]. Alpaydin E. [8] defined machine learning as the field of programming computers to optimize a performance criterion using example data or past experience. ML-based tools are used in the healthcare system to provide various treatment alternatives and individualized treatments and to improve the overall efficiency of hospitals and healthcare systems while lowering the cost of care [9].
Deep Learning (DL): Deep Learning, a subset of Machine Learning, refers to a deep neural network, a specific configuration in which neurons are organized in multiple successive layers that can independently learn representations of data and progressively extract complex features, performing tasks such as computer vision and natural language processing (NLP) [10]. In experimental settings across multiple medical specialties, DL performs equivalently to healthcare professionals in detecting diseases from medical imaging [11].
Natural Language Processing (NLP): Natural Language Processing is a theoretically motivated range of computational techniques for analyzing and representing naturally occurring texts at one or more levels of linguistic analysis, for the purpose of achieving human-like language processing for a range of tasks or applications [12]. NLP techniques have been used to structure information in healthcare systems by extracting relevant information from narrative texts so as to provide data for decision-making [13].
Robotics: The robot has been defined by the Robot Institute of America as "a reprogrammable multifunctional manipulator designed to move material, parts, tools, or specialized devices through variable programmed motions for the performance of a variety of tasks" [14]. The term "robotics" refers to the study and use of robots. Robotic assistance has been shown to improve the safety and performance of intracorporeal suturing, which is heavily required in urological and gynecological procedures [15].
Artificial Neural Network (ANN): An Artificial Neural Network, a subset of Machine Learning, is a computational model inspired by the biological neural networks of the human brain. These systems are mainly used for pattern identification and processing and are able to progressively improve their performance based on analytic results from previous tasks [16-18]. Many areas have been integrating the use of ANNs to facilitate the diagnosis, prognosis, and treatment of many diseases [19-21].
Convolutional Neural Network (CNN): A Convolutional Neural Network is a Deep Learning algorithm specifically designed for image and video processing, primarily used in medical image analysis and diagnostics. CNNs have demonstrated superior performance compared with classical machine learning algorithms and in some cases have achieved comparable or better performance than clinical experts [22].
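To make the CNN concept concrete, the sketch below defines a minimal convolutional classifier for single-channel image patches in PyTorch. It is an illustrative, untrained architecture with arbitrary layer sizes, not one of the clinically validated models cited in this review.

```python
import torch
import torch.nn as nn

class TinyMedicalCNN(nn.Module):
    """Minimal CNN for binary classification of 64x64 grayscale patches."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 16 * 16, 64), nn.ReLU(), nn.Linear(64, 2),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = TinyMedicalCNN()
logits = model(torch.randn(4, 1, 64, 64))   # a batch of 4 dummy patches
print(logits.shape)                          # torch.Size([4, 2])
```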
Role of Artificial Intelligence in Healthcare
Disease Detection, Diagnosis, and Medical Imaging
The application of AI within the diagnostic process to support medical specialists could be of great value for the healthcare sector and for patients' overall well-being [23]. The fundamental goal of the diagnosis of a disease lies in determining whether a patient is affected by that disease or not [24]. The first step in the diagnostic process involves obtaining a complete medical history and conducting a physical examination. For instance, one technique uses sound analysis to recognize COVID-19 from different respiratory sounds, e.g., cough, breathing, and voice [25]. Additionally, for a precise diagnosis, AI algorithms can be used for the analysis of medical scans and pathology images. Imaging applications include the determination of ejection fraction from echocardiograms [26], the detection and volumetric quantification of lung nodules from radiographs [27], and the detection and quantification of breast densities via mammography [28]. Imaging applications in pathology include an FDA-cleared system for whole-slide imaging (WSI), whose integration into a laboratory offers many benefits over light microscopy [29].
Treatment Planning and Personalized Medicine
AI tools have the ability to analyze large amounts of data and detect patterns; therefore, they can make predictions for efficient and personalized treatment strategies. Personalized medicine, as an extension of the medical sciences, uses practice and medical decisions to deliver customized healthcare services to patients [30]. For example, CURATE.AI is an AI-derived platform that maps the relationship between an intervention intensity (the input, e.g., a drug dose) and a phenotypic result (the output) for an individual, based exclusively on that individual's data. This creates a profile that serves as a map to predict the outcome for a specified input and to recommend the intervention intensity that will provide the best result [31].
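A toy version of such an individualized input-output map is sketched below: a quadratic profile is fitted to hypothetical per-patient calibration points and then used to pick the dose with the best predicted response. The data, the quadratic form and the selection rule are illustrative assumptions, not CURATE.AI's actual algorithm.

```python
import numpy as np

# Hypothetical calibration data for one patient: dose vs. measured response.
doses    = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
response = np.array([0.42, 0.58, 0.66, 0.61, 0.49])

coeffs = np.polyfit(doses, response, deg=2)            # patient-specific quadratic map
grid = np.linspace(doses.min(), doses.max(), 200)
best_dose = grid[np.argmax(np.polyval(coeffs, grid))]  # dose with best predicted outcome
print(f"recommended intervention intensity: {best_dose:.1f}")
```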
Drug Discovery and Development
The use of AI has been increasing in the pharmaceutical industry; as a result, it has reduced the human workload and helped achieve targets in a shorter period of time [32]. AI can recognize hit and lead compounds and provide quicker validation of the drug target and optimization of the drug structure design [33,34]. In January 2023, Insilico Medicine announced an encouraging topline readout of its phase 1 safety and pharmacokinetics trial of the molecule INS018_055, designed by AI for idiopathic pulmonary fibrosis, a progressive disease that causes scarring of the lungs [35].
Predictive Analytics and Risk Assessment
Disease risk assessment is the process of evaluating a person's probability of developing certain diseases, based on risk factors such as genetic predispositions, environmental exposures, and lifestyle choices. AI techniques have been adopted to address the various steps involved in clinical genomic analysis, including variant calling, genome annotation, variant classification, and phenotype-to-genotype correspondence, and perhaps eventually they can also be applied to genotype-to-phenotype predictions [36]. Moreover, Ramazzotti et al. achieved successful prognosis prediction for 27 out of 36 cancers by employing AI to analyze various types of biological data, such as RNA expression, point mutations, DNA methylation, and copy-number-variation omics data. The data used for the analysis were sourced from The Cancer Genome Atlas (TCGA) [37].
Methodology
We conducted a comprehensive review of the current literature, including original articles that studied various clinical applications of AI in healthcare. We performed extensive searches of the Google Scholar, PubMed, and ScienceDirect databases to identify relevant manuscripts. As keywords, we used "artificial intelligence", "deep learning", and "machine learning", combined with "clinical applications" and "healthcare". We restricted our search to papers published in English between 2013 and 2023 and found more than 200 relevant manuscripts. The inclusion criteria focused on studies that examined the application of artificial intelligence in different medical specialties; we excluded reviews and editorial comments.
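The screening step described above can be expressed as a simple filter over the retrieved records; the record fields and example entries below are hypothetical and only illustrate the stated criteria (English language, 2013-2023, original studies only).

```python
# Hypothetical bibliographic records retrieved from the database searches.
records = [
    {"title": "Deep learning for diabetic retinopathy", "year": 2019,
     "language": "en", "type": "original"},
    {"title": "AI in radiology: a narrative review", "year": 2021,
     "language": "en", "type": "review"},
]

included = [r for r in records
            if r["language"] == "en"
            and 2013 <= r["year"] <= 2023
            and r["type"] == "original"]    # exclude reviews and editorial comments
print(f"{len(included)} record(s) retained")
```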
Results
After a thorough review and assessment of the 223 articles, we identified and included a subset of 52 papers that were directly relevant to our research: four on cardiology, three on dermatology, two on gastroenterology, three on neurology and neuroscience, three on ophthalmology, three on psychiatry, three on forensics and toxicology, four on radiology, 17 on pathology, two on urology, and four on obstetrics and gynecology, listed in Table 2. These selected studies provided valuable insights into the use and impact of AI in various medical specialties, forming the basis of our review.
AI in Cardiology
As declared in a 2018 study, using machine learning and deep learning, AI has been deployed to interpret echocardiograms, to automatically identify heart rhythms from an electrocardiogram (ECG), to uniquely identify an individual using the ECG as a biometric signal, and to detect the presence of heart disease such as left ventricular dysfunction from the surface ECG [38-40]. In a study conducted by Weng S.F. et al. between 2005 and 2015, using routine clinical data of over 350,000 patients, machine learning significantly improved the accuracy of cardiovascular risk prediction, correctly predicting 355 (an additional 7.6%) more patients who developed cardiovascular disease compared with the established algorithm [41].
AI in Dermatology
According to Young AT. et al. (2020), automated AI diagnosis of skin lesions is ready to be tested in clinical environments and has the potential to provide diagnostic support and expanded access to care [42]. A meta-analysis of 70 studies found the accuracy of computer-aided diagnosis of melanoma to be comparable to that of human experts [43]. In 2017, Esteva et al. showed that a convolutional neural network (CNN), the leading DL algorithm for image analysis, trained on 129,450 images, achieved performance comparable to dermatologists on two binary classification tasks, carcinomas versus seborrheic keratoses and melanomas versus nevi, for both dermoscopic and non-dermoscopic images [44].
AI in Gastroenterology
Kröner PT. et al. (2021) stated that the clinical applications of AI systems in gastroenterology and hepatology include the identification of premalignant or malignant lesions (e.g., identification of dysplasia or esophageal adenocarcinoma in Barrett's esophagus, pancreatic malignancies), detection of lesions (e.g., polyp identification and classification, small-bowel bleeding lesions on capsule endoscopy, pancreatic cystic lesions), development of objective scoring systems for risk stratification, prediction of disease prognosis or treatment response (e.g., determining survival in patients after resection of hepatocellular carcinoma), determination of which patients with inflammatory bowel disease (IBD) will benefit from biologic therapy, and evaluation of metrics such as bowel preparation score or quality of endoscopic examination [45]. A study conducted by Martin D.R. et al. (2020), using histopathologic images of gastric biopsies as input, had a diagnostic accuracy of 98.9-99.1% for detecting current Helicobacter pylori infection, versus a mean accuracy of 79.0-79.4% for endoscopists detecting current H. pylori infection in two studies [46].
AI in Neurology and Neuroscience
According to Pedersen M. (2020), AI has the potential to create a paradigm shift in the diagnosis, treatment, prediction, and economics of neurological disease [47]. Hazlett HC. et al. (2017) stated that a deep learning algorithm used magnetic resonance imaging (MRI) of the brains of individuals 6 to 12 months old to predict the diagnosis of autism in individual high-risk children at 24 months, with a positive predictive value of 81% [48]. Moreover, Ienca M. and Ignatiadis K. (2020) emphasized that the use of pattern recognition and feature extraction algorithms, for example, can significantly contribute to diagnosing brain diseases, such as brain tumors or Alzheimer's disease, earlier, more accurately, and at more treatable stages compared to conventional predictive models [49].
AI in Ophthalmology
Rathi S. et al. (2017) declared that teleophthalmology is well established as an aid in the detection of retinopathy of prematurity (ROP) and in diabetic retinopathy screening, and is being explored for glaucoma screening and other fields of ophthalmology [50]. Furthermore, Gulshan V. et al. (2016) demonstrated the clinical utility of a deep machine-learning algorithm that evaluated retinal fundus photographs from adults and detected referable diabetic retinopathy with high sensitivity and specificity [51]. Long E. et al. (2017) showed that an AI agent, using deep learning and neural networks, accurately diagnosed and provided treatment decisions for congenital cataracts in a multihospital clinical trial, performing just as well as individual ophthalmologists [52].
AI in Psychiatry
The emerging literature has shown that AI is proving useful in psychological medicine and psychiatry. According to Pham KT. et al. (2022), within the last two decades AI began to incorporate neuroimaging studies of psychiatric patients with deep learning models to classify patients with psychiatric disorders [53]. Vieira S. et al. (2017) were able to classify schizophrenia patients and controls with an accuracy of 85.5% by extracting functional connectivity patterns from resting-state functional MRIs of schizophrenia patients and healthy controls [54]. Researchers at the Vanderbilt University Medical Center created machine-learning algorithms that achieved 80-90% accuracy in predicting whether someone will attempt suicide within the next 2 years, and 92% accuracy in predicting whether someone will attempt suicide within the next week [1].
AI in Forensics and Toxicology
Forensic medicine and toxicology are important branches of crime investigation. In 2022, Wankhade TD. et al. stated that various procedures of forensic medicine, such as the analysis of toxins, the collection of various samples of medicolegal importance from body cavities, the detection of pathological changes in various organs of the body, the detection of various stains on the body, the detection of a weapon used in a crime, and time-since-death calculations, are areas where AI will play a key role in framing various opinions of medicolegal importance [55]. For example, according to Thurzo A. et al. (2021), three-dimensional convolutional neural networks (3D CNN) can be used in biological age determination, sex determination, automated 3D cephalometric landmark annotation, soft-tissue face prediction from the skull and in reverse, and facial growth vector prediction [56].
In toxicology, deep learning might automatically identify high-level drug use patterns by combining data from social media, poison control logs, published reports, and national surveys [57].
AI in Radiology
According to Hosny A. et al. (2018), AI methods automatically recognize complex patterns in imaging data and provide quantitative, rather than qualitative, assessments of radiographic characteristics [58]. Chen H. et al. (2016) maintained that studies have shown deep learning technologies to be on par with radiologists' performance for both detection [59] and segmentation [60] tasks in ultrasonography and MRI, respectively. Additionally, Wang H. et al. (2017) declared that, for the classification of lymph node metastasis in PET-CT (positron emission tomography-computed tomography), deep learning had higher sensitivities but lower specificities than radiologists [61].
AI in Surgery
According to Zhou XY. et al. (2020), advances in surgery have revolutionized the management of both acute and chronic diseases, prolonging life and extending the boundary of patient survival [62]. Moreover, current robots can already perform some simple surgical tasks automatically, such as suturing and knot tying [63,64]. For example, in 2016 a smart surgical robot stitched up a pig's small intestine completely on its own and was able to outperform human surgeons who were given the same task [65].
AI in Pathology
In the modern healthcare system, AI and Digital Pathology (DP) have the potential to challenge traditional practice and provide precision in pathology diagnostics. Cui M. and Zhang D.Y. (2021) defined DP as the process of digitizing histopathology, immunohistochemistry, or cytology slides using whole-slide scanners, as well as the interpretation, management, and analysis of these images using computational approaches [66]. According to Niazi M. K. K. et al. (2019), whole-slide imaging (WSI) allows entire slides to be imaged and permanently stored at high resolution, a process that provides a vast amount of information, which can be shared for clinical use or telepathology [67]. Two scanners, the Philips IntelliSite Pathology Solution (PIPS) and the Leica Aperio AT2 DX, are approved by the Food and Drug Administration (FDA) for reviewing and interpreting digital surgical pathology slides prepared from biopsied tissue [68,69].
The use of digital image analysis in pathology can identify and quantify specific cell types quickly and accurately and can quantitatively evaluate histological features, morphological patterns, and biologically relevant regions of interest [72-74]. As Balázs et al. (2020) declared, recent groundbreaking results have demonstrated that applications of machine learning methods in pathology significantly improve Ki-67 scoring in breast cancer, Gleason grading in prostate cancer, and tumor-infiltrating lymphocyte (TIL) scoring in melanoma [74]. Shaban et al. (2019) trained a novel CNN system to quantify TILs from WSIs of oral squamous cell carcinomas and achieved an accuracy of 96% [75]. Furthermore, Hekler A. et al. conducted a study in 2019 which concluded that a CNN was able to outperform 11 histopathologists in the classification of histopathological melanoma images and thus shows promise for assisting human melanoma diagnosis [76]. Table 3 summarizes the applications of AI systems in pathology.
Examples of AI Systems Applications in Pathology
1. Differentiate between benign and malignant tumors
2. Grading of dysplasia and in situ lesions [70]
3. Metastasis and micrometastasis detection [74]
4. Relationships between different immune cell populations [70,71]
5. IHC/ISH scoring of multiple biomarkers and topography of the immune response [72]
6. Mitosis detection [78,79]
In 2014, Dong et al. designed a computational pathology method to identify and quantify nuclear features from diagnostic tumor regions of interest (ROIs) of intraductal proliferative lesions of the breast, with high accuracy in distinguishing between benign breast ductal hyperplasia and malignant ductal carcinoma in situ [77]. Moreover, Coutre et al. (2018) used image analysis with DL to detect breast cancer histologic subtypes [80]. In addition, AI algorithms have been developed to provide quantitative measurements of immunohistochemically stained Ki-67 [81], ER [80], PR, and Her-2/neu images [82].
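As a deliberately simplistic illustration of the kind of quantitative scoring mentioned above (e.g., Ki-67), the sketch below thresholds a stain-intensity map and counts connected components as positive nuclei; the threshold and size values are arbitrary assumptions, and real pipelines use far more sophisticated segmentation.

```python
import numpy as np
from scipy import ndimage

def count_positive_nuclei(stain_intensity, threshold=0.6, min_area=20):
    """Count connected regions of above-threshold stain intensity whose area
    is at least min_area pixels; a toy stand-in for nuclear quantification."""
    mask = stain_intensity > threshold
    labels, n = ndimage.label(mask)
    if n == 0:
        return 0
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return int(np.sum(np.asarray(sizes) >= min_area))

image = np.random.rand(256, 256)   # placeholder for a real stain-intensity map
print(count_positive_nuclei(image))
```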
AI in Urology
AI applications in urology include utilizing radiomic imaging or ultrasonic echo data to improve or automate cancer detection or outcome prediction, utilizing digitized tissue specimen images to automate the detection of cancer on pathology slides, and combining patient clinical data, biomarkers, or gene expression to assist disease diagnosis or outcome prediction [89]. For example, Kott et al. tested an AI-based system for detecting prostate cancer which yielded 91.5% accuracy in classifying slides as either benign or malignant, and 85.4% accuracy in the finer classification of benign vs. Gleason 3 vs. 4 vs. 5 [83]. In another study, Baessler et al. applied ML-based CT radiomics to determine whether the lymph nodes dissected in patients with metastatic or advanced non-seminomatous testicular germ cell tumors were benign or malignant, with 88% sensitivity and 72% specificity [84].
AI in Obstetrics and Gynecology
AI in Obstetrics
The fields of prenatal diagnosis, labor, and high-risk pregnancy are areas of significant importance in medicine, and they can be associated with medicolegal issues. Studies show that AI tools can be used to reduce these issues and to improve patient outcomes (for both mothers and newborns). In a study conducted by Idowu et al. [85], electrohysterography signals were employed, and three distinct machine learning algorithms were utilized to assist in the accurate detection of true labor and the reliable diagnosis of premature labor. In another study, Manna et al. [86] proposed a method that combines AI and ANNs to extract texture descriptors from oocyte or embryo images. This approach enables AI to effectively identify the most viable oocytes and embryos, increasing the likelihood of successful pregnancies.
AI in Gynecology
Numerous research investigations focusing on cervical cancer and cervical intraepithelial neoplasia (CIN) have documented the application of AI. The primary areas where AI has been employed include the assessment of colposcopy, MR imaging (MRI), CT scans, cytology, and data related to human papillomavirus (HPV) [90]. Additionally, Zhang et al. [87] demonstrated in their study that using deep learning on color ultrasound examinations as imaging assessments resulted in an impressive accuracy of 0.99 in predicting the definitive diagnosis of ovarian tumors. Moreover, Hart G. et al. emphasized that the application of machine learning shows immense potential in aiding the early detection of endometrial cancer. This approach achieves high-accuracy predictions by relying primarily on personal health information even before the onset of the disease, eliminating the need for invasive or costly procedures such as endometrial biopsy [88].
Discussion and Challenges
The literature review underscores the remarkable potential of AI, across different medical specialties, to revolutionize screening and diagnostic procedures and, therefore, to improve patient care. AI can improve diagnostic accuracy while limiting errors and can impact patient safety, for example by assisting with prescription delivery [91-93]. Nevertheless, there are challenges that need to be considered as AI usage increases in healthcare, including ethical, social, and technical challenges. For example, AI processes may lack transparency, making accountability problematic, or may be biased, leading to unfair, discriminatory behavior or mistaken decisions [94]. Moreover, AI algorithms are unable to take a holistic approach to clinical scenarios and cannot fully take into consideration the psychological and social aspects of human nature, which are often considered by a skilled healthcare professional [95]. Addressing these challenges requires collaboration between healthcare professionals, researchers, policymakers, and technology developers to ensure that AI tools are implemented responsibly, ethically, and safely in the healthcare sector.
Conclusions
Artificial intelligence systems powered by machine learning and deep learning are being rapidly implemented in medicine. Moreover, combining AI with established knowledge in various medical specialties could result in dramatic changes, such as advanced diagnostics, accurate risk and prognosis evaluation, and even treatment suggestions. Thus far, AI is proving to be effective, and the research will continue to improve as more applications are discovered and explored. The integration of digital pathology based on AI systems into current practice will help enhance patient care. In conclusion, AI's role in medicine will continue to expand. In collaboration with experts in technology and ethics, we can revolutionize healthcare, making it more precise, and we can pave the way for a healthier future with the right implementations of AI.
Figure 1. Illustration of the AI subtypes.
Table 1. Definitions of terms related to AI.
Table 2. Scientific articles that analyze the use of artificial intelligence in medical specialties.
Table 3. Examples of AI system applications in pathology.
"Medicine",
"Computer Science"
] |
Planning and Control of Multi-Robot-Object Systems under Temporal Logic Tasks and Uncertain Dynamics
We develop an algorithm for the motion and task planning of a system comprised of multiple robots and unactuated objects under tasks expressed as Linear Temporal Logic (LTL) constraints. The robots and objects evolve subject to uncertain dynamics in an obstacle-cluttered environment. The key part of the proposed solution is the intelligent construction of a coupled transition system that encodes the motion and tasks of the robots and the objects. We achieve such a construction by designing appropriate adaptive control protocols at the lower level, which guarantee safe robot navigation and object transportation in the environment while compensating for the dynamic uncertainties. The transition system is efficiently interfaced with the temporal logic specification via a sampling-based algorithm to output a discrete path as a sequence of synchronized robot actions; such actions satisfy the robots' as well as the objects' specifications. The robots execute this discrete path by using the derived low-level control protocols. Simulation results verify the proposed framework.
I. INTRODUCTION
Temporal-logic-based motion planning has gained a significant amount of attention over the last decade, as it provides a fully automated correct-by-design controller synthesis approach for autonomous robots. Temporal logics, such as linear temporal logic (LTL), provide formal high-level languages that can describe planning objectives more complex than the well-studied navigation algorithms, and they have been used extensively both in single- as well as in multi-robot setups (see, indicatively, [1]-[10]). The task is given as a temporal logic formula, which is coupled with a discrete representation of the underlying system, abstracted from the underlying continuous dynamics, in order to derive an appropriate high-level discrete path.
There exist numerous works that consider temporal-logic-based tasks, both for single- and multi-robot systems [1]-[3], [5], [7], [9]. Nevertheless, the related works consider temporal-logic-based motion planning for fully actuated, autonomous robotic agents. Consider, however, cases where some unactuated objects must undergo a series of processes in a workspace with autonomous robots (e.g., car factories or automated warehouses). In such cases, the robots, besides satisfying their own motion specifications, are also responsible for coordinating with each other in order to transport the objects around the workspace. When the unactuated objects' specifications are expressed using temporal logics, the abstraction of the robots' behavior becomes much more complex, since it has to take into account the objects' goals.
In addition, the spatial discretization of a multi-agent system into an abstracted higher-level system necessitates the design of appropriate continuous-time controllers for the transitions of the agents among the states of the discrete system. Many works in the related literature, however, either assume that such continuous controllers exist or adopt unrealistic assumptions. For instance, many works either do not take into account continuous agent dynamics or consider single or double integrators [1], [3], [8]-[10], which can deviate from the actual dynamics of the agents, leading to poor performance in real-life scenarios. Discretized abstractions, including the design of the discrete state space and/or continuous-time controllers, can be found in [7], [11]-[14] for general systems and in [15], [16] for multi-agent systems. Moreover, many works adopt dimensionless point-mass agents and therefore do not consider inter-agent collision avoidance [7], [9], [10], which can be a crucial safety issue in applications involving autonomous robots.
Since we aim at incorporating the unactuated objects' specifications in our framework, the robots have to perform (cooperative) transportation of the objects around the workspace, while avoiding collisions with obstacles. Cooperative transportation/manipulation has been extensively studied in the literature (see, for instance, [17]- [27]), with collision avoidance specifications being incorporated in [24], [25], [28]. Cooperative object transportation under temporal logics has also been considered in our previous work [29].
This paper proposes a novel algorithm for the motion and task planning of a multi-robot-object system, i.e., a system that consists of multiple robots and unactuated objects, subject to tasks that are expressed as LTL constraints. We consider that the robots and the objects evolve subject to continuous-time and continuous-space dynamics that suffer from uncertainties, in an environment cluttered with obstacles. Our contribution is summarized as follows. First, we abstract the continuous dynamics of the multi-robot-object system into a finite transition system that encodes the coupled behavior of the robots and the objects in the environment. We achieve such an abstraction by developing continuous feedback-control protocols that guarantee (i) the navigation of the robots and (ii) the cooperative transportation of the objects by the robots in the environment. These protocols combine barrier functions and point-world transformations to guarantee collision-avoidance properties, as well as adaptive-control methodologies to compensate for the dynamic uncertainties. The robot navigation and cooperative object transportation constitute robot action primitives that enable the transitions in the aforementioned transition system. Second, we compose the resulting transition system with an automaton-based representation of the underlying LTL task. Finally, we use a sampling-based procedure to search the composition of the transition system and the task's automaton for the derivation of an optimal high-level plan, as a sequence of robot actions, that satisfies the task.
This paper extends our previous work [30] in the following directions: firstly, we consider transportation of the objects by multiple robots (as opposed to [30], which considers single-robot object transportation). Secondly, we consider more complex, obstacle-cluttered environments. Thirdly, we consider dynamic uncertainty in the dynamics of the robots and objects. Finally, instead of model-checking tools based on graph-search algorithms, we use a sampling-based procedure to derive an optimal task-satisfying plan, yielding significantly lower complexity in terms of runtime and memory.
The rest of the paper is organized as follows. Section II provides necessary background and formulates the problem, while Section III presents the proposed solution. Section IV provides simulation results and, finally, Section V concludes the paper.
II. PRELIMINARIES AND PROBLEM FORMULATION
A. Task Specification in LTL
We focus on the task specification φ, given as a Linear Temporal Logic (LTL) formula. The basic ingredients of an LTL formula are a set of atomic propositions Ψ and several boolean and temporal operators. LTL formulas are formed according to the following grammar [31]: φ ::= true | a | φ1 ∧ φ2 | ¬φ | •φ | φ1 U φ2, where a ∈ Ψ, φ1 and φ2 are LTL formulas, and •, U are the next and until operators, respectively. Definitions of other useful operators, such as □ (always), ♦ (eventually), and ⇒ (implication), are omitted and can be found in [31]. The semantics of LTL are defined over infinite words over 2^Ψ. Intuitively, an atomic proposition ψ ∈ Ψ is satisfied on a word w = w1 w2 … if it holds at the first position w1, i.e., ψ ∈ w1. The formula •φ holds if φ is satisfied on the word suffix that begins at the next position w2, whereas φ1 U φ2 states that φ1 has to be true until φ2 becomes true. Finally, ♦φ and □φ hold on w if φ holds at some position of w and at every position of w, respectively. For a full definition of the LTL semantics, we refer the reader to [31].
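To make the operators concrete, the following self-contained sketch represents LTL formulas as a small abstract-syntax tree and checks them over a finite word. True LTL semantics are defined over infinite words, so this finite-trace evaluation is only an illustration of how the next and until operators unfold, not a replacement for automata-based verification; all names are invented for this example.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class F:
    op: str            # 'ap', 'not', 'and', 'next', 'until'
    args: Tuple = ()

def holds(phi: F, word, i: int = 0) -> bool:
    """Finite-trace check of an LTL formula at position i of word (a list of sets)."""
    if i >= len(word):
        return False
    if phi.op == 'ap':
        return phi.args[0] in word[i]
    if phi.op == 'not':
        return not holds(phi.args[0], word, i)
    if phi.op == 'and':
        return holds(phi.args[0], word, i) and holds(phi.args[1], word, i)
    if phi.op == 'next':
        return holds(phi.args[0], word, i + 1)
    if phi.op == 'until':          # phi1 U phi2
        return any(holds(phi.args[1], word, k) and
                   all(holds(phi.args[0], word, j) for j in range(i, k))
                   for k in range(i, len(word)))
    raise ValueError(f"unknown operator {phi.op}")

word = [{'a'}, {'a'}, {'b'}]                                  # finite word over 2^Psi
a_until_b = F('until', (F('ap', ('a',)), F('ap', ('b',))))    # a U b
print(holds(a_until_b, word))                                 # True
```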
B. Problem Formulation
Consider N > 1 robotic agents operating in a compact 2D workspace W ⊂ R^2 with an outer boundary, together with M > 0 objects, and let N := {1, …, N}, M := {1, …, M}. The objects are represented by rigid bodies, whereas the robotic agents are fully actuated and holonomic, each equipped with a transportation tool such as a robotic arm. In addition, the workspace is populated with J ∈ N connected, closed sets {O_j}_{j∈J}, with J := {1, …, J}, representing obstacles, and we define the free space as W_free := W \ ∪_{j∈J} O_j.
We further assume that there exist K > 1 points within W_free, denoted by p_{π_k}, corresponding to certain properties of interest (e.g., a gas station, a repairing area), with K := {1, …, K}. Since, in practice, these properties are naturally inherited by some neighborhood of the respective point of interest, we define for each k ∈ K the region of interest π_k, corresponding to p_{π_k}, as the closed ball π_k := B̄(p_{π_k}, r_{π_k}) ⊂ W_free, with positive radii r_{π_k} > 0, ∀k ∈ K. These properties of interest are expressed as boolean variables via finite sets of atomic propositions. In particular, we introduce disjoint sets of atomic propositions Ψ_i, Ψ^o_j, expressed as boolean variables, that represent services provided to agent i ∈ N and object j ∈ M in Π := {π_1, …, π_K}. The services provided at each region π_k are given by labeling functions which assign to each region π_k, k ∈ K, the subsets of services from Ψ_i and Ψ^o_j, respectively, that can be provided in that region to agent i ∈ N and object j ∈ M. In addition, we consider that the agents and the objects are initially (t = 0) in the regions of interest π_{init(i)}, π_{init_o(j)}, where the functions init : N → K, init_o : M → K specify the initial region indices. In the following, we present the modeling of the coupled dynamics of the objects and the robots.
We denote by x_i ∈ R^2 the position of robot i's center of mass. In this work we explicitly consider the actions of robot navigation, as well as (cooperative) transportation of an object via x_i, where the joint variables of a potentially mounted robotic arm are assumed to be fixed. We consider that the robotic arm joints are used only for grasping/releasing an object (when the respective mobile base is fixed), actions that we do not explicitly model. The motion of robot i's center of mass is governed by second-order dynamics involving m_i, the unknown mass of robot i, an unknown friction-like function f_i : R^4 → R^2, the control input u_i ∈ R^2 of robot i, and h_i ∈ R^2, the force exerted by robot i at the grasping point with an object j in case of contact. These dynamics concern the cases where (i) robot i is navigating to some pre-defined point, and (ii) robot i is transporting, possibly collaboratively with other robots, an object j. In both cases, the joint variables of the mounted robotic arm are assumed to be constant; the procedures of grasping/releasing an object, where the robotic arm would have to be activated, are not considered here. We consider that each robot i, for a given x_i, covers a spherical region of constant radius r_i ∈ R_{>0} that bounds its volume, i.e., B̄(x_i, r_i), ∀i ∈ N. Moreover, we consider that the agents have specific power capabilities, which, for simplicity, we associate with positive integers ζ_i > 0, i ∈ N, via an analogous relation. The overall robot configuration is denoted by x := (x_1, …, x_N). Regarding the objects, we denote by x^o_j ∈ R^2 the position of the j-th object's center of mass, ∀j ∈ M. Each object obeys second-order Newton-Euler dynamics involving m^o_j, the unknown mass of object j, an unknown friction term f^o_j : R^4 → R^2, and h^o_j ∈ R^2, the force exerted on the j-th object's center of mass in case of contact with a robot. The overall object configuration is denoted by x^o := (x^o_1, …, x^o_M). The functions f_i and f^o_j are assumed to satisfy the following assumption. Assumption 1: The functions f_i, f^o_j : R^4 → R^2 are analytic and satisfy, for all x, y ∈ R^2, a bound characterized by unknown positive constants α_i, α^o_j, ∀i ∈ N, j ∈ M. This assumption is inspired by standard friction-like terms, which can be approximated by continuously differentiable velocity functions [32].
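The explicit equations of motion and the bound of Assumption 1 are not reproduced in this text; one plausible reconstruction, assuming standard second-order forms and a velocity-dependent friction bound (the signs of the interaction forces and the exact form of the bound are assumptions), is the following.

```latex
\begin{align}
  m_i \ddot{x}_i + f_i(x_i,\dot{x}_i) &= u_i - h_i, && i \in \mathcal{N},\\
  m^o_j \ddot{x}^o_j + f^o_j(x^o_j,\dot{x}^o_j) &= h^o_j, && j \in \mathcal{M},\\
  \|f_i(x,y)\| \le \alpha_i \|y\|, \qquad \|f^o_j(x,y)\| &\le \alpha^o_j \|y\|,
    && \forall\, x, y \in \mathbb{R}^2 .
\end{align}
```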
Similarly to the robots, each object's volume is represented by a spherical set of constant radius r^o_j ∈ R_{>0}, i.e., B̄(x^o_j, r^o_j), ∀j ∈ M. Next, we provide the coupled dynamics between an object j ∈ M and a subset A ⊆ N of robots that grasp it rigidly. For these robots, it holds that h^o_j = Σ_{i∈A} h_i and, since the joint variables of the robotic arms are fixed, the relative configuration between x^o_j and x_i is constant, ∀i ∈ A. Therefore, one obtains coupled robots-object dynamics characterized by an unknown coupled mass m_{A,j} and an unknown positive constant α_{A,j}. Regarding the volume of the coupled robots-object system, we consider that it is bounded by a sphere centered at x^o_j with constant radius r_{A,j} ∈ R_{>0}, i.e., B̄(x^o_j, r_{A,j}), which is large enough to cover the volume of the coupled system. Moreover, in order to take into account the robots' power capabilities ζ_i, i ∈ N, we consider a function Λ ∈ {True, False} that indicates whether the robots grasping an object are able to transport it, based on their power capabilities. For instance, Λ(m^o_j, ζ_A) = True, where m^o_j is the mass of object j and ζ_A := [ζ_i]_{i∈A}, implies that the robots A have sufficient power capabilities to cooperatively transport object j.
Next, we define the functions AG_j : R_{≥0} → 2^{N_0}, with N_0 := N ∪ {0}, to denote which robots grasp object j ∈ M at each time; AG_j = 0 means that no robot grasps object j. Note also that i ∈ AG_j implies i ∉ AG_{j'}, ∀j' ∈ M \ {j}, i.e., robot i can grasp at most one object at a time. We further denote AG := [AG_1, …, AG_M]. In the following, we use the term "entity" to refer to single robots, objects, as well as systems comprised of robots that grasp an object (robots-object systems). The number of these systems depends on the variables AG. Given a grasping configuration AG ∈ (2^{N_0})^M, consider T̄(AG) entities, indexed by the set T(AG) := {1, …, T̄(AG)}. Each entity (robot, object, or robots-object system) is characterized by the respective configuration (x_i, x^o_j, or x^o_j) and radius (r_i, r^o_j, or r_{A,j}), which we denote for simplicity by the generic variables x^e_i, r^e_i, for all i ∈ T(AG). We further define the free space for each entity by enlarging the obstacles and the other entities by the radius r^e_i. We now give the definitions for the transitions of the robots and the objects between the regions of interest.
Definition 1 (navigation transition): Assume that each entity i ∈ T(AG) is located in some region π_{i_k}, i_k ∈ K, at a time t_0 ∈ R_{≥0}. Then, entity j ∈ T(AG) executes a transition from π_{j_k} to π_{j_k'} if there exists a finite time at which entity j is contained in π_{j_k'} while the remaining entities i ∈ T(AG) stay in their regions. Definition 2 (grasp/release actions): Agent j grasps object ℓ, denoted by j →_g ℓ, if there exists a finite t_f ≥ t_0 such that j ∈ AG_ℓ(t_f); similarly, we define the releasing action j →_r ℓ for an agent j ∈ N and an object ℓ ∈ M. Loosely speaking, the aforementioned definitions correspond to specific action primitives of the robots, namely robot navigation or object transportation, which define the navigation transition, and grasping and releasing actions. When the navigation transition from π_{j_k} to π_{j_k'} corresponds to a robot navigation for robot j ∈ N, we denote it by π_{j_k} →_j π_{j_k'}; when it corresponds to a cooperative transportation of object j ∈ M by a subset of robots A, we denote it by π_{j_k} →^T_{A,j} π_{j_k'}. We also assume the existence of a procedure P_s that outputs whether or not a set of non-intersecting spheres fits in a larger sphere, as well as possible positions of the spheres in case they fit. More specifically, given a region of interest π_k and a number of sphere radii (of robots, objects, or robots-object systems), the procedure can be seen as a function P_s := [P_{s,0}, P_{s,1}], where P_{s,0} outputs whether the spheres fit in the region π_k, whereas P_{s,1} provides possible configurations of the robots and the objects, or 0 in case the spheres do not fit. For instance, P_{s,0}(r_{π_2}, r_1, r_3, r^o_1, r^o_5) determines whether robots 1, 3 and objects 1, 5 fit in region π_2 without colliding with each other, i.e., whether the corresponding balls fit in π_2 with pairwise empty intersections. The problem of finding an algorithm P_s is a special case of the sphere-packing problem [33]. Note, however, that we are not interested in finding the maximum number of spheres that can be packed in a larger sphere but, rather, in the simpler problem of determining whether a given set of spheres can be packed in a larger sphere.
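A heuristic realization of such a procedure P_s in the planar case is sketched below: circles of the given radii are placed one by one inside the region by rejection sampling, returning candidate centers if a placement is found. This is only a randomized feasibility check under assumed parameter values, not the (exact) procedure referred to in the text.

```python
import math
import random

def pack_in_disk(R, radii, trials=2000, seed=0):
    """Try to place circles of the given radii inside a disk of radius R
    (centered at the origin) without mutual overlap.  Returns a list of
    centers if a placement is found, otherwise None (P_{s,0} = False)."""
    rng = random.Random(seed)
    order = sorted(radii, reverse=True)          # place the largest circles first
    for _ in range(trials):
        centers, ok = [], True
        for r in order:
            for _ in range(200):                 # rejection sampling for one circle
                rho = (R - r) * math.sqrt(rng.random())
                th = 2 * math.pi * rng.random()
                c = (rho * math.cos(th), rho * math.sin(th))
                if all(math.hypot(c[0] - q[0], c[1] - q[1]) >= r + rr
                       for q, rr in zip(centers, order)):
                    centers.append(c)
                    break
            else:
                ok = False
                break
        if ok:
            return centers
    return None

print(pack_in_disk(R=1.0, radii=[0.4, 0.3, 0.2]))
```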
Our goal is to control the multi-robot-object system defined above such that the robots and the objects obey given specifications over their atomic propositions Ψ_i and Ψ^o_j, of robot i and object j, respectively. The corresponding behaviors are given by infinite sequences of visited regions of interest paired with time stamps, and the sequences σ_i, σ^o_j are the services provided to the agent and the object, respectively, over their trajectories, as given by the previously defined labeling functions. We are now ready to give a formal problem statement. Problem 1: Consider N robotic agents and M objects in W, initially located in the regions of interest specified by init and init_o; design control laws for the robots such that the behaviors of the robots and the objects satisfy given LTL specifications φ_i, φ^o_j over Ψ_i, Ψ^o_j, respectively. Note that it is implicit in the problem statement that the agents/objects starting in the same region can actually fit without colliding with each other; technically, it holds that P_{s,0}(r_{π_k}, [r_i]_{i∈{i∈N: init(i)=k}}, [r^o_j]_{j∈{j∈M: init_o(j)=k}}) = True, ∀k ∈ K.
A. Continuous Control Design
The first ingredient of our solution is the development of feedback control laws that establish the navigation transition of Def. 1, i.e., robot navigation and cooperative object transportations. We do not focus on the grasping/releasing actions (see Def. 2) and we refer to some existing methodologies that can derive the corresponding control laws (e.g., [34], [35]).
The control design is based on the integration of the adaptive control scheme and the point-world transformation proposed in [36] and [37], respectively. The former accommodates the uncertain robot and object dynamics, and the latter deals with the complex environment obstacles. Since [37] considers single-robot navigation in complex environments, we consider here a sequential execution of the navigation and object-transportation transitions. That is, only one entity is allowed to navigate from one RoI to another at a time, viewing the rest of the entities as fixed obstacles.
1) Robot Navigation
Assume that the conditions of Problem 1 hold at some t_0 ∈ R_{≥0}, i.e., all robots and objects are located in some RoI with zero velocity. We first consider the control problem of safe navigation for a robot i ∈ N satisfying i ∉ AG_j(t_0) for all j ∈ M, i.e., not grasping any object, from region π_{i_k} to a region π_{i_k'}. As stated before, the other entities are fixed and viewed as static obstacles by robot i. Moreover, to account for safety specifications, we wish the robot to avoid entering the RoIs π_k, k ∈ K \ {i_k, i_k'}, which are therefore also treated as obstacles in the definition of the robot's free space. Let the desired navigation goal of the robot be x_{d,i} ∈ π_{i_k'}, which will be provided by the procedure P_s, as explained in the previous section. Next, we use the workspace transformation χ = H(x) that maps the free space onto the open unit disk D, the obstacles onto points b_1, …, b_J̄, and the goal onto χ_d = H(x_{d,i}), and we let r̄ > 0 be an appropriately chosen constant. The transformed free space of the robot can be defined as F_H := D \ {b_1, …, b_J̄}. We define next the set J̄_0 := {0} ∪ J̄, as well as the distances d_ℓ(χ) of χ to the transformed obstacles b_ℓ, ℓ ∈ J̄, and d_0(χ) of χ to the boundary of D. Note that, by keeping d_ℓ(χ) > 0 and d_0(χ) > 0, we guarantee that χ ∈ F_H and hence the safety of the robot. We also define the constant r̄_d as the minimum distance of the goal to the transformed obstacles/workspace boundary. We revisit now the notion of the 2nd-order navigation function from [36], constructed from an (at least) twice continuously differentiable function β : R_{>0} → R_{≥0} and positive constants k_1, k_2; among its properties, β is strictly decreasing and vanishes, together with its derivatives, beyond a tunable constant τ > 0. By appropriately choosing τ, only one β(d_ℓ(χ)), ℓ ∈ J̄_0, affects the robotic agent at each χ ∈ F_H, with β(d_ℓ(χ_d)) = β'(d_ℓ(χ_d)) = 0, so that the remaining properties of Def. 3 are satisfied.
Proposition 1 ([36]): By choosing τ ∈ (0, min{r̄², r̄_d}), we guarantee that at each χ ∈ F_H there is at most one ℓ ∈ J̄_0 such that d_ℓ(χ) ≤ τ, implying that β(d_ℓ(χ)) and its derivatives are non-zero for at most one ℓ. Intuitively, the obstacles and the workspace boundary have a local region of influence defined by the constant τ.
To compensate for the unknown mass and friction terms of the dynamics (1), we define the estimates m̂ and α̂ of m and α (see Assumption 1), respectively, to be used in the control design.
Given the aforementioned definitions, we design a reference signal v_d : F_H → R^2 for the robot velocity ẋ, expressed in terms of the gradient of the navigation function and the nonsingular Jacobian matrix J_H(x) := ∂H(x)/∂x of H. Next, we define the respective velocity error e_v := ẋ − v_d(x) and design the control law u with a positive gain k_ϕ, while m̂ and α̂ evolve according to adaptation laws with positive gains k_m, k_α. The aforementioned control protocol guarantees safe asymptotic convergence of the robot to its goal from almost all collision-free initial conditions, i.e., except for a set of measure zero, provided that τ is sufficiently small and k_ϕ > α² (see Theorem 2 in [36]). Therefore, since the convergence is asymptotic, we conclude that there exists a finite time instant t_i > t_0 such that B̄(x_i(t_i), r_i) ⊂ π_{i_k'}, thus achieving the transition π_{i_k} →_i π_{i_k'}.
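The sketch below mirrors only the structure just described (a reference velocity obtained from the navigation function through the Jacobian of H, a velocity-error feedback term, and gradient-type adaptation of the estimates); the particular expressions, gains and update laws are placeholders that do not reproduce the cited equations, and grad_phi, jac_H and H are user-supplied callables.

```python
import numpy as np

def adaptive_nav_step(x, xdot, m_hat, alpha_hat, grad_phi, jac_H, H,
                      k1=1.0, k_phi=5.0, k_m=0.1, k_alpha=0.1, dt=1e-3):
    """One schematic control update for a single robot in the transformed disk world."""
    chi = H(x)                                              # transformed position
    v_d = -k1 * np.linalg.solve(jac_H(x), grad_phi(chi))    # reference velocity in x-coordinates
    e_v = xdot - v_d                                        # velocity error
    u = -k_phi * e_v + alpha_hat * xdot                     # feedback plus friction compensation
    m_hat = m_hat + k_m * float(e_v @ v_d) * dt             # illustrative adaptation updates
    alpha_hat = alpha_hat + k_alpha * float(e_v @ xdot) * dt
    return u, m_hat, alpha_hat
```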
2) Cooperative Object Transportation
We deal now with the control design for the cooperative object transportation problem. Consider an object j ∈ M grasped by a team of robots A at t_0 ∈ R_{≥0}, i.e., AG_j(t_0) = A, evolving subject to the dynamics (5) and satisfying B̄(x_j^o(t_0), r_{A,j}) ⊂ π_{j_k}, for some j_k ∈ K. The goal is the transportation of the object to some π_{j_k'}, j_k' ∈ K. As in the robot navigation case, we let x_j^e = x_j and r_j^e = r_{A,j}^o, denoting the entity consisting of object j and the robots A, and view the rest of the entities as static obstacles. By also aiming to avoid entering the RoI π_k, for k ∈ K\{j_k, j_k'}, the free space of the object-robots entity is defined as before. Let the desired navigation goal of the object be x_{d,j}, provided by the procedure P_s. Similarly to the robot navigation case, we use the transformation χ = H(x) to transform the environment to the open unit disk D, the obstacles to points b_ℓ, the object-robots entity to the point χ, and the object goal to χ_d = H(x_{d,j}). Next, by employing the function β(·), we design a reference signal v_d: F_H → R² for the object velocity that is identical to (9). In addition, we define the adaptation variables m̂_{A,j} and α̂_{A,j} as the estimates of the unknown coupled mass and friction coefficients m_{A,j} and α_{A,j} (see Assumption 1), respectively, and design the control law for the robots ℓ ∈ A analogously to (10), where c_ℓ^f are load-sharing coefficients satisfying c_ℓ^f ≥ 0, for all ℓ ∈ A, and Σ_{ℓ∈A} c_ℓ^f = 1; k_φ is a positive gain, and m̂_{A,j}, α̂_{A,j} evolve according to (11).
By invoking the property Σ_{ℓ∈A} c_ℓ^f = 1 and the fact that the object-robots system is converted to a point by the transformation H, we guarantee, similarly to the robot navigation case, the safe, asymptotic convergence of the object-robots entity to π_{j_k'} from almost all collision-free initial conditions, provided that k_φ > α². Hence, there exists a finite time instant t_j > t_0 such that B̄(x_j(t_j), r_{A,j}^o) ⊂ π_{j_k'}, thus achieving the transition π_{j_k} →_{A,j}^T π_{j_k'}.
B. High-Level Plan Generation
The second part of the solution is the derivation of a high-level plan that satisfies the given LTL formulas φ_i and φ_j^o. Thanks to (i) the proposed control laws that allow robot transitions π_k →_i π_k' and object transportations π_k →_{A,j}^T π_k', respectively, and (ii) the off-the-shelf control laws that guarantee grasp and release actions i →^g j and i →^r j, we can abstract the behavior of the multi-robot-object system using a finite transition system, as presented in the sequel.
Definition 4:
The coupled behavior of the overall system of the N agents and the M objects is modeled by the transition system TS := (Π_s, Π_s^init, →_s, Ψ, L, Λ, P_s, χ), whose elements are defined as follows:
(i) Π_s is the set of states, composed of the regions that the agents and the objects can be at, with Π_i = Π_j^o = Π, ∀i ∈ N, j ∈ M. By defining π̄ := (π_{k_1}, ..., π_{k_N}) and π̄^o := (π_{k_1^o}, ..., π_{k_M^o}), the coupled state π_s := (π̄, π̄^o, AG) belongs to Π_s if a) the respective robots and objects fit in the region, ∀k ∈ K, and b) k_i = k_j^o for all i ∈ N, j ∈ M such that i ∈ AG_j, i.e., a robot must be in the same region as the object it grasps;
(ii) Π_s^init ⊂ Π_s is the initial set of states at t = 0, which, owing to (i), satisfies the conditions of Problem 1;
(iii) →_s ⊂ Π_s × Π_s is a transition relation defined as follows: given the states π_s, π_s' ∈ Π_s, with π_s := (π̄, π̄^o, AG) := (π_{k_1}, ..., π_{k_N}, π_{k_1^o}, ..., π_{k_M^o}, AG_1, ..., AG_M) and π_s' := (π̄', π̄^{o'}, AG'), a transition π_s →_s π_s' occurs only if all the following hold: a) there are no i ∈ N, j ∈ M such that i ∈ AG_j, i ∉ AG_j' (or i ∉ AG_j, i ∈ AG_j') and k_i ≠ k_i', i.e., there are no simultaneous grasp/release and navigation actions; b) for all i ∈ N, j ∈ M such that i ∈ AG_j, i ∉ AG_j' (or i ∉ AG_j, i ∈ AG_j'), it holds that k_i = k_j^o = k_i' = k_j^{o'}, i.e., there are no simultaneous grasp/release and transportation actions; c) there are no i ∈ N and j, j' ∈ M, with j ≠ j', such that i ∈ AG_j and i ∈ AG_{j'}', i.e., there are no simultaneous grasp and release actions; d) there is no j ∈ M such that k_j^o ≠ k_j^{o'} and i ∉ AG_j, ∀i ∈ N (or i ∉ AG_j', ∀i ∈ N), i.e., there is no transportation of a non-grasped object; e) for every j ∈ M and T ⊆ N such that k_j^o ≠ k_j^{o'} and (i ∈ AG_j, i ∈ AG_j') ⇔ i ∈ T, it holds that Λ(m_j^o, ζ_T) ≠ ⊥, i.e., the agents grasping an object are powerful enough to transfer it;
(iv) Ψ := Ψ̄ ∪ Ψ̄^o, with Ψ̄ = ∪_{i∈N} Ψ_i and Ψ̄^o = ∪_{j∈M} Ψ_j^o, are the atomic propositions of the agents and objects, respectively, as defined in Section II.
(v) L: Π_s → 2^Ψ is a labeling function defined as follows: given a state π_s as in (12), L(π_s) returns the set of atomic propositions ψ_s ⊆ Ψ that hold at the regions occupied by the robots and the objects in π_s;
(vi) Λ and P_s are as defined in Section II;
(vii) χ: (→_s) → R_{≥0} is a function that assigns a cost to each transition π_s →_s π_s'. This cost might be related to the distance of the robots' regions in π_s to the ones in π_s', combined with the cost efficiency of the robots involved in transport tasks (according to ζ_i, i ∈ N).
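To make the transition relation in item (iii) more tangible, the following minimal sketch checks whether a candidate transition between two coupled states respects conditions a)–e). The dictionary-based state encoding and the helper strong_enough (standing in for Λ(m_j^o, ζ_T) ≠ ⊥) are illustrative assumptions, not structures defined in the paper.

```python
def valid_transition(k, ko, AG, k2, ko2, AG2, strong_enough):
    """Illustrative check of conditions a)-e) of Def. 4.
    k, k2:   dicts robot -> region index (before / after)
    ko, ko2: dicts object -> region index (before / after)
    AG, AG2: dicts object -> set of grasping robots (before / after)
    strong_enough(j, team): stands in for Lambda(m_j^o, zeta_T) != bottom."""
    robots, objects = k.keys(), ko.keys()
    for i in robots:
        released = {j for j in objects if i in AG[j] and i not in AG2[j]}
        grasped = {j for j in objects if i not in AG[j] and i in AG2[j]}
        for j in released | grasped:
            # a) no simultaneous grasp/release and navigation
            if k[i] != k2[i]:
                return False
            # b) no simultaneous grasp/release and transportation
            if not (k[i] == ko[j] == k2[i] == ko2[j]):
                return False
        # c) no simultaneous grasp and release actions
        if released and grasped:
            return False
    for j in objects:
        if ko[j] != ko2[j]:
            # d) no transportation of a non-grasped object
            if not AG[j] or not AG2[j]:
                return False
            # e) the grasping team must be powerful enough to transfer it
            team = {i for i in robots if i in AG[j] and i in AG2[j]}
            if not strong_enough(j, team):
                return False
    return True
```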
Given φ and the Product Büchi Automaton (PBA), an infinite path π_pl := π_{s,1} π_{s,2} ... of the TS satisfies φ if and only if trace(π_pl) ∈ Words(φ), which is equivalently denoted by π_pl |= φ, where trace(π_pl) ∈ (2^Ψ)^ω is defined as trace(π_pl) = L(π_{s,1}) L(π_{s,2}) .... Specifically, if there is a path satisfying φ, then there exists a path π_pl |= φ that can be written in a finite representation, called prefix-suffix structure, i.e., π_pl = π_pl^pre [π_pl^suf]^ω, where the prefix part π_pl^pre is executed only once, followed by the indefinite execution of the suffix part π_pl^suf. The prefix part π_pl^pre is the projection onto Π_s of a finite path p^pre that lives in Π_P.
Computing a plan π_pl is typically accomplished by applying graph-search methods to the PBA. Specifically, to generate a motion plan π_pl that satisfies φ, the PBA is viewed as a weighted directed graph G_P = {V_P, E_P, w_P}, where the set of nodes V_P is indexed by the set of states Π_P, the set of edges E_P is determined by the transition relation →_P, and the weights assigned to each edge are determined by the function χ_P. Then, to find the optimal plan π_pl |= φ, shortest paths towards final states and shortest cycles around them are computed. More details about this approach can be found in [38], [39] and the references therein. While any of the aforementioned methodologies could be used, in this work we employ STyLuS*, an algorithm that is designed to solve complex temporal planning problems in large-scale multi-robot systems and has been shown to achieve significantly lower complexity, in terms of running time and memory, than standard graph-search methods [40]. Specifically, STyLuS* is a sampling-based method that incrementally builds trees that approximate the state space and transitions of the product automaton and does not require sophisticated graph-search techniques. Technically, STyLuS* first builds a tree G_T = {V_T, E_T, Cost}, where V_T ⊆ Π_P is the set of nodes, E_T is the set of edges, and Cost is defined as per χ_P, determining the cost of reaching a tree node from its root. This tree is rooted at the initial state π_P^0 and is used for the synthesis of the prefix part. The tree is constructed incrementally, in a sampling-based fashion, and its construction terminates after a user-specified number of iterations n_max. Then, we compute paths in the constructed tree structure that connect the root to the detected final states, if any. These paths correspond to prefix parts of candidate feasible paths. To construct the corresponding suffix paths, new trees are built, similarly, rooted at the previously detected final states, aiming to compute cycles around the tree roots. Among all the detected prefix-suffix paths, STyLuS* returns the one with the minimum cost. As it was shown in [40], STyLuS* is probabilistically complete and asymptotically optimal; that is, the probability of finding a feasible and optimal solution converges to 1 as n_max → ∞.
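As a concrete illustration of the baseline prefix-suffix search mentioned above (the graph-search approach of [38], [39], not STyLuS* itself), the sketch below runs shortest-path and shortest-cycle queries on a weighted product-automaton graph; the graph G_P and the sets of initial and final (accepting) states are assumed to be given.

```python
import networkx as nx

def shortest_cycle(GP, q):
    """Cheapest cycle q -> ... -> q in the weighted digraph GP."""
    best_cost, best_cycle = float("inf"), None
    for succ in GP.successors(q):
        w = GP[q][succ]["weight"]
        try:
            cost, path = nx.single_source_dijkstra(GP, succ, target=q, weight="weight")
        except nx.NetworkXNoPath:
            continue
        if w + cost < best_cost:
            best_cost, best_cycle = w + cost, [q] + path
    return best_cost, best_cycle

def prefix_suffix_plan(GP, initial_states, final_states):
    """Baseline prefix-suffix synthesis: shortest path from an initial product
    state to a final state, plus the shortest cycle around that final state."""
    best = None
    for q0 in initial_states:
        dist, paths = nx.single_source_dijkstra(GP, q0, weight="weight")
        for qf in final_states:
            if qf not in dist:
                continue
            cycle_cost, cycle = shortest_cycle(GP, qf)
            if cycle is None:
                continue
            total = dist[qf] + cycle_cost
            if best is None or total < best[0]:
                best = (total, paths[qf], cycle)
    # Returns (prefix, suffix) as lists of product-automaton states, or None.
    return None if best is None else (best[1], best[2])
```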
Finally, note that the constructed trees explore the state space of the product automaton and, therefore, the designed prefix-suffix path is defined as an infinite sequence of product-automaton states. By projecting it onto the state space of the transition system TS, we obtain a high-level prefix-suffix plan defined as a sequence of states π_pl := π_{s,1} π_{s,2} ... |= φ. The corresponding sequence of atomic propositions is ψ_pl = trace(π_pl) = ψ_{s,1} ψ_{s,2} ..., with π_{s,ℓ} := (π̄_ℓ, π̄_ℓ^o, w̄_ℓ) ∈ Π_s, ∀ℓ ∈ N, where π̄_ℓ := (π_{k_{1,ℓ}}, π_{k_{2,ℓ}}, ...) with k_{i,ℓ} ∈ K, ∀i ∈ N; π̄_ℓ^o := (π_{k_{1,ℓ}^o}, π_{k_{2,ℓ}^o}, ...) with k_{j,ℓ}^o ∈ K, ∀j ∈ M; and w̄_ℓ := (w_{1,ℓ}, w_{2,ℓ}, ...) with w_{i,ℓ} ∈ AG_i, ∀i ∈ N. The path π_pl is then projected to the individual sequences of regions π_{k_{j,1}^o} π_{k_{j,2}^o} ... for each object j ∈ M, as well as to the individual sequences of regions π_{k_{i,1}} π_{k_{i,2}} ... and boolean grasping variables w_{i,1} w_{i,2} ... for each robot i ∈ N. The aforementioned sequences determine the behavior of robot i ∈ N, i.e., the sequence of actions (transition, transportation, grasp, release, or stay idle) it must take.
The sequences π_{k_{i,1}} π_{k_{i,2}} ..., ψ_{i,1} ψ_{i,2} ... and π_{k_{j,1}^o} π_{k_{j,2}^o} ..., ψ_{j,1}^o ψ_{j,2}^o ... over Π, 2^{Ψ_i} and Π, 2^{Ψ_j^o}, respectively, produce the trajectories q_i(t) and x_j^o(t), ∀i ∈ N, j ∈ M. The corresponding behaviors are β_i and β_j^o, respectively, according to Section II-B, with A_i(q_i(t_{i,ℓ})) ⊂ π_{k_{i,ℓ}}, σ_{i,ℓ} ∈ L_i(π_{k_{i,ℓ}}) and O_j(x_j^o(t_{j,ℓ}^o)) ⊂ π_{k_{j,ℓ}^o}, σ_{j,ℓ}^o ∈ L_j^o(π_{k_{j,ℓ}^o}). Thus, it is guaranteed that σ_i |= φ_i and σ_j^o |= φ_j^o and, consequently, the behaviors β_i and β_j^o satisfy the formulas φ_i and φ_j^o, respectively, ∀i ∈ N, j ∈ M. The aforementioned reasoning is summarized in the next theorem:
Theorem 1: The execution of the path (π_pl, ψ_pl) of TS guarantees behaviors β_i, β_j^o that yield the satisfaction of φ_i and φ_j^o, respectively, ∀i ∈ N, j ∈ M, providing, therefore, a solution to Problem 1.
IV. SIMULATIONS
In this section, we provide two case studies in an obstacle-cluttered office environment. We choose the atomic propositions for the robots and objects as Ψ_i = {"i-π_1", ..., "i-π_K"} and Ψ_j^o = {"O_j-π_1", ..., "O_j-π_K"}, respectively, for i ∈ N, j ∈ M, indicating their presence in the regions of interest. For the constructed transition systems, we set the cost χ as the Euclidean distance between the RoI contained in the nodes of the transitions.
For the continuous control design, we choose the robot dynamics of the form (1) with mass m_i = 1. We further choose r̄ = 0.1 and a variation of (8) for β, with τ = r̄². Finally, we set the control gains to k_1 = 0.01, k_2 = 5, k_φ = 1, and k_m = k_α = 0.01.
We further illustrate the continuous control design. In particular, we consider the actions of robot navigation and object transportation. Firstly, we examine the navigation of robot 1 from π_1 to π_3. The results are depicted in Figs. 2 and 3. The left part of Fig. 2 shows the trajectory of robot 1 in the environment, where the obstacles and the boundary have absorbed the spherical volume of the robot; the right part of Fig. 2 shows the trajectory of robot 1 in the transformed point world, where the obstacles are represented by points. Fig. 3 depicts the control input u_1(t) (left) and the evolution of the adaptation signals m̂_1(t), α̂_1(t) (right).
We further examine the transportation of object 1 by robots 1 and 2 from π_2 to π_4, where we chose the load-sharing coefficients c_1^f = c_2^f = 0.5. The results are depicted in Figs. 4 and 5. The left part of Fig. 4 shows the trajectory of the coupled object-robots system in the environment, where the obstacles and the boundary have absorbed its spherical volume; the right part of Fig. 4 shows the trajectory of the coupled system in the transformed point world, where the obstacles are represented by points. Fig. 5 depicts the control input u_1(t) (left) and the evolution of the adaptation signals m̂_1(t), α̂_1(t) (right) of robot 1, which are identical to the ones of robot 2.
V. CONCLUSION
We propose an algorithm for the control and planning of multi-robot-object systems subject to LTL tasks. We develop adaptive feedback-control protocols for robot navigation and cooperative object transportation, which enable the abstraction of the underlying continuous dynamics to a finite transition system. We compose the transition system with an automaton that represents the LTL task and use a sampling-based planner to derive an optimal task-satisfying plan for the robots. | 9,301.6 | 2022-04-25T00:00:00.000 | [
"Computer Science"
] |
An Efficient Deep Learning Approach to Pneumonia Classification in Healthcare
This study proposes a convolutional neural network model trained from scratch to classify and detect the presence of pneumonia from a collection of chest X-ray image samples. Unlike other methods that rely solely on transfer learning approaches or traditional handcrafted techniques to achieve a remarkable classification performance, we constructed a convolutional neural network model from scratch to extract features from a given chest X-ray image and classify it to determine if a person is infected with pneumonia. This model could help mitigate the reliability and interpretability challenges often faced when dealing with medical imagery. Unlike other deep learning classification tasks with sufficient image repository, it is difficult to obtain a large amount of pneumonia dataset for this classification task; therefore, we deployed several data augmentation algorithms to improve the validation and classification accuracy of the CNN model and achieved remarkable validation accuracy.
Introduction
The risk of pneumonia is immense for many, especially in developing nations where billions face energy poverty and rely on polluting forms of energy. The WHO estimates that over 4 million premature deaths occur annually from household air pollution-related diseases including pneumonia [1]. Over 150 million people get infected with pneumonia on an annual basis, especially children under 5 years old [2]. In such regions, the problem can be further aggravated due to the dearth of medical resources and personnel. For example, in Africa's 57 nations, a gap of 2.3 million doctors and nurses exists [3,4]. For these populations, accurate and fast diagnosis means everything. It can guarantee timely access to treatment and save much needed time and money for those already experiencing poverty.
Deep neural network models have conventionally been designed, and experiments performed upon them, by human experts in a continuing trial-and-error method. This process demands enormous time, know-how, and resources. To overcome this problem, a novel but simple model is introduced to automatically perform optimal classification tasks with a deep neural network architecture. The neural network architecture was specifically designed for pneumonia image classification tasks. The proposed technique is based on the convolutional neural network algorithm, utilizing a set of neurons to convolve on a given image and extract relevant features from it. Demonstration of the efficacy of the proposed method, with the minimization of the computational cost as the focal point, was conducted and compared with the existing state-of-the-art pneumonia classification networks.
In recent times, CNN-motivated deep learning algorithms have become the standard choice for medical image classification, although state-of-the-art CNN-based classification techniques still rely on fixed network architectures designed through the same trial-and-error process. U-Net [5], SegNet [6], and CardiacNet [7] are some of the prominent architectures for medical image examination. To design these models, specialists often have a large number of design decisions to make, and intuition significantly guides the manual search process. Models like evolutionary-based algorithms [8] and reinforcement learning (RL) [9] have been introduced to locate optimum network hyperparameters during training. However, these techniques are computationally expensive, gulping a ton of processing power. As an alternative, our study proposes a conceptually simple yet efficient network model to handle the pneumonia classification problem as shown in Figures 1 and 2.
CNNs have an edge over DNNs by possessing a visual processing scheme that is equivalent to that of humans and an extremely optimized structure for handling images and 2D and 3D shapes, as well as the ability to extract abstract 2D features through learning. The max-pooling layer of the convolutional neural network is effective in absorbing shape variations and comprises sparse connections in conjunction with tied weights. When compared with fully connected (FC) networks of equivalent size, CNNs have a considerably smaller number of parameters. Most importantly, gradient-based learning algorithms are employed in training CNNs and they are less prone to the diminishing gradient problem. Since the gradient-based algorithm is responsible for training the whole network in order to directly diminish an error criterion, highly optimized weights can be produced by CNNs.
Related Works
Latest improvements in deep learning models and the availability of huge datasets have assisted algorithms to outperform medical personnel in numerous medical imaging tasks such as skin cancer classification [11], hemorrhage identification [12], arrhythmia detection [13], and diabetic retinopathy detection [14]. Automated diagnoses enabled by chest radiographs have received growing interests.
These algorithms are increasingly being used for conducting lung nodule detection [15] and pulmonary tuberculosis classification [16]. A study of the performance of several convolutional models on diverse abnormalities, relying on the publicly available OpenI dataset [17], found that the same deep convolutional network architecture does not perform well across all abnormalities [18], that ensemble models significantly improved classification accuracy when compared with single models, and, finally, that deep learning methods improved accuracy when compared to rule-based methods.
Statistical dependency between labels [19] was studied to arrive at more precise predictions, thereby outperforming other techniques on 13 of the 14 classes considered in [20]. Algorithms for mining and predicting labels emanating from radiology images as well as reports have been studied [21][22][23], but the image labels were generally constrained to disease tags, thus lacking contextual information. Detection of diseases from X-ray images was examined in [24][25][26], classifications on image views from chest X-rays were carried out in [27], and body-part segmentation from chest X-ray images and computed tomography was performed in [23,28]. Conversely, learning image features from text and creating image descriptions relative to what a human would describe are yet to be exploited.
Materials and Methods
We present the detailed experiments and evaluation steps undertaken to test the effectiveness of the proposed model. Our experiments were based on a chest X-ray image dataset proposed in [29]. We deployed Keras open-source deep learning framework with tensorflow backend [10] to build and train the convolutional neural network model. All experiments were run on a standard PC with an Nvidia GeForce GTX TITAN Xp GPU card of 12 GB, cuDNN v7.0 library, and CUDA Toolkit 9.0.
Dataset.
The original dataset [25] consists of three main folders (i.e., training, testing, and validation folders) and two subfolders containing pneumonia (P) and normal (N) chest X-ray images, respectively. A total of 5,856 X-ray images of anterior-posterior chests were carefully chosen from retrospective pediatric patients between 1 and 5 years old. The entire chest X-ray imaging was conducted as part of patients' routine medical care. To balance the proportion of data assigned to the training and validation set, the original data category was modified. We rearranged the entire data into a training and a validation set only. A total of 3,722 images were allocated to the training set and 2,134 images were assigned to the validation set to improve validation accuracy.
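A small script along the following lines could perform the described re-split; the source folder layout (train/test/val with NORMAL and PNEUMONIA subfolders) and the destination names are assumptions, while the training/validation ratio follows the 3,722/2,134 split mentioned above.

```python
import os
import random
import shutil

# Illustrative re-split of the chest X-ray images into training/validation
# folders only, keeping the paper's 3,722/2,134 ratio per class.
random.seed(0)
SRC, DST = "chest_xray", "chest_xray_resplit"   # assumed folder names
for label in ("NORMAL", "PNEUMONIA"):
    images = []
    for split in ("train", "test", "val"):
        folder = os.path.join(SRC, split, label)
        images += [os.path.join(folder, f) for f in os.listdir(folder)]
    random.shuffle(images)
    n_train = round(len(images) * 3722 / (3722 + 2134))
    for i, path in enumerate(images):
        subset = "train" if i < n_train else "val"
        out_dir = os.path.join(DST, subset, label)
        os.makedirs(out_dir, exist_ok=True)
        shutil.copy(path, out_dir)
```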
Preprocessing and Augmentation.
We employed several data augmentation methods to artificially increase the size and quality of the dataset.
This process helps in solving overfitting problems and enhances the model's generalization ability during training. The settings deployed in image augmentation are shown below in Table 1. The rescale operation represents image reduction or magnification during the augmentation process. The rotation range denotes the range in which the images were randomly rotated during training, i.e., 40 degrees. Width shift is the horizontal translation of the images by a factor of 0.2, and height shift is the vertical translation of the images by a factor of 0.2. In addition, a shear range of 0.2 clips the image angles in a counterclockwise direction. The zoom range randomly zooms the images by a ratio of 0.2, and finally, the images were flipped horizontally.

Figure 3 shows the overall architecture of the proposed CNN model, which consists of two major parts: the feature extractors and a classifier (sigmoid activation function). Each layer in the feature extraction part takes its immediate preceding layer's output as input, and its output is passed as an input to the succeeding layers. The proposed architecture in Figure 3 consists of the convolution, max-pooling, and classification layers combined together. The feature extractors comprise conv 3 × 3, 32; conv 3 × 3, 64; conv 3 × 3, 128; conv 3 × 3, 128; max-pooling layers of size 2 × 2; and a ReLU activator between them. The outputs of the convolution and max-pooling operations are assembled into 2D planes called feature maps. With an input image of size 200 × 200 × 3, we obtained feature maps of sizes 198 × 198 × 32, 97 × 97 × 64, 46 × 46 × 128, and 21 × 21 × 128 from the convolution operations, and of sizes 99 × 99 × 32, 48 × 48 × 64, 23 × 23 × 128, and 10 × 10 × 128 from the pooling operations, respectively, as shown in Table 2. It is worth noting that each plane of a layer in the network was obtained by combining one or more planes of previous layers.

The classifier is placed at the far end of the proposed convolutional neural network (CNN) model. It is simply an artificial neural network (ANN) often referred to as a dense layer. This classifier requires individual features (vectors) to perform computations like any other classifier. Therefore, the output of the feature extractor (CNN part) is converted into a 1D feature vector for the classifier. This process is known as flattening, where the output of the convolution operation is flattened to generate one lengthy feature vector for the dense layer to utilize in its final classification process. The classification part contains a flatten layer, a dropout of rate 0.5, two dense layers of size 512 and 1, respectively, a ReLU between the two dense layers, and a sigmoid activation function that performs the classification task.
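A rough Keras sketch of the augmentation settings and of the architecture described above (four 3 × 3 convolution blocks with 32, 64, 128 and 128 filters, 2 × 2 max-pooling, flattening, a 0.5 dropout, a 512-unit dense layer and a sigmoid output for 200 × 200 × 3 inputs) could look as follows; the optimizer and loss shown are common choices for binary classification and are not specified in the text.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation settings as described above (Table 1).
train_gen = ImageDataGenerator(
    rescale=1.0 / 255, rotation_range=40,
    width_shift_range=0.2, height_shift_range=0.2,
    shear_range=0.2, zoom_range=0.2, horizontal_flip=True)

# Feature extractor (four 3x3 conv blocks with 2x2 max-pooling) and classifier.
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(200, 200, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dropout(0.5),
    layers.Dense(512, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
# Optimizer/loss are illustrative defaults for a sigmoid binary classifier.
model.compile(optimizer="rmsprop", loss="binary_crossentropy",
              metrics=["accuracy"])
```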
Results
To evaluate and validate the effectiveness of the proposed approach, we conducted the experiments 10 times, each run lasting three hours. Parameters and hyperparameters were heavily tuned to increase the performance of the model. Different results were obtained, but this study reports only the most valid.
As explained above, methods such as data augmentation, learning rate variation, and annealing were deployed to assist in fitting the small dataset into deep convolutional neural network architecture.
This was in order to obtain substantial results, as shown in Figure 4. The final results obtained are a training loss of 0.1288, a training accuracy of 0.9531, a validation loss of 0.1835, and a validation accuracy of 0.9373.
CNN frameworks always require images of fixed sizes during training. Thus, to demonstrate the validation performance of our model on variant input data, we reshaped the X-ray images into 100 × 100 × 3, 150 × 150 × 3, 200 × 200 × 3, 250 × 250 × 3, and 300 × 300 × 3 sizes, respectively, trained the model for three hours on each, and obtained their overall average performance as shown in Figure 4 and Table 3.
The larger the size of the transformed images, the lower the validation accuracy obtained. In contrast, smaller-sized training images induced a slight improvement in validation accuracy, as shown in Figure 5. However, these small slips in validation accuracy do not have a substantial impact on the overall classification performance of the proposed model. Larger images also required more training time and computation cost, and the performances of the 150 × 150 × 3 and 200 × 200 × 3 image sizes were similar, as shown in Table 3 and Figure 5, respectively. Finally, we propose the 200 × 200 × 3 model since it produced a better validation accuracy of approximately 94 percent with a validation loss of 0.1835.
Discussion
We developed a model to detect and classify pneumonia from chest X-ray images taken from frontal views, with high validation accuracy. The algorithm begins by transforming the chest X-ray images into sizes smaller than the original. The next step involves the identification and classification of images by the convolutional neural network framework, which extracts features from the images and classifies them. Due to the effectiveness of the trained CNN model for identifying pneumonia from chest X-ray images, the validation accuracy of our model was significantly higher when compared with other approaches. To affirm the performance of the model, we repeated the training process of the model several times, each time obtaining the same results. To validate the performance of the trained model on different chest X-ray image sizes, we varied the sizes of the training and validation dataset and still obtained relatively similar results. This will go a long way in improving the health of at-risk children in energy-poor environments. The study was limited by the depth of data. With increased access to data and training of the model with radiological data from patients and nonpatients in different parts of the world, significant improvements can be made.
Conclusions
We have demonstrated how to classify positive and negative pneumonia data from a collection of X-ray images. We built our model from scratch, which separates it from other methods that rely heavily on the transfer learning approach. In the future, this work will be extended to detect and classify X-ray images consisting of lung cancer and pneumonia. Distinguishing X-ray images that contain lung cancer and pneumonia has been a big issue in recent times, and our next approach will tackle this problem.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper. | 3,106.6 | 2019-03-27T00:00:00.000 | [
"Computer Science"
] |
Automatic Classification of Rotor Faults in Soft-Started Induction Motors, Based on Persistence Spectrum and Convolutional Neural Network Applied to Stray-Flux Signals
Due to their robustness, versatility and performance, induction motors (IMs) have been widely used in many industrial applications. Despite their characteristics, these machines are not immune to failures. In this sense, breakage of the rotor bars (BRB) is a common fault, which is mainly related to the high currents flowing along those bars during start-up. In order to reduce the stresses that could lead to the appearance of these faults, the use of soft starters is becoming usual. However, these devices introduce additional components in the current and flux signals, affecting the evolution of the fault-related patterns and so making the fault diagnosis process more difficult. This paper proposes a new method to automatically classify the rotor health state in IMs driven by soft starters. The proposed method relies on obtaining the Persistence Spectrum (PS) of the start-up stray-flux signals. To obtain a proper dataset, Data Augmentation Techniques (DAT) are applied, adding Gaussian noise to the original signals. Then, these PS images are used to train a Convolutional Neural Network (CNN), in order to automatically classify the rotor health state, depending on the severity of the fault, namely: healthy motor, one broken bar and two broken bars. This method has been validated by means of a test bench consisting of a 1.1 kW IM driven by four different soft starters coupled to a DC motor. The results confirm the reliability of the proposed method, obtaining a classification rate of 100.00% when analyzing each model separately and 99.89% when all the models are analyzed at a time.
Introduction
Induction Motors (IMs) are widely used in a large part of industrial applications in industrialized countries [1]. Their robustness, reliability, easy maintenance and low cost, among other characteristics, have contributed to this fact. Squirrel Cage Induction Motors (SCIM), more specifically, are a significant part of the IMs used in those applications [2], consuming almost 89% of the power that industrial facilities demand [3]. Despite those characteristics, SCIM are not immune to failures. Due to the high currents during the start-up and other transients, they deal with thermo-mechanical stresses in the rotor bars that can lead to a fault. This is particularly true in applications where continuous start-stop cycles are required [4]. To avoid stresses during the start-up, several starting systems are used in the industry. Among others, the use of auto-transformers, stator resistors, soft starters or the star-delta starting are the most usual starting systems [5]. In this context, soft starters, which rely on a power electronics circuit based typically on thyristors connected in anti-parallel, have become one of the most preferred starting systems due to their advantages. A previous work proposed a method to detect the presence and the severity of bar breakages in induction motors driven by soft starters, achieving an accuracy rate of 94.40%. By their side, in [28], the authors used Linear Discriminant Analysis (LDA) and an FFNN, applied to a combination of current and stray-flux signals, to detect the presence and the severity of bar breakages in an IM driven by soft starters. In this case, the accuracy rate achieved was 94.40%.
Attending to all the above-mentioned considerations, the automatic methods for rotor fault detection in soft-started induction motors are still improvable. This work presents a new methodology for the automatic detection and severity categorization of rotor faults in induction motors driven by soft starters. The novelty of the proposed methodology is the use of the Persistence Spectrum (PS) applied to the start-up transient stray-flux signals. Then, a convolutional neural network (CNN) is used to automatically categorize the severity of the rotor faults. In order to improve the dataset, data augmentation techniques are used. In this regard, Data Augmentation Techniques (DAT) have been proven to be a reliable way to enhance the data base used in CNN. In [18,27] it was stated that the use of Data Augmentation Techniques is a reliable method to deal with the scarcity of samples, providing a good dataset to use in a CNN. In particular, adding Gaussian noise to a signal is one of the DAT that is commonly employed.
On the other hand, Persistence Spectrum (PS), also known as Power-Spectrum Histogram, shows the percentage of the time that a particular frequency is present in a signal. That is to say that the longer a given frequency persists in a signal as it evolves, the higher is its percentage of time. Therefore, the brighter it will appear in the persistence spectrum (PS). Hence, if there is any hidden component in the signal, it will be revealed, even if it is a light one. Thus, the PS images are suitable to be used in a CNN.
To summarize, since the use of soft starters makes it more difficult to identify the fault-related patterns with the most commonly used time-frequency tools, the main goal of this work was to obtain a methodology that led to an easier identification of the presence of rotor faults in soft-started induction motors. Additionally, the suitability of the method to perform an automatic fault classification system was also a goal of this work. Having this in mind, the characteristics of the PS made it suitable for this application, one of them being its ability to reveal very short events present in the analyzed signal. Finally, the stray-flux during the startup transient was the chosen magnitude for this study due to its richer harmonic content compared to other magnitudes.
In order to verify the effectiveness of the proposed methodology, a test-bench consisting of a 1.1 kW induction motor and a DC motor acting as a load was used. Four different commercial soft starters were used to start the motor. The obtained results, achieving an accuracy rate of 100% for each model separately and 99.89% for all the models together, show the capabilities of the proposed approach.
Finally, to provide a global idea of the paper, its structure is presented here: Section 2 exposes the materials and methods, including the theoretical background and the proposed methodology. Section 3 gives information about the experimental setup used for the tests. Section 4 shows the results and their discussion and finally, Section 5 gives the conclusions of the study.
Stray-Flux Analysis
In recent years, the use of the magnetic flux generated by electrical motors to obtain information about their health state has gained interest. The analysis of this magnitude has been proven to be a good alternative to other typical techniques used in the industry for the condition monitoring of electrical motors (e.g., MCSA).
Within this methodology, two approaches have arisen: (1) air-gap flux analysis [29] and (2) stray-flux analysis [30]. Among them both, the second one has attracted a significant interest because of many reasons. Among them, the low cost of the required sensors [31] and their simple and flexible installation on the frame of the motor [20], the fact that it is a non-invasive technique [17] as well as the fact that it provides reliability in some cases where other techniques yield to false fault positives [4,32] are the most important.
Due to the non-invasive nature of the stray-flux analysis technique, it is possible to install sensors in different positions on the motor frame. This fact allows one to capture different flux components depending on the sensor position [33]. In an induction motor, axial and radial stray-flux components can be distinguished [34]. It has been proven in other works that the presence of faults in electrical motors may affect the stray-flux, thus amplifying some specific frequency components of the stray-flux signal that depend on the existing fault [35]. In Figure 1, the different stray-flux components and positions of the sensor are shown. In this regard, in position A, mainly the axial flux is captured by the sensor, while in position C, mainly the radial component is acquired. Finally, setting the sensor in position B allows one to capture a combination of radial and axial stray-flux.
Fault-Related Patterns: Theoretical Frequency Evolution during the Start-Up Transient
Many previous works have proven that the presence of rotor faults affects the stray-flux, amplifying some specific harmonics which are related to each fault. Particularly, as it has been stated by some researchers, rotor bar breakages affect the following harmonics in the Fourier spectrum of stray-flux signals:
1.
Side band harmonics (f_SH): These harmonics mainly appear in the radial component of the stray-flux [35,36]. Their frequency values can be calculated by Equation (1): f_SH = f·(1 ± 2·s) (1)
2.
Axial components: Mainly observed in the axial component of the stray-flux [17], they can be calculated by Equations (2) and (3), namely s·f (2) and 3·s·f (3), respectively. For all the above-mentioned components, s refers to the slip and f is the supply frequency.
The theoretical transient evolutions of radial and axial components related to rotor bar breakages are shown in Figure 2. As it can be seen in that figure, the upper side harmonic, given by f_SH = f·(1 + 2·s) and depicted in blue, drops from 150 Hz to almost 50 Hz. On the other hand, the lower side harmonic, given by f_SH = f·(1 − 2·s) and depicted in orange, drops from 50 Hz to 0 Hz and then rises to almost 50 Hz. Regarding the axial components, s·f (depicted in yellow) drops from 50 Hz to almost 0 Hz, while 3·s·f (depicted in purple) drops from 150 Hz to almost 0 Hz. That evolution is valid for stray-flux signals during a direct online start-up transient.
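For illustration, the theoretical trajectories of these fault-related components during a direct online start-up can be evaluated directly from the expressions above; the 50 Hz supply frequency and the slip sweep from 1 towards 0 are the only assumptions in this sketch.

```python
import numpy as np

f = 50.0                                  # supply frequency (Hz), assumed
s = np.linspace(1.0, 0.01, 500)           # slip: ~1 at start-up, ~0 at steady state

upper_sideband = f * (1 + 2 * s)          # Eq. (1), "+" sign: 150 Hz -> ~50 Hz
lower_sideband = np.abs(f * (1 - 2 * s))  # Eq. (1), "-" sign: 50 -> 0 -> ~50 Hz
axial_sf = s * f                          # Eq. (2): 50 Hz -> ~0 Hz
axial_3sf = 3 * s * f                     # Eq. (3): 150 Hz -> ~0 Hz
```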
Persistence Spectrum
Persistence Spectrum (PS) is a commonly used technique in spectrum analyzers. Also known as Spectrum Histogram, it is a histogram in power-frequency space. It allows one to see the percentage of time that a specific frequency is present in a signal. The more time a specific frequency persists in a signal as it evolves, the brighter it will appear in the PS. Therefore, it allows one to see very short events and even low power signals hidden in other signals [37].
The procedure to obtain the PS follows the steps listed below [37]:
•
Step 1: The original signal is split into different segments of the same length (see Figure 3). These segments may overlap or not, but overlapping leads to more detailed spectrum analyses. The time resolution, or segment length, has to be equal to or smaller than the signal length. The number of segments is given by Equation (4): ⌊(N_x − L)/(M − L)⌋ (4), with N_x being the signal duration or length, L the length of the overlap and M the time resolution or segment length. Symbols ⌊·⌋ denote the floor function, which rounds the result down to the nearest integer.
•
Step 2: Once the signal is split, the power spectrum of each segment is computed by applying the Short-Time Fourier Transform (STFT), as shown in Figure 3. The STFT matrix is obtained by applying Equation (5).
As it was stated in [38][39][40], the m-th element of the STFT matrix is given by Equation (6), where: x(n) is the input signal at time n; g(n) is the window function (a Kaiser window in this work); X_m(f) is the Discrete Fourier Transform (DFT) of the windowed data centered at time mR; and R is the number of samples between subsequent DFTs (the difference between the segment length and the overlap length).
For each segment, as stated in [39,41], the power spectrum is given by Equation (7).
•
Step 3: A bivariate histogram of the power spectrum logarithm is computed for each time value. In this regard, each segment corresponds to a time value. Every power-frequency bin in which there is signal energy at that time increases the corresponding matrix element by "1" (see Figure 3).
•
Step 4: Once all the bivariate histograms are obtained, an accumulated histogram is plotted against the frequency (X axis) and the power (Y axis). Brighter colors represent higher presence in time of a component.
In Figure 3, an overview of the Persistence Spectrum computation procedure is shown [37]. In this figure, a 50% overlap rate is applied, which is the same rate used in this work.
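Following the four steps above, a rough Python approximation of the persistence spectrum could be sketched as shown below; the segment length, the Kaiser shape parameter and the power range are illustrative choices and do not reproduce the exact settings of any particular implementation.

```python
import numpy as np
from scipy.signal import stft

def persistence_spectrum(x, fs, nperseg=1024, power_bins=100,
                         power_range=(-100.0, 0.0)):
    """Rough persistence-spectrum estimate following Steps 1-4:
    split the signal with 50% overlap (Kaiser window), take the STFT,
    and histogram the log-power of every segment per frequency bin."""
    f, t, Z = stft(x, fs=fs, window=("kaiser", 20), nperseg=nperseg,
                   noverlap=nperseg // 2)
    p_db = 10.0 * np.log10(np.abs(Z) ** 2 + 1e-12)    # log power per segment
    edges = np.linspace(*power_range, power_bins + 1)
    hist = np.zeros((power_bins, f.size))
    for k in range(f.size):
        hist[:, k], _ = np.histogram(p_db[k, :], bins=edges)
    # Percentage of segments in which each (frequency, power) bin is occupied.
    return f, edges, 100.0 * hist / t.size
```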
Convolutional Neural Network (CNN)
Being a type of Artificial Neural Networks (ANN), CNNs perform specially well in recognizing images. They are composed of an input layer, several hidden layers and an output layer. What differentiates CNNs from other ANNs is the presence of at least one convolutional layer as a part of the hidden layers. And it is this convolution operation which identifies local characteristics of the input data that can be used for the classification.
The basics of CNNs are explained in [42]. Nevertheless, their way of operation is summarized here:
•
Learning stage: Assuming that the input data x^{l−1} includes m 2-D matrices (feature maps), they are convolved in the convolutional layer with the learnable kernels that the layer consists of. That is to say, each input matrix x_i^{l−1} (i ∈ m) is convolved with the kernel (or filter) k_j. After this, the sum over all inputs is added to the bias b_j^l. Then, the activation function f (typically a ReLU function) is fed with the result and produces the final output of the j-th kernel (or filter). This is mathematically expressed in Equation (8): x_j^l = f(Σ_{i∈m} x_i^{l−1} ∗ k_j + b_j^l) (8). After this, a batch normalization layer is typically used. It helps to make the training faster by normalizing every input channel across a mini-batch. Finally, a pooling layer divides the input into smaller areas and then calculates the average or the maximum of those areas [43].
Classification stage: Consisting typically of two layers (a fully connected layer and a classification layer), this stage combines all the features extracted from the input data in the learning stage. First, the fully connected layer generates a vector with as many dimensions as the number of classes the CNN is able to predict. Then, a classification layer, usually using a softmax function, provides the classification output.
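As a minimal numerical illustration of Equation (8), the sketch below convolves m input feature maps with n kernels, adds a per-kernel bias and applies a ReLU activation; as in most deep learning frameworks, the operation implemented is actually a cross-correlation.

```python
import numpy as np

def conv_layer(x, kernels, bias):
    """Naive illustration of Eq. (8): each output map j is the sum of the
    input maps cross-correlated with kernel j, plus a bias, through a ReLU.
    x: (m, H, W) input maps; kernels: (n, m, kh, kw); bias: (n,)."""
    m, H, W = x.shape
    n, _, kh, kw = kernels.shape
    out = np.zeros((n, H - kh + 1, W - kw + 1))
    for j in range(n):
        for r in range(out.shape[1]):
            for c in range(out.shape[2]):
                patch = x[:, r:r + kh, c:c + kw]
                out[j, r, c] = np.sum(patch * kernels[j]) + bias[j]
    return np.maximum(out, 0.0)   # ReLU activation
```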
Proposed Methodology
The proposed methodology consists of 5 main steps. First, the current and stray-flux signals are captured by means of different sensors. Then, White Gaussian Noise is added to the original signal to increment the number of signals. In the third step, the persistence spectrum of each signal is computed. Then, the PS images obtained are cropped and resized to adapt them to the requirements of the CNN, and finally, these images are used as input of the classification CNN in the fifth step. Stated yet another way, the input of the CNN will be the PS images after being cropped and resized to 224 × 224 × 3 images, while the output of the CNN will be the three rotor fault classes, namely healthy state, one broken bar and two broken bars.
To better illustrate the sequence of the procedure, a flow diagram of the proposed methodology is shown in Figure 4. As mentioned before, the proposed methodology follows the steps listed here:
•
Step 1: Acquisition of the current and stray-flux signals. These two magnitudes were captured, simultaneously, during the start-up transients. To do this, a current clamp and a coil-based flux sensor, both of them described in the next section, were used. The signals were stored in a waveform recorder (oscilloscope) and then downloaded to a PC, where the signal analyses were performed. The selected position of the flux sensor was the one allowing one to capture a combination of radial and axial stray-flux (Position B, see Figure 1).
•
Step 2: Data augmentation. In order to generate a higher number of training samples, a Data Augmentation Technique (DAT) was applied. In this case, the addition of White Gaussian Noise (WGN) to the original stray-flux signals was the selected technique.
The addition of Gaussian Noise to the original signals is a data augmentation tool that is frequently used. This technique can increase the dataset by selecting different values of standard deviations (σ) [44] or, since it directly affects the value of σ, different values of Signal-to-Noise Ratio (SNR). In this regard, also in [44], it is proven that values of SNR smaller than 10 dB report low improvements to the accuracy of the classification methods. On the other hand, the authors in [45] pointed out that large ranges of SNR in the injected noise allow one to obtain better performance on the test datasets. In other works, as in [46], the authors set the SNR range for the injected noises between 10 dB and 20 dB. Taking all this into account, and also that a level of SNR of 20 dB is commonly considered a good value of AWGN in electrical signals [19], a set of Gaussian noises with SNR from 10 dB to 20 dB, in steps of 0.2 dB, was generated for this work (a short sketch of this noise-injection step is given at the end of this subsection, after Step 5). Thus, the number of signals of the dataset, including the original ones, reached the values shown in Table 1:

Table 1. Number of signals per rotor state and soft starter.
Soft-Starter   Healthy   1 BB   2 BB
Schneider        420      420    420
ABB              252      252    252
Siemens          336      336    336
Omron            252      252    252
Total           1260     1260   1260
In Figure 5, a comparison between one original stray-flux signal and three of the signals with AWGN is shown. In addition, the Persistence Spectrum computed for each of the mentioned signals are shown.
•
Step 3: Computation of the Persistence Spectrum of each signal. Once all the stray-flux signals were obtained, both the captured ones and those resulting from the data augmentation process, the start-up transient was identified and isolated from the signal itself. Then, the Persistence Spectrum (PS) was computed for all the transients obtained, setting an overlap of 50% and using the Kaiser window as window function. The process to obtain the PS was the one referred to in Section 2.3. As a result, a set of 3780 images was obtained, one for each transient. Those images were stored in different folders. For each model of soft starter, the images were divided into three folders depending on the health state of the rotor (namely healthy, one broken bar and two broken bars). Those folders contained, in each case, the resulting PS images for the different parameter settings of the soft starter model, with and without load. In Figure 6, an example of the PS images obtained is shown.
As explained in Section 2.3, the Persistence Spectrum represents, by means of a color map, the percentage of time that a particular frequency appears in a signal. The x-axis shows the frequency (in Hz) and the y-axis the power (in dB). Thus, it is a time-frequency view. In this regard, the representing limits for the frequency were set between 0 Hz and 200 Hz, while for the power, between −100 dB and 0 dB. Those limits were chosen due to the following reasons:
•
Regarding the power, the limits were set attending to the maximum and minimum value obtained from all the Persistence Spectra computed for all the signals.
•
Regarding the frequency, the limits were set attending to the frequencies where the components related to the patterns of the studied fault (broken bars) must appear.
•
Step 4. Crop and resize images. In order to adapt the PS images obtained in the previous step to the needs of the CNN, they were cropped and resized. The main aim of the cropping was to eliminate the color bar and the axis legends, keeping only the area where the PS was represented. On the other hand, since the CNN input size for the images was set to 224 × 224 pixels, the cropped images needed to be reduced to that size. In Figure 7, an example of the cropped and resized images against the PS images can be seen.
•
Step 5. Automatic fault identification (CNN). For the automatic classification of the different health states of the rotor (healthy, 1 BB and 2 BB), a self-developed Convolutional Neural Network (CNN) was used. It was implemented in the MATLAB platform, and the detailed information of the CNN layers is shown in Figure 8 and Table 2. Additionally, the MATLAB pseudocode is shown in Appendix A, Figure A1.
Step 5. Automatic fault identification (CNN). For the automatic classification of the different health states of the rotor (healthy, 1 BB and 2 BB), a self-developed Convo lutional Neural Network (CNN) was used. It was implemented in MATLAB plat form, and the detailed information of the CNN layers is shown in Figure 8 and Table 2. Additionally, the MATLAB pseudocode is shown in Appendix A, Figure A1. With regards to the training process, the Stochastic Gradient Descent with Momen tum algorithm was selected. The initial learning rate was set in 10 −4 , the momentum in 0.9 and the 2 regularization factor (or weight decay factor) in 10 −4 . Furthermore, the min-Batch size was set in 10, attending to the results available in the technical literature For instance, in [47], it is stated that sizes smaller than 32 allow one to obtain better train ing stability and generalization results. By their side, authors in [48] say that values above 10 allow faster computations. Finally, the maximum number of epochs was set in 20. The number of validation samples during the training was 25% of the available samples, ran domly selected. An overview of the properties is shown in Table 3. With regards to the training process, the Stochastic Gradient Descent with Momentum algorithm was selected. The initial learning rate was set in 10 −4 , the momentum in 0.9 and the L 2 regularization factor (or weight decay factor) in 10 −4 . Furthermore, the min-Batch size was set in 10, attending to the results available in the technical literature. For instance, in [47], it is stated that sizes smaller than 32 allow one to obtain better training stability and generalization results. By their side, authors in [48] say that values above 10 allow faster computations. Finally, the maximum number of epochs was set in 20. The number of validation samples during the training was 25% of the available samples, randomly selected. An overview of the properties is shown in Table 3.
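As a rough illustration of how Steps 3 to 5 might be implemented, a minimal MATLAB sketch is given below (Signal Processing, Image Processing and Deep Learning Toolboxes assumed). Variable names, the image folder layout and the commented layer stack are illustrative assumptions, not the authors' Appendix A code; only the numerical settings (0-200 Hz limits, 224 × 224 input, SGDM with the hyperparameters quoted above) come from the text.

% Step 3 (sketch): persistence spectrum of one stray-flux start-up capture.
fs = 5000;                                   % sampling rate [Hz], 40 s captures
pspectrum(fluxSignal, fs, 'persistence', ...
          'FrequencyLimits', [0 200]);       % PS limited to 0-200 Hz (power shown in dB)
% Step 4 (sketch): keep roughly the plotted area (drops colorbar/labels) and resize.
frame = getframe(gca);
img   = imresize(frame.cdata, [224 224]);    % match the 224 x 224 CNN input size
% Step 5 (sketch): training configuration described in the text.
imds = imageDatastore('ps_images', 'IncludeSubfolders', true, ...
                      'LabelSource', 'foldernames');   % healthy / 1BB / 2BB folders
[trainImds, valImds] = splitEachLabel(imds, 0.75, 'randomized');  % 25% for validation
opts = trainingOptions('sgdm', ...
    'InitialLearnRate', 1e-4, 'Momentum', 0.9, ...
    'L2Regularization', 1e-4, 'MiniBatchSize', 10, ...
    'MaxEpochs', 20, 'ValidationData', valImds, ...
    'Shuffle', 'every-epoch', 'Plots', 'training-progress');
% layers = [imageInputLayer([224 224 3]); ...];   % CNN of Figure 8 / Table 2
% net = trainNetwork(trainImds, layers, opts);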
Experimental Setup
In order to validate the effectiveness of the proposed methodology, several tests were carried out in the laboratory. The test-bench employed was the one shown in Figure 9. It consisted of a 1.1 kW squirrel cage induction motor (tested motor), coupled to a DC motor which acted as a load. The tested motor (SCIM) was started by means of four different commercial soft starters. During every start-up, the stray-flux and the current demanded by the motor were captured. To capture the stray-flux, a handmade coil-based sensor attached to the motor frame was used. A picture of it and its shape and main dimensions are shown in Figure 10(a1,a2), respectively. To capture the current signal of one of the supply phases of the motor, a current clamp was used (see Figure 10b). All these signals were recorded with an oscilloscope and then downloaded to a PC, where the signal analyses were performed. Both the stray-flux and the current signals were acquired for 40 s at a sampling rate of 5 kHz. All the analyses and training and validation processes were conducted on a PC, with an Intel Core i5-9400 1 CPU (2.9 GHz) and 8 GB of memory.
The main characteristics of the tested motor (SCIM) are shown in Table 4. The main characteristics of the coil-flux sensor are listed in Table 5; in addition, as said before, the shape and dimensions of the coil sensor can be seen in Figure 10(a1,a2). On the other hand, the main characteristics of the current clamp are listed in Table 6 and a picture of it is shown in Figure 10b.
To carry out all the tests, four different models of soft starters were employed. Each of them had a different topology, controlling one, two or the three supply phases depending on the model. Furthermore, each model allowed one to control the start-up time-ramp and the initial voltage or torque. The different models of soft starters used for the tests are shown in Figure 11, and their main characteristics are listed in Table 7.
Regarding the studied fault, different bar breakages were induced in the rotor of the tested SCIM. Firstly, once the healthy rotor was tested, one rotor bar was broken by drilling a hole in the junction with the end short-circuit ring. Then, once the one-broken-bar tests were carried out, a second rotor bar, contiguous to the previous one, was broken in the same way. A detail of the healthy rotor and the forced bar breakages is shown in Figure 12.
The tests were carried out following the same sequence for the four models of soft starters. First, the healthy motor was started, without load, by means of one of the soft starters. Different combinations of time-ramp and initial voltage/torque were performed for each model of soft starter and, for each of those combinations, the tested motor was started once. Then, the same tests were repeated, but this time with the tested motor fully loaded. This was achieved by varying the excitation voltage of the DC machine coupled to the tested motor.
Afterwards, the procedure was repeated first for the case of one broken bar and then for the case of two broken bars.
For each start-up, the coil-flux sensor was placed in a position which allowed one to obtain a combination of axial and radial flux (called Position B, see Figure 1). In addition, the current signal of one of the supply phases was captured by means of the above-mentioned current clamp. These tests allowed one to obtain a batch of signals of the tested SCIM under different starting conditions. The different combinations of parameters performed for each soft starter, together with the number of signals obtained for each model, are listed in Table 8.
Results and Discussion
In this section, the results obtained by applying the proposed methodology are shown. First, a comparison of the different persistence spectra for the four models of soft starter and each health case is presented, highlighting the differences found. Then, the effectiveness of the proposed CNN is shown for each model of soft starter separately and for all of them combined.
Although many analyses were carried out in this work, only the most representative are shown here. In this regard, Figure 13 compares the persistence spectra for each rotor health state and soft starter model. Those persistence spectra correspond to tests with the motor fully loaded; the settings for each soft starter model were those corresponding to the combination of the longest time-ramp and the lowest initial voltage (see Table 8). Looking at the images in Figure 13, some differences can be distinguished. For all the cases, some components from 0 Hz to 50 Hz increase their amplitude as the fault worsens; the same happens to some components from 150 Hz down to 50 Hz. This behavior fits the typical evolution of the axial and radial components associated with the presence of broken bars. As can be seen in Figure 2, the axial components s·f and 3·s·f evolve from 50 Hz to almost 0 Hz and from 150 Hz to almost 0 Hz, respectively. On the other hand, the radial components evolve from 50 Hz to 0 Hz and back to almost 50 Hz in the case of f·(1 − 2·s), and from 150 Hz to almost 50 Hz in the case of f·(1 + 2·s). Since the position of the flux sensor allowed one to capture a combination of radial and axial stray-flux, it makes sense to see the influence of both types of components in the persistence spectra.
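As a concrete illustration of the evolution just described, the short sketch below evaluates the axial (s·f, 3·s·f) and radial (f·(1 − 2s), f·(1 + 2s)) broken-bar components as the slip varies from start-up to steady state; the 50 Hz supply frequency is taken from the component values quoted above, while the slip trajectory is a generic illustrative assumption, not measured data.

% Fault-related stray-flux component frequencies versus slip during start-up.
f = 50;                           % supply frequency [Hz]
s = linspace(1, 0.01, 100);       % illustrative slip evolution (approx. 1 -> ~0)
axial1  = s .* f;                 % s*f:       50 Hz -> ~0 Hz
axial2  = 3 .* s .* f;            % 3*s*f:    150 Hz -> ~0 Hz
radial1 = f .* abs(1 - 2.*s);     % f*(1-2s):  50 Hz -> 0 Hz -> ~50 Hz (absolute value)
radial2 = f .* (1 + 2.*s);        % f*(1+2s): 150 Hz -> ~50 Hz
plot(s, axial1, s, axial2, s, radial1, s, radial2);
xlabel('slip'); ylabel('frequency [Hz]');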
In Figure 14, as an example, the above-mentioned differences for each health state of the rotor are highlighted in a set of PS images. The same differences can be identified in all the cases studied. For the case of the SCHNEIDER model, 945 training samples (315 samples per category) were used to train the CNN and 315 different samples (105 samples per category) were used for the validation. In Figure 15, it can be observed that the accuracy of the CNN reaches 100%, which means that the methodology can identify and separate the different rotor health states in every case, even with different combinations of time-ramp and initial voltage. In addition, the training process reaches 100% accuracy after about 150 iterations, in epoch two, and it becomes stable at 100% after, more or less, 900 iterations.
For the case of the ABB model, 567 training samples (189 samples per category) were used to train the CNN and 189 different samples (63 samples per category) were used for the validation. In Figure 16, it can be observed that the accuracy of the CNN also reaches 100% in this case. Furthermore, the training process reaches 100% accuracy after about 180 iterations, in epoch four, and it becomes stable at 100% after, more or less, 200 iterations.
For the case of the SIEMENS model, 756 training samples (252 samples per category) were used to train the CNN and 252 different samples (84 samples per category) were used for the validation. In Figure 17, it can be observed that the accuracy of the CNN also reaches 100% in this case. In addition, the training process reaches 100% accuracy after about 68 iterations, in epoch one, and it becomes stable at 100% after, more or less, 160 iterations.
For the case of the OMRON model, 567 training samples (189 samples per category) were used to train the CNN and 189 different samples (63 samples per category) were used for the validation. In Figure 18, it can be observed that the accuracy of the CNN also reaches 100% in this case. Furthermore, the training process reaches 100% accuracy after about 160 iterations, in epoch three, and it becomes stable at 100% after, more or less, 340 iterations.
Finally, in Figure 19, the confusion matrix and the training progress for all the models of soft starters combined are shown. In this case, 2835 training samples (945 samples per category) were used to train the CNN and 945 (315 samples per category) different samples were used for the validation. Although four different topologies of soft starter and different combinations of time-ramp duration and initial voltage were compared in this case, the accuracy achieved a rate of 99.89%. That is to say that only one of the samples was misclassified. Moreover, the misclassified prediction was among the healthy and first stage of failure (one broken bar). The training process reaches the referred accuracy after about 650 iterations, in epoch three, and it becomes stable at 99.89% after, more or less, 3000 iterations.
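For reference, the quoted overall accuracy follows directly from the stated misclassification count over the 945 validation samples:

% One misclassified sample out of 945 validation samples:
accuracy = (945 - 1) / 945;   % = 0.99894..., i.e. about 99.89%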
Once the capabilities of the proposed methodology have been exposed, it is compared in Table 9 with the results of other methodologies proposed for automatic broken bar detection in soft-started induction motors. Additionally, since there are not many works focused on soft starters, the results of other works focused on Direct Online (DOL) starting are also included in the table. With regards to accuracy, some of the works in Table 9 achieved a rate of 100% [22,23,49], but all of them were focused on DOL starting and analyzed current signals. On the other hand, the works focused on soft-started induction motors achieved, in both cases, an overall accuracy of 94.40%, analyzing the stray-flux [27] and the combination of stray-flux and current [28]. Both of them relied on the STFT as the time-frequency analysis tool, which produces noisy time-frequency maps when soft starters are used, making it more difficult to identify the typical patterns related to broken bars.
On the contrary, the proposed methodology relies on the use of the Persistence Spectrum as the time-frequency display. This method allows one to see even very short events, leading to an easier identification of fault-related patterns and allowing one to achieve an accuracy rate of 100% when analyzing each model of soft starter separately and 99.89% when analyzing all the models combined.
Conclusions
In this work, a novel methodology to automatically detect and categorize the severity of rotor faults in induction motors driven by soft starters is presented. This methodology relies on the computation of the Persistence Spectrum of the start-up transient of stray-flux signals. Then, the images obtained are used as input to a self-developed CNN in order to obtain their classification. Experimental results prove that not only is the achieved accuracy very high, improving on that of other works focused on soft-started induction motors, but also that the training process converges very quickly to the final accuracy rate.
Thus, taking into account all the above-mentioned and the results shown in the previous section, the following conclusions arise:
•
The use of the persistence spectrum as a way to analyze the stray-flux signals during the start-up transient allows one to detect the health state of the rotor.
•
Even in the case analyzed in this paper, where soft starters are used to drive the motor and the level of noise in the signal makes it difficult to identify the characteristic patterns of the fault with typical time-frequency tools (such as the STFT), this method allows one to identify not only the presence of the fault, but also its severity.
•
Even when different starting settings are performed and different topologies of soft starters are used, this method achieves a very high accuracy rate (99.89%), proving its reliability.
•
This method is a promising way to diagnose induction motors when using soft starters and could lead to integrating the diagnosis system into the soft starter itself, only by adding an external flux sensor.
The results prove that the use of this method could lead to a reliable diagnosis of the health state of the rotor of SCIMs, allowing one to schedule proper maintenance and hence, reducing the energy consumption due to the running of damaged motors and avoiding unscheduled shutdowns of the processes depending on them.
Finally, although a very high accuracy has been achieved with this classification method, further studies have to be carried out in order to evaluate the generalization capabilities of the proposed methodology. The authors are carrying out more tests to evaluate the application of this method to other faults and to SCIMs of different nominal power. Furthermore, the authors plan to evaluate the application of this method to other types of motors, such as synchronous reluctance motors or permanent magnet synchronous motors, and to other kinds of failures, as well as to propose complementary methodologies based on statistical indicators computed from the obtained results, which may enhance the diagnosis in some specific cases.
Appendix A
In this appendix, the pseudocode of the CNN is shown in Figure A1: | 12,250 | 2022-12-28T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
A Novel Coupling Structure for Inline Realization of Cross-Coupled Rectangular Waveguide Filters
This paper presents a new cross-coupling structure for introducing transmission zeros in waveguide filters with inline configuration. The cross-coupling is realized by an internal bypass metallic loop structure (wire), short-circuited to the filter top plate and passing through the waveguide cavities. Positive and negative cross-couplings are realized by only varying the length of the structure suitably. Full-wave electromagnetic (EM) simulations of the coupling structure are performed for extracting the coupling coefficient sign and value (k). The relationship of the values of k with the coupling structure dimensions is presented graphically. The loading effect of the coupling structure on the resonance frequency of the waveguide cavities is also discussed, as well as the possibility of post-manufacturing tunability. The feasibility of the proposed coupling structure is demonstrated by the design of three test filters. Two filters verify the in-line design of a triplet configuration using the proposed coupling wire to introduce a transmission zero below or above the passband. These filters have been also manufactured and measured for an experimental validation of the new structure. The third filter demonstrates the use of the wire loop to create a negative coupling in a cross-coupled configuration. Power handling capability of this configuration has been also thoroughly analyzed by full-wave EM simulation.
I. INTRODUCTION
In recent years, the continuous growth of frequency channel requests has resulted in increasingly stringent requirements on the selectivity of microwave filters used in communication systems. To achieve such a result, the introduction of transmission zeros (TZ) in the frequency characteristic is becoming usual in commercial products. Microwave filters in rectangular waveguide have long been used and are still employed nowadays, especially in space communications equipment [1]. The introduction of TZ in the response of rectangular waveguide filters is typically realized by using a folded configuration to allow the implementation of cross-couplings (i.e., couplings between non-adjacent resonators) [2]-[4]. On the other hand, the in-line form is the most popular topology for rectangular waveguide filters because it can be easily manufactured and integrated with other subsystems, like common manifold multiplexers.
Unfortunately, the in-line topology does not easily allow the introduction of couplings between non-adjacent resonators. For this reason, various solutions, alternative to the cross-coupled configuration, have appeared in the literature for introducing TZ in the response of in-line rectangular waveguide filters. We can mention, for example, dual-mode cavity filters [5]-[7], evanescent mode filters [8], singlets or doublets implemented inside the waveguide through suitably placed discontinuities [9]-[11], extracted-pole configurations using non-resonating nodes [12]-[14], and the use of frequency-dependent couplings [15], [16]. All these solutions, however, unavoidably make the design more difficult and, often, the filter fabrication more critical and expensive with respect to the basic form of the in-line waveguide filter without transmission zeros (which typically employs inductive irises as coupling structures). To overcome this limitation, we introduce in this work a new coupling structure allowing the implementation of cross-couplings in rectangular waveguide filters with in-line topology. The proposed structure consists of a simple metallic wire (with circular or rectangular cross section) inserted inside the waveguide close to the lateral wall, with the terminals connected to the top wall of two non-adjacent cavities (Fig. 1). To the best of the authors' knowledge, such a structure has never been proposed in the literature to realize couplings in rectangular waveguides. In [17], a wire terminated with capacitive plates is shown as a possible cross-coupling in coaxial cavity filters, but the required isolation from ground (capacitive coupling), the relatively complex implementation (the wire must be inserted inside the lateral wall of the filter housing) and the poor performance regarding the generated spurious resonances make this coupling solution not really attractive.
The coupling structure introduced here allows realizing both positive and negative couplings (by simply changing the length). Moreover, the coupling implementation is very simple, and filters using this coupling are as easy to realize as those with an all-pole response. We can also observe that, since the proposed coupling structure is connected to ground, the transmission of internal heat to an external sink is made easier. The use of this coupling is therefore attractive for high power applications such as output multiplexers for satellite communications.
Various characteristics of the coupling structure have been investigated by performing full-wave EM simulations and post-processing. In Section II, the coupling structure is introduced and its capability to realize negative and positive coupling is described. The coupling performances are evaluated and presented in Section III. Two design examples are given in Section IV and measurements on fabricated filters are presented. Tunability of the coupling structure is also described there. Section V shows the use of the novel coupling structure to realize a negative coupling in folded waveguide filters in place of the usual capacitive probe. The power handling of filters employing this coupling structure is also discussed in Section V. Conclusions are drawn in Section VI.
II. PROPOSED CROSS-COUPLING STRUCTURE
A waveguide implementation of the trisection topology realizing one TZ is shown in Fig. 1. The cross-coupling between non-adjacent cavities is realized by an internal bypass loop structure formed by a metal strip/wire with opposite ends short-circuited to the top plate of the filter. The strip (or wire) has thickness (or diameter) t. The coupling structure is placed at distance d from the sidewall and height h from the top wall of the filter cavities. The length L of the loop is determined by the position of the short circuits, which are symmetrically placed inside the first and last cavities. To allow the strip to pass through the cavities, a small rectangular aperture of dimensions (a2, b2) is created in the coupling irises. All the relevant parameters of the coupling structure are shown in Fig. 1.
The trisection waveguide filter in Fig. 1 can be schematically represented by the routing scheme in Fig. 2.
There are two types of couplings: mainline positive couplings (Ms1, M12, M23 and M3L) and one cross-coupling M13 (which may be positive or negative). For mainline couplings in the direction of propagation, a shunt inductance is a good equivalent for the impedance or admittance inverter derived from the normalized coupling matrix. These couplings are realized by irises in the end walls of the waveguide cavities, as shown in Fig. 1. The proposed cross-coupling structure realizing M13 is equivalent to a sidewall coupling in rectangular waveguide filters, where a coupling window is created in the sidewall at the voltage maximum point of the coupled resonators. From a circuit point of view (Fig. 3), this coupling can be represented by a positive or negative series reactance (X13) connecting two equivalent shunt resonators (representing the coupled waveguide cavities). Note that the shunt reactances Xp are introduced to account for the loading effect of the loop on the equivalent resonators. Note also that the sign of X13 is the same as that of the M13 parameter [18], [19].
The circuit in Fig. 3 allows evaluating the coupling coefficient k, which is also related to the even (f_even) and odd (f_odd) mode resonances of the coupled resonators through the standard relation given in [20]. The coupling coefficient concept is the bridge between the equivalent circuit and the physical implementation of coupled-resonator structures. Once the value of k has been obtained from the lumped circuit synthesis, we can impose it on the physical coupling structure by means of full-wave electromagnetic simulation of the structure. In particular, the eigenvalue analysis is employed to get an accurate and fast computation of the resonating modes of the coupled structure, from which k can be evaluated once f_even and f_odd have been identified among the computed resonance frequencies of the modes. In this regard, we observe that the eigenvalue analysis does not allow one to easily evaluate the sign of k. In fact, the eigenmode frequencies are always sorted in ascending order, and the proper selection of f_even and f_odd may require considering the actual field distribution of the computed modes. For this reason, before facing the evaluation of the coupling coefficient for the proposed coupling loop, we will discuss how to identify the sign of k. Although the method we will present is substantially empirical, it has allowed us to correctly identify the values of the loop length L for which k is positive or negative. The method is based on extracting the sign of the equivalent reactance X13 by means of an EM simulation. From Fig. 3, we observe that jX13 represents the inverse of the admittance parameter Y13. The latter can be evaluated by shorting the shunt resonators, splitting X13 into two equal reactances in series and evaluating the impedance Zin at the middle point. From Fig. 4, we get jX13 = 4 × Zin. This is replicated in the EM simulator (HFSS has been used) by placing a lumped port at the longitudinal center of the coupling structure, between the top plate and the coupling structure, as shown in Fig. 4. We assume the input impedance observed at this port to be equal to the impedance Zin defined in the lumped circuit. The sign of the coupling is then represented by the sign of the imaginary part of Zin, computed by EM simulation. Although it cannot be demonstrated that this method provides the exact value of X13, we have verified through several design examples that it identifies the sign of X13 with good accuracy. In the following, all the investigations and analyses of the coupling structure are performed at 10 GHz using WR75 waveguide (a = 19.05 mm, b = 9.525 mm). The assigned loop parameters are t = 1 mm and h = 1 mm, and three values of d are considered. We have removed the resonators from the simulation because they are assumed to be short-circuited. The loop is inserted in a box whose length is 3λg0 (where λg0 is the waveguide wavelength at 10 GHz). The computed Im(Zin) is reported in Fig. 5 for the three assigned values of d, as a function of L normalized to the free-space wavelength λ0. The behavior of Im(Zin) reported in Fig. 5 suggests a possible model for the frequency dependence of X13. In fact, the loop can be represented by two short-circuited transmission lines connected in parallel at the point where the lumped port is placed, providing a closed-form expression for the impedance Zin [21].
With a suitable choice of the characteristic impedance Zc, the Zin computed by full-wave simulation can be well reproduced by this model. This is shown in Fig. 5, where the curve computed with the model is reported (dashed orange curve, Zc = 100 Ohm).
Note that the reason we have assumed a TEM transmission-line model is that the physical structure of the loop can be assimilated to a thick microstrip in air (t is the strip thickness and h is the gap between the strip and the ground plane, represented by the top wall of the enclosure). Assuming the proposed model for Zin, we observe that positive coupling can be achieved for nλ0 < L < (2n+1)λ0/2, where n ≥ 0 is an integer. The condition for negative coupling is (2m−1)λ0/2 < L < mλ0, where m > 0 is a positive integer. For the assigned coupling coefficient sign, the length L of the loop is chosen, and the loop is placed in longitudinal symmetry between the coupled resonators. Following the procedure in Section III, the value of the coupling coefficient k can be finally determined. The model of the loop introduced above suggests that this structure can also be used as an embedded TEM resonator in the waveguide structure. Advanced filtering functions may be realized using this embedded resonator, but further research is required for validating this concept (in particular regarding the effects due to the relatively low unloaded Q of this resonator).
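To make the sign conditions above easier to follow, a minimal numerical sketch is given below. It assumes the loop behaves as two short-circuited TEM stubs of length L/2 connected in parallel at the port location, so that Im(Zin) is proportional to tan(πL/λ0); this closed form is an inference from the description of the model (the exact expression of [21] is not reproduced in the text), and Zc = 100 Ohm is taken from the dashed curve of Fig. 5.

% Sign of Im(Zin), and hence of X13 = 4*Zin, versus normalized loop length,
% under the assumed two-shorted-stub TEM model of the coupling loop.
Zc = 100;                              % assumed characteristic impedance [ohm]
LoverLambda0 = 0.05:0.01:1.45;         % loop length normalized to lambda0
ImZin = (Zc/2) .* tan(pi .* LoverLambda0);
couplingSign = sign(ImZin);            % +1 -> positive coupling, -1 -> negative
% e.g. L = 0.3*lambda0 -> positive, L = 0.7*lambda0 -> negative, consistent with
% n*lambda0 < L < (2n+1)*lambda0/2 (positive) and (2m-1)*lambda0/2 < L < m*lambda0 (negative).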
III. CROSS-COUPLING STRUCTURE ANALYSIS
The net coupling between resonators can be inductive or capacitive. Once the nature of the coupling is determined using the method described in Section II, the coupling coefficient k can be calculated by using the eigenmode solver of an EM simulator [20].
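The explicit relation between k and the two eigenmode frequencies is not reproduced in the extracted text; the standard coupled-resonator expression from the literature (e.g., [20]) is assumed here:

k = (f2^2 − f1^2) / (f2^2 + f1^2),  with f2 > f1.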
The frequencies f1 and f2 are the eigenmode frequencies of mode 1 and mode 2 in the eigenmode analysis of the waveguide structure given in Fig. 1 (with all couplings between adjacent cavities set to zero). Initially, all three resonators are designed to resonate on the TE101 mode at 10 GHz (physical length equal to 24.285 mm in WR75 waveguide). As the coupling under consideration is M13, resonator 2 must be strongly detuned by means of a tuning screw. The effect of the coupling structure dimensions on the coupling coefficient k has been thoroughly analyzed both for negative and positive couplings. For negative coupling, the relation between k and the height h is shown in Fig. 6. By changing h from 0.5 mm to 1.5 mm, k can be varied from 0.0009 to 0.0056. These values are large enough to realize narrow to moderate bandwidth filters. From Fig. 6, it is evident that the coupling increases with d and h. In fact, when these parameters increase, the loop is moved towards a higher E-field region.
This phenomenon is evident from the electric field plot inside the coupled resonators shown in Fig. 7. The electric field is confined between the coupling structure and the sidewall in resonator 2. Therefore, the coupling between resonators 1 and 3 is strong, and the coupling between resonators 1 and 2 is negligibly small.
Rigorous EM simulations were performed to observe the relation between k and cross-section (width w and thickness t) of the coupling structure for fixed h and d. Simulation results were similar to the data presented in Fig. 6 and are not presented to avoid repetition.
The coupling structure for positive M13 is structurally similar to its negative cross-coupling counterpart. Simulation results show that k can be varied from 0.0005 to 0.012. The comparison of the simulation results shows that the rate of increase of k for positive coupling is higher than for negative coupling, as shown in Fig. 8.
The loading effect of the coupling structure on the coupled resonators 1 and 3 can be related to the shift in the resonance frequency of the uncoupled resonators determined by eigenmode analysis. Fig. 9 shows the relation between k and the average resonance frequency of the coupled resonators (resonator 1 and resonator 3). Due to the loading effect of the coupling structure, this frequency is lower than the resonance frequency of the uncoupled resonators (10 GHz). The physical length of resonators 1 and 3 must be slightly decreased to compensate for the loading effect. Resonator 2 is bypassed by the coupling structure; nevertheless, a loading effect on resonator 2 was also observed. The length of resonator 2 must be slightly increased to compensate for the loading effect of the coupling structure. EM simulations were also performed to analyze the loading effect for negative M13 couplings. In this case, the loading effect can be compensated by slightly increasing the length of the coupled resonators.
The full-wave EM dimensioning starts by designing the irises for the mainline sequential couplings. For irises with a symmetric structure on both sides, the couplings can be transformed into measurable scattering parameters such as |S21| and S21 [23], [24]. These parameters can be used to calculate the iris dimensions for the input/output coupling, the inter-resonator couplings and the loading effect of these couplings. The loading effect can be considered as an FIR (Frequency Invariant Reactance) and is absorbed in the corresponding resonators. The aperture for passing the cross-coupling structure through can be considered as part of the mainline couplings and can be designed accordingly. The mainline coupling values for both filters are the same; therefore, the iris widths for both filters are the same. Fig. 10a shows the iris parameters for the mainline couplings (all irises are of 1.0 mm thickness). The waveguide filter structure, the relevant parameters of the cavity resonators and the interface waveguide dimensions are shown in Fig. 10. For M13 of the first example, from Fig. 6 (d = 1 mm) we can obtain the h value producing the required k13 (h = 0.75 mm). Proceeding similarly for the second filter, we obtained h = 1.05 mm. For both filters, the length of the cavities has been corrected to take into account the loading effect of the coupling wire. The dimensions of Filter1 and Filter2 are given in Table 1. The common dimensions for both designs are shown in Fig. 10.
All the dimensions of both filters are the same except for the cavity lengths and the coupling loop parameters. The coupling loop can be integrated with the filter structure through a couple of holes in the top plate. The prototype filter was manufactured using CNC machining. The filter top lid with the connected wire coupling is shown in Fig. 11a, while the internal structure of the filter is reported in Fig. 11b (overall dimensions 38.1 mm × 38.1 mm × 94.0 mm). It can be noticed that the filter structure is as easy to manufacture as the classical all-pole filter with inductive irises. Moreover, a fine tuning of the TZ frequency is possible in post-production, as will be discussed in the following. EM simulated and measured S-parameters of the filter having the TZ below the passband are reported in Fig. 12. The position of the TZ in simulation is 9.940 GHz, whereas in measurement the TZ is at 9.935 GHz. This shift in the TZ is due to a slight increase in bandwidth (measured equiripple BW = 43 MHz and simulated BW = 40 MHz). The measured insertion loss at f0 is 0.8 dB. The loss is higher than expected due to the low-cost prototype (no silver plating). The simulated and measured S11 is less than −20 dB within the entire passband. Note that the slight ripple observed in the out-of-band response in Fig. 12 is likely due to a poor grounding of the coupling wire with the filter top plate. In fact, the wire terminals are not bonded or welded to the lid but only forced into small holes; consequently, a good contact may not have been obtained. The wideband performance of the filter is presented in Fig. 13. A spurious resonance can be observed at a relatively large distance from the passband (2 GHz). Note that the spike level is low, because the coupling from input to output through the coupling loop is very weak and the waveguide resonators are decoupled at this frequency. As described in the previous section, the loop length selection allows flexibility, therefore the spike position can be controlled to some extent. The spurious passband at 14.68 GHz is due to the TE102 mode resonance of the waveguide resonators at this frequency. In Section V we will show that, using a compact loop, the spike can be moved above the TE102 mode resonance of the waveguide resonators.
EM simulated and measured S-parameters of the filter having the TZ above the passband are reported in Fig. 14. The position of the TZ in simulation and measurement is 10.06 GHz and 10.082 GHz, respectively. This displacement is due to the widening of the bandwidth (measured equiripple BW = 50 MHz and simulated BW = 40 MHz). The increase in bandwidth of both filters may be due to the manufacturing tolerances of the coupling irises; the presence of the coupling wire could also increase the coupling produced by the irises. The simulated and measured S11 is less than −18 dB within the passband. The measured insertion loss at f0 is 0.7 dB. The loss is higher than expected due to the low-cost prototype (no silver plating, wire loop not bonded or welded). Fig. 15 shows the wideband performance of the filter realizing the TZ above the passband (only EM simulations are reported). In this case, the TZ is closer to the passband, but the distance (1 GHz) remains very large compared to the bandwidth. It has been remarked that the novel coupling structure allows the realization of rectangular waveguide filters with in-line topology and TZ in the frequency response. It is then interesting to compare a filter that uses the novel coupling with a classical reference filter having the same features. To this purpose we have selected an extracted-pole filter, being an in-line topology widely used in practice. The reference filter adopts singlet blocks for the TZ extraction [25] and has been designed with the same requirements as the test trisection filter with the upper TZ. The filter structure and its parameter values are shown in Fig. 16. Both filters have been simulated including losses in the conductors (silver conductivity is assumed). EM simulation results of both filters are plotted in Fig. 17. The filter with the internal loop has a mid-band insertion loss of 0.40 dB, whereas the extracted-pole filter has an insertion loss of 0.38 dB. The advantage of the novel solution is the filter overall size (93.12 mm × 19.05 mm × 9.525 mm), much smaller than that of the extracted-pole solution (105.8 mm × 40.2 mm × 9.525 mm). It must be remarked that the same losses are obtained with both solutions, but the one employing the loop coupling allows a noticeable reduction of both the filter volume and footprint. The feasibility of post-manufacturing tuning has been investigated by means of EM simulations. Tuning allows one to refine the position of the TZ, whose placement may be affected by fabrication tolerances. To this purpose, a tuning screw inserted in close vicinity of the coupling structure can be used, as shown in Fig. 18a. The results of EM simulations (Fig. 18b) show that a very accurate positioning of the TZ can be obtained in this way. For example, a tuning screw of 2.0 mm diameter, initially penetrating 0.5 mm, can move the TZ 8 MHz towards the passband with a further penetration of 1.2 mm.
Finally, we remark the capability of the proposed structure to place transmission zeros very close to the passband. For example, considering the filter with the zero below the passband, we can move up the zero to 9.975 GHz. Fig. 6 shows, in fact, the feasibility of the required cross-coupling (equal to −0.0047).
V. ANOTHER APPLICATION OF THE NOVEL COUPLING LOOP
Although the main goal of the proposed coupling loop is the realization of in-line waveguide filters with transmission zeros, it can be also advantageously adopted in classical cross-coupled configurations requiring negative couplings. In these cases, a capacitive probe is often adopted, which however suffers from some drawbacks due to the isolation requirement from the filter body. It can be mentioned, for instance, the reduced power handling capability and the increased fabrication complexity. These drawbacks can be overcome by using the novel coupling loop, as illustrated in the design example reported in the following.
A. DESIGN OF A QUARTET SECTION FILTER
The quartet section is a basic building block of many advanced filtering structures. Although, strictly speaking, this topology is not in-line, the use of the novel loop structure makes the waveguide implementation of the quartet easier and cheaper compared to other possible solutions.
A design example (Filter3) is then presented to illustrate the application of the new loop coupling in the quartet configuration. As known, with this 4-pole topology we can introduce two symmetric TZ by means of a single cross-coupling between resonators 1 and 4. The following requirements are assumed for Filter3: center frequency f0 = 10 GHz, bandwidth BW = 40 MHz, return loss RL = 22 dB, normalized transmission zeros z = ±j1.7. The design methodology presented in Section III is applied to calculate the dimensions of the filter. Also in this case, inductive irises are used for the mainline couplings, although the couplings 1-2 and 3-4 are implemented in the side wall instead of the end wall.
Coupling sign and dimensions of the coupling loop can be calculated using the analysis procedure described in sections II and III. However, since there is no cavity to by-pass, the loop is much shorter than in the case of the trisection, with the beneficial effect of pushing spurious resonance to a much higher frequency.
For the dimensioning of the loop, we have first assigned the length L = 17.739 mm, chosen in the range λ 0 /2 < L < λ 0 to get a negative value for k 14 . Then, a chart similar to the one in Fig. 6 has been computed, reporting k as a function of the parameter h (height of the loop), for different values of the separation d from the sidewall (Fig. 19).
From Fig. 19, assigning d = 0.6 mm and h = 0.7 mm, we get the required value of k14 = (BW/f0) × M14 = −0.00117. The other relevant dimensions of the designed quartet filter are reported in Table 2, while the filter structure is shown in Fig. 20.
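For reference, the arithmetic behind this target value is simply the de-normalization k = (BW/f0)·M applied to the figures quoted above; the implied normalized coupling M14 of about −0.29 is inferred from these numbers rather than quoted from the text.

% De-normalization of the quartet cross-coupling (figures from the text).
f0  = 10e9;           % center frequency [Hz]
BW  = 40e6;           % bandwidth [Hz]
k14 = -0.00117;       % required physical coupling coefficient
M14 = k14 * f0 / BW   % implied normalized coupling entry, about -0.2925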
Full-wave EM simulation results of the filter structure, reported in Fig. 21, show a close matching to theoretical design requirements. Insertion loss at the center of the filter passband is 0.54 dB, corresponding to an average unloaded Q of the resonators equal to 7300. Broadband simulation results of the filter are shown in Fig. 22. We can observe that the loop resonance has moved above the TE 102 mode resonance. The low level of first spurious band at 14.7 GHz is due to weak coupling of side irises to TE 102 resonant mode.
B. POWER HANDLING CAPABILITY OF THE LOOP COUPLING
As previously mentioned, the use of the proposed coupling structure is particularly convenient in high-power filters. Therefore, we have investigated the possible limitations on power handling introduced by the coupling loop. To this purpose, the electric field distribution inside the quartet filter body (Fig. 20) has been computed by means of a full-wave simulator (HFSS). We know from filter theory that the RF voltage across the equivalent resonators (VRF,k) is frequency dependent and varies from resonator to resonator [26]. For the considered filter, the RF voltage on resonator 2 (VRF,2) is larger than that on the other resonators at all frequencies in the passband (assuming the exciting signal tone is applied at port 1). Although the maximum of VRF,2 is close to the passband edges, it must be considered that most of the signal energy is concentrated around the center of the passband (f0), so limitations to the filter power handling due to possible breakdown phenomena have been investigated at f0. We have first verified that the maximum E-field is found in cavity 2. Fig. 23a shows the computed E-field with 5 kW input power (single sinusoidal tone applied at port 1). Note that the instantaneous peak value of the E-field depends on the phase of the input tone. In the considered case, the maximum of E (Emax) has been found in the second cavity (as expected), and it results in Emax = 19 kV/cm (below the theoretical limit for air breakdown, equal to 22.8 kV/cm). We have then evaluated the E-field in critical areas around the coupling loop, where the field is presumably relatively large. In Fig. 23b the results of the simulations are shown. The peak value in the first area is Emax,1 = 11.235 kV/cm and in the second area Emax,2 = 11.78 kV/cm (note that these results are obtained with different values of the exciting tone phase). Comparing Emax,1 and Emax,2 with Emax, we note that the peak values of the E-field near the loop are well below the maximum at the cavity center. We conclude by observing that the power-handling capability of microwave filters depends on many other environmental factors, and their cumulative effect is difficult to estimate accurately through basic EM simulations. Nevertheless, the computed results show that, very likely, the novel coupling loop does not play a limiting role for the maximum allowed power.
C. REALIZABLE BANDWIDTH AND PERFORMANCE COMPARISON
As a last observation, we want to remark on the capability of the proposed coupling structure to realize the coupling coefficients required in moderate/large bandwidth filters. Although typical applications of waveguide filters call for a normalized bandwidth of a few percent, in some special cases a bandwidth as large as 10% may be required. The design of such filters requires extensive use of numerical optimization to account for the strong frequency dispersion of the waveguides. A limiting factor may also arise from the high value of the required coupling coefficients, especially those referring to the cross-couplings. This, however, is not the case with the proposed new coupling loop. In fact, we have evaluated the values of the coupling k14 of the quartet example reported above, modifying some dimensions in order to increase the realizable values. In particular, we have assigned L = 20.336 mm, a2 = 5 mm, b2 = 4 mm. Fig. 24 shows the computed k14 vs. h for three values of d (2.5 mm, 3 mm, 3.5 mm). Observing the computed results, the large realizable values of k14 can be appreciated. As a reference, a quartet filter with 8% normalized bandwidth at 10 GHz, with two normalized TZs at z = ±j1.7, requires k14 = −0.0236, which can be implemented with the loop coupling considered in Fig. 24 by assigning d = 3.5 mm and h = 1.87 mm. Note that such a large coupling coefficient is hard to achieve with the conventional capacitive probe usually adopted to implement a negative cross-coupling in waveguide filters. Table 3 shows a performance comparison of the proposed loop coupling with a conventional probe.
VI. CONCLUSION
A new cross-coupling loop structure for true inline realization of cross-coupled waveguide filters has been proposed and discussed in this paper. Rigorous EM simulation results have been presented to demonstrate that the new coupling structure has the ability to realize positive and negative coupling values by adjusting its length. In fact, although the proposed coupling is intrinsically magnetic, the sign can be reversed because the central portion of the coupling acts like a transmission line, whose length determines the coupling sign. The desired coupling level is achieved by adjusting the height from the filter top plate, the cross-section, and the distance of the coupling structure from the sidewall of the filter. Two prototype filters with triplet topology have been fabricated, one having the TZ below the passband and the other with the TZ above the passband. Measured results validate the coupling structure's capability of realizing cross-coupling and generating TZs.
A quartet section based filter design example is also presented to show that the proposed loop coupling structure can be a potential candidate for the realization of advanced filtering structures having a wide spurious free band. By analysis of E-field inside the filter structure, it is also demonstrated that the novel coupling structure is suitable for realizing high power filters. It is also predicted that the coupling structure can realize an embedded TEM mode resonator inside the waveguide structure, which may lead towards the realization of advanced filter functions in a compact structure.
MUHAMMAD LATIF received the B.Sc. degree in electrical engineering from the University of Engineering and Technology (UET) Lahore, Pakistan, and the M.Sc. degree in satellite communications engineering from the University of Surrey (UniS), U.K., in 2003 and 2006, respectively. He is currently pursuing the Ph.D. degree with the Department of Electrical Engineering, UET Lahore. Since 2003, he has been working with a public sector research organization, where he has participated in the design and development of microwave passive components and filters. He is also associated with the Institute of Space Technology (IST) Research Labs Lahore. His core research interests include the synthesis and realization of microwave filters and multiplexing networks.
GIUSEPPE MACCHIARELLA (Fellow, IEEE) is currently a Professor of microwave engineering with the Department of Electronic, Information and Bioengineering, Politecnico di Milano, Italy. His research activity has covered several areas of microwave engineering: microwave acoustics (SAW devices), radio wave propagation, numerical methods for electromagnetics, power amplifiers, and linearization techniques. He has been a Scientific Coordinator of PoliEri, a research laboratory on monolithic microwave integrated circuits (MMIC), which was jointly supported by Politecnico di Milano and Ericsson Company. His current activities are mainly focused on the development of new techniques for the synthesis of microwave filters and multiplexers. He has authored or coauthored more than 150 papers in journals and conference proceedings. He has been responsible for several contracts and collaborations with various companies operating in the microwave industry. He has been the Chair of the IEEE Technical Committee MTT-8 (Filters and Passive Components). He has served for several years on the TPC (Technical Program Committee) of the IEEE International Microwave Symposium and of the European Microwave Conference.
FAROOQ MUKHTAR received the B.Sc. degree from the University of Engineering and Technology (UET), Lahore, Pakistan, in 2007, and the M.Sc. degree in microwave engineering and the Dr.-Ing. degree under Prof. Peter Russer from the Technical University of Munich, Germany, in 2009 and 2014, respectively. He started his career as a Tutor for a high-frequency course and as a Lab Engineer for electromagnetic compatibility (EMC) testing at UET. During that time, he worked as a part-time scientific co-worker at the Institute for Nanoelectronics on algorithms for Brune's synthesis of multiport circuits, and conducted tutorials for the post-graduate course on quantum nanoelectronics. He is currently an Assistant Professor with UET Lahore, working on high-frequency topics: leaky-wave and configurable antennas, filters, and metamaterials. He also consults for Smart Wires, Inc. through Powersoft19 in the areas of electromagnetic simulations and compatibility. | 7,945 | 2020-01-01T00:00:00.000 | [
"Physics"
] |
CO2 Mitigation Measures of Power Sector and Its Integrated Optimization in China
The power sector is responsible for about 40% of the total CO2 emissions in the world and plays a leading role in climate change mitigation. In this study, measures that lower CO2 emissions from the supply side, the demand side, and the power grid are discussed, and based on them an integrated optimization model of CO2 mitigation (IOCM) is proposed. Virtual energy, referring to the energy saving capacity of both the demand side and the power grid, is planned jointly with conventional energy on the supply side in IOCM. Consequently, the optimal plan of energy distribution, considering both economic benefits and mitigation benefits, is obtained through the application of IOCM. The results indicate that the development of demand side management (DSM) and the smart grid can make great contributions to CO2 mitigation of the power sector in China, reducing CO2 emissions by 10.02% and 12.59% in 2015 and 2020, respectively.
Introduction
Global climate change is a salient challenge to the sustainable development of human society. CO 2 and other greenhouse gas emissions are the leading cause of global warming. If no measures are taken, CO 2 emissions related to fuel energy will double by 2050. According to the statistics of the International Energy Agency (IEA), the power sector is responsible for about 40% of total CO 2 emissions [1]. Consequently, CO 2 mitigation in the power sector is of great significance for achieving the global mitigation goal.
The power sector can be divided into the supply side, the demand side, and the power grid according to the transmission process. The CO 2 emissions of the power sector are concentrated on the supply side, where fossil fuels are burned. Unreasonable utilization on the demand side and losses in the power grid increase energy consumption on the supply side, which also indirectly contributes additional CO 2 emissions. CO 2 mitigation measures adopted on the supply side can be divided into three categories [2][3][4][5][6][7][8]: (a) improving the conversion efficiency of fossil energy and lowering energy intensity; (b) developing nonfossil energy, such as renewable energy and nuclear energy, and adjusting the energy mix; (c) developing carbon capture and storage (CCS) technologies. The most effective measure for CO 2 mitigation on the demand side is to implement DSM, which improves utilization efficiency through incentive policies [9,10]. The literature [11,12] shows that DSM could reduce energy consumption by 5% to 15%. The power grid is not only a bridge physically connecting the supply side and the demand side, but also an important medium for achieving the mitigation benefits of both sides. Besides, it provides support for large-scale applications of nonfossil energy (including nuclear energy, hydroelectric energy, and wind energy) [12]. With the development of the smart grid and the ultra-high-voltage (UHV) grid, losses have decreased substantially, so the power grid shows greater mitigation potential compared with other energy transmission methods. Based on the current energy mix and generation technology, low-carbon power dispatch is an effective way to control CO 2 emissions over a short period of time [13,14].
Current research on CO 2 mitigation measures mainly focuses on the application of various mitigation techniques and the macro-level influence of policies. However, research on the optimization of CO 2 mitigation from the perspective of the whole power sector is still rare. Integrated resource planning (IRP) and integrated resource strategic planning (IRSP) minimize the required power supply through optimization on both the demand and supply sides [15][16][17][18]. However, the energy-saving capacity of the power grid is neglected. In this study, an optimization model, IOCM, considering the full mitigation potential of the supply side, the demand side, and the power grid, is proposed and applied to the power sector in China.
In Section 2, the status quo of CO 2 emissions of the power sector is briefly presented. In Section 3, various measures for CO 2 mitigation are discussed. Then, the IOCM is proposed in Section 4, and the result of IOCM applied to the power sector in China is analyzed in Section 5. Finally, conclusions are drawn in Section 6.
CO 2 Emissions of Power Sector in China
China, the largest CO 2 -emitting country in the world, has officially pledged to reduce its CO 2 intensity by 40-45% from the 2005 level and to increase the share of nonfossil energy in primary energy to 15% by 2020 [19]. CO 2 emissions from the power sector reached 3294.7 million tonnes (Mt) in 2009, accounting for 48% of total emissions [1]. Consequently, CO 2 mitigation in the power sector is of great significance for achieving the long-term mitigation goal in China and even for contributing to global mitigation.
China is at a critical stage of industrialization and urbanization, and the demand for electricity is increasing rapidly. Electricity generation reached 4721.7 terawatt hours (TWh) in China in 2011, ranking second in the world. Meanwhile, the primary fuel mix in China is dominated by coal: electricity from coal-fired power plants accounts for approximately 80% of total electricity generation [20]. Therefore, the demand for electricity in China has been the largest driver of the rise in emissions. According to the statistics of the IEA, CO 2 emissions from electricity and heat production increased by 210%, from 1,072.0 to 3,324.3 Mt, between 1995 and 2009 [1], as shown in Figure 1.
The primary responsibility of the power sector is to ensure a sufficient, safe, and stable power supply. Development is still the primary task, so effective measures to reduce CO 2 emissions should be taken under the premise of meeting the power demand of economic and social development. During the period of the "Eleventh Five-Year Plan" (from 2006 to 2010), through measures such as developing nonfossil energy and reducing coal consumption and line losses, the power sector cut 1.74 billion tonnes of CO 2 . The contribution ratio of the various measures to CO 2 mitigation is displayed in Figure 2, among which measures to reduce coal consumption ranked at the top, up to 51% [21]. Although some achievements have been made concerning CO 2 mitigation, a comparatively big gap from the target still exists. Therefore, various measures for CO 2 mitigation should be promoted.
[Figure 2 legend: coal consumption reduction 51%; hydro 37%; nuclear, wind, and line losses reduction 3-5% each.]
CO 2 Mitigation Measures in Supply Side
Most electricity generation has been derived from coal in the recent two decades, although the emissions performance of coal-fired power generation has improved significantly [1]. Based on the generation mix, the measures for CO 2 mitigation on the supply side can be divided into three major categories.
Efficiency Improvement of Utilization of Fossil Energy.
Efficiency improvement refers to using a smaller amount of fossil fuel and emitting less CO 2 to produce the same amount of electricity by improving conversion efficiency. This measure is suitable for China, where coal is the major energy resource; it is widely promoted in the power sector and receives extensive attention at present. Replacement of backward units with advanced coal-fired generation units of large capacity and high efficiency is an important measure to improve conversion efficiency. There still exist a number of small-sized, low-efficiency coal-fired generation units in China, so this measure retains great potential for further reductions. However, the potential for efficiency improvement and CO 2 mitigation will continuously decrease as unit capacity increases. At that point, more highly efficient generation technologies should be developed, such as integrated gasification combined cycle and ultra-supercritical power generation. In addition, efficiency improvement is expected to reduce CO 2 emissions per kilowatt-hour (kWh), but the total CO 2 emissions of the power sector in China may still increase continuously, as installed capacity has grown quickly in recent years.
Adjust Energy Mix.
Replacing coal with low-carbon fuels or energy sources with near-zero CO 2 emissions, such as natural gas, renewable energy, and nuclear energy, so as to adjust the energy mix, is a significant measure for controlling CO 2 emissions in the process of power generation.
(1) Low-Carbon Fuel. The CO 2 emission per unit of electricity generated from natural gas is 50%-60% lower than that of traditional thermal power units [22]. Consequently, increasing the utilization ratio of low-carbon fuels like natural gas is a feasible measure for CO 2 mitigation. However, natural gas resources for power generation in China are severely scarce, so this measure currently contributes little to emission reduction. Thus, whether low-carbon fuel can be put into wide use depends on the chance of securing a stable natural gas supply at low cost or obtaining new gas resources at lower cost.
(2) Renewable Energy. Renewable power generation technologies mainly include hydropower, wind power, solar power, biomass power, ocean power, and geothermal power. Generally, most renewable power generation produces CO 2 during the manufacturing of equipment and consumables, but no direct CO 2 emissions arise during the power generation process. As a result, it can be regarded as near-zero in CO 2 emissions. Taking technology and resource factors into account, renewable power generation can be developed appropriately and serve as an important method of CO 2 reduction in the electricity industry.
China has abundant hydropower resources, which remain the most developed renewable energy resources in the country. Hydroelectric technology is relatively mature, and its installed and generating capacity rank second only to coal-fired power in China. However, large-sized hydropower plants exert some indirect negative impacts on the environment along with the vigorous development of hydropower. Wind energy resources in China are concentrated in western, northern, and coastal areas, which is appropriate for centralized development. Similar to wind energy, solar energy resources are located mainly in western and northern areas, and with the progress of solar technology, solar energy development is accelerating in China. Biomass energy is low in cost and is developing rapidly, but it faces problems of limited resources, biomass collection, and equipment manufacture, so it should be developed accordingly. Besides, renewable energy such as geothermal and ocean energy has an irreplaceable status in certain areas, which is driving fast development in research.
However, the impact on grid stability must be taken into careful consideration, as connected renewable energy may bring fluctuations into the grid. For instance, wind and solar energy exhibit strong fluctuations on a daily and seasonal basis. When the proportion of wind energy is too large, it leads to strong fluctuations in the power grid [23].
(3) Nuclear Power. Nuclear power, a relatively mature technology, is applied to electricity generation for its remarkable advantages of low operating costs and near-zero CO 2 emissions. However, it has also encountered barriers, mainly related to public safety concerns such as nuclear weapons proliferation and waste management. Chinese nuclear power technology also requires further development. After over 20 years of development, the basis of China's nuclear industry has gradually formed. At present, nuclear power has entered a period of rapid development, while safety always remains the primary concern.
CCS Technologies.
CCS is a process in which CO 2 is separated from the industrial or energy production chain, transported to a storage location, and isolated from the atmosphere for a long period of time. It is widely recognized as an exceptional technology for global mitigation because of its huge potential of an 85% to 90% reduction of CO 2 emissions in thermal power stations [24]. So far, CCS demonstration projects have been constructed in several thermal power stations in Beijing, Shanghai, and some other places. However, they are restricted to small-scale plants. Due to the high investment cost and large energy consumption of CCS at the current technology level, only small-scale projects can be implemented, serving as future technical reserves for CO 2 mitigation. Since new CCS technology with low cost and low energy consumption is the focus of future research, China should track related technical updates in order to meet the growing demand for CO 2 mitigation.
CO 2 Mitigation Measures in Demand Side.
DSM, the most effective measure for mitigation on the demand side, is a series of electricity management activities aimed at energy conservation and environmental protection, achieved by optimizing the terminal power consumption mode and improving utilization efficiency. Thus, the power demand and the CO 2 emissions of the power industry decrease indirectly.
In China, DSM has been explored and carried out since the 1990s. From 1991 to 1995, a number of DSM seminars lectured by international experts were organized in China. From 1996 to 2000, several DSM demonstration projects, such as the application of peak-valley pricing and energy-saving lamps, were gradually developed, which accumulated experience for DSM. Especially since 2002, DSM has received extensive attention from the whole society due to the tense relationship between power supply and demand. Since then, DSM has entered a period of rapid development in China. New policies on DSM have been released by national and provincial governments, which play a positive role in implementing the orderly use of electricity, enhancing energy efficiency, and easing the contradiction between power supply and demand.
Drawing on advanced experience from foreign countries, DSM work can be expanded successfully through the following recommendations.
(1) Governments at all levels should play a leading role in creating a conducive environment for DSM.
(2) An effective incentive mechanism providing stable financial support for carrying out DSM should be set up quickly. (4) Energy-saving intermediary organizations can help to form a market mechanism for energy conservation.
CO 2 Mitigation Measures in Power Grid.
The power grid is not only a bridge physically connecting the supply side and the demand side, but also an important medium for achieving the mitigation benefits of both sides. The smart grid is considered a way to reduce energy consumption, improve the efficiency of the electricity network, and manage renewable energy generation. Besides, it provides access to the grid for nonfossil energy, including nuclear power, hydropower, wind power, and other near-zero-emission sources. Briefly speaking, the smart grid is an important means of realizing energy conservation and emission reduction, as well as climate change mitigation. A smart grid is an electricity network that uses digital and other advanced technologies to monitor and manage electricity transmission from all generation sources in order to meet the varying electricity demands of end users, as seen in Figure 3. The significance of smart grid construction for the promotion of energy conservation and the development of a low-carbon economy is as follows.
(1) Large-scale clean energy units are allowed to be connected to the grid, which speeds up the development of clean energy and promotes the optimization and adjustment of the energy mix.
(2) Consumers are guided to arrange the timing of their electricity consumption, cutting down the peak load and coal consumption in a reasonable manner.
(3) Line losses will decrease remarkably in transmission due to the applications of advanced technologies, including UHV, flexible transmission, low-carbon power dispatch, and distributed generation, as well as dual-direction interaction between consumers and the grid.
(4) Effective interaction between the grid and consumers will be achieved. The promotion of energy-saving technologies will improve the power consumption efficiency.
(5) Large-scale application of electric vehicles will be promoted, improving the low-carbon economy and achieving mitigation benefits.
State Grid Corporation is committed to building a strong smart grid in China, in which the UHV power grid serves as the backbone and grids at all levels develop in a coordinated way. Plans for a pilot smart grid were outlined in 2010, with deployment programmed to extend to 2030. Investments in the smart grid will have reached at least USD 96 billion by 2020 [25].
The Construction of IOCM
Since no single measure suffices to reach the goal of CO 2 mitigation in the power sector, all feasible options must be taken into consideration. In this section, equivalent virtual energy, consisting of the energy-saving capacity of both the demand side and the power grid, is planned jointly with the conventional energy of the supply side within IOCM. The objective functions of IOCM denote the lowest cost and the lowest CO 2 emissions. Finally, the optimal plan of energy distribution, considering both economic and mitigation benefits, can be figured out by multiobjective optimization calculations (Figure 4). Virtual energy in the power grid works mainly through smart grid technologies, such as UHV, low-carbon power dispatch, and distributed generation, bringing lower line losses. The energy coming from line loss reduction is regarded as the smart grid virtual energy (SGVE). Virtual energy on the demand side works through DSM, mainly including energy-saving lamps (LVE), energy-saving motors (MVE), energy-saving transformers (TVE), frequency converters (FCVE), and efficient appliances (AVE). For instance, all DSM programs aiming at the promotion of energy-saving lamps can be gathered up as LVE.
Objective Functions.
The lowest-cost objective function denotes the minimum net present value of the total cost while meeting the electricity demand. It can be expressed as
min F 1 = Σ y=1..Y Σ n=1..N [ACI yn · (C yn − C 0 n ) + CO yn · P yn · 8760] / (1 + r)^y , (1)
where y is the yth year studied; Y is the total number of years studied; n is the nth type of energy source; N is the total number of types of energy source, including fossil fuel, nuclear, renewable energy, and virtual energy; ACI yn is the annual value of the investment cost of the nth type of energy per unit capacity in the yth year, in China Yuan per kilowatt (CNY/kW); CO yn is the operating cost of the nth type of energy per unit generation in the yth year, in China Yuan per kilowatt-hour (CNY/kWh); C yn is the installed capacity of the nth type of energy in the yth year, in kilowatts (kW); C 0 n is the existing installed capacity of the nth type of energy, kW; P yn is the average power generation output of the nth type of energy in the yth year, kW; r is the discount rate.
The lowest-emissions objective function denotes the lowest CO 2 emissions while meeting the electricity demand. It can be expressed as
min F 2 = Σ y=1..Y Σ n=1..N θ yn · P yn · 8760, (2)
where θ yn is the CO 2 emissions coefficient of the nth type of energy in the yth year, in tonnes/kWh. The value of θ yn decreases with further improvement of the efficiency of fossil energy utilization and wider application of CCS technologies. The optimal plans of energy distribution based on (1) and (2) are quite different: the plan based on (1) contains a large share of low-cost energy, while the plan based on (2) contains a large share of clean energy instead. The two objectives therefore contradict each other to a certain extent, and the fuzzy multiobjective planning method is used here to resolve this conflict.
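As a reading aid, the following Python sketch restates the two objectives and shows one generic way of combining them. The array shapes, the charging of investment cost only on capacity added beyond the existing stock C 0 n, and the max-min membership scalarization are illustrative assumptions, not the paper's exact fuzzy multiobjective formulation.

import numpy as np

def total_cost(C, P, ACI, CO, C0, r):
    # Net present value of investment plus operating cost over Y years and N energy types.
    # C, P, ACI, CO: arrays of shape (Y, N); C0: array of shape (N,); r: discount rate.
    # Investment is charged on capacity added beyond the existing stock C0 (an assumption).
    Y, N = C.shape
    discount = (1.0 + r) ** np.arange(1, Y + 1)
    annual = (ACI * np.maximum(C - C0, 0.0) + CO * P * 8760.0).sum(axis=1)
    return float((annual / discount).sum())

def total_emissions(P, theta):
    # Total CO2 emissions (tonnes) for average outputs P (kW) and emission factors theta (t/kWh).
    return float((theta * P * 8760.0).sum())

def fuzzy_compromise(cost, emis, cost_bounds, emis_bounds):
    # Illustrative max-min membership: 1 at each single-objective optimum, 0 at the worst value;
    # candidate plans are ranked by the smaller of the two satisfaction degrees.
    mu_cost = (cost_bounds[1] - cost) / (cost_bounds[1] - cost_bounds[0])
    mu_emis = (emis_bounds[1] - emis) / (emis_bounds[1] - emis_bounds[0])
    return min(mu_cost, mu_emis)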
Constraint Conditions
(1) Electricity Demand Constraints. The electricity generation of both conventional and virtual energy must be not less than the predicted electricity demand: Σ n=1..N P yn · 8760 ≥ E y , y = 1, 2, . . . , Y, where E y is the predicted value of electricity demand in the yth year, kWh.
(2) Peak Load Constraints. The total installed capacity of both conventional and virtual energy must be not less than the sum of the peak load and the reserve capacity: Σ n=1..N C yn ≥ D y · (1 + R), y = 1, 2, . . . , Y, where D y is the predicted value of the peak load in the yth year, kW, and R is the coefficient of reserve capacity.
(3) Generation Output Constraint. The annual electricity generation of each type of energy cannot exceed its upper limit: P yn · 8760 ≤ C yn · T n , n = 1, 2, . . . , N; y = 1, 2, . . . , Y, where T n is the annual utilization hours of the nth type of energy, in hours.
(4) Installed Capacity Constraints. Due to technology, funding, policy, and other limits, the annual installed capacity of each type of energy has an upper limit which cannot be exceeded: C yn ≤ C max yn , n = 1, 2, . . . , N; y = 1, 2, . . . , Y, (6) where C max yn is the maximum allowable capacity of the nth type of energy in the yth year, kW.
(5) Nonfossil Energy Proportion Constraint. The share of nonfossil energy in total electricity generation must not be lower than a prescribed minimum: Σ n=ma+1..mb P yn / Σ n=1..N P yn ≥ α y , where the 1st to the ma-th types of energy are conventional (fossil) energy, the (ma+1)-th to the mb-th types are nonfossil energy, and α y represents the minimum proportion of nonfossil energy in total electricity generation in the yth year.
(6) CO 2 Emissions Constraint. The total annual CO 2 emissions must not exceed the prescribed upper limit: Σ n=1..N θ yn · P yn · 8760 ≤ M y , where M y is the upper limit of CO 2 emissions in the yth year, tonnes.
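To make the constraint set concrete, the sketch below assembles a single-year linear program with the constraints listed above (demand, peak load, generation output, capacity limits, nonfossil share, and an emissions cap). The decision variables, the use of scipy.optimize.linprog, and the restriction to one year and to the cost objective alone are simplifying assumptions for illustration; the paper's IOCM is a multi-year, multiobjective model.

import numpy as np
from scipy.optimize import linprog

def plan_one_year(ACI, CO, theta, T, Cmax, nonfossil, E, D, R, alpha, M):
    # Decision vector x = [C_1..C_N, P_1..P_N]: installed capacity (kW) and average output (kW).
    # Parameter names follow the text; only the annual cost is minimised in this sketch.
    N = len(ACI)
    cost = np.concatenate([np.asarray(ACI, float), np.asarray(CO, float) * 8760.0])

    A_ub, b_ub = [], []
    # electricity demand: sum_n P_n * 8760 >= E
    A_ub.append(np.concatenate([np.zeros(N), -8760.0 * np.ones(N)])); b_ub.append(-E)
    # peak load: sum_n C_n >= D * (1 + R)
    A_ub.append(np.concatenate([-np.ones(N), np.zeros(N)])); b_ub.append(-D * (1.0 + R))
    # generation output: P_n * 8760 <= C_n * T_n
    for n in range(N):
        row = np.zeros(2 * N); row[n] = -T[n]; row[N + n] = 8760.0
        A_ub.append(row); b_ub.append(0.0)
    # nonfossil share: sum over nonfossil indices of P_n >= alpha * sum_n P_n
    row = np.zeros(2 * N); row[N:] = alpha; row[N + np.asarray(nonfossil)] -= 1.0
    A_ub.append(row); b_ub.append(0.0)
    # emissions cap: sum_n theta_n * P_n * 8760 <= M
    A_ub.append(np.concatenate([np.zeros(N), np.asarray(theta, float) * 8760.0])); b_ub.append(M)

    # installed-capacity limits enter as variable bounds
    bounds = [(0.0, Cmax[n]) for n in range(N)] + [(0.0, None)] * N
    return linprog(cost, A_ub=np.vstack(A_ub), b_ub=np.array(b_ub), bounds=bounds, method="highs")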
Case Study
According to the "12th Five-Year Plan" (TFP), the peak load, the total electricity consumption, and installed capacity are expected to reach 1040 gigawatts (GW), 6270 TWh, and 1437 GW, respectively, in 2015. Moreover, the peak load, the total electricity consumption, and installed capacity are expected to reach 1377 GW, 8200 TWh, and 1885 GW, respectively, in 2020 [20]. The power demand and installed capacity for each type of energy forecasting in 2015 and in 2020 are shown in Table 1.
IOCM is applied to the optimization of energy sources in 2015 and in 2020, respectively. In this model, 7 types of conventional energy sources, including coal, gas, hydro, nuclear, wind, solar, and biomass, and 6 types of virtual energy sources, including LVE, MVE, TVE, FCVE, AVE, and SGVE, are taken into consideration. The main parameters, presented in Table 2, are taken directly from the literature [17][18][19][20] or estimated indirectly based on it.
According to the results of IOCM, the total installed capacity will reach 1366 GW in 2015, of which conventional energy accounts for 1266 GW and virtual energy for 100 GW. Moreover, the total installed capacity will reach 1837 GW in 2020, of which conventional energy accounts for 1630 GW and virtual energy for 207 GW. The installed capacity of each type of conventional energy is shown in Table 3. It is obvious that lower-cost clean energy, such as hydro, wind, and nuclear power, will develop rapidly. In comparison with TFP, in 2015 the installed capacity of coal-fired plants will decrease by 93 GW, accounting for 9.97% of the total installed capacity of coal-fired plants, while CO 2 emissions will decrease by 378 million tonnes (Mt), accounting for 10.02% of total emissions. Moreover, in 2020 the installed capacity of coal-fired plants will decrease by 145 GW, accounting for 12.50% of the total installed capacity of coal-fired plants, while CO 2 emissions will decrease by 573 Mt, accounting for 12.59% of total emissions. The result of IOCM in comparison with TFP is shown in Table 4. As forecast in Table 5, in 2015 the CO 2 mitigation of virtual energy will reach 228 Mt, accounting for 60.39% of the total CO 2 mitigation; in 2020 it will reach 421 Mt, accounting for 73.43%. Consequently, promoting the development of smart grid construction and DSM programs, as well as adjusting the energy mix and improving energy efficiency, are the most effective measures for CO 2 mitigation. The optimization of CO 2 mitigation in the power sector, under the premise of meeting the electricity demand, leads to lower CO 2 emissions and less installed capacity. Moreover, it could be a good solution to problems caused by resource, capital, environmental, and other constraints and helps to achieve the goal of sustainable development.
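A quick back-calculation from the quoted percentages (a consistency check only; the baselines below are implied, not values stated in the paper) shows what totals the reported reductions refer to:

cases = {
    "2015": {"coal_cut_GW": 93,  "coal_cut_pct": 9.97,  "co2_cut_Mt": 378, "co2_cut_pct": 10.02},
    "2020": {"coal_cut_GW": 145, "coal_cut_pct": 12.50, "co2_cut_Mt": 573, "co2_cut_pct": 12.59},
}
for year, d in cases.items():
    coal_base = d["coal_cut_GW"] / (d["coal_cut_pct"] / 100.0)   # implied coal-fired capacity, GW
    co2_base = d["co2_cut_Mt"] / (d["co2_cut_pct"] / 100.0)      # implied total emissions, Mt
    print(year, round(coal_base), "GW coal,", round(co2_base), "Mt CO2")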
Conclusions
The power sector is under tremendous pressure to mitigate CO 2 emissions. Based on the discussion of all feasible measures, IOCM is proposed, which integrates the equivalent virtual energy, consisting of the energy-saving capacity of both the demand side and the power grid, with the conventional energy of the supply side. The optimal plan of energy distribution, considering both economic and mitigation benefits, can then be figured out by multiobjective optimization calculations. The main conclusions are as follows.
(1) The development of CO 2 mitigation measures should depend on the degree of maturity of the technology. Large-scale replacement of small-sized coal-fired generation units with highly efficient large units is a mature technology deserving wide promotion. In the short and medium term, emphasis should be put on mature and low-cost generation technologies, such as hydropower, nuclear power, and wind power. With more research funding, obstacles in the development of solar power and CCS could be removed in the medium and long term.
(2) Since no single measure suffices to reach the goal of CO 2 mitigation in the power sector, emphasis should be put on the integrated application of various measures, including adjusting the energy mix and improving energy efficiency on the supply side, as well as energy saving on the demand side and in the power grid.
(3) Based on the data from TFP, IOCM is applied to the optimization of power energy resources in the power sector in 2015 and in 2020. CO 2 mitigation is indirectly achieved by the development of DSM and smart grid construction, which lead to lower demand and losses.
The results indicate that the development of DSM and the smart grid can make great contributions to CO 2 mitigation in the power sector of China, reducing CO 2 emissions by 10.02% in 2015 and 12.59% in 2020. | 5,649 | 2012-11-06T00:00:00.000 | [
"Economics",
"Engineering",
"Environmental Science"
] |
Accuracy of Noise-Power-Distance Definition on Results of Single Aircraft Noise Event Calculation
The aircraft performance and noise database, together with operational weights (which depend on flight distance) and operational procedures (including low-noise procedures), significantly influences the results of noise exposure contour map assessment under real atmospheric conditions. Current recommendations of the standard SAE-AIR1845A allow the definition of flight profiles via solutions of the balanced motion equations. However, differences are still observed between measured sound level data and calculated ones, especially in the assessment of single flight noise events. Some of them are well explained by the differences between the balanced flight parameters (first of all thrust and velocity) and those monitored by the traffic control system. Statistical data were gathered to form a more general view of these differences, and a proposal to use them in calculations has been substantiated. Besides, real meteorological parameters always produce inhomogeneous atmospheric conditions, which differ considerably from the main assumptions of SAE-AIR1845A and thus introduce inaccuracies into sound level calculations.
Introduction
Current recommendations on aircraft noise calculations are defined by ICAO Document 9911 [1]. Its methodology applies to long-term average noise exposure only: " . . . it cannot be relied upon to predict with any accuracy the absolute level of noise from a single airplane movement and should not be used for that purpose". These recommendations are sufficient for overall noise exposure and impact assessment of airport activities and are used for a number of purposes in aircraft noise management, including noise zoning around airports. Current versions of the relevant models and software (INM, ANCON, STAPES, SONDEO, IsoBella, etc. [2][3][4][5][6]) fully comply with these recommendations; all of them were verified by CAEP for their conformity with Doc 9911 [1]. In their structure and main methodical approach, the existing models are integrated noise models: they combine a module that assesses the flight path parameters necessary for noise calculations with the aircraft noise assessment module itself.
However, a number of national noise regulations require single-noise-event assessment via L Amax , L AE (SEL), or any other noise descriptor that corresponds to separate noise events, particularly to aircraft flyovers. L Amax is the peak noise level of the event in decibels; L Aeq is the sound level averaged over a specific time period that includes the event, in decibels; and SEL is the sound level of the event in decibels accounting for both intensity and duration, that is, generally speaking, the exposure. SEL takes all of the energy under the curve in a sound pressure level versus time graph and compresses it into a 1-second value. The SEL for each flight operation is then adjusted to reflect the duration of the operation to arrive at a "partial" contribution to the noise index L DN (or L DEN ) for the operation [1]. The partial L DN values are then added logarithmically to determine the total noise exposure level at any noise control point for the average day of the year. For example, the Sanitary Norms of Ukraine [7] still require the assessment of L Amax together with L Aeq for the day (07:00-22:00) and night (22:00-07:00) periods for any type of source, including aircraft noise. Thus, noise zoning around the airports is defined by calculations of L Amax , L Aeq , and L DEN (the European noise index is also required to be assessed by the new aviation rules of Ukraine, APU-381 [8]). The predefined maximum size (area) for any of these criteria should be used for the adoption of the appropriate noise zone and protection measures.
Usually, the aircraft fleet and the air traffic, with the appropriate distribution of flights between the routes, are the necessary input data for aircraft noise calculations. The ICAO Doc 9911 guidance [1] is of somewhat limited use in Ukraine for calculating the maximum sound level L Amax and/or the sound exposure level L AE , but it is mandatory, in accordance with the requirements of the aviation rules of Ukraine APU-381 [8], for spatial noise zoning around the airport. Complementary to this, any new development inside a specific noise zone requires the assessment (preferably by measurements) of L Amax together with L Aeq at its location for comparison with the norms for this territory and, if necessary, the definition of appropriate measures of protection from aircraft noise impact. Follow-up measures, that is, noise monitoring performed either as portable or as continuous aircraft noise measurements in the vicinity of airports, must be carried out at this site and at other locations for comparison, to show the adequacy of the predefined and implemented noise protection measures. Once again, monitoring (instrumental, or calculation-based at sites where measurement terminals are absent) should be provided for the maximum sound level of peak noise events L Amax , the sound exposure level L AE , and the equivalent sound level L Aeq (for example, L Aeq logged with a 1 s step, as is usual for current monitoring requirements [9]) for a specific period of the day, in order to inform the population and authorities how the noise norms are fulfilled; if they are still violated, new measures to protect people from noise must be proposed. In such a case, the requirements for the accuracy of noise calculation models are quite similar to the accuracy requirements of an instrumental noise assessment.
Basic Calculation Equations and Terms
For any particular environmental acoustic source, including an aircraft, the sound pressure level (SPL Σ ) at any point of noise control at distance R from the source [10,11] is defined by the following equation:
SPL Σ = 10 lg Σ i=1..n 10^{0.1 [L W,i − 20 lg(R/R 0 ) − α R + ∆L Θ + ∆L f + ∆L v + ∆L int + ∆L scr ]}, (1)
where L W is the sound power level of the particular i-th acoustic source of the aircraft under consideration, normalized to the reference distance R 0 (usually R 0 = 1 m); n is the number of acoustic sources with an essential contribution to the overall spectrum of the aircraft at the specific flight stage (or flight mode); ∆L Θ is the correction for the directivity of sound radiation; ∆L f is the spectral correction for frequency band f; ∆L v is the correction for aircraft speed v; ∆L int is the correction for sound interference corresponding to the "ground" noise attenuation; ∆L scr is the correction for sound diffraction corresponding to noise attenuation by an acoustic screen; and α is the sound absorption coefficient in air. The main contributions from the source in this model are L W , ∆L Θ , and ∆L f ; these are defined by the models of the individual acoustic sources of the aircraft, as listed in [10].
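A minimal numerical sketch of the energetic summation in Equation (1), written in Python purely for illustration; it assumes that each correction term carries its own sign (attenuation-type corrections entered as negative values):

import numpy as np

def spl_total(Lw, dTheta, dF, dV, dInt, dScr, R, alpha, R0=1.0):
    # Lw and the correction terms are arrays with one entry per acoustic source;
    # alpha is the atmospheric absorption coefficient in dB per metre.
    Li = (np.asarray(Lw, float) + dTheta + dF + dV + dInt + dScr
          - 20.0 * np.log10(R / R0)      # spherical divergence from the reference distance R0
          - alpha * R)                   # atmospheric absorption along the path
    return 10.0 * np.log10(np.sum(10.0 ** (0.1 * Li)))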
The number of acoustic sources that is sufficient for the particular type of aircraft under consideration depends on the type of engine in its power plant and on the aircraft flight mode. The model of dominant acoustic sources for overall aircraft noise assessment, TRANOI [10,11], is based on semi-empirical models of these sources, as recommended by the ICAO manual [12], and has quite high accuracy for spectral and overall SPL assessment (σ Σ = ±1.2 dB) of the aircraft in any possible flight event. The use of such a sophisticated acoustic model of the aircraft in a number of noise management tasks is still impossible, owing to the huge amount of input data necessary to calculate all components of the model. The properties of all acoustic sources of the aircraft under consideration at the departure and/or arrival airport are mostly unavailable for wide usage due to their confidential character (they are protected by the manufacturers of the aircraft and engines).
To simplify the calculation procedures for aircraft noise assessment in and around airports, the approach of Noise-Power-Distance relationships (NPD-relationships or NPD-dependences) was introduced [1,13]; it is still the most common type of acoustic model for aircraft, is usually included in integrated noise models, and is used for calculating the noise levels around airports. For example, the contribution to the sound exposure level L AE from each flight path segment is calculated as follows (Doc 9911 [1], ECAC [13]):
L AE,seg = L E∞ (P,d) + ∆ V + ∆ I (ϕ) − Λ(β, l) + ∆L F + ∆ SOR , (2)
where L E∞ (P,d) is the initial value of the sound exposure level for a flight segment of infinite length, determined by interpolation of the NPD-dependence data for the actual values of the engine thrust (or power) P and the distance d. The corrections ∆ V , ∆ I (ϕ), Λ(β, l), and ∆ SOR are quite similar to the corrections of model (1) (the correction for aircraft speed ∆L v , the correction for the directivity of sound radiation ∆L Θ , and the correction for sound interference corresponding to the "ground" noise attenuation ∆L int ), but they are generalized for the total aircraft fleet rather than specific to an aircraft type. The correction ∆L F accounts for the ratio of the sound energy received from the particular segment to the energy received from an infinite flight path; it is used for exposure levels only. The NPD-relationships in model (2) include the sound power level of the aircraft type under consideration together with the divergence and absorption of the sound waves in a real atmosphere (the distance-dependent terms of model (1)). Because sound absorption is frequency dependent, the corrections for the sound directivity pattern ∆L Θ and the frequency band ∆L f of the noise source also substantially influence the NPD-data.
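The core operation behind model (2) is the interpolation of the tabulated NPD data for the actual thrust P and slant distance d. A small Python sketch is given below; interpolation linear in power and in the logarithm of distance is the usual convention and is stated here as an assumption rather than as a quotation of Doc 9911:

import numpy as np

def npd_level(npd_table, powers, distances, P, d):
    # npd_table[i][j] is the tabulated level for power setting powers[i] and distance distances[j];
    # both powers and distances must be in increasing order for np.interp to work correctly.
    table = np.asarray(npd_table, dtype=float)
    log_d = np.log10(np.asarray(distances, dtype=float))
    per_power = np.array([np.interp(np.log10(d), log_d, row) for row in table])
    return float(np.interp(P, powers, per_power))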
A concept similar to the NPD-relationships, the noise radius R N , is used in the Ukrainian calculation method [11]. It is an extension of the concept of the shortest distance d used in the NPD-relationships. The moving aircraft is represented as an axially symmetric noise source, around which cylindrical surfaces of constant sound level are formed, as shown in [10]. For any time-integrated (exposure-type) sound level (or noise index), at a constant flight operational mode and for a specific magnitude of the exposure level, the product R N × v = const, as a function of the noise radius R N and the flight velocity v only. This result has been verified by different numerical investigations for a variety of aircraft types, even after accounting for the influence of atmospheric absorption in various states of the atmosphere. At high flight velocities, the airframe noise sources contribute substantially to the overall aircraft noise level compared with the engine noise sources, and because of this the relationship R N × v = const is violated. However, in most cases of practical interest the flight velocities do not exceed their critical values under take-off and landing conditions, at least for aircraft with acoustical performances in accordance with the limits of ICAO Chapter 14 [14], so this relationship can be used in calculations and modeling of aircraft noise around airports. The correction to exposure-type sound levels for variations of the flight velocity v follows the ICAO recommendation ∆ V = 10 lg(v 0 /v), where v 0 is the reference velocity usually used for the determination of the NPD-relationships.
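For example, this duration (velocity) correction can be evaluated directly; the reference speed of 160 kt used below is only a typical default assumption, not a value taken from the text:

import math

def velocity_correction(v, v0=160.0):
    # Slower flight means longer exposure, hence a positive correction to SEL-type levels.
    return 10.0 * math.log10(v0 / v)

print(round(velocity_correction(130.0), 2))   # about +0.9 dB for 130 kt against a 160 kt reference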
Historical Development of the Integrated Aircraft Noise Models
The simplest type of aircraft noise model is confined to the so-called aircraft grouping method and includes the definition of noise footprints (contours for specified values of noise indices and the areas bounded by these contours, Figure 1) for any group of aircraft and a particular flight stage. Usually, two flight stages were considered sufficient for aircraft noise zoning and land use planning around airports: departures (take-off/climbing) and arrivals (descending/landing). Flight paths for departure and arrival were defined with respect to the predefined group; for example, in the first Soviet Union method [15] there were 5 aircraft groups in the list, and the basic group included the Tupolev-134, Tupolev-154, Iljushin-62, Antonov-10, and Antonov-12 aircraft. The Antonov-22 and Iljushin-76 aircraft were included in the noisiest group, which was 5 dBA noisier than the basic group. For all of them, besides the averaged flight profiles, there were also averaged relationships of sound level versus distance (Figure 2), which were derived from flight trials, averaged for the group, and used further for noise calculations. The idea of aircraft noise certification was developed at this time but was not yet included in aircraft noise calculation, even though a direct correlation was shown between the sound levels (noise indices) at the certification points (Figure 1) and the noise footprint area/size for the appropriate flight stage [10]. This method assigns the appropriate group to the aircraft under consideration depending on the most important operational parameters, including the take-off mass and the number and type of engines in the power plant.
The calculated noise footprints (or contours) were very close to the form of an ellipse or part of it; the method of aircraft type grouping predicts noise levels at the point of noise control with an accuracy of ±5 dBA for take-off/climbing and ±3 dBA for landing.
In principle, an ellipse may be defined as the result of the intersection between the plane surface of noise calculation and a cylinder with radius a (shown in Figure 1), where the cylindrical surface is defined by the acoustic energy (or acoustic exposure) distributed around the line source of noise generated by the aircraft moving along the flight path (or its particular part, a flight segment). The difference between the certified noise level at take-off (L 2 ) and the level L corresponding to the final point on the contour along the flight (landing or take-off) axis may be written (bearing in mind that a/a 2 = x/x 2 ) as
L 2 − L = C lg(a/a 2 ) = C lg(x/x 2 ), (4)
where the constant C defines the attenuation rate (for spherical spreading its value is near 20), a is the minimum distance from the flight path to the final point on the contour (Figure 1), and a 2 is the minimum distance from the take-off path to the certification point No. 2 (for take-off). In the same way, for the certification sideline point No. 1, the minimum distance b 1 enables connection of the certificated level L 1 with the final point of the (maximum half-width) contour b:
L 1 − L = C lg(b/b 1 ). (5)
Since the area S is proportional to the product x·y, then, from (4) and (5), it is proportional to L 2 and L 1 :
S ∝ x·y ∝ 10^{(L 2 − L)/C} · 10^{(L 1 − L)/C}. (6)
Based on Equation (6), one may deduce that for aeroplanes with roughly equal certification levels the corresponding footprints will also be nearly equal. The nearly linear relationships between noise contour areas and EPNL values for take-off flight procedures (EPNL values at control point No. 2) and for landing flight procedures (EPNL values at control point No. 3), shown in Figure 4.1 in [10], confirm this suggestion and support the basic idea of grouping aeroplanes with similar noise certification data in the first noise contour calculation method [15]. The fruitful idea of aircraft grouping is still used even in the current calculation methods [1]: if the data necessary for calculation are absent from the database for any aircraft, it may be calculated via a substitution aircraft, which is representative of the calculated type and of any other aircraft type in the group to which the substitution type is assigned.
In the first ICAO document [16] on the subject of aircraft noise calculation, this radius a (Figure 1) was called the Noise-Power-Distance relationship (NPD-relationship), bearing in mind that for the first jet aircraft the noise was defined completely by the jet engine, so that the engine power setting also defines the noise level at a specific distance, or the distance for a specific noise level. In a number of works this radius was called the noise radius [10,11]. Because the engine operation mode changes substantially during a specific flight stage (either departure or arrival), the ICAO circular [16], as well as the international standard SAE-1845A [17], recommended dividing the flight path into a number of segments with specific engine settings and wing flap deflections and using a specific NPD-relationship (NPD-curve) for these settings (power). NPD-curves were defined as specific to the aircraft type or to the aircraft group. It was assumed that the NPD-curve relationship adequately represents the atmosphere state and the flight segments in any airport study area. It was recommended that NPD-curves be defined during noise certification trials, in as much detail as necessary to cover the possible flight modes during the departure and arrival stages of the aircraft type under consideration. Such a procedure provided noise results at least 1-2 dBA more accurate than the first aircraft noise calculation method [10,11].
The current ICAO Doc 9911 [1] is much closer to the standard SAE-1845A [17]; it recommends a number of improvements for a further increase of the accuracy of noise calculations: the number of segments of the flight path is defined by requirements on the change of the flight parameters, such as flight velocity, wing aerodynamic configuration, and engine thrust, all of which are defined in accordance with the meteorological parameters of the case under consideration and flight safety requirements. In the noise level calculations themselves, a few corrections were introduced: for engine type (jet or propeller, Figure 3), for engine installation (under the wing, over the wing, at the tail of the fuselage), and for the so-called lateral attenuation effect. In ICAO Doc 9911, the engine type and installation effects are combined into a single correction for the engine installation effect (two lateral directivity functions are employed, for aircraft with tail-mounted and wing-mounted engines, respectively), while two corrections remain separate: for lateral attenuation and for the directivity of noise propagation from the aircraft (which accounts for the pronounced directionality of jet engine noise behind the ground-roll segment, Figure 3). All the corrections in ICAO Doc 9911 [1] and SAE-1845A [17] are used as averages for the overall aircraft fleet. ECAC Doc 29 [13] recommends using two corrections for noise directivity, one for turbofan-powered jet aircraft and one for turboprop-powered aircraft; these contribute substantially to the noise footprint contour and area, first of all for departure noise during the take-off roll of the aircraft along the runway.
Lateral Attenuation
Lateral attenuation according to SAE Standard AIR1751A [18] was processed and presented as the attenuation of overall aircraft noise to the side of the aircraft flight path, relative to the level directly beneath it. The standard equation was derived empirically from a large set of flight noise data for aircraft types with ICAO Chapter 2 acoustic performances, including, and predominantly, aircraft with tail-mounted engines such as the Boeing 727, Lockheed L-1011, Falcon 20, Falcon 50, Douglas DC-9, Douglas DC-10, etc. [18]. The equation for lateral attenuation presented in Standard AIR1751A is calculated as a function of the lateral distance d and the elevation angle β only [18], with no dependence on the spectrum of the noise generated by the aircraft or on the properties of the surface cover over which the effect is observed. This lateral attenuation model is still considered reliable by SAE 1845A [17] and ICAO Doc 9911 [1], especially for aircraft with tail-mounted engines (Figure 4), but the latest SAE Standard AIR-5662 [19] method recognizes that part of this "lateral attenuation" is, in fact, a lateral directionality associated with engine installation effects. Measurement results even show an amplifying effect for larger elevation angles, both for departures and for arrivals. This suggestion was supported by the experimental data of the engine installation effects investigation [20], which showed significant differences in the engine installation component of lateral attenuation between jet airplanes with wing-mounted engines and jet airplanes with tail-mounted engines. Possible reasons for these differences are related to the differences in the physical geometry of these two groups of airplanes. In Standard AIR-5662 [19] this attenuation is still termed "lateral attenuation" and is in excess of the attenuation from wave divergence and atmospheric absorption. It is applied to turbofan-powered transport-category airplanes with engines mounted at the rear of the fuselage or under the wings, and to general-aviation airplanes. Sound wave propagation is considered over "acoustically soft" ground surfaces such as lawn or field grass, so the method is not valid for "acoustically hard" ground surfaces such as frozen or compacted soil, asphalt, concrete, water, or ice.
NPD-Relationship for Homogeneous Atmosphere
All current software products, such as the FAA's Integrated Noise Model (INM), well known among noise modeling specialists, are integrated models because their two basic components, the flight path module and the acoustic module, are used to assess a complete noise event: a separate flight of the aircraft over the investigated area or over a specific noise control point [10,11]. Noise data in their databases are usually supplied by aircraft manufacturers in the form of NPD-curves [1], which are provided for standard values of temperature and humidity and for a homogeneous atmosphere. Different aircraft noise models consider the same physical phenomena but do so with different levels of detail and different sets of specific computational algorithms. Flight profiles (flight trajectories in the vertical plane) of the aircraft are determined by solving a system of equations of balanced motion recommended by the existing methods and standards [1,17], using the required coefficients from the ANP database supported today by Eurocontrol (www.aircraftnoisemodel.org, accessed on 2 April 2021). The database includes data for around 200 aircraft types, both for flight profiles and for noise calculations.
For all the methods, the flight profiles are placed in space in accordance with the flight routes in the vicinity of the airport (or of the specific runway), which are shown in the flight charts for the airport as nominal flight tracks. The accuracy of noise calculations depends on how closely a real flight path follows the flight chart track; usually a distribution of flight paths exists inside a flight corridor (which may be predefined by the flight chart for any route as lateral limits relative to the flight track), with its axis along the flight track (Figure 5). ICAO Doc 9911 [1] in the current edition provides a calculation scheme for such possible distributions (close to a normal distribution), which are important for the calculation of equivalent sound levels and/or day-night noise indices. This is, however, not appropriate for single flyover event noise assessment, which should rely on parameters closer to the real flight trajectory. With all this in mind, the simplified scheme of any integrated noise model looks as shown in Table 1. Today, for a number of airports, because of their specific aerodrome layout and infrastructure and the much quieter aircraft in operation (owing to the influence of the ICAO Balanced Approach on the acoustic performance of new aircraft designs), not only do the flight stages contribute essentially to the noise footprints inside and around the airport; taxiing of the aircraft between the runway and the gates close to the terminals may also contribute and must be included in the calculation of noise contours to be used for noise zoning purposes. Widespread implementation of the Automatic Dependent Surveillance-Broadcast (ADS-B) system [22] onboard aircraft, required by ICAO first of all for safety purposes, already simplifies the supervision of aircraft (both in flight and on the ground), providing real data on the noise source location necessary for subsequent noise exposure calculations in and around airports. Currently, aircraft are closely supervised and checked for compliance with the flight corridor requirements; because of this, the lateral distribution of real flight trajectories around the flight chart track is very narrow today, resulting in quite a narrow noise footprint.
The differences between the calculations of the integrated models and the measured values amount to an underestimation of the exposure level SEL of the take-off noise of an average aircraft by ~3 dBA, and of the noise during descent before landing by ~1.5 dBA. The error of instrumental measurement of the sound levels of single flyover events, in accordance with the provisions of the modern international standard DIN 45643 [23], is about 1 dBA, so the discrepancy between the calculated and measured values for individual events is still too large. Investigating the reasons for the underestimation of noise exposure in the calculation of single aviation noise events and improving the model so that it approaches the measured values is one of the main tasks of this research, the practical result of which is an improvement of calculation-based monitoring of aviation noise at and around airports.
Flight Profiles Accuracy
Flight profiles in real operation usually differ greatly from the results predicted for balanced motion, both for take-off/climbing and for descending/landing profiles [21,24]. The differences are observed not only for the height-distance dependences but also for the flight speeds and engine thrust settings, which also contribute much to the predicted noise levels. For the take-off/climbing flight stage, the noise contour for L Amax = 75 dBA defined by flight parameters observed in operation may be 3-5 km longer than that obtained with balanced flight input data, as shown in Figure 6.
The acoustic basis of any aircraft noise simulation model consists of point sources that move dynamically in time, sometimes represented by line sources along the flight trajectories in the horizontal and vertical planes. It includes the effects of geometric propagation (divergence) of the sound waves from the source, the absorption of their acoustic energy by air, and the effect of their interaction with the earth's surface (interference of the direct waves and those reflected from the surface). Flight profiles in real operation usually differ greatly from the results predicted for balanced motion, both for take-off/climbing and for descending/landing profiles (Figure 7). The differences are observed not only for the height-distance dependences but also for the flight speeds and engine thrust settings, which contribute much to the predicted noise levels (Figure 8). Review [26] is a good examination of the SAE AIR-1845A methodology implemented in the ANP database and in both standards [1,17]. The FAA linear regression method for the calculation of aerodynamic and flight performance coefficients was tested, validated, and refined for the six study airplanes. As the regression is developed using procedures for a wide range of airport altitudes, airplane weights, and atmospheric temperatures, the resulting coefficients can reproduce flight profiles over a similar range of conditions. A key attribute of the FAA method is the direct use of the manufacturer's flight profile data, which are easier and more intuitive for the industry to produce than the ANP database coefficients. This distinction also simplifies error checking, and the bounds of the matrix of flight profile data determine the range of applicability of the coefficients.
Arrival Flight Profile Parameter Analysis
In Figure 9 (combined from four figures taken from [21]), the arrival data for two types of aircraft are shown. Looking at the flight speed (at the top of Figure 9), one can see that the operational speeds (blue lines) vary along the glide path and sometimes drop below the flight safety requirements (the balanced flight solution, which is also defined in accordance with the safety requirements, is shown as a red line in the top-left picture of Figure 9). In these cases the pilot must act on the thrust, increasing the engine operation mode in order to return the speed to safe values; so in reality we must consider thrusts above the balanced ones (due to the pilot's flight control action to return the velocity to the safe range; arcs connect every reduced velocity in the top-left picture with an increased engine thrust in the bottom-left picture in Figure 10). It was found that the thrusts may be twice as high as the balanced predictions. For some aircraft types, the balanced predictions were found to be much lower than the thrust observed in operation (Figure 10, on the right), which may be assessed as an error in the input values of the coefficients (aerodynamic or thrust) in the databases used.
With the thrust corrected at the final glide slope descent for the B-734, L Amax is 9-10 dBA higher than the corresponding ANP database result. Because this difference in flight operation occurs very close to the runway, it influences the smaller 85 dBA contour, making it 1-2 km longer as calculated by the IsoBella model and software (Figure 10). For smaller L Amax values, the contours are not affected significantly by these flight control measures. If the balanced thrust and speed are seriously different from their measured and averaged values (Figure 10, on the right), the influence of this difference will be observed for all the contours under investigation (65-85 dBA for L Amax ), not only for the smallest ones (for the highest L Amax ).
Departure Flight Profile Parameter Analysis
In Figure 8 the data for an aircraft departure flight path are shown. Looking at the flight speed (bottom figure), one can see that the operational speeds (blue) vary along the take-off/climbing stage of the flight path and are usually lower than the balanced values (red). The corresponding operational thrusts are also much lower (right figure), but both of them, speed and thrust, remain within the safe range; these data result from pilot operational qualification and safety culture and may be defined for every airport/airline/aircraft via statistical analysis. Lower flight speeds (bottom right in Figure 9) produce higher flight path altitudes compared with the balanced solution, while lower thrusts produce lower flight path altitudes. With the thrust corrected, together with the corresponding height and speed at climb-out, the contour for L Amax = 75 dBA is predicted to be 1.5 km longer than with balanced flight input data for standard atmosphere conditions, as shown in Figure 11. The resulting difference between the real and balanced data may increase the noise exposure at a point and the footprint, as shown in Figure 11 (the real contour for L Amax = 75 dBA is predicted 3.2 km longer than with balanced flight input data), or reduce it, as shown in Figure 12 (the real contour for L Amax = 75 dBA is predicted 2.9 km shorter than with balanced flight input data). Underestimation of the sound level, both at departure and at arrival, as a function of track distance is observed elsewhere for particular flight events, for example, as shown in Figure 13 for US INM ver. 5.1 and the Boeing-737 in [26]. Close to the runway, overestimation may be observed (Figure 13b). On average, INM predictions are about 3-5 dB lower than the measured values at high SEL (closer to the runway) and up to 10 dB lower at low SEL (far from the runway). Such a pattern is typical of most analyses made elsewhere. The main contribution to it is provided by the large difference between the calculated balanced thrust of the engine and the thrust observed in real flight (Figures 11c and 12c). Improvements in engine thrust calculation must therefore contribute essentially to a more accurate SEL (and L Amax ) assessment for separate flyover noise.
Corrections for More Accurate Sound Level Assessment
If the calculation of noise levels for individual segments of the flight profile is inaccurate by 1 dBA, the noise contour for the normative value of the sound level L Amax = 75 dBA is shifted by 1.5-2 km in the direction of flight (Figure 10). Comparisons with real measurements show that the calculated sound levels for single events are smaller in most cases. In our practice of portable noise measurements (noise control points were chosen at distances up to 2 km from the runway, a few of them very close to the nominal arrival/departure route and beneath the flight path, some of them up to 1 km aside from it; the aircraft fleet consisted of short- and medium-haul airplanes such as the A-320, A-319, B-737, B-738, B-735, EMB-195, MD-83, DH-8, ATR-72, ATR-42, F-70, SAAB-2000, and RJ-85), the range of variation at every point is quite large, even for the same aircraft type and the same flight mode, but the difference from the predictions is much greater. While the maximum levels calculated for these points were between 75 and 92 dBA, the maximum measured levels were between 78 and 99 dBA and the minimum between 69 and 87 dBA, with a standard deviation between measured and calculated values of σ = 1.2-3.9 dBA [25].
For the terminals of the continuous noise monitoring system located within 2-4 km of the end of the runway, the averaged (arithmetic mean, with standard deviation σ_st = 0.3-1.0 dBA) measured values of L_Amax were greater than the calculated ones by 0.8-1.2 dBA for different types of medium-haul aircraft (MD 81-90, Boeing 737-500 to -800), although in some cases differences in L_Amax of up to 2-5 dBA were observed (Figures 12 and 13). Similarly, for the take-off/climb phase, along the flight segments that contribute to the sound level at particular monitors (∆L_Amax = 0.3-0.5 dBA, σ_st = 0.2-0.5 dBA), the flight speed is usually observed to be 20-50 m/s lower than the balanced value, while the actual operational thrust of the engines is observed to be both higher and lower than the balanced value, which generally causes a difference in sound levels of ∆L_Amax = ±4-5 dBA (Figures 11 and 12). In this case the noise contour for L_Amax = 75 dBA (the noise standard for night-time according to the requirements of the State Sanitary Rules [7]) for the take-off of an MD-81 extends more than 3 km farther along the flight axis.
The method of aircraft trajectory segmentation takes into account the changes of the flight parameters along the segments and their impact on the sound level of the current segment. In particular, a linearized equation is used to estimate the climb angle along a rectilinear segment of the trajectory flown at constant speed (a sketch of this relation is given below). In this equation the coefficient k = 1.01 of the SAE-AIR-5662 [19] methodology accounts for the increased climb gradient due to a headwind of w_v ≈ 4.1 m/s at the equivalent climb airspeed v = 82.3 m/s; F_n/δ_am is the available corrected net thrust per engine, N the number of engines, R the reciprocal of the aerodynamic quality, i.e., the ratio of the drag and lift coefficients C_X/C_Y for the given wing high-lift configuration (flaps, slats, etc.), G the total weight of the aircraft, and δ_am = p/p_0 the ratio of the atmospheric pressure p at the trajectory point to the standard sea-level pressure (p_0 = 101,325 Pa). In actual operation the observed value of the coefficient k is smaller: in a number of investigations it varied between 0.79 and 0.845 for the Boeing 737 (-400 to -800) and between 0.93 and 0.985 for the MD-81 to MD-90, although for the MD-81 values up to k = 1.075 were observed in some cases. Accordingly, the calculated departure sound levels are lower than the measured ones by up to ~3 dBA for the Boeing 737-400 and ~3.5 dBA for the MD-81 [27].
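A sketch of this relation, written in the generic form used by the trajectory-segmentation standards (SAE-AIR-1845 / ECAC Doc 29) with the symbols defined above, is given here; the exact expression in the SAE-AIR-5662 methodology may differ in detail:

\[
\sin\gamma \;=\; k\left(\frac{N\,(F_n/\delta_{am})}{G/\delta_{am}} - R\right) \;=\; k\left(\frac{N\,F_n}{G} - R\right),
\qquad
\gamma \;\approx\; k\left(\frac{N\,F_n}{G} - R\right)\ \text{(linearized, for small }\gamma\text{)},
\]

where the coefficient k absorbs the headwind and airspeed correction (k = 1.01 for the reference headwind w_v ≈ 4.1 m/s and climb speed v = 82.3 m/s quoted above).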
Ground Effect Correction
Considering the acoustics alone, it is somewhat difficult to accept that an observer 400 m to the side of the flight path, with the aircraft only a few tens of meters above the ground surface or less, should expect an additional attenuation of the order of 10 dBA to result from the ground effect; a calculation scheme for this phenomenon is shown in Figure 14a. It is even more difficult to accept that, for the same horizontal position but with the aircraft a few hundred meters above ground level, there should still be an excess attenuation across the ground of the order of 7 dB. Yet these are the values provided by the method incorporated in the current standards and recommendations [1,13,17,19].
Homogeneous Atmosphere
If one uses NPD measurements made with the aircraft directly above the measurement position (exactly how the NPD data for an aircraft type are supposed to be evaluated [1,17]), any reflection from the ground surface (Figure 15) is already incorporated in the results. The basic values of the NPD dependences are obtained during certification measurements (for flight-path altitudes of 100-700 m above the measurement point on the surface) for both the take-off and landing flight stages; with the microphone installed at a height of 1.2 m above the ground surface, both direct and reflected sound rays (straight lines in a homogeneous atmosphere) reach the microphone, and the interference between them creates a ground effect even in the longitudinal plane (at points directly beneath the flight path, Figure 15c), not only at noise-control points lateral to the trajectory (Figure 15b). Thus the correction for "lateral attenuation of aircraft noise" is a methodologically incorrect definition; it is better to call this phenomenon the "ground effect". However, this fact is currently not taken into account by the ICAO Doc 9911 methodology or by the standard SAE AIR 5662 [19]. To increase the accuracy of single-noise-event calculation, it is necessary to calculate the ground-effect correction Λ(β,l) with higher accuracy and to use it together with the NPD dependences from the ANP database to determine the sound level of a particular segment of the flight path (Figure 14). For the take-off and descent flight profiles, at microphone positions beneath the trajectory the interference effect reaches up to 6 dBA (Figure 16), which also introduces an error in the calculated maximum sound level if the ground effect in the plane of the flight path is neglected. If the ground effect was included in the noise certification measurements, then the NPD curves include it as well, and direct use of model (2) in such a case would account for the ground effect twice. In that case, the correction Λ(β,l) in model (2) must be the difference between the ground effect directly beneath the flight path and that at the (lateral) point aside of the flight-path plane. The ground-effect values as defined by the current ICAO Doc 9911 methodology are appropriate only for use with NPD data from which the ground effect has been excluded. The latest investigations of the ground effect for aircraft noise [21,28] confirm that the current ICAO Doc 9911 methodology overestimates the ground effect (as shown in Figure 4), especially for newer aircraft types, most of which have engines installed under the wing.
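As a rough illustration of the interference mechanism described above (a direct ray plus a ray reflected in the ground plane, as in Figure 15), the sketch below evaluates the level relative to the free field with a simple image-source model. It assumes a flat, perfectly reflecting surface (reflection coefficient Q = 1); the grass-covered, finite-impedance ground and the one-third-octave band averaging used in practice would need a complex, frequency-dependent Q and band integration, so the numbers produced here are only indicative of the few-dBA effects discussed in the text.

```python
import numpy as np

def two_ray_level_re_free_field(h_src, h_rec, horiz_dist, freqs, c=343.0, Q=1.0):
    """Level relative to the free field (dB) from interference of the direct
    ray and the ray reflected in a flat ground plane (image-source model).
    Q = 1 models an acoustically hard surface; grass cover needs a complex,
    frequency-dependent reflection coefficient."""
    r1 = np.hypot(horiz_dist, h_src - h_rec)      # direct path length
    r2 = np.hypot(horiz_dist, h_src + h_rec)      # reflected path (image source)
    k = 2.0 * np.pi * np.asarray(freqs, dtype=float) / c
    p = 1.0 + Q * (r1 / r2) * np.exp(1j * k * (r2 - r1))
    return 20.0 * np.log10(np.abs(p))

# Microphone at the certification height of 1.2 m, aircraft at 300 m altitude,
# evaluated at a few one-third-octave centre frequencies (illustrative values).
print(two_ray_level_re_free_field(300.0, 1.2, 0.0, [125.0, 250.0, 500.0, 1000.0]))
```

Averaging the squared pressure over the frequency bands of an aircraft spectral class, rather than printing per-frequency values as above, gives the broadband A-weighted interference effect the text refers to.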
Inhomogeneous Atmosphere
The last point of concern about the accuracy of ground-effect assessment under all possible conditions is how the wind-speed gradient with height above the ground, the wind direction, and the temperature gradient influence the sound refraction between the aircraft (source) and the point of noise measurement (receiver). With positive sound refraction (the sound speed increases with altitude, so sound waves are refracted down towards the ground) the ground effect may decrease; with negative refraction (the average sound speed decreases with altitude, typical in the daytime) it may increase; in both cases the change of the sound incidence angle is the most essential factor for the change of the total ground effect. The influence of sound-ray refraction on the ground effect was studied and is shown in Figure 17 for sound-speed gradients in the range 0-0.005 s^-1; for positive gradient values the ground effect decreases with increasing gradient: within the range of temperature-gradient influence (a = 0.00001-0.0005 m^-1) by 1-2 dBA, while in the presence of wind (a = 0.001-0.005 m^-1) blowing in the direction of sound propagation the effect may become negligible.
Figure 17. Influence of the vertical gradient of the sound speed on the ground effect (soft acoustic surface, grass cover) [27].
Figure 18 shows a comparison of the lateral attenuation of ground-to-ground (LA/GTG) sound propagation estimated from short-term measurements at four airports (with the noise source located on the ground and a microphone as receiver) with the ground attenuation calculated by the NOITRA model with (a > 0) and without refraction (a = 0) and by the equation from SAE AIR 5662 [28], for the wind conditions "Upwind", "Calm" and "Downwind", respectively. The measurements were made similarly to the noise trials of Figure 17, with microphones at the standard height of 1.2 m and close to the surface (~0.0 m), the interference effect (ground attenuation) being defined as the difference between the measured values. SAE AIR 5662 [19] does not include the 1.5 dB of engine installation effects for airplanes with wing-mounted jet engines. The measured ground attenuation lies between the "Upwind" and "Downwind" calculation curves from [29], close to the "Calm" curve, which gives smaller values than the current ICAO Doc 9911 [1]. Figure 19 shows a comparison of the air-to-ground (ATG) component of lateral attenuation using data from long-term noise monitoring at Narita airport, from 50 selected stations located beneath and beside the straight flight route of aircraft take-off. The 1751M data shown in Figure 20 (red line) were calculated with the modified reformulation of AIR 1751A [18] in accordance with the equations proposed in [29].
Figure 18. Comparison of the lateral attenuation of ground-to-ground (LA/GTG) sound at 4 airports in Japan [28] (small dashed lines) with the existing formulas from the standards [18,19] and the refraction-influence investigation with NOITRA (dashed lines).
Figure 19. Averaged estimates of EGA/ATG for the three wind conditions (upwind, calm and downwind) [28] compared with the EGA/ATG values from the standards [18,19]: a ground effect decreasing the overall sound level is observed for elevation angles β < 12°; for higher elevation angles (12° < β < 70°) the ground effect may increase the overall sound level at the noise-control point.
Instead of the constant 1.089 in the definition of the distance factor Γ(λ) in [1,13,17], a term G_0(V_w, T_g) was introduced; its first term expresses the contribution of the vector wind V_w using a sigmoid function, and its second term considers the effect of the temperature gradient T_g using the product of two sigmoid functions of, respectively, the temperature gradient and the vector wind [29,30]; Λ(β) is the long-range air-to-ground lateral attenuation. Quite similar changes were proposed in [31] for the long-range air-to-ground lateral attenuation. Both Figures 18 and 19 draw on comprehensive measurements of the ground effect, including meteorological effects on aircraft noise propagation to the side of the flight path, obtained from the analysis of long-term observations of unattended noise monitoring of aircraft flyover noise around airports, mostly at Narita Airport [28]. The results were divided into three classes of wind conditions, which strongly influence sound refraction, although the observed meteorological conditions in fact also include changes of season, temperature, humidity, wind and cloud [28]. The data calculated with the NOITRA model for calm conditions and the averaged spectral class of aircraft noise at arrival, also shown in Figure 19, confirm that the ground effect changes dramatically with elevation angle, from attenuation at quite small angles to an amplifying effect at larger elevation angles, in good correspondence with the measurement results [28-30].
Inhomogeneous Atmosphere Influence on NPD-Data
A vertical temperature gradient is always present; for the standard atmosphere it is negative, −6.5 °C/km. The influence of the atmospheric state on the NPD data was studied for temperature, pressure and humidity values differing from those of the standard atmosphere at sea level and along the altitude. In a homogeneous (non-refracting) atmosphere with changes of temperature and humidity (which have the greatest effect on the value of atmospheric sound absorption) within −30 °C < t < +30 °C and 1% < h < 85%, the sound exposure levels may differ from the ANP data (which were defined for t = 25 °C and h = 70%) by between −10 dBA and +15 dBA (the larger deviations are observed for very dry air) for the different aircraft spectral classes used in the current ANP database, in both the arrival and departure flight modes.
Under the conditions of an inhomogeneous atmosphere, the influence of different altitude profiles of temperature, pressure and humidity on the NPD data was assessed for the different spectral classes of the ANP database, relative to the conditions of the standard atmosphere (GOST 4401-81) with a vertical temperature gradient of −6.5 °C/km. The "noise-power-distance" dependences, or NPD dependences, are decisive in the calculation of sound levels, both for single flight noise events and for flight scenarios at the airport. In the current ANP database, their tabular values are given for the type of engine installed on the aircraft, although for modern aircraft the contribution of the aerodynamic noise produced by the airflow around the wing and its high-lift devices is also significant, especially during the arrival stages of flight, where the engine operates close to idle and the flaps, slats and landing gear are in a deflected position.
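To make the role of the NPD tables concrete, the sketch below interpolates a tabulated NPD grid at an arbitrary power setting and slant distance, linearly in power and linearly in the logarithm of the distance, which is the interpolation rule used by ECAC Doc 29 / ICAO Doc 9911; the table values and the clamping behaviour at the table edges are illustrative assumptions, not ANP data or the standards' extrapolation rules.

```python
import numpy as np

def interp_npd(levels, powers, distances, p, d):
    """Interpolate an NPD table of L_max or L_E values.
    levels    : 2-D array, shape (len(powers), len(distances))
    powers    : tabulated engine power settings, ascending
    distances : tabulated slant distances in metres, ascending
    Linear in power, linear in log10(distance); np.interp clamps at the
    table edges instead of applying the standards' extrapolation rules."""
    log_d = np.log10(np.asarray(distances, dtype=float))
    at_each_power = [np.interp(np.log10(d), log_d, row)
                     for row in np.asarray(levels, dtype=float)]
    return float(np.interp(p, powers, at_each_power))

# Hypothetical two-power, four-distance table (dBA); not taken from the ANP database.
levels = [[94.0, 85.0, 76.0, 67.0],
          [99.0, 90.0, 81.0, 72.0]]
print(interp_npd(levels, powers=[40.0, 60.0],
                 distances=[200, 630, 2000, 6300], p=50.0, d=1000.0))
```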
For aircraft with turbofan engines of high bypass ratio (m_en > 10), the contribution of airframe noise is significant even during the departure flight of the aircraft. Therefore, neglecting the contribution of airframe noise today also significantly affects the accuracy of sound level calculations for both take-off and landing. For A-320 and Boeing 737 aircraft of various modifications, the noise exposure from the engines during aircraft arrival is ~10 dBA lower than the airframe noise exposure, so studies of aircraft noise, especially of the arrival stages, should use "aerodynamic configuration of the aircraft-engine mode-distance", or NAPD, dependences.
According to the improved integrated model for calculating the noise from an aircraft in flight, the sound level generated by a separate segment of the flight path, L_max,seg, without any obstacle along the sound propagation path, is described by Formula (7), and the contribution to the sound exposure level L_AE from each flight-path segment is calculated by Formula (8) (a sketch of both is given below). Here L_max(P,A,d) and L_E∞(P,A,d) are the sound level values for a particular segment of the flight path, determined by interpolation of the NAPD tabular data (instead of the NPD data in Equation (2)) for the actual values of the thrust or engine power P, the aerodynamic configuration A and the distance d. The other corrections in Equations (7) and (8) are the same as in the current model (2) for aircraft noise calculation.
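A sketch of the expected form of Formulas (7) and (8), written by analogy with the baseline segment formulas of ECAC Doc 29 / ICAO Doc 9911 (the notation of the corrections in the source may differ), is:

\[
L_{\max,\mathrm{seg}} \;=\; L_{\max}(P,A,d) \;+\; \Delta_I(\varphi) \;-\; \Lambda(\beta,\ell),
\]
\[
L_{AE,\mathrm{seg}} \;=\; L_{E\infty}(P,A,d) \;+\; \Delta_V \;+\; \Delta_I(\varphi) \;-\; \Lambda(\beta,\ell) \;+\; \Delta_F,
\]

where Δ_V is the duration (speed) correction, Δ_I(φ) the engine-installation correction, Λ(β,ℓ) the ground-effect (lateral attenuation) correction discussed above, and Δ_F the finite-segment correction.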
Discussion
The noise-power-distance relationship is the basic concept used to calculate aircraft noise contours for air traffic at an airport today. The current ICAO [1] and ECAC [13] recommendations, together with the complementary standards [17-19], provide the necessary accuracy for calculating equivalent levels and noise indices for a traffic scenario on this basis. Separate flyover noise events, however, are still assessed with poor accuracy, even though the accuracy requirements for the sound levels SEL and L_Amax in a number of noise-management tasks are very high.
Two main subjects of model improvement are discussed in this article. The first is the difference between the flight parameters in real operation and the values obtained by solving the balanced system of equations of aircraft motion. Statistical processing of the data observed in operation at the airport under consideration provides more accurate values of flight thrust and velocity. Statistical values of the coefficient k in Formula (4) derived from real operation allow the NPD data to be used more accurately and account for differences of up to 5 dBA between the calculated and measured sound levels under the flight paths.
The second important observation concerns how the calculated results are influenced by the principal assumption of the method [1,7-19] of homogeneous atmospheric conditions. At the first stage of every integral model compared [2-6], the aircraft flight parameters are assessed for conditions in which the atmospheric parameters (air pressure, temperature and humidity) change with altitude. At the second stage, the noise levels are assessed for the conditions of a homogeneous atmosphere, which on the one hand looks like a methodological inconsistency and on the other introduces a new inaccuracy into the sound level assessment. Looking at Figure 15, for equal values of the lateral distance in Figure 14a and the vertical distance in Figure 15b the same NPD data should be used, in line with the basic assumption. In a real atmosphere, because the meteorological parameters change with altitude, the NPD data for the cases of Figure 15a,b will be quite different owing to the different conditions for sound refraction, for sound absorption (in the case of Figure 15a it may be treated as constant per unit distance; in the case of Figure 14b the calculated influence is shown in Figure 20) and for the attenuation due to the ground effect, as shown in Figures 17-19. The calculation accuracy for separate flyover noise events may be improved by up to 5 dBA by considering inhomogeneous conditions of sound propagation in the assessment of the NPD data and the ground effect.
The ground effect itself should be considered as existing directly beneath the flight path as well, so the term "lateral attenuation" is methodologically misleading. For future editions of the recommendations and standards [1,13,17-19], the term "ground effect" is sufficient.
For newer aircraft types and their various modifications, especially those with high-bypass engines in their power plants (bypass ratios up to 10-12), the noise exposure from the engines during arrival may be up to 10 dBA lower than the airframe noise exposure. The present study of aircraft flyover noise exposure, especially at the arrival stages, therefore recommends including the aerodynamic configuration in the basic noise-power-distance tables, i.e., NAPD data. This new approach is not considered in detail in this article but will be a principal issue in the near future.
Conclusions
Improving the integral noise model for calculating the sound exposure level L_AE and the maximum sound level L_Amax of a single flight noise event to the required accuracy can be achieved only by improving all the basic elements of the current ICAO Doc 9911 guidance [1]. The main element of the model, the "noise-power-distance" dependence (NPD dependence) recommended by ICAO Doc 9911 and the Standard [17], should be replaced by a "noise-power-airframe-distance" dependence (NPAD dependence) that includes the contribution of airframe noise for the various possible aerodynamic configurations. The deviations of the flight parameters from their balanced values (i.e., the values obtained by solving the system of equations of balanced motion), which are always observed in measurements, and the real parameters of the atmospheric state also contribute to the current inaccuracy of single-flight-event assessment. The error in estimating the sound levels of the individual trajectory segments during departure after take-off and arrival before landing may thereby be reduced by 5-10 dBA.
The model for estimating the ground effect (the interference of the direct sound ray and the ray reflected from the ground surface) should be improved in comparison with the current ICAO Doc 9911 model [1], which overestimates the effect and is inappropriate for the types of aircraft in use today. This may reduce the error in the calculated sound levels L_Amax and L_AE of a single flight noise event by 4-6 dBA, by removing the contribution of the interference effect from the NPD dependences and taking into account a correction for each spectral class of aircraft (turbojet, turbofan, turboprop/propeller) and for two types of reflecting surface, acoustically soft and hard.
The data from different research studies were reviewed in this article, first of all to show that other experts hold a similar view of these sound-propagation effects. Improvement of the method [1,13,17-19] is therefore of the highest importance in the near future, and the ways to achieve it are assessed and recommended. | 11,768.4 | 2021-04-21T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Physics"
] |
Declarative Representation of Uncertainty in Mathematical Models
An important aspect of multi-scale modelling is the ability to represent mathematical models in forms that can be exchanged between modellers and tools. While the development of languages like CellML and SBML have provided standardised declarative exchange formats for mathematical models, independent of the algorithm to be applied to the model, to date these standards have not provided a clear mechanism for describing parameter uncertainty. Parameter uncertainty is an inherent feature of many real systems. This uncertainty can result from a number of situations, such as: when measurements include inherent error; when parameters have unknown values and so are replaced by a probability distribution by the modeller; when a model is of an individual from a population, and parameters have unknown values for the individual, but the distribution for the population is known. We present and demonstrate an approach by which uncertainty can be described declaratively in CellML models, by utilising the extension mechanisms provided in CellML. Parameter uncertainty can be described declaratively in terms of either a univariate continuous probability density function or multiple realisations of one variable or several (typically non-independent) variables. We additionally present an extension to SED-ML (the Simulation Experiment Description Markup Language) to describe sampling sensitivity analysis simulation experiments. We demonstrate the usability of the approach by encoding a sample model in the uncertainty markup language, and by developing a software implementation of the uncertainty specification (including the SED-ML extension for sampling sensitivity analyses) in an existing CellML software library, the CellML API implementation. We used the software implementation to run sampling sensitivity analyses over the model to demonstrate that it is possible to run useful simulations on models with uncertainty encoded in this form.
Introduction
Declarative model representation languages provide a significant opportunity for improving multi-scale modelling workflows, because they cleanly separate the description of the mathematical problem from any algorithmic description, and do so in a way that allows smaller models to be easily composed to build large multiscale models. Declarative model representation languages are best understood through comparison to imperative languages; imperative languages describe a series of steps taken to perform some computation, while models in declarative languages simply make assertions (as is typically done in descriptions of models in academic literature), leaving the numerical application of those assertions up to software packages. This approach has the important benefit that the same model can be used for multiple purposes. For example, a description of some ordinary differential equations and their initial values (an ODE-IV problem) might be used to render equations for a manuscript, solve the ODE-IV problem numerically to understand the time evolution of the system, be used to compute an analytic Jacobian or analytic solution using another solver package, be used in a sensitivity analysis, and be composed into a large multi-scale model, all without reformulating the model.
A number of declarative mathematical model representation languages exist in the literature; many of them have been developed with particular problem domains in mind. For example, Systems Biology Markup Language, or SBML [1] allows mathematical models to be described, with a focus on systems biology. CellML [2,3] is an example of a modelling language which has been designed to be domain neutral. The CellML project hosts a repository of CellML models [4] containing, at the time of writing, 557 workspaces, each of which contains one or more related models (mostly drawn from various fields of biology). CellML is also one of the modelling languages selected for use in the European Framework 7 Virtual Physiological Human project. For these reasons, this paper uses CellML as the starting point for representing uncertainty in mathematical models. However, most of what is presented here could be adapted to other declarative languages.
Uncertainty in model parameters can arise from diverse sources. A parameter may have been measured experimentally, yielding information about the value of the parameter, but not an exact value. Often, there may be a statistical model describing prior distributions and the relationship between samples (and the random variables from which they are sampled) and the particular parameterisation used in an experiment; the posterior distribution of the parameters can then be computed either analytically or using numerical methods (such as BUGS, Bayesian Inference Using Gibbs Sampling [5] and subsequent refinements).
Another common source of uncertainty is where there is no experimental data available for a parameter, but due to physical and other constraints, a modeller has an idea of the range of values in which a parameter lies. Modellers will often be able to suggest a subjective probability distribution for the parameter; for example, a modeller who knows that a parameter value must fall in the interval (a, b) may postulate, a priori, that the true value is uniformly likely to be any value between a and b.
A further common source of uncertainty arises when producing models of individuals from a population. Each individual may have a specific fixed value of a parameter, with variation of the parameter across the population; if a particular parameter has not been measured in a particular individual, the parameter is uncertain in an individual-specific model.
ODE-IV problems with uncertain parameters are distinct from stochastic differential equation initial value (SDE-IV) problems. SDE-IV problems contain references to stochastic functions that vary with time, while the class of problem described here describes parameters with a single but unknown true value that holds for all time values.
For uncertainty information to be useful in a declarative model, some representation for the posterior distribution of uncertain parameters is required. We will briefly summarise the existing literature on declaratively representing distributions of uncertain parameters, and then describe an approach for representing these distributions in CellML models.
The BUGS software package includes a declarative language for expressing statistical models [6]. This language can be used to declare stochastic and deterministic relationships between variables (these relationships are referred to as nodes in the paper). Stochastic nodes are described using distribution names taken from a controlled vocabulary, so that only distributions recognised by the software can be specified in the language. Other more recent Bayesian Inference Gibbs Samplers, such as WinBUGS [7], OpenBUGS and JAGS [8] have continued with the controlled vocabulary approach to describing distributions (and in the case of JAGS, providing facilities for more easily adding new distributions to the language). The output from these software packages describes the posterior distribution as a set of samples.
UncertML is a markup language for describing uncertainty using XML. It allows summary statistics about uncertain values to be provided, as well as descriptions of a finite number of distributions from a controlled vocabulary. It also allows distributions to be described in terms of samples, by providing a set of samples (each individual sample is called a realisation) drawn from the distribution of a parameter.
We are also aware of a proposal under development as part of the SBML distribution and uncertainty project (http://sbml.org/Community/Wiki/SBML_Level_3_Proposals/Distributions_and_Ranges_Hinxton_Proposal). The approach taken by that project so far provides a wrapper around UncertML to describe uncertainty in terms of realisations and distributions from a controlled vocabulary. This approach would not be adequate for representing uncertainty in CellML models for two major reasons: firstly, it would be incompatible with the principle of using Content MathML to represent mathematical relationships in CellML, and secondly, it would provide an inconsistently low level of expressive power. CellML already has facilities for representing mathematical expressions using Content MathML, and many probability density functions can be represented in closed form without the loss of accuracy arising from using realisations or the loss of expressive power arising from using a controlled vocabulary of distributions.
In this paper, we discuss mechanisms for bringing uncertainty into CellML models. The mechanisms for uncertainty representation presented here fit in naturally with the use of Content MathML to describe models; in addition to allowing distributions to be described using realisations as in UncertML and the SBML distributions and uncertainty project, it allows distributions to be specified by giving the probability density function. We also present an experimental extension to SED-ML (the Simulation Experiment Description Markup Language [9]) for describing sensitivity analysis experiments, and a software implementation of the proposals presented in this paper.
Representing the Information in MathML
CellML makes use of Content MathML [10] to describe mathematical relationships in a structured way. Content MathML provides the csymbol operator, which allows external symbols to be referenced and included as operators in a MathML expression.
To support descriptions of uncertain parameters, we introduce three operators to be included in Content MathML expressions. The full definitionURL for these operators is "http://www.cellml.org/uncertainty-1#", followed by the suffix for the respective operator:
- uncertainParameterWithDistribution takes two arguments. The first argument should be either a variable in the model, or a vector of variables in the model, while the second should be a statistical distribution (built with one of the following two operators). This operator forms an assertion that the variable in the first argument is a random variable with the distribution specified as the second argument.
- distributionFromDensity takes a single argument, which should be a function from a real number to a real number, representing the p.d.f. This function would usually be specified using the MathML lambda constructor.
- distributionFromRealisations takes a single argument, which should be a vector. Each element of the vector represents a realisation of the variable (in which case it should be an expression which evaluates to a scalar value) or variables (in which case each vector element should itself be a vector of expressions which evaluate to a scalar value).
Note that these URIs identify a virtual resource and are recognised by software, but do not refer to any particular document.
CellML requires that all variables and constants are annotated with units (with the possibility that the units are 'dimensionless'); this rule continues to apply in expressions for realisations and p.d.f.s, with probabilities being dimensionless. This allows software that checks CellML models for units and dimensional consistency to also check descriptions of probability distributions (although such software will still need to be updated to recognise the constructs presented in this paper).
Adding Uncertainty Support to a DAE-IV Solver
We implemented sampling from a probability density function in the CellML Integration Service through numerical inversion of the cumulative density function [11], as follows. Let X be the distribution represented by the probability density function f, and let x be the desired sample from X. Let z be a sample from Z, the uniform distribution on (0, 1). Note that for a random sampling analysis to be carried out, a source of uniform random or pseudo-random numbers must be available. Most general-purpose computing platforms make such uniform random number generators available; for example, POSIX [12] defines the function random, which is suitable for use as a uniform random number generator on many platforms. On platforms where no suitable system-provided random number generator is available, algorithms such as the Mersenne Twister [13] can be used to generate a series of values starting from a seed value.
Let F be the cumulative density function, F(y) = ∫_{−∞}^{y} f(w) dw. We perform a change of variable on w in the integral to make the limits finite; using numerical integration, F(y) can then be evaluated at any y. To compute x from z, we numerically invert F at z, giving x = F⁻¹(z). The numerical integrator to use is selected by the user in the simulation description. The inversion is performed by finding the smallest x̂ that minimises (z − F(x̂))², using an existing Levenberg-Marquardt implementation [14]. This approach assumes that F is a valid cumulative density function, and so is monotonically increasing; the only local minimum of (z − F(x̂))² is then the global minimum.
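The procedure just described can be sketched in a few lines of Python with SciPy. This is an illustration of the same idea rather than the C++ implementation in the CellML API, and it uses a bracketing search plus Brent's root finder in place of the Levenberg-Marquardt minimisation of (z − F(x̂))²:

```python
import numpy as np
from scipy import integrate, optimize

def make_inverse_cdf_sampler(pdf, rng=None):
    """Sample from an arbitrary continuous p.d.f. by numerically inverting
    its cumulative density function F."""
    rng = rng or np.random.default_rng()

    def cdf(x):
        value, _ = integrate.quad(pdf, -np.inf, x)   # numerical integration of f
        return value

    def sample():
        z = rng.uniform()                            # sample from the uniform (0, 1)
        lo, hi = -1.0, 1.0                           # expand until F(lo) <= z <= F(hi)
        while cdf(lo) > z:
            lo *= 2.0
        while cdf(hi) < z:
            hi *= 2.0
        return optimize.brentq(lambda x: cdf(x) - z, lo, hi)

    return sample

# Example: the Normal(10, 1) p.d.f. used for the initial x velocity of the
# example model described in the next section.
pdf = lambda x: np.exp(-0.5 * (x - 10.0) ** 2) / np.sqrt(2.0 * np.pi)
draw = make_inverse_cdf_sampler(pdf)
print([round(draw(), 2) for _ in range(5)])
```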
This procedure of numerically inverting a function is computationally expensive, but with the models we tested, the cost is still low compared to the numerical integration that follows.
Details of the Simple Example Model Used
To demonstrate the concepts described above, we coded a simple example model in CellML (Model S1). Our model describes the motion of an object in two spatial dimensions (x and y) experiencing a constant acceleration, and with uncertain initial position and velocity. This model was chosen for its conceptual simplicity.
We created two SED-ML simulation experiment descriptions; one describing a single run of the model, and one describing a sampling sensitivity analysis.
For illustration purposes, we chose the initial position x to have a posterior distribution with the components independently normally distributed with a mean of 0 m and a variance of 1 m². Note that the λ notation λx : f(x), where f(x) is some expression, is used to define an anonymous function with bound variable x.
These distributions were described using the probability density function. Likewise, the x component of the initial velocity was described independently as a sample from the normal distribution with mean 10 m s⁻¹ and variance 1 m² s⁻². The y component of the initial velocity was described in a different way, to demonstrate the ability to sample values from a set of realisations. To generate the realisations, we created a statistical model. We assumed that the initial y velocity depended on which of two springs was used to propel the object; the spring is selected so that there is a 50% chance of each spring being selected. We further assumed that each spring produced a normally distributed initial velocity, with the per-spring mean and per-spring variance unknown. We set the prior distribution for each per-spring mean to be a normal distribution with a mean of 9 and a standard deviation of 0.5, and the prior distribution for the per-spring variance to be an exponential distribution with a rate parameter of 20. We additionally provided 40 data values, 20 of which were equal to 6 and 20 of which were equal to 12, with the spring corresponding to each data value unknown. As there is no immediately obvious closed form for the posterior distribution of any additional velocities, this is a good example of where the distributionFromRealisations construct is useful. We used JAGS to produce 1000 samples from the posterior distribution for a velocity (determined independently from the 40 data points), after discarding a burn-in of 1000 samples, and put the retained 1000 samples into the CellML model using distributionFromRealisations. As shown in Figure 1, this produced a bimodal distribution. The remainder of the model describes straightforward kinematic equations.
Results
Information to Represent
The CellML specification is intentionally very broad; the underlying philosophy used is to allow a wide range of models to be represented. Some types of software can process all valid CellML models, but other types of software (such as solver software) only support a subset of all models that can be expressed in CellML. It would therefore be helpful if the mechanism for adding uncertainty to CellML allowed the same level of generality to be preserved. For this reason, the approach presented in this paper allows the posterior distribution of uncertain parameters to be represented by specifying the probability density function (p.d.f.), rather than by selection from a controlled vocabulary, thus enabling modellers to express general models. Applications that will only work with a limited number of distributions will then need to recognise distributions from the mathematical form.
It is not always possible to find a closed form for the p.d.f. of the posterior distribution; in these cases, the distribution might only be known from numerical sampling. To support this use case, a mechanism is additionally provided to describe parameter uncertainty using realisations (samples) from the distribution.
Describing Sensitivity Analysis Simulation Experiments
The scope of modelling languages such as CellML is to represent mathematical models, and so descriptions of how to perform simulation experiments using those models are outside the scope of CellML. SED-ML [9] is an emerging format for describing simulation experiments. The latest publicly available draft of SED-ML only supports one type of simulation, describing a so-called uniform time-course experiment, where a model describing an ODE-IV or differential algebraic equation initial value (DAE-IV) problem is used to find solutions at one or more points between the 'initial' bound variable value (at which initial values are provided) and some upper bound.
Figure 1. The distribution of the initial position in x and y, and the initial x and y velocity components of the object, shown using both a density histogram and a kernel density plot. Generated using Model S1. doi:10.1371/journal.pone.0039721.g001
This type of simulation experiment can be used with models containing uncertain parameters, to find the solution for a single sampled instance of the problem. However, one of the major reasons for describing parameter uncertainty information in the first place is to understand the effects of the parameter uncertainty on the results of the simulation experiment, or in other words, to perform a sensitivity analysis.
There are numerous types of sensitivity analysis possible; one of the simplest and most robust (albeit computationally expensive) is random sampling-based sensitivity analysis [15]. We propose a simple extension to SED-ML to support such simulation experiments, by creating a new type of experiment called a SamplingSensitivityAnalysis. SamplingSensitivityAnalysis extends the existing UniformTimeCourse simulation type, but adds a new attribute, numberOfSamples, to describe the number of random samples to take.
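The kind of study a SamplingSensitivityAnalysis element would describe can be sketched as a plain Monte Carlo loop. The constant-acceleration projectile below mirrors the example model described earlier, but the acceleration value and the stand-in set of y-velocity realisations are illustrative assumptions, not values taken from Model S1.

```python
import numpy as np

rng = np.random.default_rng(0)
number_of_samples = 1000          # the proposed numberOfSamples attribute
t_end = 10.0                      # end of the uniform time course, seconds
accel = np.array([0.0, -9.81])    # assumed constant acceleration, m/s^2

# Stand-in for the realisations of the initial y velocity (bimodal, as in Figure 1);
# the real model uses the 1000 posterior samples produced by JAGS instead.
y_vel_realisations = np.concatenate([rng.normal(6.0, 1.0, 500),
                                     rng.normal(12.0, 1.0, 500)])

def run_once():
    x0 = rng.normal(0.0, 1.0, size=2)                  # uncertain initial position
    v0 = np.array([rng.normal(10.0, 1.0),              # p.d.f.-described x velocity
                   rng.choice(y_vel_realisations)])    # realisation-described y velocity
    return x0 + v0 * t_end + 0.5 * accel * t_end ** 2  # closed-form position at t_end

positions = np.array([run_once() for _ in range(number_of_samples)])
print("mean position at 10 s:", positions.mean(axis=0))
print("std of position at 10 s:", positions.std(axis=0))
```

Each run corresponds to one uniform time-course evaluation with freshly sampled parameters; collecting the outputs over all runs gives the kind of spread shown in Figure 3.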
Implementing Uncertainty in a DAE-IV Solver
As a first step towards validating the proposals presented in this paper, we extended an existing software library for working with CellML models, the CellML API implementation [16], to support simulations of models using these proposals.
We extended the SED-ML Processing Service and SED-ML Running Service within the CellML API to support sampling sensitivity analyses. In addition, we extended the CellML Code Generation Service and CellML Integration Service to allow them to solve DAE-IV problems with uncertainty in the model parameters.
We implemented both univariate and multivariate sampling from a vector of realisations by randomly picking an index from the realisations, so that each index is equally likely to be selected, and assigning the parameter(s) on the left hand side to the value(s) from the selected realisation. Figure 1 shows a density plot of the sampled parameters for four different sampled scalars making up the components of the initial position and the initial velocity. Figure 2 shows the path taken across ten runs of the model, showing that the initial variation has a significant impact on the path taken and the position after a fixed amount of time (all paths are shown between t = 0 s and t = 10 s). Figure 3 shows the output of the sensitivity analysis run, giving the position of the object at 10 s for many different parameters.
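The realisation-based sampling described above amounts to picking one whole row of the realisation matrix uniformly at random, which keeps correlated variables together; a minimal sketch:

```python
import numpy as np

def sample_realisation(realisations, rng=None):
    """realisations: array of shape (n_realisations, n_variables).
    Each row is equally likely to be returned, so any correlation between
    the variables of a multivariate realisation is preserved."""
    rng = rng or np.random.default_rng()
    index = rng.integers(len(realisations))
    return realisations[index]
```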
Results from the Simple Example Model
The model and simulation descriptions are available online in the CellML model repository [4] at https://models.cellml.org/w/miller/uncertain-starting-parabola. The experimental software implementation of the proposal (including support for both the CellML and SED-ML extensions presented here) has been included in the development branch of the CellML API, available from http://cellml-api.sourceforge.net.
Discussion
The proposal presented in this paper allows mathematical models coded in CellML to include descriptions of parameter uncertainty. This proposal has been initially demonstrated on a simple physical model, but can be used with CellML models of arbitrary complexity.
The implementation issues around the proposal have been addressed (as discussed in the Methods section), and a software implementation has been produced and tested, demonstrating that the proposal is feasible to implement.
Future Work
The work presented here is only useful for continuous distributions, where the random variable will almost never be exactly equal to any one particular value, because the p.d.f. is finite everywhere. Such distributions have smooth, differentiable, and monotonically increasing cumulative density functions.
However, mathematical modellers may also need to describe variables sampled from a discrete distribution, and possibly even from distributions which are discrete on some ranges and continuous on others. The mixed case could be handled in MathML using combining constructs such as piecewise to mix discrete and continuous parts, and so the remaining need is to allow random variables with a discrete distribution to be represented. The representation presented here could be extended to allow discrete distributions by adding a new MathML csymbol for describing a probability distribution using a probability mass function (p.m.f.). Such discrete random variables would require a different numerical sampling algorithm. Determining the discrete values at which the p.m.f. is defined purely numerically is a difficult problem, but with a CellML model, it is possible to combine automated symbolic analysis with numerical analysis. This approach would be feasible for the class of discrete problems where a piecewise is used to ensure the probability is zero except for cases which consist of some transformation of finite sets. In such cases, the finite set could be computed by applying the transformation in the piecewise case condition. A numerical algorithm would take a sample from the uniform (0,1) distribution, compute the probability at each member of the finite set, and compute a cumulative sum of the probabilities until the sampled value was exceeded. While this would not cover every possible p.m.f., it would most likely be general enough to support most cases needed in practice.
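The numerical procedure outlined above for a finite discrete support could look like the following sketch; the support values and probabilities are placeholders for whatever the symbolic piecewise analysis would produce.

```python
import numpy as np

def sample_discrete(values, probabilities, rng=None):
    """Sample from a p.m.f. on a finite set: accumulate the probabilities
    until a uniform (0, 1) sample is exceeded and return that value."""
    rng = rng or np.random.default_rng()
    z = rng.uniform()
    cumulative = 0.0
    for value, probability in zip(values, probabilities):
        cumulative += probability
        if z <= cumulative:
            return value
    return values[-1]   # guard against rounding error in the final cumulative sum

print(sample_discrete([6, 12], [0.5, 0.5]))   # e.g. the two-spring selection
```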
The approach taken in this work does not allow multivariate distributions to be described directly using a joint probability density function. This limitation can sometimes be bypassed by describing a joint distribution as a univariate marginal distribution and a series of conditional distributions for the remaining random variables. Current ratified versions of CellML do not provide a mechanism to specify that a variable has a type other than real, and so software that processes CellML does not typically need to support vector mathematics. However, if future versions of CellML did allow variable types such as 'vector of real numbers' to be specified, it would make the inclusion of multivariate probability density functions more natural.
The use of probability distribution functions to describe distributions fits cleanly with the design of CellML, but it represents a significantly different approach to the controlled vocabulary approach taken in UncertML. An important area of future work is to investigate the interconversion between uncertainty specifications in UncertML, and uncertainty specifications using the approach discussed here. Conversion from UncertML to the approach discussed here should be relatively simple for most distributions, because it is simply a matter of substituting the form for the corresponding probability density function. Due to the higher level of generality of the approach presented here, conversion in the reverse direction will not always be possible. However, it would be possible to identify p.d.f.s for particular well-known forms of the distributions supported by UncertML, and convert those forms into the corresponding Content MathML.
The methodology presented in this paper represents probability distribution functions in a declarative form, which admits the possibility of both analytic and numerical analysis, as well as approaches that combine automatic analytic manipulation with numerical solution. The work presented in this paper primarily relies on numerical analysis. In some cases, however, it may be significantly more efficient to perform analytic work on the p.d.f. (and possibly the entire DAE-IV system) prior to any numerical analysis. In the case where the inverse of the c.d.f. has a closed form, automated symbolic manipulation could allow this closed form to be computed analytically.
A great deal of theoretical work has been published on how to efficiently sample from particular probability distributions; for example, the normal distribution [17] and the gamma distribution [18]. The work presented in this paper does not currently use these optimised algorithms because of the focus on generality. However, future work could preserve support for the general case, while detecting p.d.f.s corresponding to a distribution for which a more efficient algorithm is available.
In addition, there are a number of alternative general numerical algorithms for sampling from a continuous probability density function. The rejection method [19] allows for sampling directly from the probability density function, using two uniform random samples. The rejection method requires upper and lower bounds on both the density function and the random variable being sampled, and so automated symbolic analysis to determine these bounds would be required for an efficient rejection based method.
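For comparison with the inversion approach used in the implementation, a sketch of the rejection method mentioned above is given here; it assumes that bounds on the random variable (x_min, x_max) and on the density (f_max) have already been determined, for example by the symbolic analysis referred to in the text.

```python
import numpy as np

def rejection_sample(pdf, x_min, x_max, f_max, rng=None):
    """Draw one sample from pdf using two uniform samples per trial:
    propose x uniformly on [x_min, x_max] and accept it with
    probability pdf(x) / f_max."""
    rng = rng or np.random.default_rng()
    while True:
        x = rng.uniform(x_min, x_max)
        if rng.uniform(0.0, f_max) <= pdf(x):
            return x

# Example: uniform proposals for a Normal(10, 1) density truncated to [5, 15].
pdf = lambda x: np.exp(-0.5 * (x - 10.0) ** 2) / np.sqrt(2.0 * np.pi)
print(rejection_sample(pdf, 5.0, 15.0, f_max=0.4))
```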
As is common with numerical integration problems, the presence of step-wise discontinuities can cause problems for numerical solvers. Consider the case of a uniform distribution; the probability density function is zero outside a certain range of random variable values, and a constant value inside the range. The only way that a numerical integration algorithm can determine that the function is not zero everywhere is if it happens to find a point inside the range. Numerical integration algorithms can only take a finite number of samples, so if the range is very small, it may be missed entirely by the numerical integrator. This problem can already occur in DAE-IV problems (where it is common to want to numerically integrate the DAE-IV system over a variable such as time). Generally, modellers can work around the problem by adjusting the numerical parameters to ensure that the maximum step size used by the numerical algorithm is small enough to ensure that the algorithm will find the step. However, a more general solution could be to analytically detect piecewise expressions, and numerically identify the boundary of a transition between piecewise cases, and ensure that the solver carries out a step in every piecewise case. Note, however, that a similar issue can arise with narrow normal distributions, because outside a certain range, the probability density is so small that it is represented by the floating point number zero. | 5,951.4 | 2012-07-03T00:00:00.000 | [
"Computer Science",
"Engineering",
"Mathematics"
] |
PyLiger: Scalable single-cell multi-omic data integration in Python
Motivation: LIGER is a widely-used R package for single-cell multi-omic data integration. However, many users prefer to analyze their single-cell datasets in Python, which offers an attractive syntax and highly-optimized scientific computing libraries for increased efficiency. Results: We developed PyLiger, a Python package for integrating single-cell multi-omic datasets. PyLiger offers faster performance than the previous R implementation (2-5× speedup), interoperability with the AnnData format, flexible on-disk or in-memory analysis capability, and new functionality for gene ontology enrichment analysis. The on-disk capability enables analysis of arbitrarily large single-cell datasets using fixed memory. Availability: PyLiger is available on GitHub at https://github.com/welch-lab/pyliger and on the Python Package Index. Contact: <EMAIL_ADDRESS>. Supplementary information: Supplementary data are available at Bioinformatics online.
Introduction
High-throughput sequencing technologies now enable the measurement of gene expression, DNA methylation, histone modification, and chromatin accessibility at the single-cell level. Integration of such single-cell multiomic datasets is crucial for identifying cell types and cell states across a range of biological settings. Previously, we developed LIGER (Linked Inference of Genomic Experimental Relationships), an R package that employs integrative non-negative matrix factorization to identify shared and dataset-specific factors of cellular variation (Welch et al., 2019). These factors then provide a principled and quantitative definition of cellular identity and how it varies across biological settings.
Many users prefer to analyze their single-cell datasets in Python, which offers an attractive syntax and highly-optimized scientific computing libraries for increased efficiency. However, there is a lack of single-cell multi-omic integration tools available in Python. The Seurat v3 (Stuart et al., 2019) anchors algorithm is implemented in R, as is Harmony (Korsunsky et al., 2019). Scanpy (Wolf et al., 2018) offers excellent libraries for single-cell RNA-seq analysis, including batch correction with the BBKNN algorithm, but this approach is not designed for multi-omic integration such as combining scRNA and snATAC from different cells. The scvi-tools (Gayoso et al., 2021) library similarly provides options for scRNA integration, but is not designed for integrating different single-cell modalities from different individual cells.
To address these limitations, we developed PyLiger, a Python implementation of LIGER.
Python implementation of LIGER
We translated the complete, established LIGER framework into Python. Key functions include integration of multiple single-cell datasets using integrative non-negative matrix factorization, joint clustering, visualization, and differential expression testing (Fig. 1A). We carefully compared outputs to ensure that function outputs from the R and Python versions are identical to within the limits of numerical precision. The only exceptions are cases where external packages called by PyLiger, such as UMAP and Leiden, produce slightly different results between R and Python.
As an additional feature, we embedded new functionality for gene ontology (GO) enrichment analysis within PyLiger. This makes it much easier to interpret the biological meaning of the inferred factors and clusters within the same workflow (Fig. 1A). Functions are fully user-customizable in colormap, labels, etc.
PyLiger adapts AnnData format to interoperate with existing packages
We designed the structure of the PyLiger class to smoothly interface with the widely used AnnData format. The AnnData package was initially introduced along with Scanpy, offering a convenient way to store data matrices and annotations together. We store the cell factor loading matrices (H_i), shared metagenes (W), and dataset-specific metagenes (V_i) as annotations of the raw matrix (Fig. 1B). The use of the AnnData format also facilitates interoperability with existing single-cell analysis tools such as Scanpy and scVelo (Bergen et al., 2020). We used the naming rules from Scanpy to name our annotations (UMAP coordinates, for instance) so that each individual AnnData object can be plugged into Scanpy easily.
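For reference, the stored matrices are those of the integrative NMF problem that LIGER solves (Welch et al., 2019). In the usual formulation (sketched here with E_i the normalised, scaled cells-by-genes matrix of dataset i, k factors and λ a tuning parameter), the factorization minimises

\[
\min_{H_i \ge 0,\; V_i \ge 0,\; W \ge 0} \;\; \sum_i \big\lVert E_i - H_i\,(W + V_i) \big\rVert_F^2 \;+\; \lambda \sum_i \big\lVert H_i V_i \big\rVert_F^2 ,
\]

so each row of H_i expresses a cell in terms of the shared metagenes W and the dataset-specific metagenes V_i.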
Python implementation reduces runtimes
To demonstrate the performance of the PyLiger package, we tested its functions using a dataset of 6,000 PBMCs (Kang et al., 2018). We confirmed that the results from PyLiger are identical (to within numerical precision) to those from the LIGER R package. (External packages called by PyLiger, such as UMAP and Leiden, produce slightly different results between R and Python in some cases.) Moreover, PyLiger functions run 1.5 to 5 times faster than their R counterparts (Fig. 1C). In particular, the most time-consuming step, matrix factorization, is approximately 3 times faster in Python than in our previous R implementation, which is particularly notable because this step dominates the overall runtime.
PyLiger scales to arbitrarily large single-cell datasets using fixed memory
PyLiger supports the HDF5 file format for on-demand loading of datasets stored on disk. We found that in AnnData objects only the raw matrix allows HDF5-based backing, but not the other processed matrices stored as layers. Therefore, we store the data matrices in a separate HDF5 file, while the matrix annotations are still stored in AnnData objects. We compared the on-disk mode to the in-memory mode using the same dataset of 6,000 PBMCs. By sacrificing a little processing efficiency (within a second on a dataset of 6,000 cells), the on-disk mode functions can process arbitrarily large datasets using fixed memory (Fig. 1C). Note that the functions create_liger and select_genes in the on-disk mode of PyLiger are slightly slower than the on-disk mode of LIGER due to new feature implementation. Moreover, we implemented the online iNMF algorithm (Gao et al., 2021) in combination with the HDF5 file format, providing scalable and efficient data integration as well as significant memory savings. The online iNMF algorithm scales to arbitrarily large numbers of cells while still using fixed memory, and it can incorporate new data without recalculating from scratch. To benchmark the performance, we compared online iNMF between PyLiger and LIGER using five datasets of increasing size (ranging from 10,000 to 255,353 cells in total) sampled from the same adult mouse frontal and posterior cortex data. The PyLiger implementation of online iNMF achieves a 2.3× speedup on average in comparison with its R counterpart (Fig. 1D).
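The fixed-memory behaviour of the on-disk mode comes from processing the HDF5-backed matrix in chunks rather than loading it whole. The generic h5py sketch below illustrates the idea; the file layout, the dataset name "X" and the dense storage are assumptions made for illustration and do not describe PyLiger's actual on-disk format.

```python
import h5py
import numpy as np

def per_gene_sums(h5_path, dataset_name="X", chunk_rows=1024):
    """Accumulate per-gene sums over an on-disk cells x genes matrix,
    reading only `chunk_rows` cells at a time so memory use stays fixed
    regardless of the total number of cells."""
    with h5py.File(h5_path, "r") as f:
        matrix = f[dataset_name]
        n_cells, n_genes = matrix.shape
        totals = np.zeros(n_genes)
        for start in range(0, n_cells, chunk_rows):
            totals += matrix[start:start + chunk_rows].sum(axis=0)
    return totals
```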
Conclusion
PyLiger provides an effective way to integrate large-scale single-cell multi-omic datasets. Its Python implementation enables convenient interoperability with other single-cell analysis tools and with advanced machine learning and deep learning approaches. The embedded GO enrichment analysis and visualization modules provide a convenient interface for downstream analysis. Furthermore, incorporating the online iNMF algorithm and the HDF5 file format extends PyLiger's scalability to arbitrarily large numbers of cells. | 1,388.8 | 2021-12-27T00:00:00.000 | [
"Computer Science"
] |
Occurrence of concurrent infections with multiple serotypes of dengue viruses during 2013–2015 in northern Kerala, India
Background Dengue is a global human public health threat, causing severe morbidity and mortality. The occurrence of sequential infection by more than one serotype of dengue virus (DENV) is a major contributing factor to the induction of Dengue Hemorrhagic Fever (DHF) and Dengue Shock Syndrome (DSS), two major medical conditions caused by DENV infection. However, there is no specific drug or vaccine available against dengue infection. There are reports indicating an increased incidence of concurrent dengue infection in several tropical and subtropical regions. Recently, increasing numbers of DHF and DSS cases were reported in India, indicating a potential increase in concurrent DENV infections. Therefore, the occurrence of DENV serotype co-infections needs to be accurately determined in various DENV-prone parts of India. In this context, the present study was conducted to analyse the magnitude of concurrent infection in northern Kerala, a southwest state of India, during three consecutive years from 2013 to 2015. Methods A total of 120 serum samples were collected from suspected dengue patients. The serum samples were tested for the presence of the dengue NS1 antigen, followed by isolation of the dengue genome from NS1-positive samples. The isolated dengue genome was further subjected to RT-PCR-based molecular serotyping, and a phylogenetic tree was constructed based on the sequences of the PCR-amplified products. Results Out of the total number of samples collected, 100 were positive for the dengue-specific antigen (NS1) and 26 of them contained the dengue genome. RT-PCR-based molecular serotyping of the dengue genome revealed the presence of all four serotypes in different combinations; however, serotypes 1 and 3 were the predominant combination in concurrent infections. Interestingly, two samples from 2013 were concurrently infected with all four serotypes. Discussion All samples containing the dengue genome showed the presence of more than one serotype, indicating 100% concurrent infection, with the combination of serotypes 1 and 3 predominant. To the best of our knowledge, this is the first report of concurrent dengue infection in northern Kerala, India. Phylogenetic analysis of the dengue serotype 1 identified in this study shows a close relationship with the strains isolated in Delhi and South Korea during the 2006 and 2015 epidemics, respectively; similarly, the dengue serotype 3 of northern Kerala is most closely related to a dengue isolate from the state of Rajasthan, India. The geographical and climatic conditions of Kerala favour the breeding of both mosquito vectors of dengue (Aedes albopictus and Aedes aegypti), which may enhance the severity of dengue in the future. Therefore, the study provides an alarming message about the urgent need for an antiviral strategy or other health management systems to curb the spread of dengue infection.
INTRODUCTION
Dengue virus (DENV) belongs to the family Flaviviridae and the genus Flavivirus and poses a global threat resulting in significant morbidity and mortality. The virus is transmitted by the day-biting mosquito Aedes aegypti (Liu-Helmersson et al., 2014). However, there is no vaccine or antiviral drug available that can neutralize all four serotypes of dengue virus.
There are four distinct DENV 1-4 serotypes circulating all over the world and causing DENV infection. The infection causes symptoms ranging from acute febrile illness to severe manifestations, including bleeding and organ failure resulting in DHF or DSS (Gubler, 1998; Moi, Takasaki & Kurane, 2016). Co-infection with circulating DENV 1 and DENV 2 was reported in 1982 in Colombia (Gubler et al., 1985). It has been known that sequential infection with more than one serotype of dengue increases the severity of dengue symptoms (Hammon, 1973). Meanwhile, there are reports indicating concurrent infection of dengue with more than one serotype (Anoop et al., 2010). However, the correlation between concurrent infection of dengue with more than one serotype and the severity of the disease symptoms is not well established. In this context, the current study becomes highly relevant and gives a platform for future investigation to understand the severity of the disease and concurrent infection caused by different dengue serotypes.
In the last 50 years, co-circulation of dengue serotypes has been reported in South Asia, including India. The first virologically confirmed dengue case was reported on the east coast of India, in Calcutta, during 1963-64 (Carey et al., 1966; Sarkar et al., 1964). In addition, a dengue outbreak caused by DENV 4 was documented at Kanpur, India during 1968 (Chaturvedi et al., 1970). The presence of DENV 3 was found in patients as well as in A. aegypti mosquitoes in Vellore, India in 1966, and since then all four types of DENV have co-circulated and been isolated from patients and mosquitoes (Myers & Carey, 1967; Wenming et al., 2005).
In 1996, DENV 2 serotype infections were noticed in India and subsequently spread all over the country (Shah, Deshpande & Tardeja, 2004; Singh et al., 2000). The capital city of India, Delhi, became hyperendemic by hosting all four dengue virus serotypes by 2003 (Dar et al., 2003), with co-infection of DENV 1 and DENV 3 in 2005 (Gupta et al., 2006). The magnitude of concurrent infection (19%) observed during the Delhi outbreak in 2006 is much higher in comparison with Taiwan (9.5%) and Indonesia (11%). Furthermore, replacement of DENV 2 and 3 with DENV 1 as the predominant serotype in Delhi over a period of three years (2007-2009) has been reported.
The occurrence of dengue fever was reported in the Kottayam district of Kerala, a south-western region of India, followed by an outbreak in 2003. Concurrent infection with all three serotypes DENV 1-3 was reported in a large number of patients in 2008 (Anoop et al., 2010) in Ernakulam district, a central region of Kerala. In 2013, in various parts of India, including Davangere and central Karnataka, out of 123 NS1 antigen positive samples, 56 (45.5%) patients presented with dengue fever, 37 (30.1%) with DHF, and 30 (24.4%) with DSS (Kalappanvar et al., 2013).
Since dengue cases are increasing at an alarming rate and causing a major health threat in tropical countries, it is necessary to identify and confirm the viral serotypes through epidemiological surveillance studies. The present study was conducted in order to specifically identify dengue virus and to assess the concurrent infection in northern Kerala (Malabar region), India during three consecutive years (2013-2015). Some of these areas are ideal ecosystems for the proliferation of Aedes albopictus mosquitoes. In this study, NS1 positive samples were further tested by amplification of the junction region of the capsid and pre-membrane gene (CprM) of the dengue viral genome (Lanciotti et al., 1992) by single-step reverse transcriptase polymerase chain reaction (RT-PCR) followed by specific serotype identification.
To the best of our knowledge, we are reporting for the first time the 100% concurrent infection by different dengue serotypes among the samples analysed, including two samples which were positive for all four dengue serotypes.
Sample collection
Viremic blood samples from patients with suspected dengue symptoms (as per WHO guidelines) were collected from the Government hospital and local diagnostic centres located in northern Kerala, India. Institutional Ethics Committee (IEC) clearance (O.R.No:IAD/IEC/13/14) was obtained from the Institute of Applied Dermatology (IAD), Kasaragod district, Kerala for conducting this study. Prior informed consent was obtained from all participating human subjects. The blood samples used in this study were collected between June and August of 2013, 2014, and 2015. The samples were collected from patients who came to the diagnostic centres with a doctor's prescription or with suspected dengue (1-5 days). Donors were aged 5-60 years and of either sex, including paediatric cases (0-5 years old). Two millilitres of blood was collected in a vacutainer tube from each individual, and serum was separated by centrifugation as per the standard procedure.
Detection of NS1 antigen by capture ELISA
The presence of NS1 antigen in the patient serum samples was screened using a DENV-specific NS1 detection Enzyme Linked Immunosorbent Assay (ELISA) kit (Jmitra and Co., New Delhi, India). The kit contains monoclonal antibodies against NS1 coated on microwells, which detect NS1 antigen secreted by DENV in the infected patient. Fifty microliters of serum sample per microwell was used in the 96-well plate assay. Normal serum specimens obtained from healthy humans of the same sex and age groups were included as controls.
Isolation of dengue viral RNA
Dengue viral RNA was extracted from NS1 positive serum samples using the pure link RNA mini kit (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's instructions. Clarified human serum sample of 150 microliters volume was used for RNA isolation. All RNA samples were examined for their purity and concentration using Nano photometer (2000C; Thermo Scientific, Waltham, MA, USA).
Amplification of dengue viral CprM region by RT-PCR
Isolated Dengue viral RNA from serum samples was subjected to single step reverse transcription polymerase chain reaction (RT-PCR) (Lanciotti et al., 1992). Ten microliters of PCR amplified product was analysed on a 1% agarose gel and the size compared with a 1 Kb plus DNA ladder.
Nested PCR
A second round of amplification was initiated using one microliter of the above PCR product (diluted 1:100 in sterile distilled water) as a template in the subsequent nested PCR reaction. The reaction mixture contained all the components necessary for PCR amplification, including D1 as a forward primer and the dengue virus type-specific reverse primers TS1, TS2, TS3, and TS4 in separate individual tubes. The CprM consensus region and serotype-specific primers were designed as described by Lanciotti et al. (1992), with a slight modification of the TS4 primer (5'-CTCTGTTGTCTTAAACAAGAGA-3'). The PCR amplified products were analysed by electrophoresis on a 1% agarose gel.
Nucleotide sequencing
All necessary Good Laboratory Practices (GLP) were employed to avoid artifacts. Precautions were taken to avoid contamination of serum samples by bar-coding the samples. Cross-contamination was prevented by setting up each PCR independently in separate tubes with its pair of specific primers. A tube containing no template was also set up as a negative control. The RT-PCR product (511 bp) obtained using the D1 and D2 primers was separated on a 1% agarose gel. The DNA band was eluted from the gel using the Wizard gel and PCR clean-up system (Promega, Madison, WI, USA) as per the manufacturer's instructions and was sequenced directly using the BigDye Terminator v3.1 ready reaction sequencing mixture in an automated AB3500 Genetic Analyzer (Applied Biosystems, Foster City, CA, USA). Additionally, nested PCR products obtained using the D1 and TS1/TS3 dengue virus serotype-specific primers were purified and sequenced as detailed above.
Amino acid sequence similarity and diversity
The nucleotide sequences obtained from the above method were submitted to the GenBank database (NCBI). These sequences were translated using the Expasy translation (EMBL) tool. The homologous amino acid regions were aligned with the partial or full-length amino acid sequences of dengue isolates from diverse geographical locations (KP406801, EF127001, DQ285562, JQ922545, JN903581, KM403635, KR024707, KT187563, KP723473, JQ917404, JN713897) retrieved from GenBank, using the BioEdit sequence alignment editor.
Molecular phylogenetic analysis of DENV 1 and DENV 3 by Maximum Likelihood method
The phylogenetic analysis was conducted independently for DENV 1 (GenBank accession no. KJ954284) and DENV 3 (GenBank accession no. KM042094) of northern Kerala with the gene sequences of dengue virus isolates of different locations available in NCBI GenBank. The percentages of bootstrap support values are shown at the major nodes of the tree. The tree is drawn to scale, with branch lengths measured in the number of substitutions per site. The analysis was performed in MEGA using the Tamura-Nei model, and its reliability was evaluated by a bootstrap test (Tamura et al., 2012). All positions containing gaps and missing data were eliminated.
RESULTS
In total, 120 serum samples from dengue-infected individuals were analysed between 2013 and 2015. Among these 120 samples, 62 (51%) were male, 35 (29%) were female, and 23 (20%) were paediatric (0-5 years old) dengue cases (Fig. 1). Both males and females mostly belonged to the 30-40 years age group, with an approximate male-to-female ratio of 2:1. This ratio is similar to the previously reported value (Mishra et al., 2015). The 23 (20%) paediatric (0-5 years old) cases consisted of 14 males and nine females. Out of the 120 serum samples, 100 (83%) were found to be NS1 positive, indicating the possibility of finding the dengue viral RNA genome.
Dengue viral genome isolation and serotyping
Dengue viral RNA was isolated from NS1 positive serum samples, as detailed in the methods section. The isolated RNA was subjected to RT-PCR based amplification (Fig. 2) using the D1 and D2 primers. A dengue-specific amplified product of size 511 bp was present in only 26 samples. This observation confirms the presence of the dengue genome in 26 of the 100 NS1 positive samples. The initial detection of dengue NS1 antigen together with the identification of the dengue genome in the serum samples indicates dengue infection. A representative gel image showing the presence of the 511 bp amplified product is given in Fig. 3A. Out of fifty-one NS1 positive samples in 2013, thirteen samples were found to be positive for the dengue genome. Similarly, three out of seventeen and ten out of thirty-two samples analysed during 2014 and 2015, respectively, were positive for the dengue genome (Table 1). All samples were co-infected with more than one serotype, with various combinations of serotypes as given in Table 1 and Figs. 3B-3D.
Concurrent infection
The above amplified RT-PCR product (511 bp) was used as a template for nested PCR, using serotype-specific primers as detailed in the methods section. The nested PCR based serotyping showed that single individuals were hosting more than one dengue virus serotype circulating in the blood. Among all 26 samples with multiple infection, three different serotypes co-existed in eight samples, two different serotypes co-existed in sixteen samples, and four different serotypes co-existed in two samples. There were nine cases of concurrent infection with DENV 1 and DENV 3, seven cases with DENV 2 and DENV 3, eight cases with DENV 1, 2 and 3, and two cases with DENV 1, 2, 3 and 4, as shown in Table 1. The largest number of concurrent infections involved DENV 1 and DENV 3. The concurrent infection with all four serotypes is an alarming indication and needs to be investigated in detail. Sequential infection with more than one serotype in a particular region is a major cause of eliciting an antibody-dependent immune response.
Nucleotide sequencing and dengue virus serotyping
The D1 and D2 primer based amplified PCR product corresponding to the CprM region was sequenced (GenBank accession no. KX031992), and the data confirmed the presence of the dengue viral genome sequence in the samples. The nucleotide sequence was found to be 99% similar to the existing CprM region of DENV 3 viral strains in the GenBank database. BLAST analysis using the CprM junction nucleotide sequence showed similarity with DENV 3 isolates from Pakistan with 99% query coverage (GenBank accession no. KF041254). Nucleotide sequencing of the DENV 1 specific nested PCR product, obtained using the D1 and TS1 primer combination, gave a size of 423 bp (GenBank accession no. KJ954284). The corresponding amino acid sequence (1-139 aa) derived from the DENV 1 specific amplified region represents a partial, non-functional polyprotein sequence. The DENV 3 specific nested PCR product obtained using the D1 and TS3 primer combination gave a size of 235 bp (GenBank accession no. KM042094) on a 1% agarose gel. However, the PCR amplified products of DENV 2 and DENV 4 were lower in quantity, and their nucleotide sequences could not be obtained with clarity.
Amino acid sequence diversity
To observe any mutations, the amino acid sequence of DENV1 was compared with the other closely related DENV 1 strains. The comparison of the amino acid sequence (1-139 aa) of DENV 1 (CUKKEL201308001; GenBank accession no. KJ954284) with various DENV 1 isolates existing in the data base (GenBank accession no KP406801, EF127001, DQ285562, JQ922545, JN903581, KM403635, KR024707, KT187563, KP723473, JQ917404, JN713897) revealed an identity of 98%-99%, except for Valine (V) which is replaced by Isoleucine (I) at 131st position as shown in Fig. 4. However, replacement of Isoleucine with Valine was reported at residue 106 of the capsid protein in the isolates obtained from the 1997, 1998 and 2001 outbreaks in the Caribbean islands pertaining to the Asian/American genotype (Gardella-Garcia et al., 2008).
Phylogenetic analysis of DENV 1 and DENV 3
In the molecular phylogenetic analysis (Fig. 5) by the Maximum Likelihood method, the bootstrap support values indicate that the sequence of the DENV 1 isolate (CUKKEL201308001; GenBank accession no. KJ954284) shares a common ancestor with the DENV 1 isolate (GenBank accession no. KP406801) and the DENV 1 isolate 06/1/del2006 (GenBank accession no. EF127001) among the 28 nucleotide sequences used for the analysis (Fig. 5). Similarly, the phylogenetic analysis (Fig. 6) indicates that the DENV 3 isolate of northern Kerala (GenBank accession no. KM042094) is closely related to the dengue virus isolate of Rajasthan, India.
DISCUSSION
The surveillance of concurrent infection with multiple dengue serotypes, as well as their co-circulation, is extremely important for understanding viral pathogenesis and also for developing a tetravalent dengue vaccine able to neutralize all four serotypes. There are only a few independent studies conducted on co-circulation and concurrent infection of dengue in India. These include the report on the concurrent infection of dengue observed in Delhi in 2006 (Bharaj et al., 2008). In this outbreak, nine out of 48 samples (19%) were identified as positive for dengue virus as a concurrent infection with more than one dengue virus serotype. Concurrent infection involving three dengue viral serotypes (DENV 1, 2, and 3) at an approximate rate of 56.8% was observed in Ernakulam, Kerala state in 2008 (Anoop et al., 2010). A large outbreak of dengue fever (DF) was reported in 2007 in the locality of the Indo-Myanmar border, with the co-circulation and concurrent infection of DENV 2 & 3, 1 & 3, and 1 & 4 serotypes (Khan et al., 2013).
In 1980, DENV 1 and 2 were first observed in Colombia. By 1994, many parts of the world, including Bolivia, Brazil, Costa Rica, El Salvador, French Guiana, Guatemala, Honduras, Mexico, Nicaragua, Panama, Peru, Puerto Rico, Trinidad and Tobago, and Venezuela, were hosting all four dengue serotypes (Lorono-Pino et al., 1999). The first case of a dual infection with two dengue virus serotypes (DENV 1 and DENV 4) was reported in the serum of a 16-year-old male during the 1982 outbreak in Puerto Rico (Gubler et al., 1985). In New Caledonia, DENV 1 and DENV 3 viruses were isolated from six patients with dengue fever in 1989 (Laille & Deubel, 1991). The first report of dual concurrent infection with DENV 2 and DENV 3 was in Chinese patients who had returned from Sri Lanka (Wenming et al., 2005). Viremic serum samples (292 in total) collected during epidemics in Indonesia, Mexico, and Puerto Rico were tested, and 16 (5.5%) cases were found to contain two or more dengue viruses by reverse transcriptase-polymerase chain reaction (Lorono-Pino et al., 1999).
Out of 100 NS1 positive samples, 26 (26%) were found to contain dengue viral RNA and showed a target-specific band (511 bp) on an agarose gel. The low proportion of dengue viral genome in NS1 positive serum samples may be due to loss of the intact viral genome or degradation of the positive-sense dengue RNA strand while handling the samples. The serotyping of the dengue viral RNA samples using nested PCR with serotype-specific primers revealed that all samples were concurrently infected with multiple serotypes. Different combinations of dengue virus (DENV) concurrent infections were observed, including DENV 1 and DENV 3 in 34% (nine out of 26 cases), DENV 2 and DENV 3 in 27% (seven out of 26 cases), and DENV 1, 2 and 3 in 31% (eight out of 26 cases). Further, it was also observed that two of the samples were concurrently infected with all four serotypes, as shown in Table 1. However, the coexistence of all four serotypes in one host needs to be authenticated by characterizing the viral genome after isolation and propagation of the viruses from those serum samples. The observations indicate the possibility of enhanced concurrent infections in the future, as the percentage of single-serotype infection decreased and concurrent infection with multiple serotypes increased in the current investigation. As supporting evidence, there were only 19% concurrent infections during 2006 (Delhi, India). However, there was a significant increase in the percentage of concurrent infections in 2009 (56.8%) in Ernakulam, India. The percentage of concurrent infection reached 100% in the current investigation. It is also important to note that Kerala, a southwest state of India, provides an ideal ecosystem for the propagation of both mosquito vectors (Aedes albopictus and Aedes aegypti) of dengue transmission.
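The combination percentages quoted above follow directly from the counts in Table 1; a quick arithmetic check (Python; counts as reported, small differences from the quoted 34%/27%/31% reflect rounding) is sketched below.

```python
# Quick check of the reported serotype-combination percentages (counts as given in Table 1).
counts = {
    "DENV 1 + 3": 9,
    "DENV 2 + 3": 7,
    "DENV 1 + 2 + 3": 8,
    "DENV 1 + 2 + 3 + 4": 2,
}
total = sum(counts.values())  # 26 genome-positive samples
for combo, n in counts.items():
    print(f"{combo}: {n}/{total} = {100 * n / total:.1f}%")
# Output: 34.6%, 26.9%, 30.8%, 7.7%
```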
Since the longest nucleotide sequence was obtained in the case of DENV 1, the amino acid alignment (1-139 aa) was made with the various DENV 1 isolates existing in the database, which revealed an identity of 98%-99%, except for Isoleucine instead of Valine at position 131, as shown in Fig. 4. Since both amino acids belong to the non-polar group of the standard genetic code, the amino acid change may not have any significance. Phylogenetic analysis of the DENV 1 nucleotide sequence (GenBank accession no. KJ954284) obtained from the 2013 epidemic samples shows a common ancestor relationship with clinical isolates of DENV 1 from South Korean travellers (GenBank accession no. KP406801) as well as the DENV 1 isolate from Delhi, India (GenBank accession no. EF127001) among the 28 nucleotide sequences used for the analysis (Fig. 5). Similarly, the phylogenetic analysis (Fig. 6) of the dengue 3 virus isolate of northern Kerala indicates a close relationship to the dengue virus isolate (DENV-3/DMRC/Bal87/2013) polyprotein gene, partial cds (KT239735), of Rajasthan, India. Concurrent infection with multiple serotypes of dengue poses many questions regarding viral replication. The possibility of mutual interference among different dengue serotypes during replication within the same host needs to be investigated in detail. If so, would there be any replicative advantage to one of the serotypes over the other three? Viral interference leading to a replicative advantage of one particular serotype is a major concern in the development of a tetravalent dengue vaccine (Anderson et al., 2011).
CONCLUSIONS
The present study shows that all samples from northern Kerala, India, analysed between 2013 and 2015 that yielded a PCR amplified product harbour more than one serotype of dengue, indicating 100% concurrent infection. The occurrence of concurrent infection with DENV 1 and DENV 3 was higher compared with other combinations. Based on phylogenetic analysis, the DENV 1 isolate of northern Kerala was more closely related to the South Korean and Delhi strains. Similarly, DENV 3 was more closely related to the dengue 3 isolate of Rajasthan, India (KT239735). | 5,334 | 2017-03-14T00:00:00.000 | [
"Medicine",
"Biology"
] |
Supernova calibration by gravitational waves
Hubble tension is one of the most important problems in cosmology. Although the local measurements on the Hubble constant with Type Ia supernovae (SNe Ia) are independent of cosmological models, they suffer the problem of zero-point calibration of the luminosity distance. The observations of gravitational waves (GWs) with space-based GW detectors can measure the luminosity distance of the GW source with high precision. By assuming that massive binary black hole mergers and SNe Ia occur in the same host galaxy, we study the possibility of re-calibrating the luminosity distances of SNe Ia by GWs. Then we use low-redshift re-calibrated SNe Ia to determine the local Hubble constant. We find that we need at least 7 SNe Ia with their luminosity distances re-calibrated by GWs to reach a 2% precision of the local Hubble constant. The value of the local Hubble constant is free from the problems of zero-point calibration and model dependence, so the result can shed light on the Hubble tension.
I. INTRODUCTION
The value of the Hubble constant is crucial for understanding the evolution of the Universe because it characterizes the current expansion rate of the Universe. Over the years, the measurement precision of the Hubble constant has been drastically improved. By recalibrating the extragalactic distance ladder using a sample of Milky Way Cepheids with Hubble Space Telescope photometry and Gaia EDR3 parallaxes, the SH0ES (Supernovae and H_0 for the Equation of State) team determined the local Hubble constant from Type Ia supernovae (SNe Ia) data as H_0 = 73.15 ± 0.97 km/s/Mpc [7]. Applying the tip of the red giant branch method to SNe Ia data from the Carnegie Supernova Project yields the Hubble constant H_0 = 69.8 ± 0.6 (stat) ± 1.6 (sys) km/s/Mpc [8]. Combining strong lensing time delay data and Type Ia supernova (SN Ia) luminosity distances, it was found that H_0 = 74.2 (+3.0/−2.9) km/s/Mpc [22]. However, the measurements of the anisotropies in the cosmic microwave background (CMB) by Planck 2018, based on the ΛCDM model, gave H_0 = 67.4 ± 0.5 km/s/Mpc [19]. These results show that the values of the Hubble constant determined from different observations are in discrepancy and suggest that the local measurements and the values inferred from the CMB are in significant tension [23]. As the measurement precision improves, the tension becomes more significant; we are at a crossroads [24]. As discussed above, the results from the early-Universe probe of the CMB depend on the ΛCDM model. The local measurements from SN Ia standard candles are independent of cosmological models, but they suffer from the zero-point calibration problem due to the uncertainties in the absolute calibration of the peak luminosity of SNe Ia and in the determination of the absolute distance scale for the luminosity distances. Furthermore, if we consider the dependence of intrinsic luminosity on color and redshift, the measured value of the Hubble constant changes [25,26].
The observations of gravitational waves (GWs) can measure the luminosity distance of the GW source with high precision, providing an independent method of measuring cosmological distances. In 1986, Schutz proposed to determine the Hubble constant with GWs from binary neutron stars (BNS) [27]. If electromagnetic counterparts of the coalescence of a massive binary black hole (MBBH) or BNS can be identified, then the redshift of the GW source is determined, and the luminosity-redshift relation provided by GWs as standard sirens [28] can be used to study the evolution of the Universe [29][30][31][32][33][34][35][36][37]. In addition to being standard sirens, the propagation of GWs can also probe the evolution of the Universe [38,39]. Since the first direct observation of GWs by the Laser Interferometer Gravitational-Wave Observatory (LIGO) Scientific Collaboration and the Virgo Collaboration in 2015, tens of GW detections have been reported [40][41][42][43][44]. The first observed BNS merger GW170817 and its counterpart GRB 170817A give H_0 = 70.0 (+12.0/−8.0) km/s/Mpc [45]. In the absence of a counterpart, one can employ statistical methods, by establishing a correlation between the GW source and a galaxy catalog, to infer the redshift of the GW source. Applying this method to 47 GWs from the Third LIGO-Virgo-KAGRA Gravitational-Wave Transient Catalog (GWTC-3), the LIGO-Virgo-KAGRA collaborations obtained H_0 = 68 (+8/−6) km/s/Mpc based on the ΛCDM model [46]. The independent determination of the Hubble constant with GW standard sirens has the potential not only to shed light on the Hubble tension but also to constrain other cosmological parameters [47]. There are many studies on the precise determination of the Hubble constant with GW standard sirens in the literature [47][48][49][50][51][52][53][54][55][56][57][58][59][60][61]. There are also discussions on the uncertainties of GW standard sirens [61,62].
Due to the short arm length and various ground noises, ground-based detectors are not sensitive to GWs below 1 Hz, and a single detector cannot locate the source. Space-based detectors such as the Laser Interferometer Space Antenna (LISA) [63,64], Taiji [65] and TianQin [66] are sensitive to GWs in the frequency range 10^−4 − 10^−1 Hz and can detect and locate mergers of distant MBBHs. Furthermore, the network of LISA, TianQin and Taiji can significantly improve the accuracy of parameter estimation [67][68][69][70][71][72]. Since the local measurement of the Hubble constant from SNe Ia data is independent of cosmological models, if we can use the accurate distance measurement from GWs to calibrate the luminosity distances of SNe Ia data, then we can use SNe Ia to determine the local Hubble constant without the problem of zero-point calibration. The idea of using GWs as a new cosmic distance ladder for an independent calibration of distances to SNe Ia was discussed for mergers of BNS in [73,74]. Zhao and Santos used the event GW170817 to measure the absolute magnitude of SNe Ia [73]. In Ref. [74], the authors found that a third-generation ground-based GW detector network will measure distances with an accuracy of ∼0.1%−3% for BNS within ≤ 300 Mpc. However, the calibration method with BNS as standard sirens applies to low-redshift SNe Ia only, and it may miss the possible variation of the absolute magnitude with redshift. The calibration of distances to SNe Ia with MBBH mergers is more interesting and beneficial. Exploring the calibration over a substantial redshift range might allow for a study of potential variation of the absolute magnitude with redshift. Moreover, the mergers of MBBHs could also be used to calibrate Gamma-Ray Bursts at high redshifts [75,76]. LISA will detect MBBH mergers up to redshift ∼15−20 [77]. As much more SNe Ia data and GW detections with space-based GW detectors become available in the future, it is highly possible that MBBH mergers and SNe Ia occur in the same host galaxy.
In this paper, we consider the possibility of re-calibrating the luminosity distances of SNe Ia by GWs from MBBH mergers and the precision of the Hubble constant determined with the re-calibrated SNe Ia data. Even though we only use low-redshift SNe Ia data to determine the local Hubble constant so that the result is independent of cosmological models, the calibration of the absolute distance scale for the luminosity distances is not limited to low-redshift SNe Ia data. We consider all possible coincidences of MBBH mergers and SNe Ia to re-calibrate the luminosity distances of SNe Ia with GWs; these re-calibrated SNe Ia cover all possible redshift ranges. Once we solve the problem of zero-point calibration for the SNe Ia data, we use low-redshift SNe Ia data to determine the local Hubble constant.
The paper is organized as follows. In Sec. II, we use the Fisher information matrix (FIM) method to estimate the accuracy of the luminosity distance from GW observations. In Sec. III, we discuss the accuracy of the absolute magnitude of SNe Ia calibrated by GWs. Then we determine the local Hubble constant from the SNe Ia data in Sec. IV. The conclusion is drawn in Sec. V.
II. THE MEASUREMENT OF LUMINOSITY DISTANCE WITH SPACE-BASED GW DETECTORS
In terms of the polarization tensor e A ij with A = +, × representing the plus and cross polarizations, the time-domain GW signal is expressed as where i, j = 1, 2, 3 denote the spatial components and t is the coordinate time.The output of the GW signal in the detector α is where F A α is the response function, nα (t) is the detector noise and ϕ D (t) is the Doppler phase.The Doppler phase ϕ D is where the distance R between the earth and the sun is 1 AU, θ and ϕ are the angular coordinates of the GW source, c is the speed of light, ϕ α is the detector's ecliptic longitude at t = 0 and P = 1 year is the rotational period.For GWs propagating in the direction ω, the response function , where the detector tensor D ij α is û and v are the unit vectors for the two arms of the interferometer, the transfer function T (f, û • ω) for the detector is [84,85], sinc(x) = sin(x)/x, and f * = c/(2πL) is the transfer frequency of the detector with the arm length L.
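The displayed equations in this paragraph were lost in extraction; the commonly used forms they describe (written here under assumed standard conventions, which may differ in detail from the authors' exact expressions) are:

```latex
% Assumed standard forms; conventions may differ from the authors' exact equations.
h_{ij}(t) = \sum_{A=+,\times} e^{A}_{ij}\, h_A(t), \qquad
s_\alpha(t) = \sum_{A} F^{A}_{\alpha}\, h_A(t) + n_\alpha(t),\\[4pt]
\phi_D(t) = \frac{2\pi f(t)\,R}{c}\,\sin\theta\,
            \cos\!\left(\frac{2\pi t}{P} - \phi + \phi_\alpha\right),\\[4pt]
F^{A}_{\alpha} = D^{ij}_{\alpha}\, e^{A}_{ij}, \qquad
D^{ij}_{\alpha} = \tfrac{1}{2}\!\left[\hat u^i \hat u^j\, T(f,\hat u\!\cdot\!\hat\omega)
                 - \hat v^i \hat v^j\, T(f,\hat v\!\cdot\!\hat\omega)\right].
```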
We usually work in the frequency domain, so we Fourier transform h A (t) and n(t) to h A (f ) and n(f ).By assuming that the noises of the detector are stationary and Gaussian, we describe the noise with the spectral density P n (f ), where ⟨...⟩ denotes the "expectation value" over many noise realizations and n * (f ) is the complex conjugate of n(f ).For space-based GW detectors, the noise curve is [86] P where S x is the position noise and S a is the acceleration noise.For LISA [64], For TianQin [66], S x = (10 f * = 0.2755 Hz.For Taiji [67], For LISA and Taiji, we also consider the confusion noise [86] S In the frequency domain, the GW waveform h A (f ) for the dominant harmonic is where ι is the inclination angle of the orbit relative to the line of sight.For simplicity, we consider the PhenomA waveform for a coalescing binary.In the inspiral stage, the amplitude A and the phase up to the second order post-Newtonian approximation for the PhenomA waveform are [87,88] A(f ) = 5 24 where is the chirp mass, η = m 1 m 2 /M 2 is the symmetric mass ratio, the luminosity distance for a flat Universe, z is the redshift, the Hubble parameter for the flat ΛCDM model, the energy density for the cosmological constant Ω Λ = 1 − Ω m0 , Ω m0 is the fractional matter energy density at present and H 0 is the Hubble constant.
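The quantities named at the end of this paragraph are defined in the text, but the corresponding displayed equations did not survive extraction; a sketch consistent with those definitions is:

```latex
% Standard definitions consistent with the text (assumed forms).
\mathcal{M}_c = \frac{(m_1 m_2)^{3/5}}{(m_1+m_2)^{1/5}}, \qquad
\eta = \frac{m_1 m_2}{M^2}, \qquad
d_L(z) = (1+z)\int_0^z \frac{c\,\mathrm{d}z'}{H(z')}, \qquad
H(z) = H_0\sqrt{\Omega_{m0}(1+z)^3 + \Omega_\Lambda}.
```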
A. The FIM method
To use the method of match filtering to analyze signals, we define the noise-weighted inner product for two signals s 1 (f ) and s 2 (f ) as where the upper cutoff frequency f up is chosen as the frequency f ISCO at the innermost stable orbit (ISCO), Since space-based GW detectors are insensitive to GWs with frequencies below around 2 × 10 −5 Hz [89], so we take 2 × 10 −5 Hz as the lower cutoff frequency.For the observation of one year, we calculate the frequency f 0 one year before the ISCO, then we set The SNR ρ for a signal s(f ) is The threshold of detecting a signal is set as ρ ≥ 8.For parameter estimation, we define the FIM in the frequency domain as where λ i is the parameter of the GW source.The covariance matrix σ ij between the parameter errors ∆λ i = λ i − ⟨λ i ⟩ and ∆λ j in the large SNR limit is The root mean square error of the parameter λ i is In this way, the error of the luminosity distance can be estimated from the FIM Γ ij .
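A sketch of the standard matched-filtering quantities referred to in this paragraph (assumed forms; the paper's displayed equations were lost in extraction):

```latex
% Noise-weighted inner product, SNR, ISCO frequency, and FIM (assumed standard forms).
(s_1|s_2) = 4\,\mathrm{Re}\!\int_{f_{\rm low}}^{f_{\rm up}}
            \frac{\tilde s_1^*(f)\,\tilde s_2(f)}{P_n(f)}\,\mathrm{d}f, \qquad
\rho = \sqrt{(s|s)}, \qquad
f_{\rm ISCO} = \frac{c^3}{6^{3/2}\,\pi\, G\, M\,(1+z)},\\[4pt]
\Gamma_{ij} = \left(\frac{\partial h}{\partial\lambda_i}\,\Bigg|\,
              \frac{\partial h}{\partial\lambda_j}\right), \qquad
\sigma_{ij} = \left(\Gamma^{-1}\right)_{ij}, \qquad
\Delta\lambda_i = \sqrt{\sigma_{ii}}.
```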
For a network with n detectors, the SNR and FIM are
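For a detector network, the combination rule referenced here is the usual one (assumed form):

```latex
\rho_{\rm net} = \left(\sum_{\alpha=1}^{n}\rho_\alpha^{2}\right)^{1/2}, \qquad
\Gamma^{\rm net}_{ij} = \sum_{\alpha=1}^{n}\Gamma^{(\alpha)}_{ij}.
```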
B. The accuracy of the luminosity distance
We consider a nonspinning MBBH with 9 parameters: the chirp mass M c , the symmetric mass ratio η, the luminosity distance d L , the sky location (θ, ϕ), the inclination angle ι, the polarization angle ψ and the coalescence phase ϕ c at the coalescence time t c , i.e., λ = (M c , η, d L , θ, ϕ, ι, ψ, t c , ϕ c ).For equal-mass MBBHs we considered, η = 0.25.The parameters ι, ψ, ϕ c , t c are chosen randomly in the following range: ϕ c ∈ [0, 2π], and t c ∈ [0, 1] in the unit of year.The angular uncertainty of the sky localization is evaluated as For each GW source, we assume that we can find an SN Ia which is in the same host galaxy as the GW source, so we use the same parameters (θ, ϕ, z) from the SNe Ia data for the GW source.In this paper, we use the Pantheon sample of SNe Ia data [90].The Pantheon sample compiles 1048 SNe Ia data, covering the redshift range 0.01 < z < 2.26.
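The sky-localization uncertainty mentioned in this paragraph is commonly evaluated from the covariance matrix as (assumed convention):

```latex
\Delta\Omega = 2\pi\,\lvert\sin\theta\rvert\,
               \sqrt{\sigma_{\theta\theta}\,\sigma_{\phi\phi} - \sigma_{\theta\phi}^{2}}.
```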
We use the ΛCDM model to calculate the luminosity distance d_L from the redshift z. The cosmological parameters are chosen as the Planck 2018 results: H_0 = 67.27 km/s/Mpc and Ω_m0 = 0.3166 [19].
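As an illustration of this step, a minimal Python sketch (not the authors' code) that converts a Pantheon redshift into a fiducial luminosity distance with the quoted flat-ΛCDM parameters:

```python
# Minimal sketch: luminosity distance in flat LambdaCDM with the Planck 2018 values quoted above.
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458      # speed of light [km/s]
H0 = 67.27               # Hubble constant [km/s/Mpc]
OMEGA_M0 = 0.3166        # matter density parameter
OMEGA_L = 1.0 - OMEGA_M0

def hubble(z):
    """H(z) for the flat LambdaCDM model [km/s/Mpc]."""
    return H0 * np.sqrt(OMEGA_M0 * (1.0 + z) ** 3 + OMEGA_L)

def luminosity_distance(z):
    """d_L(z) = (1+z) * integral of c dz'/H(z') [Mpc]."""
    comoving, _ = quad(lambda zp: C_KM_S / hubble(zp), 0.0, z)
    return (1.0 + z) * comoving

if __name__ == "__main__":
    for z in (0.023, 0.15, 1.0, 2.26):
        print(f"z = {z:5.3f}  d_L = {luminosity_distance(z):9.1f} Mpc")
```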
MBHs are assumed to form from seed BHs through mergers and gas accretion [91,92]. For MBBHs, following Refs. [31,59], we consider the three widely accepted population models (pop, Q3d and Q3nod). From Fig. 1 and Table I, we see that the median value of the relative error of the luminosity distance is larger than 1% and the median value of the angular resolution is bigger than 0.1 deg^2 with LISA. The Q3nod model gives a better constraint on the luminosity distance. As discussed in [74], the error of the localized volume will contain no more than one field galaxy. Once the MBBH is located within one galaxy, the host galaxy can be identified and we can determine whether an SN Ia occurred in the same galaxy. If MBBH mergers and SNe Ia occur in the same host galaxy, then we can calibrate standard candles with standard sirens.
III. THE CALIBRATION ERROR OF THE ABSOLUTE MAGNITUDE
In this section, we assume that an MBBH merger and an SN Ia are in the same host galaxy so that we can use the luminosity distance of the MBBH with GW measurement to calibrate the SN Ia.At the redshift z, the apparent magnitude m B (z) of an SN Ia is where M B is the absolute magnitude.The error in the estimation of the absolute magnitude (calibration error) mainly comes from the measurement uncertainties of the apparent magnitude σ m B and the luminosity distance σ d L , so the error of the absolute magnitude is For convenience, we define The error of the luminosity distance can be large at some locations.To reduce the calibration error of the absolute magnitude, we discard those GW events with the signal-to-noise ratio ρ < 8 and σ * > σ m B detected by LISA.With this cutoff, we are left with 679 SNe Ia data for the pop model, 743 SNe Ia data for the Q3d model and 804 SNe Ia data for the Q3nod model.Note that we already applied this cutoff in Fig. 1.
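The calibration relations described in this paragraph (the displayed equations were lost in extraction) take the standard form; a sketch consistent with the stated error budget is:

```latex
% Assumed standard forms for the apparent magnitude and error propagation.
m_B(z) = 5\log_{10}\!\left[\frac{d_L(z)}{\mathrm{Mpc}}\right] + 25 + M_B, \qquad
\sigma_{M_B} = \sqrt{\sigma_{m_B}^{2} + \sigma_*^{2}}, \qquad
\sigma_* \equiv \frac{5}{\ln 10}\,\frac{\sigma_{d_L}}{d_L}.
```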
With the estimated luminosity distance and the observed apparent magnitude for each SN Ia, we calculate M_B and σ_MB for each SN Ia, and the results are shown in Fig. 2. We also summarize the median, mean and minimum values of σ_mB and σ_MB for all the SNe Ia data in Table II. From Table II, we see that the error of the luminosity distance accounts for less than 10% of the error of the absolute magnitude. In particular, for the LISA-Taiji-TianQin network, σ_MB is almost the same as σ_mB, so the contribution of σ_* to σ_MB is almost negligible.
Fig. 2 also shows that the calibration error is mainly from the measurement uncertainty of the apparent magnitude. The above discussion assumes that we have only one calibrator. Now we consider the calibration with more than one SN Ia. In other words, we assume that we can locate N pairs of MBBH mergers and SNe Ia such that each pair is in the same host galaxy, so that we have N GW-calibrated SNe Ia to reduce the statistical error. We discuss three cases: the best scenario considers those SNe Ia with the smallest measurement error on the apparent magnitude, the worst scenario considers those SNe Ia with the biggest σ_mB, and the random scenario selects SNe Ia randomly. To constrain M_B with N calibrators, we minimize the chi-square with iminuit [94], and the results of σ_MB versus the number N are shown in Fig. 3. Here m_B^i is the observed apparent magnitude for the SN Ia at redshift z_i, m_B(d_L^i, M_B) is obtained with Eq. (21), and σ_i is the total uncertainty of the i-th calibrator. From Fig. 3, we see that the error of M_B decreases as the number of calibrators increases.
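A minimal sketch of such a fit with iminuit (the arrays below are hypothetical placeholders, not the authors' data; the cost function follows the chi-square described in the text):

```python
# Minimal sketch: constraining M_B from N GW-calibrated SNe Ia by chi-square minimization.
import numpy as np
from iminuit import Minuit

# Hypothetical inputs for N calibrators: GW-measured distances d_L [Mpc] with errors,
# and the observed apparent magnitudes with errors (placeholder values only).
d_L = np.array([450.0, 620.0, 300.0])
sigma_dL = np.array([9.0, 15.0, 5.0])
m_obs = np.array([18.7, 19.4, 17.8])
sigma_m = np.array([0.10, 0.12, 0.08])

def model_mB(dl_mpc, MB):
    """Apparent magnitude predicted from a luminosity distance and absolute magnitude."""
    return 5.0 * np.log10(dl_mpc) + 25.0 + MB

# Total uncertainty: apparent-magnitude error plus the distance error converted to magnitudes.
sigma_i = np.sqrt(sigma_m**2 + (5.0 / np.log(10) * sigma_dL / d_L) ** 2)

def chi2(MB):
    return np.sum(((m_obs - model_mB(d_L, MB)) / sigma_i) ** 2)

chi2.errordef = Minuit.LEAST_SQUARES   # 1-sigma errors from a chi-square cost
m = Minuit(chi2, MB=-19.3)
m.migrad()
print(f"M_B = {m.values['MB']:.3f} +/- {m.errors['MB']:.3f}")
```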
IV. THE UNCERTAINTY OF HUBBLE CONSTANT
In the last section, we discussed the calibrations of the Pantheon sample of SNe Ia data by GWs.Now we can use the calibrated SNe Ia data to measure the Hubble constant H 0 .
Since the calibration of the luminosity distance by GWs involves only a one-step distance ladder, the measured Hubble constant can overcome the problem of the electromagnetic distance ladder. To avoid dependence on cosmological models, we use the kinematic d_L − z relation from a Taylor expansion [95] to fit low-redshift SNe Ia data, where q_0 is the deceleration parameter. Following Ref. [4], to avoid the possibility of a coherent flow in the more local volume, we use 237 SNe Ia in the redshift range 0.023 < z < 0.15 to constrain the Hubble constant H_0 with the cosmographic expansion (26). As discussed in [17], the minimum cutoff of z is large enough to reduce the impact of cosmic variance, and the maximum z is small enough to avoid the dependence on cosmological models. Now we determine the cosmological parameters H_0 and q_0 by marginalizing over M_B with a Bayesian analysis, where f(H_0), f(q_0) and f(M_B) are the prior distributions of H_0, q_0 and M_B, respectively, f(M_B) is a Gaussian distribution with the mean M_B and the 1σ error σ_MB given in the last section, L is the likelihood, E is the evidence, and SN stands for the given SNe Ia data in the redshift range 0.023 ≤ z ≤ 0.15 [90]. The likelihood L is determined by χ², where Σ is the covariance matrix of the 237 SNe Ia data, and m_B(z_i) is the predicted apparent magnitude at redshift z_i from Eqs. (21) and (26).
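The kinematic expansion and likelihood referred to here (equations lost in extraction) have the familiar forms, sketched under the assumption of a low-order cosmographic expansion:

```latex
% Assumed forms of the cosmographic expansion and the Gaussian likelihood.
d_L(z) \simeq \frac{cz}{H_0}\left[1 + \frac{1-q_0}{2}\,z + \mathcal{O}(z^2)\right], \qquad
\mathcal{L} \propto e^{-\chi^2/2},\\[4pt]
\chi^2 = \Delta\mathbf{m}^{\mathsf T}\,\Sigma^{-1}\,\Delta\mathbf{m}, \qquad
\Delta m_i = m_{B}^{\rm obs}(z_i) - m_B(z_i;\,H_0, q_0, M_B).
```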
For the best scenario, the relative error of H 0 can be less than 2% for the three models with 7 calibrators; If N = 20, the relative error of H 0 can reach 1.6% for all three models.
The results are almost the same either with LISA alone or with the LISA-Taiji-TianQin network for all three scenarios. For the random scenario, the relative error of H_0 can reach below 2% with 12, 14, and 11 calibrators for the pop, Q3d, and Q3nod models, respectively; if N = 20, σ_H0/H_0 can be less than 1.9% for all three models. For the worst scenario, the relative error of H_0 can reach below 2% with 31, 32, and 32 calibrators for the pop, Q3d, and Q3nod models, respectively; if N = 40, σ_H0/H_0 can be less than 1.99% for all three models. These results are shown in Fig. 4. The results tell us that we can get a better than 2% determination of the local value of the Hubble constant from SNe Ia in the redshift range 0.023 ≤ z ≤ 0.15 in a model independent way by calibrating the luminosity distances of about 10 SNe Ia with GWs. Due to the measurement uncertainty of the apparent magnitude for SNe Ia, more calibrated SNe Ia can hardly reduce the relative error of H_0 further. Since the luminosity distances of MBBHs were simulated with the flat ΛCDM model, the central value of H_0 obtained here may not be trusted, but the estimated error of H_0 is independent of the model. Once the observations of GWs from MBBHs with space-based GW detectors are available, the method presented here can determine the local value of H_0 with better than 2% precision. However, the relative error of the deceleration parameter q_0 is around 30%.
The above simulation is based on the flat ΛCDM model with H_0 = 67.27 km/s/Mpc. To investigate the impact of the choice of the values of cosmological parameters, we also did the simulation with the cosmological parameters H_0 = 73.00 km/s/Mpc and Ω_m0 = 0.3166 [6], and we find that the results are similar. For the best scenario, the relative error of H_0 can be less than 2% with 7 calibrators by LISA or the LISA-Taiji-TianQin network. For the random scenario, the relative error of H_0 can reach below 2% with 13 calibrators by LISA.
For the worst scenario, the relative error of H_0 can reach below 2% with 38 calibrators by LISA. If we use the LISA-Taiji-TianQin network, the number of calibrators needed to reach 2% accuracy for the random and worst scenarios is 12 and 32, respectively. Therefore, the model independent determination of the local Hubble constant from SNe Ia data calibrated by GWs can shed light on the Hubble tension.
For comparison, we also consider those GWs which calibrate SNe Ia as standard sirens to constrain the Hubble constant. Since the redshift of MBBHs is as large as z ∼ 0.3 for the best scenario, z ∼ 1.3 for the random scenario, and z ∼ 1.7 for the worst scenario, we cannot use the cosmographic expansion (26) and a cosmological model must be invoked.
For simplicity, we consider the constraint on the Hubble constant from the standard siren based on the ΛCDM model. As shown in Fig. 5, the relative error of H_0 with LISA is less than 1% with N ≳ 4 for all scenarios. The result is consistent with that in Refs. [96,97]. For the LISA-Taiji-TianQin network, the relative error of H_0 is less than 0.1%. As discussed above, the results from GWs as standard sirens depend on cosmological models even though the relative error is much smaller.
After learning that at least 7 SNe Ia with their luminosity distances calibrated by GWs are needed to reach a 2% determination of the local Hubble constant, we can now assess whether this can be realized within the next decade of the operation of space-based detectors. According to [98], the galaxies number density is ≈ 2 × 10.
The main problem of the model independent determination of the local Hubble constant from SNe Ia is the absolute calibration of the peak brightness for SNe Ia. The observations of GWs as one-step standard sirens can be used to calibrate the luminosity distances of SNe Ia if an SN Ia and an MBBH merger occur in the same host galaxy. If one SN Ia is calibrated with a GW standard siren, we find that the measurement error of the luminosity distance with LISA accounts for less than 10% of the error of the absolute magnitude. Furthermore, the contribution of the measurement error of the luminosity distance to σ_MB is almost negligible for the LISA-Taiji-TianQin network. We conclude that the calibration error for SNe Ia is mainly from the measurement uncertainty of the apparent magnitude.
For N calibrators, we discussed three cases, the best-case scenario assumes that N SNe Ia with the smallest measurement error on the apparent magnitude and MBBH mergers occur in the same host galaxy, the worst-case scenario assumes that N SNe Ia with the biggest σ m B and MBBH mergers occur in the same host galaxy, and the random-case scenario assumes that N randomly selected SNe Ia and MBBH mergers occur in the same host galaxy.For each case, the measured luminosity distances are used to calibrate the absolute magnitude of N SNe Ia.For the best-case scenario, σ M B can reach 0.023 mag for all three population models.The uncertainty of the absolute magnitude can be as small as 0.034 mag even for the worst-case scenario.Note that the redshift of the calibrated SNe Ia is not limited to be small and it can be arbitrarily large.
After re-calibrating the absolute magnitude of the Pantheon SNe Ia data, we use 237 SNe Ia in the redshift range 0.023 < z < 0.15 to constrain the local Hubble constant.
Note that for the calibration, we are not limited to the 237 SNe Ia in the redshift range 0.023 < z < 0.15, we considered all possible coincident SNe Ia and MBBH mergers to calibrate the whole Pantheon sample of SNe Ia data.For the best-case scenario, the relative error of H 0 can be less than 2% for the three population models with 7 calibrators.For the random-case scenario, the relative error of H 0 can reach below 2% with 12, 14, and 11 calibrators for the pop, Q3d, and Q3nod models, respectively.For the worst-case scenario, the relative error of H 0 can reach below 2% with 31, 32, and 32 calibrators for the pop, Q3d, and Q3nod models, respectively.The uncertainty of the local Hubble constant can be reduced a little bit with more number of calibrators, but the reduction of the uncertainty is insignificant.If we use those GWs that calibrate the luminosity distance of SNe Ia as standard sirens to determine the Hubble constant, we can get a less than 1% precision with LISA and less than 0.1% precision with the LISA-Taiji-TianQin network.However, the results based on standard sirens depend on cosmological models.Subtleties may arise if we consider the relative positions of SNe Ia and the host galaxy of the MBBH mergers, and the peculiar velocity of the host galaxy.
We conclude that at least 7 SNe Ia with their luminosity distances calibrated by GWs are needed to reach a 2% determination of the local Hubble constant.The value of the local Hubble constant is free from the problems of zero-point calibration and model dependence.
Therefore, the model independent determination of the local Hubble constant from SNe Ia data calibrated by GWs can shed light on the Hubble tension.
Fig. 1 and all the figures in the following discussions. The results are consistent with those in Refs. [67,71,72,88,93]. For the same detection threshold ρ ≥ 8, the LISA-Taiji-TianQin network can detect some GW signals that cannot be detected by LISA alone; this is the reason why some results with the network only appear in Fig. 1.
FIG. 1 .
FIG. 1.The 1σ errors of the luminosity distance with LISA and the LISA-Taiji-TianQin network for the pop model.In the top panel, the luminosity distances along with their estimated 1σ errors in the unit of 1 Gpc are shown.In the bottom panel, we show ∆d L in the unit of 100 Mpc, the red dashed lines represent the estimated 1σ error bar with LISA, and green solid lines represent the estimated 1σ error bar with the LISA-Taiji-TianQin network.
FIG. 2 .
FIG. 2. The absolute magnitude M B with 1σ uncertainty calibrated by GWs with the pop model.The top panel shows the observed apparent magnitude, i.e., no error of d L is included.In the middle and bottom panels, we include the errors of d L measured by LISA and the LISA-Taiji-TianQin network, respectively.
FIG. 3 .
FIG. 3. The dependence of σ M B on the number of calibrators N for the pop model.The red solid line and the green dash-dot line represent the estimated 1σ error of M B for the best scenario and the random scenario with LISA, the magenta dashed line represents the estimated 1σ error of M B for the worst scenario with LISA, the blue dotted line represents the estimated 1σ error of M B for the worst scenario with the LISA-Taiji-TianQin network.
FIG. 4 .
FIG.4.The relative error of H 0 with the pop model.The triangle represents the smallest number of calibrators N needed for the relative error reaching below 2%.The red solid line and the green dash-dot line represent the constrained relative error of H 0 for the best scenario and the random scenario with LISA, the magenta dashed line represents the constrained relative error of H 0 for the worst scenario with LISA, the blue dotted line represents the constrained relative error of H 0 for the worst scenario with the LISA-Taiji-TianQin network.
FIG. 5 .
FIG.5.The relative error of H 0 determined from N GW standard sirens with LISA for the pop model.The red solid line, the green dash-dot line, and the blue dotted line represent the constrained 1σ relative error of H 0 for the best scenario, the random scenario, and the worst scenario, respectively.
TABLE I .
The median values of the relative error of the luminosity distance and the angular resolution with LISA and the LISA-Taiji-TianQin network for different population models.
TABLE II .
The median, mean and minimum values of σ m B and σ M B .σ M B (LISA) means the result for σ M B with LISA, and σ M B (Network) means the result for σ M B with the LISA-Taiji-TianQin network. | 6,823.6 | 2022-06-21T00:00:00.000 | [
"Physics"
] |
Superconductivity in Cubic A15-type V–Nb–Mo–Ir–Pt High-Entropy Alloys
We report the crystal structure and superconducting properties of new V5+2x Nb35−x Mo35−x Ir10Pt15 high-entropy alloys (HEAs) for x in the range of 0 ≤ x ≤ 10. These HEAs are found to crystallize in a cubic A15-type structure and have a weakly coupled, fully gapped superconducting state. A maximum T c of 5.18 K and zero-temperature upper critical field B c 2 (0) of 6.4 T are observed at x = 0, and both quantities decrease monotonically with the increase of V content x. In addition, T c shows an increase with increasing valence electron concentration from 6.4 to 6.5, which is compared with other A15-type HEA and binary superconductors.
INTRODUCTION
High-entropy alloys (HEAs) consisting of five or more constituent elements have received a lot of attention as an emerging class of multicomponent alloys [1][2][3][4][5]. These alloys are stabilized by the high mixing entropy rather than the formation enthalpy, and are often referred to as metallic glasses on ordered lattices. Despite the presence of strong chemical disorder, some HEAs exhibit collective quantum phenomena such as superconductivity [6,7]. So far, a number of HEA superconductors have been discovered, and their crystal structures can be categorized into body-centered cubic (bcc)-type [8][9][10][11], α-Mn-type [12,13], CsCl-type [14], hcp-type [15][16][17], A15-type [18], and σ-type [19]. In particular, the A15-type V 1.4 Nb 1.4 Mo 0.2 Al 0.5 Ga 0.5 HEA has a T c of 10.2 K and a disorder-enhanced upper critical field of 20.1 T [18], both of which are the highest among HEA superconductors. It is worth noting that, for binary A15-type superconductors, the T c values exhibit two maxima at valence electron concentrations (VECs) of 4.7 and 6.5, respectively [20]. Since the VEC of the V-Nb-Mo-Al-Ga HEAs is limited to below around 5, it is desirable to search for other A15-type HEA superconductors with VEC close to 6.5.
Motivated by this, we replace Al and Ga in the V-Nb-Mo-Al-Ga HEAs with Ir and Pt to form new V 5+2x Nb 35−x Mo 35−x Ir 10 Pt 15 HEAs. A nearly single A15 phase is found for 0 ≤ x ≤ 10, which corresponds to a VEC range of 6.4-6.5. Physical property measurements indicate that these A15-type HEAs are weakly coupled, fully gapped superconductors with T c and B c2 (0) up to 5.18 K and 6.4 T, respectively. In addition, their T c increases with increasing VEC, in contrast to the V-Nb-Mo-Al-Ga HEAs. A comparison of the T c vs. VEC plots is made between the A15-type HEA and binary superconductors, and its implication is discussed.
MATERIALS AND METHODS
The V 5+2x Nb 35−x Mo 35−x Ir 10 Pt 15 HEAs were prepared by the arc melting method. Stoichiometric amounts of high purity V (99.99%), Nb (99.999%), Mo (99.995%), Ir (99.99%), and Pt (99.99%) elements were mixed thoroughly and pressed into pellets in an argon-filled glove box. The pellets were then melted in an arc furnace under a high-purity argon atmosphere. To ensure homogeneity, the melts were flipped several times, followed by rapid cooling on a water-chilled copper plate. The phase purity of the as-cast HEAs was checked by powder x-ray diffraction (XRD) at room temperature using a Bruker D8 Advance x-ray diffractometer with Cu-Kα radiation. The structural refinements were performed using the JANA2006 program [21]. The morphology and elemental composition were examined by a Zeiss field emission scanning electron microscope (SEM) equipped with an energy dispersive x-ray (EDX) spectrometer. The four-probe resistivity and specific heat were measured in a Quantum Design Physical Property Measurement System (PPMS-9 Dynacool). The dc magnetization measurements were carried out in a commercial SQUID magnetometer (MPMS3).
X-Ray Diffraction and Chemical Composition
The XRD patterns for the V 5+2x Nb 35−x Mo 35−x Ir 10 Pt 15 HEAs are displayed in Figure 1A. For all x values, the major diffraction peaks can be well indexed on a cubic lattice with the Pm3n space group, indicative of a dominant A15 phase. With increasing x, the (004) peak shifts toward higher 2θ values. This points to a decrease of the a-axis with the increase of V content, consistent with its smaller atomic radius compared with those of Nb and Mo [22]. In addition to the A15 phase, small impurity peaks are observed in the vicinity of the main (102) diffraction and probably come from the NbAl 2 -type sigma phase [18]. In the A15 structure, there are two crystallographic sites, (0, 0, 0) and (0.25, 0, 0.5). Following Reference [18], all five constituent elements are assumed to be distributed randomly on these sites for the structural refinement (see the inset of Figure 1A), and their occupancies are fixed by the stoichiometry. This assumption is based on previous studies of binary A15 compounds, which show that antisite disorder is the most common type of point defect [23]. In Nb 3 Sn, it has been argued that the Nb and Sn atoms randomly occupy the two sites after a certain period of mechanical milling [24]. The refinement profiles are shown in Figures 1B-D and the statistics are listed in Table 1. Both the difference plot and the R wp (R p ) factor indicate a reasonably good agreement between the observed and calculated XRD patterns, which supports the validity of the employed structural model. Note that a more definitive conclusion requires atomic-level spectroscopies in the future. The refined lattice parameter a = 5.0324, 5.0130, and 4.9848 Å for x = 0, 5 and 10, respectively, close to those of the A15-type V-Nb-Mo-Al-Ga HEAs. Figures 2A-C show typical SEM images for the HEAs, all of which appear to be dense and homogeneous. Indeed, EDX elemental mapping reveals the uniform distribution of V, Nb, Mo, Ir, and Pt, and, as an example, the results for x = 0 are shown in Figures 2D-H.
Resistivity and Magnetic Susceptibility
Figures 3A,B show the temperature dependencies of resistivity (ρ) and magnetic susceptibility (χ) for the V 5+2x Nb 35−x Mo 35−x Ir 10 Pt 15 HEAs, respectively. For each x value, a sharp drop in ρ and a strong diamagnetic χ are observed, signifying a superconducting transition. As indicated by the vertical dashed line, the midpoint of the ρ drop coincides well with the onset of the diamagnetic transition. By this criterion, T c is determined to be 5.18, 4.49, and 3.61 K for the HEAs with x = 0, 5, and 10, respectively. Below T c , there is a clear bifurcation between the zero-field cooling (ZFC) and field cooling (FC) χ data measured under an applied field of 1 mT, which is characteristic of a type-II superconductor. At 1.8 K, the χ ZFC data correspond to superconducting shielding fractions ranging from 101 to 174%. Although the demagnetization effect is difficult to correct due to irregular sample shapes, these large values suggest bulk superconductivity in these HEAs.
Specific Heat
To confirm the bulk nature of superconductivity, the V 5+2x Nb 35−x Mo 35−x Ir 10 Pt 15 HEAs were further characterized by specific heat (C p ) measurements, whose results are shown in Figure 4. As can be seen in Figure 4A, a C p jump is indeed detected around T c for these HEAs. Above T c , the data are analyzed by the Debye model, where γ and β (δ) are the Sommerfeld and phonon specific heat coefficients, respectively. With β, the Debye temperature Θ D is calculated, where R is the molar gas constant 8.314 J mol −1 K −1 . This gives γ = 4.59, 4.94, and 5.03 mJ mol-atom −1 K −2 , and Θ D = 419, 440, and 393 K for x = 0, 5, and 10, respectively. Figure 4B shows the normalized electronic specific heat C el /γT after subtraction of the phonon contribution. For all HEAs, the specific heat jumps ΔC el /γT c are significantly smaller than the BCS value of 1.43 [25]. Nevertheless, the C el /γT data can still be fitted by a modified BCS model or the α-model [26] with α = 1.39, 1.41 and 1.56 for x = 0, 5 and 10, respectively, where α = Δ 0 /k B T c and Δ 0 is the gap size at 0 K. These results suggest that the V 5+2x Nb 35−x Mo 35−x Ir 10 Pt 15 HEAs are BCS-like superconductors in the weak coupling regime. This is corroborated by their electron-phonon coupling constants λ ep in the range of 0.55-0.59, as calculated using the inverted McMillan formula [27], with μ* = 0.13 being the Coulomb repulsion pseudopotential. It is pointed out that the decrease in T c with increasing x is accompanied by a decrease in λ ep but an increase in γ. Hence the T c in the V 5+2x Nb 35−x Mo 35−x Ir 10 Pt 15 HEAs is mainly governed by the electron-phonon coupling strength rather than the density of states at the Fermi level. In passing, it is worth noting that the T c , γ and λ ep values of the V-Nb-Mo-Ir-Pt HEAs are very similar to those of the (V 0.5 Nb 0.5 ) 3−x Mo x Al 0.5 Ga 0.5 HEAs for x ≥ 1.2 [18], pointing to a common phonon-mediated pairing mechanism.
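A minimal numerical check (not the authors' code) of the quoted coupling constants, using the inverted McMillan formula with μ* = 0.13 and the T_c and Θ_D values given above:

```python
# Minimal sketch: electron-phonon coupling constant from the inverted McMillan formula.
import math

def lambda_ep(tc, theta_d, mu_star=0.13):
    """Inverted McMillan formula for the electron-phonon coupling constant."""
    x = math.log(theta_d / (1.45 * tc))
    return (1.04 + mu_star * x) / ((1.0 - 0.62 * mu_star) * x - 1.04)

# T_c [K] and Theta_D [K] as quoted in the text for x = 0, 5, 10.
for x_v, tc, theta_d in [(0, 5.18, 419.0), (5, 4.49, 440.0), (10, 3.61, 393.0)]:
    print(f"x = {x_v:2d}:  lambda_ep = {lambda_ep(tc, theta_d):.2f}")
# Output: 0.59, 0.56, 0.55 -- consistent with the quoted range 0.55-0.59.
```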
Upper Critical Field
The upper critical fields B c2 of these HEAs were investigated by resistivity measurements under magnetic fields. As an example, the result for the HEA with x = 0 is shown in Figure 5A. The resistive transition is gradually suppressed to lower temperatures as the field increases. For each field, T c is determined using the same criterion as above, and the resulting B c2 vs. T phase diagrams are displayed in Figure 5B. Extrapolating the B c2 (T) data to 0 K using the Werthamer-Helfand-Hohenberg model [28] yields B c2 (0) = 6.4, 5.7 and 4.4 T for the HEAs with x = 0, 5, and 10, respectively. These values are well below the corresponding Pauli limiting fields [29] of ∼9.6, ∼8.4, and ∼6.7 T, suggesting that B c2 in these HEAs is orbitally limited. In addition, the Ginzburg-Landau coherence length ξ GL can be calculated using the equation ξ GL = √(Φ 0 /2πB c2 (0)), where Φ 0 = 2.07 × 10 −15 Wb is the flux quantum. This yields ξ GL = 7.2, 7.6 and 8.7 nm for the HEAs with x = 0, 5, and 10, respectively. The above results are summarized in Table 1. Figure 6 shows the VEC dependence of T c for the V 5+2x Nb 35−x Mo 35−x Ir 10 Pt 15 HEAs, together with the data for A15-type V-Nb-Mo-Al-Ga HEA [18] and binary [20] superconductors for comparison. One can see that superconductivity in all these materials occurs near the VEC values of 4.7 and 6.5, consistent with the expectation from the Matthias rule [30]. Compared with the V-Nb-Mo-Al-Ga HEAs, the V-Nb-Mo-Ir-Pt HEAs have higher VEC values, in the range 6.4-6.5, and their T c shows the opposite trend, increasing monotonically with increasing VEC. Nevertheless, the maximum T c is considerably lower for the V-Nb-Mo-Ir-Pt HEAs than for the V-Nb-Mo-Al-Ga ones. This indicates that the optimal VEC for T c in A15-type HEA superconductors is around 4.7, which is reminiscent of the case of binary A15 compounds [20]. Moreover, for similar VEC values, the T c values of the V-Nb-Mo-Al-Ga and V-Nb-Mo-Ir-Pt HEAs are always no more than half those of the binary compounds. It is thus reasonable to speculate that the upper limit of T c for A15-type HEA superconductors is about one-half the highest T c in binary A15 superconductors.
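For illustration, the Pauli limits and coherence lengths quoted above follow directly from two standard relations, the weak-coupling estimate B P ≈ 1.86 T c and ξ GL = √(Φ 0 /2πB c2 (0)). The minimal Python sketch below (ours, not part of the paper) reproduces the listed values from T c and the fitted B c2 (0).

import numpy as np

PHI0 = 2.07e-15  # flux quantum, Wb

def pauli_limit(tc):
    # weak-coupling BCS estimate of the Pauli limiting field, in tesla
    return 1.86 * tc

def xi_gl(bc2_0):
    # Ginzburg-Landau coherence length (metres) from B_c2(0) in tesla
    return np.sqrt(PHI0 / (2.0 * np.pi * bc2_0))

for tc, bc2 in [(5.18, 6.4), (4.49, 5.7), (3.61, 4.4)]:
    print(round(pauli_limit(tc), 1), round(xi_gl(bc2) * 1e9, 1))
# prints (9.6, 7.2), (8.4, 7.6), (6.7, 8.7), matching the values quoted in the text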
CONCLUSION
In summary, we have studied the structural, electronic, magnetic and thermodynamic properties of the V 5+2x Nb 35−x Mo 35−x Ir 10 Pt 15 HEAs with 0 ≤ x ≤ 10. In this x range, the HEAs adopt a cubic A15-type structure and exhibit bulk superconductivity. The analysis of their specific-heat jumps points to a weakly coupled, fully gapped superconducting state. The T c and B c2 (0) reach 5.18 K and 6.4 T, respectively, at x = 0, and decrease monotonically with increasing V content x. In addition, T c increases with increasing VEC from 6.4 to 6.5, and its comparison with isostructural HEA and binary superconductors suggests that the upper limit of T c for A15-type HEA superconductors is about half that for binary compounds. Our study helps to better understand the effect of chemical disorder in A15-type superconductors.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, upon reasonable request.
AUTHOR CONTRIBUTIONS
LB synthesized the samples and did the physical property measurements with the assistance from WJF, CYW, ZQQ, XGR. WSQ and CGH contributed in the magnetic measurements. RZ supervised the project and wrote the paper. | 2,937.2 | 2021-03-31T00:00:00.000 | [
"Physics",
"Materials Science"
] |
A smoothness indicators free third-order weighted ENO scheme for gas-dynamic Euler equations
To improve the shock-capturing capability of the third-order WENO scheme and enhance its computational efficiency, in this paper we design a new WENO scheme that is independent of the local smoothness factor, WENO-SIF. The weight function of the WENO-SIF scheme is a segmented function of the sub-stencil flux ratio, which guarantees the desired accuracy at higher-order critical points. WENO-SIF does not need to compute the smoothness factor during the computation, which effectively reduces the computational cost. The present WENO-SIF is compared with WENO-JS and other WENO schemes in numerical experiments on one- and two-dimensional benchmark problems with a suitable choice of λ = 0.13. The results demonstrate that the WENO-SIF scheme can further improve the resolution of WENO-JS, achieve optimal accuracy at high-order critical points, and significantly reduce the computational cost.
Introduction
Based on the ENO scheme (Harten 1983), Liu et al. (1994) proposed the WENO method, which maintains the ENO property in smooth regions by recombining the sub-stencils through a weighting function. Due to its high accuracy and essentially non-oscillatory property, the WENO scheme has been widely used and vigorously developed in fluid mechanics and aerospace. Jiang and Shu (1996) proposed a technique that can measure the smoothness of sub-stencils, and they gave a framework for building WENO methods of various orders of accuracy. Since then, researchers have conducted tests, evaluations, and improvements on the WENO method (Balsara and Shu 2000; Qiu and Shu 2003; Rathan and Raju 2018). Henrick et al. (2005) found that the fifth-order WENO-JS scheme suffers from accuracy degradation at extrema. They developed a mapping function that resulted in a fifth-order accurate WENO-M method achieving the best performance at first-order critical points. Since then, Feng et al. (2012, 2014), Wang et al. (2016), Vevek et al. (2019), Hong et al. (2020), Li and Zhong (2021), and Zhu and Qiu (2021) have successively developed a series of mapping functions leading to different higher-order mapping-type WENO schemes. Unlike Henrick's mapping-type WENO scheme, Borges et al. (2008) designed a new type of nonlinear weights by using a linear combination of low-order local smoothness factors to construct a global high-order smoothness factor, and developed a class of structurally simple fifth-order WENO-Z schemes. In addition, Castro et al. (2011), Don and Borges (2013), Liu et al. (2018), Wang et al. (2018), Peng et al. (2019), Baeza et al. (2019a, b, 2020), Semplice et al. (2016), Dumbser et al. (2017), Cravero et al. (2018), Rathan et al. (2020) and Huang and Chen (2021) have successively developed various high-order WENO-Z-type schemes based on Borges' approach.
Compared to higher-order WENO schemes of order five and above, the third-order WENO scheme is more robust in capturing shocks, uses fewer grid points, and can be easily extended to unstructured grids. However, it also has shortcomings, such as higher dissipation and lower accuracy. Yamaleev and Carpenter (2009) developed a high-resolution third-order energy-stable WENO scheme (ESWENO) using an amount-preserving, stable global smoothness factor. Liu et al. (2017) proposed the WENO-MN scheme, which uses the values of all three points on the global stencil of the third-order WENO scheme to calculate the local smoothness factor. Zhao (2015, 2016) and Xu and Wu (2018) developed three WENO schemes, namely WENO-N, WENO-NP, and WENO-NN, using different nonlinear combinations of local smoothness indicators and local smoothness factors. A new reference smoothness indicator was introduced to construct a third-order WENO-R scheme with low dissipation. A new global smoothness indicator, constructed by using Taylor expansions to handle the local smoothness indicators, led to the WENO-ZF scheme. Two further global smoothness indicators, designed by nonlinearly combining local smoothness indicators with reference values based on Lagrangian interpolation polynomials, led to the WENO-L3 and WENO-L4 schemes. Tang and Li (2021) constructed three multiparameter types of finite-volume WENO schemes by modifying the weight functions of the WENO scheme using the construction of the limiter of the MUSCL scheme. In addition, many other scholars have improved the third-order WENO scheme and developed numerous new schemes (Kumar and Kaur 2020; Kim et al. 2021).
However, the various current third-order WENO schemes suffer from two shortcomings. First, they fail to achieve optimal accuracy at high-order critical points. Second, some high-resolution WENO schemes suffer from a relatively complicated construction of the weight function and low computational efficiency. To address these issues, in this paper we improve the WENO scheme by constructing a weight function that does not depend on the local smoothness factor. The results show that WENO-SIF can achieve the optimal weights at higher-order critical points and has a simple and efficient structure. As demonstrated by ADR analysis (Pirozzoli 2006) and by calculations of various 1D and 2D problems, the present WENO-SIF scheme with λ = 0.13 has better spectral properties and higher resolution, and its computational efficiency is much higher than that of the other WENO schemes considered.
The basic framework of this paper is as follows: Sect. 2 presents the structure of the classical third-order WENO scheme. In Sect. 3 we give the construction of the new WENO scheme and determine the parameters of its weight function using the ADR method. In Sect. 4, we compare the performance of the various WENO schemes in terms of accuracy at higher-order critical points, resolution, and computational efficiency for 1-D and 2-D Euler gas dynamics. We provide the conclusions of this paper in Sect. 5.
Reviews of WENO methods
In this section, we briefly introduce a series of third-order WENO schemes. Consider the following one-dimensional scalar hyperbolic conservation law: u t + f (u) x = 0. (1) Discretizing the computational region into uniform intervals of x yields a semi-discrete conservative scheme of the form du j /dt = −(f̂ j+1/2 − f̂ j−1/2 )/Δx, where f̂ j+1/2 = f̂ + j+1/2 + f̂ − j+1/2 is the numerical flux, which satisfies df + (u)/du ≥ 0 and df − (u)/du ≤ 0. Since f̂ + j+1/2 and f̂ − j+1/2 are approximated similarly, for convenience we will only describe the approximation of f̂ + j+1/2 . For simplicity, we will drop the superscript + of the flux.
On the 3-point stencil around the interface x j+1/2 , the numerical flux function of the third-order finite-difference WENO scheme is defined as f̂ j+1/2 = ω 0 q 0 + ω 1 q 1 , where q k is the second-order flux on the sub-stencil S k = {x j−1+k , x j+k }, k = 0, 1, defined by q 0 = −f j−1 /2 + 3f j /2 and q 1 = f j /2 + f j+1 /2. The weights ω k will be introduced in the following subsections.
WENO-JS scheme
The nonlinear weights proposed by Jiang and Shu (1996) are ω k = α k /(α 0 + α 1 ) with α k = d k /(β k + ε) 2 , where (d 0 , d 1 ) = (1/3, 2/3) and ε is a positive number, here taken as 10 −6 , introduced to avoid a vanishing denominator in smooth regions. The local smoothness indicators β k on the sub-stencils are calculated from the usual integral definition, where r = 2 is the number of sub-stencils, and for the third-order WENO scheme they reduce to β 0 = (f j − f j−1 ) 2 and β 1 = (f j+1 − f j ) 2 . Taylor expansion of Eq. (7) at the point x j yields β 0 = f j ′ 2 Δx 2 − f j ′ f j ″ Δx 3 + O(Δx 4 ) and β 1 = f j ′ 2 Δx 2 + f j ′ f j ″ Δx 3 + O(Δx 4 ). Here n cp is an integer defined as follows: if the function f (x) has n cp + 1 derivatives and f ′ (x 0 ) = · · · = f (n cp ) (x 0 ) = 0 with f (n cp +1) (x 0 ) ≠ 0, then x 0 is said to be an n cp -order critical point of f (x). Substituting (9) into (5) and ignoring ε, one obtains the leading-order behavior of the weights. To obtain a high-accuracy numerical solution, the nonlinear weights of the WENO scheme should be approximately equal to the linear weights in smooth regions. Yamaleev and Carpenter (2009) proposed a sufficient condition for the third-order accuracy of the weights ω k ; from it, one can see that WENO-JS will reduce the order at high-order critical points.
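For concreteness, the reconstruction just described can be written compactly as follows. This is a minimal Python sketch of the third-order WENO-JS interface flux (our illustration, with our own variable names), not code from the paper.

def weno3_js_interface_flux(f_jm1, f_j, f_jp1, eps=1e-6):
    # Third-order WENO-JS numerical flux at x_{j+1/2} from the point values
    # f_{j-1}, f_j, f_{j+1} (positive, left-biased flux part only).
    # Second-order candidate fluxes on S0 = {x_{j-1}, x_j} and S1 = {x_j, x_{j+1}}:
    q0 = -0.5 * f_jm1 + 1.5 * f_j
    q1 = 0.5 * f_j + 0.5 * f_jp1
    # local smoothness indicators beta_k
    b0 = (f_j - f_jm1) ** 2
    b1 = (f_jp1 - f_j) ** 2
    # nonlinear weights built from the optimal weights d0 = 1/3, d1 = 2/3
    a0 = (1.0 / 3.0) / (b0 + eps) ** 2
    a1 = (2.0 / 3.0) / (b1 + eps) ** 2
    w0, w1 = a0 / (a0 + a1), a1 / (a0 + a1)
    return w0 * q0 + w1 * q1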
WENO-Z-type scheme
To improve the accuracy of WENO-JS, Borges et al. (2008) constructed a fifth-order WENO-Z scheme capable of achieving the optimal accuracy at first-order critical points by introducing a global smoothness indicator. Similar to the fifth-order WENO-Z scheme, the weights of the third-order WENO-Z-type scheme are calculated from the local smoothness indicators together with a global one, from which the weight functions of the third-order WENO-Z scheme are obtained. In the smooth region, the Taylor expansion of Eq. (13) at the point x j gives the leading-order behavior of these weights, and from equation (15) one can see that the weight function of the WENO-Z scheme does not satisfy the sufficient conditions given by Yamaleev et al. However, the numerical results show that WENO-Z can achieve the optimal accuracy at the 0th-order critical point. Xu and Wu (2018) proposed a new WENO-Z-type scheme, WENO-PZ, by slightly modifying the local smoothness indicator of the third-order WENO-Z using Taylor expansion, and they gave a new reference smoothness indicator with an exponent p. To enable the WENO-PZ scheme to achieve high accuracy at the critical points, Xu et al. set the value of p to 3/4. However, since the exponents of the local smoothness factors in WENO-PZ are rational numbers, they consume a lot of computational time during the calculation, a conclusion we will confirm later in the numerical results section.
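A sketch of how such Z-type weights are typically formed is given below. The specific global indicator used here, τ = |β 1 − β 0 |, is the common choice for third-order Z-type schemes and is our assumption, since the displayed equations are not reproduced in the extracted text.

def weno3_z_weights(b0, b1, eps=1e-40, d0=1.0/3.0, d1=2.0/3.0):
    # Z-type nonlinear weights from the local smoothness indicators b0, b1.
    # tau = |b1 - b0| is the assumed third-order global smoothness indicator.
    tau = abs(b1 - b0)
    a0 = d0 * (1.0 + tau / (b0 + eps))
    a1 = d1 * (1.0 + tau / (b1 + eps))
    return a0 / (a0 + a1), a1 / (a0 + a1)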
WENO-M scheme
To overcome the loss of accuracy of the fifth-order WENO-JS scheme at critical points, Henrick et al. (2005) designed a mapping function g k (ω) = ω(d k + d k 2 − 3d k ω + ω 2 )/(d k 2 + ω(1 − 2d k )) to improve the approximation. Similar to the fifth-order WENO-M scheme, the weights of the third-order WENO-M-type scheme are obtained by applying this mapping to the WENO-JS weights and renormalizing. A Taylor expansion of Eq. (17) at d k shows that the WENO-M weights satisfy the sufficient condition for third-order accuracy. Interestingly, although WENO-M can theoretically achieve the optimal accuracy, this is not realized in actual computations, a conclusion we will present in the section on numerical calculations.
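The following short Python sketch illustrates the mapping step (our illustration; the mapping formula is the standard one of Henrick et al. (2005), assumed here because the displayed equations are not reproduced in the extracted text).

def henrick_map(w, d):
    # Henrick et al. (2005) mapping: g(d) = d and g'(d) = g''(d) = 0,
    # which pushes a WENO-JS weight w back toward the optimal weight d.
    return w * (d + d * d - 3.0 * d * w + w * w) / (d * d + w * (1.0 - 2.0 * d))

def weno3_m_weights(w0_js, w1_js, d0=1.0/3.0, d1=2.0/3.0):
    # map the JS weights, then renormalize
    a0, a1 = henrick_map(w0_js, d0), henrick_map(w1_js, d1)
    return a0 / (a0 + a1), a1 / (a0 + a1)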
WENO-MN scheme
In the smooth region, the Taylor series expansions of (21) determine the leading-order behavior of the weights of the WENO-JS-type scheme WENO-MN. Liu et al. pointed out that the weights of the WENO-MN scheme do not meet the sufficient condition near critical points, while its numerical dissipation is significantly smaller than that of WENO-JS in the smooth region.
Design and properties of the new WENO scheme
In this section, we propose a new WENO scheme that is independent of the local smoothness factor. The construction of WENO-SIF proceeds in the following steps. First, we reconstruct the weight functions of WENO-JS in terms of a ratio r j associated with the sub-stencils, where ε is a positive number introduced to prevent the numerator and denominator from vanishing; in this paper we always take ε = 10 −40 unless otherwise specified. Thus, the weight functions of WENO-JS can always be rewritten as functions of r j ; similarly, the weight functions of the WENO-Z scheme, and likewise those of the WENO-MN scheme, can be rewritten in this form. Second, combining the characteristics of the above WENO schemes, one can construct various improved WENO schemes without local smoothness factors that satisfy the third-order condition by using functions of r j . This leads to the following conclusion.
Theorem 1 If the weight functions ω k (r j ), k = 0, 1, satisfy the following conditions: (a) ω 0 (r j ) (respectively ω 1 (r j )) is monotonically decreasing (respectively increasing) in r j and satisfies ω 0 (0) = 1; then the resulting WENO scheme can achieve optimal accuracy at any critical point.
Proof Ignoring ε in Eq. (24), we obtain the simplified weights. Suppose that ω k (r j ), k = 0, 1, satisfy the two conditions in the theorem; then these weight functions satisfy the sufficient conditions for third-order accuracy at critical points of any order.
To reduce the cost of evaluating the weight function during the computation and to ensure that the new WENO scheme can achieve optimal accuracy at any critical point, we construct weight functions of the following form by combining the two conditions in Theorem 1.
in which, λ is a positive number less than 1 to be determined. We refer to the new WENO scheme equipped with the weight function of (30) as the local smooth factor-free WENO scheme, or WENO-SIF for short.
It is easy to verify that WENO-SIF satisfies all the conditions of Theorem 1 for λ < 1, and we can also get the following properties.
Proposition Consider sub-stencils S C and S D of WENO-SIF, where S D is the sub-stencil on which the solution is less smooth than on S C , i.e. β D > β C , and let λ 1 and λ 2 be positive numbers satisfying λ 1 < λ 2 < 1. Then, for arbitrary positive numbers λ 1 and λ 2 with λ 1 < λ 2 , the corresponding inequality between the weights holds. Figure 2 depicts the spectral properties of WENO-SIF obtained with the approximate dispersion relation (ADR) analysis for shock-capturing methods (Pirozzoli 2006). We can see that as λ → 0, the spectrum of WENO-SIF approaches that of the third-order linear upwind scheme. To obtain good numerical stability and high resolution of WENO-SIF in numerical calculations, as a rule of thumb, we always choose λ = 0.13 in this paper; the impact of λ on the scheme is discussed further in the numerical calculations below. We also compare the dispersion and dissipation of the third-order WENO-JS, WENO-Z, WENO-M, WENO-PZ (p = 3/4), WENO-MN, WENO-SIF, and the third-order upwind scheme, with the results shown in Fig. 3. From the figure, one can see that WENO-SIF has better spectral characteristics than the other WENO schemes.
Numerical results
In this section, we will compare the performance of the present third-order WENO-SIF with WENO-JS, WENO-Z, WENO-M, WENO-PZ and WENO-MN by computing several classical problems, where the time advance is the third-order TVD Runge-Kutta method (Jiang and Shu 1996).
where C[u n ] denotes the numerical flux operator. Unless otherwise specified, we set ε = 10 −40 in the following, except for WENO-JS (ε = 10 −6 ), and the CFL number is 0.6. In the following, the CPU times for all 1-D problems are the averaged values of 100 calculations run with an Intel i3-10100 @ 3.60 GHz processor.
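The third-order TVD Runge-Kutta update cited above has the standard three-stage form; the following minimal Python sketch (ours, with hypothetical names) shows one step for a semi-discrete system du/dt = L(u).

def tvd_rk3_step(u, dt, L):
    # one step of the standard third-order TVD (SSP) Runge-Kutta scheme
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))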
Convergence at higher-order critical points
In this subsection, we compare the various WENO schemes by computing the accuracy of a trial function at critical points of selected orders. The trial function is chosen such that its first n cp derivatives vanish at the point x 0 = 0, that is, x 0 = 0 is an n cp -order critical point of f (x). For each n cp , we investigate the convergence at the critical point x 0 = 0 at six levels from q = 0 to 5, where each level is defined by Δx = 0.001/2 q . Tables 1, 2 and 3 list the L 1 , L 2 and L ∞ errors and convergence rates of the approximation. Table 1 shows the computational errors and accuracies of the various WENO schemes at the n cp = 0 critical point x c = 0. One can see that all the WENO schemes achieve third-order accuracy. The errors of WENO-Z, WENO-M, WENO-PZ, WENO-MN, and WENO-SIF are about the same and smaller than those of the WENO-JS scheme. This shows that the WENO-SIF scheme can achieve optimal accuracy at the 0th-order critical point. Table 2 presents the computational errors and accuracies at the n cp = 1 critical point. The accuracy of all the existing WENO schemes degrades, with WENO-JS and WENO-Z reduced to first order, whereas the WENO-SIF scheme provided in this paper maintains the same errors and accuracies as at the zero-order critical point. This shows that the WENO-SIF method can achieve optimal accuracy at the first-order critical point. Table 3 presents the computational errors and accuracies for n cp = 2. The results show that the accuracy of WENO-JS, WENO-Z, and WENO-M decreases to second order at the second-order critical point, while WENO-PZ can achieve third-order accuracy. In contrast, the WENO-SIF scheme has the smallest errors of all types and maintains the theoretical third-order accuracy.
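The convergence rates in Tables 1-3 can be recovered from the errors at successive refinement levels; a minimal Python sketch of this standard post-processing step (ours, with a hypothetical error sequence) is given below.

import numpy as np

def convergence_rates(errors):
    # observed orders between successive levels, where the grid spacing is
    # halved at each level (dx_q = 0.001 / 2**q): rate_q = log2(E_{q-1} / E_q)
    e = np.asarray(errors, dtype=float)
    return np.log2(e[:-1] / e[1:])

# hypothetical error sequence decaying at third order in dx:
print(convergence_rates([1e-3 / 8.0**q for q in range(6)]))  # -> [3. 3. 3. 3. 3.]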
One-dimensional linear advection problems
The main work of this subsection is to examine the resolution and computational efficiency of the various WENO schemes in computing the 1D linear advection equation u t + u x = 0.
Case 1
The initial conditions of this case are as follows. Figure 4 shows the numerical solutions of the various WENO methods using 200 uniform cells at t = 2. We can see that the results of WENO-SIF are closer to the exact solution, while WENO-PZ shows nonphysical oscillations. This shows that the present WENO-SIF is numerically stable and far less dissipative than the other WENO schemes.
In this example, we also compare the L 1 errors and CPU times of the various schemes in computing this problem on different grid sizes, and the results are shown in Table 4 and Fig. 5. The results show that the CPU times of WENO-SIF are significantly smaller than those of the other WENO schemes for this problem, and all of its errors are also significantly smaller. This shows that the WENO-SIF scheme has far better resolution and higher efficiency than the others.
Case 2
The initial conditions of this case are where G(x, β, z) = e −β(x−z) 2 , F(x, α, a) = max(1 − α 2 (x − a) 2 , 0) and the constants z = −0.7, δ = 0.0005, β = ln2/(36δ 2 ) and α = 10. Figure 6 represents the numerical solutions of various WENO methods using 200 consistent cells with t = 6. In the square wave region (A), the results of both WENO-PZ and WENO-MN schemes show an upward jump, while the three schemes WENO-JS, WENO-Z, and WENO-M do not capture the square wave. In contrast, the present WENO-SIF is able to recognize the square wave and has a better match with the exact solution. And at the junction of square and triangular waves (B), the WENO-SIF provided in this paper performs significantly better than other WENO schemes. That shows the WENO-SIF scheme has better resolution than the others. We also compare the L 1 error and CPU time when computing this problem using different WENO schemes, and the results are presented in Table 5 and Fig. 7. That shows the CPU times of WENO-SIF are significantly less than the other WENO schemes, and the errors are also less than the other schemes. That shows the WENO-SIF scheme has good resolution and high efficiency than others.
One-dimensional inviscid Burgers equation
Next, we solve the one-dimensional inviscid Burgers equation u t + uu x = 0, x ∈ [0, 2π], u 0 (x) = 0.5 + 0.5 sin(x), with periodic boundary conditions. We also compare the L 1 errors and CPU times for this problem, and the results are presented in Table 6 and Fig. 9. The results show that WENO-SIF requires significantly less CPU time than the other WENO schemes to compute this problem, while achieving higher accuracy. This shows that the WENO-SIF scheme has higher resolution and computational efficiency than the other WENO schemes.
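As an illustration of how such a test is typically run, the following self-contained Python sketch advances this Burgers problem with a third-order WENO-JS reconstruction, global Lax-Friedrichs flux splitting, and TVD RK3 time stepping. It is our minimal sketch, not the authors' code; the grid size, final time and CFL number are arbitrary choices.

import numpy as np

def weno3_left(f, eps=1e-6):
    # left-biased third-order WENO-JS value at x_{i+1/2}, periodic via np.roll
    fm, f0, fp = np.roll(f, 1), f, np.roll(f, -1)
    q0, q1 = -0.5 * fm + 1.5 * f0, 0.5 * (f0 + fp)
    b0, b1 = (f0 - fm) ** 2, (fp - f0) ** 2
    a0 = (1.0 / 3.0) / (b0 + eps) ** 2
    a1 = (2.0 / 3.0) / (b1 + eps) ** 2
    return (a0 * q0 + a1 * q1) / (a0 + a1)

def burgers_rhs(u, dx):
    # global Lax-Friedrichs splitting f^(+/-) = (f(u) +/- a*u)/2 with f(u) = u^2/2
    a = np.max(np.abs(u))
    fp = 0.5 * (0.5 * u**2 + a * u)
    fm = 0.5 * (0.5 * u**2 - a * u)
    # f^+ is reconstructed left-biased; f^- uses the mirrored (right-biased) stencil
    fhat = weno3_left(fp) + np.roll(weno3_left(fm[::-1])[::-1], -1)
    return -(fhat - np.roll(fhat, 1)) / dx

n = 200
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
u = 0.5 + 0.5 * np.sin(x)
t, t_end, cfl = 0.0, 1.5, 0.6
while t < t_end:
    dt = min(cfl * dx / np.max(np.abs(u)), t_end - t)
    u1 = u + dt * burgers_rhs(u, dx)
    u2 = 0.75 * u + 0.25 * (u1 + dt * burgers_rhs(u1, dx))
    u = u / 3.0 + 2.0 / 3.0 * (u2 + dt * burgers_rhs(u2, dx))
    t += dt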
Thereafter, we also computed the one-dimensional advection equation and Burgers' equation for several different values of λ, and the results are shown in Figs. 10, 11, and 12. Figure 10 depicts the results of case 1 with 200 cells at t = 2 for λ = 0.05, 0.10, 0.13, 0.20 and 0.50. One can see that numerical oscillations occur in the results for λ = 0.05 and 0.10, while the results for λ = 0.13 and 0.20 are significantly better than the rest. From Fig. 11, we can see that WENO-SIF produces significant oscillations at the contact points of the various waves for λ = 0.05, while the results for λ = 0.13 are better than the rest. In the results of the Burgers problem, oscillations occur for λ = 0.05, while the results for λ = 0.10 are better than the rest. Combining the above three test cases shows that the choice λ = 0.13 for WENO-SIF is reasonable.
One-dimensional gas dynamic Euler equations
This section examines the performance of the various WENO schemes on the 1-D gas dynamic Euler equations.
where ρ, u, p are the density, velocity and pressure, respectively, E = p/(γ − 1) + ρu 2 /2 is the total energy, and the specific heat ratio is set as γ = 1.4. The time step is determined by the CFL condition based on the maximal characteristic speed |u| + c, with c = √(γ p/ρ). In this example, we also compare the L 1 errors and CPU times of the various schemes in computing this problem on different grid sizes, and the results are shown in Table 7 and Fig. 14. It is easy to see that the WENO-SIF scheme has the smallest CPU time, and its error is the smallest among the results of all the WENO schemes. This shows that the present WENO-SIF has optimal computational efficiency.
Shu-Osher's problem
The initial conditions of Shu-Osher's problem are prescribed on [−5, 5]. This problem contains low-frequency and high-frequency density disturbances and is used to test the performance of the different WENO methods. Figure 15 presents the density distribution curves of the various WENO methods with 400 cells at t = 1.8, compared against a reference solution. The L 1 errors and CPU times for this problem are shown in Table 8 and Fig. 16. It is easy to see that the WENO-SIF scheme has the smallest CPU time, and its error is the smallest among the results of all the WENO schemes. This shows that the present WENO-SIF has optimal computational efficiency.
Titarev-Toro's problem
The initial conditions of the problem are as follows. Figure 17 shows the computational results of the various WENO methods using 2000 cells at t = 5. The reference solution is computed using the third-order WENO-JS scheme on 8000 cells. It can be seen from the figure that WENO-SIF performs better than the other WENO schemes. The L 1 errors and CPU times for the problem on the selected grid are shown in Table 9 and Fig. 18. It is easy to see that the WENO-SIF scheme takes much less CPU time to compute the problem, and its accuracy is higher than that of the others. This indicates that the present WENO-SIF has the best resolution and computational efficiency.
Two-dimensional gas dynamic Euler equations
In this section, we investigate the performance of the WENO schemes considered in this paper by solving two-dimensional gas dynamics problems, where ρ, u, v, p are the density, x-velocity, y-velocity and pressure, respectively, and the total energy is defined as E = p/(γ − 1) + ρ(u 2 + v 2 )/2. The time step is determined by the CFL condition based on the maximal characteristic speeds |u| + c and |v| + c in the two directions, where c = √(γ p/ρ) and CFL = 0.6. We use the global Lax-Friedrichs flux splitting method in each of the following examples, and the numerical fluxes are reconstructed in characteristic space. The specific heat ratio is γ = 1.4 for all examples except the RT instability problem, where γ = 5/3.
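For illustration, a CFL-limited time step of the kind just described can be computed from the conservative variables as in the short Python sketch below. This is our sketch: the exact time-step formula used by the authors is not reproduced in the extracted text, so the standard expression Δt = CFL/(max(|u|+c)/Δx + max(|v|+c)/Δy) is assumed.

import numpy as np

def euler2d_dt(rho, mx, my, E, dx, dy, gamma=1.4, cfl=0.6):
    # CFL-limited time step for the 2-D Euler equations from the conservative
    # variables (rho, rho*u, rho*v, E) stored as 2-D arrays
    u, v = mx / rho, my / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * (u**2 + v**2))
    c = np.sqrt(gamma * p / rho)          # local sound speed
    sx = np.max(np.abs(u) + c) / dx       # max signal speed per cell width, x direction
    sy = np.max(np.abs(v) + c) / dy
    return cfl / (sx + sy)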
2-D Riemann problem: case 1
The initial conditions for the two-dimensional Riemann problem are prescribed following (Roe 1981). In the results, there is a mushroom-like structure symmetric about the diagonal y = x, and the richness of the vortex structures at the interface is commonly used to examine the resolution of different schemes. The boundary conditions for this problem are zero-order extrapolation on all four edges. Figure 19 shows the density contours for this case using 800 × 800 cells up to t = 0.8, with 24 contours in [0.2, 1.7]. One can see that the unstable Kelvin-Helmholtz microstructures resolved by WENO-SIF are significantly more abundant than those of the other WENO schemes, showing that the present scheme has better resolution than the rest.
All four edges of the problem are zero-order extrapolation boundary conditions. Figure 20 shows the density contours for the various WENO schemes using 800 × 800 cells until
All four edges of the problem are reflection boundaries. Figure 21 shows the density contours for the various WENO schemes using 800 × 800 cells until t = 2.5, with 12 contours in [0.45, 1.0]. Comparing the jets along the diagonal in each plot, we can see that the jet computed by WENO-SIF (panel F) is longer than those of the others. This shows that the present scheme performs better than the rest on this problem.
Double-Mach reflection problem
This problem has been widely used as a test example for high-order schemes. The initial conditions of the double-Mach reflection problem on the computational region [0, 4] × [0, 1] are given as in Jiang and Shu (1996), Hong et al. (2020) and Woodward and Colella (1984). The boundary conditions are as follows: on the bottom, exact post-shock conditions are imposed in the interval [0, 1/6], and a reflection boundary is used for the rest; the top follows the exact motion of a Mach 10 shock; inflow and outflow conditions are used on the left and right, respectively. Figure 22 shows the density contours for the various WENO schemes using 1024 × 256 cells until t = 0.2, with 30 contours in [2, 22]. At the interface, WENO-SIF has a richer vortex structure than the other solutions. This shows that the present WENO-SIF has better resolution than the others.
This problem serves as a benchmark to examine the ability of the various schemes to capture complex vortex structures and preserve symmetry. We set the boundaries of the problem as follows: the bottom and top are the inflow and outflow boundaries, respectively, and the left and right sides are reflection boundaries. Figure 23 shows the density contour profiles of the various WENO methods using 256 × 1024 grids up to t = 1.95, with 13 density contours in [0.9, 2.2]. Comparing the results of all the schemes, one can see that the present WENO-SIF has the richest vortex structure and preserves the symmetry.
Forward-facing step problem
The problem describes a Mach 3 supersonic flow entering a wind tunnel containing a step, where the flow is reflected several times off the surfaces, thereby generating a detached bow shock. This problem is commonly used to test the numerical stability and resolution of different numerical schemes. The wind tunnel has a length of 3 and a width of 1; the step has a height of 0.2 and is located at a distance of 0.6 from the inlet. The state (ρ, u, v, p) = (1.0, 3.0, 0.0, 0.71429) is imposed at the left side of the wind tunnel, and outflow boundary conditions are used on the right side. The surfaces of both the tunnel and the step are reflection boundaries. Figure 24 shows the density contour profiles of the various WENO methods using 900 × 300 grids at t = 4.0, with 45 density contours in [0.5, 6.0]. At the interfaces of these shocks, WENO-SIF has a richer vortex structure than the others. This indicates that the present WENO-SIF has better resolution than the other WENO schemes.
We set all boundaries of this problem to be periodic. Figure 25 shows the density contours for the various WENO schemes using 800 × 800 cells until t = 1, with 12 contours in [0.9, 2.2]. As can be seen in the figure, the vortex structures of the WENO-SIF scheme are larger than those of the remaining schemes. This indicates that the present scheme has a higher resolution than the other schemes.
Shock/shear layer interaction problem
The initial conditions of the shock/shear layer interaction problem involve the parameters b = 10, a 1 = a 2 = 0.05 and T = 30/2.68. The right boundary is an outflow boundary, whereas the upper and lower boundaries are set as post-shock and slip-wall conditions, respectively. Figure 26 presents the density contour profiles of the various WENO methods using 600 × 120 cells. It can be observed that the vortex structure of the WENO-SIF scheme is richer than that of the remaining schemes. This indicates that the present scheme has a higher resolution than the alternative schemes.
Computational time for 2-D problems
In this section, we present the time consumed by the various WENO methods in computing the 2D problems, using a desktop computer with an Intel i7-9700 CPU @ 3.00 GHz processor; the results are listed in Table 10. In this table, the CPU time of WENO-JS is taken as the reference for the relative times of the various WENO schemes. Of all the WENO schemes, WENO-SIF uses the smallest amount of CPU time. Compared with the WENO-JS scheme, WENO-SIF saves at least about 20% of the computation time, and its CPU time is only about 20% of that of WENO-PZ. Combining the one- and two-dimensional problems, the WENO-SIF scheme provided in this paper is computationally more efficient than the other WENO schemes.
Conclusion
To improve the shock-capturing capability and reduce the computational cost of the third-order WENO-JS scheme, in this paper we propose a new WENO scheme that is independent of the smoothness factor. The weight functions of WENO-SIF are segmented linear functions of the ratio of the sub-stencil fluxes, which contain a tunable parameter λ. These functions satisfy ω 0 (0) = 1 (ω 1 (0) = 0) and ω 0 (∞) = 0 (ω 1 (∞) = 1), which ensures that the present WENO scheme retains the ENO property. Numerical results show that the present WENO scheme can achieve the desired accuracy at high-order critical points and can further improve the resolution of WENO-JS by suppressing numerical oscillations near discontinuities, at a low additional computational cost.
"Engineering",
"Physics"
] |
Stringy correlations on deformed $ AdS_{3}\times S^{3} $
In this paper, following the basic prescriptions of Gauge/String duality, we perform a strong coupling computation of the \textit{classical} two point correlation between \textit{local} (single trace) operators in a gauge theory dual to the $ \kappa $-deformed $ AdS_{3}\times S^{3}$ background. Our construction is based on the prescription that relates every local operator in the gauge theory to (semi)classical string states propagating within the \textit{physical} region surrounded by the holographic screen in deformed $ AdS_3 $. In our analysis, we treat the string as a point-like object located near the physical boundary of the $ \kappa $-deformed Euclidean Poincare $ AdS_{3} $ and as an extended object with non-trivial dynamics associated to $ S^{3} $. It turns out that in the presence of small background deformations, the usual power law behavior associated with two point functions is suppressed exponentially by a non-trivial factor, which indicates a faster decay of two point correlations at larger separations. On the other hand, in the limit of large background deformations ($ \kappa \gg 1 $), the corresponding two point function saturates. In our analysis, we also compute finite size corrections associated with these two point functions at strong coupling. As a consistency check, we find perfect agreement between our results and earlier observations made in the context of vanishing deformation.
Overview and Motivation
The classic mathematical evidence for the existence of an integrable structure on both sides of the AdS 5 /CF T 4 duality [1] might be regarded as one of the major theoretical advancements of the past one and a half decades [2]- [61]. It turns out that, in the so-called planar limit, the dilatation operator associated with N = 4 SYM could be mapped to the Hamiltonian of an integrable spin chain in one dimension [5]. On the other hand, the integrable structure associated with the stringy side of the duality has been ensured by the existence of an infinite number of conserved quantities associated with the Lagrangian field equations in AdS 5 × S 5 [4].
During the past one and a half decades, the quest for an integrable deformation of AdS 5 × S 5 superstring theory has been one of the prime focuses of modern theoretical investigation [62]- [99]. Very recently, the novel discovery [73] of the one-parameter integrable deformation of the AdS 5 × S 5 superstring sigma model has drawn renewed attention due to its several remarkable features. At this stage one should note that the deformed sigma model [73] had been formulated in the presence of a real deformation parameter (η) such that the model exhibits two characteristic features: (1) the presence of a Lax connection, and (2) invariance under kappa symmetry. The kappa symmetry associated with the deformed superstring model turns out to be absolutely essential in order to ensure a type IIB supergravity background.
Soon after the discovery of this new class of integrable deformations [73], the corresponding deformed target spacetime metric was worked out by the authors of [75] where, in the so-called light cone gauge, they perform the perturbative (2 → 2) S-matrix computation in the Hamiltonian framework. It is also noteworthy that in their analysis the authors restricted themselves only to the bosonic sector of the full theory. One of the key outcomes of their analysis was the fact that, in the limit of large string tension, the S-matrix corresponding to the integrable q-deformed model was found to be in perfect agreement with the corresponding perturbative S-matrix computed for the η-deformed model once the various parameters of the q-deformed theory are related to the real deformation parameter η ∈ [0, 1) in the following manner, (1) The computation [75] further unveils the fact that the full 10D background corresponding to the N S − N S sector supports a metric together with a non-vanishing B field. The metric contribution to the bosonic sector of the Lagrangian could be divided into two pieces, namely the deformed AdS 5 and the deformed S 5 . The Wess-Zumino sector of the bosonic Lagrangian, on the other hand, sources the non-trivial B field in the target spacetime. Given the above relation (1), there are several interesting limits that one might wish to explore. For example, the limit η → 0 clearly reproduces the undeformed AdS 5 × S 5 background. On the other hand, in the limit η → 1 the original AdS 5 × S 5 gets mapped into dS 5 × H 5 , indicating that the corresponding world sheet theory is non-unitary [78]. Therefore the deformation acts as an interpolation between AdS 5 × S 5 and dS 5 × H 5 . In our analysis, while solving the corresponding stringy dynamics associated to deformed AdS 3 ⊂ AdS 5 , we focus particularly on these two limits in order to gain further insight into the interpolating regime.
Before we actually explain the purpose of our present analysis, it is worth emphasizing that the deformed model proposed in [73]- [74] leaves behind many open issues that need to be addressed properly. Here we elaborate on some of them. Due to the presence of a curvature singularity at a finite radial distance ∼ κ −1 , it turns out that strings are eventually confined within the region inside this radius. The vanishing of the beta function [80] somehow guarantees that such deformations might be allowed by string theory, although its implication is not very clear at this moment. As a consequence, the deformed target space metric corresponding to AdS 5 appears to have no boundary in the usual sense [78]. Instead one could think of a holographic screen [84]- [85] and solve the stringy dynamics within the region bounded by this holographic screen. It turns out that the region bounded by this holographic screen is the only allowed physical region in the bulk where the classical string solutions as well as the holographic correspondence make sense [84]- [85]. The notion of the usual boundary (at infinite radial distance) could, however, be recovered only in the limit of vanishing deformation. As a matter of fact, it turns out that the spacetime supersymmetry associated with the target spacetime is lost and, on top of it, the bosonic isometry associated with the undeformed 10D background gets q-deformed and/or hidden, reduced to a smaller subset. In other words, the original SO(2, 4) × SO(6) isometry is found to be broken down to its Cartan subgroup U (1) 3 × U (1) 3 [78], corresponding to shifts along various bosonic directions. As a natural consequence, the interpretation and/or implication of these broken symmetry generators for the properties of the dual gauge theory is not immediately clear. In other words, it is not known a priori how various gauge invariant operators, and in particular their correlation functions, would be modified under this reduced subset of symmetry generators. However, the symmetries associated with the deformed 10D background immediately suggest that the dual gauge theory should not exhibit conformal invariance or supersymmetry. Keeping these facts in mind, it seems quite urgent to build up the necessary mathematical framework that would eventually unveil the hidden symmetries associated with this mysterious dual gauge theory, at least in the regime of strong coupling. These are precisely the issues that we consider to be worthy of further investigation.
In order to address the above mentioned issues in a systematic way, in the present paper we carry out a classical computation of the correlation function [28]- [46] between heavy local operators in the gauge theory that are dual to classical spinning strings moving non-trivially over the κ-deformed background [78]. It is therefore a strong coupling computation from the perspective of the dual gauge theory. In our analysis, we consider only the bosonic sector of the full superstring theory [75]. Our analysis might be regarded as a straightforward generalization of the earlier proposal in the context of AdS 5 × S 5 superstrings [28], in that we compute two point correlations between single trace operators that are dual to semi-classical string states propagating over the κ-deformed AdS 5 × S 5 geometry [78]. In our analysis, we consider two types of operators in the dual gauge theory, namely the magnons and the spikes.
According to the methods developed in [28], the Polyakov action (corresponding to these semi-classical string states over the κ-deformed geometry) evaluated at the classical saddle point should correspond to the desired two point correlation between single trace operators in the dual gauge theory at strong coupling. For the case of the usual AdS 5 × S 5 superstrings, the same prescription provides the correct two point correlation between local operators in the gauge theory, where one could easily identify the corresponding classical conformal dimension with the energy of the stringy excitation in the bulk [29]. However, this is not a scenario that one should expect to hold true when the original bosonic isometry associated with the target spacetime is broken. Instead, it is quite natural to expect that the usual power law behavior [29] associated with these two point correlations would be modified in a non-trivial fashion.
Before we actually start describing the precise mathematical framework adopted in our computation, it is customary to mention that, instead of considering the full 10D background, we perform our analysis over a truncated target space of the full deformed geometry, namely the κ-deformed AdS 3 × S 3 [78]. As long as one is concerned only with the bosonic N S − N S sector [75] of the full superstring theory, the resulting metric corresponding to this truncated model turns out to be the direct sum of the two individual sectors, namely AdS 3 and S 3 . This picture no longer holds true as soon as one turns on RR fields 1 [79]. This 6D analogue of the full 10D background possesses two basic characteristic features, namely (1) the integrability of this model is ensured from the very outset, and (2) the B field that was originally present in 10D [78] vanishes. Moreover, it turns out that this κ-deformed AdS 3 × S 3 model admits an RG flow [78] (in the same sense as that of the two-parameter deformed O(4) sigma model) where, for a large value of the deformation parameter (κ → ∞), the theory flows to a UV fixed point, namely dS 3 × H 3 .
In our analysis, we consider two types of operators in the dual gauge theory, namely the magnons and the spikes [29], and we compute the associated two point correlations for each of these operators separately. As far as the bulk picture is concerned, we treat the string as a point-like object located near the physical boundary (the so-called holographic screen [84]- [85]) of the κ-deformed Euclidean Poincare AdS 3 and as an extended object moving non-trivially over the three-sphere (S 3 ). From our analysis, it turns out that the stringy fluctuations corresponding to the deformed S 3 are exactly solvable in the presence of generic background deformations (κ). However, it turns out to be extremely difficult to solve the same for generic κ-deformations associated to AdS 3 . Therefore, in our analysis, we choose to explore the corresponding stringy excitations (associated to the deformed AdS 3 ) both in the perturbative (0 < κ ≤ 1) and in the non-perturbative (κ ≫ 1) regime. Our analysis therefore clearly reveals an intuitive picture of the behavior of the two point function for intermediate values of the background deformation (κ) along the RG flow.
Considering the perturbative regime associated with AdS 3 , the corresponding strong coupling behavior of the two point function takes the following form, which clearly reveals that the usual power law fall-off [29] is exponentially suppressed by a non-trivial factor. Stated another way, the above formula (2) provides the small-deformation behavior of the two point function between local operators in a holographic RG flow. The exponential suppression in (2) has its origin in the stringy dynamics corresponding to the deformed Euclidean AdS 3 sector of the full theory. However, at this stage, it is worth emphasizing that the associated power law behavior in (2) (which is reminiscent of the usual power law fall-off in a CFT [29]) is exact (in the sense that we determine the coefficient ∆ κ exactly in terms of the corresponding background deformation (κ)) and has its origin in the stringy dynamics associated with the deformed S 3 . Therefore, considering the above facts together, one might treat (2) as a semi-perturbative expression for the two point function in the gauge theory. On the other hand, the non-trivial leading order contributions corresponding to the non-perturbative regime (η → 1, κ ≫ 1) have their source in the background deformations associated with the deformed S 3 sector of the full geometry. It turns out that the contribution from the deformed AdS 3 sector appears only at subleading order, which might be regarded as a consequence of the fact that the physical AdS 3 region allowed for strings eventually shrinks to zero in the corresponding limit. (Footnote 1: ...cation of type IIB 10D supergravity on T 4 . However, it turns out that for the supergravity background to be a consistent solution, the parameter cannot be arbitrary and has to take certain specific values in different limiting conditions.)
In other words, at leading order the deformed AdS 3 contribution is suppressed compared to the contribution associated with the deformed S 3 . It should also be kept in mind that this large deformation (κ → ∞) regime would correspond to an enhancement of the so-called unphysical domain, which has its root in the non-unitarity associated with the world sheet theory for superstrings in the curved background [78]. To summarize: (1) the two point function corresponding to small background deformations exhibits a faster fall-off than what is expected from the perspective of the usual CFT, and (2) it gradually saturates to a constant value for large background deformations. The organization of the rest of the analysis is the following: In Section 2, we solve the stringy dynamics associated with the κ-deformed AdS 3 × S 3 background. We use these solutions in Section 3 in order to compute the two point correlations between giant magnons at strong coupling. We also compute the effects of incorporating finite size corrections [30] into these correlation functions at strong coupling. We perform an identical analysis for spikes in Section 4. At this stage, it is also noteworthy that, in the limit of vanishing deformation, all our results match smoothly with the earlier findings of [29], where one could easily identify the entity ∆ κ=0 = ∆ = E string as the classical conformal dimension associated with local operators in the dual gauge theory. Finally, we conclude in Section 5.
Strings in deformed AdS
We start our analysis by considering the Polyakov action for open strings over the κ-deformed geometry 2 [78], where the individual metric coefficients can be formally expressed as follows, such that the NS-NS two-form vanishes during the process of consistent reduction from AdS 5 × S 5 [78]. Notice that here ϕ, θ and φ are the angular coordinates on the deformed S 3 . From (4), it is therefore quite evident that there is no mixing between the coordinates of AdS 3 and those corresponding to S 3 . Hence we can analyze them separately.
κ-deformed Euclidean Poincare AdS 3
Our goal in this analysis is to solve the (point particle) dynamics associated with the Polyakov action corresponding to open string configurations over the curved background (4). We perform our analytic computations with the choice of conformal gauge conditions for the Polyakov action. In our analysis, we treat the string as a point-like object located near the holographic screen [84]- [85] of the κ-deformed Euclidean Poincare AdS 3 and thereby ignore fluctuations on the world sheet of the string. Before we proceed further, it is also noteworthy that our results are perturbative, as we retain terms only up to leading order in the background deformations.
The deformed AdS 3 sector of the full spacetime can be formally expressed as follows; implementing a suitable change of variables finally yields the metric (8). Clearly, the metric (8) is expressed in the so-called global coordinates. However, for the sake of our present calculation, we need to re-express (8) in the Euclidean Poincare coordinates [84]- [85], which corresponds to Wick rotating the real time axis, t → it.
In order to proceed further, we make a convenient choice of coordinates; using (9), we rewrite the Euclidean AdS 3 as (10). Next, we make a further substitution into (10), and finally we perform another set of coordinate transformations, which precisely ends up giving rise to the so-called Euclidean Poincare AdS 3 (14) associated with non-trivial κ-deformations; the singularity of the string metric, originally present at the radial distance κ −1 [78], persists here. On top of this, from the structure of the above singularity, it is indeed evident that the maximum spatial volume that one could associate with the holographic screen [84]- [85] cannot be arbitrary; in fact it is fixed by the corresponding location of the screen at a fixed radial distance (z = z B ) in the bulk, and vice versa. In other words, if z = z B is the radial location of the holographic screen in the bulk, then the maximal spatial region A B that one could associate with this screen (for generic background deformation (κ)) is fixed accordingly, with z B ≤ z < ∞. In other words, the gauge invariant (local) operators as well as their correlation functions in the dual gauge theory are defined only within the spatial volume A ≤ A B associated with the holographic screen. Clearly, the area associated with the holographic screen shrinks to zero (A B → 0) in the limit of large background deformations (κ ≫ 1). On the other hand, the volume becomes infinitely large in the limit κ → 0. Finally, considering all the previous arguments, it should be clear by now that the volume A = A B can be regarded as the minimal spatial region that one could associate with a z = const. hypersurface corresponding to a given background deformation (κ) in the bulk.
Perturbative solutions
Our next task is to substitute (14) into the Polyakov action and solve the corresponding dynamics on the world sheet. In order to solve these fluctuations, we make the following ansatz for the coordinates on the world sheet [29], which finally results in the Polyakov action of the form (19), whose overall prefactor is the effective string tension [78] associated with the deformed background.
In order to proceed further, we first note down the equations of motion associated with the coordinates on the world sheet, namely z(τ ) and x a (τ ), where we combine the remaining variables into a single variable x a = {x 0 = t, x 1 = x}; these yield the set of equations (20), perturbative in the background deformations, where the functions Γ(τ ) and Ξ a (τ ) take definite forms in terms of the world-sheet fields. (Here a prime denotes a derivative with respect to τ .) In order to solve (20) perturbatively in the deformation parameter (κ), we consider the expansion (22) for the variables. Substituting (22) into the equations of motion (20), we first note down the zeroth order equations (23). In order to solve (23), we first note the relation (24); substituting (24) into (23), we finally solve the zeroth order equations,
which thereby correspond to a specific parametrization of geodesics in AdS 3 [28]. Our next task is to use these zeroth order solutions in order to obtain the leading order corrections due to the background deformations (κ). The corresponding equations at leading order turn out to be (26), where entities like Γ (0) (τ ) and Ξ (0) (τ ) comprise all the zeroth order solutions in κ. A straightforward computation yields (27); substituting (27) into (26), one finds (28). The first equation in (28) can be schematically expressed in terms of the operator D τ = ∂ τ (cosh 2 βτ ∂ τ ), and its solution can be expressed in terms of a Green's function, subject to an appropriate condition. Let us now focus on the second equation in (28). As in the previous example, this equation can also be expressed schematically, and its solution can be written in terms of the corresponding Green's function G (x) (τ, τ ′ ), which satisfies the associated Green's function equation together with some specific boundary conditions that will be discussed below 4 . Therefore, the complete set of solutions up to leading order in the deformation turns out to be (35). Substituting (35) into (19) and retaining terms up to leading order in the deformation (κ), we find that the full Polyakov action can be formally expressed as the sum of the usual on-shell piece without any deformation [28] and the contribution to the on-shell action sourced by the background deformations; the latter involves a function that yields the first non-trivial correction to the Polyakov action in AdS 3 in the presence of the background κ-deformations. Note that, in order to arrive at the equations of motion, we had deliberately dropped the boundary terms (associated with a constant-τ surface) in the Polyakov action, which thereby invokes certain boundary conditions for the fields on the world sheet. We are now in a position to implement these boundary conditions [28]. The first set of boundary conditions that we implement is the following [28]; the second set of boundary conditions [28] involves an appropriate UV cutoff (in the presence of background deformations) chosen very close to z B , together with the requirement that the bound x f 2 ≪ A B is satisfied. Considering the zeroth order solution for β (≈ (2/s) log(x f /ε̃), where ε̃ (≪ 1) is the usual UV cutoff without background deformations [28]), it is in fact quite intuitive to note that, to leading order in the deformations, the boundary contribution vanishes at the end points of the time evolution.
Using (42), it is now quite straightforward to note that z(±s/2) = 1/ cosh β(±s/2), where we can ignore the subleading contributions as they appear at fourth order in the perturbative expansion. Therefore, this implies that, up to leading order in the deformation,
A note on large κ solutions
In this Section, we study the equations of motion corresponding to large values of the background deformations (κ ≫ 1). In the presence of large background deformations, the corresponding Polyakov action (19) takes the following form, and the corresponding leading order equations of motion turn out to be (46). The solution corresponding to the above set of equations (46) can be formally expressed accordingly; therefore, in the non-perturbative (κ ≫ 1) regime, the leading order contribution to the on-shell action vanishes. Let us now go beyond this trivial regime and consider corrections at the next subleading order in large κ. The corresponding action takes the following form; considering this, it is quite straightforward to notice (without solving any of the equations of motion explicitly) that this contribution is vanishingly small (relative to the full on-shell action S AdS 3 ×S 3 ) compared to the corresponding contribution appearing from S 3 in the large κ (≫ 1) limit (as we shall see shortly). In other words, in the non-perturbative regime (κ ≫ 1), the dynamics associated with the two point correlation is largely determined by the corresponding partition function associated with the three-sphere (S 3 ).
Solutions in S 3
We now consider the dynamics of strings on S 3 . In order to proceed further, we choose the following ansatz [29], with ς = aτ + bσ. With this choice, we essentially confine ourselves to the subspace S 2 of the full three-sphere. We first note down the equation of motion corresponding to ϕ(ς), which yields ∂ ς [ sin 2 θ/(1 + κ 2 cos 2 θ) (aν + (a 2 − b 2 )g ′ (ς)) ] = 0, (52) where the prime denotes a derivative with respect to the variable ς.
Integrating the above equation (52) once we find, where C is some integration constant. We now focus our attention towards computing the equation of motion corresponding to θ(ς). In order to do that, instead of considering the dynamics directly, we turn our attention towards the first integrals of motion, namely the Virasoro constraints of the theory [26], A straightforward computation yields the following, As a next step of our analysis, we factorize (55) as, where, Notice that here θ max and θ min correspond to extremal values of θ such that θ ′ = 0. It turns out that the size of the magnon and/or spike in the dual gauge theory could be estimated by means of θ max [29]. In our analysis, we first consider the infinite size limit associated with these single trace operators in the dual gauge theory. This infinite size limit corresponds to setting sin θ max = 1 in the bulk. As far as the dual field theory is concerned, this infinite size limit corresponds to magnons with large angular momenta together with finite angular difference (or momentum), and to spikes with large angular difference between their two end points together with finite angular momentum [29].
At this stage, it is noteworthy to mention that the two point correlations between these heavy states in the dual gauge theory should not follow the usual power law behavior [29] of a CFT; in fact there should be a clear deviation from the usual power law behavior, indicating that the original conformal symmetry is broken. Therefore it remains an interesting direction to explore how this two point correlation behaves in the presence of an integrable one parameter background deformation [73]. In the first part of our analysis, we precisely address this issue by analytically computing the two point correlations between two heavy magnon states. In the second part of our analysis, considering the large size limit, we compute the two point correlation function between single trace operators dual to spiky constructions over the deformed background. We also discuss the finite size corrections to these correlation functions in each of the above examples.
The large size limit
Both the infinite as well as the finite size limit for magnons correspond to setting ∂ σ ϕ = 0 at θ = θ max [29]. For θ max = π/2, this implies a large value for the angular momenta at a finite angular difference [29]. Using (53), this naturally implies,
The infinite size limit for magnons corresponds to setting C = aν [29]. Substituting (58) into (57), this further yields, Clearly, in the limit κ → 0 one recovers the results corresponding to the giant magnon solutions constructed over the undeformed background [29]. Using (58) and (59) we can rewrite (56) as, The purpose of the present analysis is to perform an explicit analytic computation of the two point correlations between two heavy magnon states in the classical limit [29]. It turns out that in the classical limit, the path integral is dominated by the Polyakov action evaluated at the classical saddle point [28]. Following the original prescription [28], the classical Polyakov action (corresponding to S 3 ) for giant magnon solutions turns out to be, S where Π θ and Π ϕ are the conjugate momenta and the dot corresponds to the derivative w.r.t. τ . A straightforward computation yields, Using (53) and (60), we finally obtain, (1 + κ 2 cos 2 θ) where the entity Θ (M ) could be formally expressed as, θ min sin θ 1 + 2κ 2 ab cos 2 θ (a+b) 2 cos θ(1 + κ 2 cos 2 θ) dθ sin 2 θ − sin 2 θ min (64) and α is some numerical prefactor. Interestingly enough, and unlike the AdS 3 example, the above expression (63) is exact in terms of the background deformations. Finally, it is also noteworthy to mention that in the limit κ → 0 one could trivially convert the θ integral into an integral over σ, which finally yields Θ (M ) = 1. Combining (36) and (63), we finally obtain, where the entity Z(s) could be formally expressed as, where the functional form of Q(s) could be uniquely fixed by means of the corresponding saddle point equation evaluated at the classical saddle point. As a next step of our analysis, we determine the classical saddle point s = s̄, which is achieved by varying the action (65) w.r.t. the parameter s; this yields the following differential equation, In the following, we work out the saddle point solutions corresponding to (67) up to leading order in the deformation parameter. In other words, our solutions are valid to ∼ O(κ 2 ). These solutions turn out to be, where q is the integration constant. Substituting (68) into (65), the semi-classical partition function evaluated at the classical saddle point turns out to be, which in the limit κ → 0 precisely matches the standard formula corresponding to the semi-classical string partition function estimated over the AdS 5 × S 2 background [29]. Here, the entity could be thought of as reminiscent of the classical conformal dimension associated with heavy single trace operators in the dual gauge theory. In the limit κ → 0 this precisely matches the classical conformal dimension associated with single trace operators dual to long stringy solutions in the bulk. As a consistency check of our analysis, below we show that in the appropriate limit (κ → 0), our result trivially reduces to that of the giant magnon dispersion relation at strong coupling [19], where one could easily identify the entity ∆ κ=0 as the classical energy associated with the stringy excitation in the bulk [29].
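For orientation, the semiclassical prescription invoked here can be summarized schematically in our own notation (the precise normalization and measure follow [28]):
\[
\langle \mathcal{O}(x_f)\,\mathcal{O}(0)\rangle \;\sim\; e^{-S_{\rm cl}(\bar{s})}, \qquad \frac{\partial S_{\rm cl}(s)}{\partial s}\bigg|_{s=\bar{s}} = 0,
\]
where S cl is the full (AdS 3 plus S 3 ) Polyakov action evaluated on the classical trajectory connecting the two boundary insertion points separated by x f , and s̄ is the modular (saddle point) parameter fixed by the extremization above.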
We first compute the angular momentum, (71) which clearly diverges in the limit θ → π/2. However, the difference where the function F(κ 2 , θ) could be formally expressed as, It turns out that in the limit κ → 0 this is finite, as expected [19]. On the other hand, the angle difference between the two end points of the string turns out to be, which trivially reduces to, in the limit κ → 0. In summary, the dispersion relation [19], [29], is trivially satisfied in the limit κ → 0. However, the above relation (77) does not hold in the presence of background deformations and it receives nontrivial corrections, namely, where, together with the fact that, Therefore, the excitations associated with the gauge theory dual to the κ-deformed background are not magnons in the usual sense. As a natural consequence of this, it is also not clear whether the corresponding spin chain description holds for the dual gauge theory in the limit of weak coupling.
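For reference, the κ → 0 statement above refers to the standard Hofman-Maldacena giant magnon dispersion relation, which in the undeformed case takes the well known form
\[
E - J \;=\; \frac{\sqrt{\lambda}}{\pi}\,\Big|\sin\frac{\Delta\varphi}{2}\Big| ,
\]
where Δϕ is the angle difference between the two end points of the string, identified with the magnon momentum on the gauge theory side. The κ-dependent corrections discussed above measure the failure of precisely this relation once the background deformation is switched on.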
We now compute the two point function between magnon-like excitations in the dual gauge theory. Following the original prescription [28], the corresponding two point correlation between the heavy magnon-like states finally turns out to be, where the exponential suppression in the two point function (81) has its origin in the point particle dynamics within the deformed AdS 3 sector. On the other hand, the associated power law behavior is exact in the deformation and its origin lies entirely in the dynamics of strings over the deformed sphere. It is therefore quite tempting to claim that (81) is a semi-perturbative result in itself. The above relation (81) is also quite intuitive in the sense that for a given separation (δx = x f ), the leading contribution to the two point correlations between heavy magnon-like states is exponentially suppressed compared to that of their cousins in the gauge theory (CFT 2 ) dual to undeformed AdS 3 ×S 3 . In other words, in the presence of background deformations, the leading contribution to the two point correlation function between two heavy operators decays at a rate faster than in the original undeformed theory.
Before we finish our discussion, it is worth specifying the corresponding behavior of the two point correlation in the non-perturbative regime, namely for κ ≫ 1. It turns out that in the non-perturbative (κ ≫ 1) regime, where, sin ε cos 2 ε and |ε| ≪ 1. The above equation (82) clearly exhibits the fact that in the non-perturbative regime (κ ≫ 1), the dominant contribution to the two point correlation appears from the sphere partition function. This might be regarded as a consequence of the fact that in the limit κ ≫ 1, the allowed physical region for strings moving in AdS 3 eventually shrinks to zero [78].
Using (82), the two point correlation function associated with giant magnon-like states in the dual gauge theory could be formally expressed as,
Finite size corrections
Having done our explorations on giant magnon two point correlations, the purpose of this Section is to pursue the same computation in the finite charge limit. In other words, we compute the two point correlation function between single trace operators of finite length. In order to do that, the first thing we consider is to set θ max ≠ π/2, which in turn implies that, such that C ≠ aν. As a consequence of this we note, Our next task would be to compute the on-shell action (62) corresponding to the deformed S 3 , which for the present case yields, where the entity (θ) could be formally expressed as, Two points are to be noted here. First of all, the above expression (86) is an exact result in terms of the background deformation, and secondly it reproduces the correct result [30] in the limit κ → 0.
As a next step, we write down the total action, Following the same steps as earlier and evaluating (88) at the classical saddle point we find, where the entity becomes exactly the classical conformal dimension associated with finite sized single trace (magnon) operators in the limit κ → 0 [30]. Finally, we note down the corresponding two point correlation function between single trace operators of finite size, which takes the following form, Therefore, compared with the previous example, the only difference that we encounter here is that the entities ∆ κ and ν are now corrected due to the finite size effects. Like in the previous section, we now focus on the large κ (≫ 1) behavior of the two point function in the limit of finite size corrections, where we note, where, C Using (92), the finite size corrections to the two point correlation function in the limit of large background deformations could be formally expressed as,
Spikes
The large size limit
Having done our computations on two point correlations between single trace (magnon) operators, we now turn our attention towards the computation of the two point correlation function between operators dual to spiky string solutions over the κ-deformed background.
We first consider the large size limit, which corresponds to a large angular difference between the two end points of the spike [29]. The corresponding boundary condition that one uses is ∂ τ ϕ = 0 at θ = θ max [29], which for the present case yields, The large size limit corresponds to setting sin 2 θ max = 1, which in turn implies C = b 2 ν/a. Using this value for the constant we find, Finally, the on-shell action (62) corresponding to the deformed S 3 turns out to be, where the entity ξ(θ) could be formally expressed as, It is now quite trivial to check that in the limit κ → 0 one finds, which, upon substitution into (96), clearly reproduces the previous results of [29]. Following steps similar to those for the magnons, the full Polyakov action turns out to be, Evaluating (99) at the classical saddle point, the two point correlation function between single trace operators in the dual field theory turns out to be, where the entity ∆ κ associated with spikes turns out to be, As expected, the qualitative behavior of the two point correlation (100) does not change compared with the previous example with magnons. On the other hand, the only difference between magnons and spikes appears to be in the coefficient ∆ κ , which in the limit κ → 0 precisely matches the classical conformal dimension associated with the single trace operator in the dual gauge theory. Like in the case of magnons, we now compute the two point function in the limit of large background deformations. A straightforward computation yields the following, ν sin ε cos 2 ε . Using (102), it is now indeed trivial to compute the corresponding two point function, which turns out to be,
Finite size corrections
Like in the case of magnons, we consider θ max ≠ π/2 in order to explore the effects associated with the finite size corrections on the two point correlation function between single trace operators in the dual gauge theory.
With this choice in hand, we note, together with the expression for, such that the coefficient, Using (105), the on-shell action (62) corresponding to the deformed S 3 turns out to be, where the entity ξ F (θ) could be formally expressed as, Finally, following the same steps as in the previous examples, the two point correlation function between single trace operators takes the following form, where the entity ∆ κ corresponding to operators dual to spiky solutions turns out to be, To conclude our discussion on spikes, following our previous methodology, we compute the two point function in the limit of large background deformations (κ ≫ 1), which for the present case yields, where, C (S)
Summary and final remarks
We conclude our paper by mentioning some of its possible future extensions that one might wish to explore. Before going into that, we first summarize the entire analysis performed so far. The goal of the present paper was to explore the underlying symmetries associated with the mysterious dual gauge theory description corresponding to the κ-deformed AdS 3 ×S 3 background at strong coupling. We address this issue through a systematic computation of two point correlations between local operators at strong coupling. Our analysis is based on the basic principle [28] that relates every local operator in a gauge theory to (semi)classical string states propagating over the curved geometry.
In order to compute the two point function at strong coupling, we first solve the corresponding stringy dynamics within the physical region bounded by the so called holographic screen [84]- [85] in deformed AdS 3 . In our analysis, we consider two classes of local operators in the dual gauge theory, namely the magnons and the spikes. It turns out that one could solve the dynamics for strings quite exactly in the deformed S 3 . However, it does not quite work as well for the deformed AdS 3 sector. Considering both of these scenarios together, we are finally able to probe the behavior of the two point function corresponding to two extremal limits of background deformations, namely the (semi)perturbative as well as the non-perturbative (κ ≫ 1) regime. Our results could therefore be extrapolated further towards the interpolating region in order to have a full qualitative understanding of the behavior of the two point function for generic background deformations.
Our analysis reveals that in the limit of small background deformations (0 < κ ≤ 1) associated with the deformed AdS 3 sector of the full background geometry, the corresponding two point correlation function between single trace operators in the dual gauge theory is exponentially suppressed, and as a result it decays at a rate faster than what is expected in the usual framework of a CFT [29]. This indeed confirms that the associated conformal invariance in the dual gauge theory is explicitly broken and also clarifies all the previous arguments [78] in favor of this observation. One should further note that for strings attached to the holographic screen the correlation function eventually vanishes due to the large exponential suppression.
Considering the other limit, namely κ ≫ 1, we observe that at leading order in the background deformations the corresponding two point function receives contributions from S 3 , while the contribution from AdS 3 appears only at subleading order and is thereby suppressed compared to that of the sphere contribution. This eventually results in the saturation of the corresponding two point function at large background deformations.
Before we conclude, it is worth emphasizing that answers to various doubts and/or confusions associated with η-deformed models are not yet up to the mark. It remains a matter of debate whether the deformed sigma model leads to any type IIB string theory after all. In this paper, instead of addressing this issue, we choose a rather different question to address, namely whether the usual notion of Gauge/String duality makes sense for classical target space solutions associated with the η-deformed model. As far as the two point correlation (between heavy operators) is concerned, we find sensible answers. However, many questions are yet to be addressed that one might wish to explore in the future: (1) It would be really nice to uplift the present calculation to the κ-deformed AdS 5 × S 5 superstring model in the presence of a non vanishing background B field, (2) A systematic computation of the three point correlations between local operators might shed further light on the symmetries associated with the mysterious dual gauge theory description at strong coupling. (3) The present analysis could also be performed in the presence of Lax pairs. It would also be nice to compute two point functions for backgrounds without integrable deformations, for example black hole geometries where the dual field theory exhibits some suitable IR cutoff. We leave these issues for future investigations. | 9,959.6 | 2017-02-05T00:00:00.000 | [
"Physics"
] |
BaTiO
Dielectric capacitors with ultrafast charge-discharge rates are extensively used in electrical and electronic systems. To meet the growing demand for energy storage applications, researchers have devoted significant attention to dielectric ceramics with excellent energy storage properties. As a result, awareness of the importance of characterizing the pulsed discharge behavior of dielectric ceramics has been raised. However, the temperature stability of the pulsed discharge behavior, which is significant for pulsed power applications, is still not given the necessary consideration. Here, we systematically investigate the microstructures, energy storage properties and discharge behaviors of nanograined (1-x)BaTiO 3 -x NaNbO 3 ceramics prepared by a two-step sintering method. The 0.60BaTiO 3 -0.40NaNbO 3 ceramics with relaxor ferroelectric characteristics possess an optimal discharge energy density of 3.07 J cm -3 , a high energy efficiency of 92.6%, an ultrafast discharge rate of 39 ns and a high power density of 100 MW cm -3 . In addition to stable energy storage properties in terms of frequency, fatigue and temperature, the 0.60BaTiO 3 -0.40NaNbO 3 ceramics exhibit temperature-stable power density, thereby illustrating their significant potential for power electronics and pulsed power applications.
INTRODUCTION
Dielectric capacitors, as fundamental components in high-power energy storage and pulsed power systems, play an important role in many applications, including hybrid electric vehicles, portable electronics, medical devices and electromagnetic weapons, due to their high power density, ultrafast charge-discharge rates and long lifetimes [1][2][3][4][5][6] . However, most current commercial polymer dielectric capacitors and multilayer ceramic capacitors (MLCCs) possess somewhat low energy densities of < 1-2 J cm -3 , which results in them occupying relatively large volumes and/or weights in devices [7][8][9][10] . The development of third-generation semiconductors and the need for device miniaturization have resulted in an urgent demand for high-energy-density dielectric capacitors [1,11] .
Under an applied voltage, the dielectric materials in dielectric capacitors polarize to store energy [1,12,13] . Their energy storage properties can be calculated from the polarization-electric field (P-E) loops, namely W c = ∫ E dP (integrated from 0 to P max ), W d = ∫ E dP (integrated from P r to P max ) and η = W d /W c , where W c and W d are the charge and discharge energy density, respectively, P max and P r are the maximum and remnant polarization, respectively, and η is the energy efficiency [14][15][16] . Among all dielectric materials, relaxor ferroelectrics with high P max , low P r , high breakdown strength (E b ) and slim P-E loops have been investigated extensively for their excellent energy storage properties [17][18][19][20][21][22] . The polar nanoregions in relaxor ferroelectrics can switch rapidly under an applied electric field, which significantly reduces loss and results in high η [23][24][25][26][27][28] . In addition, excellent fatigue and temperature stability of the pulsed discharge behavior and energy storage properties are highly desirable for dielectric capacitors operating in harsh environments, e.g., aerospace and oil-well drilling [29][30][31][32] . Many strategies have been utilized to enhance the temperature stability of dielectric materials in recent years, including multiscale optimization [27] , composite strategy design [28] , unmatched temperature range design [33] and special sintering methods [34] . However, the temperature stability of the pulsed discharge behavior is not given sufficient attention in current research into dielectric materials.
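As a rough illustration of how W c , W d and η are extracted from a measured P-E loop, the following sketch numerically integrates the charging and discharging branches of the loop; the arrays standing in for measured data are hypothetical, and units of MV m -1 for E and µC cm -2 for P are assumed (so that the product E·P corresponds to 0.01 J cm -3 ):

```python
import numpy as np

def energy_storage_from_pe_loop(E_charge, P_charge, E_discharge, P_discharge):
    """Estimate W_c, W_d (J/cm^3) and efficiency eta (%) from a unipolar P-E loop.

    W = integral of E dP along the charging (0 -> P_max) and discharging
    (P_r -> P_max) branches.  With E in MV/m and P in uC/cm^2 the product
    E*P equals 0.01 J/cm^3, hence the conversion factor below.
    """
    W_c = 0.01 * np.trapz(E_charge, P_charge)             # charge energy density
    W_d = 0.01 * abs(np.trapz(E_discharge, P_discharge))  # discharge energy density
    eta = 100.0 * W_d / W_c                               # energy efficiency
    return W_c, W_d, eta

# Toy loop: P_max = 15 uC/cm^2, P_r = 1 uC/cm^2, E_max = 38 MV/m (hypothetical)
P_up = np.linspace(0.0, 15.0, 200)
E_up = 38.0 * (P_up / 15.0) ** 1.2
P_dn = np.linspace(15.0, 1.0, 200)
E_dn = 38.0 * ((P_dn - 1.0) / 14.0) ** 1.1
print(energy_storage_from_pe_loop(E_up, P_up, E_dn, P_dn))
```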
In this study, we prepare nanograined (1-x)BaTiO 3 -xNaNbO 3 ceramics, which possess relaxor ferroelectric characteristics with a good P-E relationship (high P max , low P r and slim P-E loops) and high E b , using a solid-state reaction method. The 0.60BaTiO 3 -0.40NaNbO 3 ceramics exhibit an optimal W d of 3.07 J cm -3 and a high η of 92.6% under 38.1 MV m -1 at ambient temperature. Stable energy storage properties in terms of frequency (0.1-100 Hz), fatigue (10 6 cycles) and temperature (25-120 °C) are also achieved. Moreover, the ceramics possess an ultrafast discharge rate of 39 ns and a high power density of 100 MW cm -3 . The variation of the power density is less than 15% from 25 to 140 °C. All these results suggest that 0.60BaTiO 3 -0.40NaNbO 3 ceramics are ideal candidates for energy storage applications in pulsed power systems.
MATERIALS AND METHODS
(1-x)BaTiO 3 -xNaNbO 3 ((1-x)BT-xNN) dielectric ceramics with x = 0.35, 0.40, 0.45 and 0.50 were prepared through a conventional solid-state method. According to the stoichiometric ratio of the (1-x)BT-xNN ceramics, analytical-grade BaCO 3 , TiO 2 , Na 2 CO 3 and Nb 2 O 5 powders, as the raw materials, were weighed and ball milled with ethanol for 24 h. The mixed powders were then dried at 80 °C and calcined at 950-1030 °C for 5 h in closed alumina crucibles to avoid the volatilization of Na. Afterward, the calcined (1-x)BT-xNN powders were ground with a polyvinyl butyral (PVB, 10 wt.%) solution and uniaxially pressed into cylinders with a diameter of 8 mm and a thickness of 0.5 mm under a pressure of 2 MPa. The cylinders were heated at 600 °C for 5 h to remove the PVB binder and then sintered with a two-step sintering method [35,36] (all samples were heated to 1250-1350 °C for 1-10 min and then cooled down to 1100-1150 °C for 3-5 h).
The ambient-temperature X-ray diffraction profiles of the (1-x)BT-xNN ceramics were obtained using a Rigaku 2500 X-ray diffractometer (Rigaku, Tokyo, Japan) with Cu Kα radiation and λ = 1.5418 Å. The surface microstructures of the ceramics after thermal etching at 1050 °C for 0.5 h were characterized using scanning electron microscopy (SEM, MERLIN VP Compact, Zeiss Ltd., Germany) at 15 kV. To measure the ferroelectric properties and pulsed discharge behaviors, the compact ceramics were polished down to 180-200 µm in thickness and then gold electrodes with a radius of 1.5 mm were sputtered on both surfaces. The P-E loops were measured using a TF ANALYZER 2000E ferroelectric measurement system (aixACCT Systems GmbH, Aachen, Germany) at different frequencies (0.1-100 Hz) and various temperatures (25-140 °C). The dielectric properties were measured over a frequency range of 1 kHz to 1 MHz and a temperature range of -150 to 300 °C using an impedance analyzer (E4980A, Agilent Technologies, USA). The overdamped and underdamped pulsed discharge behavior was measured using a charge-discharge platform (CFD-001, Gogo Instruments Technology, Shanghai, China) with a resistor-capacitance load circuit. More details regarding the resistor-capacitance circuit measurement system are given in Supplementary Figure 1.
RESULTS AND DISCUSSION
The ambient-temperature X-ray diffraction profiles of the (1-x)BT-xNN ceramics are displayed in Figure 1. All samples exhibit typical perovskite structures with traces of a Ba 6 Ti 7 Nb 9 O 42 secondary phase (PDF#47-0522). The approximate amounts of Ba 6 Ti 7 Nb 9 O 42 phases are displayed in Supplementary Table 1 and are less than 5% in all (1-x)BT-xNN ceramics. The (200) peaks between 45° and 46° without splitting suggest that all samples are mainly pseudocubic phases at room temperature. The cell parameters of (1-x)BT-xNN ceramics decrease with increasing NN content [Supplementary Table 2], which is mainly because the radius of Na + (1.39 Å) is smaller than that of Ba 2+ (1. The temperature-dependent (-150 to 300 °C) dielectric properties of the (1-x)BT-xNN ceramics were measured at various frequencies [ Figure 3] and indicated prototypical relaxor ferroelectric characteristics. The dielectric constants of all the (1-x)BT-xNN ceramics at room temperature are ~1000-1200 and the Ba 6 Ti 7 Nb 9 O 42 phases are considered to have paraelectric characteristics. Hence, the Ba 6 Ti 7 Nb 9 O 42 phases may not significantly affect the dielectric characteristics of the ceramics. It can be found that the dielectric constant and the Curie temperature increase with increasing NN content. All the (1-x)BT-xNN ceramics exhibit low dielectric loss of less than 0.012 between -100 and 200 °C. Generally, the modified Curie-Weiss law, 1/ε -1/ε m = (T -T m ) γ /C, is utilized to describe the dielectric characteristics of relaxor ferroelectrics, where ε and ε m are the dielectric constant and maximum value of ε, respectively, T and T m are the corresponding temperatures, C is the Curie constant and γ is used to describe the degree of diffuseness. The γ value varies from one for typical ferroelectrics to two for ideal relaxor ferroelectrics [24,37] . The fitted γ values of all the ceramics are shown in Figure 4 and are between 1.686 and 1.766 at 1 MHz, thereby manifesting strong relaxation behavior. This strong relaxation behavior causes the (1-x)BT-xNN ceramics to respond rapidly under an applied electric field, resulting in high η. The P-E loops of the (1-x)BT-xNN ceramics are shown in Figure 5A, with all ceramics exhibiting slim P-E loops. Among these, the 0.55BT-0.45NN ceramics possess the largest P max and P max -P r values [ Figure 5B], leading to high W d . However, due to the lower P r , relatively larger P max -P r value and the highest E b [ Figure 5B and C], a W d of 3.07 J cm -3 and a high η of 92.6% are achieved in the 0.60BT-0.40NN ceramics at 38.1 MV m -1 , which are the optimum energy storage properties among all the (1-x)BT-xNN ceramics at 25 °C [ Figure 5D]. Figure 6 exhibits the energy storage properties as a function of the applied electric field. All BT-NN ceramics possess high E b between 32.7 and 38.1 MV m -1 and high η between 87.5% and 93.0%. The corresponding current-field curves of the (1-x)BT-xNN ceramics are shown in Supplementary Figure 5, confirming the high η. Noticeably, the η of the 0.60BT-0.40NN ceramics decreases slightly with increasing E and shows a slight variation of < 4% within the whole electric field range tested, which is conducive to high η energy storage applications.
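The diffuseness exponent γ quoted above follows from fitting the modified Curie-Weiss law to the permittivity data above T m . A minimal sketch of such a fit, with synthetic data standing in for the measured ε(T) curve, could look as follows:

```python
import numpy as np
from scipy.optimize import curve_fit

def modified_curie_weiss(T, gamma, C, T_m, eps_m):
    """Modified Curie-Weiss law: 1/eps - 1/eps_m = (T - T_m)**gamma / C."""
    return 1.0 / (1.0 / eps_m + (T - T_m) ** gamma / C)

# Hypothetical permittivity data above T_m (synthetic stand-in for a measurement)
T_m_true, eps_m_true = 60.0, 2400.0
T = np.linspace(T_m_true + 5.0, T_m_true + 120.0, 60)
eps = modified_curie_weiss(T, 1.7, 1.0e5, T_m_true, eps_m_true)
eps *= 1.0 + 0.01 * np.random.default_rng(0).standard_normal(T.size)  # 1% noise

# Fit only gamma and C; T_m and eps_m are read off the measured permittivity peak
model = lambda T, gamma, C: modified_curie_weiss(T, gamma, C, T_m_true, eps_m_true)
(gamma_fit, C_fit), _ = curve_fit(model, T, eps, p0=(1.5, 5.0e4))
print(f"fitted diffuseness exponent gamma = {gamma_fit:.3f}")
```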
Given that the stability of the energy storage properties of dielectric materials is crucial in practical applications, the frequency, fatigue and temperature stabilities of the energy storage properties of the 0.60BT-0.40NN ceramics are characterized in Figure 7. The P max of the 0.60BT-0.40NN ceramics only decreases from 15.1 to 14.3 µC cm -2 with increasing frequency from 0.1 to 100 Hz, while the P r remains almost unchanged [ Figure 7A]. Hence, the variations in W d and η are less than 6.0% and 1.2%, respectively [ Figure 7B]. The stable frequency-dependent energy storage properties are realized because the polar nanoregions can switch rapidly under the applied electric field [38] . To evaluate the fatigue stability, the unipolar P-E loops under 15 MV m -1 are characterized for 10 6 cycles [ Figure 7C]. The P-E loops show no noticeable change and the variations in W d and η are less than 0.6% and 0.7%, respectively [ Figure 7D]. Figure 7E exhibits the unipolar P-E loops measured under 20 MV m -1 at various temperatures. It can be found that the P max of the 0.60BT-0.40NN ceramics follows the trend of ε and gradually decreases with increasing temperature. The reduction in P max results in a decrease in W d , while η remains above 90% up to 120 °C. Figure 7F shows the energy storage properties (W d and η) of the 0.60BT-0.40NN ceramics with increasing temperature from 25 to 120 °C, revealing good temperature stability.
In practical applications, dielectric capacitors charge and discharge on the microsecond or nanosecond timescale [1] . The W d and η calculated from the P-E loops cannot reflect the true energy storage properties [39] , so a resistor-capacitance circuit is constructed to evaluate the discharge behavior of the 0.60BT-0.40NN ceramics. Figure 8A displays the overdamped pulsed discharge electric current-time (I-t) curves at various E values. The corresponding W d can be calculated using W d = R∫I 2 (t)dt/V, where R and V are the load resistor (here R = 100 Ω) and the effective volume of the sample, respectively [40] . The discharge rate is usually described by the discharge time corresponding to the release of 90% of the stored W d , which is abbreviated as τ 0.9 . As E increases, the current peak and W d also increase. Finally, W d reaches 1.21 J cm -3 at 25 MV m -1 [ Figure 8B]. In general, the W d calculated from the I-t curve is always lower than that calculated from the P-E loop because the characterization mechanisms differ in measurement frequency [1] and dielectric material losses [41] . The τ 0.9 of the 0.60BT-0.40NN ceramics is ~39 ns [ Figure 8B]. The ultrafast discharge rate comes from the low hysteresis polarization response and the relaxor characteristic. This makes the 0.60BT-0.40NN ceramics more competitive in high-power applications [38,42] . Moreover, the underdamped pulsed discharge current curves at 25 °C under various E values are displayed in Figure 8C. From the current curves, we can calculate the current density (C D ) and power density (P D ) from C D = I max /S and P D = EI max /2S, where I max and S represent the maximum value of the underdamped pulsed discharge current curves and the electrode area, respectively [26] . The C D and P D of the 0.60BT-0.40NN ceramics at 25 MV m -1 are 801 A cm -2 and 100 MW cm -3 , respectively [ Figure 8D]. More importantly, from the underdamped pulsed discharge current curves at 20 MV m -1 under various temperatures [ Figure 8E], it can be found that the variations of C D and P D are ~15% from 25 to 140 °C [ Figure 8F], which suggests that the 0.60BT-0.40NN ceramics have significant potential for pulsed power system applications.
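A minimal sketch of how the overdamped discharge metrics quoted above can be extracted from a digitized current-time trace is given below; the arrays `t` and `i` are hypothetical stand-ins for measured data, R = 100 Ω as in the experiment, and the sample volume is an assumed value based on the electrode geometry given in the Methods:

```python
import numpy as np

def overdamped_discharge_metrics(t, i, R=100.0, volume_cm3=1.4e-3):
    """Discharge energy density W_d (J/cm^3) and 90% discharge time tau_0.9 (s).

    W_d = R * integral of i(t)^2 dt / V, with i in A, t in s, R in ohm and
    V the effective sample volume in cm^3.
    """
    # cumulative energy (trapezoidal rule) released into the load resistor
    increments = np.diff(t) * 0.5 * (i[:-1] ** 2 + i[1:] ** 2)
    energy = R * np.concatenate(([0.0], np.cumsum(increments)))
    W_d = energy[-1] / volume_cm3
    tau_09 = t[np.searchsorted(energy, 0.9 * energy[-1])]  # time to release 90% of W_d
    return W_d, tau_09

# Toy trace: damped current pulse standing in for a measured overdamped discharge
t = np.linspace(0.0, 400e-9, 2000)              # 0-400 ns
i = 30.0 * (t / 20e-9) * np.exp(-t / 20e-9)     # hypothetical current in A
print(overdamped_discharge_metrics(t, i))
```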
CONCLUSIONS
In summary, the 0.60BT-0.40NN ceramics with relaxor ferroelectric characteristics have an optimal W d of 3.07 J cm -3 , a high η of 92.6%, a high P D of 100 MW cm -3 and an ultrafast τ 0.9 of 39 ns. Moreover, they exhibit stable energy storage properties in terms of frequency (0.1-100 Hz), fatigue (10 6 cycles) and temperature (25-120 °C), as well as temperature-stable power density (25-140 °C). These ideal energy storage properties and pulsed discharge behavior make the 0.60BT-0.40NN ceramics more promising for high-stability energy storage MLCCs in pulsed power system applications. | 3,267.6 | 2022-01-01T00:00:00.000 | [
"Materials Science",
"Physics",
"Engineering"
] |
Charged fluids in higher order gravity
We generate the field equations for a charged gravitating perfect fluid in Einstein–Gauss–Bonnet gravity for all spacetime dimensions. The spacetime is static and spherically symmetric which gives rise to the charged condition of pressure isotropy that is an Abel differential equation of the second kind. We show that this equation can be reduced to a canonical differential equation that is first order and nonlinear in nature, in higher dimensions. The canonical form admits an exact solution generating algorithm, yielding implicit solutions in general, by choosing one of the potentials and the electromagnetic field. An exact solution to the canonical equation is found that reduces to the neutral model found earlier. In addition, three new classes of solutions arise without specifying the gravitational potentials and the electromagnetic field; instead constraints are placed on the canonical differential equation. This is due to the fact that the presence of the electromagnetic field allows for a greater degree of freedom, and there is no correspondence with neutral matter. Other classes of exact solutions are presented in terms of elementary and special functions (the Heun confluent functions) when the canonical form cannot be applied.
Introduction
It is important to describe the physical properties and behaviour of charged localized distributions in relativistic astrophysics. This has a long history in physical theories since such structures model dense stars and astronomical bodies. These have been widely studied in a variety of physical scenarios over the decades. For some comprehensive studies of charged objects in general relativity see the treatments of Murad and Fatema [1,2], Fatema and Murad [3], Murad [4], Kiess [5] and Ivanov [6,7]. Fewer results are known in modified gravity theories such as Einstein-Gauss-Bonnet (EGB) gravity. The introduction of higher order curvature terms, together with the electromagnetic effects, leads to field equations which are difficult to integrate. However particular charged stars in EGB gravity have been generated by Hansraj [8], Bhar and Govender [9] and Banerjee et al. [10]. Such solutions of the combined EGB and Maxwell equations should match to the suitable exterior spacetimes of Boulware and Deser [11] and Wiltshire [12] to produce a charged stellar model. Exact solutions of the charged EGB equations may also be used to study a variety of physical phenomena. For example Sharif and Abbas [13] considered the dynamics of charged radiating collapse in EGB gravity demonstrating that the Gauss-Bonnet terms affect the role of collapse. It is important to note that the Gauss-Bonnet term corrects undesirable physical features that can arise in conventional Einstein stellar models [8].
For neutral matter with isotropic pressures in EGB gravity, the fundamental equation governing the behaviour of gravity is the condition of pressure isotropy. Stellar models satisfying this requirement have been found in [14][15][16][17][18][19][20][21]. In the presence of the electromagnetic field the condition of pressure isotropy is adapted to include the presence of the charge. The presence of charge changes the behaviour of the gravitational field and allows for a wide class of exact solutions to the field equations. Therefore in our treatment the charged condition of pressure isotropy is central to our investigation. This is a necessary condition to describe an isotropic charged self-gravitating body in EGB gravity. Two features of our approach are noteworthy. Firstly, the new charged condition of pressure isotropy is a simple generalization of the neutral case. Secondly, the connection to general relativity is easy to make as most of the known Einstein stellar models have isotropic pressures, both neutral and charged. Clearly much more general behaviour is allowed, with greater freedom in the analytical forms of the gravitational potentials, if anisotropic pressures are permitted.
It is our intention to develop an algorithm that may be utilized to find new charged exact solutions in EGB gravity. The idea is to extend this approach from general relativity to the charged EGB equations. In general relativity certain solution generating algorithms have been developed over time; these are contained in the papers [22][23][24][25][26][27][28]. The higher order curvature terms and charge have a profound impact on the charged condition of pressure isotropy in the EGB case. Naicker et al. [29] developed an EGB algorithm in N dimensions for neutral and static spherically symmetric metrics. We show in this treatment that a similar algorithm may be generated in the presence of the electromagnetic field. The charged condition of pressure isotropy is shown to be an Abelian differential equation of the second kind. It can be transformed to canonical form using a transformation suggested by Polyanin and Zaitsev [30]. We demonstrate that general solutions exist to the fundamental equation which is not the case for neutral matter. Particular charged exact models are found by specifying forms for the electric field and one of the potentials which contain neutral EGB models found earlier. It is the presence of the electromagnetic field that permits wider classes of solutions. Note that, in a different approach, Maharaj et al. [31] used an existing solution to generate a new exact EGB solution in their algorithm.
Charged EGB gravity
We first introduce the necessary quantities related to the electromagnetic field. The Faraday tensor F is defined in terms of the electromagnetic potential A by We note that the tensor F is skew-symmetric. The electromagnetic matter tensor E is composed of the Faraday tensor and the metric tensor, and is written as where A N −2 is the total surface area of the (N − 2)-sphere denoted by In the above Γ (. . . ) is the gamma function. The electromagnetic field is governed by Maxwell's equations. These fundamental equations are expressed covariantly as In the above J a is the current density defined by for a non-conducting fluid, and σ is the proper charge density. The energy momentum tensor for neutral matter is defined by In the above, ρ represents the energy density, p represents the isotropic pressure and u is the comoving fluid velocity which is unit and timelike (u a u a = −1, u a = e −ν δ a 0 ). The total energy momentum tensor T is then given by The Gauss-Bonnet action, a modification of the Einstein-Hilbert action, is required to generate the EGB field equations in any spacetime dimension. Interestingly, this Gauss-Bonnet action contains quadratic curvature terms which yield field equations that are second order and quasilinear in the highest derivative. The Lovelock tensor H is expressed by and the Gauss-Bonnet term L G B is given by The EGB field equations for charged matter are derived in the form In the above, G ab is the Einstein tensor, α is the Gauss-Bonnet parameter, and κ N is the gravitational coupling constant defined by If N = 4, then we obtain κ (= κ 4 ) = 8πG/c 4 as the appropriate limit in general relativity. When the matter distribution contains electric charge, we must consider the contribution of the electromagnetic field to the total energy momentum tensor T . For a charged gravitating body we need to solve the EGB field equations (10) together with Maxwell's equations (4).
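For reference, the Faraday tensor, the sourced Maxwell equation and the Gauss-Bonnet scalar referred to in the preceding sentences take their standard forms; the overall normalizations (for example the A N −2 factor in the electromagnetic matter tensor) are specific to the paper's conventions and are not fixed here:
\[
F_{ab} = \nabla_a A_b - \nabla_b A_a , \qquad \nabla_b F^{ab} \propto J^a , \qquad \nabla_{[a} F_{bc]} = 0 ,
\]
\[
L_{GB} = R^2 - 4 R_{ab} R^{ab} + R_{abcd} R^{abcd} .
\]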
Field equations
The interior spherically symmetric static stellar manifold in N dimensions has the metric where ν(r ) and λ(r ) are the gravitational potentials that are arbitrary functions of r. The (N − 2)-sphere is given by For charge we select the electromagnetic potential A in the form which is usually the choice made when studying static spheres in general relativity. We then get the Faraday tensor component Hence we obtain the following form for the electrostatic field intensity Then the static spherically symmetric metric (12), the electromagnetic potential (14) and the matter distribution (7) lead to the charged EGB field equations. If we equate the curvature and the matter components using the definition (10), we obtain the EGB field equations in N dimensions. These are expressed by Note that primes represent differentiation with respect to the variable r. Then the combined field equations describe the gravitational behaviour of a charged gravitating fluid in EGB gravity in N dimensions. If we set E = 0 then we obtain the neutral EGB field equations of Naicker et al. [29]. Note that the system (17) contains several cases that arise in general relativity and EGB gravity: spacetime dimensions N = 4, N ≥ 5, neutral and charged matter. This is reflected in Table 1. Our investigation allows for a comprehensive treatment of all the cases.
We now apply the transformation first introduced by Durgapal and Bannerji [32] in general relativity, to simplify the system (17). The charged EGB field equations can then be recast as Note that dots represent differentiation with respect to the variable x.
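The Durgapal-Bannerji change of variables referred to here is, in the form commonly used for EGB stellar models (our labelling of the constant A; the paper's exact normalization may differ),
\[
x = r^2 , \qquad Z(x) = e^{-2\lambda(r)} , \qquad A^2 y^2(x) = e^{2\nu(r)} ,
\]
which trades the potentials ν and λ for y(x) and Z(x), with dots denoting derivatives with respect to x as stated above.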
If we equate (19b) and (19c) then we find the charged isotropic pressure condition The charged condition of pressure isotropy has to be integrated to find an exact model of a charged gravitating sphere.
To solve Eq. (20) we need to restrict two of the quantities y, Z and E. Note that the case N = 5 is special as the term proportional to (N − 5) vanishes. There is simplification in (20) and most exact solutions found correspond to N = 5. The dimensions N ≥ 6 have a dramatic effect and lead to new features absent in the model when N = 5. A choice of the potentials y and Z may lead to a model with unphysical behaviour. Consequently in many investigations, a choice for the electric field is made on physical grounds.
For recent examples of this approach see the treatments of Mathias et al. [33], Lighuda et al. [34] and Mafa Takisa et al. [35]. We can summarize our results in the following statements: Theorem 1 If the electric field E is specified then the condition of pressure isotropy, a nonlinear second order differential equation, has to be integrated.
Corollary 1
We can obtain a general form for E without integration if the potentials Z = Z 0 and y = y 0 are specified.
Abelian differential equation
Progress in the integration of (20) can be made if we write it in a particular analytic form. Expression (20) can also be regarded as a first order nonlinear ordinary differential equation in Z . This is given by The above is further identified as an Abel differential equation of the second kind in Z if y and E are specified. It is important to find exact solutions to this equation in order to determine the dynamics of our model. In general (21) is difficult to solve; however, it can be simplified by making use of a transformation similar to that in Polyanin and Zaitsev [30]. We now present the new variable (22) where w = w(x) and Equation (21) then reduces to the canonical differential equation of the form where the new functions F 1 and F 0 depend on the metric potential y, its derivatives and E. These are expressed by and In order to find a solution for w = w(x), we must integrate (24) and make appropriate choices for y and E. Since F 1 and F 0 both depend on an arbitrary function of y in a complicated manner and F 0 contains contributions from the electromagnetic field, it will not be possible to find a general solution to (24). However particular solutions do exist.
We summarize our result in the following: Theorem 2 When α ≠ 0 and 6xẏ + (N − 5) y ≠ 0, the condition of pressure isotropy is classified as an Abelian differential equation of the second kind in Z , in N dimensions, which can be transformed to the canonical form wẇ = F 1 w + F 0 .
Corollary 2
If particular choices for the potential y = y 0 and the electromagnetic field E = E 0 are made, then wẇ = F 1 w + F 0 can be solved to find the metric potential Z = Z 0 .
Observe that the above result is a generalisation of the model presented in Naicker et al. [29] to include the electromagnetic field; we regain the result by [29] when E = 0. It is indeed interesting that the canonical form (24) is not affected by the electromagnetic field. However it is important to observe that the presence of E leads to a new differential equation. We note that our result provides a solution generating algorithm for the charged EGB field equations which extends the neutral algorithm of [29] to include the electromagnetic field.
A specific metric
Equation (24) does admit exact solutions. As an example we illustrate a solution to (24) by setting This metric potential was also used by Hansraj and Mkhize [19] when N = 6 and by Naicker et al. [29] for arbitrary spacetime dimensions N ≥ 5, for uncharged matter. The integral (23) evaluates to Then expression (24) now has the form In order to solve Eq. (29) we must specify a form for the electromagnetic field. We choose a form for E as where A is some arbitrary constant. Other forms of E are possible but the chosen form simplifies the integration process. The form for E selected leads to a singularity at the centre so that the model applies to an envelope region away from the centre. Equation (29) then becomes This equation can be identified as a nonlinear first order differential equation in the variable w(x) which can be simplified further using the substitution As a result, we obtain in terms of the new variable W (x). The structure of expression (33) is a separable differential equation which can be solved to obtain where C 1 > 0 represents an integration constant. We can then write Eq. (34) in terms of the variable w(x) in the form using (32). Therefore we have solved Eq. (31). The solution (35) is provided implicitly. In terms of the potential Z we can obtain the form Hence the gravitational potential Z is given exactly, containing elementary functions of x for all spacetime dimensions N ≥ 5. The charged condition of pressure isotropy (21) admits the particular exact solutions given by (27), (30) and (36). Earlier solutions are contained in our general result. When A = 0 in expression (36) we obtain, for all N ≥ 5, the solution which is explicit. This regains the neutral solution found by Naicker et al. [29]. The uncharged model of Hansraj and Mkhize [19] with N = 6 is a special case of (37). The uncharged solutions generate an explicit form for Z . The electromagnetic field also leads to exact models but its overall effect on Z is that it has to satisfy an implicit equation.
Dimension N = 5
Note that the spacetime dimension N = 5 leads to simplification in the Abelian differential equation (21) with several terms vanishing. In addition the transformation (22) takes on the simpler form where the function W now has the explicit form The functions F 1 and F 0 in (24) can then be written as We can observe that the spacetime dimension N = 5 is special as integration of the canonical form (24) is now possible and the functions F 1 and F 0 are expressed in a simpler form. We now demonstrate an explicit solution to (24) when N = 5. The choice y = (1/2) D 1 x 2 + D 2 for the potential and E 2 = bx for the electrostatic field intensity in (24) then yields which is a separable differential equation that can be integrated to obtain w and consequently Z . The gravitational potential Z is then provided by Note that D 1 , D 2 and b are constants. The solution for the potential Z is thus provided explicitly in closed form and is expressed in terms of elementary functions of x. This appears to be a new class of solutions to the charged EGB field equations. When b = 0, we obtain the uncharged case similar to the solution illustrated in Hansraj et al. [17].
Exceptional metrics
The transformation given by (22) holds when α ≠ 0 and 6xẏ + (N − 5) y ≠ 0. Therefore we need to consider the cases α = 0 and 6xẏ + (N − 5) y = 0 separately.
Firstly, we consider the case α = 0; then the condition of pressure isotropy (21) takes on the form This is a first order linear ordinary differential equation in Z which can be solved by making a choice for the potential y = y 0 and the electromagnetic field E = E 0 . For a recent general treatment of (44) see Komathiraj and Sharma [36].
In particular, if we let y = √ x and use (30) for E, then (44) has the solution where B is a constant of integration. Note that when A = 0, (45) reduces to the neutral case in Naicker et al. [29]. Secondly we consider the case 6xẏ + (N − 5) y = 0. We can integrate this to get where C is an integration constant. This potential y leads to for the isotropy pressure condition (21). We can solve the above equation by specifying a form for the electromagnetic field E. We choose Then expression (48) has the form We show that (50) can be solved. For the particular spacetime dimension N = 5, expression (50) is a linear differential equation in Z . The solution is given by Note that C 1 is an integration constant, and setting A = 0 regains the generalised Einstein static model in EGB theory as expected for the neutral case. When N ≠ 5, (50) is not linear in Z ; it is a Riccati equation. When A = 0 (corresponding to uncharged matter) it reduces to the equation considered by Naicker et al. [29]. When A ≠ 0, other solutions are then possible which we present in Appendix A. It is clear that the spacetime dimension N and the charge parameter A have a profound effect on the dynamics.
General cases
The charged condition of pressure isotropy has been transformed to the canonical form (24). We have shown that exact solutions exist by choosing specific functions for y and E, and then integrating to find Z . We now show that it is possible to find general solutions to (24) without having to make a choice for y, Z or E. These new classes of solutions arise by placing restrictions on the functions F 0 and F 1 . The presence of the electromagnetic field allows for greater freedom and permits these three new classes of solutions to exist. In the absence of charge there is less freedom.
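The three restrictions treated below each reduce the canonical equation wẇ = F 1 w + F 0 to an elementary quadrature; schematically, with integration constants suppressed and K the proportionality constant of Case III,
\[
F_0 = 0:\;\; \dot{w} = F_1 \;\Rightarrow\; w = \int F_1\, dx , \qquad
F_1 = 0:\;\; w\dot{w} = F_0 \;\Rightarrow\; w^2 = 2\int F_0\, dx ,
\]
\[
F_1 = K F_0:\;\; \frac{w\, dw}{1 + K w} = F_0\, dx \;\Rightarrow\; \frac{w}{K} - \frac{1}{K^2}\ln|1 + K w| = \int F_0\, dx .
\]
In each case it then only remains to choose y (and, where applicable, E) so that F 0 and F 1 become explicit functions of x.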
Case I: F 0 = 0
We first set F 0 = 0, so that From Eq. (53) we can obtain a general form for the electric field intensity E as Equation (24) is now written as which is identified as a separable differential equation. We integrate Eq. (55) to obtain w, and consequently Z in the form where C is a constant of integration. Hence we have solved the charged condition of pressure isotropy when F 0 = 0. There is freedom of choice for the metric function y. Any choice y = y 0 generates forms for E and Z via (54) and (56) respectively. We can state our result as Proposition 1 If F 0 = 0 then a general expression for the electric field E is provided by Eq. (54). Any choice of the potential y = y 0 leads to an exact solution for the charged EGB field equations.
Case II: F 1 = 0
We now let F 1 = 0, which yields the following constraint Equation (58) is a product of a first order linear ordinary differential equation and a second order linear differential equation. The permissible solutions are given by where Q, B 1 and B 2 represent integration constants. It now remains to find the potential Z if the constraint (57) holds. Equation (24) then has the form which is a separable equation. Integrating we obtain w and then the function Z in the form With F 1 = 0 we have integrated the charged condition of pressure isotropy. The form of y in (59) and any choice E = E 0 leads to a functional form for Z via (61). This result leads to the statement: Proposition 2 If F 1 = 0 then two forms for the potential y are possible. The potential Z is given by (61). Any choice of the electric field E = E 0 leads to an exact solution of the charged EGB field equations.
Case III: F 1 = K F 0
An interesting class of models is possible if F 0 and F 1 are related. We let the function F 1 be proportional to F 0 , where K is some constant. This gives the condition From Eqs. (25), (26) and (62) we obtain Therefore the electric field E is specified. On substituting (62) in Eq. (24) we obtain which is a separable equation. Integrating we obtain Note that K ≠ 0, and we obtain a class of models different from Case II in Sect. 7.2. In terms of the variable Z we obtain where (N − 3)(N − 4) = Ñ and C is a constant of integration.
We have solved the charged condition of pressure isotropy when F 1 = K F 0 . The integration in (66) can be completed once a functional form for y = y 0 is selected. We can state our result as: Proposition 3 If F 1 = K F 0 then a general expression for the electric field E is given by (63). Any choice for the metric function y = y 0 results in an exact solution of the charged EGB field equations.
We have established three propositions, resulting from restrictions on F 1 and F 0 that allow for integration, which lead to expressions for the first potential Z in terms of the second potential y. A specific choice of y will lead to a functional form for Z . Clearly the choice made for y should simplify the integration and lead to an acceptable model.
Matching
The solutions found in this paper may be interpreted as static cosmological models or, more realistically, as interior descriptions of static charged stars. For a stellar structure there has to be matching at the surface to an exterior gravitational field. In general models, including spherical geometry, the matching conditions are well known and can be written as the continuity of the line element ds 2 and of the extrinsic curvature K ab across a comoving boundary surface Σ. The matching conditions (67) hold in general relativity. Several models of static relativistic stars have been found in the past which satisfy the conditions in (67). In EGB gravity the boundary conditions on Σ have the form as given by Davis [37]. In the above we have where the caret "ˆ" indicates quantities associated with the induced metric and P abcd is the divergence free part of the Riemann tensor. The tensor J ab is defined by and J is its trace. For a proper distribution of a static star in EGB gravity we need to match an interior solution to an exterior vacuum solution, say the Boulware-Deser metric. In many EGB treatments the matching conditions are taken to be the general relativity equations (67); for an example of this approach see [16]. Such investigations do produce useful physical features of the stellar model but it has to be acknowledged that the resulting structure is incomplete as Eqs. (68) may not be satisfied. However it is difficult to solve (68) in general.
In an attempt to circumvent this problem Maurya et al. [38] have suggested that the conservation of energy momentum could be used in the analysis of the boundary conditions. This approach is helpful but the boundary conditions (68) are still not satisfied in general. In an ongoing investigation we are presently studying the general matching of the Boulware-Deser spacetime to the interior static spherically symmetric matter distribution. This will then produce a complete stellar model in EGB gravity.
We now consider the existence of stellar models in EGB gravity using the approach of Maurya et al. [38] for the solutions found in this paper. We expect that the dimension N should affect the matter content and the geometry. The interior spacetime is described by the metric (12), and the exterior spacetime is described by the line element (72), where the metric function is that of the Boulware-Deser-Wiltshire solution in N spacetime dimensions. In the above M is the gravitational mass of the hypersphere and Q is its charge. Note that in the limit as α → 0 we regain the Reissner-Nordström solution in N dimensions.
The first fundamental form is the direct matching of the line elements (12) and (72) at the boundary r = R. This yields where ε 1 = y(R 2 ) and ε 2 = Z (R 2 ). The gravitational mass M is given by where and In the above M E and M G B are the masses corresponding to the contributions from general relativity and Einstein-Gauss-Bonnet gravity respectively. It is clear that the dimension N affects the value of the gravitational mass M. The second fundamental form implies that the radial pressure vanishes at the boundary r = R. From (19b) we obtain where ε 3 = y (R 2 ) and ε 4 = E(R 2 ). The charge density at r = R is expressed by The total charge within a radius r of the hypersphere of radius R is given by where Q is the charge as measured by an external observer at infinity. Observe that Eq. (80) generates a restriction on the parameters when the electric field is specified. If E is given by (30) then (80) becomes where we have set ε 5 = A which is a charge parameter.
Hence the matching at r = R gives the four restrictions (73), (74), (78) and (81). Observe that the free parameters are ε 1 , ε 2 , ε 3 , ε 4 , ε 5 and R, so that we have an algebraic system of four equations with six unknowns. Hence this system always admits a real solution when two unknowns are specified. (Note that Q is defined in terms of R, ε 5 and M is given in terms of R, ε 2 and Q.) It is important to note that Z is given implicitly in general by Eq. (36). When the charge parameter A = 0 then an explicit form for Z results, which is also the case for the parameters ε 1 -ε 5 . When A ≠ 0 the junction conditions have to be solved numerically.
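Closing the boundary-value problem described here amounts to fixing two of the six parameters and solving the remaining four matching relations numerically. A purely illustrative sketch is given below; the residual functions are hypothetical placeholders, not the paper's actual Eqs. (73), (74), (78) and (81), and only indicate how such a system could be closed:

```python
from scipy.optimize import fsolve

def matching_residuals(unknowns, eps5, R):
    """Placeholder residuals standing in for the four junction conditions.

    unknowns = (eps1, eps2, eps3, eps4); the charge parameter eps5 and the
    stellar radius R are prescribed.  Replace each residual with the actual
    relations (73), (74), (78) and (81) before use.
    """
    eps1, eps2, eps3, eps4 = unknowns
    res1 = eps1 - 0.8 + 0.1 * eps5 / R        # stands in for Eq. (73)
    res2 = eps2 - 0.9 + 0.05 * eps5 / R**2    # stands in for Eq. (74)
    res3 = eps3 - 0.2 * eps1 / R              # stands in for Eq. (78), p(R) = 0
    res4 = eps4 - eps5 / R**3                 # stands in for Eq. (81)
    return [res1, res2, res3, res4]

eps5, R = 0.1, 5.0                             # the two freely chosen parameters
solution = fsolve(matching_residuals, x0=[0.5, 0.5, 0.1, 0.1], args=(eps5, R))
print(dict(zip(["eps1", "eps2", "eps3", "eps4"], solution)))
```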
The dimension of spacetime is critical in our analysis. We note that for the spacetime dimension N = 5 the junction conditions take on a simpler form because the term (N−5)(1−ε_2)/(2R^4) in (78) vanishes. This indicates that the dimension N = 5 is a special case. We note that the EGB part of the mass function M_GB in (77) also takes on the simpler form M_GB = 2α(1 − ε_2)^2, which is independent of R. For N ≥ 6, M_GB depends on R. The structure of the static star is therefore different in five dimensions than in higher dimensions. When the dimension of spacetime is N = 6, the term (N−5)(1−ε_2)/(2R^4) comes into effect in the junction conditions. We also note that the mass function (77) is greater in six dimensions because of the effect of the R^(N−5) term, for R > 1. The charge Q also increases in magnitude as the spacetime dimension increases, from Eqs. (83) and (85) for N = 5 and N = 6 respectively.
Discussion
We have studied static spherically symmetric models in a higher dimensional charged EGB gravity setting. The matter distribution considered is a perfect fluid, in an electric field, with isotropic pressure. The charged EGB field equations for such a fluid distribution were found for all spacetime dimensions N ≥ 5. We demonstrate that the charged condition of pressure isotropy is an Abel differential equation of the second kind in Z which is reduced to the canonical form w ẇ = F_1 w + F_0 after using a transformation. This generalises the Naicker et al. [29] result to include the electromagnetic field. It is interesting to observe that a solution generating algorithm for this equation exists for all dimensions N ≥ 5. The canonical equation is solved by choosing a specific form for the potential y and the electromagnetic field E. As a result the gravitational potential Z is defined exactly in an implicit manner. An important point to note is that the presence of the electromagnetic field permits an implicit equation in the potential Z. However, if the electromagnetic field vanishes, we regain an explicit exact solution in Z as demonstrated in [29]. Furthermore, three new classes of exact solutions to the charged EGB field equations were generated by placing constraints on the functions F_1 and F_0. In the first approach, we set F_0 = 0: this permitted a general expression for E without integration, and any choice for the potential y yields a functional form for the potential Z. The second approach, F_1 = 0, yielded two analytic forms for the metric function y, and any choice for the electromagnetic field E results in an exact solution for Z. In the third and final constraint, F_1 = K F_0, a general form for the electromagnetic field E is obtained. As a result the gravitational potential Z can be determined exactly by specifying any form for the metric function y. These families of exact solutions arise due to the presence of charge. Charge allows for a greater degree of freedom, which is not the case for neutral models. Other possible exact solutions to the charged EGB condition of pressure isotropy are found when exceptional metrics are considered. The matching conditions in EGB gravity were also considered for our model. A complete stellar model in EGB gravity (and general Lovelock gravity) is not yet known; however, it is still possible to ascertain the existence of a static star in EGB gravity. The higher dimensional interior spherically symmetric spacetime was matched to the exterior vacuum solution of Boulware-Deser, and it was shown that the radial pressure vanishes at the boundary of the star as expected. The mass function was also obtained in N dimensions. The dimension N critically affects the geometry of the star as well as its matter distribution.
An important point to note is that our classes of interior models have no general relativity counterpart. These models exist only in EGB gravity. The charged condition of pressure isotropy is a nonlinear differential equation in Z, an Abel differential equation of the second kind, which is reduced to a canonical form different from that of general relativity. In general relativity the charged condition of pressure isotropy is a linear differential equation in Z if y is specified. An explicit solution to this Abel equation for Z, in tandem with a resolution of the boundary conditions from the matching, will yield a complete stellar model.
| 7,310.8 | 2023-04-01T00:00:00.000 | [ "Physics" ] |
How Applied English Students Deal with Literary Translation
Graduates of the Applied English program are expected to have the translation skills needed by industry. They are expected to be able to translate any kind of text, whether formal documents, advertisements, directions, or literary works. The translation of literature differs from other forms of translation. It is therefore interesting to find out how Applied English students deal with literary translation. In collecting the data, document analysis of the students' work and interviews were conducted. The results showed that translating literary works is not an easy task for Applied English students when they lack the relevant theory and familiarity with the genre. Based on the study, it was found that the students made several mistakes when translating the literary works, such as using literal translation, misunderstanding the context, being overconfident, and lacking vocabulary. On the other hand, the students recognized their mistakes and understood that reading widely, improving their vocabulary, and gaining more experience will make it easier for them to deal with literary translation.
Introduction
The Indonesian government introduced the National Education Blueprint for Smart and Competitive Indonesians 2005-2025, which prioritizes the development of the vocational education and training (VET) sector and focuses on increasing the number of vocational schools and improving the English communication skills of their graduates. This initiative was implemented to meet the industry's high demand for young and skilled human capital. The students are expected to be able to initiate and maintain predictable face-to-face conversations and satisfy limited social demands. Statistics seem to confirm that a bright future eludes vocational school students, who have topped the unemployment rate for the last three years with 8.63 percent this year, 8.92 percent in 2018 and 9.27 percent in 2017. This apparently gloomy state of vocational school students ignores the decisive role that vocational education will play as the country anticipates a big harvest of its demographic bonus between 2020 and 2035. To fulfill this expectation, vocational students are expected to master many skills needed by industry, especially English.
Diponegoro University, one of the biggest universities in Indonesia, supports the government in reaching this goal through its vocational school. One of the departments in the school is Applied English.
Graduates of the Applied English program are expected to have the translation skills needed by industry. They are expected to be able to translate any kind of text, whether formal documents, advertisements, directions, or literary works. The translation of literature differs from other forms of translation. The sheer size of the texts involved in literary translation sets it apart: dealing with a translation of thousands of words is not a task for the faint-hearted. Recreating a novel or other literary work in a new language without losing the beauty and essence of the original is not an easy job. One of the key challenges of literary translation is the need to balance staying faithful to the original work with creating something unique and distinctive that will evoke the same feelings and responses as the original. As literary translators will attest, a single word can be extremely troublesome. The author of a work of fiction has chosen that word for a good reason, so it is up to the translator to ensure that it is faithfully delivered in the target language.
Since vocational students are expected to fulfill industry needs, and Applied English students are in high demand for handling many kinds of text, it is of great interest to see how they deal with literary translation.
Literature Review
Translation makes it easy for people to obtain information without confusion. Gibova (2012: 27) states that "when analyzing translations of any sort, be it literary or non-literary texts, there are certain categories that allow us to examine how the target text (TT) functions in relation to the source text (ST)." In addition, different genres of text require different treatments or procedures depending on their functions, such as referential or informative, expressive, and operative (Reiss, 1976; Nord, 1977, cited in Colina, 2003).
Many linguists have defined translation from their own perspectives.
Newmark (1988: 7) defines translation as "a craft consisting in the attempt to replace a written message and/or statement in one language by the same message and/or statement in another language." In line with Newmark, Larson (1984: 3) states that "Translation is transferring the meaning of the source language into the receptor language. This is done by going from the form of the first language to the form of a second language by way of semantic structure. It is meaning which is being transferred and must be held constant." In this case Larson (1984) comments on the completeness and harmony between language forms and structures of meaning: the understanding of the meaning contained in the source text must be transferred to the target text with full responsibility. Jacob (2002) adds that the translator has to adapt the message to the target audience and use only what he or she considers to be the most appropriate solution in any given situation. The ultimate aim is to communicate the message as effectively as possible. Thus, communicating the message to the target language readers is an effective solution in translating. Roberts, as cited in Mamur (2005), mentions five competencies translators must possess, namely (1) linguistic competence, i.e., the ability to understand the source language and produce acceptable target expressions, (2) translation competence, i.e., the ability to comprehend the meaning of the source text and express it in the target text, (3) methodological competence, i.e., the ability to research a particular subject and to select appropriate terminologies, (4) disciplinary competence, i.e., the ability to translate texts in basic disciplines such as economics, information science and law, and (5) technical competence, i.e., the ability to use aids to translation such as word processors, databases, and the Internet.
In the process of translation, translators usually use certain procedures to solve specific translation problems. There have been overlapping terms to refer to these procedures, such as 'translation method' and 'translation strategy', as proposed by Vinay and Darbelnet, Nida, Taber and Margot, Vasques Ayora, Delisle and Newmark; these were then reviewed and revised by Molina and Albir (2002) to obtain a classification that is consistent in its application and applicable to all kinds of texts. Molina and Albir proposed the term 'translation technique' and defined it as a procedure for analyzing and classifying how translation equivalence is achieved.
Literary Translations
Translating literary texts is different from translating non-literary texts. Translating scientific texts is not as complicated as translating literary texts (Purwoko, 2006). Literary texts contain unique and distinctive aspects that are hard to translate. Literary texts have different text structures and linguistic characteristics from non-literary texts, so translating them has its own difficulties and complexities (Soemarno, 1988).
A literary text is a work that carries both messages and style. Messages with connotative meaning and a style built on aesthetic-poetic mechanisms are the characteristics of a literary text.
Literature itself is a body of writing that describes the history of a community, contains artistic and aesthetic values, and is read as a reference (McFadden in Meyer, 1997: 2). A translator of literary texts will face a variety of difficulties associated with meaning: lexical meaning, grammatical meaning, contextual or situational meaning, textual meaning, and socio-cultural significance. Some meanings are easily translated (translatable) while others are difficult or even impossible to translate (untranslatable). Furthermore, a translator who is well aware of this role will produce a good translation, namely a qualified translation that is easy to understand, reads like a natural product of translation, and is helpful as a source of information (Kovács, 2008: 5).
Materials and Methods
The focus of this study is to investigate how Applied English students deal with literary translation. The study employed a qualitative case study design (Baxter and Jack, 2008; McMillan and Schumacher, 2003, cited in Syamsudin and Damayanti, 2007). The data used in this study were words and phrases expressing the students' opinions about literary translation, collected through interviews. The primary intent of the informal interview is to find out what the interviewees think and how the views of one individual compare with those of another (Fraenkel, Wallen & Hyun, 2012). In addition, the resulting translations were analysed to see how the students translated the literary text. The participants were fourth-semester students of the Applied English Department who had studied translation for three semesters but had never done literary translation. They were asked to translate a short story from storynory.com entitled When the Sun Hid in the Cave, after which we discussed and analysed the quality of their translations.
Results
To measure how they translate a literary text, the students were asked to translate a short narrative story from English into Indonesian. The story was taken from storynory.com and is entitled When the Sun Hid in the Cave. Based on the interviews and translations, it was found that the Applied English students face several problems related to literary translation. Even though they are familiar with the translation process, dealing with literary translation is quite tough for them. They found it hard to translate the short story accurately without losing the soul of the text. They made several mistakes in dealing with the literary translation, as described below.
Literal translations
Some of the students used literal translation when doing literary translation, which made them change the meaning of the original. Translating a text word for word is almost sure to change the meaning of the translation from the original; since this is literary translation, they should not translate literally if they want to convey the soul of the text. It was found that some of the students translated the text literally, for example rendering the sentence "Nobody can live on love alone, however." as "Bagaimanapun, tidak ada yang dapat hidup dengan cinta sendirian". This does not read smoothly and is not easy to understand, since the phrase hidup dengan cinta sendirian is ambiguous and hard to understand. It would be smoother and easier to understand if the sentence were translated as Bagaimanapun keadannya tidak seorang bisa hidup sendiri tanpa cinta or Bagaimanapun, tak ada orang yang dapat hidup tanpa cinta. Based on the interviews, it was also found that the students have difficulty in finding equivalent words when translating the text.
Misunderstanding the context of a word
If the translator misunderstands the context in which a word is used, he will translate it differently, which can alter the original meaning. It was found that the phrase 'The farmer shrugged his shoulders' is quite hard to translate into Indonesian because the students misunderstood the context. Some of them translated it as Petani itu mengangkat bahu-nya, meaning that they only translated the words without considering the context. In addition, based on their interviews, they agreed that in dealing with this translation, understanding the context is very important for finding the equivalent word. After discussing the translation results together, they also realised how important it is to understand the context and the kind of text.
Over-confidence
Over-confidence is another of their mistakes in dealing with literary translation. Not having one's work proofread is not only overconfident but also careless. It leads to many mistakes, so the translations do not read smoothly. Based on the interviews, it was found that the students usually just translate the text without proofreading, so the translation does not feel smooth and contains many mistakes.
Lack of Vocabulary
Vocabulary knowledge is significant for producing high-quality literary translations. Based on the interviews, the students have difficulty in translating the literary text, especially in finding the correct word to use. Most of them said that it is hard to find the equivalent word in literary translation because of their limited vocabulary. Expressions mentioned by the students, such as "it is difficult to choosing the right words for tales, it is difficult to find the equivalent words, and it is difficult to understand the story", show that they lack vocabulary. Through the in-depth interviews, it was found that the students do not like to read much or practise to improve their vocabulary.
The students realize that translating literary works is different from translating non-literary ones, since the text contains aspects that are hard to translate and understand, namely connotative meaning and style.
The difficulties and mistakes they made mostly concern lexical meaning and contextual meaning. Some items were easy to translate and the rest were difficult. On the other hand, the students understand their mistakes and how to address them: by reading widely, improving their vocabulary, and gaining more experience in translating texts.
Conclusion
Translating literary works is not an easy job for Applied English students when they lack the relevant theory and familiarity with the genre. Based on the study, it was found that the students made several mistakes when translating the literary works, such as using literal translation, misunderstanding the context, being overconfident, and lacking vocabulary. On the other hand, the students recognized their mistakes and understood that reading widely, improving their vocabulary, and gaining more experience will make it easier for them to deal with literary translation.
When the Sun Hid in Her Cave
At the dawn of time, Susano-o, the spirit of the sea and storms, was making ready to leave heaven and to gush down to Earth. His sister, the far-shining Sun Goddess, said, "Oh, impetuous brother of mine. Before you go, let us exchange tokens of our love and affection for one another." Susano-o bowed to his sister, drew his sword from his side, and presented it to her. She accepted the gift, and then chewed off pieces of the metal blade in her mouth, before spitting them out. Instantly, the fragments of the sword sprang up as three beautiful daughters.
Then the sparkling Sun Goddess took jewels from her hair and gave them to her brother. He crunched them up with his teeth and spat them out; they became five strong sons.
"They are my sons," said the goddess, "because they were born from my jewels." "No, they are my sons," said the storm god, "because you gave me those jewels." Thus the brother and sister began to quarrel. The stormy tempered Susano-o grew so angry that he swept through his sister's rice fields and destroyed them. He flung manure all over her garden, and frightened her maidens so that they hurt themselves on their spinning wheels.
The bright goddess was greatly offended by the evil pranks of her brother. She fell into a most dreadful sulk, and hid herself in a cave in a remote part of the earth. There was no more light, and heaven and earth were plunged into darkness.
Amid this gloom, thousands of gods and spirits gathered in a heavenly river bed to discuss what to do. One of the oldest and wisest gods proposed that they make a mirror, in order to tempt the goddess to come out of hiding and gaze at her beauty. Another suggested that they should sew a beautiful dress as a gift to soothe her temper. Still other gods said that they must offer her jewels and even a palace. At last they decided to make all these gifts, and they set to work.
When they were ready, the divine ones gathered outside the cave of the Sun Goddess. They lit bonfires so that they could see in the darkness, and they called the goddess by her name, Amaterasu, but no matter how many times they called, she remained lurking within the shadows of her hiding place.
The gods needed to do better than that if they were to gain her attention, so they began to make music. They clashed cymbals and banged wooden clappers together. The plump goddess of mirth, with dimpled cheeks and eyes full of fun, led a dance. She performed on top of a giant drum that thundered with her every step. She held a stick in her hand with bells tied to it so that they rang out as she danced. Farmyard cockerels joined in with crowing. You can imagine what a lovely concert they made!
The dancing goddess of mirth wore a dress that was held together with vines. As she waved her arms and pranced about, the dress became looser and looser until it fell off altogether and she had not a stitch of clothing on her. The gods found this so hilarious that they all laughed until the heavens clapped with thunder.
Only then did curiosity get the better of the far-shining one, and she peeped out of her cave. She saw her bright face reflected in the mirror that had been placed just in front of the opening, and she was astonished by her own beauty. She did not have long to gaze, however, because a strong-handed god seized hold of her arm and dragged her out of the cave. Then all the heavens and earth were lit, the grass became green again, the flowers blazoned with a multitude of colors, and human beings looked upon one another's faces.
There was another benefit from this gloomy episode in the history of creation. This was the first time that music, dance, and fun were known on the face of the earth, and these divine gifts have brightened human lives ever since.
| 4,204.2 | 2020-06-04T00:00:00.000 | [ "Linguistics", "Education" ] |
Fast and Accurate Approaches for Large-Scale, Automated Mapping of Food Diaries on Food Composition Tables
Aim of Study: The use of weighed food diaries in nutritional studies provides a powerful method to quantify food and nutrient intakes. Yet, mapping these records onto food composition tables (FCTs) is a challenging, time-consuming and error-prone process. Experts make this effort manually and no automation has been previously proposed. Our study aimed to assess automated approaches to map food items onto FCTs. Methods: We used food diaries (~170,000 records pertaining to 4,200 unique food items) from the DiOGenes randomized clinical trial. We attempted to map these items onto six FCTs available from the EuroFIR resource. Two approaches were tested: the first was based solely on food name similarity (fuzzy matching). The second used a machine learning approach (C5.0 classifier) combining both fuzzy matching and food energy. We tested mapping food items using their original names and also an English translation. Top matching pairs were reviewed manually to derive performance metrics: precision (the percentage of correctly mapped items) and recall (the percentage of mapped items). Results: The simpler approach, fuzzy matching, provided very good performance. Under a relaxed threshold (score > 50%), this approach remapped 99.49% of the items with a precision of 88.75%. With a slightly more stringent threshold (score > 63%), the precision could be significantly improved to 96.81% while keeping a recall rate > 95% (i.e., only 5% of the queried items would not be mapped). The machine learning approach did not lead to any improvement over fuzzy matching overall. However, it could substantially increase the recall rate for food items without any clear equivalent in the FCTs (+7 and +20% when mapping items using their original or English-translated names). Our approaches have been implemented as R packages and are freely available from GitHub. Conclusion: This study is the first to provide automated approaches for large-scale food item mapping onto FCTs. We demonstrate that both high precision and recall can be achieved. Our solutions can be used with any FCT and do not require any programming background. These methodologies and findings are useful to any small or large nutritional study (observational as well as interventional).
INTRODUCTION
Food composition tables (FCTs) document the nutritional content and properties of food items. These tables are used in conjunction with dietary records, e.g., food diaries, to match consumed food items, and quantify the dietary intake from an individual. The availability of complete and good quality FCTs is required to enable quantitative research in nutritional studies, as well as epidemiological research and public health monitoring. These data are also playing a critical role in studies that aim to monitor an individual's diet and propose personalized recommendations.
Traditionally, FCTs were compiled at a national level, with limitations in data format, depth of annotation and data completeness across different countries. Noticeable efforts have been put in place over the past few years to collect, standardize, and curate FCTs (1)(2)(3)(4)(5). In particular, the European Food Information Resource (EuroFIR) has been pivotal in harmonizing data from more than 28 national FCTs (including European countries and the USA) (2). Electronically linking these data with food diary data from observational or interventional studies provides opportunities to better study the link between health and nutrition (6). Yet, dietary consumption is often collected through paper-based food diaries, which require substantial effort for digitalization (converting records to electronic format) and for food item mapping (for each record, identifying its corresponding or closest food item in FCTs and collecting the nutritional composition of the matched item). As of today, this effort still remains a manual, expertise-driven exercise. As a direct consequence, such manual mapping is limited by the study's available resources, and the retrieved information is limited to a few composition variables (e.g., macronutrients and energy content, rarely extended to detailed information about micronutrients).
Numerous efforts are being devoted to methods for food image recognition using deep learning (7)(8)(9)(10), and there is an explosion of mobile applications for food recording. These might prove helpful in future studies by enabling individuals to directly link the consumed food items to a reference database. However, these solutions are not intended to solve the mapping issue in studies with existing food records. In addition, their interfaces, the quality of their underlying FCTs, and their performance still remain to be carefully validated for use in clinical nutrition studies. Hitherto, food item mapping remains a largely unaddressed issue.
Another problem lies in estimating the variability introduced by the mapping. The ideal mapping matches the queried food record onto an item from the FCT. In practice, a one-to-one match rarely exists: the local FCT may not be extensive enough to enable food matching. Also, in multi-center studies, food items from a specific country are frequently matched onto a larger FCT from another country, which may not have a close equivalent for a given item. In the absence of a clear one-to-one mapping between a food record and an FCT food item, several strategies are possible: ignore the food record (thereby introducing missing data), use the closest match, or create an average profile from several close matches. All three options would introduce variability (or missing data) that needs to be appropriately handled in subsequent statistical analyses. To our knowledge, the variability induced by uncertain food mapping remains unaddressed in nutritional studies, and dietary intakes are analyzed under the assumption that a perfect match has been found between the food record and one food item from the FCT. This mapping uncertainty is magnified in multi-center studies, where food records from a specific country often need to be translated into English and then mapped onto an English-based FCT [such as the USDA (11) or MW7 (1)]. In this scenario, the English translation may add further to the uncertainty, or the queried food item may simply not exist in the English-based FCT.
Finally, variability may also come from the FCTs themselves, when they contain different versions of the same food item or when the nutritional content of an item is incorrect [e.g., when the record was saved using an incorrect unit such as kJ instead of kcal, such an erroneous record would stand as an aberrant value (i.e., as an outlier) compared to other similar food items]. Various statistical methods exist to detect outliers. However, outlier detection can only be attempted within a group of coherent, similar items. Also, the clustering needs to be done at a granularity that goes beyond the simple food group category. Whilst significant efforts have been spent on data integration and unit harmonization across FCTs, further efforts remain needed for quality control and data curation. In particular, there is a strong need for metrics to perform food item clustering and subsequently to detect and correct errors in FCTs.
In this study, we attempt automated remapping of a large number of food records (∼170,000 individual food records, corresponding to 4,200 distinct food items). The food diary data stemmed from one of the largest weight-maintenance dietary interventions of its kind: the Diet, Obesity and Genes study [DiOGenes (12)(13)(14)(15)]. Food items were matched to those referenced in EuroFIR. We define and evaluate an automated approach based on food name similarity. We also propose an additional approach, based on machine learning, to refine the mapping of difficult items. Finally, we compare the performance of our approaches when using the original food names or their English translations.
Ethics
The DiOGenes study (12)(13)(14)(15) was performed according to the latest version of the Declaration of Helsinki. Local ethical committees approved all procedures that involved human participants, and written informed consent was obtained from all participants. The present study did not use any clinical data or any individual-level data; only unique food elements (food items defined by their name and macronutrient composition) from food diaries were used.
Study Design and Participants
The DiOGenes study is a pan-European, multi-center, randomized controlled dietary intervention program (NCT00390637). The study was conducted in eight European countries: the Netherlands (NL), Denmark (DK), United Kingdom (UK), Greece (GR), Bulgaria (BG), Germany (D), Spain (SP), and Czech Republic (CR). The study has been described in detail previously (12)(13)(14)(15). Families eligible for inclusion consisted of at least one overweight (body mass index > 27 kg/m^2) but otherwise healthy parent aged less than 65 years with at least one healthy child between 5 and 18 years. All eligible adults (n = 932) followed a low-calorie diet (LCD) for 8 weeks. The LCD provided 800 kcal per day with the use of a meal-replacement product (Modifast, Nutrition et Santé France). Participants could also eat up to 400 g of vegetables (corresponding to a maximal addition of 200 kcal/day). Families with at least one parent achieving at least 8% weight loss were then included in a 6-month weight-maintenance diet (WMD) phase, following either a diet varying in protein content and glycemic index or a control diet.
Food Diary Data
Adults completed a 3-day weighed food record over three consecutive days, including two weekdays and one weekend day. Records were validated during an interview with qualified nutritionists. Food diaries were completed at screening, 2-4 weeks after randomization in the WMD, and 2-4 weeks before completion of the WMD. The participants were instructed to weigh all their foods and to supply information on brand names, cooking and processing. When weighing was not possible, participants were instructed to record the quantity in household measures (cups, glasses, tablespoons). All foods noted in these diaries were coded to foods listed in country-specific FCTs (15). Based on this coding and the recorded weight, the macronutrient and energy intake was computed for each record. Additional nutrient information such as sugar, starch, fiber, and mono- and polyunsaturated fat was retrieved. In total, 202,000 food records were collected during the study. Yet the retrieved information remained very partial and a high number of missing values was observed (about 7% missing values for energy content, 17-31% missing values for macronutrient variables, and more than 50% missing values for other variables). In this study, we used data from six centers (NL, DK, UK, GR, BG, and SP) for which an FCT from the same country was available in EuroFIR. FCT data from Germany and the Czech Republic, as provided by EuroFIR, require additional licenses (to be purchased directly from their respective data sources). Thus, in our proof-of-concept study, we focused the mapping on data available from the standard EuroFIR membership (providing access to FCTs from the six remaining DiOGenes countries). In total, the food diary records (∼170,000) pertained to a unique list of 4,179 food items. For each food item, both the original (local-language) and English-translated names were available. At the time of data retrieval (April 2016), the EuroFIR resource provided access to the following database versions: NL data from the NEVO database version 2014, DK data from the 2009 release, UK from 2008, GR from 2013, BG from 2009, and SP from 2010.
Food Name Comparison Using Fuzzy Matching
In computer science, comparing two names (strings of characters) is referred to as fuzzy matching: searching for a sequence of letters (a string) that approximately matches a pattern. A frequently used similarity metric is the Levenshtein distance (16), which computes the minimal number of single-character edits (insertion, deletion, substitution) needed to change one word into another. We used the partial token sort ratio from the FuzzyWuzzyR package (17). This approach is based on the Levenshtein distance and computes the ratio of the most similar substring (pattern) between two food names, where each name is split into words (tokens) and sorted prior to the comparison. The resulting score ranges from 0 (the two food names are completely distinct, with no common substring) to 100 (the two elements are identical, or one element is fully included in the other). Prior to computing the fuzzy matching metric, all punctuation marks (commas, semicolons, periods, question marks, etc.) were removed and all letters were set to upper case. Figure 1A summarizes the fuzzy matching concept and provides examples of similarity between food names.
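As a rough illustration of this idea (not the FuzzyWuzzyR implementation, and ignoring the "partial" substring aspect), a simplified token-sort similarity can be sketched with base R only; the function and variable names below are ours and not part of any released package.

```r
# Simplified token-sort similarity: clean the names, sort their tokens,
# then convert the Levenshtein distance (base R's adist) into a 0-100 score.
token_sort_score <- function(a, b) {
  clean <- function(x) {
    x <- toupper(gsub("[[:punct:]]", " ", x))                      # drop punctuation, upper-case
    paste(sort(strsplit(trimws(x), "\\s+")[[1]]), collapse = " ")  # sort the tokens
  }
  a <- clean(a); b <- clean(b)
  d <- as.numeric(adist(a, b))                                     # Levenshtein edit distance
  100 * (1 - d / max(nchar(a), nchar(b)))                          # similarity in [0, 100]
}

token_sort_score("Bread, white, toasted", "White toasted bread")   # returns 100
```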
Annotation Process to Review Food Item Matches
To evaluate whether the matches were plausible, the food names (English and original language) and the energy and macronutrient content were investigated. The decision on whether a DiOGenes-EuroFIR match was plausible was based on a holistic consideration of the food names, the type of processing applied to the food (e.g., "boiling" or "frying"), and the differences in several composition variables (e.g., energy content, fat, sugar content, etc.). In specific cases, e.g., when no English translation was available or the translation seemed inaccurate, the two matching elements were reviewed with a Google image search. In case of ambiguity, a conservative approach was used and the match was labeled as "non-plausible." Upon manual annotation of the matches, the overall performance (relevance of the mapping and number of items that can be mapped) was assessed using precision-recall curves (18). Specifically, each pair of items (the queried item and its match) has two important attributes: the outcome of the manual review ("plausible match" or "not plausible") and a single metric that aims to quantify the mapping confidence (e.g., the fuzzy matching score). The assumption is that items with high confidence scores are more likely to be "plausible matches" than items with lower scores. The challenge is to define a threshold on such a score to enable automated classification into either plausible or non-plausible matches. A precision-recall curve assesses the quality of mapping by investigating a large range of possible thresholds. For each threshold, an automated classification can be made for each pair of items and contrasted with the information from the manual review, which allows the precision and recall metrics to be derived. In a precision-recall curve, each point corresponds to a single threshold and reflects the precision and recall metrics obtained when using that threshold for automated classification. These curves are useful for two purposes: (1) identifying a classification threshold that provides satisfactory precision and recall performance; (2) comparing the overall performance of different classification models (with one curve per model).
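A minimal sketch of how such precision-recall points could be derived from the annotated matches is given below; 'score' and 'plausible' are assumed input vectors (the mapping-confidence metric and the TRUE/FALSE outcome of the manual review), not objects from the released packages.

```r
# For each candidate threshold, accept the pairs scoring at or above it and
# compute precision (fraction of accepted pairs that are plausible) and
# recall (fraction of queried items that get mapped at all).
pr_curve <- function(score, plausible, thresholds = 50:100) {
  do.call(rbind, lapply(thresholds, function(t) {
    kept <- score >= t
    data.frame(threshold = t,
               precision = mean(plausible[kept]),
               recall    = mean(kept))
  }))
}
```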
Food Item Comparison Using Machine Learning
To extend the fuzzy matching comparison, we defined a machine learning classifier to better distinguish between plausible and non-plausible matches. A machine learning classifier attempts to learn from the data and define a model that achieves good performance at predicting two classes (e.g., plausible/non-plausible match). Numerous approaches exist in the field of machine learning. Here, we used a C5.0 classification tree. C5.0 models are one type of classification tree and are extremely popular (19,20). A classification tree defines a decision process where each node in the tree is a test on an input variable and results in a binary decision forming two sub-nodes (sub-groups). Each sub-group is then tested again until a final decision can be made. In our analyses, the final decision corresponds to whether a given pair of items is a plausible match. Classification trees have the advantage of being easily interpretable compared to more complex models (e.g., neural networks). A C5.0 model tests the most informative variables first and defines a binary split that optimizes the homogeneity of the resulting sub-groups. In more detail, a C5.0 model is based on the concept of information entropy (a measure of the homogeneity within a group) and extracts informative patterns from the data to achieve a binary classification. Each node of the tree is built by defining a binary rule based on the variable that provides the maximal information gain (by defining the most homogeneous sub-groups). Each resulting node is then split again until no more splits are possible. This type of model is robust in the presence of missing data and the resulting classification rules are easily interpretable. In addition, the performance of classification can be significantly improved by using a boosting strategy (21). Boosting defines several models (some of which may have only moderate performance) and combines them into a better, consensus meta-model. Our C5.0 classification trees used as input variables the fuzzy matching score and the percentage difference in energy content between the two food items. To avoid biases when comparing energy content, the nutritional content of all food items (including DiOGenes and EuroFIR items) was scaled to 100 g portions.
The percentage difference in energy content was computed from the scaled (per 100 g) energy values of the two items. We built the C5.0 model using the following approach. First, we used a 20% random subset of the EuroFIR resource and computed all pairwise comparisons with the DiOGenes food items. Next, we restricted the list of all pairs (n > 3,300,000) to those with either:
• fuzzy matching score > 75% and absolute difference in energy content < 25%, or
• fuzzy matching score > 90%.
This smaller list (n = 2,625 pairs) was then manually reviewed to indicate in a new column whether the match was plausible (see the section above on the annotation process). From the list of annotated matches, we then built a boosted C5.0 classification tree using the C50 R package (22). Two models were trained, one based on a comparison between original food names and another based on a comparison using the English-translated food names. The resulting trees are shown in Supplementary Figure 1. These models allow, for a new pair of items, deriving the probability that the match is correct, given their similarity in terms of fuzzy matching and energy content (see illustration in Figure 1B). For simplicity in the manuscript, we refer to these outputs as the C5.0 probabilities. These probabilities were obtained using the predict function from the C50 R package.
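A minimal sketch of this step, using the C50 package, is shown below. The data frame 'pairs' and its column names are assumptions for illustration, and the number of boosting trials is not specified in the text.

```r
# Boosted C5.0 classifier on two input variables: fuzzy score and percentage
# difference in energy content, with the manual review outcome as the label.
library(C50)

pairs$plausible <- factor(pairs$plausible, levels = c("no", "yes"))

fit <- C5.0(x = pairs[, c("fuzzy_score", "energy_diff_pct")],
            y = pairs$plausible,
            trials = 10)                      # boosting; 10 trials is an assumed value

# probability that a new candidate match is plausible (the "C5.0 probability")
new_pair <- data.frame(fuzzy_score = 56, energy_diff_pct = 2)
predict(fit, new_pair, type = "prob")[, "yes"]
```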
Code Availability
Our code is available as R packages released under General Public Licenses (GPL). These packages make it possible to perform fuzzy matching comparisons against any FCT [FoodMapping package, released under GPL version 2 (GPL-2), available from https://github.com/armandvalsesia/Foodmapping] and to compute the probability that a match is correct based on pre-trained C5.0 models [FoodC5 package, released under GPL-3, available from https://github.com/armandvalsesia/FoodC5]. The code is optimized for a large number of comparisons and does not require access to high-performance computers or clusters. Documentation and a quick-start tutorial are also available.
Fuzzy Matching Concept
Nutritionists encoding food diaries initiate their searches by querying the food name in an FCT interface. Then, from a list of elements containing the queried name or keywords, the nutritionist decides which element is the closest match. When the returned elements have identical or near-identical names compared to the queried element, the decision could be automated, thereby reducing the number of items that require an expert decision. We thus sought to investigate possible approaches to map food items from the DiOGenes study onto the EuroFIR FCTs.
We first assessed the mapping of items from a single country for which the reference FCT is still available. During the DiOGenes study, food diaries from the Netherlands (NL) were mapped onto a release of the Dutch Food Composition Database [the NEVO database (23)]. NEVO has evolved and is still maintained. It is available from the Dutch National Institute for Public Health and the Environment and has been integrated into EuroFIR. EuroFIR keeps a unique food identifier code for each food item per country and keeps a reference to the original (FCT) food identifier code. In the DiOGenes food data, food identifier codes were also recorded and we were able to cross-match all 898 DiOGenes NEVO items onto the current NEVO release. Manual investigation showed that all these matches were correct. This mapping constitutes a very valuable resource to assess the performance of an automated approach, notably by assessing how many of these known matches can be retrieved. We thus applied the fuzzy matching approach described in Figure 2A. Out of 898 food items, 780 (87%) of the items mapped using fuzzy matching shared the same food identifier codes, indicating that the reference item was found. The remaining fraction (117 items) was investigated manually. We found that 72 of those (61%) were plausible matches. Thus, our automated approach was able to correctly remap 852/898 (95%) of the queried DiOGenes NL food items.
Next, we applied a similar strategy to 4,179 DiOGenes food items from six countries (NL, DK, UK, GR, BG, and SP) that have an FCT from the same country in EuroFIR. Among those items, 3,308 (80%) could be cross-matched using their food identifier codes, indicating that they had a clear equivalent. Using the fuzzy matching approach, we were able to remap all 3,308 items, with a global precision equal to 97% (Figure 2B). These results demonstrate that when a queried food item is already present in the FCT, our approach finds it.
Real-World, Large-Scale Example: Application to All Food Items
The proof of concept focused on items that could be matched by food identifier code, and thus were assumed to have an equivalent in EuroFIR. However, in a real-world problem, the fraction of items that is already present in the FCT is unknown. Also, for items without a direct equivalent, it is unknown whether those could be mapped onto a similar item.
We thus sought to apply the fuzzy matching approach to the items that could not be cross-matched based on their food identifier codes. This corresponded to 871/4,179 (20%) of the DiOGenes food items. By applying the same process (i.e., finding the best match) and upon manual review of the hits, we obtained annotated results: each pair of matching items was annotated as being a plausible match or an incorrect one. This annotation enabled deriving specific performance metrics for different thresholds on the fuzzy matching score. For example, the precision (percentage of correctly mapped items) can potentially be improved by considering pairs of items for which the fuzzy score is greater than a given threshold (e.g., 90% instead of 50%). However, increasing such stringency de facto limits the number of queried items that can be mapped. Estimating the recall rate (percentage of mapped items) is therefore another important indicator of performance. Figure 3 presents the performance (precision and recall) as a function of increasing fuzzy score thresholds. Two approaches were tested: a mapping using the original country food names (e.g., Danish names) and a mapping using the English-translated names. For each approach, Figure 3 presents the performance for items that were previously cross-matched based on their food identifier codes ("directly mappable"), items that could not be cross-matched ("other items"), and both types of items together ("all items").
As expected, the directly mappable items (green curves) achieve very good performance: already with a permissive fuzzy score filter (a 50% fuzzy score threshold), all items could be mapped (recall = 100%) and the precision was 97%. By contrast, the precision for the more difficult items ("other items") was poor, reaching only 43.6% when using the original food names. Similar precision was achieved when using the English-translated names. Therefore, for such difficult items, more stringent thresholds are required. Using thresholds at 70% increased the precision to 92.2 and 73%, respectively, for the mapping with original and English-translated names. However, the recall rates were reduced to 32.5 and 51.8%. The "all items" group is representative of a real-world mapping, and 20% of these items are the difficult ones. Here, the global performance was found acceptable with permissive fuzzy score thresholds. At a 50% threshold, the precisions were 88.7 and 79.1%, respectively, for the original and English-translated name mappings. For both mappings, the recall rates were > 99%. Using a threshold at 70%, the performance can be significantly improved, with precision > 94% and recall > 91%.
From Figure 3, specific fuzzy score thresholds can be derived to achieve high precision (e.g., 75%). Table 1 illustrates how a desired precision (e.g., 75 or 80%) would influence the fuzzy score thresholds. For example, when mapping items using their original name, a threshold of fuzzy score > 50% would enable more than 75% precision when mapping easy ("directly mappable") items, while a threshold > 63% would be required for difficult items (those without a direct equivalent). Therefore, to enable good precision for all items, the stringent threshold > 63% should be used. With such a threshold, the precision for all items would be close to 97% with a recall rate > 95% (Table 2).
When mapping items using their English translation, a different threshold should be used. For precision > 75%, the fuzzy score threshold should be > 75% (Table 1). This enables a precision close to 96.5% for all items, with a recall rate equal to 86.5% (Table 2).
A Machine Learning Approach to Refine Food Item Matching
As expected, using stringent thresholds to increase the precision leads to smaller recall rates. In particular, at precision > 75%, the recall rate for difficult items is less than 50% (Table 1). Since difficult items represent only 20% of all queried food items, this means that only about 10% of all queried items cannot be mapped automatically and would require expert-driven matching. This constitutes an improvement over the current situation (where all items are matched manually). Still, we sought to explore additional approaches to improve such matching.
Previous results showed that mapping food names using their English translation would not improve the recall compared to a mapping using the original names (recall rates were 34.8 vs. 50%, respectively). Therefore, additional data would be needed to improve the matching. We rationalized that such information should be easy to acquire. Inherently, if extensive information were already known about the macro- and micronutrient composition of a queried food item, it would mean that the mapping had already been done once. Instead, we sought to use a priori information about the food content that could be easy to acquire. We made the assumption that rough estimates of the total energy content could be obtained during the digitalization phase (i.e., converting paper-based food diaries to electronic diaries). Such estimates would be used to compute the difference in energy content between the queried and retrieved food items, which could potentially be useful for discriminating whether the match is plausible or not.
To assess this approach, we trained C5.0 classification trees (see section Materials and Methods). We observed that this approach had some potential to map items that could not be mapped with fuzzy matching alone. For example, when searching for "beefsteak raw," the top matching item would be "beef rump steak raw." However, the corresponding fuzzy score would only be 56% and may not pass stringent fuzzy score filters. Yet, the difference in energy content between these two items is relatively low (<2%) and thus the probability that the match is correct is very high (99.98%). Conversely, this approach could help discriminate between highly similar food names that pertain to very different food products. "hake raw" and "hare raw" differ only by one letter and thus the resulting fuzzy score would be high (88%). Yet, those two items differ by 25% in terms of energy content. With our C5.0 approach, the resulting probability that the match is correct would be relatively low (52%, i.e., close to a random guess).
TABLE 2 | Threshold required to achieve 75% precision for all three item classes ("Any," "Mappable," "Other"), with the fuzzy matching approach.
Large-Scale Performance of the Machine Learning Approach and Comparison With the Fuzzy Matching Approach
We next evaluated the performance of our machine learning approach and compared it to the previous results using only the fuzzy scores. Table 3 shows the thresholds required to achieve 75 or 85% precision, and Table 4 provides the performance when using a single threshold that achieves at least 75% precision for all item categories. The performance (precision vs. recall) of all our approaches (fuzzy scores/C5.0 combined with either original- or English-translated food name mapping) is shown in Figure 4. When mapping the "Any items" list, all approaches had very good performance (strong precision and recall). With relaxed thresholds (fuzzy score > 50% or C5.0 probability > 50%), all four approaches led to comparable performance, with precision > 78% and recall > 97%. The performance can be further decomposed to distinguish between "easy cases" (those that exist in the FCTs, i.e., "directly mappable") and "difficult cases" (those without an equivalent). For easy cases, there was no improvement with the machine learning approach compared to the fuzzy matching approach. In fact, the fuzzy matching approach had a better precision rate. However, for difficult cases, the machine learning approach significantly improved the recall rate while keeping a comparable precision. With thresholds ensuring > 75% precision, the recall rate increases from 50 to 57% when mapping items with their original names. When mapping items using their English-translated names, the recall rate improves from 34.8 to 55.1%.
DISCUSSION
In this study, we explored computational approaches to automate food item mapping onto FCTs. To date, noticeable emphasis has been placed on collecting FCT data, enabling some level of harmonization across FCTs, and facilitating data access through databases and user interfaces (1)(2)(3)(4)(5). Yet, this did not address the problem of automatically mapping food records.
We found that the simpler approach, fuzzy matching, provided very good performance. Under a relaxed threshold (fuzzy score > 50%), this approach remapped 99.49% of the queried items with a precision equal to 88.75%. With a slightly more stringent threshold (fuzzy score > 63%), the precision could be significantly improved to 96.81% while keeping a recall rate > 95% (i.e., only 5% of the queried items would not be mapped). The more complex approach (based on a C5.0 classifier) increased the recall rate for difficult items and could potentially be used for items that cannot be mapped with fuzzy matching.
In this study, we mapped the DiOGenes food items using six FCTs available from the EuroFIR resource. However, our approach and code implementation are FCT-agnostic and can be used with any other data source. This provides the flexibility to use many different FCTs together and to use any customized/private FCT. Also, the starting point is only food names, which makes the approach easily applicable to any other nutritional study and to other types of dietary assessment methods.
All the code required to perform fuzzy matching and compute the C5.0 probabilities is freely available as open-source R packages (available from GitHub). Our implementation is optimized for a large number of comparisons: on a standard laptop (with a 2.3 GHz Intel Core i7 processor), 100,000 pairwise comparisons take < 20 s and 1,000,000 comparisons take 205 s (∼3.5 min). Significant speed improvements can be made using parallel computing (either with multi-threading or using distributed computing on a grid). Quick-start tutorials and documentation are available, and the code can be used with a very basic knowledge of the R language.
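As an illustration of the kind of parallelisation mentioned here (the packages' own parallelisation strategy may differ), the comparisons could be spread over CPU cores with base R's parallel package; 'diogenes_items' and 'eurofir_items' are assumed character vectors of food names, and token_sort_score is the simplified scorer sketched earlier.

```r
# Fork one worker per core (minus one) and score each query against all
# reference names; the result is a list with one numeric vector per query.
library(parallel)

scores <- mclapply(diogenes_items, function(query) {
  vapply(eurofir_items, token_sort_score, numeric(1), b = query)
}, mc.cores = max(1, detectCores() - 1))
```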
Traditionally, food mapping was performed manually, with each single item queried against a reference FCT. The expert would then need to sift through the list of retrieved items, identify the most relevant match, and somehow export the required information (e.g., either a food identifier code or directly the available nutrient composition). Such a process is time-consuming; in our experience with the DiOGenes study and other dietary interventions, it takes on average 5 min per item (with a range between 3 and 8 min, depending on the item complexity and the nutritionist's familiarity with the food item). By contrast, our approach fully automates the mapping and can complete over a million comparisons within a few minutes, without the need for human intervention.
Such a fast and deterministic process makes it possible to rerun the mapping with the newest releases of FCTs and to acquire additional information on nutrient composition. For example, in the initial manual DiOGenes food item mapping, nutrient composition was retrieved for macronutrients and 13 nutrient variables. Using our automated approach, all nutrient composition variables available from the queried FCT can be retrieved (for example, an automated mapping onto NEVO retrieves more than 128 composition variables). On average over the six DiOGenes centers, an additional 20 nutrients were added to the food record information. Significant improvement was also achieved for the amount of missing values. The initial records had 17-31% missing values for macronutrient variables and more than 50% missing values for other variables. Upon automated remapping, these percentages were reduced to below 2.3% for macronutrients and below 10% for other variables (except for alcohol content, whose percentage of missing values and strictly zero values was reduced from 98 to 91%). An additional benefit of the automated approach is the ability to quantify mapping uncertainty. With manual mapping, the uncertainty cannot easily be quantified by objective means (if at all) and is typically not captured. By contrast, the automated approach computes a mapping confidence metric (similarity or probability), which can be used for ad hoc post-filtering and could also be taken into account in subsequent statistical analyses.
Throughout our food item review, we observed variability between different versions of the same food item. For example, a 100 g portion of raw garlic would be recorded with an energy content varying between 305 and 670 kcal. We did not observe food items recorded with incorrect units for energy composition (kJ instead of kcal). Yet, with such a volume of information, data curation (including the detection and correction of errors) remains a challenge, and a thorough review of each composition variable cannot be performed without automated approaches. Specifically, automated outlier detection (in food nutrient values) would help curation and provide new tools for quality control. However, outlier detection can only be performed when similar items are grouped together. While FCTs provide a food group label that could help pre-cluster food items, this information remains very incomplete: in our EuroFIR subset, about 25% of the food items have no food group information. Our fuzzy matching approach could help with this issue. It can be used as a similarity metric and would enable similar items to be clustered together. From such clusters, the individual composition variables can then be assessed to identify potential outliers, as sketched below. Such an approach would help to further improve the quality and completeness of FCTs.
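As a rough sketch of the outlier-detection idea outlined above, one could group items whose pairwise fuzzy score exceeds a cutoff and flag nutrient values that deviate strongly within a group; the grouping rule, the cutoff of 80 and the 2-sigma criterion below are illustrative assumptions, not the authors' procedure.

```python
# Illustrative sketch: cluster items by fuzzy similarity, then flag energy outliers.
# Item names, kcal values and the 2-sigma rule are assumptions for demonstration only.
from rapidfuzz import fuzz
from statistics import mean, stdev

items = [("Garlic, raw", 149), ("Garlic raw", 670), ("Raw garlic", 305),
         ("Onion, raw", 40)]

# Greedy single-linkage grouping on fuzzy score > 80
groups = []
for name, kcal in items:
    for g in groups:
        if fuzz.token_sort_ratio(name, g[0][0]) > 80:
            g.append((name, kcal))
            break
    else:
        groups.append([(name, kcal)])

for g in groups:
    if len(g) < 3:
        continue
    values = [kcal for _, kcal in g]
    m, s = mean(values), stdev(values)
    for name, kcal in g:
        if s > 0 and abs(kcal - m) > 2 * s:
            print(f"possible outlier: {name} ({kcal} kcal vs group mean {m:.0f})")
```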
Current efforts in data integration and harmonization across FCTs (1-5) focus on renaming nutrient composition variables using a unified nomenclature and deriving the composition values using consistent units. While this is necessary for combining FCTs in a consistent manner, it does not solve the missing value issue. This situation exists within a single FCT, where different food items inherently have missing values for one or more composition variables (nutrients). The problem is magnified when combining data across different FCTs that cover different numbers of nutrients. For example, NEVO is by far the richest database (with information for 128 different nutrients), whilst other FCTs provide information for macronutrients and only a few micronutrients. While there is some guidance on how to estimate missing nutrient values (24), it is a manual, expertise-driven decision, and the literature remains scarce on the imputation of missing values in FCTs using computational approaches. There is some guidance for recipe calculation (25); however, that does not solve the issue at the ingredient level. Our fuzzy matching approach could be used to cluster similar food items together (independently of whether they are cooked foods or single ingredients) and could potentially prove useful for imputing the missing values (using a strategy similar to traditional k-nearest neighbors; see the sketch after this paragraph). Such an imputation process could also be improved by using a composite measure of similarity based on both fuzzy scores and similarity in terms of the available nutrient composition. Food item clustering based on fuzzy matching also opens new possibilities with respect to FCT data integration. It would make it possible to keep track of modifications between different versions of the same FCT. Finally, with the availability of FCTs from different countries, such clustering would make it possible to reduce redundancy across different FCTs and to derive a single, more comprehensive meta-FCT.
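A toy version of the k-nearest-neighbour-style imputation suggested above could fill a missing nutrient value with a fuzzy-similarity-weighted average over the most similar item names; the weighting scheme, k = 3 and the miniature table below are assumptions made purely to illustrate the idea.

```python
# Toy sketch of fuzzy-similarity kNN imputation of a missing nutrient value.
from rapidfuzz import fuzz

table = {  # item name -> vitamin C (mg/100 g); None = missing (toy values)
    "Orange, raw": 53.0, "Orange juice": 50.0, "Orange marmalade": 10.0,
    "Apple, raw": 4.6, "Oranges, fresh": None,
}

def impute(name, k=3):
    # k most similar items with a known value, weighted by their fuzzy score
    neighbours = sorted(
        ((fuzz.token_sort_ratio(name, other), val)
         for other, val in table.items() if other != name and val is not None),
        reverse=True)[:k]
    total = sum(score for score, _ in neighbours)
    return sum(score * val for score, val in neighbours) / total if total else None

for name, val in table.items():
    if val is None:
        print(f"{name}: imputed vitamin C ≈ {impute(name):.1f} mg/100 g")
```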
In summary, we propose strategies to perform food item mapping at large scale. Our extensive benchmark demonstrates that both high precision and high recall can be achieved. Previously, food mapping was a manual, time-consuming and expertise-driven process. These new tools provide a powerful alternative for clinicians and nutritionists, who previously performed these tasks manually. In addition to significantly reducing the burden and saving time, they make the process fully reproducible, allowing specific matches to be revisited in a deterministic manner.
To the best of our knowledge, this is the first time that automated solutions have been proposed for this task. These methodologies and findings are useful for any nutritional study (observational as well as interventional) and can be applied in both small and large data collections.
AUTHOR CONTRIBUTIONS
AV: Conceived, designed, and supervised the present study; WS and AA: Designed and supervised the DiOGenes clinical trial; JH: Contributed data; AV: Developed the statistical approach and the proof of concept; ML: Implemented the approach and optimized the R code implementation; AV and ML: Analyzed and interpreted the data; AV: Wrote the paper with input from all authors. All authors read and approved the paper. AV had primary responsibility for final content.
ACKNOWLEDGMENTS
We are thankful to Jérôme Carayol and Hélène Ruffieux for fruitful statistical discussions. We are also very grateful to Soren Solari and Yariv Levy for useful discussions at early phase of the project. Finally, we thank Radu Popescu for precious help with continuous integration.
Interplay of filling fraction and coherence in symmetry broken graphene p-n junction
The coherence of quantum Hall (QH) edges is the deciding factor in demonstrating an electron interferometer, which has the potential to realize a topological qubit. A graphene p−n junction (PNJ) with co-propagating spin- and valley-polarized QH edges is a promising platform for studying an electron interferometer. However, although a few experiments have addressed such PNJs via conductance measurements, the edge dynamics (coherent or incoherent) of QH edges at a PNJ, where either the spin or the valley symmetry or both are broken, remain unexplored. In this work, we have carried out measurements of conductance together with shot noise, an ideal tool to unravel the dynamics, at low temperature (∼ 10 mK) in a dual graphite-gated hexagonal boron nitride (hBN) encapsulated high-mobility graphene device. The conductance data show that the symmetry broken QH edges at the PNJ follow spin-selective equilibration. The shot noise results as a function of both p and n side filling factors (ν) reveal a unique dependence of the scattering mechanism on the filling factors. Remarkably, the scattering is found to be fully tunable from the incoherent to the coherent regime with an increasing number of QH edges at the PNJ, shedding crucial light on graphene-based electron interferometers.
1 Introduction: Ever since the realization that charge and energy are carried by the edge states in a QH system, interest in edge dynamics has surged both theoretically and experimentally. Understanding the edge dynamics is key to building an electron interferometer suitable for exploring exotic phenomena like fractional statistics, quantum entanglement and non-abelian excitations [1-5]. A graphene p−n junction, naturally harboring co-propagating electron- and hole-like edge states, offers an ideal platform [6-15] to study the edge, or equilibration, dynamics. The equilibration of such edge states is predicted to be facilitated by inter-channel tunnelling via either an incoherent or a coherent scattering mechanism 16-24, depending on the microscopic details of the interface. As suggested by Abanin et al. 17, for a graphene PNJ interface with random disorder the edge mixing is expected to be dominated by the incoherent process. In the opposite limit, a cleaner PNJ interface 16,22,25,26 is expected to exhibit coherent scattering. A cleaner PNJ interface is also very intriguing for studying the equilibration dynamics, as it hosts spin and valley symmetry broken, polarized QH edges [27-30]. Although there are several conductance measurements 26,31,32 showing spin-selective partial equilibration of the edges, the equilibration dynamics of symmetry broken QH edges at a PNJ is still unknown.
Shot noise is a quintessential tool for unravelling the equilibration dynamics of a junction, and it is usually characterized by the Fano factor (F), the ratio of the actual noise to its Poissonian counterpart.
For coherent and incoherent scattering, F = (1 − t) and t(1 − t), respectively 17,33-36, with t being the average transmission of the PNJ. So far, shot noise studies 37,38 of graphene PNJs in the QH regime have been performed on Si/SiO2 substrate-based devices, where spin-valley symmetry broken conductance plateaus are not observed and the measured Fano 37,38 agrees fairly well with the incoherent model 17 owing to the disorder-limited interface. Moreover, those shot noise measurements focused on the lowest filling factor (ν = ±2), so the dependence of F on the filling factors (ν) is lacking. More importantly, there are no shot noise studies of spin-valley symmetry broken QH edges at a graphene PNJ.
With this motivation, we have carried out conductance together with shot noise measurements at a PNJ realized in a dual graphite-gated hBN-encapsulated high-mobility graphene device. From the conductance measurements, we show that the spin and valley degeneracies of the edge states are completely lifted and that, at the PNJ, the edge states undergo spin-selective partial equilibration. Our shot noise data as a function of the filling factors show the following important results: (1) the Fano factor strongly depends on the filling factors; it increases monotonically with the p side filling factor, whereas it varies only slowly with the n side filling factor. (2) For lower p side filling factors (ν_p ≤ 2), the variation of the Fano factor matches well the Fano calculated from the incoherent scattering model, whereas for higher p side filling factors (ν_p ≥ 4) it follows the coherent scattering model. These results reveal a crossover of the scattering process from the incoherent to the coherent regime in the equilibration of QH edges, which had not been observed in previous shot noise studies 37,38.
2 Results: Measurement set-up: The schematic of our device with the measurement set-up is shown in Figure 1(a). The PNJ device is fabricated by placing an hBN-encapsulated graphene on top of two graphite gates, BG1 and BG2, each of which can independently control the carrier density of one half of the graphene (details in supporting information (SI), figure SI-1). The PNJ (width ∼ 10 µm) is obtained at the interface of BG1 and BG2 by applying opposite voltages to the gates. During our entire measurement, the BG1 (BG2) side is maintained as n (p) doped by setting the gate voltage V_BG1 > 0 (V_BG2 < 0). When a perpendicular magnetic field is applied to the graphene, chirally opposite QH edge states co-propagating along the PNJ are created, as shown by the colored arrow lines in Fig. 1(a). As shown in the figure, the current (I_in) injected in the p doped region is carried by clockwise edge states towards the PNJ. After partitioning at the PNJ, the transmitted current (I_t) in the n doped region and the reflected current (I_r) in the p doped region are carried by the outgoing anti-clockwise and clockwise edge states, respectively. The shot noise generated by the partitioning at the PNJ is carried by both the transmitted and reflected paths. To measure I_t and the shot noise, the measurement set-up consists of two parts: 1) a low-frequency (∼ 13 Hz) part, which determines I_t by measuring the voltage drop V_m in the n doped region with a lock-in amplifier (LA), as shown in Fig. 1(a) (also see SI-2(a)); and 2) a high-frequency shot noise part, in which a DC current (I_in) is injected into the p doped region and the generated noise is measured on the reflected side using an LCR resonant circuit at ∼ 765 kHz, as shown in Fig. 1(a) (described in detail in SI-2(b)). All the measurements were performed at a magnetic field of 8 T inside a cryo-free dilution refrigerator (base temperature ∼ 10 mK), whose mixing chamber plate serves as the cold ground (cg in Fig. 1(a)).
Conductance measurement: Figure 1(b) shows the trans-resistance R_t = V_m/I_in measured as a function of the gate voltages V_BG1 and V_BG2. R_t exhibits a checkerboard-like pattern of plateaus corresponding to the different combinations of ν_p and ν_n, where ν_p and ν_n are the p and n side filling factors, respectively (details in SI-3(a)).
[Figure 1: (a) Schematic of the device and measurement set-up (described in the text). (b) Trans-resistance R_t = V_m/I_in as a function of V_BG1 and V_BG2, showing a checkerboard-like pattern corresponding to the different combinations of ν_p and ν_n; the white dashed squares on the (ν_p, ν_n) = (−2, 2) plateau mark the points of the same plateau at which noise data were taken. (c) Spin configuration of the edge states for the two different ways of lifting the LL degeneracy: spin- and valley-polarized ground states. (d) Measured transmission t (open circles) of the PNJ versus ν_n for ν_p = −1, with the calculated t for full equilibration (blue dashed line) and spin-selective partial equilibration (red dashed line).]
The transmittance t = I_t/I_in of each plateau is determined from R_t as t = |ν_n| R_t/(h/e²), where V_m = I_t R_h and R_h = (h/e²)/|ν_n| is the QH resistance of the n doped region. Figure 1(d) shows the measured values of t (open circles) as a function of ν_n for ν_p = −1, together with the corresponding theoretical values for full equilibration 17, t = ν_n/(ν_p + ν_n) (blue dashed line), and for spin-selective partial equilibration 31,32,39, in which the up- and down-spin channels equilibrate separately and ν_p↑ (ν_p↓) and ν_n↑ (ν_n↓) are the total numbers of up (down) spin edge channels of the p and n doped regions, respectively (red dashed line; SI-5). For spin-selective equilibration, two possible sequences of spin polarization of the edge states (valley- or spin-polarized ground state) are shown in Fig. 1(c) 32. The red dashed line in Fig. 1(d) is based on the spin structure of the spin-polarized ground state and is in very good agreement with the measured t. Note that the other spin sequence also gives good agreement with the experimental data. For simplicity, we present only one of them (the spin-polarized ground state) throughout the manuscript. The measured t and the values calculated from spin-selective equilibration for the other plateaus are also in very good agreement and are shown in SI-5.
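As a quick numerical illustration of the relation t = |ν_n| R_t/(h/e²) and of the full-equilibration prediction t = ν_n/(ν_p + ν_n), the short sketch below converts a trans-resistance plateau into a transmittance; the R_t value is an invented placeholder (not data from the paper), and the filling factors are entered as magnitudes.

```python
# Convert a trans-resistance plateau into a PNJ transmittance and compare with
# the full-equilibration prediction. R_t below is a made-up placeholder value.
h_over_e2 = 25812.807  # von Klitzing constant (h/e^2) in ohms

def transmittance(R_t, nu_n):
    """t = |nu_n| * R_t / (h/e^2)."""
    return abs(nu_n) * R_t / h_over_e2

def t_full_equilibration(nu_p, nu_n):
    """t = nu_n / (nu_p + nu_n), with filling factors taken as magnitudes."""
    return abs(nu_n) / (abs(nu_p) + abs(nu_n))

nu_p, nu_n = -1, 2
R_t = 6000.0  # ohms, placeholder plateau value
print(f"measured-style t    = {transmittance(R_t, nu_n):.2f}")
print(f"full equilibration t = {t_full_equilibration(nu_p, nu_n):.2f}")
```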
Shot noise measurement: In this section, we present the results of our shot noise measurement. The shot noise generated at the PNJ is measured at the reflected side (p side) as a function of I in , as shown in Fig. 1(a).
In general, the measured current noise (S_I) contains both thermal and shot noise and follows the expression given in equation (1), where V_sd is the applied bias voltage across the PNJ, T is the temperature and k_B is the Boltzmann constant. For eV_sd > k_B T, shot noise dominates over thermal noise and S_I becomes linear in I_in, as shown in Figure 2(a) for the (ν_p, ν_n) = (−2, 2), (−3, 3) and (−4, 4) filling factor plateaus. The red lines in Fig. 2(a) are fits using equation (1). The slopes of the fits have been used to determine the normalized noise magnitude F* = S_I/(2eI_in). The Fano factor F = S_I/(2eI_t) is then obtained as F = F*/t, which is the conventional way of characterizing the noise and was used in the previous shot noise studies on graphene PNJs 37,38. Figure 2(b) shows the histogram of F obtained from the noise data taken at several points (∼ 50) on each checkerboard plateau, as indicated by the white dotted squares in Fig. 1(b) for (ν_p, ν_n) = (−2, 2). The histograms are fitted with a Gaussian function, as shown by the solid red lines in Fig. 2(b) for the (ν_p, ν_n) = (−2, 2), (−3, 3) and (−4, 4) plateaus. The histograms have a maximum at a certain value of F (the mean value), which depends on the filling factors (ν_p, ν_n). The noise data and the corresponding histograms for some other plateaus are shown in the SI (SI-14, SI-15 and SI-16). We note that, to pinpoint the exact scattering mechanism, the accuracy of the extracted Fano factor is essential. This accuracy depends on the amplifier gain, on the noise from the contacts, and on having sufficient statistics. The precise gain calibration is shown in SI-4, and the measured contact noise as a function of filling factors is shown in SI-13. The contact noise has been subtracted in the histogram plots shown in Fig. 2(b) as well as in the SI.
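For reference, the standard thermal-to-shot-noise crossover expression commonly used for such fits is given below; it is offered only as a plausible form of equation (1), not as the paper's verbatim formula, with G the junction conductance setting the equilibrium noise.

```latex
% Assumed standard crossover form (plausible shape of equation (1)):
S_I(I_{in}) \;=\; 4 k_B T\, G \;+\; 2 e I_t\, F
   \left[\coth\!\left(\frac{e V_{sd}}{2 k_B T}\right) - \frac{2 k_B T}{e V_{sd}}\right]
```

In the shot-noise-dominated regime, eV_sd ≫ k_B T, the bracket approaches unity and S_I becomes linear in the current, so the slope yields F* = S_I/(2eI_in) and F = S_I/(2eI_t) = F*/t, as quoted in the text.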
Discussion:
Fano versus filling factor: The measured values of F (mean values) as a function of the filling factors are shown in Fig. 3 as open circles with error bars (the standard deviations of the Gaussian fits in Fig. 2(b)). In Figs. 3(a) and 3(b), F is plotted as a function of ν_p while the n side filling factor is kept fixed at ν_n = 2 and ν_n = 5, respectively. F increases monotonically from ∼ 0.05 to 0.6 with increasing ν_p. Similarly, in Figs. 3(c) and 3(d), F is plotted as a function of ν_n while the p side filling factor is kept fixed at ν_p = −2 and ν_p = −4, respectively. In this case, however, F does not increase monotonically with ν_n but varies slowly around ∼ 0.2 and 0.6 for ν_p = −2 and ν_p = −4, respectively. A similar dependence of F on ν_p or ν_n for other fixed values of ν_n or ν_p is shown in the SI (SI-11 and SI-12).
Comparison with theoretical models: To understand the above results, we calculate F theoretically for the coherent and incoherent processes. In coherent scattering, the hot carriers injected from the p side (Fig. 1(a)) scatter coherently to the n side, and the inter-channel scattering can be described by a scattering matrix approach 33,34. In this case, F follows (1 − t), similar to a quantum point contact (QPC) (details in SI-6). Furthermore, for our symmetry broken PNJ we also impose the constraint that the two opposite spin channels do not interact with each other 32,39. The Fano factor can then be written as a transmitted-current-weighted average of (1 − t) over the two spin species, where t_↑ = ν_n↑/(ν_n↑ + ν_p↑) and t_↓ = ν_n↓/(ν_n↓ + ν_p↓) are the transmittances of the up and down spin channels, respectively (SI-5). The calculated F_coherent is shown as red dashed lines in Fig. 3 (SI-6). F_coherent increases with ν_p but decreases with ν_n, which can be qualitatively understood from the fact that the transmittance of the PNJ decreases with ν_p and increases with ν_n. For incoherent scattering we consider both the quasi-elastic and the inelastic processes 17,34. In the quasi-elastic case, known as the chaotic cavity model, the hot carriers injected from the p side scatter to the n side and subsequently scatter back and forth due to the presence of disorder along the PNJ, giving rise to a double-step distribution 17,34,36,37,40,41. Following Abanin et al. 17, the expression for F is t(1 − t), and for our symmetry broken PNJ the Fano factor can be written as F_incoherent = (ν_p↑ t_↑²(1 − t_↑) + ν_p↓ t_↓²(1 − t_↓))/(ν_p↑ t_↑ + ν_p↓ t_↓). The calculated F_incoherent is shown as blue dashed lines in Fig. 3 (SI-6). F_incoherent remains almost constant around F ∼ 0.2 and is much smaller in magnitude than F_coherent. Note that the Fano values calculated using inelastic scattering as described by Abanin et al. 17 are very similar in magnitude to the quasi-elastic case (SI-10(e)). The monotonic increase of F with ν_p in Figs. 3(a) and 3(b) contradicts the incoherent scattering model and is consistent with the coherent case, except for the lower values of ν_p. However, the measured F as a function of ν_n for ν_p = −2 matches very well the incoherent scattering model, whereas for ν_p = −4 it matches the coherent model. This suggests a crossover from the incoherent to the coherent regime with an increasing number of edge channels on the p side. This is further verified in Fig. 4, where the measured F, plotted as a function of |ν_p| = |ν_n|, increases monotonically from ∼ 0.2 to ∼ 0.6, moving from the incoherent toward the coherent prediction (∼ 0.25 and 0.5, respectively). We believe that screening might play a significant role in the dynamics, as observed in GaAs-based 2DEGs 40,42-45. The coherent scattering dominates as the screening increases with a larger number of participating edges at the PNJ.
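To make the two model curves concrete, the following sketch evaluates the spin-resolved transmittances defined above and the incoherent Fano expression quoted in the text; the coherent expression is implemented as the analogous transmitted-current-weighted average of (1 − t_σ), which is our assumption, and the channel numbers used are illustrative only.

```python
# Spin-resolved Fano factors for a symmetry-broken graphene PNJ.
# nu_p_up etc. are numbers of up/down spin edge channels (illustrative values).
# The incoherent formula follows the text; the coherent weighting is an assumed
# reconstruction of the corresponding expression.

def spin_transmittances(nu_p_up, nu_p_dn, nu_n_up, nu_n_dn):
    t_up = nu_n_up / (nu_n_up + nu_p_up)
    t_dn = nu_n_dn / (nu_n_dn + nu_p_dn)
    return t_up, t_dn

def fano_incoherent(nu_p_up, nu_p_dn, t_up, t_dn):
    num = nu_p_up * t_up**2 * (1 - t_up) + nu_p_dn * t_dn**2 * (1 - t_dn)
    den = nu_p_up * t_up + nu_p_dn * t_dn
    return num / den

def fano_coherent(nu_p_up, nu_p_dn, t_up, t_dn):
    # assumed: current-weighted average of (1 - t) over the two spin species
    num = nu_p_up * t_up * (1 - t_up) + nu_p_dn * t_dn * (1 - t_dn)
    den = nu_p_up * t_up + nu_p_dn * t_dn
    return num / den

# Example: |nu_p| = 4 split as 2 up + 2 down, nu_n = 2 split as 1 up + 1 down
t_up, t_dn = spin_transmittances(2, 2, 1, 1)
print(f"F_incoherent = {fano_incoherent(2, 2, t_up, t_dn):.2f}")
print(f"F_coherent   = {fano_coherent(2, 2, t_up, t_dn):.2f}")
```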
Conclusion:
In summary, we have carried out conductance together with shot noise measurements on a high-quality graphene p−n junction with spin and valley symmetry broken quantum Hall edges, for the first time. We have shown that the conductance data follow spin-selective partial equilibration and, most importantly, our shot noise data reveal the intricate dependence of the Fano factor on the filling factors, with a crossover in the dynamics from the incoherent to the coherent regime, which cannot be obtained from conductance measurements alone. These results will help to design future electron-optics experiments using the polarized QH edges of graphene.
Methods:
Device fabrication: To make the encapsulated device, the hBN and graphene flakes, as well as the graphite flakes for the bottom gates, were exfoliated from bulk crystals onto Si/SiO2 substrates. Natural graphite crystals were used for exfoliating the graphene and the graphite flakes. The suitable flakes for the device were first identified under an optical microscope and then sequentially assembled with the residue-free polycarbonate-PDMS stamp technique [46-48]. We used 15 nm and 25 nm thick hBN flakes for encapsulating the graphene flake and 10-15 nm thick graphite flakes for the bottom gates. To make the metal edge contacts on the device, the contacts were first defined by e-beam lithography. Then, along the defined region, only the top hBN flake was etched away using a CHF3/O2 plasma. After that, Cr(2 nm)/Pd(10 nm)/Au(70 nm) was deposited using thermal evaporation.
Shot noise set-up: To measure the shot noise, the voltage noise generated by the device is first filtered by a superconducting resonant LC tank circuit with a resonance frequency of 765 kHz and a bandwidth of 30 kHz 49,50. The filtered signal is then further amplified by a HEMT cryogenic amplifier followed by a room-temperature amplifier. The amplified signal is fed to a spectrum analyzer, which measures the r.m.s. value of the signal. The gain of the amplifier chain is determined from the temperature dependence of the thermal noise on the ν_p = −2 filling factor plateau, while the n side is in the insulating state. The thermal noise measurement is carried out using the same noise circuit. AP and MRS contributed to the device fabrication, data acquisition, and analysis. CK contributed to the noise setup and preliminary experiments. AD contributed to conceiving the idea and designing the experiment, data interpretation, and analysis. KW and TT synthesized the hBN single crystals. All authors contributed to writing the manuscript.
Systematic manipulation of the surface conductivity of SmB$_6$
We show that the resistivity plateau of SmB$_6$ at low temperature, typically taken as a hallmark of its conducting surface state, can systematically be influenced by different surface treatments. We investigate the effect of inflicting an increasing number of hand-made scratches and microscopically defined focused ion beam-cut trenches on the surfaces of flux-grown Sm$_{1-x}$Gd$_x$B$_6$ with $x =$ 0, 0.0002. Both treatments increase the resistance of the low-temperature plateau, whereas the bulk resistance at higher temperature largely remains unaffected. Notably, the temperature at which the resistance deviates from the thermally activated behavior decreases with cumulative surface damage. These features are more pronounced for the focused ion beam treated samples, with the difference likely being related to the absence of microscopic defects like subsurface cracks. Therefore, our method presents a systematic way of controlling the surface conductance.
I. INTRODUCTION
Over the past decade, the proposed topological Kondo insulator SmB 6 has seen a surge of research interest [1] despite its more than half-a-century-old history [2,3]. This interest stems from a combination of complex correlated electron physics and the proposed topologically non-trivial surface states resulting from spin-orbit-driven band inversion in the bulk [4][5][6].
Irrespective of the direct involvement of the surface in the topical physics, relatively little is known about its properties. While the bulk of SmB 6 is known for its intermediate and temperature-dependent Sm valence of approximately 2.6 at low temperature [2,7-10], the valence at the surface appears to be closer to 3+ [9,10]. This change in valence could be related to the formation of Sm 2 O 3 near the surface, resulting from oxidation of the near-surface Sm. A changed surface chemistry may also shift the chemical potential at the surface [11] and may lead to time-dependent surface properties [12]. Consequently, in numerous studies relying on highly surface-sensitive techniques like scanning tunneling microscopy and spectroscopy (STM/S) [13-19] and angle-resolved photoemission spectroscopy (ARPES) [20-26], SmB 6 surfaces were prepared by in situ cleaving under ultra-high vacuum (UHV) conditions. However, SmB 6 is difficult to cleave, and surfaces perpendicular to the main crystallographic axes of the cubic structure (space group Pm-3m) are polar in nature, often giving rise to (2×1) reconstructed surfaces. Notably, even cleaved surfaces may exhibit valence inhomogeneities [27] and band-bending effects [26,28,29].
Also in the case of less surface-sensitive techniques, like resistivity measurements, surfaces often need to be prepared, e.g. by polishing or etching (see e.g. [30-38]). However, such surface preparation may influence the surface itself, e.g. by disrupting the crystal structure at the surface, introducing impurities or, again, changing the Sm valence. One particularly interesting example is the creation of so-called subsurface cracks by rough polishing [39]. These subsurface cracks constitute additional surfaces with their own surface states, which conduct in parallel to the actual sample surface. Hence, care has to be taken when comparing different results, since differences in the applied surface preparation procedure may result in differences in the measured properties. To make things more complicated, there can also be differences between samples grown by either the floating zone or the Al flux technique, not only intrinsically [38,40-43] but also with respect to the impact of surface preparation, as shown exemplarily for etched surfaces [38].
We here apply a systematic way of manipulating the sample surface by utilizing a focused ion beam (FIB), complemented by a rather crude surface scratching. The low-temperature resistivity plateau of our flux-grown SmB 6 samples, typically taken as a hallmark of the conducting surface state, can be influenced considerably, yet consistently, by both surface treatments, indicating impaired surface states. As expected, the thermally activated transport across the bulk gap is not affected significantly by the surface treatments.
II. EXPERIMENTAL
The Sm 1−x Gd x B 6 samples used in this study were grown by the Al flux technique [44] with Gd contents x = 0 and 0.0002. The tiny amount of Gd in the latter samples was confirmed by magnetic susceptibility measurements. It allowed electron spin resonance (ESR) measurements, which will be reported elsewhere [45]. We did not observe any noticeable differences in the properties reported here between samples with x = 0 and 0.0002. Therefore, we concentrate in the following on the samples with x = 0.0002, which were studied more extensively. Energy-dispersive x-ray (EDX) spectroscopy conducted within our FIB equipment at pressures in the 10 −6 mbar range did not show any elements other than Sm, B, O and Al, with Gd being below the detection limit. Upon using the FIB to remove a layer of a few µm thickness, the Al signal is no longer detectable within these sputtered areas.
As a crude way of disrupting the surface we cut lines by means of a diamond scribe. Because of the hardness of SmB 6 , considerable force had to be applied to inflict the line damage to the sample surface as shown in Fig. 1(a). In this example, the first scratch (marked by an arrow and a number) was only applied to the front surface. In a second step, more scratches were applied, and all scratches now cover also back and side surfaces to form closed rings approximately perpendicular to the long sample axis. In a third step, the scratches were deepened by applying more force to the diamond scribe. Figure 1(a) was taken subsequent to the second scratching and with contacts for resistance measurements attached.
In an effort to structure the surfaces of our samples in a much more systematic and controlled fashion, we utilized a FIB. Trenches of about 7-10 µm in depth were cut by Xe ions at beam currents of 500 nA and an acceleration voltage of 30 kV in consecutive runs. In a first run (denoted F1 in the following), a single line was cut across 500 µm. We note that this distance is still large compared to the effective carrier mean free path of < 1 µm [45,46]. Resistance measurements were usually conducted after some of the FIB runs or line scratches, using a Physical Property Measurement System (PPMS) by Quantum Design, Inc. In the case of FIB-cut sample surfaces, van der Pauw-type measurements were conducted. A sample (different from the one presented in Fig. 2) after FIB run F6, with contacts attached, is shown in Fig. 1(b).
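For the van der Pauw-type measurements mentioned above, the sheet resistance follows from the standard van der Pauw relation, which can be solved numerically as in this short sketch; the two four-probe resistances are placeholder numbers, not data from this study.

```python
# Solve the van der Pauw equation exp(-pi*R_A/R_s) + exp(-pi*R_B/R_s) = 1
# for the sheet resistance R_s by simple bisection. R_A, R_B are placeholders.
import math

def sheet_resistance(R_A, R_B, lo=1e-6, hi=1e9, tol=1e-9):
    f = lambda Rs: math.exp(-math.pi * R_A / Rs) + math.exp(-math.pi * R_B / Rs) - 1.0
    # f(Rs) increases monotonically with Rs, so bisection converges
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

print(f"R_s = {sheet_resistance(10.0, 12.0):.2f} ohm")
```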
A. Sm1−xGdxB6 samples with scratched surfaces
Resistances of the sample shown in Fig. 1(a), before and after inflicting an increasing number of scratches on the sample surface, are presented in Fig. 3(a). Clearly, the first scratch did not significantly change the resistance, possibly because it did not form a closed ring around the sample. Consecutive scratches formed closed rings and introduced resistance changes. These changes, however, are exclusively limited to the low-temperature regime, as shown in the different representations in Figs. 3(a)-(c). This finding is in agreement with the fact that the resistance at higher T reflects bulk properties, while the surface state dictates the resistance behavior only at T below a few K. The bulk hybridization gap ∆ can be estimated from R(T) ∝ exp(∆/k_B T), where k_B is the Boltzmann constant. Typically, for pure [36,47] and slightly Gd-substituted [17,48] SmB 6, two regimes with different gap values are observed depending on the T-range considered. This also holds for our measurements, with ∆1 = 2.85(±0.07) meV and ∆2 = 5.3(±0.1) meV, independent of the scratches, see Fig. 3(b). However, the scratches do influence the lower bound T_th of the temperature range within which R(T) can be described by thermally activated behavior (the latter is marked by a magenta line in Fig. 3(b)). There is a clear trend: the more pronounced the scratches, the lower T_th. We find T_th ≈ 7 K for the as-grown sample and after the first scratch [see arrow in Fig. 3(b)], T_th ≈ 6.4 K after the second scratching, and T_th ≈ 6.1 K after the third scratching. This trend is also seen in the derivative dR/dT in Fig. 3(c). As outlined in Ref. 36, the thermally activated behavior, i.e. the exponential increase of R(T), is a clear hallmark of the bulk resistance, which is superseded by the additional surface component upon lowering T, assuming a parallel conductance model [31,33,49]. The low-T resistance plateau indicates the presence of the surface states even after scratching. Yet, based on the trend of T_th, the crossover from bulk-dominated conductivity (roughly above 10 K) to surface-dominated conductivity (below about 3 K) appears to take place at lower temperature. The increased value of the low-T resistance plateau measured on the damaged surfaces could be caused by either a decreased conductivity of the intrinsic surface state or an additional damage layer at the scratched areas below which the intrinsic surface state reconstructs (or a combination thereof). However, the surface state still develops and appears to govern R(T) below a similar T ≈ 3 K, at which the R(T) slope no longer changes, see Fig. 3(c). We note that, subsequent to measurement "3rd scratch A", the contacts were completely removed and attached anew for measurement "B", showing that the contacts themselves have no significant influence on R(T). In particular, the difference in the R(T) values at low T is less than 7% (compared to the ∼ 20% change between the 2nd and 3rd scratch).
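As an illustration of how the gap values and the parallel surface channel enter the analysis, the sketch below fits synthetic resistance data with a thermally activated bulk resistance in parallel with a temperature-independent surface resistance; the data, the value of ∆ and R_surf are invented for demonstration and are not the measurements of this work.

```python
# Arrhenius analysis sketch: R_bulk(T) ~ R0*exp(Delta/(kB*T)) in parallel with a
# T-independent surface resistance. Synthetic data; Delta and R_surf are assumed.
import numpy as np
from scipy.optimize import curve_fit

kB = 8.617e-5  # Boltzmann constant in eV/K

def r_total(T, R0, Delta, R_surf):
    R_bulk = R0 * np.exp(Delta / (kB * T))
    return 1.0 / (1.0 / R_bulk + 1.0 / R_surf)

# Synthetic "measurement" between 2 K and 20 K
T = np.linspace(2, 20, 60)
R_meas = r_total(T, R0=0.05, Delta=3e-3, R_surf=50.0)

popt, _ = curve_fit(r_total, T, R_meas, p0=(0.1, 2e-3, 30.0))
print(f"Delta = {popt[1] * 1e3:.2f} meV, R_surf = {popt[2]:.1f} Ohm")
```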
B. FIB-cut trenches on sample surfaces
In order to manipulate the sample surface in a much more controlled and systematic way, we also measured the resistivity of FIB-treated samples as described above. Exemplary resistivity data of a Sm 1−x Gd x B 6 sample with x = 0.0002 are given in Fig. 4(a) for an increasing number of FIB-cut lines on its surfaces. We note that the contacts needed to be removed before each subsequent FIB run. Although great care was taken to re-attach the contacts at the very same positions after each FIB run, a marginal influence on the resistivity values cannot be excluded entirely. Due to the small size and position of the contacts, already after FIB run F5 there was no conducting path between contacts uninterrupted by FIB-cut lines, not even via the side surfaces [as can be inferred from Fig. 1(b) for the second sample].
Already the first cross of FIB-cut lines (F2) increases the ρ(T) values within the low-T plateau by more than 30% compared to the as-grown surface, which appears to be well beyond the geometric inaccuracy. Interestingly, the reduction of T_th after FIB run F2 is very similar to that after the second and third scratch, i.e. for a comparable number of lines/scratches. Upon increasing the FIB-cut line density, the low-T resistivity increases further, such that ρ(T) of F9 at low temperature exceeds the value of the as-grown surface by almost an order of magnitude. Most other parameters remain largely unaffected by the FIB surface structuring; only ∆1 appears to be slightly modified.
Nonetheless, the bulk sample properties remain essentially unaltered by the FIB treatment. Very similar trends were observed on a second FIB-cut sample, shown in Fig. 1(b).
The increase of the resistivity upon damaging the sample surfaces is in contrast to the decrease observed for substituted or intentionally imperfect samples [17,50-54] or for ion-irradiated samples [55]. This might imply that our surface treatments by FIB or scratching do not influence the surfaces as a whole, but rather act on the surface states locally. On the other hand, an increased slope of the low-T resistivity appears to generally indicate a diminished surface state.
IV. DISCUSSION
In Ref. 39, the influence of subsurface cracks on the total resistivity is discussed. Such subsurface cracks provide additional conduction channels, and a decreased low-T resistance upon surface scratching was reported [39]. Subsurface cracks could also be found below our scratched surface areas, see the arrow marks in Fig. 5(a).
[Table I caption: parameters of the sample of Fig. 2, obtained from the resistivity data of Fig. 4; the gap values ∆ and the temperature ranges within which the fits hold are given; d denotes the approximate distance between FIB-cut lines (or between the lines and the sample perimeter in the case of F2).]
We note that such subsurface cracks could so far be found exclusively underneath the scratches, presumably indicating that the unscratched sample regions (including the pristine samples) are free of such subsurface cracks, Fig. 5(b). The subsurface cracks can be found down to a few micrometers below the scratched surface. However, in contrast to the earlier findings [39], the sample resistance increases with scratching in our case, Fig. 3(a). Here we recall that our scratches encircle the whole surface without leaving any possible current path on the surface untouched. Therefore, our approach seems to emphasize the impact of the surface conductance on the total sample resistance compared to Corbino-type measurements [32,36,39]. We therefore infer that the value of the low-T resistance plateau is the result of two counteracting effects: while the subsurface cracks lower this value by introducing additional conduction channels, the surface conductance itself is hampered by the scratching, as also indicated by the lower T_th values. In this respect, the intermediate- and high-temperature resistance regime provides important evidence of largely unchanged bulk properties. This picture is corroborated by the results of the FIB-cut samples. Although the trenches inflicted by the FIB cut deeper into the sample (up to about 10 µm) than the scratches (typically a few micrometers, with depths up to about 5 µm), we have so far not found any indication of subsurface cracks on FIB-treated samples, see the example in Fig. 5(c). The material directly at the bottom of the FIB-cut trenches, and to a lesser extent also at the sidewalls, is typically turned amorphous to a depth of several tens of nanometers and, in the case of preferential sputtering, non-stoichiometric [56]. Below this affected layer, the crystal structure is usually well preserved, with only occasional lattice defects caused by the ion bombardment. In this sense, the FIB treatment can be considered a controlled and systematic way of manipulating the surface conductance. The low-T saturation of ρ(T), Fig. 4(a), indicates that the conducting surface layer, albeit possibly encumbered, still subsists. We note that in one case we also performed an abrasion of the whole sample surface, about 3 µm deep, by rastering the entire surface with the ion beam in a last run (i.e. after FIB-cutting a line grid), and still observed indications of the surface layer, in line with Ref. 57. This finding has an interesting consequence: the above-mentioned amorphous layer then covers the whole sample surface.
Since this layer very likely prevents a surface reconstruction from forming, we can in all likelihood rule out a 2 × 1 surface reconstruction (as, e.g., observed in some cases by STM [13,14,19]) as the cause of the conducting surface states. Also, the polarity change at the interface between amorphous and crystalline SmB 6 is certainly smaller than at a pure SmB 6 surface. In consequence, all this makes conducting surface states driven by a non-trivial topology of the crystalline SmB 6 more likely.
There are at least two contributions that may cause the increase of ρ(T) at low T upon FIB-cutting trenches: (i) the surface state may be tampered with, and (ii) the surface area increases with the number of lines. The latter, however, appears not to be a decisive factor, as an increased ρ(T) is already observed for a small number of lines. As an example, the surface area of surface F6 in Fig. 4 is less than doubled compared to the as-grown one, but ρ(2 K) increased by a factor of more than 5. This strong increase of ρ(T) at low temperature, along with the concomitantly lowered T_th (Tab. I) and preliminary ESR results [45], suggests a reduction of the contribution of the surface states to the sample conductivity, possibly due to an FIB-induced depletion of the surface states. This might be related to a confinement of the surface states. In addition, disorder effects (in the bulk and/or near the surface due to our treatments) can be important in Kondo insulators, as disorder can greatly affect the hybridization gap [58,59] and, in turn, the surface states. As one example, the Sm valence near the surface can be modified [27,37], which could introduce changes to the surface conductivity.
As mentioned above, it is unlikely, and also not seen in our attempts, that the FIB treatment induces subsurface cracks. On the other hand, similar to the case of scratched sample surfaces, the surface state appears to be tampered with. Therefore, one may speculate that the relatively small increase of the low-T resistance for the scratched surfaces compared to the FIB-treated ones is related to the subsurface cracks in the former. Of course, the severity of the inflicted damage to the respective surface may also differ.
V. CONCLUSION
We showed that introducing localized damage to the SmB 6 surface by different treatments, like mechanical surface scratching and FIB-cut trenches, can alter the low-temperature resistance plateau significantly. We find that the measured low-temperature R value depends sensitively on the type of surface treatment and the structural damage incurred. In our cases, the bulk resistivity at higher temperature remains largely unchanged and hence, the ratio between the resistances at high and at lowest temperature is not a good measure of the sample quality. However, the low-temperature limit to which the resistance follows a thermally activated behavior is found to be related to the severity of the damage inflicted on the surface.
More generally, the systematic and well-controlled surface treatment by FIB as presented here may provide a path for modification and patterning of surface states as recently suggested theoretically [60].
Comparative Effects of R- and S-equol and Implication of Transactivation Functions (AF-1 and AF-2) in Estrogen Receptor-Induced Transcriptional Activity
Equol, one of the main metabolites of daidzein, is a chiral compound with pleiotropic effects on cellular signaling. This property may induce activation/inhibition of the estrogen receptors (ER) α or β, and therefore explain the beneficial/deleterious effects of equol on estrogen-dependent diseases. With its asymmetric centre at position C-3, equol can exist in two enantiomeric forms (R- and S-equol). To elucidate the yet unclear mechanisms of ER activation/inhibition by equol, we performed a comprehensive analysis of ERα and ERβ transactivation by racemic equol, as well as by the enantiomerically pure forms. Racemic equol was prepared by catalytic hydrogenation from daidzein and separated into enantiomers by chiral HPLC. The configuration assignment was performed by optical rotatory power measurements. The ER-induced transactivation by R- and S-equol (0.1–10 µM) and 17β-estradiol (E2, 10 nM) was studied using transient transfections of ERα and ERβ in CHO, HepG2 and HeLa cell lines. R- and S-equol induce ER transactivation in an opposite fashion according to the cellular context. R-equol and S-equol are more potent in inducing ERα in an AF-2 and an AF-1 permissive cell line, respectively. The involvement of the ERα transactivation functions (AF-1 and AF-2) in these effects has been examined. Both AF-1 and AF-2 are involved in racemic equol-, R-equol- and S-equol-induced ERα transcriptional activity. These results could be of interest to find a specific ligand modulating ER transactivation and could contribute to explaining the diversity of equol actions in vivo.
Introduction
Estrogens are used in hormonal replacement therapy (HRT) to prevent menopausal symptoms such as hot flushes and urogenital atrophy, but also osteoporosis, in postmenopausal women. Unfortunately, HRT has not lived up to its potential to improve health in women. Estrogens have been associated with an increased incidence of breast and endometrial cancers, which has led to the use of antiestrogens and selective estrogen receptor modulators (SERM) such as tamoxifen and raloxifen, which exhibit a safer profile. However, since undesirable effects persist, numerous investigators continue to search for better SERM for HRT. Much research has been conducted into the health benefits of consuming soy foods, with soy isoflavones and soy protein being implicated as protective against a variety of diseases including heart and vascular diseases, osteoporosis and hormone-dependent cancers (such as those of the breast and prostate) [1-3]. Despite their popularity and putative health benefits, it is clear that we need to know much more about the molecular mechanisms, safety and efficacy of these compounds as natural SERM before they can be recommended to postmenopausal women either as pharmaceutical or nutraceutical agents or as food additives.
After the identification of equol in biological fluids, its total synthesis was of great importance, first to confirm the chemical structure and then to provide a sufficient amount of this compound for biological activity studies. Racemic (±)-equol can be synthesized from daidzein and formononetin, which are readily available in sufficient quantities from plants or can be prepared by chemical synthesis. The key transformation step involves the reduction of a vinylogous ester to an ether group. The method most often used in recent years was the hydrogenation of daidzein with hydrogen in acetic acid, with 10% palladium on carbon as a catalyst [11,12]. Recently, transfer hydrogenation was proposed as an alternative to classic hydrogenation, and different catalysts were tested [13]. Pearlman's catalyst (20% Pd(OH)2) was found to be highly effective in the reduction of formononetin, daidzein [13] and the corresponding isoflavene [14]. A "biomimetic" reduction of formononetin with dihydroacridine as a hydride donor was also proposed [13]. A recently described, new original synthetic approach to racemic equol provides a direct construction of the isoflavan skeleton via a Diels-Alder reaction of o-quinone methides [15].
The pure S-equol enantiomer can be produced by microbiological methods [16,17]. The first total synthesis of S-equol was described only three years ago. This approach, based on an Evans alkylation and an intramolecular Buchwald etherification, required the use of organolithium reagents for the alkylation step, which could not be improved and gave a modest overall yield (9.8%) [18]. A newer alternative route employed allylic substitution and afforded the S-isomer in 24.6% yield over 13 steps [19]. Both R- and S-equol of high enantiomeric purity have been prepared by enantioselective hydrogenation of an O-protected chromene in the presence of an Ir catalyst bearing a chiral ligand [20]. Therefore, all synthetic approaches to the pure enantiomeric forms remain expensive and time-consuming. Semi-preparative chiral-phase HPLC provides ready and relatively rapid access to both S- and R-equol in quantities sufficient for in vitro studies [13,17,21,22].
Most of published results on the biological activities of equol in vitro are available for the racemate, with the exception of Magee et al. [22], who evaluated the effects of racemic equol and S-equol on breast and prostate cancer cell lines. Their main findings were that racemic and S-equol show equipotent biological effects on proliferation and invasion of these cell lines, while the compounds have different abilities to protect against induced DNA damage [22].
Equol is strikingly similar in chemical structure to estrogens and is therefore capable of binding weakly to the estrogen receptors (ER) [23]. The effects of 17β-estradiol (E2) and related compounds, such as non-steroidal estrogens and equol, are mediated by two members of the nuclear receptor superfamily, ERα and ERβ, which are coded by separate genes. ERs use two transactivation functions (AF), located in their N-terminal (AF-1) and C-terminal (AF-2) domains. Once activated by ligand binding, these AFs recruit co-regulators of gene transcription. The transcriptional activity of the AF-2 region is dependent on ligand binding, while AF-1 is constitutively active when isolated. The transcriptional activity of ER can be promoted through functional cooperation between AF-1 and AF-2 or through each AF acting independently [24].
Therefore, different forms of equol may produce clinical and/or experimental effects distinct from those of estrogens by differentially triggering ER transcriptional activity. To test this hypothesis, we first prepared the pure enantiomeric forms of equol using semi-preparative chiral-phase HPLC, in order to compare the effects of racemic equol and of the R- and S-enantiomers on the transcriptional activity of ERα and ERβ. Furthermore, the present study investigated the roles of the AF domains, and more particularly of the AF-1 domain, in the ability of R- and S-equol to induce ER transactivation. For these purposes, transient transfections of ER constructs were performed in the ER-negative CHO, HeLa and HepG2 cell lines.
Chemical Synthesis and Chiral Separation
In this study, racemic equol was prepared by catalytic hydrogenation from daidzein, which was synthesized as previously described [25]. R- and S-equol were then separated using a new chiral stationary phase, Chiralpak® IA [26]. Different mobile phases were investigated (Table 1). Shorter retention times should be favored, and a compromise between the different chromatographic parameters had to be found to provide an efficient semi-preparative separation. The mixture n-heptane-isopropanol (n-heptane/IPA, 80/20, V/V) was found to have the highest potential in terms of enantioselective separation of equol, with retention of the enantiomers in an appropriate time range (Figure 1). Loading studies were run and scaled up to a 10 mm diameter column. A good separation can be achieved with loadings of up to 7 mg of the racemate per injection; the best loading found in the literature was around 3 mg per injection [16]. Based on optical rotatory power measurements (compared with reported values [18]), the order of elution was assigned as R-equol (peak 1) followed by S-equol (peak 2). The enantiomeric purities of R- and S-equol were >99% and 97.5%, respectively.
Transcriptional Activation of ER by Different Enantiomeric Forms of Equol
In CHO cells, racemic equol (0.1 µM to 10 µM) induces ERα and ERβ transactivation on the ERE-TK-LUC reporter gene in a dose-dependent fashion, as calculated by linear regression (R² = 0.90 for ERα and 0.95 for ERβ) and shown in Figure 2A. For ERα, a significant increase in transcriptional activity is obtained with 1 and 10 µM racemic equol (3.4 ± 0.4- and 5.4 ± 1.4-fold increase compared to control, p < 0.05 and p < 0.01, respectively). For ERβ, only the highest concentration of racemic equol (10 µM) induces a significant effect (3.9 ± 1.3-fold increase compared to control, p < 0.05). Racemic equol (10 µM) induces a significant ERα and ERβ transactivation similar to that induced by 10 nM 17β-estradiol (E2), the endogenous ligand of the ERs. The ERβ-induced transactivation by E2 and racemic equol in CHO cells is slightly lower than that of ERα (Figures 2B and C). E2 and equol effects on both ERα and ERβ are completely abolished by treatment with 1 µM ICI (data not shown).
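The dose-response quantities quoted above (fold increase over the vehicle control and the linear-regression R²) can be computed with a few lines of analysis; the sketch below uses invented normalized readings and regresses the fold increase on the logarithm of the concentration, which is one plausible reading of how the R² values were obtained, not the authors' actual analysis.

```python
# Toy sketch: fold increase over control and linear regression of the dose response.
# Readings and the log-concentration regression are illustrative assumptions.
import numpy as np
from scipy.stats import linregress

control = 1.0                              # normalized luciferase activity, vehicle
conc_uM = np.array([0.1, 1.0, 10.0])       # equol concentrations
activity = np.array([1.6, 3.4, 5.4])       # invented normalized readings

fold = activity / control
fit = linregress(np.log10(conc_uM), fold)
print("fold increase:", fold)
print(f"R^2 = {fit.rvalue**2:.2f}, slope = {fit.slope:.2f} per decade")
```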
We observe that effects similar to those of E2 are typically achieved at equol concentrations three orders of magnitude higher, which can be reached physiologically with a soy-rich diet. This is in accordance with the fact that the relative binding affinity of racemic, R- and S-equol measured on recombinant ERs is generally on the order of 100 to 1000 times lower than that of E2 [13].
Serum concentrations of equol differ considerably between women (with an equol-producer status) from various geographic areas and/or on specific diets. We have shown that serum concentrations of equol reach 0.6 µM following consumption of soy supplements and up to 3 µM after ingestion of 50 mg total isoflavones (about 30% daidzin) twice a day [27,28]. A recent study demonstrated a high equol bioavailability, with racemic, R- and S-equol plasma concentrations from 0.4 up to 2 µM after a single bolus administration of equol (20 mg) [29].
We have previously demonstrated that the ability of phytoestrogens such as genistein, daidzein and racemic equol to act as ER agonists is independent of the cellular context (AF-1 or AF-2 permissive) [30]. Therefore, it was of particular interest to determine the mechanisms of action of the enantiomeric forms of equol on ER transcriptional activation in epithelial cell lines with different AF permissiveness.
In HeLa cells, racemic equol and R-equol (0.1-10 µM) induce ERα and ERβ transactivation on the ERE-TK-LUC reporter gene in a dose-dependent fashion, as calculated by linear regression (R² = 0.72 and 0.78 for ERα, and 0.99 and 0.78 for ERβ, respectively; data not shown). In contrast, S-equol, even at the highest concentration tested (10 µM), does not induce significant ER transcriptional activation compared to control (1.8 ± 0.2- and 2.7 ± 0.9-fold for ERα and ERβ, respectively) (Figures 3C and D). As for CHO cells (Figure 2), the ERα- and ERβ-induced transactivations by racemic equol (10 µM) in HeLa cells are similar (5.2 ± 1.4- and 5.4 ± 2.4-fold increase compared to control, p < 0.001 and p < 0.05, respectively). E2, racemic equol, R- and S-equol effects on both ERα and ERβ are completely abolished by treatment with 1 µM ICI (data not shown). In HeLa cells, both the ERα and ERβ transcriptional activations induced by R-equol (10 µM) do not differ from those induced by racemic equol. R-equol induces a stronger ERα transcriptional activation than S-equol (p < 0.01). Interestingly, the highest concentration of R-equol (10 µM) induces a stronger ERα transcriptional activation than ERβ (10.0 ± 0.7- versus 2.9 ± 1.0-fold increase compared to control, p < 0.001 and p < 0.05, respectively). Taken together, our results clearly demonstrate that R-equol and S-equol induce ERα and ERβ transactivation in a different manner with regard to the AF permissiveness of the cell line. While S-equol is more potent in inducing ERα in the AF-1-permissive (HepG2) cell line, R-equol appears to be more effective in inducing ERα in the AF-2-permissive (HeLa) cell line.
Several studies have evaluated the ER subtype binding affinity and/or transcriptional activity of racemic equol [10,31-33] and of the equol enantiomers [13]. Taken together, these authors report that (1) in binding assays, equol has a distinctly higher binding affinity for ERβ than for ERα, but only a slight preference for ERβ in transactivation; (2) S-equol has a high binding affinity preference for ERβ, while R-equol binds more weakly and with a preference for ERα; (3) racemic, R- and S-equol are ER agonists in transcriptional activity studies; and (4) in contrast to the slight ERα preference of R-equol, S-equol shows no ER subtype preference in terms of transcriptional potency [13]. It is well known that the estrogenic potency of compounds is a complicated phenomenon, which results from a number of factors, such as the nature of the inducer (including antiestrogens, xenoestrogens and phytoestrogens), but also the differential effects on the transactivation functions of the receptor, the particular co-activators recruited, the cell and target-promoter contexts, the relative expression of each ER subtype [34-39] and the cell differentiation stage [34,37,40,41]. Therefore, the use of different models to study transcriptional potencies (the HEC-1 cell line with an (ERE)2-PS2-LUC reporter gene [13] versus the HeLa and HepG2 cell lines with an (ERE)-TK-LUC reporter gene, for instance) may also explain this discrepancy.
The use of ER constructs expressed in ER-negative backgrounds has been a powerful technique for studying the function of various domains of the ER [37,40,41]. To further examine the role of AF-1 in the ERα-induced transactivation by R- and S-equol, we used expression vectors for full-length ERα or ERα truncated in the A/B domain (ERα∆AF-1) in both epithelial cell lines. We compared the transcriptional efficiency of both ERα constructs on estrogen-sensitive reporter genes in the HepG2 (AF-1 permissive) and HeLa (AF-2 permissive) epithelial cell lines. Similar expression of the different ERα variants was verified by Western blot, as previously described [42].
In HepG2 cells, where ER transactivation is mainly ensured by AF-1 (ER >> ERΔAF-1), racemic, R- and S-equol (10 µM) induce, as expected, a higher transactivation of ER than of ERΔAF-1 (Figure 4A). S-equol induces a higher ERΔAF-1 transcriptional activation than R-equol (p < 0.05) in these AF-1 permissive cells. In HeLa cells, ER transactivation is mainly ensured by AF-2 (ER ≈ ERΔAF-1). Racemic equol and R-equol (10 µM) induce a similar transactivation of ER and ERΔAF-1 (Figure 4B), indicating that deletion of AF-1 has no effect on the ER-induced transactivation for either compound. Similar results are obtained in CHO cells (data not shown), with racemic equol presenting a transactivation profile similar to that of E2, as previously described [30]. In contrast, S-equol does not induce similar transcriptional activation of ER and ERΔAF-1 in HeLa cells. Deletion of AF-1 enhances the ER transcriptional activity, which reaches the level induced by racemic equol and becomes significantly different from that induced by R-equol (p < 0.05, Figure 4B). This result indicates that the capacity of S-equol to induce ER transcriptional activation through AF-2 is partly repressed by AF-1. This could be due to a conformational change of the ligand-binding domain, which inhibits, at least partially, the activity of AF-1 in HeLa cells [24]. ICI (1 µM) completely blocks the effects of E2 and of the enantiomeric forms of equol in both cell lines (data not shown). R- and S-equol are therefore capable of inducing ER transactivation through activation of both AFs; however, the S-equol effect through AF-2 may be repressed by AF-1.
Taken together, our results indicate that while R-equol and S-equol are more potent in inducing ER transactivation in an AF-2 and in an AF-1 permissive cell line, respectively, both compounds are capable of inducing ER transactivation through activation of both the AF-1 and AF-2 domains. However, in contrast to R-equol, S-equol-induced ER transcriptional activation through AF-2 is (1) higher in the AF-1 permissive cell line, and (2) partly repressed by AF-1 in the AF-2 permissive cell line.
We conclude that racemic, R- and S-equol exert distinct effects on ER transcriptional activity, which cannot be explained solely by their differential affinities for the ER subtypes [13], but may be due to differential effects on the transactivation functions of the receptor and on the cell differentiation stage [30,34,37,[40][41][42]. Given the pleiotropic actions of phytoestrogenic compounds, it is possible that they also affect other biological processes in addition to these mechanisms, such as cellular metabolism, the relative expression of each ER subtype, non-genomic activities of ER or other signal transduction pathways (such as MAPK). Further in vitro studies are therefore needed to elucidate the pathways involved in equol effects. Approximately 40% of humans harbour gut bacteria capable of producing equol. The ability to produce equol following ingestion of soy isoflavones is of particular interest because it has been demonstrated, in vitro and in some animal models, that equol is more biologically active than its precursor daidzein and the alternate metabolite O-desmethylangolensin [5][6][7][8]. More importantly, studies report relationships between the equol-producer phenotype and reduced risk factors for several chronic diseases, as well as differential responses to interventions (for review, see [7,43]). Given that it is exclusively the S-equol enantiomer that is produced in vivo by the gut microflora [9,10], our findings may have implications regarding the effects of equol in vivo. In particular, since ER activity is mediated through AF-1 in differentiated cells and through AF-2 in dedifferentiated cells [41], S-equol may differentially modulate processes involving ER activation, such as cell differentiation and/or proliferation, as in breast cancer.
In this regard, a very recent study demonstrated, in a rat model of chemically induced tumors, that
S-equol has no chemopreventive action in vivo, while the unnatural enantiomer R-equol is potently chemopreventive [44]. Further studies are clearly needed to elucidate the specific biological properties (for example, antigenotoxic and/or antioxidant activities) of these compounds. Nevertheless, these results may help explain the diversity of daidzein and/or equol actions in vivo, particularly as S-equol is being clinically developed as a food supplement for the treatment or prevention of menopausal symptoms [45].
Chemicals and Instruments
All chemical reagents and solvents were purchased from Sigma Aldrich Chemical Co.
Chemical Synthesis of (±)-Equol
Palladium on charcoal (10%, 0.5 g) was added to a well-stirred solution of daidzein (2 g, 7.9 mmol) in 95% ethanol (200 mL). The mixture was degassed and placed under a hydrogen atmosphere at room temperature and atmospheric pressure for 24 h. After filtration and evaporation of the solvent, the crude product was recrystallized from ethanol/water to afford the target product as white crystals.
Chromatographic Resolution of R- and S-Equol
Enantiomeric separation was performed on a Varian Prostar chromatographic system with UV detection at a wavelength of 280 nm. A Daicel Chiralpak® IA column [26], with an amylose tris(3,5-dimethylphenylcarbamate) chiral phase immobilized on 5 µm silica gel (analytical column 250 × 4.6 mm, semi-preparative column 250 × 10 mm; Chiral Technologies Europe, Illkirch, France), was used together with a Chiralpak® IA guard column. The mobile phase selected for the method consisted of a mixture of n-heptane/IPA (80/20, v/v) delivered in isocratic elution mode. The flow rates were 1 mL/min and 3 mL/min for the analytical and semi-preparative columns, respectively. For the semi-preparative separation, the injection volume was 100 µL of an equol solution in IPA (50 mg/mL). The elution order and retention times were as follows: RT 9.14 min for R-equol and RT 10.37 min for S-equol.
Cell Culture and Transient Transfection Experiments
CHO, HeLa and HepG2 cell lines were routinely maintained in DMEM supplemented with 5% FCS and antibiotics. For experimental conditions, phenol red-free DMEM/F12 medium supplemented with 2.5% charcoal-stripped FCS was used (experimental medium).
Transfections were carried out with FuGENE™ 6 as described previously [42], with 50 ng of total DNA consisting of the expression vector, the reporter gene and the pCMV-β-galactosidase internal control (10, 20 and 20 ng, respectively). Cells were treated with different concentrations of E2 and equol (10 nM and 0.1 to 10 µM, respectively), ICI (1 µM) or vehicle (V, 0.01% EtOH), or a combination of these compounds, as indicated. Cells were harvested, and luciferase and β-galactosidase assays were performed as previously described [42]. The reporter gene activity was obtained after normalization of the luciferase activity to the β-galactosidase activity.
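As a purely illustrative sketch of this normalization step (the variable names, numbers and the fold-induction relative to vehicle are our own placeholders, not values from the protocol above), the calculation can be written as:

import numpy as np

def reporter_activity(luciferase, beta_gal):
    # normalize raw luciferase counts by the beta-galactosidase internal control
    return np.asarray(luciferase, dtype=float) / np.asarray(beta_gal, dtype=float)

# hypothetical triplicate readings for vehicle- and equol-treated wells
vehicle = reporter_activity([1200, 1100, 1300], [950, 900, 1000])
treated = reporter_activity([6100, 5800, 6400], [980, 940, 1010])

# fold induction relative to the vehicle control
print(treated.mean() / vehicle.mean())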
Statistical Analysis
Shown are the means ± SEM of 2 to 10 independent experiments, each performed in triplicate, as indicated. One-way ANOVA with Dunnett's multiple comparison post-hoc test, or Student's t-test, was used for the statistical comparison of experimental conditions (GraphPad Prism®, USA). Dose-dependent effects were assessed by linear regression (GraphPad Prism®, USA). Statistical significance is indicated by 1, 2 or 3 symbols (* or •) corresponding to p < 0.05, p < 0.01 and p < 0.001, respectively.
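For readers who prefer a scriptable equivalent of this GraphPad workflow, a minimal sketch in Python (our own illustration; the group names, values and the use of SciPy >= 1.11 for the Dunnett test are assumptions, not part of the original analysis) could look like this:

import numpy as np
from scipy import stats

# placeholder triplicate fold-induction values per condition
vehicle  = np.array([1.0, 0.9, 1.1])
equol_1  = np.array([2.1, 2.4, 2.0])    # e.g. 1 uM
equol_10 = np.array([5.0, 5.6, 4.9])    # e.g. 10 uM

# one-way ANOVA across all groups
f_stat, p_anova = stats.f_oneway(vehicle, equol_1, equol_10)

# Dunnett's post-hoc test: each treatment compared to the vehicle control
dunnett = stats.dunnett(equol_1, equol_10, control=vehicle)

# dose-dependence assessed by linear regression on log10(concentration)
conc = np.repeat(np.log10([1.0, 10.0]), 3)
resp = np.concatenate([equol_1, equol_10])
reg = stats.linregress(conc, resp)

print(p_anova, dunnett.pvalue, reg.rvalue ** 2)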
Conclusions
Equol, one of the main metabolites of daidzein, is being clinically developed as a food supplement to treat estrogen-related diseases. Understanding how natural estrogenic compounds elicit clinically selective effects is key to the development of safer HRT. Equol is a chiral compound, and the induced activation/inhibition of the ER may depend on the enantiomeric form and purity of equol. Catalytic hydrogenation of daidzein followed by chiral HPLC separation provided ready and rapid access to the racemic, R- and S-forms of equol in sufficient quantities, and allowed us to examine their differential effects on the two ER subtypes. Good chiral separation, with semi-preparative isolation of more than 3 mg of each enantiomer per injection, was achieved using a new immobilized chiral stationary phase. We have shown that high concentrations (10 µM) of R-equol and S-equol induce ERα and ERβ transcriptional activation differently according to the cellular context. R-equol and S-equol are more potent in inducing ER transactivation in the AF-2 and AF-1 permissive cell lines, respectively. The S-enantiomer has little transcriptional potency on either ERα or ERβ in an AF-2 permissive cell line. ER transcriptional activation by both enantiomers involves their capacity to act mainly through AF-1 and AF-2. This study confirms that racemic, R- and S-equol are SERMs with estrogenic activities. Therefore, in light of our study of the effects of equol and its enantiomeric forms on the two ER subtypes, it would appear prudent to carefully evaluate, in vivo, the biological effects not only of the isoflavones, but also of their metabolites and their enantiomers. Such investigations would greatly help in evaluating the potential effects of the ingestion of soy isoflavones on human health and disease.
"Biology"
] |
Higher spins on AdS$_{3}$ from the worldsheet
It was recently shown that the CFT dual of string theory on ${\rm AdS}_3 \times {\rm S}^3 \times T^4$, the symmetric orbifold of $T^4$, contains a closed higher spin subsector. Via holography, this makes precise the sense in which tensionless string theory on this background contains a Vasiliev higher spin theory. In this paper we study this phenomenon directly from the worldsheet. Using the WZW description of the background with pure NS-NS flux, we identify the states that make up the leading Regge trajectory and show that they fit into the even spin ${\cal N}=4$ Vasiliev higher spin theory. We also show that these higher spin states do not become massless, except for the somewhat singular case of level $k=1$ where the theory contains a stringy tower of massless higher spin fields coming from the long string sector.
Introduction
In the tensionless limit string theory is expected to exhibit a large underlying symmetry that is believed to lie at the heart of many special properties of stringy physics [1][2][3]. In flat space, the tensionless limit is somewhat subtle since there is no natural length scale relative to which the (dimensionful) string tension may be taken to zero. The situation is much better in the context of string theory on an AdS background, since the cosmological constant of the AdS space defines a natural length scale. This is also reflected by the fact that higher spin theories -they are believed to capture the symmetries of the leading Regge trajectory at the tensionless point [4,5] -appear naturally in AdS backgrounds [6].
In the context of string theory on AdS 3 concrete evidence for this picture was recently obtained in [7]. More specifically, it was shown that the CFT dual of string theory on AdS 3 × S 3 × T 4 , the symmetric orbifold of T 4 , see [8] for a review, contains the CFT dual of the supersymmetric higher spin theories constructed in [9]. 1 While this indirect evidence is very convincing, it would be very interesting to have more direct access to the higher spin sub-symmetry in string theory. This symmetry is only expected to emerge in the tensionless limit of string theory, in which the string is very floppy and usual supergravity methods are not reliable. Thus we should attempt to address this question using a worldsheet approach.
Worldsheet descriptions of string theory on AdS backgrounds are notoriously hard, but in the context of string theory on AdS 3 , the background with pure NS-NS flux admits a relatively straightforward worldsheet description in terms of a WZW model based on the Lie algebra sl(2, R) [16][17][18]. In this paper we shall use this approach to look for signs of a higher spin symmetry among these worldsheet theories. More concretely, we shall combine the WZW model corresponding to sl(2, R) with an su(2) WZW model, describing strings propagating on S 3 , as well as four free fermions and bosons corresponding to T 4 . The complete critical worldsheet theory then describes strings on AdS 3 × S 3 × T 4 .
The worldsheet description of these WZW models contains one free parameter, the level k of the N = 1 superconformal WZW models associated to sl(2, R) and su (2), respectively -these two levels have to be the same in order for the full theory to be critical. Geometrically, these levels correspond to the size of the AdS 3 space (and the radius of S 3 ) in string units. The tensionless limit should therefore correspond to the limit where k is taken to be small.
In this paper we analyse systematically the string spectrum of the worldsheet theory for k small. 2 As we shall show, the only massless spin fields that emerge in this limit are those associated to the supergravity multiplet, while all the higher spin fields remain massive, except in the extremal case where the level is taken to be k = 1 -this is strictly speaking an unphysical value for the level since then the bosonic su(2) model has negative level; however, as argued in [23], some aspects of the theory may still make sense. (We should also mention that in the context of the WZW model based on AdS 3 × S 3 × S 3 × S 1 [24,25] the theory with k = 1 is not singular since it is compatible with the levels of the two superconformal su(2) models being k + = k − = 2, leading to vanishing bosonic levels for the two su(2) algebras.) 3 For k = 1, the bosonic sl(2, R) algebra has level k bos = 3, and as in [26], an infinite tower of massless higher spin fields arises from the long string subsector (the spectrally flowed continuous representations). These higher spin fields are part of a continuum and realise quite explicitly some of the speculations of [23].
For more generic values of the level, we also explain the sense in which a 'leading Regge trajectory' emerges, and we give an explicit description of these states. In particular, we show that the relevant states form the spectrum of a specific N = 4 higher spin theory of Vasiliev that was recently analysed in detail by one of us [27]. (More specifically, this higher spin theory consists of one N = 4 multiplet for each even spin; the fact that the leading Regge trajectory in closed string theory only consists of states (or multiplets) of even spin is also familiar from flat space, see the discussion around eq. (4.12).) For spins that are small relative to the size of the AdS space, the states on the leading Regge trajectory are described by physical states coming from the (unflowed) discrete representations of sl(2, R); as the spin gets larger, the corresponding classical strings become longer until they hit the boundary of the AdS space where they become part of the spectrally flowed continuous representations, describing the continuum of long strings, see Figure 1. This picture fits in nicely with expectations from [16][17][18], see also [23].
The fact that among these backgrounds with NS-NS flux no conventional higher spin symmetry emerges also has a natural interpretation in terms of the structure of the classical sigma model. Indeed, as explained in [28], the tension of the string is of the form [28, eq. (7.34)] where Q NS and Q RR are quantized, and g s is the string coupling constant. This formula therefore suggests that the tensionless limit is only accessible in the situation with pure R-R flux (and in the limit g s → 0).
The paper is organized as follows. We explain the basics of the worldsheet theory (and set up our notation) in Section 2. In Section 3 we prove that the spectrum of this family of worldsheet theories does not contain any massless higher spin fields among the unflowed representations (describing short strings). In Section 4 we start with identifying the states that comprise the leading Regge trajectory. We first analyse the states of low spin that arise from the unflowed discrete representations. We also comment on the structure of the subleading Regge trajectory, as well as the situation for the case where T 4 is replaced by K3. The rest of the leading Regge trajectory that is part of the continuous spectrum is then identified in Section 5. We also comment there on the massless higher spin fields arising from the spectrally flowed continuous representation at k = 1, and explain how they fit in with the expectations from [23]. Section 6 contains our conclusions, and there are three appendices where we have collected some of the more technical arguments that are referred to at various places in the body of the paper.
Worldsheet string theory on AdS 3
We want to study the spectrum of type IIB strings propagating on backgrounds of the form AdS 3 ×S 3 ×X, where X is either T 4 or K3 so that the resulting theory has N = 4 spacetime supersymmetry. We shall concentrate on the background with pure NS-NS flux for which the AdS 3 × S 3 theory can be described by a (non-compact) SL(2, R) × SU(2) WZW model 4 that can be studied by conventional CFT methods. The bosonic version of this theory was discussed in some detail in the seminal papers [16][17][18]; in what follows we extend, following [29][30][31][32], some aspects of their analysis to the supersymmetric case.
The symmetry algebras of the supersymmetric WZW models are the N = 1 superconformal affine algebras sl(2, R) (1) k ⊕ su (2) (1) k that will be described in more detail below.
Their central charges equal and the condition that the total charge adds up to c = 9 (as befits a 6-dimensional supersymmetric background) requires then that k = k . For this choice of levels, the naive N = 1 worldsheet supersymmetry of the model is enhanced to N = 2 [29,33]. This enhancement can also be understood from the fact that the AdS 3 × S 3 theory can be described as a non-linear sigma model on the supergroup PSL(2|2) (see, e.g., [34]).
The AdS 3 WZW model
In our conventions, the sl(2, R) (1) algebra describing superstrings on AdS 3 reads The dual Coxeter number is h ∨ sl(2,R) = −2 . As detailed in appendix A, the shifted currents decouple from the fermions, J a n , ψ b r = 0 , and satisfy the same algebra as the J a with level κ = k + 2 . The Sugawara stress tensor and supercurrent are where every composite operator in the above expressions is understood to be normalordered. These generators satisfy the N = 1 superconformal algebra (A.17)-(A.19) with central charge (see eq. (A.20)) The holographic dictionary implies that the global charges in the spacetime theory are given by [29] L CFT with analogous expressions for the right-movers. In particular, the spacetime conformal dimension (which we henceforth refer to as the energy E) is given by the eigenvalue of J 3 0 +J 3 0 , while the spacetime helicity s equals J 3 0 −J 3 0 . 5 Since we want to keep track of these quantum numbers, it will prove convenient to describe the representation content with respect to the coupled currents J a .
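In formulas, the holographic dictionary just described amounts to the identifications (our summary of the statement above, cf. eq. (2.7)):

E = J 3 0 + J̄ 3 0 ,   s = J 3 0 − J̄ 3 0 ,

i.e. the left- and right-moving conformal dimensions of the dual CFT are read off from the J 3 0 and J̄ 3 0 eigenvalues of a physical worldsheet state.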
In addition to the symmetry algebra, the actual worldsheet conformal field theory is characterised by the spectrum, i.e., by the set of sl(2, R) (1) representations that appear in the theory. For the bosonic case, a proposal for what this spectrum should be was made in [16], and the same arguments also apply here once we decouple the fermions. Recall that a highest weight representation of a (bosonic) affine Kac-Moody algebra is uniquely characterised by the representation of the zero mode algebra (in our case sl(2, R)) acting on the 'ground states' -these are the states that are annihilated by the modes J a n with n > 0. For the case at hand, the relevant representations of sl(2, R) that appear [16] are the so-called principal discrete representations (corresponding to short strings), as well as the principal continuous representations -together they form a complete basis of squareintegrable functions on AdS 3 . Furthermore, since the no-ghost theorem truncates the set of these representations to a finite number (depending on k) [35,36], additional representations corresponding to their spectrally flowed images appear [16]; these describe the long strings. In each case, the representation on the ground states is the same for left-and right-moversthis theory is therefore the natural analogue of the 'charge-conjugation' modular invariant, see also [17].
In the supersymmetric case we are interested in, we consider the above sl(2, R) affine theory for the decoupled bosonic currents J a , and tensor to it a usual free fermion theory (where the fermions will either all be in the NS or in the R sector). Note that this will lead to a modular invariant spectrum since both factors are separately modular invariant.
In the following we shall study the spacetime spectrum of this worldsheet theory with a view towards identifying the states on the leading Regge trajectory. We shall first concentrate on the unflowed discrete representations, from which the low-lying states of the leading Regge trajectory (those whose spin satisfies s ≲ k/2) originate. The remaining states of the Regge trajectory are part of the continuum of long strings that is described by the (spectrally flowed) continuous representations; they will be analysed in Section 5.
The NS sector
In the NS sector we label the ground states by |j, m , where m is the eigenvalue of J 3 0 , while j labels the spin, and C 2 is the quadratic Casimir of sl(2, R) The condition to be ground states, i.e., to satisfy J a n |j, m = 0 ∀ n ≥ 1 and ψ a r |j, m = 0 ∀ r ≥ 1 2 (2.10) implies, in particular, that the coupled and decoupled bosonic modes with n ≥ 0 agree on the ground states, J a n |j, m = J a n |j, m , n ≥ 0 ; (2.11) the correction terms involve positive fermionic modes that annihilate the ground states.
(Thus it makes no difference whether we label the ground states in terms of the decoupled or coupled spins). Furthermore, the ground states are annihilated by L n |j, m = 0 for n ≥ 1 and G r |j, m = 0 for r ≥ 1 2 , (2.12) as follows from eqs. (2.4) and (2.5).
The discrete lowest weight representations D + j -in [16] they are called 'positive energy' -are characterised by the conditions Note that the state |j, j has the lowest J 3 0 eigenvalue and is therefore annihilated by J − 0 . In particular, it follows from (2.7) that as appropriate for a quasiprimary state in the dual 2d CFT. The representation of the full affine algebra is obtained by the action of the negative modes J a −n and ψ a −r , acting on these ground states. With respect to the global sl(2, R) algebra, all of these states will then also sit in discrete lowest weight representations of sl(2, R), and the quasiprimary states of the dual CFT will always correspond to the lowest weight states of these discrete representations.
The R sector
The analysis in the Ramond sector is slightly more subtle since there are fermionic zero modes. The ground states are therefore characterised in addition by an irreducible spinor representation of the Clifford algebra in (2 + 1)-dimensions, spanned by the fermionic zero modes -this representation is two-dimensional and can be described by |s 0 , with s 0 = ±1. The full set of ground states is therefore labelled by |j, m; s 0 . The presence of the fermionic zero modes implies that, unlike (2.11), the action of the decoupled and coupled bosonic zero modes differs on the ground states. In particular, where on the ground states (see eq. (A.10)) Effectively, this can be interpreted as shifting the spin j (with respect to the coupled algebra) of the R sector representation by ± 1 2 relative to the decoupled algebra.
We are interested in organising the descendants of these ground states in terms of representations of the (coupled) sl(2, R) zero modes since they have a direct interpretation in terms of the dual CFT, see eq. (2.7). Since the creation generators (the negative bosonic and fermionic modes) transform in the adjoint representation of this sl(2, R), the spins that arise will be of the form j + ℓ, where j is the spin of the (decoupled) ground states while ℓ ∈ Z in the NS sector and ℓ ∈ Z + 1/2 in the R sector; here we have absorbed the above shift by 1/2 into the definition of ℓ. A similar consideration applies for the right-movers, where the resulting spin will be j + ℓ̄ for the same j (and with the same restrictions on ℓ̄). The total energy and spin of such a descendant then follow from these two spins. Note that in the NS-NS and R-R sectors the spacetime spin s will be integer, while in the NS-R and R-NS sectors it will be half-integer.
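Written out (using the identifications E = J 3 0 + J̄ 3 0 and s = J 3 0 − J̄ 3 0 from Section 2.1; this is our paraphrase of the relation referred to above), the energy and spin of such a descendant are

E = (j + ℓ) + (j + ℓ̄) = 2j + ℓ + ℓ̄ ,   s = (j + ℓ) − (j + ℓ̄) = ℓ − ℓ̄ ,

so that s is integer when ℓ and ℓ̄ are both integer or both half-integer (NS-NS and R-R), and half-integer otherwise (NS-R and R-NS).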
The compact directions
The remaining spacetime directions are described by S 3 × T 4 . Supersymmetric strings propagating on S 3 can be described by a WZW model based on su(2) (1) , with conventions paralleling those of the sl(2, R) (1) case above. The dual Coxeter number is h ∨ su(2) = +2 . As for the case of sl(2, R) (1) , we can decouple the bosons from the fermions by defining shifted currents that commute with the fermions, [K a m , χ b n ] = 0. The decoupled currents satisfy again the same algebra as the K a , but with level (k − 2) instead. We will therefore mostly restrict ourselves to k ≥ 2 in this paper, see however the discussion in Section 5.1.1. The ground states of the corresponding WZW models will transform in the same representation for left- and right-movers with respect to the decoupled su(2) algebras (i.e., with respect to the zero modes of (2.20)). These representations are labeled by a spin j' with j' = 0, 1/2, 1, 3/2, . . ., and their states are described by m = −j', −j'+1, . . . , j'−1, j', as is well known for su(2) representations. We choose the convention that the Casimir of the global decoupled algebra (i.e., of the zero modes of (2.20)) on the representation j' equals (2.21). The decoupled and coupled bosonic zero modes agree in the NS sector, while in the R sector they differ by a fermionic contribution; as a consequence, the K 3 0 eigenvalues in the R sector are shifted by ±1/2 relative to those in the NS sector, cf. the discussion around eq. (2.17) above.
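For orientation (this is the standard su(2) normalisation; the precise convention is the one fixed in eq. (2.21) of the text), the quadratic Casimir on the spin-j' representation and the resulting ground-state conformal weight of the decoupled level-(k−2) theory are

C 2 (j') = j'(j'+1) ,   Δ(j') = j'(j'+1) / ((k−2)+2) = j'(j'+1) / k ,

which is the su(2) contribution entering the on-shell analysis below.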
Finally, the T 4 theory corresponds to four free bosons Y i and four free fermions λ i (i = 1, 2, 3, 4). The ground states in this sector are characterised by a momentum vector | p with For a compact torus the left-and right-moving momenta need not agree -they can differ by winding numbers. However, for our purposes, i.e., for identifying the leading Regge trajectory, we will always work in the zero momentum sector p = 0, both for left-and rightmovers. The multiplicity of the Ramond sector ground states is accounted for as usual by introducing two labels (s 2 , s 3 ), with s 2,3 = ±.
GSO projection
As usual in a NS-R worldsheet string theory, one must impose an appropriate GSO projection in order to remove tachyonic modes and guarantee supersymmetry of the spacetime theory. In the NS sector the worldsheet parity operator is defined to be odd on the ground states Let us denote by N the (integer or half-integer) excitation number in the sl(2, R) sector, while N is the corresponding number for su (2), and N for the T 4 excitations. On a state with excitation numbers (N, N , N ) the total worldsheet parity is then The GSO projection (−1) F = (−1)F = +1 in the NS-sector thus requires that either one or all three excitation numbers are half-integer, and this has to be imposed both for leftand right-movers. In order to describe this compactly we introduce the number The above considerations imply that n has to be an integer in the NS sector, both for leftand right-movers. Obviously, the same is true in the R sector since there all excitation numbers are integers anyway.
In the R sector, the GSO projection involves also a contribution from the fermionic zero modes corresponding to s 0 s 1 s 2 s 3 . Thus we can, for any descendant, satisfy the GSO projection by changing s 3 , if necessary. Thus the GSO projection is correctly accounted for by reducing the multiplicity of the 4-fold ground state in the R-sector of T 4 -corresponding to the four choices for (s 2 , s 3 ) with s 2,3 = ± -to 2.
Physical state conditions
The sl(2, R) WZW model contains a time-like direction, and as a consequence the theory is non-unitary. As usual in worldsheet string theory, the corresponding negative-norm states are removed upon imposing the Virasoro constraints. In our context, the physical state conditions are where ν,ν = 0, 1 2 in the R and NS sectors, respectively, and L tot We parameterise the contributions from each component as Here, j, j and h T label the spins (resp. the conformal dimension) of the corresponding ground states; for the case of sl(2, R) and su(2) the relevant spins are defined with respect to the decoupled currents. Furthermore, physical states satisfy the super-Virasoro constraints where again L tot and G tot denote the total worldsheet currents, receiving contributions from all three sectors of the theory.
The no-ghost theorem [35,36,[41][42][43] (adapted here to the supersymmetric setup, see also [30]) shows that the Virasoro constraints (2.26) remove negative-norm states from the spectrum provided the unitarity bound is satisfied. This condition is the k-dependent bound on the spin j that we mentioned before, see the discussion at the end of Section 2.1. It was argued in [16], based on the structure of the spectrally flowed representations, that in fact the bound on j should be slightly stronger and take the form For most of the following the (weaker) unitarity bound will suffice, but for some arguments, in particular the analysis of the spectrally flowed representations, the stronger Maldacena-Ooguri (MO) bound (2.33) will be required.
Next, we write the first equation in the on-shell condition (2.26) as where n was defined above in eq. (2.25). In addition, we get the same equation withn in place of n from the second condition of (2.26), wheren is defined analogously for the right-movers. We therefore conclude that n =n. Furthermore, as was noted above, n is always a non-negative integer after GSO-projection. We can use eq. (2.34) to solve for j Note that for fixed n, the Virasoro level of the physical states satisfy N , N , N ≤ n + ν, as follows from (2.25), and similarly in the barred sector. Since each excitation mode can raise the J 3 0 eigenvalue at most by one (and since each fermionic ψ ± −1/2 mode can only be applied at most once), we conclude that in the NS sector the J 3 0 eigenvalue m of the physical states will lie between j − n − 1 ≤ m ≤ j + n + 1, while in the R sector it will lie between j − n − 1/2 ≤ m ≤ j + n + 1/2. This implies that the spacetime states labeled by n have spin s bounded as |s| ≤ 2n + 2. More explicitly, the relevant states are of the form 8 where r andr are positive integers or zero in the NS sector, and positive half-integers in the R sector -these parameters are simply related to ( ,¯ ), see the paragraph above (2.18), 7 We have taken here the positive square root since j > 0 for unitarity. 8 From what we have said so far, it is not yet clear that all these states will indeed be physical, but this will turn out to be the case, see the discussion below in Section 4. Furthermore, some of these states will appear with higher multiplicity. For the arguments of the next section it is however enough to know that only these charges can appear among the physical states.
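Although eqs. (2.35)-(2.37) are not reproduced above, their content can be reconstructed from the special cases quoted in the text (the supergravity spectrum at n = 0 and the energies listed in Table 3); we record this reconstruction here for orientation, writing j' for the su(2) spin:

j = 1/2 ( 1 + √( 1 + 4 j'(j'+1) + 4k (n + h T ) ) ) ,

and, for the states (2.36) labelled by (r, r̄),

E = 2j + r + r̄ − 2(n+1) ,   s = r − r̄ ,   0 ≤ r, r̄ ≤ 2n + 2 .

In particular, for j' = h T = 0 one has j = 1/2 (1 + √(1 + 4kn)).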
by a shift in order to make them non-negative. The spacetime energy and spin of these states is then given by Finally, it is worth pointing out how the AdS 3 × S 3 × T 4 supergravity spectrum is obtained from the worldsheet description. The supergravity states all arise for n = 0, which leads then to fields of spin |s| ≤ 2. Furthermore, this condition restricts the excitation levels as N, N , N ≤ ν, and it follows that the supergravity spectrum is obtained from the level 1/2 descendants in the NS sector, as well as the R ground states. Crucially, from (2.35) (with no momentum in the T 4 directions) we deduce that the sl(2, R) and su(2) spins are related by SUGRA: so that j is now an integer or half integer (with j ≥ 1). We have explicitly checked that the corresponding physical states precisely reproduce the supergravity spectrum, as derived in e.g. [44]. In particular, one finds that the j = 0 (j = 1) sector contains the (massless) graviton supermultiplet, while the representations with j > 0 give rise to a tower of massive BPS multiplets. 9 3 No massless higher spin states from short strings With these preparations at hand we now want to analyse whether the string spectrum possesses massless higher spin states at least for some value of the level. As we shall show in this section, this will not be the case for the short strings coming from the unflowed (discrete) representations.
Recall first the standard holographic relation between the mass of an AdS 3 (bulk) excitation and the conformal dimension E and spin s of the dual operator in the boundary 2d CFT [45], where E = h + h̄ and s = h − h̄ in the usual CFT notation. As expected, massless higher spin fields are dual to conserved currents of dimension greater than two, which in the present context satisfy E = |s| (with |s| > 2). Hence, massless higher spin states are characterised by the property that either the J 3 0 eigenvalue or the J̄ 3 0 eigenvalue vanishes.
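For reference, the AdS 3 mass formula alluded to here (the standard relation, quoted in units of the AdS radius; this is our addition rather than eq. (3.1) verbatim) reads, for a bulk field dual to an operator of dimension E and spin |s| ≥ 1,

m² = (E + |s| − 2)(E − |s|) ,

so that m² = 0 precisely when E = |s|, i.e. when the dual operator is a conserved current.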
Let us concentrate, for concreteness, on the case J̄ 3 0 = 0. Then it follows from (2.36) that we need to have j = n + 1 − r̄ (eq. (3.2)), and the on-shell condition (2.34) then expresses k in terms of n, r̄, j' and h T (eq. (3.3)). As discussed above, the case n = 0 corresponds to (supergravity) states that have |s| ≤ 2 and are therefore not of higher spin. We may therefore assume that n ≥ 1. Our strategy will be to show that unitarity implies n + r̄ ≤ 1, contradicting the n ≥ 1 assumption, except for n = 1 and r̄ = 0. The latter case is then excluded by the stronger MO-bound (or by noticing that the relevant state is null).
First, from (3.2) we note that the unitarity bound j ≥ 0 implies n + 1 −r ≥ 0. Since j ≥ 0 by definition, from (3.3) we find that positivity of k requires Next, we use the unitarity bound (2.32) which translates for j = n + 1 −r into Together with (3.4), this requirement is equivalent to Since the quantity on the left hand side is greater or equal to zero (recall that n −r > 0 and h T ≥ 0, j ≥ 0 by unitarity), we conclude n +r ≤ 1. Finally, for n = 1 andr = 0, we have j = 2, and hence from (2.35), k ≤ 2, which is only compatible with unitarity for k = 2 (and incompatible with the stronger MO-bound (2.33) even in that case). Actually, the corresponding stateJ is null at k = 2, as has to be the case since it saturates the unitarity bound.
Summarizing, we have shown that the only conserved currents that exist in the unflowed discrete representations appear in the supergravity spectrum (n = 0), and thus have spin s ≤ 2. Our analysis holds for all values of the level k > 0; thus, among the WZW backgrounds there is no radius at which the theory develops a higher spin symmetry from the short string spectrum. This is in line with the arguments of the Introduction, see eq. (1.1). It is also in accord with the results of [46] where evidence was found that the symmetric orbifold point (that exhibits a large higher spin symmetry) is dual to a background with R-R flux.
The long string sector (that is described by spectrally flowed representations) will be discussed in Section 5. As we shall explain there, for k = 1 a stringy tower of higher spin fields appears from the spectrally flowed continuous representation, mirroring the bosonic analysis of [26]. Since these massless higher spin fields arise from long strings, they describe a qualitatively different higher spin symmetry from the usual tensionless limit [26].
Regge trajectories and their N = 4 structure
Next we want to identify the leading Regge trajectory states in the string spectrum and compare this to the W ∞ symmetry that was found in [7]. In order to identify the leading (and sub-leading) Regge trajectory states in the string spectrum, we first need to study in more detail the actual physical states. In this section we concentrate again on the states from the unflowed discrete representations; the spectrally flowed representations will be discussed in Section 5.
General discrete spectrum
Recall from our discussion in Section 2.4 that physical states in a representation built from an AdS 3 groundstate labeled by j take the form (2.36), with the corresponding spacetime energy and spin being given by (2.37). We now want to show that for all choices of r,r in 0 ≤ r,r ≤ 2n + 2, physical states with these quantum numbers exist. In addition, we want to determine their multiplicities.
Let us start with some general comments about the string spectrum. One should expect that the physical states are obtained by applying eight transverse oscillators to the ground states -of the ten oscillators, one linear combination is eliminated by the Virasoro condition, and a second one leads to spurious states, i.e., gauge degrees of freedom. In the current context, it is natural to take the light-cone directions to be a linear combination of the time-like AdS 3 direction, as well as one direction on the T 4 . Then the transverse (physical) excitations correspond to the ± modes from AdS 3 , all three oscillators from the S 3 factor, and three of the four oscillators from the T 4 . Thus the physical descendants of the ground states of the chiral NS and R sector are expected to be counted by -here j and j label the spins of the sl(2, R) and su(2) ground state representation (taken with respect to the decoupled currents), respectively, 10 10 As far as we are aware, this formula was first written down in [32] generalizing the corresponding bosonic formula from [17] and building on [31]. These formulae are correct for sufficiently large values of k for which there are no non-trivial null-vectors.
where h T is the ground state conformal dimension of the T 4 theory, while for the sl(2, R) and su(2) factors we have Here y and z are the chemical potentials with respect to sl(2, R) and su (2), respectively, and we have used that the corresponding characters are of the form Furthermore, q keeps track of the total Virasoro eigenvalue which has to equal q ν for the actual physical states, see eq. (2.26). (We are here describing one chiral sector; the results for left-and right-movers then has to be combined.) The first line in each of (4.1)-(4.2) accounts for the contribution of the ground state representations, while the second line describes the contributions of the non-zero oscillators. The overall multiplicity of 2 in the R-sector reflects the overall multiplicity after GSO projection, see the discussion after eq. (2.25).
We have checked this prediction in some detail (by solving the physical state conditions explicitly, at least for some low-lying states), and we have found complete agreement. We should mention, though, that there are some subtleties with the counting for j = 1; this is discussed in more detail in Appendix B.
We note that this formula in particular implies that, for all 0 ≤ r ≤ 2n + 2, physical states with these quantum numbers exist. In order to see this, we solve for j (in terms of n, j and h T ) using eq. (2.35); then the overall power of q ν comes from taking the term with q n+ν from the oscillator product in the second line. In the NS sector r = 0 then corresponds to the situation where the J 3 0 eigenvalue is j − n − 1. This can be achieved by taking from the numerator the term y −1 q 1/2 , as well as n powers of y −1 q 1 from the geometric series expansion of the denominator term (1 − y −1 q). The corresponding state is thus of the form Similarly, the case r = 2n + 2 corresponds to having J 3 0 eigenvalue j + n + 1, in which case the relevant powers are yq 1/2 from the numerator, and n powers of yq 1 from the geometric series expansion of the denominator term (1 − yq). Schematically, the corresponding state is thus of the form |j + n + 1 = J + −1 where the dots stand for additional terms that make it a lowest weight state with respect to the sl(2, R) algebra. In either case it is easy to see that these representations appear with multiplicity one -these are the 'extremal' cases that can only be obtained in one way.
On the other hand, the intermediate cases 0 < r < 2n + 2 can be obtained in more than one way, but from the above analysis it is clear that all of these terms will indeed arise.
Incidentally, we should note that it follows from the explicit formula that (apart from the overall y j /(1 − y) term) the partition function is symmetric under the symmetry y ↔ y −1 . As a consequence, the multiplicities of the representations corresponding to r and 2n + 2 − r will be the same.
Combining left-and right-movers, the full spacetime spectrum (in terms of energy and spin) forms a diamond in the (E, s) plane for fixed n, depicted in Figure 2. Here the corner points have multiplicity one, but the other points have higher multiplicity. On general grounds it is clear that we must be able to organise the spectrum in terms of (small) N = (4, 4) representations, see Appendix C for a brief review of their structure. In the (E, s) plane, N = (4, 4) multiplets form small diamonds with edges spanning two units of energy and two units of spin. For example, the right most vertex of the diamond in Figure 2, which is characterised by (r,r) = (2n + 2, 0), has multiplicity one (since both r andr take their extremal values), and corresponds to the chiral states with h = j + n + 1 andh = j − n − 1. This state is then the top (h = h 0 + 2) component of the left-moving long N = 4 multiplet whose bottom component has h 0 = j + n − 1 and transforms in the representation m of su (2), where m = 2j + 1, see Table 6 of Appendix C. Similarly, with respect to the right-movers, the state is the bottom component of a similar N = 4 multiplet withh 0 = j − n − 1. The relevant states in the full multiplet then give rise to states in the dashed diamond in Figure 2. (Here we have also included the R sector states that are needed to complete the multiplets.) Once the states that sit in this multiplet have been accounted for, we look at the remaining states and proceed iteratively. For example, the 'extremal' R sector states that contribute to this multiplet have h = j + n + 1 2 and/orh = j − n − 1 2 . Concentrating on the first case, it follows from (4.2) that there will be 8m states of this form transforming as 4 · (m + 1) and 4 · (m − 1) -one factor of 2 is the overall factor in eq. (4.2), while the other factor of 2 comes from the fact that we can either use one fermionic (−1) mode in the R-sector or none. Furthermore, the two different representations come from tensoring with the spin 1 2 representation described by the factor (z 1/2 + z −1/2 ) in the first line. Two copies of each of these two representations are part of the long N = 4 multiplet, see Table 6, while the other two will generate two pairs of new N = (4, 4) multiplets, whose bottom components will transform as (m + 1) and (m − 1), respectively. (The second dot along thē r = 0 edge in Figure 2 represents states in these multiplets.) Proceeding in this manner, we find that the multiplicity of the N = 4 multiplets along ther = 0 edge (i.e., only considering states whose bottom component ish 0 = j − n − 1) is described in Table 1 For future reference, in Table 2 we also give the multiplicity of the N = 4 multiplets along ther = 0 edge for j = 0, i.e., m = 1 -in this case, only δm ≥ 0 is possible and some of the multiplicities are reduced.
Leading Regge trajectory
Having discussed the general structure of the discrete string spectrum, we can now identify the states on the leading Regge trajectory. These are the states that should have the lowest energy for a given spin, together with their N = (4, 4) descendants. We want to argue that they are precisely described by the dashed diamond in Figure 2, where n takes the values n = 0, n = 1, n = 2, etc. First we note that the leading Regge trajectory states will be associated to states with j = 0 (and h T = 0) -for fixed n, as well as (r,r), the choice of j and h T only enters via j as defined in (2.35), and j in turn only contributes to E, but not to s, see eq. (2.37). Choosing j and or h T to be non-trivial, increases j and hence E, but does not modify the spin s. The states of lowest energy (for fixed spin) therefore arise for j = h T = 0.
Similarly, by construction, the states with lowest energy for given spin lie (for fixed n and hence fixed j; recall that j' = h T = 0) on the lower edges of the representation diamond. Without loss of generality, focusing on positive helicity modes, we can then restrict our attention to the r̄ = 0 edge. The energies of these states satisfy the linear dispersion relation (4.7), valid for 2n < s ≤ 2n + 2; the inequality 2n < s arises because the lowest energy state with spin s = 2n is obtained from the diamond corresponding to ñ = n − 1. This is a consequence of the inequality

√(1 + 4kn) ≥ 2 + √(1 + 4k(n − 1)) ,   (4.8)

which, after squaring twice, is equivalent to

(k + 2) ≥ 4n ;   (4.9)

in turn this follows from the unitarity bound, see eq. (2.32), using the expression for j from eq. (2.35) with j' = h T = 0. The conformal dimensions of the leading Regge trajectory states for small values of the spin are plotted (for k = 200) in Figure 3.
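Eq. (4.7) itself is not reproduced above, but its form can be read off from the corner value E = 2j = 1 + √(1 + 4kn) at s = 2n + 2 together with the unit slope along the lower edge (our reconstruction, consistent with the entries of Table 3 below):

E(s) = s − (2n + 1) + √(1 + 4kn) ,   2n < s ≤ 2n + 2 .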
As a side remark, we should note that the states with dispersion relation (4.7) formally become chiral if k takes the value k = n + 1. However, this choice is not allowed by the unitarity bound, except for the supergravity states with n = 0 and the special solution n = 1 that was already discussed after eq. (3.6). [The latter case corresponds to n = 1 and k = 2 and is incompatible with the MO bound (2.33).] Since r̄ = 0, the right-moving states are the 'extremal' states with vanishing right-moving excitation numbers, so that the right-moving (barred) su(2) representation is always trivial. Furthermore, the leading term with r = 2n + 2 is also trivial with respect to the left-moving su(2) algebra, and it is the top state of an N = (4, 4) multiplet with su(2) ⊕ su(2) quantum numbers (1, 1). [Figure 3: conformal dimensions of the leading Regge trajectory states; the legend labels the families by n (e.g. n = 0, j = 1, the graviton), each giving rise to the spin range 2n + 1/2 ≤ s ≤ 2n + 2. In each family of four dots, the last one describes a unique state, the corner state of Figure 2, while the first three correspond to states with higher multiplicity.]
We now want to argue that the leading Regge trajectory consists just of the first multiplet of Table 2 for each n. This is natural since there is only a single multiplet with these quantum numbers; its top component is obtained by tensoring the sl(2, R) representations (4.5) and (4.6) for the left-and right-moving sector, respectively. (The terms with r < 2n + 2, on the other hand, lead in general to N = (4, 4) multiplets for which the left-moving su(2) spin is not trivial.) Furthermore, these states always define the states with smallest energy for the given spin, independent of k. 11 In order to see this, it is enough to show that E(n, s = 2n + 2) < E(p, s = 2n + 2) for any p > n -note that a state with this spin can only appear for p ≥ n. Without loss of generality it is enough to concentrate on the case p = n + 1 since any p > n can be iteratively obtained in this manner. Furthermore, we may assume that the relevant state in the p'th (i.e., n + 1'th) diamond sits on the lower edge, i.e., has energy described by eq. (4.7). Then the inequality we need to prove is simply which upon squaring both sides (after subtracting 2) leads to This identity is now a direct consequence of the unitarity bound, see eq. (4.9) with n = p. 11 The situation is in general more complicated for the other states, see the discussion of the next subsection.
We note that these states carry exactly the same quantum numbers as the generators of the even spin N = 4 W ∞ algebra that was analysed in [27]. This is the minimal version of the N = 4 higher spin symmetry, and it has a nice AdS 3 dual that is also discussed in some detail in [27]. On the other hand, while the string spectrum also contains multiplets with odd integer spin, there does not seem to be any natural candidate for which of the 7 singlet multiplets at spin 2n + 1, see Table 2, should be added to the even spin W ∞ algebra in order to generate the full N = 4 W ∞ algebra of [7] (or the extended algebra of [46] where also the charged bilinears are included in the higher spin algebra). Incidentally, the fact that the leading Regge trajectory should only be identified with the fields (or multiplets) of even spin is also expected from bosonic closed string theory in flat space. There the states of the leading Regge trajectory are associated to the worldsheet states of the form where the level-matching condition requires that the number of transverse oscillators on the left and right is the same. As a consequence, this only leads to fields of even spin s = 2n.
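The flat-space worldsheet states referred to here (the standard expression; we quote it since the display formula itself is not reproduced above) are of the schematic form

α^{i_1}_{−1} · · · α^{i_n}_{−1} ᾱ^{j_1}_{−1} · · · ᾱ^{j_n}_{−1} |0; p⟩ ,

with equal numbers of left- and right-moving transverse oscillators, so that the corresponding symmetric traceless field indeed has even spin s = 2n.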
Subleading N = 4 trajectory
Unlike the leading Regge trajectory, the identification of the subleading trajectory turns out to be somewhat less clean, and in particular it depends on the value of k. For 2n < s ≤ 2n + 2 there are a priori three kinds of states competing to be the subleading trajectory. These are the states in the interior of the (n, j' = 0) diamond; the states on the edge of the (n, j' = 1/2) diamond; and the states on the edge of the (n + 1, j' = 0) diamond. Denoting the energies of these three sets by E*_n(j' = 0), E_n(j' = 1/2), and E_{n+1}(j' = 0), respectively, we find that their explicit values for the relevant spins are as given in Table 3.

Table 3:
s        | E*_n(j' = 0)        | E_{n+1}(j' = 0)          | E_n(j' = 1/2)
2n + 2   | --                  | -1 + √(1 + 4k(n+1))      | 1 + 2√(1 + kn)
2n + 3/2 | 3/2 + √(1 + 4kn)    | -3/2 + √(1 + 4k(n+1))    | 1/2 + 2√(1 + kn)
2n + 1   | 1 + √(1 + 4kn)      | -2 + √(1 + 4k(n+1))      | 2√(1 + kn)
2n + 1/2 | 1/2 + √(1 + 4kn)    | -5/2 + √(1 + 4k(n+1))    | -1/2 + 2√(1 + kn)

It turns out that among these states, the one with the smallest energy is the one given in eq. (4.14). A few remarks are in order. First, the competing states always lie on the edge of some diamond. Second, for fixed k, the choice between the two diamonds is n- and therefore s-dependent. Nevertheless, the existence of a minimum value for n (which is n = 0) implies that we can make the states of eq. (4.13) the subleading ones for all possible values of n, and thus for all higher spin states, by tuning k to be small enough. This happens for k ≤ 15/4. Note that since k must be an integer, this allows for the two solutions k = 2 and k = 3.
We should also note that the su(2) ⊕ su(2) quantum numbers are different for these two sets of competing representations, as detailed in Table 4. In particular, the states of the second column are non-trivial with respect to the right-moving su(2) algebra. Unfortunately, there does not seem to be any particularly simple pattern among these representations, and they do not seem to be naturally in correspondence with the subleading Regge trajectory of [46]. 12 Obviously, there is no fundamental reason why such a correspondence should exist -the two descriptions refer to different points in moduli space.
AdS 3 × S 3 × K3
One may hope that the situation could become a bit simpler for the case of AdS 3 × S 3 × K3, since then the spectrum will contain fewer states. Let us consider the case when K3 can be described as a T 4 /Z 2 orbifold. This Z 2 orbifold can be easily implemented in the worldsheet description since it simply acts as a minus sign on each of the four bosonic and fermionic oscillators associated to the T 4 directions. [Table 5: number of the Z 2 -even N = 4 multiplets for r̄ = 0, organized by their su(2) quantum numbers; to be compared with Table 1.]
Unfortunately, there is still a fairly large multiplicity (namely 3 = 5 − 2; the subtraction of 2 arises as in the passage from Table 1 to Table 2) for the first odd spin 'leading' Regge trajectory states, and again the most natural interpretation is that the leading Regge trajectory consists of just the even spin multiplets, as before. Similarly, the situation for the subleading Regge trajectory also does not seem to improve significantly.
Spectrally flowed sectors and long strings
In the previous section we have identified the low-lying states of the leading Regge trajectory that originate from the unflowed discrete representations. More specifically, these states have spin s = 2n + 2, with n = 0, 1, 2, . . . , (k + 2)/4, where the upper bound comes from eq. (4.9), which in turn is a consequence of the unitarity bound (2.32). If we impose the slightly stronger MO-bound (2.33), the upper bound on n is slightly reduced. In either case, we only get finitely many states in this manner. In this section we look for the remaining states of the leading Regge trajectory. As we shall see, they arise from the continuous representations describing long strings. This also makes intuitive sense, since the leading Regge trajectory states correspond to longer and longer strings that get closer to the boundary of AdS 3 , until they finally merge with the continuum of long strings.
We start by describing the rest of the full string spectrum that corresponds to the spectrally flowed continuous and discrete representations. For each class of representation we then identify the states of lowest mass for a given spin. We will see that the states from the unflowed discrete representations are indeed the lightest states of a given spin for small spin; furthermore, for s ≈ k/2, the spectrally flowed continuous representations will take over.
The spectrally flowed representations are obtained from the discrete and continuous representations upon applying the spectral flow automorphism of sl(2, R) (1) (eq. (5.1)). Here ω is an integer, and the same automorphism (with the same value of ω) is applied to both left- and right-movers. We characterise the spectrally flowed representations by using the same underlying vector space, but letting the J̃ a m modes act on it (rather than the J a m modes), and similarly for the fermions. In order for the resulting representation to decompose into lowest weight representations of sl(2, R) we need, in particular, that ω be non-negative; thus we take ω to be a positive integer (or zero). Note that the J̃ 3 0 eigenvalue of the states is then shifted by an ω-dependent amount relative to m, where m is the actual J 3 0 eigenvalue of the state in question. Since ω is positive, this eigenvalue is always positive (at least on the ground states).
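For orientation, a commonly used form of this automorphism (our statement of the standard supersymmetric spectral flow; the signs and normalisations of eq. (5.1) may differ) is

J̃ 3 n = J 3 n + (kω/2) δ_{n,0} ,   J̃ ± n = J ± _{n∓ω} ,   ψ̃ 3 r = ψ 3 r ,   ψ̃ ± r = ψ ± _{r∓ω} ,
L̃ n = L n + ω J 3 n + (kω²/4) δ_{n,0} ,

so that a state with J 3 0 eigenvalue m acquires the J̃ 3 0 eigenvalue m + kω/2 in the flowed sector.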
Using the explicit form ofL 0 (c.f. (5.2)), the on-shell condition is in the NS-sector and for general ω 5) where N tot = N + N + N is the total excitation number, and we have set j = h T = 0. A similar condition also applies to the right-movers, and we have the level-matching condition N tot −N tot = ωs (5.6) since (5.5) involves the term ωm. Finally, we need to impose the GSO-projection. It is natural to assume -and this leads to the correct BPS spectrum of [47] -that the correct GSO projection is the one that takes the same form in all representations, including the spectrally flowed ones. In terms of the original vector space description we are using here, this then translates into the condition since we only flow in the sl(2, R) factor and hence the fermion number of the ground state changes by one for each unit spectral flow, see also [48]. 13
Spectrally flowed representations -the continuous case
According to [16] the spectrum of string theory on AdS 3 contains representations whose ground states transform in continuous representations of sl(2, R). The states of the continuous representation C α j are labelled by |j, m, α , where j = 1 2 + ip with p real, and m takes all values of the form m = α + Z. These representations are neither highest nor lowest weight with respect to sl(2, R). Their Casimir is given by In particular, they can therefore only satisfy the mass-shell condition (2.26) in the NS-sector with N tot =N tot = 0. Since this is incompatible with the GSO projection, there are no physical states in the unflowed continuous representations. 14 However, after spectral flow, these representations give rise to interesting physical states, as we shall now describe.
Because of (5.3) (applied to |j, m, α⟩) the spectrally flowed continuous representations are lowest weight with respect to sl(2, R) if ω > 0. Plugging j = 1/2 + ip into the mass-shell condition (5.5) and solving for m leads to eq. (5.9), and similarly for m̄. (Remember that j, i.e., p, and ω are the same for both the left- and right-moving representations.) Using (5.4), the spacetime energy of the state then follows. It is clear that the lowest energy for any given quantum numbers is achieved by putting p = 0, as also expected classically. Furthermore, using level-matching (5.6) to solve for the spin, we can rewrite this in terms of s and N̄ tot . Thus the minimum energy for a given s is achieved by putting N̄ tot = 0 if ω is odd, or N̄ tot = 1/2 if ω is even (as required by the GSO projection, eq. (5.7)). For any k > √6/2 − 1 ≈ 0.22 and any even ω ≥ 2, the continuous (ω − 1) sector has lower energy. Hence, in what follows we focus on the case of odd ω. Since ωs = N tot , with ω ≥ 1, setting N̄ tot = 0 is only valid for s ≥ 0. (Analogously, the lowest energy for s ≤ 0 is achieved by putting N tot = 0, so that ωs = −N̄ tot .) Thus we conclude that the spectrally flowed continuous representations contain states with the corresponding dispersion relation for any spin s. This energy is a growing function of ω ∈ N for any k > −1 + √2 ≈ 0.41. Since there is no constraint on the set of spins, the lowest energy for any spin is achieved by putting ω = 1, for which we then find eq. (5.13).
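The dispersion relation itself is not reproduced above; a form consistent with all the statements in this paragraph (the growth in ω for k > √2 − 1, the comparison between even and odd ω sectors for k > √6/2 − 1, and the masslessness at k = 1 discussed below) is the following reconstruction, valid for p = 0, odd ω and N̄ tot = 0:

E min (s) = s + kω/2 + (1/ω) ( 1/(2k) − 1 ) ,

so that for ω = 1, corresponding to eq. (5.13), E min (s) = s + k/2 + 1/(2k) − 1, which reduces to E = s precisely at k = 1.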
Massless higher spin fields for k = 1
We should note that for k = 1, (5.13) describes massless higher spin states. For this value (k = ω = 1), the mass-shell condition (5.9) (and its right-moving analogue) simplify considerably, the conformal dimensions of the dual CFT take the form given in (5.15), and the GSO-projection (5.7) now requires that both N_tot and Ñ_tot should be integers.
Since there are eight transverse oscillators, there is a stringy growth of massless higher spin fields.
This phenomenon is the exact analogue of what was found for the bosonic case, where the corresponding phenomenon happens for k_bos = 1 + 2 = 3 in [26]. (In particular, k = 1 is also the minimum value where the massless graviton that arises from the discrete representation with j = 1 is allowed by the MO-bound (2.33).) The theory with k = 1 describes strings scattering off a single NS5 brane; while this is formally an ill-defined theory - the level of the bosonic su(2) algebra is negative, k_bos = 1 − 2 = −1, although this conclusion could be avoided if we consider instead of AdS3 × S3 × T4 the background AdS3 × S3 × S3 × S1, see the comments at the bottom of page 2 - it was argued in [23] that at least some aspects of the theory still make sense. Note that the gap of the spectrum was predicted in [23], see eq. (4.26) of [23], and this is reproduced exactly (as in the bosonic case of [26]) in our analysis from the mass-shell condition (5.9) for p = 0, ω = 1 and N_tot = 0. It was furthermore argued there that the dual CFT should correspond to a symmetric orbifold associated to R4 × T4. (Here the R4 arises from the S3 together with the radial direction of AdS3 that becomes effectively non-compact in this limit.) This is nicely in line with our finding of the massless higher spin fields. In particular, given that the symmetric orbifold involves an 8-dimensional free theory, the single particle generators have the same growth behaviour as found above in (5.15), see [49,50].
On the other hand, this tensionless limit is different in nature to what one expects from the symmetric orbifold of T 4 , see [26] for a discussion of this point. In particular, one may expect that these massless higher spin states get lifted upon switching on R-R flux. It would be interesting to confirm this, using the techniques of [28].
Spectrally flowed representations - the discrete case
For discrete flowed representations, it follows from the analysis of [16] that j satisfies the MO-bound. Writing m = j + r, and solving the on-shell condition (5.5) for j, we find an explicit expression for j. In addition, we must solve the constraints r ≥ −N for ω odd, and r ≥ −N − 1/2 for ω even. We should note that ωs = N_tot − Ñ_tot, and s = r − r̃, so that N_tot − rω = Ñ_tot − r̃ω.
Then j is indeed the same for the left- and right-moving sectors.
We first note that j is a decreasing function of r. The unitarity constraint j ≥ 0, together with the fact that there is a minimum value that r can take as a function of N, leads to the existence of a minimum value for the levels N_tot, Ñ_tot in a given ω sector, which for ω odd is of the form N_tot, Ñ_tot ≥ (kω² + 2)/(4ω + 4) (5.19), with a corresponding bound for ω even. Let us then define N_min(k, ω) as this minimum level, where 0 ≤ b < 1 is a bookkeeping device that rounds up to the closest integer if ω is odd, or to the closest upper half-integer if ω is even, as required by the GSO-projection of eq. (5.7). Note that there is no upper bound on the levels, on the other hand. Furthermore, N_min(k, ω) is an increasing function of both k and, more importantly, ω. This means that the lowest allowed levels appear for ω = 1.
As for the spectrally flowed continuous representations (that are analysed in Section 5.1), the lowest energy states are those for which either N_tot or Ñ_tot (or both) attain their lowest possible values. Let us first fix Ñ_tot = N_min(k, ω). Then by level-matching the spin s is positive or zero. Furthermore, we fix r̃ = −N_min(k, ω) − 1/2 if ω is even, and r̃ = −N_min(k, ω) if ω is odd. (Note that this is only possible if the internal excitation numbers vanish, namely the internal CFT is not excited; this condition will lead to the analogue of the even spin lowest energy states in the unflowed case.) This uniquely determines j to be

j = (1/2) [ 1 − kω + √( 4bk(ω + 1) + (kω − 1)² ) ]   (5.22)

for both even and odd ω. We see that when b = 0 we indeed get j = 0, as expected. The energy is then given by (5.23) for odd ω, with a similar expression for even ω. As for the continuous case, for any even ω ≥ 2, the discrete (ω − 1)-sector has lower energy. Hence we restrict our attention to the case of odd ω, with energy given by (5.23). We find that the lowest energy states of positive helicity s are then given by (5.24), with separate expressions for ω odd and ω even. We should note that the left-moving states do not saturate the value of m for the given value of N_tot, and thus they may have multiplicities greater than one. The right-moving states, on the other hand, saturate it, and hence will be unique. Finally, in the above we have assumed s > 0; the corresponding lightest states with negative helicity are obtained upon exchanging the roles of left- and right-movers.
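A quick numerical check (not from the paper) of eq. (5.22) as reconstructed above, confirming that b = 0 gives j = 0 and that j grows with the bookkeeping parameter b for fixed k and ω; the values of k, ω and b below are arbitrary illustrations.

import numpy as np

def j_discrete(k, w, b):
    # reconstructed Eq. (5.22): j = (1/2)[1 - k*w + sqrt(4*b*k*(w+1) + (k*w - 1)^2)]
    return 0.5 * (1.0 - k * w + np.sqrt(4.0 * b * k * (w + 1.0) + (k * w - 1.0) ** 2))

for b in (0.0, 0.25, 0.5, 0.75):
    print(b, round(j_discrete(k=20, w=1, b=b), 4))   # j = 0 at b = 0, then increasing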
Even though it is perhaps not evident, the leading energy (5.23) is an increasing function of ω, a fact which we have confirmed numerically. Therefore, the lowest energy states for any given spin come from the ω = 1 sector; their energy E_disc(s) is given in (5.25). We emphasize that the parameter b introduced above is uniquely fixed by k and does not depend on the spin s. As a result, this dispersion relation is linear in the spin s. The same is also true for the states from the spectrally flowed continuous representations, see eq. (5.13). This behaviour ties in nicely with the observation of [51], see also [52], about the behaviour of classical strings for large spin s. In particular, it is argued in [51], see eq. (6.0.8), that the log s correction term to the linear dispersion relation vanishes for pure NS-NS flux (see footnote 15 below).
Comparison of the different sectors
We can now compare the different dispersion relations coming from the different sectors.
Recall from the analysis of Section 4.2, see eq. (4.7), that the dispersion relation for the leading Regge trajectory states from the unflowed discrete representations is E_Regge(s) of eq. (5.26), where we have set s = 2n + 2 in eq. (4.7) - this corresponds to the top component of the corresponding N = 4 multiplet - and expressed n in terms of s. These states are only available for spins s < k/2 + 2 − 1/(2k), see eq. (5.1). (Note that, for k ≥ 2, the right-hand side of this inequality is not an integer and hence cannot be attained, see footnote 16.) It is easy to see that (in this range of spins) E_Regge(s) from eq. (5.26) is smaller than both E_cont(s) from eq. (5.13) and E_disc(s) from eq. (5.25); in fact, the expressions only coincide at the (unphysical) boundary value of s. Thus the states from the unflowed discrete representations describe the leading Regge states for spins s < k/2 + 2 − 1/(2k).
Footnote 15: These claims are somewhat in tension with the analysis of [53] where a (log s)² correction term was found for the case of pure NS-NS flux. Our findings seem to support the conclusion of [51,52]. We thank Arkady Tseytlin for drawing our attention to the work of [51].
Footnote 16: For k = 1, it gives s = ±2.
For larger spins, on the other hand, the relevant states must come from the spectrally flowed representations. As we have seen in Sections 5.1 and 5.2, for both the spectrally flowed continuous and discrete representations, the lowest energy states always appear for ω = 1, and in either case, they give rise to states of arbitrarily high spin. We can compare the relevant dispersion relations, and it is fairly straightforward to see from eqs. (5.13) and (5.25) that the continuous states have the lower energy for all spins. Thus it follows that the remaining states of the leading Regge trajectory are part of the spectrally flowed continuous representations. In order to get a sense of the qualitative picture, we have plotted the relevant states in Figure 1 for one representative value of k (k = 20).
The picture that emerges is thus that the lowest energy states arise in the unflowed discrete sector for as high spin as allowed by the MO-bound. Once the MO-bound is reached, the continuous ω = 1 representations take over; this makes intuitive sense since the leading Regge trajectory states come from highly spinning strings that get longer and longer as the spin is increased. As they hit the boundary of AdS 3 , they merge into the continuum of long strings [16], and thus the leading Regge trajectory states of higher spin will arise from that part of the spectrum, i.e., from the spectrally flowed continuous representations.
Conclusions
In this paper we have studied string theory on the background AdS 3 × S 3 × T 4 with pure NS-NS flux, using the WZW model worldsheet description with a view to exhibiting the emergence of a higher spin symmetry in the tensionless (small level) limit. As we have shown in Section 3, this part of the moduli space does not contain a conventional tensionless point where small string excitations become massless and give rise to a Vasiliev higher spin theory. However, for k = 1, a stringy massless higher spin spectrum emerges from the spectrally flowed continuous representations (corresponding to long strings). These higher spin fields are of a different nature than those arising in the symmetric orbifold of T 4 [26], but they realise nicely some of the predictions of [23].
For generic values of k we could also identify quite convincingly the states that make up the leading Regge trajectory, and we saw that they comprise the spectrum of a Vasiliev higher spin theory with N = 4 superconformal symmetry. It would be very interesting to try to repeat the above analysis using the worldsheet description of [28] that allows for the description of the theory with pure R-R flux (where one would expect the actual higher spin symmetry to emerge, see the arguments of the Introduction). Among other things one should expect that the massless higher spin fields that arise from the long string spectrum at k = 1 will acquire a mass, since the long string spectrum is believed to be a specific feature of the pure NS-NS background. On the other hand, the leading Regge trajectory states should become massless as one flows to the theory with pure R-R flux. It would be very interesting to confirm these expectations. It would also be very interesting to analyse to which extent the leading Regge trajectory forms a closed subsector of string theory in the tensionless limit.
The stress tensor satisfies the Virasoro algebra T(z)T(w) ∼ (c/2)/(z − w)^4 + ..., with central charge given in (A.20). In terms of modes, in the NS sector we have the standard (anti)commutation relations; after a suitable redefinition, the R-sector algebra takes a form which is exactly as in the NS sector. The price one pays for this redefinition is that the fermionic Ramond vacuum |0⟩_R (which is annihilated by all the positive modes of the fermions) is no longer annihilated by L_0, but rather acquires a non-vanishing L_0 eigenvalue. Finally, it is interesting to note that if we simultaneously consider supersymmetric sl(2, R) and su(2) algebras (which have h^∨ equal and opposite in sign), as appropriate to AdS3 × S3, we find that the h^∨ terms in (A.24)-(A.26) drop out from the algebra of the total currents.
B Low momenta subtleties
The only subtlety concerning the counting of physical states given by eqs. (4.1) and (4.2) arises for j = 1 and j = 0. Then the mass-shell condition requires that the physical states appear at excitation number N = 1/2, and in particular, the state that is excited by ψ^−_{−1/2} has j = 0. For j = 0 the general character formula for sl(2, R) representations (4.4) breaks down since the L_{−1} = J^+_0 descendant of the state with j = 0 is null. As a consequence, the corresponding character is actually not an irreducible character, but rather splits up into the contributions of two different irreducible sl(2, R) representations (namely the ones with j = 0 and j = 1). This phenomenon also has a microscopic origin: for j = 1 and j = 0 there are three sl(2, R) descendants that define physical states. Since one of these states, |1/2; 0, 0⟩, is the vacuum state of the space-time CFT, these states describe the chiral states of the space-time CFT. (Recall that the above discussion is a chiral discussion; the vacuum state for the right-movers, say, appears then together with the above states.) In particular, the j = h = 2 state is the Virasoro field, and at j = h = 1 we get, in addition to the state |1/2; 1, 1⟩, six j = h = 1 states from the excitations associated to the S3 × T4 directions. Altogether, they give rise to an su(2) current algebra (coming from the S3 excitations), as well as four h = 1 bosons - these are the familiar bosons of the T4. (Similarly, in the R-sector we get four h = 1/2 fields and four h = 3/2 fields - they describe the four free fermions of the T4, as well as the four supercharges of the N = 4 superconformal algebra.)
Table 7. Chiral long N = 4 multiplet for m = 1 and m = 2.
A short (BPS) multiplet is obtained by demanding that Q^{i+}_{−1/2} |h; j⟩ = 0 for one choice of i ∈ {1, 2}. Using the superconformal algebra relations, we see that every 1/4 BPS state is automatically 1/2 BPS, i.e., if Q^{i+}_{−1/2} |h; j⟩ = 0 for one choice of i, it is actually zero for both i = 1, 2. Furthermore, the BPS bound can be written down explicitly. The resulting short multiplet is described in Table 8. As usual, for small values of j (or m), there are further shortenings; in particular, for m = 1 the whole multiplet consists just of the vacuum itself h = j = 0, while for m = 2 the whole multiplet truncates to 2 (h = 1/2) ⊕ 2 · 1 (h = 1).
Table 8. Short N = 4 multiplet:
state     su(2)          h
|m⟩       m              j
Q|m⟩      2 · (m − 1)    j + 1/2
QQ|m⟩     m − 2          j + 1
The corresponding multiplets of the full (4, 4) theory are then obtained by tensoring these chiral multiplets together. For example, if both left- and right-moving multiplets are long (corresponding to m and m̃), the total number of states is 256 × m · m̃. | 16,157.4 | 2017-04-27T00:00:00.000 | [ "Physics" ] |
Objective Numerical Evaluation of Diffuse, Optically Reconstructed Images Using Structural Similarity Index
Diffuse optical tomography is emerging as a non-invasive optical modality used to evaluate tissue information by obtaining the optical properties’ distribution. Two procedures are performed to produce reconstructed absorption and reduced scattering images, which provide structural information that can be used to locate inclusions within tissues with the assistance of a known light intensity around the boundary. These two procedures are referred to as the forward problem and the inverse solution. Once the reconstructed image is obtained, a subjective measurement is conventionally used to assess the image. Hence, in this study, we developed an algorithm designed to numerically assess reconstructed images to identify inclusions using the structural similarity (SSIM) index. We compared four SSIM algorithms with 168 simulated reconstructed images involving the same inclusion position with different contrast ratios and inclusion sizes. A multiscale, improved SSIM containing a sharpness parameter (MS-ISSIM-S) is proposed as an evaluation that approximates human visual perception. The results indicated that the proposed MS-ISSIM-S accords with human visual perception: for inclusions of similar size, its similarity score decreases as the contrast increases. This metric is therefore promising for the objective numerical assessment of diffuse, optically reconstructed images.
Introduction
Diffuse optical tomography (DOT) is a promising imaging technology designed to reconstruct absorption and reduced scattering coefficients by obtaining the light propagation intensity around a tissue boundary [1]. DOT involves two major steps to accomplish the entire process of obtaining the optical property map distribution: a measurement procedure and a computation procedure. In the measurement system, several pairs of light sources and detectors are attached around a subject or phantom model to acquire the light radiance distribution. In the computation procedure, an image reconstruction algorithm is utilized to predict the optical properties inside the tissue [2,3]. The DOT technique is a non-invasive modality because it uses the near-infrared (NIR) spectral window to image the structural and functional properties of human tissue. For this reason, several works with clinical and computational elements have explored three measurement schemes, referred to as continuous-wave (CW), frequency-domain (FD), and time-domain (TD) [4][5][6]. CW-DOT works only with the attenuation of the light intensity, so a direct-current voltage can be used to drive the associated laser source, whereas FD-DOT applies an amplitude-modulated light source with a typical frequency of 100 MHz. In contrast, TD-DOT, which resolves the photon time-of-flight distribution, distinguishes three propagation regimes (ballistic, snake, and diffuse photons) and thereby reduces the ill-posedness relative to CW-DOT and FD-DOT. Improved measurements are considered possible via the TD-DOT method [7,8].
To complete the entire DOT computation, a forward problem based on the finite element method (FEM) and an inverse solution with a regularization algorithm must be solved [9]. Once these two procedures have been fulfilled, the optical property distribution indicated by the reconstructed images can be obtained. The reconstructed images provide structural or functional information related to tissue conditions. In the case of breast imaging, the distribution map of optical properties offers information associated with the presence of tumors. To assess the reconstructed images from DOT, the subjective knowledge and insight of medical image analysis experts are required, which tends to be costly and time consuming. Medical image analysis normally relies on the mean square error (MSE), peak signal-to-noise ratio (PSNR), and contrast-to-noise ratio (CNR); however, these assessments are often inconsistent with the human visual system (HVS) [10][11][12]. A contrast-and-size detail (CSD) analysis was developed to deal with contrast ratio and size, and exhibited the capability to separate visible and invisible inclusions [13,14]. However, CSD does not accord well with human perception and lacks a threshold value to distinguish the presence of inclusions. To overcome this issue, the structural similarity (SSIM) index was first introduced in 2004 to accord with the HVS by considering luminance, contrast, and structure [12]. Since then, SSIM has become increasingly popular in the field of image quality assessment (IQA), including biomedical and clinical applications [15][16][17][18][19].
In optical imaging, SSIM has been used to evaluate image enhancement, whereas in microwave imaging, SSIM has been used to inspect images of the breast [20,21]. In addition, radiological image assessments for computed tomography and magnetic resonance imaging have been conducted [22][23][24][25]. Moreover, SSIM can be used to objectively assess images with the assistance of a reference image in applications such as detecting dopamine from alterations of pH and histamine, as well as in radiotherapy [26,27]. The abovementioned studies apply SSIM in the medical field because several reports have demonstrated that, with appropriate image processing insight, SSIM can be adapted to improve sensitivity for a given purpose [28]. The multiscale SSIM (MS-SSIM) may be more flexible than the mean SSIM (MSSIM) because it provides multiscale image assessment, downsampling the images by a factor of two at each iteration [29,30]. In addition, SSIM has been repeatedly improved, with several derivative methods developed, such as gradient-based SSIM (GSSIM), the three-component weighting region, the four-component weighting region, the complex wavelet SSIM, and an improved SSIM with a sharpness comparison (ISSIM-S) [31][32][33][34][35]. In these advanced implementations, SSIM has been adapted to perform reasonably when assessing images without a reference, and can be used for image decomposition, identifying inter-patch and intra-patch similarities, and deblurring IQA [36][37][38][39]. SSIM is used widely to evaluate images, including medical images; therefore, this study presents four types of SSIM as a computer-based observer to assess DOT-reconstructed images. The emphasis of this research was to evaluate simulated images to avoid uncertainty in a practical environment. MSSIM, MS-SSIM, mean ISSIM-S (MISSIM-S), and multiscale ISSIM-S (MS-ISSIM-S) were utilized to compare homogeneity and heterogeneity. To the best of our knowledge, this is the first comparison of these four types of SSIM for evaluating DOT-reconstructed images. Additionally, MS-ISSIM-S is a novel image quality metric proposed in this research. A comparison of the four SSIM algorithms was conducted with 168 simulated reconstructed images involving the same inclusion position, as well as different contrast ratios and inclusion sizes. The proposed MS-ISSIM-S measure is ISSIM-S modified by a multiscale technique, as presented in Section 2.4. To evaluate the performance of these four SSIMs, the mean opinion score (MOS) was obtained and compared using Spearman's rank correlation, as described in Section 2.5.
The remainder of this study is organized as follows. Section 2 describes the methodology and Section 3 describes the results and discussions. Section 4 presents some final concluding remarks.
Methodology
The procedure begins with an image reconstruction process that yields reconstructed images of the optical properties. The initial estimates of the optical properties, light source intensity, modulation frequency, and speed of light in the diffusion medium must be specified first to simulate the forward problem. Then, a forward problem algorithm is employed to calculate the light distribution around the boundary, which is compared with the light intensity from the measurement. If the solution converges, the simulation is stopped. However, if the solution does not converge, an inverse solution algorithm is used along with regularization to update the absorption and diffusion coefficients (the latter related to the reduced scattering coefficient). This image reconstruction continues until the stop criterion is satisfied. When the reconstructed images are obtained, two images are included, which separately contain information on homogeneity and heterogeneity; these are then compared with four numerical assessment methods, namely MSSIM, MISSIM-S, MS-SSIM, and MS-ISSIM-S. These four numerical analyses result in similarity values. Figure 1 shows a diagram of the processes used to yield the similarity measures investigated in this study.
Forward Problem
This study describes the forward model of FD-DOT to express the light intensity distribution Φ(r, ω) at position r and light modulation frequency ω with the known absorption coefficient µ_a, diffusion coefficient D, light source term S_0, and speed of light in the medium c by solving the diffusion equation (DE), as given below:

−∇ · (D(r) ∇Φ(r, ω)) + (µ_a(r) + iω/c) Φ(r, ω) = S_0(r, ω),    (1)

where D is defined as D = 1 / [3(µ_a + µ_s′)] with µ_s′ = (1 − g) µ_s, where µ_s is the scattering coefficient, g denotes the average cosine of the scattering angle, and µ_s′ refers to the reduced scattering coefficient.
To simulate the light distribution with the DE, as in Equation (1), an FEM is implemented with the exact optical property values of µ_a and µ_s′, as well as the known S_0(r, ω) and the boundary condition. This study adopted the mixed (Robin-type) boundary condition, as shown in Equation (3). The FEM formulation involves two steps: the boundary condition is substituted into the weak form, and the Galerkin method is applied.

Φ(r, ω) + 2αD n̂ · ∇Φ(r, ω) = 0,   r on the boundary,    (3)
where n̂ refers to the outward unit normal vector and α denotes the internal reflection incorporated as a result of the refractive index difference at the boundary. Therefore, the discrete equation in matrix form can be expressed as Equation (4), where A denotes the optical property matrix, b refers to the boundary nodes, l to the internal nodes, and i and j are matrix indexes. Thus, Equation (4) expresses the forward model Φ in the simple matrix form: optical property matrix · radiance = source.
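The following is a minimal sketch (not the authors' code) of the "optical property matrix · radiance = source" structure of Equation (4). For brevity it uses a finite-difference discretization of the frequency-domain DE (1) on a small 2-D grid instead of the paper's FEM, and the speed of light in tissue and the crude boundary treatment are assumptions made only for illustration.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_fd_diffusion(mu_a, mu_s_prime, src, h=1.0, freq=100e6, c=2.26e11):
    """Solve -div(D grad Phi) + (mu_a + i*omega/c) Phi = S on a square grid.

    mu_a, mu_s_prime, src : 2-D arrays (mm^-1, mm^-1, source density)
    h : grid spacing in mm; c : assumed speed of light in tissue (mm/s).
    """
    ny, nx = mu_a.shape
    D = 1.0 / (3.0 * (mu_a + mu_s_prime))        # diffusion coefficient
    omega = 2.0 * np.pi * freq
    n = nx * ny
    idx = lambda i, j: i * nx + j

    rows, cols, vals = [], [], []
    for i in range(ny):
        for j in range(nx):
            k = idx(i, j)
            diag = mu_a[i, j] + 1j * omega / c
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < ny and 0 <= jj < nx:
                    Dface = 0.5 * (D[i, j] + D[ii, jj])   # face-averaged D
                    rows.append(k); cols.append(idx(ii, jj)); vals.append(-Dface / h**2)
                    diag += Dface / h**2
                else:
                    diag += D[i, j] / h**2   # crude zero boundary, stand-in for Eq. (3)
            rows.append(k); cols.append(k); vals.append(diag)
    A = sp.csr_matrix((vals, (rows, cols)), shape=(n, n), dtype=complex)
    phi = spla.spsolve(A, src.ravel().astype(complex))
    return phi.reshape(ny, nx)

# Homogeneous toy phantom (background mu_a = 0.01 mm^-1, mu_s' = 1 mm^-1 as in the text)
mu_a = np.full((40, 40), 0.01)
mu_sp = np.full((40, 40), 1.0)
src = np.zeros((40, 40)); src[20, 2] = 1.0       # point source near the boundary
phi = solve_fd_diffusion(mu_a, mu_sp, src, h=2.0)
print(np.log(np.abs(phi[20, ::8])))              # log amplitude falls off with distance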
Inverse Solution
Because the goal of DOT is to reconstruct the optical properties inside the tissue from the light intensity information provided around the boundary, the objective function χ² is minimized; it measures the misfit between the photon intensity measured around the geometry, Φ_M, and the light intensity obtained by solving the DE with the estimated optical properties, Φ_C, as expressed in Equation (5):

χ² = Σ_i (Φ_M,i − Φ_C,i)².    (5)
These data-model misfit differences can be minimized by iteratively solving J∆χ = ∆Φ, where J = [∂Φ_C/∂µ_a, ∂Φ_C/∂D] is the Jacobian matrix and ∆χ = [∆µ_a; ∆D] denotes the update vector of the optical coefficients at each iteration. However, solving the inverse problem J∆χ = ∆Φ usually suffers from ill-posedness as the number of model parameters increases. As a result, Tikhonov regularization (TR) was introduced to overcome this issue. Hence, the inverse problem in DOT is formulated as the optimization of the damped least-squares problem

min over ∆χ of ||J∆χ − ∆Φ||² + λ||∆χ||²,
where λ is a regularization parameter. One can minimize this damped least-squares problem iteratively by solving the following update equation, where I is an identity matrix [40]:

∆χ = (JᵀJ + λI)⁻¹ Jᵀ ∆Φ.
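A minimal sketch (not the authors' implementation) of the regularized update loop described above. The forward and jacobian callables are placeholders for a DOT forward solver and its Jacobian (for example, built from the FEM model); their names and signatures are assumptions for illustration only.

import numpy as np

def reconstruct(chi0, phi_meas, forward, jacobian, lam=1e-3, max_iter=20, tol=1e-6):
    chi = chi0.copy()                          # chi = [mu_a nodes ; D nodes]
    for it in range(max_iter):
        phi_calc = forward(chi)
        d_phi = phi_meas - phi_calc            # data-model misfit
        if np.linalg.norm(d_phi) ** 2 < tol:   # stop criterion on chi^2 of Eq. (5)
            break
        J = jacobian(chi)                      # shape: (n_measurements, n_parameters)
        lhs = J.T @ J + lam * np.eye(J.shape[1])
        d_chi = np.linalg.solve(lhs, J.T @ d_phi)   # Tikhonov-regularized update
        chi += d_chi
    return chi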
Image Quality Assessment
This section describes the three IQAs used herein. Section 2.3.1 reviews the original SSIM, Section 2.3.2 discusses MS-SSIM, and Section 2.3.3 describes ISSIM-S.
Structural Similarity Index
SSIM was first introduced to overcome issues related to IQA. Previously, the MSE, SNR, PSNR, and CNR were commonly used to measure image quality. Nonetheless, these techniques are not well matched to human perception; in particular, MSE can produce the same value for two distorted images even though the distortion is far more visible in one of them. SNR, PSNR, and CNR remain attractive given their mathematical simplicity and clear physical meaning. CSD is promising for evaluating an image by comparing contrast and size, but it does not accord with the HVS because it shows occasional inconsistency related to contrast [10,12,14]. Therefore, SSIM emerged to overcome these issues, aiming to accord closely with human visual perception by considering the luminance l, contrast c, and structure s. Figure 2 shows a diagram of the original SSIM. Luminance is calculated first over the two images. Image x is the homogeneous reconstructed image used as the reference, whereas image y denotes a given reconstructed image under test. The contrast is then measured. To obtain the structure, the covariance between x and y must be calculated. When these three parameters have been acquired, their combination produces a similarity score ranging from −1 to 1; in many cases, however, the similarity lies between 0 and 1. An SSIM score combines the comparisons of l, c, and s by calculating the mean intensities µ_x and µ_y, the standard deviations σ_x and σ_y for images x and y, and the covariance σ_xy between x and y. The SSIM can be formulated as

SSIM(x, y) = [(2µ_x µ_y + C_1)(2σ_xy + C_2)] / [(µ_x² + µ_y² + C_1)(σ_x² + σ_y² + C_2)],

where the constants are C_1 = (K_1 L)², C_2 = (K_2 L)², C_3 = C_2/2, and L = 255. By setting K_1, K_2 ≪ 1, instability is avoided when µ_x² + µ_y², σ_x² + σ_y², and σ_x σ_y are close to zero. Nevertheless, SSIM performs best on local statistics; hence, in this study, a 9 × 9 local window with a Gaussian weighting function w = {w_i | i = 1, 2, 3, . . . , N}, a standard deviation of 1.5, and the unit sum ∑_{i=1}^{N} w_i = 1 was used. Hence, the mean SSIM can be expressed as

MSSIM(X, Y) = (1/M) ∑_{j=1}^{M} SSIM(x_j, y_j),

where the window slides over the entire image, X and Y are a reconstructed homogeneous image and an examined reconstructed image, respectively, x_j and y_j are the image contents at the j-th local window, and M is the number of local windows in the image [12].
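A minimal sketch (not the authors' code) of the windowed MSSIM described above, assuming the 9 × 9 Gaussian window with standard deviation 1.5, L = 255, and the usual small constants K_1 = 0.01, K_2 = 0.03 (the K values are an assumption, the text only states K_1, K_2 ≪ 1).

import numpy as np
from scipy.ndimage import convolve

def _gaussian_window(size=9, sigma=1.5):
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()                        # unit-sum weights w_i

def mssim(x, y, K1=0.01, K2=0.03, L=255.0):
    x = x.astype(float); y = y.astype(float)
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    w = _gaussian_window()
    mu_x, mu_y = convolve(x, w), convolve(y, w)
    sig_x2 = convolve(x * x, w) - mu_x ** 2
    sig_y2 = convolve(y * y, w) - mu_y ** 2
    sig_xy = convolve(x * y, w) - mu_x * mu_y
    ssim_map = ((2 * mu_x * mu_y + C1) * (2 * sig_xy + C2)) / \
               ((mu_x ** 2 + mu_y ** 2 + C1) * (sig_x2 + sig_y2 + C2))
    return ssim_map.mean()                    # average over all local windows

x = np.random.rand(64, 64) * 255.0            # toy reference image
y = x + np.random.normal(0.0, 5.0, x.shape)   # slightly distorted test image
print(mssim(x, x), mssim(x, y))               # 1.0 for identical images, < 1 otherwise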
Multiscale Structural Similarity
To improve the original SSIM, the more flexible MS-SSIM was introduced, and this measure showed outstanding performance compared to single-scale SSIM. Figure 3 shows a diagram of the MS-SSIM measurement. The procedure is as simple as single-scale SSIM. First, images x and y are processed as in the original SSIM to yield c and s at the first scale; in this case, the first scale is the original image size. Second, a low-pass filter (LPF) is applied over the entire image, which is then downsampled by a factor of two. This assessment is repeated up to scale K, and the similarity is obtained from the products of c and s over all scales together with the final l. The entire MS-SSIM score is thus evaluated as a combination of all the measurements, with β_1 = 0.0448, β_2 = 0.2856, β_3 = 0.3001, β_4 = 0.2363, and β_5 = 0.1333, where β_k = γ_k = α_k at k = 1, 2, 3, . . . , K [30,34]. In this study, we set K = 5.
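A minimal MS-SSIM sketch of the procedure just described: the contrast/structure term is accumulated at every scale and the luminance term only at the coarsest scale, using the β_k weights quoted above with K = 5. It reuses the _gaussian_window helper from the previous sketch; the 2 × 2 averaging LPF and the clamping of the contrast term are assumptions made for illustration, not the authors' exact implementation.

import numpy as np
from scipy.ndimage import convolve, zoom

BETAS = [0.0448, 0.2856, 0.3001, 0.2363, 0.1333]

def _cs_and_l(x, y, K1=0.01, K2=0.03, L=255.0):
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    w = _gaussian_window()
    mu_x, mu_y = convolve(x, w), convolve(y, w)
    sig_x2 = convolve(x * x, w) - mu_x ** 2
    sig_y2 = convolve(y * y, w) - mu_y ** 2
    sig_xy = convolve(x * y, w) - mu_x * mu_y
    cs = ((2 * sig_xy + C2) / (sig_x2 + sig_y2 + C2)).mean()
    l = ((2 * mu_x * mu_y + C1) / (mu_x ** 2 + mu_y ** 2 + C1)).mean()
    return cs, l

def ms_ssim(x, y, betas=BETAS):
    x = x.astype(float); y = y.astype(float)
    score = 1.0
    for k, beta in enumerate(betas):
        cs, l = _cs_and_l(x, y)
        cs = max(cs, 1e-6)                    # clamp to keep fractional powers real
        if k == len(betas) - 1:
            score *= (l ** beta) * (cs ** beta)   # luminance only at the final scale
        else:
            score *= cs ** beta
            # low-pass filter, then downsample by a factor of two for the next scale
            x = zoom(convolve(x, np.full((2, 2), 0.25)), 0.5)
            y = zoom(convolve(y, np.full((2, 2), 0.25)), 0.5)
    return score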
Improved Structural Similarity with Sharpness Comparison
The SSIM has several shortcomings: the similarity score is over-estimated when a reference image is compared with images filtered by an LPF, whereas slightly distorted images with geometrical transformations, such as spatial translation and rotation, receive low similarity scores. Hence, ISSIM-S was introduced to overcome these drawbacks [32]. In this study, the reconstructed images resemble blurred and translated images, because they were obtained by using different numbers of nodes and elements in the forward problem and the inverse solution to avoid the inverse crime. Figure 4 shows a diagram of the ISSIM-S measurement. Compared with Figure 2, ISSIM-S has improvements in the sharpness and structure comparisons. The limitation of the SSIM is defined in Equation (11): the structure term is sensitive to translation, rotation, and scaling. Hence, a new structure comparison is necessary, formulated in terms of partial standard deviations, where σ_x− and σ_y− are the standard deviations computed over the pixels of images x and y whose values are smaller than µ_x and µ_y, whereas σ_x+ and σ_y+ denote the standard deviations computed over the pixels whose values are larger than µ_x and µ_y. To decrease the overestimation, a new component, denoted the sharpness comparison h(x, y), is utilized, which is based on the normalized digital Laplacian, as shown in Equation (15).
where ∇²x is the normalized digital Laplacian of image x, and ∇²y is the normalized digital Laplacian of image y. The ISSIM-S is then calculated by combining the luminance, contrast, modified structure, and sharpness comparisons, and MISSIM-S is obtained by averaging the ISSIM-S over the local windows.
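A minimal sketch of a Laplacian-based sharpness comparison h(x, y) in the spirit of Equation (15), and of a MISSIM-S-style score that folds it into the windowed SSIM. The exact normalization and combination of Equations (15) and of the ISSIM-S formulas are not reproduced in the text, so the particular form below (including the small constant C4) is an assumption for illustration; it reuses the mssim helper from the earlier sketch.

import numpy as np
from scipy.ndimage import laplace

def sharpness_comparison(x, y, C4=1e-3):
    lx, ly = laplace(x.astype(float)), laplace(y.astype(float))
    sx = np.sqrt((lx ** 2).mean())            # global sharpness of x from its Laplacian
    sy = np.sqrt((ly ** 2).mean())            # global sharpness of y from its Laplacian
    return (2 * sx * sy + C4) / (sx ** 2 + sy ** 2 + C4)

def missim_s(x, y):
    # hypothetical combination: global sharpness term times the windowed MSSIM
    return sharpness_comparison(x, y) * mssim(x, y)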
Multiscale Improved Structural Similarity with Sharpness Comparison
The purpose of MISSIM-S is to decrease the overestimation for blurred images and to keep the similarity score stable under translation and scaling. Nevertheless, the reconstructed images are not merely translated or scaled versions of the reference; the assessment must also track the inclusion contrast and size. As noted above, MS-SSIM is effective because it can assess images at varying scales. By combining the principles of MISSIM-S and MS-SSIM, we propose MS-ISSIM-S as a new assessment technique in this work. Figure 5 shows a diagram of the MS-ISSIM-S measurement. The entire procedure is similar to the assessment process in MS-SSIM, but h is calculated separately at each scale. Once h has been acquired at each scale, the mean of h is used to obtain the MS-ISSIM-S, as formulated in Equation (20). We adopted K = 5.
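A minimal sketch of the proposed MS-ISSIM-S: the MS-SSIM multiscale loop with the sharpness term h evaluated at every scale and its mean folded into the final score (K = 5). Because Equation (20) is not reproduced in the extracted text, the exact way the mean of h is combined below is an assumption; the helpers (_cs_and_l, sharpness_comparison, BETAS) are taken from the sketches above.

import numpy as np
from scipy.ndimage import convolve, zoom

def ms_issim_s(x, y, betas=BETAS):
    x = x.astype(float); y = y.astype(float)
    score, h_per_scale = 1.0, []
    for k, beta in enumerate(betas):
        cs, l = _cs_and_l(x, y)
        h_per_scale.append(sharpness_comparison(x, y))   # h computed at this scale
        cs = max(cs, 1e-6)
        if k == len(betas) - 1:
            score *= (l ** beta) * (cs ** beta)
        else:
            score *= cs ** beta
            x = zoom(convolve(x, np.full((2, 2), 0.25)), 0.5)
            y = zoom(convolve(y, np.full((2, 2), 0.25)), 0.5)
    return score * np.mean(h_per_scale)                  # mean of h over the K scales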
Spearman's Rank Correlation
To compare the similarity scores of each method, as explained in Sections 2.3 and 2.4, with the HVS, Spearman's rank correlation [41] was used to determine the relationship between two independent variables. Correlation is a statistical method used to assess the degree of association between two variables; it aids in understanding their relationship but does not by itself establish the underlying relation [42,43]. In this study, the correlation is a numerical value used to quantify the monotonic (rank) association between the MOS and the similarity scores of each SSIM type for the reconstructed images. The correlation values lie between −1 and 1, with −1 indicating a perfect negative monotonic correlation, 0 expressing no relation, and +1 denoting a perfect positive monotonic correlation. This correlation measurement aims to identify the most appropriate of the four SSIM methods used in this study. Spearman's rank correlation coefficient can be obtained as

ρ = 1 − 6 ∑_p d_p² / [n(n² − 1)],    (21)

where ρ denotes the correlation, d_p expresses the difference in the p-th rank between the MOS and the reconstructed-image scores, and n is the number of reconstructed images in each case for every optical property, as stated in Section 3.1. In this case, n was 21. However, when there are ties in the ranks, a correction factor c_f is added as a summation to Equation (21), giving Equation (22), where m_1, m_2, . . . are the numbers of elements in the tied ranks; one c_f term is included for every group of tied ranks.
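A minimal sketch of the correlation computation used to compare each SSIM variant with the MOS. scipy.stats.spearmanr handles tied ranks internally, so the explicit tie correction of Equation (22) does not have to be coded by hand. The n = 21 scores below are hypothetical, for illustration only.

import numpy as np
from scipy.stats import spearmanr

mos_scores  = np.array([1, 1, 2, 2, 3, 3, 3, 4, 4, 5, 5, 1, 2, 2, 3, 4, 4, 5, 5, 5, 3])
ssim_scores = np.array([0.99, 0.98, 0.95, 0.96, 0.90, 0.88, 0.91, 0.85, 0.83, 0.80,
                        0.78, 0.99, 0.97, 0.96, 0.92, 0.86, 0.84, 0.79, 0.77, 0.76, 0.89])

rho, pval = spearmanr(mos_scores, ssim_scores)
# A strong monotonic association; negative here because the hypothetical similarity
# score falls as the MOS detectability rating rises.
print(f"Spearman rho = {rho:.4f}")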
Results and Discussions
To avoid uncertainty, we emphasized the use of reconstructed images from DOT simulations. Section 3.1 presents the image reconstruction model and the results for the simulated cases, and Section 3.2 presents the image assessment results for each SSIM type, which were then compared with the MOS by applying Spearman's rank correlation.
Image Reconstruction Model
The simulated model used in this study was constructed to mimic breast tissue. A circular finite element mesh comprising 4225 nodes and 8192 triangular elements was implemented for the forward model. In addition, 16 light sources and 16 detectors were attached around the boundary of the 80 mm diameter model to obtain the tissue information. Because there were 16 light sources and 16 detectors, the total number of measurements for the image reconstruction procedure was 256. The total number of source and detector (SD) positions was 32; hence, the angular spacing between adjacent sources, and between adjacent detectors, was 22.5°, while the spacing between a source and the neighbouring detector was 11.25°. Figure 6a depicts the model geometry mesh designed to mimic breast tissue, with the attached red dots indicating light sources and the green rectangles the detectors. S_1 indicates source 1, and d_1 denotes detector 1. The measurement began with S_1 as the light source and d_1 to d_16 as the detectors recording the light intensity around the boundary; this continued until S_16 was used as the light source to inject light into the tissue model. Therefore, the measurement was rotated in the counterclockwise (CCW) direction. Figure 6b shows the artificial inclusion embedded inside the tissue to mimic a breast tumor for the exact µ_a distribution. The inclusion location and radius were 90° and 7.5 mm, respectively. To test the image reconstruction algorithm, a forward problem simulation was applied to acquire the light distribution. As FD-DOT was implemented, it provided two results: the light propagation and the phase shift. Figure 7 depicts the light distribution map and the phase shift when S_1 was used as the source. Figure 7a,b shows the photon propagation and phase shift distribution for the homogeneous model. It may be observed that the light was distributed well inside the model with the background µ_a = 0.01 mm⁻¹ and µ_s′ = 1 mm⁻¹. Furthermore, Figure 7c,d depicts the light and phase shift distributions when the case shown in Figure 6b was applied; the red circles indicate the inclusion location of Figure 6b. The distributions of the light intensity and phase shift inside the inclusion differ recognizably from the homogeneous case, which verifies that the inclusion exists at the specified position.
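A minimal sketch (not the authors' code) of the probe geometry described above: 16 sources and 16 detectors interleaved every 11.25° on the boundary of an 80 mm diameter circle, giving 16 × 16 = 256 source-detector measurements. The interleaving order is an assumption for illustration.

import numpy as np

radius_mm = 40.0
angles = np.deg2rad(np.arange(32) * 11.25)           # 32 positions, 11.25 deg apart
xy = radius_mm * np.column_stack([np.cos(angles), np.sin(angles)])
sources, detectors = xy[0::2], xy[1::2]               # alternate source / detector

print(len(sources), len(detectors))                   # 16 16
print("source-source spacing: %.2f deg" % np.rad2deg(angles[2] - angles[0]))  # 22.50
print("measurements:", len(sources) * len(detectors))                          # 256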
The log intensity measured around the boundary was inspected to confirm the results shown in Figure 7. Figure 8 depicts the light intensity and phase shift around the boundary, using the simulated data from the detectors when S_1 was used as the source. Figure 8a shows the intensity from d_1 to d_16; the intensity from d_4 to d_9 reveals the differences between the homogeneous (black, dashed lines) and heterogeneous (red, dashed lines) cases, whereas the phase shift varies from d_4 to d_9, as shown in Figure 8b. These results indicate that the data around the boundary can be obtained from the DE using the FEM. The next step was an inverse solution to yield the reconstructed optical property images. To avoid the inverse crime, the inverse solution was performed with different numbers of elements and nodes; hence, 1536 triangular elements and 817 nodes were used. To achieve the purpose of this research, the reconstruction was performed for two cases, each comprising a homogeneous model, an invisible inclusion, and a visible inclusion, as shown in Table 1. The representation of homogeneity in cases A1 and A2 was that the inclusion radius, µ_a, and µ_s′ entries were zero. To imitate an invisible inclusion in case A1, the inclusion radius was 2.5 mm with µ_a = 0.02 mm⁻¹ and µ_s′ = 2 mm⁻¹, whereas the visible inclusion had a radius of 10 mm with µ_a = 0.02 mm⁻¹ and µ_s′ = 3 mm⁻¹. In case A2, the invisible inclusion had a radius of 2.5 mm with µ_a = 0.02 mm⁻¹ and µ_s′ = 0.89 mm⁻¹, whereas the visible inclusion had a radius of 10 mm with µ_a = 0.03 mm⁻¹ and µ_s′ = 0.89 mm⁻¹. Figure 9 depicts the reconstructed images for case A1, and Figure 10 those for case A2. Figures 9a-c and 10a-c depict the reconstructed µ_a for the homogeneous, invisible, and visible inclusions of cases A1 and A2, respectively, whereas Figures 9d-f and 10d-f depict the corresponding reconstructed µ_s′ images. These results indicate that the algorithm successfully reconstructed the optical properties; thus, it is suitable for reconstructing the other simulated cases needed to accomplish the goal of this research. A circular profile was applied over each image to examine the reconstructed images; Figures 11 and 12 depict the circular profiles for Figures 9 and 10.
Image Assessment
Following the results in Section 3.1, the simulation proceeded to image assessment based on the computer observer, using the reconstructed images to test numerical assessment with four types of SSIM: MSSIM, MS-SSIM, MISSIM-S, and MS-ISSIM-S. Hence, an objective, numerical decision could be made to distinguish between detectable and undetectable inclusions. Several uncertainties are encountered in medical image analysis; thus, medical imaging insight together with a medical background is required to prevent misinterpretation, and individual, subjective assessments are generally preferred in practice. This research instead analyses the reconstructed images numerically, so that the inspection becomes more objective through the use of a computer-based observer. To avoid ambiguities in measuring the reconstructed DOT images, only simulated images were used in this study.
To obtain the reconstructed images, the cases shown in Table 2 were simulated with the inclusion location shown in Figure 6b. Additionally, two noise settings were used to imitate a real environment: 1% noise added to the amplitude, and 10% noise added simultaneously to the amplitude, phase, and optical properties. In this study, 168 reconstructed images, comprising 84 µ_a and 84 µ_s′ images, were used. In case B1, the inclusion radii were 2.5, 3.75, 5, 6.25, 7.5, 8.75, and 10 mm with the same µ_a = 0.02 mm⁻¹ and with µ_s′ = 2, 2.5, and 3 mm⁻¹. Case B2 had the same configuration as case B1 but different optical properties, namely the same µ_s′ = 0.89 mm⁻¹ with µ_a = 0.02, 0.025, and 0.03 mm⁻¹. A total of 672 assessments were performed. Because SSIM is a full-reference image analysis, four reference images were necessary. Figure 9a,d was utilized as the µ_a and µ_s′ reference for case B1 with 1% noise amplitude and with 10% noise on the amplitude, phase, and optical properties; Figure 10a,d was employed correspondingly for case B2. Figure 13 depicts the comparison of the four types of SSIM evaluation for case B1 with 1% noise amplitude. The MS-ISSIM-S score decreased with the contrast ratio and inclusion size; when every part, delimited by a magenta dashed line, was examined in detail, the MS-ISSIM-S similarity scores showed a decreasing relationship with the contrast, as shown in Figure 13a. Meanwhile, for µ_s′, MS-SSIM exhibited an almost similar trend to that of MS-ISSIM-S, as shown in Figure 13b, whereas MSSIM and MISSIM-S were inconsistent with respect to contrast and size. In addition, case B1 was examined with 10% noise on the amplitude, phase, and optical properties. Again, MS-ISSIM-S behaved reliably with respect to contrast and size within each part bordered by the magenta dashed line, whereas MSSIM, MS-SSIM, and MISSIM-S did not track contrast and size, and their similarity scores even fluctuated, as shown in Figure 14a. MS-SSIM, compared with MS-ISSIM-S in Figure 14b, had a slightly different trend but was inconsistent in contrast, whereas MSSIM and MISSIM-S were exceptionally inconsistent with both contrast and size. Figure 15a illustrates a similar tendency for MS-ISSIM-S and MS-SSIM when the contrast was increased. The similarity score should decrease as the inclusion inside the tissue becomes detectable and as the inclusion size grows; however, MSSIM and MISSIM-S were difficult to interpret in terms of contrast and size. Similar results are shown in Figure 15b. To complete the entire computer-based observer evaluation, case B2 was simulated with 10% noise on the amplitude, phase, and optical properties. Even though the applied noise was 10%, the performance of MS-ISSIM-S was superior to that of MSSIM, MS-SSIM, and MISSIM-S because its similarity score decreased with the contrast and decreased with the larger inclusion sizes, as shown in Figure 16a. Moreover, for µ_s′, MS-ISSIM-S exhibited reasonable similarity scores with respect to contrast and size, showing that with a larger inclusion and higher contrast the similarity was reduced. Nevertheless, MSSIM, MISSIM-S, and MS-SSIM failed to track the contrast ratio consistently in each part, as shown in Figure 16b.
As mentioned in Section 2.3.3, MISSIM-S improves on MSSIM with respect to overestimation for blurred images and underestimation under translation and scaling. This is clearly visible in Figure 16b: MSSIM overestimated the similarity score of the reconstructed images, while MISSIM-S tried to follow the inclusion size by presenting a lower similarity score when µ_s′ = 0.89 mm⁻¹ with µ_a and inclusion size of 0.03 mm⁻¹ and 5 mm, 0.02 mm⁻¹ and 8.75 mm, and 0.03 mm⁻¹ and 8.75 mm, respectively. Nonetheless, the performance of MSSIM and MISSIM-S was inconsistent with the contrast ratio and inclusion size. In contrast, MS-ISSIM-S offered stability in measuring the DOT-reconstructed images, its similarity score decreasing as the inclusion contrast increased, as shown in Figures 13-16. To confirm the performance of these four types of SSIM, the MOS and Spearman's rank correlation were used, as described in Section 2.5. There were 20 test subjects, none of whom had eye issues such as color blindness. To measure the MOS, the subjects were shown the reconstructed images of Table 2 and provided scores ranging from 1 (the inclusion is not detectable) to 5 (the inclusion is very detectable). The experiments were performed under adjusted illumination and display conditions. These MOS scores are subjective, reflecting each participant's individual opinion, and are therefore not reliable on their own; however, they allow a relative comparison when Spearman's rank correlation is computed between the similarity scores of the four SSIM types and the MOS. Table 3 shows the correlation scores for the four SSIM types. MISSIM-S was not appropriate in terms of human visual perception for assessing the DOT-reconstructed images, because this technique is designed to overcome the MSSIM underestimation related to translation and scaling and the overestimation associated with blurred images; MISSIM-S is therefore superior mainly when comparing translated and scaled images. However, MS-SSIM is better than MSSIM because it measures images at various scales. MS-SSIM evidently works satisfactorily because it can evaluate distorted images well; thus, MS-ISSIM-S was developed in this research based on the advantages of MISSIM-S and MS-SSIM. As shown in Table 3, MS-ISSIM-S presents the highest average correlation with the MOS, followed by MS-SSIM, MSSIM, and MISSIM-S. MS-ISSIM-S was more stable in assessing the DOT-reconstructed images, with correlation scores from 0.8552 to 0.9955, although the reconstructed images contained several image distortions due to the limitations of the algorithm, such as resolution and sensitivity. In addition, MS-SSIM was shown to be promising for the objective assessment of the DOT-reconstructed images, but it showed uncertainty in case B1 with 10% noise on the amplitude, phase, and optical properties, with a correlation score of 0.6864; its correlation scores lay between 0.6864 and 0.9964. MSSIM and MISSIM-S presented unsatisfactory correlations: the MSSIM correlations were 0.6532 to 0.9740 and the MISSIM-S correlations were −0.0974 to 0.9487. Using Spearman's rank correlation, it is evident that MS-ISSIM-S performed a robust assessment, showing the best correlation scores and suitable behaviour with respect to contrast and inclusion size, as shown in Table 3 and Figures 13-16.
Therefore, the median was used as the threshold value to separate the visible and invisible inclusions. Because the ultimate goal of this research was to evaluate the image numerically based on a computer decision, a comparison was performed between MS-ISSIM-S and MOS to validate the algorithm, as shown in Figures 17-20. Figure 17a-d depicts the comparison for case B1 with 1% noise amplitude, with the red line distinguishing visible inclusions on the right from invisible inclusions on the left. Figure 17 shows that MS-ISSIM-S (Figure 17a,c) gave the same results as the MOS (Figure 17b,d); thus, in this case, MS-ISSIM-S performed perfectly. Moreover, Figure 18a,b shows a similar separation of detectable and undetectable inclusions. However, for µ_s′ in case B1 with 10% noise on the amplitude, phase, and optical properties, Figure 18c differs slightly from Figure 18d. Because the simulation aimed to mimic the real environment with substantial noise, Figure 18c shows two errors in assessing the images, indicated by the red rectangles: these two cases were judged to contain a detectable inclusion, whereas the true condition was an undetectable inclusion. Determining an appropriate threshold value is not trivial [10]; thus, more simulated cases are needed to establish the algorithm's performance. Nevertheless, MS-ISSIM-S exhibited more stable results in DOT image assessment than the other models compared, according to the results shown in Table 3. Figure 19a,b shows the same results, indicating that MS-ISSIM-S agreed with the MOS, whereas Figure 19c depicts one detection error, for the 3.75 mm inclusion with µ_s′ = 0.89 mm⁻¹ and µ_a = 0.03 mm⁻¹, compared with Figure 19d. As can be seen, with lower optical property contrast and a small inclusion, such as µ_a = 0.02 mm⁻¹, µ_s′ = 2 mm⁻¹, and an inclusion size of 2.5 mm, the color map was bright, indicating a high similarity score and thus no inclusion. In contrast, with a high contrast ratio and a larger inclusion, for instance µ_a = 0.02 mm⁻¹, µ_s′ = 3 mm⁻¹, and an inclusion size of 10 mm, the similarity score was lower, represented by the dark color, indicating the presence of an inclusion. With these results, SSIM, especially MS-ISSIM-S, shows promise as an option for assessing images numerically and objectively by computer, even without specialist insight in the medical image analysis field. However, experts remain necessary to reach decisions in medical applications. Moreover, since we only presented DOT-reconstructed images from simulation cases, clinical image analyses by medical doctors specialized in radiology are necessary to confirm the results of this paper and to obtain comprehensive insight into the improved SSIM, especially MS-ISSIM-S. Once again, the goal of a computer-based observer for medical images is to assist radiologists in reaching a conclusion; comparisons between radiologists' assessments and the results of this research will therefore be essential in the near future. In addition, the threshold value here was obtained from the median for simplicity; further research should consider a more principled method for determining this threshold.
Conclusions
A reconstruction algorithm was implemented to produce DOT-reconstructed images. Simulated cases generating reconstructed images with 1% noise on the amplitude, and with 10% noise on the amplitude, phase, and optical properties, were employed. To assess the images numerically, four types of SSIM were used to obtain similarity scores. To confirm the results, Spearman's rank correlation was utilized to compare the four SSIMs with the MOS. MS-ISSIM-S showed the best correlation, with scores between 0.8552 and 0.9955 and an average correlation of 0.9452, representing a robust image assessment regardless of the noise. A comparison of MOS and MS-ISSIM-S, to verify agreement with the HVS, was performed by separating the images into two sections with the assistance of a threshold value, indicated graphically by a red line. MS-ISSIM-S demonstrated acceptable results when it measured images with low noise, and its association with the HVS was relatively reliable. In addition, with lower optical property contrast and a small inclusion, the color map was bright, indicating a high similarity score and thus no inclusion. In contrast, the similarity score of regions with a high contrast ratio and a larger inclusion was lower, represented by the dark color; hence, an inclusion was present. These results indicated that SSIM, particularly MS-ISSIM-S, is a promising option for the numerical and objective computational assessment of reconstructed images, regardless of specialized insight in the field of medical image analysis. However, experts naturally remain necessary to make specific medical decisions. | 8,260 | 2021-12-01T00:00:00.000 | [ "Physics" ] |
Dipetalodipin, a Novel Multifunctional Salivary Lipocalin That Inhibits Platelet Aggregation, Vasoconstriction, and Angiogenesis through Unique Binding Specificity for TXA2, PGF2α, and 15(S)-HETE*
Dipetalodipin (DPTL) is an 18 kDa protein cloned from salivary glands of the triatomine Dipetalogaster maxima. DPTL belongs to the lipocalin superfamily and has strong sequence similarity to pallidipin, a salivary inhibitor of collagen-induced platelet aggregation. DPTL expressed in Escherichia coli was found to inhibit platelet aggregation by collagen, U-46619, or arachidonic acid without affecting aggregation induced by ADP, convulxin, PMA, and ristocetin. An assay based on incubation of DPTL with small molecules (e.g. prostanoids, leukotrienes, lipids, biogenic amines) followed by chromatography, mass spectrometry, and isothermal titration calorimetry showed that DPTL binds with high affinity to carbocyclic TXA2, TXA2 mimetic (U-46619), TXB2, PGH2 mimetic (U-51605), PGD2, PGJ2, and PGF2α. It also interacts with 15(S)-HETE, being the first lipocalin described to date to bind a 15-lipoxygenase product. Binding was not observed to other prostaglandins (e.g. PGE1, PGE2, 8-iso-PGF2α, prostacyclin), leukotrienes (e.g., LTB4, LTC4, LTD4, LTE4), HETEs (e.g. 5(S)-HETE, 12(S)-HETE, 20-HETE), lipids (e.g. arachidonic acid, PAF), and biogenic amines (e.g. ADP, serotonin, epinephrine, norepinephrine, histamine). Consistent with its binding specificity, DPTL prevents contraction of rat uterus stimulated by PGF2α and induces relaxation of aorta previously contracted with U-46619. Moreover, it inhibits angiogenesis mediated by 15(S)-HETE and does not enhance inhibition of collagen-induced platelet aggregation by SQ29548 (TXA2 antagonist) and indomethacin. A 3-D model for DPTL and pallidipin is presented that indicates the presence of a conserved Arg39 and Gln135 in the binding pocket of both lipocalins. Results suggest that DPTL blocks platelet aggregation, vasoconstriction, and angiogenesis through binding to distinct eicosanoids involved in inflammation.
The hemostatic process, a host defense mechanism to preserve the integrity of the circulatory system, remains inactive until vascular injury occurs, leading to activation of hemostasis. The first step in this cascade of events is platelet interaction with the exposed extracellular matrix (ECM), which contains a large number of adhesive macromolecules such as collagen. Under conditions of high shear, initial tethering of platelets to the ECM is mediated by interaction between the platelet receptor glycoprotein (GP) Ib and vWF bound to collagen (1). This interaction allows platelet receptor GPVI to bind to collagen, triggering release of the so-called secondary mediators TXA2 and ADP that are necessary for the activation of integrins α2β1 and αIIbβ3 and the completion of platelet aggregation (2). Vasoconstriction is another critical step triggered by injury and mediated by biogenic amines produced by adrenergic fibers or vasoactive components such as TXA2 and serotonin released by platelets in an attempt to decrease blood flow at sites of injury and therefore prevent blood loss (3).
Because of the interface encountered by vectors upon interaction with their host, salivary glands from bloodsucking arthropods have evolved different mechanisms that counteract hemostasis and inflammation (4). At least thirteen different mechanisms for inhibition of platelet function have been reported to explain how these molecules affect platelet function, thus assisting hematophagous animals to acquire a blood meal (5). These inhibitors have been classified as small ligand binders, enzymes or enzyme inhibitors, nitric oxide (NO)-releasing molecules, and integrin antagonists. Among members of the lipocalin family (6), inhibitors have been reported to bind to ADP (7), biogenic amines (8-9), and leukotrienes (10). Other salivary components interfere with hemostasis by targeting vasoconstriction, such as tachykinin-like peptides from Aedes aegypti (11) or peptides such as sandfly maxadilan, which specifically activates PAC1, the type I receptor for pituitary adenylate cyclase-activating peptide (PACAP) (12). Vasodilation is also mediated through release of NO by NO-carrying nitrophorins from Rhodnius prolixus (13). In this report, we have cloned, expressed, and studied the mechanism of action of a novel lipocalin, herein named dipetalodipin (DPTL). DPTL binds to TXA2, PGF2α, 15(S)-HETE, and other prostanoids, and was found to block platelet aggregation, vasoconstriction, and angiogenesis. The antiinflammatory and antihemostatic properties of DPTL may assist triatomines to successfully feed on blood and counteract host pro-inflammatory mechanisms triggered upon injury.
* This work was supported, in whole or in part, by the Division of Intramural Research, NIAID, National Institutes of Health. The on-line version of this article (available at http://www.jbc.org) contains supplemental data.
Dipetalogaster maxima Salivary Gland cDNA Construction-This was done as described before (15) and in the supplemental data. Sequencing of the cDNA indicates that DPTL is an abundant secreted lipocalin (data not shown).
Sequence Analysis-Sequence similarity searches were performed using BLAST. Cleavage site predictions of the mature proteins used the SignalP program. The molar extinction coefficient (ε280 nm) of mature DPTL at 280 nm was obtained at the ExPASy Proteomics server, yielding for mature DPTL a value of ε280 nm = 20,315 M⁻¹·cm⁻¹; A280 nm/cm (0.1%, 1 mg/ml) = 1.132; molecular weight 17,951.1 (165 aa); and pI 8.49.
Expression of DPTL in Escherichia coli-Synthetic cDNA for DPTL was produced by Biobasics (Ontario, Canada). The sequence displays an N-terminal NdeI and a C-terminal XhoI restriction site. The NdeI site adds a 5′-methionine codon to all sequences that acts as the start codon in the bacterial expression system, whereas the XhoI site was incorporated after the stop codon. pET 17b constructs were confirmed before transformation of E. coli strain BL21(DE3)pLysS cells. A detailed description of the expression of recombinant DPTL is available online in the supplemental data.
Protein Purification, PAGE, and Edman Degradation-These steps were performed as described in detail in the supplemental data available online.
High-throughput Ligand Binding Assay-To investigate putative ligands of DPTL, 50 µl of 100 mM ammonium acetate, pH 7.4 (AA buffer) containing 1 µM DPTL and 2 µM each of arachidonic acid, 15(S)-HETE, PGE2, PGD2, PGF2α, TXA2, U-46619, U-51605, leukotriene B4, and carbocyclic TXA2 was injected into a 3.2 × 250 mm Superdex peptide column (GE Healthcare) equilibrated with 100 mM AA buffer. A flow rate of 50 µl/min was maintained with a P4000 SpectraSystem pump (Thermo Scientific, Rockford, IL). The absorbance at 280 nm was monitored using an ABI 785 detector (Applied Biosystems, Foster City, CA). Fractions were collected into a 96-well plate every minute using a Probot apparatus (Dionex, Sunnyvale, CA). Selected fractions (20 µl) were mixed with 1 µl of methanol containing 1 M HCl, centrifuged at 14,000 × g for 10 min, and the supernatant was injected into a 0.3 × 150 mm C18 reverse-phase column (Magic C18, 200 Å; Michrom BioResources, Inc., Auburn, CA) equilibrated with 10% methanol/water containing 0.1% acetic acid at a flow rate of 3 µl/min maintained by an ABI 140D pump (Applied Biosystems). After 15 min, the methanol concentration was raised linearly to 90% in the course of 30 min. The column effluent was mixed with pure methanol at a rate of 4 µl/min (to facilitate electrospray) using a syringe pump attached to an LCQ Deca XP Max mass spectrometer (Thermo Scientific). Mass spectrometry was performed in negative-ion mode to detect ligand masses. A similar protocol was used to detect positively charged agonists (PAF acether, leukotrienes C4, D4, and E4, histamine, serotonin, norepinephrine, epinephrine, and adenosine diphosphate), with the mass spectrometer running in positive-ion mode.
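For reference, the elution program described above (15 min at 10% methanol, then a linear ramp to 90% over 30 min) can be written as a simple function of run time; a sketch, not from the original:

```python
def methanol_percent(t_min):
    """Mobile-phase methanol (%) at run time t_min (minutes) for the
    gradient described in the text: 10% hold for 15 min, then a linear
    ramp to 90% over the next 30 min."""
    if t_min <= 15.0:
        return 10.0
    if t_min <= 45.0:
        return 10.0 + 80.0 * (t_min - 15.0) / 30.0
    return 90.0
```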
Isothermal Titration Calorimetry (ITC)-Prostanoids (in ethanol or methyl acetate) were placed in glass vials and the vehicle was evaporated under a nitrogen atmosphere; the dried material was then resuspended at appropriate concentrations in 20 mM Tris-HCl, 0.15 M NaCl, pH 7.4, sonicated, and vortexed. Calorimetric assays measuring DPTL binding to a number of ligands were performed using a VP-ITC microcalorimeter (Microcal, Northampton, MA) at 35 °C. Titration experiments were performed by making successive injections of 10 µl each of 40 µM ligand into the 1.34-ml sample cell containing 4 µM DPTL until near-saturation was achieved. Prior to the run, the proteins were dialyzed against 20 mM Tris-HCl, 0.15 M NaCl, pH 7.4. The calorimetric enthalpy (ΔHcal) for each injection was calculated after correction for the heat of DPTL dilution obtained in control experiments performed by titrating DPTL into buffer.
The binding isotherms were fitted according to a model for a single set of identical binding sites by nonlinear least-squares analysis using Microcal Origin software. The binding constant (Ka), enthalpy change (ΔH), and stoichiometry (n) were determined according to Equation 1, where Q is the total heat content of the solution contained in the cell volume (Vo) at fractional saturation Θ, ΔH is the molar heat of ligand binding, n is the number of sites, and Mt is the bulk concentration of macromolecule in Vo. The binding constant, Ka, is described by Equation 2, where [X] is the free concentration of ligand.
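The equations themselves did not survive extraction. Assuming the standard single-set-of-sites model implemented in Microcal Origin (a reconstruction, not verified against the original), Equations 1-4, including the free-energy relations referenced in the next paragraph, presumably read:

```latex
% Eq. 1: total heat content at fractional saturation Theta
Q = n\,\Theta\,M_t\,\Delta H\,V_o
% Eq. 2: binding constant
K_a = \frac{\Theta}{(1-\Theta)\,[X]}
% Eqs. 3 and 4: free energy and entropy term of association
\Delta G = -RT \ln K_a, \qquad -T\Delta S = \Delta G - \Delta H
```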
Free energy (ΔG) and the entropy term (-TΔS) of association were calculated according to Equations 3 and 4.
Platelet Aggregation and ATP Release Assays-Platelet-rich plasma was obtained by plateletpheresis from medication-free platelet donors at the DTM/NIH blood bank. Aggregation and ATP release assays were performed as described (16) and in the supplemental data.
Platelet Adhesion Assay under Static Conditions-Inhibition of platelet adhesion to immobilized collagen was examined by fluorometry. Microfluor black microtiter 96-well plates (ThermoLabsystems, Franklin, MA) were coated with 2 µg of fibrillar (Horm) or soluble collagen overnight at 4 °C in PBS, pH 7.2, essentially as described (16) and in the supplemental data.
Contraction of Rat Aorta-Contraction of rat aortic ring preparations by U-46619 was measured isometrically and recorded with transducers from Harvard Apparatus Inc. (Holliston, MA). A modified Tyrode solution (with 10 mM HEPES buffer), oxygenated by continuous bubbling of air, was used in the assays (17). In the first assay, aortic rings were suspended in a 0.5-ml bath kept at 30 °C and pre-constricted with 100 nM U-46619 before addition of proteins to a final concentration of 1 µM. In the second assay, aortic ring preparations were preincubated with 100 nM DPTL, and increments of 100 nM U-46619 were added until maximum contraction was reached. Additions to the bath never exceeded 5% of the bath volume.
Contraction of Rat Uterus-Wistar female rats were injected intraperitoneally with 0.1 mg of estradiol in 1 ml of phosphate-buffered saline. 24 h later, they were killed, and the uterus was removed into a modified De Jalon solution (154 mM NaCl, 5.6 mM KCl, 2.8 mM D-glucose, 6 mM NaHCO3, 0.4 mM CaCl2, 5 mM HEPES, 0.1 µM dexamethasone, final pH 7.4). Pieces of uterus about 1.5 cm long were attached to a 1-ml bath kept at 35 °C, and their contractions were recorded isotonically (Harvard Apparatus Inc.) under a 2 g load. Rhythmic contractions were induced by addition of PGF2α at the indicated concentrations.
Human Dermal Microvascular Endothelial Cell (HMVEC) Culture-HMVEC (CC-2643) were purchased from Clonetics (San Diego, CA) and grown at 37 °C, 5% CO2 in T-25 flasks in the presence of EBM-2 Plus as described (17) and in the supplemental data.
Tube Formation Assay-The tube formation assay was done as described, with modifications (18). Costar culture plates (96-well; Corning, NY) were coated with 30 µl of growth factor-reduced Matrigel (BD Biosciences) and allowed to solidify at room temperature. 100 µl of HMVEC suspension (5 × 10^5/ml) was added to each well in the presence of vehicle, 15(S)-HETE, DPTL, or DPTL plus 15(S)-HETE at the concentrations indicated in the figure legends. Plates were incubated at 37 °C, 5% CO2 for 5-6 h, and tube formation was observed under an inverted microscope coupled to a digital camera (Axiovert 200; Carl Zeiss, Inc., Thornwood, NJ). Images were captured with an AxioCam HR color camera (model 412-312) attached to the microscope. Tube length was measured by outlining the tubes (converted to pixels) using AxioVision 4.6.3 software.
Modeling of DPTL and Pallidipin-Structures of DPTL and pallidipin (gi388359) were modeled using the alignment mode in SWISS-MODEL (19). The template structure was the ammonium complex of nitrophorin 2 (PDB accession number 1EUO).
RESULTS
Analysis of a Dipetalogaster maxima salivary gland cDNA library indicates that members of the lipocalin family of proteins are highly abundant, representing more than 90% of predicted secreted molecules (not shown). One of these sequences, DPTL, displays high sequence similarity to pallidipin, a lipocalin from Triatoma pallidipennis that has been claimed to be a specific inhibitor of collagen-induced platelet aggregation (20); however, the molecular target for pallidipin and its exact mechanism of action have remained elusive thus far. Fig. 1A shows a CLUSTAL alignment of DPTL and pallidipin together with other salivary antihemostatic proteins, including RPAI-1 (7), triplatin (21), moubatin (22), and TSGP3 (23). DPTL was found to be most closely related to pallidipin, as depicted by clade I (Fig. 1B). The presence of mature DPTL in the salivary gland was supported by a one-dimensional gel of the gland homogenate, which displays intense staining at ~19 kDa, corresponding to salivary lipocalins. To identify the N terminus of DPTL, gels were electroblotted onto PVDF membranes and bands were submitted to Edman degradation. Fig. 1C shows the N termini of the most abundant proteins. One of them was identified as an 18 kDa protein with N-terminal sequence KEcTLMAAaSNFNSDKYfDV (lowercase indicates ambiguous identification), which is in agreement with the corresponding cDNA coding for DPTL. The other sequence, GSISEcKTPKPMDDFSGTKF, was identified as procalin-like (24). These two sequences together represent at least 70% of the protein loaded on the gel. Therefore, DPTL is a particularly abundant protein that is found in mature form in the salivary gland of D. maxima (Fig. 1C).
To study the effects of DPTL on platelet aggregation, its corresponding cDNA was cloned into the pET17b vector, followed by transformation and expression of the recombinant protein in BL21(DE3)pLysS cells. DPTL was purified through a series of chromatographic procedures and eluted as a single peak in the last step, performed on a gel-filtration column (Fig. 1D). PAGE of purified DPTL demonstrated that it migrates as an ~18 kDa protein under nonreducing conditions, and at a slightly higher molecular weight in the presence of DTT, consistent with reduction of disulfide bridges. The N terminus was submitted to Edman degradation, yielding the sequence MxExTLMAAASNFNSDKYFD (Fig. 1E). This sequence is identical to the N-terminal sequence predicted by the cDNA coding for DPTL, with the exception of the methionine in position 1, which was added for expression purposes. In addition, MS identified a mass of 17,948 Da for recombinant DPTL, in agreement with the theoretical mass of the molecule with an extra Met (17,951.1 Da) (Fig. 1F).
Results presented in Fig. 1 indicated that recombinant DPTL was pure, soluble, and suitable for further experimentation. In a first step toward studying its mechanism of action, DPTL was shown to dose-dependently inhibit platelet aggregation induced by low doses of collagen, with an IC50 of ~30 nM. Inhibition was abolished when high doses of collagen were employed. DPTL also blocked, in a dose-dependent manner, ATP release triggered by collagen (Fig. 2). Notably, no effect on shape change was observed, suggesting that DPTL targets neither collagen itself nor the collagen receptors integrin α2β1 and GPVI. This was confirmed through platelet adhesion assays carried out with calcein-labeled platelets incubated with immobilized soluble (integrin α2β1-mediated) or fibrillar (GPVI- and integrin α2β1-dependent) collagen (25, 26). Results reported in Table 1 show that adhesion of platelets to fibrillar or soluble collagen was not inhibited by DPTL (1-10 µM). As a positive control, EDTA prevented platelet adhesion to fibrillar collagen by ~60% and abolished adhesion to soluble collagen (26). These results, together with the lack of inhibition of platelet shape change, demonstrated that DPTL is not a specific collagen inhibitor and suggested that secondary mediators might be the target of the molecule. TXA2 and ADP are two important mediators of platelet aggregation that are, respectively, generated and released by platelets upon stimulation by collagen (2). To verify the inhibitory profile of DPTL toward agonists that activate platelets independently of secondary mediators, it was tested as an inhibitor of U-46619 (a TXA2 mimetic)- and AA-induced platelet aggregation. Fig. 2 shows that DPTL inhibits U-46619- and AA-induced platelet aggregation in a dose-dependent manner, corroborating the notion that DPTL targets TXA2 (or ADP)-mediated platelet responses. Because collagen-, TXA2-, and AA-induced aggregation is particularly sensitive to 5'-nucleotidases (27), ADP receptor antagonists (28), and ADP-binding proteins (7), it was of interest to exclude ADP as a potential target for DPTL. Fig. 2 shows that DPTL was ineffective as an inhibitor when ADP was employed at low or moderate concentrations, excluding this agonist as a target for the inhibitor. In addition, DPTL did not affect platelet aggregation triggered by strong agonists such as PMA (a PKC activator), convulxin (a GPVI agonist), and ristocetin (a vWF-dependent platelet agglutinator), which characteristically induce platelet aggregation/agglutination independently of ADP or TXA2.
Whereas these results suggested that DPTL targets secondary mediators of platelet aggregation, they did not formally identify TXA2 as the (sole) ligand or establish whether other ligands involved in pro-hemostatic events unrelated to platelet function were targets for DPTL. Therefore, an experiment was optimized to broaden our search, in which the inhibitor was incubated with small compounds (e.g., biogenic amines, prostaglandins and endoperoxides, leukotrienes, HETEs, epoxides, and lipids) that may affect platelet function, vessel tonus, angiogenesis, or neutrophil function. The mixture was loaded onto a gel-filtration column that excludes proteins with molecular weight higher than 20 kDa but retains small ligands. If DPTL binds to a given ligand, complex formation will occur and the complex will elute in the void volume (>20 kDa), while free, unbound ligands are retained in the column. Accordingly, Fig. 3A shows a peak eluting at 20 min that represents DPTL and any bound ligands. This fraction was acidified to precipitate DPTL. The sample was centrifuged, and the supernatant containing ligands was applied to an RP-HPLC column and eluted with a methanol gradient. Each fraction was collected and submitted to reverse-phase HPLC/mass spectrometry scanning for masses compatible with the test mixture. A negative-ion mass of 351 was detected when fraction 20 was sprayed (Fig. 3C); this mass is consistent with the deprotonated ([M-H]-) species of PGD2 or PGE2, which have masses of 352.4. As a control, fractions 17 (Fig. 3B) and 24 (Fig. 3D), which eluted before and after the peak corresponding to DPTL, were devoid of ligands.
Calorimetry results therefore indicated that DPTL binds TXA2, PGF2α, and 15(S)-HETE, among other ligands. This was also tested through additional pharmacologic assays. Fig. 5A shows that inhibition of collagen (5.2 µg/ml)-induced platelet aggregation by DPTL was identical to inhibition by SQ 29548, a TXA2 receptor antagonist, or by indomethacin, which blocks TXA2 production. Further, when DPTL was added to platelets incubated with SQ 29548 and indomethacin, no additional inhibition was observed. These results indicated a common target, i.e., the TXA2 pathway. Additionally, Fig. 5B shows that DPTL suppressed rhythmic contractions of the rat uterus induced by 0.2 µM PGF2α, with the inhibitory effect being surmounted by high concentrations of the prostaglandin. Moreover, DPTL induces relaxation of aorta previously contracted with U-46619 (Fig. 5C).
Finally, DPTL was found to inhibit by >85% the tube formation evoked by 15(S)-HETE (18), suggesting that it could negatively modulate angiogenesis (Fig. 5D). Because DPTL is a lipocalin with sequence homology to nitrophorin 2 (NP2), a NO-binding protein from another triatomine species whose structure has been determined (13), we constructed a molecular model of the inhibitor using the NP2 structure as a template. Fig. 6A shows that DPTL displays structural features typical of the lipocalin family of proteins, whose structure consists of an eight-stranded antiparallel β-barrel forming a central hydrophobic cavity; ligands are normally bound at a site located in the center of the β-barrel. Fig. 6B shows a comparison of this putative binding pocket in the models of DPTL and pallidipin.
TABLE 2 Affinity, stoichiometry, and thermodynamic parameters of DPTL interaction with different ligands
Interaction was detected by ITC as depicted in Fig. 4. Many of the residues predicted to lie in this pocket are shared by the two proteins and may play a role in stabilizing the bound ligand. Among these are a number of hydrophobic and aromatic residues that could be important in interactions with the hydrocarbon chain of eicosanoid ligands. Hydrogen-bonding interactions with polar functional groups on the ligand are also normally essential, and their presence is suggested by the highly favorable enthalpies measured in ITC experiments. A number of residues that could potentially form hydrogen-bonding or ionic interactions with bound ligands are also present in the binding pocket. Most notable among these are the conserved residues Arg39 and Gln135 in both (mature) DPTL and pallidipin. The remarkably high sequence similarity of the full-length sequences of the two lipocalins is also depicted by the CLUSTAL alignment presented in Fig. 6C.
DISCUSSION
Salivary secretions are rich sources of bioactive molecules that counteract host defenses in distinct ways. Many of these molecules have turned out to display unique and specific pharmacologic properties that in several instances have contributed to our understanding of vertebrate biology (4). In this report, we have identified DPTL as a novel salivary lipocalin that binds distinct prostanoids. Accordingly, ITC experiments demonstrated that DPTL binds to cTXA2, U-46619 (a stable TXA2 mimetic), and TXB2, the metabolic end product of TXA2. Binding occurred with a KD of ~100-200 nM and was compatible with 1:1 stoichiometry. Binding to TXA2 was consistent with the inhibitory profile for platelet aggregation observed in the presence of the inhibitor and suggests that native TXA2 generated by platelets is a target for the inhibitor. Accordingly, DPTL affected only platelet responses induced by low concentrations of collagen or by AA and U-46619, which are TXA2 dependent. A similar inhibitory profile is observed for aggregation of platelets from a patient with a mutation of the TXA2 receptor (29), or from mice with a gene deletion of Gq (30). Targeting of the TXA2 pathway by DPTL is also corroborated by the lack of additional effects of the inhibitor on collagen-induced platelet aggregation in the presence of SQ 29548 and indomethacin (Fig. 5A). In contrast, DPTL did not interfere with aggregation triggered by high doses of collagen, which occurs via tyrosine kinase-dependent PLCγ2 activation (2, 31), or by strong agonists such as convulxin and PMA or the vWF-dependent agglutinating agent ristocetin.
Targeting TXA2 has several implications, as it is a major contributor to platelet aggregation induced by collagen, the most atherogenic protein of the vessel wall (32). Upon platelet adhesion to collagen, TXA2 is generated and activates platelets through the TP receptors, which are coupled to Gq and G12/13 (33). This promotes shape change and activation of an intracellular pathway that leads to granule secretion and ADP release. ADP is critical for completion of platelet aggregation by TXA2 through binding to P2Y12, coupled to Gi activation and a decrease of intraplatelet cAMP. This explains the sensitivity of TXA2-induced platelet aggregation to apyrases (2), ADP receptor antagonists (28), and ADP-binding proteins such as RPAI-1 (7). TXA2 (and ADP) are also critical for collagen-induced platelet aggregation, because they induce integrin α2β1 activation, a step necessary for firm platelet adhesion to collagen (2, 34). Finally, TXA2 (and ADP) promote integrin αIIbβ3 activation, leading to fibrinogen binding to activated platelets, and are necessary for recruitment of platelets to the site of injury (35). Therefore, targeting TXA2 increases the threshold for platelet aggregation by a number of agonists, and appears to be an effective strategy developed by a hematophagous arthropod to attenuate platelet function and associated inflammation (36-37).
Whereas platelet aggregation assays suggested that DPTL targets TXA2, it was possible that the inhibitor interacted with other compounds not necessarily involved in platelet function. Therefore, we broadened our search to ligands involved in pro-hemostatic events other than platelet activation. This was carried out through experiments that combined chromatography of DPTL added to a mixture of eicosanoids, biogenic amines, endoperoxides, and lipids, followed by mass spectrometry of potentially bound molecules. Results indicated that a molecule compatible with the mass of PGD2 bound to DPTL; formal identification of the compound was subsequently attained by ITC. Results demonstrated that DPTL interacts not only with the TXA2 mimetic U-46619, but also displays high affinity for other prostanoids such as PGF2α, a potent endothelium-dependent vasoconstrictor (38). Consistent with this specificity, DPTL relaxed PGF2α-dependent uterine contraction. Because DPTL also interacts with PGH2, an endoperoxide that behaves as a vasoconstrictor (39) and converts spontaneously to PGF2α, it is evident that DPTL behaves as a potential vasodilator, increasing blood flow at sites of feeding. Our results also demonstrate high-affinity interaction of DPTL with PGD2, a major mediator of mast cells thought to be involved in allergic reactions and immune responses (40-41). Furthermore, DPTL binds to 15(S)-HETE and is the first lipocalin reported to date that targets a bioactive component generated by the 15-LOX pathway. This interaction may be relevant, taking into account the role of 15(S)-HETE in monocyte-endothelial cell interaction (42), ICAM-1 expression by endothelial cells (43), and other pro-inflammatory events (44). 15(S)-HETE also modulates angiogenesis (18), and this activity was inhibited by DPTL, as evidenced by reduced endothelial tube formation on growth factor-reduced Matrigel. Interference with angiogenesis may potentially attenuate inflammation and granulation tissue formation at the site of the bite, as was reported previously for tick saliva (17).
These results suggest that DPTL displays multifunctional antihemostatic properties, through attenuation of platelet aggregation mediated by TXA2, negative modulation of vessel tonus by PGF2α, and blockade of angiogenesis by 15(S)-HETE (Fig. 5). It is important to recognize that DPTL is a very abundant lipocalin in the salivary gland, accounting for at least 30% of total salivary lipocalins as estimated by SDS/PAGE (Fig. 1C). Assuming a molecular mass of ~20 kDa for DPTL and release of 50% of the salivary contents (~1 µg per salivary gland pair) upon feeding, of which ~30% is DPTL, a concentration of at least 1 µM of the inhibitor could exist in the feeding environment (~15 µl); this concentration is clearly above the KD for TXA2 and other prostanoids. In addition, D. maxima expresses an apyrase and several other uncharacterized salivary proteins that may interfere with hemostasis in a distinct yet redundant manner.
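Reading "~1 µg per salivary gland pair" as the amount of protein released, the estimate works out as follows (a reconstruction of the authors' arithmetic, not in the original):

```latex
[\mathrm{DPTL}] \approx
\frac{0.3 \times 1\ \mu\mathrm{g}}{(2\times 10^{4}\ \mathrm{g\,mol^{-1}})\,(15\ \mu\mathrm{l})}
= \frac{1.5\times 10^{-11}\ \mathrm{mol}}{1.5\times 10^{-5}\ \mathrm{l}}
= 1\ \mu\mathrm{M}
```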
Results obtained by pharmacologic assays were compatible with DPTL being a lipocalin that evolved a binding pocket adapted to accommodate small eicosanoids. To gain further insight into the mechanism of binding of DPTL to TXA2, a molecular model based on NP2 was constructed (13). The putative binding pocket of the model exhibits a generally hydrophobic structure consistent with the binding of eicosanoids. Also present are residues that could potentially form hydrogen bonds with polar groups on the ligand. Many of these residues are conserved in pallidipin, which has been claimed to be a specific inhibitor of collagen-mediated platelet aggregation, suggesting that the two proteins may have similar functions; however, it is clear that DPTL (and pallidipin) are not specific collagen-binding proteins or receptor antagonists, because they do not affect platelet shape change or platelet adhesion. Accordingly, it is plausible that pallidipin exerts its antiplatelet activity through a mechanism similar to the one characterized here for DPTL, i.e., binding to TXA2; interaction with other eicosanoids is also likely. In conclusion, DPTL displays unique ligand specificities that may assist the triatomine D. maxima to successfully feed on blood. DPTL is a potentially useful tool to understand the contributions of TXA2, PGF2α, and 15(S)-HETE to different pathways leading to platelet aggregation, vasoconstriction, angiogenesis, and oxidative stress in health and disease (49-50). | 6,447 | 2010-10-02T00:00:00.000 | [
"Biology",
"Chemistry"
] |
The effects of different footprint sizes and cloud algorithms on the top-of-atmosphere radiative flux calculation from the Clouds and Earth's Radiant Energy System (CERES) instrument on Suomi National Polar-orbiting Partnership (NPP)
Only one Clouds and Earth's Radiant Energy System (CERES) instrument is onboard the Suomi National Polar-orbiting Partnership (NPP) and it has been placed in cross-track mode since launch; it is thus not possible to construct a set of angular distribution models (ADMs) specific for CERES on NPP. Edition 4 Aqua ADMs are used for flux inversions for NPP CERES measurements. However, the footprint size of NPP CERES is greater than that of Aqua CERES, as the altitude of the NPP orbit is higher than that of the Aqua orbit. Furthermore, cloud retrievals from the Visible Infrared Imaging Radiometer Suite (VIIRS) and the Moderate Resolution Imaging Spectroradiometer (MODIS), which are the imagers sharing the spacecraft with NPP CERES and Aqua CERES, are also different. To quantify the flux uncertainties due to the footprint size difference between Aqua CERES and NPP CERES, and due to both the footprint size difference and cloud property difference, a simulation is designed using the MODIS pixel-level data, which are convolved with the Aqua CERES and NPP CERES point spread functions (PSFs) into their respective footprints. The simulation is designed to isolate the effects of footprint size and cloud property differences on flux uncertainty from calibration and orbital differences between NPP CERES and Aqua CERES. The footprint size difference between Aqua CERES and NPP CERES introduces instantaneous flux uncertainties in monthly gridded NPP CERES measurements of less than 4.0 W m-2 for SW (shortwave) and less than 1.0 W m-2 for both daytime and nighttime LW (longwave). The global monthly mean instantaneous SW flux from simulated NPP CERES has a low bias of 0.4 W m-2 when compared to simulated Aqua CERES, and the root-mean-square (RMS) error is 2.2 W m-2 between them; the biases of daytime and nighttime LW flux are close to zero, with RMS errors of 0.8 and 0.2 W m-2. These uncertainties are within the uncertainties of the CERES ADMs. When both footprint size and cloud property (cloud fraction and optical depth) differences are considered, the uncertainties of monthly gridded NPP CERES SW flux can be up to 20 W m-2 in the Arctic regions, where cloud optical depth retrievals from VIIRS differ significantly from MODIS. The global monthly mean instantaneous SW flux from simulated NPP CERES has a high bias of 1.1 W m-2 and the RMS error increases to 5.2 W m-2. LW flux shows less sensitivity to cloud property differences than SW flux, with uncertainties of about 2 W m-2 in the monthly gridded LW flux, and the RMS errors of global monthly mean daytime and nighttime fluxes increase only slightly. These results highlight the importance of consistent cloud retrieval algorithms to maintain the accuracy and stability of the CERES climate data record.
Introduction
The Clouds and Earth's Radiant Energy System (CERES) project has been providing data products crucial to advancing our understanding of the effects of clouds and aerosols on radiative energy within the Earth-atmosphere system. CERES data are used by the science community to study the Earth's radiation budget.
Six CERES instruments have flown on four different satellites thus far. The CERES pre-flight model (FM) on the Tropical Rainfall Measuring Mission (TRMM) was launched on 27 November 1997 into a 350 km circular precessing orbit with a 35° inclination angle and flew together with the Visible and Infrared Scanner (VIRS). CERES instruments FM1 and FM2 on Terra were launched on 18 December 1999 into a 705 km Sun-synchronous orbit with a 10:30 equatorial crossing time (ECT). CERES instruments FM3 and FM4 on the Aqua satellite were launched on 4 May 2002 into a 705 km Sun-synchronous orbit with a 13:30 ECT. CERES on Terra and Aqua flies alongside the Moderate Resolution Imaging Spectroradiometer (MODIS). The CERES instrument FM5 was launched onboard the Suomi National Polar-orbiting Partnership (hereafter referred to as NPP) on 28 October 2011 into an 824 km Sun-synchronous orbit with a 13:30 ECT and flies alongside the Visible Infrared Imaging Radiometer Suite (VIIRS). As the orbit altitudes differ among these satellites, the spatial resolutions of the CERES instruments also vary: TRMM has the lowest orbit altitude and offers the highest spatial resolution of CERES measurements, about 10 km at nadir; the spatial resolution of CERES on Terra and Aqua is about 20 km at nadir; and it is about 24 km at nadir for NPP, which has the highest orbit altitude.
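To first order, the nadir footprint scales linearly with orbit altitude for a fixed angular field of view, which is consistent with the numbers quoted above (a back-of-the-envelope check, not in the original):

```latex
d_{\mathrm{NPP}} \approx d_{\mathrm{Aqua}} \times \frac{h_{\mathrm{NPP}}}{h_{\mathrm{Aqua}}}
= 20\ \mathrm{km} \times \frac{824}{705} \approx 23.4\ \mathrm{km} \approx 24\ \mathrm{km}
```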
The CERES instrument consists of a three-channel broadband scanning radiometer (Wielicki et al., 1996). The scanning radiometer measures radiances in shortwave (SW, 0.3-5 µm), window (WN, 8-12 µm), and total (0.3-200 µm) channels. The longwave (LW) component is derived as the difference between the total and SW channels. These measured radiances (I) at a given Sun-Earth-satellite geometry are converted to outgoing reflected solar and emitted thermal top-of-atmosphere (TOA) radiative fluxes (F) as follows:

F(θ0) = π I(θ0, θ, φ) / R_j(θ0, θ, φ),   (1)

where θ0 is the solar zenith angle, θ is the CERES viewing zenith angle, φ is the relative azimuth angle between CERES and the solar plane, and R_j(θ0, θ, φ) is the anisotropic factor for scene type j. Here scene type is a combination of variables (e.g., surface type, cloud fraction, cloud optical depth, cloud phase, aerosol optical depth, precipitable water, lapse rate) that are used to group the data to develop distinct angular distribution models (ADMs). Note that the SW ADMs are developed as a function of θ0, θ, and φ for each scene type, whereas the LW ADMs are a weak function of θ0 and φ and are developed only as a function of θ (Loeb et al., 2005; Su et al., 2015a).
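In code, the inversion in Eq. (1) is a one-liner once the anisotropic factor has been looked up; the sketch below (hypothetical function name, not CERES processing code) assumes the ADM lookup for the footprint's scene type and Sun-viewing geometry happens elsewhere.

```python
import numpy as np

def invert_to_flux(radiance, anisotropic_factor):
    """Convert a broadband radiance I (W m-2 sr-1) to a TOA flux F (W m-2)
    using the scene-dependent anisotropic factor R: F = pi * I / R."""
    return np.pi * radiance / anisotropic_factor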
To facilitate the construction of ADMs, there are pairs of identical CERES instruments on both Terra and Aqua. At the beginning of these missions, one of the instruments on each satellite was always placed in a rotating azimuth plane (RAP) scan mode, while the other was placed in cross-track mode to provide spatial coverage. When in RAP mode, the instrument scans in elevation as it rotates in azimuth, thus acquiring radiance measurements from a wide range of viewing combinations. About 60 months of RAP data were collected on Terra and about 32 months on Aqua. CERES instruments fly alongside high-resolution imagers, which provide accurate scene-type information within the CERES footprints. Cloud and aerosol retrievals based upon high-resolution imager measurements are averaged over the CERES footprints by accounting for the CERES point spread function (PSF; Smith, 1994) and are used for scene-type classification. Similarly, spectral radiances from MODIS and VIIRS observations are averaged over the CERES footprints weighted by the CERES PSF. Surface types are obtained from the International Geosphere-Biosphere Program (IGBP; Loveland and Belward, 1997) global land cover data set. Fresh snow and sea ice surface types are derived from a combination of the National Snow and Ice Data Center (NSIDC) microwave snow-ice map and the National Environmental Satellite, Data, and Information Service (NESDIS) snow-ice map. NESDIS uses imager data to identify snow and sea ice and provides snow and sea ice information near the coast, whereas NSIDC does not provide microwave retrievals within 50 km of the coast.
TRMM ADMs were developed using 9 months of CERES observations and the scene identification information retrieved from VIRS observations (Loeb et al., 2003). Terra ADMs and Aqua ADMs were developed separately using multi-year CERES Terra and Aqua measurements in RAP mode and in cross-track mode, with scene identification information from Terra MODIS and Aqua MODIS (Loeb et al., 2005; Su et al., 2015a). The high-resolution MODIS imager provides cloud conditions for every CERES footprint. The cloud algorithms developed by the CERES cloud working group retrieve cloud fraction, cloud optical depth, cloud phase, cloud top temperature and pressure, and cloud effective temperature and pressure (among other variables) based on MODIS pixel-level measurements (Minnis et al., 2010). These pixel-level cloud properties are spatially and temporally matched with the CERES footprints and are used to select the scene-dependent ADMs that convert the CERES-measured radiances to fluxes (Eq. 1). The spatial matching criterion used is 1 km. The temporal matching criterion is less than 20 s when CERES is in cross-track mode and less than 6 min when CERES is in RAP mode.
There is only one CERES instrument on NPP and it has been placed in cross-track scan mode since launch; it is thus not feasible to develop a specific set of ADMs for CERES on NPP. Currently, the Edition 4 Aqua ADMs (Su et al., 2015a) are used to invert fluxes for the CERES measurements on NPP. The CERES footprint size on NPP is larger than that on Aqua. As pointed out by Di Girolamo et al. (1998), the nonreciprocal behavior of the radiation field depends on measurement resolution, which means the ADMs do too. They concluded that ADMs should be applied only to data of the same resolution as the data used to derive the ADMs. Since the footprint sizes differ between Aqua CERES and NPP CERES, will using ADMs developed from Aqua CERES measurements for NPP CERES flux inversion introduce any uncertainties in the NPP CERES flux? Additionally, ADMs are scene-type dependent, and it is important to use consistent scene identification for developing and applying the ADMs. However, the VIIRS channels are not identical to those of MODIS (notably, VIIRS lacks the 6.7 and 13.3 µm channels), which causes the cloud properties retrieved from MODIS and VIIRS to differ from each other. These differences affect the scene identification used to select the ADMs for flux inversion and can thus lead to additional uncertainties in the NPP CERES flux. In this study, we design a simulation to quantify the NPP CERES flux uncertainties due to the footprint size difference alone and due to both the footprint size and cloud property differences.
Comparison between Aqua CERES and NPP CERES
Besides the altitude difference between the Aqua and NPP satellites, the two orbits also differ in other characteristics. For example, the orbital period is about 98.82 min for Aqua and about 101.44 min for NPP, and the orbital inclination for Aqua is about 98.20°. To compare the two instruments directly, we identify collocated footprints and compare the measured radiances (I_a^m for Aqua and I_n^m for NPP) as well as the fluxes (F_a^m and F_n^m) inverted from them using Aqua CERES ADMs. The matching criteria used for SW radiances are that the latitude and longitude differences between the Aqua footprints and the NPP footprints are less than 0.05°, the solar zenith angle and viewing zenith angle differences are less than 2°, and the relative azimuth angle difference is less than 5°. These criteria also provide a tight constraint on scattering angles, with about 95.6 and 99.9% of the matched footprints having scattering angle differences of less than 2 and 3°, respectively. The same latitude and longitude matching criteria are used for LW radiances, together with a viewing zenith angle difference between the Aqua and NPP footprints of less than 2°.
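A sketch of the matching test described above, with hypothetical field names (all angles in degrees); the thresholds are the ones quoted in the text.

```python
def is_matched(aqua, npp, shortwave=True):
    """Return True if an (Aqua, NPP) footprint pair satisfies the collocation
    criteria: lat/lon within 0.05 deg; for SW, solar zenith, viewing zenith,
    and relative azimuth within 2, 2, and 5 deg; for LW, only the viewing
    zenith angle (within 2 deg) is additionally constrained."""
    if abs(aqua["lat"] - npp["lat"]) >= 0.05 or abs(aqua["lon"] - npp["lon"]) >= 0.05:
        return False
    if shortwave:
        return (abs(aqua["sza"] - npp["sza"]) < 2.0
                and abs(aqua["vza"] - npp["vza"]) < 2.0
                and abs(aqua["raa"] - npp["raa"]) < 5.0)
    return abs(aqua["vza"] - npp["vza"]) < 2.0
```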
Figure 1 shows the SW, daytime LW, and nighttime LW radiance comparisons between Aqua CERES and NPP CERES using matched footprints from 2013 and 2014. The total number of matched footprints, the mean I_a^m and I_n^m, and the root-mean-square (RMS) errors are summarized in Table 1. The mean SW I_n^m is about 1 W m-2 sr-1 greater than I_a^m, the mean daytime LW I_n^m is about 0.4 W m-2 sr-1 smaller than I_a^m, and the nighttime LW I_n^m and I_a^m agree to within 0.1 W m-2 sr-1. Excluding matched footprints with a scattering angle difference greater than 2° does not change the SW comparison result. These comparisons include data taken from nadir to oblique viewing angles (θ > 60°). The RMS errors remain almost the same when we compare radiances taken at different θ ranges. Footprint size differences may also contribute to the radiance differences, but these contributions should be random. It is likely that the footprint size differences increase the RMS errors, but the mean radiance differences result mostly from calibration differences between Aqua CERES and NPP CERES. As mentioned earlier, the daytime CERES LW radiance is derived as the difference between total-channel and SW-channel measurements, and the nighttime CERES LW radiance is derived directly from the total-channel measurements. The differences shown in Table 1 indicate that the agreement of the total channels between Aqua CERES and NPP CERES is better than that of the SW channels, leading to a smaller daytime LW difference than SW difference. Loeb et al. (2016) examined the normalized instrument gains for the total and SW channels of CERES FM1-FM5 since the beginning of the mission (BOM). The total-channel response to LW radiation has gradually increased with time for all instruments. For the two instruments of interest here (FM3 and FM5), the increases relative to the BOM are 0.7% for FM3 and 0.4% for FM5. The SW-channel response increased about 0.4% for FM3 and decreased by 0.2% for FM5. The exact causes of the calibration differences between Aqua CERES and NPP CERES are not yet known, and more research is needed to understand them. The future plan is to place NPP CERES on the same radiometric scale as Aqua CERES.
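The statistics quoted here and in Table 1 reduce to a mean difference and an RMS error over the matched pairs; a minimal sketch (the relative RMS error is used later when comparing radiance and flux comparisons):

```python
import numpy as np

def bias_and_rms(aqua, npp):
    """Mean difference (NPP minus Aqua), RMS error, and relative RMS error
    (RMS divided by the mean Aqua value) over matched footprints."""
    aqua = np.asarray(aqua, dtype=float)
    diff = np.asarray(npp, dtype=float) - aqua
    rms = np.sqrt(np.mean(diff ** 2))
    return diff.mean(), rms, rms / aqua.mean()
```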
Flux comparisons using the same matched footprints are shown in Fig. 2, and the mean F_a^m and F_n^m and the RMS errors between them are summarized in Table 1. Consistent with the radiance comparisons, the mean SW F_n^m is about 3.8 W m-2 greater than F_a^m, the mean daytime LW F_n^m is about 1.0 W m-2 smaller than F_a^m, and the mean nighttime LW F_n^m is about 0.3 W m-2 smaller than F_a^m. When we compare the relative RMS errors (RMS error divided by the mean Aqua value) between radiance and flux, the relative flux RMS errors (6.4% for SW, 2.2% for daytime LW, and 1.4% for nighttime LW) are always slightly larger than the relative radiance RMS errors (6.0% for SW, 2.1% for daytime LW, and 1.1% for nighttime LW). This indicates that additional uncertainties are introduced when the radiances are converted to fluxes. However, we cannot directly compare the gridded monthly mean fluxes from Aqua and NPP, as their overpass times differ. Figure 3 shows the monthly mean TOA insolation difference between NPP CERES and Aqua CERES for April 2013. Insolation for NPP overpass times is greater than that for Aqua overpass times over most regions, except over the northern high latitudes, where NPP has significantly more overpasses at θ0 > 70° than Aqua. Regional differences as large as 30 W m-2 are observed over the tropical regions and north of 60° N. Globally, the NPP CERES monthly mean insolation is greater than that of Aqua CERES by 13.4 W m-2 for this month. When we compare the monthly gridded TOA-reflected SW flux between NPP CERES and Aqua CERES (Fig. 4a), the difference features in high-latitude regions (north of 60° N and south of 60° S) resemble those of the insolation differences. We then compare the albedo between NPP CERES and Aqua CERES (Fig. 4b). Over most regions, the albedo from NPP CERES is greater than that from Aqua CERES, except over parts of the tropical oceans and Antarctica, where some negative differences are observed. The global monthly mean albedo from NPP CERES is greater than that from Aqua CERES by 0.003 (1.02%). The albedo difference results mostly from the calibration differences (see Fig. 1a and Table 1), although the footprint size difference and scene identification difference also contribute.
The CERES cloud working group developed sophisticated cloud detection algorithms using visible and infrared channels of MODIS, separately for polar and nonpolar regions and for daytime, twilight, and nighttime (Trepte et al., 2010). However, these detection algorithms had to be modified to be applicable to the VIIRS observations (Q. Trepte, personal communication, 2017), as some of the MODIS channels utilized for cloud detection are not available on VIIRS. These modifications include replacing the 2.1 µm MODIS channel with the 1.6 µm VIIRS channel, replacing detection tests using the MODIS 6.7 and 13.3 µm channels with the VIIRS 3.7 and 11 µm channels, and supplementing with tests utilizing the VIIRS 1.6 µm channel and the brightness temperature differences between 11 and 12 µm. These changes mainly affect cloud detection over the polar regions. The parameterization of 1.24 µm reflectance was regenerated for VIIRS using improved wavelength and insolation weighting, which affects cloud optical depth retrieval over snow-ice surfaces (S. Sun-Mack, personal communication, 2017). The resulting retrieval differences are evident in Fig. 5. The cloud fraction retrieved from VIIRS is greater than that from MODIS by up to 10%, and the cloud optical depth from VIIRS is smaller than that from MODIS by 2-3 over part of the Antarctic. The cloud fraction from VIIRS over the northern high-latitude snow regions is smaller than that from MODIS, while the optical depth from VIIRS is greater. Over the Arctic, the cloud optical depth from VIIRS is much larger than that from MODIS. Over the ocean between 60° S and 60° N, the differences in cloud fraction appear rather random, while the differences in cloud optical depth are mostly positive (the VIIRS retrieval is greater than the Aqua MODIS retrieval).
Given that the footprint sizes and overpass times differ between Aqua CERES and NPP CERES, in addition to the calibration differences and cloud retrieval differences between them, fluxes from these CERES instruments cannot be compared directly to assess the effects of footprint size difference and cloud property difference on flux uncertainty.
Method
To quantify the footprint size and cloud retrieval effects on flux inversion without having to account for the calibration and overpass time differences, we design a simulation study using the MODIS pixel-level data and the Aqua-Earth-Sun geometry. MODIS spectral measurements are used to retrieve cloud properties and aerosol optical depth. These pixel-level imager-derived aerosol and cloud properties, and the spectral narrowband (NB) radiances from MODIS, are convolved with the CERES PSF to provide the most accurate aerosol and cloud properties that are spatially and temporally matched with the CERES broadband radiance data. Figure 6 illustrates the process of generating the simulated Aqua CERES and NPP CERES footprints from the MODIS pixels. We first use the Aqua CERES PSF to convolve the aerosol and cloud properties and the MODIS NB radiances (and other ancillary data) into Aqua-size footprints (left portion of Fig. 6), as is done for the standard Aqua CERES SSF product. The NB radiances for the simulated Aqua CERES footprints are denoted I_a^s(λ), where the superscript "s" stands for simulated (in contrast to the superscript "m" for measured). We then increase the footprint size to that of NPP and use the NPP CERES PSF to average the MODIS NB radiances, cloud and aerosol properties, and other ancillary data into the simulated NPP footprints. NB radiances for the simulated NPP CERES footprints are denoted I_n^s(λ). A total of 4 months (July 2012, October 2012, January 2013, and April 2013) of simulated Aqua CERES and NPP CERES data were created. Every Aqua CERES footprint contains the broadband SW and LW radiances measured by the CERES instrument. The simulated NPP footprints, however, do not contain broadband radiances. To circumvent this issue, we developed narrowband-to-broadband coefficients to convert the MODIS NB radiances to broadband radiances.
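The convolution step is, in essence, a PSF-weighted average of every imager pixel falling inside a footprint; a minimal sketch (hypothetical inputs: per-pixel values and the PSF weight evaluated at each pixel location):

```python
import numpy as np

def convolve_to_footprint(pixel_values, psf_weights):
    """PSF-weighted average of imager pixel quantities (spectral radiance,
    cloud fraction, optical depth, ...) over one CERES footprint."""
    w = np.asarray(psf_weights, dtype=float)
    v = np.asarray(pixel_values, dtype=float)
    return np.sum(w * v) / np.sum(w)
```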
The Edition 4 Aqua CERES SSF data from July 2002 to September 2007 are used to derive the narrowband-to-broadband (NB2BB) regression coefficients separately for SW, daytime LW, and nighttime LW. Seven MODIS spectral bands (0.47, 0.55, 0.65, 0.86, 1.24, 2.13, and 3.7 µm) are used to derive the broadband SW radiances, and the SW regression coefficients are calculated for every calendar month for discrete intervals of solar zenith angle, viewing zenith angle, relative azimuth angle, surface type, snow/non-snow conditions, cloud fraction, and cloud optical depth. Five MODIS spectral bands (6.7, 8.5, 11.0, 12.0, and 14.2 µm) are used to derive the broadband LW radiances, and the LW regression coefficients are calculated for every calendar month for discrete intervals of viewing zenith angle, precipitable water, surface type, snow/non-snow conditions, cloud fraction, and cloud optical depth. The 20 IGBP surface types are grouped into eight surface types: ocean, forest, savanna, grassland, dark desert, bright desert, Greenland permanent snow, and Antarctic permanent snow. When there is sea ice over the ocean or snow over the land surface types, regression coefficients for ice and snow conditions are developed (only footprints with 100% sea ice/snow coverage are considered).
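A minimal sketch of fitting one NB2BB bin, assuming a linear model with an offset (the paper does not spell out the regression form, so this is an illustrative assumption); the real coefficients are stratified by month, geometry, surface type, snow condition, cloud fraction, and cloud optical depth.

```python
import numpy as np

def fit_nb2bb(nb_radiances, bb_radiance):
    """Least-squares fit of broadband radiance as a linear combination of
    narrowband channels plus an offset, for a single angular/scene bin.
    nb_radiances: (n_footprints, n_channels); bb_radiance: (n_footprints,)."""
    X = np.column_stack([nb_radiances, np.ones(len(bb_radiance))])
    coeffs, *_ = np.linalg.lstsq(X, bb_radiance, rcond=None)
    return coeffs  # one weight per channel; the last entry is the offset
```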
These SW and LW NB2BB regression coefficients are then applied to I_a^s(λ) and I_n^s(λ) to derive the broadband radiances, I_a^s and I_n^s, for the simulated footprints of Aqua CERES and NPP CERES (shown on the left and right of Fig. 6), provided the footprint consists of a single surface type. As both the simulated Aqua CERES and NPP CERES footprints use the Aqua-Earth-Sun geometry, I_a^s and I_n^s have the same Sun-viewing geometry. Even though the Aqua CERES footprints contain the broadband radiances from CERES observations (I_a^m), we choose to use the broadband radiances calculated from the NB2BB regressions to ensure that I_a^s and I_n^s are consistently derived. Doing so, we can isolate the flux differences between simulated Aqua CERES and simulated NPP CERES caused by the footprint size difference.
The cloud properties in the simulated Aqua CERES footprints and in the simulated NPP CERES footprints are all based upon the MODIS retrievals, so the scene identifications used to select ADMs for flux inversion are almost the same for both, except for small differences due to the differing footprint sizes. As demonstrated in Fig. 5, cloud properties differ between the MODIS and VIIRS retrievals. These cloud retrieval differences affect the anisotropic factors selected for flux inversion. To simulate both the footprint size and cloud property differences, the cloud fraction and cloud optical depth retrievals from MODIS convolved into the simulated NPP CERES footprints are adjusted to resemble the VIIRS retrievals, allowing us to assess how cloud retrieval differences affect the flux. To accomplish this, daily cloud fraction ratios of VIIRS to MODIS are calculated for each 1° latitude by 1° longitude grid box. These ratios are then applied to the cloudy footprints of the MODIS retrieval to adjust the MODIS cloud fractions to be nearly the same as those from the VIIRS retrieval. Note that no adjustment is done for clear footprints. Similarly, daily cloud optical depth ratios of VIIRS to MODIS are calculated using cloudy footprints for each 1° by 1° grid box. These ratios are used to adjust the MODIS-retrieved cloud optical depths to be close to those from the VIIRS retrieval.
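The per-footprint adjustment then amounts to scaling by the grid box's daily VIIRS/MODIS ratio; a sketch under the stated rules (clear footprints are untouched per the text; the 100% cap for cloud fraction is an added safeguard, not stated in the paper):

```python
def adjust_to_viirs(modis_value, viirs_grid_mean, modis_grid_mean, cap=None):
    """Scale a footprint-level MODIS cloud property by the daily 1x1-degree
    grid-box ratio of VIIRS to MODIS means. Clear footprints (value == 0)
    are left unchanged; pass cap=100.0 when adjusting cloud fraction."""
    if modis_value == 0 or modis_grid_mean == 0:
        return modis_value
    adjusted = modis_value * (viirs_grid_mean / modis_grid_mean)
    return min(adjusted, cap) if cap is not None else adjusted
```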
Results
We first compare the footprint-level fluxes between simulated Aqua CERES and simulated NPP CERES using data from 1 April 2013 (about 700,000 footprints). As the cloud fraction and cloud optical depth adjustments are done at the grid-box level, it is not feasible to compare the footprint-level F_a^s and F_n^s' (the simulated NPP flux obtained with VIIRS-like cloud properties); only the footprint-level F_a^s and F_n^s are compared. For SW, the bias between F_a^s and F_n^s is 0.1 W m-2 and the RMS error is 4.7 W m-2. For LW, the biases are close to zero and the RMS errors are 1.3 and 0.9 W m-2 for daytime and nighttime, respectively. These flux RMS errors are much smaller than those listed in Table 1, indicating that calibration differences are responsible for most of the flux differences between the Aqua CERES and NPP CERES measurements. However, we should avoid direct comparisons between these two sets of RMS errors, as they are derived from different time periods.
We now compare the monthly grid-box (1° latitude by 1° longitude) mean fluxes from the three simulations outlined in the previous section. Differences between F_n^s and F_a^s are used to assess the NPP CERES gridded monthly mean instantaneous flux uncertainties due to the footprint size difference, and differences between F_n^s' and F_a^s are used to assess the uncertainties due to both the footprint size and cloud property differences.
The monthly mean instantaneous TOA SW fluxes for simulated Aqua CERES (F_a^s) are shown in Fig. 7a for April 2013. Note that these fluxes differ from those in the Edition 4 Aqua SSF product, as the CERES-measured radiances differ from those inferred using the NB2BB regression coefficients. The flux differences caused by the footprint size difference between the simulated NPP CERES and the simulated Aqua CERES (F_n^s - F_a^s) are shown in Fig. 7b. Grid boxes in white indicate that the number of footprints with valid SW fluxes differs by more than 2% between simulated Aqua CERES and NPP CERES; the NB2BB regressions are applied only to footprints that consist of a single surface type, which results in fewer footprints with valid fluxes for NPP CERES than for Aqua CERES. The footprint size difference between Aqua CERES and NPP CERES introduces an uncertainty that rarely exceeds 4.0 W m-2 in monthly gridded NPP CERES instantaneous SW fluxes. For global monthly mean instantaneous SW flux, the simulated NPP CERES has a low bias of 0.4 W m-2 compared to the simulated Aqua CERES, and the RMS error between them is 2.4 W m-2. Results from the other 3 months are very similar to April 2013 (not shown).
Figure 7c shows the SW flux difference caused by both the footprint size and cloud property differences (F_n^s' - F_a^s). Adding the cloud property differences increases the NPP CERES flux uncertainty compared to when only footprint size differences are considered (Fig. 7b), and the monthly gridded instantaneous flux uncertainty over the Arctic Ocean can exceed 20 W m-2. Accounting for cloud property differences, the global monthly mean instantaneous SW flux from simulated NPP CERES has a high bias of 1.1 W m-2 and the RMS error increases to 5.2 W m-2. Over the Arctic Ocean, the cloud optical depth from the VIIRS retrieval is much greater than that from the MODIS retrieval, while the difference in cloud fraction is relatively small. Anisotropic factors for thick clouds are smaller than those for thin clouds at oblique viewing angles and larger at near-nadir viewing angles. The viewing geometries over the Arctic Ocean produced a greater number of smaller anisotropic factors than larger ones when MODIS cloud optical depths were replaced with VIIRS-like cloud optical depths, which resulted in larger fluxes when using VIIRS-like cloud properties for flux inversion.
The daytime and nighttime LW fluxes from the simulated Aqua CERES footprints, the LW flux differences due to the footprint size difference, and the LW flux differences due to both the footprint size and cloud property differences are shown in Figs. 8 and 9. The effect of footprint size on gridded monthly mean daytime and nighttime LW flux is generally within 1.0 W m-2. For global monthly mean LW flux, the differences between F_n^s and F_a^s are close to zero, and the RMS errors between them are about 0.8 and 0.2 W m-2 for daytime and nighttime LW fluxes. When cloud property differences are also considered, their effect on gridded monthly mean LW fluxes increases to about 2 W m-2. The RMS errors of the global monthly mean LW flux increase slightly, to about 0.9 and 0.5 W m-2 for daytime and nighttime. The LW fluxes show much less sensitivity to cloud property changes than the SW fluxes, especially over the Arctic Ocean where the cloud optical depth changed significantly. This is because the LW ADMs over snow-ice surfaces have very little sensitivity to cloud optical depth (Su et al., 2015a); however, they were developed for discrete cloud fraction intervals, and larger flux changes are noted in regions experiencing large cloud fraction changes.
Summary and discussion
The scene-type-dependent ADMs are used to convert the radiances measured by the CERES instruments to fluxes. Specific empirical ADMs were developed for the CERES instruments on TRMM, Terra, and Aqua (Loeb et al., 2003, 2005; Su et al., 2015a). As there is only one CERES instrument on NPP and it has been placed in cross-track mode since launch, it is not possible to construct a set of ADMs specific to CERES on NPP. Edition 4 Aqua ADMs (Su et al., 2015a) are thus used for flux inversions of NPP CERES measurements. However, the altitude of the NPP orbit is higher than that of the Aqua orbit, resulting in a larger CERES footprint size on NPP than on Aqua. Given that the footprint size of NPP CERES differs from that of Aqua CERES, we need to quantify the NPP CERES flux uncertainty caused by using the Aqua CERES ADMs. Furthermore, there are differences between the imagers that share a spacecraft with Aqua CERES (MODIS) and NPP CERES (VIIRS), as VIIRS lacks the 6.7 and 13.3 µm channels. These spectral and algorithm differences lead to notable differences in the cloud fraction and cloud optical depth retrieved from MODIS and VIIRS. As the anisotropic factors are scene-type dependent, differences in cloud properties also introduce uncertainties in flux inversion. Furthermore, the calibrations of the CERES instruments on Aqua and NPP differ from each other. Comparisons using 2 years of collocated Aqua CERES and NPP CERES footprints indicate that the SW radiances from NPP CERES are about 1.5% greater than those from Aqua CERES, the daytime LW radiances from NPP CERES are about 0.5% smaller than those from Aqua CERES, and the nighttime LW radiances agree to within 0.1%.
To quantify the flux uncertainties due to the footprint size difference between Aqua CERES and NPP CERES, and due to both the footprint size difference and the cloud property difference, we use the MODIS pixel-level data to simulate the Aqua CERES and NPP CERES footprints. The simulation is designed to isolate the effects of footprint size difference and cloud property difference on flux uncertainty from the calibration difference between NPP CERES and Aqua CERES. The pixel-level MODIS spectral radiances, the imager-derived aerosol and cloud properties, and other ancillary data are first convolved with the Aqua CERES PSF to generate the simulated Aqua CERES footprints, and then convolved with the NPP CERES PSF to generate the simulated NPP CERES footprints. Broadband radiances within the simulated Aqua CERES and NPP CERES footprints are derived from the MODIS spectral bands using narrowband-to-broadband regression coefficients developed from 5 years of Aqua data, to ensure consistency between the broadband radiances from simulated Aqua CERES and NPP CERES. These radiances are then converted to fluxes using the Aqua CERES ADMs. The footprint size difference between Aqua CERES and NPP CERES introduces instantaneous flux uncertainties in monthly gridded NPP CERES of less than 4.0 W m-2 for SW and less than 1.0 W m-2 for both daytime and nighttime LW. The global monthly mean instantaneous SW flux from simulated NPP CERES has a low bias of 0.4 W m-2 compared to that from simulated Aqua CERES, and the RMS error between them is 2.4 W m-2. The biases in global monthly mean LW fluxes are close to zero, and the RMS errors between simulated NPP CERES and simulated Aqua CERES are about 0.8 and 0.2 W m-2 for daytime and nighttime global monthly mean LW fluxes.
The cloud properties in the simulated Aqua CERES footprints and in the simulated NPP CERES footprints are all based upon MODIS retrievals, but in reality the cloud properties retrieved from VIIRS differ from those from MODIS. To assess the flux uncertainty from scene identification differences, the cloud fraction and cloud optical depth in the simulated NPP CERES footprints are perturbed to be more like the VIIRS retrievals. When both footprint size and cloud property differences are considered, the uncertainties of monthly gridded NPP CERES SW flux can be up to 20 W m-2 in the Arctic regions, where cloud optical depth retrievals from VIIRS differ significantly from MODIS. The global monthly mean instantaneous SW flux from simulated NPP CERES has a high bias of 1.1 W m-2 and the RMS error increases to 5.2 W m-2. LW flux shows less sensitivity to cloud property differences than SW flux, with uncertainties of about 2.0 W m-2 in the monthly gridded LW flux, and the RMS errors increase to 0.9 and 0.5 W m-2 for daytime and nighttime LW flux. Su et al. (2015b) quantified the global monthly 24 h averaged flux uncertainties due to CERES ADMs using direct integration tests and concluded that the RMS errors are less than 1.1 and 0.8 W m-2 for 24 h averaged TOA SW and LW fluxes. The uncertainty for the global monthly instantaneous SW flux is approximately twice the uncertainty of the 24 h averaged flux. This simulation study indicates that the footprint size differences between NPP CERES and Aqua CERES introduce flux uncertainties that are within the uncertainties of the CERES ADMs. However, the uncertainty assessment provided here should be considered a lower bound, as many regions (especially over land, snow, and ice) were not included due to sample number differences within the grid boxes. When cloud property differences are accounted for, the SW flux uncertainties increase significantly and exceed the uncertainties of the CERES ADMs. These findings indicate that inverting NPP CERES fluxes using Aqua CERES ADMs results in flux uncertainties that are within the ADM uncertainties as long as the cloud retrievals from VIIRS and MODIS are consistent. When the cloud retrieval differences between VIIRS and MODIS are accounted for, the SW flux uncertainties exceed those of the CERES ADMs. To maintain the consistency of the CERES climate data record, it is thus important to develop cloud retrieval algorithms that account for the capabilities of both MODIS and VIIRS to ensure consistent cloud properties from both imagers.
Data availability. The CERES data were obtained from the NASA Langley Atmospheric Science Data Center at https://eosweb.larc.nasa.gov/project/ceres/ssf_table. The CERES data are produced by the CERES science team, and the data quality summaries are available at https://eosweb.larc.nasa.gov/project/ceres/quality_summaries/CER_SSF_Terra-Aqua_Edition4A.pdf.
Competing interests. The authors declare that they have no conflict of interest.
Figure 1. Radiance comparisons between matched Aqua CERES and NPP CERES footprints: (a) SW, (b) daytime LW, and (c) nighttime LW, using data from 2013 and 2014. The total number of footprints, the mean radiances, and the radiance RMS errors are summarized in Table 1.
Figure 2. Flux comparisons between matched Aqua CERES and NPP CERES footprints: (a) SW, (b) daytime LW, and (c) nighttime LW, using data from 2013 and 2014. The total number of footprints, the mean fluxes, and the flux RMS errors are summarized in Table 1.
Figure 6. Schematic diagram of convolving the MODIS pixels into the simulated Aqua and NPP footprints. The left panel depicts the process used to produce the simulated Aqua footprints, the middle panel depicts the simulated NPP footprints with MODIS retrievals, and the right panel depicts the simulated NPP footprints with VIIRS-like retrievals.
Figure 7. The gridded monthly mean TOA instantaneous SW fluxes from the simulated Aqua footprints (F s a, a), the flux differences caused by the footprint size difference between simulated NPP and simulated Aqua (F s n − F s a, b), and the flux differences caused by both footprint size and cloud property differences (F n s − F s a, c), using April 2013 data. Regions shown in white have large sample number differences between simulated Aqua and simulated NPP.
Figure 8. The gridded monthly mean TOA daytime LW fluxes from the simulated Aqua footprints (F s a, a), the flux differences caused by the footprint size difference between simulated NPP and simulated Aqua (F s n − F s a, b), and the flux differences caused by both footprint size and cloud property differences (F n s − F s a, c), using April 2013 data. Regions shown in white have large sample number differences between simulated Aqua and simulated NPP.
Figure 9. The gridded monthly mean TOA nighttime LW fluxes from the simulated Aqua footprints (F s a, a), the flux differences caused by the footprint size difference between simulated NPP and simulated Aqua (F s n − F s a, b), and the flux differences caused by both footprint size and cloud property differences (F n s − F s a, c), using April 2013 data. Regions shown in white have large sample number differences between simulated Aqua and simulated NPP. | 8,526 | 2017-10-27T00:00:00.000 | [
"Environmental Science",
"Physics"
] |
Detecting Mild Cognitive Impairment by Exploiting Linguistic Information from Transcripts
Here we seek to automatically identify Hungarian patients suffering from mild cognitive impairment (MCI) based on linguistic features collected from their speech transcripts. Our system uses machine learning techniques and is based on several linguistic features like characteristics of spontaneous speech as well as features exploiting morphological and syntactic parsing. Our results suggest that it is primarily morphological and speech-based features that help distinguish MCI patients from healthy controls.
Background
Mild cognitive impairment (MCI) is a heterogeneous set of symptoms that are essential in the early detection of Alzheimer's Disease (AD) (Negash et al., 2007). Symptoms such as language dysfunctions may occur even nine years before the actual diagnosis (APA, 2000). Thus, the language use of the patient may often indicate MCI well before the clinical diagnosis of dementia.
MCI is known to influence the (spontaneous) speech of the patient via three main aspects. First, verbal fluency declines, which results in longer hesitations and a lower speech rate (Roark et al., 2011). Second, the lexical frequency of words and part-of-speech tags may also change significantly as the patient has problems with finding words (Croot et al., 2000). Third, the emotional responsiveness of the patient was also observed to change in many cases (Lopez-de Ipiña et al., 2015).
For many patients, MCI is never recognized as in the early stage of the disease it is not trivial even for experts to detect cognitive impairment: according to Boise et al. (2004), up to 50% of MCI patients are never diagnosed with MCI. Although there are well known tests such as the Mini Mental Test, they are usually not sensitive enough to reliably filter out MCI in its early stage. Tests on linguistic memory prove more efficient in detecting MCI, but they tend to yield a relatively high number of false positive diagnoses (Roark et al., 2011).
Although language abilities are impaired from an early stage of the disease, evaluating the language capacities of the patients has only received marginal attention when diagnosing AD (Bayles, 1982). However, if diagnosed early, a proper medical treatment may delay the occurrence of other (more severe) symptoms of dementia to the latest extent possible (Kálmán et al., 2013).
Here we seek to automatically identify Hungarian patients suffering from mild cognitive impairment based on their speech transcripts. Our system uses machine learning techniques and is based on several features like linguistic characteristics of spontaneous speech as well as features exploiting morphological and syntactic parsing.
Recently, several studies have reported results on identifying different types of dementia with NLP and speech recognition techniques. For instance, automatic speech recognition tools were employed in detecting aphasia (Fraser et al., 2013b; Fraser et al., 2014; Fraser et al., 2013a), mild cognitive impairment (Lehr et al., 2012), and Alzheimer's Disease (Baldas et al., 2010; Satt et al., 2014). Jarrold et al. (2014) distinguished four types of dementia on the basis of spontaneous speech samples. Lexical analysis of spontaneous speech may also indicate different types of dementia (Bucks et al., 2000; Holmes and Singh, 1996) and may be exploited in the automatic detection of patients suffering from dementia (Thomas et al., 2005). As for analyzing written language, changes in the writing style of people may also point to dementia (Garrard et al., 2005; Hirst and Wei Feng, 2012; Le et al., 2011).
Concerning the automatic detection of MCI in Hungarian subjects, Tóth et al. (2015) experimented with speech recognition techniques. However, to the best of our knowledge, this is the first attempt to identify MCI on the basis of written texts, i.e. speech transcripts for Hungarian.
In the long run, we would like to develop a system that can automatically detect linguistic symptoms of MCI in its early stage, so that the person can get medical treatment as early as possible. It should be noted, however, that our goal cannot be an official diagnosis as diagnosing patients requires medical experience. All we can do is implement a test supported by methods used in artificial intelligence, which indicates whether the patient is at risk and if so, s/he can turn to medical experts who will provide the clinical diagnosis.
Data
In our experiments, two short animated films were presented to the patients at the memory clinic of the University of Szeged. Patients were asked to talk about the first film, then about their previous day, and lastly about the second film. Their speech productions were recorded and transcribed by linguists, who explicitly marked speech phenomena like hesitations and pauses in the transcripts. These transcripts formed the basis of our experiments, i.e. we exploited only written information.
All of our 84 subjects were native speakers of Hungarian, a morphologically rich language. For each person, a clinical diagnosis was at our disposal, i.e. it was clinically established whether the patient suffers from MCI or not. On the basis of these data, subjects were classified as either MCI patient or healthy control at the university memory clinic. Table 1 shows data on the subjects' gender and diagnosis, while Table 2 shows the mean values for age and education (in terms of years attended at school).
Speech transcripts reflect several characteristics of spontaneous speech. On the one hand, they contain several forms of hesitations and silent pauses, which are also marked in the transcripts; on the other hand, they abound in phenomena typical of spontaneous Hungarian speech such as phonological deletion (mer instead of the standard form mert "because" or ement instead of the standard form elment "(he) left") and lengthening (utánna instead of the standard form utána "then"). There are duplications (ez ezt "this this-ACC") and neologisms created by the speaker (feltkáva, which probably means főtt kávé "boiled coffee"). Fillers also deserve special attention when studying transcripts. Besides hesitations, we treated words and phrases referring to some kind of uncertainty, together with indefinite pronouns, as fillers, such as ilyen "such", olyan "such", izé "thing, gadget", és aztán "and then", valamilyen "some kind of", valahogy "somehow", valamerre "somewhere". MCI patients often seem to substitute content words with fillers or indefinite pronouns; moreover, they also appear to use many paraphrases, which likewise indicate uncertainty, as in egy ilyen bagolyszerűség (a such owl-likeness) "something similar to an owl" or az olyan délelőtt volt (that such morning was) "that happened some time in the morning".
Experiments
In order to determine the status of the subjects, we experimented with machine learning tools. The task was regarded as binary classification, i.e. subjects were classified as either an MCI patient or a healthy control, on the basis of a feature set derived from their transcripts.
At first, transcripts were morphologically and syntactically analysed with magyarlanc, a linguistic preprocessing toolkit developed for Hungarian (Zsibrita et al., 2013). For classification, we exploited morphological, syntactic and semantic features extracted from the output of magyarlanc.
Each person was asked to recall three different stories. As MCI is strongly related to memory deficit, we believe that the order of the tasks might also influence performance, hence we opted for processing each transcript separately. Thus, for each person, features to be discussed below were calculated separately for the three transcripts and all of them were exploited in the system.
Feature set
In our experiments, we employed features of spontaneous speech and morphological and semantic features derived from the transcripts and their automatic linguistic analyses. When defining our features, we took into account the fact that the speech of MCI patients may contain more pauses and hesitations than that of healthy controls (Tóth et al., 2015) and they are also supposed to have a restricted vocabulary due to cognitive deficit, which may affect the choice of words and the frequency of parts of speech (Croot et al., 2000) and might even yield neologisms. We also made use of demographic features that were at our disposal.
Our feature set contained the following features: Spontaneous speech based features: number of filled and silent pauses; number and rate of hesitations compared to the number of tokens; number of pauses that follow an article and precede content words as this might reflect that MCI patients may have difficulties with finding the appropriate content words; number of lengthened sounds (which we considered as a special form of hesitation).
Morphological features: number of tokens and words; number and rate of distinct lemmas; number of punctuation marks; number and rate of nouns, verbs, adjectives, pronouns and conjunctions; number of first person singular verbs as it might also be indicative how often the patient reflects to him/herself; number and rate of unanalyzed words, i.e. those with an "unknown" POS tag, which might indicate neologisms created by the speaker on the spot.
Semantic features: number and rate of fillers and uncertain words compared to the number of all tokens; number and rate of words/phrases related to memory activity (e.g. nem emlékszem not remember-1SG "I can't remember") as they directly signal problems with memory and recall; number of negation words; number and rate of content words and function words; number of thematic words related to the content of the films, based on manually constructed lists.
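As an illustration of how such counts could be derived from a marked-up transcript, the sketch below computes a few of the spontaneous-speech and semantic features; the hesitation markers, the pause notation, and the filler list are placeholders, not the actual annotation conventions or lexicons used for the transcripts, and in the system each of a subject's three transcripts would be processed separately as described above.

```python
import re

# Hypothetical marker conventions and filler inventory, for illustration only.
HESITATION_MARKERS = {"ööö", "mmm"}            # assumed hesitation transcriptions
FILLERS = {"ilyen", "olyan", "izé", "valamilyen", "valahogy", "valamerre"}

def extract_features(transcript: str) -> dict:
    tokens = re.findall(r"\w+", transcript.lower())
    n_tokens = len(tokens)
    n_hesitations = sum(t in HESITATION_MARKERS for t in tokens)
    n_fillers = sum(t in FILLERS for t in tokens)
    n_silent_pauses = transcript.count("(pause)")  # assumed pause notation
    return {
        "n_tokens": n_tokens,
        "n_hesitations": n_hesitations,
        "hesitation_rate": n_hesitations / n_tokens if n_tokens else 0.0,
        "n_fillers": n_fillers,
        "filler_rate": n_fillers / n_tokens if n_tokens else 0.0,
        "n_silent_pauses": n_silent_pauses,
    }

print(extract_features("ööö valahogy elment (pause) ilyen izé volt"))
```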
The mean values for each feature are reported in Table 3.
Statistical analysis of features
In order to reveal which features can most effectively distinguish healthy controls from MCI patients, we carried out a statistical analysis of the data (t-tests for each feature and transcript). For most of the features, significant differences were found between the two groups; the p-values are listed in Table 3. The age of the patients also shows a significant difference: people who were at least 71 years old were more likely to suffer from MCI than those who were younger at the time of the experiment (p = 0.0124).
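A minimal sketch of the per-feature comparison described above is given below, using a two-sample t-test from SciPy on invented values for one feature; the numbers are placeholders and only illustrate the procedure.

```python
import numpy as np
from scipy import stats

# Invented per-subject values for one feature in one transcript, for illustration only.
mci_hesitation_rate = np.array([0.09, 0.12, 0.08, 0.11, 0.10])
ctrl_hesitation_rate = np.array([0.04, 0.05, 0.06, 0.03, 0.05])

# Two-sample t-test comparing MCI patients and healthy controls on this feature.
t_stat, p_value = stats.ttest_ind(mci_hesitation_rate, ctrl_hesitation_rate)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```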
According to the data, each group of features has a significant effect in distinguishing controls from MCI patients. It is mostly the second transcript (the one containing the narratives about the subjects' previous day) where significant differences can be found between MCI patients and the control group. However, significant differences exist for the other two types of texts as well.
Machine learning experiments
To automatically identify MCI patients, we exploited machine learning techniques, i.e. support vector machines (SVM) (Cortes and Vapnik, 1995) with the default settings of Weka (Hall et al., 2009) and due to the small size of the dataset, we applied leave-one-out cross validation. As a baseline, majority labeling was used. For the evaluation, the accuracy, precision, recall and F-measure metrics were utilized.
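The classification itself was run in Weka; the following scikit-learn sketch is only an analogous setup (a linear-kernel SVM with leave-one-out cross-validation) on a placeholder feature matrix, not the authors' pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Placeholder data: 84 subjects x features from three transcripts; labels 1 = MCI, 0 = control.
rng = np.random.default_rng(0)
X = rng.normal(size=(84, 30))
y = rng.integers(0, 2, size=84)

# Leave-one-out cross-validation with an SVM, standing in for Weka's default SVM settings.
pred = cross_val_predict(SVC(kernel="linear"), X, y, cv=LeaveOneOut())
acc = accuracy_score(y, pred)
p, r, f, _ = precision_recall_fscore_support(y, pred, average="binary")
print(f"accuracy={acc:.3f} precision={p:.3f} recall={r:.3f} F={f:.3f}")
```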
In order to examine the effect of certain groups of features, we carried out an ablation study, i.e. we retrained the system without making use of one specific group of features. The results and differences are shown in Table 4.
Table 4: Results and differences. MCI: mild cognitive impairment, P: precision, R: recall, F: F-measure, %: accuracy.
Results and Discussion
Using all the features, our system managed to achieve an accuracy score of 69.1%, that is, 58 out of the 84 patients were correctly diagnosed. 12 patients were falsely diagnosed as healthy and 14 controls were falsely labeled as MCI patients. Our results outperformed the baseline (57.14% in terms of accuracy). The system got a high recall value for MCI patients (75.0) but a lower one for controls (61.1), which is encouraging in the light of the fact that our main goal is to identify the widest possible range of potential MCI patients, who can turn to clinical experts to find out what their clinical diagnosis is.
We also experimented with using only features that displayed statistically significant differences among controls and MCI patients (see Table 3). Somewhat surprisingly, an accuracy of 75% could be achieved in this way, which indicates that some of our original features are superfluous and just confused the system, and this result needs further investigation.
An ablation study was also carried out to analyze the added value of each feature group. Speech-based, demographic and morphological features unequivocally contributed to performance. However, the effect of semantic features is less clear-cut: as a group they harm performance, but some individual semantic features are useful for the system, as shown by the results achieved using only the significant features.
When investigating the errors made by our system, we found that MCI patients that spoke only a few short sentences were often classified as healthy controls. They had a lower number and rate of hesitations and pauses, moreover, their vocabulary contained fewer fillers and uncertain words, and these features resemble those typical of healthy controls. What is more, healthy subjects who talked more also hesitated more, which might be indicative of MCI. Furthermore, their use of pronouns and conjunctions was also more similar to those of MCI patients, hence the system falsely predicted a positive diagnosis for them.
Due to the specific characteristics of the data and the complexity of data collection -which requires clinical experiments -our dataset can be expanded only step by step. However, we found statistically significant differences among MCI patients and healthy controls concerning several linguistic and speech-based features even in our small dataset, which may be beneficial for our future experiments and might be also exploited by those who study spontaneous speech.
Conclusions
In this study, we introduced our system that automatically detects Hungarian patients suffering from mild cognitive impairment on the basis of their speech transcripts. The system is based on features derived from morphological and syntactic analysis as well as characteristics of spontaneous speech. Both statistical and machine learning results revealed that morphological and spontaneous speech-based features have an essential role in distinguishing MCI patients from healthy controls.
In the future, we would like to extend our dataset with new transcripts. Also, we intend to improve our machine learning system and investigate the role of semantic features. Lastly, we would like to integrate features from automatic speech recognition into our system so that tools from both speech technology and natural language processing can contribute to the automatic detection of mild cognitive impairment. | 3,255 | 2016-01-01T00:00:00.000 | [
"Computer Science",
"Biology"
] |
Deep Learnability: Using Neural Networks to Quantify Language Similarity and Learnability
Learning a second language (L2) usually progresses faster if a learner's L2 is similar to their first language (L1). Yet global similarity between languages is difficult to quantify, obscuring its precise effect on learnability. Further, the combinatorial explosion of possible L1 and L2 language pairs, combined with the difficulty of controlling for idiosyncratic differences across language pairs and language learners, limits the generalizability of the experimental approach. In this study, we present a different approach, employing artificial languages, and artificial learners. We built a set of five artificial languages whose underlying grammars and vocabulary were manipulated to ensure a known degree of similarity between each pair of languages. We next built a series of neural network models for each language, and sequentially trained them on pairs of languages. These models thus represented L1 speakers learning L2s. By observing the change in activity of the cells between the L1-speaker model and the L2-learner model, we estimated how much change was needed for the model to learn the new language. We then compared the change for each L1/L2 bilingual model to the underlying similarity across each language pair. The results showed that this approach can not only recover the facilitative effect of similarity on L2 acquisition, but can also offer new insights into the differential effects across different domains of similarity. These findings serve as a proof of concept for a generalizable approach that can be applied to natural languages.
INTRODUCTION
Learning a second language (L2) can be difficult for a variety of reasons. Pedagogical context (Tagarelli et al., 2016), cognitive processing differences across learners (Ellis, 1996;Yalçın and Spada, 2016), L2 structural complexity (Pallotti, 2015;Yalçın and Spada, 2016;Housen et al., 2019), or similarity between the target L2 and the learner's first language (L1) can all conspire to affect the speed and success of L2 acquisition (Hyltenstam, 1977;Lowie and Verspoor, 2004;Foucart and Frenck-Mestre, 2011;Málek, 2013;Schepens et al., 2013;Türker, 2016;Carrasco-Ortíz et al., 2017). Similarity, in particular, is a difficult variable to examine, because it is so hard to pin down. Although individual structures (e.g., relative clauses, cognate inventory) can be compared fairly straightforwardly across languages, it is much harder to combine these structures appropriately to determine a global similarity metric across languages, which limits our ability to predict how difficult an arbitrary L2 will be to acquire for different L1 speakers. The goal of this paper is to propose a new method for evaluating the effect of similarity on the learnability of L2 structures, using deep learning.
Current approaches to determining similarity effects on L2 acquisition typically take an experimental angle, usually proceeding in one of two ways. The first is to take one group of L2 learners, with the same L1, and compare their acquisition of different structures in the L2, such that one structure is similar to L1 and the other is different. For example, Lowie and Verspoor (2004) observed that Dutch learners of English acquire prepositions that are similar in form and meaning across the two languages (e.g., by/bij) more easily than ones that are dissimilar (e.g., among/tussen). Foucart and Frenck-Mestre (2011) found tentative evidence German learners of French show more electrophysiological sensitivity to gender errors when the French nouns have the same gender as German than when they have a different gender. This observation was later supported by Carrasco-Ortíz et al. (2017), who observed a similar pattern with Spanish learners of French. Díaz et al. (2016) examined Spanish learners of Basque, and found stronger electrophysiological responses to syntactic violations in structures that are common between Spanish and Basque compared to violations in structures unique to Basque. Türker (2016) found that English learners of Korean performed better at comprehending figurative language when the expressions shared lexical and conceptual structure across the two languages than when they diverged. Overall, then, it seems that at the lexical, morphosyntactic, syntactic, and conceptual levels, learners have an easier time acquiring L2 structures that are similar to the L1 equivalents than structures that are different.
The second type of experimental approach holds constant the target structures to be learned, and instead compares the acquisition of those structures across learners with different L1s. Málek (2013), for example, found that Afrikaans-speaking learners of English acquired prepositions, which divide up the conceptual space in very similar ways in the two languages, better than Northern Sotho speakers, which treats those same meanings quite differently. Kaltsa et al. (2019) found that German-Greek bilingual children, whose L1s have a gender system similar to Greek's, performed better on gender agreement tasks than English-Greek bilingual children, whose L1 has no such gender system. In a very large-scale study, Schepens et al. (2013) found that Dutch learners had more difficulty acquiring Dutch when their native languages' morphological systems were less similar to Dutch-especially if that dissimilarity lay in a reduced complexity. This approach, too, shows that similarity between L1 and L2 seems to aid learning.
Although these findings all agree that L1/L2 similarity is important, they nevertheless all rely on binary same/different evaluations at a feature by feature level. Yet even if language grammars could be neatly decomposed into binary feature bundles, it's not at all clear whether those features are equally strong in determining similarity. Are two languages more similar if they share relative clause construction, for example, or subjectverb agreement patterns? And even if we can arrive at a hierarchy of feature strength within a domain, it's not at all clear how similarity can be compared across domains. Is a language pair more similar if both employ a particular conceptual organization of spatial relations, or if both rely on suffixing concatenative morphology? And if a pair of languages share similar syntactic patterns but utterly distinct morphology, are they more or less similar than a pair of languages that share morphological structure, but are utterly dissimilar in syntax? Even if we assume that all linguists work from the same theoretical underpinnings when characterizing grammatical structures, these questions make it clear that using feature-by-feature comparisons to characterize linguistic similarity has severe limitations.
To avoid these problems, our approach employs deep learning, using changes in neural network activity before and after learning a second language as a proxy for the learnability of that language. In the work presented here, we restrict ourselves to carefully controlled artificial languages, as a proof of concept, but the approach is scalable to natural languages. Our process, illustrated in Figure 1, starts with a set of five artificial languages, whose similarity across pairs was systematically controlled (Step 1). Next, we trained Long Short Term Memory (LSTM) neural networks on each of these five languages, producing five models representing monolingual L1 speakers for each of the five languages (Step 2). After characterizing the state of these monolingual networks ( Step 3) we then retrained them on a second language (Step 4), crossing each possible L1 with each possible L2, to create a set of 20 "bilingual" networks. Twenty such networks are possible because there are 10 possible combinations of the five languages, and each combination counts twice-once for a network with an L1 of language A learning an L2 of language B, and then again for the reverse. We then characterized the state of each bilingual network (Step 5), and quantified the change of state that had to take place during the L2-acquisition process (Step 6). This change of state represents the "learnability" of each L2 for a speaker of each L1. Finally, we compared these learnability metrics to the built-in degree of similarity across different artificial language pairs (Step 7).
If our approach can capture the findings from experimental research in a scalable manner, then networks should show less change when learning L2s that are similar to the known L1. Further, by controlling the domains of similarity between the languages (e.g., morphology vs. syntax), we can examine which types of similarity are most effective in aiding L2 acquisition.
Artificial Languages
We built five artificial languages (Alpha, Bravo, Charlie, Delta, and Echo), which could vary across three dimensions: vocabulary, morphology, and syntax. We built multiple versions of each linguistic subsystem: two vocabularies, labeled A and B; three morphologies, labeled C, D, and E; and two syntactic systems, labeled F and G. By distributing the different linguistic subsystems across the five languages, we were able to manipulate language similarity systematically. Table 1 illustrates this manipulation. For example, Alpha has vocabulary A, morphology C, and syntax F. Charlie has vocabulary B, morphology E, and syntax G. It therefore shares none of these properties with Alpha, which makes the two languages maximally dissimilar. Alpha and Charlie therefore appear in the left-most column of Table 2, under the heading "No overlap." By contrast, Bravo has vocabulary A, morphology D, and syntax G, which means it overlaps with Alpha in exactly one dimension (vocabulary), while differing in morphological and syntactic systems. Alpha and Bravo therefore appear in the middle column, under the heading "One dimension > Vocabulary." Alpha and Delta overlap in two dimensions, sharing vocabulary A and syntax F, and so that language pair appears in the rightmost column, under the heading "Two dimensions > Vocabulary/Syntax." This distribution of features ensured that every combination of domain and degree of similarity was represented: one language pair was maximally dissimilar, sharing neither vocabulary, morphology, nor syntax; five language pairs overlapped in one of those three features; and four language pairs overlapped in two features. For each vocabulary, we constructed a set of 330 word roots, divided into six different lexical categories: nouns, verbs, adjectives, determiners, prepositions, and conjunctions. In each vocabulary, there were 100 of each of the three different classes of content words (nouns, verbs, adjectives) and 10 of each of the three different classes of function words (prepositions, determiners, conjunctions). For the sake of simplicity, we used the same set of phonotactic rules to generate the words in each vocabulary, but we ensured that there was no overlap between the two lexicons. This ensured identical phonotactic systems across all languages. Nevertheless, since the neural networks used here did not look below the level of the morpheme when learning each language, the similarity in phonotactics across the two vocabularies was not able to affect the learning process in this work.
The three morphological systems are all concatenative, but the features that appear and the types of morphosyntactic processes varied across each language. For example, morphology C had suffixing number concord in NPs, such that a plural suffix appeared on determiners, nouns, and adjectives, while singular NPs were unmarked. Plural subject NPs conditioned plural agreement suffixes on the verb, while tense markers for past, present or future appeared on verbs as prefixes. By contrast, morphology D had prefixing number on nouns and adjectives, but determiners had no number marking. Rather, determiners conditioned definiteness agreement on nouns, such that a lexically specified set of definite determiners conditioned a definite suffix on nouns, while the indefinite determiner conditioned an indefinite suffix. Verbs showed no agreement, but prepositions assigned either accusative or dative case on NP complements. The two syntactic systems varied primarily in whether they were head-initial (Syntax F) or head-final (Syntax G). Sentences could be one clause or two clauses; clauses could contain subjects and verb phrases; and verb phrases could contain direct objects and prepositional phrases. However, for the sake of simplicity we did not allow any recursion: Sentences could not extend beyond two clauses, and noun phrases could not be modified by prepositional phrases.
Examples (1-2) below illustrate the type of sentence generated in Alpha and Charlie, respectively, the maximally different languages. The "translation" below each example provides an English sentence with similar syntactic structure to the two sentences 1 . Note that the meanings of the individual words in the English glosses are arbitrary, as we did not build any semantic content into these languages.
(1) Alpha: … "Those large dogs will run in the empty garden." (2) Charlie: us-biaus … "This large dog will run in the empty garden."
Training Data
For each language, we generated 200,000 sentences by randomly selecting phrase structure rules in a top-down walk through the language's grammar. For example, if the language could contain single-clause or dual-clause sentences, we would randomly select a single clause sentence. A single clause sentences required a noun phrase subject, so the walk moved down to the noun phrase structure rule. Given the option of a singular or plural subject noun phrase, we would randomly select a plural noun phrase, and given the option of an adjective modifier or not, we would randomly select no adjective. This process was repeated for all syntactic structures in the sentence, right down to the vocabulary selection. The process was repeated to generate 200,000 sentences, which were filtered to remove any repetitions.
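A toy version of this top-down random walk is sketched below; the phrase-structure rules and vocabulary items are invented placeholders rather than the actual grammars of Alpha through Echo (which are available in the OSF archive).

```python
import random

# A toy head-initial grammar in the spirit of the description above; rules and words
# are placeholders, not the real Alpha-Echo grammars.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["Det", "N"], ["Det", "Adj", "N"]],
    "VP": [["V"], ["V", "NP"], ["V", "NP", "PP"]],
    "PP": [["P", "NP"]],
}
LEXICON = {
    "Det": ["ba", "ki"], "Adj": ["lumo", "sati"], "N": ["doku", "mira"],
    "V": ["penu", "ralo"], "P": ["su", "te"],
}

def generate(symbol="S"):
    """Top-down random walk through the grammar, expanding one rule choice at a time."""
    if symbol in LEXICON:
        return [random.choice(LEXICON[symbol])]
    expansion = random.choice(GRAMMAR[symbol])
    return [word for child in expansion for word in generate(child)]

sentences = {" ".join(generate()) for _ in range(20)}   # the set() removes repetitions
print(sentences)
```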
Training the Monolingual Model
A wide variety of neural network model structures have been developed for use with linguistic data (for an excellent overview of their use with both natural and artificial language, see Baroni, 2019). We trained a long short-term memory (LSTM) model on our generated sentences. During training, this model learns the parameters which define layers in the network model through optimization of a loss function. These layers are essentially maps that take input features and map them to output features. Thus, across the different layers these features or activations represent language at different levels of abstraction. After training, the models are able to generate new text word-by-word for each language they had been trained on.
As Table 2 shows, there are many different layers included in the full model. However, the key layers for our purposes were the word embedding layer and especially the LSTM layer. The word-embedding layer has learnable weights and maps input to 100 output features. At the end of training, each word in the vocabulary was associated with a unique vector of word embedding feature weights, which allows the model to uncover internal regularities in the vocabulary, such as part of speech, or grammatical definiteness. We refer to these features simply as "word embeddings, " and conceptualize them, roughly, as the ability of the model to learn the lexicon of a new language.
The LSTM layer also has learnable weights associated with memory gates and 100 hidden cells; it takes output from previous layers as its own input, and maps it to 100 features. Each hidden cell is associated with a set of 12 learnable parameters, which control how much information about the input is retained or forgotten during training and generation. Once these weights are learned, the features are inputed into the final layers to create probability distributions for the next possible word, given a preceding word sequence. The next word actually generated by the model is the result of sampling from that probability distribution. These hidden cells can be roughly conceptualized as the ability of the model to learn the grammar of a new language, including the dependencies within and across sentences 2 .
During training, the model learns the parameters which define the layers through optimization of a loss function, using stochastic gradient descent (Hochreiter and Schmidhuber, 1997). Each language model saw the training data five times, and the amount of data for each language was kept constant at 200,000 sentences across all the models. The execution platform used was the Deep Learning Toolbox from MATLAB (MATLAB, 2019).
The entire training process took about fifteen minutes on a GPU, or about two hours on a CPU. All training data, code, and output models are available on our OSF archive: https:// osf.io/6dv7p/?view_only=4575499b2daf473fbd6a04ca49213218. Readme files are included to allow the reader to run the code on their own machine, but we also provide trained nets for the users to download and analyse to complement our own analysis below.
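The models themselves were built with MATLAB's Deep Learning Toolbox; the PyTorch sketch below is only a rough analogue of the architecture described (100-dimensional word embeddings, an LSTM layer with 100 hidden cells, and a softmax over the vocabulary), trained on fabricated token data, and is not the authors' code. Retraining the same network on a second corpus would then yield the "bilingual" models discussed next.

```python
import torch
import torch.nn as nn

class WordLSTM(nn.Module):
    """Rough analogue of the described model: embedding -> LSTM -> softmax over the vocabulary."""
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, state=None):
        x = self.embed(tokens)
        h, state = self.lstm(x, state)
        return self.out(h), state          # next-word logits at every position

vocab_size = 400                            # roughly: word roots plus affixes and an EOS token
model = WordLSTM(vocab_size)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# One toy training step on a fake batch of token-id sequences (placeholder data).
batch = torch.randint(0, vocab_size, (8, 12))
logits, _ = model(batch[:, :-1])
loss = loss_fn(logits.reshape(-1, vocab_size), batch[:, 1:].reshape(-1))
loss.backward()
optimizer.step()
optimizer.zero_grad()
print(float(loss))
```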
Evaluating the Monolingual Model
After training was completed, we evaluated the success of language learning by having the model generate 100 sentences and then running those sentences through the Lark automatic parser (https://github.com/lark-parser/lark) to see whether they could be parsed according to the grammar that was used to generate the original training data. On average, our monolingual models were able to produce fully parsable sentences about 90% of the time. The lowest accuracy rate was 81% (Echo), and the highest was 95% (Bravo and Charlie). See Figure 2 for the full set of accuracy rates; monolingual models are the points for which L1 and L2 are the same language.
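Lark exposes a simple parse-or-raise interface, so the accuracy check can be sketched as follows; the toy grammar here mirrors the placeholder generator above, not the actual grammars in the OSF archive.

```python
from lark import Lark
from lark.exceptions import LarkError

# Toy grammar matching the placeholder generator above; the real grammars are on OSF.
grammar = r"""
    s: np vp
    np: DET N | DET ADJ N
    vp: V | V np | V np pp
    pp: P np
    DET: "ba" | "ki"
    ADJ: "lumo" | "sati"
    N: "doku" | "mira"
    V: "penu" | "ralo"
    P: "su" | "te"
    %import common.WS
    %ignore WS
"""
parser = Lark(grammar, start="s")

def parse_accuracy(sentences):
    ok = 0
    for sent in sentences:
        try:
            parser.parse(sent)
            ok += 1
        except LarkError:
            pass
    return ok / len(sentences)

print(parse_accuracy(["ba doku penu ki mira", "penu ba doku"]))   # second sentence fails
```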
Training the Bilingual Model
To create our "bilingual" models, we took our monolingual models, and retrained them on data from each of the other four languages. Thus, our monolingual model trained on 2 We did not include cross-sentence or cross-clausal dependencies in our grammars. However, natural languages have many of them, ranging from switchreference conjunctions in Quechua subordinate clauses (Cohen, 2013), to longdistance cross-clausal agreement in Tsez (Polinsky and Potsdam, 2001) to basic pronoun co-reference in English.
FIGURE 2 | Summary of parsing accuracy for monolingual and bilingual models. Lines connect the results for outputs produced in the same target language. Datapoints where L1 and L2 are the same represent the monolingual models; datapoints where L1 and L2 are not the same represent bilingual models.
Alpha was retrained on data from Bravo, Charlie, Delta, and Echo, producing four bilingual models, each with the same L1 but a different L2. This process was repeated for each monolingual model, resulting in five monolingual models, and twenty bilingual models representing each possible combination of L1 and L2. The parsing accuracy of bilingual models was on average 91%, ranging from a low of 86% (L1-Bravo, L2-Alpha) to a high of 96% (L1-Alpha, L2-Delta; and L1-Charlie, L2-Echo). These accuracy rates are given in Figure 2.
With our trained monolingual and bilingual models in hand, we are now prepared to address the key research question: How does similarity between L1 and L2 across different domains of linguistic structure affect learnability in neural network models?
Output-Oriented Approaches
There are three ways we see for determining how learnable a second language is for our models. The first is to examine the output of the trained models, to determine the percentage of sentences that are grammatically correct. Although we did examine the accuracy of our model outputs using the Lark parser (see above), we do not consider it a useful measure beyond a rough check that learning has occurred. First, this approach scores each sentence as a binary parsing success or failure. In other words, the parser would treat word salad (or indeed, alphabet soup) as exactly the same sort of failure that would result from a misplaced agreement suffix in an otherwise flawless sentence. Yet if we were to try to determine some degree of "partial credit" parsing, to reward correct phrases in ungrammatical sentences, we would need to make certain theory-dependent decisions. A complete verb phrase, for example, can be as simple as a single verb, but it can also include direct object noun phrases and prepositional phrase adjuncts. What should be done in the case of a well-constructed verb phrase that nevertheless contains an ungrammatical direct object embedded within it? And is an ungrammatical noun phrase argument a worse violation than an ungrammatical prepositional phrase adjunct? If so, what about an ungrammatical noun phrase that is the argument of a preposition, but the prepositional phrase is itself merely an adjunct to the verb phrase? These decisions will need to be informed by theoretical assumptions of syntactic and morphological structure, and to the extent that they are theory-dependent, they are not objective measures of learning.
A second approach to assessing learnability is to look at the training time required to learn the second language. The model's progress could be monitored during training, and the learnability could be measured in terms of the amount of time needed to reach a particular success criterion. Yet what should count as the success criterion? The default learning curve in our models tracks the accuracy with which the model predicts the next word in the training data as a function of what it has already seen, but the ability to predict the next word in a training sentence is very different from the ability to generate novel sentences that respect the underlying structural patterns in the training data. In principle the model could be asked to generate sample sentences after each training epoch to track its progress learning those patterns, but then those sentences would need to be scored for accuracy, which brings us back to the problem of partial credit vs. binary parsing as described above.
In principle, these problems are not insurmountable. Yet even if we had well-motivated, theory-independent ways to score the model output for accuracy, these approaches neglect a fundamental strength of using neural networks to study second-language learnability: a strength that goes well beyond the scalability and generalizability of computer simulations. This strength is the ability to examine the internal structure of a model. We cannot open up the heads of bilingual learners and examine the behavior of their specific language-learning neurons as they produce each individual word or sentence in their second language. But with neural networks, we can.
Network-Structure Approach
Our network-structure approach essentially asks how much the underlying structures of the network models themselves must change in order to learn a new language. Further, by asking whether the amount of change varies depending on the types of words or sentences being produced, we can also pinpoint the domain in which a language is more or less learnable. The less a network must change in order to learn a new language, or the less it must change to produce a structure in that new language, the more learnable that language or structure is.
To characterize the network structures, we made each model generate 100 sentences, and recorded the activation of each of the 100 network cells for each word in each sentence. We then calculated each cell's mean activation for each part of speech by averaging across all 100 sentences. Because not all languages shared the same inventory of morphological prefixes and suffixes, we extracted only the root parts of speech that were constant across all languages-namely, adjectives, conjunctions, determiners, nouns, prepositions, verbs, and End Of Sentence, which we treated as a word type of its own. Finally, for each cell in each part of speech, we subtracted its mean activation in L2 from its mean activation in L1, and took the absolute value of that difference. If a cell's mean activation changed greatly in the process of learning L2, then the absolute difference will be high; by contrast, if it retained roughly the same activation pattern in L1 and L2, then the absolute difference in mean activation across the 100 sentences will be minimal.
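A sketch of this bookkeeping is shown below; the activations and part-of-speech labels are random placeholders and the column names are invented, but the grouping and the absolute-difference step follow the description above.

```python
import numpy as np
import pandas as pd

# Placeholder activations: one row per word token from 100 generated sentences,
# with the part of speech of each word and the activation of each of 100 cells.
rng = np.random.default_rng(1)
pos_tags = rng.choice(["NOUN", "VERB", "ADJ", "DET", "PREP", "CONJ", "EOS"], size=1000)

def mean_activation_by_pos(pos, activations):
    df = pd.DataFrame(activations, columns=[f"cell_{i}" for i in range(activations.shape[1])])
    df["pos"] = pos
    return df.groupby("pos").mean()        # mean activation of each cell, per part of speech

act_L1 = mean_activation_by_pos(pos_tags, rng.normal(size=(1000, 100)))   # monolingual model
act_L2 = mean_activation_by_pos(pos_tags, rng.normal(size=(1000, 100)))   # bilingual model

# Absolute change in each cell's mean activation, per part of speech.
activity_change = (act_L2 - act_L1).abs()
print(activity_change.mean(axis=1))        # average change across cells, per part of speech
```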
RESULTS
We predicted that the model would need to change less to learn L2s that were similar to L1, than L2s that were different from L1. For each language pair, we coded the amount of overlap as 0, 1, or 2. Thus, our L1-Alpha, L2-Charlie model had an overlap of 0, as did our L1-Charlie, L2-Alpha model. The two models pairing Alpha and Bravo had an overlap of 1, and Alpha and Delta had an overlap of 2 (see Table 1).
In addition to coding language pairs by degrees of overlap, we also coded each them for the domain of overlap, to explore whether that affected the amount of cell activity change produced by learning a second language. For example, the language pairs Alpha and Bravo, along with Charlie and Delta, both have an overlap degree of 1, but Alpha and Bravo overlap in vocabulary, while Charlie and Delta overlap in morphology (see Table 1).
We analyzed degree and domain of overlap separately, because domain of overlap perfectly predicts degree of overlap.
Degree of Overlap
Degree of overlap refers to the number of linguistic domains (vocabulary, morphology, syntax) that are shared between two languages (see Table 1). We analyzed the effect of degree of overlap on mean activity change with mixed effects linear regression, using the R programming environment (version 3.6.1), with the package lme4 (version 1.1-21). We set the absolute mean activity difference between L1 and L2 as the dependent variable, and for independent variables we included L1, L2, part of speech, and overlap degree. Random effects included random intercepts for each cell 3 .
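The analysis was run in R with lme4; the statsmodels sketch below is only a rough Python analogue of that model specification, fitted to fabricated long-format data, and is not the authors' analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Fabricated long-format data: one row per (cell, part of speech, L1/L2 pairing) observation.
rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "activity_change": rng.gamma(2.0, 0.1, size=n),
    "overlap_degree": rng.integers(0, 3, size=n),
    "pos": rng.choice(["NOUN", "VERB", "ADJ", "DET", "PREP", "CONJ", "EOS"], size=n),
    "L1": rng.choice(["Alpha", "Bravo", "Charlie", "Delta", "Echo"], size=n),
    "L2": rng.choice(["Alpha", "Bravo", "Charlie", "Delta", "Echo"], size=n),
    "cell": rng.integers(0, 100, size=n),
})

# Fixed effects for overlap degree, part of speech, L1, and L2; random intercept per cell,
# loosely mirroring the lme4 model described in the text.
model = smf.mixedlm("activity_change ~ overlap_degree + pos + L1 + L2",
                    data=df, groups=df["cell"])
print(model.fit().summary())
```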
Not every language participated in pairings for all three degrees of overlap, because Bravo, Delta, and Echo did not have any partners in which there was no overlap. As a result, L1 and L2 would have produced a rank-deficient contrast matrix if we attempted to put an interaction between these terms and overlap degree in the model. Nevertheless, we can see apparent language-specific variation in the effects of overlap in the top two panels of Figure 3. Figure 3 shows the absolute mean activity change between L1 and L2, averaged across all cells for the twenty possible L1/L2 language pairings. The x-axis indicates the degree of overlap between the two languages, which can range from 0 (no overlap; maximally dissimilar) to 2 (two domains of overlap; maximally similar). In the top panel, the twenty pairings are grouped according to the starting point: the L1 that the model learned before it was retrained on an L2. In the middle panel, they are grouped according to the L2; and in the bottom panel they are grouped according to the particular part of speech.
In both of the top two panels, we can see that Bravo is the one language that bucks the pattern of reduced activity change with higher degrees of overlap. Learners with an L1 of Bravo (top panel) increase their activity change when learning L2s of two degrees of similarity compared to L2s with one degree of similarity; and when learners are learning Bravo as an L2, they show no difference in activity change regardless of whether Bravo overlaps with their L1 by one or two degrees. Nevertheless, all the other languages show a decrease in cell activity change as L1/L2 similarity increases.
The lowest panel in Figure 3 collapses across L1s and L2s, and instead shows the changes in cell activity that are necessary to produce each part of speech in the L2. Interestingly, these grammatical distinctions show distinct clusters. Nouns and adjectives do not require much change in cell activity, while verbs condition a bit more. The function words (conjunctions, determiners, and prepositions), on the other hand, cluster together, appreciably above the content words. The largest change in cell activity is associated with the end of the sentence: cells must learn activation patterns for knowing when to stop an utterance that are quite different from those needed to produce the words within that utterance.
All of these observations emerged in the regression model.
Domain of Overlap
To analyse the effects of domains of overlap (e.g., syntax vs. morphology for language pairs with one degree of overlap, or syntax/morphology vs. vocab/morphology for language pairs with two degrees of overlap), we built a mixed effects linear regression model, with the same software as in our degree analysis. Again we set the absolute mean activity change as the dependent variable, and used part of speech and overlap domain as independent variables, and cell identifier as random intercepts.
Two key patterns that emerged in the model are visible in
DISCUSSION
This project used artificial language learners and artificial languages to test a method of investigating L2 acquisition that has considerable potential for expansion. By building artificial languages, we were able to sidestep the problem of defining how similar two languages are, because we could hard-code into the artificial languages a known degree of overlap. By measuring the changes in neural networks that had been trained on these languages, we were able to estimate the learnability of a language by focusing not on output, but on changes within the generative machine itself. Our results supported our predictions: More degrees of overlap between an L1 and L2 led to less change in network activity. In other words, more similar languages were easier to learn.
Key Domains and Structures for Learning
Our results are particularly intriguing because they offer insights into which components of linguistic similarity, and which linguistic structures, seem to require the most learning during second language acquisition. We observed that a shared morphological system between L1 and L2 in particular seemed to result in easier learning, while shared syntactic structures seemed to make very little difference. Further, function words (conjunctions, determiners, and prepositions) seemed harder to learn than content words (adjectives and nouns, and, to a lesser extent, verbs).
Both of these patterns may reflect the way in which linguistic dependencies are encoded in these artificial languages. Because our syntactic grammars were fairly simple, dependencies such as subject-verb agreement, or number concord in noun phrases, were largely expressed through morphological affixes. Syntactic structures were actually quite similar: all sentences needed subjects and verbs; all verb phrases could be transitive or intransitive, with optional prepositional phrase adjuncts; and all sentences could have one or two clauses, with the latter combining the clauses through the use of a conjunction. Although the linear order in which these structures were combined varied, the nature of the dependencies was quite similar. By contrast, the morphological systems could vary widely, with different features (tense, number, definiteness) expressed or ignored depending on which morphology a language had. This fundamental similarity across syntactic systems, compared to a wider degree of variability in morphological systems, may explain why shared morphology proved more useful in learning a new language than shared syntax.
To the extent that the syntax of these languages did encode dependencies, however, it largely was encoded in the function words. Determiners were obligatory in noun phrases; conjunctions were required in two-clause sentences; and prepositions required noun phrase objects. Adjectives, by contrast, were entirely optional; and nouns were often optional in verb phrases, because verbs could be either transitive or intransitive. This could explain why so much more of the network activity change emerged in the production of function words than of content words, as shown in the bottom panel of Figure 3.
If this account is accurate, it can explain why, among content words, verbs were harder to learn than nouns and adjectives. Verbs were often the locus of agreement morphology, as well as tense inflection; and unlike adjectives and nouns, verbs' appearance was most restricted: in each clause they were obligatory, and also limited to exactly one appearance per clause. Yet this limitation was also shared across all the languages. As a result, verbs required more learning than nouns and adjectives, but less than the more structurally constrained function words.
Potential for Generalization
Because deep learning packages are sophisticated enough to learn natural languages as well as artificial languages (Sutskever et al., 2011;Graves, 2013, for an excellent recent overview, see Baroni, 2019), we believe that this approach can be generalized to natural languages, and allow researchers in language acquisition to make testable predictions about how learnable second languages might be for speakers of different first languages. Although there is a robust pedagogical tradition for certain language pairs (e.g., Spanish for English speakers, or French for German speakers), these resources are limited to dominant language groups which provide a large population of L1 learners, or which are popular L2 target languages. For such learners, existing pedagogical approaches are nuanced and mature. Yet for speakers of Finnish, who wish to learn Malayalam or Quechua, there may be very few resources that are targeted to their existing knowledge.
Naturally, it will be necessary to apply these methods to natural languages to see whether the patterns that we found in a sterile simulation generalize in any meaningful way. Our current analysis, for example, did not consider the role of phonological similarity, although research in bilingualism has shown that the phonological structures of L1 and L2 can interact in complex ways. For example, substantial similarity between L1 and L2 phonologies may actually interfere with the development of distinct L2 phoneme categories (Flege, 1995, 2007). This pattern may well pose a challenge for our results, which show generally facilitative effects of similarity. On the other hand, we did not ask our models to learn the phonology of the languages we constructed, and so we cannot know whether they would replicate natural language findings of inhibitory effects of phonological similarity, or mis-predict facilitatory effects. We also did not consider the role of semantics in our artificial languages, which rendered it impossible to explore or model the effects of cognates (words with similar forms and meanings in two languages) or false friends (words with similar forms and dissimilar meanings) as a domain of language similarity. Yet these types of lexico-semantic relationships have been shown to affect both language learning (Otwinowska-Kasztelanic, 2009; Otwinowska and Szewczyk, 2019) and bilingual processing (van Hell and Dijkstra, 2002; Duyck et al., 2007; Brenders et al., 2011). Further, these effects interact in complex ways not only with properties of the broader linguistic context, but also with individual cognitive properties of the speaker (Schwartz and Kroll, 2006; van Hell and Tanner, 2012; Dijkstra et al., 2015). Our models could in principle be adjusted to reflect different speaker properties (e.g., more or fewer hidden cells or word embeddings could perhaps model differences in working memory capacity), but we did not manipulate those properties here. All models had identical internal structures, and effectively modeled the identical person in multiple learning situations.
Our findings should not be overstated, but we do believe they serve as a persuasive proof of concept of our methods: Given a known set of underlying relationships between L1 and L2, our modeling procedure can uncover them in ways that are linguistically meaningful. Yet interpreting the output of this approach when it is applied to natural languages will not be straightforward, and will need to be guided by the already robust psycholinguistic literature on bilingual language processing and acquisition. Nevertheless, we are optimistic. Our approach, while still in its early stages, has the potential to democratize language learning, by predicting not only which languages are easier to learn for which speakers, but also of identifying which domains of grammar may be most challenging.
DATA AVAILABILITY STATEMENT
The grammars, training data, and code used in this study can be found in the Open Science Framework repository here: https://osf.io/6dv7p/?view_only=4575499b2daf473fbd6a04ca49213218.
van Hell, J. G., and Dijkstra, T. (2002). Foreign language knowledge can influence native language performance in exclusively native contexts.
Conflict of Interest:
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Copyright © 2020 Cohen, Higham and Nabi. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. | 8,146.6 | 2020-06-24T00:00:00.000 | [
"Computer Science",
"Linguistics"
] |
Bifurcation Analysis and Chaos Control in Genesio System with Delayed Feedback
We investigate the local Hopf bifurcation in the Genesio system with delayed feedback control. We choose the delay as the bifurcation parameter, and the occurrence of local Hopf bifurcations is verified. By using the normal form theory and the center manifold theorem, we obtain explicit formulae for determining the stability and direction of the bifurcated periodic solutions. Numerical simulations indicate that delayed feedback control plays an effective role in the control of chaos.
Introduction
Since the pioneering work of Lorenz [1], much attention has been paid to the study of chaos. Many famous chaotic systems, such as the Chen system, the Chua circuit, and the Rössler system, have been extensively studied over the past decades. It is well known that chaos in many cases produces undesirable effects, and therefore controlling chaos has been a hot topic in recent years. There are many methods for controlling chaos, among which the use of time-delayed feedback forces serves as a good and simple one.
In order to gain further insight into the control of chaos via time-delayed feedback, in this paper we investigate the dynamical behavior of the Genesio system with time-delayed controlling forces. The Genesio system, which was proposed by Genesio and Tesi [2], is described by the following simple three-dimensional autonomous system with only one quadratic nonlinear term:
ẋ = y,   ẏ = z,   ż = ax + by + cz + x²,   (1.1)
where a, b, c < 0 are parameters. System (1.1) exhibits chaotic behavior when a = −6, b = −2.92, c = −1.2, as illustrated in Figure 1. In recent years, many researchers have studied this system from different points of view: Park et al. [3-5] investigated synchronization of the Genesio chaotic system via a backstepping approach, an LMI optimization approach, and adaptive controller design; Wu et al. [6] investigated synchronization between the Chen system and the Genesio system; and Chen and Han [7] investigated control and synchronization of the Genesio chaotic system via nonlinear feedback control. Inspired by the control of chaos via a time-delayed feedback force [8], and following the idea of Pyragas, we add a Pyragas-type delayed feedback term to system (1.1) and denote the controlled system by (1.2), where τ > 0 is the delay and M ∈ R is the feedback gain.
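As an illustration of the kind of numerical experiment reported later, the sketch below integrates the Genesio system with a Pyragas-type feedback term, using a simple fixed-step scheme and a circular history buffer for the delay. The feedback gain, the delay value, and in particular the placement of the feedback term in the third equation are assumptions made for this sketch, not necessarily the form of system (1.2).

```python
import numpy as np

# Genesio system with an assumed Pyragas-type term M*(z(t - tau) - z(t)) in the third equation.
a, b, c = -6.0, -2.92, -1.2
M, tau = 0.3, 1.0                      # placeholder gain and delay
dt, steps = 1e-3, 200_000
delay_steps = int(tau / dt)

state = np.array([0.1, 0.1, 0.1])
z_history = np.full(delay_steps, state[2])   # constant history on [-tau, 0]

trajectory = np.empty((steps, 3))
for k in range(steps):
    x, y, z = state
    z_delayed = z_history[k % delay_steps]   # value of z roughly tau time units ago
    dx = y
    dy = z
    dz = a * x + b * y + c * z + x ** 2 + M * (z_delayed - z)   # assumed feedback placement
    state = state + dt * np.array([dx, dy, dz])                  # explicit Euler step
    z_history[k % delay_steps] = state[2]
    trajectory[k] = state

print(trajectory[-1])
```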
Bifurcation Analysis of Genesio System with Delayed Feedback Force
It is easy to see that system (1.1) has two equilibria, E_0 = (0, 0, 0) and E_1 = (−a, 0, 0), which are also equilibria of system (1.2). The associated characteristic equation of system (1.2) at E_0 is denoted (2.1). As the analysis for E_1 is similar, we only analyze the characteristic equation at E_0 here. First, we introduce the following result due to Ruan and Wei [10].
Lemma 2.1 (Ruan and Wei [10]). Consider the exponential polynomial
P(λ, e^{−λτ_1}, …, e^{−λτ_m}) = λ^n + p_1^{(0)} λ^{n−1} + … + p_n^{(0)} + [p_1^{(1)} λ^{n−1} + … + p_n^{(1)}] e^{−λτ_1} + … + [p_1^{(m)} λ^{n−1} + … + p_n^{(m)}] e^{−λτ_m},
where τ_i ≥ 0 (i = 1, …, m) and the p_j^{(i)} are constants. As (τ_1, τ_2, …, τ_m) vary, the sum of the orders of the zeros of P(λ, e^{−λτ_1}, …, e^{−λτ_m}) in the open right half plane can change only if a zero appears on or crosses the imaginary axis.
Following the detailed analysis in [8], we have the following results.
Lemma 2.2. (i) If Δ ≤ 0, then the roots of (2.1) with positive real parts for τ > 0 have the same sum of orders as those of (2.1) for τ = 0. (ii) If Δ > 0, then the roots of (2.1) with positive real parts for τ ∈ (0, τ_0) have the same sum of orders as those of (2.1) for τ = 0.
Proof. Substituting λ(τ) into (2.1) and taking the derivative with respect to τ, the claims follow by a direct calculation.
Theorem 2.4. (i) If Δ ≤ 0, then (2.1) has two roots with positive real parts for all τ > 0. (ii) If Δ > 0, then (2.1) has two roots with positive real parts for τ in the ranges determined by the critical values τ_k^{(j)}, and system (1.2) exhibits a Hopf bifurcation at the equilibrium E_0 for τ = τ_k^{(j)}.
Some Properties of the Hopf Bifurcation
In this section, we apply the normal form method and the center manifold theorem developed by Hassard et al. [11] to study some properties of the bifurcated periodic solutions. Without loss of generality, let (x*, y*, z*) be the equilibrium point of system (1.2). For the sake of convenience, we rescale the time variable by t → t/τ and let τ = τ_k + μ, x_1 = x − x*, x_2 = y − y*, x_3 = z − z*; then system (1.2) can be rewritten as a functional differential equation in which x(t) = (x_1(t), x_2(t), x_3(t))^T ∈ R³ and, for φ = (φ_1, φ_2, φ_3)^T ∈ C, the operators L_μ and f are given accordingly. By the Riesz representation theorem, there exists a function η(θ, μ) of bounded variation for θ ∈ [−1, 0] such that L_μ φ = ∫ dη(θ, μ) φ(θ); in fact, this representation holds with a suitable choice of η built from F(μ, φ) and the Dirac delta concentrated at θ = 0.
Then (1.2) can be rewritten in the abstract form (3.7). For ψ ∈ C[0, 1], we consider the adjoint operator A* of A, defined in (3.8). For φ ∈ C[−1, 0] and ψ ∈ C[0, 1], we define the bilinear inner product form in the usual way. Suppose that q(θ) = (1, α, β)^T e^{iθω_k τ_k}, −1 ≤ θ ≤ 0, is the eigenvector of A(0) with respect to iω_k τ_k, so that A(0) q(θ) = iω_k τ_k q(θ); by the definition of A and by (3.2), (3.4), and (3.5), the components α and β can be computed. Similarly, let q*(s) = B (1, α*, β*) e^{isω_k τ_k}, 0 ≤ s ≤ 1, be the eigenvector of A* with respect to −iω_k τ_k; by the definition of A* and by (3.2), (3.4), and (3.5), its components and the normalization constant B can be obtained, giving (3.13).
We rewrite (3.13) in the form (3.23) and therefore obtain (3.24). Comparing the corresponding coefficients gives (3.26). Substituting the above equation into (3.21) and comparing the corresponding coefficients yields
H_20(θ) = −g_20 q(θ) − ḡ_02 q̄(θ).  (3.27)
By (3.21), (3.28), and the definition of A, we have
ẇ_20(θ) = 2iω_k τ_k w_20(θ) + g_20 q(θ) + ḡ_02 q̄(θ).  (3.30)
Similarly we obtain (3.34). By (3.21) and the definition of A, substituting (3.30) into (3.36), and following a similar analysis, we also obtain (3.44). Thus the quantities needed to classify the bifurcation can be computed; they are collected in (3.45).
It is well known [11] that μ_2 determines the direction of the Hopf bifurcation: if μ_2 > 0 (μ_2 < 0), then the Hopf bifurcation is supercritical (subcritical) and the bifurcated periodic solution exists for τ > τ_k (τ < τ_k); χ_2 determines the period of the bifurcated periodic solution: if χ_2 > 0 (χ_2 < 0), the period increases (decreases); and β_2 determines the stability of the bifurcated periodic solution: if β_2 < 0 (β_2 > 0), the bifurcated periodic solution is stable (unstable).
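Once g_20, g_11, g_02, and g_21 have been extracted from the expansion above, the quantities in (3.45) follow from the standard formulas of Hassard, Kazarinoff, and Wan. The sketch below is a generic illustration of those formulas, not code from the paper; the input values are placeholders, and χ_2 corresponds to the period correction T_2 in this notation.

```python
# Generic sketch of the standard Hassard-Kazarinoff-Wan formulas used to
# classify a Hopf bifurcation once g20, g11, g02, g21, the critical frequency
# omega_k, the critical delay tau_k, and lambda'(tau_k) are known.
def hopf_quantities(g20, g11, g02, g21, omega_k, tau_k, dlambda):
    c1 = (1j / (2 * omega_k * tau_k)) * (g20 * g11 - 2 * abs(g11) ** 2
                                         - abs(g02) ** 2 / 3) + g21 / 2
    mu2 = -c1.real / dlambda.real        # mu2 > 0: supercritical bifurcation
    beta2 = 2 * c1.real                  # beta2 < 0: stable periodic orbit
    T2 = -(c1.imag + mu2 * dlambda.imag) / (omega_k * tau_k)  # period change
    return mu2, beta2, T2

# Placeholder inputs for illustration only, not values from this paper.
mu2, beta2, T2 = hopf_quantities(g20=0.10 - 0.20j, g11=0.05 + 0.10j,
                                 g02=0.02 - 0.03j, g21=-0.40 + 0.10j,
                                 omega_k=1.9, tau_k=0.632,
                                 dlambda=0.3 + 0.5j)
print(f"mu2 = {mu2:.4f}, beta2 = {beta2:.4f}, T2 = {T2:.4f}")
```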
Numerical Simulations
In this section, we apply the results of the previous sections to the Genesio chaotic system with the aim of realizing the control of chaos. We consider the controlled system (4.1). Obviously, system (4.1) has two equilibria, E_0 = (0, 0, 0) and E_1 = (6, 0, 0). In what follows we analyze only the case of E_0; the analysis for E_1 is similar. The corresponding characteristic equation of system (4.1) at E_0 is (4.2). By Theorem 2.4, when Δ = 36.9808 − 14.976M ≤ 0, that is, M ≥ 2.46934, equation (4.2) has two roots with positive real parts for all τ > 0; in order to realize the control of chaos we therefore consider M < 2.46934 and take M = −8 as a special case, for which τ_0 ≈ 0.632012. Therefore, using the results of the previous sections, we have the following conclusions: when the delay τ = 0.1 < 0.632012, the attractor still exists (see Figure 2); when the delay τ = 0.632, a Hopf bifurcation occurs (see Figure 3), and since μ_2 > 0 and β_2 < 0, the bifurcating periodic solutions are orbitally asymptotically stable; when the delay τ = 1.2 ∈ (0.632012, 1.85965), the steady state E_0 is locally stable (see Figure 4); when the delay τ = 3.2 > 1.85965, the steady state E_0 is unstable (see Figure 5). The numerical results indicate that when the delay lies in a suitable interval, the chaotic behavior indeed disappears; therefore, the parameter τ works well in the control of chaos.
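The exact form of the controlled system is not reproduced above, so the sketch below assumes a Pyragas-type feedback M·(x(t − τ) − x(t)) added to the third equation; the placement of the feedback, the fixed-step Euler scheme with a circular history buffer, and the printed amplitude diagnostic are all illustrative assumptions rather than details taken from the paper. With M = −8, comparing the long-run amplitude of x for delays on either side of the critical values quoted above gives a crude numerical counterpart of Figures 2–5.

```python
# Minimal delay-simulation sketch for a Pyragas-controlled Genesio system.
# Assumptions (not from the paper): feedback M*(x(t - tau) - x(t)) enters the
# third equation; a fixed-step Euler scheme with a circular history buffer
# stands in for a proper DDE solver.
import numpy as np

a, b, c, M = -6.0, -2.92, -1.2, -8.0

def simulate(tau, T=150.0, dt=1e-3, x0=(0.2, 0.1, 0.1)):
    n_delay = max(1, int(round(tau / dt)))
    hist = np.tile(np.array(x0, dtype=float), (n_delay, 1))  # constant history
    s = np.array(x0, dtype=float)
    n_steps = int(T / dt)
    out = np.empty((n_steps, 3))
    for i in range(n_steps):
        x_del = hist[i % n_delay, 0]          # x(t - tau)
        x, y, z = s
        ds = np.array([y, z,
                       a * x + b * y + c * z + x**2 + M * (x_del - x)])
        hist[i % n_delay] = s                 # record x(t) before stepping
        s = s + dt * ds
        out[i] = s
    return out

for tau in (0.1, 1.2, 3.2):                   # below, inside, above the window
    tail = simulate(tau)[-20000:, 0]
    print(f"tau = {tau}: late-time x amplitude = {tail.max() - tail.min():.3f}")
```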
Concluding Remarks
In this paper we have introduced time-delayed feedback as a simple and powerful controlling force to realize the control of chaos in the Genesio system. Regarding the delay as the parameter, we have investigated the dynamics of the Genesio system with delayed feedback. To show the effectiveness of the theoretical analysis, numerical simulations have been presented. The numerical results indicate that the delay works well in the control of chaos.
| 2,079 | 2012-02-08T00:00:00.000 | [
"Engineering",
"Mathematics",
"Physics"
] |
Adeno‐associated virus (AAV)-based gene therapy for glioblastoma
Glioblastoma (GBM) is the most common and malignant Grade IV primary craniocerebral tumor, caused by glial cell carcinogenesis, with an extremely poor median survival of 12–18 months. The current standard treatments for GBM, including surgical resection followed by chemotherapy and radiotherapy, fail to substantially prolong survival outcomes. Adeno-associated virus (AAV)-mediated gene therapy has recently attracted considerable interest because of its relatively low cytotoxicity, low immunogenicity, broad tissue tropism, and long-term stable transgene expression. Furthermore, a range of gene therapy trials using AAV as the vehicle is being investigated to thwart deadly GBM in mouse models. At present, AAV is delivered to the brain by local injection, intracerebroventricular (ICV) injection, or systematic (intravenous) injection to treat experimental GBM mouse models. In this review, we summarize the experimental trials of AAV-based gene therapy as a GBM treatment and compare the advantages and disadvantages of the different AAV injection approaches. We also systematically introduce the prospect of the systematic injection of AAV as an approach for AAV-based gene therapy for GBM.
Introduction
Glioblastoma (GBM) is a tumor located in the central nervous system (CNS) that forms in the supportive tissue of the brain [1]. Human GBM is highly invasive and spreads rapidly to nearby healthy brain tissues before symptoms occur [2]. GBM has been reported to be the most lethal intracranial tumor because of its high resistance to conventional radiotherapy and chemotherapy [3]. Despite advances in surgery, the complex genetic heterogeneity and insidious infiltration of GBM cells result in almost inevitable recurrence with less than 5% 5-year survival rate [4]. Another major obstacle in GBM treatment is the blood-brain barrier (BBB), which limits the diffusion of most small-molecule therapeutic agents and all large molecules into the brain parenchyma and blocks the drug treatment of GBM [5]. Thus, developing effective therapeutic strategies that provide improved clinical therapeutic efficiency and increased survival rate among patients with GBM is urgently needed.
Gene therapy refers to the introduction of foreign genes into target cells to correct or compensate for diseases caused by defective or abnormal genes to achieve therapeutic purposes; this strategy is promising for many diseases, including cancer, neurodegenerative, and cardiovascular diseases [6,7]. More than 2000 clinical trials of gene therapy have been conducted, and most of the vectors have been proven effective and safe [8]. Current studies indicate that approximately 64% of the clinical trials of gene therapy were conducted to treat cancer diseases, and the most common strategy is the delivery of tumor growth-inhibiting or tumor-killing genes [9]. RNA interference has been used in gene therapy to inhibit tumorigenesis and proliferation [10]. In addition, suicide gene [11], oncolytic virus [12], and immunomodulatory gene [13] have widely been applied in cancer gene therapy. The key to gene therapy is the use of safe and effective gene delivery vectors, such as viral and non-viral vectors. Fortunately, a variety of viral vectors including adenovirus [14], herpes simplex virus [15], and adenoassociated virus (AAV) [16] have been widely applied in the treatment of clinical and experimental cancer disease models. Among them, AAV, as an important viral vector, exerts a strong potential in the treatment of cancer diseases [17].
AAV vectors are promising in gene therapy for their stable, efficient, and non-cytotoxic gene delivery to transduce a great number of tissues of different mammalian species, including the CNS, and are one of the most commonly used viral vectors in gene therapy [18,19]. Currently, AAV has been used as a vector for gene therapy in multiple clinical trials (more than 100) to target lung, liver, eye, brain, and muscle and has achieved great success in blindness and hemophilia diseases [20]. AAV1 vector-encoded lipoprotein lipase became the first gene therapy product (Glybera) approved to treat lipoprotein lipase deficiency by the European Union in 2012 [21]. Five years later, another AAV-mediated gene therapy drug (Luxturna) was subsequently approved for marketing in the U.S. [22]. Just last year, AAV9-based gene therapy (Zolgensma) has also been marketed to treat spinal muscular atrophy [23]. These development greatly inspired researchers to further explore the function of AAV as a gene therapy vector. AAV-mediated gene therapy strategies include gene replacement, gene silencing, and gene editing [17]. Recently, AAVs that deliver therapeutic agents have been utilized for the treatment of experimental GBM mice model and remarkably inhibited the growth of GBM cells and prolongs the survival rate of GBM mice [24]. Due to the presence of BBB, AAV that deliver therapeutic agents for the treatment of experimental GBM model are administered by intracranial local injection, which indeed relieves non-invasive experimental GBM in mice model [25,26]. However, intracranial injection entails surgical risks and clinical costs and makes the scope of treatment relatively limited [27]. Human GBM cells are highly invasive and can migrate along blood vessels to areas of the brain away from the tumor bulk. This factor poses a big challenge for intracranial injection [28]. Researchers have also tried intracerebroventricular (ICV) injection to deliver AAV vectors to treat GBM. ICV injection can solve the relatively limited diffusion of AAV vectors in local injection to a certain extent and improves the therapeutic effect on invasive experimental GBM mouse model [29]. Instability and inevitable invasiveness are the drawbacks of ICV injection [30]. The development of BBB-crossing AAV make the systematic injection of AAV possible to treat GBM [31]. Systematic delivery, also called intravenous injection, can achieve widespread gene delivery and minimize invasive surgery; thus, this approach would be ideal for treating CNS diseases, including GBM [32,33]. However, systematic injection in AAV-based GBM gene therapy also has some problems, including the low efficiency of AAV crossing the BBB, pre-existing AAV-neutralizing antibodies in the body, peripheral toxicity, and inability to target specific cells [34][35][36][37].
In this review, we systematically introduced the prospects of AAV-based gene therapy for GBM and compared the advantages and disadvantages of different AAV injection methods. Most importantly, we will focus on the feasibility of the systematic injection of AAV for the treatment of GBM and the challenge faced by systematic injection.
AAV structure and composition
AAV was accidentally found in the 1960s during a laboratory preparation of adenovirus and later found in human tissues [38]. AAV does not cause any human diseases, and its life cycle is connected with a helper virus (such as adenovirus and herpes simplex virus). AAV cannot replicate independently, and its replication and cytolytic functions can only be performed under the presence of helper viruses [39,40]. AAV does not integrate with the host's genome and can stably express transgenes for a long period. In addition, AAV is widespread in many species, including human and non-human primates, and is highly infectious to a variety of tissue cells in vivo with non-pathogenic quality; thus, AAV has become the star vector for gene therapy [41,42].
AAV is a single-stranded linear DNA-deficient virus with a genomic DNA of less than 5 kb, and its structure is icosahedral non-enveloped particle. AAV is composed of one single-stranded DNA with inverse terminal repeat (ITR) sequence and two open reading frames Rep and Cap at both ends. ITRs are symmetrical repeats that play important roles in the structure and function of AAV. The Rep gene comprises four overlapping genes Rep78, Rep68, Rep52, and Rep40 and can encode the Rep protein required for AAV replication, package, and genomic integration. Cap gene is composed of overlapping amino acid sequences and encodes the capsid protein, including VP1, VP2, and VP3 with a ratio of 1:1:10 (VP1:VP2:VP3). These three interact with each other to form a symmetrical icosahedron structure, which acts as a vehicle for gene delivery [43,44].
AAV-based cancer gene therapy
AAV-based gene therapy has been applied in a variety of preclinical and clinical trials to date and has shown a strong safety profile and trustworthy therapeutic effects [16]. In recent years, AAV has shown great value in the treatment of tumor diseases. Two clinical trials of AAVbased cancer gene therapy have been reported. One is the single injection of carcinoembryonic antigen (CEA)specific cytotoxic T lymphocyte, which is activated by AAV2-CEA-transduced dendritic cells, to treat patients with advanced gastric cancer (ClinicalTrials.gov Identifier: NCT02496273), and the other is AAV2-hAQP1 applied in patients with squamous cell head and neck cancer (ClinicalTrials.gov Identifier: NCT02602249). In the treatment of cancer diseases, AAV can transduce a large number of cancer cells and cancer stromal cells and stably express cancer therapeutic genes (suicide gene, immunostimulatory gene, cytotoxic gene, small interference (siRNA) and anti-angiogenesis gene) to inhibit cancer formation and progression [45,46]. The biggest problem with AAV-based cancer gene therapy is how to make AAV more specifically transduce to the cancer region [47]. Hence, a variety of rational designs of capsid have been engineered for cancer-specific transduction. Aminopeptidase N (CD13) is highly expressed in tumor tissues. Thus, Grifman et al. engineered AAV2 capsid by inserting an NGR peptide motif, which made AAV2 deliver therapeutic agents more efficiently and specifically to tumor cells [48]. Integrin is highly expressed on cancer vessels and cancer tissues and is used as an indicator of poor cancer prognosis. A study modified the AAV2 capsid by introducing a 4C-RGD peptide, which could efficiently combine αvβ3 and αvβ5 integrins. This modification promotes AAV2-mediated gene delivery to integrin-positive cancer cells in vitro and in vivo [49]. In addition, another study fused designed ankyrin repeat proteins to AAV2 capsid VP2 to target the cancer-associated receptor human epidermal growth factor receptor 2 (HER2)/neu. Her2-AAV selectively and highly transduces Her2-positive tumor cells and weakly transduces other cells, which greatly reduces its toxicity to other normal tissues [50]. AAV5 has also been engineered for cancerspecific transduction. Lee et al. engineered AAV5 with integrin-homing peptides, sialyl Lewis X and tenacin C, which are highly expressed in cancer cells [51]. Cheng et al. mutated tyrosine residues on AAV3 to phenylalanine, which increased the transduction capacity to hepatocellular carcinoma cells [52]. AAV capsid engineering promotes the effect of cancer cell-specific transduction to more effectively deliver therapeutic agents to the tumor site and greatly improve the treatment effect of AAVbased cancer therapy. The specific transduction of AAV is particularly important in AAV-based GBM gene therapy. How to make AAV specifically transduce to CNS regions and greatly reduce the peripheral toxicity of therapeutic genes especially in systematic injection approach are the key steps in AAV-based GBM gene therapy.
AAV-based experimental trials on GBM mice model
AAV has been used to treat experimental GBM model for decades because of their stable and persistent expression of anti-tumor agents in transduced cells [53]. After the first discovery that AAV-encoded tumor suppressor genes could effectively inhibit the growth of GBM cell lines in vitro, AAV emerged as an effective delivery tool for the treatment of experimental GBM model [54]. Previously, AAV-based GBM therapy was administered by local injection because of the BBB, which blocks the path of AAV to the GBM [55]. Researchers have also tried the ICV route to deliver AAV directly into the cerebrospinal fluid to further penetrate into the brain parenchyma to treat experimental GBM mouse models and have achieved certain success [56]. The recent discovery of BBB-crossing AAV introduced a new approach, namely, the systematic injection of AAV, to fight GBM. Systematic injection seems a better treatment approach than local or ICV injection because of its non-invasiveness and broad transduction [57] (Fig. 1). AAV-mediated experimental gene therapy against GBM utilizes a variety of therapeutic strategies, such as tumor suppression and the use of anti-tumor genes, including anti-angiogenesis genes, cytotoxic or suicide genes, and immunostimulatory genes [58]. Next, we will systematically summarize the progress of AAV-based GBM research in several in vivo delivery routes and in vitro findings (Table 1).
Anti-GBM effect of AAV in vitro
The hypoxia-regulated AAV was reportedly first used to kill GBM cells in vitro in 2001. The authors constructed a hypoxia-regulated AAV encoding the suicide gene Bax for the hypoxic GBM microenvironment. Their results showed that Bax was abundantly expressed under hypoxic conditions after AAV transduction and promoted the death of GBM cells in vitro [54]. Tumor necrosis factor-related apoptosis-inducing ligand (TRAIL), which induces tumor cell apoptosis but is less toxic to normal tissues, has been used in the treatment of various tumor diseases. Shawn et al. developed AAV-soluble TRAIL (sTRAIL), which could transduce GBM cells, promote the killing effect on GBM cells, and increase pro-apoptotic protein levels in GBM cells in vitro [59].
Anti-GBM effect of AAV through intratumoral injection
Fig. 1 Different injection approaches of therapeutic AAV to treat GBM. Intratumoral injection was a common way to deliver therapeutic AAV in the early years, but it suffers from limited transduction and surgical risk. ICV injection of therapeutic AAV can achieve wide transduction on the injected side, but it loses the killing effect on tumors on the opposite side. Systemic injection of therapeutic AAV transduces widely throughout the brain and can effectively inhibit invading GBM cells throughout the brain.
Intratumoral injection is the most generally preferred method for AAV in the treatment of experimental GBM mouse models because of the presence of the BBB. The local injection of AAV delivering therapeutic agents has inhibited the growth of intracranial GBM and prolonged the survival of tumor-bearing mice to some extent [61]. AAV-based GBM therapy was first applied to treat an experimental GBM mouse model in 1996; that study proved that a single intracranial injection of AAV-tk-IRES-IL-2 could effectively prohibit the progression of xenograft GBM. One year later, the same laboratory proved that the intracranial injection of AAV-tk together with the intraperitoneal injection of ganciclovir could eliminate tumors in GBM mice [62]. However, despite its excellent results, the approach causes serious hepatotoxicity, and its use has been set aside. Interferon-beta (IFN-β) has potent antitumor effects, inhibiting the growth and angiogenesis of cancer cells and promoting cancer cell apoptosis and immune stimulation. Since 2002, several researchers have tried to overcome experimental GBM mouse models by administering AAV encoding IFN-β through local injection and have achieved certain effects [63]. Vascular endothelial growth factor (VEGF), a proangiogenic factor, is remarkably upregulated in GBM tissue and promotes angiogenesis and growth of GBM tumors. Thus, AAV-delivered sVEGFR1/R2, a VEGF-optimized soluble inhibitor, was used to treat an experimental GBM model, and the results showed a powerful anti-GBM effect exerted by the local injection of AAV-sVEGFR1/R2 [25]. Furthermore, some studies found that bevacizumab, an anti-VEGF monoclonal antibody, delivered by AAVrh.10 could reduce the blood vessel density and volume of GBM tumors and increase survival [64]. Tissue factor pathway inhibitor-2 (TFPI-2) has a strong ability to inhibit tumor cell proliferation, migration, and angiogenesis, and Niranjan et al. found that AAV-TFPI-2 could mediate the inhibition of GBM progression in vitro and in vivo [65]. It has been reported that overexpression of the C-terminal fragment of human telomerase reverse transcriptase (hTERTC27) can prohibit the occurrence of malignant tumors, including in experimental GBM models; evidence has shown that intratumoral injection of AAV-hTERTC27 could inhibit the growth of xenograft GBM, amplify tumor necrosis and apoptosis, and reduce microvessel density in nude mice [66]. In addition, studies have shown that intratumorally delivered AAV2-decorin, which exerts its anti-tumor effect by affecting the epidermal growth factor receptor, transforming growth factor-beta, and p21, could inhibit GBM and prolong the survival of GBM mice [61]. Studies have also shown that AAV-secreted TRAIL (S-TRAIL) could promote the killing effect on GBM cells in vitro.
Here, AAVrh.8-S-TRAIL accompanied by the administration of lanatoside C was shown to increase the overall survival of U87-bearing mice, further confirming the anti-GBM role of S-TRAIL [67]. It has also been shown that an AAV2-delivered apoptin-derived peptide (ADP) promoted the apoptosis of GBM cells and prolonged the survival of orthotopic GBM-bearing mice [55]. Previous studies have shown that microRNAs can inhibit tumorigenesis; studies have proved that AAV-miR-7 significantly reduced tumor size, upregulated death receptor 5 to promote tumor cell death, and prolonged survival in a xenograft GBM mouse model [68]. The intracranial injection of AAV circumvents the obstacle of the BBB. Compared with the injection of pure therapeutic protein, AAV's persistent and stable expression of therapeutic agents can better inhibit the sustained development of GBM [69]. This was indeed the reason for the popularity of the AAV-based GBM treatment approach in the early years. The above research also shows that the local injection of AAV delivering traditional or GBM-specific anti-tumor genes is effective and, to some extent, alleviates the progression of experimental GBM mouse models. Locally injected therapeutic AAV has been reported to infiltrate into the tumor area, but the AAV genome will be diluted because of the rapid growth and division of GBM tumor cells; this dilution results in reduced expression efficiency and affects the therapeutic effect [27]. In addition, studies have shown that the local injection of AAV, with only partial transduction of the brain, can only have a good effect on noninvasive, implanted GBM tumors, whereas human GBM is highly invasive. GBM cells can migrate along blood vessels away from the tumor core; thus, the local injection of AAV can hardly eliminate all the invasive distant GBM cells [70,71]. Owing to the invasive nature of GBM cells, a globally spread gene delivery vehicle is badly needed to combat the diffuse primary tumor or tumor recurrence. Studies have demonstrated that the injection of ssAAV2-ADP in the left hemisphere effectively prevents the growth of ipsilateral tumors but is not enough to prevent the growth of distal tumors in the contralateral hemisphere [55]. Furthermore, Matheus et al. showed that the intracranial injection of AAVrh8-sTRAIL indeed extends the survival of experimental GBM mice, but these mice still died of tumor spread within 100 days. Therefore, the local injection of AAV to treat human GBM is still very flawed and will not achieve the desired therapeutic effect. The researchers also demonstrated that the therapeutic gene must be widely expressed in the brain to fight against invasive GBM cells [67]. One way to achieve this goal is to systematically inject BBB-crossing AAV, which can perform extensive gene delivery in the brain.
Anti-GBM effect of AAV through ICV injection
The ICV injection of therapeutic drugs is a common approach for the treatment of CNS diseases. In recent years, ICV injection is also widely used because it allows therapeutic drugs to reach most of the brain regions with the circulation of cerebrospinal fluid in the treatment of experimental GBM mice model [72]. Studies have shown that the ICV injection of AAV can overcome the disadvantages of local injection in GBM treatment. It is reported that intracranial fixed-point injection cannot completely eliminate distant infiltrating GBM cells because of the extensive infiltration and migration characteristics of GBM cells. They showed that the pre-injection of AAV vector encoding human IFN-β (AAV-IFN-β) through ICV injection can completely prevent tumor growth in an orthotopic model of GBM [29]. In addition, the survival rate of pre-established U87 intracranial tumor mice injected with AAV-IFN-β through ICV was substantially improved compared with injecting the control AAV vector through the same route. These data indicate that the ICV injection of AAV vectors that encode anti-tumor proteins is a promising method and deserves further study. Compared with local injection, the ICV injection of AAV can eliminate most of the distant GBM cells. Furthermore, local injection is highly dangerous when GBM is located in the critical structure of the brain, and ICV injection can well avoid this problem [73]. However, ICV injection also has defects. First, ICV injection is unstable. Second, ICV injection delivers AAV into the cerebrospinal fluid, which only circulates between the ventricles. The brain parenchyma area close to the ventricle may have good transduction, but the transduction efficiency for areas away from the brain ventricle may not be enough [56,74]. Furthermore, other studies [29] demonstrated that AAV delivered by ICV injection also has chemotaxis in the brain, mostly transduces the hippocampus and corpus callosum, but rarely transduces other parts. From this point of view, AAV-based GBM gene therapy through ICV injection is also inadequate. Searching for a better delivery method that can make AAV transduce the entire CNS is a key step in AAVbased GBM gene therapy.
Anti-GBM effect of AAV through systematic injection
The discovery of BBB-crossing AAV9 opened the door to the systematic injection of AAV for CNS diseases in 2009 [33]. BBB-crossing rAAVrh.8 and rAAVrh.10, which played a role in promoting the systematic injection of therapeutic AAV to treat CNS diseases, were discovered in 2014 [75]. To date, AAV9, AAVrh.8, AAVrh.10, AAVrh.39, and AAVrh.43 have been proved to have the ability to transduce glial and neurons after systemic injection [76]. AAV9 variants AAV-PHP.B and AAV-PHP.eB, which were developed by researchers through directed evolution approach, also have excellent CNS transduction ability in C57 mice [77,78]. After the discovery of these BBB-crossing AAVs, researchers began to treat experimental GBM mice model with therapeutic AAV by systematic injection. It is the first time that the systematic injection of AAV was applied to treat GBM in 2016. Their result showed that systematic administration of AAV9-sTRAIL suppressed tumor growth and remarkably increased the survival of xenograft GBM mice [79]. Some studies also proved that systematic AAV9-IFN-β delivery could induce complete tumor regression in experimental GBM model in a dose-dependent manner. They also demonstrated that the systematic administration of AAV9-IFN-β is more efficient in multifocal GBM compared with local injection [80]. In recent years, the systematic injection of therapeutic AAV in GBM treatment has attracted considerable attention with the development of AAV9 variants AAV-PHP.B and AAV-PHP. eB, which have been proven to have a stronger ability to cross the BBB than AAV9.
The systematic injection of therapeutic AAV has extensive transduction characteristics and fundamental advantages over local injection or ICV injection; thus, this approach is an excellent way to treat GBM [80] (Table 2). A great number of studies have shown that systemically injected therapeutic AAV can transduce most regions of the CNS through the extensive vascular system and has a comprehensive containment effect on invasive, malignant GBM [81,82]. It has been reported that the effect of the systematic injection of AAV9-IFN-β in treating multifocal GBM is better than that of local injection [80]. Moreover, ICV injection can only inhibit ipsilateral GBM tumors but not the tumors on the contralateral side because of its limitations in transduction [29]. These results clearly show the advantages of the systematic injection of AAV in the treatment of GBM. Human GBM is highly invasive and can spread widely in the brain; thus, systemic injection is the best choice for AAV-based GBM gene therapy to eliminate GBM more thoroughly [83]. Despite its advantages compared with local injection and ICV injection, the systematic injection of AAV-based GBM gene therapy faces a variety of challenges that need to be resolved [84]. The first challenge is the efficiency of BBB crossing. The BBB is the main obstacle that hinders the entry of therapeutic drugs into the CNS, and how to overcome the BBB and transduce the CNS more efficiently are the most critical steps in the systematic injection of AAV-based GBM gene therapy [85,86]. Although some AAVs that can cross the BBB have been developed, more efficient AAV mutants still need to be studied. The second challenge is the non-specificity of AAV transduction. Systematic injection can widely distribute AAV in various parts of the body; hence, the expression of the therapeutic gene in non-target cells away from the disease site is also very high and may result in ineffective treatment and high peripheral toxicity. The last challenge is the immune barrier [87]. Therapeutic AAV can be neutralized by the large amount of AAV antibody in human blood, which results in a poor treatment effect or even no effect. Therefore, finding possible solutions to the challenges of systematic injection is the key to AAV-based GBM gene therapy.
Conclusions and future prospects
GBM is a highly malignant intracranial tumor that is highly aggressive and heterogeneous. Surgical resection combined with radiotherapy and chemotherapy is the main method for the clinical treatment of GBM, but patient survival rate is still very low [88]. The development of gene therapy has been widely used in a great number of diseases. AAV has become a focus in gene therapy because of its stable, non-pathogenic, and longterm expression of therapeutic agents. AAV has been used for decades to deliver therapeutic agents to treat experimental GBM in mice model [89]. AAV-based GBM therapy is mainly administered by local injection in the early years because of the BBB [55]. Although this approach is damaging and can only achieve partial transduction in the CNS region, it plays a role in extending the survival rate of experimental GBM mice model. The discovery of BBB-crossing AAV9 in 2009 introduced the systematic injection of AAV to treat GBM. Systematic injection is noninvasive and has superior wide-spread transduction than local injection, especially for the treatment of highly aggressive tumors, such as GBM [33]. The systematic injection of therapeutic AAV has great advantages over local injection in the treatment of aggressive GBM [80] but also faces many challenges. Developing more efficient BBB-crossing AAV, performing AAV-specific CNS transduction, and reducing peripheral toxicity are the main challenges [87]. Researchers have used multiple genetic engineering techniques to make AAV capsid have the ability to cross the BBB and search for new BBBcrossing AAV serotypes [90]. Until now, a great number of BBB-crossing AAV mutants are being developed, including the AAV-PHP.B and AAV-PHP.eB, which can transduce the entire CNS region. Peripheral toxicity, especially liver toxicity, have been addressed through some countermeasures, such as inserting CNS-specific promoters or using microRNA to suppress peripheral transgene expression [91,92], but cannot be completely eliminated. Thus, developing an AAV capable of CNSspecific tropism without infecting peripheral tissue is a direction worthy of further research. Compared with other viral vectors such as oncolytic viruses, AAV vectors have unique advantages. Although the oncolytic virus has a direct cytotoxic effect on GBM tumor cells, the AAV vector has the advantages of stability, high efficiency, and long-term continuous expression of therapeutic genes, which is more conducive to the durable inhibitory effect of therapeutic genes on GBM. Furthermore, AAV also has BBB-crossing ability, which poses the possibility of intravenous injection of gene therapy for GBM treatment, which is incomparable to other viral vectors. In conclusion, the systematic injection of AAV for the treatment of GBM is a promising direction, but some work needs to be studied further: developing more efficient BBB-crossing AAV, enhancing the CNS-specific transduction of AAV, and reducing peripheral toxicity. | 6,181 | 2021-01-26T00:00:00.000 | [
"Medicine",
"Biology"
] |
DIGITAL MARKETING: A CATALYST IN CREATING BRAND IMAGE THROUGH CUSTOMER VOICE
A famous quote by Jay Baer says, "Content is fire; social media is gasoline." If a product has good content and the same is marketed through various digital channels, this will enhance the sales of the company. Through digital marketing, marketers nowadays build relationships through links. This paper discusses in detail the growth of digital marketing and its various tools. The contrast between traditional marketing and digital marketing is also shown, along with how digital marketing can help in creating a brand with the help of various connected tools. Brand creation is done keeping the customer in mind, and the paper discusses the various steps involved in creating a brand through digital marketing.
(a) Global reach: businesses can reach customers anywhere in the world; (b) it is less expensive; (c) personalized marketing attempts can be made by understanding the customers; (d) analytical tools are available to analyze the effect of digital marketing; (e) digital marketing provides real-time results almost instantly; time is precious for all of us, so why waste even a nanosecond; (f) brand building is what every business tries to accomplish, and digital marketing helps develop your brand by promoting it on several platforms; the more viral your brand goes, the more reputation your brand will earn in the eyes of search engines as well as users. "Digital is no longer a medium, but a way of doing business," said Ashish Bhasin, CEO-South Asia at Dentsu Aegis Network. "The digital transformation is affecting every business and agencies and marketers and whoever doesn't recognize this will be left behind. Digital is a behavioral change taking place with the consumers, not just a way of building a brand."
Tools Of Digital Marketing
Search Engine Optimization (SEO): This is the first widely used tool by the customer. 85% of all consumers search online. Hence we can say SEO is the first business marketing technique. This goal is accomplished through implementation of search engine friendly website architecture, optimized internal navigation and link landscape, as well as optimization of the content (comprised, at a minimum, of readability & usability improvements, and grammatical corrections). SEO is as much art as it is science, but at its core it is the discipline of making user-friendly & useful content understandable and easily digestible by search engines. Pay Per Click Advertisement (PPC): Pay-Per-Click Advertising -or when it's being run on Google, 'Ad Words' is a digital marketing tactic that allows you to pay to appear at the top of search engine results. This combines the positives of SEO (being at the top of Google) without having to wait for all the SEO work you've done to take effect. Unlike SEO, Google Ad Words costs an ongoing fee. The positives of this is that the fee only incurs when someone actually clicks on your ad (hence the term "pay-per-click"). Because you're not paying unless someone clicks that button, every dollar you spend on an Ad Words campaign translates to a visitor. An Ad Words campaign works in perfect concert with an SEO campaign, allowing you to pay to be on page one for relevant Google searches until your organic rankings get to the top of Google. Content Marketing: The concepts behind content marketing are no different than that of traditional marketing; the only thing that's changed is the method of delivery. The digital world lets businesses deliver engaging pieces of content (such as blogs, e-books and videos) to potential clients, which is a great way to show the world what your business is all aboutwhile bringing in more sales of course. Social Media: Almost everyone on the face of the planet uses social media, whether it's Face book, LinkedIn, Instagram, Snapchat or one of the hundred other platforms. With a huge captive audience, there's no better way to talk to new customers than through targeted social media advertising. Often working as part of a content marketing campaign, social media is about using engaging posts or content to create a brand image while also enabling a business to talk directly with its customers, getting feedback far more easily than from traditional customer satisfaction surveys. Email and Text Marketing: When message of the product or services are sent through an email to customers. Its relevantly low cost when compared to other types of advertising. The product details also can be shared through email. It is a way to send information about the products and services from cellular and smart phone devices Under this technique, companies can send marketing messages to their customers in real-time, any time and can be confident that the message will be seen.
Understanding traditional and digital marketing
Before understanding the customer, a marketer has to be completely clear and focused on which customer segment he or she is targeting. Once the prospective customers have been identified, the next step is to understand customer behavior. Simple models give only a generalized view of how consumer behavior works and are limited in their approach, so a broader perspective is needed. The Engel, Blackwell and Miniard model offers this broader perspective and has four decision stages. The first is the information input stage, which includes all the inputs and stimuli from marketing (advertising, radio, newspapers, the internet, word of mouth, etc.). Next comes the information processing stage, which concerns how the consumer is exposed to the message, pays attention to it, comprehends its intent, accepts it consciously and subconsciously, and retains it in memory in order to make a selection. The third stage is the decision process stage, which is the most important: the consumer recognizes the need, searches for the product or service on the web, makes a pre-purchase evaluation of alternatives, purchases the product or service, consumes it, and carries out a post-purchase evaluation. The model also describes two sets of variables influencing the consumer's choice: environmental influences, such as culture, social and personal influences, family and other situational factors; and individual differences that distinguish one consumer from another, based on factors such as consumer resources, motivation and involvement, knowledge, attitudes, personality, values and lifestyle. Looking at the impact of digital technology on consumer behavior, it is important to know the difference between a regular consumer and one who is online. Consider the example of a customer wanting to buy a branded purse under a traditional impact versus a digital impact. Passive influence is something the consumer does not have much control over, while active influence is something the consumer actively seeks. In the figures below we can see the difference between the two approaches; with digital technology providing far more targeting options, marketers can use various mixes.
Passive influence: the consumer sees an advertisement on television. Active influence: the consumer discusses the product with friends.
Consumer Behavior
JUSTIFICATION OF THE STUDY
As the 4Ps (marketing mix) play a vital role in traditional marketing, they apply in digital marketing too. Product: The product in a digital market cannot be touched physically, smelled, or seen directly. Hence, determining the actual quality of the product is not possible in digital marketing. But in today's world customers get a feel for the product or service through various means, by finding reviews or getting information from family and friends who have bought the product. In the digital era a low-quality product can easily be identified, which will eventually lead to a reduction in sales.
Price: Price is one of the most important parameter considered for making a purchase decision. By using the digital marketing various add on values can be used for example : discount coupons , exclusive deals, sales, If the brand has been created by digital marketing, many other opinion, that customer will be ready to pay the higher price Promotion: This talks about in different manners the information of the product is reaching the customer. In traditional marketing promotion was done through various modes, TV, Radio, Hoarding other advertising. But in digital marketing promotion is done through various digital tools, e.g.: Search engine optimization, email and text marketing, Content and social marketing. These measurable tactics put you, the marketer in control of testing and optimizing your marketing mix as you go versus making a huge one-time buy for the year on television advertising that may or may not work as effectively. Place: Where the product should be sold is "Place" if the product is sold at the retail store or sold through various distribution channels or directly in the market, these all brings various challenges. With all business trying to have a global market its Digital marketing place an important role, with understanding the necessities has given birth to companies like Amazon, eBay flipchart etc., It's for promoting the business to clients as well as showcasing the business to business and clients in every one of the four blends the digital marketing can increase the value of. Business to Business (B2B): Digital promoting can be utilized for showcasing of business to different business visionaries additionally which is called business to business advertising. B2B Digital Marketing enables makers and providers to promote their items and administrations before worldwide/national purchasers and in the present advanced world, it is done through online entries. Advanced promoting systems can bolster B2B showcasing admirably since B2B connections are once in a while around a prompt coincidental exchange. Or maybe it is tied in with building notoriety, exhibiting capacity and displaying believability. Client to Customer (C2C): When a man discusses an organization's item and administrations with family and companions. For instance post viewing a film the client communicates is certain input to other individuals through different methods of correspondence which makes a prospect client. Client to Business (C2B): in which purchasers (consumers) make esteem and organizations devour that esteem. For instance, when a buyer composes audits or when a customer gives a helpful thought for new item improvement then that purchaser is making an incentive for the business if the business receives the info. Excepted ideas are swarm sourcing and co-creation. Business to Customer (B2C): Promoting the item to the client through computerized showcasing.
OBJECTIVE OF THE STUDY
Objective of the study is to show, how to create a brand image in digital marketing through customer voice.
REVIEW OF LITERATURE
The literature has been reviewed from books, websites, etc.
Stated by Philip Kotler: The different books have been alluded where we can see the advancement of showcasing, the development has been because of progress in innovation, the change has offered ascend to computerized promoting from conventional yet in addition specified. Understanding the customer behavior and the factors effecting customer is clearly showcased in Fundamentals of Digital Marketing by Puneet Singh Bhatia. It would be absurd for any organization to go over the edge on advanced media. Organizations should utilize a blend of social and customary media to advance items. The development of brand showcasing made an atmosphere of responsiveness for another equation, and brands that could read the changing feelings and requirements among their purchasers in this new scene would appreciate receiving the rewards. Marina Johansson (2010) discusses how social networks has influenced the brand image, brand equity and brand awareness. Brand equity is generated through brand awareness. She discusses how the social media is assisting in creating the brand awareness among the customers In Building brand loyalty Yuvraj and Indumathi (2018) has noticed that, the increased usage of personal devices has paved way to digital marketing and increased ways of communicating with the target customers when compared to traditional marketing. "Assessing the consumer decision process in the digital marketplace" Thompson S.H, Teo, Yon Ding Yeong. "Omega The International Journal of Management Science" This paper focuses on consumer decision making process in reference to online shopping in the Singapore market. They have conducted internet survey and 1133 responses received, using structural equation model, they finding perceived risk has a negative relationship with consumers, they state in the paper, there is a positive relationship between perceived benefits of search and overall deal evaluation. The various further studies suggested by the paper, will be useful to understand the effect of digital marketing in consumer buying decision model. And the paper suggests the study for both B2C and B2B customers. Digital Marketing Strategies that Millennial find Appealing, Motivating or Just Annoying, Dr. Katherine Taken Smith, the purpose of the study was to understand the commonly used digital media and that would affect the millennium audience. They have done analysis based on survey of 571 millennial. Which gives an outcome, millennial prefers certain forms of digital advertising, while avoiding others.
Results also indicate that the digital marketing strategies that are considerably more effective than others in grabbing the attention. The study limitation is of only 571, and with changing environment, the study needs to done periodically to understand the present affect of digital marketing on consumers. An Empirical study on effectiveness and challenges of Digital Marketing in Bangladesh, Md Sajedul Islam. International Journal of Engineering and Management Invention (IJEMI) Volume 01/Issue 01/August 2016. The study shows online marketing strategies in Bangladesh, the study shows the comparison of traditional marketing and digital marketing system. The conclusion of the study is Digital marketing has become an essential part of strategy of many companions boundaries attached, with various sophisticated electronic devises which can be used.
Research Methodology:
The research methodology that has been adopted for this article is analysis of live case study from secondary source of information.
Discussion about a live case study-Apollo Munich Insurance.
Let's analyze and understand the above concept taking a case study-"Apollo Munich Insurance" has created brand awareness among the consumers through digital way. Apollo Munich Health Insurance Company Ltd. is a private part medical coverage organization in India. Established on 8 August 2007, it is a joint venture between the Apollo Hospitals group and Munich Health, one of the three business fragments of Munich Re; a main reinsurance organization situated in Germany Creating Awareness: Apollo Munich started with advertisements of the company creating awareness to people. The awareness of primary regarding the company selling insurance. This created awareness among people interested to buy health insurance. Creating Positioning: Once the company was known to the potential clients the next step was identify the direct customers, understand how each competitors is positioning their brand, comparing our position to the competitors identify the uniqueness, develop a distinct and value based positioning idea. Craft a brand position statement, a positioning statement is a one or two sentence declaration that communicates your brand's unique value to your customers in relation to your main competitors. Customer Perception: Apollo Munich is merger of two companies' and Apollo is one of the old hospitals in India. It carries a prestigious name and brand and this is a value addition in terms of creating a positive perception for the customer. Since the company is having healthcare background customers will definitely have a comfort in buying the product. Hence the background has played a major role in creating positive perception through digital marketing. Creating Customers: Once the need and preference, is known the awareness is created with a positive perception of product and service is sold. Customer Loyalty: Today, an ever increasing number of individuals are getting to be noticeably acclimated with utilizing the web to discover data about items and administrations, all things considered, mark supervisors realized that making their items or brands online nearness is ending up imperatively critical too. Consequently, the act of Digital Marketing sets the bar high to have a magnificent client relationship administration set up.
Innovative products have been showcased by Apollo Munich with celebrities, and a persistent relationship has always been maintained by Apollo Munich with customers through TV advertisements, Facebook, Twitter and so on, which has created customer loyalty. Apollo Munich Health Insurance rolled out the #BeInsured advertisement campaign to highlight the exceptional advantages of its progressive Health Wallet plan, recently launched under a new category, 'WINSURE'. Health Wallet, a cutting-edge health insurance plan, not only addresses the present needs of customers by paying for their hospitalization but also covers their OPD costs, which are normally not covered by health insurance policies. Strikingly, Health Wallet also guarantees the affordability of continuing the policy in later years. Based on this case study, we can say there are two areas digital marketers should work on: (i) awareness of digital marketing and (ii) creating a positive customer perception. A brand can be created through digital marketing by following the steps below. Creating awareness: As long as you are offering an excellent product or service, chances are that most people will be more than willing to pay a little extra money to support your business. Creating awareness of the product among prospective customers is the first stage of marketing, and in today's world this can be accomplished far more effectively through digital marketing. Creating awareness and creating a positive customer perception through digital marketing go hand in hand. The key to achieving all of this is to identify your target audience, define the particulars of your company's offerings, and ensure those offerings (products and services) are in line with the requirements and needs of your potential customers. Creating positioning: Positioning is what you do in the mind of your target consumer, not what you do to a product or service. Emotion has been shown to be the principal driver of decision-making on a buyer's path to purchase. Emotional positioning examples include Coca-Cola, which exists to inspire moments of happiness, and Cadbury, which exists to inspire moments of joy.
Customer perception: Customers tend to develop a perception of the product or service before purchase. Perception usually varies from customer to customer and with product and service quality. Hence, from a business or marketing point of view, it is essential to track customer behavior patterns and perceptions and to address them effectively and efficiently. Creating customers: Once the customer is aware of the product and a product position and perception have been established, the subsequent stage is the purchase of the product, thereby creating a customer. Customer loyalty: Retaining the customer and creating loyalty is the most critical step. Let the customer know what you are doing for them, and do the same through digital marketing; if you do not communicate all the things the organization is doing, the customer may not know, so maintaining constant contact with the customer through digital channels is essential.
CONCLUSION:
Change is constant, innovation and new technologies should be accepted. Digital marketing acts as a catalyst to build brand through customer. Marketing team, agencies, distribution channels do not build brand customer do.
Figure: the stages of brand building through digital marketing (awareness, positioning, customer, loyalty).
Brand is nothing but an emotion, perception build in the customer mind. These emotions are created via visual effects through digital marketing. In today's digital world there are various modes of marketing hence creating a wider platform for the marketers to sell their product and services. Now understanding the difference and factors influence a consumer behavior in digital market, the costumer perception towards a digital shopping has given importance. Hence the marketers should be aware of the fact that a positive customer perception has been created to earn profits through digital marketing. | 4,544.4 | 2021-04-10T00:00:00.000 | [
"Business",
"Computer Science"
] |
Is Cosmological Constant Needed in Higgs Inflation?
The detection of B-mode shows a very powerful constraint to theoretical inflation models through the measurement of the tensor-to-scalar ratio $r$. Higgs boson is the most likely candidate of the inflaton field. But usually, Higgs inflation models predict a small value of $r$, which is not quite consistent with the recent results from BICEP2. In this paper, we explored whether a cosmological constant energy component is needed to improve the situation. And we found the answer is yes. For the so-called Higgs chaotic inflation model with a quadratic potential, it predicts $r\approx 0.2$, $n_s\approx0.96$ with e-folds number $N\approx 56$, which is large enough to overcome the problems such as the horizon problem in the Big Bang cosmology. The required energy scale of the cosmological constant is roughly $\Lambda \sim (10^{14} \text{GeV})^2 $, which means a mechanism is still needed to solve the fine-tuning problem in the later time evolution of the universe, e.g. by introducing some dark energy component.
I. INTRODUCTION
Recently the detection of B-modes in the CMB by the BICEP2 group [1] has provided strong evidence of inflation [2][3][4], which solves many theoretical puzzles in the Big Bang cosmology. The B-mode polarization can only be generated by tensor perturbations. According to the reports of the BICEP2 experiment, the tensor-to-scalar ratio is in the range r = 0.20 (+0.07, −0.05) (68% CL). In the simplest slow-roll inflation models, the early universe was driven by a single scalar field φ with a very flat potential V(φ). Usually, we call this field the inflaton. Although there are many inflation models on the market, we still do not fully understand what the inflaton is. The most economical and fundamental candidate for the inflaton is the standard model (SM) Higgs boson, which was observed by the collider experiment LHC in 2012 [5,6]. In this sense, Higgs inflation is a simple and elegant model. However, it is not easy for the Higgs boson to realize an inflation model with the correct density perturbations. To see this, we estimate the inflaton mass from the amplitude A_s of the scalar perturbation power spectrum in the chaotic inflation model [7] with a quadratic potential V(φ) = m²φ²/2: the result, m ∼ O(10^13) GeV, is many orders of magnitude larger than the observed Higgs mass, m_h ≈ 125.9 ± 0.4 GeV. In other words, the potential of the Higgs field h is not flat enough to realize inflation. By introducing a non-minimal coupling to gravity (∼ h²R), one could indeed achieve such a flat potential [8] after a conformal transformation, and the predictions of this kind of non-minimally coupled Higgs inflation were well consistent with observations before BICEP2. The authors in ref. [9] have found that this model cannot accommodate the new measurement from BICEP2, because it generally predicts a small amplitude of tensor perturbations. An alternative Higgs inflation model was proposed in ref. [10], in which the Higgs boson kinetic term is non-minimally coupled to the Einstein tensor (∼ G^{ab}∂_a h ∂_b h). According to the recent analysis of this model [11], it predicts r ≈ 0.16 when the number of e-folds is N ≈ 33, since r ≈ 16/(3N + 1) in this model. However, to overcome the problems in the Big Bang theory, the number of e-folds is required to be around N ≈ 60, and then the tensor-to-scalar ratio becomes even smaller, say r ≈ 0.09. Another interesting Higgs inflation model, called Higgs chaotic inflation, was proposed in ref. [12]. In this model, the SM Higgs boson realizes the quadratic chaotic inflation model, based on so-called running kinetic inflation [13,14]. The kinetic term of the inflaton is significantly modified at large field values, while it becomes the canonical one when h is small. The value of r in this model is the same as that in the chaotic inflation model with a quadratic potential, i.e. r = 8/N. For N ≈ 60, it predicts r ≈ 0.13, but if we require a larger r, say r ≈ 0.2, a smaller N is needed, say N ≈ 40, which is a little better than that predicted in the other Higgs inflation models; see ref. [15] for a recent revisit of this model. It seems that Higgs chaotic inflation is an attractive Higgs inflation model on the market.
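As a rough numerical illustration of this estimate (the explicit formula is not reproduced above), the standard slow-roll relation A_s ≈ N²m²/(6π²M_pl²) for V(φ) = m²φ²/2 can be evaluated with a short script; the normalization convention is assumed here and may differ slightly from the paper's elided expression.

```python
# Hedged sketch: standard slow-roll estimate of the inflaton mass for
# V(phi) = m^2 phi^2 / 2, using A_s ~= N^2 m^2 / (6 pi^2 M_pl^2).
import math

M_pl = 2.435e18   # reduced Planck mass in GeV
A_s = 2.19e-9     # scalar amplitude (value quoted later in the paper)
N = 60            # number of e-folds

m = math.sqrt(6 * math.pi**2 * A_s / N**2) * M_pl
print(f"m ~ {m:.2e} GeV")            # ~1.5e13 GeV, i.e. O(10^13) GeV
print(f"m / m_h ~ {m / 125.9:.1e}")  # ~1e11: many orders above the Higgs mass
```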
On the other hand, there is a challenge for single-field inflation with the BICEP2 result. For chaotic inflation, the larger the value of the tensor-to-scalar ratio, the smaller the value of the running of the spectral index; see the details in ref. [16]. Therefore, to be more consistent with observations, one might consider going a little beyond a single-field inflation model. Among many choices, the cosmological constant is often forgotten when building an inflation model, since by itself it could only produce the exactly scale-invariant Harrison–Zel'dovich power spectrum with scalar spectral index n_s = 1, which is already ruled out at over 5σ by Planck [17]. However, we find that the situation changes when the early universe is dominated by the cosmological constant as well as the inflaton. It could give n_s ≈ 0.96 and r ≈ 0.2 when the number of e-folds is not so small, say N ≈ 56, and it also predicts the correct magnitude of the spectrum amplitude.
In the following, we will assume that the running kinetic approach is a correct way to realize inflation with the SM Higgs boson, and we also assume that both the inflaton and the cosmological constant dominated the universe during inflation. In the next section, we give a brief review of running kinetic inflation, and then we examine the role played by the cosmological constant during inflation. Finally, we draw our conclusions and give some discussion in the last section.
II. RUNNING KINETIC INFLATION
The running kinetic inflation can be easily implemented in supergravity by assuming a shift symmetry exhibiting itself in the Kähler potential at high energy scales, while this symmetry is explicitly broken and therefore becomes much less prominent at low energy scales. In the unitary gauge, one can write down the Lagrangian for the Higgs boson h [12][13][14][15], whose kinetic term is non-canonical at large field values. The effect of the non-canonical kinetic term is significant for large h ≥ 1/√ξ: the kinetic term grows as h², which is why the scenario is named "running kinetic inflation". By redefining the Higgs field, one can rewrite the Lagrangian in terms of the canonically normalized field φ ≡ √(ξ/8) h², with an effective potential that is quadratic in φ, V(φ) = m²φ²/2 (with m² ≃ 4λ_h M_pl²/ξ). Thus, quadratic chaotic inflation occurs.
III. THE ROLE OF THE COSMOLOGICAL CONSTANT DURING INFLATION
Assuming the universe was dominated by both the inflaton and the cosmological constant, the Friedmann equation takes the standard form with an additional constant term, where M_pl = (8πG)^{−1/2} ≈ 2.435 × 10^{18} GeV is the reduced Planck mass. Then, by using the definition of the slow-roll parameters, we obtain ε and η (Eqs. (5) and (6)). The amplitude of the scalar perturbation power spectrum (Eq. (7)) is defined through P_s = A_s (k/k_*)^{n_s−1+···}. By using the relations n_s − 1 = 2η − 6ε, with n_s the scalar spectral index, and r = 16ε, with r the tensor-to-scalar ratio, we obtain the inflaton mass in terms of n_s, r and A_s (Eq. (8)), and also the value of the cosmological constant (Eq. (9)). The number of e-folds can likewise be expressed in these terms (Eq. (10)). By using Eqs. (9) and (10) together with the value of φ (Eq. (11)) obtained from Eqs. (5), (6) and (7), we arrive at the number of e-folds in terms of the observables (Eq. (12)). Substituting the observed values n_s ≈ 0.96, r ≈ 0.20 and A_s ≈ 2.19 × 10^{−9} into Eq. (8), we estimate the mass of the inflaton as m ≈ 2.59 × 10^{13} GeV. If ξ is sufficiently large, say ξ ≈ 4.6 × 10^9 in Eq. (3), the quartic coupling could be λ_h ≈ 0.13, which is required to explain the correct electroweak scale and the Higgs boson mass m_h = √(2λ_h) v. The large value of ξ can be understood in terms of symmetry; see refs. [12][13][14][15] for details.
The scale of the cosmological constant can be estimated from Eq. (9): Λ ≈ 1.85 × 10^{−9} M_pl² ≈ (1.05 × 10^{14} GeV)². As usual, the fine-tuning problem of the cosmological constant still exists at later times. Alternatively, one can consider some dynamical dark energy models instead, which behave more like a cosmological constant component at early times.
From Eq. (12), we obtain the number of e-folds as N ≈ 56, which looks sufficient to solve the horizon problem, the flatness problem, etc. in the Big Bang cosmology. In other words, the model can predict r ≈ 0.2 and n_s ≈ 0.96 for N ≈ 56. Of course, the cosmological constant and the mass of the inflaton should take the values estimated above. From Fig. 1, one can see that the value of N increases with Λ for small Λ values, while it decreases for large Λ values. This is easy to understand: when Λ is small, we have φ², m² ∼ Λ, see Eqs. (8), (9) and (11), so N ∼ Λ. But when Λ is large, we have φ² ∼ 1/Λ and m² ∼ Λ², so N ∼ 1/Λ, which approaches zero as Λ goes to infinity. The latest analysis of the data, including the Planck CMB temperature data, the WMAP large-scale polarization data (WP), CMB data extending Planck to higher ℓ, the Planck lensing power spectrum, and BAO data, gives the following constraints on the index n_s of the scalar power spectrum [17]: 0.9583 ± 0.0081 (Planck + WP), 0.9633 ± 0.0072 (Planck + WP + lensing), 0.9570 ± 0.0075 (Planck + WP + highL), 0.9607 ± 0.0063 (Planck + WP + BAO). It also gives an upper bound on r of about 0.25.
The BICEP2 experiment constrains the tensor-to-scalar ratio as r = 0.20 (+0.07, −0.05) in ref. [1]. Other groups have also reported constraints on the ratio: r = 0.23 (+0.05, −0.09) in ref. [18], adopting the Background Imaging of Cosmic Extragalactic Polarization (B2), Planck, and WP data sets; r = 0.20 (+0.04, −0.05) in ref. [19], combined with the Supernova Legacy Survey (SNLS); r = 0.199 (+0.037, −0.044) in ref. [20], adopting the Planck, supernova Union2.1 compilation, BAO, and BICEP2 data sets; and r = 0.20 (+0.04, −0.06) in ref. [21] with other BAO data sets. This B-mode signal cannot be mimicked by topological defects [22]. The most likely origin of this signal is the tensor perturbations, or gravitational-wave polarizations, generated during inflation.
Here, one can see that the cosmological constant plays an important role. It helps the universe inflate at early times and contributes to the number of e-folds through Eq. (10). As a result, the inflaton field φ can be smaller than it would be without Λ. To see this, we estimate φ ≈ 9 M_pl from Eq. (11), while φ ≈ √(4N) M_pl ≈ 15 M_pl without Λ. The slow-roll parameter ε then becomes larger, which in turn enhances the tensor-to-scalar ratio, r ≈ 16ε, see Fig. 2. Therefore, it is likely that a cosmological constant energy component is needed in Higgs chaotic inflation with a quadratic potential.
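The quoted numbers can be checked with a short script under the assumption that the constant term enters the potential as Λ·M_pl² and that the standard single-field slow-roll formulas (ε = (M_pl²/2)(V′/V)², η = M_pl²V″/V, A_s = V/(24π²M_pl⁴ε)) apply; since Eqs. (5)-(8) are not reproduced above, these conventions are an assumption, not the paper's own expressions.

```python
# Hedged numerical check of the quoted values for
# V(phi) = m^2 phi^2 / 2 + Lambda * M_pl^2, in reduced-Planck units.
import math

m2 = (2.59e13 / 2.435e18) ** 2   # m ~ 2.59e13 GeV  ->  m^2 in units of M_pl^2
Lam = 1.85e-9                    # Lambda in units of M_pl^2
phi = 9.0                        # phi ~ 9 M_pl, as quoted in the text

V = 0.5 * m2 * phi**2 + Lam      # potential including the constant term
eps = 0.5 * (m2 * phi / V) ** 2  # epsilon = (M_pl^2/2) (V'/V)^2
eta = m2 / V                     # eta = M_pl^2 V''/V
r = 16 * eps
n_s = 1 + 2 * eta - 6 * eps
A_s = V / (24 * math.pi**2 * eps)

print(f"r   ~ {r:.2f}")          # ~0.20
print(f"n_s ~ {n_s:.3f}")        # ~0.960
print(f"A_s ~ {A_s:.2e}")        # ~2.2e-9
```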
IV. CONCLUSION AND DISCUSSION
The recent detection of B-modes by BICEP2 marks an exciting leap forward in our ability to explore the early universe and fundamental physics. The measurement of the tensor-to-scalar ratio r ≈ 0.2 places a very powerful constraint on theoretical inflation models. The Higgs boson is the most likely candidate for the inflaton field. However, its mass m_h ∼ O(10²) GeV is much smaller than that required for an inflaton, m ∼ O(10^13) GeV. To solve this hierarchy problem, a non-minimal coupling between the Higgs boson and gravity, or a non-canonical kinetic term, is needed. Usually, these Higgs inflation models predict a small value of r, which is not quite consistent with the results from BICEP2. In this paper, we explored whether a cosmological constant energy component is needed to improve the situation, and we found the answer is yes. Higgs chaotic inflation then predicts r ≈ 0.2 and n_s ≈ 0.96 with e-folds number N ≈ 56, which is large enough to overcome the problems in the Big Bang cosmology.
However, we are still far from understanding the cosmological constant, and we have not solved its fine-tuning problem in the later-time evolution of the universe, which asks why the present value of the cosmological constant is so small, or why the universe is accelerating at present, z ∼ 1. Note that the slow-roll parameters have a finite maximum value from Eqs. (5) and (6) as long as Λ ≠ 0: ε_max ≈ m²/Λ when φ = √(2Λ)/m, and η_max ≈ m²/Λ when φ → 0. It seems that inflation will never end if Λ > m². To end the inflation, one may need a phase transition of a heavy Higgs boson χ with its mass at the GUT scale, which also couples slightly to the light field responsible for inflation through ∼ h²χ². At the beginning of inflation, the heavy Higgs boson sits at its true vacuum (χ = 0) and only contributes a constant potential, which can be regarded as the cosmological constant. When the inflaton rolls down the potential and becomes small enough, the vacuum at χ = 0 turns into a false one, the heavy boson is no longer stable, and it rolls to the true vacuum, ending inflation. In fact, the endless inflation is essentially due to the cosmological-constant fine-tuning problem. Once a correct mechanism is found to reduce Λ to its present observational value, inflation would certainly end. We will give a concrete example in detail to realize such a mechanism, which may solve the fine-tuning problem, in later work [23].
The challenge for single-field inflation to predict a large value of the running of the index still exists; n_s′ ≡ dn_s/d ln k ≈ −0.00025 for r ≈ 0.2 in our case, see also ref. [16] for detailed discussions of this issue. But the constraint on the running is not so tight: n_s′ ≈ −0.013 ± 0.009 (68% CL) from the analysis of Planck data, see ref. [17]. Furthermore, if additional sterile neutrino species are taken into account, one could also obtain r ≈ 0.20 without running of the spectral index (n_s′ ∼ 0), see refs. [24][25][26]. Certainly, if a large running is well confirmed in the future, other mechanisms to explain it will be urgently needed.
"Physics"
] |
Laboratory method for investigating the influence of industrial process conditions on the emission of polycyclic aromatic hydrocarbons from carbonaceous materials
This work is dedicated to developing a laboratory method for assessing emissions of polycyclic aromatic hydrocarbons (PAHs) from different carbon-based materials at elevated temperatures. The method will additionally contribute to enhancing the fundamental knowledge about the formation and decomposition of these compounds during various process conditions. Developing a method entails designing a setup for laboratory-scale experiments utilizing different furnace configurations and off-gas capturing media. To demonstrate the method's applicability, different carbon materials were tested under identical conditions, and analysis results for the same material in different furnace setups were compared. In this article, we have focused on the procedure for obtaining the "fingerprint" of PAH emissions under conditions characteristic of industrial processes.
• Two setups for investigation of the influence of temperature on PAH emissions were designed and tested for three types of carbon materials.
• The collected off-gas samples underwent analysis in two different laboratories to capture inter-laboratory differences and to evaluate the significance of the instrument detection limit.
• The results of PAH 16 (16 EPA PAH) and PAH 42 analysis were compared to showcase the influence of the expanded list on the overall emission of PAH.
The novel methodology enables the determination and comparison of PAH emissions during the thermal treatment of individual carbon materials under laboratory conditions. This could potentially be a new approach for predicting the PAH emissions in metallurgical industries that use these carbon materials as reducing agents in their processes, and for their control by optimizing process parameters and raw materials used. In addition to being suitable for simulating various conditions in the metallurgical industry, the utilization of low-hazard PAH solvents makes it a promising method.
Background and state of the art
Polycyclic aromatic hydrocarbons (PAHs) are a large class of organic compounds with two or more fused aromatic rings in their structural configurations.The most important sources of PAHs are incomplete combustion and pyrolysis of organic material, i.e., materials containing carbon and hydrogen.
In 1976, the United States Environmental Protection Agency (US EPA) named 16 PAHs as priority pollutants based on their toxicity and environmental presence in the highest concentrations [ 1 ].Over the past 47 years, these 16 compounds have played an essential role as they have been analyzed in almost all types of environmental matrixes, which is why many regulations are focused on these group representatives.Out of hundreds of compounds in this group, the specific 16 compounds were selected based on the commercial availability of their analytical standards, the feasibility of measurement with available analytical methods, their occurrence in the environment, and, at that time, knowledge of their toxicity.However, despite the limited availability of data regarding the toxicological impact of polycyclic aromatic compounds beyond this list, research indicates that an extended list of polycyclic aromatic compounds should be used in the assessment of environmental pollution [2][3][4][5].For example, the list does not include substituted PAHs, such as alkylated ones, which are more abundant and persistent in the environment than the parent PAHs [6][7][8].In addition, certain compounds, such as dibenzo pyrene isomers, exhibit a carcinogenic potential that is tenfold higher than benzo(a)pyrene [ 3 ], meaning that even small concentrations of these compounds can significantly contribute to the toxicity of an entire sample.
PAHs can be found throughout the environment in the air, water, food, and soil.The significance of air emissions lies in the fact that inhalation of PAH-containing air is considered the most common route of exposure to PAHs.Based on statistics provided by the Norwegian Environment Agency, it is evident that a considerable amount of polycyclic aromatic hydrocarbon (PAH) emissions in Norway for the year 2019 were released into the air [ 9 ].Industrial activities, such as aluminum plants, manganese ferroalloy smelters, and silicon carbide producers, represent some of the largest sources of PAH emissions in Norway.Material producers use carbon materials as reductants, electrodes, etc., which results in varying degrees of emission of polycyclic aromatic hydrocarbons.These compounds can become airborne through many mechanisms.One such mechanism is evaporating PAH compounds that already exist in the materials.Another mechanism involves the thermal generation of PAH by incomplete combustion of carbon-containing material, whereby the emission of PAH represents a net of both the formation and decomposition of PAH compounds.In addition, mechanically generated dust particles can become airborne during the handling and transport of solid materials containing PAHs [ 10 ].
The current standard methodologies (e.g., ISO 11338-1) for determining PAH emissions from metallurgical industries often give inconsistent results, partly as a result of dynamic process conditions over shorter and longer time scales.Hence, measuring emissions with irregular intervals and short periods, although carried out according to standards and regulations, does not always help companies understand and control their emissions.Since the ultimate goal is to reduce the emission of these compounds, intensive work is presently carried out to develop new methods for online monitoring [ 11 ].
While industrial measurements are essential for emission reporting, no established, standard laboratory procedure exists for determining and comparing PAH emission from industrial carbonaceous materials under controlled conditions in laboratory-scale furnaces.The aim of the present work was, hence, to develop and evaluate a method for estimating PAH emission from different carbon materials in laboratory-scale experiments under various pyrolysis conditions.The new method should be applicable for determining the PAH emission under conditions similar to those of industrial processes, as well as being safe for the operator.As a basis for method development, the current study used temperatures and raw materials seen in the silicon (Si) and manganese (Mn) ferroalloy industries.Both metallurgical grade silicon (MG-Si) and manganese ferroalloys are produced by carbothermal reduction, most often in a Submerged Arc Furnace (SAF), with the use of both fossil and biological carbon materials as reductants, whereby, consequently, the resulting off-gas emissions may include varying amounts and types of PAHs [ 12 , 13 ].
Experimental laboratory setups
To establish a successful laboratory methodology for evaluating PAH emission from different carbon materials under different conditions, it is necessary to design and test a suitable setup.In this work, two different setups ( Fig. 1 ) were examined and compared to achieve the desired goal.Fig. 1 A shows Setup 1 using an Entech 1400 furnace.The furnace was fitted with a vertical crucible of a high-temperature resistant iron-chromium-aluminum alloy as a reaction chamber.The selected carbon material is heated while a carrier gas is injected through the bottom of the furnace.The reaction gases leave the crucible through a port in the top lid and are guided to the sampling equipment.
Fig. 1 B shows Setup 2, which used a horizontal Nabertherm -RHTH 120-300/16-18 -Alumina tube furnace.In this furnace setup, the sample is placed in a steel holder at the very end of a steel tube, which is brought into the furnace from one end so that the sample is located in the middle of the alumina tube.The carbon material is heated while the carrier gas is injected from the opposite side of the steel tube inlet (left in the picture).The reaction gas leaves the furnace through a heated steel tube and goes to the sampling equipment.
Three washing bottles/bubblers/Impinger (250 mL Duran bubblers, one with Impinger nozzle and two with frit D0; with KS 19 pans and balls; Paul Gothe GmbH) are used to capture particulates and soluble constituents of the reaction gases in both setups.The first bottle is empty to prevent the washing solution from entering the furnace in case of reversed flow.Coarse particulates and condensate droplets will mainly be retained in the first washer, while the other two, filled with 2-propanol, will capture gaseous PAHs.During the sampling, these bottles were cooled to avoid the evaporation of the solvent and condensation of PAHs in the off-gas.In Setup 1, a cryostat bath with a water/glycol cooling system was used, while in Setup 2, an ice bath was used to which a new amount of ice was added periodically during the experiment.
Test procedure
Before starting the experiment, the cryostat was turned on, and the temperature was set to − 10 °C (Setup 1).While the cryostat reached the set temperature, a pre-set mass of representatively sampled carbon material was weighed and placed in the crucible (Setup 1) or the steel boat (Setup 2).The crucible/boat was then placed in the corresponding furnace and connected to the emission sampling system, ensuring no gas leakage.
During each experimental run, three sets of 3 washing bottles were used to capture PAHs released during three pre-set temperature ramping and holding phases: Phase I (room temperature to 400 °C), Phase II (401 to 750 °C), and Phase III (751 to 1100 °C), see Fig. 2. The sampling of PAH components during Phase I started with the temperature ramp from room temperature to 400 °C, with a ramp rate of 6 °C min⁻¹ for Setup 1 and 5 °C min⁻¹ for Setup 2, followed by a 1 h holding time at 400 °C. After completing Phase I, the set of bottles (Set #1) was collected, and the solvent and washing liquids were sent for analysis. Before the second temperature ramp was started, the set of washing bottles was replaced with Set #2, containing clean solvents, and the tube that connects the chamber with the bubbling bottles was washed with a small amount of fresh 2-propanol. The procedure was repeated for Phases II and III.
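For orientation, the nominal duration of each phase follows directly from the ramp rates above; the sketch below assumes that the 1 h hold stated for Phase I also applies to Phases II and III, which should be checked against Fig. 2.

```python
# Hedged sketch of the temperature programme durations. Ramp rates are those
# given in the text; the 1 h hold is stated only for Phase I and is assumed
# here to apply to all three phases.
phases = [("Phase I", 25, 400), ("Phase II", 401, 750), ("Phase III", 751, 1100)]
hold_min = 60.0

for setup, rate in (("Setup 1", 6.0), ("Setup 2", 5.0)):  # ramp rate in degC/min
    print(setup)
    for name, t_start, t_end in phases:
        ramp_min = (t_end - t_start) / rate
        print(f"  {name}: ramp {ramp_min:.0f} min + hold {hold_min:.0f} min")
```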
All carbon material samples used to perform the PAH emissions tests were weighed at room temperature before and after each experiment to investigate total sample mass loss.
Sample analysis
The solvent samples were prepared and stored immediately after sampling.For each sampling sequence, the volume of 2-propanol from the two last bottles was combined with a small volume of fresh solvent used for washing all three bottles into one single fluid sample.The combined sample, ranging in volume from 220 mL to 280 mL (depending on the required amount of solvent for washing), was refrigerated in a glass bottle wrapped in aluminum foil (to avoid photolytic decomposition) at + 4 °C until the chemical analysis was carried out.
The samples obtained from the experiments for method validation in Setup 1, with graphite as a carbon material, were analyzed for PAH 16 in the SINTEF Industry laboratory [15], as well as for PAH 42 (Table 1) in the NILU laboratory [16], which is accredited for those analyses, in order to compare the adequacy of the chemical analysis method for our samples, but also to determine the impact of the expanded list on the total amount of released PAHs. The sample preparation comprised direct injection with added internal standard, and liquid–liquid extraction with solvent exchange to cyclohexane followed by clean-up using the "Grimmer method" and a deactivated silica column (adsorption chromatography). Identification and quantification of native PAHs in both laboratories were carried out using a gas chromatograph coupled to a mass spectrometer as a detector (GC/MS). All the samples were spiked with an internal standard containing deuterated PAH congeners. Details of the applied analysis methods are given in Table 2.
The method detection limit (MDL) is defined as the minimum concentration of a substance that can be measured and reported with 99 % confidence that the value is above zero [ 19 ].The MDL depends mainly on the instrument's sensitivity and the matrix effect.The MDL values in both laboratories for each compound from the PAH 16 and PAH 42 lists are summarized in Table S1 in the Supplementary section.
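For illustration only, one common way to estimate an MDL consistent with this definition is the replicate-spike approach (the one-sided 99 % Student-t value times the standard deviation of low-level replicates, as in the US EPA procedure); the sketch below uses hypothetical values and is not the documented procedure of either laboratory.

```python
# Hedged illustration of a replicate-based MDL estimate: MDL = t(n-1, 0.99) * s,
# where s is the standard deviation of n low-level spiked replicates.
# The replicate values below are purely hypothetical.
import statistics
from scipy.stats import t

replicates_ng_per_mL = [0.42, 0.38, 0.45, 0.40, 0.36, 0.44, 0.41]  # hypothetical
n = len(replicates_ng_per_mL)
s = statistics.stdev(replicates_ng_per_mL)
mdl = t.ppf(0.99, df=n - 1) * s
print(f"MDL ~ {mdl:.3f} ng/mL")
```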
Cleaning procedure
A routine, seemingly simple cleaning operation is of great importance for this method if we bear in mind that, for analyses such as PAH 42, the equipment used for the experiments must be minimally contaminated before use. However, if we consider the type of material used, the cleaning operation is anything but simple.
In addition to glass bottles (bubblers), glass (Setup 2) or Teflon tubes (Setup 1), and small steel parts that connect the furnace and the bottles, the biggest challenge is cleaning the parts that are in direct contact with the carbon material.
The procedure for cleaning the steel crucible and its lid of Setup 1 involves first mechanical cleaning with a brush, then washing with acetone, and finally heating in a Muffle furnace to 650 °C in air and holding at temperature for 30-45 min in order to remove the remains of organic matter.The temperature of 650 °C was chosen following the temperature limitation of the available Muffle furnace of the appropriate size.
Cleaning the steel tube and the boat of Setup 2 involves washing with acetone and heating the tube furnace to 1300 °C in an atmosphere of synthetic air and keeping it at temperature for 30 min.
Method validation
This work aims to develop a method to detect the specific composition of a PAH mixture as a unique signature or "fingerprint" in PAH compounds emitted from one particular carbonaceous material under controlled conditions.To validate the method's efficacy for acquiring a PAH emission fingerprint, we conducted experiments using various carbon materials and subsequently categorized the resulting data into three subsections.In the first one, we compared the analysis results of samples obtained in experiments with the same materials in different setups.Within this subsection, and in order to compare the efficiency of the designed setups, we also evaluated the efficiency of the cleaning procedure, the breakthrough test, and the repeatability of the conducted experiments.In the following subsection, we compared the methods of chemical analysis in two laboratories by comparing the analysis results of the same sample.Finally, we compared the total content of PAH 16 and PAH 42 and discussed the importance of the expanded list.
We did not examine the influence of the atmosphere on the PAH emission from carbonaceous materials for the purpose of method validation.However, using this approach, Arnesen et al. investigated the influence of the atmosphere on the emission of PAHs from green anode paste baking [ 14 ].Nevertheless, a detailed examination of the impact of different atmospheres and flow conditions on the fingerprint of PAHs emitted from carbonaceous materials might be a potential subject for future work.
Matrix and material
Three types of raw carbon materials (graphite, coke, and charcoal) were tested in Argon (Ar) atmosphere in the two different experimental setups, with three parallels for each experiment, resulting in 18 test runs, as shown in Table 3 .Since three samples were collected for each experiment (for the three different temperature set-points), this resulted in 54 samples for PAH analysis.The coke and charcoal, materials for the experiments, were provided by an industrial partner, and the chemical analysis of these materials is presented in Table S2 in the Supplementary section.
In addition to coke and charcoal, graphite grade G330 (Schunk Tokai) was used as a reference material.The total ash content in this material is ≤ 300 ppm, according to property information obtained from the supplier.Graphite, the most stable form of carbon, was chosen for similar reasons as Argon, as, under ideal conditions, it should contain nothing but carbon.The effect of temperature on pure carbon in an inert atmosphere could hence be used as a baseline to compare other carbon materials.
Argon gas purity grade 5.0 was chosen for the atmosphere to test the efficiency of the setup, the difference between the raw materials, and the influence of temperature, initially eliminating the additional influence of the atmosphere to establish a reliable baseline for future studies in different atmospheres.
Isopropyl alcohol ( ≥ 98 % Technical, VWR Chemicals) was used as a suitable solvent for capturing polycyclic aromatic hydrocarbons.PAHs are lipophilic non-polar compounds; however, substituted groups can contribute to their polarity [ 20 ].These compounds exhibit low solubility or insolubility in water but are soluble in organic compounds such as toluene, benzene, carbon tetrachloride, etc. Isopropanol is, in addition to being less toxic than, for example, toluene, also a better choice than, for example, acetone, which has a low boiling point.
Preparation of the carbon material samples included sieving to specific particle size (5-10 mm for graphite and coke and 5-25 mm for charcoal), making a representative sample according to the literature procedure (Spoon method) [ 21 ], and drying at 107 °C ± 3 °C to constant mass [ 22 ], in an Entech muffle furnace in air.The sample's initial weight was measured before the experiment: for Setup 1, the materials weighed 300 g, while for Setup 2, the graphite and coke weighed 15 g, and the charcoal weighed 10 g.
Comparing setup 1 and setup 2
Conducting parallel experiments using the same materials, temperatures, and atmosphere in both setups serves to evaluate the setups in terms of usability and simultaneously compare the scientific outcomes of the experiments.
The most significant differences between the two setups are the amount and position of the sample, and the orientation of the gas flow.While the crucible in Setup 1 is 50 cm high and has an internal diameter of 11.5 cm, the sample holder in Setup 2 has dimensions of 10.3 cm x 4 cm x 2 cm, which significantly limits the amount of sample to be tested, especially for the materials with low relative density such as charcoal.However, if the desired goal can be achieved by using a smaller sample, this "limitation" does not have to be a disadvantage, as working with smaller samples is easier and safer for the operator.While Setup 1 has a vertical crucible, in which the carrier gas enters from the bottom and passes evenly through the sample, Setup 2 has a horizontal steel tube in which the boat with the sample is placed, and the carrier gas comes from the side, which means that it does not pass through the sample but moves above it.In the presented experiments, inert gas was used, so the advantages and disadvantages of this difference cannot be fully determined from a gas-solid reaction point of view.
As mentioned above, all carbon material samples were weighed at room temperature before and after the experiment.Fig. 3 shows the average mass loss (%) for each carbon material used in the experiments in both setups.Most of the mass loss may be caused by the thermal evaporation of volatile organic components in the sample and is very similar in both setups for each material, although slightly lower values (up to 1 % of total loss) were observed in the experiments in Setup 1. Table 4 presents the results of the PAH 42 analysis as total PAHs emitted from carbon materials tested in both setups.While for graphite, the difference in total PAHs between setups is almost insignificant, the value for charcoal in Setup 2 is nearly double the value for Setup 1.
Fig. 4 shows the results of PAH 42 analysis for experiments with graphite as a carbon material in an Argon atmosphere, measured as total PAH across the full temperature range in ng g − 1 sample reacted.Since graphite is the most stable form of pure carbon and Argon is an inert gas, the release of PAHs should be limited.However, the results show PAH emission even under these conditions, although the amount of released PAH compounds is very low in both setups.Although the difference in total PAH between the results for the two setups is even minor compared to other tested materials, we can see a significant discrepancy between the compound concentration trends.The differences in the compound concentration trends between the two setups led us to examine the influence of furnace contamination on the analysis results, which will be discussed later in the "Validation of cleaning procedure " section.
If we compare the results of PAH 42 analysis of samples obtained from experiments with coke as a carbon material performed in two setups ( Fig. 5 ), we can see that despite differences in absolute concentration, the distribution trend is very similar for all compounds except for a few (e.g., Biphenyl, 2-Methylanthracene, Fluoranthene).
For a convenient presentation of the results, 42 compounds are divided into two groups, one which includes compounds with 2 and 3 rings -Low Molecular Weight PAHs (LMW PAHs), and the other which includes other compounds with 4-6 rings -High Molecular Weight PAHs (HMW PAHs).Fig. 6 shows that most LMW and HMW PAHs were released in Phase II of the experiment with coke in Setup 1, i.e., at a temperature of 401-750 °C.The same is the case for the experiment with coke in Setup 2, although a large proportion of the released HMW PAHs is also in Phase III of the experiment, i.e., at temperatures of 751-1100 °C.Although the boiling points of LMW PAHs range from 218 °C for Naphthalene to 390 °C for Retene, and the boiling points of HMW PAHs from 383 °C for Fluoranthene to 550 °C for Dibenzo(ah)anthracene, it can be seen from the results that most of these compounds are released at temperatures higher than their boiling points.The experimental results suggested that PAHs emitted during the pyrolysis process at low temperatures mainly came from evaporation of the aromatic structures initially within the coke.With a further increase in pyrolysis temperature, PAH production initially increases, giving a maximum peak in Phase II of the experiment, after which it decreases with increasing temperature, indicating that two competitive reactions occur during coke pyrolysis: PAH formation and PAH decomposition.It was postulated that at temperatures below 750 °C the dominant reaction is the PAH formation.One possible mechanism of their formation via pyrosynthesis from low hydrocarbons is that higher pyrolysis temperature ( > 500 °C) breaks carbon-hydrogen, and carbon-carbon bonds to form free radicals, which will combine with acetylene, followed by their further condensation with aromatic ring structures [ 23 ].This aromatization process might give off polycyclic aromatic hydrocarbons in Phase II of the experiments with coke in both setups.Similar behavior was observed with PAH production from coal pyrolysis, where PAH emission concentrations reached the maximum at a pyrolysis temperature of 800 °C [ 24 ].Almost the same can be observed by comparing the results of the analysis of samples obtained by experiments with charcoal in the two setups ( Fig. 7 ).Again, with a few exceptions (Biphenyl, Coronene), the distribution trend is very similar in both setups.
In experiments with charcoal, in both setups, the most significant amount of both LMW and HMW PAHs was released in Phase II, that is, at temperatures of 401-750 °C.However, as in the previous case, the trend is somewhat different for HMW PAHs since a large amount of these compounds was also released in Phase III of the experiment in Setup 2 ( Fig. 8 ).
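The grouping used in Figs. 6 and 8 can be expressed as a simple calculation: compounds with 2-3 rings are summed as LMW PAHs and those with 4-6 rings as HMW PAHs for each temperature phase. The sketch below uses a few compounds with standard ring counts and hypothetical concentrations, not the measured data.

```python
# Hedged sketch of the LMW/HMW grouping: 2-3 rings -> LMW, 4-6 rings -> HMW,
# summed per temperature phase. Concentrations (ng per g sample) are
# hypothetical placeholders.
rings = {"Naphthalene": 2, "Phenanthrene": 3, "Fluoranthene": 4,
         "Benzo(a)pyrene": 5, "Benzo(ghi)perylene": 6}
# per-phase concentrations: {compound: [Phase I, Phase II, Phase III]}
conc = {"Naphthalene": [12.0, 85.0, 9.0], "Phenanthrene": [3.0, 40.0, 6.0],
        "Fluoranthene": [0.5, 14.0, 11.0], "Benzo(a)pyrene": [0.1, 6.0, 8.0]}

for i, phase in enumerate(["Phase I", "Phase II", "Phase III"]):
    lmw = sum(v[i] for c, v in conc.items() if rings[c] <= 3)
    hmw = sum(v[i] for c, v in conc.items() if rings[c] >= 4)
    print(f"{phase}: LMW = {lmw:.1f} ng/g, HMW = {hmw:.1f} ng/g")
```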
Validation of cleaning procedure
To determine the cleaning efficiency of both setups, three consecutive tests, to ensure no other material was introduced into the furnace between tests, were performed with no carbon sample in the furnace, with a standard cleaning procedure before the first and between each test.The experiments were performed in an Argon atmosphere, like those with carbon materials.The only difference was that the bubbling bottles were not changed after each temperature phase, but one sample was taken at the end of the experiment.The results are shown in Fig. 9 and represent the amounts of released PAHs due to the contamination of the crucible/sample holder.
From the results shown in Fig. 9 , it can be concluded that there is some contamination of the sample holders, even after multiple cleanings.Although contamination was highest in test 1, neither performing multiple cleaning cycles nor additional thermal treatment in the form of a subsequent experiment contributed significantly to the reduction of contamination, that is, to the reduction of the amount of PAHs released from the empty furnaces.
Fig. 10 shows a comparison of the amount of released PAH compounds from experiments with no material and those with graphite as carbon material.Apart from the trend being quite similar, the concentrations of released compounds from empty furnaces are also significant compared with those from experiments with graphite, indicating that the influence of contamination is considerable when studying a material with very low PAH emission.Most of the released PAHs in the graphite experiments could hence be due to contamination of the furnace.From the test results for Setup 1, even higher concentrations of many PAH compounds can be observed in the empty furnace, so the amount of total PAHs is almost three times higher than one using graphite ( Table 5 ), which may depend on the type of material used in the previous experiments.In addition, compared to the sample holder of Setup 2, the crucible of Setup 1 has a significantly larger surface area relative to the sample size; therefore, greater contamination can be expected in this setup.The traces of materials used in these furnaces are, given the thorough furnace cleaning procedure, clearly not easily removed by solvents that are entirely or relatively safe for use, and the thermal profile and holding time of thermal treatment in the air need further adjustment to completely remove contamination traces.Since the method of analysis of polycyclic aromatic hydrocarbons is sensitive, detecting these compounds as a consequence of contamination is inevitable if present.Although the concentrations of the detected compounds are not high enough to significantly affect the fingerprint of other materials than graphite, for precise analysis, this contamination should be taken into account and be considered in error calculation or as a baseline if it cannot be avoided.There is no standardized methodology for procedural blanks in the scientific literature.However, the concept and importance of procedural blanks, which assess contamination throughout the measurement process, are widely recognized in terms of their impact on the reliability of produced data [ 25 , 26 ].Therefore, we suggest that an empty furnace trial is always run prior to measurement to determine the level of baseline emission from the setup.
Breakthrough test
To test the efficiency of the sampling line, i.e., the breakthrough of PAHs through the third bubbling bottle, two additional experiments were performed with charcoal in an Argon atmosphere in Setup 2 (the same sampling line was used in both setups).Breakthrough testing was performed by adding an analytical thermal desorption (ATD) tube filled with Tenax TA adsorbent and glass wool to the sampling line, behind the last bubbling bottle.If there is a breakthrough, off-gas from the furnace will pass through three bubbling bottles, the first of which is empty, the following two filled with approximately 100 mL of 2-propanol, and the ATD tube, where in this case, the LMW PAH will be absorbed.The role of the glass wool was to shield the absorbent material by filtering any particles in the stream.The PAH content in the ATD tubes was analyzed using thermal desorption and GC-MS.Table 6 shows the results of PAHs in the tubes, compared to the mean of three experiments of PAHs in 2-propanol.
The results show the presence of negligible concentrations of the lightest compounds, but a significantly higher concentration of Fluoranthene and Triphenylene can be observed.The solubility of Naphthalene and Acenaphthene in 2-propanol is higher than that of the remaining listed compounds [ 27 ], which could explain the observed breakthrough.Alternatively, the higher concentration of Fluoranthene and Triphenylene might be attributed to the evaporation of 2-propanol containing PAH from the sampling line to the tube if the solvent is not kept at a sufficiently low temperature since, in these two experiments, a loss of 4.04-6.14mL of 2-propanol was observed for each temperature set point, i.e., an average loss of 2.30 % of 2-propanol per experiment.A similar occurrence was observed by Arnesen et al., who tested the breakthrough of LMW PAHs from 2-propanol into the sample collection system to investigate the reason for the small amount of LMW PAHs for all experiments performed compared to HMW PAHs [ 14 ].Nevertheless, the influence of solvent evaporation is improbable as it would also result in elevated levels of the lightest compounds in the tube.As mentioned above, in Setup 1, a cryostat bath with a water/glycol cooling system was used for cooling 2-propanol, and the temperature of the bath was maintained at − 8 to -10 °C, while in Setup 2, an ice bath was used, which makes controlling the temperature of the solvent more difficult.However, a slight evaporation of 2-propanol was also observed in Setup 1.As such, we suggest using a cryostat for more reliable low-temperature cooling to ensure minimum solvent evaporation.
Repeatability of measurements
Observing the error bars in Figs. 5 and 7 , we can conclude that the quantitative repeatability of the experiments is low, particularly for Setup 2. The coefficients of variation from the mean values of the total amount of PAHs per tested materials are shown in Table 7 .
Table 7. Coefficient of variation (CV) for all sets of PAH 42 analysis results (NILU) for each individual carbon material in both setups. CV was calculated using the equation CV (%) = (SD / x̄) × 100 %, where SD = standard deviation and x̄ = mean of three measurements. Based on the displayed coefficients of variation, we concluded that we had better experimental control in Setup 1. One possible reason for this could be contamination of the furnace since, between sets of three tests, the furnace of Setup 2 was used for other experiments. Another reason could be contamination of the alumina tube due to the high temperature gradient in the setup, causing diffusion of condensate onto the walls [28], and the impossibility of its complete extraction through the steel tube into the sampling system. Despite this, the trend of concentrations of released PAH compounds for all materials is comparable between the setups, providing insight into the PAH emission fingerprint from carbonaceous materials, which is the objective of this method.
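The CV defined in the Table 7 caption can be computed directly from the triplicate totals, as in the minimal sketch below (the three totals shown are hypothetical placeholders, not the measured results).

```python
# Hedged sketch of the CV calculation: CV (%) = SD / mean * 100 for triplicate
# total-PAH values. Sample standard deviation (n-1) is assumed.
import statistics

totals_ng_per_g = [1450.0, 1320.0, 1610.0]   # hypothetical triplicate totals
mean = statistics.mean(totals_ng_per_g)
sd = statistics.stdev(totals_ng_per_g)
cv_percent = sd / mean * 100
print(f"mean = {mean:.0f} ng/g, SD = {sd:.0f} ng/g, CV = {cv_percent:.1f} %")
```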
Other advantages and disadvantages of both setups should be considered, such as controlling the temperature, the time required to start the experiment, i.e., assembling the setup, the efficiency and the time it is necessary to spend for cleaning, as well as the required amount of chemicals, etc.In the context of temperature control, it is not uncommon for the actual temperature in furnaces, such as those employed in this study, to exhibit deviations from the desired set temperature.Therefore, the minor deviation between the observed temperature and the predetermined temperature, as well as the slight variation in temperature over parallels in Setup 2, can be regarded as advantageous features of this setup compared to Setup 1.
Comparing results of PAH analysis in different laboratories
Although this work was primarily focused on developing a laboratory method for comparing emission of PAHs from different carbon materials under different conditions such as temperature and atmosphere, consistent and accurate analysis of the samples obtained from the experiments is clearly of great importance for the results.Selection of a laboratory for sample analysis is based on several criteria such as possession of appropriate standards and developed methodology for PAH analysis, instrument sensitivity, as well as the price and time required for analysis.
As described under the Method details, the first set of samples obtained from the experiments in Setup 1, with graphite as the carbon material, were primarily analyzed for PAH 16 at the SINTEF laboratory in Trondheim.However, the results showed that only 8 out of 16 PAHs were detected (Naphthalene, Acenaphthylene, Acenaphthene, Fluorene, Phenanthrene, Anthracene, Fluoranthene, Pyrene), with Naphthalene accounting for 39-54 % of the total amount of PAHs in the samples.In contrast, some higher molecular weight PAHs (Benz(a)anthracene, Chrysene, Benzo(b)fluoranthenes, Benzo(k)fluoranthenes, Benzo(a)pyrene, Indeno(1,2,3cd)pyrene, Dibenzo(ah)anthracene, Benzo(ghi)perylene) were not detected in any sample, indicating that their concentration was below the lower detection limit (LDL) of the instrument, which can be seen in Fig. 11 (blue bars).As a low PAH concentration was expected in samples from the reaction of graphite, samples obtained in one experiment (Exp3) were pre-concentrated and reanalyzed.As expected, the results after concentration showed a significantly higher concentration of the previously detected compounds and also detected Chrysene.The other seven compounds from the PAH 16 list were, however, still not detected ( Fig. 12 ).
Samples obtained in two of the three parallel experiments were subsequently analyzed at the NILU laboratory in Kjeller.Comparing the analysis results from the different laboratories, it can be observed that the concentrations of some compounds (especially low molecular weight species such as Naphthalene and Acenaphthylene) are higher in SINTEF's results, resulting in higher total PAH values.The mean values of total PAHs calculated from SINTEF's and NILU's results are 1.9 μg g − 1 and 1.7 μg g − 1 , respectively.In contrast to the SINTEF's analysis, all 16 compounds were detected in NILU's analysis ( Fig. 11 ).On the other hand, if we compare the concentrations of the compounds detected in both laboratories and their distribution with temperature, the trend is similar.Many of the samples used in the current study required a very low detection limit in the analysis, and hence, all further samples were analyzed at the NILU laboratory.
Comparing PAH 16 and PAH 42
Fig. 13 presents the ratio of PAH 16 to PAH 42 for all tested carbon materials, where the mean value of the three measurements for each experiment is shown. It can be seen from the results that PAH 16 makes up 45-76 % of the total amount, which implies that the remaining compounds can make up more than 50 %, as is the case for the analysis results of the samples obtained by thermal treatment of coke. Of these, 12-34 % are 1-Methylnaphthalene and 2-Methylnaphthalene, compounds that are known to be more persistent than the parent compound, Naphthalene. In addition, although limited, data on the toxicity of these compounds can be found in the literature [29][30][31].
Given the discussion above regarding the contamination of the furnace, it is crucial to consider how this contamination can impact the analysis results.Specifically, when examining the PAH16/PAH42 ratio in graphite, it becomes evident that furnace contamination has a greater influence compared to other materials due to the minimal emission levels associated with graphite.Consequently, it is important to acknowledge that the results for graphite carry a larger margin of error.
Summary
• Two experimental setups were designed and tested to develop a method for studying how different process conditions affect the emission of PAH from carbon materials. Three carbon materials (graphite, coke, and charcoal) were tested in an inert atmosphere (Ar).
• A comparison was made between the PAH analysis methods used in two different laboratories. In the SINTEF laboratory, eight out of sixteen PAH 16 compounds were detected in all analyzed samples. The method detection limit plays an essential role in the PAH analysis of samples obtained by thermal treatment of carbon materials. For our samples, the more suitable sensitivity of the instrument was found in the NILU laboratory, although both methods proved to be suitable for PAH analysis.
• All samples were analyzed for PAH 42 instead of PAH 16, and it was shown that PAH compounds from the extended part of the list make up a significant proportion of the total amount of released PAHs, with 49-55 % and 37-38 % for coke and charcoal, respectively.
• Regarding the influence of temperature, the highest emission of PAHs for both coke and charcoal was observed in the second phase of the experiment, that is, at a temperature of 401-750 °C.
• Comparing the results of PAH 42 analysis of samples obtained from experiments in the two setups, it can be concluded that, despite challenges with cleaning, the distribution trend is very similar for all compounds except for a few, indicating that both setups are applicable to achieve our goal, the PAH emission fingerprint.
Limitations and future work
From the results of the method validation experiments, we observed that better experimental control was achieved in experiments in Setup 1, with a coefficient of variation ranging from 2.83 to 30.44.However, to use this method to obtain more precise emission data on PAH emissions from carbon materials, it is important to consider contamination.
In addition to the need for more extensive research on cleaning methods, there are various opportunities for future research that would enhance the capabilities of the developed method to acquire an even more distinct pattern of PAH emissions from carbonaceous materials.Investigating the impact of various atmospheres and flow rates on PAH emissions during the thermal treatment of carbon materials could provide a comprehensive understanding of the method's applicability for quantitatively determining PAH emission fingerprints in conditions like those in industrial activities.
Fig. 2. Temperature ramping diagram with three temperature set-points at which samples are taken.
Fig. 3. Average total mass loss (%) over the temperature cycle for carbon material samples in both setups. Error bars show the variation in triplicate experiments.
Fig. 4. Results of PAH 42 analysis (NILU) in ng g⁻¹ sample reacted for experiments with graphite as a carbon material performed in Setup 1 (blue) and Setup 2 (orange). Results are the total amount of each PAH for the full temperature range. Error bars show the standard deviation for each compound based on two parallel experiments in Setup 1 and three parallel experiments in Setup 2.
Fig. 5. Results of PAH 42 analysis (NILU) in ng g⁻¹ sample reacted for experiments with coke as a carbon material performed in Setup 1 (blue) and Setup 2 (orange). Results are the total amount of each PAH for the full temperature range. Error bars show the standard deviation for each compound based on three parallel experiments.
Fig. 6. Distribution of LMW and HMW PAHs released in experiments with coke in: (A) Setup 1, and (B) Setup 2, with temperature.
Fig. 7. Results of PAH 42 analysis (NILU) for samples obtained from experiments with charcoal as a carbon material performed in Setup 1 (blue) and Setup 2 (orange). Error bars show the standard deviation for each compound based on three experiments.
Fig. 8.
Fig. 9. Results of PAH 42 analysis (NILU) for triplicate experiments in empty furnaces in an atmosphere of Ar in (A) Setup 1, and (B) Setup 2.
Fig. 10. Results of PAH 42 analysis (NILU) for experiments in empty furnaces (blue) and experiments with graphite as a carbon material (orange), in an atmosphere of Ar in (A) Setup 1, and (B) Setup 2. Error bars show the standard deviation for each compound based on three parallel experiments, except for experiments with graphite in Setup 1, where error bars show standard deviation based on two parallel experiments.
Fig. 11. Results of PAH 16 analysis performed at NILU (green) and SINTEF (blue) laboratories for samples obtained in experiments (Exp1 and Exp2) with graphite as a carbon material in Setup 1. Values represent the total amount of PAH emitted over the 25-1100 °C temperature range per gram of graphite sample. Error bars show the standard deviation for each compound based on two experiments.
Fig. 12. Results of PAH 16 analysis (SINTEF): (A) before, and (B) after concentrating the sample obtained in the experiment with graphite as a carbon material (Exp3) performed in Setup 1.
Table 1. List of 42 PAH components.
Table 2. Details of PAH analysis methods in two laboratories.
Table 4. Total PAH emitted from carbon materials in Setup 1 and Setup 2. Values represent the mean of two parallels for graphite in Setup 1 and means of three parallels for all other experiments.
Table 5. Total PAH emitted from experiments with an empty furnace and experiments with graphite, performed in both setups. The values indicate the average of two tests conducted for graphite in Setup 1, while for all other experiments, the average is calculated from three tests.
Table 6. Concentrations of PAH species detected in tubes and 2-propanol after the experiment with charcoal in Argon atmosphere. Values in 2-propanol are an average of three experiments, and results from tubes are the average of two experiments.
"Environmental Science",
"Chemistry"
] |
Foam nest in Scinax rizibilis (Amphibia: Anura: Hylidae)
From February 1993 to January 1994 and from November 1994 to February 1995, in southern São Paulo state, we studied the breeding activity of Scinax rizibilis (Bokermann, 1964), the only known hylid species with oviposition in foam nests. The foam nests were constructed by jumps of the female during oviposition. The clutches contained 850-1250 eggs, which were almost black, except for the small clear vegetative pole. The construction of the foam nest in S. rizibilis is unique among the species with this characteristic. The complexity of the foam nest is intermediate, and egg development was faster when eggs were surrounded by foam. It is possible to recognize a progression from less developed structures, represented by the bubble nests of some microhylid frogs, to more complex examples, such as the foam nests of Leptodactylidae or Leiuperidae.
MATERIAL AND METHODS
Scinax rizibilis was observed in a temporary pond at the Fazendinha São Luiz (24°21'S, 48°44'W, 800 m altitude) in the municipality of Ribeirão Branco, southern São Paulo state, Brazil, from February 1993 to January 1994, and from November 1994 to February 1995. We visited the pond either fortnightly or monthly, and monitored it for 2-6 nights during each visit, totaling 148 hours in 46 visits. The pond area was approximately 1,950 m² and the distribution of vegetation was regular, with a predominance of Juncaceae. The pond was bordered by typical Atlantic Forest flora.
Nocturnal observations were conducted with a 6 V spotlight covered with sheets of thin red plastic to reduce the stress on the animals (ROBERTSON 1990).Focal-animal, all occurrences, and sequence samples were used for behavioral records (LEHNER 1996).
Pairs found in amplexus were collected manually.We measured the snout-vent length (SVL) of individuals to the nearest 0.1 mm with a caliper ruler and weighted them with a Pesola ® balance to the nearest 0.05 g.The clutches obtained were preserved in 5% formalin.Ten eggs of each clutch were measured under a stereomicroscopic using a micrometric ocular.
The pairs in amplexus (n = 5) were placed into separate aquaria (25 x 8 x 20 cm) with water at a depth of 3 cm. The subjects were filmed with a video camera (two pairs) and photographed. In order to test the influence of foam on egg development, we monitored four egg masses over 24 hours, two with foam and two without foam, maintained in a plastic container (10 x 10 x 10 cm) with water at a depth of 8 cm. We extracted some eggs from the foam nest with a fine-mesh dipnet. Developmental stages following GOSNER (1960) were determined using a stereomicroscope. For statistical analyses, we used the Pearson correlation coefficient (ZAR 1996) with a significance level of 0.05.
Foam nest construction (n = 2) lasted 38 or 40 minutes and was performed by females during amplexus (Fig. 4). The beginning of oviposition is marked by circular swimming by the female. The male pushed its two feet very close together, forming a channel between its cloaca and the female's cloaca. After that, the pigmented eggs appeared. Since the mucus secreted by the reproductive tract of the female is transparent, we were not able to observe the exact moment of its release.
This sequence of oviposition was repeated several times. During the entire process the female performed alternating movements of the legs to mix the mucus and eggs. After egg expulsion, the female jumped up in the water, and the impact of its body against the water allowed the retention of air bubbles in the mucus, forming the foam nest. The end of oviposition is marked by a characteristic signaling posture of the female: the female arched its back inwardly, the head was elevated (at 45° to its body), and the legs and arms were distended. Then, the male slipped laterally off the female's body and the process of oviposition ceased (Fig. 4F). Egg development was faster when eggs were surrounded by foam (Fig. 5).
DISCUSSION
The clutch size is positively correlated with SVL and female mass, as observed in other Neotropical anuran species (MARTINS 1988, BASTOS & HADDAD 1996). HADDAD et al. (1990), in their description of the foam nest of S. rizibilis (as Hyla cf. rizibilis), suggested that the females have an active part in nest construction; herein we confirm it. The characteristic posture of the female signaling the end of oviposition was also observed in the leiuperid Physalaemus cuvieri (J.P. Pombal Jr., pers. obs.), and may be common in anurans.
The function of foam nests is controversial. Some authors have suggested that it might: 1) reduce exposure to aquatic predators (HEYER 1969); 2) protect the eggs from desiccation (HÖDL & GETTINGER 1985); and 4) supply oxygen for eggs and embryos (SEYMOUR 1999, SEYMOUR & LOVERIDGE 1994), accelerating development (HADDAD & HÖDL 1997). HADDAD et al. (1990) suggested that foam nests in Scinax rizibilis evolved mainly as a protection against insolation and desiccation of eggs and embryos. However, the data obtained in this study corroborate HADDAD & HÖDL (1997), because eggs without foam (little oxygen) developed slower than eggs with foam (more oxygen) (Fig. 5). Development acceleration and protection are not mutually exclusive functions. Therefore, we cannot say that the foam nests of S. rizibilis provide only one of these two benefits. For example, eggs of species that breed in temporary ponds, such as S. rizibilis, hatch quickly into larvae, decreasing the risk of desiccation and predation by conspecifics (HÖDL 1992). It is possible to recognize a sequence from a less elaborate floating device, represented by the bubble nest, to an elaborate structure represented by the foam nest (Tab. I). The foam produced by S. rizibilis is intermediate between the two extremes (HADDAD & HÖDL 1997). The less complex foam nests are built by Limnodynastidae, Microhylidae, and Myobatrachidae, which deposit eggs on the water surface surrounded by a few large air bubbles produced by the male and female (HADDAD & HÖDL 1997), or by females that paddle their forelimbs to start a flux of water (TYLER & DAVIES 1979). The more complex foam nests are those built by Hyperolidae, Leiuperidae, Leptodactylidae, and Rhacophoridae, whose eggs are surrounded by many small air bubbles and are deposited on the water surface (HEYER 1969, HÖDL 1992) or on leaves/litter (JENNIONS et al. 1992, KADADEVARU & KANAMADI 2000).
As stated by HEYER (1969), pre-adaptations to foam nest construction are widespread among anurans because many species are able to secrete mucus during oviposition. HADDAD et al. (1990) manually beat the mucus of Scinax hiemalis and obtained a foam nest. The fact that foam nests are known from seven anuran families and that the construction procedures differ among them indicates that foam nests may have originated independently in these families.
Figures 1-3. Relationship between: (1) snout-vent length and number of eggs of females of S. rizibilis; (2) mass and number of eggs of females of S. rizibilis; (3) diameter and number of eggs of S. rizibilis.
Figure 4. Stages of foam nest construction in S. rizibilis. Drawing based on slides.
Figure 5. Developmental stages of tadpoles of S. rizibilis with and without foam.
Table I. Anuran foam nests of selected species (HADDAD & HÖDL 1997). | 1,640.8 | 2010-01-01T00:00:00.000 | [
"Biology"
] |
Research on Online Education Curriculum Resources Sharing Based on 5G and Internet of Things
Information technology has brought great changes to China’s education. 5G technology provides a better guarantee for the sharing of curriculum resources, facing the extreme shortage of educational resources in China. The contradiction between limited educational resources and unlimited development needs of higher education has become increasingly prominent. How to effectively realize resource sharing among universities has become a problem that must be considered in the talent development of universities. In order to solve this problem, universities must improve the utilization rate of resources, maximize resource sharing, and establish a more perfect resource sharing mechanism under the background of 5G and Internet of Things. This paper analyzes the current situation of research at home and abroad, the current situation of resources development, and the application of online courses under the background of Internet of Things, thus constructing an overall framework of curriculum resource sharing mode. According to effective experiments, the offline curriculum education resource sharing and traditional resource sharing schemes in the background of 5G and Internet are compared, and the necessity and importance of applying 5G Internet of Things are verified.
Introduction
With the increasing scale of higher education, whether there is a complete resource sharing mechanism is an important factor to ensure that students can receive high-quality education. Faced with limited teaching resources and unlimited development needs, universities must reasonably improve the utilization rate of resources, make use of the convenience of 5G and Internet of Things technology, maximize resource sharing, and propose solutions to the problems existing in resource sharing. The literature [1] shows that there is an extreme shortage of educational resources in China at present. According to effective investigation, most students cannot get high-quality teaching resources. On the university campus, with the convenience of 5G and the Internet, they have gradually formed a mode of sharing curriculum education resources, which can share teaching resources, teachers' resources, and curriculum resources and advanced teaching facilities. According to the investigation of college students' access to resources in the literature [2], it is concluded that the simplest way for college students to obtain high-quality learning resources is through libraries and online search tools. According to a report, college students tend to share resources among classmates and friends. Instead of directly obtaining resources, on average, students share learning resources twice or more times a week. They will share their handwritten notes and textbooks, purchased extracurricular books, and use some social software that cannot be directly shared, such as sharing learning resources on the Internet. The literature [3] studies the main ways of sharing highquality resources inside and outside universities. The conclusions are as follows: the first one is through learning textbooks, extracurricular books, online courses, and learning websites. The other is network resources. The characteristics and differences between them are obvious. The literature [4] investigated more than 600 teachers in order to investigate the influence of shared resources on teaching and practice. Knowing how they use shared resources and how to choose shared resources, the survey results show that shared education can not only reduce costs but also bring greater flexibility to education. The literature [5] is to verify the timeliness of the application of blog and wiki resource sharing mode. Taking "Principles and Methods of Instructional Design," one of the universities awarded by the Ministry of Education of China, as the research object, based on the research and analysis of curriculum resources sharing at home and abroad, this paper establishes a general framework of curriculum resources sharing mode. The literature [6] studies the sharing of massive open online course resources and puts forward a solution to the fragmentation of network resources. The implementation process and application framework of linked data are introduced. The literature [7] puts forward a brand-new way of education, which is called Fujian-Taiwan cooperation in construction and education. How to make full use of their respective educational resources, limited integration and sharing between the two campuses is the key to improve the education quality of talent training programs. Organizational coordination institutions and teaching quality monitoring mechanisms have been established to ensure the substantive sharing of educational resources. 
With the prevalence of the Internet of Things and the explosive growth of various data flows, the Internet of Things may face the problem of resource shortage. Reference [8] puts forward a scheme to solve the shortage of resources sharing. Considering the different communication requirements of various sensors, a new function is designed to drive the learning process. The results show that this algorithm can achieve good network performance. The literature [9] studies the platform of "Construction and Sharing of Moral Education Curriculum Resources in Shanghai Universities." It is found that they are realized through "1+2" operation mode, cloud storage structure, user classification management, and resource sharing scoring mechanism, which has the characteristics of intelligent resource retrieval and real-time resource evaluation and has become one of the important research achievements of "the construction and sharing of moral education curriculum resources in colleges and universities." The literature [10] describes a knowledge based on a learning development system. Used in e-learning courseware design and elearning resource management, the distributed e-learning system development environment is developed by building a system model. The literature [11] introduces intelligent algorithms in a distributed manner to coordinate the overall goals of cellular systems with the individual goals of Internet (LOT) devices. The utility function of Internet of Things users is designed, a new incentive mechanism is constructed, and a priority queue is set for continuous actions. The literature [12] proposes a distribution protocol using blockchain technology. In the Online education resources, there are unfair distribution of teaching resources and difficulties in retrieving resources. Facing the emergence of 5G and Internet of Things technology, teaching resources can be shared and utilized in a variety of ways. With the support of 5G technology and Internet of Things technology, this paper realizes the research framework of sharing teaching resources and puts forward the theoretical model of sharing resources. This paper compares the offline curriculum educational resource sharing and traditional resource sharing schemes under the background of 5G and Internet and greatly improves the performance of teaching resources.
Research Background.
With the development of science and technology and the prevalence of Internet technology, 5G networks have advantages over traditional networks in coverage, energy consumption, hotspot capacity, and data transmission. Given the current extreme scarcity of educational resources, some places do not have access to advanced teaching resources, so it is necessary to make use of the convenience of the Internet to share resources over the network.
Significance of Research.
The rise of the Internet of Things has also promoted the development of the education industry and launched a brand-new education model. The sharing of network resources has broken the traditional teaching concept, so that students are no longer limited by time and place. The sharing of network resources can not only save manpower and material resources but also quickly let more people receive high-quality educational resources.
Research Status at Home and Abroad.
Effective storage and management of data is a problem that information resource sharing parties need to solve. A large number of scholars have done a lot of research on data encryption, storage, and management. Safe and efficient storage of shared data is the basis of safe sharing of resources, which can maximize the use of effective resources, maximize the value of data resources, and promote the development of society and production.
The prevalence of Internet of Things technology leads to frequent sharing and exchange of data, and more and more people are beginning to pay attention to security and privacy issues. Access control is an important technology for ensuring the data security of the Internet of Things. It controls the access of terminal members to shared data resources, making access more secure, effective, and flexible.
Problems in Resource Sharing
In the process of resource sharing, there are also many security problems. The data processing ability of the data exchange mode is poor, and sharing security cannot be guaranteed. The main problems are as follows: (1) resource interconnection and interoperability in multidomain IoT scenarios: it is difficult to share data across domains because each Internet of Things system is independent and services are massively diversified and decentralized, resulting in difficult data sharing, poor service interaction, and weak system coordination and linkage; (2) the confidentiality and privacy of data are under great threat, so ensuring that information resources are not leaked and that shared resources remain secure has become a major challenge for Internet of Things resource sharing. 1.5. Implementation Process of Curriculum Resources. We should make full use of the convenience brought by the Internet of Things to our lives. Although network teaching has been popularized, traditional teaching methods are still deeply rooted, and some teachers do not use information technology very well. Therefore, teachers should fully experience the integration of information technology and the classroom, and should be guided to recognize and utilize information-based teaching resources and take their essence. In some poor areas, school curriculum resources are extremely scarce, so many students cannot enjoy high-quality educational resources; the sharing of network resources is therefore particularly important for them. Sharing resources is not only conducive to the reform of traditional teaching methods in poor areas but also stimulates teachers' desire to teach and students' desire to learn, which greatly increases students' interest in learning and raises teaching quality to a higher level. Data sharers can exchange identities with data recipients. Data sharers can share data with data collectors on the server, encrypt the provided data, and upload it to the server.
Data acquirers are members who are interested in the data on the server. They can download the corresponding data from the server. If they have data access rights, they can decrypt the ciphertext with the group key. The relationship between the three is shown in Figure 1.
In Figure 1, the data sharer uploads course resources to the Service Registry side, and the data getter downloads resources from the Service Registry side, thus realizing the sharing of teaching resources between the data sharer and the data getter.
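The following minimal Python sketch illustrates the three roles described around Figure 1 (data sharer, server/Service Registry, data getter). The class and method names are illustrative assumptions, not the paper's actual implementation, and the ciphertext placeholder stands in for a resource already encrypted with the group key derived in the next subsection.

```python
class ServiceRegistry:
    """Stands in for the server in Figure 1: stores encrypted course resources by id."""

    def __init__(self):
        self._store = {}

    def publish(self, resource_id: str, ciphertext: bytes) -> None:
        # Called by the data sharer after encrypting the resource with the group key.
        self._store[resource_id] = ciphertext

    def fetch(self, resource_id: str) -> bytes:
        # Called by the data getter, who then decrypts locally with the same group key.
        return self._store[resource_id]


registry = ServiceRegistry()
registry.publish("calculus-lecture-01", b"opaque ciphertext bytes")  # sharer uploads
ciphertext = registry.fetch("calculus-lecture-01")                   # getter downloads, then decrypts
```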
Key Calculation for Shared Resources.
According to the Chinese remainder theorem, the system of congruences x ≡ ς_{t,1} (mod p_1), x ≡ ς_{t,2} (mod p_2), ..., x ≡ ς_{t,n} (mod p_n) is solved, and a unique solution is obtained modulo the product of the moduli. The resulting value is used as the group key: it can be used to encrypt the information exchanged between the shared-resource platform and the terminal devices, ensuring the security of the information exchange.
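As a concrete illustration of this step, the sketch below solves a small system of congruences with the Chinese remainder theorem; the residues ς_{t,i} and moduli p_i are made-up example values, not parameters taken from the paper.

```python
from math import prod

def crt(residues, moduli):
    """Solve x ≡ r_i (mod p_i) for pairwise-coprime moduli; the result is unique
    modulo the product of the moduli and can serve as the shared group key."""
    m = prod(moduli)
    x = 0
    for r, p in zip(residues, moduli):
        m_i = m // p
        # pow(m_i, -1, p) is the modular inverse of m_i modulo p (Python 3.8+).
        x = (x + r * m_i * pow(m_i, -1, p)) % m
    return x

residues = [2, 3, 2]          # example shares ς_{t,i}
moduli = [3, 5, 7]            # example pairwise-coprime moduli p_i
print(crt(residues, moduli))  # 23, since 23 ≡ 2 (mod 3), 3 (mod 5), 2 (mod 7)
```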
Encryption and Storage of Shared Resources.
After the terminal members in the shared resources complete the successful registration steps, they can selectively encrypt and store the uploaded resources.
Download and Access to Shared Resources
Each user who logs into the system searches for the corresponding resources on the platform through keyword search and the related description content. If a user needs to access a shared resource, they send a request to the platform, download the corresponding ciphertext resource according to the ciphertext link, and then use the corresponding authority to calculate the decryption key. According to m = c_{t,2m} ⊕ H_2(x_3), the plaintext resource is then recovered.
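A hedged sketch of the masking step m = c_{t,2m} ⊕ H_2(x_3) follows. Here H_2 is modelled as a SHA-256-based byte mask, which is an assumption, since the paper does not define the hash function concretely.

```python
import hashlib

def h2(x: int, length: int) -> bytes:
    """Expand the shared secret x into a byte mask of the requested length."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(f"{x}:{counter}".encode()).digest()
        counter += 1
    return out[:length]

def xor_mask(data: bytes, x: int) -> bytes:
    """c = m XOR H2(x) for encryption; applying it again recovers m = c XOR H2(x)."""
    return bytes(a ^ b for a, b in zip(data, h2(x, len(data))))

x3 = 23                                         # decryption key from the group-key step
ciphertext = xor_mask(b"course resource", x3)   # sharer encrypts before upload
assert xor_mask(ciphertext, x3) == b"course resource"  # getter recovers the plaintext
```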
Types of Learning Platform for Curriculum Resources.
With the advent of the information age, in order to better organize platform resources, colleges and universities have successively built educational administration management systems, online learning course centers, and so on. The main categories are as follows: (1) Students' autonomous learning: students can freely answer the discussion questions raised by teachers on the platform, and teachers can also upload exercises. It makes up for the defect that offline teachers have less communication with students and can also test students' autonomous learning ability [13,14].
(2) Live webcast learning: on the basis of the traditional classroom, live webcast learning offers more intelligent functions, such as sign-in, answering first, and class inspection. During the epidemic period, live webcasts were held at home. After the outbreak, colleges and universities also maintained a live learning platform. This is of great help to students' final review. Students can watch the live playback and consolidate their knowledge.
(3) Blended learning: in blended learning platform, representative platforms are "recording and broadcasting classroom" and "smart classroom," which have the functions of recording, playback, interaction, and group discussion. Give the teaching environment intelligent, integrated teaching management and diversified teaching scenes and even introduce the Internet of Things and big data technology to open up all intelligent teaching platforms.
(4) Online open class: students can choose courses according to their own hobbies, which are not limited by time and place. As long as there is a network, they can learn. In some places, the resources of famous teachers are scarce. We can learn the courses of famous teachers through the Internet, so that more people can come into contact with the classrooms of famous teachers and share resources to the maximum extent.
(5) Computer-assisted instruction: because this type of platform has a wide audience and the needs of various universities are similar, opening up the educational information platform and realizing resource sharing has become the mainstream trend. The flowchart is shown in Figure 2. 2.3. Resource Accumulation. Traditional teaching resources basically come from teaching materials, extracurricular books, etc. With the rise of multimedia teaching, more resources come from multimedia courseware. In the information age, network resources have gradually become the mainstream. According to the learning platform, the accumulated resources can be grouped as follows: (1) PPT: it is mainly written around the curriculum and helps students understand by means of pictures and micro videos, which helps to stimulate students' learning enthusiasm.
(2) Training of experimental teaching materials: it mainly focuses on the construction of experimental centers in schools and obtains medical experimental operation resources based on visual intelligent laboratories.
In the face of numerous platform resources, we should do a good job of sorting out and complete a certain amount of resource reserves according to the advantages of the Internet.
Architecture Design of the Internet of Things Data Exchange System
The data exchange framework is shown in Figure 3.
As can be seen from Figure 3, data sharing center is more important among several modules. Subdomains are divided into data providing and data releasing ports. Subdomains can provide teaching resources to the data sharing center and can also obtain resources to exchange data and interface with the data sharing center. Data sharing center is mainly used to provide data service and service interface, and the three modules are used to realize data exchange and sharing.
The core is the data exchange center, which plays the role of service query, service release, and data conversion. It consists of administrative center and edge shared components, as shown in Figure 4.
Management Center.
The management center handles the secure registration, publishing, and maintenance of data exchange services, as shown in Figure 5.
When a user initiates a query request, keywords are usually used to search, and the operation mechanism adopts recursive mode. The flow chart is shown in Figure 6.
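The recursive query mechanism can be sketched as follows; the node structure and catalogue contents are illustrative assumptions rather than the paper's actual data model.

```python
class Node:
    """A data-exchange node: the management center or a registered subdomain."""

    def __init__(self, name, catalogue=None, children=None):
        self.name = name
        self.catalogue = catalogue or {}   # keyword -> service description
        self.children = children or []     # registered subdomains

def query(node, keyword):
    """Recursive keyword lookup: check the local catalogue, then forward to subdomains."""
    if keyword in node.catalogue:
        return node.name, node.catalogue[keyword]
    for child in node.children:
        hit = query(child, keyword)
        if hit is not None:
            return hit
    return None

root = Node("management-center", children=[
    Node("campus-A", {"linear algebra": "video lectures"}),
    Node("campus-B", {"organic chemistry": "lab manuals"}),
])
print(query(root, "organic chemistry"))  # ('campus-B', 'lab manuals')
```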
Edge Shared Components.
The edge sharing component consists of three parts: data exchange management, authority management, and data processing management. Data exchange management provides data retrieval and data transmission, and the data processing management module extracts and transforms the data obtained by the exchange module, so as to realize the sharing and exchange of diversified data. The authority management module is the core that ensures data security. The architecture diagram is shown in Figure 7.
Necessary Experimental Results.
Comparing the implementation effect after adopting 5G and Internet of Things with other schemes, we designed an experiment to compare the degree of resource sharing without the background of Internet of Things with the prevalence of resources under the background of 5G and Internet of Things and obtained the following data, as shown in Figures 11 and 12. 3.3. Evaluation Results. According to the survey results, under the background of 5G and Internet of Things, information dissemination is wider, faster, and more accurate. The Internet of Things has become a mainstream trend. Of course, the requirements for 5G network are higher, so we should constantly optimize the network carrier and make sufficient preparations for the service bearing of 5G and Internet of Things.
Conclusion
We rationally apply the Internet of Things, which has brought great changes to the information age and maximized the sharing rate of information resources. Finally, on the basis of absorbing the theoretical research results and practical experience of foreign resource sharing, and proceeding from China's specific national conditions, this paper puts forward some new ideas on the construction of a literature resource sharing network in China. In particular, it offers constructive views and suggestions on how to choose the breakthrough point for the construction of the resource sharing network, how to establish a self-developing network operation mechanism, the guiding ideology of the construction of the resource sharing network in China, and the construction of the network system.
Data Availability
The experimental data used to support the findings of this study are available from the corresponding author upon request. | 4,040.4 | 2022-01-05T00:00:00.000 | [
"Computer Science",
"Education"
] |
Differential TLR activation of murine mesenchymal stem cells generates distinct immunomodulatory effects in EAE
Background Recently, it has been observed that mesenchymal stem cells (MSCs) can modulate their immunoregulatory properties depending on the specific in-vitro activation of different Toll-like receptors (TLR), such as TLR3 and TLR4. In the present study, we evaluated the effect of polyinosinic:polycytidylic acid (poly(I:C)) and lipopolysaccharide (LPS) pretreatment on the immunological capacity of MSCs in vitro and in vivo. Methods C57BL/6 bone marrow-derived MSCs were pretreated with poly(I:C) and LPS for 1 hour and their immunomodulatory capacity was evaluated. T-cell proliferation and their effect on Th1, Th17, and Treg differentiation/activation were measured. Next, we evaluated the therapeutic effect of MSCs in an experimental autoimmune encephalomyelitis (EAE) model, which was induced for 27 days with MOG35–55 peptide following the standard protocol. Mice were subjected to a single intraperitoneal injection (2 × 106 MSCs/100 μl) on day 4. Clinical score and body weight were monitored daily by blinded analysis. At day 27, mice were euthanized and draining lymph nodes were extracted for Th1, Th17, and Treg detection by flow cytometry. Results Pretreatment of MSCs with poly(I:C) significantly reduced the proliferation of CD3+ T cells as well as nitric oxide secretion, an important immunosuppressive factor. Furthermore, MSCs treated with poly(I:C) reduced the differentiation/activation of proinflammatory lymphocytes, Th1 and Th17. In contrast, MSCs pretreated with LPS increased CD3+ T-cell proliferation, and induced Th1 and Th17 cells, as well as the levels of proinflammatory cytokine IL-6. Finally, we observed that intraperitoneal administration of MSCs pretreated with poly(I:C) significantly reduced the severity of EAE as well as the percentages of Th1 and Th17 proinflammatory subsets, while the pretreatment of MSCs with LPS completely reversed the therapeutic immunosuppressive effect of MSCs. Conclusions Taken together, these data show that pretreatment of MSCs with poly(I:C) improved their immunosuppressive abilities. This may provide an opportunity to better define strategies for cell-based therapies to autoimmune diseases.
Background
Mesenchymal stem cells (MSCs) are nonhematopoietic, multipotent progenitor cells isolated from a variety of adult tissues, including bone marrow and adipose tissue. They are capable of self-renewal and are able to differentiate into at least some mesenchymal cell types, such as bone, cartilage, and fat, thus playing a potential role in tissue repair [1,2]. In addition to their potential for differentiation, MSCs also exhibit immunosuppressive activity, as shown by their ability to inhibit the proliferation and function of immunocompetent cells, such as T and B lymphocytes, natural killer cells, and dendritic cells [3][4][5]. These immunomodulatory properties of MSCs have generated great interest in their potential as a promising therapeutic modality for proinflammatory and autoimmune diseases [6,7]. Diverse studies using experimental animal models have shown that MSCs can reduce the progression and/or severity of various immune-mediated diseases, such as collagen-induced arthritis (CIA) [8], experimental colitis [9], and experimental autoimmune encephalomyelitis (EAE) [10,11]. In addition, we recently demonstrated that the intravenous administration of MSCs in EAE at different stages of the disease induced differential therapeutic effects depending on the proinflammatory environment at each stage of the disease [12].
It has also been demonstrated that the immunosuppressive activity of MSCs does not seem to be spontaneous but instead requires MSCs to be "licensed" in an appropriate proinflammatory environment to exert their effects [13,14]. In this line, Krampera et al. and Ren et al. showed that MSCs mediating immunosuppression required preliminary activation by immune cells through the secretion of the proinflammatory cytokine IFN-γ, either alone or together with TNF-α, IL-1α, or IL-1β [3,15]. These cytokine combinations induced the MSCs to express high levels of soluble factors involved in MSCmediated immunosuppression, such as indoleamine 2,3dioxygenase (IDO), transforming growth factor beta (TGF-β), prostaglandins, and nitric oxide (NO), as well as other factors [3,[15][16][17].
In addition to activation of MSCs by proinflammatory cytokines, Toll-like receptors (TLRs) can influence their immunomodulatory capacity. For example, Liota et al. [18] showed that human bone marrow-derived MSCs (BM-MSCs) express high levels of TLR3 and TLR4, and that ligation of these receptors by their agonists, polyinosinic:polycytidylic acid (poly(I:C)) and lipopolysaccharide (LPS), respectively, can reduce the inhibitory activity of MSCs on CD4 + T-cell proliferation. In contrast, Opitz et al. showed that pretreatment of human BM-MSCs for 24 hours with poly(I:C) or LPS significantly enhanced the immunosuppressive activity of BM-MSCs on the allo-mixed lymphocyte reaction (MLR) [19]. On the contrary, recent results demonstrate that human MSCs polarize into two active phenotypes following specific TLR3 or TLR4 activation. Priming by TLR3 agonists specifically leads to the expression of immune dampening mediators and the maintained suppression of T-cell activation. In contrast, priming by TLR4 agonists results in the expression of proinflammatory mediators and a reversal of the MSC-established suppressive mechanisms of T-cell activation [20]. Besides these in-vitro studies, our group recently demonstrated that TLR3 preconditioning increases the therapeutic efficacy of human umbilical cord MSCs in a mouse model of colitis [21]. These results demonstrate the complexity of the immunomodulatory capacity of MSCs and suggest that TLR activation may affect the functional immune activity of MSCs.
The aim of the present study was to demonstrate, for the first time in murine MSCs and an experimental model of multiple sclerosis (EAE), that in-vitro pretreatment of MSCs with poly(I:C) or LPS can induce two distinct active phenotypes in MSCs, as found in humans, and that these polarized cells possess opposite immunological effects in vitro and in vivo. Our results indicate that pretreatment of MSCs with poly(I:C) enhances their immunosuppressive capacity on T lymphocytes and that the intraperitoneal injection of these MSCs significantly reduces the severity of EAE. In contrast, LPS-pretreated MSCs induced a significant increase in T-cell proliferation and completely reversed the immunosuppressive therapeutic effect of MSCs in EAE.
Animals
Female C57BL/6 mice, 8-14 weeks old, were purchased from the central animal facility of the Faculty of Medicine, University of Chile. Animals were housed under standard laboratory conditions and provided food and water ad libitum. Experimental procedures and protocols were performed according to the US National Institute of Health Guide for the care and use of laboratory animals (NIH Publication No. 85-23, revised 1996), and were approved by the institutional animal care and use committee of the Universidad de los Andes and the FONDECYT bioethics advisory committee in Chile.
MSC characterization
The MSC phenotype was confirmed by flow cytometry based on the positivity for CD29, CD44, and Sca-1, in the absence of CD45 and C11b antigen. All antibodies were purchased from BD Biosciences (San Diego, CA, USA). Surface staining was performed following a standard protocol. The samples were acquired with a FACSCanto II flow cytometer (Becton, Dickinson and Company). Data were analyzed using FCS Express 4 Plus research edition and Flow Jo software. Determination of the capacity of MSCs to differentiate toward chondrogenic, adipogenic, and osteogenic lineages was performed as described previously [22].
Treatment of MSCs
MSCs were grown to 70-80 % confluence and incubated with agonists for 1 hour in complete αMEM. For TLR pretreatment of MSCs, poly(I:C) (10 μg/ml; Sigma-Aldrich, Israel) and LPS (500 ng/ml; Sigma-Aldrich, Israel) were used as agonists for TLR3 and TLR4, respectively. Cells were then washed thoroughly with a complete cell culture medium before use in the different assays described in the following.
Quantitative real-time PCR analysis
For the evaluation of MSC gene expression, after pretreatment with poly(I:C) and LPS for 1 hour, cells were thoroughly washed and cultured for 12 hours in complete αMEM. Cells were then harvested using Trypsin 1× (Trypsin-EDTA 1×; Gibco) and pelleted. Total RNA was isolated using the RNeasy Mini Kit (Qiagen) following the manufacturer's instructions. The RNA concentration was measured with a NanoDrop 2000 spectrophotometer (Thermo Scientific), and cDNA was synthesized from 2 μg of RNA using a reverse transcription protocol (Improm II-RT, A3802; Promega). Quantitative real-time PCR (RT-qPCR) was performed in a Stratagene MX3000P thermocycler (Agilent Technologies) using GoTaq qPCR Mastermix (Promega, Madison, WI, USA). 18S was used to normalize the results, and basal conditions were used for calibration. MxPro v4.10d software was used for analysis using the 2^(-ΔΔCt) formula, where ΔΔCt takes into account the efficiency of the primers and the normalized Ct values. The primer sequences used for amplification are presented in Table 1.
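For clarity, below is a minimal sketch of the 2^(-ΔΔCt) calculation in its standard Livak form (without the primer-efficiency correction mentioned in the text); the Ct values are hypothetical, not measurements from the study.

```python
def relative_expression(ct_target, ct_ref, ct_target_basal, ct_ref_basal):
    """Fold change by the 2^(-ΔΔCt) method: target gene normalised to the 18S
    reference and calibrated against the basal (untreated) condition."""
    delta_ct_sample = ct_target - ct_ref
    delta_ct_basal = ct_target_basal - ct_ref_basal
    delta_delta_ct = delta_ct_sample - delta_ct_basal
    return 2 ** (-delta_delta_ct)

# Hypothetical Ct values for IL-6 in LPS-pretreated MSCs vs untreated MSCs
print(relative_expression(ct_target=24.1, ct_ref=12.0,
                          ct_target_basal=27.3, ct_ref_basal=12.1))  # ≈ 8.6-fold
```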
Proliferation assays
Splenocytes were obtained from the spleen of adult C57BL/6 mice. Extracted cells were passed through a 70-μm filter (cell strainer; BD Falcon), centrifuged at 1680 rpm for 6 minutes, and treated with cold NH4Cl for 5 minutes. Cells were then washed in PBS (phosphate-buffered saline) and centrifuged at 1680 rpm for 6 minutes. Next, cells were labeled with CellTrace Violet (CTV) (Invitrogen, UK) according to the manufacturer's instructions. For T-cell activation, 2 × 10^5 cells were stimulated with concanavalin A (ConA) (0.5 μg/ml) in the presence or absence of MSCs at a 1:10 ratio (MSCs:splenocytes) in complete RPMI medium with 10 % FBS, 100 U/ml penicillin, and 100 μg/ml streptomycin (Gibco) at 37°C in a 5 % CO2 atmosphere. After 5 days of culture, cells were washed and evaluated by flow cytometry for the percentage of CD3+ T cells in the population. For the proliferation analysis, we used CTV, which functions similarly to standard CFSE staining. CTV was added at the beginning of the cultures. Each peak on the histograms corresponds to the division cycles for CD3+ lymphocytes. After obtaining the number of events, we calculated a proliferation index that incorporated the number of cells divided by the number of progenitors as described by Roederer [23].
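A simplified reading of this proliferation-index calculation (total cells divided by inferred progenitors) is sketched below; the per-generation event counts are hypothetical, and the full Roederer-style analysis includes additional corrections not shown here.

```python
def proliferation_index(events_per_generation):
    """Each cell in generation g descends from one of n_g / 2**g progenitors,
    so the index is total cells divided by the inferred number of progenitors."""
    total_cells = sum(events_per_generation)
    progenitors = sum(n / 2 ** g for g, n in enumerate(events_per_generation))
    return total_cells / progenitors

# Hypothetical event counts for division peaks 0..4 from a CTV histogram
print(round(proliferation_index([500, 400, 600, 400, 100]), 2))  # ≈ 2.21
```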
Th1 and Th17 FACS analysis
Differentiated T cells were stained with an anti-CD4 PEconjugated antibody (BD Biosciences) for 30 minutes at 4°C in staining buffer. Intracellular staining was performed using a CytoFix/Cytoperm kit (BD Bioscience) following the manufacturer's instructions. Cells were stained with anti-IFN-γ (FITC-conjugated) antibody for the Th1 subset of the population, or an anti-IL-17A (PE-conjugated) antibody for the Th17 subset of the population. After membrane and intracellular staining, cells were analyzed with a FACSCanto II using the FACS Express software. For the proliferation analysis, we used CTV, which functions similarly to standard CFSE staining. CTV was added at the beginning of the cultures. Each peak on the histograms corresponds to the division cycles for CD4 + IFN-γ + and CD4 + IL-17 + lymphocytes, corresponding to Th17 and Th1 lymphocytes, respectively. After obtaining the number of events, we calculated a proliferation index that incorporated the number of cells divided by the number of progenitors.
ELISA for cytokines
Culture supernatants were assayed for IL-6 using an ELISA kit (catalog number DY406; R&D systems) according to the manufacturer's protocol.
Measurement of iNOS activity
NO was detected using a modified Griess reagent (Sigma-Aldrich). Briefly, all NO3− was converted into NO2− by nitrate reductase, and total NO2− was detected by the Griess reaction as described previously [25].
Ex-vivo T-cell analysis
For ex-vivo T-cell analyses, draining inguinal and axillary lymph nodes were removed from mice 27 days after EAE induction. T cells were obtained and cultured at a density of 2.5 × 10 5 /well. Inflammatory cells were restimulated with PMA/ionomycin for 3.5 hours in the presence of brefeldin A for the last 2.5 hours of incubation at 37°C before antibody staining and analysis by flow cytometry. Next, Th1 and Th17 cells in the samples from the different groups were identified as already described. Finally, after membrane and intracellular staining, cells were analyzed with a FACSCanto II using the FACS Express software.
Statistical analysis
A Kruskal-Wallis test, which accounts for non-normal distributions with small sample sizes and multiple groups, was performed for comparisons between experimental groups. Post-hoc analyses were performed with the Mann-Whitney test. For all analyses, we used GraphPad Prism Program (GraphPad, San Diego, CA, USA) statistical software. p < 0.05 was considered statistically significant. Data are presented as the mean ± standard deviation.
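As an illustration of this analysis pipeline, the sketch below runs the same tests with SciPy rather than GraphPad Prism; the group values are hypothetical cumulative EAE scores, not data from the study.

```python
from scipy import stats

# Hypothetical cumulative EAE scores for three treatment groups
control = [18.5, 20.0, 17.5, 21.0, 19.0]
msc = [12.0, 10.5, 13.0, 11.5, 12.5]
msc_poly = [8.0, 9.5, 7.0, 8.5, 10.0]

# Omnibus non-parametric comparison across all groups
h_stat, p_kw = stats.kruskal(control, msc, msc_poly)

# Post-hoc pairwise comparison with the Mann-Whitney U test
u_stat, p_mw = stats.mannwhitneyu(control, msc, alternative="two-sided")

print(f"Kruskal-Wallis p = {p_kw:.4f}; control vs MSC Mann-Whitney p = {p_mw:.4f}")
```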
Characterization and TLR expression of MSCs
Murine MSCs were cultured in complete αMEM for the selective proliferation of MSCs. After culturing, cells with a stable fibroblast-like phenotype were used for experimentation (Fig. 1a). As evidenced by flow cytometry, cells were uniformly and strongly positive for MSC-related markers, such as CD44, CD29, and Sca-1 (80-99 %), and were negative for CD45 and CD11b (<4 %) (Fig. 1c). As shown in Fig. 1c, we confirmed the ability of MSCs to differentiate into adipocytes, chondrocytes, and osteoblasts using a specific differentiation stimulus (right) or control medium (left) as described in Methods. We next examined the relative expression of TLR3 and TLR4 genes in MSCs using RT-qPCR and gel electrophoresis. RT-qPCR and agarose gel electrophoresis analysis revealed that murine MSCs expressed both TLRs and that the expression level of TLR4 was higher than TLR3 (Fig. 1d, e). We also found that pretreatment of MSCs with poly(I:C) and LPS for 1 hour did not affect the immunophenotypic profile of murine MSCs (data not shown).
TLR3 and TLR4 pretreatment differentially affect the in-vitro immunosuppressive capacity of murine MSCs
To evaluate the effect of specific stimulation of TLR3 and TLR4, we treated the MSCs for 1 hour with poly(I:C) or LPS in complete αMEM and then determined their in-vitro immunomodulatory capacity. First, we tested the immunosuppressive capacity of MSCs to inhibit T-cell proliferation induced by ConA. Briefly, splenocytes were isolated from C57BL/6 mice, stained with CTV (a fluorescent dye used to determine T-lymphocyte proliferation), and then stimulated with ConA for 3 days. Flow cytometry was used to analyze the proliferation of CD3+ T lymphocytes. The addition of untreated MSCs significantly reduced T-cell proliferation compared with the control condition (without MSCs) (p < 0.05, Fig. 2a). MSCs pretreated with poly(I:C) for 1 hour were considerably more effective than untreated MSCs in inhibiting T-cell proliferation at the different ratios analyzed (p < 0.001, Fig. 2a). In contrast, MSCs pretreated with LPS for 1 hour not only reversed the immunosuppressive effect of MSCs but also induced a significant increase in T-cell proliferation in a dose-dependent manner compared with the effects in the control condition (p < 0.001, Fig. 2a). We next studied the effect of poly(I:C) and LPS pretreatment of murine MSCs on the expression of immune modulators, such as the soluble immunosuppressive factor NO and the proinflammatory cytokine IL-6. Supernatants derived from MSCs cultured in complete αMEM in the absence or presence of splenocytes were used to evaluate the presence of NO, as described in Methods. A modified Griess assay for nitrite quantitation showed no significant differences in NO secreted by untreated or pretreated MSCs (data not shown). However, when MSCs were cultured in the presence of splenocytes, we observed a significant increase in NO production induced by the MSCs pretreated with poly(I:C) compared with untreated MSCs (p < 0.001, Fig. 2b). Conversely, pretreatment of MSCs with LPS induced lower NO production in comparison with untreated MSCs or MSCs pretreated with poly(I:C) (p < 0.001, Fig. 2b).
Quantitative analysis of IL-6 expression, evaluated by RT-qPCR, revealed that MSCs pretreated with LPS induced a significant increase in the relative expression of IL-6 compared with untreated MSCs or poly(I:C) pretreated MSCs (p < 0.05, Fig. 2c). No significant differences were observed in mRNA IL-6 expression between untreated MSCs and MSCs pretreated with poly(I:C). We also observed that MSCs pretreated with LPS had higher IL-8 mRNA expression than untreated MSCs or MSCs pretreated with poly(I:C) (data not shown). Moreover, we observed that MSCs pretreated with poly(I:C) lose the capacity to secrete IL-6, as measured by ELISA after 24 hours of stimulation, compared with the observed IL-6 secretion in untreated MSCs and MSCs pretreated with LPS (p < 0.001, Fig. 2d). These results suggest that MSCs pretreated for 1 h with poly(I:C) have a higher immunosuppressive effect in vitro when compared with untreated MSCs or MSCs pretreated with LPS.
Pretreatment of murine MSCs with poly(I:C) or LPS induces different and opposing in-vitro effects on Th1 and Th17 subsets
We next studied the immunomodulatory effect of the addition of untreated murine MSCs or MSCs pretreated with poly(I:C) or LPS for 1 hour on the in-vitro differentiation and proliferation of Th1 and Th17 lymphocytes.
Fig. 3. MSCs stimulated with TLR3 and TLR4 ligands differentially modulate Th1 and Th17 differentiation and proliferation. T-helper cell differentiation (a, b, e, f) and proliferation (c, d, g, h) were assessed using naïve CD4+ T cells. Purified CD4+ cells were stimulated with a specific cocktail of cytokines, as described in Methods, to induce Th1 (a-d) and Th17 (e-h) differentiation in the absence or presence of MSCs pretreated with or without a TLR agonist. MSCs were either unstimulated or stimulated with poly(I:C) (10 μg/ml) or LPS (500 ng/ml) for 1 hour before being cocultured with CD4+ T cells in complete RPMI medium. MSCs were added at day 0 of the differentiation process in a 1:10 ratio (MSCs:T cells). Flow cytometry analysis, gating on CD4+ cells, and intracellular staining, using antibodies (mAb) for IFN-γ and IL-17 to identify Th1 and Th17 lymphocytes, respectively, were performed. Representative density plots of six different experiments for Th1 and Th17 differentiation are shown. For proliferation analysis, CD4+ cells were previously labeled with CellTrace Violet (CTV) and analyzed (presented as histograms), with further analysis of the events of each cycle described by the proliferation index (d, h). Th1 differentiation (b) and proliferation (d) with the MSCs pretreated with poly(I:C) and LPS. Th17 differentiation (f) and proliferation (h) with the MSCs pretreated with poly(I:C) and LPS. Bars represent the mean ± SEM; significant differences calculated using the Mann-Whitney test. *p < 0.05, **p < 0.001. MSCsPoly, MSCs pretreated with poly(I:C) for 1 hour; MSCsLPS, MSCs pretreated with LPS for 1 hour.
CD4+ T lymphocytes, purified by negative selection from splenocytes, were stained with CTV and cultured under Th1 and Th17 polarizing conditions in the absence or presence of TLR3- or TLR4-stimulated MSCs. MSCs were added at the beginning of the differentiation protocol at a 1:10 ratio (MSCs:CD4+ T cells), and intracellular cytokines for Th1 and Th17 cells were evaluated at day 5 by flow cytometry, as described in Methods. The patterns of Th1 and Th17 differentiation and proliferation for six different experiments are shown in Fig. 3a, e. The data analysis summary of the proliferation is shown in Fig. 3b, f.
As shown in Fig. 3a, b, the addition of untreated MSCs significantly suppressed the clonal expansion of IFN-γ-secreting (Th1) cells (p < 0.05), consistent with what we reported previously [22]. The pretreatment of MSCs with poly(I:C) induced a higher capacity to inhibit Th1 cells than was observed with untreated MSCs (p < 0.05). Although not significantly different, the addition of MSCs treated with poly(I:C) also induced a decrease in the proliferation of Th1 cells compared with that observed with untreated MSCs (Fig. 3c, d).
In contrast, culturing Th1 cells with MSCs pretreated with LPS showed that these MSCs had a reduced ability to inhibit Th1 differentiation and proliferation in comparison with the effect of untreated MSCs (p < 0.05). Regarding the ability of MSCs to inhibit the clonal expansion of IL-17-secreting (Th17) lymphocytes, we observed that untreated MSCs significantly inhibited Th17 differentiation (p < 0.05, Fig. 3e, f). Furthermore, Th17 cultured with MSCs showed a decrease in Th17 proliferation, although these differences were not statistically significant (Fig. 3e, f). Similar to the pattern observed with Th1 cells, MSCs pretreated with LPS showed a decreased effect on Th17 differentiation compared with that observed in untreated MSCs and MSCs pretreated with poly(I:C) (p < 0.05, Fig. 3e, f). Finally, MSCs pretreated with LPS exhibited increased Th17 proliferation compared with the effect of MSCs pretreated with poly(I:C) (p < 0.05, Fig. 3 g, h). These results showed that brief pretreatment of murine MSCs with poly(I:C) or LPS induces different and opposing effects on Th1 and Th17 cell differentiation and proliferation, suggesting that stimulation of murine MSCs with TLRs can modulate the cells' in-vitro immunosuppressive capacity against T-helper cell subsets.
Brief in-vitro pretreatment of MSCs with poly(I:C) or LPS induces distinct and opposing immunomodulatory effects on EAE
To elucidate the therapeutic effect of untreated MSCs or MSCs pretreated with poly(I:C) or LPS, we induced EAE in C57BL/6 mice using MOG immunization as described previously [12]. MSCs were injected i.p.
(2 × 10^6 cells/mouse) 4 days after EAE induction, and the clinical scores and body weight loss were recorded daily until day 27 (Fig. 4a). Control EAE mice showed the first clinical signs at day 10 post immunization (onset), reached a peak at day 21, and then presented a stable disease course until day 27, as we observed previously [12]. Consistent with previous reports [12], the administration of untreated MSCs before the onset of clinical signs significantly decreased the clinical signs of EAE compared with control treatment (p < 0.05, Fig. 4a). The administration of MSCs pretreated with poly(I:C) for 1 hour generated a nonsignificant increase of the therapeutic effect on EAE clinical scores relative to untreated MSCs, decreasing the progress of the disease even further (Fig. 4a). In contrast, MSCs pretreated with LPS completely reversed the protective effect of MSCs against EAE, showing a similar trend in the clinical manifestations of the disease to that observed in the control EAE mice (Fig. 4a). These results were confirmed by analyzing the cumulative EAE score, which showed that untreated MSCs significantly decreased the clinical signs of EAE compared with the control treatment (p < 0.001, Fig. 4b). On the other hand, MSCs pretreated with poly(I:C) were even more potent than untreated MSCs in significantly inhibiting the cumulative EAE score compared with the score of the control EAE mice (Fig. 4b). In contrast, MSCs pretreated with LPS significantly reversed the trend observed in the cumulative score induced by untreated MSCs or MSCs pretreated with poly(I:C) (p < 0.05) (Fig. 4b).
Similarly, analysis of body weight loss demonstrated that untreated MSCs and MSCs retreated with poly(I:C) resulted in significantly less weight loss compared with the control (p < 0.05, Fig. 4c). Finally, the administration of MSCs pretreated with LPS reversed the effect on body weight loss induced by untreated MSCs and MSCs pretreated with poly(I:C) (p < 0.05, Fig. 4c). These data consistently demonstrate that MSCs pretreated with poly(I:C) reduce the clinical signs of EAE and that pretreatment of MSCs with LPS reverses this effect, suggesting that specific TLR activation can alter the immunomodulatory capacity of MSCs in vivo.
We next evaluated whether the administration of untreated MSCs could affect Th1 and Th17 cell subsets in EAE mice. Percentages of Th1 and Th17 subsets were analyzed in lymph nodes samples of EAE mice by flow cytometry as described in Methods. As expected, treatment with MSCs decreased Th1 and Th17 subsets in the lymph nodes of EAE mice. We found a significant effect on the Th17 subset (p < 0.05) and a decrease in the Th1 subset. Interestingly, pretreatment of MSCs with poly(I:C) was able to significantly decrease both the Th1 and Th17 subsets (p < 0.05). In contrast, the administration of MSCs pretreated with LPS completely reversed the effect of MSCs on the Th1 and Th17 subsets (p < 0.05, Fig. 5a, b). We observed higher percentages of Th1 and Th17 in this group, similar to percentages found in the EAE mice without any treatment.
Discussion
In recent years, stem cell treatments have become an important therapeutic strategy for the treatment of various proinflammatory and autoimmune diseases because of their powerful immunomodulatory properties via the suppression of T cells, B cells, natural killer (NK) cells, and antigen-presenting cells [26,27]. Such immunological effects have been shown primarily in vitro but also in vivo, in a number of experimental disease models such as EAE [10][11][12], CIA [8], and experimental colitis [9,28]. Despite the in-vitro and in-vivo evidence for a therapeutic effect of MSCs, their precise mechanism of action and the profile of their adverse effects as immunomodulatory agents are still poorly understood.
Recently, it has been demonstrated that stimulation of human MSCs with poly(I:C) and LPS induces activation of NF-kB, mitogen-activated protein kinases (MAPK), and protein kinase B (AKT) signaling pathways. Activation of these pathways was associated with the induction and secretion of different patterns of cytokines and chemokines, suggesting that LPS could promote the activation of immune responses while poly(I:C) could suppress it [29]. Similarly, Waterman et al. [20] demonstrated that human MSCs polarize into a proinflammatory or anti-inflammatory phenotypes, according to the specific TLR3 or TLR4 activation in vitro. This functional phenotype was also shown in vivo, in experimental models of diabetes [30] and ovarian cancer [31]. These findings suggest that pretreatment of MSCs with TLRs could be a powerful and innovative therapeutic tool for the treatment of autoimmune and proinflammatory pathologies. In the present study, we evaluated the immunomodulatory effect of murine MSCs after treatment with TLR3 and TLR4 agonists in vitro and in a mouse model of multiple sclerosis. Our results demonstrated that pretreatment of MSCs with poly(I:C) enhances their immunosuppressive capacity in vitro and that intraperitoneal injection of these MSCs significantly reduces the severity of EAE. In contrast, LPS pretreatment of MSCs induces a significant decrease in their immunomodulatory function in vitro and completely reverses the therapeutic immunosuppressive effect of MSCs in vivo.
Diverse studies have shown that murine MSCs express different functional TLRs, such as TLR1-TLR8 [32]. Our data showed that murine MSCs cultured to 80-90 % confluence in complete culture medium express significant levels of mRNA for TLR3 and TLR4, and that the expression level of TLR4 was higher than that of TLR3, similar to the pattern described by Pevsner-Fischer et al. [32]. In addition, we demonstrated that pretreatment of these TLRs for 1 hour with their respective agonists differentially affects the in-vitro immunosuppressive capacity of murine MSCs. First, we observed that untreated MSCs were functionally capable of inhibiting the proliferation of activated T cells, confirming what has been published previously [7]. Once the inhibitory capacity of the MSCs on T-cell proliferation was confirmed, we evaluated the effect of MSCs pretreated for 1 hour with poly(I:C) or LPS. MSCs pretreated with poly(I:C) were able to significantly increase the inhibitory capacity of MSCs on T-cell proliferation by approximately 33 % with respect to untreated MSCs. Conversely, MSCs pretreated with LPS completely reversed the immunosuppressive effect of untreated MSCs and induced a significant, and dose-dependent, increase in T-cell proliferation.
These results demonstrate that brief, in-vitro LPS stimulation of murine MSCs induces a proinflammatory phenotype, similar to the effects previously shown by Waterman et al. [20], using human MSCs.
To better understand the effect of activation of TLR ligands on the immunomodulatory activity of MSCs, we measured NO production in the absence or presence of splenocytes stimulated with ConA, as well as the expression and levels of the proinflammatory cytokine IL-6. In the absence of splenocytes, no differences were observed in NO secreted by untreated or pretreated MSCs. However, in the presence of splenocytes, we detected a significant increase in NO production induced by MSCs pretreated with poly(I:C) but not by those pretreated with LPS, which had lower NO production compared with untreated MSCs. On the other hand, our results indicated that the expression of IL-6 increased after stimulation of MSCs with LPS and was inhibited after stimulation of MSCs with poly(I:C). Taken together, these data provide evidence of an anti-inflammatory phenotype for MSCs pretreated with poly(I:C) and an opposite, proinflammatory phenotype for MSCs stimulated with LPS, which show a loss of capacity to inhibit T-cell proliferation, a higher expression of IL-6, and nonsignificant NO secretion.
Fig. 5. Pretreatment of MSCs with poly(I:C) and LPS generated a differential modulation of Th1 and Th17 cells in EAE mice. Lymph nodes were removed from the different treatment groups at day 27. (a) Th1 detection (CD4+ IFN-γ+) for the five groups of mice. (b) Th17 detection (CD4+ IL-17+) for the five groups of mice. Bars represent the mean ± SEM; significant differences calculated using t tests (*p < 0.05, **p < 0.001). MSCsPoly, MSCs pretreated with poly(I:C) for 1 hour; MSCsLPS, MSCs pretreated with LPS for 1 hour.
MSCs have been identified as immunomodulating cells because they inhibit the generation and function of Th1 and Th17 cells and increase Treg cell formation [33][34][35][36].
Previous studies from our laboratory showed that MSCs cocultured with CD4 + T cells grown in conditions polarizing them towards Th1 or Th17 lineages exert strong Th1 immunosuppression but have little effect on Th17 cells [22,25]. Here, we evaluated the immunomodulatory effect of TLR3 and TLR4-pretreated MSCs on Th1 and Th17 differentiation and proliferation in vitro. We observed a strong capacity of MSCs pretreated with poly(I:C) to inhibit Th1 and Th17 differentiation and proliferation, which was even more pronounced than the effect of untreated MSCs. Conversely, MSCs pretreated with LPS showed a diminished capacity to inhibit Th1 and Th17 differentiation and proliferation. Recently, we studied the therapeutic effect of MSC administration on EAE, showing that the injection of MSCs at the time of disease onset induces a significant improvement in the clinical signs of the disease [12]. In the present study, using the same mouse model, we studied whether the administration of MSCs pretreated with poly(I:C) or LPS generated distinct therapeutic effects in vivo. Our results demonstrated that MSCs pretreated with poly(I:C) significantly reduce the clinical signs of EAE and that pretreatment of MSCs with LPS completely reverses the therapeutic immunosuppressive effect of MSCs. Furthermore, when we evaluated the cumulative score and the weight loss of the animals in each group, we found the same pattern that again highlighted the ability of MSCs stimulated with poly(I:C) to increase the immunosuppressive capacity of the MSCs. Poly(I:C) stimulation generated a decrease in the score and weight loss in the treated animals, while LPS caused an increase in clinical signs and a high percentage of weight loss in animals. In addition, we investigated the relationship between the treatments of the animals with respect to Th1 and Th17 proinflammatory cell subsets in the lymph node of EAE mice as a way to account for the observed results. We found a significant decrease of the Th1 and Th17 subsets induced by the administration of untreated MSCs, although these differences were significant only in the case of Th17 cells. No significant differences were observed in the expression of Th1 and Th17 cells when EAE mice were injected with poly(I:C)-pretreated MSCs in comparison with the expression in untreated MSCs. In contrast, the treatment of EAE mice with LPSpretreated MSCs completely reversed the effect on the Th1 and Th17 subset cells induced by untreated MSCs.
Conclusions
In summary, for the first time we found that murine MSCs polarize into two distinct phenotypes following specific in vitro TLR activation, as observed in humans. TLR3 stimulation specifically leads to an enhancement of the immunosuppressive capacity to inhibit the proliferation of splenocytes and the differentiation and proliferation of Th1 and Th17 cells in vitro. Meanwhile, TLR4 stimulation completely reverses these immunomodulatory effects. Secondly, we also examined these phenotypes in the context of the autoimmune disease model of multiple sclerosis, where pretreatment of murine MSCs with TLR3 and TLR4 agonists generates distinct and opposing immunomodulatory effects on EAE. Our findings are important to better define strategies of cell-based therapies for proinflammatory and autoimmune diseases.
"Medicine",
"Biology"
] |
Atomically Dispersed Nickel Anchored on a Nitrogen‐Doped Carbon/TiO2 Composite for Efficient and Selective Photocatalytic CH4 Oxidation to Oxygenates
Abstract Direct photocatalytic oxidation of methane to liquid oxygenated products is a sustainable strategy for methane valorization at room temperature. However, in this reaction, noble metals are generally needed to function as cocatalysts for obtaining adequate activity and selectivity. Here, we report atomically dispersed nickel anchored on a nitrogen‐doped carbon/TiO2 composite (Ni−NC/TiO2) as a highly active and selective catalyst for photooxidation of CH4 to C1 oxygenates with O2 as the only oxidant. Ni−NC/TiO2 exhibits a yield of C1 oxygenates of 198 μmol for 4 h with a selectivity of 93 %, exceeding that of most reported high‐performance photocatalysts. Experimental and theoretical investigations suggest that the single‐atom Ni−NC sites not only enhance the transfer of photogenerated electrons from TiO2 to isolated Ni atoms but also dominantly facilitate the activation of O2 to form the key intermediate ⋅OOH radicals, which synergistically lead to a substantial enhancement in both activity and selectivity.
Introduction
Methane is not only a highly available clean fuel from natural gas, shale gas and biogas, but also a very potent greenhouse gas with a warming potential more than 25 times that of CO2. [1][2][3] The catalytic conversion of methane to higher added-value chemicals, typically derived from petroleum and coal, is therefore attractive for reducing dependence on crude oil and mitigating global warming. The current industrial methane conversion technology is realized through an indirect route, associated with an energy-intensive syngas production process and subsequent methanol or Fischer–Tropsch synthesis. [3,4] Direct conversion of methane to methanol and other oxygenates with molecular oxygen is one of the most ideal approaches to realize methane transformation more efficiently and cleanly. [5][6][7][8][9] The key challenge in direct methane conversion is the activation and selective oxidation of methane, because methane is a rather inert molecule and the desired products are more reactive than methane and are susceptible to overoxidation to CO2. [6,7] To minimize the overoxidation of oxygenates, the methane conversion reaction is generally conducted at relatively low temperatures (<200 °C), along with the utilization of corrosive or expensive oxidants (such as sulfuric acid, N2O and H2O2) to replace O2 in activating methane and/or the operation of a stepwise chemical looping process, which makes the process economically uncompetitive. [5-7, 10, 11] Compared with thermocatalytic methane conversion, photocatalytic methane oxidation can proceed at room temperature to achieve appreciable yields of oxygenates and has recently received great interest. [12][13][14][15][16][17][18][19][20][21][22][23] Cocatalysts play a vital role in semiconductor-based photocatalytic methane oxidation reactions, as they can not only promote the separation and transfer of photogenerated charge carriers, but also control the activation of reactants, thereby enhancing surface reaction rates and tuning product selectivity. Among various cocatalysts, noble metals generally exhibit outstanding performance for photocatalytic methane oxidation. [15,16,19,20,22-25] For example, our previous studies showed that noble metals (Pt, Pd, Au or Ag) decorating ZnO were active and selective for photooxidation of CH4 with O2 to oxygenates (CH3OOH, CH3OH and HCHO), and that Ag/TiO2{001} enabled the selective production of CH3OH, while pristine ZnO and TiO2 exhibited low activity and selectivity for the production of oxygenates. [20,23] Other researchers have also reported a series of good photocatalysts with noble metals as cocatalysts for photocatalytic aerobic oxidation of methane to oxygenates, such as Au–Pd/TiO2, [26] Pd/In2O3, [16] Au/WO3, [25] Pt/WO3 [24] and black phosphorus-supported Au single atoms. [19] Despite the promising results obtained in the abovementioned studies, the high cost and limited reserves of noble metals limit their applications. Therefore, there is a high demand to develop low-cost, high-performance alternatives to noble-metal cocatalysts. Atomically dispersed non-precious metal atoms anchored on N-doped carbon (M-NC) materials, which are generally considered as single atomic site catalysts, have been employed as cocatalysts for efficient photocatalytic reactions such as H2 production and CO2 reduction, [27][28][29] due to the highly exposed active metal sites and efficient transfer of charge carriers.
Moreover, the electronic structure of atomic metal sites can be fine-tuned by changing the coordination environments, rendering M-NC active and selective for targeted catalytic reactions with favorable reaction kinetics. [27,30] In view of such distinctive characteristics, M-NC materials potentially have the capability to enhance the activity and selectivity of semiconductor-based photocatalysts in the photooxidation of CH4. Nevertheless, to the best of our knowledge, there have been no studies reporting the utilization of M-NC as cocatalysts for photocatalytic CH4 oxidation.
In this work, a single-atom Ni–NC/TiO2 composite is prepared and used as a photocatalyst for direct CH4 oxidation with O2 to produce liquid oxygenates. We found that, owing to its unique structural properties, the atomically dispersed Ni–NC sites not only promote the carrier separation and transfer efficiency, but also enable the controlled activation of O2 to ⋅OOH radicals, a key intermediate for the formation of the primary product CH3OOH that can be readily transformed into CH3OH and HCHO. As a result, a high C1 oxygenate yield of up to 198 μmol with 93 % selectivity is achieved after 4 h of irradiation, superior to most previously reported photocatalysts using noble metals as cocatalysts. Figure 1a illustrates the synthetic process for the preparation of Ni–NC/TiO2 via a facile one-pot solvothermal method. [31] Briefly, TiO2 (P25) and the Ni precursor (NiCl2) were first dispersed in formamide (HCONH2). Then, the mixed solution was solvothermally heated at 180 °C for 12 h. During the solvothermal process, formamide is easily transformed into a nitrogen-doped carbon (NC) material on the surface of TiO2; meanwhile, owing to the strong Ni–N coordination, Ni–N bonds were formed in the presence of Ni2+. Finally, the resulting sample was washed several times with dilute acid and water to yield TiO2 loaded with the NC-coordinated Ni catalyst (denoted as Ni–NC/TiO2). The color of the material after the solvothermal reaction changed from white to black (Figure S1), indicative of the loading of Ni and NC on TiO2. The Fourier-transform infrared (FT-IR) spectra show a new peak at 1386 cm−1 for Ni–NC/TiO2 (Figure S2), confirming the presence of C–N groups. Inductively coupled plasma optical emission spectrometry shows that the Ni content of Ni–NC/TiO2 is 0.5 wt %. For comparison, TiO2 decorated with Ni nanoparticles (NPs) at a loading of 0.5 wt % (denoted as Ni NPs/TiO2) was prepared via an impregnation method followed by H2 reduction at 400 °C for 1 h.
Results and Discussion
X-ray diffraction (XRD) patterns (Figure 1b) show that all diffraction peaks are associated with TiO2 (anatase and rutile) and no peak of any likely Ni species is observed for Ni–NC/TiO2 or Ni NPs/TiO2. [32] Transmission electron microscopy (TEM) and high-resolution TEM images (Figure 1c-e) show that the surface of TiO2 in Ni–NC/TiO2 is wrapped by a thin amorphous layer and no sign of appreciable Ni NPs is detected, while small Ni NPs with sizes of 2-3 nm were formed on the TiO2 surface in Ni NPs/TiO2 (Figure S3). Two lattice fringes with interplanar distances of 0.352 and 0.325 nm agree well with the crystal parameters of the anatase (101) and rutile (110) planes, respectively, implying that the solvothermal treatment did not alter the crystal structure of TiO2 (Figure 1d and e). An aberration-corrected high-angle annular dark-field scanning TEM (AC HAADF-STEM) image shows many isolated bright spots, with no observed clusters or sub-nanometer particles, in Ni–NC/TiO2 (Figure 1f), which directly validates the formation of atomically dispersed Ni sites. Energy-dispersive X-ray (EDX) spectroscopy elemental mapping analysis (Figure 1g) demonstrates that elemental Ni is uniformly dispersed throughout the entire structure of Ni–NC/TiO2.
The surface compositions and chemical states of Ni–NC/TiO2 were investigated by X-ray photoelectron spectroscopy (XPS). The high-resolution Ni 2p XPS spectrum of Ni–NC/TiO2 (Figure 2a) displays the binding energy of the Ni 2p3/2 peak at 855.2 eV, which is higher than that of Ni0 (853.5 eV) and slightly lower than that of Ni2+ (855.8 eV), [33,34] suggesting the formation of positively charged Ni species. The high-resolution N 1s spectrum of Ni–NC/TiO2 is deconvoluted into three characteristic peaks at 398.8 eV, 399.7 eV and 400.7 eV (Figure S4), which can be assigned to pyridinic N, Ni–N and pyrrolic N, [35] respectively. The presence of Ni–N species indicates that the Ni atoms are adequately coordinated with N sites.
X-ray absorption fine structure (XAFS) analysis was further performed to investigate the coordination environment of Ni in Ni–NC/TiO2, using Ni foil, NiO and nickel phthalocyanine (NiPc) as references. The Ni K-edge X-ray absorption near-edge structure (XANES) spectra (Figure 2b) show that the absorption edge position of Ni–NC/TiO2 lies between those of Ni foil and NiO, revealing cationic Ni sites, consistent with the result of the XPS analysis. Additionally, Ni–NC/TiO2 has a pre-edge profile similar to that of NiPc, with a peak at 8340 eV that is attributed to the 1s to 4p transition and is the signature of a planar Ni–N4 moiety. [33] Compared with NiPc, the slightly lower intensity for Ni–NC/TiO2 probably results from a distorted Ni–N4 structure. As shown in the Fourier-transformed (FT) Ni K-edge extended XAFS (EXAFS) spectra (Figure 2c), the prominent peak at ca. 1.40 Å for Ni–NC/TiO2 corresponds to the first-shell Ni–N coordination, [36] similar to the NiPc reference, and no obvious Ni–Ni peak at 2.19 Å is detected, revealing a negligible presence of metallic Ni species. These results confirm the atomic dispersion of Ni species in Ni–NC/TiO2 in the form of Ni–N coordination, in accordance with the dispersed Ni atoms seen in the HAADF-STEM image. To precisely quantify the coordination microenvironment of the Ni site, curve fitting of the EXAFS spectra was performed (Figure 2d, Figure S5, and Table S1). As shown in Figure 2d, the fitting results for the first coordination shell verify that the Ni site in Ni–NC/TiO2 is four-coordinated by N atoms, matching well with a Ni–N4 site configuration. In addition, the wavelet-transform EXAFS (WT-EXAFS) spectra (Figure 2e) show that Ni–NC/TiO2 and NiPc exhibit similar contour plots with only one intensity maximum, at 6.5 Å−1, instead of the Ni–Ni interaction (ca. 8.4 Å−1), [37] which further demonstrates the formation of dispersed Ni atoms with Ni–N coordination. All of the above characterizations demonstrate that the Ni species in Ni–NC/TiO2 are atomically dispersed as Ni–N4 moieties.
The photocatalytic CH4 oxidation performance was evaluated in a batch reactor at room temperature using only O2 as the oxidant. [22,23] As shown in Figure 3a, only HCHO was detected in the liquid phase over pristine TiO2 after 4 h of irradiation, with a yield of 140 μmol, accompanied by 33 μmol of CO2. For Ni NPs/TiO2, the yields of HCHO and CO2 slightly decreased to 135 and 26 μmol, respectively, with the production of a small amount of CH3OH (19 μmol). The selectivity for C1 oxygenated products increased from ∼81 % over pristine TiO2 to ∼86 % over Ni NPs/TiO2. Because CH3OH is the precursor of HCHO and CO2 in photocatalytic CH4 oxidation, [22,23] the trace or small amount of CH3OH observed over TiO2 and Ni NPs/TiO2 suggests facile overoxidation of CH4. By comparison, much higher yields of the primary products CH3OOH (55 μmol) and CH3OH (29 μmol), together with 114 μmol of HCHO, were produced over Ni–NC/TiO2, and the amount of CO2 decreased to 16 μmol. This corresponds to a remarkable ∼93 % oxygenate selectivity, and the corresponding apparent quantum efficiency (AQE) for oxygenates at 360 nm was determined to be 1.9 %. The yield and selectivity of liquid oxygenates over Ni–NC/TiO2 are higher than those of Ni NPs/TiO2 and TiO2, demonstrating the superiority of the single-atom Ni–NC cocatalyst for photocatalytic CH4 oxidation. The excellent photocatalytic performance observed over Ni–NC/TiO2 is comparable to, or even outperforms, most reported photocatalysts decorated with either noble-metal or non-noble-metal cocatalysts under similar experimental conditions (Table S2). [12,13,16,17,19,20,22-25,38-40] Reactions without photocatalyst, without light, or with CH4 replaced by Ar did not yield any product. An isotope labelling experiment using 13CH4 was performed to elucidate the source of the carbon atoms in the products. The 13C NMR spectrum shows three obvious peaks assigned to CH3OOH, CH3OH and HCHO (Figure 3b), indicating that the produced oxygenates originated from methane rather than from the carbon materials in Ni–NC/TiO2. In addition, no liquid products were detected without the introduction of O2 (Figure S6), which indicates that O2 is necessary for photocatalytic CH4 oxidation. Isotopic experiments with oxygen revealed that O2 molecules were the oxygen source of the produced oxygenates (Figure S7). The overall yield of oxygenates increased with the reaction time, and the formation of CH3OH was observed upon extending the irradiation time beyond 3 h (Figure 3c). Increasing the amount of water was conducive to promoting the production of oxygenates and suppressing the overoxidation of CH4 to CO2 (Figure S8). There was only marginal loss in the photocatalytic performance and selectivity for oxygenates after five consecutive runs (Figure S9), and the morphology and structure of the catalyst remained unchanged (Figures S10 and S11). These results confirm the good stability of Ni–NC/TiO2. Increasing the amount of Ni from 0.5 wt % to 1.1 wt % did not noticeably improve the performance of Ni–NC/TiO2 (Figure S12), because an excessive loading of Ni–NC can shield the light absorption of TiO2 (Figure S13). When Ni was replaced with Co or Fe, the total amounts of oxygenates were reduced, owing to no detectable formation of CH3OOH (Figure 3d). This demonstrates that the isolated Ni sites in Ni–NC, with their unique properties, play an important role in the efficient photooxidation of methane to oxygenates.
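The headline selectivity can be cross-checked directly from the quoted yields; the short Python sketch below does so for the Ni–NC/TiO2 data. Defining selectivity as the oxygenate fraction of all detected carbon-containing products is an assumption made here for illustration, since the paper's exact accounting is not reproduced in this excerpt.

```python
# Cross-check of the ~93 % selectivity quoted for Ni-NC/TiO2, using the product
# yields reported in the text (micromol after 4 h of irradiation).
yields_umol = {"CH3OOH": 55, "CH3OH": 29, "HCHO": 114, "CO2": 16}

oxygenates = sum(v for k, v in yields_umol.items() if k != "CO2")
selectivity = oxygenates / (oxygenates + yields_umol["CO2"])
print(f"C1 oxygenates: {oxygenates} umol, selectivity: {selectivity:.1%}")
# prints: C1 oxygenates: 198 umol, selectivity: 92.5%  (quoted as ~93 %)
```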
To understand the role of the cocatalysts in the photocatalytic reaction, photoluminescence (PL) spectra of the samples were measured to study the photogenerated charge separation efficiency (Figure 4a). Bare TiO2 shows an intense emission peak at 400-440 nm upon excitation at 320 nm. After the introduction of Ni NPs or Ni–NC on TiO2, the PL intensity is remarkably decreased, and Ni–NC/TiO2 exhibits a lower emission peak than Ni NPs/TiO2, indicating that Ni–NC prevents the recombination of charge carriers more effectively than the Ni NPs cocatalyst. Time-resolved PL measurements were carried out to investigate the dynamics of the charge carriers (Figure 4b). The average lifetime of Ni–NC/TiO2 (0.9 ns) is shorter than those of Ni NPs/TiO2 (1.9 ns) and TiO2 (4.5 ns), in line with typical cocatalyst/semiconductor systems in which facile electron transfer from the semiconductor to the cocatalyst leads to fast fluorescence decay, [41,42] revealing the excellent ability of Ni–NC to accelerate the transfer of photogenerated electrons. These results demonstrate the positive role of Ni–NC in efficiently separating electrons and holes, thereby leading to the enhanced performance of photocatalytic CH4 oxidation.
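For readers unfamiliar with how average lifetimes such as 0.9, 1.9 and 4.5 ns are typically obtained from time-resolved PL traces, the sketch below fits a decay to a bi-exponential model and forms the intensity-weighted average lifetime. The bi-exponential form, the synthetic data and all parameter values are illustrative assumptions; the fitting procedure actually used is not described in this excerpt.

```python
import numpy as np
from scipy.optimize import curve_fit

# Bi-exponential decay model commonly used for time-resolved PL traces (an assumption here).
def biexp(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic decay trace standing in for a measured one (time in ns).
t = np.linspace(0, 20, 400)
trace = biexp(t, 0.7, 0.5, 0.3, 2.5) + np.random.default_rng(0).normal(0, 0.005, t.size)

popt, _ = curve_fit(biexp, t, trace, p0=(0.5, 1.0, 0.5, 3.0))
a1, tau1, a2, tau2 = popt
# Intensity-weighted average lifetime (one common convention).
tau_avg = (a1 * tau1**2 + a2 * tau2**2) / (a1 * tau1 + a2 * tau2)
print(f"fitted average lifetime: {tau_avg:.2f} ns")
```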
Generally, for the photocatalytic aerobic CH4 reaction in aqueous solution, the C–H bond of CH4 is oxidized by photogenerated active oxygen species to form ⋅CH3 radicals, which then react with oxygen-derived free radicals to produce oxygenates. [23] To elucidate the reaction mechanism of photocatalytic CH4 oxidation, electron paramagnetic resonance (EPR) with 5,5-dimethyl-1-pyrroline-N-oxide (DMPO) was conducted. As shown in Figure 4c, ⋅CH3 radicals are detected on both Ni–NC/TiO2 and Ni NPs/TiO2, and the intensity of the ⋅CH3 signal for Ni–NC/TiO2 is slightly higher than that for Ni NPs/TiO2, revealing that the activation of CH4 to ⋅CH3 radicals occurs in the selective photo-oxidation of CH4 in aqueous solution. Figure S14 shows that strong EPR signals assigned to ⋅OH radicals are observed without the introduction of CH4. The decreased intensity of the ⋅OH signal in the presence of CH4 may be due to the highly active ⋅OH radicals participating in CH4 oxidation, such as the deep oxidation of CH4 to HCHO and CO2. [22] Regarding the intermediates in photocatalytic O2 reduction, one set of EPR signals assigned to ⋅OOH radicals appears upon illumination (Figure 4d). The observed ⋅OOH radicals are readily produced from O2 reduction with protons by photogenerated electrons. Clearly, the signal intensity of ⋅OOH radicals for Ni–NC/TiO2 is higher than that for Ni NPs/TiO2, indicating that the Ni–NC cocatalyst facilitates the formation of ⋅OOH radicals compared with the Ni NPs cocatalyst. The high amount of ⋅OOH radicals probably leads to the enhanced production of CH3OOH and other oxygenates.
Based on the above results, a plausible photocatalytic CH4 oxidation mechanism on Ni–NC/TiO2 is depicted in Figure S15. Under light irradiation, electrons and holes are generated in TiO2. The photogenerated electrons are transferred to the single Ni–NC sites to promote the reduction of O2 to ⋅OOH radicals, while the powerful holes are left on the surface of TiO2 to initiate CH4 oxidation to ⋅CH3 radicals. These two radicals can easily combine to form the primary product CH3OOH, which can subsequently be transformed into CH3OH and HCHO. The single Ni–NC sites guarantee the efficient separation of photogenerated electrons and holes and the favourable formation of ⋅OOH radicals by mild reduction of O2, ultimately leading to the excellent performance of photocatalytic CH4 oxidation with O2. To verify this hypothesis, the detailed reaction pathways were calculated by density functional theory (DFT). The optimized structural models of Ni–NC/TiO2 and Ni NPs/TiO2 are given in Figure S16.
The energy profiles of the O2 reduction and CH4 activation reactions are illustrated in Figure 5a and b, with the corresponding structures of the reaction intermediates and transition states shown in Figure 5c and d. The activation of O2 to form *OOH species is exothermic on both Ni–NC/TiO2 and Ni NPs/TiO2, with reaction energies of −0.79 and −2.85 eV, respectively. The comparatively weaker exothermicity on Ni–NC indicates weak surface adsorption of *OOH, a consequence of its unique electronic structure. This results in the preferential desorption of *OOH species to generate ⋅OOH radicals that can participate in the production of CH3OOH, rather than the subsequent dissociation of *OOH to form ⋅OH radicals, which is hindered by a large reaction energy (2.00 eV). By contrast, the desorption energy of *OOH species on Ni NPs is as high as 2.62 eV. Compared with *OOH desorption, the dissociation of *OOH to *O + *OH is preferred on Ni NPs, with a reaction energy of −2.25 eV and an energy barrier of 0.12 eV, and the *OH species produced on Ni NPs can desorb to form ⋅OH radicals that oxidize the oxygenates to CO2. These results indicate that the Ni–NC cocatalyst is beneficial for the production of ⋅OOH radicals in O2 reduction, consistent with the EPR results, which contributes to the production of oxygenates. The different behaviors of Ni NPs and Ni–NC in O2 activation probably arise because *OOH is very unstable on metallic Ni NPs and is easily dissociated to form strong Ni–O bonds, as indicated by the large reaction energy (−2.25 eV), whereas the Ni site in Ni–NC is stabilized by N coordination and is thus unfavorable for further dissociation of *OOH.
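To make the comparison explicit, the reaction energies quoted above can be collected side by side; the snippet below simply tabulates the values from the text (no new calculation is performed, and the step labels are informal shorthand rather than the paper's notation).

```python
# DFT reaction energies (eV) quoted in the text; negative values are exothermic.
# The 0.12 eV barrier for *OOH dissociation on Ni NPs is noted separately in the text.
steps_eV = [
    ("Ni-NC",  "O2 activation to *OOH",           -0.79),
    ("Ni-NC",  "*OOH dissociation to *O + *OH",    2.00),
    ("Ni NPs", "O2 activation to *OOH",           -2.85),
    ("Ni NPs", "*OOH desorption to .OOH radical",  2.62),
    ("Ni NPs", "*OOH dissociation to *O + *OH",   -2.25),
]
for site, step, d_e in steps_eV:
    print(f"{site:7s} | {step:32s} | {d_e:+.2f} eV")
```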
For CH4 activation, the reaction energy for cleavage of the first C–H bond on Ni–NC/TiO2 (1.14 eV) is quite similar to that on Ni NPs/TiO2 (1.15 eV), with a slightly lower energy barrier (1.46 eV vs. 1.54 eV). Likewise, the reaction energies for the subsequent desorption of *CH3 to ⋅CH3 radicals on Ni–NC/TiO2 and Ni NPs/TiO2 are similar (1.14 eV vs. 1.13 eV). This clearly shows that there is no significant difference in the activation of methane over the Ni–NC and Ni NPs cocatalysts. As a result, the different pathways of O2 reduction over Ni–NC/TiO2 and Ni NPs/TiO2 primarily account for the differences in activity and selectivity in photocatalytic CH4 oxidation.
Conclusion
In summary, atomically dispersed Ni–NC/TiO2 has been developed by a facile one-pot solvothermal method for room-temperature photocatalytic CH4 oxidation with O2 to C1 oxygenates with 93 % selectivity. The single-atom Ni–NC sites function as electron capture centers to achieve efficient separation of charge carriers in Ni–NC/TiO2. Moreover, the isolated Ni atoms favor the formation and desorption of ⋅OOH radicals in O2 reduction, rather than the production of ⋅OH radicals that are more likely to facilitate the overoxidation of oxygenates to CO2. These unique properties of Ni–NC result in a high C1 oxygenate production rate and high selectivity. This work is the first case of single metal atoms anchored on an N-doped carbon material serving as a cocatalyst to promote the performance of photocatalytic aerobic oxidation of CH4, and it may drive the discovery of more earth-abundant, low-cost photocatalysts for efficiently and selectively oxidizing CH4 to solar fuels and chemicals.
"Chemistry"
] |
Field-Induced Magnetic Monopole Plasma in Artificial Spin Ice
Artificial spin ices (ASIs) are interacting arrays of lithographically-defined nanomagnets in which novel frustrated magnetic phases can be intentionally designed. A key emergent description of fundamental excitations in ASIs is that of magnetic monopoles -- mobile quasiparticles that carry an effective magnetic charge. Here we demonstrate that the archetypal square ASI lattice can host, in specific regions of its magnetic phase diagram, high-density plasma-like regimes of mobile magnetic monopoles. By passively "listening" to spontaneous monopole noise in thermal equilibrium, we reveal their intrinsic dynamics and show that monopole kinetics are minimally correlated (that is, most diffusive) in the plasma phase. These results open the door to on-demand monopole regimes having field-tunable densities and dynamic properties, thereby providing a new paradigm for probing the physics of effective magnetic charges in synthetic matter.
Owing to their user-defined geometries of interacting magnetic elements, artificial spin ices (ASIs) provide a highly flexible and powerful platform with which to investigate the rich physics of frustrated spin systems [1][2][3]. Initially conceived [4] as two-dimensional analogs of "natural" pyrochlore spin ices [5,6] such as Ho2Ti2O7, investigations of ASIs now extend well beyond these original goals and enable detailed studies of a vast selection of possible interacting lattice arrangements, including exotic magnetic topologies not found in nature [7][8][9][10]. Together with natural spin ice materials, one of their most exciting properties is that the fundamental excitations in many ASIs have a natural emergent description in terms of effective magnetic monopoles [11,12] - that is, mobile quasiparticles that possess the equivalent of a net magnetic charge. These charge excitations can interact with each other and with applied magnetic fields via the magnetic analog of the ubiquitous electronic Coulomb interaction, representing the emergence of a range of novel phenomena [1,2], including the possibility of "magnetricity" [12].
While the presence of monopoles in ASI has been observed in pioneering imaging measurements [13,14], dynamical studies of monopole kinetics, and the ability to tune continuously through monopole-rich phases, remain at an early stage. Because their underlying magnetic interactions can be engineered to manifest near room temperature, ASIs are especially well-suited to studies of monopole dynamics and other collective modes. In this work, we use a high-bandwidth magneto-optical noise spectrometer to passively detect spontaneous magnetization fluctuations in thermally active square ASI. Because fluctuations of the constituent magnetic elements in ASIs are inextricably linked to the kinetics of monopoles, the system's broadband magnetization noise spectrum naturally encodes the intrinsic timescales and dynamic correlations of the underlying monopole excitations. The noise reveals specific regions in the field-dependent phase diagram where the density of mobile monopoles increases well over an order of magnitude compared with neighboring phases, a consequence of the field-tunable tension on the Dirac strings connecting mobile monopoles. Moreover, detailed noise spectra demonstrate that monopole kinetics are minimally correlated (i.e., most diffusive) in this plasma-like regime. Discovery of on-demand monopole phases with tunable kinetic properties opens the door to new probes of magnetic charge dynamics and provides a new paradigm for studies of magnetricity in artificial magnetic materials.
We consider the prototypical square ASI lattice [4], shown in Fig. 1(a). Each ferromagnetic nano-island behaves as a single Ising-like macrospin with net magnetization parallel or antiparallel to its long axis due to shape anisotropy. Crucially, the islands are made sufficiently thin so that they are superparamagnetic and thermally active at room temperature [15][16][17][18][19], i.e., in the absence of strong biasing magnetic fields, each island's magnetization can fluctuate spontaneously. This ensures that the lattice can efficiently sample the vast manifold of possible moment configurations, and remain near its magnetic ground state in thermal equilibrium.
Our study focuses on a previously-unexplored characteristic of thermal square ASI; namely, that its field-dependent magnetic phase diagram and ground-state moment configuration must include regions where monopole-like excitations play a dominant and active role. This can be understood by considering the relative energies of the four possible vertex types (I-IV), shown and described in Fig. 1. The ground state of square ASI is an ordered antiferromagnetic tiling of type-I vertices [15,16,20], which obey the "2-in/2-out" ice rule and do not possess any net polarization. However, for sufficiently large in-plane magnetic fields applied at angles near a lattice diagonal (±45°), polarized type-II vertices must eventually become energetically favored; these also obey ice rules, but possess a net polarization. The field-dependent phase diagram of thermal square ASI should then qualitatively resemble the schematic drawn in Fig. 1(c).
FIG. 1. Field-dependent phase diagram of thermally-active square artificial spin ice (ASI). (a) An SEM image of the sample. Each Ni0.8Fe0.2 island has lateral dimensions 220 nm × 80 nm and thickness 3.5 nm, and behaves as a single superparamagnetic Ising moment that, in the absence of any biasing field, exhibits rapid thermodynamic fluctuations near room temperature. (b) The four vertex types in archetypal square ASI, in order of increasing energy at zero applied magnetic field. Type-I vertices have the lowest energy because the nearest-neighbor dipolar coupling J1 exceeds the next-nearest-neighbor coupling J2. Type-I and -II vertices have 2-in/2-out configurations and therefore obey ice rules (but only type-II have a net polarization), while type-III vertices have 3-in/1-out or 3-out/1-in configurations and therefore have a monopole-like effective magnetic "charge". (Type-IV vertices also have magnetic charge but are energetically very unfavorable and occur only rarely.) (c) Notional schematic of the anticipated field-dependent phase diagram of square ASI, showing the two well-defined ground states of the system: full tiling with type-I vertices at small applied fields Bx,y ≈ 0, and polarized type-II vertex tiling when |Bx| and |By| are both large. Near the boundaries, the equilibrium dynamics are determined by the thermal creation, annihilation, and motion of type-III monopole vertices, which generate magnetization noise. (d) A thermal fluctuation in the type-I phase creates a pair of type-III vertices. (e) Subsequent fluctuations can cause the monopoles to diffuse, leaving behind a Dirac string of type-II (yellow) vertices. Red arrows show islands that have flipped; blue and red dots indicate the mobile monopole-like vertices.
The crossover between type-I and type-II magnetic order (or between type-II orderings with different polarization) obviously requires the reversal of individual islands. Crucially, as depicted in Fig. 1(d), flipping any island within either a type-I or type-II ordered lattice unavoidably creates a pair of higher-energy type-III vertices, which have 3-in/1-out or 3-out/1-in moment configuration and therefore possess an effective magnetic "charge" that can be regarded as a magnetic monopole-like quasiparticle excitation [1][2][3]. Subsequent flips of other islands can create additional monopole pairs, annihilate adjacent monopole pairs, or cause an existing monopole to move through the ASI lattice ( Fig. 1(e)). Within a type-I (type-II) ordered region, the creation and sub-sequent separation of a monopole pair along a staggered diagonal direction creates a string of type-II (type-I) vertices [18]. As discussed in detail below, near the type-I/type-II boundaries these monopole excitations, once thermally created, can diffuse freely along certain directions with no cost in energy. Thermal square ASI may therefore be expected to host field-tunable regimes of mobile magnetic monopoles (see Fig. 1(e)).
To search for dynamic monopole regimes in square ASI, and to quantify their timescales and correlations - all under conditions of strict thermal equilibrium - we developed a broadband magnetization noise spectrometer to measure the frequency spectrum of the system's intrinsic magnetization fluctuations (see Fig. 2(a) and Appendix). Samples were mounted in the x-y plane, with horizontal and vertical islands oriented along x̂ and ŷ. A weak probe laser was linearly polarized and focused on the ASI. Due to the longitudinal magneto-optical Kerr effect (MOKE), magnetization fluctuations in the x̂ direction, δmx(t), imparted Kerr rotation fluctuations δθK(t) on the polarization of the reflected laser, which were detected with balanced photodiodes. This "magnetization noise" was digitized and its frequency-dependent power spectrum P(ω) was computed and signal-averaged in real time. Figure 2(b) shows a characteristic spectrum measured out to 1 MHz, spanning over 5 orders of magnitude in both frequency and detected noise.
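The signal chain just described (digitize the Kerr-rotation fluctuations, then compute and average their power spectrum) can be sketched in a few lines of Python; the sampling rate, record length and use of SciPy's Welch estimator are illustrative assumptions, not a description of the actual instrument.

```python
import numpy as np
from scipy.signal import welch

fs = 2_000_000                       # assumed sampling rate (Hz), enough to reach ~1 MHz
rng = np.random.default_rng(1)
kerr_trace = rng.normal(size=fs)     # placeholder for one second of digitized delta-theta_K(t)

# Averaging periodograms of successive segments (Welch's method) mimics the
# real-time signal averaging of the power spectrum described in the text.
freqs, psd = welch(kerr_trace, fs=fs, nperseg=2**16)
print(freqs[:3], psd[:3])
```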
We first analyze the noise from a control lattice that contains only horizontal islands oriented along x̂. Figure 3(a) shows a map of the measured total (frequency-integrated) noise power versus applied in-plane magnetic fields Bx and By. As expected, significant noise was only observed when Bx ≈ 0, where the thermally-active islands were unbiased and fluctuated freely. For larger |Bx|, the islands were polarized and effectively frozen, suppressing fluctuations. Note that By has no effect because this control sample lacks vertical islands, and because the Ising-like magnetization of the horizontal islands is not influenced by By in this small range (±7 G).
In marked contrast, Fig. 3(b) shows the noise map from square ASI, where strong dipolar interactions between nearest-neighbor (adjacent vertical and horizontal) islands lead to the stable type-I magnetic ordering sketched in the phase diagram of Fig. 1(c). Indeed, the noise map exhibits a large dark central region when Bx,y ≈ 0, indicating suppressed fluctuations, as expected for stable type-I order. Towards the four corners of the map, where both |Bx| and |By| are large, the magnetization noise again vanishes, indicating field-stabilized type-II tiling. Strikingly, however, there is a bright diamond-shaped boundary region, indicating a high level of spontaneous noise between the type-I and type-II magnetic orderings.
The boundary region occurs when |B x | + |B y | equals the crossover field B c where type-I and type-II vertices become exactly degenerate in energy (see Supplemental Material Fig. S1 for additional details). As discussed above, spontaneous reversal of an island within a type-I or type-II ordered region creates a pair of type-III monopole vertices ( Fig. 1(d)). The key point is that at B c , these thermally-generated monopoles can then separate and diffuse freely along the staggered diagonal path that is most closely aligned with the applied field, as illustrated in Fig. 1(e). This motion, denoted by the process III + I ↔ II + III, has no collective energy cost along the boundary where type-I and -II vertex energies are degenerate. No net energy is required to lengthen or shorten the Dirac string of flipped vertices connecting the two monopoles -i.e., the string tension vanishes [18]. Most importantly, the freedom to separate leads to a substantial increase in the effective monopole lifetime and therefore the equilibrium density of mobile monopole vertices. Being topological quasiparticles, their density is limited only by the rate at which they annihilate by diffusing to an edge of the lattice or by encountering monopoles of opposite charge. This special regime stands in marked contrast to the case when |B x | + |B y | < B c (or > B c ), where monopole motion that creates additional type-II (or type-I) vertices is energetically unfavored and therefore suppressed. In this case the Dirac string has a nonzero tension that favors recombination of monopole pairs shortly following their creation, and, as demonstrated below, is accompanied by increasingly anomalous monopole diffusion and correlated dynamics.
Because the stochastic creation, annihilation, and motion of monopoles is intimately linked to the reversal of individual islands, the boundary region is clearly revealed by magnetization noise. Equivalently, all noise in square ASI is necessarily due to monopole kinetics. The diamond-shaped boundary therefore signals a field-tunable regime of dynamic magnetic monopoles. We note that the noise map also shows bright vertical stripes when Bx ≈ 0 and |By| is large, indicating strong fluctuations at the boundary between different type-II orientations. This arises from an effective dimensional reduction, where all the vertical islands are polarized by By and therefore - considering nearest-neighbor coupling J1 only - there is no energy difference whether a horizontal island is magnetized along ±x̂. Thermal fluctuations δmx(t) therefore occur, similar to the case of the control lattice at Bx = 0.
(Weaker next-nearest-neighbor coupling J2 is insufficient, compared to kT, to stabilize any order.)
To better understand the noise data and provide additional insight into the underlying mechanisms involved, we performed Monte Carlo simulations of square ASI (see Appendix). Figure 3(c) shows the calculated density of type-III monopole vertices, while the computed rate of horizontal spin flips is shown in Fig. 3(d). Regions of high monopole density correspond to regions of maximum flip rate, which in turn agrees very well with the measured noise map, thereby validating the connection between the measured noise power and the density of mobile monopoles.
Returning to the phase diagram sketched in Fig. 1(c), both experiments and simulations confirm that the type-I and type-II ordered regions in square ASI are separated by a plasma-like regime of thermally-active monopoles. Simulations indicate not only that the monopole density increases by over two orders of magnitude at these boundaries (Fig. 3(c)), but also that the majority of these monopoles are freely diffusing (Fig. 4). Crucially, we note that both the monopole density and (as shown below) their dynamic correlations in this regime are continuously tunable by applied magnetic fields above and below Bc, which is distinct from the monopole phase recently achieved in "degenerate square ASI" that uses height-offset vertical and horizontal islands [21,22]. In other words, by tuning to Bc in conventional square ASI, we realize a regime where the energy costs of changing type-I and type-II vertex populations are minimized. Of course, tuning |B| away from Bc lifts the energy degeneracy of type-I and type-II vertices, whereupon the Dirac strings acquire tension and the monopoles will be affected by the magnetostatic potential associated with B, and can be expected to exhibit different kinetics. Noise measurements provide an effective tool to directly probe kinetic correlations of monopoles, via the detailed shape of the noise spectrum over a very broad frequency range (typically from 1 Hz to 1 MHz). This range directly accesses the relevant intrinsic timescales of fluctuations in our thermally-active ASIs, and naturally complements powerful imaging techniques such as PEEM or MFM that provide excellent spatial information but are typically limited to much slower (∼0.1-0.001 Hz) timescales and are less amenable to measurements in applied magnetic fields. We note that an analogous approach based on SQUID magnetometry was recently applied to Dy2Ti2O7 crystals by Dusad et al. [23], yielding important insights into monopoles in natural pyrochlore spin ices at cryogenic temperatures.
FIG. 4. Free and total monopole densities as a function of magnetic field applied along a lattice diagonal direction (i.e., along the dotted lines in panels (a) and (b)). In the monopole plasma regime (i.e., near the crossover field Bc where type-I and type-II vertices are energetically degenerate), the total monopole density increases mainly due to the huge increase in the number of free monopoles, which in turn is due to the absence of Dirac string tension at Bc and consequent free monopole diffusion.
A typical noise spectrum, P(ω), from square ASI is shown in Fig. 2(b). Here, P(ω) = ⟨a(ω)a*(ω)⟩, where a(ω) is the Fourier transform of the noise signal δm(t) and the brackets indicate an average over repeated measurements. Equivalently, P(ω) is the Fourier transform of the system's temporal correlation function ⟨δm(0)δm(t)⟩. Empirically, we find that all the measured noise spectra can be fit reasonably well by a single functional form, Eq. (1), in which ω0 is a characteristic relaxation rate and β is a power-law decay exponent such that P(ω) ∝ ω^{−β} at high frequencies. Importantly, β defines the "color" of the noise and is an indicator of correlated dynamics. In general, if the processes responsible for magnetization dynamics are uncorrelated in time, then ⟨δm(0)δm(t)⟩ decays exponentially (∝ e^{−t/τ0}) with a characteristic relaxation time τ0 = 1/ω0. Per the fluctuation-dissipation theorem, the corresponding noise spectrum then exhibits a Lorentzian lineshape (β = 2) and is said to be Brownian. Noise exhibiting β = 2 therefore implies simple diffusive Brownian kinetics, such as from a trivial random walk with independent increments, or from monopole creation and annihilation models described, e.g., by Ryzhkin [24] and Klyuev [25]. In contrast, noise exhibiting β < 2 indicates kinetics that cannot be described by a single exponential time scale [26]. Although it is always possible to simulate any β by summing a suitable distribution of Lorentzians, for the case of ASI the observation of β < 2 can be considered in terms of anomalous monopole diffusion arising from correlated kinetics. In particular, when temporal fluctuations are no longer independent but instead exhibit negative correlations (e.g., if prior fluctuations increased the magnetization, then the next fluctuation is more likely to decrease it), then monopoles will exhibit sub-diffusive behavior. Such processes are related to "fractional" Brownian motion [27] and can be said to retain a memory [25]. Noise following Eq. (1) can derive from kinetics with algebraically-decaying correlation functions of the form ⟨δm(0)δm(t)⟩ ∝ t^{−α} e^{−t/τ0}, where α = 1 − β/2. Thus, broadband noise spectroscopy can reveal both the kinetic correlations and the characteristic relaxation times as square ASI is tuned through the monopole plasma regime.
Figure 5(a) shows the dramatic evolution of P(ω) as square ASI is tuned across the boundary between type-I and type-II order, keeping |Bx| = |By|. Figure 5(b) shows the measured values of ω0, β, and the total noise power.
FIG. 5. Corresponding computed noise spectra from Monte Carlo simulations and extracted parameters ω0, β, and total noise. The total integrated noise is peaked when type-I and type-II vertex energies are degenerate, i.e., in the monopole plasma regime that occurs at the crossover field Bc. By contrast, simulations show that ω0 exhibits a minimum when the ASI orders antiferromagnetically (spontaneous type-I ordering), which occurs at an applied field that is slightly less than Bc when T > 0 (see also Supplemental Material Fig. S4).
Interestingly, although β < 2 everywhere (indicating some degree of correlated dynamics), β exhibits a clear maximum and is closest to 2 when B = Bc, where the total noise power and the monopole density are largest. This indicates that memory effects are weakest in the plasma-like regime, and monopole motion along the lattice diagonal (III + I ↔ II + III; see Fig. 1(e)) most closely approximates ordinary diffusion. However, β decreases when |B| ≠ Bc, indicating that dynamics in square ASI become increasingly sub-diffusive away from the monopole plasma regime. Given that noise in pyrochlore Dy2Ti2O7 also evinced similarly sub-diffusive kinetics [23], correlated monopole dynamics in thermal equilibrium may therefore be a universal feature shared by both natural (3D) and artificial (2D) spin ices. Monte Carlo simulations capture the overall shape of the noise spectra and many of the observed trends (Figs. 5(c),(d)), showing that the total noise is maximized in the monopole plasma regime at Bc. The simulated noise also exhibits a non-trivial power-law decay exponent β in the type-I phase that grows steeper (see dashed lines) as B increases toward Bc, in agreement with the data. The simulations do not permit accurate extraction of β at larger B (i.e., into the type-II phase), because of the proximity of ω0 to the highest (Nyquist) frequency of the simulation, where aliasing artifacts and trivial decays due to numerics are also present.
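Because Eq. (1) itself is not reproduced in this excerpt, the sketch below assumes a generalized-Lorentzian form P(ω) = A/[1 + (ω/ω0)^β], which reduces to a Lorentzian for β = 2 and falls as ω^(−β) at high frequency as stated; the functional form, the synthetic spectrum and the log-space fitting choice are all assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_noise_model(log_omega, log_amp, log_omega0, beta):
    # log10 of an assumed generalized Lorentzian A / (1 + (omega/omega0)**beta)
    ratio = 10.0 ** (log_omega - log_omega0)
    return log_amp - np.log10(1.0 + ratio ** beta)

omega = np.logspace(0, 6, 200)                       # 1 Hz to 1 MHz
rng = np.random.default_rng(2)
synthetic = 10 ** log_noise_model(np.log10(omega), 0.0, 3.0, 1.7)
data = synthetic * 10 ** rng.normal(0, 0.02, omega.size)

popt, _ = curve_fit(log_noise_model, np.log10(omega), np.log10(data),
                    p0=(0.0, 4.0, 2.0))
print("fitted log10(A), log10(omega0), beta:", popt)
```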
The measured spectra (Figs. 5(a),(b)) also show that the relaxation rate ω 0 falls by orders of magnitude when tuning from type-II into the type-I phase by decreasing |B|. Such behavior is only partially expected: While a slowing-down of kinetics upon approaching the antiferromagnetic phase transition is expected from theory [28,29], relaxation rates are expected to increase once again after entering the antiferromagnetically-ordered phase -as captured by the simulations (Figs. 5(c),(d)) but in marked contrast to the measurements where ω 0 remains small as B → 0. This discrepancy points to a key difference between real ASIs, which are composed of mesoscopic superparamagnets, and models of binary spin systems. Kinetics in real ASIs are a non-trivial convolution of many-body time scales (which are simulated), and local timescales associated with magnetic anisotropy and dynamics within the islands themselves (which typically are not simulated).
The calculated minimum in ω 0 highlights a further important distinction: the antiferromagnetic phase transition in square ASI [30] occurs at B < B c when T > 0 (as mandated by the system's temperature-dependent free energy), whereas the monopole plasma always occurs at B c -independent of temperature-because it is determined solely by the energy degeneracy of type-I and type-II vertices. This distinction is apparent in the separation between B c and the minimum in ω 0 in Fig. 5(d), and is further elucidated at other temperatures by calculations of the specific heat and order parameter shown in Supplemental Material Fig. S4.
In summary, broadband noise spectroscopy introduces a new paradigm to ASI studies, by providing a probe that is explicitly sensitive to dynamic timescales and correlations over many orders of magnitude in frequency. These results open the door to direct exploration of field/temperature phase diagrams and their intrinsic equilibrium dynamics, and the discovery of a field-tunable monopole plasma regime in archetypal square ASI demonstrates the power of such investigations. The ability to create monopole-rich phases on demand - with tunable kinetic correlations - suggests the natural next steps of engineering monopole phases in finite-size arrays and in different lattice geometries. The additional availability of a wide-bandwidth dynamic probe opens the possibility of studying new regimes of magnetic charge dynamics in the more highly frustrated kagome systems, as well as the dynamics of the topological excitations recently demonstrated in vertex-frustrated ASIs [7]. As a long-term prospect, the ability to field-tune the presence or absence of magnetic charges suggests the possibility of transistor-like devices based on monopole flow, realizing new potential applications for these emergent effective charges.
Sample fabrication. ASI lattices were fabricated by methods similar to those employed in prior work [20,31]. Briefly, electron beam lithography was used to pattern bilayer resist masks on Si/SiN substrates for subsequent metal deposition and lift-off. Islands of lateral dimension 220 nm × 80 nm were formed, with thickness ∼3.5 nm. Ultrahigh-vacuum (∼10^−10 Torr base pressure, ∼10^−9 Torr deposition pressure) electron beam evaporation at 0.05 nm/s was used for permalloy (Ni0.8Fe0.2) deposition, in a molecular beam epitaxy system. The islands were then capped with two layers of thermally-oxidized Al (total thickness ∼3 nm) to minimize oxidation of the permalloy.
Broadband magnetization noise spectroscopy. The ASI samples were mounted face-up in the x-y plane, on a positioning stage that could be temperature-controlled from −10 °C to +30 °C. The horizontal and vertical islands were oriented along x̂ and ŷ, respectively. The magneto-optical noise spectrometer is adapted from an instrument previously developed to measure out-of-plane magnetization fluctuations [32]. A weak probe laser (<1 mW), incident in the x-z plane, was linearly polarized and focused to a small (4 µm diameter) spot on the ASI at 45° incidence; ∼300 islands were therefore probed. Thermodynamic magnetization fluctuations along the x̂ direction, δmx(t) (i.e., fluctuations of the horizontal islands only), imparted small Kerr rotation fluctuations δθK(t) on the polarization of the reflected laser, which were detected with balanced photodiodes. The magnetization noise was amplified, digitized, and its power spectrum was computed and signal-averaged in real time using fast Fourier transform (FFT) methods. Small coils were used to apply magnetic fields Bx and By in the sample plane.
The spectral density of the measured noise contained additional contributions from amplifier noise and photon shot noise, which were unrelated to and uncorrelated with the magnetization fluctuations from the ASI. We subtracted off these constant contributions by also measuring the noise spectra in the presence of large applied magnetic field (B x = B y 20 G) where all the islands were strongly polarized and magnetization noise from the ASI was entirely suppressed.
To obtain the maps of the total (frequency-integrated) noise power vs. applied magnetic field, for each value of magnetic field the noise spectrum was acquired for several seconds, which allowed us to record good quality data in the frequency range from a few hundred Hz to a few hundred kHz. For more detailed studies of the noise spectra over the broadest possible frequency range (shown for example in Fig. 5), the measured noise was signal-averaged for a longer time duration (typically tens of minutes), which increased the usable bandwidth from about 1 Hz to over 1 MHz.
Monte Carlo (MC) spin dynamics simulations. We performed standard Glauber MC simulations of conventional square ASI lattices with periodic boundary conditions. We used single-spin updates (i.e., no cluster or loop moves), which should coarsely resemble the kinetics of the nanoislands in square ASI. Spins were chosen randomly, and the acceptance probability was p = [1 + exp(Δ/kT)]^−1, where Δ is the usual energy difference associated with a spin flip and k is the Boltzmann constant. Typical simulations utilized ∼10^6 annealing steps, and then the magnetization was recorded for up to several million MC time steps at fixed temperature T and applied field B. Noise spectra were computed directly from the time series via fast Fourier transform.
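A minimal sketch of the single-spin Glauber update described above is given below. The lattice and energy function are placeholders (a non-interacting Zeeman toy model rather than the vertex energetics defined in the next paragraph); only the acceptance probability p = [1 + exp(Δ/kT)]^(−1) is taken directly from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def glauber_step(spins, energy_change, T, k=1.0):
    """One single-spin Glauber update: pick a random spin and flip it with
    probability 1 / (1 + exp(delta_E / kT))."""
    i = rng.integers(spins.size)
    delta_e = energy_change(spins, i)
    if rng.random() < 1.0 / (1.0 + np.exp(delta_e / (k * T))):
        spins[i] *= -1

# Toy example: non-interacting Ising islands in a field h (placeholder energetics).
h = 0.5
spins = rng.choice([-1, 1], size=1000)
zeeman_change = lambda s, i: 2.0 * h * s[i]   # energy cost of flipping spin i in field h
for _ in range(50_000):
    glauber_step(spins, zeeman_change, T=1.5)
print("magnetization per spin:", spins.mean())   # approaches tanh(h/T) ~ 0.32
```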
The simulations employed a vertex model in which the energetics were obtained from two energy scales: the nearest-neighbor coupling J1 between perpendicular spins, and the weaker next-nearest-neighbor coupling J2 between collinear spins. The simulations used J1/J2 = 1.8, in line with previous micromagnetic simulations of ASI systems, for which the ratio varies from 1.4 to 2.0 (depending on fabrication details). Within this model, the energies of the four different vertex topologies in zero applied magnetic field were: ε_I = −4J1 + 2J2, ε_II = −2J2, ε_III = 0, ε_IV = 4J1 + 2J2. The vertex model clearly defines the applied in-plane magnetic fields at which the monopole plasma regime is realized - namely, fields at which type-I and type-II vertices have equal energies: |Bx| + |By| ∝ ε_II − ε_I = 4(J1 − J2). Supplemental Material Fig. S1 shows the origin of the diamond-shaped noise maps and contains additional details.
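As a worked check of the crossover field, combining the vertex energies above with the Zeeman term for a field-aligned type-II vertex (written explicitly in the Supplemental Material discussion below) gives, in LaTeX:

```latex
\epsilon_{\mathrm{I}} = -4J_1 + 2J_2 , \qquad
\epsilon_{\mathrm{II}}(B) = -2J_2 - \mu\,(B_x + B_y) , \qquad
\epsilon_{\mathrm{I}} = \epsilon_{\mathrm{II}}(B_c)
\;\Longrightarrow\;
\mu B_c = 4\,(J_1 - J_2) .
```

With the Zeeman energy measured in units of J2 (so that µ is absorbed) and J1/J2 = 1.8, this reproduces Bc = 4(1.8 − 1) J2 = 3.2 J2, as quoted in the next paragraph.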
The simulation temperature T is defined in units of J2/k. Using J1/J2 = 1.8, the critical temperature Tc below which square ASI spontaneously orders into its long-range type-I antiferromagnetic state is Tc ≈ 2.4 J2/k (at zero applied magnetic field). To most closely match experimental conditions, MC simulations were typically performed at lower temperatures in the range of T = 1.4-1.8 J2/k. Supplemental Material Fig. S4 shows additional simulations of the T- and B-dependent antiferromagnetic order parameter and specific heat in thermal square ASI, in relation to the (T-independent) monopole plasma regime. The applied fields Bx and By were defined in terms of the Zeeman energy on a single spin, and thus also in units of J2. With these conventions, the monopole plasma was expected at |Bx| + |By| = Bc = 3.2 J2.
FIG. S1. Expected shape of the magnetic-field-dependent magnetization noise map in conventional thermal square ASI. (a), (b) Type-I and type-II vertices, respectively, in an in-plane magnetic field B = Bx x̂ + By ŷ applied at an angle 0 < φ < 90° with respect to the x axis. The energies of the vertices (ε_I and ε_II) are given by the nearest- and next-nearest-neighbor coupling constants (J1 and J2, respectively) and the Zeeman energy due to B, so that ε_I = −4J1 + 2J2 and ε_II = −2J2 − µ(Bx + By), where µ is the magnetic moment of a single nanoisland. (c) The expected shape of the magnetic-field-dependent noise map, which is determined by the crossover magnetic field Bc that is required to make type-I and type-II vertices energetically degenerate (ε_I = ε_II). For the type-II vertex shown in panel (b), Bx + By = Bc = 4(J1 − J2)/µ, which is depicted by the dashed green line. Analogous expressions for the other three possible type-II vertices together define the diamond (blue line). Additionally, different orientations of type-II vertices are degenerate for Bx = 0 and large By (or By = 0 and large Bx), which results in the vertical (horizontal) "tails" of the diamond (where fluctuations along x̂ (ŷ) are expected), depicted with solid (dashed) red lines.
Regions of high monopole density are also the most active. The four-fold symmetry of the monopole density map is not captured by the experiment, which is sensitive only to fluctuations along x̂ but not ŷ.
FIG. S3. Measured maps of the magnetization noise for square thermal ASI with different lattice constants. (a) Map of the total (frequency-integrated) magnetization noise measured at 23 °C for the square ASI made of 220 nm × 80 nm islands, with lattice constant d = 300 nm (i.e., the same ASI used for all the experiments described in the main text). (b) Analogous map for square ASI with d = 360 nm. As expected, the diamond-shaped boundary is still visible, but appears at smaller values of the applied field, due to weaker coupling between the islands.
"Physics"
] |
Supersymmetric $AdS_5$ black holes and strings from 5D $N=4$ gauged supergravity
We study supersymmetric $AdS_3\times \Sigma_2$ and $AdS_2\times \Sigma_3$ solutions, with $\Sigma_2=S^2,H^2$ and $\Sigma_3=S^3,H^3$, in five-dimensional $N=4$ gauged supergravity coupled to five vector multiplets. The gauge groups considered here are $U(1)\times SU(2)\times SU(2)$, $U(1)\times SO(3,1)$ and $U(1)\times SL(3,\mathbb{R})$. For the $U(1)\times SU(2)\times SU(2)$ gauge group admitting two supersymmetric $N=4$ $AdS_5$ vacua, we identify a new class of $AdS_3\times \Sigma_2$ and $AdS_2\times H^3$ solutions preserving four supercharges. Holographic RG flows describing twisted compactifications of the $N=2$ four-dimensional SCFTs dual to the $AdS_5$ vacua to the SCFTs in two and one dimensions dual to these geometries are given numerically. The solutions can also be interpreted as supersymmetric black strings and black holes in asymptotically $AdS_5$ spaces with near-horizon geometries given by $AdS_3\times \Sigma_2$ and $AdS_2\times H^3$, respectively. These solutions extend previously known black brane solutions, including the half-supersymmetric $AdS_5$ black strings recently found in $N=4$ gauged supergravity. Similar solutions are also studied in the non-compact gauge groups $U(1)\times SO(3,1)$ and $U(1)\times SL(3,\mathbb{R})$.
Introduction
Black branes of different spatial dimensions play an important role in the development of string/M-theory. They lead to many insightful results such as the construction of gauge theories in various dimensions and the celebrated AdS/CFT correspondence [1]. According to the latter, black branes in asymptotically AdS spaces are of particular interest since they are dual to RG flows across dimensions, from superconformal field theories (SCFTs) dual to the asymptotically AdS spaces to lower-dimensional fixed points dual to the near-horizon geometries [2]. Recently, a new approach for computing the microscopic entropy of AdS4 black holes has been introduced based on twisted partition functions of three-dimensional SCFTs [3,4,5,6,7,8,9]. This has also been applied to AdS black holes in other dimensions [10,11,12,13,14].
In this paper, we are interested in supersymmetric black holes and black strings in asymptotically AdS5 spaces from five-dimensional N = 4 gauged supergravity coupled to vector multiplets [15,16]. These solutions have near-horizon geometries of the forms AdS2 × Σ3 and AdS3 × Σ2, respectively. We will consider Σ3 in the form of a three-sphere (S3) or a three-dimensional hyperbolic space (H3). Similarly, Σ2 will be given by a two-sphere (S2), a two-dimensional hyperbolic space (H2), or a Riemann surface of genus g > 1. Similar solutions have previously been found in minimal and maximal gauged supergravities, see for example [17,18,19,20,21,22,23,24]. This type of solution has also appeared in pure N = 4 gauged supergravity in [25], and recently, half-supersymmetric black strings with hyperbolic horizons have been found in matter-coupled N = 4 gauged supergravity with compact U(1) × SU(2) × SU(2) and non-compact U(1) × SO(3,1) gauge groups [26].
We will look for more general solutions of AdS 5 black strings with both hyperbolic and spherical horizons and preserving 1/4 of the N = 4 supersymmetry in five dimensions. The solutions interpolate between N = 4 supersymmetric AdS 5 vacua of the gauged supergravity and near horizon geometries of the form AdS 3 × Σ 2 . In addition, we will look for supersymmetric black holes interpolating between AdS 5 vacua and near horizon geometries AdS 2 × Σ 3 . According to the AdS/CFT correspondence, these solutions describe RG flows across dimensions from the dual N = 2 SCFTs to two- and one-dimensional SCFTs in the IR. The IR SCFTs are obtained via twisted compactifications of N = 2 SCFTs in four dimensions. Many solutions of this type have been found in various space-time dimensions, see [27,28,29,30,31,32,33,34,35,36,37,38] for an incomplete list.
We mainly consider N = 4 gauged supergravity coupled to five vector multiplets with gauge groups entirely embedded in the global symmetry SO(5, 5). We will also restrict ourselves to gauge groups that lead to supersymmetric AdS 5 vacua. These gauge groups have been shown in [39] to take the form of U(1) × H 0 × H with the U(1) gauged by the graviphoton that is a singlet under USp(4) ∼ SO(5) R-symmetry. The H ⊂ SO(n + 3 − dim H 0 ) is a compact group gauged by vector fields in the vector multiplets, and H 0 is a non-compact group gauged by three of the graviphotons and dim H 0 − 3 vectors from the vector multiplets. The remaining two graviphotons in the fundamental representation of SO(5) are dualized to massive two-form fields. In addition, H 0 must contain an SU(2) subgroup. For the case of five vector multiplets, possible gauge groups that admit supersymmetric AdS 5 vacua and can be embedded in SO(5, 5) are U(1) × SU(2) × SU(2), U(1) × SO(3, 1) and U(1) × SL(3, R). We will look for AdS 5 black string and black hole solutions in all of these gauge groups.
The paper is organized as follows. In section 2, we review N = 4 gauged supergravity in five dimensions coupled to vector multiplets using the embedding tensor formalism. In section 3, we find supersymmetric AdS 3 × Σ 2 solutions preserving four supercharges and give numerical RG flow solutions interpolating between these geometries and supersymmetric AdS 5 vacua. An AdS 2 × H 3 solution together with an RG flow interpolating between AdS 5 vacua and this geometry will also be given. In sections 4 and 5, we repeat the same analysis for the non-compact U(1) × SO(3, 1) and U(1) × SL(3, R) gauge groups. Since the U(1) × SL(3, R) gauge group has not been studied in [26], we will discuss its construction and supersymmetric AdS 5 vacuum in detail. The full scalar mass spectrum at this critical point will also be given. This should be useful in the holographic context since it contains information on dimensions of operators dual to supergravity scalars. We end the paper with some conclusions and comments in section 6.
2 Five dimensional N = 4 gauged supergravity coupled to vector multiplets In this section, we briefly review the structure of five dimensional N = 4 gauged supergravity coupled to vector multiplets with the emphasis on formulae relevant for finding supersymmetric solutions. The detailed construction of N = 4 gauged supergravity can be found in [15] and [16]. The N = 4 gravity multiplet consists of the graviton e^μ̂_µ, four gravitini ψ µi , six vectors A^0_µ and A^m_µ, four spin-1/2 fields χ i and one real scalar Σ, the dilaton. Space-time and tangent space indices are denoted respectively by µ, ν, . . . = 0, 1, 2, 3, 4 and μ̂, ν̂, . . . = 0, 1, 2, 3, 4. The SO(5) ∼ USp(4) R-symmetry indices are described by m, n = 1, . . . , 5 for the SO(5) vector representation and i, j = 1, 2, 3, 4 for the SO(5) spinor or USp(4) fundamental representation. The gravity multiplet can couple to an arbitrary number n of vector multiplets. Each vector multiplet contains a vector field A µ , four gaugini λ i and five scalars φ m . The n vector multiplets will be labeled by indices a, b = 1, . . . , n, and the component fields within these vector multiplets will be denoted by (A a µ , λ a i , φ ma ). From both gravity and vector multiplets, there are in total 6 + n vector fields. The 5n scalar fields from the vector multiplets parametrize the SO(5, n)/SO(5) × SO(n) coset. To describe this coset manifold, we introduce a coset representative V A M transforming under the global SO(5, n) and the local SO(5) × SO(n) by left and right multiplications, respectively. We use indices M, N, . . . = 1, 2, . . . , 5 + n for global SO(5, n) indices. The local SO(5) × SO(n) indices A, B, . . . will be split into A = (m, a). We can accordingly write the coset representative as The matrix V A M is an element of SO(5, n) and satisfies the relation with η M N = diag(−1, −1, −1, −1, −1, 1, . . . , 1) being the SO(5, n) invariant tensor. Equivalently, the SO(5, n)/SO(5) × SO(n) coset can also be described in terms of a symmetric matrix which is manifestly invariant under the SO(5) × SO(n) local symmetry. Gaugings promote a given subgroup G 0 of the full global symmetry SO(1, 1) × SO(5, n) of N = 4 supergravity coupled to n vector multiplets to a local symmetry. These gaugings are efficiently described by the embedding tensor formalism. N = 4 supersymmetry allows three components of the embedding tensor: ξ M , ξ M N = ξ [M N ] and f M N P = f [M N P ] . The first component ξ M describes the embedding of the gauge group in the SO(1, 1) ∼ R + factor identified with the coset space parametrized by the dilaton Σ. From the result of [39], the existence of N = 4 supersymmetric AdS 5 vacua requires ξ M = 0. In this paper, we are only interested in solutions that are asymptotically AdS 5 , so we will restrict ourselves to gaugings with ξ M = 0.
For ξ M = 0, the gauge group is entirely embedded in SO(5, n) with the gauge generators given by where ∇ µ is the usual space-time covariant derivative. We use the convention that the definition of ξ M N and f M N P includes the gauge coupling constants. Note also that SO(5, n) indices M, N, . . . are lowered and raised by η M N and its inverse η M N , respectively.
Generators X M = (X 0 , X M ) of a consistent gauge group must form a closed subalgebra of SO(5, n). This requires ξ M N and f M N P to satisfy the quadratic constraints Gauge groups that admit N = 4 supersymmetric AdS 5 vacua generally take the form of U(1) × H 0 × H, see [39] for more detail. The U(1) is gauged by A 0 µ while H ⊂ SO(n + 3 − dim H 0 ) is a compact group gauged by vector fields in the vector multiplets. H 0 is a non-compact group gauged by three of the graviphotons and dim H 0 − 3 vectors from the vector multiplets. H 0 must also contain an SU(2) subgroup. For simple groups, H 0 can be SU(2) ∼ SO(3), SO(3, 1) and SL(3, R).
In the embedding tensor formalism, there are two-form fields that are introduced off-shell. These two-form fields do not have kinetic terms and coupled to vector fields via a topological term. In all of the solutions considered here, the two-form fields can be consistently truncated out. We will accordingly set all the two-form fields to zero from now on. The bosonic Lagrangian of a general gauged N = 4 supergravity coupled to n vector multiplets can be written as where e is the vielbein determinant. L top is the topological term whose explicit form will not be given here since, given our ansatz for the gauge fields, it will not play any role in the present discussion. With vanishing two-form fields, the covariant gauge field strength tensors read where The scalar potential is given by where M M N is the inverse of M M N , and M M N P QRS is obtained from by raising the indices with η M N . All fermionic fields are described by symplectic Majorana spinors subject to the following condition with C and Ω ij being respectively the charge conjugation matrix and USp(4) symplectic form. Supersymmetry transformations of fermionic fields (ψ µi , χ i , λ a i ) are given by (15) in which the fermion shift matrices are defined by In these equations, V ij M is defined in term of V M m as where Γ ij m = Ω ik Γ mk j and Γ mi j are SO(5) gamma matrices. Similarly, the inverse element V ij M can be written as In the subsequent analysis, we use the following explicit choice of SO(5) gamma matrices Γ mi j given by where σ i , i = 1, 2, 3 are the usual Pauli matrices. The covariant derivative on ǫ i reads where the composite connection is defined by In this work, we mainly focus on the case of n = 5 vector multiplets. To parametrize the scalar coset SO(5, 5)/SO(5) × SO(5), it is useful to introduce a basis for GL(10, R) matrices (e M N ) P Q = δ M P δ N Q (22) in terms of which SO(5, 5) non-compact generators are given by For a compact U(1) × SU(2) × SU(2) gauge group, components of the embedding tensor are given by where g 1 , g 2 and g 3 are the coupling constants for each factor in U(1) × SU(2) × SU(2). The scalar potential obtained from truncating the scalars from vector multiplets to U(1) × SU(2) diag ⊂ U(1) × SU(2) × SU(2) singlets has been studied in [26]. There is one U(1) × SU(2) diag singlet from the SO(5, 5)/SO(5) × SO(5) coset corresponding to the following SO(5, 5) non-compact generator With the coset representative given by the scalar potential can be computed to be The potential admits two N = 4 supersymmetric AdS 5 critical points given by i : ii : In critical point i, we have set g 2 = − √ 2g 1 to make this critical point occur at Σ = 1. However, we will keep g 2 explicit in most expressions for brevity. Critical point i is invariant under the full gauge symmetry U(1) × SU(2) × SU(2) while critical point ii preserves only U(1)×SU(2) diag symmetry due to the non-vanising scalar φ. V 0 denotes the cosmological constant, the value of the scalar potential at a critical point.
Supersymmetric black strings
We now consider vacuum solutions of the form AdS 3 ×Σ 2 with Σ 2 being S 2 or H 2 . A number of AdS 3 × H 2 solutions that preserve eight supercharges together with RG flows interpolating between them and supersymmetric AdS 5 critical points have already been given in [26]. In this section, we look for more general solutions that preserve only four supercharges.
We begin with the metric ansatz for the Σ 2 = S 2 case where dx 2 1,1 is the flat metric in two dimensions. For Σ 2 = H 2 , the metric is given by To preserve some amount of supersymmetry, we perform a twist by cancelling the spin connection along the Σ 2 by some suitable choice of gauge fields. We will first consider abelian twists from the U(1) × U(1) × U(1) subgroup of the U(1) × SU(2) × SU(2) gauge symmetry. The gauge fields corresponding to this subgroup will be denoted by (A 0 , A 5 , A 8 ). The ansatz for these gauge fields will be chosen as for the S 2 case and for the H 2 case.
There are three singlets from the SO(5, 5)/SO(5) × SO(5) coset corresponding to the SO(5, 5) non-compact generators Y 53 , Y 54 and Y 55 . However, these can be consistently truncated to only a single scalar with the coset representative given by We now begin with the analysis for Σ 2 = S 2 . With the relevant component of the spin connection ωφθ = e −g cot θeφ, we find the covariant derivative of ǫ i along theφ direction where . . . refers to the term involving g ′ that is not relevant to the present discussion. Note also that a 8 does not appear in the above equation since A 8 is not part of the R-symmetry under which the gravitini and supersymmetry parameters are charged.
For half-supersymmetric solutions considered in [26], it has been shown that the twists from A 0 and A 5 can not be performed simultaneously, and there exist AdS 3 × H 2 solutions. However, if we allow for an extra projector such that only 1 4 of the original supersymmetry is unbroken, it is possible to keep both the twists from A 0 and A 5 non-vanishing. To achieve this, we note that We then impose the following projector to make the two terms with a 0 and a 5 in To cancel the spin connection, we then impose another projector and the twist condition It should be noted that the condition (41) reduces to that of [26] for either a 0 = 0 or a 5 = 0. However, the solutions in this case preserve only four supercharges, or N = 2 supersymmetry in three dimensions, due to the additional projector (39).
To setup the BPS equations, we also need the γ r projection due to the radial dependence of scalars. Following [26], this projector is given by with I i j defined by The covariant field strength tensors for the gauge fields in (34) can be straightforwardly computed, and the result is For Σ 2 = H 2 , the cancellation of the spin connection ωφθ = e −g coth θeφ is again achieved by the gauge field ansatz (35) using the conditions (39), (40) and (41). On the other hand, the covariant field strengths are now given by which have opposite signs to those of the S 2 case. This results in a sign change of the parameter (a 0 , a 5 , a 8 ) in the corresponding BPS equations. With all these, we obtain the following BPS equations In these equations, κ = 1 and κ = −1 refer to Σ 2 = S 2 and Σ 2 = H 2 , respectively. It can also be readily verified that these equations also imply the second order field equations. We now look for AdS 3 solutions from the above BPS equations. These solutions are characterized by the conditions g ′ = ϕ ′ = Σ ′ = 0 and f ′ = 1 We find the following AdS 3 solutions.
• For ϕ = 0, AdS 3 solutions only exist for a 8 = 0 and are given by .
This should be identified with similar solutions of pure N = 4 gauged supergravity found in [25]. Since a 8 and ϕ vanish in this case, the AdS 3 solution has a larger symmetry U(1) × U(1) × SU(2). Note also that, unlike the half-supersymmetric solutions that exist only for Σ 2 = H 2 , both Σ 2 = S 2 and Σ 2 = H 2 are possible for appropriately chosen values of a 0 , a 5 and g 1 (recall that g 2 = − √ 2g 1 ). • For ϕ ≠ 0, we find a class of solutions. Note that when a 8 = 0, we recover the AdS 3 solutions in (50). As in the previous solution, it can also be verified that these AdS 3 solutions exist for both Σ 2 = S 2 and Σ 2 = H 2 .
Examples of numerical solutions interpolating between the N = 4 AdS 5 vacuum with U(1) × SU(2) × SU(2) symmetry and these AdS 3 × Σ 2 geometries are shown in figures 1 and 2. At large r, the solutions are asymptotic to the N = 4 supersymmetric AdS 5 critical point i given in (30). It should also be noted that the flow solutions preserve only two supercharges due to the γ r projector imposed along the flow.
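The interpolating flows described above are obtained by numerically integrating the first-order BPS equations in the radial coordinate. Since the explicit flow equations are not reproduced in the text here, the sketch below only illustrates the generic numerical procedure (shoot from a small perturbation of the AdS3 × Σ2 fixed point, where g′ = ϕ′ = Σ′ = 0, toward the asymptotic AdS5 region where f ∼ g ∼ r); the right-hand sides in bps_rhs are placeholders to be replaced by the actual BPS equations, and scipy is assumed to be available.

```python
# Generic sketch of how the interpolating (RG-flow) solutions can be produced
# numerically: integrate the first-order BPS system from a small perturbation
# of the AdS3 x Sigma2 fixed point out to large r, where the solution should
# approach the AdS5 vacuum (f ~ g ~ r).  The right-hand sides below are
# placeholders; in practice they are the flow equations derived in the text.
import numpy as np
from scipy.integrate import solve_ivp

def bps_rhs(r, y, params):
    """y = (f, g, Sigma, phi); return their r-derivatives.

    Placeholder implementation -- substitute the genuine BPS equations
    (which depend on g1, g2, the twist parameters a0, a5, a8 and kappa).
    """
    f, g, Sigma, phi = y
    g1 = params["g1"]
    # --- replace the lines below with the actual flow equations ---
    df = g1 * Sigma                 # schematic: f' -> 1/L_AdS5 at large r
    dg = g1 * Sigma
    dSigma = -0.1 * (Sigma - 1.0)   # schematic relaxation to the vacuum value
    dphi = -0.1 * phi
    return [df, dg, dSigma, dphi]

def flow_from_horizon(y_horizon, eps, r_max, params):
    """Integrate outward from a small perturbation of the near-horizon data."""
    y0 = np.asarray(y_horizon, dtype=float)
    y0[3] += eps                    # kick the scalar to leave the fixed point
    sol = solve_ivp(bps_rhs, (0.0, r_max), y0, args=(params,),
                    rtol=1e-9, atol=1e-11, dense_output=True)
    return sol

if __name__ == "__main__":
    params = {"g1": 1.0}
    # Near-horizon values (f0 arbitrary, g0 = log of the Sigma2 radius, Sigma0, phi0):
    sol = flow_from_horizon([0.0, 0.5, 1.1, 0.3], eps=1e-4, r_max=30.0, params=params)
    print(sol.y[:, -1])             # should settle to the AdS5 asymptotics
```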
As pointed out in [26], there are five singlets from the vector multiplet scalars, but these can be truncated to three scalars corresponding to the following non-compact generators of SO(5, 5). The coset representative is then given by To implement the U(1) diag gauge symmetry, we impose an additional condition on the parameters a 5 and a 8 as follows We can repeat the previous analysis for the U(1) × U(1) × U(1) twists, and the result is the same as in the previous case with the twist condition (41) and projectors (39), (40) and (42).
With the same procedure as in the previous case, we obtain the following BPS equations From these equations, we find the following AdS 3 × Σ 2 solutions.
• For φ 1 = φ 3 = 0, there is a family of AdS 3 solutions given by , We refrain from giving the explicit form of L AdS 3 at this vacuum due to its complexity.
• For φ 3 = 0, we find • Finally, for φ 1 = 0, we find Unlike the previous case, at large r, we find that solutions to these BPS equations can be asymptotic to any of the two N = 4 supersymmetric AdS 5 vacua i and ii given in (30) and (31). Therefore, we can have RG flows from the two AdS 5 vacua to any of these AdS 3 × Σ 2 solutions. Some examples of these solutions for Σ 2 = S 2 are given in figures 3, 4, 5 and 6.
Supersymmetric black holes
We now move to another type of solutions, supersymmetric AdS 5 black holes. We will consider near horizon geometries of the form AdS 2 × Σ 3 for Σ 3 = S 3 and Σ 3 = H 3 . The twist procedure is still essential to preserve supersymmetry. For the S 3 case, we take the metric to be ds 2 = −e 2f (r) dt 2 + dr 2 + e 2g(r) dψ 2 + sin 2 ψ(dθ 2 + sin 2 θdφ 2 ) .
we obtain non-vanishing components of the spin connection ωφ̂θ̂ = e −g cot θ sin ψ eφ̂, ωφ̂ψ̂ = e −g cot ψ eφ̂, ωθ̂ψ̂ = e −g cot ψ eθ̂ . (66) We then turn on gauge fields corresponding to the U(1) × SU(2) diag ⊂ U(1) × SU(2) × SU(2) symmetry and consider scalar fields that are singlets under U(1) × SU(2) diag . Using the coset representative (28), we find components of the composite connection that involve the gauge fields (67). The components of the spin connection on S 3 that need to be cancelled are ωφ̂θ̂, ωφ̂ψ̂ and ωθ̂ψ̂. To impose the twist, we set A 0 = 0 and take the SU(2) diag gauge fields to be A 3 = a 3 cos ψ dθ, A 4 = a 4 cos θ dφ, A 5 = a 5 cos ψ sin θ dφ (68) together with A 3+m = g 2 g 3 A m for m = 3, 4, 5. By considering the covariant derivative of ϵ i along the θ and φ directions, we find that the twist is achieved by imposing the following conditions g 2 a 3 = g 2 a 4 = g 2 a 5 = 1 (69) and projectors. Note that the last projector is not independent of the first two. Therefore, the AdS 2 solutions preserve four supercharges of the original supersymmetry. Condition (69) also implies a 3 = a 4 = a 5 . We will then set a 3 = a 4 = a 5 = a from now on. Using the definition (8), we find the gauge covariant field strengths and H 3+m = g 2 g 3 H m for m = 3, 4, 5. For Σ 3 = H 3 , we use the metric ansatz ds 2 = −e 2f dt 2 + dr 2 + (e 2g /y 2 )(dx 2 + dy 2 + dz 2 ) with non-vanishing components of the spin connection where various components of the vielbein are given by Since there are only two components, ωx̂ŷ and ωẑŷ, of the spin connection to be cancelled in the twisting process, we turn on the following SU(2) gauge fields and A m+3 = g 2 g 3 A m , for m = 3, 4, 5. Repeating the same analysis as in the S 3 case, we find the twist conditions and projectors. The last projector is not needed for the twist with A 4 = 0; in addition, it follows from the first two projectors as in the S 3 case. The twist condition (76) again implies that ã = a, and the covariant field strengths in this case are given by and H m+3 = g 2 g 3 H m , for m = 3, 4, 5. Note that although A 4 = 0, we have non-vanishing H 4 due to the non-abelian nature of the SU(2) field strengths.
With all these ingredients, the following BPS equations are straightforwardly obtained As in the AdS 3 solutions, κ = 1 and κ = −1 corresponds to Σ 3 = S 3 and Σ 3 = H 3 , respectively. It turns out that only κ = −1 leads to an AdS 2 solution given by This solution preserves N = 4 supersymmetry in two dimensions and U(1) × SU(2) diag symmetry. As r → ∞, f ∼ g ∼ r, solutions to the above BPS equations are asymptotic to either of the N = 4 AdS 5 vacua in (30) and (31). RG flow solutions interpolating between these AdS 5 vacua and the AdS 2 × H 3 solution in (83) are shown in figure 7 and 8. In particular, the flow in figure 8 connects three critical points similar to the solution given in the previous section. We end this section by a comment on the possibility of turning on the twist from A 0 along with those from the SU(2) diag gauge fields. As in the previous section, if we impose an additional projector the projection matrix of the A 0 term in the composite connection (67) will be proportional to that of A 3 . We can take the ansatz for A 0 to be and proceed as in the A 0 = 0 case. This results in the projectors given in (77) and the twist conditions g 2 a 4 = g 2 a 5 = 1 and g 1 a 0 + g 2 a 3 = 1 .
We can see that at this stage the parameter a 3 need not be equal to a 4 and a 5 . However, consistency of the BPS equations from the δλ a i conditions requires a 3 = a 4 = a 5 and hence a 0 = 0 by the conditions in (86). This is because A 0 does not appear in the δλ a i variation. The resulting BPS equations then reduce to those of the previous case with A 0 = 0. So, we conclude that the A 0 twist cannot be turned on along with the SU(2) diag twists.
U (1) × SO(3, 1) gauge group
For the non-compact U(1) × SO(3, 1) gauge group, components of the embedding tensor are given by This gauge group has already been studied in [26]. The scalar potential admits one supersymmetric N = 4 AdS 5 vacuum at which all scalars from vector multiplets vanish and Σ = 1 after choosing g 2 = − √ 2g 1 . At the vacuum, the gauge group is broken down to its maximal compact subgroup U(1) × SO(3). A holographic RG flow from this critical point to a non-conformal field theory in the IR and a flow to an AdS 3 × H 2 vacuum preserving eight supercharges have also been studied in [26]. In this case, AdS 3 × S 2 solutions do not exist.
In this section, we will study AdS 3 × Σ 2 and AdS 2 × Σ 3 solutions preserving four supercharges. The analysis is closely parallel to that performed in the previous section, so we will give less detail in order to avoid repetition.
Supersymmetric black strings
We will use the same metric ansatz as in equations (32) and (33) and consider the twist from U(1) × U(1) gauge fields. The second U(1) is a subgroup of the SO(3) ⊂ SO(3, 1). There are in total five scalars that are singlet under this U(1) × U(1), but as in the compact U(1) × SU(2) × SU(2) gauge group, these can be truncated to three singlets corresponding to the following SO(5, 5) noncompact generators With the embedding tensor (88), the compact SO(3) symmetry is generated by X 3 , X 4 and X 5 generators.
Using the coset representative of the form we can repeat all the analysis of the previous section by using the ansatz for the gauge fields A 0 = a 0 cos θdφ and A 5 = a 5 cos θdφ, for Σ 2 = S 2 and A 0 = a 0 cosh θdφ and A 5 = a 5 cosh θdφ, for Σ 2 = H 2 . The result is similar to the compact case with the projectors (39) and (40) and the twist condition (41).
As in the compact case, Σ 2 can be either S 2 or H 2 , depending on the values of a 5 , a 0 , g 1 and g 2 such that the twist condition (41) is satisfied. This is in contrast to the half-supersymmetric solution found in [26] for which only Σ 2 = H 2 is possible.
To find a domain wall interpolating between the AdS 5 vacuum and this AdS 3 × Σ 2 solution, we further truncate the BPS equations by setting φ i = 0 for i = 1, 2, 3. The resulting equations are given by An example of a numerical solution is shown in figure 9.
Supersymmetric black holes
We now consider AdS 2 × Σ 3 solutions within this non-compact gauge group. We will look for solutions with U(1) × SO(3) ⊂ U(1) × SO(3, 1) symmetry. There is one U(1) × SO(3) singlet from the SO(5, 5)/SO(5) × SO(5) coset corresponding to the non-compact generator The coset representative can be written as Using the metric ansatz (64) and (72) together with the gauge fields (68) and (75), we find that the twist can be implemented by using the projectors given in (70). Furthermore, the twist condition also implies that a 3 = a 4 = a 5 = a with g 2 a = 1, and the twist from A 0 cannot be turned on. The AdS 2 × Σ 3 solutions preserve four supercharges.
Using the projector (42), we can derive the following BPS equations These equations admit one AdS 2 × H 3 solution given by while AdS 2 × S 3 solutions do not exist. By setting φ = 0, we find a numerical solution to the above BPS equations as shown in figure 10. Components of the embedding tensor for this gauge group are given by
Supersymmetric AdS 5 vacuum
The SL(3, R) factor is embedded in SO(3, 5) ⊂ SO (5,5) such that its adjoint representation is identified with the fundamental representation of SO (3,5). The SO(3) ⊂ SL(3, R) is embedded in SL(3, R) such that 3 → 3. Decomposing the adjoint representation of SO (3,5) to SL(3, R) and SO(3), we find that the 25 scalars transform under SO(3) ⊂ SL(3, R) as Unlike the U(1) × SO(3, 1) gauge group, there is no singlet under the compact SO(3) symmetry. Taking into account the embedding of the U(1) factor in the gauge group as described in (110), we find the transformation of the scalars under with the subscript denoting the U(1) charges. It can be readily verified by studying the corresponding scalar potential or recalling the result of [39] that this U(1) × SL(3, R) gauge group admits a supersymmetric N = 4 AdS 5 vacuum at which all scalars from vector multiplets vanish with Σ = 1 and We have, as in other gauge groups, set g 2 = − √ 2g 1 to bring this vacuum to the value of Σ = 1. All scalar masses at this vacuum are given in table 1
These equations admit one supersymmetric AdS 3 × Σ 2 solution given by φ 2 = φ 3 = 0, Σ = √ 2κ a 5 g 1 , and a domain wall interpolating between this critical point and the supersymmetric AdS 5 is shown in figure 12. It should also be noted that this AdS 3 × Σ 2 solution is the same as in the U(1) × SO(3, 1) gauge group.
Supersymmetric black holes
We end this section with an analysis of AdS 2 × Σ 3 solutions and domain walls connecting these solutions to the supersymmetric AdS 5 . In order to preserve supersymmetry, SO(3) ⊂ SL(3, R) gauge fields must be turned on. However, in the present case, there is no SO(3) singlet scalar from the vector multiplets. After using the twist condition g 2 a = 1 and projectors in (70) and (77) together with the ansatz for the gauge fields in (68) and (75), we obtain the BPS equations These equations turn out to be the same as in the SO(3, 1) case after setting all the scalars from vector multiplets to zero. A single AdS 2 × H 3 critical point is again given by (109).
Conclusions and discussions
We have found a new class of supersymmetric black strings and black holes in asymptotically AdS 5 space within N = 4 gauged supergravity in five dimensions coupled to five vector multiplets with gauge groups U(1) × SU(2) × SU(2), U(1) × SO(3, 1) and U(1) × SL(3, R). These generalize the previously known black string solutions preserving eight supercharges by including more general twists along Σ 2 . Furthermore, unlike the half-supersymmetric solutions which only exhibit hyperbolic horizons, the 1/4-supersymmetric black strings can have both S 2 and H 2 horizons. On the other hand, the AdS 5 black holes only feature H 3 horizons.
For U(1) × SU(2) × SU(2) gauge group, we have identified a number of AdS 3 × Σ 2 solutions preserving four supercharges. The solutions have U(1) × U(1) ×U(1) and U(1) ×U(1) diag symmetries and correspond to N = (0, 2) SCFTs in two dimensions. We have given many examples of numerical RG flow solutions from the two supersymmetric AdS 5 vacua to these AdS 3 × Σ 2 geometries. We have also found a supersymmetric AdS 2 × H 3 solution describing the near horizon geometry of a supersymmetric black hole in AdS 5 . For U(1) × SO(3, 1) and U(1) × SL(3, R) gauge groups, all AdS 3 × Σ 2 and AdS 2 × Σ 3 solutions exist only for vanishing scalar fields from vector multiplets and have the same form for both gauge groups.
It would be interesting to compute twisted partition functions and twisted indices in the dual N = 2 SCFTs compactified on Σ 2 and Σ 3 . These should provide a microscopic description for the entropy of the aforementioned black strings and black holes in AdS 5 space. On the other hand, it is also interesting to find supersymmetric rotating AdS 5 black holes similar to the solutions found in minimal and maximal gauged supergravities [40,41], or black holes with horizons in the form of a squashed three-sphere [42,43,44]. Furthermore, embedding these solutions in string/M-theory is of particular interest and should give a full holographic interpretation for the RG flows across dimensions identified here.
"Physics"
] |
Functional Analysis of B 7-H 3 in Colonic Carcinoma Cells
B7-H3 (B7 homologue 3), a newly found member of B7/CD28 superfamily (Shin et al., 2010), exists as two isoforms: B7-H3 VC, which contains one IgVand IgC-like domain, and B7-H3 VCVC, which contains two such domains. The latter represents the predominant B7H3 molecule detectable in various human tissues. Both B7-H3 isoforms are shown to decrease the proliferation and cytokine production induced by TCR activation of human T cells in vitro (2005). Performing as an important molecule in T cell immune response, B7-H3 stimulates proliferation of both CD4+ and CD8+ T cell, enhances the induction of cytotoxic T cells and selectively stimulates interferon gamma (IFN-gamma) production in the presence of T cell receptor signaling (Chapoval et al., 2001). The expression of B7-H3 and B7-H2 by Human nasal epithelial (HNE) cells is a couple of potential co-stimulatory signals, through which these cells may interact with activated mucosal T lymphocytes (Saatian et al., 2004), and that is similarly as the known human leukocyte antigen (HLA) and B7 homolog family costimulatory molecules, which expressed on the epithelial cells of the human respiratory tract and affects the cellular differentiation and cytokines. Recently, there are a lot of reports on the highly association of B7-H3 over-expression with kinds of cancers. For example, B7-H3 is over-expressed by all six non-small-cell lung cancer (NSCLC) cell lines on both mRNA and protein level. (Sun et al., 2006; Xu et al., 2010) And in prostate cancer, expression level of B7-H3 was correlated with pathologic indicators of aggressive cancer as well as clinical outcome. B7-H3 is uniformly and aberrantly expressed by adenocarcinomas of the prostate,
Introduction
B7-H3 (B7 homologue 3), a newly found member of B7/CD28 superfamily (Shin et al., 2010), exists as two isoforms: B7-H3 VC, which contains one IgV-and IgC-like domain, and B7-H3 VCVC, which contains two such domains.The latter represents the predominant B7-H3 molecule detectable in various human tissues.Both B7-H3 isoforms are shown to decrease the proliferation and cytokine production induced by TCR activation of human T cells in vitro (2005).Performing as an important molecule in T cell immune response, B7-H3 stimulates proliferation of both CD4+ and CD8+ T cell, enhances the induction of cytotoxic T cells and selectively stimulates interferon gamma (IFN-gamma) production in the presence of T cell receptor signaling (Chapoval et al., 2001).The expression of B7-H3 and B7-H2 by Human nasal epithelial (HNE) cells is a couple of potential co-stimulatory signals, through which these cells may interact with activated mucosal T lymphocytes (Saatian et al., 2004), and that is similarly as the known human leukocyte antigen (HLA) and B7 homolog family costimulatory molecules, which expressed on the epithelial cells of the human respiratory tract and affects the cellular differentiation and cytokines.
Recently, many reports have described the strong association of B7-H3 over-expression with various cancers. For example, B7-H3 is over-expressed by all six non-small-cell lung cancer (NSCLC) cell lines at both the mRNA and protein levels (Sun et al., 2006; Xu et al., 2010). In prostate cancer, the expression level of B7-H3 was correlated with pathologic indicators of aggressive cancer as well as clinical outcome. B7-H3 is uniformly and aberrantly expressed by adenocarcinomas of the prostate,
high-grade prostatic intraepithelial neoplasia, and four prostate cancer cell lines, and is also expressed by benign prostatic epithelia (Roth et al., 2007; Zang et al., 2007; Yuan et al., 2011). In breast cancer, B7-H3 mRNA expression was detected in 39% of primary breast tumors but not in normal breast tissues (Ahmed, 2010). B7-H3 expression was highly correlated with sentinel lymph node status and the overall number of lymph nodes with metastasis (Arigami et al., 2010). Finally, in gastric cancer, blood specimens contained significantly more copies of B7-H3 mRNA than those from healthy volunteers (Biglarian et al., 2010). The 5-year survival rate was significantly lower in patients with high B7-H3 expression than in those with low expression (Arigami et al., 2011). Apart from the poor prognoses to which B7-H3 is linked, the opposite effect has been observed in other cancers, such as human oral squamous cell cancer (Yang et al., 2008; Nygren et al., 2011). Therefore, the function of B7-H3 needs to be further investigated individually in different cancers.
In colorectal cancer, endothelial B7-H3 expression was also significantly associated with poor outcome, and B7-H3 expression was observed in tumor-associated vasculature and fibroblasts. In rectal cancer patients, the only significant association was between fibroblast B7-H3 expression and shorter metastasis-free survival. These findings indicate that nuclear B7-H3 might be involved in colon cancer progression and metastasis, and suggest that nuclear B7-H3 could become a useful prognostic marker in colon cancer (Lupu et al., 2006; Ingebrigtsen et al., 2012). However, the multiple changes triggered by B7-H3 over-expression in colon cancer and the panoramic view of its functional pathways and mechanisms remain to be clarified. Therefore, in this article we attempt to delineate the functions of B7-H3.
Expression Profile of Colonic Cancer Cells and Normal Colonic Cells
To probe the differences between colonic cancer cells and normal colonic cells and to clarify the roles of B7-H3 in colonic cancer cells, we collected expression profiles of colorectal carcinoma cells and normal cells and analyzed them with gene chips. First, we searched for suitable sample expression data in the GEO (Gene Expression Omnibus) database and chose GSE23878 as the object. This chip set contains 35 colonic carcinoma chips and 23 normal colonic chips. The platform is GPL570 [HG-U133_Plus_2] Affymetrix Human Genome U133 Plus 2.0 Array. We downloaded the original CEL files as well as the annotation files of this platform.
Extraction of Differentially Expressed Genes
After obtaining the original chip data, we analyzed it with the R software (v.2.13.0) (Team, 2008). The whole chip data set is classified into two classes: colon carcinoma cells and normal cells. The RMA (Robust Multi-array Average) method (Irizarry et al., 2003) was first applied to normalize the data across chips, and the B7-H3 gene expression profile was obtained. Then limma (Smyth, 2004), a linear-model-based software package, was used to compare expression between the two classes of chips, so the differentially expressed genes between colon carcinoma cells and normal cells were obtained.
GO Cluster with B7-H3 Influence
To trace the changes among the differentially expressed genes at the cellular level and cluster their functions, we searched the Gene Ontology database. We then used DAVID to cluster the differentially expressed genes according to the GO cellular component, biological process and molecular function categories (Huang da et al., 2009; Huang da et al., 2009) to obtain information on how the differentially expressed genes influence cells. Finally, we focused on the classes in which B7-H3 is located and obtained information about the effect of B7-H3 on cells.
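DAVID is used here as a black box; at its core, this kind of GO clustering rests on an over-representation test per GO term. The following Python sketch shows a simplified version of such a test, a plain hypergeometric tail probability per term followed by Benjamini-Hochberg correction. This only approximates what DAVID actually computes (a modified Fisher exact/EASE score), and the gene identifiers in the example are made up for illustration.

```python
# Simplified GO over-representation test: for each GO term, ask whether the
# differentially expressed (DE) gene list contains more of the term's genes
# than expected by chance, using a hypergeometric tail probability, then
# correct the p-values with Benjamini-Hochberg.
from scipy.stats import hypergeom
from statsmodels.stats.multitest import multipletests

def go_enrichment(de_genes, background_genes, go_terms):
    """de_genes: set of DE gene symbols.
    background_genes: set of all genes on the chip.
    go_terms: dict mapping GO term -> set of annotated genes.
    Returns a list of (term, overlap, raw_p, adjusted_p)."""
    background = set(background_genes)
    de = set(de_genes) & background
    N = len(background)                # population size
    n = len(de)                        # number of draws (DE genes)
    rows, pvals = [], []
    for term, members in go_terms.items():
        members = set(members) & background
        K = len(members)               # successes in the population
        k = len(members & de)          # successes among DE genes
        # P(X >= k) for X ~ Hypergeometric(N, K, n)
        p = hypergeom.sf(k - 1, N, K, n)
        rows.append((term, k))
        pvals.append(p)
    _, p_adj, _, _ = multipletests(pvals, method="fdr_bh")
    return [(term, k, p, q) for (term, k), p, q in zip(rows, pvals, p_adj)]

# Tiny illustrative call with made-up gene identifiers:
if __name__ == "__main__":
    background = {f"g{i}" for i in range(1000)}
    de = {f"g{i}" for i in range(50)}
    terms = {"plasma membrane part": {f"g{i}" for i in range(0, 40)},
             "lymphocyte activation": {f"g{i}" for i in range(500, 540)}}
    for row in go_enrichment(de, background, terms):
        print(row)
```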
Biological Pathways Influenced by B7-H3
To unearth the influence of B7-H3 on colon carcinoma cells, we focused on the biological pathway level. All metabolic and non-metabolic pathways obtained from the authoritative KEGG PATHWAY database were used as input for DAVID KEGG pathway cluster analysis (Huang da et al., 2009; Huang da et al., 2009), so that the pathways altered in colon carcinoma revealed themselves. Finally, we focused on the pathways linked to B7-H3, or in which B7-H3 is located, to understand the molecular mechanisms through which B7-H3 performs its functions.
Unearthing Small Molecules Facilitating B7-H3 Functions
The connectivity map (CMAP) database, which stores whole-genome expression profiles under small-molecule interference, contains 6,100 classes of small-molecule interference experiments and 7,056 expression profiles (Lamb et al., 2006). We analyzed the differentially expressed genes between normal and colon carcinoma cells and compared them with the genes responding to small-molecule interference in the CMAP database, hoping to find small molecules associated with the genes differentially expressed between normal cells and colon carcinoma cells. The differentially expressed genes located in the same cluster as B7-H3, or highly associated with B7-H3, were classified into up-regulated and down-regulated sets, from which the 500 most significant probes each were chosen for GSEA analysis; these were then compared to the genes differentially expressed after small-molecule treatment, and enrichment values were finally obtained. These enrichment values, which vary between -1 and 1, determine the similarity: the closer the value is to 1, the more similar the gene sets are, i.e. the small molecule can imitate the effects of B7-H3. Conversely, if the value is closer to -1, the small molecule can counteract the effects of B7-H3.
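The enrichment values described above are Kolmogorov-Smirnov-style running-sum statistics, computed for the up- and down-regulated query signatures against each compound's ranked expression profile. The sketch below implements a simplified running-sum score for a single gene set against one ranked list, just to illustrate how values bounded between -1 and 1 arise; it is not the exact CMAP/GSEA statistic, and the gene identifiers are hypothetical.

```python
# Simplified GSEA/CMAP-style enrichment score: walk down a ranked gene list,
# stepping up when a gene belongs to the query signature and down otherwise,
# and report the maximum deviation of the running sum.  Scores close to +1
# mean the signature sits at the top of the ranking (the compound mimics the
# query); scores close to -1 mean it sits at the bottom (the compound opposes
# it).  This is an illustration, not the exact statistic used by CMAP.

def enrichment_score(ranked_genes, signature):
    """ranked_genes: list of gene IDs ordered from most up- to most
    down-regulated under a compound.  signature: set of query gene IDs."""
    signature = set(signature) & set(ranked_genes)
    n_total = len(ranked_genes)
    n_hits = len(signature)
    if n_hits == 0 or n_hits == n_total:
        return 0.0
    hit_step = 1.0 / n_hits
    miss_step = 1.0 / (n_total - n_hits)
    running, best = 0.0, 0.0
    for gene in ranked_genes:
        running += hit_step if gene in signature else -miss_step
        if abs(running) > abs(best):
            best = running
    return best   # bounded between -1 and 1

if __name__ == "__main__":
    # Hypothetical ranked list from a compound-treatment profile:
    ranked = [f"g{i}" for i in range(200)]
    up_signature = {f"g{i}" for i in range(0, 20)}      # genes up with B7-H3
    print(round(enrichment_score(ranked, up_signature), 3))   # close to +1
```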
DAB immunohistochemistry staining analysis of Colonic Cancer Cells and Normal Colonic Cells
A total of 104 colon cancer specimens and the corresponding clinical data were obtained from colon cancer patients with complete follow-up data who underwent radical resection at the cancer center. The specimens were constructed from formalin-fixed, paraffin-embedded tissues. The expression of B7-H3 in tumor cells matched with adjacent noncancerous tissue was examined with the immunohistochemistry technique. Additionally, the immunofluorescence staining technique was used to investigate the molecular mechanism by analyzing the expression of B7-H3.
Expression Difference of B7-H3
As a critical factor linked to immunity, B7-H3 is over-expressed in several kinds of cancers. To probe whether this differential expression also occurs in colon carcinoma, we extracted the expression profile of B7-H3 from the chips. After normalization, we obtained the B7-H3 expression profile shown in Figure 1.
To probe the differentially expressed genes triggered by B7-H3 changes, we applied the canonical t-test method to the gene expression profiles of both colon carcinoma and normal cells and obtained the differentially expressed genes in colon carcinoma cells. After applying the t-test to all genes, the associated values are BH-adjusted p-values. We chose BH p<0.001 as the significance threshold for differentially expressed genes and found that the expression of 13397 gene probes changed, corresponding to 10509 genes (see Supplement corcancer-cor.xls). These differentially expressed genes cover genes that co-function with B7-H3 and genes that change following B7-H3 activity. Therefore, they can be helpful for clarifying the mechanism of B7-H3 function.
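The per-probe testing step described above can be reproduced with standard tools. The sketch below assumes the normalized expression matrix has already been loaded (probes in rows, samples in columns, with a boolean vector separating the 35 carcinoma chips from the 23 normal chips); it runs a two-sample t-test per probe and applies the Benjamini-Hochberg correction with the adjusted p < 0.001 cutoff used in the text. It stands in for, rather than reproduces, the limma pipeline actually used.

```python
# Per-probe two-sample t-test with Benjamini-Hochberg correction, using the
# BH-adjusted p < 0.001 cutoff quoted in the text.  `expr` is assumed to be a
# normalized (e.g. RMA-processed) matrix with probes in rows and samples in
# columns; `is_tumor` marks the colon carcinoma chips.  This is a stand-in
# for the limma analysis, not a reproduction of it.
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

def differential_probes(expr, is_tumor, alpha=1e-3):
    """expr: (n_probes, n_samples) array; is_tumor: boolean array of length
    n_samples.  Returns indices of probes with BH-adjusted p < alpha."""
    is_tumor = np.asarray(is_tumor, dtype=bool)
    tumor = expr[:, is_tumor]
    normal = expr[:, ~is_tumor]
    # Welch's t-test per probe (row); axis=1 tests each row independently.
    _, pvals = ttest_ind(tumor, normal, axis=1, equal_var=False)
    reject, p_adj, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return np.where(reject)[0], p_adj

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    expr = rng.normal(size=(1000, 58))           # 35 tumor + 23 normal chips
    labels = np.array([True] * 35 + [False] * 23)
    expr[:50, labels] += 2.0                     # spike in 50 "DE" probes
    hits, _ = differential_probes(expr, labels)
    print(len(hits))
```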
GO Entries B7-H3 Participated in
Comparing the GO clusters of the genes differentially expressed in colon carcinoma versus normal cells in the Cellular component, Molecular function and Biological process categories, we obtained the GO entries listed in Supplementary cellular component.txt, bioprocess.txt and molecular function.txt. Among these, B7-H3 participates in the plasma membrane part (p=2.92E-4); evidently, B7-H3 functions on the plasma membrane. Additionally, the entry of lymphocyte activation also changed (p=4.85E-7). Therefore, a critical change brought about by B7-H3 alteration is the activation of lymphocytes, which has been reported in previous publications.
Unearthing the Biological Pathways Associated B7-H3
To further unfold the mechanisms of B7-H3 function, we zoomed in on the related biological pathways. We chose the differentially expressed genes for KEGG sub-pathway enrichment analysis, obtained the signaling pathways changed in colon carcinoma, and selected the pathways associated with B7-H3. Here, we chose Benjamini < 0.05 and at least two genes as the restricting conditions for significantly changed biological pathways, as illustrated in Table 1.
Among the changed signaling pathways, B7-H3 is located in Cell adhesion molecules (CAMs). Furthermore, B7-H3 is reported to be a critical molecule in the T cell immune response. Therefore, B7-H3 is directly related to the T cell receptor signaling pathway. From these two signaling pathways, we can find some clues to B7-H3 functions: when colon carcinoma occurs, B7-H3 is significantly over-expressed as one of the components of cellular junctions; the T cell receptor signaling pathway is thereby activated, through which B7-H3 performs its function.
Searching Small Molecules Mimic B7-H3 Function
As an activation factor of T lymphocytes, B7-H3 can, to some extent, enhance the body's immunity and hence confer immunity against cancer cells. If molecules that mimic B7-H3 can be found, they could enhance self-immunity and hence be helpful for colon carcinoma therapy.
Here we chose genes closely associated with B7-H3 (genes in the same GO entry or pathway, or whose expression changes in the same way in the T cell receptor signaling pathway; see the gene list in Supplementary CD276 RELATED GENE EXPRESSION.xlsx) to assess the effects of B7-H3. These genes were classified into up-regulated and down-regulated sets and then compared, using GSEA, to the differentially expressed genes after small-molecule treatment in the CMAP database, to obtain small molecules whose effects are similar to the functions of B7-H3. The 20 molecules most similar to B7-H3 functions are illustrated in Table 2 (the whole list of similarities between small molecules and B7-H3 is attached in Supplementary cmap.xls).
Therefore, the small molecule ajmaline (enrichment=0.918) can imitate B7-H3 well, which means that ajmaline may enhance self-immunity in colon carcinoma therapy. When treating colon carcinoma with ajmaline added as an auxiliary agent, the therapy may improve.
Immunohistochemistry staining analysis of Colonic Cancer Cells and Normal Colonic Cells
Brown staining was observed on the cell membrane of colonic cancer cells (Figure 2A) and normal colonic cells (Figure 2B). The expression of B7-H3 was observed in the plasma membrane of tumor cells. In the tumor tissues analyzed, immunoreactivity for B7-H3 was observed in the 104 tumor tissues in both colonic cancer cells and normal colonic cells. This study indicates that B7-H3 plays important roles in the immunoreaction of colonic cancer cells, in tumor proliferation and in cell cycle arrest. B7-H3 is a favorable prognostic biomarker in the mechanism of colon carcinomas and provides information for likely targeted intervention.
Discussion
B7-H3 is a newly discovered B7/CD28 superfamily member that functions as an important molecule in the T cell immune response (Han et al., 2011). The fact that B7-H3 is highly expressed in many cancer cells suggests that B7-H3 is a regulatory factor in autologous tumor resistance. Therefore, further research into the mechanism of B7-H3 is of deep significance (Pourhoseingholi et al., 2010).
Judging from the location of B7-H3 in the GO clusters of differentially expressed genes, B7-H3 is associated with other genes that change in the colon carcinoma plasma membrane, and these changes activate T lymphocytes. This point is supported by previous experiments (Liu et al., 2011), and the subsequent pathway analysis also validated it. Therefore, the possible pathway by which B7-H3 functions in colon carcinoma is through changes of components on the colon carcinoma plasma membrane, which affect the activation of T lymphocytes and thereby induce an autologous immune resistance (Sundarraj et al., 2010). This is also a normal self-preservation mechanism.
Through the GSEA method, we found some small molecules that mimic B7-H3 function, such as tanespimycin, LY-294002, trichostatin A, ajmaline and so on. Heat shock protein 90 (HSP-90) is part of a cellular defense mechanism: when cells encounter heat and other stresses (such as noxious cancer drugs), this mechanism starts. In tumors, HSP-90 can help cancer cells survive; if it does not work, the cancer cells are effectively driven to suicide (Liu et al., 2011). HSP-90 molecules surround malignant cells like lobster claws, help cancer cells spread and grow, and enable them to attack the human body. With this defense mechanism, cancer cells can resist drug therapy. Many pharmaceutical companies have therefore developed HSP-90 inhibitors to help initial treatments exercise their "regular job". Such inhibitors can be used to fight various kinds of cancer (Vaishampayan et al., 2010) and ensure that tumors do not develop resistance to initial treatment drugs (Modi et al., 2007). LY294002 (also written LY-294002 or LY 294002) is a common phosphatidylinositol 3-kinase (PI3K) inhibitor that can block phosphatidylinositol 3-kinase in cell signaling pathways. LY294002 can pass through cells and specifically inhibit PI3K and the PI3K/Akt signaling pathway (Liu et al., 2010), including the common inhibition of Akt phosphorylation (Zhong et al., 2010). It is widely used in studies of PI3K cell signaling pathways (Du et al., 2010). As a derivative of quercetin, LY294002 is a reversible, highly efficient, selective PI3K inhibitor that competitively binds to the ATP-binding site of the enzyme. LY294002 has no inhibitory effect on PI4K, DAGK, PKC, PKA, MAPK, S6K, EGFR or c-src tyrosine kinases. LY294002 can inhibit the in vitro proliferation of choroidal melanoma OCM-1 cells with IC50 = 10 μM. When colon cancer cells were used as the research object in in vivo and in vitro experiments, LY294002 showed both inhibition of proliferation and induction of apoptosis of cancer cells (Tang et al., 2009). Notably, the B cell receptor signaling pathway was also found to be changed in the pathway analysis. As we know, B lymphocytes, critical members of autologous immunity, also change in colon carcinoma cells. Therefore, B7-H3 may play a certain role in this process; only because this possible function was unreported before is it not included in the calculation of B7-H3 reactions. The autologous immunity pathway is activated, yet the colon cancer cells remain. Therefore, we may conclude that in colon carcinoma, a certain step in the process from immune activation to elimination of abnormal cells has mutated, thereby disarming the autologous immune function. It therefore seems very important to probe this step.
Figure 1 .
Figure 1.The Gene Expression Profile of B7-H3 Expressed in Colon Carcinoma and Normal Colon Cells.The characters under colon bars are the chip number of GEO
Table 1 . Changed Biological Pathways in Colon Carcinoma
P values were calculated with multiple testing correction for better reliability (here we use the Benjamini & Hochberg method for correction). And the corrected p | 3,872.4 | 2012-08-31T00:00:00.000 | [
"Biology",
"Medicine"
] |
Communication Strategy of Aceh Government to Handling Covid-19 Pandemic in Aceh Province
Received Dec 30, 2020 Revised Jan 30, 2021 Accepted Feb 27, 2021 Corona Virus Disease 2019, or Covid-19, first identified in Wuhan, China, rapidly spread to 215 countries over the world and has not yet ended. In Indonesia, the first Covid-19 case broke out in March 2020 and the virus has also spread rapidly to all provinces. Aceh is noted to have been more successful in dealing with the pandemic than other provinces; it received appreciation from the central government and was asked to share its successful experiences with other provinces. This study aims to identify and analyze the communication strategy applied by the Aceh Government and the challenges faced while handling the Covid-19 pandemic in Aceh. This descriptive-analytic research uses interviews, observation and documentation study as data collection techniques. Furthermore, the data are processed using Constant Comparative Techniques and Domain Analysis in order to obtain relatively valid conclusions. The results of the study conclude that there were 4 (four) communication strategies implemented by the Aceh Government, namely: the Motivational Communication Strategy, the Quick Response Strategy, the Leadership Commitment Communication Strategy and the Mass Communication Strategy. Meanwhile, there are various challenges faced by the Aceh Government in implementing a communication strategy while handling the Covid-19 pandemic, namely: Regional Accessibility, Socio-Economic, Socio-Cultural and Religious factors, as well as Local Political Dynamics.
Introduction
Since the beginning of its spread in Hubei, China in December 2019, the Corona Virus, known as Covid-19, has rapidly spread around the world. By the end of August 2020, this virus, first identified in Wuhan, China, had spread to 215 countries with a total of 25,382,745 infected people, of whom 850,503 died and 17,704,831 were declared cured (worldometers, 2020). The world health institution, WHO, led by Tedros Adhanom Ghebreyesus, on January 30, 2020 initially declared Covid-19 only an epidemic or Global Health Emergency. However, seeing the very fast transmission almost all over the world, on March 11, 2020 WHO finally declared the Corona Virus a Global Pandemic and appealed to countries of the world to take urgent and aggressive action to prevent the spread of Covid-19 (who.com, 2020).
Although other countries had already been grappling with the number of Covid-19 cases, Indonesia only announced its first case on March 2, 2020. President Joko Widodo himself announced from the State Palace that two patients in Indonesia had been confirmed infected with the Covid-19 virus. Jokowi said the mother and daughter had interacted with a Japanese national who had been infected with the corona virus while visiting Indonesia (nasionalkompas.com, 2020). After the announcement, almost all news programs in the mass media were filled with information about additional Covid-19 patients as well as the rapid spread throughout Indonesia. By the end of August 2020, the data showed that Covid-19 cases in Indonesia had reached a total of 174,796 positive cases, with 7,417 deaths and 125,959 recoveries. This number increased significantly compared to the first case on March 2, 2020: in just 6 (six) months, Covid-19 cases reached more than 100,000 patients (covid19.go.id, 2020).
Aceh, as a province in the westernmost part of Indonesia, cannot be spared from the Covid-19. The data shows, effective until the end of August 2020 there are 1.545 positive cases in Aceh; with 57 people died and 602 people recovered. (dinkes.acehprov.go.id, 2020). Most of the residents that were confirmed positive Covid-19 had a history of travel from other areas, and only a few people who are infected from local transmission. Based on this data, if compared to other provinces in the country, Aceh is a province with a relatively small number of positive Covid-19 cases. This is based on the results of comparisons with 22 other provinces in Indonesia, which shows that Aceh ranks 12th out of 34 Provinces with the lowest number of positive Covid-19 cases in Indonesia. This figure is a positive number for Covid-19 case after Aceh experienced the First and Second Wave of the spread of this deadly virus. The First Wave occurred since the establishment of the Emergency Response period by the Government through the BNPB, which began on January 28, 2020 until 23 April 2020. At that time, Acehnese who were in other provinces, even from Malaysia (also a pandemic country near Aceh) returned to Aceh because of the establishment of Large-Scale Social Restrictions (PSBB), even lockdowns policy by the Regional / Local Government. The Second Wave occurred when entering the fasting month of Ramadan, when the celebration of Eid Fitr and Eid Adha in 2020. Because at that time may had been a mobilization of citizens who returned home to enjoyed Ramadan and celebrated Eid in their hometown. This high level of citizen mobilization has significantly increased the spread of Covid-19. This is very influential, since the spread of Covid-19 requires a medium or carrier (infected human) to move from one to another person.
Academics and practitioners as well as mass organizations concerned about health in Aceh, including the management of the Indonesian Doctors Association (IDI) for the Aceh region, had previously predicted a significant addition of positive Covid-19 cases in the second wave (aceh.tribunnews.com, 2020). Their predictions were proven: after July 30, 2020, Aceh Province recorded an addition of 45 positive Covid-19 cases, the highest single increase that had occurred in Aceh up to that point. Data also show that Covid-19 had spread to nearly 23 districts/cities in Aceh. From then on, the number of positive Covid-19 cases continued to grow and reached an addition of 168 cases on August 17, 2020 (dinkes.acehprov.go.id, 2020). The predictions from IDI proved far more accurate than those of the first wave (early March to May 2020), during which Aceh still had the lowest number of positive Covid-19 cases and was the most successful province in suppressing the spread of Covid-19, both in terms of prevention and treatment of the victims. Indeed, on 27 May 2020 the Head of the National Agency for Disaster Management (BNPB) in Jakarta, as Chair of the Indonesian Covid-19 Accelerated Handling Task Force, sent an official letter to the Aceh Government asking it to share successful experiences related to Covid-19 handling strategies with other provinces in Indonesia (aceh.tribunnews.com, 2020).
Based on the authors' observation, the concern of academics, activists, mass organizations and health practitioners about the increasing number of positive Covid-19 cases in Aceh is reasonable. In terms of regional accessibility, Aceh has a large potential for the spread of the Covid-19 virus: access by land, sea and air provides an opportunity for anyone to enter Aceh. Likewise, several socio-cultural, socio-economic, socio-political and religious conditions have the potential to give way to the spread of Covid-19. These conditions are big challenges in handling Covid-19.
The Aceh Government through the Aceh Covid-19 Accelerated Handling Task Force as the most responsible institution for handling Covid-19 has taken many quick and important steps, and announced to the public. Seeing those considerable challenges, coupled with the fake news attacks (negative campaigns) about the handling of Covid-19 on social media, the Aceh Government has communication strategies specially to reinforce the Covid-19 control program. (humas.acehprov.go.id, 2020). From the first data obtained, several actions by the Aceh Government in fighting Covid-19 received appreciation from the Central Government and foreign parties. Several main themes were spread in public communication activities to get sympathy and support from the people of Aceh, to fight Covid-19 together. Among the main themes published, the most interesting one is a sentence that begins with a hashtag, namely #AcehLawanCovid19. This hashtag is very inspiring to authors and will make it a highlight or main analysis in this paper. program was implemented fairly successful, including the marketing of fertilizers Azola in the Philippines, the revolution in agricultural production in Indonesia, the campaign against drunk for drivers in North America, the campaign energy savings in electricity in Canada, the campaign participation of the population in the Village Bank program in Bangladesh, etc. So, the success of a communication or development program that requires communication support, basically depends on the planning itself. (Cangara, Perencanaan dan Strategi Komunikasi, 2007, p. 2).
The study of communication planning initially emerged from the conflict of interest between developing countries and developed countries, which peaked in the 1970s, especially regarding the imbalance of information. The distribution of information around the world was considered unfair: developed countries, which have the power of communication technology, transfer various information to developing countries, which are less able to buy information technology. The imbalance of information was also evidenced by Gerbner's research on 60 newspapers from 9 countries (3 capitalist countries, 3 socialist countries and 3 developing countries), which found that newspapers published in capitalist and socialist countries carried very little news from developing countries. Even when there was news about developing countries, it was only negative news, such as conflict, poverty, riots, hunger, ignorance, etc. (Cangara, Perencanaan dan Strategi Komunikasi, 2007, p. 3).
Cassandra, as cited in Hafied Cangara, believes that in message management techniques, for communication to be considered effective there are two models of message composition: composing informative messages and composing persuasive messages (Cangara, Pengantar Ilmu Komunikasi, 2007, p. 116). The informative message model is devoted more to expanding the knowledge and awareness of the audience; it is diffuse, simple, clear, and avoids technical terms.
There are four ways of composing informative messages: a. Space Order, that is, composing messages according to the conditions of a place or space, such as local, regional or national.
b. Time Order, that is, arranging messages chronologically based on time or period.
c. Deductive Order, that is, composing messages from a general to a specific topic.
d. Inductive Order, that is, composing messages from a specific to a general topic.
Persuasive message models can be classified as follows: a. Fear Appeal, that is, a method of composing or conveying messages by frightening the audience.
b. Emotional Appeal, that is, a method of composing or conveying messages by arousing public emotion, for example by raising issues of ethnicity, religion, discrimination, the economy, etc.
c. Reward Appeal, that is, a method of composing or conveying messages by making promises, such as in election campaigns.
d. Motivational Appeal, that is, a method of composing or conveying messages without promises but with psychological motivation.
e. Humorous Appeal, that is, a method of composing or conveying messages with jokes, so that the message attracts the public and does not become tiresome.
Besides those techniques, to manage messages effectively we also need to pay attention to the following points: a. The message to be conveyed must first be understood systematically.
b. Able to argue logically to support the material presented.
c. Mastering language intonation and nonverbal movements that can attract audiences.
Communication Strategy During Disaster
Haddows and Kims, as cited in Rudianto, reveal that there are 4 (four) principal foundations that can be used as a strategy for building communication during a disaster, namely: a. Customer Focus, that is, understanding the information needed by customers, in this case the community and volunteers. Governments must regard the public as customers and meet their need for information related to Covid-19, in both quantity and quality, so that people can stay alert and not panic. A communication mechanism must be established to ensure that information is conveyed correctly and accurately.
b. Leadership Commitment, that is, leaders in emergency situations must be committed to effective communication and must be actively involved in the communication process. The public will feel safe if they see the seriousness and ability of the government in controlling the Covid-19 pandemic. Good coordination between institutions, both central and regional, is part of the leaders' commitment to their citizens.
c. Situational Awareness, that is, effective communication is based on the controlled collection, analysis and dissemination of information related to the disaster. Principles of effective communication such as transparency and trustworthiness are key.
d. Media Partnership, that is, television, newspapers, radio and other media are very important channels for conveying the latest information about the proper handling of a pandemic to the public. Cooperation with the media involves understanding the needs of the media, maintaining a well-trained team to cooperate with the media, and obtaining information and sharing it with the public (Rudianto, 2020, p. 8).
Legal Basis
The Aceh Government is a provincial government within the system of the Unitary State of the Republic of Indonesia based on the 1945 Constitution of the Republic of Indonesia, in which government affairs are carried out by the Aceh Regional Government and the Aceh Regional People's Representative Assembly in accordance with their respective functions and authorities. Pemerintahan Aceh (the Aceh Government) is equal to the other provincial governments in Indonesia and is the successor of Pemerintahan Provinsi Daerah Istimewa Aceh (the Provincial Government of the Special Region of Aceh) and Pemerintahan Provinsi Aceh (the Government of the Aceh Province). Aceh governance is carried out by the Governor as the executive institution and the Aceh People's Representative Council as the legislative institution. The Aceh Government is formed based on the government system of the NKRI (Unitary State of the Republic of Indonesia), which, according to the 1945 Constitution of the Republic of Indonesia, recognizes and respects regional government units that are special or distinctive in nature. The constitutional journey of the Republic of Indonesia places Aceh as a regional government unit with special authority, related to the unique historical character of the Acehnese struggle, which has high resilience and fighting power. This resilience and high fighting power comes from a way of life based on Islamic law, which gave rise to a strong Islamic culture, so that Aceh became an important asset in the struggle for the independence of the Republic of Indonesia, which is based on Pancasila and the 1945 Constitution. Thus, Acehnese life requires the formal enforcement of sharia law. The enforcement of Islamic sharia is carried out on the principle of Islamic personality for everyone in Aceh, regardless of nationality, position and status in the region.
Special Autonomy of Aceh
State recognition of the privileges and special status of Aceh is given through Law Number 11 of 2006 on Aceh Governance (LN 2006 No. 62, TLN 4633). This Aceh Governance Law is inseparable from the Memorandum of Understanding between the Republic of Indonesia and Gerakan Aceh Merdeka (the Aceh Independence Movement), which was signed on August 15, 2005 and represents a reconciliation towards sustainable social, economic and political development in Aceh.
Law No. 11/2006, which contains 273 articles, is the special 'Regional Government Law' for Aceh. The substance of this law, and the specificity and distinctiveness of Aceh which forms the main framework of Law 11/2006, is largely the same as Law No. 32/2004 on Regional Government. Therefore, Aceh no longer depends on the Regional Government Law for matters that are already regulated by the Aceh Governance Law.
Implementation of Islamic Sharia
Aceh Province is the only province in Indonesia that implements Islamic Sharia, which refers to the provisions of Islamic Criminal Law known as the Hukum Jinayat (Law of Jinayat). The application of Islamic Sharia is regulated in a regional regulation or Qanun of Aceh Province number 6 of 2014 concerning the Law of Jinayat.
Borderline
The outer boundaries of the Aceh region are:
Aceh Government Structure
The Aceh region is divided into regencies and cities. Regencies and cities are parts of the provincial region as legal community units, which are given special authority to regulate and manage government affairs and the interests of the local community by themselves, in accordance with the laws and regulations in the system and principles of NKRI government based on the 1945 Constitution, led by a Regent/Mayor. A Regency/City is divided into Districts. A District is the working area of a Head of District as an apparatus of the Regency/City in administering District government. Districts are divided into Kemukiman. A Kemukiman is a community unit under a District, consisting of several gampongs with certain territorial boundaries, led by an Imeum Mukim (or another name) and placed directly under the Head of District. Furthermore, a Kemukiman is divided into Gampong. The kelurahan in Aceh were phased out to become Gampong (or other names) within the Regency/City. A Gampong (or another name) is a legal community unit under a Kemukiman, led by a Geuchik (or another name), which has the right to manage its own household affairs (humas.acehprov.go.id, 2020).
Corona Virus Disease 2019 (Covid-19)
Coronaviruses are a group of viruses that cause disease, with symptoms ranging from mild to severe (Kemenkes, 2020).
Research Type and Approach
This research is designed as qualitative research with a phenomenological approach: a study that describes the results of in-depth interviews with informants and observations of the subjects, covering various data related to visible and invisible phenomena such as oral speech, attitudes, behavior and facial expressions as primary data, without first operationalizing or testing the concept against the reality under study (Kriyantono, 2006, p. 67).
In line with this definition, Bogdan and Biklen, as cited in Syukur Kholil, define the qualitative method as a research procedure that produces descriptive data in the form of written or spoken words from observable people and behavior. Furthermore, Kirk and Miller, also in Syukur Kholil, define qualitative research as a particular tradition in social science which fundamentally relies on observing humans within their own domain and relating to people in their own language and terminology (Kholil, 2016, p. 121).
Research Informants
In qualitative research, determining key sources or informants is very important, because their role is that of the main resource persons for obtaining valid data, which is sometimes concealed and closed off from the research object.
The key informants should be people who are considered capable and have the capacity to provide information related to the focus of the research conducted. Research informants can come from internal circles of the bureaucracy or from the general community; they are: a. the Bureau Chief of Public Relations and Regional Secretariat Protocol, Aceh. f. Academics and observers in the field of information technology and communication.
Data Source
All data obtained in this study consisted of two sources, namely: a. Primary data sources, obtained from the results of in-depth interviews with the resource persons, namely a number of key informants and Focus Group Discussion participants (to be selected), from direct observation of research subjects, as well as from other important data and information obtained in the field.
b. Secondary Data Sources, obtained from official manuscript archives (official Aceh government letters), textbooks, theoretical assumptions, journals, papers, research reports, proceedings, photos and video documentation, websites, and press scrapbook.
Data Collection Instruments
The data collection instruments used are: a. Interview Guidelines as a guide for data collection through in-depth interviews with informants; in this case the authors use semi-structured interview guidelines, that is, interview guidelines that are arranged in detail but still leave room to dig deeper into the data from informants.
b. Observation Guide as a tool in data collection activities through observation of research subjects. Observation in qualitative research is made only in the form of outlines of the main concepts to be observed.
c. Audio Visual tools to capture sound and images as supporting evidence of research results.
Data Collection
Data collection in this study was carried out using multiple methods, as follows: a. In-depth interviews, the process of collecting data through verbal questions and answers between the researcher and the sources. This type of interview is also called an 'intensive interview' because it is carried out more than once; interviews are conducted repeatedly, right at the location and over a relatively long duration compared to regular interviews, but remain focused on relevant data only.
b. Observation is the method of collecting data by observing with the eyes and other senses. In observation, the researcher does not only use the eyes, but always connects what he sees with what is received by the other senses.
c. Documentation study, collecting data by reviewing documents such as legal products and official scripts that have been published by the Aceh Government, as well as textbooks, journals, proceedings, press scrapbooks, websites, and audio-visual recordings that have been produced and published by the Aceh Government, related to the handling of Covid-19 in Aceh in 2020.
Data Processing and Analysis
In this study, the data obtained were analyzed using a qualitative approach consisting of constant comparative analysis and domain analysis. Before applying these two data analysis techniques, a data interpretation process was carried out in accordance with the theory, to make it easier to understand and interpret the existing data. This data analysis process starts from empirical facts (data collection), then the data are classified by looking at general characteristics (categorization), and are then given meaning or interpretation.
Motivational Communication Strategy
A psychotherapist well known for his books on spirituality once reminded us that, in handling a pandemic anywhere in the world, the number of victims could be five times higher if there is fear: one thousand people become victims because of illness, while four thousand more become victims because of panic (who.com, 2020). Reflecting on this, communication is the most important thing in dealing with a pandemic. Public trust needs to be built and maintained in order to reduce panic in society, so that the handling process can run smoothly. Therefore, the Government must show its seriousness, readiness and ability to handle the outbreak. Perceptions about the readiness and seriousness of the Aceh Government need to be conveyed through comprehensive and periodic explanations to the public, describing what the Government has done and will do. The Aceh Government implemented this communication strategy very well during the Covid-19 pandemic. The Government, through the Covid-19 Accelerated Handling Task Force, even invited the public to unite in fighting Covid-19: the government and citizens must work together in handling Covid-19, in both prevention and response. In an interview, the Bureau Chief of Public Relations and Regional Secretariat Protocol said that the Aceh Government, through several Regional Apparatus Organizations (OPD) that are members of the Covid-19 Accelerated Handling Task Force, motivated the public to remain calm yet wary, and to understand what they have to do in their own circles. The communication strategy with motivational techniques is also aimed at building the people's perception that the State (the Aceh Government) is responsive in preventing and controlling the Covid-19 pandemic (KaroHumpro, 2020). Likewise, the solidarity of the Aceh Government with its colleagues in Forkopimda shows their concern for the community through several jointly signed appeals to be carried out by all Acehnese people in the face of the Covid-19 pandemic. On March 19, 2020, Forkopimda Aceh issued an appeal regarding the Acceleration and Anticipation of the Spread of Covid-19 in Aceh (humas.acehprov.go.id, 2020). This joint appeal was widely publicized to the Acehnese people through the mass media, as well as through the entire government network, down to every gampong throughout Aceh. On March 29, 2020, Forkopimda Aceh also issued a Joint Decree regarding the implementation of a curfew (bpba.acehprov.go.id, 2020). This shows the solidarity of the leaders and motivates society, so that people feel comfortable, cared for and protected during this pandemic. This motivational communication strategy has gained support from the Acehnese, who voluntarily obey all appeals during the Covid-19 pandemic.
Quick Response Strategy
Long before the Central Government formed the Covid-19 Accelerated Handling Task Force at the central level in April 2020, the Aceh Government had already established an Alert Post (Posko Siaga) for handling Covid-19 in Aceh Province on 26 January 2020, based at the Aceh Social Service (KadisInfokom, 2020). The first quick response of the Aceh Government through the Posko Siaga was to record all Acehnese students and residents in Wuhan and to send the data to the Ministry of Foreign Affairs of the Republic of Indonesia in Jakarta. Then, on the recommendation of the Indonesian Ministry of Foreign Affairs, on January 27, 2020 the Aceh Government sent logistical support to Acehnese students and residents in Wuhan (serambinews.com, 2020).
During an interview with the Head of the Aceh Information and Communication Department, he said that on the same day as the delivery of logistical support to students and residents of Aceh in Wuhan, the Government of Aceh designated two (2) hospitals as Covid-19 referral hospitals: the Zainal Abidin Hospital in Banda Aceh and the Cut Meutia Hospital in Lhokseumawe (KadisInfokom, 2020). Another step taken by the Aceh Government was trying to return Acehnese students and residents in Wuhan to Indonesia. Of the 65 students throughout China, the Central Government of Indonesia evacuated 13 students from Wuhan to Natuna; 5 (five) people were facilitated to return to Aceh, while 45 other residents were already in Aceh (antaranews.com, 2020). Furthermore, prior to any order from the Central Government and using the Emergency Response Fund, the Aceh Government ordered various facilities for the Covid-19 referral hospitals and Personal Protective Equipment (PPE) for health workers. Besides health facilities and equipment, the Emergency Response Fund was also used to provide basic foodstuffs for the families of Covid-19 patients and for health workers, even though at that time there was still no positive Covid-19 case in Aceh.
These various quick responses from the Government of Aceh were well publicized by various media, under the coordination of the Aceh Public Relations and Regional Secretariat Protocol Bureau and the Aceh Information and Communication Department. This quick response received appreciation from the Acehnese, from some of the people's representatives serving in the Aceh People's Representative Council (DPRA), and also from the Central Government through the Ministry of Foreign Affairs of the Republic of Indonesia.
Leadership Commitment Strategy
Leaders who play a role in the emergency response must have a strong commitment to actively taking part in every activity to handle Covid-19. The public will feel calm if they see the seriousness and ability of the leaders in controlling the Covid-19 pandemic. Good coordination and cooperation between agencies are part of the leaders' commitment to their citizens.
During an interview with the Head of the Aceh Health Department, he said that the Deputy Governor of Aceh, Mr. Ir. H. Nova Iriansyah, MT, who currently serves as Acting Governor (PLT), carried out this communication strategy very well. He directly handled various activities related to the handling of Covid-19. This demonstrates the leadership's commitment to solving the problem, as well as motivating the field workers.
A joint prayer for the students and Acehnese in Wuhan, held on February 5, 2020 at the Hall of the Vice Governor's Office, shows the concern and seriousness of the leader. When the students and Acehnese returned from Wuhan on February 16, 2020, the Acting Governor (PLT) welcomed them at Sultan Iskandar Muda Airport, Banda Aceh (antaranews.com, 2020). Furthermore, the Acting Governor (PLT) also showed his commitment by attending in person to review the readiness of the Zainal Abidin Hospital in Banda Aceh and the Cut Mutia Hospital in Lhokseumawe as the Covid-19 referral hospitals (antaranews.com, 2020). On March 29, 2020, the Acting Governor (PLT) also sent a letter to all special health workers who handle Covid-19 patients. The letter contains motivation and appreciation for their dedication in caring for Covid-19 patients (ajnn.net, 2020). To improve the quality and quantity of care for Covid-19 patients, on March 31, 2020 the Acting Governor (PLT) came to inaugurate the Pinere Outbreak Room at the Zainal Abidin Hospital, Banda Aceh (m.kumparan.com, 2020). A day later, on April 2, 2020, the Acting Governor (PLT) inaugurated accommodation for the medical team that handles Covid-19 patients (beritasatu.com, 2020). Furthermore, on April 8, 2020, the Acting Governor (PLT) also came to inaugurate the Special Polyclinic for Infectious Diseases at the Zainal Abidin General Hospital in Banda Aceh (kasadar.com, 2020). At the same time, 3 out of 4 positive Covid-19 patients who were treated in the RSUZA isolation room were declared cured (m.liputan6.com, 2020).
This on-the-spot activity was carried out not only by the Acting Governor (PLT) of Aceh but also by his wife, Dr. Diah Nova Iriansyah, as the Chairperson of the Aceh Provincial TP PKK (the Team for Mobilizing Family Empowerment and Welfare), who paid attention to women, mothers and children facing the Covid-19 pandemic, especially in Aceh. Support from the Government was delivered directly (symbolically) by the Chairperson of PKK Aceh to the entitled people. This is one of the communication strategies of the Aceh Government in dealing with the Covid-19 pandemic: giving serious attention and a strong commitment to the citizens.
Mass Communication Strategy
The mass communication strategy refers to a media partnership strategy: television, newspapers, radio and other outlets are very important media for conveying real updates about the handling of Covid-19 to the public. Collaboration with the media concerns an understanding of the data and information that can be shared with the public. The Aceh Government, through the Aceh Bureau of Public Relations and Regional Secretariat Protocol, the Aceh Information and Communication Service, and the Spokesperson for the Covid-19 Accelerated Handling Task Force, maintains intensive communication with media crews and provides fast and accurate data and information about Covid-19 updates in Aceh. Media crews can easily access every Covid-19 update and put it in the news for public consumption.
The target audience can be reached through various channels in Aceh, either through mainstream media, social media, daily online media or through communication networks. The media used by the Aceh Government to convey data and information related to developments in the handling of the Covid-19 pandemic are: the official website of the Aceh Government, namely acehprov.go.id, television, print media, online media, radio, an SMS gateway, social media, local government (Regency/City) information networks, inter-school information networks, youth/religious/political organization networks and other informal networks.
With this strategy, most of the activities of the Aceh Government in dealing with Covid-19 have been thoroughly recorded in mass media reports and ultimately consumed by the public. With this 'media partnership' strategy, the Aceh Government hopes that all information regarding the handling of Covid-19 can become a routine agenda for the media in Aceh. Every day, even at any time, the media are expected to be able to inform and educate the public about how to deal with the Covid-19 pandemic without feeling worried but while remaining wary. This media routine is expected to become a routine agenda for the Acehnese, so that people share information and educate each other on healthy habits during the pandemic (KadisInfokom, 2020).
Challenges for The Aceh Government in Implementing the Communication Strategy While Handling the Covid-19 Pandemic
Challenges for the Aceh Government in implementing a Communication Strategy in handling the Covid-19 Pandemic in Aceh are:
Accessibility
Aceh Province has the following accesses: a. Access by air traffic: Aceh has 13 airports, comprising 2 (two) international airports and 11 (eleven) domestic airports.
b. Access by sea traffic: Aceh has 18 (eighteen) ports, classified as 8 (eight) ferry ports and 10 (ten) export-import seaports.
These accesses provide opportunities for people from outside Aceh to enter Aceh Province, both legally and illegally (especially from neighboring Malaysia).
Socio-Culture
c. Myth: some Acehnese figures and ordinary people believe that Covid-19 is a kind of ghost (ta-uen) that will disappear through a ritual of driving out demons.
Social-Economy
d. Speculation: a belief has spread among some public figures that Covid-19 is a type of biological weapon produced by certain countries to control and destroy other countries in order to dominate the world politically and economically (m.detik.com, 2020).
e. The aftermath of the long conflict and of the earthquake and tsunami disaster has changed the character, attitudes and behavior of the Acehnese, making them more individualist, hedonist and pragmatic (Abdullah, 2006, p. 131).
Spiritualism
a. The Covid-19 health protocol, which requires residents to stay at home, has led to a wrong perception among some Muslim religious teachers, and this has greatly affected their followers. The protocol is perceived as prohibiting worship in mosques or meunasah (smaller places of prayer), even though it only limits the way of praying.
b. Some circles of society perceive Covid-19 as the "Army of God", which will scorch the evil on earth, and believe that people who worship will be saved from it. This makes some people feel that everything is normal and not care about what is happening.
c. Some other Acehnese think that Covid-19 is a calamity that must be fought by increasing worship and communal 'dzikir' (remembrance of God) rituals, by gathering and involving many people.
d. Because of their religiousness, the Acehnese are easily offended if there are new rules that are contrary to religious rituals (m.liputan6.com, 2020).
Local-Politics
The political relationship between the Aceh Government as the executive and the Aceh People's Representative Council (DPRA) as the legislature is not harmonious at present. This has greatly affected the accuracy and speed of every decision and action of the Aceh Government in handling Covid-19, which has often provoked criticism from the DPRA as the legislature (m.republika.co.id, 2020). Excessive and politically motivated criticism from some DPRA members has obstructed the Aceh Government's steps in dealing with the Covid-19 pandemic. The problems that appeared in the sessions of the DPRA Special Committee (Pansus), which discussed the supervision of Covid-19 handling funds (refocusing), eventually led to the submission of the Right of Interpellation by the DPRA (news.detik.com, 2020). What the DPRA has done was a challenge for the Aceh Government in the rapid handling of Covid-19 in Aceh.
Conclusion
The small number of residents who have become victims of the Covid-19 pandemic in Aceh is considered a success of the Aceh Government in dealing with Covid-19. The handling efforts included prevention and control of the Covid-19 pandemic. These activities were communicated well through communication strategies, so that the public could support and play an active role in overcoming the pandemic. The challenges faced by the Aceh Government in implementing communication strategies while handling the Covid-19 pandemic are very diverse, namely: regional accessibility, socio-economic, socio-cultural and religious conditions, as well as local political dynamics. These challenges have more or less affected the acceleration of the handling of the Covid-19 pandemic in Aceh Province.
The innovative communication strategies implemented by the Aceh Government during the Covid-19 pandemic can be applied to the handling of disaster emergencies or other outbreaks in Aceh Province in the future. These innovative strategies should be put in writing, so that other provinces with characteristics similar to those of Aceh Province can learn from or imitate them. | 7,919.8 | 2021-03-26T00:00:00.000 | [
"Computer Science"
] |
It’s Not Easy Being Agile: Unpacking Paradoxes in Agile Environments
In this paper, we outline inherent tensions in Agile environments, which lead to paradoxes that Agile teams and organizations have to navigate. By taking a critical perspective on Agile frameworks and Agile organizational settings the authors are familiar with, we contribute an initial problematization of paradoxes for the Agile context. For instance, Agile teams face the continuous paradox of ‘doing Agile’ (= following an established Agile way of working) versus ‘being Agile’ (= changing an established Agile way of working). One of the paradoxes that organizations face is whether to start their Agile journey with a directed top-down (and therefore quite un-Agile) ‘big bang’ or to allow an emergent bottom-up transformation (which may be more in-line with the Agile spirit but perhaps not be able to overcome organizational inertia). Future research can draw on our initial problematization as a foundation for subsequent in-depth investigations of these Agile paradoxes. Agile teams and organizations can draw on our initial problematization of Agile paradoxes to inform their learning and change processes.
Introduction
Agile and hybrid project environments are increasingly becoming the norm within and even beyond the IT industry, and organizations increasingly start scaling Agile 1 beyond IT project teams [1]. There are numerous methodologies for Agile project management and scaling Agile, which claim to embody the Agile Manifesto's principles and values (e.g., Scrum, SAFe, Disciplined Agile etc.). Studies show that embracing Agile leads to generally satisfied individuals and companies, but there are also a variety of obstacles that teams and organizations may face [2][3][4].
In a more general perspective, "most management practices create their own nemesis" [5 p. 491], and Agile is no exception. As one role of research is to critique the status quo [6], we do so in this paper for the Agile context by outlining areas of tension which result in paradoxes that Agile teams and organizations running Agile teams may have to navigate. Following Putnam et al. [7], we define paradoxes as "contradictions that persist over time, impose and reflect back on each other, and develop into seemingly irrational or absurd situations because their continuity creates situations in which options appear mutually exclusive, making choices among them difficult" (p. 72).
By providing this critique, we problematize [8] Agile beyond a functionalist view that is centered on performance or effectiveness. Our initial problematization therefore paves the way for future, more in-depth research contributions that investigate each paradox, as an instance of 'the dark side of Agile', more closely. We see these paradoxes as a starting point for more focused theoretical and empirical investigations of how Agile teams and organizations encounter, experience, and cope with these Agile paradoxes. As one key tenet of Agile organizations is continuous learning and change, such in-depth treatments of Agile paradoxes can therefore also contribute to organizational learning and change efforts in practice.
Our analysis draws on a critical reading of selected Agile methodologies and techniques, the Agile research literature, as well as a critical assessment of Agile environments that the authors are familiar with (see Sect. 2). Note that while there is a quite comprehensive dataset that informed the authors' other research in Agile organizational contexts, there was no specific data analysis conducted for this paper to inform our initial problematization of Agile paradoxes. We see such an undertaking as a fruitful endeavor for future research.
In this short paper, we first outline the backdrop against which we provide our critique. We then start discussing sources for agile paradoxes on the levels of the Agile team as well as on the organizational level for those organizations who scaled Agile beyond individual teams.
Empirical Background
Both authors are involved in a large-scale cross-industry and cross-country research program on Agile organizational transformation and have collected extensive data across two phases. The first data collection phase consisted of interviews and focus groups with seven executives (e.g., CIO or CDO), whereas the second phase consisted of interviews with lower level managers (e.g. program managers, product owners, enterprise architect) or external consultants. The participants had two essential criteria to fulfill: 1) their organization is undergoing a transformation towards organizational agility, and 2) the participants hold a position with in-depth insights on the overall (agile) organizational system. For the executives group, we conducted three single day focus group workshops [9] and seven semi-structured interviews. For the other group, we conducted 33 semi-structured interviews. Each interview session lasted 45-75 min and was audio-recorded and transcribed.
All gathered data has been qualitatively analyzed to inform research on the implications of Agile for topics such as portfolio management [10], enterprise architecture [11], business/IT alignment [12], and IT governance (currently under review). Beyond these specific topics, however, the authors also observed more general patterns of a paradoxical nature in the Agile organizational contexts the interview and focus group participants gave insight into, and likewise in the (Scaling) Agile frameworks that the interviewees referred to. The following two sections outline the sources for these paradoxes that the authors have identified. Due to space restrictions in this short paper, we are only able to outline and problematize each paradox on a rather general level.
Sources for Agile Paradoxes on the Team Level
Being Agile Versus Doing Agile
The different aspects of Agile such as values, principles, methodologies, or techniques allow us to distinguish between teams that are 'being Agile' (i.e., embrace Agile principles and values in an Agile mind-set and truly focus on delivering customer value while learning continuously) and 'doing Agile' (i.e., adopt an Agile methodology or a set of Agile techniques and simply follow them). Note that 'doing Agile' can be a step on the way towards fully embracing the Agile mindset [13,14]. However, there is the danger that an Agile team stops advancing beyond the 'doing Agile' stage, i.e., it keeps trying to 'perfect' its adoption of its chosen Agile approach. In contrast, teams 'being Agile' commit themselves to being accountable for their work, being willing and able to handle uncertainty in their work, and to striving for continuous improvement. The specific way of working (methodology, process, techniques, tools) or any form of adherence is less important. In this sense, the term 'Agile methodology' is already paradoxical in itself, as the term 'methodology' implies a specific prescription. Especially in volatile environments or in environments where an Agile methodology or framework forms the cornerstone of the Agile transformation, there may be a permanent paradoxical tension between 'doing Agile' and 'being Agile' for Agile teams.
Experience Versus 'Appetite' for Change and Flexibility
Agile environments are built around the assumption that information completeness is never achieved due to ever-changing environments and customer needs. Hence, a high level of readiness for coping with change is a critical factor for Agile team effectiveness. However, a large amount of team members' experience in particular may also be a source for a paradox. A team member's experience can come from traditional project environments (particularly since Agile is still a quite young trend) and therefore include a preference for stable processes and predefined requirements based on detailed planning. Each Agile team member also continuously gains experience in (and may become accustomed to) their particular Agile approach and also regarding the artefact they are working on. Both variants of experience are challenged, however, by Agile's 'permanent uncertainty' in its mindset. Sometimes, a radical change to the way of working or the deliverable may be what the situation or the market requires, and extant experiences may be a source of resistance or inertia regarding those changes. The paradox here is therefore that an increase in individual and collective experience may lead to a decreased 'appetite' for future change and therefore to less flexibility for a team.
Exploration Versus Exploitation
Agile teams are also characterized by a high level of self-organization and decision-making autonomy. In traditional Agile teams, this autonomy mainly concerns the choice of and ongoing changes to the methodology, techniques, and tools [15][16][17]. In Scaling Agile teams, this autonomy often extends to product or service design changes and future directions for their product(s)/service(s)/area(s) [18,19]. In the former case, a paradox arises out of the tension between the requirements of getting work done and continuously sharpening (and potentially re-learning) one's (metaphorical and literal) tools. This may pose the danger of splitting a group into those advocating change and those advocating getting things done. Autonomy over one's artefact in the latter case could lead to a similar paradoxical scenario of the well-researched tension between exploration vs. exploitation [20,21]. Should a team radically re-invent the artefact to adapt to or anticipate market changes, or incrementally refine the artefact to fine-tune it to established customer needs? In both cases, the team's handling of this paradox would enable or constrain future actions.
Directed Versus Emergent Team Process Change
As the notion of continuous change is built into Agile environments and teams, roles such as the Scrum Masters and Agile coaches are responsible for guiding and supporting the Agile team towards becoming more effective. However, there are two general archetypes of how these roles could be set up (or choose for themselves) to fulfill their task: Agile coaches and Scrum Masters could either direct a team's development according to what they perceive as best for the team (to be an 'Agile leader' or even an 'Agile police', so to speak), or could nurture the teams instead (i.e., 'help the people to help themselves') and let any changes to a team's way of working emerge from within the team. In the former case, having change directed and induced from outside the team could potentially undermine a team's autonomy. On the flip side, a team that is perhaps 'too comfortable' with their current Agile approach may not engage in a self-transformation without external direction even though it would benefit from certain changes [22]. Either way, Scrum Masters and Agile coaches could even oppose or counteract good Agile practices (perhaps just subconsciously) in order to continuously create their own work, be kept employed or contracted, and make themselves seemingly indispensable. The underlying paradox here is the one of balancing team autonomy with external directions with respect to changes to the team's way of working.
Starting/Realizing the Agile (Self-)transformation: 'Big Bang' Versus Emergence
When aiming to introduce Agile on a larger scale, organizations have to choose an approach that lies somewhere between an initial 'big bang' top-down transformation towards Agile and an incremental, iterative, and emergent approach where different parts of the organization can choose whether and how they adopt Agile [23,24]. In other words, how Agile should the Agile transformation itself be, and how many predefined structures and processes should the first target state have? For instance, in one situation a common way of working across several Agile teams or units may be more effective to successfully transform (parts of) the organization, whereas in another situation self-taught bottom-up experimentation with Agile techniques and tools may be the more effective approach, particularly when considering how to set the stage for 'being Agile' in a longer-term and sustained perspective. As Agile implies a high degree of team autonomy instead of having top-down pre-planned decisions, a 'big bang Agile introduction' is therefore paradoxical in itself. The danger of mixed messages during an Agile transformation lies in a regression to a directive (i.e., non-autonomous) way of working and organizational culture, and would also constrain the Agile units' autonomy to self-transform in the future. Simultaneously, unfettered team autonomy right from the start could lead to the danger of an aimless or quickly stalling transformation process.
Directing Teams Versus Team Autonomy
The tension between directing and simultaneously sustaining autonomous Agile teams may not only occur during the initial Agile transformation but may stay with organizations throughout their entire Agile journey. The nature of the resulting paradox, however, shifts to issues related to focus, resources, effectiveness, and efficiency. The focus component affects how a team's strategic direction is set and influenced. While each team may know their product's customers best, an organization's top management may wish to change or retire some products. In this situation, the tension arises as to whether a team should be in charge of a changed purpose or even its own dissolution, or whether an organization wants to override its teams' autonomy in these cases. With respect to staffing and resourcing, the Agile idea generally implies that a team would be responsible for the resources they require to fulfill their purpose. However, resource scarcity in organizations, competition for resources across teams, and the willingness to achieve a global optimum across teams may prevent purely bottom-up decision-making on resources. Again, the organization would paradoxically interfere with a team's autonomy if it denies requested necessary resources. Measuring Agile team effectiveness or performance is another source for paradoxes. Measuring performance could have the purpose of identifying the extent to which an Agile team contributes business value, or the purpose of aligning teams with overarching strategic objectives. In both cases an organization would again interfere directly with team autonomy. Finally, efficiency concerns the way of working throughout the organization, i.e., should teams be provided with or even have to adhere to a common set of Agile values, processes, techniques, and tools, which would allow team members to be shared or move between teams without having to adjust to fundamentally new ways of working? On the other hand, an organization-wide 'Agile standard' is again a paradox in itself, since one emphasis of Agile lies on continuous change and adaptability, and different Agile approaches may be effective for different teams. All these aspects are manifestations of a systemic contradiction of having autonomous teams within a coherent business organization. In a nutshell, any decision above the team level may ultimately undermine the teams' perceived autonomy.
Team Identity and Purpose Versus the Need for Radical Business Change
When an Agile team in a Scaled Agile environment is made responsible for (a) particular product(s)/service(s)/area(s), it achieves its sustained focus through this purpose. Over time, having a consistent focus and purpose contributes to a team's shared identity. However, being responsible for a specific product or service for quite a long period may lead to a 'blindness' and attachment of teams to their built artefact. Consequently, a team may add unnecessary bells and whistles to 'their' artefact to justify the product's as well as the team's continued existence and resourcing in comparison to other teams. A team may also become protective of 'their' product or service (area) instead of recognizing the need for a radical change or its retirement, in order to fulfill and surpass changed customer needs and support the organization in thriving in the changing business environment. A team's purpose may therefore become a self-referential part of its identity so that strong repressions of or reactions against a radical change to the purpose occur, with the unanticipated consequence of limiting the effective team agility to self-transform when necessary. The paradox here therefore is that the same mechanisms that keep an Agile team together and effective may also hinder its ability to detect the best time and ways to re-invent itself for its best possible contribution to organizational value.
Discussion, Conclusion, Outlook
In this paper, we identified and briefly discussed several potential paradoxes in Agile contexts. Through our discussion of these Agile paradoxes, we contribute a problematization [8] of Agile on a deeper level than a functionalist perspective that analyzes 'what works' [25], a critique of Agile as a management fashion [26], or previous attempts at identifying Agile paradoxes [27]. In our problematization, we interrogated key Agile tenets and found that embracing Agile may produce a number of paradoxes on the team and the organizational level. We do not see these paradoxes' existence as a negative thing. In fact, to harness the true potential of Agile transformations, organizations may need to become adept at continuously confronting these paradoxes and utilizing their forces in a constructive and not a destructive way for their ongoing self-transformation. Since learning and change are two key Agile tenets, Agile organizations may be uniquely positioned to incorporate the confrontation with their paradoxes into their 'business as usual', instead of treating tensions and paradoxes as issues that stand in the way of organizational effectiveness and need to be resolved. While we have not investigated each paradox in-depth, our initial problematization may still be useful to guide and inspire [28] learning and change processes in Agile organizations. Some of the underlying tensions, such as the exploration vs. exploitation one, are already well known in the literature [20,29]. Others, such as the tensions around Agile team autonomy, may be specific to Agile environments and transformations. They have, to the authors' best knowledge, not been thoroughly investigated yet. Our problematization therefore contributes to a comprehensive research agenda to investigate how Agile teams and organizations encounter, experience, and cope with paradoxes on their Agile journeys. We therefore encourage empirical validation and extension of our findings, as the paradoxes in this paper are limited by being based on general insights from IT organizational roles within two countries. Thus, we also advocate for analyzing tensions perceived by the business side in order to capture a truly comprehensive perspective on the paradoxes. | 3,978.8 | 2020-08-18T00:00:00.000 | [
"Business",
"Computer Science"
] |
Bidirectional communication between the Aryl hydrocarbon Receptor (AhR) and the microbiome tunes host metabolism
The ligand-induced transcription factor, aryl hydrocarbon receptor (AhR) is known for its capacity to tune adaptive immunity and xenobiotic metabolism—biological properties subject to regulation by the indigenous microbiome. The objective of this study was to probe the postulated microbiome-AhR crosstalk and whether such an axis could influence metabolic homeostasis of the host. Utilising a systems-biology approach combining in-depth 1H-NMR-based metabonomics (plasma, liver and skeletal muscle) with microbiome profiling (small intestine, colon and faeces) of AhR knockout (AhR−/−) and wild-type (AhR+/+) mice, we assessed AhR function in host metabolism. Microbiome metabolites such as short-chain fatty acids were found to regulate AhR and its target genes in liver and intestine. The AhR signalling pathway, in turn, was able to influence microbiome composition in the small intestine as evident from microbiota profiling of the AhR+/+ and AhR−/− mice fed with diet enriched with a specific AhR ligand or diet depleted of any known AhR ligands. The AhR−/− mice also displayed increased levels of corticosterol and alanine in serum. In addition, activation of gluconeogenic genes in the AhR−/− mice was indicative of on-going metabolic stress. Reduced levels of ketone bodies and reduced expression of genes involved in fatty acid metabolism in the liver further underscored this observation. Interestingly, exposing AhR−/− mice to a high-fat diet showed resilience to glucose intolerance. Our data suggest the existence of a bidirectional AhR-microbiome axis, which influences host metabolic pathways.
INTRODUCTION
The mammalian body is a mosaic of different microorganisms and eukaryotic cells which share a set of biological and biochemical needs important for growth, body physiology, survival and reproduction (reviewed in reference 1). The gut microbiota, in addition to their ability to process dietary-derived material, also influence host responses to xenobiotics, 2 adding to the growing consensus that factors involved in xenobiotic metabolism could be in intimate partnership with the microbial world. The aryl hydrocarbon receptor (AhR) is a xenobiotic sensor that belongs to the basic helix-loop-helix Per-Arnt-Sim family and regulates phase I drug-metabolising enzymes from the cytochrome p450 family: Cyp1a1, Cyp1a2 and Cyp1b1. 3 Apart from well-known man-made pollutants (e.g., 2,3,7,8-tetrachlorodibenzo-p-dioxin), 4 a battery of natural AhR ligands has been discovered. These include kynurenine and planar indoles made during metabolism of tryptophan, 5,6 such as indole-3-carbinol, which is present in broccoli and cauliflower. 7,8 AhR is also known to be an important regulator of metabolic and immune processes, both of which are vital for intestinal homeostasis, as well as for optimal coexistence of the host and its microbiome. Ligand-dependent activation of AhR has been shown to abrogate colitis, a disease linked to changes in gut microbiome homeostasis. 7,9 More recently, bacterially derived molecules such as phenazines and indole derivatives have been shown to work as AhR activators, 9,10 which implies the existence of a possible microbiome-AhR communication. In this study, host metabolic homeostasis and health have been explored within the context of the gut microbiome's influence on AhR function.
The gut microbiome influences AhR function
In the first set of experiments we assessed whether the microbiome or its metabolites could influence AhR function. We compared AhR function in livers of mice carrying a normal bacterial flora (specific pathogen-free, SPF) and in those of germ-free (GF) mice. Expression of AhR, along with the AhR target genes Cyp1a1 and aryl hydrocarbon receptor repressor (AhRR), was higher in the livers of SPF mice than in those of GF mice (Figure 1a). Indoleamine-2,3-dioxygenase (Ido) proteins are key enzymes that control the metabolism of tryptophan to kynurenine, which is a low-affinity ligand for AhR. 6 The expression of Ido1 was also induced in the presence of bacterial flora (Figure 1a). The expression of Cyp1a2 and Cyp1b1, though, remained unaltered. Short-chain fatty acids (SCFAs), including acetate, propionate and butyrate, are derived through microbiota-driven anaerobic fermentation and are used as an energy source for some cell types, such as colonocytes. Nutrients absorbed from the intestine, including SCFAs, are transported to the liver through the enterohepatic circulation and thus can influence metabolic processes in the liver and affect host health. We then assessed how hepatic tissue responded to selected bacterial metabolites. Administration of butyrate to GF mice marginally induced the expression of AhR and AhRR. The AhR target genes Cyp1a2 and Cyp1b1, however, responded robustly (Figure 1b). Furthermore, we confirmed that bacterial signals regulate AhR activity in the intestine as well. We observed significantly higher expression of Cyp1a1 and AhRR in the intestinal epithelial cells (IECs) of SPF mice than in those of GF mice (Figure 1c). Administration of butyrate to GF mice induced the expression of AhRR and Cyp1a1, similarly to the effect observed in the presence of whole bacterial flora (SPF mice; Figure 1d).
[Figure 1. Quantitative RT-PCR analysis of Cyp1a1, Cyp1a2, Cyp1b1, AhRR, AhR and Ido1 expression in liver tissue and distal small-intestinal epithelial scrapings of germ-free (GF) versus specific pathogen-free (SPF) mice and of GF mice gavaged with water or butyrate (1 g/kg body weight; n = 5 mice per group), and effects of siRNA-mediated AhR knockdown on AhR and Cyp1a1 expression in butyrate-treated HT-29 cells.]
We also used an in vitro system where HT-29 cells were treated with the most prevalent
bacterial metabolites, such as acetate, propionate and butyrate (Supplementary Figure 1). Only butyrate was able to induce the expression of both AhR and its target gene Cyp1a1 (Supplementary Figure 1a,b). Propionate could induce AhR expression only, whereas administration of acetate had no significant effects on the gene expression levels of Cyp1a1 and AhR (Supplementary Figure 1a,b), indicating that butyrate is the most efficient at influencing AhR activity. To test further whether the effect of butyrate on intestinal epithelial cells is AhR-dependent, we blocked the activity of AhR in HT-29 cells using AhR siRNA (Figure 1e). Butyrate-induced expression of Cyp1a1 was reduced in the siRNA-treated group, suggesting that butyrate activates the expression of Cyp1a1 in an AhR-dependent manner (Figure 1f). These observations demonstrate that the gut microbiome can activate AhR. Previously, the commensal bacterial strain Lactobacillus bulgaricus OLL1181 has been shown to induce Cyp1a1 expression in IECs in vitro and in vivo, 11 further consolidating observations that indigenous bacteria might influence AhR activity. Moreover, microbial metabolites such as the SCFAs (as observed in our study) may affect AhR function indirectly by signal transduction via G-protein-coupled receptors that use SCFAs as ligands (GPR41 and GPR43). SCFAs may also regulate AhR function through the inhibition of histone deacetylases. [12][13][14] Another mode of action may be through Toll-like receptor (TLR) signalling, especially via TLR2. 15,16 In response to oral challenge with the AhR ligand benzo(a)pyrene, TLR2−/− mice do not show upregulation of the AhR target gene Cyp1a1. 17 Furthermore, metabolites produced by the microbiome, owing to their similar aromatic structure, could be considered as endogenous ligands for AhR, for example phenazines, which are produced by Enterobacteriaceae, or naphthoquinones, present in a broad range of prokaryotes. 18,19
AhR expression influences the gut microbiome composition preferentially in the small intestine
Having established a possible microbiome-AhR axis, we next investigated whether AhR expression could influence and shape the intestinal bacterial community. To gain detailed insight into bacterial composition within different compartments of the gastrointestinal tract, we collected colonic and small intestinal contents, as well as faecal samples, from AhR knockout (AhR−/−) and wild-type (AhR+/+) mice for sequencing. AhR is activated by dietary ligands that are present in standard mouse chow (e.g., phenols and tryptophan derivatives). In order to avoid such confounding effects, the offspring of AhR−/+ crosses were fed a specially formulated diet depleted of potential AhR ligands (F2 diet) or an F2 diet enriched with a known AhR ligand (DIM diet). 7 The faecal, colonic, and small intestinal materials were collected from AhR−/− and AhR+/+ mice, and the composition of the bacterial communities was evaluated and compared using 16S ribosomal RNA (rRNA) 454 pyrosequencing. We observed differences in the composition of the microbial communities related to the presence or absence of AhR itself, independently of ligand activation (Supplementary Table 1, Figure 2). Bacteroidetes, Actinobacteria and Tenericutes were more prevalent in AhR+/+ mice in comparison to the AhR−/− mice on the F2 diet. Moreover, small intestines of AhR+/+ mice that received a DIM-enriched diet exhibited lower prevalence of Bacteroidetes and higher prevalence of Firmicutes than mice that received F2 chow.
Our findings are in accordance with a previous report showing the outgrowth of bacteria belonging to the Bacteroidetes phylum in the small intestines of AhR−/− mice. 8 This difference in bacterial composition in the small intestine of AhR+/+ mice fed F2 versus DIM diet indicates that the activation of AhR by a dietary ligand is able to shape the composition of the small intestinal microbiota (Supplementary Table 1, Figure 2). Significant differences were also observed within the Firmicutes phylum: bacteria belonging to the class Bacilli were more prevalent in DIM-fed mice, while Clostridia were more prevalent in F2-fed mice. These differences were not pronounced when comparing small intestinal bacterial communities of AhR−/− mice receiving F2 or DIM diets, which emphasises the specificity of the response to DIM and excludes the possibility that the food component DIM (which can be treated as a source of bacterial nutrition) directly affects bacterial composition in an AhR-independent manner.
Surprisingly, we did not observe any significant differences in the composition of the microbiome in the faeces or colon between the genotypes and diets (Supplementary Tables 2 and 3; Supplementary Figure 2a,b). These results support the notion of regional microbiome-tissue communication that was recently proposed for the crypt region of the intestine. 20 In addition, these data raise further concerns regarding the conventional way of profiling microbiome status through characterisation of faecal samples. That the faecal bacterial composition might not be fully representative of the communities in various anatomical regions of the gastrointestinal tract has been reported previously. 21 The distribution and co-localisation of microbiome communities are at present completely unknown, and studies to further clarify the prevalence of bacterial phyla, classes, and species within the stomach, small intestine, and colon are highly warranted. Altogether, our results suggest that compromise of AhR function, through genetic modification or lack of ligands, leads to changes in the composition of commensal bacteria within the small intestine.
AhR regulates energy metabolism
Alterations in the gut microbiome influence host metabolism and energy homeostasis. To address the role of AhR in the regulation of energy homeostasis, we measured global changes in metabolic phenotype between AhR−/− and AhR+/+ mice by generating metabolic profiles of plasma, liver, and skeletal muscle (Supplementary Figures 3a-c and 4a-c, respectively) using proton nuclear magnetic resonance spectroscopy (1H NMR). We compared the levels of identified metabolites in AhR−/− versus AhR+/+ animals and observed significant differences in the concentrations of various metabolites (summarised in Table 1) after a 12 h fasting period.
Glucose levels in both plasma and liver were found to be lower in AhR−/− mice than in AhR+/+ mice, while levels of lactate (the main product of glycolysis) were elevated in the plasma and skeletal muscle of AhR−/− mice. Lactate, together with alanine, acts as an important substrate for gluconeogenesis during fasting. Notably, levels of alanine were lower in AhR−/− muscle and liver. The increased release of lactate and alanine into the blood observed in AhR−/− mice probably indicates that glucose-utilising peripheral tissues catabolise glucose and thereby supply gluconeogenic precursors to the liver, underlining the metabolic switch to gluconeogenesis that provides energy to the system during fasting. Glycerol is another known substrate for gluconeogenesis. Glycerol levels were found to be higher in both the plasma and liver of AhR−/− mice, indicating that this metabolite might also be used as a substrate for gluconeogenesis. On the basis of these observations, we queried whether gluconeogenesis was altered in the AhR−/− mouse liver by checking the gene expression level of glucose-6-phosphatase (G6Pase), the final enzyme of the gluconeogenesis pathway in the liver. Indeed, the level of G6Pase was higher in AhR-deficient mice (Figure 3a), confirming that gluconeogenesis is induced in AhR−/− mice, probably to maintain blood glucose levels and sustain the energy metabolism of other glucose-dependent tissues in the fasted state.
Another striking difference was the lower level of ketone bodies (3-hydroxybutyrate) in the plasma and skeletal muscle of AhR−/− mice (Table 1). During fasting, ketone bodies are produced as a product of fatty acid oxidation or the metabolism of certain amino acids. The liver synthesises and releases ketone bodies, primarily 3-hydroxybutyrate, to be used as fuel by peripheral tissues. Decreased levels of ketone bodies in both plasma and muscle indicate that AhR−/− mice are impaired in utilising fatty acid oxidation as their source of fuel. This was further supported by the lower expression of Hmgcs2, the main enzyme controlling ketone body production (Figure 3b), and of various genes involved in fatty acid transport and metabolism. Reduced hepatic Pparα expression was also observed in AhR−/− mice compared with AhR+/+ mice (Figure 3c). We subsequently observed that the mRNA expression levels of other genes involved in fatty acid transport and metabolism (Cd36, Fabp1, Acox1, Cpt1a, Cpt2, Cyp4a1 and Mcad) were generally downregulated in AhR−/− mice (Figure 3d). Impaired fatty acid oxidation and enhanced gluconeogenesis indicate that AhR−/− mice might be experiencing metabolic stress, which should be reflected by increased levels of corticosterone in the blood. Indeed, as expected, higher levels of glucocorticoids were observed in the plasma of AhR−/− mice (Figure 3e). Elevated levels of glucocorticoids, as well as cellular stress, are known to cause the accumulation and stabilisation of p53, a master regulator known to promote cellular survival under conditions of energy shortage. 22-24 Most interestingly, we observed elevated levels of p53 protein in the livers of AhR−/− mice (Figure 3f), further underscoring the metabolic duress in AhR−/− mice.
In order to understand the metabolic limitations of the AhR-deficient mice, we challenged these mice with a diet rich in fat (high-fat diet, HFD). No significant differences were observed between the two genotypes (Supplementary Figure 5a), nor in chow intake (Supplementary Figure 5b) or fasting insulin levels (Supplementary Figure 5e). Interestingly, at the basal condition we observed that the body weight of AhR−/− mice was significantly lower than that of AhR+/+ mice (Supplementary Figure 5c). However, this difference in body weight no longer existed once the mice had been fed HFD for eleven weeks (Supplementary Figure 5d). A significant increase in body weight was observed in both AhR+/+ and AhR−/− mice after 11 weeks of HFD treatment in comparison with the respective chow-treated groups (Supplementary Figure 5d). However, there was no significant difference in food intake between the AhR+/+ and AhR−/− mice on chow diet or on HFD (Supplementary Figure 5b). Surprisingly, we observed that AhR−/− mice exhibited lower fasting glucose levels (Figure 4a; Table 2) and improved glucose tolerance (Figure 4b) compared with AhR+/+ mice, indicating partial protection against diet-induced glucose intolerance in AhR−/− mice. Furthermore, the expression of glucose-6-phosphatase appeared to be lower, although not statistically significantly so, in the livers of AhR−/− mice compared with AhR+/+ mice when fed HFD (Figure 4c). This is in striking contrast to our observations in fasted conditions under normal chow feeding. Thus, the higher hepatic glucose levels (Table 2), along with higher levels of gluconeogenic precursors such as alanine and lactate in the livers of HFD-fed AhR−/− mice, reflect a possible mode of controlling and maintaining peripheral glucose levels in response to HF feeding. This might possibly be due to better glucose disposal to peripheral tissues. From these observations it is evident that AhR is instrumental in the dynamic regulation of whole-body glucose homeostasis depending on nutrient availability and the energy demand of the host. However, metabolic profiling of plasma under HFD conditions revealed similar differences in 3-hydroxybutyrate between AhR−/− and AhR+/+ mice as were observed for normal chow-fed mice (summarised in Table 2). Consequently, the expression of Hmgcs2 and Pparα remained lower, along with that of other genes involved in fatty acid transport and oxidation, in the livers of AhR−/− mice (Figure 4c), signifying impaired hepatic lipid metabolism in these mice. HF feeding also abrogated the difference in plasma corticosterone levels in AhR−/− mice, bringing them to the level observed in AhR+/+ mice (Figure 4d), reflecting that these modulations are indeed responses to a metabolic milieu that is altered as the energy demand of the system changes. We also observed higher expression levels of p53 in AhR−/− mice in comparison with AhR+/+ mice when challenged with HFD (Supplementary Figure 5f), although the difference was not statistically significant. The induced p53 probably reflects the metabolic stress these mice encounter, as also observed in AhR−/− mice in the basal condition on chow diet (Figure 3f). Hence, it seems that AhR−/− mice are likely at a metabolic advantage through enhanced hepatic gluconeogenesis during fasting, which counteracts hypoglycemia. Under dietary challenge, however, AhR−/− mice are probably more efficient at disposing of the glucose load to peripheral tissues as well as at restricting gluconeogenesis, preventing HFD-induced glucose intolerance.
DISCUSSION
In the present study, we have identified a bidirectional microbiome-AhR axis that influences host metabolism in the liver and ketone body production. Our finding that the production of ketone bodies can be regulated by AhR implies an AhR-dependent feed-forward mechanism to secure nutrients for the host under conditions of starvation. Our observations also suggest a novel but less understood role of AhR in the modulation of gut microbiome composition in the small intestine. Such changes in the microbiome possibly impart metabolic consequences and may contribute to deregulated energy metabolism. However, an altered immune system imparting the changes seen in the microbiota composition of AhR−/− mice cannot be ruled out. Indeed, the reported elevation of inflammation circuits in AhR−/− mice supports such a mechanism. In a recent study, the ketone metabolite beta-hydroxybutyrate was shown to suppress NLRP3-driven inflammation under nutritional constraint conditions. 25 Thus, AhR-mediated regulation of ketone body production illustrates the intricate interplay between inflammation and metabolism, where this metabolic product may act as an immunomodulatory currency.
Most well-known ligands for the AhR, including biphenyls, phenylalanine hydroxylases, aromatic amines and dioxins, are lipid-soluble molecules. 4 These substances, also known as persistent organic pollutants, accumulate in white adipose tissue and are released from this tissue together with lipids. 26 Persistent organic pollutants have endocrine-disruptive properties and can interfere with the activity of many nuclear receptors, resulting in profound alterations of hormonal balance. 27 We speculate that the lack of AhR, one of the main proteins that orchestrate the breakdown of these dangerous substances, initiates transcriptional and translational changes in the liver in order to protect it from the toxic effects of persistent organic pollutants. Possible protective responses include the downregulation of lipid transport to the liver, by decreasing the expression of Cd36 and Fabp1, and the invocation of cellular stress responses through the regulation of gluconeogenesis, by increasing hepatic glucose production and the expression of glucose-6-phosphatase in the liver. 28 Disturbances in glucose and fatty acid metabolism may lead to serious metabolic problems, including type II diabetes and obesity-related co-morbidities. Quantitative trait locus analysis of dietary obesity in C57BL/6 and 129P3/J F2 mice revealed that the AhR gene is one of seven candidate genes associated with increased body weight. 29 Moreover, a shift in the ratio between Bacteroidetes and Firmicutes in the intestine has been linked with the development of obesity in both mice and humans. 30-34 We did not observe a difference in weight gain between AhR−/− and AhR+/+ mice on HFD. However, we did observe that the AhR−/− mice were partially protected against diet-induced glucose intolerance. Whether this protection is due to the altered microbiome composition in the small intestine of the AhR−/− mice remains to be seen. Demonstrating cause or consequence would require extensive additional experiments, which are beyond the scope of this study.
Our data suggest that bilateral communication links the microbiome to AhR, an evolutionarily conserved environmental sensor in many eukaryotes, impacting immune and energy homeostasis. Dynamic changes in the gut microbiome may confer metabolic and developmental consequences to the host through AhR. The current study has established such associations in microbiome-AhR crosstalk. Further experiments are certainly required to reveal the more precise mechanisms and to identify the set of selected microbial metabolites that may account for the observed metabolic effects. Finally, AhR resides within a family of druggable receptors with an abundance of putative ligands, making it an attractive target for future treatment of metabolic and other disorders.
Mice were randomly assigned to receive F2 or DIM diets. The AhR−/−, AhR+/+ and AhR−/+ offspring were co-housed and genotyped at 5 weeks of age. The faecal, colonic and small intestinal material was collected from AhR−/− and AhR+/+ mice when they reached 8 weeks of age. Only healthy male mice of similar age and weight were used in these experiments, and wild-type and knockout mice were randomly assigned to the different treatments. For experiments involving butyrate treatment, GF mice (8-10 weeks of age) were gavaged with water or butyrate (1 g/kg body weight) and killed after 72 h of treatment. The experimental protocol followed a previously published treatment dose and schedule. 35 Mice were killed by cervical dislocation at the end of the experiment. All protocols involving the use of animals were approved by the Regional Animal Research Ethical Board, Stockholm, Sweden (Stockholms norra djurförsöksetiska nämnd), following proceedings described in EU legislation (Council Directive 86/609/EEC). Animal husbandry was in accordance with Karolinska Institutet guidelines and approved by the above-mentioned ethical board (Ref: N 100/10 and 299/12). Animal experiments adhered to the 3R policy to ensure that the minimum number of animals was used while maximising the data obtained.
Glucose tolerance testing and insulin measurements
Glucose tolerance tests were performed with a Roche Accu-Chek glucometer and the corresponding test strips. For each test, a drop of blood was collected from the tip of the tail. Blood was collected after an overnight (12 h) fasting period, and then 15, 30, 60, 90 and 120 min after oral glucose administration (2 g/kg body weight). Insulin levels in mice sacrificed after overnight fasting (12 h) were measured post mortem in serum by ELISA (Millipore, Billerica, MA, USA).
Cell lines, culture conditions and treatments
The human epithelial cell line HT-29 (HBT-11) (ATCC, Rockville, MD, USA) was cultured in RPMI 1640 medium supplemented with 10% heat-inactivated foetal bovine serum (both from Invitrogen, Carlsbad, CA, USA). Cells were maintained in a 5% CO2 humidified atmosphere at 37°C. Cell morphogenesis was monitored microscopically. To prevent contact inhibition, cell densities for each experiment did not exceed 80% confluence. Dimethyl sulfoxide and sodium butyrate were purchased from Sigma Aldrich (St Louis, MO, USA), and 3,4-dimethoxyflavone was from Cayman Chemicals (Ann Arbor, MI, USA). Inhibition experiments using the AhR inhibitor 3,4-dimethoxyflavone (10 μmol/l) were performed by pretreating cells with the inhibitor or vehicle (dimethyl sulfoxide) for 1 h before stimulating cells with sodium butyrate. Downregulation of AhR transcripts in HT-29 cells was achieved with SMARTpool siRNA products directed against AhR (ThermoScientific). Controls were transfected with Silencer SMARTpool Non-Targeting siRNA (ThermoScientific, Waltham, MA, USA). Transfection was carried out according to the manufacturer's protocol using DharmaFECT 4 (ThermoScientific) reagent (final concentration, 0.3%), with a final siRNA concentration of 40 nM. Cells were treated with acetate (10 mmol/l), propionate (5 mmol/l) or butyrate (2 mmol/l) for 24 h as previously published. 37 Control cells (NT) were treated with RPMI medium only. All in vitro experiments were performed twice, with three biological replicates per treatment and per experiment.
RNA extraction and RT-qPCR
RNA was isolated using the Qiagen RNeasy Mini kit according to the manufacturer's instructions. cDNA was synthesised with SuperScript II (Invitrogen). Oligo(dT) primers were used in the presence of RNaseOUT reagent (Invitrogen). One microgram of RNA was used per reaction. Complementary DNA (cDNA) was diluted 1:5 and 1 μl was then used for each quantitative PCR (qPCR) reaction. qPCR was performed using SYBRGreen reagent (Applied Biosystems, Carlsbad, CA, USA) and gene-specific primers (Supplementary Table 4). Reactions were performed on an Abi Prism 7500 (Applied Biosystems) thermal cycler. Housekeeping genes were carefully selected for each experiment so that their expression levels did not exhibit significant differences between treatments. Relative expression was calculated using the formula 2^(−ΔΔCt). The average of the controls was taken as 1, and the fold change for each treatment was calculated accordingly. Each sample was tested in triplicate for qPCR.
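The relative-quantification arithmetic described above can be made concrete with a short script. The sketch below is illustrative only: the Ct values, gene roles, and sample labels are hypothetical placeholders, not data from this study.

```python
import numpy as np

def relative_expression(ct_target, ct_housekeeping, ct_target_ctrl, ct_housekeeping_ctrl):
    """Fold change by the 2^-ddCt method.

    dCt  = Ct(target) - Ct(housekeeping), per sample
    ddCt = dCt(treated) - mean dCt(control)
    fold = 2 ** (-ddCt), so the control average is ~1 by construction.
    """
    dct_ctrl = np.asarray(ct_target_ctrl) - np.asarray(ct_housekeeping_ctrl)
    dct_trt = np.asarray(ct_target) - np.asarray(ct_housekeeping)
    ddct = dct_trt - dct_ctrl.mean()
    return 2.0 ** (-ddct)

# Hypothetical triplicate Ct values (not measured data)
fold = relative_expression(
    ct_target=[24.1, 24.3, 24.0],           # e.g., a target gene in treated cells
    ct_housekeeping=[17.0, 17.1, 16.9],
    ct_target_ctrl=[27.2, 27.0, 27.3],      # untreated controls
    ct_housekeeping_ctrl=[17.0, 16.8, 17.1],
)
print(fold.round(2))
```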
Statistical analysis
Statistical analyses were performed using GraphPad Prism 6 statistical software (La Jolla, CA, USA). For analyses with multiple groups, one-way or two-way analysis of variance tests were performed where relevant. An unpaired t-test (two-tailed) was performed when observations between two groups were compared. P < 0.05 was considered statistically significant unless otherwise stated. Values are expressed as mean ± s.e.m.
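For readers reproducing this kind of comparison outside Prism, equivalent tests are available in open-source libraries. The snippet below is a generic sketch using SciPy with simulated placeholder numbers, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical measurements for two genotypes (placeholder values)
wildtype = rng.normal(loc=5.0, scale=0.8, size=8)
knockout = rng.normal(loc=6.2, scale=0.8, size=8)

# Unpaired two-tailed t-test for a two-group comparison
t_stat, p_two_groups = stats.ttest_ind(wildtype, knockout, equal_var=True)

# One-way ANOVA when more than two groups are compared
group_c = rng.normal(loc=5.5, scale=0.8, size=8)
f_stat, p_anova = stats.f_oneway(wildtype, knockout, group_c)

print(f"t-test p = {p_two_groups:.4f}, ANOVA p = {p_anova:.4f}")
# Report mean +/- s.e.m. as in the text
print(f"WT: {wildtype.mean():.2f} +/- {stats.sem(wildtype):.2f}")
```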
NMR metabolic profiling
Plasma sample preparation. Sample preparation and acquisition methods were adapted from previously published methods. 38,39 Aliquots of mouse plasma (100 μl) were mixed with 500 μl of saline solution (0.9% NaCl in D2O), incubated for 10 min at room temperature, and then centrifuged at 13,000 r.p.m. for 10 min to remove insoluble material. Supernatants were transferred into 5 mm NMR tubes for 1H NMR analysis.
Preparation of aqueous tissue extracts. For liver and muscle analysis, an amount of each sample was weighed out (~200 mg liver; ~50 mg muscle) and added to 1.5 ml of 50:50 water/methanol. Samples were incubated on dry ice for a few minutes before adding 30-40 high-density 1-mm zirconia beads. The samples were then homogenised in a bead beater (Precellys 24) for 3 cycles (5 min each); samples were kept on dry ice between cycles. Next, the samples were centrifuged at 13,000 r.p.m. for 10 min, and 500 μl aliquots were transferred into a separate Eppendorf tube. The pellet was dried and retained for a later organic extraction. Protein was precipitated from the aqueous phase by adding 1 ml methanol, vortexing for 3 min (Multimixer, Thomas Scientific, Swedesboro, NJ, USA), and incubating the samples at −20°C overnight. Aliquots of 500 μl were then taken from the supernatant, dried in a speed vacuum overnight at room temperature, and subsequently frozen at −80°C. Before NMR acquisition, samples were resuspended in 550 μl phosphate buffer solution (0.2 M Na2HPO4 / 0.04 mol/l NaH2PO4, pH 7.4, with 0.1% sodium azide and 1 mmol/l 3-trimethylsilyl-[2,2,3,3-2H4]-propionate in D2O) and transferred to a 5 mm NMR tube for analysis.
Acquisition of 1H NMR spectra. 1H NMR spectra were acquired with a Bruker Avance 600 MHz spectrometer (Bruker Biospin, Karlsruhe, Germany) operating at 600.13 MHz for 1H at 300 K and equipped with a 5 mm broadband inverse configuration probe. Both plasma and tissue extracts were analysed with a water-suppressed 1D NMR experiment using the NOESYPRESAT pulse sequence (256 transients). Irradiation of the solvent (water) resonance was applied during the presaturation delay (2.0 s) for all spectra and, for the water-suppressed 1D NMR spectra, also during the mixing time (0.1 s). The pulse sequence parameters, including the 90° pulse (~12 μs), pulse frequency offset (~2,800 Hz), receiver gain (~200), and pulse powers, were optimised for each sample set run. The spectral width was 20 p.p.m. for all spectra. Spectra were acquired with ~32,000 real data points, and 1.0 Hz exponential line broadening was applied prior to Fourier transformation.
NMR spectral data pre-processing. Data (−1.0 to 10.0 p.p.m.) were imported into MATLAB 7.0 software (MathWorks, Natick, MA, USA), in which they were automatically phased, baseline corrected and referenced to the 3-trimethylsilyl-[2,2,3,3-2H4]-propionate peak (0.00 p.p.m.) using scripts written in-house. To reduce analytical variation between samples, the residual water signal (4.67-4.98 p.p.m.) was truncated from the data set. To enable normalisation to total area and use of probabilistic quotient (median fold-change) methods, each spectrum was set to have a unit total intensity, such that each data point was expressed as a fraction of the total spectral integral. 40 Endogenous plasma metabolites and metabolites extracted from liver and muscle tissues were assigned by referring to data from published literature, 41-44 as well as to in-house and online databases.
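The two normalisation steps mentioned here (total-area scaling followed by a probabilistic quotient, or median fold-change, correction) can be sketched in a few lines. This is a generic illustration on a spectra-by-points matrix, not the authors' in-house MATLAB code.

```python
import numpy as np

def normalise_spectra(X, reference=None):
    """Total-area normalisation followed by probabilistic quotient normalisation.

    X : 2D array, rows = spectra, columns = chemical-shift bins
        (water region assumed to be removed already).
    """
    X = np.asarray(X, dtype=float)
    # 1) unit total intensity: each point becomes a fraction of the spectral integral
    X = X / X.sum(axis=1, keepdims=True)
    # 2) probabilistic quotient: divide each spectrum by its median fold change
    #    relative to a reference spectrum (here the median spectrum of the cohort)
    if reference is None:
        reference = np.median(X, axis=0)
    quotients = X / reference                      # element-wise fold changes
    dilution = np.median(quotients, axis=1, keepdims=True)
    return X / dilution

# Hypothetical example: 4 spectra x 6 bins of simulated intensities
rng = np.random.default_rng(1)
spectra = np.abs(rng.normal(1.0, 0.2, size=(4, 6)))
print(normalise_spectra(spectra).round(3))
```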
Statistical methods and software. After pre-processing of the NMR data, multivariate statistical analysis was performed using both Matlab R2013b (MathWorks, Natick, MA, USA) and the SIMCA-P 13.0 software package (Umetrics, Umeå, Sweden). Principal components analysis (PCA) was performed using univariate scaling. The first two components of variation were plotted against one another to assess the inter-cohort variation across the global metabolic profile. Orthogonal partial least squares discriminant analysis (OPLS-DA), using both univariate and mean-centred scaling, was employed to identify specific metabolites pertaining to a particular sample group. 45 All OPLS models were validated using random permutation testing of the supervised model.
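As an open-source counterpart to the PCA step, the sketch below scales each variable to unit variance (the "univariate" scaling referred to above) and extracts the first two principal components. The data are simulated placeholders; OPLS-DA itself is not implemented in scikit-learn, so this only reproduces the unsupervised step.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
# Hypothetical binned NMR data: 20 samples x 50 bins, two simulated cohorts
X = np.vstack([rng.normal(0.0, 1.0, (10, 50)),
               rng.normal(0.5, 1.0, (10, 50))])
labels = np.array(["knockout"] * 10 + ["wildtype"] * 10)

X_uv = StandardScaler().fit_transform(X)     # mean-centre and scale to unit variance
scores = PCA(n_components=2).fit_transform(X_uv)

# Mean scores per cohort along the first two components (a quick separation check)
for group in np.unique(labels):
    pc1 = scores[labels == group, 0]
    pc2 = scores[labels == group, 1]
    print(group, round(pc1.mean(), 2), round(pc2.mean(), 2))
```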
High-throughput sequencing of bacterial content
DNA extraction and sequencing of 16S rRNA gene regions. DNA was extracted from each sample using the FastDNA Spin Kit for soil (MP Biomedicals, Leicester, UK). A modified protocol was employed as described. 46 For each sample, the V4 and V5 regions of the 16S rRNA genes were amplified using the universal primers U515F (5′-GTGYCAGCMGCCGCGGTA-3′) and U927R (5′-CCCGYCAATTCMTTTRAGT-3′). The forward fusion primer also contained the GS FLX Titanium primer A, library key (5′-CCATCTCATCCCTGCGTGTCTCCGACTCAG-3′), and 10-bp multiplex identifiers (MID) (Roche Diagnostics, West Sussex, UK). The reverse fusion primer included the GS FLX Titanium primer B and library key (5′-CCTATCCCCTGTGTGCCTTGGCAGTCTCAG-3′) identifiers. Amplification conditions, sample pooling, preparation and sequencing on the GS FLX Titanium platform were undertaken as previously described. 47
Data analysis. Raw 16S rDNA sequences were processed in QIIME 48 version 1.5.0 using default parameters. Sequences were removed from the analysis if they were <350 or >450 base pairs, were of low quality, contained ambiguous bases, or if there were mismatches in the barcode or forward sequencing primer. The reverse sequencing primer was removed. Remaining sequences were clustered into operational taxonomic units using UCLUST 49 at 97% sequence identity. A representative sequence for each operational taxonomic unit was chosen and assigned taxonomy using the RDP classifier 50 and Greengenes (February 2011 release). 51 Sequences were rarefied to 3568 to remove bias caused by heterogeneity in the number of sequences for each sample. The Mann-Whitney U-test was used for statistical analysis of the samples. | 7,484.4 | 2016-08-24T00:00:00.000 | [
"Biology"
] |
Hybrid power plant using synchronization controller system to save electricity cost
ABSTRACT
INTRODUCTION
Hybrid energy power generation is a power plant that combines non-renewable energy with new renewable energy [1]. Photovoltaic (PV) hybrid energy power plants often have lower costs and can offer higher reliability [2]-[4]. A hybrid power plant uses both non-renewable and renewable energy sources in its operation [5]. One of the renewable resources with great potential for utilization is solar energy, which is accessible anywhere that receives sunlight [6]. The problem with hybrid power plants that combine solar energy sources and a paid electricity network is that sunlight cannot produce energy consistently from sunrise to sunset. Maximum energy can only be obtained when the maximum solar intensity is received evenly by all solar cell components [7]. The utilization of solar energy as an alternative energy source for the community is currently not the main solution for replacing paid electrical energy [8].
In addition to the problem of the short duration over which electrical energy can be obtained, PV power plants need to be installed in a location that can receive sunlight effectively and use a large number of solar cell modules, such as the roof of a building with an appropriate slope [9]-[11]. Building a hybrid energy power plant that uses both renewable and non-renewable energy is required to achieve savings in the usage of electrical energy from a paid electricity network [12], [13]. The problem experienced by large users of electrical energy, such as industries, government offices, and universities, is the high cost of electricity [14]. Research conducted in the United States shows that a campus building with an area of approximately 50,000 square feet can cost more than USD 100,000 in electricity payments each year. The largest electrical energy use is mostly for air conditioning and lighting equipment [15].
Hybrid photovoltaic systems have been shown to cut electricity costs and harmful emissions from fossil fuels [16]. To obtain adequate savings in the use of electrical energy, hybrid power plants must be designed taking into consideration the factors that allow the power plant to run continuously for a long time. A challenge that is frequently encountered is the rapid degradation of PV system components [17]. This occurs because hybrid generation planning has often not considered the aspects that affect PV energy generation, nor divided the installation into several groups so that an effective energy supply can still be obtained from the PV system even when PV performance is low. In addition, the majority of the energy drawn from electric storage batteries is consumed until the battery is depleted [18].
The aim of this work is to build a hybrid power plant with a layered control system that divides the electricity supply. In hybrid energy power plants that use the synchronization controller system (SCS), the electrical energy provided to the load is arranged in stages according to the PV energy capacity. Despite a decline in photovoltaic performance, an energy supply can still be provided to other load groups with lower electricity demand. The multilevel electrical energy supply is designed so that, when the PV is at its lowest performance, large power requirements are automatically diverted to be supplied via the paid electricity grid. The application of SCS is a new approach used in hybrid energy power generation systems to reduce the cost of using electrical energy. To undertake a financial analysis of the development of the hybrid power plant, the budget plan is generated using the pertinent national unit pricing standard [19]. The feasibility of creating a hybrid power plant that functions efficiently, conserves electricity, and operates consistently is estimated using the return on investment (ROI) approach to determine the payback period [20].
METHOD
The research method combines design analysis and simulation techniques to create a hybrid energy power plant that makes use of a synchronization control system to reduce electricity consumption on the Gorontalo State University campus. The investment needed for the construction of the hybrid power plant is determined by calculating the cost budget plan with the applicable national unit price standard. The payback period is then calculated using the return on investment (ROI) method.
Design of hybrid energy power plants
The hybrid energy power plant design process begins with analyzing the building roof area for solar cell placement and determining the type of solar cell to be used (dimensions and solar cell power capacity). The next step is to determine how many solar panels can be mounted on the building's roof. To obtain the ability of the solar cells to produce electrical power, trials and measurements of solar cell electrical energy were carried out from 6 a.m. to 6 p.m. (for 12 hours every day in September 2022). Solar irradiation measurements were also taken over a longer period (January to September 2022). The results of testing the performance of the solar cells by applying SCS in a hybrid generation circuit system are obtained by determining the value of the power that can be generated for 12 hours (6 a.m. to 6 p.m.). The cost of the electricity generated can be calculated by multiplying the value of the power generated by a solar cell by the number of solar cells that can be installed on the building's roof.
The Gorontalo State University (UNG) campus building roof area was examined utilizing the geographic information system (GIS) analytical technique [21]. The determination of the type of solar cell used takes into account the electrical energy produced, the efficiency, and the price of the solar cell. To obtain a high level of efficiency, it is recommended to use a monocrystalline solar cell because it has a high efficiency of up to 24.4%, a moderate price, and a long life span [22]. A rooftop PV system's energy potential CR, in kilowatt peak (kWp), can be determined using (1) [23], where:
- AR is the roof area of the building for solar cell installation in m2
- RCR is the ratio of the building roof usable for installing solar cell modules in m2
- CM is the solar cell's capacity in Wp
- AM is its dimensions in m2
- SO is its area in m2
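Equation (1) is referenced but not reproduced in this extract, so the sketch below is only a plausible reconstruction from the variables listed above: the number of modules that nominally fit on the usable roof area, multiplied by the module capacity. The exact published formula (and the layout constraints that reduce the real module count) may differ, and the input values are placeholders.

```python
def rooftop_capacity_kwp(ar_m2, rcr, cm_wp, am_m2):
    """Estimate rooftop PV capacity CR in kWp (assumed reading of Eq. (1)).

    ar_m2  : building roof area available for modules (AR)
    rcr    : fraction of the roof usable for modules (RCR)
    cm_wp  : capacity of one module in watt-peak (CM)
    am_m2  : area of one module (AM)
    """
    n_modules = int((ar_m2 * rcr) // am_m2)        # whole modules that nominally fit
    return n_modules, n_modules * cm_wp / 1000.0   # (module count, capacity in kWp)

# Hypothetical inputs for illustration only
modules, kwp = rooftop_capacity_kwp(ar_m2=1000.0, rcr=0.6, cm_wp=300, am_m2=1.640 * 0.992)
print(modules, "modules, about", round(kwp, 1), "kWp")
```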
Equation (2) is used to convert electrical energy (E) to kilowatt hours (kWh) [24], where:
- GSR is calculated as kWh/m2/month over a five-year period
- direct current (DC) is converted into alternating current (AC) using a derate factor
- the derate factor (D) has a value between 0.6 and 0.8; a standard derate factor of 0.75 is used in this study

Installing a single 300 Wp monocrystalline solar panel on the rooftop of the Faculty of Engineering Building, which is designated as the testing and simulation planning site, allows the production of electrical energy from solar cells to be tested. Voltage and current data are stored using a current and voltage logger circuit module with ZMPT101B and SCT-013-100 sensors [25] and an SD card as the data storage medium. To obtain the performance of the solar cells in producing electrical energy, electric light loads with regulated power are used to reveal the highest solar cell power capacity. Tests and simulations were conducted on September 1st-30th, 2022, at the UNG Faculty of Engineering. Solar intensity data were obtained from a Campbell-Stokes solar light intensity measuring station, which records the amount of sunlight that falls on the earth as well as the time and duration of the sun's exposure in one day. The Campbell-Stokes recorder is installed close to the campus, at a distance of ±120 meters, and belongs to the Bone Bolango Regency's Gorontalo Meteorology, Climatology, and Geophysics Agency. Solar cell electrical energy data for the previous months (January to August 2022) were obtained through curve analysis using the linear regression method, with the independent variable X as the solar intensity data and the dependent variable Y as the solar cell power value. To obtain accurate results, solar intensity and solar cell energy data are paired according to the same incident time (hours/minutes). Equation (3) can be used to determine the electrical energy generated by a hybrid power plant utilizing SCS from January to September 2022, from 6 a.m. to 6 p.m., during solar irradiation of the Faculty of Engineering Building. In (3):
- EPV stands for the photovoltaic output power (Wh) during solar irradiation
- effPV stands for the photovoltaic efficiency
- nPV stands for the maximum number of solar panels that may be installed on a building's roof

A simulation process using the same methodology as for the Faculty of Engineering Building is used to determine the value of electrical energy in the other buildings on the UNG Campus.
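Equations (2) and (3) are likewise referenced but not printed here. A plausible reading, based on the variables listed, is that the monthly kWh yield scales the installed capacity by the monthly solar resource (GSR) and the derate factor, and that the plant total scales the per-module output by the module count. The sketch below encodes that reading; the GSR value is a placeholder, and these functions should not be taken as the authors' exact formulas.

```python
def monthly_energy_kwh(capacity_kwp, gsr_kwh_per_m2_month, derate=0.75):
    """Assumed reading of Eq. (2): E = CR * GSR * D, giving monthly AC energy in kWh."""
    return capacity_kwp * gsr_kwh_per_m2_month * derate

def plant_output_kwh(per_module_wh, n_modules):
    """Assumed reading of Eq. (3): total PV output as per-module energy times module count."""
    return per_module_wh * n_modules / 1000.0

# 384 modules at 78.874 Wh each reproduces the ~30.287 kWh total quoted later in the text
print(plant_output_kwh(78.874, 384))                                   # ~30.29 kWh
# Placeholder GSR value, used only to illustrate the assumed form of Eq. (2)
print(monthly_energy_kwh(capacity_kwp=115.2, gsr_kwh_per_m2_month=150.0))
```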
Analysis of energy-use savings, investment value, and investment payback period
By determining the value of the electrical energy supplied by the PV system in every building on the UNG Campus every month, an analysis of the reduction of electrical energy use in the buildings is carried out. The energy value produced by the PV system per kWh is multiplied by the price according to government regulations. The value of cost savings is obtained from the electricity costs paid each month minus the electricity costs covered by the PV system.
Under Regulation Number 26 of 2021 of the Minister of Energy and Mineral Resources, concerning rooftop solar power plants connected to the electric power network, the energy generated by the PV system is converted to expenses in Indonesian Rupiah. The export and import kWh metering required by the government is used: customers of a rooftop PV system pay for electrical energy each month based on the difference between the value of imported kWh and the value of exported kWh. Given that the UNG Campus control panel uses more than 500 kW of electricity, it is required to have a Certificate of Operation Worthiness (SLO). The price of rooftop PV system electricity is set at IDR 1,440.7 per kWh. Based on the budget plan and the current national unit pricing, an analysis of the investment value needed to build a hybrid power plant is performed, and the payback period is calculated using the return on investment (ROI) methodology.
Locational conditions for the research
The Gorontalo State University campus is located at coordinates 0°33'21.51" North Latitude and 123°7'59.65" East Longitude, according to the GIS mapping findings. On the UNG campus there are one library building and four faculty buildings. The calculation of the building roof area for the installation of rooftop solar cell modules is exemplified by its application to the Faculty of Engineering Building. Based on the results of the mapping using the GIS technique, the roof area of each portion of the Faculty of Engineering Building is displayed in Table 1. The Faculty of Engineering Building has a roof area of 4,858.93 m2. The rooftop space utilized is 3,044.91 m2, or 62.66%, based on the solar cell installation pattern determined by the ratio of the building's roof area to the solar cell's dimensions. According to the layout results, 384 solar cell modules, each measuring 1640×992×35 mm, can be mounted on the roof of the Faculty of Engineering Building. Using the same planning process, the roof of the Faculty of Mathematics and Natural Sciences can support 200 solar cell modules, the roof of the Faculty of Agriculture Building 136 modules, and the rooftop of the Faculty of Engineering Building a total of 1,036 modules. The roof of the Faculty of Letters and Culture building can hold 122 modules, and the rooftop of the library building 34 modules. As a result, 876 solar cell modules can be installed across all UNG Campus buildings.
Photovoltaic-powered electricity
Based on the results of testing a single solar cell's ability to produce electrical energy when exposed to solar radiation from September 1st-30th, 2022, from 6 a.m. to 6 p.m. UTC+8 Indonesian time, using a 300 Wp monocrystalline solar cell, the total electrical energy produced by the PV system is 78.874 Wh. For the electrical energy generated by the PV system at the Faculty of Engineering, the PV system has a 16.24% efficiency and a 78.875 Wh output per module; overall, the PV system produces 30.287 kWh. Figure 1 displays the power graph of the PV system at the Faculty of Engineering in September 2022.
To predict the electrical energy generated by the solar cells in the preceding months, namely January to August 2022, a linear regression equation was used. The results of the linear regression analysis, as shown in Figure 2, give the equation Y = 0.0035X + 1389.30 with R2 = 0.8837. By entering the X data (solar intensity) for January to August 2022, the electrical energy values for January to August 2022 are obtained, as shown in Figure 3.
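The regression step can be reproduced with a few lines of code. The sketch below fits a least-squares line to paired (solar intensity, PV power) observations and then predicts power for new intensity readings; the arrays shown are placeholders, not the study's measurements, and only the functional form Y = aX + b is taken from the text.

```python
import numpy as np

# Hypothetical paired observations: solar intensity (X) and PV power (Y)
intensity = np.array([120.0, 250.0, 400.0, 560.0, 700.0, 820.0])
power = np.array([1400.0, 1480.0, 1530.0, 1590.0, 1640.0, 1680.0])

slope, intercept = np.polyfit(intensity, power, deg=1)   # least-squares fit Y = aX + b
predicted = slope * intensity + intercept

# Coefficient of determination R^2 for the fit
ss_res = np.sum((power - predicted) ** 2)
ss_tot = np.sum((power - power.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"Y = {slope:.4f}X + {intercept:.2f}, R^2 = {r_squared:.4f}")
# Predict energy for new intensity readings (e.g., from earlier months)
print(slope * np.array([300.0, 650.0]) + intercept)
```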
The results of the analysis, applied in the same way as at the Faculty of Engineering, give the electrical energy of the PV systems at the Faculty of Mathematics and Natural Sciences, the Faculty of Agriculture, the Faculty of Letters and Culture, and the library building from January to September 2022, as shown in Figure 4. The total electrical energy generated by the PV system at the UNG buildings is 426,884 kWh. The graph of the total energy of the UNG campus is shown in Figure 5.
Module for the synchronization controller system (SCS)
Controller drivers detect changes in current in the PV system. If the load current decreases to the lowest limit (according to the program on the Arduino ATmega), the current sensor signals the Arduino system to cut off the current to the automatic transfer switch (ATS) [26]. The ATS automatically switches the PV system network to the grid system and vice versa. If the current sensor detects a current at the highest setting, the ATS system moves the network from the grid system to the PV system. The electrical power source is switched from the grid system to the PV system using a circuit known as a synchronization controller. The test results of the synchronization controller system show that when the PV is at maximum performance, the Arduino Mega driver automatically turns off contactors C1-G, C2-G, and C3-G, and then turns on contactors C1-PV, C2-PV, and C3-PV. The synchronization control system then automatically switches the load supply to the PV power plant.
When the PV experiences a decrease in performance, the Arduino Mega system works gradually: it turns off the C1-PV contactor and turns on the C1-G contactor, so that the electricity supply on MCB1 automatically switches to the grid system. Likewise, if the performance of the PV power plant continues to fall, the power supply to MCB2 automatically switches to the grid system when the C2-PV contactor is turned off by the Arduino Mega driver and the C2-G contactor is turned on. If PV performance keeps declining, C3-PV turns off and C3-G turns on, automatically supplying the electric load at MCB3 from the grid system. Conversely, if the performance of the PV power plant rises again, the synchronization controller system diverts the electricity supply back to the PV system and turns off the grid system.
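The staged load-transfer logic described above runs on an Arduino in the actual installation; the following is only a language-agnostic sketch (written in Python for readability) of the same threshold idea. The contactor mapping, group ordering, and numeric thresholds are placeholders, not the deployed firmware.

```python
# Simplified model of the staged PV/grid transfer performed by the SCS.
# Groups are ordered from lowest to highest demand: as PV output falls,
# the highest-demand group (MCB1) is transferred to the paid grid first.
LOAD_GROUPS = ["MCB3", "MCB2", "MCB1"]
THRESHOLDS_A = [10.0, 20.0, 30.0]        # hypothetical PV current needed per group (amperes)

def select_sources(pv_current_a):
    """Return a mapping of each load group to 'PV' or 'GRID' for a given PV current."""
    sources = {}
    remaining = pv_current_a
    for group, needed in zip(LOAD_GROUPS, THRESHOLDS_A):
        if remaining >= needed:
            sources[group] = "PV"        # e.g., close the C-PV contactor, open C-G
            remaining -= needed
        else:
            sources[group] = "GRID"      # fall back to the paid grid for this group
    return sources

for pv_current in (65.0, 35.0, 12.0, 3.0):   # PV performance falling through the day
    print(pv_current, "A ->", select_sources(pv_current))
```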
Utilization of electricity on the UNG Campus
Based on information obtained from the finance and equipment department of Gorontalo State University, the campus used 859,240 kWh of electrical energy from January to September 2022. Based on the electrical energy intensity audit findings for the UNG campus in 2022, the electrical equipment that uses the largest amount of energy is air conditioning [27]. The cost incurred to pay for the use of electrical energy from January to September 2022 is IDR 747,605,340. Information on the use of electricity and the cost of procuring it on the campus of Gorontalo State University is shown in Figure 8.
Analysis of electricity cost savings
Electricity cost savings are analysed using the equation: savings value = cost of using electrical energy − cost of the energy produced by the hybrid energy power plant. From January through September 2022, the electrical infrastructure on the UNG Campus used 859,240 kWh at a cost of IDR 747,605,340. Between January and September 2022, the hybrid system produced 426,984 kWh of electricity. The price per kWh of electricity produced by rooftop solar energy systems in 2021 is fixed at IDR 1,440 in compliance with Minister of Energy and Mineral Resources Regulation No. 730. Therefore, the value of the energy produced by the PV system can be computed as 426,984 kWh × IDR 1,440 = IDR 614,856,960.
An examination of the project's return on investment for building hybrid power plants
According to the findings of the bill of quantity (BoQ) analysis of the construction of hybrid power plants, the total budget required for the Faculty of Engineering is IDR 6,653,590,025; for the Faculty of Letters and Culture, IDR 4,247,720,000; for the Faculty of Agriculture, IDR 3,720,675,000; for the Faculty of Mathematics and Natural Sciences, IDR 5,530,550,000; and for the library building, IDR 810,000,000. Hybrid PV construction at UNG will therefore cost a total of IDR 20,962,535,025.
Analysis of return on investment (ROI)
From January to September 2022, the hybrid PV systems generated 426,984 kWh of electrical energy. The total investment required to construct a hybrid power plant at Gorontalo State University is IDR 20,962,535,025. The return on investment (payback period) can therefore be calculated as: payback = total investment / annual value of the energy produced = 20,962,535,025 / 614,844,090 ≈ 34 years.
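The payback arithmetic is simple enough to verify in a couple of lines. The sketch below reproduces the division reported above; treating the January-September energy value as an annual figure is the paper's own simplification and is carried over here as an assumption.

```python
def payback_years(total_investment_idr, annual_savings_idr):
    """Simple (undiscounted) payback period used as the ROI measure in the text."""
    return total_investment_idr / annual_savings_idr

investment = 20_962_535_025      # IDR, total hybrid PV construction cost at UNG
annual_savings = 614_844_090     # IDR, value of PV energy used in the paper's ROI figure

print(round(payback_years(investment, annual_savings), 1))   # ~34.1 years
```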
CONCLUSION
Hybrid energy power plants using SCS in the electricity system on the Gorontalo State University Campus are a good solution for saving on electricity use. The results showed that a hybrid energy power plant with SCS could save Rp 68,301,440, or 82% of electricity bills per year, at Gorontalo State University. The return on investment (ROI) is achieved in 34 years.
The Faculty of Engineering Building has a 4,858.93 m2 roof area; the rooftop space utilized is 3,044.91 m2, or 62.66%, based on the solar cell installation pattern determined by the ratio of the building's roof area to the solar cell's dimensions. The PV system's electricity production was 20,722 kWh in January; 19,223 kWh in February; 20,770 kWh in March; 20,319 kWh in April; 20,719 kWh in May; 20,375 kWh in June; 20,289 kWh in July; 20,353 kWh in August; and 25,375 kWh in September. From January through September 2022, the Faculty of Engineering solar panels generated 188,145 kWh of electricity.
Figure 1. Graph of electrical energy generated by the PV system in September 2022. Figure 6. Controller driver system. Figure 7. Installation of the synchronization controller's load transfer mechanism: current sensor 1 is set for the highest-capacity current, which occurs at 11 a.m.-2 p.m.; current sensor 2 for medium-capacity currents, which occur at 9.00-10.59 a.m. and 2.05-3.30 p.m.; and current sensor 3 for low-capacity currents, which occur at 6.30-8.55 a.m. and 3.35-5.30 p.m. Figure 8. Consumption of electricity on the UNG Campus, January-September 2022.
Table 1. Roof area of each level of the Faculty of Engineering Building. | 4,454.8 | 2024-03-01T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
Colossal electromagnon excitation in the non-cycloidal phase of TbMnO3 under pressure
The magnetoelectric coupling, i.e., the cross-correlation between electric and magnetic orders, is a very desirable property for combining functionalities of materials in next-generation switchable devices. Multiferroics with spin-driven ferroelectricity present such a mutual interaction, concomitant with magneto- and electro-active excitations called electromagnons. TbMnO3 is a paradigmatic material in which two electromagnons have been observed in the cycloidal magnetic phase. However, their observation in TbMnO3 is restricted to the cycloidal spin phase, and the magnetic ground states that can support the electromagnon excitation are still under debate. Here, we show by performing Raman spectroscopy measurements under pressure that the lower-energy electromagnon (4 meV) disappears when the ground state changes from a cycloidal phase to an antiferromagnetic (E-type) phase. On the contrary, the magnetoelectric activity of the higher-energy electromagnon (8 meV) increases in intensity by one order of magnitude. Using microscopic model calculations, we demonstrate that the lower-energy electromagnon, observed in the cycloidal phase, originates from a higher harmonic of the magnetic cycloid, and we determine that the symmetric exchange-striction mechanism is at the origin of the higher-energy electromagnon, which survives even in the E-type phase. The colossal enhancement of the electromagnon activity in TbMnO3 paves the way to using multiferroics more efficiently for the generation, conversion and control of spin waves in magnonic devices.
INTRODUCTION
In the so-called improper multiferroics such as the perovskite manganites RMnO 3, with R being a rare-earth ion, the electric polarization is induced by a spin order breaking the spatial inversion symmetry via the spin-orbit interaction [1][2][3][4][5] or the magnetic exchange striction 6,7 . TbMnO 3 is one of the most studied multiferroic materials of this class. Such a compound is fundamental for studying novel couplings between microscopic degrees of freedom such as spin and charge 8 . In 2006, Pimenov et al. 9 succeeded in exciting spin waves with the electric-field component of terahertz (THz) light in TbMnO 3 . They called these excitations electromagnons, excitations theoretically put forth by Smolenskii and Chupis 10 more than 20 years earlier. They also showed that the electromagnons could be suppressed by applying a magnetic field, directly demonstrating magnetic-field-tuned electric excitations 9,11 . Their signatures have been evidenced by a large number of techniques: THz 9,11 , infrared (IR) 12,13 and Raman 14 spectroscopies and inelastic neutron scattering 15 . The interest in electromagnons extends to applications. Tuning the magnetic properties of multiferroics is the first step towards building a new technology based on spin waves, called magnonics, using these promising materials 16,17 . In particular, it has been demonstrated that the electromagnons can be exploited to directly manipulate atomic-scale magnetic structures in the rare-earth manganites using THz optical pulses 18 .
Considerable efforts have been devoted to determining the origin of the electromagnons. Katsura et al. 19 first proposed a model for a dynamical coupling based on the inverse Dzyaloshinskii-Moriya (DM) interaction, originally developed to explain the static ferroelectricity in TbMnO 3 1 .
However, the light polarization predicted to observe such excitations is in contradiction with the two electromagnons observed in IR 13 and THz spectroscopies 11 . The two electromagnons are observed around 60 cm −1 (7.4 meV) and 35 cm −1 (3.7 meV) in the cycloidal magnetic phase. The higher-energy electromagnon has already been explained as a zone-edge magnon activated purely by the magnetostriction mechanism 13, 20, 21 , but the origin of the lower-energy electromagnon is still under debate. Two models have been proposed, one based on the anharmonic component of the cycloidal ground state 21 , and another assuming an anisotropic magnetostriction coupling 20 .
In this work, we investigate the dynamical part of the magnetism in TbMnO 3 under hydrostatic pressure to determine the exact spin ground state responsible for the electromagnon activity. At ambient pressure and below T C = 28 K, the Mn magnetic moments exhibit an incommensurate cycloidal magnetic order propagating along the b direction with a wave vector Q C = (0, 0.28, 1) 3 . The magnetic order combined with an inverse DM interaction induces an electrical polarization along the c axis 7,22,23 . At around 5 GPa and 10 K, the spin ground state changes from this cycloidal state to the E-type antiferromagnetic state with a wave vector Q E = (0, 0.5, 1) 24,25 , in which a giant spin-driven ferroelectric polarization has been observed along the a axis 26,27 . Tracking the electromagnons, which are low-energy excitations with small Raman intensity, requires the development of dedicated instrumentation to optically study them under pressure. The experimental setup is based on a diamond anvil cell in a non-collinear scattering geometry. It allows the collection numerical aperture to be increased and reduces the background signal, making it possible to measure weak Raman signals at low energy. We have then been able to measure the electromagnon Raman signal of TbMnO 3 down to 10 cm −1 and up to 8 GPa. Figure 1a shows the low-energy part of the A g Raman spectra at 11 K for different pressures with light polarizations parallel to the a axis (for more details see Methods). At 0 GPa, two low-energy excitations, associated with electromagnons, are observed at 60 cm −1 (e 2 ) and 35 cm −1 (e 1 ).
Experiments
The mode at 60 cm −1 corresponds to the zone-edge magnon of the cycloid activated by the pure exchange-striction mechanism 13,21 . Even if its origin is still under debate, the mode at 35 cm −1 has been attributed to a magnon located at 2Q C away from the zone edge, which corresponds to a replica of the e 2 mode and hence is referred to as a twin electromagnon 20,21 . As seen in the inset of Fig. 1, the zone-edge electromagnon activity increases under pressure; the electric polarization measured under pressure is included in Fig. 2. We find that both sets of data follow a similar behavior.
The experimental polarization reaches a maximum value of 1.1 µC.cm −2 at 9 GPa, an order of magnitude larger than the value at ambient pressure. It is clear that the continuous increase of the zone-edge electromagnon activity is correlated with the emergence and the hardening of the electric polarization along the a axis observed in the E-type phase. This underlines that the mechanism of the spin-driven ferroelectricity is also involved in the origin of the electromagnon activity in this phase.
Theory
To shed light on the physical mechanism of the electromagnons in the E-type phase, we developed a microscopic spin model. To generate the high-pressure phase we set J b = 1.20 meV and ∆J ab = 0.04 meV and obtain the commensurate E-type antiferromagnetic phase with spins pointing in the b direction.
The electromagnon and magnon excitations have been calculated with this microscopic spin model.
In the E-type phase, this coupling term shifts oxygen atoms along the x and y axes and gives rise to a ferroelectric polarization along the a axis, as observed experimentally 26,27 . We first prepare spin configurations at low temperature by Monte-Carlo thermalization, and then let the spin configuration relax through a sufficiently long time evolution using the Landau-Lifshitz-Gilbert (LLG) equation. We apply a short pulse of electric field E ω a to the relaxed system at t = 0 and trace the time evolution of the ferroelectric polarization by numerically solving the LLG equation using the fourth-order Runge-Kutta method (see Supplementary Information). The Fourier transform of the obtained time profiles of the spin-dependent electric polarization gives the electromagnon absorption spectrum.
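The numerical protocol described here (relax a spin configuration, kick it with a short electric-field pulse, integrate the LLG equation with fourth-order Runge-Kutta, and Fourier-transform the induced polarization) can be illustrated schematically. The sketch below uses a toy spin chain with a generic exchange-only Hamiltonian and a placeholder polarization observable; it is not the authors' model of TbMnO3, whose exchange, anisotropy, and magnetoelectric coupling terms are given in their Supplementary Information.

```python
import numpy as np

J = 1.0          # toy nearest-neighbour exchange (arbitrary units)
ALPHA = 0.05     # Gilbert damping
N = 16           # number of spins in the toy chain

def effective_field(spins):
    """H_eff from a simple Heisenberg chain; a stand-in for the real spin Hamiltonian."""
    return J * (np.roll(spins, 1, axis=0) + np.roll(spins, -1, axis=0))

def llg_rhs(spins):
    """Landau-Lifshitz-Gilbert right-hand side dS/dt for unit spins."""
    h = effective_field(spins)
    precession = -np.cross(spins, h)
    damping = -ALPHA * np.cross(spins, np.cross(spins, h))
    return precession + damping

def rk4_step(spins, dt):
    k1 = llg_rhs(spins)
    k2 = llg_rhs(spins + 0.5 * dt * k1)
    k3 = llg_rhs(spins + 0.5 * dt * k2)
    k4 = llg_rhs(spins + dt * k3)
    s = spins + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return s / np.linalg.norm(s, axis=1, keepdims=True)   # keep |S| = 1

def polarization(spins):
    """Placeholder exchange-striction-like observable, P ~ sum of S_i . S_{i+1}."""
    return np.sum(spins * np.roll(spins, -1, axis=0))

rng = np.random.default_rng(3)
spins = rng.normal(size=(N, 3))
spins /= np.linalg.norm(spins, axis=1, keepdims=True)
spins[0] += np.array([0.2, 0.0, 0.0])                    # crude stand-in for the field pulse at t = 0
spins[0] /= np.linalg.norm(spins[0])

dt, steps = 0.02, 4096
p_t = np.empty(steps)
for i in range(steps):
    spins = rk4_step(spins, dt)
    p_t[i] = polarization(spins)

spectrum = np.abs(np.fft.rfft(p_t - p_t.mean()))         # peaks mark the excitation frequencies
freqs = np.fft.rfftfreq(steps, d=dt)
print(freqs[np.argmax(spectrum)])
```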
DISCUSSION
The calculated electromagnon spectra for the two phases are displayed in Fig. 3c. We obtain two modes at 30 cm −1 and 80 cm −1 in the bc-plane cycloidal phase. These two modes correspond to the twin (e 1 ) and the zone-edge (e 2 ) electromagnons, respectively. On the other hand, we obtain only one mode at 60 cm −1 in the E-type phase which corresponds to the down shifted e 2 electromagnon. Aguilar et al. 13 argued that the exchange-striction mechanism does not work in the collinear spin phase. Since the E-type phase has long been believed to be a collinear order, we expect that the electromagnon excitation mediated by the exchange-striction mechanism should disappear in the E-type phase. However, our calculation reproduced a large electromagnon resonance in the E-type phase. This can be understood from the non collinear nature of the E-type order. The Mn spins in the E-type phase are not perfectly collinear but are considerably canted to form a depressed commensurate ab-cycloid as found in previous theoretical studies 28,29 . Importantly, this canted E-type phase has a magnetic periodicity of Q E = (0, 0.5, 1) that fits with a crystal periodicity of the orthorhombically distorted perovskite lattice of TbMnO 3 . Therefore, the E-type order does not contain any higher harmonic components, which explains the absence of the lower-lying mode observed in the incommensurate cycloidal phase because it originates from the Brillouin-zone folding due to the magnetic higher harmonics of the cycloidal order 21 . As a result, the electromagnon spectrum has a single peak corresponding to the higher-lying zone-edge mode only.
These results are in good agreement with our experimental data shown in Fig. 1. In the E-type phase, the e 1 electromagnon disappears, whereas the e 2 electromagnon has its frequency downshifted. This evidences that the anharmonicity is a mandatory condition for the emergence of the twin electromagnon. The fact that the twin electromagnon (e 1 ) disappears before the transition may be due to a weakening of the anharmonicity of the cycloid and/or to the spatial coexistence of the cycloid and the E-type state in the vicinity of the transition.
In conclusion, we investigated the dynamical magnetoelectric properties of TbMnO 3 under hydrostatic pressure with both Raman spectroscopy and microscopic model calculations. We find that the lower-lying e 1 electromagnon in the anharmonic cycloidal order disappears in the E-type phase for which anharmonic components are absent. Our finding provides the evidence that the activation of the low-energy electromagnon requires an anharmonicity of the cycloid in TbMnO 3 .
As in the case of the multiferroic BiFeO 3 30,31 , the anharmonicity is the key to understanding the finest properties of the cycloidal multiferroics. We have also shown that an electrical polarization induced by the exchange-striction mechanism increases the activity of the zone-edge electromagnon by one order of magnitude. Such conditions have been realized at ambient pressure in strained TbMnO 3 thin films 32 , in which enhanced electromagnon excitations might be observed, providing more efficient building blocks for magnonic devices.
Samples
Single crystals of TbMnO 3 were grown by the floating-zone method and aligned using Laue X-ray back-reflection. The crystals were polished to obtain the high surface quality required for optical measurements. TbMnO 3 crystallizes in the orthorhombic symmetry (Pbnm) with lattice parameters a = 5.3 Å, b = 5.86 Å, c = 7.49 Å. 33 TbMnO 3 becomes antiferromagnetic below the Néel temperature T N = 42 K 34 . In this phase the Mn magnetic moments form an incommensurate sinusoidal wave with a modulation vector along the b axis. The ferroelectric order appears below T C = 28 K, where the Mn magnetic moments transition to an incommensurate cycloidal phase 3,35 . In this phase the spin of Mn 3+ rotates in the bc plane, and the ferroelectric polarization appears along the c axis. We probed one TbMnO 3 single crystal with an ac plane.
Light scattering
Raman scattering measurements were performed in a diamond anvil cell equipped with a membrane for changing the hydrostatic pressure. The fluorescence of ruby balls is used as a pressure gauge. The pressure-transmitting medium is helium. The incident laser spot is about 20 µm in diameter. We used a triple spectrometer Jobin Yvon T64000 equipped with a liquid-nitrogen-cooled CCD detector and a solid-state laser with a line at 561 nm.
DATA AVAILABILITY
The authors declare that all data supporting the findings of this study are available within the paper and its supplementary information files. | 2,772 | 2018-11-20T00:00:00.000 | [
"Physics"
] |
A food color-based colorimetric assay for Cryptococcus neoformans laccase activity
ABSTRACT Cryptococcus neoformans is a fungal pathogen that causes cryptococcosis primarily in immunocompromised patients, such as those with HIV/AIDS. One survival mechanism of C. neoformans during infection is melanin production, which is catalyzed by laccase and protects fungal cells against immune attack. Hence, the comparative assessment of laccase activity is useful for characterizing cryptococcal strains. We serendipitously observed that culturing C. neoformans with food coloring resulted in the degradation of some dyes with phenolic structures. Consequently, we investigated the color changes of the food dyes metabolized by C. neoformans laccase and, by using this effect, explored the development of a colorimetric assay to measure laccase activity. We developed several versions of a food dye-based colorimetric laccase assay that can be used to compare the relative laccase activities of different C. neoformans strains. We found that phenolic color degradation was glucose-dependent, which may reflect changes in the reduction properties of the media. Our food color-based colorimetric assay has several advantages over the commonly used 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) assay for determining laccase activity, including lower cost, irreversibility, and no need for constant monitoring. This method has potential applications in the bioremediation of water pollutants in addition to its use in determining laccase virulence factor expression. IMPORTANCE Cryptococcus neoformans is present in the environment, and while infection is common, disease occurs mostly in immunocompromised individuals. C. neoformans infection in the lungs results in symptoms like pneumonia, and cryptococcal meningitis occurs if the fungal infection spreads to the brain. The laccase enzyme catalyzes the melanization reaction that serves as a virulence factor for C. neoformans. Developing a simple and less costly assay to determine the laccase activity of C. neoformans strains can be useful for a variety of procedures, ranging from studying the relative virulence of cryptococci to environmental pollution studies.
Laccases catalyze a broad spectrum of reactions and serve as an important virulence factor for C. neoformans that is expressed in its cell wall (2). Since laccases are versatile, in the sense that they can catalyze the oxidation of various substrates, they are useful for a range of environmental applications. Laccases can oxidize polyphenolic compounds and iron, and the functions of the enzyme are regulated through signal transduction pathways (2). Laccase has even more diverse functions since a broad range of organisms utilize the laccase enzyme, including fungi, plants, and insects (2).
Specifically, in C. neoformans, the laccase enzyme converts iron (II) to iron (III), which lowers the susceptibility of C. neoformans cells to hydroxyl radicals that are generated in vitro (3). Regulation of laccase expression has been found to possibly be strain-dependent, because disruptions of a G-protein subunit homolog affect laccase activity in wild-type C. neoformans, but this was not observed in a serotype A H99 strain (2). More research studies are needed to understand the molecular complexities of the laccase enzyme serving as a virulence factor of C. neoformans. Vacuolar proton pump ATPases were found to play a role in laccase activity by testing strains with mutated genes that encode for ATPases, so targeting these proton pumps could be useful for drugs that aim to control laccase activity (2).
Laccase plays an important role in virulence by catalyzing the generation of melanin in C. neoformans, which protects the fungus from reactive oxygen species and nitrogen oxidants produced by phagocytic cells (3). Fungal laccase may also contribute to virulence by reducing the formation of antimicrobial hydroxyl radicals (3). Laccase is also involved in the generation of prostaglandins by fungal cells that can affect local inflammatory responses (4). Specifically, laccase is induced during glucose starvation and increased temperature (30°C) and helps moderate fungal stress resistance (2). Studying laccase can provide insights into C. neoformans pathogenesis and its mechanisms of defense in the phagolysosome. The C. neoformans laccase is known to have broad structural activity, producing pigments of a spectrum of colors, including those similar to melanin, following the oxidation of phenols and catechols (5).
Laccases are known to destroy dyes and are used in food preparation, industry, and environmental applications such as reducing the environmental impact of synthetic dyes (6, 7). Factory production of non-biodegradable synthetic dyes results in contaminated wastewater, and fungal laccases can be used to degrade the colors in these bodies of water (6). The by-products of laccase catalysis are lower in toxicity than those from alternative chemical methods for wastewater purification, and laccases have broad specificity, which allows them to break down a variety of synthetic dyes (6). Additionally, a common substrate for laccase is phenol, a toxic pollutant which is dangerous to humans (6). The use of laccases as versatile agents of bioremediation is emerging as a sustainable option to degrade chemical pollutants, and there has been interest in investigating the catalytic mechanisms of laccase to optimize its function in bioremediation (8).
In this study, we followed up on a serendipitous observation to investigate the use of mangosteen-colored food dye in a fast, efficient, low-cost colorimetric assay for determining laccase activity in C. neoformans cultures. Using absorbance spectroscopy, we could compare several strains of C. neoformans in various culture conditions to both detect and quantify relative laccase activity. Low-cost and facile assays for laccase activity have many potential applications in environmental industries, including purification of wastewater. These assays are also applicable when comparing the virulence of different strains by measuring relative laccase activity, an important fungal virulence factor.
Yeast strains and culture conditions
C. neoformans species complex serotype A strain H99 was obtained from John Perfect in Durham, NC (9). The lac1Δ mutant is from the 2007 Lodge library (Fungal Genetics Stock Center) (10). The H99 GFP strain was obtained from the laboratory of Dr. Robin May at the University of Birmingham, United Kingdom (11). The CNAG 01373Δ, CNAG 06646Δ, and CNAG 01029Δ strains are KN99α mutants obtained from a previously published gene knockout library (12). The KN99α strain was obtained from the Heitman Lab at Duke University Medical Center, Durham, North Carolina (13). Additional C. neoformans strains used were 24067, Mu-1, and B3501. Other yeast strains used for comparisons were Candida albicans 90067, Saccharomyces cerevisiae S188C, C. gattii R265, and C. gattii WM179.
Medium preparation
Minimal media (MM) was prepared with 15 mM dextrose, 10 mM MgSO4, 29.3 mM KH2PO4, 13 mM glycine, and 3 µM thymine-HCl dissolved in water, supplemented with either 2.7 g/L or 20 g/L glucose, vacuum-sterilized via a SteriCup Quick Release filter, and stored at room temperature. Yeast extract peptone dextrose (YPD), manufactured by Difco Laboratories, was prepared according to the manufacturer's protocol.
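When scaling the MM recipe to a different batch volume, each component's mass follows from mass = molarity × volume × molecular weight. The minimal sketch below makes this arithmetic explicit; molecular weights are not given in the text, so the dextrose value used in the example is a supplied assumption, not a figure from this study.

```python
# Helper for scaling the minimal-media recipe: mass needed for a target molarity.
def grams_needed(molarity_mM, volume_L, mw_g_per_mol):
    """mass (g) = molar concentration (mol/L) * volume (L) * molecular weight (g/mol)."""
    return (molarity_mM / 1000.0) * volume_L * mw_g_per_mol

# Example: 15 mM dextrose in a 1 L batch, with the molecular weight supplied by the user.
mw_dextrose = 180.16   # g/mol (anhydrous D-glucose; assumed value, not from the text)
print(f"Dextrose: {grams_needed(15, 1.0, mw_dextrose):.2f} g per litre of MM")
```

For dextrose this works out to about 2.7 g per litre, consistent with the regular-glucose MM concentration quoted above.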
Comparison of color change in H99 and lac1Δ mutant cultures
H99 and lac1Δ mutant strains were seeded at a concentration of 10^4 cells/mL in 3 mL of MM in 12-well tissue culture plates. The following colors of Limino brand food coloring (Amazon Corp. Limino Baiyun, Guangzhou, China) were used: Strawberry, Tangerine, Lemon, Lime, Purple Cabbage, Blueberry, and Mangosteen. These food colorings contained various food dyes and sorbitol (CAS No. 50-70-4), water (CAS No. 7732-18-5), glycerin (CAS No. 56-81-5), carboxymethylcellulose sodium (CAS No. 9004-32-4), and potassium sorbate (CAS No. 590-00-1). The food dyes present in the food coloring were Acid Red 27, Food Red 7, Food Blue No. 1, Food Yellow 3, and Food Yellow No. 4. No lot numbers were provided for the food coloring. Ten wells contained MM supplemented with 10 µL food coloring, one well contained uncolored MM, and one well contained uncolored YPD. Color-change observations were recorded 7 days after the plate was either left on the bench at room temperature or placed on a 120 rpm shaker in a 30°C incubator, and were measured either by the unaided eye or via Spectramax iD5.
Different concentrations of glucose were tested with high-glucose minimal media conditions
Two H99 cultures were seeded in MM supplemented with mangosteen color in Falcon 14-mL polypropylene round-bottom tubes (REF 352059). One tube was prepared with MM at its regular glucose concentration of 2.7 g/L, while the second tube was prepared with MM at an elevated glucose concentration of 20 g/L. Culture tubes were placed in the 30°C incubator with shaking, and color change observations were recorded after 7 days.
Kinetic assay for laccase activity
Twenty-well tissue culture plates were seeded with 10^6 C. neoformans cells in 1 mL of MM with 10 g/L glucose in each well. Three wells were immediately treated with a 1X stock of the NuPAGE antioxidant reagent (Product Number: NP0005), and the plate was then incubated at 30°C with shaking overnight. This antioxidant has a proprietary formulation of N,N-dimethylformamide. The next morning, antioxidant reagent was added to the last three wells and incubated for another hour to observe a possible color change from blue to green or back to purple.
Addition of antioxidants to colorimetric assay
96-well tissue culture plates were seeded with 10^4 C. neoformans cells in 100 µL volumes of MM. The 1X stock of commercial antioxidant was added either at 0 hours or after 24 hours of observing the color change.
24-hour assay for laccase activity
Every culture of interest was grown in regular MM and left in a 30°C incubator with rotation at 120 rotations per minute. A 1.5 mL volume of each culture was centrifuged at 2,300 g for 5 minutes in microcentrifuge tubes and then resuspended in 1.5 mL of 10 g/L glucose MM. A volume of 150 µL of the resuspended culture was pipetted into 10 wells of a 96-well plate, so each culture was measured with 10 replicate wells. Then, 7.5 µL of a 1:10 dilution of Mangosteen food coloring in water (100 µL food coloring in 1 mL of H2O) was added to each well. At 0 hours, 50 µL of the supernatant was placed in another 96-well plate for the initial 520 nm absorbance measurement with the Spectramax iD5 (Baltimore, Maryland), and at 24 hours, another 50 µL of the supernatant was placed in another 96-well plate for the 24-h measurement with the Spectramax iD5. For the 24 hours, the plate was incubated in a 30°C incubator on a 120 rpm shaker. We compared the laccase activity of the following C. neoformans strains: H99, H99 GFP, KN99α, CNAG 01373Δ, CNAG 06646Δ, and CNAG 01029Δ. Additional trials were conducted comparing the following C. neoformans strains: H99, 24067, Mu-1, and B3501, to the following other yeast strains: C. albicans 90067, S. cerevisiae S188C, C. gattii R265, and C. gattii WM179.
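The readout of this assay is simply the change in A520 between 0 h and 24 h averaged over the ten replicate wells of each strain, so the bookkeeping can be scripted. The sketch below is a minimal, hypothetical example; the strain names, well layout, and absorbance values are placeholders rather than data from this study.

```python
# Minimal sketch: summarize the 24-h colorimetric laccase assay.
# Assumes each strain has paired A520 readings at 0 h and 24 h for its replicate wells.
from statistics import mean, stdev

def delta_a520(a0_readings, a24_readings):
    """Return the per-well change in absorbance (24 h minus 0 h)."""
    return [a24 - a0 for a0, a24 in zip(a0_readings, a24_readings)]

# Placeholder readings for two hypothetical strains (10 replicate wells each).
plate = {
    "H99":   {"t0": [0.52] * 10,
              "t24": [0.31, 0.33, 0.30, 0.32, 0.29, 0.31, 0.34, 0.30, 0.32, 0.31]},
    "lac1d": {"t0": [0.52] * 10,
              "t24": [0.51, 0.52, 0.50, 0.53, 0.52, 0.51, 0.52, 0.50, 0.51, 0.52]},
}

for strain, wells in plate.items():
    d = delta_a520(wells["t0"], wells["t24"])
    # A large negative dA520 indicates loss of the red dye, i.e. laccase activity.
    print(f"{strain}: mean dA520 = {mean(d):+.3f} (sd {stdev(d):.3f})")
```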
2,2′-azino-bis (3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) assay
The H99 strain was seeded in MM cultures. Each culture used was washed twice with phosphate-buffered saline (PBS). Then, a 1:100 dilution, or a 1:1 dilution for some wells, was prepared with MM. A 1 mL volume of 20 mM ABTS solution was prepared and filter-sterilized. ABTS was added for a final concentration of 1 mM ABTS in the cultures. Cultures were incubated for 24 hours in a 30°C incubator on a 120 rpm shaker. Initial and 24-h absorbance measurements were taken at 734 nm with the Spectramax iD5.
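The ABTS addition is a simple C1V1 = C2V2 dilution from the 20 mM stock to a roughly 1 mM final concentration. The helper below makes the volumes explicit; the per-well culture volume is an illustrative assumption rather than a value from the protocol.

```python
# Dilution helper for the ABTS assay: volume of stock needed for a target final concentration.
def stock_volume_ul(c_stock_mM, c_final_mM, v_final_ul):
    """C1*V1 = C2*V2 solved for V1 (stock volume to add, in µL)."""
    return c_final_mM * v_final_ul / c_stock_mM

v_culture_ul = 1000.0          # assumed ~1 mL of culture per tube/well (illustrative)
v_stock = stock_volume_ul(20.0, 1.0, v_culture_ul)
print(f"Add {v_stock:.1f} µL of 20 mM ABTS per {v_culture_ul:.0f} µL culture "
      f"for a ~1 mM final concentration.")
# -> ~50 µL of stock per 1 mL; strictly, the final volume is culture + stock,
#    so the exact final concentration is 20*50/1050 ≈ 0.95 mM.
```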
Statistical analysis
The statistical tests conducted for each absorbance measurement experiment are denoted in their respective figure descriptions, with tests for multiple hypotheses. Two-way ANOVA with Tukey's comparison analyses were conducted using RStudio Version 2023.09.1+494 and GraphPad Prism Version 10.0.2 (171). Statistical comparisons were made both with all the data for each individual experiment replicate and with data from all trials pooled.
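The two-way ANOVA followed by Tukey's comparisons can also be reproduced in Python (the authors used RStudio and GraphPad Prism). The sketch below assumes a tidy table with one row per well and columns for strain, glucose condition, and the 24-h change in A520; the column names and example values are placeholders, not data from this study.

```python
# Sketch of the two-way ANOVA + Tukey post hoc analysis in Python
# (equivalent in spirit to the RStudio/GraphPad analysis described above).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical tidy data: one row per well.
df = pd.DataFrame({
    "strain":   ["H99", "H99", "H99", "lac1d", "lac1d", "lac1d"] * 2,
    "glucose":  ["10 g/L"] * 6 + ["20 g/L"] * 6,
    "delta_a520": [-0.20, -0.22, -0.18, -0.01,  0.00, -0.02,
                   -0.30, -0.28, -0.31, -0.02, -0.01,  0.00],
})

# Two-way ANOVA with an interaction term.
model = smf.ols("delta_a520 ~ C(strain) * C(glucose)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey's HSD on the strain factor (it could equally be run on the glucose factor
# or on the strain:glucose combinations).
print(pairwise_tukeyhsd(df["delta_a520"], df["strain"]))
```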
Phenolic dyes are degraded in the C. neoformans culture
The observation that C. neoformans degrades some food dyes was made serendipitously. While searching for conditions to study the growth of C. neoformans, we noted difficulty in the measurement of fungal growth caused by turbidity when comparing cultures grown in MM and YPD. We hypothesized that the problem arose because MM was clear, while YPD had a yellowish color. We thought that adding food coloring to MM wells would facilitate the interpretation of growth curve absorbance data. However, after 7 days of culture growth, we noticed that a color change had occurred in multiple wells. Upon closer inspection, we noted color changes only in dyes containing a red component, whose chemical structures contained phenolic groups, and that the resulting colors resembled the original dye with only the phenolic (red) components removed (Fig. 1 and 2C). When the absorbance was measured, we observed a loss of the red color in wells with the C. neoformans culture that had degraded the red dye (Fig. 2A). Specifically, these wells contained the phenolic dyes Food Red 7 and Acid Red 27 (Table S1). We hypothesized that the dye degradation was a result of laccase activity, given the broad substrate specificity of cryptococcal laccases, and established this by replicating the experiment with the lac1Δ H99 mutant and finding no color change after 7 days (14, 15). Taken together, these data suggest that fungal laccases can degrade phenolic food dyes, and we hypothesized this phenomenon could be harnessed for a colorimetric laccase activity assay.
Glucose concentration, culture agitation, and cell density affect laccase activity
We sought to optimize culture conditions for detecting laccase expression and activity, attempting to determine conditions which would provide robust results while remaining cost- and time-efficient for any basic laboratory. We found that a 7-d incubation with agitation at 30°C was required to observe color changes with a standard MM preparation (Fig. 2C).
We observed that high glucose concentrations (10-20 g/L) induced color change even at room temperature and without agitation at high initial cell densities. Additionally, using glucose concentrations of 10 g/L and 20 g/L in MM allowed us to observe a visible color change more quickly than by using regular-glucose MM with any cell concentration from 10^3 to 10^7 cells/mL in a plate with agitation at 30°C (Fig. 3B). We found that high glucose concentrations repressed laccase expression (Fig. 3A). Through an examination of glucose-dependence, we observed increased dye degradation in wells with cultures resuspended in MM with higher glucose concentrations (Fig. 4A). Using a higher glucose concentration allowed us to view color change effects more quickly compared to wells with lower glucose concentrations, and with a 24-h assay we were not able to see any color change in regular-glucose MM conditions (Fig. 4B).
With the combined data containing all individual trials of the glucose dependence experiments measuring the absorbance at 520 nm, a statistically significant difference was found between the 20 g/L glucose MM condition and the 0 g/L glucose MM condition (P = 0.002). An ANOVA test for glucose dependence was also conducted with the change in absorbance measurements between 24 hours and 0 hours of exposure to food coloring, which revealed significant differences between the 10 g/L and 0 g/L conditions (P = 0.0002), 20 g/L and 0 g/L conditions (P < 0.00001), 20 g/L and 10 g/L conditions (P < 0.00001), and 20 g/L and 2.7 g/L conditions (P < 0.00001). Although not many significant differences were observed when analyzing the absorbance measurements themselves, analyzing the change in absorbance over the 24-hour timeframe showed differences between the glucose conditions. Overall, it was easier qualitatively to observe differences in laccase activity across different glucose conditions than it was to establish the quantitative differences, which we suspect could be due to the sensitivity of the equipment used.
FIG 1 Chemical structures of the food dyes, adapted from reference (22) and redrawn using ChemDraw software (18). (A) Food Red 7. (B) Acid Red 27. (C) Food Blue 1. (D) Food Yellow 3. (E) Food Yellow 4. (F) Oxidation by the laccase mechanism.
Laccase dye degradation is irreversible
One potential disadvantage of current laccase activity assays, and specifically of the commonly used ABTS assay, is that the observed color change is not permanent and must be measured within a specific timeframe. To investigate whether our colorimetric assay was permanent, we treated wells with commercially available Thermo Fisher antioxidant before and after the observed color change. We found that adding antioxidants after the reaction did not revert the color, nor did leaving the sample on a benchtop long term over a span of 3 weeks (Fig. 5B). This suggests that the reaction was irreversible, allowing samples to be read at the investigator's convenience. Interestingly, when treating the culture with antioxidants at the start of the experiment, we observed a new green color. A spectrum scan revealed a new peak at ~420 nm, suggesting the formation of an alternative product (Fig. 5A).
Developing a cheap and accessible laccase activity assay
Our next aim was to minimize the necessary reagents and instruments in this assay (Fig. S1). The optimized assay protocol uses a concentration of 10^6 cells/mL yeast seeded in MM with 7 µL of 1:100 diluted food coloring in a 96-well plate. The plate is then incubated at 30°C with 120 rpm agitation to observe color change in around 3-7 days.
To quantify the extent of red food coloring degradation in 24 hours, we designed an assay using culture resuspended in 10 g/L glucose MM. This quicker assay involves centrifuging the culture of interest and resuspending cells in 10 g/L glucose MM, with 100 µL of the culture in each well of a 96-well plate and 7.5 µL of a 1:10 dilution of food coloring in water added to each well. Absorbance measurements at 520 nm at 0 and 24 hours can be used to compare differences between strains or other conditions to quantify changes in laccase activity (Fig. 6). Linear regression analysis allowed us to compare rates of dye degradation across various C. neoformans strains, with increased rates of degradation in an H99 GFP strain (Fig. 6B and 7). Differences between multiple C. neoformans strains and other yeast strains with varying levels of laccase activity were compared with the 24-h assay (Fig. 8).
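The rate comparison amounts to fitting a straight line to A520 versus time for each strain and comparing slopes. A minimal sketch is shown below; the time points and absorbance values are invented for illustration and are not data from this study.

```python
# Sketch: compare dye-degradation rates by linear regression of A520 versus time.
from scipy.stats import linregress

timepoints_h = [0, 6, 12, 24]                    # assumed sampling times (illustrative)
a520 = {
    "H99":     [0.52, 0.46, 0.40, 0.30],         # placeholder absorbance values
    "H99 GFP": [0.52, 0.42, 0.33, 0.20],
}

for strain, values in a520.items():
    fit = linregress(timepoints_h, values)
    # A more negative slope means faster loss of the red dye (higher laccase activity).
    print(f"{strain}: slope = {fit.slope:+.4f} A520/h, r^2 = {fit.rvalue**2:.3f}")
```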
With the combined data containing all individual trials of the first set of experiments comparing the absorbance at 520 nm across different C. neoformans strains, we observed overall significant differences in absorbance between KN99α and 01029 (P = 0.0006), KN99α and 01373 (P = 0.00005), KN99α and 06646 (P = 0.000007), and H99 and 06646 (P = 0.02). In one of our trials, we observed significant differences in the absorbance between the KN99α strain and its mutant strains 01029 (P = 0.009), 01373 (P = 0.003), and 06646 (P = 0.03). Significant differences between the strains tested within another trial are described in the figure legend of Fig. 6B. While we observed a more noticeable difference in color change between the H99 and H99 GFP strains (Fig. 7), this was not always reflected in the statistical analyses, particularly with the combination of multiple trials, possibly due to a loss of power with the analysis.
Advantages of food dye-based colorimetric assay relative to the ABTS assay
We compared the standard ABTS assay to our colorimetric assay. The H99 GFP well with the higher concentration of the culture showed a color change within 5 minutes of adding the ABTS solution to the well, so our initial photograph shows this color change already (Fig. 9A). However, this color change was impermanent since it was not clearly evident after 24 hours. This places a limitation on the ABTS experiment because it is possible to miss the window of color change and not obtain the absorbance measurements or photographs needed. Additionally, with our ABTS assay, we observed the clearest color change mostly within the H99 GFP strain and not the H99 strain (Fig. 9A). With the 24-h colorimetric assay, we can observe a clear color change within 24 hours for multiple C. neoformans strains, and this color change exhibits permanence, allowing for absorbance measurements or photographs to be taken post-color change at the investigator's convenience. The hues of the color change in the ABTS assay were relatively faint and more difficult to see with the unaided eye as compared to the colorimetric assay. Additionally, there is a significant difference in the cost of materials for the ABTS assay compared to the food coloring assay. The ABTS solution was sold by Roche Life Science Products in a quantity of 300 mL for $349.00 USD at the time of this study (23). The food coloring used in this experiment is available on Amazon.com, costing $2.66/fl oz (24). Purchasing an equivalent amount of about 300 mL of food coloring would cost about $26.99, showing that the food coloring method is a significantly cheaper way to confirm laccase activity (25). The ABTS assay costs about $1.16 per assay, while the colorimetric assay costs about $0.02 per assay, or even less if using higher dilutions for the optimized 24-h assay.
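The reagent-cost comparison follows from simple unit arithmetic. The short sketch below reproduces it using only the prices quoted above; the millilitres-per-fluid-ounce conversion factor is the only added constant, and the per-assay cost depends on the reagent volume one chooses to count per assay.

```python
# Back-of-the-envelope cost comparison between the ABTS and food-coloring reagents.
abts_price_usd, abts_volume_ml = 349.00, 300.0       # Roche ABTS solution (as priced above)
dye_price_usd_per_floz = 2.66                         # food coloring, per fluid ounce
ml_per_floz = 29.5735

abts_cost_per_ml = abts_price_usd / abts_volume_ml    # ≈ $1.16 per mL
dye_cost_per_ml = dye_price_usd_per_floz / ml_per_floz
dye_cost_300_ml = dye_cost_per_ml * abts_volume_ml    # cost of an ABTS-equivalent volume

print(f"ABTS:          ${abts_cost_per_ml:.2f}/mL, ${abts_price_usd:.2f} per 300 mL")
print(f"Food coloring: ${dye_cost_per_ml:.3f}/mL, ${dye_cost_300_ml:.2f} per 300 mL")
```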
Laccase expression in C. neoformans is induced by glucose starvation and other stressors, and investigation of the role of the laccase enzyme in C. neoformans pathogenesis can provide insights into the fungal survival strategy. Laccases are also relevant in many aspects of the food industry and may be useful in reducing the environmental impact of synthetic dyes from factory production. Synthetic food dyes may be used in an efficient and cheaper assay for determining laccase activity in C. neoformans cultures, and methods to quantify laccase activity are of interest in a variety of fields, ranging from environmental concerns over the accumulation of dyes in bodies of water to the defense mechanisms of pathogenic fungi (26, 27). Since laccase is involved in catalyzing the melanization defense mechanism of C. neoformans, relative comparisons of laccase activity across C. neoformans strains could be valuable in understanding differences between strains.
Our results suggest that laccase irreversibly breaks down the red dyes Food Red 7 and Acid Red 27, as color changes were not observed with the lac1Δ mutant strain. We hypothesize this phenomenon is due to the enzyme's ability to catalyze the formation of free radicals through "removal of a hydrogen atom from the hydroxyl group of ortho- and para-substituted mono- and polyphenolic substrates" (26). We initially considered that agitation was necessary to promote red color degradation in the wild-type H99 strain, since the cryptococcal laccase reaction uses oxygen, but found that it was not necessary: plates at room temperature without agitation still showed color change when exposed to elevated glucose conditions, albeit requiring a few more days to exhibit color degradation.
We observed more colorimetric activity in cultures with higher concentrations of glucose. This result was unexpected, as previous literature reported increased melanization at lower glucose concentrations, which we expected to correlate with laccase activity (28). In fact, a review of laccase activity cites glucose as a repressor (2). However, those observations refer to melanization and not to direct laccase activity (28), and melanization may not always be a reliable proxy measurement for laccase activity (29). Other reports have noted that laccase expression in C. neoformans is induced during glucose starvation, but also "stimulated by copper, iron, and calcium and repressed at elevated temperatures" (30). The glucose-dependency of color degradation shown in these experiments leads us to hypothesize that the phenomenon we observed could require energy. Alternatively, the glucose effect could reflect changes in the reduction potential of the solution that potentiate the dye-destroying reaction of laccase. The laccase reaction is heavily influenced by the reduction potential of the solution, and glucose has reducing properties, as illustrated by the blue bottle experiment carried out in high school chemistry courses (31, 32).
When comparing different C. neoformans strains, we found that the H99 GFP strain showed signs of increased laccase activity, with an earlier visible color change and a greater change in the absorbance at 520 nm over time. We do not have an explanation for this phenomenon but suspect that association of the GFP construct with the actin promoter in this C. neoformans laboratory strain could cause secondary metabolic effects. Additionally, we note that GFP can generate free radicals and affect the oxidative state of the cell (33). Given the importance of reduction potential in the laccase reaction, it is possible that the enhanced color change associated with GFP-expressing C. neoformans reflects altered oxidative conditions in the cells (31).
The 24-hour color change method requires absorbance measurements to quantify differences in the extent of red color degradation by the fungi. Therefore, we have determined that the different variations of the assay could be used for different purposes depending on what is needed for the experiment. Larger wells and higher cell densities can be used to observe color change over the course of 3-7 days to provide a positive or negative conclusion on the presence of laccase activity, whereas resuspending smaller-volume cultures in high-glucose MM can determine reaction differences over the course of 24 hours. Strain differences did not always reach statistical significance, even though color changes were visible to the human eye, and this could reflect variability in the sensitivity of the plate reader and a need to further optimize the assay to improve the quantification of results. While significant differences were observed when comparing C. neoformans strains to other fungal strains, the relevance of these results in terms of laccase expression is unknown. As of now, this assay is preferentially suited for determining whether a strain produces laccase rather than comparing strain activities.
The absorbance spectrum revealed a new peak at ~420 nm in wells containing C. neoformans that degraded the food coloring, suggesting the formation of a new product when pretreated with antioxidants. This new product was chemically stable. Filamentous fungi have been found to degrade a red diazo dye, using laccases to convert the azo dye to nontoxic products (34). On the other hand, synthetic dyes that are made up of aromatic compounds, such as azo dyes, produce amines when degraded, which are mutagenic to humans (35). These studies suggest that such products could have been produced in our experiments, but further investigation is needed to determine what product was produced by the fungal cells when degrading the food coloring.
Currently, the most widely used laccase activity screen in the cryptococcal field is the 2,2′-azino-bis (3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) assay, in which blue-colored ABTS is oxidized to green-colored ABTS+. A disadvantage of this assay is that the color change is impermanent, and samples must be measured before the oxidized ABTS+ is reduced back. In contrast, the observed color change in our assay was permanent, with no time-sensitive window required for performing the measurement. Samples may be left for long periods of time without special preservation before measurement. We also observed differences in the degree of visible color change across strains in the ABTS assay, which so far have been mitigated by the 24-hour colorimetric assay that shows complete color change across the strains we tested. The colorimetric assay with food coloring materials is also significantly less costly relative to the cost of the ABTS solution (23).
The C. neoformans laccase enzyme has other functions in addition to melanin production. For example, C. neoformans laccase is involved in the production of fungal prostaglandins (4). It is possible that laccase deactivates molecules toxic to the fungi and that its effect on food dye color is a reflection of its nonspecific chemical activity. In this study, we have used this effect to develop a new assay for laccase activity that can be used to study fungal laccases.
FIG 2 Observed degradation of red color after 7 days in different temperature, agitation, and culture conditions. (A) Absorbance measurements showing loss of the red color peak in wells with C. neoformans culture compared to wells with the undegraded food dye. Experiments were repeated in triplicate with similar results. (B) Sample 12-well plate setup showing the location of the various compounds in the assay. Wells shown in panels A to C. Wells with a red circle designation are colors that commonly showed red color degradation. Image created with Biorender.com. (C) Left: Plate with the wild-type H99 culture. Right: After 7 days of agitation at room temperature. (D) Left: Plate with the lac1Δ mutant culture. Right: After 7 days of agitation at 30°C. (E) Left: Wild-type H99 plate after 7 days of agitation at 30°C. Right: Wild-type H99 plate after 7 days left on the bench.
FIG 3 (A) Absorbance measurements at 520 nm wavelength for wells with varying cell densities and MM glucose concentrations. (B) Color change from Mangosteen (purple) to blue observed in wells with varying cell densities with regular 2.7 g/L MM, MM with 10 g/L of glucose, and MM with 20 g/L of glucose. Experiment conducted for two trials.
FIG 4 (A) Absorbance measurements of the supernatant at 520 nm at 0 and 24 hours for different glucose concentrations. Each point represents an individual measurement. The following significant differences were determined when analyzing differences in absorbance across different glucose conditions over 24 hours: 10 g/L and 0 g/L condition (P = 0.0002), 20 g/L and 0 g/L condition (P < 0.00001), 20 g/L and 10 g/L condition (P < 0.00001), and 20 g/L and 2.7 g/L condition (P < 0.00001). Experiments conducted in triplicate. (B) Observations of color change from Mangosteen (purple) to blue. No color change observed with 0 g/L glucose MM in the 24-h colorimetric assay. Rows 1-3 are replicates of the given glucose condition with the H99 strain. Each experiment is conducted in triplicate: vertical columns.
FIG 5 (A) Absorbance measurements at different wavelengths for wells that either degraded the food coloring to produce a color change or did not degrade the coloring. Note that formation of a new peak at ~420 nm was observed in the absorbance measurements of wells with color degradation. (B) ThermoFisher NuPAGE antioxidant-treated wells. The top row is the control row without addition of antioxidants, the middle row is the addition of the antioxidant before color change occurs, and the bottom row is the addition of the antioxidant after color change occurs. Each column is a replicate of the given condition. All wells were initially mangosteen (purple) color. The pre-oxidant row was given antioxidant before any color change occurred, while the control and post-oxidant rows were given antioxidants after they turned blue due to red color degradation. The color change is not reversible, since the wells turned green and not back to purple.
FIG 7 Laccase activity observed in the H99 GFP strain after 3 days with various food dye colors. Color change observed with red color degradation: orange to lighter orange hues, mangosteen (purple) to blue, and indigo to lighter blue. (A) Left: 0-h photo of the plate with H99 and H99 GFP wells with different colors. Right: 3-d photo of the plate after being left on the shaker in a 30°C incubator. Experiment conducted in triplicate. (B) Diagram of the 12-well plate set-up for comparison between H99 and H99 GFP strains, with the food coloring colors used; official names given by the Limino brand in parentheses. Wells with a red circle designation show the red color degradation. | 6,900.6 | 2024-06-13 | ["Environmental Science", "Biology"] |
THE TREND OF APPLICATION OF SERVICE ROBOTS FOR INSPECTION, PLANNED MAINTENANCE AND REMOVAL OF DISRUPTIONS IN PIPING SYSTEMS
The world is currently at the beginning of the fourth industrial revolution, Industry 4.0, whose ultimate goal is to make everything intelligent: both production processes in industry and system maintenance. The environment contains many piping systems, such as water, gas and oil pipelines, sewers, etc., which require constant maintenance. In other words, they require periodic inspections to detect errors such as corrosion, cracks, deformations or other obstructions. Service robots for inspection and maintenance are very convenient for checking piping systems. In addition, they are of interest to many researchers in this field, so a countless number of service robots have been developed and are currently in use. Service robots for inspecting piping systems are used for inspection and provide visual information from inside the corresponding pipe. As a service robot moves through a pipe, its passage is recorded by a camera, providing a video of the internal condition of the pipe. The video can be used later so that, once errors in the piping system have been detected, the right decision can be made about its further operation. The article presents the trend in the application of service robots for inspecting the condition of pipes. A number of designs of these service robots that are already being implemented are shown. Service robots effectively reduce the consequences and number of all problems associated with the maintenance, cleaning and inspection of piping systems. The growing trend in the application of service robots is connected with the implementation of the basic technologies of Industry 4.0, since its goal is to continuously obtain information about the operation of the system. Various robotic systems have been developed for inspecting piping systems and installations that are hazardous to workers' health. Service robots are controlled by a camera, sensor or simple tools. Most service robots are intended for tanks, piping systems for all materials, for inspecting ventilation openings and pipes of air systems, sewers, nuclear plants, or for work in aggressive environments. It is expected that the development and application of service robots for inspection will continue in the near future. Service robots effectively reduce all problems associated with the maintenance, cleaning and inspection of piping systems. Keywords: service robot, inspection, pipeline, error, maintenance.
INTRODUCTION
The development of the core technologies of Industry 4.0, including robotic technology, is responsible for the development of a large number of service robot constructions for professional use [1]. Their implementation is necessary where human presence is impossible, such as work in systems that are dangerous to human health, e.g., nuclear plants, high temperatures, metal casting, production of glass and ceramic products, processes involving high pressure painting, welding, grinding, polishing, etc. Also, their implementation is necessary in places which man is not physically able to access such as piping systems, both below and above ground. Service robots that belong to the group for professional use have a wide range of implementations in the environment. With the implementation of Industry 4.0 core technologies their application is continuously increasing. The implementation of service robots for professional services is shown in Figure 1 [2].
As shown in Figure 1, their implementation can be found in: logistics, inspection, environment, medicine, agriculture, professional cleaning, defense, construction, exoskeletons, rehabilitation, etc. They are especially welcomed in dangerous conditions or places inaccessible to humans (e.g., nuclear plants, high temperatures, inaccessible plants such as pipelines, etc.). Since global manufacturing industry is implementing Industry 4.0, innovations in robotics, automation, and artificial intelligence (AI) are gaining ground in all manufacturing processes where humans and robots work together. The companies must be ready to implement them to meet consumer demand in the global marketplace [3][4].
The world is currently at the beginning of the fourth industrial revolution, Industry 4.0, whose ultimate goal is to make everything intelligent, both production processes in industry and system maintenance. The environment around us has plenty of piping systems, such as water, gas, oil and sewage lines, which need to be continuously maintained. In other words, they require periodic inspections to identify errors such as corrosion, cracks, deformations, or obstruction with obstacles. Service robots for inspection and maintenance are very convenient for the inspection of piping systems. In addition, they are a point of interest for many researchers in the field, so there are countless developed service robots that are currently in use. Service robots for inspection of piping systems are used for inspection and provide visual information from inside the corresponding pipe. When the service robot moves through the pipe, it records the inside with a camera and provides visual information, i.e., a video of the inside of the pipe in which the error can be located. The video can be used later to establish the condition of the recorded piping system and to decide what to do. The paper presents the trend of application of service robots for inspection. A number of constructions of these service robots that are already in implementation are shown. Service robots effectively reduce all problems related to the maintenance, cleaning and inspection of piping systems. The growing trend of service robot application is due to the implementation of the basic technologies of Industry 4.0, whose aim is to receive information about the operation of a system at all times. Various robotic systems have been developed for inspection and examination of piping systems and plants that are dangerous to workers' health. Service robots are controlled by camera, sensor or simple tools. Most service robots for inspection are intended for tanks, piping systems for all materials, for inspection of ventilation openings and pipes of air systems, sewer systems, nuclear plants, or work in aggressive environments. It is expected that the development and application of service robots for inspection will continue to grow in the near future. Service robots effectively reduce all problems related to the maintenance, cleaning and inspection of piping systems.
Keywords: service robot, inspection, pipeline, error, maintenance.
Service robots for inspection of piping systems are well suited for the inspection and maintenance of piping systems. The process of maintaining a piping system includes three main activities: Inspection, which includes activities that monitor and provide information on the state of the water supply system in order to enable the prediction of disturbances or early detection of disruptions.
Scheduled maintenance, which includes activities in which elements of the system are replaced or repaired according to a predetermined schedule, in order to avoid or reduce the frequency of disruption.
Disruption management, which includes activities that change the elements of the system and restore the desired state after the disruption.
Maintenance of piping systems requires access to dynamic environments. The tasks in maintaining these systems may be less predictable, both in terms of the nature of the tasks and the frequency of maintenance or any delays between the tasks. Service robots for inspection are used only occasionally, unlike production robots that are active all the time and are therefore cost-effective; nevertheless, an inspection robot can pay off with a single application, for example in a nuclear plant or in a gas pipeline system where gas is being lost and we are otherwise unable to establish the cause and how to fix it [5, 6].
RESEARCH METHODOLOGY
In order to analyze the trend of service robots for professional use as well as the application of service robots for inspection and scheduled maintenance, we have taken statistics from the International Federation of Robotics (IFR), the United Nations Economic Commission for Europe (UNECE) and the Organisation for Economic Co-operation and Development (OECD), which hold aggregate data coming from about 750 robot companies [9][10][11][12][13][14][15][16][17][18][19][20]. Statistical analysis methods and the MS Excel software system were used to calculate the parameters of statistical descriptions and for graphical presentation of data.
RESULTS
Development and implementation of new technologies, such as robotic technologies, a new generation of digital technologies, artificial intelligence, machine learning, the internet, genetic modification, new types and methods of energy and information storage, quantum computing, 3-D printing, genetic engineering and biotechnology in the manufacturing processes of industry, makes automation flexible, which leads to shorter product life cycles and increased productivity. In addition, the implementation of innovations in robotic technology enables the development of a new generation of robots that are aware, connected, and intelligent, while control technology enables their autonomous adaptation based on internal or external commands. In other words, they are equipped with a powerful computer that allows autonomous decision making and an algorithm-based self-learning process. Next-generation robots communicate with the environment, understand the environment through models, automatically generate a program based on planned tasks, understand human actions, and follow human social norms. The development of robotic technology is credited with the increasing trend of implementation of service robots for professional use, as shown in Figure 2. Based on Figure 2, we come to the conclusion that the implementation of service robots for professional use has been progressing exponentially in the last ten years. In 2009, 13,249 service robot units were applied, while in 2020, the application amounted to 225,000 units of service robots. In other words, it is an increase of about 17 times, which is the result of the implementation of Industry 4.0 and its base technologies. It is estimated that this trend will continue until 2023, when the application of about 530,000 service robot units for professional use is predicted. Service robots for inspection are precisely those used for inspection and scheduled maintenance of piping systems, which will be analyzed in this paper. We will show the trend of application of service robots for inspection, as illustrated in Figure 3. The trend of implementation of service robots for inspection in the world is shown in Figure 3. As can be seen, the implementation of robots increased slightly at first, from 168 robot units in 2009 to 800 robot units in 2017. There has been high implementation in the last three years, so that in 2020, about 18,000 robot units were implemented. The increasing trend of implementation in recent years has been achieved thanks to innovations from advanced technologies and companies that implement the innovative technologies of Industry 4.0. It is estimated that this increased trend will continue, so that the implementation of about 37,000 service robot units for inspection is predicted by 2023. The implementation of service robots for inspection and maintenance is inevitable in all systems that man cannot access, such as water supply systems, or systems in which human health is endangered, such as nuclear plants, high temperature systems, etc. [7][8][9].
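The "about 17 times" growth and the claim of roughly exponential progression can be checked with a quick calculation of the implied compound annual growth rate. The snippet below uses only the unit counts quoted in this paragraph; the forecast comparison is illustrative, not an independent projection.

```python
# Quick check of the growth figures for professional-service-robot installations.
units_2009, units_2020 = 13_249, 225_000
units_2023_forecast = 530_000

growth_factor = units_2020 / units_2009                 # ≈ 17x, as stated in the text
years = 2020 - 2009
cagr = growth_factor ** (1 / years) - 1                 # implied compound annual growth rate

print(f"2009 -> 2020: x{growth_factor:.1f} overall, ~{cagr:.1%} per year")
print(f"Implied 2023 level at the same rate: "
      f"{units_2020 * (1 + cagr) ** 3:,.0f} units (forecast: {units_2023_forecast:,})")
```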
IMPLEMENTATION OF SERVICE ROBOTS FOR INSPECTION AND MAINTE-NANCE OF PIPING SYSTEMS
Piping systems for water, gas, oil and sewage, and tanks, need to be inspected periodically to verify their integrity or detect errors that occur, in order to eliminate them in time with the assistance of suitable service robots for inspection. Service robots for use in piping systems are segmented robots equipped with wheels or tracks for moving inside oil, gas or wastewater pipes and industrial or air ducts [21][22][23]. In addition, service robots for inspection help rapid detection of problems within piping systems, such as weld failures, corrosion, erosion, fractures, deposition, loose parts, faulty internal coatings, etc. Sewage service robots can clean pipes with a diameter of 200 to 600 mm, which are inaccessible to people. In order for a piping system to be constantly in function, it is necessary to have planned maintenance activities in which the elements of the system are changed or replaced according to a predetermined schedule, in order to avoid frequent disruptions. Robots involved in the production process are cost-effective because they work throughout their service life, while inspection service robots differ from other robotic applications. If used in maintenance, especially in the nuclear industry, they can pay off during a single use, because by using a robot the nuclear plant avoids shutting down or continues to operate during the maintenance time. Depending on the dimensions and purpose of the piping system, the designs and constructions of service robots are customized, so there are many different robot constructions offered by many companies around the world. Due to space limitations, we will only illustrate certain constructions and purposes of service robots for inspection and maintenance of piping systems.
Carnegie Mellon University (CMU) and the National Robotics Engineering Center (NEREC), with the support of the Gas Company (NGA), designed a new-generation Explorer-II (X-II) service robotic system capable of inspecting pipeline control systems intended for pressurized gas [24, 26]. The electronic architecture allows GPS to locate defects during pipeline inspections and quickly take action to repair any errors. The service robot, called a robot train, is made up of the modules shown in Figure 4. Using a combination of its built-in drive elements and steering joints, the Explorer is designed to travel through straight pipes and reductions of pipe diameter, elbows, sharp curves and T-curves. The system is sealed and purged, which ensures safe operation in a natural gas environment.
The Explorer service robot is a remote-controlled, battery-powered robot for the inspection and control of low-pressure and high-pressure gas pipelines. This technology has the potential to improve the overall safety, reliability and integrity of natural gas infrastructure by providing a state-of-the-art tool for inspecting almost all piping systems. This robot performs visual inspection of long-range piping systems made of cast iron and steel. The operator controls the Explorer wirelessly and can monitor pipeline images in real time. The service robot for inspecting and controlling piping systems has exceptional mobility and can move like a snake through pipes with its flexible body. The body segments contain batteries for power supply, a computer, two symmetrically placed heads with cameras and two connectors for operating elements, while the modules allow the robot to move arbitrarily. A ball joint with three degrees of freedom is located between the drive units. Highly integrated electronics use a digital signal processor that controls the motor and gives accurate angle calculations.
Service robots for inspection and control of piping systems are reliable for inspection of corrosion, cracks and other types of defects, the detection of which avoids severe and expensive accidents. The service robots are designed so that different types of sensors and cameras can be placed on it, which can be integrated into the robotic navigation system.
Large electricity producers want to be competitive in the market. Their goal is to extend the service life of generators, turbines, boilers and pressure lines. They can achieve this only through inspections of the specified equipment, detection and elimination of defects. Until now, this was achieved manually with experts in the field. In recent years, with the development of robotic technology, this is achieved by service robots that perform measurement, imaging and scanning of surfaces. The advantages are great because we are able to inspect even those surfaces that are inaccessible to humans.
The service robots are designed so that different types of sensors and cameras can be mounted on them and integrated into the robotic navigation system. Figure 5 shows various constructions of service robots for inspection and control of both piping systems and plants that are dangerous to the health of workers, such as nuclear plants or plants for the production of electricity, including generators, turbines, boilers and pressure lines. The main goal is to extend the service life of the plant, which can only be achieved through inspections of the specified equipment, timely detection of defects and their elimination.
CONCLUSION
Service robots have already found application in inspections and maintenance, where human potential is replaced and where human presence is impossible. They have application in places where movement is disabled or hindered, such as testing piping systems underground and above ground. There is also a wide application of service robots in the branches of inspection and maintenance in which working conditions are dangerous to humans, where high temperatures develop and where the concentration of substances dangerous to human health is increased, such as the production and casting of metals, glass and ceramics, and processes that include high-pressure painting, welding, sanding, polishing, etc. The trend of application of service robots for professional use is growing annually, following an exponential function. This growing trend is expected to continue in the coming years. There is also an increasing trend of service robots used for inspection and control of various systems, especially piping systems which humans cannot physically inspect. This growing trend is due to the implementation of the basic technologies of Industry 4.0, whose aim is to have information about the operation of a system at all times. Various robotic systems have been developed for inspection and examination of piping systems and plants that are dangerous to workers' health. Service robots are tele-controlled, moving on tracked or wheeled platforms, and carry a camera, sensors or simple tools. Most service robots for inspection are intended for tanks, piping systems for all media (water, gas, fuel, oil, etc.), for inspection of ventilation openings and pipes of air systems, sewer systems, nuclear plants, or work in aggressive environments. It is expected that the development and application of service robots for inspection will continue to grow in the future. | 3,861.2 | 2021-12-28 | ["Engineering", "Environmental Science"] |
Human and Mouse CD137 Have Predominantly Different Binding CRDs to Their Respective Ligands
Monoclonal antibodies (mAbs) to CD137 (a.k.a. 4-1BB) have anti-tumor efficacy in several animal models and have entered clinical trials in patients with advanced cancer. Importantly, anti-CD137 mAbs can also ameliorate autoimmunity in preclinical models. As an approach to better understand the action of agonistic and antagonistic anti-CD137 mAbs, we have mapped the binding region of the CD137 ligand (CD137L) to human and mouse CD137. By investigating the binding of CD137L to cysteine-rich domain II (CRDII) and CRDIII of CD137, we found that the binding interface was limited and differed between the two species, in that mouse CD137L mainly combined with CRDII and human CD137L mainly combined with CRDIII.
Introduction
CD137 belongs to the third group of receptors in the TNF receptor superfamily, which also includes OX40, CD27, CD30, and CD40. These receptors are all costimulatory molecules and act at different stages in T and B cell activation to modulate the immune response [1][2][3]. CD137 is expressed by multiple cell types including activated effector CD8+ and CD4+ T cells, natural killer (NK) cells, NK/T cells, dendritic cells (DCs), macrophages, neutrophils, eosinophils [4], and, according to recent data, also by regulatory T cells (Tregs), activated B cells, mast cells and endothelial cells in tumor capillaries [4][5][6][7]. Engagement of CD137 boosts proliferation of T cells, activates their effector functions and survival, and establishes immunological memory [8]. CD137 signaling promotes a T cell response by activating the PI-3-kinase and Akt/PKB signaling pathways, increases expression of Bcl-XL and Bfl-1, and enhances IFN-γ secretion to polarize Th1 differentiation [9]. CD137-deficient mice have a decreased CD8+ T-cell response to virus infection [10]. Baessler and colleagues recently reported that the engagement of CD137 on mouse and human NK cells had opposite effects, in that CD137 functions as an inhibitory receptor in humans and as a stimulatory receptor in mice [11].
Administration of anti-CD137 mAbs has significant therapeutic activity against established tumors in several mouse models, including tumors that are poorly immunogenic [12][13][14]. Engagement of CD137 can also down-regulate immune responses for therapeutic benefit in a variety of mouse models of autoimmune diseases [15][16][17]. Two fully human anti-CD137 mAbs have been developed and entered phase I-II studies in patients with advanced solid tumors or B-cell malignancies [18][19]. However, there is a concern using these mAbs in view of the expression of CD137 and its ligand by a variety of normal cells as well as the fact that opposite biological effects and severe side-effects have been observed [20][21].
Antibodies to costimulatory receptors can be either antagonistic or agonistic. There are similarities between the toxicities induced by engaging CD137 and CD28, including a systemic inflammatory response involving CD4+ T cells and a ''cytokine storm'' [22]. For example, two different mAbs to CD28, JJ316 and JJ319, trigger different functional signals via CD28, with JJ316 being a hypercostimulatory activating mAb [23]. The mechanisms responsible for the differences between different mAbs to the same costimulatory molecule are not known.
Little is known about the molecular interactions that are responsible for the binding of CD137 to CD137L. Predicting specific interactions on the basis of structural information alone has not been possible. Data from multiple mutagenesis and binding studies have allowed the identification of amino acid residues in the extracellular domain of TNF which are critical for receptor binding [24][25][26][27][28]. The binding between CD40 and CD40L and between OX40 and OX40L has been determined [29][30]. In contrast, no crystals of CD137-CD137L have been produced [31]. In this report, we have mapped the mouse and human CD137 regions which are responsible for binding to the corresponding natural ligands, and we analyze their structures.
This research involved taking peripheral blood from healthy humans and spleens from mice, and all procedures had been approved by the Beijing Tuberculosis and Thoracic Tumor Research Institute Ethics Committee. Animal experiments were conducted according to relevant national and international guidelines. All participants provided written informed consent prior to participation in the study.
Isolation, activation of lymphocytes and cDNA preparation
Human peripheral blood mononuclear cells (PBMCs) from healthy donors were isolated by Ficoll-Hypaque gradient centrifugation, resuspended at 1×10^6/ml in RPMI 1640 medium (Gibco, Carlsbad, CA, USA) supplemented with 10% fetal bovine serum (Gibco) and activated by incubation with phytohemagglutinin (PHA, Sigma, St Louis, MO, USA) at 50 mg/ml for 36 h at 37°C. Lymphocytes from mouse spleens were prepared after lysing the erythrocytes with ammonium chloride and activated by incubation with concanavalin A (ConA, Sigma) at 5 mg/ml for 36 h at 37°C in 10% FBS RPMI 1640 medium. Expression of CD137 on the T cells was confirmed by flow cytometry (FACSCalibur, BD, San Jose, CA, USA) after double staining with FITC-conjugated anti-CD3 (OKT3, eBioscience, San Diego, CA, USA) and PE-conjugated anti-CD137 (BD Biosciences, San Diego, CA, USA). RNA was extracted from activated and unstimulated human and mouse lymphocytes using TRIzol (Life Technologies, Carlsbad, CA, USA), and single-stranded cDNAs of IgG-Fc, CD137 and CD137L were synthesized using poly(dT) primers and M-MLV RT (Promega Corporation, Madison, WI, USA).
Production of CD137 CRDs and CD137 Ligand fusion proteins
The designed fragments of truncated CD137 and its ligand were amplified by PCR from cDNA and subcloned into the pCDH, pCDH-L and pCDM-L vectors. The primers used for amplification of the various mouse and human CRDs or CD137L are listed in Table S1. A pair of primers, m1-AAG AGG ACA CGA AGG AGC TGG TGG TCG CCA and m2-TGG CGA CCA CCA GCT CCT TCG TGT CCT CTT, was used to perform site-directed mutagenesis (g→c) at the 405th base of the human 4-1BBL extracellular domain to destroy the AgeI restriction enzyme site through overlap PCR without changing any 4-1BBL amino acid [2]; the human 4-1BBL extracellular domain was then cloned into pCDH-L and pCDM-L using the AgeI+XbaI restriction sites. The point mutation was verified by sequencing. All the constructed expression plasmids were transfected into COS-7 cells with Lipofectamine 2000 (Life Technologies, Carlsbad, CA, USA) to produce human and mouse IgG fusion proteins, which were subsequently purified with rProtein A Sepharose (GE Healthcare, Uppsala, Sweden) and eluted with glycine elution buffer. The mouse CD137 extracellular domain with an hFc tail (mE-hFc) was obtained by transfection of COS-7 cells using DEAE-dextran (Sigma, St Louis, MO, USA).
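The point of this mutagenesis step is a silent single-base change that removes an internal AgeI recognition site (ACCGGT) so the ligand fragment can be cloned with AgeI + XbaI. A small script can verify that a candidate substitution removes the site without altering the encoded protein. The sketch below uses a short toy coding sequence invented for illustration; it is not the 4-1BBL sequence, which is not reproduced in this paper, and it assumes Biopython is available for translation.

```python
# Sketch: verify that a candidate silent substitution removes an AgeI site (ACCGGT)
# without altering the encoded protein. The toy sequence below is hypothetical.
from Bio.Seq import Seq   # Biopython

AGEI_SITE = "ACCGGT"

def silent_site_removal(cds, pos, new_base):
    """Return (site_removed, protein_unchanged) for a 1-based substitution in a CDS."""
    mutant = cds[:pos - 1] + new_base + cds[pos:]
    site_removed = AGEI_SITE in cds and AGEI_SITE not in mutant
    protein_unchanged = str(Seq(cds).translate()) == str(Seq(mutant).translate())
    return site_removed, protein_unchanged

# Hypothetical 12-nt CDS containing an AgeI site spanning codons 2-3 (Thr-Gly).
toy_cds = "ATGACCGGTAAA"                      # Met-Thr-Gly-Lys
# Changing the wobble base of codon 2 (ACC -> ACG, both Thr) destroys ACCGGT:
print(silent_site_removal(toy_cds, 6, "G"))   # -> (True, True)
```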
Quantitation of expressed proteins
After purification by affinity chromatography, the expressed fusion proteins were quantitated by spectrophotometric analysis, and a sandwich ELISA was established to measure the concentration of human CD137. In brief, anti-human CD137 mAb h4-1BB-M127 (BD Pharmingen, San Diego, CA, USA) was used as the capture antibody, and a serially diluted solution of hCD137-hFc fusion protein (Sino Biological Inc, 10041-H03H, China) with a purity over 95% was used as the reference standard. This was followed by addition of a biotin-labeled anti-human CD137 mAb (BD Pharmingen, San Diego, CA, USA), 4B4-1, which recognizes a different epitope, after which streptavidin-HRP was added followed by TMB (Sigma, St Louis, MO, USA) substrate solution as the color-developing agent, and OD450 nm was determined. By establishing standard curves, the concentration of the hCD137-hFc fusion protein in transfected culture supernatants was determined.
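Concentrations are read off the standard curve by interpolation; a common choice for sandwich ELISAs is a four-parameter logistic (4PL) fit, although the paper does not state which curve model was used. The sketch below shows one way this interpolation could be implemented; the standard concentrations and OD values are invented for illustration.

```python
# Sketch: fit a 4-parameter logistic (4PL) standard curve to ELISA data and
# back-calculate sample concentrations. Standard values below are invented.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL: a = OD at zero analyte, d = OD at saturation, c = inflection point, b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([1000, 500, 250, 125, 62.5, 31.25, 15.6])    # ng/mL (example)
std_od   = np.array([2.10, 1.75, 1.30, 0.85, 0.50, 0.28, 0.16])  # OD450 (example)

params, _ = curve_fit(four_pl, std_conc, std_od, p0=[0.05, 1.0, 200.0, 2.2], maxfev=10000)

def od_to_conc(od, a, b, c, d):
    """Invert the 4PL to estimate concentration from an OD reading."""
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

sample_od = 0.95
print(f"Estimated concentration: {od_to_conc(sample_od, *params):.1f} ng/mL")
```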
ELISA assays for CD137 and CD137 Ligand binding
For mouse CD137 and CD137L binding experiments, microtiter plates (Nunc, Roskilde, Denmark) were coated overnight with purified mCD137L-mFc (mouse CD137L extracellular domain with mouse IgG-Fc fusion protein) at 1 mg/ml, blocked with 5% milk (OXID LTD, Basingstoke, Hampshire, England) for 1 h at 37°C, after which expression supernatants of mE, mE2, mE3, mE4 (mouse CD137 CRDs with human IgG-Fc fusion proteins) were added for 2 h at 37°C. For detection, microtiter plates were incubated with goat anti-human IgG-HRP antibodies (Jackson ImmunoResearch Laboratories Inc, PA, USA) at 37°C for 1 h. In the subsequent quantitation ELISA experiment, using the same procedures, expression supernatants were replaced by purified mE, mE2, mE3, mE4 proteins at concentrations ranging from 0.1 mg/ml to 10 mg/ml. TMB (Sigma, St Louis, MO, USA) substrate solution was used to detect positive reactions, which were evaluated at OD450 nm.
To examine the ability of human CD137 and various CRDs to bind to human CD137L, microtiter plates were coated with purified hCD137L-mFc (human CD137L extracellular domain and mouse IgG-Fc fusion protein) at 1 μg/ml, after which expression supernatants of hE, hE2, hE3, hE4 (human CD137 CRDs with human IgG-Fc fusion proteins) were added, followed by goat anti-human IgG-HRP. In parallel, microtiter plates were coated with purified hCD137L-hFc (human CD137L extracellular domain with human IgG-Fc fusion protein), which was followed by adding expression supernatants of hE300, hE3003, hE3004 (human CD137 CRDs with mouse IgG-Fc fusion proteins) and detection with goat anti-mouse IgG-HRP. Experiments were also done in which microtiter plates were coated with purified hCD137L-mFc at 1 μg/ml, after which a range of concentrations of purified hE3 (CRDI and CRDII) from 0.2 μg/ml to 10 μg/ml was added to further confirm the binding of recombinant hE3 to hCD137L.
For cross-binding experiments, microtiter plates were coated with purified hCD137L-mFc and mCD137L-mFc at 1 μg/ml, after which expression supernatants of hE and mE were added. In other experiments, expression supernatants of hE, hE4 and mE, mE3, mE4 were added for cross-binding of human CD137L to mouse CD137 CRDs. A P/N ratio (average OD value of positive wells/average OD value of negative wells) ≥ 2.1 was chosen as a positive ELISA reaction. The error bars represent the standard deviation of the means based on three replicates of one representative experiment.
Monoclonal antibodies and blocking experiments
Rat hybridoma 2A [34], which is specific for mouse CD137, was kindly provided by Dr. Lieping Chen from Yale University. Another anti-mouse CD137 hybridoma, D4-67, was established in our laboratory by immunizing a Lewis rat by i.p. and s.c. injection of 100 μg/0.3 ml of mCD137-hFc fusion protein mixed with Freund's Complete Adjuvant (Sigma, St Louis, MO, USA), followed by 3 subsequent i.p. and s.c. injections of 100 μg/0.3 ml of mCD137-hFc given every second week together with Freund's Incomplete Adjuvant (Sigma). One month after the last immunization, the rat was boosted by i.p. injection and euthanized 4 days later, when spleen cells were hybridized with mouse P3-X63-AG8.653 myeloma cells using standard procedures. Wells containing hybridoma cells were screened for binding to mCD137-hFc fusion protein and not to mCD28-hFc fusion protein. D4-67 was cloned from a high-producing well.
Figure 1 (caption fragment). (1-23 aa). mE, mE4, mE3 and mE2 represent the expressed mCD137 segments covering different CRDs (I, II, III, IV). (B) Identification of purified Fc-fusion proteins expressed with pCDH (mE, mE4, mE3 and mE2) and pCDM-L (mE100 and mCD137L, i.e. mL) in reducing gel: left panel, SDS-PAGE gel with Ponceau S staining; right panel, Western blotting (ECL). (C) Binding of mCD137L to various recombinant mE or subdomains by ELISA assay. Affinity-purified mCD137L (1 μg/ml) was coated on plastic, followed by adding expression supernatants of mCD137 fragments, and subsequently detected by HRP-goat-anti-human IgG. (D) A quantitative binding ELISA in which mCD137L was coated at 1 μg/ml; after blocking with milk, four purified mCD137 protein fragments (mE, mE4, mE3, mE2) were added at decreasing concentrations of 10 μg/ml, 5 μg/ml, 1 μg/ml and 0.1 μg/ml and detected by HRP-goat-anti-human IgG. The binding experiment was repeated 3 times with different batch supernatants. Control was culture supernatant from cells that had not been transfected. *P/N ≥ 2.1. doi:10.1371/journal.pone.0086337.g001
Purified hCD137-hFc at a concentration of 1.0 μg/ml and mouse anti-human CD137 mAbs or OKT3 at a concentration of 5 μg/ml or 25 μg/ml were mixed and incubated at 4 °C overnight. Microtiter plates were coated with purified hCD137L-mFc at 1 μg/ml and incubated with blocking solution, after which the above mixture was added and incubated for 2 h at 37 °C, followed by mouse anti-human Fc-HRP for detection. Two different anti-human CD137 mAbs (4B4-1 and h4-1BB-M127, BD Pharmingen) were serially diluted from 0.1 μg/ml to 10 μg/ml and used for similar ELISA blocking experiments. mCD137-hFc at a concentration of 1 μg/ml was mixed with two rat anti-mouse CD137 mAbs (2A and D4-67) and added to mCD137L-mFc-coated plates, which was followed by mouse anti-human-Fc-HRP for detection.
PAGE and Western blot analysis
Expressed proteins were purified with rProtein A, mixed with loading buffer, boiled for 5 minutes, separated on a 10% SDS-polyacrylamide gel and transferred onto a nitrocellulose membrane (PALL Corporation, Pensacola, FL, USA). The membrane was stained with Ponceau S at RT and washed with water until the background was white. The membrane was incubated in 5% milk-PBS for blocking, and subsequently goat anti-mouse IgG-HRP or goat anti-human IgG-HRP was added. Chemiluminescent substrate for HRP (Thermo Scientific, Rockford, IL, USA) was used to visualize the protein bands. In addition, CD137-Fc and CD137L-Fc, in loading buffer without 2-ME and without being boiled, were run on a PAGE gel, after which the gel was stained with Coomassie Blue Stain Solution (Amresco, Solon, OH 44139, USA). In experiments to confirm the binding of mCD137 and its CRDs to mCD137L on the gel, purified proteins of mE, mE4, mE3, mE2, mCD28 and mL (mCD137L-mFc) in loading buffer with 2-ME were boiled for 5 minutes, or mE-hFc and mE3-hFc in loading buffer without 2-ME and without being boiled, and then separated on a 10% SDS-polyacrylamide gel and blotted onto a nitrocellulose membrane. The membrane was incubated with 1 μg/ml mCD137L-mFc and, after blocking, goat anti-mouse IgG was added for detection.
Expression and quantification of fusion protein
We constructed three different vectors with human and mouse IgG-Fc and transfected COS-7 cells to express various truncated fusion proteins, which were then used in binding assays. According to a quantitative ELISA, the concentrations of expressed proteins in the supernatants of transfected COS-7 cells were around 0.5 μg/ml, and the proteins were >80% pure after rProtein A affinity chromatography. The expected monomeric molecular weights, including IgG-Fc, ranged from 32 kDa to 48 kDa.
Mouse CD137 Ligand mainly binds to CRDII of the mouse CD137 extracellular region
CRDs were constructed as a basic structural unit for the mapping of CD137L binding. Mouse CD137 (referred to as mCD137) has four CRDs, among which the structures of CRDII and CRDIII are typical [35]. We designed mE2, mE3, mE4 and mE, which cover CRDI, CRDI+II, CRDI+II+III and the full-length mouse CD137 extracellular domain, respectively (Fig. 1A); mE100 covers CRDIII only. Fig. 1B shows the expression of mE, mE2, mE3, mE4, mE100 and mL (mCD137L-mFc) according to SDS-PAGE gel staining and Western blotting. The bands show that the proteins have larger molecular weights than expected because the fusion proteins include not only the hinge-like region and Fc fragment but are also glycosylated. In a pilot assay to measure binding of mCD137L to CD137, wells in an ELISA plate were coated with purified mCD137L, and supernatants from mE-, mE2-, mE3- and mE4-transfected COS-7 cells were added. ELISA assays showed that mE, mE3 and mE4 bound to mCD137L while mE2 (CRDI) did not, and that mE3 (CRDI+II) had the strongest binding. Fig. 1C shows data from a representative ELISA assay.
As the next step, the wells of an ELISA plate were coated with mE100 (CRDIII only) and mE2 (CRDI), after which mCD137L was added to investigate whether CRDIII or CRDI can independently bind to mCD137L. The OD450 readings for mE100, mE2 and the negative control were 0.165, 0.133 and 0.128, respectively, indicating that neither mE100 (CRDIII) nor mE2 (CRDI) can bind to mCD137L. The results were the same when the proteins were coated at 5 μg/ml, further confirming that CRDII is the main region for binding of mCD137L (Table 1).
Human CD137 Ligand mainly binds to CRDIII in the human CD137 extracellular region
The amino acids of human and mouse CD137 have 73.5% expected similarity and 59.6% identity [2]. The human CD137 extracellular region contains 186 amino acids, one more than that of mice, and the distribution and structure of the CRDs from the two species are very similar: their CRDI and CRDIV regions both contain only four cysteines that cannot form the typical CRD structure, while their CRDII and CRDIII both contain six cysteines. Glycosylation sites are located between CRDIV and the hinge-like region in both species [36]. Therefore, hE2, hE3, hE4 and hE (human CD137 extracellular region) were designed to cover CRDI, CRDI+II, CRDI+II+III and the full-length CD137 extracellular domain, as in the experiments with mouse CRDs. hE300, hE3003 and hE3004 were designed to cover CRDIII+CRDIV, single CRDIII, and single CRDIV (Fig. 2A,B). ELISA wells were coated with purified hCD137L, after which supernatants of hE-, hE2-, hE3- and hE4-expressing cells were added. The hE and hE4 supernatants were positive while the hE2 and hE3 supernatants were negative, indicating that the CRDI+CRDII constructs cannot bind to hCD137L, while CRDI+CRDII+CRDIII can (Fig. 2C).
Figure 2 (caption fragment). hE, hE2, hE3, hE4 and hCD137L-hFc (hL-hFc) were expressed by pCDH and pCDH-L, respectively; hE300, hE3003, hE3004 and hCD137L-mFc (hL-mFc) were expressed by pCDM-L. (C) Binding of hCD137L to various recombinant hE and subdomains by ELISA assay. Affinity-purified hCD137L-mFc (1 μg/ml) was coated on plastic, followed by addition of various hCD137 fragment expression supernatants, then detected by HRP-goat-anti-human IgG. (D) hCD137L-hFc (1 μg/ml) was coated, followed by addition of hE300, hE3003 and hE3004 fragment expression supernatants and also an unrelated epithelial cell adhesion molecule (EpCAM) protein with mFc; binding was detected by HRP-goat-anti-mouse IgG. Similar results were obtained in all of 3 individual experiments. *P/N ≥ 2.1. doi:10.1371/journal.pone.0086337.g002
In order to validate whether hE3 can bind to the ligand, purified hE3 protein at different concentrations (10 μg/ml, 5 μg/ml, 1 μg/ml, 0.2 μg/ml), instead of supernatants, was added to ELISA wells coated with hCD137L. No binding was observed (Fig. 3). ELISA assays were also performed with another series of CD137 fragments: hE3003 (separate CRDIII), hE300 (separate CRDIII+CRDIV) and hE3004 (separate CRDIV). The results were also negative (Fig. 2D). We conclude that CRDIII is the main binding region for human CD137L, while CRDI+CRDII may be needed to assist CRDIII in contacting human CD137L (Table 1). CRDI+CRDII may be less involved in the contact, also in view of data showing that a CRDII-specific anti-human CD137 mAb (4B4-1), which binds to hE3 but not to hE4, cannot interfere with the binding between the ligand and its receptor in the blocking experiments.
Cross-binding of human CD137 Ligand to mouse CD137
Purified human CD137L (hL) was coated onto ELISA wells at 1 μg/ml and expression supernatants of human CD137 (hE) and mouse CD137 (mE) were added. As shown in Fig. 4A, both hE and mE can bind to hL, although the binding by hE is greater. In a parallel experiment, purified mouse CD137L (mL) was used to coat the plastic wells at 1 μg/ml, after which hE and mE supernatants were added. As shown in Fig. 4A, mE binds to mL while hE does not. In order to verify the binding of hCD137L to mCD137, ELISA plate wells were coated with hL and expression supernatants of mE3 and mE4 were added. As illustrated in Fig. 4B, both mouse CD137 fragments can bind to hCD137L, but the binding affinity of the mouse fragments for hL is much lower than that of hE for hL. Furthermore, when an SDS-PAGE-ECL assay was used for binding analysis under non-reduced conditions (i.e., without 2-ME), both hE and mE were detected as dimers and the MW was twice that of the predicted monomers (data not shown). While the protein amounts of hE and mE were similar on the gel (Fig. 5, left panel), the ECL result showed that hCD137L binding to hCD137 was much stronger than to mCD137 (Fig. 5, right panel). We conclude that hCD137L can bind to mCD137.
Figure 3. Binding experiment of hE3 to hCD137L by ELISA assay. hE3, an hIgG1-Fc fusion protein expressed by pCDH, contains hCD137 CRDI+II. hCD137L-mFc was coated at 1 μg/ml and a series of decreasing concentrations of hE3 (10 μg/ml, 5 μg/ml, 1 μg/ml and 0.2 μg/ml) were added, followed by detection with HRP-goat-anti-human IgG. The experiment was repeated 3 times with different batch supernatants. doi:10.1371/journal.pone.0086337.g003
Figure 4. Cross binding of CD137L to CD137 between human and mouse species by ELISA assay. (A) Human or mouse CD137L-mFc (hL/mL) was coated (1 μg/ml) on an ELISA plate. After blocking with milk, supernatant containing the entire human or mouse CD137 extracellular domain (hE, mE) with human IgG tail was added, followed by HRP-goat-anti-human IgG for detection. (B) hL-mFc was coated on an ELISA plate and supernatants containing hE, hE4, mE, mE3 and mE4, all with h-Fc tail fusion proteins, were added. hE and hE4 were used as positive controls and supernatant from cells that had not been transfected was used as a negative control. HRP-goat-anti-human IgG was added for detection. The figure shows representative results from one of three experiments. *P/N ≥ 2.1. doi:10.1371/journal.pone.0086337.g004
Figure 5. Cross binding of hCD137L to mCD137 via PAGE gel. mCD137 extracellular region (mE) and hCD137 extracellular region (hE) were run on a 10% SDS-PAGE gel without 2-ME, then stained by Ponceau S (left panel) and followed by Western blotting with hCD137L-mFc and HRP-goat-anti-mouse IgG for detection (ECL, right panel). doi:10.1371/journal.pone.0086337.g005
CD137 binding to CD137 Ligand is related to the conformational structure
mL-Fc and mE-Fc were separated by SDS-PAGE without 2-ME, and Fig. 6A demonstrates that both the ligand and the receptor exist as dimers. Subsequently, mE, mE2, mE3, mE4, mCD28 (all with hFc tail) and mL (mCD137L-mFc, as a control for anti-mFc detection) were run on an SDS-PAGE gel with 2-ME, after which the proteins were transferred to a nitrocellulose membrane and incubated with mCD137L (mL-mFc) for ECL. Only mCD137L was detected while the other proteins were all negative (Fig. 6B). However, when mE and mE3 were run on SDS-PAGE without 2-ME, followed by incubation with mCD137L for ECL, both mE and mE3 bound to mCD137L (Fig. 6C).
These results indicate that the binding of CD137 to its ligand involves intrachain disulfide bonds, which are necessary for maintaining the intramolecular structural stability of the CRDs [2].
Anti-CD137 mAbs specific to CRDs that predominantly bind to CD137 Ligand with high blocking efficiency
Two anti-human CD137 mAbs with different epitope specificities, clone 4B4-1, which recognizes hCD137 CRDII by binding to hE3, and clone h4-1BB-M127, which recognizes hCD137 CRDIII by binding to hE4 but not to hE3 (Fig. 7A), were tested for their ability to interfere with the binding of hCD137 to hCD137L. At concentrations of 5 μg/ml and 25 μg/ml, h4-1BB-M127 fully blocked the binding (Fig. 7B), while the 4B4-1 mAb did not interfere with the binding (Fig. 7C) any more than the negative OKT3 control (Fig. 7D). In order to determine the efficiency of the mAbs in inhibiting this binding, we varied their concentrations in the mixture from 10 μg/ml to 0.01 μg/ml and found that h4-1BB-M127 had an inhibitory effect at 1 μg/ml, which increased (>80%) when it was applied at 10 μg/ml. In contrast, even at 10 μg/ml, the 4B4-1 mAb only showed around a 10% blocking effect (Fig. 7D). This finding indicates that CRDIII is involved in the contact of human CD137 with human CD137L.
In similar experiments, two rat anti-mouse CD137 mAbs, D4-67 and 2A, were tested for their ability to block the binding of mCD137L to mCD137. The mAbs are specific for CRDIII+CRDIV or a hinge-like region immediately adjacent to the transmembrane domain, respectively (data not shown). They modestly inhibited the mCD137-mCD137L binding at a concentration of 5 μg/ml and inhibited ~50% of the binding when the concentration reached 25 μg/ml (Fig. 8A and B). The experiments support the conclusion that there is a discrepancy with respect to the location of the CD137-CD137L interaction between human and mouse.
Discussion
We have mapped the CRDs for the CD137L binding to CD137 in both mouse and human. A mouse CD137 extracellular fragment containing CRDI+II bound to mouse CD137L as well as the whole mouse CD137 extracellular domain did, while CRDI alone could not bind to mCD137L. In contrast, human CRDI+CRDII could not bind to human CD137L, while CRDI+CRDII+CRDIII could. We conclude that the binding between CD137 and its ligand primarily involves CRDII in the mouse and CRDIII in the human. The possibility that CRDII plays a role in the binding of human CD137 to its ligand seems unlikely also in view of our finding that the anti-human CD137 mAb 4B4-1, which binds to CRDII, could not block the hCD137L-hCD137 binding, while the anti-human CD137 mAb h4-1BB-M127, which binds to CRDIII, blocked this binding. When studying other TNF family members such as the TNF1/TNFR1 and TNF10/TNFR10B crystallographic complexes, specific patches were detected on the receptor side, involving the second and third CRDs (CRD2 and CRD3), which appear to play a major role in modulating binding affinity and specificity [24][25][37]. In the literature, CD137 has been anticipated to bind its ligand through the conserved CRDII and CRDIII [36]. Recently, it has been reported that CRD1, also named the pre-ligand-binding assembly domain (PLAD), is physically distinct from the domain that forms the major contact with the ligand, and there is increasing evidence for PLAD-mediated receptor association [38].
Cross-binding of CD137L to CD137 was observed between human and mouse. The CD137-CD137L cross-binding showed the following characteristics: (i) Human CD137L can bind to mouse CD137, but mouse CD137L cannot bind to human CD137; the binding affinity of hCD137L for mouse CD137 is about 30% of that for human CD137. (ii) hCD137L cross-binding to mCD137 maps to mouse CD137 CRDII or CRDII+CRDIII. Although some TNFL/TNFR interactions are mutually exclusive, cross-interactions have been reported in a majority of cases [39].
Typical TNFR family members possess four CRDs that can be distinguished on the basis of primary sequence characteristics. Direct structural analysis of several TNFL:TNFR family complexes demonstrates that the majority of the residues contacting the cognate ligands are contributed by CRD2 and CRD3, with CRD1 and CRD4 making few contacts [37]. The contact regions between ligands and receptors are diverse in the third group of TNF family members. For example, both the human OX40 and CD40 receptors use the CRD1, CRD2, and CRD3 regions to bind their respective ligands. The binding interface lacks a single "hot spot" in OX40 [30], whereas the CD40-CD154 receptor-ligand binding is concentrated in two areas [29]. Our data indicate that only one CRD is predominantly responsible for the CD137L-to-receptor binding in both human and mouse.
It is unclear what causes the different CRDs in humans and mice to interact with their respective CD137L. We first analyzed the sequences of CRDII and CRDIII of human and mouse CD137 and found that the constitutive amino acid lengths of the CD137 extracellular regions were almost identical (Table 1). Alignment of the human and murine CD137 amino acid sequences has indicated 60% identity between the two species, and analysis of the aligned amino acid sequences of CRDII and CRDIII showed 65% and 71% identity, respectively. Furthermore, the numbers of hydrophilic and hydrophobic amino acids of human and mouse CD137 are similar (Table S2): 12/15 hydrophilic amino acids in CRDII and 10/10 in CRDIII, and 19/19 hydrophobic amino acids in CRDII and 12/11 in CRDIII, in reference to their sequences [2]. The cross-binding of hCD137L to mCD137 may be related to the similarity of the respective CRDs, although mutational analysis of CRDII and CRDIII will be needed to obtain more detailed information on the ligand-CRD interaction. There is only 25%-30% amino acid similarity between TNF-like ligands, which is largely confined to internal aromatic residues of assembled constructs [40], and there is only 36% amino acid identity between the CD137 proteins of mouse and human origin [2,31], which may contribute to the different ligand-receptor bindings. A crystal structure of the CD137L-CD137 complex and site-directed mutagenesis experiments are needed to further identify the important CRDII and CRDIII residues for ligand-receptor binding and to gain insight into whether any additional CRDs are also involved. There is much interest in developing novel therapies using agonist and antagonist mAbs to CD137 [41][42], and exact mapping of the CD137-CD137L binding should aid future research on CD137 and help the production of antibodies that are either agonistic or antagonistic and improve in vivo efficacy versus toxicity.
Figure 7. Blocking the binding of hCD137L to hCD137 by anti-human CD137 mAbs 4B4-1 and h4-1BB-M127. Panel A shows the binding of the two mAbs to hE, hE2, hE3, hE4 and control. Panels B, C and D show an experiment where human CD137L-mFc (hL) was coated (1 μg/ml) on an ELISA plate, blocked with milk and followed by adding hE (1 μg/ml) which had been preincubated with h4-1BB-M127 (B), 4B4-1 (C) or an anti-human CD3, OKT3 (D), at a concentration of 5 μg/ml or 25 μg/ml. (E) Inhibition of the binding of hCD137L to hCD137 at a series of concentrations of anti-human CD137 mAbs. doi:10.1371/journal.pone.0086337.g007
Supporting Information
Table S1 The primers used in RT-PCR for extracellular domains of CD137(E) and CD137L(L). (DOC) | 6,613.6 | 2014-01-21T00:00:00.000 | [
"Biology"
] |
Memcapacitor Crossbar Array with Charge Trap NAND Flash Structure for Neuromorphic Computing
Abstract The progress of artificial intelligence and the development of large‐scale neural networks have significantly increased computational costs and energy consumption. To address these challenges, researchers are exploring low‐power neural network implementation approaches and neuromorphic computing systems are being highlighted as potential candidates. Specifically, the development of high‐density and reliable synaptic devices, which are the key elements of neuromorphic systems, is of particular interest. In this study, an 8 × 16 memcapacitor crossbar array that combines the technological maturity of flash cells with the advantages of NAND flash array structure is presented. The analog properties of the array with high reliability are experimentally demonstrated, and vector‐matrix multiplication with extremely low error is successfully performed. Additionally, with the capability of weight fine‐tuning characteristics, a spiking neural network for CIFAR‐10 classification via off‐chip learning at the wafer level is implemented. These experimental results demonstrate a high level of accuracy of 92.11%, with less than a 1.13% difference compared to software‐based neural networks (93.24%).
Introduction
[3][4] As a matter of fact, the progress of AI inevitably accompanies the enlargement of the AI model. For instance, OpenAI's GPT-3, which has made remarkable progress in natural language processing and can even write basic program code, requires 175 billion parameters, and 1.6 trillion parameters are utilized in the case of Google's Switch Transformer. [5,6] Also, this trend is expected to continue as the effectiveness of large-scale neural networks is proven. [7] Therefore, tremendous computational cost and energy consumption are required to train state-of-the-art models, highlighting the importance of low-power neural network implementation. [8] Consequently, edge-oriented computing is emerging along with the decentralization of the computing paradigm due to limitations including the signal communication traffic induced by the von Neumann bottleneck. [9] Energy-efficient neural network implementations are also attracting attention for performing these compute-intensive tasks in resource-constrained environments. [10][13][14][15][16][17][18] A neuromorphic computing system is an architecture that overcomes the limitations of von Neumann computing structures, inspired by biological neural systems, in which synapses and neurons are massively integrated in parallel. [19,20] Synapses store weight information and generate output signals by multiplying inputs with weights, and neurons integrate the signals received from synapses and send them on as output signals. In this architecture, vector-matrix multiplication (VMM) can be efficiently and simultaneously conducted by maximized parallel connections. Furthermore, as an event-driven asynchronous system rather than a clock-based synchronous system, it is energy-efficient because the input is spatiotemporally sparse. [21,22][25][26][27][28][29] Since they are implemented in a crossbar array consisting of two-terminal devices, such synaptic devices have the advantages of simple fabrication, high cell density, intuitive VMM operation, and easy implementation of a large-scale array. [32][33][34] In contrast, transistor-type synaptic devices are free from these issues thanks to the gate electrode, [35][36][37][38] and these devices are typically integrated in NAND or NOR array structures. A NOR-type structure offers the advantage of parallel matrix operations, similar to artificial neural networks (ANNs), but requires individual drain contacts for each cell, resulting in an area inefficiency of over 10F 2 . Conversely, NAND-type structures occupy an area of 4F 2 , providing better area efficiency, but lack parallel matrix operations due to their serial connection structure. [39] Memcapacitive devices based on capacitive coupling have recently been studied, as their two-terminal structure allows crossbar array integration. [40] Thanks to their efficient prevention of sneak current through the open-circuit nature of a capacitor, an additional selecting device, such as a transistor or a selector, is not necessary for a memcapacitor array configuration. Furthermore, memcapacitors have the advantage of possessing a significantly high effective resistance, making them less susceptible to performance degradation by line resistance. Also, the high static power inherent in a resistance-based array system can be effectively eliminated, as capacitors consume only dynamic power. Experimental implementations of memcapacitor crossbar arrays have mainly utilized ferroelectric switching-based capacitors due to their low switching voltage and fast switching speed.
[41] However, this approach has limitations due to a lower multi-level capability upon scaling down, lower reliability compared to charge trap flash (CTF) memory, and a larger cell size of over 4F 2 needed to form separate doped silicon regions. [42] In some previous studies, simple images were created in-house to implement pattern recognition applications in fabricated synaptic arrays. [42,43] However, it is straightforward to classify such simple patterns, making it challenging to precisely assess the performance of hardware-based neural networks. Furthermore, the absence of a standard dataset for evaluating neural network performance in these studies makes it difficult to compare their results. Some studies have relied on modeling a single cell rather than fabricating a synapse array to construct the neural network, [44,45] which limits the experiments to simulations for the demonstration of cognitive functions.
In this article, we present an 8 × 16 memcapacitor crossbar array based on the NAND flash architecture with a charge-trapping layer.It aims to combine the high cell density of a crossbar array with the technological maturity of CTF cells for reliable operations.Our proposed device maintains the small footprint of the NAND array (4F 2 ) while enabling program (PGM)/erase (ERS) of individual cells and parallel VMM processing, facilitating accurate weight transfer and VMM operations.Additionally, we experimentally demonstrate an energy-efficient hardware neural network by applying systematic techniques to minimize accuracy loss in converting ANNs to spiking neural networks (SNNs) using the fabricated memcapacitor crossbar array for CIFAR-10 classification.
Device Fabrication
The fabrication process of the memcapacitor crossbar array is shown in Figure 1a.First, 300 nm thick buried oxide (BOX) was formed on a bulk silicon wafer by wet oxidation using a fur-nace at 1000 °C for 55 min.Subsequently, a polysilicon substrate of 300 nm thick was deposited by low-pressure chemical vapor deposition (LPCVD) at 625 °C and 150 mTorr using SiH 4 of 60 sccm.B + ions for body doping were implanted with a dose of 5 × 10 13 cm −2 , followed by drive-in at 1050 °C for 3 h.After etching the active region, medium temperature oxide (MTO) of 4 nm thick was deposited as a tunneling oxide by LPCVD at 782 °C and 350 mTorr using SiH 2 Cl 2 of 40 sccm and N 2 O of 160 sccm.Si 3 N 4 of 5.5 nm was deposited as a charge-trapping layer by LPCVD at 785 °C and 200 mTorr using dichlorosilane (DCS, SiH 2 Cl 2 ) of 30 sccm and NH 3 of 100 sccm.Then, atomic layer deposition (ALD) was used for 9 nm thick Al 2 O 3 layer deposition as a blocking oxide using trimethylaluminum (TMA, Al(CH 3 ) 3 ) and H 2 O as precursors.A TiN metal gate of 50 nm was deposited by sputtering, and 25 nm thick SiO 2 was deposited as a hard mask by PECVD using tetraethoxysilane (TEOS, Si(OC 2 H5) 4 ) of 700 sccm and O 2 of 700 sccm at 380 °C.After etching the hard mask by dry etch with CHF 3 of 25 sccm, SF 4 of 5 sccm, and Ar of 70 sccm and wet etch with buffered oxide etchant (BOE), the TiN gate was etched by wet etch process using H 2 O 2 :DI water (=1:4) solution at 60 °C for 45 min in order to completely remove metal sidewall along with the active area.As + (dose: 5 × 10 15 cm −2 , energy: 30 keV) and BF 2 + (dose: 5 × 10 15 cm −2 , energy: 30 keV) ions were implanted to form source and substrate contact regions, respectively.Finally, a conventional back-end-of-line (BEOL) process consisting of inter-layer-dielectric, contact hole formation, and metallization was carried out.Figure 1b shows a transmission electron microscopy (TEM) image of the fabricated memcapacitor based on a CTF cell positioned at each cross-point, confirming that TiN/Al 2 O 3 /Si 3 N 4 /SiO 2 /poly-Si (TANOS) stack was successfully formed.In Figure 1c,d, scanning electron microscopy (SEM) images confirm that bitlines (BLs) and wordlines (WLs) were fabricated in the 8 × 16 crossbar structure, where each cell was a size of 1 × 5 μm 2 .In this context, we designated the substrate electrode receiving the pre-synaptic signal as WLs and the gate connected to the post-synaptic neuron as BLs, unlike conventional NAND flash electrode names.
Device Characterization
Figure 2a depicts the readout scheme of the memcapacitor crossbar array for VMM computations. The pulse timing diagram shows that pre-synaptic spikes, V in,i , are applied to the i-th WL, while the unselected WLs are in a floating state. During this time period, the BL voltage (V BL ) is charged to the charging voltage (V c ), and charge accumulation occurs only across the memcapacitors in the selected WLs because the floating state ensures that the unselected memcapacitors remain uncharged. Afterward, as the BL discharges from V c to the discharging voltage (V d ), the charge Q j of the j-th BL flows out of the BL as a current I j (t), which may be expressed using the following Equation (1): Q j = ∫ I j (t) dt = Σ i V in,i · ∫ C i,j dV (1), where the capacitance integral is taken from V d to V c . Equation (1) shows that the VMM operation can be conducted through Q j when V in,i and ∫ C i,j dV represent the pre-synaptic spikes and synaptic weights, respectively. A synaptic weight is defined as ∫ C i,j dV instead of C i,j since it involves the nonlinear capacitance-voltage (C-V) characteristics of a MOS capacitor. All BLs can be read simultaneously, making parallel computation possible for artificial neural networks. The electrical characteristics of the memcapacitor cell were verified to validate the operation scheme of the memcapacitor crossbar array. Firstly, the C-V characteristics were measured depending on the device state at a high frequency of 100 kHz, as shown in Figure 2b, confirming that the flat-band voltage (V FB ) can be shifted by the incremental step pulse program (ISPP) and erase (ISPE) schemes. Figure 2c illustrates the charge-voltage (Q-V) characteristics obtained by integrating the C-V curves. Due to capacitance changes occurring when the surface state of the substrate transitions from accumulation (C ox ) to depletion (C dep ) in the C-V characteristics, a nonlinear region shifts in response to the PGM and ERS states. By selecting proper V c and V d to cover the depletion region, the synaptic weight can be extracted as w = Q(V c )-Q(V d ) (see Figure S1, Supporting Information, for more details). The readout operation was demonstrated by measuring the discharging current (I dis ) according to PGM states, as shown in Figure 2d. When V c was applied to the WL electrode, it induced a charge Q(V c ) across the capacitor. Subsequently, when the WL voltage was reduced to V d , the capacitor discharged to the charge Q(V d ), causing I dis to flow out of the BL. It is confirmed that I dis decreased with the decrease in V FB , since the capacitance value at V d becomes smaller (depletion region), and the synaptic weight was obtained by integrating I dis with respect to time. Since the weight window (w max -w min ) is determined by the read condition (V c and V d ) during the readout operation, the weight values were measured and extracted under different device states by varying either V d or V c , as shown in Figure 2e,f. The characteristics of the weight window, weight fine-tuning, and energy consumption depend on V c and V d . A large voltage difference (V c -V d ) is required to achieve a larger weight window (w max -w min ) by utilizing the wide region of the C-V curves from depletion to accumulation with the same amount of V FB shift. It allows accurate weight fine-tuning, but it can increase energy consumption due to the high operation voltage. Conversely, a small V c -V d can reduce the energy consumption, but a more precise fine-tuning process is required for weight modulation due to a narrow V FB shift region, which can increase the number of tuning cycles and lead to an increase in the electrical stress of the memcapacitor. As depicted in Figure 2e, the increase in the weight window reaches a saturation point with increasing V c while V d is fixed at 0 V, because a higher V c extends into the minimum capacitance region.
Hence, raising V c too high is unnecessary, and memory windows exceeding a certain threshold become redundant. To achieve a wider weight window while maintaining a low operating voltage, V c and V d were set to 1 and 0 V, respectively, during the readout operations.
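As a rough illustration of the charge-domain readout and of Equation (1), the following Python sketch models each cell with a generic smooth C-V transition between a depletion and an accumulation capacitance, defines the synaptic weight as the integral of C(V) from V d to V c , and sums the weights of the spiking word lines to obtain the bit-line charge. The C-V model and all numerical values are assumptions made for illustration, not the measured characteristics of the fabricated device.

import numpy as np

C_OX, C_DEP = 10e-15, 2e-15      # accumulation / depletion capacitance per cell [F] (assumed)
V_C, V_D = 1.0, 0.0              # read voltages used in the text [V]

def capacitance(v, v_fb, slope=0.15):
    # Generic smooth C-V curve: C_DEP well below V_FB, C_OX well above it (illustrative model).
    return C_DEP + (C_OX - C_DEP) / (1.0 + np.exp(-(v - v_fb) / slope))

def weight(v_fb, n=201):
    # Synaptic weight w = Q(V_c) - Q(V_d), i.e. the integral of C(V) dV from V_d to V_c.
    v = np.linspace(V_D, V_C, n)
    c = capacitance(v, v_fb)
    return float(np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(v)))   # trapezoidal integration

rng = np.random.default_rng(0)
v_fb = rng.uniform(-0.5, 1.5, size=(8, 16))          # programmed flat-band voltages (assumed)
weights = np.vectorize(weight)(v_fb)                  # 8 x 16 weight matrix [C]

spikes = rng.integers(0, 2, size=8)                   # binary pre-synaptic spike vector V_in,i
q_bl = spikes @ weights                               # Equation (1): Q_j = sum_i V_in,i * w_ij
print("bit-line charges [fC]:", np.round(q_bl * 1e15, 2))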
Based on the operation principle of the proposed memcapacitor, it is necessary to have a significant difference between the capacitances of the depletion (C dep ) and accumulation (C ox ) states to obtain stable and precise multi-bit operation with a higher weight window. From a device fabrication perspective, achieving a large C ox requires a thin gate stack with a high dielectric constant, while a low C dep demands a thick active layer and a low body doping concentration. However, gate stack engineering should be carried out carefully so that the device functions as a CTF memory cell. Also, if the doping concentration of the active region is too low, accessing a specific cell becomes difficult due to the high resistance of the selected active region when the target cell is fully depleted. This resistance issue becomes more pronounced when multiple cells are selected on the same BL. Therefore, it is required to properly determine the thickness and doping concentration of the active layer, considering the trade-off relationship between these resistance issues and the memory characteristics. With optimization in
these aspects, it is expected that nanometer-scale device scaling down could be achieved with a proportional decrease in energy consumption.
Array Operation
To characterize the electrical properties of the fabricated 8 × 16 memcapacitor array, we used different PGM and ERS schemes compared to conventional NAND flash memory, as each WL has an independent substrate contact (see "Experimental Setup" section). [46]It is important that surrounding cells maintain their states during the PGM or ERS operation of a selected cell to en-sure the accurate transfer of pre-trained weights onto the fabricated array.To verify the effectiveness of the PGM/ERS schemes including disturbance, the changes in the weight value of a selected target cell and surrounding unselected 8 cells in 3 × 3 subarray were measured during ISPP and ISPE operations.The read condition used for extracting synaptic weights was V c of 1.0 V and V d of 0.0 V. Starting from the fully erased state of all the memcapacitors, only the target cell was programmed by ISPP program voltage (V PGM ) from 6.0 to 10.0 V in steps of 0.1 V. Figure 3a confirms that the states of the surrounding cells kept the same while only the target cell weight was increased.Similarly, the target cell was erased by ISPE erase voltage (V ERS ) from −6.0 to −10.0 V in steps of −0.1 V, with all cells initially fully programmed.It was observed that only the target cell weight was decreased whereas there was no change in the synaptic weights of the surrounding cells, as shown in Figure 3b.These results verify that the proposed PGM/ERS schemes are effective in selectively modifying the synaptic weights of target cells without affecting the surrounding cells.
For reliable and accurate inference operations in a neuromorphic system, it is necessary to accurately set the device weight to the desired target value, and we employed a closed-loop weight fine-tuning scheme (see Figure S2, Supporting Information).Starting with an initial PGM/ERS voltage of ±6 V, a pulse step of ±0.1 V, and cycle number n = 0, the current weight state Q was verified under the read condition V c = 1.0 V and V d = 0.0 V.The weight fine-tuning was completed when the criterion (Q t -ΔQ < Q < Q t + ΔQ) was met, where Q t and ΔQ are the target weight and an error margin, respectively.Otherwise, V PGM or V ERS was applied to change the weight state depending on the current state.The fine-tuning was considered a failure if the maximum cycle (N = 200) or maximum PGM/ERS voltage (±10.0V) was exceeded, which was determined considering the device reliability.To get linearly separated 4-bit level weight values in the memcapacitor, the weight fine-tuning operation was performed with the error margin (ΔQ) of 0.025 fC for a whole memcapacitor crossbar array.As a result, the cumulative probability distribution in Figure 3c confirms that all the cells were accurately adjusted to the target 4-bit state.On average, 23 fine-tuning pulses were required to adjust the synaptic weight from the previous state to the next state (see Figure S3, Supporting Information).
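A minimal sketch of the closed-loop fine-tuning procedure described above is given below in Python. The functions read_weight, apply_pgm and apply_ers stand in for the actual instrument calls, and the toy cell response (a fixed weight shift per pulse) is an assumption used only to make the loop runnable; the termination criteria follow the text (error margin ΔQ, a maximum of 200 cycles, and PGM/ERS voltages limited to ±10 V).

def fine_tune(cell, q_target, dq=0.025e-15, v_start=6.0, v_step=0.1, v_max=10.0, n_max=200):
    # Program/erase one memcapacitor cell until its weight Q, read at V_c = 1.0 V and
    # V_d = 0.0 V, falls inside [q_target - dq, q_target + dq].
    v_pgm, v_ers = v_start, -v_start
    for n in range(n_max):
        q = cell.read_weight()                      # verify step
        if q_target - dq < q < q_target + dq:
            return True, n                          # tuned within the error margin
        if q < q_target:                            # weight too low -> ISPP program pulse
            if v_pgm > v_max:
                break
            cell.apply_pgm(v_pgm)
            v_pgm += v_step
        else:                                       # weight too high -> ISPE erase pulse
            if abs(v_ers) > v_max:
                break
            cell.apply_ers(v_ers)
            v_ers -= v_step
    return False, n_max                             # exceeded the cycle or voltage limit


class ToyCell:
    # Stand-in for the real device: each pulse shifts the stored weight by a fixed
    # amount (assumed); the pulse-height dependence of a real cell is omitted here.
    def __init__(self, q0=10.5e-15):
        self.q = q0
    def read_weight(self):
        return self.q
    def apply_pgm(self, v):
        self.q += 0.05e-15
    def apply_ers(self, v):
        self.q -= 0.05e-15

ok, cycles = fine_tune(ToyCell(), q_target=12.0e-15)
print("tuned:", ok, "after", cycles, "verify cycles")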
In addition, the reliability characteristics including endurance and retention were investigated since the performance of neural networks can be significantly affected.Above all, the read disturbance of the 4-bit level weight state for 500 read cycles was verified, as shown in Figure 3d.It is observed that there was almost no read disturbance in the fabricated crossbar array under the read condition in the results of extrapolation up to 10 3 read cycles from the measurement data.Moreover, the retention characteristics of the 4-bit level weight were experimentally demonstrated at room temperature (300 K), as shown in Figure 3e, confirming that the 4-bit weight level was still accurately distinguished up to 10 4 s.We also evaluated the cycling endurance using V PGM (12 V, 100 μs) and V ERS (−14 V, 1 ms), as depicted in Figure 3f.The weight window remained almost unchanged even after 10 3 cycles of PGM/ERS operations.Subsequently, the multi-level cycling endurance for the 4-bit weight level was evaluated, as illustrated in Figure 3g.The 4-bit level weight could be constantly manipulated by applying PGM/ERS pulses over 10 4 times yet exhibited good endurance characteristics.
Finally, to verify the accuracy of VMM operations in various weight mapping scenarios, we randomly selected memcapacitor cells in the crossbar array and programmed some of them with V PGM values ranging from +6.0 to +10.0 V, while the others were erased with V ERS values ranging from −6.0 to −10.0 V, resulting in a distribution of randomized weights across the array.Pre-synaptic inputs were then applied to randomly selected WLs simultaneously, emulating the sparsity of SNNs.The charges at BLs, corresponding to the weighted sum, were extracted and compared with the sum of each individual cell weight, as shown in Figure 3h.The correlation diagram shows the weighted sum of individual cells on the x-axis and the BL weighted sum when mul-tiple WLs were selected simultaneously on the y-axis.It shows an excellent correlation between the x-axis and y-axis, with a substantially small root-mean-squared error (RMSE) of 0.62 fC, meaning that the VMM operation can be accurately performed in the fabricated memcapacitor array.The correlation was consistent regardless of the number of selected WLs, which indicates that the capacitor coupling impact on the floating node hardly occurred and there was no sneak current issue, which is a common problem in memristor crossbar arrays.
Hardware SNN Demo
A partially hardware-based SNN for CIFAR-10 classification was experimentally demonstrated using the fabricated memcapacitor crossbar array with a slightly modified VGGNet-7 (see Experimental Section).Specifically, a hidden layer was added with eight neurons right before the output layer, as shown in Figure 4a.While the convolution layers and the early fully connected layers were preprocessed by software, the final 8 × 10 fully connected layer was experimentally implemented in wafer-level hardware.The performance of the SNN was verified by post-processing the measured output from the memcapacitor array, assuming that ideal integrate-and-fire (I&F) neurons were utilized. [47]Two synaptic devices were paired to implement negative weights.Positive weights were implemented by adjusting the excitatory synaptic devices together with inhibitory synaptic devices fixed at the minimum weight.In contrast, negative weights were implemented by adjusting the inhibitory synaptic devices together with excitatory synaptic weights fixed at the minimum weight.Therefore, the weights trained through software were normalized to fit within the memcapacitor weight range (≈ −5 fC < w < +5 fC) and transferred to two 8 × 10 arrays, as shown in Figure 4b.The weight transfer was performed one by one, resulting in the adjustment of a total of 160 cells in both the excitatory and inhibitory arrays to their target values.When implementing positive weights, the cells in the inhibitory array were fully erased, while for negative weight implementation, the cells in the excitatory array were fully erased.The target weights were then adjusted accordingly.Following this, the 160 cells in the two 8 × 10 arrays were accessed one by one to precisely adjust the weights to their target values.The weighted sum was then experimentally measured by applying pre-synaptic inputs to the 8-WL and post-processing the outputs to determine their accuracy.The I&F neurons were assumed to have a membrane capacitance (C mem ) of 4.7 fF and a threshold of 1.0 V.
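The differential weight mapping and the ideal I&F neuron described above can be summarized in the following hedged Python sketch. The device parameters follow the text (C mem = 4.7 fF, threshold 1.0 V, weight magnitudes of roughly 0-5 fC), while the timestep loop, the random inputs and the reset-by-subtraction firing rule are illustrative assumptions rather than the exact post-processing used in the experiments.

import numpy as np

W_MIN, W_MAX = 0.0, 5e-15            # usable weight magnitude per device [C] (assumed)

def to_differential(w_signed):
    # Map signed software weights onto excitatory/inhibitory device pairs: positive
    # weights tune the excitatory cell, negative weights tune the inhibitory cell,
    # and the other cell of each pair stays at the minimum weight.
    w = np.clip(w_signed, -W_MAX, W_MAX)
    w_exc = np.where(w > 0, w, W_MIN)
    w_inh = np.where(w < 0, -w, W_MIN)
    return w_exc, w_inh

class IFNeuron:
    # Ideal integrate-and-fire neuron fed by the excitatory/inhibitory BL charge difference.
    def __init__(self, c_mem=4.7e-15, v_th=1.0, v_precharge=0.0):
        self.c_mem, self.v_th = c_mem, v_th
        self.v_mem = v_precharge          # PCMP: the membrane may start above 0 V
        self.spikes = 0
    def step(self, q_exc, q_inh):
        self.v_mem += (q_exc - q_inh) / self.c_mem
        while self.v_mem >= self.v_th:    # reset by subtraction on firing
            self.spikes += 1
            self.v_mem -= self.v_th
        return self.spikes

rng = np.random.default_rng(1)
w_soft = rng.uniform(-5e-15, 5e-15, size=8)       # incoming weights of one output neuron
w_exc, w_inh = to_differential(w_soft)
neuron = IFNeuron()

for _ in range(100):                              # 100 illustrative timesteps
    x = rng.integers(0, 2, size=8)                # binary pre-synaptic spikes
    neuron.step(float(x @ w_exc), float(x @ w_inh))
print("output spikes after 100 timesteps:", neuron.spikes)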
After a total of 160 weights, including both excitatory and inhibitory synapses, were transferred to the fabricated memcapacitor array through the weight fine-tuning procedure, we compared them to the desired target weights, as shown in Figure 4c.The discrepancy between them can come from several factors.During the weight fine-tuning process, soft PGM/ERS can cause disturbances in cells that have already been fine-tuned.Despite PGM/ERS inhibition characteristics with respect to ISPP/ISPE, disturbances can occur when applying a large number of pulses required to transfer all 80 synaptic weights on the excitatory and inhibitory arrays.Additionally, the verification process during fine-tuning involved C-V measurement, which applies ac small signals to dc bias and can cause disturbances due to the stress caused by the DC bias.Despite these disturbances, it presents a good correlation between the ideal weights and the transferred weights in the memcapacitor array with an RMSE of 0.48 fC.In classification tasks, the decision-making process only involves the most frequently firing neurons, so it is sufficient to precisely determine the output as long as the correlation is maintained to some extent.
By post-processing the measured transient discharging current characteristics of all the BLs, we compared the performance of the hardware-based SNN with that of the ideal software-based SNN, as shown in Figure 4d. During the inference time of 12 ms, the hardware-based SNN for image classification achieved an 88.76% accuracy with the conventional I&F neuron model, which is nearly identical to the 89.95% accuracy obtained by software-based SNN simulation with the ideal target weights. However, there was a large difference compared to the ANN baseline accuracy of 93.24%. This was due to the considerable latency, which prevented the performance from reaching a steady state within 12 ms. To improve the SNN accuracy to that of the ANN, we applied pre-charged membrane potential (PCMP) and delayed evaluation (DE) techniques to the conventional I&F neuron model, which have been reported to help reduce the inherent latency of SNNs. [48,49] When the membrane potential of the I&F neuron was pre-charged to 0.4 V prior to inference by PCMP, a significant reduction in the latency of the hardware-based SNN was observed, and a classification accuracy of 90.71% was obtained. This result indicates an accuracy drop of only 1.26%p compared to that of the software-based SNN (91.97%) with the ideal weights using the same pre-charging level. We assessed the performance by additionally applying a DE of ≈4 ms to the hardware-based SNN. As a result, the accuracy was 92.11% in the hardware demonstration, indicating an additional accuracy improvement of ≈1.40%p (92.11% - 90.71% = 1.40%p). Even though the accuracy was reduced by ≈1.08%p compared to that of the software-based SNN with the ideal weights (93.19%) when the same techniques were used, this is a noteworthy outcome given that the wafer-level hardware demonstration achieved significantly high accuracy close to the ANN baseline accuracy. Consequently, it was demonstrated that the weight transfer was successfully performed, resulting in an average accuracy drop within 1.18%p relative to the software-based SNN in all cases.
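As a hedged sketch of the two latency-reduction techniques, the snippet below applies delayed evaluation to mock output spike trains by counting spikes only after a fixed delay before taking the arg-max decision; PCMP corresponds to constructing the neuron with a pre-charged membrane, for example IFNeuron(v_precharge=0.4) in the sketch above. The spike trains, timestep and delay values are illustrative assumptions.

import numpy as np

def classify(spike_trains, t_de=0.0, dt=1e-4):
    # spike_trains: array of shape (10, T) holding 0/1 output spikes per timestep.
    # Delayed evaluation (DE): only spikes emitted at t >= t_de are counted before
    # taking the arg-max over the 10 output neurons.
    start = int(round(t_de / dt))
    counts = spike_trains[:, start:].sum(axis=1)
    return int(np.argmax(counts))

rng = np.random.default_rng(2)
mock_trains = rng.integers(0, 2, size=(10, 120))   # 120 steps of 0.1 ms = 12 ms inference
print("prediction without DE:", classify(mock_trains))
print("prediction with ~4 ms DE:", classify(mock_trains, t_de=4e-3))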
In addition, we investigated the impact of weight transfer errors on the performance of the hardware-based SNN. Figure 5a-e presents the weight distributions transferred to the memcapacitor crossbar array with varying RMSE values from 0.48 to 0.83 fC.While all the weight distributions were relatively close to the ideal weight distribution shown in the inset of Figure 5a, the accuracy drop increased as the RMSE value of the transferred weight increased, as illustrated in Figure 5f displaying the best accuracy achieved during SNN inference time of 12 ms with all the techniques for low-latency SNN (such as PCMP and DE) applied.It confirms that an accurate weight transfer, which can be obtained by the fine-tuning process and charge-trapping layer, is required for the high performance of hardware-based neural networks.Additionally, if the entire network were implemented in hardware instead of a subset with the same level of error, a greater decrease in accuracy would be expected; therefore, a highly accurate weight control capability is essential to achieve high-performance hardware-based SNNs (see Figure S4, Supporting Information).
Conclusion
In this paper, we have demonstrated the feasibility of the memcapacitor crossbar array based on CTF and its potential for implementing hardware SNNs.Despite its structural similarity to NAND flash and high integration density of 4F 2 , which was fabricated using standard Si CMOS processes, this array enabled simultaneous parallel VMM computations by operating based on capacitive coupling behaviors rather than transistor-based operations.The weight modulation of the memcapacitor devices within the crossbar array and the readout scheme based on charging and discharging were verified.Moreover, we confirmed the PGM/ERS operation characteristics of the whole crossbar array, including inhibition and disturbance, and demonstrated accurate VMM operations within the memcapacitor crossbar array.Finally, we implemented the hardware SNN in the memcapacitor crossbar array and achieved nearly the same CIFAR-10 recognition performance as the baseline accuracy, thanks to the accurate weight adjustment and reliability characteristics of CTF cells.The experimental validation of the memcapacitor crossbar array with 4F 2 cell density using CTF is expected to lead to stable and accurate neuromorphic computing systems.
Experimental Section
Measurement Details: To characterize the memcapacitor devices, electrical measurements were conducted using various modules of the Keysight B1500A parameter analyzer, including the source measure unit (SMU), high voltage semiconductor pulse generator unit (HV-SPGU), multi-frequency capacitance measurement unit (MFCMU), and waveform generator/fast measurement unit (WGFMU).Especially, ISPP/ISPE was applied to the memcapacitor to confirm V FB shift through C-V measurements.Then, the synaptic weight was extracted by integrating the current over time, which was obtained from transient measurement using the WGFMU.For the characterizations of a whole array, the Keysight E5250A low-leakage switch mainframe and a 32-channel probe card were used for BL and WL selections.The built-in programming tool was employed and customized to control both the parameter analyzer and switching matrix.
Program and Erase Scheme for Memcapacitor Crossbar Array: The fabricated memcapacitor crossbar array had a structural similarity to a NAND flash array, but there were differences in the substrate commonality and dopant types of the source/drain, leading to notable discrepancies during PGM and ERS operations.When comparing PGM operations between a NAND flash array and the fabricated memcapacitor array, differences could be observed in the applied voltage schemes.In a NAND flash array, V PGM was applied to the selected word line (WL), and pass voltage (V pass ) was applied to the unselected WL with a grounded substrate.Then, the target cell was programmed while the unselected cells on the same WL were inhibited from programming through self-boosting.When comparing ERS operations, it is important to note that in a NAND array, all BLs are connected to a common substrate, which makes it impossible to erase individual cells.Therefore, block ERS was performed, which could be disadvantageous for weight transfer because fine-tuning of weights requires the erasure of individual cells.
In contrast, in the fabricated memcapacitor array, for the BL with the target cell, a sufficiently large positive bias (V PGM ) was applied to trap electrons by FN tunneling, while a sufficiently large negative bias (V ERS ) was applied for hole injection.Simultaneously, the BLs of unselected cells received either +V pass or −V pass , with a magnitude sufficient to induce the channel under the target cell by pulling carriers from the n + or p + region of the substrate, all while avoiding FN tunneling in the unselected cells.Additionally, either the n+ or p+ region was set to 0 V for electron trapping and hole trapping, respectively, while the other was left in a floating state.Conversely, it was essential to inhibit all memcapacitors for BLs without the target cell.In this scenario, preventing channel formation was crucial because certain memcapacitors still experienced the carrier injection voltage (V PGM or V ERS ) applied to these BLs.This could be achieved by applying +V pass or −V pass to the n + or p + region supplying the target carriers, while leaving the other region floating.In this experiment, V PGM was employed within a voltage ranging from +6.0 to +10.0 V and V pass at 5 V in the PGM state.For the ERS state, V ERS was used within a voltage ranging from −6.0 to −10.0 V, with V pass set at −3 V.The illustration of the voltage scheme in the memcapacitor array is depicted in Figure S5 (Supporting Information).
Training ANN for Off-Chip Learning and SNN Conversion: The trained neural network was a modified VGGNet-7 composed of 3C128-3C128-AP2-3C256-3C256-AP2-3C512-3C512-AP2-FC1024-FC8-FC10 where nCm, APn, and FCn represent m convolution filters of size n × n, average pooling layer of size n × n, and fully connected layer with n neurons, respectively (see Figure 4a).The network was trained on the CIFAR-10 dataset using stochastic gradient descent with a momentum of 0.9 as the optimizer.The initial learning rate was 0.1, which was decreased by a factor of 0.1 after 120, 220, and 280 epochs.To improve generalization performance and training speed, batch normalization was applied to all layers except for the output layer, and data augmentation was used based on a random 32 × 32 crop from an image padded by four pixels on each side and with horizontal flipping.After training, the network achieved a classification accuracy of 93.24% on the test dataset.The trained weights were normalized using the ANNs-to-SNNs conversion method and then rescaled to fit the weight range of the memcapacitor device, making them suitable for the off-chip training of SNNs.
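A hedged PyTorch sketch of the modified VGGNet-7 and the stated training configuration is shown below. The layer widths, optimizer settings, learning-rate schedule and data augmentation follow the description above, while the activation functions, padding and the exact placement of batch normalization are assumptions where the text does not specify them.

import torch.nn as nn
import torch.optim as optim
from torchvision import transforms

def conv_block(c_in, c_out):
    # 3x3 convolution + batch normalization + ReLU (activation and padding are assumptions)
    return nn.Sequential(nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                         nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

# 3C128-3C128-AP2-3C256-3C256-AP2-3C512-3C512-AP2-FC1024-FC8-FC10
model = nn.Sequential(
    conv_block(3, 128), conv_block(128, 128), nn.AvgPool2d(2),
    conv_block(128, 256), conv_block(256, 256), nn.AvgPool2d(2),
    conv_block(256, 512), conv_block(512, 512), nn.AvgPool2d(2),
    nn.Flatten(),
    nn.Linear(512 * 4 * 4, 1024), nn.BatchNorm1d(1024), nn.ReLU(inplace=True),
    nn.Linear(1024, 8), nn.BatchNorm1d(8), nn.ReLU(inplace=True),   # added 8-neuron hidden layer
    nn.Linear(8, 10),                                               # output layer, no batch norm
)

optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[120, 220, 280], gamma=0.1)

# Data augmentation described in the text: random 32 x 32 crop from an image padded by
# four pixels on each side, plus random horizontal flipping.
train_tf = transforms.Compose([transforms.RandomCrop(32, padding=4),
                               transforms.RandomHorizontalFlip(),
                               transforms.ToTensor()])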
Figure 1.Device fabrication.a) BL and WL cross-sectional views of the fabrication process.b) Transmission electron microscopy (TEM) image of a gate stack in a memcapacitor.c) Scanning electron microscopy (SEM) image of a fragment of fabricated memcapacitor array.Individual memcapacitors are located at the cross point of BLs and WLs.d) SEM image of 8 × 16 memcapacitor crossbar array.
Figure 2. Electrical characteristics of memcapacitor device based on CTF.a) Readout scheme of memcapacitor crossbar array for VMM.Only the charge stored in memcapacitors on selected WL contributes to the parallel summation of BL current during the discharge stage.b) Measured C-V characteristics with V FB shift by ISPP and ISPE methods.c) Q-V characteristics extracted by integrating C-V curve depending on the device state.The non-linear transition region is apparent as the memcapacitor switches from accumulation to depletion state.d) Transient charging and discharging current characteristics according to the memcapacitor state.A change in the current occurred when the memcapacitor was charged and discharged by voltages that covered the non-linear region.e,f) Extracted weight values by varying V c or V d to find the optimal read condition for a sufficiently large weight window.
Figure 3. Switching and VMM characteristics of 8 × 16 memcapacitor crossbar array.a,b) Weight modulations of target cells and surrounding cells.The target cells were programmed and erased using ISPP and ISPE, while the surrounding cells were inhibited.c) Weight fine-tuning results for all devices in the memcapacitor array, with equally spaced 4-bit levels between 10.5 and 14 fC.d) Read disturbance results for 4-bit levels measured for up to 500 cycles and extrapolated up to 10 3 cycles.e) Retention characteristics of 4-bit weight levels measured at room temperature (300 K) for 10 4 s.f) Cycling endurance characteristics measured by repeatedly applying V PGM of 12 V and 100 μs and V ERS of −14 V and 1 ms, switching the device over 10 3 cycles between its two end-states.g) Cycling endurance characteristics of the multi-level readout, measured by gradually increasing and decreasing the 4-bit levels through weight fine-tuning.Even after more than 10 4 repetitive pulses, the weight could be fine-tuned properly.h) VMM results when different numbers of WLs were randomly selected.The sum of the weights read through individual access (on the x-axis) and the BL charges when those cells were read simultaneously (on the y-axis) showed a substantially accurate linear relationship.
Figure 4. Implementation of a hardware-based SNN for CIFAR-10 classification. a) Illustration of the modified VGGNet-7. b) Schematic diagram of a hardware-based SNN implementation in memcapacitor crossbar arrays. Pre-trained weights were normalized to accommodate the memcapacitor weight range and transferred to two arrays. c) Correlation diagram between the ideal weights trained by the software-based ANN and the transferred weights in the array. d) Accuracy comparison of software- and hardware-based SNNs according to timestep.
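For readers unfamiliar with why the accuracy in panel d depends on the timestep, the sketch below shows the generic rate-coded integrate-and-fire mechanism by which converted SNNs approximate their source ANN; it is a textbook illustration under simple Poisson-like input assumptions, not the authors' simulator.

```python
import numpy as np

def if_layer_rate(x, w, timesteps=100, v_th=1.0):
    """Rate-coded integrate-and-fire layer used in ANN-to-SNN conversion.

    x -- input firing probabilities in [0, 1], shape (n_in,)
    w -- weight matrix, shape (n_out, n_in)

    Returns output firing rates, which approach the clipped ReLU response
    of the original ANN layer as the number of timesteps grows -- the reason
    accuracy in Figure 4d improves with timestep.
    """
    rng = np.random.default_rng(1)
    v = np.zeros(w.shape[0])          # membrane potentials
    spikes_out = np.zeros(w.shape[0])
    for _ in range(timesteps):
        s_in = (rng.random(x.shape) < x).astype(float)  # Poisson-like input spikes
        v += w @ s_in                 # integrate weighted input
        fired = v >= v_th
        spikes_out += fired
        v[fired] -= v_th              # soft reset by subtraction
    return spikes_out / timesteps
```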
Figure 5. Evaluation of classification accuracy at steady state with respect to different RMSE values of the transferred weight distributions in the fabricated synapse array. a-e) Weight distributions with different RMSE values, ranging from 0.48 to 0.83 fC. The inset in panel a shows the ideal pre-trained weight distribution. f) Evaluation of SNN inference accuracy using weight distributions with different RMSE values and low-latency SNN techniques, including PCMP and DE. The highest accuracy was achieved at an RMSE value of 0.48 fC, and the accuracy drop increased with increasing RMSE values. | 7,952 | 2023-09-26T00:00:00.000 | [
"Computer Science"
] |
Deficiency of the eIF4E isoform nCBP limits the cell-to-cell movement of a plant virus encoding triple-gene-block proteins in Arabidopsis thaliana
One of the important antiviral genetic strategies used in crop breeding is recessive resistance. Two eukaryotic translation initiation factor 4E family genes, eIF4E and eIFiso4E, are the most common recessive resistance genes whose absence inhibits infection by plant viruses in Potyviridae, Carmovirus, and Cucumovirus. Here, we show that another eIF4E family gene, nCBP, acts as a novel recessive resistance gene in Arabidopsis thaliana toward plant viruses in Alpha- and Betaflexiviridae. We found that infection by Plantago asiatica mosaic virus (PlAMV), a potexvirus, was delayed in ncbp mutants of A. thaliana. Virus replication efficiency did not differ between an ncbp mutant and a wild type plant in single cells, but viral cell-to-cell movement was significantly delayed in the ncbp mutant. Furthermore, the accumulation of triple-gene-block protein 2 (TGB2) and TGB3, the movement proteins of potexviruses, decreased in the ncbp mutant. Inoculation experiments with several viruses showed that the accumulation of viruses encoding TGBs in their genomes decreased in the ncbp mutant. These results indicate that nCBP is a novel member of the eIF4E family recessive resistance genes whose loss impairs viral cell-to-cell movement by inhibiting the efficient accumulation of TGB2 and TGB3.
of viral proteins from MNSV and CMV RNAs 21,22. Thus, although the roles of some recessive resistance genes have been partially elucidated, understanding of the variety of recessive resistance genes and their roles remains limited.
Alpha- and Betaflexiviridae are groups of flexuous, filamentous viruses that predominantly infect plants and encode an RNA-dependent RNA polymerase (RdRp), a 30K-type movement protein (MP) or triple-gene-block (TGB)-type MPs, and a coat protein (CP). Some viruses in Alpha- and Betaflexiviridae encode additional proteins in their genomes. The most extensively studied of these plant viruses are members of the genus Potexvirus in the family Alphaflexiviridae, which have one genomic RNA with a cap and poly(A) tail [23][24][25]. The 5′-terminal open reading frame (ORF) encoding RdRp is translated directly from the genomic RNA, but the 3′-proximal ORFs encoding TGB1, TGB2, TGB3, and CP are translated from subgenomic RNAs (sgRNAs) 26,27, which are generated during virus replication 28 and possess a cap 29,30 and the same 3′ ends as the genomic RNA 25. TGB1 and CP were shown to be translated from sgRNA1 and sgRNA3, respectively, while TGB2 and TGB3 are translated from sgRNA2 of Potato virus X (PVX) 26,27.
In this study, we found that an A. thaliana mutant of another member of the eIF4E family gene, nCBP, was resistant to a potexvirus, Plantago asiatica mosaic virus (PlAMV). Cell-to-cell movement of PlAMV was delayed in an ncbp mutant compared to the wild type, although viral accumulation in single cells of the mutant and the wild type did not differ. The accumulation of TGB2 and TGB3 was decreased in the ncbp mutant compared with the wild type.
Results
PlAMV propagation was delayed in ncbp mutants. A. thaliana has three types of eIF4E isoforms, eIF4E, eIFiso4E, and nCBP. Two homologs of eIF4E, eIF4E1b and eIF4E1c, are also encoded by A. thaliana 31 (see Supplementary Fig. S1). To determine whether any of the eIF4E isoforms has a role in the infection cycle of PlAMV, A. thaliana mutant lines bearing a T-DNA insertion or a point mutation in an eIF4E family gene were mechanically inoculated with PlAMV-GFP, a GFP-expressing variant of PlAMV. The inoculated leaves were examined for fluorescence from PlAMV-GFP at 4 days post inoculation (dpi). Infection foci on the inoculated leaves of eif4e, eif4e1b, eif4e1c, and eifiso4e mutants were nearly the same size as those on the A. thaliana ecotype Columbia-0 (Col-0) leaves (Fig. 1a). In contrast, the foci on the inoculated leaves of two ncbp T-DNA insertion lines, ncbp-1 and ncbp-2 (see Supplementary Fig. S2), were noticeably smaller than those of Col-0 (Fig. 1a). To quantify the virus accumulation, total RNA was extracted from each of the inoculated leaves from the mutants and analyzed by quantitative RT-PCR (RT-qPCR) with PlAMV-specific primers 32 . The relative accumulation of PlAMV viral RNA in eif4e, eif4e1b, eif4e1c, and eifiso4e mutants did not differ from that in Col-0, whereas viral RNA in ncbp-1 and ncbp-2 mutants was drastically decreased to approximately 15% and 25% of that in Col-0, respectively (Fig. 1b).
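The excerpt reports viral RNA levels normalized to an internal reference with Col-0 set to 1.0 but does not spell out the quantification model; a common choice for such relative RT-qPCR comparisons is the 2^-ΔΔCt method, sketched here under that assumption with hypothetical Ct values.

```python
def relative_accumulation(ct_virus, ct_actin, ct_virus_ref, ct_actin_ref):
    """Relative viral RNA level by the 2^-ddCt method (an assumption; the
    excerpt only states normalization to a reference gene with Col-0 = 1.0).

    ct_virus, ct_actin         -- Ct values measured in the mutant sample
    ct_virus_ref, ct_actin_ref -- Ct values measured in the Col-0 reference
    """
    d_ct_sample = ct_virus - ct_actin            # normalize to the reference gene
    d_ct_ref = ct_virus_ref - ct_actin_ref
    return 2.0 ** -(d_ct_sample - d_ct_ref)      # Col-0 maps to 1.0

# Example: a sample needing ~2.5 extra amplification cycles corresponds to
# roughly 18% of the Col-0 level (illustrative numbers only).
print(relative_accumulation(24.5, 18.0, 22.0, 18.0))  # ~0.18
```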
To explore whether PlAMV systemically infects ncbp mutants, we mechanically inoculated ncbp mutants and Col-0 with PlAMV-GFP. At 3 weeks post inoculation (wpi), the GFP signal was observed in stems and upper leaves of Col-0, but no GFP signal was apparent in the upper leaves of the ncbp-1 mutant (Fig. 1c). To quantify the accumulation of viral RNA, we extracted total RNA from non-inoculated upper leaves of the ncbp-1 and ncbp-2 mutants, and Col-0, and amplified viral RNA by RT-PCR. At 3 wpi, PlAMV infected the Col-0 plants systemically, while PlAMV did not propagate in upper leaves of the ncbp-1 and ncbp-2 mutants (see Supplementary Table S1 and Fig. 1d). At 4 wpi, 3 of 5 ncbp-1 plants, and none of the 6 ncbp-2 plants, harbored PlAMV RNA (see Supplementary Table S1). The accumulation of PlAMV in the upper leaves of the ncbp-1 mutants at 4 wpi was lower than in Col-0 plants (see Supplementary Fig. S3), suggesting that PlAMV systemic infection was delayed in these mutants.
To confirm that the decreased accumulation of PlAMV in the ncbp mutants was due to the loss of nCBP, we cloned and introduced the genomic DNA sequence of nCBP with its possible promoter and terminator regions into the ncbp-1 mutant. Three independent transgenic lines (#1A, #3E, and #3F) were obtained and analyzed for their expression of nCBP. While we failed to detect the nCBP protein in the ncbp-1 mutant, three transgenic lines expressed it (Fig. 2a). When those transgenic lines were then inoculated mechanically with PlAMV-GFP, all of the nCBP-complemented lines had GFP foci similar in size to those of Col-0 at 4 dpi (Fig. 2b). The RT-qPCR analysis confirmed that the accumulation level of viral RNA in the inoculated leaves did not drastically differ between the transgenic lines and Col-0 (Fig. 2c). These results suggest that nCBP is essential for the efficient accumulation of PlAMV in inoculated leaves. We also confirmed that PlAMV systemically infects these complemented lines at 3 wpi (see Supplementary Table S1).
Cell-to-cell movement of PlAMV is delayed in ncbp mutant. To analyze the efficiency of cell-to-cell movement of PlAMV, we monitored the sizes of PlAMV-GFP infection foci in inoculated leaves of Col-0 and the ncbp-1 mutant. We often observed the fusion of multiple infection foci during the mechanical inoculation assay, which limits our statistical evaluation of the infection foci size (Figs 1a and 2b). Therefore, we utilized a particle delivery system to introduce the PlAMV-GFP cDNA into a single cell. The bombarded leaves were analyzed at 1, 1.5, and 2 days post bombardment. The PlAMV-GFP in the bombarded leaves of Col-0 spread more rapidly than in those of the ncbp-1 mutant (Fig. 3a). To quantitatively analyze viral cell-to-cell movement, we measured the area expressing GFP signals. We found that the GFP-expressing area in the ncbp-1 mutant was drastically smaller than that in Col-0 (Fig. 3b). These results suggest that the cell-to-cell movement of PlAMV was delayed in the ncbp-1 mutant compared to Col-0.
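For illustration, the area quantification and statistical comparison described for the focus-size measurements in this paper (ImageJ area measurement followed by a two-tailed Student's t-test) can be expressed in a few lines; the numbers below are hypothetical pixel counts, not the authors' data.

```python
import numpy as np
from scipy import stats

def compare_foci_areas(areas_col0, areas_ncbp, reference_mean):
    """Normalize GFP focus areas to the Col-0 reference and test the
    difference with a two-tailed Student's t-test, mirroring the analysis
    described in the text. Inputs are hypothetical pixel areas."""
    norm_col0 = np.asarray(areas_col0, dtype=float) / reference_mean
    norm_ncbp = np.asarray(areas_ncbp, dtype=float) / reference_mean
    t_stat, p_value = stats.ttest_ind(norm_col0, norm_ncbp)  # two-tailed by default
    return norm_col0.mean(), norm_ncbp.mean(), p_value

# Illustrative numbers only (focus areas in pixels at the same timepoint)
col0 = [5200, 4800, 5500, 5100, 4900, 5300]
ncbp1 = [2100, 1900, 2300, 2000, 1800, 2200]
print(compare_foci_areas(col0, ncbp1, reference_mean=np.mean(col0)))
```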
Replication efficiency of PlAMV is not compromised in ncbp-1 mutant at a single-cell level. We next investigated whether the delay of viral cell-to-cell movement in the ncbp mutants was caused by a decreased replication efficiency of the virus. We isolated mesophyll protoplasts from the ncbp-1 mutant and Col-0 and transfected them with infectious PlAMV cDNA. Total RNA was extracted from the cells at 0, 12, and 24 hours post inoculation (hpi), and the amount of viral RNA was analyzed by RT-qPCR using the cotransfected GFP gene as an internal control. RT-qPCR analysis revealed that the accumulation level of viral RNA in the ncbp-1 mutant protoplasts was similar to that in Col-0 protoplasts (Fig. 4a). We further performed northern blot analysis to detect viral genomic RNA, sgRNAs, and negative-strand viral RNA, which serves as a template for synthesis of the RNA genome. We found that viral genomic RNA, sgRNAs, and negative-strand RNA accumulated to a similar degree in both the ncbp-1 and Col-0 protoplasts (Fig. 4b, top and middle panels, Supplementary Fig. S4). These results show that nCBP is not an essential factor for PlAMV genomic replication, suggesting that nCBP is involved in viral cell-to-cell movement.

Accumulation of TGB2 and TGB3 is decreased in ncbp mutant. Since potexviruses need TGB1, TGB2, TGB3, and CP for their cell-to-cell movement, we analyzed the accumulation of these viral proteins in PlAMV-transfected protoplasts of the ncbp-1 mutant and Col-0. Immunoblotting of the protoplast samples harvested at 3 dpi revealed that the accumulation of RdRp did not differ between the ncbp-1 and Col-0 protoplasts (see Supplementary Fig. S5). However, we could not clearly detect TGB2 and TGB3 proteins in protoplast samples (see Supplementary Fig. S5), possibly because of their low accumulation levels. (Displaced figure legend: Fluorescence images of more than five foci in (a) were processed using ImageJ software v1.40 (NIH) to measure the size of viral infection foci. The sizes are normalized to Col-0 at 1 dpi. Error bars represent standard errors of at least six measurements. Asterisks indicate a significant difference compared with Col-0 (two-tailed Student's t-test: single asterisk, P < 0.05; triple asterisk, P < 0.001). Experiments were replicated three times.) Therefore, we
used an Agrobacterium-mediated transient expression (agroinfiltration) assay in which infiltrated regions of plant leaves were infected uniformly with virus. Leaves of the ncbp-1 mutant and Col-0 were agroinfiltrated with Agrobacterium carrying infectious PlAMV-GFP cDNA. The leaves were harvested at 4 dpi, and the accumulation of viral proteins was analyzed by immunoblotting (Fig. 5a). Since the band for TGB3 exactly overlapped with a nonspecific band, we separated soluble and insoluble fractions (S30 and P30 fractions, respectively) by ultracentrifugation to exclude the nonspecific band, based on the fact that the TGB3 protein is membrane-associated 33. The nonspecific band was detected only in the S30 fraction, enabling us to evaluate the accumulation of TGB3 in the P30 fraction (see Supplementary Fig. S6 and Fig. 5b). Although the accumulation of RdRp, TGB1, GFP-CP, and CP was similar between Col-0 and the ncbp-1 mutant plants, that of TGB2 and TGB3 drastically decreased in the ncbp-1 mutant compared to that in Col-0 (Fig. 5a and b). The intensities of protein bands in each leaf sample were quantified (Fig. 5c). We evaluated the ratio of TGB1, TGB2, TGB3, and CP accumulation relative to that of RdRp, since the amount of viral RNA and RdRp was almost the same between the ncbp-1 mutant and Col-0 protoplasts (see Supplementary Fig. S5 and Fig. 4). Band quantification confirmed that the amounts of TGB2 and TGB3 drastically decreased in ncbp-1 leaves compared to Col-0 leaves, while the relative amounts of TGB1 and CP in the ncbp-1 leaves did not differ (Fig. 5c).

To determine which viruses require nCBP for their efficient accumulation, we mechanically inoculated the ncbp-1 mutant and Col-0 with viruses from various families and tested their accumulation. In Alphaflexiviridae, we analyzed two
Discussion
We identified nCBP as a novel recessive resistance gene against plant viruses in Alphaflexiviridae and Betaflexiviridae. This identification revealed that all members of the eIF4E family (eIF4E, eIFiso4E, and nCBP) can act as recessive resistance genes.
Recessive resistance exhibited by eIF4E deficiency is thought to be caused by the specific use of eIF4E family gene products, eIF4E or eIFiso4E, by plant viruses 2 . Generally, plants encode three eIF4E isoforms, namely, eIF4E, eIFiso4E, and nCBP 10 . The lack of eIF4E or eIFiso4E does not influence the viability of plants 16,31,34,35 , presumably due to their redundancy during translation initiation. However, the lack of eIF4E or eIFiso4E does decrease the infectivity of plant viruses; therefore, such viruses are thought to use specific eIF4E family gene products during their infection 2 . The specific use of eIF4E family gene products may be correlated with the unique translation initiation strategies of plant viruses. In fact, a large number of plant viruses do not possess the cap and/or poly(A) structures in their genomic RNA 36 . Because both of these structures play a critical role in translation initiation 37 , these viruses developed unique translation initiation strategies, e.g., recruiting specific eIFs directly to viral RNAs using their own cis-acting RNA elements 36 . Examples include the 3′ cap-independent translation element (3′ CITE) located within the 3′ UTR of viruses in Tombusviridae, Umbravirus, and Luteovirus. Each virus recruits specific eIFs to the 3′ CITE to facilitate the translation of viral proteins 38 . Therefore, eIF4E-mediated recessive resistance is effective against plant viruses lacking cap and/or poly(A) structures 4,19,36 . In this study, we showed that the infection of plant viruses with cap and poly(A) structures was inhibited in ncbp mutant plants. Our results showed that eIF4E family genes can serve as recessive resistance genes against viruses with cap and poly(A) structures.
In this study, we showed that cell-to-cell movement of PlAMV was inhibited in the ncbp mutant (Fig. 3); thus, we further analyzed the role of nCBP during PlAMV infection. Protoplast transfection assays revealed no significant difference between the ncbp-1 and Col-0 cells in the accumulation of PlAMV genomic RNA (Fig. 4). This result indicates that nCBP is not required for viral replication at the single-cell level, including translation of viral RdRp and synthesis of viral genomic RNA. In agreement with this result, the level of RdRp in PlAMV-agroinfiltrated leaves of the ncbp mutant was similar to that of Col-0 (Fig. 5a). However, the accumulation of TGB2 and TGB3 drastically decreased in PlAMV-infiltrated ncbp mutant leaves (Fig. 5a and b), indicating that decreased accumulation of these proteins might cause the inefficient cell-to-cell movement of the virus. It still remains unclear why the accumulation of TGB2 and TGB3 decreased in the ncbp mutant. Considering that nCBP is a member of the eIF4E isoforms, it is attractive to think that the translation of TGB2 and TGB3 from sgRNA2 may be specifically inactivated in the ncbp mutant, because sgRNA2 was shown to function as a template for translation of TGB2 and TGB3 in the case of PVX 26,27. Otherwise, the stability of TGB2 and TGB3 could be reduced in the mutant. However, since sgRNA2 of PlAMV was below the detectable level in our northern blot analysis (Fig. 4b), it also remains possible that the synthesis and/or stability of sgRNA2 might be affected during PlAMV infection in the ncbp mutant.
In this study, we showed that nCBP-mediated recessive resistance limits viral cell-to-cell movement (Fig. 3). During viral cell-to-cell movement, three potexviral movement proteins, TGB1, TGB2, and TGB3, function in a concerted manner. TGB2 and TGB3 induce formation of ER-derived TGB2/3 vesicles, which are subsequently directed to plasmodesmata (PD) 39,40 . TGB1 accumulates at the PD only when TGB2 and TGB3 are expressed 33,40 . TGB1 and TGB2 can increase the PD size exclusion limit 41,42 . In addition to TGBs, potexviral CP is considered an essential factor to move viral RNA between cells 43,44 . TGB2/3 vesicles, TGB1, CP, and viral RNA form a layered complex at the PD opening with ER membranes, possibly to promote efficient movement of the viral ribonucleoproteins 40 . The lack of any one of these movement-associated proteins should disable cell-to-cell movement of potexviruses. Therefore, the delay of cell-to-cell movement of PlAMV in the ncbp mutant can be explained by the inefficient accumulation of TGB2 and TGB3. Involvement of the eIF4E protein family in translation of movement proteins was reported in CMV 21 . In A. thaliana cum-1 mutant, which possesses a nonsense mutation in the eIF4E coding sequence, inefficient translation of the CMV 3a movement protein resulted in the inhibition of cell-to-cell movement of the virus 21 . The observation that two unrelated viruses, CMV and PlAMV, utilize specific eIFs for the accumulation of MP supports the importance of controlling viral MP accumulation.
To explore the universal role of nCBP in the plant-virus interaction, we inoculated ncbp mutants with viruses from various genera and examined their accumulation levels. We showed that viruses in the genera Potexvirus, Lolavirus, and Carlavirus require nCBP for their accumulation, whereas viruses in the genera Potyvirus and Tobamovirus do not (Fig. 6). One noticeable characteristic common among potexvirus, lolavirus, and carlavirus is that they encode TGB-type MPs. Considering that nCBP was required for the accumulation of TGB2 and TGB3 of PlAMV (Fig. 5), nCBP may also facilitate the accumulation of TGB2 and TGB3 of lolavirus and carlavirus to promote their movement. Since there are some genera of plant viruses other than Alphaflexiviridae and Betaflexiviridae, such as Hordeivirus, Pomovirus, Pecluvirus, and Benyvirus, that encode TGB proteins 26, it would be interesting to explore whether these viruses require nCBP for their infection. It would also be interesting to explore whether viruses not encoding TGBs in Alphaflexiviridae and Betaflexiviridae are influenced by the nCBP mutation. Moreover, it is possible that the ncbp mutation is a determinant of the actual recessive resistance of crop cultivars against TGB-encoding viruses, in which the genes responsible for the resistance remain unknown. In addition, the artificial introduction of mutations in the nCBP gene may be a novel and robust strategy to provide crops with a virus-resistant trait, similar to eIF4E or eIFiso4E.

(Displaced figure legend fragment, apparently from Fig. 6: ..., and YoMV (f), and total RNA was extracted from inoculated leaves at 4 dpi. Virus accumulation was analyzed by quantitative RT-PCR using virus-specific primers. The accumulation of viral RNA was normalized relative to actin mRNA in each sample. The mean level of viral RNA in Col-0 was used as the standard (1.0). Error bars represent standard errors of 12 measurements from three independent experiments, and single and double asterisks indicate significant differences compared with Col-0 (one-tailed Student's t-test: single asterisk, P < 0.05; double asterisk, P < 0.01).)

Methods

Antibodies. Anti-TGB2 and TGB3 antisera were raised in rabbits using purified peptides (TGB2, GDNLHALPHGGRY; TGB3, KQTLHHGTQPSTDL) as antigens (eurofinsgenomics, Tokyo, Japan). The nCBP protein was expressed in Escherichia coli using a pET30a vector, and the purified recombinant protein was used as an antigen. Anti-TGB1, CP, and RdRp antibodies were prepared as described previously 32,43,45.

Plasmid construction. An infectious cDNA clone and a GFP-expressing vector of a PlAMV isolate 46 were constructed as described previously 47,48. For the complementation assay, the nCBP gene with putative promoter and terminator sequences was PCR-amplified with primers Sl-At5g18110-up1374F and Nt-At5g18110-down1011R using total DNA extracted from Col-0 as template (for primer sequences, see Supplementary Table S2). The amplified product was digested with SalI and NotI restriction enzymes and inserted between the SalI and NotI sites of pENTA 49. To produce pFAST01-nCBPg, the region between attL1 and attL2 containing the nCBP expression cassette was cloned into the binary plasmid vector pFAST01 (Inplanta Innovations Inc., Kanagawa, Japan) using Gateway LR Clonase II enzyme mix (Thermo Fisher Scientific, Massachusetts, USA).
To produce RNA probes for the detection of positive- and negative-stranded viral RNA, the PCR-amplified fragment of the 3′ terminal region of a PlAMV isolate 46 (nucleotides 5,101 to 6,102) was cloned into the pCR-Blunt II-TOPO vector (Thermo Fisher Scientific), generating pCR-Pr-1 (in antisense orientation behind the T7 promoter, to produce the negative-stranded RNA detection probe) and pCR-Pr-2 (in sense orientation, to produce the positive-stranded RNA detection probe).
Virus isolate and inoculation. Mechanical inoculation with an extract of PlAMV-GFP-infected N. benthamiana plants and agroinoculation of PlAMV-GFP were performed as described previously 50. CymMV (accession number, LC125633), AltMV 51, LoLV 52, YoMV (MAFF number 104033; National Institute of Agribiological Sciences GenBank), PVM (MAFF number 307027), TuMV 53, and CMV 54 were also used for mechanical inoculation. Rosette leaves of three-week-old A. thaliana were inoculated with extracts of the upper leaves of N. benthamiana or A. thaliana plants that had been inoculated with each virus and infected systemically.

Plant transformation. Agrobacterium tumefaciens strain EHA105 carrying pFAST01-nCBPg was used for transformation of A. thaliana by the floral-dip method, as described previously 55. T1 seeds of transgenic plants were selected by GFP fluorescence expressed from the seed-specific OLE1 promoter encoded by pFAST01-nCBPg.
Alignment and phylogenetic analysis of nCBP proteins. Sequences of eIF4E family genes, excluding AteIF4E1b and AteIF4E1c, were obtained from EST databases of the National Center for Biotechnology Information (NCBI, http://www.ncbi.nlm.nih.gov) and the Sol Genomics Network (http://solgenomics.net). Predicted cDNA sequences of AteIF4E1b and AteIF4E1c were obtained from The Arabidopsis Information Resource (TAIR, https://www.arabidopsis.org). Amino acid alignments of the core region 10 (from His-37 to His-200 in Homo sapiens eIF4E) were performed using ClustalW software. Phylogenetic trees were constructed from nucleotide alignments of the core region using the neighbor-joining and bootstrapping algorithms within the MEGA 6.0 software.
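The authors ran the alignment in ClustalW and built bootstrapped neighbor-joining trees in MEGA 6.0; purely as an illustration of the same neighbor-joining workflow in scriptable form (bootstrapping omitted), a Biopython-based sketch is shown below. The alignment file name is a placeholder, not a file from the study.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Load a pre-computed multiple alignment of the eIF4E-family core regions
# (e.g., exported from ClustalW); the file name here is hypothetical.
alignment = AlignIO.read("eif4e_core_alignment.aln", "clustal")

# Pairwise identity distances, then a neighbor-joining tree
calculator = DistanceCalculator("identity")
distance_matrix = calculator.get_distance(alignment)
tree = DistanceTreeConstructor().nj(distance_matrix)

# Quick text rendering of the resulting topology
Phylo.draw_ascii(tree)
```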
Northern blot analysis. Total RNA (1 μg) was analyzed with the digoxigenin (DIG) system (Roche). To produce probes for plus- and minus-strand viral RNA detection, pCR-Pr-1 and pCR-Pr-2 were digested with BamHI restriction enzyme and transcribed with T7 RNA polymerase. The intensities of RNA bands were quantitated using ImageJ software v1.40 (National Institutes of Health).
Particle bombardment. Particle bombardment was performed using a Biolistic PDS 1000/He Particle
Delivery System (Bio-Rad, California, USA), as described previously 56 . The area showing GFP signal was quantitated using ImageJ software v1.40.
Protoplast preparation and transfection. Arabidopsis protoplast preparation and transfection were performed as described previously 57 with modifications. We added 0.1 M mannitol to the W5 solution. For virus inoculation, 100 μg of 35S-driven virus infectious clone was added to 300 μL of protoplast suspension (5 × 10^6 protoplasts/mL). | 5,034.4 | 2017-01-06T00:00:00.000 | [
"Biology"
] |
The Melt of Sodium Nitrate as a Medium for the Synthesis of Fluorides
The preparation of NaLnF4 complexes, LnF3 (Ln = La, Ce, Y) rare earth binary fluorides, CaF2 and SrF2 alkali earth fluorides, as well as mixtures of these compounds from their nitrates dissolved in molten NaNO3 has been studied in order to select the ideal solvent for fluoride synthesis by spontaneous crystallization from flux. Sodium fluoride (NaF) was used as a fluorinating agent. The results of our experiments have confirmed that NaNO3 melt is one of the most promising media for precipitating said inorganic fluoride materials within a broad temperature range (300–500 ◦C). Also, in contrast with precipitation/co-precipitation from aqueous solutions, our syntheses have resulted in obtaining equilibrium phases only.
Introduction
Inorganic fluorides can be prepared using a broad variety of techniques [1][2][3][4][5][6]. However, using these methods in practice is always accompanied by some experimental obstacles. The most important problem for fluoride syntheses is the reaction of said fluorides with water, i.e., hydrolysis, which leads to an accumulation of hydroxyl and/or oxygen impurities in the obtained materials [7][8][9]. Atmospheric water vapor, water from aqueous solutions, and/or water adsorbed on solid particle surfaces can participate in hydrolyzing fluorides. The rate of hydrolytic chemical transformation increases significantly as the temperature goes up, and hydrolysis at elevated temperature has been called pyrohydrolysis. In order to suppress hydrolysis, an excess of fluorinating agent has been widely used to create a proper fluorinating atmosphere [10,11]. Direct oxidation by molecular oxygen in the course of fluorination is usually unfavorable from a thermodynamic point of view and thus usually does not create additional problems for fluoride syntheses.
Undesirable batch reactions with crucible materials are important factors in solid-phase fluorination and syntheses in molten media [2,12,13]. Typically, the use of ceramic crucibles is not possible due to their corrosion, accompanied by the formation of highly volatile fluoride by-products such as SiF4 and/or AlF3. On the other hand, metal (Mo, Ni, Cu, steel, etc.) or carbon crucibles can efficiently reduce fluorides, lowering the oxidation state of the fluoride-forming metal (europium, ytterbium) [14,15] or even completely reducing the latter to the free element (bismuth) [16]. Gold [17] and copper [18] seem to be convenient materials for fluoride preparation, but one has to consider that hermetically sealed reactors are preferred over unsealed crucibles [17,19], for even platinum can become a reducing agent in open systems [20,21]. Teflon and/or other organofluorine polymers can also be used for the synthesis of fluorides with lower melting points and higher reaction activities [22,23].
The sintering time that is necessary for reaching the solid-phase equilibrium increases exponentially with a temperature decrease, and this factor becomes an overwhelming obstacle for the low-temperature phase synthesis [13,24].
Non-equilibrium specimens with unique functional properties (e.g., enhanced ionic conductivity) can be obtained by mechanochemical techniques. The use of such methods leads to product contamination with milling materials, such as ZrO2 or WC. However, a detailed study of possible oxygen contamination of the mechanochemical synthesis products has yet to be carried out [6,25].
Various functional fluoride materials with highly developed surfaces have been synthesized by thermal decomposition of the corresponding trifluoroacetates [6,26,27]. This is a very attractive synthetic method, but it leads to the product's contamination with elemental carbon.
In contrast with solid-phase synthesis, the use of solvents leads to a very significant rate acceleration. Water is the simplest solvent, and most fluorides, having low solubility in it, can be easily precipitated from aqueous solutions [3,4,28]. Such precipitation is very inexpensive. It requires very simple experimental arrangements and very frequently leads to the formation of nanofluorides [3,4,28]. Precipitation at room temperature minimizes hydrolysis by-products, but the well-developed surface of fluoride nanoparticles makes said hydrolysis easier. For example, hydrolytic oxygen contamination has been observed for bismuth fluoride precipitation [29].
We have performed systematic studies of phase formation in fluoride systems using co-precipitation from aqueous nitrate and/or citrate solutions by various fluorinating agents (e.g., hydrofluoric acid, sodium fluoride, ammonium fluoride, etc.) [3,4,[29][30][31][32][33][34][35][36][37][38][39]. Our results are valuable from both practical (luminophores, laser ceramics precursors, solid electrolytes) and theoretical (studying a non-classical crystallization mechanism via oriented-attachment crystal growth [40]; making novel non-equilibrium phases of variable compositions [30]) points of view. They have revealed a broad variety of synthesized material properties compared to the existing data for the known phase diagrams obtained under equilibrium conditions [34]: the low solubilities of the precipitated fluorides hindered achieving chemical equilibrium.
It is worth noting that nanofluorides obtained by co-precipitation are hydrated. Highly active nanopowder surfaces cause multilayered water adsorption [41]. This hydration leads to the formation of quite unusual chemical compounds, such as (H3O)Ln3F10 (Ln = rare-earth elements) [30,42], and causes luminescence quenching by OH− groups and/or water molecules in freshly prepared nanopowders. It is not easy to remove water adsorbed on nanoparticle surfaces, but their pyrohydrolysis under thermal treatment can be prevented by forming compounds with fluorinating agents, such as BaF2·HF [37] or ammonium fluoride solid solutions in strontium fluoride, Sr1−x−yLnx(NH4)yF2+x−y [34,38,39].
By definition, hydrothermal syntheses occur at elevated temperatures. This leads to a solubility increase and a surface energy decrease (particle faceting) for the synthesized nanofluorides, and equilibrium conditions can be reached. We successfully used the aforementioned hydrothermal techniques to prepare potassium rare-earth fluoride compound series [43][44][45] and, thus, refined the phase equilibria data (including those for lower temperatures) for the KF-RF3 systems [46]. However, a temperature increase is accompanied by hydrolysis acceleration, and the content of contaminating oxygen admixtures also increases in the K2RF5 < KRF4 < KR2F7 < KR3F10 series due to the isomorphous substitution of fluorine ions by OH− fragments with preservation of phase homogeneity.
Molten salt synthesis (MSS) represents an additional group of synthetic techniques. These techniques involve phase formation and crystallization from molten solutions (flux), and they require the solvents used to have low melting temperatures, to be chemically inert (i.e., the solvents shall not form compounds and/or solid solutions with the target compounds or phases), to provide sufficient solubility for the starting materials, to allow separation of the solvent from the products, and to be non-volatile and of low toxicity [50]. Of course, there is no such thing as an ideal solvent, but one can still try to find the best-fitting materials for the aforementioned MSS technology. Sodium fluoride can be considered one of the most promising MSS materials due to its sufficient water solubility and relatively low melting point (994 °C). However, it easily forms numerous phases with other fluorides, as in the NaF-BaF2-GdF3 system [51]. Other fluoride flux materials for MSS methods include the following examples: Garton and Wanklyn grew K2NaGaF6 and Rb2KGaF6 single crystals using a PbF2 solvent [52], Hoppe utilized toxic thallium fluoride as a flux [53], and other authors implemented the highly reactive NH4HF2 as a flux and fluorinating agent for KMgF3 preparation [54].
Oxide glasses can be considered a kind of flux, too, as nanofluorides can be obtained from oxyfluoride glass ceramics by dissolution of the silicate matrix in hydrofluoric acid [58]. However, the aforementioned oxyfluoride glass ceramics are not the only representatives of exotic fluxes. Thus, an orthorhombic CaF2 single-crystal polymorph has been grown under high pressure from molten Ca(OH)2 flux [59].
All of the above-listed reasons illustrate the need to search for novel solvents and, as a result, we have explored those opportunities and suggested nitrate melts as such alternative fluxes. Namely, we present below our results for the use of NaNO3 melt for fluoride syntheses.
Results
In the present paper, we have studied the preparation of the well-known LnF3 (Ln = La, Ce, Y) rare earth and CaF2 and SrF2 alkali earth fluorides, as well as their mixtures and NaLnF4 complexes, from solutions of the corresponding metal nitrates in molten NaNO3 by spontaneous crystallization (precipitation) in melt solutions. Despite the fact that the actual chemical composition of the aforementioned NaLnF4 phases is Na3xLn2−xF6 (gagarinite-type structure, derived from the UCl3 type; x ≈ 0.5) [2,32,46], we will name such solid solution compounds formed in the NaF-LnF3 system as NaLnF4, in agreement with modern literature naming conventions.
We have used sodium fluoride (NaF) as the fluorinating agent in the studied systems; the corresponding chemical transformations can be described by the following equations (Ln = La, Ce, Y; M = Ca, Sr):
Ln(NO3)3 + 3NaF → LnF3↓ + 3NaNO3,
Ln(NO3)3 + 4NaF → NaLnF4↓ + 3NaNO3,
M(NO3)2 + 2NaF → MF2↓ + 2NaNO3.
The results of our work are presented in Table 1 and Figures 1-10.
The NaF-YF3 System
X-ray diffraction patterns of NaYF4 powders obtained by precipitation from the NaNO3 melt, before (a) and after (b) washing with water, are presented in Figure 1. The most intensive peaks in the X-ray diffraction pattern of sample (a) correspond to the NaNO3 phase (R−3c space symmetry group (SSG); PCPDFWIN # 85-1466). After washing with water, sample (b) contained only one phase, NaYF4 or β-Na(Y1.5Na0.5)F6 (P63/m SSG; PCPDFWIN # 16-0334).* See Appendix A, Table A1. As our examples with samples 3 and 4 (Y(NO3)3:NaF:NaNO3 = 7:52:41 starting material ratio) demonstrate, raising the synthesis temperature from 320 to 435 °C did not affect the X-ray diffraction pattern: in both cases, the synthesized products contained the NaYF4 phase only, with nearly the same crystal lattice parameters (Table 1) but with different particle morphology (Figure 4). NaYF4 particles formed at 320 °C (Figure 4a,c) comprised relatively large hexagonal-shaped micron-size particles. Such particles, made up of ca. 50 nm thick platelets, were hollow inside. In turn, a sample of similar composition formed at 435 °C (Figure 4b,d) also contained elongated micron-size particles with a habitus typical of the hexagonal system, but the temperature increase led to the formation of dense, completely filled (bulk) particles.
The NaF-LaF3 and NaF-CeF3 Systems
Our attempts to prepare the NaLaF4 phase in the NaF-LaF3 system were unsuccessful: for any ratio of the starting materials, only LaF3 microcrystals were formed (Figure 5a).
The CaF2-SrF2 System
The results of our syntheses in the calcium and strontium fluoride systems are presented in Figures 7-11. The values of yield were 68-91% (see Appendix A, Table A2). Intrinsic fluorides form easily washable micron-sized particles with fluorite-type crystal structures (Figure 7), with lattice parameters coinciding with the literature data (CaF2, a = 5.463 Å, PCPDFWIN # 35-0816; SrF2, a = 5.800 Å, PCPDFWIN # 06-0262). However, the morphology of the aforementioned powder particles is different: strontium fluoride formed faceted particles, whereas calcium fluoride did not (Figure 8). The use of the equimolar Ca:Sr = 1:1 starting composition produced a mixture of two different fluorite-type phases with crystal lattice parameters different from both unit cell parameters of intrinsic CaF2 and SrF2 (Figure 9). SEM data unequivocally indicate that the fluoride particles containing more strontium fluoride had a much larger size than the particles containing more calcium fluoride (Figures 10 and 11).
Discussion
For the reader's convenience, the properties of molten sodium nitrate are summarized in Table 2. The solubility of NaF in NaNO3 at 350 °C is about 5 mol %, and it increases up to 10 mol % at 450 °C [68]. Pure NaNO3 decomposes at 557 °C, but its decomposition temperature is lowered when NaF is added (the addition of 7 mol % NaF decreases the decomposition temperature to 502 °C) [69]. Nevertheless, the use of NaNO3 melts provides a sufficiently broad temperature interval for the corresponding syntheses. Moreover, the addition of other nitrates and/or salts, such as KNO3, NH4NO3, etc., can lower the NaNO3 melting temperature further [70]: the solid solution formed in the NaNO3-KNO3 system has its minimum point on the melting curves at 222 °C for 52 ± 3 mol % KNO3 [71]. However, the addition of different cations to the molten reaction mixture unavoidably results in contamination of the formed microcrystals and even in the parallel formation of supplementary parasitic phases [46].
Sodium nitrate is a well-known oxidizer, but it has not shown any oxidative properties in our experiments; in particular, no Ce(III) to Ce(IV) oxidation was observed (i.e., formation of Ce(IV) phases was not detected). Nevertheless, one has to monitor the possibility of such oxidation processes in the future.
The oxygen and nitrogen contents in the synthesized samples are below the detection limit (1%) of the EDX method.
It is important to note that the simultaneous preparation of strontium and calcium fluorides resulted in the formation of a mixture of slightly contaminated individual CaF2 and SrF2, respectively, whereas the use of other synthetic techniques, such as melting [72], co-precipitation from aqueous solutions [28], or mechanochemical synthesis [73], resulted in a continuous Ca1−xSrxF2 solid solution. However, in accordance with the third law of thermodynamics [24], all solid solutions must decompose under cooling, and the lack of such decomposition unequivocally indicates a non-equilibrium state of the prepared specimen. The highest critical decomposition temperature for the Ca1−xSrxF2 solid solution, estimated from the CaF2-SrF2-MnF2 ternary system data, is about 1160 K [74]. The decomposition curve of the Ca1−xSrxF2 solid solution, calculated for the regular solution model, is presented in Figure 11. Taking into account the crystal lattice parameters a = 5.463 Å for CaF2 and a = 5.800 Å for SrF2 and assuming a linear correlation between crystal lattice parameters and composition of the Ca1−xSrxF2 solid solution (Vegard's law), we calculated the concentrations of the CaF2-based and SrF2-based solid solutions (Table 3). We also used the data of Table 1 for the unit cell parameters of the phases formed in the CaF2-SrF2 system. The obtained solid solution concentrations of 5 ± 2 mol % are in good agreement with the data estimated for T = 573 K (Figure 11). At present, we plan to continue our investigations of the influence of the synthesis duration on the aforementioned crystal lattice parameters. Therefore, in the aforementioned Ca1−xSrxF2 system, only the synthesis in the NaNO3 melt produced equilibrium phases. This observation demonstrates quite unique prospects for the preparation of lower-temperature equilibrium phases in fluoride systems: making and studying such phases has been quite limited up to the present moment due to the above experimental obstacles [46,75,76]. Also, it is worth noting that the absence of NaLaF4 in our synthesized specimens can be easily explained in view of the aforementioned observations, for NaLaF4, easily obtainable by sintering or melting, is not stable at lower temperatures while being stable at higher temperatures [32,60,61].
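To make the Vegard's-law estimate reproducible in outline, a short sketch is given below. The end-member lattice parameters are those quoted above; the "measured" parameters are placeholders standing in for the Table 1 values, which are not reproduced in this excerpt.

```python
# Estimate solid-solution compositions from measured lattice parameters using
# Vegard's law (linear interpolation between the end members).
A_CAF2 = 5.463   # angstrom, pure CaF2 (value quoted in the text)
A_SRF2 = 5.800   # angstrom, pure SrF2 (value quoted in the text)

def sr_fraction(a_measured):
    """Return x in Ca(1-x)Sr(x)F2, assuming a(x) = (1 - x)*a_CaF2 + x*a_SrF2."""
    return (a_measured - A_CAF2) / (A_SRF2 - A_CAF2)

# Hypothetical CaF2-rich and SrF2-rich phases, chosen only to illustrate how a
# ~5 mol % mutual solubility would appear in the lattice parameters
for a in (5.480, 5.783):
    print(f"a = {a:.3f} A  ->  x(Sr) = {sr_fraction(a):.3f}")
```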
It is worth mentioning the unusual morphology of the particles in the synthesized NaYF4 powders (Figures 3 and 4). It appears that flat nanoplates were formed first, and then they merged together, forming hollow hexagonal prisms (Figures 3c and 4a,c), perhaps by some attachment-based crystal growth process [40] whose mechanism remains unclear. Bulk hexagonal prisms with a habitus corresponding to the crystal lattice type were formed at the higher temperature (Figure 4b,d).
It is also worth mentioning the difference between the morphologies of the CaF2 and SrF2 microcrystals (Figure 8): the SrF2 microcrystals are faceted (see simple polyhedra such as cubes and rhombic dodecahedra in the SEM images), whereas the faceting of CaF2 was much fuzzier, for unknown reasons. When both CaF2-based and SrF2-based solid solutions formed in the same precipitate, the particle size of the former was smaller than that of the latter (Figures 10 and 11). Dissolution of CaF2 in SrF2 led to lower-quality faceting compared to intrinsic SrF2 (Figure 10).
Comparison of the properties of NaNO3 and NH4NO3 melts reveals two advantages of the former: (1) NH4NO3 is thermally unstable and easily undergoes solid-state decomposition before reaching its melting point at 167 °C [77], producing toxic gases containing N2O (one has to use exhaust hood equipment while working with NH4NO3 melts); and (2) as per Huang et al. [66], the use of NH4NO3 melts does not necessarily produce equilibrium conditions (the "BaCeF5" phase is absent in the BaF2-CeF3 phase diagram [2], so the cubic solid solution had to undergo mandatory ordering, which has yet to be observed in [66]).
The use of ionic liquids for nanofluoride preparation represents a separate synthesis venue [78][79][80][81][82]. Ionic liquids are salts containing large organic cations (usually unsymmetric ones); they melt below 100 °C and possess good thermal stability, high fire-hazard safety, and low corrosive activity, along with low viscosity and vapor pressure. Ionic liquids appear to be good solvents for various organic and inorganic chemical compounds. Typical examples of ionic liquids include 1-butyl-3-methylimidazolium hexafluorophosphate (BmimPF6), tetrafluoroborate (BmimBF4), and/or chloride (BmimCl) [83]. These ionic liquids have been used successfully for LnF3 and NaLnF4 rare earth nanofluoride syntheses [78][79][80][81][82] as solvents (ionic transport media) and as fluorinating agents (water traces initiate ionic liquid pyrohydrolysis). Synthesized nanopowders can be easily separated from ionic liquids by rinsing with methanol. Usually, ionic liquid syntheses are carried out at elevated temperatures under microwave or solvothermal process conditions. Nevertheless, our NaNO3 melt-based synthetic technique can be a good replacement for or supplement to the aforementioned preparation methods in ionic liquids: NaNO3 possesses the same advantages as the aforementioned ionic liquids (lower-temperature melting point, high fire-hazard safety, low corrosive activity, low toxicity), but it is also less expensive and can be washed out with water instead of toxic and flammable methanol.
We synthesized the fluoride specimens in NaNO3 melts at 300-435 °C in ceramic crucibles with and without aluminum foil lining, as well as in alundum crucibles. Separate experiments indicated that the ceramic crucibles were capable of withstanding melt corrosion, and the metal lining did not provide any additional benefit to the quality and purity of the synthesized samples.
The starting materials were ground in a mortar with a pestle until a homogeneous powder was formed. It was then quantitatively transferred to a glazed porcelain crucible and placed in the oven for synthesis at elevated temperatures. The ratios of the starting materials were varied within the aforementioned ranges, including changes in the excess of fluorinating agent and in the amount of solvent used. All synthetic experiments were carried out at 300-435 °C (10 °C/min heating rate, 1 h exposure at the maximum temperature) for all specimens unless specified otherwise. Annealed samples were cooled slowly over 10 hours. The molten products were removed from the crucibles, washed with doubly distilled water to remove remaining solvent and unreacted fluorinating agent, and dried in air at ~40 °C.
Conclusions
The results of our experiments unequivocally indicate that NaNO3 melt is a very promising medium for the preparation of inorganic fluoride materials over a broad temperature range (300-500 °C). This synthesis temperature can be lowered further by using nitrate mixtures, e.g., NaNO3-KNO3. Syntheses in NaNO3 melts allowed the preparation of micron-sized powders containing faceted microcrystals with lowered surface areas and decreased adsorption capabilities. The obtained luminescent materials did not require additional thermal treatment. The synthesized specimens contained equilibrium phases and were not contaminated by oxygen and/or carbon. The suggested synthetic protocols are inexpensive and environmentally friendly and can be used as an alternative to ionic liquid synthesis methods.
Figure 1. X-ray diffraction patterns of NaYF4 sample 3 before (a) and after (b) washing with water. NaNO3 phase lines are marked with triangle symbols.
Figure 2 data illustrate the influence of fluorinating agent stoichiometry on the phase composition of the reaction products. A shortage of NaF results in precipitation of a two-phase specimen (Figure 2a)
Table 1. Conditions and results of the fluoride synthesis experiments.
Table A2. The values of yield. | 4,638.2 | 2018-03-29T00:00:00.000 | [
"Materials Science",
"Chemistry"
] |