Mechanical and Electroconductive Properties of Mono- and Bilayer Graphene–Carbon Nanotube Films
This article presents the results of a computer study of the electrical conductivity and deformation behavior of new graphene–carbon nanotube (CNT) composite films under bending and stretching. Mono- and bilayer hybrid structures with CNTs (10,0) and (12,0) and an inter-tube distance of 10 and 12 hexagons were considered. It is revealed that elastic deformation is characteristic for mono- and bilayer composite films both in bending and stretching. It is found that, in the case of bending in a direction perpendicular to CNTs, the composite film takes the form of an arc, and, in the case of bending in a direction along CNTs, the composite film exhibits behavior that is characteristic of a beam subjected to bending deformation as a result of exposure to vertical force at its free end. It is shown that mono- and bilayer composite films are more resistant to axial stretching in the direction perpendicular to CNTs. The bilayer composite films with an inter-tube distance of 12 hexagons demonstrate the greatest resistance to stretching in a direction perpendicular to CNTs. It is established that the CNT diameter and the inter-tube distance significantly affect the strength limits of composite films under axial stretching in a direction along CNTs. The composite films with CNT (10,0) and an inter-tube distance of 12 hexagons exhibit the highest resistance to stretching in a direction along CNTs. The calculated distribution of local stresses of the atomic network of deformed mono- and bilayer composite films showed that the maximum stresses fall on atoms forming covalent bonds between graphene and CNT, regardless of the CNT diameter and inter-tube distance. The destruction of covalent bonds occurs at a stress of ~1.8 GPa. It is revealed that the electrical resistance of mono- and bilayer composite films decreases with increasing bending. At the same time, the electrical resistance of a bilayer film is 1.5–2 times less than that of a monolayer film. The lowest electrical resistance is observed for composite films with a CNT (12,0) of metallic conductivity.
Introduction
At present, a new scientific direction devoted to the theoretical and experimental study of hybrid materials based on two-dimensional (2D) graphene and one-dimensional (1D) carbon nanotubes (CNTs) exists in materials science [1][2][3][4][5][6]. Research teams from different countries have proposed several structural varieties of this hybrid material, differing in the method of joining CNTs and graphene, as well as in their mutual orientation [7]. These include three-dimensional (3D) composites (pillared graphene) with a vertical orientation of CNTs spliced with graphene structures [8][9][10][11][12][13][14][15][16][17], and 2D films with a horizontal orientation of CNTs […]. One such hybrid film of CNTs and graphene sheets demonstrated ~95.8% transmittance at a 550-nm wavelength with a sheet resistance of ~600 Ω/sq, indicating better performance than that of stacked bilayer graphene or CNT films at the same transmittance. A similar graphene-CNT hybrid film was obtained by Kim et al. [19] using the thermal chemical vapor deposition (CVD) method on a Cu foil coated with CNTs. The resulting film possessed a sheet resistance of 300 Ω/sq with 96.4% transparency. A distinctive feature of these hybrid structures is the alignment of CNTs on graphene, which makes it possible to obtain improved current characteristics of composite films that are promising for the design of field-effect transistors.
One of the main criteria for the effectiveness of new composite materials as the elemental base of modern electronics is their ability to withstand certain mechanical loads while retaining their electroconductive properties. The preservation of the electroconductive properties of the material during deformation is especially important for devices of flexible and transparent electronics. The mechanical properties of pillared graphene were studied in detail by experimental methods [39] and computer simulation methods [40][41][42][43][44]. For these hybrid carbon structures, the tensile strength at axial stretching and compression was already determined, stress-strain curves were constructed, and Young's modulus and Poisson's ratio were estimated. The mechanical properties of composite films based on covalently bonded graphene and horizontally oriented CNTs are currently still unexplored. There are only a few works devoted to the experimental study of the electromechanical properties of hybrid films based on horizontally oriented CNTs covered with a graphene layer, interacting through van der Waals forces [33,35]. At the same time, information on the behavior of such hybrid films during deformation and the evaluation of their tensile strength and electrical conductivity are necessary for the development of devices of flexible and tensile electronics with improved characteristics. The purpose of this work was to study the mechanical and electroconductive properties of mono- and bilayer graphene-CNT composite films with horizontal orientation of CNTs using quantum and molecular dynamics modeling.
Atomistic Models of Graphene-CNT Composite Films
The super-cells of mono- and bilayer graphene-CNT composite films under study were constructed using an original approach, which we called the "method of magnifying glass" [45]. The essence of this approach lies in the combined use of molecular-mechanical and quantum-mechanical mathematical models at different stages of modeling in order to obtain a topology of the considered structure as close as possible to the data of a natural experiment. At the initial stage of the "method of magnifying glass", an atomistic model is constructed as a large fragment of the graphene-CNT composite with several tens of thousands of atoms, and the atomic network of the object is optimized by minimizing its total energy using the molecular dynamics method and the empirical adaptive intermolecular reactive bond order (AIREBO) potential [46]. At the next stage, a smaller fragment is cut from the middle part of the optimized composite structure and re-optimized in a periodic box using the self-consistent charge density functional tight-binding (SCC-DFTB) method [47]. The dimensions of the box are also optimized to find the configuration that corresponds to the minimum total energy. At the final stage, the unit cell is selected from the previously optimized fragment and again optimized in the periodic box using the SCC-DFTB method. The optimization parameters in this case are the coordinates of the atoms and the dimensions of the box.
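The staged workflow lends itself to a compact summary in code. The sketch below is purely illustrative: the function names (relax_airebo, cut_fragment, relax_dftb) are hypothetical placeholders, since in practice stage 1 would be run in a classical MD code implementing AIREBO (Kvazar in this work) and stages 2 and 3 in an SCC-DFTB code (DFTB+).

```python
# A minimal sketch of the "method of magnifying glass": each stage hands a
# smaller, more accurately relaxed structure to the next. All bodies are
# placeholders standing in for calls to external simulation codes.

def relax_airebo(structure):
    """Stage 1: classical MD energy minimization of a large fragment
    (tens of thousands of atoms) with the empirical AIREBO potential."""
    return structure  # placeholder for an MD code call (e.g., Kvazar)

def cut_fragment(structure, part):
    """Cut a smaller fragment ('middle' or 'unit_cell') out of an
    already-optimized structure."""
    return structure  # placeholder

def relax_dftb(structure, optimize_box):
    """Stages 2-3: quantum-mechanical re-optimization of atomic coordinates
    (and periodic box dimensions) with SCC-DFTB."""
    return structure  # placeholder for a DFTB+ call

def magnifying_glass(large_fragment):
    coarse = relax_airebo(large_fragment)                       # cheap, large scale
    medium = relax_dftb(cut_fragment(coarse, "middle"), True)   # accurate, medium scale
    return relax_dftb(cut_fragment(medium, "unit_cell"), True)  # final super-cell
```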
As shown previously, the most stable monolayer graphene-CNT composite films are formed from semiconductor (10,0) and metal (12,0) CNTs with an inter-tube distance of 8-14 hexagons [45]. For these atomistic models of the composite film, the heat of formation is in the range from −1.5 to −0.1 kcal/mol·atom. Therefore, in the current work, bilayer composite films were built on the basis of CNTs (10,0) and (12,0). The inter-tube distance was taken in a wide range of 8-16 hexagons. Our research results showed that the inter-tube distance should be an even number of hexagons to obtain a regular structure with the same spacing between CNTs in both layers. This situation is shown in Figure 1, which presents the obtained models of composite films based on CNT (12,0) with an inter-tube distance of 12 hexagons. This figure also shows that the middle graphene layer of the bilayer composite film is deformed, unlike the other two. The sites of the middle layer enclosed between two adjacent areas of covalent contacts with CNTs (the covalent bonds of graphene-CNT are marked in red) are almost straight. At the same time, the outer graphene layers exhibit an obvious curvilinearity, as in the case of a monolayer composite. The geometric and energy parameters of all models of the super-cells of mono- and bilayer composite films based on CNTs (10,0) and (12,0) are presented in Table 1. This table shows the translation vectors Lx in the direction of the X-axis (perpendicular to the CNT axis), the heat of formation Hf, the inter-tube distance rt-t, and the parameter a/b characterizing the degree of deformation of CNTs, where a is the major semi-axis and b is the minor semi-axis, as shown in Figure 1. The length of the graphene-CNT covalent bond in all cases is 1.61-1.62 Å. The value of the translation vector Ly in the direction of the Y-axis is not given in Table 1, since it is approximately the same for all super-cells and is equal to 4.27-4.29 Å.

Analysis of the characteristics of the constructed super-cells shows that the degree of deformation of CNTs during formation of the composite film is the same for all types of models and is equal to ~1.64-1.66. The heat of formation is negative in all cases. The super-cells of atomistic models with an inter-tube distance of 10 and 12 hexagons are the most energetically favorable for both mono- and bilayer composite films.
Deformation of Graphene-CNT Composite Films and Its Effect on Electrical Conductivity: A Mathematical Model
To study the deformation behavior of graphene-CNT composite films, a series of numerical molecular dynamics experiments on the stretching and bending of the considered hybrid carbon structures along different axes was carried out, using the SCC-DFTB method to calculate the object energy more accurately during relaxation at each deformation step.
To quantify the mechanical properties of composite films at different stages of deformation, the distribution of local stresses of the atomic network of the structures under study was calculated using the approach proposed by us earlier [48]. This approach is based on the original idea according to which the stress per atom of the deformed structure should be evaluated by the change in the energy of the framework atom under external influence. The stress per atom should be understood as the difference between the energy of an atom of a deformed framework and that of an unloaded (free) framework. This value reflects the degree of deformation at a given point of the structure, that is, the stress of the atomic network near this atom. The calculation of the local stress was carried out according to the following algorithm:
• Optimization of the atomic network of a non-deformed composite film by minimizing its total energy with respect to the coordinates of all atoms using the SCC-DFTB method;
• Calculation of the distribution of the bulk energy density over the atoms using the AIREBO potential;
• Optimization of the atomic network of the deformed composite film by minimizing its energy with respect to the coordinates of all atoms using the SCC-DFTB method;
• Calculation of the distribution of the bulk energy density over the atoms of the structure subjected to an external action using the AIREBO potential;
• Calculation of the local stress distribution of the atomic framework from the difference between the values of the bulk energy density of the atoms of the deformed and non-deformed composite film.
The stress per atom with the number i was calculated as follows:

$$\sigma_i = w_i - w_i^0,$$

where $w_i^0$ is the bulk energy density of the atom of the composite film before deformation, and $w_i$ is the bulk energy density of the atom of the composite film after deformation. The bulk energy density of the atom in the framework of the AIREBO potential was calculated as follows:

$$w_i = \frac{1}{V_i}\,\frac{1}{2}\sum_{j \neq i}\Bigg( E_{ij}^{REBO} + E_{ij}^{LJ} + \sum_{k \neq i,j}\;\sum_{l \neq i,j,k} E_{kijl}^{tors} \Bigg),$$

where $E_{ij}^{REBO}$ is the interaction energy of covalently bonded atoms, which is determined by the type of atoms and the distance between them; i and j are the numbers of interacting atoms; $E_{kijl}^{tors}$ is the energy of torsion interaction, which is a function of a linear dihedral angle built with an edge on the i-j bond (atoms k and l forming chemical bonds with atoms i and j); $E_{ij}^{LJ}$ is the van der Waals interaction energy between covalently unbonded atoms; $V_i = \frac{4}{3}\pi r_0^3$ is the volume occupied by the atom i; and $r_0$ is the van der Waals radius of the carbon atom, equal to 1.7 Å.
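As a minimal illustration of this energy-based definition, the sketch below computes per-atom stresses from two arrays of per-atom AIREBO energies (one for the relaxed deformed film, one for the free film). The input energies are made up, and producing the per-atom energies themselves requires an AIREBO implementation; only the density difference and the unit conversion are shown.

```python
import numpy as np

R0 = 1.7                              # van der Waals radius of carbon, angstroms
V_ATOM = 4.0 / 3.0 * np.pi * R0**3    # volume assigned to each atom, cubic angstroms
EV_PER_A3_TO_GPA = 160.2176           # 1 eV/A^3 = 160.2176 GPa

def local_stress_gpa(e_deformed, e_free):
    """Per-atom stress as the change in bulk energy density between the
    deformed and unloaded frameworks, converted to GPa."""
    w = np.asarray(e_deformed, dtype=float) / V_ATOM
    w0 = np.asarray(e_free, dtype=float) / V_ATOM
    return (w - w0) * EV_PER_A3_TO_GPA

# Hypothetical per-atom AIREBO energies (eV) for three atoms:
print(local_stress_gpa([-7.15, -6.90, -7.35], [-7.40, -7.40, -7.40]))
```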
In order to carry out the numerical experiments, we used the DFTB+ software package (version 18.1) [49], which implements the SCC-DFTB method, and the Kvazar software package (version 2.0) [50], which implements the molecular dynamics method and the AIREBO potential.
The calculation of the electrical conductivity was carried out in the framework of the Landauer–Büttiker formalism [51]. This formalism allows us to calculate the electron transmission function and the static conductivity. The electron transmission function is determined as follows:

$$T(E) = \mathrm{Tr}\left[ \Gamma_S(E)\, G_C^R(E)\, \Gamma_D(E)\, G_C^A(E) \right],$$

where $G_C^A$, $G_C^R$ are the advanced and retarded Green matrices describing contact with the electrodes, and $\Gamma_S(E)$, $\Gamma_D(E)$ are the broadening matrices for the source and drain. The static conductivity is described as follows:

$$G = \frac{2e^2}{h}\, T(E_F),$$

where $E_F$ is the Fermi energy of the material of the contacts which are connected to the object under study, e is the electron charge, h is the Planck constant, and $e^2/h$ is the conductance quantum. The multiplier 2 takes into account the spin of the electrons.
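For illustration, the sketch below evaluates both expressions for a toy two-site device. This is a minimal numerical example, not the production workflow: the Hamiltonian and the (energy-independent) self-energies are made-up constants, whereas a real calculation derives them from the electronic structure of the device and electrodes.

```python
import numpy as np

G0_S = 3.874e-5  # conductance quantum e^2/h in siemens

def transmission(H, sigma_s, sigma_d, E, eta=1e-6):
    """T(E) = Tr[Gamma_S G^R Gamma_D G^A] for a device with Hamiltonian H
    and source/drain self-energies sigma_s, sigma_d."""
    n = H.shape[0]
    g_r = np.linalg.inv((E + 1j * eta) * np.eye(n) - H - sigma_s - sigma_d)
    g_a = g_r.conj().T
    gamma_s = 1j * (sigma_s - sigma_s.conj().T)   # broadening matrices
    gamma_d = 1j * (sigma_d - sigma_d.conj().T)
    return np.trace(gamma_s @ g_r @ gamma_d @ g_a).real

# Toy two-site chain with hopping -1 eV and weak imaginary broadening:
H = np.array([[0.0, -1.0], [-1.0, 0.0]])
sigma = -0.05j * np.eye(2)
T_EF = transmission(H, sigma, sigma, E=0.0)  # transmission at the Fermi level
G = 2.0 * T_EF * G0_S                        # static conductance, factor 2 = spin
print(f"T(E_F) = {T_EF:.4f}, G = {G:.3e} S, R = {1.0 / G:.3e} Ohm")
```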
Results and Discussion
The first series of numerical experiments was devoted to a study of the behavior of mono- and bilayer composite films during bending deformation. To carry out the calculations, two types of atomistic models of the composite film were constructed from the super-cells shown in Figure 1, taking into account the direction of bending. The deformation was considered in the direction perpendicular to CNTs and in the direction along the CNT axis. Figure 2 shows atomistic models of mono- and bilayer composite films using the example of a hybrid structure with CNT (12,0) and an inter-tube distance of 12 hexagons. It can be seen from Figure 2 that the atomistic model of the composite film in both considered cases of deformation consists of five super-cells, which we constructed earlier by means of the method of magnifying glass. In one case, the cell length increases in the direction perpendicular to CNTs (X-axis), and, in the other case, it increases along the CNT axis (Y-axis). The number of super-cells in the atomistic model was chosen to minimize the influence of edge effects and to reproduce the deformation behavior of the material adequately from the physical point of view, as well as to take into account the computational features of the SCC-DFTB method used to recalculate the energy at every stage of deformation.
Figure 3 illustrates the scheme of the composite film bending in each of the considered deformation directions using the example of topological models of a bilayer composite film with CNT (12,0) and an inter-tube distance of 12 hexagons. In both cases, in order to maintain the atomic network deformation during relaxation of the structure to a minimum of energy, the middle atoms in each of the composite layers were rigidly fixed, forming a neutral layer of atoms not involved in minimizing the object's total energy. These atoms are marked in Figure 3 in orange. A similar scheme of bending and fixing of atoms was used for monolayer composite films.
Figure 3 shows that, in the case of bending in the direction perpendicular to CNTs (along the X-axis), the bilayer composite film takes the form of an arc, and, in the case of bending in the direction along the CNT axis (Y-axis), the upper composite layer stretched along the Y-axis while the lower layer compressed along the same axis. This behavior is typical for a beam subjected to bending deformation as a result of an impact of vertical force on its free end, and is described in the framework of the classical beam bending theory. For monolayer composite films, the pattern of deformation behavior was similar to that of the bilayer composite films. During the study, the bending angle of the composite film along the X-axis ranged from 0° to 120°, and, along the Y-axis, it ranged from 0° to 15°. The radius of curvature of the atomic network was changed in the range from 8 to 40 nm for the case of bending along the X-axis, and in the range from 8 to 81 nm for the case of bending along the Y-axis. Using the SCC-DFTB method, the change in the total energy of the composite film at each deformation step was monitored, and relaxation of the atomic network of the structure was carried out. It was found that the composite film structure continued to maintain an arched shape with an increase in the degree of bending along the X-axis, and only the distance between its ends changed. With an increase in the degree of bending along the Y-axis, the composite film structure retained a tendency to contraction near the base, while the upper layer of the composite film stretched along the Y-axis and the lower layer compressed along the same axis. This behavior is typical for a rod. The degree of bending was estimated from the radius of curvature of the composite film atomic network. According to the results of the numerical experiments, we plotted the dependence of the strain energy of the composite film on the radius of curvature of its atomic network. The strain energy was found as the difference between the total energy of the object before and after bending. Figure 4 shows the dependences obtained for mono- and bilayer composite films, respectively.
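The radius of curvature that serves as the bending measure can be extracted from the relaxed atomic coordinates in several ways. One simple option, shown below as an illustration rather than the authors' actual procedure, is a least-squares (Kasa) circle fit to the film profile projected onto the bending plane.

```python
import numpy as np

def radius_of_curvature(x, z):
    """Least-squares circle fit: solve x^2 + z^2 = 2*cx*x + 2*cz*z + c,
    then r = sqrt(c + cx^2 + cz^2)."""
    A = np.column_stack([2 * x, 2 * z, np.ones_like(x)])
    b = x**2 + z**2
    (cx, cz, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.sqrt(c + cx**2 + cz**2)

# Synthetic arc of atom positions with a 20-nm radius of curvature:
theta = np.linspace(-0.3, 0.3, 50)
x, z = 20.0 * np.sin(theta), 20.0 * (1.0 - np.cos(theta))
print(radius_of_curvature(x, z))   # ~20.0 (nm)
```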
These plots show that the strain energy varied in a similar way in both directions of bending for the monolayer and bilayer composite structures. With an increase in the radius of curvature of the atomic network, the strain energy decreased, indicating that the graphene-CNT composite film adapted to its new geometric shape. The presence of the second layer affected mainly the energy, i.e., it doubled in comparison with the monolayer. Analyzing the course of the dependences presented in Figure 4, it can be noted that, in the case of bending in the direction perpendicular to CNTs, the strain energy rapidly decreased in a narrow range of variation of the radius of curvature of the composite film atomic network, i.e., in the range of 15-30 nm for monolayer structures and in the range of 20-40 nm for bilayer structures. A different pattern was observed in the case of the composite bending in the direction along the CNT axis. In this case, the strain energy decreased more slowly and more smoothly, reaching saturation at a radius of curvature of about 80 nm for both monolayer and bilayer composites. Such a distinction can be due to the fact that the properties of the structural components of the composite film manifested differently depending on the direction of deformation; along the X-axis, the properties of graphene were represented, while, along the Y-axis, the properties of CNTs were represented. This was also indicated by the structure of the super-cell of the composite film in each of the considered cases of bending. When simulating the bending along the X-axis, the cell was translated in the direction of the graphene edge, while, when bending along the Y-axis, it was translated in the direction of the CNT axis. The absence of sharp energy peaks on the graph suggests that elastic deformation is typical for bent mono- and bilayer composite films. This deformation was accompanied by an exponential decrease in strain energy down to zero as the atomic network of the composite film was curved.
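The reported behavior suggests a simple empirical check: fit the strain energy to an exponential decay in the radius of curvature. The sketch below does this with scipy on made-up data chosen to mimic the qualitative trend; the numbers are not the paper's.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical strain energies (eV) vs radius of curvature (nm):
R = np.array([15.0, 20.0, 25.0, 30.0, 40.0, 60.0, 80.0])
E = np.array([9.5, 5.8, 3.4, 2.1, 0.8, 0.15, 0.03])

def decay(r, a, lam):
    return a * np.exp(-r / lam)     # E(R) -> 0 as the film flattens out

(a, lam), _ = curve_fit(decay, R, E, p0=(30.0, 10.0))
print(f"E(R) ~ {a:.1f} * exp(-R / {lam:.1f} nm)")
```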
To estimate the energy stability of graphene-CNT composite film during bending deformation in different directions, the local stress distribution of the deformed-structure atomic network was calculated using the algorithm described in Section 3. Since the deformation behavior of the composite film was estimated by the change in strain energy at the previous stage of the study, the use of the original method based on the energy approach to calculate the local stresses of the atomic network seems justified from a physical point of view. The local stress distribution was calculated for all considered structural models of composite films at the time of ultimate bending in the directions perpendicular to CNTs and along CNTs. The obtained results allowed us to establish the patterns of the local stress distribution for graphene-CNT composite films, regardless of the varied CNT diameter and inter-tube distance, as well as the number of layers. Figure 5 shows the local stress distribution of the composite film atomic network by the example of central fragments of the super-cells of monolayer graphene-CNT structures with CNT (12,0) and an inter-tube distance of 12 hexagons. It can be seen from the figure that, in the cases of bending perpendicular to CNTs and along CNTs, the maximum stresses fell on the atoms forming the covalent bonds between graphene and CNT in the composite film. Covalent bonds between these atoms were broken at the time of ultimate bending deformations for the composite films. The values of the critical stresses experienced by the atoms of the deformed framework were the same for different bending directions and corresponded to the previously established stress value of 1.8 GPa, at which the C-C bond is broken in deformed graphene [48]. For bilayer composite films, the pattern of stress distribution was similar to that for monolayer composite films; however, the values of maximum stress decreased slightly for bilayer composites. In general, analyzing the results of the numerical modeling of the deformation behavior of mono- and bilayer composite films during bending, we can note the higher energy stability of graphene-CNT films when bending along the CNT axis.

The second series of numerical experiments was devoted to a study of the behavior of graphene-CNT composite films under stretching. The calculations were carried out for the same super-cells as in the case of bending. During the experiment, the length of the composite film was increased by 1% at each deformation step. The graphene-CNT structure was retained in the stretched state by rigidly fixing atoms along the edges of the composite film super-cell, so that it could not return to the initial state. The fixed atoms did not participate in the search for the equilibrium configuration of the framework corresponding to the ground state. The dependences of the strain energy of the composite film on the strain value in relative units were plotted according to the results of the numerical experiments. The strain energy was found as the difference between the total energy of the composite film before and after stretching. Figure 6 shows the dependences obtained for mono- and bilayer composite films. From the plots presented in the figure, it can be seen that the strain energy increased according to a quadratic law for both types of composite films, which corresponds to elastic deformation of the structure. It should be noted that the graphene-CNT structures under study were more resistant to axial stretching in the longitudinal direction (perpendicular to CNTs). With deformation in the transverse direction (along CNTs), the destruction of the composite film structure occurred faster. This behavior of the composite film can be explained by the topological features of the super-cell types under consideration, i.e., the significant difference in cell lengths in the direction of stretching of the graphene-CNT structure. In particular, for mono- and bilayer composite films with CNT (12,0) and an inter-tube distance of 12 hexagons, the length of the super-cell was ~14.5 nm in the direction of deformation perpendicular to CNTs (X-axis), and ~2 nm in the direction along the CNT axis (Y-axis).
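A quadratic strain-energy law is the signature of linear-elastic (Hookean) response, E(eps) ~ k*eps^2/2, so an effective stiffness can be read off a polynomial fit. The sketch below illustrates this on made-up data; the values are not taken from the paper.

```python
import numpy as np

# Hypothetical strain energy (eV) vs relative strain for elastic stretching:
eps = np.array([0.00, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07])
E = np.array([0.00, 0.11, 0.45, 1.02, 1.81, 2.86, 4.10, 5.62])

coeffs = np.polyfit(eps, E, 2)     # leading coefficient approximates k/2
k = 2.0 * coeffs[0]
print(f"effective stiffness k ~ {k:.0f} eV per unit strain squared")
```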
Analyzing the curves in the graph, one can see that the CNT diameter and inter-tube distance had an impact on the tensile strength of the composite film under axial stretching. In particular, both mono- and bilayer composite films with CNT (12,0) and an inter-tube distance of 10 hexagons offered the least resistance to tensile strain in the direction along the CNT axis. For monolayer composite films, the destruction of the atomic network occurred at 5% stretching, while for bilayer composites it occurred at 3% stretching. Composite films with tubes (10,0) and an inter-tube distance of 12 hexagons demonstrated the most resistance to tensile deformation along the CNT axis among the studied topological models of mono- and bilayer graphene-CNT hybrid structures. When the composite film was stretched in the direction perpendicular to CNTs, a dependence of the strength properties of the graphene-CNT hybrid structure on the layering was clearly observed. For monolayer composite films, the C-C bond breaking occurred at 7% stretching regardless of the CNT diameter and inter-tube distance. For bilayer composite films, the highest tensile strength was demonstrated by hybrid structures with an inter-tube distance of 12 hexagons. For them, the covalent bond breaking occurred at 8% stretching. The calculation of the local stress distribution showed that the destruction of the atomic network of mono- and bilayer composite films occurred at a critical stress of ~1.8 GPa, regardless of the diameter of the tube and the inter-tube distance. This result is in good agreement with the results of computer studies of the deformation of graphene nanoribbons [48].

As is well known, the stretching of the atomic network of nanostructures does not change their electrical conductivity, since the electronic structure does not change. The electrical conductivity changes dramatically only at the moment of the breaking of interatomic bonds. A completely different situation occurs in the case of bending. During the bending of nanostructures, the electron clouds of atoms are re-hybridized; therefore, the nature of electron transport changes. In this connection, we carried out calculations of the transmission function T(E) using Equation (3) and the electrical conductivity G using Equation (4) at each step of bending. First of all, we note that an anisotropy of electrical conductivity was observed in the composite films under study. In the X-direction (see Figure 2), perpendicular to the CNT axis, the electric current was almost absent, since the electrical resistance was tens to hundreds of megaohms. In the Y-direction, along the CNT axis, the electrical resistance value was comparable with the resistance of individual CNTs. In this regard, we carefully studied the pattern of changes in T(E) and G during bending precisely in the Y-direction.

At first, the behavior of the transmission function T(E) during bending of mono- and bilayer composite films based on semiconductor CNT (10,0) and metal CNT (12,0) with different inter-tube distances was investigated. Initially, the function T(E), regardless of the CNT type and the number of layers, exhibited a small gap near the Fermi level (0.2-0.5 eV), i.e., zero conductivity. The presence of the gap was due to the peculiarity of the conductivity of zigzag CNTs. As is known, zigzag CNTs, even metallic ones, are characterized by the presence of a small area of zero conductivity near the Fermi level. However, during bending, accompanied by re-hybridization and additional overlapping of electron clouds, the zero-conductivity gap completely disappears for all types of composite films, regardless of the CNT type and the number of layers. As expected, the electrical conductivity of composite films reacted differently to bending. The nature of the response was determined only by the type of conductivity of CNTs. For samples based on metallic CNTs, the conductivity at the Fermi level increased with increasing curvature (decreasing radius of curvature), regardless of the layering. At a radius of curvature of ~15 nm, the composite films based on CNT (12,0) already had one conduction channel at the Fermi level. For clarity, Figure 7 shows the plots of changes in T(E) of monolayer films as the radius of curvature decreased, i.e., as the bending angle increased. The plots are given for CNTs (10,0) and (12,0) with the same inter-tube distance of 10 hexagons. The unit of measurement for T(E) is the conductance quantum. A violet color represents the plot corresponding to the infinite radius of curvature, when bending is absent. The behavior of T(E) with increasing bending qualitatively predicts an increase in conductivity. The value of conductivity G gives a quantitative prediction, and the value of electrical resistance R(Ω) is calculated from the conductivity G. Plots of the change in electrical resistance R during bending in the Y-direction are shown in Figure 8 for mono- and bilayer composite films with CNTs (12,0) and (10,0) at different inter-tube distances. An analysis of the plots shows that the trend in the changes of electrical resistance was the same for both types of films, i.e., the resistance decreased. The lowest value of R was observed for composite films with metal CNT (12,0). These films initially had a lower electrical resistance than samples with semiconductor CNTs, which was quite expected. However, it should be noted that the electrical resistance of an individual CNT (10,0) was significantly greater than the resistance of a composite film with the same CNT. This was due to the covalent bonding of CNT with graphene. It can also be seen from the plots that the resistance of the bilayer film was 1.5-2 times less than that of the monolayer film. This can be explained by the presence of two channel-tubes in a super-cell, i.e., two conductors instead of one.
Conclusions
As a result of a series of numerical experiments carried out using the molecular dynamics method and the SCC-DFTB method, we determined patterns of the deformation behavior and electrical conductivity of mono- and bilayer graphene-CNT composite films with CNTs (12,0) and (10,0) and inter-tube distances of 10 and 12 hexagons under stretching and bending. It was established that the direction of deformation plays an important role in determining the deformation behavior of the graphene-CNT structure under bending. In the case of bending perpendicular to CNTs, the composite film takes the form of an arc, and, in the case of bending along the CNT axis, the composite film exhibits behavior characteristic of a beam subjected to bending deformation as a result of exposure to vertical force at its free end. The revealed patterns are valid for both mono- and bilayer composite films. When the composite film was axially stretched both in the direction along the CNT axis and perpendicular to CNTs, a pattern of elastic deformation, accompanied by a quadratic increase in the deformation energy, was observed. At the same time, the graphene-CNT composite film showed greater resistance to stretching deformation in the direction perpendicular to CNTs. In the case of stretching in the direction perpendicular to CNTs, the destruction of the composite structure occurred at 7% stretching for monolayer films, regardless of the CNT diameter and inter-tube distance, and at 8% stretching for bilayer films with CNTs (10,0) and (12,0) and an inter-tube distance of 12 hexagons. During deformation in the direction along CNTs, the breaking of C-C bonds of the atomic framework occurred for individual topological models of composite films at 3% stretching. The results of the calculation of the local stress distribution of the atomic network of the composite films under study during bending and stretching showed that the C-C bond breaking occurred at a critical stress of ~1.8 GPa per atom. A similar pattern was previously observed for graphene nanoribbons subjected to axial compression. Some patterns were found in the behavior of the electrical conductivity of composite films. Bilayer composite films with CNTs of metallic conductivity had the highest conductivity. For these, the value of electrical resistance reached 5 kΩ. The electrical conductivity of bilayer composite films was 1.5-2 times higher than the conductivity of monolayer composite films, since not one but two tubes were in one super-cell. As a result, both CNTs acted as two conductors or conduction channels. This was evidenced by a decrease in the electrical resistance of the composite films, regardless of the type of conduction of CNTs and the inter-tube distance.
Figure 2. Atomistic models of graphene-CNT composite films with CNT (12,0) and an inter-tube distance of 12 hexagons used to simulate bending deformation in two directions: (a) perpendicular to CNTs; (b) along the CNT axis.

Figure 3. A diagram of the bending of bilayer graphene-CNT composite films with CNT (12,0) and an inter-tube distance of 12 hexagons in various directions: (a) perpendicular to CNTs (X-axis); (b) along the CNT axis (Y-axis).

Figure 4. Dependence of the bending energy on the radius of curvature of the atomic network of graphene-CNT composite films: (a) a monolayer; (b) a bilayer. Solid lines correspond to the case of bending in the direction perpendicular to CNTs (X-axis), while dashed lines represent bending along the CNT axis (Y-axis).

Figure 5. The local stress distribution of the atomic network of the central fragment of a monolayer graphene-CNT composite film with CNT (12,0) and an inter-tube distance of 12 hexagons when bending in the direction (a) perpendicular to CNTs and (b) along the CNT axis.

Figure 6. Dependence of the strain energy on the magnitude of the stretching of graphene-CNT composite films: (a) a monolayer; (b) a bilayer. Solid lines correspond to the case of stretching perpendicular to CNTs (X-axis), while dotted lines represent stretching along the CNT axis (Y-axis).

Figure 7. Plots of the transmission function T(E) of monolayer graphene-CNT composite films at various degrees of bending: (a) for the sample with CNT (10,0) and an inter-tube distance of 10 hexagons; (b) for the sample with CNT (12,0) and an inter-tube distance of 10 hexagons.

Figure 8. Plot of changes in electrical resistance during bending of graphene-CNT composite films along the CNT axis (in the Y-direction): (a) monolayer films; (b) bilayer films.
Table 1. Geometric and energy parameters of mono- and bilayer graphene–carbon nanotube (CNT) composite films.
"Engineering",
"Materials Science",
"Physics"
] |
Separate Developmental Programs for HLA-A and -B Cell Surface Expression during Differentiation from Embryonic Stem Cells to Lymphocytes, Adipocytes and Osteoblasts
A major problem of allogeneic stem cell therapy is immunologically mediated graft rejection. HLA class I A, B, and Cw antigens are crucial factors, but little is known of their respective expression on stem cells and their progenies. We have recently shown that locus-specific expression (HLA-A, but not -B) is seen on some multipotent stem cells, and this raises the question how this is in other stem cells and how it changes during differentiation. In this study, we have used flow cytometry to investigate the cell surface expression of HLA-A and -B on human embryonic stem cells (hESC), human hematopoietic stem cells (hHSC), human mesenchymal stem cells (hMSC) and their fully-differentiated progenies such as lymphocytes, adipocytes and osteoblasts. hESC showed extremely low levels of HLA-A and no -B. In contrast, multipotent hMSC and hHSC generally expressed higher levels of HLA-A and clearly HLA-B though at lower levels. IFNγ induced HLA-A to very high levels on both hESC and hMSC and HLA-B on hMSC. Even on hESC, a low expression of HLA-B was achieved. Differentiation of hMSC to osteoblasts downregulated HLA-A expression (P = 0.017). Interestingly HLA class I on T lymphocytes differed between different compartments. Mature bone marrow CD4+ and CD8+ T cells expressed similar HLA-A and -B levels as hHSC, while in the peripheral blood they expressed significantly more HLA-B7 (P = 0.0007 and P = 0.004 for CD4+ and CD8+ T cells, respectively). Thus different HLA loci are differentially regulated during differentiation of stem cells.
Introduction
HLA class I molecules present cytoplasmic peptides to T-cell receptors on CD8+ T cells, which play a central role in the protection against viral and other intracellular infections as well as in immune reactions to neoplasms. Furthermore, certain HLA class I molecules play important roles as ligands for inhibitory NK-cell receptors. The presence or absence of HLA class I expression and its mode of regulation in various tissues are therefore of great importance for our understanding of T-cell and NK-cell mediated protection. In contrast to statements found in many authoritative textbooks of immunology claiming that HLA class I is expressed by all nucleated cells in the body [1][2][3], the expression is in fact lacking in several cell types [4][5][6][7][8][9][10][11][12][13][14]. Thus HLA class I expression is repeatedly reported as negative in vivo in neuronal cells of the brain, sperm and ova, placenta and islets of Langerhans [5-7,9,13,15]. In fact, unequivocal evidence for cell surface HLA class I expression is limited to most cells in lymphoid tissues, epithelial cells of different body surfaces and the endothelial lining of blood vessels (excluding large vessels) [6,7,9,10,13,14,16-25]. Apart from these tissues, constitutive HLA class I expression is a matter of controversy. Skeletal muscle cells have been reported to express low amounts of HLA class I [6,13] while other studies have found them to be negative [9,11,14]. Other examples are smooth muscle cells [6,9,13,14,25,26], the parenchymatous cells of the thyroid and the adrenal glands [6,9,13,27] and the kidney [8,12], for which conflicting evidence has been reported. The discrepancies may be due to differences in the specificity and sensitivity of the techniques used, because most of the studies used immunohistochemistry (IHC), where the readout is at best semi-quantitative and different thresholds for positivity may be applied. In addition, it is difficult to compare the staining intensity between samples in different studies because different reagents and techniques were used. Class-specific or allele-specific HLA antibodies were developed originally for complement-dependent cytotoxicity assays (CDC) and flow cytometry. Establishing the sensitivity of such antibodies in IHC assays requires careful examination and validation, which is not always undertaken.
Most studies that have addressed HLA class I expression in tissues used antibodies that detect HLA class I in general, most commonly the W6/32 or PA2.6 monoclonal antibodies. W6/32 is well known for binding to all HLA class I alleles [5]. It is therefore largely unknown if all three HLA class I antigens: HLA-A, -B, and -C are co-expressed in class I positive tissues. A few studies demonstrated that both HLA-A and -B are expressed in bone marrow and colon epithelium [17,22,28]. Because these studies have used IHC as the primary technique, the comparison between HLA-A and -B loci was at best semi-quantitative and an absolute comparison was not possible.
There is evidence that the HLA-A locus is regulated separately from the -B locus in some tissues. Recently, we showed that cell surface expression of HLA-B is low or absent on human mesenchymal stem cells (hMSC) while HLA-A is fully expressed [29]. While it is common to see locus- or allele-specific downregulation in tumor cells, this was the first report in normal human cells. Such divergence of classical HLA class I expression in stem cells indicates that separate developmental programs may control the expression of classical HLA loci during normal cell differentiation and demonstrates that HLA class I expression should be revisited using locus-specific (-A, -B, -C) or even allele-specific reagents.
In this study, we have expanded the scope and studied surface expression of HLA-A and -B alleles on pluripotent embryonic stem cells, multipotent stem cells (hMSC and human hematopoietic stem cells (hHSC)) and some of their end-stage progenies (different subsets of lymphocytes and in vitro differentiated adipocytes and osteoblasts).
Cell Lines and Culture Conditions
In this study two embryonic stem cell lines, four hMSC lines, bone marrow (BM) aspirates (n = 7) and peripheral blood mononuclear cells (PBMC) from healthy volunteers (n = 7, different from the BM donors) were used (Table 1). These cells represent different levels of differentiation as outlined in Figure 1.
The derivation, characterization and routine culture of the hESC lines (huES9 and KMEB2) have previously been reported [30][31][32]. Cells were cultured on Matrigel (BD Biosciences, San Jose, CA, USA) in mouse embryonic fibroblast-conditioned medium essentially as described in 'Protocols for Maintenance of Human Embryonic Stem Cells in Feeder Free Conditions' from Geron Corporation (Menlo Park, CA, USA). Briefly, mouse embryonic fibroblast-conditioned medium was prepared by incubating mitotically inactivated mouse embryonic fibroblasts for 24 hours in medium consisting of Knockout D-MEM, 15% knockout serum replacement, 1% Penicillin-Streptomycin, 1% MEM non-essential amino acids (without glutamine, Cat. No.: 10370-070), 1% GlutaMAX (all from Life Technologies, Taastrup, Denmark), 0.5% human serum albumin (CSL Behring, King of Prussia, PA, USA), and 0.1 mM β-mercaptoethanol (Sigma-Aldrich, St. Louis, MO, USA). This medium was sterile filtered and supplemented with 5-8 ng/ml recombinant human fibroblast growth factor (FGF) (Peprotech, London, UK) immediately before use. Cells were routinely passaged at a 1:3 ratio using 0.05% trypsin-EDTA (Life Technologies). hESC were cultured for 3 days after passage before the addition of human recombinant IFNγ (Gibco, Life Technologies). Following incubation for 24, 48 and 72 h, respectively, cells were washed in PBS, collected by mechanical scraping in PBE (PBS containing 2 mM EDTA and 0.5% BSA) and stained with monoclonal antibodies for flow cytometry (see antibodies in Table S1).
hMSC and BM mononuclear cells (BMNC) were established from bone marrow aspirates according to a previously established method [29]. Briefly, 20 ml of bone marrow were aspirated into a syringe containing 2 ml of heparin (5000 IU/ml, Amgros, Copenhagen, Denmark), mixed thoroughly and rapidly diluted with 25 ml of complete RPMI (RPMI enriched with 10% fetal bovine serum (FBS), L-glutamine, penicillin and streptomycin, all from Life Technologies) containing 30 IU/ml heparin. The diluted marrow was layered onto a density gradient medium in 50 ml tubes and centrifuged for 40 minutes at 500 × g with the acceleration and brake set to minimum. The interface layer was transferred to new tubes and washed twice with complete RPMI (+heparin 30 IU/ml). The isolated BMNC were used to generate hMSC lines or frozen in 90% heat-inactivated FBS and 10% DMSO (Sigma-Aldrich) for later flow cytometric analysis. hMSC were generated by seeding 10⁷ BMNC in a 75 cm² culture flask in 15 ml of complete MEM (MEM enriched with 10% FBS, L-glutamine, penicillin and streptomycin, all from Life Technologies) and incubated at 37 °C, 5% CO₂ for 5 days. The non-adherent cells were removed, and the adherent cells were subsequently trypsinized with TryPLE express (Life Technologies) and transferred to new 75 cm² culture flasks. hMSC properties were confirmed by flow cytometry (positive for CD73, CD90, CD105, CD146 and negative for CD34, data not shown) and by their ability to differentiate to adipocytes and osteoblasts (see differentiation protocols section).
Genomic HLA Typing
Genomic DNA was purified from the thawed cells using the QIAamp DNA Mini Kit (QIAGEN, Hilden, Germany) according to the manufacturer's protocol. Low resolution HLA typing of the cell lines and donors was performed using LabType SSO (sequence-specific oligonucleotide probes) (One Lambda, Los Angeles, CA, USA) using a Luminex 100 IS (Luminex Corp., Austin, TX, USA).
Antibodies
The antibodies used are summarized in Table S1.
Flow Cytometric Staining
The frozen fraction of the bone marrow aspirate was thawed, washed twice with pre-heated (37 °C) fresh medium (complete RPMI), re-suspended in medium and left at room temperature for 30 minutes. The cells were then spun down at 500 × g for 5 min and re-suspended in medium at a concentration of 10⁷ cells/ml. Adherent cells were incubated with TrypLE Express (Life Technologies) for 5-8 minutes until most of the cells were brought into suspension. The cells were then washed twice and re-suspended in medium.
Primary antibody staining with fluorochrome-conjugated antibodies was performed using a maximum of 10⁶ cells in 100 µl of medium with the amount of antibody recommended by the manufacturer. All HLA antibodies were titrated and used at concentrations saturating the staining. We found that final dilutions of 1:2000 for HLA-A2, 1:10 for HLA-A3, 1:500 for HLA-B7, 1:100 for HLA-B8 and 1:10 for HLA-B27 antibodies were sufficient. The cells were incubated for 30 minutes (all staining incubations were performed in the dark at 4 °C) and washed twice with PBE (PBS containing 2 mM EDTA and 0.5% BSA). Samples stained with mouse anti-human antibodies not conjugated with a fluorochrome were subsequently stained with a FITC-conjugated goat anti-mouse IgG/IgM antibody, incubated for another 30 min, and then washed twice with PBE. If further staining was needed, the sample was incubated with mouse serum (Dako, Glostrup, Denmark) for 30 min (to block available binding sites on the secondary antibody), washed twice, and then stained with fluorochrome-conjugated murine antibodies for another 30 min. After staining, the cells were re-suspended in PBE containing 1% formaldehyde solution and kept in the dark at 4 °C. The cells were analyzed by flow cytometry immediately or at the latest the next day.
hHSC were defined by being positive for CD34 while negative for CD38 and lineage markers (CD3, CD4, CD8, CD14, CD16, CD19, CD20 and CD56) [33].The fluorochrome conjugation of the antibodies is detailed in Table S1. Mature CD4 + T cells were defined by gating on the CD4 and CD3 double positive population. Mature CD8 + T cells were defined by gating on the CD3 and CD8 double positive population, both for BM and peripheral blood-derived cells.
All flow cytometry was carried out using a CyAn ADP (Beckman Coulter) and the results were analyzed using the Summit 4.3 program. For direct immunofluorescence, MEF (molecules of equivalent fluorochrome) values were calculated from the MFI (mean fluorescence intensity) and a standard curve made by running FluoroSpheres (Dako) the same day and with the same settings. For samples stained by indirect immunofluorescence, ABC (antibody binding capacity) values were calculated from the MFI and a standard curve produced by using the QIFI kit (Dako), beads coated with different known numbers of Ig molecules. The MEF/ABC value of a relevant isotype control staining was subtracted from the corresponding HLA antibody MEF/ABC value, thus giving the specific MEF/ABC value for each HLA antibody.
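Since MEF and ABC values are obtained by interpolating sample MFIs on a bead-derived standard curve, the calculation is easy to script. The sketch below is a minimal illustration with entirely hypothetical bead and sample numbers (it is not the vendor software used in the study); it shows the log-log fit and the isotype-control subtraction described above.

```python
import numpy as np

# Hypothetical calibration-bead values: measured MFI vs. known numbers of
# fluorochrome/Ig molecules per bead (QIFI/FluoroSpheres-style standards).
beads_mfi = np.array([120.0, 950.0, 7800.0, 61000.0])
beads_molecules = np.array([4.1e3, 3.8e4, 3.3e5, 2.9e6])

# Fit a log-log linear standard curve from the beads.
slope, intercept = np.polyfit(np.log10(beads_mfi), np.log10(beads_molecules), 1)

def mfi_to_molecules(mfi):
    """Convert a sample MFI into MEF/ABC units via the standard curve."""
    return 10 ** (slope * np.log10(mfi) + intercept)

# Specific value = (HLA stain) - (isotype control), as described in the text.
specific_abc = mfi_to_molecules(15500.0) - mfi_to_molecules(210.0)
print(f"specific ABC ~ {specific_abc:.0f} molecules/cell")
```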
Adipocyte-differentiated cells were harvested at day 13 (longer differentiation was not possible because the cells would become too fragile for flow cytometry). Osteoblast-differentiated cells were harvested after 16 days of differentiation. In both cases, cells were trypsinized (5-8 minutes at 37 °C) and then re-suspended in complete MEM, followed by flow cytometric staining (Table S1).
Oil Red O Staining for Adipocyte Differentiated Cells
Differentiation of hMSC to adipocytes was confirmed by Oil Red O staining [34]. The cells in the culture dish were washed with PBS, fixed with 4% paraformaldehyde for 10 minutes at room temperature, rinsed with 3% isopropanol and stained with Oil Red O (ORO) (Sigma, Steinheim, Germany) for 1 hour at room temperature, and then washed twice with distilled water. The staining solution was prepared by dissolving 25 mg of ORO in a mixture of 5 ml of 100% isopropanol and 3.3 ml of water, incubated for 15 minutes at room temperature and then filtered through Whatman filter paper (Q-Max CA-S, 0.20 µm pore size, Frisenette APS, Knebel, Denmark) before use. The stained cells were examined microscopically for the formation of fat droplets inside the cells.
Alizarin Red Staining for Osteoblast Differentiated Cells
Osteoblast differentiation was confirmed by matrix mineralization visualized by Alizarin Red staining [34]. Briefly, cells in four-well plates were washed with PBS, fixed with 70% ice-cold ethanol for 1 hour at −20 °C, rinsed with distilled water, and stained with 40 mM Alizarin Red (pH 4.2, Sigma-Aldrich, USA) for 10 minutes at room temperature under gentle rotation. Excess dye was removed with H₂O, followed by a brief wash with PBS for 2 minutes under gentle rotation.
Quantification of Alkaline Phosphatase Activity
Cells were plated in 96-well plates at a density of 6000 cells/well and induced to osteogenic differentiation as described above. On day 7, cells were washed with PBS, rinsed with TBS, fixed with 3.7% formaldehyde for 30 seconds, and then 100 µl of reaction substrate solution (1 mg/ml p-nitrophenylphosphate (Fluka, USA) in 50 mM NaHCO₃ (pH 9.6) with 1 mM MgCl₂) were added and incubated for 20 min at 37 °C. Finally, 50 µl of 3 M NaOH were added to stop the reaction and the absorbance was measured at 405 nm.
Gene Expression Analysis for Differentiated Cells
During MSC differentiation to adipocytes and osteoblasts, cells were harvested at days 12 and 16, respectively, and mRNA expression of the osteogenic marker Collagen 1 and the adipogenic marker PPARγ was verified using RT-PCR.
Statistics
Statistical analysis was performed using GraphPad Prism 5 software. Student's t-test or one-way ANOVA was used to test differences of means between two or three groups of data, respectively. P values <0.05 were considered significant and marked with * (P <0.01 marked with ** and values <0.001 with ***). Error bars signify mean ± SEM.
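For readers who prefer scriptable statistics, the same tests are available in SciPy. The snippet below is a minimal sketch with simulated data (all values hypothetical), mirroring the two-group t-test, the three-group one-way ANOVA and the mean ± SEM reporting used here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated MEF values for illustration only (all numbers hypothetical).
group_a = rng.normal(35000, 8000, 7)   # e.g. undifferentiated hMSC
group_b = rng.normal(21000, 8000, 7)   # e.g. osteoblast-differentiated cells
group_c = rng.normal(30000, 8000, 7)   # e.g. adipocyte-differentiated cells

t_stat, p_two = stats.ttest_ind(group_a, group_b)            # two groups
f_stat, p_three = stats.f_oneway(group_a, group_b, group_c)  # three groups

print(f"t-test: P = {p_two:.4f}; one-way ANOVA: P = {p_three:.4f}")
print(f"group A: {group_a.mean():.0f} +/- {stats.sem(group_a):.0f} (mean +/- SEM)")
```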
Ethical Approvals
The study was reviewed and approved by the Ethical Committee for the Region of Southern Denmark with the issue No. 2008-00-92. Written informed consent was obtained from the blood and bone marrow donors with respect to sampling and establishment of hMSC cell lines.
hESC Constitutively Express Low Amounts of Cell-surface HLA-A but no Detectable HLA-B
First, we investigated the basal expression of HLA-A and HLA-B on two well-characterized hESC lines: huES9 [35] and KMEB2 [30] (Figure 2A). These cell lines were chosen because they genetically carry HLA alleles that, if expressed, can be specifically detected by commercial antibodies validated for diagnostic purposes (Table 1). HLA-C expression was not included in the study due to the lack of HLA-C allele-specific antibodies. The allele-specific monoclonal antibody staining showed extremely low, but detectable, HLA-A2 (huES9) and -A3 (KMEB2) expression, while HLA-B7 and -B13, respectively, were undetectable on the surface of the cell lines (Figure 2A). However, after stimulation of huES9 with IFNγ, a dramatic up-regulation of HLA-A2 was observed (Figure 2C). Similarly, a dramatic up-regulation of HLA-A3 was seen in hESC KMEB2, while only a modest induction of HLA-B7 was seen (Figure 2A). Stimulation for 24 and 48 hours showed less pronounced up-regulation but otherwise similar expression patterns (data not shown).
hMSC Express High Levels of HLA-A Alleles and Low Levels of HLA-B

hMSC are more differentiated than pluripotent hESC yet still multipotent (Figure 1). We have previously shown that hMSC of bone marrow or adipose tissue origin constitutively express HLA-A alleles on the cell surface, but only weakly HLA-B alleles [29]. In this study, we confirmed those findings in a primary hMSC line (ToB11-13) (Table 1), which showed a relatively high HLA-A2 surface expression (mean MEF = 35,012) and a five times lower but significant HLA-B7 expression (mean MEF = 6,324) under basal growth conditions (Figure 2B and C). The gap between the levels of HLA-A2 and -B7 expression was narrowed after stimulation with IFNγ, with high levels of both HLA-A2 and -B7 (mean MEF = 123,766 and 76,130, respectively) (Figure 2).
The Constitutive HLA-A Expression on hMSC is Reduced During Differentiation into Osteoblasts
We next investigated whether the expression of HLA-A and -B could be altered by differentiation from hMSC (multipotent) towards their differentiated adipocyte or osteoblast progenies (Figure 1). Established differentiation protocols were used to drive hMSC into either adipocyte or osteoblast differentiation, and successful differentiation was confirmed by the expression of cytoplasmic lipid droplets and increased expression of PPARγ mRNA in adipocytes, and by extracellular matrix deposition as well as increased expression of Collagen 1 mRNA in osteoblasts (Figure 3C and 3F). Figure 4B shows that HLA-A2 surface expression was significantly reduced during differentiation of hMSC into osteoblasts (P = 0.017). A similar trend was noted during differentiation of hMSC to adipocytes (albeit not significant, P = 0.2) (Figure 4A). Some individual cell lines even exhibited complete absence of HLA-A2 after culture in both differentiation protocols. HLA-B expression remained undetectable in both hMSC and their differentiated progenies (Figure 4).
hHSC Express High Levels of HLA-A and -B
Like hMSC, hHSC are multipotent stem cells found in the BM (Figure 1). hHSC are progenitors of almost all cells of the blood and the immune system. In this study, hHSC were defined as CD34+, CD38− and Lin−, where Lin denotes a mixture of lineage markers (CD3, CD4, CD8, CD14, CD16, CD19, CD20 and CD56). BM aspirates were studied using indirect staining with calibration beads, allowing quantification of the antibody binding capacity (ABC) (Figure 5).
hHSC expressed high levels of HLA-A which did not differ much (though statistically significantly) from those of mature CD4+ and CD8+ T cells also present in the marrow aspirate (1.8-fold higher in hHSC, P = 0.007, and 1.5-fold higher, P = 0.008, respectively). HLA-B expression on hHSC was also high and indistinguishable from that of bone marrow CD4+ and CD8+ T cells (P = 0.7 and P = 0.3, respectively). However, HLA-A2 was expressed at significantly higher levels than HLA-B7 and -B8 on hHSC (P = 0.0002), BM CD4+ T cells (P = 0.0045), and BM CD8+ T cells (P = 0.002) (Figure 5).
Mature Lymphocytes Express More HLA on the Surface in the Peripheral Blood than in BM
The finding that BM-derived lymphocytes expressed more HLA-A2 than HLA-B7, as shown above, prompted us to compare them with peripheral blood lymphocytes (Figure 5). Peripheral T lymphocytes tended to express marginally more HLA-A2 and -B8 than their counterparts in the bone marrow (although not statistically significantly, P > 0.11, unpaired t-test). However, HLA-B7 was expressed significantly more on peripheral T cells than on BM-derived T lymphocytes (P = 0.0007 and P = 0.004 for CD4+ and CD8+ T cells, respectively).
Further, when the different allelic forms of the proteins (HLA-A2, -B7 and -B8) expressed on the selected peripheral blood lymphocytes were compared, HLA-B8 was significantly less expressed than HLA-A2 and -B7 on CD4+ and CD8+ T cells (as judged by one-way ANOVA, P = 0.0001 and P = 0.0001, respectively; Figure 5). This marks a noticeable difference between HLA-B7 and -B8, since the former is expressed two-fold higher than the latter (P = 0.0002 for CD4+ T cells and P = 0.0002 for CD8+ T cells). In contrast, there was no significant difference between the analyzed HLA alleles in CD19+ cells. Peripheral blood CD19+ B lymphocytes expressed HLA-A2, -B7 and -B8 at levels as high as peripheral CD4+ and CD8+ T lymphocytes (P = 0.7). BM CD19+ cells were likely a heterogeneous population of pre-B cells and immature and mature B cells and were not evaluated further. Taken together, the levels of HLA-B8 expression were comparable between T cells derived from peripheral blood and BM (P > 0.14) and even in hHSC (Figure 5). Thus, the surface expression of HLA class I molecules does not only vary between cell types and levels of differentiation; it is also influenced by the compartment in which the cells reside. Moreover, variation may even occur at the level of individual HLA alleles on the same locus.

Figure 3 (caption fragment). Alizarin Red stained the matrix formed by osteoblasts (E). qPCR data show marked up-regulation of PPARγ gene expression after adipocyte differentiation (C) and up-regulation of Collagen 1 gene expression in osteoblast-differentiated cells (F). doi:10.1371/journal.pone.0054366.g003

Figure 5. Cell-surface HLA expression of hHSC and lymphocytes derived from BM and peripheral blood. All allelic forms of HLA studied were highly expressed in all the cell types shown. However, HLA-A2 was expressed at significantly higher levels than HLA-B7 and -B8 on hHSC, BM CD4+ T cells, and BM CD8+ T cells. When BM lymphocytes and peripheral blood lymphocytes were compared, there was a marginal (but non-significant) difference in the expression of the HLA-A2 and -B8 alleles, which tended to be higher in peripheral blood than in BM. A significantly higher expression level of HLA-B7 was observed on peripheral blood CD4+ and CD8+ T lymphocytes. In peripheral blood, the expression of HLA-B8 was significantly lower than HLA-A2 and -B7 on CD4+ and CD8+ T lymphocytes. Peripheral CD19+ lymphocytes expressed similar amounts of HLA-A2, -B7 and -B8. doi:10.1371/journal.pone.0054366.g005
Discussion
Our study compares for the first time the simultaneous quantitative expression of HLA-A and -B alleles on undifferentiated pluripotent embryonic stem cells, more differentiated multipotent stem cells and terminally-differentiated lineage-specific cells with different fates in body compartments.
Allele-specific HLA cell-surface expression was markedly different between cells at different stages of differentiation and maturation. Pluripotent hESC are known to express low levels of cell-surface HLA class I, based on the staining patterns obtained after incubation with the W6/32 pan-HLA class I antibody [36][37][38][39]. Basak et al. have described low constitutive levels of HLA-A2 cell-surface expression on the H9 human embryonic stem cell line [40], while the surface expression of HLA-B or -Cw has, to the best of our knowledge, not been investigated. Low but detectable expression of HLA-A, -B, and -C mRNA has been reported, as judged by quantitative RT-PCR [36], but this does not imply expression on the protein level, because many post-transcriptional factors are needed for successful expression, and some potentially important ones, like TAP1, TAP2, LMP2 and LMP7, have been reported missing at the mRNA level in the HS293 embryonic stem cell line [36]. Our results show that classical HLA class I cell-surface expression on hESC does not comprise expression of the HLA-B locus, at least not of the alleles studied in the two hESC lines here. However, after stimulation of hESC with IFNγ the expression of HLA-A alleles increased to high levels, as we have previously observed in multipotent hMSC [29], while only a modest induction of HLA-B was seen. This indicates that the antigen-presentation pathway(s) required for the generation, transport and expression of HLA molecules on the cell surface are readily inducible in hESC. The mechanism behind the differential constitutive expression of HLA-A, but not -B, remains to be elucidated but could relate to different dependencies on the peptide loading complex [41].
hMSC are multipotent cells that are developmentally more differentiated than hESC. As we have recently reported, hMSC do express high levels of HLA-A, but the expression of HLA-B is substantially down-regulated, though detectable in most cases [29]. Both the examined HLA-A and HLA-B alleles retained the ability to become induced upon IFNc stimulation as reported previously [29]. Furthermore, after hMSC cell lines were subjected to in vitro differentiation according to established validated procedures into some of their downstream lineages (adipocytes and osteoblasts), they showed a marginal reduction in the cell-surface expression levels of HLA-A (though only statistically significant for osteoblast differentiation) when compared to their levels in their parental hMSC lines. This is in accordance with a previous report based on HCA2 and HCA10 antibodies [42] that detect multiple loci of HLA class I molecules [43,44].
hHSC represent another class of multipotent cells that share the local bone marrow microenvironment with the hMSC. hHSC showed a strong HLA-A cell-surface expression comparable to that found on the surface of mature T and B lymphocytes, and comparatively higher than multipotent hMSC of bone marrow origin. This indicates that cells of hematopoietic lineages are programmed to express high levels of classical HLA class I proteins very early during hematopoiesis. Indeed, a recent study demonstrated that hemangioblasts, a precursor of hHSC and endothelial cells, show an increase in HLA-A2 expression compared with hESC, and this expression increased dramatically as cells differentiated into hHSC [40] in accordance with our results. Interestingly, we find that HLA-B alleles are also relatively strongly expressed on hHSC though still lower expressed than HLA-A. Thus, it is evident that hHSC express three times more of the HLA-A alleles compared to HLA-B alleles.
Greene et al. [45] recently studied the relative expression of HLA transcripts in blood leukocytes by pyrosequencing and found that, whereas the class I loci contributed differently, an almost equal contribution was found from alleles on the same locus, and this contribution varied very little among lymphocyte subsets. Though not incompatible with this, our protein expression data are somewhat in contrast, because we found that mature T lymphocytes (CD4+ and CD8+) exhibited different patterns of expression depending on the individual alleles as well as on their body compartment of origin. Again, this suggests that post-transcriptional mechanisms are likely to affect HLA class I expression in allele-specific ways. Moreover, the difference in expression between peripheral blood and BM-derived T lymphocytes suggests that HLA surface expression may be influenced by signals exerted on the cells by the compartment where they are located, even in the absence of overt inflammation or immunization. However, it is surprising that cell-surface expression of the HLA-B8 allele remained low in CD4+ and CD8+ lymphocytes regardless of their origin, while its expression was moderately high in B lymphocytes. This suggests that the HLA-B8 allele might be regulated differently from other alleles.
Locus- or even allele-specific regulation of classical HLA class I expression may prove important for understanding T cell and NK cell responsiveness in several tissues. The density of expressed peptide-loaded HLA molecules encoded by an individual allele may impact the outcome of an immune response just as much as their affinity for a given T-cell receptor [46]. This opens up an alternative explanation for the phenomenon that certain HLA alleles are associated with clearing of certain viral infections, e.g. HCV [47], and with slow progression of other infections, e.g. HIV [46,48].
The finding that HLA-A is preponderant in most cells studied could be related to the clinical observation in allogeneic bone marrow transplantation that mismatches in HLA-A are tolerated more poorly than mismatches in HLA-B alleles [49]. However, our finding that HLA-A was down-regulated during differentiation of hMSC toward osteoblasts may be promising for future stem cell therapies using in vitro differentiated tissues.
Overall, our findings show that the expression of cell-surface HLA-A and -B alleles is regulated individually, through different mechanisms, in normal human cells, according to the cell type, the differentiation state or the location in the body. Future studies should address the specific mechanisms governing allele-specific HLA expression.
Supporting Information
Table S1 Description of the antibodies used in the study. (DOCX)
"Biology",
"Medicine"
] |
Scattering of compact oscillons
We study various aspects of the scattering of generalized compact oscillons in the signum-Gordon model in (1+1) dimensions. Using the covariance of the model we construct traveling oscillons and study their interactions and the dependence of these interactions on the oscillons' initial velocities and their relative phases. The scattering processes transform the two incoming oscillons into two outgoing ones and lead to the generation of extra oscillons which appear in the form of jet-like cascades. Such cascades vanish for some values of the free parameters and the scattering processes, even though our model is non-integrable, resemble typical scattering processes normally observed for integrable or quasi-integrable models. Occasionally, in the intermediate stage of the process, we have seen the emission of shock waves, and we have noticed that, in general, the outgoing oscillons have been more affected by this emission than the initial ones, i.e. they have borders in the form of curved worldlines. The results of our studies of the scattering of oscillons suggest that the radiation of the signum-Gordon model has a fractal-like nature.
Some oscillons do not radiate at all and so they can live forever. Such infinitely long-living objects are called "breathers", which distinguishes them from standard oscillons. In this paper we describe our study of another interesting group of solutions which share many properties of oscillons and breathers. A solution of such a type was first discovered in the signum-Gordon model [24]. The signum-Gordon oscillon is an exact solution which, if not perturbed, behaves like a breather, i.e. it can live forever without emitting any radiation. This is a very interesting and extremely rare behaviour for time-dependent solutions in non-integrable field theories. Of course, due to the non-integrability of the model a perturbed signum-Gordon oscillon would emit some radiation. Such radiation often takes the form of emissions of smaller oscillon-like packages. So, this type of oscillon can be thought of as being a stable (or perhaps metastable) time-dependent non-topological solution of a non-integrable model. Moreover, some very special perturbations of such oscillons lead to more general, exact and infinitely long-lived oscillons (generalized oscillons). Such oscillons were constructed in [25,26].
The signum-Gordon model [27] is perhaps the simplest example of a wider class of scalar field-theoretic models with non-analytic potentials. A very important and characteristic property of such models is their possession of compactons [28][29][30][31][32][33] and scaling symmetry [34]. This symmetry makes these models relevant in the description of dynamics of fields in other models with approximate scaling symmetry in the limit of small amplitudes [35][36][37][38]. In other words, the signum-Gordon model can be thought of as emerging, in this limit, from models containing non-analytic potentials. This shows that studies of solutions of this model are useful and can have relevance in the description of some aspects of solutions of other models with non-analytic potentials. Of course, due to an often encountered rich structure of minima of such more general models, the field configurations with larger amplitudes could also have some nontrivial topology (kinks, skyrmions etc.) [5,28,35,39] and so their complete dynamics would be essentially different from the dynamics of the signum-Gordon compactons.
In what follows we present some basic notions about the signum-Gordon model. The model is defined by the Lagrangian density
$$\mathcal{L} = \frac{1}{2}\,\partial_\mu\phi\,\partial^\mu\phi - |\phi| \qquad (1.1)$$
and its dynamics is described by solutions of the Euler-Lagrange equation
$$\partial_t^2\phi - \partial_x^2\phi + \mathrm{sgn}(\phi) = 0. \qquad (1.2)$$
The Euler-Lagrange equation contains the term sgn(φ) = ∂|φ|/∂φ = ±1, and so it does not include the vacuum solution φ = 0. In order to include explicitly the vacuum solution in the set of solutions of (1.2) we require that sgn(0) := 0. The model (1.1) naturally appeared in the study of the behaviour of scalar fields in the vicinity of minima of V-shaped potentials, i.e. potentials whose left and right side derivatives at minima are different from each other. Such models are perfectly well-defined from a physical point of view. Moreover, in some cases they can be seen as field-theoretic limits of certain mechanical models, which certainly admit experimental realizations. In fact, it was such mechanical models that led to scalar field models with non-analytic potentials [28].
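Equation (1.2) is straightforward to explore numerically. The sketch below is ours (the paper's scheme is unspecified here) and integrates the field with a simple symplectic-Euler finite-difference step; the grid, time step and initial profile are arbitrary illustrative choices.

```python
import numpy as np

# Grid, time step and initial bump are arbitrary illustrative choices;
# np.roll imposes periodic boundary conditions on the box.
N, L_box, dt = 2000, 20.0, 0.002
dx = L_box / N
x = np.linspace(-L_box / 2, L_box / 2, N, endpoint=False)

phi = 0.1 * np.exp(-x**2)      # hypothetical smooth initial profile
pi = np.zeros_like(phi)        # initial time derivative

def step(phi, pi):
    """One symplectic-Euler step of phi_tt - phi_xx + sgn(phi) = 0.
    np.sign(0) = 0, which matches the convention sgn(0) := 0."""
    lap = (np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)) / dx**2
    pi = pi + dt * (lap - np.sign(phi))
    return phi + dt * pi, pi

for _ in range(5000):          # evolve to t = 10
    phi, pi = step(phi, pi)
```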
The physical origin of models with non-analytic potentials is wider than the continuous limit of mechanical models. Quite recently it was reported in [35] that models with V-shaped potentials may be obtained from other physical models when a parametrization associated with the symmetry reduction leads to new field variables that are restricted, i.e. they cannot take arbitrarily large (or small) values. In the case of models with a mechanical realisation such a restriction is a priori imposed on the system. The restrictions on the values of fields lead to some inconvenience in the description of the dynamics of the system, which, in such a case, is governed by both the Euler-Lagrange equations and an extra condition on the time derivative of the field. For instance, the mechanical model from which the signum-Gordon model originates has a continuous limit described by a field variable φ̃ that satisfies φ̃ ≥ 0. Thus the model possesses a potential with an infinite barrier at φ̃ = 0, i.e. Ṽ(φ̃) = φ̃ for φ̃ ≥ 0 and Ṽ(φ̃) = ∞ for φ̃ < 0. The field must also satisfy the reflection condition ∂_t φ̃ → −∂_t φ̃ at φ̃ = 0. One can avoid such an inconvenient reflection condition by introducing an auxiliary model with a new field φ ∈ (−∞, ∞) and the potential V(φ) = |φ|.¹ This new model is the so-called unfolded model. The dynamics of this auxiliary field can be mapped onto the dynamics of the physical field through the folding transformation, see [27]. Thus the signum-Gordon model and other models of this type can describe the behaviour of physical systems with restrictions on the values of scalar fields.
The signum-Gordon model is certainly non-integrable. This conclusion can be drawn from the existence of radiation in numerical simulations of generic initial field configurations. In fact, very little is known about the nature of this radiation. In this paper we describe the results of our study of some aspects of the radiation in the signum-Gordon model. We pay particular attention to the exact time-dependent solutions of the model known as compact oscillons² [24,25], which are our principal candidates for constituents of the radiation. The signum-Gordon oscillons rely on three principal properties of models with V-shaped potentials: the existence of compact solutions, their scale invariance (exact or approximate) and the lack of linearization of small amplitude oscillations. The existence of compact solutions, like compact oscillons in particular, follows from the fact that models with standard kinetic and gradient terms in the Lagrangian approach the vacuum in a quadratic manner if the potential has a V-shaped form close to its minimum [28]. The scale invariance [34] is a straightforward consequence of the form of the field equations. In the case of the signum-Gordon model the scale invariance is exact because sgn(φ) is a scale-invariant term. A very important consequence of this fact is the existence of self-similar solutions and of oscillons of all scales of energy and length. Thus perturbed oscillons may lose energy by the emission of smaller (perturbed) oscillons. We will demonstrate in this paper that this is exactly what happens, and so that oscillons are the main ingredients of the radiation of the signum-Gordon model. Finally, the absence of linear small amplitude oscillations follows from the fact that the sgn(φ) term cannot be linearized at φ = 0.

¹ This potential generates the term sgn(φ) = dV/dφ in the field equations, which justifies the name of the model.
² In the literature the infinitely long-lived oscillons are more often called breathers than oscillons. Here we keep the name oscillon following the original paper [24].
This implies that, independently of their size, the signum-Gordon oscillons are fundamentally non-linear field configurations. The existence of many exact solutions of the signum-Gordon model in (1+1) dimensions follows from the fact that equation (1.2) reduces to a non-homogeneous wave equation on segments of the x axis where the sign of the scalar field is fixed, and so, on each segment, it has the general solution of the form
$$\phi(t,x) = F(x+t) + G(x-t) + \varepsilon\,\frac{x^2 - t^2}{4}, \qquad \varepsilon = \mathrm{sgn}(\phi) = \pm 1, \qquad (1.3)$$
where F and G are arbitrary functions. The main point here is that this reduction is local, i.e. the size and the localization of the segments of constant sign vary with time. This is a direct manifestation of the non-linear character of the model. Solutions like (1.3) are called partial solutions. The exact global solution of the model is given by an explicitly known set of properly patched partial solutions which must hold for arbitrary times. Determining such a closed set of partial solutions usually generates some small technical difficulties. They correspond to the unpleasant side of finding solutions of models with V-shaped potentials.
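The reconstructed form of (1.3) can be verified symbolically; the following SymPy check confirms that F(x+t) + G(x−t) + ε(x² − t²)/4 solves the wave equation with the constant source ε on a segment where sgn(φ) = ε is fixed.

```python
import sympy as sp

t, x = sp.symbols('t x')
F, G = sp.Function('F'), sp.Function('G')
eps = 1  # fixed sign of the field on the segment: sgn(phi) = +1 here

# Partial solution on a segment of fixed sign, as reconstructed in (1.3).
phi = F(x + t) + G(x - t) + eps * (x**2 - t**2) / 4

# Residual of phi_tt - phi_xx + eps; simplifies to 0 for arbitrary F, G.
print(sp.simplify(sp.diff(phi, t, 2) - sp.diff(phi, x, 2) + eps))
```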
The main aim of this paper is to describe the results of our study of interactions between two oscillons. Our motivation for such a study follows from the fact that previous numerical simulations of models with V-shaped potentials [37] have indeed observed collisions of oscillon-like structures in the emerging radiation. The study of systems containing colliding oscillons may throw new light on the nature of the radiation of the signum-Gordon model. In particular, we are interested in scattering processes involving only two oscillons. Unfortunately, even such a 'simple' scattering process is too complicated for a purely analytical investigation. For this reason we have used numerical analysis as our principal tool and have restricted our attention to initial field configurations which possess certain symmetries. We hope that our results will find applications not only for models with a single-minimum potential but also for models with multiple V-shaped minima that can support the existence of compact kinks etc.
The paper is organized as follows. In section 2 we present some facts about generalized oscillons with vanishing total linear momentum. Then, making use of the Lorentz covariance of the model, we construct traveling oscillons. In section 3 we study the scattering of two oscillons. Due to the compactness of the oscillons we construct the initial configurations by considering simple superpositions of non-overlapping oscillons. Further on in this section we also discuss shock waves and oscillons with non-uniformly moving borders. The last section presents a short summary of our results and some comments.
Generalized oscillons
The generalized oscillons are exact compact solutions of the signum-Gordon model which are distinguished by the fact that their borders move periodically from left to right and back again. The first particular solutions of this class, characterized by a constant velocity of the motion, were reported in [25] and, due to this nature of the motion, were called swaying oscillons. A further generalization of the swaying oscillons to arbitrary periodic motions of the borders was discussed in [26]. In order to simplify our initial discussions we first consider the field configurations describing oscillons swaying with a constant velocity v. The more general oscillons are discussed later in section 3.5, where we also comment on the results of the scattering of compact oscillons.
Before we go further let us establish the terminology for the different types of oscillons considered in this paper. The oscillons which are a priori exact solutions (used as initial field configurations) will be called exact oscillons if their borders do not move periodically or, if they do move, generalized exact oscillons. The oscillons produced in the process of the scattering are numerical solutions of the signum-Gordon equation; hence, they are not a priori exact. They will be called quasi-oscillons in the case of a strong similarity to the exact oscillons, and perturbed oscillons if such a similarity is only approximate, i.e. when they are only approximately periodic, emit radiation, etc.
Oscillons at rest
In our work here we are particularly interested in the scattering of generalized exact oscillons. Such oscillons move uniformly (modulo a periodic motion given by v) in a certain reference frame S. We shall refer to this frame as the laboratory reference frame. Moreover, we preserve the symbols t and x for coordinates exclusively in this reference frame. On the other hand, the reference frame of the oscillon is denoted by S′ and called the rest frame of the oscillon. The precise meaning of the expression "rest frame" in the case of generalized exact oscillons is the following: it is the inertial reference frame in which the total linear momentum of the oscillon vanishes. The spacetime coordinates in S′ are denoted by t′ and x′.
The basic oscillon has period T = 1. Smaller and bigger oscillons, which differ by their period, can be obtained from the basic one by the scaling transformation which exploits the scaling symmetry of the signum-Gordon equation, see section 3.1.1. The Minkowski diagram presented in figure 1 shows the regions of validity of the partial solutions φ_k(t′, x′), k ∈ {C, L₁, L₂, L₃, R₁, R₂, R₃}, that together describe the motion of the oscillon. These partial solutions are given by quadratic polynomials in the variables t′ and x′ in the rest frame of the oscillon. The physically relevant parts of the polynomial solutions are restricted to some intervals of t′ and x′. In order to get a periodic solution for any t′ one has to replace t′ by a periodic function of t′. Below we discuss this process in detail.
We start our discussion with a set of partial solutions which are valid only in the interval t′ ∈ [0, ½]. These solutions have been given in [25], where they are denoted by the letter ϕ. Here we present them in a new notation. In fact, there are seven partial solutions in the interval t′ ∈ [0, ½]. Among them, four solutions are essentially different, namely ϕ_C, ϕ_{L₁}, ϕ_{L₂} and ϕ_{L₃}, given by equations (2.1)-(2.4).

Figure 1. The world sheet of the generalized exact oscillon seen in its own rest frame S′. Only in the interval t′ ∈ [0, ½] does φ_k = ϕ_k hold; other partial solutions are given by (2.9). The axes x of the laboratory reference frame seen in frame S′ have an inclination with respect to the axis x′. Here we present three different cases of inclination corresponding to the velocity u = −V of the laboratory frame S that moves to the left on the diagram S′. The angle of this inclination between the axes x and x′ is given by arctan(V).
The other three partial solutions ϕ_{R_j}(t′, x′; v), j = 1, 2, 3, can be obtained from those shown above by performing a spatial reflection about the centre of the oscillon.
Note that the solution ϕ_C(t′, x′; v) is invariant under this transformation. Note also that all solutions ϕ_k(t′, x′; v) are negative-valued in their domains. Each solution ϕ_k(t′, x′; v) is valid only in a specific region of the Minkowski diagram. For this reason we define region step functions Π_k(t′, x′; v) which are equal to unity only in the region in which the given partial solution holds and vanish outside this region; they are built from the unit step function θ(z), with θ(z) = 0 for z < 0 and θ(z) = 1 for z ≥ 0. The parabolic functions like (2.1)-(2.4) are not periodic, whereas the oscillon is a periodic solution. In order to give formulas which are valid for any t′, we define two periodic functions
$$\tau(z) := \frac{1}{\pi}\arcsin(|\sin(\pi z)|), \qquad (2.7)$$
$$\sigma(z) := \mathrm{sgn}(\sin(2\pi z)), \qquad (2.8)$$
where τ(z) maps any t′ onto the interval [0, ½] and σ(z) = ±1 agrees with the classical derivative of τ(z). The functions (2.7) and (2.8) allow us to construct the periodic solutions. They are presented in figure 2. The function σ(z) is needed to describe the changes of the sign of the partial solutions at t′ = n/2. Any partial solution (for arbitrary t′) can be written in terms of the basic solutions ϕ_k(t′, x′; v), where k ∈ {C, L₁, ..., R₃}, that are valid only on the interval t′ ∈ [0, ½]. The partial solutions presented in the Minkowski diagram in figure 1 have the form
$$\phi_k(t', x'; v) = \sigma(t')\,\varphi_k(\tau(t'), x'; v)\,\Pi_k(\tau(t'), x'; v). \qquad (2.9)$$
The functions (2.7) and (2.8) allow us to introduce a more compact notation than the one in [37]. Note that here the functions φ⁻_{L₁/R₁} and φ⁺_{L₃/R₃} have been absorbed into the definitions of the functions φ_{L₁/R₁}. Similarly, φ⁻_{L₃/R₃} and φ⁺_{L₁/R₁} have been absorbed into the definitions of the functions φ_{L₃/R₃}. The total solution is given by a continuous function which is a sum of non-overlapping partial solutions (2.9). The derivative of the oscillon solution with respect to time follows from (2.9) by the chain rule, using dτ(z)/dz = σ(z) and σ²(z) = 1. Note that all the derivatives of the region step functions Π_k can be ignored because the sum of partial solutions is a continuous function, so there is no reason to expect delta functions at the matching points.
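The auxiliary functions (2.7) and (2.8) are simple to implement; the sketch below evaluates τ and σ and confirms numerically that σ coincides with the classical derivative of τ away from the matching points t′ = n/2.

```python
import numpy as np

def tau(z):
    """Periodic saw-shaped map of t' onto [0, 1/2], equation (2.7)."""
    return np.arcsin(np.abs(np.sin(np.pi * z))) / np.pi

def sigma(z):
    """Sign factor of equation (2.8); the classical derivative of tau."""
    return np.sign(np.sin(2.0 * np.pi * z))

z = np.linspace(0.0, 2.0, 9)
print(tau(z))      # 0, 1/4, 1/2, 1/4, 0, ... with period 1
print(sigma(z))    # +1 or -1 between the kinks (0 exactly at t' = n/2)

# Numerical check that d tau/dz = sigma away from the matching points.
h, zc = 1e-6, 0.3
print((tau(zc + h) - tau(zc - h)) / (2.0 * h), sigma(zc))
```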
In figure 3 we present three snapshots of the generalized exact oscillon solution φ(t′, x′; v) and its time derivative ∂_{t′}φ(t′, x′; v) at t′ = 0.1, t′ = 0.4 and t′ = 0.75. The solutions presented in panels (a) and (d) consist of sequences of the partial solutions listed above (from left to right).
Travelling oscillons
The signum-Gordon equation is invariant under Lorentz transformations. Thus, traveling compact oscillons can be obtained from non-traveling ones by an appropriate Lorentz transformation. In particular, an oscillon with a non-vanishing linear momentum is obtained from the generalized exact oscillon by a Lorentz boost. In what follows we assume that the laboratory reference frame S moves with velocity u = ∓V with respect to the rest frame of the oscillon S′. Thus, the oscillon has velocity u = ±V in the laboratory frame S. The field configuration that describes the traveling oscillon is a function of the coordinates t and x and is obtained by performing the Lorentz transformation on the partial solutions (2.9); the boosted partial solutions are those labelled (2.13). The derivatives of the fields with respect to the time t follow by the chain rule, where dτ(z)/dz = σ(z) and σ²(z) = 1 on the open supports of the partial solutions. Hence, the travelling oscillon in S is a solution of (2.11) given by a sum of non-overlapping partial solutions (2.13). Note that the axis x′, i.e. the line t′ = 0 in S′, is not a simultaneity line in S. The scalar field φ(t′, x′; v) vanishes at the horizontal lines t′ = n/2, n = 0, ±1, ±2, . . ., which are shown in figure 1. This shows that the oscillon seen in the laboratory reference frame S has some isolated traveling zeros. Such isolated zeros are given by the points of intersection of lines parallel to the axis x (given by t = const) with the lines t′ = n/2. The number of isolated zeros and the composition of the oscillon (the types of partial solutions seen in S at a given instant of time t) depend on the value of the velocity u = ±V which the oscillon has in S. The axes x and x′ form an angle arctan(V), so for V < 1/(2+v) there is only one point of intersection of straight lines parallel to x with the line t′ = n/2. For 1/(2+v) < V < 1, a second isolated zero arises during some part of the period.
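In practice, evaluating a travelling oscillon at a laboratory event (t, x) amounts to pulling the event back to the rest frame with the inverse boost and sampling the rest-frame solution φ(t′, x′; v) there. A minimal coordinate-transformation sketch (with c = 1, an arbitrary sample point, and function names of our own choosing) is given below.

```python
import numpy as np

def lab_to_rest(t, x, V):
    """Pull a laboratory event (t, x) back to the rest frame S' of an
    oscillon moving with velocity +V in the lab (units with c = 1)."""
    gamma = 1.0 / np.sqrt(1.0 - V**2)
    return gamma * (t - V * x), gamma * (x - V * t)

def rest_to_lab(tp, xp, V):
    """Inverse boost: rest-frame event (t', x') -> laboratory (t, x)."""
    gamma = 1.0 / np.sqrt(1.0 - V**2)
    return gamma * (tp + V * xp), gamma * (xp + V * tp)

V = 0.5
tp, xp = lab_to_rest(0.0, 0.25, V)
print(tp, xp)                  # t' != 0: the lab line t = 0 is not a simultaneity line in S'
print(rest_to_lab(tp, xp, V))  # round-trips to (0.0, 0.25)
```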
According to the diagram presented in figure 1, the initial (at t = 0) configuration of the field in S, obtained by the Lorentz boost of the oscillon given in S′, consists of different sets of partial solutions corresponding to different values of the velocity V. Let us consider the oscillon that moves with the velocity u = +V in the laboratory reference frame. The oscillon configuration ψ at t = 0 consists of a sequence of boosted partial solutions (from left to right); the cases (2.17) and (2.18) differ by the sign of the partial solutions ψ_{R₃} and ψ_{R₂}. The compactness of exact oscillons allows the construction of multi-oscillon configurations which are exact solutions of the signum-Gordon equation. The only condition to satisfy is the non-overlapping of the supports of the individual oscillons. A generic initial configuration {Ψ(x), Ψ_t(x)} containing two travelling oscillons is given by the superposition of non-overlapping travelling oscillons obtained from the generalized exact oscillons φ(t, x; v₁) and φ(t, x; v₂) by transformations which are symmetries of the signum-Gordon equation:
• Poincaré transformations in (1+1) dimensions: boosts, spatial and temporal translations, spatial and temporal reflections,
• sign flipping of the field φ → −φ.
Two individual oscillons, i = 1, 2, are obtained by such transformations, where u_i are boost velocities, t_{0i} are temporal translations, x_{0i} are spatial translations, λ_i stand for scale parameters, and ε_i = ±1 and ε̃_i = ±1 parametrize the reflections and sign flips. The sign of v can be absorbed into a combination of spatial reflection and translation. The initial configuration {Ψ(x), Ψ_t(x)} is then given by the sum of the two transformed oscillons, where the non-overlapping of their supports restricts the values of the admissible spatial translations. It turns out that some of the left-over parameters can be omitted without any loss of generality. The only relevant parameters of initial configurations involving two oscillons are those which are not equivalent, i.e. those which cannot be related by a symmetry transformation. Thus the set of relevant free parameters contains the relative scale, the relative velocity etc. In some cases it will be more convenient to fix one of them and study only the dependence on the other.
In figure 5 we present an example of an initial configuration containing two oscillons which move in opposite directions with velocities u_1 = 0.5 and u_2 = −0.7. This configuration was obtained by taking the generalized exact oscillons with v_1 = 0.3 and v_2 = 0.34 and λ_1 = λ_2 = 1. We have set t_{0i} = 0 and ε_i = 1 as well as ε̃_i = 1. The oscillons were shifted in space taking x_{01} = 0 and x_{02} = −1.3.
Symmetric configurations
A numerical study of the scattering process shows that the process depends on many parameters, such as v_1, v_2, the relative velocity of the oscillons, their initial distance, time shift and reflections. In order to simplify the set of parameters we have decided to restrict our considerations to symmetric and antisymmetric initial configurations, which certainly reduces the number of free parameters. It has turned out that even with this restriction we have been left with a sufficiently rich set of physical systems. We think that more general configurations are even more interesting; however, their systematic study would have required much more work, so we have decided to put our main effort into symmetric configurations. In order to get a symmetric Ψ^(s) or an antisymmetric Ψ^(a) configuration we can take a single exact oscillon parametrized by v and perform a sequence of symmetry transformations which leads to ψ(t + t₀, x + x₀; v, u)|_{t=0}, where the boost velocity is chosen to be u = +V with V ≥ 0. The second oscillon can be obtained from this result by the spatial reflection x → −x and, optionally, by sign flipping. Naturally, x₀ must be chosen in such a way that the support of ψ lies on the negative semiaxis x. In such a case the support of the oscillon and its mirror image do not overlap. The resultant initial symmetric and antisymmetric configurations are given by
$$\Psi^{(s,a)}(x) = \psi(t + t_0, x + x_0; v, u)\big|_{t=0} \pm \psi(t + t_0, -x + x_0; v, u)\big|_{t=0}. \qquad (3.3)$$
Note that an alternative symmetric (antisymmetric) initial configuration can also be obtained by taking u = −V and shifting the resulting oscillon to the right, i.e. choosing x₀ < 0. The second oscillon is obtained by the mirror reflection x → −x (and possibly the sign flip). An example of such an initial configuration is shown in figure 7 (shadowed regions). The configuration shown in figure 6 is marked in figure 7 by curves without shadowing.
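The construction of Ψ^(s) and Ψ^(a) in (3.3) is just the superposition of a shifted profile with its mirror image, with an optional sign flip. The sketch below demonstrates the idea with a hypothetical compact stand-in profile rather than the actual oscillon solution.

```python
import numpy as np

def two_oscillon_config(psi, x, parity=+1):
    """Superpose a profile supported on x < 0 with its mirror image:
    parity = +1 gives a symmetric, parity = -1 an antisymmetric configuration."""
    return psi(x) + parity * psi(-x)

# Hypothetical compact stand-in profile on [-1.5, -0.5] (not the exact oscillon).
psi = lambda x: np.where((x > -1.5) & (x < -0.5),
                         np.cos(np.pi * (x + 1.0))**2, 0.0)

x = np.linspace(-2.0, 2.0, 9)
print(two_oscillon_config(psi, x, +1))   # symmetric superposition
print(two_oscillon_config(psi, x, -1))   # antisymmetric superposition
```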
Phase of the oscillon
Unlike the case of compact kinks, the scattering process of two oscillons depends on the initial distance between them. This observation follows from the fact that the shape of the oscillons changes with time and the outcome of the scattering process depends strongly on the shapes of the oscillons at the moment when their supports begin to collide. Thus the phase of the oscillon is another relevant parameter which must be taken into account in the analysis of the scattering of oscillons. Two traveling oscillons that differ exclusively by the value of the spatial translation are said to have the same phase. The phase of the oscillon, in its own rest frame S′, is a number α ∈ [0, 1), where the lowest value α = 0 represents the oscillon configuration at t′ = 0 and the upper limiting value α = 1 = T_rest (for λ = 1) corresponds to the period of the oscillon. The period of the oscillon in the laboratory reference frame S, in which the oscillon has a certain velocity V, follows from time dilation, T_lab = γ T_rest with γ = 1/√(1 − V²). The phase of the oscillon in the laboratory reference frame S can again be chosen as a number α ∈ [0, 1), whose upper limit is given by T_lab/γ. Note that two oscillons with the same phase in two different inertial reference frames describe different field configurations. Below we describe in more detail the choice of the initial symmetric (antisymmetric) configurations containing two generalized exact oscillons.
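The time-dilated laboratory period is a one-line computation; the sketch below tabulates T_lab = γ T_rest for a few velocities (the function name and sample velocities are ours).

```python
import numpy as np

def lab_period(V, lam=1.0):
    """Laboratory-frame period via time dilation, T_lab = gamma * T_rest,
    with the rest-frame period set by the scale parameter lam."""
    return lam / np.sqrt(1.0 - V**2)

for V in (0.0, 0.5, 0.93):
    print(V, lab_period(V))
```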
The uniform motion of the oscillons from t = 0 to the moment of the collision results in a variation of their individual phases or, in the case of symmetric (antisymmetric) initial configurations, of their common phase. The variation of the phase depends on the initial distance between the supports of the two oscillons. Since the oscillons do not interact until their supports begin to overlap, one can eliminate this initial distance without any loss of generality. This can be done by choosing properly the value of the spatial translation. The condition that the oscillons begin to collide at t = 0 (their supports touch each other) makes the parameter of the spatial shift a function of the phase, i.e. x₀ = x₀(α).
In order to set the phase of oscillation at t = 0 one can make use of the translational symmetry t → t + t₀ of the signum-Gordon equation. Due to the periodicity of the solution the parameter t₀ can be chosen as t₀ = αγ. A sequence of generalized exact oscillons with different phases α is plotted in figure 8a. The configurations α = 0 and α = 1 differ exclusively by a spatial translation, which implies that they have equal phases. In figure 8b we plot the worldsheet of the generalized exact oscillon in the laboratory reference frame, in which it moves with the velocity u = +V. The endpoints of the oscillon are marked by x̃(α) (the left one) and x(α) (the right one). It is clear from this diagram that the length of the oscillon in the laboratory reference frame is a periodic function of time when v ≠ 0, and that it is equal to 1/γ (for λ = 1) when v = 0. In order to get an initial configuration for the scattering process we translate the oscillon to the left by a distance x₀(α). The second oscillon is obtained by the mirror reflection of the first one. Hence the initial configurations considered in this paper are those given by (3.3) with t₀ = αγ and x₀ = x₀(α). In figure 9 we plot the initial profiles of the field Ψ^(s)(x) with different phases.
Determination of the endpoints x(α) and x̃(α) of an oscillon
We start with the determination of the function x₀(α) which describes the position of the right endpoint of the oscillon. To obtain it, it is sufficient to restrict considerations to v ≥ 0, because any non-travelling oscillon satisfies the relation φ(t, x; −v) = φ(t, 1 − x; v). We shall therefore consider boosts in two directions, namely u = ±V, where 0 ≤ V < 1. In all formulae containing "±" the upper sign corresponds to u = +V and the lower one to u = −V. The simultaneity line t = αγ of the laboratory reference frame S is described in the reference frame of the oscillon S′ by the equation
$$t' = \alpha \mp V x'. \qquad (3.5)$$
In the coordinates of S′ the right border of the oscillon is a worldline given by
$$x' = 1 + v\,\tau(t'), \qquad (3.6)$$
where τ(t′) is the function defined in (2.7). Eliminating x′ from (3.5) and (3.6) we find the condition
$$y(t', \alpha) = \tau(t'), \qquad y(t', \alpha) := \frac{\pm(\alpha - t') - V}{vV}, \qquad (3.7)$$
where y(t′, α) is a straight line in t′ and τ(t′) is the saw-shaped function plotted in figure 10. The plot shows two cases which differ by the sign of the boost velocity u.
Note that due to the periodicity of the oscillon the parameter α is restricted to the interval 0 ≤ α < 1. This can be seen from figure 8b (case u = +V), where the simultaneity lines t = 0 and t = γ, parallel to the axis x, cross the left border of the oscillon at (t′, x′) = (0, 0) and (t′, x′) = (1, 0); therefore two oscillons seen in the laboratory frame S at the instants of time t₀ = 0 and t₀ = γ have the same shape. The same is true for the u = −V case.
Two straight lines y(t′, α) with α = 0 and α = 1, plotted in figure 10a, cross the axis t′ at t′ = −V and t′ = 1 − V. Similarly, in the case u = −V, plotted in figure 10b, they cross the axis t′ at t′ = V for α = 0 and t′ = 1 + V for α = 1. In order to cover the whole range of velocities V ∈ [0, 1) we shall consider equation (3.7) on two intervals, namely on the interval −1 < t′ < 1 for u = +V and on the interval 0 < t′ < 2 for u = −V. The saw-shaped function τ(t′) is given by pieces of straight lines. There are two qualitatively different cases, depending on the absolute value of the boost velocity V = |u|; they are separated by a critical value V_c of the velocity. The straight line y(t′, α) crosses the maxima of τ(t′), i.e. τ(t′_max) = ½, at t′_max = −½, ½ for u = +V and at t′_max = ½, 1 in the case u = −V. It crosses the minimum τ(t′_min) = 0 at t′_min = 0 for u = +V and at t′_min = 1 for u = −V. The corresponding values of the parameter α are denoted by α_l^(±) for the lower maximum, α_u^(±) for the upper maximum and α_0^(±) for the minimum, where the signs "±" correspond to u = ±V.
When the boost velocity is less than the critical value, V < V_c, the straight line y(t′, α = 0) crosses the saw-shaped chain τ(t′) at a certain point with t′ belonging to the interval B′D′, and y(t′, α = 1) crosses the chain τ(t′) at a point with t′ that belongs to D′F′. Note that the intervals B′D′ and D′F′ are covered only partially by α. On the other hand, for V > V_c (the case sketched in figure 10) the relevant intervals are A′C′, B′D′ and C′E′; in the interval A′C′ the parameter α ranges from 0 up to its value at the lower maximum, and in this case the intervals A′C′ and C′E′ are covered only partially by α. On each straight-line piece, τ(t′) = a t′ + b, the solution t′(α) of equation (3.7) is obtained in closed form, where the values of (a, b) are determined by the velocity V and the phase α. The position of the right endpoint of the oscillon, obtained from the spacetime interval, is expressed through x′, introduced in (3.6), which is a function of α given by x′(α) = 1 + v(a t′(α) + b). After some manipulations one gets x₀(α).
This result shows that x₀ is a linear function of α in the intervals in which (a, b) remain constant functions of α.
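The intersection condition (3.7) can also be solved numerically for t′(α); the sketch below uses a root finder with a hypothetical straight line y(t′, α) (the actual slope and offset follow from (3.5)-(3.7) and depend on V and v).

```python
import numpy as np
from scipy.optimize import brentq

def tau(z):
    return np.arcsin(np.abs(np.sin(np.pi * z))) / np.pi

# Hypothetical slope/offset for the straight line y(t', alpha); the actual
# coefficients follow from (3.5)-(3.7) and depend on V and v.
def y_line(tp, alpha, slope=-0.8):
    return alpha + slope * tp

def crossing(alpha, bracket=(-0.99, 0.49)):
    """Numerically locate t'(alpha) where the line meets the saw tau(t')."""
    return brentq(lambda tp: y_line(tp, alpha) - tau(tp), *bracket)

print(crossing(0.2))
```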
Next we determine the function x̃₀(α) which describes the position of the left endpoint of the oscillon. The left border of the oscillon is described by the worldline x′ = vτ(t′) in the rest frame of the oscillon. In order to get the function t′(α) one has to solve the corresponding intersection equation. Taking τ(t′) = a′t′ + b′ we find t′(α) in closed form on each piece, where the coefficients (a′, b′) correspond to 0 ≤ t′ ≤ ½ and ½ ≤ t′ < 1. Finally, taking steps similar to those for the right endpoint, we find x̃₀(α).
Remarks on the initial configurations
Having determined the expressions for x₀(α) and x̃₀(α) we can now construct arbitrary initial configurations containing generalized exact oscillons which begin to collide at t = 0. In the case of oscillons with |u| < v additional caution is necessary. In figure 11 we present the worldsheets of two generalized exact oscillons which form a symmetric configuration. In such a case the initial field configuration {Ψ(x), Ψ_t(x)} for the scattering process cannot be obtained by approaching two travelling oscillons: the requirement that the worldsheets of colliding oscillons have no intersections for t < 0 excludes such field configurations.
Although we have paid attention mainly to the symmetric (antisymmetric) initial configurations, it is quite clear that nonsymmetric configurations with vanishing total momentum can be obtained by colliding oscillons with different phases. An example of two oscillons with phases α_L and α_R is shown in figure 12a. This field configuration is obtained by shifting one of the oscillons by the distance x₀(α_L) (placing it to the left of x = 0) and the second one by x₀(α_R) combined with the reflection x → −x (placing it to the right of x = 0). Another possibility is shown in figure 12b. The first oscillon, with u = −V, is shifted by x̃₀(α_R) (which places it to the right of x = 0) and the second oscillon, with u = +V, is shifted by x₀(α_L) (which places it to the left of x = 0). It is quite clear from the diagrams in figure 12 that there exists a set of phases for which the worldsheets of the oscillons overlap for t < 0.
Antisymmetric configurations
In this section we present some numerical results for the scattering of two oscillons which form an antisymmetric initial configuration of the signum-Gordon field. We have chosen antisymmetric initial data to be discussed first because the result of the scattering of oscillons in such a case is not as complex as that for symmetric configurations. The fact that the initial configuration is antisymmetric implies that the problem in the regions x < 0 and x > 0 effectively splits into two independent problems, each containing an initial oscillon and the boundary condition (3.13) at x = 0.
In our numerical study we have evolved an antisymmetric configuration without imposing the condition χ = 0 at x = 0. Looking at the results presented in figure 13 we clearly see that the condition (3.13) is nevertheless satisfied. This manifests itself in the presence of vertical white segments located at x = 0 in the diagrams. Another important observation is the absence of radiation in the central region of the diagrams, independently of the value of the initial speed of the oscillons. The sources of the radiation generated in this process are the irregular borders of the two outgoing oscillons, see figure 13a.
One can note that irregularities of the border are more likely to appear for small velocities V of the colliding oscillons than for larger ones. Moreover, in spite of being irregular, the outgoing oscillons radiate significantly less than would be expected for strongly perturbed oscillons. In fact, the outgoing oscillons with irregular borders belong to a more general class of oscillons. We discuss this subject in more detail in section 3.5.
Another interesting question is the dependence of the scattering process on the phase α that characterizes the initial configuration. In figure 14 we present the results of the scattering at V = 0.8 for four values of the phase. The figures show that the evolution of the initial configurations does not depend much on the value of the parameter α.
Another parameter that the incoming oscillons depend on is the speed v of the oscillon border in its rest frame. Our numerical studies have shown that the irregularities of the borders of the two outgoing oscillons increase with the value of the parameter v, which determines the swaying motion of the incoming oscillons. In figure 15 we present the results of the scattering of an antisymmetric configuration of oscillons with speed V = 0.8, phase α = 0 and border speeds v = 0.47 and v = 0.82. The figures show that as the borders of the outgoing oscillons become more and more irregular, the number and size of the oscillons emitted from these borders increase.
It is quite notable that the radiation generated in the process of the scattering of antisymmetric configurations is emitted only from the border irregularities of the outgoing oscillons. A typical situation with v = 0 is shown in figure 16. Two incoming exact oscillons form the initial configuration at t = 0. This configuration is adopted as the input for our numerical simulations.
We have performed many numerical simulations for distinct velocities and phases of the scattered oscillons. In figure 17 we plot the scattering process of two exact oscillons with v = 0 and α = 0 for six values of the boost velocity V. All initial configurations contain two exact oscillons whose supports touch each other at t = 0. The numerics shows that the result of the collision strongly depends on the velocity V of the colliding oscillons. All six diagrams contain two main oscillon-like objects (they can perhaps be identified with perturbed oscillons) that emerge shortly after the collision. They move with a velocity which is almost equal to the velocity of the colliding oscillons. The emerging oscillon-like objects obtained in the process of the scattering are significantly less regular when the initial velocities of the oscillons are small. For higher velocities the main outgoing oscillons are quite regular. Such field configurations are very close to the exact (generalized) oscillons. In the rest of the paper we call them quasi-oscillons. A fundamental difference between the diagrams is visible in their central region, where radiation appears in the form of jets. Looking more carefully at the radiation we see that it contains certain structures that strongly resemble oscillons. They look a bit like perturbed oscillons. Note also that the emerging oscillons obtained for small velocities V emit oscillon-like objects directly from their irregular borders. The presence of radiation in the scattering process reflects the non-integrable character of the signum-Gordon model.
In figure 18 we present the numerical results of the scattering of two oscillons with V = 0.93, v = 0 for four initial phases α = 0, α = 0.25, α = 0.84 and α = 0.93. The figure demonstrates that the initial phase of the colliding oscillons is indeed a relevant scattering parameter: the form of the jets differs visibly between the subfigures of figure 18, showing that the result of the scattering process is very sensitive to the value of the phase α.
Looking at figures 17 and 18 we see that in the first stage of the scattering the interacting oscillons exist on a support that shrinks from its initial size 2L to a certain minimal size 2L_min, where L = √(1 − V²) is the size of the oscillon in the laboratory reference frame. This process takes some time t_s. For t > t_s we observe the emergence of two main oscillons and the appearance of waves of energy identified with radiation. In order to evaluate the time t_s we assume that the left (right) border of the oscillon, which initially moves with velocity u = V (u = −V), moves freely with the velocity that the oscillon has in the laboratory reference frame until it hits the future light cone of the event (0, 0); see figure 16. This assumption allows us to determine the event P_s with coordinates (t_s, x_s) in the laboratory frame:

t_s = \frac{\sqrt{1 - V^2}}{1 + V} = \sqrt{\frac{1 - V}{1 + V}}\,, \qquad x_s = -t_s \,. \quad (3.14)

The formula (3.14) is valid exclusively for v = 0. Due to the symmetry of the initial configuration, the minimal support size of the scattering oscillons can be estimated by the expression Δx_min = 2√((1 − V)/(1 + V)). In figure 19 we show numerical data (dots) and analytical curves (solid lines) representing the characteristic time of the scattering t_s and the minimum size of the oscillon configuration Δx_min = 2t_s.
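As a quick worked check of (3.14) (our illustration, not from the original text), take the boost V = 0.93 used in figure 18:

```latex
% numerical check of (3.14) for V = 0.93
t_s = \sqrt{\frac{1-V}{1+V}} = \sqrt{\frac{0.07}{1.93}} \approx 0.190,
\qquad
\Delta x_{\min} = 2 t_s \approx 0.381 .
```

Both values are small compared to the initial support size 2L ≈ 0.73, consistent with the strong shrinking of the support visible in the diagrams.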
High velocities - formation of shock waves
Next we discuss in more detail some of our numerical results. First we note that for small velocities the numerical solution is very irregular. Looking at figures 17a-17c we clearly see the formation of a strongly perturbed oscillon centred at x = 0. This perturbed oscillon is certainly unstable and it radiates out smaller oscillon-like objects. The situation changes for scatterings at high velocities, where the numerical solution has a more regular pattern.
In particular, when the velocity of the incident exact oscillons is close to unity we observe another interesting solution. An example of such a solution is presented in figure 17e. The solution presented there was obtained for the scattering of two exact oscillons with initial phases α = 0, speeds V = 0.93 and no swaying motion, i.e. for v = 0. A very characteristic feature of this numerical solution is the presence of regular waves that are localized in a diamond-like region on the Minkowski diagram. Such waves emerge shortly after the collision and eventually decay into a sequence of oscillon-like structures. It turns out that the nature of these waves can be understood in terms of the so-called shock waves that are exact solutions of the signum-Gordon model reported in [34]. A shock wave is a particular solution of the signum-Gordon model with two discontinuities that propagate with the speed of light. Here, however, we do not observe such wavefronts due to the presence of two oscillons that move with subluminal velocities. The collapse of the wave suggests that our numerical solution is only an approximation to the exact shock wave, which would exist for arbitrary times t > 0.
We can check the hypothesis as to the nature of our wave solution by comparing its zeros to the zeros of the analytical shock wave. According to [34] a shock wave solution belongs to the class of signum-Gordon solutions described by φ(t, x) = θ(−z)W(z), where z = ¼(x² − t²). The function W(z) obeys the ordinary differential equation zW″(z) + W′(z) = sgn(W(z)) and it consists of infinitely many partial solutions W_k(z), k ∈ ℤ, matched at the points z = −a_k. Each partial solution satisfies the equation zW_k″(z) + W_k′(z) = (−1)^k and the conditions W_k(−a_k) = 0 = W_{k+1}(−a_k) and W_k′(−a_k) = W_{k+1}′(−a_k), which fix it in explicit form. Note that α₁ = 1. Furthermore, it follows from (3.16) that α_{k+1} is determined by a_k and b_k. Solving numerically the second equation of (3.16) one gets y_{k+1}, and so a_{k+1} and b_{k+1} can be determined too. Thus we note that the zeros of the field φ(t, x) are localized on the hyperbolas t² − x² = 4a_k. Outside of the diamond-like region they break and produce oscillon-like structures.
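The reduction of the field equation to the ordinary differential equation for W quoted above can be verified symbolically. The following is a minimal sketch (ours, not from the paper), assuming the field equation φ_tt − φ_xx + sgn(φ) = 0 with potential V = |φ|:

```python
# Check with SymPy that the ansatz phi(t, x) = W(z), z = (x**2 - t**2)/4,
# turns phi_tt - phi_xx + sgn(phi) = 0 into z W''(z) + W'(z) = sgn(W(z)).
import sympy as sp

t, x = sp.symbols('t x', real=True)
z = (x**2 - t**2) / 4
W1 = sp.Function('W1')  # placeholder for W'(z)
W2 = sp.Function('W2')  # placeholder for W''(z)

# chain rule: phi_tt = W''(z) z_t**2 + W'(z) z_tt, and similarly for x
z_t, z_x = sp.diff(z, t), sp.diff(z, x)
z_tt, z_xx = sp.diff(z, t, 2), sp.diff(z, x, 2)
box_phi = (W2(z)*z_t**2 + W1(z)*z_tt) - (W2(z)*z_x**2 + W1(z)*z_xx)

# phi_tt - phi_xx equals -(z W'' + W'), so the sum below must vanish
print(sp.simplify(box_phi + z*W2(z) + W1(z)))  # -> 0
```

Since φ_tt − φ_xx = −(zW″ + W′), the field equation indeed becomes zW″(z) + W′(z) = sgn(W(z)) wherever the solution is nonzero.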
Vanishing of the radiation
In our numerical studies we have also spotted a very interesting case. Namely, some symmetric initial configurations evolve in such a way that the resulting field contains only the main quasi-oscillons, i.e. the amount of radiation generated in the process is negligible. This phenomenon was observed in the high speed range (V approximately above 0.7), i.e. when the two main outgoing quasi-oscillons had a very regular form. We have also found that, for a given velocity V, there are two phases α for which the radiation is absent. Figure 22 shows two examples of such scattering processes, with initial configurations characterized by phases α and α + ½. In particular, the incoming zeros x^(0)(t), shown in figure 23a, and x^(1)(t), shown in figure 23b, lie on the same straight line. Moreover, it can also be checked that the two initial configurations characterized by phases α and α + ½ differ only by a sign, i.e. ψ → −ψ when α → α + ½, see the insertion plots in figure 23b.
The case presented in figure 22 is not unique. Our numerical results suggest that in the regime of high velocities V one can fine-tune the initial parameters V and α (and also v, see the next section) in such a way that the outgoing oscillons are accompanied by almost no radiation. The absence of radiation demonstrates that the outgoing quasi-oscillon has virtually the same energy as the incoming exact oscillon. In this case the numerical quasi-oscillon is very similar to the exact one. In the most common situations there is a certain difference in energy, and this difference is accounted for by the release of a very large number of small oscillon-like structures. In figure 24 we plot the energy radiated out by the system as a fraction of its initial energy. Two dark regions in the upper part of the diagram correspond to choices of the initial parameters (α, V) for which this fraction is very close to zero. Looking at this picture we see that the initial configurations that minimize the radiation correspond to parameters α and V that lie, approximately, on straight lines. For V < 0.7 the outgoing oscillons are quite irregular; see for instance figure 17a. Usually, some radiation can be seen in the vicinity of the outgoing irregular perturbed oscillons, emitted from their irregularities. In consequence, the determination of the energy of outgoing oscillons is less reliable for small velocities.
Oscillons with v ≠ 0
In this section we present some of our results on the scattering of generalized exact oscillons, i.e. oscillons which depend on v, the parameter controlling the swaying motion of the oscillon endpoints. In figure 25 we show the worldsheets of two such oscillons. The configuration at t = 0 was taken as the initial data for our numerical simulation. As in the case v = 0, we can find the expressions for the characteristic time of the scattering t_s and the minimum support size ∆x_min = 2|x_s| by solving the equations x = −L(α) + wt and x = −t, where L(α) is the size of the oscillon at t = 0, i.e. the distance between its endpoints given by (3.10) and (3.11). An interesting question now arises of how the replacement of v = 0 by v ≠ 0 modifies the results of the scattering processes. For instance, we can take a v ≠ 0 generalization of the symmetric initial configuration with V = 0.93 and α = 0.414. In figure 26a we present the result of the scattering for v = 0.02. We see that even such a small value of the parameter v is sufficient for the appearance of shock waves, which further transform into a cascade of oscillons. This demonstrates that the scattering process is quite sensitive to the value of the parameter v. In order to minimize this radiation one can adjust the parameter α appropriately. We have found that in this specific case the radiation vanishes for α = 0.420 and α = 0.918, see figures 26b and 26d. For higher values of v much more radiation was emitted during the scattering process.
In figure 27 and figure 28 we present the cases of v = 0.2 and v = 0.7. In both cases we have found two values of the phase α that minimize the radiation.
Looking carefully at the initial configurations we see that there is a significant difference between the cases v = 0 and v ≠ 0. In figure 29 we present the initial field configurations that minimize the radiation. Comparing the configuration Ψ^(s)(x; v, V, α + ∆α), where ∆α is taken from the numerical simulations, with Ψ^(s)(x; v, V, α), we see that the configuration with α + ∆α is not equal to the negative of the configuration with α, in contrast to the case v = 0. In figure 30 we have plotted the fraction of the initial energy carried by the radiation. The figure was produced for v = 0.45. The dark regions, corresponding to very low values of the radiated energy, are less regular when compared with figure 24 for the case of vanishing v.
Non-symmetric configurations
The simplest non-symmetric configurations are described by oscillons that differ only by their phases. In order to probe this class of scatterings, we use initial conditions given by (3.12) taken at time t = 0. In figures 31a-31d we plot four cases of the time evolution for these processes. Interestingly, there are also non-symmetric configurations for which no radiation is present. Figures 31b and 31d show two of these cases. In order to have a clearer picture of the amount of input energy converted into radiation, we present a density plot much like the ones in figures 24 and 30, except that in this case we have fixed the values V = 0.93 and v = 0 and we vary the parameters α_L and α_R (now independent of each other). This plot is shown in figure 32.
The case α_L = α_R corresponds to the symmetric configurations of the initial conditions and is marked in the plot as a black dashed line. This line passes through two minima in the emitted radiation, as expected from the results discussed in section 3.3. Along this line, the values of the plot coincide with those of the plot in figure 24 along the line given by V = 0.93. Also, as mentioned in section 3.3.3, for null swaying speed of the oscillon endpoints (v = 0), a shift of ½ in the phase of a given oscillon produces the same oscillon with a sign change in the value of its field and its time derivative. For this reason, the relation α_R = α_L ± ½ corresponds to the anti-symmetric initial field configurations. This relation is marked in the figure as the two dot-dashed white lines. In agreement with the results from section 3.2, these lines lie on top of the dark regions that correspond to no-radiation zones.
The plot can be seen as periodic both in α_L and in α_R (although the phase is originally defined as a value between 0 and 1, the field configuration is precisely the same at these two endpoint values), leaving it with a toroidal topology. So the two black strips marked by the white dot-dashed lines form a belt around the torus and become a single continuous region.
We also note a second no-radiation strip forming a 90 degree angle with these lines. One could expect the occasional vanishing of radiation in non-symmetric initial scattering configurations, e.g. figures 31b and 31d. Yet the regularity of this region (it is a straight line) is quite remarkable and requires further consideration.
We present a similar plot with α L fixed in which we varied α R and V . This plot is presented in figure 33 and, along the line given by V = 0.93, its values coincide with those of figure 32 for α L = 0.
We note that the strip along α_R = ½ corresponds to the anti-symmetric configurations and so it shows little or no radiation around it. Below the value V ≈ 0.7 there appears a large irregular region devoid of radiation. From the standpoint of the numerical stability of our methods for the measurement of radiation in regions of low V, as mentioned in section 3.6, this region could well be a numerical artefact. In a brief investigation of this hypothesis, though, we have found this void to be genuine: the region indeed represents a zone of no-radiation interactions for very low energy input (low boosts). So the value V ≈ 0.7 is critical in the sense that, below it, this entire region of the parameter space generates initial conditions for the scattering that produce no radiation, and the region itself does not seem to have a very well defined shape.

Figure 33. Fraction of the total energy of the initial configuration carried out by the radiation as a function of the input speed V and the phase of the right input oscillon α_R, while holding the left input oscillon's phase fixed at α_L = 0.
One possible explanation of this void is that there is, indeed, some generation of radiation in the scattering process, but this radiation happens to be absorbed by the outgoing oscillons, which in turn makes them more perturbed. This hypothesis can be checked by considering the energy of each individual outgoing oscillon in this region. In figures 34a and 34b we plot the energy balance of each outgoing oscillon compared to its initial energy. This balance is calculated as

∆E_{L,R} = (E/2 − E_{L,R}) / E,

where E_L is the energy of the left outgoing oscillon (which entered the interaction region from the right), E_R is the energy of the right outgoing oscillon and E the total input energy. Note that the plot in figure 33 shows the amount of energy lost to the radiation, which is given by E_rad/E = ∆E_L + ∆E_R, so that the total radiation is just the sum of the energy balances of the individual scattered oscillons. From figures 34a and 34b we see that each oscillon loses or gains a considerable amount of energy in the process. This amount, in some cases, reaches about 15% of each oscillon's incident energy, yet the total radiation generated is no larger than about 5.5% of the total incident energy. In figure 35 we present the case V = 0.5, α_L = 0, α_R = 0.642, v_L = 0 = v_R, which corresponds to a configuration that produces no radiation (as it is located within the void region) and, at the same time, has a relatively large energy transfer between the two interacting oscillons (the energy balance plots of these regions show a large energy change of both oscillons). Note that the left input oscillon (outgoing towards the right) is larger than the one outgoing to the left. Most such configurations seem to reproduce this behaviour, and the channel through which this energy is transferred seems, in all cases, to be related to the scale of the outgoing oscillons.
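The bookkeeping above can be sketched in a few lines of Python. This is our illustration, not the authors' code: the energy density e = ½ψ² + ½(∂_xφ)² + |φ| follows from the potential V = |φ|, while the support masks and the balance definition are assumptions reconstructed from the text.

```python
# Energy balance of the two outgoing oscillons on a 1D grid.
import numpy as np

def energy_density(phi, psi, dx):
    phi_x = np.gradient(phi, dx)
    return 0.5 * psi**2 + 0.5 * phi_x**2 + np.abs(phi)

def balances(phi, psi, dx, E_in, left_mask, right_mask):
    """left_mask/right_mask select the support of each outgoing oscillon
    (e.g. from tracking its borders); whatever lies outside both supports
    is counted as radiation."""
    e = energy_density(phi, psi, dx)
    E_L = e[left_mask].sum() * dx
    E_R = e[right_mask].sum() * dx
    dE_L = (E_in / 2 - E_L) / E_in     # reconstructed balance definition
    dE_R = (E_in / 2 - E_R) / E_in
    return dE_L, dE_R, dE_L + dE_R    # last value: fraction radiated
```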
Oscillons with accelerating borders
Looking at figure 17 and figure 13 we note that the outgoing oscillons are significantly different from the exact generalized oscillons with uniformly moving endpoints. The main difference is in the form of the worldlines describing the borders of the oscillons, which take the form of continuous curves rather than zig-zag piecewise straight lines. Moreover, the curves are surprisingly regular in shape, which suggests that the outgoing objects are not just simple perturbations of the generalized exact oscillons. Recently, in [26], a further generalization of the signum-Gordon oscillons has been proposed. This generalization leads to the emergence of oscillons with borders that are described by arbitrary time-like curves.
Here we present a construction of oscillons with curvilinear borders and produce some plots of such oscillons after applying Lorentz boosts to them.
General properties
For an oscillon with period T the complete solution can be constructed from the restriction of the solution ϕ(t, x) to the interval 0 ≤ t ≤ T/2. Similarly to the oscillons already known, we can get periodic and localized solutions by imposing the conditions (3.20) and (3.21), which involve a continuous function f(x) such that f(x) ≤ 0 for all 0 ≤ x ≤ T and f(0) = f(T) = 0. As a consequence, we assume that the solution ϕ(t, x) is negative for 0 ≤ t ≤ T/2. The oscillon solution is localized in the sense that it is nonzero only in the region between two time-like curves γ_L and γ_R, the borders of the oscillon, see figure 36. The right border is a displacement by T of the left curve, so that the size of the oscillon remains constant. For t ∈ [0, T/2] the borders move to the right by ∆ and, for t ∈ [T/2, T], the borders move in the opposite direction, returning to the original position.
The conditions (3.20) and (3.21) are satisfied if we take the ansatz (3.22) for the non-zero part of the solution, where F(x) satisfies (3.23) and (3.24). For less general oscillons, we can demand that the solutions approach zero smoothly at the left border γ_L, i.e. the partial solution (3.22) for −t < x < t gives condition (3.25), where points belonging to the curve γ_L have coordinates related by x = x(t). Expressing condition (3.25) in light-cone coordinates yields (3.27), where we have used the fact that on γ_L the coordinates (y⁺, y⁻) are related and dy⁺ dy⁻ = −½ dx dt. Since γ_L is a time-like curve, dy⁺ dy⁻ < ½, and so condition (3.27) is equivalent to ∂₊ϕ(y⁺, y⁻)|_{γ_L} = 0. This leads to F(y⁺) = −¼(y⁺ − y⁻) for the first expression in (3.22). Taking into account expression (3.24) we finally get

f(y⁺) = −½(y⁺ − y⁻), (3.28)

where y⁻ = g(y⁺) is the function of y⁺ representing the left border of the oscillon (worldline γ_L). Similarly, demanding that ∂_xϕ(t, x)|_{γ_R} = 0 at the right border of the oscillon γ_R, we get a condition which, written in terms of y⁻, takes the form ∂₋ϕ(y⁺, y⁻)|_{γ_R} = 0. From the last expression in (3.22) we then get (3.29), where y⁺ = h(y⁻) is the function of y⁻ representing the right border of the oscillon (worldline γ_R). Note that for points on γ_L we have 0 ≤ y⁺ ≤ T/2 + ∆, with an analogous range for points on γ_R. Using this formalism, all oscillon solutions limited by a pair of identical time-like curves can be constructed by plugging the a priori given trajectories of the border into (3.28) and (3.29). The trajectories should have a form that allows one to describe them explicitly as y⁻ = g(y⁺) for γ_L and y⁺ = h(y⁻) for γ_R. The only remaining problem is to integrate the resulting expressions to get F(x), and consequently ϕ(t, x).
Example
The first interesting example we have found is an oscillon whose borders have a constant acceleration a in the instantaneous rest frame of the border. In what follows we use units in which c = 1. In the reference frame of the oscillon, in which the border has acceleration γ⁻³a, the trajectory describing the motion of such borders takes the form

x(t) = x₀ + ( √(1 + (γ₀v₀ + at)²) − γ₀ ) / a, (3.30)

where v₀ is the velocity of the border at t = 0, γ₀ = (1 − v₀²)^{−1/2}, and x₀ = 0 (x₀ = T) for γ_L (γ_R). Note that if the oscillon has no extra motion then its rest frame is just the laboratory reference frame. Using the light-cone coordinates y± = x ± t we put the expression (3.30) into the form y⁻ = g(y⁺), where x₀ = 0, and into the form y⁺ = h(y⁻), where x₀ = T. Then, plugging these expressions into (3.28) and (3.29), we get an explicit expression for f(x). Once we know f(x), it is possible to integrate it and get the partial solutions ϕ_k(t, x) with k ∈ {C, L₁, L₂, L₃, R₁, R₂, R₃}, each one valid in a specific subset of the region between γ_L and γ_R.
The explicit solutions involve the constant AB = a⁻². Similarly to the oscillons previously presented and discussed, we can relate the solutions ϕ_{R_i}(t, x) to the solutions ϕ_{L_i}(t, x) through the transformations (3.40), which include a → −a. Note that the last two transformations are equivalent to the transformations (3.41). Once again we can write the complete solution with the help of the step functions, see (3.42), where ∆ = x(T/2) − x₀ is determined by the border trajectory (3.30). The periodicity of the solution can now be taken into account by involving the generalized forms of the functions τ(z) and σ(z) for an arbitrary period T.
Comparing the generalized exact oscillons presented in figure 37 with the numerical solutions presented in figure 17 and figure 13, we conclude that there is a certain similarity between the outgoing oscillon-like objects produced in the scattering process and the exact oscillons with non-uniformly moving endpoints. This suggests that the scattering of two exact oscillons can lead to the production of field configurations corresponding to perturbed generalized oscillons with non-uniformly moving endpoints. In some cases these perturbations can be almost zero, and when this happens one can call the outgoing oscillon-like field configurations quasi-oscillons with non-uniformly moving endpoints. The transformation of oscillons from one class into oscillons belonging to another class during the scattering process is an open problem which requires further investigation.
Fractal nature of the radiation
In this section we discuss the interesting possibility that the radiation generated in the process of the scattering of oscillons has a fractal-like nature. There are two facts that suggest this. Firstly, many numerical simulations show that the radiation of the signum-Gordon model is dominated by oscillating structures that look like travelling oscillons.
Such oscillon-like structures are generated as radiation during the scattering of oscillon-like objects. Alternatively, they are emitted from strongly perturbed oscillons. In fact, the production of small-size oscillons during the evolution of perturbed oscillons was conjectured in ref. [24]. Our work presented in this paper and also in [37] suggests that this conjecture is true. Secondly, oscillon-like field configurations exist at arbitrarily small scales. Although the numerical approach does not allow for arbitrarily good resolution, we know that the existence of exact oscillons of any size is guaranteed by the dilation symmetry of the signum-Gordon equation (1.2). This symmetry implies that for any real number λ > 0 and any solution φ^(1)(t, x) of the signum-Gordon equation the function φ^(λ)(t, x) = λ²φ^(1)(t/λ, x/λ) (3.53) is also a solution of (1.2). Looking at the energy of solutions we see that it scales according to E^(λ) = λ³E^(1), so oscillons with arbitrarily small support carry arbitrarily small energy, and each scattering process is a source of new, smaller oscillon-like objects. Certainly, we do not expect that the oscillon-like objects seen in our numerical simulations are exact oscillons. On the other hand, many of them are surprisingly regular and sufficiently stable. They all possess the characteristics necessary to be called quasi-oscillons. Less regular oscillon-like structures "decay", emitting smaller and more regular oscillating objects. The emission of smaller oscillons is a physical mechanism allowing strongly perturbed oscillons to get rid of a surplus of their energy. Summarizing, we can say that the interaction between individual oscillon-like objects (the constituents of the radiation) produces more and more such objects during their evolution. Finally, we note that the relation (3.53) is quite general and it allows us to take as the solution φ^(1)(t, x) not only a single oscillon but the whole diagram. Certainly, there is no qualitative difference between scattering processes involving two oscillons with λ = 1 and the scattering of smaller oscillons with λ ≪ 1. In principle, the whole diagram (like figure 27a and 27c) could repeat itself at any length scale. Such repetitions of structures involving oscillons at all length scales in the spacetime diagram suggest a possible fractal-like nature of the radiation. This statement still has the status of a conjecture and it certainly deserves further investigation. Below we present some preliminary results of a numerical study which reinforces this idea.
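A one-line check of the energy scaling quoted above (our illustration; it assumes the standard energy functional of the model with potential V = |φ|):

```latex
% scaling of the energy under (3.53), substituting x = \lambda x'
E^{(\lambda)} = \int \mathrm{d}x\,\Big[\tfrac12\big(\partial_t\phi^{(\lambda)}\big)^2
  + \tfrac12\big(\partial_x\phi^{(\lambda)}\big)^2 + \big|\phi^{(\lambda)}\big|\Big]
  = \lambda \int \mathrm{d}x'\,\lambda^2\Big[\tfrac12\dot\phi^2
  + \tfrac12\phi'^2 + |\phi|\Big] = \lambda^3 E^{(1)} .
```

Each term in the energy density picks up a factor λ² from the field rescaling, and the measure contributes one further factor of λ.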
In order to check our hypothesis we have performed high resolution simulations of the scattering processes and then looked at the spacetime diagrams representing the result. In figure 38a we plot the two main outgoing oscillons and the radiation in the central region between them. Looking in more detail at the region inside the rectangle in figure 38a, which we replot in figure 38b, we see that there exists a huge number of smaller oscillons invisible in the previous picture. Choosing another rectangular region of figure 38b, which we replot in figure 38c, we see that, again, it contains many oscillating structures. This result supports our idea of the fractal-like nature of the radiation of the signum-Gordon model.
Conclusions
In this paper we have reported our results on the scattering of compact oscillons in the signum-Gordon model in one spatial dimension. We have looked at two qualitatively distinct initial configurations, symmetric and anti-symmetric ones. In both cases the initial configurations consisted of exact compact solutions. Due to the compactness of oscillons
there was no problem with their overlapping at t = 0. In fact we also evolved oscillons whose supports touched each other but did not overlap at t = 0. The time dependence of the shape of the oscillons was responsible for the existence of an additional scattering parameter which we called the phase of the oscillon. This phase was an important quantity and the properties of the scattering process depended very strongly on it.
Looking at the results of the scattering of oscillons we have found a significant qualitative difference between symmetric and anti-symmetric initial configurations. The emission of radiation for anti-symmetric configurations was restricted to situations where the outgoing oscillons had irregular borders. Such irregular borders act as sources of radiation in the form of showers of smaller oscillons sent out from the borders. The central region of the Minkowski diagram just after the emergence of the outgoing oscillons was free of radiation. On the other hand, symmetric configurations produced much more radiation than the anti-symmetric ones. In this case the radiation was emitted mainly in the central part of the Minkowski diagram, where structures similar to exact shock wave solutions of the signum-Gordon model were formed. These waves were not stable and, eventually, they decayed into cascades of oscillons. We have spotted that there were special values of the phases of the colliding oscillons for which there was almost no radiation. We suspect that this fact was associated with the absence of the shockwave-like structures between the outgoing oscillons. The relation between the collapse (decay) of the shockwave-like solution and the appearance of a cascade of oscillons is a very interesting subject and requires a more thorough analysis than we could carry out in the present paper. We hope to report more on this subject in the near future.
Comparing incoming oscillons with outgoing ones we have spotted that, in general, the latter belong to a wider class of oscillons. This class is characterized by a non-uniform motion of the border of the oscillon in its own rest frame. In our numerical study many of the outgoing oscillons had borders described by curved worldline segments, whereas for incoming oscillons these borders were segments of straight lines. Thus the collision transformed the compact oscillons of a very special class into more general compact oscillons.
We have also looked at the radiation of the signum-Gordon model and have found that it possessed what looked like a self-similar structure. Since the model has the scaling symmetry, one can show that an exact compact oscillon can have arbitrarily small support and energy. Our numerical studies have shown that small quasi-oscillons were emitted from perturbed oscillons or appeared in the scattering processes of two oscillon-like structures. Since, in general, they were also perturbed, the process of emission repeated itself (in principle infinitely many times). This mechanism of emission of oscillons from perturbed objects, together with the fact that oscillons exist at arbitrarily small scales, suggests the emergence of dynamical fractals, i.e. of fractal structures in the spacetime diagrams.
Final remarks.
1. Our investigations of the scattering processes have been based primarily on the numerical integration of the signum-Gordon equation. The complexity of this process excludes any analytical approach to this problem. We have made many attempts to
calculate analytically the evolution of an initial profile containing two exact oscillons. Unfortunately, even before the emergence of the main oscillons we have encountered technical difficulties in the construction of partial solutions. Moreover, the number and localization of the partial solutions depend on the initial data. In contrast to standard analytic non-linear models, a perturbative approach cannot be used in the case of the signum-Gordon model because of the non-analytic character of the potential V = |φ| at φ = 0. Hence, even small perturbations of the vacuum solutions are always nonlinear.
2. One can get some analytical results considering the decay of shock waves in a cascade of oscillons. This subject has been recently studied and the results have been reported in [40].
3. Of course, we can also think of comparisons of our results with those obtained in other models, such as the sine-Gordon model or the λφ⁴ model. However, these models are basically very different as they do not possess compact solitons; their solitons are exponentially localised, and some studies of such solitons have been performed in much detail. The most comparable studies involved looking at the properties of sine-Gordon kinks scattering on various obstructions (potential wells or barriers) and the effects of the obstructions on the properties of the basic solitonic structures. The obstructions generated the emission of kink-antikink pairs, either in the form of breathers or as individual pairs. In perturbed models which still had solitonic solutions, one observed the emission of long-lived breathers (basically oscillons) or annihilations. An interested reader can look at papers [41][42][43][44] and references therein.
A Comments on the numerics
The numerical results presented in this paper have been generated using the standard 4th-order Runge-Kutta method, integrating the system in discrete timesteps ∆t. The second-order equation in time has been decomposed into a coupled system of two first-order equations,

∂_t φ = ψ,  ∂_t ψ = ∂²_x φ − sgn(φ), (A.1)

where ψ(x, t) = ∂φ/∂t, and the solution is advanced through the times t_i = i∆t (i = 1, 2, ...). The spatial dimension was made discrete over N sites of width ∆x, so the simulation box had length L = N∆x.
In most of our simulations we have used a spatial resolution of N = 2¹⁵ (for L = 6 this corresponds to ∆x ≈ 1.8 × 10⁻⁴). There were two exceptions. The first one corresponded to the case of the fractal, where in order to generate and capture the small scale details we had to use N = 2²⁰ (in this case the simulation space length was L = 12, leading to ∆x ≈ 1.14 × 10⁻⁵). The second exception corresponded to the generation of the diagrams describing the percentage of energy lost to radiation (e.g. figures 24, 30) and the balance of energy after the interaction (figures 34a and 34b). Since the value of each pixel of these images is computed from one entire simulation, in order to speed up the computations (in the case of figure 32 we used 350² = 122500 simulations) we performed lower resolution simulations with N = 2¹².
The timestep was fixed in all our simulations by the ratio a = ∆t/∆x, for which we used the value a = 0.1.
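A minimal sketch of this scheme in Python (our illustration, not the authors' code; the periodic boundary conditions and the simple three-point Laplacian are assumptions of the sketch):

```python
# RK4 integration of the signum-Gordon equation phi_tt = phi_xx - sgn(phi),
# written as the first-order system (A.1): phi_t = psi, psi_t = phi_xx - sgn(phi).
import numpy as np

N = 2**12            # grid sites (the paper uses 2**15, up to 2**20 for the fractal)
L = 6.0              # box length
dx = L / N
dt = 0.1 * dx        # the paper's timestep ratio a = dt/dx = 0.1

def rhs(phi, psi):
    # three-point Laplacian with periodic boundaries (an assumption of this sketch)
    phi_xx = (np.roll(phi, -1) - 2.0 * phi + np.roll(phi, 1)) / dx**2
    return psi, phi_xx - np.sign(phi)

def rk4_step(phi, psi):
    k1p, k1q = rhs(phi, psi)
    k2p, k2q = rhs(phi + 0.5 * dt * k1p, psi + 0.5 * dt * k1q)
    k3p, k3q = rhs(phi + 0.5 * dt * k2p, psi + 0.5 * dt * k2q)
    k4p, k4q = rhs(phi + dt * k3p, psi + dt * k3q)
    return (phi + dt * (k1p + 2*k2p + 2*k3p + k4p) / 6,
            psi + dt * (k1q + 2*k2q + 2*k3q + k4q) / 6)

# initial data (phi, psi) at t = 0 would be set from the exact oscillon profiles
```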
Finally, we would like to point out to the reader the need for some caution when dealing numerically with the very small scale structures appearing within the fractals. This is further explained in appendix B.
B Caveats
We would like to add a few comments on the difficulties associated with the numerical integration of the signum-Gordon equation. The main difficulty has its origin in the fact that the radiation contains perturbed oscillons of arbitrarily small sizes. Certainly, oscillons smaller than the resolution of the numerical grid cannot be seen in the simulation. An obvious solution involves increasing the number of points in the grid. We have run many simulations changing the number of points and comparing the results. Such tests have shown that very small oscillons are very sensitive to the number of points. In some cases, changing the number of points by a factor of two resulted in the appearance or disappearance of some tiny structures, whereas bigger structures remained stable under this procedure. A similar problem was spotted in simulations of a special type of self-similar solutions with an infinite number of zeros on a finite segment. In that case the numerical and analytical solutions diverged after a very short time (the numerical solution was very unstable), and increasing the simulation resolution resulted in only a very small improvement in stability. On the other hand, our numerical simulations of the exact oscillons did not lead to visible instability within intervals of time corresponding to many oscillon periods. Also, the simulations of exact shock waves were very consistent with the analytical solutions. Thus, in the regions dominated by radiation (or by the special self-similar solutions) the solutions of the model were found to be sensitive to the initial conditions. In this sense, the signum-Gordon model shares some properties with chaotic systems. This property is, for instance, one of the main difficulties in generating high-resolution fractals. However, although some results were quite sensitive to the details of the numerical procedures, most of them were not, and we strongly believe in their validity.
"Physics"
] |
HER2-encoded mir-4728 forms a receptor-independent circuit with miR-21-5p through the non-canonical poly(A) polymerase PAPD5
We previously reported that the human HER2 gene encodes the intronic microRNA mir-4728, which is overexpressed together with its oncogenic host gene and may act independently of the HER2 receptor. More recently, we also reported that the oncogenic miR-21-5p is regulated by 3′ tailing and trimming by the non-canonical poly(A) polymerase PAPD5 and the ribonuclease PARN. Here we demonstrate a dual function for the HER2 locus in upregulation of miR-21-5p; while HER2 signalling activates transcription of mir-21, miR-4728-3p specifically stabilises miR-21-5p through inhibition of PAPD5. Our results establish a new and unexpected oncogenic role for the HER2 locus that is not currently being targeted by any anti-HER2 therapy.
We identified a miRNA gene, mir-4728, that is encoded in an intron of the HER2 gene, indicating that, contrary to what is commonly assumed, the HER2 locus has a second role in addition to the membrane receptor 6 . HER2 transcription simultaneously produces HER2 mRNA as well as mir-4728, to the point that expression of mir-4728 has been suggested to accurately mark HER2 status and to work as a non-invasive biomarker in HER2-positive breast and gastric cancer 7 . Among other functions, miR-4728-3p has been shown to modulate expression of oestrogen receptor alpha (ERα) 8 and may act as a negative feedback mechanism for HER2 signalling by regulating the MAPK pathway 9 .
In collaboration with the de Hoon laboratory we have also reported that the non-canonical poly(A) polymerase PAPD5 is involved in non-templated 3′ adenylation of miRNAs 10 . In particular, PAPD5 adenylates the 3′ end of miR-21-5p, marking it for 3′ -to-5′ trimming by the poly(A) specific ribonuclease PARN. We show now that the miR-21-5p tailing-and-trimming pathway is controlled by miR-4728-3p-mediated downregulation of PAPD5 in HER2-amplified tumours. Aberrant expression of miR-21-5p has been implicated in cancer and cardiovascular disease and is associated with oncogenic processes such as epithelial-to-mesenchymal transition (EMT), cell cycle control, apoptosis, and metastasis (reviewed by Kumarswamy et al. 11 ). The tumorigenic role of miR-21-5p is exerted through downregulation of various tumour suppressor genes; regulation of PTEN by miR-21-5p leads to induction of protein kinase B (Akt) signalling and has been associated with resistance against HER2-targeted therapies 12 . Cells resistant to trastuzumab treatment were found to overexpress miR-21-5p and ectopic expression of miR-21-5p also conferred resistance in vitro, an effect that was reversed by overexpression of a PTEN gene lacking target sites for miR-21-5p 12 .
In summary, we show that the HER2 receptor and miR-4728-3p contribute to carcinogenesis in a cooperative but independent manner; while HER2 signalling induces transcription of mir-21, miR-4728-3p contributes to the oncogenic effect by maintaining high steady-state levels of active miR-21-5p. This new oncogenic function of the HER2 locus indicates that targeting only the HER2 receptor, the cornerstone of treatment of cancers that overexpress HER2, may not be sufficient for complete treatment of HER2-positive cancer.
Results

miR-4728-3p activity in HER2-positive cells regulates PAPD5. In a previous study 6 , we observed that miR-4728-3p was the main mature product of the HER2-encoded mir-4728 precursor. Analysis of next-generation sequencing data provided by the YM500 database version 2 13 confirmed that expression of miR-4728-3p far exceeded that of miR-4728-5p in cancer samples (Supplementary Fig. S1). We therefore focused on miR-4728-3p and selected two HER2-positive breast cancer cell lines, SK-BR-3 and BT-474, in which to block miRNA function by transfecting 2′-O-methyl-modified antisense oligonucleotides (ASOs). To verify the action of this treatment we cloned a 3′ untranslated region (UTR) with perfect complementarity to miR-4728-3p downstream of a firefly luciferase gene in a reporter vector. As expected, transfection of this vector showed that the presence of a miR-4728-3p target site in the 3′ UTR reduced luciferase activity through the action of endogenous miR-4728-3p, and that co-transfection with the miR-4728-3p ASO reverted this repression (Supplementary Fig. S2a). Global effects were then investigated by gene expression analysis 48 and 96 hours after ASO transfection. To confirm the specificity of the ASO treatment at the global level we performed Gene Set Enrichment Analysis (GSEA). This analysis showed significant enrichment of TargetScan-predicted miR-4728-3p targets among upregulated genes at both time points in SK-BR-3 (FDR < 0.001 and 0.035, respectively, see Supplementary Fig. S2b), again confirming the experimental approach. The number of differentially expressed genes was considerably smaller in BT-474 than in SK-BR-3 upon miR-4728-3p ASO treatment (125 and 412 genes at 48 h, respectively; cut-off log2 fold change ± 0.5 and adjusted P < 0.05).
Among the top upregulated genes we found the non-canonical poly(A) polymerase PAPD5 in both SK-BR-3 and BT-474 cells (log 2 fold change 0.94, adjusted P = 7.80 × 10 −9 and log 2 fold change 0.53, adjusted P = 0.0018, respectively). This is interesting in light of our recent report where we show that the oncomiR miR-21-5p is subjected to 3′ adenylation by PAPD5 10 . Upregulation of PAPD5 in HER2-positive breast cancer cell lines treated with miR-4728-3p ASO was confirmed by real-time qRT-PCR for both time points with a doubling of mRNA abundance 48 hours after ASO transfection in SK-BR-3 (Fig. 1a). Although differential expression was more modest in BT-474, we observed a clear trend of PAPD5 upregulation upon miR-4728-3p blocking (Fig. 1b). To confirm that the observed effect on PAPD5 was mediated by miR-4728-3p we repeated the ASO treatment in HeLa cells that do not express mir-4728 and PAPD5 levels remained unchanged ( Supplementary Fig. S3a). TargetScan lists one predicted target site for miR-4728-3p in the PAPD5 3′ untranslated region (UTR), suggesting a putative direct link, although repeated assays failed to show consistent functionality for this target site (data not shown).
Blocking miR-4728-3p leads to downregulation of miR-21-5p and inhibition of cell proliferation.
We then investigated whether miR-4728-3p may affect the PAPD5-mediated regulation of miR-21-5p. As previously noted, the most prominent miR-21-5p isoform produced by Dicer is a 23-nt isomiR that carries a templated cytosine at the 3′ end not present in the 22-nt miR-21-5p sequence registered in miRBase. This isomiR is called miR-21-5p + C and its 3′ end is the preferred substrate of PAPD5 10 . We first assessed miR-21-5p levels independently of isoform by real-time qRT-PCR in SK-BR-3 cells and total miR-21-5p levels were reduced when blocking miR-4728-3p (Fig. 1c). GSEA of predicted target genes for miR-21-5p from TargetScan in the microarray data also confirmed significant enrichment of targets among upregulated genes in SK-BR-3 upon blocking of miR-4728-3p at 48 and 96 h (FDR = 0.0014 and 0.0011, respectively, see Supplementary Fig. S4).
HER2 overexpression has been reported to increase the stability of the Microprocessor complex and the efficiency of Dicer cleavage of growth-promoting miRNAs 14 . Also, PARN has been shown to be involved in miRNA processing 15 . With this in mind we investigated whether changes in the miRNA processing machinery are the cause for the specific downregulation of miR-21-5p. We reasoned that if miR-4728-3p affected PAPD5/PARN tailing-and-trimming, this should only change the levels of miR-21-5p while miR-21-3p and other parts of the primary mir-21 transcript should remain unchanged. If, however, the downregulation of miR-21-5p were to be caused by aberrant processing, all parts of pri-mir-21 should change proportionally. We sequenced the small RNA fraction of miR-4728-3p ASO-treated SK-BR-3 cells in biological triplicate and evaluated the abundance of pri-mir-21 fragments in relation to miR-21-5p. As a proxy for changes in maturation we compared fragments upstream of 5p, the pre-miR loop, mature 3p and fragments downstream of 3p. We found that neither miR-21-3p nor any of the pri-or pre-miRNA fragments were affected by blocking miR-4728-3p and that the observed downregulation was specific for mature miR-21-5p ( Supplementary Fig. S3b). We conclude that it is highly unlikely that differences in Drosha and Dicer processing, which would affect both mature miRNAs, could cause the observed effect on miR-21-5p.
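The fragment comparison described above can be illustrated with a small sketch (ours, not the authors' Perl pipeline; the coordinates below are hypothetical placeholders, not real hg19 positions):

```python
# Classify small-RNA reads by which part of the pri-mir-21 transcript they
# overlap: upstream of 5p, mature 5p, pre-miR loop, mature 3p, downstream of 3p.
regions = {                      # hypothetical coordinates within pri-mir-21
    "upstream_of_5p": (0, 8),
    "miR_21_5p": (8, 31),        # the 23-nt miR-21-5p+C isomiR
    "loop": (31, 49),
    "miR_21_3p": (49, 71),
    "downstream_of_3p": (71, 120),
}

def classify(read_start: int, read_end: int) -> str:
    """Assign a read to the region with which it overlaps the most."""
    def overlap(name):
        lo, hi = regions[name]
        return max(0, min(read_end, hi) - max(read_start, lo))
    return max(regions, key=overlap)

# e.g. classify(10, 33) -> "miR_21_5p"
```

Comparing per-region read counts between ASO-treated and control libraries then shows whether a change is specific to the mature 5p species or shared by all parts of the precursor.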
Up-regulation of miR-21-5p promotes proliferation in many different cell types [16][17][18][19] . To test if this was also the case upon blocking of miR-4728-3p we assayed the rate of proliferation of SK-BR-3 and BT-474 with alamarBlue after transfection with ASOs. As shown in Fig. 1f,g, blocking miR-4728-3p resulted in a significant decrease in proliferation rate in both cell types. This effect was not observed in HER2-negative MCF10A cells that do not express mir-4728 ( Supplementary Fig. S5). Similar results were also obtained from HER2-negative MCF7 cells. These results indicate that this pro-proliferative process is restricted to HER2-positive cells where miR-4728-3p expression is up-regulated.
We noted that the miR-4728-3p ASO microarray data showed significant accumulation of genes involved in metabolism and oxidative phosphorylation. Since alamarBlue works as an indirect indicator of cell viability, measuring metabolic activity through the reduction of resazurin to resorufin by NAD(P)H dehydrogenase in mitochondria, we wanted to exclude that the signal reflected a metabolic change rather than a decrease in the rate of proliferation. We therefore also counted cells under the microscope and confirmed that the observed reduction was due to a decline in cell number (Supplementary Fig. S6).

miR-4728-3p controls miR-21-5p expression and cell proliferation independently of HER2 receptor signalling. Since mir-4728 has been suggested to regulate factors downstream of HER2 9 , we wanted to test if HER2 signalling could control PAPD5 and thus cause the observed downregulation of miR-21-5p upon blocking of miR-4728-3p. MicroRNA-4728 is a 5′ tailed half-mirtron with its 3′ end coinciding with the 5′ splice site of HER2 exon 24 (NM_004448.3). We first investigated whether blocking the miRNA with ASOs could interfere with HER2 splicing. Real-time qRT-PCR analysis showed that HER2 mRNA levels remained unchanged upon miR-4728-3p ASO treatment in the two cell lines (Fig. 2a,b). Furthermore, a western blot for HER2 showed that protein levels were also unaffected by miR-4728-3p ASO treatment (Fig. 2c,d).
Approximately one third of HER2-positive tumours express C-terminal fragments (CTFs) of the HER2 protein generically called p95-HER2 20,21 . These fragments lack the extracellular domain of the HER2 receptor and are in theory resistant to trastuzumab treatment, while they respond to tyrosine kinase inhibitors such as lapatinib. In western blot analyses of ASO-treated BT-474 and SK-BR-3, C-terminal fragments of HER2 were detectable at low levels but unchanged. To further investigate any possible effect of the HER2 protein on the tailing-and-trimming of miR-21-5p we overexpressed full-length HER2 as well as p95-HER2 22 in HER2-negative MCF7 breast cancer cells. The transfected cDNA clones produce the respective HER2 variant, but not the intronically encoded miR-4728-3p, allowing us to functionally separate the effects of miRNA and host gene. As shown in Fig. 2e, PAPD5 levels were not influenced by overexpression of either full-length or p95-HER2. We confirmed the transcriptional induction of mir-21 by full-length HER2 and observed a similar, if not stronger, activation upon overexpression of p95-HER2. Not only miR-21-5p, but also miR-21-3p and other parts of the mir-21 precursor increased upon expression of p95 and full-length HER2 (Fig. 2f-h). The induction of both miR-21-5p and miR-21-3p was verified by real-time qRT-PCR.
To uncouple HER2-mediated transcriptional induction from PAPD5-mediated regulation of miR-21-5p we calculated the adenylation and degradation ratios for miR-21-5p after HER2 and p95-HER2 induction. Neither ratio was affected by induction of either HER2 construct, excluding the involvement of HER2 signalling in the PAPD5-mediated regulation of miR-21-5p (Fig. 2i,j). In conclusion, the HER2 locus increases miR-21-5p levels through the cooperative action of the growth factor receptor (transcriptionally) and its encoded miRNA (stabilisation). Moreover, this shows that the PAPD5-mediated trimming pathway is independent of HER2 receptor signalling, suggesting that the pro-proliferative action of miR-4728-3p on HER2-positive cells cannot be targeted by anti-HER2 drugs. To test this idea we transfected miR-4728-3p ASOs into SK-BR-3 cells again, but this time in combination with trastuzumab. This double treatment led to a significant reduction in proliferation compared to trastuzumab alone (Fig. 2k), indicating that current anti-HER2 therapy directed solely against the receptor may not be enough to block the complete oncogenic capacity encoded by the HER2 locus.
HER2-positive tumours have reduced adenylation/degradation and increased expression of miR-21-5p. Finally, to study if the mechanisms identified in these cell line experiments are also present in breast tumours we used samples included in the population-based breast cancer project SCAN-B 23 . This analysis may be complicated by the fact that miR-21-5p has a dynamic expression in different cell types within the tumour 24 while the tumour data represents bulk tumour RNA with varying percentages of cancer cells. We selected 186 breast tumour samples, produced small RNA sequencing data from the extracted total RNA and used poly(A)-positive mRNA-sequencing data to classify samples into the intrinsic subtypes according to the expression of genes included in the PAM50 classifier. To attain more robust results we also analysed breast tumour data from The Cancer Genome Atlas (TCGA) 25 . We reasoned that if miR-4728-3p acts in concert with HER2 to increase the level of miR-21-5p, the latter should be up-regulated in HER2-like tumours compared to other breast cancer subtypes, while the adenylation and degradation ratios should decrease. Analysis of the SCAN-B data showed that expression of miR-21-5p and miR-21-3p were significantly higher in tumours belonging to the HER2-like subtype vs other subtypes (P = 9.56 × 10 −9 and P = 1.40 × 10 −7 , respectively, Student's t-test) (Fig. 3a,b) and in accordance with our results, miR-21-5p degradation ratios were also lower in the HER2-like subtype vs other subtypes (Fig. 3d) (P = 2.66 × 10 −5 , Student's t-test). Furthermore, adenylation ratios were lower in the HER2-like subtype although the difference was not statistically significant (P = 0.18, Student's t-test). In the TCGA data expression of miR-21-5p and miR-21-3p were also significantly higher in tumours belonging to the HER2-like subtype vs other subtypes ( Supplementary Fig. S7a,b, P = 5.11 × 10 −3 and P = 5.02 × 10 −4 , respectively, Student's t-test), while the miR-21-5p adenylation ratio was significantly lower in HER2-like tumours ( Supplementary Fig. S7c, P = 0.0013, Student's t-test). The degradation ratio did not differ significantly in the TCGA data. Altogether the association between mir-21 expression and the HER2 subtype in these two independent tumour data sets is in agreement with our experimental results.
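The subtype comparison can be sketched as follows (our illustration, not the authors' pipeline; the placeholder data and the label set are assumptions):

```python
# Student's t-test on miR-21-5p expression (counts per million) between
# HER2-like tumours and all other PAM50 subtypes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)                      # placeholder data below
expr_cpm = rng.lognormal(mean=10, sigma=1, size=186)
subtype = rng.choice(["Her2", "LumA", "LumB", "Basal", "Normal"], size=186)

her2 = expr_cpm[subtype == "Her2"]
rest = expr_cpm[subtype != "Her2"]
t_stat, p_value = stats.ttest_ind(her2, rest)       # classic two-sample t-test
print(f"t = {t_stat:.2f}, P = {p_value:.2e}")
```

The same comparison, applied to the adenylation and degradation ratios instead of the expression values, yields the subtype differences reported above.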
Discussion
Non-templated 3′ modification of mature miRNAs by nucleotidyl transferases is a common event in animal cells 26,27 . Both the extent and the type of 3′ end additions vary widely between miRNAs and, where the functional consequences have been characterised, they do not seem to follow a general rule. For instance, 3′ terminal adenylation stabilises miR-122 28 in what appears to be competition between the poly(A) polymerase PAPD4 (GLD2) and PARN 29 , while two of our groups recently described a tailing-and-trimming pathway for regulation of miR-21-5p abundance in which the 3′ end of miR-21-5p is adenylated by PAPD5, marking the miRNA for 3′-to-5′ trimming by PARN 10 . Here we show that regulation of miR-21-5p by this pathway is controlled by the HER2-encoded miRNA mir-4728. We also demonstrate that inhibition of miR-4728-3p results in a significant decrease in cell proliferation, implying that this miRNA contributes to the oncogenic activity of the HER2 locus by sustaining proliferation through inhibition of miR-21-5p degradation. Since the miRNA is normally expressed at low levels in most tissues, this regulatory mechanism cannot be assumed to be widely active in human organs, but it is induced in tumours with amplification of the HER2 locus. This observation may have important clinical implications, since e.g. PTEN is a direct target of miR-21-5p and has been associated with trastuzumab resistance in HER2-positive breast cancer cells 12 . Although the role of miR-21-5p as the sole mediator of trastuzumab resistance via PTEN regulation has been challenged 24 , it seems that at least part of the drug insensitivity can be attributed to miR-21-5p function. Regardless of whether it occurs exclusively through miR-21-5p, our work uncovered an oncogenic role for miR-4728-3p in sustaining proliferative signalling in a pathway that is independent of the HER2 receptor. This implies that although large efforts are made to improve anti-HER2 therapy, complete inhibition of the carcinogenic signals encoded by the locus cannot be achieved by targeting the transmembrane receptor alone. This may well contribute to the fact that many patients are refractory to anti-HER2 treatment 30 . Since mir-4728 piggybacks on transcription of HER2 and, as shown above, the two genes work together in an orchestrated manner, blocking HER2 signalling exclusively could unbalance some functions of mir-4728. It is tempting to speculate that mir-4728 might be involved in some of the adverse effects of trastuzumab, such as cardiotoxicity, because of the well-characterised association between miR-21-5p and heart failure 31,32 . The effect of co-targeting mir-4728 on alleviating some of these issues, as well as on blocking its pro-proliferative action, would be interesting to study.

Figure 2. Blocking miR-4728-3p does not interfere with expression of HER2, and overexpression of full-length HER2 or a truncated, intracellular form (p95) increases transcription of mir-21. HER2 mRNA levels remained constant upon treatment of SK-BR-3 (a) and BT-474 (b) cells with the miR-4728-3p antisense oligonucleotide. Transcript levels were quantified with real-time qRT-PCR using primers spanning the intron that encodes mir-4728. Western blotting confirmed that HER2 protein expression was also unchanged in SK-BR-3 (c) and BT-474 (d) cells. Contrast was increased to visualise p95-HER2. Tubulin is shown as a loading control. (e) As expected, expression of PAPD5 mRNA was not affected by overexpression of cDNA clones of HER2 or p95-HER2 lacking the intronic miR-4728-3p in the HER2-negative breast cancer cell line MCF7. (f) Expression of mature miR-21-5p increased upon overexpression of HER2 and, more strongly, of p95-HER2. Other parts of mir-21 were also induced, including miR-21-3p (g) and the 5′ part of pri-mir-21 (h), indicating an effect at the level of transcription. Adenylation (i) and degradation (j) ratios remained largely unchanged.
Since mir-4728 is lowly expressed in most tissues and amplification or overexpression of HER2 increases its cellular concentration above a functional threshold 8 , inhibition of mir-4728 and the mir-4728/PAPD5/miR-21-5p circuit could act as an anti-HER2 cancer therapy that is selectively active in HER2-positive cancer cells.
The importance of keeping miR-21-5p under tight control is supported by the fact that we could not identify any other miRNA that appeared to be adenylated by PAPD5. We have also previously observed that regulation of miR-21-5p by PAPD5 and PARN is disrupted in highly proliferative tissues 10 , suggesting that degradation of miR-21-5p is restricted to cells in a state of quiescence. This idea is supported by data produced by Thompson and co-workers 33,34 , who showed that miRNAs in quiescent mouse cells associate with inactive, low molecular weight Argonaute complexes which lack essential components for miRNA-directed repression such as the GW182 proteins. These complexes are suggested to be reservoirs of inactive mature miRNAs that can be re-activated into high molecular weight (active) complexes upon mitogenic signalling. Reanalysing their data with a focus on tailing-and-trimming of miR-21-5p, we found that adenylation and degradation ratios increased in resting cells. The highest peak of adenylation/degradation was detected in the size fraction just below the ones containing the bulk of AGO2 protein (97 kDa) (Fig. 4a,b), coincident with the expected sizes of PAPD5 (63 kDa) and PARN (74 kDa). By contrast, stimulated cells showed no specific enrichment in any fraction. In concordance with our previous results, no other miRNA displayed an adenylation profile similar to miR-21-5p in the Thompson et al. data, confirming that this is likely a miR-21-5p-specific pathway and indicating the importance of keeping it under strict control.

Figure 3. Breast tumours overexpressing HER2 have increased expression of mir-21 and decreased degradation of miR-21-5p. Breast tumours belonging to the HER2 subtype have higher expression of both miR-21-5p (a) and miR-21-3p (b) mature miRNAs compared to other molecular subtypes. Data are expressed as counts per million reads (cpm). In the SCAN-B data, the HER2 subtype also exhibited significantly decreased degradation (d), but not adenylation (c), of miR-21-5p.
In summary, the HER2 growth factor receptor induces a signalling cascade that increases transcription of mir-21, while miR-4728-3p blocks PAPD5, a negative regulator of miR-21-5p stability. Together they act to increase the levels of the oncomiR miR-21-5p and promote cell proliferation. The proposed regulatory circuit is depicted in Fig. 4c. Stimulation of the 185 kDa HER2 protein in breast cancer cells activates mir-21 transcription by induction of transcription factors including ETS-1 35 and AP-1 36 or through STAT3 37 . STAT3 also acts as a transcriptional activator of HER2 by binding to a response element in the HER2 promoter and recruits nuclear HER2 38 as a coactivator to regulate the transcription of mir-21 39 . Furthermore, miR-21-5p targets STAT3, leading to its downregulation in a negative feedback loop 40 . Interestingly, STAT3 mRNA levels increased upon blocking of miR-4728-3p in our experiments. This regulatory circuit establishes a new oncogenic role for the HER2 locus through the intronically encoded miRNA mir-4728 that is not currently being targeted by any anti-HER2 therapy.

Figure 4. Reanalysis of the data from ref. 33 revealed increased adenylation (a) and degradation (b) ratios in resting compared to stimulated T cells. Enrichment is particularly pronounced in the very low molecular weight fractions that do not contain the bulk of AGO2 protein (fractions 16 and 17). (c) In summary, mir-4728 is co-expressed with HER2 to produce the mature miRNA miR-4728-3p, which downregulates PAPD5. Reduced adenylation of miR-21-5p by PAPD5 prevents PARN-mediated degradation of the miRNA. Signalling by HER2 and p95-HER2 induces the MAPK1/2 pathway, which activates transcriptional regulators including STAT3, ETS-1, and AP-1, in turn increasing the transcription of mir-21. STAT3 is also regulated by miR-21-5p in a negative feedback loop. m-HER2, membrane-bound HER2; n-HER2, nuclear HER2.
Material and Methods
Cell culture and transfections. All cell lines were purchased from ATCC and used at low passage numbers.
Cells were cultured as reported previously 41. MCF7 Tet-Off cells (BD Biosciences) expressing inducible full-length or p95-HER2 were established as described 22 and maintained in medium containing 1 μg/ml doxycycline (Sigma). Antisense oligonucleotides (IDT DNA Technologies) contained 2′-O-methyl modifications and are listed in Supplementary Table 1. Transient transfections were performed using Lipofectamine 2000 (Life Technologies) following the manufacturer's instructions with 25 nM (SK-BR-3) or 100 nM (BT-474) antisense oligonucleotide, as indicated. AlamarBlue (Invitrogen) proliferation assays were performed according to the manufacturer's instructions, using a FLUOstar Omega (BMG LABTECH) to measure fluorescence (excitation 544 nm, emission 590 nm) after 2 h incubation.
Luciferase assay. For luciferase assays, two DNA oligonucleotides corresponding to a perfectly complementary target site for miR-4728-3p were phosphorylated, annealed and ligated between the SacI and SalI sites of the pmirGLO plasmid (Promega). Luciferase assays were performed with the Dual-Luciferase Reporter Assay System (Promega) on a FLUOstar OMEGA Microplate Reader (BMG LABTECH) at 24 h after transfection of SK-BR-3 cells with 10 ng plasmid and 25 nM ASO in 96-well plates. Luminescence readings for the firefly target site reporter gene were normalised to the signal from the Renilla reporter gene and the negative control-treated empty vector.
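A minimal sketch of the two-step normalisation described above, in Python; all values and names are illustrative. The firefly signal is divided by the Renilla signal from the same well, and that ratio is then referenced to the same ratio measured for the negative-control-treated empty vector.

def normalised_luminescence(firefly, renilla, firefly_ctrl, renilla_ctrl):
    """Firefly/Renilla ratio relative to the empty-vector control."""
    return (firefly / renilla) / (firefly_ctrl / renilla_ctrl)

# Example: a reporter knocked down to 60% of the control level.
print(normalised_luminescence(1200.0, 4000.0, 2000.0, 4000.0))  # -> 0.6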
Microarray expression analysis. RNA was extracted with TRIZOL (Life Technologies) according to the manufacturer's instructions. RNA quantity and quality were assessed with a NanoDrop ND 1000 spectrophotometer (NanoDrop Tech) and a Bioanalyzer (Agilent) before analysis on HumanHT-12 v4.0 Expression BeadChips (Illumina). All data were imported and normalised using quantile normalisation implemented on the BASE server (http://base.thep.lu.se) 8. Gene Set Enrichment Analysis (GSEA) 42 with default settings was done for predicted miRNA targets based on TargetScan 6.2 predictions 43 using RefSeq identifiers, and gene lists were pre-ranked by log2 fold change between treatment and control.
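The pre-ranked input to GSEA can be sketched as follows, assuming quantile-normalised expression matrices with RefSeq identifiers as the index and replicate samples as columns (the layout is hypothetical):

import numpy as np
import pandas as pd

def preranked_list(treated: pd.DataFrame, control: pd.DataFrame) -> pd.Series:
    """Genes ranked by log2 fold change between treatment and control."""
    log2fc = np.log2(treated.mean(axis=1)) - np.log2(control.mean(axis=1))
    return log2fc.sort_values(ascending=False)  # .rnk-style input for GSEA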
Western blot. Cells were harvested at indicated times on ice in RIPA buffer (10 mM Tris-HCl pH 7.4, 150 mM NaCl, 1 mM EDTA, 0.1% SDS, 1% Triton X-100, and 1% sodium deoxycholate) supplemented with complete protease inhibitor mixture tablets (Roche Diagnostics). Lysates were clarified by centrifugation and protein concentrations were determined by BCA Protein Assay kit (Thermo Scientific). Equal amounts of crude lysates were separated by SDS-PAGE on 4-12% bis-tris gels and proteins were transferred to a PVDF membrane (both Life Technologies). Membranes were then blocked and probed with HER2 (AMAB90627, Sigma) and tubulin (ab7291, abcam) antibodies according to the manufacturers' instructions. HRP-conjugated secondary antibodies (abcam) were visualised with ECL (Santa Cruz) and staining intensity was determined using a Chemidoc MP (Bio-Rad).
Real-Time quantitative RT-PCR.
Reverse transcription and real-time qRT-PCR were performed as described, with poly(A) tailing and reverse transcription (RT) for miRNA qRT-PCRs and only RT for mRNAs 44. Real-time qRT-PCR was performed with cDNA diluted 1:10 in SsoFast EvaGreen reagents (Bio-Rad) on a CFX96 instrument (Bio-Rad). Expression data were normalised to selected reference genes (PPIA for mRNA and U6, RN7SL, and let-7a for miRNA). Primer sequences can be found in Supplementary Table 1.

Next-generation sequencing. Sequencing libraries were prepared with the NEBNext Multiplex Small RNA Library Prep Set for Illumina (New England Biolabs) according to the manufacturer's instructions and sequenced on an Illumina HiSeq sequencer in paired-end mode for 2 × 101 cycles or on an Illumina MiSeq in single-end mode for 51 cycles. Sequences were demultiplexed using Picard and aligned against hg19 using Novoalign with settings -a AGATCGGAAGAGCACACGTCT -l 14 -h -1 -1 -t 90 -g 50 -x 15 -o SAM -o FullNW -r All 51 -e 51. MicroRNA and isomiR expression were analysed using custom Perl scripts. Expression values for miR-21-5p in the TCGA dataset were retrieved from the TCGA website, while miR-21-5p + C and adenylated variant expression values were retrieved from the YM500 database 13.
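The reference-gene normalisation can be sketched with the common 2^-ΔΔCq calculation; the precise scheme used in the study is not spelled out above, so the following is an assumption for illustration only.

import numpy as np

def relative_expression(cq_target, cq_refs, cq_target_ctrl, cq_refs_ctrl):
    """2^-ddCq with the mean of several reference genes (e.g. U6, RN7SL, let-7a)."""
    dcq = cq_target - np.mean(cq_refs)                  # sample of interest
    dcq_ctrl = cq_target_ctrl - np.mean(cq_refs_ctrl)   # control sample
    return 2.0 ** -(dcq - dcq_ctrl)

# Target amplifies two cycles earlier than in the control -> ~4-fold up.
print(relative_expression(22.0, [18.0, 19.0, 20.0], 24.0, [18.0, 19.0, 20.0]))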
Statistical analysis. For luciferase assays, the relative, normalised luminescence values were plotted as mean ± standard deviation (s.d.), and for real-time qRT-PCRs, the relative, normalised expression values were plotted as mean ± standard error of the mean (s.e.m.). Expression values from next-generation sequencing were normalised as counts per million reads (cpm) and plotted as mean ± s.d. Differences among these values, and the miR-21-5p adenylation and degradation ratios, were tested using Student's t-test. Analysis of microarray data including GSEA is described in the separate section "Microarray expression analysis".
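A minimal sketch of the cpm normalisation and the two-sample comparison, with illustrative numbers only:

import numpy as np
from scipy import stats

def cpm(counts: np.ndarray) -> np.ndarray:
    """Counts per million for a (features x samples) count matrix."""
    return counts / counts.sum(axis=0) * 1e6

# Hypothetical adenylation ratios (%) in two groups of samples.
group_a = np.array([35.1, 40.2, 38.7])
group_b = np.array([22.3, 25.8, 24.1])
t, p = stats.ttest_ind(group_a, group_b)  # Student's t-test
print(f"t = {t:.2f}, p = {p:.4f}")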
"Biology",
"Medicine"
] |
Topological-insulator-based terahertz modulator
Three-dimensional topological insulators, as a new phase of quantum matter, are characterized by an insulating gap in the bulk and a metallic state on the surface. In particular, most topological insulators have narrow band gaps and hence have promising applications in the area of terahertz optoelectronics. In this work, we experimentally demonstrate an electronically-tunable terahertz intensity modulator based on a Bi1.5Sb0.5Te1.8Se1.2 single crystal, one of the most insulating topological insulators. A relatively frequency-independent modulation depth of ~62% over a wide frequency range from 0.3 to 1.4 THz has been achieved at room temperature by applying a bias current of 100 mA. The modulation in the low-current regime can be further enhanced at low temperature. We propose that the extraordinarily large modulation is a consequence of thermally-activated carrier absorption in the semiconducting bulk states. Our work provides a new application of topological insulators for terahertz technology.
Terahertz (THz) technology has developed rapidly in the past several decades, with applications spanning from time-domain spectroscopy 1 to public security 2, medical imaging 3, and high-speed communications 4. High-performance THz components, including sources 5, detectors 6, and modulators 7, are urgently needed to promote further THz technology applications. In an advanced THz system, modulators can be used to actively control the amplitude, phase, and spectrum of the THz wave. THz modulators based on semiconductors and metamaterials have been demonstrated that control the carrier concentration, and thus the optical response, of semiconductors by electrical or optical doping [8][9][10][11][12]. Moreover, some phase-transition materials, such as VO2 and superconductors, have been incorporated with metamaterials to thermally modulate the electric conductivity [13][14][15][16]. However, conventional thermally-controlled modulators have integration issues with current semiconductor techniques. Recently, it was found that graphene-based modulators have superior performance due to graphene's special band structure, with linear dispersion and a density of states close to the Fermi energy [17][18][19][20]. In particular, a broadband modulation depth of up to 93% has been achieved with a graphene/ionic-liquid/graphene sandwich structure 20.
Topological insulators (TIs), which are considered three-dimensional analogues of graphene, possess linear Dirac-like states in the insulating bulk gap 21,22. In contrast to graphene, the strong spin-momentum locking of the helical surface states can enable the conversion of charge current into spin current 23, offering promising applications in electronic and optoelectronic devices [24][25][26][27][28]. Although the existence of surface states at room temperature has been confirmed by angle-resolved photoemission spectroscopy (ARPES) results [29][30][31], the surface states are always contaminated by the residual conductivity in the bulk arising from the presence of intrinsic impurities 32,33. Alternatively, as narrow-bandgap semiconductors, e.g., Bi2Se3 and Bi2Te3 with bulk gaps of ∼300 and 150 meV, respectively 34,35, TIs are known to be excellent thermoelectric materials 36,37 and have potential applications at room temperature 25. Recently, Bi1.5Sb0.5Te1.8Se1.2 (BSTS), one of the most insulating topological insulators, was characterized by terahertz time-domain spectroscopy (THz-TDS), which indicated the presence of an impurity band about 30 meV below the Fermi level 38. The pronounced temperature dependence of the low-energy absorption may be exploited to construct a THz modulator.
Here, we demonstrate a current-driven THz intensity modulator using a BSTS crystal. A high modulation depth over a broad THz band is obtained with an applied in-plane current. We also show that the THz modulation can be further enhanced at cryogenic temperatures. Moreover, we confirm that the large modulation arises from thermally-activated free carriers in the semiconducting bulk states.
Results
The device studied, as shown in Fig. 1(a), is a sandwich structure consisting of a BSTS crystal and two layers of Kapton tape. Figure 1(b) shows the transmittance of the 30-μm-thick BSTS single crystal and of a single layer of Kapton tape, about 10% and 90%, respectively. The transmittance of BSTS drops almost to zero above 1.5 THz. This can be explained by the existence of an optical phonon mode at 1.9 THz 38,39, resulting in strong absorption in the transmittance spectrum above 1.5 THz. Figure 2(a) shows the measured transmitted THz waveform in the time domain as a function of in-plane current at room temperature. The peak amplitude of the electric field decreases significantly as the applied DC current is tuned from 0 to 100 mA. The attenuation of the THz peak is the same at positive and negative current (negative means reversing the direction of the in-plane current). Also, no obvious peak shift was observed in the THz pulses.
By Fast Fourier Transformation (FFT) of the time-domain pulses, the corresponding THz amplitude spectra are obtained. These spectra are normalized by a reference spectrum obtained from the same device without applied current, as shown in Fig. 2(b). The normalized strength of the THz electric field decreases with increasing bias current in the frequency range from 0.3 to 1.4 THz, above which the signal is unreliable due to the strong absorption. To quantify the performance of this THz modulator, the relative change in the amplitude of the transmittance is used to define the modulation depth:

$$\mathrm{MD} = \frac{t(0) - t(I)}{t(0)} \quad (1)$$

where t(0) and t(I) are the electric field transmittance of the device under zero and non-zero bias current, respectively. A relatively flat modulation depth is achieved in the 0.3-1.4 THz frequency range at various bias currents, as indicated in Fig. 2(b). Increasing the magnitude of the current from 0 to 100 mA decreases the relative transmittance significantly, achieving a maximum modulation depth of 62% (at 0.5 THz, the peak position of the spectra) at the highest bias current, as illustrated in Fig. 3(a).
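The processing chain just described can be sketched as follows (FFT of the time-domain traces, normalisation to the zero-current reference, then MD = 1 − t(I)/t(0)); the traces and sampling step are placeholders.

import numpy as np

def modulation_depth(e_ref: np.ndarray, e_bias: np.ndarray, dt: float):
    """Frequencies (THz) and MD(omega) from two time-domain field traces.
    e_ref: field at zero current; e_bias: field under bias; dt: time step (s)."""
    freqs = np.fft.rfftfreq(len(e_ref), d=dt) / 1e12     # Hz -> THz
    t_ratio = np.abs(np.fft.rfft(e_bias)) / np.abs(np.fft.rfft(e_ref))
    return freqs, 1.0 - t_ratio

# In practice only the reliable 0.3-1.4 THz window would be evaluated.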
A three-dimensional topological insulator has a metallic surface state in the insulating bulk energy gap. Thus one should expect the surface states to dominate the electric transport. However, as mentioned above, due to the free carriers in the bulk, the contribution of surface states is difficult to detect. In other words, the electric transport at room temperature is dominated by the semiconducting properties of the bulk states. For our BSTS sample, an impurity band lies ∼30 meV below the Fermi level with a bulk gap of 0.25 eV 38,40,41. Current-voltage (I-V) measurements, as shown in the inset of Fig. 3(a), further confirm the Schottky character of the modulator. The symmetric behavior of the I-V characteristic may be due to the formation of two back-to-back Schottky diodes at the interfaces of BSTS and silver paste. A flow of electric current through the two electrodes will cause a Joule heating effect. As the current increases, the device is heated, and the corresponding rise in temperature of the device is ∼124 K for the maximum current amplitude of 100 mA. A larger thermal energy causes more electrons to be excited from the impurity band to the Fermi level in BSTS, which then results in larger absorption of THz radiation by these electrons via intraband transitions. The temperature change of the device surface is also plotted in Fig. 3(a), showing excellent agreement with the THz modulation depth at 0.5 THz. This agreement is a consequence of the fact that, in a dielectric slab, the change in the real part of the optical conductivity relative to the zero-current conductivity, $\Delta\sigma_1/\sigma_1$, which determines the modulation depth, follows the temperature change of the device to a first approximation according to

$$\Delta\sigma_1 \approx \left(\frac{\mathrm{d}\sigma_1}{\mathrm{d}T}\right)\Delta T \quad (2)$$

Moreover, the temperature change of the device should be proportional to the heating power (voltage multiplied by current). Therefore the similar current dependence of the normalized modulation depth and the heating power, as shown in Fig. 3(b), provides additional evidence that the large modulation depth obtained here is related to the thermal heating effect.
The THz conductivity of BSTS was studied by Tang et al. 38, who showed that both the low-frequency conductivity $\sigma_1$ ($\omega = 0.4$ THz) and the square of the plasma frequency $\omega_p^2$ could be well described by a thermally-activated hopping model, consistent with Fourier transform infrared spectroscopy results 43. Accordingly, the thermally-induced carrier density is about 6 × 10¹⁷ cm⁻³ at room temperature under a 100 mA bias current. Thus the relative change of the carrier density under a 100 mA bias current is roughly 70%, comparable to the modulation depth at room temperature. These thermally-induced carriers absorb the THz wave, leading to the significant decrease of the transmitted THz wave.
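An order-of-magnitude sketch of this thermal-activation picture, assuming simple activated behaviour n ∝ exp(−Ea/kBT); the 30 meV activation energy is illustrative, and the actual fit parameters are those of the hopping model in ref. 38.

import numpy as np

K_B = 8.617e-5   # Boltzmann constant, eV/K
E_A = 0.030      # illustrative activation energy (~impurity-band depth), eV

def carrier_ratio(t_cold: float, t_hot: float, e_a: float = E_A) -> float:
    """n(T_hot)/n(T_cold) for n ~ exp(-E_a / k_B T)."""
    return np.exp(e_a / K_B * (1.0 / t_cold - 1.0 / t_hot))

# 100 mA of bias heats the device by ~124 K from room temperature:
print(carrier_ratio(300.0, 424.0))  # > 1: more THz-absorbing carriers when hot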
After identifying the thermal origin of the large modulation effect, we measured the THz response of the device under various bias currents at temperatures down to 5 K. The normalized transmittance spectra under different bias currents at 5 K are shown in Fig. 4(a). The modulation is significantly enhanced compared with that at room temperature; e.g., a 6 mA bias current already gives a modulation depth of 10%. Figure 4(b) shows the modulation depth at 0.5 THz at various temperatures. As the temperature increases, a higher bias current is needed to achieve the same modulation depth as at 5 K. In the low bias current range, the modulation depth is much higher at lower temperature under the same current, especially below 100 K, increasing linearly with applied current. As mentioned above, the BSTS bulk sample shows typical semiconductor behavior. The resistance of the device, derived from the I-V curve at low bias current, increases from 23 Ω at room temperature to 453 Ω at 10 K. Higher resistance means a stronger heating effect. Moreover, the thermal conductivity of Kapton tape below 100 K is about two orders of magnitude smaller than that at room temperature 44, which means the heat generated by Joule heating cannot be removed fast enough by thermal conduction through the Kapton tape. Therefore the higher temperature change induces more thermally-activated carriers and leads to a larger modulation depth. The consistency between the modulation depth and the normalized heating power in this regime at 5 K, as illustrated in Fig. 4(b), again supports the thermal origin of the THz modulation. In the high bias current range (above 30 mA), the modulation depth at low temperature deviates from the linear behavior, which can be explained by the extremely large heating effect: the BSTS crystal can be heated to a very high temperature even though the sample holder is kept at the fixed experimental temperature. At the same time, the thermal conductivity of Kapton increases slightly with increasing temperature, resulting in a higher equilibrium temperature of the whole device. Consequently, the temperature change tends to saturate, leading to the saturation of the modulation depth in the higher bias current regime. Note that the maximal bias current at low temperatures is 60 mA, above which the device would be damaged.
One may argue that the large modulation depth at low temperature should be related to the surface states. The transmittance of one surface layer of BSTS is estimated to be about 98.3% 38, which is much larger than the transmittance we observed in our data. This suggests that THz absorption even at low temperature is still dominated by the bulk. On the other hand, for a fixed experimental temperature, the transmittance under different currents is referenced to that without applied current, so that the contribution from the surface states is eliminated. Therefore we can conclude that the dramatic increase of the modulation depth with increasing in-plane current at low temperature is also related to the semiconducting bulk states.
Discussion
The carrier concentration in BSTS can be electrically controlled by tuning the temperature of the crystal, making it possible to modulate a terahertz wave passing through the device. On the basis of this principle, a highly tunable broadband THz intensity modulator based on a topological insulator is proposed and experimentally demonstrated. The electric-field modulation depth is about 62% in the 0.3-1.4 THz range, corresponding to a power modulation depth of ∼85%, which is significantly higher than that of most previously developed semiconductor-based modulators [8][9][10][11][12]. Although the insulator-to-metal phase transition of VO2 can offer a higher modulation depth, the electrical controllability of such a device requires a very high voltage 45. On the other hand, the high modulation depth of our device can be obtained with a bias current of 100 mA, or an equivalent bias voltage of less than 1.5 V, which is comparable to that of the single-layer graphene-based modulator gated by an ionic liquid 20. Therefore, we can confidently say that the TI-based device could be utilized as a high-efficiency THz intensity modulator.
Besides the modulation depth, the insertion loss is also important for evaluating a modulator. For our sandwich-structure device, the electric field peak attenuation is about 92% at room temperature, which can be diminished by optimizing the sandwich structure. On the one hand, decreasing the thickness of the TI crystal could decrease the free-carrier absorption in the bulk: the transmittance of a BSTS flake mechanically exfoliated from the single crystal increases from 10% to 64% when the thickness decreases from 30 μm to 1.3 μm at room temperature, as shown in Fig. 1(b). On the other hand, using more THz-transparent thin-film capping layers, for example Al2O3, in place of the Kapton tape can further minimize the insertion loss.
In conclusion, we demonstrated a proof-of-principle topological-insulator-based THz intensity modulator fabricated from a BSTS single crystal, which can be efficiently controlled by a DC current. We observed a maximal modulation depth of about 62% for our sandwich-structured device over a wide frequency range from 0.3 to 1.4 THz at room temperature, with the current modulation further enhanced at low temperature. We also confirmed that the observed large THz modulation is mainly due to the temperature-tunable carrier population of the bulk states. Our results suggest a new application of topological insulators for terahertz technology.
Methods
Device fabrication. High-quality BSTS single crystals are synthesized using the modified Bridgman method and can be easily cleaved using Kapton tape. Their structure and transport properties have been reported in an earlier study 40. The devices are fabricated using a two-step tape method. First, a BSTS flake (~30 μm) is mechanically exfoliated from the single crystal using Kapton tape. Second, most of the top surface of the BSTS crystal is covered by another Kapton tape, which prevents the decay of the sample in the ambient atmosphere. Kapton tape (typical thickness of ∼70 μm) can remain stable over a broad range of temperatures. Silver paste is used to form two electrodes on the exposed BSTS crystal. Thus a device based on a Kapton/BSTS/Kapton sandwich structure is prepared for THz modulation, with a clear aperture of ∼6 × 3 mm² for testing.
THz-TDS measurements. THz time-domain spectra of the sandwich-structure device were measured with a TPS-3000 spectrometer with a frequency range of 0.3-2.7 THz, incorporating a Janis ST-100-FTIR continuous-flow cryostat covering the temperature range from 5 to 400 K. The in-plane current was applied between the two electrodes using a Keithley 2400 sourcemeter operating in direct-current mode, with the voltage measured simultaneously. Data collection under different bias currents was initiated after stabilization of the source-drain voltage, which typically took about 1-5 seconds. Each trace was averaged from 900 spectra with a scanning frequency of 30 Hz. The temperature of the device under various bias currents at room temperature was measured separately using a FLIR T620 thermal imaging camera under ambient conditions.

Data Availability. All data generated or analysed during this study are included in this published article.
"Physics"
] |
Multi-Factor Analysis on the Stability of High Slopes in Open-Pit Mines
During the production of open-pit mines, the stability of slopes can be affected by various factors such as structural surfaces, production blasting vibrations, and mined-out areas. In this study, the researchers focused on the slope of the open-pit mine at Yinshan and employed UAV mapping technology to conduct an on-site geological engineering investigation. Information on the yield, trace length, spacing, and density of the structural surface of the south slope was obtained. The researchers also carried out vibration blasting tests in combination with the production blasting activities in the mine to determine the blasting vibration attenuation law and whether the blasting vibration speed met safety specifications. Additionally, numerical simulation methods were used to examine the influence of the mined-out area on the stability of the current slope and the designed excavation slope. The slope stability was evaluated using the limit equilibrium method, and the researchers separately discussed the influence of self-weight load and self-weight load plus blasting vibration force on the stability of the high slope of the open pit. The results showed the following: (1) The rock mass structural plane in the south slope of the mining area was mainly dominated by medium-large dip structural planes, and three faults and joint fissures in the investigation area combined to form cutting and sliding surfaces in the rock mass that were prone to collapse and sliding. (2) The maximum blasting vibration speed met safety requirements. (3) There was no large range of plastic zone damage in the entire slope, and the overall stability of the slope was good. (4) The present slope was relatively stable when considering only self-weight stress and the blasting vibration force. However, there was a certain risk of instability in the design of the excavation slope.
Introduction
With the increasing mining depth of medium and large open-pit mines, a number of high and steep slopes are inevitably formed. The stability of these slopes poses a huge hidden danger to the daily functioning of mines and to the safety of the lives and property of construction personnel [1,2]. There are many unfavorable factors affecting the safety and stability of open-pit mine slopes [3][4][5]. First of all, the traditional method of geological surveying requires manually standing at the foot of the open-pit slope and using geological survey tools such as a compass and measuring tape to measure and record on site, which requires the participation of many people over a large survey area, resulting in low survey efficiency. The general height of the slope of an open-pit mine is large, and the height of the slope after merging sections is often as high as 30 m, while manual measurements can only be carried out within a very limited range of 2-3 m at the top of the slope, so the structural surface measured is not very representative. On-site [...] using a numerical simulation and the limit equilibrium method. Sun et al. [28] explored the characteristics and evolution of slope slip under different mining sequence combinations of open-pit and underground combined mining. Shi et al. [29] carried out an analysis and evaluation of the potentially dangerous area division, slope stability characteristics, and isolation thickness of an open-pit slope according to the theoretical deduction of the goaf and the results of borehole detection.
A large number of experts and scholars have carried out fruitful studies on the individual factors affecting slope stability. However, owing to the complex environment of open-pit mines, many factors can cause slope instability, whether the dominant structural plane, production blasting vibration, or the underground mined-out area. To address this problem, all of the factors that may cause slope instability and damage need to be analyzed together.
Background of the Project and Technical Route

Overview of Regional Project
This study is based on the copper-gold open-pit mine in the ninth district of Yinshan Mining Co., Ltd., Jiangxi Copper Industry Group. The site is located in the northern suburbs of Dexing City, Jiangxi Province, at the northwest foot of the Damaoshan branch of the Huaiyu Mountains (Figure 1). It is a typical hilly mountain landform. The mining area is 2.7 km long from north to south and 2.15 km wide from east to west, with an area of about 4.36 km² and a production capacity of 5000 t/d. The Neoproterozoic Zhangcun Group is widely exposed in the mining area, followed by the Mesozoic Lower Cretaceous Daguding Formation and a small amount of the Shixi Formation. The Zhangcun Group is the basement strata of the mining area, which is composed of metamorphic rocks. The Daguding Formation is a continental volcanic rock series covering the Zhangcun Group, which is distributed in the southwest of the mining area and near the Xishan crater. Its distribution is characterized by a northeast zonal distribution. The Zhangcun Group and the Daguding Formation are the main ore-bearing surrounding rocks in the mining area. The Shixi Formation is unconformably covered by the Zhangcun Group and the Daguding Formation, and a small amount is distributed in the southwest of the mining area. The main lithology of the mining area includes phyllite, quartz porphyry, and Quaternary residual deposits. Phyllite is the most important and widely distributed rock mass in the mining area, mainly comprising sericite phyllite mixed with quartz. The quartz porphyry is a small dyke, trending nearly east-west to northeast-east, with an inclination angle of 80-90°, a length of 300-800 m, a width of 2-20 m, and a depth of more than 500 m. The Quaternary comprises residual slope and alluvial loose accumulation layers, widely distributed in the mining area. The lithofacies are mainly clay, mixed with breccia such as slate and phyllite, mostly gravel with a diameter of less than 2 cm. The surrounding rock of the ore body is mainly phyllite, quartz porphyry, and a small amount of dacite porphyry. The surrounding rock is relatively stable. The factors affecting the stability of the surrounding rocks are the development of phyllite schist and faults in local sections.
Technology Lines
The technical route of this study is shown in Figure 2.
UAV Technology
Due to the influence of the geological structure and external conditions, a large number of fracture structural surfaces with different orientations and scales are left in the rock mass. The fractured structural surfaces are the parts of the rock mass with the lowest strength and the weakest resistance to deformation, and their existence leads to significant weakening and strong anisotropy of the overall mechanical properties of the rock mass. Rock mechanics emphasizes that the deformation and damage of the rock mass is, in general, the deformation and damage of the structural faces and their networks, mainly through the tension and shear deformation of the structural faces. In order to fully understand the appearance quality as well as the distribution state of rock joints in each area of the mine, and to derive the dominant orientation of the structural faces in the investigated area, this paper uses UAV mapping technology to conduct an on-site geological investigation of the slopes. UAV mapping geological surveying is a new non-contact method for slope engineering geological surveys; the principle is to use a digital camera carried on the UAV to collect aerial images by remote control, and finally to use professional interpretation software to interpret the collected images. Compared with conventional geological survey methods, the UAV mapping geological survey method has the advantages of a high degree of automation, low cost, and high safety. The flight platform used in this survey is a DJI (Shenzhen, China) Phantom 4 Pro V2.0 quadrotor UAV; the 20-megapixel FC6310 camera on the UAV is used for image acquisition, and a 3D reconstruction system based on the SfM algorithm is used to reconstruct the aerial images [30]. Finally, the 3D reconstructed point cloud model is used for intelligent recognition and information resolution of the structural surface [31] to obtain information on the yield, location, trace length, spacing, and density of the structural surface.
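A minimal sketch of the orientation-resolution step, in Python, assuming a patch of point-cloud coordinates (x = east, y = north, z = up) already segmented as a single joint face; the SVD plane fit shown here is a generic stand-in for the intelligent recognition method of [31].

import numpy as np

def plane_orientation(points: np.ndarray):
    """points: (N, 3) array of one structural face.
    Returns (dip direction, dip) in degrees."""
    centered = points - points.mean(axis=0)
    # The right singular vector with the smallest singular value is the normal.
    n = np.linalg.svd(centered, full_matrices=False)[2][-1]
    if n[2] < 0:
        n = -n                                   # use the upward normal
    dip = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))
    dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    return dip_dir, dip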
Scope and Content of the Survey
Due to the influence of the geological structure and external conditions on the south slope of the stope, small landslides historically occurred on the south slope in December 2014 and June 2016. Therefore, geological surveying work has been carried out in this area to gather information about its structural plane and to provide the basis for slope stability evaluation and future disaster prevention and control. The survey is divided into two areas, namely, survey area I and survey area II, as shown in Figure 3. The geological survey includes (1) the yield of the structural surface, including its tendency and dip angle, (2) the location and scale of the traces of the structural surface, and (3) the spacing and density of the structural surface.
Analysis Result
According to the on-site survey, the survey area was covered by the UAV mapping method, and six ground control target sites were set up in survey areas I and II. There were a total of eight steps in the effective area of the point cloud generated in area I, with a length of about 230 m and a height of about 180 m, and a total of seven steps in the effective area of the point cloud generated in survey area II.
The point cloud data of investigation area I are shown in Figure 4. The numbers 1-9 in Figure 4 represent the numbering of faults or joints. Two large-scale interlayer faults have developed in the slope of survey area I, accompanied by several joint fissures. Fault 1 has an exposure length of about 120 m and an occurrence of 290-320°∠45-55°. Its lower part forms a wedge with joint 8 (335-355°∠60-70°, outcrop length of about 40 m), causing a rock landslide there. Fault 2 has an outcrop length of about 105 m, and its occurrence is 280-300°∠60-70°; a muddy zone about 2-3 m thick was surveyed in its lower part. Joints 3, 5, 6, and 9 belong to the same joint set: the occurrence is 295-320°∠50-60°, and the exposure lengths are 13 m, 18 m, 16 m, and 29 m, respectively. Joints 4 and 7 belong to another joint set: the occurrence is 355-025°∠50-60°, and the exposure lengths are 10 m and 14 m, respectively. There are large faults with several joint fissures in investigation areas I and II; the average joint spacing is about 5-30 m, and the average joint density is 0.033-0.2/m. DIPS software was used to map the joint strike rosettes of the two survey areas, as shown in Figure 6.
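As a side calculation implied by the wedge formed by Fault 1 and joint 8, the trend and plunge of the line along which two structural planes intersect follow from the cross product of the plane normals. The sketch below uses the mid-range orientations quoted above (Fault 1: 305°∠50°, joint 8: 345°∠65°); axes are x = east, y = north, z = up.

import numpy as np

def normal(dip_dir_deg: float, dip_deg: float) -> np.ndarray:
    """Upward unit normal of a plane given dip direction and dip (degrees)."""
    a, b = np.radians(dip_dir_deg), np.radians(dip_deg)
    return np.array([np.sin(a) * np.sin(b), np.cos(a) * np.sin(b), np.cos(b)])

def intersection_trend_plunge(plane1, plane2):
    """Trend/plunge (degrees) of the intersection line of two planes."""
    v = np.cross(normal(*plane1), normal(*plane2))
    if v[2] > 0:
        v = -v                                    # point the line downward
    trend = np.degrees(np.arctan2(v[0], v[1])) % 360.0
    plunge = np.degrees(np.arcsin(-v[2] / np.linalg.norm(v)))
    return trend, plunge

print(intersection_trend_plunge((305.0, 50.0), (345.0, 65.0)))
# -> roughly (287, 49): the wedge line plunges ~49 degrees toward WNW.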
Analysis of the Influence of Blasting Vibrations on Slope Stability
Blasting operations are frequent in most open-pit mining processes, and the seismic waves generated by blasting are an important factor affecting slope stability. Due to the existence of structural planes in the rock and soil of the slope, when the vibration wave generated by blasting passes through a weak structural plane, the reflection of the vibration wave at the structural plane is enhanced, which may cause the slope to fail at the structural plane. On the other hand, the blasting vibration wave exerts a dynamic load on the weak structural plane; when the load exceeds the bearing capacity of the structural plane, the slope fails. At the same time, due to the heterogeneity and discontinuity of the slope rock mass, the vibration wave causes a certain relative displacement of the internal structure of the slope rock mass, which can also lead to slope failure.
Testing Method
Usually, the bumping and shaking of all nearby objects caused by blasting is called the blasting seismic effect. When the distance is more than 150 times the radius of the explosive charge, the intensity of the vibration wave caused by the blasting is weakened; this means that the particles can only undergo elastic vibration, which cannot cause rock mass damage. This is called an elastic wave. A seismic wave is an elastic wave composed of a body wave and a surface wave. The body wave is divided into a longitudinal wave and a transverse wave. The longitudinal wave has the characteristics of low amplitude, short duration, and high speed, while the transverse wave has a long duration, and its amplitude is larger than that of the longitudinal wave. The surface wave is formed by the multiple reflection and superposition of the body wave at the free surface. Its propagation speed is slower than that of the body wave, but it carries more energy. The seismic damage caused by blasting is mainly the effect of the surface wave. When the distance is short, the longitudinal wave, shear wave, and surface wave arrive almost at the same time, so it is difficult to identify the type of wave. When the distance is large, the three waves begin to separate and can therefore be identified, and the parameters of the blasting vibration are then determined. This vibration monitoring was conducted using the Mini series blasting vibration meter (Figure 7) produced by Chengdu Tai Ce Technology Co., Ltd. (Chengdu, China), which facilitates the online monitoring of blasting vibration velocity and frequency.
The center coordinates of each blasting area (all referenced to the same coordinate system) are shown in Table 1. According to the Technical Code for the Safety Monitoring of Slopes of Metallic and Nonmetallic Open-Pit Mines (AQ2063-2018), the monitoring points for blasting vibration velocity should be set at the foot of the slope in the main sliding direction of the slope, and there should be more than three monitoring points. The velocity of the particle in the three directions of X, Y, and Z should be monitored at the same time, and the maximum value in the three directions should be taken as the maximum value of the particle. The monitoring accuracy should be less than 0.001 cm/s. After monitoring, 13 groups of effective data for the peak vibration velocity, acceleration, and main vibration frequency of the particles were obtained. The monitoring results of each blasting vibration are shown in Table 3. For specific seismic wave propagation conditions, the particle vibration velocity is related to the charge weight and the distance from the particle to the center of the explosion source. The relationship of the peak particle vibration velocity with charge weight, distance, site factor K, and attenuation index α conforms to Sadovsky's formula:

$$V = K\left(\frac{\sqrt[3]{Q}}{R}\right)^{\alpha} = K\rho^{\alpha} \quad (1)$$

In Formula (1), V is the peak vibration velocity of the particle in cm/s; Q is the amount of explosive in kg; R is the distance between the measuring point and the center of the explosion source in m; K is a coefficient related to rock properties, blasting methods, and other factors, namely the site factor; α is the seismic wave attenuation index, related to geological conditions; and $\rho = \sqrt[3]{Q}/R$ is the proportional dose.
Because the relationship between V and ρ in the above attenuation formula is not linear, it is necessary to convert the formula into a linear relationship for regression, so as to obtain the corresponding K and α values. Taking logarithms on both sides of the formula gives the following linear form:

$$\ln V = \ln K + \alpha \ln \rho \quad (2)$$

When the effective data are sufficient, the above equation is regressed by the least squares method of mathematical statistics.
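A sketch of this regression in Python; the data below are illustrative placeholders chosen to be roughly consistent with the fitted law, not the actual 13 monitored groups.

import numpy as np

Q = np.array([300.0, 350.0, 410.0, 280.0, 390.0])  # charge per delay, kg
R = np.array([120.0, 150.0, 90.0, 200.0, 110.0])   # distance to source, m
V = np.array([1.8, 1.3, 3.6, 0.7, 2.4])            # peak velocity, cm/s

rho = np.cbrt(Q) / R                               # proportional dose
alpha, ln_k = np.polyfit(np.log(rho), np.log(V), 1)
print(f"K = {np.exp(ln_k):.1f}, alpha = {alpha:.3f}")  # close to the fit reported below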
Evaluation of the Influence of Blasting Vibrations on Slope Stability
After eliminating the obvious data noise, the least squares method is used to analyze Equation (2), and the coefficient K and attenuation index α, which are related to the terrain and geological conditions between the blasting point and the measuring point, are fitted. The fitting results are shown in Figure 9. The site coefficient K = 328.02 and the attenuation index α = 1.809 are obtained. The formula for the vibration velocity attenuation of blasting vibration particles in the monitoring area of the Yinshan Mine is therefore:

$$V = 328.02\left(\frac{\sqrt[3]{Q}}{R}\right)^{1.809} \quad (3)$$

Three blasting vibration tests were carried out at the blasting vibration measuring points, and the following conclusions are drawn from the analysis. According to China's Technical Code for Non-coal Open-pit Slope Engineering (GB51016-2014) and the Technical Code for Safety Monitoring of Slopes of Metallic and Non-metallic Open-Pit Mines (AQ2063-2018), the particle vibration velocity of the slope should be less than 24 cm/s. According to the monitoring results, the maximum one-stage charge of the three blastings is 410 kg, and the maximum blasting vibration is measured to be 7.8745 cm/s in the vertical direction (vector combined velocity 8.8547 cm/s), which meets the requirements of the safety specifications.
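A quick check of the safety margin implied by Formula (3): predicted peak particle velocity for the maximum one-stage charge against the 24 cm/s code limit. The 60 m stand-off distance is an illustrative value, not a surveyed one.

K, ALPHA, LIMIT = 328.02, 1.809, 24.0   # site factor, attenuation index, cm/s

def peak_velocity(q_kg: float, r_m: float) -> float:
    """Sadovsky attenuation law, Formula (3); returns cm/s."""
    return K * (q_kg ** (1.0 / 3.0) / r_m) ** ALPHA

v = peak_velocity(410.0, 60.0)
print(f"V = {v:.2f} cm/s, within the 24 cm/s limit: {v < LIMIT}")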
Model Building
Due to the increasing depth and difficulty of mining in large and medium-sized open-pit mines at home and abroad, the cost of open-pit mining has also increased. Therefore, more and more open-pit mines have begun to shift from open-pit mining alone to a combination of open-pit and underground mining: open-pit mining is carried out above a certain depth, while underground mining is used below that depth. The influence of the goaf left by underground mining on the stability of the slope cannot be ignored. The mined-out area near the slope of the Yinshan open-pit mine is mainly distributed in the south of the stope and comprises a total of thirteen partitions. The maximum width of the mined-out area is about 3 m, the maximum length is about 75 m, and the maximum height is about 35 m. In order to study the influence of the mined-out area on the slope stability of the Yinshan Mine, this chapter uses 3Dmine 2021 (Beijing, China) mining engineering software and Midas GTS NX 2021 (Seoul, Republic of Korea) software to establish a three-dimensional slope model. The three-dimensional model is shown in Figure 10. FLAC3D three-dimensional numerical simulation software is used to analyze the influence of the mined-out area on the stability of the Yinshan open-pit slope under the conditions of the mined-out area group. FLAC3D is a fast 3D Lagrangian analysis program that uses explicit Lagrangian algorithms and hybrid discrete partitioning techniques to simulate the 3D mechanical properties of geotechnical and other materials [32]. The analysis process is shown in Figure 11.
Considering the size effect, the size of the calculation model is 800 m × 1410 m × 774 m, and a total of 181,272 nodes and 1,032,056 meshes are generated. The final calculation model is shown in Figure 12. The calculation is divided into two phases: on the one hand, the influence of the goaf on slope stability under the current slope conditions is analyzed; on the other hand, the influence of the goaf on slope stability in the later mining process is analyzed. Since the mined-out area group is located in the southern slope of the stope, the main control lithology is quartz porphyry and phyllite, so the numerical calculation model strata are dominated by these two kinds of lithology. The mechanical parameters of the rock mass are shown in Table 4.

The model calculation considers the self-weight stress field and the initial stress in the three directions of X, Y, and Z according to the ratio of horizontal stress to vertical stress, and it solves the equilibrium under this condition. According to the selected mechanical parameters of the rock mass, the elastic constitutive model is used to quickly balance the model. After the initial stress field is formed by solving the balance, the displacement is initialized, and then the Mohr-Coulomb constitutive model is used to complete the balance again. After that, the Null constitutive model is used for the subsequent excavation calculation. The boundary conditions are set as follows: the top of the model is a free surface, with normal constraints applied to the surrounding boundaries and full constraints applied to the bottom.
Numerical Simulation Analysis of the Current Slope Stability
Because there are many existing mined-out areas, the relative position of each mined-out area with respect to the slope of the quarry differs, as do the interactions between the mined-out areas and the degree to which each mined-out area affects slope stability. This paper mainly studies mined-out areas H and I, which have the most obvious influence on the current slope and the designed excavation slope.
The numerical calculation results for the profile through mined-out areas H and I under the current slope are shown in Figure 13. The existence of mined-out area I above mined-out area H causes the plastic zone around the upper left corner of area H to extend upward to the bottom of the current slope, while the overall stability of the slope is little affected by this plastic zone penetration. However, in order to avoid further large-scale penetration of the plastic zone during later mining, treatment measures for mined-out areas I and H should be carried out in advance, focusing on the stability of the upper left area of mined-out area H. From the vertical displacement cloud map, it can be seen that the vertical displacement gradually increases from the inside of the slope to the slope surface, and the displacement from the top of the slope to the foot of the slope also gradually increases. The maximum vertical displacement of the slope surface appears at the broken foot of the slope, at 7 cm, and the impact of mined-out areas H and I on the internal displacement of the slope is limited. There is no large vertical displacement around the two mined-out areas.
Design Excavation Slope Stability Numerical Simulation Analysis
The results of the numerical calculation for the profile through mined-out areas H and I under the designed excavation slope are shown in Figure 14. Areas H and I interact with each other, causing a plastic zone to appear around the two areas; the plastic zone extends upward and penetrates the step slope. The plastic zone gathers at the foot of the designed excavation slope, while little or no plastic zone appears at other locations. The vertical displacement of the designed excavation slope is increased compared with the current slope, and the maximum displacement appears at the slope surface near the foot of the slope, reaching 10 cm.
Safety Factor Calculation for the South Slope of the Open Pit
Stability Analysis Method
The limit equilibrium analysis method is one of the most commonly used methods for slope stability research. Limit equilibrium methods include the Fellenius method, the simplified Bishop method, the simplified Janbu method, and the Spencer method. The technical code for building slope engineering (GB50330-2013) recommends the simplified Bishop method for calculating slope stability, and a large number of engineering practices have shown that the Bishop method has high accuracy, making it the most widely used safety-factor calculation method in engineering. Therefore, this paper uses the simplified Bishop method, implemented in Rocscience Slide 6.0 (Toronto, ON, Canada), to analyze the slope stability of each partition of the Yinshan open-pit mine.
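For illustration only, the sketch below shows the simplified Bishop iteration for a circular slip surface divided into vertical slices; it is not the authors' Slide 6.0 model, and the slice weights, geometry, and Mohr-Coulomb parameters in the example are hypothetical.

    import math

    def bishop_fs(slices, c, phi_deg, tol=1e-6, max_iter=100):
        # slices: list of (W, alpha_deg, b) = slice weight (kN/m), base
        # inclination (deg), slice width (m); c (kPa) and phi_deg are the
        # Mohr-Coulomb cohesion and friction angle. Pore pressure is ignored.
        tan_phi = math.tan(math.radians(phi_deg))
        fs = 1.0  # initial guess; FS appears on both sides, so iterate
        for _ in range(max_iter):
            num = den = 0.0
            for W, alpha_deg, b in slices:
                a = math.radians(alpha_deg)
                m_alpha = math.cos(a) * (1.0 + math.tan(a) * tan_phi / fs)
                num += (c * b + W * tan_phi) / m_alpha
                den += W * math.sin(a)
            fs_new = num / den
            if abs(fs_new - fs) < tol:
                return fs_new
            fs = fs_new
        return fs

    # Hypothetical five-slice example: (weight kN/m, base angle deg, width m)
    slices = [(120, 10, 2), (260, 20, 2), (340, 30, 2), (300, 42, 2), (150, 55, 2)]
    print(round(bishop_fs(slices, c=30.0, phi_deg=32.0), 3))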
Calculation Load Combination and Calculation Method
According to the specific conditions of the Yinshan open-pit mine, two load combinations were analyzed for this slope analysis. Load combination I is the self-weight of the slope, and load combination II is the self-weight of the slope plus the blasting vibration force. In this analysis, the horizontal blasting vibration force of each slice can be calculated according to the following formulas:

F_i = β_i · α_i · W_i / g,   (4)

α_i = 2π · f · V_i.   (5)

In Formulas (4) and (5), F_i is the horizontal equivalent static blasting vibration force in kN; α_i is the maximum horizontal acceleration of the blasting vibration particle in m/s², whose values are shown in Table 5; β_i is the blasting vibration force coefficient, which may take values from 0.1 to 0.3; W_i is the weight of the slice in kN; g is the gravitational acceleration in m/s²; f is the blasting vibration frequency in Hz; and V_i is the horizontal vibration velocity of the particle at the center of gravity of the slice, calculated by Formula (1), in m/s. When considering the influence of the blasting vibration force in slope stability software calculations, it is necessary to determine the influence coefficient of the blasting vibration force level, ε = F_i / W_i. According to the blasting vibration test results, the average influence coefficient of the blasting vibration force level in each zone is between 0.01 and 0.03; the maximum value of 0.03 is used to calculate slope stability under the action of the blasting vibration force.
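As a minimal numerical sketch, assuming the pseudo-static relations just given (F_i = β_i·α_i·W_i/g with α_i = 2πf·V_i, reconstructed here from the variable definitions rather than copied from the paper's original formulas), the slice values below are hypothetical:

    import math

    def blast_force(W_i, V_i, f, beta_i=0.3, g=9.81):
        # W_i: slice weight (kN); V_i: horizontal particle velocity (m/s);
        # f: blasting vibration frequency (Hz); beta_i: 0.1-0.3 per the text
        alpha_i = 2.0 * math.pi * f * V_i      # peak acceleration, m/s^2
        F_i = beta_i * alpha_i * W_i / g       # equivalent static force, kN
        eps = F_i / W_i                        # force-level influence coefficient
        return alpha_i, F_i, eps

    # Hypothetical slice: 500 kN weight, 1 cm/s particle velocity, 10 Hz
    print(blast_force(500.0, 0.01, 10.0, beta_i=0.2))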
Safety Factor Calculation of Mine Slope Stability
The section selected for the calculation of the mine slope safety factor is shown in Figure 15. The calculated safety factor of the current slope is shown in Figure 16, and that of the designed excavation slope in Figure 17. The calculated safety factors and the planned excavation slope parameters are shown in Table 6. According to the relevant provisions of China's Non-ferrous Metal Mining Design Specification (GB50771-2012) and the Technical Specification for Slope Engineering of Non-Coal Open-Pit Mines (GB51016-2014), a safety factor of 1.20 is selected as the design stability safety factor. Combining the above limit equilibrium calculation results, it can be seen that: (1) The safety factors of the current slope under load combination I and load combination II are 1.473 and 1.409, respectively, which meet the requirement of the safety factor of 1.20 specified in the code, meaning that the overall slope is stable.
(2) The safety factor of the designed excavation slope is 1.130 when only the self-weight load is considered, and 1.081 when both the self-weight load and blasting vibration are considered, which does not meet the required safety factor of 1.20. Because of the final slope angle and slope height in this area, and because faults and goafs occur under some slopes in this area, there is a certain risk of instability. (3) The bottom of the final slope of the Yinshan open-pit mine is affected by the fault and the original underground mined-out area. The slope rock mass structure is broken and prone to wedge failure and local collapse.
Discussion
This paper presents the results of a geological survey conducted on the southern slope of the open pit of the Yinshan Mine, utilizing UAV mapping technology and the digital structural surface recognition method. Additionally, a production blasting vibration test was performed to analyze the influence of blasting vibration on the slope rock mass. Using FLAC3D software, the impact of the underground goaf on slope stability was analyzed, and the limit equilibrium analysis method was applied to evaluate the stability of both the current open-pit slope and the designed excavation slope. The following results were obtained:
(1) The southern slope of the mining area primarily exhibits structural surfaces with medium to large inclination angles. Statistics on joint fractures across the entire measurement site traced the development of thousands of discontinuities in each area; the average joint spacing was 5-30 m, and the average joint density was 0.033-0.2 per meter.
(2) A total of three blasting vibration tests were conducted using the blasting vibration monitoring system, in accordance with the specification. Based on the monitoring results, the maximum one-stage charge of the three blasts was 410 kg, and the measured vibration velocity did not exceed 24 cm/s. Analysis of the measured data yielded the blasting vibration attenuation law for the Yinshan Mine, and the vibration velocity attenuation formula was fitted.
(3) According to the three-dimensional numerical analysis, the slope as a whole exhibits no large-scale plastic zone damage, with only a small plastic zone generated on the step slope above the mined-out areas, mainly in shear failure.
(4) A typical section was selected for limit equilibrium analysis. Under load combination I (considering only the self-weight stress) and load combination II (which adds the blasting vibration force), the current slope meets the safety factor of 1.20, and the overall slope is stable. However, the designed excavation slope is near the critical value of the safety factor specified in the code under both load combinations and thus poses a certain risk of instability.
Conclusions and Recommendations
In this paper, we analyzed the engineering geology of the Yinshan open-pit mining area in detail and carried out on-site engineering geological investigation using a small low-speed UAV with image reconstruction and interpretation. We carried out a production blasting vibration test in the Yinshan Mine, obtained the blasting vibration attenuation law, and analyzed the impact of blasting vibration on the slope rock. We analyzed the stability of the mine slope using FLAC3D software and the limit equilibrium method. Based on the analysis of the data, the following conclusions and recommendations were made on the stability of the south slope of the Yinshan open pit:
(1) The main lithology of the mining area includes phyllite, quartz porphyry, and Quaternary residual deposits. The rock integrity of the slope ranges from "broken" to "more complete". There are three faults in the geological investigation area, which combine with joints and fissures to form cutting and sliding surfaces in the rock mass, easily creating wedges and other conditions favorable to collapse and sliding. It is suggested to pay close attention to the distribution of newly revealed fault fragmentation zones and densely jointed structural surfaces, to further analyze and summarize the destabilization and damage characteristics of the slope, and, if necessary, to take reinforcement measures for potentially dangerous slope bodies or partially flatten the slope.
(2) The maximum one-stage charge of the three blasts was 410 kg, and the maximum blast vibration was measured as 7.8745 cm/s in the vertical direction (combined velocity 8.8547 cm/s), which did not exceed 24 cm/s and met the safety requirements.
It is recommended that pre-splitting blasting and smooth (light surface) blasting technology be used when blasting slopes adjacent to the final boundary of the quarry or permanent slopes, optimizing blasting parameters and further strengthening research on vibration reduction control technology.
(3) According to the numerical simulation results, the plastic zones at the tops of mined-out areas H and I penetrate the upper step slope, suggesting that local damage may occur to the slope due to the impact of the mined-out areas; the influence of densely clustered voids is more obvious. According to the limit equilibrium analysis, the current slope meets the requirements of the safety code both when considering only the self-weight and when adding the blasting vibration force load, and its stability is good. However, the designed excavation slope is close to the critical value of the safety coefficient specified in the code under both load combinations, and there is a risk of instability. It is suggested that the final slope angle of some slopes be optimally adjusted during the design mining process, and that relevant technical measures be taken to detect potential mined-out areas in advance and deal with them in a timely manner to effectively guarantee production safety.
Figure 4. Investigation area I point cloud data.
The numbers 1-9 in Figure 4 represent the numbers of faults or joints. Two large-scale interlayer faults have developed in the slope of survey area I, accompanied by several joint fissures. Fault 1 has an exposure length of about 120 m and an occurrence of 290-320°∠45-55°; its lower part forms a wedge with joint 8 (335-355°∠60-70°, outcrop length of about 40 m), which has caused a rock landslide there. Fault 2 has an outcrop length of about 105 m and an occurrence of 280-300°∠60-70°; a muddy interlayer about 2-3 m thick was surveyed in its lower part. Joints 3, 5, 6, and 9 belong to the same joint group, with an occurrence of 295-320°∠50-60° and exposure lengths of 13 m, 18 m, 16 m, and 29 m, respectively. Joints 4 and 7 belong to the same joint group, with an occurrence of 355-025°∠50-60° and exposure lengths of 10 m and 14 m, respectively.
The point cloud data of investigation area II is shown in Figure 5. The numbers 1-3 in Figure 5 represent the numbers of faults or joints. A large-scale interlayer fault has developed in the slope of survey area II, accompanied by several joint fissures of the same occurrence. The exposure length of fault 1 is about 148 m, and its attitude is 300°∠50°. The exposure length of joint 2 is 57 m, and that of joint 3 is 60 m.
Figure 5. Investigation area II point cloud data.
Figure 6. Rose diagram of rock joint surface strike.
Figure 7. Composition of blasting vibration meter.
Vibration Test Scheme Design
According to the time sequence of on-site production blasting, the blasting vibration monitoring is divided into four blasting areas: A, B, C, and D. Five monitoring points are set up in each blasting area. The blasting areas and vibration monitoring points are shown in Figure 8.
Figure 8. Layout of measuring points in the blasting area and vibration measuring area.
The coordinates use the China Geodetic Coordinate System 2000 (CGCS2000), and the unit of coordinates is m. Blasting area A is located on the -36 m platform on the north side of the stope, and its maximum one-stage charge is 410 kg; its vibration parameters are monitored by vibration points A1-A5. Blasting area B is located on the -48 m platform on the southwest side of the stope, with a maximum one-stage charge of 250 kg; its vibration parameters are monitored by vibration points B1-B5. Blasting area C is located on the -48 m platform on the east side of the stope, with a maximum one-stage charge of 400 kg. Blasting area D is located on the -60 m platform on the east side of the stope. The vibration parameters of blasting areas C and D are monitored by vibration measuring points C1-C5. The coordinates of each vibration point are shown in Table 1.
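The attenuation formula itself (Formula (1)) is not reproduced in this excerpt; fits of this kind commonly use the Sadovsky form V = K·(Q^(1/3)/R)^α, and the sketch below shows how such a curve can be fitted by log-linear least squares. The monitoring records in the example are hypothetical, not the paper's measured data.

    import numpy as np

    # Hypothetical records: charge Q (kg), distance R (m), peak velocity V (cm/s)
    Q = np.array([410.0, 410.0, 250.0, 400.0, 400.0])
    R = np.array([120.0, 180.0, 150.0, 200.0, 260.0])
    V = np.array([7.9, 4.1, 3.2, 2.8, 1.5])

    # Taking logs linearizes the model: ln V = ln K + alpha * ln(Q**(1/3) / R)
    x = np.log(np.cbrt(Q) / R)
    alpha, lnK = np.polyfit(x, np.log(V), 1)
    print(f"K = {np.exp(lnK):.1f}, alpha = {alpha:.2f}")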
Figure 9. Fitting curve of the particle velocity of the blasting vibration.
Figure 10. Three-dimensional model of the relative position between mine and goaf.
Figure 13. Numerical calculation result cloud of the current slope mining area H and mining area I profile. (a) H mining area plastic zone. (b) Vertical displacement cloud map of H mining area. (c) I mining area plastic zone. (d) Vertical displacement cloud map of I mining area.
Figure 14. Numerical calculation result cloud of the design excavation slope mining area H and mining area I profile. (a) H mining area plastic zone. (b) Vertical displacement cloud map of H mining area. (c) I mining area plastic zone. (d) Vertical displacement cloud map of I mining area.
Table 1. Center coordinates of the blasting area.
Table 4. Mechanical parameters of the rock mass.
Table 5. Maximum horizontal acceleration data for blast vibration.
Table 6. Planned excavation slope safety factor.
Table 6 columns: planned excavation slope safety factor (load combination I; load combination II) and planned excavation slope parameters, including the final slope angle (°).
Figure 15. Safety factor calculation section.
| 13,094.2 | 2023-05-11T00:00:00.000 | [
"Engineering",
"Geology",
"Environmental Science"
] |
iMPT-FRAKEL: A Simple Multi-label Web-server that Only Uses Fingerprints to Identify which Metabolic Pathway Types Compounds can Participate In
Background: Metabolic pathways are among the most basic biological pathways in living organisms. A metabolic pathway consists of a series of chemical reactions and provides the necessary molecules and energy for the organism. To date, many metabolic pathways have been detected. However, hidden participants (compounds and enzymes) still exist for some metabolic pathways due to their complexity and diversity. It is necessary to develop quick, reliable, and non-animal-involved prediction models to recognize the metabolic pathways of any compound. Methods: In this study, a multi-label classifier, namely iMPT-FRAKEL, was developed for identifying which metabolic pathway types a compound can participate in. Compounds and 12 metabolic pathway types were retrieved from KEGG. Each compound was represented by its fingerprints, which are the most widely used representation of compounds and can be extracted from the SMILES format. A popular multi-label classification scheme, the Random k-Labelsets (RAKEL) algorithm, was adopted to build the classifier, with the classic machine learning algorithm Support Vector Machine (SVM) with an RBF kernel as the basic classification algorithm. Ten-fold cross-validation was used to evaluate the performance of iMPT-FRAKEL. In addition, a web-server version of the classifier was set up, which can be accessed at http://cie.shmtu.edu.cn/impt/index. Results: iMPT-FRAKEL yielded an accuracy of 0.804, an exact match of 0.745, and a hamming loss of 0.039. Comparison results indicated that this classifier is superior to other models, including models with Binary Relevance (BR) or other classification algorithms. Conclusion: The proposed classifier employs limited prior knowledge of compounds but gives satisfactory performance in recognizing the metabolic pathways of compounds.
INTRODUCTION
Metabolomics is an important part of systems biology. Many life activities in cells occur at the metabolite level, such as cell signaling, energy transfer, and cell-to-cell communication. At present, metabolomics has developed rapidly and penetrated many fields highly related to human health care, including disease diagnosis, pharmaceutical research and development, nutritional food science, toxicology, environmental science, and botany. Metabolomics covers several metabolic pathways, and each metabolic pathway is composed of a series of successive chemical reactions. Each reaction is catalyzed by an enzyme, converts one molecule to another, and provides cells with the necessary molecules and energy to sustain the life of the organism [1]. Thus, the metabolic pathway is one of the most basic pathways in living organisms, and a good understanding of metabolic pathways is very helpful for studying the mechanisms of some basic biological processes.
In the past ten years, many metabolic pathways have been detected for numerous organisms, and this information is stored in online public databases. The Kyoto Encyclopedia of Genes and Genomes (KEGG) [2,3] is one of the most popular metabolome databases, including metabolic pathways and interaction network information. In KEGG PATHWAY (https://www.genome.jp/kegg/pathway.html), metabolic pathways are classified into 12 types: (1) Carbohydrate metabolism; (2) Energy metabolism; (3) Lipid metabolism; (4) Nucleotide metabolism; (5) Amino acid metabolism; (6) Metabolism of other amino acids; (7) Glycan biosynthesis and metabolism; (8) Metabolism of cofactors and vitamins; (9) Metabolism of terpenoids and polyketides; (10) Biosynthesis of other secondary metabolites; (11) Xenobiotics biodegradation and metabolism; (12) Chemical structure transformation maps. As mentioned above, compounds are the main components of each metabolic pathway, so it is essential to correctly predict which metabolic pathway types a compound can participate in. Such studies are helpful for finding new participants of existing metabolic pathways. Clearly, such prediction via traditional experiments is of low efficiency and high cost; developing effective computational methods is an alternative way.
To date, several computational methods have been proposed in this regard. The first was proposed by Cai et al. [4]; their method used functional groups to represent each compound and adopted the Nearest Neighbor Algorithm (NNA) [5] as the prediction engine. Later, Lu et al. proposed a more powerful method, which employed AdaBoost as the classification algorithm [1]. However, these two studies only considered compounds belonging to exactly one pathway type. In fact, several compounds can participate in more than one pathway type, so several investigators followed by developing multi-label classifiers. In 2011, Hu et al. proposed a multi-label classifier using the chemical-chemical interaction information in STITCH [6]. Gao et al. fused the interactions of chemicals and proteins to build a classifier with wide applications, because this method can not only assign compounds to metabolic pathway types but also predict the metabolic pathway types of enzymes, another main component of metabolic pathways [7]. Chen et al. [8] used the minimum redundancy maximum relevance (mRMR) [9] method to analyze molecular fragment features of compounds, thereby selecting optimal features to build a multi-label classifier with the help of the Support Vector Machine (SVM) [10]. The above-mentioned classifiers only output a ranking of metabolic pathway types for a given compound; that is, they cannot determine which types are the predictions. Recently, two other methods were proposed. Fang and Chen converted the original multi-label classification problem into a binary classification problem by pairing compounds and pathway types as samples [11]; however, the selection of negative samples is a problem, as different negative samples can induce different models. Guo et al. built a binary classifier for each pathway type with a complex compound representation scheme and SVM [12]. Because this method constructs a classifier for each pathway type, users have to execute several classifiers to determine the pathway types of a given compound, increasing the computational complexity.
To partly overcome the defects of the above-mentioned methods and build a new multi-label classifier with wide applications, we used the most classic and widely used form, fingerprints, to represent each compound; these can be extracted from its Simplified Molecular Input Line Entry System (SMILES) [13] format. Then, the Random k-labelsets (RAKEL) algorithm [14,15] was adopted to process the multi-label problem, and SVM with an RBF kernel was adopted to build the basic classifiers, thereby constructing a multi-label classifier, namely iMPT-FRAKEL. The proposed classifier can determine the specific metabolic pathway types of a given compound rather than only giving a pathway type ranking, as reported in previous studies [6-8]. On the other hand, the construction of iMPT-FRAKEL does not involve negative sample selection, overcoming the problem in one study [11], and it is a unified model for predicting the metabolic pathway types of compounds, improving on the method in another study [12] that consisted of several classifiers. Furthermore, the proposed classifier uses limited prior knowledge of compounds, because it can make predictions as long as the SMILES format of a compound is available. Thus, our classifier has wider applications than most previous classifiers, which always needed additional prior knowledge of compounds, such as chemical interaction information. Ten-fold cross-validation of iMPT-FRAKEL indicated that the accuracy and exact match were 0.804 and 0.745, respectively, suggesting the high performance of the classifier. In addition, a web-server with the same name was developed, which can be accessed at http://cie.shmtu.edu.cn/impt/index.
Benchmark Dataset
Details of the metabolic pathways were obtained from KEGG PATHWAY (http://www.kegg.jp/kegg/pathway.html) (accessed in September 2019) [2,3]. 5,641 compounds that can participate in at least one metabolic pathway were obtained. After excluding compounds without SMILES [13] representations or ECFP [16] fingerprints, we finally obtained 4,739 compounds; detailed information on these compounds can be accessed at http://cie.shmtu.edu.cn/impt/index. As mentioned in Section 1, metabolic pathways in KEGG are classified into 12 types. Accordingly, the 4,739 compounds can also be classified into 12 classes, in such a way that if a compound belongs to a pathway in a given pathway type, the compound is assigned to that pathway type. For ease of description, we tagged the 12 pathway types as P1, P2, ..., and P12, respectively. The correspondence between pathway type names and these tags is shown in columns 1 and 2 of Table 1, and the number of compounds in each pathway type is also listed there. The total number of compounds over the 12 pathway types was 5,784, which is larger than the number of distinct compounds (4,739), indicating that some compounds belong to more than one metabolic pathway. Thus, assigning compounds to pathway types is a typical multi-label classification problem.
Representation of Compounds
To construct an efficient classifier, each sample should be encoded into a series of numbers that captures its essential properties. In cheminformatics, SMILES [13] is the most classic and widely used scheme for representing compounds [17-20]. In this scheme, each compound is represented by a line notation of ASCII strings, from which fingerprints can be extracted and collected in a binary vector. In this study, we first obtained the SMILES format of the 4,739 compounds from STITCH and used RDKit [21] to compute the ECFP [16] fingerprints of each compound. The resulting binary vectors for the investigated compounds are available at http://cie.shmtu.edu.cn/impt/index.
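A minimal sketch of this step with RDKit is shown below; the radius of 2 (i.e., ECFP4) is an assumption, since the text specifies only the fingerprint family and the 1024-bit length.

    from rdkit import Chem
    from rdkit.Chem import AllChem

    def ecfp_bits(smiles, n_bits=1024, radius=2):
        # Convert a SMILES string to an n_bits-dimensional ECFP bit vector
        mol = Chem.MolFromSmiles(smiles)
        if mol is None:        # compounds without a valid SMILES are skipped
            return None
        fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
        return list(fp)

    print(ecfp_bits("CC(=O)Oc1ccccc1C(=O)O")[:16])  # aspirin, first 16 bits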
Multi-label Classification Model
As described in Section 2.1, some compounds have multiple pathway types, making this a multi-label classification problem. Generally, there are two ways of building multi-label classification models: (1) problem transformation and (2) algorithm adaptation. The first converts the original problem into several single-label classification models, while the second directly reforms a specific single-label classification algorithm so that it can tackle multi-label classification problems. In this study, we adopted the first way to construct the model. The well-known RAKEL algorithm [14,15] was employed, which has been applied to several biological problems [20, 22-27].
The RAKEL algorithm extends another multi-label classification method, the Label Powerset (LP) algorithm [28,29], which treats each combination of labels as a new label, thereby converting the task into a single-label classification problem. However, the LP algorithm has several defects, such as high computational cost and sample skew. In view of this, Tsoumakas et al. proposed the RAKEL algorithm [14,15]. It breaks the initial set of labels into m label subsets of small size k. On each label subset, the LP algorithm is adopted to train a multi-label classifier, namely an LP classifier. For example, given a label subset {l1, l2, ..., lk}, its power set is defined as the new label set; these new labels are assigned to each sample according to its original labels, so that each sample has exactly one new label. An LP classifier is constructed on the dataset with new labels based on a given classification algorithm. A model built by the RAKEL algorithm thus contains m LP classifiers. For an input sample s, each LP classifier gives its binary decision on each involved label. For each label, the RAKEL algorithm computes the average of the binary decisions yielded by the LP classifiers whose underlying label set contains that label; if the average is larger than a predefined threshold, usually set to 0.5, the label is assigned to the input sample. As mentioned above, there are two main parameters for the RAKEL algorithm, m and k, where k determines the size of the label subsets and m stands for the number of label subsets, i.e., the number of LP classifiers. On the other hand, the basic single-label classification algorithm is also an important factor in building effective RAKEL classifiers. Detailed descriptions of the RAKEL algorithm can be found in another study [15].
To quickly implement the RAKEL algorithm, Meka (http://waikato.github.io/meka/) [30] was employed, an open-source machine learning framework collecting several multi-label classification schemes. One tool, named 'RAKEL', implements the RAKEL algorithm. We tried several values of m and k and selected the best ones to construct the final classifier. Furthermore, two classic single-label classification algorithms, SVM [10] and random forest (RF) [31], were tried. For ease of description, models constructed by the RAKEL algorithm are called RAKEL models.
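The study used Meka's 'RAKEL' tool; purely to illustrate the mechanics described above (m random k-label subsets, one LP classifier per subset, and a 0.5 vote threshold), the toy Python sketch below uses scikit-learn's SVC as the base learner. The class and the demo data are illustrative only, not the paper's configuration (the paper's best settings were k = 12 and m = 10).

    import random
    import numpy as np
    from sklearn.svm import SVC

    class RAkEL:
        """Toy RAKEL: m random k-label subsets, one LP classifier per subset."""
        def __init__(self, k=3, m=10, seed=0):
            self.k, self.m, self.seed = k, m, seed

        def fit(self, X, Y):                 # Y: (n_samples, n_labels) 0/1 array
            rng = random.Random(self.seed)
            self.n_labels_ = Y.shape[1]
            self.models_ = []
            for _ in range(self.m):
                subset = sorted(rng.sample(range(self.n_labels_), self.k))
                # LP step: each distinct label combination on the subset is one class
                combos = [tuple(row) for row in Y[:, subset]]
                class_of = {c: i for i, c in enumerate(sorted(set(combos)))}
                combo_of = {i: c for c, i in class_of.items()}
                clf = SVC(kernel="rbf", gamma=0.03, C=1.0)
                clf.fit(X, [class_of[c] for c in combos])
                self.models_.append((subset, clf, combo_of))
            return self

        def predict(self, X):
            votes = np.zeros((len(X), self.n_labels_))
            counts = np.zeros(self.n_labels_)
            for subset, clf, combo_of in self.models_:
                pred = clf.predict(X)
                for j, label in enumerate(subset):
                    counts[label] += 1
                    votes[:, label] += np.array([combo_of[p][j] for p in pred])
            counts[counts == 0] = 1          # labels never sampled simply stay 0
            return (votes / counts > 0.5).astype(int)

    # Demo on random stand-in data (60 samples, 32 features, 6 labels)
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(60, 32)).astype(float)
    Y = rng.integers(0, 2, size=(60, 6))
    print(RAkEL(k=3, m=8).fit(X, Y).predict(X[:3]))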
Classification Algorithm
To construct the LP classifiers, a single-label classification algorithm was necessary. Here, we tried two classic classification algorithms, SVM [10] and RF [31], and finally selected the better one. Brief descriptions follow.
The principle of SVM is to select an appropriate kernel (such as the RBF kernel) to map all samples in the training dataset to a higher-dimensional space, in which samples of different classes can easily be separated by a hyperplane. Given a kernel, the training procedure of SVM finds an optimal hyperplane, and the class of a query sample is determined according to which side of the hyperplane it lies on. To date, several types of SVM have been proposed to deal with different problems, and they have wide applications in bioinformatics [20, 22, 32-36]. In this study, we used an SVM whose training procedure was optimized by the Sequential Minimal Optimization (SMO) algorithm [37]. The kernel was set to a polynomial kernel or an RBF kernel.
RF [31] is another widely used classification algorithm. It consists of several decision trees, each built with samples randomly selected, with replacement, from the original training dataset and with randomly selected features. Although the decision tree is a weak classification algorithm, RF is a much more powerful one [38]. Thus, RF is often an important choice for constructing classifiers in bioinformatics and computational biology [17, 18, 39-44]. The number of decision trees is the most important parameter of RF, and we tried several values for it in this study.
All the above-mentioned SVM and RF have been integrated in Meka [30]. They were directly invoked in the tool 'RAKEL'.
Construction of iMPT-FRAKEL
Based on the dataset and methods mentioned above, we constructed a multi-label classifier, named iMPT-FRAKEL, for predicting the metabolic pathway types of compounds. The entire procedure is illustrated in Fig. (1). First, 4,739 compounds were retrieved from KEGG PATHWAY and constituted the underlying dataset. Then, each compound was converted into its SMILES format, from which its fingerprints were extracted via RDKit and encoded into a 1024-D vector. Based on the pathway types of each compound, we assigned the corresponding pathway type labels to each vector. The vectors together with their labels were fed into the RAKEL algorithm, which incorporated SVM with an RBF kernel as the classification algorithm, to construct iMPT-FRAKEL.
Assessment and Measurement
To evaluate the performance of each classifier in this study, ten-fold cross-validation [45] was used. This method randomly and equally divides the original training samples into ten subsets. The subsets are singled out one by one as testing samples, while the samples in the remaining nine subsets are used to train the classifier, so that each sample is tested exactly once.
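A sketch of this protocol with scikit-learn's KFold is shown below (illustrative only; the study ran its evaluation within Meka). With ten samples and ten folds, each held-out fold contains exactly one sample.

    import numpy as np
    from sklearn.model_selection import KFold

    X = np.arange(20).reshape(10, 2)           # stand-in feature matrix
    kf = KFold(n_splits=10, shuffle=True, random_state=1)
    for fold, (train_idx, test_idx) in enumerate(kf.split(X)):
        # train on 9 subsets, test on the held-out one; each sample is
        # used as a test sample exactly once across the 10 folds
        print(fold, test_idx)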
As a multi-label classification model, we mainly used three measurements to evaluate the predicted results yielded by ten-fold cross-validation: accuracy, exact match, and hamming loss. For their formulation, some notation is necessary. Given a dataset with n samples and m labels, let L_i be the set consisting of the true labels of the i-th sample, and L_i' the set consisting of the predicted labels of the i-th sample. The three measurements are defined as follows:

Accuracy = (1/n) Σ_i |L_i ∩ L_i'| / |L_i ∪ L_i'|, Exact match = (1/n) Σ_i I(L_i = L_i'), Hamming loss = (1/(n·m)) Σ_i |L_i Δ L_i'|, (1)

where I(·) equals 1 if its argument holds and 0 otherwise, and Δ is the symmetric difference operation of L_i and L_i', defined as

L_i Δ L_i' = (L_i ∪ L_i') \ (L_i ∩ L_i'). (2)

Clearly, the higher the accuracy and exact match are, the better the performance of the multi-label classification model is, while the lower the hamming loss is, the better the performance.
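These three measurements translate directly into code; a small sketch over Python sets (the guard on the denominator handles the degenerate case where both sets are empty):

    def multilabel_metrics(true_sets, pred_sets, m):
        # Accuracy, exact match, and hamming loss for n samples and m labels;
        # true_sets and pred_sets are lists of Python sets (L_i and L_i')
        n = len(true_sets)
        acc = sum(len(t & p) / (len(t | p) or 1)
                  for t, p in zip(true_sets, pred_sets)) / n
        exact = sum(t == p for t, p in zip(true_sets, pred_sets)) / n
        hloss = sum(len(t ^ p) for t, p in zip(true_sets, pred_sets)) / (n * m)
        return acc, exact, hloss

    # Two toy samples over 12 pathway types
    print(multilabel_metrics([{1, 2}, {3}], [{1}, {3}], m=12))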
RESULTS AND DISCUSSION
In this study, we proposed a multi-label classifier, iMPT-FRAKEL, to predict which metabolic pathway types a given compound can participate in. The construction and assessment procedures are illustrated in Fig. (1). In this section, we mainly present the evaluation results of iMPT-FRAKEL and compare it with other models to demonstrate its utility.
Performance of iMPT-FRAKEL
iMPT-FRAKEL adopted RAKEL and SVM. To build the model with the best performance, we tried several parameter combinations. The main parameter k of RAKEL was set to various values between 2 and 12, while the other parameter m was set to 10. For SVM, the regularization parameter C was set to 0.5, 1, and 2; two kernels, the polynomial kernel and the RBF kernel, were tried, where the exponent of the polynomial kernel was set to 1, 2, and 3, and the parameter γ of the RBF kernel was set to 0.01, 0.02, and 0.03. Models with different parameters were evaluated by ten-fold cross-validation ten times. Finally, we found that k = 12, C = 3, and the RBF kernel with γ = 0.03 yielded the best performance. The averages of accuracy, exact match, and hamming loss are listed in Table 2: 0.804, 0.745, and 0.039, respectively. Specifically, the hamming loss values yielded by the ten runs of ten-fold cross-validation were all the same, 0.039. The distributions of accuracy and exact match are illustrated in Fig. (2); the accuracies all lay between 0.802 and 0.806 and the exact match values between 0.741 and 0.747, indicating that the performance of iMPT-FRAKEL was quite stable across different divisions of the dataset. As mentioned above, we also tried another widely used kernel, the polynomial kernel, for SVM. The performance of the best model with SVM (polynomial kernel) as the basic classification algorithm is also listed in Table 2: the accuracy, exact match, and hamming loss were 0.787, 0.716, and 0.046, respectively. Compared with iMPT-FRAKEL, the accuracy was 1.7% lower, the exact match 2.9% lower, and the hamming loss 0.7% higher. These results indicated that selecting the RBF kernel for SVM was a good choice.
Comparison of RAKEL Model with Random Forest
The proposed classifier, iMPT-FRAKEL, selected SVM as the basic classification algorithm. To show that this selection is proper, we also tried another classic and widely used classification algorithm, RF. For its main parameter, the number of decision trees, we tried various values, including 50, 100, 150, and 200. Models with different parameters were also evaluated by ten-fold cross-validation ten times. The performance of the best model is listed in Table 2: the accuracy and exact match were 0.784 and 0.697, respectively, both lower than those of iMPT-FRAKEL, while the hamming loss was higher, indicating that iMPT-FRAKEL was superior to this model. In addition, this model was also inferior to the model with SVM (polynomial kernel) as the basic classification algorithm. As illustrated in Fig. (1), the proposed model (RAKEL model 1 in Fig. 1) was the best RAKEL model for predicting the pathway types of compounds, followed by the RAKEL model with SVM (polynomial kernel) (RAKEL model 2 in Fig. 1) and RAKEL with RF (RAKEL model 3 in Fig. 1). All this implied that SVM with the RBF kernel was, in a sense, the best choice for constructing the RAKEL model.
Comparison of Models with Binary Relevance
Binary Relevance (BR) method [29] is another classic scheme for tackling multi-label classification problems, which builds a binary classification model for each label independently with the one-against-all strategy. We built several multi-label classification models with BR and compared them with RAKEL models. For convenience, these models were called BR models.
A BR model also needs a basic classification algorithm. Likewise, we employed SVM and RF, as mentioned above, and tried the same parameter settings mentioned in Sections 3.1 and 3.2. Each model was assessed by ten-fold cross-validation ten times. The performance of the best BR models with SVM (RBF kernel), SVM (polynomial kernel), and RF is listed in Table 2. With the same basic classification algorithm, the RAKEL model always yielded higher accuracy and exact match, by about 5%, while the hamming loss values of the two models were almost at the same level. As illustrated in Fig. (1), the RAKEL model had a stronger ability to predict the metabolic pathway types of compounds than the BR model, indicating that the RAKEL algorithm was a good choice for the problem addressed in this study. Furthermore, for BR models, SVM still gave higher performance than RF, which conformed to the results for RAKEL models, further confirming that SVM was the optimal choice for constructing the model.
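For reference, the one-against-all BR scheme is compact in scikit-learn; the sketch below uses stand-in random data, not the Meka configuration or the paper's dataset.

    import numpy as np
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(100, 1024)).astype(float)  # stand-in fingerprints
    Y = rng.integers(0, 2, size=(100, 12))                  # stand-in labels

    # One independent binary SVM per label (one-against-all), as in the BR scheme
    br_model = OneVsRestClassifier(SVC(kernel="rbf", gamma=0.03, C=1.0))
    br_model.fit(X, Y)
    print(br_model.predict(X[:5]))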
User Guide of iMPT-FRAKEL
For wide applications of the proposed multi-label classifier, iMPT-FRAKEL, we built its web-server version with the same name. Users can access the web-server iMPT-FRAKEL at http://cie.shmtu.edu.cn/impt/index. Its home page is illustrated in Fig. (3).
On the home page, there are three tabs: "Read Me", "Supporting Information", and "Citation". By clicking "Read Me", users can obtain basic information about the web-server, including the methods and parameter settings used. Supporting information, such as the metabolic pathway types and fingerprints of the 4,739 compounds, can be retrieved in the "Supporting Information" tab. The last tab, "Citation", lists the reference for the web-server.
To use our web-server for prediction, users should follow these steps.
1. Use the SMILES format to represent each input compound; examples can be found by clicking the "Example" button above the input box.
2. Copy the query compounds in SMILES format into the input box and click the "Submit" button. Note that no more than 100 compounds are permitted at a time due to our limited computational power. If wrong information is copied into the input box, click the "Clear" button to quickly clear it.
3. After a few seconds, the predicted results appear on a new page, divided into two parts. In PART I, the predicted metabolic pathway types (represented by the tags in Table 1; their full names are listed at the top of the page) of each valid compound are listed, and the results can be downloaded by clicking the "Result export" button. In PART II, input compounds without fingerprint information are listed. The "Test again" button guides users to another input.
CONCLUSION
This study proposed a simple multi-label classifier to predict the metabolic pathway types of compounds and further built a web-server. Machine learning methods, including the RAKEL algorithm and SVM, were used to build the classifier, and the experimental results showed that it is quite effective. Compared with previous classifiers, it is a pure multi-label classifier and has wider applications because it only requires the SMILES format of compounds. It is hoped that this classifier can be a useful tool for finding new participants of existing metabolic pathways.
"Computer Science"
] |
Smart Dust Bin for Modern Environment
Nowadays, all types of garbage are dumped into the same dustbin in buildings (houses), hospitals, etc., making segregation difficult. To avoid such problems, we implement a project in which wet, dry, plastic, and metallic wastes are segregated automatically using an automated smart dust bin, after which a signal is sent to a mobile phone. The automated smart dust bin has several features, the main one being automatic waste separation. The bin has four compartments for waste segregation: plastic waste, wet waste, dry waste, and metallic waste. Apart from this, the bin has a motor for rotating the bin. This makes separation hands-free and evidently more hygienic. The bin also notifies about the amount of waste filled through an LED, and alerts the user by sending a message to a phone when it is time to empty the garbage. This idea helps dispose of waste in a hygienic manner. The project mainly concentrates on domestic waste, whose value goes unrecognized because people do not spend time separating it. Instead of being sent to the municipal corporation, waste separated at the household level can be sent directly for recycling.
Introduction
The rapid increase in population has led to improper waste management in metro cities and urban areas, which has resulted in the spread of diseases. The segregation, transport, handling, and disposal of waste must be managed properly to minimize the risks to the public and the environment. Waste segregation is an absolutely necessary stage in waste management. Most waste is sent directly to landfills without proper sorting, which has caused huge losses. Properly distinguishing wet, dry, plastic, and metallic waste lets us recycle more efficiently and saves a lot of money and resources, and the wet waste can be used as compost. With the advent of the plastic ban and so many plastics in households, the automated smart dust bin can help dispose of waste properly and efficiently without any problems: it identifies the type of waste and segregates it using the techniques mentioned below. [1-5] Users are thus protected from unwanted smells, toxicity, and various diseases. Segregation makes it possible to reuse and recycle waste effectively, so implementing this project at the household level will reduce the expenditure on waste disposal and the manual effort involved.
2. Objectives of Automated Smart Dust Bin
The main objectives of the AUTOMATED SMART DUST BIN are:
• Waste segregation. The automated smart dust bin segregates waste into four types: dry, wet, plastic, and metal.
• Waste identification
The type of waste is identified using the IR sensor and the capacitive sensor values.
• Recycling
Waste that can be recycled is identified by its capacitive values; the values are sent to an app or device that tells the user whether the waste can be recycled, based on pre-given data.
• Re-usability
Identification of waste that can be used again.
• Waste prediction and optimization
From the data the automated smart dust bin sends to the GSM module, waste patterns can be distinguished; each item is thrown into the appropriate compartment, and the garbage compartments can be serviced along the most optimized path, which is more cost-effective and saves fuel and resources.
System Design:
a. IR Sensor: The IR sensor emits infrared light, which is either absorbed by the material (waste) or reflected. The sensor also contains an IR receiver that measures the reflected infrared light. The receiver is connected to the ADC module of the microcontroller, which converts the analog signal to a digital signal that is analyzed to determine the type of waste. The IR sensor is also used as a proximity sensor to detect whether the segregation chamber is full.
b. Capacitive Sensors
Capacitive sensors are most often used to measure the change in position of a conductive target, but they can also be effective in measuring the presence, density, thickness, and location of non-conductors. Non-conductive materials like plastic have a different dielectric constant than air.
c. GSM Module:
GSM is a mobile communication modem; it stands for Global System for Mobile communication and is widely used for mobile communication around the world. GSM is an open, digital cellular technology used for transmitting mobile voice and data services, operating in the 850 MHz, 900 MHz, 1800 MHz, and 1900 MHz frequency bands. The GSM system was developed as a digital system using the Time Division Multiple Access (TDMA) technique. A GSM modem digitizes and reduces the data, then sends it down a channel with two different streams of client data, each in its own time slot. The digital system can carry data rates from 64 kbps to 120 Mbps.
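For the SMS alert described later, the standard GSM text-mode AT commands (AT+CMGF and AT+CMGS) apply; the Python sketch below assumes a hypothetical serial port and phone number, and uses the pyserial library rather than the Arduino code actually running on the bin.

    import serial
    import time

    modem = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # hypothetical port

    def send_sms(number, text):
        modem.write(b"AT+CMGF=1\r")                  # select SMS text mode
        time.sleep(0.5)
        modem.write(f'AT+CMGS="{number}"\r'.encode())
        time.sleep(0.5)
        modem.write(text.encode() + b"\x1a")         # Ctrl-Z terminates message
        time.sleep(3)

    send_sms("+10000000000", "Dust bin full - please empty")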
d. Stepper Driver and Stepper Motor:
The stepper driver drives the stepper motor from the microcontroller. The waste chamber is mounted on the stepper motor, which aligns the respective sections (dry, wet, and plastic) under the capacitive plates.
e. Indication LEDs: These LEDs are controlled directly by the microcontroller to display the status of the system: whether the bin is full, the Bluetooth status, and the type of waste.
Design requirements: a. Arduino Uno
The Arduino Uno is used for do-it-yourself project prototyping, for developing projects based on code-based control, for developing automation systems, and for designing basic circuits.
c. Infrared Radiation Sensors
This device emits and detects infrared radiation. Generally, all objects emit thermal radiation in the infrared spectrum.
e. Metal Detector
A metal detector is an electronic instrument that detects the presence of metal nearby. Metal detectors are useful for finding metal inclusions hidden within objects, or metal objects buried underground.
Fig. 5. Metal Detector
They often consist of a handheld unit with a sensor probe which can be swept over the ground or other objects. If the sensor comes near a piece of metal this is indicated by a changing tone in earphones, or a needle moving on an indicator.
f. Driver Motor
The L293D is a typical motor driver IC that allows a DC motor to be driven in either direction. It is a 16-pin IC that can control a set of two DC motors simultaneously in any direction, meaning that two DC motors can be controlled with a single L293D IC.
Arduino IDE
The Arduino Integrated Development Environment (IDE) is a cross-platform application (for Windows, macOS, Linux) written in functions from C and C++. It is used to write and upload programs to Arduino-compatible boards and also, with the help of third-party cores, other vendor development boards. The source code for the IDE is released under the GNU General Public License, version 2. The Arduino IDE supplies a software library from the Wiring project, which provides many common input and output procedures. User-written code requires only two basic functions, for starting the sketch and the main program loop, which are compiled and linked with a program stub main() into an executable cyclic executive program with the GNU toolchain, also included with the IDE distribution.
6. Proposed Method: Different types of garbage are dumped into the same dustbin in buildings (houses), hospitals, etc., making segregation difficult. One possible solution is to separate the waste at the disposal level itself (at the household level). We have thus come up with an Automated Smart Dust Bin (ASDB) that categorizes waste as wet, dry, plastic, and metallic. An Arduino Uno forms the heart of the system. The ASDB consists of four compartments (for wet, dry, plastic, and metal waste) and separates all the wastes automatically using the sensors mentioned below. [7-9] Apart from this, the bin has a motor to rotate the bin according to the type of waste, and a 12 V power supply runs the module. The sensors used in this project are a moisture sensor, an IR sensor, a proximity sensor, and a capacitive sensor. The ASDB has an opening on top of the module through which waste is thrown into the bin, and all the sensors are placed at this opening. The moisture sensor senses the moisture content of the waste; if the moisture content is above a preset threshold, the waste is wet waste and is thrown into the wet compartment. The inductive proximity sensor detects only metal targets; it does not detect non-metal targets such as plastics, wood, paper, and ceramics. Unlike photoelectric sensors, this allows an inductive proximity sensor to detect a metal object through opaque plastic; after a metal waste item is identified, it is thrown into the metal compartment. Capacitive sensors are capable of detecting plastic, wood, and other raw materials, including metals; a common application is the detection of liquids, plastics, and grains. After a plastic waste item is identified, it is thrown into the plastic compartment. Finally, the remaining undetected waste is detected by the IR sensor and thrown into the dry compartment. In this way, the different kinds of waste are separated automatically in the ASDB. The bin helps dispose of waste properly and efficiently; some of the dry and metal waste can be recycled, and the wet waste can be converted to manure. The ASDB also notifies about the amount of waste filled through an LED.
The bin will also alert the user by sending a message to a phone, via the GSM module, to indicate that it is time to empty the garbage, and it includes a buzzer that sounds an alarm when the bin is filled. Segregation makes it possible to reuse and recycle the waste effectively, so implementing this project at the household level will reduce the expenditure on waste disposal and the manual effort, and thus help us to dispose of the waste in a hygienic manner.
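As a sketch of the GSM alert, the snippet below sends a bin-full SMS using the standard AT command set of common GSM modules; the wiring pins, baud rate, and phone number are placeholders.

```cpp
#include <SoftwareSerial.h>

// Hypothetical wiring: GSM module TX->pin 10, RX->pin 11.
SoftwareSerial gsm(10, 11);

void sendBinFullSMS() {
  gsm.println("AT+CMGF=1");                  // set SMS text mode
  delay(100);
  gsm.println("AT+CMGS=\"+10000000000\"");  // recipient (placeholder number)
  delay(100);
  gsm.print("Dust bin is full. Please empty it.");
  gsm.write(26);                             // Ctrl+Z terminates the message
}

void setup() {
  gsm.begin(9600);  // typical baud rate for SIM900-class modules (assumed)
  sendBinFullSMS();
}

void loop() {}
```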
Conclusion:
The Automated Smart Dust Bin helps us dispose of the waste in a safer, more hygienic way. The project ensures that garbage disposal is completely hands-free, so that no human contact with the bin is needed. The system has its own limitations: it can segregate only one type of waste at a time, with an assigned priority for metal. Improvements can thus be made to segregate mixed types of waste. | 2,608.2 | 2020-09-20T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
BORON PARTICLES PRODUCTION IN NONEQUILIBRIUM LASER-CHEMICAL RADICAL REACTIONS DURING THE IR MULTIPHOTON DISSOCIATION OF HClC=CBCl2H MOLECULES
The study of the IR MPD of the HClC=CBCl2H molecule by a CO2 laser is presented. We provide experimental results showing a high degree of fragmentation of this molecule (down to elementary boron) during nonequilibrium laser-induced radical bimolecular reactions.
INTRODUCTION
The method of multiphoton dissociation (MPD) of molecules in a high-power pulsed IR-laser radiation field has lately found wide application in studies of physico-chemical molecular processes.1-3 More specifically, it is an effective procedure for producing high concentrations of free radicals. These radicals may be used for initiating nonequilibrium gas-phase radical reactions. A wide variety of final products is known to be produced during thermally initiated gas-phase radical reactions of multiatomic molecules, the desired product yield being quite low. This is due to the large number of possible reaction channels in multicomponent mixtures in thermodynamic equilibrium, when the temperatures of the initial, intermediate and final products are almost equal. The situation is different for nonequilibrium reactions, where these temperatures may differ widely. The high specificity of IR-laser radical reactions, i.e. high yield of the desired final product and suppression of other products, has been mentioned in the literature.4 In this paper we discuss the investigation of the MPD of chlorethylenedichlorborane molecules (HClC=CBCl2H) in the radiation field of a pulsed CO2 laser.6,7 We provide experimental results showing high fragmentation of HClC=CBCl2H molecules (down to elementary boron) in the IR laser radiation field during nonequilibrium laser-induced radical bimolecular reactions.
EXPERIMENTAL SECTION
IR MPD of HClC=CBCl2H molecules in the proper gas in the field of a pulsed CO2 laser was studied experimentally. A pulsed TEA CO2 laser with line selection was used. In all the experiments the laser was tuned to the resonance frequency ν = 986.6 cm−1 of the B–Cl bond of HClC=CBCl2H molecules. The laser pulse duration was varied by changing the actuating medium composition. Typical pulse specifications: pulse duration at the half-height of the pulse head ~100 ns; pulse duration at the base ~1 μs; pulse head to pulse tail power ratio 70:30; energy density q ≈ 2.5 J/cm². The experiments were carried out mainly in the flow-through mode. A stainless-steel cell with monocrystal NaCl windows was used. The pumped gas molecules entered the reactor via the distributor and, after forming a laminar flow, entered the reaction zone crossing the laser beam pulse (Figure 1). The pumping rate was selected so that during the delay time between pulses the irradiated gas portions could be completely removed from the reaction zone (an ideal displacement reactor11). Gas composition before and after irradiation was determined from the IR absorption spectra in the SPECORD-75 IR spectrophotometer.
The dependence of the absorbed energy on the laser radiation energy density, and the reduction of the dissociation yield upon buffer gas addition, were studied in the static mode in individual experiments.
EXPERIMENTAL RESULTS AND DATA PROCESSING
Figure 2 shows a characteristic IR spectrogram of the original material and decay products. Absorption peaks of the BCl3 and C2H2 decay products are clearly observable. When large volumes of the material were processed, dichlorethylene C2H2Cl2 absorption peaks were also measured. Besides, when pumping large portions of the material through the reaction zone, the following phenomenon was observed: the input and output reactor cell windows were covered with a film-like deposit, leading to window opacity and cracking. All the inner walls of the reactor were covered with powder. The substance deposited more intensely on the output window than on the input one; the thickness of the output window deposit was 2–3 times that of the input window. The deposited substance distribution reflected the laser beam intensity fluctuations.
To determine the elementary composition of the film, mass-spectrograms were recorded using a RIBER LAS-2000 surface analyzer and the method of secondary ion mass-spectroscopy (SIMS). For this purpose the film was taken from the sodium chloride monocrystal, transported onto a silica support, and inserted into the instrument's measuring chamber. It was irradiated in a 10−8 Pa vacuum by argon ions accelerated in an electric field. In addition to the mass composition, the SIMS method provided data on the depth of element occurrence in the film. Spectrogram processing showed the presence of mass lines of the following elements: boron-10, boron-11, carbon-12, sodium-23, silicon-28, argon-40. Elementary boron with mass numbers 10 and 11 was the basic element in the film. All the remaining mass lines were due to impurity levels of up to 10 wt%. Carbon atoms occur because of secondary chemical reactions between chlorine atoms and acetylene.2 The detected sodium atoms were traces of the sodium chloride substrate. Silicon atoms get into the chamber from the holder material.
Argon atoms were used as the bombarding particles. Figure 3 shows typical SIMS spectrograms of the films deposited on the input and output reactor windows. They vary greatly in isotopic content. This fact was verified by a mass-spectrometric study of the film composition in an MI-1201 unit modified for solid-phase measurements.
These films were also studied by electron diffraction and electron microscopy methods using a Y3BM-100K unit, working as an electron diffractometer with an IO-2 attachment, and an OPTON EM-10 high-resolution electron microscope (FRG).
Electron diffraction analysis of the boron films using a 75 kV accelerating voltage shows a mainly amorphous phase containing crystallites as well (Figure 4). Electron-microscopic studies at 100 kV accelerating voltage make it possible to resolve the fine amorphous structure of the studied boron film (Figure 5) and to establish that the film contains statistically distributed structural elements (icosahedral boron atom clusters), the short-range order of the obtained amorphous phase being similar to that of the β-rhombohedral crystalline boron modification.
DISCUSSION
To explain the observed phenomena we assume that the following processes are superimposed in the reaction zone: (a) MP monomolecular decay of molecules into excited fragments in the field of a laser pulse; (b) secondary nonequilibrium reactions, during which production of elementary boron is possible in the reaction zone; (c) nucleation of elementary boron clusters in the gas phase; (d) light-induced flow along the laser beam path (confirmed by the predominant particle deposition on the output window compared to the input one, and by the boron isotope content difference in these deposits), which overlaps processes (b) and (c) and causes selective mass transfer.
Let us consider each item separately. (a) Two schemes are possible in simulating the monomolecular MP decay: (1) the HClC=CBCl2H molecule, existing in the form of the trans-isomer, transforms into the cis-form due to IR multiphoton absorption of the pulsed CO2 laser resonance radiation, and then BCl3 splits off according to a monomolecular mechanism; (2) as a result of IR multiphoton absorption, the weakest C–Cl bond may break during the molecule's transition from the normal to the active state (E_C–Cl ≈ 392 kJ/mol), and due to intramolecular chlorine atom migration an intermediate carbene may be produced, which may rearrange into an alkyne, producing acetylene and BCl3 during the simultaneous break of the two C–Cl and C–BCl2 bonds. The monomolecular decay rate constant as a function of the internal energy of the molecules was calculated using the RRKM statistical theory of monomolecular decay.13 For this purpose we used scheme (2). In our calculations we used the fact that the chlorethylenedichlorborane molecule is characterized by strong asymmetry and has a distant functional additive group BCl2, producing only a slight perturbation of the basic vibrational frequencies of the molecule. Quantum state densities were calculated in the Whitten–Rabinovitch approximation. Figure 6 shows the obtained k(E) dependences for both boron isotope modifications. They may be used for estimating the mean energy Ē at which dissociation of the molecule occurs. The effect of buffer gas on the initial dissociation yield during the laser pulse may be used as a quantitative parameter for such an estimation. Collisions of the excited molecules with the buffer gas molecules should lead to deactivation, and consequently to a reduction of the dissociation yield probability. When the dissociation yield is reduced e-fold (e is the natural logarithm base), the time interval between collisions of the excited HClC=CBCl2H molecule with the molecules of the buffer gas equals the decay time.14 The experiments showed that when monoatomic argon buffer gas was added to the initial substance, the dissociation yield of the 10B isotope reduces e-fold at P_Ar ≈ 10³ Pa.
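For reference, the microcanonical rate constant evaluated in such RRKM calculations (the explicit formula is not shown here) is conventionally written as

\[ k(E) = \frac{\sigma\, W^{\ddagger}(E - E_0)}{h\, \rho(E)}, \]

where W‡(E − E₀) is the sum of states of the transition state above the barrier E₀, ρ(E) is the density of vibrational states of the reactant (computed here in the Whitten–Rabinovitch approximation), σ is the reaction path degeneracy, and h is Planck's constant.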
Taking into account that the argon atom diameter is D = 3.64×10⁻⁸ cm,15 we obtain for the collision time τ_col ≈ 0.2×10⁻⁷ s at room temperature and pressure P = 10³ Pa, and from Eq. (3) k(Ē) ≈ 5×10⁷ s⁻¹. Knowing the decay rate constant, we find from the calculated k(E) dependences that the mean decay energy Ē of HClC=CBCl2H is about 315 kJ/mol. The overexcitation energy above the dissociation limit (Ē − D₀), remaining practically in the dissociation products (≈85 kJ/mol), corresponds to about 7 IR quanta at the laser radiation frequency ν = 986.6 cm⁻¹. It is known that molecules get into the vibrational quasicontinuum after absorbing 3–5 IR quanta,1,2 which means that the decay fragments are already in the vibrational quasicontinuum. The dissociation product BCl3 may resonantly absorb energy in the vibrational quasicontinuum up to decay into the simpler fragments BCl2 and Cl. Further decay down to elementary boron should proceed through a nonradiative path, as the absorption frequencies of BCl2 (700 cm⁻¹, 250 cm⁻¹ and 725 cm⁻¹) are off resonance with the IR radiation of the CO2 laser. (b) To our mind, the possible nonequilibrium decay channels of organoboron molecules down to elementary boron in the laser radiation field are connected with bimolecular reactions. From the analysis of the available negative results on laser-induced bimolecular organic reactions it was concluded3 that both vibrational excitation of each reagent and excitation of translational degrees of freedom are necessary for this purpose. Our case meets these requirements. During the laser pulse (~10⁻⁷ s), not only excited chlorethylenedichlorborane but also excited decay fragments (~10⁻⁸ s) appear in the active zone of the beam (all of them excited by the same laser pulse). Besides, the translational energy received by the fragments of the monomolecular decay may be sufficient to overcome the activation barrier.16 Reactions between the excited fragments and the excited molecules of the "hot" ensemble are the possible channels of elementary boron production. Among the various reaction channels between BCl3*, BCl2*, C2H2* and HClC=CHBCl2*, the following reaction is considered the basic one for elementary boron production:

BCl2* + C2H2* → HClC=CHCl + B.   (4)

Reaction (4) is considered one of the possible channels because absorption peaks of dichlorethylene C2H2Cl2 (718, 895, 1175, 1225, 1270 cm⁻¹) were recorded during the experiments while processing large quantities of the matter. This process is quite probable because it is known that the bonding energy E_B–C < E_C–Cl (E_B–C ≈ 318 kJ/mol). As the bonding energy E_C–H > E_C–Cl (E_C–H ≈ 443 kJ/mol), a substitution reaction is less probable than the reaction of adding halogen to alkenes.17
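Returning to the collision time quoted at the start of this discussion, it is consistent with the elementary gas-kinetic estimate

\[ \tau_{\mathrm{col}} = \frac{1}{n\,\sigma\,\bar{v}}, \qquad n = \frac{P}{k_B T}, \qquad \sigma = \pi D^2, \]

where the mean relative speed v̄ ≈ 4×10² m/s is an assumed value for argon at room temperature: with P = 10³ Pa and T = 300 K one gets n ≈ 2.4×10²³ m⁻³ and σ ≈ 4.2×10⁻¹⁹ m², giving τ_col ≈ 2.5×10⁻⁸ s, i.e. the 0.2×10⁻⁷ s quoted above.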
It may also be said about items (a) and (b) that the chemical thermometer method is known to be a simple but rather effective way of distinguishing between a nonequilibrium monomolecular laser process and a thermal reaction. According to this method, a dope is added to the basic mixture whose molecules do not absorb the laser radiation but may undergo a monomolecular thermal reaction; from the ratio of the final products one can then determine the relation between the nonequilibrium and thermal channels. In our isotope mixture, the molecules of the nonresonant isotope component play the part of such molecules. They meet all the requirements for a chemical thermometer (except for slight nonresonant absorption). The isotope selectivity of the radiative decay is preserved, as the secondary chemical reactions of elementary boron production proceed mainly between the excited particles. The isotope selectivity of the reaction products, determined during the elemental analysis of the films deposited on the optical components, confirms the nonequilibrium character of the studied processes and is a measure of their nonthermality. (c) Boron atoms may coalesce into nuclei during collisions. The parameters directly affecting the nucleation kinetics and particle growth in the gas phase are the reaction zone temperature, heating rate, reactant gas partial pressure, etc.16 When nucleus production terminates in icosahedron production, a possibility arises of producing the α- and β-modifications of boron. To produce the β-boron modification, further coalescence of grains containing up to seven icosahedra is necessary. These icosahedra may stick together by surface forces, which reduce the total surface energy by providing contact interfaces.
It is difficult to describe the grain nucleation mechanism because of the lack of temperature measurements in the reaction zone (vibrational and translational temperatures). However, the structure of elementary boron should be considered during simulation. The presence of a deposited film structure corresponding to the high-temperature β-rhombohedral boron structure indicates nucleation of particles with short-range order of the positioned atom groups in the reaction zone. This is possibly due to the involvement of particles of the "hot" ensemble with high internal energy in the reactions. Grains of elementary boron produced in the reaction zone precipitate as a powder on the inner cell surface and are deposited on the optical windows due to light-induced mass transfer. When an amorphous boron film appears on an optical window, its absorptivity rises and local crystallization of the amorphous phase takes place due to enhanced heat release and film temperature rise. The original amorphous boron structure transforms into a β-rhombohedral crystalline structure having crystallographic parameters characteristic of β-boron. The laser-induced phase transition has dynamics similar to those of the particle growth of amorphous elementary boron studied in work 18. At first the temperature rise provides grouping of the statistically distributed boron icosahedra, followed by three-dimensional growth with crystalline structure formation. Figure 7 shows an electron micrograph of the coagulation dynamics of the structural elements and the formation of crystalline sites in the film.
A further substrate temperature rise stimulates β-boron crystal size growth, forming crystallographic facets and joining along the cleavage planes (Figure 8). Speaking of items (a), (b) and (c) as a whole, it should be mentioned that the literature provides papers on semiconductor powder synthesis in laser-induced gas-phase reactions.16,25 However, in contrast to our study, they consider mechanisms of laser pyrolysis. (d) As concerns the mass-transfer phenomena, laser-induced thermodiffusion19 and light-induced drift (LID)20–22 seem to be the most probable effects providing flow motion during IR laser irradiation of a molecular gas. Silicon powder deposition on the reactor windows along the laser beam path during SiH4 laser pyrolysis is also discussed in paper 16. That fact was attributed to gas mass-transfer induced by local laser heating and by exothermal chemical reactions.
A special study is necessary to determine which of the above mechanisms provides the flow in our case, and it is the object of our further work. However, as there are no data on the thermodiffusion (k_T) coefficients for HClC=CBCl2H molecules, we cannot compare the sign of the experimentally observed effect with the predicted one. But the presence of an optically thin gas layer (laser energy absorption by the gas molecules over a 1 m length is only 8–10%), and the fact that particle deposition is observed practically in the monopulse regime (due to the selection of the pumping rate and the relative pulse duration), testify against the common thermodiffusion mechanism driven by ΔT. As for laser thermodiffusion and LID, both exploit the change of the particle interaction potential upon excitation. In our case the following can be said: the HClC=CBCl2H molecule belongs to the class of unsaturated hydrocarbons (alkenes). During UV excitation of the most typical representative of this class of molecules, the less durable π-bond is known to break, and a decompensated electron cloud appears at the remaining single bond, which provides repulsion of the methylene groups, rotating them out of the initial planar state up to mutually perpendicular planes in space. This changes the electron configuration, and the resulting system is characterized by molecular orbitals analogous to those of the O2 molecule. This change of spatial form may therefore give rise to a substantial change of the interaction potential and consequently of the particle transport behaviour. The wave functions of highly excited vibrational levels of the ground electronic state and the lower vibrational levels of the excited electronic state in ethylene overlap during UV excitation,23 which gives rise to the assumption that IR MP excitation of ethylene produces a similar result. The discussed stereoform is assumed to be typical for the excited states of all alkenes,23 and thus for HClC=CBCl2H molecules.
The presence of a quasiresonant buffer gas is a positive factor for the LID process,22 and here this gas consists of molecules of the nonresonant component containing the other boron isotope modification. But the problem of "hole burning" in the velocity distribution function, necessary for the LID effect, is not quite clear in our case.
CONCLUSION
An experimental study of a selective laser-chemical gas-phase process is discussed, in which elementary boron is the final decay product of the fission of the initial molecule in the field of an IR laser. The final product precipitates as a powder in the reactor and deposits as a film on the optical components in the beam path.
A theoretical model of such multistage fission is suggested, based on multiphoton monomolecular decay and subsequent bimolecular reactions between the excited decay fragments. Light-induced mass transfer is detected along the beam path, and possible mechanisms of this phenomenon are discussed.
It should be mentioned that additional measurements (intermediate radical detection, reaction zone temperature measurements, etc.) must be performed for more detailed process simulation, which is the object of our further study.
Figure 3 Typical SIMS spectrograms of the deposited films. (a) On the input window. (b) On the output window.
Figure 4 Electron diffraction pattern of the amorphous boron films with crystalline inclusions.
Figure 5 Electron micrograph of the elementary boron amorphous film.
Figure 7 Electron micrograph of the amorphous boron film with crystalline inclusions.
Figure 8 Crystallite formed by three crystallites growing together along cleavage planes. | 3,934 | 1989-01-01T00:00:00.000 | [
"Chemistry",
"Physics"
] |
Analyses of contact forces and kinetic motion on heavy load ball-screw
Effects of the contact angle and groove factor of a heavy-load ball-screw are discussed with the variation of contact forces at eight ball circulations. The contact forces vary as a sinusoidal function of each circulation owing to the variation of the phase angle. With an increase of the contact angle, the contact forces at each ball circulation decrease, as does their variation within each circulation. The decrease of the contact forces means that the contact stresses on the contact areas are reduced; the fatigue life of the raceways can thus be extended. A low groove factor can reduce the skidding speed and friction coefficient. Based on the analysis results, optimal transmission performance can be achieved in a heavy-loaded ball-screw.
Introduction
Heavy-load ball-screws are widely used in plastic injection molding machines, applied in the injection and load-lock axes. No preload is applied in a heavy-load ball-screw. The operating conditions are low rotational speed and extremely high axial load; the contact pressure on a ball–raceway contact area can reach 1.8 GPa. Therefore, the contact forces on each circulation of balls will not be equal, owing to the contact mechanism. This leads to serious wear at the ball circulation nearest the flange of the nut. The aim of the study is to establish a kinematic simulation model of a multi-cycle ball-screw without preload in order to estimate the contact forces on each circulation. This can help in designing heavy-loaded ball-screws with uniform contact forces. The finite element model is built with the commercial software Mesh-Part, as shown in Fig. 1. There are 128 effective balls set in the raceways, and the contact behavior is set as a Hertzian contact. An axial load is applied to one side of the flange of the nut.
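For orientation, the quoted peak pressure is the classical Hertz result for an elliptical contact area (a standard relation, not derived in this paper):

\[ p_{\max} = \frac{3 F_n}{2 \pi a b}, \]

where F_n is the normal contact force on the ball and a and b are the semi-axes of the contact ellipse.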
The kinetic study of ball-screws was started by Lin et al. [1]. They established fundamental coordinates to describe the motion of a ball's center as it revolves around the screw. The contact and kinetic behaviors among the ball and the raceways of the nut and screw are thus obtained for the discussion of geometrical parameters. Wei and Lin [2] considered the effects of the centrifugal force of the ball and lubrication at the contact areas. Kinetic motion and contact behavior were introduced in a preloaded two-circulation ball-screw numerical model [3]. The sliding velocity and contact force cause wear at the contact area between ball and raceway. A two-body wear model considering the roughness effect was introduced for the ball-screw contact mechanism [4]; the density of roughness peaks and the sliding speed are the major factors in the wear of a preloaded ball-screw used under normal operating conditions. The sliding-roll ratio and the position of the pure rolling point are major factors in the kinetic motion and wear [5,6]. The contact mechanism and kinetic motion of a heavy-load ball-screw differ from the normal situation: no preload is applied, and extremely high contact forces exist on the contact area. The present study provides a theoretical model for the analysis of the contact forces and kinetic motion of a heavy-load ball-screw. The commercial software Mesh-Part is also used in calculating the contact forces, and the kinetic behavior related to the contact angle and groove factor is discussed.
Analysing methodology
The center position of any ball o′ in a ball-screw can be determined by an absolute coordinate system (x, y, z), a rotational coordinate system (x′, y′, z′), and a moving coordinate system (t, n, b), as shown in Fig. 1. Contact coordinates (Xo, Yo, Zo) and (Xi, Yi, Zi) are located on the contact ellipses at the ball–nut and ball–screw contact areas, respectively. αo and αi are the contact angles at the ball–nut and ball–screw sides, respectively. The contact geometry is shown in Fig. 3. The contact area varies with the operating conditions; when setting the boundary in the FEM model, the contact area is assumed to be a contact stripe [4]. In Fig. 3, the center of the raceway is No, and the contact distribution angle is defined as θo,i, where the subscripts o and i denote the ball–nut and ball–screw contact sides; the quantities entering its expression are the radius of a ball, the radii of the raceways of the nut and screw, the radii Rr and Rp of the screw's root and pitch, and the contact angles of the raceways of the nut and screw.
Following the earlier study of Wei and Lin [2], the orbital and spinning speeds of a ball along the screw are written as functions of the rotational speed of the screw [2].
The sliding speed between ball and raceway can be separated into a tangential direction (t-axis direction) and a vertical direction (X-direction). The t-direction is parallel to the moving direction of the ball center; the X-direction is orthogonal to the t-direction. The sliding speed is decomposed into partial speeds along the t, n and b directions, where the angles β and β′ are the spinning angles about the orthogonal directions. The sliding speed in the X-direction is due to the spinning and yawing motion of a ball; it represents the skidding motion between ball and raceway.
Results and discussions
A heavy-load ball-screw carries an extremely high axial load on the nut. The balls in each circulation carry high contact forces, and the contact forces increase greatly with the ball number, as shown in Fig. 4. The black line separates the right and left nuts; the contact forces at the left nut are much lower than at the right nut. The contact forces vary as a sinusoidal curve because the phase angle of each ball is different. Increasing the contact angle decreases the contact forces. The contact forces averaged over each circulation are shown in Fig. 5. The difference of the contact forces at each contact angle condition is similar, except for the last circulation. This is because the applied axial load acts on the right end of the nut: the last circulation is the first part to bear the force, which then decays along the circulations. Therefore, the right nut bears much higher contact forces than the left nut, and the 7th and 8th circulations, R3 and R4, bear similar contact forces. As the figure shows, increasing the contact angle can effectively reduce the contact force at each circulation, because a higher contact angle makes the radial component of the contact force smaller. However, too large a contact angle will greatly increase the contact stress on the top raceway, which will lead to fatigue failure there. The design of the groove factor is very important to the contact mechanism of a heavy-load ball-screw. The groove factor is well known as the ratio of the radius of the raceway to the diameter of a ball; when the groove factor is close to 0.5, the contact area between ball and raceway has its largest value. The contact angle also varies with the groove factor, as shown in Fig. 6. Decreasing the groove factor increases the contact angle and reduces the contact forces, especially at the contact areas of the left nut. The friction coefficient increases with the groove factor because the sliding speeds also increase. Interestingly, the increases of the sliding speeds along the t- and X-directions are different: the sliding speed in the t-direction is greater than in the X-direction, but its rising slope is not as large. The rise of the sliding speed in the X-direction with increasing groove factor means that the skidding motion on the contact area becomes serious, so the friction coefficient also rises with the groove factor.
Conclusions
The study uses the commercial software Mesh-Part and creates a theoretical method for analyzing the contact and kinetic behavior of a heavy-load ball-screw under different raceway geometry conditions. The contact angle and groove factor were examined to obtain the lowest contact forces over the multiple circulations. Increasing the contact angle or reducing the groove factor can effectively decrease the contact forces of each circulation. The friction coefficient is dominated by the sliding speed in the X-direction, which lies along the raceway, perpendicular to the moving direction of the ball centre; this speed represents the skidding motion between ball and raceway. A lower groove factor gives a lower sliding speed and friction coefficient. The study indicates how to reduce contact force and friction and can help in designing heavy-load ball-screws with high transmission efficiency and long fatigue life. | 1,990.8 | 2018-01-01T00:00:00.000 | [
"Engineering"
] |
Impact of local markets on the development of single-industry towns of mining regions: exploring the case of Khakassia
It is well known that the life quality of mono-profile territories is largely influenced by city-forming enterprises. The article considers the degree of influence of the city-forming enterprises of the coal industry on the development of the agricultural industry in mono-profile territories of the region. In particular, the direction of the social vector in the activities of coal mining enterprises with regard to supporting the corresponding life quality in single-industry towns has been revealed. The material obtained as a result of the study can therefore be used in the development of social development programs for mono-profile territories of the coal industry.
Introduction
The socio-economic development of many territories in which coal mining enterprises are located is largely determined by the policies of these enterprises. Therefore, the regions in which the coal industry enterprises are located are faced with the task of finding effective interaction. For the most part, coal mining enterprises are city-forming in single-industry towns; therefore, the issues of socio-economic development of single-industry territories are important for determining development prospects of coal mining enterprises and strategies for socio-economic development of mining regions.
The coal industry enterprises are entrusted with the tasks of supporting the social sphere in the territory of their presence, in particular with regard to the development of single-industry settlements (mining towns) [1][2][3]. Despite the large variety of industries represented among the city-forming enterprises of single-industry towns, the coal industry occupies a significant share of up to 9.4 percent and therefore has a serious impact on the current state of single-industry settlements.
The strategy of modernization of single-industry towns includes such provisions as: the development of competition, the provision of equal opportunities for the modernization of enterprises, the creation of conditions for the attraction and development of human resources, the stimulation of the development of innovative technologies, sustainable and consistent development of integration ties of all types (intra-industry, inter-industry and international) [4].
Integration, based on interconnections and interactions at the sectoral, municipal, regional and interregional levels, contributes to the creation of the most favorable conditions for the long-term functioning of all subjects of integration ties. Therefore, the search for optimal interaction schemes will create the conditions for mining regions' sustainable development at the expense of internal resources. The main objective of the modern approach to the management of coal-mining enterprises located in single-industry areas is the social-oriented regulation of activities in the coal industry. Strengthening the social orientation in the activities of coal mining enterprises contributes to the search for integration ties with various sectors of the economy. In particular, one of the important directions in improving the welfare of the population is the provision of single-industry industrial areas with food [5][6][7][8][9].
Contemporary studies widely acknowledge that the influences of mining on reshaping human settlements to promote socio-economic growth remain relatively unexplored in many developing countries [10]. Hence, to take full advantage of their huge economic potential, additional studies are required to understand the reasons accounting for the ineffectiveness of strategies in creating sustainable mining towns.
Diversification of the industrial structure of employment contributes to the alignment of employment cycles; as study [11] concludes, in cities of the same state or region the same employment cycle is observed. The ongoing processes of diversification of the industrial structure of employment, according to studies, lead to the diversification of regional centers and peripheral localized markets. The development of local markets of peripheral municipalities is based more on the use of natural resources. Although agglomeration increases with increasing transport costs, the development of local markets contributes to an increasing spatial concentration of economic activity [12].
Numerous studies show that providing the population with agricultural products is one of the determining factors for improving the life quality. Single-industry settlements are located in almost all regions of Russia. Considering single-industry towns with city-forming enterprises of the coal industry, it should be noted that many rural settlements are located near coal mines and quarries and, as a result, the mines affect the ecology and crops. The commissioning of new coal deposits in the Republic of Khakassia from 2015 to 2019 led to a decrease in agricultural land, a decrease in the number of cattle, and a deterioration of the environmental situation in certain parts of the region. The social responsibility of business towards the territories located in its zone of operation obliges the coal mining industry to maintain and develop single-industry territories of coal mining specialization. In this case, agricultural products become the most in demand. Consequently, in order to assess the degree of influence of the agricultural industry on raising the standard of living of the population of single-industry territories, it is necessary to determine the factors influencing the livelihood of a single-industry town.
Materials and Methods
The article identifies quantitative characteristics of the impact of local markets functioning on the life quality of single-industry territories of the coal mining industry. As a source of information, statistical materials on the socio-economic development of the Republic of Khakassia for 2013-2018 were used.
The studies were conducted using economic and statistical methods, in particular correlation and regression analysis. The dynamics of the functioning indicators of the single-industry towns of the Republic of Khakassia showed an increase in the consumption of agricultural products (Table 1). The analysis of statistical data reveals a downward trend in many indicators of social development, but at the same time high rates of growth in coal production persist: over five years the increase was 8.6 per cent. In the ranking of the industrial production index in the field of coal mining, the Republic of Khakassia takes 3rd place among Siberian regions and 14th place in the federal rating. The coal mining sector employs 4 thousand workers (32%); from 2017 to 2019 employment decreased by around 1.1 per cent in the territories of single-industry towns, although the urban population share increased from 69.1% to 69.7% and the rural share decreased from 30.9% to 30.3%. The share of single-industry towns in gross regional production is 80 per cent of output by sector, and their share of fixed capital investment in Khakassia is around 23%. There are 9.5 thousand workers, or 4 per cent of the economically active population of Khakassia, employed at city-forming enterprises. The concentration of economic activity and the local potential of the economy of the Siberian region in single-industry territories, without taking into account the large coal export share, was studied in our previous papers.
In order to identify the quantitative characteristics of the impact of output by the agriculture sector on the growth of socio-economic indicators of single-industry towns in mining regions, a multiple correlation and regression analysis was performed, which includes the development of correlation models and the analysis of the results obtained.
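As a sketch of the first step of such an analysis, the function below computes the Pearson pair correlation coefficient from which the correlation matrix is assembled; it is illustrative only and is not the code used in the study.

```cpp
#include <cmath>
#include <vector>

// Pearson correlation coefficient r between two series of equal length.
double pearson(const std::vector<double>& x, const std::vector<double>& y) {
    const std::size_t n = x.size();
    double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
    for (std::size_t i = 0; i < n; ++i) {
        sx += x[i]; sy += y[i];
        sxx += x[i] * x[i]; syy += y[i] * y[i];
        sxy += x[i] * y[i];
    }
    const double cov = sxy - sx * sy / n;  // n times the covariance
    const double vx  = sxx - sx * sx / n;  // n times the variance of x
    const double vy  = syy - sy * sy / n;  // n times the variance of y
    return cov / std::sqrt(vx * vy);       // the n factors cancel
}
```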
Using the information database of statistical data for the Republic of Khakassia, the following variables were investigated: per capita income of the population, population size, employment, fixed capital investment per capita and output by industrial sector.
The lack of evaluative empirical models that determine the degree of influence of agriculture on single-industry settlements with city-forming enterprises of the coal industry makes this study interesting from the point of view of the distribution of the obtained models to other regions.
The purpose of constructing models of the dependence of the socio-economic development of monoprofile territories of coal specialization on the state of agriculture is to identify the effects of localization of the economic activity of local markets.
The fact that the study was carried out for a Siberian region adds the adverse conditions of crop production and animal husbandry: extreme climatic conditions; remoteness of settlements not only from the municipality holding primary status in a district, but also from each other; poorly developed engineering and transport infrastructure; and population decline, especially among young people.
Also, single-industry towns with coal specialization (coal mining) have a high risk of depletion of natural resources, and the agricultural sector in the regional economy is constant, even traditional. Consequently, agricultural production is important for the development of the coal industry.
On the one hand, a contradiction arises: the active development of coal mining can gradually supplant the agricultural sector due to land reduction and environmental impact.
On the other hand, the development of large business creates the conditions for building the potential of medium and small enterprises. Consequently, the development of local markets requires balanced production and consumption. In this case, single-industry towns are just an example of the dependence of socio-economic development on a city-forming enterprise.
Results and Discussion
Correlation and regression analysis revealed trends in the links between the output by the primary sector of the Republic of Khakassia (agriculture and mining have limited spatial mobility because of their dependence on land and natural resource inputs) and indicators of the life quality. The indicator of average per capita incomes of the households who live and work in the single-industry town was chosen as an outcome variable, since it most fully reflects the socio-economic situation of the population, including the efficiency of the town-forming coal-mining enterprise. We now discuss each sector of activity in turn.
The equations for agriculture production
According to the regression analysis, correlation relationships were identified between the variables; a matrix of pair correlation coefficients was obtained for this purpose, and regression analysis between the indicators was performed on its basis. For the model, indicators with high and medium correlation coefficients were selected. The obtained mathematical model of the connection between the average per capita income of the population and the effective functioning of the agricultural sector identifies the relationship between the socio-economic situation of the single-industry towns of the coal industry and the provision of agricultural products to the population of single-industry territories:

y = 11.721 + 0.032x1 + 0.521x2 + 0.966x3,

where the variable y denotes the income per capita, roubles; x1 is the output by the agriculture sector per capita, mln roubles; x2 is the grain yield, %; x3 is investments in agriculture per capita, mln roubles. The results show that increases in the output of the agricultural sector, in yields and in investments in agriculture affect the income of the population, which indicates an increase in the life quality through improved food supply. Table 2 shows the assessment of the closeness and kind of the relationships and the characteristics of the influence of the factors. In overall terms, the coefficients of determination presented in Table 2 show the degree of influence of each factor of the agricultural sector on the average per capita income. This result can be attributed to the following factor: most firms and farmers that make up the agriculture sector in Siberia receive federal government subsidies, directed especially to single-industry towns.
Model for the mining industry sector
The next model is aimed at identifying the relationship between the potential of the coal mining sector in single-industry towns and income per capita:

y = 0.661 + 1.850x1 + 0.690x2 + 5.612x3 + 0.650x4,

where y is the income per capita, roubles; x1 is the output by the mining sector per capita, roubles; x2 is proven coal reserves, mln roubles; x3 is trucking per capita, mln roubles; x4 is investments in the mining industry per capita, mln roubles. Table 3 shows the assessment of the closeness and the characteristics of the influence of the factors. Thus, the size of agriculture significantly affects the standard of living of the population in single-industry towns and is significant. The results of the assessment show the significance of agricultural productivity and mining potential. Identifying the most promising area for development in regional single-industry towns will contribute to an increase in the concentration of business activity of the subjects of the local market. The effectiveness of the interaction of the coal and agricultural industries for the development of the economy of single-industry towns is undeniable. In the mathematical sense, this means an increase in the GDP dynamics of the region in forecast periods.
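For illustration, the two fitted models above can be evaluated as plain functions; the coefficient values are taken from the equations as given above, and the input values are hypothetical.

```cpp
#include <cstdio>

// Regression models as given above; treat the coefficients as illustrative.
double incomeAgriculture(double x1, double x2, double x3) {
    return 11.721 + 0.032 * x1 + 0.521 * x2 + 0.966 * x3;
}

double incomeMining(double x1, double x2, double x3, double x4) {
    return 0.661 + 1.850 * x1 + 0.690 * x2 + 5.612 * x3 + 0.650 * x4;
}

int main() {
    // Hypothetical per-capita inputs for one single-industry town.
    std::printf("agriculture model: %.3f\n", incomeAgriculture(10.0, 15.0, 2.0));
    std::printf("mining model:      %.3f\n", incomeMining(8.0, 3.0, 1.5, 2.0));
    return 0;
}
```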
The results presented in Tables 2 and 3 are based on the matched sample (the sample includes 6 single-industry areas). We also estimated the models using the full sample by the OLS technique. In the case of both models, the estimated coefficients for each of the two performance measures are statistically significant.
Conclusions
The obtained dependence of the socio-economic indicators of single-industry towns on the levels of agricultural output and coal mining suggests the need to consider their mutual influence.
Knowing the trends in the socio-economic indicators of single-industry settlements, it is possible to construct a growth forecast for the main vital indicators of a single-industry town associated with an increase in agricultural production in a coal mining region. The obtained results are of interest to the management of single-industry areas seeking to revitalize the agriculture and coal mining sectors. To implement the process of coordinating the interaction of single-industry territories with agricultural organizations, it is necessary to develop integration development plans.
In particular, studies were conducted for single-industry territories of coal-mining industries in order to identify the social orientation in the activities of coal-mining enterprises in relation to single-industry territories. At the same time, correlations were obtained that determine the influence of the agricultural industry on the life quality of single-industry towns. Consequently, coal mining enterprises need to maintain links with agriculture, including them in the program of social development of the territory.
We would like to thank the Editor and anonymous referees and researchers of T. F. Gorbachev Kuzbass State Technical University for the helpful constructive comments and detailed suggestions. | 3,129 | 2020-10-01T00:00:00.000 | [
"Economics",
"Geography",
"Business"
] |
RESONANT STATES IN A ONE-DIMENSIONAL QUANTUM SYSTEM
In this work, the concept of resonant states (RSs) in a finite square quantum well is presented. We first derive the analytic secular transcendental equations for even and odd states by applying the outgoing-wave boundary conditions to the one-dimensional Schrödinger wave equation. The complex solutions of these equations are found using the numerical Newton-Raphson method implemented in MATLAB. We see, in particular, that the RSs present a general class of eigenstates, which includes bound states, anti-bound states, and normal RSs.
INTRODUCTION
Resonant states (RSs) have been known in quantum mechanics for quite a long time, since the pioneering works of Gamow (1928) and Siegert (1939). They appear, in the form of resonances, in almost every field of physics, from classical mechanics and electrodynamics to quantum physics and gravity. Despite this fact, however, many fundamental aspects are still to be investigated. Resonant phenomena are also of increasing importance in quantum mechanics, especially given the rapid progress in the physics of semiconductor nanostructures, which can be described by various types of quantum potentials. Many textbooks (Mandle, 2010) describe quantum resonances as singularities of the S-matrix. This is equivalent to solving the Schrödinger equation with outgoing-wave boundary conditions (Doost et al., 2012; Muljarov et al., 2010; Siegert, 1939). These boundary conditions strictly define RSs. These states have complex energy eigenvalues, causing them to decay exponentially in time, leaking out of the system (quantum well/barrier). There are numerous ideas as to how to investigate RSs in quantum-mechanical systems; however, there are certain problems to be overcome, such as knowing how the potential in the Schrödinger equation gives rise to resonances and how to treat and interpret them (Hatano, 2008). A new method of finding RSs in an arbitrary potential, called the resonant-state expansion (RSE), has recently been introduced (Armitage et al., 2014; Doost et al., 2012; Muljarov et al., 2010). In this work, the concept of RSs in a square quantum well is presented. We first obtain the analytic secular equations for even and odd states by applying the outgoing-wave boundary conditions. These equations are solved numerically using the Newton-Raphson procedure implemented in MATLAB. We consider all types of states (bound, anti-bound, and normal RSs) in such a system. We also calculated the wave functions of the RSs.
THEORY
The formalism of Resonant States (RSs)
The quantum-mechanical system we use in this work is described by a one-dimensional Schrödinger equation with a finite square well potential. We use this potential because of its simplicity and its practical importance for low-dimensional structures such as quantum wells (Andrew, 2010). Quite generally, the non-relativistic Schrödinger equation for an arbitrary particle in a three-dimensional potential is

−(ℏ²/2m)∇²ψ(r) + V(r)ψ(r) = Eψ(r), (1)

where E is the energy of the particle, ℏ is Planck's constant, and m is the effective mass of the particle. The wave functions of RSs satisfy the outgoing-wave boundary condition at r → ∞. For convenience, we use in the following m = 1/2 and ℏ = 1, so that E = k², where k is the eigen wavenumber of the particle. The potential V(x) of the particle is

V(x) = −V₀ for |x| ≤ a, V(x) = 0 for |x| > a. (2)

This potential has been covered in depth by many textbooks such as (Mandle, 2010). However, RSs in general are usually not considered in textbooks. Therefore, in this work, it is interesting to see how the RSs move in the complex k-plane as we increase the depth of the well. The solutions to equation (2) in terms of plane waves are

ψ(x) = A e^{iqx} + B e^{−iqx} for |x| ≤ a, ψ(x) = C e^{ikx} for x > a, ψ(x) = D e^{−ikx} for x < −a,

where A, B, C and D depend on n, but we drop the index for brevity of notation; q and k are the wavenumbers in their respective regions and are related as

q² = k² + V₀. (5)

To find the eigenvalues, we require that ψ(x) and ψ′(x) from equation (4) are continuous at x = ±a. Solving for C and D in terms of A and B, we obtain (again dropping the index n for brevity) a secular equation; there are two possible solutions to equation (12). Solution 2, after some algebra, leads to

k_n = −i q_n cot(q_n a), (15)

where we have restored the index n. This solution is odd.
Solution 1 leads analogously to

k_n = i q_n tan(q_n a). (16)

This solution is even.
Equations (15) and (16) (the secular transcendental equations) are solved together with equation (5) to find the eigen wavenumbers, which are plotted in the complex k-plane in Fig. 1. These equations cannot be solved analytically. However, in this work we employ the Newton-Raphson procedure in MATLAB, which typically converges quickly, to find their complex roots, which give rise to all types of states. After finding the roots of the transcendental equations, we substitute back into the equations relating A, B, C and D to obtain the wavefunctions. We found that if A = B the solutions are even and the wavefunctions are symmetric; if, however, A = −B, the solutions are odd and the wavefunctions are anti-symmetric. For both the even and odd solutions the wave functions have to be normalized, but care has to be taken during the calculations. For bound states this is an easy task, but for the resonant states, which have exponentially increasing tails, an additional term must be considered to normalize them correctly. An outer limit, given by R, is required for their normalization. It is found that the value of R can be taken arbitrarily, and thus for convenience we are free to choose the boundaries of the well as the limits of this normalization. The orthonormality condition is given in (Muljarov et al., 2010). It can also be shown that this equation is suitable for the usual normalization of bound states, as it reduces to the standard normalization condition as R tends to infinity, which is the standard approach to normalizing the bound states.
The numerical procedure of finding the Eigen wave numbers.
There are many numerical procedures for solving equations (15) and (16). While the equations cannot be solved analytically, they can be solved numerically to any desired accuracy. Below are the steps we used for the solution: -We use the relation between q and k in Eq. (5); this makes the final equations to be solved functions of k only. -The second step defines the function f(k) such that the equation we solve becomes f(k) = 0. -The third step sets the physical parameter values. -At the fourth step, we look at the function's behavior to choose optimal guess values for the Newton-Raphson procedure.
-Lastly, the solutions for k are plotted in the complex k-plane (see Fig. 1, which shows the solutions of equations (15) and (16) giving rise to all types of states), and the wave functions and energies are also calculated. A sketch of such a root search is given below. As we can see, the resonant states present a general class of eigenstates, which includes bound states, anti-bound states, and normal resonant states. For a shallow well (see, for example, Hatano, 2008) there is only one bound state, and the RSs are far down in the k-plane, almost parallel to the real axis. When we increase the depth of the well, the RS wavenumbers move upwards parallel to the imaginary axis, which also leads to an increase in the number of bound states. Similar results are found using different parameters. When a pair of conjugate RSs hits the imaginary axis, it splits up into a bound/anti-bound state pair, which becomes more bound when the depth is increased further. The normal resonant states all have non-zero real and imaginary parts of k. Each normal resonant state with Re(k) > 0 has a partner with Re(k) < 0. The positions of a normal resonant state and the corresponding anti-resonant state are symmetric with respect to the imaginary axis; their locations are mirror images in the imaginary axis. Depending on the system parameters, there are also discrete states on the imaginary axis, called bound and anti-bound states. The bound states of the system considered are the ground state, the 1st excited state, and the 2nd excited state. The bound states are located on the positive imaginary axis, Im(k) > 0, while anti-bound states are located on the negative imaginary axis, Im(k) < 0.
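As an illustration of such a root search (the study's own implementation is in MATLAB and is not reproduced here), the sketch below applies a Newton-Raphson iteration with a numerical derivative to the even-state secular equation written as f(k) = q tan(qa) + ik, q = sqrt(k² + V₀), which is equivalent to Eq. (16); the well parameters and the initial guess are assumptions.

```cpp
#include <complex>
#include <cstdio>

using cplx = std::complex<double>;

const double a  = 1.0;   // half-width of the well (assumed)
const double V0 = 20.0;  // well depth (assumed; units with hbar = 1, m = 1/2)

// Even-state secular equation with outgoing boundary conditions:
// f(k) = q*tan(q*a) + i*k = 0,  q = sqrt(k^2 + V0).
cplx f(cplx k) {
    cplx q = std::sqrt(k * k + V0);
    return q * std::tan(q * a) + cplx(0.0, 1.0) * k;
}

// Newton-Raphson with a central-difference derivative.
cplx newton(cplx k, int maxIter = 100, double tol = 1e-12) {
    const cplx h(1e-7, 0.0);
    for (int i = 0; i < maxIter; ++i) {
        cplx df = (f(k + h) - f(k - h)) / (2.0 * h);
        cplx step = f(k) / df;
        k -= step;
        if (std::abs(step) < tol) break;
    }
    return k;
}

int main() {
    // Guess in the lower half-plane, where normal RSs live.
    cplx k = newton(cplx(3.0, -0.5));
    std::printf("k = %.6f %+.6fi, residual |f| = %.2e\n",
                k.real(), k.imag(), std::abs(f(k)));
    return 0;
}
```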
Bound states
Stationary states of a system that correspond to discrete energy levels are called bound states. For bound states the energy is real. It can be seen, for example, that the potential generates bound states, since the solutions of equation (22) have an exponentially decaying tail at the boundaries. For a potential vanishing at |x| → ∞ the bound state energies are negative (Uma, 2010). Applying the asymptotic boundary conditions, the bound state wavefunctions have the asymptotic behavior ψ(x) ∼ e^{−κ|x|} as |x| → ∞, where k = iκ with κ > 0.
Therefore the bound state energy is E = k² = −κ², which shows that the bound states can be presented as negative-energy states.
Normal RSs
A resonant state can be defined as an eigenstate of the stationary Schrödinger equation with boundary conditions of outgoing waves only.
Anti-bound states
An anti-bound state shares similar features with the bound states and resonant states, but is regarded as a separate type of state. Unlike bound states, anti-bound states have solutions satisfying E < 0 that diverge exponentially for large |x| (see Fig. 2). The solution inside the well is similar to that of a bound state. We can see that the wave function is symmetric around the origin, which indicates that there must be solutions of defined parity also for anti-bound states. Figure 3 shows the 5th and 10th normal RS wave functions. They show behavior similar to the bound state wave functions, having a defined parity within the well. Unlike the bound states, the normal RSs and anti-bound states exhibit a leak at the boundaries. We can also see that there is only one zero of the 1st, 3rd and 5th RSs, which is due to the odd nature of the wave function, while there are two zeros of the anti-bound state, due to the even nature of the wave function. For anti-bound states the wave function is real, while it is essentially complex for normal RSs.
SUMMARY AND CONCLUSION
In this work, the concept of RSs was introduced and discussed for a one-dimensional finite square well potential. RSs were studied by seeking solutions of the time-independent Schrödinger equation with outgoing-wave boundary conditions. After the application of the boundary conditions to the problem, a system of equations was generated and written in terms of secular transcendental equations for even and odd states. Solutions of these equations were found numerically using the Newton-Raphson method in MATLAB. The full spectrum obtained includes the bound states, associated with purely imaginary positive wavenumbers; the anti-bound states, associated with purely imaginary negative wavenumbers; and the normal RSs, with complex wavenumbers lying in the lower half of the complex k-plane. The properties of RSs were considered and discussed in detail. The wave functions of states of all types were plotted and compared with each other, demonstrating the probability leakage of anti-bound states and normal RSs. We demonstrate that the RSs form mirror-symmetric pairs of states whose wave functions are complex conjugates of each other. | 2,570.4 | 2020-09-23T00:00:00.000 | [
"Physics"
] |
DIGITAL TRANSFORMATION TECHNOLOGIES AS AN ENABLER FOR SUSTAINABLE LOGISTICS AND SUPPLY CHAIN PROCESSES – AN EXPLORATORY FRAMEWORK
Goal: The aim is to present a literature review identifying digital transformation technologies (DTT) for manufacturing and pointing out their capabilities and applications. Furthermore, the paper lays out an exploratory framework to depict the impact scope of the cases on logistics and supply chain management (L&SCM) processes. Design / Methodology / Approach: The identification of relevant DTTs and their capabilities is based on a systematic literature review. The exploratory framework builds upon Industry 4.0 concepts and frameworks as well as the conditions for sustainable digital artefacts. It is then related to cases found in the systematic literature review. Results: The results indicate that the DTTs auto-identification, additive manufacturing, and cloud technology lead to improvements concerning transparency and efficiency, optimizing distribution distances and logistics resources in networks. The framework presents an avenue for assessing the impact scope and potential of implementing DTTs. Limitations of the investigation: The literature base limits the findings, since it is built upon two databases and is restricted to articles published in English. The theoretically deduced framework accounts for the dimensions of technologies, the SCOR model and the RAMI architecture. The illustration is simplified but can be detailed according to need. It has not been tested in case studies and should therefore be applied in practice to develop it further. Practical implications: Practitioners gain insight into how to anchor potential use cases for more sustainable L&SCM processes in the framework. Originality / Value: This paper is the first to relate the capabilities of DTTs to more sustainable L&SCM processes in manufacturing by means of a systematic literature review and to link the findings to an exploratory framework.
INTRODUCTION
The term digital transformation is widely discussed among practitioners, but is also paving its way as a scientific discipline. It affects industries, people and organizations. Technology is seen as a major driver and enabler of digital transformation. These digital transformation technologies (DTT) cause changes in value creation. Companies adapt their strategies, explore new business models, and focus on acquiring new skills and competences. The major goals of digital transformation are increased flexibility, more customer-centric processes, and cutting costs (Hofmann and Rüsch, 2017; Kersten et al., 2017; Ward et al., 2016).
Although DTTs are assumed to be an important driver for more efficient processes in L&SCM for manufacturing companies, their definition remains vague (Schuh et al., 2016). Yoo et al. (2010) define digital technologies as having three important characteristics: (1) reprogrammability, (2) the homogenization of data, and (3) the self-referential nature of digital technology. However, recent developments indicate that the capabilities of technologies relevant for digital transformation in L&SCM go beyond the characteristics described by Yoo et al. (2010), since they also enable decentralized and autonomous processes (Hofmann and Rüsch, 2017). Publications concerning the improvement of processes by means of DTTs are already available, but still very scarce. They mainly focus on efficiency improvements concerning cost and/or the discussion of the implementation of those technologies. The focus on more sustainability enabled by these technologies is an underrepresented research area. When searching for digital transformation or Industry 4.0 in combination with sustainability in September 2018, Business Source Complete yielded only one relevant result (Gružauskas et al., 2018). This fact points out that more research on the intersection of technologies, L&SCM processes, and sustainability is needed. This paper aims at giving an overview of DTTs relevant for logistics and supply chain management in manufacturing. The characteristics of those technologies are presented and related to the areas discussed in the identified papers. The development towards integrated and connected logistics networks requires new assessments of technologies and their impact on the supply chains of manufacturing companies. The resulting research questions are: what are relevant DTTs for manufacturing, and what are their capabilities and prospects concerning more sustainable L&SCM processes?
The paper is structured as follows. First, the research purpose is embedded in theory as well as literature and contextualized. Subsequently, the methodology and the results from the systematic literature review are presented. The underlying cases found in literature are then categorized in an exploratory framework for the impact scope assessment of the technologies on sustainable L&SCM processes. The conclusion and discussion section finalizes the contribution.
CONTEXTUALIZATION
Current research about digital transformation and sustainability often focuses on the implications of DT on the three sustainability dimensions: economic, ecological and social sustainability (Beier et al., 2017; Kayikci, 2018). Beier et al. (2017) find that DT provides opportunities for the ecological dimension under the assumption that resource efficiency improvements can be realized. They highlight the impact of DT on the social dimension and the current challenges with regard to job replacement (Beier et al., 2017). According to Kayikci (2018), digital transformation in L&SCM has not yet reached maturity; hence, its sustainability implications will continue to be improved and changed. The economic dimension of sustainability has the most important impact in the presented case study, which qualitatively maps the expected impacts of DT on the economic, ecological and social dimensions (Kayikci, 2018). Future research should elaborate new technological concepts focusing on opportunities created by digital transformation (Beier et al., 2017). This paper intends to approach the aspect of DT for L&SCM from a different angle: according to the Resource Based View, technologies represent a tangible resource that can be a competitive advantage for a company (Wernerfelt, 1984). In order to explore the possible competitive advantage of DTTs for a company with regard to sustainable L&SCM, the prospects and the impact of their deployment need to be anchored and described. As stated, DTTs show three unique characteristics: (1) reprogrammability, (2) the homogenization of data, and (3) a self-referential nature (Yoo et al., 2010). DTTs represent digital artefacts that should feature four characteristics in order to be sustainable digital artefacts. The first one is elaborateness, determined by modularity, integrity, accuracy, robustness, and other characteristics regarding the quality of their substance. Second, they should have transparent structures, signifying technical openness. Third, semantic information helps to make DTTs easily intelligible to humans and machines through comprehensible structures and metadata. Fourth, distributed location means that DTTs and the associated data are stored and operated on multiple sites, e.g. through replicated data storage or peer-to-peer technology (Stuermer et al., 2017). The many DTT characteristics that need to be met in order to represent a sustainable digital artefact require a reference architecture for assessing the impact of technologies on businesses and supply chains. The exploratory framework developed in this paper intends to provide such a frame and to anchor the cases found in literature, to better understand the focus of current research, and to provide avenues and applications for research and practice.
METHODOLOGY
The systematic literature review follows the approach of first identifying the research field, then constructing the Boolean phrase for the literature search and identifying suitable databases, defining inclusion and exclusion criteria, and finally analyzing the literature and summarizing the results (Durach et al., 2017).
"To answer the first research question, a systematic literature analysis was conducted in May 2018. The Boolean phrase (digital* OR Industry 4.0 OR smart) AND (logistic* OR SCM OR Supply Chain Management OR Supply Chain* OR SC) AND technolog* AND (manufacturing industr* OR operation*)was used to search through two databases (Business Source Complete and Web of Science) and was applied to title, abstract and keywords. The two databases cover a range of journals relevant for manufacturing, logistics and supply chain management and are thus considered to be apt for answering the research question. Despite the care taken in the systematic literature review, the language (English), the selected keywords, and the selected databases represent a natural limitation. Inclusion criteria for the papers are: • Relevance for the manufacturing industry with multi-variant production. This excludes publications covering process industries such as oil, gas and electricity.
• Focus on logistics and supply chain management.
• Discussion of the implementation or use of a concept or technology with regard to logistics and supply chain management and/or description of a classification/meta-analysis of digital transformation with relevance for logistics and supply chain management.
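To make the screening step concrete, the sketch below applies the Boolean phrase above as a filter over bibliographic records. The record fields and helper names are hypothetical illustrations, not part of the original study, which used the databases' own search interfaces.

```python
import re

def matches(record):
    """Screen one bibliographic record against the review's Boolean phrase."""
    text = " ".join(record.get(f, "") for f in ("title", "abstract", "keywords")).lower()
    has = lambda *patterns: any(re.search(p, text) for p in patterns)
    return (has(r"digital", r"industry 4\.0", r"\bsmart")
            and has(r"logistic", r"\bscm\b", r"supply chain", r"\bsc\b")
            and has(r"technolog")
            and has(r"manufacturing industr", r"operation"))

papers = [{"title": "Smart logistics technologies in manufacturing operations",
           "abstract": "", "keywords": ""}]
print([p["title"] for p in papers if matches(p)])  # -> the matching title
```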
The results of the systematic literature analysis are displayed in figure 1, showing that 62 papers represent the basis for further consideration (Junge, 2020). "The remaining papers are published more often in production and manufacturing journals (e.g. International Journal of Production Research) than in logistics and supply chain management journals. They have different emphases, representing the wide range of this interdisciplinary research topic. The papers were subsequently analyzed and investigated with regard to the technologies covered (applicable to all 62 papers) and then with a focus on sustainability. Six out of the 62 papers address sustainability aspects, representing 9.7%" (Junge, 2020).
The method for constructing the framework is as follows: the implications of the literature review are taken as input to define the potentials of technologies as enablers for sustainable L&SCM processes. Furthermore, relevant frameworks for the subject are presented, compared and adapted to a framework fitting the requirements of L&SCM in manufacturing.
RESULTS FROM THE SYSTEMATIC LITERATURE REVIEW
The literature analysis reveals that technologies for collecting data, integrating it, and finally putting it to value are the basis for digital transformation in logistics. The resulting technologies are information and communication technologies, auto-identification technologies, and the cloud for integrating data and making it visible. This is the first step to create transparency about processes and assets. Based on transparency, analytics, blockchain and cyber-physical systems, decentralized and semi-autonomous decisions and processes can be made. For completely autonomous processes with cognitive abilities, automation technologies combined with analytics are relevant. Further technologies supporting autonomous logistics are virtual and augmented reality, as well as additive manufacturing. These are the technologies found in the systematic literature review that act as enablers for increasingly integrated planning and steering of logistics networks. The results show that digital transformation goes beyond the logic of digital representation as postulated by Yoo et al. (2010). Technologies in combination with processes, organization and people enable new capabilities, such as decentralized and (semi-)autonomous decision making as well as automatically initiated tasks. When analyzing the literature, it can be stated that the papers lay out several success factors as well as barriers for the implementation of DTTs. Success factors include the need to test technology in practice to learn from the resulting implications, and the observation that people-related factors, such as managerial skills, tend to play the strongest role in value creation caused by information technology (Dighero et al., 2005; Dong et al., 2009). Potentials include the offering of more adaptable and flexible supply chain services tailored to customer needs based on data, leading to more agile supply chains (Janssen and Feenstra, 2010). A prerequisite for modular L&SCM are service-oriented architectures running on an information technology infrastructure that fits organizational needs. Digital transformation will lead to a paradigm shift comparable to the shift in software development from structured to object-oriented development. This shift will foster the development of new and adapted ontologies for specific requirements and problems in L&SCM (Ameri and Patil, 2012; Kim and Laskowski, 2018; Lu and Ju, 2017; Prause and Weigand, 2016; Zhang et al., 2011).
Existing barriers for the targeted and efficient use of DTTs are lacking communication standards (Doh et al., 2016), hampered integration (Holmqvist and Stefansson, 2006), insufficient flexibility of intra- and inter-organizational business processes (Seethamraju, 2008), and the fact that many companies do not yet dispose of the basic requirements for digital transformation, such as data availability and validity (Bogner et al., 2016).
An unanswered question so far is whether the primary aim of digital transformation - increasing flexibility while decreasing costs - will lead to a reduction or a rise in complexity. On the one hand, planning and steering can be simplified; on the other hand, increasing data, e.g. by tracking and tracing on a more detailed level, will impose new challenges for integration and control mechanisms. More decentralized production, for example enabled by additive manufacturing, can lead to more or fewer supply chain stages, as discussed by Baumers et al. (2017) and Durão et al. (2017). This can also have an impact on sustainable processes and products concerning L&SCM in manufacturing. The papers found in the systematic literature analysis that focus on sustainability aspects in L&SCM are briefly presented below and then clustered according to figure 2.
When assessing the different papers that build the basis for the literature analysis, it is surprising that a clear focus lies on additive manufacturing. This is an enabler for more decentralized and customer-focused production, since it allows, for example, customized on-demand production of certain parts, which can be very relevant for spare parts management. The paper of Baumers et al. (2017) investigates the advantages and disadvantages of centralized versus decentralized supply chains. They highlight the need for further investigating the environmental consequences of supply chain settings. According to the authors, centralized supply chains enable more efficient manufacturing processes, reduced inventory and reduced requirements for distribution. Potentials of decentralized supply chains lie in environmental savings in terms of transportation and handling of intermediate and end products. As a conclusion, they state that the use of additive manufacturing can enable new configurations for distribution. This can result in sustainability impacts; however, possible rebound effects need to be taken into consideration as well (Baumers et al., 2017). Cerdas et al. (2017) also investigate the impact of additive manufacturing on L&SCM. They focus on environmental impacts when using additive manufacturing in a distributed manufacturing system. A special emphasis is put on the product lifecycle, which is compared to a traditional centralized manufacturing system. Their results reveal that it cannot be clearly stated whether the distributed manufacturing system shows advantages compared to the centralized one. The energy efficiency of the respective production process, the regional electricity mix, the material used, the user experience, the quality of the printed product and the prevention of rebound effects all affect the achievable sustainability impacts (Cerdas et al., 2017). Tien (2012) discusses additive manufacturing combined with cloud computing and analytics to enable co-produced mass customization. According to the author, this can lead to less offshoring, because products are co-produced locally, allowing a more effective and efficient production. Such a development can shift the initiating event for production from raw material supply to customer demand (Tien, 2012).
Initiating processes are also in the scope of investigation for the paper by Guo et al. (2017). They simulate the use of automation technologies combined with a cloud service platform to actively publish or request logistics tasks. Their results, achieved through the application of colored Petri nets, show that their approach outperforms the event-driven method concerning energy consumption (Guo et al., 2017). Bechtsis et al. (2017) developed a framework for corporations to consider automated guided vehicles in a structured manner, including decision support for economic, environmental and social sustainability. They subdivide each of the sustainability dimensions on an operational, tactical and strategical echelon. The findings in literature reveal points important for L&SCM decisions according to the proposed framework. Their findings demonstrate a lack of research across the end-to-end supply chain; the main focus lies on warehouse, manufacturing and distribution operations (Bechtsis et al., 2017). Zhang et al. (2016) choose the use of smart auto-identification-enabled boxes as an analysis unit, combined with a cloud service platform used for collaboration. In their setting, a third-party logistics provider is the owner of the smart boxes and manages their maintenance, status monitoring, information management, and recycling. The application of an optimization method shows that the use of the smart boxes leads to advantages concerning loading rate, reducing distribution distances, and optimizing logistics resources. The smart boxes are an example of a hybrid service bundle (product service system) that can enable a more sustainable, green, low-carbon distribution pattern. The resulting advantages are maximized revenue for all stakeholders, reduction of the use of natural resources, and increased logistics efficiency (Zhang et al., 2016).
The following figure summarizes and represents the findings of the systematic literature analysis. It can be stated that the discussed papers cover the capabilities real-time, decentralization, and autonomy for enabling more sustainable L&SCM processes in manufacturing. As a second step, it is worthwhile to put these findings in a wider context and to anchor the discussed approaches in a framework. Therefore, in the following steps, different frameworks with relevance for Industry 4.0, manufacturing, L&SCM, and sustainability are presented and discussed. Based on that evaluation, a framework that allows classifying the presented cases is deduced.
AN EXPLORATORY FRAMEWORK FOR THE IMPACT SCOPE ASSESSMENT OF DTT ON MORE SUSTAINABLE L&SCM PROCESSES IN MANUFACTURING
To develop a framework for describing the impact of DTT on more sustainable L&SCM processes in manufacturing, different already existing frameworks with relevance for the subject are briefly presented. The basis for this evaluation is displayed in the following table.
The work presented by Kayikci (2018) is the one closest to the theme of this research, since it focuses on digitalization in logistics with a focus on sustainability. The author proposes a sustainable logistics ecosystem, including digitalization enablers, characteristics of digitalization in logistics, technologies and applications, as well as sustainability dimensions. The first three impact end-to-end supply chains and result in certain effects concerning sustainability. Therefore, a set of sustainability criteria in the dimensions of economy, environment and society is proposed to evaluate the impacts of digitalization characteristics. This qualitative assessment leads to the result that the economic implications were rated as more important than the other two dimensions. Especially in terms of logistics cost, delivery time, delay, inventory, reliability and flexibility issues, a great potential was seen by the case study participants. It is also stated that digitalization in L&SCM has not yet reached a maturity level allowing the impacts to be assessed on a quantitative basis (Kayikci, 2018). Hofmann and Rüsch (2017) state that Industry 4.0 is expected to achieve opportunities in terms of decentralization, self-regulation, and efficiency. They build on the digitalization framework by Fleisch et al. (2014), and an application model adapted for logistics is designed. According to this application model, the value of Logistics 4.0 lies in the availability of data (transparency about the physical supply chain and its processes), combined with the value of digital services. Hybrid service bundles can be an opportunity for increased customer value. Within their research, the authors qualitatively investigated just-in-time/just-in-sequence delivery as well as the Kanban concept. Concerning just-in-time/just-in-sequence, the new scenario consists of production planning based on real-time consumption data and demand patterns, and an integrated cloud-based ERP system for the end-to-end supply chain. This enables suppliers to plan their disposition and production based on real-time data and delivery with end-to-end route optimization.
They found that reduced bullwhip effects, highly integrated supply chains, and improvements in production planning are among the potential benefits. The modified cross-company Kanban cycle, according to the Industry 4.0 scenario, consists of intelligent bins for demand assessment, transmitting a digital real-time Kanban signal linked to the supplier's disposition and production. Collection and delivery take place via a demand-oriented milk run, and the goods receipt at the buyer is carried out via an auto-identification barrier or scanning device. Hence, an improved demand assessment, dynamic and more efficient milk runs, as well as shortened cycle times can be expected (Hofmann and Rüsch, 2017). The potential capabilities pictured by Hofmann and Rüsch coincide with the findings of the literature review in this paper. The capabilities as depicted in figures 2 and 3 are more extensive, as they complement decentralization, self-regulation (autonomy) and real-time with visibility, integration and automation. The customer value and potential efficiency gains of these capabilities for L&SCM need to be taken into account as well, as they are a prerequisite for achieving potentially autonomous processes in L&SCM. Strandhagen et al. (2017) explore Logistics 4.0 and emerging sustainable business models. They investigate implications of Logistics 4.0 trends on the value proposition, customer interface, supply chain structure and financial model, and they also include the three sustainability dimensions. Logistics 4.0 trends encompass individualization, servitization, accessibility, autonomy, global networks, green logistics, and the sharing economy. The two last-mentioned trends account for sustainability aspects and partly conflict with other trends, such as individualization. Concerning the changing value proposition, Strandhagen et al. (2017) also stress the shift towards service orientation instead of product orientation. This materializes in the fact that value creation focuses on functionality, solutions, licenses to use the products, and applications. Furthermore, the customer receives more personalized products and services, augmenting the perceived value to the customer. The customer interface will change as well, since customers will increasingly be integrated into the design of products and services. This allows a new relationship with the customer, which can foster long-term relationships. Supply chain structures are susceptible to change towards the sharing of resources and a total-systems approach to manage the flow of information and material. New technologies allow the creation of virtual and more adaptable supply chains. The implications within the sustainability dimensions are described as an increase in material productivity, resource efficiency and waste reduction, more sustainable production processes, and shared resources for better asset utilization. The new customer interface can enable a new awareness for products and services (encouragement of sufficiency). The authors discuss that the conflicting objectives of the Logistics 4.0 trends, as well as the possible rebound effects of technology use, need to be taken into account, but they see potential for the impact of the mentioned trends with regard to sustainability. Sustainability development in turn can also impact the formation of trends, e.g. through customer pressure to shift the focus to collaboration and sharing solutions in logistics (Strandhagen et al., 2017). Brettel et al. 
(2014) have analyzed eight scientific journals with regard to the following research fields: individualized production, end-to-end engineering in a virtual process chain, and production networks. They then applied a cluster analysis to assign topics to the research fields. The findings are enriched by structured interviews with managers from industry as well as from consulting. They found several topics within the three research streams. The one directly linked with sustainability is collaborative consumption, linked to horizontal integration in collaborative networks. Out of the 548 articles included in the cluster analysis, 106 belong to the category of collaborative networks. Further findings of the cluster analysis reflect the capabilities as described in the previous literature analysis, as well as the importance of servitization. A primary challenge will remain to incorporate flexibility into mass production to account for the requirement of individualized products. This is also valid for modular L&SCM services and processes (Brettel et al., 2014).
The RAMI 4.0 is an attempt to present an architecture model that serves as a basis for describing informational and physical relations within the smart factory. On the one hand, it helps describe process changes caused by Industry 4.0; on the other hand, it serves as a basis for standardization. It was developed for a manufacturing context, but can be adapted for L&SCM purposes. The hierarchy levels already include a network of enterprises, and the informational layer, which is based on IEC 62264 and IEC 61512, is also valid for the L&SCM processes of manufacturing companies. The only part restricting the applicability of the RAMI 4.0 for end-to-end supply chains is its inherent value stream, which only depicts a product's value stream. However, the importance of L&SCM in this context is recognized by stating that the data used in assembly can help intralogistics organize itself on the basis of the order backlog. Purchasers can see the position of vendor parts in real time. Customers can track the status of their order and where it is in the manufacturing process. Linking the connectivity potentials to integrate purchasing, order planning, assembly, maintenance, customers, and suppliers generates great potential for improvement. Therefore, the complete value network has to be considered, and not only the factory in isolation. This includes suppliers, engineering, production, and customers, plus logistics service providers (Adolphs et al., 2015). A meaningful addition to the RAMI 4.0 from an L&SCM perspective is the Supply Chain Operations Reference (SCOR) model. The SCOR model is used by science and practitioners alike. It is a model that describes the different levels of logistics processes and is also used for modeling IT processes in L&SCM (Bolstorff et al., 2007). For the purpose of describing the anchoring of the presented cases for the DTTs' potentials for more sustainable L&SCM processes in manufacturing, the first level is sufficient, also due to the hitherto limited number of included papers and the cases they present.
The framework proposed in figure 4 builds upon the findings previously elaborated. It depicts the DTTs' enabled capabilities as well as the representation of the RAMI 4.0 (information layer, hierarchy level and value stream) with the end-to-end supply chain, which is affected by the DTTs. The categorization of the papers found in the systematic literature review shows that their focus mainly lies on production and delivery. Sourcing and returning are not yet in the focus of academic research with regard to the impact of DTTs on sustainability in L&SCM, although Zhang et al. (2016) briefly mention return processes. The six considered papers also support the fact that the sustainability dimension "economy" is the most relevant in academic research (e.g. Bechtsis et al., 2017). The information layers and hierarchy levels are deliberately not detailed any further, as such an analysis would have been too fine-grained for the small number of cases (papers) at hand. However, once the amount of research in this area increases, the distinction between informational layers and hierarchical levels can be useful to further differentiate the impact scope of DTTs in an L&SCM context for manufacturing. This would add two additional impact scope dimensions to the assessment. Table 3 depicts a broad analysis of the six cases found in literature along the three axes information layer, value stream, and hierarchy level. Additionally, it displays whether the conditions of sustainable digital artefacts (1-4) are met or not.
The presented framework adds two important angles to the existing RAMI 4.0 for the assessment and anchoring of cases for sustainable L&SCM processes in manufacturing. First, it includes the value stream based on the SCOR model, and second, it adds the conditions for sustainable digital artefacts to the technical representation of the business and hierarchy layers. Implications for research and practice are depicted in the following section.
CONCLUSION AND DISCUSSION
The research area of digital transformation in L&SCM is currently evolving. As there is still no clear understanding of the concrete implications, this exploratory research paper intends to give insights into more sustainable L&SCM processes in manufacturing enabled by DTTs. The systematic literature analysis offers a snapshot of subjects currently under investigation in scientific literature. Results show that the prospects of DTTs concern the optimization of transportation distances, the reduction of energy consumption, and the optimization of logistics resources. These findings mainly concern the environmental sustainability dimension. Solely the decision-making framework proposed by Bechtsis et al. (2017) includes all three sustainability dimensions. The social dimension of sustainability is clearly underrepresented.
The proposed framework for anchoring the impact scope of the cases found in literature shows that they mainly focus on production and delivery. For a more holistic perspective, the end-to-end supply chain should be put into focus, as is also encouraged and performed by Kayikci (2018). The anchoring of the cases to the axes of the framework, as displayed in Table 3, shows that the focus of the observed cases can mainly be related to the business layer and the hierarchy level of the connected world. This is inherent to L&SCM. The two cases, D and F, at the work unit level describe concrete cases that also depict challenges concerning the integration of more sustainable and technologically enabled processes. With more practical cases, both from literature and case studies, the framework can be further developed and improved. It also allows a more comprehensive investigation using a more detailed level of the SCOR model.
Analysis of the six cases shows that an end-to-end perspective is not yet included; return processes are out of focus. This hampers the detection of sustainability potentials concerning closed-loop supply chains. Additionally, avenues for research lie in three areas: first, conceptualizing the impacts of DTTs for sustainability; second, relating and further developing the conditions for sustainable digital artefacts for L&SCM; and third, developing practical methods and tools for practitioners to assess the sustainability impact of their technology deployment. The first two research avenues are a prerequisite for the third one. The conditions for digital artefacts should be enriched by ecological and social ones. As standardization projects for end-to-end integration of engineering in real-time connected value networks are currently pursued, this is a tremendous chance to also include the sustainability focus in a stronger way. The framework proposed in this contribution offers a first orientation for those avenues. Practitioners benefit from the framework in two ways: they can test whether their DTT deployments fulfill the conditions of digital artefacts, and they can link them to the hierarchy and information layers. This is especially helpful when describing challenges with regard to the implementation of DTTs for more sustainable L&SCM processes, e.g. in the information layers with regard to data integration and communication. These challenges should be captured in a structured manner with the sustainability focus in mind. This would help to purposefully develop the digital artefacts, and would also give indications for policy makers concerning standardization needs and areas for funding, in order to reap the benefits of DTTs for a more sustainable and just deployment. This is especially linked to open data and software, as well as the social implications of technology use in value networks.
There are several limitations to this exploratory research approach. First, the systematic literature analysis is only based on two databases and on English literature, which might have excluded other relevant papers. Second, the proposed framework should be complemented and enriched by primary case studies to further understand the impacts of DTTs on more sustainable L&SCM processes in manufacturing and to develop a conceptualization of the sustainability potentials of DTTs. The six literature-based cases are a limitation for the analysis and conclusion; therefore, in the future, a database with cases of primary and secondary nature should be established. | 7,064.4 | 2019-08-29T00:00:00.000 | [
"Environmental Science",
"Business",
"Engineering",
"Computer Science"
] |
Spatial Beam Self-Cleaning and Supercontinuum Generation with Yb-doped Multimode Graded-Index Fiber Taper Based on Accelerating Self-Imaging and Dissipative Landscape
We experimentally demonstrate spatial beam self-cleaning and supercontinuum generation in a tapered Ytterbium-doped multimode optical fiber with a parabolic core refractive index and doping profile, when 1064 nm pulsed beams propagate from the wider (120 micrometers) into the smaller (40 micrometers) diameter. In the passive mode, increasing the input beam peak power above 20 kW leads to a bell-shaped output beam profile. In the active configuration, gain from the pump laser diode makes it possible to combine beam self-cleaning with supercontinuum generation between 520 and 2600 nm. By taper cut-back, we observed that the dissipative landscape, i.e., a non-monotonic variation of the average beam power along the MMF, leads to modal transitions of the self-cleaned beams along the taper length.
Introduction
Nonlinear beam propagation in multimode optical fibers (MMFs) has been revisited in recent years: many complex spatio-temporal nonlinear properties have been unveiled [1]. Examples include multimode optical solitons [1][2][3], geometric parametric instability (GPI) [4], ultra-wide supercontinuum (SC) generation [5][6][7][8], spatiotemporal mode-locking [9], and Kerr-induced beam self-cleaning (KBSC) [10][11][12][13][14][15][16], to name a few. KBSC results from a multimode four-wave mixing process appearing above a certain threshold peak power, which produces a dramatic reshaping of the output transverse beam profile. In its simplest manifestation, KBSC transforms the output speckled beam pattern into a high-quality, quasi-single mode bell-shaped beam, accompanied by a low power background of higher-order modes (HOMs). KBSC may be accompanied by a complex temporal pulse break-up [13]. Brightness, peak power, and polarization degree of the output beam [16] may all be substantially increased by KBSC. It is important to note that KBSC critically depends on the input beam transverse mode content: launching a tilted beam into the MMF may lead, for example, to KBSC into the LP11 mode of the fiber [17]. As a result, for different wave fronts at the fiber input, most of the beam energy remains confined in a low-order mode (LOM) along the MMF.
KBSC has been demonstrated in different MMF types: graded index (GRIN) MMFs [10][11][12][13]16], in Ytterbium (Yb) doped MMFs [14] and in photonic crystal non-parabolic refractive index MMFs [15]. Spatial self-cleaning in GRIN MMFs is based on their characteristic spatial beam self-imaging effect, which introduces a longitudinal periodic modulation of the core refractive index, thanks to the Kerr effect. This index modulation acts as a dynamic long period grating, that phase-matches four-wave mixing interactions [18], leading to a complex power transfer between modes, or optical turbulence. This process leads to an irreversible depletion of intermediate modes, accompanied by energy flow into both LOMs and HOMs [19]. This process is analogous to the inverse and direct cascade taking place in 2D hydrodynamic turbulence: the theoretically predicted conservation of the average mode number, during the KBSC process, has been recently experimentally confirmed [20].
As first predicted by Longhi [21], the nonlinear (or dynamic) longitudinal grating induced by self-imaging in GRIN MMFs also leads to the generation of a series of spectral sidebands, ranging from the visible to the near-infrared [1,2,4,22]. SC generation in GRIN MMFs has been observed by injecting either femtosecond or sub-nanosecond pulses in the anomalous (1550 nm) [1,21] or normal (1064 nm) [5][6][7][8] dispersion regime, respectively. In the anomalous dispersion regime, spectral broadening results from the interplay between spatiotemporal multimode soliton oscillations [2] and dispersive wave (DW) generation [22]. Spectral broadening leading to red-shifted (with respect to the pump beam) SC is induced by stimulated Raman scattering (SRS) and soliton self-frequency shift. SC to the blue side of the pump is seeded instead by either DWs or GPI (in the normal dispersion regime). Therefore, self-imaging has a crucial role to generate visible light SC in GRIN MMFs.
On the other hand, tapered optical fibers have been shown to exhibit numerous unique advantages over fibers with a longitudinally invariant core diameter, including high output beam quality, HOM filtering, and broad SC generation [23,24]. Moreover, active rare-earth-doped tapers are used to suppress nonlinear effects in chirped pulse amplifiers when injecting a beam from the small-core side, since pulse amplification is accompanied by a progressive decrease of the nonlinear coefficient owing to the core diameter increase [24]. However, when injecting a beam into the large-core side of a multimode fiber taper, accelerated self-imaging occurs, since the self-imaging period is directly proportional to the core diameter, in analogy with the Airy-Talbot effect. Indeed, a recent experiment has shown that accelerated self-imaging in a passive GRIN MMF makes it possible to broaden the spectral width of SC generation on the blue side of the pump [25].
In this work, we study accelerated self-imaging-induced nonlinear mode interactions in an active, Yb-doped GRIN multimode fiber taper. We demonstrate visible-to-mid-infrared SC generation in combination with KBSC in a relatively long (~10 m) active taper, when injecting 500 ps pulses at 1064 nm, propagating in the normal dispersion regime from the largest to the smallest taper diameter. We achieve, for the first time to our knowledge, KBSC in a tapered Yb-doped MMF. In addition, we show that the presence of gain induced by a pump laser diode makes it possible to combine KBSC with SC generation. Finally, we analyze by the cut-back method the longitudinal evolution of KBSC and supercontinuum generation along the taper length. This reveals that accelerated self-imaging, combined with a dissipative landscape, may lead to new, unexpected transitions among the self-cleaned modes.
Experimental set-up
The scheme of the experimental setup is presented in Fig. 1. We used a Nd:YAG microchip laser (signal) at 1064 nm with a Gaussian spatial beam shape, generating 500 ps pulses at a repetition rate of 500 Hz, with up to 130 kW peak power. A polarizing beam splitter (PBS) and two half-wave plates (HWPs) were used to adjust the input power (HWP1) and polarization state (HWP2) of the signal. In our experiments, we used a 9.5 m long Yb-doped MMF taper exhibiting a strong core absorption at 1064 nm (average attenuation 1.3 dB/m) and parabolic core refractive index (see Fig. 2) and doping profiles. The tapered fiber was intentionally wound on a fiber coil, which is not shown in Fig. 1. As shown in Fig. 2, the largest input core diameter of the taper was 122 µm (Fig. 2(b)) (with a 350×350 µm cladding), whereas the smallest core diameter was close to 37 µm (Fig. 2(c)) (with a 90×90 µm cladding). The core diameter decreased exponentially along the taper length between these two values (Fig. 2(d)). The taper was excited by launching the signal into the largest input diameter. To pump the rare-earth Yb ions, a CW multimode laser diode (LD) at 940 nm with 10 W output power (Fig. 1) was used, providing a net gain for the signal beam propagating along the tapered fiber. The taper was placed between two lenses. The first lens, with a focal length of 35 mm, was placed on a three-axis translation stage in order to focus both the signal (with a beam diameter of 20 µm at full width at half maximum intensity (FWHMI)) and the pump LD (with a FWHMI beam diameter of 200 µm) onto the input face of the MMF. In order to control the input coupling conditions (injection into the fiber), a micro-lens with a focal length of 8 mm was used to image the beam from the output face of the MMF (near field) onto a CCD camera (Gentec Beamage-CCD12 and Indigo Systems Alpha NIR camera: 900-1680 nm). We used two optical spectrum analyzers (OSAs) (Ando AQ6315A: 350-1750 nm and Yokogawa AQ6376: 1500-3400 nm) to measure spectral reshaping.
Pump laser switched off: beam nonlinear self-cleaning
The first experiment was performed in a passive configuration. The signal beam was focused into the core of the Yb-doped GRIN tapered MMF. The input coupled signal peak power was set to 0.52 kW and subsequently increased gradually up to 114 kW (the damage threshold of the input taper face). As shown in Fig. 3(a), the spatial beam pattern at the taper output evolved significantly when increasing the signal power: we observed the transition from a speckled beam into a bell-shaped, smooth central beam, corresponding to quasi-single-mode emission. The injected beam evolved into the self-cleaned output beam for input peak powers above the 20 kW threshold, and the output beam remained stable for up to 114 kW of coupled signal power. Such behavior can be attributed to KBSC, whereby most of the beam power is transferred into a beam close to the fundamental mode profile (LP01) of the fiber [10]. The self-cleaned beam also remained very robust against external disturbances (e.g., intentional bending of the fiber), similarly to the case of passive GRIN MMFs [10] and Yb-MMFs with a non-parabolic refractive index profile [14]. Owing to the high residual absorption of the fiber at 1064 nm (total attenuation 12.4 dB), the maximum output peak power was limited to 6.5 kW for an input power of 114 kW. The input power threshold for spatial self-cleaning in our experiment is nearly the same as that reported by Guenard et al. [14] with a 1.1 m length of lossy (i.e., unpumped) Yb-doped MMF with a nearly step-index profile and a constant core diameter of 55 μm. Previous experiments on self-cleaning with a passive multimode doped fiber indicated that the self-cleaning threshold increases as the fiber length grows larger [14], contrary to the case of lossless GRIN fibers. Fig. 3(b) shows the output spectra from the tapered MMF for different input peak power levels. No significant frequency conversion was observed when progressively increasing the input power, besides the discrete frequency peak appearing above 44 kW, which corresponds to the first Raman Stokes sideband.
Pump laser switched on: beam nonlinear self-cleaning and supercontinuum generation with gain
In a second experiment, we added the CW pump source provided by the 940 nm laser diode delivering up to 10 W, enabling amplification along the multimode fiber taper. We kept the same 20 µm spot size for the signal laser on the input face of the taper as in the passive configuration. First, the pump was switched off, and we fixed the signal input peak power at 19.6 kW, just below the KBSC threshold. In this configuration, the transverse content of light at the taper output involved a superposition of the fundamental and higher-order modes, as shown in Fig. 4(a). Next, we switched on the pump LD and gradually increased its power, thus adding a growing amount of gain (G) to the fiber. The gain (G) indicated in Fig. 4 corresponds to the ratio between the measured output and input average power of the signal at 1064 nm. Note that the gain G remains low, even for relatively high pump powers. As later discussed with reference to Fig. 7, this is due to the dissipative landscape of our nonlinear active taper: the signal is only amplified over the first 2 m of active fiber, and is subsequently reabsorbed upon longer propagation in the taper. In Figs. 4(b)-(h) we show a series of typical output beam patterns recorded for a fixed input peak power (19.6 kW) of the signal and different net gain values. We used a bandpass optical filter at 1064 nm with 10 nm bandwidth in front of the camera, in order to block residual radiation from the pump. From Figs. 4(b)-(e), a progressive reshaping of the guided beam profile into a bell-shaped, cleaned spot can be observed. Such a self-cleaned beam started to form for G = 0.21, and it remained preserved up to G = 1.34, which is the maximum net gain. As discussed before, the limited net gain G is due to pump absorption taking place beyond the first meters of taper, where the pump LD has been fully depleted. Note that our observations clearly show that signal amplification along the taper leads to spatial beam self-cleaning. Besides increasing the effective length, the pump LD leads to a gain-guiding mechanism that cooperates with KBSC in the generation of a bell-shaped output beam profile.
After obtaining KBSC, by further increasing the LD pump power we observed the generation of an ultra-broadband supercontinuum. In order to better understand the evolution of the supercontinuum as a function of gain, we present in Fig. 5(a) typical output spectra for varying gain values. As can be observed, up to G = 0.21 the signal is amplified without showing significant spectral broadening, besides that induced by the stimulated Raman Stokes peak above G = 0.16. By further increasing the gain G, SC generation was obtained starting from G = 0.40. From G = 0.72, an anti-Stokes sideband induced by GPI is observed, which leads to substantial spectral broadening on the blue side of the signal laser. From Fig. 5(b), we can observe that for G = 1.34 the input signal evolves into a remarkably broad SC spanning between 520 nm and 2600 nm. Subsequently, we characterized the spatial beam profile of the SC at the output face of the taper at various wavelengths, by using bandpass filters with center wavelengths of 600 nm (10 nm bandwidth), 1064 nm (10 nm bandwidth), 1550 nm (12 nm bandwidth) and 1600 nm (12 nm bandwidth), and appropriate imaging cameras. The spatial distributions of the selected parts of the SC are presented in the insets of Fig. 5(b). At high power levels, the spatial output distributions do not exhibit a speckled structure. Instead, the spatial beam profiles are Gaussian-like at all measured wavelengths across the entire SC spectrum, owing to the interplay of Kerr and Raman self-cleaning, combined with gain guiding. Similar results have been reported on SC generation using passive (i.e., lossless) GRIN multimode fibers [5][6][7][8] and tapers [25]. As we shall see, the LD-induced gain introduces a dissipative landscape (i.e., a non-monotonic variation of the average beam power) along the taper, which exacerbates nonlinear effects with respect to the passive taper case, leading to combined SC generation and self-cleaning in the Yb-doped MMF taper.
The SC generation process results from a complex interplay of Raman scattering, soliton self-frequency shift, dispersive wave generation, and spatiotemporal instabilities (or GPI) of light propagating in GRIN MMFs. On the red-shifted side of the input signal, SRS combined with four-wave mixing is the main mechanism for SC generation, whereas GPI sidebands provide a seed that fosters subsequent SC generation on the blue side of the signal, between 600 nm and 800 nm, by parametric amplification of the GPI signal. Since the square of the sideband frequency shift is approximately inversely proportional to the decreasing self-imaging period, accelerating self-imaging in the taper is expected to lead to a progressive blue shift of the GPI sidebands, which largely expands the overall range of spectral broadening. This is obtained at the expense of a decrease in the parametric frequency conversion efficiency, because of the continuous shifting of the phase-matching condition for parametric gain.
In order to analyze the spatial and spectral beam dynamics induced by accelerating self-imaging, we studied the evolution of KBSC and SC generation as a function of taper length. We fixed the input peak power of the signal at 19.6 kW, and set the maximum pumping condition (G = 1.34). A cut-back method was used to determine the spatial and spectral beam evolutions along the taper. A 1064 nm optical filter was used at the output of the taper, in order to measure the power of the amplified signal and to analyze the transverse beam profile. The average power of the amplified signal (at 1064 nm) at the output taper face was measured for different fiber lengths. Fig. 6(a) summarizes the obtained spectral evolution as a function of taper length. Spectral broadening only appears after the first two meters of fiber, leading to the progressive generation of SC towards the infrared because of the interplay of Raman scattering and the Kerr effect. Moreover, a blue-shifted continuum seeded by the first anti-Stokes GPI sideband is only clearly visible after 5.55 m of propagation into the taper. In order to analytically calculate the frequency position of the sidebands (limited to the first anti-Stokes GPI sideband), we use, according to Ref. [4], the relation f_h ≈ ±√h·f_1, with h = 1, 2, 3, …, and f_1 = (1/2π)·√(2π/(k″ξ)), where ξ and k″ are the self-imaging period and the group velocity dispersion (GVD), respectively. The self-imaging period varies with the radius ρ of the fiber core and the relative index difference Δ as ξ = πρ/√(2Δ). Due to the presence of a large number of modes in the tapered fiber, we may consider that the propagation of the amplified signal mainly occurs in bulk silica. Hence, we used the GVD coefficient k″ = 16.55 × 10⁻²⁷ s²/m at 1064 nm of standard GRIN MMFs [4]. Knowing the radius of the core along the taper, we can easily deduce the variation of the self-imaging period along the length of the taper. Therefore, the frequency detuning of the first resonant GPI sideband can be analytically estimated via f_1. The frequency detuning and the wavelength of the first anti-Stokes GPI sideband are shown in Fig. 6(b). From Fig. 6(b), we can see that the sideband frequency shift increases (and its wavelength decreases) along the taper. Fig. 6(a) shows that, in our experiments, the GPI-generated spectrum only appears between 5.55 and 9.5 m. In this fiber length range, the first GPI sideband shift varies from f_1 = 137 THz at a core taper radius of ~25 µm (taken at a distance of 5.55 m) to f_1 = 159 THz at a core taper radius of ~18.5 µm (taken at a distance of 9.5 m). The corresponding wavelength of the first anti-Stokes GPI sideband decreases from 710 nm to 675 nm, as shown by the white dashed curve superimposed on the experimental spectrum in Fig. 6(a): the estimated spectral shift at distances between 6 and 9 m is larger than the one observed experimentally. This indicates that the blue-shifting SC is mainly seeded from quantum noise by accelerating self-imaging-induced GPI, which occurs over the first 2-3 m of taper, that is, before a substantial spectral broadening of the signal occurs. In fact, Fig. 6(b) shows that the predicted GPI peak gain over the first 3 m of taper sweeps across the observed range of blue SC, namely, the 650-800 nm spectral region. In order to reveal the dissipative landscape of the taper, we measured the output average power of the amplified signal at 1064 nm as a function of taper length, as illustrated in Fig. 7.
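As a rough numerical cross-check of these estimates, the following sketch evaluates ξ and f_1 from the formulas above at the two quoted core radii. The relative index difference Δ is an assumed typical GRIN value, since it is not quoted in the text, so the output only approximately reproduces the 137 and 159 THz figures.

```python
import numpy as np

k2    = 16.55e-27   # GVD at 1064 nm, s^2/m (value from the text)
Delta = 0.0103      # relative index difference (assumption, not from the text)
c     = 299792458.0
f_pump = c / 1064e-9  # pump frequency, Hz

for rho in (25e-6, 18.5e-6):          # core radii quoted at 5.55 m and 9.5 m
    xi = np.pi * rho / np.sqrt(2 * Delta)              # self-imaging period, m
    f1 = np.sqrt(2 * np.pi / (k2 * xi)) / (2 * np.pi)  # first GPI detuning, Hz
    lam_as = c / (f_pump + f1)                         # first anti-Stokes sideband
    print(f"rho = {rho*1e6:4.1f} um: f1 = {f1/1e12:5.1f} THz, "
          f"anti-Stokes at {lam_as*1e9:3.0f} nm")
```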
As can be seen, the optimal taper length, i.e., the length at which the amplification of the input signal is maximum, corresponds to 1.55 m. This leads to an enhancement of nonlinear effects after 2 m of taper, which effectively turns on the Raman gain and the associated spectral broadening.
The nonlinear dynamics of the spatial profile of the beam along the taper, induced by the dissipative landscape, are shown in the various insets of Fig. 7. The spatial beam distribution is speckled at the beginning (first meter) of the propagation but, very interestingly, Fig. 7 unveils that the beam is progressively self-cleaned into different LOMs during its propagation along the taper. Between 2 m and 6 m, that is, in the region where SC is generated but before the appearance of a GPI-induced spectrum, a LOM (which resembles the LP11 mode) is generated. Subsequently, for distances above 6 m, and in correspondence with the appearance and broadening of the GPI spectrum, self-cleaning occurs into a bell-shaped beam whose size is close to that of the fundamental LP01 mode. Therefore, in the presence of a dissipative landscape (i.e., the interplay of gain and loss along the MMF), self-cleaning into an LP11-like mode appears as a transient effect. The thresholds of GPI and SC generation are indicated in Fig. 7 as vertical dashed red lines at the corresponding fiber positions.
Conclusions
To conclude, we experimentally demonstrated, for the first time to our knowledge, that tapered active ytterbium-doped multimode fibers with a parabolic index and doping profile may provide a new and versatile platform for high-beam-quality supercontinuum generation ranging from the visible to the mid-infrared, when pumped in the normal dispersion regime at 1064 nm. The interplay of GPI and SRS allowed us to generate, in combination with a gain/loss landscape, a spectral bandwidth extending from 520 nm up to 2600 nm by using a 9.5 m long tapered fiber. Accelerating self-imaging led to Kerr beam self-cleaning in both the passive and active configurations of our tapered Yb-doped GRIN MMF. In the active case, the cooperation of KBSC and Raman beam cleanup led to high-beam-quality emission across the entire SC bandwidth. By the cut-back method, we studied the evolution of beam self-cleaning and supercontinuum generation along the tapered fiber operating in the active configuration. We observed that the output spatial distribution of the beam evolves from speckles in the first meters into a dual-lobe, LP11-like mode as SC generation is obtained, and finally into a bell-shaped beam close to the fundamental mode as the GPI-induced spectrum develops.
Active MMF tapers may thus combine accelerating self-imaging with a dissipative landscape, and permit a versatile control of the spectral and spatial content of multimode light beams. These results may find important applications in multimode fiber lasers and in nonlinear imaging technologies. | 4,933.6 | 2019-04-05T00:00:00.000 | [
"Physics",
"Engineering"
] |
Note on chromatic polynomials of the threshold graphs
Let G be a threshold graph. In this paper, we first give a formula relating the chromatic polynomial of the complement Ḡ of G to the chromatic polynomial of G. We then express the chromatic polynomials of G and Ḡ in terms of the generalized Bell polynomials.
Introduction
Recall that for a given graph G = (V, E) of order n, a λ-coloring of G, λ ∈ N, is a mapping f : V → {1, 2, . . ., λ} such that f(u) ≠ f(v) whenever the edge uv ∈ E. If such a mapping f exists, the graph G is said to be λ-colorable; the chromatic number of G, denoted by χ(G), is the minimal value of λ for which the graph G is λ-colorable, and the number of λ-colorings of G is called the chromatic polynomial P(G, λ), see [4,8,9]. This paper is concerned with the chromatic polynomials and the sigma polynomials of threshold graphs. These graphs were introduced by Chvátal et al. [3] and Henderson et al. [5] and have numerous applications, see for example [6]. They can be constructed from an isolated vertex by repeatedly adding a new vertex, either as an isolated vertex or as a dominating vertex of the graph. From this definition, it follows that the complement graph Ḡ of G is also a threshold graph. The object of our investigations in this paper is, first, to deduce for a given threshold graph G the chromatic polynomial P(Ḡ, λ) from the chromatic polynomial P(G, λ), and, second, to express the chromatic polynomials of G and Ḡ in terms of the generalized Bell polynomials B_{r,s}(x) defined by Carlitz and studied extensively by Blasiak, Penson and Solomon, see [1,2]. Below, we use the following notation: G_n is a graph of order n without edges, with the convention P(G_0, λ) = 1.
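Since a threshold graph is encoded by its creation sequence, its chromatic polynomial can be computed step by step. The sketch below assumes only the two standard recurrences implied by the construction just described: adding an isolated vertex gives P(G′, λ) = λ·P(G, λ), and adding a dominating vertex (a join with K1) gives P(G′, λ) = λ·P(G, λ − 1); the function name and encoding are illustrative.

```python
import sympy as sp

lam = sp.symbols("lambda")

def threshold_chromatic(seq):
    """Chromatic polynomial of the threshold graph built from a creation
    sequence: 'i' adds an isolated vertex, 'd' adds a dominating vertex."""
    P = sp.Integer(1)                        # convention P(G_0, lambda) = 1
    for step in seq:
        if step == "i":
            P = lam * P                      # new color chosen freely
        else:
            P = lam * P.subs(lam, lam - 1)   # dominating vertex: join with K1
    return sp.factor(P)

print(threshold_chromatic("idd"))  # triangle K3: lambda*(lambda-1)*(lambda-2)
print(threshold_chromatic("iid"))  # star K_{1,2}: lambda*(lambda-1)**2
```

The factored form makes the integer zeros and their multiplicities r_k, used in Theorem 2.2 below, directly visible.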
Chromatic polynomials of threshold graphs
Using the definition of the threshold graphs G and Ḡ, the following theorem gives simple expressions for their chromatic polynomials.
Theorem 2.1. Let (G_n, n ≥ 1) be a sequence of threshold graphs, where G_n has n vertices. Then [...]. Furthermore, we have [...], where [...], with i_0 = j_0 = 0 and δ the Kronecker delta, i.e., δ(i,j) = 1 if i = j and δ(i,j) = 0 if i ≠ j.
Proof. By construction, G_n is the graph G_{n−1} plus a vertex x_n such that x_n is either an isolated vertex or a dominating vertex. Similarly, by construction, Ḡ_n is the graph which can be written as

Thus, the desired expressions follow.

Let now r_k be the order of multiplicity of a number k among the zeros of P(G_n, λ):

For a given threshold graph G with known chromatic polynomial P(G, λ), the following theorem gives the explicit expression of the chromatic polynomial P(Ḡ, λ).

Theorem 2.2. Let (G_n; n ≥ 1) be a sequence of threshold graphs, where G_n has n vertices, such that

for some non-negative integers r_0, r_1, . . ., r_{n−1} such that

Then the following holds:

Proof. From Theorem 2.1, the chromatic polynomial P(G_n, λ) can be written as

We prove that the chromatic polynomial P(Ḡ_n, λ) must be as follows:

Indeed, by induction on n. The case n = 1 is obvious; assume

where

where s_0 = 1, s_j = r_{j−1} (1 ≤ j ≤ n), and since λ = λ − s_0 + 1 we get

So the induction holds and produces the desired result.
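The displayed formulas of Theorem 2.1 did not survive extraction. As a hedged reconstruction, the vertex-by-vertex construction described in the proof yields the following standard recursion (a well-known fact about adding isolated and dominating vertices, which is presumably what the lost display records):

```latex
% Standard recursion for chromatic polynomials under the threshold
% construction (a reconstruction consistent with the proof of Theorem 2.1,
% not a verbatim quote of the lost display).
\[
P(G_n,\lambda) =
\begin{cases}
  \lambda \, P(G_{n-1},\lambda)     & \text{if } x_n \text{ is an isolated vertex},\\[2pt]
  \lambda \, P(G_{n-1},\lambda - 1) & \text{if } x_n \text{ is a dominating vertex},
\end{cases}
\qquad P(G_0,\lambda) = 1.
\]
% A dominating vertex may receive any of the \lambda colors, after which the
% rest of the graph must be colored with the remaining \lambda - 1 colors;
% an isolated vertex contributes a free factor of \lambda.
```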
Corollary 2.1. Let G be a threshold graph on n vertices. Then

Proof. It is easy to see that

Corollary 2.2. Let G be a threshold graph on n vertices. Then the sum of all zeros of the polynomial

Proof. Setting

for some non-negative integers r_0, r_1, . . ., r_{n−1} and s_0, s_1, . . ., s_{n−1} such that

is the sum of all zeros of P(G, λ) (resp. P(Ḡ, λ)); then, from Theorem 2.2, the sum of all zeros of the polynomial
The generalized Bell polynomials and threshold graphs
To give some connections between the chromatic polynomials and the generalized Bell polynomials (see [7]), let r_0, . . ., r_{n−1} and s_0, . . ., s_{n−1} be non-negative integers and set r = (r_0, . . ., r_{n−1}), s = (s_0, . . ., s_{n−1}). Recall that the generalized Stirling numbers of the second kind S_{r,s}(n, k) are defined by

and the so-called generalized Bell polynomials B_{r,s}(x) are defined to be

By choosing f(x) = x^λ in the identity

we obtain [7].
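The displayed definitions were lost in extraction. As a concrete anchor, here is the classical special case of this machinery (standard facts we are confident of; the generalized r, s version presumably follows the same pattern):

```latex
% Classical special case (all r_i = s_i = 1), a standard identity: the
% operational rule (x\,d/dx)^n = \sum_k S(n,k)\, x^k (d/dx)^k applied to
% f(x) = x^\lambda gives
\[
  \lambda^{\,n} \;=\; \sum_{k=0}^{n} S(n,k)\,\lambda(\lambda-1)\cdots(\lambda-k+1),
\]
% i.e. the chromatic polynomial P(\overline{K_n},\lambda) = \lambda^n of the
% edgeless graph expanded over falling factorials with coefficients
% \alpha_k = S(n,k), the ordinary Stirling numbers of the second kind;
% \sum_k S(n,k)\,x^k is the classical Bell (Touchard) polynomial B_n(x).
```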
For a given sequence (G_n, n ≥ 1) of threshold graphs, where G_n has n vertices, we prove in this section that the sequence of sigma polynomials (σ(G_n, x), n ≥ 1) can be expressed in terms of the generalized Bell polynomials. The useful representation of the chromatic polynomial of a given graph G = (V, E) used here is P(G, λ) = Σ_{i=1}^{|V|} α_i(G) λ(λ−1)⋯(λ−i+1), where |V| is the number of vertices of V and α_i(G) is the number of ways of partitioning V into i nonempty independent sets. The sigma polynomial σ(G, x) of a graph G = (V, E) is defined accordingly by σ(G, x) = Σ_{i=1}^{|V|} α_i(G) x^i.

Proof. From the definition of the chromatic polynomial of G we get exp(−x)

The following theorem shows that some generalized Bell polynomials can be interpreted through the chromatic polynomials of threshold graphs and gives another version of Theorem 2.2. | 1,365 | 2019-10-10T00:00:00.000 | [
"Mathematics"
] |
Methodology for testing pipeline steels for resistance to grooving corrosion
A methodology for testing pipeline steels is suggested, based on the assumption that destruction of pipes in field oil pipelines by the mechanism of grooving corrosion requires the simultaneous fulfillment of several conditions: the occurrence of scratches on the lower generatrix of the pipe, eventually growing into a channel in the form of a groove; enrichment of the emulsion with oxygen; a stressed state of the pipe wall metal; and the presence of chloride ions in the oil-water emulsion. Tests are suggested to be carried out in a 3% aqueous NaCl solution with continuous aeration by air, on bent plates of 150×15×3 mm made of the analyzed steel, the middle part of which is under the action of residual stresses σres close to the level of the maximum equivalent stresses σeqv in the wall of the oil pipeline, with a cut on this part on the inner side of the plate serving as an initiator of additional mechanical stresses. Using the modulus of normal elasticity of the analyzed steel, the degree of residual strain of an elastic-plastic body of this material corresponding to σres ≈ σeqv is calculated, based on which the plates are bent to the required deflection, after which the cut is applied. After keeping the plates in the corrosive medium, the increase in the depth of the cut resulting from corrosion of its walls is measured for each plate, from which the corrosion rate K of the steel by the mechanism of grooving corrosion is calculated, taking the test duration into account. Corrosion rate values determined by the suggested procedure are given for two pipe steel grades. Comparison of the K values obtained leads to the conclusion that 09G2S steel has the higher resistance to grooving corrosion.
There are numerous papers on grooving corrosion, e.g., [4,9,14], but the mechanism of this process and the factors influencing its intensity have not been conclusively determined. In particular, this concerns the effect of the pipeline stress state on the rate of corrosion damage of the pipe metal and the role of the groove in the corrosion process [3,25]. As the practice of field pipeline operation shows, although a number of protection methods have been developed (installation of devices in the pipeline section that turbulate the emulsion flow [12], application of inhibitors [14], protective coating of the pipe inner surface [16], use of preliminary water discharge units, etc.), the problem of grooving corrosion in Russia and worldwide is still far from completely solved. It is most relevant for long-operating field pipelines, where sections of steel pipes damaged by grooving corrosion have to be periodically replaced with new ones that themselves require replacement after some time.
This paper proposes a technique for laboratory corrosion testing of steels [11] under conditions simulating field pipeline wall damage by grooving corrosion, which makes it possible to select, from existing and newly developed pipeline steels, compositions resistant to grooving corrosion and to recommend them for use in field pipelines. Another application of the technique is to study how the stressed state, and the presence of a cut on the stressed structure simulating a trace of grooving corrosion, affect the corrosion rate of a metallic structure in the reaction medium. The technique is not designed to determine the resistance of pipeline steels in sulphur-containing media [24,28,32], or under conditions promoting stress corrosion of steels [25,30,33] and stress corrosion cracking [22,27,29], owing to significant differences in the mechanisms of these processes.
Statement of the problem. When developing the methodology, it was assumed that grooving corrosion of field pipelines occurs when the following basic conditions are simultaneously met:
• separation of the water-oil emulsion, with the water fraction washing over the lower generatrix of the pipe;
• presence of dissolved oxygen in the water in contact with the metal, at a concentration sufficient to allow the electrochemical corrosion reaction of the pipe metal under anodic control (this may occur, for example, when oil-water emulsions are produced using formation water enriched with oxygen, or when an oil-water emulsion is intensively mixed in contact with air); only under this condition can the level of pipe stress affect the rate of metal corrosion;
• presence in the water of the oil-water emulsion of corrosion-active impurities coming from formation water, the most reactive of which is the chloride ion [13] (the influence of the S²⁻ anion is not taken into account in this methodology), as well as highly abrasive solid particles;
• effect of tensile stress on the pipe wall, facilitating the release of iron ions from the steel into the aqueous solution during the anodic stage of the process and, consequently, intensifying the corrosion of the pipe metal;
• appearance of scratches on the lower generatrix of the pipe as a result of abrasion by solids contained in the emulsion, developing over time into a groove, the metal of whose walls and bottom is subject to additional tensile stresses, maximal in the metal at the bottom [4,14,26].
Methodology. In order to satisfy the conditions listed above, it is proposed to test steels for resistance to grooving corrosion in an oxygen-enriched aqueous chloride medium, on samples subjected to tensile stresses close to the level of the equivalent stresses in the pipe and bearing a groove-simulating cut, according to the following methodology.
Preparation for the tests. The maximum equivalent stress arising in the pipe wall of the analyzed field pipeline during oil-water emulsion pumping is estimated, taking into account the presence of ascending and descending sections that cause bending of the pipeline [10,23]:

where σ1 is the ring (hoop) stress caused by internal medium pressure, MPa; σ2 is the longitudinal stress caused by bends in the pipeline, MPa; and σ3 is the stress of technological origin remaining in the wall after pipe manufacture, MPa [6,19]. For a 219×8 mm ascending pipeline section with a bend radius of 219 m, one of the main ones in the nomenclature of field pipelines, at an operating pressure of 4 MPa and a pumped-emulsion temperature of 60 °C, the values of σ1, σ2, and σ3 are 51, 95, and 35 MPa, and the parameter σeqv has a value of ~160 MPa, which was used in the calculations.
Plates of 150×15×3 mm are cut across the rolling direction (pipe axis) from the rolled pipeline steels used for manufacturing the welded pipes of field pipelines, or from the pipe body in the case of pipes obtained by rolling. The plates are bent in clamps up to a residual deflection providing residual stresses σres in the middle, plastically deformed, arc-shaped part of the plates close to σeqv in a pipe. As proved in [35], on the inner side of the plates these are tensile residual stresses, and on the outer side, compressive stresses. The equality σres = σeqv is reached by giving the metal in the middle part of the plate the necessary degree of residual strain εres, which according to the Hencky theorem [18] (Fig.2) for an elastic-plastic body corresponds to σres as εres = σres/E, where E is the modulus of normal elasticity of the analyzed steel.
Considering that the E value does not differ significantly among pipeline steels (~200 GPa), the required level of residual stresses σres = σeqv = 160 MPa in the elastic-plastic body of these steels, considered in the example, is reached at a residual strain of the metal εres ~ 0.0008, or 0.08%.
The degree of residual strain εres received by the metal in the middle part of the arc-shaped plates is estimated from the radius R of the circumference that can be inscribed in this arc-shaped part (Fig.3). The values of εres and R are related by

εres = r/R, (2)

where r is the distance from the neutral axis to the edge of the plate (half of the thickness), mm.
In accordance with expression (2), at r = 1.5 mm the required level εres = 0.0008 is achieved by bending the middle part of the plates to the shape of an inscribed circumference with radius R ~ 1.8 m, which for plates of the given geometry (150×15 mm) corresponds to a plate deflection H ≈ 8 mm.
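A minimal sketch of the bending calculation, assuming the relations εres = σres/E and εres = r/R reconstructed above (values follow the worked example in the text; variable names are ours):

```python
# Bending preparation for the grooving-corrosion test plates.
# Assumes eps_res = sigma_res / E (elastic-plastic strain at the target
# stress) and eps_res = r / R (bending strain at the outer fiber).

sigma_eqv = 160e6   # target residual stress, Pa (~ sigma_eqv in the pipe wall)
E = 200e9           # modulus of normal elasticity of pipeline steels, Pa
thickness = 3e-3    # plate thickness, m (150 x 15 x 3 mm plate)

eps_res = sigma_eqv / E          # required residual strain
r = thickness / 2                # distance from neutral axis to plate edge, m
R = r / eps_res                  # radius of the inscribed circumference, m

print(f"eps_res = {eps_res:.4f} ({eps_res * 100:.2f} %)")  # 0.0008 -> 0.08 %
print(f"R = {R:.2f} m")                                    # ~1.9 m (text rounds to ~1.8 m)
```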
After the plates are bent until the radius R reaches the required value, the metal of the middle, plastically deformed part is assumed to be subject to the same residual stresses as the pipe metal of the field pipeline.
Using a 1 mm thick disc-shaped cutter with a cutting part in the form of a hemisphere of radius 0.5 mm, a transverse cut of depth ~0.2 mm is made in the middle of the plates on the inner side; its shape approximates that of the groove at the lower generatrix of the pipeline. Such a cut on a stressed structure serves as a concentrator of additional tensile stresses in the surrounding metal [14,17,31] and should therefore intensify the corrosion rate. For example, with a cut 0.2 mm deep on a curved plate of the considered configuration with σres = 160 MPa, a stress of 200 MPa arises in the metal at the bottom of the cut.
Using a LaboMet-1 optical microscope with a focal-length scale step of M = 0.003 mm, the exact depth of the cut at fixed points is determined. For this purpose, the cut is optically divided along its entire length into equal sections, e.g., 1 mm long. The positions of the boundary points (n = 13) are fixed, and for each of them the difference in focal lengths (in scale divisions) from the bottom of the cut to the plate surface near the cut, ΔI = Ibt − Isf, is determined by rotating the fine-adjustment drum, with subsequent recalculation of ΔI (using M) into the original cut depth Hi at that point (Fig.4). To prevent a change in the focal distance to the plate surface near the cut as a result of exposure to the corrosive environment, this surface is coated with a protective acetate varnish (Ice Color) before the corrosion tests.
Conducting the tests. The plates are placed in a thermostat filled with a 3% NaCl aqueous solution, a typical corrosive medium used in corrosion investigations both in Russia [5,15,20] and abroad [21] to simulate the composition of the aqueous component of the oil-water emulsions pumped through field oil pipelines.
The plates are kept in the solution at 60 ± 5 °C (the maximum temperature of pumped water-oil emulsions) for a time sufficient to cause noticeable corrosion of the plates (the exposure durations recommended by GOST R 9.905-2007, "Unified system of protection from corrosion and ageing. Corrosion test methods. General requirements", are 24, 48, and 96 h). During exposure, in order to ensure anodic control of the electrochemical reaction, which is necessary to reveal the effect of the stress state on the corrosion rate, the working solution is enriched with oxygen by continuously blowing air through it.
Processing of results. At the end of the exposure, the bottom surface of the plate cut is cleaned of corrosion products with an eraser, and the surface of the plates around the cut is cleaned of the protective varnish. At the same points as before the corrosion test, the cut depth Hi* is measured again (see the Table), and its increase ΔHi resulting from the corrosive action of the environment is determined (Fig.5). The side surfaces of the plates are polished to obtain thin sections that, after etching, are used for metallographic analysis of the steel.
The arithmetic average of the increase in cut depth over all points is calculated as ΔHavg = (1/n) Σ ΔHi, where n = 13; the mean square deviation of the actual increase in cut depth is established; and the grooving-corrosion rate of the plate material is estimated as K = (ΔHavg/t)·8760, where t is the exposure time of the plates, h, and 8760 is the number of hours in a year. The distribution of cut-depth changes along the cut length and the corrosion rates obtained for 09ps and 09G2S pipeline steels as examples are given in the Table (cut depths at different points before and after exposure to the corrosive environment, the changes ΔHi, and the corrosion rates K of the pipeline steels). Comparing the K values, it can be concluded that 09G2S steel is more resistant to grooving corrosion than 09ps steel.
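A hedged sketch of the data reduction, assuming the reconstruction above (depth from focal-drum divisions, Hi = M·ΔI, and K = (ΔHavg/t)·8760); the readings below are illustrative placeholders, not the paper's measurements:

```python
import statistics

# Data reduction for the grooving-corrosion test (sketch under the
# reconstructed formulas; the drum readings below are made up).
M = 0.003          # focal-length scale step of the microscope, mm/division
t_hours = 96       # exposure time in the corrosive medium, h
HOURS_PER_YEAR = 8760

# Focal-drum differences (divisions) at the n = 13 boundary points,
# before and after exposure: (I_bt - I_sf) per point.
delta_I_before = [65, 66, 64, 67, 65, 66, 65, 64, 66, 67, 65, 66, 65]
delta_I_after  = [74, 75, 72, 76, 73, 75, 74, 72, 75, 76, 73, 75, 74]

# Depths in mm and their corrosion-induced increase per point.
H_before = [M * d for d in delta_I_before]
H_after  = [M * d for d in delta_I_after]
dH = [a - b for a, b in zip(H_after, H_before)]

dH_mean = statistics.mean(dH)   # arithmetic average increase, mm
dH_sd = statistics.stdev(dH)    # spread of the depth increase, mm

# Corrosion rate extrapolated to mm/year.
K = dH_mean / t_hours * HOURS_PER_YEAR

print(f"mean depth increase: {dH_mean:.4f} +/- {dH_sd:.4f} mm")
print(f"grooving corrosion rate K = {K:.2f} mm/year")
```

With these placeholder numbers K comes out near 2.5 mm/year, the same order as the 2.3 and 1.8 mm/year reported for the two steels.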
In addition to determining the comparative corrosion resistance of steels, the proposed technique allows investigating the effect of tensile and compressive stresses in the metal, as well as of the presence of a cut on the stressed structure, on the corrosion rate. For this purpose, along with the surface of the plate around the cut, protective varnish is applied to the plastically deformed curved part of the plate on its outer side, where the metal is under compressive stresses, and also to the surfaces of the unstrained parts of the plate not exposed to any residual stresses. The protective coating is then deliberately broken at local points in these areas, so that the metal there is subjected to corrosive attack in the subsequent tests. After the corrosion tests, the protective varnish around these points is removed, and the difference in focal distance from the bottom of the corrosion damage to the unaffected plate surface, taken as the depth of damage at that point, is determined. The necessary dependences are obtained by performing such experiments on plates pre-bent to different deflections.
Conclusion. A technique has been developed for determining the corrosion rate of pipeline steels under conditions simulating corrosion damage of a field pipeline wall: the pipe wall metal is in a stressed state, chloride ions are present in the water component of the oil-water emulsion, a channel in the form of a groove is present on the lower generatrix of the pipe, and the water component is enriched with atmospheric oxygen. As an example of the application of the suggested technique, the corrosion rates of two pipeline steels, 09ps and 09G2S, were determined. The corrosion rates (2.3 ± 0.8 and 1.8 ± 0.9 mm/year, respectively) proved to be close to those exhibited by the materials of field pipelines subjected to grooving corrosion. The developed methodology can also be used to investigate the effect of tensile and compressive stresses in the metal, as well as of the presence of a cut on the stressed structure, on the corrosion rate.
"Materials Science"
] |
A Finite Element Model of a MEMS-based Surface Acoustic Wave Hydrogen Sensor
Hydrogen plays a significant role in various industrial applications, but careful handling and continuous monitoring are crucial since it is explosive when mixed with air. Surface Acoustic Wave (SAW) sensors provide desirable characteristics for hydrogen detection due to their small size, low fabrication cost, ease of integration, and high sensitivity. In this paper a finite element model of a SAW sensor is developed using ANSYS 12© and tested for hydrogen detection. The sensor consists of a YZ-lithium niobate substrate with interdigital electrodes (IDTs) patterned on the surface. A thin palladium (Pd) film is added on the surface of the sensor due to its high affinity for hydrogen. With increased hydrogen absorption the palladium hydride undergoes a phase change due to the formation of the β-phase, which deteriorates the crystal structure; therefore, with increasing hydrogen concentration the stiffness and the density are significantly reduced. The values of the modulus of elasticity and the density at different hydrogen concentrations in palladium are used in the finite element model to determine the corresponding SAW sensor response. Results indicate that with increasing hydrogen concentration the wave velocity decreases and the attenuation of the wave is reduced.
Introduction
Surface Acoustic Wave devices are considered to be among the earliest types of MEMS due to the continuous electrical and mechanical interactions that take place during propagation. White and Voltmer [1] first reported the generation of Surface Acoustic Waves (SAW) on a quartz piezoelectric substrate. The waves were generated by applying a voltage signal to a set of finger-like electrodes patterned on the surface of a quartz substrate; this layout became known as the delay line structure. The SAW delay line offers an easy way of generating and detecting SAW on a piezoelectric substrate because the waves propagate along the free surface, giving the user control over the signal, which can be sampled or modified according to the desired application. The delay line configuration is widely used in electronic devices, for example in radars for signal-to-noise optimization and pulse compression, as band-pass filters in TVs, and as resonators. The confinement of the wave near the surface of the substrate makes it sensitive to changes in the external environment, providing a plethora of sensing applications, including the detection of changes in mass, stiffness, viscosity, temperature, humidity, strain, and force.
Hydrogen is widely used in industrial applications such as the preparation of ammonia and methanol, hydrogenation of organic compounds, production of semiconductors, petroleum recovery and refining, and fueling spacecraft, and it powers fuel cells in consumer electronic devices. Careful handling of hydrogen is crucial due to the various possible hazards. Diffusion of hydrogen into metals causes embrittlement, cracks, and degradation of material properties, which can lead to catastrophic failure. In addition, hydrogen is explosive when mixed with air at concentrations of 4% and above [2]. Due to its many applications and the possible hazards of mishandling, careful monitoring of hydrogen leakage is crucial.
Various sensing mechanisms have been adopted for hydrogen detection. D'Amico and Zemel [3] used pyroelectric sensors incorporating palladium electrodes: hydrogen absorption causes heat generation, which affects the output voltage signal. Butler [4] used optical fiber sensors with palladium-coated quartz fibers, where hydrogen absorption stretches the optical fiber and changes its optical length. Kumar [5] used an electrochemical cell in which the potential difference between the two electrodes served as an indicator of hydrogen detection. Cabrera [6] monitored the change in resistivity of thin Pd films during hydrogen exposure at different hydrogen pressures; resistivity increased during exposure and then returned to its initial value when hydrogen was removed from the chamber. It was also found that, if resistance measurements are made at a constant film temperature, the hydrogen concentration in the Pd film can be determined. Łukaszewski [7] used a Quartz Crystal Microbalance (QCM) coated with several films of pure Pd and Pd alloys, with the frequency shift in each case serving as the indicator of hydrogen detection. QCM and SAW sensors have closely related operating principles; however, SAW sensors offer increased sensitivity because they can operate at much higher frequencies [8]. As the operating frequency of a SAW sensor increases, the wave becomes more confined near the surface and therefore more sensitive to changes in the adjacent environment. There are various studies in the literature on the use of SAW technology for hydrogen detection [2,[9][10][11][12].
In this study a three-dimensional finite element (FE) model of a SAW sensor is developed and tested for hydrogen detection. A chemically selective film is added on the surface of the sensor to absorb hydrogen and change the SAW properties accordingly. As illustrated by the above applications, palladium is widely used for hydrogen detection; therefore in this study a thin palladium film serves as the chemically selective film. Figure 1 illustrates the layout of a SAW sensor adopting the delay line structure and covered with a palladium film. Hydrogen has a high solubility in palladium, which absorbs it like a sponge [13]. The hydrogen molecules break down into atoms at the surface of the palladium film and then diffuse into it, changing the properties of the film with increasing concentration. The change in wave velocity can be calculated from the change in phase of the frequency response. In addition, the insertion loss, which indicates the attenuation of the wave, is monitored at different levels of hydrogen absorption.
The Palladium Hydrogen System
Graham [14] observed that large volumes of hydrogen were absorbed, or as he termed it, occluded, by palladium during electrolysis, and since then the palladium-hydrogen system has been one of the most experimentally investigated. Baranowski [15] showed that for different fcc metals and alloys the increase in the volume of the unit cell due to interstitial hydrogen is linear up to a concentration c = 0.75 a.f. (atomic fraction). A hydride stoichiometry of c = 1 a.f. can be achieved, where the hydrogen atoms occupy all the octahedral sites of the Pd lattice, thus adopting an ideal sodium chloride structure [16]. When palladium absorbs n hydrogen atoms the change in volume V is ΔV = nΔv, where Δv is the change in volume per hydrogen atom. If the mean atomic volume of a palladium atom is Ω, then the volume of Pd is V = NΩ. The relative volume change due to an atomic fraction c = n/N is:

ΔV/V = c (Δv/Ω), (1)

which is related to the lattice expansion approximately by:

ΔV/V ≈ 3 Δa/a, (2)

where a is the lattice constant [16]. A wide collection of experiments determining the relative volume change due to hydrogen absorption is available in Peisl [16]. Almost all of the experiments were carried out at room temperature, and a value of the relative volume change Δv/Ω of 0.19 ± 0.01 was obtained. In this study the value of 0.19 for the relative volume change is used.
The density of the hydride follows from the added hydrogen mass and the volume expansion:

ρ = ρo (1 + c mH/mPd) / (1 + c Δv/Ω), (3)

where the molar masses mH and mPd are 1.008 g/mol and 106.42 g/mol [18], respectively, and ρo is the density of pure Pd.
In addition to a change in density, hydrogen absorption leads to changes in the elastic constants of palladium. The absorbed hydrogen atoms cause changes in the electronic structure of Pd, which have a direct effect on the frozen-lattice component of the elastic modulus [17]. Absorbed hydrogen atoms occupy the interstitial octahedral sites in the lattice, displacing Pd atoms. This interaction results in a transfer of electrons and a change in the electron-to-atom ratio, which leads to an upward shift in the Fermi level and a reduction in the binding energy of s electrons due to lattice expansion [19,20]. Furthermore, significant temperature changes during hydrogen absorption lead to changes in the phonon components of the elastic modulus with increasing hydrogen concentration [17,20]. Finally, at low levels of hydrogen absorption the hydride gradient leads to precipitation hardening, which slightly increases the modulus of elasticity [20].
Piezoelectricity
Hooke's law is modified to include the electrical interaction that takes place in a piezoelectric material. There are various forms of the piezoelectric constitutive equations; the equations presented here are termed the piezoelectric stress equations, in which the strain is an independent variable:

T_ij = c^E_ijkl S_kl − e^T_ijk E_k,
D_i = e_ikl S_kl + ε^S_ij E_j,

where T_ij is the mechanical stress; S_kl is the strain; E_j is the electric field component, measured in V/m; D is the electric displacement field, measured in C/m²; and ε_ij is the dielectric permittivity constant, measured in F/m. The constants e_ijk and e_ikl are the piezoelectric stress constants (C/m²), which couple the electric and mechanical fields; the superscript (T) indicates that e^T_ijk and e_ikl are transposes of each other. The superscripts (E) on c_ijkl and (S) on ε_ij indicate that these are the properties at constant electric field and constant strain, respectively.
Wave Equations
Wave propagation in a piezoelectric crystal involves coupling of the particle displacement with the electric and magnetic fields. The equation of motion is coupled with Maxwell's equations for electromagnetic fields through the piezoelectric constitutive equations; however, this coupling is weak [21]. Since the solutions of interest are the acoustic waves, the magnetic field is assumed to be static and the electric field is calculated as the negative gradient of the potential:

E = −∇φ.

This is the quasi-static approximation, and it has a negligible effect on the solution [22]. By substituting the equation of motion into (Equation 4), expressing the strain in terms of the displacement components, and utilizing the quasi-static approximation, the first piezoelectric constitutive equation can be rewritten as the first wave equation. Taking the divergence of the second piezoelectric constitutive equation, and noting that piezoelectric materials are insulators, so that ∇·D = 0 in the absence of free charge within the material, yields the second wave equation. Solving the wave equations yields three displacement equations and a fourth voltage equation, which are called the partial wave solutions.
Inter-digital Transducers (IDT)
These are finger-like electrodes patterned on the surface of a piezoelectric substrate using lithographic techniques, as illustrated in Figure 1. They are used for launching and detecting acoustic waves on the surface of piezoelectric crystals. By applying a time varying voltage signal across the electrodes a mechanical wave is generated that propagates along the surface due to the converse piezoelectric effect. As the wave reaches the output IDT it is converted to a voltage signal due to the direct piezoelectric effect. Datta [23] provides detailed discussion on the operating principles of the different configurations of SAW-IDT devices, including the delay line structure.
Numerical Modeling using Finite Element Analysis (FEA)
Various numerical techniques have been used in modeling acoustic wave propagation in piezoelectric media, including the Finite Element Method (FEM), the Finite Difference (FD) method [24], and the Boundary Element Method (BEM). The FE method is the most widely adopted technique, especially for modeling the response of SAW sensors in 2D and 3D, due to its versatility in handling complex geometries for any set of material properties and loading conditions, as long as the appropriate constitutive and equilibrium equations are satisfied [25][26][27]. In some instances a coupled BEM/FEM approach is adopted, especially for modeling semi-infinite problems with periodic boundary conditions such as SAW resonators [28]. The BEM adopts a periodic Green's function for modeling the substrate, and the FEM is used to model the electrodes with finite geometry and arbitrary shape [29].
The problem of interest in this study is to model the full device response using a 3D model of a SAW sensor; therefore the finite element method is adopted. A thorough discussion of the FE formulation for solving the wave equations in a piezoelectric medium has been published elsewhere [30].
Verification Model (ZnO-XY LiNbO 3 )
Ippolito et al. [31] provide both simulation and experimental results for a specific configuration of a SAW sensor. The same configuration is adopted in this study for verification of the FE model. The device consists of a layered piezoelectric substrate with interdigital electrodes patterned on the interface. The piezoelectric substrate is lithium niobate (LiNbO3) with XY orientation, indicating that the X crystal axis is perpendicular to the surface of the substrate while the Y crystal axis lies along the propagation direction. The surface of the LiNbO3 substrate is completely covered with a thin piezoelectric zinc oxide (ZnO) film. Figure 2 illustrates a schematic of the device configuration with the corresponding dimensions. Each of the input and output IDTs on the surface consists of two electrode pairs. The electrode widths and spacings are equal, 10 μm each; thus the wavelength is 40 μm. The SAW velocity in this configuration is 4000 m/s [31]. The center frequency is calculated using f0 = v/λ, which in this case is 100 MHz. The material properties of lithium niobate are listed in Table 1; its density is 4,647 kg/m³, with properties obtained from Wong [32]. The material properties of zinc oxide are listed in Table 2; its density is 5,720 kg/m³, with properties obtained from Didenko [33]. Lithium niobate and zinc oxide are both piezoelectric materials and are meshed with the same element type: a coupled-field element with four degrees of freedom per node, the displacements (Ux), (Uy), (Uz) and the voltage (φ). The elements at the ZnO-XY LiNbO3 interface are refined for accuracy. Mesh convergence studies show that an element size of 12 μm yields a 0.56% error.
The electrodes at the interface were modeled as sets of nodes coupled by a voltage degree of freedom (DOF). Xu [26] showed that at frequencies well below 1 GHz the electrode mass has a negligible effect on the frequency response of the SAW sensor. Various authors [13,34,35] have neglected the electrode mass to eliminate second-order effects and to reduce the size of the model. Figure 3 illustrates the electrodes modeled as coupled node sets.
• The following Dirichlet conditions are imposed on the electric potential at the electrodes: φ = V(t) at the input IDT and φ = O(t) at the output IDT, where V(t) is the input voltage signal and O(t) is the output voltage.
• Extension of the boundaries (B) along the length direction, as indicated by the arrows, and likewise for the width boundaries (W). This condition is necessary to avoid wave reflections from the boundaries, which would cause interference and hence deteriorate the response.
A transient analysis is carried out and an impulse voltage signal is applied at the input electrodes: a unit pulse lasting one time step, where T_s is the time step size, set to 1 ns. The simulation time is 100 ns. The frequency response is determined from the Fourier transform of the impulse response. Table 3 lists the results of the current simulation in comparison with those of Ippolito et al. [31]. As can be clearly seen, the results of the current simulation agree very well with both the simulation and the experimental values. In the current simulation the center frequency is 100.56 MHz, whereas the experimental center frequency is 103 MHz; the 3 MHz variation is due to the tolerance involved in fabricating the electrodes by lithographic techniques. The insertion loss in the current simulation is -35.5 dB, which more closely matches the experimental value of -34.3 dB than the simulated value of -37.5 dB of Ippolito et al. With the model verified, the sensor configuration is now altered to allow for hydrogen detection.
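A minimal sketch of the post-processing step (impulse excitation to frequency response); the signal here is synthetic, standing in for the nodal voltage history that the FE solver would return:

```python
import numpy as np

# Frequency response from a transient impulse response (sketch).
# In the actual workflow the output IDT voltage O(t), sampled every T_s,
# comes from the FE solver; here we fabricate a decaying 100 MHz tone
# purely to illustrate the FFT post-processing.
T_s = 1e-9                       # time step, 1 ns
n_steps = 100                    # 100 ns simulation
t = np.arange(n_steps) * T_s
o_t = np.exp(-t / 40e-9) * np.sin(2 * np.pi * 100e6 * t)  # stand-in O(t)

spectrum = np.fft.rfft(o_t)
freqs = np.fft.rfftfreq(n_steps, d=T_s)              # Hz
magnitude_db = 20 * np.log10(np.abs(spectrum) + 1e-12)

peak = np.argmax(magnitude_db)
print(f"center frequency ~ {freqs[peak] / 1e6:.1f} MHz")
print(f"phase at center = {np.angle(spectrum[peak]):.3f} rad")
```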
Finite Element Model of a SAW Sensor for Hydrogen Detection
In this configuration a bare YZ-LiNbO3 substrate is used and a thin palladium layer is added between the two sets of IDTs, as shown in Figure 5. The YZ-LiNbO3 configuration is widely used for sensing applications due to its high SAW velocity and high electromechanical coupling coefficient [36]. This configuration has been used by various authors for hydrogen detection [9,11,12].
It has been shown that the density and elastic constants of the palladium film change with hydrogen absorption. The changing properties influence the phase velocity v of the wave, which depends on the material properties of the medium in which it propagates. For an isotropic medium:

v = √(c/ρ),

where c is the stiffness constant (N/m²) and ρ is the density (kg/m³) of the medium. The change in wave velocity is used to evaluate the sensor response to hydrogen absorption and is calculated from the change in the phase response of the sensor. The phase is defined as:

φ = ω T_o = ω l / v,

where ω is the angular frequency in rad/s, T_o is the time it takes the wave to travel a given distance l, and v is the phase velocity of the wave in m/s. The change in phase is related to the change in velocity as follows:

Δv/v = −Δφ/φ.

The change in phase is calculated with respect to the pure Pd case.
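A short sketch of this conversion, assuming the reconstructed relation Δv/v = −Δφ/φ; the path length, nominal velocity, and phase value below are illustrative assumptions, not the paper's data:

```python
import math

# Convert a phase shift at the center frequency into a normalized
# velocity change, using the reconstructed relation dv/v = -dphi/phi.
f0 = 128e6     # center frequency of the hydrogen sensor, Hz
l = 400e-6     # assumed propagation path between the IDTs, m
v0 = 3488.0    # nominal SAW velocity on YZ-LiNbO3, m/s (assumed reference)

omega = 2 * math.pi * f0
phi0 = omega * l / v0        # reference phase for the pure-Pd case, rad

dphi = -0.05                 # phase change vs. pure Pd, rad (placeholder)
dv_over_v = -dphi / phi0

print(f"phi0 = {phi0:.2f} rad")
print(f"normalized velocity change dv/v = {dv_over_v:.3e}")
```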
Finite Element Analysis of the YZ-LiNbO 3 SAW Sensor Model
The material properties of lithium niobate are listed in Table 1; however, the local coordinate system is rotated to obtain the YZ orientation. The lithium niobate substrate is meshed with a coupled-field solid element with four DOF per node: the displacements (Ux), (Uy), (Uz) and the voltage (φ). The boundary conditions for this model differ slightly from those of the ZnO-XY LiNbO3 layout. Continuity of the displacement field is imposed at the film-substrate interface, and a stress-free boundary condition is applied at the free surface of the Pd film and at the LiNbO3 substrate surface. In addition, the following conditions apply:
• a clamped condition on the bottom of the substrate;
• the same Dirichlet conditions for the electric potential at the electrodes;
• extension of the boundaries along the length and width of the substrate to avoid reflections.
In deciding the operating frequency, the goal is to increase the frequency, because at higher frequencies the wave becomes more confined near the surface and therefore more sensitive to changes in the adjacent environment. However, at higher frequencies the wavelength decreases and the elements at the surface must shrink accordingly, leading to a more computationally expensive model. Various frequency levels were attempted with the model size and simulation run time as constraints, and a center frequency of 128 MHz was found to be an acceptable compromise. The parameters of the sensor are listed in Table 4. The palladium film is meshed with a structural field element with three degrees of freedom per node: (Ux), (Uy) and (Uz). The thickness of the Pd film is set to 2 μm, the minimum thickness that could be attained while maintaining a sufficient number of elements for accuracy; this thickness is also practically attainable with common deposition techniques such as electron-beam evaporation. The increase in volume of the Pd film due to hydrogen absorption is represented by an increase in the thickness direction, thereby neglecting shear effects at the film-substrate interface; the free Pd surface can expand easily as it is unconstrained. Film thickness values adopted in the simulations are listed in Table 5 and are determined using (Equation 1) and (Equation 2). The density of pure Pd is 12,020 kg/m³ [17], and the values at different hydrogen concentrations are calculated using (Equation 3).
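A sketch of how the film properties scale with hydrogen content, assuming the reconstructed Equations (1) and (3) above and uniaxial (thickness-only) expansion as described in the text:

```python
# Pd film thickness and density versus hydrogen atomic fraction c, under
# the reconstructed relations: relative volume change dV/V = 0.19 * c
# (Equation 1 with dv/Omega = 0.19) and the mass/volume balance of
# Equation 3. All expansion is taken along the film thickness, as assumed
# in the FE model.
DV_PER_OMEGA = 0.19        # relative volume change per H atomic fraction
M_H, M_PD = 1.008, 106.42  # molar masses, g/mol
RHO_0 = 12020.0            # density of pure Pd, kg/m^3
T_0 = 2.0                  # initial film thickness, micrometers

def pd_hydride_properties(c: float) -> tuple[float, float]:
    """Return (thickness in um, density in kg/m^3) at atomic fraction c."""
    volume_factor = 1.0 + DV_PER_OMEGA * c
    thickness = T_0 * volume_factor              # expansion along thickness only
    density = RHO_0 * (1.0 + c * M_H / M_PD) / volume_factor
    return thickness, density

for c in (0.0, 0.1, 0.3, 0.5):
    t, rho = pd_hydride_properties(c)
    print(f"c = {c:.1f} a.f.: t = {t:.3f} um, rho = {rho:.0f} kg/m^3")
```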
In addition, the effects on the elastic constants are included by adopting the corresponding absolute values of the modulus of elasticity at different hydrogen concentrations, obtained from Fabre [17]. The material properties of the Pd film at different hydrogen concentrations are listed in Table 6. The Poisson's ratio of Pd can be assumed invariant with concentration, and the value for pure Pd (ν = 0.375) is retained [37].
Results
A transient analysis is carried out to determine the sensor response at the different levels of hydrogen absorption. The impulse signal in (Equation 14) is applied and the simulation is run for 400 ns. The frequency response is obtained from the time-domain response by Fourier transform. The frequency response of the SAW sensor with a pure Pd film, i.e., no hydrogen absorption, is shown in Figure 6.
The properties of the Pd film are inserted in the model to simulate different hydrogen concentrations. Figures 7 and 8 show the phase profiles of the frequency response at the different concentration levels. The marker on each phase profile indicates the phase value at the center frequency; this is the value used in calculating the change in wave velocity. The normalized change in wave velocity is calculated using (Equation 18) and plotted in Figure 9. The values are plotted with a negative sign to indicate a reduction in wave velocity. As more hydrogen is absorbed by the Pd film the velocity continues to decrease up to 0.5 a.f., but the reduction in wave velocity is negligible in the concentration range of 0.3-0.5 a.f. In addition to the reduction in wave velocity, the attenuation of the wave is monitored and plotted in Figure 10, which shows the insertion loss values due to hydrogen absorption. The insertion loss profile follows a trend similar to that of the change in wave velocity, indicating that the attenuation of the wave in the 0.3-0.5 a.f. region is almost constant. Table 7 lists the results of the SAW sensor at the different concentration levels.
Discussion
A finite element model of a SAW sensor was developed and tested with different levels of hydrogen concentration. Hydrogen absorption deteriorates the lattice structure and softens the crystal, as illustrated by the decreasing modulus of elasticity and density values in Table 6. According to pressure-composition isotherms [38], the lattice structure of palladium hydride at temperatures below 300 °C decomposes into an α-phase and a hydrogen-rich β-phase. At room temperature the α-phase exists only at very low hydrogen concentrations, c_H/Pd < 0.008 a.f. [38]. At higher concentrations of absorbed hydrogen the β-phase starts forming through the discontinuous expansion of the α-phase and is therefore highly distorted [38], which causes the significant reduction in the modulus of elasticity and density. The β-phase is completely formed at c_H/Pd > 0.6 a.f. The change in the properties of the palladium hydride system leads to a reduction in SAW velocity.
(Figure: Pd-H phase regions: α-phase; mixed α/β-phase; β-phase.)
The frequency response of the SAW sensor with a pure palladium film is shown in Figure 6. The insertion loss profile has a center peak at the operating frequency of the sensor, in this case 128 MHz. The linear phase response of the sensor is also shown; the value at 128 MHz is used for calculating the change in phase as the concentration of absorbed hydrogen in the film increases. The phase profiles in Figures 7 and 8 illustrate the change in phase at the center frequency of the SAW sensor at different levels of hydrogen absorption. According to (Equation 16), this change implies a reduction in wave velocity with increased hydrogen concentration.
According to Figure 9, the velocity decreases up to 0.3 a.f.; the reduction is of much lower magnitude in the concentration region of 0.3-0.5 a.f. A similar behavior was reported by Jakubik et al. [11], who used a YZ-LiNbO3 SAW sensor with a bilayer sensing film composed of a 160 nm layer of metal-free phthalocyanine and a 20 nm palladium layer. The experiment was set up such that the SAW sensor operates in an oscillator configuration and is exposed to various concentrations (volume %) of hydrogen in air at room temperature, to determine the sensor response below the explosive limit of 4%. The change in frequency of the sensor in an oscillator configuration is related to the change in velocity through a proportionality constant G [39]. According to the results reported by Jakubik et al., the increase in the magnitude of the change in oscillating frequency is due to the formation of the β-phase. The values of the normalized change in oscillating frequency from Jakubik et al. [11] have been converted to normalized changes in wave velocity and plotted in Figure 11 for illustration (Figure 11: normalized change in wave velocity at various H2 concentrations in air, volume %; data obtained from the frequency response values reported by Jakubik et al. [11]). The results of the current simulation in Figure 9 follow the same trend as those of Jakubik et al. The data points in both figures are fitted with a cubic polynomial function, and in both cases the R² value is above 0.97. The changing properties of the palladium film due to hydrogen absorption also affect the attenuation of the SAW. The insertion loss values in Figure 10 illustrate that as the hydrogen concentration in the film increases, the insertion loss values increase accordingly, indicating less attenuation. The insertion loss values follow a behavior similar to the change in wave velocity, since they continue to increase but with a smaller magnitude in the concentration range of 0.3-0.5 a.f. This result is expected because, as mentioned earlier, the formation of the β-phase softens the crystal, and as the β-phase dominates the lattice the properties of the Pd film continue to change but with a lower magnitude. These results for the normalized velocity and the insertion loss indicate that the magnitude of the SAW sensor response increases with increasing hydrogen absorption in the Pd film, and its rate of change then decreases as the β-phase of palladium hydride dominates the lattice structure.
Conclusions
A finite element model of a SAW sensor was developed and verified against simulation and experimental results from the literature. The sensor configuration was then changed to allow for hydrogen detection: a palladium film was added on the surface of a bare YZ-LiNbO3 substrate because Pd has a high affinity for hydrogen. A transient analysis was carried out, from which the frequency response was determined. The phase response curves were compared for the different levels of hydrogen absorption, and the change in phase with respect to the pure Pd case was calculated and used to obtain the normalized change in wave velocity for each case. The phase results indicated that with increased hydrogen absorption the wave velocity decreases. This behavior was attributed to the formation of the β-phase of the palladium hydride structure, which distorts and softens the crystal. In addition, insertion loss values were used to determine the change in attenuation of the wave due to hydrogen absorption. The results showed that the softening of the film as the β-phase dominates the lattice structure leads to a reduction in the magnitude of the attenuation of the wave.
"Materials Science"
] |
Grain number, plant height, and heading date7 is a central regulator of growth, development, and stress response.
Grain number, plant height, and heading date7 (Ghd7) has been regarded as an important regulator of heading date and yield potential in rice (Oryza sativa). In this study, we investigated functions of Ghd7 in rice growth, development, and environmental response. As a long-day dependent negative regulator of heading date, the degree of phenotypic effect of Ghd7 on heading date and yield traits is quantitatively related to the transcript level and is also influenced by both environmental conditions and genetic backgrounds. Ghd7 regulates yield traits through modulating panicle branching independent of heading date. Ghd7 also regulates plasticity of tiller branching by mediating the PHYTOCHROME B-TEOSINTE BRANCHED1 pathway. Drought, abscisic acid, jasmonic acid, and high-temperature stress strongly repressed Ghd7 expression, whereas low temperature enhanced Ghd7 expression. Overexpression of Ghd7 increased drought sensitivity, whereas knock-down of Ghd7 enhanced drought tolerance. Gene chip analysis of expression profiles revealed that Ghd7 was involved in the regulation of multiple processes, including flowering time, hormone metabolism, and biotic and abiotic stresses. This study suggests that Ghd7 functions to integrate the dynamic environmental inputs with phase transition, architecture regulation, and stress response to maximize the reproductive success of the rice plant.
Rice (Oryza sativa) is a main staple food crop that feeds almost half of the world population. Flowering time is one of the most important agronomic traits determining rice yield. Grain number, plant height, and heading date7 (Ghd7), encoding a CCT (CONSTANS, CONSTANS-LIKE, and TIMING OF CHLOROPHYLL A/B BINDING1) domain protein, is considered a key regulator of the rice-specific flowering pathway and also contributes to rice yield potential (Xue et al., 2008). Ghd7 controls the critical daylength response of Early heading date1 (Ehd1) and florigen expression through circadian gating and phytochrome action (Itoh et al., 2010; Osugi et al., 2011). Two orthologs of EARLY FLOWERING3, which mediate circadian and photoperiodic regulation, act as negative regulators of Ghd7 (Yang et al., 2013). Rice Indeterminate1 acts as a master switch for the transition from the vegetative to the reproductive phase and regulates the expression of Ghd7 independent of the photoperiod (Wu et al., 2008). Ehd3, which contains two plant homeodomain finger motifs and is possibly involved in chromatin state modulation, negatively regulates the transcription of Ghd7 (Matsubara et al., 2011). Heading date16 (Hd16), a flowering time quantitative trait locus gene, was recently shown to encode a casein kinase I protein that mediates the phosphorylation of GHD7 and enhances the photoperiod response (Hori et al., 2013). Although the complex regulatory network of Ghd7 at the transcriptional and posttranscriptional levels in flowering time control has been extensively studied, the role of Ghd7 in rice growth, development, and environmental response has not been adequately investigated.
Recent studies suggested that traditional flowering time genes may have roles in plant development and stress response. In rice, two key flowering time genes, Hd1 and Ehd1, also control panicle development (Endo-Higashi and Izawa, 2011). In Arabidopsis (Arabidopsis thaliana), the flowering promoting gene GIGANTEA and the florigen genes FLOWERING LOCUS T (FT) and TWIN SISTER OF FT (TSF) play a central role in drought escape response (Riboni et al., 2013). FT and TSF also play a key role to link the floral transition and lateral shoot development (Hiraoka et al., 2013). Molecular evidence revealed that FT and TSF proteins directly interact with BRANCHED1/TEOSINTE BRANCHED1-LIKE1 (BRC1) protein, a homolog of TEOSINTE BRANCHED1 (TB1) (Takeda et al., 2003;Choi et al., 2012), and modulate florigen activity in the axillary buds to prevent premature floral transition of the axillary meristems (Niwa et al., 2013). These findings suggest that the regulation of the transition to flowering also plays an important role in the modulation of plant architecture plasticity and environment adaptation.
In this article, we show that the flowering time gene Ghd7 also regulates plant architecture and such regulation is dependent on both genetic background and environmental signaling. Ghd7 responds to various environment signals in addition to daylength to regulate growth, development, and biotic and abiotic stress responses. Our results suggest that Ghd7 may function as a sensor for the plant to adapt to dynamic environmental inputs and that Ghd7 is involved in the plant architecture regulation and stress-response pathways.
The Phenotypic Effect of Ghd7 Is Correlated with Its Expression Level
Ghd7 showed pleiotropic effects on heading date, plant height, and yield traits, and its expression was regulated by light signal and photoperiod (Xue et al., 2008; Itoh et al., 2010). We previously developed a pair of near-isogenic lines, designated NIL(zs7) and NIL(mh7), with almost all of the genetic background of Zhenshan 97 except the introgressed segment, which contained Ghd7 (Xue et al., 2008). Comparison of the phenotypes of NIL(zs7), NIL(mh7), and their hybrid NIL(het) showed that Ghd7 has a partial-dominant effect on flowering time, plant height, and yield traits (Fig. 1, A and D; Table I), consistent with previous results (Xue et al., 2008). The expression level of Ghd7 in NIL(mh7) is nearly twice that in the heterozygous plants, especially at dawn (Fig. 1G). We examined the relation between the expression level of Ghd7 and the phenotype in transgenic plants, in which the coding sequence of Ghd7 from Minghui 63 driven by the ubiquitin promoter was transformed into Hejiang 19 (HJ19), which has a nonfunctional allele of Ghd7 (Xue et al., 2008). Of the 42 T0 plants, 37 were transgene positive (OX-Ghd7 HJ19) and exhibited the expected phenotype (tall with late heading and large panicles; Supplemental Fig. S1). Analysis of two random T1 families (OX-14 and OX-25) from the T0 plants showed perfect cosegregation between the transgene and the phenotype (Table I). Notably, the amount of the Ghd7 transcripts was closely related to the degree of heading delay and yield traits in the T1 generation (Fig. 1, B, E, and H). These results indicated that the phenotypic effect of Ghd7 is quantitatively related to the abundance of its transcript, and that an enhanced transcript level of Ghd7 caused delayed flowering and increased plant height and yield traits.

Figure 1. Phenotypes and Ghd7 expression levels of the various genotypes generated in this work. A and D, Whole plants (A) and main panicles (D) of NIL(zs7), NIL(het), and NIL(mh7) under natural long-day conditions in Wuhan, taken at maturity. B and E, Whole plants (B) and main panicles (E) of the Ghd7 overexpressor in the HJ19 background under natural long-day conditions in Wuhan. C and F, Whole plants (C) and main panicles (F) of OX-Ghd7 and Ami-Ghd7 in the ZH11 background sown in June in Wuhan. Bar in (A) to (C) = 50 cm; bar in (D) to (F) = 10 cm. G, Diurnal expression analysis of Ghd7 in leaf blades of the near-isogenic lines. Samples were collected at 40 d after germination under natural long-day conditions in Wuhan and used for RNA preparation. The numbers below the x axis indicate zeitgeber times (ZTs) of the day; the white bar indicates the light period and the black bar the dark period. Points and error bars indicate average values and SE, respectively, based on three biological repeats. H and I, Expression levels of Ghd7 in transgenic plants in the HJ19 background (H) and the ZH11 background (I). Leaf blades from plants 30 d after germination were collected at 2 h after dawn and used for RNA preparation. Bars and error bars indicate average values and SE, respectively, based on three biological repeats.
Pleiotropic Effects of Ghd7 on Traits Vary with Genetic Backgrounds and Environmental Conditions
It was previously reported that enhancement of Ghd7 expression had no effect on plant height and yield traits in the ehd3 mutant (Matsubara et al., 2011), and the authors supposed that the function of Ghd7 also depends on other cues such as genetic background or environmental conditions. We performed transformation experiments using Zhonghua 11 (ZH11), a variety with a weak-function allele of Ghd7 (Xue et al., 2008). We introduced Ghd7 overexpression (OX-Ghd7 ZH11) and artificial microRNA (amiRNA; Ami-Ghd7) constructs, respectively, into ZH11. Seventeen of the 23 independent OX-Ghd7 ZH11 T0 transformants showed delayed heading, and conversely 13 of the 21 independent Ami-Ghd7 T0 transformants showed accelerated flowering (Supplemental Fig. S2, A and B). Analysis of T1 families of OX-Ghd7 ZH11 and Ami-Ghd7 transformants showed perfect cosegregation between the transgene and the heading date phenotype (Table I). However, no significant increase in plant height or number of spikelets per panicle was detected in the OX-Ghd7 ZH11 plants (seeds were sown May 1 in Wuhan field conditions, as discussed below; Table I), whereas the Ami-Ghd7 plants showed a reduction in all three traits (Table I; Supplemental Fig. S2C). A comparison of these results with those obtained from the transformants of HJ19 suggests that the pleiotropic effects of Ghd7 are dependent on the genetic background, similar to previous findings (Xue et al., 2008). The phenotypic effects of Ghd7 also varied with the environmental conditions. When grown in the Hainan Island winter nursery (natural short day), the OX-Ghd7 ZH11 transgenic plants showed a significant increase in plant height and panicle size as well as delayed heading compared with the wild type (Supplemental Fig. S3; Supplemental Table S1). We subsequently evaluated the extent to which the environment may influence the effects of Ghd7 on phenotype by examining T2 families of single-copy transgenic plants of OX-Ghd7 ZH11 and Ami-Ghd7 in three plantings in the summer rice-growing season in Wuhan. The first planting, sown on April 15, and the second planting, sown on May 20, subjected the plants to natural long-day conditions, whereas the third planting, sown on June 22, exposed the plants to natural short-day conditions. Compared with the wild-type plants, the Ami-Ghd7 plants in general significantly accelerated heading, with decreased panicle branch number and plant height, in all three plantings (Table II). The phenotypic effect of Ami-Ghd7 was much larger in the June 22 planting than in the other two (Fig. 1, C, F, and I; Table II). Conversely, OX-Ghd7 ZH11 plants showed delayed heading in all three plantings (Table II). However, the effect of the transgene on plant height and spikelet number in the June 22 planting was much more drastic than in the other two plantings (Fig. 1, C and F; Table II). It should be noted that the increases in panicle size and plant height in the June 22 planting were not proportional to the length of the heading delay compared with the other two plantings (Table II). It should also be noted that, although no significant change was observed in the number of spikelets per panicle between OX-Ghd7 ZH11 and the control in the May 20 planting, the panicle architecture was changed, with an increase in primary branch number in the transgenic plants (Supplemental Table S2).
These results suggest that the pleiotropic effects of Ghd7 on the phenotype are influenced by the environment, and that Ghd7 might regulate yield traits through modulating panicle architecture independent of the heading date.
Ghd7 Regulates Branching in a Density-Dependent Manner
Ghd7 increased the panicle branch number with a reduced tiller number in NIL(mh7) compared with NIL(zs7) under normal field conditions (Xue et al., 2008). However, we observed that overexpressing Ghd7 in HJ19 increased vegetative branching in pots (Supplemental Fig. S4). We supposed that the enlarged plant size of NIL(mh7) relative to NIL(zs7) creates more competitive pressure, which may promote shade-avoidance signaling. To test this hypothesis, we planted NIL(mh7) and NIL(zs7) at different planting densities. We found that NIL(mh7) plants had significantly more tillers than NIL(zs7) plants under low-density conditions (Fig. 2), demonstrating that Ghd7 regulates tiller number in a density-dependent manner. Interestingly, there was also a significant increase in the secondary branches of the panicles in NIL(mh7) relative to NIL(zs7) at low density, leading to an increased grain number without compromising the number of primary branches (Supplemental Table S3). These results suggest that Ghd7 regulates the plasticity of branch development to adapt the plant to its neighborhood environment.
TEOSINTE BRANCHED1 (OsTB1) was previously shown to act as a negative regulator of lateral branching in rice (Takeda et al., 2003; Choi et al., 2012). We found that OsTB1 was repressed in the shoot tip region of NIL(mh7) compared with NIL(zs7) (Fig. 3C). We therefore generated double-stranded RNA interference lines with reduced expression of OsTB1 (OsTB1RNAi) in the ZH11 background; these showed more tillers but reduced panicle branching compared with the control plants, in agreement with previous results (Supplemental Fig. S5; Takeda et al., 2003; Choi et al., 2012). We then crossed the OsTB1RNAi plants with Ami-Ghd7 plants (which have reduced tiller number and panicle branching relative to the control plants) and examined the branch phenotype of the resulting F1. No significant difference in tiller number was detected between the Ami-Ghd7/OsTB1RNAi plants and the OsTB1RNAi plants (Fig. 3, A, D, and F), whereas the flowering time of Ami-Ghd7/OsTB1RNAi plants was similar to that of Ami-Ghd7 plants (Fig. 3, D and E). Using quantitative real-time PCR (qRT-PCR) analysis, we found a moderate increase in the OsTB1 transcript level in Ami-Ghd7 plants (Fig. 3B). These results suggest that Ghd7 acts upstream of OsTB1 in regulating branching.
Ghd7 Mediates the PHYTOCHROME B-OsTB1 Pathway
Previous studies revealed that the plant response to shade signals and the control of branching mainly depend on the PHYTOCHROME B (PHYB)-TB1 pathway (Kebrom et al., 2006; González-Grandío et al., 2013). The role of phytochromes in photoperiodic flowering in rice was recently elucidated (Osugi et al., 2011). The mRNA levels of both Ghd7 and Ehd1 increased in the phyB mutant relative to the wild type (Osugi et al., 2011). To understand the effect of PHYB on the Ghd7 pathway in the control of flowering time and branch development, we analyzed a phyB mutant in the ZH11 background. The phyB mutant showed accelerated heading, as previously described (Takano et al., 2005), accompanied by a reduction in tiller number under Wuhan field conditions (Supplemental Fig. S6). However, we found no significant difference in Ghd7 gene expression between the phyB mutant and wild-type plants (Fig. 4B). We prepared a GHD7-specific antibody to compare GHD7 protein levels (Supplemental Fig. S7). In wild-type plants, GHD7 started to accumulate in the morning, peaked at noon, gradually decreased from the afternoon until midnight, and reached a very low level before dawn (Fig. 4C). In the phyB mutant, the GHD7 level remained low throughout the day under long-day conditions (Fig. 4C). This result suggests that PHYB maintains the protein level of GHD7.
To understand the genetic interaction between PHYB and Ghd7, we generated phyB/OX-Ghd7 plants and compared the phenotypes of the resulting F2 generation. The phyB/OX-Ghd7 plants showed a heading date similar to that of OX-Ghd7 (Fig. 4, A and D). Overexpression of Ghd7 partially rescued the tiller number of the phyB mutant (Fig. 4, A and E). These analyses suggest that Ghd7 acts downstream of PHYB.
GHD7 Represses Transcriptional Activity
It was previously reported that the middle region of CCT domain proteins has transcriptional activation activity (Tiwari et al., 2010; Wu et al., 2013). We therefore performed a transcriptional activation assay using the yeast GAL4 DNA binding domain and the herpes simplex virus protein 16 (VP16) activation domain in a transient assay system with luciferase (LUC) as a reporter (Fig. 5A; Jing et al., 2013). As shown in Figure 5B, BD-GHD7 did not activate transcription of the LUC reporter gene, suggesting that GHD7 has no transactivation activity in the plant cell. A high LUC signal was detected in transformants carrying the BD-VP16 construct, as a result of transcriptional activation by the VP16 domain (Fig. 5B). However, LUC activity was drastically reduced when GHD7 was fused to VP16 (BD-VP16-GHD7) (Fig. 5B). These results suggest that GHD7 has intrinsic transcriptional repression activity in vivo.
Expression of Ghd7 Is Regulated by Environmental Signals
An analysis of the Ghd7 spatial expression profile revealed that Ghd7 expression was mainly detected in the emerged leaf blade, whereas it was virtually absent in the other tissues assayed, even in the preemerged immature leaf blade surrounded by the leaf sheath (Supplemental Fig. S8A). In the emerged leaf blade, Ghd7 transcripts displayed a gradient, with much higher transcript accumulation in the leaf tip than in the leaf base (Supplemental Fig. S8B). The Ghd7 transcript levels were relatively constant in the leaf blade at the vegetative, reproductive, and ripening stages (Fig. 6A), similar to Hd1 (Supplemental Fig. S9).
We analyzed the DNA sequence of the promoter region of Ghd7 and found a number of cis-elements, including elements involved in the stress response, such as the abscisic acid (ABA) response element and the C-repeat binding protein element, and hormone response elements, such as the MYB/MYC recognition site and ABA/jasmonic acid (JA) response elements (Fig. 6C; Finkelstein and Lynch, 2000; Abe et al., 2003; Brown et al., 2003; Simpson et al., 2003; Svensson et al., 2006). We therefore assayed Ghd7 expression in rice seedlings treated with different phytohormones and with drought stress. The accumulation of Ghd7 mRNA was induced by cold treatment but was repressed by drought, ABA, JA, and high-temperature treatments (Fig. 6D). The expression of Ghd7 was only slightly affected by 1-aminocyclopropane-1-carboxylic acid (ACC) and salicylic acid (SA) treatments (Fig. 6D). These results suggest that Ghd7 is involved in the response to various environmental signals in addition to photoperiod.
Ghd7 Regulates the Transcriptomes of Multiple Processes
To gain insight into the downstream genes regulated by Ghd7, we performed a microarray analysis using Affymetrix rice gene chips. Young leaves (35 d after germination) and developing panicles (0.1 cm) from field-grown OX-Ghd7 HJ19 transgenic and wild-type plants were used to isolate RNA for the chip analysis. With a cutoff of 2-fold change, a total of 256 and 622 genes were up- and down-regulated, respectively, in the leaves of OX-Ghd7 HJ19 plants (Fig. 7, A and D; Supplemental Table S4). In the young panicles of OX-Ghd7 HJ19 plants, 177 genes were up-regulated and 303 were down-regulated compared with the wild type (Fig. 7, B and E; Supplemental Table S5). These analyses support the previous conclusion that Ghd7 mainly plays an inhibitory role in gene expression.
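To make the 2-fold-change cutoff concrete, the following minimal sketch (not the authors' actual microarray pipeline; probe names and signal values are hypothetical) shows how probe sets would be classified as up- or down-regulated from the OX:wild-type signal ratio:

```python
import pandas as pd

# Hypothetical hybridization signals for three probe sets
chip = pd.DataFrame({
    "probe_set": ["P1", "P2", "P3"],
    "signal_wt": [120.0, 85.0, 400.0],   # wild-type signal
    "signal_ox": [260.0, 80.0, 150.0],   # OX-Ghd7 signal
})

ratio = chip["signal_ox"] / chip["signal_wt"]
up_regulated = chip[ratio >= 2.0]        # OX:wild-type ratio >= 2
down_regulated = chip[ratio <= 0.5]      # OX:wild-type ratio <= 0.5
print(len(up_regulated), "up-regulated;", len(down_regulated), "down-regulated")
```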
Expression of several flowering-related genes was altered in OX-Ghd7 HJ19 plants, both in young leaves and in developing panicles. Ehd1 and FT-like genes were down-regulated in the leaves of OX-Ghd7 HJ19 plants, consistent with previous results (Supplemental Table S4; Xue et al., 2008; Itoh et al., 2010). The expression of a large number of MADS box genes was altered in both leaves and panicles of OX-Ghd7 HJ19 plants, mostly down-regulated, including OsMADS1, OsMADS14, OsMADS18, and OsMADS34 in leaves, which regulate the reproductive transition and panicle architecture (Supplemental Table S4; Lee et al., 2004; Kobayashi et al., 2012). However, OsMADS55, which is considered a negative regulator of flowering associated with ambient temperature, was significantly up-regulated in both leaves and panicles of OX-Ghd7 HJ19 plants (Supplemental Table S4; Lee et al., 2012).
The expression of many genes involved in hormone metabolism and signaling pathways was affected in OX-Ghd7 HJ19 plants. The expression of the auxin-inducible gene Oshox1, which regulates the sensitivity of polar auxin transport (Scarpella et al., 2002), increased in OX-Ghd7 HJ19 plants (Fig. 7C). The cytokinin oxidase gene OsCKX2, which negatively regulates rice grain number (Ashikari et al., 2005), was down-regulated in OX-Ghd7 HJ19 plants (Fig. 7C). Ethylene and GA contribute to internode elongation (Iwamoto et al., 2011). The transcript abundance of OsACO1, a key enzyme gene in the ethylene synthesis pathway (Iwamoto et al., 2010), was up-regulated in OX-Ghd7 HJ19 plants (Fig. 7C). The GA2-oxidase gene OsGA2ox6, which controls plant height and tiller number (Lo et al., 2008; Huang et al., 2010), was repressed in OX-Ghd7 HJ19 plants (Fig. 7C). Consistently, OsCKX2 and OsACO1 were also down-regulated and up-regulated, respectively, in Ami-Ghd7 plants (Fig. 7C). These results suggest that Ghd7 is involved in regulating multiple hormonal pathways.
Many transcription factor (TF) families also appeared to be affected in OX-Ghd7 HJ19 plants, most notably the APETALA2, basic helix-loop-helix, myeloblastosis, WRKY, and zinc finger TFs, in both leaves and panicles (Supplemental Tables S4 and S5). Some TF families, such as CCT domain genes in leaves and TCP and YABBY genes in panicles, were down-regulated in a tissue-specific manner in OX-Ghd7 HJ19 plants (Supplemental Tables S4 and S5). CCT genes have been implicated in flowering time control through the photoperiod and circadian pathways (Valverde, 2011). YABBY and TCP genes have been shown to participate in the activities controlling lateral organs as well as the shoot apical meristem (Dai et al., 2007; Martín-Trillo and Cubas, 2010). These results suggest that Ghd7 plays different roles in vegetative and reproductive organs by regulating various transcription networks.
Ghd7 Is Involved in Stress-Response Pathways and Reactive Oxygen Species Homeostasis
Interestingly, we found that many Ghd7-regulated genes are involved in abiotic and biotic stress-response pathways. Among them, OsDREB1A and OsPR4, which play a role in cold and drought stress, respectively (Dubouzet et al., 2003;Wang et al., 2011), were both significantly up-regulated in OX-Ghd7 HJ19 plants (Fig. 7F).
OsDREB1A was down-regulated in Ami-Ghd7 plants, but OsPR4 was not (Fig. 7F). We applied drought stress to OX-Ghd7 HJ19 and Ami-Ghd7 plants and found that Ami-Ghd7 plants showed enhanced drought tolerance, whereas OX-Ghd7 HJ19 plants were more sensitive to drought (Fig. 8). These results indicate that Ghd7 is indeed involved in the regulation of the drought stress response.
Reactive oxygen species (ROS) serve as important signaling molecules that participate in the response to both biotic and abiotic stresses (Sagi et al., 2004; Gechev et al., 2006; Miller et al., 2008). OsMT2b is a ROS scavenger and functions as a signal in the resistance response (Wong et al., 2004). Our analysis showed that OsMT2b was up-regulated in OX-Ghd7 HJ19 plants but down-regulated in Ami-Ghd7 plants (Fig. 7F). The OsrbohE and RACK1A genes, which are involved in ROS production during the immune response, were down-regulated in OX-Ghd7 HJ19 plants (Fig. 7F; Yoshiaki et al., 2005; Nakashima et al., 2008), whereas both genes were up-regulated in Ami-Ghd7 plants (Fig. 7F). Finally, a group of ROS homeostasis-related genes and wall-associated kinase family genes showed at least a 2-fold change in expression in OX-Ghd7 HJ19 plants (Supplemental Tables S6 and S7). These data suggest that Ghd7 affects the expression of genes whose proteins might be components of the ROS homeostasis network and that Ghd7 responds to biotic stresses by changing the cell wall components.
DISCUSSION
Unlike animals, plants have a remarkable ability to alter their development in response to myriad exogenous and endogenous signals throughout the life cycle. We previously cloned the quantitative trait locus gene Ghd7, which acts as an important regulator of heading date and yield potential in rice (Xue et al., 2008). More recent work showed that Ghd7 mainly functions as a flowering repressor under long-day conditions and is regulated by light- and circadian clock-dependent gating (Xue et al., 2008; Itoh et al., 2010; Osugi et al., 2011). In addition to the light signal, another important environmental cue, temperature, also regulates Ghd7 expression (Song et al., 2012).
In this study, we found that the Ghd7 transcript was regulated by various environmental signals, such as light, temperature, and abiotic and biotic stresses, and that the expression level of Ghd7 in turn regulated the growth and development of the rice plant. ABA is a regulatory molecule involved in drought stress tolerance, and JA is involved in the plant response to biotic stresses (Yamaguchi-Shinozaki and Shinozaki, 2006; Robert-Seilaniantz et al., 2007). We showed that ABA, JA, and drought treatments strongly repressed Ghd7 expression, which may reflect a response of the plant to quickly end its life cycle under adverse conditions in order to escape or avoid stresses. Moreover, our results showing that Ghd7 regulates stress-related genes and ROS homeostasis genes suggest that Ghd7 might be involved in these stress pathways as well. Matsubara et al. (2011) reported that the plant homeodomain finger gene Ehd3 represses Ghd7 transcription. However, they observed no substantial increase in seed productivity in the ehd3 mutant, despite increased Ghd7 expression (Matsubara et al., 2011). In this study, we found that overexpression of Ghd7 in ZH11 delayed the heading date regardless of the planting conditions, but drastically increased the yield traits in the June planting, and not in the April or May plantings, under natural field conditions in Wuhan. These results imply that a certain combination of environmental conditions may be required for Ghd7 to increase the yield traits of the rice plant. Thus, Ghd7 might function not only as a flowering time regulator, but also as a sensor of environmental signals, enabling the plant to dynamically regulate growth, development, morphology, architecture, and stress responses (Fig. 9).
(Figure 7 legend, partial: The x and y axes indicate the chip hybridization signal in the overexpressor and the wild type, respectively; the pink and green dots indicate probe sets with OX:wild-type signal ratios of >2 or <0.5, respectively. C, Differential regulation patterns of some hormone-related genes in OX-Ghd7 HJ19 (top) and Ami-Ghd7 (bottom) plants; bars and error bars indicate average values and SE, respectively, based on three biological repeats. D and E, Expression patterns of all differentially regulated genes in leaf (D) and panicle (E) in OX-Ghd7 HJ19 plants relative to the wild type. F, Differential regulation patterns of abiotic and biotic stress-responsive genes in OX-Ghd7 HJ19 (top) and Ami-Ghd7 (bottom) plants; bars and error bars indicate average values and SE, respectively, based on three biological repeats.)
Tillers and panicle branches are lateral organs formed at the vegetative and reproductive stages in rice, respectively. Panicle branching is often associated with flowering time, likely because of longer vegetative periods. Studies have also revealed that some flowering time genes, such as Hd1 and Ehd1, control panicle development in rice independently of flowering time control (Endo-Higashi and Izawa, 2011). Meanwhile, several genes, such as Gn1a, SP1, and DEP1, exclusively alter the number of panicle branches without simultaneous changes in flowering time or tiller number (Ashikari et al., 2005; Huang et al., 2009). Tiller branching is modulated by both genetic factors and environmental conditions. Mutations in MOC1, LAX1, and LAX2 lead to a reduced number of both tillers and panicle branches (Komatsu et al., 2003; Li et al., 2003; Tabuchi et al., 2011). In contrast, in the d and OsTB1 mutants, the effects of the genes on tillers and panicle branches are opposite to each other: an increase in tiller number is accompanied by a decrease in panicle branches (Takeda et al., 2003; Lin et al., 2009; Choi et al., 2012).
Two florigen genes, FT and TSF, were recently shown to modulate lateral shoot outgrowth in Arabidopsis (Hiraoka et al., 2013). Moreover, these two florigen proteins interact with BRC1, which is considered an Arabidopsis TB1 clade gene, to repress the floral transition of the axillary buds (Niwa et al., 2013). These results suggest a potential link between flowering time control and branching development. We previously showed that Ghd7 increased panicle branching but decreased tiller branching. The results of the present study suggest that Ghd7 positively regulates both tiller and panicle branches in a density-dependent manner, and that Ghd7 is involved in regulating the plasticity of branch development for adaptation to different environmental conditions. This process is partly regulated by PHYB through maintenance of the GHD7 protein.
Ghd7 then repressed the expression of OsTB1, partly through GA signaling (Lo et al., 2008), and enhanced the expression of OsMADS57, which is considered an interaction partner of OsTB1 (Guo et al., 2013), to control tiller branching. In the panicle branch, many genes involved in specifying meristem and lateral organ identity, including TCP genes, SPL genes, and YABBY genes, were regulated by Ghd7. Thus, the effects of Ghd7 on multiple traits can be explained by delaying the phase transition and increasing lateral organ growth activity. We propose that Ghd7 plays a key role in integrating the floral transition and lateral branch development in response to environmental cues to maximize the reproductive success of the rice plant (Fig. 9).
Figure 8. Response of Ghd7 to drought stress. A, Phenotypes of OX-Ghd7 HJ19 and Ami-Ghd7 under drought stress. Bar = 10 cm. B, Survival rate of OX-Ghd7 HJ19 and Ami-Ghd7 after drought stress (n = 30 each). Bars and error bars indicate average values and SE, respectively, based on three biological repeats.
Figure 9. A schematic illustration of the Ghd7 functions learned from this study. Ghd7 links dynamic environmental inputs with phase transition, architecture regulation, and stress response to maximize the reproductive success of the rice plant.
MATERIALS AND METHODS
Growth Conditions of the Rice Plants
The rice (Oryza sativa) plants examined under natural field conditions were grown in Wuhan (Huazhong Agricultural University, long 114°21'E, lat 30°28'N) and Hainan Island (Lingshui County, long 110°01'E, lat 18°30'N), China. The summer rice growing season in Wuhan generally has relatively high temperatures and long-day conditions (unless otherwise specified), whereas the winter nursery in Hainan has relatively low temperatures and short-day conditions. Germinated seeds were sown in seed beds (late April to late June in Wuhan, and mid- to late November in Hainan), and 1-month-old seedlings were transplanted to the fields. The planting density was normally 16.5 cm between plants within a row, with rows 26 cm apart. For the density experiment, this normal density was regarded as the high-density condition. In the low-density condition, the plants were 70 cm apart within a row, and the distance between rows was 30 cm. Field management, including irrigation, fertilizer application, and pest control, essentially followed normal agricultural practices.
Phenotypic Data Collection
The heading date was recorded as the day on which the first panicle of the plant emerged. The total number of spikelets on the main panicle was counted approximately 10 d after heading. Plant height was measured from the ground to the tip of the tallest tiller.
Generation of Constructs and Transformation
To construct the OX-Ghd7 HJ19 vector, the open reading frame of Ghd7 was amplified by PCR using primers OX-F and OX-R (Supplemental Table S8), which contained restriction sites for KpnI and BamHI, respectively, for subcloning. The amplified complementary DNA (cDNA) was cloned into the pCAMBIA1301U vector, and the construct was then transformed into HJ19.
To construct the OX-Ghd7 ZH11 vector, the Ghd7 promoter region was amplified with the PRO-F and PRO-R primers containing KpnI and BamHI sites, respectively, and subcloned into the pCAMBIA1301 vector (Supplemental Table S8). The full-length cDNA of Ghd7 was then amplified by PCR using primers ORF-F and ORF-R containing BamHI and HindIII sites, respectively (Supplemental Table S8), and inserted into the pCAMBIA1301 vector in fusion with the promoter region to generate the OX-Ghd7 ZH11 construct, which was then transformed into ZH11.
To construct the Ami-Ghd7 vector, we used a customized version of the original Web MicroRNA Designer platform to design amiRNA sequences (21-mers) based on the TIGR5 rice genome annotation. We selected the most suitable amiRNA candidate suggested by the platform, with good hybridization properties to the target mRNA, a single target in the rice genome, and no off-target effects on other genes. The primary amiRNA construct was amplified with the Ami-Ghd7-I, Ami-Ghd7-II, Ami-Ghd7-III, and Ami-Ghd7-IV primers (Supplemental Table S8) and was engineered from pNW55 as previously described (Warthmann et al., 2008). The 554-bp fusion product was cloned into the pGEM-T vector (Promega), excised with KpnI and BamHI, cloned into the pCAMBIA1301U vector, and then transformed into ZH11.
To construct the OsTB1RNAi vector, a 484-bp fragment of OsTB1 was amplified by PCR using primers OsTB1RNAi-F and OsTB1RNAi-R (Supplemental Table S8). The OsTB1RNAi-F primer contained SpeI and KpnI sites and the OsTB1RNAi-R primer contained SacI and BamHI sites, for subcloning into the pDS1301 vector that was a modified version of pCAMBIA1301 (Yuan et al., 2007).
All of the constructs were independently introduced into Agrobacterium tumefaciens strain EHA105, and transformation was performed as previously described (Ge et al., 2006).
RNA Extraction and qRT-PCR
We isolated total RNA using an RNA extraction kit (TRIzol reagent; Invitrogen) according to the manufacturer's instructions. For qRT-PCR, approximately 3 μg of total RNA was reverse transcribed using SuperScript II reverse transcriptase (Invitrogen) in a volume of 100 μL to obtain cDNA. We carried out qRT-PCR in a total volume of 25 μL containing 2 μL of the reverse-transcribed product, 0.25 μM gene-specific primers, and 12.5 μL SYBR Green Master Mix (Applied Biosystems) on an Applied Biosystems 7500 Real-Time PCR System according to the manufacturer's instructions. Primer pairs for the qRT-PCR analysis are listed in Supplemental Table S8. The measurements were analyzed using the relative quantification (2^-ΔΔCt) method (Livak and Schmittgen, 2001).
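For illustration, a minimal sketch of the relative quantification (2^-ΔΔCt) method of Livak and Schmittgen (2001) cited above, with hypothetical Ct values (the authors' actual calculations were performed on the 7500 system output):

```python
def relative_expression(ct_target_s, ct_ref_s, ct_target_c, ct_ref_c):
    """2^-ddCt method (Livak and Schmittgen, 2001).
    _s = sample of interest (e.g. transgenic), _c = calibrator (wild type)."""
    d_ct_s = ct_target_s - ct_ref_s   # normalize target Ct to the reference gene
    d_ct_c = ct_target_c - ct_ref_c
    dd_ct = d_ct_s - d_ct_c           # ddCt relative to the calibrator
    return 2.0 ** (-dd_ct)

# Example: the target amplifies two cycles earlier in the sample of interest,
# so its expression is 2^2 = 4-fold that of the calibrator.
print(relative_expression(22.0, 18.0, 24.0, 18.0))  # -> 4.0
```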
Purification of Recombinant Protein
To construct the recombinant MBP-GHD7 vector, the open reading frame of Ghd7 was amplified with the MBP-GHD7-F and MBP-GHD7-R primers containing EcoRI and BamHI sites and subcloned into the pMAL vector. MBP and MBP-GHD7 recombinant fusion proteins were induced with isopropyl β-D-1-thiogalactopyranoside and expressed in the Escherichia coli BL21 (DE3) strain. The proteins were then purified on MBP-affinity beads according to the manufacturer's instructions.
Antibody Production and Immunoblotting
We synthesized a peptide corresponding to amino acids 243 to 257 of GHD7 (CTYVDPSRLELGQWFR) conjugated with keyhole limpet hemocyanin, and a polyclonal antibody was raised in rabbit. Rice leaf total protein extraction was performed as described (Li et al., 2011). Proteins were boiled in SDS loading buffer, separated on 10% SDS-PAGE gels, and blotted onto polyvinylidene fluoride membranes. The membranes were then incubated with the anti-GHD7 antibody (1:200 dilution) or an anti-heat shock protein antibody (1:5,000 dilution; Li et al., 2011) and subsequently with a horseradish peroxidase-conjugated goat anti-rabbit secondary antibody (Abcam) according to the manufacturer's instructions. The protein bands were visualized with a standard enhanced chemiluminescence kit (Thermo Scientific Pierce), and the signal was captured on X-ray film.
LUC Activity Assay
To determine the transcriptional activation activity of GHD7, full-length GHD7 fused with the GAL4 DNA binding domain (BD-GHD7) was cotransformed into Arabidopsis (Arabidopsis thaliana) protoplasts with a reporter construct containing a 4× upstream activation sequence (UAS) region and a minimal 35S promoter sequence fused to the LUC cDNA. To analyze the transcriptional repression activity of GHD7, full-length GHD7 was fused with the GAL4-VP16 domain (BD-VP16-GHD7), a widely used transcriptional activator, and cotransformed into Arabidopsis protoplasts with the reporter construct. The LUC activity assay was performed as previously reported (Tang et al., 2012). LUC reporter activity was detected with a luminescence kit using the LUC assay substrate (Promega). Relative reporter gene expression levels are expressed as the ratio of LUC to GUS.
Stress Treatments of Plant Materials
To examine the expression level of Ghd7 under various abiotic stresses or phytohormone treatments, rice plants of NIL(mh7) were grown in hydroponic culture medium for approximately 3 weeks in a phytotron (14-h light/10-h dark at 32°C/26°C). Seedlings at the four-leaf stage were subjected to abiotic stresses, including drought (removal of the water supply under phytotron conditions, 14-h light/10-h dark at 32°C/26°C), cold (transfer to a phytotron at 14-h light/10-h dark at 10°C/10°C), and heat (transfer to a phytotron at 14-h light/10-h dark at 42°C/42°C). For the phytohormone treatments, 20 μM ABA, 0.5 mM JA, 0.1 mM SA, and 0.1 mM ACC were individually added to the culture medium. Samples were collected at the designated time points (0 min, 30 min, 6 h, and 12 h).
To test the drought stress tolerance of transgenic plants at the seedling stage, transgenic-positive and wild-type plants (30 plants each, three repeats) were grown in a half-and-half manner in barrels filled with sandy soil. Drought stress testing was conducted at the four-leaf stage, following a previously described procedure (Tang et al., 2012).
Microarray Analysis
RNA samples used for microarray analysis were prepared from young leaves in a vegetative stage (35-d-old) and from developing panicles (0.1 cm in length) of OX-Ghd7 HJ19 transgenic and wild-type plants grown under normal field conditions with two biological replicates. RNA isolation, purification, and Affymetrix microarray hybridization were carried out using the Affymetrix GeneChip service (CapitalBio) protocol. The microarray analysis was conducted according to a previously described process (Yang et al., 2012).
Sequence data from this article can be found in the GenBank/EMBL data libraries under accession number GSE51616.
Supplemental Data
The following materials are available in the online version of this article.
Supplemental Figure S1. T0 generation plants of OX-Ghd7 HJ19 planted under natural long-day field conditions in Wuhan.
Supplemental Figure S2. T0 and T1 generation plants of OX-Ghd7 ZH11 and Ami-Ghd7 planted under natural long-day field conditions in Wuhan.
Supplemental Figure S3. T2 generation plants of OX-Ghd7 ZH11 and Ami-Ghd7 planted under natural short-day field conditions in Hainan Island.
Supplemental Figure S4. OX-Ghd7 HJ19 plants that showed an increase in the tiller number at the vegetative stage.
Supplemental Figure S5. Phenotype of OsTB1RNAi plants in the ZH11 background planted under natural long-day field conditions in Wuhan.
Supplemental Figure S6. phyB mutants in the ZH11 background planted under natural long-day field conditions in Wuhan.
Supplemental Figure S7. Detection of the GHD7-MBP protein by the GHD7 antibody.
Supplemental Figure S8. Spatial expression pattern of Ghd7 in various tissues and along the leaf blade.
Supplemental Figure S9. Expression patterns of Hd1 and Hd3a at various developmental stages.
Supplemental Table S1. Performance of OX-Ghd7 ZH11 and Ami-Ghd7 plants under natural short-day conditions in Hainan Island.
Supplemental Table S2. The panicle architecture of OX-Ghd7 ZH11 and Ami-Ghd7 plants with different sowing times in Wuhan field conditions.
Supplemental Table S3. Branch number in panicles of NIL(zs7) and NIL (mh7) under low and high planting densities.
Supplemental Table S4. The differentially regulated genes in young leaves revealed by microarray analysis.
Supplemental Table S5. The differentially regulated genes in developing panicles revealed by microarray analysis.
Supplemental Table S6. ROS homeostasis-related genes revealed by microarray analysis of young leaves.
Supplemental Table S7. Wall-associated kinase family genes revealed by microarray analysis of young leaves.
Supplemental Table S8. The primers used in this work.
Early prediction of in-hospital death of COVID-19 patients: a machine-learning model based on age, blood analyses, and chest x-ray score
An early-warning model to predict in-hospital mortality on admission of COVID-19 patients at an emergency department (ED) was developed and validated using a machine-learning approach. In total, 2782 patients were enrolled between March 2020 and December 2020, comprising 2106 patients from the first wave and 676 patients from the second wave of the COVID-19 outbreak in Italy. The first-wave patients were divided into two groups, with 1474 patients used to train the model and 632 to validate it. The 676 patients of the second wave were used to test the model. Age, 17 blood analytes, and the Brescia chest X-ray score were the variables processed using a random forests classification algorithm to build and validate the model. Receiver operating characteristic (ROC) analysis was used to assess the model performance. A web-based death-risk calculator was implemented and integrated within the Laboratory Information System of the hospital. The final score was constructed from age (the most powerful predictor), blood analytes (the strongest predictors were lactate dehydrogenase, D-dimer, the neutrophil/lymphocyte ratio, C-reactive protein, lymphocyte %, standardized ferritin, and monocyte %), and the Brescia chest X-ray score (https://bdbiomed.shinyapps.io/covid19score/). The areas under the ROC curve obtained for the three groups (training, validation, and testing) were 0.98, 0.83, and 0.78, respectively. The model predicts in-hospital mortality on the basis of data that can be obtained in a short time, directly at the ED on admission. It functions as a web-based calculator, providing a risk score that is easy to interpret, and can be used in the triage process to support decisions on patient allocation.
Introduction
Starting in late February 2020, the COVID-19 outbreak struck the north of Italy, causing more than 30,000 deaths in Lombardy alone up to the end of March 2021. At the beginning of the outbreak, the Spedali Civili di Brescia (SCBH), the university hospital of one of the hardest-hit cities in Europe, faced a 'flash flood' of severely ill patients seeking admission to the emergency department (ED). For several weeks, their number exceeded the available resources, forcing a continuous organizational restructuring of the hospital wards (Garrafa et al., 2020b).
In those weeks, given the limited evidence on clinically proven predictors (Marengoni et al., 2021; Wynants et al., 2020; Sperrin et al., 2020), prioritizing hospital admission of non-critical patients was an arduous task. Essentially, the criteria were based on the presence of fever, respiratory symptoms, and the level of blood oxygenation. A significant drawback of this approach was that patients presenting to the ED with very similar clinical findings underwent inconsistent assessments. In this scenario, the availability of predictors would have been extremely beneficial, not only to triage patients, but also to monitor hospitalized patients and warn of exacerbation of the outbreaks.
Starting in March 2020, all patients referred to EDs underwent a chest X-ray at admission or within a few hours. With the purpose of grading pulmonary involvement and tracking changes objectively over time, a chest X-ray severity score was developed (Brescia X-ray score) (Borghesi and Maroldi, 2020; Maroldi et al., 2021; Borghesi et al., 2020a; Borghesi et al., 2020b). The score was able to predict in-hospital mortality in 302 patients. In addition to the chest X-ray severity score, a dedicated blood sampling profile was included in the COVID-19 ED work-up (Garrafa et al., 2020a). Among its 17 blood analytes, the sampling profile encompassed the hemochrome (complete blood count), inflammation biomarkers such as C-reactive protein (CRP), lactate dehydrogenase (LDH), and ferritin, and coagulation markers (fibrinogen and D-dimer). From that time, the medical literature began to include an increasing number of studies advocating the prognostic value of single or grouped blood parameters (Bonetti et al., 2020; Borghi et al., 2020; Avouac et al., 2021; Knight et al., 2020). All of these parameters were present in our COVID-19 sampling profile.
This study aims to develop and validate an early-warning model (BS-EWM), predictive of in-hospital death, based on data that can easily be acquired on admission to the ED: age, simple blood biomarkers, and a chest X-ray. The model was constructed based on the analysis of a cohort of 2782 COVID-19 patients treated in a single reference center over a 10-month period.
This paper adheres to the TRIPOD checklist for predictive model development and validation (Collins et al., 2015).
The study was approved by the local ethics committee (COVID-SURG-BS; NP 4059).
Results
Description of the sample
The descriptive statistics for all variables in the dataset are presented in Supplementary file 1b; they were computed and stratified by wave (MA vs. MD) and by outcome (alive vs. dead). The two subsets were similar for most variables.
The correlations between the 17 analytes and the Brescia X-ray score were investigated using Spearman correlation coefficients and visualized using a correlation plot (Figure 2). The Brescia X-ray score was positively correlated with neutrophil to lymphocyte ratio (NLR), CRP, LDH, standardized ferritin, and D-dimer, and was negatively correlated with lymphocyte %, monocyte %, and basophil %.
BS-EWM
A machine-learning model (BS-EWM) was developed using a dataset of 2782 COVID-19 patients admitted to the ED and hospitalized at the SCBH from March to December 2020. The majority of the patients (2106/2782, 75.70%) belonged to the first wave (MA), and the remaining fraction (676/2782, 24.30%) to the second wave (MD). The machine-learning model had the condition dead/alive as outcome and age, the Brescia X-ray score, and the 17 blood sample analytes as covariates. Figure 1 reports the flowchart describing how the data were divided for training, validating, and testing the BS-EWM.
(Figure 2 legend: Correlation plot of the biomarkers and the Brescia chest X-ray score. The relationships between the 17 analytes and the Brescia chest X-ray score are inspected with Spearman correlation coefficients, ρs, represented by blue and red circles (positive and negative correlation, respectively). The diameter of each circle is proportional to the magnitude of ρs, and black crosses identify correlations not significantly different from zero (p-values > 0.05). The correlation matrix is reordered according to a hierarchical cluster analysis on the quantitative variables.)
The synthetic minority oversampling technique (SMOTE) procedure, which rebalanced the dead/alive ratio to 50%/50% from the original 20.09%, improved the accuracy, specificity, and sensitivity of the random forest applied to the data (see Supplementary file 1c, which compares the performance metrics with and without the SMOTE method).
The relative variable importance measure (rel VIM) and partial dependence plots (PDPs) were extracted from the random forests (Figures 3 and 4, respectively). In Figure 3, the rel VIM of the BS-EWM based on age, the Brescia X-ray score, and the 17 blood analytes is reported on a bar plot. Since age was strongly associated with the risk of death, it masked the role of the other covariates. For completeness, the relevance of the 17 analytes and the Brescia X-ray score was estimated in an additional EWM in which the covariate 'age' was excluded. In Figure 4, 9 of the 17 analytes and the Brescia X-ray score were identified as important in predicting the risk of death (rel VIM > 60). The effects of changes in covariate values on the death-risk output of the EWM are reported by means of PDPs (2D plots in the x-y plane) (Figure 4). Only fibrinogen was excluded from this graphical representation, since in Table 1 it was not significantly different between the deceased and alive subpopulations. Most PDPs showed increasing relationships between the x-variable and the EWM output, reaching a plateau at high values of x.
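As an illustration of how a rel VIM and a partial dependence profile can be extracted from a random forest, the following sketch uses scikit-learn on synthetic data; the authors' analysis was performed in R, and the covariates here are placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                    # five mock covariates
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

# Gini-based importances rescaled so the largest equals 100 (a rel VIM analogue)
rel_vim = 100 * rf.feature_importances_ / rf.feature_importances_.max()
print(rel_vim.round(1))

# Partial dependence of the predicted risk on covariate 0
pdp = partial_dependence(rf, X, features=[0], kind="average")
print(pdp["average"][0][:5])                     # first few grid values
```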
When compared with other models, such as the gradient boosting machine (GBM) and logistic regression, the random forest showed better performance in terms of area under the curve (AUC), sensitivity, and specificity. The model yielded the highest sensitivity in-sample (0.93), maintained a substantial out-of-sample sensitivity of 0.82 in validation, and decreased to 0.73 when tested on the MD subgroup (see Table 2, which details all the metrics extracted from the ROC analysis). The ROC curves are visualized in Figure 5, where, for each model (random forest, GBM, and logistic regression), the performances in training, validation, and testing are compared in a single graph.
In order to compare the BS-EWM score with univariate models based on single biomarkers, three random forests (on the training, validation, and testing sets) were estimated for each of the most important biomarkers (LDH, D-dimer, the neutrophil/lymphocyte ratio, neutrophils %, fibrinogen, CRP, the Brescia chest X-ray score, lymphocytes %, standardized ferritin, and monocytes %). The results of these 30 models are reported in Supplementary file 1d. It is evident that, considering one biomarker at a time, the model provides good predictions in training but poor performance out of sample (contrary to the BS-EWM score, it loses its predictive power on fresh data). Hence, a score based on a multivariate model provides better results, since it also accounts for interactions among variables.
(Figure 3 legend: Relative variable importance measure (rel VIM). A1, rel VIM based on the Gini index, extracted from a random forest where the outcome is dead/alive and the covariates are the 17 biomarkers, the Brescia X-ray score, and age. The algorithm grows 10,000 trees, with the number of splitting variables at each tree node set to √(number of covariates in the model); missing values are imputed with the 'on-the-fly imputation' algorithm. A2, a model with the same features excluding the covariate 'age', since age was strongly associated with the risk of death and masked the role of the remaining covariates.)
(Figure 4 legend: Partial dependence plots (PDPs) of the random forest grown on the 17 biomarkers and the Brescia X-ray score. Considering the random forest that excludes the 'age' variable, PDPs were computed for the covariates with rel VIM > 60 (cut-off identified by the red dashed line) and a significant p-value.)
(Table 2 legend: Comparison between the performances of three methods, random forest, GBM, and logistic regression, applied to the rebalanced dataset obtained with the SMOTE methodology. Logistic regression predictions are computed using 10-fold cross-validation in order to be comparable with the random forest and GBM predictions (which use out-of-bag and 10-fold cross-validation, respectively).)
Discussion
The dataset for the development, validation, and testing of the BS-EWM originated entirely from one Italian region, potentially limiting the generalizability of the risk score to other areas of the world; additional validation studies from different geographic areas are welcomed. Furthermore, although the BS-EWM has been validated using blood sample values obtained with instruments that satisfy internal and external quality controls, different equipment could lead to divergent results (Martens et al., 2021; Lippi et al., 2020); therefore, it would be appropriate to harmonize the results. Another limitation could be the presence of missing values, although the BS-EWM performed adequately in this condition as well, since it used a multiple imputation technique to overcome the problem. Finally, it is important to point out that the BS-EWM risk score should not be used for asymptomatic COVID-19 patients or for the pediatric population. It will be interesting, in the future, to verify whether the BS-EWM could be applied by general practitioners to the unhospitalized population; this would allow the generalizability of the model outside the hospital context to be tested. Although the BS-EWM was developed on a cohort of 2106 patients belonging to the first COVID-19 wave, the model also demonstrated a sensitivity greater than 70% in the early prediction of high-risk patients in the second wave, when in-hospital mortality was 40% lower.
While most published models are based on age and a set of vital (clinical) parameters, the BS-EWM depends, in addition to age, on blood parameters. It is conceivable that blood analytes capture a snapshot at hospital admission signaling a specific bodily reaction to viral infection in terms of hyperinflammation, immune response, and thrombophilia. The other models, on the other hand, are more influenced by the general status of the patient, which may be determined by concomitant and preexisting diseases. According to the International Federation of Clinical Chemistry (IFCC; Bohn et al., 2020), no single biochemical or hematological marker is sufficiently sensitive or specific to predict the outcome of SARS-CoV-2 infection. Notably, the IFCC recommends that the interpretation of laboratory abnormalities be based on groups of analytes (Bohn et al., 2020). In the BS-EWM, three analytes reached a significant value in predicting death: LDH, D-dimer, and the NLR. LDH is a non-specific indicator of tissue damage (Bohn et al., 2020; Liang et al., 2020) that emerges as one of the most consistently elevated markers in patients at higher risk of developing adverse outcomes, probably because COVID-19 infection is characterized by systemic tissue damage. Another key feature of SARS-CoV-2 is the coagulopathy: high levels of D-dimer have been reported to correlate with unfavorable disease progression in several cohorts of patients. The coagulopathy linked to COVID-19 infection is likely to involve a complex interplay between pro-thrombotic and inflammatory factors; thus, the combined analysis of both inflammatory and thrombophilic markers could play an important role in the early identification of patients at higher risk of unfavorable progression (Bohn et al., 2020; Lazzaroni et al., 2021). Finally, lymphopenia has become a hallmark of SARS-CoV-2: it has been demonstrated in almost all symptomatic patients, albeit to varying degrees, and disease severity has been correlated with the degree of lymphocyte count reduction. Direct infection of lymphocytes, which express the coronavirus receptor ACE-2, is among the mechanisms proposed. A poor prognosis is also associated with an elevated neutrophil count combined with lymphopenia, resulting in a high NLR. The increase in granulocytes is the result of the cytokine storm induced by the virus and is responsible for tissue damage (Bonetti et al., 2020; Bohn et al., 2020).
A further remark concerning the blood analytes is that, in the BS-EWM, the thresholds of the single analytes (namely, the points where the functions in Figure 4 become constant and the probability of death no longer increases/decreases) closely overlap with the values recently proposed by other authors (Webb et al., 2020; Caricchio et al., 2021). For completeness, the optimal threshold (computed through the Youden index) of each biomarker for predicting the outcome (dead or alive) is reported in Supplementary file 1e.
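A minimal sketch of the Youden-index threshold computation for a single biomarker (synthetic LDH values standing in for the real data; not the authors' R code):

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
ldh = np.concatenate([rng.normal(250, 60, 300),   # mock survivors
                      rng.normal(400, 90, 75)])   # mock deceased
died = np.concatenate([np.zeros(300), np.ones(75)])

fpr, tpr, thresholds = roc_curve(died, ldh)
j = tpr - fpr                        # Youden index J = sensitivity + specificity - 1
cutoff = thresholds[np.argmax(j)]
print(f"Youden-optimal LDH cut-off: {cutoff:.0f} (mock units)")
```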
The present study is not unique in combining radiological findings with blood analysis. The study by Schalekamp et al. (2021) integrated blood analysis parameters and radiological information derived by grading chest X-rays (0-8 scale points). Unlike in the cited study, in the BS-EWM the radiological score did not reach a high relevance (rel VIM) in predicting high risk. This difference can be explained by the different approaches used to build the models (logistic regression vs. random forests) and by the high degree of correlation of the X-ray score with multiple blood analytes: 'collinearity' could thus have 'stolen' importance from the information provided by imaging. Nevertheless, at admission, the chest X-ray score of patients who subsequently died was significantly higher than that of patients who survived. Furthermore, the chest X-ray score may provide additional stability to the model, playing an important role in the case of missing data in the blood sample counterpart.
Furthermore, the BS-EWM delivers high prediction performance while requiring only a limited number of readily available variables, with easy operability and no extra time or cost, since these analytes are already required for COVID-19 diagnosis and monitoring. An important and pragmatic aspect of the BS-EWM is that the biomarkers employed can be obtained by the emergency laboratory in less than an hour (Garrafa et al., 2020a) and, unlike other biomarkers (Kyriazopoulou et al., 2021), they are inexpensive and frequently used in developing countries as well. It is important to note that the same methodology could be applied to other infections and be practical for triaging patients.
Most laboratories, including small or peripheral ones, can provide results in a short time. At the Spedali Civili of Brescia, the BS-EWM is integrated within the Laboratory Information System (LIS); it works as a web-based calculator and is easy to interpret. The online calculator allows an easy assessment of the EWM, requiring only the entry of the analyte values and the X-ray score, and the score is calculated even if some of the values are missing. Furthermore, in our center the system can be integrated with the electronic health record and the radiology information system, allowing completely automatic data retrieval and entry, without any operator interaction. The calculator provides a risk threshold of 0.5, above which patients are graded as having a potentially high death risk, supporting closer clinical observation or admission to a high-intensity care ward. In patients yielding a low risk (score 0-0.49), the clinicians' decision to allocate them to a low-intensity care ward or to monitoring is further supported.
Finally, the need to regularly update models and closely monitor their performances over time and geographically should be underlined, given the rapidly changing nature of the disease and its management.
Materials and methods
According to the two temporal peaks of incidence of the COVID-19 outbreak in Lombardy, the 2782 patients were divided into two groups: (i) MA, including the 2106 patients admitted during the first wave; and (ii) MD, including the 676 patients of the second wave. Quantitative variables were described using mean (SD), median (IQR), and range (min-max), while categorical variables were reported as counts and percentages. Comparisons between groups were performed using the Wilcoxon rank-sum test for quantitative variables and Fisher's exact test for qualitative variables.
The relationships between the 17 analytes and the Brescia X-ray score were inspected using the Spearman correlation coefficient, ρs, and the results were visualized using a correlation plot (Dancelli et al., 2013; Marziano et al., 2019; Figure 2).
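For illustration, this correlation analysis can be sketched as follows, with synthetic values standing in for the analytes (the authors produced Figure 2 in R):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
crp = rng.gamma(2.0, 30.0, 200)                       # mock CRP values
ldh = 200 + 0.8 * crp + rng.normal(0, 20, 200)        # positively related to CRP
lymph_pct = 30 - 0.05 * crp + rng.normal(0, 3, 200)   # negatively related to CRP

# Pairwise Spearman rank correlations between the three mock analytes
rho, pval = spearmanr(np.column_stack([crp, ldh, lymph_pct]))
print(rho.round(2))    # correlation matrix (ρs)
print(pval.round(3))   # crosses in the plot would mark p > 0.05
```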
To estimate the BS-EWM, the outcome (alive/dead) was modeled using the following covariates: (i) the Brescia X-ray score, (ii) the 17 analytes, and (iii) age. Since most of the covariates analyzed were strongly correlated (multicollinearity; Figure 2) and their relationships with the outcome were non-linear, the BS-EWM was estimated using random forests (Breiman, 2001; Carpita and Vezzoli, 2012), a non-parametric machine-learning method (Vezzoli, 2011; Vezzoli et al., 2017). Moreover, the algorithm is able to handle missing values, which are common in clinical studies: the 'on-the-fly imputation' algorithm (Hong and Lynn, 2020) imputes data while growing the forest, handling interactions and non-linearity in the dataset.
Since the prevalence rate of death in the two waves was different (20% in MA vs. 12% in MD), a strategy to generalize results in unbalanced datasets was applied, adopting a rebalancing method able to improve the detection of patients at high risk of death.
The EWM was developed using the 2106 patients of the first COVID-19 wave (MA 2020), when the in-hospital death prevalence was 20%. Seventy percent of them (1474 patients) were used for training the model and the remainder (632 patients) for validating it. Patients were randomly assigned to the two subgroups, stratified according to the outcome (alive/dead); consequently, both subgroups included the same rate of deaths (20.09%) as the full sample (2106 patients). With such a 'moderate' incidence of death, the dataset was statistically unbalanced. This limitation could have led to a model with unsatisfactory performance in predicting new observations of the minority class, that is, patients with death as the outcome. One approach to address this limitation is to oversample the minority class (deceased patients) before building the predictive model (BS-EWM). SMOTE (Chawla et al., 2002) was chosen: the SMOTE function oversamples the minority class by using bootstrapping and k-nearest neighbors to synthetically create additional observations of that class (dead), and the procedure is combined with under-sampling of the majority class (alive). To determine the optimal number of groups k into which to partition the dataset, a matrix containing the 17 analytes and the Brescia X-ray score was used to compute a hierarchical cluster analysis (Salvi et al., 2019; Codenotti et al., 2016); by means of silhouette analysis, k = 2 was determined as the optimal number of clusters. Hence, a synthetic rebalanced dataset was obtained with an equal number of living and deceased patients (888 + 888). The rebalancing procedure enabled a risk score to be devised, ranging from 0 to 1, with a threshold of 0.5 separating non-severely affected from severely affected patients. Subsequently, the model was validated on the subgroup of 632 first-wave patients excluded from the training set. A further test of the EWM was conducted on the 676 COVID-19 patients of the second wave (Wynants et al., 2020).
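The rebalancing-plus-random-forest strategy described above can be sketched end to end as follows, using Python's imbalanced-learn and scikit-learn in place of the authors' R implementation; the data are synthetic, and everything except the 70/30 stratified split, the SMOTE rebalancing, and the 0.5 risk threshold is an assumption of this sketch:

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(2106, 19))              # stand-in for age + 17 analytes + X-ray score
logit = 1.2 * X[:, 0] + 0.8 * X[:, 1] - 1.6  # synthetic risk structure
y = (rng.random(2106) < 1 / (1 + np.exp(-logit))).astype(int)  # ~20% "deaths"

# 70/30 split stratified on the outcome, as described above
X_tr, X_val, y_tr, y_val = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)

# SMOTE oversampling of the minority class (deceased) to a 50/50 balance
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

# Random forest (the authors grew 10,000 trees; 1,000 keeps this sketch fast)
rf = RandomForestClassifier(n_estimators=1000, n_jobs=-1,
                            random_state=0).fit(X_bal, y_bal)

risk = rf.predict_proba(X_val)[:, 1]         # death-risk score in [0, 1]
print("validation AUC:", round(roc_auc_score(y_val, risk), 2))
print("patients graded high risk (score > 0.5):", int((risk > 0.5).sum()))
```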
The rel VIM (Carpita and Vezzoli, 2012; Doglietto et al., 2020b) and the PDPs (Friedman, 2001; Doglietto et al., 2020a) were extracted from the model for a better understanding of the relationship between the outcome and the covariates.
The predictions extracted from the random forests classification were interpreted as in-hospital death probability conditional on the combination of the values of analytes, Brescia X-ray score, and age in COVID-19 patients at admission to the ED.
The BS-EWM performance was evaluated by the AUC of the ROC curve. The robustness of the model was assessed by comparison with other models, running a GBM (a machine-learning competitor to random forests) and a logistic regression and computing the same metrics.
The BS-EWM score is available for use online (https://bdbiomed.shinyapps.io/covid19score). In the SCBH, it is integrated within the LIS, returning the death-risk score directly in the medical report.
All the analyses were performed in R, version 4.0.0 (R Development Core Team, 2020). The code is available at https://github.com/biostatUniBS/BS_EWS (copy archived at swh:1:rev:7416ba71075402e6a0ed997e7aa6a527e93247b2). The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Data availability
We are unable to share the full dataset, as it contains sensitive personal data collected during the pandemic at the Spedali Civili di Brescia. Interested researchers should contact the authors with any query related to data sharing and submit a project proposal. Once the goal of the study and the data needed have been defined, the authors will submit the proposed collaboration to the IRB of the Spedali Civili di Brescia to receive approval to access a deidentified dataset. Please note that, always following approval by the IRB of the Spedali Civili di Brescia, other patient-related information beyond that studied in this paper can also be acquired. In any case, upon request to the authors, it will be possible to share processed versions of the dataset (e.g., an Excel sheet with the numbers used to plot the graphs and charts of the manuscript). All code used to analyze the data can be found on GitHub at https://github.com/biostatUniBS/BS_EWS (copy archived at swh:1:rev:7416ba71075402e6a0ed997e7aa6a527e93247b2).
Numerical study of the greenhouse solar drying of olive mill wastewater under different conditions
The aim of this work is to develop a thermal model of the olive mill wastewater drying process in a greenhouse solar dryer. A configuration was proposed and simulated using the commercial software COMSOL Multiphysics in order to solve the conservation equations governing the problem. The resulting simulations are used to evaluate the temperature, velocity, and vapor mass fraction distributions after hours of sunshine and to quantify the drying process. The influence of the greenhouse effect on the drying kinetics is highlighted by comparison with open sun-drying results. The effects of some geometric characteristics of the greenhouse and of the external meteorological conditions are studied.
Introduction
Olive oil production is widespread throughout the Mediterranean region and in the Middle East. Over the last few years, Tunisia has become the world's second-largest producer 1 and exporter 2,3 of olive oil after Spain. Thus, the olive oil industry plays a key role in the country's economic, environmental, and social life. However, olive oil extraction processes, whether traditional press extraction or continuous three-phase decanter processes, generate a large quantity of an aqueous waste called ''olive mill wastewater'' (OMWW). This effluent includes the water used in the different stages of the oil extraction process, olive juice (or olive vegetation water), and small amounts of unrecoverable oil and fine olive pulp particles.
OMWW can be applied to soil as an organic fertilizer thanks to its high nutrient concentration, mainly potassium, magnesium, and phosphate salts. 4,5 However, negative effects are related to its high acidity and its large organic load, mainly phytotoxic polyphenolic compounds, 6,7 with high chemical oxygen demand (COD; up to 220 g/L) and biochemical oxygen demand (BOD; up to 100 g/L) values. 8 Many authors [9][10][11] have studied its discharge without pretreatment and showed that it may cause severe environmental problems, such as surface water and groundwater contamination, and can affect the whole ecosystem. Therefore, pretreatment of this hazardous waste is considered a key stage in producing a manageable end product.
Several treatment processes using physical (centrifugation, ultrafiltration, 12 etc.), chemical (flocculation, 13 chemical oxidation, 14 etc.), thermal (evaporation, 15 distillation, 16 etc.), biological (aerobic, 17 anaerobic degradation, 18 etc.), and combined technologies have been suggested. 19,20 However, most of these techniques present technical and financial limitations that make their implementation under favorable conditions difficult. Particular interest is given to thermal treatment because of OMWW's high moisture content, which decreases its calorific value and affects its valorization. 21 Currently, evaporation is the most common treatment process, whereby OMWW is discharged into storage ponds and evaporated in the open air. 15,22 Despite its low investment cost and the favorable climatic conditions in Tunisia and other olive-growing countries, this method leads to many problems, such as bad odors, insect proliferation, and infiltration. Solar drying seems to be an effective technique, as it uses the clean and free energy of the sun. [23][24][25] Only a few studies have focused on the solar drying of OMWW at low temperature. Potoglou et al. 16 showed the applicability of a solar distiller, at laboratory scale, for the drying of OMWW and the removal of COD and other chemicals from it.
The experimental results were employed as inputs in a mathematical model for the same purpose. Sklavos et al. 26 studied simultaneous OMWW solar drying and the recovery of selected phenolic compounds. Two experiments were carried out to show the role of thermal insulation in increasing the performance of the solar distillation unit, and the qualitative and quantitative characteristics of the distillate were investigated. Chakir et al. 27 compared the drying performance of open sun and greenhouse drying (under natural and forced convection) of OMWW. Greenhouse drying showed its efficiency in making this waste ready for valorization and utilization as a fuel. Recently, Galliou et al. 28 examined a novel treatment for OMWW combining greenhouse solar drying and composting. During solar drying, swine manure was used to avoid crusting on the sludge surface and to provide sufficient porosity. The resulting OMWW sludge was rich in nutrients and organic matter but still contained a significant quantity of phenols. To detoxify it, grape marc was used as a bulking agent during composting in order to obtain a valuable product for agricultural use.
Modeling a solar drying process is a complex problem involving simultaneous heat and mass transfer. Several studies have attempted to model this process. [29][30][31][32] Because of its potential efficiency in process optimization, the development of adapted numerical simulations plays a significant role. 33 This article presents a numerical investigation of a simple and low-cost technique for the treatment of OMWW, which consists of greenhouse solar drying. A model was formulated to simulate temperature, velocity, vapor mass fraction, and consequently the evaporated liquid quantity. Then, different configurations were proposed and simulated to optimize the drying process.
The geometry of the computational area
Coupled heat and mass transfer problems require considerable computation time. Therefore, the modeled geometry is simplified as far as possible using physical justification. The proposed geometry is assumed to be two-dimensional (2D) and symmetric about the axis projecting from the center of the curve and perpendicular to the liquid surface. In this way, the computational process is considerably accelerated, allowing a larger number of cases to be studied.
Mesh
The problem equations were numerically solved by means of a finite element method in the commercial software COMSOL Multiphysics. An unstructured triangular mesh was generated. High grid densities are applied in the regions where most of the heat and mass transfer phenomena are expected to occur. Hence, the mesh is locally refined at the air-OMWW interface so that the evaporated liquid mass and the transferred heat fluxes are predicted more accurately.
Sensitivity tests were performed and the computations were repeated with four mesh grids of different element densities (Table 1). The total evaporated mass obtained with the tested mesh grids, represented in Figure 2, shows that mesh no. 3, illustrated in Figure 3, gives good accuracy.
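As a hedged illustration of such a mesh-independence check, the short Python sketch below selects the coarsest grid whose total evaporated mass lies within 1% of the finest-grid result; the numerical values are hypothetical stand-ins, not the actual data of Table 1.

```python
# Illustrative mesh-independence check; the mass values are hypothetical
# placeholders for the COMSOL results of Table 1 / Figure 2.

def select_mesh(results, tol=0.01):
    """Return the index of the coarsest mesh whose total evaporated mass
    differs from the finest-mesh value by less than `tol` (relative)."""
    reference = results[-1]  # finest mesh taken as the reference solution
    for i, value in enumerate(results):
        if abs(value - reference) / abs(reference) < tol:
            return i
    return len(results) - 1

# Meshes ordered from coarsest (no. 1) to finest (no. 4)
evaporated_mass = [4.62, 4.81, 4.88, 4.89]  # kg/m, hypothetical values
best = select_mesh(evaporated_mass)
print(f"Mesh no. {best + 1} is the coarsest grid within 1% of the finest.")
```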
Method
The heat and mass transfer equations were coupled using the damped Newton method. The backward differentiation formula (BDF) solver was chosen for the time stepping.
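For readers unfamiliar with BDF time stepping, the following minimal Python sketch integrates a stiff toy system with SciPy's BDF implementation. It only illustrates the class of solver named above; it does not reproduce COMSOL's damped-Newton/BDF internals, and the toy right-hand side is an assumption made for the example.

```python
# Minimal illustration of BDF time stepping on a stiff toy problem.
import numpy as np
from scipy.integrate import solve_ivp

def stiff_rhs(t, y):
    # Two coupled variables with very different time scales, loosely
    # mimicking fast interfacial mass transfer coupled to slow heating.
    return [-1000.0 * (y[0] - np.cos(t)), y[0] - y[1]]

sol = solve_ivp(stiff_rhs, (0.0, 10.0), [0.0, 0.0], method="BDF", rtol=1e-6)
print(f"{sol.t.size} accepted steps; y(10) = {sol.y[:, -1]}")
```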
In the first part of this study, the proposed dryer pond was filled with OMWW. A typical hot day in July was considered, with an average temperature of 304 K and solar radiation reaching 850 W/m², as shown in Figure 4. The temperature, the velocity, the vapor mass fraction, and the evaporated liquid mass were simulated. In the second part, a parametric study was introduced: all the parameters are fixed and a single input parameter, either a greenhouse geometric characteristic or an external meteorological condition, is varied. The simulations were run and the evaporated liquid mass was plotted as a function of each parameter to emphasize its effect on the thermal performance of solar and sun drying.
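A one-at-a-time parametric study of this kind can be organized as in the sketch below; `run_simulation` is a hypothetical surrogate for a COMSOL batch run (the real solver is not called here), and the baseline values and sweep ranges are assumptions chosen for illustration.

```python
# One-at-a-time parametric sweep driver: hold all inputs at baseline and
# vary a single parameter per run, recording the evaporated liquid mass.

baseline = {"pond_depth": 0.1, "pond_width": 0.2,
            "greenhouse_height": 0.1, "outlet_size": 0.05, "d": 0.03}

sweeps = {"pond_depth": [0.05, 0.1, 0.2, 0.3, 0.4, 0.5],
          "greenhouse_height": [0.1, 0.2, 0.3]}

def run_simulation(params):
    """Hypothetical surrogate for a COMSOL run; returns a mock total
    evaporated liquid mass (kg/m) so that the driver is executable."""
    return 5.0 * (baseline["pond_depth"] / params["pond_depth"]) ** 0.5 \
               * (baseline["greenhouse_height"] / params["greenhouse_height"]) ** 0.5

results = {}
for name, values in sweeps.items():
    for value in values:
        params = dict(baseline, **{name: value})
        results[(name, value)] = run_simulation(params)

for key, mass in sorted(results.items()):
    print(key, f"{mass:.2f} kg/m")
```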
The meteorological data were collected and calculated using data for Tunisian meteorological conditions, specifically for the city of Sfax, which is considered one of the largest olive production areas (latitude: 34.44°N, longitude: 10.46°E) 34,35 (Figure 4).
Material
Some OMWW properties (viscosity, thermal conductivity, heat capacity, etc.) are not available in the literature because of the wide variety of OMWW compositions, which depend on the extraction process and on agricultural specificities. Since previous studies 36 have indicated that OMWW has about 95% moisture content, the properties of water are used. The physical properties of humid air are regarded as dependent on temperature and composition. 37
Mathematical modeling
Conservation equations
The drying process involves coupled heat and mass transfer. To model it, the conservation equations must be solved for both the air and the OMWW. In order to simplify the problem, some assumptions are made: the dried product (OMWW) is Newtonian and incompressible; the humid air is an ideal gas; the air-OMWW interface is a black surface at local thermodynamic equilibrium; all solar radiation is transmitted to the OMWW; 38 and the air inlet and outlet are black and diffuse surfaces.
In the OMWW equations, the Boussinesq approximation was adopted.
Hence, the mass, momentum, energy, and species diffusion conservation equations are written in a 2D configuration for the air and for the OMWW. The mass transfer can be considered as diffusion-controlled; the interfacial evaporative mass flux is therefore assessed using Fick's law. The Charles-Edwards and Acock model is adopted for the solar radiation flux density Q sol for the Nth day at a given time of day. 39 The radiative flux Q rad is calculated using the surface-to-surface radiation model of COMSOL Multiphysics, taking into account the radiative assumptions given in Figure 1. The view factor is automatically determined for each of the geometry facets using a ''hemicube'' approach.
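The equations themselves did not survive extraction; in the standard 2D incompressible form that the stated assumptions (Newtonian liquid, ideal-gas humid air, Boussinesq buoyancy) imply, they can be written as follows, with the symbol choices here being ours rather than necessarily the paper's:

$$\nabla\cdot\mathbf{u}=0$$

$$\rho_0\left(\frac{\partial\mathbf{u}}{\partial t}+(\mathbf{u}\cdot\nabla)\mathbf{u}\right)=-\nabla p+\mu\nabla^{2}\mathbf{u}-\rho_0\,\beta\,(T-T_0)\,\mathbf{g}$$

$$\rho_0 c_p\left(\frac{\partial T}{\partial t}+\mathbf{u}\cdot\nabla T\right)=\nabla\cdot(k\,\nabla T)$$

$$\frac{\partial \omega_v}{\partial t}+\mathbf{u}\cdot\nabla\omega_v=\nabla\cdot(D\,\nabla\omega_v)$$

with the interfacial evaporative mass flux from Fick's law,

$$\dot{m}''=-\rho D\left.\frac{\partial \omega_v}{\partial n}\right|_{i},$$

where $\mathbf{u}$ is the velocity, $T$ the temperature, $\omega_v$ the vapor mass fraction, and $n$ the direction normal to the interface.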
Initial and boundary conditions
The initial hydrodynamic, thermal, and mass transfer conditions can be written as follows. Initially, the humid air is assumed to be saturated. In fact, several small evaporation ponds are superimposed next to each other; the air absorbs moisture from the pond and becomes saturated at the beginning of the day. Once this air is heated by solar radiation, its saturation level decreases and the air can take up additional moisture. The humid air mass fraction can therefore be expressed in terms of P and P vs, which denote, respectively, the reference pressure and the equilibrium vapor pressure at the interfacial temperature T i, the latter given by Vachon et al. 40 as Log P vs (T i) = 28.59 - 8.2 Log(T i). The relevant boundary conditions associated with the present problem are imposed at the inlet, at the outlet, at the greenhouse upper surface, at the pond walls, and at the OMWW-air interface.
Validation of the numerical model
In order to validate our numerical model, a Rayleigh-Bénard square cavity of side L was considered. The vertical walls were insulated and the horizontal ones were isothermal: the upper wall was kept at a lower temperature (T c), whereas the bottom one was at a higher temperature (T h). The profile of the local Nusselt number Nu, calculated at the hot wall for a Rayleigh number of 10^5 (Figure 5), shows that our results are in good agreement with those published by Ouertatani et al. 41 To validate our radiative model, a rectangular enclosure (0.2 m × 0.1 m) filled with a non-scattering medium and having black walls, presented in Figure 6, was examined. The bottom wall was kept at a constant higher temperature (T h), while the other walls were maintained isothermally at a lower temperature (T c). Figure 7 illustrates the view factors of the different surfaces, determined analytically using equation (20), which are in good agreement with those obtained by the numerical code.
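As a small worked sketch, the function below evaluates the saturation-pressure correlation quoted above exactly as printed and combines it with the standard ideal-gas mixing rule for the saturation mass fraction. The mixing rule is our assumption (the paper's own mass-fraction expression did not survive extraction), and the correlation's pressure units follow the original reference and are not re-verified here.

```python
# Saturation vapor pressure from the correlation quoted in the text and a
# standard ideal-gas estimate of the saturated humid-air mass fraction.
# The mixing rule below is an assumption; the paper's own formula for the
# mass fraction is not reproduced in the extracted text.
import math

M_V, M_A = 18.015e-3, 28.97e-3  # molar masses of vapor and dry air (kg/mol)

def p_vs(T_i):
    """Log P_vs(T_i) = 28.59 - 8.2 Log(T_i), as printed; the pressure units
    are those of the original reference (Vachon et al.)."""
    return 10.0 ** (28.59 - 8.2 * math.log10(T_i))

def saturation_mass_fraction(p_v, P=101325.0):
    """Vapor mass fraction of saturated humid air at total pressure P (Pa)."""
    return (M_V * p_v) / (M_V * p_v + M_A * (P - p_v))

# Example with a representative saturation pressure near 304 K (~4.5 kPa):
print(round(saturation_mass_fraction(4500.0), 4))  # ~0.0281
```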
Results and discussions
Drying kinetics inside the greenhouse dryer
The geometry dimensions used in this part are given in Table 2.
Temperature
The variation of the average OMWW and air-OMWW interface temperatures under the open sun and inside the greenhouse solar dryer as a function of time is shown in Figure 8. The solar radiation was trapped and stored in the greenhouse dryer. Thus, the average OMWW and air-OMWW interface temperatures there, which reached 320.5 and 323.7 K, respectively, were higher than during open sun drying, where they reached 319 and 322 K, respectively.
The average OMWW and air-OMWW interface temperatures increased until 16:00 and 14:00, respectively, owing to the increase of solar radiation and thus of the energy stored as sensible heat, and decreased for the rest of the day as the radiation declined. The phase shift between the two temperature maxima can be explained by the fact that the OMWW has a higher thermal inertia than the air, which allows it to store heat and restore it gradually owing to its low thermal diffusivity.
From Figures 8 and 9, it can be noticed that for both drying processes and before 14:00, the interfacial temperature was higher than the average OMWW temperature; after 14:00, the situation was reversed and the average OMWW temperature became higher than the average interfacial temperature owing to the decrease of solar radiation. The cooling of the OMWW surface increases its density and forces it to sink, to be replaced by warmer OMWW coming from lower levels. This is why a natural convection movement appeared, as depicted in Figures 9(c) and 11(c).
During greenhouse solar drying, all the temperatures were close to 304 K at the beginning of the day. Then, as shown in Figure 9(a) and (b), the OMWW progressively warmed up following the principle of thermal stratification. The interfacial surface had the highest temperature, as it was the most exposed to solar radiation. As the radiation declined after noon, the OMWW temperature also decreased, as illustrated in Figure 9(c). Figure 9 also shows a significant difference between the temperatures of the incoming and exhaust air of the dryer. The exhaust air still has a high temperature and thus sufficient drying potential, suggesting its recirculation for more efficient utilization of the solar dryer.
Velocity
The rise and fall of solar radiation and air temperature affected the natural convection during the evaporation process; the air velocity therefore follows the solar radiation and air temperature profiles. For the average OMWW velocity, stagnation can be noticed until 16:30. Once the solar radiation, as well as the interfacial OMWW temperature, decreased, the OMWW velocity increased, as seen in Figure 11(c). In fact, the increase of the OMWW temperature relative to the interfacial temperature, noted in the previous section, led to a drop in its density and thus in the relative buoyancy.
As a result, a bulk movement induced by buoyant forces appeared: the hotter liquid floated above the colder one, while the colder liquid sank to the pond bottom.
The air velocity distributions and the streamlines are presented in Figure 11 for different hours of the day. The air entered from the greenhouse inlet, moved horizontally along the pond width and then vertically upward before exiting. The high-velocity regions were close to the greenhouse outlet, where the air leaves, whereas the remaining domain displayed a low velocity magnitude.
A recirculation zone can be seen at the top of the greenhouse due to the low-pressure area generated by the flow directed toward the outside of the greenhouse.
Vapor mass fraction
The drying process is characterized by a gradual reduction of the product moisture content over time. Since the kinetic energy of a molecule is proportional to its temperature, evaporation proceeds more quickly and easily at higher temperatures. Thus, when the liquid molecules located at the surface have enough energy, they escape from the liquid to the air; the vapor mass fraction in the air therefore increased during the first half of the day, as shown in Figure 12(a) and (b). In the afternoon, the solar radiation diminished, and the vapor mass fraction diminished as well, as indicated in Figure 12(c). From Figures 8 and 12, it can thus be confirmed that high OMWW and air temperatures provide a greater saturation deficit, which is one of the driving forces of the diffusion process. The highest vapor mass fraction occurred at the OMWW-air interface.
Evaporated liquid mass
The mass of the evaporated liquid was computed from equation (10) for five consecutive typical days and is presented in Figure 13. For each day, the profile of the evaporated liquid mass peaks at about 14:00. This can be explained by the increase of solar radiation until midday and its decrease for the rest of the day. After 48 h, a periodic regime is established.
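The exact form of equation (10) is not reproduced in the extracted text; conceptually, the daily evaporated mass is the time integral of the interfacial evaporative mass flux over the pond surface, as in the sketch below, where the flux profile, its magnitude, and the pond width are hypothetical.

```python
# Accumulating the evaporated liquid mass as the time integral of the
# evaporative mass flux over the pond surface (per metre of pond length,
# since the model is 2D). The flux samples are hypothetical.
import numpy as np

t = np.linspace(0.0, 24 * 3600.0, 97)  # one day, sampled every 15 min (s)
hours = t / 3600.0
# Hypothetical flux (kg m^-2 s^-1): zero at night, peaking near midday.
flux = 1e-4 * np.clip(np.sin(np.pi * (hours - 6.0) / 12.0), 0.0, None)
pond_width = 0.4  # m

dt = np.diff(t)
evaporated = np.sum(0.5 * (flux[1:] + flux[:-1]) * dt) * pond_width
print(f"~{evaporated:.2f} kg evaporated per metre of pond length in a day")
```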
During the night, the solar energy and thus the dryer thermal efficiency are very low compared with the values obtained around 12:00; indeed, the solar drying thermal efficiency is high when the drying energy is high. Although the solar radiation disappeared from about 19:00, the evaporation process continued, but at a slower pace than during the diurnal period, since the liquid was still hot.
As expected, the evaporated liquid mass was higher in the case of the greenhouse solar drying compared to the open sun drying.
Radiative flux and evaporative flux at the liquid-vapor interface
The evolution of the radiative and evaporative fluxes close to the liquid surface is presented in Figures 14 and 15. Both fluxes follow the solar radiation absorbed by the liquid surface. They were larger at the greenhouse inlet because, in this zone, the temperature is low, which reduces the radiative heat losses, while the concentration gradient is large, which promotes the evaporation process.
However, further investigations are required to determine design parameters and optimum conditions allowing better performance of the solar drying process.
Parametric study
The total evaporated liquid mass is an important parameter to describe the drying kinetics of the evaporation process. That is why it will be used in this parametric study.
Effect of the pond depth
To determine the optimum OMWW depth, at which the drying process achieves its maximum performance, six simulations with six pond depths were carried out during July for both drying processes.
As illustrated in Figure 16, there is a significant difference between the cases, and the maximum total evaporated liquid mass was obtained at a pond depth of 0.1 m.
Consequently, the pond depth plays a key role in the liquid's ability to store energy. A shallow liquid body is more sensitive to climatic variations than a deep one. It follows that, during the typical summer day, the shallow liquid warmed up rapidly owing to its lower thermal inertia and smaller liquid volume. Hence, the concentration gradient increased and the evaporation process was promoted.
Effect of the pond width
In this section, the effect of using one pond or several separate ponds for the same quantity of OMWW is discussed. From Figure 17, it is clear that, for both drying processes, a significant increase in the total evaporated liquid quantity can be achieved by using two ponds of 0.2 m width instead of one pond of 0.4 m width. The same is observed when using three ponds of 0.2 m width instead of one pond of 0.6 m width. Accordingly, using separate ponds can enhance the evaporation process.
Effect of the greenhouse height
The greenhouse height can also be an important parameter of the solar dryer. Different models with different heights and otherwise fixed dimensions were therefore simulated. From Figure 18, it can be noted that the lowest height (0.1 m) gave the highest evaporated liquid mass. With a low height, the hot air can exit easily and is renewed by fresh air with lower moisture content. Therefore, a low height can optimize the drying process.
Another point is that a decrease in the greenhouse height is accompanied by an increase in the view factor between the greenhouse and the liquid surface. In that way, the radiation losses are reduced and the temperature, as well as the concentration gradient, increases. Consequently, the evaporation process can be greatly helped by optimizing the greenhouse height.
Effect of the greenhouse outlet
The effect of the greenhouse outlet was also studied by varying its size. Figure 19 illustrates the variation of the total evaporated liquid mass during the day for various outlets. Only a slight difference is noted between the cases, caused by the radiation view factor. Hence, using a small air outlet provides only a marginal advantage.
Effect of the d distance
The last geometrical parameter studied is the distance d (Figure 1). From Figure 20, the slight difference between the cases can be explained by the dependence of the radiation view factor on the distance d. The maximum total evaporated liquid quantity was obtained for d = 0.03 m. Therefore, using the optimum distance d can bring some benefit to the performance of the dryer.
Effect of the drying season
The numerical simulation was carried out for four typical days of the year, using the dimensions of Table 2. The spring (20 March) and autumn (23 September) equinoxes and the summer (21 June) and winter (22 December) solstices are considered in the simulation. The solar radiation and the average temperature corresponding to each day are given in Figures 4 and 22, respectively.
As shown in Figure 21, the greatest evaporation was obtained for the day in the month of July, which received the most solar energy during the longest day with the highest ambient temperature. Hence, the drying period has an incontestable effect on the kinetics.
For a complete picture, the greenhouse drying and open sun drying efficiencies were studied for each month of the year.
For clear sky conditions, the monthly average ambient air temperature and the maximum solar radiation over the year are shown in Figure 22. The variation of the total evaporated liquid mass is presented in Figure 23. The evaporation appeared to be tightly dependent on seasonal variations: the total evaporated liquid mass varied in step with the average ambient air temperature and the maximum solar radiation. After 24 h of drying, it increased from 0.4 and 0.5 kg/m in January, respectively under the open sun and within the greenhouse dryer, up to 4.4 and 5.05 kg/m in July due to the increase of ambient air temperature and solar radiation, and then dwindled to 0.43 and 0.51 kg/m, respectively, in December. Indeed, the highest total evaporated liquid mass values were recorded in July, while the lowest values were recorded in January. The significant values of evaporated liquid mass registered during hot weather suggest a better performance of OMWW solar drying during this period. Accordingly, the water quantity that can be evaporated from the liquid surface depends on the amount of solar energy: the higher this energy relative to the latent heat of vaporization, the more water molecules can overcome the OMWW intermolecular forces and escape to the air.
Furthermore, this energy varies not only from one season to another but also from one place to another depending on the geographical location.
Conclusion
The developed model emphasizes the important physical phenomena in the solar greenhouse drying process and its efficiency compared to open sun drying. The main results can be summarized as follows. The predicted temperature, velocity, vapor mass fraction, and evaporated liquid mass are in good agreement with previous research and show the advantage of using a greenhouse solar dryer. The model is used to study the effect of design changes and external climatic conditions on the dryer performance. Shallower, separate ponds in a low-height greenhouse during a summer day can enhance the performance of a greenhouse dryer. With optimum parameters, solar greenhouse drying can be satisfactory and competitive with open sun drying, with an important drying capacity and low investment cost.
"Engineering"
] |
What Drives Energy Efficiency in Africa? Insights from 12 Selected Countries Using Incremental Decomposition Analysis
This paper examines why Africa, and Sub-Saharan Africa (SSA) in particular, has some of the worst energy efficiency indicators in the world. It examines the relationship between total primary energy supply (TPES), final energy consumption, and transmission and distribution (T&D) losses on the continent. We apply the Sun-Shapley incremental decomposition method of the logarithmic mean Divisia index (LMDI) to twelve (12) African countries using data from 2000 to 2016 to decompose TPES into the effects of changes in final energy consumption (FEC), population (POP), carbon dioxide (CO2) emissions, and economic activity measured by gross domestic product (GDP), and their impact on energy efficiency. The method provides a precise decomposition analysis and incremental results that can be added to study long-run impacts without any information missing in between. The findings show that the study countries have worsening energy efficiency indicators, with energy intensity (EI) as high as 55%, coupled with an inefficient transformation of primary energy supply into final consumption, culminating in significant system losses. It was further discovered that countries that have larger proportions of renewable sources in their energy mix have lower transmission and distribution losses. This study serves as a guide to the policy discourse regarding the energy efficiency situation in Africa.
Introduction
Energy availability and its use are indispensable components of the modern economy. However, energy resources must be managed well to make them continuously available to users, hence the need to improve energy efficiency and reduce energy intensity. Energy efficiency, in this context, refers to using less energy to undertake the same task, thereby eliminating energy waste, that is, doing more with less (Islam and Hasanuzzaman, 2020). Energy intensity is a measure of the energy efficiency of a nation's economy, calculated as units of energy used per unit of GDP. It indicates how much energy is used to produce one unit of economic output.
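As a simple worked example of the energy intensity ratio just defined (with illustrative figures, not the study's data):

```python
# Energy intensity as energy supplied per unit of economic output.
tpes_mtoe = 50.0    # total primary energy supply (Mtoe), illustrative
gdp_busd = 120.0    # GDP (billion USD), illustrative

energy_intensity = tpes_mtoe / gdp_busd  # Mtoe per billion USD
print(f"Energy intensity: {energy_intensity:.3f} Mtoe per billion USD")
```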
Since the 1970s, various countries across the world, developed and developing alike, have adopted energy efficiency measures and initiatives as part of their energy and climate policies, primarily to reduce CO2 emissions and improve energy security (Vehmas et al., 2018; Chang and Huang, 2020). More recently, improving energy efficiency and reducing energy intensity has become one of the important milestones under climate policy targets such as the Paris Agreement, which seeks to limit global warming to under a 2°C increase. For example, International Energy Agency (IEA) estimates show that a combination of the right energy efficiency policies has the potential to deliver more than 40% of the emissions reductions needed to reach the goals of the Paris Agreement (IEA, 2018).
Since 2010, several African countries have been increasing power generation and expanding access to modern energy services to reduce poverty and inequality. Nevertheless, only 48% of the population had access to electricity in 2019, up from 33% in 2010 (IEA, 2020). In contrast, 87% of the population of developing economies had access to electricity in 2019, compared with 74% in 2010. This means that around 580 million people, or about 50% of the population, mostly in Sub-Saharan Africa (SSA), did not have access to electricity (IEA, 2020; SEforALL, 2020). A remaining challenge is how to provide modern energy services to the underserved population in climate-smart ways. This is where energy efficiency could create important win-win opportunities.
Nevertheless, energy efficiency in Africa remains among the lowest in the world. Africa has the lowest per capita energy consumption in the world despite accounting for 17% of the global population (Atlas Africa, 2017; IEA, 2019). In general, Africa accounts for 3.3% of global primary energy consumption, 6% of the world's energy demand, and less than 3% of global electricity demand (RES4A, 2020). Electricity consumption on the continent is also skewed, as the northern and southern parts of Africa account for 71% of Africa's 708 terawatt-hours (TWh) of total consumption as of 2019 (IEA, 2019). SSA, excluding South Africa, accounts for only 29% of total electricity consumption (RES4A, 2020; IEA, 2019). Thus, amid this scarcity, it is prudent that Africa manage and expand its current electricity supply to ensure availability to all.
On the other hand, SSA remains one of the world's most energy-intensive regions (IRENA, 2020; IEA, 2020). That means SSA uses more energy to produce a unit of GDP than other regions of the world. This is compounded by technical and nontechnical factors such as high transmission and distribution (T&D) losses, low cost recovery, illegal connections, and several hidden costs in the form of tariff under-recovery and the poor financial ratings of the power utilities, which leave them unable to access debt financing (Trimble, 2016). For example, T&D losses exceed 50% in certain countries, putting most power utilities in a precarious financial situation. This explains why, in some instances, over 50% of the electricity produced remains unaccounted for in most SSA countries (Trimble, 2016).
While diversification of the energy matrix in Africa is ongoing, it remains slow. Renewables are forecast to increase from 1% currently to 16% in 2040, with electricity demand nearly tripling by 2040 (BP Africa, 2019). Furthermore, crude oil will still be the dominant fuel, forming 34% of fuel consumption by 2040, a reduction from 44% today, with renewables overtaking coal to become the second-largest source of power generation in 2040. Africa's energy intensity is forecast to fall by 12% by 2040 (BP Africa, 2019). The share of electricity in final energy consumption will increase to 23% by 2030 (McKinsey & Company, 2019).
Given this background, this paper seeks to examine why Africa, and SSA in particular, has some of the worst energy efficiency indicators in the world. It also examines the relationship between total primary energy supply (TPES), final energy consumption (FEC), and transmission and distribution (T&D) losses on the continent. To do this, the study decomposes TPES into the effects of changes in FEC, population (POP), carbon dioxide (CO2) emissions, and economic activity measured by gross domestic product (GDP). We use the Sun-Shapley incremental decomposition method of the logarithmic mean Divisia index (LMDI) to find answers to these nagging questions on the continent. The Sun-Shapley incremental decomposition method of LMDI is applied to twelve (12) countries using data from 2000 to 2016 to decompose and study changes in TPES, FEC, POP, CO2, and GDP and their impact on energy efficiency. In effect, the paper analyses these macroeconomic variables and their impact on the energy efficiency performance of selected African countries, that is, the efficiency of the total primary energy supply (TPES) transformation process.
This work adds to the literature in the following ways. Firstly, it provides further insights into the trajectory of energy efficiency trends on the continent and makes recommendations on how to improve energy efficiency through policy design and formulation. Secondly, this work is unique in that, to the best of our knowledge, it is the only work that uses the Sun-Shapley approach of the LMDI family to analyze energy efficiency in Africa, looking at TPES. Even though studies have been done on Africa at the aggregate level, such as Pappi et al. (2019), and at the individual country level, as done by Inglesi-Lotz and Blignaut (2011) and Olanrewaju (2019), the Sun-Shapley approach has not been applied in Africa.
The rest of the paper is organized as follows: Section Two reviews the relevant literature. This is followed by Section Three, which describes the applied experimental design, including methods. Section Four presents the results. Section Five summarizes the paper by providing conclusions about the main findings, implications for policymakers, and further research.
State of Global Energy Efficiency
Energy efficiency is one of the cornerstone policies needed for the world to move to a sustainable consumption pattern and ensure energy security. Energy efficiency has manifold economic, social, and environmental benefits when its implementation is achieved. Energy efficiency is a general term that has been defined by many authors. Patterson (1996) defined energy efficiency as using less energy to produce the same quantity of useful output of a good or service; in other words, it is the ratio of the energy input of a process to its useful output. Patterson's definition highlights the importance of using less energy to produce a large quantity of output; it does not mean using less energy to have less output. Furthermore, the United States Energy Information Administration (EIA) defines energy efficiency as using a technology that requires less energy to perform the same function, for instance, using a comparable light-emitting diode (LED) bulb or a compact fluorescent bulb versus an incandescent light bulb to produce the same amount of light (EIA, 2019). The EIA contrasts this with energy conservation, that is, any behavior that leads to less energy use: turning off the light when no one is in the room is a behavioral act to conserve energy (EIA, 2019).
Globally, energy efficiency has seen significant improvement. According to the IEA, primary energy intensity improved by about 1.2% in 2018, meaning that slightly less energy was needed to produce each unit of GDP than in 2017. Besides this, the IEA Sustainable Recovery plan projects 40% of its one trillion dollar budget for annual expenditure to go to energy efficiency across the economy's various sectors. This will result in efficiency improvements for 20 million households yearly, the purchase of 350 million modern and efficient appliances, and improvements to industrial processes (IEA, 2020), together with a significant reduction in energy costs due to the adopted energy efficiency measures (IEA, 2020). Furthermore, Sustainable Development Goal (SDG) 7.3 seeks to double the rate of energy efficiency improvement by 2030 relative to the 1990-2010 trend, which was about 1.3%. To achieve SDG 7.3, the world needs a 3% annual reduction from now to 2030, which is daunting (IRENA, 2020).
The IEA (2019) contends that rapid improvement in energy efficiency would be the catalyst for transitioning the world towards a sustainable development future. This would be underpinned by a drastic transformation of the energy sector at a scale able to catapult many developing regions, including Africa, to a sustainable future. It encompasses a paradigm shift in the ways energy is consumed and supplied, coupled with removing barriers to the deployment of energy efficiency such as financial barriers, information asymmetry, and regulatory costs (IEA, 2019).
Since 2010, several countries and regions have increasingly adopted policy and regulatory frameworks to scale up energy efficiency (World Bank, 2020). This is shown by the RISE (Regulatory Indicators for Sustainable Energy) sub-indicator on energy efficiency. As of 2019, about 70% of RISE participating countries had adopted legislation planning for energy efficiency, although South Asia and Sub-Saharan Africa had the lowest scores (Figure 1). The data indicate that countries such as Ghana, Kenya, Egypt, Cameroun, Algeria, Morocco, Cote d'Ivoire, Tunisia, and South Africa are faring well on the energy efficiency score (Figure 2). Africa's improvement in its energy efficiency score is reflected in the IEA Sustainable Development Scenario, where global energy intensity improves substantially, averaging 3.6% a year, with Africa improving significantly towards 2040 (IEA, 2019).
Figure 1 Regulatory Indicators for Sustainable Energy (RISE) Efficiency Scores
Source: World Bank, RISE 2020
Energy Efficiency Estimation Methods and Relevant Empirical Works
As Figure 3 shows, energy efficiency decomposition can be done multiplicatively or additively, where the difference in change can be segregated using either traditional single-factor productivity analysis or multifactor productivity analysis. Pillai (2019) identified four indicators to track changes in energy efficiency. The first is the thermodynamic indicator, which relates the energy of the useful output of a process to its input; its drawback is that it is not suited to measuring energy quality or macro-level activity (Pillai, 2019). The second is the physical-thermodynamic indicator, which expresses energy efficiency as the ratio between material output and the variation in energy input over a cycle. The third is the economic-thermodynamic indicator, a hybrid approach that measures energy in thermodynamic units and output in market prices. Finally, the economic indicator measures production in monetary value and energy in thermodynamic units.
Why LMDI is essential
There are different methods within the Logarithmic Mean Divisia Index (LMDI) family. The reasons for choosing a given decomposition approach are its theoretical basis, ease of use, adaptability, and ease of interpretation. This method is readily applicable to specific energy subsectors to estimate energy efficiency savings, which makes it plausible to estimate the effects on TPES and energy efficiency in the selected African countries. Moreover, it offers easy formulation and interpretation without a residual term to explain.
The approach is novel in the African context for evaluating the trends of energy efficiency on the continent over the study period. This study makes use of the Sun-Shapley additive method: the Shapley decomposition, which had been applied in cost allocation research, was pioneered for energy decomposition studies by Albrecht et al. (Ang, 2004). Ang (2004) noted that Sun had proposed a similar approach to the Shapley decomposition. This approach has become known as the refined Laspeyres index approach, which entails decomposing the interaction terms of the traditional Laspeyres index into the main effects. As a result, the Sun-Shapley decomposition approach gives a complete decomposition. When the segregation involves two factors, the Sun-Shapley approach is referred to as the Marshall-Edgeworth method.
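For the two-factor case just mentioned, the complete decomposition can be sketched as follows; the code is a minimal illustration of splitting the interaction term equally between the two factors, with made-up numbers rather than the study's data.

```python
# Two-factor complete decomposition (Marshall-Edgeworth case): V = x * y,
# with the interaction term dx*dy split equally between the two factors.
def two_factor_decomposition(x0, y0, x1, y1):
    dx, dy = x1 - x0, y1 - y0
    effect_x = dx * y0 + 0.5 * dx * dy
    effect_y = x0 * dy + 0.5 * dx * dy
    return effect_x, effect_y

ex, ey = two_factor_decomposition(2.0, 3.0, 2.5, 3.6)
# Complete decomposition: the two effects sum to the total change, no residual.
assert abs((ex + ey) - (2.5 * 3.6 - 2.0 * 3.0)) < 1e-12
print(ex, ey)  # 1.65 1.35
```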
Figure 3 Energy efficiency estimation methods
Chang and Huang (2020), using multiplicative and additive LMDI for 18 Asia-Pacific Economic Cooperation (APEC) countries between 1971 and 2015, found that population increase coupled with economic growth brought about increased carbon dioxide (CO2) emissions among these countries along with improvements in their energy efficiency. A further study in the Chongqing area applied the LMDI to industrial energy use and, using the industrial carbon emission (ICE) approach, discovered that energy intensity, energy mix, industrial output, and structure influence the ICE, with industrial output as the main driver. Olanrewaju (2019), using the LMDI between 1994 and 2016 to decompose and analyze South Africa's sub-industrial sectors, found three factors that determine energy consumption: structure, activity, and intensity. The activity effect accounted for the bulk of the energy consumed in the industrial sectors, while the intensity effect reduced energy consumption. Cansino et al. (2015), who studied the dynamics of energy consumption in Spain using the LMDI decomposition approach from 2003 to 2012, revealed that a 1% reduction of final energy consumption was achieved due to an 11% diminishing structural effect, while the intensity effect and activity effect grew by 3.5% and 7%, respectively.
Another study, by Inglesi-Lotz (2018), sought to test the hypothesis of a ''rebound effect'' or ''take-back effect'' in South Africa from 1990 to 2014 by studying the drivers of changes in CO2 emission levels and drawing comparisons with the BRICS countries. The decomposition showed that decreases in CO2 intensity and energy intensity did not translate into lower CO2 emission levels in any of the nations; in other words, as energy intensity decreased, CO2 emissions increased. For South Africa specifically, energy intensity was a negative indicator of its emission levels over the period studied.
Similarly, a study by Kim (2017) on the Korean manufacturing sector from 1991 to 2011, using the LMDI, concluded that the activity effect was the driving force behind the increase in energy consumption, while the structure and intensity effects were the reasons for the reduction in consumption. There was an inverse relationship between the structural and intensity effects: as the structure effect increased, the intensity effect decreased, with varying implications for the various sub-sectors. Furthermore, a study by Moutinho et al. (2018) on the drivers of CO2 emissions in the top 23 countries for renewable energy globally, from 1985 to 2011, using LMDI, found that the electricity financial power effect (EF) contributed to an increase in CO2 emissions, while the renewable resources productivity (RP) effect led to a fall in CO2 emissions for all countries.
These results corroborate the major finding that countries with larger proportions of renewable resources in their primary consumption have lower CO2 emission levels (Pascumal et al., 2019; Trimble, 2016), as shown in Table 3.
Decomposition analysis
Decomposition analysis has become the preferred method for analyzing energy efficiency research. Two forms of decomposition analysis exist in the literature: index decomposition analysis (IDA), which uses aggregate time-series data, and structural decomposition analysis, which uses disaggregated data. Index decomposition analysis has become more popular since the 1980s, when papers using that approach began to be published.
The 1973 oil crisis, coupled with rising climate concerns, popularized energy efficiency, and studies using this approach gained currency among researchers due to the need for nations to maximize energy use in the face of rising oil prices. This led to the development of the index methodology in the United States and the UK in the 1970s (Pillai, 2019) and ignited particular research interest in the decomposition approach. The most popular was the Laspeyres index; from the 1990s onward, the Divisia index, and the logarithmic mean Divisia index (LMDI) in particular, became prominent.
Based on the reasons mentioned above, data from the African Energy Commission (AFREC) and the World Development Indicators were used to undertake the decomposition analysis from 2000 to 2016. The Sun/Shapley incremental approach is one of the most practicable means of conducting decomposition studies.
A significant number of studies have been done on IDA with respect to energy intensity changes; see Pillai (2019) and Ang and Zhang (2000) for surveys of decomposition analysis. There are two critical types of Divisia index decomposition methods: the arithmetic mean Divisia index (AMDI) and the logarithmic mean Divisia index (LMDI). Many IDA studies address energy-related aggregates such as primary energy consumption, CO2 emissions, and energy intensity.
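For reference, the additive LMDI attributes the change in an aggregate to each factor via the logarithmic mean; in the standard single-term form (our notation, not necessarily the paper's), for $V = x_1 x_2 \cdots x_n$ between years $0$ and $T$:

$$\Delta V_{x_k}=L\!\left(V^{T},V^{0}\right)\,\ln\!\frac{x_k^{T}}{x_k^{0}},\qquad L(a,b)=\frac{a-b}{\ln a-\ln b},$$

so that $\sum_{k}\Delta V_{x_k}=V^{T}-V^{0}$ exactly, with no residual term.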
The decomposition analysis is based on a Kaya-type identity that is decomposed to analyze the data. Consistent with the indicators used throughout this study, it can be written as TPES = POP x (GDP/POP) x (FEC/GDP) x (TPES/FEC). According to Vehmas et al. (2018), following the ''jointly created and equally distributed'' principle of the Sun-Shapley method, the weighting coefficients are all set to 0.5; the same values are applied in this research. Subscript t refers to the current year, subscript t - 1 to the moving base (previous) year, and ∆ to the difference between the current year and the previous year (equation (4)).
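A generic Shapley attribution of the change in such a multiplicative identity can be sketched as below; the factor values are illustrative only, and for two factors this reduces to the Marshall-Edgeworth split shown earlier.

```python
# Shapley decomposition of the change in a product of factors, e.g.
# TPES = POP * (GDP/POP) * (FEC/GDP) * (TPES/FEC).
from itertools import combinations
from math import factorial, prod

def shapley_decomposition(f0, f1):
    """Attribute prod(f1) - prod(f0) to each factor via Shapley values."""
    n = len(f0)
    effects = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        effect = 0.0
        for size in range(n):
            for subset in combinations(others, size):
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                # v(S): factors in S switched to year t, the rest at t-1.
                base = [f1[j] if j in subset else f0[j] for j in range(n)]
                with_i = list(base)
                with_i[i] = f1[i]  # v(S + {i}): also switch factor i.
                effect += w * (prod(with_i) - prod(base))
        effects.append(effect)
    return effects

f0 = [1.00, 2.0, 0.50, 1.20]  # POP, GDP/POP, FEC/GDP, TPES/FEC at t-1
f1 = [1.03, 2.1, 0.48, 1.25]  # the same factors at year t (illustrative)
effects = shapley_decomposition(f0, f1)
assert abs(sum(effects) - (prod(f1) - prod(f0))) < 1e-12  # complete
print([round(e, 4) for e in effects])
```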
Data
The study makes use of data from 2000 to 2016 from the African Energy Commission (AFREC), with additional data from the WDI, on the energy efficiency indicators of these countries, applying the LMDI Sun/Shapley approach. The data cover total primary energy supply (TPES), population size (POP), carbon dioxide emission levels (CO2), gross domestic product (GDP), and final energy consumption (FEC). Table 2 provides the country statistics. The more developed countries on the continent, such as Algeria, South Africa, Morocco, and Egypt, tend to have the highest emissions per capita, energy intensity, carbon intensity, per capita consumption, and total primary energy supply. The emissions ratios bring to the fore the fact that these countries have not been able to decouple energy demand from economic growth and development. They equally exhibit high transmission and distribution losses. All of this shows that improvements in energy efficiency have been slow and almost nonexistent in most countries. On the other hand, the less developed countries, particularly those from Sub-Saharan Africa other than South Africa, are lagging in efficiency programs and exhibit worse figures for energy efficiency indicators such as energy intensity and per capita consumption, while their populations are growing. The RISE Africa report confirms the assertion that SSA is far behind regarding energy efficiency programs (RISE Africa, 2018). During the past 16 years, the study countries' total primary energy supply (TPES) increased by 106%, implying that the study countries have substantially increased their energy supply in little more than a decade. This is in line with global TPES, which was projected to reach 12,100 Mtoe by 2010 and 16,300 Mtoe by 2030 (Mathew, 2007).
Besides, Africa's energy needs are met from different sources; the share of fossil fuel consumption doubled over the study period, reaching 93% and increasing from 222,069.4 kilograms to 430,281.6 kilograms. This is in tandem with a global fossil fuel increase of 70% in 2019 (IEA, 2019; Energy Efficiency, 2020). The rate at which energy is being consumed reflects the economic development of these economies, and the increment reflects the growing energy consumption needed to keep pace with economic development in the study countries.
Africa's population is increasing rapidly. From 2000 to 2016, Africa's population more than doubled to over 1 billion people. With this growing population, energy demand will increase owing to the demand for industrialization, modern buildings, and energy services, creating more need for energy to support these sophisticated lifestyles. This trend has been growing in Africa, resulting in the 106% increase in primary energy supply over the study period.
Africa's GDP per capita has increased by less than 50%. This reflects the economic prosperity of the countries, and the indicator has a direct relationship with per capita energy consumption: the better off a nation is, the greater its ability to spend on energy consumption. Higher income levels translate into higher disposable income, making citizens able to spend on cooling and heating appliances in the study countries. The rising middle class in Africa, with better spending power, shows that people can afford comfortable lives. If these lifestyles are not energy efficient, they will increase the energy intensity of these economies, stalling progress towards energy efficiency.
Final energy consumption has increased by 96.3%, which is not the same rate as the total primary energy supply. With Africa on the brink of a population explosion, more energy will be demanded in the next two decades.
Africa's share of global CO2 emissions remains small. This does not imply that Africa should not take steps to reduce its emission levels, since the impacts of climate change are global, not local. Emissions increased by 50%, which is relatively high given that the continent's top emitter in the study is South Africa. The power sector emits substantial CO2 as electricity consumption increases, creating a bidirectional relationship between CO2 and electricity generation, particularly fossil-fuel generation (Pascumal et al., 2019; Chakamera and Alagidede, 2018); hence the need to practice energy efficiency to ensure that emission levels are reduced. The percentage of TPES does not change considerably but starts to decrease around 2004 and continues to do so in subsequent years. This reflects the less diversified energy mix of the study countries: the increment is tilted towards fossil fuels instead of renewables. The percentage for TPES has been more than 100% over the study period. The analysis also found TPES/FEC to be decreasing among the study countries; this ratio expresses a country's efficiency in transforming its primary energy sources into final consumption.
Most of the study countries have high T&D losses, as high as 50% (Trimble, 2016). Moreover, if a country consumes more of its energy as electricity, it is likely to have higher T&D losses because of the technical conversion processes through which the electricity is produced. It must be stated that a country is doing well when its TPES/FEC and FEC/GDP ratios are falling. On the other hand, macroeconomic variables such as POP and GDP/POP are favorable for a country when they are increasing: a growing population supplies more labor to the market, and rising per capita income raises income levels.
Furthermore, the FEC/GDP ratio, which denotes the amount of energy used to produce a unit of GDP and is otherwise referred to as energy intensity, is relatively high in Africa. Africa is touted as an energy-intensive continent, and its energy intensity has been lagging behind that of many other continents (RISE, 2021). South Africa is Africa's most energy-intensive country due to its reliance on coal for electricity generation (Alemzero et al., 2020). Increased consumption of renewables in electricity will reduce the high T&D losses associated with the conversion of fossil fuels to electricity, thereby improving the energy efficiency rates of these countries (IEA, IRENA 2020). It is apparent from the analysis that some of the study countries have made significant gains in efficiency due to structural reforms and activity-effect adjustments. The efficiency of the transformation process has been steady for most countries except Egypt at the beginning of the study period. However, they all peaked in the 2006-2008 period, except Ethiopia, implying a period of worsened energy efficiency. Similarly, there was an improvement from 2010 to 2012 for all the countries, and the trend appears to deteriorate from 2016. Pascumal et al. (2019) found the transformation effect to deteriorate but did not give an elaborate reason; this study finds that weaker integration of renewables has accounted for the deterioration of the transformation effect. Countries in this study that have more renewables in their generation mix have a better transformation effect.
South Africa is also an outlier in the analysis, with the most energy-inefficient sector (right figure). The structure of its economy is tilted toward fossil fuels, coal to be exact, although there has been a recent move to diversify into renewables like wind and solar under the Integrated Resource Plan (IRP). Together with Morocco, Ghana is the north star in this group of countries, with the smallest TPES/FEC ratios, followed by Kenya and Nigeria. Their ratios are corroborated by the RISE (2021) study, where they scored above 70% and 60%, respectively.
Overall, the study countries' technical energy efficiency has not improved significantly, and the structural and activity effects prevail in the study countries. For the energy intensity effect, Ghana began with the highest energy intensity (EI) ratio. The ratios of all the countries increased from 2003 to 2005, implying that these countries consumed more energy relative to GDP. One noticeable trend for the group of countries on the left is that they are all on a steady growth path, suggesting that the activity effect is stable. This is no surprise because, apart from DR Congo, these countries have a higher GDP per capita. Moreover, Egypt is the most energy-intensive country in Africa (Alemzero et al., 2020). They all went into a trough from 2008 to 2014, as the 2008 financial crisis that hit the world reduced the activity effect on these economies. The electricity supply share and the proportion of renewables in the electricity mix determine the efficiency of a country's energy performance. DR Congo has the most energy-intensive economy, as shown in the figure below (on the left-hand side); the rest of the countries are on a steady energy intensity pathway.
On the right-hand side, all the countries went into a trough in 2008 and peaked again in 2014; the 2008 financial crisis impacted the global economy and reduced their energy intensity levels. A startling result was obtained for South Africa, which has the most stable EI value. This is surprising because South Africa is the most energy-intensive economy on the continent, yet the result is confirmed by the RISE (2021) study, where its overall energy efficiency score was above 70%. Figure 6 depicts the transmission and distribution losses and the shares of renewables in the study countries' grid systems. As expected, South Africa came out as having the highest transmission and distribution losses. The results strongly indicate that the countries with the highest transmission and distribution losses generate less energy from renewable sources; thus, there is an inverse relationship between renewable power generation and transmission and distribution losses. Kenya generates over 60% of its power from geothermal, wind, and solar and tends to have small to nonexistent transmission losses, as do DR Congo, Angola, and Cape Verde, whose cumulative renewables consumption is very high. The intuition behind this result is that renewables provide space for distributed generation (DG), or behind-the-meter (BTM) generation: the more customers generate under DG, the lower the T&D losses on the system, beyond the allowable technical losses.
Expectedly, Ghana equally has significant transmission and distribution losses, as Ghana derives less than 0.01% of its cumulative power generation from renewables (Sun et al., 2020). According to Gyamfi et al. (2018), 10% is the permissible range, which these losses exceed. That is financially unsustainable and places power utilities in financial distress. There is a need to reform the power sector in SSA, moving away from the vertically integrated traditional model, to make utilities competitive and economically viable.
5. Conclusion and Policy Implications
This paper set out to study energy efficiency in 13 African countries for the period 2000-2016. The study used macroeconomic variables and their past and current trends, together with their decomposed effects on primary energy consumption. The latest data for the 13 countries from the African Energy Commission (AFREC) were used in the analysis. Because decomposition analysis reacts to variations across different periods, comparing various studies is quite challenging; the IDA method best explains the analysis results, as the differences between them stem primarily from the mathematical approach.
The study applied the following energy efficiency indicators: energy intensity and the efficiency of the transformation process (total primary energy supply divided by final energy consumption, TPES/FEC). Energy efficiency and its effects on total primary energy supply were researched using the incremental Sun-Shapley decomposition approach. This approach gives precise decomposition results, which can be summed to study long-run horizons without missing information, making the Sun/Shapley decomposition method suitable for analyzing annual changes (Ang, 1995; Vehmas et al., 2018; Colinet et al., 2016). The summary descriptive statistics show that the variation among the countries is very high for transmission and distribution losses, GDP, TPES, and final energy consumption, whereas carbon intensity (CI) and per capita emissions have low variation. This is in sharp contrast to the more developed countries on the continent, where per capita emissions and carbon intensity are very high; these countries also tend to have higher transmission and distribution losses. The analysis indicates slow improvement in energy efficiency programs in most countries, as shown by the energy efficiency indicators, and aptly describes the energy situation in the study population and the continent as a whole.
BP's Africa scenarios showed that the power sector would be the primary energy user, followed by industry, and that transport will continue to consume the bulk of final energy from now to 2040. Oil will remain the dominant fuel until 2040 but will fall to 34% from 44% today. Gas will increase considerably, while renewables will overtake coal as the second preferred choice for generating electricity on the continent due to falling costs. Coal will nearly be phased out, while nuclear persists and hydro remains a power generator of choice.
From the decomposition analysis, all the energy efficiency indicators have positive values. Population growth is on an upward trajectory, confirming most projections from notable institutions such as the IMF, the World Bank, the AfDB, and several others. The per capita income of the countries studied has been on a steady growth path.
Energy intensity (FEC/GDP), as well as the efficiency of the transformation of primary energy into final consumption (TPES/FEC), has been growing among the countries. The low percentage changes show the less diversified nature of the primary energy supply on the continent: the countries are locked in with respect to diversifying their energy mix away from fossil fuels and increasing renewables consumption. Additionally, the countries experience transmission and distribution losses, which are even more pronounced for the SSA countries in the study, except Morocco, which has the highest transmission losses. It was clear from the analysis that countries with significant proportions of renewables in their power generation mix also tend to have lower transmission and distribution losses.
African countries need to structurally change their economies and sectoral energy intensities, institute mandatory energy efficiency policies, and embrace emerging energy generation technologies such as wind and solar to change this unsustainable trajectory. African countries can benchmark against global best practices in policy design and implementation while adapting them to country-specific circumstances, thereby enriching their energy efficiency policy portfolios (IEA, 2020).
In addressing the energy efficiency challenges, appropriate policies need to be implemented. Most existing policies are directed at individual households' decision-making regarding the adoption of energy-efficient technologies: tax reductions, subsidies, discounts, prohibitions, and awareness campaigns are among the policies aimed at influencing households' adoption decisions. There are also regulatory rules, such as performance standards, building codes, and trade restrictions, that determine the availability of choices households can make regarding the adoption of technologies. In addition, relatively new policy instruments, such as energy efficiency tenders, tradable white certificates, and energy efficiency obligations, are directed not at households but at other parties in the energy efficiency chain. The clunker-for-cash approach, as a means to push individuals towards the adoption of electric vehicles (EVs), especially in the developing world, where people buy second-hand vehicles that pollute the environment, is a measure that needs to be implemented fully by African countries. In the case of Africa, the unemployed could be given cash for returning energy-inefficient appliances to be recycled, which can also help to curb the dumping of energy-inefficient appliances.
"Economics",
"Environmental Science"
] |
miR-146a Overexpression in Oral Squamous Cell Carcinoma Potentiates Cancer Cell Migration and Invasion Possibly via Targeting HTT
Huntingtin (HTT) is one of the target genes of miR-146a and regulates various cancer cell activities. This study aims to explore the miR-146a expression pattern in oral squamous cell carcinoma (OSCC) and its role and mechanism in OSCC progression and metastasis via targeting the HTT gene. OSCC tissue and non-cancerous matched tissue (NCMT) were obtained from 14 patients. OSCC cell lines and normal HOK cells were used for migration and invasion assays. An OSCC model was developed in miR-146a knockout mice (B6.Cg-Mir146tm1.1Bal). Transwell cell migration/invasion and scratch wound assays were used to investigate OSCC cell migration and invasion in vitro. Kaplan-Meier survival analysis was used to investigate the association of HTT expression patterns in cancer tissue with patient survival percentage and duration. Pearson's correlation analysis tested the association between miR-146a and HTT expression in OSCC tissues. miR-146a mimic and inhibitor transfections were performed to overexpress and knock down miR-146a in OSCC cells, respectively. miR-146a expression was highly upregulated in OSCC tissues and OSCC cell lines. Cancer cell migration/invasion was enhanced in miR-146a-overexpressing cells and reduced in miR-146a-knockdown cells. HTT expression was reduced in OSCC tissues and cell lines compared to NCMT and HOK cells, respectively. HTT expression was downregulated in miR-146a-overexpressing OSCC cells and upregulated in miR-146a-knockdown OSCC cells. The expression pattern of miR-146a in OSCC cell lines and tissues was inversely correlated with HTT expression. miRNA target prediction showed that HTT possesses binding sites for miR-146a. HTT overexpression in OSCC tissues was associated with higher patient survival percentage and duration. HTT knockdown in OSCC cells enhanced miR-146a expression and cell migration/invasion. Inducing OSCC in miR-146a knockout mice increased HTT expression in tongue tissue and alleviated cancer aggressiveness and epithelial damage. Overexpressed miR-146a in OSCC targets the HTT gene and enhances cancer cell migration/invasion, unraveling a possible role of HTT in miR-146a-mediated OSCC cell migration and invasion.
INTRODUCTION
Oral cavity cancer is the most common head and neck cancer, with a poor prognosis and a high recurrence rate (1). Oral squamous cell carcinoma (OSCC) accounts for more than 90% of all head and neck cancers and is the sixth most common cancer worldwide (2,3). The incidence rate of OSCC is increasing rapidly, especially among young and middle-aged individuals, as a result of smoking and alcohol abuse. Despite the progress of surgery, radiotherapy, and chemotherapy treatments, the 5-year survival rate of OSCC patients has remained below 50% for the last 30 years (2,4). OSCC has a high rate of metastasis in the head and neck region due to local invasion and the lack of early diagnostic markers (5). The pathogenesis of OSCC is a complicated process, and the molecular mechanisms of OSCC tumorigenesis, progression, and metastasis are still unclear. Therefore, a molecular-level understanding of OSCC progression and metastasis is necessary to unveil novel diagnostic and therapeutic targets.
MicroRNAs (miRNAs) are an evolutionarily conserved group of small, single-stranded, non-coding RNA molecules of 20 to 22 nucleotides that regulate mRNA expression at the transcriptional and post-transcriptional levels (6,7). Emerging evidence has indicated that mature miRNAs play critical roles in a broad range of physiological and pathological processes, such as development, cell proliferation, differentiation, apoptosis, signal transduction, and the development of diseases, including inflammation and cancers (8)(9)(10)(11)(12)(13). The literature has reported the involvement of various miRNAs, including miR-146a, miR-145, miR-433, miR-195-5p, and miR-375, in OSCC (14)(15)(16)(17). Overexpression or underexpression of miRNAs regulates OSCC development and progression. The role of miR-146a in the etiology of various cancers, including blood, breast, cervical, kidney, liver, and lung cancer, has been extensively studied (18). Only a few reports are available regarding the role of miR-146a in OSCC, and the results are controversial (17,19,20). Hung and colleagues reported the upregulation of miR-146a in OSCC tissues and in the blood circulation of OSCC patients, and its involvement in OSCC oncogenicity (17). Similarly, Min and colleagues reported a higher expression of miR-146a in OSCC (19). In contrast, Shi and colleagues reported the loss of miR-146a expression in high-grade oral cancer tumors, and re-expression of miR-146a reduced oncogenic phenotypes and metastasis (20). Therefore, further studies are necessary to elucidate the exact expression pattern of miR-146a in OSCC as well as its role and mechanism in OSCC progression. miR-146a targets specific genes, regulates signaling pathways, and is involved in cancer biology (18). miR-146a targets IRAK1 and TRAF6 to promote NF-kB signaling in oral, cervical, breast, and prostate cancer (17,18,21). Similarly, miR-146a targets EGFR to promote growth and proliferative signaling in various cancer cells, including breast, liver, lung, prostate, and gastric cancer (18). However, other target genes of miR-146a and their roles in OSCC etiology are not fully understood. HTT gene mutation causes the neurodegenerative Huntington's disease (22). Most HTT-related studies have focused on the role of HTT in the nervous system, although the expression of the HTT gene and protein is ubiquitous. Some recent studies have reported a role of HTT in cancer development and progression. HTT expression is downregulated in breast cancer and regulates cancer cell differentiation via the maintenance of tight junctions (23). HTT gene expression is downregulated in oral cancer tissues, and HTT is one of the candidate genes involved in oral cancer biology (24). Moreover, miR-125b, miR-146a, miR-150, and miR-214 target both human and mouse HTT and regulate diverse cellular processes (25,26). However, whether miR-146a targets the HTT gene in OSCC to regulate tumorigenesis and metastasis is still unknown.
In this study, we hypothesized that miR-146a targets the HTT gene to regulate OSCC cell migration and invasion. We found overexpressed miR-146a and underexpressed HTT in OSCC clinical tissues and cell lines. Overexpressed miR-146a downregulated HTT expression in OSCC cells, suggesting the HTT-targeting potential of miR-146a. Overexpressed miR-146a or underexpressed HTT enhanced OSCC cell migration/invasion. Inducing OSCC in miR-146a knockout mice mitigated cancer aggressiveness and tongue epithelial damage compared to wild-type mice. For the first time, our study reports the overexpression of miR-146a in OSCC as an inducer of cancer cell migration and invasion, possibly via targeting the HTT gene.
OSCC Tissue Collection
A total of 14 patients with OSCC were included in this study. The independent diagnosis of each case was confirmed by both pathologists and physicians following the standard criteria (27). Primary tumors, along with paired non-cancerous matched tissues (NCMT), were surgically resected from OSCC patients with the patients' written informed consent. NCMT samples were obtained 2 cm from the tumor tissue. Tissue specimens were stained with H&E to distinguish cancerous tissue from NCMT. This study was approved by the Ethical Institutional Review Board of the Affiliated Stomatology Hospital of Guangzhou Medical University (ethical approval number: KY2017018), and all procedures were performed in accordance with institutional ethical standards. All samples were obtained during tumor removal surgery, frozen immediately in liquid nitrogen, and stored at −80°C until the detection of miR-146a and HTT. The tumors underwent TNM classification according to the American Joint Committee on Cancer (AJCC) system (27). The patients' characteristics and demographics are presented in Table 1.
In Vitro Cell Culture Study
The SCC9, SCC25, and CAL27 human oral squamous carcinoma cell lines were used in this study. SCC25 (CRL-1628) was purchased from ATCC. The HOK cell line was obtained from ScienCell (Chemie Brunschwig, Basel, CH). SCC9 and CAL27 were obtained from the Key Laboratory of Oral Medicine, Affiliated Stomatology Hospital of Guangzhou Medical University. SCC25 cells were cultured in DMEM/F12 supplemented with 400 ng/ml hydrocortisone and 10% fetal bovine serum. SCC9 and CAL27 cells were cultured in DMEM/F12 supplemented with 10% FBS and 1% penicillin-streptomycin (Gibco, 15140122). All cells were incubated in an atmosphere of 5% CO₂ and saturated humidity at 37°C.
Knockdown of HTT Gene in OSCC Cells
The HTT gene in OSCC cells was knocked down using si-RNA. The HTT knockdown efficiency of three different si-RNAs (si-HTT1, si-HTT2, si-HTT3) in SCC9 and CAL27 cells was tested. Since si-HTT1 (targeted sequence: GCACCTTCCTCCTGAGAAA, Guangzhou RIBOBIO) showed the highest inhibition of HTT expression, we used si-HTT1 to downregulate HTT in OSCC cells. HTT knockdown efficiency, the miR-146a expression pattern, and OSCC cell migration and invasion were further analyzed.
Cells in the logarithmic growth phase were seeded in a 6-well plate. When the cells reached 80% confluence, siRNA transfection was performed following the instructions of the siRNA transfection kit. After the cells had been transfected for 48 h, morphological changes were observed under an inverted light microscope, total RNA was extracted, and the silencing effect was detected by quantitative real-time PCR (qRT-PCR).
Quantitative Real-Time PCR Assay
Total RNA was extracted from cultured cells, OSCC clinical tissues, and mouse tongue tissues using TRIzol Reagent (Invitrogen, CA, USA). The miRNA was extracted from cultured cells using miRNAiso for small RNA reagent (TaKaRa, Dalian, China). The RNA samples were then reverse-transcribed into cDNA with the PrimeScript RT Master Mix (TaKaRa, Dalian, China) and the SYBR PrimeScript miRNA RT-PCR Kit (TaKaRa, Dalian, China). Real-time PCR was performed with the SYBR Premix Ex Taq II and the SYBR PrimeScript miRNA RT-PCR Kit (TaKaRa, Dalian, China), using an Applied Biosystems 7500 Sequence Detection System (Applied Biosystems, Foster City, CA, USA). The expression of HTT mRNA was quantified with GAPDH mRNA expression as an endogenous control. The levels of miR-146a were quantified with U6 as the control. Relative levels were determined by the ΔΔCt method (28). Primers used for qPCR are listed in Table 2.
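As a rough illustration of the ΔΔCt quantification described above, the snippet below computes a relative expression value as 2^(-ΔΔCt); the Ct numbers are hypothetical, with GAPDH as the endogenous control for HTT mRNA, as in the text.

```python
# Minimal sketch of the 2^(-ΔΔCt) relative-quantification method.
# All Ct values below are hypothetical placeholders.

def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Return the fold change of the target gene in sample vs. control."""
    d_ct_sample = ct_target_sample - ct_ref_sample    # normalize to GAPDH
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control                # ΔΔCt
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values for HTT/GAPDH in OSCC tissue vs. NCMT.
fold = relative_expression(27.0, 18.0, 25.0, 18.0)
print(f"HTT fold change (OSCC vs. NCMT): {fold:.2f}")  # 0.25 -> downregulated
```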
Cell Invasion Assay
The cell invasion assay was performed using Transwell chambers (8.0 µm pore size; Corning, US). The chambers were prepared by thawing Matrigel (BD Bioscience, San Jose, CA) at 4°C, and then 100 µl of the thawed Matrigel was added to each insert. After incubating at room temperature for 1 h, the unsolidified liquid was gently removed with a pipette. Transfected cells were starved overnight and then seeded in the upper chamber at a density of 2.5 × 10⁶ cells/ml in 200 µl of medium without FBS. Medium with 10% FBS (600 µl) was added to the lower chamber. Following a 24-h incubation at 37°C with 5% CO₂, non-invading cells in the upper chamber were removed with a cotton swab, and invading cells were fixed in 4% paraformaldehyde and stained with 0.5% crystal violet. Photographs were taken randomly from five fields of each membrane. The number of invading cells was expressed as the average number of cells per microscopic field over five fields.
Cell Migration Assay
For migration assays, a protocol similar to that of the invasion assay was used, except that the upper chamber was not pre-coated with Matrigel and cells were seeded in the upper chamber at a density of 1.0 × 10⁶ cells/ml.
Scratch Wound Assay
Cell migration was also measured by scratch wound assay. Transfected OSCC cells were cultured in 6-well plates to 100% confluence as monolayers and then scratched with a 200-µl sterile pipette tip (29). The medium and cell debris were washed away with PBS and replaced with 2 ml of fresh serum-free medium. The area between the wound gaps was measured at different time points under an inverted phase-contrast microscope (Olympus, Germany) and then calculated with ImageJ software. Five randomly selected fields along each wound were marked, and each experiment was conducted in triplicate.
Mouse Model for Oral Cancer
Male C57BL/6 mice (wild-type), 6-7 weeks old and weighing 250-350 g, were obtained from the Medical Laboratory Animal Center of Guangdong Province. miR-146a knockout mice (B6.Cg-Mir146tm1.1Bal) were obtained from the Jackson Laboratory. All mice were housed at the Specific Pathogen Free (SPF) animal facility at Guangzhou Medical University according to the animal protocol and regulations of the University Laboratory Animal Resources. All animal experiments were approved by the Guangzhou Medical University Institutional Animal Care and Institutional Biosafety Committee (ethical approval number: 2020-002) and were performed following the ARRIVE guidelines. OSCC was induced in wild-type (9 mice) and miR-146a knockout (9 mice) animals following the protocol established by our lab (30), as illustrated in Supplementary Figure 1. In brief, the mice were treated with freshly prepared 4-NQO (Cat# N8141; Sigma, St Louis, MO) for 10 weeks and then given normal drinking water for 10 weeks. 4-NQO was dissolved in propylene glycol (stock concentration 5 mg/ml), stored at −20°C, and added to the drinking water at 60 µg/ml. Drinking water containing 4-NQO was freshly prepared every week in deionized water and stored in the dark at 4°C until used. Bottles containing 4-NQO-supplemented water were wrapped with foil to preclude photodegradation of the carcinogen and were changed at two- to three-day intervals throughout the study. The drinking water was changed every day, and the mice had access to the drinking water at all times while receiving treatment. The mice were anesthetized (xylazine, intraperitoneal injection, 0.13 mg/kg body weight) and euthanized by exsanguination prior to tissue collection, at 0, 16, and 20 weeks after treatment, respectively. The tongue tissues were collected and stored at −80°C for histology and qPCR.
Hematoxylin and Eosin Staining
The tongue tissues were fixed in 4% paraformaldehyde for 16 hours, then dehydrated and embedded in paraffin. Sections (3 µm thick) were stained with hematoxylin and eosin (H&E). The sections of the tongue epithelium were photographed with an inverted optical microscope.
Immunochemistry
Immunohistochemistry was performed to observe the expression of HTT. Human OSCC tissue sections were incubated with the anti-HTT antibody (ab109115, Abcam) overnight at 4°C, followed by incubation with biotinylated anti-rabbit secondary antibodies for one hour, and counterstained with hematoxylin to detect HTT expression. The numbers of HTT-positive cells and nucleated cells in five random fields of view on each human OSCC tissue and NCMT tissue section were counted using ImageJ software, and their ratio (HTT-positive cells/nucleated cells) was used for statistical analysis. Table 1 summarizes the clinicopathological characteristics of the patients with OSCC.
Kaplan-Meier Survival Analysis and Pearson's Correlation Analysis
A Kaplan-Meier survival curve for 397 OSCC/HNSCC patients with high HTT expression and 99 patients with low HTT expression was plotted to evaluate survival percentage and duration, as described previously (31). Primary data (survival duration and HTT expression) of the patients were taken from the Kaplan-Meier plotter (http://www.oncolnc.org/). Pearson's correlation analysis was performed using GraphPad Prism 7.04 software to investigate the correlation between miR-146a and HTT expression in OSCC tissues.
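A minimal sketch of the correlation step is given below, assuming paired relative-expression values per tissue; the arrays are placeholders, not the study's data, and scipy's pearsonr stands in for the GraphPad computation.

```python
# Hedged sketch: Pearson correlation of paired miR-146a and HTT
# expression values across OSCC tissues. Values below are made up.
import numpy as np
from scipy import stats

mir146a = np.array([5.2, 4.8, 6.1, 3.9, 7.0, 5.5, 4.2])  # relative expression
htt     = np.array([0.8, 1.1, 0.6, 1.4, 0.4, 0.7, 1.2])

r, p = stats.pearsonr(mir146a, htt)
print(f"r = {r:.4f}, P = {p:.3f}")  # a negative r indicates inverse correlation
```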
Statistical Analysis
All statistical analyses were performed using GraphPad Prism 7.04 software and the SPSS 20.0 statistical package (SPSS, Inc., Chicago, IL, USA). Data are presented as the mean ± SD. The nonparametric unpaired Mann-Whitney test was used to compare the means of two independent groups. Differences were also examined using one-way ANOVA followed by the Tukey-Kramer post-hoc test and the independent-samples t-test. The Chi-square test was used to test differences between two or more groups. P < 0.05 was considered statistically significant.
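For illustration only, the snippet below runs two of the two-group comparisons named above in scipy; the measurements are made up and do not correspond to any figure in the study.

```python
# Hedged sketch of the two-group tests described above, with fabricated
# example measurements (e.g., migrated cells per field).
from scipy import stats

group_a = [12.1, 14.3, 11.8, 15.0, 13.2]
group_b = [18.9, 21.4, 19.7, 22.1, 20.3]

u, p_mw = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
t, p_t = stats.ttest_ind(group_a, group_b)
print(f"Mann-Whitney P = {p_mw:.4f}, t-test P = {p_t:.4f}")  # P < 0.05 -> significant
```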
miR-146a Is Highly Expressed in OSCC
To explore the role of miR-146a in OSCC, we analyzed the expression of miR-146a in OSCC tissues and NCMT from 14 patients. Patient characteristics, demographics, and clinical data are provided in Table 1. miR-146a was highly expressed in OSCC tissue compared to NCMT (Figure 1A). We also analyzed miR-146a expression in OSCC cell lines and HOK cells. miR-146a expression was highly upregulated in all the OSCC cell lines tested (SCC9, SCC25, and CAL27) compared to HOK cells (Figure 1B). SCC9 showed the highest expression of miR-146a (135-fold higher) compared to HOK cells. The miR-146a expression results from clinical samples were consistent with the results from OSCC cell lines.
Overexpression of miR-146a Promotes OSCC Cell Migration and Invasion
miR-146a was transfected into OSCC cells to analyze the effect of miR-146a on OSCC cell migration and invasion. Transfection of miR-146a successfully upregulated the expression of miR-146a in SCC9, SCC25, and CAL27 cells (Figure 2A). In vitro cell migration and invasion assays showed a higher number of migrated and invaded cells among miR-146a-transfected SCC9, SCC25, and CAL27 cells (Figures 2B, C). Quantitative analysis showed that overexpression of miR-146a enhanced cell migration and invasion by ~1.5-fold in SCC9, SCC25, and CAL27 cells (Figures 2D, E). This result indicates that overexpressed miR-146a in OSCC cells promotes cancer cell migration and invasion.
Knockdown of miR-146a in OSCC Inhibits OSCC Cell Migration
We knocked down miR-146a in OSCC cells to further confirm its role in OSCC migration and invasion. Transfection of the miR-146a inhibitor dramatically reduced miR-146a expression in SCC9, SCC25, and CAL27 cells (Figure 3A). The scratch wound assay revealed that miR-146a knockdown reduces the migration of SCC9, SCC25, and CAL27 cells, as indicated by the larger wound area in the miR-146a inhibitor group (Figures 3B-D). Quantitative analysis of wound closure showed a higher wound area percentage in miR-146a inhibitor-transfected SCC9, SCC25, and CAL27 cells (Figures 3E-G). Moreover, knockdown of miR-146a inhibited OSCC cell invasion (Supplementary Figure 2). Our results indicate that miR-146a overexpression and knockdown in OSCC have stimulatory and inhibitory effects on cancer cell migration/invasion, respectively (Figures 2 and 3).
HTT Expression Is Downregulated in OSCC
Since miR-146a targets the HTT gene, we evaluated the expression pattern of the HTT gene in OSCC tissue and the effect of the level of HTT expression on survival percentage and duration in oral cancer patients. We plotted the Kaplan-Meier survival curve for 397 OSCC/HNSCC patients with high HTT expression and 99 OSCC/HNSCC patients with low HTT expression. Cancer patients with higher HTT expression showed a better survival percentage and longer survival duration compared to patients with low HTT expression (Figure 4A). We also analyzed HTT gene expression in the OSCC tissues and NCMT of 14 patients as well as in OSCC cell lines. HTT expression was significantly downregulated in OSCC tissues compared to NCMT (Figure 4B). Similarly, lower expression of the HTT gene was observed in OSCC cells (SCC9, SCC25, and CAL27) compared to HOK cells (Figure 4C). The immunohistochemistry study showed lower expression of HTT protein in OSCC tissues compared to NCMT (Figures 4D, E). These results indicate that the downregulation of HTT in OSCC correlates with lower survival percentage and duration.
miR-146a Expression Inversely Correlates With HTT Expression in OSCC
Prediction of miRNA targets using the TargetScanHuman online tool (http://www.targetscan.org/vert_72/) showed a binding site of miR-146a in the HTT gene (Figure 5A). We performed Pearson's correlation analysis to investigate the correlation between miR-146a and HTT expression in OSCC. As expected, miR-146a expression in OSCC tissues was inversely correlated (r = −0.7022, P = 0.01) with HTT expression (Figure 5B). The miR-146a-transfected OSCC cells showed lower expression of the HTT gene (Figure 5C), and knockdown of miR-146a in OSCC cells robustly enhanced HTT gene expression (Figure 5D).
Knockdown of HTT Enhances OSCC Cell Migration and Invasion
To analyze the role of HTT in OSCC cell migration and invasion, we knocked down the HTT gene using HTT-specific si-RNA (Figure 6A). Interestingly, knockdown of HTT upregulated miR-146a expression in OSCC cells (Figure 6B). This result further confirms the inverse correlation between miR-146a and HTT expression in OSCC cells, as observed in Figure 5. Cell migration and invasion assays showed a higher number of migrated and invaded OSCC cells in the HTT knockdown groups (Figures 6C, D). Quantitative analysis showed a ≥1.5-fold higher number of migrated and invaded cells among HTT-knockdown OSCC cells (Figures 6E, F). This result indicates an inhibitory effect of HTT on OSCC cell migration and invasion.
Knockout of miR-146a Inhibits the OSCC Progression in Mice
We tested the role of miR-146a in OSCC progression in miR-146a knockout mice with OSCC. Inducing OSCC in wild-type mice produced tongue epithelial damage at week 10, and a higher degree of carcinoma aggressiveness and epithelial damage was observed at weeks 16 and 20 (Figure 7A). In miR-146a knockout mice, OSCC progression was relatively slow, and the epithelial damage was mitigated compared to wild-type mice at weeks 16 and 20, respectively (Figure 7A). Tongue epithelial damage increased in OSCC wild-type mice at weeks 16 and 20, whereas OSCC-induced epithelial damage was alleviated in miR-146a knockout mice (Figure 7B). We also analyzed HTT mRNA expression in the tongue tissue of OSCC mice at week 20 (Figure 7C): HTT mRNA expression was highly upregulated in the OSCC tongue tissue of miR-146a knockout mice compared to wild-type mice. These results indicate a protective role of the HTT gene against OSCC by reducing cancer aggressiveness and tongue epithelial damage.
DISCUSSION
Evidence from the literature demonstrates that over- or underexpression of miR-146a in specific cancers acts as either a cancer suppressor or an inducer by targeting specific genes (18). However, the expression pattern and function of miR-146a in OSCC are still controversial (17,19,20). This study investigated: (a) the expression pattern of miR-146a in OSCC tissues and cells, (b) the possible target gene of miR-146a, and (c) the role of miR-146a in OSCC development and metastasis. miR-146a was overexpressed, and its possible target HTT gene and protein were underexpressed, in OSCC tissues and cell lines. The expression of miR-146a and the HTT gene in OSCC was inversely correlated. Higher expression of HTT in cancer tissues correlated with higher survival percentage and duration. Overexpressed miR-146a enhanced OSCC cell migration and invasion, possibly by targeting the HTT gene. Inducing OSCC in miR-146a knockout mice enhanced HTT expression in tongue OSCC tissue and alleviated cancer aggressiveness and tongue epithelial damage. Figure 8 illustrates the main findings of this study. Our results showed that overexpressed miR-146a in OSCC possibly targets the HTT gene to induce cancer cell migration and invasion, suggesting a role of miR-146a and HTT in OSCC progression, metastasis, and invasion. Overexpression of miR-146a has been reported in various cancers, including breast (8,32), gastric (33), cervical (21), and bladder cancer (34). The expression pattern of miR-146a in OSCC remains controversial, as two studies reported overexpression (17,19) and one study reported underexpression (20). In this study, OSCC tissues from 14 patients and the OSCC cell lines SCC9, SCC25, and CAL27 showed higher expression of miR-146a. Overexpressed miR-146a in various cancers, such as breast (8), bladder (34), gastric (35), and prostate cancer (36), has been reported to regulate cancer cell migration, invasion, and metastasis. Similar to other types of tumors, the metastasis of OSCC mostly involves local invasion restricted to the head and neck region that frequently remains undetected until an advanced stage. Irani summarized the distant metastases of OSCC, including to the heart, lung, bone, and brain (37). Therefore, it is crucial to unravel the role of miR-146a in OSCC metastasis and its mechanism. We overexpressed and knocked down miR-146a in OSCC cell lines to investigate its role in OSCC migration and invasion. miR-146a overexpression promoted, and knockdown inhibited, OSCC cell migration and invasion in vitro (Figures 2 and 3). Our data indicate a possible role of overexpressed miR-146a in OSCC metastasis. Differentially expressed miR-146a targets specific genes, affects gene transcription and translation, and regulates tumor biology (18). TRAF6, IRAK1, and SOX2 have been reported as target genes of miR-146a in OSCC (17,19,20). miR-146a has specificity for the HTT gene, and HTT plays a vital role in oral and breast cancer biology and many other cellular activities (23)(24)(25)(26). This study hypothesized that miR-146a might target the HTT gene to regulate OSCC progression and metastasis. We extensively analyzed the expression pattern of HTT and its role in OSCC (Figure 4). HTT gene and protein expression were downregulated in OSCC tissues with higher pathological grade.
A similar expression pattern was observed in SCC9, SCC25, and CAL27 cells (Figure 5). Our results suggest HTT as a possible diagnostic or therapeutic target in OSCC. Thion and colleagues reported the anti-metastatic role of the HTT gene in breast cancer (23,38). We unraveled a strong association between lower expression of HTT in OSCC/HNSCC and reduced survival rate/duration, and vice versa, indicating a critical role of HTT in OSCC.
Since high-throughput sequencing analysis revealed the HTT gene as a putative target of miR-146a, we further analyzed the regulatory role of miR-146a on HTT gene expression in OSCC. The expression patterns of miR-146a and HTT in OSCC tissues were inversely correlated. Consistent with these findings, miR-146a overexpression downregulated, and knockdown upregulated, HTT gene/protein expression in OSCC cells. Similarly, knockdown of the HTT gene upregulated miR-146a expression in OSCC cells. Our results indicate the HTT-targeting potential of miR-146a in OSCC.
Results from the literature and our current study show that lower expression of HTT in oral and breast cancer is related to cancer metastasis and lower patient survival (23,24,38). In this study, we found that knockdown of the HTT gene in OSCC cells robustly enhanced cancer cell migration and invasion in vitro. HTT gene expression was upregulated in the OSCC tongue tissue of miR-146a knockout mice. Aggressive OSCC in tongue tissue with a disrupted tongue epithelial layer was observed in wild-type mice, whereas reduced OSCC aggressiveness and an intact tongue epithelial layer were observed in miR-146a knockout mice. Our results indicate a role of overexpressed miR-146a in OSCC aggressiveness via targeting the HTT gene.
In this study, we evaluated the expression patterns of miR-146a and the HTT gene/protein, as well as the HTT-targeting potential of miR-146a, in well-characterized OSCC clinical samples, a mouse model, and cell lines. We also investigated the role of miR-146a in OSCC cell migration, invasion, and cancer aggressiveness. The use of an OSCC model in miR-146a knockout mice to investigate the role of miR-146a in OSCC and HTT expression is another novelty of this study. This is the first study to illustrate the expression pattern of HTT in OSCC tissues and cell lines, as well as the role of miR-146a in OSCC development and HTT expression in a knockout mouse model. The use of only 14 OSCC patients is a limitation of this study. Another limitation is that we did not investigate the molecular mechanism of miR-146a/HTT-mediated OSCC cell migration/invasion and metastasis. Therefore, future research recruiting more OSCC patients and unraveling the molecular mechanisms of miR-146a/HTT-mediated signaling pathways in OSCC metastasis is strongly recommended. Knockdown of HTT in OSCC cells enhanced cancer cell migration and invasion, and miR-146a knockout OSCC mice showed higher expression of HTT along with reduced cancer aggressiveness and epithelial damage (Figures 7A-C). Based on these results, we predict that miR-146a overexpression in oral squamous cell carcinoma potentiates cancer cell migration and invasion, possibly via targeting HTT. However, future in vitro and in vivo studies, overexpressing HTT in miR-146a-overexpressing OSCC cells or mouse models, as well as inhibiting HTT in miR-146a-knockdown OSCC cells or the miR-146a knockout OSCC mouse model, are indispensable to further confirm the findings of this study.
CONCLUSION
miR-146a was overexpressed in OSCC tissues and cell lines. miR-146a overexpression and knockdown in OSCC cell lines enhanced and inhibited cell migration/invasion, respectively. HTT was underexpressed in OSCC, and its expression pattern was inversely correlated with that of miR-146a. Knockdown of HTT in OSCC cells enhanced cell migration/invasion. Higher HTT expression in cancer tissues was associated with longer survival percentage and duration. The miR-146a knockout OSCC mouse model not only showed higher expression of HTT in tongue tissue but also mitigated OSCC progression and tongue epithelial layer damage. Our results suggest the involvement of the miR-146a/HTT axis in OSCC cell migration and invasion.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding authors.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Ethical Institutional Review Board of Affiliated Stomatology Hospital of Guangzhou Medical University. The patients/participants provided their written informed consent to participate in this study. The animal study was reviewed and approved by Guangzhou Medical University Institutional Animal Care and Institutional Biosafety Committee.
AUTHOR CONTRIBUTIONS
LPW, YY, YC, JP, and LG designed the study, interpreted the data, and finalized the manuscript. YC, XG, YF, LJW, and YS contributed to performing the experiments, data collection, and interpretation. All authors contributed to the article and approved the submitted version. | 6,638.8 | 2020-11-13T00:00:00.000 | ["Biology"] |
Aerodynamic Noise Prediction Using Stochastic Turbulence Modeling
Amongst the many approaches to determine the sound propagated from turbulent flows, hybrid methods, in which the turbulent noise source field is computed or modeled separately from the far-field calculation, are frequently used. For basic estimation of sound propagation, less computationally intensive methods can be developed using stochastic models of the turbulent fluctuations (the turbulent noise source field). A simple and easy-to-use stochastic model for generating turbulent velocity fluctuations, called the continuous filter white noise (CFWN) model, was used. This method is based on the classical Langevin equation to model the details of the fluctuating field superimposed on averaged computed quantities. The resulting sound field due to the generated unsteady flow field was evaluated using Lighthill's acoustic analogy, with a volume integral method used to evaluate the analogy. This formulation presents an advantage, as it confers the possibility to determine separately the contributions of the different integral terms, and of the different integration regions, to the radiated acoustic pressure. Our results were validated by comparing the directivity and the overall sound pressure level (OSPL) magnitudes with the available experimental results. The numerical results showed reasonable agreement with the experiments, both in maximum directivity and in the magnitude of the OSPL. This method is a very suitable tool for the noise calculation of different engineering problems in the early stages of the design process, where rough estimates using cheaper methods are needed for different geometries.
INTRODUCTION
One of the major contributors to an aircraft's overall noise is its propulsive jet. In order to design quieter aircraft, noise reduction in jets has become a major area of jet research [1]. This is a difficult task because of the noticeable inefficiency of turbulence as an acoustic source. When there is no solid surface in the flow field, quadrupole acoustic sources formed by the turbulent Reynolds stresses are responsible for generating most of the sound [2]. Three hybrid methods may be used in computational aeroacoustics to study compressible jet flows, each with its own particular way of computing the near-field turbulent flow and far-field noise data [3]. The first approach relies on direct numerical simulation (DNS), in which the near field is computed by solving the full compressible Navier-Stokes equations; however, the practical application of DNS is limited to low Reynolds numbers and simple geometries. The second approach uses the mean turbulent flow field, computed with some turbulence modeling method, combined with a statistical source representation for the noise. In the third approach, the turbulent mean flow is computed as in the second method, but the details of the turbulent fluctuation field are regenerated by stochastic or random-walk models; then Lighthill's analogy or a Kirchhoff integral [4] is used to estimate the noise in the far field.
In all of the above methods, the near field has to be computed first. Stochastic or random-walk models have proved to be a successful and flexible tool for simulating turbulent fluctuations in high-Reynolds-number turbulent flows. They can take account of inhomogeneities, unsteadiness, or non-Gaussian distributions in the flow, and they can also be used for complex flows [5]. Statistical methods are also used for subgrid-scale modeling in LES simulations [6]: in this approach, large eddies are solved numerically and small eddies are modeled stochastically. More thorough descriptions of various computational aeroacoustic methods, with more emphasis on the hybrid methods, can be found in [7,8].
Here we used volume integration methods for the far-field noise prediction instead of the more common surface integral methods. This type of acoustic post-treatment renders the CFD calculation less computationally intensive.
The volume integral approach also seems advantageous in that it allows a detailed physical examination of the noise creation process, through the differentiation of the source types (entropy, shear, …) and through the analysis of the spatial distribution of noise sources [9]. In this paper, the turbulent mean flow of a two-dimensional, compressible, cold jet at Mach 0.56 is computed using RANS with the two-equation k-ε RNG model; the mean-flow quantities are then exported for use in the stochastic turbulence generation code to simulate the fluctuating velocities; and finally the far-field noise is computed using Lighthill's volume integration method.
Characteristics of the Two-Dimensional Jets:
We considered a free cold-jet configuration for applying our method because most of the references and available data in the literature are regarding this problem. In a free cold-jet configuration due to very large velocity differences at the surface of discontinuity, large eddies are formed that cause intense lateral mixing.
We know that in the zone of establishment of the jet, there is a core region that has constant velocity and very little turbulence. After the zone of establishment, diffusion of the momentum of ambient fluid reaches the centerline of the jet and the mean velocity on the symmetry line starts to decrease downstream thereafter. Figure 1 shows these properties of a free jet. The lowest Reynolds number used in this study is chosen to be at least 200000. Hence, it is much higher than the critical Reynolds number of a free jet.
As our stochastic method (discussed further in the next section) needs the turbulence kinetic energy and its dissipation rate at each grid point, we chose to solve the RANS equations with the two-equation k-ε method for closure. A simple two-equation k-ε RANS solver code was used on a structured grid for this purpose.
Because of the fact that the mean turbulent quantities are symmetrically distributed, only half of the flow field above the symmetry line was considered for computing the mean quantities of the turbulent flow. All boundaries have constant pressure imposed as boundary condition.
Description of the Stochastic Model:
Turbulence fluctuations are random-like functions of space and time. In this study, the continuous filter white noise (CFWN) model [10], which is based on the classical Langevin equation [5], was used to simulate the instantaneous fluctuating velocity of the flow field.
The model solves the Langevin equation

$$\frac{du'_i(t)}{dt} = -\frac{u'_i(t)}{T_I} + \sqrt{\frac{2\,\overline{u'^2_i}}{T_I}}\;\xi_i(t) \qquad (1)$$

where $\overline{u'^2_i}$ is the mean-square of the ith fluctuating velocity (the summation convention on underlined indices is not applied), $T_I = 0.30\,k/\varepsilon$ is the Lagrangian integral time, and $\xi_i(t)$ is a Gaussian vector white-noise random function with spectral intensity $S^n_{ij} = \delta_{ij}/\pi$. In the numerical method, the white noise over a time step $\Delta t$ is computed as $\xi_i = G_i/\sqrt{\Delta t}$, where $G_i$ is a zero-mean, unit-variance, independent Gaussian random number that has to be computed at every time step $t$ over the entire time domain.
Equation 1 has to be solved for each direction of the flow field independently to obtain the velocity fluctuations in that direction. The numerical data needed for solving Equation 1 at each point of the flow field are the mean velocities, the kinetic energy of turbulence (k), the rate of dissipation of the kinetic energy of turbulence (ε) (all computed in the RANS solver), and the Gaussian random numbers G_i, which are generated using the polar form of the Box-Muller transformation, a fast and robust method for generating Gaussian random numbers [11]. Here, Eq. 1 is solved analytically, and only the integration in the analytical solution is computed numerically; this way, less computational error is introduced.
Since different equations are solved for each dimension, the generated turbulence field is not necessarily isotropic. Also note that this equation takes into account the intensity of local turbulence at each point via the use of the kinetic energy and dissipation rate in the formulation.
This technique has some advantages compared to other techniques. It provides correct turbulent intensities and accounts for the proper time scale of turbulence. More importantly the model leads to the correct magnitude of turbulent diffusivity at each fluid point particle [10] .
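A minimal sketch of the CFWN update for one velocity component is given below, assuming the exact (analytic) Ornstein-Uhlenbeck solution of Eq. 1 and Gaussian numbers from the polar Box-Muller form mentioned above; the k, ε, and mean-square values are placeholders standing in for RANS output at a single grid point, not values from the paper.

```python
# Sketch of the CFWN idea: exact analytic update of the Langevin
# equation (Eq. 1) for one velocity component, with Gaussian random
# numbers from the polar Box-Muller transform.
import math
import random

def gauss_box_muller_polar():
    """Zero-mean, unit-variance Gaussian via the polar Box-Muller form
    (one of the two generated values is discarded for simplicity)."""
    while True:
        v1 = 2.0 * random.random() - 1.0
        v2 = 2.0 * random.random() - 1.0
        s = v1 * v1 + v2 * v2
        if 0.0 < s < 1.0:
            return v1 * math.sqrt(-2.0 * math.log(s) / s)

def cfwn_series(u2_mean, k, eps, dt, n_steps):
    """Generate u'(t) using the exact Ornstein-Uhlenbeck update of Eq. 1."""
    t_i = 0.30 * k / eps                    # Lagrangian integral time
    a = math.exp(-dt / t_i)                 # decay factor over one step
    b = math.sqrt(u2_mean * (1.0 - a * a))  # keeps the variance at u2_mean
    u = math.sqrt(u2_mean) * gauss_box_muller_polar()
    series = [u]
    for _ in range(n_steps):
        u = a * u + b * gauss_box_muller_polar()
        series.append(u)
    return series

# Placeholder RANS values at one grid point.
u_fluct = cfwn_series(u2_mean=30.0, k=50.0, eps=1.0e5, dt=1.0e-6, n_steps=1000)
```

The exact update reproduces both the prescribed mean-square intensity and the Lagrangian time scale of the fluctuations, which is the property the text attributes to the CFWN model.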
Evaluation of the Far Field Noise:
In order to evaluate the far-field noise emitted from the turbulent velocity distribution, we use the volume integration prescribed by Lighthill's analogy [2]:

$$p'(\mathbf{x},t) = \frac{1}{4\pi c_0^2}\,\frac{\partial^2}{\partial x_i\,\partial x_j}\int_V \frac{T_{ij}\!\left(\mathbf{y},\,t-|\mathbf{x}-\mathbf{y}|/c_0\right)}{|\mathbf{x}-\mathbf{y}|}\,d^3\mathbf{y}$$

where $T_{ij}$ is Lighthill's quadrupole source, which in most cases can be replaced by $\rho u_i u_j$. Note that $T_{ij}$ is calculated at the retarded time, i.e., the time needed for the sound waves to travel the distance between the source and observer positions. Here, all discretizations are done using 4th-order finite difference schemes [12].
Computing the sound propagated from a turbulent flow region with the aid of volume integral methods has its own pros and cons. While a surface integral method avoids computing 3D integrals with double space or time derivatives, it nevertheless comes with certain difficulties: the acoustic pressure has to be calculated using CFD up to a control surface situated in the uniform flow, which imposes a fine mesh over a relatively extended domain. The use of volume integral methods for acoustics alleviates this constraint, since the pressure fronts do not have to be propagated up to a control surface. Furthermore, only the dominant noise production sources have to be finely captured.
The main drawback of our formulation, which applies a double space derivative outside of the integral, is the increased computational cost of the acoustic computation: since the spatial differentiation takes place at the observer location, three integrals have to be computed for 24 points around the observer location.
This formulation confers the possibility to determine separately the contributions of the different integral terms, and of the spatial distribution of noise sources, to the radiated pressure. Figure 3 presents a schematic of the far field and the computational flow region. The overall sound pressure level (OSPL) at the far field is computed along the perimeter of a half circle of radius |X| (X being the observer position vector).
Evaluation of the Mean Flow Properties:
To check the accuracy of our RANS numerical results, the mean velocity on the symmetry line of the jet is compared with experimental data, using the experimental profile suggested by Zijnen [13], where b_0 is half of the jet exit nozzle width and U_0 is the jet velocity at the nozzle exit. In this study, U_0 = 190 m/s and b_0 = 0.0005 m. The experimental relation is only valid in the fully developed region of the jet flow, whereas the results from the RANS code are valid everywhere in the flow field, even in the potential core region of the jet. Comparison with experimental data can therefore be done only in the fully developed region of the jet, far from the nozzle exit. As shown in Fig. 4, the computed mean velocity on the symmetry line lies on the experimental data in the fully developed region of the jet (as expected).
Fig. 4: Comparison of numerical with experimental velocities [4] on the symmetry line.
Another test that can be used to check the validity of the numerical results is the mean velocity profile along lines normal to the symmetry line. The experimental data fit for the velocity profiles is also taken from Zijnen [13].
In Fig. 5, the comparison between the numerical results and the corresponding experimental data is presented. These mean velocity profiles are non-dimensionalized using $u_m$ from Eq. 3. As mentioned earlier, the experimental relations hold only in the fully developed region of the jet flow.
Hence, further away from the jet exit (larger x/D values), the numerical results better match the experimental data.
Validation of the Stochastic Model Used:
To check the accuracy of the turbulence field generated using the CFWN model, we computed the temporal power spectral density of the fluctuating velocity at the center of the jet. The ensemble average of the computed power at each frequency is plotted against frequency in Fig. 6, and the slope of the computed averaged spectrum is compared to a line with −5/3 slope. As shown in Fig. 4, there is a region right after the jet outlet that has the same velocity as the jet exit. This region, called the potential core of the jet, has a wedge shape: in it, the momentum of the still medium surrounding the jet has not yet diffused to the line of symmetry. The potential core can be observed in the velocity fluctuation contours of Fig. 7; inside the core region, the flow is not turbulent and therefore no fluctuations are present. It is known that the velocities at adjacent points in a turbulent flow are correlated with each other; these dependencies are expressed mathematically by the two-point spatial correlation. The CFWN method is categorized as a one-point method, because the computed velocity fluctuations at one point do not affect the velocity fluctuations at adjacent points. Hence, correct two-point spatial correlations cannot be reproduced by this method.
Results and Validation of the Far Field Noise:
Power spectra of the density fluctuation at a distance of |X| = 1000D (D is the jet exit width) from the nozzle exit are presented in Fig. 8. Since we evaluate the exact form of Lighthill's volume integral, it is possible to compute the contribution of the noise produced by any segment of the flow field separately. The different integration zones selected in this study to evaluate the volume integral are shown in Fig. 9 [15]. In Fig. 10, the overall sound pressure level (OSPL), as defined by Eq. 6, is shown for the different integration regions of Fig. 9 on a half circle of radius |X| = 200D. By comparing different choices of integration zones and their corresponding OSPL, we can see that regions containing large velocity fluctuations are most effective in propagating sound to the far field. For example, regions 4 and 5, which have the same length but different widths, produce almost the same amount of sound: even though zone 5 is much larger than zone 4, both contain almost the same amount of velocity fluctuations. Therefore, it is only necessary to integrate over the highly turbulent regions to compute the sound produced in the jet flow.
Fig. 11: Comparison of the numerical results with the experimental data of Lush [16] and SAE [17] for M = 0.56 and |X| = 120D
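Since Eq. 6 itself is not reproduced here, the sketch below assumes the standard OSPL definition, 10·log10(p'²_rms/p_ref²) with p_ref = 20 µPa, and applies it to a synthetic pressure signal; it illustrates how an OSPL value at one observer angle would typically be obtained, not the paper's exact formula.

```python
# Hedged sketch: OSPL from a pressure time series at one observer angle,
# assuming the conventional reference pressure p_ref = 20 micro-Pa.
import numpy as np

def ospl_db(p_fluct, p_ref=2.0e-5):
    """Overall sound pressure level in dB from pressure fluctuations [Pa]."""
    p_rms_sq = np.mean(np.square(p_fluct - np.mean(p_fluct)))
    return 10.0 * np.log10(p_rms_sq / p_ref**2)

# Synthetic test signal: a 1 Pa rms sine should give roughly 94 dB.
t = np.linspace(0.0, 0.01, 10000)
p = np.sqrt(2.0) * np.sin(2.0 * np.pi * 2000.0 * t)
print(f"OSPL = {ospl_db(p):.1f} dB")
```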
Overall Sound Pressure Level at Different Angles
In Fig. 11, the overall sound pressure levels from our numerical results are compared with the experimental data of Lush [16] and SAE [17]. The OSPL on a half circle with a radius of 120D from the jet exit is presented. As shown in Fig. 11, the general trend of the numerical results is in reasonable agreement with the experimental data. The major difference between the numerical results and the experimental data is in the prediction of the maximum directivity angle of the jet. The numerical results show the maximum directivity at the jet axis (0 degrees). There are no experimental data in the vicinity of the jet axis because of practical difficulties, but it is known that the maximum directivity of the jet occurs at about 30 degrees from the jet axis.
There can be several reasons for this discrepancy. The CFWN method used here does not account for the spatial structures that exist in real turbulence, and since the directivity of the sound emitted from turbulent flows is due to the large eddy structures in the flow, the discrepancy in the prediction of the maximum directivity is to be expected. Also, the integration domain has to be large enough to contain all of the noise sources in the flow, but as the CFD domain would become excessively large for far-field calculations, only a fraction of the domain is considered here.
CONCLUSIONS
The stochastic method used here to simulate the velocity fluctuations satisfies the temporal properties of the turbulence and takes into account the local intensity of the turbulent flow. The calculated OSPL values and trends are in good agreement with the experimental data in the literature.
It seems that the combination of the CFWN method and Lighthill's volume integration is a good method for quick estimation of the OSPL with both reasonable computational speed and relatively good agreement with the experimental data.
This method is not as accurate as LES or DNS methods, but since LES or DNS data in the near field are not always available, or are too costly to generate for most geometries, it is a good alternative for finding quick estimates. The method is not limited to the free-jet problem and can be used for other geometries. | 3,873.8 | 2008-09-30T00:00:00.000 | ["Engineering", "Physics"] |
An Entropy Based Method for Removing Web Query Ambiguity in Hindi Language
: Problem statement: WSD is a core problem of many Natural Language Processing (NLP) tasks, and information retrieval is one of them. Information retrieval in the Hindi language faces the same WSD problem. Hindi is spoken by the majority of the population in India, and natives from rural areas come up against this obstacle in Hindi-language information retrieval. End users do not understand how an information retrieval system can remove the ambiguity in their queries, so an automatic disambiguation system is required to rectify this problem. Various researchers have worked on this and proposed solutions, but none of them tried to detect the ambiguity in the query before its disambiguation. Approach: We followed an entropy-based selective query disambiguation approach for Hindi-language information retrieval. The approach identifies the ambiguity in the query, which is then disambiguated. The approach is also inspired by Google's "Did you mean…" feature for English queries. This study summarizes the ambiguity detection approach, as prior ambiguity detection conserves computation power. Results: We applied the selective query approach to a set of fifty queries, of which 35% were unambiguous. A survey of the results shows that, several times, even if the query contains a polysemous word, it is detected as unambiguous. Conclusions/recommendations: The study concludes that the detection of ambiguity is quite important, as it saves computational time. Following ambiguity detection, final disambiguation can be done through human intervention based on the Google feature.
INTRODUCTION
Ambiguity in natural language is considered a major barrier in language processing applications, especially in information retrieval. Some query terms have a clear-cut sense in their query, while others carry ambiguity. The problem also persists in Hindi-language information retrieval, which on the web is still in its nascent stage. The number of users who want information in the Hindi language is increasing, which creates demand for Hindi information retrieval on the web. To date, the Internet in India has been used vigorously mainly by people who are comfortable with the English language, and the underdevelopment of the web in Indian regional languages is one of the important reasons behind the limited growth of the Internet in India. Indians use 22 official languages and 11 written scripts; among all these languages, Hindi is spoken by the majority of the population of India. About 5% of the population understands English as a second language, while Hindi is spoken by about 30% of the population [4]. This generates the need to develop powerful tools for Hindi-language information retrieval.
Various search engines are available on the Internet as independent search engine sites in English, but very few Hindi-language search engines (such as Google, Raftaar, and Webkhoj) are available. The search engines that support Hindi-language search are not able to provide appropriate results for a user query, and they face various problems with Hindi-language information retrieval. Sense ambiguity is one of the major problems in web information retrieval in the Hindi language. Many words are polysemous in nature, and identifying the appropriate sense of a word in a given context is a difficult job for search engines. Word sense disambiguation provides a solution for many natural language processing systems, including information retrieval.
Sense ambiguity in Hindi-language queries can be understood from the example query "मेहनत का फ़ल (result of hard work)", which consists of three terms. It is unclear from this query whether the user is interested in फ़ल as a fruit, फ़ल as a result, or फ़ल in the context of a cutting device. Here फ़ल is a polysemous word. Before we resolve the ambiguity in a query, the first step should be the identification of the level of ambiguity in the query.
We tried an approach whose first step is ambiguity detection; to finally resolve query ambiguity, we attempted to use a tool similar to Google's "Did you mean…?" for English queries. Though Google also supports Hindi-language information retrieval, it does not offer a similar facility for Hindi. We therefore endeavored to apply the same approach to Hindi-language queries, in which we can confirm with the user the particular sense intended in the query, as in: "Did you mean फ़ल as a fruit, फ़ल as a result, or फ़ल in the context of a cutting device?"
The existing word sense disambiguation tools, which map words to their synsets, can be adapted following the above motivation to detect the level of ambiguity for each query term. According to our approach, if the ambiguity passes a threshold, we prompt the user with the two most likely senses. The identified sense can then be used to filter out documents that do not contain the correct sense.
The WSD approaches used for English rely on WordNet. Our approach uses the Hindi WordNet [8], which presently incorporates nouns only; our approach to Hindi-language disambiguation is therefore concerned with nouns only.
The problem statement: The given query Q contains one or more query terms q_1, q_2, q_3, …, q_m. The query results in a set of relevant documents D.
Some query terms are polysemous, and the query Q therefore has a potential set of senses S = {s_1, s_2, …, s_n}.
In the context of Hindi-language information retrieval, we need to eliminate the कारक (preposition) words such as ने, को (to), से (from), के लिए (for), and में (in), and the योजक (conjunction) words such as या (or), किन्तु (but), परन्तु (but), क्योंकि (because), तथा (and), and अन्यथा (otherwise). After eliminating these words, only a few keywords are left that represent the core query. After this elimination, we can detect the ambiguity in the query.
The ambiguity is detected in a query Q that has polysemous words. We rely on user input to make the ultimate decision about the intended sense: the user is prompted with the two most likely senses and selects the correct sense s_n ∈ S.
If a query term q_i is ambiguous, the user is allowed to identify the correct intended sense, and the subset of results from D that match the intended sense is presented. The disambiguation relates to the result set rather than the query, because it is the result set, not the query, that is ambiguous. It is favorable to first identify the ambiguity in the query: not all queries are ambiguous in nature, and it is necessary to identify the queries that can benefit from sense disambiguation.
The process of selecting an intended sense gets difficult when no sense has a dominating share in the retrieved result set; if one sense does dominate, determining the ambiguity level of the query is quite easy.
MATERIALS AND METHODS
Detecting ambiguity: The focus of the ambiguity detection method is to measure the ambiguity of a query term q_i from a query Q. In general, WSD algorithms use a probabilistic approach in which each sense is tagged with some probability of being correct; terms whose senses are tagged with low probability are likely to be ambiguous.
Since our approach is applied in an information retrieval setup, we define the ambiguity of a query in relation to the top k relevant documents for that query. Detecting ambiguity first is a better option than risking disambiguation errors. For example, if there are no documents about फ़ल as a fruit, it would be meaningless to ask the user whether they mean फ़ल as खाने वाला फ़ल (an edible fruit).
Following the motivation of [2], the ambiguity of a query term is defined as a function of the senses it takes in the relevant documents. For a query term q_i and a set of k relevant documents D_k in which q_i takes n senses, a maximum-likelihood probability distribution p_{q_i} over the senses is defined as

$$p_{q_i}(s) = \frac{C(s, q_i, D_k)}{\sum_{s' \in S} C(s', q_i, D_k)}$$

where C(s, q_i, D_k) is the number of times term q_i takes sense s in the document set D_k. From this probabilistic sense distribution, the ambiguity of a query term is defined as the entropy of its sense distribution, entropy being a numeric measure of the uncertainty of the outcome:

$$H(q_i) = -\sum_{s \in S} p_{q_i}(s)\,\log p_{q_i}(s)$$

Finally, to detect the ambiguity in the query, a threshold θ_q is calculated on the basis of the entropy of the sense distribution. If the entropy exceeds the threshold θ_q, the query is considered ambiguous.
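A minimal sketch of this ambiguity check is given below; the sense counts and the threshold value are hypothetical (the paper's exact threshold formula is not reproduced here), and the counts are assumed to come from mapping occurrences of the query term in the top-k documents to Hindi WordNet senses.

```python
# Entropy-based ambiguity detection for one query term, with
# hypothetical sense counts C(s, q_i, D_k) and threshold.
import math

def sense_entropy(counts):
    """Entropy H(q_i) of the maximum-likelihood sense distribution."""
    total = sum(counts.values())
    h = 0.0
    for c in counts.values():
        if c > 0:
            p = c / total
            h -= p * math.log2(p)
    return h

counts = {"fruit": 6, "result": 7, "blade": 1}  # senses of फ़ल in D_k
h = sense_entropy(counts)
theta_q = 0.5                                   # placeholder threshold
print(f"H = {h:.3f} -> {'ambiguous' if h > theta_q else 'unambiguous'}")
```

A term whose occurrences all map to one sense gives H = 0 and is skipped, which is how the selective approach conserves computation on unambiguous queries.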
Finding the most appropriate senses: The Lesk [1] approach, slightly modified by Pushpak Bhattacharya [3], can be followed to find the two most appropriate senses of the ambiguous words after the ambiguity level of the query has been detected. The Bhattacharya approach is as follows (a sketch of step 2(b) is given below):
1. For a polysemous word q_i which needs disambiguation, a set of context words in its surrounding window is collected. Let this collection be C, the context bag.
2. For each sense s of q_i, do the following:
(a) Let B be the bag of words obtained from the hypernyms, the glosses of hypernyms, the example sentences of hypernyms, the hyponyms, the glosses of hyponyms and the example sentences of hyponyms.
(b) Measure the overlap between C and B using the intersection similarity measure.
3. Output the senses s_1 and s_2 with the maximum overlaps as the most probable senses.
The idea behind using the intersection similarity measure is to capture the belief that there will be a high overlap between the words in the context and the related words found from the Hindi WordNet [8] lexical and semantic relations and glosses. We then proceed to the next step, human intervention.
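A minimal R sketch of step 2(b), the intersection similarity between the context bag C and the sense bags B; the word lists are illustrative placeholders, not entries from the Hindi WordNet:

```r
# Context bag C: words from the window around the ambiguous term
# (illustrative Hindi words, not actual Hindi WordNet entries).
context_bag <- c("मेहनत", "परिणाम", "सफलता")

# Sense bags B, one per sense; in the full algorithm these are built from
# hypernyms, hyponyms, their glosses and example sentences.
sense_bags <- list(
  sense_fruit  = c("पेड़", "मीठा", "खाना"),
  sense_result = c("परिणाम", "सफलता", "काम")
)

# Step 2(b): intersection similarity = size of the overlap between C and B.
overlaps <- sapply(sense_bags, function(B) length(intersect(context_bag, B)))

# Step 3: the two senses with the maximum overlap are offered to the user.
top_two_senses <- names(sort(overlaps, decreasing = TRUE))[1:2]
```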
Human intervention: After the most appropriate senses are found, the user is prompted to select the one sense appropriate to the particular context and then receives the corresponding subset of the relevant documents. If the query does not pass the threshold it is unambiguous, and in that case steps 2 and 3 are skipped.
Related work:
Various researchers have studied the effect of the ambiguity problem on the performance of information retrieval. According to Sanderson [2], short queries benefit most from ambiguity resolution; his study showed that disambiguation led to better performance. Lesk [1] proposed an algorithm for WSD, implemented it on short text samples and obtained good results. With a quite similar approach, Pushpak Bhattacharya [3] applied his algorithm to Hindi language WSD; however, his algorithm does not detect ambiguity in queries.
Krovetz and Croft [5] studied the relationship between sense mismatch and irrelevant documents. They concluded that the co-occurrence of multiple words interacting within a query naturally performs some element of disambiguation, indicating that disambiguation might only be of benefit for short queries.
Weiss [6] showed that ambiguity resolution led to only a 1% increase in accuracy. All of the research mentioned above deals with the disambiguation of all queries, whereas our approach is concerned only with the queries where ambiguity is highest. Vogel and Kochher [7] also focused on short sample queries; they suggested disambiguating only those queries where ambiguity is detected, and applied their approach to English queries.
Quantitative evaluation: The queries are evaluated quantitatively on the basis of the formulas for entropy and threshold given above.
Hindi uses कारक (case markers) and योजक (conjunctions). These words are eliminated from the query; after removing them we are left with the major terms of the query.
A total of 50 queries were tested on the Google search engine and, keeping in mind the limited amount of Hindi language content, the first 20 results were considered for the evaluation. The Hindi WordNet [8] was used for sense mapping of the query terms.
Query "मे हनत का फ़ल (result of hard work)" on Google result into 14 relevant documents. After elimination of "का" we left out with the two terms: • q 1 = मे हनत (hard work) has one sense according to Hindi WordNet • q 2 = फ़ल (result) has three senses according to
Hindi WordNet
The probability of the single sense of मेहनत is one and the entropy is 0, hence no threshold can be calculated.
The set of relevant documents has 14 members, so k = 14 and the relevant document set is D_k.
The probability distribution over the senses of query term q_2, computed according to equation (1), yields an entropy of 0.2605 according to equation (2). The threshold calculated from this entropy is 1.0745. Since the entropy is less than the threshold, the uncertainty of the outcome does not pass the threshold, and we conclude that this query is not ambiguous.
On evaluating another query, "वर्ण विभेद", on Google we get 18 relevant documents. According to the Hindi WordNet there are 3 senses for वर्ण and 1 sense for विभेद; here वर्ण is a polysemous word.
The probability of the single sense of विभेद is one and the entropy is 0, hence no threshold can be calculated.
The probability distribution over the senses of query term q_1, computed according to equation (1), yields an entropy of 0.8800 according to equation (2). The threshold calculated from this entropy is 0.1200. Since the entropy is greater than the threshold, the uncertainty of the outcome passes the threshold, and we conclude that this query is ambiguous.
Five sample queries are listed in Table 1; here Yashoda is the name of a lady. The central idea is to consider the distribution of a query term's senses in the available relevant document set, as discussed earlier. According to the results, the highlighted terms are ambiguous, since their entropy exceeds the threshold. The results also show that a query containing a polysemous word is not necessarily ambiguous, because its entropy may remain below the threshold; in that case we do not prompt the end user to select a sense.
We used the Hindi WordNet [8], developed at the Indian Institute of Technology Bombay, India, as the lexical database for mapping senses in the evaluation. The Hindi WordNet brings together different lexical and semantic relations between Hindi words; it organizes lexical information in terms of word meanings and can be regarded as a lexicon based on psycholinguistic principles.
Entropy and threshold are used as the measures for ambiguity detection in queries. The entropy depends solely on the probability distribution over the senses of a particular keyword, whereas the threshold is derived from the entropy itself.
RESULTS
We tested the algorithm on fifty specially designed queries (TREC pattern); a quantitative evaluation of ambiguity detection for five randomly selected queries is presented in Table 1. The results for the remaining queries are very similar.
The results make it evident that detecting ambiguity is an important step before disambiguation.
The data in Table 2 show that, out of the 50 queries tested on Google, ambiguity detection succeeded in 45 queries; 35% of the queries were unambiguous even though they contained polysemous words.
Our approach successfully identifies the queries whose ambiguity warrants disambiguation. WSD systems in general waste computational power disambiguating unambiguous queries; early detection of ambiguity saves this effort. It is also evident from the results that a query containing a polysemous word is often not ambiguous.
DISCUSSION
This study discussed and summarized an approach for detecting ambiguity in Hindi language queries on the web. Future research will also cover the evaluation of the human intervention step, which will provide a qualitative evaluation of the study.
The approach has some potential for error because the Hindi WordNet [8] is arbitrarily fine grained. For example, in the query "गुलाब की कलम (rose cutting for planting)" the query term "कलम" has 9 senses according to the Hindi WordNet, but some senses are hard to distinguish and could be merged, such as the senses "पेन (pen)" and "तूलिका (brush)" of the keyword "कलम". Future work could address this with more robust tools. Previous work on disambiguating Hindi queries is due to Pushpak Bhattacharya [3], who adapted the Lesk [1] approach: Lesk used machine readable dictionaries (MRDs), whereas Bhattacharya implemented the Lesk algorithm using the Hindi WordNet's lexical semantics. His experiments were concerned with disambiguation only, while our work is situated in Hindi language information retrieval and its central idea is ambiguity detection.
Human intervention in lexical query disambiguation can be an effective tool for information retrieval applications. Detecting ambiguity using the concepts of entropy and threshold was found to be quite successful. Ambiguity detection improves the performance of WSD-based applications and reduces the load on the system by avoiding useless efforts to disambiguate unambiguous queries, and it provides a robust mechanism for presenting results to the user for a better conception of the contents of the result set. | 4,236.4 | 2008-09-30T00:00:00.000 | [
"Computer Science",
"Linguistics"
] |
Application of ternary logic for digital signal processing
The article shows that digital processing of slowly changing signals based on ternary logic has significant advantages over processing based on binary logic. It is shown that, for slowly changing signals, digital processing can be performed by dividing the analysed signal into frequency bands. In particular, information on the derivatives of a slowly varying signal whose spectrum lies in the frequency band from 0 to ω_0 can be restored from the analysis of its component lying in the frequency band from ω_0/3 to ω_0.
Introduction
There are a number of reports [1][2][3] which show that, for digital signal processing, ternary logic has a number of advantages over binary logic. The main advantage is a reduction in the number of operations required to convert an analogue signal into digital form: according to available estimates, this number is approximately 1.5 times smaller with ternary logic than with binary logic. However, the clearest examples of the benefits of ternary logic over binary concern the digital processing of signals that change relatively slowly in time (more precisely, signals whose time derivative does not exceed a certain value). For such signals a Δ-cover can be formed. In other words, it is possible to choose the subinterval Δ between discrete levels so that the signal value s_{i+1} on the (i+1)-st beat differs from the value s_i on the i-th beat by no more than Δ; that is, if the signal on the i-th beat corresponds to the discrete level number j, then the signal on the (i+1)-st beat corresponds to one of the levels j−1, j, j+1.
Obviously, the amount of information contained in such a signal is most conveniently measured in trits (a unit of information [4] named by analogy with the generally accepted unit "bit", but relating to ternary rather than binary logic). The quantity of information that exhaustively describes signals of the type under consideration is N trits, where N is the number of beats (up to the information contained in the initial value of the signal). However, for historical reasons ternary logic has not yet become widespread, although a number of tasks require the digital processing of slowly changing signals. An example is any system that tracks values changing relatively slowly in time (in particular, systems built on measurements of inertial quantities, e.g., the temperature of sufficiently massive bodies [5]). Another typical example is adaptive optics systems intended for the solar energy industry, specifically for tracking the position of the Sun in the sky [6][7][8]. This article provides additional evidence of the benefits of ternary logic. More precisely, it is shown that the digital processing of signals which have a Δ-cover, in terms of ternary logic, can be performed using frequency filters (that is, by separating the original signal into frequency ranges).
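A small R sketch (the level sequence is an assumed illustration, not taken from the article) of how such a signal reduces to one trit per beat: since consecutive beats can differ by at most one level, the level sequence is fully described by its initial value plus increments in {−1, 0, +1}:

```r
# Discrete level numbers of a slowly changing signal, one value per beat;
# the Delta-cover property guarantees that consecutive levels differ by
# at most one (illustrative sequence).
levels_per_beat <- c(5, 6, 6, 7, 6, 5, 5, 6)

# One trit per beat: the increment between consecutive beats, in {-1, 0, +1}.
trits <- diff(levels_per_beat)

# The signal is fully recovered from its initial level and the trit stream.
recovered <- cumsum(c(levels_per_beat[1], trits))
stopifnot(all(recovered == levels_per_beat))
```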
Problem statement
Currently, digital methods are widely used for signal processing. In the vast majority of cases digital signal processing uses binary logic, and hence analogue-to-digital converters, which must be applied in the same way to fast-changing and to slowly changing signals. Such converters cannot exploit the fact that the derivative of a slowly changing signal is limited: each clock cycle is processed independently. As a result, the existing types of analogue-to-digital converters require a significant number of operations. If the specifics of slowly changing signals are taken into account, the number of operations needed to translate them into digital form is significantly reduced [9][10][11][12][13]. There is every reason to believe that reducing the number of operations in digital signal processing is becoming an increasingly urgent task due to the development of the Internet of things [14], big data [15] and the digitalization of the economy as a whole [16]. The further development of ternary logic is also relevant from a philosophical point of view, in particular for dialectical positivism [17,18]; within modern mathematical logic, highly non-trivial logics are likewise considered [19,20], whose construction differs significantly from the logic of Aristotle and Boole.
Prerequisites for development of a spectral method for digital processing of slowly varying signals
Obviously, if the signal is approximately replaced by a model consistent with its Δ-cover, this automatically presupposes the use of discrete beats (time intervals) during which the value of the signal is considered constant. Without loss of generality, all beats can be taken to have the same duration.
The spectrum of such a model signal is intentionally limited: it belongs to the range from 0 to ω_0. On transition to discrete beats of duration τ, the Taylor expansion of the signal takes the form

s_{i±1} ≈ s_i ± s′_i τ + (1/2) s″_i τ².   (3)

Considering that the values s_{i±1} for signals of the considered type can differ from s_i by no more than one discrete level, and that the level step Δ_0 is a constant, using formula (3) de facto means that some procedure for assessing the first and second derivatives must be found. In essence, a representation of the following form is constructed:

s_{i+1} = s_i + q Δ_0, where q can take the values −1, 0, +1.   (4)

From formula (4), even without detailed calculations, it follows that when expanding a discrete signal into a Taylor series it is necessary to keep terms up to the second derivative inclusive, because the first derivative alone cannot reflect, for example, a situation where both s_{i+1} and s_{i−1} exceed the value s_i by one level. The question arises whether there is a connection between the two descriptions of one and the same discretely varying signal, the spectral one and the representation (4); in other words, is it possible to determine the characteristics of the mapping (4) by analysing the frequency spectrum of the signal?
Evidence of applicability of a spectral method of digital processing to slowly varying signals
Let us divide the spectral interval on which the spectrum of the function under consideration is defined into the following three subintervals: [−ω_0, −ω_0/3), [−ω_0/3, +ω_0/3] and (+ω_0/3, +ω_0]. On the central subinterval the low-frequency description of the signal applies directly, and here also ω_1 = ω_0/3 holds.
Summing formulas (6), (8) and (9) yields a representation (10) of the initial signal in which all component functions have spectra limited to the frequency band up to ω_0/3. This, in particular, means that for every three beats of the initial signal there is only one count of each of these functions; in other words, each of them can be associated with a value that remains unchanged for three beats. Comparing the representation (10) with formula (3), which derives directly from the Taylor expansion, one can see that the information on the values of the second and first derivatives actually lies in the frequency band from ω_0/3 to ω_0, which can be separated (using typical electronic tools) from the frequency band that allows the remaining component to be restored. We emphasize that the considerations forcing us to take into account precisely the first two derivatives are fundamental; that is why the frequency range containing the spectrum of the signal was divided into two unequal parts. Consider a harmonic signal modulated by a signal in a certain frequency band. As is well known from radio engineering, the spectrum of such a signal contains combination frequencies; in other words, the band occupied by the signal becomes twice as wide as the band of the original signal. Nevertheless, the information transmitted by such a signal corresponds to the width of the original band, so the capacity of the doubled bandwidth is not fully utilized. Along with modulating a harmonic signal, say a sinusoidal one, it is possible to modulate the harmonic signal shifted in phase by a quarter period, i.e., the cosine one, which can carry the same amount of information; in other words, information can be transmitted in a doubled frequency band by modulating two such signals. For that very reason we used a quite specific division of the frequency band into two unequal parts whose widths relate as two to one. It can be seen that in this case the information on the signal is actually contained in three frequency bands, each of which is one third of the original. However, since the bands from ω_0/3 to ω_0 are actually "mixed", they need to be combined and considered together, taking into account the first and second derivatives concurrently. The diagram shown in Fig. 1 illustrates how the information obtained by radio-frequency analysis of the first and second derivatives can be used to restore the behaviour of a function with a Δ-cover over a period of three beats. The symbols correspond to changes in the signal level during the transition from the first beat to the second and from the second to the third (three possible particular cases are shown in Fig. 1). The same figure also shows example circuits illustrating the nature of the deviation of the signal from the centre beat (the right parts of the figures). For example, the first signal, with level changes following the sequence −1, 1, is schematically represented by a drawing (the right part of Fig. 1a) with two arrows pointing up, meaning that the deviation of the signal from its value on the central beat is positive both for the beat on the left and for the beat on the right. The nine possible sequences of level changes correspond to the nine possible combinations of the values of the second and first derivatives, each of which is assigned the value +1, 0 or −1.
Exactly this consideration can be taken as the basis for the digital radio-engineering processing of slowly changing signals (more precisely, signals with a Δ-cover). Namely, as follows from the considerations above, in order to establish the nature of the signal change over three selected beats it is sufficient to identify only the signs of the first and second derivatives. Thereby, dividing the signal into frequency bands allows the behaviour of the signal over three beats to be determined by analysing the sign of the signal in a well-defined frequency range. Moreover, standard radio-engineering techniques are quite sufficient to extract the signs of the cosine and sinusoidal signals.
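As a rough illustration of the band separation itself (the toy signal, the cutoff ω_0 = 9 and the use of discrete FFT masking in place of the analogue filters the article has in mind are all assumptions), the following R sketch splits a band-limited signal into the component below ω_0/3, which restores the smoothed signal, and the band from ω_0/3 to ω_0 carrying the derivative information:

```r
# Toy band-limited signal: components at 1 and 7 cycles per record, so its
# spectrum lies below w0 = 9 (frequencies in cycles per record).
n <- 300
t <- (0:(n - 1)) / n
x <- sin(2 * pi * 1 * t) + 0.3 * sin(2 * pi * 7 * t)

# Signed frequency index of each FFT bin.
X <- fft(x)
k <- 0:(n - 1)
f <- ifelse(k <= n / 2, k, k - n)

w0 <- 9
low  <- abs(f) <  w0 / 3                  # band [0, w0/3): smoothed signal
high <- abs(f) >= w0 / 3 & abs(f) <= w0   # band [w0/3, w0]: derivative info

x_low  <- Re(fft(X * low,  inverse = TRUE)) / n
x_high <- Re(fft(X * high, inverse = TRUE)) / n

# The two band components reconstruct the original signal.
stopifnot(max(abs(x_low + x_high - x)) < 1e-9)
```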
Conclusion
Thus, digital processing of signals based on ternary logic has very serious advantages over binary logic, especially for signals with a Δ-cover. In this case, comprehensive information about the derivatives of a given signal on a group of three beats can be established from an analysis of its component lying in the frequency band from ω_0/3 to ω_0, where ω_0 is the frequency corresponding to the discretization step of the original signal. To restore the signal it is sufficient to establish the values of the derivatives in digital form, i.e., to correlate them with the values of a ternary logic: "−1", "0", "+1". The proposed approach can be implemented in hardware, since its implementation is essentially based on conventional spectral methods: it is enough to select the signals in the corresponding frequency band and determine whether or not they exceed a specified threshold. Further work in this direction will significantly simplify the digital processing of slowly changing signals. We emphasize that we do not intend to contrast ternary logic with binary logic in this work; on the contrary, we predict that in the future there will be flexible platforms that, depending on the need, use one logic or the other. | 2,843.4 | 2020-10-29T00:00:00.000 | [
"Computer Science"
] |
New method for taxonomic descriptions with coded notation, producing dynamic and interchangeable output
Abstract A proposal for taxonomic species description notation is presented, replacing the traditional descriptive texts with a coded matrix and avoiding redundant adjectives and subjective descriptions. This is an attempt to enhance the species description rate and to make the description output available to other scientific disciplines, machine learning, interactive and computer-assisted identification keys, metadata analysis and its applications. The method consists of presenting the description of the overall morphology in a coded matrix, following a character list with detailed observed conditions for each character. The method is dynamic and open to amendments and the addition of new data as they become available. We test the new method by describing five new species of Collembola Symphypleona of the genus Pararrhopalites as a generalized model and make the coded output available. We conclude that a coded taxonomic description is an advance over the traditional taxonomic text, with the potential to enhance the global description rate. The generated descriptions are dynamic and expandable and can easily be used in other fields of science, allowing non-experts to access the data for phylogenetic, biogeographic and ecological studies and metadata analysis. Even though an experienced taxonomist will always be necessary to make a detailed taxonomic description, it is a step towards a general template for semi-automated taxon recognition and towards the future development of auxiliary tools for species description using machine learning, and of templates to speed up the time-consuming phase of schematic figure preparation once the expert interpretations are done.
| INTRODUCTION
Taxonomy has been the focus of debate since the XIX century, and even recently the recognition of taxonomic research has been a subject of discussion (Packer et al., 2018; Zeppelini et al., 2021). The global biodiversity crisis exposes the urgency of investment in taxonomy to reveal the largely unknown species diversity. Using Collembola as a parameter, where about 20% of the estimated diversity is known (Hopkin, 1997), between 100 and 120 new species are described each year, and it would take taxonomists more than 400 years to uncover and describe all the unknown species diversity (Potapov et al., 2020). To understand the diversification processes in Collembola, we need to speed up the rate of species description. This is a matter of concern in every area of entomology and, to some extent, the whole of zoology.
Similar to many other taxonomic groups of meso- and microfauna, Collembola taxonomy is largely based on morphological analysis, observing and describing discrete variations in diagnostic characters. The most abundant morphological source of information for species definition in Collembola is the number, distribution and shape of cuticular chaetae, known as chaetotaxy. The current morphological approaches to inferring homology, the chaetotaxic systems for chaetal identification, often leave room for great subjectivity, depending on what is seen and what is visible under an optical microscope, and different chaetotaxy systems are often hardly comparable (Betsch, 1997; Betsch & Waller, 1994; Bretfeld, 1990, 1999; Potapov et al., 2020). The challenges and perspectives for Collembola taxonomy have been discussed in detail, and the need for an integrative taxonomy and international efforts to direct financial support and expertise recognition to face the global biodiversity crisis has also been the focus of debate (Potapov et al., 2020; Zeppelini et al., 2021).
The impact of recent technologies of high-resolution imaging, molecular sequencing and machine learning will contribute greatly to taxonomic techniques that can improve the recognition of new and known taxa (Potapov et al., 2020). Integrative taxonomy, combining morphological and molecular data to define species limits, is likely to be a trend for most taxonomic groups, not only Collembola.
There is, however, a particular aspect of Collembola (and nearly every taxon of the meso- and micro-fauna) that affects the viability of including molecular sequences in new species descriptions in many, if not most, cases. It is rather a logistic problem, but many times there is no alternative: almost all new species are discovered under the light microscope, which means that the specimen was mounted on a slide after being cleared by various chemical washing techniques, which destroy the tissues and, consequently, the genetic material.
It is only after the taxonomic identification that a species is recognized as new to science or undescribed. More often than not, the material analysed is a limited set of specimens, and no material remains for molecular analysis after the taxonomic identification and morphological study if DNA/RNA extraction was not performed before mounting the specimens on slides for microscopy.
Even where molecular analysis facilities are available, the biological specimens needed for molecular sequencing may only become available in the future, after the species is described. Even when scanning electron microscopy (SEM) is possible, depending on the structure it may be hard to image all diagnostic features, and light microscopy may be needed as well. Nevertheless, high-resolution imaging and molecular data are powerful tools and may be indispensable for accurate taxonomic research and species delimitation.
Therefore, morphological descriptions must be dynamic, open to easy amendment and to the insertion of additional data. Furthermore, they must be presented in an interchangeable language, allowing the information to flow across different disciplines. Among all methods applied to the study of the external morphology of Collembola, chaetotaxy is certainly the most complex and extensively detailed (Betsch & Waller, 1994; Cassagnau, 1974; Deharveng, 1983; Fjellberg, 1999; Jordana & Baquero, 2005; Nayrolles, 1988, 1990a, 1990b; Potapov, 2001; Szeptycki, 1972, 1979; Yosii, 1960). Many chaetae and groups of chaetae vary in position and shape in such a way that they allow a great deal of homology inference.
However, the most advanced approaches are also very complex, which makes interpretation difficult and increases ambiguity. These aspects confine deep taxonomic research to restricted groups of experts and hinder comparative studies even among different orders of Collembola. In addition, traditional descriptive texts with morphological and chaetotaxic information are difficult to integrate with machine learning and computational novelties, which could greatly accelerate phylogenetic analysis, metadata comparison, biogeography and their various applications (Potapov et al., 2020). Despite all the advances in technological instruments and methods, taxonomic descriptions are still written basically in the same format as about two centuries ago, with a hermetic language in texts nearly incomprehensible to non-experts. This is often a greater barrier to communication among different areas of science than access to high-tech equipment and analytical facilities.
The proposal of a coded and illustrated description of new species that can be easily imported, transformed, amended, corrected, or expanded is presented as an alternative to the traditional descriptive taxonomic method.
The strength of the coded description is that new characters, whether morphological, molecular or ecological, can easily be added to the list, and the descriptive matrix can be improved as new information is produced. These matrices can be uploaded to public libraries, kept up to date with all available information about the species, and linked to databases such as GBIF, ZooBank and the electronic taxonomic catalogues available in different parts of the world, e.g., fauna.jbrj.gov.br/fauna/listaBrasil (Zeppelini et al., 2023) and www.collembola.org (Bellinger et al., 1996).
Finally, the new proposal for taxonomic notation will not dispense with the need for an experienced taxonomist, as the pre- and post-descriptional elements (e.g., type material, habitat, distribution, remarks) and all the analytical study (morphological, molecular) will always depend on the expertise of the researcher. Nevertheless, the identification phase can be automated, and the preparation of schematic figures, a very time-consuming phase of the whole description work, can be sped up with pop-up templates for each taxonomic group as the data matrix is filled in. This may allow taxonomists to enhance their productivity, increasing the rate of species descriptions.
| Coded taxonomic description
The order Symphypleona Börner, 1901 shows some ambiguity in current morphological methods, particularly in describing the head and body chaetotaxy (Betsch, 1997; Betsch & Waller, 1994; Bretfeld, 1990, 1999; Christiansen & Bellinger, 1998). The order comprises springtails with a globular body shape resulting from the modification and fusion of the thoracic and abdominal segments I-IV, a condition that hinders the direct assignment of segment identity.
An approach that can reduce the ambiguity of taxonomic descriptions is to describe body parts as coded morphological units, straightforwardly representing the actual body segments and appendicular whorls (Hopkin, 1997; Jura et al., 1987; Nayrolles, 1988, 1990a, 1990b, 1991; Tomizuka & Machida, 2015), in such a way that any species can be compared from the coded database. This replaces the traditional descriptive text, which often uses ambiguous terminology and applies different, not directly comparable chaetotaxic systems. The coded notation method leads to a more comprehensive analysis of the chaetotaxy, as well as direct availability of the data for comparative studies. Furthermore, a coded description can easily be amended: molecular data can be added to the character list and matrix, and new complementary morphological features can be inserted as new information becomes available.
The qualitative description of the shape and size of the different chaetae is also subject to a great deal of ambiguity and poor definition: the adjectives are not standardized, and the very definition of what is a macro-, meso- or microchaeta is not always clear.
Therefore, a bank of shapes with high-quality images is imperative to discard subjective descriptions. Several chaetae banks have been published for different groups of Collembola, some with precise line drawings (Betsch, 1980; Christiansen, 1966; Deharveng, 1983; Nayrolles, 1991) and some with SEM photography (Cipola et al., 2020; de Lima et al., 2022; Lukić et al., 2010; Zeppelini et al., 2022; Zhang & Deharveng, 2015); it is a matter of time before a fully reliable collection of chaetal shapes exists, so that a specific chaeta can be addressed directly by its reference in the bank within the coded description.
A standard, fully coded method for species description is an improvement over the traditional descriptive text: it allows machine learning and high-quality imaging to be used to enhance the efficiency of species descriptions and diversity recognition, offering a powerful tool for understanding global processes of diversification and distribution and for facing the biodiversity decline.
| Chaetal fields and morphological units
We attempt to access the chaetotaxy of the head and great abdomen of Collembola by identifying the body segments arranged in each tagma (Figures 1 and 2) or whorl in each appendage (Figures 3-5) (Hopkin, 1997; Jura et al., 1987; Nayrolles, 1988, 1990a, 1990b, 1991; Tomizuka & Machida, 2015). Each segment has its own set of chaetae, and more than one chaetal field may be observed in a single body segment. A chaetal field is a group of associated chaetae consistently observed in a given body segment, often associated with some landmark on the cuticle (Figures 6 and 7).
The morphological units are the actual observed characters in a given species. Once all the chaetal fields are recognized, the characters are listed and their inherent character states are described.
Each morphological unit in the character list is given a code (0, 1, 2, ...) for each observed condition. It is important to note that this is not a phylogenetic matrix: the codes in the resulting matrix are not meant to imply hypotheses of ordering or polarity, and both apomorphies and plesiomorphies may be listed as characters. It is a descriptive coded character state matrix.
| Head, body, and appendages chaetotaxy
Chaetotaxic systems attempt to label each chaeta along the body, where the label indicates a specific chaeta and its position on the body of the animal; a qualitative description is then made (e.g., spine-like chaeta, macrochaeta, club-shaped sensillum, palmate, serrate, lanceolate). This brings subjectivity both in the interpretation of the labelling of a specific chaeta and in the many different adjectives that can apply to a given shape, depending on the author.
Here we map regions that can be compared across different taxonomic groups, the chaetal fields, within each head and body segment (Figures 1 and 2) and for the appendages (Figures 3-5), and refer each chaeta to a shape in an image dataset, the chaetae bank, with images of each kind of chaeta found in the taxon (Figure 8).
After analysing the chaetotaxy with the traditional systems, the labelling of the chaetae is replaced by a code giving the total number of chaetae in the chaetal field, and the qualitative description of the kinds of chaetae found in each chaetal field is replaced by the number in the chaetae bank representing the actual observed shape and size, to compose the morphological unit definition (see Table 1).
| Coded descriptive dataset
The information from the whole data collection for each species results in a dataset as synthesized in Table 1; this is the final morphological description and represents the complete and up-to-date set of information for each studied species. It remains dynamic and open to additional information when available.
The coded dataset is hierarchically ordered in four columns, namely Tagma, Segment, Chaetal field and Morphological unit (Table 1); a fifth column holds the coded information of the species (Figure 1). The rows of the resulting dataset carry the different features of the chaetotaxy and general morphology (e.g., eyes, foot complex). The cells in the Morphological unit column are the actual features to be observed in the specimen, each being a recognizable morphological unit of the animal's whole morphology, described and coded in the character list (Appendix).
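To illustrate the intended structure, a minimal R sketch of such a coded dataset is given below; the rows, labels and codes are invented for illustration and are not taken from Table 1:

```r
# Illustrative fragment of a coded descriptive dataset; the hierarchy
# follows the four columns Tagma > Segment > Chaetal field > Morphological
# unit, with the species-specific code in the fifth column.
coded_description <- data.frame(
  tagma   = c("Head", "Head", "Great abdomen"),
  segment = c("Antennal", "Ocular", "Abd. II"),
  field   = c("Ant. IV, whorl 1", "Eye patch", "Dorsal posterior"),
  unit    = c("Number/type of chaetae", "Number of eyes", "Number of chaetae"),
  code    = c(2, 1, 0)  # codes defined per character in the character list
)
print(coded_description)
```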
| Testing the coded description
To test the proposed system, we describe five new species of the genus Pararrhopalites Bonet & Tellez, 1947 (zoobank.org:pub:9ED865EA-F95A-4CBE-947C-3A5C6CD81907), from the order Symphypleona, using Pararrhopalites fallaciosus sp.n. as the explanatory example in the SEM overall morphological analysis. First, we describe P. fallaciosus sp.n., where all the chaetal fields are delimited and the species is morphologically defined. The character list is derived from this revision and presented in the Appendix. The final descriptions of the remaining new species carry the code only (morphological unit code).
Here we propose the coded description for Pararrhopalites, a genus of Symphypleona; however, once fully established, the system should be applicable to the orders Poduromorpha and Entomobryomorpha as well. Ideally, a similar approach could be used for any other zoological taxon.
The chaetotaxic systems were used to verify the congruence of the morphological landmarks and the associated groups of chaetae displaying constant expression (i.e., chaetal fields). All chaetotaxic information, the actual observed character conditions, was described in the character list and coded accordingly (Appendix).
The descriptions are presented as a list of coded characters in the descriptive plate of each newly described species. The exception is P. fallaciosus sp.n., for which the morphological units are described in the matrix corresponding to the chaetotaxy system cited above, as an example of what the observed features are (before coding). The detailed chaetotaxy analysis of all five species described here is available in the Data S1.
| Habitat and distribution
The species was collected in drilling holes, occurring in the Subterranean Shallow Habitat (SSH). Its known distribution is restricted to the type locality, despite the sampling efforts in nearby areas and in the whole region, an important mining area which has been consistently sampled in the last decade. The area belongs to Good's Biogeographic zone 27 (Culik & Zeppelini Filho, 2003; Good, 1974). The climate according to Köppen's system is As (de Sá Júnior et al., 2012; Köppen, 1936; Shear, 1966), with dry winters and wet summers and average temperatures of 18°C in winter and 22°C in summer (valid for all five species described here).
| Remarks
The new species resembles P. queirozi in the shape of the subanal appendage and the cephalic chaetotaxy but can be clearly distinguished by the number of subsegments of Ant.IV (eight in P. queirozi, 10 in the new species), and by the presence in P. queirozi of an inner tooth on all ungues and of an apical filament exceeding the tip of the unguis in all three empodial complexes. The new species is similar to Pararrhopalites hermesi sp.n. in the shape of the subanal appendages and the lack of an inner tooth on all ungues. They differ in the number of eyes (1+1 in P. fallaciosus sp.n., 0+0 in P. hermesi sp.n.), the number of subsegments in Ant.IV (10 and nine, respectively), and the lack of a corner tooth on all unguiculi and a mucro with a smooth inner lamella in P. hermesi sp.n.
The reduced number of chaetae on the dorsal posterior part of the great abdomen (17+17) also differentiates P. fallaciosus sp.n. from the other species of the queirozi-group (all the species sharing the same female subanal appendages).
| Remarks
This species is part of a group with a specific kind of subanal appendage (number 28 in Figure 8), which includes P. queirozi and P. fallaciosus sp.n. The species has an intermediate number of subsegments on Ant.IV (nine), has no eyes, lacks the corner tooth on all unguiculi, and has a smooth inner lamella on the mucro. This combination of features easily differentiates the three species.
Despite its wide distribution, P. hermesi sp.n. presents some features that may indicate its relation to the SSH environment, for instance eye reduction, a shorter Ant.IV (nine subsegments), an overall small body size, and the reduction of the corner tooth and apical filament on the unguiculus.
| Habitat and distribution
This species is restricted to a single area; there are only 10 records from three small caves in the same iron rock formation. The species is most likely distributed along the canga, an SSH formation resulting from the weathering of the iron rock, which often connects caves in the same lithology.
| Remarks
This species resembles P. sideroicus Zeppelini & Brito, 2014 and P. ubiquum Zeppelini et al., 2018 in the shape of the subanal appendages and the general body chaetotaxy, but differs from all the species of the genus recorded in this area by lacking the interantennal sensillar triangle, a feature that seems to be shared by the species of both the queirozi-group and the ubiquum-group.
The presence of only three chaetae in the dorsal cephalic area DII is also very unusual for the genus, and the reduced number of chaetae on the dorsal posterior part of the great abdomen can be diagnostic for this species.
| DISCUSSION
To access the global species diversity it is mandatory to enhance the description rate of new taxa, mainly where the biodiversity is least known. A description protocol that communicates the morphological characters in a coded notation allows the application of new technologies and machine learning to the research, which may be a major turnover in the discipline and affect the species description rates. The scarcity of trained taxonomists and the hermeticity of taxonomic description manuscripts are the biggest barriers to advancing the knowledge of species diversity and of the evolutionary processes of diversification, both key elements for understanding the global biodiversity decline.
In the study of Collembola, as in many other groups, the information content of a traditional taxonomic text is often difficult to access and cannot be transferred to analytical software without a detailed revision of the species description, which often demands an expert in the taxon. The traditional format is also almost impossible to use for machine learning, as differences in the presentation of the data can make comparisons across manuscripts impossible for non-experts and for artificial intelligence. The open character list of the coded description allows easy insertion and correction of information, and the character lists, the chaetae banks and the coded species descriptions are fully compatible with technologies that work with data matrices.
Our results can be synthesized in the following conclusions: 1. The coded taxonomic description is a notation method that produces interchangeable data, fully available to different scientific disciplines. The data can be used by non-specialists for different purposes in science.
2. The method makes it possible to add any source of new data to the description when it becomes available. It is dynamic and open as a continuous list of characters; updating the knowledge of a given species does not depend on a traditional taxonomic revision.
3. The method enables machine learning that can help speed up taxon identification and species description rates where they are least known. This can be an important tool in fighting the global biodiversity crisis.
4. The coded description is conceived for Collembola but can be applied to any taxonomic group, reducing the ambiguity of narrative descriptions.
Figure 8. To the combination of chaetae number and type a numeric code is given (this example is coded 0 in the character list, Appendix).

3.3.1 | Habitat and distribution
This species is known from caves and SSH over a range of more than 200 km across different lithologies. It is a regionally widespread SSH species, but it is not abundant, as there are fewer than 20 records of it.

Figure 10. Pararrhopalites hermesi sp.n. body chaetotaxy and descriptive table.
Figure 11. Pararrhopalites hermesi sp.n. antennal chaetotaxy and descriptive table.
Figure 16. Pararrhopalites atypicus sp.n. antennal chaetotaxy and descriptive table.
Figure 21. Pararrhopalites ritaleeae sp.n. antennal chaetotaxy and descriptive table.
Figure 23. Pararrhopalites ritaleeae sp.n. abdominal appendages chaetotaxy. (a) Ventral tube, lateral view; (b) tenaculum, lateral view; (c) furcula chaetotaxy and mucronal lamellae. Solid circles: anterior view; hollow circles: posterior view.
Figure 24. Pararrhopalites ironicus sp.n. cephalic chaetotaxy and descriptive table.
Figure 26. Pararrhopalites ironicus sp.n. antennal chaetotaxy and descriptive table.
3.6 | Pararrhopalites ironicus sp.n.
Pararrhopalites ironicus sp.n. is a very restricted species, represented by only six records from two caves in the same iron rock formation, called Serra do Tamanduá; the caves are 2700 m apart and connected by SSH. It is a rare species, likely to be confined to this formation. | 5,041.4 | 2024-07-01T00:00:00.000 | [
"Computer Science",
"Biology",
"Environmental Science"
] |
Interactively visualizing distributional regression models with distreg.vis
A newly emerging field in statistics is distributional regression, where not only the mean but each parameter of a parametric response distribution can be modelled using a set of predictors. As an extension of generalized additive models, distributional regression utilizes the known link functions (log, logit, etc.), model terms (fixed, random, spatial, smooth, etc.) and available types of distributions but allows us to go well beyond the exponential family and to model potentially all distributional parameters. Due to this increase in model flexibility, the interpretation of covariate effects on the shape of the conditional response distribution, its moments and other features derived from this distribution is more challenging than with traditional mean-based methods. In particular, such quantities of interest often do not directly equate the modelled parameters but are rather a (potentially complex) combination of them. To ease the post-estimation model analysis, we propose a framework and subsequently feature an implementation in R for the visualization of Bayesian and frequentist distributional regression models fitted using the bamlss, gamlss and betareg R packages.
Introduction
For modelling parameters beyond the mean of a target distribution, generalized additive models for location, scale and shape (GAMLSS) as introduced by Rigby and Stasinopoulos (2005) provide the ability to link all parameters characterizing the response distribution to a set of explanatory variables via an additive predictor, similar in spirit to generalized additive models but without its distributional limitations. Overcoming some of GAMLSS' earlier restrictions, distributional regression as coined by Klein et al. (2015c) presents a highly flexible modelling framework with a variety of possible target distributions and a wide range of effects including parametric penalized splines, random and spatial effects as well as nonparametric effects such as regression trees.
Implementations of distributional regression models include gamlss (Stasinopoulos and Rigby, 2007) and bamlss (Umlauf et al., 2018), the most prominent ones, with a vast selection of available distributions and effects. A number of software packages have also been developed to support specific instances of the distributional regression class, such as betareg (Grün et al., 2012) for beta regression. While gamlss and betareg employ different types of maximum likelihood estimation, bamlss implements a Bayesian approach featuring posterior mode estimates which are subsequently used as starting values for Markov chain Monte Carlo (MCMC) sampling. This has the added benefit of being able to construct the posterior distribution of each parameter as well as of potentially complex functions of these parameters.
Moving beyond single-parameter regression models naturally leads to additional challenges when it comes to the interpretation of the estimated effects since the same covariate may show up in multiple regression predictors and applying a non-identity link function additionally makes the interpretation depend on the remaining set of covariates. Furthermore, in many cases the parameters employed to characterize the distribution of interest do not directly equate the moments or other interpretable characteristics of a distribution, making another transformation necessary to arrive at interpretable figures.
As a consequence, applied researchers often appreciate the practical appeal of distributional regression, where regression relations beyond the mean can be investigated, but they struggle when it comes to understanding the output of the regression estimates. To facilitate this process, we introduce a new package distreg.vis that can deal with model classes from the bamlss, gamlss and betareg packages to achieve the following tasks:
• moments(): Obtain predicted moments (expected value, variance) of the target distribution, if they exist, based on user-specified values of the explanatory variables.
• plot_dist(): Create a graph displaying the predicted probability density function or cumulative distribution function based on the same user-specified values.
• plot_moments(): View the marginal influence of a selected effect on the predicted moments of the target distribution.
With those functions, applied scientists can directly translate regression objects into publication-ready graphs without having to worry about package-specific predict and plotting functions, the connection between predicted parameters and moments, or the correct display of predicted probability density functions. To make the process of interpreting fitted distributional regression models even more accessible, distreg.vis features a rich graphical user interface (GUI) built on the shiny framework (Chang et al., 2018). Using this GUI, the user can (a) obtain an overview of the selected model fit and then use the functions mentioned above to (b) easily select explanatory values for which to display the predicted distributions, (c) obtain marginal influences of selected covariates and (d) change aesthetic components of each displayed graph. After a successful analysis, the user can obtain the R code needed to reproduce all displayed plots without having to start the application again.
The idea of calculating conditional values of the response distribution by way of plot_moments() is not new; many other R packages feature aspects of the functionality of distreg.vis. Notably, the effects package (Fox and Weisberg, 2019) is an extensive library for calculating conditional means of the response distribution depending on varying explanatory covariates in linear, generalized linear and mixed-effects-type models. Lacking the graphing capabilities of effects while putting more focus on the estimation of marginal means, the prediction package (Leeper, 2019) offers a bigger range of supported model classes and even non-parametric effects.
Certainly the most general marginal effects packages are emmeans (Lenth, 2019), formerly called lsmeans (Lenth, 2016), and ggeffects (Lüdecke, 2018), as they not only offer a large variety of supported model classes and predictor effects but, contrary to effects and prediction, also support distreg.vis' distributional regression model classes: gamlss, betareg and bamlss (the latter only supported by ggeffects). However, even these two packages can only compute marginal means of moment values for betareg and fail to provide the same functionality for bamlss and gamlss, where predictions remain restricted to the parameter level. As such, distreg.vis is the only package which supports gamlss, bamlss and betareg in full generality and can calculate marginal effects on the moments, not only on the parameters, of a distribution.
The remainder of this article is structured as follows: Section 2 provides an example tutorial on analysing income distributions in which using distreg.vis yields a benefit. Section 3 covers the methodological background of the distributional regression model class and its implementations, while Section 4 offers a glimpse into the GUI. Section 5 concludes the article. The appendices provide an in-depth guide to the implementation of the two main functions of the package (Section A) and the Graphical User Interface (Section B), display special cases of distreg.vis' functions (Section C) and provide complementary graphs (Section D). The R script including the dataset as well as the appendices are available in the online supplemental materials at http://www.statmod.org/smij/archive.html.
Motivating example: Drivers of yearly income
To illustrate the usefulness of distreg.vis, we will give a short example based on a dataset about the yearly income of 3,000 male workers in the Mid-Atlantic region of the United States, provided by the ISLR package (James et al., 2017). This dataset will then be used continuously throughout Sections 2 and 4 as well as parts of the Appendix to illustrate distreg.vis' abilities.
The dataset consists of 11 variables, both continuous and categorical. Our variable of main interest, the response in the following regression analyses, is the individual's raw yearly income in thousand US$ (variable wage). The remaining variables mostly contain socioeconomic information, including the person's age in years (age), ethnic origin (ethnicity), level of education (education) and the year in which the observation was recorded (year). Considering its non-negative nature, we start by assuming a log-normal distribution for the wages, y ∼ Lognormal(µ, σ²). Though more complex distributions would provide a slightly better fit, the log-normal was chosen for its simplicity and popularity.
To find out what drives a person's income, we link both parameters of the log-normal distribution to the aforementioned explanatory variables. The model specification therefore has the following form:

µ = β10 + f11(age) + β12 · year + f13(ethnicity) + f14(education),
log(σ) = β20 + f21(age) + β22 · year + f23(ethnicity) + f24(education),

where the functions f11 and f21 are modelled nonparametrically using penalized splines (Eilers and Marx, 1996), while f13, f23, f14 and f24 are modelled as simple fixed categorical effects of the respective variables. All effects with more than one degree of freedom are consistently denoted by fij(·) to emphasize that the original input variable has to be recoded prior to inclusion in the model. The standard deviation parameter σ is connected to the explanatory variables using a log-link function to ensure positive support.
Combined with the sample-based estimation technique of bamlss, distreg.vis can produce credible intervals around marginal influence plots. For this reason, we choose bamlss for model estimation (the estimation call is sketched below). After successful estimation, it makes sense to look at the model summary using the built-in summary.bamlss() function. As visible in its output, bamlss provides estimation results for each effect used to describe the target distribution parameters (µ and σ in this case). Even though statements about each effect's significant difference from zero are possible, the interpretation of such statements, as well as of the raw effect estimates, is difficult. Model results of penalized splines, for example, only show estimates for their degree of smoothness (τ and α) and estimated degrees of freedom; without appropriate graphs showing the marginal effect, they cannot be interpreted. Furthermore, even the absolute coefficients of the categorical effects cannot be taken at face value, since they only affect the distributional parameters µ and σ, and not the moments.
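The estimation and summary calls referred to above are not reproduced in this excerpt; the following R sketch shows what they plausibly look like. The formula follows the model specification given earlier, while the family name, the renaming of the ISLR variable race to ethnicity, and the default sampler settings are assumptions:

```r
library(bamlss)

# Wage data from the ISLR package; the 'race' column is assumed to be
# renamed 'ethnicity' to match the variable names used in the text.
data("Wage", package = "ISLR")
Wage$ethnicity <- Wage$race

# One additive predictor per distributional parameter: a penalized spline
# of age plus year, ethnicity and education, for both mu and sigma.
f <- list(
  wage  ~ s(age) + year + ethnicity + education,
  sigma ~ s(age) + year + ethnicity + education
)

wage_model <- bamlss(f, family = "lognormal", data = Wage)
summary(wage_model)
```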
To overcome these limitations, we use distreg.vis. If, for example, we are interested in the impact of education on the marginal income distribution, we create a data.frame object in which all education categories are present and all other numeric variables are set to their mean. This can be easily achieved by the function set_mean() in combination with model_data(), which obtains the explanatory covariates of the model; a sketch is shown after this paragraph. Further defining the row.names of the data.frame to be the different education levels ensures improved legends in later graphs. Figure 1 displays the predicted distributions for each covariate combination specified in df. We can see that the predicted income not only changes in location (higher education shifts the distribution to the right) but also in variance (higher education leads to a lower variance). Figure 1 is useful to get a visual feel for the influence of education on the modelled distribution. However, we would also like to know the influence of age, which is not a categorical covariate, on the predicted moments of the log-normal distribution. Normally, this would be a tedious task, as the modelled non-parametric effect has to be transformed twice: first, via the link function to ensure the correct support of the modelled parameters, and second, from the parameters to the distributional moments, as the parameters of the log-normal distribution do not directly equate to its moments.
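A sketch of how df and Figure 1 could be produced; the argument names vary_by and newdata are assumptions about the package's interface based on the functions named in the text.

```r
# One covariate scenario per education level; all other numeric covariates
# are fixed at their means.
df <- set_mean(model_data(wage_model), vary_by = "education")
row.names(df) <- levels(model_data(wage_model)$education)

# Predicted wage distributions for the five scenarios (cf. Figure 1).
plot_dist(wage_model, newdata = df)
```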
The function plot_moments() was written to solve this task. It takes both a fitted distributional regression object and combinations of explanatory variables for which the influence is of interest. In our case, we specify both the df and wage_model objects as arguments of the function, as in the sketch below. Executing this code results in what can be seen in Figure 2. The plot is divided into two parts representing the first two moments of the target distribution, labelled 'Expected Value' and 'Variance'. In both graphs, the y-axis depicts the moment values, while the x-axis displays the variable of interest, which is age in our case.
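A minimal sketch of the call; the argument names int_var and pred_data are assumptions about the function's interface.

```r
# Influence of age on the first two moments, one line per education level
# (cf. Figure 2).
plot_moments(wage_model, int_var = "age", pred_data = df)
```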
The lines seen in each graph represent the previously specified covariate combinations and display how the moment changes over the whole range of the variable of interest. In our case, the five different lines represent the different education levels. We can now see that the expected wage level first rises with age until around the age of 40, after which it dips slightly. The income levels then increase again up until the age of 60, at which point the wage decreases. We can also see that this shape stays roughly the same for each education level, which stems from the lack of a modelled interaction effect between age and education.
A strong advantage of the sample-based approach of bamlss is its ability to easily construct credible intervals around the parameter estimates. In Figure 2 we can see small shaded bands representing credible intervals above and below each of the five lines describing the age effect on the first two moments in each education category. Since the intervals for the first and the highest education categories do not overlap, we can conclude, just from observing the graph, that their effects are significantly different from each other. Looking at the right part of Figure 2, we can see that the modelled variance of the target distribution depends on age in a similar way to the expected value: two high points can be observed around the ages of 40 and 60, at which the modelled distribution reaches the highest wage variance.
Even though Figure 2 only shows the influence of age on the first two moments, plot_moments() can easily include other metrics that depend on the predicted parameters, using its argument ex_fun. A good example for wage distributions is the Gini coefficient (Lerman and Yitzhaki, 1984), an economic figure measuring a distribution's inequality. Including this measure in our effect plots can be done using code along the lines of the sketch below. As visible in Figure 3, a new graph window was added in comparison to Figure 2, depicting the influence of age on our specified metric, the Gini coefficient. Also noticeable are the credible intervals, which we again obtain by combining plot_moments() with the sample-based approach of bamlss.
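A sketch of how the Gini coefficient could be supplied as an external function. The closed form used here, G = 2Φ(σ/√2) − 1, is the standard Gini coefficient of a log-normal distribution; the assumptions that ex_fun takes the function name as a string and that the function receives a data.frame of predicted parameters are ours.

```r
# Gini coefficient of a log-normal distribution, computed from the
# predicted parameters (par is assumed to be a data.frame with a
# column named "sigma").
gini <- function(par) {
  2 * pnorm(par[["sigma"]] / sqrt(2)) - 1
}

plot_moments(wage_model, int_var = "age", pred_data = df, ex_fun = "gini")
```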
The function plot_moments() is also able to display the difference in moments depending on a categorical covariate. No further arguments have to be specified; distreg.vis detects the variable type automatically. In the code sketch below, the variable ethnicity is selected as the variable of interest. On the x-axis we can see the categories of our variable of interest, ethnicity. The y-axis now denotes the moments, broken up by both the variable of interest and the categories of education, which we specified in the code chunk on page 532. The error bars on the ends of each bar represent 95% credible intervals. From analysing the bars with their error bars, we can conclude that the variable education mostly yields significantly different expected values of wage, while ethnicity does not.
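The corresponding call, sketched under the same interface assumptions as above:

```r
# Moments broken up by ethnicity (variable of interest) and education
# (from the scenarios in df); the categorical covariate is detected
# automatically.
plot_moments(wage_model, int_var = "ethnicity", pred_data = df)
```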
The provided example on modelling wages makes apparent that distributional regression is a useful tool for handling complex model scenarios. Nonetheless, visualizing the fitted regression models using existing tools is difficult, as transformations are necessary to arrive at interpretable figures. This process is made easier by distreg.vis, which provides the ability to quickly visualize predicted distributions and to display marginal moment influences.
The distributional regression model class
Distributional regression models represent an umbrella class for models where it is possible to link the parameters beyond the mean of a target distribution to available predictors (Klein et al., 2015c). Any choice of the underlying parametric target distribution is valid, as long as the probability density function is twice continuously differentiable with respect to the parameters; in particular, the choice is not limited to the exponential family typically employed in generalized additive models. Assuming a parametric distribution y ∼ D(θ₁, …, θ_L) with L distributional parameters θ₁, …, θ_L, we arrive at the model specification

  g_l(θ_l) = Σ_{q=1}^{Q_l} f_ql(X_ql; β_ql),  l = 1, …, L,

where g_l(·), l = 1, …, L denotes the link function used to uphold the support of parameter θ_l, f_ql(·), q = 1, …, Q_l represents the possibly non-parametric effect of covariate(s) X_ql on the modelled parameter, and β_ql, q = 1, …, Q_l depict the regression parameters which are to be estimated.
Depending on the software package used to fit a distributional regression model, a vast selection of possible effects is available, including but not limited to penalized splines (also in multivariate forms), spatial effects based on Gaussian random fields or Markov random fields, varying coefficient terms and random effects (see Fahrmeir et al., 2013, Ch. 9 for an overview). More recently, research has connected the distributional regression framework to effects known from machine learning, for example, random forests (Schlosser et al., 2019). Multivariate extensions have also gained considerable interest in the past; see for example Klein et al. (2015a), Klein and Kneib (2016) or Marra and Radice (2017).
Due to the high flexibility in the target distributions, the predictors and the specified link function, distributional regression encompasses many well-known regression approaches, such as generalized linear models (Nelder and Wedderburn, 1972, GLM), generalized additive models (Hastie and Tibshirani, 1990, GAM), generalized additive mixed models (Lin and Zhang, 1999, GAMM) and generalized additive models for location, scale and shape (Rigby and Stasinopoulos, 2005, GAMLSS). Although the framework is in theoretical proximity to GAMLSS, the term distributional regression was coined since distributional parameters do not always represent either the location, scale or shape of a distribution (Klein et al., 2015c).
Implementations of GAMLSS
For estimating distributional regression models in R, the two most capable software implementations are gamlss (Stasinopoulos and Rigby, 2007) and bamlss (Umlauf et al., 2018), with the package betareg (Grün et al., 2012) focusing only on beta regression. The main differences between gamlss and bamlss are rooted in their estimation techniques, which are described below:
gamlss
The gamlss package features a frequentist approach employing (penalized) maximum likelihood inference. Its main estimation algorithm RS, short for Rigby and Stasinopoulos, uses iteratively reweighted least squares (IRLS) in combination with a modified backfitting algorithm to arrive at coefficient estimates. The algorithm is organized into inner and outer iterations, with each inner iteration fitting one distributional parameter θ_l. Here, a working variable z_l, consisting of all used predictors, the first derivative of the likelihood (score function) and 'iterative weights' w_l determined with a local scoring algorithm, is calculated. Then, the working variable is fitted to the explanatory variables using backfitted weighted least squares and penalized weighted least squares for parametric and non-parametric coefficients, respectively. The inner iteration is repeated until the inner global deviance has converged. This procedure is done for every θ_l, after which one outer iteration is finished. The outer iterations are further repeated until the outer global deviance has also converged (Stasinopoulos et al., 2017, Ch. 3).
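To make the nested structure concrete, the following is a minimal, self-contained sketch of RS-type scoring iterations for a two-parameter normal model with linear predictors. It illustrates the algorithmic idea only and is not the actual gamlss implementation: the simulated data, the fixed number of outer iterations and the plain weighted least squares fits (no backfitting, no penalization) are all simplifications.

```r
set.seed(1)

# Simulated data: both the mean and the (log) standard deviation depend on x.
n <- 500
x <- runif(n)
y <- rnorm(n, mean = 1 + 2 * x, sd = exp(-1 + 1.5 * x))
X <- cbind(1, x)

eta1 <- rep(mean(y), n)       # predictor for mu (identity link)
eta2 <- rep(log(sd(y)), n)    # predictor for sigma (log link)

for (iter in 1:50) {
  # Inner step for mu: score u1, Fisher weights w1, working variable z1.
  sigma2 <- exp(2 * eta2)
  u1 <- (y - eta1) / sigma2
  w1 <- 1 / sigma2
  z1 <- eta1 + u1 / w1
  beta1 <- solve(crossprod(X, w1 * X), crossprod(X, w1 * z1))
  eta1 <- drop(X %*% beta1)

  # Inner step for log(sigma): the expected Fisher weight is constant (= 2).
  u2 <- -1 + (y - eta1)^2 / exp(2 * eta2)
  w2 <- rep(2, n)
  z2 <- eta2 + u2 / w2
  beta2 <- solve(crossprod(X, w2 * X), crossprod(X, w2 * z2))
  eta2 <- drop(X %*% beta2)
}

rbind(mu = drop(beta1), log_sigma = drop(beta2))  # close to (1, 2) and (-1, 1.5)
```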
Estimating parameters with backfitting has the advantage of avoiding the need for cross-derivatives and is therefore quite efficient. Uncertainty assessments typically rely on asymptotic normality assumptions and have been found to be rather conservative in simulation studies (see, for example, Klein et al., 2015b).
The collection of gamlss packages provides a vast number of distributions via its accompanying package gamlss.dist (Stasinopoulos and Rigby, 2019) as well as several extensions concerning spatial data effects (gamlss.spatial, De Bastiani et al., 2018) or truncated distributions (gamlss.tr, Stasinopoulos and Rigby, 2018), for example.
bamlss
The bamlss package (Umlauf et al., 2018, BAMLSS) provides a highly customizable Bayesian estimation framework with both posterior mode estimates via penalized likelihood and fully Bayesian inference implemented via MCMC simulation techniques. By default, it revolves mainly around two functions: bamlss::bfit() and bamlss::GMCMC(). The first function, bfit(), uses an optimizing routine that seeks the mode of the posterior distribution via penalized likelihood with respect to the effect coefficients. Those values are then used as starting values for MCMC simulations (function GMCMC()), which are based on iteratively weighted least squares proposals, that is, multivariate normal proposals obtained from locally quadratic approximations of the log-full conditional (Brezger and Lang, 2006).
Both functions can be swapped by the user for optimizer and sampler functions that better match the user's preferences, as sketched below. As such, the implementation of bamlss represents a lego-type toolbox that enables replacing specific parts of the model specification with alternative and potentially more flexible variants without altering the rest of the model implementation. Compared to gamlss, the number of supported distributions is more limited, but the access to posterior samples facilitates finite sample inference also for complex functionals of the original parameters. Default priors are assigned to all parameters of the model specification, but these can also be controlled by the user.
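A sketch of this modularity, using the two default functions named in the text; any function with a compatible interface could be supplied instead.

```r
# Equivalent to the defaults, but spelled out: the optimizer finds the
# posterior mode, the sampler then runs MCMC from those starting values.
wage_model <- bamlss(f, data = Wage, family = "lognormal",
                     optimizer = bfit, sampler = GMCMC)
```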
Distributional compatibility
To ensure a wide user audience, distreg.vis is able to support a variety of distributions from the gamlss, bamlss and betareg packages. Table 1 gives an overview and divides the available distributions into those that can be used in both plot_dist() and plot_moments() (Table 1a), and those that can only be used in combination with plot_dist() (Table 1b). The latter distributions do not have moment implementations yet, rendering them incompatible with plot_moments(). To include as many distributional families as possible, we worked together with the authors of both gamlss (Stasinopoulos and Rigby, 2007) and bamlss (Umlauf et al., 2018) and implemented the moment functions for almost all available distributions in their respective packages.
Introduction of the graphical user interface
To understand the fit of the predicted distribution and the magnitude of included effects, plot_dist() and plot_moments() are powerful tools. However, constructing the correct scenarios and specifying them in the appropriate data.frame format takes time. Furthermore, the resulting graphs are then static, as changing the scenarios each time after viewing the results is a slow process. To make using distreg.vis as uncomplicated as possible, it features a rich Graphical User Interface, in which the user can select his/her covariate scenarios and produce publication-ready graphs interactively. In doing so, distreg.vis is strongly based on shiny (Chang et al., 2018), which is an R package designed to create interactive visualizations with HTML code and R functions.
At its core, a shiny application is built using R functions and can therefore be started like one. In the case of distreg.vis, there are two ways to start the application. First, the user can run the function vis(), as sketched below. Second, it can also be called using the open-source development environment RStudio (RStudio Team, 2020). When opened, one can click on the 'Add-Ins' button and then select 'Distributional Regression Model Visualizer' if distreg.vis is installed (Figure D10 in the Appendix). This will also trigger the function vis().
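The first route is a single call:

```r
library(distreg.vis)
vis()  # launches the shiny-based GUI
```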
To give a short glimpse of the GUI, Figures 5 and 6 are provided. In both figures, Section 2's analysis was repeated using the interactive GUI. Specifically, in Figure 5 the covariate combinations with only the education levels varying were interactively specified, leading to the same values as on page 532. Defining all five covariate combinations leads to their predicted distributions appearing on the right side of the interface; the graph is then also a direct copy of Figure 2 from Section 2. Figure 6 shows the second main functionality of the interface, the 'influence graph' depicting the impact of a covariate on the first two moments based on the function plot_moments(). It appears after clicking the 'Properties' button and selecting the covariate of interest, in our case age. On the right side of the plot, numerous ways to customize it are provided. Clicking on the 'Obtain Code' button reveals a pop-up window displaying the R code needed to reproduce the graph. For a more comprehensive description of the GUI functionality, see Section B in the Appendix.
"Mathematics"
] |
Rock and Soil Engineering Slope Monitoring Based on the Technology of Internet of Things
To improve the monitoring system of rock and soil engineering slopes, the monitoring of rock and soil slopes based on the technology of the Internet of things was studied. A multi-parameter remote monitoring system based on the technology of the Internet of things was proposed. By combining a GPS locator with several sensor instruments, many parameters, such as ground displacement, deep displacement, groundwater level, stress and strain, were collected. Through Internet of things technology, the key data were transmitted to the monitoring room, completing the wireless transmission of the monitoring data. Using information fusion technology, database management and Web services, the monitoring data were classified and managed. Through data analysis, reliability theory was used to predict the slope. Finally, rock and soil engineering slope monitoring based on the Internet of things was verified. The results showed that the stability of the slope is influenced by many factors, and that the effectiveness and accuracy of monitoring data are important for slope prediction. To sum up, reliability theory is more suitable for the stability analysis of the local slope and the section.
Introduction
In recent years, with the development of information technology, Internet of things technology has been applied to mine construction. To ensure the safety of geotechnical engineering processes, slope displacement and groundwater level need to be monitored. Traditional monitoring mainly reads data from sensors, and the safety condition of the slope is obtained in the form of reports. This method cannot collect information in time and cannot meet the requirement of continuous monitoring. By the time the information is summarized, it is impossible to feed back the real situation of the slope in time.
In the field of engineering, items are connected to the Internet through information sensing devices. Information is exchanged for intelligent identification, location tracking, monitoring and management. With the development of Internet of things technology, it is gradually being used in slope monitoring. The instability and failure of a slope is a process from gradual change to catastrophe, and generally all landslides have a precursor. For a high, steep rock slope, the mechanical parameters and the stable state of the rock and soil cannot be determined, and it is difficult to make accurate predictions from human intuition and experience. Sensors can capture abnormal information about slope stability before a landslide occurs, so as to predict the danger in time and avoid the loss of personnel and equipment. Therefore, a geotechnical slope monitoring system based on Internet of things technology can effectively solve many current problems.
State of the art
At present, Internet of things technology is widely applied in various fields and is increasingly important in the field of geological engineering monitoring. Smethurst, J. A. et al. [1] discussed the current role of instrumentation and monitoring, including the reasons for monitoring infrastructure slopes, the instrumentation typically installed and the parameters measured. They also investigated recent developments in technology and considered how these may change the way that monitoring is used in the future. Srivastava [2] demonstrated an approach for spatial variation modeling of geotechnical parameters and reliability-analysis-based stability assessment of a highly weathered rock slope. He showed that numerical modeling of spatially varying geotechnical parameters gives a more realistic treatment of the property variation of a natural material in stability assessment. Ulusay, R. et al. [3] conducted a two-year collaborative program of geotechnical and hydrogeological investigations throughout the current pit and the area in the direction of advance of the pit, in combination with laboratory tests and analyses. Aly, H. et al. [4] studied the Internet of things and big data mainly from two aspects: a discussion of big data on the Internet of things and how it is created. They also discussed the challenges and the techniques that solve these issues, and examined the architecture of the Internet of things.
Domestically, there has also been a large amount of research on the Internet of things. Du and Wang [5] explored the Newmark displacement model and applied it to analyzing the probabilistic seismic slope displacement hazard. Lin and others [6] focused on Zigbee-based Internet of Things (IoT) in 3D terrains. They proposed a novel simulation model for IoT and investigated the effects of various terrains, node mobility and traffic loads. Rui [7] put forward an application framework for early warning services for geological disaster warning, which is one of the applications of the Internet of things. He also used firewall and authentication technology to meet the safety requirements of early warning information. Guo and others [8] attempted to apply Internet of Things (IoT) technology in the development of the smart tourism industry and smart tourism cities, which shows the wide application of the Internet of things. Sun et al. [9] discussed a tailings dam monitoring and pre-alarm system (TDMPAS) based on the Internet of things (IoT) and cloud computing, combined with the ability of real-time monitoring of the saturated line. Zheng et al. [10] constructed a mixed kernel from a typical local kernel (the radial basis function, RBF) and a typical global kernel (the polynomial kernel) and proved that the Internet of things has real application value in predicting slope deformations. Li et al. [11] developed a novel approach for predicting slope displacement combining mathematical morphology (MM) with a nonparametric and nonlinear model, so as to improve the prediction accuracy. They designed a parallel-composed morphological filter with multiple structure elements to process measured displacement time series with adaptive multi-scale decoupling.
In summary, Internet of things technology has been introduced through its applications and its combination with other technologies, and slope monitoring has been discussed in terms of slope displacement, slope deformation and so on. However, the above studies do not combine Internet of things technology with slope monitoring in geological engineering, and the effectiveness and precision of slope monitoring remain low. Aiming at improving the monitoring system of rock and soil engineering slopes, the monitoring of rock and soil slopes based on the technology of the Internet of things is studied here. A multi-parameter remote monitoring system based on the technology of the Internet of things is proposed, and the wireless transmission of the monitoring data is completed. Using information fusion technology, database management and Web services, the monitoring data are classified and managed. Through data analysis, reliability theory is used to predict the slope. Finally, the effectiveness and accuracy of the monitoring data are improved by Internet of things technology, which has high practical value.
Multi-parameter remote monitoring method
To achieve a real monitoring and forecasting effect on unstable slopes, a multi-parameter monitoring method is needed. Based on the above ideas, a combination of several monitoring techniques is proposed. A comprehensive monitoring system based on GPS monitoring, supplemented by sensor monitoring, is established. This multi-parameter remote monitoring system can monitor the various characteristic parameters of an unstable landslide on a large scale and in various respects before it collapses, thus gaining valuable preparation time for engineers. According to the different characteristics of the slope, different parameters can be combined and different types of sensors selected. The data are then transmitted to a remote control room by wireless communication technology. In the control room, the correct conclusions are reached through the system's data analysis and the experts' experience, which can play a real role in prediction and early warning. The field monitoring model is improved on the basis of the RSM-24FD engineering tester. The multi-parameter structure diagram of the monitoring system is shown in Figure 1.
Management of slope monitoring data in geotechnical engineering
The monitoring data are transmitted to the remote monitoring information control center through ZigBee technology, and a large-scale slope monitoring information management system is set up. It manages the attribute databases of ground and deep displacement, of water level changes, and of stress and strain changes. Through Internet technology, the staff can call up and view the data in each database.
In order to make it easy for users to manage the applications, connections, devices and other contents of the intermediate service components of the Internet of things, a Web service system based on the Internet is developed. At the same time, it packages and opens the interface of the wireless sensor network, which is convenient for users to call on the Internet. This is also an example of a wireless sensor network joining the Internet. The relationship between Web services, middleware and wireless sensor networks is shown in Figure 2.
As shown in Figure 2, the Web service background interacts with the middleware through two TCP connections, which realize the data transmission of the wireless sensor network and the management of the middleware, respectively. The Access database stores device information, application information and so on. The wireless sensor network data interface (API) packages the wireless sensor data and provides calls to the user.
An empirical analysis of rock and soil engineering slope monitoring based on the technology of Internet of things
Before evaluating the stability of the slope, the most important prerequisite is to obtain detailed geological data on the slope, including important parameters such as lithology, the geological structure of the rock mass, joint fractures and the elastic modulus of the rock. However, the mechanical properties of rock change with the environment and may differ greatly across time and space. Based on this, a slope uncertainty analysis method based on reliability theory is proposed. The numerical calculation method is combined with statistical theory to evaluate the stability of the slope.
The first step is the geological survey. The geotechnical project selected in this paper strikes almost east-west. Due to differences in the paleo-sedimentary environment and material composition, there is an obvious phase transition between some strata along the strike and dip. There are faults, fractured zones and weak structural planes in the northern slope, which makes the stability of the upper slope very poor. The natural slope of the southern side is steeper. The maximum height difference of the slope in the mining area reaches 600 m, so it belongs to the class of high and steep slopes. The bottom of the geotechnical project is at the +840 m level, the maximum elevation is +1503 m, the height difference is 663 m, and the final slope angle is between 39° and 41°. The slope rock mass is mainly biotite andesite, chlorite schist and kaolinite. The rock is loose and broken to varying degrees, weathering cracks are developed, and the stability is poor. The slope body has a well-developed joint surface and slides easily in the form of conglomerate.
Furthermore, the geometric model for the numerical calculation of the slope is analyzed. The slope model is established by selecting the geometric parameters of the section between +1044 m and +1056 m. The step height is H = 12 m, the slope length is 20 m, and the slope angle is 41°. The distance from the bottom boundary to the bottom of the slope is 2 m, and that from the top of the hill to the right boundary is 8 m. When FLAC3D is used for the numerical solution, two assumptions are made: first, the length in the Y direction is one unit length, and the velocity of all points in the Y direction in the model is constrained, so that the model is solved as a plane strain problem; second, the rock mass in the model is an isotropic, homogeneous rock mass.
Then, the uncertainty analysis based on reliability is introduced in detail. The first element is Latin Hypercube Sampling Monte Carlo Simulation (LHSMCS). Monte Carlo simulation is a numerical method that estimates the probability P of an event by drawing random samples with a computer. Suppose the probability that a random event A occurs is P₀. In N repeated tests, if the number of occurrences of A is M, then by the law of large numbers, when N is large enough, M/N converges to P₀. The calculation of the slope safety factor is simulated for N = 10, N = 100 and N = 1000, respectively. The Monte Carlo simulation principle is very simple and the concept is clear; the simulation calculation can be realized using the built-in FISH program of FLAC3D. The algorithm's solution flow is shown in Figure 3. For the limit-equilibrium calculation of the slope safety factor there are many methods, such as the Bishop method for circular sliding surfaces, the Swedish slice method, and the Sarma method for arbitrary sliding surfaces. The Bishop method is selected to calculate the safety factor. The radius and center of the sliding surface must be assumed, and the most dangerous sliding surface and the minimum safety factor are found by many iterative calculations. The calculation is shown in formula (1) and formula (2).
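Formulas (1) and (2) are not reproduced in the source; the standard simplified Bishop equations, to which they presumably correspond (ignoring pore pressure), read:

```latex
% Simplified Bishop method: safety factor F_s summed over slices i with
% slice weight W_i, base inclination alpha_i, width b_i, cohesion c' and
% friction angle phi'.
\[
  F_s \;=\; \frac{\displaystyle\sum_i \frac{c'\,b_i + W_i \tan\varphi'}{m_{\alpha,i}}}
                 {\displaystyle\sum_i W_i \sin\alpha_i}, \tag{1}
\]
\[
  m_{\alpha,i} \;=\; \cos\alpha_i + \frac{\sin\alpha_i \tan\varphi'}{F_s}. \tag{2}
\]
% Because F_s appears on both sides, the pair is solved iteratively.
```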
Result Analysis and Discussion
The physical and mechanical properties of the 10 samples were obtained by LHSMCS sampling, as shown in Table 1. First, the grid model is set up, with the bottom and the lateral boundaries fixed. The iterative calculation process can be completed by the FISH program in FLAC3D.
Matlab 7.11.0 (R2012b) is used for the curve fitting of the data in Table 6.2, as shown in Figure 4.
The LHSMCS method is used to calculate the safety factor for the N = 100 and N = 1000 samples. The calculation results are shown in Figure 5 and Figure 6. From the frequency histogram of the safety factor, when the number of samples is large enough, the safety factor follows a normal distribution. The mean and standard deviation of the normal distribution are shown in Table 2. It can be seen from Table 3 that, as the number of samples increases, the sample mean decreases from 1.3960 to 1.3653. The probability of failure also tends to a steady value as the number of samples increases. When the number of samples is large enough, the safety factor and the failure probability are in a stable state and no longer change much. The experimental results agree with the law of large numbers: when the number of samples N tends to infinity, the failure probability P_f gradually converges to the probability P.
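To make the LHSMCS idea concrete, the following is a minimal, self-contained sketch. The parameter distributions (cohesion and friction angle), the slope geometry and the closed-form infinite-slope safety factor used here are stand-in assumptions replacing the paper's FLAC3D runs; only the sampling scheme is the point of the sketch.

```r
set.seed(42)

# Latin hypercube sample of size N for k standard-normal variables:
# stratify (0,1) into N bins per dimension, permute, then map via qnorm.
lhs_normal <- function(N, k) {
  u <- (replicate(k, sample(N)) - runif(N * k)) / N
  qnorm(u)
}

N <- 1000
z <- lhs_normal(N, 2)

# Assumed input distributions: cohesion c (kPa) and friction angle phi (deg).
c_kPa <- 30 + 5 * z[, 1]
phi   <- 45 + 3 * z[, 2]

# Infinite-slope proxy for the safety factor (slope angle 41 deg, unit
# weight 25 kN/m^3, sliding depth 12 m), standing in for a FLAC3D run.
alpha    <- 41 * pi / 180
gamma_kN <- 25
H        <- 12
Fs <- tan(phi * pi / 180) / tan(alpha) +
  c_kPa / (gamma_kN * H * sin(alpha) * cos(alpha))

# Distribution of the safety factor and the estimated failure probability;
# with these assumed inputs, mean(Fs) is about 1.35.
c(mean = mean(Fs), sd = sd(Fs), Pf = mean(Fs < 1))
```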
The Z test method was used to test the reliability level of the safety factor. The stability of the slope is influenced by many uncertain parameters. When there is a significant difference between a sample mean µ and the overall mean value µ₀, a significance test of the parameter is needed. For example, because of the anisotropy of rock mechanical properties, the hypothesis testing method can be used to analyze the probability that mine slope stability is affected by uncertain factors. Many statistical methods can be used for hypothesis testing; here, the Z test method was selected. H₀ is the null hypothesis and H₁ is the alternative hypothesis, defined as follows:

  H₀: µ = µ₀,  H₁: µ ≠ µ₀.

The test statistic is

  Z_H = (x̄ − µ₀) / (σ / √n),

where σ is the standard deviation, n is the sample number, x̄ is the sample mean, and µ₀ is the overall mean. According to the central limit theorem, when n is large enough, the standardized sample mean follows the standard normal distribution. When the difference between the sample value x̄ of the failure probability and the overall average µ₀ is larger, the statistic Z_H gradually increases. The absolute value of Z_H is therefore used as a measure of the probability of failure.
As can be seen from Table 2, x̄ = 1.3653 and s = 0.5038. Reliability theory is used to evaluate the stability of the structure; in recent years, its development has been very rapid, especially in slope stability evaluation. Generally, the safety factor of the slope is calculated by the center point method of reliability theory. The failure probability P_f is more than 0.07. When the probability P is less than 0.93, the reliability index β at this point is about 1.65 (the calculation may differ for different examples). When the reliability index of the rock slope is less than the above two values, the system will send out a warning to shorten the observation period and increase the observation frequency.
Conclusions
The data collection, data transmission, data management and data analysis in the whole process of slope monitoring are studied. The following main conclusions are drawn.
Firstly, the stability of the slope is influenced by many factors. In order to collect information that affects the stability of slope, a multi-parameter and multi-device monitoring system based on Internet of Things is proposed. The key technology is introduced.
Secondly, the monitoring cycle of a slope monitoring project is long, so the real-time requirements on the data are low. However, because of the particularity of the monitoring environment, there are high requirements for the energy efficiency, scalability and robustness of the wireless transmission network.
Thirdly, the effectiveness and accuracy of the monitoring data play a major role in predicting the stability of the slope. Therefore, the data must be properly managed and classified. The key to the data management process is to extract, from the massive monitoring data, the data that are useful to the prediction process. Information fusion technologies, such as Kalman filtering, trend fitting and linear correlation analysis, can effectively solve this problem.
Fourthly, each prediction criterion places a different emphasis on the law of change of slope displacement. Reliability theory is applied to the stability analysis of the local slope and the section, while the finite element method focuses more on predicting the stress distribution and displacement trend of the whole slope. Combining several forecasting theories can predict the stability of the slope more scientifically.
"Engineering",
"Environmental Science",
"Computer Science"
] |
Attaining the Ultimate Precision Limit in Quantum State Estimation
We derive a bound on the precision of state estimation for finite dimensional quantum systems and prove its attainability in the generic case where the spectrum is non-degenerate. Our results hold under an assumption called local asymptotic covariance, which is weaker than unbiasedness or local unbiasedness. The derivation is based on an analysis of the limiting distribution of the estimator’s deviation from the true value of the parameter, and takes advantage of quantum local asymptotic normality, a useful asymptotic characterization of identically prepared states in terms of Gaussian states. We first prove our results for the mean square error of a special class of models, called D-invariant, and then extend the results to arbitrary models, generic cost functions, and global state estimation, where the unknown parameter is not restricted to a local neighbourhood of the true value. The extension includes a treatment of nuisance parameters, i.e. parameters that are not of interest to the experimenter but nevertheless affect the precision of the estimation. As an illustration of the general approach, we provide the optimal estimation strategies for the joint measurement of two qubit observables, for the estimation of qubit states in the presence of amplitude damping noise, and for noisy multiphase estimation.
Introduction
Quantum estimation theory is one of the pillars of quantum information science, with a wide range of applications from evaluating the performance of quantum devices [1,2] to exploring the foundations of physics [3,4]. In the typical scenario, the problem is specified by a parametric family of quantum states, called the model, and the objective is to design measurement strategies that estimate the parameters of interest with the highest possible precision. The precision measure is often chosen to be the mean square error (MSE), and is lower bounded through generalizations of the Cramér-Rao bound of classical statistics [5,6]. Given n copies of a quantum state, such generalizations imply that the product MSE · n converges to a positive constant in the large n limit. Despite many efforts made over the years (see, e.g., [5][6][7][8][9][10][11][12] and [13] for a review), the attainability of the precision bounds of quantum state estimation has only been proven in a few special cases. Consider, as an example, the most widely used bound, namely the symmetric logarithmic derivative Fisher information bound (SLD bound, for short). The SLD bound is tight in the one-parameter case [5,6], but is generally non-tight in multiparameter estimation. Intuitively, measuring one parameter may affect the precision in the measurement of another parameter, and thus it is extremely tricky to construct the optimal measurement. Another bound for multiparameter estimation is the right logarithmic derivative Fisher information bound (RLD bound, for short) [5]. Its achievability was shown in the Gaussian states case [5], the qubit case [14,15], and the qudit case [16,17]. In this sense, the RLD bound is superior to the SLD bound. However, the RLD bound holds only when the family of states to be estimated satisfies an ad hoc mathematical condition. The most general quantum extension of the classical Cramér-Rao bound to date is the Holevo bound [5], which gives the maximum among all existing lower bounds for the error of unbiased measurements for the estimation of any family of states. The attainability of the Holevo bound was studied in the pure states case [10] and the qubit case [14,15], and was conjectured to be generic by one of us [18]. Yamagata et al. [19] addressed the attainability question in a local scenario, showing that the Holevo bound can be attained under certain regularity conditions. However, the attaining estimator constructed therein depends on the true parameter, and therefore has limited practical interest. Meanwhile, the need for a general, attainable bound on multiparameter quantum estimation is increasing, as more and more applications are being investigated [20][21][22][23][24].
In this work we explore a new route to the study of precision limits in quantum estimation. This new route allows us to prove the asymptotic attainability of the Holevo bound in generic scenarios, to extend its validity to a broader class of estimators, and to derive a new set of attainable precision bounds. We adopt the condition of local asymptotic covariance [18] which is less restrictive than the unbiasedness condition [5] assumed in the derivation of the Holevo bound. Under local asymptotic covariance, we characterize the MSE of the limiting distribution, namely the distribution of the estimator's rescaled deviation from the true value of the parameter in the asymptotic limit of n → ∞.
Our contribution can be divided into two parts: the attainability of the Holevo bound, and the proof that the Holevo bound still holds under the weaker condition of local asymptotic covariance. To show the achievability part, we employ quantum local asymptotic normality (Q-LAN), a useful characterization of n-copy d-dimensional (qudit) states in terms of multimode Gaussian states. The qubit case was derived in [14,15], and the case of full parametric models was derived by Kahn and Guta for states with non-degenerate spectrum [16,17]. Here we extend this characterization to a larger class of models, called D-invariant models, using a technique of symplectic diagonalization. For models that are not D-invariant, we derive an achievable bound, expressed in terms of a quantum Fisher information-like quantity that can be straightforwardly evaluated. Whenever the model consists of qudit states with non-degenerate spectrum, this quantity turns out to be equal to the quantity in the Holevo bound [5]. Our evaluation of the convergence is uniform on compact sets and includes an estimate of its order, which will allow us to prove the achievability of the bound even in the global setting.
We stress that, until now, the most general proof of the Holevo bound required the condition of local unbiasedness. In particular, no previous study showed the validity of the Holevo bound under the weaker condition of local asymptotic covariance in the multiparameter scenario. To avoid employing the (local) unbiasedness condition, we focus on the discretized version of the RLD Fisher information matrix, introduced by Tsuda and Matsumoto [25]. Using this version of the RLD Fisher information matrix, we manage to handle the local asymptotic covariance condition and to show the validity of the Holevo bound in this broader scenario. Remarkably, the validity of the bound does not require finite-dimensionality of the system or non-degeneracy of the states in the model. Our result also provides a simpler way of evaluating the Holevo bound, whose original expression involved a difficult optimization over a set of operators.
The advantage of local asymptotic covariance over local unbiasedness is the following. For practical applications, the estimator needs to attain the lower bound globally, i.e., at all points in the parameter set. However, it is quite difficult to meet this desideratum under the condition of local unbiasedness, even if we employ a two-step method based on a first rough estimate of the state, followed by the measurement that is optimal in the neighbourhood of the estimate. In this paper, we construct a locally asymptotic covariant estimator that achieves the Holevo bound at every point, for any qudit submodel except those with degenerate states. Our construction proceeds in two steps. In the first step, we perform a full tomography of the state, using the protocol proposed in [26]. In the second step, we implement a locally optimal estimator based on Q-LAN [16,17]. The two-step estimator works even when the estimated parameter is not assumed to be in a local neighbourhood of the true value. The key tool to prove this property is our precise evaluation of the optimal local estimator with compact uniformity and order estimation of the convergence. Our method can be extended from the MSE to arbitrary cost functions. A comparison between the approach adopted in this work (in green) and conventional approaches to quantum state estimation (in blue) can be found in Fig. 1.
Besides the attainability of the Holevo bound, the method can be used to derive a broad class of bounds for quantum state estimation. Under suitable assumptions, we characterize the tail of the limiting distribution, providing a bound on the probability that the estimate falls out of a confidence region. The limiting distribution is a good approximation of the (actual) probability distribution of the estimator, up to a term vanishing in n. Then, we derive a bound for quantum estimation with nuisance parameters, i.e. parameters that are not of interest to the experimenter but may affect the estimation of the other parameters. For instance, the strength of noise in a phase estimation scenario can be regarded as a nuisance parameter. Our bound applies also to arbitrary estimation models, thus extending nuisance parameter bounds derived for specific cases (see, e.g., [27][28][29]). In the final part of the paper, the above bounds are illustrated in concrete examples, including the joint measurement of two qubit observables, the estimation of qubit states in the presence of amplitude damping noise, and noisy multiphase estimation.
The remainder of the paper is structured as follows. In Sect. 2 we introduce the main ideas in the one-parameter case. Our discussion of the one-parameter case requires no regularity condition for the parametric model. Then we devote several sections to introducing and deriving tools for multiparameter estimation. In Sect. 3, we briefly review the Holevo bound and Gaussian states, and derive some relations that will be useful in the rest of the paper. In Sect. 4, we introduce Q-LAN. In Sect. 5 we introduce the ε-difference RLD Fisher information matrix, which will be a key tool for deriving our bounds in the multiparameter case. In Sect. 6, we derive the general bound on the precision of multiparameter estimation. In Sect. 7, we address state estimation in the presence of nuisance parameters and derive a precision bound for this scenario. Section 8 provides bounds on the tail probability. In Sect. 9, we extend our results to global estimation and to generic cost functions. In Sect. 10, the general method is illustrated through examples. The conclusions are drawn in Sect. 11.

Figure 1 (caption): Comparison between the approach of this work (in green) and the traditional approach of quantum state estimation (in blue). In the traditional approach, one derives precision bounds based on the probability distribution function (PDF) for measurements on the original set of quantum states. The bounds are evaluated in the large n limit and the task is to find a sequence of measurements that achieves the limit bound. In this work, we first characterize the limiting distribution and then work out a bound in terms of the limiting distribution. This construction also provides the optimal measurement in the limiting scenario, which can be used to prove the asymptotic attainability of the bound. The analysis of the limiting distribution also provides tail bounds, which approximate the tail bounds for finite n up to a small correction, under the assumption that the cost function and the model satisfy a certain relation (see Theorem 9).

Remark on the notation. In this paper, we use z* for the complex conjugate of z ∈ C and A† for the Hermitian conjugate of an operator A. For the convenience of the reader, we list other frequently appearing notations and their definitions in Table 1.
Precision Bound Under Local Asymptotic Covariance: One-Parameter Case
In this section, we discuss estimation of a single parameter under the local asymptotic covariance condition, without any assumption on the parametric model.
2.1. Cramér-Rao inequality without regularity assumptions
Consider a one-parameter model M of the form

  M = { ρ_t | t ∈ Θ },   (1)

where Θ is a subset of R. In the literature it is typically assumed that the parametrization is differentiable. When this is the case, one can define the symmetric logarithmic derivative operator (SLD in short) at t₀ via the equation

  dρ_t/dt |_{t=t₀} = (1/2) (L_{t₀} ρ_{t₀} + ρ_{t₀} L_{t₀}).   (2)

Then, the SLD Fisher information is defined as

  J_{t₀} := Tr[ ρ_{t₀} L_{t₀}² ].   (3)

The SLD L_{t₀} is not unique in general, but the SLD Fisher information J_{t₀} is uniquely defined because it does not depend on the choice of the SLD L_{t₀} among the operators satisfying (2). When the parametrization is C¹-continuous and ε > 0 is a small number, one has

  F(ρ_{t₀} ‖ ρ_{t₀+ε}) = 1 − (J_{t₀}/8) ε² + o(ε²),   (4)

where

  F(ρ ‖ ρ') := Tr | √ρ √ρ' |   (5)

is the fidelity between two density matrices ρ and ρ'. It is called the Bhattacharyya or Hellinger coefficient in the classical case [30,31].
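As a quick sanity check of the expansion (4), one can verify it on the classical binary family p_t = ((1+t)/2, (1−t)/2), viewed as diagonal qubit states, at t₀ = 0:

```latex
\[
  F(\rho_0 \,\|\, \rho_\epsilon)
    = \sqrt{\tfrac{1}{2}\cdot\tfrac{1+\epsilon}{2}}
    + \sqrt{\tfrac{1}{2}\cdot\tfrac{1-\epsilon}{2}}
    = \tfrac{1}{2}\left(\sqrt{1+\epsilon}+\sqrt{1-\epsilon}\right)
    = 1 - \tfrac{\epsilon^{2}}{8} + o(\epsilon^{2}),
\]
\[
  J_0 = \sum_i \frac{\dot p_0(i)^2}{p_0(i)}
      = \frac{(1/2)^2}{1/2} + \frac{(-1/2)^2}{1/2} = 1,
\]
```

so that 1 − F = J₀ ε²/8 + o(ε²), as claimed.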
Here we do not assume that the parametrization (1) is differentiable. Hence, the SLD Fisher information cannot be defined by (3). Instead, following the intuition of (4), we define the SLD Fisher information J_{t₀} as the limit

  J_{t₀} := lim_{ε→0} 8 (1 − F(ρ_{t₀} ‖ ρ_{t₀+ε})) / ε².   (6)

In the n-copy case, we have the following lemma:

Lemma 1. The SLD Fisher information of the n-copy model {ρ_{t₀+t/√n}^{⊗n}}_t at t = 0, defined via (6), equals J_{t₀} for every n.

Proof. Using the definition (6) and the multiplicativity of the fidelity, F(ρ^{⊗n} ‖ σ^{⊗n}) = F(ρ ‖ σ)ⁿ, we have

  lim_{ε→0} 8 (1 − F(ρ_{t₀}^{⊗n} ‖ ρ_{t₀+ε/√n}^{⊗n})) / ε² = lim_{ε→0} 8 (1 − F(ρ_{t₀} ‖ ρ_{t₀+ε/√n})ⁿ) / ε² = J_{t₀}.

In other words, the SLD Fisher information is constant over n if we replace ε by ε/√n. To estimate the parameter t ∈ Θ, we perform on the input state a quantum measurement, which is mathematically described by a positive operator valued measure (POVM) with outcomes in X ⊂ R. An outcome x is then mapped to an estimate of t by an estimator t̂(x). It is often assumed that the measurement is unbiased, in the following sense: a POVM M on a single input copy is called unbiased when

  ∫_X t̂(x) Tr[ ρ_t M(dx) ] = t for all t ∈ Θ.   (9)

For a POVM M, we define the mean square error (MSE) V_t(M) as

  V_t(M) := ∫_X ( t̂(x) − t )² Tr[ ρ_t M(dx) ].   (10)

Then, we have the fidelity version of the Cramér-Rao inequality:
Theorem 1. For an unbiased measurement M satisfying

  ∫_X t̂(x) Tr[ ρ_t M(dx) ] = t   (11)

for any t, we have

  V_{t₀}(M) + V_{t₀+ε}(M) ≥ ε² / ( 4 (1 − F(ρ_{t₀} ‖ ρ_{t₀+ε})) ) − ε²/2.   (12)

When lim_{ε→0} V_{t₀+ε}(M) = V_{t₀}(M), taking the limit ε → 0, we have

  V_{t₀}(M) ≥ J_{t₀}^{−1}.   (13)

The proof uses the notion of fidelity between two classical probability distributions: for two given distributions P and Q on a probability space X, we define the fidelity F(P ‖ Q) as follows. Let f_P and f_Q be the Radon-Nikodým derivatives of P and Q with respect to P + Q, respectively. Then, the fidelity F(P ‖ Q) can be defined as

  F(P ‖ Q) := ∫_X √(f_P f_Q) d(P + Q).   (14)

With the above definition, the fidelity satisfies an information processing inequality: for every classical channel G, one has F(G(P) ‖ G(Q)) ≥ F(P ‖ Q). For a family of probability distributions {P_θ}_{θ∈Θ}, we define the Fisher information as

  J_θ := lim_{ε→0} 8 (1 − F(P_θ ‖ P_{θ+ε})) / ε².   (15)

When the probability distributions are over a discrete set, this Fisher information coincides with the SLD Fisher information of the corresponding diagonal density matrices.
Proof of Theorem 1. Without loss of generality, we assume t₀ = 0. We define the probability distribution P_t by P_t(B) := Tr[ρ_t M(B)]. Then, the information processing inequality of the fidelity [32] yields the bound F(ρ_{t₀} ‖ ρ_{t₀+ε}) ≤ F(P₀ ‖ P_ε). Hence, it is sufficient to show (12) for the probability distribution family {P_t}.
Let f₀ and f_ε be the Radon-Nikodým derivatives of P₀ and P_ε with respect to P₀ + P_ε. Denoting the estimate by t̂, we have ∫ t̂ f₀ d(P₀ + P_ε) = 0 and ∫ t̂ f_ε d(P₀ + P_ε) = ε, and therefore

  ∫ (t̂ − ε/2) (f_ε − f₀) d(P₀ + P_ε) = ε.   (16)

Also, (14) implies the relation

  ∫ (√f_ε − √f₀)² d(P₀ + P_ε) = 2 − 2 F(P₀ ‖ P_ε).   (17)

Hence, the Schwarz inequality, applied to ε = ∫ (t̂ − ε/2)(√f_ε − √f₀)(√f_ε + √f₀) d(P₀ + P_ε) together with the bound (√f_ε + √f₀)² ≤ 2(f₀ + f_ε), implies

  ε² ≤ ( 2 (V₀(M) + V_ε(M)) + ε² ) ( 2 − 2 F(P₀ ‖ P_ε) ).   (18)

Combining (16), (17), and (18), we have (12).
2.2. Local asymptotic covariance

When many copies of the state ρ_t are available, the estimation of t can be reduced to a local neighbourhood of a fixed point t₀ ∈ Θ. Motivated by Lemma 1, we adopt the following parametrization of the n-copy state:

  ρⁿ_{t₀,t} := ρ_{t₀+t/√n}^{⊗n},  t ∈ √n (Θ − t₀),   (19)

having used the notation aΘ + b := { ax + b | x ∈ Θ }, for two arbitrary constants a, b ∈ R.
With this parametrization, the local n-copy model is { ρⁿ_{t₀,t} | t ∈ √n(Θ − t₀) }. Assuming t₀ to be known, the task is to estimate the local parameter t ∈ R, by performing a measurement on the n-copy state ρⁿ_{t₀,t} and then mapping the obtained data to an estimate t̂_n. The whole estimation strategy can be described by a sequence of POVMs m := {M_n}. For every Borel set B ⊂ R, we adopt the standard notation {t̂_n ∈ B} for the set of measurement outcomes whose estimate lies in B. In the existing works on quantum state estimation, the error criterion is defined in terms of the difference between the global estimate t₀ + t̂_n/√n and the global true value t₀ + t/√n. Instead, here we focus on the difference between the local estimate t̂_n and the true value of the local parameter t. With this aim in mind, we consider the probability distribution

  ℘ⁿ_{t₀,t|M_n}(B) := Tr[ ρⁿ_{t₀,t} M_n({ t̂_n : t̂_n − t ∈ B }) ].   (20)

We focus on the behavior of ℘ⁿ_{t₀,t|M_n} in the large n limit, assuming the following condition:

Condition 1 (Local asymptotic covariance for a single parameter). A sequence of measurements m = {M_n} satisfies local asymptotic covariance when

1. The distribution ℘ⁿ_{t₀,t|M_n} in (20) converges to a distribution ℘_{t₀,t|m}, called the limiting distribution, namely

  lim_{n→∞} ℘ⁿ_{t₀,t|M_n}(B) = ℘_{t₀,t|m}(B)   (21)

for any Borel set B.
2. The limiting distribution satisfies the relation

  ℘_{t₀,t|m}(B) = ℘_{t₀,0|m}(B)   (22)

for any t ∈ R, which is equivalent to the condition

  lim_{n→∞} Tr[ ρⁿ_{t₀,t} M_n({ t̂_n : t̂_n − t ∈ B }) ] = lim_{n→∞} Tr[ ρⁿ_{t₀,0} M_n({ t̂_n : t̂_n ∈ B }) ].   (23)

Using the limiting distribution, we can faithfully approximate the tail probability as

  ℘ⁿ_{t₀,t|M_n}(B) = ℘_{t₀,t|m}(B) + δ_n,   (24)

where the δ_n term vanishes with n for every fixed B. For convenience, one may be tempted to require the existence of a probability density function (PDF) of the limiting distribution ℘_{t₀,t|m}. However, the existence of a PDF is already guaranteed by the following lemma.
Lemma 2. When a sequence m := {M_n} of POVMs satisfies local asymptotic covariance, the limiting distribution ℘_{t₀,t|m} admits a PDF, denoted by ℘_{t₀,0|m,d}.
The proof is provided in "Appendix A".
2.3. MSE bound for the limiting distribution
As a figure of merit, we focus on the mean square error (MSE) V[℘_{t₀,t|m}] of the limiting distribution ℘_{t₀,t|m}, namely

  V[℘_{t₀,t|m}] := ∫_R t̂² ℘_{t₀,t|m}(dt̂).

Note that local asymptotic covariance implies that the MSE is independent of t.
The main result of the section is the following theorem:

Theorem 2 (MSE bound for single-parameter estimation). When a sequence m := {M_n} of POVMs satisfies local asymptotic covariance, the MSE of its limiting distribution is lower bounded as

  V[℘_{t₀,t|m}] ≥ J_{t₀}^{−1},   (25)

where J_{t₀} is the SLD Fisher information of the model {ρ_t}_{t∈Θ}. The PDF of ℘_{t₀,t|m} is upper bounded by J_{t₀}. When the PDF of ℘_{t₀,t|m} is differentiable with respect to t, equality in (25) holds if and only if ℘_{t₀,t|m} is the normal distribution with average zero and variance J_{t₀}^{−1}.
Proof of Theorem 2. When the integral ∫_R t̂ ℘_{t₀,0|m}(dt̂) does not converge, V[℘_{t₀,t|m}] is infinite and satisfies (25). Hence, we can assume that the above integral converges. Further, we can assume that the outcome t̂ satisfies the unbiasedness condition ∫_R t̂ ℘_{t₀,t|m}(dt̂) = t. Otherwise, we can replace t̂ by t̂₀ := t̂ − ∫_R t̂' ℘_{t₀,0|m}(dt̂'), because the estimator t̂₀ has a smaller MSE than t̂ and satisfies the unbiasedness condition due to the covariance condition. Hence, Theorem 1 guarantees

  V[℘_{t₀,t|m}] ≥ ( lim inf_{ε→0} 8 (1 − F(℘_{t₀,0|m} ‖ ℘_{t₀,ε|m})) / ε² )^{−1}.   (26)

Applying Lemma 20 to {℘_{t₀,t|m}}, we have

  lim inf_{ε→0} 8 (1 − F(℘_{t₀,0|m} ‖ ℘_{t₀,ε|m})) / ε²
    ≤(a) lim inf_{ε→0} lim_{n→∞} 8 (1 − F(℘ⁿ_{t₀,0|M_n} ‖ ℘ⁿ_{t₀,ε|M_n})) / ε²
    ≤(b) lim inf_{ε→0} lim_{n→∞} 8 (1 − F(ρⁿ_{t₀,0} ‖ ρⁿ_{t₀,ε})) / ε²
    =(c) J_{t₀}.   (27)

The inequality (a) holds by Lemma 20 from "Appendix B", and the inequality (b) comes from the data-processing inequality of the fidelity. The equation (c) follows from Lemma 1. Finally, substituting Eq. (27) into Eq. (26), we have the desired bound (25). Now, we denote the PDF of ℘_{t₀,0|m} by ℘_{t₀,0|m,d}. In "Appendix A", the proof of Lemma 2 shows that we can apply Lemma 19 to {℘_{t₀,t|m}}_t. Since the Fisher information of this family is upper bounded by J_{t₀} due to (27), Lemma 19 yields the stated bound on the PDF.

When the PDF ℘_{t₀,t|m,d} is differentiable, to derive the equality condition in Eq. (25), we show (26) in a different way. By local asymptotic covariance, the PDF of the estimate t̂ at local parameter t is ℘_{t₀,t|m,d}(x̂) = ℘_{t₀,0|m,d}(x̂ − t). Let l_{t₀,t}(x̂) := ∂ log ℘_{t₀,t|m,d}(x̂)/∂t be its logarithmic derivative. The classical Cramér-Rao argument then gives

  V[℘_{t₀,t|m}] ≥ ( ∫ x̂ (∂/∂t) ℘_{t₀,t|m,d}(x̂) dx̂ )² / ∫ l_{t₀,t}(x̂)² ℘_{t₀,t|m,d}(x̂) dx̂.   (28)

The numerator on the right hand side of Eq. (28) can be evaluated by noticing that ∫ x̂ (∂/∂t) ℘_{t₀,t|m,d}(x̂) dx̂ = (∂/∂t) ∫ x̂ ℘_{t₀,t|m,d}(x̂) dx̂. By local asymptotic covariance, the mean ∫ x̂ ℘_{t₀,t|m,d}(x̂) dx̂ shifts linearly with t, so this quantity equals 1. Hence, (28) coincides with (26): the denominator on the right hand side of (28) equals the right hand side of (26). The equality in Eq. (28) holds if and only if l_{t₀,t}(x̂) is proportional to x̂ − t, which implies that ℘_{t₀,0|m} is the normal distribution with average zero and variance J_{t₀}^{−1}. The RHS of (25) can be regarded as the limiting distribution version of the SLD quantum Cramér-Rao bound. Note that, when the limiting PDF is differentiable and the bound is attained, the probability distribution ℘ⁿ_{t₀,t|M_n} is approximated (in the pointwise sense) by a normal distribution with average zero and variance 1/(n J_{t₀}). Using this fact, we will show that there exists a sequence of POVMs that attains the equality (25) at all points uniformly. The optimal sequence of POVMs is constructed explicitly in Sect. 6.
2.4. Comparison between local asymptotic covariance and other conditions

We conclude the section by discussing the relation between asymptotic covariance and other conditions that are often imposed on measurements. This subsection is not necessary for understanding the technical results in the next sections and can be skipped at a first reading.
Let us start with the unbiasedness condition. Assuming unbiasedness, one can derive the quantum Cramér-Rao bound on the MSE [5]. Holevo showed the attainability of the quantum Cramér-Rao bound when estimating displacements in Gaussian systems [5].
The disadvantage of unbiasedness is that it is too restrictive, as it is satisfied only by a small class of measurements. Indeed, the unbiasedness condition for the estimator M requires the conditions Tr[E (d^i ρ_t/dt^i)|_{t=t₀}] = 0 for i ≥ 2, with E := ∫ t̂ M(dt̂), as well as the condition Tr[E (dρ_t/dt)|_{t=t₀}] = 1. In certain situations, the above conditions might be incompatible. For example, consider a family of qubit states ρ_t := (1/2)(I + n_t · σ). When the Bloch vector n_t has a non-linear dependence on t and the set of higher order derivatives (d^i ρ_t/dt^i)|_{t=t₀} with i ≥ 2 spans the space of traceless Hermitian matrices, no unbiased estimator can exist. In contrast, local asymptotic covariance is only related to the first derivative (dρ_t/dt)|_{t=t₀}, because the contribution of higher order derivatives to the variable t̂_n has order o(1/√n) and vanishes under the condition of local asymptotic covariance.
One can see that the unbiasedness condition implies local asymptotic covariance with the parametrization ρ_{t₀+t/√n} in the following sense. When we have n (more than one) input copies, we can construct an unbiased estimator by applying a single-copy unbiased estimator M satisfying Eq. (9) to all copies, as follows. Given the outcomes x₁, …, x_n, we take the rescaled average

  t̂_n := √n ( (1/n) Σ_{i=1}^n x_i − t₀ ),   (30)

which satisfies the unbiasedness (9) for the local parameter t as well. When the single-copy estimator M has variance v at t₀, which is lower bounded by the Cramér-Rao inequality, the global estimator (1/n) Σ_i x_i has variance v/n at t₀. In addition, the average (30) of the obtained data satisfies local asymptotic covariance, because the rescaled estimator follows the Gaussian distribution with variance v in the large n limit by the central limit theorem; the center of the Gaussian distribution is pinned at the true value of the parameter by unbiasedness; the shape of the Gaussian is independent of the value t and depends only on t₀; thus local asymptotic covariance holds.
The above discussion can be extended to the multiple-copy case as follows. Suppose that M is an unbiased measurement for the ℓ-copy state ρ^{⊗ℓ}, where ℓ is an arbitrary finite integer. From the measurement M we can construct a measurement for the n-copy state with n = kℓ + i and i < ℓ, by applying the measurement M k times and discarding the remaining i copies. In the following, we consider the limit where the total number n tends to infinity, while ℓ is kept fixed. When the variance of M at t₀ is v/ℓ, the average (1/k) Σ_{i=1}^k x_i of the k obtained data x₁, …, x_k, rescaled as in (30), satisfies local asymptotic covariance, i.e., the rescaled estimator follows the Gaussian distribution with variance v in the large n limit. Therefore, for any unbiased estimator, there exists an estimator satisfying local asymptotic covariance that has the same variance.
Another common condition, less restrictive than unbiasedness, is local unbiasedness. This condition depends on the true parameter $t_0$ and consists of the two requirements (31) and (32), where ℓ is a fixed, but otherwise arbitrary, integer. The derivation of the quantum Cramér-Rao bound still holds, because it uses only the condition (32). When the parametrization $\rho_t$ is $C^1$ continuous, the first derivative $\frac{\mathrm d}{\mathrm dt} \int \hat t\, \mathrm{Tr}\, \rho_t^{\otimes\ell} M(\mathrm d\hat t)$ is continuous at $t = t_0$, and the local unbiasedness condition at $t_0$ yields local asymptotic covariance at $t_0$ in the same way as Eq. (30). Another relaxation of the unbiasedness condition is asymptotic unbiasedness [11], requiring $\lim_{n\to\infty} \int \hat t\, \mathrm{Tr}\, \rho_t^{\otimes n} M_n(\mathrm d\hat t) = t$ (33) together with the corresponding condition (34) on the derivative. The condition of asymptotic unbiasedness leads to a precision bound on the MSE [34, Chapter 6]. The bound is given by the SLD Fisher information, and therefore it is attainable for Gaussian states. However, no attainable bound for qudit systems has been derived so far under the condition of asymptotic unbiasedness. Interestingly, one cannot directly use the attainability for Gaussian systems to derive an attainability result for qudit systems, despite the asymptotic equivalence between Gaussian systems and qudit systems stated by quantum local asymptotic normality (Q-LAN) (see [16,17] and Sect. 4.1). The problem is that the error of Q-LAN goes to 0 for large n, but the error in the derivative may not go to zero, and therefore the condition (34) is not guaranteed to hold. In order to guarantee attainability of the quantum Cramér-Rao bound, one could think of further loosening the condition of asymptotic unbiasedness. An attempt to avoid the problem of the Q-LAN error could be to remove condition (34) and keep only condition (33). This leads to an enlarged class of estimators, called weakly asymptotically unbiased. The problem with these estimators is that no general MSE bound is known to hold at every point. For example, one can find superefficient estimators [35,36], which violate the Cramér-Rao bound on a set of points. Such a set must be of zero measure in the limit $n \to \infty$, but the violation of the bound may occur in a considerably large set when n is finite. In contrast, local asymptotic covariance guarantees the MSE bound (25) at every point t where the local asymptotic covariance condition is satisfied. All the alternative conditions for deriving MSE bounds discussed in this subsection are summarized in Table 2.
Holevo bound.
When studying multiparameter estimation in quantum systems, we need to address the tradeoff between the precision of estimation of the different parameters. This is done using two quantum extensions of the Fisher information matrix: the SLD and the right logarithmic derivative (RLD).
Consider a multiparameter family of density operators $\{\rho_t\}_{t\in\Theta}$, where Θ is an open set in $\mathbb R^k$, k being the number of parameters. Throughout this section, we assume that $\rho_{t_0}$ is invertible and that the parametrization is $C^1$ in all parameters. Then, the SLD $L_j$ and the RLD $\tilde L_j$ for the parameter $t_j$ are defined through the equations $\frac{\partial \rho_t}{\partial t_j} = \frac12\left(\rho_t L_j + L_j \rho_t\right)$ and $\frac{\partial \rho_t}{\partial t_j} = \rho_t \tilde L_j$, see e.g. [5,6] and [15, Sect. II]. It can be seen from the definitions that the SLD $L_j$ can always be chosen to be Hermitian, while the RLD $\tilde L_j$ is in general not Hermitian. The SLD quantum Fisher information matrix $J_t$ and the RLD quantum Fisher information matrix $\tilde J_t$ are the k × k matrices defined as $(J_t)_{ij} := \mathrm{Tr}\,\rho_t\, \frac{L_i L_j + L_j L_i}{2}$ (35) and $(\tilde J_t)_{ij} := \mathrm{Tr}\,\rho_t\, \tilde L_j \tilde L_i^\dagger$. Notice that the SLD quantum Fisher information matrix $J_t$ is a real symmetric matrix, but the RLD quantum Fisher information matrix $\tilde J_t$ is not a real matrix in general. A POVM M is called an unbiased estimator for the family $\mathcal S = \{\rho_t\}$ when the relation $\int \hat t\, \mathrm{Tr}\,\rho_t M(\mathrm d\hat t) = t$ holds for any parameter t. For a POVM M, we define the mean square error (MSE) matrix $V_t(M) := \int (\hat t - t)(\hat t - t)^T\, \mathrm{Tr}\,\rho_t M(\mathrm d\hat t)$. It is known that an unbiased estimator M satisfies the SLD-type and RLD-type Cramér-Rao inequalities $V_t(M) \ge J_t^{-1}$ and $V_t(M) \ge \tilde J_t^{-1}$, respectively [5]. Since it is not always possible to minimize the MSE matrix under the unbiasedness condition, we minimize the weighted MSE $\mathrm{tr}\, W V_t(M)$ for a given weight matrix $W \ge 0$, where tr denotes the trace of k × k matrices. When a POVM M is unbiased, one has the RLD bound [5] $\mathrm{tr}\, W V_t(M) \ge \mathrm{tr}\, W\, \mathrm{Re}(\tilde J_t^{-1}) + \mathrm{tr}\left|\sqrt W\, \mathrm{Im}(\tilde J_t^{-1})\, \sqrt W\right|$ (39). In particular, when W > 0, the lower bound (39) is attained by the matrix $V = \mathrm{Re}(\tilde J_t^{-1}) + \sqrt W^{-1}\left|\sqrt W\, \mathrm{Im}(\tilde J_t^{-1})\, \sqrt W\right| \sqrt W^{-1}$. The RLD bound has a particularly tractable form when the model is D-invariant: Definition 1. The model $\{\rho_t\}_{t\in\Theta}$ is D-invariant at t when the space spanned by the SLD operators is invariant under the linear map $\mathcal D_t$, where, for any operator X, $\mathcal D_t(X)$ is defined via an anticommutator equation involving the commutator $[A, B] = AB - BA$. When the model is D-invariant at any point, it is simply called D-invariant.
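As a concrete numerical illustration of these definitions, the following sketch computes the SLD and RLD Fisher information matrices for a toy two-parameter qubit model of our own choosing (not an example from this paper). The SLDs are obtained by solving the Lyapunov-type equation above; the RLD matrix is evaluated with one common convention, $(\tilde J_t)_{jk} = \mathrm{Tr}[\partial_j\rho\, \rho^{-1}\, \partial_k\rho]$, which should be checked against the conventions in use.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Toy model: rho_t = (I + t1*sx + t2*sy + 0.5*sz)/2, full rank near t = 0.
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

rho = 0.5 * (I2 + 0.1 * sx + 0.2 * sy + 0.5 * sz)   # point t0 = (0.1, 0.2)
drho = [0.5 * sx, 0.5 * sy]                          # partial derivatives

# SLDs: solve rho L + L rho = 2 drho (a Sylvester/Lyapunov equation).
L = [solve_sylvester(rho, rho, 2 * d) for d in drho]
J = np.array([[np.trace(rho @ (L[j] @ L[k] + L[k] @ L[j]) / 2).real
               for k in range(2)] for j in range(2)])

# RLD Fisher information (one common convention; conventions vary by a
# conjugation/ordering choice): Jt_jk = Tr[d_j rho  rho^{-1}  d_k rho].
rinv = np.linalg.inv(rho)
Jt = np.array([[np.trace(drho[j] @ rinv @ drho[k]) for k in range(2)]
               for j in range(2)])

print("SLD J =\n", J)
print("RLD Jt (Hermitian, complex off-diagonal) =\n", Jt.round(6))
```

The SLD matrix comes out real symmetric, while the RLD matrix acquires imaginary off-diagonal entries, exactly as stated above.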
For a D-invariant model, the RLD quantum Fisher information can be computed in terms of the D-matrix, namely the real skew-symmetric matrix $D_t$ built from the expectation values of the commutators of the SLDs (41). Precisely, the RLD quantum Fisher information has the expression [5] $\tilde J_t^{-1} = J_t^{-1} + \frac{i}{2}\, J_t^{-1} D_t J_t^{-1}$ (42). Hence, (39) becomes $\mathrm{tr}\, W V_t(M) \ge \mathrm{tr}\, W J_t^{-1} + \frac12\, \mathrm{tr}\left|\sqrt W\, J_t^{-1} D_t J_t^{-1}\, \sqrt W\right|$ (43). For D-invariant models, the RLD bound is larger, and thus a better bound, than the bound derived from the SLD Fisher information matrix (the SLD bound). However, in the one-parameter case, when the model is not D-invariant, the RLD bound is not tight, and it is common to use the SLD bound instead. Hence, both quantum extensions of the Cramér-Rao bound have advantages and disadvantages.
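Continuing the sketch above, the RLD bound can be evaluated numerically from $\tilde J_t$ alone, using the reconstruction of Eq. (39); here $|A|$ denotes the matrix absolute value, and the weight matrix W is an arbitrary choice of ours.

```python
# Continuation of the previous snippet: numerical RLD bound (cf. Eq. (39)).
from scipy.linalg import sqrtm

W = np.eye(2)                                # weight matrix (our choice)
Jt_inv = np.linalg.inv(Jt)
sqW = sqrtm(W).real
A = sqW @ Jt_inv.imag @ sqW                  # real antisymmetric matrix
abs_A = sqrtm(A @ A.T).real                  # |A| = sqrt(A A^T)
rld_bound = np.trace(W @ Jt_inv.real).real + np.trace(abs_A)
print("RLD bound on tr[W V]:", rld_bound)
```

The second trace is the genuinely quantum contribution: it vanishes exactly when $\tilde J_t^{-1}$ is real, i.e., when the parameters can be estimated without an incompatibility penalty.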
To unify both extensions, Holevo [5] derived the following bound, which improves the RLD bound when the model is not D-invariant. For a k-component vector $\boldsymbol X = (X_i)$ of operators, define the k × k matrix $Z_t(\boldsymbol X)$ as $(Z_t(\boldsymbol X))_{ij} := \mathrm{Tr}\,\rho_t X_j X_i$ (44). Then, Holevo's bound is as follows: for any weight matrix W, one has $\inf_{M \in \mathrm{UB}_{\mathcal M}} \mathrm{tr}\, W V_t(M) \ge C_{H,\mathcal M}(W, t)$ (45), where $\mathrm{UB}_{\mathcal M}$ denotes the set of all unbiased measurements under the model $\mathcal M$, and $C_{H,\mathcal M}(W, t)$ is the minimum of $\mathrm{tr}\, W V$ (46) over all real symmetric matrices V and all k-component vectors $\boldsymbol X = (X_i)$ of Hermitian operators satisfying $V \ge Z_t(\boldsymbol X)$ and the constraint (47). $C_{H,\mathcal M}(W, t)$ is called the Holevo bound. When W > 0, there exists a vector $\boldsymbol X$ achieving the minimum in (45). Hence, similarly to the RLD case, the equality in (45) holds for W > 0 only under the corresponding attainability condition. Moreover, we have the following proposition.
In (49), $\min_{\boldsymbol X : \mathcal M'}$ denotes the minimum over vectors $\boldsymbol X$ whose components $X_i$ are linear combinations of the SLD operators of the model $\mathcal M'$. In (50), the minimization is taken over all k × k′ matrices P satisfying the constraint $(P)_{ij} = \delta_{ij}$ for $i, j \le k$; $J_{t'}$ and $D_{t'}$ are the SLD Fisher information matrix and the D-matrix [cf. Eqs. (35) and (41)] for the extended model $\mathcal S'$ at $t' := (t, 0)$.
The Holevo bound is always tighter than the RLD bound, with equality if and only if the model $\mathcal M$ is D-invariant [37]. In the above proposition, it is not immediately clear whether the Holevo bound depends on the choice of the extended model $\mathcal S'$. In the following, we show that there is a minimum D-invariant extension of $\mathcal S$, and thus the Holevo bound is independent of the choice of $\mathcal S'$. The minimum D-invariant subspace in the space of Hermitian matrices is given as follows. Let $\mathcal V$ be the subspace spanned by the SLDs, and let $\mathcal V'$ be its closure under repeated application of the map $\mathcal D_t$. Then, the subspace $\mathcal V'$ is D-invariant and contains $\mathcal V$. What remains is to show that $\mathcal V'$ is the minimum D-invariant subspace. Let $\mathcal V''$ be the orthogonal complement of $\mathcal V'$ with respect to the inner product defined by $\langle X, Y\rangle := \mathrm{Tr}\,\rho\, X^\dagger Y$. We denote by P and $P''$ the projections onto $\mathcal V'$ and $\mathcal V''$, respectively. Each component $X_i$ of a vector of operators $\boldsymbol X$ can be expressed as $X_i = P X_i + P'' X_i$. Then, the two vectors $\boldsymbol X' := (P X_i)$ and $\boldsymbol X'' := (P'' X_i)$ satisfy the inequality $Z_t(\boldsymbol X) = Z_t(\boldsymbol X') + Z_t(\boldsymbol X'') \ge Z_t(\boldsymbol X')$. Substituting Eq. (35) into Eq. (47) and noticing that $P'' X_i$ has no support in $\mathcal V$, we see that only the part $P X_i$ contributes to the condition (47), and the minimum in (46) is attained when $\boldsymbol X'' = 0$. Hence, the minimum is achieved when each component of the vector $\boldsymbol X$ is included in the minimum D-invariant subspace $\mathcal V'$. Therefore, since the minimum D-invariant subspace is uniquely defined, the Holevo bound does not depend on the choice of the D-invariant model $\mathcal S'$ that extends $\mathcal S$.
Classical and quantum Gaussian states.
For a classical system of dimension $d_C$, a Gaussian state is a $d_C$-dimensional normal distribution $N[\alpha^C, \Gamma^C]$ with mean $\alpha^C$ and covariance matrix $\Gamma^C$. The corresponding random variable will be denoted as $Z = (Z_1, \ldots, Z_{d_C})$ and will take values $z = (z_1, \ldots, z_{d_C})$. For quantum systems we will restrict our attention to a subfamily of Gaussian states, known as displaced thermal states. For a quantum system made of a single mode, the displaced thermal states are defined as $\rho_{\alpha,\beta} := T^Q_\alpha\, \rho^{\mathrm{thm}}_\beta\, (T^Q_\alpha)^\dagger$, where $\alpha \in \mathbb C$ is the displacement, $T^Q_\alpha$ is the displacement operator, $\hat a$ is the annihilation operator satisfying the relation $[\hat a, \hat a^\dagger] = 1$, and $\rho^{\mathrm{thm}}_\beta$ is a thermal state, defined as $\rho^{\mathrm{thm}}_\beta := (1 - e^{-\beta}) \sum_{j \in \mathbb N} e^{-j\beta}\, |j\rangle\langle j|$, where the basis $\{|j\rangle\}_{j\in\mathbb N}$ consists of the eigenvectors of $\hat a^\dagger \hat a$ and $\beta \in (0, \infty)$ is a real parameter, hereafter called the thermal parameter. For a quantum system of $d_Q$ modes, the products of single-mode displaced thermal states will be denoted as in Eq. (54), where $\alpha^Q = (\alpha_j)_{j=1}^{d_Q}$ is the vector of displacements and $\beta^Q = (\beta_j)_{j=1}^{d_Q}$ is the vector of thermal parameters. In the following we will regard α as a vector in $\mathbb R^{2 d_Q}$, using the identification of each $\alpha_j$ with the pair $(\mathrm{Re}\,\alpha_j, \mathrm{Im}\,\alpha_j)$. For a hybrid system of $d_C$ classical variables and $d_Q$ quantum modes, we define the canonical Gaussian state $G[\alpha, \Gamma]$ as the product of the classical normal distribution and the displaced thermal states. Equivalently, the canonical Gaussian states can be expressed in terms of the Gaussian shift operator $T_\alpha$; for the classical part, we adopt the notation of Eq. (56). With this notation, the canonical Gaussian state $G[\alpha, \Gamma]$ is uniquely identified by the characteristic equation (60) [5]. The formulation in terms of the characteristic equation (60) can be used to generalize the notion of canonical Gaussian state [38]. Given a d-dimensional Hermitian matrix (correlation matrix) $\Gamma = \mathrm{Re}(\Gamma) + i\, \mathrm{Im}(\Gamma)$ whose real part $\mathrm{Re}(\Gamma)$ is positive semidefinite, we define the operators $R := (R_1, \ldots, R_d)$ via the commutation relation (61). We define the general Gaussian state $G[\alpha, \Gamma]$ on the operators R as the linear functional on the operator algebra generated by $R_1, \ldots, R_d$ satisfying the characteristic equation (60) [38]. Note that, although Γ is not necessarily positive semi-definite, its real part $\mathrm{Re}(\Gamma)$ is positive semi-definite. Hence, the right-hand side of Eq. (60) contains a negative semi-definite quadratic form, in the same way as for the standard Gaussian states.
For general Gaussian states, we have the following lemma.
Lemma 3. Given a Hermitian matrix Γ, there exists an invertible real matrix T such that the Hermitian matrix $T \Gamma T^T$ is the correlation matrix of a canonical Gaussian state.
In particular, an explicit choice of T is available in special cases. The proof is provided in "Appendix C".
In the above lemma, we can transform Γ into the block form $\Gamma^C \oplus \Gamma^Q$, where $\Gamma^C$ is real, by applying an orthogonal transformation. The unitary operation on the classical part is given as a scale conversion. Hence, an invertible real matrix T can be realized by the combination of a scale conversion and a linear conversion, which can be implemented as a unitary on the Hilbert space. Hence, a general Gaussian state can be obtained as the linear functional on the operator algebra resulting from the application of the linear conversion to a canonical Gaussian state. This kind of construction is unique up to unitary equivalence. Indeed, Petz [38] showed a similar statement by using the Gelfand-Naimark-Segal (GNS) construction. Our derivation directly shows the uniqueness without using the GNS construction.
Lemma 4. The Gaussian states family $\{G[t, \Gamma]\}_{t\in\mathbb R^d}$ satisfies $\tilde J_t^{-1} = \Gamma$ (63).
This lemma shows that the inverse of the RLD Fisher information matrix is given by the correlation matrix.
Proof. Due to the coordinate conversion given in Lemma 3, it is sufficient to show the relation (63) for the canonical Gaussian states family. In that case, the desired statement has already been shown by Holevo in [5].
Therefore, as shown in "Appendix D", a D-invariant Gaussian model can be characterized as follows: (2) the image of the linear map $A^{-1} T$ is invariant under the application of B; (3) there exist a unitary operator U and a Hermitian matrix with the properties specified in "Appendix D".
Measurements on the Gaussian states family.
We discuss the stochastic behavior of the outcome of a measurement on the c-q system generated by $R = (R_j)_{j=1}^d$ when the state is a general Gaussian state $G[\alpha, \Gamma]$. To this purpose, we introduce the notation $\wp_{\alpha|M}(B) := \mathrm{Tr}\, G[\alpha, \Gamma]\, M(B)$ for a POVM M. Then, we have the following lemma.
In this case, the weighted covariance matrix takes an explicit form; the proof is provided in "Appendix E". In the above lemma, when $\boldsymbol X = R$, we simplify $M^\Gamma_{P|W}$ to $M^\Gamma_W$. This lemma is useful for estimation in the Gaussian states family $\mathcal M := \{G[t, \Gamma]\}_{t\in\mathbb R^d}$. In this family, we consider the covariance condition.
for any t. This condition is equivalent to a shift-covariance relation for the output distribution. Then, we have the following lemma for this Gaussian states family.
Corollary 1 ([5]). For any weight matrix $W \ge 0$ and the above Gaussian states family $\mathcal M$, the bound (65) holds,
where $\mathrm{UB}_{\mathcal M}$ and $\mathrm{CUB}_{\mathcal M}$ are the sets of unbiased and covariant unbiased estimators for the model $\mathcal M$, respectively. Further, when W > 0, the above infimum is attained by the covariant unbiased estimator $M^\Gamma_W$, whose output distribution is the normal distribution with average t and the covariance matrix specified in Lemma 6. This corollary can be shown as follows. Due to Lemma 4, the lower bound (43) of the weighted MSE $\mathrm{tr}\, W V_t(M)$ of an unbiased estimator M equals the RHS of (65). Lemma 6 guarantees the required performance of $M^\Gamma_W$. To discuss the case when W is not strictly positive definite, we consider $W_\epsilon := W + \epsilon I$. Using the above method, we can construct an unbiased and covariant estimator whose output distribution is a $2 d_Q$-dimensional normal distribution with average t and whose weighted MSE converges to the bound (65) as $\epsilon \to 0$. By combining Proposition 1, this corollary can be extended to a linear subfamily of the k′-dimensional Gaussian family $\{G[t', \Gamma]\}_{t'\in\mathbb R^{k'}}$. Consider a linear map T from $\mathbb R^k$ to $\mathbb R^{k'}$. We have the following corollary for the subfamily $\mathcal M := \{G[T(t), \Gamma]\}_{t\in\mathbb R^k}$.
Corollary 2. For any weight matrix $W \ge 0$, the corresponding bound holds for the subfamily $\mathcal M$.
Further, when W > 0, we choose a vector $\boldsymbol X$ realizing the minimum in (49). The above infimum is attained by the covariant unbiased estimator $M_W$, whose output distribution is the normal distribution with average t and covariance matrix $\mathrm{Re}(Z_t(\boldsymbol X))$ plus the corresponding correction term; the minimum in (49) can be realized when the components of $\boldsymbol X$ are linear combinations of $R_1, \ldots, R_{k'}$. Hence, the latter part of the corollary with W > 0 follows from (45) and Lemma 6. The case with non-strictly-positive W can be shown by considering $W_\epsilon$ in the same way as in Corollary 1.
Local Asymptotic Normality
The extension from one-parameter estimation to multiparameter estimation is quite nontrivial. Hence, we first develop the concept of local asymptotic normality, which is the key tool for constructing the optimal measurement in multiparameter estimation. Since we could derive a tight MSE bound for the Gaussian states family, it is natural to approximate the general case by a Gaussian states family, and local asymptotic normality will serve as the bridge between general qudit families and Gaussian state families.
4.1.
Quantum local asymptotic normality with specific parametrization. For a quantum system of dimension d < ∞, also known as a qudit, we consider generic states, described by density matrices with full rank and non-degenerate spectrum. To discuss quantum local asymptotic normality, we need to define a specific coordinate system. For this aim, we consider the neighborhood of a fixed density matrix $\rho_{\theta_0}$, assumed to be diagonal in the canonical basis of $\mathbb C^d$ and parametrized as in Eq. (67). In the neighborhood of $\rho_{\theta_0}$, we parametrize the states of the system as in Eq. (68), where $U_{\theta^R, \theta^I}$ is the unitary matrix defined by Eq. (69). Here $\theta^R$ and $\theta^I$ are vectors of real parameters, and $\delta_{j,k}$ denotes the Kronecker delta. We note that by this definition the components of $\theta^R$ and $\theta^I$ are in one-to-one correspondence. The parameter $\theta = (\theta^C, \theta^R, \theta^I)$ will be referred to as the Q-LAN coordinate, and the state with this parametrization, which was used by Kahn and Guta in [16,17,39], will be denoted by $\rho^{KG}_\theta$. Q-LAN establishes an asymptotic correspondence between multicopy qudit states and Gaussian shift models. Using the parameterization $\theta = (\theta^C, \theta^R, \theta^I)$, the multicopy qudit models and the corresponding Gaussian shift models are equivalent in terms of the RLD quantum Fisher information matrix:
Lemma 7. The RLD quantum Fisher information matrices of the qudit model and of the corresponding Gaussian model in the Q-LAN coordinate coincide.
The calculations can be found in "Appendix F". The quantum version of local asymptotic normality has been derived in several different forms [16,17,39], with applications in quantum statistics [12,40], benchmarks [41], and data compression [42]. Here we use the version of [17], which states that n identical copies of a qudit state can be locally approximated by a c-q Gaussian state in the large-n limit. The approximation is in the following sense: Definition 3 (Compact uniformly asymptotic equivalence of models). For every $n \in \mathbb N^*$, let $\{\rho_{t,n}\}_{t\in\Theta_n}$ and $\{\tilde\rho_{t,n}\}_{t\in\Theta_n}$ be two models of density matrices acting on Hilbert spaces $\mathcal H$ and $\mathcal K$, respectively, where the set of parameters $\Theta_n$ may depend on n. We say that the two families are asymptotically equivalent for $t \in \Theta_n$, denoted as $\rho_{t,n} \cong \tilde\rho_{t,n}$ $(t \in \Theta_n)$, if there exist a quantum channel $\mathcal T_n$ (i.e. a completely positive trace-preserving map) mapping trace-class operators on $\mathcal H$ to trace-class operators on $\mathcal K$ and a quantum channel $\mathcal S_n$ mapping trace-class operators on $\mathcal K$ to trace-class operators on $\mathcal H$, which are independent of t and satisfy the conditions $\lim_{n\to\infty} \sup_{t\in\Theta_n} \|\tilde\rho_{t,n} - \mathcal T_n(\rho_{t,n})\|_1 = 0$ and $\lim_{n\to\infty} \sup_{t\in\Theta_n} \|\rho_{t,n} - \mathcal S_n(\tilde\rho_{t,n})\|_1 = 0$. Next, we extend asymptotic equivalence to compact uniformly asymptotic equivalence. In this extension, we also describe the order of the convergence.
Given a sequence $\{a_n\}$ converging to zero, for every t in a compact set K consider two models $\{\rho_{t,t',n}\}_{t'\in\Theta_n}$ and $\{\tilde\rho_{t,t',n}\}_{t'\in\Theta_n}$. We say that they are asymptotically equivalent for $t' \in \Theta_n$ compact uniformly with respect to t with order $a_n$, denoted as $\rho_{t,t',n} \stackrel{t}{\cong} \tilde\rho_{t,t',n}$ $(t' \in \Theta_n, a_n)$, if for every $t \in K$ there exist a quantum channel $\mathcal T_{n,t}$ mapping trace-class operators on $\mathcal H$ to trace-class operators on $\mathcal K$ and a quantum channel $\mathcal S_{n,t}$ mapping trace-class operators on $\mathcal K$ to trace-class operators on $\mathcal H$ such that $\sup_{t'\in\Theta_n} \|\tilde\rho_{t,t',n} - \mathcal T_{n,t}(\rho_{t,t',n})\|_1 = O(a_n)$ and $\sup_{t'\in\Theta_n} \|\rho_{t,t',n} - \mathcal S_{n,t}(\tilde\rho_{t,t',n})\|_1 = O(a_n)$ uniformly on K. Notice that the channels $\mathcal T_{n,t}$ and $\mathcal S_{n,t}$ depend on t and are independent of t′.
In the above terminology, Q-LAN establishes an asymptotic equivalence between families of n-copy qudit states and Gaussian shift models. Precisely, one has the following.
Proposition 2 (Q-LAN for a fixed parameterization; Kahn and Guta [16,17]). For any x < 1/9, we define the set $\Theta_{n,x}$ of θ as $\Theta_{n,x} := \{\theta \mid \|\theta\| \le n^x\}$, where $\|\cdot\|$ denotes the vector norm. Then, we have the compact uniformly asymptotic equivalence (75), where κ is a parameter satisfying κ ≥ 0.027 and $N[\theta^C, \Gamma_{\theta_0}]$ is the multivariate normal distribution with mean $\theta^C$ and covariance matrix $\Gamma_{\theta_0}$ with entries $\Gamma_{\theta_0,k,l}$. The conditions (73) and (74) are not enough to translate precision limits for one family into precision limits for the other. This is because such limits are often expressed in terms of the derivatives of the density matrix, whose asymptotic behaviour is not fixed by (73) and (74). In the following we will establish an asymptotic equivalence in terms of the RLD quantum Fisher information.
Quantum local asymptotic normality with generic parametrization.
In the following, we explore to what extent we can extend Q-LAN beyond Proposition 2. Precisely, we derive a Q-LAN equivalence as in Eq. (75) which is not restricted to the parametrization of Eqs. (68) and (69).
In the previous subsection, we discussed the specific parametrization given in (67). In the following, we discuss a generic parametrization. Given an arbitrary D-invariant model $\{\rho_t^{\otimes n}\}$ with vector parameter t, we have the following theorem.
Theorem 3 (Q-LAN for an arbitrary parameterization). Let $\{\rho_t\}_{t\in\Theta}$ be a k-parameter D-invariant qudit model. Assume that $\rho_{t_0}$ is a non-degenerate state, the parametrization is $C^2$ continuous, and $\tilde J_{t_0}^{-1}$ exists. Then, there exists a constant $c(t_0)$ such that the set $\Theta_{n,x,c(t_0)}$ defined in Eq. (76) satisfies the equivalence (77), where $\tilde J_{t_0}^{-1}$ is the inverse RLD Fisher information at $t_0$ and κ is a parameter satisfying κ ≥ 0.027.
Proof. We choose the basis $\{|i\rangle\}_{i=1}^d$ that diagonalizes the state $\rho_{t_0}$. We denote the Q-LAN parametrization based on this basis by $\rho^{KG}_\theta$, where $\theta^C$ is the parameter describing the diagonal elements of $\rho_{t_0}$. Since the parametrization $\rho_t$ is $C^2$-continuous, the function f mapping t to the Q-LAN coordinate is also $C^2$-continuous. Proposition 2 guarantees that
the combination of this evaluation and (78) yields
The combination of Lemma 5 and (79) implies (77).
The ε-Difference RLD Fisher Information Matrix
In Sect. 2.1 we evaluated the limiting distribution in the one-parameter case, using the fidelity as a discretized version of the SLD Fisher information. In order to tackle the multiparameter case, we need to develop a similar discretization for the RLD Fisher information matrix, which is the relevant quantity for the multiparameter setting (cf. Sect. 3). In this section we define a discretized version of the RLD Fisher information matrix, extending to the multiparameter case the single-parameter definition introduced by Tsuda and Matsumoto [25], who in turn extended the corresponding classical notion [43,44].
Definition.
Let $\mathcal M = \{\rho_t\}_{t\in\Theta}$ be a k-parameter model with the property that $\rho_{t_0}$ is invertible. If the parametrization $\rho_t$ is differentiable, the RLD quantum Fisher information matrix $\tilde J_t$ can be rewritten as the k × k matrix (80). The ε-difference RLD quantum Fisher information matrix $\tilde J_{t_0,\epsilon}$ is defined by replacing the partial derivatives with finite increments, as in Eq. (81), where $e_j$ is the unit vector with 1 in the j-th entry and zero in the other entries. Notice that one has the expression (82). When the parametrization $\rho_t$ is differentiable, one has $\lim_{\epsilon\to 0} \tilde J_{t_0,\epsilon} = \tilde J_{t_0}$ (83), where $\tilde J_{t_0}$ is the RLD quantum Fisher information matrix (80). When the parametrization is not differentiable, we define the RLD Fisher information matrix $\tilde J_{t_0}$ to be the limit (83), provided that the limit exists. Throughout this section, we impose no condition on the parametrization $\rho_t$, except for the requirement that $\rho_{t_0}$ be invertible.
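The convergence (83) can be checked numerically. The sketch below uses a one-parameter qubit family of our own and one plausible finite-increment form consistent in spirit with the definition above (the exact expression in Eq. (81) may differ in normalization); the point is only that the finite-increment quantity approaches the derivative-based RLD value as ε → 0.

```python
import numpy as np

# Finite-increment RLD information vs. the derivative-based RLD value.
sx = np.array([[0, 1], [1, 0]], complex)
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

rho = lambda t: 0.5 * (I2 + np.sin(t) * sx + 0.5 * np.cos(t) * sz)
t0 = 0.2
rinv = np.linalg.inv(rho(t0))

def J_eps(eps):
    # One plausible discretization: Tr[ D rho^{-1} D ] / eps^2 with
    # D = rho(t0 + eps) - rho(t0).
    D = rho(t0 + eps) - rho(t0)
    return (np.trace(D @ rinv @ D) / eps**2).real

drho = 0.5 * (np.cos(t0) * sx - 0.5 * np.sin(t0) * sz)
J_rld = np.trace(drho @ rinv @ drho).real
for eps in [0.1, 0.01, 0.001]:
    print(f"eps = {eps}:  J_eps = {J_eps(eps):.6f}   (RLD limit {J_rld:.6f})")
```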
The ε-difference RLD Cramér-Rao inequality.
A discrete version of the RLD quantum Cramér-Rao inequality can be derived under the assumption of ε-local unbiasedness, which requires unbiasedness at $t_0$ and at the ε-shifted points $t_0 + \epsilon e_j$. Under the ε-local unbiasedness condition, Tsuda and Matsumoto [25] derived a lower bound on the MSE for the one-parameter case. In the following theorem, we extend the bound to the multiparameter case.
Theorem 4 (ε-difference RLD Cramér-Rao inequality). The MSE matrix of an ε-locally unbiased POVM M at $t_0$ satisfies the bound $V_{t_0}(M) \ge (\tilde J_{t_0,\epsilon})^{-1}$ (84). Proof. For simplicity, we assume that $t_0 = 0$. For two vectors $a \in \mathbb C^k$ and $b \in \mathbb C^k$, we define two observables X and Y, built from the estimator and from the ε-increments of the state, respectively. Then, the Cauchy-Schwarz inequality implies a chain of inequalities whose second equality follows from ε-local unbiasedness at $t_0$. Note that one has $\mathrm{Tr}[Y^\dagger Y \rho_{t_0}] = \langle b|\tilde J_{t_0,\epsilon}|b\rangle$, which implies $\langle a|V_{t_0}(M)|a\rangle \ge \langle a|(\tilde J_{t_0,\epsilon})^{-1}|a\rangle$. Since a is arbitrary, the last inequality implies (84).
The ε-difference RLD Cramér-Rao inequality can be used to derive an information processing inequality, which states that the ε-difference RLD Fisher information matrix is non-increasing under the application of measurements. For a family of probability distributions $\{P_t\}_{t\in\Theta}$, we assume that $P_{t+\epsilon e_j}$ is absolutely continuous with respect to $P_t$ for every j. Then, the ε-difference Fisher information matrix is defined in terms of the Radon-Nikodým derivatives $p_{t+\epsilon e_j}$ and $p_{t+\epsilon e_i}$ of $P_{t+\epsilon e_j}$ and $P_{t+\epsilon e_i}$ with respect to $P_t$, respectively. We note that the papers [43,44] defined its one-parameter version when the distributions are absolutely continuous with respect to the Lebesgue measure. Hence, when an estimator $\hat t$ for the distribution family $\{P_t\}_{t\in\Theta}$ is ε-locally unbiased at $t_0$, in the same way as (84), we can show the ε-difference Cramér-Rao inequality (89). For a family of quantum states $\{\rho_t\}_{t\in\Theta}$ and a POVM M, we denote by $J^M_{t,\epsilon}$ the ε-difference Fisher information matrix of the probability distribution family $\{P^M_t\}_{t\in\Theta}$ defined by $P^M_t := \mathrm{Tr}\, M \rho_t$. With this notation, we have the following lemma:
Lemma 8. For every family of quantum states $\{\rho_t\}_{t\in\Theta}$ and every POVM M, one has the information processing inequality $\tilde J_{t_0,\epsilon} \ge J^M_{t_0,\epsilon}$ (90).
Proof. Consider the estimation of t from the probability distribution family $\{P^M_t\}_{t\in\Theta}$. Following the same arguments used for the achievability of the Cramér-Rao bound with locally unbiased estimators (see, for instance, Chapter 2 of Ref. [34]), it is possible to show that there exists an ε-locally unbiased estimator $\hat t$ at $t_0$ whose MSE matrix equals $(J^M_{t_0,\epsilon})^{-1}$. Combining the POVM M with the ε-locally unbiased estimator $\hat t$, we obtain a new POVM M′, which is ε-locally unbiased. Applying Theorem 4 to the POVM M′, we obtain $(J^M_{t_0,\epsilon})^{-1} \ge (\tilde J_{t_0,\epsilon})^{-1}$, which implies (90).
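The data-processing statement of Lemma 8 is easy to visualize numerically. The sketch below (our own one-parameter qubit illustration, using derivatives rather than ε-increments for simplicity) compares the classical Fisher information of a fixed projective measurement with the scalar RLD value; the former never exceeds the latter.

```python
import numpy as np

# Data processing: classical Fisher info of a POVM <= quantum (RLD) value.
sx = np.array([[0, 1], [1, 0]], complex)
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

rho  = lambda t: 0.5 * (I2 + np.sin(t) * sx + 0.5 * np.cos(t) * sz)
drho = lambda t: 0.5 * (np.cos(t) * sx - 0.5 * np.sin(t) * sz)

t0 = 0.3
# POVM: projectors onto the +/- eigenvectors of sigma_x.
v = np.array([[1, 1], [1, -1]], complex) / np.sqrt(2)
povm = [np.outer(v[:, i], v[:, i].conj()) for i in range(2)]

p  = np.array([np.trace(rho(t0)  @ E).real for E in povm])
dp = np.array([np.trace(drho(t0) @ E).real for E in povm])
J_cl  = np.sum(dp**2 / p)                                     # classical FI
J_rld = np.trace(drho(t0) @ np.linalg.inv(rho(t0)) @ drho(t0)).real
print(f"classical J_M = {J_cl:.4f}  <=  RLD J = {J_rld:.4f}")
```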
We stress that (90) is a matrix inequality for Hermitian matrices: in general, $\tilde J_{t_0,\epsilon}$ has complex entries. Also note that any classical process can be regarded as a POVM. Hence, in the same way as (90), using the ε-difference Cramér-Rao inequality (89), we can show the inequality $J_\epsilon \ge J'_\epsilon$ for a classical process E, where $J_\epsilon$ is the ε-difference Fisher information matrix of the distribution family $\{P_t\}_{t\in\Theta}$ and $J'_\epsilon$ is the ε-difference Fisher information matrix of the distribution family $\{E(P_t)\}_{t\in\Theta}$.
Extended models.
The lemmas in the previous subsection can be generalized to the case where an extended model $\mathcal M' := \{\rho_{t'}\}_{t'=(t,p)}$ contains the original model $\mathcal M$ as $\rho_t = \rho_{(t,0)}$. Choosing $t'_0 = (t_0, 0)$, we denote the ε-difference RLD Fisher information matrix at $t'_0$ for the family $\mathcal M'$ by $\tilde J'_{t'_0,\epsilon}$.
Lemma 9.
For an ε-locally unbiased estimator M at $t'_0$, there exists a k × k′ matrix P such that $P_{ij} = \delta_{ij}$ for $i, j \le k$ and the corresponding bound holds. Proof of Lemma 9. For an ε-locally unbiased estimator M at $t'_0$, there exists a k × k′ matrix P satisfying (94). Now, we introduce a new parametrization $\bar\rho_\eta$ as in (95). Applying Theorem 4 to the parameter η, we obtain (96). Combining (94) and (96), we obtain the desired statement.
In the same way as Lemma 8, Lemma 9 yields the following lemma.
Lemma 10.
For any POVM M, there exists a k × k′ matrix P such that $P_{ij} = \delta_{ij}$ for $i, j \le k$ and the bound (97) holds.
Asymptotic case.
We denote by $\tilde J^n_{t_0,\epsilon}$ the ε-difference RLD Fisher information matrix of the n-copy states $\{\rho_t^{\otimes n}\}_{t\in\Theta}$. In the following, we provide the analogue of Lemma 1 for the RLD Fisher information matrix.
Precision Bounds for Multiparameter Estimation
6.1. Covariance conditions. First, we introduce the condition on our estimators. The correspondence between qudit states and Gaussian states also extends to the estimator level. We consider a generic state family $\mathcal M = \{\rho_t\}_{t\in\Theta}$, with the parameter space Θ being an open subset of $\mathbb R^k$. Similar to the single-parameter case, given a point $t_0 \in \Theta$, we consider a local model $\rho^n_{t_0,t} := \rho^{\otimes n}_{t_0 + t/\sqrt n}$. Throughout this section, we assume that $\rho_{t_0}$ is invertible. For a sequence of POVMs $\mathfrak m := \{M_n\}$, we introduce the condition of local asymptotic covariance as follows: Condition 2 (Local asymptotic covariance). We say that a sequence of measurements $\mathfrak m := \{M_n\}$ satisfies local asymptotic covariance at $t_0 \in \Theta$ under the state family $\mathcal M$ if the probability distribution $\wp^n_{t_0,t|M_n}$ converges to a limiting distribution $\wp_{t_0,t|\mathfrak m}$ for which the relation $\wp_{t_0,t|\mathfrak m}(B + t) = \wp_{t_0,0|\mathfrak m}(B)$ holds for any $t \in \mathbb R^k$.² When we need to express the outcome of $\wp^n_{t_0,t|M_n}$ or $\wp_{t_0,t|\mathfrak m}$, we denote it by $\hat t$.
Further, we say that a sequence of measurements $\mathfrak m := \{M_n\}$ satisfies local asymptotic covariance under the state family $\mathcal M$ when it satisfies local asymptotic covariance at every element $t_0 \in \Theta$ under the state family $\mathcal M$.
Under these preparations, we obtain the following theorem by using Theorem 3.
Theorem 5. Let $\{\rho_t^{\otimes n}\}_{t\in\Theta}$ be a k-parameter D-invariant qudit model with $C^2$-continuous parametrization. Assume that $\tilde J_{t_0}^{-1}$ exists, $\rho_{t_0}$ is a non-degenerate state, and a sequence of measurements $\mathfrak m := \{M_n\}$ satisfies local asymptotic covariance at $t_0 \in \Theta$. Then there exists a covariant POVM $M^G$ such that (103) holds for any vector t and any measurable subset B. Here $\tilde J_{t_0}$ is the RLD Fisher information of the qudit model at $t_0$.
To show Theorem 5, we will use the following lemma. ² The range of t is determined via the constraint $t_0 + t/\sqrt n \in \Theta$. Just as in the one-parameter case, t can take any value in $\mathbb R^k$ when n is large enough. The range of the local parameter is then $t \in \mathbb R^k$.
holds for any vector α if and only if F is given by the inverse Fourier transform formula displayed there. Here ξ and y are k-dimensional vectors, $|y\rangle$ is a (multimode) coherent state, $\gamma_j$ are the thermal parameters of the Gaussian, and $\mathcal F^{-1}_{\xi\to y}(g)$ denotes the inverse of the Fourier transform $\mathcal F_{\xi\to y}(g) := \int \mathrm d\xi\, e^{i\xi\cdot y}\, g(\xi)$. Therefore, for a given function f(α), there uniquely exists an operator F satisfying (104).
The proof can be found in "Appendix G". Now, we are ready to prove Theorem 5.
Proof of Theorem 5. We consider, without loss of generality, $G[t, \tilde J_{t_0}^{-1}]$ to be in the canonical form, noticing that any Gaussian state is unitarily equivalent to a Gaussian state in canonical form, as shown by Lemma 3. For any measurable set B, we define the operator $M^G(B)$ as in Eq. (107). From this definition, it can be verified that $M^G(B)$ satisfies the definition of a POVM; in particular, the term involving $\mathcal F^{-1}$ in (107) is well defined. What remains to be shown is that the POVM $\{M^G(B)\}$ satisfies the covariance condition. Eq. (107) guarantees the required shift relations, and the uniqueness of the operator satisfying the condition (104) implies the covariance condition.
MSE bound for the D-invariant case.
Next, we derive a lower bound on the MSE of the limiting distribution for any D-invariant model. As an extension of the mean square error, we introduce the mean square error matrix (MSE matrix) $V[\wp]$, defined for a generic probability distribution ℘ by integrating $(\hat t - t)(\hat t - t)^T$ against ℘. Since the set of symmetric matrices is not totally ordered, we will consider the minimization of the expectation value $\mathrm{tr}\, W V[\wp_{t_0,t|\mathfrak m}]$ for a certain weight matrix $W \ge 0$. For short, we will refer to the quantity $\mathrm{tr}\, W V[\wp_{t_0,t|\mathfrak m}]$ as the weighted MSE. Under local asymptotic covariance, one can derive lower bounds on the covariance matrix of the limiting distribution and construct optimal measurements to achieve them. In general, the attainability of the conventional quantum Cramér-Rao bounds is a challenging issue. For instance, a well-known bound is the symmetric logarithmic derivative (SLD) Fisher information bound $V[\wp_{t_0,t|\mathfrak m}] \ge J_{t_0}^{-1}$, where $J_{t_0}$ is the SLD Fisher information. The SLD bound is attainable in the single-parameter case, i.e. when k = 1, yet it is in general not attainable for multiparameter estimation (see, for instance, Sect. 10.1 for a concrete example).
In the following, we derive an attainable lower bound on the weighted MSE. To this purpose, we define the set $\mathrm{LAC}(t_0)$ of locally asymptotically covariant sequences of measurements at the point $t_0 \in \Theta$. For a model $\mathcal M$, we focus on the minimum value $C_{\mathcal S}(W, t_0)$ of the weighted MSE over $\mathrm{LAC}(t_0)$ (110). When $k \ge 2$, a better choice is the RLD quantum Fisher information bound. The main result of this section is an attainable bound on the weighted MSE, relying on the RLD quantum Fisher information.
Theorem 6 (Weighted MSE bound for D-invariant models). Assume that $\tilde J_{t_0}^{-1}$ exists. Consider any sequence of locally asymptotically covariant measurements $\mathfrak m := \{M_n\}$. The limiting distribution satisfies $V[\wp_{t_0,t|\mathfrak m}] \ge \tilde J_{t_0}^{-1}$ (111), where $\tilde J_{t_0}$ is the RLD quantum Fisher information. When the model is $C^1$ continuous and D-invariant, we have the bound for the weighted MSE with weight matrix $W \ge 0$ of the limiting distribution $\mathrm{tr}\, W V[\wp_{t_0,t|\mathfrak m}] \ge \mathrm{tr}\, W J_{t_0}^{-1} + \frac12\, \mathrm{tr}\left|\sqrt W J_{t_0}^{-1} D_{t_0} J_{t_0}^{-1} \sqrt W\right|$ (112), where $J_{t_0}$ is the SLD quantum Fisher information (35) and $D_{t_0}$ is the D-matrix (41). When $\mathcal S$ is a D-invariant qudit model and the state $\rho_{t_0}$ is not degenerate, the bound (112) coincides with the minimum (113). Moreover, if W > 0 and $\wp_{t_0,0|\mathfrak m}$ has a differentiable PDF, the equality in (112) holds if and only if $\wp_{t_0,t|\mathfrak m}$ is the normal distribution with average zero and covariance $V_{t_0|W}$ (114). Further, when $\{\rho_t\}_{t\in\Theta}$ is a qudit model with $C^2$-continuous parametrization, the equality in (112) is attained: there exist a sequence of POVMs $M^{t_0,n}_W$, a compact set K, and a constant $c(t_0)$ such that the lim sup evaluation holds, where κ is a parameter satisfying κ ≥ 0.027.
In the following, we prove Theorem 6 in three steps. The first step is to derive the bound (112). The second step is to show that, to achieve the equality, the limiting distribution needs to be a Gaussian with a certain covariance. The last step is to find a measurement attaining the equality; in this way, when the state is not degenerate, we can construct the measurement using Q-LAN.³ Proof of Theorem 6. Impossibility part⁴ (Proofs of (111) and (112)): We focus on the ε-difference RLD Fisher information matrix $\tilde J_{t_0,\epsilon}$ at $t_0$ for a quantum state family $\{\rho_t\}_{t\in\Theta}$. We denote the ε-difference Fisher information matrices for the distribution families $\{\wp^n_{t_0,t|M_n}\}_t$ and $\{\wp_{t_0,t|\mathfrak m}\}_t$ by $J^n_{t,\epsilon}$ and $J^{\mathfrak m}_{t,\epsilon}$, respectively. We also employ the notations given in Sect. 5.4.
Applying (90) to the POVM $M_n$, we obtain (116). By taking the limit $n \to \infty$, the combination of (116), (98) of Lemma 11, and (117) yields a bound on $J^{\mathfrak m}_{t,\epsilon}$. Here, in the same way as in the proof of Theorem 2, we can assume that the outcome $\hat t$ satisfies the unbiasedness condition. Hence, the ε-difference Cramér-Rao inequality (89) implies (119). By taking the limit ε → 0, (99) of Lemma 11 yields the limiting bound. When the model is $C^1$ continuous and D-invariant, adding the conventional discussion for MSE bounds (see, e.g., Chapter 6 of [5]) to (119), we obtain (112). Achievability part (Proof of (113)): Next, we discuss the attainability of the bound when W > 0 and $\wp_{t_0,0|\mathfrak m}$ has a differentiable PDF. In this case, we have the Fisher information matrix $J^{\mathfrak m}_0$ of the location-shift family $\{\wp_{t_0,t|\mathfrak m}\}_t$. Taking the limit ε → 0 in (119), we obtain (121). The equality in (112) holds if and only if $V[\wp_{t_0,t|\mathfrak m}] = V_{t_0|W}$ and the equality in the first inequality of (121) holds. By the same discussion as in the proof of Theorem 2, the equality in the first inequality of (121) holds only when all the components of the logarithmic derivative of the distribution family $\{\wp_{t_0,t|\mathfrak m}\}_t$ are linear combinations of the estimates of the $t_i$. This condition is equivalent to the condition that $\{\wp_{t_0,t|\mathfrak m}\}_t$ is a family of shifted normal distributions. Therefore, when W > 0, the equality condition of Eq. (112) is that $\wp_{t_0,t|\mathfrak m}$ is the normal distribution with average zero and covariance matrix $V_{t_0|W}$. Now, we assume that the state $\rho_{t_0}$ is not degenerate. Then, we use Q-LAN to show that there always exists a sequence of POVMs $\mathfrak m = \{M_n\}$ satisfying the above property. We rewrite Eq. (77) of Theorem 3 as follows.
The lim sup evaluation holds with the same notation as in Theorem 3. Then, we choose the covariant POVM $M^G$ accordingly. Notice that when W has null eigenvalues, $\sqrt W^{-1}$ is not properly defined. In this case, we consider $W_\epsilon := W + \epsilon I$. Meanwhile, since $W_\epsilon > 0$, we can repeat the above argument to find a qudit measurement that attains $\mathrm{tr}\, W_\epsilon J_\epsilon^{-1}$. Taking the limit ε → 0, the quantity $\mathrm{tr}\, W_\epsilon J_\epsilon^{-1}$ converges to the equality of Eq. (113). Therefore, we can still find a sequence of measurements with Fisher information $\{J_\epsilon\}$ that approaches the bound.
Precision bound for the estimation of generic models.
In the previous subsection, we established the precision bound for D-invariant models, where the bound is attainable and has a closed form. Here we extend the bound to arbitrary n-copy qudit models. The main idea is to extend the model to a larger D-invariant model by introducing additional parameters. When estimating parameters in a generic model $\mathcal S$ (consisting of states generated by noisy evolutions, for instance), the bound (112) may not hold. It is then convenient to extend the model to a D-invariant model $\mathcal S'$ which contains $\mathcal S$. Since the bound (112) holds for the new model $\mathcal S'$, a corresponding bound can be derived for the original model $\mathcal S$. The new model $\mathcal S'$ has some additional parameters other than those of $\mathcal S$, which are fixed in the original model $\mathcal S$. Therefore, a generic quantum state estimation problem can be regarded as an estimation problem in a D-invariant model with fixed parameters. The task is to estimate the parameters in a model $\mathcal S$ (globally) parameterized as $t'_0 = (t_0, p_0) \in \Theta'$, where $p_0$ is a fixed vector and $\Theta'$ is an open subset of $\mathbb R^{k'}$ that equals Θ when restricted to $\mathbb R^k$. In the neighborhood of $t'_0$, since the vector $p_0$ is fixed, we have $t' = (t, 0)$, with 0 being the null vector of $\mathbb R^{k'-k}$ and $t \in \mathbb R^k$ being a vector of free parameters. In this scenario, only the parameters in t need to be estimated, and we know the parameters $p_0$. Hence, the MSE matrix of $\hat t$ takes a block-diagonal form for any locally asymptotically covariant measurement sequence $\mathfrak m$. Due to the block-diagonal form of the MSE matrix, to discuss the weight matrix W in the original model $\mathcal S$, we consider the weight matrix $W' = P^T W P$ in the D-invariant model $\mathcal S'$, where P is any k × k′ matrix satisfying the constraint $(P)_{ij} = \delta_{ij}$ for $i, j \le k$, in the following way.
Theorem 7 (MSE bound for generic models). Let the models $\mathcal S$ and $\mathcal S'$ be $C^1$ continuous and given as in Proposition 1, with the same notation as in Proposition 1. Also, assume that $\tilde J_{t'_0}^{-1}$ exists. Consider any sequence of locally asymptotically covariant measurements $\mathfrak m := \{M_n\}$. Then, the MSE matrix of the limiting distribution $\wp_{t_0,t|\mathfrak m}$ is bounded as in (124): there exists a k × k′ matrix P such that (125) holds. Moreover, if W > 0 and $\wp_{t_0,0|\mathfrak m}$ has a differentiable PDF, the equality in (125) holds if and only if $\wp_{t_0,t|\mathfrak m}$ is the normal distribution with average zero and the covariance matrix of (126). Theorem 7 determines the ultimate precision limit for generic qudit models. Now, we compare it with the most general existing bound on quantum state estimation, namely Holevo's bound [5]. Let us define the ultimate precision of unbiased measurements as the infimum of the weighted MSE over all unbiased measurements. Since the Holevo bound still holds in the n-copy case (see [15, Lemma 4]), this quantity is lower bounded by the Holevo bound. There are a couple of differences between our results and existing results. The Holevo bound is derived under the unbiasedness assumption, which, as mentioned earlier, is more restrictive than local asymptotic covariance; our bound (125) thus applies to a wider class of measurements than the Holevo bound. Furthermore, Yamagata et al. [19] showed a statement similar to (127) of Theorem 7 in a local model scenario. They did not show the compact uniformity of the convergence and had no order estimate of the convergence, whereas our evaluation (127) guarantees the compact uniformity together with the order estimate. Moreover, they did not discuss an estimator attaining the bound globally. Later, we will construct an estimator that attains our bound globally, based on the estimator given in Theorem 7. Our detailed evaluation, with the compact uniformity and the order estimate, enables us to evaluate the performance of such an estimator globally.
Proof of Theorem 7. Impossibility part (Proofs of (124) and (125)): We denote the ε-difference Fisher information matrices for the distribution families $\{\wp^n_{t_0,t|M_n}\}_t$ and $\{\wp_{t_0,t|\mathfrak m}\}_t$ by $J^n_{t,\epsilon}$ and $J^{\mathfrak m}_{t,\epsilon}$, respectively. Also, we denote the ε-difference-type RLD Fisher information matrix at $t'_0 = (t_0, 0)$ of the family $\{\rho^{\otimes n}_{t'}\}_{t'}$ by $\tilde J'^n_{t'_0,\epsilon}$. Then, we obtain (117) in the same way.
Applying (97) of Lemma 10 with ε → ε/√n, we find k × k′ matrices $P_n$ such that (130) holds. Hence, the combination of (98) of Lemma 11, (130), and (117) implies that there exists a k × k′ matrix P satisfying (131). For the same reason as (119), we have (133). By taking the limit ε → 0, the combination of (99) of Lemma 11 and (133) implies (124). When the model $\mathcal M'$ is D-invariant, we obtain (125) by using the expression (50) in the same way as (112). Achievability part (Proof of (126)): Since $\rho_{t_0}$ is not degenerate, we can show the achievability in the same way as in Theorem 6, because we can apply Q-LAN (Theorem 3) to the model $\mathcal M'$. The difference is the following: choosing the matrix P to achieve the minimum in (50), we employ the corresponding covariant POVM.
Nuisance Parameters
For state estimation in a noisy environment, the strength of the noise is not a parameter of interest, yet it affects the precision of estimating the other parameters. In this scenario, the noise strength is a nuisance parameter [46,47]. To illustrate the difference between nuisance parameters and the fixed parameters discussed in the previous section, let us consider a qubit clock state undergoing a noisy time evolution. To estimate the duration of the evolution, we introduce the strength of the noise as an additional parameter and consider the estimation problem in the extended model parameterized by the duration and the noise strength. The strength of the noise is usually unknown. Although it is not a parameter of interest, its value affects the precision of our estimation, and thus it should be treated as a nuisance parameter.
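The cost of a nuisance parameter is easy to quantify at the level of the SLD Fisher information: the variance bound for the parameter of interest is $(J^{-1})_{tt}$ when the nuisance parameter is unknown, versus $1/J_{tt}$ when it is known. The sketch below illustrates this gap on a toy two-parameter qubit model of our own (not the clock model above), with t the parameter of interest and p a skewed nuisance direction.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Toy model: Bloch vector n(t, p) = (t + 0.3 p, p, 0.5); directions are skewed.
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

t0, p0 = 0.1, 0.2
rho = 0.5 * (I2 + (t0 + 0.3 * p0) * sx + p0 * sy + 0.5 * sz)
drho = [0.5 * sx, 0.5 * (0.3 * sx + sy)]          # d/dt, d/dp

L = [solve_sylvester(rho, rho, 2 * d) for d in drho]   # SLDs
J = np.array([[np.trace(rho @ (L[j] @ L[k] + L[k] @ L[j]) / 2).real
               for k in range(2)] for j in range(2)])

var_p_unknown = np.linalg.inv(J)[0, 0]   # bound on t with p as nuisance
var_p_known   = 1.0 / J[0, 0]            # bound on t when p is given
print(f"(J^-1)_tt = {var_p_unknown:.4f}  >=  1/J_tt = {var_p_known:.4f}")
```

The inequality $(J^{-1})_{tt} \ge 1/J_{tt}$ always holds, with equality exactly when the two parameters are orthogonal, foreshadowing Lemma 15 below.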
Precision bound for estimation with nuisance parameters.
In this subsection, we consider state estimation of an arbitrary (k + s)-parameter model $\{\rho_{t,p}\}_{(t,p)\in\tilde\Theta}$, where t and p are k-dimensional and s-dimensional parameters, respectively. Our task is to estimate only the parameters t; it is not required to estimate the other parameters p, which are called nuisance parameters. Hence, our estimate is k-dimensional. We say that a parametric family with this structure is a nuisance parameter model, and denote it by $\tilde{\mathcal S} = \{\rho_{t,p}\}_{(t,p)\in\tilde\Theta}$. We abbreviate (t, p) as $\tilde t$.
The concept of local asymptotic covariance can be extended to a model with nuisance parameters by considering a local model $\rho^n_{\tilde t_0,\tilde t} := \rho^{\otimes n}_{\tilde t_0 + \tilde t/\sqrt n}$. Throughout this section, we assume that $\rho_{\tilde t_0}$ is invertible and all the parametrizations are at least $C^1$ continuous. Condition 3 (Local asymptotic covariance with nuisance parameters). We say that a sequence of measurements $\mathfrak m := \{M_n\}$ estimating the k-dimensional parameter t satisfies local asymptotic covariance at $\tilde t_0 = (t_0, p_0) \in \tilde\Theta$ under the nuisance parameter model $\tilde{\mathcal M}$ when the probability distribution converges to a limiting distribution satisfying the corresponding shift-covariance relation.
In (138), V is a real symmetric matrix and $\boldsymbol X = (X_i)$ is a k-component vector of operators satisfying the corresponding constraint.
In (139), the minimization is taken over all k × (k + s) matrices satisfying the constraint $(P)_{ij} = \delta_{ij}$ for $i \le k$, $j \le k + s$, and $J_{\tilde t_0}$ and $D_{\tilde t_0}$ are the SLD Fisher information matrix and the D-matrix [cf. Eqs. (35) and (41)] for the extended model $\mathcal S'$ at $\tilde t_0 := (t_0, 0)$. In the following, we derive an attainable lower bound on the weighted MSE. To this purpose, we define the set $\mathrm{LAC}(\tilde t_0)$ of locally asymptotically covariant sequences of measurements at the point $\tilde t_0 \in \tilde\Theta$ for the nuisance parameter model $\tilde{\mathcal M}$, and focus on the minimum value of the weighted MSE over this set. When the model $\mathcal S'$ is D-invariant, we have the bound (144) for the weighted MSE with weight matrix $W \ge 0$ of the limiting distribution.
When the model $\mathcal S'$ is a D-invariant qudit model and the state $\rho_{t_0}$ is not degenerate, the bound (144) is attained.
Moreover, if W > 0 and $\wp_{\tilde t_0,t|\mathfrak m}$ has a differentiable PDF, the equality in (144) holds if and only if $\wp_{\tilde t_0,t|\mathfrak m}$ is the normal distribution with average zero and the covariance determined by the vector $\boldsymbol X$ realizing the minimum in (138). Here κ is a parameter satisfying κ ≥ 0.027. Before proving Theorem 8, we discuss a linear subfamily of the k′-dimensional Gaussian family $\{G[t', \gamma]\}_{t'\in\mathbb R^{k'}}$. Consider a linear map T from $\mathbb R^{k+s}$ to $\mathbb R^{k'}$. We have the subfamily $\tilde{\mathcal M} := \{G[T(t,p), \gamma]\}_{(t,p)\in\mathbb R^{k+s}}$ as a nuisance parameter model. Then, the covariance condition is extended to the requirement that $\wp_{(t,p)|M}(B + t)$ be independent of t.
Then, we have the following corollary of Lemma 6: the nuisance parameter model $\tilde{\mathcal M} = \{G[T(t,p), \gamma]\}$ with $C^1$-continuous parametrization satisfies the corresponding bound, where $\mathrm{UB}_{\tilde{\mathcal M}}$ and $\mathrm{CUB}_{\tilde{\mathcal M}}$ are the sets of unbiased estimators and covariant unbiased estimators of the nuisance parameter model $\tilde{\mathcal M}$, respectively. Further, when W > 0, we choose a vector $\boldsymbol X$ realizing the minimum in (49). The above infimum is attained by the covariant unbiased estimator $M_W$, whose output distribution is the normal distribution with average t and covariance matrix $\mathrm{Re}(Z_t(\boldsymbol X))$ plus the corresponding correction term. This corollary can be shown as follows. The inequality $\inf_{M\in\mathrm{UB}_{\tilde{\mathcal M}}} \mathrm{tr}\, W V_t(M) \ge C_{NH,\tilde{\mathcal M}}(W, t)$ follows from the condition (140). Similar to Corollary 2, Proposition 1 guarantees that the latter part of the corollary with W > 0 follows from (138) and Lemma 6. Hence, we obtain this corollary for W > 0. The case with non-strictly-positive W can be shown by considering $W_\epsilon$ in the same way as in Corollary 1.
Proof of Theorem 8. Impossibility part (Proofs of (143) and (144)): We denote the ε-difference Fisher information matrix of $\{\wp_{\tilde t_0,\tilde t|\mathfrak m}\}_{\tilde t}$ by $J^{\mathfrak m}_{\tilde t_0,\epsilon}$. Due to (132), there exists a (k + s) × k matrix $\bar P$ satisfying the following conditions.
We define the k × (k + s) matrix $\bar P$ accordingly. Then, for two vectors $a \in \mathbb R^k$ and $b \in \mathbb R^{k+s}$, we apply the Schwarz inequality to two suitably chosen variables X and Y.
Lemma 14. When $\tilde{\mathcal S} = \{\rho_{(t,p)}\}_{(t,p)\in\tilde\Theta}$ is a D-invariant (k + s)-parameter nuisance parameter model and $J^{-1}_{\tilde t_0}$ exists, we have:
A few comments are in order. First, the nuisance parameter bound (144) reduces to the bound (112) when the parameters of interest are orthogonal to the nuisance parameters, in the sense that the RLD Fisher information matrix $\tilde J_{\tilde t_0}$ is block-diagonal. This orthogonality is equivalent to the condition that the SLD Fisher information matrix $J_{\tilde t_0}$ and the D-matrix take the block-diagonal forms (154). This is the case, for instance, in the simultaneous estimation of the spectrum and the Hamiltonian-generated phase of a two-level system. Under such circumstances, the inversion of the Fisher information matrix can be done by inverting $J_{t_0}$ and $J_N$ independently. The same precision bound is thus obtained with or without introducing nuisance parameters, and we have the following lemma.
Lemma 15. When all nuisance parameters are orthogonal to the parameters of interest, the bound with nuisance parameters (144) coincides with the D-invariant MSE bound (112).
In the case of orthogonal nuisance parameters, the estimation of nuisance parameters does not affect the precision of estimating the parameters of interest, which does not hold for the generic case of non-orthogonal nuisance parameters. Thanks to this fact, one can achieve the bound (144) by first measuring the nuisance parameters and then constructing the optimal measurement based on the estimated value of the nuisance parameters. On the other hand, an RLD bound [cf. Eq. (39)] can be attained if and only if its model is D-invariant. Combining these arguments with Lemma 15, we obtain a characterization of the attainability of RLD bounds as follows.
Corollary 4. An RLD bound can be achieved if and only if it has an orthogonal nuisance extension, i.e. Eq. (154) holds for some choice of nuisance parameters.
The above corollary offers a simple criterion for the important problem of the attainability of RLD bounds. In Sect. 10.3, we will illustrate the application of this criterion with a concrete example. The bound (144) can be straightforwardly computed even for complex models; for D-invariant models, the SLD operators have a uniform entry-wise expression, and one only needs to feed it into a program to obtain the bound (144). Moreover, the bound does not rely on the explicit choice of nuisance parameters. To see this, one can consider another parameterization x of the D-invariant model. The bound (144) comes from the RLD bound for the D-invariant model, and the RLD quantum Fisher information matrices $\tilde J_{\tilde t_0}$ and $\tilde J_{\tilde x_0}$ for the two parameterizations are connected by the corresponding Jacobian relation. Since both parameterizations are extensions of the same model $\mathcal S$ satisfying $P_0 \tilde t_0 = P_0 \tilde x_0 = t_0$, the Jacobian takes a block form, and the relevant blocks of $\tilde J_{\tilde x_0}^{-1}$ and $\tilde J_{\tilde t_0}^{-1}$ are equal. The bound (144) thus remains unchanged.
Precision bound for joint measurements. A useful implication of the above results is a precision bound for the joint measurement of observables.
The main result of this subsection is the following corollary:
where $\mathrm{MSE}_{o_i}$ denotes the MSE of $o_i$ under the joint measurement and J is the SLD quantum Fisher information. The sum of the SLD gaps over all observables satisfies the attainable bound (157):
where D is the D-matrix.
The right hand side of Eq. (157) is exactly the gap between the SLD bound and the ultimate precision limit. It shows a typical example where the SLD bound is not attainable.
Proof. Substituting W in Eq. (144) by the projection onto the subspace $\mathbb R^k$, we obtain a bound (158) for the MSEs $\{\mathrm{MSE}_{o_i}\}$ of the limiting distributions. Here J and D are the SLD Fisher information and the D-matrix for the extended model, and $(A)_{k\times k}$ denotes the upper-left k × k block of a matrix A. Substituting the above definition into Eq. (158), we obtain Corollary 5.
Specifically, for the case of two parameters, the bound (157) reduces to (160), where $\hat L_j = \sum_i (J^{-1})_{ji} L_i$ are the SLD operators in the dual space. Next, taking the partial derivative with respect to $o_j$ on both sides of Eq. (155) and substituting in the definition of the RLD operators, the observables satisfy the orthogonality relation (159) with the SLD operators. By the uniqueness of the dual space, we have $\hat L_i = O_i$ for $i = 1, \ldots, k$, and the bound becomes (160). Another bound expressing the tradeoff between $\Delta o_1$ and $\Delta o_2$ was obtained by Watanabe et al. [48] as (161). Now, substituting $O_2$ by $\alpha O_2$ for a variable $\alpha \in \mathbb R$ in Eq. (160), we obtain a quadratic inequality in α. For this quadratic inequality to hold for any $\alpha \in \mathbb R$, its discriminant must be non-positive, which immediately implies the bound (161). Notice that the bound (161) was derived under asymptotic unbiasedness [48], and thus it was not guaranteed to be attainable. Here, instead, since our bound (160) is always attainable, the bound (161) can also be achieved in any qudit model under the asymptotically covariant condition.
Nuisance parameters versus fixed parameters. It is natural to ask what the relationship is between the nuisance parameter bound (144) and the general bound (125).
To see this, let $\mathcal S = \{\rho_t\}_{t\in\Theta}$ be a generic k-parameter qudit model and let $\tilde{\mathcal S}$ be a (k + s)-parameter D-invariant model containing $\mathcal S$. When $\rho_{t_0}$ is non-degenerate, we notice that the quantum Cramér-Rao bound with nuisance parameters (144) can be rewritten as (162), where $P_0$ is a k × (k + s) matrix satisfying the constraint $(P_0)_{ij} = \delta_{ij}$ for any $i, j \le k + s$. By definition, $P_0$ is a special case of P, and it follows straightforwardly from comparing Eq. (162) with Eq. (125) that the general MSE bound is upper bounded by the MSE bound for the nuisance parameter case. This observation agrees with the obvious intuition that having additional information on the system is helpful for (or at least, not detrimental to) estimation. Finally, since $J_{\tilde t_0}$ and $D_{\tilde t_0}$ are block-diagonal in the case of orthogonal nuisance parameters, the corresponding equality holds for any k × (k + s) matrix satisfying the constraint $(P)_{ij} = \delta_{ij}$ for $i, j \le k$. This implies that the general bound (125) coincides with the nuisance parameter bound (144) when the nuisance parameters are orthogonal.
Tail Property of the Limiting Distribution
In the previous discussions, we focused on the MSE of the limiting distribution. Here, instead, we consider the behavior of the limiting distribution itself. The characteristic property is the tail property: given a weight matrix $W \ge 0$ and a constant c, we define the tail region $T_{W,c}(t)$ as the set of estimates whose W-weighted deviation from t exceeds the threshold set by c. For a measurement $\mathfrak m = \{M_n\}$ with estimate $\hat t_n$, the probability that the estimate $\hat t_n$ is in the tail region can be approximated by the tail probability of the limiting distribution, i.e.
up to a term vanishing in n. The tail property is usually harder to characterize than the MSE. Nevertheless, here we show that, under certain conditions, there exists a good bound on the tail property of the limiting distribution.
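When the limiting distribution is a Gaussian, the tail probability over $T_{W,c}$ is straightforward to estimate numerically. The minimal sketch below is our own illustration and assumes that the tail region is the set where the W-weighted quadratic form exceeds $c^2$, which matches the use of $T_{W,c}$ here.

```python
import numpy as np

# Monte Carlo tail probability of a Gaussian limiting distribution.
rng = np.random.default_rng(1)
V = np.array([[1.0, 0.3], [0.3, 2.0]])   # covariance of the limiting Gaussian
W = np.eye(2)                            # weight matrix
c = 2.0

x = rng.multivariate_normal(np.zeros(2), V, size=200_000)
q = np.einsum('ni,ij,nj->n', x, W, x)    # (x - t)^T W (x - t) at t = 0
print("tail probability ~", np.mean(q >= c**2))
```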
Tail property of Gaussian shift models.
Just like in the previous sections, the tail property of n-copy qudit models can be analyzed by studying the tail property of Gaussian shift models. In this subsection, we first derive a bound on the tail probability of Gaussian shift models. The result is of interest in its own right and can be used for further analysis of qudit models via Q-LAN. Consider a Gaussian shift model $\{G[\alpha, \Gamma_\beta]\}$ and a measurement $M^G(\hat\alpha)$. Then, define the probability $\wp_{\alpha|M^G}(T_{W,c}(\alpha))$, where $T_{W,c}(\alpha)$ is the tail region around α.
Then, for covariant POVMs, the tail probability is independent of α. When the measurement is covariant, we have the following bound on the tail probability, which can be attained by a certain covariant POVM: for a weight matrix with classical block $W_C \ge 0$, the tail probability of the limiting distribution is bounded as in (164), where e is the 2s-dimensional vector with all entries equal to 1. For the definition of $E_s(e^{-\beta} + e/2)$, see (56). When the POVM $M^G$ is given as $M^G(B) = \int_B |\alpha_1, \ldots, \alpha_s\rangle\langle\alpha_1, \ldots, \alpha_s|\, \mathrm d\alpha$, the equality in (164) holds.
The proof can be found in "Appendix H". When the model has a group covariance, a similar evaluation might be possible. For example, a similar evaluation was done for the n-copy family of pure states [49] and for the n-copy family of squeezed states [50, Sect. 4.1.3].
Tail property of D-invariant qudit models. For a k-parameter D-invariant model, the tail probability of the limiting distribution satisfies the bound of Theorem 9 for $T_c := \{x \in \mathbb R^k \mid \|x\| \ge c\}$. The equality holds if and only if $\wp_{t_0,t|\mathfrak m}$ is the normal distribution with average zero and covariance $V_{t_0|W}$ as defined in Eq. (114).
We note that bounds on probability distributions are usually more difficult to obtain and more informative than MSE bounds, as the MSE is determined by the probability distribution. Theorem 9 provides an attainable bound on the tail probability, which can be used to determine the maximal probability that the estimate falls into a confidence region $T_{W,c}$, as well as the optimal measurement. Our proof of Theorem 9 needs some preparation. First, we introduce the concept of simultaneous diagonalization in the sense of symplectic transformations. Two 2k × 2k real symmetric matrices $A_1$ and $A_2$ are called simultaneously symplectically diagonalizable when there exist a symplectic matrix S and two real vectors $\beta_1$ and $\beta_2$ bringing both matrices to the canonical form, with $E_k$ defined in Eq. (56). Regarding the simultaneous diagonalization, we have the following property, whose proof can be found in "Appendix I". For a sequence of measurements $\mathfrak m := \{M_n\}$ satisfying local asymptotic covariance at $t_0 \in \Theta$, according to Theorem 5, we choose a covariant POVM $M^G$ satisfying (103). Applying Lemma 16 to the POVM $M^G$, we obtain the desired statement.
Step 2. We consider the general case. Now, we choose the local parameter $t' := J^{-1/2}_{t_0} t$. In this coordinate, the inverse of the RLD quantum Fisher information takes the form of the identity plus a correction term, and the weight matrix has no cross term between the classical and quantum parts. Using the above discussion and Lemma 16, we obtain the desired statement.
Extension to Global Estimation and Generic Cost Functions
In the previous sections, we focused on local models and cost functions of the form $\mathrm{tr}\, W V[\wp_{t_0,t|\mathfrak m}]$. In this section, our treatment is extended to global models $\{\rho_t\}_{t\in\Theta}$ (where the parameter to be estimated is not restricted to a local neighborhood) and to generic cost functions.
Optimal global estimation via local estimation.
Our optimal global estimation combines the two-step method with locally optimal estimation. The first step is the application of the full tomography proposed in [26] on $n^{1-x/2}$ copies, with outcome $\hat t_0$, for a constant $x \in (0, 2/9)$; the second step is the locally optimal estimation at $\hat t_0$, given in Sect. 6.3, on $a_{n,x} := n - n^{1-x/2}$ copies. Before the full description, we define the neighborhood $\Theta_{n,x}(t)$ of $t \in \Theta$ as in (167). Given a generic model $\mathcal M = \{\rho_t\}_{t\in\Theta}$ that does not contain any degenerate state and a weight matrix W > 0, we describe the full protocol as follows.
(A1) Localization: Perform the full tomography proposed in [26] on $n^{1-x/2}$ copies, described by a POVM $\{M^{\mathrm{tomo}}_{n^{1-x/2}}\}$, for a constant $x \in (0, 2/9)$. The tomography outputs the first estimate $\hat t_0$ so that (168) holds for any true parameter t. (A2) Local estimation: Based on the first estimate $\hat t_0$, apply the optimal local measurement $M^{\hat t_0, a_{n,x}}_W$ given in Theorem 7 with the weight matrix W. If the measurement outcome $\hat t_1$ of $M^{\hat t_0, a_{n,x}}_W$ is in $\Theta_{n,x}(\hat t_0)$, output $\hat t_1$ as the final estimate; otherwise, output $\hat t_0$ as the final estimate. A toy numerical illustration of this two-step logic is sketched below.
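The following sketch is a simple classical simulation of the two-step idea (our own illustration, not the qudit measurement of Theorem 7): a small fraction of copies is spent on a rough estimate, and the remaining copies are measured in a basis optimized at that rough estimate, for phase estimation of the pure qubit state $|\psi_\theta\rangle = \cos(\theta/2)|0\rangle + \sin(\theta/2)|1\rangle$.

```python
import numpy as np

# Two-step phase estimation: localization, then locally optimal measurement.
rng = np.random.default_rng(3)
theta, n, trials = 0.7, 20_000, 400
n1 = int(n ** 0.75)                     # localization copies, n^{1-x/2} in spirit

errs = []
for _ in range(trials):
    # (A1) Localization: sigma_z statistics give a rough estimate theta0.
    k1 = rng.binomial(n1, np.cos(theta / 2) ** 2)
    theta0 = 2 * np.arccos(np.sqrt(k1 / n1))
    # (A2) Local estimation: measure in the basis rotated by theta0 + pi/2,
    # where P(+) = (1 + sin(theta - theta0)) / 2 is maximally sensitive.
    n2 = n - n1
    k2 = rng.binomial(n2, 0.5 * (1 + np.sin(theta - theta0)))
    theta1 = theta0 + np.arcsin(np.clip(2 * k2 / n2 - 1, -1, 1))
    errs.append((theta1 - theta) ** 2)

print("n * MSE =", n * np.mean(errs), " (the quantum CR limit for this family is 1)")
```

For this family the quantum Fisher information equals 1, and the printed rescaled MSE approaches that limit, mirroring how the two-step protocol attains the local bound globally.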
Denoting the POVM of the whole process by m W = {M n W }, we obtain the following theorem.
Theorem 10. The relation (169) holds for any point $t_0 \in \Theta$ and any $t \in \Theta_{n,x,c(t_0)}$ corresponding to a non-degenerate state, where $C_{\mathcal S}(W, t_0)$ is the minimum weighted MSE defined in Eq. (110). More precisely, we have the lim sup evaluation (170) for a compact set $K \subset \Theta$, where $V_{t_0|W}$ is defined in Eq. (146) and $\Theta_{n,x,c(t_0)}$ is defined in Eq. (76). Further, when the parameter set Θ is bounded and x < κ, we have the relation (171).
Here, we should remark on the key point of the derivation. The existing papers [8,11] addressed the achievability of $\min_M \mathrm{tr}\, W J^{-1}_{t|M}$ with the two-step method, where $J_{t|M}$ is the Fisher information matrix of the distribution family $\{\wp_{t|M}\}_t$; this expresses the bound among separable measurements [34, Exercise 6.42], and hence it can be called the separable bound. In the one-parameter case, the separable bound equals the Holevo bound. To achieve the separable bound, one does not need to consider a sequence of measurements, and hence no complicated convergence has to be handled; the global achievability of the separable bound can be easily shown by the two-step method [8,11]. However, in our setting, we need to handle a sequence of measurements to achieve local optimality. Hence, we need to carefully consider the compact uniformity and the order estimate of the convergence in Theorem 7. In the following proof, we employ our evaluation with such detailed analysis, as in Eq. (127).
Proof.
Step 1. Define t_g := t_0 + t/√n to be the true value of the parameter. By definition, ‖t_g − t̂_0‖ ≤ n^{−(1−x)/2} holds with probability 1 − O(e^{−n^{x/2}}), and ‖t_g − t_0‖ ≤ c(t_0) n^{−1/2+x} by definition. Since the error probability vanishes exponentially, it does not affect the scaling of the MSE. In this step, we show the approximation of the output distribution: Eq. (127) of Theorem 7 applies to ℘^{a_{n,x}}_{t_0,t|M^{t̂_0,a_{n,x}}}, hence to ℘^n_{t_0,t|M^{t̂_0,a_{n,x}}}, and we obtain the desired approximation. Step 2. We show (170). First, we discuss the two exceptional cases ‖t_g − t̂_0‖ > n^{−(1−x)/2} and ‖t̂_1 − t̂_0‖ > n^{−(1−x)/2}. Eq. (168) guarantees that the first occurs with exponentially small probability, while Eq. (175) and the properties of the normal distribution bound Tr ρ_{t_g}^{⊗a_{n,x}} M^{t̂_0,a_{n,x}} in the second. When ‖t_g − t̂_0‖ ≤ n^{−(1−x)/2} and ‖t̂_1 − t̂_0‖ ≤ n^{−(1−x)/2}, Eq. (172) holds under the condition ‖t_g − t_0‖ ≤ c(t_0) n^{−1/2+x}. Since the above evaluation is compactly uniform with respect to t_0, we obtain (170).
The compactness of Θ guarantees that the error n(t̂ − t_g)(t̂ − t_g)^T is bounded by nC with a constant C. Due to (178), the contribution of the first case is bounded by nC · O(e^{−n^{x/2}}), which goes to zero.
In the second case, since t̂_0 = t̂, the error n(t̂ − t_g)(t̂ − t_g)^T is bounded; due to (179), the contribution of the second case is bounded by n^x · O(n^{−κ}) = O(n^{x−κ}), which goes to zero.
In the third case, due to (175), the contribution is bounded by 2n^x · O(n^{−κ}) = O(n^{x−κ}), which goes to zero. Therefore, we obtain (181).
Generic cost functions.
Finally, we show that the results in this work also hold for any cost function c(t̂, t) that is bounded and has a symmetric expansion, in the sense of satisfying the following two conditions: (i) c(t̂, t) has a continuous third derivative, so that it can be expanded around t̂ = t; (ii) the leading term of this expansion is the symmetric quadratic form determined by a weight matrix W_t. To adapt to this situation, we replace step (A2) by the following step (A2)': Based on the first estimate t̂_0, apply the optimal local measurement M^{t̂_0,a_{n,x}} corresponding to a non-degenerate state.
Theorem 11 reduces to a bound for the (actual) MSE when c(t̂, t) = (t̂ − t)^T W (t̂ − t) for W ≥ 0. Therefore, the bounds in this work, Eqs. (125) and (144) for instance, are also attainable bounds for the MSE of any locally asymptotically unbiased measurement.
Proof.
Step 1. We prove (1). Consider any sequence of asymptotically covariant measurements m_{t_0} := {M_{n,t_0}} at t_0. Denote by t_g := t_0 + t/√n the true value of the parameter. For a cost function c satisfying (ii), the claimed bound follows. Step 2. We prove (2). We replace W by W_t in the proof of Theorem 10. In this replacement, (173) is replaced by its weighted analogue, where x ∈ (0, 2/9). Hence, the contributions of the first and second cases of Step 3 of the proof of Theorem 10 go to zero.
In the third case of Step 3 of the proof, we have ‖t_g − t̂_1‖ ≤ 2n^{−(1−x)/2}. Hence, in the contribution of the third case, we can replace the expectation of nc(t̂_1, t_g) by the weighted MSE with weight W_{t_g}, and we obtain part (2).
Applications
In this section, we show how to evaluate the MSE bounds in several concrete examples.
Joint measurement of observables.
Here we consider the fundamental problem of the joint measurement of two observables. For simplicity we choose to analyze qubit systems, although the approach can be readily generalized to arbitrary dimension. The task is to simultaneously estimate the expectations of two observables A and B in a qubit system. The observables can be expressed as A = a · σ and B = b · σ, with σ = (σ_x, σ_y, σ_z) the vector of Pauli matrices. We assume without loss of generality that |a| = |b| = 1 and a · b ∈ [0, 1). The state of an arbitrary qubit system can be expressed as ρ = (I + n · σ)/2, where n is the Bloch vector.
With this notation, the task is reduced to estimating the parameters x := a · n and y := b · n.
It is also convenient to introduce a third unit vector c orthogonal to a and b, so that {a, b, c} form a (non-orthogonal) normalized basis of R^3. In terms of this vector, we can define the parameter z := c · n. In this way, we extend the problem to the full model containing all qubit states, where x, y are the parameters of interest and z is a nuisance parameter. Under this parameterization, we can evaluate the SLD operators for x, y, and z, as well as the SLD Fisher information matrix and the D-matrix (see "Appendix J" for details). Substituting these into the bound (144) yields a closed-form MSE-bound matrix whose entries are rational functions of x̃, ỹ, z, s, and |n|² with common denominator 1 − |n|² + x̃² + 2x̃ỹs + ỹ² + z², where s := a · b, x̃ := (x − ys)/(1 − s²), and ỹ := (y − xs)/(1 − s²). The tradeoff between the measurement precisions for the two observables is of fundamental interest. Substituting the expressions of the D-matrix and the SLD Fisher information matrix (see "Appendix J") into Eq. (159), we obtain a relation that characterizes the precision tradeoff in joint measurements of qubit observables.
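For readers who want to reproduce such quantities numerically, the following minimal NumPy sketch (ours, not from the paper) computes the SLD operators, the SLD Fisher information matrix J, and a D-matrix taken here as Im Tr ρL_aL_b, in Bloch coordinates; the paper's D-matrix may differ by a normalization convention, and passing from Bloch coordinates to (x, y, z) is just a linear reparameterization.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]
I2 = np.eye(2, dtype=complex)

def sld(rho, drho):
    """Solve drho = (L @ rho + rho @ L) / 2 for the SLD L in the eigenbasis of rho."""
    p, U = np.linalg.eigh(rho)
    d = U.conj().T @ drho @ U
    L = np.zeros_like(d)
    for i in range(len(p)):
        for j in range(len(p)):
            if p[i] + p[j] > 1e-12:
                L[i, j] = 2 * d[i, j] / (p[i] + p[j])
    return U @ L @ U.conj().T

def sld_fisher_and_D(n):
    """SLD Fisher matrix J_{ab} = Re Tr[rho L_a L_b] and D_{ab} = Im Tr[rho L_a L_b]."""
    rho = (I2 + sum(nk * sk for nk, sk in zip(n, paulis))) / 2
    Ls = [sld(rho, sk / 2) for sk in paulis]   # d(rho)/d(n_k) = sigma_k / 2
    J = np.zeros((3, 3)); D = np.zeros((3, 3))
    for a in range(3):
        for b in range(3):
            t = np.trace(rho @ Ls[a] @ Ls[b])
            J[a, b] = t.real
            D[a, b] = t.imag
    return J, D

J, D = sld_fisher_and_D([0.3, 0.2, 0.4])   # any Bloch vector with |n| < 1
```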
Direction estimation in the presence of noise.
Consider the task of estimating a pure qubit state |ψ⟩ = cos(θ/2)|0⟩ + e^{iϕ} sin(θ/2)|1⟩, which can also be regarded as determining a direction in space, as qubits are often realized in spin-1/2 systems. In a practical setup, it is necessary to take into account the effect of noise, under which the qubit becomes mixed. For noise with strong symmetry, like depolarization, the usual MSE bound produces a good estimate of the error. For other kinds of noise, it is essential to introduce nuisance parameters and to use the techniques introduced in this paper.
As an illustration, we consider amplitude damping noise, which can be formulated as a channel with two Kraus operators, with η = 1 corresponding to the noiseless case. In terms of the derivative vector, the SLD for each parameter x ∈ {θ, ϕ, η} can be written down, and after some straightforward calculations we obtain the MSE bound with nuisance parameter η. An illustration can be found in Fig. 2, with W = I in Eq. (144). The minimum of the sum of the (x, x)-th matrix elements of the MSE matrix for x = θ, ϕ is independent of ϕ, which is a result of the symmetry of the problem: the D-matrix does not depend on ϕ, and thus an estimate of ϕ can be obtained without affecting the precisions of the other parameters. Notice that when the state is close to |0⟩ or |1⟩, it is insensitive to changes of θ, resulting in the cup-shaped curves in Fig. 2. Next, we evaluate the sum of the MSEs of ϕ and θ when η is a (known) fixed parameter using Eq. (125) and compare it to the nuisance-parameter case. The result of the numerical evaluation is plotted in Fig. 3. It is clear from the plot that the variance sum is strictly lower when η is treated as a fixed parameter, compared to the nuisance-parameter case. This is a good example of how knowledge of one parameter (η) can assist the estimation of other parameters (ϕ and θ). It is also observed that, when the noise is larger (i.e. when η is smaller), the gain in precision from knowing η is also bigger.
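As a small self-contained illustration (our sketch, not the paper's code), the amplitude-damping channel can be applied to the state |ψ⟩ above using the standard Kraus pair below; we use the parameterization in which η = 1 is noiseless, matching the text's convention that smaller η means stronger noise.

```python
import numpy as np

def damp(rho, eta):
    # Standard amplitude-damping Kraus operators; eta = 1 leaves the state intact.
    K0 = np.array([[1, 0], [0, np.sqrt(eta)]], dtype=complex)
    K1 = np.array([[0, np.sqrt(1 - eta)], [0, 0]], dtype=complex)
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

theta, phi = 0.7, 0.3
psi = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
rho = np.outer(psi, psi.conj())        # pure input state |psi><psi|
rho_noisy = damp(rho, eta=0.8)         # mixed output state
```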
Multiphase estimation with noise.
Here we consider a noisy version of the multiphase estimation setting [20,51]. This problem was first studied in [20], where the authors derived a lower bound for the quantum Fisher information and conjectured that it was tight. Under local asymptotic covariance, we can now derive an attainable bound and show its equivalence to the SLD bound using the orthogonality of nuisance parameters, which proves the conjecture.
Our techniques also allow us to resolve an open issue concerning the result of Ref. [20], where it was unclear whether or not the best precision depended on knowledge of the noise. Using Corollary 4, we will also see that knowing the strength of the noise a priori does not help to decrease the estimation error.
The setting is illustrated in Fig. 4. Due to photon loss, the phase-shift operation is no longer unitary. Instead, it corresponds to a noisy channel with a Kraus form in which η = 0 corresponds to the noiseless scenario. We consider a pure input state with N photons in the "generalized NOON form". The output state from the noisy multiphase evolution is determined by the parameters (t, α_η, p_η), and ρ_η is independent of t. Notice that the output state is supported on the finite set of orthonormal states {|n⟩_j : j = 0, …, d, n = 0, …, N}, and thus it is within the scope of this work. In this case, {t_j} are the parameters of interest, while α_η and p_η can be regarded as nuisance parameters. The SLD operators for these parameters can be calculated explicitly, where ℘_{H⊥} refers to the projection onto the space orthogonal to |ψ_{η,t}⟩. Notice that p_η and α_η are orthogonal to the other parameters, in the sense that Tr ρ L_{t_j} L_{p_η} = Tr ρ L_{α_η} L_{p_η} = 0 and Tr ρ L_{t_j} L_{α_η} = 2i p_η sin(2α_η)/d. Therefore, the SLD Fisher information matrix and the D-matrix are block-structured accordingly. Substituting these into the bound (144), we immediately get an attainable bound for any locally asymptotically covariant measurement m. Taking W to be the identity, one sees that for small η the sum of the variances scales as N²/d², while for η → 1 it scales as N²/d, losing the boost in scaling compared to separate measurement of the phases. The bound (186) coincides with the SLD bound and the RLD bound. By Corollary 4, we conclude that the SLD (RLD) bound can be attained in the case of joint estimation of multiple phases. In addition, we stress that the ultimate precision does not depend on whether or not the noise parameter η is known a priori: if η is unknown, one can obtain the same precision as when η is known by estimating η without disturbing the parameters of interest.
Conclusion
In this work, we completely solved the attainability problem of precision bounds for quantum state estimation under the local asymptotic covariance condition. We provided an explicit construction for the optimal measurement which attains the bounds globally.
The key building block of the optimal measurement is quantum local asymptotic normality, derived in [16,17] for a particular type of parametrization and generalized here to arbitrary parameterizations. Besides the MSE bound, we also derived a bound on the tail probability of estimation. Our work provides a general tool for constructing benchmarks and optimal measurements in multiparameter state estimation. In Table 3, we compare our result with existing results.
Here, we should remark on the relation with the results of Yamagata et al. [19], who showed a similar achievability statement in a local model scenario using a kind of local quantum asymptotic normality. In Theorem 7, we have shown compact uniformity together with an order estimate for the convergence, whereas they did not show such properties. In the evaluation of a global estimator, these properties of the convergence are essential. The difference between the two evaluations comes from the key tools.
The key tool of our derivation is Q-LAN (Proposition 2) from [16,17], which gives the state conversion, i.e., TP-CP maps converting the state family, with a precise evaluation in trace norm. Their method, by contrast, is based on the algebraic central limit theorem [38,52], which gives only the behavior of expectations of functions of the operators R_i. The idea of applying this method to the achievability of the Holevo bound was first mentioned in [18], and Yamagata et al. [19] developed the detailed discussion in this direction.
Indeed, the algebraic version of Q-LAN in [38,52] can be directly applied to the vector X of Hermitian matrices to achieve the Holevo bound, while the use of the state conversion of Q-LAN requires a somewhat complicated procedure to handle the vector X of Hermitian matrices, which is the disadvantage of our approach. However, since the algebraic version of Q-LAN does not give a state conversion directly, it is quite difficult to obtain the compact uniformity and the order estimate of the convergence with it. In this paper, to overcome the disadvantage of our approach, we have derived several advanced properties of Gaussian states in Sects. 3.2 and 3.3 by using the symplectic structure. Using these properties, we could smoothly handle the complicated procedure needed to fill the gap between the full qubit model and an arbitrary submodel.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
A. Proof of Lemma 2
In this appendix, we show Lemma 2. To this aim, we discuss the existence of the PDF. First, we show the following lemma.
Lemma 19.
Let P be a probability measure on R. Define the location-shift family {P_t} as P_t(B) := P(B + t). For an arbitrary disjoint decomposition A := {A_i} of R, we assume that the probability distribution family {P_{A,t}} has finite Fisher information J_{A,t}, where P_{A,t}(i) := P_t(A_i). We also assume that J_t := sup_A J_{A,t} < ∞. Also, we define x_+ := inf{x′ | P((x′, ∞)) = 0} and x_− := sup{x′ | P((−∞, x′]) = 0}.
Then, the PDF p(x) of P exists on (x_−, x_+). Proof. Assume that x_+ < ∞. We choose a suitable decomposition A; the fidelity between P_{A,0} and P_{A,t} can then be bounded in terms of the Fisher information, which implies the existence of p(x). Moreover, letting the mesh size d → 0, we find that p(x) is Hölder continuous with order 1/2.
Using the previous lemma, we are in a position to prove Lemma 2.
Proof of Lemma 2. Let A := {A_i} be an arbitrary disjoint finite decomposition of R. Let G_A be the coarse-graining map from a distribution on R to a distribution on the meshes A_i. Then, the Fisher information J_{A,n,t} of {G_A(℘^n_{t_0,t|M_n})}_t is not greater than the Fisher information J_t of {ρ^n_{t_0,t}}_{t∈Θ_n}. Hence, the Fisher information J_{A,t} of {G_A(℘_{t_0,t|m})}_t is uniformly bounded. Therefore, we can apply Lemma 19 to ℘_{t_0,t|m}, which guarantees the existence of the PDF of the limiting distribution ℘_{t_0,t|m}.
B. Lemmas Used for Asymptotic Evaluations
In this appendix, we prepare two lemmas for the asymptotic evaluation of information quantities of probability distributions.
Lemma 20. Assume that a sequence of pairs of probability distributions {(P_n, Q_n)} on R converges to a pair of probability distributions (P, Q) on R. Then, the inequality lim sup_{n→∞} F(P_n, Q_n) ≤ F(P, Q) holds for the fidelity F.
Proof. Let p and q be the Radon–Nikodým derivatives of P and Q with respect to P + Q, so that Eq. (199) expresses the fidelity in terms of p and q. The information processing inequality for the fidelity, applied to coarse-graining onto finitely many meshes, then yields the claim: since the number of meshes is finite, we may pass to the limit in n and take the infimum over mesh decompositions.
Lemma 21.
Let Θ be an open subset of R^k. Assume that a sequence of probability distribution families {P_{t,n}}_{t∈Θ} on R^k converges to a family of probability distributions {P_t}_{t∈Θ} on R^k. We denote their ε-difference Fisher information matrices by J^n_{t,ε} and J_{t,ε}, respectively. For a vector t and ε > 0, we also assume that there exists a Hermitian matrix J such that J^n_{t,ε} ≤ J. Then, P_{t+εe_j} is absolutely continuous with respect to P_t for j = 1, …, k, and the inequality lim inf_{n→∞} ⟨a|(J^n_{t,ε} − J_{t,ε})|a⟩ ≥ 0 holds for any complex vector a ∈ C^k.
Proof of Lemma 21. Since J^n_{t,ε} and J_{t,ε} are real matrices, it is sufficient to show (204) for a real vector a. In this proof, we fix the vector t.
Step (1): We show that P_{t+εe_j} is absolutely continuous with respect to P_t for j = 1, …, k by contradiction. Assume that there exists an integer j such that P_{t+εe_j} is not absolutely continuous with respect to P_t. Then there exists a Borel set B ⊂ R^k such that P_{t+εe_j}(B) > 0 and P_t(B) = 0. Let G be the coarse-graining map from a distribution P on R^k to the binary distribution (P(B), P(B^c)) on the two events {B, B^c}. Let J_{t,B,ε} and J^n_{t,B,ε} be the ε-difference Fisher information matrices of {G(P_t)} and {G(P_{t,n})}, respectively. The information processing inequality implies that J^n_{t,B,ε} ≤ J^n_{t,ε} ≤ J. Also, J^n_{t,B,ε} → J_{t,B,ε} as n → ∞. Hence, J_{t,B,ε} ≤ J. However, the j-th diagonal element of J_{t,B,ε} is infinite, which is a contradiction.
Step (2): Let p_{t+εe_j} be the Radon–Nikodým derivative of P_{t+εe_j} with respect to P_t. We show Eq. (205) for N, ε > 0 and any integer j = 1, …, k by contradiction. We denote the LHS of (205) by C_j and assume that there exists an integer j such that C_j > 0. We set R = J_{j,j}/C_j + 2. Setting B to be {x | p_{t+εe_j}(x) > R}, we repeat the same discussion as in Step (1) and obtain a contradiction.
Step (3): We show (204) for a real vector a. We define suitable subsets C_R and C_{N,R}. Given R > 0, let G_R be the coarse-graining map from a distribution on R^k to a distribution on the family of measurable sets {B ⊂ R^k \ C_R} ∪ {C_R}, where B is any Borel set in R^k \ C_R. Given N > 0 and R > 0, let G_{N,R} be the coarse-graining map defined analogously, where B is any Borel subset of R^k \ C_{N,R}. Given N > 0, R > 0, and N′ > 0, let G_{N,N′,R} be the coarse-graining map from a distribution on R^k to a distribution on meshes. The Lebesgue convergence theorem guarantees the corresponding limits. Let J^n_{t,ε,N,R,N′} be the ε-difference Fisher information matrix for the distribution family {G_{N,N′,R}(p_{t,n})}_t. Then, the information processing inequality (93) for the ε-difference Fisher information matrix yields the desired bound; since the number of meshes is finite, we may pass to the limit. Hence, using (210), (211), (212), and (215), we have lim inf_{n→∞} ⟨a|(J^n_{t,ε} − J_{t,ε})|a⟩ ≥ 0.
C. Proof of Lemma 3
Before starting our proof of Lemma 3, we prepare the following lemmas.
Lemma 22. Consider a canonical quantum Gaussian states family {Φ[θ, β]}. When a symplectic matrix S satisfies the compatibility condition
where E_q is the matrix defined in Eq. (56), there exists a unitary operator U_S implementing the corresponding transformation. Proof. Consider any coordinate θ′ = (θ_C, θ′_Q), where θ′_Q is obtained from the Q-LAN coordinate θ_Q by a reversible linear transformation S, i.e. θ′_Q = Sθ_Q.
Write x = (q_1, p_1, …, q_q, p_q)^T. We have a Gaussian density in the variable y := (S^{−1})^T x, where Z_β > 0 is a normalizing constant. Now, by the definition of E_q(x) in Eq. (56) and S E_q(e^{−βV}) S^T = E_q(e^{−β′V}), S must be of the block-diagonal form S = ⊕_i O_{s_i}. Here {s_i} is a partition of {1, …, 2q} with j, k ∈ s_i if and only if β_j = β_k, and O_{s_i} is an orthogonal matrix acting on the components j ∈ s_i. Since βV, β′V, and ln βV are in one-to-one correspondence, we have S E_q(e^{−βV}) S^T = E_q(e^{−βV}). Substituting this into Eq. (217), we see that (S^{−1})^T can be regarded as a transformation of x. Finally, S is symplectic since S D S^T = D, and there exists a unitary U_S realizing it [50]. Proof of Lemma 3. We introduce the classical parameters θ_C and the quantum parameters θ_Q on Ker Im(Γ) and Supp Im(Γ), respectively. That is, the classical parameter θ_C and the quantum parameter θ_Q are given by an invertible linear transformation T such that θ := (θ_C, θ_Q) = T t satisfies the required separation. Since this separation is unique up to linear conversion, and any classical Gaussian states can be converted into each other via scale conversions, the remaining problem is to show the desired statement for the quantum part. Next, we focus on the quantum part ((T^{−1})^T Γ T^{−1})_Q of the Hermitian matrix (T^{−1})^T Γ T^{−1}. It is now convenient to define a matrix A whose role is to normalize the D-matrix. Indeed, since the relevant imaginary part is a real symmetric matrix, there exist a symplectic matrix S and a vector β realizing its normal form [53]. Meanwhile, we have S S_0 A^{−1} Im(((T^{−1})^T Γ T^{−1})_Q) A^{−1} S_0 S^T = Ω_{d_Q}, since S is symplectic. Overall, when T is redefined as (I ⊕ (S S_0 A^{−1})) T, the desired requirement is satisfied. The uniqueness of β is guaranteed by the uniqueness of symplectic eigenvalues. Hence, when two linear conversions T and T̃ satisfy the condition of the statement, T Γ T^T = T̃ Γ T̃^T. Thus, Lemma 22 guarantees that the canonical Gaussian states G(T^{−1}α, T Γ T^T) and G(T̃^{−1}α, T̃ Γ T̃^T) are unitarily equivalent.
D. Proof of Lemma 5
(3) ⇒ (1): When a Gaussian states family is given in the RHS of (64), it is clearly D-invariant. Hence, the D-invariance is equivalent to the condition (2).
(2) ⇒ (3): First, we separate the system into the classical and the quantum parts. In the Gaussian states family {G[α, Γ]}, this separation can be done by considering the kernel of Im(Γ), as in the proof of Lemma 3. In the Gaussian states family G[T(t), Γ], this separation can be done by considering the kernel of the D-matrix D_0 in the same way. Since the relation (64) for the classical part is easily verified, we show the relation (64) in the case when only the quantum part exists.
Under the above assumption, we define the k × (d − k) matrix T′ such that F := (T ⊕ T′) is invertible and T^T A^{−1} T′ = 0. Then, Lemma 3 guarantees that G[F(t, t′), Γ] is unitarily equivalent to G[(t, t′), F^{−1} Γ (F^T)^{−1}]. Since T^T A^{−1} T′ = 0, the covariance matrix decomposes accordingly. Putting t′ = 0, we obtain condition (3).
E. Proof of Lemma 6
Since Lemma 3 shows that general Gaussian states can be reduced to the canonical Gaussian states, we discuss only the canonical Gaussian states.
Step 1. We show the statement when we have only the quantum part and X = R. For a given state ρ, we define the POVM M[ρ] accordingly. When ρ is a squeezed state with Tr ρQ_j = Tr ρP_j = 0, the output distribution ℘_{α|M[ρ]} of M[ρ] is the 2d_Q-dimensional normal distribution with mean α and covariance matrix [5] E_{d_Q}(β) + V_ρ, where V_ρ is the block matrix with blocks (Tr Q_iQ_jρ)_{i,j}, (Tr Q_iP_jρ)_{i,j}, (Tr P_iQ_jρ)_{i,j}, and (Tr P_iP_jρ)_{i,j}.
In the single-mode case, without loss of generality, we can assume that W is the diagonal matrix diag(w_1, w_2), because this diagonalization can be achieved by applying an orthogonal transformation between Q and P. In the multiple-mode case, we choose a symplectic matrix S such that S W S^T is a diagonal matrix with diagonal elements w_1, w_2, …, w_{2d_Q}.
G. Proof of Lemma 12
Denote by Q(y) := (1/π^{k/2}) ⟨y|F|y⟩ the Q-function of F [54]. Expanding displaced thermal states into a convex combination of coherent states, Eq. (104) can be rewritten in terms of Q. Taking the Fourier transform F_{y→ξ}(g) := ∫ dy e^{iy·ξ} g(y) on both sides, we get (236). In addition, we know that the P-function P(y) [55] of F can be evaluated via the Q-function (see, for instance, [56]), giving (237). The combination of (236) and (237) yields the desired expression. By the definition of the P-function P(y), F satisfies (239). Conversely, we assume that F is given by (105). Then, we choose the function Q(α) to satisfy (240). Applying the inverse of F_{α→ξ} to (239), we obtain (235). The combination of (235) and (240) implies (104).
In the classical case, the covariant measurement is unique. So, we have the extension as in Lemma 16.
I. Proof of Lemma 17
(i) ⇒ (ii): Since S^{−1} A_2 (S^T)^{−1} = (S^T A_2^{−1} S)^{−1}, and S^T A_1 S and D commute with each other, the corresponding commutation relations follow. Since S^T D S = D, we have S^T D = D S^{−1} and D S = (S^T)^{−1} D, which implies (ii). (ii) ⇒ (i): S^{−1} A_2 (S^T)^{−1} commutes with S^T A_1 S D. There exists an orthogonal matrix S′ such that S S′ is a symplectic matrix, and (S S′)^T A_1 (S S′) and (S S′)^{−1} A_2 ((S S′)^T)^{−1} are diagonal matrices. Considering the inverse of A_2^{−1}, we obtain (i).
"Mathematics"
] |
Convolutional neural networks for classifying healthy individuals practicing or not practicing meditation according to the EEG data
The development of objective methods for assessing stress levels is an important task of applied neuroscience. Analysis of EEG recorded as part of a behavioral self-control program can serve as the basis for developing test methods that classify people by stress level. It is well known that participation in meditation practices leads to the development of skills of voluntary self-control over one's mental state due to an increased concentration of attention on oneself. As a consequence of meditation practices, participants can reduce overall anxiety and stress levels. The aim of our study was to develop, train and test a convolutional neural network capable of classifying individuals into groups of practitioners and non-practitioners of meditation by analysis of event-related brain potentials recorded during a stop-signal paradigm. Four non-deep convolutional network architectures were developed, trained and tested on a sample of 100 people (51 meditators and 49 non-meditators). Subsequently, all structures were additionally tested on an independent sample of 25 people. It was found that a structure using a one-dimensional convolutional layer, a pooling layer, and a two-layer fully connected network showed the best performance in simulation tests. However, this model was often subject to overfitting due to the limited size of the dataset. The phenomenon of overfitting was mitigated by changing the structure and scale of the model, network parameter initialization, regularization, random deactivation (dropout), and hyperparameter screening via cross-validation. The resulting model showed 82 % accuracy in classifying people into subgroups. The use of such models can be expected to be effective in assessing stress levels and inclination to anxiety and depression disorders in other groups of subjects.
Introduction
Stress is one of the most common problems in modern society, and the search for effective methods to assess stress levels is important for early detection of the risk of mental and psychosomatic disorders (Kuh et al., 2003; Kuznetsova et al., 2016). Most psychological methods of assessing stress levels are based on questionnaires, in which the respondent answers questions about their subjective mental condition. The weak point of this approach is the high probability of incorrect self-assessment, arising either from a person's unwillingness to report their problems or from a low ability to recognize changes in their own condition (Iwata, Higuchi, 2000; McCrae et al., 2000). A possible solution to this problem is to develop objective approaches to the diagnosis of mental traits or conditions based on the analysis of brain signals, such as fMRI or EEG.
Meditation is a system of special mental practices aimed at establishing voluntary self-control over one's mental state. Although meditation originally appeared as part of religious practices, especially common in Eastern religions, at present this phenomenon is a popular topic of scientific research. Meditation is considered as a basis for the creation of non-invasive, non-drug techniques that reduce the risk of a wide range of mental or psychosomatic diseases. A number of studies have shown that meditation has many positive effects on mental health, including a general reduction in stress and in the propensity to depression (Chiesa et al., 2011; Saeed et al., 2019). Analysis of EEG recorded during recognition of emotional stimuli revealed significant effects of meditation on the state of the human brain (Aftanas, Golosheykin, 2005; Atchley et al., 2016; Savostyanov et al., 2020). Therefore, the comparison of EEG in practitioners and non-practitioners of meditation can be considered an experimental model that allows the development of methods for assessing stress levels.
The stop-signal paradigm (SSP) is an experimental method for evaluating an individual's capacity for voluntary self-control of their own movements in a changing external environment (Logan, Cowan, 1984; Band et al., 2003). The SSP allows us to assess the balance of two processes, activation and inhibition of behavior, under conditions of insufficient time for making a decision. A number of studies have shown that the SSP is an effective method for diagnosing the level of personal anxiety and propensity to depression (Hsieh et al., 2021; Zelenskih et al., 2022). It can be assumed that the dynamics of brain activity during the SSP will serve as a marker distinguishing practitioners and non-practitioners of meditation.
Artificial neural networks are a developing machine learning technology that is widely used in various fields. Compared to other traditional methods of machine classification, such as linear discriminant analysis and the k-nearest neighbor algorithm, artificial neural networks provide more accurate results when classifying individuals according to their behavioral and neurophysiological characteristics (Khosla et al., 2020). Moreover, in comparison with the support vector machine, an artificial neural network is better suited to multiple classification tasks, providing convenience for further research, as well as more efficient fitting of complex nonlinear relationships.
The purpose of our research is to develop, train and test an artificial neural network that, based on the analysis of event-related brain potentials in the stop-signal paradigm, classifies individuals according to whether they practice meditation. We expect that the neural network created in this way will subsequently be able to assess an individual's level of stress and propensity to anxiety-depressive disorders.
Methods of experimental research
Participants. A group of people practicing samadhi meditation (also called "mindfulness meditation") was examined in July-August 2018 on the premises of the Baikal Retreat Center (http://www.geshe.ru/). The experimental group included 51 healthy, right-handed participants from 25 to 66 years old (32 men; average age = 41.0, SD = 8.3) who had practiced meditation for a period of 5 to 15 years. The control group was examined in October-November 2019 on the premises of the medical college of the village of Khandyga, Tomponsky district of the Republic of Sakha (Yakutia). The control group included 49 healthy, right-handed participants from 22 to 58 years old (22 men; average age = 38.0, SD = 8.3) who had never participated in meditation or yoga practices.
The protocol of the study was approved by the local Ethics Committee of the Research Institute of Neurosciences and Medicine in accordance with the Declaration of Helsinki. All participants signed informed consent to participate in the study.
Experimental procedure. The experiment was based on the stop-signal paradigm proposed in 1984 (Logan, Cowan, 1984) and modified by A.N. Savostyanov and co-authors (Savostyanov et al., 2009). The experiment was organized in the form of the computer interactive game "Hunt". One of two images appeared on the computer screen: a deer or a tank. The participant had to press the keyboard button corresponding to the picture. The response time was limited to 0.75 seconds. If the participant pressed the correct button faster than 0.75 seconds, their game score increased. If the participant pressed the wrong button or timed out, their game score decreased.
In total, 135 stimuli were presented to each participant. In 35 cases, after the onset of the target signal, a stop-signal was presented (a red square with the inscription "Stop"), which meant that the participant had to interrupt the movement that had already begun. If the participant did not press the button after the stop-signal, their score did not change. If the participant pressed the button after the stop-signal, their score decreased. The order of activation and stopping trials was randomized. The sequence of "deer" and "tank" stimuli was also randomized. The interval between the end of the previous task and the start of a new task varied from 3 to 7 seconds. The total duration of the experiment was approximately 12 minutes.
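For illustration, a randomized trial sequence with these proportions can be generated in a few lines of Python; the stimulus labels and the uniform distribution of the inter-trial interval are our assumptions, consistent with the description above.

```python
import random

n_trials, n_stop = 135, 35
# 35 stop trials and 100 go trials, each with a randomly chosen target image.
trials = [{"stimulus": random.choice(["deer", "tank"]), "stop": s}
          for s in [True] * n_stop + [False] * (n_trials - n_stop)]
random.shuffle(trials)                       # randomize the go/stop order
for t in trials:
    t["iti"] = random.uniform(3.0, 7.0)      # inter-trial interval, 3-7 s
```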
Preprocessing of experimental data. Artifact rejection of the EEG was performed by the ICA method (Delorme, Makeig, 2004). The initial EEG signal was filtered at 1-40 Hz and referenced to the average of all channels. The data were epoched relative to the onset trigger of the target stimulus (deer or tank) in a time interval from -1 to +3 seconds. The baseline EEG level was set in the range from -1000 to -250 ms. In total, 80 to 90 EEG epochs were obtained for each participant after exclusion of all trials containing the stop-signal or artifacts. After excluding artifacts, event-related potentials (ERPs) were calculated separately for each EEG channel, averaged over all trials and all participants.
The ERP calculation was conducted in the ERPLAB toolbox for MATLAB. Amplitude-time ERP graphs were made for each EEG channel. Then a visual preview of the ERP graph for the C3 channel was performed, since the ERP motor peaks stand out the most in this lead. In particular, two peaks were selected for this lead: an early premotor peak, whose amplitude precedes pressing the button (the so-called readiness potential), and a late motor peak, whose amplitude reaches a maximum when the button is pressed. From this visual inspection, the time limits of both the early and the late peak were established. After that, the amplitude in each of these time windows was calculated separately for each person and each EEG channel, averaged over all trials of the activation condition of the task for each participant. The calculation of the averaged amplitude was made using the ERPLAB toolbox (https://erpinfo.org/erplab). The amplitude values were baseline-corrected for each participant separately. The obtained values were used as training and test data for the artificial neural networks.
EEG data acquisition. The general structure of the input data is shown in Figure 1. For each participant, EEG was analyzed for 64 channels located at different points of the head surface. According to the international 10-20 scheme, the name of each electrode reflects its spatial position. The initial EEG signal for each channel is a continuous series of measurements of the potential difference between the surface electrode and the reference, with a time resolution of 1,000 measurements per second.
ERP extraction. When calculating the ERP (event-related potential) amplitude, the researcher selects several time windows, in each of which all amplitude values are summed over all time points and averaged over all trials. The amplitude values in different windows reflect the temporal dynamics of the neurophysiological process. We selected two time windows (250-350 and 550-900 ms after the target signal), which reflect, respectively, the physiological processes associated with the preparation and the execution of the movement. A numerical value of the ERP amplitude was obtained for each participant, separately for each time window and for each EEG channel. Since the ERP in different parts of the head can deviate from the zero potential both upward (positive peak) and downward (negative peak), the numerical values of the amplitude can be both positive and negative. Thus, our data take into account both the spatial (the name of the channel, i.e. its position on the head) and the temporal (the first or second ERP window) characteristics of the brain response to the task in the stop-signal paradigm, as well as the electrical direction of the reaction (positive or negative peak amplitude values).
For each examined individual, the data dimension was 2×64 values.Since 50 participants were included in each group of people, the data size for each of our samples is approximately 50×2×64, and the total size of the data set is 100×2×64.
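The feature construction described above can be sketched in NumPy as follows. The 1 kHz sampling rate, the epoch span of −1 to +3 s (4000 samples with onset at index 1000), and the two time windows follow the preprocessing description; the random array merely stands in for real epoched EEG.

```python
import numpy as np

# epochs: (n_trials, 64 channels, 4000 samples) for -1..+3 s at 1 kHz
epochs = np.random.randn(85, 64, 4000)      # stand-in for one participant
erp = epochs.mean(axis=0)                   # average over trials -> (64, 4000)

t0 = 1000                                   # index of stimulus onset (t = 0)
win_early = erp[:, t0 + 250 : t0 + 350].mean(axis=1)   # 250-350 ms window
win_late  = erp[:, t0 + 550 : t0 + 900].mean(axis=1)   # 550-900 ms window
features = np.stack([win_early, win_late])  # (2, 64) input per participant
```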
Designing the structure and framework of a neural network
Since the input set of ERP data is small, a non-deep neural network was designed to predict whether an individual had participated in long-term meditation or not. However, the initial EEG recording also has time-series characteristics, so a convolutional neural network was used for training and prediction. The main components of a convolutional neural network include convolutional layers, pooling layers, and fully connected layers.
In our case, the input layer of the convolutional network receives EEG data transformed into a two-dimensional matrix with a sample size of 2×64, where each row represents an individual ERP peak and each column represents an EEG recording channel. The hidden part of the convolutional neural network includes three common layer types: a convolutional layer, a pooling layer, and a fully connected layer. We used the Conv1d() tool in PyTorch as the convolutional layer, which prevented the overfitting that can be caused by using more complex convolutional kernels with more parameters (https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html#torch.nn.Conv1d, accessed 21.02.2023).
The parameters of the convolutional layer include the kernel size, stride, and padding, which collectively determine the size of the output feature map of the convolutional layer and are hyperparameters of the convolutional network. Because EEG data have both spatial and temporal relationships, we developed two schemes. The first scheme uses a total of two one-dimensional convolutions: one extracts spatial features, which represent connections between ERP peaks in different electrode channels, and the other extracts temporal features. In this scheme, the PyTorch Conv1d() function wrapper was used to implement the corresponding operation. The second approach applies only one one-dimensional convolution, which extracts both temporal and spatial features; the PyTorch Conv1d() function wrapper was also chosen for this.
The convolutional layers contain activation functions that help represent complex features. In our study, three activation functions were used: sigmoid(), relu(), and softmax() from PyTorch (https://pytorch.org/docs/stable/generated/torch.nn.BCELoss.html, accessed 15.04.2023). After feature extraction in the convolutional layer, the output feature map was passed to the pooling layer for feature selection and information filtering. The pooling layer selects the pooling region in the same way as the kernel scanning stage of the convolutional layer, controlled by the pooling size, stride, and padding. The convolutional and pooling layers in a convolutional neural network extract features from the input data. The role of the fully connected layer is to nonlinearly combine the extracted features to obtain the output. In our case, two fully connected layers were created, implemented with the Linear() tool in PyTorch, to prevent overfitting due to the small size of the dataset. A fully connected layer is typically located before the output layer in a convolutional neural network. We used different loss and activation functions during training in these two scenarios to improve the accuracy and performance of the model.
According to the above-described scheme, four network structures were designed and used for classifying the surveyed individuals (Fig. 2); a sketch of one of them is given after their descriptions below. The only difference between these four architectures lies in the number of convolutional layers and the number of output neurons at the end.
In the first structure, a convolutional layer is used to extract both temporal and spatial features. Then, two fully connected layers are used, and two values are output after normalization with the softmax activation function. Cross-entropy is used as the loss function, and Adam as the gradient descent algorithm.
The second structure also uses one convolutional layer to extract both temporal and spatial features. Then, two fully connected layers are used, and a single value is output after activation with the sigmoid function. Binary cross-entropy is used as the loss function, and Adam as the gradient descent algorithm.
The third structure uses two types of convolutions to extract the spatial and temporal characteristics of the data, respectively. Then, two fully connected layers are used, and two values are output after normalization with the softmax activation function. Cross-entropy is used as the loss function, and Adam as the gradient descent algorithm.
Finally, the fourth structure uses two types of convolutions to extract the spatial and temporal characteristics of the data, respectively. Then, two fully connected layers are used, and a single value is output after activation with the sigmoid function. Binary cross-entropy is used as the loss function, and Adam as the gradient descent algorithm.
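For concreteness, here is a minimal PyTorch sketch in the spirit of the second structure (one one-dimensional convolution extracting temporal and spatial features jointly, a pooling layer, two fully connected layers, and a sigmoid output). The kernel size, channel counts, hidden width, and dropout rate are illustrative assumptions, not the tuned hyperparameters of the study.

```python
import torch
import torch.nn as nn

class Structure2(nn.Module):
    """Sketch of the second architecture: the two ERP peaks are treated as
    input channels and the 64 electrodes as the spatial axis of Conv1d."""
    def __init__(self, kernel=5, hidden=32, p_drop=0.5):
        super().__init__()
        self.conv = nn.Conv1d(in_channels=2, out_channels=8, kernel_size=kernel)
        self.bn = nn.BatchNorm1d(8)
        self.pool = nn.MaxPool1d(2)
        feat = 8 * ((64 - kernel + 1) // 2)     # flattened size after pooling
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(feat, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (batch, 2, 64)
        return self.fc(self.pool(torch.relu(self.bn(self.conv(x)))))

model = Structure2()
out = model(torch.randn(4, 2, 64))              # -> (4, 1) probabilities
```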
Optimal hyperparameters were found for each structure and are described in the model evaluation section.
Neural network training
The process of training an artificial neural network can be divided into four stages: initialization, forward propagation, backward propagation, and weight update.
During initialization, we assigned random initial values to each parameter (weights and biases) of the neural network to break symmetry and allow each neuron to have a different gradient and learn different functions. Later, during the hyperparameter search, we determined the optimal initialization function for each architecture. During forward propagation, the training data (input and output) were fed into the neural network, and the activation value of each neuron was calculated sequentially from the input layer to the hidden layer, and then to the output layer, according to the structure of the network. The activation values were obtained from a linear combination of the input data and weights plus bias, followed by a non-linear function such as sigmoid or ReLU. The goal of forward propagation is to obtain the predicted result of the neural network and compare it with the true result. The goal of backward propagation is to obtain the gradient of each parameter, which is then used to update the parameters. In our case, we used the cross-entropy and binary cross-entropy loss functions for this purpose (https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html, accessed 20.03.2023). The cross-entropy loss function measures the distance between the probability distribution predicted by the model and the true probability distribution; using it, we evaluated the performance of the model. Each parameter is updated with a certain learning rate (step size) according to its gradient, so that the loss function decreases. The goal of the weight update is to optimize the parameters of the neural network so that it better fits the training data. For this task, we applied the Adam optimization method. Adam is an algorithm for stochastic gradient descent with adaptive momentum, which was proposed at the ICLR conference in 2015 and has become one of the most popular and effective optimizers in deep learning. Adam combines two classical optimization algorithms, Adagrad and RMSProp, which are capable of handling sparse gradients and non-stationary objective functions, and uses the idea of momentum to accelerate convergence. Adam effectively maintains a separate learning rate for each parameter, adaptively adjusted according to the change in gradient. Specifically, when the gradient is large, the estimate of the second moment increases, which reduces the learning rate; when the gradient is small or sparse, the estimate of the first moment dominates, which increases the effective step. This helps avoid oscillations caused by a too-large learning rate, slow convergence caused by a too-small learning rate, and getting trapped in a local minimum or saddle point.
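A minimal training loop combining these four stages with binary cross-entropy and Adam might look as follows; the model is a stand-in in the spirit of the structures above, the data are random placeholders with the 2×64 input shape, and the learning rate, weight decay (the L2 penalty discussed below), and epoch count are arbitrary.

```python
import torch
import torch.nn as nn

model = nn.Sequential(                     # stand-in for any of the four structures
    nn.Conv1d(2, 8, 5), nn.BatchNorm1d(8), nn.ReLU(), nn.MaxPool1d(2),
    nn.Flatten(), nn.Linear(240, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid(),
)
loss_fn = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

X = torch.randn(100, 2, 64)                # stand-in for the 100 x 2 x 64 ERP data
y = torch.randint(0, 2, (100, 1)).float()  # 1 = meditator, 0 = non-meditator

for epoch in range(50):
    optimizer.zero_grad()                  # reset gradients
    pred = model(X)                        # forward propagation
    loss = loss_fn(pred, y)                # binary cross-entropy loss
    loss.backward()                        # backward propagation
    optimizer.step()                       # weight update via Adam
```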
To reduce overfitting and better train the model, we used batch normalization. Batch normalization is an approach that mitigates the problem of vanishing gradients by smoothing the loss landscape, speeding up network convergence, and increasing accuracy (Ioffe, Szegedy, 2015). This method normalizes the data in mini-batches so that the mean value is 0 and the standard deviation is 1. At the same time, two trainable parameters, scale and shift, are introduced so that the model can learn the appropriate distribution during backward propagation. To implement this function, we used the BatchNorm1d() tool from PyTorch.
Overfitting is a common problem in the process of training an artificial neural network, where the model performs well on the training set but poorly on the test set or new data, indicating poor generalization. In our case, the problem of overfitting was due to the small dataset. To solve this problem, we applied initialization, L2 regularization, and dropout, as well as cross-validation to evaluate the model and select the hyperparameters that best train the model, reducing overfitting to some extent. We used L2 regularization (weight decay), which involves adding a penalty term to the loss function proportional to the sum of squares of the model's parameters.
L2 regularization causes the model's parameters to tend towards smaller values, thereby reducing the model's sensitivity to noise and outliers. Random deactivation (dropout) means randomly zeroing certain neurons or connections with a certain probability during training, which reduces the effective number of parameters per update, thereby increasing the reliability and generalization ability of the model.
Cross-validation reuses the data by splitting the dataset into various combinations of training and test sets: a training set for fitting the model and a test set for evaluating the quality of its predictions. We used the K-fold method as our cross-validation strategy to reduce overfitting.
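A K-fold split of the 100-sample dataset can be set up with scikit-learn; the sketch below uses random stand-in data and five folds.

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.random.randn(100, 2, 64)   # stand-in for the ERP dataset
y = np.random.randint(0, 2, 100)  # stand-in labels

kf = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kf.split(X)):
    X_train, X_test = X[train_idx], X[test_idx]
    y_train, y_test = y[train_idx], y[test_idx]
    # train a fresh model on (X_train, y_train); evaluate on (X_test, y_test)
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test samples")
```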
Evaluation of model performance on training data
In accordance with the characteristics of the EEG data sample and the indicators of the benchmark classification model, we used the metrics F1-score, AUC (area under the curve), and accuracy as evaluation indicators for the model (https://keras.io/api/models/model_training_apis). The higher these indicators, the better the model's performance. F1-score and AUC are comprehensive evaluation indicators for classification models, but they behave differently: AUC is less affected by the ratio of positive and negative samples in the dataset. For the purposes of this development, it was clear that predicting a person with a high level of stress as a person with a low level of stress would be a fundamentally incorrect result. Therefore, we chose the F1-score as the highest-priority indicator for evaluating the model's effectiveness. We evaluated the model's hyperparameters using five-fold cross-validation to select the most suitable hyperparameters, preventing overfitting and improving model performance.
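All three metrics are available off the shelf in scikit-learn; below is a small sketch with made-up labels and predicted probabilities.

```python
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]                  # illustrative labels
y_prob = [0.9, 0.2, 0.7, 0.4, 0.1, 0.8, 0.6, 0.3]  # sigmoid outputs
y_pred = [int(p >= 0.5) for p in y_prob]           # thresholded predictions

print("accuracy:", accuracy_score(y_true, y_pred))
print("F1-score:", f1_score(y_true, y_pred))
print("AUC:     ", roc_auc_score(y_true, y_prob))  # AUC uses raw probabilities
```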
The results of evaluating the models on the training dataset are presented in Figure 3. Looking at each of the selected indicators, we can see that model 2 showed the most effective classification; its effectiveness exceeded 80 % for all selected indicators. Models 1 and 4 also show good classification results, while model 3 performs the worst. Therefore, we conclude that a single output neuron surpasses the use of two output neurons in this EEG binary classification task, and that binary cross-entropy loss is more suitable for our classification task given the available dataset. When evaluating the models' effectiveness, the number of samples was 100, with 51 individuals practicing meditation (low stress level) and 49 individuals not practicing meditation. The classes are balanced, so class imbalance does not significantly affect the training and performance of the models. Moreover, for data with only two ERP peaks in 64 electrode channels, a single convolution extracting both temporal and spatial characteristics worked better than two convolutions extracting temporal and spatial characteristics separately.
Evaluation of model performance on independent data. To evaluate the performance of the models on independent data, we prepared EEG data obtained from 25 individuals who were not included in the training set. Of these 25 individuals, 12 practiced meditation and 13 did not. The equipment, experimental design, and preprocessing of the EEG data were the same as for the training set. In this part of the study, all previously trained models were tested on new data that had not been included in training. Accuracy, precision, recall, F1-score, ROC-AUC, specificity, and sensitivity were used as performance indicators for evaluating the models. Despite using parameter initialization functions, the weights were still randomly initialized within a certain range; therefore, we fixed the seed of the random number generator to ensure the stability of the models' performance.
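Fixing the random seeds, which is how we read "fixed the seed of the random number generator" above, can be done as follows; the seed value itself is arbitrary.

```python
import random
import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    """Fix all relevant random number generators for reproducible runs."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

set_seed()
```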
The performance metrics for the different models on the independent test set are shown in Figure 4. According to the test results, structure 4 showed the best results for most selected parameters. Structure 2 also achieved good results and exhibited the lowest sensitivity to overfitting, indicating higher reliability compared to structure 4.
Conclusion
In our study, a neural network was successfully developed that classifies individuals into groups practicing or not practicing meditation based on the analysis of their EEG data, with an accuracy of approximately 80-85 %. We used an EEG dataset collected and collated during our own experiments, selecting the amplitude of the ERP peak before the button press at 250-350 ms and the amplitude of the peak after the button press at 550-900 ms for 64 recording channels. The input size per participant was thus 1×2×64.
Four architectures of non-deep convolutional networks were developed, among which structures 2 and 4 performed best in tests on independent data samples. Structure 2, which used a one-dimensional convolutional layer, a pooling layer, and a two-layer fully connected network, showed the highest reliability. During the development of this model, it was noted that it was often prone to overfitting due to the limited dataset size. This was mitigated by modifying the structure and scale of the model, specific network initialization parameters, regularization, random deactivation (dropout), and hyperparameter screening via cross-validation.
Overall, the approach proposed by us was tested on two relatively small samples of non-clinical subjects. A similar method applied to experimental data from the stop-signal paradigm had been previously tested by us in classifying samples of clinical patients with depressive disorders and healthy individuals (Zelenskih et al., 2022). The results presented in this article complement the previous work, as they demonstrate that despite the small sample sizes, the convolutional neural network method makes it possible to achieve a high level of accuracy in classifying different independent groups of people differing in stress levels. Taken together, the results of both studies show that applying neural networks to data obtained from individuals during the stop-signal paradigm is a promising method for assessing their stress levels and the severity of anxiety-depressive symptoms. It should be noted that the results of M.O. Zelenskih and colleagues' study are based solely on behavioral data obtained in the stop-signal paradigm, whereas the results of our new publication are based on the analysis of brain electrical responses obtained in the same experiment. The continuation of our research should involve the application of convolutional neural networks to the simultaneous analysis of behavioral and neurobiological data in order to classify participants more accurately by stress level.
It is important to note that most standard methods for assessing stress levels or predisposition to anxiety-depressive disorders are based on psychological questionnaires or interviews with a psychiatrist (e.g., Beck et al., 1988). However, such methods have a disadvantage: patients may not want to inform the interviewer about their condition or may assess themselves inaccurately. Inaccurate self-assessment by the patient is often the cause of incorrect conclusions regarding their susceptibility to illness (Nock et al., 2010). Another approach is based on the analysis of behavioral or neurophysiological reactions to emotional stimuli. Such stimuli can be either photographs of faces expressing the patient's or other people's emotional states (Quevedo et al., 2016) or emotional messages (Bocharov et al., 2020). This method allows for an objective assessment of the degree of impairment of the brain's affective functions but is less sensitive to changes in a person's overall capacity for behavioral self-control. Our proposed method, on the other hand, is based on the use of non-emotional stimuli to induce a complex sensorimotor reaction that requires either activation or inhibition of movement. Our approach allows for the assessment of the overall level of behavioral self-control but does not provide an opportunity to assess the patient's affective state. It is obvious that these three approaches (i.e., testing using questionnaires, analysis of reactions to affective stimulation, and analysis of reactions in motor control tasks) are mutually complementary, i.e., they should all be used together for a more detailed assessment of the same patient. Although our proposed approach currently requires further testing, it may in the future yield significant results in the development of diagnostic tools for stress-induced diseases.
Fig. 1. The scheme of obtaining input data for the neural network.
Fig. 3. Results of testing four different neural network models on the training sample.
"Computer Science",
"Psychology"
] |
Cytotoxic Effects of 2-Bromopropane on Embryonic Development in Mouse Blastocysts
2-Bromopropane (2-BP), an alternative to ozone-depleting solvents, is used as a cleaning solvent. Here, we examined the cytotoxic effects of 2-BP on mouse embryos at the blastocyst stage, on subsequent embryonic attachment and outgrowth in vitro, and on in vivo implantation via embryo transfer. Mouse blastocysts were incubated in medium with or without 2-BP (2.5, 5 or 10 μM) for 24 h. Cell proliferation and growth were investigated with dual differential staining, apoptosis was analyzed by terminal deoxynucleotidyl transferase-mediated dUTP nick-end labeling (TUNEL), and implantation and post-implantation development of embryos were assessed using in vitro development analysis and in vivo embryo transfer, respectively. Blastocysts treated with 5 or 10 μM 2-BP displayed significantly increased apoptosis and decreased inner cell mass (ICM) and trophectoderm (TE) cell numbers. Additionally, the implantation success rates of 2-BP-pretreated blastocysts were lower than those of untreated controls. In vitro treatment with 5 or 10 μM 2-BP was associated with increased resorption of post-implantation embryos and decreased placental and fetal weights. Our results collectively indicate that in vitro exposure to 2-BP induces apoptosis, suppresses implantation rates after transfer to host mice, and retards early post-implantation development.
Introduction
2-Bromopropane (2-BP) is used as a cleaning solvent and as an alternative to ozone-depleting solvents. Previous studies report a high incidence of oligozoospermia in male workers after long-term exposure to 2-BP [1-3]. Several animal studies further validate the potential injurious effects of 2-BP on the reproductive, hematopoietic, central nervous, and immune systems [4-11]. In cytotoxicity experiments, mouse embryos treated with 2-BP displayed micronuclei formation and decreased embryo cell numbers [12]. Moreover, 2-BP was recently identified as a potent DNA-damaging agent [5,8]. These results collectively suggest that 2-BP induces various toxicities via its activity as a DNA-damaging agent. A reproductive toxicity investigation further demonstrated that exposure to 2-BP induced testicular or ovarian dysfunction, causing injury to early types of spermatogenic cells or to primordial follicles and oocytes of rats [4,6]. Furthermore, experiments investigating the effects of 2-BP on pre- and postnatal development showed that exposure of pregnant or lactating female rats to 2-BP resulted in a decreased delivery rate, increased peri- and postnatal death, impaired body weight development, and an increased incidence of reproductive organ dysfunction [13]. However, the regulatory mechanisms underlying the potential adverse effects of 2-BP on embryo-fetal development are yet to be established.
Apoptosis plays an important role in development and disease [14]. A number of studies demonstrate that apoptosis functions in normal embryonic development [15-17]. Conversely, chemical teratogens induce excessive apoptosis in early embryos, leading to developmental injury [18-22]. We recently showed that some natural chemical compounds and mycotoxins induce cellular apoptosis and cytotoxicity in mouse blastocysts [20,23-28]. Clearly, chemical teratogen treatment of mouse blastocysts induces apoptosis, decreases cell numbers, retards early post-implantation blastocyst development, and increases early-stage blastocyst death in vitro, while dietary chemical compounds appear to negatively affect mouse embryonic development in vivo by triggering apoptosis and inhibiting proliferation.
In the present study, we examine the cytotoxic effects of 2-BP on mouse blastocysts and related regulatory mechanisms. In our experiments, 2-BP suppressed embryonic cell proliferation during the blastocyst stage, largely by inducing apoptosis in the inner cell mass (ICM) and trophectoderm (TE).
We additionally monitored subsequent developmental injury of blastocysts in vitro and following implantation in vivo via embryo transfer.
Results and Discussion
Mouse blastocysts were treated with 2.5, 5 or 10 μM 2-BP at 37 °C for 24 h or left untreated, and apoptosis was monitored using the TUNEL method. We observed a concentration-dependent increase in apoptosis in blastocysts treated with 2-BP (5 and 10 μM) (Figure 1A). Quantitative analysis disclosed 6.7- to 10.9-fold higher levels of apoptotic cells in 2-BP-treated blastocysts vs. untreated controls (Figure 1B). Our results clearly indicate that 2-BP induces apoptosis in mouse blastocysts. (Figure 1 legend fragment: assessments of embryos were made according to established methods [29]; values are presented as means ± SD of five to eight determinations; *** P < 0.001 vs. the control group.)
To determine the effects of 2-BP on blastocyst development in vivo, we transferred mouse blastocysts (control and pretreated with 2-BP) and examined the uterine contents at 13 days post-transfer (day 18 post-coitus). The implantation ratio in the 2-BP-pretreated group was lower than that in the untreated control group (Figure 4A). Moreover, the proportion of implanted embryos that failed to develop normally was significantly higher in the groups pretreated with 5 and 10 μM 2-BP (Figure 4A). Embryos that implanted but failed to develop were subsequently resorbed (Figure 4A). Furthermore, the placental weights of mice in the 2-BP-treated group were lower than those in the untreated group (Figure 4B), and fetal weight was lower in the 10 μM 2-BP-treated group compared to controls (489 ± 51 mg vs. 607 ± 34 mg, respectively). Consistent with a previous study, recent experiments by our group show that 35-40% of fetuses weigh over 600 mg, and that the average weight of total surviving fetuses is about 600 ± 12 mg, in the untreated control group at day 18 of pregnancy in a mouse embryo transfer assay [20,27,28,30,31]. In animal models, the adverse effects of 2-BP typically become apparent only after long exposure. We therefore used an in vitro assay system to assess the mechanisms by which 2-BP exerts cytotoxic effects on embryo development. To evaluate these possible cytotoxic effects and mechanisms, the present study used short-term treatment at higher 2-BP concentrations than those used in long-term animal exposure models. In this model, mouse blastocysts cultured for 1-2 days in an incubator were co-incubated for 24 h with 2.5 to 10 μM 2-BP. In preliminary time-course experiments, 2-BP triggered apoptosis in mouse blastocyst cells only after incubation for more than 12 h, and this effect persisted through 24 h (data not shown). Based on this finding, we examined the effects of 2-BP on embryonic development by incubating blastocysts in medium containing 2.5 to 10 μM 2-BP for 24 h. Cell viability was decreased in mouse blastocysts owing to apoptosis (Figure 1). TUNEL staining revealed that treatment of mouse blastocysts with 2.5 to 10 μM 2-BP induced a 6.7- to 10.9-fold increase in apoptosis in a dose-dependent manner (Figure 1). Furthermore, dual differential and Annexin V staining disclosed 2-BP-induced cell loss in both the ICM and TE (Figure 2).
The TE arises from the trophoblast at the blastocyst stage and develops into a sphere of epithelial cells surrounding the ICM and blastocoel. These cells contribute to the placenta, and are required for development of the mammalian conceptus [34], signifying that a reduction in the TE cell lineage reduces implantation and embryonic viability [35,36]. In addition, previous studies found that a ~30% or more reduction in the number of cells in the ICM is associated with high risk of fetal loss or developmental injury, even in cases where implantation rate and TE cell numbers are normal [37].
Moreover, the ICM cell number is essential for proper implantation, and a reduction in this cell lineage may decrease embryonic viability [35,36]. While apoptosis is responsible for eliminating unwanted cells during normal embryonic development, this process does not normally occur at the blastocyst stage [38,39]. Excessive apoptosis before or during the blastocyst stage is likely to delete important cell lineages, influencing embryonic development and potentially leading to miscarriage or embryonic malformation [40]. Thus, in view of the finding that 2-BP reduces cell numbers and promotes apoptosis in both the ICM and TE of mouse blastocysts, we investigated the possibility that the compound decreases implantation and causes mortality and/or developmental delay of mouse embryos in vitro and in vivo. Our results show that 2-BP-treated blastocysts undergo decreased implantation and embryonic development and increased embryonic death in vitro, as well as decreased implantation in vivo (Figures 3 and 4). Previous reports demonstrate that 2-BP induces apoptosis via interactions with mitochondria-dependent and Fas/FasL apoptotic pathways [32]. The regulatory mechanisms and pathways underlying the impact of 2-BP on embryonic development in our model are yet to be established.
Chemicals
Expanded blastocysts collected from different pregnant females were pooled and randomly selected for experiments.
2-BP Treatment and TUNEL Assay
Blastocysts were incubated in medium containing the indicated concentrations of 2-BP for 24 h. For apoptosis detection, embryos were washed in 2-BP-free medium, fixed, permeabilized and subjected to TUNEL labeling using an in situ cell death detection kit (Roche Molecular Biochemicals, Mannheim, Germany) according to the manufacturer's protocol. Photographic images were taken under brightfield illumination using a fluorescence microscope.
2-BP Treatment and Cell Proliferation
Blastocysts were incubated with or without culture medium containing 2.5, 5 or 10 μM 2-BP. After 24 h, they were washed with 2-BP-free medium, and dual differential staining was used to facilitate counting of cell numbers in the inner cell mass (ICM) and trophectoderm (TE) [35]. Under UV light excitation, the ICM cells (which take up bisbenzimide but exclude propidium iodide, PI) appeared blue, whereas the TE cells (which take up both fluorochromes) appeared orange-red. Since multinucleated cells are not common in preimplantation embryos [42], the number of nuclei was considered to represent an accurate measure of the cell number.
Morphological Analysis of Embryonic Development
Blastocysts were cultured according to a modification of the previously reported method [43].
Briefly, embryos were cultured in 4-well multidishes at 37 °C. For group culture, four embryos were cultured per well. The basic medium consisted of CMRL-1066 supplemented with 1 mM glutamine and 1 mM sodium pyruvate plus 50 IU/mL penicillin and 50 mg/mL streptomycin (hereafter called culture medium). For treatments, the embryos were cultured with the indicated concentrations of 2-BP for 24 h in serum-free medium. Thereafter, the embryos were cultured for 3 days in culture medium supplemented with 20% fetal calf serum, and then for 4 days in culture medium supplemented with 20% heat-inactivated human placental cord serum, for a total culture time of 8 days from the onset of treatment. Embryos were inspected daily under a phase-contrast dissecting microscope, and developmental stages were classified according to established methods [29]. Developmental parameters, such as hatching through the zona pellucida, attachment to the culture dish, trophoblastic outgrowth, and differentiation of the embryo proper into early or late egg cylinders (germ layer stage) or the primitive streak to early somite stage, were recorded daily. To decrease observer bias, all data were analyzed using the following criteria to differentiate the in vitro stages of mouse embryos [29].
An implanted blastocyst was defined as a blastocyst that attached to and outgrew on the culture dish.
An early egg cylinder (EEC) embryo was defined as an embryo that had reached stages 9 or 10 by day 4. A late egg cylinder (LEC) embryo was defined as an embryo that reached stages 11, 12 or 13 by day 6 of culture. An early somite stage (ESS) embryo was defined as an embryo that had reached stages 14 or 15 by day 8.
Blastocyst Development Following Embryo Transfer
To examine the ability of expanded blastocysts to implant and develop in vivo, the generated embryos were transferred to recipient mice. ICR females (white skin color) were mated with vasectomized males (C57BL/6J; black skin color; from National Laboratory Animal Center, Taiwan, ROC) to produce pseudopregnant dams as recipients for embryo transfer. To ensure that all fetuses in the pseudopregnant mice came from embryo transfer (white color) and not from fertilization by C57BL/6J (black color), we examined the skin color of the fetuses at day 18 post-coitus. To assess the impact of 2-BP on postimplantation growth in vivo, blastocysts were exposed to 0, 2.5, 5 and 10 μM 2-BP for 24 h, and then 8 embryos were transferred in parallel to the paired uterine horns of day 4 pseudopregnant mice. The surrogate mice were killed on day 18 post-coitus, and the frequency of implantation was calculated as the number of implantation sites per number of embryos transferred.
The incidence rates of resorbed and surviving fetuses were calculated as the number of resorptions or surviving fetuses, respectively, per number of implantations. The weights of the surviving fetuses and placenta were measured immediately after dissection.
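As a minimal sketch, the rate definitions above reduce to simple ratios; the counts below are hypothetical and serve only to illustrate the arithmetic.

```python
def transfer_rates(transferred, implantations, resorptions, surviving):
    """Rates as defined above: implantations per embryo transferred,
    and resorptions/survivors per implantation."""
    implantation_rate = implantations / transferred
    resorption_rate = resorptions / implantations
    survival_rate = surviving / implantations
    return implantation_rate, resorption_rate, survival_rate

# Hypothetical counts for one recipient that received 8 embryos:
print(transfer_rates(transferred=8, implantations=6, resorptions=2, surviving=4))
# -> (0.75, 0.3333..., 0.6666...)
```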
Statistics
The data were analyzed using one-way ANOVA and t-tests and are presented as the mean ± standard deviation, with significance at P < 0.05.
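A minimal sketch of this analysis in Python, assuming two groups of fetal weights; the group names and values are hypothetical, not data from this study.

```python
from scipy import stats

control = [610, 598, 615, 602, 595]   # hypothetical fetal weights (mg)
treated = [480, 495, 470, 502, 488]   # hypothetical 10 uM 2-BP group (mg)

f_stat, p_anova = stats.f_oneway(control, treated)   # one-way ANOVA
t_stat, p_ttest = stats.ttest_ind(control, treated)  # two-sample t-test

print(f"ANOVA: F = {f_stat:.2f}, P = {p_anova:.4g}")
print(f"t-test: t = {t_stat:.2f}, P = {p_ttest:.4g}")
# A difference is reported as significant when P < 0.05.
```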
Conclusions
In summary, we have shown that 2-BP induces cellular apoptosis in both the ICM and TE of mouse blastocysts, leading to decreased implantation, impaired embryonic development, and reduced viability. Clearly, 2-BP is a potent injury risk factor for normal embryonic development. However, further studies are required to elucidate the mechanism(s) by which 2-BP affects embryonic development, as well as the teratogenic actions and regulatory mechanisms of 2-BP in human embryogenesis.
"Biology",
"Environmental Science",
"Medicine"
] |
Revisiting fundamental properties of TiO2 nanoclusters as condensation seeds in astrophysical environments
Context. The formation of inorganic cloud particles takes place in several atmospheric environments, including those of warm, hot, rocky, and gaseous exoplanets, brown dwarfs, and asymptotic giant branch stars. Cloud particle formation needs to be triggered by the in situ formation of condensation seeds, since it cannot be reasonably assumed that such seeds preexist in these chemically complex gas-phase environments. Aims. We aim to develop a method for calculating the thermochemical properties of clusters as key inputs for modelling the formation of condensation nuclei in gases of changing chemical composition. TiO2 is used as the benchmark species for cluster sizes N = 1–15. Methods. We created a total of 90000 candidate (TiO2)N geometries for cluster sizes N = 3–15. We employed a hierarchical optimisation approach, consisting of a force-field description, density-functional based tight-binding, and all-electron density-functional theory (DFT), to obtain accurate zero-point energies and thermochemical properties for the clusters. Results. Among 129 combinations of functionals and basis sets, we find that B3LYP/cc-pVTZ, including Grimme's empirical dispersion, performs most accurately with respect to experimentally derived thermochemical properties of the TiO2 molecule. We present a hitherto unreported global minimum candidate for size N = 13. The DFT-derived thermochemical cluster data are used to evaluate the nucleation rates for a given temperature–pressure profile of a model hot-Jupiter atmosphere. We find that with the updated and refined cluster data, nucleation becomes unfeasible at slightly lower temperatures, raising the lower boundary for seed formation in the atmosphere. Conclusions. The approach presented in this paper allows finding stable isomers for small (TiO2)N clusters. The choice of the functional and basis set for the all-electron DFT calculations has a measurable impact on the resulting surface tension and nucleation rate, and the updated thermochemical data are recommended for future considerations.
Cloud particles form when a supersaturated gas condenses on the surface of ultra-small particles. These nanosized particles are called cloud condensation nuclei (CCN) (Hudson 1993). The formation of CCN is a crucial step within cloud formation, but its formation rate needs to be determined from first principles, as attempts to derive it from models using the nucleation rate as a free parameter are unable to constrain it accurately (Ormel & Min 2019). Several efforts were undertaken to model the formation of CCN from a quantum-mechanical bottom-up approach, including nucleating species such as titanates (Jeong et al. 2000; Plane et al. 2013; Patzer et al. 2014), SiO, Fe- and Kr-bearing molecules (Chang et al. 2013), and alumina, Al2O3 (Lam et al. 2015; Gobrecht et al. 2021a). Recently, Köhn et al. (2021) introduced a 3D Monte Carlo approach for the nucleation of TiO2 that agrees reasonably well with results from kinetic nucleation approaches. Comparison studies like this require knowledge of thermochemical cluster data of the most favourable isomers.
On cool rocky planets, CCN can be formed from external sources such as sulfites from volcanic activity (Andres & Kasgnoc 1998), sea salt from ocean spray, condensing meteoritic dust, and dust particles from sand storms. As these sources do not exist for gaseous planets and are not guaranteed to exist for hot rocky planets, CCN need to be produced by chemical reactions out of the gas phase within the atmosphere to allow cloud formation. The existence of clouds and hazes has been predicted by models and has been observed in hot Jupiters, for example HD 189733b (Pont et al. 2013; Barstow et al. 2014), HAT-P-7b (Helling et al. 2019), or WASP-43b; warm Saturns (e.g. Nikolov et al. 2021); super-Earths (Kreidberg et al. 2013); and brown dwarfs (Apai et al. 2013). The process that is considered to produce CCN in the atmospheres of gaseous planets is the formation of small clusters through nucleation. One of the species that has been considered for the formation of CCN in gas-giant atmospheres is TiO2, in addition to less refractory species, for example SiO or KCl (Helling 2018). Different descriptions have been used to describe the formation of condensation seeds in exoplanet, brown dwarf, and asymptotic giant branch (AGB) star research: classical nucleation theory (Gail et al. 1986), modified classical nucleation theory (Gail & Sedlmayr 2013), kinetic nucleation networks (Patzer et al. 1998), and chemical-kinetic nucleation descriptions (Gobrecht et al. 2016; Boulangier et al. 2019). All rely at some point in the modelling process on thermochemical data of the species and their small nanosized clusters. Because experimental data often exist for condensed phases and simple gas-phase molecules only, quantum-chemical calculations provide the possibility of addressing the cluster size space in between. Each cluster size appears with different geometrical structures (i.e. its isomers), and it is not a priori known which of the isomers is the most favourable at each reaction step to eventually form a CCN. Experimental input would be required here. In lieu of this, we assume that the thermodynamically most favourable isomer takes this role of a key reactant. We therefore search for the isomer at the global minimum in potential energy for each cluster size because, ideally, this will be the configuration that any cluster of that size will relax towards. Global minimum candidate isomers were found in previous studies on TiO2 clusters (Lamiel-Garcia et al. 2017; Berardo et al. 2014; Jeong et al. 2000) and in the analysis of their thermochemical properties (Lee et al. 2015). In this work, a hierarchical approach is applied that uses three different levels of complexity: force fields, density-functional based tight-binding (DFTB), and density-functional theory (DFT) calculations. This approach was developed to globally search for potential geometries of small (TiO2)N clusters (N = 3−15), and their thermochemical properties are analysed using quantum-chemical DFT calculations. In Section 2 we describe the methods we used to create possible (TiO2)N cluster structures and the approximations we used to describe their interatomic interactions when searching for a geometry that is energetically favourable, that is, located at a potential minimum. This approach is tested against previous results for small TiO2 clusters, N = 3−6.
For the DFT calculations, 129 combinations of functionals and basis sets are benchmarked against experimental data, and we find that the B3LYP functional with the cc-pVTZ basis set, including Grimme's empirical dispersion, most accurately predicts the potential energies and thermochemical properties of the TiO2 molecule. In Section 3 we evaluate the results for the different steps in our multi-level approach. This section also evaluates the quality of the approach by comparing the cluster isomers we found with known isomers from the literature. Section 4 analyses the impact of the updated cluster potential energies and the related thermochemical data on nucleation rates for a model hot-Jupiter atmosphere. Finally, we discuss our results and possible further work in Section 5.
Methods
The goal of this paper is to evaluate small clusters of titanium dioxide, (TiO2)N with N = 1−15, with regard to their geometry, binding energy, and thermochemical properties, and the impact of these parameters on cloud nucleation in exoplanet atmospheres. A search is conducted for the most favourable isomer of each size N, that is, the isomer that represents the global minimum of the potential energy surface (PES), which characterises the energy of the system. Additionally, other favourable isomers with potential energies close to the minimum are explored. In order to achieve this, the geometries of the clusters are varied and brought towards a geometry that is located at a potential minimum. A hierarchical method is employed that uses three different levels of complexity to describe the PES: force fields, DFTB, and DFT calculations (see Figure 1). Because the DFTB approach takes electron–electron interactions of the valence electrons into account, compared to the purely ionic force-field description, it approximates the binding energies of the isomers better. This results in a more accurate energetic ordering of the candidate clusters for each size. For every cluster size N, candidate cluster isomers are created using different methods, and they are optimised towards a local minimum in potential energy using a basin-hopping algorithm. The energy evaluation in this optimisation procedure is based on a description of the inter-atomic interactions by the Coulomb-Buckingham force-field approximation. The resulting cluster geometries are then used as inputs for further geometry optimisation using the DFTB approach, because it describes the PES and interatomic interactions more accurately than the force fields. The DFTB approach is used as a second step only, as its higher accuracy comes at higher computational cost, making it more economical to only use pre-optimised clusters from the force-field description as inputs. Finally, the candidates at the lowest minima in the potential (which in turn have the highest binding energies) from this second step are used as inputs for all-electron DFT optimisations. With this approach, we aim to achieve highly accurate geometries and energies for the thermodynamically most stable isomers of each cluster size N.
Fig. 1: Hierarchical approach to determine global minimum candidate structures for clusters, ending in DFT minimisation and yielding global minimum isomers plus thermochemical data. 1.) The geometries and binding energies of small clusters and isomers are used to calibrate the accuracy of the force-field and DFTB methods. 2.) A force-field description of interatomic interactions is used to locally optimise the geometries of a large number of generated clusters for each size N towards a potential energy minimum. 3.) The geometries of these locally and semi-classically optimised cluster candidates are then further refined in a third step, using DFTB methods. This step optimises the geometries of the cluster candidates for the lowest possible potential energy. This provides an energetic ordering of the candidate geometries. 4.) The energetically most favourable candidate geometries are then used as inputs for DFT calculations, resulting in the final and most accurate geometries, binding energies, and vibrational and rotational frequencies for each cluster within this approach.
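To make the control flow of Fig. 1 concrete, the following is a schematic, runnable Python sketch. The *_minimise functions are trivial stand-ins for the external codes named in the text (GULP, DFTB+, Gaussian16); only the staging logic is illustrated.

```python
def force_field_minimise(s): return s      # stand-in for GULP + basin hopping
def dftb_minimise(s): return s             # stand-in for DFTB+ optimisation
def dft_minimise(s): return s              # stand-in for Gaussian16 B3LYP/cc-pVTZ
def binding_energy(s): return -sum(s)      # stand-in energy model

def hierarchical_search(seeds, n_keep=10):
    stage2 = [force_field_minimise(s) for s in seeds]       # cheap: all seeds
    stage3 = sorted((dftb_minimise(s) for s in stage2),
                    key=binding_energy)                     # re-rank candidates
    return [dft_minimise(s) for s in stage3[:n_keep]]       # expensive: few best

print(hierarchical_search([[0.1, 0.2], [0.3, 0.4]], n_keep=1))
```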
Construction of seed structures for cluster geometries
We applied the hierarchical approach to (TiO2)N cluster formation. This started with the creation of a broad range of different seed structures, or cluster geometries. These seed structures are unoptimised cluster geometries for (TiO2)N, N = 3,...,15, which were then optimised towards potential energy minima. In the first optimisation step, these un-optimised cluster geometries were used as inputs for the force-field approach (Sect. 2.4.2 and the first box in Fig. 1). In order to minimise the chances of missing a particularly stable cluster, a large number of seed structures covering a wide range of structurally diverse geometries were generated. These candidate geometries ranged from closely packed and compact structures to larger and extended structures with void parts, and included both symmetrical and asymmetrical structures for all sizes. Ideally, they are also easily optimisable, which means that the average distance between neighbouring atoms should not be much larger than their typical bond length. Four different approaches were used to create these seed clusters of size N (a minimal sketch of two of them follows the list):
- Random: For a fully randomised seed structure creation, starting with a single TiO2 monomer unit, more units are iteratively attached. They are placed at the end of a vector with a random orientation and a length of 1.6 Å, which corresponds to the typical Ti-O bond distance for nanosized TiO2 clusters (Fig. 2a).
- Known+1: Known stable isomer structures from the literature (Lamiel-Garcia et al. 2017; Berardo et al. 2014) of size N−1 are taken, and one monomer is randomly attached, analogous to method 1, to obtain seed clusters for size N (Fig. 2b).
Both of these methods have a tendency to produce rather compact and highly asymmetric seed structures, especially for larger cluster sizes N. Therefore, two more approaches were introduced to create spatially extended clusters and symmetric clusters, respectively.
- Mirror: For even cluster sizes N, a cluster of size N/2 is taken and mirrored about a random axis. For uneven cluster sizes N, a cluster of size (N−1)/2 is taken and mirrored. Afterwards, a single monomer is added along the mirror axis to bring the total number of monomer units to N. All seed geometries created using this method fall within the C2 point group (Fig. 2c).
- Equidistant: Seed structures of size N are created by evenly distributing N monomers equidistantly across a sphere of random radius, so that the distances between their centres of mass are between 1.6 and 3.2 Å. The resulting geometries resemble hollow spheres (Fig. 2d).
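Below is a minimal sketch (not the authors' code) of the Random and Equidistant generators, assuming monomers can be represented by the positions of their centres; the attachment rule for the Random method is one plausible reading of the description above.

```python
import numpy as np

BOND = 1.6  # Å, typical Ti-O bond distance in nanosized TiO2 clusters

def random_seed(n, rng):
    """Iteratively attach monomer centres along randomly oriented 1.6 Å vectors."""
    positions = [np.zeros(3)]
    for _ in range(n - 1):
        v = rng.normal(size=3)
        v *= BOND / np.linalg.norm(v)                   # random direction, fixed length
        base = positions[rng.integers(len(positions))]  # attach to a random existing unit
        positions.append(base + v)
    return np.array(positions)

def equidistant_seed(n, rng):
    """Spread n monomers near-evenly on a sphere of random radius (Fibonacci lattice)."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i
    z = 1.0 - 2.0 * (i + 0.5) / n
    r_xy = np.sqrt(1.0 - z**2)
    pts = np.stack([r_xy * np.cos(phi), r_xy * np.sin(phi), z], axis=1)
    radius = rng.uniform(BOND, 2.0 * BOND)              # spacings of roughly 1.6-3.2 Å
    return radius * pts

rng = np.random.default_rng(0)
print(random_seed(7, rng).shape, equidistant_seed(7, rng).shape)  # (7, 3) (7, 3)
```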
An example of a candidate geometry of size N = 7 produced by each of these approaches is depicted in Figure 2. As the parameter space for possible cluster geometries grows with the cluster size N, a distinction between small (N = 3−7) and large (N = 8−15) clusters was made in order to save computational cost. For small clusters, 1000 clusters were created with method 1, 400 with method 2, 400 with method 3, and 200 with method 4, giving a total of 2000 cluster seed geometries for each size. For large clusters, the number of guessed geometries was multiplied by 5 to account for the larger parameter space. Ten thousand cluster seed geometries were created for each size: 5000 with method 1, 2000 with method 2, 2000 with method 3, and 1000 with method 4 (Table 5). Hence, 90000 geometrical seed structures were tested in total. All seed geometries were then optimised towards potential minima using the force-field description of the PES that is presented at the end of Sect. 2.4.2.
Fig. 2: Examples of initial un-optimised cluster geometries of size N = 7, generated (a) randomly, (b) from a known cluster with an additional monomer (marked with yellow circles), (c) from a mirrored cluster of size N = 3 with an additional monomer in the centre, and (d) from N = 7 monomers evenly distributed on a sphere.
Density functional theory
To accurately describe a cluster, a large number of interactions, including quantum-mechanical ones, need to be taken into account. We employed DFT (see Appendix A.1) as a method for solving the Schrödinger equation approximately and for determining zero-point energies, vibrational frequencies, and rotational constants of our clusters. Density functional theory is parametrised through the choice of a functional and a basis set, which have a large impact on the resulting quantities. It is therefore essential to select an appropriate functional and basis set for our purpose.
Finding the ideal functional and basis set
For the calculations at the density functional level of theory (DFT), the Gaussian16 (Frisch et al. 2013) program was used. It is desirable to use a model parametrisation that agrees with measured data. In order to determine the DFT parametrisation in the form of a combination of functional and basis set that most closely resembles measured data, the results were compared to experimentally derived data for the zero-point energy (Malcolm W. Chase 1998), vibrational frequencies (Malcolm W. Chase 1998), and rotational frequencies (Brünken et al. 2008) for the TiO 2 monomer molecule.
Approach for TiO2: The tested functionals include two GGA DFT functionals, 12 hybrid functionals that include Hartree-Fock exchange, and the three complete basis set (CBS) methods listed in Table 1. The functionals were selected to cover a broad range of theoretical approaches as well as with regard to their availability within Gaussian16. Although there are no interactions within our clusters that stem from non-bonded interactions, we found that empirical damping coefficients as described by Grimme et al. (2011) improve the overall accuracy of our calibration. The EmpiricalDispersion=GD3BJ keyword was therefore used if available. We note that the use of an empirical dispersion does not originate from a physical or chemical consideration. Employing a similar approach in the selection of basis sets, a total of 129 candidate combinations of functionals and basis sets was produced (Table 1). For each of the combinations, Gaussian16 was used to perform an optimisation of the TiO2 monomer and to calculate a zero-point energy for the titanium and oxygen atoms. The resulting energies for the individual functionals and basis sets were compared with the experimental data found in the JANAF-NIST tables. All 129 functional/basis set combinations were evaluated according to their binding energy for the TiO2 molecule calculated at a temperature of T = 0 K. The binding energy of the monomer (TiO2) was calculated according to

E_bind(TiO2) = E_ZP(Ti) + 2 E_ZP(O) − E_ZP(TiO2),    (1)

and can now be compared to experimental values (Table 2, 4th column). E_ZP denotes the zero-point energy, that is, the energy of the system at rest in its ground electronic state. Further values that are known from experiments are the vibrational frequencies ν1, ν2, and ν3 (Malcolm W. Chase 1998) and the rotational constants A_rot, B_rot, and C_rot (Brünken et al. 2008) of the TiO2 monomer. These quantities are also a result of the DFT calculations, and the derived thermochemical properties depend on them (Eq. C.6). They were therefore used to further constrain which of the candidate combinations of functional and basis set was best suited for modelling TiO2 clusters. For the vibrational frequencies and the rotational constants, the quality of an individual candidate combination was assessed through a deviation parameter. Because the smallest deviation of the DFT results from the experimental data is desired, these deviation parameters were computed by taking the root sum squared of all relative deviations of the DFT results from the experimental data for the vibrational frequencies and the rotational constants (Table 2, Cols. 5 and 6). For the vibrational frequencies, this parameter (vibrational frequency deviation, VFD) was therefore calculated through

VFD = sqrt( Σ_{i=1..3} [ (ν_i,DFT − ν_i,exp) / ν_i,exp ]² ),    (2)

and equivalently, for the rotational constants (rotational constant deviation, RCD),

RCD = sqrt( Σ_{X=A,B,C} [ (X_rot,DFT − X_rot,exp) / X_rot,exp ]² )    (3)

(Table 2). Lastly, the runtime for each monomer calculation on a single node (32 cores) was determined, because computational speed and efficiency are desirable.
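As a small sketch of these deviation metrics, the helper below computes the root sum of squared relative deviations; the numerical values are placeholders, not the entries of Table 2.

```python
import numpy as np

def root_sum_squared_relative(dft, exp):
    """Root sum of squared relative deviations of DFT values from experiment."""
    dft, exp = np.asarray(dft, float), np.asarray(exp, float)
    return np.sqrt(np.sum(((dft - exp) / exp) ** 2))

nu_exp = [960.0, 330.0, 1020.0]   # placeholder experimental nu1..nu3 (cm^-1)
nu_dft = [955.0, 340.0, 1013.0]   # placeholder DFT frequencies (cm^-1)
print("VFD =", root_sum_squared_relative(nu_dft, nu_exp))

rot_exp = [1.10, 0.30, 0.23]      # placeholder A_rot, B_rot, C_rot (cm^-1)
rot_dft = [1.12, 0.29, 0.23]
print("RCD =", root_sum_squared_relative(rot_dft, rot_exp))
```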
The thermochemical quantity of interest is the Gibbs free energy of formation ∆ f G • (N), which is needed to compute nucleation rates (see Eq. 9 and 17). The molecular system properties that go into computing the Gibbs free energy are the zero-point energy, the vibrational frequencies, the rotational constants and spin multiplicities (Appendix C). The latter do not impact our results as we assumed closed-shell singlet clusters throughout our calculations.
The candidate combination of functional and basis set that has the smallest deviations from the experimentally known values for the zero-point energy, vibrational levels, and rotational constants was considered optimal. The B3LYP functional in combination with the cc-pVTZ basis set and empirical GD3BJ dispersion deviates least from the experimental values for the zero-point energy and the vibrational frequencies, and it has the second lowest value for the rotational constant deviation. Therefore, B3LYP/cc-pVTZ represents the most suitable choice for the calibration of approaches with a lower level of complexity (Sect. 2.4.1) and for the final optimisation of our candidate clusters (Sect. 2.5). Additionally, it is desirable to reduce computational cost. In order to achieve this, the fastest configuration that falls within the 4 kJ/mol threshold, the B3LYP functional in combination with the def2svp basis set, was chosen to pre-optimise candidate geometries. This allows the final optimisation with the cc-pVTZ basis set to be completed in fewer steps, leading to overall lower computational cost. The CCSD(T) method is known as the gold standard of computational quantum chemistry (Čížek 1969; Purvis & Bartlett 1998; Ramabhadran & Raghavachari 2013; Nagy & Kállay 2019). However, these calculations require many computational resources and are prohibitive for larger molecular systems such as nanosized clusters. Therefore, we performed CCSD(T)/6-311+G(2d,2p) single-point calculations for the GM candidates with the smallest cluster sizes, N = 1−4, and compared the 0 K binding energies with our results from hybrid DFT (B3LYP/cc-pVTZ with empirical dispersion). The CCSD(T) calculation results deviate from experimental results for the binding energy of the monomer by ≈ 90 kJ mol⁻¹, which is not adequate for our calibration purposes. When the binding energy ratios relative to the monomer, E_bind,N / E_bind,1, are compared, the values for our DFT calculations do not differ by more than 2% from the CCSD(T) values. We therefore conclude that calibration of our method with the experimentally determined properties of the monomer is sufficient (Appendix B).
Force fields
In order to save computational time, it is beneficial to be as efficient as possible in the optimisation process towards local and global potential minima for candidate isomers. For example, a titanium-dioxide cluster of size N = 12, (TiO2)12, has 36 atoms and, accordingly, 3 × 36 = 108 degrees of freedom. The number of possible geometries is simply too large to be modelled at a feasible computational cost for a large number of clusters. Therefore, a modelling approach was employed that describes the interactions between individual atoms through an interatomic Buckingham pair potential including the Coulomb potential (Appendix A.2). Several parametrisations for Ti-O systems are provided in previous studies, for instance Matsui & Akaogi (1991) or Lamiel-Garcia et al. (2017). However, we find that neither of them is well suited for our purposes, as they do not accurately depict the B3LYP/cc-pVTZ energetic ordering within isomers of the same size N. To find a force-field prescription that reflects the experimental data available for TiO2 to the extent possible, a search was conducted for a set of Buckingham parameters that reproduce the binding energies and average bond distances of the smallest clusters calculated at the start of Sect. 2.4.1.
Approach to (TiO2)N: The program we used to calculate the energy as well as the bond distance is the general utility lattice program GULP (Gale & Rohl 2003). The optimise keyword was used in a constant-pressure environment (conp) to optimise the DFT-optimised clusters and determine their binding energies for each combination of parameters. The parameters that need to be determined in Eq. A.1 are the charges of Ti and O, and the Buckingham pair parameters A, B, and C for each of the relevant interactions: Ti-Ti, O-O, and Ti-O. Because overall charge neutrality needs to be conserved, the charge of Ti was directly coupled to the charge of O by a factor of −2, that is, there were two negatively charged oxygen anions for every positively charged titanium cation. For the TiO2 molecule, we found low formal charges: a Mulliken charge analysis of the smallest cluster sizes (N = 1−4) revealed average Ti charges lower than +1e. The parametrisation therefore has ten free parameters, three from each of the interactions Ti-Ti, O-O, and Ti-O, plus the charge. To determine the ideal parameter set, all interaction parameters as well as the charge were varied freely, and the scipy.optimize differential_evolution algorithm (Storn & Price 1997) was used to determine the set of parameters that deviated the least from our calculated binding energies and average bond distances and that simultaneously reproduced the energetic ordering of isomers of the same size (Appendix D.1). With this physically consistent set of parameters, the potential energy of any Ti-O system, that is, any TiO2 cluster geometry, could now be predicted accurately and with little computational power. This approach of searching for candidate structures with the application of a Buckingham pair potential has been used before, for example in Gobrecht et al. (2018) for aluminium oxide, in Cuko et al. (2017) for hydroxylated silica clusters, and in Lamiel-Garcia et al. (2017) for titanium dioxide. The seed structures were optimised using our reparametrised force field as the first step in our hierarchical approach. To further increase the structural diversity of our searches, a basin-hopping algorithm as described by Wales & Doye (1997) was employed. It was used in its implementation in the ase Python package (Hjorth Larsen et al. 2017) to optimise the geometries of all the seed structures. The potential energy calculation at each step of the optimisation was made through GULP with our parametrisation of the force field. After the force-field optimisation, the candidate geometries are analysed further in Sect. 2.4.3.
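The parameter fit can be sketched as follows; this is a hedged toy illustration, not the authors' setup: ff_binding_energy stands in for a full GULP evaluation of the Buckingham-Coulomb potential, and the reference energies and bounds are placeholders.

```python
import numpy as np
from scipy.optimize import differential_evolution

ref_energies = np.array([6.1, 12.9, 19.8])   # placeholder reference E_bind (eV), N = 1..3

def ff_binding_energy(params, n):
    """Stand-in for GULP: a toy Buckingham-like expression so the sketch runs.
    The real objective relaxes each cluster under the fitted potential."""
    a_tio, b_tio, q_o = params
    return n * (a_tio * np.exp(-1.6 / b_tio) + 2.0 * q_o**2)

def objective(params):
    model = np.array([ff_binding_energy(params, n) for n in (1, 2, 3)])
    return np.sum((model - ref_energies) ** 2)

# Toy bounds for A_TiO (eV), B_TiO (Å), and the oxygen charge q_O (e):
bounds = [(1.0, 50.0), (0.1, 1.0), (-1.0, -0.1)]
result = differential_evolution(objective, bounds, seed=1)
print(result.x, result.fun)
```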
Density-functional based tight-binding method
From the geometry optimisation with the force-field approach (Sect. 2.4.2), 2000 candidate geometries at local potential minima for each small cluster size, N = 3−7, and 10000 candidate geometries for each large cluster size, N = 8−15, were obtained. However, the Buckingham-Coulomb pair potential does not describe the interaction between electrons and their related orbitals. More precisely, only interactions between Ti cations, O anions, and themselves are taken into account; the interaction of (binding) electrons and the electron correlation is neglected. It still serves the purpose of optimising the candidate geometries on an approximate and simplified PES. The potential energies of the (TiO2)N cluster candidates are not accurate because, among other reasons, the force-field approach considers single-point charges instead of a charge distribution for each ion in the cluster. It is therefore a reasonable assumption that the binding energies of these clusters and their energetic ordering can be improved by an intermediate optimisation step with a more accurate description accounting for electronic orbitals. This was done to obtain a more accurate energetic ordering of the candidate clusters. The best set of candidate clusters was then chosen to be optimised with computationally expensive all-electron DFT calculations (see Section 2.5). For this intermediate step, the DFTB+ code was used. To enable direct comparison to the results of DFTB+ optimised isomers, the potential energy for each of the candidate geometries was obtained by performing a single-point energy calculation with DFTB+. Next, an identical calculation for atomic Ti and atomic O was performed in order to obtain their respective potential energies. The binding energy of a (TiO2)N cluster can then be calculated by subtracting the individual contributions, analogously to Eq. 1:

E_bind(N) = N E(Ti) + 2N E(O) − E((TiO2)N).    (4)

This calculation was performed for each of the candidate geometries obtained from Sect. 2.4.2. Each of the candidate geometries was then optimised using the ConjugateGradient method, making use of its implementation within DFTB+, and the geometries were again ordered according to their binding energies calculated with Eq. 4. Because the randomisation of seed structures is not ideal, there were duplicates among the optimised geometries.
In order to filter them out, any two candidates with a binding-energy difference of ∆E_bind ≤ 0.01 eV were analysed with regard to their similarity. To do this, the mean of all inter-atomic distances within the cluster was calculated, giving a size parameter R. If the average inter-atomic distances of the two clusters were closer than ∆R ≤ 0.01 Å, the two clusters were considered duplicates, and one was removed from the process. After the duplicate candidate geometries were removed, the lowest-energy clusters for each size were used in our final step, optimisation with all-electron DFT. This is described in Sect. 2.5.
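A minimal sketch of this duplicate filter, assuming each candidate is stored as a binding energy plus an array of Cartesian coordinates; the thresholds are the ones quoted above.

```python
import numpy as np
from itertools import combinations

def mean_interatomic_distance(xyz):
    """Size parameter R: mean over all pairwise atom-atom distances."""
    return np.mean([np.linalg.norm(a - b) for a, b in combinations(xyz, 2)])

def deduplicate(clusters, de=0.01, dr=0.01):
    """clusters: list of (binding_energy_eV, xyz_array). Keep one of each pair
    whose energies differ by <= de (eV) and whose R differ by <= dr (Å)."""
    kept = []  # entries: (energy, R, xyz)
    for e, xyz in sorted(clusters, key=lambda c: c[0]):
        r = mean_interatomic_distance(xyz)
        if any(abs(e - e0) <= de and abs(r - r0) <= dr for e0, r0, _ in kept):
            continue  # near-identical energy and size: treated as a duplicate
        kept.append((e, r, xyz))
    return [(e, xyz) for e, _, xyz in kept]

rng = np.random.default_rng(0)
xyz = rng.normal(size=(9, 3))                                 # toy 3-monomer geometry
print(len(deduplicate([(1.000, xyz), (1.005, xyz.copy())])))  # -> 1
```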
Search for global minima with DFT
The energetically most favourable isomers found with DFTB (see Sect. 2.4.3) were further optimised using Gaussian16. In order to reduce the total computational time, a first optimisation was performed with the B3LYP/def2svp functional and basis set combination with empirical dispersion as described by Grimme et al. (2011). After this step, the final optimisation was made with the B3LYP/cc-pVTZ functional and basis set combination, also with empirical dispersion (Sec. 2.3). For all these isomers, a frequency analysis was carried out using the same functional and basis set, in order to ensure that the isomer was a true minimum and to exclude transition states. In addition, we calculated the final energies as well as the rotational constants needed to determine the thermochemical properties we are interested in. The thermochemical properties, namely the entropy S°(N) [J mol⁻¹ K⁻¹], the change of enthalpy d∆H [kJ mol⁻¹], and the Gibbs free energy of formation ∆fG°(N) [kJ mol⁻¹], were calculated from the output of the DFT optimisations using the RRHO approximation as implemented within the thermo.pl (Irikura 2002) code.
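As a hedged sketch of what such an RRHO evaluation involves, the snippet below computes only the harmonic-oscillator vibrational contributions to the entropy and thermal enthalpy from a list of frequencies (translational and rotational terms, which thermo.pl also handles, are omitted, and the frequencies are placeholders).

```python
import numpy as np

H = 6.62607015e-34    # J s
KB = 1.380649e-23     # J/K
C_CM = 2.99792458e10  # speed of light in cm/s (frequencies given in cm^-1)
R = 8.314462618       # J mol^-1 K^-1

def vib_thermo(freqs_cm, T):
    """Harmonic-oscillator vibrational entropy (J/mol/K) and thermal
    enthalpy beyond the ZPE (kJ/mol) at temperature T."""
    x = H * C_CM * np.asarray(freqs_cm, float) / (KB * T)
    s_vib = R * np.sum(x / np.expm1(x) - np.log1p(-np.exp(-x)))
    h_vib = R * T * np.sum(x / np.expm1(x)) / 1000.0
    return s_vib, h_vib

# Placeholder frequencies (cm^-1), not actual (TiO2)_N values:
print(vib_thermo([960.0, 330.0, 1020.0], T=1000.0))
```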
Force-field optimised clusters
The new parametrisation (Table D.1) of the Buckingham-Coulomb force field was used to optimise the geometric seed structures (Sect. 2.1). The algorithm used for this search is the basin-hopping algorithm described by Wales & Doye (1997). The energy evaluation within this algorithm was made using GULP with the new set of parameters. At each step of the algorithm, a local energy minimisation using the FIRE algorithm (Bitzek et al. 2006) was performed. This algorithm was designed specifically for the optimisation of atomistic systems towards their closest local minimum. A numerical temperature of T = 100 within the basin-hopping algorithm was chosen to allow for the exploration of nearby local minima in order to determine the deepest potential well in the vicinity. This temperature is not a physical temperature, but influences the rejection criterion within the algorithm. All seed structures created in Section 2.1 for each cluster size N = 3,...,15 were optimised with this algorithm.
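Schematically, the basin-hopping loop looks as follows; this is a runnable toy sketch, with a quadratic energy and a contraction step standing in for the GULP energy and the FIRE relaxation, and the same Metropolis-style use of the numerical temperature.

```python
import numpy as np

def basin_hopping(x0, energy, minimise, steps=200, T=100.0, step_size=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = minimise(x0); e = energy(x)
    best_x, best_e = x, e
    for _ in range(steps):
        trial = minimise(x + rng.normal(scale=step_size, size=x.shape))
        e_trial = energy(trial)
        # Metropolis rule: T controls how readily uphill moves are accepted.
        if e_trial < e or rng.random() < np.exp(-(e_trial - e) / T):
            x, e = trial, e_trial
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

toy_energy = lambda x: float(np.sum(x**2))  # stand-in for a GULP evaluation
toy_minimise = lambda x: 0.9 * x            # stand-in for a FIRE relaxation
print(basin_hopping(np.ones(3), toy_energy, toy_minimise))
```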
DFTB energy calculations and optimisation
A single-point energy calculation was performed for all cluster geometries obtained in Sect. 3.1, and their binding energies were calculated according to Eq. 4 (Fig. 3). Then each of these cluster geometries was optimised, minimising its potential energy, with the DFTB description of interactions. The ConjugateGradient optimisation algorithm (Hestenes & Stiefel 1952) was used, and the binding energies of the optimised clusters were calculated again (Fig. 4). A comparison of Figures 3 and 4 demonstrates the approximative character of the force-field approach, as a seemingly broad variety of local minima for small clusters disappears with the introduction of higher complexity in the form of DFTB. The energy levels become discrete, and many different cluster geometries from the force-field approach produce the same geometry after further optimisation. To find potentially new global minimum (GM) structures, all known literature GM clusters were optimised and their binding energies calculated with the DFTB approach for comparison. For each size, any unique cluster that had a higher binding energy than the known literature GM was categorised as a candidate for a new GM. Additionally, the ten isomers with binding energies closest to that of the literature GM were considered. These isomers (Table 4) were passed on to be optimised with all-electron DFT calculations.
Comparison of candidate creation approaches
In this section, the performance of the different methods for the creation of candidate geometries (see Sec. 2.1) is discussed. Additionally, the necessity of cluster geometries from the literature is assessed, to establish whether the method is capable of operating without prior knowledge of particularly favourable cluster geometries. The parameter space that needs to be covered to find all possible geometric configurations increases dramatically with cluster size N. One metric to consider is the number of duplicate isomers. Table 5 lists the number of created candidates and resulting unique clusters for all cluster sizes N we considered. For the small clusters, many duplicates are found because the parameter space is small and many created geometries share a nearby potential minimum. As the size of the clusters grows, the number of identical geometries falls, creating more unique clusters per candidate. A plateau of unique clusters is reached from size 12, at which about 68% of the created clusters are unique. As larger clusters have a larger parameter space for possible geometric configurations, the trend of an increasing number of unique clusters is expected to continue. The resulting plateau therefore signals a limit of the current methods to fully explore the parameter space of clusters larger than N = 12. Two out of the four approaches used in this work rely on known cluster geometries that are reported in the literature in order to produce cluster candidates (Known+1 and Mirror, methods 2 and 3 in Sec. 2.1, referred to as dependent methods), while the other two (Random and Equidistant, methods 1 and 4 in Sec. 2.1, referred to as independent methods) need only information about the monomer. Figure 5 and Table 6 show the 50 best candidates, that is, those with the highest binding energy, and their creation methods. For small clusters, N = 3−6, the independent methods produce the majority of the 50 clusters with the highest binding energy. These independent methods also find the GM candidates reported in the literature for N = 3−6, which are needed for calibration. For larger clusters, the dependent methods perform better. The random generation method only produces one of the 50 energetically most favourable clusters for N > 10. This is because the PES is very large and complex at these sizes and cannot be sufficiently explored by a random walk. Therefore, the methods that contain prior information about stable configurations, such as the dependent methods, show an enhanced performance. The equidistant method is similar because it does not rely on previously reported favourable isomers either. For N > 10, the dependent methods therefore make up far more of the energetically most favourable clusters. More elaborate methods for creating first-guess cluster geometries are available, for example through Gaussian process regression (e.g. Meyer & Hauser 2020). For the present work, the choice was made to apply simple but fast methods, and comparisons to more complex approaches are desirable in future works.
All-electron DFT calculations
All candidate geometries from Table 4, as well as all literature GM geometries from Berardo et al. (2014) and Lamiel-Garcia et al. (2017), were pre-optimised using Gaussian16 with the B3LYP functional, def2svp basis set, and empirical dispersion. Afterwards, they were optimised, and a frequency analysis was performed, with the best-performing combination found in Sec. 2.3, B3LYP/cc-pVTZ with empirical dispersion. For all isomers, the binding energies were calculated according to Eq. 4, and their respective geometries can be found in electronic form at the CDS. For cluster sizes N = 3−10, the GM candidates predicted in the literature were found among the candidate geometries after DFTB optimisation. For cluster size N = 11, the predicted GM was found among the candidate geometries after DFT optimisation. For cluster sizes N = 12, 14, and 15, the predicted GM were not found among the candidate geometries, and no more favourable isomer, that is, with an even lower potential energy, was found either. The reason most likely is that the seed-creation approaches employed here are not well suited to cover the large parameter space of these large clusters within 10000 candidates, as was mentioned in Sec. 3.3. For N = 13, we present a new global minimum candidate structure (Fig. 6), which was created through the mirror creation process (method 3 in Section 2.1). Its potential energy is 0.5 kJ mol⁻¹ per monomer unit lower than that of the previous lowest-lying isomer reported by Lamiel-Garcia et al. (2017). This difference is lower than the typically assumed accuracy for DFT calculations of 4 kJ mol⁻¹ per monomer unit. This supports the assumption that energetically similar isomers need to be studied for each cluster size, because either could play a role in the nucleation process.
In thermochemical processes such as nucleation, the most relevant isomer for each cluster size is assumed to be the energetically most favourable one. Any less favourable, meta-stable isomer that forms relaxes into this global minimum, given sufficient time for the relaxation. To compare the thermochemical properties, the thermo.pl program was used to calculate the entropy S [J mol⁻¹ K⁻¹], the change of enthalpy d∆H [kJ mol⁻¹], and the Gibbs free energy of formation ∆fG° [kJ mol⁻¹] for three different sets of clusters. This was done in order to compare the impact of the choice of functional and basis set on the zero-point energies and the thermochemical properties with the impact of the completeness of the GM search for the cluster geometries. The complete thermochemical tables for set 2 are available in electronic form at the CDS.
Astrophysically relevant (TiO2)N properties
In the previous sections, we derived the fundamental quantities, including the zero-point energy, of TiO2 clusters. The thermochemical properties of interest for astrophysical studies are derived from these quantities. The following sections present TiO2 in the context of the gas phase, TiO2 clusters as promising precursors of TiO2 dust formation, and the derivation of their thermochemical properties, especially the size- and temperature-dependent Gibbs free energy of formation ∆fG°(N, T). These properties are relevant for cloud formation modelling in exoplanets and brown dwarfs, and also for dust formation modelling in AGB stars and supernova ejecta.
Thermodynamical relevance of TiO2 in hot Jupiters
In order to assess the relevance of TiO2 for cloud formation in the atmospheres of hot Jupiters or for dust formation in an AGB envelope, it is necessary to know the thermodynamic quantity ranges in which TiO2 is the most abundant titanium-bearing molecule. To determine this, we applied the gas-phase equilibrium chemistry code GGChem from Woitke et al. (2018). TiO2 is the most abundant titanium-bearing molecule at a pressure of 10⁻³ bar in the low-temperature regime, that is, for T < 1200 K (Figure 7). Less complex Ti-bearing species dominate the chemical Ti content with increasing temperature, until Ti⁺ is the dominating Ti species at T > 3500 K. Figure 8 presents the GGChem model as a 2D (p_gas, T_gas) plane. Four 1D atmospheric (p_gas, T_gas) profiles are superimposed. The profiles correspond to a model 3D atmosphere of a hot Jupiter with T_eff = 1600 K and log(g) = 3, orbiting a G star with T* = 5650 K (Baeyens et al. 2021). This model was extrapolated to pressures of 10⁻¹⁴ bar. The 3D atmosphere model is plotted at four different locations on the planet: the substellar and anti-stellar points, and the morning and evening terminators of this hot Jupiter.
Only at the substellar point of the model atmosphere (T_eff = 1600 K) is TiO2 not the most abundant Ti-bearing species at all pressure levels; there, except in a thin layer of the atmosphere, less complex species such as TiO and, deeper in, atomic Ti are dominant. For the evening terminator, TiO2 becomes less abundant than TiO at a pressure level of ∼10⁻² bar. Moreover, we note that the (TiO2)N cluster ionisation energies of 9.3-10.5 eV are too high to affect the related abundances and the TiO2 nucleation in exoplanet atmospheres (Gobrecht et al. 2021b). This strengthens the argument that TiO2 is the most relevant Ti-bearing molecule in the upper atmosphere, which is where cloud formation is expected to take place.
Surface tension of TiO2
In classical and modified classical nucleation theory, the impact of the Gibbs free energy of formation ∆fG° on the nucleation process is modelled through the surface tension σ∞ of the bulk solid (Gail & Sedlmayr 2013). The surface tension is different for small clusters, owing to geometries and properties that differ significantly from the bulk; this difference is neglected in classical approaches. It is possible, however, to determine a surface tension that is valid for small clusters if individual cluster data are available. Here the approach of Jeong et al. (2000), as applied by Lee et al. (2015), was followed, in which the dependence of the Gibbs free energy of formation on the cluster size is linked to the surface tension σ∞ through

∆fG°(N) = ∆fG°(1) + (N − 1) ∆fG°₁(s) + k_B T N_A θ∞ (N − 1)^(2/3) / (1 + (N_f/(N−1))^(1/3)),    (5)

with cluster size N, the Gibbs free energy of formation of a cluster of size N, ∆fG°(N), the Gibbs free energy of the monomer, ∆fG°(1), the Gibbs free energy of the bulk phase, ∆fG°₁(s), a fitting factor N_f, and the dimensionless surface parameter

θ∞ = 4π a₀² σ∞ / (k_B T).    (6)

Here a₀ is the theoretical monomer radius, which is derived from the bulk density of rutile, ρ, and the molar mass, M, through

a₀ = ( 3M / (4π ρ N_A) )^(1/3).    (7)

The fitting factor N_f is set to N_f = 0, analogously to Lee et al. (2015). The Gibbs free energies for the monomer and for the bulk are known from experiments (Malcolm W. Chase 1998), as are the values necessary to derive a₀. The Gibbs free energy of formation of the clusters was calculated from the thermochemical values derived from our all-electron DFT calculations and consistently compared to the thermochemical cluster data from the study by Lee et al. (2015). This leaves σ∞ as the only free parameter, which was fit using Eq. 5. The left side of Figure 9 compares the data from the two functional and basis set combinations used in this work to the results from Lee et al. (2015). It becomes apparent that the choice of the functional and basis set influences the resulting thermochemical properties and therefore the surface tension given by the fit. We obtain a different result for the surface tension than Lee et al. (2015) using their data. The result for the surface tension at T = 1000 K, using all available GM data and the best-performing functional and basis set combination, is σ∞ = 518 erg cm⁻² (blue line in Fig. 9). In the right panel of Figure 9, the comparison is made between the fits for σ∞ using all available GM data (blue) and only the best candidates that were produced in this work (red). Both the data and the results for σ∞ vary only slightly. Visible differences occur for N = 14 and N = 15, where this work did not produce candidates that were identical or energetically close to the literature GM. The best-fit value for σ∞ without using GM candidates found in the literature is σ∞ = 525 erg cm⁻², which differs from the best-fit value by only 7 erg cm⁻². This indicates that the choice of functional and basis set has a stronger impact on the resulting surface tension than finding the lowest-energy isomer for all sizes, given the extent of our searches.
Because the potential minimum geometries that were missed all have lower enthalpies and therefore presumably also lower Gibbs free energies, they can only lower the resulting surface tension. This approach therefore gives an upper limit for the surface tension σ ∞ .
The best-fit value for the surface tension is found to be dependent on the temperature. Fig. 10 shows the best fit for a linear dependence of σ∞ on T for T = 500−2000 K; inferred from the values quoted in this section, the fitted line corresponds approximately to σ∞(T) ≈ 600 erg cm⁻² − 0.082 erg cm⁻² K⁻¹ · T. Studies have been conducted on the surface tension of bulk rutile at room temperature (Kubo et al. 2007), finding a value of σ∞(298.15 K) = 1001 erg cm⁻² for the (011) lattice. The best fit for room temperature in this study gives a value of σ∞(298.15 K) = 575.72 erg cm⁻² for small (TiO2)N molecular clusters. The factor of almost two in surface tension for small clusters versus the bulk phase of TiO2 makes it clear that the former is recommended in all considerations regarding nucleation processes.
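A hedged sketch of the σ∞ fit follows, using Eq. 5 with N_f = 0 in the form reconstructed above; the monomer and bulk Gibbs free energies are placeholders and the "data" are synthetic, so the snippet only illustrates the linear least-squares step.

```python
import numpy as np

NA = 6.02214076e23
a0 = 1.956e-8   # cm; TiO2 monomer radius from Eq. 7 (rho = 4.23 g/cm^3, M = 79.87 g/mol)

def surface_term(N, sigma):
    """Surface contribution of Eq. 5 with N_f = 0, converted from erg to kJ/mol."""
    return 4.0 * np.pi * a0**2 * NA * sigma * (N - 1.0)**(2.0 / 3.0) * 1.0e-10

N = np.arange(2, 16, dtype=float)
dfG1, dfG_bulk = -305.0, -880.0      # kJ/mol; placeholder monomer and bulk values
sigma_true = 518.0                   # erg/cm^2; the value quoted in the text
dfG = dfG1 + (N - 1) * dfG_bulk + surface_term(N, sigma_true)  # synthetic "data"

# Linear least squares for sigma_infinity:
basis = surface_term(N, 1.0)
residual = dfG - dfG1 - (N - 1) * dfG_bulk
sigma_fit = np.sum(basis * residual) / np.sum(basis**2)
print(f"fitted sigma_infinity = {sigma_fit:.1f} erg cm^-2")   # ~518.0
```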
Nucleation rates of TiO2
To quantify the effect of the updated thermochemical cluster data on quantities relevant for cloud formation in exoplanet atmospheres, the nucleation rate of TiO2 was calculated along the temperature–pressure profile of the morning terminator of a hot Jupiter with an effective temperature of T_eff = 1600 K and a surface gravity of log(g) = 3 (blue lines in Fig. 8). The gas-phase composition was calculated with the equilibrium chemistry code GGChem.
Modified classical nucleation theory
In this work, the nucleation rates were computed analogously to Lee et al. (2015) to facilitate comparison. The stationary, homogeneous, homomolecular nucleation rate in classical nucleation theory is calculated as

J* = ( f°(1) / τ_gr(N*) ) · Z(N*) · exp[ (N* − 1) ln S(T) − ∆G(N*) / (k_B T) ].    (9)

Here, S(T) and f°(1) are the supersaturation ratio and monomer number density of TiO2, respectively. τ_gr is the growth timescale, defined as

1 / τ_gr(N) = A(N) · α · n_f · v_rel,    (10)

with A(N) = 4π a₀² N^(2/3) the effective cross-section of a spherical (TiO2)N cluster, n_f the monomer number density (n_f = f°(1)), the sticking factor α, which is assumed to be α = 1, and the relative velocity v_rel, which is given for monomers with mass m_x by the thermal velocity through

v_rel = sqrt( k_B T / (2π m_x) ).    (11)

Z(N) is the Zeldovich factor, which accounts for the contribution to nucleation from Brownian motion,

Z(N) = [ (1/(2π)) · | ∂²(∆G(N)/(k_B T)) / ∂N² | ]^(1/2),    (12)

and in the final term, the Gibbs free energy ∆G(N) is approximated using modified classical nucleation theory, or MCNT, giving

∆G(N) = k_B T · θ∞ · (N − 1)^(2/3) / (1 + (N_f/(N−1))^(1/3)),    (13)

with θ∞ from Eq. 6. Equation 9 is evaluated at the critical cluster size N*, which, for N_f = 0, is given by

N* = 1 + ( 2θ∞ / (3 ln S(T)) )³.    (14)

The critical cluster size is mainly influenced by the supersaturation of molecular TiO2 at the various temperature and pressure points. When TiO2 is not supersaturated (ln S(T) < 0), N* is negative (Eq. 14), and consequently Eq. 13 has no real solution, leading to the absence of nucleation in the modified classical case at these temperature-pressure points. Using Eq. 9, we calculated the classical nucleation rate for the given (p_gas, T_gas) profile using three different values for σ∞: the temperature-dependent σ∞ from this work, the temperature-dependent σ∞ from Lee et al. (2015), and a constant σ∞ = 797 erg cm⁻², derived from the re-fit to Lee's cluster data in this work. The results are shown in Figure 11. Nucleation rates derived from the updated cluster data are overall more efficient and extend to higher temperatures. The position of the peak nucleation rate within the atmosphere is very similar for the three different surface tensions. However, nucleation stays efficient down to slightly lower pressures for the updated cluster data.
Fig. 9: Gibbs free energy of formation per cluster size N as a function of cluster size N at a temperature of T = 1000 K. For each approach, a fit for σ∞ was calculated using Eq. 5. Left: Comparison of the resulting surface tensions for different sources of cluster data. The sources are Lee et al. (2015) for the orange line, DFT calculations with the fast basis set def2svp for the green line, and DFT calculations with the accurate basis set cc-pVTZ for the blue line. Right: Comparison of the impact of isomer completeness on the resulting surface tension. Both lines use thermochemical data derived from the accurate cc-pVTZ basis set DFT calculations. For the blue line, the energetically most favoured isomer was chosen for all sizes, regardless of whether it was found by the approach in this paper. For the red line, only outputs from the approach described here were used. As the resulting difference is small, it becomes apparent that the choice of the basis set has a greater effect than the completeness of the cluster geometries.
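As a hedged illustration of Eqs. 9-14 (as reconstructed above), the snippet below evaluates the MCNT rate at a single temperature and gas state; all input numbers are placeholders, and cgs units are used throughout.

```python
import numpy as np

KB = 1.380649e-16   # erg/K (cgs units)

def mcnt_rate(T, n1, lnS, sigma, a0, m_x, alpha=1.0):
    """Stationary MCNT nucleation rate [cm^-3 s^-1], Eqs. 9-14 with N_f = 0."""
    if lnS <= 0.0:
        return 0.0                                          # not supersaturated
    theta = 4.0 * np.pi * a0**2 * sigma / (KB * T)          # Eq. 6
    n_star = 1.0 + (2.0 * theta / (3.0 * lnS))**3           # Eq. 14
    dG_kT = theta * (n_star - 1.0)**(2.0 / 3.0)             # Eq. 13 (N_f = 0)
    v_rel = np.sqrt(KB * T / (2.0 * np.pi * m_x))           # Eq. 11
    inv_tau = 4.0 * np.pi * a0**2 * n_star**(2.0 / 3.0) * alpha * n1 * v_rel  # Eq. 10
    z = np.sqrt(theta / (9.0 * np.pi)) * (n_star - 1.0)**(-2.0 / 3.0)  # Eq. 12, N_f = 0
    return n1 * inv_tau * z * np.exp((n_star - 1.0) * lnS - dG_kT)     # Eq. 9

# Placeholder state: T = 1000 K, n1 = 1e6 cm^-3, S = 100, sigma = 518 erg/cm^2,
# a0 = 1.956e-8 cm, m(TiO2) = 79.87 amu:
print(mcnt_rate(1000.0, 1e6, np.log(100.0), 518.0, 1.956e-8, 79.87 * 1.66054e-24))
```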
Non-classical nucleation theory
If detailed cluster data for all small sizes N are available, the nucleation rate can be computed using individual cluster growth rates. The non-classical nucleation rate is

J_*^(-1) = Σ_(N=1)^(N_max) τ_gr(N) / f°(N) ,    (15)

with τ_gr from Eq. 10 and f°(N) the number density of a cluster of size N. This can be computed from the partial pressure of the latter through

f°(N) = p°(N) / (k_B T) .    (16)

Applying the law of mass action to a cluster of size N gives its partial pressure as

p°(N) = p°− [p°(1)/p°−]^N exp[-ΔG(N)/(RT)] ,    (17)

with p°(1) the partial pressure of the monomer and the reference pressure p°− = 1 bar. The results for the non-classical nucleation rates are shown in Figure 12.
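A sketch of how Eqs. (15)-(17) can be evaluated from a table of cluster free energies is given below. The steady-state sum in Eq. (15) is written here in the standard Becker-Doering form, which is an assumption about its exact shape; the monomer pressure, the free-energy table, and the geometric parameters are illustrative inputs only.

```python
import numpy as np

kB = 1.380649e-16              # [erg K^-1]
P_REF = 1.0e6                  # reference pressure 1 bar [dyn cm^-2]

def cluster_densities(T, p1, dG_RT):
    """Number densities f(N) for N = 1..Nmax via Eqs. (16)-(17).
    dG_RT[N] is the free energy of forming (TiO2)_N from N monomers,
    divided by RT, with dG_RT[1] = 0; index 0 is unused."""
    f = np.zeros(len(dG_RT))
    for N in range(1, len(dG_RT)):
        ln_pN = np.log(P_REF) + N * np.log(p1 / P_REF) - dG_RT[N]   # Eq. (17)
        f[N] = np.exp(ln_pN) / (kB * T)                             # Eq. (16)
    return f

def nonclassical_rate(T, p1, dG_RT, a0=2.0e-8, m_x=79.87 * 1.6605e-24):
    """Steady-state non-classical rate, Eq. (15), as the Becker-Doering sum
    1/J = sum_N tau_gr(N)/f(N): the least efficient N -> N+1 growth step
    dominates the sum and acts as the bottleneck described in the text."""
    f = cluster_densities(T, p1, dG_RT)
    v_rel = np.sqrt(kB * T / (2.0 * np.pi * m_x))
    inv_J = 0.0
    for N in range(1, len(f) - 1):
        inv_tau = 4.0 * np.pi * a0**2 * N**(2.0 / 3.0) * f[1] * v_rel  # Eq. (10)
        inv_J += 1.0 / (inv_tau * f[N])
    return 1.0 / inv_J

# Toy example: 15 cluster sizes with a crude synthetic free-energy table
T, p1 = 1000.0, 1.0e-4 * P_REF
dG_RT = np.concatenate(([0.0, 0.0],
                        12.0 * (np.arange(2, 16) - 1.0)**(2.0 / 3.0)))
print(nonclassical_rate(T, p1, dG_RT))
```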
Results for TiO 2 nucleation rates
For the modified classical nucleation approach, three values for the surface tension were compared. The shape of the nucleation rate in Fig. 11 is determined by several factors. No nucleation occurs below a temperature of T ≈ 680 K simply because the (p_gas, T_gas) profile that is used does not extend to lower temperatures (Fig. 8). At the upper temperature end, nucleation is limited by the supersaturation of the TiO2 monomer: supersaturation (S > 1) is required for nucleation, and TiO2 is no longer supersaturated at these high temperatures, so no nucleation occurs. For lower temperatures, the surface tension from this work results in a nucleation rate about one order of magnitude higher than the surface tension recomputed with the cluster data from Lee et al. (2015). The lower surface tension also allows nucleation to remain efficient up to higher temperatures. Overall, the value for the surface tension from this work agrees well with the value derived in Lee et al. (2015); however, using their cluster data, we find a significantly higher surface tension (see Fig. 11).
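The σ_∞ re-fit step discussed above can be reproduced schematically: for a set of cluster free energies ΔG(N)/(RT), a one-parameter least-squares fit of the MCNT form (here the N_f → 0 limit of Eq. 13) yields the bulk surface tension. The monomer radius and the synthetic input data below are assumptions for illustration, not the paper's cluster data.

```python
import numpy as np

kB = 1.380649e-16              # [erg K^-1]
a0 = 2.0e-8                    # assumed monomer radius [cm]

def fit_sigma_inf(N, dG_RT, T):
    """One-parameter least-squares fit of sigma_inf [erg cm^-2] to cluster
    free energies, using dG/(RT) ~ (theta_inf/T) (N-1)^(2/3) with the
    assumed relation theta_inf = 4 pi a0^2 sigma_inf / kB."""
    x = (np.asarray(N, float) - 1.0)**(2.0 / 3.0)
    y = np.asarray(dG_RT, float)
    theta_inf = T * np.dot(x, y) / np.dot(x, x)      # closed-form 1D fit
    return theta_inf * kB / (4.0 * np.pi * a0**2)

# Sanity check with synthetic data generated from a known surface tension
N = np.arange(3, 16)
theta_true = 4.0 * np.pi * a0**2 * 518.0 / kB
dG_RT = theta_true / 1000.0 * (N - 1.0)**(2.0 / 3.0)
print(fit_sigma_inf(N, dG_RT, T=1000.0))             # recovers ~518 erg cm^-2
```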
Nucleation rates for non-classical nucleation (Fig. 12) are lower than for MCNT by about two orders of magnitude for T = 680-1600 K. This is the result of taking individual cluster data into account for all sizes instead of combining all the information into one quantity, the surface tension. Monomer growth processes from size N to size N + 1 are therefore modelled more accurately. If one of the growth processes is less efficient than the others, it creates a bottleneck for the overall nucleation rate; in this case, N + 1 is considered the critical cluster size N_*. At higher temperatures (T > 1600 K), cluster data from this work give lower nucleation rates than the cluster data from Lee et al. (2015) and MCNT. There is no strict upper temperature limit for non-classical nucleation: clusters can grow as long as it is energetically favourable for them to do so and as long as nucleating material, that is, TiO2 monomers, is available (Eq. 17). Because our updated cluster data predict that nucleation of TiO2 becomes inefficient at lower temperatures, the nucleation process starts at lower pressure levels, that is, higher up in the model atmosphere.
Conclusion
We have presented a method for determining and optimising the geometries, zero-point energies, and thermochemical properties of clusters. Emphasis was placed on exploring the parameter space of possible geometric configurations for these clusters and on deriving their potential energies and thermochemical properties accurately. The approach was tested for small (TiO2)_N clusters (N = 3-15). To ensure thermochemical accuracy, 129 combinations of DFT functionals and basis sets were tested for their accuracy against known experimental data for the TiO2 monomer. The B3LYP functional with the cc-pVTZ basis set and GD3BJ empirical dispersion was found to approach the experimental data most closely and was used for all final optimisation steps and frequency analyses. A new force-field parametrisation of the Buckingham-Coulomb pair potential was presented that reflects the cluster geometry (i.e. bond lengths) and energetic ordering of small TiO2 clusters more accurately than previous parametrisations. For the DFTB description of interactions, the matsci set of Slater-Koster integrals was found to best reflect the energetic ordering of small clusters and their isomers given by the all-electron DFT calculations. The hierarchical optimisation approach works as intended and produces a large number of energetically low-lying isomers for all sizes. For the smallest clusters (N < 7), all global minimum candidates reported in the literature were found with methods that do not rely on known cluster geometries. Because these are the same cluster sizes as were used to calibrate the less complex descriptions of inter-atomic potentials, this step of the approach is independent of prior knowledge of cluster geometries. The Random approach of seed-candidate creation is well suited to searching small parameter spaces, such as N < 7. For cluster sizes N > 10, however, the parameter space is too large to be searched with a fully randomised approach. For these larger cluster sizes, the Mirror and Known+1 approaches produce the largest number of energetically low-lying isomers, but they require prior knowledge of favourable clusters of size N - 1 (Known+1) or size N/2 (Mirror). This constrains the efficient search for clusters of these sizes: without prior knowledge, the global minima in potential energy have to be found iteratively, always growing from N to N + 1. The current implementation of the hierarchical approach was able to find all known global minima for N = 3-11, as well as a new global minimum candidate for N = 13 that lies 6 kJ mol⁻¹ below the energy of the global minimum candidate structure reported by Lamiel-Garcia et al. (2017). For N = 12, the global minimum known from the literature could not be reproduced; the closest isomer produced lies 2.45 kJ mol⁻¹ per monomer unit higher. For N = 14, the energetic distance between the lowest found isomer and the known global minimum is 5.71 kJ mol⁻¹ per monomer unit, and for N = 15 it is 6.8 kJ mol⁻¹ per monomer unit. This shows that the search method is still incomplete for larger clusters. Because only clusters up to size N = 6 had isomers that were used for calibration, the Mirror approach had only a single literature isomer from which to generate the clusters for N = 14 and N = 15, which drastically reduced the possible configurations explored by this approach.
To estimate the impact of the new thermochemical data on nucleation processes, the surface tension for small molecular clusters, as calculated in modified classical nucleation theory, was investigated. The findings of Lee et al. (2015) could not be reproduced: re-fitting their cluster data raises the best-fit value for σ_∞ for the test case at T = 1000 K to σ_∞ = 797 erg cm⁻². The fast basis set def2svp gives σ_∞ = 325 erg cm⁻², while the thermochemically more accurate basis set cc-pVTZ gives σ_∞ = 518 erg cm⁻². The spread between these values is a factor of about two, and because the surface tension appears in the exponent of the modified nucleation rate (Eq. 9), this spread is amplified. Because the B3LYP/cc-pVTZ functional and basis set combination was specifically chosen for its accuracy in modelling the thermochemical properties of small TiO2 molecular clusters, the updated value for the surface tension represents the currently best approximation. We find that, for modified classical nucleation theory, the choice of functional and basis set used to calculate the thermochemical properties has a greater impact on the nucleation rates than finding the true global minimum, as long as the structure that is used is one of the low-energy isomers. This is not true for the non-classical nucleation description, which depends on accurate data for all sizes. The non-classical nucleation process described by the updated cluster data becomes inefficient at lower temperatures, which raises the atmospheric lower border for seed formation through that process. A limitation of this description is that it only allows for homogeneous and homomolecular nucleation, ignoring pathways through cluster-cluster collisions and through species other than TiO2.
We have shown that the updated cluster data affect the nucleation rates in both their classical and non-classical descriptions. Providing cluster data for larger clusters (N > 15) will allow a more detailed comparison with independent methods, for example the molecular dynamics methods presented in Köhn et al. (2021). Additionally, the spectroscopic properties of small (TiO2)_N clusters, in particular the frequency-dependent opacities, are of interest because they can allow dust coagulation processes to be constrained through observations (Köhler et al. 2012). The method will be improved by additional candidate-creation methods and applied to other potentially CCN-forming species in the atmospheres of hot Jupiters.

The entropy of formation of a cluster at temperature T is obtained by subtracting from the cluster entropy the entropies of all atoms comprised in the cluster, S°_atoms:

Δ_f S°(T) = S°_cluster(T) - Σ_atoms S°_atom(T) .

For a cluster of size N = 5, the last term is the sum of the entropies of five Ti atoms and ten oxygen atoms. Calculating the enthalpy of formation of the cluster requires information about the enthalpies of the elements that make up the cluster. The entropies and enthalpies for Ti and O used for these calculations are taken from the JANAF-NIST tables (Chase 1998).
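The atomic bookkeeping in the entropy-of-formation expression above amounts to a one-line function. The sketch below uses placeholder entropy values of roughly the right magnitude for gas-phase Ti and O atoms; real calculations should interpolate the tabulated JANAF-NIST data rather than use these fixed numbers.

```python
def entropy_of_formation(S_cluster, N, S_Ti, S_O):
    """Delta_f S(T) = S_cluster(T) - [N * S_Ti(T) + 2N * S_O(T)] for (TiO2)_N.
    All inputs in J mol^-1 K^-1; for N = 5 this subtracts the entropies of
    five Ti atoms and ten O atoms, as in the text."""
    return S_cluster - (N * S_Ti + 2 * N * S_O)

# Placeholder atomic entropies near the 298 K JANAF values (illustrative only)
print(entropy_of_formation(S_cluster=850.0, N=5, S_Ti=180.3, S_O=161.1))
```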
The enthalpy of formation of the cluster at temperature T is then calculated through

Δ_f H°(T) = Δ_f H°(T°−) + [H°(T) - H°(T°−)]_cluster - Σ_atoms [H°(T) - H°(T°−)]_atom ,

with the reference temperature T°− = 0 K. The enthalpy of formation at the reference temperature, Δ_f H°(T°−), is the binding energy of the cluster, calculated for a cluster of size N through

Δ_f H°(T°−) = E_cluster - N E_Ti - 2N E_O .

The entropy of the system, S°_cluster, is calculated from the internal energy of the system U(T) - U(0), the partition function q(V, T), and the number of atoms in the cluster N_atoms. The internal energy is derived from the partition function q(V, T) through

U(T) - U(0) = k_B T² [∂ ln q(V, T)/∂T]_V .

When the rigid-rotor harmonic-oscillator (RRHO) approximation for a polyatomic, non-linear molecule is used, its partition function is

q(V, T) = (2π M k_B T/h²)^(3/2) V × (√π/σ) [1/(β h c)]^(3/2) (A_rot B_rot C_rot)^(-1/2) × Π_i exp(-β h c ν_i/2) / [1 - exp(-β h c ν_i)] .    (C.6)

T is the temperature of the system and β = 1/(k_B T), with the Boltzmann constant k_B; M is the total mass of the cluster, h is the Planck constant, and c is the speed of light. σ is the symmetry number, correcting for the repeated count of indistinguishable configurations. The unknown inputs in Eq. C.6 are the vibrational levels ν_i, the rotational constants A_rot, B_rot, and C_rot, and V, the volume of the particle, which are all outputs of the DFT simulations. For further reading about the calculation of these thermochemical properties, we refer to Jeong (2000). The partition function used here omits the electronic partition function,

q_el = Σ_i g_i exp(-β ϵ_i) ,

with the energy levels i, their energies ϵ_i, and their respective degeneracies g_i. This is because Gaussian16 assumes that the first electronic excitation energy is much greater than the thermal energy (ϵ_1 ≫ k_B T) and is therefore inaccessible at any temperature, resulting in q_el = 1 for all clusters and temperatures. In addition, all clusters are assumed to be in a singlet state with g_i = 1 for all i.

The DFTB energies of all isomers of sizes N = 3, ..., 6 are then compared to the energy of the global minimum isomer of the respective size to obtain the relative energy, equivalent to the approach for the force fields in Sect. D.1. These are again compared to the relative energies that result from the DFT calculations (Fig. D.2). Analogously to the left panel of Fig. D.1, the relative energy deviations from the global minimum are plotted for the DFT calculations. An ideal description matches the ordering in both directions and falls onto a slope of 1 (black line). If any clusters fall above the horizontal 0 eV line, the DFTB parametrisation shows a different isomer as the global minimum for this cluster size, which is not desirable. Comparing the panels of Fig. D.2 shows that, for trans3d (Fig. D.2b) and tiorg (Fig. D.2c), several isomers, for cluster sizes N = 3-6 and N = 3, 5 respectively, fall above the horizontal 0 eV line and therefore represent unlikely global minimum candidates according to their DFTB description. For matsci (Fig. D.2a) this is not the case, and the lowest-energy isomer corresponds to the global minimum candidate derived from the all-electron DFT calculations for each respective cluster size. Visual inspection also shows that the deviation of individual isomers from the linear function with slope 1 is smallest for matsci, which means that it reproduces the energetic ordering of the DFT calculations best. In comparison, both trans3d and tiorg have large individual outliers for almost all cluster sizes. To quantify the quality of each of the Slater-Koster integral sets, the deviation of their relative energies from the relative all-electron DFT energies is calculated. The sum of these residuals is equivalent to Q_E in Sect. D.1 and is given in the plots. The visual inspection is confirmed: the matsci set far outperforms the other two. Therefore, the matsci set of Slater-Koster integrals is used for all DFTB calculations in this work.

Fig. D.2: ... tiorg Slater-Koster integrals. The positions of all isomers on the x-axis are consistent in all three panels. The energy level of the global minimum according to DFT is given by the horizontal blue lines. The relative energetic ordering from the DFT calculations is given by their order from right to left, and the DFTB ordering is given from top to bottom. | 14,134.4 | 2022-07-11T00:00:00.000 | [
"Physics"
] |
Compounds producing an effective combinatorial regimen for disruption of HIV‐1 latency
Abstract Highly active antiretroviral therapy (HAART) has improved the outlook for the HIV epidemic, but does not provide a cure. The proposed "shock-and-kill" strategy is directed at inducing latent HIV reservoirs, which may then be purged via boosted immune response or targeting infected cells. We describe five novel compounds that are capable of reversing HIV latency without affecting the general T-cell activation state. The new compounds exhibit synergy for reactivation of latent provirus with other latency-reversing agents (LRAs), in particular ingenol-3-angelate/PEP005. One compound, designated PH02, was efficient at reactivating viral transcription in several cell lines bearing reporter HIV-1 at different integration sites. Furthermore, it was capable of reversing latency in resting CD4+ T lymphocytes from latently infected aviremic patients on HAART, while producing minimal cellular toxicity. The combination of PH02 and PEP005 produces a strong synergistic effect for reactivation, as demonstrated through a quantitative viral outgrowth assay (qVOA), on CD4+ T lymphocytes from HIV-1-infected individuals. We propose that the PH02/PEP005 combination may represent an effective novel treatment for abrogating persistent HIV-1 infection.
Thank you for the submission of your manuscript to EMBO Molecular Medicine. We have now heard back from the three referees whom we asked to evaluate your manuscript.
As you will see from the comments below, the three referees are enthusiastic about the study, even though recommendations are made to provide more mechanistic insights into the mode of action, that I would like to encourage you to address, at least in part.
We would welcome the submission of a revised version within three months for further consideration. Please note that EMBO Molecular Medicine strongly supports a single round of revision and that, as acceptance or rejection of the manuscript will depend on another round of review, your responses should be as complete as possible.
EMBO Molecular Medicine has a "scooping protection" policy, whereby similar findings that are published by others during review or revision are not a criterion for rejection. Should you decide to submit a revised version, I do ask that you get in touch after three months if you have not completed it, to update us on the status.
Please also contact us as soon as possible if similar work is published elsewhere. If other work is published we may not be able to extend the revision period beyond three months.
Please read below for important editorial formatting and consult our author's guidelines for proper formatting of your revised article for EMBO Molecular Medicine.
I look forward to receiving your revised manuscript.

***** Reviewer's comments *****

Referee #1 (Comments on Novelty/Model System): The work is interesting and correctly performed. The authors first screened about 180,000 compounds to identify molecules reactivating a latent HIV provirus in a model cell line. They selected 5 compounds that were further characterized, either alone or in combination with known molecules. One compound, termed PH02, reactivates viral replication in CD4+ T cells from HIV-1 infected individuals under successful antiretroviral treatment. The compound works synergistically with previously characterized compounds, including PEP005. The novelty resides in the identification of PH02 and other compounds, and their synergistic activity against the viral reservoir.
Referee #2 (Comments on Novelty/Model System): The manuscript by Hashemi et al. deals with the issue of identifying compounds that, in combination, could lead latent HIV-1 proviruses out of their latent state. In particular, the authors have identified 5 compounds by high-throughput screening (HTS) that reverse latent HIV-1 infection without causing generalized T-cell activation. These novel compounds synergize with previously identified latency-reversing agents (LRAs), particularly with ingenol, in both cell lines and primary resting CD4+ T cells isolated from patients receiving cART, with synergy shown by the standard Q-VOA for the combination of PH02 and PEP005.
The paper is very clearly written and represents an impressively thorough analysis of candidate novel LRAs, up to their validation in resting CD4 T cells of patients under suppressive cART.
Referee #2 (Remarks): Impressive work! It would be impossible to reduce it to a short report. The main limitation is the concept of "shock and kill" as a whole; however, within this hypothetical model this study has been really well conducted.
Referee #3 (Comments on Novelty/Model System): Phenotypic screen for novel latency reactivating agents is well performed and well described.
Referee #3 (Remarks):
Authors report on a phenotypic screen for novel latency reactivating agents. The screen is performed well and the hits are interesting. The paper is very well written. Unfortunately, although some epigenetic characteristics are tested, the true mechanism of action of the novel compounds is not elucidated. I find this a prerequisite for a paper in EMBO Molecular Medicine.
We thank all three referees for their encouraging comments; several of the referees commented that this was "impressive work" and that overall the screen was well performed. All of the referees commented that the writing was good, and we have therefore made only minor corrections to spelling and grammar throughout the revisions. Referee #2 alludes to "limitations of shock and kill", but without mentioning specifics in their comments. We, like others working in this field, recognize the steep incline that potential shock-and-kill therapies face for effective treatment, and consequently we have noted this in the Introduction in the paragraph ending "Therefore, it is likely that more advanced combination strategies must be used to produce efficient provirus induction for eradication of cells that produce replication-competent viruses using the shock and kill strategy." Referee #3 notes that we have not described a mechanism for reactivation of HIV latency by the compounds. We believe it important to point out that this study represents the first analysis of completely synthetic compound libraries in which novel compounds were identified that can reactivate HIV latency in cell lines as well as in infected PBMCs from patients, and that can also induce replication-competent virus from latently infected patient samples. One previous study describes a screen using a similar number of compounds (~200,000), but that study failed to identify compounds capable of reactivating expression of replication-competent virus from patient samples (Micheva-Viteva et al., 2011). Furthermore, we note that most comparable previous screens were performed with much smaller "known drug" libraries or libraries of natural products where mechanistic activity had already been identified or at least suspected.
We agree that it will be important to understand the biochemical effect(s) of the compounds we have identified, and we have spent considerable effort over the past several months towards this goal. Unfortunately, none of the most obvious possible mechanisms that could reactivate HIV from latency seem to be affected. We provide additional evidence in the revised manuscript showing that PH02 does not affect global histone acetylation and also does not affect expression of key transcription factors regulating activation from the HIV enhancer. In additional experiments, not described in the manuscript, we have determined, unfortunately, that none of the PH compounds inhibit growth or cause other obvious phenotypes in yeast. From these efforts we believe that the compounds must affect previously uncharacterized mechanisms for maintenance of viral latency. Identification of the mechanism of action for compounds identified in synthetic small-molecule libraries is never trivial, and will require detailed analysis of responses using proteomics and genomics, which seems beyond the scope of the present manuscript.

Thank you for the submission of your revised manuscript to EMBO Molecular Medicine. We have now received the enclosed reports from the referees that were asked to re-assess it. As you will see, the reviewers are now globally supportive, and I am pleased to inform you that we will be able to accept your manuscript pending final amendments.
***** Reviewer's comments *****

Referee #1 (Remarks for Author): The manuscript has been improved and the authors have addressed my concerns.

Referee #3 (Remarks for Author): The lack of MOA dampens my enthusiasm, but the work deserves publication so that it can be shared and continued.

1.a. How was the sample size chosen to ensure adequate power to detect a pre-specified effect size?
1.b. For animal studies, include a statement about sample size estimate even if no statistical methods were used.
2. Describe inclusion/exclusion criteria if samples or animals were excluded from the analysis. Were the criteria pre-established?
3. Were any steps taken to minimize the effects of subjective bias when allocating animals/samples to treatment (e.g. randomization procedure)? If yes, please describe.
For animal studies, include a statement about randomization even if no randomization was used.
4.a. Were any steps taken to minimize the effects of subjective bias during group allocation and/or when assessing results (e.g. blinding of the investigator)? If yes, please describe.

In the boxes below, please ensure that the answers to the following questions are reported in the manuscript itself. Every question should be answered. If the question is not relevant to your research, please write NA (not applicable). We encourage you to include a specific subsection in the methods section for statistics, reagents, animal models and human subjects.
definitions of statistical methods and measures:
a description of the sample collection allowing the reader to understand whether the samples represent technical or biological replicates (including how many animals, litters, cultures, etc.).
a specification of the experimental system investigated (e.g. cell line, species name).
B - Statistics and general methods
- the assay(s) and method(s) used to carry out the reported observations and measurements;
- an explicit mention of the biological and chemical entity(ies) that are being measured;
- an explicit mention of the biological and chemical entity(ies) that are altered/varied/perturbed in a controlled manner.
Data

The data shown in figures should satisfy the following conditions:
- the data were obtained and processed according to the field's best practice and are presented to reflect the results of the experiments in an accurate and unbiased manner;
- figure panels include only data points, measurements or observations that can be compared to each other in a scientifically meaningful way;
- graphs include clearly labeled error bars for independent experiments and sample sizes; unless justified, error bars should not be shown for technical replicates;
- if n < 5, the individual data points from each experiment should be plotted and any statistical test employed should be justified;
- the exact sample size (n) for each experimental group/condition, given as a number, not a range.

Captions

Each figure caption should contain the above information for each panel where relevant. Source Data should be included to report the data underlying graphs. Please follow the guidelines set out in the authorship guidelines on Data Presentation.
The sample sizes were chosen to obtain a 95% confidence interval of statistical significance between and within the groups of samples.
NA
Yes, for every figure with statistical tests, there were sufficient numbers of replicates to perform these tests.
Yes, a ratio paired t-test and a one-way analysis of variance (ANOVA) were used when appropriate.
Yes, the error bars were reported as standard errors derived from the standard deviation. | 2,752.4 | 2017-12-15T00:00:00.000 | [
"Chemistry",
"Medicine"
] |
What does prayer teach us about God?
Using Augustine as a dialogue partner, I consider what prayer teaches us about God and ourselves in relation to God. I argue, primarily, that prayer illuminates the great distinction between creatures and God, enabling us to live into it. Second, I reflect on the elucidating power of prayer with respect to our participation in God. In so doing, I show the ‘shaping effect’ of metaphysical reflection on the prayers of the faithful.
Prayer advances understanding of the distinction and relation between God and creatures -that is my main contention. Insofar as we pray, we learn to be a creature and thus not God; we appreciate in an experiential sense that there is a great distinction between God and us. Prayer as a doctrinally laden practice structures the exercise of our creatureliness in relation to our Creator. The pursuit of genuine creatureliness before the triune God assumes an exercise of prayer that rests on the metaphysics of God, the science of the divine being.
Prayer, moreover, helps us integrate description of God and prescription of a form of life. Description of the first principles of the divine life goes hand in hand with encouragement of a mode of life amenable to those truths. There needs to be, as Rowan Williams writes, 'a theological way of life poised between penitence and wonder', a way of creaturely being oriented to God in fear and adoration. 4 Carnal minds, in other words, express accounts of the difference that are less than edifying. However, when we call upon God in prayer, we may open up fresh horizons of understanding regarding our uncreated Creator and our participation in him. 5 Our main interlocutor in this brief inquiry is Augustine. Augustine offers us deeply theocentric wisdom in regard to our creatureliness. In City of God, Augustine states that there are three things that we need to know about a created thing: 'who made it, how he [i.e. God] made it, and why he made it'. 6 The answers Augustine gives -'"God," "Through his word", [and] "Because it is good"'inform our exploration of the great difference. 7 Accordingly, we turn now to God and to consideration of a prayerful enactment of our creatureliness before God.
The great difference
Our vocation as creatures is to live the distinction between ourselves and God. That vocation requires, however, a clear sense of what distinguishes us from God. The first thing that we say about ourselves, in distinction from God, is that we are made whereas God is not made. This means, among other things, that God does not become God in relation to us; we do not enhance God in any way, making God more than the one he has always been. God is our cause, uncreated goodness, beauty, unity, truth, light and life.
God cannot advance in being. The one who created us experiences neither increase in nor diminishment of being. The living Lord participates in nothing, neither Godhead nor divinity. The one who creates us has need of no one in order to be. Our Creator remains, in rest and repose, the one he has always been. This is not true of us, for we need this God in order to be. How then do we 'advance in being' towards this God? 8 One of the ways in which we advance is prayer. As we pray, we are, it is hoped, made friendly to God. 9 We live into our creatureliness. In prayer, we inhabit the very great difference between our Creator and ourselves, delighting in his uncreated goodness and love.
Augustine notes that 'we resemble the divine Trinity in that we exist'. 10 The resemblance is oblique, to be sure, but it is there, even on this side of the Fall. God does not sustain in being creatures that bear no resemblance to him and that are radically unlike him. Existence is a gift, and insofar as we exist, we resemble him who is existence itself. Our resemblance is heightened when we pray, for in prayer we are made friendly to our beginning and end, becoming more rather than less like God.
We awaken to God's utter self-sufficiency in prayer, the magnificent truth that we are not necessary to our maker. If we pray, we then see that the distinction between our Creator and ourselves is greater than we could conceive, had we not prayed. Our Father to whom we pray is indeed 'in heaven'. Heaven does not contain him; heaven is not superior to him; rather, he simply is above us, 'in heaven'. We are not where he is, but he who is in heaven is present by his Spirit to us wherever we are. God is active on earth -'within in filling', as Erich Przywara notes. 11 In this respect, a key metaphysical principle with respect to God is assumed in prayer. God is above us but nonetheless not so above us that he does not also indwell us.
Let us think about that. 12 Will is something true of God and creatures, although the great difference between God's will and our own will far eclipses the similarities. God's will is simple, whereas our will is not, distinct as it is from our intellect. We may know that something is right but nonetheless fail to do what is right. This is not the case with God. God's will is identical to his essence. When we will what is contrary to God's will, our will no longer conforms to its God-given call to will what is God's. We thus compromise our creatureliness in relation to the God whose will is identical to all that he is. He who created us wills that we freely will him, imitating him, living 'a life of love', resembling him in what we say and do (Eph. 5.2).
When we pray, we live into a foundational truth regarding the Creator/creature distinction: that is, we are not our 'own Good'; rather, God is. 13 The good we will for ourselves is God. In praying that God's will be done, we pray that we might persist less in what is of ourselves and rather more in what is of God, his goodness, truth and beauty. At the same time, we recognize that we will never enjoy goodness as God does, for God experiences his goodness as his own whereas any good that we have is a participation in his goodness.
Here we glimpse something of the extent to which prayer gives insight into the difference between ourselves and our Father. Our Father need not petition anyone for anything as he is in need of nothing. Novel elements never come into his nature, whereas a novel element comes into our nature -namely, the divine nature -when we pray in and through Christ. In prayer, we acquire more likeness to our cause, and the more we resemble our cause and share in his nature, the more we fulfil our vocation as creatures. Prayer prepares us for a relationship that exceeds -without contradicting -our natural capacities. Prayer bestows what John Calvin calls a 'special grace'. 14 To call Israel's Lord 'our Father', the 'I AM WHO I AM', this is something for which we must be made fit, and the means by which we are made fit is the grace of prayer.
Within the context of the mystery of prayer, we see, however dimly, theological truths that we could not have seen before. We begin to glimpse, as Augustine notes, the extent to which God exists 'in some other manner, utterly remote from anything we experience or could imagine'. 15 We appreciate and feel our contingency, hallowing a God who does not change, who is purely actual, having no potential, knowing neither increase nor decrease in being. Theology proper unfolds, accordingly, as a fruit of the mystery of prayer. The truths intrinsic to our distinction from our Creator gloss the act of prayer itself.
We have a choice whether or not to pray. When we choose to pray, good arises, good analogous to our supreme good, God himself. Indeed, we become better, increasing rather than decreasing in likeness to the God in whose image we are made. We grow in intimacy with God, maturing as creatures, rejoicing in the God who cannot do what is contrary to himself, trusting in a God who cannot diminish and thus become less than the love he is. Accordingly, when we pray 'your will be done', we pray that what is essentially true of God might become true of us, that more truth, goodness, holiness, love, etc. be present and done in and among us rather than less. So, it is important to remember that God does not pray, whereas we must, if we are to know and love God. If we do not pray, we rescind in being, losing what we have been given, the gift of being itself.
We confess in prayer that we (and not God) are responsible for our defects. As Augustine notes, 'God is the author of natures, though he is certainly not responsible for their defects.' 16 God has no defects to own but we do: 'And forgive our debts' -this is our petition, not the Son of God's. 17 As we grow in the intimacy with God, we appreciate that much more the incorruptibility of God. God cannot cease to be light. As we become pure of heart, we grow in likeness to God. Like Christ, we become those for whom it is natural to express our love for our Father in prayer. We become in prayer what we are: creatures of our loving Father. 18 As praying creatures, we recognize that the power to live our vocation as creatures is God's. God's power is never at odds with himself. It is identical to him, as is the case with his will. The same is not true for us, composite creatures that we are. Our power is at the disposal of our will, whereas God's power is not at the disposal of any 'part' of God, for there are not any parts in God. Accordingly, to be a creature who lives in harmony with their Creator is a matter of using our free will to will what God wills. 19 If we are to be made over in God's likeness, we must live by God's 'own standard'. 20 In living by God's standards, we subject ourselves to him, thus becoming a little more like him. This is the key to our living well as creatures. Think, for a moment, about the question the Lord Jesus asks of Peter: 'Do you love me?' 21 When we love God, we use our will properly. Such loving, Augustine avers, is indicative of 'a rightly divided will'. 22 We may foolishly use our will to assert our power over and against God. God, however, never wills what is contrary to himself for God is love.
Because God is (again) simple, God cannot lose himself. This is not true of those who seek to live as God's own. When we confess in prayer that power is God's, as with the kingdom and the glory, we appreciate the great difference between God and us, for God gives what he has without either losing or, for that matter, gaining himself. When we experience joy and its culmination in gladness, as God would have us do, we recognize that we may, in this life anyway, lose these things, for they are not coextensive with us. All that we can do, in life and in death, is give glory to God who is what he has and, in turn, gives.
To sum up this section, the great difference between God and us structures our prayer with God, resulting in 'true tranquillity' between us and God. 23 The distinction from God no longer occasions distrust of God but is that into which we live. We see in prayer that it is good to be a creature of God. The great difference between created and uncreated evokes not resentment but trust, delight and, ultimately, love.
Participation in God
Prayer not only provides a remarkable view of our difference with respect to God but also encourages reflection regarding our participation in God. We who were created ex nihilo have the possibility of becoming distorted by sin, using our God-given free will to resist God. In praying Jesus' prayer, we are asking that we who are so diminished in being -having been deranged by our sin -would receive God's will and all that God is, enjoying a fresh participation in him. We were 'created upright, that is, with a good will', with God in view. 24 We were not created bad. We were (and are) created to 'live according to God's will'. 25 Crucially, when we live by God's love, loving God in the love that God is, and when we live good lives, we see that God is, as Ian McFarland notes, the 'kind' of Creator who shares what he is. 26 If we pray, we may glimpse what kind of Creator he is -that God is infinitely kind, willing that all enjoy a generous participation in him. Indeed, the glory of the gospel is that creatures are invited to share in what utterly exceeds them as creatures. This is grace. As McFarland writes, following the lead of Irenaeus, God 'gives them [creatures] the glory of uncreated existence through God's own loving presence to them'. 27 We receive something of this presence, rightly, in prayer. And as we will what God wills in prayer, we receive a glimpse of God's life, 'the glory of uncreated existence'.
This register is important to maintain. Divine simplicity, rightly understood, teaches that God's power does not 'depend on God's identity', for God is his power -'yours is the power'. 28 God is power itself. God the Father is power; God the Son and God the Spirit are as well. These three, because one essence is common to them, are power -not three powers but one. Moreover, God's power is identical to God's being. What the three share and communicate to us, and what we are to imitate, is common to them. This is how we understand the notion that God 'gives them [i.e. creatures] the glory of uncreated existence'. 29 The glory in which we creatures participate by grace is that which is proper to God by nature.
What prayer assumes is God's self-subsistent being. Creatures subsist through another, by participation, whereas God subsists through himself; our being as creatures is participated being. As Thomas Aquinas writes, 'every being that is in any way is from God'. 30 We are from God, and as such we enjoy a participation by virtue of our nature in him. If we pray, we appreciate afresh that God wills to be in us not only by nature but also by the grace of the Spirit. We are spoken into being by God by virtue of his goodness in order that we might walk with and receive a participation in him that utterly exceeds nature.
If such is the case, creatures ought to aspire towards goodness, towards what God is -this is the shape of our pilgrimage in relation to the God who does not aspire towards anything. Of course, we shall, in this life, always fall short. We and not God are on pilgrimage. The difference grounds the relation, and we live into that difference in the form of prayer, thereby enjoying a participation in him.
Conclusion
In this article I have endeavoured to show the relevance of prayer to the difference of God and creatures. Throughout this piece, I have sought to move beyond a descriptive mode of discourse, arguing that the difference itself assumes prayer with participation as its fruit. Prayer does not give us a model of the difference but instead creates a spiritually and intellectually charged atmosphere that illuminates our participation in the distinction. By considering prayer, the Creator/creature difference is opened up to us in a less formal and rather more believing way. We see that the truths of the distinction are so precious that they cannot be considered otherwise than in prayer. In praying, we receive something of the greatest good, which is God, and participation in the goods of his kingdom -'spiritual and immortal goods', as Augustine winsomely calls them. 31 Prayer assumes theology proper. The one whose existence is uncreated causes us to be in such a way that we might live as his. Prayer is the gift by which we mature in intellectual and spiritual relation to this great truth. Prayer forms the home in which we learn to imitate uncreated goodness and so to participate faithfully in it. By living devout and good lives we resemble our Creator, living out the great difference from him in ever increasing relatedness to him, thereby participating in our precious Saviour and the divine nature he gifts to us through his Spirit. | 3,963.4 | 2021-07-01T00:00:00.000 | [
"Philosophy"
] |
A Novel Miniaturized Dual Slant-Polarized UWB Antenna Array with Excellent Pattern Symmetry Property for MIMO Applications
A novel miniaturized 1 × 10 uniform linear dual slant-polarized UWB antenna array for MIMO base stations is presented. The antenna array operates in the frequency band from 1710 to 2690 MHz with a 17.3-18.7 dBi gain in a size of 105 × 1100 × 37 mm. The array element is composed of two single-polarized dipoles evolved from a bow-tie antenna with slots on the arms, which miniaturize the size of the antenna. The 10 array elements are fed through an air dielectric strip-line power splitter. Two parameters, the beam tracking and the beam squint, are presented to quantitatively describe the pattern symmetry property of the antenna. The simulated and measured radiation performances are studied and compared. The results show that the pattern symmetry property of the single antenna element has been improved by about 24% compared with the former study, and the antenna array also provides excellent pattern symmetry.
Introduction
Multiple-input multiple-output (MIMO) is the use of multiple antennas at both the transmitter and the receiver to improve communication performance [1]. MIMO has attracted attention in wireless communications because it offers significant increases in data throughput and link range without additional bandwidth or increased transmit power [2]. It achieves this goal by spreading the same total transmit power over the antennas to achieve an array gain that improves the spectral efficiency and/or a diversity gain that improves the link reliability (reduced fading) [3]. Because of these advantages, MIMO is an important part of modern wireless communication standards such as IEEE 802.11n (Wi-Fi), 4G, 3GPP Long Term Evolution (LTE), WiMAX, and HSPA+ [4]. While coding and signal processing are key elements of a successful MIMO implementation, the propagation channel and the antenna design represent major parameters that ultimately impact the system performance. As a result, considerable research has been devoted recently to these two areas. For example, assessing the potential of MIMO systems requires a new level of understanding of multipath channel characteristics. Furthermore, while extensive information exists concerning the behavior of an antenna in multipath channels [5], recent activity surrounding MIMO communications has exposed new issues related to the impact of antenna properties and the array configuration on system performance. In [6], a simulation study of the channel capacity of a MIMO antenna system exploiting multiple polarizations was carried out, while in [7] Perez and Ibanez analyzed the capacity of MIMO systems based on dual-polarized antenna arrays.
In the modern mobile telecommunications industry, dual slant-polarized (+45°/−45°) base station antennas are widely used for their good anti-multipath properties. Usually, in a 2G network, a dual slant-polarized base station antenna works in 1T2R mode (1 of the 2 channels for transmitting, both channels for receiving). In a 3G or LTE network, both channels of the dual slant-polarized base station antenna might be used to transmit and receive the signal. In this case, highly symmetric radiation patterns are required of the dual slant-polarized base station antenna. However, only a few studies of the radiation pattern symmetry of antennas were carried out in the past. In [8], Chair et al. studied 4 different types of dual-polarized dielectric resonator antennas and analyzed their radiation pattern symmetry qualitatively. In [9], Gao et al. presented a CPW-fed dual-polarized dielectric resonator antenna with "almost symmetrical" radiation patterns. In [10], Mak and Rowell presented a dual-polarized patch antenna with symmetrical patterns for base stations in 1710-2690 MHz, but they did not analyze the pattern symmetry quantitatively either. In [11], Kim et al. proposed a modified dual-polarization horn antenna to improve radiation pattern symmetry and presented a method to evaluate it. They quantitatively analyzed the radiation pattern symmetry by comparing the normalized radiation pattern levels of the E/H-plane at some specified angles, including the −10 dB beam width point. The smaller the differences, the better the radiation pattern symmetry. Both the simulated and measured results have shown that the radiation pattern level differences increase as the angle increases within the −10 dB beam width range, so the difference at the −10 dB beam width point can be considered a characteristic to describe the radiation pattern symmetry. In this paper, it is called the beam tracking, defined as the maximum level difference between the normalized radiation patterns of the two orthogonal polarizations at the 10 dB beam width points, as shown in Figure 1. The beam tracking can be expressed as

Beam tracking = max( |L−,left − L+,left| , |L−,right − L+,right| ) ,

where L−,left is the level of the −45° polarization at the 10 dB point on the left, L+,left is the level of the +45° polarization at the 10 dB point on the left, L−,right is the level of the −45° polarization at the 10 dB point on the right, and L+,right is the level of the +45° polarization at the 10 dB point on the right. The smaller the beam tracking, the better the symmetry of the radiation pattern. Ideally, the value of beam tracking is expected to be 0, which means that the two patterns coincide completely at the 10 dB points. In [11], the authors supposed that the boresight was the axis of symmetry of the antenna. However, for most antennas the boresight is not the symmetry axis, because the beam is always tilted to one side, more or less. In this case the amplitude alone is not enough to describe the radiation pattern symmetry; the angular dimension, the beam squint, is also needed. The beam squint is defined as the ratio of the boresight angle to the 10 dB beam width:

Beam squint = boresight angle / (10 dB beam width) .
Ideally, the value of beam squint is expected to be 0, which means that the boresight is exactly the symmetry axis of the antenna. The beam tracking and the beam squint describe the symmetry property of the dual-polarized antenna in the amplitude domain and the angular domain, respectively. In this paper, a novel compact 1 × 10 dual slant-polarized antenna array with excellent pattern symmetry property is presented. The proposed antenna array operates in the frequency band from 1710 to 2690 MHz, with a size of 105 × 1100 × 37 mm. The fractional bandwidth of the antenna array is 44.5%, which can be classified as ultra wideband (UWB) according to the definition of UWB antennas by the Federal Communications Commission (FCC) and the International Telecommunication Union Radiocommunication Sector (ITU-R) [12]. The simulations of the proposed antenna are performed using the commercial electromagnetic simulation software HFSS. In the whole range of the 10 dB beam width, the worst beam tracking of the array element is 0.40 dB, an improvement of about 24% compared with the result of 0.5398 dB in [11], and the beam squint values are better than 1%. The antenna array also provides excellent pattern symmetry. A prototype of the proposed antenna array was fabricated and measured in an anechoic chamber. The experimental measurements of the antenna parameters are found to be in good agreement with the numerical results.
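Both metrics can be evaluated directly from sampled pattern cuts. The Python sketch below follows the definitions above; the choice of the mean of the two polarization patterns as the reference for locating the 10 dB beam width points is an assumption made here, since the text does not fix the reference pattern, and the toy parabolic beams are illustrative data only.

```python
import numpy as np

def beam_metrics(theta, p_m, p_p):
    """Beam tracking [dB] and beam squint from two normalized pattern cuts
    p_m (-45 deg port) and p_p (+45 deg port), in dB with 0 dB peaks, on the
    angle grid theta [deg]. Assumes the main beam is monotonic on each side
    of the peak so linear interpolation of the -10 dB crossings is valid."""
    p_ref = 0.5 * (p_m + p_p)
    p_ref = p_ref - p_ref.max()                 # renormalize to a 0 dB peak
    i_pk = int(np.argmax(p_ref))
    # -10 dB crossings left and right of the peak (linear interpolation)
    th_l = np.interp(-10.0, p_ref[:i_pk + 1], theta[:i_pk + 1])
    th_r = np.interp(-10.0, p_ref[i_pk:][::-1], theta[i_pk:][::-1])
    level = lambda p, th: np.interp(th, theta, p)
    beam_tracking = max(abs(level(p_m, th_l) - level(p_p, th_l)),
                        abs(level(p_m, th_r) - level(p_p, th_r)))
    beam_squint = abs(theta[i_pk]) / (th_r - th_l)   # boresight / beam width
    return beam_tracking, beam_squint

# Toy patterns: two slightly offset parabolic beams in dB (not measured data)
theta = np.linspace(-90.0, 90.0, 721)
p_m = -10.0 * ((theta - 2.0) / 30.0)**2
p_p = -10.0 * ((theta + 1.0) / 30.0)**2
print(beam_metrics(theta, p_m, p_p))
```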
Antenna Array Structure and Design
2.1. Antenna Array Element. The pattern symmetry property of the antenna array is mainly decided by the properties of the array element, so the design of the array element is quite important. The geometry of the proposed antenna array element is shown in Figure 2. The dual slant-polarized array element is composed of 2 evolved bow-tie antennas on a 105 mm wide reflector, and the total height is 37 mm. The evolutionary process from the bow-tie antenna to the proposed array element is shown in Figure 3. The bow-tie antenna is a kind of antenna with two flaring, triangular-shaped arms, which is a typical wideband antenna [13]. By adding slots on the arms, the equivalent electrical length of the arms is increased, so as to miniaturize the dimensions of the antenna. The array element has a strictly symmetrical structure, which is expected to yield good radiation symmetry. The feeding structure of the array element is shown in Figure 4. By adjusting the shape and the dimensions of the inner conductor, the antenna can be matched to 50 Ω transmission lines over the 1710 MHz-2690 MHz bandwidth. Besides that, there are holes in the arms and raised blocks at the edges of the arms, which are introduced to tune the return loss over the whole frequency band.
2.2. Antenna Array.
Based on the array element above, a uniform linear antenna array is designed. The proposed antenna array is composed of 10 elements, and the spacing between the elements was set to 110 mm based on the frequency characteristics. The structure of the antenna array is shown in Figure 5. The antenna array is fed by two 1-to-10 air dielectric strip-line power splitters, which are connected with the 10 elements through several coaxial cables, as shown in Figure 6. The amplitude of each dipole can be adjusted through the impedance of the corresponding matching section, and the phase of each dipole can be adjusted through the corresponding cable length. The dimensions of the antenna array are 105 × 1100 × 37 mm.
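To illustrate how these per-element amplitude and phase weights shape the array pattern, a uniform-linear-array factor for ten elements at 110 mm spacing can be evaluated as below. The frequency and the tapered weights are illustrative choices for the sketch, not the optimized values used for the prototype.

```python
import numpy as np

c = 299_792_458.0                    # speed of light [m/s]
f = 2.2e9                            # a mid-band frequency [Hz] (assumed)
k = 2.0 * np.pi * f / c              # free-space wavenumber [rad/m]
d = 0.110                            # element spacing [m]
n = np.arange(10)                    # element indices

amp = np.array([0.6, 0.8, 0.9, 1.0, 1.0, 1.0, 1.0, 0.9, 0.8, 0.6])  # taper
pha = np.zeros(10)                   # broadside; a linear ramp adds downtilt

theta_deg = np.linspace(-90.0, 90.0, 1801)
theta = np.radians(theta_deg)
# Array factor magnitude over the angular cut
af = np.array([np.abs(np.sum(amp * np.exp(1j * (pha + k * d * n * np.sin(t)))))
               for t in theta])
af_db = 20.0 * np.log10(af / af.max())

# Crude peak side lobe level, excluding the main beam region
side = af_db[np.abs(theta_deg) > 15.0]
print("peak side lobe level:", side.max(), "dB")
```

Making the taper more uniform raises the side lobes while narrowing the main beam, which is the trade-off behind optimizing the weights for a low upper side lobe level.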
Antenna Array Simulation and Measurement
The commercial electromagnetic simulation software HFSS is used to analyze the performance of the proposed antenna element and antenna array. The amplitude weight and the phase of each element were optimized to achieve a low upper side lobe level, which is quite critical for a wireless telecommunication base station. The antenna array has a beam width of around 65 degrees in the H-plane, which is quite suitable for mobile communication base stations. The beam tracking and the beam squint of the array element and the antenna array are calculated in the E-plane and H-plane. The simulated beam tracking results are shown in Table 1.
The simulated beam squint results of the array element and the antenna array are shown in Tables 2 and 3, respectively. The simulated results show that both the proposed antenna element and the antenna array have quite good pattern symmetry. A prototype of the proposed antenna array was fabricated for measurement. The array elements were made of aluminum for its good machinability and coated with tin to prevent surface oxidation. A photo of the proposed antenna array prototype is shown in Figure 7, while the numerical and measured results concerning the frequency behavior of the antenna parameters are reported in Figure 8.
The measurement of radiation patterns was carried out in an enclosed anechoic chamber. The simulated and measured radiation patterns in the H-plane and E-plane are shown in Figures 9 and 10, respectively.
The simulated and the measured gain curves of the proposed antenna array are shown in Figure 11. However, the measured gain is approximately 0.1 dB below the predicted values in the operating frequency band. The reason for this behavior is probably cable losses and/or the non-ideal power distribution of the power dividers. The beam tracking and the beam squint of the array element and the antenna array were measured in the E-plane and H-plane; the measured beam tracking results are shown in Table 4. The measured beam squint results of the array element and the antenna array are shown in Tables 5 and 6, respectively. The measured array element beam tracking results are better than 0.40 dB, which is about 24% better than the 0.5398 dB reported in [11], and the measured antenna array beam tracking results are better than 0.41 dB. Besides that, all the measured beam squint results are better than 1%. The measured data agree very well with the numerical results and show attractive pattern symmetry characteristics for a MIMO base station.
Conclusion
In this paper, a novel compact 1 × 10 dual slant-polarized antenna array with excellent pattern symmetry property has been proposed. The antenna array is composed of 10 dual slant-polarized array elements, which evolved from bow-tie antennas. The antenna array operates in the frequency band 1710-2690 MHz, with a size of 105 × 1100 × 37 mm.
Figure 2: The geometry of the array element: (a) 3D view, (b) top view, and (c) side view.
Figure 3: The evolution from bow-tie antenna to the proposed array element: (a) bow-tie antenna, (b) bow-tie antenna with slots, (c) bow-tie antenna with slots and holes, and (d) the proposed array element.
Figure 4: The feeding structure of the dipole: (a) 3D view, (b) the feeding structure for −45°, and (c) the feeding structure for +45°.
Figure 5: The structure of the antenna array.
Figure 6: The feeding structure of the antenna array.
Figure 7: The photo of the proposed antenna array: (a) top view of the array element, (b) side view of the array element, and (c) the proposed antenna array.
Table 1: The simulated beam tracking computed in H-plane and E-plane.
Table 2: The simulated beam squint of the array element.
Table 3: The simulated beam squint of the antenna array. | 2,991.8 | 2015-07-29T00:00:00.000 | [
"Engineering"
] |
Euclid preparation: XI. Mean redshift determination from galaxy redshift probabilities for cosmic shear tomography
The analysis of weak gravitational lensing in wide-field imaging surveys is considered a major cosmological probe of dark energy. Our capacity to constrain the dark energy equation of state relies on accurate knowledge of the galaxy mean redshift $\langle z \rangle$. We investigate the possibility of measuring $\langle z \rangle$ with an accuracy better than $0.002\,(1+z)$, in ten tomographic bins spanning the redshift interval $0.2<z<2.2$, the requirement for the cosmic shear analysis of Euclid. We implement a sufficiently realistic simulation to understand the advantages and complementarity, but also the shortcomings, of two standard approaches: the direct calibration of $\langle z \rangle$ with a dedicated spectroscopic sample, and the combination of the photometric redshift probability distribution functions (zPDFs) of individual galaxies. We base our study on the Horizon-AGN hydrodynamical simulation, which we analyse with a standard galaxy spectral energy distribution template-fitting code. This procedure produces photometric redshifts with realistic biases, precision, and failure rates. We find that the Euclid current design for direct calibration is sufficiently robust to reach the requirement on the mean redshift, provided that the purity level of the spectroscopic sample is maintained at an extremely high level of $>99.8\%$. The zPDF approach could also be successful if we debias the zPDF using a spectroscopic training sample. This approach requires deep imaging data but is weakly sensitive to spectroscopic redshift failures in the training sample. We improve the debiasing method and confirm our findings by applying it to real-world weak-lensing data sets (COSMOS and KiDS+VIKING-450).
Introduction
Understanding the late, accelerated expansion of our Universe (Riess et al. 1998; Perlmutter et al. 1999) is one of the most important challenges in modern cosmology. A dark-energy equation-of-state parameter of w = −1 is compatible with a cosmological constant, and therefore any deviation from this value would invalidate the standard Λ cold dark matter (ΛCDM) model, in favour of dynamical dark energy. This makes the precise measurement of w a key component of future cosmological experiments such as Euclid (Laureijs et al. 2011), the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST; LSST Science Collaboration et al. 2009), or the Nancy Grace Roman Space Telescope (Spergel et al. 2015).
Cosmic shear (see e.g. Kilbinger 2015; Mandelbaum 2018, for recent reviews), which is the coherent distortion of galaxy images by large-scale structures via weak gravitational lensing, offers the potential to measure w with great precision: the Euclid survey, in particular, aims at reaching 1% precision on the measurement of w using cosmic shear. One advantage of using lensing to measure w, compared to other probes, is that there exists a direct link between galaxy image geometrical distortions (i.e. the shear) and the gravitational potential of the intervening structures. When the shapes of, and distances to, galaxy sources are known, gravitational lensing allows one to probe the matter distribution of the Universe.
This has led to a rapid growth of interest in using cosmic shear as a key cosmological probe, as evidenced by its successful application to several surveys. Constraints on the matter density parameter $\Omega_m$, and the normalisation of the linear matter power spectrum $\sigma_8$, have been reported by the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS, Kilbinger et al. 2013), the Kilo Degree Survey (KiDS, Hildebrandt et al. 2017), the Dark Energy Survey (DES, Troxel et al. 2018), and the Hyper Suprime-Cam Survey (HSC, Hikage et al. 2019). These studies typically utilise so-called cosmic shear tomography (Hu 1999), whereby the cosmic shear signal is obtained by measuring the cross-correlation between galaxy shapes in different bins along the line of sight (i.e. tomographic bins). Large forthcoming surveys, also utilising cosmic shear tomography, will enhance the precision of cosmological parameter measurements (e.g. $\Omega_m$, $\sigma_8$, and w), while also enabling the measurement of any evolution in the dark-energy equation of state, such as that parametrised by Caldwell et al. (1998): $w = w_0 + w_a\,(1 - a)$, where a is the scale factor.
Tomographic cosmic shear studies require accurate knowledge of the galaxy redshift distribution. The estimation and calibration of the redshift distribution has been identified as one of the most problematic tasks in current cosmic shear surveys, as systematic bias in the distribution calibration directly influences the resulting cosmological parameter estimates. In particular, Joudaki et al. (2020) show that the $\Omega_m$-$\sigma_8$ constraints from KiDS and DES can be fully reconciled under consistent redshift calibration, thereby suggesting that the different constraints from the two surveys can be traced back to differing methods of redshift calibration.
In tomographic cosmic shear, the signal is primarily sensitive to the average distance of sources within each bin. Therefore, for this purpose, the redshift distribution of an arbitrary galaxy sample can be characterised simply by its mean $\langle z \rangle$, defined as
$$\langle z \rangle = \frac{\int z\, N(z)\, \mathrm{d}z}{\int N(z)\, \mathrm{d}z}, \tag{1}$$
where N(z) is the true redshift distribution of the sample. Furthermore, in cosmic shear tomography it is common to build the required tomographic bins using photo-z (see Salvato et al. 2019, for a review), which can be measured for large samples of galaxies with observations in only a few photometric bandpasses.
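As a minimal numerical illustration of Eq. (1), the following sketch evaluates $\langle z \rangle$ for a toy Gaussian N(z); the grid, centre, and width are arbitrary illustrative choices, not Euclid quantities.

```python
import numpy as np

# Toy discretised N(z): a Gaussian centred at z = 1 (illustrative only).
z_grid = np.linspace(0.0, 4.0, 801)
nz = np.exp(-0.5 * ((z_grid - 1.0) / 0.3) ** 2)

# Eq. (1): <z> is the normalised first moment of N(z).
mean_z = np.trapz(z_grid * nz, z_grid) / np.trapz(nz, z_grid)
print(f"<z> = {mean_z:.4f}")  # ~1.0 for this symmetric toy N(z)
```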
However, these photo-z are imperfect (due to, for example, photometric noise), resulting in tomographic bins whose true N(z) extends beyond the bin limits. These 'tails' in the redshift distribution are important, as they can significantly influence the distribution mean and carry relevant information (Ma et al. 2006).
For a Euclid-like cosmic shear survey, Laureijs et al. (2011) predict that the mean redshift $\langle z \rangle$ of each tomographic bin must be known with an accuracy better than $\sigma_{\langle z \rangle} = 0.002\,(1+z)$ in order to meet the precision on $w_0$ ($\sigma_{w_0} = 0.015$) and $w_a$ ($\sigma_{w_a} = 0.15$). Given the importance of measuring the mean redshift for cosmic-shear surveys, numerous approaches have been devised in the last decade. A first family of methods, usually referred to as 'direct calibration', involves weighting a sample of galaxies with known redshifts such that they match the colour-magnitude properties of the target galaxy sample, thereby leveraging the relationship between galaxy colours, magnitudes, and redshifts to reconstruct the redshift distribution of the target sample (e.g. Lima et al. 2008; Cunha et al. 2009; Abdalla et al. 2008). A second approach is to utilise redshift probability distribution functions (zPDFs), obtained per target galaxy and subsequently stacked to reconstruct the target population N(z). The galaxy zPDF is typically estimated by either model fitting or via machine learning. A third family of methods uses galaxy spatial information, specifically galaxy angular clustering, cross-correlating target galaxies with a large spec-z sample to retrieve the redshift distribution (e.g. Newman 2008; Ménard et al. 2013). New methods are continuously being developed, for instance by modelling galaxy populations and using forward modelling to match the data (Kacprzak et al. 2020).
In this paper we evaluate our capacity to measure the mean redshift in each tomographic bin at the precision level required for Euclid, based on realistic simulations.
We base our study on a mock catalogue generated from the Horizon-AGN hydrodynamical simulation, as described in Dubois et al. (2014) and Laigle et al. (2019). The advantage of this simulation is that the produced spectra encompass all the complexity of galaxy evolution, including rapidly varying star-formation histories, metallicity enrichment, mergers, and feedback from both supernovae and active galactic nuclei (AGN). By simulating galaxies with the imaging sensitivity expected for Euclid, we retrieve the photo-z with a standard template-fitting code, as done in existing surveys. We therefore produce photo-z with realistic biases, precision, and failure rates, as shown in Laigle et al. (2019). The simulated galaxy zPDFs appear as complex as those observed in real data.
We further simulate realistic spectroscopic training samples, with selection functions similar to those of the samples currently being acquired in preparation for Euclid and other dark energy experiments (Masters et al. 2017). We introduce possible incompleteness and failures as they occur in actual spectroscopic surveys.
We investigate two of the methods envisioned for the Euclid mission: direct calibration and zPDF combination. We also propose a new method to debias the zPDF, based on Bordoloi et al. (2010). We quantify their performance in estimating the mean redshift of tomographic bins, and isolate relevant factors that could impact our ability to fulfil the Euclid requirement. We also provide recommendations on the imaging depth and training sample necessary to achieve the required accuracy on $\langle z \rangle$.
Finally, we demonstrate the general utility of each of the methods presented here, not just for future surveys such as Euclid but also for current large imaging surveys. As an illustration, we apply these methods to the COSMOS survey and to the fourth data release of KiDS (Kuijken et al. 2019).
The paper is organised as follows. In Sect. 2 we describe the Euclid-like mock catalogues generated from the Horizon-AGN hydrodynamical simulation. In Sect. 3 we test the precision reached on $\langle z \rangle$ when applying the direct calibration method. In Sect. 4 we measure $\langle z \rangle$ in each tomographic bin using the zPDF debiasing technique. We discuss the advantages and limitations of both methods in Sect. 5. We apply these methods to the KiDS and COSMOS data sets in Sect. 6. Finally, we summarise our findings and provide closing remarks in Sect. 7.
A Euclid mock catalogue
In this section we present the Euclid mock catalogue used in this analysis, which is constructed from the Horizon-AGN hydrodynamical simulated lightcone and includes photometry and photometric redshift information. A full description of this mock catalogue can be found in Laigle et al. (2019). Here we summarise its main features and discuss the construction of several simulated spectroscopic samples, which reproduce a number of expected spectroscopic selection effects.
Horizon-AGN simulation
Horizon-AGN is a cosmological hydrodynamical simulation run in a box of $100\,h^{-1}$ Mpc per side, with a dark matter mass resolution of $8 \times 10^7\,M_\odot$ (Dubois et al. 2014). A flat ΛCDM cosmology with $H_0 = 70.4\,\mathrm{km\,s^{-1}\,Mpc^{-1}}$, $\Omega_m = 0.272$, $\Omega_\Lambda = 0.728$, and $n_s = 0.967$ (compatible with WMAP-7, Komatsu et al. 2011) is assumed. Gas evolution is followed on an adaptive mesh, whereby an initial coarse $1024^3$ grid is refined down to 1 physical kpc. The refinement procedure leads to a typical number of $6.5 \times 10^9$ gas resolution elements (called leaf cells) in the simulation at z = 1. Following Haardt & Madau (1996), heating of the gas by a uniform ultraviolet background radiation field takes place after z = 10. Gas in the simulation is able to cool down to temperatures of $10^4$ K through H and He collisions, with a contribution from metals as tabulated in Sutherland & Dopita (1993). Gas is converted into stellar particles in regions where the gas particle number density surpasses $n_0 = 0.1\,\mathrm{H\,cm^{-3}}$, following a Schmidt law, as explained in Dubois et al. (2014). Feedback from stellar winds and supernovae (both types Ia and II) is included in the simulation, comprising mass, energy, and metal releases. Black holes (BHs) in the simulation can grow by gas accretion, at a Bondi accretion rate capped at the Eddington limit, and are able to coalesce when they form a sufficiently tight binary. They release energy in either the quasar or radio (i.e. heating or jet) mode, when the accretion rate is respectively above or below one per cent of the Eddington ratio. The efficiencies of these energy release modes are tuned to match the observed BH-galaxy scaling relation at z = 0 (see Dubois et al. 2012, for more details).
The simulation lightcone was extracted as described in Pichon et al. (2010). Particles and gas leaf cells were extracted at each time step depending on their proper distance to the observer at the origin. In total, the lightcone contains roughly 22 000 portions of concentric shells, which are taken from about 19 replications of the Horizon-AGN box up to z = 4. We restrict ourselves to the central $1\,\mathrm{deg}^2$ of the lightcone. Laigle et al. (2019) extracted a galaxy catalogue from the stellar particle distribution using the AdaptaHOP halo finder (Aubert et al. 2004), where galaxy identification is based exclusively on the local stellar particle density. Only galaxies with stellar masses $M_* > 10^9\,M_\odot$ (which corresponds to around 500 stellar particles) are kept in the final catalogue, resulting in more than $7 \times 10^5$ galaxies in the redshift range 0 < z < 4, with a spatial resolution of 1 kpc.
A full description of the per-galaxy spectral energy distribution (SED) computation within Horizon-AGN is presented in Laigle et al. (2019); in the following we only summarise the key details of the SED construction process. Each stellar particle in the simulation is assumed to behave as a single stellar population, and its contribution to the galaxy spectrum is generated using the stellar population synthesis models of Bruzual & Charlot (2003), assuming a Chabrier (2003) initial mass function. As each galaxy is composed of a large number of stellar particles, the galaxy SEDs naturally capture the complexities of unique star-formation and chemical-enrichment histories. Additionally, dust attenuation is modelled for each star particle individually, using the mass distribution of the gas-phase metals as a proxy for the dust distribution, and adopting a constant dust-to-metal mass ratio. Dust attenuation (neglecting scattering) is therefore inherently geometry-dependent in the simulation. Finally, absorption of SED photons by the intergalactic medium (i.e. H I absorption in the Lyman series) is modelled along the line of sight to each galaxy, using our knowledge of the gas density distribution in the lightcone. This introduces variation in the observed intergalactic absorption across individual lines of sight. Flux contamination by nebular emission lines is not included in the simulated SEDs. While emission lines could add some complexity to a galaxy's photometry, their contribution can be modelled in template-fitting codes. Moreover, their impact is mostly crucial at high redshift (Schaerer & de Barros 2009) and when using medium bands (e.g. Ilbert et al. 2009). Kaviraj et al. (2017) compare the global properties of the simulated galaxies with statistical measurements available in the literature (such as the luminosity functions, the star-forming main sequence, and the mass functions). They find an overall fairly good agreement with observations. Still, the simulation over-predicts the density of low-mass galaxies, and the median specific star formation rate falls slightly below the literature results, a common trend in current simulations.

Fig. 2. A few examples of galaxy likelihoods $\mathcal{L}(z)$ (dashed red lines) and debiased posterior distributions (solid black lines). The spec-z (photo-z) are indicated with green (magenta) dotted lines. These galaxies are selected in the tomographic bin $0.4 < z_p < 0.6$ for the DES/Euclid (top panels) and LSST/Euclid (bottom panels) configurations. These likelihoods are not a random selection of sources, but illustrate the variety of likelihoods present in the simulations.
Simulation of Euclid photometry and photometric redshifts
As described in Laureijs et al. (2011), the Euclid mission will measure the shapes of about 1.5 billion galaxies over 15 000 deg². The visible (VIS) instrument will obtain images taken in one very broad filter (VIS), spanning 3500 Å. This filter allows extremely efficient light collection, and will enable VIS to measure the shapes of galaxies as faint as 24.5 mag with high precision. The near-infrared spectrometer and photometer (NISP) instrument will produce images in three near-infrared (NIR) filters. In addition to these data, Euclid satellite observations are expected to be complemented by large samples of ground-based imaging, primarily in the optical, to assist the measurement of photo-z.
Euclid imaging has an expected sensitivity, over 15 000 deg², of 24.5 mag (at 10σ) in the VIS band, and 24 mag (at 5σ) in each of the Y, J, and H bands (Laureijs et al. 2011). We associate the Euclid imaging with two possible ground-based visible imaging datasets, which correspond to two limiting cases for photo-z estimation performance.
- DES/Euclid. As a demonstration of photo-z performance when combining Euclid with a considerably shallower photometric dataset, we combine our Euclid photometry with that from DES (Abbott et al. 2018). DES imaging is taken in the g, r, i, and z filters, at 10σ sensitivities of 24.33, 24.08, 23.44, and 22.69 mag, respectively.
- LSST/Euclid. As a demonstration of photo-z performance when combining Euclid with a considerably deeper photometric dataset, we combine our Euclid photometry with that from the Vera C. Rubin Observatory LSST (LSST Science Collaboration et al. 2009). LSST imaging will be taken in the u, g, r, i, z, and y filters, at 5σ (point source, full depth) sensitivities of 26.3, 27.5, 27.7, 27.0, 26.2, and 24.9 mag, respectively.
DES imaging is completed and meets these expected sensitivities. Conversely, LSST will not reach the quoted full-depth sensitivities before its tenth year of operation (starting in 2021), and even then it is possible that the northern extension of LSST might not reach the same depth. Still, LSST will already be extremely deep after two years of operation, being only 0.9 mag shallower than the final expected sensitivity (Graham et al. 2020). Therefore, these two cases (and their assumed sensitivities) should comfortably encompass the possible photo-z performance of any future combined optical and Euclid photometric data set.
In order to generate the mock photometry in each of the Euclid, DES, and LSST surveys, each galaxy SED is first 'observed' through the relevant filter response curves. In each photometric band, we generate Gaussian distributions of the expected signal-to-noise ratios (SNs) as a function of magnitude, given both the depth of the survey and the typical SN-magnitude relation in the same wavelength range (see appendix A in Laigle et al. 2019). We then use these distributions, per filter, to assign each galaxy an SN (given its magnitude). The SN of each galaxy determines its 'true' flux uncertainty, which is then used to perturb the photometry (assuming Gaussian random noise) and produce the final flux estimate per source. This process is then repeated for all desired filters.
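A minimal sketch of this perturbation step, assuming per-galaxy S/N values have already been drawn from the magnitude-dependent distributions described above (array names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)

def perturb_photometry(flux_true, snr):
    """Turn noiseless fluxes into 'observed' ones: the assigned S/N fixes
    the 'true' flux uncertainty, which perturbs the photometry with
    Gaussian random noise (one call per filter)."""
    sigma = flux_true / snr                        # flux uncertainty from S/N
    flux_obs = flux_true + rng.normal(0.0, sigma)  # Gaussian perturbation
    return flux_obs, sigma
```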
The galaxy photo-z are derived in the same manner as with real-world photometry. We use the method detailed in Ilbert et al. (2013), based on the template-fitting code LePhare (Arnouts et al. 2002; Ilbert et al. 2006). We adopt a set of 33 templates from Polletta et al. (2007), complemented with templates from Bruzual & Charlot (2003). Two dust attenuation curves are considered (Prevot et al. 1984; Calzetti et al. 2000), allowing for a possible bump at 2175 Å. Neither emission lines nor adaptation of the zero-points are considered, since they are not included in the simulated galaxy catalogue. The full redshift likelihood, $\mathcal{L}(z)$, is stored for each galaxy, and the photo-z point-estimate, $z_p$, is defined as the median of $\mathcal{L}(z)$. The distributions of (derived) photometric redshift versus (intrinsic) spectroscopic redshift for mock galaxies (in both our DES/Euclid and LSST/Euclid configurations) are shown in Fig. 1. Several examples of redshift likelihoods are shown in Fig. 2. We can see realistic cases with multiple modes in the distribution, as well as asymmetric distributions around the main mode. The photo-z used to select galaxies within the tomographic bins are indicated by the magenta lines, and they can differ significantly from the spec-z (green lines).
We wish to remove galaxies with a broad likelihood distribution (i.e. galaxies with truly uncertain photo-z) from our sample. In practice, we approximate the breadth of the likelihood distribution using the photo-z uncertainties produced by the template-fitting procedure to clean the sample. LePhare produces a redshift confidence interval $[z_p^{\min}, z_p^{\max}]$, per source, which encompasses 68% of the redshift probability around $z_p$. We remove galaxies with $\max(z_p - z_p^{\min},\, z_p^{\max} - z_p) > 0.3$, which we denote $\sigma_{z_p} > 0.3$ in the following for simplicity. We investigate the impact of this choice on the number of galaxies available for cosmic shear analyses, and also quantify the impact of relaxing this limit, in Sect. 5.2.
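The cut itself reduces to a couple of array operations, as in this sketch (zp, zp_min, and zp_max are hypothetical columns from the template-fitting output):

```python
import numpy as np

def sigma_zp_cut(zp, zp_min, zp_max, threshold=0.3):
    """Boolean mask keeping galaxies whose 68% confidence interval
    around z_p is narrower than the threshold on both sides."""
    sigma_zp = np.maximum(zp - zp_min, zp_max - zp)
    return sigma_zp <= threshold
```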
Finally, we generate 18 photometric noise realisations of the mock galaxy catalogue. While the intrinsic physical properties of the simulated galaxies remain the same under each of these realisations, the differing photometric noise allows us to quantify the role of photometric noise alone on our estimates of $\langle z \rangle$. We only adopt 18 realisations due to computational limitations; however, our results are stable to the addition of more realisations.
Definition of the target photometric sample and the spectroscopic training samples
All redshift-calibration approaches discussed in this paper utilise a spec-z training sample to estimate the mean redshift of a target photometric sample. In practice, such a spectroscopic training sample is rarely a representative subset of the target photometric sample, but is often composed of bluer and brighter galaxies. Therefore, to properly assess the performance of our tested approaches, we must ensure that the simulated training sample is distinct from the photometric sample. To do this, we separate the Horizon-AGN catalogue into two equal-sized subsets: we define the first half of the photometric catalogue as our target sample, and draw variously defined spectroscopic training samples from the second half of the catalogue. We test each of our calibration approaches with three spectroscopic training samples, designed to mimic different spectroscopic selection functions: a uniform training sample, a SOM-based training sample, and a COSMOS-like training sample.
The uniform training sample is the simplest, most idealised training sample possible. We sample 1000 galaxies with VIS < 24.5 mag (i.e. the same magnitude limit as in the target sample) in each tomographic bin, independently of all other properties. While this sample is ideal in terms of representation, the sample size is set to mimic a realistic training sample that could be obtained from dedicated ground-based spectroscopic follow-up of a Euclid-like target sample.
Our second training sample follows the current Euclid baseline for building a training sample. Masters et al. (2017) endeavour to construct a spectroscopic survey, the Complete Calibration of the Colour-Redshift Relation survey (C3R2), which completely samples the colour/magnitude space of cosmic shear target samples. This sample is currently being assembled by combining data from ESO and Keck facilities (Masters et al. 2019; Guglielmo et al. 2020). The target selection is based on an unsupervised machine-learning technique, the self-organising map (SOM, Kohonen 1982), which is used to define a spectroscopic target sample that is representative of the Euclid cosmic shear sample in terms of galaxy colours. The SOM allows a projection of a multi-dimensional distribution onto a lower, two-dimensional map. The utility of the SOM lies in its preservation of higher-dimensional topology: neighbouring objects in the multi-dimensional space fall within similar regions of the resulting map. This allows the SOM to be utilised as a multi-dimensional clustering tool, whereby discrete map cells associate sources within discrete voxels in the higher-dimensional space. We utilise the method of Davidzon et al. (2019) to construct a SOM, which involves projecting observed (i.e. noisy) colours of the mock catalogue into a map of 6400 cells (with dimension 80 × 80). We construct our SOM using the LSST/Euclid simulated colours, assuming implicitly that the spec-z training sample is defined using deep calibration fields. If the flux uncertainty is too large ($\Delta m_i^x > 0.5$, for object i in filter x), the observed magnitude is replaced by that predicted from the best-fit SED template, which is estimated while preparing the SOM input catalogue. This procedure allows us to retain sources that have non-detections in some photometric bands. We then construct our SOM-based training sample by randomly selecting $N_{\rm train}$ galaxies from each cell in the SOM (a sketch of this per-cell draw is given below). The C3R2 survey expects to have one spectroscopic galaxy per SOM cell available for calibration by the time that the Euclid mission is active. For our default SOM coverage, we invoke a slightly more idealised situation of two galaxies per cell, and we impose that these two galaxies belong to the considered tomographic bin. This procedure ensures that all cells are represented in the spectroscopy. In reality, a fraction of cells will likely not contain spectroscopy. However, when treated correctly, such misrepresented cells act only to decrease the target sample number density, and do not bias the resulting redshift distribution mean estimates (Wright et al. 2020). We therefore expect that this idealised treatment will not produce results that are overly optimistic.
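A sketch of the per-cell draw used to build this training sample, assuming the SOM has already been trained on LSST/Euclid colours and that cell_id is a hypothetical integer array giving each spectroscopic galaxy's cell:

```python
import numpy as np

rng = np.random.default_rng(0)

def som_training_sample(cell_id, n_train=2):
    """Return indices of up to n_train galaxies drawn from every SOM
    cell, mimicking the idealised two-galaxies-per-cell coverage."""
    picks = []
    for cell in np.unique(cell_id):
        members = np.flatnonzero(cell_id == cell)
        n = min(n_train, members.size)
        picks.extend(rng.choice(members, size=n, replace=False))
    return np.asarray(picks)
```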
Finally, the COSMOS-like training sample mimics a typical heterogeneous spectroscopic sample, currently available in the COSMOS field. We first simulate the zCOSMOS-like spectroscopic sample (Lilly et al. 2007), which consists of two distinct components: a bright and a faint survey. The zCOSMOS-Bright sample is selected such that it contains only galaxies at z < 1.2, while the zCOSMOS-Faint sample contains only galaxies at z > 1.7 (with a strong bias towards selecting star-forming galaxies). To mimic these selections, we construct a mock sample whereby half of the sources are brighter than i = 22.5 (the bright sample) and half of the galaxies reside at 1.7 < z < 2.4 with g < 25 (the faint sample). We then add to this compilation a sample of 2000 galaxies that are randomly selected at i < 25, mimicking the low-z VUDS sample (Le Fevre et al. 2015), and a sample of 1000 galaxies randomly selected at 0.8 < z < 1.6 with i < 24, mimicking the sample of Comparat et al. (2015). By construction, this final spectroscopic redshift compilation exhibits low representation of the photometric target sample in the redshift range 1.3 < z < 1.7.
Overall, our three training samples exhibit (by design) differing redshift distributions and galaxy number densities. We investigate the sensitivity of the estimated $\langle z \rangle$ to the size of the training sample in Sect. 5.3.
Direct calibration
Direct calibration is a fairly straightforward method that can be used to estimate the mean redshift of a photometric galaxy sample, and is currently the baseline method planned for Euclid cosmic shear analyses. In this section we describe our implementation of the direct calibration method, apply this method to our various spectroscopic training samples, and report the resulting accuracy of our redshift distribution mean estimates.
Implementation for the different training samples
Given our different classes of training samples, we are able to implement slightly different methods of direct calibration. We detail here how the implementation of direct calibration differs for each of our three spectroscopic training samples.
The uniform sample. In the case where the training sample is known to uniformly sparse-sample the target galaxy distribution, an estimate of $\langle z \rangle$ can be approximated by simply computing the mean redshift of the training sample.
The SOM sample. By construction, the SOM training sample uniformly covers the full n-dimensional colour space of the target sample. The method relies on the assumption that galaxies within a cell share the same redshift (Masters et al. 2015), which can be labelled with the training sample. Therefore, we can estimate the mean redshift of the target distribution $\langle z \rangle$ by computing the weighted mean of each cell's average redshift, where the weight is the number of target galaxies per cell:
$$\langle z \rangle = \frac{1}{N_t} \sum_{i=1}^{N_{\rm cells}} N_i\, \langle z \rangle_i^{\rm train}, \tag{2}$$
where the sum runs over the $i \in [1, N_{\rm cells}]$ cells in the SOM, $\langle z \rangle_i^{\rm train}$ is the mean redshift of the training spectroscopic sources in cell i, $N_i$ is the number of target galaxies (per tomographic bin) in cell i, and $N_t$ is the total number of target galaxies in the tomographic bin. A shear weight associated with each galaxy can be introduced in this equation (e.g. Wright et al. 2020). As described in Sect. 2.3, our SOM is consistently constructed by training on LSST/Euclid photometry, even when studying the shallower DES/Euclid configuration. We adopt this strategy since the training spectroscopic samples in Euclid will be acquired in calibration fields (e.g. Masters et al. 2019) with deep dedicated imaging. This assumption implies that the target distribution $\langle z \rangle$ is estimated exclusively in these calibration fields, which are covered with photometry from both our shallow and deep setups, and therefore increases the influence of sample variance on the calibration.
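A sketch of Eq. (2), assuming per-galaxy SOM cell assignments are available for both samples within one tomographic bin; cells with no training redshift are dropped from both sums, mirroring the treatment of misrepresented cells discussed above:

```python
import numpy as np

def som_mean_redshift(cell_train, z_train, cell_target):
    """Eq. (2): <z> = (1/N_t) * sum_i N_i * <z>_i^train over SOM cells."""
    total, n_t = 0.0, 0
    for cell in np.unique(cell_target):
        n_i = np.count_nonzero(cell_target == cell)  # target galaxies N_i
        z_in_cell = z_train[cell_train == cell]
        if z_in_cell.size == 0:
            continue                                 # misrepresented cell
        total += n_i * z_in_cell.mean()              # N_i * <z>_i^train
        n_t += n_i
    return total / n_t
```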
The COSMOS-like sample. Applying direct calibration to a heterogeneous training sample is less straightforward than in the above cases, as the training sample is not representative of the target sample in any respect. Weighting of the spectroscopic sample, therefore, must correct for the mix of spectroscopic selection effects present in the training sample, as a function of magnitude (from the various magnitude limits of the individual spectroscopic surveys), colour (from their various preselections in colour and spectral type), and redshift (from dedicated redshift preselection, such as that in zCOSMOS-Faint). Such a weighting scheme can be established efficiently with machine-learning techniques such as the SOM. To perform this weighting, we train a new SOM using all the information that has the potential to correct for the selection effects present in our heterogeneous training sample: apparent magnitudes, colours, and template-based photo-z. We create this SOM using only the galaxies from the COSMOS-like sample that belong to the considered tomographic bin, and reduce the size of the map to 400 cells (20 × 20, because the tomographic bin itself spans a smaller colour space). Finally, we project the target sample into the SOM and derive weights for each training sample galaxy, such that they reproduce the per-cell density of target sample galaxies. This process follows the same weighting procedure as Wright et al. (2020), who extend the direct calibration method of Lima et al. (2008) to include source groupings defined via the SOM. In this method, the estimate of $\langle z \rangle$ is also inferred using Eq. (2).
Results
We apply the direct calibration technique to the mock catalogue, split into ten tomographic bins spanning the redshift interval $0.2 < z_p < 2.2$. To construct the samples within each tomographic bin, training and target samples are selected based on their best-estimate photo-z, $z_p$. We quantify the performance of the redshift calibration procedure using the measured bias in $\langle z \rangle$, defined as
$$\Delta \langle z \rangle = \langle z \rangle_{\rm estimated} - \langle z \rangle_{\rm true}, \tag{3}$$
and evaluated over the target sample. We present the values of $\Delta \langle z \rangle$ obtained with direct calibration in Fig. 3, for each of the ten tomographic bins. The figure shows, per tomographic bin, the population mean (points) and 68% population scatter (error bars) of $\Delta \langle z \rangle$ over the 18 photometric noise realisations of our simulation. The solid lines and yellow region indicate the $|\Delta \langle z \rangle| \le 2 \times 10^{-3}$ requirement stipulated by the Euclid mission.
Given our limited number of photometric noise realisations, estimating the population mean and scatter directly from the 18 samples is not sufficiently robust for our purposes. We thus use maximum likelihood estimation, assuming Gaussianity of the $\Delta \langle z \rangle$ distribution, to determine the underlying population mean and scatter. We define these underlying population statistics as $\mu_{\Delta z}$ and $\sigma_{\Delta z}$ for the mean and the scatter, respectively. We find that, when using a uniform or SOM training sample, direct calibration is consistently able to recover the target sample mean redshift to $|\mu_{\Delta z}| < 2 \times 10^{-3}$. In the case of the shallow DES/Euclid configuration, however, the scatter $\sigma_{\Delta z}$ exceeds the Euclid accuracy requirement in the highest and lowest tomographic bins. The DES/Euclid configuration is, therefore, technically unable to meet the Euclid precision requirement on $\langle z \rangle$ in the extreme bins. In the LSST/Euclid configuration, conversely, the precision and accuracy requirements are both consistently satisfied. We hypothesise that this difference stems from the deeper photometry having higher discriminatory power in the tomographic binning itself: the N(z) distribution for each tomographic bin is intrinsically broader for bins defined with shallow photometry, and therefore has the potential to demonstrate greater complexity (such as colour-redshift degeneracies) that reduces the effectiveness of direct calibration.
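For a Gaussian model, the maximum likelihood estimates reduce to the sample mean and the root-mean-square deviation, as in this sketch over the 18 realisations:

```python
import numpy as np

def gaussian_mle(delta_z):
    """MLE of mu_dz and sigma_dz, assuming the Delta<z> values from the
    photometric noise realisations are Gaussian-distributed."""
    mu = np.mean(delta_z)
    sigma = np.sqrt(np.mean((delta_z - mu) ** 2))  # MLE, not Bessel-corrected
    return mu, sigma
```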
The direct calibration with the SOM relies on the assumption that galaxies within a cell share the same redshift (Masters et al. 2015). Noise and degeneracies in the colour-redshift space introduce a redshift dispersion within each cell, which impacts the accuracy of $\langle z \rangle$. Even with the diversity of SEDs generated in Horizon-AGN, and with noise introduced into the photometry, we find that direct calibration with a SOM sample is sufficient to reach the Euclid requirement.
We find that the COSMOS-like training sample is unable to reach the accuracy required for Euclid. This behaviour is somewhat expected, since the COSMOS-like sample contains selection effects that are not cleanly accessible to the direct calibration weighting procedure. The mean redshift is particularly biased in the bin 1.6 < z < 1.8, where there is a dearth of spectra; the Comparat et al. (2015) sample is limited to z < 1.6, while the zCOSMOS-Faint sample resides exclusively at z > 1.7, thereby leaving the range 1.6 < z < 1.7 almost entirely unrepresented. In this circumstance, our SOM-based weighting procedure is insufficient to correct for the heterogeneous selection, leading to bias. This is typical in cases where the training sample is missing certain galaxy populations that are present in the target sample (Hartley et al. 2020). We note, though, that it may be possible to remove some of this bias via careful quality control during the direct calibration process, as demonstrated in Wright et al. (2020). Whether such quality control would be sufficient to meet the Euclid requirements, however, is uncertain.
We note that, although we are utilising photometric noise realisations in our estimates of $\langle z \rangle$, the underlying mock catalogue remains the same. As a result, our estimates of $\mu_{\Delta z}$ and $\sigma_{\Delta z}$ are not impacted by sample variance. In reality, sample variance affects the performance of the direct calibration, particularly when assuming that the training sample is directly representative of the target distribution (as we do with our uniform training sample). For fields smaller than 2 deg², Bordoloi et al. (2010) showed that Poisson noise dominates over sample variance (in mean redshift estimation) when the training sample consists of fewer than 100 galaxies. Above this size, sample variance dominates the calibration uncertainty. This means that, in order to generate an unbiased estimate of $\langle z \rangle$ using a uniform sample of 1000 galaxies, a minimum of 10 fields of 2 deg² would need to be surveyed.
The SOM approach is less sensitive to sample variance, as over-densities (and under-densities) in the target sample population relative to the training sample are essentially removed in the weighting procedure (provided that the population is present in the training sample; Lima et al. 2008; Wright et al. 2020). In the cells corresponding to an over-represented target population, the relative importance of training sample redshifts will be similarly up-weighted, thereby removing any bias in the reconstructed N(z). Therefore, sample variance should have only a weak impact on the global derived N(z) in this method. Nonetheless, sample variance may still be problematic if, for example, under-densities result in entire populations being absent from the training sample.
Finally, it is worth emphasising that these results are obtained assuming perfect knowledge of the training set redshifts. We study the impact of failures in spectroscopic redshift estimation in Sect. 5.
Estimator based on redshift probabilities
In this section we present another approach to redshift distribution calibration, which uses the information contained in the galaxy redshift probability distribution function, available for each individual galaxy of the target sample. Photometric redshift estimation codes typically provide approximations to this distribution based solely on the available photometry of each source. We study the performance of methods utilising this information in the context of Euclid and test a method to debias the zPDF.
Formalism
Given the relationship between galaxy magnitudes and colours (denoted o) and redshift z, one can utilise the conditional probability p(z|o) to estimate the true redshift distribution N(z), using an estimator such as that of Sheth (2007) and Sheth & Rossi (2010):
$$N(z) = \int p(z|o)\, N(o)\, \mathrm{d}o = \sum_{i=1}^{N_t} p_i(z|o), \tag{4}$$
where N(o) is the joint n-dimensional distribution of colours and magnitudes. As made explicit in the above equation, the N(z) estimator reduces simply to the sum of the individual (per-galaxy) conditional redshift probability distributions, $p_i(z|o)$. A shear weight associated with each galaxy can be introduced in this equation (e.g. Wright et al. 2020). It is worth noting that this summation over conditional probabilities is ideologically similar to the summation of SOM-cell redshift distributions presented previously; in both cases, one effectively builds an estimate of the probability p(z|o), and uses this to estimate $\langle z \rangle$. Indeed, it is clear that the SOM-based estimate of $\langle z \rangle$ presented in Eq. (2) in fact follows directly from Eq. (4). Generally, photometric redshift codes provide as output a normalised likelihood function that gives the probability of the observed photometry given the true redshift, $\mathcal{L}(o|z)$, or sometimes the posterior probability distribution P(z|o) (e.g. Benítez 2000; Bolzonella et al. 2000; Arnouts et al. 2002; Cunha et al. 2009), the two being related by
$$P(z|o) \propto \mathcal{L}(o|z)\, \Pr(z), \tag{5}$$
where Pr(z) is the prior probability. Photometric redshift methods that invoke template fitting, such as the LePhare photo-z estimation code, generally explore the likelihood of the observed photometry given a range of theoretical templates T and true redshifts, $\mathcal{L}(o|T, z)$. The full likelihood, $\mathcal{L}(o|z)$, is then obtained by marginalising over the template set:
$$\mathcal{L}(o|z) = \sum_{T} \mathcal{L}(o|T, z). \tag{6}$$
In the full Bayesian framework, however, we are instead interested in the posterior probability, rather than the likelihood. In the formulation of this posterior, we first make explicit the dependence between galaxy colours c and magnitude in one (reference) band $m_0$: $o = \{c, m_0\}$. Following Benítez (2000) we can then define the posterior probability distribution function:
$$P(z|c, m_0) \propto \sum_{T} \mathcal{L}(c|T, z)\, \Pr(z|T, m_0)\, \Pr(T|m_0), \tag{7}$$
where $\Pr(z|T, m_0)$ is the prior conditional probability of redshift given a particular galaxy template and reference magnitude, and $\Pr(T|m_0)$ is the prior conditional probability of each template at a given reference magnitude. Under the approximation that the redshift distribution does not depend on the template, and that the template distribution is independent of the magnitude (i.e. the luminosity function does not depend on the SED type), namely
$$\Pr(z|T, m_0) = \Pr(z|m_0), \qquad \Pr(T|m_0) = \Pr(T), \tag{8}$$
one obtains
$$P(z|c, m_0) \propto \Pr(z|m_0) \sum_{T} \mathcal{L}(c|T, z). \tag{9}$$
Adding the template dependency in the prior would improve our results, but is impractical with the iterative method presented in Sect. 4, given the size of our sample.
The posterior probability P(z|o) is a photometric estimate of the true conditional redshift probability p(z|o) in Eq. (4), and thus we are able to estimate the target sample N(z) via stacking of the individual galaxy posterior probability distributions:
$$N(z) = \sum_{i=1}^{N_t} P_i(z|o), \tag{10}$$
and therefore:
$$\langle z \rangle = \frac{\int z \sum_i P_i(z|o)\, \mathrm{d}z}{\int \sum_i P_i(z|o)\, \mathrm{d}z}. \tag{11}$$
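A sketch of Eqs. (10)-(11), with posteriors as a hypothetical (N_gal, N_z) array of per-galaxy P(z|o) curves sampled on a common redshift grid:

```python
import numpy as np

def stacked_mean_redshift(z_grid, posteriors):
    """Stack per-galaxy posteriors into N(z) (Eq. 10) and return the
    mean redshift of the stacked distribution (Eq. 11)."""
    nz = posteriors.sum(axis=0)
    return np.trapz(z_grid * nz, z_grid) / np.trapz(nz, z_grid)
```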
Initial results
In this analysis we use the LePhare code, which outputs $\mathcal{L}(o|z)$ for each galaxy as defined in Eq. (6). The redshift distribution (and thereafter its mean) is obtained by summing galaxy posterior probabilities, which are derived as in Eq. (9). This raises, however, an immediate concern: in order to estimate the N(z) using the per-galaxy likelihoods, we require a prior distribution of magnitude-dependent redshift probabilities, $\Pr(z|m_0)$, which naturally requires knowledge of the magnitude-dependent redshift distribution.
We test the sensitivity of our method to this prior choice by considering priors of two types: a (formally improper) 'flat prior' with $\Pr(z|m_0) = 1$; and a 'photo-z prior' that is constructed by normalising the redshift distribution, estimated per magnitude bin, as obtained by summation over the likelihoods (following Brodwin et al. 2006). Formally this photo-z prior is defined as:
$$\Pr(z|m_0) \propto \sum_{i=1}^{N_t} \mathcal{L}_i(z)\, \Theta(m_{0,i}|m_0), \tag{12}$$
where $\Theta(m_{0,i}|m_0)$ is unity if $m_{0,i}$ is inside the magnitude bin centred on $m_0$ and zero otherwise, and $N_t$ is the number of galaxies in the tomographic bin. We estimate $\langle z \rangle$ in the previously defined tomographic bins using Eq. (11). In the upper-left panel of Fig. 4, we show the estimated (and true) N(z) for one tomographic bin with $1.2 < z_p < 1.4$, estimated using DES/Euclid photometry. We annotate this panel with the estimated $\Delta \langle z \rangle$ obtained when utilising our two different priors. It is clear that the choice of prior, in this circumstance, can have a significant impact on the recovered redshift distribution. We also find an offset in the estimated redshift distributions with respect to the truth, as confirmed by the associated mean redshift biases being considerable: $|\Delta \langle z \rangle| > 0.012$, or roughly six times larger than the Euclid accuracy requirement.
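Eq. (12) can be evaluated per magnitude bin as in this sketch; likelihoods is a hypothetical (N_gal, N_z) array of the $\mathcal{L}_i(z)$ curves, m0 holds the reference-band magnitudes, and the bins are assumed non-empty:

```python
import numpy as np

def photoz_prior(z_grid, likelihoods, m0, m0_edges):
    """Eq. (12): in each magnitude bin, the prior is the normalised sum
    of the likelihoods of the galaxies falling in that bin."""
    priors = []
    for lo, hi in zip(m0_edges[:-1], m0_edges[1:]):
        in_bin = (m0 >= lo) & (m0 < hi)      # Theta(m_0,i | m_0)
        pr = likelihoods[in_bin].sum(axis=0)
        priors.append(pr / np.trapz(pr, z_grid))
    return np.asarray(priors)
```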
The resulting biases estimated for this method in all tomographic bins, averaged over all noise realisations, are presented in the left-most panels of Fig. 5 (for both the DES/Euclid and LSST/Euclid configurations). Overall, we find that this approach produces mean biases of $|\mu_{\Delta z}| > 0.02\,(1+z)$ and $|\mu_{\Delta z}| > 0.01\,(1+z)$, roughly ten and five times larger than the Euclid accuracy requirement, for the DES/Euclid and LSST/Euclid cases respectively. Such bias is created by the mismatch between the simple galaxy templates included in LePhare (in a broad sense, including dust attenuation and IGM absorption) and the complexity and diversity of the galaxy spectra generated in the hydrodynamical simulation. Such biases are in agreement with the usual values observed in the literature with broad-band data (e.g. Hildebrandt et al. 2012).
We therefore conclude that use of such a redshift calibration method is not feasible for Euclid, even under optimistic photometric circumstances.
Redshift probability debiasing
In the previous section we demonstrated that the estimation of galaxy redshift distributions via summation of individual galaxy posteriors P(z), estimated with a standard template-fitting code, is too inaccurate for the requirements of the Euclid survey. The cause of this inaccuracy can be traced to a number of origins: colour-redshift degeneracies, template set non-representativeness, redshift prior inadequacy, and more. However, it is possible to alleviate some of this bias, statistically, by incorporating additional information from a spectroscopic training sample. In particular, Bordoloi et al. (2010) proposed a method to debias P(z) distributions, using the Probability Integral Transform (PIT, Dawid 1984). The PIT of a distribution is defined as the value of the cumulative distribution function evaluated at the ground truth. In the case of redshift calibration, the PIT per galaxy is therefore the value of the cumulative P(z) distribution evaluated at the source spectroscopic redshift $z_s$:
$$\mathrm{PIT} = \int_0^{z_s} P(z)\, \mathrm{d}z. \tag{13}$$
If all the individual galaxy redshift probability distributions are accurate, the PIT values for all galaxies should be uniformly distributed between 0 and 1. Therefore, using a spectroscopic training sample, any deviation from uniformity in the PIT distribution can be interpreted as an indication of bias in the individual estimates of P(z) per galaxy. We define $N_P$ as the PIT distribution for all the galaxies within the training spectroscopic sample, in a given tomographic bin. Bordoloi et al. (2010) demonstrate that the individual P(z) can be debiased using $N_P$ as:
$$P^{\rm deb}(z) = \frac{P(z)\, N_P\!\left(\int_0^{z} P(z')\, \mathrm{d}z'\right)}{\int P(z'')\, N_P\!\left(\int_0^{z''} P(z')\, \mathrm{d}z'\right) \mathrm{d}z''}, \tag{14}$$
where $P^{\rm deb}(z)$ is the debiased posterior probability, and the denominator ensures correct normalisation. This correction is performed per tomographic bin. This method assumes that the correction derived from the training sample can be applied to all galaxies of the target sample. As with the direct calibration method, such an assumption is valid only if the training sample is representative of the target sample, i.e. in the case of a uniform training sample, but not in the case of the COSMOS-like and SOM training samples. In these latter cases, we weight each galaxy of the training sample in a manner equivalent to the direct calibration method (see Sect. 3), in order to ensure that the PIT distribution of the training sample matches that of the target sample (which is of course unknown). As for direct calibration, a completely missing population (in redshift or spectral type) could impact the results in an unknown manner, but such a case should not occur for a uniform or SOM training sample.
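A numerical sketch of Eqs. (13)-(14) on a discretised P(z); the PIT histogram $N_P$ is assumed to have been built from the (weighted) training sample:

```python
import numpy as np

def pit_value(z_grid, pz, z_spec):
    """Eq. (13): cumulative P(z) evaluated at the spectroscopic redshift."""
    cdf = np.cumsum(pz)
    cdf /= cdf[-1]
    return np.interp(z_spec, z_grid, cdf)

def debias_pz(z_grid, pz, np_hist, pit_edges):
    """Eq. (14): re-weight P(z) by N_P evaluated at the CDF of P(z),
    then renormalise; np_hist is a density histogram of training PITs."""
    cdf = np.cumsum(pz)
    cdf /= cdf[-1]
    idx = np.clip(np.searchsorted(pit_edges, cdf) - 1, 0, len(np_hist) - 1)
    p_deb = pz * np_hist[idx]
    return p_deb / np.trapz(p_deb, z_grid)
```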
Until now we have considered two types of redshift prior (defined in Sect. 4.2): (1) the flat prior and (2) the photo-z prior. We have shown that the choice of prior can have a significant impact on the recovered $\langle z \rangle$ (Sect. 4.2). However, as already noted by Bordoloi et al. (2010), the PIT correction has the potential to account for the redshift prior implicitly. In particular, if one uses a flat redshift prior, the correction essentially modifies $\mathcal{L}(z)$ to match the true P(z) (assuming the various assumptions stated previously are satisfied). This is because the redshift prior information is already contained within the training spectroscopic sample. Nonetheless, rather than assuming a flat prior to measure the PIT distribution, one can also adopt the photo-z prior (as in Eq. 12). This approach has two advantages: (1) it allows us to start with a posterior probability that is intrinsically closer to the truth, and (2) it includes the magnitude dependence of the redshift distribution within the prior, which is of course not reflected in the case of the flat prior.
Therefore, we improve the debiasing procedure of Bordoloi et al. (2010) by including such a photo-z prior. We add an iterative process to further ensure the correction's fidelity and stability. In this process the PIT distribution is iteratively recomputed by updating the photo-z prior. At step n, the posterior is $P_n(z) \propto \mathcal{L}(z)\, \Pr_n(z|m_0)$, and we compute the PIT for each galaxy as:
$$\mathrm{PIT}_n = \int_0^{z_s} P_n(z)\, \mathrm{d}z, \tag{15}$$
where $\Pr_n(z|m_0)$ is the prior computed at step n. We can then derive the debiased posterior as:
$$P^{\rm deb}_n(z) \propto P_n(z)\, N^n_P\!\left(\int_0^{z} P_n(z')\, \mathrm{d}z'\right), \tag{16}$$
with $N^n_P$ the PIT distribution at step n. The prior at the next step is:
$$\Pr{}_{n+1}(z|m_0) \propto \sum_{i=1}^{N_T} P^{\rm deb}_{n,i}(z)\, \Theta(m_i|m_0), \tag{17}$$
with $m_i$ the magnitude of galaxy i. Note that at n = 0 we assume a flat prior; the step n = 0 of the iteration therefore corresponds to debiasing with a flat prior, as in Bordoloi et al. (2010). We also note that the prior is computed from the $N_T$ galaxies of the training sample in the debiasing procedure, while it is computed over all galaxies of the tomographic bin for the final posterior. As an illustration, Fig. 2 shows the debiased posterior distributions with black lines, which can differ significantly from the original likelihood distributions. We find that this procedure converges quickly: typically, the mean redshifts measured at steps n + 1 and n differ by less than $10^{-3}$ after 2-3 iterations.
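A sketch of the iteration for a single magnitude bin, reusing the pit_value and debias_pz helpers from the previous sketch; like_train is a hypothetical (N_gal, N_z) array of training-sample likelihoods:

```python
import numpy as np

def iterative_debiasing(z_grid, like_train, z_spec, n_iter=3):
    """Eqs. (15)-(17) in one magnitude bin: posterior = likelihood x
    prior, recompute the PIT histogram, debias, and rebuild the prior
    from the debiased posteriors; n = 0 corresponds to the flat prior."""
    prior = np.ones_like(z_grid)                 # flat prior at n = 0
    edges = np.linspace(0.0, 1.0, 21)            # PIT histogram bins
    for _ in range(n_iter):
        post = like_train * prior
        post /= np.trapz(post, z_grid)[:, None]  # normalise each P_n(z)
        pits = [pit_value(z_grid, p, zs) for p, zs in zip(post, z_spec)]
        np_hist, _ = np.histogram(pits, bins=edges, density=True)
        deb = np.array([debias_pz(z_grid, p, np_hist, edges) for p in post])
        prior = deb.sum(axis=0)
        prior /= np.trapz(prior, z_grid)         # prior for step n + 1
    return prior, deb
```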
As described in Appendix A, we also find that the debiasing procedure is considerably more accurate when the photo-z uncertainties are over-estimated rather than under-estimated. Such a condition can be enforced for all galaxies by artificially inflating the source photometric uncertainties by a constant factor in the input catalogue, prior to the measurement of photo-z. In our analysis, we inflate the photometric uncertainties by a factor of two before measuring the photo-z used in our debiasing technique.
Final results
We illustrate the impact of the P(z) debiasing on the recovered redshift distribution in the lower panels of Fig. 4. This figure presents the case of the redshift bin $0.8 < z_p < 1$ in the DES/Euclid configuration. The N(z) and PIT distributions, as computed with the initial posterior distribution, are shown in the upper panels (for both of our assumed priors). The distributions after debiasing are shown in the bottom panels. We can see the clear improvement provided by the debiasing procedure in this example, whereby the redshift distribution bias $\Delta \langle z \rangle$ (annotated) is reduced by a factor of ten. We also observe a clear flattening of the target sample PIT distribution.
We present the results of debiasing on the mean redshift estimation for all tomographic bins in Fig. 5. The three right-most panels show the mean redshift biases recovered by our debiasing method, averaged over the 18 photometric noise realisations, for our three training samples. The accuracy of the mean redshift recovery is systematically improved compared to the case without P(z) debiasing (shown in the left column). In the DES/Euclid configuration, for instance (shown in the upper row), the improvement is better than a factor of ten at z > 1.
In the LSST/Euclid configuration (shown in the bottom row), we find that the results do not depend strongly on the training set used: the accuracy of $\langle z \rangle$ is similar for the three training samples, showing that stringent control of the representativeness of the training sample is not necessary in this case. In the DES/Euclid case, however, the SOM training sample clearly outperforms the other training samples, especially at low redshifts. Finally, we note that the iterative procedure using the photo-z prior improves the results when using the SOM training sample in the DES/Euclid configuration.
Overall, the Euclid requirement on redshift calibration accuracy is not reached by our debiasing calibration method in the DES/Euclid configuration. The values of $\mu_{\Delta z}$ at z < 1 reach five times the Euclid requirement, represented by the yellow bands in Fig. 5. At best, an accuracy of $|\mu_{\Delta z}| \le 0.004\,(1+z)$ is reached for the SOM training sample with the photo-z prior. Conversely, the Euclid requirement is largely satisfied in the LSST/Euclid configuration. In this case, biases of $|\mu_{\Delta z}| \le 0.002\,(1+z)$ are observed in all but the two most extreme tomographic bins: 0.2 < z < 0.4 and 2 < z < 2.2. We therefore conclude that, for this approach, deep imaging data are crucial to reach the required accuracy on mean redshift estimates for Euclid.
Discussion on key model assumptions
In this section, we discuss how some important parameters or assumptions impact our results. We start by discussing the impact of catastrophic redshift failures in the training sample, the impact of our pre-selection on photometric redshift uncertainty, and the influence of the size of the training sample on our conclusions. We also discuss some remaining limitations of our simulation in the last subsection.
Impact of catastrophic redshift failures in the training sample
For all results presented in this work so far, we have assumed that spectroscopic redshifts perfectly recover the true redshift of all training sample sources. However, given the stringent limit on the mean redshift accuracy in Euclid, deviations from this assumption may introduce significant biases. In particular, mean redshift estimates are extremely sensitive to redshifts far from the main mode of the distribution, and therefore catastrophic redshift failures in spectroscopy may present a particularly significant problem. For instance, if 0.5% of a galaxy population with a true redshift of z = 1 is erroneously assigned $z_s > 2$, then this population will exhibit a mean redshift bias of $|\mu_{\Delta z}| > 0.002$ under direct calibration. Studies of duplicated spectroscopic observations in deep surveys have shown that there exist, typically, a few percent of sources that are assigned both erroneous redshifts and high confidences (e.g. Le Fèvre et al. 2005). Such redshift measurement failures can be due to misidentification between emission lines, incorrect associations between spectra and sources in photometric catalogues, and/or incorrect associations between spectral features and galaxies (due, for example, to the blending of galaxy spectra along the line of sight; Masters et al. 2017; Urrutia et al. 2019). Of course, the fraction of redshift measurement failures is dependent on the observational strategy (e.g. spectral resolution) and the measurement technique (e.g. the number of reviewers per observed spectrum). Incorrect association of stars and galaxies can also create difficulties. Furthermore, the frequency of redshift measurement failures is expected to increase as a function of source apparent magnitude; a particular problem for the faint sources probed by Euclid imaging (VIS < 24.5).
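A quick check of this example, treating the contamination as a shift of a fraction f of galaxies from $z_{\rm true}$ to an assigned $z_s$:
$$\Delta \langle z \rangle = f\,(z_s - z_{\rm true}) = 0.005\,(z_s - 1) > 0.005 \quad \text{for } z_s > 2,$$
which already exceeds the tolerance of $0.002\,(1+z) = 0.004$ at z = 1.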
As we cannot know a priori the number (nor the location) of catastrophic redshift failures in a real spectroscopic training set, we instead estimate the sensitivity of our results to a range of catastrophic failure fractions and modes. We assume a SOM-based training sample and an LSST/Euclid photometric configuration, and distribute various fractions of spectroscopic failures throughout the training sample, simulating both random and systematic failures. Generally though, because these failures occur in the spectroscopic space, the recovered calibration biases are largely independent of the depth of the imaging survey and of the method used to build the training sample.
We start by testing the simplest possible mechanism for distributing the failed redshifts, assigning them uniformly within the interval 0 < z < 4. The resulting calibration biases for this mode of catastrophic redshift failure are presented in the left panels of Fig. 6. We find that, for the direct calibration approach (top panel), as little as 0.2% of failures in the training sample is enough to bias the mean redshift by $|\mu_{\Delta z}| > 0.002$ at low redshifts (by definition, flag 3 in the VVDS could include 3% of failures; Le Fèvre et al. 2005). We also find that the bias decreases with redshift and reaches zero at z = 2. This is a statistical effect; our assumed uniform distribution has a mean of z = 2, and so random catastrophic failures scattered about this point induce no net shift in a z ≈ 2 tomographic bin. For the same reason, biases would be significant in the two extreme tomographic bins if we were to assume a catastrophic failure distribution that followed the true N(z) (which peaks at z ≈ 1). In contrast, our debiased zPDF approach is found to be resilient to catastrophic failure fractions as high as 3.0% (bottom panel). In that case, only an unlikely failure fraction of 10% biases the mean redshift by $|\mu_{\Delta z}| \ge 0.002\,(1+z)$. We interpret this result as demonstrating the low sensitivity of the PIT distribution to redshift failures in the training sample: the PIT distribution provides a global statistical correction that is only weakly sensitive to individual galaxy redshifts.
In the previous test, we assigned the failed redshifts uniformly within the interval 0 < z < 4, which is not the expected distribution when redshift failures occur through misidentification of spectral emission lines (e.g. Le Fevre et al. 2015; Urrutia et al. 2019). This mode of failure leads to a highly non-uniform distribution of failed redshifts, due to the interplay between the location of spectral emission lines and the redshift distribution of training sample galaxies. If a line emitted at $\lambda_{\rm true}$ is misclassified as a different emission line at $\lambda_{\rm wrong}$, the assigned redshift is:
$$z_{\rm wrong} = \frac{\lambda_{\rm true}}{\lambda_{\rm wrong}}\,(1 + z_{\rm true}) - 1. \tag{18}$$
We study the impact of such line misidentifications on our estimates of $\langle z \rangle$ by introducing redshift failures in the simulation under the following assumptions: if $z_{\rm true} < 0.5$, the Hα emission line can be misclassified as [O II]; if $0.5 < z_{\rm true} < 1.4$, [O II] can be misclassified as Hα (for bright sources) or Lyα (for faint sources, using i = 23.5 as the limit); at $1.4 < z_{\rm true} < 2.0$, the redshift is assumed to be estimated from NIR spectra, and Hα can be misclassified as [O II]; and for sources at z > 2, Lyα can be misclassified as [O II].
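A sketch of Eq. (18) with the usual rest wavelengths of the lines named above (values in Angstroms; the bright/faint pairing logic of the simulation is not reproduced here):

```python
# Rest-frame wavelengths (Angstrom) of the emission lines discussed above.
LINES = {"OII": 3727.0, "Halpha": 6563.0, "Lyalpha": 1216.0}

def misidentified_redshift(z_true, lam_true, lam_wrong):
    """Eq. (18): redshift assigned when a line emitted at lam_true is
    mistaken for a line with rest wavelength lam_wrong."""
    return lam_true / lam_wrong * (1.0 + z_true) - 1.0

# Example: Halpha at z_true = 0.3 misread as [OII] lands at z ~ 1.29.
z_wrong = misidentified_redshift(0.3, LINES["Halpha"], LINES["OII"])
```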
The same fraction of misclassifications is assumed in all redshift intervals. The result of this experiment is shown in the right panels of Fig. 6, and demonstrates that this (more realistic) mode of catastrophic failures results in levels of bias equivalent to those seen in our simple (uniform) mode, albeit in different tomographic bins. This confirms that the sensitivity of the direct calibration to catastrophic redshift failures persists across both simplistic and complex failure modes. In this mode, a failure fraction of 0.2% is sufficient to bias direct calibration at $|\mu_{\Delta z}| \ge 0.002\,(1+z)$ in all tomographic bins with $z_p > 0.6$. This highlights that the calibration bias depends on the exact distribution of failed redshifts: in the case of line misidentification, incorrectly assigned redshifts consistently bias spectra to higher redshift, causing $\langle z \rangle$ to be affected more heavily over the full redshift range. We compare our result to the simulation of Wright et al. (2020), who investigate the impact of catastrophic spec-z failures on the estimate of $\langle z \rangle$ (for KiDS cosmic shear analyses) in the MICE2 simulation (Fosalba et al. 2015). They introduce 1.03% of failed redshifts following various distributions. In particular, they test the case of a uniform distribution within 0 < z < 1.4, where z = 1.4 is the limiting redshift of the MICE2 simulation. They report a bias in their direct calibration of $\Delta \langle z \rangle = 0.0029$ for their lowest redshift tomographic bin, and smaller biases for higher redshift tomographic bins. In our lowest redshift bin, we observe a bias of $\Delta \langle z \rangle = 0.01$ for a similar analysis. We argue that this is entirely consistent with the results of Wright et al. (2020), given that our considered redshift range is almost three times larger. Wright et al. (2020) conclude that spec-z failures are unlikely to influence cosmic shear analyses with the KiDS survey, which are limited to z < 1.2, but may be significant for Euclid-like analyses. In this way, our results also agree; it is clear that direct calibration for next-generation (so-called 'Stage-IV') cosmic shear surveys like Euclid will require careful consideration of the influence of catastrophic spectroscopic failures.
The training sample for Euclid is currently being built with the C3R2 survey (Masters et al. 2019; Guglielmo et al. 2020). This sample results from a combination of spectra from numerous instruments installed on 8-metre-class telescopes (e.g. VIMOS, FORS2, KMOS, DEIMOS, LRIS, MOSFIRE), including data from previous spectroscopic surveys (e.g. Lilly et al. 2007; Le Fèvre et al. 2015; Kashino et al. 2019). The most robust spec-z acquired on the Euclid Deep fields with the NISP instrument will also be included. Given the diversity of observations, a careful assessment of the sample purity is necessary to limit the fraction of failures to below 0.2%. Encouragingly, Masters et al. (2019) do not find any redshift failures within the 72 C3R2 spec-z with duplicated observations. Nonetheless, a larger sample of confirmed spectra is necessary to demonstrate that less than 0.2% of spectroscopic redshift measurements suffer from catastrophic failure. Finally, it is possible that improved reliability of both direct calibration methods and spectroscopic confidence could decrease the effects seen here: Wright et al. (2020), for example, advocate a means of cleaning cosmic shear photometric samples of sources with poorly constrained mean redshifts, demonstrating that this can considerably reduce calibration biases. The problem could also be alleviated by improving the reliability of the training sample, for example by only including spec-z with corroborative evidence from high-precision photo-z derived from deep photometry in the calibration fields.
Relaxing the photo-z σ_zp preselection
Estimates of the mean of the redshift distribution are also sensitive to the presence of secondary modes in the redshift distribution, and to our ability to reconstruct them. As described in Sect. 2.2, all results presented thus far have invoked a selection on the photometric redshift uncertainty of σ_zp < 0.3, which reduces the likelihood of secondary redshift distribution peaks in our analysis.
Here we discuss the impact of this adopted threshold both on the accuracy of our estimates of ⟨z⟩ and on the fraction of photometric sources that satisfy this selection (and so are retained for subsequent cosmic shear analysis). We apply several σ_zp thresholds in the range σ_zp ∈ [0.15, 0.6] to the full photo-z catalogue. For the training sample, we consider the SOM configuration with two galaxies per cell. The results are shown in Fig. 7 for the DES/Euclid (left) and LSST/Euclid (right) configurations. We find that the σ_zp threshold does not influence our conclusions regarding the direct calibration approach, which is largely insensitive to variations in this threshold. We note, however, that the scatter on the mean redshift (σ_∆z, shown by the error bars) increases well above the Euclid requirement (for the DES/Euclid configuration) when selecting photo-z with σ_zp < 0.15; this is primarily because such a selection drastically reduces the size of the training sample at z > 1.2, increasing the influence of Poisson noise. Therefore, given the insensitivity of the direct calibration to this threshold, it is advantageous to keep galaxies with broad redshift likelihoods in the target sample when using this method. Conversely, σ_zp has a decisive impact on the accuracy of mean redshift estimates inferred from the debiased zPDF approach. For instance, in the DES/Euclid configuration, |µ_∆z| is strongly degraded when applying a threshold of σ_zp < 0.6. Such a threshold could be relaxed in the LSST/Euclid configuration, however, primarily because that sample is already dominated by galaxies with a narrow zPDF.
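To make the trade-off concrete, a toy scan of σ_zp thresholds might look like the following sketch (entirely synthetic numbers, not the Horizon-AGN catalogues; the lognormal error model is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic toy catalogue: true redshifts and per-object photo-z
# uncertainties (the lognormal spread is an illustrative assumption)
z_true = rng.gamma(shape=4.0, scale=0.25, size=50_000)
sigma_zp = rng.lognormal(mean=-1.5, sigma=0.7, size=z_true.size)
z_p = z_true + rng.normal(0.0, sigma_zp)

for cut in (0.15, 0.3, 0.6):
    keep = sigma_zp < cut
    bias = z_p[keep].mean() - z_true[keep].mean()
    print(f"sigma_zp < {cut:4.2f}: kept {keep.mean():5.1%}, "
          f"<z_p> - <z_true> = {bias:+.4f}")
```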
Not considered above, however, is the importance of the target sample number density in cosmic shear analyses. Cosmological constraints from cosmic shear are approximately proportional to the square root of the size of the target galaxy sample, and to its mean redshift. Optimal lensing surveys therefore require a sufficiently high surface density of sources, preferentially at high redshift. In the Euclid project, 30 galaxies per arcmin² are required to reach the planned scientific objectives (Laureijs et al. 2011). As shown in the top panels of Fig. 7, however, applying a threshold on σ_zp naturally reduces the size of the target sample. For instance, we keep less than 10% of the galaxies at z > 1.4 when selecting a sample at σ_zp < 0.15 in the DES/Euclid configuration. In the LSST/Euclid case, a threshold of σ_zp < 0.3 only has a significant impact in the redshift bins above z > 1.6. A compromise is therefore needed between the number of sources retained in the target sample and the accuracy of the mean redshift that we estimate for these sources (when using the debiasing technique). We do not attempt to estimate this optimal selection with our simulations, as the luminosity function predicted by Horizon-AGN does not perfectly reproduce what is found in real data. Nonetheless, we note that the fraction of galaxies removed from the target sample is likely overestimated here: modern cosmic shear analyses typically introduce a weight associated with the accuracy of each source's shape measurement (the 'shear weight', which is not included in our simulations), which systematically decreases the contribution of low signal-to-noise galaxies to the analysis. As these fainter sources have intrinsically broader photo-z distributions, they will be the most heavily affected by our cuts on σ_zp.
Size of the training sample
The size of the training sample is naturally of greatest importance for the direct calibration approach (e.g. Newman 2008). The debiased zPDF approach, though, is also sensitive to statistical noise in the PIT distribution. As some ongoing spectroscopic surveys are designed to produce the training samples for Stage-IV weak-lensing experiments (e.g. Masters et al. 2017), we explore here the minimal size of these samples required for accurate redshift calibration. To do this, we modify the size of the training samples, limiting our analysis to the uniform and SOM training sample cases. We do not consider the COSMOS-like case, which is a patchwork of existing surveys and is not specifically designed for weak-lensing experiments. For the uniform training samples, we test the cases with 500, 1000, and 2000 galaxies per tomographic bin. For the SOM training samples, we test the cases corresponding to cells filled with 1, 2, or 3 galaxies.
Figure 8 shows the impact of the training sample size on ∆⟨z⟩. We find that the mean bias µ_∆z always remains within the Euclid requirements for the direct calibration approach. The scatter σ_∆z exceeds the Euclid requirements in a few tomographic bins, but only when considering the smallest training samples: the requirements are fully satisfied in all tomographic bins when assuming a training sample with more than 1000 galaxies per bin, or more than two galaxies per SOM cell. With the debiased zPDF approach, we find that increasing the size of the training sample is not sufficient to reduce the residual bias in the method; rather, deeper photometry is preferable, to improve the quality of the initial zPDF.
Catastrophic failures within the photo-z sample
Catastrophic failures in the photo-z sample are a concern for both methods described in this paper. We discuss here their impact, as well as the remaining limitations of our simulation.
As shown in Fig. 1, our simulated sample already includes a significant fraction of photo-z outliers, defined such that |z_p − z_s| > 0.15 (1 + z_s). We find 16.24% and 0.70% of outliers at VIS < 24.5 in the DES/Euclid and LSST/Euclid configurations, respectively. These fractions reduce to 1.82% and 0.04% when applying a selection on the photometric redshift uncertainty at σ_zp < 0.3. The largest fraction of these outliers is due to degeneracies in the colour–redshift space inherent to the use of low signal-to-noise photometry in several bands. However, less trivial catastrophic failures are also present in the simulation. In particular, the diversity of spectra generated by the complex physical processes in Horizon-AGN is not fully captured by the limited set of SED templates used in LePhare. This misrepresentation of galaxy SEDs creates a significant fraction of zPDF that are not compatible with the spec-z. An example of such a likelihood L(z) is shown in the bottom right panel of Fig. 2. Despite the presence of such failures, our results show that the Euclid requirement is fulfilled.
Several factors that could potentially create more catastrophic failures in the photo-z were ignored. Galaxies with extreme properties, such as sub-millimetre galaxies (SMGs) for instance, are known to be under-represented in simulations (e.g. Hayward et al. 2020). If galaxies with extreme dust attenuation fall within the cosmic shear selection at VIS < 24.5 and are selected in one tomographic bin, they could have an impact on our results. Nonetheless, nothing indicates that their zPDF cannot be established correctly from template fitting, or that such a population cannot be isolated in the multi-colour space with the SOM.
The presence of AGN could also be a problem. These sources can be isolated from their SED (Fotopoulou & Paltani 2018), identified as point-like sources in the case of quasi-stellar objects, or identified as X-ray sources with eROSITA (Merloni et al. 2012). We would, however, fail to isolate AGN with an extended morphology or those too faint to be detected in X-rays. Salvato et al. (2011) find, however, that standard galaxy SED libraries are sufficient to obtain accurate photo-z for such sources.
Residual contamination from stars could also bias ⟨z⟩. This population preferentially contaminates specific tomographic bins. In particular, stars may bias the mean redshift towards higher values, for both the direct calibration and debiased zPDF methods. A morphological selection based on VIS high-resolution images, combined with a colour selection including near-infrared photometry (e.g. Daddi et al. 2004), is efficient at isolating them (Fotopoulou & Paltani 2018). Even a minimal contamination could bias the mean redshift at a level similar to the one discussed in Sect. 5.1. Future simulations should therefore include stellar and AGN populations to better assess the level of contamination of the galaxy sample and its impact on the Euclid requirement.
Finally, Laigle et al. (2019) show that the fraction of outliers in Horizon-AGN remains underestimated in comparison to real datasets. One source of discrepancy is that the uncertainties induced by source extraction in images are not taken into account. Bordoloi et al. (2010) estimate that 10% of sources could potentially be blended, and that the likelihood of two blended galaxies with a magnitude difference lower than two is affected in an unpredictable way. In the last decade, numerous source extraction methods have been developed to perform photometry in crowded fields (De Santis et al. 2007; Laidler et al. 2007; Merlin et al. 2016; Lang et al. 2016), which could mitigate the impact of blending. A new set of simulations that includes images and such source extraction tools should therefore be considered in the future.
Application to real data
In this section, we apply the two approaches presented in Sect. 3 and Sect. 4 to real data. We use existing imaging surveys and their associated photo-z to define several tomographic bins. In each tomographic bin, we select a subsample of spec-z for which the mean redshift ⟨z⟩_true is known. We refer to this sample as the target sample, and the goal is to retrieve its mean redshift using only the photometric catalogue and an independent training sample. As previously, we measure ∆⟨z⟩, as defined in Eq. (3), in each tomographic bin.
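For reference, the per-bin statistic can be written as a one-line helper. This sketch assumes the definition implied by the Table 1 caption (difference of mean redshifts divided by 1 + ⟨z⟩_true):

```python
def delta_mean_z(mean_z_est, mean_z_true):
    """Normalised bias on the mean redshift in one tomographic bin:
    Delta<z> = (<z>_est - <z>_true) / (1 + <z>_true)."""
    return (mean_z_est - mean_z_true) / (1.0 + mean_z_true)

# Example: a 0.01 shift at <z>_true = 0.8 exceeds the 0.002 requirement
print(abs(delta_mean_z(0.81, 0.80)) < 0.002)   # False
```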
The COSMOS survey
We first investigate a favourable configuration, where the photometric survey is much deeper than the target sample. We aim to measure the mean redshift of the LEGA-C galaxies (van der Wel et al. 2016) selected in the tomographic bin at 0.7 < z_p < 0.9. We base our estimate of ⟨z⟩ on the COSMOS broad-band photometry and associated zPDF. The imaging sensitivity is three magnitudes deeper than that of the target sample. All the spec-z available in the COSMOS field (excluding the LEGA-C ones) are used for training. For the direct calibration approach, we obtain a bias of µ_∆z = 0.00032 and a scatter of σ_∆z = 0.00135: an accuracy well within the Euclid requirement. Secondly, we debias the zPDF using the PIT distribution, as discussed in Sect. 4.3. In that case, we obtain a mean redshift with a bias of µ_∆z = −0.00046 and a scatter of σ_∆z = 0.00073. In the case of a target sample associated with much deeper photometry, we thus reach the 0.002 (1 + z) accuracy requirement of Euclid using either the direct calibration or the debiased zPDF approach. The details of this measurement are given in Appendix B.
The KiDS+VIKING-450 survey
We now study a less favourable case, where the photometric survey has a depth similar to that of the target sample. We measure the mean redshift in five tomographic bins extracted from the KiDS+VIKING-450 imaging survey, which covers 341 deg² (Wright et al. 2019). The survey combines the ugri-band photometry from KiDS with the ZYJHKs bands from the VISTA Kilo-degree Infrared Galaxy (VIKING) survey. We adopt the method described in Sect. 2.2 to measure the photo-z. This leads to a photo-z quality comparable to that obtained by Wright et al. (2019), with σ_NMAD ∼ 0.045 at z < 0.9 and σ_NMAD ∼ 0.079 at z > 0.9. These photo-z are used to define five tomographic bins over the photometric redshift interval 0.1 < z < 1.2, as in Hildebrandt et al. (2020).
The KiDS+VIKING-450 survey encompasses the VVDS (Le Fèvre et al. 2005) and DEEP2 (Newman et al. 2013) fields, which contain spectroscopic redshifts. We aim to retrieve the mean redshift of the VVDS/DEEP2 galaxies. By selecting only galaxies with secure spectroscopic redshifts and counterparts in the KiDS+VIKING-450 catalogue, we build a target sample of 5794 galaxies. The DEEP2 sample is selected at R < 24.1 and z > 0.7, while the VVDS sample is purely magnitude-limited at i < 24. Our target sample covers the full redshift range of interest, 0.1 < z < 1.2, with magnitude limits similar to those used for the KiDS+VIKING-450 cosmic shear analysis (Hildebrandt et al. 2020).
The KiDS+VIKING-450 imaging survey also covers the COSMOS field, and we use the existing spec-z in the COSMOS field as the training sample. We note that the training and target samples are located in different fields; sample variance may therefore impact our results. The COSMOS training sample contains 13 817 galaxies from the KiDS+VIKING-450 survey, after applying a redshift confidence selection. This highly heterogeneous sample combines various spectroscopic surveys covering a large range of magnitudes and redshifts. We present our results in Table 1 for the five considered tomographic bins. The upper section of the table shows the fiducial case, where a σ_zp < 0.3 photo-z uncertainty selection is applied. The direct calibration produces a bias of |∆⟨z⟩| < 0.01 (1 + z), except in the lowest tomographic bin (0.1 < z < 0.3), where it reaches |∆⟨z⟩| = 0.02 (1 + z). Using the debiased zPDF method, we find |∆⟨z⟩| ≲ 0.01 (1 + z). In that case, the σ_zp < 0.3 selection removes between 20% and 44% of the full KiDS+VIKING-450 sample. If we relax the selection on the photo-z error, as presented in the lower section of Table 1, the bias ∆⟨z⟩ increases with the debiased zPDF approach, as found in the simulation. Nonetheless, ∆⟨z⟩ remains around 1%, an accuracy comparable to that obtained with direct calibration. We note that the zPDF debiasing technique performs significantly better with the photo-z prior than with the flat prior. Figure 9 illustrates the impact of the photo-z prior in recovering the shape of the redshift distribution, with a clear improvement below the main mode (bottom left panel). This result is confirmed in the other tomographic bins.
The depth of the KiDS imaging survey is similar to the one we simulate for DES (5σ sensitivity between 23.6 and 25.1), while the VIKING photometry is much shallower than the Euclid one (between 21.2 and 22.7 for VIKING). It is therefore encouraging to find a bias similar to that expected from the simulation in the DES/Euclid configuration, even with shallower imaging. We emphasise that our estimate is performed in the worst possible conditions: (1) our training sample does not cover the same colour/magnitude space as our target sample, as shown in Wright et al. (2020); (2) the photometric calibration could vary from field to field; and (3) some failures in the spec-z target sample could bias the mean redshift considered as the truth. Indeed, flag-3 redshifts in VVDS and DEEP2 are expected to be 97% and 95% correct, respectively, suggesting that a few percent of failures may be present in those samples, thereby introducing a bias in the true mean redshift ⟨z⟩_true of more than 0.01, according to Fig. 7. Such a fraction of failures remains difficult to verify; a comparison between duplicated observations in DEEP2 shows that the fraction of failures should be at most 1.6% (Newman et al. 2013).
Finally, we note that our various selections on σ_zp prevent us from directly comparing the recovered redshift distributions with those published in Wright et al. (2019) and Joudaki et al. (2020). Indeed, the σ_zp selection preferentially removes the faintest galaxies from the sample, thus shifting the intrinsic redshift distribution towards lower redshifts than expected for the full KiDS+VIKING-450 sample.
Summary and conclusion
This paper investigates the possibility of measuring the mean redshift ⟨z⟩ of a target sample of galaxies, in ten tomographic bins from z = 0.2 to z = 2.2, with an accuracy of |∆⟨z⟩| < 0.002 (1 + z), as stipulated by the Euclid mission requirements on cosmic shear analysis. Naturally, the conclusions presented here are equally applicable to all current and future surveys for which redshift calibration is a relevant challenge.
We apply two approaches foreseen for the Euclid mission: a direct calibration of ⟨z⟩ with a spectroscopic training sample, and the combination of individual zPDF to reconstruct the underlying redshift distribution. This paper analyses in detail several factors that could impact these approaches, and provides recommendations to make them successful.
We use the Horizon-AGN hydrodynamical simulation (Dubois et al. 2014), which produces a large diversity of modelled SEDs, and create 18 Euclid-like mock catalogues with different realisations of the photometric noise. We simulate two possible configurations, which should encompass the range of sensitivities of future imaging available for Euclid: (1) a shallow configuration combining DES and Euclid, and (2) a deep configuration combining LSST and Euclid. We measure the photo-z of the simulated galaxies using the template-fitting code LePhare, as in Laigle et al. (2019). This procedure produces photometric redshifts with complex zPDF, realistic biases, and catastrophic failures. We also assume different characteristics for the spectroscopic training samples associated with the mock catalogues, considering several selection functions and sample sizes, and including possible failures in the spec-z.
We first test the direct calibration approach, where the redshift distribution is directly estimated from existing spectroscopic redshifts in a training sample, applying the weights necessary to match this distribution to the target sample. We find that this approach is efficient at recovering the mean redshift with an accuracy of 0.002 (1 + z). The method is successful when based on a representative spectroscopic coverage (uniform or SOM), but the weighting scheme is not sufficient to correct for the heterogeneity of the COSMOS-like training sample at the level required by Euclid. The method is stable and robust, and does not require deep photometry such as that from LSST. However, we find that the recovered mean redshift is extremely sensitive to the presence of catastrophic failures in the spectroscopic redshift measurements. To recover unbiased estimates of ⟨z⟩, a careful quality assessment of the spectroscopic redshifts must guarantee a fraction of failures below 0.2%.
We then investigate the possibility of reconstructing the redshift distribution from the zPDF produced by a template-fitting photo-z code. As expected, we find that the quality of the initial zPDF is not sufficient to measure ⟨z⟩ with an accuracy better than |∆⟨z⟩| < 0.01. We test the method of Bordoloi et al. (2010) to debias the zPDF, and improve it by taking into account an appropriate prior, combined with an iterative correction of the zPDF. Our results are summarised below.
- The mean redshift accuracy inferred from the debiased zPDF is systematically improved when compared to the one inferred from the initial zPDF (by up to a factor of ten).
- This method is weakly sensitive to the fraction of spec-z failures.
- Imaging depth is the primary factor in determining the effectiveness of the debiasing technique. We reach the Euclid requirement when combining Euclid with LSST ground-based imaging.
- Insufficient imaging depth can be compensated for by selecting well-peaked zPDF, but this introduces considerable losses in the target sample number density. A balance should therefore be struck between the accuracy of ⟨z⟩ and the statistical signal of the cosmic shear analysis.
We test the two approaches on real datasets from COSMOS and KiDS+VIKING-450, and confirm that high signal-to-noise photometry is essential for an accurate estimate of ⟨z⟩ using the debiased zPDF approach. In the less favourable case, where the photometric sample and the spec-z target sample are of approximately equal depth, we reach an accuracy of around 0.01 (1 + z) on ⟨z⟩, as expected from the simulation and other works (e.g. Wright et al. 2020). We confirm the trends observed in the simulation and find that including the prior in the debiasing technique produces significantly better results.
We conclude that both methods could foreseeably provide independent and accurate inferences of tomographic bin mean redshifts for Euclid. We find that the current Euclid baseline, measuring ⟨z⟩ with a direct calibration approach and a SOM training sample, is robust with respect to the imaging survey depth. However, we recommend that training samples, such as C3R2 (Masters et al. 2019), ensure a purity level above 99.8%. We also find that the sum of the debiased zPDF could be sufficient to measure ⟨z⟩ at the Euclid requirement with currently ongoing spectroscopic surveys; however, we recommend this method only in areas covered by deep optical data. The two methods should be applied simultaneously within the current planning of the Euclid survey, providing complementary and independent estimates of ⟨z⟩.
Finally, our work still suffers from several limitations that need to be investigated. We neglect the catastrophic failures within the photo-z sample created by misclassified stars or AGN, or by galaxy blending. A residual contamination from these populations in the tomographic bins could affect both approaches to redshift calibration. Moreover, we do not consider sample variance effects, since the Horizon-AGN simulation covers only 1 deg². We would benefit from a larger simulated area to test the impact of sample variance. Nonetheless, our results present a largely positive outlook for the challenge of tomographic redshift calibration within Euclid.
Fig. 1. Comparison between the photometric redshifts (z_p) and spectroscopic redshifts (z_s) for the Horizon-AGN simulated galaxy sample. Each panel shows a two-dimensional histogram with logarithmic colour scaling, and is annotated with both the 1:1 equivalence line (red) and the |z_p − z_s| = 0.15 (1 + z_s) outlier thresholds (blue), for reference. Photometric redshifts are computed using both DES/Euclid (left) and LSST/Euclid (right) simulated photometry, assuming a Euclid-based magnitude-limited sample with VIS < 24.5.
Fig. 3. Bias on the mean redshift (see Eq. 3), averaged over the 18 photometric noise realisations. The mean redshifts are measured using the direct calibration approach. The tomographic bins are defined using the DES/Euclid and LSST/Euclid photo-z in the top and bottom panels, respectively. The yellow region represents the Euclid requirement of 0.002 (1 + z) on the mean redshift accuracy, and the blue dashed lines correspond to a bias of 0.005 (1 + z). The symbols represent the results obtained with different training samples: (a) selecting 1000 galaxies per tomographic bin uniformly (black circles); (b) selecting two galaxies per cell in the SOM (red squares); and (c) selecting a sample that mimics real spectroscopic survey compilations in the COSMOS field (green triangles).
Fig. 4. Examples of redshift distributions (left) and PIT distributions (right; see text for details) for a tomographic bin selected at 0.8 < z_p < 1 using DES/Euclid photo-z. In these examples, we assume a training sample extracted from a SOM, with two galaxies per cell. The top and bottom panels show the results before and after zPDF debiasing, respectively. Redshift distributions and PITs are shown for the true redshift distribution (blue), and for redshift distributions estimated using the zPDF method, incorporating photo-z (red) and uniform (black) priors.
Fig. 5. Bias on the mean redshift (see Eq. 3), estimated using the zPDF method and averaged over the 18 photometric noise realisations. The top and bottom panels correspond to the DES/Euclid and LSST/Euclid mock catalogues, respectively. Note the differing scales on the y-axes of the two panels. The left panels are obtained by summing the initial zPDF, without any attempt at debiasing. The other panels show the results of summing the zPDF after debiasing, assuming (from left to right) a uniform, SOM, and COSMOS-like training sample. The yellow region represents the Euclid requirement of |∆⟨z⟩| ≤ 0.002 (1 + z). The red circles and black triangles in each panel correspond to results estimated using photo-z and flat priors, respectively.
Fig. 6. Bias on the mean redshift averaged over the 18 photometric noise realisations in the LSST/Euclid case. We assume a SOM training sample, and the different symbols correspond to the various fractions of failures introduced in the spec-z training sample. The left and right panels correspond to different assumptions on how the catastrophic failures in the spec-z measurements are distributed: uniformly between 0 < z < 4 (left), or caused by misclassified emission lines (right). The upper and lower panels correspond to the direct calibration and debiasing methods, respectively.
Fig. 7. Bias on the mean redshift (see Eq. 3), averaged over the 18 photometric noise realisations, under different σ_zp selection thresholds. Top panels: fraction of the sample retained after applying different σ_zp thresholds. The middle and bottom panels show the bias on the mean redshift using the direct calibration and debiasing techniques, respectively. The left and right panels correspond to the DES/Euclid and LSST/Euclid configurations, respectively. We assume a SOM training sample with two galaxies per cell.
Fig. 8. Bias on the mean redshift (see Eq. 3), averaged over the 18 photometric noise realisations, showing the impact of the training sample size on the mean redshift accuracy in the LSST/Euclid case. Left and right panels correspond to uniform and SOM spectroscopic coverage, respectively. The top panels show the number of galaxies used for training in the three considered cases. The middle and bottom panels show the mean redshift accuracy using the direct calibration and the optimised zPDF, respectively.
Fig. 9. Same as Fig. 4, except for real data from the KiDS+VIKING-450 photometric survey and the VVDS-DEEP2 target sample. The sample is selected with a σ_zp < 0.6 threshold on the photo-z uncertainties.
Fig. A.1. Example of a PIT distribution (left) and redshift distribution (right) for a tomographic bin selected at 0.6 < z_p < 0.8. The top and bottom panels assume photo-z errors that are under-estimated (A = 0.7) and over-estimated (A = 1.5), respectively. The PIT distribution used to correct the zPDF is shown with the solid black line. The inset shows an example of the debiased zPDF for one (randomly selected) galaxy. The resulting PIT distribution, after debiasing, is shown in dashed red. The true N(z) is shown with the blue histogram in the right panels. The N(z) reconstructed using the initial and debiased zPDF are shown with black solid and red dashed lines, respectively.
Table 1. Differences between the mean redshifts reconstructed with the different methods (direct calibration and debiased zPDF) and ⟨z⟩_true, divided by (1 + ⟨z⟩_true). The KiDS+VIKING-450 survey is split into five tomographic bins. We use VVDS/DEEP2 as the target sample and COSMOS as the training one. In the top part of the table, photo-z are selected with σ_zp < 0.3, while the bottom parts show selections at σ_zp < 0.6 and σ_zp < 1.2. The fraction of galaxies kept after this selection is also shown ('% kept'). We apply the same definition as Wright et al. (2020) for the loss of photometric sources (their Eq. 1), including shear weights.
"Physics"
] |
Image Super-Resolution Reconstruction Method for Lung Cancer CT-Scanned Images Based on Neural Network
Introduction
At present, there is an increasing demand for high-resolution images in fields such as medicine, security, and entertainment [1]. Medicine in particular relies heavily on images: images are supplied as inputs, and the output is the identification of diseases based on those images [2]. For example, doctors attempt to identify diseases through high-resolution CT images, and diseases are identified through high-resolution surveillance images where similar-looking images can mislead [3]. It is expected that, through high-resolution video, healthcare practitioners can obtain more realistic and detailed visual information to diagnose diseases and ailments in detail [4,5]. The most direct way to increase resolution is to increase the hardware resolution of the digital image acquisition system [6]. However, high costs and technical bottlenecks often make this approach difficult, and it is not feasible for healthcare practitioners to rely on hardware upgrades to enhance the quality of patient images [7]. Obtaining high-resolution images under fixed hardware conditions is therefore the focus of super-resolution reconstruction technology [8], which provides an effective way to solve this problem. Spatially modulated full-polarization imaging technology extends the traditional methods of extracting information from an image [9]. A new polarization imaging system has evolved from time-sharing and simultaneous polarization imaging technology [10]. Under the new imaging system, a Savart polarizer modulates the four Stokes vectors of the detected target into the same interference image, so that a single image can serve as the input [11].
The complete polarization information can then be obtained from a single acquisition [12]. The system has gradually become a research hotspot due to its advantages of obtaining multiple Stokes vectors at the same time, its simple structure, and its easy implementation for dynamic targets [13]. A direct mapping function is established between the sensor pixels and the scene to obtain enhanced images with the new computational imaging system [14]. Feature extraction and image reconstruction are devised as a whole using adaptive, state-of-the-art methods. Newer systems can use high-performance computing power and global information processing capabilities to enhance image resolution and to extract the relevant information from images [15,16]. This plays a role in applications such as ultra-diffraction-limit imaging, high-resolution (HR) imaging with a large field of view, and clear imaging through scattering media [17]. Single-image super-resolution technology uses a single degraded low-resolution image to reconstruct a high-resolution image [18]. High-resolution images carry more detail, and this detail is of great significance in practical applications such as disease diagnosis [19]. Image super-resolution technology has long been a research hotspot in aerospace, remote sensing, target recognition, and other fields [20]. Image super-resolution (SR) technology has been widely applied, with high practical value in medical imaging, face recognition, high-definition audio and video, and other fields. Medical imaging in particular plays an important role in the medical field: high-resolution medical images can improve the work efficiency of doctors and reduce the rate of missed diagnoses. CT images are often used in guided radiotherapy, so obtaining high-resolution CT images is of great significance.
In [2], the authors proposed super-resolution technology for the first time. At present, super-resolution techniques fall into three categories: interpolation-based, reconstruction-based, and learning-based methods. Learning-based methods can introduce more high-frequency information than the other two categories and achieve better robustness to noise. In [3], optical remote sensing image super-resolution reconstruction technology is used to process one or more low-resolution optical remote sensing images with complementary information into high-resolution ones. Optical remote sensing images are the data support and application basis of remote sensing target detection, providing rich information for monitoring and for extracting hidden information. It is therefore of great significance to improve the resolution of remote sensing images. Optical remote sensing image reconstruction algorithms fall into two categories: human-centred methods and machine-centred methods. Human-centred methods often use PSNR (peak signal-to-noise ratio) and SSIM (structural similarity) as evaluation indicators and generate visually satisfactory pictures for recognition. Usually this type of method ignores follow-up computer vision tasks (such as target detection and classification) due to their particularity. The machine-centred approach has many options, and machine learning-based algorithms enhance image quality, for drawing useful information from the images, by training on huge image datasets.
The newly developed methods take the execution result of the computer vision task as an optimisation index and evaluate the reconstruction performance of the algorithm through the inputs and their corresponding outputs. The super-resolution reconstruction task is regarded as a preprocessing step in which the resolution of the images is improved before any feature extraction or classification algorithms are applied [21,22]. The design principle focuses on learning the resolution invariance of a specific task in order to process multiscale targets in a remote sensing image, so as to facilitate higher-level computer vision processing and analysis. Early models for SR tasks were mostly implemented with interpolation methods, the most representative being the model based on sparse representation [14-16]. These models assume that any natural image can be sparsely represented by elements of an image dictionary; the model can then reconstruct the high-resolution image from that dictionary. However, this type of method is computationally complex, requires substantial computing resources, and does not perform well in restoring image details. With the development of deep learning, deep neural networks have been introduced into the SR task. Neural network-based SR is implemented in a supervised learning manner. From the perspective of neural networks, it is necessary to establish a pixel-level mapping from low-resolution images to high-resolution images [17]. From a statistical point of view, this process can be considered as establishing a conditional probability p(y|x), where x is the input low-resolution image and y is the corresponding high-resolution image. Through training, the neural network learns the statistical characteristics of low-resolution images and restores high-resolution images accordingly, that is, it generalises from the training dataset to the test dataset [18-20]. Image super-resolution reconstruction based on deep neural networks can be roughly divided into two research directions. In order to solve the above problems and generate sharper images, this paper designs a stable and effective energy-based adversarial auxiliary loss on top of the commonly used VGG reconstruction loss. Super-resolution (SR) image reconstruction is a technique used to recover a high-resolution image using the cumulative information provided by several low-resolution images. Super-resolution reconstruction of sequence remote sensing images handles multiple low-resolution images to provide a better-quality image irrespective of the underlying hardware. This technology works independently of hardware support: once the low-resolution images are enhanced by super-resolution, they can be used on any machine and will be classified accurately irrespective of its hardware configuration.
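To make the supervised mapping concrete, a minimal PyTorch sketch of such a pixel-level SR network is given below. This is an SRCNN-style stand-in, not the architecture proposed in this paper, and the channel widths and kernel sizes are illustrative:

```python
import torch
import torch.nn as nn

class SRCNNLike(nn.Module):
    """Three-layer CNN learning the pixel-level mapping from a bicubically
    upsampled low-resolution image to its high-resolution counterpart."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),
        )

    def forward(self, x):
        return self.body(x)

model = SRCNNLike()
lr_patch = torch.randn(8, 1, 64, 64)   # batch of upsampled CT patches
hr_patch = torch.randn(8, 1, 64, 64)
loss = nn.MSELoss()(model(lr_patch), hr_patch)   # supervised fit of p(y|x)
```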
The advantage of using an energy function as a discriminator, in place of a traditional discriminator, is that the process of encoding data into energy takes into account the volatility of the neural network itself; once the energy flow of the data is constructed, the generator can be used to track it. Another advantage of tracking the energy flow of the data is that when the energy approaches 0, the discriminator no longer generates additional gradients, so the energy-based adversarial generation network is relatively stable. In order to construct a relatively stable auxiliary energy loss, this article draws on the concept of the Boltzmann distribution in statistical mechanics and on the energy-based GAN model [19]; the Boltzmann distribution establishes the relationship between energy and probability. According to this distribution, the lower the energy, the greater the probability that the corresponding sample finds a matching image. When the loss function converges, the curve tends to be flat, and the corresponding probability distribution can be considered the distribution P_data of the real data. It is therefore assumed that samples obeying the data distribution have low energy: when the energy of a sample passed to the discriminator is low enough, the sample can be considered to obey the data distribution, and the generative adversarial network can be regarded as tracking the energy flow of the data with the model distribution.
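A minimal sketch of such an energy-based adversarial objective, in the style of the energy-based GAN (Zhao et al. 2016) cited as [19] (the margin value is an assumption), could read:

```python
import torch

def d_loss(e_real, e_fake, margin=1.0):
    """Discriminator objective: push the energy of real samples down and
    the energy of generated samples up to at least `margin`."""
    return e_real.mean() + torch.clamp(margin - e_fake, min=0.0).mean()

def g_loss(e_fake):
    """Generator objective: track the energy flow by minimising the
    energy assigned to generated samples."""
    return e_fake.mean()

# Once e_fake exceeds the margin (energy of real data near 0), the hinge
# term stops producing gradients, which is the stability property noted above.
```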
For spatially modulated computational imaging, the image degradation process includes not only the direct mapping between the sensor pixels and the scene found in traditional imaging (the mapping corresponding to the interference fringe intensity image on the CCD of the spatially modulated full-polarization computational imager), but also the use of a two-dimensional discrete Fourier transform to move the spatial-domain interference fringe information into the frequency domain, and the use of low-pass filtering to recover the target's Stokes vector information from the spatial modulation process [2]. At the same time, in a hyperspectral full-polarization imaging system, in addition to the polarization information of the detected target, it is also very important to obtain the hyperspectral information and a high-resolution visible panchromatic image of the target. These heterogeneous, redundant hyperspectral and visible-light images are low-resolution views of the same target scene, and the polarized-image SR method provides additional scene priors. Interpolation-based super-resolution methods have also been explored in the existing literature. These methods use the values of adjacent pixels in the image spatial domain to determine the values of the points to be interpolated; the most common are nearest-neighbour, bilinear, and bicubic interpolation. The existing literature proposes spatial nonlinear interpolation algorithms, wavelet-based algorithms, and bilinear interpolation-based methods [10-12]. Interpolation-based image super-resolution reconstruction is easy to implement, but, lacking sufficient prior knowledge and an image observation model, it produces reconstructed images with blurred edges and poor overall visual quality.
In recent years, transfer learning methods [15,16] have provided ideas and technical means for using scene priors to perform SR. A representative approach uses HR RGB image prior information to enhance hyperspectral image SR [17-19]. In actual imaging detection, hyperspectral imaging systems often sacrifice temporal and spatial resolution in order to achieve high spectral resolution, while visible-light (or multispectral) cameras integrate radiation over a wide wavelength range and can easily capture high spatial resolution in real time. Inspired by this, this article focuses on the spatial-modulation computational imaging degradation process for lung cancer images. The characteristics of the imaging system are exploited in preparing the model. Convolutional neural networks (CNNs) with a new architecture are utilised in the proposed framework, where a hybrid mechanism fuses the spatial modulation-based computational imaging method with scene feature migration.
In order to solve the problem of excessive smoothness and blurring of reconstructed image edges caused by the introduction of the self-similarity constraint, a two-layer reconstruction framework based on a smooth layer and a texture layer is proposed for the medical application of lung cancer. This method reconstructs the smooth layer with a reconstruction model constrained by the global number of nonzero gradients. The proposed sparse coding method is used to reconstruct high-resolution texture images. Finally, global and local optimisation models are used to further improve the quality of the reconstructed image.
The dual-drive adaptive remote sensing image method for target detection is based on the characteristics of optical remote sensing images, for which an adaptive multiscale remote sensing image super-resolution reconstruction network is designed. 'Adaptive feature' here denotes the flexibility of the proposed approach, which works well on all types of low-resolution images by employing super-resolution technology without regard to hardware or software details. Any image can be supplied as input, and the adaptive feature technology extracts the features that assist in enhancing its resolution and in classifying it more accurately. A selective kernel network and an adaptive gating unit are integrated to extract and fuse features and obtain a preliminary reconstruction. Through the proposed dual-drive module, the feature-prior drive loss and the task drive loss are transmitted to the super-resolution network. Because of the precision demanded by the remote sensing image target detection task, the super-resolution network can better serve target detection and improve detection performance for serious diseases such as lung cancer. The proposed work not only improves the subjective visual effect; robustness is also enhanced, with more accurate construction of edges. In contrast to the traditional time-sharing and simultaneous polarization imaging systems, Figure 1 shows a 2-channel hyperspectral full-polarization simultaneous imager. The system is mainly composed of beam expander optics (BEO), a half-wave plate, a front-surface aperture diaphragm (S), a Savart polariscope (SP), a liquid crystal tuning filter (P), a computing imaging system (CIS), and a CCD. The area-array detector has a resolution of 2048 × 2048 in the visible/near-infrared band with a pixel size of 12 μm, and a resolution of 640 × 512 in the short-wave infrared band with a pixel size of 20 μm.
Existing Frameworks
The imaging system adopts the principle of spatial modulation of Stokes vectors and modulates the four Stokes vectors (S0-S3) into the same image simultaneously. A single acquisition obtains modulation information containing the four Stokes vectors of the target, from which they can be parsed out. Hyperspectral operation is realised by rapidly switching bands with the liquid crystal tuning filter, enabling rapid measurement of the complete polarization state of the target, which reflects scene and target information from different angles.
Image Degradation Model
Let {g_1, g_2, …, g_n} be n frames of low-resolution images collected by existing hardware devices, and let f be the high-resolution image to be reconstructed, as shown in Figure 2. Consistent with the literature [11], the image degradation model is expressed as

g_k = D H_k T_k f + η_k,    (1)

where T_k is the geometric transformation, H_k is the point spread function, D is the downsampling operator, and η_k is the noise signal. In this paper, 4 × 4 reconstruction is considered, so the downsampling operator D performs 4 : 1 sampling.
In order to obtain a more accurate degradation model, it is necessary to study the correspondence between the high-resolution and low-resolution coordinate systems. To this end, the upper-left corner of the image is taken as the origin of the coordinates, with x increasing to the right and y increasing downwards; the positions of the pixels are indicated by the dots in Figure 2. The gray value at position (s, t) in the low-resolution grid, where (s, t) = D T_k(x + e_x, y + e_y), is determined not only by the high-resolution grid point (x + e_x, y + e_y) but also by its surrounding pixels, in a way that depends on the point spread function H_k. Considering that the acquisition process of the low-resolution images captures the same scene as a whole, the transformation T_k may be assumed to be a global transformation. In addition, assuming that the point spread function H_k is translation-invariant and writing f_k = T_k f, and since |e_x| and |e_y| will not exceed half a pixel unit, the model is usually approximated by rounding the subpixel displacements. Assuming further that the point spread function remains unchanged during image degradation and is approximated by a filter h, two image degradation models in the spatial domain can be obtained, Eqs. (6) and (7), in which '*' denotes convolution and (s, t) = D T_k(x + e_x, y + e_y) ≈ D T_k(x, y). Equation (5) retains subpixel information, while Equation (6) is approximated by rounding and ignores this information. In the process of super-resolution reconstruction, the literature [11-14] all use Equation (6) as the image degradation model, directly discarding the subpixel information, which inevitably affects the accuracy of the reconstruction model. This paper instead builds on the degradation model (5) to establish a super-resolution reconstruction model based on subpixel displacement and improve the accuracy of the reconstruction.
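A simple simulation of one observation under the degradation model of Eq. (1) can be sketched as follows (the Gaussian PSF, pure-translation T_k, and noise level are illustrative assumptions, not the paper's calibrated operators):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def degrade(f, dx, dy, sigma_psf=1.2, factor=4, noise_std=2.0, seed=0):
    """One observation g_k = D H_k T_k f + eta_k of Eq. (1):
    sub-pixel shift T_k, Gaussian PSF H_k, 4:1 downsampling D, noise eta_k."""
    rng = np.random.default_rng(seed)
    warped = shift(f, (dy, dx), order=3, mode="reflect")   # T_k
    blurred = gaussian_filter(warped, sigma=sigma_psf)     # H_k
    g = blurred[::factor, ::factor]                        # D (4:1 sampling)
    return g + rng.normal(0.0, noise_std, g.shape)         # + eta_k

f_hr = np.random.default_rng(1).uniform(0, 255, (256, 256))
g_lr = degrade(f_hr, dx=0.3, dy=-0.4)                      # one low-res frame
```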
Model Building Using the CNN-Based Hybrid Mechanism
Sparse Coding Unit.
Recent studies have shown that incorporating the geometric structure of the image into traditional sparse coding improves its coding ability [15]. A prior condition on image geometric structure is that natural images often contain repeated structural blocks. However, due to the potential instability of sparse coding methods, image blocks with similar geometric structures often receive different sparse coefficients, producing flaws in the reconstruction results. It is therefore necessary to use the nonlocal self-similarity of the image to stabilise the sparse coding. Rahman et al. [16] proposed a hypothesis based on nonlocal self-similarity: if, in a nonlocal neighbourhood, the image block x_i is the i-th of the k blocks most similar to the image block x_j, then in the same nonlocal neighbourhood the corresponding sparse coefficient s_i is also the i-th most similar to s_j. This nonlocal self-similar prior is defined with weights ω_ji, the self-similar weight of x_i relative to x_j, where h_j is a smoothing parameter and c_j is a normalisation parameter. Combining this prior, Eq. (8), with the traditional sparse coding model in Eq. (9) yields the nonlocal self-similar sparse coding model of Eq. (10). This sparse coding model operates in the sparse coefficient space, using the self-similarity of sparse coefficients to reduce the error of the sparse representation and to protect the geometric structure of the image, but it does not address the choice of dictionary training. A good training dictionary can reduce reconstruction defects and improve the quality of reconstructed images. The dictionaries obtained by training the sparse coding models of Eqs. (9) and (10) lack orthogonality and contain redundancy, which weakens the effectiveness and stability of the dictionary, reduces reconstruction efficiency and accuracy, and easily leads to inaccurate reconstruction of the geometric structure. It is therefore necessary to introduce a non-correlation constraint on the dictionary. This constraint is defined in Eq. (11), where I ∈ R^{k×k} is the identity matrix and D^T is the transpose of the dictionary D. When any two atoms in the dictionary are orthogonal, ε_2 = 0, and the non-correlation of the dictionary is highest. Introducing Eqs. (8) and (11) into the traditional sparse coding model gives the sparse coding model of Eq. (12). The solution of Eq. (12) is divided into two parts: fixing the dictionary D to solve for the sparse coefficients S, and fixing the sparse coefficients S to solve for the dictionary D. With the dictionary D fixed, Eq. (12) reduces to Eq. (13).
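For illustration, the nonlocal self-similarity weights described above (Gaussian weighting of the k most similar patches, normalised by c_j) can be computed as in the following sketch; the patch vectorisation and parameter values are assumptions:

```python
import numpy as np

def self_similarity_weights(patches, j, k=10, h=65.0):
    """Weights omega_ji for the k patches most similar to x_j:
    omega_ji ~ exp(-||x_j - x_i||^2 / h), normalised to sum to one
    (c_j plays the role of the normalisation constant)."""
    d2 = np.sum((patches - patches[j]) ** 2, axis=1)
    d2[j] = np.inf                       # exclude the patch itself
    nn_idx = np.argsort(d2)[:k]          # k most similar blocks
    w = np.exp(-d2[nn_idx] / h)
    return nn_idx, w / w.sum()

patches = np.random.default_rng(0).normal(size=(500, 49))  # 7x7 patches
idx, omega = self_similarity_weights(patches, j=0)
```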
A feature search algorithm is used to update s_j one by one. The dictionary D is then solved with the sparse coefficients S fixed, and Eq. (13) leads to Eq. (14).

3.3. Adaptive Feature Gating. In order to obtain a better reconstruction effect and reduce computation, it is necessary to add an adaptive gating unit between the selectable multiscale feature extraction layers to adapt to the complex nonlinear mapping involved in remote sensing image reconstruction and to reduce redundant information. Therefore, in the process of feature transfer, we adopt a simple adaptive gating mechanism to remove the redundant information transferred and increase the flexibility of the network. The adaptive feature gating unit is shown in Figure 5. The key to adaptive feature gating is to adaptively obtain the gating score of the input feature map M_{i−1}. Once the gating score of M_{i−1} is determined, it dictates how much feature information needs to be retained. To calculate the gating score, we first use a global average pooling operation to reduce the dimensionality of the feature map, then add two fully connected layers with BN as a simple nonlinear mapping function, with a ReLU function to capture the dependence between channels. Finally, after a Softmax operation, a vector G containing two elements is output; the element with the larger value is taken as the gating score of the feature map M_{i−1}.
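A possible PyTorch realisation of this gating unit, following the description above (global average pooling, two FC layers with BN and ReLU, Softmax over two elements), is sketched below; the hidden width and the use of the score as a multiplicative gate are our assumptions:

```python
import torch
import torch.nn as nn

class AdaptiveGate(nn.Module):
    """Gating unit per the description: global average pooling, two FC
    layers with BN and a ReLU in between, Softmax over two elements; the
    larger element is used as the gate score of the feature map."""
    def __init__(self, channels, hidden=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(
            nn.Linear(channels, hidden), nn.BatchNorm1d(hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 2), nn.BatchNorm1d(2),
        )

    def forward(self, m):                          # m: (B, C, H, W)
        g = self.pool(m).flatten(1)                # (B, C)
        score = torch.softmax(self.mlp(g), dim=1).max(dim=1).values
        return m * score.view(-1, 1, 1, 1)         # retained information

gate = AdaptiveGate(channels=64)
out = gate(torch.randn(4, 64, 32, 32))
```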
3.4. Dual-Drive Module. The quality of optical remote sensing image target detection results depends largely on a clear image with sufficient texture information from which to extract specific features. Therefore, a dual-drive module (DDM) is proposed, adding a feature-prior drive (FPD) and a task drive (TD) to reduce the feature gap between super-resolution images and real high-resolution images. We combine the target detection network and the super-resolution reconstruction network for joint training, making the super-resolution model more suitable for target detection and providing a solution for target detection-oriented remote sensing image super-resolution. The dual-drive module consists of two parts: a feature-prior-driven part and a task-driven part. To reduce the feature gap between the super-resolution image and the real high-resolution image, we first add the feature-prior drive, using a trained Mask R-CNN with ResNet50-C4 [15] as the feature extractor; since Mask R-CNN introduces a mask branch and has no excessive coupling with subsequent detectors, it helps improve the usability of the generated image in other detection networks. After feature alignment, the loss is passed back to the super-resolution reconstruction network to constrain the features of the reconstructed image to be as similar as possible to those of the real image.
It is then observed that the feature gap between the super-resolved remote sensing image and the real high-resolution image is reduced. The feature-prior drive, however, relies on empirical selection and lacks flexibility and adaptability. Therefore, in order to fully explore the interaction between the super-resolution network and target detection, we also add task driving, jointly training the target detection network and the adaptive multiscale super-resolution reconstruction network, and explicitly including the task-driving loss L_task in the training of the adaptive multiscale super-resolution network. A sketch of the combined objective is given below.
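Schematically, the joint objective combining reconstruction, feature-prior drive, and task drive might be assembled as follows (the loss weights and function signatures are hypothetical, not taken from the paper):

```python
import torch

def dual_drive_loss(sr_img, hr_img, feat_extractor, det_loss_fn,
                    w_fpd=0.1, w_td=0.1):
    """Joint objective: pixel reconstruction loss, feature-prior drive
    (alignment of frozen-detector features of SR and HR images), and the
    task-driving detection loss L_task evaluated on the SR image."""
    l_rec = torch.nn.functional.mse_loss(sr_img, hr_img)
    with torch.no_grad():
        f_hr = feat_extractor(hr_img)              # real-image features
    l_fpd = torch.nn.functional.mse_loss(feat_extractor(sr_img), f_hr)
    l_task = det_loss_fn(sr_img)                   # detection loss on SR
    return l_rec + w_fpd * l_fpd + w_td * l_task
```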
Experimental Outcome
4.1. Comparative Experiment. A comparative study is made to evaluate the results of the proposed approach against existing methods. We selected several mainstream, representative super-resolution reconstruction methods and magnified the images by a factor of 2 for the comparison experiments. The detection performance of these super-resolved images is first tested on the UCAS-AOD dataset, and in a second phase on the lung cancer dataset from the Zhongnan University Xiangya Medical College. The detection networks selected by the comparison methods are the same as in this method, all using the Faster R-CNN network with FPN. Table 1 shows the PSNR values of optical remote sensing images reconstructed by the different methods and the detection performance AP (average precision) of these images. APS, APM, and APL represent the detection performance on small, medium, and large-scale targets, respectively.
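For completeness, the PSNR quoted in Table 1 is the standard peak signal-to-noise ratio, which can be computed as in this sketch (assuming 8-bit images with peak value 255):

```python
import numpy as np

def psnr(reference, reconstruction, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    super-resolved reconstruction."""
    mse = np.mean((reference.astype(np.float64)
                   - reconstruction.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```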
As shown in Table 1, in the case of double downsampling the AP decreases from 47.6% to 22.14%. It can be seen that the performance of the super-resolution reconstruction network has a large impact on the detection results of the target detection network Faster R-CNN. Small- and medium-scale targets are the most affected: APS is reduced from 21.5% to 6.71%, and APM from 48.5% to 23.46%. According to our analysis, this is caused by the loss of multiscale information and the limitations of the downstream target detection task. We utilise an adaptive multiscale super-resolution reconstruction module and a dual-drive module to reconstruct the multiscale information of the image and significantly improve the performance of remote sensing image target detection. Our method achieves an AP of 44.89%, only 2.71% below the detection result on the original high-resolution images, which shows the effectiveness of our method. The detection of small-scale targets improves most noticeably, with APS increasing from 6.71% to 20.3%.
It can be seen from Tables 1 and 2 that both MSRN and AMFFN use multiscale methods to reconstruct super-resolution remote sensing images, but their multiscale networks are fixed and cannot guarantee extraction of the multiscale information of optical remote sensing images; the subsequent detection task therefore remains difficult, and both their reconstruction and detection results show serious shortcomings. The results are evaluated on the two data sets listed in the tables. Our method improves PSNR by 1.38 dB and 1.67 dB on average, respectively, and mAP by 10.67% and 10.3% on average, respectively, which shows the effectiveness of the adaptive multiscale super-resolution reconstruction module and the dual-drive module for improving image quality. VDSR uses the loss of the detection network to optimize the preceding super-resolution network D-DBPN to improve detection performance, but the deep VDSR structure may suffer from problems such as vanishing gradients. FDSR only uses the feature extractor to align the original image features with the reconstructed image features and then passes the alignment loss to the preceding D-DBPN network, which is quite limiting. The detection accuracy of these two methods is somewhat higher than that of conventional super-resolution methods; however, neither takes the characteristics of optical remote sensing images into account, so the reconstruction quality is ordinary. On the two data sets, their average detection accuracy (mAP) is 62.96% and 63.91%, but the PSNR is only 26.62 dB and 26.80 dB. Weighing the strengths and weaknesses of these two methods, we introduce the dual-drive module, combining the feature-prior drive and the task drive. The proposed method reaches a PSNR of 28.75 dB and 28.58 dB on the UCAS-AOD and lung cancer data sets, respectively, about 2 dB higher than VDSR and FDSR on average, while the detection accuracy mAP improves even more markedly, to 69.67% and 68.61%, showing that our method greatly improves both reconstruction quality and detection accuracy.
To further verify the superiority of our method, we also selected representative test results on the UCAS-AOD and lung cancer data sets for visual display. In the test results, red boxes indicate missed detections and yellow boxes indicate false detections. It can be seen from Figure 5 that the other methods suffer from false and missed detections to varying degrees, while our method produces good detection results. In summary, our method has the best overall performance: it reconstructs optical remote sensing images of diverse scales better and also greatly improves detection accuracy.
Convergence Curve Comparison.
To verify the effectiveness of the key components of the proposed model, this section conducts ablation experiments. We first discuss the impact on performance of the number of feedback loops, i.e., the number of recursive iterations T of the DCB module and the number of MRB modules N within the DCB, and then the impact of global feature fusion (GFF) and the multi-kernel fusion block (MKFB) in the structural design. Note that, to speed up training and ensure a fair comparison, all ablation experiments in this section use only the DIV2K data set for training and the Set5 data set for testing, with a magnification factor of 4.
(1) Analysis of the number of feedback iterations T and the number of MRB modules N. Figures 6 and 7 show the PSNR of the reconstruction results of the proposed algorithm under different T and N, respectively, with the result of the DRCN algorithm as the benchmark reference. It can be observed that the larger T and N are, the better the algorithm performs.
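The feedback recursion being ablated can be pictured with the short sketch below: one weight-shared block (standing in for a DCB built from N MRB modules) is unrolled T times, with each iteration's output fed back as the next iteration's hidden state. The wiring and module internals are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class FeedbackSR(nn.Module):
    def __init__(self, channels=64, n_mrb=4):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        # stand-in for N stacked MRB modules inside one DCB
        self.dcb = nn.Sequential(*[
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
            for _ in range(n_mrb)
        ])
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, x, t_steps=4):
        feat = self.head(x)
        hidden = torch.zeros_like(feat)
        outputs = []
        for _ in range(t_steps):      # T feedback iterations share weights
            hidden = self.dcb(feat + hidden)
            outputs.append(self.tail(hidden))
        return outputs                # one intermediate SR estimate per step
```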
Conclusion
To address the over-smoothing and blurring of reconstructed image edges caused by introducing self-similarity constraints in medical images, this paper proposes a two-layer reconstruction framework based on a smooth layer and a texture layer that provides sharper CT images of lung cancer for better diagnosis. First, in the smooth-layer reconstruction, the proposed reconstruction model constrained by the global number of nonzero gradients sharpens image edges and yields an ideal smooth-layer image. The generative model takes low-resolution images as input and uses a network trained on ImageNet as the feature extractor to extract the features of high-resolution images and build content-based images. An energy function compensates the adversarial loss in the neural network, which stabilizes the model and allows it to generate clear, sharp high-resolution images. The experimental part of this article verifies the effectiveness of the proposed algorithm: it achieves better mAP (mean average precision) and PSNR (peak signal-to-noise ratio) than existing schemes, and its convergence behavior is also favorable, as shown in the results section. The proposed algorithm aims to reduce noise and enhance image quality for better diagnosis, and it can be evaluated on more data sets to prove its viability and versatility.
2.1. Spatial Modulation Type Hyperspectral Full-Polarization System. Spatial-modulation full-polarization imaging is a newly developed type of polarization imaging system. In the degradation model of Figure 2, the image in Figure 2(a) is transformed into the result in Figure 2(b) by the geometric transformation Tk; Figures 2(a) and 2(b) are then, respectively, blurred (point spread function Hk) and downsampled (D), and noise is added to obtain Figures 2(c) and 2(d).
Figure 2(d) shows the positional relationship, in the low-resolution coordinate system, of the pixel marked by the dot, with coordinates (s, t); Figure 2(b) shows the positional relationship of the corresponding pixel in the high-resolution coordinate system, with coordinates (x0, y0). After downsampling, (x0, y0) becomes (s, t), and the gray value at (s, t) is determined by the blur of the point spread function Hk at (x0, y0). Figure 2(a) shows the positional relationship, in the high-resolution coordinate system, of the pixel marked by the dot in Figure 1, with coordinates (x + ex, y + ey); after the transformation f_k = Tk f, it becomes (x0, y0). Figure 2(c) is a partial enlargement of the boxed region in Figure 2(a), with errors ex and ey in the x and y directions, respectively. In addition, since the point spread function Hk does not change the positional relationships of the coordinates, the process from Figure 2(b) to Figure 2(d) does not reflect Hk.
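Put compactly, Figure 2 describes the classical observation model g_k = D Hk Tk f + n_k. A numpy sketch of this forward model is given below; the sub-pixel shift, PSF width, downsampling factor, and noise level are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def observe(f, dx=0.3, dy=-0.2, psf_sigma=1.0, factor=2, noise_std=1.0,
            rng=np.random.default_rng(0)):
    """Generate one low-resolution frame g_k from a high-resolution image f."""
    warped = shift(f, (dy, dx), order=1)          # T_k: geometric transform, (x+ex, y+ey) -> (x0, y0)
    blurred = gaussian_filter(warped, psf_sigma)  # H_k: point-spread-function blur at (x0, y0)
    low_res = blurred[::factor, ::factor]         # D: downsampling, (x0, y0) -> (s, t)
    return low_res + rng.normal(0.0, noise_std, low_res.shape)  # additive noise
```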
Figure 1: Basic composition of a full-polarization imager.
3.1. Sparse Coding Model. This paper proposes a dual-drive adaptive multiscale super-resolution reconstruction algorithm for target detection, which mainly uses an adaptive multiscale super-resolution method to enhance the quality of degraded lung cancer images. The specific structure is shown in Figure 3. The low-resolution remote sensing image ILR first passes through the adaptive multiscale module specially designed for remote sensing images to obtain the reconstructed super-resolved image ISR. This module contains the adaptive multiscale feature extraction block and integrates optional multiscale feature extraction and feature gating units, which flexibly fuse the multiscale features of remote sensing images and enhance target features. The super-resolved image ISR and the original high-resolution image IHR are then sent to the feature-prior-driven module for feature alignment, and the feature-driven loss is passed back into the super-resolution network to guide the generation of super-resolved images better suited to remote sensing target detection. Finally, considering the particularity of the downstream task, the super-resolved optical remote sensing image is sent to the task-driven module, i.e., the target detection module, and the task-drive loss is passed back to the super-resolution network to obtain the final detection result. The overall structure is shown in Figure 4 with a lung cancer CT image.
Figure 3: Correspondence between high- and low-resolution coordinate systems.
Figure 4: Overall CNN-based network structure for lung cancer CT scan images.
Figure 6: Convergence curve comparison for different feedback numbers T.
Figure 7: Convergence curve comparison for different numbers of MRB modules N.
Table 2: Experimental results for mAP (mean average precision) and PSNR (peak signal-to-noise ratio).
Table 1: Experimental results of our method compared with other methods for mAP and PSNR on the NWPU VHR-10 data set.
"Computer Science"
] |
Engineering Gold Nanostructures for Cancer Treatment: Spherical Nanoparticles, Nanorods, and Atomically Precise Nanoclusters
Cancer is a major global health issue and a leading cause of mortality. It has been documented that various conventional treatments can be enhanced by incorporating nanomaterials. Thanks to their rich optical properties, excellent biocompatibility, and tunable chemical reactivities, gold nanostructures have been gaining more and more research attention for cancer treatment in recent decades. In this review, we first summarize the recent progress in employing three typical gold nanostructures, namely spherical Au nanoparticles, Au nanorods, and atomically precise Au nanoclusters, for cancer diagnostics and therapeutics. Following that, the challenges and future perspectives of the field are discussed, and a brief conclusion is given at the end.
Introduction
Cancer is a worldwide health concern and one of the leading causes of mortality. In the past two decades, tremendous efforts have been dedicated to finding a competent treatment strategy against cancer, but only a few successes have been achieved to date. Therefore, there is a huge demand for developing novel strategies for the diagnosis and treatment of cancer. With the emergence and booming of nanoscience and nanotechnology, exceptional growth in research and applications of nanomaterials toward cancer treatment has been witnessed, bringing hope that the disadvantages of conventional cancer therapies can be circumvented.
Among all kinds of nanomaterials for cancer treatment, gold nanostructures have shown great promise as emerging agents, mainly thanks to their unique advantages, such as tunable optical properties, an easily functionalized surface, and excellent biocompatibility [1][2][3]. For instance, small gold nanoparticles are able to passively accumulate and remain at the tumor site through enhanced permeability and retention effects [4]. In addition, the surface of gold nanoparticles can be readily functionalized with active moieties such as peptides, proteins, monoclonal antibodies, and small drug molecules to avoid non-specific uptake and realize tumor-specific targeting [4]. Previous studies have shown that the structure of gold nanomaterials can play a critical role. In an early comparative study of Au nanorods, nanocages, and nanohexapods for photothermal treatment, Au nanohexapods showed superior performance in both photothermal destruction and contrast-enhanced diagnosis [5]. In another investigation, Ma et al. evaluated the radio-sensitization effect in X-ray radiotherapy of three types of Au nanostructures (gold nanoparticles, spherical …).
In addition, spherical Au nanoparticles have been attracting considerable interest as non-toxic drug-carrier systems for cancer treatment, thanks to their large surface-to-volume ratio; easy tuning of surface charge, hydrophilicity, and functionality; and outstanding stability [37][38][39]. Various biocompatible polymers (e.g., polyethylene glycol (PEG) [40], polyelectrolytes [37], DNA [25], liposomes [41], and other bio-macromolecules [42]) can be used to tune the tumor microenvironment [43] and, more importantly, enhance the stability, payload capacity, and cellular uptake. Muhammad et al. reported that PEG-capped AuNPs can efficiently deliver the anti-cancer therapeutics bleomycin and doxorubicin into HeLa cells while maintaining drug cytotoxicity [40]. In another study, Soliman's group successfully prepared cetyltrimethylammonium bromide (CTAB)-stabilized AuNPs that can efficiently entrap fluorouracil (5-FU), an antimetabolite drug used for treating colon and skin cancers [44]. The optimal 5-FU-loaded AuNP gel and cream reduced tumor volume by about 6.8- and 18.4-fold, respectively, compared to the control in A431-bearing mice [44].
Gold Nanorods
Another important type of gold nanostructure is gold nanorods, which possess some unique advantages for cancer treatment. For example, gold nanorods can absorb light in the near-infrared (NIR) region, enabling efficient irradiation, which can be utilized for selective photothermal therapy of some specific cancers [45]. Specifically, thanks to the tunable localized surface plasmon resonance (LSPR), gold nanorods can not only serve as probes but also become heat sources when irradiated by a laser with a photothermal effect [46]. The generated heat can provide photothermal therapy for cancer treatment and/or trigger anticancer drug release for chemotherapy when gold nanorods serve as a drug carrier [46]. In short, gold nanorods can be applied for cancer treatment in phototherapy, cellular imaging, drug transport, and combined therapy (e.g., phototherapy and chemotherapy) [47,48].
Employing the photothermal effects of gold nanorods to kill cancer cells is the most widely employed strategy for cancer treatment, as the nanorod can absorb the NIR light to penetrate into sick tissues without damaging the surrounding healthy tissues, and the wavelength of light can be fine-tuned through the aspect ratio and surface ligand [49][50][51]. In 2015, Betzer et al. reported dual-mode targeted plasmonic nanoprobes made of gold nanorods as a theranostic approach for detecting and curing skin-adjacent tumors for head and neck cancers [52]. Both in vivo and in vitro, the immune-targeted gold nanorods can target head and neck cancer cells with high specificity and facilitate the differentiation between cancerous and noncancerous tissues [52]. Shrivastava's group discovered that the polyelectrolyte coating on the Au nanorods can have an important effect on the photothermal efficiency and the photothermally triggered cancer cell damage [53]. For gold nanorods with polystyrene sulfonate (PSS-AuNRs) and PSS plus poly-diallyl dimethyl ammonium chloride (PDDAC-AuNRs), despite high photothermal conversion efficiency and cellular uptake of PDDAC-AuNRs, their intracellular clustering adversely affects the photothermal treatment of cancer cells [53]. Such surface coating influence was also observed by Wang et al., who documented biologically inspired polydopamine-stabilized Au nanorods for light-induced cancer therapy [54]. The self-polymerized polydopamine shell has a high adsorption capacity for therapeutic drugs and is very stable and biocompatible. Thanks to the tunable LSPR properties of gold nanorods in the near-infrared spectral region, impressive in vitro cancer cell killing efficiency and remarkable tumor growth suppression were achieved in vivo by the gold nanorod-polydopamine composite, superior to any single therapy modality [54].
Besides surface coating, imprinting other biologically active molecules such as saccharides can also improve the photothermal treatment efficiency. Liu's group prepared sialic acid (SA, a typical monosaccharide)-imprinted gold nanorods, which could selectively kill a tumor but not damage the circumjacent healthy tissue [55]. Besides achieving higher treatment efficiencies, researchers have also devoted great effort to unraveling the molecular mechanism of the Au-nanorod-aided plasmonic photothermal therapy. In 2017, Ali et al. conducted an investigation regarding the efficacy, toxicity, and mechanism of Au nanorod photothermal therapy of cancer in xenograft mice [56]. In this study, the size, surface modification, and concentration of AuNRs and the laser power to achieve the maximal apoptosis induction were first examined. The possible mechanism of AuNRs-plasmonic photothermal therapy (PPTT) action using quantitative proteomic analysis in tumor tissues of the mouse was also studied, where several death pathways were identified. Cytochrome c and p53-associated apoptosis mechanisms were recognized to contribute to the enhancement of PPTT with AuNRs@RF (rifampicin). Moreover, Pin1 and IL18-related signaling made a contribution to the disturbance of the NETosis pathway through PPTT enabled by AuNRs@RF [56].
In 2018, Joshi's group reported gold-nanorod-composed theranostic nanoparticles (TNPs) for interventional image-directed photothermal therapy of solid tumors [57]. In this study, the feasibility of site-selective hepatic image-directed delivery of TNPs in rats was examined. Figure 2A shows the dynamic thermal imaging at different time points during the PPT process. In the saline group, the tumor's temperature increased by about 7.5 °C within 1 min and then remained basically stable; in sharp contrast, the tumor temperature in the TNP group quickly rose by ~20 °C in 5 min, exceeding the hyperthermia range and damaging the local vasculature, which can destroy tumor cells effectively. The authors further conducted hematoxylin/eosin staining of tumor sections. As shown in Figure 2B, tumor slices in the saline group exhibited no obvious effect, while the TNP group presented a valid response under the same laser irradiation power with a remarkable photothermal therapy effect. Transmission electron microscopy (TEM) images verified that the TNPs stayed in the tissue with no structural change, as illustrated in Figure 2C, and the morphology of the gold nanorod core and Gd shell can be clearly observed in Figure 2D. Finally, Figure 2E validates the feasibility of intraoperative imaging; the imaging sensitivity can be further improved by reducing the exposure time to below 1 s. The above findings confirm that TNPs can be employed for photothermal ablation efficiently while bearing no risk of heat-induced breakdown [57]. Meanwhile, gold nanorods can integrate with other functional materials such as inorganic compounds to form a therapeutic package to further promote the efficiency of cancer treatment. Note that a variety of organic photosensitizer-conjugated Au complexes have been designed and prepared recently, but they have drawbacks such as photobleaching and inefficient energy transfer, and the introduction of inorganic compounds might resolve these issues. For instance, Lee et al. fabricated novel inorganic phototherapeutic complexes by conjugating Au nanorods with defective TiO2 nanoparticle clusters [58]. A higher efficacy of cell death was observed in phototherapeutic treatments of cancer cells, attributed to increased reactive oxygen species generation from the TiO2 nanoparticle clusters aided by localized surface plasmon resonance-triggered electron and heat generation from the Au nanorods [58]. In another study, Li et al. fabricated a novel nanocomposite of mesoporous silica gold nanorods, which also showed an improved circulation lifetime and homotypic targeting to HeLa cell tumors [59]. With this nanocomposite, tumor growth can be completely inhibited, indicating great potential for tumor treatment [59].
Besides the photothermal effects, gold nanorods (GNRs) can serve as effective carriers for controllable drug delivery. For instance, Mahmoud and co-workers discovered that cholesterol-coated gold nanorods can be an intriguing carrier for hydrophobic drugs, achieving efficient delivery and therapy against breast cancer cells in MCF-7 cell lines [60]. A quite recent study quantified the cellular uptake of GNRs in MCF-7 cells using inductively coupled plasma mass spectrometry and found that the MCF-7 cells internalize bare GNRs by a micropinocytosis mechanism, with the nanorods aggregating and associating with the cell membrane [61]. Pacardo et al. discovered that, when functionalized with cyclodextrin, gold nanorods can encapsulate doxorubicin (DOX), and the as-formed nanocomplex showed enhanced anti-cancer efficacy [62]. Zhang et al. reported DNA-conjugated gold nanorods as a multifunctional carrier, which can load and release DOX at targeted locations [63]. More importantly, such biotin-PEG-functionalized GNR nanomedicine was able to drastically increase cell uptake and reduce drug efflux in multidrug-resistant breast cancer cell lines [63].
More and more research attention has shifted to employing gold nanorods and/or gold-nanorod-based nanomedicines for combined therapies, especially chemotherapy plus photothermal therapy, as combined chemo-photothermal therapy shows better therapeutic efficiency than monotherapy. For instance, in 2014, Wang et al. reported combined chemotherapy and photothermal ablation using DOX-loaded DNA-wrapped gold nanorods for the treatment of metastatic breast cancer [64]. The inhibition of tumor growth was mainly due to the synergistic effect between DOX-induced apoptosis and laser-irradiation-caused necrosis of tumor cells [64]. In 2019, the Qian and Suo groups developed a facile means to construct polysaccharide-encapsulated Au nanorods for improved chemo-phototherapy of breast cancer [65]. The polysaccharide-decorated nanoplatform was efficiently internalized by MCF-7 breast cancer cell lines and exhibited greater cancer cell killing than single modalities [65]. Recently, Huang et al. prepared pH-sensitive gold nanorods conjugated with a polypeptide for chemo/photothermal therapy of cervical cancer [66]. The Au nanorod conjugates displayed exceptional biocompatibility, improved cancer cell uptake, and excellent cancer cell killing effects [66]. Another recent study, conducted by Zhu's group, further demonstrated that degradable silica-capped gold nanorods can be employed for triple-combined therapy of breast cancer [67]. Specifically, upon 808 nm laser irradiation of the nanomedicine, singlet oxygen was generated to achieve photodynamic/photothermal effects, while site-specific release of DOX realized chemotherapeutic outcomes [67].
Atomically Precise Gold Nanoclusters
Gold nanoclusters (AuNCs), usually with a size less than 3 nm, are intermediate bridges between relatively larger plasmonic Au nanoparticles and Au complexes. A gold nanocluster has tens to a few hundreds of gold atoms, possessing a core-shell structure, with Au atoms in the core and a surface ligand capped on the metal core. For biomedical applications, various biomolecules, such as DNA, proteins, polypeptides, dendrimers, and biopolymers, have been employed as the stabilizing ligand to prevent the aggregation of the metal core and hence improve the stability. Thanks to the ultrasmall-size-imparted quantum confinement effects, gold nanoclusters exhibit significantly different optical behaviors and chemical and catalytic properties compared with their nanoparticle counterparts [9,68,69]. Unlike AuNPs, AuNCs have no surface plasmon resonance absorption peak but have discrete absorption peaks ranging from the visible region to the near-infrared (NIR) region and drastically different fluorescent properties, depending on the size, surface ligand, charge state, and other factors. Tremendous efforts and progress have been made in employing AuNCs for cancer treatments, and the main ways AuNCs can make a contribution include probing, cell imaging, photothermal therapy, radiotherapy, and antimicrobial application [70][71][72][73].
By rational structural design and choice of surface ligand, AuNCs can be made fluorescent at a specific emission wavelength with a long lifetime, which is quite favorable for imaging or probing [73]. In 2017, Singh developed glucose-decorated Au nanoclusters as membrane-potential-independent fluorescence probes that can realize rapid identification of cancer cells expressing the Glut receptor [74]. In another study, Chen et al. fabricated novel iodinated gold nanoclusters stabilized by bovine serum albumin (BSA) as a dual-modality probe, which achieved malignant thyroid cancer visualization through fluorescence/computed tomography (CT) [75]. Wang's group discovered that accurate tumor imaging can be realized by gold nanoclusters conjugated with carborane derivatives, making accurate imaging-guided cancer treatment possible [76]. Such cancer imaging behaviors were also observed by Zhu et al., who prepared gold-nanocluster-grafted polymer nanoparticles for both imaging and cancer cell killing [77]. Phototherapy is usually considered a more powerful means to cure cancer. For example, Liu et al. found that dendrimer-encapsulated Au nanoclusters can "self-supply" O2 through catalase activity, which was utilized for photodynamic therapy to overcome cancer hypoxia [78]. In another report, Youn's group designed a facile top-down approach to synthesize albumin/polyallylamine-assisted AuNCs, which possessed a non-spherical, hyperbranched morphology with a high absorption capacity [79]. Such a structural advantage was favorable for surface-plasmon-based hyperthermia, and hence the as-fabricated gold nanoclusters were markedly cytotoxic to 4T1 breast cancer cells [79]. Recently, more and more research attention has been devoted to employing AuNCs in radiotherapy, in which ionizing radiation is utilized for killing cancer cells. Zhang et al. prepared histidine-capped gold nanoclusters that can be adopted as a radiosensitizer for improved cancer radiotherapy through synergistic internal and external regulation [80]. Interestingly, Yang's group found that radionuclide-labeled gold nanoclusters, particularly 99mTc@AuNCs and 177Lu@AuNCs, were able to boost effective anti-tumor immunity for augmented cancer radiotherapy [81]. Li's group employed bone marrow mesenchymal stem cells to mediate the fabrication of ultrasmall gold nanoclusters, which can enhance the radiotherapy efficacy of Egr1-hNIS through radiation sensitization [82]. In another report, Li and co-workers demonstrated a transformable AuNC-aggregate-based synergistic strategy, which can improve the tumor retention/penetration of nano-radiosensitizers and weaken the radio-resistance of cancer cells [83]. In a quite recent study, Burda's group and Basilion's group reported that, when conjugating AuNCs with protease-activatable monomethyl auristatin E, the specificity and efficacy of radiation and chemotherapy can be significantly improved [84]. Both in vitro and in vivo results showed selective tumor cell uptake, excellent anti-tumor activity, and a prolonged chemotherapeutic effect [84].
It is worth noting that gold nanoclusters with polydisperse size distribution are employed in the above cases. Such wide size distribution can hinder the deeper fundamental understanding of biomedical applications to some extent. However, gold nanoclusters of molecular purity can be chemically synthesized with atomic precision. Atomically precise gold nanoclusters have demonstrated great potential for cancer treatment, mainly due to their rich surface functionalities, outstanding optical features (especially the excellent luminescent properties, e.g., strong emission in the near-infrared region), and great biocompatibility [10,[85][86][87]. More importantly, thanks to the definite size, uniform composition, and crystallographically resolvable structure, atomically precise gold nanoclusters provide an ideal platform to unravel comprehensive mechanisms and establish structure-activity relationships in cancer treatment study [88][89][90].
In early studies, biocompatible compounds such as glutathione (GSH) were widely employed as functional stabilizing agents to prepare atomically precise gold nanocluster molecules [91]. For instance, Zhang et al. synthesized a series of ultrasmall molecular Au10–12(SG)10–12 nanoclusters, which enhanced tumor uptake and targeting specificity via enhanced permeability and retention effects owing to their small-size-imparted quantum confinement effect. At the same time, GSH ligands can further enhance tumor uptake by facilitating the escape of nanoclusters from the reticuloendothelial system while activating the transporter [92]. Such size-dependent tumor-targeting behaviors were subsequently observed by Zheng and co-workers with a series of few-atom AuNCs [93]. At 40 min after injection into mice, the smaller Au10–11 and Au18 NCs were retained in the kidneys more than the relatively larger Au25 NCs. Additionally, the ratios of bladder-to-kidney intensity followed the order Au25 NC > Au18 NC > Au10–11 NC. This suggests that the glomerulus is not a one-way "size-cutoff" slit but an atom-precise "bandpass" barrier that can drastically decrease the renal clearance of atom-precise Au nanoclusters in the subnanometer size regime [93]. In a follow-up study, the same group reported that enhanced photostability and tumor targeting can only be achieved by ICG-conjugated GSH-protected Au25 nanoclusters, not gold clusters of other sizes [94]. Such magic-size selection was also observed by the Liu group in a recent study, in which an Au25(Capt)18-based nanosystem acted as a GSH-activated mitochondria-targeting photosensitizer for high-efficiency treatment of malignant tumors [95].
In 2020, Yang et al. developed a theranostic nanomedicine of AuNCs-Pt based on atomically precise glutathione-protected Au 25 nanoclusters with dual functions of both near-infrared imaging and glutathione scavenging capabilities [96]. AuNCs-Pt has NIR-II (excited at 808 nm, emitted at 1050-1250 nm) imaging ability on a lethal high-grade serous ovarian cancer (HGSOC) model; hence, it can be a potential tool for monitoring Pt transportation [96]. At the same time, AuNCs-Pt exhausts the intracellular glutathione to minimize the Pt detoxification and effectively maximizes the platinum chemotherapeutic efficacy [96]. As shown in Figure 3A, the authors conducted NIR imaging using the LUC + OVCAR8 cells. Notably, LUC + OVCAR8 cells have a bioluminescent property that is able to present the growth degree and position of tumors through imaging. After injection for 12 h, most of the AuNCs-Pt was found in the peritoneal tumor, indicating high tumor accumulation ( Figure 3B). It is worth noting that the images in the NIR-I region portrayed the tissue anatomy. In stark contrast, the NIR-II signal was better defined and overlapped with the tumor luminescent signal. Ex vivo imaging was carried out on excised organs, which verified the colocalization of the bioluminescent and fluorescent signals of both NIR-II and NIR-I for the AuNCs-Pt tumor deposits ( Figure 3C,D). Thanks to the stronger penetration capability, NIR-II imaging more precisely disclosed the nanoparticle accumulation in organs, showing a more convincing imaging method. The results indicated that AuNCs-Pt reached about 5-fold Pt accumulation in tumor tissue compared with that using free CDDP ( Figure 3E). They also illustrated that AuNCs-Pt demonstrated a markedly stronger ability to inhibit tumor growth compared to the other groups ( Figure 3F). Furthermore, AuNCs-Pt treatment increased the survival of the animals to one and a half months and did not reduce the body weight ( Figure 3G,H).
The above case took full advantage of the near-infrared emission property of molecular Au25 nanoclusters, which can effectively maximize the chemotherapeutic efficacy of platinum. In fact, besides chemotherapy, radiotherapy is another important cancer therapeutic strategy, particularly for treating solid tumors at different stages [97]. In radiotherapy, high-energy X-ray radiation is used to shrink tumors and kill cancer cells, and a radiosensitizer is essential to improve the therapeutic efficacy [98,99]. In 2019, Jia et al. reported a molecular levonorgestrel-protected gold nanocluster as a radiosensitizer for enhanced cancer therapy [100]. Scheme 2a presents the synthetic route, in which the alkynyl ligand of levonorgestrel reacts with Me2AuSCl to generate a molecular Au8 nanocluster. Single-crystal X-ray diffraction (SCXRD) measurement showed that it has two parts, each containing a planar tetranuclear structure capped by four ligands. The major cancer therapeutic mechanism is shown in Scheme 2b. Specifically, X-ray irradiation triggers an increase in reactive oxygen species, leading to irreversible cell apoptosis; Au8 NCs make cancer cells more sensitive to radiation by improving the local treatment efficiency at a relatively safe, low radiation dose.
The authors then evaluated the radiosensitizing effect of the Au8 nanoclusters with an in vivo tumor assay [100], in which a comparison with a control sample and a phosphate-buffered saline (PBS)-treated group was also performed. Specifically, the EC1 cells were first divided into three groups: control, PBS-treated, and Au8NC-treated. EC1 cells (2 × 10⁶ cells per mouse) were then injected into the flanks of female BALB/c-nude specific-pathogen-free (SPF) mice, which were treated with different doses of X-ray irradiation. Finally, the body weights and tumor sizes were monitored every other day [100]. Figure 4a-e illustrate the tumor size and body weight of the mice after injection at different doses. An approximately 5-fold increase in tumor size was observed for the control groups, while in sharp contrast the tumor volume in the Au8NCs + 4 Gy group decreased significantly. Furthermore, the body weights of the mice under the various conditions remained nearly the same over 2 weeks, indicating no toxicity. Eosin and hematoxylin staining of the organs and tumors was further carried out. As shown in Figure 4f, compared with the control groups, ubiquitous damage can be identified in the tumor tissue of the Au8NCs + 4 Gy group, with basically no abnormalities in the organs. This study demonstrated the potent capability of atomically precise gold nanoclusters as sensitizers to enhance tumor-suppressing efficacy.
Following the above work, the same group also reported a levonorgestrel-protected gold nanocluster of Au10(C21H27O2)10; by conjugating a poly(allylamine hydrochloride) molecule, sustained drug release and effective antibody-mediated actin imaging can be realized [101]. We also note that the ligand levonorgestrel is a water-soluble drug, and this study can pave a path to selecting a suitable drug as a ligand to prepare molecular Au nanoclusters as effective sensitizers for improved radiotherapy and beyond.
Challenges and Perspectives
The recent advances regarding gold nanostructures, namely gold nanoparticles, gold nanorods, and atomically precise gold nanoclusters, have been reviewed above. One can notice that the gold nanostructures hold great potential in cancer diagnostics and therapeutics, mainly thanks to the merits such as excellent optical properties, facile control of size and/or morphology, robust stability, the capability to tune the surface chemistry for conjugation with functional biological molecules, and especially the great biocompatibility.
However, there are also some disadvantages of gold nanostructures employed for cancer treatment, and these need further in-depth investigation in this promising yet fast-evolving field:
1. The long-term toxicity issue. Gold nanostructures cannot be easily degraded and can accumulate in vivo during prolonged treatment, which may cause uncertain side effects [56,102]. Upon long-term accumulation, damage to organs such as the lung, spleen, kidney, and liver might be present.
2. The targeting specificity issue. Even though gold nanostructures can be designed to bind to specific cancer cells, there is still an urgent need for early-stage cancer diagnosis and therapeutics with a high level of targeting specificity [103]. Currently, widely employed treatment strategies such as photoimaging and photothermal therapy still have limitations such as non-specific binding and unnecessary activation of the normal host immune response.
3. The modulation of gold nanostructures to meet the complex biological environment can be challenging. Upon surface modification, the pharmacokinetic parameters of the gold nanostructures and the cellular response change correspondingly, while a fundamental, comprehensive understanding of the in vivo interactions between gold nanostructures and biological moieties is still lacking [104].
4. Some gold nanostructures (e.g., the gold nanocluster case mentioned in this review) can be used for both NIR-I and NIR-II imaging; however, when both regions are chosen, the excitation wavelength range is quite limited, and the imaging effectiveness and efficiency still have room to improve. Determining how to modify the composition, morphology, and structure of these gold nanomaterials to work better in both NIR-I and NIR-II regions remains extremely challenging.
The above challenges actually imply great opportunities for the future development of gold nanostructures for cancer treatment. In addition, from the perspective of this research field, some other important issues may also represent future research directions:
1. For photothermal treatment based on gold nanostructures, the efficacy depends strongly on the penetration depth of the NIR laser, and the heating intensity decreases as the penetration depth increases. The laser intensity and the plasmonic effects of the gold nanostructures are therefore critical and deserve special attention in future studies.
2. Even though gold nanostructures have been successfully documented in in vitro, in vivo, pre-clinical, and clinical studies, considering the cytotoxicity, the internalization of gold nanostructures in tissues, the complex biological environment, the long-term stability of the gold nanostructures' integrity, and the high cost of preparing specifically designed nanogold agents, the road to practical applications of gold nanostructures in cancer treatment is still long.
However, all the above issues or challenges might be resolved by the rapid development of nanotechnology, plus other factors such as the introduction of artificial intelligence in modern medicine. For instance, with the aid of artificial intelligence and machine learning technologies, some new specific drugs can be possibly designed and synthesized for preparing atomically precise gold nanoclusters to target specific cancer cells to achieve some "perfect" diagnostic and therapeutic effects.
Conclusions
In conclusion, gold nanostructures, especially spherical gold nanoparticles, gold nanorods, and atomically precise gold nanoclusters, are good candidates for cancer treatment. The optical properties (such as surface plasmon effects and fluorescent behaviors), ease of surface modification, low cytotoxicity, outstanding biocompatibility, excellent stability, and other merits make gold nanostructures very promising for cancer diagnostics and therapeutics. Despite some shortcomings and disadvantages, we envision that more research endeavors will push gold nanostructures toward real clinical applications of cancer treatment in the future.
Funding: This work was supported by the Chongqing Chemical Industry Vocational College.
Conflicts of Interest:
The authors declare no conflict of interest.
"Materials Science",
"Chemistry"
] |
Wheat Varietal Response to Tilletia controversa J. G. Kühn Using qRT-PCR and Laser Confocal Microscopy
Tilletia controversa J. G. Kühn is the causal organism of dwarf bunt in wheat. Understanding the interaction between wheat and T. controversa is of practical and scientific importance for disease control. In this study, the relative expression of the TaLHY, TaPR-4, and TaPR-5 genes was higher in the resistant (Yinong 18) and moderately resistant (Pin 9928) cultivars than in the susceptible (Dongxuan 3) cultivar at 72 h post inoculation (hpi) with T. controversa. Similarly, the expression of the defensin, TaPR-2, and TaPR-10 genes was higher in the resistant and moderately resistant cultivars after exogenous application of phytohormones, including methyl jasmonate, salicylic acid, and abscisic acid. Laser confocal microscopy was used to track the fungal hyphae in roots, leaves, and tapetum cells; the susceptible cultivar was infected much more severely by T. controversa than the moderately resistant and resistant cultivars. No fungal hyphae were found in the tapetum cells of the susceptible cultivar after methyl jasmonate, salicylic acid, and abscisic acid treatments. Moreover, after T. controversa infection, pollen germination was 80.06%, 58.73%, and 0.67% in the resistant, moderately resistant, and susceptible cultivars, respectively. These results suggest that the use of resistant cultivars is a good option against dwarf bunt disease.
Introduction
Wheat (Triticum aestivum L.) is one of the most important staple food crops throughout the world. Disease, a main biotic stress, negatively affects plant physiology, morphology, and productivity, and reduces the quality and quantity of wheat worldwide [1]. Dwarf bunt, caused by T. controversa, is an economically devastating disease of winter wheat [2]. The disease is seed- and soil-borne and occurs in cold areas of the world [3,4]. The pathogen has extreme potential to develop when persistent, deep snow falls before the soil becomes frozen, providing a long period of cool, stable, and humid conditions suitable for teliospore germination and infection. T. controversa is an important quarantine pathogen, and many countries impose strict restrictions on importing wheat grain infected by it [5,6]. The closely related species T. caries and T. foetida, which cause common bunt of wheat, are more widely distributed in the world; they can be differentiated from T. controversa by molecular techniques based on the internal transcribed spacer (ITS) and intergenic spacer (IGS) regions [7,8]. Plants and spikes of wheat infected by T. controversa are typically shorter than healthy ones [4,9]. In normally flowering plants, the male reproductive organ, the stamen, usually has four anther lobes; each lobe contains a microsporangium in which pollen grains complete their development. Male reproduction involves many steps, including the initiation of tapetum cells and the generation of germ-line meiotic cells; the tapetum and germ cells support pollen development [10]. The development of functional pollen, critical to maximize pollination, is important for plant reproduction [10][11][12]. Therefore, tapetum cells and pollen development are key players in anther development and pollination. Tapetum cells infected by Ustilago maydis increase in size compared with normal cells [12]. Fungal hyphae of T. controversa have been seen on the somatic and reproductive cells of wheat anthers [9], and millions of T. controversa teliospores can develop in the spikelets of wheat [13].
Plants have evolved different mechanisms to manage biotic stresses [14,15]. Phytohormones, including methyl jasmonate (MeJa), salicylic acid (SA), and abscisic acid (ABA), activate primary defense responses of plants against both abiotic and biotic stresses via antagonistic or synergistic actions [16]. Usually, MeJa and SA are associated with responses to necrotrophic and biotrophic pathogens, respectively [17], whereas ABA plays an important role in plant growth and development as well as in defense responses against both biotic and abiotic stresses [18,19]. Transcription factors (TFs) are important molecules in the regulatory networks underlying plant responses to biotic and abiotic pressures [20], and the MYB family of TFs is involved in defense against plant pathogens [21]. Similarly, pathogenesis-related (PR) proteins have been implicated in defense responses, potentially restricting pathogen development and spread [22][23][24]. Both TFs and PR proteins can directly affect pathogen integrity or release signal molecules through their enzymatic activity that act as elicitors to induce other plant defense pathways [25][26][27][28][29]. TaLHY is a 1R-type MYB transcription factor (R1/R2-MYB) that plays a critical role in ear heading and in disease resistance against the stripe rust pathogen of wheat [30]. Plants have both inducible and preformed mechanisms to resist attack by plant pathogens and respond with various defense tactics leading to the synthesis of different protective molecules, for example, the pathogenesis-related proteins PR-2 and PR-5 [31]. Previous studies identified the PR-1 to PR-13 protein families in plants upon infection by fungi, oomycetes, viruses, bacteria, and nematodes, as well as insect attack [32]. The recognized PRs have been broadly reviewed [33,34], and 17 PR families are currently recognized [22]. Previous studies revealed that PR-2 and PR-4 act as antifungal compounds, limiting the growth, activity, and fitness of fungal plant pathogens [22]. PR-2 (chitinase) has the potential to target herbivore and nematode infection in tomato plants [22]. Triticum aestivum pathogenesis-related protein 4 (TaPR-4) has antifungal activity against different pathogens and also has ribonuclease activity in wheat [35,36]. Similarly, PR-10 proteins display homology to ribonucleases, though some members have only weak ribonuclease activity [37]. The PR-5 family is directly linked to resistance against oomycetes, and defensin has broad antifungal and antibacterial activities [22]. Up- or down-regulation of the PR-2 and PR-5 genes increases or decreases disease severity in wheat and rice [38,39].
Quantitative real-time PCR (qRT-PCR) has also been used to successfully detect Colletotrichum coccodes in potato tubers [45], and the mycelium of T. caries and T. controversa has been quantified in the apical meristem of wheat with the same technique [46]. Therefore, investigating PR gene expression in wheat cultivars after T. controversa infection by qRT-PCR is very important in the process of plant selection.
In the present study, we examined the presence of T. controversa in tapetum cells and its effect on pollen grain germination. Furthermore, we investigated the expression of the PR genes (TaPR-4 and TaPR-5) and the MYB transcription factor gene (TaLHY), as well as the role of exogenous hormones (MeJa, SA, and ABA) in inducing the expression of PR genes (defensin, TaPR-2, and TaPR-10), in resistant, moderately resistant, and susceptible wheat cultivars challenged with dwarf bunt. The proliferation of T. controversa hyphae was further examined in roots, leaves, and anther tapetum cells by laser confocal microscopy, and the effects of T. controversa on pollen grain germination in the resistant, moderately resistant, and susceptible cultivars were additionally tested.
Plant Material and Fungal Inoculation
In total, three wheat (Triticum aestivum L.) cultivars (Yinong 18, Pin 9928, and Dongxuan 3) and T. controversa were the biological materials of this study. The wheat cultivars were obtained from the Institute of Plant Protection, Chinese Academy of Agricultural Sciences, Beijing, China, while T. controversa was provided by Blair Goates, National Small Grains Germplasm Research Facility, United States Department of Agriculture-Agricultural Research Service (USDA-ARS). The cultivars were tested against T. controversa in a greenhouse during 2015-2018: Yinong 18, highly resistant to T. controversa (5% infected spikes), was used as the resistant cultivar; Pin 9928, moderately resistant (27% infected spikes), was used as the moderately resistant cultivar; and Dongxuan 3, highly susceptible (73% infected spikes), was used as the susceptible cultivar. Seeds of these cultivars were grown in experimental pots in a growth chamber (14 h light/10 h dark, 5 ± 2 °C, 70% relative humidity). Four biological replicates of each cultivar were used. Fungal cultivation and inoculation of the wheat plants followed a previously published method [9]. Briefly, the concentration of fungal conidia in ddH2O was adjusted to 10⁶ conidia mL⁻¹ and used to inoculate seedlings; inoculation was repeated five times at one-day intervals. The inoculated leaves were sampled at 24, 36, 72, and 96 h post inoculation (hpi), quickly frozen in liquid nitrogen, and stored at −80 °C until use. The hormone treatments, namely 100 mM abscisic acid (ABA), 100 mM methyl jasmonate (MeJa), and 100 mM salicylic acid (SA), were performed following our laboratory method [9]. The treated leaves were collected for RNA extraction at 1, 3, and 7 h after hormone treatment. Plants sprayed with ddH2O served as controls [9].
RNA Extraction and cDNA Synthesis
Plant samples (100 mg of leaves) collected from T. controversa-infected and control plants were immediately placed in liquid nitrogen and processed for RNA extraction using the EasyPure Plant RNA Kit (TransGen, Beijing, China) following the manufacturer's instructions. The quality and quantity of the extracted RNA were checked with a NanoDrop spectrophotometer (Denovix, Wilmington, DE, USA). The RNA was stored at −80 °C until used for cDNA synthesis. First-strand cDNA was synthesized from 1.5 µg of purified total RNA using RT-RI enzyme and oligo(dT)18 primer (TransGen) following the kit instructions (TransGen) and stored at −20 °C for further use. The cDNA was synthesized from three biological replicates, with four technical replicates for qRT-PCR analysis. The same RNA extraction and cDNA synthesis procedure was used for the samples treated with MeJa, SA, and ABA at the different time intervals.
Quantitative Real-Time PCR Analysis
Quantitative real-time PCR was performed using SYBR Green Master Mix in a total volume of 20 µL, following the manufacturer's instructions, on the ABI 7500 real-time PCR system (Applied Biosystems, Foster City, CA, USA). The qRT-PCR reactions used the following thermal cycle: pre-denaturation at 95 °C for 10 min, then 40 cycles of 95 °C for 15 s, 58 °C for 30 s, and 72 °C for 30 s. Amplification of the wheat actin gene was used as an internal control for normalizing all data. The 2^−ΔΔCT method [47] was used to calculate the relative expression of each gene. The wheat genome is complex compared with other crops owing to its hexaploid nature; the interaction between the three subgenomes confers flexibility in gene expression levels, which enhances adaptability to various biotic and abiotic factors [48,49]. The primers used in this study target regions identical across the three wheat subgenomes and are listed in Table S1.
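For clarity, the 2^−ΔΔCT calculation [47] reduces to a few lines, as in the minimal sketch below; the Ct values in the example are illustrative, not measurements from this study.

```python
def relative_expression(ct_target, ct_actin, ct_target_ctrl, ct_actin_ctrl):
    """Fold change by the 2^-ΔΔCT method, normalized to actin and a control sample."""
    delta_ct = ct_target - ct_actin                  # ΔCt of the treated sample
    delta_ct_ctrl = ct_target_ctrl - ct_actin_ctrl   # ΔCt of the 0-hpi control
    return 2.0 ** -(delta_ct - delta_ct_ctrl)        # 2^-ΔΔCt

# Illustrative Ct values: target 24.0 vs. actin 18.0 (treated),
# target 26.0 vs. actin 18.0 (control) -> 4.0-fold up-regulation
print(relative_expression(24.0, 18.0, 26.0, 18.0))
```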
Observation by Laser Confocal Microscopy
Roots, leaves, and anther cells were examined under laser confocal microscopy to investigate the fungal intensity in the resistant, moderately resistant, and susceptible cultivars, as previously described [2,9]. Briefly, the roots, leaves, and anthers were dissected from the wheat and immediately dipped in absolute ethanol (96%) until the tissues turned white. The anther cells were stained with Propidium Iodide (PI) (Invitrogen, Eugene, OR, USA), and fungal hyphae in the roots, leaves, and tapetum cells were stained with the chitin-specific dye Wheat Germ Agglutinin-Alexa Fluor 488 conjugate (WGA-AF488) (Invitrogen). After 1 h, slides were prepared and examined under a laser confocal microscope (Leica SP8, Wetzlar, Germany), as described before [50].
Effects of T. controversa on Pollen Germination
Mature anthers with stamens were collected from mock- and fungus-inoculated plants for the pollen germination test. Three anthers were collected and gently shaken in a 1.5 mL centrifuge tube containing liquid culture medium (20% sucrose, 20% PEG4000, 40 mg/L H₃BO₃, 3 × 10⁻³ mol/L Ca(NO₃)₂, and 10 mg/L VB1) to release the pollen from the locules, with slight modification [51]. The centrifuge tubes were incubated at 28 °C for 30, 60, and 90 min. One drop from each time-interval sample was observed under a microscope (Leica DM 2500, Wetzlar, Germany). A pollen grain was considered germinated when its pollen tube reached at least half the pollen diameter. Pollen germination was calculated as:

pollen germination (%) = (number of germinated pollen grains / total number of observed pollen grains) × 100    (1)
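Equation (1) is applied per replicate; a minimal sketch (Python; the counts are hypothetical, not taken from the study):

```python
def pollen_germination_pct(germinated: int, observed: int) -> float:
    """Equation (1): percentage of germinated pollen grains."""
    if observed == 0:
        raise ValueError("no pollen grains observed")
    return 100.0 * germinated / observed

# Hypothetical counts for one replicate (>200 grains were scored per replicate)
print(pollen_germination_pct(germinated=178, observed=205))  # ~86.8
```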
Assessment of Wheat Cultivars Against T. controversa
A total of 45 heads of the above cultivars were evaluated in response to T. controversa for disease assessment. The score for dwarf bunt was calculated as follows:

dwarf bunt (%) = (number of infected heads / total number of heads) × 100    (2)

The level of disease resistance was determined following the scale mentioned in our previous study [9].
Statistical Analysis
Data were statistically analyzed using one-way analysis of variance (ANOVA) followed by Tukey's test in SPSS Statistics software (version 20.0). Results were considered significant at the 5% probability level (p ≤ 0.05). Standard errors were calculated in Excel 2016 (Microsoft, Redmond, WA, USA).
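For readers reproducing this analysis outside SPSS, the same one-way ANOVA with Tukey's post hoc test can be sketched in Python as follows; the group values are invented placeholders, not data from this study.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical germination percentages for the three cultivars
resistant = np.array([80.1, 79.5, 80.6])
moderate = np.array([58.9, 58.2, 59.1])
susceptible = np.array([0.5, 0.8, 0.7])

# One-way ANOVA across the three groups
f_stat, p_value = stats.f_oneway(resistant, moderate, susceptible)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

# Tukey's HSD post hoc test at the 5% level
values = np.concatenate([resistant, moderate, susceptible])
labels = ["resistant"] * 3 + ["moderate"] * 3 + ["susceptible"] * 3
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```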
The Expression Patterns of TaLHY, TaPR-4, and TaPR-5 in Response to T. controversa Infection
The relative expression values were measured in leaves of the resistant (Yinong 18), moderately resistant (Pin 9928), and susceptible (Dongxuan 3) wheat cultivars by qRT-PCR. The results showed that at 36 h post inoculation (hpi), the relative expression of TaLHY in the resistant cultivar was significantly up-regulated compared with the moderately resistant and susceptible cultivars, relative to the expression at 0 hpi (control) (p < 0.05), increasing 2.28-fold. The relative expression was statistically significant at 72 hpi for the above tested cultivars (Figure 1A). As shown in Figure 1B, the transcript abundance of TaPR-4 was significantly higher in the resistant cultivar at 72 hpi compared with the expression at 0 hpi (p < 0.05), reaching 10.71-fold of the expression at 0 hpi, and was also higher than that in the moderately resistant and susceptible wheat cultivars at the corresponding time (p < 0.05) (Figure 1B). Similarly, the relative expression of TaPR-5 was significantly higher (6.71-fold) at 72 hpi in the resistant cultivar, followed by the moderately resistant cultivar (6.18-fold) at 24 hpi, compared with the expression at 0 hpi (p < 0.05). Additionally, the relative expression of TaPR-5 in the moderately resistant cultivar was significantly up-regulated at 72 and 96 hpi compared with the expression at 0 hpi (p < 0.05), by 5.99-fold and 4.02-fold, respectively; these values were significantly higher than those in the susceptible cultivar at the corresponding times (Figure 1C). The results clearly revealed that the expression of TaLHY, TaPR-4, and TaPR-5 was higher in the resistant and moderately resistant wheat cultivars, suggesting that these genes may contribute to resistance in these cultivars. Therefore, the use of resistant cultivars is the best option against the dwarf bunt pathogen.
Response of Pathogenesis-Related Proteins to Exogenous Hormones in Different Wheat Cultivars
Transcriptional profiles of defensin, TaPR-2, and TaPR-10 were analyzed by qRT-PCR in the resistant (Yinong 18), moderately resistant (Pin 9928), and susceptible (Dongxuan 3) wheat cultivars after exogenous hormone treatment with MeJa, SA, and ABA at 1, 3, and 7 h post treatment (hpt). Leaves of the above cultivars at the jointing stage were treated with hormones, and relative expression was measured at three time points, namely 1, 3, and 7 hpt. As shown in Figure 2A, for defensin, ABA induced the maximum relative expression at 1 and 7 hpt, with 5.15-fold and 4.63-fold increases in the resistant cultivar compared with the control, respectively. However, ABA treatment down-regulated the expression of defensin by 0.12-fold (resistant), 0.12-fold (moderately resistant), and 0.11-fold (susceptible) at 3 hpt compared with the control. With regard to SA, the highest relative expression of defensin was noted at 1 and 7 hpt, reaching 2.29-fold and 2.21-fold increases, respectively, compared with the control. In the case of MeJa, the maximum transcriptional level of defensin was noted at 3 hpt, reaching a 2.09-fold increase compared with the control in the moderately resistant cultivar. As shown in Figure 2B, the response of TaPR-2 to the exogenous application of hormones was comparatively higher at 1 hpt than at 3 and 7 hpt relative to the control. The expression level of TaPR-2 increased 4.91-fold (MeJa) and 4.51-fold (SA) at 1 hpt in the resistant and moderately resistant cultivars, respectively, compared with the control. Similarly, TaPR-2 responded in a similar way to the exogenous application of SA and ABA at 1 hpt in the resistant and moderately resistant cultivars: the highest expression under SA occurred at 1 hpt, with a 3.25-fold increase in the resistant cultivar, while under ABA expression increased 3.35-fold. At 3 hpt, TaPR-2 expression was decreased by 0.89-fold and 0.14-fold in the resistant and susceptible cultivars, respectively, in the case of MeJa compared with the control, whereas it increased 1.87-fold in the moderately resistant cultivar. After SA treatment, TaPR-2 expression was decreased by 0.28-fold, 0.59-fold, and 0.48-fold in the resistant, moderately resistant, and susceptible cultivars, respectively, compared with the control at 3 hpt. Similarly, after ABA treatment, TaPR-2 expression was decreased by 0.27-fold and 0.47-fold in the resistant and susceptible cultivars, respectively, compared with the control at 3 hpt. At 7 hpt under MeJa treatment, the expression level of TaPR-2 was 3.34-fold higher in the resistant cultivar compared with the control; similarly, under ABA it was 3.96-fold higher in the resistant cultivar.
As shown in Figure 2C, the TaPR-10 expression levels after treatment for 1, 3, and 7 hpt were analyzed for the resistant, moderately resistant, and susceptible cultivars. In the resistant cultivar, after treatment with MeJa, the TaPR-10 expression levels at 1, 3, and 7 hpt were increased by 2.33-fold, 6.00-fold, and 4.88-fold, respectively, compared with the control. With the same hormone in the moderately resistant cultivar, the TaPR-10 expression levels at 3 and 7 hpt were increased by 2.27-fold and 2.83-fold, respectively, compared with the control. However, the expression levels in the susceptible cultivar at 1 and 7 hpt decreased by 0.59-fold and 0.71-fold, respectively, after MeJa treatment. Under SA treatment in the resistant cultivar, the TaPR-10 expression levels at 1, 3, and 7 hpt increased by 2.62-fold, 3.74-fold, and 4.77-fold, respectively, compared with the control. In the moderately resistant cultivar, after SA treatment, the TaPR-10 expression level at 3 hpt was increased by 3.66-fold compared with the control. However, under SA treatment in the susceptible cultivar, the TaPR-10 expression levels at 1, 3, and 7 hpt decreased by 0.75-fold, 0.41-fold, and 0.15-fold, respectively, compared with the control. Under ABA treatment in the resistant cultivar, the TaPR-10 expression levels at 3 and 7 hpt were increased by 12.96-fold and 3.07-fold, respectively, compared with the control, while in the moderately resistant cultivar the expression levels of TaPR-10 increased 5.15-fold and 2.06-fold at 3 and 7 hpt, respectively. However, expression levels decreased by 0.51-fold and 0.96-fold in the susceptible cultivar at 1 and 3 hpt, respectively.
Proliferation of Fungal Hyphae in Root and Leaf Cells
To track the hyphae in roots and leaves of the resistant, moderately resistant, and susceptible wheat cultivars, root and leaf samples were analyzed by laser confocal microscopy. At germination, hyphae started from small tips and formed a hyphal network inside the cortical and rhizodermal cells of roots and leaves. Hyphae moved into cortical and rhizodermal cells through intercellular spaces, where they branched and continued to grow. The results revealed that the cortical and rhizodermal cells of roots and leaves of the susceptible cultivar were severely infected compared with the resistant and moderately resistant cultivars (Figure 3A-C). A similar response was noted in the leaf tissues of the above cultivars (Figure 3D-F).
Proliferation of Fungal Hyphae in Anther Cells
We also observed the proliferation and colonization of fungal hyphae in the tapetum cells of the anther. The laser confocal microscopy results showed no proliferation or colonization of fungal hyphae in the tapetum cells of the resistant cultivar. Very few hyphae were observed in the tapetum cells of the moderately resistant cultivar, whereas the tapetum cells of the susceptible cultivar were heavily infected by fungal hyphae (Figure 4A-C). Additionally, there were no fungal hyphae on the epidermis and endothecium cells of the anther in the resistant cultivar, and only a few hyphae were seen in the moderately resistant cultivar, but in the susceptible cultivar the epidermis and endothecium cells of the anther were heavily infected by fungal hyphae (Figure S1).
Effects of Exogenous Hormones on Tapetum Cells of Anther
To track fungal hyphae in the tapetum cells of the anther of the susceptible cultivar, anthers were analyzed by laser confocal microscopy after treatment with MeJa, SA, and ABA. No fungal hyphae were observed in tapetum cells treated with the MeJa, SA, or ABA hormones, whereas heavy infection by fungal hyphae was observed in the tapetum cells of the controls (Figure 5).
Effects of T. controversa on Pollen Grain Germination
We examined the effects of T. controversa on pollen grain germination in vitro. Pollen germination in the controls was 87.14, 88.39, and 86.95%, while in T. controversa-infected plants it was 80.06, 58.73, and 0.67% in the resistant, moderately resistant, and susceptible cultivars, respectively (Table 1 and Figure 6). A total of three replications were used for every variety, and more than 200 pollen grains were counted in every replication. In Table 1, * stands for highly significant and ± represents the standard error between replications.
Evaluation of Dwarf Bunt Resistance in Wheat Cultivars
The resistant (Yinong 18), moderately resistant (Pin 9928), and susceptible (Dongxuan 3) cultivars were evaluated for disease resistance, showing 8.89, 26.67, and 62.2% of heads infected by the dwarf bunt pathogen, respectively (Figure 7). This level of infection confirmed that Yinong 18, Pin 9928, and Dongxuan 3 are resistant, moderately resistant, and susceptible cultivars, respectively [52]. Additionally, dwarf bunt symptoms were clearly seen on the spikes of the susceptible cultivar compared with the moderately resistant and resistant cultivars (Figure S2).
Discussion
qRT-PCR is a highly reliable, sensitive, accurate, and simple method to quantify gene expression levels in crops, including wheat, after pathogen infection [9,46], while laser confocal microscopy helps to visualize fungal and plant cells with the aid of dyes. Previously, Wheat Germ Agglutinin-Alexa Fluor 488 conjugate (WGA-AF488) (Invitrogen, Eugene, OR, USA) was used for fungal hyphae and Propidium Iodide (PI) (Invitrogen, Eugene, OR, USA) for plant cell counting [9,53]. Here, we report the expression profiles of pathogenesis-related genes and the infection process of fungal hyphae in the tapetum cells of the anther in the resistant, moderately resistant, and susceptible cultivars, using qRT-PCR and laser confocal microscopy, respectively. PR proteins of wheat, tomato, and Arabidopsis comprise a group of functionally diverse, inducible proteins that accumulate in response to pathogen infection. These proteins have been implicated in active defense, as well as in potentially restricting pathogen spread and development [9,22,54-57]. Regarding the role of wheat and rice PR proteins in the defense system, PR-2, PR-5, and PR-10 proteins can directly affect pathogen integrity or, through their enzymatic activity, release signal molecules that act as elicitors to induce plant defense-related pathways [22-24]. Endochitinases (PR-4) and thaumatin-like proteins (PR-5) of wheat, maize, barley, sorghum, and oat are implicated in defense responses against a diverse group of pathogens, including fungal and oomycete pathogens with different lifestyles [22,24,58,59]. Similarly, MYB transcription factors (TaLHY) play key roles in the defense mechanisms of plants [30,60]; previous studies showed that TaLHY plays a key role in disease resistance against stripe rust of wheat [30]. The expression of PR genes is more strongly up-regulated in resistant than in susceptible wheat cultivars upon infection by Bipolaris sorokiniana and T. controversa [9,31], and silencing or overexpression of the TaPR-5 and TaLHY genes decreases or increases the resistance level against plant pathogens [30,61,62]. Here, the qRT-PCR results revealed that infection with T. controversa triggered the expression of the PR genes (TaPR-4 and TaPR-5) and the TaLHY gene more strongly in the resistant and moderately resistant cultivars than in the susceptible cultivar. After infection with T. controversa, the expression levels of TaLHY, TaPR-4, and TaPR-5 were higher in the resistant and moderately resistant cultivars than in the susceptible cultivar at different time points (Figure 1A-C). These results revealed that the TaLHY, TaPR-4, and TaPR-5 genes were activated upon infection by T. controversa.
MeJa, SA, and ABA are involved in both biotic and abiotic stress signaling in plants [2,16-18], and many defense-related genes are activated by MeJa, SA, and ABA [16-18,57]. According to the previous literature, the expression of TaPR genes against B. sorokiniana, T. controversa, and P. striiformis f.sp. tritici is increased by the above molecules [2,9,30,63]. Our results showed that the expression of the TaPR genes (defensin, TaPR-2, and TaPR-10) in response to MeJa, SA, and ABA was higher at different time points in the resistant and moderately resistant cultivars than in the susceptible cultivar. However, the expression induced by MeJa and SA was greater than that induced by ABA in the above-mentioned cultivars (Figure 2A-C).
Red Bobs, a winter wheat cultivar, was shown to be most susceptible to dwarf bunt typically during the 1- to 3-leaf stages [64]. Fungal hyphae established at the 1- to 3-leaf stages remain sparse until they reach the reproductive organs [65]. In the present study, we investigated the varietal response to the proliferation of T. controversa in the roots, leaves, and tapetum cells of the anthers. The results showed that the roots and leaves of the susceptible cultivar were more severely infected than those of the resistant and moderately resistant cultivars. In the susceptible cultivar, fungal hyphae move from the roots to the reproductive parts as the crop matures and further infect the anther cells. The anther has four lobes that are designed to produce and release pollen grains. Every lobe has a specialized chamber, known as the locule, in which pollen develops. The locule walls are lined by a specialized tissue composed of tapetum cells. The tapetum is the innermost cell layer of the anther and provides nutrients to developing pollen grains. The tapetum cells undergo programmed cell death, depositing a mixture of wax and protein on the surface of the pollen exine during the later stages of pollen development [66,67]. Ustilago maydis deforms the four anther lobes, which influences the normal process of pollen grain development [12]. Our previous studies revealed that hyphae of T. controversa were present on the anther epidermal and sub-epidermal cells, including epidermis cells (EPI), endothecium cells (EN), the middle layer (ML), and pollen mother cells (PMC), more severely in susceptible cultivars [2,9]. However, here we observed the prevalence of T. controversa in the tapetum cells of the resistant, moderately resistant, and susceptible cultivars. The results of this study revealed that the tapetum cells of the susceptible cultivar were more heavily infected by fungal hyphae than those of the moderately resistant and resistant cultivars (Figure 4). Additionally, we confirmed that the percentage of pollen germination was statistically lower in the susceptible cultivar than in the resistant cultivar (Table 1 and Figure 5). The pollination from infected anthers is critical for normal plant reproduction: the seeds produced from infected anthers contain millions of teliospores, which turn the grain into a black mass of T. controversa teliospores [4].
Conflicts of Interest:
All authors declare that there is no conflict of interest.
"Agricultural and Food Sciences",
"Biology"
] |
Solution structure and flexibility of the condensin HEAT-repeat subunit Ycg1
High-resolution structural analysis of flexible proteins is frequently challenging and requires the synergistic application of different experimental techniques. For these proteins, small-angle X-ray scattering (SAXS) allows for a quantitative assessment and modeling of potentially flexible and heterogeneous structural states. Here, we report SAXS characterization of the condensin HEAT-repeat subunit Ycg1Cnd3 in solution, complementing currently available high-resolution crystallographic models. We show that the free Ycg1 subunit is flexible in solution but becomes considerably more rigid when bound to its kleisin-binding partner protein Brn1Cnd2. The analysis of SAXS and dynamic and static multiangle light scattering data furthermore reveals that Ycg1 tends to oligomerize with increasing concentrations in the absence of Brn1. Based on these data, we present a model of the free Ycg1 protein constructed by normal mode analysis, as well as tentative models of Ycg1 dimers and tetramers. These models enable visualization of the conformational transitions that Ycg1 has to undergo to adopt a closed rigid shape and thereby create a DNA-binding surface in the condensin complex.
Condensins are protein complexes that play a key role during the segregation of eukaryotic chromosomes into the daughter cells during mitotic and meiotic cell divisions (1, 2). As members of the SMC (structural maintenance of chromosomes) family of complexes, they are thought to function by encircling chromosomal DNA within the large ring-shaped architecture that is created by a dimer of SMC subunits and a kleisin subunit (3-5). The Brn1 Cnd2 kleisin subunit of condensin binds two additional subunits that are composed of tandem repeats of short, amphiphilic α-helices known as HEAT repeats (named after four proteins that contain this motif: Huntingtin, elongation factor 3, the A subunit of protein phosphatase 2A, and the signaling kinase TOR1) (6, 7).
Recent crystal structures of the Saccharomyces cerevisiae (Sc) Ycg1 HEAT-repeat subunit in complex with DNA and the region of the Brn1 kleisin subunit it binds to revealed a positively charged groove that contacts the phosphate backbone of the dsDNA helix (8). In addition, DNA is entrapped within a flexible Brn1 loop, which thereby acts analogously to a safety belt that pins the DNA double helix in place. Interestingly, Ycg1 gains its ability to bind DNA only upon its association with the Brn1 subunit.
HEAT-repeat proteins have been shown to exhibit significant flexibility, both in response to binding events and as a result of external forces (9, 10). It therefore remains unclear whether the inability of Ycg1 to bind DNA by itself is merely due to the missing positive charges that are contributed by Brn1 residues and/or its safety-belt entrapment, or whether binding to Brn1 induces structural transitions in the HEAT-repeat solenoid that create a DNA-binding site. The latter option would be analogous to the large conformational changes that other HEAT-repeat proteins undergo upon ligand binding (11, 12). Considering the essential function of the Ycg1-Brn1 DNA-binding site in recruiting condensin complexes to chromosomes (8, 13), it is essential to understand how DNA binding by Ycg1 is promoted upon its association with Brn1 and how this interaction is prevented when the protein is not assembled into condensin holocomplexes.
Here, we examined the structure and flexibility of the Chaetomium thermophilum (Ct) condensin subunit Ycg1 in solution using small-angle X-ray scattering (SAXS), both in its unbound form and in complex with the kleisin region Brn1 515-634 it binds to. Whereas the SAXS data confirmed the overall horseshoe-shaped Ycg1 conformation observed in the Sc Ycg1-Brn1 crystal structure, the unbound Ycg1 is considerably more flexible than the complex. This indicates that Brn1 binding significantly restricts the conformational freedom of Ycg1. Utilizing the high-resolution structure of Sc Ycg1-Brn1 as a starting point, we constructed models of monomeric Ct Ycg1 and the Ct Ycg1-Brn1 515-634 complex in solution by a normal mode analysis against the SAXS data. We further found that Ycg1 assembly into condensin complexes prevents the protein from forming oligomers in solution. These findings have direct implications for the regulation of condensin function.
Results
To elucidate the solution structure of Ct Ycg1, both with and without the Brn1 515-634 fragment, SAXS data were collected for both samples at a series of concentrations. The processed scattering intensities after background subtraction are presented in Fig. 1 as functions of the momentum transfer, I(s), where s = 4π sin(θ)/λ, λ is the X-ray wavelength, and 2θ is the scattering angle. For Ycg1 alone, the normalized forward scattering, I(0), and the apparent particle radius of gyration, Rg, significantly increase with the solute concentration, which indicates that the protein oligomerizes at higher concentrations (Fig. 1a). In sharp contrast, no obvious concentration dependence was observed for Ycg1-Brn1 515-634, except for minor unspecific aggregation effects (Fig. 1b).
The concentration series SAXS data for Ycg1 were used to extrapolate a scattering curve to apparent zero solute concentration, whereas the Ycg1-Brn1 515-634 SAXS data were derived by merging low- and high-concentration scattering data (see "Experimental procedures" for details). Panels a and b of Fig. 2 show the Ycg1 extrapolated data and the Ycg1-Brn1 515-634 merged data, with plots of their respective Guinier regions in the lower left insets. Both the Ycg1 extrapolated data and the Ycg1-Brn1 515-634 merged data have linear Guinier regions, indicating a lack of aggregation in both samples. The overall parameters computed from the scattering data are summarized in Table 1. The molecular weight (MW) estimates from the forward scattering, I(0), for unbound Ycg1 and Ycg1-Brn1 515-634 confirm that the extrapolated and merged data represent monomeric forms in both cases. The pair distance distribution functions, P(r), show that unbound Ycg1 appears slightly larger in solution than the Brn1-bound form (Fig. 2c), pointing to an increased Ycg1 flexibility in the absence of Brn1. The normalized Kratky plots for Ycg1 and Ycg1-Brn1 515-634 (Fig. 2d) furthermore suggest that unbound Ycg1 exhibits more flexibility than the Ycg1-Brn1 515-634 complex. Indeed, the latter plot is bell-shaped, which is characteristic of globular proteins, whereas the unbound protein reveals elevated scattering at higher angles, pointing to a higher anisometry/flexibility.
We reconstructed ab initio models of Ycg1 and Ycg1-Brn1 515-634 from the derived scattering curves of the two samples using DAMMIF (14). The resulting Ycg1 bead models are more extended than Ycg1-Brn1 515-634 bead models (Fig. 3, a and b), which also indicates that Brn1 binding restricts the flexibility of the Ycg1 HEAT-repeat solenoid.
The χ² fits between the Ycg1 and Ycg1-Brn1 515-634 SAXS data and the scattering profiles computed from their corresponding high-resolution structures are displayed in Fig. 2 (a and b). The scattering curve computed from the Sc Ycg1-Brn1 crystal structure yielded an overall good agreement with the Ct Ycg1-Brn1 515-634 SAXS data (χ² = 1.5), whereas the scattering curve computed from the Sc Ycg1 component of the crystal structure alone showed noticeable systematic deviations from the unbound Ct Ycg1 SAXS data (χ² = 1.8). To generate meaningful models that yield better fits, we performed SAXS-guided normal mode analysis (NMA). NMA starts with a high-resolution structure and fits the given scattering profile, allowing for larger-scale domain movements, as defined by harmonic normal modes (15). With this approach, it is possible to test whether allowing for a limited flexibility of the structure is sufficient to reconcile the models with the observed scattering data. Fig. 3 (c and d) displays the models of Ct Ycg1 and Ct Ycg1-Brn1 515-634 derived from SAXS-guided NMA. The Ycg1-Brn1 515-634 NMA model deviates from the crystal structure by 7 Å (rmsd), displaying essentially no differences at low resolution. In contrast, the Ycg1 NMA model is significantly more open and differs notably from the Ycg1 conformation in the Ycg1-Brn1 crystal structure (~17 Å rmsd). Because the unbound Ycg1 appears to be flexible, it might adopt multiple conformations in solution, and the obtained NMA model might therefore represent an average of these conformers, which clearly differ from the Brn1-bound Ycg1 in the crystal.
Given the observed concentration-dependent Ycg1 oligomerization in solution (Fig. 1a), we further utilized the SAXS data in the entire concentration range from 0.25 to 10 mg/ml Ycg1 to structurally characterize this process, using the Ycg1 NMA model as a tentative monomeric unit. At 2 mg/ml, the MW estimate from I(0) indicated that the protein is largely dimeric (MW from I(0) ≈ 204 kDa). The MW determinations at protein concentrations lower than 2 mg/ml indicated a mixture of monomers and dimers, with the latter fraction increasing with concentration. We therefore used SASREFMX (16) to model a tentative dimeric structure and calculate the monomer-to-dimer ratio by simultaneous fitting of the entire set of SAXS data from 0.25 to 2 mg/ml. The obtained dimeric model is displayed in Fig. 4, along with the volume fractions of the dimer at different protein concentrations. In this model, the Ycg1 dimerization interface partially overlaps with the Brn1-binding site, which might also explain the lack of oligomerization in the Ycg1-Brn1 515-634 sample at comparable concentrations (Fig. 1b).
We further employed SASREFMX to follow the oligomerization of Ycg1 at yet higher concentrations (5 and 10 mg/ml). The analysis of possible higher-order oligomers indicated that the scattering data at 10 mg/ml can be well represented by a tetramer, formed as a dimer of the Ycg1 dimer (Fig. 4b).
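SASREFMX couples rigid-body modeling with mixture fitting; as a much simpler illustration of the mixture step alone, given fixed theoretical curves for the monomer, dimer, and tetramer on a common s-grid, the species weights at one concentration can be estimated by non-negative least squares. The curves and data below are synthetic placeholders, not the study's.

```python
import numpy as np
from scipy.optimize import nnls

def fit_species_weights(i_obs, i_components):
    """Fit I_obs(s) ~ sum_k w_k * I_k(s) with w_k >= 0.

    i_obs        : observed intensities, shape (n_points,)
    i_components : theoretical curves, shape (n_points, n_species),
                   e.g. columns for monomer, dimer, tetramer
    Returns normalized weights per species and the fit residual.
    """
    weights, residual = nnls(i_components, i_obs)
    return weights / weights.sum(), residual

# Toy Guinier-like component curves on a common s-grid
s = np.linspace(0.01, 0.3, 100)
monomer = np.exp(-(35 * s) ** 2 / 3)
dimer = 2 * np.exp(-(45 * s) ** 2 / 3)
tetramer = 4 * np.exp(-(60 * s) ** 2 / 3)
components = np.column_stack([monomer, dimer, tetramer])

i_obs = 0.3 * monomer + 0.7 * dimer  # synthetic mixed-state data
weights, _ = fit_species_weights(i_obs, components)
print(weights)  # ~[0.3, 0.7, 0.0] by construction
```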
We validated the presence of higher oligomers of Ycg1 in solution using light scattering experiments. The dynamic light scattering (DLS) measurements revealed a systematic increase in the average hydrodynamic radius, Rh (from ~6.5 to ~9 nm), and apparent MW (from ~250 to ~500 kDa) with increasing solute concentration (Fig. 5a). Because the MW values deduced from DLS depend on the particle/oligomer shapes, which allows only a qualitative comparison, we complemented these data with a more quantitative method. Size-exclusion chromatography coupled to multiangle static light scattering (SEC-MALS) revealed three components in the elution profile. These components (marked with arrows in Fig. 5b) correspond to the MW values of monomeric, dimeric, and tetrameric Ycg1 (~100, ~200, and ~400 kDa). Both the DLS and SEC-MALS results are in excellent agreement with the SAXS data and suggest a concentration-dependent oligomerization of Ycg1. The biological implications of this oligomerization are discussed below.
(Figure 4 caption: Ct Ycg1 concentration-dependent oligomerization. a, scattering data from a Ct Ycg1 concentration series were modeled as mixtures of monomer, dimer, and tetramer molecules; as the Ct Ycg1 concentration increases, the amount of dimeric and tetrameric species in solution increases. b, the Ycg1-Brn1 crystal structure compared with the dimer and tetramer Ycg1 models; in the dimer model, the dimerization interface partly overlaps with the Brn1 (blue) binding site, which might explain why oligomerization was not observed for Ct Ycg1-Brn1 515-634. The tetramer was built as a dimer of dimers.)
Discussion
By assembling into a complex with the kleisin subunit Brn1, the HEAT-repeat subunit Ycg1 forms the major DNA-binding site of the condensin complex and thereby controls the recruitment of condensin to chromosomes (8). It has been suggested that this recruitment triggers the subsequent ATP-dependent entrapment of DNA strands within the lumen of the large ring structure created by the SMC and kleisin subunits of condensin. Remarkably, the protein levels of budding yeast Ycg1 are strictly controlled by cell cycle-dependent transcription and protein degradation, which suggests that this subunit is the limiting factor for condensin holocomplex formation (17). It therefore seems conceivable that premature binding to chromosomes of Ycg1 molecules that have not yet assembled into condensin complexes needs to be prevented to achieve a controlled recruitment of condensin holocomplexes to chromosomes.
We found that the binding to Brn1 significantly restricts the flexibility of Ycg1 and converts an open and flexible configuration of free Ycg1 in solution, which is presumably facilitated by the elasticity of the HEAT-repeat architecture, into the horseshoe-shaped structure observed in crystal structures. A recent study on the human analog of the Ycg1-Brn1 515-634 complex suggests a similar modulation of the flexibility of the HEAT-repeat subunit by the binding of the kleisin (18). The crystal structure of this complex revealed a less tight interaction of the kleisin subunit than the one observed in the yeast Ycg1-Brn1 crystal structure, as well as a more open conformation and greater flexibility (higher B-factors) of the HEAT-repeat subunit.
Flexibility of the unbound Ycg1 might be an essential prerequisite for its movement through the crowded cellular space (19) and for its assembly into condensin complexes. The SAXS data of free Ycg1 can be best fitted by a model that incorporates a large rotation of the N-terminal part of the HEAT-repeat solenoid (Fig. 3c). Remarkably, this is also the region of the protein that considerably contributes to the formation of the positively charged DNA-binding groove (8). These results strongly suggest that Brn1 binding promotes a structural transition in Ycg1, leading to a conformation that is amenable for making contacts with the phosphate backbone of the DNA double helix. Fig. 6 shows a comparison between the Sc Ycg1-Brn1-DNA crystal structure and the SAXS-based NMA model of unbound Ct Ycg1. In the crystal structure, Brn1 contributes positively charged residues to the DNA-binding interface and directs positively charged residues located in the N and C termini of Ycg1 into a compact cluster amenable to DNA binding.
Although the condensin HEAT-repeat subunits have been speculated to self-assemble ("phase separate") via multivalent, weak interactions (19), no direct experimental evidence has yet supported this notion. The scattering data from Ycg1 strongly suggest that the higher MW species formed with increasing solute concentrations are specific oligomers rather than unspecific aggregates. Indeed, the latter would not have yielded bell-shaped scattering curves like the ones shown in Fig. 1a, and the scattering data from unspecific aggregates would also not be amenable to meaningful analyses in terms of oligomeric mixtures. The independent light scattering experiments (DLS and SEC-MALS) fully confirm the oligomerization observed by SAXS.
(Figure 5 caption, fragment: ... monomeric (arrow 1, ~100 kDa), dimeric (arrow 2, ~200 kDa), and tetrameric (arrow 3, ~400 kDa) MWs. Note that the dimers and tetramers may be dissociating during chromatography because of dilution, causing them to be present in much smaller amounts compared with the SAXS experiment.)
Future experiments will have to clarify the functional relevance of the observed oligomeric equilibria for Ycg1 and possibly for other HEAT-repeat proteins. However, already at this stage, our results further explain the finding that only the Ycg1-Brn1 515-634 complex, but not Ycg1 alone, is able to stably bind to DNA. Indeed, the observed dimerization of Ycg1 would probably not allow DNA binding and could thereby be part of the regulatory mechanism to prevent association of free Ycg1 molecules with chromatin. Furthermore, because the Brn1-binding site on Ycg1 overlaps with the Ycg1 dimerization interface (Fig. 4b), Brn1 binding might also dissociate the Ycg1 dimers to consequently allow chromatin association of assembled condensin complexes.
Sample preparation
Ct Ycg1 and the Ct Ycg1-Brn1 515-634 complex were both purified as described previously (8). The protein properties, such as the positions of purification tags and extinction coefficients, are listed in Table 2. In brief, proteins were expressed in Escherichia coli Rosetta (DE3) pLysS (Merck) from pET-MCN (20) vectors in 2× TY medium for 18 h at 18°C. The cells were lysed by sonication in lysis buffer (500 mM NaCl, 50 mM Tris-HCl, pH 7.5, 20 mM imidazole, 5 mM β-mercaptoethanol, containing complete protease inhibitors (Roche)) at 4°C, and the lysate was cleared by a centrifugation step at 45,000 × g_max. The cleared lysate was loaded onto nickel-Sepharose 6 Fast Flow (GE Healthcare) and extensively washed with 30-40 column volumes (CVs) of lysis buffer. The proteins were eluted in 5-7 CVs of elution buffer (lysis buffer plus 300 mM imidazole), and the combined eluate was dialyzed against SEC buffer (300 mM NaCl, 25 mM Tris-HCl, pH 7.5, 1 mM DTT) overnight at 4°C. The dialyzed eluate was diluted with low-salt buffer (100 mM NaCl, 25 mM Tris-HCl, pH 7.5, 1 mM DTT) to a final salt concentration of 150 mM NaCl and loaded onto a 6-ml RESOURCE Q (GE Healthcare) anion-exchange column pre-equilibrated with low-salt buffer. The column was washed with 3-5 CVs of low-salt buffer, and the proteins were eluted with a linear NaCl concentration gradient to 1 M over 60 ml. The peak fractions were combined and loaded onto a Superdex 200 26/60 size-exclusion chromatography column equilibrated in SEC buffer. The peak fractions were pooled and concentrated by ultrafiltration (Vivaspin 30,000 MWCO, Sartorius), and the proteins were frozen at 10 mg/ml in liquid N₂.
Small-angle X-ray scattering data collection and processing
SAXS data for Ct Ycg1 and the Ct Ycg1-Brn1 515-634 complex were collected at the SAXS beamline P12 of the PETRA III storage ring (Deutsches Elektronen-Synchrotron, Hamburg, Germany) (21). The details of the data collection conditions are summarized in Table 1. The scattering data in the momentum transfer range 0.002 < s < 0.38 Å⁻¹ were collected with a PILATUS 2M pixel detector at a distance of 4.0 m from the sample. For each sample, solute concentrations ranging from 0.25 to 10 mg/ml were measured. The samples were loaded using an automatic sample changer, and the solute was constantly flowed through the capillary during the X-ray exposure to minimize radiation damage. The two-dimensional pixel data from the detector were converted to one-dimensional scattering profiles using the automated pipeline SASFLOW, which performed radial averaging, outlier removal, data averaging, and buffer subtraction (22).
The analyses of the SAXS data were performed using the ATSAS 2.8 suite (23). The data at different concentrations were assessed for quality in terms of the absence of repulsive or attractive interactions and the signal-to-noise ratio. Based on these criteria, a composite scattering profile for Ycg1-Brn1 515-634 was generated by merging the low-angle scattering at 0.5 mg/ml and the high-angle scattering at 5 mg/ml. For Ycg1, noticeable concentration effects were observed, and the scattering data from 0.25 and 0.5 mg/ml were extrapolated to zero concentration using Primus (24). The Ycg1-Brn1 515-634 merged data and the Ycg1 extrapolated data were used for all subsequent data analyses, including shape determination and rigid body modeling.
The forward scattering, I(0), and the radius of gyration, Rg, were obtained from the Guinier approximation (25), following standard procedures (26). The distribution of pair distances, P(r), was computed using the inverse Fourier transformation method implemented in GNOM (27). From the P(r) function, an alternative estimate of Rg and the maximum particle dimension Dmax were obtained. MWs of Ycg1 and Ycg1-Brn1 515-634 in solution were assessed from the SAXS data with three methods: (a) using the forward scattering, I(0) (compared against a reference solution of bovine serum albumin); (b) from the excluded (Porod) volume, Vp (given that Vp in nm³ is ~1.6 times the MW in kDa) (16); and (c) with a consensus Bayesian MW assessment method (28).
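For orientation, the Guinier approximation states that ln I(s) ≈ ln I(0) − (Rg²/3)·s² at small angles (commonly restricted to s·Rg < 1.3 for globular particles), so I(0) and Rg follow from a straight-line fit of ln I versus s². A minimal sketch with synthetic data (not the ATSAS implementation):

```python
import numpy as np

def guinier_fit(s, intensity, s_rg_limit=1.3):
    """Estimate I(0) and Rg from the Guinier region of a SAXS curve.

    Fits ln I(s) = ln I(0) - (Rg^2 / 3) * s^2 over the low-angle points,
    iterating so the fitted range satisfies s * Rg < s_rg_limit.
    """
    mask = np.ones_like(s, dtype=bool)
    for _ in range(5):  # a few iterations usually suffice
        slope, intercept = np.polyfit(s[mask] ** 2,
                                      np.log(intensity[mask]), 1)
        rg = np.sqrt(-3.0 * slope)  # nan if the region is not Guinier-like
        mask = s * rg < s_rg_limit
    return np.exp(intercept), rg

# Synthetic curve for a particle with Rg = 45 A and I(0) = 1.0
s = np.linspace(0.005, 0.05, 50)  # momentum transfer, 1/A
i = np.exp(-(45.0 * s) ** 2 / 3.0)
i0, rg = guinier_fit(s, i)
print(f"I(0) = {i0:.3f}, Rg = {rg:.1f} A")  # ~1.000 and ~45.0
```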
Molecular weight assessment with light scattering
The oligomeric states of Ycg1 were analyzed by analytical SEC-MALS. SEC was performed using an Agilent 1260 Infinity II Bio-inert LC system and an analytical Superdex 200 10/300 GL column (GE Healthcare) equilibrated with the sample buffer (25 mM Tris, 300 mM NaCl, 1 mM DTT, pH 7.5) at 20°C. Seven microliters of Ycg1 at 10 mg/ml was injected, with the experiment performed at a flow rate of 0.8 ml/min. Protein elution was detected by absorbance at 280 nm, and protein concentration was quantified with differential refractometry using an Optilab T-rEX detector (Wyatt). Light scattering data were measured with a miniDAWN TREOS multiangle light scattering detector (Wyatt). Molecular weights were computed from the concentration and light scattering data using the software ASTRA version 7.1.3.15 (Wyatt).
Batch DLS measurements were also performed for Ycg1 at concentrations of 0.6, 1.4, 2.5, 5.5, 10.8, and 22.1 mg/ml with a DynaPro NanoStar DLS detector (Wyatt). Light scattering data collection and analysis were performed with the software DYNAMICS version 7.6.0 (Wyatt). For each concentration, ten 5-s acquisitions were performed.
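DLS converts the measured translational diffusion coefficient D into a hydrodynamic radius via the Stokes-Einstein relation, R_h = k_B·T/(6π·η·D); this relation is standard physics rather than anything specific to the instrument used here. A quick sanity check of that conversion (Python; the diffusion coefficient is an invented example):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_radius(diff_coeff_m2_s, temp_k=293.15,
                        viscosity_pa_s=1.002e-3):
    """Stokes-Einstein: R_h = kT / (6 * pi * eta * D), in meters.

    The default viscosity is that of water at 20 C (~1.002 mPa*s).
    """
    return K_B * temp_k / (6.0 * math.pi * viscosity_pa_s * diff_coeff_m2_s)

d = 3.3e-11  # hypothetical D in m^2/s for a mid-sized protein
print(hydrodynamic_radius(d) * 1e9, "nm")  # ~6.5 nm
```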
Data analysis and structure modeling
For the two samples, the ab initio modeling program DAMMIF (14) was used to produce low-resolution bead models from the measured SAXS data. Because structural modeling against SAXS data is potentially ambiguous, an analysis of model variability and uniqueness was performed. Ten independent DAMMIF models were generated, superimposed with SUPCOMB (29), compared, and averaged using DAMAVER (30).
The theoretical scattering curves from the high-resolution models of Sc Ycg1 and Ycg1-Brn1 (PDB code 5OQQ) were computed, and their χ² fits against the experimental SAXS data were evaluated using CRYSOL (31). The χ² fit is defined as

χ² = (1/(Np − 1)) Σi [(Ie(si) − c·Im(si)) / σ(si)]²,

where Np is the number of experimental points, Ie is the experimental scattering, Im is the computed scattering from the PDB model, σ(si) is the experimental error, and c is the scaling factor. To improve the χ² fits for the free Sc Ycg1 and for Ycg1-Brn1, the two high-resolution models were refined by NMA using the program SREFLEX (15). This program partitions the structure into pseudo-domains and hierarchically employs NMA to find the domain rearrangements minimizing the χ² between the SAXS curve computed from the refined model and the experimental data.
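The χ² above, including the analytically optimal scaling factor c from weighted least squares, is simple to evaluate directly; a small Python sketch with synthetic curves (CRYSOL itself does considerably more, e.g. hydration-shell modeling):

```python
import numpy as np

def chi2_fit(i_exp, i_model, sigma):
    """Reduced chi^2 between experimental and computed SAXS curves.

    The scale factor minimizing chi^2 has the closed form
    c = sum(Ie*Im/sigma^2) / sum(Im^2/sigma^2).
    """
    w = 1.0 / sigma ** 2
    c = np.sum(w * i_exp * i_model) / np.sum(w * i_model ** 2)
    chi2 = np.sum(((i_exp - c * i_model) / sigma) ** 2) / (len(i_exp) - 1)
    return chi2, c

# Synthetic example: the model matches the data up to a scale of 2
rng = np.random.default_rng(0)
i_model = np.exp(-np.linspace(0.0, 4.0, 200))
sigma = np.full_like(i_model, 0.01)
i_exp = 2.0 * i_model + rng.normal(0.0, 0.01, i_model.size)
print(chi2_fit(i_exp, i_model, sigma))  # chi2 ~ 1, c ~ 2
```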
Because the unbound Ycg1 exhibited signs of oligomerization, a dimeric structure and its proportion at elevated concentrations in solution were further modeled using SASREFMX (16). This program constructs dimeric models and fits a set of SAXS data measured at different concentrations as mixtures of monomers and dimers. The SAXS data for Ycg1 at c = 5 and 10 mg/ml were further modeled with SASREFMX as a mixture of monomers, dimers, and tetramers, with the tetrameric structure built as a dimer of dimers.
The experimental SAXS data described here, as well as the models derived from them, were deposited in the Small Angle Scattering Biological Data Bank under accession numbers SASDFC4 (Ycg1 monomer), SASDFD4 (Ycg1-Brn1 monomer), SASDFE4 (Ycg1 tetramer), SASDFG4 (Ycg1 dimer), and SASDFF4, SASDFH4, SASDFJ4, and SASDFK4 (Ycg1 concentration series) (32).

Acknowledgments: We are grateful to Melissa Graewert for assistance with the light scattering experiments and Markus Hassler for comments on the manuscript.
"Biology",
"Chemistry",
"Materials Science",
"Physics"
] |
Alterations of the gut microbial community structure and function with aging in the spontaneously hypertensive stroke prone rat
Gut dysbiosis, a pathological imbalance of bacteria, has been shown to contribute to the development of hypertension (HT), systemic and neuro-inflammation, and blood-brain barrier (BBB) disruption in spontaneously hypertensive stroke prone rats (SHRSP). However, to date, individual species that contribute to HT in the SHRSP model have not been identified. One potential reason is that nearly all studies of the SHRSP gut microbiota have analyzed samples from rats with established HT. The goal of this study was to examine the SHRSP gut microbiota before, during, and after the onset of hypertension, and in normotensive WKY control rats over the same age range. We hypothesized that we could identify key microbes involved in the development of HT by comparing WKY and SHRSP microbiota during the pre-hypertensive state and longitudinally. Systolic blood pressure (SBP) was measured by tail-cuff plethysmography, and fecal microbiota were analyzed by 16S rRNA gene sequencing. SHRSP showed significant elevations in SBP, as compared to WKY, beginning at 8 weeks of age (p < 0.05 at each time point). Bacterial community structure was significantly different between WKY and SHRSP as early as 4 weeks of age and remained different throughout the study (p = 0.001-0.01). At the phylum level, we observed significantly reduced Firmicutes and Deferribacterota, and elevated Bacteroidota, Verrucomicrobiota, and Proteobacteria, in pre-hypertensive SHRSP, as compared to WKY. At the genus level, we identified 18 bacteria whose relative abundance was significantly different in SHRSP versus WKY at the pre-hypertensive ages of 4 or 6 weeks. In an attempt to further refine bacterial candidates that might contribute to the SHRSP phenotype, we compared the functional capacity of WKY versus SHRSP microbial communities and identified significant differences in amino acid metabolism. Using untargeted metabolomics, we found significant reductions in metabolites of the tryptophan-kynurenine pathway and increased indole metabolites in SHRSP versus WKY plasma. Overall, we provide further evidence that gut dysbiosis contributes to hypertension in the SHRSP model and suggest for the first time the potential involvement of tryptophan-metabolizing microbes.
rats (WKYs) 7-13. At the present time, we know very little about the bacterial genera or the bacterial mechanisms that elicit hypertension in this model. However, two potential candidates recently reported include bile acids 13 and short chain fatty acids 14. Gut bacteria can generate short chain fatty acids by fermentation 3 and metabolize bile acids generated by the host 15.
In the present study, we tested the hypothesis that we could identify key microbes involved in the development of hypertension (HT) by comparing WKY and SHRSP microbiota during the pre-hypertensive state and longitudinally. We demonstrate significantly different gut microbiota communities between WKY and SHRSP from 4 to 20 weeks of age. We observed a number of "candidate" genera whose relative abundance was significantly different between WKY and SHRSP at ages when hypertension was developing in SHRSP. Since a single genus could not be identified, we assessed the functional capacity of the community as a whole and found that a number of genera that differed significantly between WKY and SHRSP are involved in short chain fatty acid and amino acid metabolism. In support of this, we found significant reductions in the short chain fatty acids propionate and butyrate, and reduced tryptophan-kynurenine pathway metabolites, in SHRSP versus WKY plasma. Not only do these data support previous findings that gut dysbiosis contributes to the SHRSP phenotype, they also show complex changes in the SHRSP microbiome with age and demonstrate significant alterations in circulating microbial metabolites that may contribute to the development of HT.
Methods
All animal protocols were approved by the Institutional Animal Care and Use Committee at Baylor College of Medicine, Houston, TX, and conformed to the Guide for the Care and Use of Laboratory Animals, 8th edition, published by the National Institutes of Health (NIH). WKY and SHRSP were obtained from Charles River and mated for at least four generations to produce in-house colonies for each strain. Rats of the same age and group were housed 2-4 per cage with ad libitum access to normal rodent chow (Labdiet 5V5R) and chlorinated water. Rats were housed in autoclaved cages with sterilized bedding (Biofresh pelleted cellulose) and were subjected to a 12 h light (6 AM-6 PM):12 h dark (6 PM-6 AM) cycle. Given the differences in microbiota and inflammatory responses between males and females, the sexes must be treated as separate groups; we included only males in the present study, since inclusion of both sexes would have been prohibitive for a single study.
Studies with WKY and SHRSP. Male SHRSP and WKY rat pups were weaned at 21 days. Beginning at 4 weeks, fecal samples were collected for gut microbiota analysis, and collection continued at intervals until the age of 20 weeks. Fecal pellets were collected for 16S rRNA analysis at 4, 6, 8, 10, 16, and 20 weeks, and systolic blood pressure (SBP) was measured as described every 2 weeks from 6 until 20 weeks of age. SBP was measured in unanesthetized rats using a six-channel CODA high-throughput tail-cuff blood pressure system (Kent Scientific, Torrington, CT). Prior to the initial SBP measurement, all rats were acclimatized to the system. At least 10 consecutive readings without movement artifact were averaged to obtain an individual measurement.
Gut microbiota analysis. Fresh fecal pellets were collected into 1.5 mL tubes while handling the rats, snap frozen, and stored at −80 °C. The samples were sent to the Center for Metagenomics and Microbiota Research (CMMR) at the Baylor College of Medicine, where 16S rRNA gene sequence libraries were generated from the V4 primer region using the Illumina MiSeq platform 12,16,17, after DNA extraction with the MO BIO PowerMag Soil Isolation Kit (MO BIO Laboratories). Reads were de-noised and merged into amplicon sequence variants (ASVs) with the DADA2 pipeline in R 18,19. Taxonomic annotations were generated against DADA2-formatted training FASTA files derived from the SILVA138 database 20. ASVs with identical taxonomic assignment were grouped into taxonomic bins. The 16S rRNA sequencing data are publicly available at GenBank under the accession KFUL00000000. ASVs were analyzed and visualized with ATIMA version 2 (Agile Toolkit for Incisive Microbial Analyses), developed by the CMMR at the Baylor College of Medicine.
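The study's ASV pipeline ran in R (DADA2); as a language-neutral illustration of the binning step described above, grouping ASV counts by identical taxonomic assignment is a simple table aggregation. The sketch below uses pandas with made-up ASV IDs, taxa, and counts.

```python
import pandas as pd

# Hypothetical ASV count table (rows = ASVs, columns = samples)
counts = pd.DataFrame(
    {"WKY_wk4_1": [120, 30, 55], "SHRSP_wk4_1": [40, 90, 10]},
    index=["ASV_001", "ASV_002", "ASV_003"],
)
# Hypothetical taxonomy assignments (e.g. from a SILVA-trained classifier)
taxonomy = pd.Series(
    {"ASV_001": "Firmicutes;Lactobacillus",
     "ASV_002": "Verrucomicrobiota;Akkermansia",
     "ASV_003": "Firmicutes;Lactobacillus"},
    name="taxon",
)

# ASVs with identical taxonomic assignment are summed into one bin
bins = counts.groupby(taxonomy).sum()
# Relative abundance per sample (column-wise proportions)
rel_abund = bins / bins.sum(axis=0)
print(rel_abund)
```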
Microbial functional prediction. ASVs were input into the PICRUSt2 (Phylogenetic Investigation of Communities by Reconstruction of Unobserved States) pipeline for functional prediction as previously described 21,22. Stratified gene families with KEGG ortholog annotations were used to construct gut-brain modules (GBM) 23. Spearman's rank correlation coefficient was used to calculate the correlation and statistical significance between taxa and GBM abundances. Correlations with |ρ| ≥ 0.6 and p < 0.05 were included.
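A sketch of the taxa-module correlation screen described above, in Python with SciPy rather than the original tooling (the abundance vectors are random placeholders):

```python
import numpy as np
from scipy.stats import spearmanr

def screen_correlations(taxa, modules, rho_cut=0.6, p_cut=0.05):
    """Spearman screen between taxa and gut-brain module abundances.

    taxa, modules : dicts mapping a name to an abundance vector across
    the same samples. Returns (taxon, module, rho, p) tuples passing
    |rho| >= rho_cut and p < p_cut, mirroring the thresholds used here.
    """
    hits = []
    for t_name, t_vals in taxa.items():
        for m_name, m_vals in modules.items():
            rho, p = spearmanr(t_vals, m_vals)
            if abs(rho) >= rho_cut and p < p_cut:
                hits.append((t_name, m_name, rho, p))
    return hits

rng = np.random.default_rng(1)
x = rng.random(20)
taxa = {"Akkermansia": x}
modules = {"SCFA_synthesis": x + rng.normal(0, 0.05, 20)}  # correlated
print(screen_correlations(taxa, modules))
```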
Untargeted metabolomics. Plasma was collected from 15-week-old rats in sterile tubes and snap frozen. Samples were submitted to Metabolon, Inc. (Morrisville, NC) for untargeted metabolomics. Plasma samples (100 µL; n = 6-8 per group) were subjected to methanol extraction. The purified supernatant was divided into aliquots corresponding to the various analytical methodologies, then subsequently evaporated and reconstituted with the appropriate analytical injection solvent. Samples were analyzed with four separate methods: two positive mode methods (Pos Early UHPLC-RP/MS/MS and Pos Late UHPLC-RP/MS/MS) and two negative mode methods (Neg UHPLC-RP/MS/MS and Neg UHPLC HILIC/MS/MS) to ensure broad coverage of biochemicals. Metabolites were identified by automated comparison of ion features to a reference library of chemical standards, followed by visual inspection for quality control. For downstream analysis, any missing values were assumed to be below the limits of detection and were imputed with the compound minimum (minimum value imputation). Log10 transformation was performed prior to statistical analysis. Random forest models for untargeted metabolomics were constructed as previously described 13.

Statistical analysis. Data are expressed as means ± standard error of the mean or the standard error of the least squares mean. p < 0.05, adjusted for multiple comparisons or false discovery rate (FDR), was considered statistically significant. Data for taxa abundance were analyzed using the Mann-Whitney U test with an FDR adjustment for multiple comparisons. Two-way repeated measures ANOVA was used to analyze SBP, with post hoc Holm-Sidak tests where appropriate. β diversities were measured using weighted UniFrac, visualized by principal coordinate analysis (PCoA), and statistically analyzed using permutational multivariate analysis of variance (PERMANOVA).
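As a practical note, the metabolomics preprocessing described above (minimum value imputation followed by log10 transformation) is straightforward to reproduce; a small pandas sketch with invented values:

```python
import numpy as np
import pandas as pd

# Hypothetical metabolite table (rows = metabolites, columns = samples);
# NaN marks values below the limit of detection
raw = pd.DataFrame(
    {"WKY_1": [1.2e5, np.nan], "WKY_2": [1.5e5, 3.0e3],
     "SHRSP_1": [0.9e5, 2.0e3], "SHRSP_2": [np.nan, 2.4e3]},
    index=["kynurenine", "indoleacetate"],
)

# Minimum value imputation: each missing value gets the compound minimum
imputed = raw.apply(lambda row: row.fillna(row.min()), axis=1)

# Log10 transform prior to statistical analysis
logged = np.log10(imputed)
print(logged.round(3))
```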
Results
We followed blood pressure in WKY and SHRSP from 6 to 20 weeks. Figure 1 shows that SBP in SHRSPs began increasing between 6 and 8 weeks and continued to increase until 16 weeks, when it plateaued above 200 mmHg. SBP in the WKYs remained between 140 and 150 mmHg at all ages. At intervals, fresh fecal samples were obtained for analysis of bacterial abundance and community structure by 16S rRNA analysis. Figure 2 shows PCoA of weighted UniFrac distances, a measure of β diversity, comparing SHRSPs and WKYs at different ages. Bacterial community differences were highly significant between strains from 4 to 20 weeks of age (p = 0.001-0.01).
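The significance values above come from PERMANOVA on the weighted UniFrac distance matrix. PERMANOVA computes a pseudo-F statistic from the distances and obtains a p-value by permuting the group labels; a compact, dependency-light sketch of that logic (not the implementation used in the study) follows.

```python
import numpy as np

def permanova(dist, groups, n_perm=999, seed=0):
    """One-factor PERMANOVA on a square distance matrix.

    dist   : (n, n) symmetric distance matrix (e.g. weighted UniFrac)
    groups : length-n array of labels (e.g. 'WKY' / 'SHRSP')
    Returns (pseudo_F, p_value) from n_perm label permutations.
    """
    groups = np.asarray(groups)
    n, a = len(groups), len(np.unique(groups))
    d2 = dist ** 2

    def pseudo_f(labels):
        # Total sum of squares from all pairwise distances
        ss_total = d2[np.triu_indices(n, k=1)].sum() / n
        # Within-group sum of squares
        ss_within = 0.0
        for g in np.unique(labels):
            idx = np.where(labels == g)[0]
            sub = d2[np.ix_(idx, idx)]
            ss_within += sub[np.triu_indices(len(idx), k=1)].sum() / len(idx)
        return ((ss_total - ss_within) / (a - 1)) / (ss_within / (n - a))

    f_obs = pseudo_f(groups)
    rng = np.random.default_rng(seed)
    f_perm = np.array([pseudo_f(rng.permutation(groups))
                       for _ in range(n_perm)])
    p = (np.sum(f_perm >= f_obs) + 1) / (n_perm + 1)
    return f_obs, p
```

Note that with 999 permutations the smallest attainable p-value is 1/1000, consistent with the p = 0.001 floor reported here.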
Relative abundances of bacterial phyla at different ages are shown in Fig. 3. The relative abundance of Firmicutes was greater in WKY compared with SHRSP; however, statistical significance was not achieved at 20 weeks. On the other hand, the abundance of Bacteroidota was greater in SHRSP, with significance achieved at 4, 10, and 16 weeks of age. Overall, there was a contraction of Firmicutes and an expansion of Bacteroidota, Verrucomicrobiota, and Proteobacteria in SHRSPs compared with WKYs. Relative abundances at the genus level show a number of differences between WKYs and SHRSPs at different ages (Fig. 4). Note that Fig. 4 presents only a portion of the genera identified; genera of very low abundance and genera that did not reach statistical significance were omitted. A total of 80 classified and unclassified genera were identified in fecal samples from at least one age group. A number of genera were consistently different between strains with age. For example, Akkermansia was increased, or trended toward an increase, in SHRSP in most age groups, with the greatest increases occurring in the younger cohorts. Akkermansia was < 1% in WKYs and was significantly increased to 6% in SHRSP at 4 weeks, increased from 1 to 6% at 6 and 8 weeks, and from 1 to 4% at 10 weeks (Fig. 4). Although the abundance of Akkermansia was increased at 16 and 20 weeks, the differences were not statistically significant.
After comparing community structures between SHRSPs and WKYs at different ages, we evaluated changes in community structure within individual strains as the rats aged. A change in the SHRSP bacterial community could mark an age at which a phenotypic expression such as SBP or BBB integrity would be altered as a result of the altered gut microbiota. PCoAs of weighted UniFrac within individual strains as the rats aged are shown in Supplemental Figure I. SHRSP showed significantly different microbiota structures as assessed by weighted UniFrac distance over the ages of 4-20 weeks (p = 0.001). Similarly, the weighted UniFrac distance was significantly different in WKYs from 4 to 20 weeks (p = 0.001). For both strains, the 4- and 6-week communities segregated from those of all other ages. Furthermore, this shift can be seen in the genera abundances (Fig. 5), where a number of genera showed increases or decreases in abundance between 6 and 10 weeks. Thus, if a shift in community structure is responsible for the increase in SBP, it is likely to have occurred between 6 and 10 weeks, the ages when SBP showed its initial increases (Fig. 1). Next, we aimed to examine the functional capacity (metabolites and/or biochemical pathways) of the WKY and SHRSP gut microbiota. The functional capacity of WKY and SHRSP microbial communities across all ages was inferred using Phylogenetic Investigation of Communities by Reconstruction of Unobserved States (PICRUSt2). From the 16S rRNA sequencing data, PICRUSt2 was used to generate a list of genes predicted in each community. The gene lists were used to calculate a predicted abundance of functional modules involved in gut-brain communication (gut-brain modules; GBM) 23. Fig. 6A shows the correlation scores between GBM abundance and genera abundance. We observed several strong positive correlations between genera abundance and short-chain fatty acid (SCFA) and amino acid (AA) metabolism modules. To validate the predicted differences in SCFA and AA metabolism, we performed targeted and untargeted metabolomics, respectively, in plasma of SHRSP with established HT and of WKY. We observed significant reductions in propionate and butyrate concentrations in SHRSP plasma (Fig. 6B). We performed random forest classification to identify metabolites important in distinguishing WKY from SHRSP 13 and found tryptophan metabolism to be a strong predictor of WKY versus SHRSP genotype 13. Examining individual metabolites, we found significant increases in metabolites along the kynurenine pathway in WKY plasma, including kynurenine, xanthurenate, quinolinate, N-acetyltryptophan, and N-acetylkynurenine (Fig. 6D). In SHRSP plasma, we observed significant increases in metabolites along the indole metabolism pathway, including indoleacetate and indoleacetylcarnitine (Fig. 6E). Bacterial enzymes are capable of converting tryptophan to kynurenine or indole. Our findings suggest that the dysbiosis in SHRSP may lead to alterations in tryptophan metabolism and indicate that further investigation into tryptophan metabolites as a means of microbe-host interaction in the context of HT is warranted.
(Figure 1 caption: Systolic blood pressure (SBP) in WKY and SHRSP from 6 to 20 weeks of age. Two-way repeated measures ANOVA showed statistical main effects of strain, age, and interaction between strain and age (p < 0.001 for all, n = 5-7). *p ≤ 0.002 compared to the corresponding age in WKYs (Holm-Sidak method).)
Discussion
Solid evidence supports the idea that gut dysbiosis is sufficient to elicit HT in the SHRSP; conversely, prevention of dysbiosis attenuates HT 12,13,24. These findings in the SHRSP model of HT support a potential role for gut dysbiosis in the development of HT in humans and other animal models, and also demonstrate the importance of the gut microbiota, in general, in the development and maintenance of pathological states 7-13,25. Given the importance of the gut microbiota in pathological states, we sought to identify the taxa responsible for the HT and to determine when those taxa were altered in the SHRSP microbiota. We report three main findings in the present study. (1) The microbiota community structure in WKY and SHRSP was significantly different as early as 4 weeks of age and remained significantly different for the remainder of the study (Fig. 2).
(2) Given the number of genera that were different between WKY and SHRSP at ages when HT was developing, it is possible that an overall change in the microbial community structure, as opposed to a change in a single or a few bacteria, was responsible for the development of HT (Fig. 4). (3) By assessing the functional capacity of the microbial communities and examining microbial metabolites in plasma, we identified reduced short-chain fatty acids and increased shunting of tryptophan to indole in the SHRSP (Fig. 6). Each of these findings is discussed in more detail below.
(1) The microbiota structures in WKYs and SHRSPs were significantly different at 4 weeks of age and remained significantly different for the remainder of the study (Fig. 2). This is in line with a previous study that showed significant differences between the WKY and SHR microbiota community structure as early as 1 week old26. We observed differences in taxa abundance at the phylum and genus levels (Figs. 3 and 4); however, given the large number of genera that were significantly different between WKY and SHRSP over time, it was impractical to determine an individual genus or group of genera that was responsible for the HT. The microbiota component of HT in SHRSP was likely driven by an overall change in the community and not by a single genus or a small group of genera. (n = 6-15; *, **, and ***p < 0.05, 0.01, and 0.001 over age, respectively, Mann-Whitney U test with FDR correction for multiple comparisons; only genera with relative abundance ≥ 0.5% are shown.)

(2) We observed changes in community structure as WKY and SHRSP aged; however, no age-related changes in the gut microbiota could readily be identified as a causal factor for the development of HT in SHRSP. The community structure as measured by the weighted UniFrac distance showed a clear segregation at 4 and 6 weeks in both WKY and SHRSP when compared to all other age groups (Supplemental Figure I). It is likely that these early differences in community structure were important to the phenotype that developed in later life for both strains of rats26. Changes in community structure in the microbiota of SHRSP subsequent to the development of HT may be secondary to, or responsible for maintaining, the HT.

(3) Given the difficulty in trying to identify a single genus or small group of genera responsible for the SHRSP phenotype, we turned to examine differences in the overall functional capacity of the microbial communities. Using 16S rRNA sequencing data, PICRUSt2 was used to quantify gene abundance in a community based on sequenced bacterial genomes. We focused on gene pathways that are characterized by an involvement in gut-brain communication. We found several strong positive correlations between the microbiota and gene pathways involved in SCFA and amino acid metabolism (Fig. 6A). We further explored each of these pathways by measuring metabolites in the plasma. Similar to previous observations in the SHR model, we observed significant reductions in plasma propionate and butyrate of SHRSP, relative to WKY (Fig. 6B). Impaired SCFA receptor signaling in the gut, vasculature, and kidney has previously been linked to HT14,17,27. Bacteria found in the gut possess the enzymes for converting tryptophan to indole or kynurenine, as well as many of the downstream metabolites28. We observed a number of tryptophan-derived metabolites to be significantly different between WKY and SHRSP. Tryptophan can be converted to serotonin, or enter the kynurenine or indole pathways (Fig. 6C). We found that a number of metabolites in the kynurenine pathway were significantly elevated in WKY relative to SHRSP plasma. Conversely, indoleacetate and indoleacetylcarnitine, metabolites of indole metabolism, were elevated in SHRSP plasma (Figs. 6D and E). These data suggest the dysbiotic SHRSP microbiota preferentially shunts tryptophan to the indole pathway, resulting in elevated indole metabolites and decreased kynurenine metabolites.
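The per-genus testing quoted in the figure legend above (Mann-Whitney U with FDR correction) can be sketched as follows with SciPy and statsmodels; the abundance tables are invented placeholders, not study data.

```python
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
genera = [f"genus_{i}" for i in range(8)]
# Toy relative-abundance tables: 10 WKY and 10 SHRSP samples at one age.
wky = pd.DataFrame(rng.beta(2, 50, size=(10, 8)), columns=genera)
shrsp = pd.DataFrame(rng.beta(2, 50, size=(10, 8)), columns=genera)
shrsp["genus_0"] *= 4  # invented strain difference for the demo

pvals = pd.Series({g: mannwhitneyu(wky[g], shrsp[g],
                                   alternative="two-sided").pvalue
                   for g in genera})
# Benjamini-Hochberg control of the false discovery rate.
reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(pd.DataFrame({"p": pvals, "q": qvals, "significant": reject}))
```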
Increased indole production can lead to the accumulation of uremic toxins, including indoxyl sulfate, p-cresyl glucuronide, and p-cresyl sulfate, which are pro-hypertensive through their inflammatory and oxidative effects in peripheral tissues such as the vasculature and kidney29. However, other indole metabolites negatively correlate with BP. For example, increased salt intake elevates BP while reducing Lactobacillus murinus, indole-3-acetic-acid, and indole-3-lactic-acid30. The disparate effects of various indole metabolites on BP may be secondary to the presence of multiple indole receptors distributed on a wide range of cell types, including epithelium, endothelium, and innate and adaptive immune cells31. Further studies will be required to determine the bacteria involved in the altered tryptophan metabolism of SHRSP and how these metabolites may contribute to the hypertensive phenotype.
Overall, these studies provide further evidence that the dysbiotic microbiota of SHRSP contributes to the hypertensive phenotype of this model. A specific genus, or group of genera, could not be determined as responsible for the observed elevation in BP. Rather, our data support the idea that an overall change in the microbiota structure and microbial metabolites leads to HT. In support of this, we identify significant reductions in circulating propionate and butyrate, as well as alterations in tryptophan metabolites that suggest preferential shunting of tryptophan to indole metabolism in the SHRSP model. | 4,554.2 | 2022-05-20T00:00:00.000 | [
"Medicine",
"Environmental Science",
"Biology"
] |
One‐Dimensional Variational Ionospheric Retrieval Using Radio Occultation Bending Angles: 1. Theory
A new one‐dimensional variational (1D‐Var) retrieval method for ionospheric GNSS radio occultation (GNSS‐RO) measurements is described. The forward model implicit in the retrieval calculates the bending angles produced by a one‐dimensional ionospheric electron density profile, modeled with multiple "Vary‐Chap" layers. It is demonstrated that gradient‐based minimization techniques can be applied to this retrieval problem. The use of ionospheric bending angles is discussed. This approach circumvents the need for Differential Code Bias (DCB) estimates when using the measurements. This new, general retrieval method is applicable to both standard GNSS‐RO retrieval problems and the truncated geometry of EUMETSAT's Metop Second Generation (Metop‐SG), which will provide GNSS‐RO measurements up to about 600 km above the surface. The climatological a priori information used in the 1D‐Var is effectively a starting point for the 1D‐Var minimization, rather than a strong constraint on the final solution. In this paper the approach has been tested with 143 COSMIC‐1 measurements. We find that the method converges in 135 of the cases, but around 25 of those have high "cost at convergence" values. In the companion paper (Elvidge et al., 2023), a full statistical analysis of the method, using over 10,000 COSMIC‐2 measurements, is presented.
Introduction
The radio occultation instrument on EUMETSAT's Metop Second Generation (Metop-SG) satellites will measure up to 600 km above the Earth's surface, and provide measurements for space weather applications. The retrieval of ionospheric profile information from GNSS radio occultation (GNSS-RO) is well established (e.g., Hajj & Romans, 1998; Schreiner et al., 1999), using the Abel transform, which, for circularly symmetric electron densities N_e and (therefore) refractive indices n, and bending angles α that are available at all impact parameters a, is given by

$$\ln n(a_1) = \frac{1}{\pi}\int_{a_1}^{\infty}\frac{\alpha(a)}{\sqrt{a^2-a_1^2}}\,\mathrm{d}a \tag{1}$$

AVHIRO, which retrieves the electron density profile from topside-incomplete RO data by iterating toward an electron density profile, modeling the densities above 500 km by a (simplified) VaryChap layer whose parameters are estimated from Abel-inverted lower-altitude electron densities at the previous iteration, is fast enough for operational implementation but probably not yet accurate enough.
In this work, we present a new method based on a variational (or "optimal estimation") approach (Rodgers, 2000).
The one-dimensional variational (1D-Var) retrieval approach is used extensively in neutral atmosphere GNSS-RO applications (e.g., Healy & Eyre, 2000; Palmer et al., 2000). Variational retrieval is more flexible than the Abel transform inversion, as it does not rely on an idealized measurement geometry. Specifically, the truncated Metop-SG geometry can be accounted for in a straightforward manner as part of the forward problem, y = H(x), which maps the electron density profile information, x, to an observation y (bending angles here). This is in contrast to most GNSS-RO ionospheric retrievals, which, with the exception of Hajj and Romans (1998), are based on slant total electron contents (STECs), S. This is defined as the electron density N_e integrated along the path P between the GNSS and LEO satellites, thus:

$$S = \int_P N_e\,\mathrm{d}s \tag{2}$$

However, STEC values are relative quantities. The ionospheric delay experienced by a radio occultation signal depends on its frequency. This would appear to allow the ionospheric effect to be isolated, and the STEC to be calculated, by taking the difference between the delays at two separate frequencies. This is not directly possible, however, because of a remaining phase ambiguity, which arises because the phase measurements are known only modulo the wavelength (e.g., Dyrud et al. (2008)), and because of differential code biases (DCBs), introduced by both satellite and receiver (e.g., Equation 2 in Montenbruck et al. (2014)). Both these effects need to be accounted for before the STEC can be estimated. In this paper we show how bending angles are related to the vertical derivative of STEC, ∂S/∂a, where a is the impact parameter of the ray path. Taking the derivative removes the sensitivity to any constant offsets (the DCBs) and simplifies the problem.
The standard Abel transform, Equation 1, relates profiles of refractivity and bending angle that range from the surface to infinity. Real observations do not of course extend that far, and bending angles above the maximum height of the Metop-SG measurements, 600 km, are not small enough to be neglected. There are two standard Abel transform methods of generating electron density profiles from such truncated radio occultation signals (e.g., Schreiner et al., 1999). Neither requires absolute TEC, but each faces a difficulty. The first uses bending angles, and requires knowledge of the refractive index at the LEO. The second uses the vertical derivative of the slantwise TEC (or excess phase) (e.g., Equation 3 of Lei et al. (2007)). Unfortunately this necessarily diverges as impact heights approach the LEO, and the (integrable) singularity must be handled somehow. By contrast, in the method proposed in this paper the electron density at any height, including that of the LEO, can be derived from the parameters of the presumed VaryChap layer(s). This also means that electron densities can be inferred above the maximum height of the observations, which makes the method ideally suited for use with truncated bending angles. By construction, the method also enforces positive electron densities, thereby avoiding a failing that can be suffered by Abel inversion in the presence of horizontal gradients (see, e.g., Case 4 in Section 6.1). Finally, the variational method automatically generates measures of the quality of any particular retrieval (number of iterations, cost function at convergence, solution error covariance matrix, etc.); see Section 2. Note, however, that the retrieval method described in this paper, being one-dimensional, cannot handle horizontal gradients because it assumes spherically symmetric refractivity/electron density fields. This drawback is shared by Abel transform methods.
We note that bending angles are assimilated in most global numerical weather prediction (NWP) data assimilation systems without bias correction, and they are considered "anchor measurements" because they constrain the bias corrections applied to other measurements (e.g., Poli et al., 2010). However, in the ionospheric data assimilation literature, radio occultation STECs with bias corrections (DCBs) are still usually assimilated, for example, Angling and Cannon (2004), Yue et al. (2011), and Angling et al. (2021).
The purpose of this paper is to describe the theoretical basis of the new ionospheric 1D-Var retrieval approach. This includes a description of the multiple Vary-Chap layer electron density profile, the relationship between the derivative of the slant TEC and bending angle, and the 1D-Var assimilation system. A companion paper, Elvidge et al. (2023), presents a full statistical validation of the method.
Data Assimilation Preliminaries
In a variational retrieval system the estimate of the state vector is obtained using the observations combined with a priori information. Since the observations and a priori information are only known up to certain levels of confidence, these are usually described using probability density functions (PDFs). By assuming Gaussian PDFs, and that the observation and background errors are uncorrelated, Bayes' theorem can be used to show that this problem can be framed as the minimization of a cost function J (e.g., Talagrand, 2010):

$$J(\mathbf{x}) = \frac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\mathrm{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b) + \frac{1}{2}\left(\mathbf{y}^o-\mathcal{H}(\mathbf{x})\right)^{\mathrm{T}}\mathbf{R}^{-1}\left(\mathbf{y}^o-\mathcal{H}(\mathbf{x})\right) \tag{3}$$

Here, x_b is the background state (specifically, the parameters that define the vertical electron density profile), B is the background error covariance matrix, y^o is the set of observations (specifically, bending angles as a function of impact parameter), R is the observation error covariance matrix, and 𝓗 is the forward operator, which generates pseudo observations from a particular state.
The retrieval solution, x_a, is the state which minimizes the cost function, and it should be consistent with both the background x_b and the observations y^o, to within their expected error statistics. The bending angle profile provides useful information about all the ionospheric variables in the state vector x, and the uncertainty in the estimate of x is significantly reduced as a result of making the measurement. In this context, a useful feature of a 1D-Var retrieval is that it provides the following estimate of the theoretical solution error covariance matrix, A, in terms of B and R:

$$\mathbf{A} = \left(\mathbf{B}^{-1} + \mathbf{H}^{\mathrm{T}}\mathbf{R}^{-1}\mathbf{H}\right)^{-1} \tag{4}$$

where H is the matrix of partial derivatives of the forward model with respect to the state vector elements: H = ∂𝓗/∂x. In well-posed problems the diagonal elements of the A matrix are significantly smaller than the corresponding diagonal elements of the B matrix (A_i,i ≪ B_i,i).
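As a numerical illustration of Equation 4, the sketch below builds A from toy B, R, and H matrices. Only the background standard deviations echo the values quoted later in Section 5; the Jacobian is a random placeholder, so the numbers carry no physical meaning.

```python
import numpy as np

rng = np.random.default_rng(0)
n_state, n_obs = 4, 100
# Background variances for {Nm, hm, Hm, k} as quoted in Section 5.
B = np.diag([(5.0e11) ** 2, 100.0 ** 2, 20.0 ** 2, 0.05 ** 2])
R = (2.0e-6) ** 2 * np.eye(n_obs)        # 2.0 microrad observation error
H = rng.normal(size=(n_obs, n_state))    # placeholder Jacobian dH/dx

A = np.linalg.inv(np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H)
# The measurement shrinks the uncertainty: each ratio is below 1.
print(np.sqrt(np.diag(A)) / np.sqrt(np.diag(B)))
```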
The 1D-Var approach should be more robust to measurement noise than the Abel transform because it is a weighted least-squares approach; it will not over-fit the measurements if the assumed observation error statistics, R, are a reasonable approximation of the actual observation error statistics. The retrieval also produces useful diagnostics of the quality of the solution. These include the number of iterations required to converge, and the cost at convergence, J(x_a): if the assumed error statistics are well specified, and m is the size of the observation vector, the expectation value is E[2J(x_a)] = m ± √(2m) (e.g., Rodgers (2000)).
Practical details of the solution method are discussed in Section 5.
Theory
To overcome the need for estimation of the Differential Code Biases (DCBs), in this work the derivative of the slantwise TEC S with respect to the impact parameter a, ∂S/∂a, is assimilated. The latter is the quantity used in the Abel transform solution for refractivity (see Equation 14 in Schreiner et al. (1999)), and we will show that, to a good approximation, it is proportional to the difference between the L2 and L1 bending angles, plus a term that involves the electron density at the LEO.
The STEC is the integrated electron density (N_e) along the path P between the GNSS and the LEO satellites (see Equation 2). To a first approximation the delay ϕ_i (in m) in the phase of the carrier wave (of frequency f_i), relative to the vacuum, accumulated along the path P is given by

$$\phi_i = \int_P (n_i - 1)\,\mathrm{d}s \tag{5}$$

where n_i is the refractive index at frequency f_i, which is approximately given by (e.g., Schreiner et al., 1999)

$$n_i = 1 - \frac{\kappa N_e}{f_i^2} \tag{6}$$

in which the proportionality constant κ ≈ 40.3 m³ s⁻².
If the electron density is assumed to be only a function of height (i.e., is spherically symmetric), the STEC S between a GNSS satellite at radius r_G and a LEO satellite at radius r_L is easily shown, from Equation 2, to be given approximately by

$$S(a) = \int_a^{r_G}\frac{N_e(r)\,r}{\sqrt{r^2-a^2}}\,\mathrm{d}r + \int_a^{r_L}\frac{N_e(r)\,r}{\sqrt{r^2-a^2}}\,\mathrm{d}r \tag{7}$$

in which N_e(r) is the electron density at radius r, and the impact parameter a is nearly equal to the radial distance to the tangent point, r_T. (For example, even for a high electron density of 3 × 10¹² m⁻³ at a height of 300 km, which is appropriate for the F2 peak in daytime, solar maximum conditions, n_i − 1 ≈ −8 × 10⁻⁵ at the L2 frequency of about 1.2 GHz. This means that a and r_T differ by less than 600 m.) Assuming the electron density is zero at the GNSS, integrating Equation 7 by parts gives

$$S(a) = N_e(r_L)\sqrt{r_L^2-a^2} - \int_a^{r_G}\frac{\mathrm{d}N_e}{\mathrm{d}r}\sqrt{r^2-a^2}\,\mathrm{d}r - \int_a^{r_L}\frac{\mathrm{d}N_e}{\mathrm{d}r}\sqrt{r^2-a^2}\,\mathrm{d}r \tag{8}$$

The derivative of Equation 8 with respect to a is:

$$\frac{\partial S}{\partial a} = -\frac{a\,N_e(r_L)}{\sqrt{r_L^2-a^2}} + a\int_a^{r_G}\frac{\mathrm{d}N_e}{\mathrm{d}r}\frac{\mathrm{d}r}{\sqrt{r^2-a^2}} + a\int_a^{r_L}\frac{\mathrm{d}N_e}{\mathrm{d}r}\frac{\mathrm{d}r}{\sqrt{r^2-a^2}} \tag{9}$$

Equation 9 is singular at a = r_L, as noted in Lei et al. (2007). This singularity is necessary for the Abel inversion using slantwise TECs to be well-behaved.
Note that the integrals in Equation 9 are closely proportional to the standard Abel transform approximation to the ionospheric bending angle at impact parameter a and frequency f_i, α_i(a), namely (Angling et al., 2018; Kursinski et al., 1997; Vorob'ev & Krasil'nikova, 1994):

$$\alpha_i(a) = -2a\int_a^{\infty}\frac{\mathrm{d}n_i}{\mathrm{d}r}\frac{\mathrm{d}r}{\sqrt{r^2-a^2}} = \frac{2\kappa a}{f_i^2}\int_a^{\infty}\frac{\mathrm{d}N_e}{\mathrm{d}r}\frac{\mathrm{d}r}{\sqrt{r^2-a^2}} \tag{10}$$

This follows on using Equation 6, n_i ≈ 1 (and therefore r_T ≈ a) where appropriate, and replacing the infinite upper limits of the integrals by distances to the satellites.
The vertical gradient of the phase delay in Equation 5 is ∂ϕ_i/∂a. Therefore, given measurements at two frequencies, f_1 and f_2, and assuming straight line paths with the same impact parameter, a, we can use Equation 9 to write the difference in the phase delay gradients as:

$$\frac{\partial(\phi_1-\phi_2)}{\partial a} = \left(\alpha_2(a)-\alpha_1(a)\right) + \kappa\left(\frac{1}{f_1^2}-\frac{1}{f_2^2}\right)\frac{a\,N_e(r_L)}{\sqrt{r_L^2-a^2}} \tag{11}$$

There is a fortuitous cancellation of errors relevant to this problem. In the processing of real measurements, bending angle values are derived from the Doppler shift values assuming that the refractive index at the LEO satellite is unity: n(r_L) = 1. For circular orbits this assumption does not affect the impact parameter, a, but it introduces a frequency dependent negative bias, b_i, in the observed bending angles. (The proportionality between Doppler shift and impact parameter for circular orbits is also exploited in the Full Spectral Inversion (FSI) technique (Jensen et al., 2003).) This is equal to (see Equation A6 of Schreiner et al. (1999)):

$$b_i = -\frac{\kappa}{f_i^2}\frac{a\,N_e(r_L)}{\sqrt{r_L^2-a^2}} \tag{12}$$

which, being inversely proportional to the square of the frequency, cancels out in the usual dual frequency ionospheric correction of the bending angles (Vorob'ev & Krasil'nikova, 1994). The observed ionospheric bending angles, α_i^obs(a), will therefore be related to the true bending angles, α_i(a) (see Equation 10), according to

$$\alpha_i^{\mathrm{obs}}(a) = \alpha_i(a) + b_i \tag{13}$$

Comparison with Equation 11 therefore reveals that

$$\frac{\partial(\phi_1-\phi_2)}{\partial a} = \alpha_2^{\mathrm{obs}}(a) - \alpha_1^{\mathrm{obs}}(a) \tag{14}$$

Therefore, the ionospheric retrieval can use either observed bending angle differences or the derivative of phase delay differences with respect to impact parameter.
Observations
Accurate estimation of the uncertainty of the observations is crucial to ensure the observations are not over-fitted.
In general it is difficult to estimate the uncertainty of observations, since in most cases there is no additional, independent, reference truth.
As well as errors in the observations themselves, there are also errors arising from the use of a 1D-retrieval rather than a more realistic 3D-retrieval. Such errors obviously depend on the electron density distribution, which varies with location, time of day and solar conditions. To estimate these errors, electron density distributions appropriate to 143 Metop-SG occultations have been generated with 1D and 3D versions of the climatological ionosphere electron density model NeQuick (Nava et al., 2008), at four different solar activity levels, represented by the solar radio flux at 10.7 cm (F10.7). Specifically we use F10.7 values of 80, 130, 180, and 230 solar flux units (sfu) (where 1 sfu = 10⁻²² W m⁻² Hz⁻¹).
Define N_e^{3D,i}(h) as the electron density at height h that results from calculating the phase delays incurred by signals propagating through the i-th occultation plane, using 3D NeQuick electron densities, and then passing the delays through an Abel transform to infer an electron density profile at the tangent point. N_e^{1D,i}(h) is the same, except that it uses a spherically symmetric electron density field, which matches the 1D NeQuick field at the occultation tangent point. The difference between the two is a measure of the errors in an inferred electron density distribution incurred by the use of a 1D rather than a 3D retrieval. For each altitude h the absolute mean electron density error ϵ across all 4 × 143 = 572 simulated occultations is calculated thus:

$$\epsilon(h) = \frac{1}{572}\sum_{i=1}^{572}\left|N_e^{3D,i}(h) - N_e^{1D,i}(h)\right| \tag{15}$$

To ensure that "rare" geometries, such as occultations that cross the day/night terminator, do not distort the statistics, any outliers at each altitude are removed. Outliers are defined as any value which is less than 1.5 times the interquartile range (IQR) below the first quartile (Q1) of the data, or greater than 1.5 times the IQR above the third quartile (Q3); that is, only data, d, in the range

$$Q_1 - 1.5\,\mathrm{IQR} \le d \le Q_3 + 1.5\,\mathrm{IQR} \tag{16}$$

are kept at each altitude. The bending angle profile resulting from passing the 1D electron density difference ϵ(h) in Equation 15 through the Abel transform (which is linear in N_e) is shown in Figure 1. The blue vertical line at 2.10 μrad shows the average mean absolute error, and shows that 2.0 μrad is an excellent single value error to use. It is comparable to the value used in neutral atmosphere applications in the middle/upper stratosphere. For example, ECMWF uses 3.0 μrad above about 32 km when assimilating GNSS-RO data. It is clear, however, that the error is height-dependent. Rather than directly using the average errors from Figure 1, which are subject to sampling errors themselves, the following Gaussian fit to the data could be used in a 1D-Var retrieval:

$$\text{bending angle error} = 3.8\,\mu\text{rad} \times \exp\!\left(-\frac{(h-h_0)^2}{2\sigma^2}\right) \tag{17}$$

where h is the altitude in km, and h_0 and σ are the fitted centre and width of the Gaussian. A minimum value of 1.0 μrad is imposed. Equation 17 is plotted in red in Figure 1. This function provides a similar residual sum of squares error to a fifth order polynomial fit, but without the associated numerical instabilities.
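A short sketch of the outlier screen of Equation 16 and the height-dependent error model of Equation 17 with its 1.0 μrad floor; since the fitted Gaussian centre and width are not given above, h0_km and sigma_km below are placeholder values.

```python
import numpy as np

def iqr_filter(d):
    """Keep values within [Q1 - 1.5*IQR, Q3 + 1.5*IQR] (Equation 16)."""
    q1, q3 = np.percentile(d, [25, 75])
    iqr = q3 - q1
    return d[(d >= q1 - 1.5 * iqr) & (d <= q3 + 1.5 * iqr)]

def bending_angle_error_urad(h_km, h0_km=300.0, sigma_km=100.0):
    """Equation 17 with a 1.0 microrad floor; h0_km and sigma_km are
    placeholders for the fitted centre and width."""
    gauss = 3.8 * np.exp(-((h_km - h0_km) / sigma_km) ** 2 / 2.0)
    return np.maximum(gauss, 1.0)

print(iqr_filter(np.array([1.0, 1.1, 0.9, 1.05, 8.0])))   # drops 8.0
print(bending_angle_error_urad(np.linspace(100.0, 600.0, 6)))
```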
Electron Density Model
To reconstruct the ionosphere from the observations described in Section 3.2, a model electron density state is needed for the retrieval. Here we assume that the ionosphere can be modeled as a collection of one-dimensional 'Vary-Chap' electron density layers.
The standard description (e.g., Wang et al., 2021) of a Vary-Chap electron density profile (described in Reinisch et al. (2007) as a generalization of an α-Chapman profile based on the work by Rishbeth and Garriott (1969)) with a height-dependent scale height, H(h), is given by

$$N_e(h) = N_m\sqrt{\frac{H_m}{H(h)}}\,\exp\!\left\{\tfrac{1}{2}\left[1 - z(h) - e^{-z(h)}\right]\right\} \tag{18}$$

where

$$z(h) = \int_{h_m}^{h}\frac{\mathrm{d}h'}{H(h')} \tag{19}$$

In these equations N_m is the peak electron density, which occurs at h = h_m; H_m is the scale height at h_m. The estimation of the Vary-Chap parameters in AVHIRO is based on minimizing a least squares cost function with two terms (Lyu et al., 2019). The first term is the fit to a previous state estimate (rather than a "background" constraint); the second is the fit to the observed STEC measurements, which can be calculated from the L1 minus L2 phase delay measurements.
In practice the height-dependent scale height, H(h), in Equation 18 can be difficult to determine (Kutiev et al., 2009; Nsumei et al., 2012; Wang et al., 2021). Throughout this work the scale height used in the Vary-Chap profile varies linearly with height above the peak, H(h) = H_m + k(h − h_m). This means that Equations 18 and 19 reduce to

$$N_e(h) = N_m\sqrt{\frac{H_m}{H(h)}}\,\exp\!\left\{\tfrac{1}{2}\left[1 - z - e^{-z}\right]\right\}, \quad \text{where} \quad z = \frac{\log\left(H(h)/H_m\right)}{k} \tag{20, 21}$$

These equations apply if h > h_m. If h ≤ h_m, or if k ≤ 10⁻³, the 'standard' Chapman layer approximation, which is given by the limit as k tends to 0 of the above, applies. This is given by:

$$N_e(h) = N_m\exp\!\left\{\tfrac{1}{2}\left[1 - z - e^{-z}\right]\right\}, \quad \text{where} \quad z = \frac{h - h_m}{H_m} \tag{22, 23}$$

A Vary-Chap profile with "standard" parameters is plotted between 100 and 600 km in Figure 2. This one layer Vary-Chap profile provides a good approximation to the standard electron density profile. However, a better approximation can be formed through the addition of multiple Vary-Chap profiles, for example, one for each ionospheric layer (D, E, F1, and F2). It can also be beneficial to introduce a "topside" layer, to try to account for the systematic underestimation of electron densities above the F2 peak height given by the Vary-Chap model. (Prol et al. (2019) show that an F2 scale height that varies quadratically, rather than linearly, with height gives a better fit to topside sounder measurements.) Various multi-layer Vary-Chap profiles using the parameters in Table 1 are shown in Figure 3.
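A sketch of a single linear-scale-height Vary-Chap layer as written in Equations 20-23, with the Chapman fallback for h ≤ h_m or very small k; the parameter values in the demonstration call are illustrative, not the "standard" parameters of Equation 26.

```python
import numpy as np

def vary_chap(h, Nm, hm, Hm, k):
    """One Vary-Chap layer (Equations 20-23): electron density at h (km)."""
    h = np.asarray(h, dtype=float)
    H = np.maximum(Hm + k * (h - hm), 1e-9)   # linear scale height
    vary = (h > hm) & (k > 1e-3)
    z = np.where(vary, np.log(H / Hm) / k,    # Vary-Chap branch
                 (h - hm) / Hm)               # Chapman limit k -> 0
    scale = np.where(vary, np.sqrt(Hm / H), 1.0)
    return Nm * scale * np.exp(0.5 * (1.0 - z - np.exp(-z)))

h = np.linspace(100.0, 600.0, 6)
print(vary_chap(h, Nm=1e12, hm=300.0, Hm=50.0, k=0.1))
```

A multi-layer profile is then simply the sum of several such calls with different parameter sets, one per layer.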
The resulting five-layer Vary-Chap model provides an excellent approximation to the ionosphere, with realistic looking E and F1 regions. Using a fifth layer for the topside addresses the underestimation highlighted by Prol et al. (2019). In practice the addition of the D-region layer has very little impact on the overall profile. Figure 4 shows the difference between a five-layer (F2+F1+E+D+Topside) and four-layer (F2+F1+E+Topside) version, that is, the difference in electron density from including the D-layer. As expected, the impact of including a D-region in the model increases with decreasing altitude, although the maximum difference is approximately 2 × 10⁷ m⁻³, which is over three orders of magnitude smaller than the absolute density at that altitude. The differences are insignificant, yet they add considerable time to the calculations (see Section 6.2).
Practical Considerations
The equations of Sections 3.1 and 4 are encoded numerically as follows. The forward model calculates the electron density profile defined by the four Vary-Chap parameters {N_m, h_m, H_m, k} according to Equations 20-25. The integrals needed to calculate the slantwise TEC, S, according to Equation 8 are estimated numerically by Simpson's rule, using at least 30 points between the tangent point and the LEO (counted twice), and 10 points between the LEO and the GNSS (counted once). The integrable singularity at r = a is handled by working in terms of cosh⁻¹(r/a) rather than r. The procedure is repeated for each Vary-Chap layer. The resulting ∂S/∂a is related to the vertical gradient of the excess phases according to Equation 11, and this in turn is equated to the bending angle differences according to Equation 14.
The layers parameterized in Table 1 capture the key features of the ionosphere, without complicating the solution by trying to model smaller scale structures at lower altitudes.
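The quadrature just described can be sketched as follows: substituting u = cosh⁻¹(r/a) removes the integrable singularity at r = a from integrals of the form ∫ N_e(r) r/√(r² − a²) dr, after which Simpson's rule applies on a uniform grid in u. The electron density, radii, and point count below are illustrative.

```python
import numpy as np
from scipy.integrate import simpson

def stec_branch(ne_of_r, a, r_top, npts=31):
    """Integral of Ne(r) r / sqrt(r^2 - a^2) dr from r = a to r = r_top,
    computed on a uniform grid in u = arccosh(r/a) (no singularity)."""
    u = np.linspace(0.0, np.arccosh(r_top / a), npts)
    r = a * np.cosh(u)          # r = a cosh(u), so dr = a sinh(u) du and
    # r / sqrt(r^2 - a^2) dr = a cosh(u) du: the integrand stays finite.
    return simpson(a * np.cosh(u) * ne_of_r(r), x=u)

# Check against a closed form: Ne = 1 gives sqrt(r_top^2 - a^2).
a, r_top = 6671.0, 7171.0  # km, illustrative tangent and LEO radii
print(stec_branch(lambda r: np.ones_like(r), a, r_top))
print(np.sqrt(r_top**2 - a**2))
```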
The tangent linear model, H, which is needed both to minimize the cost function J(x) of Equation 3 and to evaluate the solution error covariance matrix A via Equation 4, is calculated by evaluating manually differentiated counterparts of the non-linear model.
The cost function J(x) is minimized by means of a standard version of the Levenberg-Marquardt minimization algorithm (Press et al., 1992), in which the 'diagonal weighting factor' (usually called λ), which determines how closely the change in x follows the path of steepest descent, is multiplied by 0.1 if the cost function is decreasing, and by 100 if the cost function starts increasing. Iterations are deemed to have converged when changes to the state vector or to the cost function are sufficiently small. In addition, 'unphysical' state vector components, such as negative peak electron densities, are handled during the iterative phase of the minimization, usually by resetting them to a few percent of the associated errors, that is, the square roots of the diagonal elements of B. These are deliberately chosen to be rather large: typically around {5.0 × 10¹¹ m⁻³, 100 km, 20 km, 0.05} for {N_m, h_m, H_m, k} respectively.
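The damping schedule described above can be sketched schematically as follows; this is an outline of a standard Levenberg-Marquardt loop with the stated λ updates, not the ROPP implementation. The caller supplies a function returning the cost, its gradient, and an approximate Hessian.

```python
import numpy as np

def lm_minimize(cost_grad_hess, x0, max_iter=50, tol=1e-4):
    """Minimize J(x); cost_grad_hess returns (J, gradient, approx Hessian)."""
    x, lam = np.asarray(x0, dtype=float), 1e-3
    J, g, Hess = cost_grad_hess(x)
    for _ in range(max_iter):
        step = np.linalg.solve(Hess + lam * np.diag(np.diag(Hess)), -g)
        J_new, g_new, H_new = cost_grad_hess(x + step)
        if J_new < J:                 # success: damp less (more Gauss-Newton)
            x, lam = x + step, lam * 0.1
            if abs(J - J_new) < tol and np.max(np.abs(step)) < tol:
                return x, J_new       # converged: small step and cost change
            J, g, Hess = J_new, g_new, H_new
        else:                         # failure: damp more (steepest descent)
            lam *= 100.0
    return x, J  # hit the iteration cap: flag as a convergence failure

# Toy quadratic with minimum at (1, 2):
f = lambda x: ((x[0] - 1) ** 2 + (x[1] - 2) ** 2,
               np.array([2 * (x[0] - 1), 2 * (x[1] - 2)]),
               2.0 * np.eye(2))
print(lm_minimize(f, [10.0, -5.0]))
```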
The results of this paper have been produced by code written in Python 3. Another, widely available version, which is written in Fortran 95, has been part of the Radio Occultation Processing Package (ROPP) since version 11.0 (released January 2022). ROPP is a collection of software, build scripts, test scripts and documentation, which is intended to help users to process RO data. It is maintained, developed and supported by EUMETSAT, through the Radio Occultation Meteorology Satellite Applications Facility (ROM SAF), and is freely available to download from its website (ROM SAF, 2023). Users should bear in mind that ROPP is under constant development, and that its code may not therefore exactly match that described in this paper. The differences, however, should be small.
Example Retrievals
The 1D-Var retrieval results are compared with the AVHIRO retrieval (Lyu et al., 2019) and an Abel transform solution. The Abel transform retrievals are not absolute electron density values, because we do not add the electron density at the LEO to the profile. In addition, we have not corrected (calibrated) the bending angles to only include the section of the path below the LEO satellite, by subtracting positive elevation values with the same impact parameter (see Section 3.1 of Schreiner et al. (1999)). However, the Abel solution should indicate whether or not the 1D-Var results look reasonable. Our implementation of the Abel transform assumes that the L2-L1 bending angle differences vary linearly in the vertical between observations. This means that the Abel transform is linear, so the retrieved electron density profile can be computed as a matrix multiplied by a vector of (differenced) bending angles.
The performance of the 1D-Var is illustrated with four cases (see Elvidge et al. (2023) for the complete analysis), which have been chosen for their differing retrieval characteristics and convergence properties.
Case 1
The first case, shown in Figure 5, can be considered a good retrieval, and is the same example as shown in the top right panel of Figure 4 in Lyu et al. (2019). The 1D-Var retrieval uses two layers, an F2 layer and an F1 layer, two layers being a reasonable compromise between accuracy (which increases with the number of layers) and robustness and CPU time (which decrease with it); see Section 6.2. (Also see Elvidge et al. (2023), which contains an extensive study of the impact of adding more layers to the 1D-Var retrieval method described in this paper.) A one-off Abel transform solution has also been added, to replicate Figure 4 of Lyu et al. (2019) more completely. The three approaches agree closely around the profile peak (between approximately 200 and 400 km), but the 1D-Var solution differs from the AVHIRO and Abel transform solutions outside this region.
Case 2
The second case, shown in Figure 6, compares the 1D-Var retrieval with those from COSMIC-2 (UCAR Cosmic Program, 2019) and a closely located ionosonde. Ionosondes step through a range of HF frequencies transmitted vertically upwards and measure the return echoes, which enables them to image the vertical profile of the ionosphere up to the peak density. (Ionosonde density profiles above the peak are less reliable, because they depend on a background model, which is why they are plotted as dashed lines in this paper.) Ionosonde observations are widely used as reference observations for comparative studies (e.g., Feltens et al., 2011; Elvidge et al., 2014; Scherliess et al., 2011). Here an ionosonde observation (profile) from within 200 km of the location of the occultation has been used to demonstrate the 1D-Var retrieval (a full statistical study is undertaken in Elvidge et al. (2023)). Also shown is the convergence in bending angle space, that is, the observed (blue) and forward modeled solution (green) bending angle differences α₂ − α₁.
In this case the 1D-Var solution is in very close agreement with the ionosonde observations throughout the whole profile. Above approximately 300 km and below 160 km there are some deviations from the observations, but these are small. The COSMIC-2 retrieval is also very good, but overestimates the peak electron density. The COSMIC-2 profile shows more structure below 150 km, presumably resulting from the variability in the α₂ − α₁ observations in that region. The 1D-Var solution is the result of fitting observations between 500 and 175 km (see Section 5), and therefore has no knowledge of this structure. Even so, the forward modeled bending angle differences give a reasonable fit to the observations.
Case 3
The third case, Figure 7, again compares the 1D-Var retrieval with COSMIC-2 and a nearby ionosonde (in the same manner as described in Section 6.1.2), and both the COSMIC-2 and 1D-Var retrievals are fairly similar. The peak heights of the profiles, at about 200 km, are similar, and both are slightly higher than that reported by the ionosonde. (Note that autoscaled ionosonde observations should be treated carefully, and the errors in the F2 peak height can be reasonably large; as much as 10%, according to Themens et al. (2022).) The COSMIC-2 retrieval slightly overestimates the peak density whilst the 1D-Var slightly underestimates it. Between the peak of the electron density profile and about 150 km, the COSMIC-2 retrieval more closely matches the observations. Above about 275 km the two retrievals are very similar. (Differences from the ionosonde profile in this region should be treated skeptically, for reasons explained in Section 6.1.2.)
Case 4
In the final case, shown in Figure 8, the COSMIC-2 retrieval is generally closer to the ionosonde observation than the 1D-Var retrieval. Again, both retrievals have similar heights of peak density, but the COSMIC-2 profile more closely matches the observations above and below the peak. However, the COSMIC-2 retrieval returns negative electron densities below 100 km, whereas the 1D-Var retrieval cannot, for reasons described in Section 5.
Convergence and Computational Expense
Elvidge et al. (2023) provides a full statistical analysis of the results of the 1D-Var retrievals compared to Abel-transform-based COSMIC-2 retrievals. This section simply provides a flavor of the 1D-Var ionospheric retrieval method's performance on 143 bending angle profiles provided by the Institut d'Estudis Espacials de Catalunya (IEEC) in Barcelona, Spain. These are COSMIC-1 measurements from 18 September 2011. The data contain the "geometry free" (ϕ₁ − ϕ₂) phase differences as a function of impact parameter, a, up to the COSMIC-1 satellite, which operated at an altitude of about 800 km.
The simulated differenced bending angles α₂ − α₁ are computed with a finite difference approximation to ∂(ϕ₁ − ϕ₂)/∂a. The vertical separation between the bending angles is typically 500 m. A common radius of curvature of 6,371 km is used for all cases, since it was not included in the test data sets.
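A minimal sketch of this step: the differenced bending angle α₂ − α₁ is taken as the finite-difference derivative of the geometry-free phase with respect to impact parameter (Equation 14). The phase profile below is synthetic.

```python
import numpy as np

a = np.arange(6471e3, 7171e3, 500.0)   # impact parameter (m), 500 m spacing
phi_gf = 0.1 * np.exp(-((a - 6671e3) / 5e4) ** 2)  # synthetic phi1 - phi2 (m)

alpha_diff = np.gradient(phi_gf, a)    # alpha2 - alpha1 (radians), Eq. 14
print(np.abs(alpha_diff).max() * 1e6, "microrad (peak)")
```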
Although adding layers seems to improve the overall shape of the retrieved ionosphere, this comes at the cost of higher computational time and a larger number of iterations required for the 1D-Var to converge. This is shown in Table 2. Note that the maximum number of iterations was set to 50. Any retrievals needing more iterations than this were recorded as convergence failures, and discarded from further analysis. On the basis of the 143 profiles studied here, two layers generate a reasonable fit to the observations at a moderate computational cost, which is suitable for illustrating the method. In the companion paper Elvidge et al. (2023), however, which uses a much better background model on a much larger sample, four layers (F2, F1, E and topside) work better, having a smaller RMS error with respect to ionosondes (albeit at a lower convergence rate). The correct number to use in any particular application will depend on the balance of the competing demands of speed, robustness and accuracy that the user feels appropriate for that application.
It can be seen from Table 2 that the average time per iteration increases as the number of layers increases.
However, after an initial jump in the average number of iterations between one- and two-layer models, the average number of iterations for converging observations remains fairly steady (at around 30). This leads to a monotonic increase in the average total length of time needed for an observation to converge using the 1D-Var, as shown in Figure 9. In addition, the percentage of observations that converge decreases steadily as the number of layers increases, from 98.6% with just one layer to only 58.7% with five layers. More realistic retrieved electron density profiles therefore come at a cost in CPU time and robustness.
Discussion and Conclusions
This paper has described a new 1D-Var ionospheric retrieval method. This approach can be applied to the Metop-SG measurement geometry, which will truncate the ionospheric GNSS-RO measurements around 600 km above the surface, as well as the 'standard' GNSS-RO measurement geometry, in which measurements are assumed to be available at all heights.
The original plan was to use the difference between the slantwise TECs of the L1 and L2 radio signals, expressed as a function of impact parameter, but the L2-L1 observed bending angle differences have been used instead. This approach is closer to the way that GNSS-RO data are used in neutral atmosphere applications, such as operational NWP.
A new 1D ionospheric bending angle forward operator has been developed, which computes L2-L1 bending angle differences as a function of impact parameter. This operator assumes that the ionospheric electron density can be modeled as a collection of 'Vary-Chap' layers. As suggested by a comparison of bending angles generated by 1D- and 3D-NeQuick electron density models, the L2-L1 bending angle uncertainty is assumed to be a constant 2.0 μrad, and vertical error correlations are neglected. Some experiments were made using the more complicated error formula of Equation 17, but the results were found to differ little from the same experiments using 2.0 μrad. For simplicity and reproducibility, therefore, and to illustrate the method, the constant value was adopted.
We find that gradient-based minimization techniques can be successfully applied to this retrieval problem, and that the non-linear, nested exponential nature of the Vary-Chap profiles (see Equation 20) does not cause problems. This is in contrast to the assertion of Lyu et al. (2019). Typically, at least two out of three retrievals converge within 50 iterations, although this convergence rate decreases as more Vary-Chap layers are introduced. The iteration count, and the high final cost functions of the retrievals that fail to converge, provide useful diagnostic information, as does the automatically generated solution error covariance matrix A.
A few example 2-layer 1D-Var retrievals have been compared against the results of the AVHIRO method, Abel transform retrievals, and nearby ionosondes. In general the 1D-Var method performs at least as well as AVHIRO and the Abel transform, and produces a close fit to the observed bending angle differences α₂ − α₁ in the region where it is supposed to. By construction, 1D-Var also avoids one drawback of the Abel inversion technique, namely the generation of negative electron densities. Adding more layers can improve the fit, but at the cost of longer CPU times and a lower convergence rate.
The a priori information is essentially a first guess used to start the minimization, rather than a strong constraint on the final 1D-Var solution, because the background errors are (deliberately) rather large. Better a priori information might speed up convergence but would not necessarily improve the 1D-Var solution. It would, however, make it easier to screen out poor bending angles at the start of the minimization.
The key suggestion of this paper is that it may be possible to use differenced (L1-L2) bending angles in ionospheric data assimilation (DA) systems. Usually, such systems assimilate slantwise TEC values, but these require a correction for the Differential Code Biases (DCBs). Constant DCBs during the occultation, however, will not affect the bending angles derived from raw phase delays, and can therefore be ignored. This suggestion should at some point be tested in the context of a more formal data assimilation system.
Figure 1. Mean absolute difference in bending angles derived from the electron density differences ϵ(h) defined by Equation 15. Vertical average in blue. Fitting function of Equation 17 in red.
Figure 2. Example of a Vary-Chap profile with "standard" parameters given by Equation 26.
Figure 3. Examples of multi-layer Vary-Chap profiles with the parameters given in Table 1.
Figure 6. Comparison of the 1D-Var electron density profile retrieval with the Abel transform retrieval of a COSMIC-2 profile and a nearby ionosonde. Also shown are the observations and the forward modeled solution in bending angle space (i.e., α₂ − α₁). (Ionosonde profiles above the peak appear as dashed lines, as they are less reliable there.)
Figure 8. Comparison of the 1D-Var electron density profile retrieval with the Abel transform retrieval of a COSMIC-2 profile and a nearby ionosonde. (Ionosonde profiles above the peak appear as dashed lines, as they are less reliable there.)
Figure 7. Comparison of the 1D-Var electron density profile retrieval with the Abel transform retrieval of a COSMIC-2 profile and a nearby ionosonde. (Ionosonde profiles above the peak appear as dashed lines, as they are less reliable there.)
Table 2
Average Time Per Iteration, Percentage of Observations Which Converge, and Statistics of the Number of Iterations Needed for Convergence, for up to Five Vary-Chap Layers.

Figure 9. Average time taken per iteration (blue) and average length of time for an observation to converge (red). | 7,998.2 | 2024-01-01T00:00:00.000 | [
"Physics",
"Engineering"
] |
Estimation of the Radioactivity Produced in Patient Tissue during Carbon Ion Therapy
Nuclear interactions of projectile carbon ions in biological soft tissue are studied for cancer treatment purposes. Elastic interaction of carbon ions with the carbon, oxygen and nitrogen nuclei present in soft tissue leads to beam divergence, especially in the Bragg peak region, where the carbon ion is slowing down. Monte Carlo simulation shows the amount of carbon ion beam divergence in soft tissue. For a carbon ion beam with 2.4 GeV energy and 2 mm diameter, at 85 mm penetration depth the beam spreads out to 4 mm diameter. Non-elastic interactions are modeled as well. Such interactions are important due to the secondary radiation produced in the patient's body. The product particles include positrons and neutrons, which are important in therapeutic dose verification with PET imaging and as extra dose in the hospital environment, respectively. The computer code ALICE produces reaction cross sections that might be used to roughly estimate the neutron and positron yield. ALICE was used to assess the cross sections and yields of products from the interaction of carbon nuclei with soft tissue.
Introduction
The application of charged particles such as protons and carbon ions is being developed for the treatment of cancerous tumors (Khoroshkov & Minakova, 1998). This is due to the suitability of such particles for tailoring the radiation dose distribution to the geometrical shape of the tumor, and the capability of carbon ions to damage radio-resistant tumors (Schardt 2007). Unlike photons, charged particles have a finite range in matter with little scattering. The increase of energy loss with decreasing velocity is characteristic of all ions. Charged particles slow down as they travel through matter as a result of electromagnetic interactions, including Coulomb scattering. The slower they move, the more efficiently they ionize atoms in their path and the more likely they are to interact with atomic nuclei. This means that the highest radiation dose is delivered at the points in the body at which the charged particles stop, while the dose elsewhere is low. Hence the rate of energy loss increases sharply near the end of the range, culminating in a peak, the so-called Bragg peak. The depth of the Bragg peak in the body depends precisely on the initial energy of the charged particles.
Carbon ions, with higher LET than protons, are more efficient in destroying tumors that resist radiation therapy. Thus carbon ion therapy has become a matter of interest over the past few years (Khoroshkov & Minakova 1998, Schardt 2007).
Verification of dose delivery to the tumor is possible by taking advantage of the property of positrons of producing 511 keV annihilation gamma photons (Parodi 2008). A technique similar to PET imaging might be utilized to track the charged particle beam down to the tumor by imaging its trail of positron emitters. Nuclear interactions along the track of charged particles in the patient's body lead to the production of a sufficient amount of positrons, which makes the use of the PET imaging technique possible for the purpose of tracking the beam down to the tumor. The production yield of some positron emitter nuclides such as C-11 and C-10 has been studied using the GEANT4 computer code (Pshenichnov et al 2006), but in the present work a complete list of nuclear reactions that produce positron emitter nuclides is presented.
Another product of nuclear interactions inside living tissue is the neutron (Chaudhri 2001). Because of the dose absorbed by the patient and radiotherapy personnel, it is important to estimate the neutron yield in therapy with charged particles (Schardt et al 2006). There have been some experiments measuring neutron yield, but computational results are scarce (Khoroshkov 1998, Schardt 2007, Schardt et al 2006).
Computation method
The computer code ALICE has already been successfully used to estimate nuclear interaction products and their respective cross sections as a function of the energy of the projectile particle (Abbas 2006, Kiraly 2008, Kettern 2004). For medical radioisotope production purposes, the projectile is usually a proton, deuteron, or alpha particle. In this work, the projectile is considered to be C-12. Many radioactive and stable isotopes may be produced in the interaction between a carbon ion beam and biological soft tissue. The complete list of possible reactions used is shown below. A similar problem has been considered for the case when the impinging beam is a proton beam (Kettern 2004).
The cross section of a nuclear interaction usually increases as the ion slows down. In carbon ion therapy, the energy of the carbon ion beam is usually 2-4 GeV (Khoroshkov 1998, Schardt 2007). In the present work the carbon ion beam is considered to pass through healthy tissue and then enter the tumor with much lower energy. Thus, the kinetic energy of the carbon ion beam is considered in the range of 7-100 MeV. This is in accord with the capabilities of the ALICE code. Moreover, this energy range corresponds to the Bragg peak of the carbon ion, which occurs inside the tumor.
Approximate values of the cross sections are indicated in Table 1. Bearing in mind the composition of soft tissue, approximately N O₁₈ H₄₀ C₅ (Dennis 1977), the concentration of radioactive isotopes in the patient's tissue after treatment with a beam of carbon ions might be estimated.
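As an indication of how such an estimate proceeds, the sketch below applies the usual thin-target activation formula, A(t) = Φ σ n d (1 − e^(−λt)), to one product nuclide. The beam intensity, cross section, target nucleus density, path length, and irradiation time are placeholder values, not results of the ALICE calculations.

```python
import numpy as np

def activity_bq(beam_ions_per_s, sigma_cm2, n_per_cm3, path_cm,
                half_life_s, t_irr_s):
    """Activity at the end of irradiation for a thin homogeneous target."""
    lam = np.log(2.0) / half_life_s                 # decay constant (1/s)
    production_per_s = beam_ions_per_s * sigma_cm2 * n_per_cm3 * path_cm
    return production_per_s * (1.0 - np.exp(-lam * t_irr_s))

# e.g. a Na-24-like product (half-life 15 h) for a 1e9 ions/s beam,
# 10 mb cross section, ~1e22 target nuclei/cm^3, 1 cm path, 5 min beam-on.
print(activity_bq(1e9, 10e-27, 1e22, 1.0, 15 * 3600.0, 300.0), "Bq")
```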
Another problem in cancer therapy with charged particle beams is the divergence of the beam due to scattering reactions. This is especially important for small tumors such as eye melanoma. If the beam divergence is too high, the lateral resolution of targeting the tumor becomes unacceptable, causing some damage to the healthy tissue surrounding the tumor. The problem of the widening of a proton beam while passing through a low-Z medium has been studied before (Noshad 2005, Mertens 2007). In the present work the case is studied for a carbon ion beam.
Results
Using a Monte Carlo method, the amount of beam divergence is obtained. It is assumed that a parallel beam of charged particles enters a water phantom. At first the beam was considered in cylindrical geometry. A smooth divergence leading to an increase in the beam diameter was observed, followed by a sharp spread of the beam diameter at penetration depths corresponding to the Bragg peak region. The amounts of beam diameter increase are shown in Table 2 for the carbon ion beam.
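The qualitative behaviour can be reproduced with a toy Monte Carlo of the kind described: each ion accumulates a small Gaussian angular deflection per depth step, with the per-step scattering strength increased sharply near the end of range to mimic the slowing ion. The numbers below are illustrative and do not use the actual elastic cross sections of the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ions, n_steps, depth_mm = 10_000, 100, 85.0
dz = depth_mm / n_steps
# RMS deflection per step, rising steeply near the end of range to mimic
# the slowing ion (purely illustrative values):
theta_rms = 1e-3 * (1.0 + 10.0 * (np.arange(n_steps) / n_steps) ** 8)

x = np.zeros(n_ions)    # lateral position (mm)
tx = np.zeros(n_ions)   # lateral direction angle (rad)
for s in range(n_steps):
    tx += theta_rms[s] * rng.normal(size=n_ions)   # angular kick
    x += tx * dz                                    # drift to next step
print("RMS lateral spread at end of range: %.2f mm" % x.std())
```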
Over the Bragg peak, the carbon ion energy is reduced remarkably. On the other hand, due to restrictions in the computer code ALICE, it was not possible to compute the cross sections for carbon ion energies above 100 MeV. Thus, in agreement with the physical situation, cross sections are obtained for E < 100 MeV. Taking advantage of the abilities of the ALICE code in providing various kinds of cross section data, all possible interaction products due to the collision of carbon ions with the major elements of soft tissue are elaborated. Among these nuclides, many are stable and some others are very short lived. Important nuclides with half-lives of the order of minutes or hours are obtained. Nuclear interactions leading to activation of the patient are listed below.
Conclusion
It is shown that, among the radioisotopes produced in the patient's body during carbon ion therapy, Na-24 is important due to its longer half-life. This radioisotope, with a half-life of 15 hours, leaves the patient activated for many hours after undergoing radiation therapy with carbon ions. Thus it is inferred that after carbon ion therapy the patient must be quarantined. It is also shown that at the end of its range the carbon ion beam is spread laterally, with less than 2 mm added to its radius, which might be important when irradiating small tumors in crucial organs of the patient's body.
The projectile is the C-12 ion, which is used for the treatment of deep-seated cancer tumors. The interaction is assumed to happen between C-12 and the most abundant nuclei in fat and muscle tissue.
Table 1.
Most important radioactive nuclides produced during carbon ion therapy
Table 2.
Widening of the carbon ion beam as a function of penetration depth in tissue | 1,758.4 | 2010-03-18T00:00:00.000 | [
"Medicine",
"Physics"
] |
Tribological changes of the tooth enamel-mullite/3Y-TZP couple in artificial saliva
In-situ mullite toughened 3Y-TZP composite ceramic (mullite/3Y-TZP) with excellent mechanical properties was fabricated by gel-casting. The cytotoxicity of mullite/3Y-TZP was determined by both extract and direct contact methods, and the results indicated that mullite/3Y-TZP had no acute cytotoxicity. Furthermore, the tribological properties of tooth enamel sliding against mullite/3Y-TZP in artificial saliva were investigated by using the pin-on-plate friction method. The friction coefficient (μ) between the two friction samples was about 0.464 with a stable friction process, and both of them showed slight wear. Analysis of the wear surface and debris demonstrated that the tooth enamel mainly suffered from fatigue wear accompanied by mild adhesive wear, while mullite/3Y-TZP showed slight abrasive wear. These results indicated that mullite/3Y-TZP has good wear resistance and shows potential for application as a dental material.
Introduction
Advanced biomaterials made of metals and ceramics have received extensive attention and developed rapidly over the last few decades with the improvement of medical care and the development of materials [1][2][3][4][5]. It is well known that biomaterials, especially dental materials, must have good biocompatibility, chemical stability and mechanical properties [6,7]. Metal alloys, such as titanium alloys, are generally considered to be bio-inert in human biological systems. However, when they are used in the oral environment, the frequent interaction between alloys and the physiological environment releases metal ions, which not only limits their long-term stability in vivo, but is also harmful to the patients' health [8][9][10][11].
Zirconia (ZrO₂) ceramic, as one of the most important oxide ceramics, is not only biosafe (no cytotoxicity), but can also exist stably in the oral environment without releasing harmful impurities or degrading [12][13][14][15][16]. In comparison with alumina (Al₂O₃) ceramic, ZrO₂ ceramic has more excellent mechanical properties, which can meet the requirements of higher compressive strength and hardness for dental crowns [17][18][19][20][21]. Moreover, previous studies have confirmed that ZrO₂ ceramic shows poor bacterial adhesion. Scarano et al. [22] found that the coverage degree of bacteria on ZrO₂ was 12.1% as compared to 19.3% on titanium. Rimondini et al. [22] also confirmed these results through in-vivo studies, and their results indicated that Y-TZP accumulated fewer bacteria than titanium in the total number of bacteria. Quinn et al. [23] studied the effects of microstructure and chemical composition on the mechanical properties of dental ceramics, and also concluded that ZrO₂ ceramic had better mechanical properties than other dental ceramics. Although ZrO₂ ceramic has high hardness and strength as well as good biocompatibility, its inherent brittleness limits its application. Mullite/3Y-TZP, as one of the ZrO₂ composite ceramics, not only improves the flexural strength and fracture toughness of pure ZrO₂ ceramic without introducing toxic compositions, but also retains high hardness, which has been confirmed in our previous research [24,25].
Nevertheless, in order to determine whether it is a better candidate for dental materials than pure ZrO₂ ceramic, especially for dental crowns, further research is needed on the biotoxicity and tribological properties of mullite/3Y-TZP, which play an important role in the service life and failure behavior of this material when used as a dental ceramic.
Mitov et al. [26] used natural enamel to slide against Y-TZP ceramic treated in four different ways, and found that there was no significant linear correlation between the ceramic surface roughness and abrasive wear. Wang et al. [27] studied the wear behavior of tooth enamel sliding against three kinds of dental ceramics (smooth and rough zirconia ceramics, glass ceramic, and silicate-based veneer porcelain) with gold-palladium alloy and nickel-chromium alloy as control groups, and the results showed that the friction coefficient of enamel sliding against polished zirconia or porcelain was between that of metal and glass-ceramic. Enamel showed abrasive wear when sliding against rough zirconia or glass ceramic, while fatigue wear was found on the worn surfaces of enamel when sliding against polished zirconia or nickel-chromium alloy, which showed that the friction and wear performance of zirconia can be improved significantly by proper surface polishing. Therefore, as it is a dental material, studying the oral tribological behavior of mullite/3Y-TZP is very important.
In a previous study, high-performance mullite/3Y-TZP was prepared by gel-casting combined with pressureless sintering. Based on the comprehensive analysis of the mechanical properties and microstructure of this ceramic, its biological toxicity is confirmed in this study by in vitro cytotoxicity tests including extract and direct contact methods. These methods can be standardized to yield repeatable results and can be performed efficiently at relatively low cost [28,29]. Moreover, the tribological properties of mullite/3Y-TZP are analyzed in depth from a biological standpoint, including the study of the wear behavior of tooth enamel sliding against mullite/3Y-TZP in an artificial saliva environment, revealing their wear mechanisms and wear resistance.
Sample preparation
In this study, four human molars without obvious wear scars, extracted from 18-year-old females, were stored in physiological saline for sample preparation after removal. In order to prevent original scratches on the enamel surface from affecting the observation of the morphology after the tribological tests, and to measure the roughness of the enamel surface more accurately, each tooth was polished with carborundum sandpaper from 400 to 2000 mesh under water, and then polished with a polishing cloth. This treatment was performed within a range that did not affect the structure and performance of the enamel. Finally, the teeth were cold-mounted in resin to obtain columnar pins with a size of φ10 mm × 15 mm, with the enamel surface exposed.
More details about this experimental process and the raw materials used in this study are described in Ref. 26. The polishing steps for these samples were similar to those for the teeth.
Characterization methods
The optical density (OD) of the medium in the porous plates was measured by an absorbance microplate reader (ELx 800, USA). The morphologies of cells cultured with the extract and direct contact methods were observed by inverted microscopy. A scanning electron microscope (SEM, JSM-6390, JEOL, Tokyo, Japan) was used to observe the surface morphologies of mullite/3Y-TZP and tooth enamel before and after the tribological tests, as well as the wear debris. Raman spectroscopy was used to analyze the elemental composition and chemical bond information of mullite/3Y-TZP before and after the tribological tests. X-ray diffraction (XRD, Cu Kα radiation, D/max-2550-18kW, Rigaku, Japan) from 15° to 70° at 40 kV with a scanning speed of 8°/min, coupled with energy dispersive X-ray spectroscopy (EDS), was used to analyze the elemental composition and phases of mullite/3Y-TZP before and after the experiments, as well as the wear debris. The elemental distribution of the wear surface of mullite/3Y-TZP after the experiments was analyzed by electron probe microanalysis (EPMA). Atomic force microscopy (AFM) and a laser scanning confocal microscope (LSM700, Zeiss, Germany) were used to measure the surface roughness (Ra) of the polished tooth enamel and mullite/3Y-TZP, as well as the undulations of the wear surface. An ultra-depth-of-field three-dimensional microscopy system (VHX-500, Keyence, Japan) combined with the LSM700 was used to characterize the 3D morphologies of the wear surfaces to determine the width and depth of the wear tracks.
Cytotoxicity tests
In vitro cytotoxicity assays were carried out on mullite/3Y-TZP according to ISO 10993-5 using both extract and direct contact methods [30,31]. The extract assays used L929 mouse fibroblasts as test cells.
Each sample was ground into a cube of 6 mm × 6 mm × 6 mm, ultrasonically cleaned with alcohol and deionized water, and finally autoclaved for disinfection (121 °C, 30 min). Pure 3Y-ZrO2 ceramic and mullite ceramic were used as experimental control groups, and negative and positive control groups were also set. The flow diagram of the extract method is shown in Fig. 1, and the specific steps are as follows: (1) the three groups of samples were placed in complete medium for 24 h after disinfection, with ratios of sample surface area to culture medium volume of 1 and 3 cm²/mL; (2) after the cells had resuscitated and adhered to the wall for growth, the original culture medium was removed and the sample extract was added; six wells were set for each concentration of each sample group, and the plates were then placed in a CO2 incubator for 1, 3, and 5 days, respectively; (3) the viability of the cells was assessed by the tetrazolium salt (MTT) assay, and the ELx 800 reader was used to measure the OD value of each well [32]. The relative growth rate (RGR) was then calculated from the OD values, and the cytotoxicity of the samples was determined in combination with the cell morphologies.
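As an illustration of step (3), the following is a minimal sketch (not the authors' code) of the RGR calculation, assuming the usual definition RGR = OD_sample / OD_negative × 100% and a conventional 0-5 cytotoxicity grading; the OD values and grade thresholds below are illustrative placeholders, not data from this study.

```python
# Minimal sketch of the RGR computation from MTT optical densities.
# Thresholds follow a conventional 0-5 cytotoxicity grading scheme and
# are assumptions, not values taken from the paper.

def relative_growth_rate(od_sample: float, od_negative: float) -> float:
    """Relative growth rate (%) of cells cultured in a sample extract."""
    return od_sample / od_negative * 100.0

def cytotoxicity_grade(rgr_percent: float) -> int:
    """Map an RGR value (%) to a 0-5 cytotoxicity grade."""
    thresholds = [100, 80, 50, 30, 0]  # grade boundaries in percent
    for grade, limit in enumerate(thresholds):
        if rgr_percent >= limit:
            return grade
    return 5

if __name__ == "__main__":
    od_negative = 0.52                      # mean OD of negative control wells
    for name, od in [("mullite/3Y-TZP", 0.55), ("mullite", 0.40)]:
        rgr = relative_growth_rate(od, od_negative)
        print(f"{name}: RGR = {rgr:.1f}%, grade {cytotoxicity_grade(rgr)}")
```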
Direct contact assays were also performed using L929 mouse fibroblasts. A cell suspension with a concentration of 5 × 10^4 cells/mL was dropped into a petri dish containing the sample, with a sample surface area to culture medium volume ratio of 1:1. After being cultured in the CO2 incubator for 1, 3, and 5 days, inverted microscopy was used to observe the cell morphologies and determine whether transparent areas had formed around the samples.
Tribological tests
All tribological tests were carried out using the pin-on-plate reciprocating friction method on a UMT-3 multifunctional friction and wear tester under artificial saliva conditions. The device and a schematic diagram of the tribological tests are shown in Fig. 2. The tests used the teeth as pins, while polished mullite/3Y-TZP served as the counterpart. During human chewing, the force between the upper and lower teeth ranges from 3 to 36 N, and the sliding distance between the contacting teeth is about 0.9–1.2 mm [33–35]. Therefore, a normal force of 20 N, a cyclic reciprocating displacement of 1 mm, and a frequency of 2 Hz were used in all tribological tests, as shown in Table 1. During the tests, the wear surfaces of the two friction components were always immersed in artificial saliva; the contents of the various components of the artificial saliva are given in Table 2 [27,36]. The experiment was repeated three times for each condition. After the tribological tests, the teeth and counterparts were ultrasonically cleaned and dried, and the wear was quantified by mass loss. (The mass loss of this friction couple was very low after the tribological tests, and a rough calculation showed that the wear rate was less than 1/10,000. In addition, the irregular structure of the tooth enamel selected for the experiment made it difficult to express the wear rate accurately by height loss. Therefore, it was more intuitive to report the wear results as mass loss.)
Cytotoxicity
The results of the extract assays are shown in Figs. 3 and 4. The data were analyzed by one-way ANOVA, and the least significant difference at p < 0.05 was calculated and displayed on the histograms. There were no significant differences between the extracts of any sample group and the negative control group except for the mullite group. A significant decrease of the OD value occurred in the positive control group, indicating that the tests were valid and that L929 mouse fibroblasts were sensitive to the different degrees of cytotoxicity.
The OD values (490 nm) of each set of samples are shown in Fig. 3. Low OD values were found for all experimental groups after the cells had been cultured with the extracts for 1 day, mainly because the cells had not yet fully grown and divided and the cell concentration was low. Three days later, the OD values had increased significantly, except for the pure mullite group at a concentration of 3 cm²/mL and the positive control group. After five days of incubation, the OD values of each experimental group were almost twice those of the cells cultured for 1 day, and none of them was lower than that of the negative control group except for the pure mullite group and the positive control group.
Combining the RGR values (>100%) with the standard for grading cytotoxicity, the ZrO2 ceramic and mullite/3Y-TZP showed no cytotoxicity, and the morphologies of the cells of these groups also confirmed this result, as shown in Fig. 4(a-b) [32]. Although the RGR value of the mullite group was below 100%, the morphologies of the cells cultured with the mullite extract for 5 days, compared with those of the positive and negative control groups, indicated that mullite had no or only slight cytotoxicity, as shown in Fig. 4(c-e).
No adverse reaction was observed in the petri dishes by inverted microscopy in the direct contact experiments, and the morphologies of the cells are shown in Fig. 5. As depicted in Fig. 5(a-c), no abnormalities or dead cells were found around the samples, and no cell-free transparent regions were observed. The density and morphologies of the cells of these sample groups were similar to those of the negative control group (Fig. 5(d)), whereas significant numbers of dead cells were present in the positive control group (Fig. 5(e)). These results showed that the three kinds of samples were non-cytotoxic, further confirming the previous experimental results.
Microstructure characterization
Fig. 6 shows the microstructure of mullite/3Y-TZP; two obviously different phases are present in the sample. Previous experimental results proved that the black phase was mullite generated by the reaction of Al2O3 and SiO2 during sintering, while the gray area was ZrO2 [24,25]. In the ternary eutectic system Y2O3–SiO2–Al2O3 formed by Y2O3 with SiO2 and Al2O3, a local liquid phase appeared in the sample at high temperature, and contact reaction and nucleation then occurred between SiO2 and Al2O3. After that, the mullite crystal nuclei grew into columnar crystals through mutual diffusion, as shown in Fig. 6(a) [37–41]. The size of the columnar mullite was about 10 μm, as shown in Fig. 6(b), and ZrO2 particles were uniformly distributed inside the mullite, which can improve the strength of the columnar mullite through the pinning effect. For the same reason, the columnar mullite also significantly strengthens and toughens the composite ceramic. In addition, previous studies have shown that Y2O3 can enter the crystal lattice of ZrO2 to stabilize the tetragonal/cubic phase [25].
The surface morphologies of the tooth enamel before the tribological tests are shown in Fig. 6(c-d). The enamel retained its complete character, with no protrusions or microcracks on the surface, providing a reliable premise for the subsequent friction experiments. EDS analysis indicated that the main components of the tooth enamel surface were calcium and phosphorus with an atomic ratio of about 1.6:1, confirming that tooth enamel was indeed composed of hydroxyapatite (Ca10(PO4)6(OH)2), as shown in Fig. 6(d), which provided a theoretical basis for the subsequent analysis of the wear debris.

Wear behavior

Coefficient of friction (μ) and mass loss

The Ra values and 3D topographies of the polished tooth enamel and mullite/3Y-TZP before the tribological tests are shown in Fig. 7. Their Ra values were 33.6 nm and 148.22 nm, respectively, indicating that the surfaces of the two materials used for the experiments were smooth, which can significantly reduce the frictional resistance (Fx) and μ [42]. As is well known, there should be an appropriate μ when dental ceramics, especially ceramic crowns, slide against natural teeth, so as not to cause excessive wear on one side or affect the chewing of food. The friction experiments were repeated three times, and one of the resulting curves is shown in Fig. 8. It can be seen that, after a brief run-in period, μ finally stabilized at 0.464. Compared with the results of Wang et al. (shown in Fig. 9), this value lies between the μ of glass ceramics and that of Au-Pd alloy when rubbing against tooth enamel, and it is particularly close to the results for polished zirconia and porcelain [27]. Their research also illustrates the influence of surface treatment on the friction behavior of dental ceramics, which is why the friction couple was polished before the tests. The run-in behavior arises because, even though the surfaces of the couple were very smooth before the tribological tests, they were damaged after the initial contact under the applied load, resulting in an unstable μ and Fx until new wear surfaces were generated. In the stable friction stage, the artificial saliva played an important role in lubrication and cooling, and it also washed away the debris generated during the friction process and cleaned the wear surface. Therefore, μ and Fx decreased and became extremely stable, and they were the factors that determined the mass loss of the teeth and mullite/3Y-TZP [34,42]. This frictional behavior indicated a good match between the tooth enamel and mullite/3Y-TZP.
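For readers reproducing this analysis, the following is a minimal sketch of how a steady-state μ such as the 0.464 reported above can be extracted from the tester's force traces. It assumes exported time series of Fx and Fz; the run-in cutoff and the synthetic data are illustrative assumptions, not the authors' processing.

```python
# Minimal sketch: steady-state coefficient of friction from force traces.
# The 20% run-in cutoff and the synthetic exponential run-in are assumptions.
import numpy as np

def steady_state_mu(fx, fz, run_in_fraction=0.2):
    """Mean and std of mu = Fx / Fz after discarding the run-in period."""
    fx, fz = np.asarray(fx, float), np.asarray(fz, float)
    mu = fx / fz                                 # instantaneous mu
    start = int(len(mu) * run_in_fraction)       # drop the run-in samples
    return mu[start:].mean(), mu[start:].std()

if __name__ == "__main__":
    t = np.linspace(0, 600, 6000)                # a 10-minute test
    fz = np.full_like(t, 20.0)                   # 20 N normal load
    fx = 20.0 * (0.464 + 0.1 * np.exp(-t / 30))  # synthetic run-in decay
    mean_mu, std_mu = steady_state_mu(fx, fz)
    print(f"steady-state mu = {mean_mu:.3f} +/- {std_mu:.3f}")
```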
The mass losses of the teeth and mullite/3Y-TZP were very small; although the former was slightly higher than the latter, both lost less than a milligram, namely 0.5 ± 0.1 mg and 0.3 ± 0.1 mg, indicating that the wear resistance of the friction couple in this environment was good. The stable friction process and low mass loss proved that mullite/3Y-TZP has application potential in the oral field.
Moreover, Lee et al. [43] found that ZrO2 undergoes a phase transition under load during the friction process, and the low mass loss of mullite/3Y-TZP may be partly attributed to transformation toughening induced by flash temperature. In this study, the diffraction peaks of the different ZrO2 phases of mullite/3Y-TZP did not change significantly before and after the tribological tests, as shown in the XRD patterns of Fig. 10. The intensities of the diffraction peaks of m-ZrO2, t-ZrO2, and c-ZrO2 were almost unchanged, which may be attributed to stress dispersion and the cooling effect of the artificial saliva. In addition, the internal structure of mullite/3Y-TZP did not change before and after the tribological tests, as shown in the Raman spectra of Fig. 11 (black and red curves). The intensity and Raman shift of the m-ZrO2 and t-ZrO2 peaks did not change significantly [44]. An -OH peak (3625 cm⁻¹) was observed in the Raman spectrum of mullite/3Y-TZP after the tribological tests, and Lange et al. [45,46] mentioned that Y2O3 can react with water to form α-Y(OH)3 under the action of water and pressure during friction. However, compared with the polished sample, whose -OH peak was most likely generated during the pretreatment before the tribological tests, no new -OH peak formed during the friction process. This hypothesis was verified by immersing mullite/3Y-TZP in artificial saliva for 5 days and then performing a Raman test; the spectrum is shown as the blue curve. No significant changes occurred in the intensity or Raman shift of the -OH peak, similar to the previous two results. These results indicated that mullite/3Y-TZP has good stability during friction and in artificial saliva environments.
Wear appearances
Different observation methods were selected based on the different characteristics of the wear surface.
The 3D morphologies of the worn surface of the tooth enamel were measured by the LSM700; as can be seen from Fig. 12(a), they showed a flat wear surface with almost no grooves or undulations. The surface roughness of the worn surface was measured to be only 4.166 μm, even though it was not as smooth as the original enamel surface, and the curve at the bottom of Fig. 12(a) likewise shows no significant fluctuation. This may be caused by the formation of a smooth film on the wear surface through the deformation, under the action of artificial saliva and load, of debris detached from the tooth enamel during friction, indicating that no significant abrasive wear had occurred on the enamel surface and providing a basis for the wear mechanism discussed below. Mullite/3Y-TZP, as the counterpart, suffered even less wear on its worn surface than the enamel, as shown in Fig. 12(b). The 3D morphologies measured by the VHX-500 demonstrated that the worn area of mullite/3Y-TZP was very shallow, with a vertical height difference between the center of the pit and the unworn surface of only 8.37 μm, as shown by the curve below Fig. 12(b). This was mainly because both mullite/3Y-TZP and enamel have high hardness, so it was difficult to destroy the surface structure of mullite/3Y-TZP and cause serious wear during friction under the lubrication of artificial saliva. Fig. 13(a) presents the overall appearance of the worn enamel surface at low magnification; it shows a flat surface without significant scratches, consistent with its 3D morphologies. Fig. 13(b) shows a local magnification of Fig. 13(a). A small amount of abrasive debris adhered to the flat surface; EDS analysis showed that the debris consisted of calcium-phosphorus compounds (hydroxyapatite) derived from the enamel surface, containing small amounts of elements from mullite/3Y-TZP and the artificial saliva, as shown in the upper right corner of Fig. 13(b). Fig. 13(c) shows a portion of the edge of the worn region, where delamination of layered debris about 25 μm in size had clearly occurred, with cracks around it, which is closely related to stress-induced fatigue fracture. Fig. 13(d) presents the overall appearance of the worn surface of mullite/3Y-TZP at low magnification; only a very small number of scratches are present on the flat surface. A small amount of abrasive debris adhered to the counterpart surface, and EDS analysis showed that its composition was the same as that of the debris on the enamel surface, as shown in Fig. 13(e). Unlike the worn enamel surface, the surface of mullite/3Y-TZP did not exhibit large-scale peeling, although some scratches and microcracks appeared, as shown in Fig. 13(f), which is closely related to the mechanical properties of mullite/3Y-TZP. In addition, the pinning effect of the mullite and the good bonding between the ZrO2 particles reduced the likelihood of particle flaking.
The similar study of Wang et al. [27] (shown in Fig. 14) can be compared with the present work. Fig. 14(a-c) shows the worn surface morphology of enamel after sliding against polished zirconia ceramic: the enamel surface developed microcracks and a layer of wear debris, almost identical to the results of this study, indicating that the tooth enamel had undergone fatigue wear. What differs from the results of this experiment is that the worn surface of the zirconia ceramic showed obvious particle shedding (Fig. 14(d)); that is, abrasive wear had occurred on the zirconia surface, which is closely related to the brittleness of zirconia ceramic. Because the mullite/3Y-TZP used in this experiment has better fracture toughness, the pinning effect of the mullite and alumina particles reduced the possibility of zirconia particles falling off [24]. Fewer hard particles such as zirconia reduce the damage to the friction couple during the friction process. Therefore, mullite/3Y-TZP is more suitable as a dental material than pure zirconia ceramic.
In order to further analyze the element distribution on the worn surface of mullite/3Y-TZP, EPMA analysis was performed, as shown in Fig. 15 (the Al and Si elements are not shown). As shown in Fig. 15(a-d), the internal components Zr and Y of mullite/3Y-TZP were detected, and it is clear that the Y element was distributed uniformly in the ZrO2, playing a role in stabilizing t-ZrO2. In addition to the components of the ceramic matrix, Ca and P elements were also present on the worn surface, as shown in Fig. 15(e-f), indicating that wear debris from the enamel was present on the worn surface; combined with Fig. 15(a), it can be found in the grooves. The representative elements of artificial saliva, such as Na and Cl, also existed on the worn surface, and their distribution was consistent with that of the Ca and P elements, apart from a small amount distributed evenly in other areas, which was mainly due to the adsorption of artificial saliva by the wear debris. Clearly, the filling of grooves and cavities with wear debris and artificial saliva can significantly lubricate the worn surface and reduce the friction resistance.
Wear mechanism
The shape of the wear debris provides a reliable clue to the wear mechanism of the specimen. Fig. 16 shows the morphologies of the wear debris obtained after tooth enamel slid against mullite/3Y-TZP in artificial saliva, in which abrasive grains and layered wear debris of different sizes can be seen.
The abrasive grains were small, with obvious aggregation and mutual adhesion, while the layered debris had two kinds of morphologies: (i) large layered debris with a rough surface, as shown in Fig. 16(a-b), and (ii) layered debris with a smooth surface, as shown in Fig. 16(c-d). On the surface of the first kind of lamellar debris there were aggregated fine particles and obvious microcracks, caused either by the aggregation of abrasive debris under the combined action of artificial saliva and load during friction, or directly by lamellar shedding from the enamel surface. The second kind of lamellar debris had clear outlines with distinctly straight boundaries and microcracks, indicating that it was mainly produced by brittle fracture. The elemental compositions of the wear debris are shown in Fig. 17(a-d): the mass fractions of calcium, phosphorus, and oxygen in the wear debris were high, reaching 26%, 18%, and 24%, respectively, consistent with the composition of enamel, and, combined with Fig. 17(a), these elements were distributed over almost every piece of debris. In addition, a small amount of debris from the mullite/3Y-TZP surface and elements from the artificial saliva were evenly distributed on the surface of the wear debris, as shown in Fig. 17(e-i), indicating that only slight wear occurred in the mullite/3Y-TZP, so it was difficult to find large ZrO2 particles in the wear debris. The filling and lubrication of the worn surface by artificial saliva reduced the frictional resistance and mass loss and resulted in the adhesion of elements such as Na and K on the surfaces of the enamel and the counterpart.
The morphologies of the wear debris, combined with the character of the worn surface, indicated that the enamel mainly experienced fatigue wear. Because the enamel on the tooth surface is hard and brittle, repeated friction and loading cause stress concentration in the contact region, resulting in fatigue fracture [34]. In addition, the shape of the edge portion of the worn surface indicated that adhesive wear occurred locally (Fig. 13(a)): the abrasive was wetted and pressed into a film on the worn surface, and the film then peeled off under prolonged repeated friction, thereby forming an adhesive wear zone. In contrast, whether judged from the mass loss and worn surface morphologies of mullite/3Y-TZP or from the elemental analysis of the wear debris, only mild wear of mullite/3Y-TZP was observed. The uniform distribution of Zr, Al, and Si elements in the wear debris indicated that mullite/3Y-TZP did undergo slight particle flaking and only mild abrasive wear, which is also consistent with the few scratches on its worn surface. The lubrication and cooling effects of the wear debris and artificial saliva filling the pits of the worn surface of mullite/3Y-TZP kept the mass loss extremely low throughout the repeated tribological tests. Because the hardness of the enamel was lower than that of mullite/3Y-TZP and the artificial saliva provided lubrication and cooling, it was difficult to cause extensive fatigue wear on the composite ceramic. In addition, the previous XRD analysis showed almost no phase transformation of ZrO2 during friction, and the Raman analysis found no significant reaction between Y2O3 and water, indicating that the internal structure of mullite/3Y-TZP was retained, which played an important role in preserving its high mechanical properties. These observations show that mullite/3Y-TZP has good stability and wear resistance in the human oral environment.
Conclusion
The cytotoxicity of mullite/3Y-TZP has been studied preliminarily with a view to its dental application, and the tribological properties of tooth enamel sliding against mullite/3Y-TZP in an artificial saliva environment have been analyzed in depth with mullite/3Y-TZP as the counterpart. The main conclusions are as follows: 1. The cytotoxicity of mullite/3Y-TZP was tested by both extract and direct contact methods. The results indicated that, like ZrO2 ceramic, mullite/3Y-TZP showed no acute cytotoxicity, even though the second phase of mullite was introduced.
2. In the artificial saliva environment, the friction process between the tooth enamel and mullite/3Y-TZP was extremely stable, with a μ of 0.464, which is in the range of μ values for natural teeth chewing food. The mass loss of the two materials was low owing to the lubrication and cooling effects of the artificial saliva. These tribological properties show that the friction pair matched well.
3. No significant phase transitions occurred in mullite/3Y-TZP during the tribological tests. The tooth enamel mainly suffered fatigue wear accompanied by slight adhesive wear due to the filling action of artificial saliva and wear debris, while mullite/3Y-TZP showed only slight abrasive wear under these conditions. These results indicate that mullite/3Y-TZP has good stability and wear resistance.
"Materials Science"
] |
CODES OVER LOCAL RINGS OF ORDER 16 AND BINARY CODES
We study codes over the commutative local Frobenius rings of order 16 with maximal ideals of size 8. We define a weight-preserving Gray map and study the images of these codes as binary codes. We study self-dual codes and determine when they exist.
Introduction
It has become standard in many areas of coding theory to assume that the underlying alphabet of a code is a finite Frobenius ring. One of the early papers using this was [8], in which binary codes were constructed via a non-linear Gray map from quaternary codes. This work was followed by papers where Gray maps were defined from all of the rings of order 4, see [3], and then to other rings, see [2,7,12]. In general, Gray maps have been a central feature of the study of codes over rings. Self-dual and formally self-dual codes are important classes of codes, and they have canonical connections to finite designs and unimodular lattices. Formally self-dual and self-dual codes have Hamming weight enumerators that are held invariant by the action of the MacWilliams relations. It is a long-standing question to determine if there are codes that correspond to each polynomial, with non-negative integer coefficients, that is held invariant by the action of the MacWilliams relations. One such open question is to determine when there are extremal Type II codes of length 24k.
In [11], foundational results were given for codes over finite Frobenius rings. Namely, Wood shows that both MacWilliams theorems (the theorem which determines the weight enumerator of the orthogonal code and the theorem which determines equivalences of codes) hold for codes over these rings. Wood's results were for codes over arbitrary Frobenius rings, but for the remainder of this paper we shall assume that the rings are commutative. It is well known that any finite commutative Frobenius ring is isomorphic, via the Chinese Remainder Theorem, to a direct product of local Frobenius rings. This isomorphism carries many of the important structural aspects of a code; hence, there is great significance in studying codes over local Frobenius rings. Specifically, codes over local Frobenius rings can be thought of as the foundation of codes in general. In the non-commutative case, we do not have the Chinese Remainder Theorem, and so we cannot necessarily reduce many coding theory questions to codes over local rings.
In this paper, we study codes over commutative local Frobenius rings of order 16 with |J(R)| = 8. We define a Gray map to the binary Hamming space and define a Lee weight based on the Hamming weight of the Gray image. This Gray map is a canonical extension of the usual Gray maps as defined over the local rings of order 4. Our Gray map is presented in a very general form which can be applied to any local Frobenius ring of order 16 with a maximal ideal of size 8. Then we study self-dual and formally self-dual codes over these rings and examine their binary images.
Definitions and notations
The rings in this paper, which are used as alphabets for codes, have a multiplicative identity and are assumed to be commutative and finite. A ring is a local ring if it has a unique maximal ideal, and a chain ring is a local ring whose ideals are linearly ordered by inclusion. It follows that the ideals in a chain ring are principal and of the form ⟨γ^i⟩, where γ generates the maximal ideal and satisfies γ^e = 0, γ^{e−1} ≠ 0. The number e is said to be the index of nilpotency of the ring.
For a module M over a ring R, define M̂ = Hom_Z(M, C^*). Note that C^* can be replaced with Q/Z if we want to use an additive model rather than a multiplicative one. Suppose R is a finite ring. The following are equivalent: (1) R is a Frobenius ring; (2) R̂ ≅ R as left R-modules; (3) R̂ ≅ R as right R-modules. We shall only be concerned with rings that are Frobenius in this paper. It has been established, see [11], that this is the class of rings which can serve as a useful alphabet for codes.
Since we assume all rings are commutative, it simplifies the following definitions. The Jacobson radical J(R) of a ring R can be characterized as the intersection of all maximal ideals. In a commutative local ring the Jacobson radical is necessarily the unique maximal ideal. The socle of a ring R, Soc(R), is defined as the sum of all the minimal ideals of the ring.
In Theorem 3.10 (4a) of [10], the list of local Frobenius rings of order 16 with Jacobson radical of size 8 was presented. Five of the rings, including Z_16, are chain rings, and seven of them are non-chain rings. In all of these rings, the maximal ideal m is the Jacobson radical of the ring and m⊥ = Soc(R). For the non-chain rings we have m² = Soc(R); for the chain rings we have m³ = Soc(R). In both cases we have |Soc(R)| = 2.
Remark 1. The index of nilpotency of J(R) for any ring under consideration is at most 4. From this we see that the group of units of each ring is isomorphic to either C_2 × C_2 × C_2 or C_2 × C_4. Of the 12 rings being considered, only 3 have their group of units isomorphic to

Given a ring R, we say that a code of length n is a subset of R^n. If the code is a submodule, we say that the code is linear. If C is a code, then the weight enumerator with respect to a weight wt is W_C(x, y) = Σ_{c∈C} x^{N−wt(c)} y^{wt(c)}, where N is the maximum weight for a vector of length n. We attach to the ambient space the Euclidean inner product [v, w] = Σ v_i w_i, and define the orthogonal with respect to this inner product, namely C⊥ = {v | [v, w] = 0 for all w ∈ C}. The code C⊥ is always linear whether or not C is, and it follows from the foundational results in [11] that |C||C⊥| = |R|^n when C is a linear code over a Frobenius ring R.
If C = C⊥ we say that the code C is self-dual, and if C ⊆ C⊥ we say that the code C is self-orthogonal. If a code C satisfies W_C(x, y) = W_{C⊥}(x, y), then we say that the code is formally self-dual with respect to that weight.
Local rings of order 16
In this section, let R be a local Frobenius ring of order 16 with a maximal ideal of size 8. Then R/J(R) ≅ F_2. In any case, there exist u, v, w ∈ R with J(R) = ⟨u, v⟩ and Soc(R) = ⟨w⟩ = {0, w}. If R is a chain ring, we actually have J(R) = ⟨u⟩, J(R)² = ⟨v⟩ and Soc(R) = ⟨w⟩, where v = u² and w = u³. The ideals for such a ring are described in Figure 1.
In the non-chain ring case there are three intermediate ideals, all of cardinality 4, namely ⟨u⟩, ⟨v⟩ and ⟨u + v⟩. The ideals for such a ring are described in Figure 2.
Every element of these rings can be written uniquely in the form a + bu + cv + dw, where a, b, c, d ∈ {0, 1}. Note that we are not saying that the structure of the additive group is C_2^4, but rather that the elements have a unique representation of this form. We shall use this fact to construct a Gray map for any ring in this family.
There are two local rings of order 4 that are not fields, namely F_2 + uF_2 and Z_4. Both of these have a well-known Gray map, and both maps can be described as Φ : R → F_2², Φ(a + bv) = (b, a + b). Notice that in both of these rings we are able to write the elements as a + bv, where a, b ∈ F_2; again, this does not say that the group structure is C_2 × C_2, but rather that the elements can be put into that form. We can extend these maps to rings where the elements can be written as a + bu + cv + dw with a, b, c, d ∈ {0, 1}. Namely, we view the elements as being of the form (a + bv) + (c + dv)u and apply the order-4 Gray map twice (allowing for an abuse of notation), once in terms of u and then in terms of v. Specifically, we apply this Gray map to the element α + βu, where α = a + bv and β = c + dv, to get (β, α + β); then we apply the Gray map to α and β, sending α to (b, a + b) and β to (d, c + d). Hence we make the following definition. For a finite local Frobenius ring R of order 16 with Jacobson radical of size 8, we define the Gray map φ : R → F_2^4 as φ(α + βu) = (Φ(β), Φ(α + β)) = (d, c + d, b + d, a + b + c + d). If two elements have the same image, then by comparing the first coordinates d = d'; comparing the second and third coordinates gives that b = b' and c = c'; and comparing the fourth coordinates we have a = a'. Hence the map is bijective.
In many ways the non-linear case is the most interesting, as it is for the quaternary Gray map. For the ring F_2[u, v]/⟨u², v²⟩ we have that φ(C⊥) = φ(C)⊥; however, this is not true in general for the rings of characteristic 2. Let w_L(x) denote the Lee weight of the element x ∈ R, where w_L(x) is defined to be the Hamming weight of φ(x). Note that the weight is odd if and only if the element is a unit. This is important in that what we want is for all elements of the maximal ideal to have even weight and for the non-zero element of the socle to have the largest weight; the reason for this becomes apparent in Theorem 4.6. Notice that the map φ is not necessarily linear, but it is weight preserving.
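The Gray map and the stated weight properties can be verified mechanically. The following sketch (our own check, built directly from the construction in the text) realizes φ on the 16 formal expressions a + bu + cv + dw and confirms that φ is a bijection, that the Lee weight is odd exactly for the units (a = 1), and that the basis element with d = 1, i.e., the socle generator w, has the maximum Lee weight 4.

```python
# Verify the Gray map phi(alpha + beta*u) = (Gray(beta), Gray(alpha + beta))
# with alpha = a + b*v, beta = c + d*v, and Gray(a + b*v) = (b, a + b).
from itertools import product

def gray2(a, b):
    """Order-4 Gray map on a + b*v over F_2."""
    return (b, (a + b) % 2)

def phi(a, b, c, d):
    """The order-16 Gray map built by applying gray2 twice, as in the text."""
    beta = gray2(c, d)
    alpha_plus_beta = gray2((a + c) % 2, (b + d) % 2)
    return beta + alpha_plus_beta

images = {}
for a, b, c, d in product((0, 1), repeat=4):
    img = phi(a, b, c, d)
    images[(a, b, c, d)] = img
    lee = sum(img)                        # Lee weight = Hamming weight of phi
    assert (lee % 2 == 1) == (a == 1)     # odd Lee weight iff a unit (a = 1)
assert len(set(images.values())) == 16    # phi is a bijection onto F_2^4
assert sum(images[(0, 0, 0, 1)]) == 4     # the socle generator w has weight 4
print("all checks passed")
```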
In [7], another Gray map equivalent to φ was defined. It is also possible to define another Gray map on the local rings of order 16 with a maximal ideal of size 8. On the set {1, u, v, w}, define a poset with 1 ≤ u, 1 ≤ v, u ≤ w, v ≤ w, where u and v are not comparable. View the elements of the ring as an F_2 vector space with basis 1, u, v, w; the map ψ is defined on this basis and then extended linearly over F_2. Note that the Lee weight increases with respect to the ordering of the poset and that the Lee weights defined for ψ and φ are the same; this follows from a straightforward computation. Additionally, this vector space can be viewed as the mod 2 group algebra of the Klein four-group, where the Hamming weight with respect to the basis of group elements corresponds to the Lee weight as we have defined it.
The rank of a code C is the minimum number of generators of C. Let d_L(C) denote the minimum Lee weight of all non-zero vectors in C. Using the same proof as was given in [6], the corresponding bound relating the rank and d_L(C) (Theorem 3.3) holds for any linear code C over R. Using the Lee weight, we can now define the Lee weight enumerator L_C(x, y) = Σ_{c∈C} x^{4n−wt_L(c)} y^{wt_L(c)}, where wt_L(c) is the Lee weight of a codeword c ∈ C. Notice that 4n is the maximal Lee weight of any vector of length n. This weight enumerator coincides with the Hamming weight enumerator of the Gray image φ(C) of the code. We shall say that a formally self-dual code is Type II if all Lee weights in the code are doubly even, Type I if all Lee weights are even, and Type 0 otherwise.
An example of a Type 0 formally self-dual code will be given in Theorem 4.9.
Self-dual codes
Self-dual codes are an important class of codes. They have numerous connections to unimodular lattices and design theory. In this section, we shall describe self-dual codes over local Frobenius rings of order 16 and examine their images under the Gray map.
We shall determine the existence of Type I and Type II self-dual codes, and it will be shown that there are no Type 0 self-dual codes. Proof. The result follows from the previous theorem, noting that the image of the all-w vector under the Gray map is the all-one vector.
The following well-known lemma is easily proven. Lemma 4.2. If C and D are self-dual codes over a Frobenius ring R of lengths n and m, respectively, then C × D is a self-dual code of length n + m. If C and D are formally self-dual codes of lengths n and m, respectively, then C × D is a formally self-dual code of length n + m.
We shall now examine the existence of self-dual codes. Lemma 4.3. Let R be a local Frobenius ring of order 16 with a maximal ideal of size 8. Then there exists a self-dual code of length 1 over R, and it is Type I. Proof. First suppose R is not a chain ring. There are precisely 3 ideals of order 4, namely ⟨u⟩, ⟨v⟩ and ⟨u + v⟩, and for any ideal a of order 4, a⊥ is also an ideal of order 4. Take an ideal a; if a = a⊥, then there is a self-dual code of length 1. If a ≠ a⊥, then a⊥ is one of the other ideals, and hence the third ideal must be a self-dual code of length 1.
If the ring is a chain ring, then there is a unique ideal of order 4; hence its orthogonal must also have size 4, so the ideal must be self-dual.
Since all the elements in the self-dual code are in the maximal ideal, the Lee weights must all be even. Hence the code is Type I.
Notice that this lemma implies that there are either one or three self-dual codes of length 1 in this case. For chain rings there can be only one self-dual code of length 1, since there is only one ideal of the proper order. The image under the Gray map of any of these possible length-1 codes is a [4, 2, 2] binary self-dual code. It is possible that all three ideals are self-dual codes of length 1. For example, in the ring F_2[u, v]/⟨u², v²⟩ (with uv = vu), the codes ⟨u⟩, ⟨v⟩ and ⟨u + v⟩ are all self-dual codes of length 1. In the ring F_2[u, v]/⟨u² + v², uv⟩, only the ideal ⟨u + v⟩ is self-dual; in this ring, ⟨u⟩⊥ = ⟨v⟩.
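The claim for F_2[u, v]/⟨u², v²⟩ can be checked by brute force. The following sketch (our own verification, not from the paper) enumerates the 16 ring elements as tuples (a, b, c, d) for a + bu + cv + duv and confirms that each of ⟨u⟩, ⟨v⟩ and ⟨u + v⟩ equals its own annihilator, i.e., is a self-dual code of length 1.

```python
# Brute-force check that <u>, <v>, <u+v> are self-dual length-1 codes in
# R = F_2[u,v]/<u^2, v^2> with uv = vu.
from itertools import product

def mul(x, y):
    """Multiply a+bu+cv+duv by e+fu+gv+huv with u^2 = v^2 = 0 over F_2."""
    a, b, c, d = x
    e, f, g, h = y
    return ((a * e) % 2,
            (a * f + b * e) % 2,
            (a * g + c * e) % 2,
            (a * h + b * g + c * f + d * e) % 2)

R = list(product((0, 1), repeat=4))

def ideal(gen):
    return {mul(r, gen) for r in R}

def annihilator(I):
    return {r for r in R if all(mul(r, x) == (0, 0, 0, 0) for x in I)}

u, v, u_plus_v = (0, 1, 0, 0), (0, 0, 1, 0), (0, 1, 1, 0)
for gen in (u, v, u_plus_v):
    I = ideal(gen)
    assert len(I) == 4 and annihilator(I) == I   # self-dual length-1 code
print("<u>, <v>, <u+v> are all self-dual codes of length 1")
```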
For the rings in our collection of characteristic 4, ⟨2⟩ is always a self-dual code of length 1. In the ring Z_4[x]/⟨x² − 2x⟩, ⟨2⟩ is the only self-dual code of length 1, and in the ring Z_4[x]/⟨x²⟩ there are three, namely ⟨2⟩, ⟨x⟩ and ⟨x + 2⟩. Lemma 4.3 leads to the following theorem: if a local non-chain Frobenius ring of order 16 with maximal ideal of size 8 has only one self-dual code of length 1, then the two remaining ideals of order 4 are Type I Lee formally self-dual codes of length 1 that are not self-dual. Proof. We know that there are three ideals of order 4 in R, namely ⟨u⟩, ⟨v⟩ and ⟨u + v⟩, and each of these ideals contains the socle {0, w}. If there is only one self-dual code, then the other two ideals of order 4 are duals of each other. Assume the ideals ⟨α⟩ and ⟨β⟩ are duals of each other; then ⟨α⟩ = {0, α, w, α + w} and ⟨β⟩ = {0, β, w, β + w}. In either case, each ideal has two elements of Lee weight 2, one element of Lee weight 0, and one element of Lee weight 4, and hence they have identical Lee weight enumerators. Thus ⟨α⟩ and ⟨β⟩ are Type I Lee formally self-dual codes of length 1 that are not self-dual.
As an example of the previous theorem, consider the ring F_2[u, v]/⟨u² + v², uv⟩. The ideals ⟨u⟩ and ⟨v⟩ are Lee formally self-dual codes that are not self-dual. The image of both of these codes is the linear [4, 2, 2] binary self-dual code. This shows that the image of a Lee formally self-dual code that is not self-dual can in fact be self-dual.
We shall now examine the case for Type II codes.
Theorem 4.6. There exist Type II self-dual codes of length 2 for all local non-chain Frobenius rings of order 16 with maximal ideal of size 8.
Proof. Let R be a local non-chain Frobenius ring of order 16 with Soc(R) = ⟨w⟩. Consider the code C = {(x, x + εw) | x ∈ J(R), ε ∈ {0, 1}}. Since J(R)² = Soc(R), we have that xy ∈ Soc(R) for any x, y ∈ J(R), and 2s = 0 for all s in Soc(R), since the socle is isomorphic to the field with two elements; moreover, xw = 0 for x ∈ J(R), since Soc(R) = m⊥. Hence the code C is self-orthogonal. The code has |C| = 16 and is of length 2 with C ⊆ C⊥; therefore C = C⊥ and C is self-dual. If v ∈ C − {0} is of the form (x, x), x ∈ J(R), then the Lee weight is 2 + 2 = 4 unless x = w, where the Lee weight is 4 + 4 = 8. If v ∈ C is of the form (x, x + w), x ∈ m, then if x ≠ w the Lee weight is 4, and if x = w then v = (w, 0) also has Lee weight 4. Hence C is a Type II self-dual code.
The code φ(C), where C is the code given in Theorem 4.6, is equivalent to the [8,4,4] binary Hamming code. We give an example for a specific ring which describes the code in Theorem 4.6.
Example 3. Let C be the code over R = Z_4[x]/⟨x² − 2x⟩ generated by a matrix G as in the construction of Theorem 4.6. Then G generates a Type II self-dual code over R.
In [13], the authors find a Type II self-dual code over F_2[u, v]/⟨u², v²⟩ whose image under φ is the extended binary Golay code.
Notice that in the chain ring case, J(R)² need not be Soc(R), so the above construction may not work. For example, over Z_16, which is a chain ring with Jacobson radical of size 8, the construction would give the generator matrix with rows (2, 2) and (0, 8). However, the vector (2, 2) has [(2, 2), (2, 2)] = 8 ≠ 0, and so is not self-orthogonal. However, we can prove the following similar result. Proof. Let γ be the generator of the maximal ideal in a chain ring R of order 16 with |⟨γ⟩| = 8, and consider the code generated by the following matrix: Note that each vector in the code generated by the first three rows of the matrix has even Hamming weight, with elements in the maximal ideal; therefore, all of these vectors have doubly-even Lee weight.
The only vectors in the code with odd Hamming weight contain a γ³, which has Lee weight 4, so these vectors have doubly-even Lee weight as well. Then note that γ³ ∈ Soc(R), so γ³ + γ³ = 0, which gives that the rows are mutually orthogonal. Hence the code is a Type II self-dual code.
Corollary 2. There exist Type II codes of all even lengths over local non-chain Frobenius rings of order 16 with maximal ideal of size 8. There exist Type II codes of all lengths a multiple of 4 over local Frobenius chain rings of order 16 with maximal ideal of size 8.
Proof. Follows from Lemma 4.2 and noticing that the Cartesian product of Type II codes is Type II.
Of course, if the image of a Type II self-dual code is linear, then it must be a self-dual code, since it contains only doubly-even binary codewords. However, the binary code may be non-linear and hence not self-dual. Theorem 4.9. There exist Type 0 Lee formally self-dual codes of all even lengths. Proof. The code ⟨(1, 0)⟩ is a Type 0 Lee formally self-dual code of length 2. Applying the Cartesian product, we obtain Type 0 Lee formally self-dual codes of all even lengths.
Of course, if a local Frobenius non-chain ring of order 16 with maximal ideal of size 8 has only one self-dual code of length 1 then there exist Lee formally self-dual codes of length 1. In these cases, Lee formally self-dual codes exist for all lengths.
We can use the generating element of a self-dual code of length 1 to construct other self-dual codes.
Theorem 4.10. Let C be a self-orthogonal code of length n over a local Frobenius ring R of order 16 with a maximal ideal of size 8, and let α be a principal generator of a self-dual code of length 1. Then ⟨C, (C⊥ ∩ αR^n)⟩ is a self-dual code of length n.
Proof. In the non-chain ring case there exists a self-dual code of length 1 generated by u, v or u + v; call this generating element α.
In the chain ring case there is a unique choice for α. Consider vectors v, v' ∈ C and αw, αw' ∈ C⊥ ∩ αR^n. Note that w, w' themselves may not be in C⊥, but any vector in the intersection C⊥ ∩ αR^n is of the form αw for some vector w. Then [v + αw, v' + αw'] = [v, v'] + [v, αw'] + [αw, v'] + α²[w, w'] = 0 + 0 + 0 + α²[w, w'] = 0, since α² = 0. Hence the code is self-orthogonal. Moreover, C⊥ ∩ ⟨C, αR^n⟩ = ⟨C, (C⊥ ∩ αR^n)⟩. Therefore the code is self-dual.
Notice that there may be more than one α that can be used in the previous theorem. We illustrate this point with two different cases of the construction. Consider the ring R = Z_4[x]/⟨x²⟩, and let C be the self-orthogonal code C = {(β, β) | β ∈ m}, where m is the unique maximal ideal ⟨2, x⟩. The code C is generated by the vectors (2, 2) and (x, x). Take the code C_1 = ⟨C, (C⊥ ∩ xR^n)⟩, which can be written as ⟨C, ⟨(3x, x)⟩⟩. Notice that (3, 1) is an element of C⊥; as such, ⟨C, (C⊥ ∩ xR^n)⟩ = ⟨C, xC⊥⟩. By the theorem, C_1 is a self-dual code of length 2. Now consider the code C_2 = ⟨C, (C⊥ ∩ 2R^n)⟩, which can be written as ⟨C, ⟨(2x + 2, 2)⟩⟩. Notice here that C_2 ≠ ⟨C, 2C⊥⟩, since (2x + 2, 2) ≠ 2(β_1, β_2) for any (β_1, β_2) ∈ C⊥; note, however, that (2x + 2, 2) is in C⊥. These examples show why the theorem requires ⟨C, (C⊥ ∩ αR^n)⟩ rather than simply ⟨C, αC⊥⟩, and the difference a choice of α can make.
We shall now investigate when free self-dual codes exist. A code C is free if it is isomorphic to R^k for some integer k.
Lemma 4.11. Let C be a free self-dual code of length n over a local Frobenius ring R of order 16 with a maximal ideal of size 8. Then n is even.
Proof. Since C is a free self-dual code, |C|² = |R|^n. This gives |R|^{2k} = |R|^n for an integer k, which gives that n is even.

Proof. Since R/J(R) ≅ F_2 and 1 = −a² − b², we have that either a or b is a unit and the other is in J(R). Assume a is a unit. Then a² = 1 + s, where s ∈ J(R)².
Finally, we show that there are no a, b ∈ Z_4[x]/⟨x³ − 2, 2x⟩ with a² + b² = −1. To that end, assume there are such a, b. We have seen in the proof of Lemma 4.12 that we may assume a is a unit and b ∈ J(Z_4[x]/⟨x³ − 2, 2x⟩); then a and 1 + b are units. Note that since 2 ∈ Soc(Z_4[x]/⟨x³ − 2, 2x⟩), we have 2b = 0, and therefore (1 + b)² = 1 + 2b + b² = 1 + b². The following corollary is a direct consequence of Lemma 4.13 and Theorem 4.14. Lemma 4.15. Let S be a subring of the Frobenius ring R. If C is a self-dual code in S^n with generator matrix of the form (I_{n/2} | A), then C ⊆ R^n is a free self-dual code.
Proof. Since the generator matrix is of the form (I_{n/2} | A), the generated code has size |R|^{n/2}. Then, if v_i, w_j ∈ S^n and α_i, β_j ∈ R, we have [Σ α_i v_i, Σ β_j w_j] = Σ_{i,j} α_i β_j [v_i, w_j] = 0, giving that the code is self-orthogonal. Therefore, the code is self-dual.
We can use this lemma to find free self-dual codes even when Lemma 4.13 does not apply. For rings of characteristic 4, the ring contains Z_4 as a subring, and a suitable matrix over Z_4 generates a free self-dual code in R^n. In order to get self-dual codes over the local rings of order 16 which have an element γ with γ² = −1, we can apply the building-up construction. Hence, we can obtain self-dual codes by the building-up construction over rings including F_2[u, v]/⟨u², v²⟩, F_2[u, v]/⟨u² + v², uv⟩, and Z_4[x, y]/⟨x² − 2, xy − 2, y², 2x, 2y⟩. For more detailed information about the building-up construction, see [4].
Construction of formally self-dual codes
In this section, we shall investigate a construction technique for formally self-dual codes. If X is either an n × n circulant or symmetric matrix over R, then the code generated by the matrix [I_n | X] is a formally self-dual code of length 2n.
Proof. In Theorems 5.1 and 5.2 of [14], these results were proven specifically for R = Z_4[x]/⟨x²⟩. Those proofs work verbatim for the rings considered here, since in these rings, for any element c ∈ R, we have w_L(c) = w_L(−c).
Note that the matrix G = [I_n | X] does not always generate a formally self-dual code over Z_4[x, y]/⟨x², xy − 2, y², 2x, 2y⟩ or Z_4[x, y]/⟨x² − 2, xy − 2, y², 2x, 2y⟩. For these rings, a matrix G built from w and u generates a formally self-dual code, where w is the generator of the socle of the corresponding ring and u is a generator of one of the ideals of order 4.
"Mathematics"
] |
LogEvent2vec: LogEvent-to-Vector Based Anomaly Detection for Large-Scale Logs in Internet of Things
Log anomaly detection is an efficient method to manage modern large-scale Internet of Things (IoT) systems. More and more works have started to apply natural language processing (NLP) methods, in particular word2vec, to log feature extraction. Word2vec can extract the relevance between words and vectorize the words. However, the computational cost of training word2vec is high. Anomalies in logs depend not only on an individual log message but also on the log message sequence. Therefore, the word vectors from word2vec cannot be used directly; they need to be transformed into vectors of log events and further into vectors of log sequences. To reduce the computational cost and avoid multiple transformations, in this paper we propose an offline feature extraction model, named LogEvent2vec, which takes the log event as the input of word2vec, extracting the relevance between log events and vectorizing log events directly. LogEvent2vec can work with any coordinate transformation method and anomaly detection model. After obtaining the log event vectors, we transform them into log sequence vectors by bary or tf-idf, and three kinds of supervised models (Random Forests, Naive Bayes, and Neural Networks) are trained to detect the anomalies. We have conducted extensive experiments on a real public log dataset from BlueGene/L (BGL). The experimental results demonstrate that LogEvent2vec can reduce the computational time by a factor of 30 and improve accuracy, compared with word2vec. LogEvent2vec with bary and Random Forest achieves the best F1-score, and LogEvent2vec with tf-idf and Naive Bayes needs the least computational time.
Introduction
Internet of Things (IoT) [1,2] has provided the possibility of easily deploying tiny, cheap, available, and durable devices, which are able to collect various data in real time, with continuous supply [3][4][5][6][7]. IoT devices are vulnerable and usually deployed in harsh and extreme natural environments, thus solutions that can improve monitoring services and the security of IoT devices are needed [8][9][10]. Most smart objects can accumulate log data obtained through sensors during operation. The logs record the states and events of the devices and systems, thus providing a valuable source of information which can be exploited both for research and industrial purposes. The reason is that a large amount of log data stored in such devices can be analyzed to observe user behavior patterns or detect errors in the system. Based on log analysis, better IoT solutions can be developed or updated and presented to the user [11]. Therefore, logs are one of the most valuable data sources for device management, root cause analysis, and IoT solutions updating. Log analysis plays an important role in IoT system management to ensure the reliability of IoT services [12]. Log anomaly detection is a part of log analysis that analyzes the log messages to detect the anomalous state caused by sensor hardware failure, energy exhaustion, or the environment [13].
Logs are semi-structured textual data. An important task is anomaly detection in logs [14], which differs from classification and detection in computer vision [15-18], digital time series [19-23], and graph data [24]. In fact, the traditional ways of dealing with anomalies in logs are very inefficient: operators manually check the system log using regular expression matching or keyword searching (for example, "failure", "kill") to detect anomalies, based on their domain knowledge. This kind of anomaly detection is not applicable to large-scale systems.
Many existing works propose schemes to process logs automatically. Log messages are free-form text and semi-structured data, which should be turned into structured data for further analysis. Log parsing [25-27] extracts the structured or constant part from log messages; the constant part is called the log template or log event. For example, for the log message "CE sym 2, at 0x0b85eee0, mask 0x05", the log event is "CE sym < * >, at < * >, mask < * >".
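As a rough illustration of what log parsing produces, the following sketch (our own, not one of the parsers cited above) masks the variable tokens of the example message to recover its log event; the regular expressions are tailored to this example only and are an assumption, not a general parser.

```python
# Minimal sketch of template extraction: mask variable tokens with "< * >".
import re

VARIABLE_PATTERNS = [
    (re.compile(r"0x[0-9a-fA-F]+"), "< * >"),   # hex addresses / masks
    (re.compile(r"\b\d+\b"), "< * >"),          # decimal integers
]

def to_log_event(message: str) -> str:
    """Replace variable tokens in a raw log message with wildcards."""
    for pattern, wildcard in VARIABLE_PATTERNS:
        message = pattern.sub(wildcard, message)
    return message

print(to_log_event("CE sym 2, at 0x0b85eee0, mask 0x05"))
# -> "CE sym < * >, at < * >, mask < * >"
```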
Although log events are structured, they are still text data. Most machine learning models for anomaly detection are not able to handle text data directly. Therefore, to extract features of the log event or derive a digital representation of it is a core step. According to the feature extraction results, several machine learning models are used for anomaly detection, such as Regression, Random Forest, Clustering, Principal Component Analysis (PCA), and Independent Component Analysis (ICA) [28]. At first, many statistical features of log event [29,30] are extracted, such as sequence, frequency, surge, seasonality, event ratio, mean inter-arrival time, mean inter-arrival distance, severity spread, and time-interval spread.
More and more works start to apply natural language processing (NLP) methods for the log event vectorization, such as bag-of-words [31], term frequency-inverse document frequency (tf-idf) [32,33] and word2vec [34,35]. Most of the above works are based on the word. Anomalies in logs mostly depend on the log message sequence. Meng et al. [32] form the log event vector by the frequency and weights of words. The log event vector is transformed into the log sequence vector as the input of the anomaly detection model. The transformation from word vector to log event vector or log sequence vector is called coordinate transformation. The frequency and weight of words ignore the relevance between words. Bertero et al. [34] detect the anomaly based on the word vector from word2vec [36], which is an efficient method to extract the relevance between words. The word vector is converted to the log event vector, and then the log event vector is converted to the log sequence vector before anomaly detection. However, the computing cost of training word2vec is high and it needs to transform the word vector twice.
As systems become increasingly complex, there is a large amount of log data. The number of words in each log message is in the range from 10 to 10². Processing words directly is not suitable for large-scale log anomaly detection. Therefore, He et al. [31] propose to count the occurrence number of log events to obtain log sequence vectors directly, so that the coordinate transformation is unnecessary. In addition, the number of log events is far less than the number of words; since the length of the vector is based on the number of words or log events, the dimension of the vector is shortened, which further reduces the computational cost. However, the frequency of log events ignores the relevance between log events. Therefore, to extract the relevance between log events, reduce the computational cost, and avoid multiple transformations, we investigate the log anomaly detection problem by word2vec with log events as input. The main contributions can be summarized as follows:
• We propose an offline low-cost feature extraction model, named LogEvent2vec, which first takes log events as the input of the word2vec model to vectorize log events directly. The relevance between log events can be extracted by word2vec. Only one coordinate transformation is necessary to get the log sequence vector from the log event vector, which decreases the number of coordinate transformations. Training on log events is more efficient because the number of log events is less than that of words, which reduces the computational cost.
• LogEvent2vec can work with any coordinate transformation method and anomaly detection model. After getting the log event vectors, they are transformed into log sequence vectors by bary or tf-idf. Three kinds of supervised models (Random Forests, Naive Bayes, and Neural Networks) are trained to detect the anomalies.
• We have conducted extensive experiments on a real public log dataset from BlueGene/L (BGL).
The experimental results demonstrate that our proposed LogEvent2vec can reduce the computational time by a factor of 30 and improve the accuracy of anomaly detection, compared with word2vec.
• Among the different coordinate transformation methods and anomaly detection models, LogEvent2vec with bary and Random Forest achieves the best F1-score, and LogEvent2vec with tf-idf and Naive Bayes needs the least computational time. Tf-idf is weaker than bary in terms of accuracy, but it significantly reduces the computational time.
The rest of the paper is organized as follows. We introduce the related work in Section 2, and present the general framework of log anomaly detection and the formulation of our work in Section 3. We further provide an overview of our scheme, the log parsing, feature extraction, and anomaly detection model in Section 4. Finally, we evaluate the performance of the proposed algorithms through extensive experiments in Section 5 and conclude the work in Section 6.
Related Work
According to the framework of log anomaly detection in Section 3, log anomaly detection consists of several important steps. We review the related works for each step.
Log Parsing
Log parsing extracts the log template or log event from the raw log. A log template is a log event that records events occurring in the execution of a system. FT-tree [25] identifies the longest combination of frequently occurring words as a log template. He et al. [26] design and implement a parallel log parser (namely POP) on top of Spark, a large-scale data processing platform. The raw log is divided into constant and variable, and the same log events are combined into the same clustering group by hierarchical clustering. He et al. also propose an online log parsing method, namely Drain [27], which uses a fixed depth parse tree to accelerate parsing. He et al. [37] provide the tools and benchmarks for automated log parsing.
Feature Extraction
Extracting the feature of logs is the basis of anomaly detection. Zhang et al. [29] propose Prefix to extract four features (sequence, frequency, surge, seasonality) from the log sequence and form a feature matrix. Khatuya et al. [30] select features from system logs, including event count, event ratio, mean inter-arrival time, mean inter-arrival distance, severity spread, and time-interval spread, and transform the log events into score matrix. Liu et al. [38] extract 10 features and compress to two features.
In addition, NLP methods have started to attract researchers' interest for vectorizing log events, such as bag-of-words [39], tf-idf [40], and word2vec.
He et al. [31] count the occurrence number of each log event to form the event count vector for each log sequence, whose basic idea draws from bag-of-words. Meng et al. [32] propose LogClass which combines a word representation method, named tf-idf, with the Positive-unlabeled (PU) learning model to construct device-agnostic vocabulary with partial labels. Lin et al. [33] propose an approach named LogCluster which turns each log sequence into a vector by Inverse Document Frequency (IDF) and Contrast-based Event Weighting.
Bertero et al. [34] consider logs as regular text and first apply a word embedding technique based on Google's word2vec algorithm, in which logfiles' words are mapped to a high dimensional metric space. Then, the coordinate of the word is transformed into the log event vector, and the coordinate of the log event vector is transformed into the log sequence vector. Meng et al. [35] propose LogAnomaly, a framework to model a log stream as a natural language sequence. They propose a novel, simple feature extraction method, template2vec, to extract the semantic information hidden in log templates by a distributional lexical-contrast embedding model (dLCE) [41]. The word vector is transformed to the log event vector, which is fed into the long short-term memory (LSTM) detection model.
According to the type of anomaly detection, the word vectors from word2vec need to form either the log event vector or the log sequence vector. For example, the log event vector is enough for LSTM [35], while the log sequence vector is needed for Random Forest or Naive Bayes [34]. Table 1 summarizes the NLP methods for log feature extraction. To avoid multiple transformations, the objects of the NLP methods shift from words to log events; therefore, this paper handles log events directly.
Ridge regression is used to estimate the abnormal score from the features [30], and the total weight vector obtained by ridge regression expresses the relative importance of the different features. Random Forest is used for anomaly detection based on the feature matrix in Prefix [29]. LogClass [32] classifies anomalies in device logs with Random Forest.
LogCluster [33] clusters the logs to ease log-based problem identification and utilizes a knowledge base to check whether a log sequence has occurred before. Liu et al. [38] make use of a mixed-attribute clustering method, k-prototype, which transforms the data from 10 features into a new data set to reduce the feature dimensions. Then, a k-Nearest Neighbor (k-NN) classifier is used to identify the real abnormalities in the new data set, which greatly reduces the calculation scale and time. Loglens [42] is a real-time log analysis system which clusters log events by a similarity measure.
A comparison among six state-of-the-art log-based anomaly detection methods is presented in [31], including three supervised methods (Logistic Regression, Decision Tree, and Support Vector Machine (SVM)) and three unsupervised methods (LogCluster, PCA, and Invariant Mining), together with an open-source toolkit that allows ease of reuse.
In addition, deep learning methods [43] are applied in log anomaly detection [35]. Deeplog [44] uses LSTM to model sequences of log keys, automatically learns the normal pattern from normal log data, and then detects system anomalies. Refs. [45,46] analyze the application of various LSTM models in anomaly detection, such as bidirectional LSTM, stacked LSTM, etc.
In this paper, we show that our feature extraction algorithm can work well with various anomaly detection methods.
General Framework and System Model
In this section, we introduce the general framework of log anomaly detection and the formulation of our work. The general framework of log anomaly detection consists of three steps: log parsing, feature extraction, and anomaly detection, as shown in Figure 1. Table 2 summarizes the notations and definitions used in this paper. Table 2. List of notations.
L — The log data
N — The number of lines in the log data
E — The set of log events
M — The number of log events
LSE — The set of log sequences
W — The window size, which decides the length of a log sequence
T — The vector space
l_i — The ith log message
p(·) — The mapping function of log parsing
p(l_i) — The log event of log message l_i
lse_i — The ith log sequence, that is (p(l_{iW+1}), p(l_{iW+2}), . . . , p(l_{iW+W}))
v(e) — The vector of log event e
f(lse_i) — The prediction of log sequence lse_i, that is f(v(p(l_{iW+1})), v(p(l_{iW+2})), . . . , v(p(l_{iW+W})))
y_i — The label of log sequence lse_i
Log Parsing
Logs are semi-structured. A log message can be divided into two parts: a constant part and a variable part (some specific parameters). A log event is the template (constant part) of a log message. To turn semi-structured raw logs into structured data, log parsing extracts a set of templates to record events that occur during the execution of a system. In this paper, we do not distinguish between the log template and the log event.
The log data from a system are denoted by L. The log data contain N lines of log messages. The ith log message is denoted by l i ∈ L, 1 ≤ i ≤ N. Every log message is generated by an application of the system to report an event. Every log message consists of a list of words, similar to a sentence.
Log parsing [27] is used to remove all specific parameters from log messages and extract all the log events. The set of log events is denoted by E, in which the number of log events is M. In this way, each log message is mapped into a log event. Log parsing can be represented by the mapping function p, and the log event of the log message l_i can be described as

p(l_i) ∈ E. (1)

Then, the log data are divided into various chunks; a chunk is a log sequence. We assume that a fixed window is used and that the window size, denoted by W, decides the length of the log sequences. There are N/W log sequences, where the set of log sequences is denoted by LSE. The ith sequence consists of the W log messages l_{iW+1}, l_{iW+2}, . . . , l_{iW+W}. Each log message in a log sequence can be mapped into a log event [47]. As a result, the log sequence can be treated as a list of log events. The log sequence lse_i is denoted by

lse_i = (p(l_{iW+1}), p(l_{iW+2}), . . . , p(l_{iW+W})). (2)
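To make the windowing concrete, the following minimal sketch (ours, not the authors' released code) groups a parsed event stream into the non-overlapping fixed windows of Equation (2); the event list and W = 3 mirror the toy example of Figure 2.

```python
def to_log_sequences(events, W):
    """Group parsed events p(l_1), ..., p(l_N) into N/W windows of size W."""
    return [events[i:i + W] for i in range(0, len(events) - W + 1, W)]

# With W = 3, the nine events of Figure 2 yield the sequences lse_1..lse_3:
seqs = to_log_sequences(["E1", "E2", "E3", "E3", "E4", "E4", "E5", "E3", "E1"], 3)
# -> [["E1", "E2", "E3"], ["E3", "E4", "E4"], ["E5", "E3", "E1"]]
```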
Feature Extraction
Although log events are structured, they still consist of text. Therefore, log events should be numerically encoded for further anomaly detection; the text of log events can be encoded by NLP models. The list of logs is divided into various chunks, which are log sequences, and a feature vector is generated to represent each log sequence.
Word2vec [36] is used to extract features of log events. Generally speaking, word2vec maps words of a text corpus into a Euclidean space. In the Euclidean space, relevant words are close, while irrelevant words are far away.
In our case, we use word2vec to map the log events of a log sequence into a Euclidean space. The input of word2vec is a list of log events instead of a list of words. Thus, every log event gets a coordinate, denoted by v(e), e ∈ E, in a vector space T. After mapping each log event, a log sequence can be represented by a function of all its log events' coordinates. It means that each log sequence is also mapped into the vector space. The mappings of log events and log sequences can be represented as two functions, v : E → T and f : LSE → T. According to the definition of the log sequence in Equation (2), the log events of log sequence lse_i are p(l_{iW+1}), p(l_{iW+2}), . . . , p(l_{iW+W}). The coordinate of the log event related to the log message l_j can be denoted by v(p(l_j)).
Therefore, the coordinates of these log events are v(p(l_{iW+1})), v(p(l_{iW+2})), . . . , v(p(l_{iW+W})). By the above-described procedure, the coordinate of the log sequence depends on all its log events' coordinates. The log sequence lse_i can be assigned a coordinate by

f(lse_i) = f(v(p(l_{iW+1})), v(p(l_{iW+2})), . . . , v(p(l_{iW+W}))). (3)
Anomaly Detection
All feature vectors of the log sequences are samples on which machine learning or deep learning models are trained to detect anomalies. The trained model then predicts whether a new log sequence is anomalous or not.
A binary classifier c is trained on {f(lse_i) | lse_i ∈ LSE} ⊂ T. Such a classifier can be treated as an ideal separation function c : T → [0, 1]. The classifier determines whether a log sequence lse_i is anomalous (label y_i = 1 denotes an anomalous log sequence and y_i = 0 a normal one). When an anomalous event occurs, the log message at that time is labeled anomalous. If an anomalous log message belongs to a log sequence, the log sequence is labeled as an anomaly; otherwise, i.e., when all log messages in it are normal, the log sequence is normal. If a log sequence contains log events that were not observed during training, those events are simply ignored.
Methodology
The overview of LogEvent2vec-based log anomaly detection is shown in Figure 2. The first block shows nine raw logs in the BGL dataset. The second block is the log parsing step, which extracts five log events from the raw logs by Drain; each log is mapped into a log event. The third block is the feature extraction step: logs are divided into log sequences by a fixed window, each log event vector is obtained by LogEvent2vec, which takes the log event as the processing object, and the log sequence vector is calculated from all log event vectors in the log sequence according to bary or tf-idf. The fourth block is the anomaly detection, where the anomalies are marked by the red line; three kinds of supervised models (Random Forests, Naive Bayes, and Neural Networks) are trained to detect anomalies. The detailed process of each step is described below.

After parsing by Drain [27], the log event is shown in the last row of Table 3. The semi-structured raw log message is converted into structured information: the variable part of the log message is replaced by a wildcard, and the constant part remains unchanged. Each log event has a unique event ID and event template. The event template of the third log message is "CE sym < * >, at < * >, mask < * >", with log event ID E3, as shown in the second block of Figure 2. Similarly, we get the five log events E1–E5 in the second block from the nine raw log messages, and each raw log message is mapped into a log event: for example, the first log message is mapped into log event E1, and the second into log event E2.
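As a rough illustration of what the parsing step produces (a simplified stand-in of our own, not the actual Drain algorithm), the following sketch masks variable tokens with a wildcard so that messages emitted by the same print statement collapse to one template:

```python
import re

def to_template(message):
    """Replace variable tokens (numbers, hex addresses) with the wildcard."""
    return re.sub(r"0x[0-9a-fA-F]+|\d+", "< * >", message)

def assign_event_ids(messages):
    """Assign E1, E2, ... to distinct templates in order of first appearance."""
    ids, table = [], {}
    for m in messages:
        t = to_template(m)
        table.setdefault(t, "E%d" % (len(table) + 1))
        ids.append(table[t])
    return ids, table

# Both messages below collapse to "CE sym < * >, at < * >, mask < * >":
ids, table = assign_event_ids(["CE sym 2, at 0x0b85eee0, mask 0x05",
                               "CE sym 7, at 0x1f4521c0, mask 0x40"])
```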
Feature Extraction
LogEvent2vec takes the log event as input of the word2vec model, and then transforms the log event vector to the log sequence vector. Because the number of log events is far less than the number of words, LogEvent2vec reduces the training cost. In addition, only one coordinate transformation is necessary to get the log sequence vector from the log event vector.
LogEvent2vec: Log Event Training Via Word2vec
Word2vec maps words to vectors and comes in two models, the continuous skip-gram model (skip-gram) and the continuous bag-of-words model (cbow) [48]. The training input of the cbow model is the context word vectors of a target word, and the output is the word vector of the target word. Skip-gram works in the opposite direction: the input is the word vector of a target word, and the output is the context word vectors of the target word. The cbow model is used in this paper.
The cbow model consists of three layers [49]: an input layer, a hidden layer, and an output layer, as shown in Figure 3. For example, if the corpus is "I drink coffee every day", we can get the embedding of "coffee" from the other four words "I", "drink", "every", and "day", which are taken as input. Similarly, we can get the embeddings of all words.
LogEvent2vec takes the log events as the input of word2vec to get the embedding of each log event in the vector space T, whose dimension is dim(T). If the target is the log event p(l_{iW+j}), the remaining log events p(l_{iW+1}), p(l_{iW+2}), . . . , p(l_{iW+j−1}), p(l_{iW+j+1}), . . . , p(l_{iW+W}) in the log sequence lse_i are taken as input, as shown in Figure 3. For example, we assume that the fixed window size is 3, as shown in the third block of Figure 2. The nine log messages are divided into three sequences (lse_1, lse_2, lse_3), which are [E1, E2, E3], [E3, E4, E4], and [E5, E3, E1], respectively. LogEvent2vec takes log events E1 and E3 in the first sequence as the input and log event E2 as the output. Similarly, log events E3 and E4 in the second sequence are taken as the input of word2vec while the target is log event E4, and log events E5 and E1 in the third sequence are taken as the input while the target is log event E3.
In detail, a one-hot vector of dimension |E| is used to represent each log event. There are W − 1 one-hot vectors in the input layer, and the output layer is the one-hot vector of the target log event. The hidden layer's dimension is dim(T). After training the model, we can get the embedding of a log event by multiplying its one-hot vector by the weight matrix W_M ∈ R^{|E|×dim(T)}. Assuming that the dimension is set to 5, each of the log events E1–E5 is embedded as a five-dimensional vector.
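A minimal sketch of this training step using gensim's word2vec implementation (gensim ≥ 4; our illustration, assuming the toy sequences above): each log sequence plays the role of a sentence, each log event the role of a word, sg=0 selects the cbow model, and vector_size corresponds to dim(T).

```python
from gensim.models import Word2Vec

# each log sequence is a "sentence" and each log event a "word"
sequences = [["E1", "E2", "E3"], ["E3", "E4", "E4"], ["E5", "E3", "E1"]]
model = Word2Vec(sentences=sequences, vector_size=5,  # dim(T) = 5
                 window=2, min_count=1, sg=0)         # sg=0 -> cbow
v_E3 = model.wv["E3"]   # the coordinate v(e) of log event E3 in the space T
```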
From Log Event Vector to Log Sequence Vector
All log event vectors in the space T are produced by LogEvent2vec. To get the log sequence vector in the space T, we transform the log event vectors into the log sequence vector by bary or tf-idf:
• Bary defines the vector of a log sequence as the average of all its log events, as in Equation (4):

f(lse_i) = (1/W) ∑_{j=1}^{W} v(p(l_{iW+j})). (4)

• Tf-idf defines the vector of a log sequence as the weighted average of all its log events. The weight depends on the frequency of log events: a rare log event has a higher weight than a frequent one.
According to bary, the vector of the first log sequence lse_1 is the average position of E1, E2, and E3. After transformation, we obtain all log sequence vectors, which form a matrix of size N/W × dim(T).
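A sketch of the two transformations, assuming the `model` and `sequences` from the previous snippet; bary is the plain average, while the tf-idf weighting shown here (via scikit-learn's TfidfVectorizer) is one reasonable reading of the weighted average, not necessarily the paper's exact formula.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def bary(seq, wv):
    """Equation (4): the log sequence vector is the mean of its event vectors."""
    return np.mean([wv[e] for e in seq], axis=0)

def tfidf_sequence_vectors(sequences, wv):
    """Weighted average of event vectors, weights from tf-idf (rare -> heavier)."""
    docs = [" ".join(seq) for seq in sequences]
    vec = TfidfVectorizer(lowercase=False, token_pattern=r"\S+")
    weights = vec.fit_transform(docs).toarray()          # (N/W) x |E|
    events = vec.get_feature_names_out()
    E = np.array([wv[e] for e in events])                # |E| x dim(T)
    return (weights @ E) / weights.sum(axis=1, keepdims=True)

seq_matrix = np.array([bary(s, model.wv) for s in sequences])  # N/W x dim(T)
```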
Anomaly Detection
Anomaly detection can be treated as a binary classification problem, for which many classifiers are available. In this paper, we use three supervised algorithms to detect anomalies: Random Forests, Naive Bayes, and Neural Networks. The log sequence matrix is the input of the anomaly detection model.
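A minimal sketch of this step with scikit-learn, assuming X is the N/W × dim(T) log sequence matrix and y the sequence labels; the 90/10 split matches the experimental setup described later.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def detect_anomalies(X, y):
    """Train the three supervised detectors on the log sequence matrix X."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1)  # 90/10 split
    models = [RandomForestClassifier(), GaussianNB(), MLPClassifier(max_iter=500)]
    for clf in models:
        clf.fit(X_tr, y_tr)
        print(type(clf).__name__, clf.score(X_te, y_te))
    return models
```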
Datasets
To evaluate the performance of our proposed algorithms, we use the BGL dataset from the BlueGene/L supercomputer system at Lawrence Livermore National Labs (LLNL) [50]. Table 4 shows the basic information of the BGL dataset. There are 4,747,963 log messages and 348,460 anomalous log messages in the BGL dataset.
Experimental Setup
All experiments are run on Baidu AI Studio (Beijing, China), which provides a server with an Intel(R) Xeon(R) Gold 6148 CPU with 8 cores, an NVIDIA Tesla V100 GPU with 16 GB of video memory, and 32 GB of RAM.
After Drain [27] log parsing, we obtain 376 log events. By default, following [34], the window size of the fixed windows is set to 5000 and the dimension of the vector space, dim(T), is set to 20; this means that the length of each log sequence is 5000. After dividing, there are 943 log sequences. We randomly choose 90% of the log sequences as training data and the remaining 10% as testing data. All reported results are averages over five runs.
We compare our feature extraction scheme with the method in [34] under different coordinate transformations and anomaly detection models. The two feature extraction schemes are combined with two coordinate transformations, bary and tf-idf, and three supervised methods, Random Forests, Naive Bayes, and Neural Networks. The two feature extraction schemes are described as follows:
• Word [34]: takes words as input of the word2vec model after removing non-alphanumeric characters. After obtaining the word vectors, it performs the coordinate transformation twice to get the log file vector.
• LogEvent: our approach takes log events as input of the word2vec model after log parsing. After obtaining the log event vectors, it performs the coordinate transformation once to get the log sequence vector.
As shown in Table 5, we have 12 kinds of schemes, which we denote by three combined characters. For example, "W-b-NB" means the method in [34] with two bary coordinate transformations and the Naive Bayes anomaly detection model, and "LE-t-NN" means our approach with a tf-idf coordinate transformation and the Neural Networks anomaly detection model. The implementations of tf-idf, Random Forests, Naive Bayes, and Neural Networks are from the scikit-learn (http://scikitlearn.org/) standard library.
Table 5. The compared schemes.

Steps — Models
Word2vec input unit — Word / Log event
Coordinate transformation — Bary / Tf-idf
Anomaly detection model — Random Forests / Naive Bayes / Neural Networks

F1-score, Area Under Curve (AUC), and computational time are used to evaluate the anomaly detection methods. The F1-score is a statistic used to measure the accuracy of a binary classification model; it takes into account both the precision and the recall of the model and can be regarded as their harmonic mean, as in Equation (5):

F1 = 2 · precision · recall / (precision + recall). (5)

It has its best value at 1 and its worst at 0. AUC is an evaluation index measuring the quality of a binary classification model; it indicates the probability that a positive example is ranked ahead of a negative example. Computational time includes the time of feature extraction, the time of training the anomaly detection model, the time of issuing all predictions on the test set, and the total time. The computational time of feature extraction consists of training word2vec and the coordinate transformations. The total time spans from word2vec training to the anomaly detection, excluding preprocessing, because preprocessing differs between our scheme and [34].
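For concreteness, the two accuracy metrics can be computed as follows (a sketch with toy values, using the scikit-learn implementations):

```python
from sklearn.metrics import f1_score, roc_auc_score

y_true  = [0, 0, 1, 1]          # ground-truth labels of the test sequences
y_pred  = [0, 1, 1, 1]          # hard predictions from the classifier
y_score = [0.1, 0.6, 0.8, 0.9]  # predicted probability of the anomalous class

f1  = f1_score(y_true, y_pred)        # harmonic mean of precision and recall
auc = roc_auc_score(y_true, y_score)
```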
Impact of Anomaly Detection Model
To investigate the effect of different anomaly detection models, we analyze the F1-score, AUC, and computational time of Random Forests, Naive Bayes, and Neural Networks, while other parameters are set to the default values and the coordinate transformation method is bary.
The results are shown in Figure 4. In terms of the detection model, the anomaly detection performance of W-b-RF (F1-score = 0.83, AUC = 0.96) and W-b-NN (F1-score = 0.80, AUC = 0.94) is far higher than that of W-b-NB (F1-score = 0.72, AUC = 0.89). With log events as input, the detection performance of LE-b-RF (F1-score = 0.88, AUC = 0.97) and LE-b-NN (F1-score = 0.89, AUC = 0.95) is better than that of LE-b-NB (F1-score = 0.78, AUC = 0.94). Random Forest and Neural Network as classifiers thus outperform Naive Bayes. The reason is that Random Forest is an ensemble of decision trees, in which each independent decision tree processes the samples and predicts the output label. Neural Networks are fully connected: neurons are grouped in layers, each layer processes the data and delivers it to the next, and the last layer of neurons is responsible for the prediction. Both detection models therefore account for the relevance between features, whereas the Naive Bayes algorithm assumes that the features are independent. Since there is a certain relevance between the log events in a log sequence, Random Forest and Neural Network perform better than Naive Bayes.
In terms of the input, the results of anomaly detection using log events as input are better than those using words as input; the AUC score of LE-b-RF is 0.97. The reason is that the representation of the log sequence vector is more accurate in LogEvent2vec. Inputting words to word2vec [34] requires transforming word vectors into the log sequence vector by two coordinate transformations, so some bias enters the representation of the log sequence vector and affects the final anomaly detection results. Our scheme only needs to perform one coordinate transformation to get the log sequence vector, so the log sequence vector representation is more accurate. LogEvent2vec reduces the number of coordinate transformations and clearly improves the F1-score. The results confirm the rationality of LogEvent2vec.

Figures 5 and 6 show the computational time of the three classifiers for anomaly detection. Figure 5a shows the time of feature extraction from training word2vec to the coordinate transformation, where training word2vec consumes the majority of the time. The time required for feature extraction is the highest for Random Forest and the lowest for Naive Bayes. The number of words in the BGL dataset is 1,405,168, while the number of log events is only 376, so the training time for log events in word2vec is far less than that for words; the experiments show that LogEvent2vec takes much less time to train word2vec (32.97 s vs. 955.33 s). Figures 5b and 6a show the time needed to train each classifier on 848 log sequences and to issue predictions for 95 log sequences. LogEvent2vec and word2vec consume the same time in training the classifier and issuing predictions on the test set. Figure 6b shows the total time from training word2vec to completing the anomaly detection; the total time is mainly determined by the time of feature extraction, so the less time word2vec training takes, the less total time is needed. Overall, the experiments show that LogEvent2vec is about 30 times faster than word2vec.
Impact of Coordinate Transformation
We analyze the performance of bary and tf-idf with the other parameters set to their default values. The results are shown in Figure 5a. The computational time of feature extraction in W-b-RF is 959.31 s, while that in W-t-RF is only 347.67 s, so tf-idf reduces the time consumed in feature extraction. Figures 5b and 6a show the time consumed by training the anomaly detection model and issuing all predictions on the test set; different coordinate transformations have little impact on these times. Figure 6b shows the total time of anomaly detection with the tf-idf coordinate transformation, which is still mainly determined by the computational time of feature extraction. In summary, tf-idf is weaker than bary in accuracy, but it significantly reduces the computational time.
Impact of the Dimension
To investigate the effect of the dimension of feature space, the number of dimensions is set from 5 to 500 while other parameters are set to the default values.
In Tables 6 and 7, LE-b-NN has the best performance among all classifiers when the dimension ranges from 5 to 50, while LE-b-RF performs best for dimensions from 100 to 500.
Although the AUC score of W-b-RF (F1-score = 0.81) is the highest at 200 dimensions, its F1-score is lower than that of LE-b-RF (F1-score = 0.84). The difference in AUC between LE-b-RF and W-b-RF is 0.008, while the difference in F1-score is 0.03. Therefore, LE-b-RF performs better than W-b-RF at 200 dimensions. Generally speaking, using log events as input of the word2vec model yields better anomaly detection than using words as input. The final experimental results also confirm that LogEvent2vec outperforms word2vec.
Tables 8–11 report the time of feature extraction, the time of training the anomaly detection model, the time of issuing all predictions on the test set, and the total time with the bary coordinate transformation, respectively. For every anomaly detection model and feature extraction scheme, the time of feature extraction and of training the anomaly detection model increases as the dimension increases; hence, the total time also increases with the dimension. The dimension, however, has little impact on the time of issuing all predictions on the test set.

Table 9. Computational time of the anomaly detection with different dimensions.
Conclusions
We propose LogEvent2vec, an offline feature extraction approach that takes log events as the input of word2vec to extract the relevance between log events and to reduce the time of training and coordinate transformation. LogEvent2vec can work with any coordinate transformation method and anomaly detection model. The experimental results demonstrate that our approach is effective and outperforms the state-of-the-art work. Compared with the Neural Network and Naive Bayes models, Random Forest as the classifier works best with LogEvent2vec. Different coordinate transformation methods (bary and tf-idf) have little influence on the accuracy of anomaly detection, but tf-idf significantly reduces the computational time. Combining LogEvent2vec with LSTM is our future work.
Radiative and Collisional Molecular Data and Virtual Laboratory Astrophysics
Spectroscopy has been crucial for our understanding of physical and chemical phenomena. The interpretation of interstellar line spectra with radiative transfer calculations usually requires two kinds of molecular input data: spectroscopic data (such as energy levels, statistical weights, transition probabilities, etc.) and collision data. This contribution describes how such data are collected and stored and which limitations exist. We also summarize the challenges of atomic/molecular databases and point out the experiences and problems we are faced with, and we present an overview of future developments and needs in the areas of radiative transfer and molecular data.
Introduction
Many fields in astronomy, such as astrophysics, astrochemistry and astrobiology, depend on data for atomic and molecular (A + M) collision and radiative processes. Among these data collections there are atomic and molecular processes and spectral regions that even today are poorly represented. Therefore, there is an urgent need to collect these data in databases, as well as to develop methods for improving the existing ones. This also requires a joint effort of both scientists and IT software specialists to develop state-of-the-art infrastructures satisfying their needs, such as a Virtual Laboratory [1][2][3].
The Base Astrophysical Targets
Nowadays, data in the field of astrophysics modeling are especially important and needed for simulations and calculations. For example, the A + M data for hydrogen are important for the development of atmosphere models of solar and near-solar-type stars, for radiative transport investigations, and for understanding the kinetics of stellar and other astrophysical plasmas [4,5]. Modern codes for stellar atmosphere modelling, such as PHOENIX (see e.g., [6][7][8]), require the knowledge of atomic data, so access to such atomic data via online databases becomes very important.
The helium A + M data are of particular interest for investigations of helium-rich white dwarf atmospheres [9,10]. Such data are also important in modelling early Universe chemistry (see Coppola et al., 2013 [11]). The data for H and some metal atoms like Li, Na, and Si are important for exploring geo-cosmical plasmas and the interstellar medium, as well as for studies of early Universe chemistry and for the modelling of stellar and solar atmospheres (see, e.g., [12,13]).
Recently, in the papers [14][15][16][17][18] it has been pointed out that the photodissociation of diatomic molecular ions, in both the symmetric and non-symmetric cases, is of astrophysical relevance, could be important in modeling specific stellar atmosphere layers, and should be included in some chemical models. In the symmetric case, the considered processes of molecular-ion photodissociation (bound-free) and ion-atom photoassociation (free-bound) are

A₂⁺ + ε_λ ⇄ A + A⁺, (1)

where A and A⁺ are an atom and an ion in their ground states, and A₂⁺ is a molecular ion in its ground electronic state.
In the non-symmetric case, the similar processes of photodissociation/photoassociation are

AM⁺ + ε_λ ⇄ A + M⁺, (2)

where M is an atom whose ionization potential is lower than that of atom A, and AM⁺ is likewise a molecular ion in its ground electronic state.
In the general case, the molecular ion A₂⁺ or AM⁺ can be in one of the states from the group that contains the ground electronic state. For the solar atmosphere, A usually denotes the atom H(1s) and M one of the relevant metal atoms (Mg, Si, Ca, Na) [14][15][16], but there are cases where A = He and M = H, Mg, Si, Ca, Na. For helium-rich white dwarf atmospheres, A denotes He(1s²) and M denotes H(1s), and possibly carbon or oxygen [19,20].
Recently, the results from [16] have shown the importance of including the symmetric processes with A = H(1s) in stellar atmosphere models like [5]. Also, results for the case A = He(1s²) have been used for modeling DB white dwarf atmospheres (Koester 2016, private communication). The photodissociation of HeH⁺ has been extensively studied from both theoretical and experimental points of view and has been inserted in chemical networks describing the formation and destruction of primordial molecules.
It is well known [21] that the chemical composition of the primordial gas consists of electrons and species such as: helium — He, He⁺, He²⁺ and HeH⁺; hydrogen — H, H⁻, H⁺, H₂⁺ and H₂; deuterium — D, D⁺, HD, HD⁺ and HD⁻; lithium — Li, Li⁺, Li⁻, LiH⁻ and LiH⁺. Chemical abundances in the standard BB model are evaluated from a set of chemical reactions for the early universe [21], as presented in Figure 1 of [21]. One can see that among them are species like the molecular ions H₂⁺, HD⁺, HeH⁺, etc., whose role in primordial star formation is important.
Database Description
The MolD database contains cross-sections for photodissociation processes [22], as well as the corresponding data on molecular species and molecular state characterisations. The MolD project is part of the Serbian Virtual Observatory (SerVO) and the Virtual Atomic and Molecular Data Center (VAMDC) (see Figure 2). The MolD application is implemented as a customisation and extension of the NodeSoftware provided by VAMDC and complies with the VAMDC interoperability standards and protocols for distributed remote queries. The underlying technology is Python-based, with Django as the web framework [23] and MySQL as the relational database system [24]. The web application runs on the Apache web server.
The data model of the MolD application is tailored to suit the needs of theoretical photodissociation data, and yet to map easily onto VAMDC's standardized XSAMS (XML Schema for Atoms, Molecules and Solids) format for the representation and exchange of atomic and molecular data.
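As an illustration of such a tailored data model, a hypothetical Django sketch might look as follows; the class and field names here are ours, not MolD's actual schema, which additionally maps onto XSAMS via the NodeSoftware.

```python
from django.db import models

class MolecularIon(models.Model):
    name = models.CharField(max_length=32)              # e.g. "H2+", "HeH+"
    electronic_state = models.CharField(max_length=64)  # state characterisation

class PhotodissociationCrossSection(models.Model):
    ion = models.ForeignKey(MolecularIon, on_delete=models.CASCADE)
    wavelength_nm = models.FloatField()
    temperature_k = models.FloatField()
    sigma_cm2 = models.FloatField()   # averaged thermal cross-section
```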
Accessing MolD Data
MolD data can be accessed in several ways:
• Via the MolD homepage (http://servo.aob.rs/mold), where an AJAX-enabled (Asynchronous JavaScript and XML) web form allows data querying as well as calculating and plotting average thermal cross-sections along the available wavelengths for a given temperature.
During 2017, MolD entered stage 3 of development. Currently, the database includes cross-section data for processes involving species such as He₂⁺, H₂⁺, MgH⁺, HeH⁺, LiH⁺, and NaH⁺. These processes are important for exploring the interstellar medium and early Universe chemistry, as well as for the modeling of different stellar and solar atmospheres (see papers [11,15,16,20,22]).
Our plans include a transition to new versions of the Django framework and NodeSoftware, with ongoing incremental inclusion of A + M data from our papers. We also intend to develop a more intuitive interface for querying and presenting multidimensional data on our website.
Example: The H₂⁺ Molecular Ion
MolD has been available online since the end of 2014 and contains the data for the photodissociation processes of Equation (1) with A = H(1s) and A = He(1s²). It also contains the relevant data for some other non-symmetric photodissociation processes of Equation (2), where M = Li, Na, Mg or He.
The methods of calculation. The cross-section σ_{J,v}(λ) for the photodissociation of an individual ro-vibrational state (J, v) of the considered molecular ion H₂⁺ is determined in the dipole approximation [14] through the radial matrix element D_{E,J+1;v,J} [25], Equation (3), and the corresponding averaged thermal cross-section is given by

σ_phd(λ, T) = (1/Z) ∑_{J,v} g_{J;v} (2J + 1) e^{−E_{Jv}/(kT)} σ_{J,v}(λ), (4)

where T is the temperature, λ the wavelength, and E_{Jv} the energy of the individual state with angular and vibrational quantum numbers J and v, respectively. Z is the partition function

Z = ∑_{J,v} g_{J;v} (2J + 1) e^{−E_{Jv}/(kT)}.

In this expression the product g_{J;v} × (2J + 1) is the statistical weight of the considered state, and the coefficient g_{J;v} depends on the spin of the nuclei.
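A numerical sketch of this thermal averaging (based on our reconstruction of Equation (4) above), with the Boltzmann-weighted sum taken over a user-supplied list of ro-vibrational states at a fixed wavelength:

```python
import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def sigma_phd(T, states):
    """Thermally averaged photodissociation cross-section at one wavelength.

    states: iterable of tuples (g_Jv, J, E_Jv_in_eV, sigma_Jv) for the
    ro-vibrational states (J, v); the weights reproduce Equation (4) and
    their sum is the partition function Z.
    """
    w = np.array([g * (2 * J + 1) * np.exp(-E / (K_B * T))
                  for g, J, E, _ in states])
    sigma = np.array([s for _, _, _, s in states])
    return float((w * sigma).sum() / w.sum())
```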
The photodissociation cross-section σ_phd(λ, T) given by Equation (4), as well as the coefficients K_ia(λ, T), are determined within the approximation where the processes are treated as the result of radiative transitions between the ground and the first excited adiabatic electronic state of the molecular ion H₂⁺, caused by the interaction of the electron component of the ion-atom system with the free electromagnetic field, taken in the dipole approximation.
For the determination of σ_phd(λ, T), as well as of the coefficients K_ia(λ, T), it is important to know the dipole matrix element D₁₂(R), defined by the relation

D₁₂(R) = ⟨1|D(R)|2⟩,

where R = |R| and D(R) is the operator of the electron dipole momentum. The mentioned adiabatic electronic states, X²Σ_g⁺ and A²Σ_u⁺, are denoted here by |1⟩ and |2⟩, and R is the internuclear distance in the considered ion-atom system.
The described mechanism of the processes causes absorption of a photon with energy ε_λ near the resonant point R = R_λ, where R_λ is the root of the equation

U₂(R) − U₁(R) = ε_λ,

where U₁(R) corresponds to the ground electronic state and U₂(R) to the first excited electronic state.
In Figure 3, the data for σ_{J,v}(λ), Equation (3), are presented for the case J = 0 and v = 10, in the wavelength region 50 nm ≤ λ ≤ 1500 nm.

The spectral coefficients. The absorption process (1), i.e., the process of molecular-ion photodissociation (bound-free) for the case A = H, is characterized by the partial spectral absorption coefficient κ_ia(λ) (see e.g., [25]), taken in the form

κ_ia(λ) = σ_phd(λ, T) N(H₂⁺),

where N(H₂⁺) is the density of H₂⁺ and σ_phd(λ, T) is the average cross-section for the photodissociation of this molecular ion given by Equation (4). As in previous papers [14,25], the partial spectral absorption coefficient κ_ia(λ) is also used in the form

κ_ia(λ) = K_ia(λ, T) N(H) N(H⁺), (9)

where the coefficient K_ia(λ, T) is connected with σ_phd(λ, T) by the relation

K_ia(λ, T) = σ_phd(λ, T) × N(H₂⁺) / [N(H) N(H⁺)]. (10)

In accordance with the definition of the absorption coefficient κ_ia(λ), the coefficient K_ia(λ, T) is given in units of cm⁵. The results for the average photodissociation cross-section σ_phd(λ, T) for the H₂⁺ molecular ion are illustrated in Figure 4. The curves in this figure show the behavior of σ_phd(λ, T) as a function of λ for a wide range of temperatures T relevant for stellar atmospheres (e.g., the solar photosphere). The values of the coefficient K_ia(λ, T), defined by Equation (10), are presented in Table 1 for the region 90 nm ≤ λ ≤ 370 nm with small wavelength steps and for 3000 K ≤ T ≤ 10,000 K, in order to enable easier use (interpolation) of these results. This allows direct calculation of the spectral absorption coefficients when applying any atmosphere model with given plasma parameters and composition.

Solar atmosphere: absorption processes. The influence of the radiative processes (1) can be estimated by comparing their intensities with the intensities of the known concurrent radiative processes (11) and (12). The relative contribution of the processes (1) with respect to the processes (11) and (12) is described by the quantity F_κ, defined by the relation

F_κ = κ_ia / (κ_ea + κ_ei),

where κ_ea is the absorption spectral coefficient of process (11) and κ_ei is the absorption spectral coefficient of process (12) (see papers [14,16]).
Similarly to the He case in DB white dwarf atmospheres [25], calculations of the absorption coefficient were performed for the solar photosphere and the lower chromosphere by means of a standard solar atmosphere model [5], and the total contribution of the processes (1) to the solar opacity was estimated [16]. The results of the calculations of the parameter F_κ for 92 nm ≤ λ ≤ 350 nm are presented in Figure 5. The figure shows that in a significant part of the considered region of altitudes (−75 km ≤ h ≤ 1065 km), the absorption processes (1) together give a contribution that varies from about 10% to about 90% of the contribution of the absorption processes (11) and (12), which are considered the main absorption processes [16].
On the basis of the above, it can be concluded that photodissociation processes represent important destruction channels for molecules in many astrophysical environments and that the features of the interacting radiation are important in their spectral analyses.
Future Developments and Concluding Remarks
Exploiting the full potential of A + M data and database services is an ongoing challenge in virtual data centers. There are still many limitations and problems that users face, such as poor documentation, lack of data evaluation, no open access, etc. The aim of the MolD database is to be accessible and used by the wider scientific community through VAMDC, and to follow certain protocols and defined rules in order to eliminate such limitations and problems. The next step of development, i.e., stage three of the MolD development, will be the implementation of the possibility to fit the tabulated data: we plan to develop fitting formulas for the photodissociation cross-section as a function of the temperature and wavelength. We also intend to update the current database with newly calculated/measured data.
The continuation of such developments and services, such as a constantly updated online A + M database, is crucial for astrophysics and modern physics, given their rapid development, and makes an immense impact on the way science is done in the developing world.
Figure 2. Snapshot of the query page of the Virtual Atomic and Molecular Data Center (VAMDC) portal.

Figure 4. The behaviour of the averaged cross-section σ_phd(λ, T) for photodissociation of the H₂⁺ molecular ion.

Figure 5. Upper panel: the behavior of the temperature T, N_H and N_e as a function of height h within the considered part of the solar atmosphere model; lower panel: a surface plot of the quantity F_κ = κ_ia/(κ_ea + κ_ei) (data taken from [14]) as a function of λ and height h for a model of the solar photosphere [5].
Table 1. The coefficient K_ia (cm⁵), Equation (9), as a function of λ and T.
"Physics"
] |
Optical and Gamma-ray Variability of the vRL NLSy1 Galaxy 1H 0323+342
1H 0323+342 was one of the first vRL NLSy1 galaxies detected at gamma-rays with the Fermi-LAT and is one of the brightest of this class observed at optical wavelengths. We report the results of monitoring the optical flux, polarization and gamma-ray flux of 1H 0323+342 during the past ~5 years. In some cases, the optical flux has been monitored on timescales as short as ~minutes simultaneously with two telescopes, demonstrating, for the first time, the reality of microvariability events with durations as short as ~15 min for this object.
Introduction
Recently, Fermi has identified a small number of very radio-loud narrow-line Seyfert 1 galaxies (vRL NLSy1) as gamma-ray sources (Abdo et al. [1]). These objects exhibit many properties similar to those seen for blazars (Abdo et al. [2]). 1H 0323+342 was one of the first vRL NLSy1 galaxies detected at gamma-rays with the Fermi-LAT and is one of the brightest of this class observed at optical wavelengths. As such, it is an attractive object to monitor at both optical and gamma-ray wavelengths. In this paper, we report the results of monitoring the optical flux, polarization and gamma-ray flux of 1H 0323+342 during the past ~5 years. In addition, the optical flux has been monitored on timescales as short as ~minutes, simultaneously with two telescopes, demonstrating, for the first time, the existence of discrete microvariability events with durations of ~15 min.
Observational Program
Optical photometric observations were obtained using the 31" NURO, 42" Hall, and 72" Perkins telescopes at Lowell Observatory in Flagstaff, Arizona. R-band magnitudes were derived via aperture photometry of in-field comparison stars. These comparison stars were selected for their photometric stability and have been repeatedly re-tested to ensure this property. The finding charts, with comparison star R-band magnitudes, can be found on GSU's dedicated web pages (https://sites.google.com/site/jdmaune/seyfert-fields).

Gamma-ray data were obtained through the Fermi public data server and were collected using the Large Area Telescope (LAT) over the entirety of its mission lifetime. The data were reduced and analyzed using ScienceTools v9r33p0 and the instrument response functions P7REP_SOURCE_V15. The data were binned in 30.5-day increments, and a likelihood analysis was performed on each bin. The resulting gamma-ray lightcurve is co-plotted in Figure 2 (bottom panel) along with the results of our optical monitoring of 1H 0323+342 (top).
Results
Following the technique employed by Carini, Miller and Goodrich [3], simultaneous observations of 1H 0323+342 were obtained using the 31-in and 72-in telescopes at Lowell Observatory (Figure 1). A cross-correlation analysis of the light curves was performed, and the Pearson correlation coefficient was calculated for these light curves, with the result P(0) = 0.914 at zero time lag. This demonstrates, for the first time, the reality of very low-amplitude (∆M_R ≤ 0.05) microvariability events for 1H 0323+342. Previously, Paliya et al. [4] and Paliya et al. [5] reported detecting the presence of intranight optical variability (INOV) for 1H 0323+342; those results report the presence of low-level linear trends occurring during a single night, but not microvariations. Microvariability, as defined in Miller, Carini and Goodrich [6], refers to discrete events occurring on timescales of minutes to hours, or doubling times on similar timescales, but does not include simple low-level linear trends spanning several hours.
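A sketch of the zero-lag test described above, assuming the two simultaneous R-band light curves have already been resampled onto a common time grid; scipy's pearsonr returns the coefficient quoted here as P(0).

```python
import numpy as np
from scipy.stats import pearsonr

def zero_lag_correlation(mag_tel1, mag_tel2):
    """Pearson coefficient of two simultaneous light curves at zero lag."""
    r, p = pearsonr(np.asarray(mag_tel1), np.asarray(mag_tel2))
    return r   # P(0) = 0.914 was obtained for the two light curves in Figure 1
```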
In Figure 2, our multifrequency and polarimetric monitoring observations spanning 2010 to 2015 are presented. 1H 0323+342 was observed to vary by approximately 1.1 magnitudes over the course of our ~5-year program, although the largest variation observed in a single night was ∆M_R = 0.14. The maximum degree of polarization observed during this period was P = 2.29% ± 0.27%. 1H 0323+342 is highly variable at gamma-ray wavelengths: the source was observed to undergo long periods (several consecutive months) of non-detectability punctuated by shorter periods of activity. A gamma-ray high-flux state was observed to precede an optical polarimetric flare by a matter of days near the middle of our monitoring period, occurring as the polarization reached its maximum noted above. Although there seems to be a quasi-coincidence in the occurrence of this gamma-ray/optical flare, the data are too sparsely sampled to allow rigorous analytical support of this possibility. In addition, there is an optical flare near ~MJD 56000 which has no obvious gamma-ray counterpart, and a gamma-ray event near MJD 56500 which has no optical counterpart. This suggests that there is no strong evidence of a gamma-ray/optical correlation of events.
Conclusions
We have, for the first time, demonstrated the reality of microvariability events for 1H 0323+342, with an amplitude of ~0.05 mag. We have also reported the first long-term monitoring of the optical flux and polarization and of the gamma-ray flux for this vRL NLSy1 galaxy. Although this object exhibited more than 1.0 mag of variation during this monitoring program, no strong correlation was found between the optical and gamma-ray flux variations. However, the data sampling may be too sparse to fully address this question.
Figure 1 displays a single night of simultaneous (31" and 72", respectively) observations of 1H 0323+342, demonstrating the reality of extremely low-amplitude microvariability. Figure 2 displays the results of our MW monitoring program. All of our optical polarimetric data were obtained using the 72" (1.83 m) telescope at Lowell Observatory, equipped with the PRISM instrument. Observations were obtained during several different observing runs between November 2010 and March 2015.
Figure 1. Intra-night R-band light curves of 1H 0323+342. Both light curves are plotted on a 0.5 magnitude scale. Microvariability is clearly detectable for this object.
"Physics"
] |
On the Validity of the Ionospheric Pierce Point (IPP) Altitude of 350 km in the Indian Equatorial and Low-Latitude Sector
The GPS data provide an effective way to estimate the total electron content (TEC) from the differential time delay of the L1 and L2 transmissions from the GPS. The spacing of the constellation of GPS satellites in their orbits is such that a minimum of four GPS satellites are observed at any given point in time from any location on the ground. Since these satellites are in different parts of the sky and the electron content in the ionosphere varies both spatially and temporally, the ionospheric pierce point (IPP) altitude, or the assumed altitude of the centroid of mass of the ionosphere, plays an important role in converting the measured slant TEC to vertical TEC and vice versa. In this paper, efforts are made to examine the validity of the IPP altitude of 350 km in the Indian zone, comprising the ever-changing and dynamic ionosphere from the equator to the ionization anomaly crest region and beyond, using simultaneous ionosonde data from four different locations in India. From these data it is found that the peak electron density height (hpF2) varies from about 275 to 575 km in the equatorial region, and varies marginally, from 300 to 350 km, at and beyond the anomaly crest regions. Determination of the effective altitude of the IPP employing the inverse method suggested by Birch et al. (2002) did not yield any consistent altitude, in particular for low elevation angles, but varied from a few hundred to one thousand kilometers and beyond in the Indian region. However, the vertical TEC computed from the measured GPS slant TEC for different IPP altitudes ranging from 250 to 750 km in the Indian region has revealed that the TEC does not change significantly with the IPP altitude as long as the elevation angle of the satellite is greater than 50°. In the case of satellites with lower elevation angles (<50°), however, there is a significant departure in the TEC computed using different IPP altitudes by both methods. Therefore, the IPP altitude of 350 km may be taken as valid even in the Indian sector, but only for satellite passes with elevation angles greater than 50°.
Introduction
In recent years, measurements of the total electron content (TEC) have gained importance with the increasing demand for GPS-based navigation applications in trans-ionospheric communications with space-borne vehicles, such as satellites, aircraft and surface transportation. The TEC measurements are necessary for making appropriate corrections for the range delay introduced by the ionosphere, both during quiet and disturbed periods (space weather events), such as scintillation and geomagnetic storm periods. The TEC is one of the most important quantitative parameters of the Earth's ionosphere and plasmasphere, and is defined as the height integral of the electron density along the ray path from the receiver to the satellite (Leitinger, 1996).
All modern TEC measuring techniques rely on the observation of signal phase differences or on pulse travel time measurements based on geostationary and orbiting satellite signals. A standard way of measuring TEC is to use a ground-based receiver capable of processing signals from satellites in geostationary orbits, like ATS-6 and SIRIO, or polar orbiting satellites, like the U.S. Navy Navigation Satellite System (NNSS) and the Russian Global Navigation Satellite System (GLONASS) satellites, some of which are not in use currently. Among LEO satellites, the TOPEX/Poseidon dual-frequency altimeter data are used to obtain the vertical electron content up to the altitude of the satellite (1336 km), but this instrument is used only to measure ocean heights, and there are data gaps over landmasses. Hence, in recent times, TEC measurements using the Global Positioning System (GPS) are being carried out all over the world for ionospheric studies. The development of the GPS has also opened up new opportunities to investigate the ionosphere and plasmasphere on a global scale (Davies and Hartmann, 1997). The Global Positioning System (GPS) is a satellite-based navigation system which provides good positional accuracy of the user at any location and at any given time. The current constellation of 29 GPS satellites (http://tycho.usno.navy.mil/gpscurr.html) orbits in six separate orbital planes with four satellites in each orbit. The orbital planes have an inclination of 55° relative to the equator. The precise spacing of the satellites in the six orbits is arranged such that a minimum of four satellites are visible to a user at any time, at any location on the Earth. The GPS signal traversing the ionosphere undergoes an additional delay proportional to the total number of electrons in the cross-sectional volume, measured in TEC units. The dual-frequency GPS receivers use two frequencies, L1 (1.575 GHz) and L2 (1.227 GHz), to compensate for the ionospheric delay, a measure of TEC, at least to a first-order approximation, taking advantage of the dispersive nature of the ionosphere, where the refractive index is a function of frequency (Coco, 1991; Wanninger, 1993; Klobuchar, 1996). The GPS data provide an efficient way to estimate TEC with greater spatial and temporal coverage (Davies and Hartmann, 1997; Hocke and Pavelyev, 2001). Since the frequencies used in the GPS are sufficiently high, the signals are minimally affected by ionospheric absorption and the Earth's magnetic field, both in the short-term and in the long-term variations of the ionospheric structure.
The effective height of the ionosphere influences the conversion of the measured slant TEC to the vertical TEC through the obliquity factor, which depends on the elevation angle of the satellite. But as the GPS satellites are in different parts of the sky and the electron content varies both spatially and temporally, the ionospheric pierce point (IPP) altitude, or the assumed altitude of the centroid of mass of the ionosphere, plays an important role when converting the measured slant TEC to vertical TEC and vice versa. As the northern crest of the equatorial ionization anomaly (EIA) is located over the Indian region, where the electron densities and gradients are high, the determination of a single suitable IPP altitude is more difficult. In this paper, efforts are made to examine the validity of the IPP altitude of 350 km (used in the mid-latitude sector) in the Indian region, where the ionosphere varies significantly from the magnetic equator to the equatorial ionization anomaly crest region and beyond.
The ionospheric effective altitude in the conversion of slant TEC
In the conversion of slant TEC to vertical TEC, it is assumed that the ionosphere and the protonosphere are horizontally stratified and spatially uniform. Further, the ionosphere is simplified to a thin layer at an altitude of 350 km above the Earth's surface. This is called the thin shell model, and its height is the effective height, or centroid of the mass, of the ionosphere, which is taken as the IPP altitude, i.e., the altitude of the ionospheric intersection of the user line-of-sight to the tracked satellite. This shell approximation is widely used (Coco et al., 1991; Wilson and Mannucci, 1993; Ciraolo and Spalla, 1997, and references therein), and its height is usually taken in the range of 350 to 400 km, apparently based on the maximum electron density altitudes in the ionosphere. In the equatorial and low-latitude regions, it is the spatial and temporal variations in the ionosphere that significantly affect the assumption of the effective IPP altitude and the homogeneity of the ionosphere.
The thin shell approximation model is used in Satellite Based Augmentation Systems (SBAS), such as the Wide Area Augmentation System (WAAS), which is operational in the USA; there the model may not affect the WAAS operation, because this region mostly consists of the mid-latitude ionosphere, where the spatial and temporal variations in TEC are relatively smaller than those at high and low latitudes. In the Indian region, by contrast, which encompasses latitudes ranging from the magnetic equator to the northern anomaly crest and beyond, the effective height of the IPP may vary.
Data
In the equatorial and low latitudes, the plasma dynamics associated with the Appleton ionization anomaly and the electrojet current system greatly modify the vertical structure of the ionosphere, thereby changing the centroid of the ionization mass distribution from latitude to latitude. Therefore, with a view to studying the altitude variation of the peak electron density (h_mF2 ≈ h_pF2) in the Indian sector, the ionosonde data from four Indian stations (Trivandrum, Waltair, Ahmedabad and Delhi) for the year 2001 are considered, as detailed in Table 1. Along with these ionosonde data, the TEC data measured by the dual-frequency GPS receivers deployed at 18 different locations in India are also used to examine the effect of the IPP altitude variation on the conversion of slant TEC to vertical TEC.
Peak electron density altitude (hpF2) variations in the Indian region
The altitude of the peak electron density of the F2-layer (N_maxF2) is usually taken as nearly equal to h_pF2 (the altitude at 0.834 of N_maxF2). This variability in altitude is expected because the equatorial electrojet is stronger in the winter months compared to the summer months and, consequently, the F-region is elevated to higher altitudes at the equator due to the increased E×B drift. Similarly, during March 2001 at Trivandrum, the F-layer peak altitude variability is higher, but one of the interesting features to be noticed here is that there is a significant post-sunset upward movement of the F-layer at the equator (Trivandrum) and at the sub-tropical latitude (Waltair). At Ahmedabad and Delhi, the mean altitude of the F-region varies marginally (from 300 to 350 km), while at the equatorial station, Trivandrum, and the sub-tropical station, Waltair, it shows a large variability.
In Figures 2a, b and c, surface maps are presented of the monthly mean diurnal variation of the altitude of the peak electron density, as a function of latitude and local time, for the three different seasons, namely equinox, summer and winter, of the high solar activity year 2001, using the data listed in Table 1. It may readily be seen from these figures that the altitude (h_pF2) is maximum around the equatorial region in all three seasons, with diurnal peaks occurring around the pre-sunset hours during the equinox and winter months and around the post-noon hours during summer. Further, it may also be noticed that the peak electron density altitude (h_pF2) decreases significantly with increasing latitude in the Indian sector. The surface plots also reveal a higher diurnal variability of the F-layer peak electron density altitude up to a latitude of about 22° N from the equator. For higher latitudes beyond this, the use of the IPP altitude of 350 km in the conversion of slant to vertical TEC, as in the mid-latitude sectors, may not introduce any significant differences in the computed range delays and errors. In view of the high variability of the altitude of the peak electron density in the Indian equatorial and low-latitude regions, an attempt is made to examine the validity of using the IPP altitude of 350 km and to identify any other suitable altitude(s) that need to be used in the Indian sector. It may be mentioned here that Birch et al. (2002), using an inverse technique (described in Sect. 4.2) with data from the European sector, arrived at IPP altitudes ranging from 600 to 1200 km, with an average effective altitude of 750 km.
Computation of IPP altitude using inverse technique
To illustrate the change in the obliquity factor used in the conversion of slant TEC to vertical TEC due to changes in the assumed altitude of the IPP, the geometry of the ray path from satellite to ground through the ionospheric pierce point is schematically represented in Fig. 3. In this figure it may be seen that the angle β, subtended at the point of intersection of the ray path with the normal to the Earth's surface at altitude h_s, changes to β′ at altitude h_s′, and thus gives rise to two different values of TEC for the two different IPP altitudes (h_s and h_s′) when converting from slant to vertical TEC or vice versa. The descriptions of the different parameters relevant to the geometry are provided along with the figure. Here, the angle β is related to the zenith angle (χ) at the ground by the relation (Birch et al., 2002)

sin β = [R_E / (R_E + h_s)] sin χ, (1)

where R_E is the radius of the Earth.
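For illustration, the single-shell slant-to-vertical mapping implied by Equation (1) can be coded as follows (a sketch under the thin-shell assumption, with vTEC = sTEC · cos β):

```python
import numpy as np

R_E = 6371.0  # mean Earth radius, km

def vertical_tec(slant_tec, zenith_angle_deg, h_ipp_km=350.0):
    """Map slant TEC to vertical TEC through a thin shell at height h_ipp_km."""
    chi = np.radians(zenith_angle_deg)
    sin_beta = R_E * np.sin(chi) / (R_E + h_ipp_km)   # Equation (1)
    return slant_tec * np.sqrt(1.0 - sin_beta ** 2)   # multiply by cos(beta)

# The choice of shell height matters at low elevation (zenith angle 70 deg):
print(vertical_tec(100.0, 70.0, 350.0))   # ~45.4 TECU
print(vertical_tec(100.0, 70.0, 750.0))   # ~54.1 TECU
```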
The approach is to determine the effective height by comparing the TEC from a pair of satellites observed simultaneously along slant and zenithal paths. Accordingly, we identified a total of 26 pairs of satellites observed simultaneously along slant and zenithal paths from Waltair during April 2004, determined the correlation between the slant and the vertical TEC, and evaluated the plasmaspheric effective height from the gradient m (= sec β) by the inversion technique described by Birch et al. (2002).
Assuming the ionosphere and protonosphere to be spatially uniform, and if B_s and B_z are the slant and zenithal satellite biases including the receiver bias (i.e. B_z is the receiver bias + zenith satellite bias, and B_s is the receiver bias + slant satellite bias), the measured slant and zenith TEC, R_s and R_z respectively, are given by

R_s = I sec β + B_s, (2)
R_z = I + B_z, (3)

where I is the true vertical TEC. Eliminating I from the above two equations gives

R_s = m R_z + (B_s − m B_z). (4)

Thus the slant TEC (R_s) and the vertical TEC (R_z) from the two satellites (slant and zenithal) are linearly related with a gradient (slope) m = sec β, while the bias terms B_s and B_z are constants. And from Eq. (1) we have

h_s = R_E [(sin χ / sin β) − 1], (5)

which gives the value of the IPP height (h_s), if the zenith angle (χ) at the ground is known.
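The inversion itself is a two-line computation. A minimal sketch assuming Eqs. (4)-(5); the function name and the sample values are illustrative:

```python
import numpy as np

R_E = 6371.0  # km (assumed mean Earth radius)

def ipp_height_from_gradient(m, chi_deg):
    """Invert m = sec(beta), then apply Eq. (5):
    h_s = R_E * (sin(chi)/sin(beta) - 1). Requires m > 1."""
    beta = np.arccos(1.0 / m)
    return R_E * (np.sin(np.radians(chi_deg)) / np.sin(beta) - 1.0)

# A fitted gradient of 1.5 observed at a ground zenith angle of 60 degrees:
print(round(ipp_height_from_gradient(1.5, 60.0), 1), "km")
```

Note that when the fitted gradient m is smaller than the sec β implied by the given zenith angle, the recovered height comes out negative, which is exactly the unrealistic behavior reported below.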
The TEC measurements recorded from satellite passes with elevation angles greater than 85° are considered zenith TEC measurements, and the simultaneously recorded TEC from the off-zenith satellite passes (<85°) are taken as the slant TEC measurements. From the slant and zenithal TEC data of the chosen satellite pairs, a linear plot is drawn between the slant TEC and the vertical TEC, to verify the linearity expected from Eq. (4). The best estimate of the linear gradient (slope m = sec β) is derived using the standard regression analysis method. For each of the zenithal satellite passes, all the simultaneously available slant satellite passes are considered to derive the gradient (m) values during a month, taking only one point from each satellite pair in the regression analysis. Table 2 lists the results obtained from the chosen data sets, which include the correlation coefficients between the vTEC (from the zenithal pass) and the sTEC (from the slant pass), the gradients (m) derived by linear regression, the mean elevation angle of the slant pass, the effective IPP height (h_s) computed using Eq. (5), the mean time of the pass, and the expected value of the gradient m for an IPP height of 350 km for comparison. From Table 2 it may be noticed that the correlation between the slant and the vertical TEC is fairly significant in most cases, as may be seen from some typical plots presented in Fig. 4. However, the gradient m (sec β) is highly variable and lies between 1.141 and 2.667, indicating the effect of the large latitudinal electron density gradients in the Indian region. The values of m theoretically computed for an altitude of 350 km for the different elevation angles of the satellite passes are also shown in Table 2 for comparison, along with the gradients derived from the present analysis.
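The gradient estimation is an ordinary least-squares fit of slant against zenithal TEC. A minimal sketch, with made-up sample values for illustration:

```python
import numpy as np

def gradient_from_tec_pairs(zenith_tec, slant_tec):
    """Least-squares slope of slant vs. zenith TEC (Eq. 4):
    R_s = m * R_z + const, where m estimates sec(beta)."""
    m, _intercept = np.polyfit(zenith_tec, slant_tec, 1)
    return m

r_z = np.array([18.0, 22.0, 27.0, 31.0, 36.0])   # zenithal TEC, TECU
r_s = np.array([30.5, 36.4, 44.1, 50.2, 57.9])   # simultaneous slant TEC
print(round(gradient_from_tec_pairs(r_z, r_s), 3))
```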
The height estimates derived from these gradients are highly variable and in some cases, they are negative, which is unrealistic.
In Fig. 5, the variation of the gradient m is shown as a function of the elevation angle of the satellite (Birch et al., 2002). Each of the solid curves in this figure represents the theoretically expected variation of the slope (m) for altitudes ranging from 300 to 1200 km (in steps of 100 km). If all the experimentally computed gradients aligned on any one of these curves, the altitude corresponding to that particular curve would be taken as the IPP altitude. However, it may be seen that the gradients evaluated from the experimental TEC data over Waltair show considerable scatter (red dots with error bars) and do not align on any single theoretical height curve, clearly indicating that the effective height of the IPP is highly variable from observation to observation, particularly for elevation angles lower than 50°, over the Indian region. Thus, the assumptions made in this inverse technique (Birch et al., 2002) for estimating the effective height of the ionosphere have limitations in the Indian region. The Indian ionosphere covers the equatorial and low latitudes up to and beyond the northern crest of the equatorial ionization anomaly, where the spatial and temporal variations of TEC are significant, undermining the assumption that the TEC is horizontally homogeneous. Also, the visibility of satellites near the zenith is good at locations with geographic latitudes around 55°, since the inclination of the GPS orbits is also 55°, whereas in the Indian equatorial and low-latitude sector zenithal satellite passes are much rarer. Thus, in this region the thin shell model, which rests strongly on the assumption of spatial uniformity of the ionosphere, may not hold true owing to the limitations imposed by the electron density gradients.
Therefore, at equatorial and sub-tropical latitudes no single weighted mean height can be considered representative for a particular station; the effective height depends on the time and location of observation and on the vertical ionization distribution at that location. However, if the modulation of the ionization density by the equatorial plasma transport over a location and its altitude structure can be parameterized for the conditions prevailing on any given day, the effective height can probably be determined from the skewness of the F-region electron density profile and the centroid of its vertical ionization distribution, a study that needs to be attempted with a larger data base comprising multiple ionospheric parameters.
IPP altitude and the satellite elevation angle
Using the measured GPS-TEC data, an alternative attempt is made to assess the effect of the choice of the IPP altitude on the conversion of slant to vertical TEC in the Indian sector, for different altitudes varying from 250 to 750 km (in steps of 100 km). Theoretically, the vertical TEC is given by the product of the slant TEC and cos β. If cos β is plotted as a function of the satellite elevation angle for different IPP heights, which essentially indicates the spread of vertical TEC as a function of elevation angle for a given slant TEC value, a threshold elevation angle can be defined. Keeping this condition in mind, the vTEC converted from the measured sTEC for some typical GPS satellite passes recorded over a few locations in India is presented in Figs. 6 and 7, for the different altitudes considered above.
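The elevation-angle threshold can be illustrated numerically: for a single slant measurement, the spread of converted vTEC across the candidate IPP altitudes collapses as the elevation angle grows. A minimal sketch; the 50 TECU value and function name are illustrative:

```python
import numpy as np

def vertical_tec(slant_tec, elev_deg, h_ipp_km, r_e=6371.0):
    """vTEC = sTEC * cos(beta), with beta at the IPP obtained from the
    ground zenith angle chi = 90 - elevation via Eq. (1)."""
    sin_beta = r_e * np.sin(np.radians(90.0 - elev_deg)) / (r_e + h_ipp_km)
    return slant_tec * np.sqrt(1.0 - sin_beta**2)

# Spread of converted vTEC (one 50 TECU slant value, IPP 250-750 km):
for elev in (20.0, 50.0, 80.0):
    vtec = [vertical_tec(50.0, elev, h) for h in range(250, 751, 100)]
    print(f"elev {elev:4.0f} deg: spread = {max(vtec) - min(vtec):.1f} TECU")
```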
It may be seen from Fig. 6 that the TEC does not show much variation with a change in the IPP altitude for satellite elevation angles greater than about 50°. However, for elevation angles less than 50°, the TEC computed with different IPP altitudes shows a significant deviation. Further, this dispersion in TEC is higher (≈15 TEC units) at 15:00 LT, when the ambient diurnal value is higher, than at 10:00 LT (≈8 TEC units), when the TEC is still in the building-up process. Similar examples from four other stations, for GPS satellite passes with different PRN numbers, are presented in Fig. 7. It may also be seen from these figures that the commonly used IPP altitude of 350 km appears to be valid even in the Indian sector for satellite elevation angles greater than about 50°. The use of any of the above IPP altitudes (250 to 750 km) for satellites with elevation angles lower than about 50° is likely to give rise to varying range delays. Thus, the results obtained from the present study agree with those reported from similar studies by Birch et al. (2002), particularly with reference to the effect of the elevation angles of the satellite passes. However, it must be emphasized that these conversions are based on the assumption that the ionosphere is a uniform thin shell, which may not be equally valid for all the Indian latitudes.

Summary and conclusions

The F-layer of the ionosphere varies in altitude as well as in space and time. The altitude of the peak electron density (hpF2) measured from the data of four identical ionosondes located at different parts of India (Trivandrum, Waltair, Ahmedabad and Delhi) reveals that hpF2 varies significantly from day to day and from day to night, and also from the equator to the anomaly crest and beyond. At the equator, the variation of hpF2 is found to be largest, from 275 km in the night to 575 km during the day, whereas at Ahmedabad and Delhi this variation is marginal (300 to 350 km). Hence, whether the constant IPP altitude of 350 km commonly used in the mid-latitude sector is valid for the Indian sector is the topic under investigation. The attempts made to arrive at a suitable IPP value using the inverse technique suggested by Birch et al. (2002) did not yield any consistent value of the IPP altitude. The alternative attempt, using the experimentally measured GPS slant TEC in the Indian sector to compute the vertical TEC for discrete IPP altitudes ranging from 250 to 750 km (in steps of 100 km), clearly suggests that the elevation angle of the satellite pass plays an important role. For elevation angles greater than 50°, the IPP altitude of 350 km may also be used effectively in the Indian sector. However, for satellite passes with elevation angles lower than about 50°, the computed vertical TEC deviates significantly with a change in IPP altitude. Therefore, it is inferred that the commonly used IPP altitude of 350 km is valid in the Indian sector only for satellite passes with elevation angles greater than about 50°.
Further, in the case of low elevation angle passes (Fig. 5), the non-alignment of the computed gradients (m, or sec β) on any single theoretical height curve, as well as the dispersion observed in TEC (Figs. 6 and 7), indicates that the effective height is highly variable from observation to observation. This points out that at equatorial and sub-tropical latitudes no single weighted mean height can be considered representative for all elevation angles of the satellites viewed from a station, and that the effective height depends on the time of day, the location of observation and the vertical ionization distribution at that location.
Fig. 1. The monthly mean diurnal variation of hpF2 at four Indian stations (Trivandrum, Waltair, Ahmedabad and Delhi) for the months of March, June and December 2001. The error bars indicate the standard deviation of the data for the month.
The mean diurnal variations of hpF2 for the months of March, June and December 2001, representing the three different seasons (equinox, summer and winter, respectively), from the four Indian stations Trivandrum (8.4° N, 76.9° E), Waltair (17.7° N, 83.3° E), Ahmedabad (23° N, 76.6° E) and Delhi (28.5° N, 77.2° E), where identical ionosondes (KEL, Australia) are simultaneously in operation, are presented in Fig. 1. It may be seen from this figure that during June 2001 the F-region peak altitude varies between 275 km at the daily minimum and about 500 km during the noon to post-noon hours at Trivandrum and Waltair, while the variation within a day is much smaller at Ahmedabad and Delhi. During December 2001, while Waltair, Ahmedabad and Delhi show similar variations, the heights at Trivandrum show a higher variability, with daytime peak values as large as 575 km.
Fig. 2. (a, b and c) Surface maps of the monthly mean diurnal variation of the altitude of the peak electron density (hpF2) as a function of local time and latitude for the three different seasons (year 2001) in the Indian sector.
Fig. 3. Schematic diagram showing the geometry of the conversion of oblique to zenithal TEC for two different IPP altitudes, h_s and h_s'.
Fig. 4. Examples showing the gradients (m) of slant TEC versus zenithal TEC derived from the chosen pairs of simultaneously observed satellites.
Fig. 5. Variation of the computed gradients (m) obtained at different elevation angles, mapped onto the theoretically expected gradients for the chosen heights.
Fig. 6. Variation of vertical TEC for different IPP altitudes and the corresponding elevation angle of the satellite. The elevation angle of the satellite is given on the right-hand side of the y-axis.
Fig. 7. Variation of vertical TEC for different IPP altitudes and the corresponding elevation angle of the satellite, from four different Indian stations. The elevation angle of the satellite is given on the right-hand side of the y-axis. | 5,957 | 2006-09-13T00:00:00.000 | [
"Environmental Science",
"Physics"
] |
Analysis of Energy Reception Characteristics of Solar Aircraft in the Tropics of Cancer and Capricorn
This paper addresses the energy reception characteristics of solar aircraft flying in near space between the Tropics of Cancer and Capricorn. Based on a solar radiation intensity calculation model, the energy reception characteristics, including the maximum solar radiation intensity, the daytime average solar radiation intensity, the total solar radiation energy received per day per unit area, and the daytime hours of a solar aircraft, were calculated as functions of date, longitude, and latitude. By analyzing the corresponding calculation results, this work can provide some reference value for the design of solar aircraft flying in this area.
Introduction
Solar aircraft are a new type of aircraft that can realize High Altitude Long Endurance (HALE) flight [1] and have an irreplaceable advantage over conventional aircraft [2]. They convert solar radiant energy into electrical energy by photovoltaic conversion. Near space generally refers to the airspace 20-100 km above the ground, and it has a special strategic position because of its unique height and environmental conditions. Using the stable atmosphere of near space and the inexhaustible supply of solar energy, HALE solar aircraft have wide application prospects in environmental detection, regional communication, border monitoring, disaster warning and monitoring, high-resolution imaging, etc., and they are one of the hot and frontier fields of international research [3]. Solar aircraft use solar energy as their main power source [1]. Therefore, the analysis of solar energy reception characteristics is the basis and a necessary prerequisite for the study of such aircraft [4]. Because of the influence of earth-sun distance, time and latitude, the solar radiation received in near space changes. On account of the obliquity of the ecliptic, the subsolar point always moves back and forth between the Tropics of Cancer and Capricorn, which makes the solar energy in this area abundant [5,6].
In view of this, this paper analyses the energy reception characteristics of solar aircraft flying in near space between the Tropics of Cancer and Capricorn, as functions of time, latitude and longitude, in order to provide some reference value for the design of this kind of aircraft.
Calculation model of solar radiation intensity [7]

Assuming that the influence of cloud, moisture, dust, etc. is ignored, the calculation model of solar radiation intensity is as follows.
Solar altitude
The solar altitude is the angle between the incident ray and the horizontal plane, and it varies with time and place. It can be calculated as

sin h = sin φ sin δ + cos φ cos δ cos ω, (1)

where φ is the local latitude, δ the sun declination angle, and ω the solar hour angle. The declination follows from

δ = 23.45° sin[360°(284 + n)/365], (2)

where n is the date number in a year. The hour angle is

ω = 15°(t_s − 12), (3)

and the true solar time t_s is obtained from the local standard time by

t_s = t + (L − L_0)/15 + E, (4)

where t is the local standard time, L the local longitude, L_0 the longitude of the local standard time meridian, and E the equation of time between true solar time and mean solar time (Eqs. 5-6).
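The three relations above can be checked numerically. A minimal Python sketch; the function name and the solstice day number are illustrative assumptions:

```python
import numpy as np

def solar_altitude(phi_deg, n_day, t_s_hours):
    """Solar altitude from Eqs. (1)-(3): declination, hour angle, and
    the altitude relation (true solar time in hours)."""
    delta = 23.45 * np.sin(np.radians(360.0 * (284 + n_day) / 365.0))
    omega = 15.0 * (t_s_hours - 12.0)                  # degrees
    phi, delta_r, omega_r = map(np.radians, (phi_deg, delta, omega))
    sin_h = (np.sin(phi) * np.sin(delta_r)
             + np.cos(phi) * np.cos(delta_r) * np.cos(omega_r))
    return np.degrees(np.arcsin(sin_h))

# Noon altitude at the Tropic of Cancer around the summer solstice (n = 172):
print(round(solar_altitude(23.5, 172, 12.0), 1))  # close to 90 degrees
```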
Eccentricity correction coefficient associated with earth-sun distance
The sun is in a slightly eccentric position within the earth's elliptical orbit, so the earth-sun distance is not constant; there are a perihelion and an aphelion. The distance is usually corrected with the eccentricity correction coefficient

E_0 = (r_0/r)^2, (7)

where r_0 is the average earth-sun distance and r is the earth-sun distance of the observation point [8].
Atmospheric transparency coefficient
The atmospheric transparency coefficient is the ratio of transmitted to incident radiation through an air mass. Under different weather conditions the effective atmospheric thickness differs. Considering the weather factors, the atmospheric transparency coefficient is calculated from a weather influence factor and a height-modified air mass (Eq. 8); the air mass itself is corrected for altitude with an atmospheric height correction factor that depends on the flight height, taking the real part of the formula (Eqs. 9-12) [9].
Sunrise and sunset time
Sunrise time is the time when the sun appears on the horizon. Neglecting the curvature of the earth's surface and the refraction of the atmosphere to light, it can be calculated from the sunrise hour angle ω_s = arccos(−tan φ tan δ). The sunset time and the sunrise time are symmetric about 12 noon (true solar time).
Daytime hours are the difference between the sunset time and the sunrise time (Eqs. 13-14).
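A minimal sketch of the day-length computation, combining the declination of Eq. (2) with the sunrise hour angle (the arccos expression is the standard form implied by Eqs. 13-14; the function name is illustrative):

```python
import numpy as np

def daytime_hours(phi_deg, n_day):
    """Day length in hours from the sunrise hour angle
    omega_s = arccos(-tan(phi) * tan(delta)); sunrise and sunset are
    symmetric about 12:00 true solar time."""
    delta = np.radians(23.45 * np.sin(np.radians(360.0 * (284 + n_day) / 365.0)))
    omega_s = np.degrees(np.arccos(-np.tan(np.radians(phi_deg)) * np.tan(delta)))
    return 2.0 * omega_s / 15.0

print(round(daytime_hours(0.0, 80), 2))   # equator near the equinox: 12 h
```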
Solar radiation intensity
Considering the effects of seasons, earth-sun distance, height, and time, the solar radiation intensity is obtained from the solar constant, the eccentricity correction coefficient E_0, the atmospheric transparency coefficient raised to the air-mass power, and the solar altitude h (Eq. 15). The solar constant S refers to the solar radiation energy received at the boundary of the earth's atmosphere, on a unit area perpendicular to the rays, per unit time. In 1981, the eighth session of the World Meteorological Organization Commission for Instruments and Methods of Observation defined the solar constant as 1367 W/m².
Solar energy reception characteristics
The intensity of solar radiation increases from sunrise to noon and then decreases towards sunset. It can be approximated by a sine curve whose highest point is reached at noon, i.e. at true solar time 12. The maximum solar radiation intensity at noon is given by Eq. (16), the variation of intensity with time during the day by Eq. (17), the daytime average solar radiation intensity by Eq. (18), and the total amount of solar radiation received per unit area in one day by Eq. (19). According to the above calculation model, this paper analyses the variation of the maximum intensity, the daytime average intensity, the daily total radiation, and the daytime hours with date, longitude and latitude. The relevant constant parameters are chosen as shown in Table 1.
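The half-sine approximation makes the daily quantities easy to compute. A minimal sketch, assuming Eq. (17) is a half-sine over the daylight period; the names and sample values are illustrative:

```python
import numpy as np

def daily_energy(i_max, daytime_h):
    """Half-sine day model (Eqs. 16-19): the daytime average intensity is
    (2/pi) * I_max, and the daily total is the average times day length."""
    i_avg = (2.0 / np.pi) * i_max                 # W/m^2
    q_total = i_avg * daytime_h * 3600.0          # J/(m^2 day)
    return i_avg, q_total

i_avg, q = daily_energy(1000.0, 12.0)
print(round(i_avg, 1), "W/m^2,", round(q / 1e6, 1), "MJ/m^2")
```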
Energy reception characteristics varying with date and latitude
First of all, this paper analyses the changes of the energy reception characteristics between the Tropics of Cancer and Capricorn over a year. In the calculation, north latitude is taken as positive and south latitude as negative, with the latitude ranging over [-23.5, 23.5]°; the dates span a full year; the geographic longitude is held fixed. In addition, the energy reception characteristics at the spring equinox, summer solstice, autumnal equinox and winter solstice are compared and analysed. Figure 4 shows that day and night are of equal length at the vernal and autumnal equinoxes between the Tropics of Cancer and Capricorn, and that at the equator day and night are of equal length all year round. This is consistent with the actual situation, and it verifies the reliability and accuracy of the calculation model to a certain extent.
Table 2 shows the maximum and minimum values, and the corresponding latitudes and dates, of the energy reception characteristics between the Tropic of Cancer and the equator ([0, 23.5]°) and between the equator and the Tropic of Capricorn ([-23.5, 0]°).
Energy reception characteristics varying with longitude and latitude
On the basis of the above analysis, this paper analyses the changes of the solar energy reception characteristics between the Tropics of Cancer and Capricorn within a day, as functions of longitude and latitude. The spring equinox day is chosen as the research object. The latitude range is [-23.5, 23.5]°, and the longitude spans the full range of geographic longitudes.
Conclusion
In this paper, the energy reception characteristics, including the maximum solar radiation intensity, the daytime average solar radiation intensity, the daily total radiation per unit area, and the daytime hours, of the 20000 m high-altitude region between the Tropics of Cancer and Capricorn are calculated as they change with date and latitude within one year, and with longitude and latitude within one day (the vernal equinox).
Through this analysis, it can be seen that: 1) over a year, the energy reception characteristics (maximum intensity, daytime average intensity, daily total radiation, and daytime hours) at a given longitude vary with date and latitude; 2) within a day, these characteristics change only with latitude, not with longitude. Solar radiation is abundant in near space between the Tropics of Cancer and Capricorn. In this paper, the energy reception characteristics of solar aircraft are analysed, and the corresponding calculation results are given. In particular, the maximum and minimum values of the energy reception characteristics can provide some reference for the design of solar aircraft for different flying missions in the region.
Figure 1. Maximum solar radiation intensity varies with date and latitude.
Figure 2. Daytime average solar radiation intensity varies with date and latitude.
Figure 3. The total amount of solar radiation received per unit area in one day varies with date and latitude.
Figure 4. Daytime hours vary with date and latitude.
Figures 1 and 2 show the curves of the maximum and the daytime average solar radiation intensity versus date and latitude; their trends are basically the same. Near the Tropic of Cancer, both quantities show double peaks in the vicinity of the spring and autumnal equinoxes, dip at the summer solstice, and reach their trough near the winter solstice. Near the Tropic of Capricorn, the peaks appear near the winter solstice and the trough near the summer solstice, following a fairly clean sinusoidal trend. Figures 3 and 4 show the curves of the daily total radiation and the daytime hours versus date and latitude; their trends are also basically the same. In the vicinity of each tropic, both follow a fairly clean sinusoidal trend: near the Tropic of Cancer the peak appears near the summer solstice and the trough near the winter solstice, and the opposite holds near the Tropic of Capricorn. In addition, Figure 4 shows that day and night are of equal length at the vernal and autumnal equinoxes between the Tropics of Cancer and Capricorn, and at the equator day and night are of equal length all year round, which is consistent with the actual situation and verifies the reliability and accuracy of the calculation model to a certain extent.
Figure 5. Maximum solar radiation intensity varies with longitude and latitude.
Figure 6. Daytime average solar radiation intensity varies with longitude and latitude.
Figure 7. The total amount of solar radiation received per unit area in one day varies with longitude and latitude.
Figure 8. Daytime hours vary with longitude and latitude.
Figures 5-8 show the variation curves of the maximum intensity, daytime average intensity, daily total radiation, and daytime hours with longitude and latitude at the vernal equinox. It can be seen that these characteristics vary only with latitude and are essentially independent of longitude.
Table 1. Parameters of the solar radiation intensity calculation.
Table 2. The maximum and minimum values of the energy reception characteristics and the corresponding latitudes and dates. | 2,328.2 | 2018-01-01T00:00:00.000 | [
"Physics",
"Engineering"
] |
Acute Regulation of the Glycerophospholipid Composition of the Membranes of Mammalian Cells: The First Comprehensive Model
It is unclear how mammalian cells maintain the complex glycerophospholipid (GPL) compositions of their various membranes. Here we propose the first comprehensive model that suggests how this could be accomplished. The model is based on the idea that a limited number of GPL compositions are energetically more favorable than the others, i.e. those (optimal) compositions represent local free energy minima. Thus, the GPL composition of a membrane has a natural tendency to settle in one of the optimal compositions. When the mole fraction of a GPL class exceeds that in an optimal composition, its chemical activity abruptly increases, which (i) increases its propensity to efflux from the membranes, thus making it susceptible to hydrolysis by homeostatic phospholipases; (ii) increases its potency to inhibit its own biosynthesis via a feedback mechanism; (iii) enhances its conversion to another GPL class via "head group remodeling"; or (iv) enhances its translocation to another membrane. These four processes may act separately or simultaneously to maintain GPL homeostasis.
Introduction
Glycerophospholipids (GPLs) are the most abundant lipids in virtually all mammalian membranes, each of which contains more than 10 GPL classes varying in the structure of the polar head group. The major GPL classes are the phosphatidylcholines (PC), phosphatidylethanolamines (PE), phosphatidylinositols (PI), phosphatidylserines (PS), phosphatidylglycerols (PG), phosphatidic acids (PA) and cardiolipins (CL). Mammalian cells maintain the relative concentrations of GPL classes in their different subcellular membranes within narrow limits, obviously because this is essential for the numerous membrane-associated functions. Despite the vital importance of such GPL homeostasis, information regarding the mechanisms underlying this crucial phenomenon in mammalian cells is limited. In particular, hardly anything is known about the coordination of the key processes underlying GPL homeostasis, except that biosynthesis and degradation must be tightly coordinated.
This has been demonstrated by many studies in which the rate of GPL synthesis was either boosted or inhibited.Thus, when the synthesis of PC was increased several-fold, its concentration in the cells remained essentially unchanged due to increased degradation (Baburina and Jackowski, 1999;Barbour et al., 1999;Jackowski, 1994;Lykidis et al., 2001;Walkey et al., 1994).Parallel evidence has been obtained for PE and PS (Baburina and Jackowski, 1999;Lykidis et al., 2001;Stone et al., 1998;Walkey et al., 1994).
On the other hand, when the synthesis of PC, PE or PS was inhibited, their turnover decreased correspondingly (Fullerton and Bakovic, 2010; Fullerton et al., 2009; Nishijima et al., 1984; Polokoff et al., 1981; Steenbergen et al., 2006). However, there is no information on how the coordination of synthesis and degradation is accomplished mechanistically, which must be a challenging task (Fig. 1) due to the presence of many GPL classes. Here, we present the so-called Optimal Composition Model (OC model), which appears to represent the first attempt to explain how synthesis and degradation of GPLs could be accurately coordinated. This hypothesis was inspired by our recent findings on the processes involved in GPL homeostasis in mammalian cells.
Optimal composition model
We have previously shown that the phospholipid compositions of the inner and outer leaflets of mammalian erythrocyte and platelet membranes, by far the best characterized biological membranes in terms of GPL composition, are remarkably similar to compositions predicted by the so-called Superlattice Model (Somerharju et al., 2009; Virtanen et al., 1998). The key elements of this model are that 1) the different GPLs tend to be regularly distributed locally and 2) structurally similar GPL molecules can be assigned to three classes: (i) choline lipids (PC and sphingomyelin), (ii) PE and (iii) negatively charged GPLs. From these assumptions it necessarily follows that there are only a limited number of allowed class compositions (Somerharju et al., 2009), which in ternary systems are multiples of 11.1 mol%, i.e. 11.1, 22.2, 33.3, 44.4 mol%, etc. A key property of the allowed compositions is that they represent local free energy minima along the composition axis, because they allow for the optimal, i.e. tightest, interaction between neighboring molecules. Accordingly, the membrane GPL composition has a natural tendency to settle in one of the allowed compositions rather than in an intervening one.
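To make the "limited number" concrete: in a ternary system whose class fractions are constrained to multiples of 1/9 (11.1 mol%), the allowed compositions can simply be enumerated. A minimal illustrative sketch, not from the paper:

```python
from itertools import product

# Allowed ternary compositions under the superlattice constraint: each
# class fraction is a multiple of 1/9 (11.1 mol%) and the three sum to 1.
allowed = [(i / 9, j / 9, (9 - i - j) / 9)
           for i, j in product(range(10), repeat=2) if i + j <= 9]
print(len(allowed))   # 55 discrete compositions instead of a continuum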
When the mole fraction of a GPL class exceeds that corresponding to an optimal composition, the interaction of the excess molecules with their neighbors is weakened and the chemical activity of those molecules is thus strongly increased. We propose that this increased chemical activity results in (i) an increased propensity of the excess GPL molecules to efflux from the membrane, which makes them targets for homeostatic PLAs, and (ii) an increased capacity to inhibit their own synthesis via a feedback mechanism. Accordingly, chemical activity could be the "signal" regulating and thus coordinating biosynthesis and degradation, the processes critical for GPL homeostasis. Below we discuss the recent data strongly supporting this model and also indicate two other potentially important homeostatic processes, i.e. GPL class interconversion and intracellular transfer, both of which are likely to be driven by the chemical activity of the GPL molecules present "in excess".
Figure 2. Deviation from an optimal composition brings about GPL molecules with an increased chemical activity. On the left: the GPL class composition is optimal, as proposed previously for the erythrocyte membrane inner leaflet, where PE (gray) is ~44 mol%, the choline lipids (white) are ~22 mol% and the negatively charged GPLs (red) are ~33 mol% (Virtanen et al., 1998). On the right: if a single (zwitterionic) PE molecule has been replaced by a negatively charged GPL, the chemical activity of several negatively charged GPL molecules is greatly increased due to electrostatic repulsion between the proximal negatively charged molecules.
Chemical activity regulates the biosynthesis
In general, it is poorly established what regulates the biosynthesis of GPLs in mammalian cells, except for PS and PC. Kuge and coworkers have demonstrated that exogenous PS strongly inhibits the synthesis of PS in CHO cells and that this inhibition is most probably mediated by the interaction of PS, the product, with a specific arginine in PS synthase 1 or 2 (reviewed in Kuge and Nishijima, 2003). In the case of PC synthesis, the rate-limiting and thus regulatory step is the binding of cytidylyltransferase (CT) to the ER or nuclear membrane (Cornell and Ridgway, 2015). The binding is inhibited by PC and stimulated by the addition of PE, diacylglycerol (DAG) and negatively charged lipids, presumably by modulating the membrane packing, curvature elastic stress or charge (Arnold and Cornell, 1996; Dymond, 2015). Notably, addition of PE, DAG or negatively charged lipids should also decrease the chemical activity of PC in the membrane, which could thus be the actual regulating factor in CT binding. Besides PS and PC, there is evidence that PI synthesis is inhibited by PI in rat pituitary cells (Imai and Gershengorn, 1987).
Recently, we have shown that all common GPLs, when loaded into HeLa cells, strongly inhibit the synthesis of the corresponding GPL (Hermansson et al., 2010; our unpublished data). In these studies the concentration of a GPL class was increased above its normal level, which would thus increase the chemical activity of that GPL class. In conclusion, increased chemical activity is most probably the key factor regulating the rate-limiting enzymes of GPL biosynthesis. This is analogous to what has been suggested for cholesterol biosynthesis (Radhakrishnan et al., 2000; Sokolov and Radhakrishnan, 2010).
In addition, we have provided strong evidence that the activity of PNPLA9 in vitro is proportional to the propensity of its GPL substrate to efflux from the membrane (Batchu et al., 2015), which is consistent with the prediction that the active site of PNPLA9 resides well above the membrane surface (Bucher et al., 2013).Notably, the efflux propensity of a GPL molecule is proportional to its chemical activity as suggested previously for cholesterol (Lange and Steck, 2008).
GPL class interconversion: a novel homeostatic mechanism
We have recently found that exogenous PE, PS, PI, PG and PA are rapidly and effectively converted to PC and triacylglycerol (TAG) in HeLa cells (Hermansson et al., 2010; our unpublished data). The initial step of such conversion is probably catalyzed by a PLC(-type) enzyme, and the driving force is most likely the increased chemical activity of the GPL that has been loaded into the cells.
Interorganelle translocation of GPLs, yet another process affected by the chemical activity
As suggested above, when the molar percentage of a GPL class increases above its optimal value, its chemical activity, and thus its propensity to efflux from a membrane, increases abruptly. It has been shown previously that the rate-limiting step in spontaneous intermembrane translocation of a lipid is its efflux from the donor membrane (McLean and Phillips, 1984; Nichols, 1985). Notably, efflux seems to be the rate-limiting step in some protein-mediated translocation processes as well (Huuskonen et al., 1996; van Amerongen et al., 1989). In conclusion, an increase in the mole percentage of a GPL class in a membrane can result in an abrupt increase of its chemical activity and, consequently, of its intracellular translocation, as has been proposed previously for cholesterol (Lange and Steck, 2008; Radhakrishnan et al., 2000).
Finally, we stress that there are also other levels of GPL compositional regulation, such as those based on altered gene expression, translation or protein phosphorylation, but these mechanisms are far too slow to maintain the GPL composition in a steady state without major energy-wasting fluctuations (hysteresis). Such "coarse" mechanisms rather play a role when a shift in the GPL composition is required, e.g. during mitosis, cell growth and differentiation, or in response to an altered environment (Jackowski, 1994; Murakami et al., 1992; Sanchez-Alvarez et al., 2015; Sugimoto et al., 2008).
Figure 1. Complexity of regulation of the GPL compositions of mammalian membranes. This scheme emphasizes the complexity of regulating the GPL compositions of membranes consisting of many different lipid classes. For simplicity, not all GPL classes present in mammalian cells are shown.
Figure 3. Multiple homeostatic events can be driven by the increased chemical activity of the GPL present in excess. As discussed in the text, the GPL molecules present in excess (red) have increased chemical activity, which is predicted to (i) increase their hydrolysis by a PLA, (ii) inhibit their own biosynthesis, (iii) enhance their conversion to another GPL class and (iv) enhance their translocation to another membrane. All these events are proposed to collaborate to maintain GPL class homeostasis in mammalian cells.
"Biology"
] |
Developing a Deep-Learning-Based Coronary Artery Disease Detection Technique Using Computer Tomography Images
Coronary artery disease (CAD) is one of the major causes of fatalities across the globe. The recent developments in convolutional neural networks (CNN) allow researchers to detect CAD from computed tomography (CT) images. The CAD detection model assists physicians in identifying cardiac disease at earlier stages. The recent CAD detection models demand a high computational cost and a larger number of images. Therefore, this study intends to develop a CNN-based CAD detection model. The researchers apply an image enhancement technique to improve the CT image quality. The authors employed You Only Look Once (YOLO) V7 for extracting the features. Aquila optimization is used for optimizing the hyperparameters of the UNet++ model to predict CAD. The proposed feature extraction technique and hyperparameter tuning approach reduce the computational costs and improve the performance of the UNet++ model. Two datasets are utilized for evaluating the performance of the proposed CAD detection model. The experimental outcomes suggest that the proposed method achieves an accuracy, recall, precision, F1-score, Matthews correlation coefficient, and Kappa of 99.4, 98.5, 98.65, 98.6, 95.35, and 95 and 99.5, 98.95, 98.95, 98.95, 96.35, and 96.25 for datasets 1 and 2, respectively. In addition, the proposed model outperforms the recent techniques by obtaining an area under the receiver operating characteristic and precision-recall curve of 0.97 and 0.95, and 0.96 and 0.94 for datasets 1 and 2, respectively. Moreover, the proposed model obtained a better confidence interval and standard deviation of [98.64–98.72] and 0.0014, and [97.41–97.49] and 0.0019 for datasets 1 and 2, respectively. The study’s findings suggest that the proposed model can support physicians in identifying CAD with limited resources.
Introduction
Across the globe, cardiovascular diseases (CVD) are the leading cause of mortality, which accounts for an estimated 17.9 million deaths annually [1]. The most prevalent form of CVD is coronary artery disease (CAD), which frequently results in cardiac arrest. Coronary artery blockage leads to heart failure [2][3][4][5][6][7]. The heart relies on blood flow from the coronary arteries [8]. In developing countries, heart disease diagnosis and treatment are difficult due to the limited number of medical resources and professionals [9]. In order to avoid further damage to the patient, there is a demand for practical diagnostic tools and techniques. Both economically developed and underdeveloped nations are experiencing significant surges in the number of deaths from CVD [10]. Early CAD identification can save lives and lower healthcare costs [11][12][13][14][15][16]. Developing a reliable and non-invasive approach for early CAD identification is desirable. During the past few years, practitioners have significantly increased their utilization of computer technology to make decisions [17].
Physicians utilize conventional invasive methods to diagnose heart disease based on a patient's medical history, physical tests, and symptoms [18]. Angiography is one of the most widely used invasive techniques for diagnosing CAD.
Materials and Methods
The proposed CAD detection model uses the CNN technique for identifying CAD from the CT images. Figure 1 highlights the proposed CAD detection model. It contains image enhancement, feature extraction, and hyperparameter-tuned UNet++ models for predicting CAD using CCTA images.
Dataset Characteristics
A total of two datasets are employed to train the models. Dataset 1 is publicly available in the repository [5]. The CCTA images of 500 patients are stored in the dataset. The images are classified into normal (50%) and abnormal (50%). The image is represented in 18 multiple views of a straightened coronary artery. The images are divided into training, validation, and test images. The authors have included 2364 images to balance the dataset.
The 3D CCTA images of 1000 patients are deposited in dataset 2. The images were captured using a Siemens 128-slice dual-source scanner. The size of the images is 512 × 512 × (206-275) voxels. The images were collected from the Guangdong Provincial People's Hospital between April 2012 and December 2018. The average ages of females and males were 59.98 and 57.68 years, respectively. The dataset repository [6] is publicly available for researchers. In addition, it offers an image segmentation method for extracting images of coronary arteries from raw 3D images. Figure 2a,b shows the raw images of datasets 1 and 2, respectively. Table 1 presents the characteristics of the datasets.

Table 1. Dataset characteristics.

Dataset | Images | Patients | Normal | Abnormal | Classes
Dataset 1 | 2364 | 500 | 1182 | 1182 | 2
Dataset 2 | 1000 | 1000 | 503 | 497 | 2

Figure 1. Proposed CAD detection model.
Figure 3 highlights the research phases of the study. Phase 1 outlines the image preprocessing and feature extraction processes. Phase 2 describes the processes for classifying the CCTA images into CAD and No CAD; in this phase, the Aquila optimization (AO) algorithm [21] is employed for tuning the hyperparameters of the UNet++ model. Lastly, phase 3 presents the performance evaluation of the proposed model.
Feature Extraction
In phase 1, the researchers follow the methods of [18] to enhance the image quality. A fuzzy function processes the standard CCTA image in the raster format. A discrete space is used to represent the height and width of an image. A mapping function maps the fuzzy image and the discrete space. The spatial information of the fuzzy image is located using a neighborhood function. The researchers modified the membership function of [18] to increase the pixel value. The membership function includes a rescaling function to enable the YOLO V7 model to rescale the images during feature extraction. Equation (1) shows the fuzzification process.
where Int_{H,W} and Mem_{H,W} are the intensity and membership functions, and H and W are the height and width of the CCTA image. The defuzzification function applies the maxima to generate the enhanced CCTA image. Using the enhanced image, the researchers transform the images into different sizes and supply them to the subsequent phases. The images in dataset 2 are represented in 3D form, whereas the images of dataset 1 are expressed as standard straightened arteries. To generate the straightened arteries from the 3D CCTA images, the researchers apply centerline extraction [19] using the YOLO V7 model [20]. The YOLO V7 model identifies the centerlines using the anchor point between the coronary ostia and cardiac chambers. The arterial characteristics are generated using the centerlines and the area around the coronary vessels. In the subsequent steps, YOLO V7 extracts the features, which are forwarded to the CAD detection model.
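The exact membership function of Eq. (1) is not reproduced in the text, so the following Python sketch only illustrates the general fuzzify-transform-defuzzify pattern described here, using the classical INT intensification operator as a stand-in (all names are illustrative):

```python
import numpy as np

def fuzzy_enhance(img, gray_max=255.0):
    """Illustrative fuzzy intensity enhancement: map gray levels to
    memberships in [0, 1], sharpen them with the INT operator, then
    defuzzify back to the gray-level range."""
    mem = img.astype(np.float64) / gray_max            # fuzzification
    low = mem < 0.5
    mem[low] = 2.0 * mem[low] ** 2                     # INT operator
    mem[~low] = 1.0 - 2.0 * (1.0 - mem[~low]) ** 2
    return (mem * gray_max).astype(np.uint8)           # defuzzification
```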
Fine-Tuned CNN Model
In phase 2, the authors apply the AO algorithm and the UNet++ model to generate the outcome. CCTA image features are convolutionally processed using a linear filter and merged with a bias term, and the resulting feature map is passed through a non-linear activation function. Each neuron thus gains input from an N × N area of a subset of feature maps of the prior or input layer, and its effective receptive field is the union of the receptive fields of those inputs. As the same filter in the convolutional layer is used to probe all admissible receptive fields of the prior feature maps, the weights of neurons in the same feature map are always the same.
During the training phase, the system acquires the shared weights, which may also be called filters or kernels. The activation function is a mathematical function that determines the output of a neuron [20]; it is applied at each neuron of the network, and active neurons support the model in making predictions. The pooling layer follows the non-linear activation function and reduces the number of values in the feature maps by retaining the most salient values of the previous convolutional layer. The dropout technique introduces an additional hyperparameter, the dropout rate, which sets the probability of removing or keeping layer outputs.
With UNet++, decoders from different U-Nets are densely coupled at the same resolution [21]. As a result of these structural improvements, UNet++ offers the following benefits. First, UNet++ embeds U-Nets of various depths in its design. The encoding and decoding paths of these U-Nets are interconnected, and the encoders are partially shared. All the individual U-Nets are trained in parallel with a shared image representation by training UNet++ under deep supervision. This architecture enhances the overall segmentation performance, and model pruning becomes possible during the inference phase. In addition, the encoder and decoder of the UNet++ model allow feature maps to be fused at similar resolutions. The aggregation layer can determine how to merge feature maps transported via skip connections with decoder feature maps using UNet++'s redesigned skip connections. The following section discusses the number of layers and the outcome of the training phase. In order to tune the hyperparameters of the UNet++ model, the researchers employ specific features of the AO algorithm. Let P be the set of hyperparameters, and consider a population of candidate solutions with an upper bound (U) and a lower bound (L). In each iteration, an optimal solution is retained. Equations (2) and (3) present the candidate and random solutions for P.
where P is the matrix of candidate solutions for the hyperparameters, N is the number of candidate solutions, and Dim is the dimension of the search space (the number of hyperparameters being tuned).
where rand is a random-number function used to generate an anchor point for searching the parameter space, and i and j index the candidate solutions and the dimensions of the search space, respectively. The researchers derive the narrowed exploration and exploitation features of the AO algorithm for finding suitable hyperparameters of the UNet++ model. The AO agent considers the locations of hyperparameters as a prey area seen from a high soar and narrowly explores it using Equations (4) and (5).
where M_1(t+1) is the solution generated at iteration t, M_1best is the best solution obtained so far, M_1R is a randomly selected solution, s is the step size, Y is a random location in the search space, and Levy(s) is the flight distribution function presented in Equation (5).
where c, n, m, σ, and β are the constants of the Levy flight distribution used for finding the hyperparameters. Furthermore, narrowed exploitation searches the hyperparameter space using stochastic movements. Equation (6) shows the mathematical expression for the narrowed exploitation.
where M_2(t+1) is the solution generated at iteration t, Q represents the quality function, and G_1 and G_2 are movements of the AO agent. The researchers modified the quality function according to the UNet++ model's performance.
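As an illustration of the stochastic machinery in Eqs. (4)-(6), the sketch below implements a Levy step with Mantegna's algorithm and one exploration-style update. It is a generic AO-style sketch, not the authors' exact implementation, and all names are illustrative:

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5, rng=None):
    """Levy(s) step via Mantegna's algorithm, a common way to realize
    the flight distribution function of Eq. (5)."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def narrowed_exploration(best, peer, rng=None):
    """One Eq. (4)-style update: perturb the best hyperparameter vector
    with a Levy step plus a randomly chosen peer solution."""
    rng = np.random.default_rng() if rng is None else rng
    y, x = rng.random(), rng.random()
    return best * levy_step(best.size, rng=rng) + peer + (y - x) * rng.random()
```

In a full tuning loop, each candidate vector would be decoded into UNet++ hyperparameters (e.g. dropout rate, learning rate), evaluated on validation data, and kept or discarded according to the quality function.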
Performance Evaluation
Finally, the third phase evaluates the proposed method using evaluation metrics including accuracy, precision, recall, F1-score, Matthews correlation coefficient (MCC), and Kappa. The datasets are divided into a training set (70%) and a test set (30%). The number of parameters, the learning rate, and the testing time are computed for each model. The researchers compute the area under the receiver operating characteristic curve (AU-ROC) and the precision-recall (PR) curve for each CAD detection model. In addition, the confidence interval (CI) and the standard deviation (SD) are calculated to quantify the uncertainty of the outcomes.
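All six reported metrics are available in scikit-learn, so the evaluation of phase 3 can be sketched as follows (the function name is illustrative):

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, matthews_corrcoef, cohen_kappa_score)

def evaluate(y_true, y_pred):
    """Metrics reported in phase 3 for the binary CAD / No-CAD labels."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
        "mcc": matthews_corrcoef(y_true, y_pred),
        "kappa": cohen_kappa_score(y_true, y_pred),
    }
```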
Results
To evaluate the performance of the proposed model, the researchers implemented it on Windows 10 Professional with an i7 processor, an NVIDIA GeForce RTX 3060 Ti, and 8 GB RAM. Python 3.9, Keras, and TensorFlow libraries were used for constructing the proposed model. YOLO V7 [20] and UNet++ [21] are employed for developing the proposed model. In addition, the Alothman A.F. et al. model [4], the Papandrianos N. et al. model [7], the Moon J.H. et al. model [8], and the Banerjee R. et al. model [9] are used for performance comparison. The researchers train the UNet++ model using datasets 1 and 2 under the AO environment. During the process, the proposed model scores a superior outcome around the 36th epoch for dataset 1 and around the 34th epoch for dataset 2. Dropout ratios of 0.3 and 0.4 are used for datasets 1 and 2, respectively, to address overfitting and underfitting issues. Finally, six layers, including two dropout layers, three fully connected layers, and a softmax layer, are integrated with the UNet++ model. Table 2 presents the performance analysis of the proposed model on dataset 1: the proposed model achieves an average accuracy and F1-measure of 98.85 and 98.37 during the training phase, while in the testing phase it obtains a superior accuracy and F1-measure of 99.40 and 98.60. Table 3 reflects the proposed model's performance on dataset 2. It is evident that the image enhancement and feature extraction processes support the proposed model in detecting normal and abnormal CCTA images with optimal accuracy and F1-measure. Likewise, Table 5 displays the outcomes of the CAD detection models using dataset 2. The proposed model's dropout and fully connected layers supported the UNet++ model in overcoming the existing challenges of CNN models in classifying images; thus, the performance of the proposed model is better than that of the baseline models. Figures 4 and 5 highlight the performance of the CAD detection models on datasets 1 and 2, respectively. Figure 6 shows the AU-ROC and PR curves of the models using dataset 1; the proposed model learns the environment efficiently and handles the images effectively. Similarly, Figure 7 represents the AU-ROC and PR curves for dataset 2. Dataset 2 contains a smaller number of images compared to dataset 1, and the recent models failed to generate good AU-ROC and PR curves; in contrast, the proposed model generates AU-ROC and PR curve values of 0.96 and 0.94, respectively. The baseline models [4], [7], [8], and [9] consumed learning rates of 1 × 10−4, 1 × 10−3, 1 × 10−3, and 1 × 10−3, respectively. Table 7 reveals the CI and SD of the outcomes generated by the CAD detection models; the CI and SD values indicate that the proposed method's results are highly reliable.
Discussion
Recently, there has been a demand for a lightweight CAD detection model for diagnosing patients at earlier stages; such a model helps individuals recover from the illness by enabling earlier treatment. CCTA is one of the primary tools for detecting CAD, offering a non-invasive evaluation of atherosclerotic plaque on the artery walls. The current CAD detection models require substantial computational resources and time. The researchers therefore proposed a CAD detection model, built using the YOLO V7 and UNet++ models, for classifying CCTA images and identifying the existence of CAD. The effectiveness of the model is evaluated using two datasets. Initially, the images are enhanced through a quality improvement process; the images are generally in grayscale and of low quality, and the proposed image enhancement increases the pixel values and removes irrelevant objects from the primary images. Subsequently, YOLO V7, which is widely applied in object detection, is used to extract the key features of the CCTA images. Finally, the AO algorithm is used to tune the hyperparameters of the UNet++ model. The findings highlight that transfer learning can substitute for large datasets in AI-powered medical imaging, automating repetitive activities and prioritizing unhealthy patients. However, a CNN model can produce a poor outcome owing to limited generalization ability; thus, annotating or labeling the images is necessary to improve the performance of the YOLO V7 model. Transfer learning prevents overfitting and allows the generalization of tasks to other domains, supporting the UNet++ model in adjusting the final weights with respect to the features. The advantages of transfer learning using image embeddings with a feature extraction technique yield the highest average AU-ROC of 0.97 and 0.96 for datasets 1 and 2, respectively. The time necessary to train the proposed model was a few minutes, eliminating the requirement for significant computing resources and extensive training timeframes. The researchers achieve the study's goal with limited resources by employing the CNN model. CAD detection models have demonstrated strong visual analysis, comprehension, and classification performance. The proposed model gradually reduces the input size, extracting features in parallel using convolutional layers. Image embedding properly represents the input in a lower-dimensional space. The fuzzy function offers an opportunity to improve the quality of images in the datasets, and improving the grayscale images enables the YOLO V7 model to identify valuable features.
Furthermore, narrowed exploration and exploitation of the AO algorithm have identified the optimal set of hyperparameters for the UNet++ model. Although the UNet++ model contains an array of Unet models, it does not sufficiently address the overfitting issues. However, the hyperparameter optimization integrated a set of dropouts and fully connected layers with the UNet++ model. Thus, the proposed model achieves the study's objective by developing a CAD detection model. The findings reveal that the proposed CAD detection model can help healthcare centers to identify CAD using limited computing resources. The CI and SD outcomes show that the results are reliable. The following outcomes of the comparative analysis reveal the proposed model's significance in detecting CAD.
Alothman A.F. et al. [4] suggested a feature extraction strategy and a CNN model to identify CAD in the shortest amount of time while maintaining the highest level of accuracy. The effectiveness of the suggested model is examined using two datasets. The experimental results for the benchmark datasets reveal that the model achieved a better outcome with limited resources. However, the proposed model outperforms the model by producing a superior outcome. Papandrianos et al. [7] developed a model for detecting CAD using single-photon emission CT images. They applied an RGB-based CNN model for CAD detection. The model achieved an AUC score of 0.936. However, the proposed model obtained an AUC score of 0.97 and 0.96 on datasets 1 and 2. In addition, it produces a better outcome on grayscale CCTA images.
Likewise, Moon J.H. et al. [8] proposed a DL model to detect CAD from 452 coronary artery angiography movie clips. In line with [8], the proposed model employs the YOLO V7 technique, which can also be applied to video clips; moreover, it outperforms the Moon J.H. et al. model while using limited resources. Table 6 outlines the computational complexities of the CAD detection models; it is evident that the proposed CAD detection model generated its results with few parameters and a lower learning rate. Banerjee et al. [9] presented a CNN-long short-term memory approach for detecting CAD from electrocardiogram images; Tables 4 and 5 show that the Banerjee et al. model produces lower accuracy and F1-measure. The proposed model also achieved a better outcome than the recent image classification models [11][12][13][14][15][16][17][18]. The feature extraction technique supplied practical features that support the proposed model and generate better insights from the CCTA images.
The proposed CAD detection model generates an effective outcome on imbalanced datasets. However, future studies are needed to overcome a few limitations of the proposed model: the multiple layers of the CNN model may require an additional training period; the UNet++ architecture requires an extensive search owing to its varying depths; and, on an imbalanced dataset, the skip-connection process may impose a restrictive fusion scheme that forces sub-networks to aggregate the feature maps simultaneously.
Conclusions
The authors proposed a CAD detection model using computed tomography images in this study, intending to improve its performance through an effective feature extraction approach, since recent models require high computational costs to generate their outcomes. The authors therefore proposed a three-phase method for detecting CAD from the images. In the first phase, an image enhancement technique using a fuzzy function improves image quality; in addition, the authors applied the YOLO V7 technique to extract critical features, improving the pixel values of the images to increase YOLO V7's performance in extracting features from the grayscale images. The second phase used the AO algorithm to optimize the hyperparameters of the UNet++ model on the CCTA image datasets, with dropout layers integrated into the model to address overfitting. Finally, the third phase evaluated the performance of the proposed model against state-of-the-art CAD detection models. The comparative analysis revealed that the proposed model outperformed the recent CAD detection models, and its computational cost was lower than the others. The findings highlight that the proposed model could support healthcare centers in developing countries in identifying CAD at the initial stages, and that it can be implemented with limited computational resources. However, future studies are required to minimize the training time and improve the performance of CAD models on unbalanced data. | 5,488 | 2023-03-31T00:00:00.000 | [
"Computer Science"
] |
Structural Basis for Plexin Activation and Regulation
Summary Class A plexins (PlxnAs) act as semaphorin receptors and control diverse aspects of nervous system development and plasticity, ranging from axon guidance and neuron migration to synaptic organization. PlxnA signaling requires cytoplasmic domain dimerization, but extracellular regulation and activation mechanisms remain unclear. Here we present crystal structures of PlxnA (PlxnA1, PlxnA2, and PlxnA4) full ectodomains. Domains 1–9 form a ring-like conformation from which the C-terminal domain 10 points away. All our PlxnA ectodomain structures show autoinhibitory, intermolecular “head-to-stalk” (domain 1 to domain 4-5) interactions, which are confirmed by biophysical assays, live cell fluorescence microscopy, and cell-based and neuronal growth cone collapse assays. This work reveals a 2-fold role of the PlxnA ectodomains: imposing a pre-signaling autoinhibitory separation for the cytoplasmic domains via intermolecular head-to-stalk interactions and supporting dimerization-based PlxnA activation upon ligand binding. More generally, our data identify a novel molecular mechanism for preventing premature activation of axon guidance receptors.
[Figure legend fragment: each row represents the scoring range from 1 to 10 with different examples; all growth cones were visualized by phalloidin staining. Table S1 relates to Figure 1, Figure S1 and the Extended Experimental Procedures.]
Crystallization and Data Collection
Crystallization experiments were conducted by mixing 100 nl (or 200 nl for PlxnA2 1-10) of protein solution with 100 nl reservoir solution using a Cartesian Technologies pipetting robot. Crystallization plates were maintained at 20.5 °C in a TAP Homebase storage vault and imaged with a Veeco visualization system. PlxnA1 1-10 was concentrated to 8.2 mg/ml in 20 mM HEPES, pH 7.5 and 150 mM sodium chloride and subsequently crystallized in three different crystal forms. One crystal form, with space group P4(3)2(1)2, grown in 6% (w/v) PEG 4k and 5 mM tricine, pH 8.5 (or 5 mM TRIS, pH 8.5), diffracted to 6.0 Å resolution.
Structure solution and refinement
The structure of PlxnA1 1-10 at 4.0 Å resolution was initially solved by molecular replacement in PHASER (McCoy, 2007; McCoy et al., 2007) using the structure of PlxnA2 1-4 (Janssen et al., 2010) (PDB: 3OKT; 54% sequence identity with PlxnA1) together with homology models for the domains up to IPT5, with 16%, 29%, 29%, 29% and 20% sequence identity, respectively. These models were placed by molecular replacement in PHASER and the resulting structure was further optimized by manual rebuilding in COOT (Emsley and Cowtan, 2004) and refinement in REFMAC (Murshudov et al., 2011) using jelly-body restraints (Nicholls et al., 2012). However, the low 4.0 Å resolution of the data prevented unambiguous assignment of the sequence register for those parts of the structure for which no high-quality homology model was available, for example domains IPT2 and IPT5. Furthermore, the weaker electron density of domain IPT6 prevented reliable building of this domain. We therefore sought to determine much higher-resolution crystal structures for these segments in isolation. We collected 1.36 Å resolution data from PlxnA2 4-5 crystals and 2.2 Å resolution data from PlxnA1 7-10 crystals. The structures of PlxnA2 4-5 and PlxnA1 7-10 were solved by molecular replacement in PHASER using the corresponding partially refined domains PSI2-IPT2 and IPT3-IPT4-IPT5, respectively, from the PlxnA1 1-10 structure. Both solutions were completed by model building in COOT and refinement in REFMAC (for PlxnA2 4-5) and BUSTER (Smart et al., 2012) and PHENIX (Adams et al., 2002) (for PlxnA1 7-10). For domain IPT6 of PlxnA1 7-10 our maps showed only fragmentary electron density for two loops (residues 1153-1163 and 1210-1217) and the C-terminal residues (1228-1236); these regions were therefore not modelled. We used these refined structures to replace domains IPT2, IPT3, IPT4, IPT5 and segments of domain IPT6 in the PlxnA1 1-10 structure, which was then completed by model building in COOT and refinement in REFMAC using jelly-body restraints.
All other PlxnA full extracellular segment structures were solved using the refined 4.0 Å resolution PlxnA1 1-10 structure and, for PlxnA2 1-10, also the PlxnA2 1-4 structure (Janssen et al., 2010), PDB: 3OKT. Homology models for PlxnA2 and PlxnA4 were calculated with MODELLER using the PlxnA1 1-10 structure as the template for PlxnA2 domains 5 to 10 (IPT2, PSI3, IPT3, IPT4, IPT5 and IPT6) (53% sequence identity) and for all ten PlxnA4 domains (54% sequence identity). The other PlxnA1 1-10, PlxnA2 1-10 and PlxnA4 1-10 crystal forms were solved by molecular replacement in PHASER using individual domains of the PlxnA1 1-10 structure, of the PlxnA2 1-4 structure and PlxnA2 5-10 homology model, and of the PlxnA4 1-10 homology model, respectively, as search models. Domains were omitted from the models in cases where the electron density was too weak for interpretation due to flexibility (see Figure S1 for the entire set of structures and domains present). All solutions were refined by rigid-body refinement in PHENIX with each domain as a rigid group and a single B factor per domain, thus preventing any overfitting. Refinement statistics for all structures are given in Table S1. Electrostatic charge distribution was calculated with PDB2PQR (Dolinsky et al., 2004) and APBS (Baker et al., 2001), alignments were generated with Clustal Omega (Sievers et al., 2011), residue conservation was calculated with the ConSurf Server (Glaser et al., 2003) or MODELLER, and buried surface areas were calculated with PISA (Krissinel and Henrick, 2007). Figures were produced with The PyMOL Molecular Graphics System (Schrödinger, LLC).
Negative stain electron microscopy image analysis
For the preparation of carbon-coated grids and electron microscope set-up, we followed the previously described protocols for negative stain EM (Booth et al., 2011). In brief, 2.5 μl of freshly gel-filtrated PlxnA1 1-10 at a concentration of 1-5 µg/ml in 10 mM HEPES, pH 7.5 and 150 mM sodium chloride was adsorbed to a glow-discharged carbon-coated copper grid, washed with two drops of deionized water, and stained with two drops of freshly prepared 0.75% uranyl formate. A total of 12,645 particles were manually selected from 355 electron micrographs using EMAN2 (Tang et al., 2007) and framed into boxes with a size of 298 Å × 298 Å. The particle images were normalized, filtered, centered, and iteratively aligned and classified without any starting reference using IMAGIC (van Heel et al., 1996) to generate 60 class averages. The structural models based on the 4 Å crystal structure of PlxnA1 1-10 were generated manually using The PyMOL Molecular Graphics System (Schrödinger, LLC). The two-dimensional projections of our models were then subjected to automated correlation analysis with the class averages, in which all the class averages were compared to all projections of all models. Thirteen models were tested and of these seven were found to be sufficient to represent the experimental class averages. Two-dimensional projections of the crystal structure and models were generated using SPIDER and WEB (Frank et al., 1996) and WellMAP (Flanagan et al., 2010) software, and the correlations with the class averages were performed using scripts operating through SPIDER.
Single molecule localization microscopy-based cluster analysis
COS-7 cells were transiently transfected with PlxnA2 full-length constructs conjugated to mVenus at the C-termini via X-tremeGENE HP Transfection Reagent and incubated at 37 °C, 5% CO2 for 24 hours before fixing with 4% paraformaldehyde on ice. The fixed cells were mounted onto the glass coverslip using ProLong® Gold antifade reagent from Life Technologies™. We used a single molecule localization microscopy technique based on the stochastic switching of standard fluorescent proteins (Lemmer et al., 2008), similar to the principle of (F)PALM (Betzig et al., 2006) and STORM (Rust et al., 2006), which rely on special photo-activatable or photo-switchable fluorophores. To perform localization microscopy, we used an OMX (optical microscope experimental, V2, API) microscopic system modified to enable single molecule localization microscopy using conventional fluorescent proteins according to the method described previously (Lemmer et al., 2008). The intensity of the 488 nm laser was adjusted to ~14 kW/cm² in the object plane for localization microscopy imaging of the mVenus-conjugated samples. Single molecule positions were determined using an algorithm based on a maximum-likelihood method optimized for localization microscopy data (Grull et al., 2011) and adapted to the OMX hardware configuration. Distance and cluster analyses of the single molecule data were performed with algorithms described earlier (Seiradake et al., 2013). Fluorescence decays were fitted by a Marquardt nonlinear least-squares algorithm (Marquardt, 1963) with mono- or bi-exponential theoretical models using SymPhoTime software from PicoQuant GmbH (detailed in Padilla-Parra et al., 2008; 2015). In brief, the mean fluorescence lifetime $\langle\tau\rangle$ of a fluorescence decay $I(t)$ is defined by the following equation:
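A plausible form of this definition, assuming the multi-exponential decay model $I(t) = \sum_i \alpha_i e^{-t/\tau_i}$ used by Padilla-Parra et al. (2008), is the weighted mean lifetime:
\[
\langle \tau \rangle = \frac{\sum_i \alpha_i \tau_i^2}{\sum_i \alpha_i \tau_i}.
\]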
Imaging PlxnA2 interactions in live COS-7 cells using FRET-FLIM
The fluorescence lifetime obtained from a mono-exponential decay model of cells expressing only PlxnA2 molecules fused to the donor mTFP1 was first assigned. This lifetime was then fixed as the non-interacting fraction in a bi-exponential model obtained from the fluorescence lifetime decay profile of cells expressing both PlxnA2 fused to mTFP1 and mVenus (Padilla-Parra et al., 2008), using the following equation: $I(t) = I_0\left[f_D\,e^{-t/\tau_F} + (1-f_D)\,e^{-t/\tau_D}\right]$, where $f_D$ stands for the fraction of interacting donor, $\tau_D$ is the fixed donor lifetime and $\tau_F$ is the discrete FRET lifetime. The value of $f_D$ per pixel was calculated from the mean fluorescence lifetime $\langle\tau\rangle$ using 1200 channels of fluorescence decay (Padilla-Parra et al., 2008; 2015). One should note that we were only capable of detecting ~1/4 of the real interaction (0.5 because of our labelling strategy and ~0.5 given the dynamic range of the FRET couple; Galperin et al., 2004; Padilla-Parra et al., 2009).
This factor, together with the spectral heterogeneity of the fluorophores and the temporal resolution of our technique, means that $f_D$ represents the minimal fraction of the actual interaction (Padilla-Parra et al., 2008; 2015; Yasuda, 2006).
DG growth cone assay
DG granule cell cultures were prepared as described previously (Van Battum et al., 2014). In brief, hippocampi were dissected from P6-8 C57bl6J mice and cut into 350 μm slices using a tissue chopper (McIlwain). The DG was dissected from these slices and collected in 1x Krebs medium (0.7% NaCl, 0.04% KCl, 0.02% KH2PO4, 0.2% NaHCO3, 0.25% glucose and 0.001% phenol red). Cells were dissociated after incubation in 1x Trypsin in Krebs medium for 15 min at 37 °C. The reaction was stopped by adding 20 mg/ml soybean trypsin inhibitor, followed by trituration using a fire-polished Pasteur pipette in Krebs medium containing 2 mg soybean trypsin inhibitor and 20 μg/ml DNaseI. Dissociated DG granule cells were resuspended in Neurobasal medium supplemented with B-27, L-glutamine, penicillin-streptomycin and β-mercaptoethanol, plated on coverslips coated with poly-D-lysine (20 mg/ml) and laminin (40 μg/ml) in a 24-well plate, and incubated in a humidified incubator at 37 °C and 5% CO2. Proteins were filtered using a 0.45 μm filter before incubation with cells. On day in vitro (DIV) 1, cells were treated with vehicle, or wild-type or mutant PlxnA1 4-5 recombinant proteins (3 mg/ml) for 30 min at 37 °C, fixed in 4% PFA and sucrose for 20 min at room temperature, and immunostained with anti-βIII-tubulin antibodies and phalloidin. Images were acquired using a 100× objective on an AxioScope A1 (Zeiss) microscope and analyzed using a growth cone morphology matrix (Figure S6) in an observer-blind manner. The matrix was composed of 40 growth cones from different experiments that represent the full range of morphologies observed, ranging from a normal fan-shaped morphology (1) to full-blown collapse (10). This strategy was applied as it allows the detection of subtle changes in growth cone morphology rather than just a comparison of collapse versus non-collapse. | 2,660 | 2016-08-03T00:00:00.000 | [
"Biology"
] |
Methylselenol Produced In Vivo from Methylseleninic Acid or Dimethyl Diselenide Induces Toxic Protein Aggregation in Saccharomyces cerevisiae
Methylselenol (MeSeH) has been suggested to be a critical metabolite for anticancer activity of selenium, although the mechanisms underlying its activity remain to be fully established. The aim of this study was to identify metabolic pathways of MeSeH in Saccharomyces cerevisiae to decipher the mechanism of its toxicity. We first investigated in vitro the formation of MeSeH from methylseleninic acid (MSeA) or dimethyldiselenide. Determination of the equilibrium and rate constants of the reactions between glutathione (GSH) and these MeSeH precursors indicates that in the conditions that prevail in vivo, GSH can reduce the major part of MSeA or dimethyldiselenide into MeSeH. MeSeH can also be enzymatically produced by glutathione reductase or thioredoxin/thioredoxin reductase. Studies on the toxicity of MeSeH precursors (MSeA, dimethyldiselenide or a mixture of MSeA and GSH) in S. cerevisiae revealed that cytotoxicity and selenomethionine content were severely reduced in a met17 mutant devoid of O-acetylhomoserine sulfhydrylase. This suggests conversion of MeSeH into selenomethionine by this enzyme. Protein aggregation was observed in wild-type but not in met17 cells. Altogether, our findings support the view that MeSeH is toxic in S. cerevisiae because it is metabolized into selenomethionine which, in turn, induces toxic protein aggregation.
The solution of DTT that we used contained 0.5% of oxidized DTT. If $c_0$ represents the initial concentration of DMDSe, $c_1$ the initial concentration of total DTT ($c_1 = [\mathrm{DTT}] + [\mathrm{DTT_{ox}}]$), $\alpha$ the proportion of $\mathrm{DTT_{ox}}$ in the DTT solution, and $\xi$ the advancement of the reaction (defined as $\xi = [\mathrm{MeSeH}]/(2 c_0)$), the concentrations of the different compounds in the mixture of DMDSe and DTT are $[\mathrm{DMDSe}] = c_0(1-\xi)$, $[\mathrm{MeSeH}] = 2c_0\xi$, $[\mathrm{DTT}] = (1-\alpha)c_1 - c_0\xi$ and $[\mathrm{DTT_{ox}}] = \alpha c_1 + c_0\xi$. Introduction of these expressions into the expression of K yields an equation from which $c_1$ can be expressed as a function of the other parameters.
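Assuming the stoichiometry $\mathrm{DMDSe} + \mathrm{DTT} \rightleftharpoons 2\,\mathrm{MeSeH} + \mathrm{DTT_{ox}}$, so that $K = [\mathrm{MeSeH}]^2[\mathrm{DTT_{ox}}]/([\mathrm{DMDSe}][\mathrm{DTT}])$, substituting the concentrations above and solving for $c_1$ gives
\[
c_1 = \frac{c_0 \left[ K\,\xi(1-\xi) + 4\,c_0\,\xi^3 \right]}{K\,(1-\xi)(1-\alpha) - 4\,c_0\,\alpha\,\xi^2},
\]
which should be read as a sketch under the stated stoichiometric assumption.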
Because DTT was added at the same concentration in the reference and sample cuvettes, the absorbance of the sample at 252 nm recorded by the spectrophotometer was a linear combination of the concentrations of the species in the mixture, weighted by their molar absorption coefficients $\varepsilon$ at 252 nm. Using the above equations giving the concentrations of the different species, we can write the relation linking $\xi$ to the absorbance (equation (2)). The combination of equations (1) and (2) gives an implicit equation relating $A_{252}$ to $c_1$. The value of K was obtained by fitting this implicit function to the experimental data using OriginPro software.
Calculation of the rate constant for MeSeH aerobic oxidation
MeSeH is readily oxidized by oxygen. To evaluate the rate of this oxidation, we monitored at 22 °C the change of absorbance at 252 nm of MeSeH solutions prepared in deoxygenized 100 mM potassium phosphate buffer as described in Materials and Methods. At this wavelength, the absorbance of MeSeH is much higher than that of the oxidation product (DMDSe). Various concentrations of MeSeH were produced by reaction of equimolar concentrations of DMDSe and TCEP (15, 30, 45, 60 µM). From the observed rates of decay in the first minutes of the reaction, we deduced that the oxidation of MeSeH was pseudo-first-order, $-\mathrm{d}[\mathrm{MeSeH}]/\mathrm{d}t = k_{ox}[\mathrm{MeSeH}]$, with an experimentally determined value of $k_{ox} = 4.2 \times 10^{-4}\ \mathrm{s^{-1}}$. For the sake of comparison, we also measured the rate of MeSeH oxidation in a fully oxygenized phosphate buffer. A value of $1.7 \times 10^{-3}\ \mathrm{s^{-1}}$ was determined.
Kinetic modelling of the reactions between MeSeSG, MeSeH, DMDSe, GSH and GSSG
If we consider the system of the two reactions (5) and (6), the concentrations of the five species MeSeSG, MeSeH, DMDSe, GSH and GSSG obey a set of coupled differential equations (variables $y_1$ to $y_5$). Absorbance of the sample was introduced as a sixth variable ($y_6$). The value of $y_6$ is given by the formula $y_6 = \left(\sum_i \varepsilon_i y_i\right) \cdot pl$, where $\varepsilon_i$ represents the molar absorption coefficient of compound i in µM⁻¹·cm⁻¹, $y_i$ its concentration in µM and $pl$ represents the path length of the acquisition system expressed in cm.
Differentiation of this formula shows that $y_6$ obeys the equation $\mathrm{d}y_6/\mathrm{d}t = \left(\sum_i \varepsilon_i\, \mathrm{d}y_i/\mathrm{d}t\right) \cdot pl$. When the fits were realized at two wavelengths simultaneously (252 and 340 nm), another variable corresponding to the absorbance at the second wavelength was created and another differential equation using the molar absorption coefficients corresponding to this wavelength was implemented.
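To illustrate the structure of this model (six ODE variables, with the absorbance driven by the species fluxes, and with the substitution $k_2 = K_2 \cdot k_{-2}$ described next), here is a minimal sketch. The reaction scheme written in the rate laws, the species ordering, the molar absorption coefficients and all rate-constant values are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Species order (an assumption): y = [MeSeSG, MeSeH, DMDSe, GSH, GSSG, A252]
EPS = np.array([1.0e-3, 2.5e-3, 5.0e-3, 0.0, 0.0])  # placeholder molar
# absorption coefficients at 252 nm, in 1/(uM*cm)
PL = 1.0  # path length of the acquisition cell, in cm

def rates(t, y, k1, km1, K2, km2):
    """ODE right-hand side for an assumed two-reaction scheme:
    (5)  MeSeSG + GSH <-> MeSeH + GSSG   (slow, constants k1 / km1)
    (6)  GSH + DMDSe  <-> MeSeSG + MeSeH (fast, with k2 written as K2*km2)
    """
    mesesg, meseh, dmdse, gsh, gssg, _ = y
    v5 = k1 * mesesg * gsh - km1 * meseh * gssg
    v6 = (K2 * km2) * gsh * dmdse - km2 * mesesg * meseh
    dy = np.array([
        -v5 + v6,   # MeSeSG
        +v5 + v6,   # MeSeH
        -v6,        # DMDSe
        -v5 - v6,   # GSH
        +v5,        # GSSG
        0.0,        # placeholder, filled just below
    ])
    dy[5] = PL * np.dot(EPS, dy[:5])  # dA252/dt follows the species fluxes
    return dy

# Integrate from an initial mixture of DMDSe and GSH (concentrations in uM).
y0 = [0.0, 0.0, 30.0, 200.0, 0.0, 0.0]
y0[5] = PL * np.dot(EPS, y0[:5])  # initial absorbance
sol = solve_ivp(rates, (0.0, 600.0), y0,
                args=(1e-4, 1e-5, 50.0, 1e-1), dense_output=True)
```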
In all the above equations, $k_2$ was then replaced by $K_2 \cdot k_{-2}$, where $K_2$ designates the equilibrium constant of reaction (6) ($K_2 = k_2/k_{-2}$). The reaction rates for reaction (6) are several orders of magnitude larger than those for reaction (5). Therefore, to analyze experiments aiming at determining the rate constants of reaction (5) ($k_1$ and $k_{-1}$), we assumed that reaction (6) was always at equilibrium. | 1,096.2 | 2021-02-24T00:00:00.000 | [
"Chemistry",
"Biology",
"Medicine"
] |
Axiomatic arguments for decomposing goodness of fit according to Shapley and Owen values
We advocate the decomposition of goodness of fit into contributions of (groups of) regressor variables according to the Shapley value or—if regressors are exogenously grouped—the Owen value because of the attractive axioms associated with these values. A wage regression model with German data illustrates the method. AMS 2000 subject classifications: 62J05, 62P20, 91A12.
Introduction
One of the unwritten conventions in applied econometrics is that authors provide their readers with some goodness-of-fit measure (GOF) at the end of each regression table. Very rarely, however, the GOF is allocated to individual regressor variables, even though (or because) the literature provides numerous different approaches to do so [3,9]. Rather, the discussion of 'relevance' of regressor variables is often confined to the sign and p-value of their corresponding coefficients [12]. Due to space constraints, many coefficients are not even reported, leaving readers bewildered as to how important (with respect to GOF) such 'omitted' variables were compared to those variables of primary interest.
In the present paper, we advocate the method that employs the Shapley value [17] (and its generalizations) to distribute the GOF of the model among the regressor variables, henceforth Shapley value decomposition [20]. This method takes account of the interplay of regressor variables in sub-models and is calculated on the basis of information on the same type of GOF in these sub-models. Its attractiveness also stems from the fact that it emerges as the unique solution to the decomposition problem under a sound set of assumptions.
A generalization of the Shapley value, the Owen value [14], allows for decomposition in the context of exogenously grouped regressors as is suggested by Shorrocks [18]. Such groups may arise, e.g., if the model includes polynomial terms of a variable, dummy variables that recode a categorical variable, or variables that are conceptually related for other reasons. Under such circumstances, it is necessary to adjust the processing of the information about the GOF, such that both the resulting values of the variables and the values of groups (defined as the sum of the values of the variables in the respective group) can be interpreted. In contrast to the model without exogenous groupings, this requires, for instance, that equally performing groups receive the same group values.
Apart from the characterizing properties, the methods advocated satisfy other nice properties. For instance, if the GOF is insensitive to a transformation of variables, this insensitivity is passed on to the valuation of variables. Also, a variable that contributes nothing to GOF in all sub-models receives the value zero. Moreover, the Owen value satisfies the following 'consistency' property. The sum of the values attributed to the variables of an exogenously given group equals the amount given to the group if the GOF would be assigned to the groups directly (not to the variables) using the Shapley value decomposition. Thus, the Owen value provides the theoretical underpinning to allocate GOF among the groups by means of the Shapley value decomposition.
The paper presents both concepts applied to regression analysis. We then provide an illustrative example with the decomposition of R² of a wage regression with data from Germany. Our conclusion covers some possible extensions of the approach.
Method
Consider the OLS regression model and let $K = \{x_1, ..., x_j, ..., x_k\}$ denote the set of regressor variables. This model, which we refer to as the 'full model', produces a particular worth for some GOF measure, such as $R^2$. We seek to distribute this worth among the regressor variables. For this purpose, we will consider additional regression models for every combination of variables $T \subseteq K$. Each of these sub-models is associated with a worth of the respective GOF, e.g. $R^2(T)$. These worths can be collected in a function $f: 2^K \to \mathbb{R}$ that maps from K's power set $2^K$ to the reals, assigning to every combination of variables T its GOF $f(T)$, where, e.g., $f(K)$ denotes the GOF of the full model. In the following we assume that f is zero-normalized such that the empty model exhibits a GOF of zero, i.e., $f(\emptyset) = 0$. As a generalization, consider the case where the regressor variables are grouped (e.g., for reasons mentioned in the introduction) such that K is partitioned into $G = \{G_1, ..., G_\ell, ..., G_\gamma\}$. Estimation of the full OLS model again gives the GOF of the full model $f(K)$ that is to be distributed. The decomposition problem now boils down to the following question: Given the function f, how should $f(K)$ be distributed among the variables $x_1, ..., x_k$? Our answer makes use of results from cooperative game theory.
Cooperative game theory provides insights into rules for distributing $f(K)$ systematically among players, or in the present case, the regressor variables. These rules exhibit certain properties, although not all rules satisfy all desirable properties. Instead of judging the attractiveness of (ad hoc) formulae to decompose $f(K)$, one should judge the attractiveness of a decomposition rule on the basis of its characteristic properties.
Before we turn to a discussion of sound conditions for such a purpose, we describe a way to calculate the Shapley value (ungrouped case) and the Owen value (for grouped regressor variables).
Calculating the Shapley value and the Owen value
Starting with the full model, assume we successively remove regressor variables, one by one and according to a particular ordering of the variables. The difference in GOF associated with the elimination of a variable can be regarded as the variable's marginal contribution in this particular ordering of the regressors. Treating all orderings as equally probable, the Shapley value of a variable equals the variable's average marginal contribution over all possible orderings.
More formally, let $\theta$ be a permutation of the variables with the interpretation that variable $x_j$ has the position $\theta(j)$ in $\theta$. The set of variables that appear before $x_j$ in $\theta$ is denoted by $P(\theta, x_j) := \{x_p \in K \mid \theta(p) < \theta(j)\}$. Thus, in the permutation $\theta$, variable $x_j$ changes the GOF by $MC(x_j, \theta) := f(P(\theta, x_j) \cup \{x_j\}) - f(P(\theta, x_j))$, which we call variable $x_j$'s marginal contribution in $\theta$.
Denoting by $\Theta(K)$ the set of all $|K|!$ permutations on K, we may now calculate the Shapley value of variable $x_j$ as $Sh_{x_j}(f) = \frac{1}{|K|!} \sum_{\theta \in \Theta(K)} MC(x_j, \theta)$. Now we turn to the case where explanatory variables are organized in groups whose composition is known a priori to the analyst. Then the Owen value, a generalization of the Shapley value, takes the implied restrictions on the set of possible sub-models into account, as follows. To outsiders, i.e., variables belonging to other groups, the members of a particular group can only appear jointly and will therefore 'negotiate' a value for their group as a whole. Therefore, a group can only be subdivided when its members negotiate the distribution of the group's payoff between themselves. In this situation, the other groups are either completely present or completely absent. In comparison to the previous paragraph on the Shapley value, this implies that sub-models in which two or more groups are represented by some, but not all, of their constituent variables are not considered anymore. The number of rank orders $\Theta(K, G)$ that respect the partitioning scheme G is lower now (as long as not all groups are singleton groups, $\gamma = |K|$, or all variables belong to one group, $\gamma = 1$). Given this limited set of admissible rank orders, the Owen value can then be calculated along the lines of the Shapley value: $Ow_{x_j}(f, G) = \frac{1}{|\Theta(K, G)|} \sum_{\theta \in \Theta(K, G)} MC(x_j, \theta)$. Of course, computing these values per se is expensive. Moreover, the cost to calculate the GOF for subsets grows substantially with the number of regressor variables. For $R^2$ as GOF, this burden can be alleviated to some extent if the calculation is based on the covariance structure of the variables rather than the individual observations [6].
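As an illustration of these formulas, the following self-contained sketch decomposes the R² of an OLS model via the subset form of the Shapley value. The data and variable names are invented for the example; this is not the authors' own implementation, which exploits the covariance structure [6] rather than refitting every sub-model.

```python
import numpy as np
from itertools import combinations
from math import factorial

def r2(X, y, cols):
    """R^2 of the OLS regression of y on the columns `cols` of X (plus intercept)."""
    if not cols:
        return 0.0  # zero-normalized: the empty model explains nothing
    A = np.column_stack([np.ones(len(y))] + [X[:, j] for j in cols])
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def shapley_r2(X, y):
    """Shapley decomposition of R^2 using the weighted-marginal subset formula."""
    k = X.shape[1]
    phi = np.zeros(k)
    for j in range(k):
        others = [i for i in range(k) if i != j]
        for t in range(k):
            w = factorial(t) * factorial(k - t - 1) / factorial(k)
            for T in combinations(others, t):
                phi[j] += w * (r2(X, y, list(T) + [j]) - r2(X, y, list(T)))
    return phi  # sums to the full-model R^2 (Efficiency)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 1.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500)
phi = shapley_r2(X, y)
print(phi, phi.sum(), r2(X, y, [0, 1, 2]))  # contributions add up to R^2
```

The Owen value follows the same pattern, except that only the rank orders respecting the a priori partition are averaged.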
Why Shapley value decomposition should be used
In the following we motivate the conditions under which the Shapley value remains as the only candidate for decomposing $f(K)$, given the information in $f: T \to f(T)$ for $T \subseteq K$. (For every $T \subseteq K$, there are $|T|! \cdot (|K| - |T| - 1)!$ permutations $\theta$ such that $T = P(\theta, x_j)$; thus, an alternative and computationally less expensive formula for the Shapley value is $Sh_{x_j}(f) = \sum_{T \subseteq K \setminus \{x_j\}} \frac{|T|!\,(|K|-|T|-1)!}{|K|!} \left[ f(T \cup \{x_j\}) - f(T) \right]$.) Let $\varphi$ be a decomposition rule. Formally, this is a function that assigns to every f the outcomes of the variables, i.e., $\varphi_{x_j}(f)$ is the value we attribute to variable $x_j$ if the combinations of the variables are associated with GOF according to f. The first condition of interest merely states what is to be distributed among the variables: Efficiency: The GOF of the full model is decomposed among the regressor variables, i.e., $\sum_{x_j \in K} \varphi_{x_j}(f) = f(K)$.
Next, we identify the criterion on which the judgment about the explanatory performance of a variable should be based. Virtually all approaches in the literature refer to the marginal contributions of a variable, which is compatible with the following condition.
Monotonicity: A change in the GOF worths from $f_A$ to $f_B$ such that variable $x_j$ exhibits higher marginal contributions in $f_B$ must not decrease the explanatory value attributed to variable $x_j$; i.e., if $f_B(T \cup \{x_j\}) - f_B(T) \geq f_A(T \cup \{x_j\}) - f_A(T)$ for all $T \subseteq K \setminus \{x_j\}$, then $\varphi_{x_j}(f_B) \geq \varphi_{x_j}(f_A)$. The Monotonicity condition might be less reasonable if Efficiency were not to be imposed. To see this, assume we had imposed $\sum_{x_j \in K} \varphi_{x_j}(f) = 1$ instead of Efficiency, and assume there are two samples with the same explanatory variables. Sample A yields the GOF worths $f_A(T)$ for $T \subseteq K$, and sample B yields $f_B(T)$ for $T \subseteq K$. If some variable performs better in sample B than in sample A, it is supposed to be 'rewarded'. However, it would not be clear that $\varphi_{x_j}(f_B) \geq \varphi_{x_j}(f_A)$ (as is required by Monotonicity) should hold, because other variables could exhibit even higher increases in performance, and the restriction to distribute 1 could have implied a decrease in $\varphi_{x_j}$. Given that we distribute $f(K)$, however, higher explanatory performance due to the other variables should also increase $f(K)$, and therefore an increase in the values of all variables is possible.
Finally, it should be the case that variables that perform equally with respect to GOF receive the same outcome. The only difficulty is to identify equally performing variables. To this end, we say that two variables $x_j$ and $x_{j'}$ are substitutes according to f if it does not matter whether $x_j$ or $x_{j'}$ is taken into a model, i.e., if $f(T \cup \{x_j\}) = f(T \cup \{x_{j'}\})$ for all $T \subseteq K \setminus \{x_j, x_{j'}\}$. Equal treatment property: If the variables $x_j$ and $x_{j'}$ are substitutes according to f, then $\varphi_{x_j}(f) = \varphi_{x_{j'}}(f)$.
To our mind, these three conditions are plausible and not too restrictive. What is also appealing about them is that they leave no room for ambiguity as to which decomposition method should be used.
Theorem 1 (Young [24]) The Shapley value is the only rule that satisfies Efficiency, Monotonicity, and the Equal treatment property. In other words, other decomposition rules violate at least one of the three conditions. The Shapley value brings about several other desirable properties. For example, a variable that never contributes anything to the model's GOF receives an outcome of zero. In the case of correlated regressors, a variable may receive a non-zero outcome if it contributes to GOF in sub-models, even though its coefficient in the full model is zero. Grömping [7] discusses this point and suggests that this property may be reasonable in many practical settings where causal relationships are not obvious. To be sure, the Shapley value does not identify causal mechanisms in the presence of multicollinearity, in the sense that it assumes that all sub-models provide useful information.
Why Owen value decomposition should be used
In the case with a priori grouped regressor variables, a decomposition rule $\varphi$ prescribes the outcome of the variables for any given pair (f, G). Note that a rule $\varphi$ does not explicitly attribute a value to a group, so that the outcome of the group is defined as the sum of the values of all its constituent variables, $\varphi_G(f, G) := \sum_{x_j \in G} \varphi_{x_j}(f, G)$. The Efficiency and Monotonicity conditions can both be adapted accordingly, by adding some fixed a priori partition.
Efficiency*: The GOF of the full model is decomposed among the variables such that $\sum_{x_j \in K} \varphi_{x_j}(f, G) = f(K)$.
Monotonicity*: Leaving G fixed, a change in the GOF worths from $f_A$ to $f_B$, such that variable $x_j$ exhibits higher marginal contributions in $f_B$, must not decrease the explanatory value attributed to variable $x_j$.
The handling of substitutes requires attention, as variables in different groups cannot be substitutes anymore. Therefore, we say $x_j$ and $x_{j'}$ are substitutes according to f and G if $x_j$ and $x_{j'}$ belong to the same group and $f(T \cup \{x_j\}) = f(T \cup \{x_{j'}\})$ for all $T \subseteq K \setminus \{x_j, x_{j'}\}$.
Equal treatment of players property: If the variables $x_j$ and $x_{j'}$ are substitutes according to f and G, then $\varphi_{x_j}(f, G) = \varphi_{x_{j'}}(f, G)$. One may identify interchangeable groups as well. We say $G_\ell$ and $G_{\ell'}$ are substitutes according to f if it does not matter whether $G_\ell$ or $G_{\ell'}$ is taken into a model, given that the other groups are not split, i.e., if $f(S \cup G_\ell) = f(S \cup G_{\ell'})$ for every S that is a union of groups from the partition other than $G_\ell$ and $G_{\ell'}$. Equal treatment of groups property: If the groups $G_\ell$ and $G_{\ell'}$ are substitutes according to f, then $\varphi_{G_\ell}(f, G) = \varphi_{G_{\ell'}}(f, G)$. The conditions Equal treatment of players and Equal treatment of groups touch upon the interpretation of the a priori groups. If the decomposition rule did not satisfy these conditions, in particular the latter one, then it would not be possible to identify equally performing groups (substitutes) that we want to receive the same share of $f(K)$; a fortiori, a comparison of group values would have little meaning.
A recent result from cooperative game theory suggests a unique solution to the decomposition problem when there are a priori groupings.
Theorem 2 (Khmelnitskaya and Yanovskaya [10]) The Owen value is the only value that satisfies Efficiency*, Monotonicity*, the Equal treatment of players property, and the Equal treatment of groups property.
The Owen value has other desirable properties not mentioned so far. For instance, a variable that never contributes anything to the GOF of the model receives the outcome zero. Further, the following consistency properties hold. If there are only trivial groups, i.e., if all variables belong to one group or if all variables form groups of their own, Owen and Shapley value decomposition coincide.
Now suppose an a priori group were replaced by one variable equipped with the same contribution to the GOF of the model as all the variables of the original group. Then this new variable obtains the same outcome as the replaced group would have obtained. Consequently, the model's GOF is distributed among the groups in the same fashion as it is distributed among the variables if there were no groups, namely according to the Shapley value decomposition. Hence, arguing for the Owen value decomposition also supports the approach to merely distribute the GOF of the full model $f(K)$ among the groups according to the Shapley value decomposition if a further decomposition within the groups is not of interest. This can be attractive in the case of a large group of 'control variables' (e.g., dummy variables for regions), if a detailed decomposition among the group's member variables is computationally very costly.
Application to German wage data
As an illustrative application of the method we estimate an augmented Mincer regression model for male German workers. We focus on the relative importance of 'human capital' on earnings. Our data originate from the German Socio-Economic Panel wave of 2006 [22].
This particular wave features a short test on cognitive ability, the symbol-digit correspondence test (SCT), for the group of participants who took the CAPI interview [11]. To simplify the interpretation, we rescale SCT such that it varies between 0 (lowest score) and 1 (highest score). Formal education is accounted for in the form of the years of schooling (EDUC). In addition, we consider the interaction term of ability and formal education. These three variables form the first group.
The second and third groups of regressors consist of a polynomial in years of labor market experience (EXPER) and a polynomial in years of job tenure (TENURE), respectively. Taken together, these first three groups reflect 'human capital'. The model also includes four groups of control variables: marital status (MARRIED), firm size (3 dummy variables), industry classification (6 dummy variables), and region (14 dummy variables). The dependent variable is the natural logarithm of hourly pre-tax earnings. We restrict the sample to male German citizens, aged 20-64 years, who worked for at least 10 hours per week, who were not self-employed and not disabled. This leaves us with 850 valid observations. Table 3 presents Owen values and their group sums as percentages of the overall R² of the model, which turned out to be 0.501. According to these values, one third of the explained variance can be attributed to the group of formal education and ability variables. While the entire group is statistically significant at the 1% level, both the main effect of SCT and the interaction effect are only significant at the 10% level. While the GOF decomposition does not have standard errors, bootstrapping may help to attach greater reliability to comparisons of importance. Figure 1 shows the 90% bootstrap confidence intervals for the absolute (i.e., not standardized by R²) group values. This reinforces the notion that the first group is the 'most important' one, as its confidence interval, which reaches from 14% to 20% of the variance in log wages, does not overlap with any of the other ones. Within this first group, the main effect of EDUC is clearly the most important one. Remarkably, the interaction term plays a more important role (8% of R²) than the main effect of SCT (3% of R²), again with confidence intervals not overlapping (Figure 2). Looking at the coefficients, the model implies that up to 16 years of education, more cognitive ability is associated with higher earnings. The polynomial terms of labor market experience and job tenure suggest positive effects on earnings in the first years, with turning points after about 30 years in both cases. Interestingly, our procedure assigns greater importance in terms of GOF to the tenure polynomial, although the coefficients suggest that the experience profile is the steeper one. However, both confidence intervals include the value of the respective other group, i.e., generalizations on the difference in importance should not be drawn on the basis of our data (Figure 1).
In terms of 'group importance', firm size categories and the regional composition reach a similar order of magnitude as the tenure polynomial. While our focus is not on these dummy variables, such information may nevertheless be of interest to the reader, e.g., against the backdrop of the long economic convergence process in East Germany after the fall of the Berlin wall. Group values may thus provide the reader a space-conserving impression of the importance of control variables that are usually omitted from regression tables.
Concluding remarks
Decomposition of GOF provides an attractive diagnostic tool for identifying important (groups of) explanatory variables in a given regression model. We have argued, on the grounds of its attractive properties, that the Shapley value should be used for this purpose. The Shapley value and its axiomatic foundations can be generalized. The Owen value constitutes such a generalization where an a priori grouping of the regressor variables is taken into account, which accommodates many empirical analyses in practice. A further generalization could allow for additional levels of aggregation [23]. In our wage regression example, such a level structure design could be implemented to assign the first three groups into a 'human capital' cluster.
One can also imagine situations in which certain variables must always be included in all sub-models, e.g., time fixed effects in a panel data analysis, or situations in which external knowledge on causal relationships can be exploited. In such cases, restricting the set of potential models, such that some variables must always be present or can only appear in combination, seems appropriate. An implementation could follow along the lines of the Shapley value for Games with Restricted Coalitions [4], which has an axiomatic foundation in the same spirit as Young's axiomatization presented in Section 2.2.
Table 1
OLS regression results with decomposition of R² (in %) | 4,779.6 | 2012-01-01T00:00:00.000 | [
"Economics",
"Mathematics"
] |
Association of vitamin D receptor gene polymorphisms with pancreatic cancer: A pilot study in a North China Population
Polymorphisms of the vitamin D receptor (VDR) gene may be a risk factor for pancreatic cancer (PC). We investigated the association of two single-nucleotide polymorphisms (SNPs) of the VDR gene with PC in age- and gender-matched patients and controls. PC (n=91) and healthy control (n=80) samples were genotyped for the FokI (rs2228570) and BsmI (rs1544410) polymorphisms using the PCR and restriction fragment length polymorphism (PCR-RFLP) method. Chi-square analysis was used to test for the overall association of VDR genotype with disease. There was a significant difference in the frequency of genotype FF between the PC patients and controls (Ptrend=0.009); however, the difference in frequency of genotype BB between the two groups was not significant (Ptrend=0.082). The difference between FF and Ff/ff frequency was significant (P=0.002). The two high-risk genotypes were ffbb and Ffbb, with an 11.66- and 6.42-fold increased risk of PC, respectively. VDR gene polymorphisms were important for the development of PC in this study population; however, further exploration of these findings and their implications are required.
Introduction
Pancreatic cancer (PC) is one of the most lethal human malignant tumors and accounts for 3% of all reported cases of cancer (1). It is estimated to have been responsible for >250,000 mortalities and was the fifth leading cause of cancer-associated deaths worldwide in 2007 (2). The prognosis for PC is extremely poor, with a 5-year survival rate of <5%, even with surgical and chemotherapeutic intervention (3). It has been shown that 1α,25(OH)2D3 acts as a type of hormone and significantly inhibits the proliferative activity of numerous types of cancer cells, including PC cells, in vitro and regulates growth and differentiation in various cell types (4,5). It acts by binding to a corresponding intranuclear vitamin D receptor (VDR), which is present in a number of target tissues (6,7).
Numerous studies have demonstrated that polymorphisms of the VDR gene have important implications in VD signaling and are associated with various malignancies, including cancer of the colon, breast, kidneys and prostate (8)(9)(10)(11)(12). However, little is known about the role of the VD endocrine system in the carcinogenesis of PC.
Therefore, the aim of this study was to screen for genetic variations of two single-nucleotide polymorphisms (SNPs), FokI (rs2228570) and BsmI (rs1544410), of the VDR gene in a well-defined population of individuals with PC and compare their incidence with that of a healthy control population, in order to determine the contribution of VDR polymorphisms to PC in North China.
Materials and methods
Study participants. This study was part of an ongoing hospital-based case-control study, conducted at three hospitals (Shandong Provincial Hospital, Taian Central Hospital and Taian Eighty-eight Hospital, Shandong, China). The purpose of this study was to define risk factors which contribute to the development of PC. PC patients (n=91) eligible for the current study were enrolled between January 1, 2010 and June 30, 2012. Diagnosis of the samples was confirmed by certified histopathologists in the three hospitals mentioned above (only pancreatic adenocarcinoma was included). Formalin-fixed paraffin-embedded tissues from the PC patients (n=91) were used. The control group consisted of 80 healthy volunteers, selected by age- and gender-matching to the PC patients. The volunteers had no history of any type of cancer at the time of recruitment. Written informed consent was obtained from the patients' and volunteers' families. This study was approved by the Ethical Committees of the three hospitals.
DNA isolation in PC. The required thin (10x10 µm) tissue sections were dried onto slides at 37˚C overnight. After soaking the tissue sections in xylene, deparaffinization was carried out with ethanol series for 3-5 min each (100% ethanol for dehydration and 80, 60 and 40% ethanol). To differentiate healthy tissue from tumor tissue, the slides were stained with hematoxylin. Tumor tissues were isolated by microdissection and the DNA was extracted using a DNA Isolation kit (Sangon Biotech, Inc., Shanghai, China), according to the manufacturer's instructions.
DNA isolation from peripheral venous blood of control population. Peripheral venous blood was obtained from each healthy volunteer and promptly centrifuged (1,500 x g for 10 min). Genomic DNA was extracted from 200 µl EDTA blood with a DNA Isolation kit from Roche Diagnostics (Sangon Biotech, Inc.), according to the manufacturer's instructions. To obtain higher DNA concentrations, a number of blood samples initially underwent lymphocyte separation, performed according to the manufacturer's instructions (Sangon Biotech, Inc.). Briefly, 3 ml diluted blood samples were carefully centrifuged at 1,200 x g for 20 min at 25˚C and lymphocytes from the interphase were washed twice in phosphate-buffered saline (PBS). DNA was then isolated as described above.
Genotyping of FokI. According to the method described by Arjumand et al (11) and Harries et al (13), the polymorphisms of VDR [FokI (rs10735810) and BsmI (rs1544410)] were assayed using the PCR and restriction fragment length polymorphism (PCR-RFLP) method. Genomic DNA (2 µl) was used, in addition to 200 ng of forward and reverse primers, 1X Taq polymerase buffer (1.5 M MgCl2), dNTPs (0.3 mM) and 1 unit of Taq DNA polymerase (Sangon Biotech, Inc.). The primers of the VDR gene used were forward (5'-AGCTGGCCCTGGCACTGACTCTGCTCT-3') and reverse (5'-ATGGAAACACCTTGCTTCTTCTCCCT-C-3'). PCR amplification was carried out with the following cycling parameters: denaturation at 94˚C for 5 min; 35 cycles at 94˚C for 30 sec, 61˚C for 30 sec and 72˚C for 1 min; and one final cycle of extension at 72˚C for 7 min. The C/T polymorphism in the first of the two start codons (ATG) at the translation initiation site of the VDR gene was detected by RFLP, using the restriction endonuclease FokI (Sangon Biotech, Inc.). The PCR product of the 265-bp band was digested with 5 units of FokI restriction enzyme and incubated at 37˚C for 4 h. The digested reaction mixture (10 µl) was then loaded onto a 2% agarose gel containing ethidium bromide and visualized under short-wave UV light. The sizes were determined using a 100-bp ladder (Sangon Biotech, Inc.). Digestion of the amplified 265-bp PCR product yielded two fragments: 169 and 96 bp. Depending on the digestion pattern, individuals were scored as FF when homozygous for the presence of the FokI site, ff when homozygous for absence of the FokI site or Ff in the case of heterozygosity.
Genotyping of BsmI. The PCR amplification was carried out with the following cycling parameters: denaturation at 94˚C for 5 min and 35 cycles at an annealing temperature of 66˚C with the following primers: forward (5'-CAACCAAGACTACAAGTACCGCGTCAGTGA-3') and reverse (5'-AACCAGCGGGAAGAGGTCAAGGG-3'). The 800-bp PCR product was then diluted and digested with enzyme BsmI at 65˚C for 18 h using 5 units of enzyme (Sangon Biotech, Inc.) for each 20 µl reaction. After digestion, the PCR products were separated using 2% agarose gel containing ethidium bromide and visualized under short-wave UV light. Fragments of 650 and 150 bp were visible after the 800-bp product was digested by the BsmI restriction enzyme. DNA from homozygotic individuals lacking a BsmI restriction site (BB) appeared on the gel as a single 800-bp band. All the primers were synthesized by Sangon Biotech, Inc.
Statistical analysis. Statistical analysis was performed using SPSS 13.0 statistical software (SPSS Inc., Chicago, IL, USA) and data are presented as mean ± SD. Comparisons between two groups were performed using independent t-tests. The χ 2 analysis was applied to determine the difference in the genotype and gene frequency. Odds ratios (ORs) and 95% confidence intervals (95% CIs) were calculated from unconditional logistic regression models. P<0.05 was considered to indicate a statistically significant difference.
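As an illustration of this kind of analysis, an odds ratio and its Wald 95% CI can be computed from genotype counts as below; the counts shown are placeholders, not the study's data.

```python
import numpy as np

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI for a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = np.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo, hi = np.exp(np.log(or_) + np.array([-z, z]) * se_log)
    return or_, lo, hi

# Hypothetical counts: one genotype versus the reference genotype
# among patients (cases) and healthy volunteers (controls).
print(odds_ratio_ci(a=25, b=20, c=10, d=30))
```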
Results
Patient characteristics. In our study, 91 PC patients (52 males and 39 females) were diagnosed histopathologically following surgery or endoscopic ultrasonography fine-needle aspiration (EUS-FNA), and the mean age of the patients was 47.1±9.1 years. We recruited 80 healthy volunteers (controls; 45 males and 35 females), whose mean age was 47.5±7.4 years (Table I). Initially, Pearson's χ² test was performed to examine the genotypic distribution of the control population. VDR FokI and BsmI genotypic distributions were calculated according to the Hardy-Weinberg equilibrium, with P-values of 0.156 and 0.606, respectively (Table II). The genotype distributions of the PC patients and the control population were then compared. Combined analysis of genotypes FokI and BsmI. We pooled the data for FokI and BsmI genotypes of the VDR gene for PC patients and the control population to analyze the cumulative effect of FokI and BsmI polymorphisms, as shown in Table IV. Individuals with genotype ffbb had an 11.66-fold risk of PC compared with those of genotype FFBB (OR=11.667; P=0.009) and genotype Ffbb individuals had a 6.417-fold risk of PC compared with those of genotype FFBB (OR=6.417; P=0.018). There were no significant differences in risk of PC between the other genotypes, as shown in Table IV.
Discussion
PC is one of the most lethal types of human cancer, responsible for >250,000 mortalities and the fifth leading cause of cancer-associated deaths worldwide in 2007 (2). The majority of patients who contract the disease usually succumb to it within a few months of diagnosis, despite surgical or medical intervention. The incidence rate of PC is approximately the same as the mortality rate and the 5-year survival rate is <1% (14). Carcinoma of the exocrine pancreas is an increasingly common cancer, but no effective chemotherapy has been developed for patients with advanced disease (15). It has previously appeared that receptors in PC, such as estrogen receptors (ER), may be responsive to endocrine therapy. However, subsequent clinical trials with these receptors have not supported this therapy (16). Epidemiological studies show that individuals living at higher latitudes are at an increased risk of PC and are more likely to succumb to the cancer than those living at lower latitudes (5). One reason is that 1,25(OH)2VitD affects >200 genes to regulate proliferation, differentiation, apoptosis and angiogenesis of cells (17)(18)(19). Also, in vitro studies have demonstrated that 1,25(OH)2VitD and its synthetic analogs are able to inhibit the proliferation of PC cell lines (20). The VDR and its gene polymorphisms may also be important (21). Genetic studies have investigated the possible association between various histotypes of cancer and the detection of specific SNPs of the VDR gene (VDR-SNPs). Polymorphisms in the VDR gene have been shown to affect VDR mRNA and protein levels (22), which in turn may affect the immunomodulatory function of VDR (23). Approximately 200 different VDR-SNPs have been described; however, the VDR polymorphisms most frequently associated with tumorigenesis are FokI, BsmI, TaqI, ApaI, EcoRV and Cdx2 (24)(25)(26). The most frequently studied SNPs are the RFLPs FokI and BsmI. The FokI RFLP, located in the coding region of the VDR gene, leads to the production of a VDR protein that is three amino acids longer than normal. Although no significant differences in ligand affinity, DNA binding or transactivation activity have been identified between these two VDR forms, the shorter VDR variant exhibits higher potency than the longer one. The BsmI RFLP is intronic and located at the 3' end of the gene. BsmI does not alter the amount, structure or function of the final VDR protein produced, but it is strongly linked with a poly(A) repeat and may affect VDR mRNA stability. Thus, VDR polymorphisms have important implications for VD signaling and are associated with various malignancies (27,28). A number of studies have reported the role of the VDR gene in different malignancies, yet no studies have been carried out to evaluate the role of VDR gene polymorphisms in PC. To the best of our knowledge, this is the first study to investigate the association between VDR polymorphisms and the risk of PC.
VD is involved in the regulation of cell proliferation and differentiation in vitro and in vivo (29). The results of this study revealed that genetic heterozygous variants of FokI are associated with a decreased risk of PC in a North Chinese population, whereas the effects of BsmI were not significant. The difference in the allele frequencies between PC patients and the control population was statistically significant (F vs. f allele: P=0.003, OR=1.952, 95% CI 1.261-3.021; B vs. b allele: P=0.045, OR=0.645, 95% CI 0.421-0.990). When the two genotypes, FokI and BsmI, were combined to analyze their cumulative effect in PC, we identified that the ffbb and Ffbb genotypes carry an increased risk of PC. This may be attributed to the haplotypic effect associated with the linkage disequilibrium of these two polymorphic sites. Thus, we propose that the FokI and BsmI polymorphisms of VDR are potential prognostic variables which may predict the risk of developing PC in the North Chinese population.
A number of potential improvements may be considered for any further studies. Firstly, more PC and control samples are required to effectively test the effects of variables. Secondly, further investigation of VDR-SNPs, including TaqI and ApaI, to analyze the association between SNPs and PC and various clinicopathological parameters for PC patients and healthy volunteers is required. Comparison of the various risk factors for PC (age, smoking, diabetes and chronic hepatitis B infection) and the frequency of VDR genotypes in PC patients may also be considered for further investigation.
In conclusion, data from this study showed that the FokI and BsmI polymorphisms were associated with a higher risk of PC among the North Chinese population. Furthermore, this study showed for the first time that these two polymorphisms in the VDR gene are potential determinants in PC patients. The main aim of this study was to understand the role of VDR gene polymorphisms in the etiology of PC in China. Additional studies on a larger population size are warranted to elucidate the role of genetic variations of VDR and PC risk. | 3,161.2 | 2013-02-28T00:00:00.000 | [
"Biology"
] |
Fast solution of boundary integral equations for elasticity around a crack network: a comparative study
Because of the non-local nature of the integral kernels at play, the discretization of boundary integral equations leads to dense matrices, which implies high computational complexity. Acceleration techniques, such as hierarchical matrix strategies combined with Adaptive Cross Approximation (ACA), are available in the literature. Here we apply such a technique to the solution of an elastostatic problem, arising from industrial applications, posed at the surface of highly irregular crack networks.
Introduction
Many applications involve the solution to an elliptic boundary value problem in a background medium perturbed by the presence of cracks that take the form of one or many pieces of surface (with boundary). Crack problems (cracks are also called "screens" in electrical engineering) arise in all classical fields of applied physics: acoustics [9,14], electromagnetics [5,15] and elasticity [4,8]. Such problems are of particular interest in industrial applications related to geophysics, which often involve fractures and dislocations.
When the background medium can be considered homogeneous, which is a valid approximation in many cases, boundary integral equations appear as a method of choice for the numerical solution of crack problems. With such an approach, the problem is reformulated as a fully non-local equation posed at the surface of the cracks. This is the strategy adopted by IFP Energies Nouvelles (IFPEN) for computing the evolution of the deformation and perturbed stress field associated with the solution of an elastostatic problem around a network composed of multiple cracks. The latter problem has motivated the present contribution. Most of the existing literature considers either the case of cracks that remain well separated, or the case of a few faults located at regular pieces of manifolds. A salient feature of the present contribution concerns the geometry under consideration, where cracks intersect each other, forming a geometrically highly irregular structure, see Figure 1.
Discretization of boundary integral equations by means of a Galerkin procedure, resulting in the so-called Boundary Element Method (BEM), leads to densely populated matrices due to the full non-locality of the operators under consideration. Then, if the matrix of the problem is of size N, any matrix-vector product (the most elementary operation in any iterative linear solver) requires at least O(N²) operations. This is clearly not acceptable, as it requires unreasonable computational effort, especially in an industrial context where large problems usually arise.
To circumvent the computational complexity issue, the current literature offers a range of refined acceleration techniques: fast multipole methods, hierarchical matrix strategies, and the like. These techniques, which have been developed during the last two decades, were introduced to accelerate computations in a wide variety of problems ranging from molecular dynamics [3] to astrophysics [1]. For a general overview see [6]. Acceleration of boundary integral equations on smooth surfaces has also historically been a key challenge stimulating the development of such methods [12].
To reduce computational complexity, IFPEN did not adopt one of the currently available acceleration techniques mentioned above, but rather developed its own approach, which shall be referred to as "sparsification", consisting in forcing coefficients of the BEM matrix to zero whenever the corresponding interaction involves sufficiently distant points of the computational domain. This sparsification procedure is implementation friendly, and approximates the originally fully populated matrix with a sparse counterpart that allows fast matrix-vector products. On the other hand, this strategy also induces a substantial consistency error: measured in relative Frobenius norm, the perturbation of the matrix is typically as large as 30%.
The main objective of the present contribution is to compare the performance of sparsification with another well-established method of the current literature: the Hierarchical Matrix [7] format combined with the Adaptive Cross Approximation (ACA) compression method [2]. We chose to consider this alternative method, later referred to as HM-ACA, because it is one of the few existing approaches that treat the generation of the matrix of the problem in a fully black-box manner.
Although HM-ACA has already been analyzed in detail, and was proved to perform well on classical boundary integral equation problems, even in industrial contexts (see e.g. [10]), the present geometry with possibly many intersecting fractures is highly non-regular, and thus cannot be considered a classical test case. We shall indeed see that HM-ACA does not always achieve both accuracy and high compression. We will show that whether this strategy is preferable to sparsification in terms of compression rate depends on certain geometrical parameters of the problem related to the density of cracks.
The outline of this contribution is as follows. We first present in Section 1 the problem under consideration, its discretization and the way IFPEN sparsifies the obtained matrix. Then in Section 2.1 we introduce the Adaptive Cross Approximation (ACA) method and show its efficiency for the compression of dense matrices admitting fast-decreasing singular values (such matrices shall be referred to in the sequel as admissible). However, BEM matrices are not admissible because of the singularity of the integral kernel. Thus, we introduce in Section 2.2 a recursive splitting algorithm which decomposes the matrix into admissible sub-blocks to which the ACA compression method can be applied. This algorithm produces so-called Hierarchical Matrices and is referred to as the HM-ACA method. Then, in Section 3, we give an overview of the code we developed, and present a series of test cases to compare HM-ACA to the sparsification procedure. The numerical results obtained will lead us to conclude and provide an outlook for our work.
Initial problem, exact and sparsified matrices
The matrices we consider in the present contribution stem from the Galerkin discretization of the boundary integral formulation of an elastostatic problem. We will thus start by briefly describing this underlying continuous formulation as well as the associated discretization. We will also present the sparsification heuristic used so far at IFPEN in order to decrease the algorithmic complexity of matrix-vector products.
Underlying continuous problem
We are primarily interested in the solution of an elastostatic problem, looking for an equilibrium displacement field v in the exterior of a dislocation surface S ⊂ R³ that consists of the union of a collection T = {τ_j}, j = 1…N, of flat polygons. The elements τ_j might intersect each other, which makes S a potentially very rough surface, see Figure 1. The stress field σ(v) is in equilibrium over the region R³\S, and is subjected to a traction field t : S → R³ prescribed at S, which leads to a system of equilibrium and jump equations, referred to below as problem (1). Considering a normal n_τ to each polygon τ ∈ T, the jump operator in (1) is defined on each τ by [p]_τ := p|_τ⁺ − p|_τ⁻, where p|_τ⁺ denotes the trace of p taken from the side of τ where n_τ is ingoing, and p|_τ⁻ refers to the trace on the other side. Since we assume that the elastic solid is homogeneous and isotropic, the stress field is given by Hooke's law in 3D, σ(v) = 2µ ε(v) + λ tr(ε(v)) Id, where ε(v) = (1/2)(∇v + ∇vᵀ) is the strain tensor and Id is the identity matrix. The two material parameters λ and µ are known as the Lamé coefficients and satisfy λ = Eν/((1 + ν)(1 − 2ν)) and µ = E/(2(1 + ν)), where ν refers to the Poisson ratio and E is Young's modulus. We are interested in a boundary integral reformulation of problem (1) that consists in a non-local equation posed only on the surface S. Without giving too much detail, let us describe the bilinear form associated with such a formulation. First, we need to introduce the Green kernel of problem (1), which is given by an explicit closed-form expression. It is fundamental to observe, and to keep in mind, that this kernel is singular at x = 0. Next, for any vector field v : S → R³, a trace operator T_τ is defined on each polygon τ. Although the displacement field v solution to (1) satisfies σ(v)·n = 0 on S, a priori it jumps across S according to a slip field [v] = u : S → R³ that is the unknown of our boundary integral formulation. The exact solution of problem (1) can be recovered from u by means of a so-called representation formula, see [13, §6.7]. The boundary integral formulation is obtained simply by imposing the third equation of (1), taking the trace at S of this representation formula. For an appropriate space H(S) = Π_{τ∈T} H(τ)³ of trace fields, the boundary integral variational formulation associated with problem (1) that we consider reads a(u, v) = f(v), referred to below as (2), with right-hand side f(v) = ∫_S t · v ds. In formula (2), the operator T_τ^x is the operator T_τ applied with respect to the x variable, and T_τ'^y is defined accordingly. The important feature here is that the integral kernel coming into play in this integral operator is singular at x = y (which is possible only if τ ∩ τ' ≠ ∅), and regular otherwise. In particular, if τ and τ' are distant from each other, then the operator associated with a(·,·) is regularizing, and it induces matrices with exponentially decreasing singular values.
Exact BEM matrices
The bilinear form in (2) is discretized by means of a Galerkin procedure, where each space H(τ) is approximated by constant functions over τ. As a consequence, three degrees of freedom (corresponding to the three directions of space) are associated with each elementary polygon τ, and the discrete variational formulation, referred to below as (3), reads a(u_h, v_h) = f(v_h) for all discrete trial and test fields. For this discrete formulation, the order-0 Lagrange vector shape functions are defined as follows: first consider a numbering of the elementary polygons T = {τ_j}, j = 1…N, and let e_k, k = 1, 2, 3, refer to the canonical basis of R³. As shape functions, we then choose the fields equal to one canonical basis vector e_k on a single polygon τ_j and vanishing elsewhere. Each ψ_{3j−q} with q = 0, 1, 2 is thus regarded as a function defined on S that is supported only on τ_j. The matrices we are dealing with, referred to below as (4), are the Galerkin matrices A = (A_{j,k}) with entries A_{j,k} = a(ψ_k, ψ_j).
Sparsification heuristics
The bilinear form a(·,·) coming into play in (3) is non-local: there is no reason for a(u_h, v_h) to vanish, even if the supports of u_h and v_h are disjoint. The direct consequence of this property is that the matrix A = (A_{j,k}) is fully populated. Without any special strategy, the computational complexity of a matrix-vector product will then be of order O(N²). This is unbearable for any test case of decent size.
An approximation has to be applied to the matrix A in order to break this computational complexity.
Let us describe the heuristic adopted so far at IFPEN in order to achieve this goal. This strategy shall be referred to later on as the "sparsification procedure". First, note that for each pair (j, k), the coefficient A_{j,k} corresponds to the interaction between two mesh elements τ_j and τ_k. An approximate matrix A_sp = (A_sp_{j,k}), j, k = 1…3N, is then obtained by computing only the coefficients A_{j,k} such that dist(τ_j, τ_k) ≤ α diam(τ_k), referred to below as criterion (5), where α > 0 is a parameter, diam(τ_k) is the diameter of the sphere circumscribed to element τ_k and dist(τ_j, τ_k) is the distance between the barycenters of elements τ_j and τ_k. Admittedly, this may appear as a crude approach. However, this strategy is easily implementable. In addition, and even more importantly, one should keep in mind that the requirements in terms of accuracy are rather loose. The main point of the present contribution is to examine whether other, more sophisticated strategies may offer a better accuracy/compression trade-off, taking the sparsification procedure described above as a reference. Figure 2 shows the resulting sparsity patterns for two problems considered by IFPEN, corresponding to large faults (left) and a crack network (right). These two test cases will be included in the numerical comparison experiments of Section 3.2 and correspond to Figures 7 and 8, respectively.
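As a concrete illustration, the following is a minimal sketch of such a near-field selection, assuming each element is described only by its barycenter and the diameter of its circumscribed sphere; all identifiers and numbers are ours, not taken from the IFPEN code.

```python
import numpy as np

def sparsification_mask(barycenters, diameters, alpha):
    """Boolean mask of the BEM coefficients kept by the sparsification
    heuristic: entry (j, k) is retained when the barycenter distance
    dist(tau_j, tau_k) does not exceed alpha * diam(tau_k)."""
    # Pairwise distances between element barycenters, shape (N, N).
    diff = barycenters[:, None, :] - barycenters[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    # Keep near interactions only; the rest is forced to zero.
    return dist <= alpha * diameters[None, :]

# Toy usage: 100 random element barycenters in a 10 x 10 x 1 box.
rng = np.random.default_rng(0)
bary = rng.uniform([0, 0, 0], [10, 10, 1], size=(100, 3))
diam = np.full(100, 0.5)
mask = sparsification_mask(bary, diam, alpha=3.0)
print("kept fraction:", mask.mean())   # compression rate is 1 - mask.mean()
```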
Adaptive cross approximation and hierarchical matrices
In this section, we present the alternative approach that we selected for comparison with the sparsification procedure of the previous section. This alternative strategy rests on a (classical) combination of Adaptive Cross Approximation (ACA), presented in the next paragraph, and the hierarchical matrix (HM) format, presented in a second paragraph.
Low rank approximation
In this paragraph, we consider a fully populated matrix A ∈ Cⁿˣⁿ, A = (A_{j,k}), j, k = 1…n, with, a priori, none of its entries vanishing, its size n being potentially large. With no particular assumption on this matrix, the cost of a matrix-vector product is O(n²). This cost is substantially reduced, though, if we assume that A is of low rank. We say that a matrix has the low-rank property, with rank k ≤ n, if there exist vectors u_j, v_j ∈ Cⁿ, j = 1…k, such that A = Σ_{j=1…k} u_j v_jᵀ. Indeed, if this representation holds, then a matrix-vector product requires 2kn flops, which is smaller than n² provided that k < n/2. Matrices A encountered in applications rarely have the low-rank property. A simple primary observation is that a general matrix A can be written as a sum of rank-one matrices through its Singular Value Decomposition (SVD), A = Σ_{j=1…n} σ_j u_j v_jᵀ, referred to below as (6), where (u_j) and (v_j) are orthonormal bases of Cⁿ and σ₁ ≥ σ₂ ≥ … ≥ σ_n ≥ 0. A closer inspection of this formula leads to the conclusion, provided that the sequence (σ_j) decreases fast, that truncating the singular value decomposition (6) yields a good approximation of the matrix A. This is the essence of the next result (see [7, Appendix C]).
Proposition 2.1. Let A ∈ Cⁿˣⁿ admit the singular value decomposition (6), and denote by A^(k) the matrix obtained by truncating this decomposition at rank k, namely A^(k) := Σ_{j=1…k} σ_j u_j v_jᵀ. Then we have the error estimates ‖A − A^(k)‖₂ = σ_{k+1} and ‖A − A^(k)‖_F = (Σ_{j=k+1…n} σ_j²)^{1/2}, where ‖·‖₂ refers to the matrix norm induced by the vector norm |u|₂ = (Σ_{j=1…n} |u_j|²)^{1/2} for u = (u_j) ∈ Cⁿ, and ‖·‖_F refers to the Frobenius norm given by ‖A‖_F² = Σ_{j,k=1…n} |A_{j,k}|². Truncating the SVD is thus an efficient way to approximate a matrix, and so to reduce the cost of the matrix-vector product, provided that the singular values decrease fast. Assume that the singular values decrease exponentially, say σ_k ≤ qᵏ for a fixed q ∈ (0, 1). Then, for a relative error of order ε > 0 expressed in Frobenius norm, it suffices to take k of the order of log(ε)/log(q).
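The following short numerical check of Proposition 2.1 may help fix ideas: it builds a matrix with prescribed exponentially decreasing singular values, verifies the two error identities for the truncated SVD, and shows the O(kn) matrix-vector product enabled by the factored form. The construction and sizes are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
# Build a 200 x 200 matrix with prescribed singular values sigma_j = q**j.
n, q = 200, 0.5
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = q ** np.arange(n)
A = (U * sigma) @ V.T

Usvd, s, Vt = np.linalg.svd(A)
x = rng.standard_normal(n)
for k in (2, 5, 10):
    Ak = (Usvd[:, :k] * s[:k]) @ Vt[:k, :]     # rank-k truncation
    err2 = np.linalg.norm(A - Ak, 2)           # equals sigma_{k+1}
    errF = np.linalg.norm(A - Ak, 'fro')       # equals sqrt(sum_{j>k} sigma_j^2)
    print(k, err2, s[k], errF)
    # With the factors stored, a matvec costs O(kn) instead of O(n^2):
    y = Usvd[:, :k] @ ((s[:k, None] * Vt[:k, :]) @ x)
```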
Unfortunately, computing the singular value decomposition of a matrix is expensive: it costs O(n³) flops. To circumvent this issue, starting from an arbitrary matrix A whose singular values are assumed to decrease exponentially, the Adaptive Cross Approximation (ACA) algorithm provides a collection of vectors u_j, v_j ∈ Cⁿ such that the rank-k approximant Ã^(k) := Σ_{j=1…k} u_j v_jᵀ, referred to below as (7), satisfies an error bound of the form ‖A − Ã^(k)‖ ≤ C σ_{k+1} for some constant C > 0 independent of k. Moreover, and this is probably the most interesting feature of this method, the cost of computing Ã^(k) is O(kn). This cost is thus quasi-linear provided that the singular values decrease exponentially. Besides, the algorithm does not require generating all the coefficients of the matrix, so that the storage cost is also O(kn). The detailed analysis of the ACA algorithm is beyond the scope of this paper; we only report the algorithm itself (Algorithm 1) and refer the reader to [2, Chap. 3] for further details.
In our pseudo-code notation, for any vector w ∈ Cⁿ, w(j) refers to the jth entry of w; likewise, for A ∈ Cⁿˣⁿ, A(:, k) refers to the kth column and A(j, :) to the jth row.
At the beginning of Algorithm 1, there is an initialization step for the choice of the first pivot index j*. For this initialization, one could take j* = 1; other choices may speed up the convergence of the algorithm (see [2, Section 3.4.3]). Algorithm 1 also involves a stopping criterion based on an error estimator. We took the stopping criterion given in [11], namely to stop at the first rank k such that ‖u_{k+1}‖₂ ‖v_{k+1}‖₂ ≤ ε ‖Ã^(k)‖_F, referred to below as (8). In this stopping criterion, the value ε > 0 is a fitting parameter whose choice depends on the degree of consistency that one wishes. This parameter controls the relative variation between two iterations, since we have the relation ‖Ã^(k+1) − Ã^(k)‖_F = ‖u_{k+1}‖₂ ‖v_{k+1}‖₂. To illustrate the accuracy of ACA compared to a standard SVD, let us look at the interaction between two clusters of random points X = {x_i} and Y = {y_j}, picked according to a uniform law in two unit balls whose distance will vary. The interaction between two points x and y is given by 1/|x − y|, so that the coefficients of the interaction matrix A are defined as A_{i,j} = 1/|x_i − y_j|, where x_i ∈ X and y_j ∈ Y. In Figure 3, we show the relative error in Frobenius norm of the truncated SVD A^(k) and of the ACA approximant Ã^(k) with respect to A, where k is the rank. It can be observed that the error decreases exponentially in both cases, and that this decrease is faster when the distance between the two clusters is greater, that is to say, when the interaction is more regular (see Section 2.2). Besides, we see that SVD gives a better approximation, which is expected, since it can be proven to give the best approximation for a given rank.
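For illustration, here is a compact sketch of partially pivoted ACA with a stopping test in the spirit of (8), applied to the 1/|x − y| kernel between two separated clusters as in the experiment just described. It follows the general structure of Algorithm 1 but is our own simplified rendering (in particular, the pivot handling is the most basic one), not the paper's implementation.

```python
import numpy as np

def aca(get_row, get_col, n, m, eps, max_rank=None):
    """Partially pivoted ACA: builds U (n x k), V (m x k) with A ~ U @ V.T,
    touching only individual rows and columns of A."""
    U, V = [], []
    normF2 = 0.0                  # running estimate of ||A^(k)||_F^2
    used_rows = set()
    istar = 0
    for _ in range(max_rank or min(n, m)):
        used_rows.add(istar)
        row = get_row(istar) - sum(u[istar] * v for u, v in zip(U, V))
        jstar = int(np.argmax(np.abs(row)))
        if row[jstar] == 0.0:
            break
        col = get_col(jstar) - sum(v[jstar] * u for u, v in zip(U, V))
        u, v = col / row[jstar], row
        # ||A^(k+1)||_F^2 = ||A^(k)||_F^2 + 2<u,u_i><v,v_i> + ||u||^2 ||v||^2
        normF2 += (np.linalg.norm(u) * np.linalg.norm(v)) ** 2 + 2.0 * sum(
            np.dot(u, uu) * np.dot(v, vv) for uu, vv in zip(U, V))
        U.append(u)
        V.append(v)
        # Stopping test in the spirit of (8).
        if np.linalg.norm(u) * np.linalg.norm(v) <= eps * np.sqrt(normF2):
            break
        cand = np.abs(u)
        cand[list(used_rows)] = -1.0        # do not revisit crossed rows
        istar = int(np.argmax(cand))
    return np.array(U).T, np.array(V).T

# Two well-separated clusters and the kernel 1/|x - y| of the text.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (300, 3))
Y = rng.uniform(-1, 1, (300, 3)) + np.array([8.0, 0.0, 0.0])
get_row = lambda i: 1.0 / np.linalg.norm(X[i] - Y, axis=1)
get_col = lambda j: 1.0 / np.linalg.norm(X - Y[j], axis=1)
U, V = aca(get_row, get_col, 300, 300, eps=1e-8)
A = 1.0 / np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
print(U.shape[1], np.linalg.norm(A - U @ V.T, 'fro') / np.linalg.norm(A, 'fro'))
```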
Partition of the matrix in admissible blocks
Two important observations can be made about the matrix in (4): its integral kernel is singular, so its singular values do not decrease fast, and the matrix as a whole is therefore not admissible. As a consequence, the ACA compression strategy of Section 2.1 is not directly applicable. However, sub-blocks of the matrix A in (4) may be admissible (i.e., have exponentially decreasing singular values).
To build an approximation of A allowing a substantial reduction of the cost of matrix-vector products, it would be sufficient to find a sparse matrix A_nf ∈ C^{3N×3N} (sometimes referred to as the near-field contribution) such that A = A_nf + Σ_{t×s∈B} A_{t,s}, referred to below as (9), where B is a collection of subsets of {1,…,3N} × {1,…,3N} and each A_{t,s} is admissible. Here, for any subsets t, s ⊂ {1,…,3N}, the submatrix A_{t,s} refers to a 3N × 3N block with coefficients equal to zero, except for the sub-block obtained from A by restricting indices to the set t × s. With such a decomposition as (9), one may consider Ã = A_nf + Σ_{t×s∈B} Ã_{t,s} as a good approximation of A, where each Ã_{t,s} is obtained from A_{t,s} after application of the ACA compression procedure described in Section 2.1. Building the decomposition into sub-blocks efficiently can be achieved by the recursive algorithms described in the following. This constitutes the first step in building a Hierarchical Matrix (HM).
Remark that, for the BEM matrices at hand (4), the Galerkin discretization establishes a correspondence between the numbering of unknowns and a spatial distribution of degrees of freedom. This gives an insight into how to find admissible (and thus compressible) sub-blocks inside the matrix A, as described in the following.
For a ball B_s ⊂ R³, define s = {j ∈ {1,…,3N} | supp(ψ_j) ⊂ B_s}. For another ball B_t ⊂ R³, define t ⊂ {1,…,3N} accordingly. The farther B_s and B_t are from each other, the faster the singular values of A_{t,s} will decrease, because the Green kernel, and thus the interaction, will be more regular (see Remark 2.2 for an example). The distance between B_t and B_s that may be considered sufficient for the admissibility of A_{t,s} depends on the radii of these balls. Such a criterion shall be referred to later on as the admissibility criterion. The current literature provides various admissibility criteria, and the choice should be considered problem dependent. In our case, we chose the following admissibility criterion (see [11, eq. 3.15]): max(diam(B_t), diam(B_s)) ≤ η dist(B_t, B_s), referred to below as (10), where diam(B_s) = max_{k1,k2∈s} |x_{k1} − x_{k2}|. Here, for k ∈ s, x_k is the barycenter of the mesh element corresponding to the Lagrange basis function ψ_k, and η > 0 is a fitting parameter. For the present problem, the typical values of η that we considered range from η = 0.1 to η = 10. As suggested in [11], and in order to avoid the quadratic cost of the computation of diam(B_s), the practical implementation makes use of a more restrictive but more easily computable admissibility condition, referred to below as (11), based on the centers X_s and X_t and the radii of B_s and B_t. Remark that relation (11) is close to (the inverse of) the IFPEN criterion (5), but takes advantage of the hierarchical structure of the cluster tree and keeps the symmetry. Decomposition (9) involves a partition into admissible blocks, so we need to find them in an efficient way. To do so, we build a cluster tree; that is to say, we organize the set of geometric elements as a binary tree such that each node of the tree is a cluster of geometric points. The two sons of a cluster/node s are obtained by defining a separation hyperplane that goes through its barycenter X_s and is orthogonal to the direction of largest expanse of the cluster. This direction is obtained by computing the first eigenvector of the 3 × 3 covariance matrix C_s = Σ_{k∈s} (x_k − X_s)(x_k − X_s)ᵀ of the cluster. The cluster is thus divided into two more or less equal sons. This clustering algorithm can be found in [11]. Examples of the first four levels of such a cluster tree are given in Figures 4 and 5.
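A single splitting step of this clustering strategy can be sketched as follows; the point set is synthetic and the code is only meant to illustrate the covariance-based choice of the separation hyperplane.

```python
import numpy as np

def split_cluster(points):
    """Split a cluster of 3-D points into two sons by the hyperplane through
    the barycenter orthogonal to the direction of largest expanse (the
    dominant eigenvector of the covariance matrix C_s)."""
    center = points.mean(axis=0)
    centered = points - center
    cov = centered.T @ centered             # 3 x 3 covariance matrix C_s
    eigvals, eigvecs = np.linalg.eigh(cov)
    direction = eigvecs[:, -1]              # eigenvector of largest eigenvalue
    side = centered @ direction >= 0.0
    return points[side], points[~side]      # the two (roughly equal) sons

rng = np.random.default_rng(3)
pts = rng.uniform(size=(1000, 3)) * np.array([10.0, 2.0, 1.0])
left, right = split_cluster(pts)
print(len(left), len(right))    # roughly balanced split along the long axis
```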
To build B as a collection of subsets of {1,…,3N} × {1,…,3N} corresponding to admissible blocks, we can look at pairs of clusters at the same level in the cluster tree and check whether they are admissible according to criterion (10), starting from the root. If they are, we apply ACA to the corresponding sub-matrix; if they are not, we look at the interactions between their sons. This recursive algorithm provides a block decomposition as in (9), and a sketch of it is given below. Actually, if a block is admissible, we also check whether the compression for the given ε is advantageous in terms of complexity: during the ACA algorithm, if k(n + m)/(n · m) ≥ 1 for a block of size n × m, with k the current rank of the approximation, we stop and proceed as if the block were not admissible, because it is not worth compressing.
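The recursion itself can be sketched as follows. For brevity, the admissibility test here works on raw point sets with cluster diameters and center distances, and the splitting is a simple median split along the first coordinate rather than the covariance-based split shown earlier; parameters and sizes are illustrative.

```python
import numpy as np

def diam(points):
    # Diameter estimated via the bounding sphere around the barycenter.
    c = points.mean(axis=0)
    return 2.0 * np.max(np.linalg.norm(points - c, axis=1))

def admissible(pts_t, pts_s, eta):
    """Center-distance admissibility test in the spirit of criterion (11)."""
    Xt, Xs = pts_t.mean(axis=0), pts_s.mean(axis=0)
    return max(diam(pts_t), diam(pts_s)) <= eta * np.linalg.norm(Xt - Xs)

def partition(t, s, points, eta, leaf_size, blocks):
    """Recursively collect admissible (compressible) and dense leaf blocks;
    t and s are index arrays into `points`."""
    if admissible(points[t], points[s], eta):
        blocks.append(("low-rank", t, s))
    elif len(t) <= leaf_size or len(s) <= leaf_size:
        blocks.append(("dense", t, s))
    else:
        # Median split along the first coordinate (a simple stand-in for
        # the covariance-based split shown earlier).
        for t2 in np.array_split(t[np.argsort(points[t, 0])], 2):
            for s2 in np.array_split(s[np.argsort(points[s, 0])], 2):
                partition(t2, s2, points, eta, leaf_size, blocks)

rng = np.random.default_rng(4)
pts = rng.uniform(size=(512, 3))
blocks = []
partition(np.arange(512), np.arange(512), pts, eta=2.0, leaf_size=16, blocks=blocks)
print(sum(b[0] == "low-rank" for b in blocks), "admissible blocks,",
      sum(b[0] == "dense" for b in blocks), "dense blocks")
```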
Implementation
There already exist freely available libraries written in C or C++ that implement HM-ACA, see e.g. HLib (http://hlib.org/), H2Lib (http://www.h2lib.org/) or Ahmed (https://github.com/xantares/ahmed). Due to license restrictions, we developed our own implementation of HM-ACA, freely available in a GitHub repository at https://github.com/xclaeys/ElastoPhi and released under the GNU Lesser General Public License (LGPL). Let us briefly comment on the most remarkable parts of the code and refer to its documentation for the details. The only external library we use is Eigen (http://eigen.tuxfamily.org), which is Free Software.
The first important part is in the file cluster.hpp where the class Cluster is implemented. The constructor calls the function build, which recursively builds the cluster tree associated with a set of geometric points. More precisely, for a given cluster of points, it creates its two sons as described in the previous section (computing the center and the principal component) and calls the same function build on its two sons. Then the class Block contains a pair of clusters so that it is associated with their interaction. It has a function to check the admissibility of their interaction according to (10).
Then, in the file lrmat.hpp the class LowRankMatrix is implemented. Its constructor takes as input a submatrix and applies the ACA algorithm so that an instance of the class contains the collections of vectors defining the low rank approximation (7) of the submatrix.
Finally, we have all the tools to build the hierarchical matrix. The class HMatrix, implemented in the file hmatrix.hpp, contains two vectors of matrices, one for the low rank sub-matrices in A t,s and one for the dense sub-matrices in A nf . Its constructor needs a set of geometric points so that it can build the associated cluster tree with the class Cluster. Then it recursively looks at blocks as described in the previous section using the class Block to check the admissibility. If it is admissible, it constructs a LowRankMatrix instance and adds it to its vector of low rank sub-matrices, otherwise it looks at the sub-blocks according to the cluster tree until it reaches the leaves (for the problem under consideration, they correspond to 3 × 3 sub-matrices) and stores them as dense sub-matrices.
With the headers contained in the folder include, we have already built some useful executables in the folder src. One of the main executables is Compress, which builds the hierarchical matrix with compressed and dense blocks for given parameters η (of the admissibility test) and ε (of the ACA compression), and then computes the compression rate, the relative error for a matrix-vector product and the relative error in Frobenius norm with respect to the dense matrix given as input. The executables MultiCompression and CompaSparse do the same but for various values of η and ε, and CompaSparse does it also for the IFPEN sparse matrix; they are the executables used to create the data postprocessed with the Python scripts of the folder postprocessing, which generate the graphs of Section 3.2. Finally, there are some visualization executables. The executable VisuMesh creates a file in Gmsh (http://gmsh.info) format to visualize the mesh of the network as in Figure 1, and the executable VisuCluster creates a file in Gmsh format to generate the images of Figures 4 and 5. The executable VisuMatrix creates the data postprocessed with the Python scripts to generate images as in Figure 6b.
Test cases and results
In this section we report a series of test cases to illustrate the performance of our code on different geometrical structures. Figures 6 and 8-11 refer to discrete fracture networks (DFN), where each fracture is represented by a mesh element (a quadrangle); Figure 7 refers to a network of large faults, which have been triangulated, and we consider each mesh triangle as a dislocation element. Both types of structure are considered in IFPEN applications.
For each geometrical structure we show first the corresponding mesh. Then, for the test case of smaller dimension (Figure 6), we visualize, for some pairs of η and ε, the local compression rate of each block of the HM-ACA matrix by coloring its entries using a color scale from 0 to 1. The local compression rate of an admissible block of dimension n × m with rank k in ACA approximation is 1 − k(n + m)/(n · m) (the larger the compression rate, the more the matrix is compressed); the local compression rate of the non compressed blocks is set to 0. Note that a block is not necessarily a connected part of the matrix.
For all the cases, an error-compression graph summarizes the relevant results: for some pairs of values of the parameters η and ε, we report on the vertical axis the relative error in Frobenius norm of the corresponding HM-ACA matrix with respect to the dense matrix, and on the horizontal axis the achieved global compression rate. Next to each marker we indicate the value of ε (ε = 1, 0.9, 0.5, 0.1, 0.01), and the legend gives the value of η (η = 10, 1). The expression "0 blocks" means that the admissible blocks are approximated with zero-rank matrices (i.e., zero matrices, for which we only need to store k = 0) instead of computing their ACA approximation. Note that the 0-blocks strategy is close to the IFPEN sparsification procedure, but with a different admissibility criterion. The global compression rate of a 3N × 3N matrix A is defined as one minus the ratio between the number of coefficients actually stored (the vectors of the compressed blocks plus the coefficients of the dense near-field blocks) and the total number (3N)² of coefficients, where B is the set of subsets of {1,…,3N} × {1,…,3N} corresponding to admissible (and then compressed) blocks introduced in Section 2.2. Moreover, for the geometrical structures for which the sparse matrices obtained by IFPEN with the heuristic sparsification procedure are available, we report with a red triangular marker ("Sparsification" in the legend) the corresponding relative error in Frobenius norm and compression rate, given by 1 − n₀/(3N · 3N), where n₀ is the number of nonzero coefficients of the sparse matrix. Next to each Sparsification marker, the corresponding value of the sparsification parameter α (α = 2, 3, 4, 5, 6) is given.
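For reference, the two compression-rate formulas can be written down directly; the block list in this sketch is a toy tiling, not data from the test cases.

```python
def local_compression_rate(n, m, k):
    """1 - k(n+m)/(n*m) for an n x m block approximated at rank k."""
    return 1.0 - k * (n + m) / (n * m)

def global_compression_rate(size, blocks):
    """Fraction of the (size x size) matrix that is not stored. `blocks` is
    a list of (kind, n, m, k) tuples: low-rank blocks cost k*(n+m) stored
    coefficients, dense blocks cost n*m."""
    stored = sum(k * (n + m) if kind == "low-rank" else n * m
                 for kind, n, m, k in blocks)
    return 1.0 - stored / (size * size)

# Toy 2 x 2 tiling of a 1200 x 1200 matrix: two compressed off-diagonal
# blocks of rank 8 and two dense diagonal blocks.
blocks = [("low-rank", 600, 600, 8), ("low-rank", 600, 600, 8),
          ("dense", 600, 600, 0), ("dense", 600, 600, 0)]
print(local_compression_rate(600, 600, 8))    # ~0.973
print(global_compression_rate(1200, blocks))  # ~0.487
```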
Looking at the first results in Figure 6, we remark that when ε increases, the relative error and the global compression rate increase, as expected: recall that ε is used in the stopping criterion (8) for the ACA compression of each admissible block, and the stopping criterion becomes less restrictive with larger ε. For ε ≥ 1 the ACA loop always stops after computing a rank-one approximation. Similarly, when η increases, the relative error and the global compression rate increase: indeed, η appears in the admissibility condition (10), and a larger η makes this condition easier to satisfy, so that more blocks are considered admissible and compressed. These remarks hold true for the other test cases as well.
Results for the network of large faults are shown in Figure 7, where we also give a comparison with the sparsification procedure of IFPEN. We can see that we obtain better results with HM-ACA: for instance, the most aggressive compression for HM-ACA is achieved with η = 10 and 0 blocks and gives an error of 0.012 for a compression rate of 99.3%, while the IFPEN heuristic procedure gives an error of 0.21 for a compression rate of 98.7%. The good behavior of HM-ACA can be explained by the regularity of the geometry, which translates into fewer near-field interactions. The situation is less clear when the geometry is less regular, as the following test cases for crack networks illustrate.
The test case presented in Figure 8 consists of a crack network of 1994 fractures. Here, we still obtain better results with HM-ACA: with η = 10 and ε = 0.9, we have an error of 0.011 for a compression rate of 75%, while the IFPEN sparsification procedure achieves only a compression rate of 63.3% for the same error. Figures 9, 10 and 11 correspond to another test case with 3600 fractures of increasing density, as the considered volume goes from V_1 = 900 × 300 × 20 in Figure 9 to V_3 = 300 × 300 × 20 in Figure 11. First, we remark that when the density increases, it becomes more difficult to obtain a good compression rate, regardless of the compression algorithm. For instance, for η = 1 and ε = 0.9, the compression rate for V_1 (respectively V_3) is 76.2% (respectively 94.9%) and the relative error is 0.0072 (respectively 0.0037). Then, we see that for the lower-density cases (Figures 9 and 10) we obtain little to no improvement using HM-ACA compared to the IFPEN sparsification heuristic. However, for the denser case (Figure 11) the use of HM-ACA can be justified: for instance, η = 10 and ε = 0.01 gives an error of 0.0059 for a compression rate of 78.5%, whereas the error obtained with the sparsification heuristic of IFPEN is 0.010 for a similar compression rate of 78.3%.
To conclude, even though the sparsification procedure of IFPEN can be implemented more easily and gives decent results, the HM-ACA strategy is the better approach for fault geometries, presumably because of their smoothness. However, in the situation of crack networks, where the geometry is less regular, HM-ACA gives mixed results: we observe improvements in the densest test case compared to the IFPEN strategy, but little to no gain in the less dense geometries. Overall, HM-ACA offers greater flexibility in the compression-accuracy trade-off, although at the cost of a more involved calibration of the parameters.
Conclusion
We have implemented an HM-ACA code and tested it on several matrices provided by IFPEN to study the efficiency of this kind of method for particularly complex geometries, as in Figure 1. From our numerical results, HM-ACA proves to be a more accurate, or at least more flexible, alternative to the heuristic sparsification procedure developed by IFPEN. It works particularly well for geometries coming from faults, where far interactions are predominant. With geometries coming from discrete fracture networks, lower compression rates are obtained, especially for dense networks. This is quite intuitive, since there are then fewer far interactions to be compressed.
Our implementation is clearly not optimal, and there is room for improvement. The main perspectives in this respect concern the parallelization of the matrix-vector product in conjunction with a hierarchical matrix strategy (see [2, Section 2.3]) and the parallelization of the construction of the H-matrix format (see [2, Section 3.4.6]).
At a more theoretical level, another admissibility criterion more suited to fracture networks could be designed, for instance taking into account the anisotropy of fractures using ellipsoids instead of balls. However, it may be necessary to carry out a mathematical analysis of this problem to find the most appropriate geometrical criterion. Finally, looking for alternative acceleration strategies would also be an interesting research direction. Due to the low level of accuracy required for applications considered at IFPEN, and since the geometries under consideration are particularly rough (crack networks), probabilistic acceleration techniques might prove even better suited.
Maximum Likelihood Time Delay Estimation Based on Monte Carlo Importance Sampling in Multipath Environment
Introduction
The time delay estimation problem has long been a central topic in wireless communications and is widely applied in radar [1], sonar [2], wireless communication systems [3], and other fields. In multipath environments, time delay estimation schemes for single-snapshot narrowband signals using super-resolution algorithms show good estimation performance. Generally, super-resolution algorithms fall into two categories: subspace estimation algorithms and maximum likelihood (ML) estimation algorithms. Specifically, the subspace estimation algorithms include the multiple signal classification (MUSIC) algorithm [4,5], the root-MUSIC algorithm [6], and the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm [7]. Under the single-snapshot condition, these algorithms adopt smoothing in the frequency domain in order to restore the rank of the autocorrelation matrix. As a consequence, the effective bandwidth becomes narrow, with the result that the mean square error (MSE) of the time delay estimate cannot approach the Cramér-Rao bound (CRB).
The ML estimator is asymptotically optimal and has the best estimation performance under the condition of limited samples. Since the multidimensional likelihood function is a nonlinear function of the time delays and has many local maxima, the exact ML estimate needs a multidimensional grid search. However, the corresponding estimation accuracy is limited by the search interval, and the computational complexity increases exponentially with the dimension. In order to reduce the complexity, the ML estimation can be realized with an iterative algorithm, such as the expectation maximization algorithm [8], but an iterative algorithm requires that the initial value be close enough to the unknown parameters to be estimated. Otherwise, the iterative algorithm will converge to a local maximum of the likelihood function. In addition, the iterative algorithm can use multiple initial values to improve the performance; it then converges to the global maximum at the cost of high computational complexity. For this reason, reference [9] adopted Monte Carlo (MC) importance sampling to determine the ML estimate of the time delay under the condition of no data assistance. This algorithm does not need iterative calculation, but it can only be applied to the single-path scenario. Reference [10] used MC importance sampling to complete ML time delay estimation under multipath conditions; that algorithm needs a known reference signal in the frequency domain and cannot be directly applied to the general time delay estimation model. The iterative expectation maximization algorithm was investigated in [11]; however, it is sensitive to the initialization value and has the problem of converging to a local optimum. In this paper, the time delay likelihood function under multipath conditions is derived using the channel frequency response. A normalized pseudo-probability density function is established. The importance function (IF) is constructed according to the properties of the normalized pseudo-probability density function and is sampled using the MC method. The time delays can be estimated by calculating the mean of the samples. Finally, the simulation results present performance comparisons of the proposed algorithm, the MUSIC algorithm, and the grid search ML algorithm.
The symbols and operators used in the paper are as follows: [·]ᵀ denotes transpose; [·]* denotes complex conjugation; [·]^H denotes conjugate transpose; E[·] denotes expectation.
Signal Model
In the process of electromagnetic wave propagation, the radio channel impulse response under multipath conditions can be modeled as h(τ) = Σ_{l=1…L} a_l δ(τ − τ_l), referred to below as (1), where L is the number of multipath components, a_l = |a_l| e^{jθ_l} is the complex fading coefficient of the l-th multipath component, |a_l| is the amplitude, θ_l is the phase, assumed to obey a uniform distribution on (0, 2π) [5], τ_l denotes the time delay of the l-th multipath component, and δ(·) represents the Dirac delta function.
Taking the Fourier transform of (1), the channel frequency response can be represented as H(f) = Σ_{l=1…L} a_l e^{−j2πf τ_l}, referred to below as (2). It is common practice to use the channel frequency response for time delay estimation. The discrete samples of the channel frequency response can be obtained by different methods in different systems. For example, multicarrier demodulation is used in Orthogonal Frequency Division Multiplexing (OFDM) systems, deconvolution of the received signal is used in direct-sequence spread spectrum systems, and so forth.
The discrete measurement data are obtained by sampling the channel frequency response H(f) at N equally spaced frequencies. Considering the impact of additive white noise in the measurement process, we can express the sampled discrete channel frequency response as x(n) = Σ_{l=1…L} a_l e^{−j2π(f_0 + nΔf)τ_l} + w(n), referred to below as (3), where n = 0, 1, …, N − 1, f_0 is the carrier frequency, Δf denotes the frequency sampling interval, and w(n) represents additive white noise with zero mean and variance σ². The vector form of the signal model can be represented as x = Vb + w, referred to below as (4), where x = [x(0), x(1), …, x(N − 1)]ᵀ is the channel frequency response estimation vector, V = [v(τ_1), …, v(τ_L)] with v(τ_l) = [1, e^{−j2πΔf τ_l}, …, e^{−j2π(N−1)Δf τ_l}]ᵀ collects the frequency response vectors, b = [b_1, …, b_L]ᵀ contains the modified complex fading coefficients b_l = a_l e^{−j2πf_0 τ_l}, and w = [w(0), w(1), …, w(N − 1)]ᵀ is the additive Gaussian white noise vector.
According to (4) and the noise assumptions, the likelihood function of a single snapshot for time delay estimation can be expressed as p(x | τ, b) ∝ exp(−‖x − Vb‖²/σ²), referred to below as (5). The ML time delay estimate τ̂ is the maximizer of (5) over τ and b, referred to below as (6). Note that p(x | τ, b) is the joint likelihood of both τ and b, and it is a quadratic function of b, so the analytical expression of b in terms of τ can be obtained by setting the partial derivatives to zero: b̂ = (V^H V)^{−1} V^H x, referred to below as (7). We substitute (7) into (5), take the logarithm, and remove the constant terms. The likelihood function with respect to τ alone is then obtained as the concentrated (compressed) log-likelihood ℓ(τ) = x^H V (V^H V)^{−1} V^H x, referred to below as (8).
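To make the concentrated likelihood concrete, the following sketch evaluates ℓ(τ) on a coarse two-dimensional grid for a synthetic two-path channel measurement, which is exactly what the grid-search ML estimator does. All numerical values (number of tones, tone spacing, delays) are our own assumptions for the toy example.

```python
import numpy as np

def steering_matrix(taus, N, df):
    """V(tau): columns v(tau_l) = [1, e^{-j2*pi*df*tau_l}, ...]^T."""
    n = np.arange(N)[:, None]
    return np.exp(-2j * np.pi * df * n * np.asarray(taus)[None, :])

def concentrated_loglik(x, taus, df):
    """l(tau) = x^H V (V^H V)^{-1} V^H x (real part)."""
    V = steering_matrix(taus, len(x), df)
    G = V.conj().T @ V
    c = V.conj().T @ x
    return (c.conj().T @ np.linalg.solve(G, c)).real

# Toy two-path channel measurement x = V(tau) b + w.
rng = np.random.default_rng(5)
N, df = 64, 1e6                      # 64 tones, 1 MHz spacing (assumed)
true_tau = np.array([0.2e-6, 0.45e-6])
b = np.array([1.0, 0.7 * np.exp(1j * 0.8)])
x = steering_matrix(true_tau, N, df) @ b
x += 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Coarse 2-D grid search over (tau_1, tau_2), as exact ML requires.
grid = np.linspace(0.05e-6, 0.6e-6, 56)
best = max(((t1, t2) for t1 in grid for t2 in grid if t1 < t2),
           key=lambda p: concentrated_loglik(x, p, df))
print(best)   # close to (0.2e-6, 0.45e-6)
```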
Time Delay Estimation Algorithm Based on Importance Sampling
In ML time delay estimation, the computational complexity of a multidimensional grid search increases exponentially with the dimension, and the estimation accuracy is limited by the search interval. Iterative algorithms require initial values close to the unknown parameters; otherwise, convergence to the global maximum cannot be guaranteed. The MC method converts the search for the global maximum into the computation of the expectation of a random variable, and in practical calculations the expectation can be replaced by a sample mean. Importance sampling is the most commonly used sampling method among classical MC methods and allows the estimate to approach the global maximum. Furthermore, the computational complexity does not increase exponentially with the dimension of the likelihood function. The key to importance sampling is the selection of the importance function. In order to reduce the estimation error, the importance function should be similar to the original probability distribution. In addition, for convenience of implementation, samples should be easy to draw from the importance function.
In the following sections, the global maximum of the likelihood function is first discussed. Then, the importance function and the random sampling method are derived. Finally, the algorithmic steps and the computational complexity analysis are presented.
Global Maxima of Likelihood Function.
In order to make the sample averages approximate the global maximizer of the corresponding parameters, we can sharpen the distribution by exponentiating the likelihood function ℓ(τ) [10], which makes the estimation more accurate. The exponential likelihood function is defined as L_{ρ0}(τ) = exp(ρ0 ℓ(τ)), referred to below as (9), where ρ0 is a constant. The value of ρ0 has a significant influence on the distribution characteristics of L_{ρ0}(τ): if ρ0 is sufficiently large, then the normalized version of L_{ρ0}(τ) will approach a Dirac delta function centered at the global maximizer.
According to [12], the global maximizer can be expressed as τ̂_l = lim_{ρ0→∞} ∫ τ_l L_{ρ0}(τ) dτ / ∫ L_{ρ0}(τ) dτ, referred to below as (10), where the integration is carried out over the time delay search regions, Θ_l denoting the l-th such region.
Let us define the normalized pseudo-probability density function L̄_{ρ0}(τ) = L_{ρ0}(τ) / ∫ L_{ρ0}(τ′) dτ′, referred to below as (11). Then τ̂ can be simplified into the form τ̂ = ∫ τ L̄_{ρ0}(τ) dτ, referred to below as (12). According to the principle of importance sampling, (12) can be rewritten as τ̂ = ∫ τ [L̄_{ρ0}(τ)/g(τ)] g(τ) dτ, referred to below as (13), where g(τ) is the importance function.
Importance Function.
The choice of the importance function g(τ) affects the estimation accuracy of the proposed algorithm. g(τ) should be selected as close as possible to L̄_{ρ0}(τ), and it should be easy to draw samples from g(τ).
Assuming the delays are well separated, the matrix V^H V can be approximated as V^H V ≈ N·I, where I represents the L × L identity matrix. The concentrated likelihood then approximately separates into a sum of one-dimensional functions of the individual delays, ℓ(τ) ≈ (1/N) Σ_{l=1…L} |v(τ_l)^H x|², so that the exponential pseudo-pdf factorizes. Define the importance function as the corresponding separable pseudo-pdf, g(τ) ∝ Π_{l=1…L} exp(ρ1 |v(τ_l)^H x|²/N), where ρ1 is a constant coefficient. The size of ρ1 determines the spread of the importance-function samples, the spread decreasing monotonically with respect to ρ1. As a consequence, the choice of ρ1 should be moderate.
The MC method is an effective way to compute the integral: it recasts the definite integral as the mathematical expectation of a random variable, which can be evaluated effectively as long as that random variable can be sampled. Substituting the sample average for the integral using MC importance sampling, we can express τ̂ as τ̂ ≈ (1/R) Σ_{r=1…R} τ⁽ʳ⁾ L̄_{ρ0}(τ⁽ʳ⁾)/g(τ⁽ʳ⁾), where τ⁽ʳ⁾ is the r-th random sample drawn from the importance function g(τ) and R is the number of samples.
The closed-form expression of the inverse cumulative distribution function G⁻¹(·) of the importance function is not easy to derive; in practice, the r-th sample of g(τ) is obtained numerically by the inverse transform (inverse-CDF) method.

Algorithm Flow. According to the above derivation and analysis, the steps of the proposed algorithm can be summarized as follows.
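The following generic, self-normalized sketch may help illustrate the mechanics of estimates of the form (13): samples are drawn from an importance pdf g, weighted by exp(ρ0 ℓ(τ))/g(τ), and averaged. The one-dimensional multimodal pseudo-likelihood is a stand-in for the true ℓ(τ), and all names and numbers are our own assumptions, not the paper's implementation.

```python
import numpy as np

def is_estimate(loglik, log_g, sample_g, R, rho0):
    """Self-normalized importance-sampling estimate of the global maximizer:
    tau_hat = sum_r w_r * tau_r, with w_r proportional to
    exp(rho0 * loglik(tau_r)) / g(tau_r) and normalized to sum to 1."""
    taus = sample_g(R)                                 # samples from g
    logw = rho0 * np.array([loglik(t) for t in taus]) - log_g(taus)
    logw -= logw.max()                                 # avoid overflow
    w = np.exp(logw)
    w /= w.sum()                                       # normalized weights
    return (w * taus).sum()

# 1-D toy: a multimodal pseudo-likelihood with global maximum at 0.45.
loglik = lambda t: np.cos(40.0 * (t - 0.45)) - 4.0 * (t - 0.45) ** 2
sample_g = lambda R: np.random.default_rng(6).uniform(0.0, 1.0, R)
log_g = lambda t: np.zeros_like(t)                     # uniform g on [0, 1]
print(is_estimate(loglik, log_g, sample_g, R=20000, rho0=50.0))  # ~0.45
```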
Cramér-Rao Bound
The CRB gives a lower bound on the mean square error of any unbiased estimator; the following derivation gives the corresponding CRB for time delay estimation. Firstly, we define the unknown parameter vector θ = [σ², Re(b)ᵀ, Im(b)ᵀ, τᵀ]ᵀ, where Re(b) and Im(b) represent the real and imaginary parts of b. The log-likelihood function ln p ≜ ln p(x | τ, b) follows from (5). The partial derivatives of ln p with respect to σ², Re(b), Im(b), and τ can be obtained in closed form, and from them the Fisher information matrix (FIM) is F = E[(∂ ln p/∂θ)(∂ ln p/∂θ)ᵀ]. According to the FIM and [13], the CRB of the time delays is given by the corresponding diagonal block of F⁻¹.
Simulation Result and Performance Analysis
5.1. Simulation Result. Consider an OFDM wireless system where the number of multipath components is two. We analyze and compare the proposed algorithm with the MUSIC time delay estimation algorithm, the grid search ML time delay estimation algorithm, and the CRB [5]. Finally, the computational complexities of the above algorithms are analyzed and compared. The simulation parameter set-up of the OFDM system is shown in Table 1.
To begin with, define the mean square error as MSE = (1/M) Σ_{m=1…M} (τ̂_m − τ)², where τ̂_m indicates the parameter estimate obtained in the m-th simulation run, τ denotes the true value of the corresponding parameter, and M represents the number of estimates.
Simulation 1. Compare the likelihood function and the exponential likelihood function with ρ0 = 1 and ρ0 = 100.
As shown in Figure 1, compared with the normalized likelihood function, the two-dimensional surface of the exponential likelihood function becomes much sharper around the true delays as ρ0 increases. The multipath power is kept unchanged in the simulation process.
As shown in Figure 2, the MSE of the proposed algorithm, the MUSIC algorithm, and the grid search ML algorithm all decrease with increasing SNR. The MSE of the proposed algorithm approaches the CRB and essentially matches the performance of the grid search ML algorithm. The reason is that the importance sampling algorithm uses a weighted average of the samples to approach the global maximum of the objective function. The single-snapshot MUSIC algorithm needs to use smoothing in the frequency domain to restore the rank of the autocorrelation matrix, which leads to a loss of effective bandwidth and a reduction of the estimation accuracy.
Algorithm Complexity.
In this paper, the computational complexity of the proposed algorithm scales linearly with the number of importance samples, each sample requiring on the order of N + N log₂ N operations. The complexity of the MUSIC algorithm is dominated by the eigendecomposition of the smoothed autocorrelation matrix (cubic in the matrix dimension) plus a one-dimensional search over the candidate delay grid points. The complexity of the grid search ML algorithm grows exponentially with the number of multipath components, since the concentrated likelihood must be evaluated over a multidimensional grid of candidate delays. We can see that the computational complexity of the proposed algorithm is slightly higher than that of the MUSIC algorithm but significantly lower than that of the grid search ML algorithm.
Conclusions
In the multipath wireless communication scenario, to alleviate the computational burden of single-snapshot ML time delay estimation, we have proposed an ML time delay estimation algorithm based on Monte Carlo importance sampling. We have introduced a normalized pseudo-probability density function, a method for constructing the importance function, a random sampling method, and the CRB of the model, together with an analysis of the computational complexity. The algorithm draws samples from the importance function and uses the weighted average of the samples to compute the time delay estimate. The simulations have shown that the proposed algorithm significantly reduces the computational complexity while achieving performance close to that of the grid search ML algorithm.
Table 1: The simulation parameter set-up of the OFDM system. The number of samples is 3000; 100 Monte Carlo experiments are performed, with ρ0 = 100 and ρ1 = 1.
Controlling the temperature of bones using pulsed CO2 lasers: observations and mathematical modeling
The temperature of porcine bone specimens is investigated by aiming a pulsed CO2 laser beam at the bone-air surface. This method of controlling temperature is believed to be flexible in medical applications, as it avoids the use of thermal devices, which are often cumbersome and generate rather large temperature variations with time. The control of temperature using this method is modeled by the heat-conduction equation. In this investigation, it is assumed that the energy delivered by the CO2 laser is confined within a very thin surface layer of roughly 9 µm. It is shown that a steady temperature can be maintained using a CO2 laser, and we demonstrate that the method can be adapted for use in tandem with another laser beam. This method of temperature control is believed to be useful in the decontamination of bone during implantation treatment, in bone augmentation using natural or synthetic materials, and in low-level laser therapy.
Introduction
Bone tissue ablation [1-4] is now considered an alternative method for various types of medical surgery. In some cases, bone tissue ablation shows no discernible evidence of thermal damage, as opposed to using a standard drill, which can cause fragmentation into numerous small bone chips [4,5] during a surgical procedure. Apart from bone ablation, laser treatment of animal bones involving moderate radiant exposure to He-Ne laser light in low-level laser therapy (LLLT) has been shown to promote osteogenesis after controlled surgical fracture [6,7]. In these controlled surgical fractures, trabecular regions are exposed. Since trabecular regions are exposed, we are investigating how well the temperature of cancellous bone can be controlled by a CO2 laser beam working in tandem with a visible laser. Argon laser light at moderate radiant exposure has also been shown to give excellent results in the treatment of primary angle-closure glaucoma [8]. In tissue engineering, temperature control of the bone near the implant surface is very important, as devitalization of the overlying tissue may occur if the temperature exceeds 47 °C [9-11]. Although no major thermal damage was noticed during cortical and cancellous bone decontamination above 50 °C [12], it is still very desirable to control temperature in the medical sciences. As studies in the literature appear to treat cortical and cancellous bone separately, only results for cancellous porcine bones are reported in our investigations.
In all of the aforementioned applications related to orthopaedic surgery, a well temperature-controlled procedure is desirable to prepare the bone before it is drilled or cut. Heating a given bone specimen with a CO2 laser can be done in tandem with a visible laser that excites natural intracellular chromophores, such as porphyrins or cytochromes [4], found in the blood supplying bones. In this tandem method, the CO2 laser would control the bone temperature, since bone absorbs strongly in the far-infrared, while visible light such as He-Ne, taking the same path as the CO2 beam, would be used to excite the natural chromophores.
In this investigation, we show that the temperature of the cancellous part of porcine bones can be controlled using a constant flux of CO2 radiation. A similar method was proposed to control the hard cortical part of the bone under a constant CO2 laser flux [13]; here, a complete mathematical treatment supported by experimental results for cancellous porcine bone is presented. A fully complete mathematical model would take into account the blood supply in the cancellous structure of the bone. For the dry bones currently available, it was deemed realistic to use a simplified mathematical model that treats the bone as a uniform section. The current mathematical model provides a good basis for more complete models in which the complexities of including the effects of blood/water could be studied more rigorously.
Methodology
The absorption coefficient α of bone is very large [2,3] at the CO2 laser wavelength λ. As the melting point of mineral bone is quite high [3] (T_melt ∼ 1280 °C), the material can be heated to at least 300 °C without significantly affecting its physical properties or burning the material.
A CO2 laser, a Firestar 60t from Synrad, was used at λ = 10.6 µm to investigate all the bone samples in this study. The beam from the CO2 laser aperture was reflected by two mirrors mounted in the periscope assembly shown in Fig. 1. The pair of mirrors was adjusted so that the beam would propagate in a direction parallel to the plane of the optical table. The CO2 laser beam was collimated by a pair of lenses of focal lengths f1 and f2, respectively. Using the pair of collimation lenses, the beam was magnified to fill about one third of the surface of a third lens having a focal length of f3. This final lens focuses the CO2 beam a distance of 50 mm behind it (as shown in Fig. 1). The bone sample is placed further away, in the diverging field, at about 25 cm from the third lens's focal point. The 2D optical scanner mirrors sweep the beam along the plane of the figure at a frequency of typically 100 Hz and in a direction perpendicular to the plane of the figure at a very low frequency of typically 0.5 Hz. The surface scanned by the 2D mirror system is roughly three times the cross-sectional area of the bone specimen.
Porcine bones were chosen because laser ablation in these animal bones is similar to that observed in hard biological tissues such as human teeth [3]. Furthermore, the bone mineral densities of the cortical and cancellous parts of porcine bones are comparable to those of human bones [14]. The bone samples were cut out of cylindrical rib bones about 60 mm long. Their cross sections were nearly circular, with diameters ranging from 7 to 10 mm, and their lengths were about 10 mm. The bones were placed in boiling water for approximately 30 minutes and then in a 3% hydrogen peroxide solution for approximately four hours. Following this procedure, the bones were soaked for a few minutes in distilled water, wiped with a low-abrasion tissue, and finally left to dry in a well-ventilated area for a 24-hour period. During each trial, the bone specimen was placed with its cross-section facing the incident beam. The samples were placed on a glass substrate roughly 2 cm thick. The glass substrate (not shown in Fig. 1), lying directly on the metallic optical table, absorbs the CO2 radiation during the heating procedure and prevented too much heat from being transferred through the bone samples into the table.
At the location where the bone samples were placed, the CO2 beam exposing each sample had a broad Gaussian profile with a beam spot size (at 1/e²) of 20 mm. The spot size was much larger than the average bone cross-sectional diameter. The CO2 beam was also steered with the optical scanner mirrors along two independent directions at the aforementioned frequencies to uniformly heat the bone's cross-sectional area. Each bone sample was centered within the CO2 laser beam prior to scanning. An infrared temperature sensor (IR-USB from Omega Engineering) with an accuracy of 1 °C was aimed at the cross-sectional surface of the bone and was adjusted in order to measure the temperature of any object placed at that position. When the operator's finger was positioned at the working location, the thermometer measured a temperature of 30 °C. A red laser aligned along the CO2 beam was used to track the invisible far-infrared radiation exposing the bone samples during the heating procedure. Another laser with a different wavelength could also be used in tandem with the CO2 beam if we want to generate fluorescence in a substance overlaying the bone under a well-controlled temperature in future work.
Since bone absorbs much of the radiation at the CO2 wavelength, only a very thin layer confined near the surface of the bone is heated. Within this layer of thickness τ, the laser irradiation is attenuated by direct absorption according to the Lambert-Beer law [15,16]. Therefore, in the case of a cylindrical disk, the laser irradiation decays exponentially from the surface as I(z) = I₀ e^{−αz}, referred to below as Eq. (1), where α is the absorption coefficient, I₀ is the laser irradiation at the material surface, and z is the distance from the surface into the material. At a depth of z = 3/α from the surface of the material, Eq. (1) predicts that the laser irradiation is approximately 5% of its initial value. As a result, for our investigation, we estimate the thickness of the layer heated by the CO2 laser as τ = 3/α.
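As a quick numerical check of this estimate (with an assumed absorption coefficient chosen only to be consistent with the ~9 µm heated layer quoted in the abstract, not a measured value):

```python
import numpy as np

# Lambert-Beer decay I(z) = I0 * exp(-alpha * z). At z = 3/alpha the
# irradiation has fallen to exp(-3) ~ 5% of its surface value.
alpha = 3.3e5                  # absorption coefficient, 1/m (assumed)
tau_layer = 3.0 / alpha        # estimated heated-layer thickness
print(tau_layer * 1e6, "um")   # ~9 um
print(np.exp(-3.0))            # ~0.0498, i.e. ~5% of I0
```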
The scattering coefficient reported in the literature for hard bones in the mid-infrared, from 2 µm to 10.6 µm, is on the order of 10 cm⁻¹ [21,22], or is too small to be measured [23,24] at λ = 10.6 µm. As the scattering coefficients reported for hard tissues are very small compared to the absorption coefficient α, scattering has been neglected in Eq. (1) and is not taken into account in the mathematical model.
Experimental results
The bones were placed as shown in the experimental set-up illustrated in Fig. 1. The cancellous cross-section of the bone is facing towards the incident CO 2 laser beam as shown in Fig. 2. A computer-controlled procedure was used to heat our bone samples at constant temperature.
To maintain a bone sample at T ∼ 50 °C in this procedure, a series of CO2 laser pulses at a modulation frequency of 5 kHz was delivered to the sample at a power of about 7.4 W for 15 seconds. The power was set to a value between 7.35 W and 12.6 W so the sample temperature could ramp up to between 55 °C and 85 °C at a modulation frequency of 5 kHz. This phase is shown as step A in Fig. 3 for a sample maintained near 52 °C.
Immediately following the first series of pulses at 5 kHz, a second series of pulses was delivered to the samples at double the modulation frequency (10 kHz) at the same or a slightly different power. During the short period of time while the modulation frequency was changed from 5 kHz to 10 kHz, the laser was turned off. This is seen as a small dip in temperature in Fig. 3 at the beginning of step B. The series of pulses at 10 kHz was applied so that the bone would recover the temperature lost during the very short period of time the laser was turned off, as seen in the later section of step B in Fig. 3.
The frequency of the pulses was then changed to 20 kHz in order to keep the temperature reasonably uniform in time. This is shown as step C in Fig. 3. A similar method was also shown to be successful in heating a liquid such as water [17].
Immediately following the series of pulses at 20 kHz, the laser was turned off and the bone samples were left to cool (this is shown as step D in Fig. 3). In Fig. 4 we present the results of heating porcine bones using the method outlined in Fig. 3. We note that the bone specimens were kept at a given temperature during step C, and that the temperature can be held fairly constant (step C) under different heating conditions. Note in Fig. 4 that the temperature of the bone cross-sectional area increases quickly to reach a constant value within the 55 °C - 85 °C range, depending on the power delivered by the laser during step A. The laser is turned off for a short period of time, typically 10 ms, immediately after step A to change the modulation frequency from 5 kHz to 10 kHz. During this short period of time, the bone cools off and the temperature drops just before the laser is turned on at its higher frequency of 10 kHz. Some decreases in temperature can be seen in Figs. 4(c), 4(d) and 4(f) after 15 s (end of step A), 40 s (end of step B) and 25 s (end of step B), respectively, when the laser frequency was changed. The interval between each data point shown in all graphs in Fig. 4 is one second. A shorter pause could be implemented by the routine when the modulation frequency is changed from 10 kHz to 20 kHz. When the modulation frequency was changed to 20 kHz and the same laser power was maintained, the temperature of the bone specimens was nearly constant during step C. In addition, we note that step C lasts for ∼60 s in Figs. 4(a) and 4(b), for ∼90 s in Figs. 4(c) and 4(d) and for more than 100 s in Figs. 4(e) and 4(f). We also note that the decrease in temperature is rather modest in Figs. 4(a) to 4(e) when the laser frequency is changed. However, when the frequency of the pulses is changed from 10 kHz to 20 kHz in Fig. 4(f), we observe a significant decrease in the temperature. This large decrease in temperature in Fig. 4(f) may be due to the temperature gradient between the bone sample surface (∼85 °C) and the ambient air (∼22 °C).

Fig. 3. Four-step procedure to heat the spongy part of the bone. In step A of this process, the temperature of the bone specimen rises rapidly within 15 s. The specimen temperature is recorded for a few seconds just before the laser is turned on. During step A, the laser power is typically around 8 W and its modulation frequency is 5 kHz. A small drop in temperature is observed after step A due to a short pause in the routine to switch the laser modulation frequency. In step B, the laser is pulsed at a modulation frequency of 10 kHz for a period of 10 seconds at a laser power slightly smaller than in step A. In step C, the laser is pulsed at a modulation frequency of 20 kHz for a period varying from 60 to 150 s. When the modulation frequency changes from 10 kHz to 20 kHz, another sudden drop in temperature is observed at the outset of step C. Then the laser is turned off and a few points are recorded during this period (step D).
Mathematical model
The porcine bone specimens were modeled as a short cylinder with a height L and a diameter d, as shown in Fig. 2. Each specimen is exposed to the laser beam along the cylindrical face facing up, as shown in Fig. 2(a). Since the length L of each cylindrical specimen is within 7-10 mm, it is much greater than τ, and as a result the specimen can be modeled as a semi-infinite cylinder, as shown in Fig. 2(b). The cylinder is assumed to be axisymmetric and homogeneous with constant physical properties. The standard 1-D heat conduction equation can be written as [18]

$$\frac{1}{\kappa}\frac{\partial T}{\partial t} = \frac{\partial^2 T}{\partial z^2}, \qquad (2)$$

where κ = k/(ρc) is the thermal diffusivity (m²/s), ρ is the mass density (kg/m³), c is the specific heat (J/kg/K) and k is the thermal conductivity (W/m/K).
A finite difference solution [19] of the heat conduction equation was derived. The boundary conditions at the surface of the specimen are determined using the surface energy balance [19]. For the surface illuminated by the laser (z = 0) we have a mixed boundary condition that can be written as

$$-k\left.\frac{\partial T}{\partial z}\right|_{z=0} + h_c\,(T - T_\infty) + \varepsilon\sigma\,(T^4 - T_\infty^4) = q_{\mathrm{laser}}, \qquad (3)$$

where the first term is the heat transferred through the surface by conduction into the material, the second term is the heat transferred by convection, h_c is the convective heat transfer coefficient (W/m²/K), the third term is the heat transferred by radiation (ε is the emissivity of bone, σ is the Stefan-Boltzmann constant, T_∞ is the temperature of the ambient air) and, finally, q_laser is the heating source due to the laser. We note that the radiation term (3rd term) makes Eq. (3), and hence the problem, non-linear in T. The mathematical model considers a solid bone and, as a result, scattering from the bone was neglected in Eqs. (1) and (3). The bone is in contact with the glass plate at z = L, where we have the following boundary conditions:

$$T_{\mathrm{bone}} = T_{\mathrm{glass}} \qquad (4)$$

and

$$k_{\mathrm{bone}}\frac{\partial T_{\mathrm{bone}}}{\partial z} = k_{\mathrm{glass}}\frac{\partial T_{\mathrm{glass}}}{\partial z}. \qquad (5)$$

The temperatures of the bone and of the glass are identical at the contact surface, and the heat fluxes through both media are equal.
Basic assumptions
Before we can proceed to compute the temperature as a function of time by solving Eq. (2) subject to the boundary conditions described in Eqs. (3), (4) and (5), we must make a few assumptions. For our numerical study, we assumed that the laser beam had a 'top-hat' beam pattern and that the beam area was twice the cross-sectional area of the bone. In other words, we assumed that the intensity of the beam was uniform over the entire surface of the bone.
The initial temperature of the specimen at t = 0 was assumed to be uniform and equal to the temperature of the ambient air, namely T_∞ = 25 °C. We assumed that the bone was homogeneous and that its thermal properties were constant throughout the material. We also assumed that the emissivity of the bone was ε = 1 [20,25,26].
The thermal properties for cancellous bone were taken from [16] and were used in the numerical simulations that are presented in Figs. 5 and 6. According to [16], a thermal conductivity of k = 0.31 W/m/K, a mass density of ρ = 1178 kg/m 3 and a heat capacity of c = 2274 J/kg/K results in a value of κ = 1.1572 × 10 −7 m 2 /s for the thermal diffusivity of cancellous bone.
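The finite-difference procedure described above can be sketched compactly. The following is a minimal explicit (FTCS) solver for Eq. (2) with the mixed surface balance of Eq. (3); it is not the authors' code. The absorbed laser flux, grid, convective coefficient and the linear specific-heat model are illustrative assumptions, and the glass-side conditions of Eqs. (4)-(5) are simplified to a fixed ambient far boundary.

```python
import numpy as np

# Thermal properties of cancellous bone quoted in the text [16]
k, rho = 0.31, 1178.0            # W/m/K, kg/m^3
eps, sigma = 1.0, 5.670e-8       # emissivity (assumed 1), Stefan-Boltzmann constant
h_c = 100.0                      # convective coefficient, W/m^2/K (one of the Fig. 6 values)
T_inf = 25.0 + 273.15            # ambient temperature, K
q_laser = 8.0e3                  # absorbed laser flux, W/m^2 (illustrative; the absorbed
                                 # fraction of the ~8 W beam is not specified in the text)

def c_of_T(T_C):
    """Linear specific-heat model c = 100 + 10*T (T in deg C) tried in the text."""
    return 100.0 + 10.0 * T_C

L, nz = 0.008, 81                # 8-mm specimen, uniform grid
dz = L / (nz - 1)
T = np.full(nz, T_inf)

def step(T, dt, laser_on):
    """One explicit FTCS step of Eq. (2) with the surface balance of Eq. (3)."""
    c = c_of_T(T - 273.15)
    kappa = k / (rho * c)        # local thermal diffusivity (temperature dependent)
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + kappa[1:-1] * dt / dz**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    # Surface node: laser input balanced against convection, radiation and conduction
    q_net = (q_laser if laser_on else 0.0) \
            - h_c * (T[0] - T_inf) - eps * sigma * (T[0]**4 - T_inf**4)
    Tn[0] = T[0] + dt / (rho * c[0] * dz) * (q_net + k * (T[1] - T[0]) / dz)
    Tn[-1] = T_inf               # simplified far boundary (glass side held at ambient)
    return Tn

dt = 0.4 * dz**2 * rho * c_of_T(25.0) / k     # explicit stability limit (worst case)
t = 0.0
while t < 20.0:
    T = step(T, dt, laser_on=(t < 15.0))      # heat for 15 s (step A), then cool
    t += dt
print(f"surface temperature after 20 s: {T[0] - 273.15:.1f} C")
```

Because c(T) increases with temperature here, the diffusivity falls as the bone heats, so the time step chosen at 25 °C remains stable throughout the run.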
Numerical results
In Fig. 5, we present the temporal variation of the surface temperature obtained from our numerical simulations for the first 3 laser heating pulses. We find that the temperature rises rapidly at the outset of the heating pulse and decreases slightly when the laser is turned off. This can be seen as a "ramped step function". For larger values of the duty cycle, the temperature increases slightly more than for smaller duty cycles. Nevertheless, in all cases, the cooling is not sufficient to return the surface to its original temperature, and as a result the surface temperature continues to increase as time progresses. These results are consistent with those found in [27,28].
In Fig. 6(a), we present the temporal variation of the surface temperature obtained from Eq. (2) for 100 seconds for 4 different values of the convective heat transfer coefficient; the specific heat was assigned a value of c = 2274 J/kg/K [16]. In this figure we simulated a laser that was turned off for 1 s while the laser frequency was being changed from 5 kHz to 10 kHz at t = 20 s and from 10 kHz to 20 kHz at t = 40 s. This can be seen as deep minima in the surface temperatures at these transition times. We observe that as h_c is reduced, the temperature increases more rapidly as a function of time. This is consistent with the notion that the less heat is removed from the surface by convection, the hotter the surface becomes as it absorbs energy from the laser. For the largest value of h_c = 200 W/m²/K, we note that as time progresses the temperature begins to flatten out. This can also be seen in Fig. 6(b), in which we plot the time derivative of the surface temperature as a function of time. As time progresses, the time derivative of the surface temperature for all values of h_c approaches 0, indicating that the surface of the bone specimen is approaching a constant temperature. One of the main difficulties that we encountered while modeling the bone sample is that there does not seem to be any consensus in the literature on the value of the specific heat for either cancellous or cortical bones. Values ranging from 1.15 × 10³ J/kg/°C [29-33] up to values as high as 2.274 × 10³ J/kg/°C [16] have been reported. [34] showed that the specific heat of bovine femurs increases linearly as a function of temperature and depends on its mineral composition as well as its water content.
Figure 7 depicts the temporal variations of the surface temperature obtained using Eq. (2) for the experimental parameters shown in Fig. 4(a), namely 7.7 W for 20 s < t < 40 s (step A), 7.4 W for 40 s < t < 80 s (steps B and C) and the laser turned off when t > 80 s, for different values of the specific heat. In the first two cases, the specific heat was set to a constant value of c = 2.274 × 10³ J/kg/°C (solid black trace) [16] and c = 1.3 × 10³ J/kg/°C (solid blue trace) [31]. For the other two cases we assumed that the specific heat increases linearly as a function of temperature: c = 200 + 10T (blue dash-dot trace) and c = 100 + 10T (black dash-dot trace). Many other linear models for the heat capacity were tried, but these seemed to give the best results for our simplified model.
We remind the reader that for all of the cases that are presented in Fig. 7, the convective coefficient h c , and thermal conductivity k, were held to constant values of 100 W/m 2 /K and 0.31 W/m/K, respectively. The initial temperature that was used for all of these cases was obtained from the observed values. As a result, the initial temperature in our model was set at T ∞ = 29 • C which is the same initial temperature as in Fig. 4(a). Lastly, in order to facilitate the comparison between the computed temperatures and those that were measured using the IR sensor, we superimposed the measured temperatures shown as red triangles in Fig. 7.
We observe from Fig. 7 that for all of the 4 cases presented, the surface temperature increases as a function of time. In the first case, where c = 2.274 × 10 3 J/kg/ • C (black solid line), the temperature rises to T ∼ 47 • C at 80 s. We also note that the rate at which the temperature increases at the outset of the laser heating, 20 s < t < 30 s, does not agree very well with the rate with which the observed values are seen to increase (red triangles). The observed temperatures show a significant jump from the initial temperature of T = 29 • C to T ∼ 47 • C in approximately 1 s. Our simplified model is not able to replicate this abrupt change in temperature for this particular value of heat capacity.
In the second case, when c = 1.3 × 10³ J/kg/°C (solid blue trace) [31], we notice that the temperature increases in a similar fashion as in the first case but reaches a slightly higher value of T ∼ 50 °C at 80 s. We further note that the agreement between the computed and the observed rate of change in the temperature at the outset of the laser heating improves somewhat when a smaller value for the specific heat is used in our model; however, it still does not quite capture the measured trend.
In the next two cases, we assumed that the specific heat varies as a linear function of temperature. In the third case, the heat capacity was described as c = 100 + 10T (black dash-dot trace), while in the fourth case it was described as c = 200 + 10T (blue dash-dot trace). In both cases we observe that the rate at which the computed temperature increases at the outset of heating agrees much better with the measurements. We further note that in both of the last two cases, the computed temperatures converge towards the measured temperature. The differences between the computed and measured temperatures are less than ∼10% for 30 s < t < 80 s. From these 4 cases we can see that the numerical results rely significantly on the value of the specific heat that we assign to the bone in the model. It appears that a better agreement with the observed temperatures is obtained if we assume that the specific heat varies as a linear function of temperature.
In all of the cases presented in Fig. 7, we note that the simulated surface temperature decreases slightly when the laser beam modulation frequency is changed from 5 kHz to 10 kHz and from 10 kHz to 20 kHz. To model the effect of changing the modulation frequency in our simulations, we assumed that the laser was turned off for 0.1 s while the laser modulation frequency was changed. This can be seen as sharp decreases in the temperatures at the times when the frequencies were changed, namely at t = 40 s and t = 50 s. Although one could easily argue that our model is much too simplified, Fig. 7 shows that it gives reasonable results in comparison to the measured temperatures. This suggests that our model could be a useful tool to obtain a first-order prediction of the average temperature within τ of bones.
Conclusion
It was shown experimentally that the surface temperature of porcine bones can be controlled to a constant value, within 1 or 2 degrees, in the 50-85 °C range. This small temperature variation of less than 2 °C would provide sufficient resolution for bone treatment during exposures of at least one minute.
A very simplified model of the bone structure was used in this study. We assumed that the bone was a uniform axisymmetric cylinder possessing thermal parameters equivalent to those of cancellous bone. The composition of the bone, its porosity, the fact that bones are not uniform, as well as the effect of the beam scanning across the bone sample, could affect the computed temperatures; however, these were not considered in the model. Nevertheless, our very simplified mathematical description, which included heat transfer by conduction, convection and radiation, yielded temporal variations of the temperature within ∼10% of the observed values.
The model predicts that the temperature reaches a near-constant value of ∼50 °C for h_c = 100 W/m²/K when the specific heat of the bone is permitted to vary linearly with temperature in the model (see Fig. 7). Furthermore, the agreement between the observed and computed rates of change in temperature at the outset of the laser heating improves significantly when the specific heat is allowed to vary as a function of temperature. A more sophisticated model that takes into account the change of h_c and k as functions of the bone surface temperature, as suggested in [13], as well as changes in the composition, the porosity and the sweeping of the CO₂ laser beam, may improve the mathematical predictions of the average temperature.
"Physics",
"Materials Science"
] |
Climate Coalitions and Punishments
Studies that demonstrate that climate change is human-induced are becoming more and more prevalent. Even though most world leaders are aware of this urgency and know that we must work at mitigating it quickly, little has been accomplished in terms of widespread participation in an International Environmental Agreement (IEA). The purpose of this paper is to create a link between studies on the use of border tax adjustments (BTAs) and coalition formation. The main contribution is that the punishment is based on relative emissions between signatories and defectors. It is a structure that is more likely to be accepted by the World Trade Organization (WTO), since it may be seen as fair: if signatories and defectors emit the same amount of pollution, then there is no punishment. The main results indicate that this form of punishment may lead to small, partial, or full cooperation, depending on the parameter values. Additionally, at any equilibrium level, the signatories have a punishment structure that induces defectors to reduce their emissions by the same amount. In the end, this punishment may be seen as a credible threat because at equilibrium no punishment is imposed, yet if we remove the possibility of punishment the model breaks down to a situation wherein no large coalitions are feasible.
Introduction
Environmental issues are commonly of international concern. The actions undertaken within a country are often felt by surrounding countries. Some issues, such as climate change caused by the release of greenhouse gas (GHG) emissions into the atmosphere, are international in nature because GHGs are transboundary pollutants. Hence, one unit of emissions released in one country versus another has the same impact on climate change. Viewing this obvious link, countries have realized that this problem must be addressed through the use of an international environmental agreement (IEA), because no global government exists to determine, monitor and enforce reduction targets for every country.
When countries decide whether or not to join an IEA and set emission reduction targets, the policymakers in power first think of the potential gains and losses of doing so. To be implemented, an agreement should be beneficial not only in the long term (i.e., climate change mitigation), but also in the short term (i.e., protection of domestic jobs). Ideally, for a successful IEA, there would be incentives for the major polluting countries to participate. However, complete cooperation is typically unlikely because of the well-known free-rider problem, whereby some individual countries are better off adopting lax environmental policies while other countries strive to reduce emissions through stringent actions [1]. The reason for this problem is that reducing emissions entails some costs, borne only by that country, which in turn leads to a loss of competitiveness.
The use of restrictive trade policies has been suggested to level the competitive playing field by offering non-participating countries an incentive to join the coalition (see, for example, [2], [3] and [4]). A common proposal is the implementation of a border tax adjustment (BTA) on the imports of energy-intensive goods from countries that do not commit to reducing their emissions, as well as export rebates.¹ [2] argues that if a country is maximizing its welfare taking into account everyone's emissions as environmental damages, it must either impose a carbon tariff or differentiate domestic carbon taxes to avoid large losses of competitiveness that bring about carbon leakage. An increasing number of jurisdictions around the world are seriously considering using carbon-motivated BTAs. The US, with the Waxman-Markey Bill, and the EU are currently contemplating the implementation of BTAs.² The legality of a BTA has always been a subject of discussion, yet many believe that it does not go against WTO rules (see, for example, [6], [7], [8], [9]). [6] proposed that BTAs should be imposed under the best available technology (BAT) rule to be in accordance with WTO regulations. Under this proposition, goods would be subject to the same "per unit" taxation level regardless of the country from which they came, thus greatly simplifying the informational requirements. [5] believes that the key to acceptance under the WTO is in how the BCA is designed and implemented. Specific to climate change, many researchers have looked at finding ways to achieve the grand coalition in an international environmental agreement (IEA) (for a thorough review, see [10]). Overall, except in very specific cases (e.g., dynamic games), researchers found the maximum coalition size to be 2 or 3 [11]. [11] sees it as costlier to punish when one country pollutes more than the cooperative desired level. Costly punishments lead to non-credible threats, wherein one country has no incentive to administer a punishment and the other has no reason to believe that it will be punished. [12] argues that a punishment or negative incentive must be credible and severe for it to be effective.
¹ Also referred to as a border carbon adjustment in the literature dealing with the emissions trading scheme.
Ideally, it should be demonstrated that it is possible to punish a non-compliant country without much damage in the process.
Previous punishment structures relied on using more emissions as a form of punishment, which hurt not only the deviator but also the coalition, making it a non-credible punishment. [13] looked at the evolution of signatories to an IEA on climate change. They developed a punishment structure, imposed by signatories onto deviators, which was linked to the number of signatories and the level of the pollution stock. Naturally, as the pollution stock grew, the punishment would increase because it would become more pressing to limit emissions. Furthermore, the cost of punishment borne by the signatories would fall as more signatories enter. As in most papers looking at coalitions, they use a stability concept introduced by [14], which states that for a coalition to be stable, no signatory can have an incentive to exit, nor can any non-signatory have an incentive to join the agreement. Their paper may be viewed as a good representation of reality, but for a punishment to be acceptable by the WTO it must be seen as fair. The questions that seem unresolved are: why would two deviators with different emission levels be subject to the same punishment? And why can a signatory country that pollutes the same amount as a non-signatory inflict a punishment? The main shortfall of the noted article is that nothing links the punishment to relative emissions.
The purpose of this short essay is to expand on [13] by looking at the implications of introducing relative emissions, which gives it more of a BTA flavour.
The main questions to be answered are:
1. Will a punishment that depends on relative emissions cause "defectors" to emit less in order to suffer a smaller punishment?
2. If the answer to the first question is yes, does this mean that more countries will become "signatories" because they naturally emit a level closer to that of the signatories, to avoid this emission-dependent punishment?
The motivation here is to determine whether, through the use of an emission-indexed punishment, there will be greater membership and lower global emissions.
If yes, this would mean that we need no longer rely on an extensive IEA to succeed in fighting climate change, but rather a simple IEA with the permission of the WTO to impose a punishment, with everyone maximizing their self-interest.
The rest of the paper is structured as follows. The model will be described with key equations in Section 2. The game will be played in Section 3 and a sensitivity analysis will be performed in Section 4. Section 5 will conclude and provide some possible extensions.
The Model
As in [13], the current paper looks at a situation where N symmetrical countries decide whether or not they want to become a "signatory" of an IEA for the reduction of pollution. The set of signatory countries is denoted by S, and the members of S maximize their aggregate welfare, where s is the number of signatory countries, |S| = s, s ≥ 0.³ The remaining N − s countries choose not to join the agreement and are referred to as "defectors", where D denotes the set of defectors.
Production by all countries leads to benefits and pollution emissions. The benefits from production are represented by a benefit function of emissions, where e_k is the level of emissions by country k ∈ S ∪ D and b is a positive scaling parameter.
For simplicity reasons and to better compare with [13], environmental damages are linear in the amount of emissions released by all the countries.
Punishment Function
In [13], all signatory countries agree to inflict a punishment on all non-signatory countries, and it is assumed to be proportional to the level of the pollution stock.
It is difficult for such a punishment to be seen as "fair" if it is solely based on the current pollution stock and the number of signatories. In practice, it is challenging to justify a punishment from a signatory country if the non-signatory country has the same level of emission reductions. Ideally, this punishment serves to penalize countries with laxer environmental policies, as the border tax adjustment (BTA) literature would suggest. Thus, if the emission reduction level is the same between signatory and non-signatory countries, then there should be no punishment.
We introduce a component that makes the punishment smaller if the deviators work at reducing their emissions further.
The punishment inflicted on a defector therefore depends on relative emissions, P(e_i, e_j), where e_i is the emission level of a defector country, e_j is the emission level of signatory countries, and α ≥ 0 is a scaling parameter. The punishment function is such that, for a fixed level of emissions by the signatories, the punishment is increasing with the level of emissions by the defectors.⁴
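The explicit functional form of the punishment does not survive in this copy of the paper; the sketch below therefore uses one simple assumed form with the stated properties: zero when emissions coincide, increasing in defector emissions, and scaled by α and the number of signatories s.

```python
def punishment(e_i, e_j, s, alpha):
    """Punishment on a defector emitting e_i when each signatory emits e_j,
    administered by s signatories with intensity alpha (assumed linear form)."""
    return alpha * s * (e_i - e_j)

assert punishment(1.0, 1.0, s=5, alpha=0.2) == 0.0   # equal emissions: no punishment
assert punishment(1.5, 1.0, s=5, alpha=0.2) > punishment(1.2, 1.0, s=5, alpha=0.2)
print(punishment(0.8, 1.0, s=5, alpha=0.2))          # negative: a subsidy, cf. the CDM remark later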
Self-Inflicted Cost Function
The self-inflicted cost will be modelled as a fraction of the punishment, as was done in [13]. The intuition here is that countries who choose to punish will incur a cost to do so. We assume that this net impact is a cost to the signatories, and that this cost falls as more countries join the agreement.
³ Here, we assume that a single country can punish all the others (a single "signatory").
Here, τ is the self-inflicted cost scaling parameter, with τ ≥ 0. Participation in an IEA has two extremes that must always be kept in mind. In the case where no one wants to be part of the agreement and all act non-cooperatively, we need to have no punishment nor any self-inflicted cost. The other extreme case is full participation, where all maximize their welfare taking into account the negative impact of their emissions on the environmental damages suffered. In this case, we must also have no punishments nor self-inflicted costs.
The current model satisfies both of these extreme scenarios. It is the transition between these two points that is interesting to examine.
As the number of signatories grows, the punishment, which causes a reduction in welfare to defectors, will increase. Since the countries are symmetric, comparing the amount of emissions of a typical signatory to that of a deviator allows us to discern how much more environmentally friendly the signatories are relative to the defectors. Even though signatories join an agreement, they should not be allowed to impose a punishment if their emissions are equivalent to those of the defectors. The deviators' emission levels are likely to fall as they identify the link between their emissions and the punishment. Their incentive to reduce emissions becomes greater as the number of signatories, and thus the number of countries administering a punishment, increases.
The key welfare equations of the model are the signatory and defector welfares, W_S and W_D, where E_S = Σ_{j∈S} e_j and E_D = Σ_{i∈D} e_i denote, respectively, the total emissions of the signatories and the defectors.
The Game
When the game is played, the signatory maximizes its welfare taking into account the negative externality it has on other signatory countries. Conversely, the defector does not take into account any negative externality. Typically, we would assume that signatories would be the first movers and that defectors would then decide on their pollution levels. However, since the payoffs are linear in the choice variable, the reaction functions of signatories and defectors do not directly depend on the emission levels of the other group. Consequently, a sequential Stackelberg game is equivalent to a simultaneous Nash equilibrium in this case. Taking the first-order conditions yields the equilibrium emission levels of both groups, Equations (9) and (10). From here, we may be interested in the level of emissions of one group relative to that of the other. If we remove punishment and self-cost from these equations, it is clear that as the number of signatories exceeds 1, the level of emissions of signatories will be smaller than that of the defectors. Once we add punishment, we induce defectors to reduce emissions and incur a smaller punishment level. To answer the first question in the introduction, we may notice at this point that the emissions of defectors depend on the number of signatories and their respective punishment. The more signatories join, the less incentive defectors have to pollute. They may actually reduce their emissions by a larger percentage than signatories, thus giving them access to some form of compensation (i.e., similar to the idea of the Clean Development Mechanism in the Kyoto Protocol).
To ensure that this s* is a positive number, we must first ensure that the parameter values satisfy d − α(1 − τ) > 0. If the punishment parameter α is smaller than the damage parameter d, this condition is satisfied in all cases. However, if the punishment is greater than the environmental damage parameter, the self-inflicted cost of punishment must increase as the punishment increases, or else there will not be a situation where the emissions are the same between both groups.
If the self-inflicted cost is only a fraction of the actual punishment cost, the condition above is sufficient to ensure that we have a positive s*. This condition simply requires the punishment parameter not to be too large relative to the environmental damage cost. It may still be larger, because the condition involves α(1 − τ) and (1 − τ) ≤ 1, but it should not be largely out of proportion.
Looking at Equations (9) and (10), we can see that in both cases, the higher the number of signatories, the lower the level of emissions. Before going further in the discussion of what impacts s*, we must first look at the welfare of both groups. Inserting Equations (9) and (10) into Equations (5) and (6), we obtain the welfare levels of the two groups. Setting W_S = W_D and solving for s*, we obtain two possible results: s* and s**. For s**, the emissions of the defectors would be less than those of the signatories at equilibrium. Additionally, it would require a punishment parameter greater than the actual environmental damage, which is unlikely to be allowed by the WTO. We can thus eliminate this non-intuitive possibility.
After eliminating the possibility of s**, we may verify that s* is indeed a stable equilibrium. In coalition formation studies, a common measure of stability uses the internal and external stability concepts of [14]. Internal stability verifies that, at this equilibrium, no signatory has an incentive to leave, and external stability ensures that no defector wants to join the signatories.
Looking locally first, we find that if the parameter values imply s* > N, then the equilibrium coalition size is greater than the number of countries and there would be a convergence to the grand coalition.
Comparative Static Analysis
Knowing that s* is a stable equilibrium and that the greater s is, the lower the total emissions, one may wonder what factors impact the value of s*. Only four parameters may impact s*: N, d, α and τ. Following is a comparative static analysis of how each influences s*.
Impact of N on s*: if the number of symmetric countries in the model increases, the equilibrium number of signatories increases, based on the previous condition that d − α(1 − τ) > 0.
Impact of d on s*: as the environmental damage parameter becomes greater, there are fewer signatories. This result can be explained by the fact that, with greater environmental damage, the signatories will further reduce their emissions because they internalize this negative externality, causing a greater burden on their firms' profits and hence reducing the number of countries having an incentive to become a signatory.
Impact of α on s*: if the punishment level rises, it is less beneficial to be a defector and the number of signatories increases. It was noted earlier that the punishment parameter should always be smaller than d. This finding demonstrates that as the punishment rises, more countries will want to become signatories, answering the second question from the introduction.
Impact of τ on s*: the greater the self-cost parameter, the greater the number of signatories required to render the welfares equal; since signatories now suffer a greater cost, it needs to be spread over a larger number of signatories.
These results provide insights on how altering certain key parameters allows us to get closer to the grand coalition. Ultimately, simply having a large enough punishment parameter (still smaller than the environmental damage parameter) may be a credible threat that obtains the grand coalition, where no punishment is induced on anyone. Even though for certain parameter values we may not obtain the grand coalition, the punishment structure based on relative emissions substantially reduces the level of emissions of the defectors compared to the business-as-usual scenario with no punishment. The punishment now serves as a credible threat inducing a reduction in emissions by all players. In the end, the goal may not necessarily be to obtain the grand coalition, but to induce a reduction by all.
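The comparative statics above can be reproduced numerically under assumed functional forms: quadratic benefits b(e - e²/2), linear damages d·E, the linear punishment αs(e_i - e_j) sketched earlier, and a per-signatory self-cost of (τ/s) times the total punishment. None of these are the paper's exact equations, but under these assumptions the welfare-equalizing coalition size has the closed form s* = (d + ταN)/(d - α(1 - τ)), which is positive exactly when d - α(1 - τ) > 0 and, for s* ≤ N at large N, requires approximately d > α, matching the conditions discussed in the text.

```python
def s_star(N, d, alpha, tau):
    """Coalition size equalizing signatory and defector welfare under the
    assumed functional forms; requires d - alpha*(1 - tau) > 0."""
    return (d + tau * alpha * N) / (d - alpha * (1.0 - tau))

def emissions(s, N, b, d, alpha, tau):
    """Interior emission levels from the assumed first-order conditions."""
    e_D = 1.0 - (d + alpha * s) / b                   # defector
    e_S = 1.0 - (s * d - tau * alpha * (N - s)) / b   # signatory (aggregate welfare)
    return e_S, e_D

base = dict(N=20, d=0.5, alpha=0.3, tau=0.5)
print("base s*:        ", s_star(**base))                       # 10.0
print("higher alpha s*:", s_star(**{**base, "alpha": 0.4}))     # rises
print("higher d s*:    ", s_star(**{**base, "d": 0.8}))         # falls
print("higher tau s*:  ", s_star(**{**base, "tau": 0.8}))       # rises
print("larger N s*:    ", s_star(**{**base, "N": 40}))          # rises

# At s = s* the two groups emit the same amount, so punishment, self-cost and
# welfares coincide; below s*, e_S > e_D, as in the off-equilibrium discussion.
e_S, e_D = emissions(s_star(**base), b=10.0, **base)
print("emission gap at s*:", round(e_S - e_D, 12))              # ~0
```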
Off Equilibrium Analysis
In equilibrium, we will always obtain the same amount of emissions for all countries, and therefore no punishment or cost. An interesting observation is that if the number of signatories is smaller than s*, we have e_S > e_D. This could be interpreted as the signatories emitting more to induce the defectors to join the coalition. However, this situation is not stable, because when it occurs, signatory welfare exceeds defector welfare and more countries want to become signatories. At the equilibrium coalition size, signatories do not emit more than defectors. Additionally, one may anticipate that some countries may not choose to adopt a Kyoto-type protocol, but due to policies like the Kyoto Protocol Clean Development Mechanism, they may receive a subsidy if their emission reduction is greater than that of the signatories.
Conclusions
[13] had a punishment and self-cost independent of the current choice of emissions.
Introducing a punishment based on relative emissions changed the incentives of defectors and induced them to emit less than in the previous model. To our knowledge, this paper offers the first bridge between coalition formation and the BTA literature. Although it is a preliminary model, it offers interesting conclusions on the strategic impact of punishment based on relative emissions. First, a punishment that depends on relative emissions can cause the defectors to emit less in order to suffer a smaller punishment. Second, this new punishment structure allows us to obtain a partial or the grand coalition. It also offers different solutions on how to increase the number of signatories, to potentially converge to the grand coalition where all pollute the cooperative ideal level of pollution.⁶ The main deficiencies of this article can also be seen as future opportunities.
For example, one could examine different functional forms, such as a quadratic environmental damage function, to have more interdependence in the reaction functions and to allow more of a sequential game. Moreover, an estimation of real-world parameter values to determine the expected outcome would be very beneficial.
Additionally, introducing dynamics or other markets may offer additional insights.
Here E_S = Σ_{j∈S} e_j and E_D = Σ_{i∈D} e_i are, respectively, the total emissions of the signatories and the defectors. Based on the emission difference equation, it is interesting to note for what value of s the difference in emissions is equal to 0.⁵ One can remark how useful this piece of information is by looking at W_S and W_D above. Simply put, when emissions are the same for both groups, we end up with no punishment or self-cost and the same firm profits and environmental damage, leading to the same welfare levels. Hence, for the parameter values used in the model, there should exist an s where W_S = W_D; for a very large N, we approximately need d > α. If we instead impose the opposite condition after inserting the equilibrium s** in the emissions difference equation, the defectors would have to emit less than the signatories, the non-intuitive case eliminated above.
"Economics"
] |
Absence of A3Z3-Related Hypermutations in the env and vif Proviral Genes in FIV Naturally Infected Cats
Apolipoprotein B mRNA-editing enzyme catalytic polypeptide-like 3 (APOBEC3; A3) proteins comprise an important family of restriction factors that produce hypermutations on proviral DNA and are able to limit virus replication. Vif, an accessory protein present in almost all lentiviruses, counteracts the antiviral A3 activity. Seven haplotypes of APOBEC3Z3 (A3Z3) were described in domestic cats (hap I–VII), and in-vitro studies have demonstrated that these proteins reduce infectivity of vif-defective feline immunodeficiency virus (FIV). Moreover, hap V is resistant to vif-mediated degradation. However, studies on the effect of A3Z3 in FIV-infected cats have not been developed. Here, the correlation between APOBEC A3Z3 haplotypes in domestic cats and the frequency of hypermutations in the FIV vif and env genes were assessed in a retrospective cohort study with 30 blood samples collected between 2012 and 2016 from naturally FIV-infected cats in Brazil. The vif and env sequences were analyzed and displayed low or undetectable levels of hypermutations, and could not be associated with any specific A3Z3 haplotype.
Introduction
Feline immunodeficiency virus (FIV) is a lentivirus of the Retroviridae family which is able to infect several species of Felidae. It has been associated with feline acquired immunodeficiency syndrome (FAIDS) in domestic cats (Felis catus) [1]. Besides its biological and genomic similarity to HIV, which makes it a valuable natural model for the study of AIDS [2], FIV has significant veterinary importance due to its high prevalence in domestic cats worldwide [3]. As is characteristic of retroviruses, one of the steps of the FIV replication cycle is the reverse transcription of the genome; the reverse transcriptase enzyme uses the viral RNA as a template for the synthesis of the provirus, which is subsequently integrated into the genome of the host cell. The proviral DNA is flanked by long terminal repeats (LTRs) and is constituted by structural genes (gag, pol and env) and accessory, auxiliary and regulatory genes (vif, orfA and rev, respectively) [3]. Among the proteins encoded by these genes, the viral infectivity factor (vif) is a 23-29 kDa protein essential to the formation of infectious viral particles in nonpermissive cells [4][5][6].
Mammalian cells express restriction factors that play an important antiretroviral role in innate immunity. One of them is apolipoprotein B mRNA-editing enzyme, catalytic polypeptide-like (APOBEC3, A3), which belongs to the family of the DNA cytidine deaminases [7]. Cats encode four APOBEC3 proteins (A3Z2a, A3Z2b, A3Z2c and A3Z3). In addition, a fifth protein, named A3Z2Z3, is expressed by read-through alternative splicing [8]. The A3 proteins expressed in infected cells are encapsulated in the virions and act in the cytoplasm of the target cell, after RNA liberation [9,10]. During the reverse transcription of the viral genome, A3 catalyzes a deamination reaction by converting cytidines into uridines, thus inducing G-to-A hypermutations in the newly synthesized viral DNA [11][12][13]. As a viral countermeasure, A3 activity is drastically diminished as it is tagged for proteosomal degradation by vif, encoded by the virus genome [14,15].
The interaction among vif, A3 and a ubiquitin ligase complex leads to the polyubiquitination and degradation of A3 proteins [16,17]. Vif contains different N-terminal motifs that interact with A3Z2 and A3Z3. Moreover, it is able to interact through residues 50-80, outside the interaction sites [18]. Importantly, alterations in the coding sequence of A3 may modify its stability and subcellular localization and, consequently, its interaction with vif. Seven haplotypes of A3Z3 encoded by domestic cats have been reported [19]. Among them, only haplotype V has demonstrated in vitro resistance to FIV vif-mediated degradation, determined by the lateral chain of the amino acid at position 65 [20]. The objective of this retrospective cohort study was to assess the effect of different A3Z3 haplotypes on the frequency of hypermutations in the env and vif gene sequences of cats that tested FIV positive by the SNAP FIV/FeLV Combo Test (Idexx) or PCR.
Samples and DNA Extraction
Thirty samples of peripheral blood from FIV naturally infected cats from Porto Alegre, RS, Brazil collected between 2012 and 2016 were used for the analyses. The DNA extraction was performed using buffer-saturated phenol and the DNA was stored at −20 • C. Animals were nonpedigree cats, characterized by being a genetically homogeneous population. All the study protocols were approved by the Ethics Committee on Animal Use (CEUA) of the Federal University of Rio Grande do Sul (UFRGS). Project number 29749, permission date 4 October 2016.
A3Z3, env and vif Amplifications and Sequencing
Exon 3 of the A3Z3 gene (coding for APOBEC3) was submitted to PCR with primers A3H2F and A3H3R, as described previously [19]. FIV provirus was amplified by nested PCR. The first round of amplification was performed with primers VIF_FIV_PF and ENV_PR [18], generating a 3.1-kb-long fragment, using the Phusion High-Fidelity DNA Polymerase (New England Biolabs, Ipswich, MA, USA). In order to amplify part of the env gene, a second round of amplification was performed with primers ENV2-3_PF and ENV2-3_PR [21], giving rise to an expected final product of 831 bp. In order to amplify the vif gene, the 3.1-kb product was submitted to another second round of amplification with primers VIF_FIV_PF and VIF_FIV_PR [18], generating a 756-bp-long fragment that encompassed the entire vif gene (sequences of primers are available in the Appendix, Table A1). In order to sequence the entire vif gene, this product was cloned into the pCR2.1 vector using the TOPO TA Cloning kit (Thermo Fisher Scientific, Waltham, MA, USA). Final PCR products of A3Z3 (590 bp), env (831 bp) and the cloned fragment of vif were sequenced using BigDye Terminator v3.1 Cycle Sequencing (Applied Biosystems, Waltham, MA, USA). Three vif clones per sample were sequenced. The generated chromatograms were then assembled using the Geneious® software (version 9.0.5, Auckland, AUK, NZ, http://www.geneious.com) [22].
Genotype and Haplotype Analyses
Genotype and allele frequencies were determined manually by gene counting in the A3Z3 gene. The Hardy-Weinberg equilibrium was calculated. The haplotypes were determined and analyzed with the program MLOCUS [23]. The linkage equilibrium was also calculated.
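For reference, the gene-counting and Hardy-Weinberg check can be sketched as follows; the function name and the genotype counts are illustrative inventions, not the study's data or software.

```python
from scipy.stats import chi2

def hardy_weinberg_test(n_AA, n_Aa, n_aa):
    """Chi-square goodness-of-fit test of Hardy-Weinberg proportions at one
    biallelic locus, with allele frequencies obtained by gene counting."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)          # frequency of allele A
    q = 1.0 - p
    expected = (n * p * p, 2 * n * p * q, n * q * q)
    observed = (n_AA, n_Aa, n_aa)
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return stat, chi2.sf(stat, df=1)         # df = 3 genotypes - 1 - 1 estimated frequency

stat, pval = hardy_weinberg_test(12, 14, 4)  # illustrative counts for 30 cats
print(f"chi2 = {stat:.3f}, p = {pval:.3f}")
```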
Hypermutations Analyses
The env and vif sequences were analyzed with the program Hypermut 2.0. This program identifies G→A hypermutations using the default settings, where hypermutations are detected in a GRD motif (where R codes for G or A, and D for G, A or T), and the context requirements are enforced on query sequences. To select the reference sequence for the analyses of the env gene fragment, a phylogenetic reconstruction was performed using the parameters and references described in our previous work [21]. Briefly, we submitted our dataset (including 50 sequences from public databases and 27 sequences described in this study) to a maximum-likelihood phylogenetic reconstruction in PhyML. Our 27 sequences grouped in a monophyletic clade inside the subtype B group. As the reference sequence should represent ancestral characters for hypermutation analyses in Hypermut, a reference sample was searched among branches at the basal positions of the clade generated by the 27 sequences reported here (D37812). For the vif analyses, a phylogenetic tree was reconstructed from 46 sequences (including our 15 vif sequences generated in this study) by the same method described above. Sequence LC079040, which belongs to a sister group of the cluster of sequences described in this study, was used as the reference (Figure A1). In addition, several other attempts to analyze hypermutations in env and vif sequences using different reference sequences were performed. Hypermut is available from http://www.hiv.lanl.gov/content/sequence/HYPERMUT/hypermut.html [24].
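The motif-context counting idea can be illustrated in a few lines. This is a sketch of the principle only, not the actual Hypermut 2.0 implementation; the function name and toy sequences are invented, and real analyses require properly aligned sequences and Hypermut's statistical test.

```python
def count_ga_changes(ref, query):
    """Count reference G positions that read A in the query, split by whether
    the downstream context matches the GRD motif (R = G/A, D = G/A/T).
    Following the text, the context is evaluated on the query sequence.
    Sequences must be pre-aligned and of equal length."""
    R, D = set("GA"), set("GAT")
    in_motif = out_motif = 0
    for i in range(len(ref) - 2):
        if ref[i] == "G" and query[i] == "A":
            if query[i + 1] in R and query[i + 2] in D:
                in_motif += 1
            else:
                out_motif += 1
    return in_motif, out_motif

# Toy aligned sequences (not study data): two G->A changes, both in GRD context
ref   = "ATGGGACTTGGATAAGGCAT"
query = "ATGAGACTTGAATAAGGCAT"
print(count_ga_changes(ref, query))  # -> (2, 0)
```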
PCR Amplification and Sequencing
The A3Z3 exon 3 was successfully amplified and sequenced from all of the 30 samples examined in the present study; env partial gene sequences were obtained from 27 samples; complete vif sequences were obtained from 15. The Genbank accession numbers for partial env sequences are MF062041-MF062051, MF062053-MF062055 and MF062057-MF06206, and for the complete vif genes the numbers are KX668630-KX668644.
Identification of A3Z3 Haplotypes
In the present study, five of the seven previously described haplotypes were detected among the 30 samples examined [23]. The frequencies of each of the genotypes are shown in Table 1. The occurrence of such haplotypes was evidenced in both chromosomes of the sampled animals, totaling 60 haplotypes. Haplotype GGGG was the most frequently detected (30 times, see Table 2).
Detection of Hypermutations
Although variation in provirus sequences was found between animals, both groups of sequences (env and vif) showed low or undetectable levels of hypermutations (p > 0.05). This was also true for the sequences from the cat with A3Z3 haplotype V, preventing a correlation between the A3 haplotypes and hypermutations. The same results were obtained when other sequences were used as the reference.
Discussion
According to a previous study on naturally occurring A3Z3 haplotypes, the polymorphisms in exon 3 of the feline A3Z3 gene are as follows: A65S (A65I), R68Q, A94T and V96I [20]. The frequencies of A3Z3 haplotypes found here were not different from the ones described by Castro et al. (2014): p = 0.6257 (Table 2). The calculated linkage disequilibrium between polymorphisms indicated that all loci segregate together (values of D < 0.18). However, unlike the study of Castro et al., we did not detect haplotypes VI and VII in our cat population, most likely due to the low frequency of these variants [19]. Possibly such haplotypes would be represented in a larger number of samples. In the present study, only one sample belonged to A3Z3 haplotype V, whose frequency in the cat population has previously been described as low (3.7%) [19]. Haplotype V was previously correlated with resistance to FIV infection [19] and was resistant to vif-mediated degradation in vitro [20].
In order to find a correlation between the A3Z3 haplotypes, the proviral sequences and resistance to vif-mediated degradation, the sequences of env and vif were analyzed with the Hypermut 2.0 software, searching for G→A hypermutations. All vif sequences obtained from each sampled animal were identical, confirming their nucleotide sequence; thus, no correlation was found in this study between A3Z3 haplotypes and hypermutations in the target regions of the viral genome. However, these results might have been influenced by the relatively small number of samples/sequences analyzed. In humans, the effect of different SNPs on the function of A3 has been more broadly studied, and A3 variants have been associated with hypermutations in env, gag and pol and with the progression of the disease in HIV-positive patients. For instance, a previous study of antiretroviral-therapy-naive women (N = 28) found 16 individuals with hypermutated gag sequences. However, only 20 out of 373 gag sequences from these patients were hypermutated [25]. In another report, APOBEC3G- and APOBEC3F-associated hypermutations were detected in 12 and three out of 127 patients, respectively, and such hypermutations in HIV were strongly associated with a defective vif [26].
Although some of the previously mentioned studies relied on DNA amplification, cloning and single-clone sequencing, as performed here, such methodology may lead to inconclusive results because, in principle, such mutated viruses are at a disadvantage in comparison to the original viruses, and hypermutated sequences are usually found in small numbers [27].
Here, the sample used to search for hypermutations in FIV-positive cats was peripheral blood. Hypermutated proviral HIV genomes have previously been detected in PBMC from 39%, 40% and 8% of patients with undetectable viral load in the absence of antiretroviral therapy (elite controllers), treated controls and untreated controls, respectively (N = 46) [28]. However, using alternative tissues may also help with finding such mutations, as in HIV-positive patients they have been found more easily in HIV sanctuaries (cerebrospinal fluid and rectal tissue) than in PBMC [29].
The study of other A3Z3 regions or other feline A3 genes may help to establish a correlation with resistance or susceptibility to the virus in vivo and possibly with the occurrence of hypermutations in different FIV genes. Further, it is possible that the identification of the nucleotide sequences of the whole env gene, as performed in other studies with HIV or other genes, such as pol and gag, would clarify the relation between A3Z3 and the occurrence of hypermutations in the FIV genome [30,31]. Remarkably, hypermutations in two HIV-positive patients in env and vpu genes were not associated with mutations in vif [29]. In line with this, in the present study some polymorphisms were observed in the vif sequences, but none of them in regions important for interactions with A3Z3 and A3Z2 [18]. The sequences of vif showed that all of those are functional, and do not present deleterious mutations that could impair their function. This is evidenced by the lack of hypermutations detected in the env and vif genes.
This study cannot provide any solid conclusion, as the number of samples analyzed is small (and the genomic regions of the virus analyzed are also limited). Hence, the study failed to find any association between A3 variants and hypermutations in the FIV genome. On the other hand, we cannot rule out that there is an association, as the power of the study may be too small to reveal it. Further studies using a larger number of samples from FIV-infected cats will be necessary to better understand the association between A3 variants and clinical status, virus load and other parameters, as well as the role of other restriction factors in the progression of FIV infections in domestic cats.
Acknowledgments:
We would like to thank Naila Blatt Duda and Fernanda Vieira Amorim da Costa for the sample collection. Table A1. Primers used in this study to amplify the proviral genes (env and vif) and exon 3 of A3Z3 from FIV-positive cats.
"Biology",
"Medicine"
] |
Analytical interpretation of nondiffusive phonon transport in thermoreflectance thermal conductivity measurements
We derive an analytical solution to the Boltzmann transport equation (BTE) to relate nondiffusive thermal conductivity measurements by thermoreflectance techniques to the bulk thermal conductivity accumulation function, which quantifies cumulative contributions to thermal conductivity from different mean free path energy carriers (here, phonons). Our solution incorporates two experimentally defined length scales: thermal penetration depth and heating laser spot radius. We identify two thermal resistances based on the predicted spatial temperature and heat flux profiles. The first resistance is associated with the interaction between energy carriers and the surface of the solution domain. The second resistance accounts for transport of energy carriers through the solution domain and is affected by the experimentally defined length scales. Comparison of the BTE result with that from conventional heat diffusion theory enables a mapping of mean-free-path-specific contributions to the measured thermal conductivity based on the experimental length scales. In general, the measured thermal conductivity will be influenced by the smaller of the two length scales and the surface properties of the system. The result is used to compare nondiffusive thermal conductivity measurements of silicon with first-principles-based calculations of its thermal conductivity accumulation function.
I. INTRODUCTION
Nondiffusive thermal transport occurs when length or time scales of a system are on the order of the mean free paths (MFPs) or lifetimes of the energy carriers.As a result, a local equilibrium temperature cannot be defined and the thermal transport properties of the system can no longer be taken as the bulk values.When system boundaries are decreased below energy carrier MFPs, nondiffusive transport can be described with a reduced, effective thermal conductivity [1][2][3][4][5].Heat dissipation in light emitting diodes and transistors is adversely impacted by reductions in thermal conductivity, while thermoelectric energy conversion devices benefit.
Determination of the relationship between system dimensions and effective thermal conductivity has been a research focus for over 20 years and requires two fundamental pieces of information: (i) the intrinsic (i.e., bulk) MFP-dependent contributions of energy carriers to thermal conductivity k [6-8] and (ii) the relationship between system dimensions and the modified MFPs of the energy carriers [9,10]. In semiconducting materials, the former can be described by the thermal conductivity accumulation function for phonons [11], k_accum, which identifies cumulative contributions to thermal conductivity from phonons having a MFP less than or equal to the length scale Λ*. Under the isotropic assumption,

$$k_{\text{accum}}(\Lambda^*) = \int_0^{\Lambda^*} \frac{1}{3} C_\Lambda v \Lambda \, d\Lambda. \qquad (1)$$

Here, Λ is MFP, C_Λ is the volumetric heat capacity per unit MFP, and v is the group velocity. Thermal conductivity accumulation functions have been determined theoretically for bulk and nanostructured materials using analytical scattering relationships [10], molecular dynamics simulations with empirical potentials [7], and first-principles calculations [8,12,13], but require experimental validation.
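As a concrete illustration of Eq. (1), the sketch below accumulates (1/3)C_Λ v Λ over MFP bins. The log-normal MFP spectrum, grid and velocity are made-up placeholders, not first-principles data for a real material.

```python
import numpy as np

Lam = np.logspace(-9, -4, 400)                     # MFP grid, m
C_Lam = np.exp(-0.5 * np.log(Lam / 1e-7) ** 2)     # heat capacity per unit MFP (arbitrary units)
v = 6400.0                                         # silicon-like sound velocity, m/s

integrand = C_Lam * v * Lam / 3.0                  # (1/3) C_Lambda v Lambda
k_cum = np.concatenate(([0.0],
    np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(Lam))))  # trapezoidal accumulation
k_cum /= k_cum[-1]                                 # normalize by the bulk value

i50 = np.searchsorted(k_cum, 0.5)                  # MFP below which half of k accumulates
print(f"median MFP of the toy spectrum: {Lam[i50]:.2e} m")
```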
Recent attempts have been made to experimentally measure k_accum by inducing nondiffusive thermal transport through varying an experimentally controllable length scale L_c in a range comparable to phonon MFPs. Techniques include transient thermal grating (TTG), where L_c is the period of a pulsed optical grating that induces a spatially periodic temperature profile [14], and thermoreflectance techniques including time domain thermoreflectance (TDTR) and broadband frequency domain thermoreflectance (BB-FDTR), where the experimental length scales are the spot size of a heating laser and the thermal penetration depth of a temporally sinusoidal laser heat flux [6,15-18]. An effective thermal conductivity as a function of L_c is found by interpreting nondiffusive measurements with a solution to the heat diffusion equation.
Initially, the interpretation used to obtain k_accum was that energy carriers with Λ > L_c do not contribute to the experimentally measured thermal conductivity k_exp, while energy carriers with Λ ≤ L_c fully contribute, as they would in a purely diffusive regime [6,15,16,18]. Mathematically, this assumption takes the form

$$k_{\text{exp}}(L_c) = \int_0^{L_c} \frac{1}{3} C_\Lambda v \Lambda \, d\Lambda = k_{\text{accum}}(L_c). \qquad (2)$$

This mapping between L_c and MFP contributions to the effective thermal conductivity leads to accumulation functions that are consistent with first-principles predictions in silicon and gallium arsenide [15,16,18], but lacks rigorous justification. More generally,

$$k_{\text{exp}}(L_c) = \int_0^{\infty} S(\Lambda, L_c)\, \frac{1}{3} C_\Lambda v \Lambda \, d\Lambda, \qquad (3)$$

where S(Λ, L_c) is known as the suppression function. In the simple interpretation of Eq. (2), S(Λ, L_c) is a step function from 1 to 0 at Λ = L_c. But discrepancies between BB-FDTR [16] and TDTR [6] results using Eq. (2) demand a deeper understanding of the suppression function.
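Equations (2) and (3) can be illustrated on the same toy spectrum (reusing Lam and integrand from the sketch above). The smooth form 1/(1 + Λ/L_c) is purely illustrative and is not the suppression function derived in this paper.

```python
import numpy as np

def k_exp(L_c, S, Lam, integrand):
    """Suppression-weighted conductivity of Eq. (3) on the toy spectrum above."""
    return np.trapz(S(Lam, L_c) * integrand, Lam)

step = lambda lam, L_c: (lam <= L_c).astype(float)   # Eq. (2): hard cutoff at Lam = L_c
soft = lambda lam, L_c: 1.0 / (1.0 + lam / L_c)      # an illustrative smooth suppression

k_bulk = np.trapz(integrand, Lam)                    # Lam, integrand from the sketch above
for L_c in (1e-8, 1e-7, 1e-6):
    print(f"L_c = {L_c:.0e} m: step {k_exp(L_c, step, Lam, integrand)/k_bulk:.2f}, "
          f"soft {k_exp(L_c, soft, Lam, integrand)/k_bulk:.2f}")
```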
Comparison of analytical [19] and numerical solutions [20-22] of the Boltzmann transport equation (BTE) with the heat diffusion equation for TTG leads to the functional dependence of the suppression function on L_c and MFP and reconciles nondiffusive TTG measurements and k_accum. Although the form of the suppression function has been identified for TTG, a new analysis is required for BB-FDTR and TDTR since the experimental setups are physically different, i.e., L_c is different. Ding et al. predicted suppression due to spot size in TDTR using a Monte Carlo-based numerical solution to the BTE [23], but neither suppression due to thermal penetration depth nor analytical analyses for these experiments have been demonstrated in the literature. Three important questions remain unresolved: (1) What is the form of thermal-penetration-depth-based suppression? (2) What is its interplay with spot-size-based suppression? (3) Under what circumstances can BB-FDTR and TDTR measurements be interpreted with the conventional heat diffusion equation?
In this work, we derive an analytical suppression function for thermoreflectance techniques by solving the BTE. In thermoreflectance techniques, there are two experimental length scales: (1) the thermal penetration depth L_p = √(2k/(Cω₀)), which characterizes the exponential decay length of the temperature amplitude into a solid with thermal conductivity k and volumetric heat capacity C due to sinusoidal laser heating with angular frequency ω₀ at the surface, and (2) the e⁻² radius of the Gaussian laser spot, r_o. The presence of r_o in thermoreflectance experiments necessitates a comparison of length scales rather than of the time scales 1/ω₀ and phonon lifetimes. In Secs. II and III, we account for both experimental length scales in our expression for the suppression function. The results are used in Sec. IV to interpret nondiffusive measurements of phonon transport in silicon by BB-FDTR and TDTR, although our solution does not account for the multiple time scales in TDTR that arise from using a pulsed laser.
II. SUPPRESSION FUNCTION IN A PLANAR GEOMETRY
As shown in Fig. 1(a), we first consider a planar medium with a temporally oscillating surface temperature with angular frequency ω₀ and amplitude T_s = 1 K, such that T(x = 0, t) = T_s e^{iω₀t}. Because we are solving for deviations from the mean temperature, for convenience we define the temperature T(x → ∞, t) = T_∞ = 0 K. The one-dimensional (1D) nature of this problem will yield an analytical solution that provides insight into the functional dependence of the suppression function on the thermal penetration depth.
We begin with the gray, 1D BTE for phonons in Cartesian coordinates under the relaxation time approximation in an isotropic medium [24,25],

$$\frac{\partial n}{\partial t} + \mu v \frac{\partial n}{\partial x} = -\frac{n - n_e}{\tau}, \qquad (4)$$

where the nonequilibrium distribution function n(x, t, μ) is the phonon energy density per unit phonon frequency per unit solid angle and equals ℏωD(ω)g(x, t, μ)/4π. Here, ℏ is the reduced Planck constant, ω is the phonon frequency, D(ω) is the phonon density of states, g(x, t, μ) is the occupation function, n_e(x, t) is the equilibrium distribution function (specified for phonons when g is the Bose-Einstein distribution g_BE), τ is the gray lifetime Λ/v, v is the frequency-independent phonon group velocity (i.e., the sound velocity), and μ is the directional cosine (μ = cos θ) that accounts for the velocity of phonons traveling at an angle θ from the x direction [see Fig. 1(a)]. For small temperature variations, n_e(x, t) ≈ ℏωD(ω) (dg_BE/dT)|_{x,t} T(x, t)/4π = C_ω T(x, t)/4π, where C_ω is the volumetric heat capacity per unit frequency and T(x, t) is the departure from T_∞ = 0 [19,20,27]. Thus, we solve for the deviations from the equilibrium distribution function, which are related to deviations of temperature from T_∞.
Since the oscillating surface temperature determines the temporal behavior of the solution, we separate variables such that n(x, t, μ) = ñ(x, μ)e^{iω₀t}, where ñ is the component of n that is only a function of x and μ. Substituting into Eq. (4) yields

$$i\omega_0 \tilde n + \mu v \frac{\partial \tilde n}{\partial x} = -\frac{\tilde n - \tilde n_e}{\tau}. \qquad (5)$$

The difficulty in solving Eq. (5) arises from the fact that we must account for phonons traveling over all directions μ. For TTG, Collins et al. demonstrated a Volterra integral solution to a BTE of similar form [19], but the dependence on μ in our formulation leads to a divergent integral. Henceforth, we follow a two-flux procedure similar to that of the Milne-Eddington approximation for radiative heat transfer [28]. This method involves taking the zeroth and first moments of Eq. (5), i.e., Eq. (5) is integrated over all directions after multiplication with μ⁰ = 1 (zeroth moment) and μ¹ = μ (first moment). The distribution moments are defined as

$$\tilde n_m(x) = 2\pi \int_{-1}^{1} \tilde n(x, \mu)\, \mu^m \, d\mu. \qquad (6)$$

Furthermore, the distribution function is assumed to be isotropic over the upper and lower hemispheres, such that ñ₊(x) ≡ ñ(x, 0 < μ ≤ 1) and ñ₋(x) ≡ ñ(x, −1 ≤ μ ≤ 0) [see Fig. 1(a)] [28]. From Eq. (6), the zeroth and first moments are ñ₀ = 2π(ñ₊ + ñ₋) = 3ñ₂ and ñ₁ = π(ñ₊ − ñ₋), which can be physically related to temperature and heat flux [28]. Applying the two-flux method to Eq. (5) yields a coupled set of equations:

$$i\omega_0 \tilde n_0 + v \frac{d\tilde n_1}{dx} = -\frac{\tilde n_0 - 4\pi \tilde n_e}{\tau}, \qquad (7a)$$

$$i\omega_0 \tilde n_1 + \frac{v}{3} \frac{d\tilde n_0}{dx} = -\frac{\tilde n_1}{\tau}. \qquad (7b)$$

In formulating Eqs. (7a) and (7b), we use conservation of energy for a gray medium to determine the equilibrium distribution ñ_e in terms of ñ₀ as (Ref. [29])

$$\tilde n_e = \frac{\tilde n_0}{4\pi}. \qquad (8)$$

This coupled set of ordinary, linear, homogeneous differential equations is an eigenvalue problem and has a solution of the form [ñ₀, ñ₁]ᵀ = c₁v₁e^{−λx} + c₂v₂e^{λx}, where c₁ and c₂ are constants to be determined by the boundary conditions, ±λ are the eigenvalues, and v₁ and v₂ are the eigenvectors. Since the spatial domain is semi-infinite, c₂ = 0 because ñ₀ and ñ₁ cannot increase unbounded. The boundary condition at x = 0 is depicted schematically in Fig. 1(a) and is [30,31]

$$\tilde n_+(0) = \varepsilon \frac{C_\omega T_s}{4\pi} + \rho\, \tilde n_-(0), \qquad (9)$$

where ε and ρ are the phonon emissivity and reflectivity, both of which will be discussed in further detail in Sec. IV. Physically, Eq. (9) states that the total energy carried by phonons traveling in the positive x direction at the surface is equal to the sum of the energy carried by phonons emitted due to the induced surface temperature T_s and the energy carried by phonons traveling in the negative x direction that are reflected from the surface. By solving the system of equations and integrating over all phonon frequencies, the spatial temperature and heat flux profiles are found; they decay as e^{−ηx/L_p}, where η = √(2i − 2ω₀τ), β = Λ/L_p, and k_bulk = (1/3)Cv²τ. Since we use the gray approximation, ñ₀ and ñ₁ are independent of ω, and the integral over ω only changes C_ω to the total volumetric heat capacity, i.e., ∫₀^∞ C_ω dω = C. To generate the figures in this section and Sec. III, we use properties of bulk silicon [32,33] and determine L_p using k_bulk.
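A quick numerical check of the moment system as reconstructed above: the spatial eigenvalue λ = √((3iω₀/v²)(iω₀ + 1/τ)) should satisfy λL_p = √(2i − 2ω₀τ) = η, recovering the classic diffusive decay (1 + i)/L_p when ω₀τ ≪ 1. The group velocity and lifetime below are illustrative gray parameters, not fitted silicon values.

```python
import numpy as np

# Eigenvalue of the two-flux moment system (Eqs. (7a)-(7b) with Eq. (8)):
#   d(n1)/dx = -(i*w0/v) n0,   (v/3) d(n0)/dx = -(i*w0 + 1/tau) n1
v, tau = 6400.0, 1e-9                 # illustrative gray group velocity (m/s) and lifetime (s)
kappa = v**2 * tau / 3.0              # diffusivity implied by k_bulk = (1/3) C v^2 tau

for w0_tau in (1e-3, 1e-1, 1e1):
    w0 = w0_tau / tau                 # heating angular frequency, rad/s
    lam = np.sqrt((3j * w0 / v**2) * (1j * w0 + 1.0 / tau))  # spatial decay eigenvalue
    L_p = np.sqrt(2.0 * kappa / w0)   # diffusive penetration depth
    eta = np.sqrt(2j - 2.0 * w0_tau)  # closed form used in the text
    print(f"w0*tau = {w0_tau:.0e}: lambda*L_p = {lam * L_p:.3f}, eta = {eta:.3f}")
# In the diffusive limit (w0*tau << 1) both approach 1 + 1j, i.e., decay (1+i)/L_p.
```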
The magnitudes of the spatial temperature profiles from the diffusion and BTE solutions for ε = 1−ρ = 1 and Λ/L_p = 1 are shown in Fig. 2(a). The spatial temperature profile from the diffusion solution is a continuous exponential decay for which the diffusive thermal resistance R_diff,x can be defined. The real part of the exponential in Eqs. (10a) and (10b) represents the BTE prediction of the penetration depth, L_p-BTE, which can be written as L_p-BTE = L_p/Re(η). When Ωτ ≪ 1, L_p-BTE = L_p and T̃_BTE(x) collapses to T̃_diff(x), but when Ωτ ≫ 1, L_p-BTE → ∞, which indicates purely ballistic transport. Thus, as Λ/L_p increases, the temperature decay rate predicted by the BTE decreases.
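For reference, the diffusion benchmark that the BTE solution is compared against is the classic thermal-wave solution. A minimal sketch, assuming the penetration-depth convention L_p = √(2k/(CΩ)) (inferred from the resistance quoted later in the text, not stated explicitly here):

```latex
% Fourier (diffusion) benchmark for an oscillating surface temperature.
% The convention L_p = sqrt(2k/(C*Omega)) is an inference.
\begin{equation}
  \tilde T_{\mathrm{diff}}(x) = \Delta T_s\, e^{-(1+i)x/L_p},
  \qquad
  L_p = \sqrt{\frac{2k}{C\Omega}},
  \qquad
  R_{\mathrm{diff},x} = \frac{\Delta T_s}{\tilde q''(0)}
  = \frac{1}{\sqrt{i\Omega C k}}.
\end{equation}
```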
The spatial temperature profile from the BTE solution indicates two distinct regions: a surface temperature jump of ΔT_ε and a spatial temperature decay spanning ΔT_i. When ε = 1−ρ, the total thermal resistance from the BTE solution, R_BTE,x, comprises two parts: a surface resistance R_ε and an intrinsic resistance R_i,x [Eqs. (12a), (12b), and (12c)]. The thermal resistances in Eqs. (12a), (12b), and (12c) are complex. Complex thermal resistances are analogous to impedance in alternating current circuit analysis. In the plots throughout this paper, we plot the magnitude of such complex thermal resistances.
The magnitudes of the terms R_ε, R_i,x, and R_BTE,x are plotted in Fig. 2(b) as a function of Λ/L_p and Ωτ with ε = 1−ρ = 1 and are compared to the magnitude of R_diff,x. The term R_ε is a resistance that arises from the interaction between the surface and ballistic phonons originating within one MFP of the surface and is associated with the surface temperature jump in BTE [31,34-36] and radiative transfer [37] problems. The term R_ε is independent of any experimentally controllable length scale but is always present. The term R_i,x is intrinsic to the material and accounts for transport of phonons associated with two length scales: L_p and Λ. It should be noted that R_i,x says nothing about the surface properties (i.e., R_i,x is not a function of ε). Thus, when Λ ≪ L_p, R_i,x approaches R_diff,x (with k → k_bulk) and the BTE thermal resistance converges to the diffusive thermal resistance because R_i,x dominates R_ε. However, as the phonon MFP approaches L_p, the second term in R_i,x and the R_ε term become non-negligible and the BTE thermal resistance becomes larger than the diffusive thermal resistance. In the ballistic limit (Λ ≫ L_p), R_i,x asymptotes to Λ/(√3 k_bulk) and the total resistance becomes independent of Λ. It should be noted that the total thermal resistance is independent of whether a temporally oscillating surface temperature or heat flux is imposed, the latter of which is more consistent with the experiments.
As in the analysis of the experimental measurements, we can now determine the effective thermal conductivity k_eff that equates the complex diffusive thermal resistance (R_diff,x = 1/√(iΩCk_eff)) to the complex thermal resistance determined by the BTE, R_BTE,x [21,31]. Since, by definition, ΔT_s is identical in both systems, this procedure is equivalent to equating surface heat fluxes from the diffusion and BTE solutions. Furthermore, the similar functional forms of the BTE and diffusion solutions suggest that interpreting nondiffusive transport with an effective, suppressed k is reasonable. We define the suppression function for this planar geometry, S_x(Λ,L_p,ε,ρ), as the fractional contribution to thermal conductivity made by a phonon with an MFP of Λ in a thermoreflectance experiment with given L_p, ε, and ρ; it is given by Eq. (13). It should be noted that S_x(Λ,L_p,ε,ρ) is complex. Thus the phase angle of the suppression function influences the observed phase angle in thermoreflectance experiments, ultimately influencing the value of thermal conductivity obtained. In plots of the suppression function throughout the paper, we plot its magnitude.
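Equation (13) is not reproduced in this excerpt. Reading the gray suppression function as the ratio of effective to bulk conductivity (our interpretation of the matching procedure described above) gives the sketch:

```latex
% Sketch of the k_eff matching condition; identifying S_x with
% k_eff/k_bulk is our reading of the text, not a verbatim equation.
\begin{equation}
  R_{\mathrm{BTE},x} = \frac{1}{\sqrt{i\Omega C k_{\mathrm{eff}}}}
  \;\Longrightarrow\;
  k_{\mathrm{eff}} = \frac{1}{i\Omega C R_{\mathrm{BTE},x}^{2}},
  \qquad
  S_x(\Lambda,L_p,\varepsilon,\rho)
  = \frac{k_{\mathrm{eff}}}{k_{\mathrm{bulk}}}.
\end{equation}
```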
In Figs. 3(a) and 3(b), we plot the magnitude of the thermal resistance of the system from the BTE and diffusion solutions and the magnitude of S_x(Λ,L_p,ε,ρ) as a function of Λ/L_p and Ωτ for ε = 1−ρ = 1, 0.5, and 0.1. The suppression function [Fig. 3(b)] accounts for the increase in thermal resistance compared to the diffusion solution [Fig. 3(a)] and reduces the effective thermal conductivity of the material. The suppression function is different than that previously assumed [i.e., a step function; see Eq. (2)] [6,16,18] in that phonons with Λ/L_p < 1 contribute less and phonons with Λ/L_p > 1 contribute more near Λ/L_p = 1.
The effect of changing ε is highlighted in Figs. 3(a) and 3(b). In our BTE solution, the resistance associated with the surface temperature jump, R_ε = (4 − 2ε)/(εCv) (for ρ = 1−ε), is independent of any experimentally controllable length scale, i.e., L_p. Consequently, this resistance is always present and of the same magnitude but only becomes non-negligible when R_i,x is sufficiently small, which happens when the penetration depth is on the order of or smaller than the MFP. Decreasing ε increases R_ε, increasing the surface temperature jump and hastening the onset of suppression. This fact can be qualitatively understood with an analogy to radiative transfer, i.e., the energy transfer rate from an isothermal gray surface will be less than that from an isothermal black surface at a given surface temperature. Reducing the phonon emissivity reduces the number of phonons emitted from the surface and hence reduces the energy transfer away from the surface, increasing the thermal resistance and reducing the effective thermal conductivity of the material in the nondiffusive regime. Furthermore, it is reasonable that emissivity is related to the interface resistance between the transducer and substrate in a thermoreflectance experiment, considering that emissivity affects the size of the surface temperature jump [38]. The effect of changing ε and ρ will be revisited in Sec. IV.
To verify the behavior of our suppression function, we compare it to a solution to the gray BTE for two infinite, parallel, black (ε = 1), isothermal plates. This scenario is similar to our problem except that we consider an oscillating surface temperature that defines our length scale L_p. The solution to this problem is obtained using the P₁ approximation and is plotted against the ratio of Λ and the plate separation distance in Fig. 3(b). A similar trend instills confidence in our solution and suggests that although L_p is not a physical boundary, it similarly suppresses contributions of phonons to thermal transport.
III. SUPPRESSION FUNCTION IN A SPHERICAL GEOMETRY
In BB-FDTR and TDTR experiments, there are two relevant length scales: the thermal penetration depth and the spot size of the heating laser. Thus, in order to obtain an accurate suppression function for relating thermoreflectance measurements to k_accum, both length scales should be incorporated. The most accurate solution would involve solving the spectral BTE in cylindrical coordinates under conditions of radially symmetric Gaussian surface heating. While other studies have reached numerical solutions to similar problems [23,39], our goal is to reach an analytical solution for a simpler problem.
As depicted in Fig. 1(b), we consider a sphere with radius r_o embedded in an infinite medium with temperature T(r → ∞, t) = T_∞ = 0 K and a temporally oscillating surface temperature at the sphere-medium interface. Solving the BTE within the medium will provide a solution that is dependent on L_p, due to the periodic nature of the surface temperature, as well as on the effect of spot size, which can be captured by varying the radius of the embedded sphere. We note that Chen solved a similar problem for a sphere with steady-state heating [31]. While this geometry is not an exact representation of a thermoreflectance experiment, the spherical symmetry (1D in the radial direction) of the problem allows us to derive an analytical solution for the suppression function that is dependent on L_p and r_o.
We begin with the 1D, gray BTE under the relaxation time approximation in spherical coordinates in the radial direction r (Ref. [24]), given as Eq. (14). The μ-dependence in Eq. (14) can be eliminated using the method of spherical harmonics (P_N approximation), which is a generalization of the Milne-Eddington approximation and has been thoroughly studied in spherically symmetrical geometries in radiative transfer [28,40-42]. The method involves reducing the governing equation into a set of N simpler partial differential equations by taking advantage of the orthogonality of spherical harmonics. Applying the P_N approximation to Eq. (14) yields a coupled set of equations indexed by l = 0,1,2,...,N, where δ_{0l} is the Kronecker delta. In the limit where N → ∞, the exact solution is obtained. Here we use the P₁ approximation, which is accurate for scattering media at large optical thicknesses, with decreasing accuracy as the optical thickness is decreased [28]. For our problem, large optical thicknesses correspond to Λ ≪ L_p. Using the P₁ approximation and separating variables in a similar fashion as for Eq. (5), Eq. (14) reduces to a coupled set of moment equations. By employing a boundary condition analogous to that used for the planar solution, i.e., ñ₊(r = r_o) = εC_ωΔT_s/4π + ρñ₋(r = r_o), we obtain closed-form solutions for the spatial temperature and heat flux profiles for r ≥ r_o, in which the additional nondimensional parameter Λ/r_o appears. The suppression function is found by determining the k_eff of the infinite medium that equates the complex thermal resistance from the diffusion solution [43] to the complex thermal resistance defined by the BTE, which is equivalent to equating surface heat fluxes [21,31]; the result is Eq. (18). In Fig. 4(a), the magnitude of the complex thermal resistance is plotted as a function of Λ/L_p and Ωτ at different values of Λ/r_o with ε = 1−ρ = 1 for both the diffusion and BTE solutions. The thermal resistance from the diffusion equation (solid lines) highlights the interplay between r_o and L_p. When L_p ≪ r_o, the solution converges to the planar solution [Fig. 3(a)], and when L_p ≫ r_o, the diffusive thermal resistance becomes independent of Λ/L_p.
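The P_N expansion that eliminates the μ-dependence has the standard form below (the expansion was dropped in extraction; P_l denotes the Legendre polynomial of degree l, and the P₁ approximation truncates the sum at l = 1, retaining ñ₀ and ñ₁):

```latex
% Standard spherical-harmonics (P_N) expansion; a sketch, not a quote.
\begin{equation}
  \tilde n(r,\mu) \approx \sum_{l=0}^{N} \frac{2l+1}{4\pi}\,
  \tilde n_l(r)\,P_l(\mu),
  \qquad
  \tilde n_l(r) = 2\pi \int_{-1}^{1} \tilde n(r,\mu)\,P_l(\mu)\,d\mu.
\end{equation}
```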
Similar to the planar solution, the total thermal resistance from the BTE is the sum of a surface component, R_ε = (4 − 2ε)/(εCv) (for ε = 1−ρ), which is the same as for the planar solution, and an intrinsic component, R_i,r. As in the planar solution, R_i,r includes no effect from the surface properties (R_i,r is not a function of ε when ε = 1−ρ). For a given value of ε, R_i,r converges to the diffusion solution when Λ/L_p ≪ 1 and asymptotes to Λ/(√3 k_bulk) when Λ/L_p ≫ 1. But since the diffusive resistance decreases with increasing Λ/r_o when Λ/L_p ≪ 1, R_ε becomes non-negligible, and even dominates, when r_o is commensurate with or smaller than the MFP. Because the total thermal resistance is the sum of R_ε and R_i,r, the BTE and diffusion solutions do not converge when Λ/L_p ≪ 1 at larger values of Λ/r_o. When Λ/r_o = 0, the BTE solution converges to the planar solution from Eq. (10), as shown in Fig. 3(a).
The magnitude of the suppression function S_r(Λ,L_p,r_o,ε,ρ) is plotted in Fig. 4(b) for ε = 1−ρ = 1. In the limit when r_o → ∞, the solution converges to the planar solution given in Eq. (13) and shown in Fig. 3(b). Changes in the suppression function with Λ/r_o over all Λ/L_p illustrate the interactions between the two length scales. In general, the smaller of L_p or r_o dominates suppression. For example, when Λ/L_p ≪ 1, suppression is solely due to decreasing particle radius, which is consistent with the TDTR experimental measurements by Minnich et al. of k vs r_o that were independent of heating frequency [15] and with Chen's result in the case of steady-state heating [31]. According to the BTE solution, if either L_p or r_o is much smaller than the phonon MFP, that phonon will not contribute to k_exp. Under these circumstances, BB-FDTR and TDTR are inadequate for measuring the bulk thermal conductivity of a material.
The magnitude of the thermal resistance from the BTE and diffusion solutions and the magnitude of S_r(Λ,L_p,r_o,ε,ρ) are plotted in Figs. 5(a) and 5(b) as a function of Λ/r_o for different values of Λ/L_p with ε = 1−ρ = 1. As heating frequency increases (Λ/L_p increases), additional suppression occurs from L_p, even at very low Λ/r_o. In Fig. 5(c), we compare our analytical solution for S_r in the low-frequency limit (Λ/L_p = 0) with Chen's exact solution from Ref. [31] for a sphere with steady-state heating and the suppression function from Ref. [23] found numerically by solving the spectral BTE for a Gaussian-shaped laser spot. Due to our use of the P₁ approximation, we find that Eq. (18) and the exact solution for a sphere with steady-state heating from Ref. [31] differ by a factor of 2 on the horizontal axis. We assert that this factor is not significant considering that the range of MFPs spans four orders of magnitude in typical crystalline semiconductors [7,8,12]. We also find that using a value of 3r_o in Eq. (18) yields a suppression function that compares well with the suppression function from Ref. [23]. We expect that there should be a correction factor to the spot size in our suppression function as a result of the geometry we have
chosen to generate an analytical solution, i.e., we approximate our spot as a finite sphere in an infinite medium while the actual experimental geometry is a Gaussian spot incident on a semi-infinite medium.
IV. RELATING EXPERIMENTS AND k_accum USING THE SUPPRESSION FUNCTION
The suppression function can be used to relate experimental measurements to k_accum by mapping length scales to phonon MFPs. For example, k_accum can be obtained using Eq. (3) with thermoreflectance thermal conductivity measurements and the suppression function from Eq. (18) as inputs to the solution of an inverse problem, which was done by Minnich for TTG using convex optimization [21]. Alternatively, as we will do here, the experimental measurement can be predicted given k_accum as an input, which can be obtained from models (e.g., Callaway, Born-von Karman-Slack, first-principles, etc.) [8,10,18]. This approach is less mathematically complex and allows for a direct comparison to the measurements.
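Equation (3) is not reproduced in this excerpt; based on the description here and the use made of it below, it is plausibly the suppression-weighted integral of the MFP distribution (our reconstruction):

```latex
% Plausible form of Eq. (3): k_exp as the suppression-weighted integral
% of the MFP distribution dk_accum/dLambda. A reconstruction, not a quote.
\begin{equation}
  k_{\mathrm{exp}} = \int_{0}^{\infty}
  S\!\left(\Lambda; L_p, r_o, \varepsilon, \rho\right)
  \frac{d k_{\mathrm{accum}}}{d\Lambda}\, d\Lambda.
\end{equation}
```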
We compare experimental measurements on silicon made by TDTR and BB-FDTR with predicted k_exp in Figs. 6(a) and 6(b). The solid lines are the predicted accumulation functions from first-principles calculations for silicon, plotted as a function of MFP at temperatures of 80 and 300 K [44]. Using Eq. (3) with the suppression function from Eq. (18), we transform these data into a predicted k_exp as would be measured by BB-FDTR or TDTR, shown as the dashed lines in Figs. 6(a) and 6(b). To make this transformation, we use a spot size of 3r_o, which is found by comparing Eq. (18) to the suppression function for a Gaussian-shaped spot from Ref. [23] [see Fig. 5(c)], and a temperature-independent value of ε = 1−ρ in Eq. (18) that yields the best fit between experimental data and Eq. (3), i.e., ε is used as a fitting parameter.
In Fig. 6(a), we transform the k_accum vs Λ data from the first-principles calculations into predicted k_exp vs 3r_o using Eqs. (3) and (18). We find that a value of ε = 1−ρ = 0.88 fits the TDTR measurements from Ref. [23] at temperatures of 80 and 300 K well. Here, we used a heating frequency of 10⁶ Hz to determine Λ/L_p. We note that the interface between the transducer and substrate for the TDTR data presented is Al/Si. It is reasonable that the value of ε obtained by fitting is related to the properties of this interface. We also plot predicted k_exp vs 3r_o for a heating frequency of 10⁷ Hz with ε = 1−ρ = 0.88 to show how increased TDTR heating frequency is expected to further suppress k_exp.
In Fig. 6(b), we transform the k_accum vs Λ data from the first-principles calculations into predicted k_exp vs L_p using Eqs. (3) and (18). Here, L_p is determined using predicted k_exp instead of k_bulk to be consistent with our previous presentation of the experimental measurements [16]. We find that a value of ε = 1−ρ = 0.6 best describes the BB-FDTR measurements from Ref. [16] at temperatures of 80 and 300 K. In the BB-FDTR results presented, the interface between the transducer and substrate is Cr/Si rather than Al/Si, and it is reasonable that there is a difference in the fitted value of ε for BB-FDTR compared to TDTR.
For silicon at a temperature of 300 K, the predicted k_exp vs L_p for TDTR in Fig. 6(b) shows L_p-dependence over the measurement range, although the experimental measurements show no L_p-dependence. The TDTR spot size used is the average of the range given in Ref. [6] (3r_o = 32.25 μm). For BB-FDTR (3r_o = 10.2 μm), the prediction compares well to experimental results at smaller L_p. The experimental measurements should plateau at larger L_p due to the effect of spot size, but this effect is not observed. More suppression is observed in BB-FDTR relative to TDTR for the available range of TDTR data because a smaller spot size was used and the surface emissivity is lower. At T = 80 K, Eqs. (3) and (18) compare well with BB-FDTR experimental results over all L_p. At this temperature, phonons have longer MFPs and are significantly suppressed by the finite spot size, i.e., even for very large L_p, k_exp will only attain approximately 30% of k_bulk due to the spot size restriction. In Figs. 6(a) and 6(b), we compare only against the overarching modulation frequency and have neglected the multiple time scales in TDTR that arise from using a pulsed laser.
To generate the predicted k_exp vs L_p in Figs. 6(a) and 6(b), we used a suppression function derived from the gray BTE [Eq. (18)] and applied it to the full phonon spectrum, where the MFP is frequency-dependent. A similar approach was used by Collins et al. for TTG, in which a gray suppression function was applied to the full phonon spectrum to obtain predictions of thermal diffusivity as a function of grating period [19]. The results were compared to predictions of thermal diffusivity as a function of grating period calculated from the spectral BTE using phonon properties from first-principles calculations. The authors found favorable comparison in that the predicted effective thermal diffusivity varied by less than 7% over grating periods from 10⁻¹ to 10⁶ nm compared to the full spectral models of Si and PbTe at a temperature of 300 K.
The parameters ε and ρ in Eq. (18) arise from the analogy with radiative transfer and describe the ability of the surface to emit and reflect phonons. In our comparisons with experimental results, we use ε as a fitting parameter but propose that ε is related to the properties of the transducer/substrate interface in BB-FDTR and TDTR experiments. One interpretation is that the phonon emissivity is equal to the transmission coefficient of phonons from the transducer into the substrate [30].
Phonon transmission coefficients are used in the Landauer formulation to make predictions of interface thermal resistance. Following Ref. [33], the total interface resistance R_total = 2R_T + R_L, where R_T and R_L are the contributions from transverse and longitudinal acoustic phonons, can be derived in a similar manner as thermal conductivity. Beginning with Eqs. (2.10) and (2.11) in Ref. [46] and using a truncated Debye dispersion and Debye density of states, one obtains Eq. (21), where k_B is the Boltzmann constant, v_T and v_L are the transverse and longitudinal speeds of sound, θ_T and θ_L are the temperatures associated with the transverse and longitudinal Brillouin zone edge frequencies, y = ℏω/k_BT, and α = ε is the transmission coefficient. Using Eq. (21) with values of v_T, v_L, θ_T, and θ_L from Ref. [33] and our best-fit values for α = ε, we find that R_total = 3.85 m² K GW⁻¹ for a Cr/Si interface and R_total = 2.63 m² K GW⁻¹ for an Al/Si interface at T = 300 K. These values compare well with measured values reported in Refs. [15,16] at T = 300 K (R_total = 4.76 m² K GW⁻¹ for the Cr/Si interface and R_total = 2.78 m² K GW⁻¹ for the Al/Si interface). Furthermore, because ε influences the onset of suppression, we hypothesize that the interface properties contribute to the discrepancy between room-temperature BB-FDTR and TDTR heating frequency-dependence for silicon [see Fig. 6(b)], though a spectral phonon model that includes the transducer may be required to reconcile this unresolved question.
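The display equation referred to as Eq. (21) is missing from this copy. A hedged reconstruction of the per-branch truncated-Debye Landauer resistance, consistent with the variables defined above (this is the standard radiation-limit form, not verbatim from Ref. [46]; the transverse and longitudinal branch contributions are then combined as stated in the text):

```latex
% Hedged reconstruction: per-branch (j = T or L) interface resistance
% from a truncated Debye model with transmission coefficient alpha.
\begin{equation}
  R_j^{-1} = \frac{\alpha\, k_B^{4} T^{3}}{8\pi^{2}\hbar^{3} v_j^{2}}
  \int_{0}^{\theta_j/T} \frac{y^{4} e^{y}}{\left(e^{y}-1\right)^{2}}\,dy.
\end{equation}
```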
It is important to note that previous nondiffusive measurements have been solely attributed to reduced thermal conductivity, i.e., the interface resistance between the substrate and the transducer is assumed constant in diffusive interpretations of the experiments [6,15,16,18]. In our formulation, we are comparing a thermal resistance from the BTE that includes a surface temperature drop in the BTE domain to a diffusion solution that does not account for an interface thermal resistance (no surface temperature drop). As a result, we are including the effect of R_ε [Eq. (12a)] in our definition of k_eff. To generate a suppression function that does not include the surface temperature drop, one can equate the appropriate complex diffusive thermal resistance to R_i,x or R_i,r, resulting in S_i = 1/(1 + iΩτ). This result is equivalent for both the planar and spherical geometries and is independent of the particle radius. In Ref. [16], a suppression function was determined from a numerical solution to the 1D, gray BTE for phonons traveling in the positive and negative x directions (μ = 1 or −1). The result is related to S_i, the difference being a factor of π/2 on the x axis, which stems from considering −1 ≤ μ ≤ 1 when determining S_i [47].
To include a heating frequency-dependent interface resistance between the transducer and the substrate, a BTE formulation that explicitly includes an interface could be considered and compared to a diffusion solution including an interface. How the transducer affects nondiffusive transport, which is important in interpreting the experiments, has not been explicitly addressed, though it may contribute to the discrepancy between heating frequency-dependent measurements of silicon by BB-FDTR and TDTR.
V. CONCLUSIONS
An analytical suppression function for a system geometrically similar to a thermoreflectance experiment was obtained by solving the BTE for a gray medium. The result accounts for the two dominant length scales in thermoreflectance experiments: the thermal penetration depth and the heating laser spot radius. We used the suppression function to predict k_exp vs L_p and k_exp vs 3r_o to make a direct comparison to experimental measurements by both BB-FDTR and TDTR. Our results corroborate the use of BB-FDTR and TDTR as tools for identifying k_accum by generating nondiffusive transport and provide insight into and understanding of the measurements. Furthermore, our results suggest that if either L_p or r_o is much smaller than the phonon MFPs that dominate k, BB-FDTR and TDTR are inadequate for measuring the bulk thermal conductivity of a material. The phonon surface properties ε and ρ affect suppression and may explain discrepancies between TDTR and BB-FDTR measurements of similar samples with different transducers. It is clear that powerful new insight is offered by nondiffusive thermal transport measurements paired with the experiment-specific suppression function to map data into real energy carrier properties.
FIG. 1 .
FIG. 1. (Color online) Schematic diagrams for (a) the 1D planar system (Sec. II) and (b) the spherically symmetrical system (Sec. III), both with oscillating surface temperatures. Here, μ is the directional cosine, μ = cos θ. The parameters ε and ρ are the phonon emissivity and reflectivity and are discussed further in Sec. IV.
FIG. 2 .
FIG. 2. (Color online) 1D planar geometry with temporally oscillating surface temperature and ε = 1−ρ = 1. (a) Magnitude of the spatial temperature profiles from the diffusion and BTE solutions for Λ/L_p = 1. The BTE solution has two distinct regions that correspond to two distinct thermal resistances. (b) Magnitude of the thermal resistances R_diff,x and R_BTE,x = R_ε + R_i,x plotted as a function of Λ/L_p and Ωτ.
FIG. 3 .
FIG. 3. (Color online) 1D planar geometry with temporally oscillating surface temperature and ε = 1−ρ = 1, 0.5, and 0.1. (a) Magnitude of the thermal resistance from the diffusion and BTE solutions vs Λ/L_p and Ωτ. The BTE predicts a higher thermal resistance than the diffusion solution, which can be accounted for by reducing the effective thermal conductivity in the diffusion solution. (b) Magnitude of the suppression function plotted as a function of Λ/L_p and Ωτ. These curves are compared to the P₁ solution to the BTE for parallel, black, isothermal plates and to the step-function suppression function [Eq. (2)] [6,16,18].
FIG. 4 .
FIG. 4. (Color online) Spherical particle embedded in an infinite medium with oscillating temperature at the surface of the sphere (r = r_o) with ε = 1−ρ = 1. (a) Magnitude of the thermal resistance from the diffusion and BTE solutions vs Λ/L_p and Ωτ. (b) Magnitude of the suppression function plotted as a function of Λ/L_p and Ωτ for different values of Λ/r_o. For Λ/r_o = 0, the results collapse to the 1D planar case shown in Figs. 3(a) and 3(b).
FIG. 5 .
FIG. 5. (Color online) Spherical particle embedded in an infinite medium with oscillating temperature at the surface of the sphere (r = r_o) with ε = 1−ρ = 1. (a) Magnitude of the thermal resistance from the diffusive and BTE solutions vs Λ/r_o for different values of Λ/L_p. (b) Magnitude of the suppression function plotted as a function of Λ/r_o for different values of Λ/L_p. (c) Comparison of Eq. (18) when Λ/L_p = 0 for a particle with radius r_o and a particle with radius 3r_o, the exact solution for a sphere with steady-state heating from Ref. [31], and the suppression function found numerically by solving the spectral BTE for a Gaussian-shaped laser spot from Ref. [23]. Using a particle radius of 3r_o in our analytical result compares well with numerical results from Ref. [23].
FIG. 6 .
FIG. 6. (Color online) Comparison of thermal conductivity measurements and predicted k_exp for silicon. (a) k_accum from first-principles calculations (solid lines) is transformed into predicted k_exp vs 3r_o (dashed lines) by Eq. (3) using the suppression function from Eq. (18). A value of ε = 0.88 results in the best fit to TDTR measurements at T = 300 K and T = 80 K from Ref. [23] with a heating frequency of 10⁶ Hz. Predicted k_exp vs 3r_o for a heating frequency of 10⁷ Hz with ε = 0.88 is shown for comparison. In TDTR, the transducer/substrate interface is Al/Si. (b) k_accum from first-principles calculations is transformed into k_exp vs L_p by Eq. (3) using the suppression function from Eq. (18). A value of ε = 0.6 results in the best fit to BB-FDTR measurements (circles) at T = 300 K and T = 80 K from Ref. [16]. In BB-FDTR, the transducer/substrate interface is Cr/Si. Predicted k_exp vs L_p with ε = 0.88 is compared to TDTR measurements from Ref. [6] (diamonds).
"Engineering",
"Physics"
] |
Human Activity Recognition Based on Continuous-Wave Radar and Bidirectional Gate Recurrent Unit
The technology for human activity recognition has diverse applications within the Internet of Things spectrum, including medical sensing, security measures, smart home systems, and more. Predominantly, human activity recognition methods have relied on contact sensors, and some research uses inertial sensors embedded in smartphones or other devices, which present several limitations. Additionally, most research has concentrated on recognizing discrete activities, even though activities in real-life scenarios tend to be continuous. In this paper, we introduce a method to classify continuous human activities, such as walking, running, squatting, standing, and jumping. Our approach hinges on the micro-Doppler (MD) features derived from continuous-wave radar signals. We first process the radar echo signals generated from human activities to produce MD spectrograms. Subsequently, a bidirectional gate recurrent unit (Bi-GRU) network is employed to train and test these extracted features. Preliminary results highlight the efficacy of our approach, with an average recognition accuracy exceeding 90%.
Introduction
Radar sensors in the context of human monitoring are becoming increasingly popular, especially in applications such as activity classification in smart homes within the ambient assisted living framework, the recognition of human gestures in human-computer interaction, contactless vital sign monitoring, and other fields [1]. In the realm of these applications, there are generally two distinct categories of sensors that can be utilized, namely wearable and non-wearable sensors [2].
Wearable sensors are usually attached to the body parts of the monitored subject or are worn and carried in the pocket [3,4]. It is essential to acknowledge that wearable sensors face challenges in human activity classification, in that the placement and attachment of wearable sensors can affect their accuracy and reliability. Therefore, ensuring proper sensor placement and addressing issues related to sensor displacement or misalignment are crucial for obtaining accurate and consistent measurements [5,6]. Some research uses inertial sensors embedded in smartphones or other devices [7]. However, external factors such as temperature, humidity, and magnetic fields can affect the performance of inertial sensors, and these sensors require calibration to ensure accuracy under varying environmental conditions. Non-wearable sensors provide distinct advantages and have unique applications in human activity classification. Unlike wearable sensors, which require physical contact or attachment, non-wearable sensors can be deployed in the environment, such as in the surrounding infrastructure or objects [8]. These sensors can utilize various technologies, including vision-based systems [9], depth cameras [10], ambient sensors, or radar [11], to capture relevant information about human activities. Among the non-wearable sensors, radar has attracted significant attention due to its insensitivity to light conditions and easy integration into the home environment, as modern radar systems can be designed to look like a normal Wi-Fi router [12,13]. Furthermore, radar sensors may pose fewer privacy concerns compared to other non-wearable sensors, as they do not collect plain images or videos of the user and their private environments.
The rich structure of the Doppler signal is widely used as the input for complex radar-based solutions in many studies [14,15]. A radar device emits an electromagnetic signal along a specific line of sight (LOS). The reflection from targets moving in the LOS contains information about their speed as a result of the Doppler effect [16]. In addition, separately moving parts are characterized by their own Doppler signals. Most often, the superposition of all these Doppler signals is summarized in a so-called micro-Doppler (MD) signature [17].
Numerous studies used Deep Convolutional Neural Networks (DCNNs) to process the data as images. The work in [18] used a lightweight DCNN for the classification of different human activities. Comparisons with other neural networks such as MobileNet and ResNet were provided, demonstrating better performance when the DCNN was used. Some studies used Generative Adversarial Networks (GANs) to address the need for a large amount of data for training the neural network model for classification, since it is a significant challenge to gather a lot of radar data. The work in [19] applied a similar approach to the data of six human activities, showing that GANs are an effective tool for generating synthetic radar data starting from a relatively small dataset, and that their best use is to improve classification performance. Compared to the above methods, in this paper we investigate recurrent neural networks that interpret radar data as a temporal series and characterize the time-varying nature of a sequence of human activities. We use gate recurrent unit networks in their bidirectional implementation (Bi-GRU). The gate recurrent unit (GRU) is a recurrent neural network that can learn temporal dependencies between samples at separated time steps in a sequential data stream. GRUs have been promoted as an ideal solution for temporally variant data in many applications, ranging from temperature detection and text and speech recognition to finance and the energy field [20-24]. However, GRUs, and especially Bi-GRUs, have been minimally discussed in the literature as standalone tools for radar-based human activity classification and represent an under-explored approach compared with the DCNNs mentioned in the previous paragraphs.
To the best of our knowledge, very few works in the literature have investigated the use of GRU networks, let alone Bi-GRUs, for the radar-based classification of human activities; when they have been used, the data for the classes of interest have been collected as separate radar recordings. In this paper, by contrast, we analyze continuous sequences of human activities.
The main contributions of this paper are summarized as follows: • We analyze realistic, continuous sequences of human activities rather than discrete activities. Within them, different actions can happen at any time, with unconstrained duration for each activity, and the body parts reposition themselves appropriately in order to perform the following activity.
•
We extract the Doppler feature from continuous-wave (CW) radar data. Then, we introduce stacked bidirectional GRU networks as a potent deep learning (DL) mechanism for classifying these ongoing human activity sequences. Bi-GRUs are inherently suitable for such analysis because they can capture both forward and backward temporally correlated information within the radar data. We also shed light on performance implications stemming from data-processing choices and pivotal hyperparameters.
•
We base our analysis on experimental data collected using a CW radar and involving three participants performing different combinations of five activities. We then design three different permutations, as shown in the table in Section 4.3, to train and test the model with different humans, which makes the evaluation more credible.
The remaining sections of this paper are organized as follows: Section 2 reviews related work on human activity classification. Section 3 introduces the system description and the main structure of the proposed radar-based Bi-GRU scheme. In Section 4, the performance of the proposed scheme is evaluated. Section 5 provides conclusions and discusses future work.
Related Works
Radar has been widely used in the field of human activity classification. The Bi-GRU is a lightweight neural network model and is usually suitable for small datasets, such as the one used in our method. The authors in [25] proposed a deep learning (DL) method, called TRANS-Bi-GRU, which combines a transformer with a bidirectional gated recurrent unit (Bi-GRU) to efficiently learn and recognize different types of activities with a large dataset. They compared the proposed scheme with some existing schemes, and the results show that their scheme significantly outperforms the existing models for activity classification. However, this approach used only the raw data from the radar directly, without extracting the Doppler features from the raw data as our proposed scheme does, which enables the fine-grained use of radar data. The authors in [13] proposed a robust fall detection system based on frequency-modulated continuous-wave (FMCW) radar. The results show that the accuracy is over 90% on the test set. However, this scheme only detects the moment of human movement and calculates the range map of the radar signals, which does not use the data effectively. On the other hand, the authors in [26] proposed an extremely efficient convolutional neural network (CNN) architecture named Mobile-RadarNet, specially designed for human activity classification based on micro-Doppler signatures. The experiments on a seven-class human activity dataset demonstrate that the proposed scheme can achieve high classification accuracy. Despite its high classification accuracy, it treats human activities as discrete, overlooking the continuous nature of most real-world activities. Our methodology, in contrast, treats continuous activities as a class, mirroring real-world scenarios more closely.
System Description and Data Processing
The simplified overview of our proposed method for continuous human activity classification is illustrated in Figure 1. In the data acquisition part, the echo signals are recorded by a continuous-wave (CW) radar. In the signal processing part, the fast Fourier transform (FFT) is employed to extract the Doppler features from the CW radar data, leading to time-Doppler spectra. In the activity classification part, the above time-Doppler spectra are fed into our proposed network, culminating in the final classification results. The CW radar is the simplest and most efficient solution in cases where the detection of the moving object is the only, and outstanding, task [18,27]. Figure 2 shows the Doppler signature captured over 40 s, during which an individual cycles through five activities: walking, running, squatting, standing, and jumping. The y-axis in the figure denotes the Doppler dimension, while the x-axis highlights the time progression. Figure 2 distinctly showcases the varied Doppler signatures associated with different activities. For example, a negative frequency shift is observed when the participant squats, indicating movement away from the radar. Conversely, a positive frequency shift arises when the participant stands, symbolizing movement toward the radar.
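To make the signal-processing chain concrete, the sketch below computes a time-Doppler spectrogram from a complex CW radar I/Q stream via the short-time Fourier transform. This is a minimal illustration under assumed parameters (sample rate, window length, overlap), not the authors' code:

```python
# A minimal micro-Doppler spectrogram sketch for a complex CW radar
# I/Q stream; window length and overlap are illustrative assumptions.
import numpy as np
from scipy import signal

def micro_doppler_spectrogram(iq, fs, nperseg=256, noverlap=192):
    """Return Doppler bins, time bins, and magnitude (dB) of the STFT."""
    f, t, Z = signal.stft(iq, fs=fs, window="hann", nperseg=nperseg,
                          noverlap=noverlap, return_onesided=False)
    Z = np.fft.fftshift(Z, axes=0)   # center zero Doppler frequency
    f = np.fft.fftshift(f)
    sxx_db = 20.0 * np.log10(np.abs(Z) + 1e-12)  # log magnitude
    return f, t, sxx_db
```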
Optimal Parameters for Human Activity Classification
We implemented the neural network using a combination of software tools commonly employed in deep learning research. The primary components of our software stack included PyCharm version 2022.2.2 as the integrated development environment (IDE), Anaconda (Python version 3.9) for package management and environment control, and TensorFlow version 2.9.1 as the deep learning framework. The typical structure of the GRU is shown in Figure 5. As mentioned earlier, a GRU consists of an update gate and a reset gate [1]. In the update gate, the GRU computes z_t at a given time t to mitigate the vanishing gradient problem; z_t is the output vector of the update gate, which controls the degree to which the previous hidden state y_{t−1} influences the current input spectrum bin_t. Sigmoid is the activation function and W_z is the associated weight matrix. The GRU calculates r_t at a given time t to determine how much past information to forget; r_t is the reset gate output vector, which determines how much of the previous hidden state y_{t−1} should be ignored or reset based on the current input spectrum bin_t, with weight matrix W_r. The current memory content, the candidate hidden state ỹ_t, is computed from the previous hidden state y_{t−1} and the current spectrum bin_t, where tanh is the hyperbolic tangent activation function. Subsequently, the current hidden state y_t at time step t is computed from the previous hidden state y_{t−1}, the current candidate activation, and the update gate; o_t is the output at time step t, and W_o is the associated weight matrix. For many sequence modeling tasks, it is helpful to access both past and future contexts. However, the standard GRU network processes the sequence in chronological order, ignoring the future context. Bi-GRU networks extend the unidirectional GRU network by introducing a second layer in which the hidden connections flow in reverse chronological order [28]. As a result, the model can take advantage of both past and future information.
Figure 5. The structure of the proposed Bi-GRU network.
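The gate equations arrive partially garbled in this copy; the NumPy sketch below restates the standard GRU cell they describe, using the names from the text (z_t, r_t, candidate state, y_t). The concatenated-input weight layout is an illustrative assumption:

```python
# A minimal GRU-cell sketch matching the update/reset-gate description;
# weight shapes and the concatenation order are assumptions.
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(y_prev, bin_t, W_z, W_r, W_h):
    """One GRU update: previous hidden state + current spectrum bin."""
    h = np.concatenate([y_prev, bin_t])
    z_t = sigmoid(W_z @ h)                       # update gate
    r_t = sigmoid(W_r @ h)                       # reset gate
    y_cand = np.tanh(W_h @ np.concatenate([r_t * y_prev, bin_t]))
    return (1.0 - z_t) * y_prev + z_t * y_cand   # new hidden state y_t
```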
Figure 5 shows a simplified block diagram of the proposed architecture of the Bi-GRU network. The input to this network is the spectrogram, which contains micro-Doppler information and is fed into the network as a sequence of vectors, one time bin after another. Our Bi-GRU network is structured in a sequential manner, where the output layer of one Bi-GRU is connected to the input layer of the next Bi-GRU. This sequential connection enables the network to capture complex temporal dependencies within the data. Specifically, the hidden states of the first Bi-GRU layer serve as inputs to the second Bi-GRU, and so on for subsequent layers. This hierarchical architecture allows the network to learn and refine features at different levels of abstraction. In terms of training, we employed a holistic approach by training the entire stacked network from examples, as opposed to training each Bi-GRU layer separately. This comprehensive training strategy facilitates the learning of hierarchical representations from the data. Each layer of the network contributes to the extraction of relevant features, and the subsequent layers build upon these representations, ultimately leading to the high classification performance reported in our study. To estimate the influence of different hyperparameters on the model performance, we compare the average accuracy when the learning rate, the number of Bi-GRU layers, and the number of Bi-GRU neurons are varied. We use two targets' data to train the model and use the remaining target's data to test the model. The results are presented as follows.
(1) number of Bi-GRU layers Figure 6a presents the average accuracies of the three targets in comparison with four distinct Bi-GRU layer counts: one layer, two layers, three layers, and four layers. All Bi-GRU parameters are set to be identical, except for the number of Bi-GRU layers. As can be observed, using three Bi-GRU layers achieves the peak classification accuracy for all three target datasets, followed by two Bi-GRU layers. The lowest average classification accuracy is found when using one Bi-GRU layer. In conclusion, the average classification accuracy improves when transitioning from one to three Bi-GRU layers. However, an increase to four Bi-GRU layers does not yield further gains. This plateau may arise from the model's heightened complexity, which might lead to overfitting, meaning the model over-learns from the training data, capturing noise and unnecessary details. Therefore, the ability to generalize weakens, affecting the performance on test data. In the end, the number of Bi-GRU layers is set to three.
(2) number of Bi-GRU neurons
It is widely understood that increasing the number of Bi-GRU neurons augments the model's complexity, enabling it to capture more from the training data. However, an excessively intricate model might not necessarily perform well, especially if it over-learns, absorbing noise and other irrelevant details and compromising the model's generalization capability. Hence, a balanced, appropriately sized model is more beneficial than an overly complex one.
Figure 6b presents the average classification accuracies for three distinct Bi-GRU neuron counts: 32, 64, and 128. All other parameters remain consistent except for the neuron count. The optimal average classification accuracy emerges with 64 neurons, trailed by 128 neurons. The least effective configuration utilizes 32 neurons. Given these findings, the number of Bi-GRU neurons is set to 64.
(3) learning rate
The Adam optimizer is implemented for the model. A crucial hyperparameter to adjust in the update of the model parameters is the optimizer learning rate, often known as the step size. In this experiment, we conduct several tests to configure the optimal learning rate. Figure 6c illustrates the average classification accuracies of the three targets with four learning rates: 10⁻¹, 10⁻², 10⁻³, and 10⁻⁴. The figure shows that the highest average accuracies of the three targets all occur when the learning rate is set to 10⁻³. To ensure the model functions optimally on our dataset, we apply a learning rate of 10⁻³.
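With the hyperparameters selected above (three Bi-GRU layers, 64 neurons, Adam at 10⁻³), a minimal TensorFlow/Keras sketch of the classifier is shown below. The number of Doppler bins per time step is a placeholder, and the time-distributed softmax reflects the per-bin labeling described in Section 4; this is an illustration, not the authors' exact code:

```python
# Sketch of the stacked Bi-GRU classifier with the selected
# hyperparameters; N_DOPPLER_BINS is a placeholder value.
import tensorflow as tf
from tensorflow.keras import layers

N_TIME_BINS = 200      # 0.2-s bins in a 40-s group
N_DOPPLER_BINS = 128   # placeholder spectrogram height
N_CLASSES = 5          # walk, run, squat, stand, jump

model = tf.keras.Sequential([
    tf.keras.Input(shape=(N_TIME_BINS, N_DOPPLER_BINS)),
    layers.Bidirectional(layers.GRU(64, return_sequences=True)),
    layers.Bidirectional(layers.GRU(64, return_sequences=True)),
    layers.Bidirectional(layers.GRU(64, return_sequences=True)),
    layers.TimeDistributed(layers.Dense(N_CLASSES, activation="softmax")),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```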
Measurement Hardware and Its Parameters
The CW radar, supplied by the Innocent Company, operates in the 24 GHz industrial, scientific, and medical (ISM) band. The radar sensor has only one transmitter channel and one receiver channel. A summary of the technical details of our test apparatus is presented in Table 1.
Experiment Scenario Setup and Data Collection
Data were collected from three participants, aged between 23 and 31, on the sixth floor of the Hwado Office Building at Kwangwoon University. Table 2 provides the primary physical attributes of these participants. The radar was positioned at a height of 0.8 m, at a distance of 3 m from the participant. Figure 7 depicts the layout of the environment and the radar setup. With the aim of detecting human activities in disasters, we designed several activities common in disaster scenarios. The data include five human activities, as shown in Figure 8: walk (A1), run (A2), squat (A3), stand (A4), and jump (A5). While each activity is presented as a distinct action, the activities were executed in a continuous sequence. Most research collects discrete activities; for example, the participant performs a single activity during one collection. In contrast, we collected continuous activity: the participant performs five different activities continuously during one collection, because activities in real-life scenarios tend to be continuous. Upon activating the radar, participants carried out all five activities in random order without constraints. Each data collection session lasted 40 s, during which participants completed all five activities. Participants can perform the five activities in a random order, and the speed of each activity also depends on the participant. One important point is that the participant must perform all five activities within the 40 s collection time, and each activity can be performed only once. Such a session is termed a "group", and every participant undertook 20 groups. With three participants, this totaled 60 groups within a single experimental setup. Each group was segmented into 200 bins, each lasting 0.2 s. Each bin was labeled according to the activity conducted during that time. Subsequently, these bins were input into the model group by group.
Training and Testing Set Composition
Sixty distinct groups were gathered in total. To bolster the credibility of our results, data from two targets (40 groups) were used for training, while data from the third target (20 groups) were allocated for testing. Table 3 lists all the permutation combinations considered. The term "target" in Table 3 refers to a participant in our experiment.
Performance Analysis
Figure 9 shows an example result for a group; the blue line is the actual activity of the participant, and the red line is the participant's activity as predicted by the model. The accuracy of the group is determined by comparing the blue line and the red line. The accuracy of each activity is determined by comparing the blue line and the red line within each activity in the group. Many papers use the same target's data for both training and testing. However, when the same target's data are used to train and test, the model has already learned the features of the test target. Therefore, this cannot show that the model has good generalization ability. In our experiment, we use different targets for training and testing to make the results more persuasive.
Figure 10 shows the accuracy of each group when we use two targets' data to train the model and the remaining target's data to test it. Figure 10 clearly shows that many accuracy rates exceed 95% and most exceed 90%, especially in Figure 10b,c. However, not all test groups maintain this high level of accuracy, owing to the inherent challenges of detecting the intricately designed activities within this dataset. For instance, the 17th group in Figure 10a has an accuracy of approximately 78%. This variability can be attributed to the fact that the same activity might be performed slightly differently by different targets. Consequently, Doppler features might vary between participants, leading to occasional inconsistencies in the model's predictions. Beyond group-wise accuracy, we also evaluated the accuracy for each distinct activity within every group to verify the experiment's authenticity. Figure 11 shows an example of how the accuracy of each activity is calculated. On the one hand, the model sometimes cannot predict the correct activity, as in the first circle in the figure, which results in 0% accuracy; on the other hand, the model can sometimes predict the correct activity but not its duration, as in the third circle in the figure, which results in 71.8% accuracy. In the next section, we introduce the results for the accuracy of each activity in every group.
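A minimal sketch of the accuracy computations described above, assuming per-bin integer labels (0-4) for both ground truth and predictions:

```python
# Per-group and per-activity accuracy over per-bin label sequences.
import numpy as np

def group_accuracy(y_true, y_pred):
    """Fraction of correctly labeled 0.2-s bins in a group."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(y_true == y_pred))

def per_activity_accuracy(y_true, y_pred, n_classes=5):
    """Accuracy restricted to the bins of each activity."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    out = {}
    for a in range(n_classes):
        mask = y_true == a
        out[a] = float(np.mean(y_pred[mask] == a)) if mask.any() else float("nan")
    return out
```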
Figure 12 presents the accuracy of individual activities across groups, using two targets' data for training and the remaining target's data for testing. From Figure 12, we can see that the best result is Figure 12c, followed by Figure 12b, and most of the accuracies in both are over 90%. Then comes Figure 12a, in which most of the accuracies are over 80%. From Figures 10 and 12, we can see that the model behaves well when the data of the second and third targets are used for testing, and less well when the data of the first target are used for testing. Figure 13 illustrates a violin plot of the accuracy for the five distinct activities across different targets. The mean accuracies for the walk and run activities consistently surpass 95%, except for the walk activity in Figure 13b. This high accuracy can be attributed to the extended duration of walking and running relative to other activities, allowing the model to glean more characteristic features from these two activities. In contrast, the average accuracies for the squat activity are approximately 90% in Figure 13b,c but drop to approximately 80% in Figure 13a. The jump and stand activities yield mean accuracies of approximately 85% in Figure 13b,c. However, these values dip significantly to approximately 65% in Figure 13a. One potential explanation is the inherent variability in the way different targets perform the same activity, leading to inconsistent accuracy when different targets' data are used for training and testing, as observed for the jump and stand activities in Figure 13a.
Performance Comparison
To evaluate the performance of the proposed scheme, we compared the average group accuracy for the three participants across different deep learning models, as shown in Figure 14. As the figure indicates, the Bi-GRU approach consistently achieves average accuracies exceeding 0.9 for all participants, followed by the Bi-LSTM model, which yields average accuracies over 0.9 for the second and third targets. The worst is the LSTM scheme, for which the average accuracy for the first target is approximately 0.88, the accuracy for the second target is approximately 0.77, and the accuracy for the third target is only approximately 0.72. Conversely, the Bi-GRU scheme achieves the best performance.
Conclusions and Future Work
This paper introduces a Bi-GRU model geared toward continuous human activity classification, leveraging the Doppler features extracted from CW radar data. Our emphasis is on continuous human activities, as opposed to discrete ones, given the inherently continuous nature of real-world human actions. Going forward, we aspire to achieve real-time continuous human activity classification, fostering applications such as monitoring human activities during emergencies or disasters.
Figure 1 .
Figure 1. The framework of the proposed Doppler-based Bi-GRU method.
Figure 2 .
Figure 2. Doppler signature of a group.
Figure 3 elucidates the core steps of data processing. The captured signals manifest as a matrix structured by slow and fast time dimensions. Fast time refers to the time domain of individual pulse signals received by the radar; these are transformed into the fast time domain for the analysis and processing of each pulse signal. Slow time refers to the longer time scale in a radar system, involving a sequence of multiple pulse signals received over a period of time. The slow time domain is used to accumulate and integrate multiple pulse signals to improve the performance and target detection capability of the radar system. For the data to be used as inputs to the classifier, a fast Fourier transform is first performed on each fast time bin of raw data to convert the time domain into the frequency domain and extract the information of the fast time dimension. To remove static clutter, a Hanning window is then applied; then, using the specific slow time bins in which the target is performing the activities, a 2D fast Fourier transform (FFT) is applied to find the Doppler-time pattern that characterizes the micro-Doppler signatures. Each Doppler spectrum time bin is then manually labeled, setting the stage for model training. Figure 4 is the result after labeling Figure 2 (A1, A2, A3, A4, and A5 in the figure represent walking, running, squatting, standing, and jumping, respectively).
Figure 3 .
Figure 3. The main process of the data processing.
Figure 4 .
Figure 4. The result after the data processing of Figure 2.
Figure 6 .
Figure 6. The average accuracy of using two targets' data to train and using the remaining one target's data to test with different model parameters: (a) various Bi-GRU layers, (b) various neurons, (c) various learning rates.
Figure 8 .
Figure 8. Pictorial list of activities; these five activities were performed in random continuous sequences.
Figure 9 .
Figure 9. Ground truth in blue, and the predicted outcome in red.
Figure 10 .
Figure 10. The accuracy of each group when we use two targets' data to train and use the remaining one target's data to test: (a) use the data of the 2nd target and 3rd target to train, and use the data of the 1st target to test; (b) use the data of the 1st target and 3rd target to train, and use the data of the 2nd target to test; (c) use the data of the 1st target and 2nd target to train, and use the data of the 3rd target to test.
Figure 11 .
Figure 11.Example of how to calculate the accuracy of each activity.
Figure 12 .
Figure 12.Accuracy of each activity in each group when we use two targets' data to train and use the remaining one target's data to test: (a) use the data of 2nd target and 3rd target to train, and use the data of 1st target to test; (b) use the data of 1st target and 3rd target to train, and use the data of 2nd target to test; (c) use the data of 1st target and 2nd target to train, and use the data of 3rd target to test target to test.
Figure 13 .
Figure 13.Violin plot of the accuracy of five activities for different targets: (a) use the data of 2nd target and 3rd target to train, and use the data of 1st target to test; (b) use the data of 1st target and 3rd target to train, and use the data of 2nd target to test; (c) use the data of 1st target and 2nd target to train, and use the data of 3rd target to test.
Figure 14 .
Figure 14.Average accuracy of the three targets when using different models.
Table 1 .
Configuration parameters of CW radar.
Table 2 .
Main physical parameters of participants.
Table 3 .
All permutations of the train and test sets. | 6,533.8 | 2023-09-27T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Effects of Probiotic Supplementation on Immune and Inflammatory Markers in Athletes: A Meta-Analysis of Randomized Clinical Trials
Background and Objectives: Probiotic supplementation can prevent and alleviate gastrointestinal and respiratory tract infections in healthy individuals. Markers released from the site of inflammation are involved in the response to infection or tissue injury. Therefore, we measured the pre-exercise and postexercise levels of inflammation-related markers—tumor necrosis factor (TNF)-α, interleukin (IL)-6, IL-8, IL-10, interferon (IFN)-γ, salivary immunoglobulin A (IgA), IL-1β, IL-2, IL-4, and C-reactive protein (CRP)—in probiotic versus placebo groups to investigate the effects of probiotics on these markers in athletes. Probiotics contained multiple species (e.g., Bacillus subtilis, Bifidobacterium bifidum, etc.). Materials and Methods: We performed a systematic search for studies published until May 2022 and included nine randomized clinical trials. Reporting followed the Preferred Reporting Items for Systematic Reviews and Meta-analyses guideline. Fixed-effects meta-analyses and sensitivity analyses were performed. Subgroup analyses were conducted on the basis of the period of probiotic intervention and timing of postassessment blood sampling. Results: The levels of IFN-γ and salivary IgA exhibited a significant positive change, whereas those of TNF-α and IL-10 demonstrated a negative change in the probiotic group. The subgroup analysis revealed that the probiotic group exhibited significant negative changes in TNF-α and IL-10 levels in the shorter intervention period. For the subgroup based on the timing of postassessment blood sampling, the subgroup whose blood sample collection was delayed to at least the next day of exercise exhibited significant negative changes in their TNF-α and IL-10 levels. The subgroups whose blood samples were collected immediately after exercise demonstrated negative changes in their TNF-α, IL-8, and IL-10 levels. Conclusions: Probiotic supplementation resulted in significant positive changes in the IFN-γ and salivary IgA levels and negative changes in the IL-10 and TNF-α levels. No significant changes in the IL-1β, IL-2, IL-4, IL-6, IL-8, or CRP levels were observed after probiotic use in athletes.
Introduction
Individuals who engage in strenuous exercise are more likely to experience upper respiratory tract and gastrointestinal illness, especially diarrhea, during heavy training and competitions such as a marathon [1][2][3][4]. Strenuous exercise causes immunosuppression by reducing the function of immune cells, thus increasing susceptibility to viral infection [5,6]. Gastrointestinal illness is typically characterized by belching, bloating, flatulence, side stitch, abdominal cramps, vomiting, diarrhea, the urge to defecate during exercise, nausea, and loss of appetite [7,8]. Respiratory illness is often characterized by throat soreness, sneezing, a blocked or runny nose, and cough [8]. Athletes may be more at risk of infection during heavy training [9][10][11], possibly because of the suppression of mucosal immunity, which, in turn, increases susceptibility to gastrointestinal and respiratory illness [2], or alternatively because of the combined effects of small changes in several immune parameters [12]. Therefore, elite athletes need to reduce the risk of infection and recover quickly from gastrointestinal and respiratory symptoms. Evidence increasingly indicates that probiotic supplementation can prevent and alleviate gastrointestinal and respiratory tract infections (common cold and influenza) in healthy individuals and influence the body's defenses [13,14].
Cytokines, which are small peptides facilitating the influx of lymphocytes, neutrophils, monocytes, and other cells, are released from sites of inflammation and are involved in the response to infections or tissue injury [28][29][30]. Probiotics have been reported to modulate inflammation and systemic immune responses in experimental animals, such as by affecting defense mechanisms and the release of several cytokines (e.g., tumor necrosis factor (TNF)-α and interferon (IFN)-γ) [31,32]. Probiotics can also improve several inflammatory and oxidative stress biomarkers [33]. The balance between proinflammatory and anti-inflammatory cytokines, which regulate immune cell homeostasis, is dynamic and ever-shifting in the human immune system [34,35]. The cytokines initially involved in a cytokine storm include TNF-α, interleukin (IL)-1β, IL-6, and IL-10 [36]. High-intensity, long-duration exercise can lead to higher levels of inflammatory mediators, including IL-1β, IL-6, and TNF-α, and thus increase the risk of injury and chronic inflammation [37,38]. IL-2 is considered a key growth and death factor for antigen-activated T lymphocytes [39]. IL-4 is associated with type 2 inflammation, which is related to parasite infection and chronic diseases, including asthma and atopic dermatitis [40]. A systematic review and meta-analysis demonstrated an elevation in IL-1β, IL-8, IL-10, and TNF-α levels; a reduction in IL-2 and IFN-γ levels; and no change in the IL-4 level after long-distance running [41]. TNF-α plays a crucial role in several physiological and pathological conditions related to its action in inflammation and leukocyte movement [42]. IL-6 is a cytokine present in circulation during exercise. A study reported that after a person took probiotics, their IL-6 level increased exponentially in response to exercise and declined during the postexercise period [28]. Salivary immunoglobulin A (IgA) as a biomarker is associated with the incidence of infection; a low level or a substantial transitory decline is related to an increase in the incidence of upper respiratory tract diseases [43]. Probiotics increased mucosal salivary IFN-γ, IgA1, and IgA2 levels in healthy adults [44]. However, evidence from clinical trials regarding the effects of probiotic supplementation on immune and inflammatory markers in athletes is lacking. Scholars have reported inconsistent results. Several studies have reported no significant change after probiotic supplementation [20,23,25]. By contrast, some studies demonstrated that the TNF-α level was lower in both sexes after probiotic supplementation [22] and observed a significantly decreased IL-6 level and increased IL-10 level in a probiotic group compared with a placebo group [21]. Moreover, probiotic supplementation attenuated acute exercise-induced changes in both anti-inflammatory and proinflammatory cytokines (IL-6, IL-8, IL-10, IFN-γ, and TNF-α) in male and female athletes [8,45].
This study determined the effect of probiotics on inflammation-related markers (TNF-α, IL-6, IL-8, IL-10, IFN-γ, salivary IgA, IL-1β, IL-2, IL-4, and C-reactive protein (CRP)) in athletes by examining the levels of these markers before and after exercise in probiotic and placebo groups.
Data Sources
The review protocol was prospectively registered on PROSPERO (CRD42022302897), and our findings are reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline. A research librarian systematically searched for relevant studies in PubMed, Cochrane Library, CEPS, and Embase from their inception to 12 May 2022. Citations were managed using EndNote version 20.1 (Clarivate Corp., Philadelphia, PA, USA; London, UK).
Eligibility Criteria
Clinical trials that included healthy human athletes involved in any sport and of any sex, age range, and race, and that provided original pre-exercise and postexercise blood data, were eligible. We considered interventions involving the administration of probiotics that contained a single species or multiple mixed species and were prepared in any form, including capsules or sticks. Studies that administered to their control group a placebo manufactured to be identical to the probiotics in packaging, encapsulation, and taste were eligible for inclusion.
We excluded clinical trials that met one of the following criteria: (1) did not include athletes, (2) were designed as nonparallel randomized clinical trials (RCTs), (3) had only a single arm, (4) did not examine inflammation-related markers, (5) had an intervention period of <14 days, (6) included patients with diseases as the study population, and (7) combined other supplements or medication in their intervention. Furthermore, we excluded studies that measured inflammation-related markers only after the probiotic supplementation.
Data Selection and Extraction
One researcher (YTG) searched for relevant RCTs published in the PubMed, Embase, Cochrane Library, and Chinese Electronic Periodical Services (CEPS) databases from their inception until May 2022. Another researcher (YCP) evaluated the selected RCTs. The researchers were blinded to each other's decisions. The outcomes were reviewed by two researchers. All retrieved abstracts, studies, and citations were reviewed. The decisions of the two researchers were compared, and if the two reviewers could not reach a consensus, any disagreements were resolved through discussion with a third researcher (WHH).
The two researchers (YTG and YCP) independently extracted data. If data were only presented graphically, values were estimated from figures by using WebPlotDigitizer version 4.5 [46]. Finally, data were analyzed using RevMan 5.4.1 (Cochrane Collaboration, Oxford, UK).
Outcomes
The pre-exercise and postexercise blood levels of inflammation-related markers in the probiotic and placebo groups were measured to determine the effect of probiotics. To perform a meta-analysis, we excluded the outcomes of specific cytokines that were measured in only one RCT.
Assessment of Risk of Bias
The two reviewers (YTG and YCP) independently determined the risk of bias by using the revised Cochrane risk-of-bias tool for randomized trials, version 2 (RoB 2.0) (Bristol, England), in accordance with the Cochrane Handbook for Systematic Reviews of Interventions, Version 5.2.12; this tool measures the potential for bias arising from five domains: the randomization process, deviation from the intended intervention, missing outcome data, outcome measurement, and selection of reported results. Possible responses were "yes", "probably yes", "probably no", "no", and "no information". Domains were evaluated as having either low or high risk of bias or some concerns [47]. Assignment or intention to treat was the outcome of interest. Disagreement was resolved through discussion with the third author (WHH).
Statistical Analysis
All analyses were performed using the fixed-effects model with Review Manager version 5.4.1 (Cochrane Collaboration, Oxford, UK), which includes MetaView for presenting graphs and figures. The mean difference and 95% confidence interval (CI) were calculated for each trial and are presented in a forest plot.
To assess heterogeneity, I² statistics were calculated; an I² greater than 50% represents substantial heterogeneity. The potential risk of small-study bias was visually examined by generating funnel plots [48]. Statistical significance was set at a p value of <0.05, except for publication bias, for which a p value of <0.10 was considered significant. Sensitivity analysis was performed by removing outlier studies (those with CIs that did not overlap with the CI of the pooled effect) [49]. If I² was >50%, subgroup analysis was conducted to determine potential factors contributing to the heterogeneity, such as the length of the probiotic intervention (less than 6 weeks vs. more than 6 weeks) and the timing of postassessment blood sampling (immediately after exercise vs. delayed to at least the next day after exercise). No further subgroup analysis was performed if an outcome was examined in only two studies. Publication bias was evaluated using Egger's test. The funnel plot we constructed plots the pseudo 95% CI against the standard error of the estimates. Owing to the heterogeneity, we used the fixed-effects model, because a conventional funnel plot used to examine publication bias is assumed to be inaccurate when the number of included studies is small [46].
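For illustration, the following is a minimal sketch of inverse-variance fixed-effects pooling with Cochran's Q and I², applied to hypothetical per-study mean differences and standard errors; the actual analysis was performed in RevMan 5.4.1, so this is only a transparent restatement of the standard formulas.

```python
import numpy as np

def fixed_effects(effects, ses):
    """Inverse-variance fixed-effects pooling with Cochran's Q and I^2."""
    effects, ses = np.asarray(effects), np.asarray(ses)
    w = 1.0 / ses**2                       # inverse-variance weights
    pooled = np.sum(w * effects) / np.sum(w)
    se_pooled = np.sqrt(1.0 / np.sum(w))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    q = np.sum(w * (effects - pooled)**2)  # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, ci, q, i2

# hypothetical study-level mean differences and standard errors
pooled, ci, q, i2 = fixed_effects([-0.4, -0.2, -0.35], [0.10, 0.15, 0.12])
```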
Study Selection
Figure 1 presents the PRISMA flowchart of the study screening and selection processes used in this research. Through a literature search, we retrieved relevant publications from PubMed (n = 92), Cochrane Library (n = 37), Embase (n = 41), and CEPS (n = 4). A total of 41 RCTs were retained after the exclusion of 133 duplicate studies. After the titles and abstracts had been screened, we excluded 26 studies and evaluated the eligibility of the remaining 15 studies. After the full-text assessment, we excluded six trials for several reasons (e.g., an intervention time of <14 days, adoption of a crossover design, or examination of an inflammation-related marker in only one study). A total of nine studies that met the inclusion criteria were included in this systematic review and meta-analysis.
Characteristics of Included Studies
Table 1 summarizes the characteristics of the nine studies that examined ten inflammation-related markers, namely TNF-α, IL-6, IL-8, IL-10, IFN-γ, salivary IgA, IL-1β, IL-2, IL-4, and CRP. A brief description of their main features is provided in the following sections in compliance with the review strategy.
The nine trials were published between 2011 and 2021, and their sample sizes ranged from 13 to 97. A total of 335 participants were included (170 in the probiotic group and 165 in the placebo group). No difference was noted in age or body mass between the groups. The intervention period varied among the studies, ranging from 28 to 90 days.
Five of the selected RCTs lasted 4 weeks (30 days) [21,[23][24][25][26], one RCT lasted 8 weeks [27], and three RCTs lasted from 11 to 12 weeks (90 days) [8,20,22]. Sticks, capsules, sachets, or fermented milk were consumed once or twice per day during the supplementation period. Six RCTs used supplementation capsules containing B. lactis, and among them, the capsules used in three RCTs also contained B. longum ES1. The probiotics in each study contained at least one Bifidobacterium or Lactobacillus species. The control groups were mainly administered sensorially identical placebo capsules, sticks, or sachets containing excipients only, without bacteria.
Inflammatory-Related Markers
Of the nine studies, seven reported the TNF-α level, five reported the IL-6 level, four reported the IL-8 level, five reported the IL-10 level, two reported the IFN-γ level, two reported the salivary IgA level, two reported the IL-1β level, two reported the IL-2 level, two reported the IL-4 level, and two reported the CRP level. One study [20] only indicated the differences in these markers after the intervention. Therefore, we determined differences in the levels of these markers by subtracting the preintervention values from the postintervention values. The timing of the blood sampling in the baseline and postintervention assessments varied. For the baseline assessment, all the studies collected samples prior to the supplementation period with regular exercising. For the postintervention assessment, two studies collected blood samples at 8 a.m. on the 30th day, three studies collected samples after the supplementation period and ensured that participants did not perform strenuous exercise for at least 24 h before sample collection, one study collected samples 1 h after a race, and three studies collected samples immediately after a race.
RoB 2.0 Assessment
RoB 2.0 indicated overall high risk for one study, some concerns for three studies, and low risk for five studies for the outcome of inflammation-related markers. Overall, some concerns were noted for baseline differences between the probiotic and placebo groups: fat mass was higher in the probiotic group [26], body fat was significantly higher in the placebo group [20], and the white blood cell count was higher in women in the placebo group [8]. Low risk of bias was determined for the blood sampling outcome. We discovered a high risk of attrition bias for one study [21] that excluded one participant as an outlier and had a >20% loss in the follow-up, and for another study that excluded the data of four participants because the blood volume was insufficient to enable analyses and had a 31% loss in the follow-up [27]. Because all nine RCTs reported the blinding of assessors, we considered them to have a low risk of detection bias. Studies reporting that the raw data of outcomes were unadjusted were considered to have a low risk of reporting bias. Figure 2 presents the risks of bias of all the included studies in the five domains.
Overall Effects
The overall effect size for the TNF-α outcome was −0.30 (95% CI: −0.42, −0.17, p < 0.00001; heterogeneity: chi-square = 31.5, df = 6, p < 0.0001, I² = 81%), indicating that the probiotic group exhibited a significant negative change in the TNF-α level compared with the control group (Figure 3). An outlier study was noted [27]. Therefore, a sensitivity analysis was performed by excluding this study for all relevant outcomes.
The overall effect sizes for the inflammation-related markers are presented in Table 2.
In the subgroup analyses of the timing of postintervention blood sampling, the subgroup whose blood sample collection was delayed to at least the next day of exercise demonstrated no significant change in the IL-6 level compared with the control group, with an effect size of −0.77 (95% CI: −1.91, 0.37, p = 0.18; heterogeneity: chi-square = 0.19, df = 1, p = 0.66, I² = 0%). The subgroup whose blood samples were collected immediately after exercise revealed no significant change in the IL-6 level compared with the control group, with an effect size of 0.36 (95% CI: −0.11, 0.84, p = 0.14; heterogeneity: chi-square = 0.55, df = 2, p = 0.76, I² = 0%; Figure 6b).
In the subgroup analysis of the intervention period, for a shorter intervention period, the probiotic group exhibited no significant change in the IL-8 level compared with the control group, with an effect size of −1.23 (95% CI: −2.48, 0.03, p = 0.06; heterogeneity: chi-square = 11.05, df = 2, p = 0.004, I² = 82%). For a longer intervention period, the probiotic group demonstrated no significant change in the IL-8 level compared with the control group, with an effect size of −0.20 (95% CI: −1.15, 0.75, p = 0.68; Figure 5c).
In the subgroup analysis based on the timing of postintervention blood sampling, the subgroup whose blood sample collection was delayed to at least the next day of exercise exhibited no significant change compared with the control group, with an effect size of −0.17 (95% CI: −1.11, 0.77, p = 0.72; heterogeneity: chi-square = 0.16, df = 1, p = 0.69, I² = 0%). The subgroup whose blood samples were collected immediately after exercise exhibited a significant negative change in the IL-8 level compared with the control group, with an effect size of −1.31 (95% CI: −2.59, −0.04, p = 0.05; heterogeneity: chi-square = 10.56, df = 1, p = 0.001, I² = 91%; Figure 6c).
In the subgroup analysis based on the intervention period, for a shorter intervention period, the probiotic group exhibited a significant negative change in the IL-10 level compared with the control group, with an effect size of −0.13 (95% CI: −0.19, −0.06, p = 0.0002; heterogeneity: chi-square = 4.67, df = 3, p = 0.20, I² = 36%). For a longer intervention period, the probiotic group exhibited no significant change in the IL-10 level compared with the control group, with an effect size of −0.33 (95% CI: −1.01, 0.35, p = 0.34; Figure 5d).
Outcome of IL-6
The effect size of the five RCTs for the outcome of IL-6 was 0.19 (95% CI: −0.25, 0.63, p = 0.39; heterogeneity: chi-square = 4.00, df = 4, p = 0.41, I² = 0%), indicating no significant change in the probiotic group. In the subgroup analysis based on the timing of postintervention blood sampling, the subgroup whose blood sample collection was delayed to at least the next day of exercise exhibited a significant negative change in the IL-10 level compared with the control group, with an effect size of −0.12 (95% CI: −0.19, −0.05, p = 0.0005). The subgroup whose blood samples were collected immediately after exercise demonstrated a significant negative change in the IL-10 level compared with the control group in the fixed-effects model, with an effect size of −0.52 (95% CI: −0.98, −0.07, p = 0.02; heterogeneity: chi-square = 2.08, df = 3, p = 0.56, I² = 0%; Figure 6d).
Publication Bias
According to the funnel plots (Figure 7), no heterogeneity was noted for the outcomes of IL-6 (Figure 7b), salivary IgA (Figure 7f), IL-2 (Figure 7h), and IL-4 (Figure 7i), because the included studies appeared to be distributed within the two diagonal lines representing their pseudo 95% confidence limits. However, for the outcomes of IL-8 (Figure 7c), IFN-γ (Figure 7e), and IL-1β (Figure 7g), the studies appeared to be distributed beyond the two diagonal lines, representing heterogeneity [48].
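As a sketch of how such an asymmetry check can be computed, the following implements the standard Egger regression (intercept test) on hypothetical effects and standard errors; it is not the exact procedure used by the authors' software.

```python
import numpy as np
import statsmodels.api as sm

def egger_test(effects, ses):
    """Egger's regression test for small-study (funnel-plot) asymmetry:
    regress the standardized effect (effect/SE) on precision (1/SE);
    a non-zero intercept suggests asymmetry (this paper used p < 0.10)."""
    effects, ses = np.asarray(effects), np.asarray(ses)
    snd = effects / ses                   # standardized effect sizes
    precision = 1.0 / ses
    X = sm.add_constant(precision)        # intercept captures asymmetry
    fit = sm.OLS(snd, X).fit()
    return fit.params[0], fit.pvalues[0]  # intercept estimate and its p-value

# hypothetical per-study inputs
intercept, p = egger_test([-0.4, -0.2, -0.35, -0.1], [0.10, 0.15, 0.12, 0.20])
```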
Overall Effect
To the best of our knowledge, this is the first meta-analysis to investigate the effects of probiotic supplementation on the levels of inflammation-related markers, namely IL-1β, IL-2, IL-4, IL-6, IL-8, IL-10, TNF-α, IFN-γ, CRP, and salivary IgA, in athletes.
A study reported that the consumption of a symbiotic bacterium did not affect immune- and inflammation-related markers in athletes [25]. Pugh et al. indicated that the IL-6, IL-8, and IL-10 levels were not significantly different before or after the race between the placebo and probiotic groups, although athletes self-reported a lower incidence and severity of gastrointestinal tract symptoms [23]. Schreiber et al. demonstrated that the mean IL-6, TNF-α, and CRP levels were not affected by probiotics [20].
Conversely, some studies have reported beneficial effects of probiotics. Smarkusz-Zarzecka et al. observed that the TNF-α level was lower in both sexes after probiotic supplementation [22]. Tavares-Silva et al. noted a significant decline in the IL-2 and IL-4 levels 24 h before exercise in the probiotic group compared with the placebo group [21]. West et al. indicated that probiotic supplementation attenuated acute exercise-induced changes in both anti-inflammatory and proinflammatory cytokines (IL-6, IL-8, IL-10, IFN-γ, and TNF-α) in male and female athletes [8].
Moderate activity may enhance immune function above the sedentary level, whereas intense exercise may cause oxidative stress, muscle damage, inflammation, and immune alteration in elite athletes, leading to upper respiratory tract and gastrointestinal tract illness, especially diarrhea, during heavy training and competitions such as marathons [1][2][3][4][50,51]. In our meta-analysis, we examined the effects of probiotic supplementation on the levels of proinflammatory and anti-inflammatory cytokines in athletes at baseline and after probiotic supplementation. The findings of this meta-analysis of nine studies indicate that not every cytokine participating in the inflammatory reaction had a significantly altered level after probiotic supplementation. No significant difference in the IL-6 level was observed in our meta-analysis. However, we noted significant differences in the IL-8, IL-10, and TNF-α levels.
IL-2 plays an immunoregulatory role; it promotes the growth and development of peripheral immune cells in the initiation of the (defensive) immune response and maintains their viability as effector cells [54].
IL-4 is associated with type 2 inflammation and can downregulate IL-1β and TNF-α because type 1 and type 2 responses mutually suppress each other [55,56].
IL-6 is a key member of the cytokine network and plays a crucial role in acute inflammation [57]. Moreover, IL-6 exerts proinflammatory effects (e.g., in acute innate responses) and coordinates anti-inflammatory activities essential for the alleviation of inflammation [58].
IL-8 is a chemoattractant cytokine produced by various tissue and blood cells, and it attracts and activates neutrophils in inflammatory regions [59]. In addition to having chemokine properties, IL-8 acts as an angiogenic factor [60].
TNF-α is an inflammatory cytokine produced by macrophages and monocytes during acute inflammation and is responsible for various signaling events within cells, leading to necrosis or apoptosis [42,61].
CRP is a pentameric protein synthesized by the liver, and its level increases in response to inflammation. CRP is primarily induced by IL-6 during the acute phase of an inflammatory or infectious process [62].
Regarding proinflammatory markers, our quantitative analysis demonstrated that probiotic supplementation significantly reduced the TNF-α level but caused no changes in the IL-1β, IL-2, IL-4, IL-6, IL-8, and CRP levels. This result is consistent with that of a previous meta-analysis investigating the effects of probiotic supplementation in normal healthy individuals, which reported a reduction in the TNF-α level but no differences in the IL-1β, IL-4, IL-6, IL-8, and IL-10 levels [63]. Although we did not observe significant changes in the levels of all proinflammatory markers in this study, their levels were lower after probiotic supplementation. In the subgroup analysis based on the timing of the postintervention blood sampling, the subgroup whose blood samples were collected immediately after exercise exhibited a significant decrease in the IL-8 level.
To perform a subgroup analysis on the basis of the period of probiotic intervention, we divided the studies into two groups: those in which athletes received probiotics for less than 6 weeks and those for more than 6 weeks. The TNF-α level significantly changed in the shorter-period group but not in the longer-period group, although the p value for the longer-period group was 0.07, which is close to statistical significance. The current guidelines of the World Gastroenterology Organization indicate that it is generally not possible to state a general dose required for probiotics, and the dosage should be based on human studies showing a health benefit [64].
Probiotics may provide benefits by improving mucosal immunity, modulating the inflammatory system, increasing antioxidant capacity, reducing stress, and improving the microbiota composition and the microenvironment in the gastrointestinal tract [65,66].
Anti-Inflammatory Markers: IL-10 and IFN-γ
IL-10 is the most important cytokine with anti-inflammatory properties [67]. In terms of the correlation between the IL-10 level and exercise, the exercise-induced increase in the plasma IL-6 level is followed by increased circulating levels of anti-inflammatory cytokines, such as IL-1ra and IL-10 [35,68].
IFN-γ coordinates a diverse array of cellular programs through the transcriptional regulation of immunologically relevant genes [69]. IFN-γ is considered an anti-inflammatory cytokine at low concentrations [70].
Our study revealed a reduction in the level of IL-10 but an increase in the level of IFN-γ after probiotic supplementation in athletes. A recent systematic review on this topic reported that exercise duration is the most crucial factor determining the magnitude of the exercise-induced increase in the plasma IL-10 level. However, no significant correlation was noted between the intensity of exercise and the change in the IL-10 level [71]. The appearance of IL-10 after eccentric exercise may indicate that IL-10 release is secondary to tissue damage [72]. Thus, one explanation may be the intrinsic anti-inflammatory effect of probiotics: because probiotics may reduce the levels of proinflammatory markers, they do not further stimulate the production of IL-10.
Salivary IgA
IgA is the dominant immunoglobulin isotype on all mucosal surfaces, where it acts as the first line of defense against microbial invasion [73]. Oxidative stress is a leading cause of inflammation and may have a negative impact on immune function, so alleviating oxidative stress ultimately suppresses inflammation [74]. IgA in sublingual and submandibular secretions is a preferential noninvasive proxy for intestinal immune induction [75]. Studies have reported varying effects of exercise on the IgA level. A meta-analysis conducted in 2021 indicated that physical exercise resulted in a change in the salivary IgA level in athletes; however, this study had a risk of bias and very low certainty of evidence [76].
Our results revealed a significant increase in the salivary IgA level after probiotic supplementation in athletes, indicating that probiotics exert beneficial effects on intestinal immune function. The increase in mucosal immunity due to the administration of probiotics can protect against infection from pathogens that penetrate the mucosa [43,77].
Our results demonstrate that probiotics play a role in the anti-inflammatory response; this finding is consistent with those of two previous studies reporting that probiotics exert anti-inflammatory effects in intestinal chronic diseases [78] and can prevent acute upper respiratory tract infections [79]. The current guidelines of the American Gastroenterological Association indicate that in symptomatic children and adults with irritable bowel syndrome, the use of probiotics is recommended only in the context of a clinical trial [64]. The World Gastroenterology Organization concluded that probiotics can treat and prevent acute diarrhea, but the mechanisms of action may be strain-specific [64].
Heterogeneity
The results of this study had relatively high heterogeneity; the influential factors were the duration of the intervention, assessment time point, country, probiotic type, and sport type. This study evaluated multiple outcomes on the basis of different intervention and participant types. To reduce the heterogeneity in outcomes between the included studies, this study conducted subgroup analyses of the characteristics of supplementation and assessments and analyzed several potential moderators.
Strengths and Limitations
The strength of this meta-analysis is the extensive literature search covering RCTs published over 12 years. Another advantage is that we performed subgroup analyses in relation to several potential moderators. Moreover, we analyzed several types of inflammation-related markers.
This study has some limitations. The first is the quality of the included studies. Three of the nine studies had some concerns of bias and another study had a high risk of bias; this may limit the confidence of the conclusion. In one study, fat mass was higher in the probiotic group [26]. In another study, body fat was significantly higher in the control group [20]. In one study, data were excluded because the blood volume was insufficient to enable analyses, and the loss during the follow-up was 31% [27]. In another study, one participant was excluded as an outlier; this study had over 20% loss during the follow-up [21]. Second, heterogeneity still existed regarding different intervention and participant types; the duration of the intervention, assessment time point, country, probiotic type, gender proportion, and sport type affected the evidence of our results. Differences between males and females included the type and intensity of physical activity. We did not review interactions related to the various species used in the supplements, or whether a synergistic impact on the markers could be anticipated. We mainly focused on probiotic supplementation and excluded studies that combined probiotics with other medications. Additionally, although we investigated the effects of probiotics on athletes, different types of sports were included, which may have resulted in different exercise intensities and thus altered the results. Finally, because the inflammation-related markers we assessed can only serve as a proxy of clinical effectiveness, the actual correlations between inflammatory markers and clinical symptoms, such as gastrointestinal syndromes and upper respiratory tract infection, remain unclear.
Conclusions
This systematic review included nine studies published from 2011 to 2022. The findings of this systematic review and meta-analysis suggest that probiotics result in significant positive changes in the levels of IFN-γ and salivary IgA but negative changes in the levels of IL-10 and TNF-α, demonstrating that probiotics play a role in the anti-inflammatory response. The levels of IL-1β, IL-2, IL-4, IL-6, and CRP did not exhibit significant changes. Our findings support that probiotics exert anti-inflammatory effects in intestinal chronic diseases and may be strain-specific in treating and preventing acute diarrhea. Future studies investigating the effects of probiotics can use larger samples, examine more types of exercise, and compare more types of probiotics.
Figure 1. Preferred Reporting Items for Systematic Reviews and Meta-Analyses flowchart of the search strategy.
Figure 2. Flowchart of the risk-of-bias domains.
Figure 3. Outcomes for tumor necrosis factor (TNF)-α, indicating that the probiotic group exhibited a significant negative change. An outlier study was noted.
Figure 5. Forest plots of the mean effect size for the subgroups with a shorter and longer intervention period for (a) TNF-α, (b) IL-6, (c) IL-8, and (d) IL-10.
Figure 6. Forest plots of the mean effect size for the subgroups with sample collection delayed to at least the next day of exercise and performed immediately after exercise for (a) TNF-α, (b) IL-6, (c) IL-8, and (d) IL-10.
Table 1. Characteristics of the studies included in the meta-analysis.
Table 2. Overall effect of inflammation-related markers. | 7,179.2 | 2022-08-31T00:00:00.000 | [
"Medicine",
"Biology"
] |
On the classical limit of a time-dependent self-consistent field system: analysis and computation
We consider a coupled system of Schr\"odinger equations, arising in quantum mechanics via the so-called time-dependent self-consistent field method. Using Wigner transformation techniques we study the corresponding classical limit dynamics in two cases. In the first case, the classical limit is only taken in one of the two equations, leading to a mixed quantum-classical model which is closely connected to the well-known Ehrenfest method in molecular dynamics. In the second case, the classical limit of the full system is rigorously established, resulting in a system of coupled Vlasov-type equations. In the second part of our work, we provide a numerical study of the coupled semi-classically scaled Schr\"odinger equations and of the mixed quantum-classical model obtained via Ehrenfest's method. A second order (in time) method is introduced for each case. We show that the proposed methods allow time steps independent of the semi-classical parameter(s) while still capturing the correct behavior of physical observables. It also becomes clear that the order of accuracy of our methods can be improved in a straightforward way.
1. Introduction. The numerical simulation of many chemical, physical, and biochemical phenomena requires the direct simulation of dynamical processes within large systems involving quantum mechanical effects. However, if the entire system is treated quantum mechanically, the numerical simulations are often restricted to relatively small model problems on short time scales due to the formidable computational cost. In order to overcome this difficulty, a basic idea is to separate the involved degrees of freedom into two different categories: one which involves variables that behave effectively classically (i.e., evolving on slow time- and large spatial scales), and one which encapsulates the (fast) quantum mechanical dynamics within a certain portion of the full system. For example, for a system consisting of many molecules, one might designate the electrons as the fast degrees of freedom and the atomic nuclei as the slow degrees of freedom.
Whereas separation of the whole system into a classical part and a quantum mechanical part is certainly not an easy task, it is, by now, widely studied in the physics literature and often leads to what is called time-dependent self-consistent field equations (TDSCF), see, e.g., [4,5,9,13,18,19,21,26] and the references therein. In the TDSCF method, one typically assumes that the total wave function of the system Ψ(X, t), with X = (x, y), can be approximated by Ψ(X, t) ≈ ψ(x, t)ϕ(y, t), (1.1) where x and y denote the degrees of freedom within a certain subsystem, only. The precise nature of this approximation thereby strongly depends on the concrete problem at hand (in particular, on the initial data and on the precise nature of the coupling between the two subsystems). Disregarding this issue for the moment, one might then, in a second step, hope to derive a self-consistently coupled system for ψ and ϕ and approximate it, at least partially, by the associated classical dynamics. In this article we will study a simple model problem for such a TDSCF system, motivated by [4,9,18,19,21], but one expects that our findings extend to other self-consistent models as well. We will be interested in deriving various (semi-) classical approximations to the considered TDSCF system, resulting in either a mixed quantum-classical model, or a fully classical model. As we shall see, this also gives a rigorous justification of what is known as the Ehrenfest method in the physics literature, cf. [5,9]. To this end, we shall be heavily relying on Wigner transformation techniques, developed in [11,20], which have been proved to be superior in many aspects to the more classical WKB approximations, see, e.g. [24] for a broader discussion. One should note that the use of Wigner methods to study the classical limit of nonlinear (self-consistent) quantum mechanical models is not straightforward and usually requires additional assumptions on the quantum state, cf. [22,20]. It turns out that in our case we can get by without them.
In the second part of this article we shall then be interested in designing an efficient and accurate numerical method which allows us to pass to the classical limit in the TDSCF system within our numerical algorithm. We will be particularly interested in the meshing strategy required to accurately approximate the wave functions, or to capture the correct physical observables (which are quadratic quantities of the wave function). To this end, we propose a second-order (in time) method based on an operator splitting and a spectral approximation of the TDSCF equations, as well as of the obtained Ehrenfest model. These types of methods have proven to be very effective in earlier numerical studies; see, e.g., [1,2,7,16,17] for previous results and [15] for a review of the current state of the art of numerical methods for semi-classical Schrödinger-type models. The reader may also refer to [3,27] for some recent results on the numerical analysis of Born-Oppenheimer molecular dynamics with connections to the Ehrenfest model. In comparison to the case of a single (semi-classical) nonlinear Schrödinger equation with power-law nonlinearities, where one has to use time steps comparable to the size of the small semi-classical parameter (see [1]), it turns out that in our case, despite the nonlinearity, we can rigorously justify that one can take time steps independent of the semi-classical parameter and still capture the correct classical limit of physical observables.
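As a minimal sketch of the kind of time-splitting spectral scheme referred to above, the following applies second-order Strang steps to a single semi-classically scaled equation $i\varepsilon\,\partial_t u = -\tfrac{\varepsilon^2}{2}\partial_{xx}u + V(x)u$ on a periodic grid; the grid, potential, and initial datum are illustrative choices, not the discretization introduced in Section 6.

```python
import numpy as np

def strang_step(u, V, eps, dt, xi):
    """One Strang splitting step for i*eps*u_t = -(eps^2/2)*u_xx + V*u
    on a periodic grid; xi are the (integer) Fourier wave numbers."""
    u = u * np.exp(-0.5j * dt * V / eps)          # half-step: potential
    u_hat = np.fft.fft(u)
    u_hat *= np.exp(-0.5j * dt * eps * xi**2)     # full step: kinetic (spectral)
    u = np.fft.ifft(u_hat)
    u = u * np.exp(-0.5j * dt * V / eps)          # half-step: potential
    return u

# illustrative setup on [0, 2*pi)
N, eps, dt = 256, 1e-2, 1e-2
x = 2 * np.pi * np.arange(N) / N
xi = np.fft.fftfreq(N, d=1.0 / N)                 # integer wave numbers
V = 1 - np.cos(x)
u = np.exp(-25 * (x - np.pi)**2) * np.exp(1j * np.sin(x) / eps)  # WKB-type datum
for _ in range(100):
    u = strang_step(u, V, eps, dt, xi)
```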
The rest of this paper is organized as follows: In Section 2, we present the considered TDSCF system and discuss some of its basic mathematical properties, which will be used later on. In Section 3, a brief introduction to Wigner transforms and Wigner measures is given. In Section 4 we study the semi-classical limit, resulting in a mixed quantum-classical limit system. In Section 5 the completely classical approximation of the TDSCF system is studied by means of two different limiting processes, both of which result in the same classical model. The numerical methods used for the TDSCF equations and the Ehrenfest equations are then introduced in Section 6. Finally, we study several numerical test cases in Section 7 in order to verify the properties of our methods.
2.1. Basic set-up and properties. In the following, we take $x \in \mathbb{R}^d$, $y \in \mathbb{R}^n$, with $d, n \in \mathbb{N}$, and denote by $\langle\cdot,\cdot\rangle_{L^2_x}$ and $\langle\cdot,\cdot\rangle_{L^2_y}$ the usual inner products in $L^2(\mathbb{R}^d_x)$ and $L^2(\mathbb{R}^n_y)$, respectively, i.e.,
$$\langle f, g\rangle_{L^2_x} = \int_{\mathbb{R}^d} \overline{f(x)}\, g(x)\, dx,$$
and analogously for $\langle\cdot,\cdot\rangle_{L^2_y}$.
The total Hamiltonian of the system, acting on $L^2(\mathbb{R}^{d+n})$, is assumed to be of the form
$$H^{\varepsilon,\delta} = -\frac{\delta^2}{2}\Delta_x - \frac{\varepsilon^2}{2}\Delta_y + V(x,y), \qquad (2.1)$$
where $V(x,y)\in\mathbb{R}$ is some (time-independent) real-valued potential. Typically, one has
$$V(x,y) = V_1(x) + V_2(y) + W(x,y), \qquad (2.2)$$
where $V_{1,2}$ are external potentials acting only on the respective subsystem and $W$ represents an internal coupling potential between the two subsystems. From now on, we shall assume that $V$ satisfies
(A1) $\quad 0 \leqslant V \in C^2_0(\mathbb{R}^d_x \times \mathbb{R}^n_y)$,
where here and in the following, we denote by $C_0$ the set of continuous functions vanishing at infinity.
Remark 2.1. For potentials bounded below, the requirement $V \geqslant 0$ is not really an assumption, but merely corresponds to fixing the point 0 on the energy axis.
In (2.1), the Hamiltonian is already written in dimensionless form, such that only two (small) parameters $\varepsilon, \delta > 0$ remain. In the following, they play the role of dimensionless Planck constants; dependence on these parameters will be denoted by superscripts. The TDSCF system at hand is then (formally, following [9]) the following system of self-consistently coupled Schrödinger equations:
$$ i\delta\,\partial_t \psi^{\varepsilon,\delta} = -\frac{\delta^2}{2}\Delta_x \psi^{\varepsilon,\delta} + \Upsilon^{\varepsilon,\delta}(x,t)\,\psi^{\varepsilon,\delta}, \qquad i\varepsilon\,\partial_t \phi^{\varepsilon,\delta} = -\frac{\varepsilon^2}{2}\Delta_y \phi^{\varepsilon,\delta} + \Lambda^{\varepsilon,\delta}(y,t)\,\phi^{\varepsilon,\delta}, \qquad (2.3)$$
where we denote by
$$ h^\delta(y) = -\frac{\delta^2}{2}\Delta_x + V(x,y) $$
the Hamiltonian of the subsystem represented by the $x$-variables (considered as the purely quantum mechanical variables), in which $y$ only enters as a parameter. It is obtained by substituting the ansatz (1.1) into the original Schrödinger equation and integrating over $y$ and $x$, respectively; see [9]. As a matter of fact, TDSCF systems may take various forms, which are equivalent to one another via certain gauge transformations; see [4,18,19,21] for broad discussions. Without loss of generality, we choose to study the specific TDSCF system (2.3).
For simplicity, we assume that at $t = 0$ the datum $\psi^\delta_{\rm in}$ depends only on $\delta$, and that $\phi^\varepsilon_{\rm in}$ depends only on $\varepsilon$, which means that the simultaneous dependence on both parameters is only induced by the time evolution. Remark 2.2. A typical example of initial data which satisfies this assumption (and all upcoming requirements of our analysis) is
$$\Psi(X,0) \approx \psi^\delta_{\rm in}(x)\,\phi^\varepsilon_{\rm in}(y) = a_1(x)\,e^{iS_1(x)/\delta}\; a_2(y)\,e^{iS_2(y)/\varepsilon},$$
where $S_1, S_2$ are smooth, real-valued phases and $a_1, a_2$ are (in general, complex-valued) amplitudes. In other words, $\Psi(X,0)$ is assumed to be approximated by (two-scale) WKB-type initial data in product form.
Finally, the coupling terms are explicitly given by
$$\Upsilon^{\varepsilon,\delta}(x,t) = \big\langle \phi^{\varepsilon,\delta}, V(x,\cdot)\,\phi^{\varepsilon,\delta}\big\rangle_{L^2_y}, \qquad \Lambda^{\varepsilon,\delta}(y,t) = \big\langle \psi^{\varepsilon,\delta}, h^\delta(y)\,\psi^{\varepsilon,\delta}\big\rangle_{L^2_x},$$
where, after formally integrating by parts,
$$\big\langle \psi^{\varepsilon,\delta}, h^\delta(y)\,\psi^{\varepsilon,\delta}\big\rangle_{L^2_x} = \frac{\delta^2}{2}\big\|\nabla_x\psi^{\varepsilon,\delta}\big\|^2_{L^2_x} + \big\langle \psi^{\varepsilon,\delta}, V(\cdot,y)\,\psi^{\varepsilon,\delta}\big\rangle_{L^2_x}. \qquad (2.5)$$
Throughout this work we will always interpret the term $\langle\psi^{\varepsilon,\delta}, h^\delta\psi^{\varepsilon,\delta}\rangle_{L^2_x}$ as above, i.e., in the weak sense. Both $\Upsilon^{\varepsilon,\delta}$ and $\Lambda^{\varepsilon,\delta}$ are time-dependent, real-valued potentials, computed self-consistently via the dynamics of $\phi^{\varepsilon,\delta}$ and $\psi^{\varepsilon,\delta}$, respectively. Note that the purely time-dependent part $\vartheta^{\varepsilon,\delta}(t) := \frac{\delta^2}{2}\|\nabla_x\psi^{\varepsilon,\delta}(t)\|^2_{L^2_x}$ could in principle be absorbed into the definition of $\phi^{\varepsilon,\delta}$ via a gauge transformation, i.e.,
$$\widetilde\phi^{\varepsilon,\delta}(y,t) := \phi^{\varepsilon,\delta}(y,t)\, e^{\frac{i}{\varepsilon}\int_0^t \vartheta^{\varepsilon,\delta}(s)\, ds}. \qquad (2.6)$$
For the sake of simplicity, we shall refrain from doing so, but this nevertheless shows that the two coupling terms are in essence of the same form. Also note that this gauge transform leaves any $H^s(\mathbb{R}^d)$-norm of $\phi^{\varepsilon,\delta}$ invariant (but clearly depends on the solution of the second equation within the TDSCF system). An important physical quantity is the total mass of the system, expressed through
$$m^{\varepsilon,\delta}_1(t) = \big\|\psi^{\varepsilon,\delta}(\cdot,t)\big\|^2_{L^2_x}, \qquad m^{\varepsilon,\delta}_2(t) = \big\|\phi^{\varepsilon,\delta}(\cdot,t)\big\|^2_{L^2_y},$$
where $m^{\varepsilon,\delta}_1, m^{\varepsilon,\delta}_2$ denote the masses of the respective subsystems. One can then prove that these are conserved by the time evolution of (2.3).
Proof. Assuming for the moment that both $\psi^{\varepsilon,\delta}$ and $\phi^{\varepsilon,\delta}$ are sufficiently smooth and decaying, we multiply the first equation in (2.3) by $\overline{\psi^{\varepsilon,\delta}}$ and formally integrate with respect to $x\in\mathbb{R}^d_x$. Taking the real part of the resulting expression, and keeping in mind that $\Upsilon^{\varepsilon,\delta}(x,t)\in\mathbb{R}$, yields $\frac{d}{dt}m^{\varepsilon,\delta}_1(t) = 0$, which, after an integration in time, is the desired result for $m^{\varepsilon,\delta}_1(t)$. By the same argument one can show the result for $m^{\varepsilon,\delta}_2(t)$. Integration in time in combination with a density argument then allows one to extend the result to more general solutions in $H^1$.
We shall, from now on, assume that the initial data are normalized such that $m^{\varepsilon,\delta}_1(0) = m^{\varepsilon,\delta}_2(0) = 1$. Using this normalization, the total energy of the system can be written as
$$E^{\varepsilon,\delta}(t) = \frac{\delta^2}{2}\big\|\nabla_x\psi^{\varepsilon,\delta}\big\|^2_{L^2_x} + \frac{\varepsilon^2}{2}\big\|\nabla_y\phi^{\varepsilon,\delta}\big\|^2_{L^2_y} + \big\langle \psi^{\varepsilon,\delta}\phi^{\varepsilon,\delta},\, V\,\psi^{\varepsilon,\delta}\phi^{\varepsilon,\delta}\big\rangle_{L^2_{x,y}}.$$
Note that, in view of our assumption (A1) on $V$, this is well-defined and that $E^{\varepsilon,\delta}(t)$ is, in fact, a sum of three non-negative terms.
Lemma 2.5. Let $V$ satisfy (A1) and assume that $\psi^{\varepsilon,\delta}\in C(\mathbb{R}_t; H^1(\mathbb{R}^d_x))$ and $\phi^{\varepsilon,\delta}\in C(\mathbb{R}_t; H^1(\mathbb{R}^n_y))$ solve (2.3). Then $E^{\varepsilon,\delta}(t) = E^{\varepsilon,\delta}(0)$ for all $t\in\mathbb{R}$. In other words, we have conservation of the total energy, which in itself implies a bound on the interaction energy (since $V\geqslant 0$) and on the kinetic energies of the respective subsystems. Note, however, that the energies of the respective subsystems are in general not conserved, unless $W\equiv 0$, i.e., $V(x,y) = V_1(x) + V_2(y)$.
Proof. Assuming, as before, that $\psi^{\varepsilon,\delta}$ and $\phi^{\varepsilon,\delta}$ are sufficiently regular (and decaying), the proof is a lengthy but straightforward calculation. Splitting the time derivative of the energy into its kinetic and potential contributions, denoted (I)-(IV), one shows, by using (2.3) and the commutator bracket $[A,B] := AB - BA$, that $(\mathrm{I}) + (\mathrm{III}) = 0$ and, analogously, that $(\mathrm{II}) + (\mathrm{IV}) = 0$. Hence an integration in time yields $E^{\varepsilon,\delta}(t) = E^{\varepsilon,\delta}(0)$. A density argument allows one to extend this result to more general solutions in $H^1$.
Existence of solutions. In this subsection, we shall establish global in-time existence of solutions to the TDSCF system (2.3). Since the dependence on $\varepsilon$ and $\delta$ does not play a role here, we shall suppress it for notational simplicity.
Proposition 2.6. Let $V$ satisfy (A1) and $\psi_{\rm in}\in H^1(\mathbb{R}^d_x)$, $\phi_{\rm in}\in H^1(\mathbb{R}^n_y)$. Then there exists a global strong solution $(\psi,\phi)\in C(\mathbb{R}_t; H^1(\mathbb{R}^{d+n}))$ of (2.3), satisfying the conservation laws for mass and energy, as stated above.
Proof. We shall first prove local (in-time) well-posedness of the initial value problem (2.3). To this end, we consider $\Psi(\cdot,t) = (\psi(\cdot,t), \phi(\cdot,t)) : \mathbb{R}^{d+n}\to\mathbb{C}^2$ and set $H^1(\mathbb{R}^{d+n}) := \{\Psi\in L^2(\mathbb{R}^{d+n}) : |\nabla\Psi|\in L^2(\mathbb{R}^{d+n})\}$. Using this notation, the TDSCF system (2.3) can be written as an evolution equation $i\partial_t\Psi = H\Psi + f(\Psi)$, where $H$ collects the linear (kinetic) part and $f$ the self-consistent coupling. Clearly, $H$ is the generator of a strongly continuous unitary Schrödinger group $U(t) := e^{-itH}$, which can be used to rewrite the system, via Duhamel's formula, as
$$\Psi(t) = U(t)\Psi_{\rm in} - i\int_0^t U(t-s)\, f(\Psi(s))\, ds. \qquad (2.9)$$
Following classical semigroup arguments, cf. [8], it suffices to show that $f(\Psi)$ is locally Lipschitz in $H^1(\mathbb{R}^{d+n})$ in order to infer the existence of a unique local in-time solution $\Psi\in C([0,T), H^1(\mathbb{R}^{d+n}))$. This is not hard to show, using the boundedness of $V$ (by assumption) to estimate both components of $f(\Psi)$ in $L^2$. The same reasoning can then be applied to $\|\nabla f(\Psi)\|_{L^2}$, by noticing $\nabla\langle\phi, V\phi\rangle_{L^2_y} = \langle\phi, \nabla_x V\,\phi\rangle_{L^2_y}$ and the analogous identity for $\langle\psi, h\psi\rangle_{L^2_x}$, in view of (2.5). Since $V$ satisfies (A1), these expressions are all well-defined. In summary, one obtains a Lipschitz constant $C = C(\|V\|_{L^\infty},\dots)$, and [8, Theorem 3.3.9] implies the existence of a $T = T(\|\Psi\|_{H^1}) > 0$ and a unique solution $\Psi\in C([0,T), H^1(\mathbb{R}^{d+n}))$ of (2.9). It is then also clear that this solution satisfies the conservation of mass and energy for all $t\in[0,T)$. Moreover, the quoted theorem also implies the blow-up alternative: if $T < +\infty$, then $\|\Psi(\cdot,t)\|_{H^1}\to\infty$ as $t\to T^-$ (2.10). However, keeping in mind the conservation laws for mass and energy stated in Lemmas 2.4 and 2.5, together with the fact that we assume w.l.o.g. $V(x,y)\geqslant 0$, we immediately infer that $\|\Psi(\cdot,t)\|_{H^1}\leqslant C$ for all $t\in\mathbb{R}$, and hence the blow-up alternative (2.10) implies global in-time existence of the obtained solution.
Remark 2.7. Note that this existence result rests on the fact that the term $\Lambda^{\varepsilon,\delta}(y,t) := \langle\psi^{\varepsilon,\delta}, h^\delta\psi^{\varepsilon,\delta}\rangle_{L^2_x}$ is interpreted in a weak sense, see (2.5). In order to interpret it in a strong sense, one would need to require higher regularity, in particular $\psi^{\varepsilon,\delta}\in H^2(\mathbb{R}^d_x)$.
3. Review of Wigner transforms and Wigner measures. The use of Wigner transforms and Wigner measures in the analysis of (semi-)classical asymptotics is, by now, very well established. In the following, we briefly recall the main results developed in [20,11] (see also [10,22,24] for further applications and discussions of Wigner measures). Denote by $\{f^\varepsilon\}_{0<\varepsilon\leqslant 1}$ a family of functions $f^\varepsilon\in L^2(\mathbb{R}^d_x)$, depending continuously on a small parameter $\varepsilon > 0$, and by $\widehat{f^\varepsilon}$ the corresponding Fourier transform. The associated $\varepsilon$-scaled Wigner transform is then given by [28]
$$ w^\varepsilon[f^\varepsilon](x,\xi) = \frac{1}{(2\pi)^d}\int_{\mathbb{R}^d} f^\varepsilon\Big(x-\frac{\varepsilon z}{2}\Big)\,\overline{f^\varepsilon\Big(x+\frac{\varepsilon z}{2}\Big)}\, e^{iz\cdot\xi}\, dz. \qquad (3.1)$$
Clearly, $w^\varepsilon$ is real-valued, and Plancherel's theorem together with a simple change of variables shows that $w^\varepsilon\in L^2(\mathbb{R}^d_x\times\mathbb{R}^d_\xi)$. The function $w^\varepsilon(x,\xi)$ acts as a quantum mechanical analogue of a classical phase-space distribution; however, $w^\varepsilon(x,\xi)\not\geqslant 0$ in general. A straightforward computation shows that the position density associated to $f^\varepsilon$ can be computed via
$$ |f^\varepsilon(x)|^2 = \int_{\mathbb{R}^d} w^\varepsilon(x,\xi)\, d\xi. \qquad (3.2)$$
Moreover, by taking higher-order moments in $\xi$ one (formally) finds the corresponding current and energy densities. In order to make these computations rigorous, the integrals on the right-hand side have to be understood in an appropriate sense, since $w^\varepsilon\notin L^1(\mathbb{R}^d_x\times\mathbb{R}^d_\xi)$ in general; cf. [11,20] for more details.
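For intuition, the following is a direct-quadrature sketch of the $\varepsilon$-scaled Wigner transform defined above, applied to a WKB-type state; the quadrature ranges, grid sizes, and test state are illustrative assumptions.

```python
import numpy as np

def wigner_eps(f_func, xs, xis, eps, zmax=20.0, nz=512):
    """Evaluate w(x, xi) = (2*pi)^(-1) * int f(x - eps*z/2) * conj(f(x + eps*z/2))
    * exp(i*z*xi) dz by direct quadrature, for a callable f_func (1D)."""
    z = np.linspace(-zmax, zmax, nz)
    w = np.empty((len(xs), len(xis)))
    for i, x in enumerate(xs):
        g = f_func(x - eps * z / 2) * np.conj(f_func(x + eps * z / 2))
        for j, xi in enumerate(xis):
            w[i, j] = np.real(np.trapz(g * np.exp(1j * z * xi), z)) / (2 * np.pi)
    return w

# illustrative: a WKB state a(x) e^{iS(x)/eps} with S(x) = x^2/2 concentrates
# near the Lagrangian manifold xi = S'(x) = x as eps -> 0
eps = 0.05
f = lambda x: np.exp(-x**2) * np.exp(1j * (x**2 / 2) / eps)
xs = np.linspace(-2, 2, 81)
xis = np.linspace(-2, 2, 81)
W = wigner_eps(f, xs, xis, eps)
```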
It has been proved in [20, Proposition III.1] that if $f^\varepsilon$ is uniformly bounded in $L^2(\mathbb{R}^d_x)$, i.e., $\sup_{0<\varepsilon\leqslant 1}\|f^\varepsilon\|_{L^2}\leqslant C_0$, where $C_0$ is an $\varepsilon$-independent constant, then the set of Wigner functions $\{w^\varepsilon\}_{0<\varepsilon\leqslant 1}$ is uniformly bounded in $\mathcal{A}'$. The latter is the dual of the Banach space
$$\mathcal{A} := \big\{\chi\in C_0(\mathbb{R}^d_x\times\mathbb{R}^d_\xi) : (\mathcal{F}_{\xi\to z}\chi)(x,z)\in L^1(\mathbb{R}^d_z; C_0(\mathbb{R}^d_x))\big\},$$
where $C_0$ denotes the space of continuous functions vanishing at infinity. More precisely, one finds that for any test function $\chi\in\mathcal{A}$, $|\langle w^\varepsilon,\chi\rangle|\leqslant C$, uniformly in $\varepsilon$. Thus, up to extraction of sub-sequences $\{\varepsilon_n\}_{n\in\mathbb{N}}$, with $\varepsilon_n\to 0^+$ as $n\to\infty$, there exists a limiting object $\mu\in\mathcal{A}'$ such that $w^{\varepsilon_n}\rightharpoonup\mu$ in $\mathcal{A}'$ weak-*. It turns out that the limit $\mu$ is in fact a non-negative, bounded Borel measure on phase space, the so-called Wigner measure. Remark 3.1. One easily checks that the Schwartz space $\mathcal{S}$ is in fact dense in $\mathcal{A}$. Thus, it would also be possible to state all the convergence results above in terms of convergence in $\mathcal{S}'(\mathbb{R}^d_x\times\mathbb{R}^d_\xi)$; this is the framework used in [11]. If, in addition, $f^\varepsilon$ is $\varepsilon$-oscillatory, i.e., $\limsup_{\varepsilon\to 0^+}\|\varepsilon\nabla f^\varepsilon\|_{L^2} < \infty$, then one also gets (up to extraction of sub-sequences) convergence of the position densities, i.e., for any test function $\sigma\in C_0(\mathbb{R}^d_x)$:
$$\int_{\mathbb{R}^d}|f^\varepsilon(x)|^2\,\sigma(x)\,dx \;\xrightarrow[\varepsilon\to 0^+]{}\; \int_{\mathbb{R}^{2d}}\sigma(x)\,d\mu(x,\xi).$$
Indeed, the Wigner measure $\mu$ is known to encode the classical limit of all physical observables. More precisely, for the expectation value of any Weyl-quantized operator $\mathrm{Op}_\varepsilon(a)$, corresponding to a sufficiently "nice" classical symbol $a(x,\xi)$, one has
$$\big\langle f^\varepsilon, \mathrm{Op}_\varepsilon(a) f^\varepsilon\big\rangle_{L^2} \;\xrightarrow[\varepsilon\to 0^+]{}\; \int_{\mathbb{R}^{2d}} a(x,\xi)\, d\mu(x,\xi),$$
where the right-hand side resembles the usual formula from classical statistical mechanics.
In order to describe the dynamics of Wigner measures, we first recall that if $\psi^\varepsilon(t)$ solves a (linear) semi-classical Schrödinger equation with potential $U(x)\in\mathbb{R}$, then $w^\varepsilon = w^\varepsilon[\psi^\varepsilon(t)]$ solves the Wigner equation
$$\partial_t w^\varepsilon + \xi\cdot\nabla_x w^\varepsilon + \Theta^\varepsilon[U]\, w^\varepsilon = 0,$$
where $\Theta^\varepsilon[U]$ is a pseudo-differential operator describing the influence of the potential $U(x)$. Explicitly, $\Theta^\varepsilon[U]$ is given by [20]
$$\Theta^\varepsilon[U]w^\varepsilon(x,\xi) = \frac{i}{(2\pi)^d}\int_{\mathbb{R}^{2d}} \delta U^\varepsilon(x,z)\, w^\varepsilon(x,\xi')\, e^{iz\cdot(\xi-\xi')}\, d\xi'\, dz.$$
Here, the symbol $\delta U^\varepsilon$ is found to be
$$\delta U^\varepsilon(x,z) = \frac{1}{\varepsilon}\Big(U\Big(x+\frac{\varepsilon z}{2}\Big) - U\Big(x-\frac{\varepsilon z}{2}\Big)\Big),$$
and, thus, under sufficient regularity assumptions on $U$, one consequently obtains $\delta U^\varepsilon(x,z)\to z\cdot\nabla_x U(x)$ as $\varepsilon\to 0^+$. It consequently follows that the measure $\mu(x,\xi,t)$ solves Liouville's equation on phase space, i.e.,
$$\partial_t\mu + \xi\cdot\nabla_x\mu - \nabla_x U(x)\cdot\nabla_\xi\mu = 0, \qquad (3.5)$$
in the sense of distributions. Here, $\mu_{\rm in}$ is the weak-* limit of $w^\varepsilon_{\rm in}$ in $\mathcal{A}'$, along sub-sequences of $(\varepsilon_n)_{n\in\mathbb{N}}$ (which, in principle, could all yield different limits).
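As a minimal numerical illustration of (3.5), the limiting measure can be propagated by pushing phase-space samples forward along the Hamiltonian characteristics; the symplectic-Euler stepping, the harmonic potential, and the sampling of a WKB-type initial measure below are all illustrative choices.

```python
import numpy as np

def push_forward(x, xi, grad_U, dt, nsteps):
    """Transport phase-space samples (x, xi) of the initial measure along the
    Hamiltonian characteristics x' = xi, xi' = -grad_U(x) (cf. (3.5)),
    using a symplectic (semi-implicit) Euler step."""
    for _ in range(nsteps):
        xi = xi - dt * grad_U(x)
        x = x + dt * xi
    return x, xi

# illustrative: harmonic potential U(x) = x^2/2; a WKB-type initial measure
# mu_in = |a(x)|^2 delta(xi - S'(x)) is sampled by pairs (x_j, S'(x_j))
grad_U = lambda x: x
xj = np.random.default_rng(0).normal(0.0, 0.5, 10_000)
xij = xj                                   # S(x) = x^2/2  =>  S'(x) = x
x_t, xi_t = push_forward(xj, xij, grad_U, dt=1e-3, nsteps=1000)
# moments of mu(t), e.g. the position density, follow by histogramming x_t
```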
Remark 3.2. In fact, a more general formula for the asymptotic, or semi-classical, expansion (in powers of $\varepsilon$) of any Wigner-transformed Schrödinger-type equation is available in [11, Proposition 1.8]. This formula will be used in the numerical algorithm described below.
Finally, we note that for sufficiently regular potentials $U$, one can improve the convergence statements and show that, indeed, $\mu(t) = \Phi_t\#\mu_{\rm in}$, where $\Phi_t : \mathbb{R}^{2d}\to\mathbb{R}^{2d}$ is the Hamiltonian flow associated to (3.5):
$$\dot x = \xi, \qquad \dot\xi = -\nabla_x U(x).$$
This allows one to prove uniqueness of the weak solution of (3.5), provided the initial measure $\mu_{\rm in}$ is the same for all sub-sequences.
4. The mixed quantum-classical limit. In this section we will investigate the semi-classical limit of the TDSCF system (2.3), which corresponds to the case $\varepsilon\to 0^+$ with $\delta = O(1)$ fixed. In other words, we want to pass to the classical limit in the equation for $\phi^{\varepsilon,\delta}$ only, while retaining the full quantum mechanical dynamics for $\psi^{\varepsilon,\delta}$.
The standing assumption from now on, until the end of this work, will be that the initial data $\phi^\varepsilon_{\rm in}\in H^1(\mathbb{R}^n_y)$ and $\psi^\delta_{\rm in}\in H^1(\mathbb{R}^d_x)$ satisfy
(A2) $\quad m^{\varepsilon,\delta}_1(0) + m^{\varepsilon,\delta}_2(0) + E^{\varepsilon,\delta}(0) \leqslant C$, uniformly in $\varepsilon,\delta\in(0,1]$.
In other words, the initial data are assumed to be such that the initial mass and the initial energy are uniformly bounded with respect to both $\varepsilon$ and $\delta$. In view of Lemma 2.4 and Lemma 2.5, this property consequently holds true for all times $t\geqslant 0$, and hence neither the mass nor the energy can become infinite in the classical limit. In addition, we will assume, for simplicity, that the individual masses of the two sub-systems are initially normalized such that $m^{\varepsilon,\delta}_1(0) = m^{\varepsilon,\delta}_2(0) = 1$. This normalization is henceforth preserved by the time evolution of (2.3).
Next, we introduce the $\varepsilon$-scaled Wigner transform of $\phi^{\varepsilon,\delta}$, i.e., $w^{\varepsilon,\delta}(y,\eta,t) := w^\varepsilon[\phi^{\varepsilon,\delta}(t)](y,\eta)$. (In this section, we could in principle suppress the dependence on $\delta$ completely, since it is fixed, but given that we will consider the subsequent limit $\delta\to 0^+$ in Section 5, we shall keep its appearance within the superscript.) Assumption (A2), together with the a priori estimates established in Lemma 2.4 and Lemma 2.5, then implies the uniform bound, for any $t\in\mathbb{R}$,
$$\big\|\phi^{\varepsilon,\delta}(\cdot,t)\big\|_{L^2_y} + \big\|\varepsilon\nabla_y\phi^{\varepsilon,\delta}(\cdot,t)\big\|_{L^2_y} \leqslant C_0,$$
where $C_0$ is a constant independent of $\varepsilon$ and $\delta$. In other words, $\phi^{\varepsilon,\delta}(\cdot,t)$ is $\varepsilon$-oscillatory for all times, and we consequently infer the existence of a limiting Wigner measure $\mu^{0,\delta}(y,\eta,t)\equiv\mu^\delta(y,\eta,t)$ such that (up to extraction of sub-sequences) for all $t\in[0,T]$ it holds that
$$w^{\varepsilon,\delta}(t)\;\rightharpoonup\;\mu^\delta(t) \quad\text{in } \mathcal{A}'(\mathbb{R}^n_y\times\mathbb{R}^n_\eta)\text{ weak-}*. \qquad (4.1)$$
The measure $\mu^\delta$ encodes the classical limit of the subsystem described by the $y$-variables only.
In order to proceed, we will need to strengthen our convergence results with respect to the $t$-variable. To this end, we recall that since $\phi^{\varepsilon,\delta}$ solves the second equation in the TDSCF system (2.3), $w^{\varepsilon,\delta}\equiv w^\varepsilon[\phi^{\varepsilon,\delta}]$ solves the corresponding Wigner-transformed equation
$$\partial_t w^{\varepsilon,\delta} + \eta\cdot\nabla_y w^{\varepsilon,\delta} + \Theta[\Lambda^{\varepsilon,\delta}]\, w^{\varepsilon,\delta} = 0, \qquad (4.2)$$
where $\Theta[\Lambda^{\varepsilon,\delta}]$ is defined as in Section 3, with associated symbol
$$\delta\Lambda^{\varepsilon,\delta}(y,z,t) = \frac{1}{\varepsilon}\Big(\Lambda^{\varepsilon,\delta}\Big(y+\frac{\varepsilon z}{2},t\Big) - \Lambda^{\varepsilon,\delta}\Big(y-\frac{\varepsilon z}{2},t\Big)\Big),$$
in view of the definition of $\Lambda^{\varepsilon,\delta}$ given in (2.5). Introducing the shorthand notation $V^{\varepsilon,\delta}(y,t) := \langle\psi^{\varepsilon,\delta}, V(\cdot,y)\,\psi^{\varepsilon,\delta}\rangle_{L^2_x}$, one checks that $\delta\Lambda^{\varepsilon,\delta} = \delta V^{\varepsilon,\delta}$. In particular, this shows that the purely time-dependent term $\vartheta^{\varepsilon,\delta}(t)$ appearing in (2.5) does not contribute to the symbol of the pseudo-differential operator $\Theta$, as can also be seen by using the time-dependent gauge transformation (2.6) from the beginning.
We can now prove the following lemma.
Lemma 4.2. Let Assumptions (A1) and (A2) hold. Then w ε,δ is equi-continuous in time and hence, up to extraction of sub-sequences, we have w εn,δ (t) ⇀ µ δ (t) in A′, locally uniformly in t. Proof. The proof follows along the lines of [20,11]. In order to infer the assertion of the Lemma it is sufficient to show that ∂ t w ε,δ ∈ L ∞ ((0, T); A′(R n y × R n η )). The latter implies time-equicontinuity of w ε,δ and hence, the Arzelà-Ascoli Theorem guarantees that there exists a subsequence {ε n } n∈N , with ε n → 0 + as n → ∞, such that w εn,δ converges uniformly on compact subsets of R t .
In order to prove the uniform bound on ∂ t w ε,δ we consider the weak formulation of (4.2), i.e.,

(d/dt) ⟨w ε,δ , χ⟩ = ⟨w ε,δ , η · ∇ y χ⟩ + ⟨Θ[Λ ε,δ ] w ε,δ , χ⟩,        (4.4)
for any test function χ ∈ A(R n y × R n η ). We shall only show how to bound the term ⟨Θ[Λ ε,δ ]w ε,δ , χ⟩, since the other term on the right-hand side of (4.4) can be treated similarly.
To this end, let χ ∈ A(R n y × R n η ) be a smooth test function with the property that its Fourier transform with respect to η, i.e., χ̂(y, z) = F η→z (χ)(y, z),
has compact support with respect to both y and z. Such test functions are dense in A, and hence it suffices to show the assertion for these χ only. A straightforward calculation (cf. the proof of [20, Theorem IV.1]) shows that |⟨Θ[Λ ε,δ ]w ε,δ , χ⟩| is bounded uniformly in ε and t, having in mind that V ∈ L ∞ , by assumption, and that ‖ψ ε,δ (·, t)‖ L² x = 1, ∀ t ∈ R, due to mass conservation. Since V satisfies (A1), the same argument also applies to ∇ y V ε,δ , which, by dominated convergence, is simply given by ∇ y V ε,δ (y, t) = ⟨ψ ε,δ (·, t), ∇ y V(·, y) ψ ε,δ (·, t)⟩ L² x . Having in mind the computation (3.2), we thus conclude that the bound holds uniformly in ε and t, due to the compact support of χ.
A similar argument can be used to obtain a uniform bound on |⟨η · ∇ y w ε,δ , χ⟩|, and thus (4.4) implies that ∂ t w ε,δ is bounded uniformly in ε and t, and we are done.
Next, we look at the nonlinear coupling term appearing in the first equation of our TDSCF system (2.3): Υ ε,δ (x, t) = ⟨ϕ ε,δ (·, t), V(x, ·) ϕ ε,δ (·, t)⟩ L² y . We first note that, as before, ‖Υ ε,δ ‖ L ∞ ⩽ ‖V‖ L ∞ , and an analogous bound holds for ∂ α x Υ ε,δ , |α| ⩽ 2. In addition, we have that Υ ε,δ is continuous in time, since by the triangle inequality |Υ ε,δ (x, t) − Υ ε,δ (x, s)| ⩽ 2 ‖V‖ L ∞ ‖ϕ ε,δ (·, t) − ϕ ε,δ (·, s)‖ L² y , where we have again used the fact that ‖ϕ ε,δ (·, t)‖ L² y = 1, for all t ∈ R. In view of our existence result stated in Proposition 2.6, the right hand side is continuous in time, and thus Υ ε,δ (x, ·) is too.
Proof. Having in mind that test functions of the form V(x, y) = γ(x)σ(y) are dense in C 2 0 (R d x × R n y ), the weak measure convergence (4.1) implies that Υ ε,δ (x, t) → Υ δ (x, t) := ∫∫ V(x, y) dµ δ (y, η, t), point-wise for all (x, t) ∈ R d+1 . In addition, the foregoing Lemma shows that this point-wise convergence, in fact, also holds uniformly on compact time-intervals. Using that µ δ ⩾ 0, we find ‖Υ δ ‖ L ∞ ⩽ ‖V‖ L ∞ , since ‖ϕ ε,δ (·, t)‖ L² = 1, for all t ∈ R. (Here we have used [20, Theorem III.1] in the second inequality.) An analogous bound also holds for ∂ α x Υ δ , |α| ⩽ 2, since V satisfies (A1). By applying the push-forward formula (4.8) with χ(x, y) = V(x, y), it is easy to see that Υ δ (t, ·) is continuous in time, yielding Υ δ ∈ C b (R t ; C 2 0 (R d x )). The following Proposition then shows that the solution of the first equation within the TDSCF system (2.3) stays close to the one where the potential Υ ε,δ is replaced by its classical limit Υ δ .
Proposition 4.4. Let V satisfy (A1) and let ψ ε,δ , ψ δ ∈ C(R t ; H 1 (R d x )) solve, respectively, the first equation of (2.3) with potential Υ ε,δ and the corresponding equation with Υ ε,δ replaced by Υ δ , subject to the same initial data. Then ‖ψ ε,δ (·, t) − ψ δ (·, t)‖ L² x → 0 as ε → 0 + , uniformly on compact time-intervals. Proof. Denote the Hamiltonian operators corresponding to the above equations by H 1 and H 2 . In view of our assumptions on the potential V and the existence result given in Proposition 2.6, we infer that H 1 and H 2 are essentially self-adjoint on L 2 (R d x ), and hence they generate unitary propagators U ε,δ 1 (t, s) and U δ 2 (t, s). Computing further, by Minkowski's inequality one obtains a bound on the difference of the two solutions which, firstly, implies continuity of the difference in L² norm w.r.t. t ∈ R and, secondly, yields the asserted convergence. In order to identify the limiting measure µ δ we shall derive the corresponding evolutionary system, by passing to the limit ε → 0 + in (4.2). The main difference to the case of a given potential V (as studied in, e.g., [11]) is that here V ε,δ itself depends on ε and is computed self-consistently from the solution ψ ε,δ . We nevertheless shall prove in the following proposition that the limit of Θ[V ε,δ ] as ε → 0 + is indeed what one would formally expect it to be.
where in the second inequality we have used the Cauchy-Schwarz inequality together with the fact that ||a|² − |b|²| ⩽ |a − b|(|a| + |b|) for any a, b ∈ C. The strong L²-convergence of ψ ε,δ stated in Proposition 4.4 therefore implies V ε,δ (y, t) → V δ (y, t) = ⟨ψ δ (·, t), V(·, y) ψ δ (·, t)⟩ L² x , pointwise in y and uniformly on compact time-intervals. Analogously, we infer the convergence of ∇ y V ε,δ (y, t). Next, we recall from (4.5) that V ε,δ and ∇ y V ε,δ are uniformly bounded in ε. Moreover, by using the Mean-Value Theorem, we can estimate the modulus of continuity of F ε,δ := −∇ y V ε,δ . This shows that F ε,δ is equicontinuous in y, and hence the Arzelà-Ascoli Theorem guarantees that there exists a subsequence such that F ε,δ converges, as ε → 0 + , uniformly on compact sets in y, t. For χ having compact support, the uniform convergence of F ε,δ then allows us to conclude Ξ ε,δ ε→0+ −→ i∇ y V δ (y, t) · F −1 z→η (z χ̂(y, z))(y, η) = F δ (y, t) · ∇ η χ(y, η), and since these χ are dense in A the result follows. Remark 4.6. One should note that, even though Λ ε,δ is a self-consistent potential, depending nonlinearly upon the solution ψ ε,δ , the convergence proof given above is very similar to the case [20, Theorem IV.2], which treats the classical limit of nonlinear Hartree-type models with smooth convolution kernels. In particular, we do not need to pass to the mixed state formulation which is needed to establish the classical limit in other self-consistent quantum dynamical models such as [22].
In summary, this leads to the first main result of our work, which shows that the solution to (2.3), as ε → 0 + (and with δ = O(1) fixed) converges to a mixed quantum-classical system, consisting of a Schrödinger equation for the x-variables and a classical Liouville equation for the y-variables.
i δ ∂ t ψ δ = −(δ²/2) Δ x ψ δ + Υ δ (x, t) ψ δ , ψ δ | t=0 = ψ in ,
∂ t µ δ + η · ∇ y µ δ − ∇ y V δ (y, t) · ∇ η µ δ = 0, µ δ | t=0 = µ in .        (4.7)

Here µ in is the initial Wigner measure obtained as the weak* limit of w ε [ϕ ε in ], and the self-consistent potentials are Υ δ (x, t) = ∫∫ V(x, y) dµ δ (y, η, t) and V δ (y, t) = ⟨ψ δ (·, t), V(·, y) ψ δ (·, t)⟩ L² x . Proof. The result follows from Proposition 4.4 and Proposition 4.5.
If the initial Wigner measure is concentrated at a single phase-space point, i.e., µ in = δ(y − y 0 ) ⊗ δ(η − η 0 ) with y 0 , η 0 ∈ R n , the Liouville equation in (4.7) propagates this concentration along a single classical trajectory (y(t), η(t)), and the mixed quantum-classical system becomes the Ehrenfest system (4.9). This is a well-known model in the physics and quantum chemistry literature, usually referred to as the Ehrenfest method. It has been studied in, e.g., [6,5] in the context of quantum molecular dynamics.
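For concreteness, the following display sketches the Ehrenfest system in LaTeX; it is our reconstruction, inferred from the one-dimensional version (6.10) that is solved numerically in Section 6.2, so the exact form of (4.9) in the original should be taken as authoritative:

\[
\begin{cases}
i\delta\,\partial_t \psi^{\delta} = -\dfrac{\delta^{2}}{2}\,\Delta_x \psi^{\delta} + V(x, y(t))\,\psi^{\delta}, \\[1ex]
\dot y(t) = \eta(t), \qquad \dot\eta(t) = -\langle \psi^{\delta}(\cdot,t),\, \nabla_y V(\cdot, y(t))\, \psi^{\delta}(\cdot,t) \rangle_{L^2_x},
\end{cases}
\]
subject to \(\psi^{\delta}|_{t=0} = \psi_{\mathrm{in}}\), \(y(0) = y_0\), \(\eta(0) = \eta_0\).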
Remark 4.8. A closely related scaling-limit is obtained in the case where the time-derivatives in both equations of (2.3) are scaled by the same factor ε. At least formally, this leads to an Ehrenfest-type model similar to (4.9), but with a stationary instead of a time-dependent Schrödinger equation, cf. [5,9]. In this case, connections to the Born-Oppenheimer approximation of quantum molecular dynamics become apparent, see, e.g., [25]. From the mathematical point of view this scaling regime combines the classical limit for the subsystem described by the y-variables with a time-adiabatic limit for the subsystem described in x. However, due to the nonlinear coupling within the TDSCF system (2.3) this scaling limit is highly nontrivial and will be the main focus of a future work.

5. The fully classical limit. In order to get a better understanding (in particular for the expected numerical treatment of our model), we will now turn to the question of how to obtain a completely classical approximation for the system (2.3). There are at least two possible ways to do so. One is to consider the limit δ → 0 + in the obtained mixed quantum-classical system (4.7), which in itself corresponds to the iterated limit ε → 0 + and then δ → 0 + of (2.3), cf. Section 5.1. Another possibility is to take ε = δ → 0 + in (2.3), which corresponds to a kind of "diagonal limit" in the ε, δ parameter space, cf. Section 5.2.
5.1. The classical limit of the mixed quantum-classical system. In this section we shall perform the limit δ → 0 + of the obtained mixed quantum-classical system (4.7). To this end, we first introduce the δ-scaled Wigner transform of ψ δ . Assumption (A2) and the results of Lemma 2.4 and Lemma 2.5 imply that ψ δ is a family of δ-oscillatory functions, i.e., sup 0<δ⩽1 ‖δ∇ x ψ δ (·, t)‖ L² x < ∞, and thus there exists a limiting measure ν ∈ M + (R d x × R d ξ ), such that for all t ∈ [0, T]:

W δ [ψ δ ](t) ⇀ ν(t) in A′ weak-*.        (5.2)

By Wigner transforming the first equation in the mixed quantum-classical system (4.7), we find that W δ [ψ δ ] ≡ W δ satisfies

∂ t W δ + ξ · ∇ x W δ = Θ[Υ δ ] W δ .

Having in mind that Υ δ ∈ C(R t ; C 2 0 (R d x )), the same arguments as in Lemma 4.2 then allow us to obtain a uniform bound on ∂ t W δ , and hence time-equicontinuity of W δ , which yields convergence locally uniformly in t, up to extraction of sub-sequences. Furthermore, our assumptions on V together with the weak measure convergence (5.2) imply pointwise convergence of the self-consistent potential V δ and of its gradient. By the same arguments as in the proof of Proposition 4.5, we find that this convergence holds uniformly on compact sets in y, t. With this in hand, we can prove the following result.
To obtain the convergence of the term Θ[Υ δ ]W δ , we note that with the convergence of the Wigner measure µ δ , which is obtained in Proposition 5.1, one gets Υ δ (x, t) → Υ(x, t) := ∫∫ V(x, y) dµ(y, η, t), point-wise for all x, t. Similar to previous cases, one concludes that, up to extraction of possibly another sub-sequence, Υ δ converges, as δ → 0 + , uniformly on compact sets in x, t.
With the same techniques as in the proof of Proposition 4.5, one can then derive the equation for the associated Wigner measure ν. The classical limit of the mixed quantum-classical system can thus be summarized as follows.
∂ t ν + ξ · ∇ x ν − ∇ x Υ(x, t) · ∇ ξ ν = 0, Υ(x, t) = ∫∫ V(x, y) dµ(y, η, t),
∂ t µ + η · ∇ y µ − ∇ y Λ(y, t) · ∇ η µ = 0, Λ(y, t) = ∫∫ V(x, y) dν(x, ξ, t).        (5.3)
Here ν in is the initial Wigner measure obtained as the weak* limit of W δ [ψ δ in ]. Remark 5.3. Note that system (5.3) admits a special solution of the form ν = δ(x − x(t)) ⊗ δ(ξ − ξ(t)), µ = δ(y − y(t)) ⊗ δ(η − η(t)), where x(t), y(t), ξ(t), η(t) solve the following Hamiltonian system:

ẋ = ξ, ξ̇ = −∇ x V(x, y), ẏ = η, η̇ = −∇ y V(x, y).

This describes the case of two classical point particles interacting with each other via V(x, y). Obviously, if V(x, y) = V 1 (x) + V 2 (y), the system completely decouples and one obtains the dynamics of two independent point particles under the influence of their respective external forces.
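As a quick illustration of this remark, here is a minimal Python sketch integrating the two-particle Hamiltonian system with the symplectic Störmer-Verlet (kick-drift-kick) scheme; the function names and the harmonic test potential are ours, not the paper's:

import numpy as np

def verlet_two_particles(x, xi, y, eta, dVdx, dVdy, dt, steps):
    # kick-drift-kick Verlet for: xdot = xi, xidot = -dV/dx,
    #                             ydot = eta, etadot = -dV/dy
    traj = np.empty((steps, 4))
    for i in range(steps):
        xi -= 0.5 * dt * dVdx(x, y); eta -= 0.5 * dt * dVdy(x, y)   # half kick
        x += dt * xi;                y += dt * eta                   # drift
        xi -= 0.5 * dt * dVdx(x, y); eta -= 0.5 * dt * dVdy(x, y)   # half kick
        traj[i] = x, xi, y, eta
    return traj

# harmonic coupling V(x, y) = (x + y)^2 / 2, as in the numerical tests below
path = verlet_two_particles(1.0, 0.0, -1.0, 0.5,
                            dVdx=lambda x, y: x + y,
                            dVdy=lambda x, y: x + y,
                            dt=1e-2, steps=1000)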
5.2. The classical limit of the TDSCF system. In this section we shall set ε = δ and consider the now fully semi-classically scaled TDSCF system, where only 0 < ε ⩽ 1 appears as a small dimensionless parameter, and where, as in (2.4), we denote the self-consistent potentials by Υ ε and V ε . We shall introduce the associated ε-scaled Wigner transformations w ε [ϕ ε ](y, η, t) and W ε [ψ ε ](x, ξ, t) defined by (3.1). From the a-priori estimates established in Lemmas 2.4 and 2.5, we infer that both ψ ε and ϕ ε are ε-oscillatory, and thus we immediately obtain the existence of the associated limiting Wigner measures µ, ν ∈ M + , such that, up to sub-sequences, w ε (t) ⇀ µ(t) and W ε (t) ⇀ ν(t) in A′ weak-*.
The associated Wigner transformed system is
∂ t W ε + ξ · ∇ x W ε = Θ[Υ ε ] W ε , ∂ t w ε + η · ∇ y w ε = Θ[V ε ] w ε .        (5.5)

By following the same arguments as before, we conclude that, up to extraction of sub-sequences, Υ ε and V ε converge uniformly on compact sets in (x, t) and (y, t), respectively. Consequently, one can show the convergence of the terms Θ[Υ ε ]W ε and Θ[V ε ]w ε by the same techniques as in the proof of Proposition 4.5. In summary, we obtain the following result: Theorem 5.4. Let Assumptions (A1) and (A2) hold. Then, for any T > 0, we have that W ε and w ε converge as ε → 0 + , respectively, to ν ∈ L ∞ (R t ; M + (R d x × R d ξ )) and µ ∈ L ∞ (R t ; M + (R n y × R n η )), which solve the classical system (5.3) in the sense of distributions.
In other words, we obtain the same classical limiting system for ε = δ → 0 + as in the iterated limit ε → 0 + and then δ → 0 + . In summary, we have established the diagram of semi-classical limits shown in Figure 1. (It is very likely that the missing "upper right corner" within Fig. 1 can also be completed by using arguments similar to those given above.)

Figure 1. The diagram of semi-classical limits: the iterated limit and the classical limit.
6. Numerical methods based on time-splitting spectral approximations.
In this section, we will develop efficient and accurate numerical methods for the semi-classically scaled TDSCF equations (2.3) and the Ehrenfest equations (4.9). The highly oscillatory nature of these models strongly suggests the use of spectral algorithms, which are the method of choice when dealing with semi-classical models, cf. [15]. In the following, we will design and investigate time-splitting spectral algorithms for both the TDSCF system and the Ehrenfest model, which will be shown to be second order in time. The latter is not trivial, due to the self-consistent coupling within our equations, and it will become clear that higher order methods can, in principle, be derived in a similar fashion. Furthermore, we will explore the optimal meshing strategy if only physical observables, and not the wave function itself, are being sought. In particular, we will show that one can take time steps independent of the semi-classical parameters if one only aims to capture the correct physical observables.
6.1. The SSP2 method for the TDSCF equations. In our numerical context, we will consider the semi-classically scaled TDSCF equations (2.3) in one spatial dimension and subject to periodic boundary conditions, i.e.,

iδ ∂ t ψ ε,δ = −(δ²/2) ∂ xx ψ ε,δ + Υ ε,δ ψ ε,δ , iε ∂ t ϕ ε,δ = −(ε²/2) ∂ yy ϕ ε,δ + Λ ε,δ ϕ ε,δ , x, y ∈ [a, b]. (6.1)
As before, we denote Υ ε,δ = ⟨ϕ ε,δ , V ϕ ε,δ ⟩ L² y and Λ ε,δ = ⟨ψ ε,δ , h δ ψ ε,δ ⟩ L² x . Clearly, a, b ∈ R have to be chosen such that the numerical domain [a, b] is sufficiently large in order to avoid the possible influence of boundary effects on our numerical solution. The numerical method developed below will work for all ε and δ, even if ε = o(1) or δ = o(1). In addition, we will see that it can be naturally extended to the multi-dimensional case.

6.1.1. The construction of the numerical method. We assume, on the computational domain [a, b], a uniform spatial grid in x and y, respectively: x j1 = a + j 1 ∆x, y j2 = a + j 2 ∆y, where j m = 0, · · · , N m − 1, N m = 2^{n m} for some positive integers n m , m = 1, 2, and ∆x = (b − a)/N 1 , ∆y = (b − a)/N 2 . We also assume discrete times t k = k∆t, k = 0, · · · , K, with a uniform time step ∆t.
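As a small illustration of this discretization, here is a Python sketch of the uniform grid and time steps; the helper name is ours, not the paper's:

import numpy as np

def uniform_grid(a, b, n):
    # N = 2**n points x_j = a + j*dx, j = 0, ..., N-1 (right endpoint omitted,
    # as is customary for periodic boundary conditions)
    N = 2**n
    dx = (b - a) / N
    return a + dx * np.arange(N), dx

x, dx = uniform_grid(-np.pi, np.pi, 8)   # e.g. the domain used in Section 7
y, dy = uniform_grid(-np.pi, np.pi, 8)
dt = 0.4 / 64                            # a uniform time step, as in Example 4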
The construction of our numerical method for (6.1) is based on the following operator splitting technique. For every time step t ∈ [t n , t n+1 ], we solve the kinetic step (6.2) and the potential step (6.3), possibly for some fractional time steps in a specific order. For example, if Strang's splitting is applied, then the operator splitting error is clearly second order in time (for any fixed value of ε). However, in the semi-classical regime ε → 0 + , a careful calculation shows that the operator splitting error is actually O(∆t²/ε), cf. [2,16]. Next, let U n j1 be the numerical approximation of the wave function ψ ε,δ at x = x j1 and t = t n . Then, the kinetic step for ψ ε,δ can be solved exactly in Fourier space via

U n+1 j1 = (1/N 1 ) Σ l1 e^{−iδ k² l1 ∆t/2} Û n l1 e^{i k l1 (x j1 − a)} ,

where Û n l1 are the Fourier coefficients of U n j1 and k l1 = 2πl 1 /(b − a). Similarly, the kinetic step for ϕ ε,δ can also be solved exactly in Fourier space. On the other hand, for the potential step (6.3) with t 1 < t < t 2 , we formally find

ψ ε,δ (x, t 2 ) = exp(−(i/δ) ∫ t2 t1 Υ ε,δ (x, s) ds) ψ ε,δ (x, t 1 ), (6.4)

ϕ ε,δ (y, t 2 ) = exp(−(i/ε) ∫ t2 t1 Λ ε,δ (y, s) ds) ϕ ε,δ (y, t 1 ), (6.5)

where 0 < t 2 − t 1 ⩽ ∆t. The main problem here is, of course, that the mean field potentials Υ ε,δ and Λ ε,δ depend on the solutions ψ ε,δ , ϕ ε,δ themselves. The key observation is, however, that within each potential step, the mean field potential Υ ε,δ is in fact time-independent (at least if we impose the assumption that the external potential V = V(x, y) does not explicitly depend on time). Indeed, a simple calculation shows that ∂ t Υ ε,δ = 0 within the potential step.
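Before turning to the self-consistent potential Λ ε,δ , here is a minimal Python sketch of the Fourier-space kinetic step described above; h stands for the scaling parameter of the respective equation (δ for ψ ε,δ , ε for ϕ ε,δ ), and the multiplier exp(−ih k²∆t/2) follows the free-Schrödinger form assumed here. This is a sketch under those assumptions, not the authors' implementation:

import numpy as np

def kinetic_step(U, dt, h, a, b):
    # Exact solution of i*h*u_t = -(h^2/2)*u_xx on [a, b] with periodic
    # boundary conditions, advanced over one step of size dt.
    N = U.size
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=(b - a) / N)  # wavenumbers k_l
    return np.fft.ifft(np.exp(-0.5j * h * k**2 * dt) * np.fft.fft(U))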
In other words, (6.4) simplifies to ψ ε,δ (x, t 2 ) = e^{−(i/δ)(t 2 − t 1 ) Υ ε,δ (x, t 1 )} ψ ε,δ (x, t 1 ), (6.6) which is an exact solution formula for ψ ε,δ at t = t 2 .
The same argument does not work for the other self-consistent potential Λ ε,δ , since formally ∂ t Λ ε,δ = ⟨∂ t ψ ε,δ , h δ ψ ε,δ ⟩ L² x + ⟨ψ ε,δ , h δ ∂ t ψ ε,δ ⟩ L² x ≠ 0. However, since Λ ε,δ (y, t) = ⟨ψ ε,δ , h δ ψ ε,δ ⟩ L² x , the formula (6.6) for ψ ε,δ allows us to evaluate Λ ε,δ (y, t) for any t 1 < t < t 2 . Moreover, the above expression for ∂ t Λ ε,δ , together with the Cauchy-Schwarz inequality and the energy estimate in Lemma 2.5, directly implies that ∂ t Λ ε,δ is O(1). Thus, one can use standard numerical integration methods to approximate the time-integral within (6.5). For example, one can use the trapezoidal rule to obtain

ϕ ε,δ (y, t 2 ) ≈ exp(−(i/ε) ((t 2 − t 1 )/2) [Λ ε,δ (y, t 1 ) + Λ ε,δ (y, t 2 )]) ϕ ε,δ (y, t 1 ). (6.7)

Obviously, this approximation introduces a phase error of order O(∆t²/ε), which is comparable to the operator splitting error. This is the reason why we call the outlined numerical method SSP2, i.e., a second order Strang-splitting spectral method.
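The potential step can then be sketched as follows, again in Python and under explicit assumptions: we take δ = ε (the regime of Examples 2 and 3 below), sample V on the tensor grid V[j1, j2] = V(x_{j1}, y_{j2}), and, when evaluating Λ ε,δ , keep only the potential part of h δ , since the x-kinetic part contributes a y-independent phase only (cf. the gauge argument around (2.5)-(2.6)):

import numpy as np

def potential_step(psi, phi, dt, eps, V, dx, dy):
    # Mean-field potential Upsilon(x) = <phi, V(x,.) phi>_{L^2_y}; it is
    # time-independent within the step, so psi is advanced exactly, cf. (6.6).
    Upsilon = (V * np.abs(phi)**2).sum(axis=1) * dy
    psi_new = np.exp(-1j * dt * Upsilon / eps) * psi
    # Lambda(y, t) = <psi(t), V(., y) psi(t)>_{L^2_x} (potential part only);
    # its time integral is approximated by the trapezoidal rule, cf. (6.7).
    Lam1 = (V * (np.abs(psi)**2)[:, None]).sum(axis=0) * dx
    Lam2 = (V * (np.abs(psi_new)**2)[:, None]).sum(axis=0) * dx
    phi_new = np.exp(-0.5j * dt * (Lam1 + Lam2) / eps) * phi
    return psi_new, phi_new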
Remark 6.1. In order to obtain a higher order splitting method to the equations, one just needs to use a higher order quadrature rule to approximate the time-integral within (6.5).
6.1.2. Meshing strategy. In this subsection, we will analyze the dependence of the numerical error on the semi-classical parameters, by applying the Wigner transformation to the scheme proposed above. In particular, this yields an estimate on the approximation error for (the expectation values of) physical observables, due to (3.3). Our analysis thereby follows along the same lines as in Refs. [2,16]. For the sake of simplicity, we shall only consider the differences between their cases and ours. We denote the Wigner transforms W ε,δ ≡ W δ [ψ ε,δ ] and w ε,δ = w ε [ϕ ε,δ ], which satisfy the Wigner transformed system of (6.1). We are interested in analyzing two special cases: δ = O(1) and ε ≪ 1, or δ = ε ≪ 1. These correspond to the semi-classical limits we showed in Theorem 4.7 and Theorem 5.4.
We first consider the case δ = ε ≪ 1, where the Wigner transformed TDSCF system reduces to (5.5). For convenience, we suppress the parameter δ and write the potential step for ϕ ε accordingly; the associated Wigner transform w ε then satisfies the corresponding transported equation in the potential step. In view of (6.7), if we denote the approximation on the right-hand side by ϕ̃ ε , then ϕ̃ ε is the exact solution to the equation iε ∂ t ϕ̃ ε = G ε (y) ϕ̃ ε , where G ε (y) = (1/2)(Λ ε (y, t 1 ) + Λ ε (y, t 2 )).
In summary, we conclude that for the SSP2 method, the approximation within the potential step results in a one-step error which is bounded by O(ε∆t + ∆t³). Thus, for fixed ∆t, and as ε → 0 + , this one-step error in computing the physical observables is dominated by O(∆t³), and we consequently can take ε-independent time steps for accurately computing the semi-classical behavior of physical observables. By standard numerical analysis arguments, cf. [2,16], one consequently finds that the SSP2 method introduces an O(∆t²) error in computing the physical observables for ε ≪ 1 within an O(1) time interval. Similarly, one can obtain the same results when δ is fixed while ε ≪ 1. We remark that, if a higher order operator splitting is applied to the TDSCF equations, and if a higher order quadrature rule is applied to approximate formula (6.5), one can obviously expect higher order convergence in the physical observables.
6.2. The SVSP2 method for the Ehrenfest equations. In this section, we consider the one-dimensional Ehrenfest model obtained in Section 4. More precisely, we consider a (semi-classical) Schrödinger equation coupled with Hamilton's equations for a classical point particle, i.e., (6.10), with initial conditions ψ ε | t=0 = ψ ε in (x), y| t=0 = y 0 , η| t=0 = η 0 , and subject to periodic boundary conditions. Inspired by the SSP2 method, we shall present a numerical method to solve (6.10) which is second order in time and works for all 0 < δ ⩽ 1.
As before, we assume a uniform spatial grid x j = a + j∆x, where N = 2^{n 0} for a positive integer n 0 , and ∆x = (b − a)/N. We also assume uniform time steps t k = k∆t, k = 0, · · · , K, for both the Schrödinger equation and Hamilton's ODEs. For every time step t ∈ [t n , t n+1 ], we split the system (6.10) into a kinetic step

iδ ∂ t ψ ε = −(δ²/2) ∂ xx ψ ε , ẏ = η, η̇ = 0, (6.11)

and a potential step (6.12). We remark that the resulting operator splitting method for Hamilton's equations is one of the classical symplectic integrators; the reader may refer to [12] for a systematic discussion.
As before, the kinetic step (6.11) can be solved analytically. On the other hand, within the potential step (6.12), we see that

∂ t V(x, y(t)) = ∇ y V · ẏ(t) = 0, (6.13)

i.e., V(x, y(t)) is indeed time-independent. Moreover, the semi-classical force is given by −⟨ψ ε , ∇ y V(·, y(t)) ψ ε ⟩ L² x . Now, we can use the first equation in (6.12) and the fact that V(x, y(t)) ∈ R to infer that the first two terms on the right-hand side of its time-derivative cancel each other. We thus find, in view of (6.13), that the semi-classical force is also time-independent within each potential step. In summary, for t ∈ [t 1 , t 2 ], the potential step admits the exact solutions

ψ ε (x, t) = e^{−(i/δ)(t − t 1 ) V(x, y(t 1 ))} ψ ε (x, t 1 ), y(t) = y(t 1 ), η(t) = η(t 1 ) − (t − t 1 ) ⟨ψ ε (·, t 1 ), ∇ y V(·, y(t 1 )) ψ ε (·, t 1 )⟩ L² x .

This implies that, for this type of splitting method, there is no numerical error in time within the kinetic or the potential steps; thus, we only pick up an error of order O(∆t²/δ) in the wave function and an error of order O(∆t²) in the classical coordinates, induced by the operator splitting. Standard arguments, cf. [2,16], then imply that one can use δ-independent time steps to correctly capture the expectation values of physical observables. We call this proposed method SVSP2, i.e., a second order Strang-Verlet splitting spectral method. It is second order in time, but can easily be improved by using higher order operator splitting methods for the Schrödinger equation and for Hamilton's equations.
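A one-step sketch of SVSP2 in Python, reusing kinetic_step from above; the callables V(x, y) and dVdy(x, y) (the y-derivative of the potential) are our assumptions, and the Ehrenfest force uses ψ at the beginning of the potential step, which is exact since the force is time-independent within the step:

import numpy as np

def svsp2_step(psi, y, eta, dt, delta, V, dVdy, x, dx, a, b):
    # kinetic half-step: free Schroedinger flow for psi, free streaming for y
    psi = kinetic_step(psi, 0.5 * dt, delta, a, b)
    y = y + 0.5 * dt * eta
    # potential step: y is frozen, cf. (6.13), so both updates are exact
    force = -(dVdy(x, y) * np.abs(psi)**2).sum() * dx
    psi = np.exp(-1j * dt * V(x, y) / delta) * psi
    eta = eta + dt * force
    # kinetic half-step
    psi = kinetic_step(psi, 0.5 * dt, delta, a, b)
    y = y + 0.5 * dt * eta
    return psi, y, eta

For the harmonic coupling of the tests below one would pass, e.g., V = lambda x, y: 0.5*(x + y)**2 and dVdy = lambda x, y: x + y.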
7. Numerical tests. In this section, we test the SSP2 method for the TDSCF equations and the SVSP2 method for the Ehrenfest system. In particular, we want to test the methods after the formation of caustics, which generically appear in the WKB approximation of the Schrödinger wave functions, cf. [24]. We also test the convergence properties in time and with respect to the spatial grids, for the wave functions and for the following physical observable densities: the particle density ρ ε = |ψ ε |² and the current density J ε = ε Im((ψ ε )* ∂ x ψ ε ) associated to ψ ε (and analogously for ϕ ε ).
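A sketch of how these observable densities can be evaluated spectrally in Python; the definitions ρ ε = |ψ ε |² and J ε = ε Im((ψ ε )* ∂ x ψ ε ) are the standard semi-classical ones restated above:

import numpy as np

def observable_densities(psi, eps, a, b):
    # particle density rho and current density J = eps*Im(conj(psi)*psi_x)
    N = psi.size
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=(b - a) / N)
    psi_x = np.fft.ifft(1j * k * np.fft.fft(psi))   # spectral derivative
    return np.abs(psi)**2, eps * np.imag(np.conj(psi) * psi_x)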
7.1. SSP2 method for the TDSCF equations. We first study the behavior of the proposed SSP2 method. In Example 1, we fix δ and test the SSP2 method for various ε. In Example 2 and Example 3, we take δ = ε and assume the same spatial grids in x and y.
Example 1. In this example, we fix δ = 1 and test the SSP2 method for various ε = o(1). We want to test the convergence in the spatial grids and in time, and whether ε-independent time steps can be taken to calculate accurate physical observables. Assume x, y ∈ [−π, π] and let V(x, y) = (1/2)(x + y)². The initial conditions are of WKB form: ψ δ in (x) = e^{−2(x+0.1)²} e^{i sin(x)/δ} , ϕ ε in (y) = e^{−5(y−0.1)²} e^{i cos(y)/ε} .
In the following, all our numerical tests are computed until the stopping time T = 0.4. The results show that ε-independent time steps can be taken to obtain accurate physical observables, but not accurate wave functions. In the corresponding figures, the dashed line represents the numerical solution with ε-independent ∆t, and the solid line represents the numerical solution with ε-dependent ∆t. From these figures, we observe numerical convergence (in the weak sense) to the limit solutions after caustic formation, and that the numerical scheme can capture the physical observables with ε-independent ∆t.
Next, we come back to the harmonic coupling potential V(x, y) = (1/2)(x + y)², which ensures a nontrivial coupling between the two sub-systems. We again want to test whether ε-independent ∆t can be taken to correctly capture the behavior of physical observables. We solve the TDSCF equations with resolved spatial grids, which means ∆x = O(ε). The numerical solutions with ∆t = O(ε) are used as the reference solutions. For ε = 1/256, 1/512, 1/1024, 1/2048, 1/4096, we fix ∆t = 0.005 and compute until T = 0.54. The l² norms of the error in the wave functions and of the error in the position densities are calculated. We see in Figure 7 that the former increases as ε → 0 + , but the error in the physical observables does not change noticeably.
Example 3. In this example, we want to test the convergence in the spatial grid ∆x and in the time step ∆t as ε = δ → 0 + . According to the previous analysis, the spatial oscillations of wavelength O(ε) need to be resolved. On the other hand, if the time oscillation with frequency O(1/ε) is resolved, one gets an accurate approximation even for the wave function itself (not only for quadratic quantities of it). Unresolved time steps of order O(1) can still give correct physical observable densities. More specifically, one expects second order convergence with respect to time in both the wave functions and the physical observables, and spectral convergence in the respective spatial variables. Here we take ∆x = ε/8; the reference solution is computed with the same ∆x, but ∆t = 0.54ε/4. These results show that ε-independent time steps can be taken to obtain accurate physical observables, but not accurate wave functions. To test the spatial convergence, we take ε = 1/256, and the reference solution is computed on a well resolved mesh with ∆x = 2πε/64 and ∆t = 0.4ε/16, until T = 0.4. Then, we compute with the same time step, but with different spatial grids. The results are illustrated in Figure 8. We observe that, when ∆x = O(ε), the error decays quickly and becomes negligibly small as ∆x decreases. However, when the spatial grid does not resolve the ε-scale well, the method actually gives solutions with O(1) error.
At last, to test the convergence in time, we take ε = 1/1024, and the reference solution is computed on a well resolved mesh with ∆x = 2πε/16 and ∆t = 0.4/8192, till T = 0.4. Then, we compute with the same spatial grid, but with different time steps. The results are illustrated in Figure 9. We observe that the method is stable even if ∆t ≫ ε. Moreover, we get second order convergence in the wave functions as well as in the physical observable densities.

7.2. SVSP2 method for the Ehrenfest equations. Now we solve the Ehrenfest equations (6.10) by the SVSP2 method. We assume x ∈ [−π, π], and assume periodic boundary conditions for the electronic wave function.
Example 4. In this example, we want to test whether δ-independent time steps can be taken to capture the correct physical observables, and the convergence in the time step, which is expected to be of second order. The potential is again V(x, y) = (1/2)(x + y)². (The results of the previous subsection, cf. Figure 9, show that when δ = ε ≪ 1, the SSP2 method is unconditionally stable and second order accurate in time.)
First, we test whether δ-independent ∆t can be taken to capture the correct physical observables. We solve the equations with resolved spatial grids, which means ∆x = O(δ). The numerical solutions with ∆t = O(δ) are used as the reference solutions. For δ = 1/256, 1/512, 1/1024, 1/2048, 1/4096, we fix ∆t = 0.4/64 and compute until T = 0.4. The l² norms of the error in the wave functions, the error in the position densities, and the error in the coordinates of the nucleus are calculated. We see in Figure 10 that the error in the wave functions increases as δ → 0 + , but the errors in the physical observables and in the classical coordinates do not change notably.
Next, we test the convergence with respect to the time step in the wave function, the physical observables, and the classical coordinates. We take δ = 1/1024, and the reference solution is computed on a well resolved mesh with ∆x = 2πδ/16 and ∆t = 0.4/8192, till T = 0.4. Then, we compute with the same spatial grid, but with different time steps. The results are illustrated in Figure 11. We observe that the method is stable even if ∆t ≫ δ, and clearly, we get second order convergence in the wave functions, the physical observable densities, and the classical coordinates. These results show that the SVSP2 method is unconditionally stable and second order accurate in time. | 12,534.2 | 2014-06-15T00:00:00.000 | ["Mathematics", "Physics"] |
Complexity Analysis of Prefabrication Contractors’ Dynamic Price Competition in Mega Projects with Different Competition Strategies
This paper considers a repeated duopoly game of prefabrication contractors in mega infrastructure projects and assumes the contractors exhibit bounded rationality. Based on the theory of bifurcation of dynamical systems, a dynamic price competition model is constructed considering different competition strategies. Accordingly, the stability of the equilibrium point of the system is discussed considering different initial market capacities, and numerical simulation is performed. The results show the system has a unique equilibrium solution when initial capacity is high and the parameters meet certain conditions. The contractors’ price adjustment strategy has an important influence on system stability. However, an overly aggressive competition strategy is not conducive to system stability. Moreover, the system is sensitive to initial parameter values.
Introduction
Recently, the development of urbanization, technology, and the economy, as well as the demand for convenience, have triggered worldwide enthusiasm for building mega infrastructure projects, such as high-speed railways, the Hong Kong-Zhuhai-Macao Bridge in China, or the new Lamu Port transport corridor in Africa [1,2]. Mega projects often have a high degree of complexity, use large amounts of resources, require a complex construction environment, and have high technical difficulty [3]. The interaction with the surrounding environment during the construction process results in new complexities. Therefore, the owners of mega infrastructure projects have to pay attention to the selection of suppliers, as well as adhere to strict requirements on project quality, duration, environmental protection, and so on [4]. Traditional on-site open construction has increasingly failed to meet the current requirements of owners, and prefabricated production has gradually become a popular trend [5]. For example, the demand for steel in the Hong Kong-Zhuhai-Macao Bridge is 420,000 tons, and most of it needs to be processed and prefabricated in a factory to meet the requirements of the owner. Compared with traditional on-site construction, prefabrication has the advantage of meeting the owner's requirements on the one hand and the sustainability requirement on the other. Prefabrication allows some of the on-site processes to move to a stable factory, thus reducing pollution, as well as saving energy, water, and human resources. For example, the Hong Kong-Zhuhai-Macao Bridge realized the splicing of large sections of the steel box girder at the construction site. One of the large block steel box girders used in the navigable span bridge reached 134.45 meters. Splicing of the small blocks was carried out inside the factory, thus greatly reducing the amount of work on site. The less offshore construction there is, the lower the risks and the more environmentally friendly the project will be.
Different from traditional on-site production, prefabrication is an off-site construction method that produces key components of the project in a professional factory using a standardized manufacturing process and then transports them to the project construction site for assembly and further construction [6][7][8]. As such, prefabricated production improves product reliability and allows for a more accurate prediction of the construction period. As a result, it is of great help to mega infrastructure projects, which are sensitive to the large number of products used, tight durations, and environmental requirements [9,10]. Some scholars have discussed these issues from various perspectives, such as the techniques, environmental influence, and risks of prefabrication. For instance, Li et al. [11] used social network analysis to identify and investigate potential networks of stakeholder-related risk factors in house prefabrication cases, and they [12] also proposed a quantitative prefabrication evaluation technology that can minimize the impact of construction waste and subsequent waste disposal activities in an empirical study on China. Jaillon and Poon [13] studied the design of the life cycles of deconstruction and industrialization through literature review and case study analysis. However, there are still some barriers to the use of prefabrication techniques [14]. As such, Mao et al. [15] pointed out that the cost of prefabrication can be higher than that of on-site production by 27% to 109%. Additionally, high transportation costs, R&D complexity, and design changes are obstacles for prefabrication [16]. To overcome these obstacles, there is a need for R&D work in the production process, which will likely prompt suppliers to cooperate for innovation and gain higher profits when they produce the same key components. Further, Shi et al. [17] explored the cooperation tendency of multiple suppliers in mega construction projects based on an evolutionary game model. Cheung et al. [18] identified the cooperative and aggressive drivers that facilitate cooperative contracting in construction projects. Saad et al. [19] found that supply chain management methods have been adopted increasingly to establish long-term strategic cooperation relationships in construction. However, spillover effects will occur in the cooperation process. Dussauge et al. [20] pointed out that it is difficult for participants to control the boundary of knowledge investment. Hsuan and Mahnke [21] found that knowledge and reputation spillovers can generally bring benefits to suppliers.
In many developing countries, there are many mega projects under construction, and the prefabrication technology of key components is monopolized by a few companies, which makes it both possible and meaningful to study the price competition of contractors. For oligopolistic competition, the most well-known models are the Cournot and Bertrand models [22,23]. The Cournot model was first proposed in 1838 to study the output competition of two companies. In 1883, Bertrand proposed a price competition model. For dynamic price competition under bounded rationality, the Cournot model has been researched extensively [24][25][26][27][28]. Recently, the dynamics of the Bertrand model have drawn increasing attention [29,30]. Researchers have adopted a variety of adjustment mechanisms, including horizontal product differentiation [31] and a gradient adjustment mechanism [32], among others. Tu and Wang [33] proposed a dynamic triopoly competition game considering two-stage R&D input.
As mentioned before, prefabricated production of key components in mega projects has great advantages and has become a trend. Due to the characteristics of mega projects, only a limited number of contractors in the market have the necessary qualifications and capabilities, resulting in an oligopoly. For example, there are only two contractors for the prefabricated production of the steel box girder of the Hong Kong-Zhuhai-Macao Bridge: the China Railway Shanhaiguan Bridge Group Co., Ltd. and the Wuhan Heavy Engineering Co., Ltd. Due to the long-term nature of mega project construction, especially in developing countries, enterprise competition under monopolistic conditions will exist for a long time. Therefore, it is meaningful to deal with this problem from a long-term evolutionary perspective. The motivation of this paper is to provide an analytical method to investigate the impact of pricing strategies on the competition equilibrium and the evolution path of the market for prefabrication contractors under different circumstances. The conclusions can provide advice and suggestions for contractors' pricing strategies over long-term scales and under uncertain conditions.
In this paper, we focus on the competition among mega project contractors and establish a price competition model considering the bounded rationality of the contractors. Meanwhile, we also consider the influence of the spillover effect on cost reduction and use nonlinear dynamics to study the price competition model. The rest of this paper is organized as follows. Section 2 establishes a duopoly price competition model to describe the decision characteristics of contractors. In Section 3, we solve the model and analyze the stability of the equilibrium points. In Section 4, the system evolution is illustrated through numerical simulation. Finally, the conclusions of the paper are presented in Section 5.
Model
In the supply chain of mega projects, the prefabrication contractors who have the ability to provide key components on the market are often limited. As Mao et al. [34] indicated, technological monopoly is an important factor in prefabricated production. As a matter of fact, there is a technical monopoly in the steel box girder of the Hong Kong-Zhuhai-Macao Bridge. There are very few contractors in the market that can meet the high requirements of the owners. In this case, there are only two contractors for the steel box girder, namely, the China Railway Shanhaiguan Bridge Group Co., Ltd. and the Wuhan Heavy Engineering Co., Ltd. Among them, the Wuhan Heavy Engineering Co., Ltd. also needs technical cultivation to meet the requirements. Based on this realistic background, we assume the market is a duopoly. Specifically, there are only two prefabrication contractors on the market. One is the leader and the other the competitor, and they adopt different price competition strategies. That is, contractor 1 pursues profit maximization and contractor 2 expands the market as much as possible. The two contractors carry out repeated price competition on the market; assume contractor i sets price p i (t) during period t and sells q i (t) units of the prefabricated part. Therefore, the sales functions of the two contractors are

q i (t) = a i − b i (p 1 (t) + p 2 (t)) + θ i (p j (t) − p i (t)), i, j = 1, 2, i ≠ j,

where a i is the basic sales volume, which reflects the market's demand for contractor i; b i is the average price impact factor, which reflects the impact of substitutes, with b i = 0 indicating there are no other alternative products. The larger b i is, the easier it is for the contractor to be replaced. The higher the average price on the monopoly market, the lower the sales volume will be. θ i is the differential coefficient, which reflects the sensitivity of sales to price differences. Considering the differences, and without losing generality, we assume the basic sales volumes are equal, that is, a 1 = a 2 = a, and the average price impact factors are equal, that is, b 1 = b 2 = b, meaning the sales function can be simplified as

q i (t) = a − b(p 1 (t) + p 2 (t)) + θ i (p j (t) − p i (t)).

Assume the two contractors carry out R&D strategies that can reduce costs to a certain extent. However, the market scope of prefabrication production is relatively concentrated; due to the flow of human resources and technical cooperation between contractors, the spillover effect of knowledge is prone to occur. Because of this spillover effect, R&D strategies will also reduce the counterparty's costs when a contractor is reducing its own costs. The cost function is

C 1 = c L − r 1 − β 1 r 2 , C 2 = c F − r 2 − β 2 r 1 .

Here, c L represents the cost of contractor 1; c F the cost of contractor 2; r i the cost reduction of contractor i through R&D strategies; and β i the cost coefficient of contractor i through the spillover effect, which reflects contractor i acquiring the ability of the counterparty. This paper further assumes that neither of the two contractors has a cost advantage, that is, c L = c F = c 0 , and that their own R&D strategies have the same impact on cost, that is, r 1 = r 2 = r. To focus more on the study of spillover effects, we consider c = c 0 − r, so the cost function can be simplified as C i = c − β i r. Therefore, we can obtain the profit function of contractor i:

Π i (p 1 (t), p 2 (t)) = (p i (t) − c + β i r) q i (t).

According to the hypothesis of this paper, the strategy adopted by contractor 1 for profit maximization requires its marginal profit to be 0.
Therefore, taking the derivative of contractor 1's profit function with respect to the current price, the current marginal profit is

∂Π 1 /∂p 1 (t) = q 1 (t) − (b + θ 1 )(p 1 (t) − c + β 1 r).

Contractor 2 pursues the highest market share, so it only needs to meet Π 2 (p 1 (t), p 2 (t)) = 0. According to the nature of contractor 2's profit function, its optimal pricing strategy can be divided into two situations: if sales at the zero-margin price would be negative, then p 2 * (t) = (a + (θ 2 − b) p 1 (t))/(θ 2 + b), the price at which q 2 = 0; otherwise, p 2 * (t) = c − β 2 r. According to the initial hypothesis, the two contractors exhibit bounded rationality. As such, it is difficult for them to obtain complete market information, meaning they adjust their price strategies according to certain rules and gradually reach a state of equilibrium. It is assumed contractor 1 adopts a "near-sightedness" strategy, while contractor 2 adopts a "self-adaption" strategy; that is, contractor 1 dynamically adjusts the price for the next period based on the marginal profitability of the previous period, and contractor 2 uses a linear adjustment mechanism based on the previous and optimal prices. That is,

p 1 (t+1) = p 1 (t) + γ 1 p 1 (t) ∂Π 1 /∂p 1 (t),
p 2 (t+1) = (1 − δ) p 2 (t) + δ p 2 * (t).
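To make the adjustment mechanism concrete, here is a minimal Python sketch iterating the two maps; the demand and profit forms follow the reconstruction above (q 1 = a − b(p 1 + p 2 ) + θ 1 (p 2 − p 1 ), cost c − β i r), p 2 * is taken as the zero-margin price of the second situation, and all names are ours:

import numpy as np

def iterate_prices(p1, p2, gamma1, delta, a, b, th1, c, r, b1, b2, T):
    # contractor 1: "near-sightedness" (gradient) adjustment;
    # contractor 2: "self-adaption" (partial move towards its optimal price)
    out = np.empty((T, 2))
    for t in range(T):
        q1 = a - b * (p1 + p2) + th1 * (p2 - p1)     # sales of contractor 1
        dPi1 = q1 - (b + th1) * (p1 - (c - b1 * r))  # marginal profit of 1
        p2_star = c - b2 * r                         # zero-margin price of 2
        p1, p2 = p1 + gamma1 * p1 * dPi1, (1 - delta) * p2 + delta * p2_star
        out[t] = p1, p2
    return out

# parameter values from the simulations of Section 4
orbit = iterate_prices(1.32, 1.15, gamma1=0.3, delta=0.5, a=4.8, b=0.4,
                       th1=1.25, c=1.0, r=0.5, b1=0.2, b2=0.5, T=500)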
Equilibrium Points and Stability in a Dynamic Price Competition System
The above adjustment mechanism combines the "near-sightedness" and "self-adaption" adjustment methods into a dynamic adjustment system. In this system, let p i (t + 1) = p i (t); the equilibrium points are then obtained from the resulting nonlinear algebraic system. According to the range of parameters, the analysis is divided into the following two situations.
(1) a + (θ 2 − b) p 1 (t) − (θ 2 + b)(c − β 2 r) < 0, where the system has two equilibrium points: E 0 = (0, a/(θ 2 + b)) and E 1 = (p 1 *, p 2 *). The Jacobian matrix of the system can be evaluated at any point. Proposition 1. E 0 = (0, a/(θ 2 + b)) is an unstable equilibrium point of the dynamic price competition system between contractors.
Proof 1. The Jacobian matrix at E 0 gives two eigenvalues, λ 1 and λ 2 = 1 − δ, obviously satisfying |λ 1 | > 1 and |λ 2 | < 1. Therefore, from the stability criterion of the fixed-point theorem, we obtain that E 0 is an unstable equilibrium point of the dynamic price competition between contractors. On the other hand, it is clear that p 1 * = 0 < c − β 1 r. Contractor 1 is unlikely to sell at price 0, which is lower than the cost and therefore unsustainable, so E 0 is an unstable equilibrium point of the system. Proposition 2. E 1 = (p 1 *, p 2 *) is an unstable equilibrium point of the dynamic price competition system between contractors. Proof 2. From the stability criterion of the fixed-point theorem, E 1 is locally stable if the eigenvalues of the Jacobian at the equilibrium point lie inside the unit circle of the complex plane. According to the Jury stability criterion, the necessary and sufficient conditions for the local stability of E 1 impose constraints on the price adjustment coefficients. In the plane formed by the price adjustment coefficients γ 1 , δ of the two contractors, E 1 is locally stable if γ 1 and δ satisfy these constraints; when the values of γ 1 and δ exceed this range, E 1 is no longer locally stable. In the above equilibrium state, substituting p 2 * shows that contractor 2 is unprofitable. Therefore, from the perspective of the contractor's individual rationality, this point is not a stable equilibrium point of the system. (2) We can introduce p 2 * (t) = c − β 2 r into the dynamical system. There are two equilibrium points, E 0 = (0, c − β 2 r) and E 1 . Proposition 3. E 0 = (0, c − β 2 r) is an unstable equilibrium point of the dynamic price competition system between contractors.
Proof 3. The Jacobian matrix at E 0 gives two eigenvalues, λ 1 and λ 2 = 1 − δ, obviously satisfying |λ 1 | > 1 and |λ 2 | < 1. Therefore, from the stability criterion of the fixed-point theorem, we obtain that E 0 is an unstable equilibrium point of the dynamic price competition system between contractors. On the other hand, p 1 * = 0 < c − β 1 r. As contractor 1 is unlikely to sell at price 0, E 0 is an unstable equilibrium point of the system.
Proposition 4. Under suitable parameter conditions, E 1 is the stable equilibrium point of the dynamic price competition system between contractors. Proof 4. The Jacobian matrix at E 1 has eigenvalues λ 1 = 1 − 2γ 1 (θ 1 + b) p 1 * and λ 2 = 1 − δ. According to the local stability criterion of the fixed-point theorem, this point is locally stable when the conditions |λ 1 | < 1 and |λ 2 | < 1 are satisfied. According to the assumptions, |λ 2 | < 1 holds.
Moreover, by requiring |λ 1 | < 1, we get 0 < γ 1 < 1/((θ 1 + b) p 1 *). This completes the proof of Proposition 4.
Numerical Simulations
In the above analysis, we discuss the equilibrium stability of a price competition system composed of two contractors in two situations.
To reflect the influence of different parameters on the system more intuitively, numerical simulations of the system are performed based on the different price competition strategies of the two contractors. With the above parameter settings, Figure 1 shows the dynamical price evolution diagram with respect to the price adjustment coefficient of contractor 1.
From Figure 1, both contractors move from a stable state into period-doubling bifurcations and then into a chaotic state as the price adjustment coefficient γ increases. This shows that the price adjustment coefficient γ has an important influence on system stability. The price change for contractor 1 is larger than that of contractor 2 in the bifurcation and chaos states, given the change in γ. Further, in this situation, the price of contractor 2 is always lower than its total cost under different conditions. Therefore, in this situation, the equilibrium point is not a stable equilibrium point of the system according to the rational agent assumption. Proposition 2 is thereby also verified.
In Figure 2, we present a strange attractor diagram with γ = 0.347 and N = 20000, which shows the track of price changes in the chaotic state. That is, with the change of γ, the shape of the strange attractor also changes. (2) We set the initial prices of the contractors as p 1 (0) = 1.32 and p 2 (0) = 1.15, the initial market sales as a = 4.8, the average market price coefficient as b = 0.4, the differential price sensitivity coefficients of the two contractors as θ 1 = 1.25 and θ 2 = 0.75, the contractor's cost as c = 1, the cost reduction through R&D as r = 0.5, the spillover utility coefficients of the R&D cost as β 1 = 0.2 and β 2 = 0.5, and the price adjustment coefficient of contractor 2 as δ = 0.5.
Under the above parameter settings, Figure 3 shows the dynamical price evolution diagram with respect to the price adjustment coefficient of contractor 1. From Figure 3, contractor 1 enters the state of period-doubling bifurcation from a stable state and then the chaotic state as the price adjustment coefficient γ increases, while contractor 2 remains stable. In this situation, contractor 2 adopts a strategy whereby pricing is always consistent with cost, while the pricing of contractor 1 varies considerably with γ. This shows that, although contractor 1 holds a leading position on the market, excessive price adjustment changes will still make it difficult for it to make business decisions.
In Figure 4, we use the largest Lyapunov exponent, which can help us identify the bifurcation points of profit. Comparing Figures 3 and 4, the points where the largest Lyapunov exponent in Figure 4 equals 0 correspond to the bifurcation points in Figure 3. At point A in Figure 4, the system produces the first bifurcation; at point B, the second; and at point C, the third. Further, when γ exceeds a certain value, the largest Lyapunov exponent rises above 0. At this point, the system enters a chaotic state.
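A numerical estimate of such a largest-Lyapunov-exponent curve can be sketched as follows (Python, same reconstructed map as above); with contractor 2 settled at p 2 = c − β 2 r, the dynamics reduce to a one-dimensional map in p 1 , and the exponent is the orbit average of ln|f′(p 1 )|:

import numpy as np

def largest_lyapunov(gamma1, a=4.8, b=0.4, th1=1.25, c=1.0, r=0.5,
                     b1=0.2, b2=0.5, p1=1.32, burn=500, N=5000):
    p2 = c - b2 * r                        # contractor 2 fixed at cost
    acc = 0.0
    for t in range(burn + N):
        q1 = a - b * (p1 + p2) + th1 * (p2 - p1)
        dPi1 = q1 - (b + th1) * (p1 - (c - b1 * r))
        fprime = 1.0 + gamma1 * dPi1 - 2.0 * gamma1 * (b + th1) * p1
        if t >= burn:
            acc += np.log(abs(fprime) + 1e-300)
        p1 = p1 + gamma1 * p1 * dPi1
    return acc / N

# positive values signal chaos; scanning gamma1 reproduces the shape of Figure 4
lams = [largest_lyapunov(g) for g in np.linspace(0.2, 0.41, 50)]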
Figures 5 and 6 show sequence diagrams of the price competition between contractors 1 and 2 when the system is under stability, bifurcation, and chaos, respectively. If we set γ = 0.25, as in Figure 5(a), contractor 1 moves rapidly from the initial value to the equilibrium solution and then overshoots it. Subsequently, it fluctuates up and down around the equilibrium solution with gradually decreasing amplitude and finally remains in a stable state. If we set γ = 0.35, as in Figure 5(b), contractor 1 presents a stable cyclical price track after experiencing initial unstable fluctuations. If we set γ = 0.4 and γ = 0.405, respectively, the price change of contractor 1 presents a more complicated, chaotic pattern that is more difficult to describe.
In Figure 6(a), the parameters we set for contractor 1 are γ = 0.4, β 1 = 0.2 and γ = 0.4, β 1 = 0.201, respectively. In Figure 6(b), the parameters are γ = 0.405, β 1 = 0.2 and γ = 0.405, β 1 = 0.201, respectively. That is, the spillover effect coefficient of contractor 1 is changed slightly within the same diagram. For periods t < 12, the changes in the pricing of contractor 1 are not significant for the different initial values. However, as the period increases, there is a significant difference in the trends of the price changes for contractor 1 between the two situations. Even though the difference in the initial value is only 0.001, the difference in these tracks is still significant. Therefore, the system is sensitive to the initial values under chaos.
Figure 7 shows a dynamic change diagram of the average profit of the two contractors, where the period is set to 500. Corresponding to the first bifurcation point in Figure 3 and the value of γ at point A in Figure 4, the system enters the first bifurcation. At this time, the profit of contractor 1 experiences a rapid decline. At the second bifurcation, the profit of contractor 1 continues to fall after a brief fluctuation. When the system enters bifurcation and chaos, the profit of contractor 1 is lower than that in the stable state. Therefore, if contractor 1 adopts an overly rapid model of price adjustment, bifurcation and chaos occur. As a result, this has a considerable negative effect on the profit of contractor 1.
Conclusion
In the construction of mega projects, some key components in certain areas are often supplied by a limited number of contractors with set production capacities, making it objectively easy for an oligopoly situation to form, such as in the manufacturing of steel structures for the Hong Kong-Zhuhai-Macao Bridge in China. Therefore, this paper proposes a duopoly market structure, which is closer to the realistic background of mega projects, and considers the spillover effects of both parties' R&D activities on the market. The traditional oligopoly model requires complete information and the participants' complete rationality, which is difficult to realize in actual competition. Recently, scholars have introduced bounded rationality to simulate the real decision-making situations of participants. We consider that one contractor follows a "near-sightedness" strategy and the other a "self-adaption" strategy, and we construct a dynamic price competition model between the two contractors. We then analyze the equilibrium points of the competition model and perform a stability analysis. Additionally, through numerical simulation, we study the complexity of price competition between the two contractors and the profit changes due to price competition. The results show that (1) when the initial capacity of the market is not high, both parties fall into bifurcation and chaos in price competition, which makes the price of the follower lower than the cost; according to the rational agent assumption, the follower will then withdraw from competition; (2) when the initial capacity of the market is high, the pricing of the market leader will fall into bifurcation and chaos with the increase in the price adjustment speed, while the follower keeps its price constant; (3) under chaos, the system is extremely sensitive to the initial values, and a small initial disturbance will have a significant impact on the market as a whole; and (4) for the market leader that adopts the "near-sightedness" strategy, an aggressive price adjustment strategy will make the market unpredictable and have a significant negative impact on the market leader's profits. That is, one should avoid adopting overly aggressive price adjustment strategies, to avoid price competition under chaos.
For the followers in the market, the initial capacity of the market is a key factor to consider. Under low-capacity market conditions, followers should quit the competition quickly; otherwise, they will suffer long-term losses. Under high-capacity market conditions, in contrast, followers should choose a stable strategy. When price competition falls into chaos, the market's initial conditions, such as spillover effects, differentiation, and the average price, have a great impact on the equilibrium of price competition. For the leaders in the market, the price adjustment speed needs to be treated cautiously. An overly aggressive strategy will drive the market into chaos and result in losses for the contractors. Contractors need to balance the advantages and disadvantages of efficiency and revenue. This paper provides analytical tools and ideas for decision makers in the evolution of long-term price competition.
As this study considers the duopoly price competition of prefabricated parts for mega projects, future work can further study the price competition model by considering the incentives of the owner, government subsidies, and other factors. | 5,556.6 | 2018-09-03T00:00:00.000 | ["Economics", "Engineering"] |
Resolving physical interactions between bacteria and nanotopographies with focused ion beam scanning electron microscopy
Summary To robustly assess the antibacterial mechanisms of nanotopographies, it is critical to analyze the bacteria-nanotopography adhesion interface. Here, we utilize focused ion beam milling combined with scanning electron microscopy to generate three-dimensional reconstructions of Staphylococcus aureus or Escherichia coli interacting with nanotopographies. For the first time, 3D morphometric analysis has been exploited to quantify the intrinsic contact area between each nanostructure and the bacterial envelope, providing an objective framework from which to derive the possible antibacterial mechanisms of synthetic nanotopographies. Surfaces with nanostructure densities between 36 and 58 per μm2 and tip diameters between 27 and 50 nm mediated envelope deformation and penetration, while surfaces with higher nanostructure densities (137 per μm2) induced envelope penetration and mechanical rupture, leading to marked reductions in cell volume due to cytosolic leakage. On nanotopographies with densities of 8 per μm2 and tip diameters greater than 100 nm, bacteria predominantly adhered between nanostructures, resulting in cell impedance.
Bacteria-nanotopography interactions can be quantified using FIB-SEM
Envelope penetration and cell impedance are influenced by nanotopography density
Low-density nanotopographies (8 per μm2) mediate cell impedance
High-density nanotopographies (36-137 per μm2) mediate deformation and penetration
INTRODUCTION
The nanostructures found on cicada and dragonfly wings are widely reported to induce physical stretching of bacterial and fungal cell envelopes upon contact, leading to mechanical rupture, cell lysis, and death (Ivanova et al., 2012; Nowlin et al., 2014). The antimicrobial properties of insect wings have provided significant inspiration for the design of synthetic nanostructures with bactericidal activity (Diu et al., 2014; Dunseath et al., 2019; Fisher et al., 2016; Hazell et al., 2018a, 2018b; Ishak et al., 2020; Jenkins et al., 2020). Determining the underlying mechanisms that drive bacterial cell death on synthetic nanotopographies is crucial, as this will guide the rational design of medical implant surfaces that are resistant to biofilm formation. Several biophysical models have been proposed to explain the mechanisms that drive this antimicrobial phenomenon (Ivanova et al., 2020; Li and Chen, 2016; Linklater et al., 2018; Pogodin et al., 2013; Xue et al., 2015). Alongside the mechanistic theory of contact killing, several biological, chemical, and physical factors have been directly linked to promoting nanotopography-mediated antimicrobial activity, including oxidative stress, microbial adhesion force (Bandara et al., 2017; Nowlin et al., 2014), bacterial cell wall thickness (Pogodin et al., 2013), chemical composition (Devlin-Mullin et al., 2017; Ewald et al., 2006), and nanotopography geometry (Dewald et al., 2018; Diu et al., 2014; Hazell et al., 2018a, 2018b; Lüdecke et al., 2016; Velic et al., 2019; Watson et al., 2019).
Visualizing the cell-surface interface is crucial for elucidating the antibacterial mechanisms of a nanotopography. Traditionally, these analyses have been performed using scanning electron microscopy (SEM) (Ivanova et al., 2012; Jenkins et al., 2018). However, this approach cannot resolve cellular ultrastructures such as the bacterial cell wall and membranes. Furthermore, the area of nanotopography that interfaces with the cell envelope is generally concealed from the incident electron beam, thereby restricting the study to surface morphology, mostly in two dimensions. These limitations have prompted alternative techniques that can visualize both nanotopography and bacterial ultrastructure at nanometer resolution; one such technique is focused ion beam scanning electron microscopy (FIB-SEM). Using this approach, biological specimens such as bacteria and fungi can be iteratively cross sectioned and simultaneously imaged. With precise control over the exact location of surface ablation, FIB-SEM has proved a powerful tool for directly visualizing the contact points between bacteria or fungi and nanotopographies (Bandara et al., 2020; Bhadra et al., 2015; Dewald et al., 2018; Linklater et al., 2017, 2018; Lüdecke et al., 2016). Two main approaches have been utilized to investigate bacteria-nanotopography interactions via FIB-SEM. One method involves generating thin sections, known as lamellae, through bacteria and the underlying nanotopography. Lamellae can then be analyzed by transmission electron microscopy (Jenkins et al., 2018; Linklater et al., 2018). Alternatively, bacteria-nanotopography interactions can be investigated in situ, by generating single cross sections through bacterial cells adhered directly to the nanotopography. This approach has been widely used to visualize the interactions between bacteria and individual nanostructures. FIB milling of Pseudomonas aeruginosa adhered to cicada wings revealed how natural nanostructures can rupture the bacterial envelope, causing cells to submerge into the nanotopography (Ivanova et al., 2012). Similarly, FIB-SEM analysis of P. aeruginosa on dragonfly wing-inspired titanium nanostructures found membrane deformation caused by the energy gain from surface attachment (Bhadra et al., 2015). Initial stretching of Staphylococcus aureus and P. aeruginosa cell envelopes on black silicon (bSi) nanostructures has also been discovered by FIB-SEM cross-sectional analysis (Linklater et al., 2018). Furthermore, FIB-SEM has identified cytoplasmic leakage from Staphylococcus epidermidis cells caused by envelope penetration from spear-like titanium nanostructures (Cao et al., 2018). Most recently, localized envelope deformation and penetration of Escherichia coli and S. aureus cells incubated on TiO 2 nanostructures generated by thermal oxidation was identified.
Although single cross sections generated by FIB milling reveal how bacterial envelope morphology changes at the point of nanostructure contact, this approach does not enable the frequency of nanostructure-induced envelope deformation or penetration to be quantified at a single cell level. Furthermore, it does not reveal the surface area of bacterial envelope that is in direct contact with each nanostructure and to what degree this influences the extent of deformation and/or frequency of penetration. To comprehensively quantify these parameters, this study generated four surface types with distinct nanostructure geometries and utilized a slice-by-slice FIB-SEM milling approach to directly visualize the adhesion interface between S. aureus or E. coli and individual nanostructures. Slice-by-slice FIB-SEM data were then used to generate 3D volume reconstructions of whole bacteria in contact with the underlying nanotopography, enabling all contact points between the bacterial envelope and nanostructures to be resolved simultaneously with nanometer resolution. This approach was previously impossible using conventional 2D imaging tools. Furthermore, advances in 3D analysis software (Cocks et al., 2018;Jorstad et al., 2015) enabled direct quantification of bacteria-nanotopography interactions, including the effective contact surface area between each nanostructure and the bacterial envelope. These analyses demonstrate how this approach can be used to develop an objective framework from which the antibacterial mechanisms of synthetic nanotopographies can be derived.
Fabrication and characterization of nanotopographies
Three nanofabrication methods were utilized to generate four nanotopographies that cover a broad range of nanostructure geometries. The nanofabrication methods shown here have previously been used to generate nanotopographies with bactericidal properties; these include alkaline hydrothermal treatment (Cao et al., 2018; Diu et al., 2014), thermal oxidation (Jenkins et al., 2018; Sjöström et al., 2016), and plasma etching (Dunseath et al., 2019; Hazell et al., 2018b). Alkaline hydrothermal treatment was used to generate titanium dioxide (TiO2) nanostructures on commercially pure titanium discs (cpTi), measuring approximately 500 nm in height and ~50 nm in tip diameter, with a density of 36 per µm². These surfaces are referred to as alkaline hydrothermal nanostructure medium (AH-NS-medium) (Figure 1A). Thermal oxidation was used to generate two different TiO2 nanostructure surfaces on grade 5 titanium alloy (Ti-6Al-4V). One surface comprised shorter (350 nm ± 52 nm), sharper (27 nm ± 4 nm), and more dense (58 ± 3 per µm²) nanostructures than AH-NS-medium, herein called thermal oxidation nanostructure short (TO-NS-short) (Figure 1B), while the other comprised much longer nanostructures (1700 nm ± 347 nm), with increased tip diameter (114 nm ± 26 nm) and low density (8 ± 1 per µm²), referred to as thermal oxidation nanostructure long (TO-NS-long) (Figure 1C). Plasma etching was used to generate a nanotopography with the shortest (181 nm ± 26 nm) and most dense (137 ± 6 per µm²) nanostructures on bSi wafers (PE-NS-short) (Figure 1D). SEM was used to quantify the average dimensions and densities of these different nanotopographies (Figure 1E).
FIB-SEM optimization
To determine sample stability during focused ion beam milling and the extent of beam-induced artifacts introduced, single cross sections were first generated through individual E. coli or S. aureus cells on different nanotopography types. Generating single cross sections through E. coli or S. aureus caused minimal movement of bacteria (Figure 2). Consistent with this, generating consecutive cross sections by a slice-by-slice approach produced little sample movement; however, nanostructure charging caused bacteria to move laterally across the field of view on longer nanostructures, resulting in only partial visualization of bacteria-nanotopography interactions (Figure S1). To reduce sample movement during sequential ion beam milling, a protective layer of platinum (0.5 µm in thickness) was deposited on top of each bacterium before slice-by-slice analysis. The addition of platinum greatly reduced bacterial cell drifting during slice-by-slice ion beam milling and minimized curtaining artifacts, providing micrographs with enhanced definition (Figure S2).
Quantification of bacteria-nanotopography interactions
Area searches of each nanotopography type were performed using SEM to select individual E. coli or S. aureus cells for analysis by focused ion beam milling. A combination of single cross-sectional analysis and sequential ion beam milling was performed. Slice-by-slice ion beam milling was used to generate consecutive cross sections through selected E. coli and S. aureus cells. The micrographs collected during sequential cross-sectional analysis were reconstructed into three-dimensional volumes to determine the number of nanostructures in contact with each bacterium. Using these data, a framework was developed to quantify the proportion of nanostructure-induced envelope deformation, penetration, and cell impedance on a single cell basis. In this study, deformation is defined as the process by which nanostructures directly change bacterial envelope morphology through indentation. When nanostructures interact with the bacterial envelope with no change in morphology, this is defined as no effect. Penetration is observed when nanostructures pierce through the bacterial envelope, while rupture is defined as penetration combined with a loss of turgor pressure. Furthermore, the effective surface area of bacterial envelope in contact with each nanostructure was determined, and the effect of each interaction on cell morphology and size was investigated. These definitions are encoded in the sketch below.
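To make these definitions concrete, the following minimal Python sketch encodes the classification logic. The thresholds and example values are illustrative assumptions (the study itself classifies interactions by visual inspection of the reconstructions), not code from the paper.

# Hypothetical encoding of the interaction categories defined above
# (no effect, deformation, penetration, rupture).
from dataclasses import dataclass

@dataclass
class Interaction:
    deformation_depth_nm: float   # indentation of the envelope by the tip
    penetration_depth_nm: float   # tip depth inside the cell (0 if none)
    turgor_lost: bool             # loss of turgor pressure observed

def classify(ix: Interaction) -> str:
    if ix.penetration_depth_nm > 0:
        return "rupture" if ix.turgor_lost else "penetration"
    if ix.deformation_depth_nm > 0:
        return "deformation"
    return "no effect"

# Example: NS1 from S. aureus cell 1 penetrated 30.8 nm without turgor loss.
print(classify(Interaction(0.0, 30.8, False)))   # -> "penetration"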
On AH-NS-medium surfaces, E. coli and S. aureus cells predominantly adhered on top of the nanostructures and mainly displayed continuous envelope morphologies with minimal evidence of deformation or penetration. In one example, an E. coli cell interacting with nanostructures displayed a concave shape, with the cell midpoint positioned higher above the nanotopography relative to the cellular poles, resulting in very few points of nanostructure contact (Figure 3A). This morphology was not observed for S. aureus, leading to more contact points with the nanotopography (Figure 3B). It was noted, however, in one example, that the envelope of S. aureus cells on AH-NS-medium surfaces was slightly deformed, which may indicate loss of turgor pressure (Figure 4A). To investigate these interactions in more detail, slice-by-slice ion beam milling of two S. aureus cells was performed, and 3D reconstructions were generated to determine whether nanostructures had deformed or penetrated the cell envelope (Figure 4B). Three-dimensional reconstructions revealed three nanostructures interacting with S. aureus cell 1 and three with S. aureus cell 2 (Figure 4C, Videos S1A and S1B). For S. aureus cell 1, two nanostructures (NS1, NS2) had penetrated the envelope (Figures 4D and 4E), reaching depths inside the cell of 30.8 nm and 37.1 nm, respectively (Figure 4H). Two nanostructures (NS4, NS5) had also penetrated the envelope of S. aureus cell 2 (Figures 4F and 4G), with nanostructure tips located 45.9 nm and 49.9 nm inside the cell, respectively (Figure 4I). The remaining nanostructures interacting with S. aureus cells 1 and 2 (NS3, NS6) had no effect on cell morphology and interacted with 225 nm² and 237 nm² of the cell envelope, respectively, representing less than 0.015% of the total cell surface area. Despite multiple nanostructures penetrating the envelope of S. aureus cells 1 and 2, there was no evidence of cytosolic leakage, indicating that neither cell lost significant turgor pressure due to nanostructure penetration.
To determine whether nanostructure-induced envelope penetration occurred in other S. aureus cells adhered to the AH-NS-medium surface, slice-by-slice ion beam milling was performed on additional S. aureus cells with similar envelope morphologies (Figure 5A). Slice-by-slice analysis revealed a total of three nanostructures in contact with S. aureus cell 1 and three nanostructures in contact with S. aureus cell 2 (Figure 5B, Videos S2A and S2B). For S. aureus cell 1, two nanostructures had deformed the envelope (NS1, NS2), interacting with 463 nm² and 147 nm² of the cell envelope, respectively. The remaining nanostructure (NS3) had no effect on envelope morphology, interacting with 248 nm² of the cell envelope. In contrast, the envelope of S. aureus cell 2 was penetrated by one nanostructure (NS4) to a depth of 74.7 nm (Figure 5C). A further nanostructure (NS5) had deformed the bacterial envelope and NS6 had no effect on morphology, interacting with 2926 nm² and 155 nm² of the cell envelope, respectively. Consistent with the previous S. aureus cell slice-by-slice analysis, envelope penetration did not result in a loss of turgor pressure. In contrast to S. aureus, there was no evidence that AH-NS-medium surfaces had penetrated the envelope of E. coli, and only localized deformation of the cell envelope was observed by generating single cross sections (Figure S3).

Figure 3. Cross-sectional analysis of E. coli and S. aureus on AH-NS-medium surfaces. A single cross section was generated through E. coli (A) or S. aureus (B) without platinum deposition. The side of E. coli in contact with nanostructures is concave, with the mid-cell positioned furthest away from the nanotopography. In contrast, S. aureus is positioned on top of the nanotopography with no change in cell shape.
Nanostructures generated via alkaline hydrothermal treatment (AH-NS-medium) were slightly longer (444 nm ± 85 nm) and wider at the tip (43 nm ± 9 nm) than the nanostructures found on TO-NS-short surfaces, which displayed average lengths of 350 nm ± 52 nm and a tip diameter measuring 27 nm ± 4 nm. Consistent with AH-NS-medium surfaces, E. coli cells attached to TO-NS-short surfaces displayed a concave morphology. In one example, the cellular poles of E. coli had deformed into the nanotopography, while the mid-cell was suspended above the nanotopography (Figure 6A). Slice-by-slice analysis revealed a total of eight nanostructures in contact with the cell (Figure 6B, Videos S3A and S3B). Two nanostructures (NS2 and NS4) interacted with the side of the E. coli cell at the same position without penetrating (Figures 6C and 6D), causing the envelope to deform by over 50 nm (Figure 6E). At the cell midpoint, a single nanostructure (NS6) had penetrated the bacterial envelope by 52 nm, without loss of turgor pressure (Figures 6F and 6G). A further nanostructure (NS8) at the cell pole had penetrated the bacterial envelope by 37 nm (Figure 6H). Of note, all eight nanostructures shared a common orientation with respect to the E. coli envelope, but only NS6 and NS8 had penetrated the cell, indicating that the point of nanostructure contact along the bottom of the bacterial envelope may be significant in determining the likelihood of penetration. NS2 and NS4 interacted with the side of the E. coli cell via the nanostructure tip, causing only envelope deformation. The remaining four nanostructures interacted with the side of E. coli, but rather than interacting via the nanostructure tips, the side wall of the nanostructures formed the point of contact. For S. aureus, no evidence of envelope deformation or penetration was observed on TO-NS-short surfaces, but in some cases cells adhered between nanostructures, giving rise to possible cell impedance (Figure S4).

Similar to TO-NS-short surfaces, the nanotopography of PE-NS-short surfaces consisted of short (181 nm ± 26 nm) and densely packed (137 ± 6 per µm²) nanostructures, measuring 50 nm ± 6 nm in diameter. In contrast to the other nanotopographies, which comprised randomly orientated nanostructures, PE-NS-short nanostructures were aligned in the same vertical direction. Area searches using SEM identified a single E. coli cell with significant envelope deformation, synonymous with loss of cytosolic content (Figures 7A-7D). Sequential cross-sectional analysis revealed a total of 24 nanostructures in contact with the E. coli cell envelope (Videos S4A and S4B). Three-dimensional reconstructions and morphometric analysis revealed that two nanostructures (NS1 and NS2) had penetrated the E. coli envelope to depths of 29.51 nm and 24.14 nm, respectively, which may have caused the significant deformation observed (Figures 7E and 7F). The majority of nanostructures (92%) did not penetrate the bacterial envelope, interacting with a total collective surface area of 9000 nm², which corresponds to 0.42% of the total bacterial cell surface area.
In contrast to the other surfaces, for TO-NS-long nanotopographies, E. coli and S. aureus cells predominantly adhered between adjacent nanostructures, leading to nanostructure-induced cell impedance for both species. In one example, an E. coli cell expressing significant numbers of fibril-like appendages had adhered between nanostructures (Figures 8A and 8C). Slice-by-slice analysis of the E. coli cell identified three nanostructures in direct contact with the side of the cell (Figure 8, Videos S5A and S5B). The combined surface area of the three nanostructures in contact with the bacterial envelope was 0.026 µm², collectively interacting with less than 1% of the total cell envelope surface area (14.9 µm²). Although none of the nanostructures had penetrated the bacterial envelope, their positioning on either side of the E. coli cell could be expected to have acted as a physical barrier that may have prevented cell division. Consistent with this, the dimensions of the E. coli cell were highly abnormal, measuring approximately 4 µm in length and 1 µm in diameter (Figure S6). The combination of abnormally large size and absence of cell division septa supports our hypothesis of cell impedance and may indicate a nanotopography-induced bacterial stress response, as previously identified. Evidence of nanotopography-induced cell impedance was also observed for S. aureus attached to TO-NS-long surfaces. Cross-sectional analysis revealed no evidence of envelope deformation or penetration (Figure S5). In contrast to AH-NS-medium, TO-NS-short, and PE-NS-short surfaces, where the interface between nanotopography and bacteria was primarily formed between the nanostructure tips and the underside of the bacterial cell, for TO-NS-long surfaces these interactions mostly occurred between the sides of the nanostructures and the bacterial cells (Figure 9). In S. aureus, the depth of nanostructure penetration varied from 34 nm to 75 nm, while in E. coli depths between 27 nm and 45 nm were observed. Additionally, the depth of deformation in the S. aureus envelope was 38 nm-64 nm, while in E. coli deformation from 51 nm to 243 nm was observed. Since these measurements were recorded from different nanotopographies and different cell numbers, it is unclear whether nanotopography geometries (i.e., density, tip diameter or height) significantly influenced penetration or deformation depth. Additional research on a larger sample size is required to assess this more comprehensively. The quantitative data derived from each 3D model are presented in Table 1.
DISCUSSION
It is generally accepted that the antibacterial activity of natural and synthetic nanotopographies is driven by physical contact with nanostructures (i.e., nanowires, nanopillars, nanocones, nanospikes, nanospears). This can result in penetration or rupture of the bacterial cell envelope, or damage can be inflicted via cell impedance or induction of oxidative stress responses (Linklater et al., 2021; Tripathy et al., 2017). Visualizing the adhesion interface between bacteria and nanotopographies is thus of critical importance for determining by which mechanisms they mediate their antibacterial effects. In this study, we utilized an FIB-SEM method for directly viewing, in three-dimensional space, physical interactions between the cell envelope of S. aureus or E. coli and four nanotopographies of different geometries (AH-NS-medium, PE-NS-short, TO-NS-short or TO-NS-long). Morphometric analysis was performed to quantify these interactions. The first published morphometric analysis of 3D volume reconstructions generated by FIB-SEM was of brain cells (Jorstad et al., 2015). In that study, Jorstad et al. recognized that a gap exists between rapid 3D volume reconstruction techniques, such as slice-by-slice FIB milling, and software for model quantification; analyses that had previously been achieved via manual segmentation methods. Similarly, generating a 3D model of bacteria interacting with nanostructured surfaces may provide additional qualitative insights, but does not directly provide morphometric data. Therefore, we utilized the NeuroMorph software package to quantify morphometric parameters of bacteria-nanotopography interactions, including the intrinsic contact area of the nanostructured surface with the bacterial cell envelope and the bacterial cell volume.
The number of bacteria-nanotopography interactions correlated with nanotopography density, with PE-NS-short surfaces displaying the highest number of physical points of contact (24 interactions per cell) and TO-NS-long nanotopographies having the lowest (3 interactions per cell). However, the number of contact points did not correspond to the frequency of nanostructure-induced envelope penetration. Rather, nanotopographies with reduced nanostructure density (TO-NS-short and AH-NS-medium) exhibited higher levels of nanostructure-induced envelope penetration (25% and 66%, respectively) compared with PE-NS-short (8%). One possible explanation for this could be the relative surface area of bacterial envelope that nanostructures interact with simultaneously. For the bacteria analyzed in this study, nanostructures on PE-NS-short surfaces interacted with <1% of the total bacterial surface area, while on AH-NS-medium and TO-NS-short nanotopographies the surface area of physical contact was 3.5-22 times greater. It is also possible that these differences were influenced by nanostructure orientation and/or the simultaneous variation in height or tip diameter between nanotopographies, which affects the precise contact point with the bacterial envelope and the forces exerted. Although nanostructure tip diameter is generally greater on PE-NS-short surfaces compared with AH-NS-medium and TO-NS-short surfaces, the nanostructures have the same orientation and a much higher density, meaning that bacteria-nanotopography interactions will be mediated by the nanostructure tips. In contrast, nanostructure orientation on all the other surface types was random, giving rise to bacteria-nanotopography interactions that were mediated by nanostructure tips and/or nanostructure sidewalls.
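The surface-area argument above reduces to simple arithmetic. The sketch below computes the percentage of the envelope in contact, approximating S. aureus as a sphere; the 800 nm diameter is an assumed typical value, not a measurement from this study.

# Effective contact fraction: contact area relative to total envelope area.
import math

def contact_fraction(contact_area_nm2, cell_surface_area_nm2):
    return 100.0 * contact_area_nm2 / cell_surface_area_nm2

d_nm = 800.0                          # assumed spherical S. aureus diameter
surface = math.pi * d_nm**2           # sphere surface area = pi * d^2
print(f"{contact_fraction(237.0, surface):.4f}%")   # ~0.012% for a 237 nm2 patch

With these assumptions, a 237 nm² contact patch corresponds to roughly 0.01% of the envelope, consistent with the "<0.015%" values reported above.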
In this study, analysis of three-dimensional reconstructions of E. coli on PE-NS-short surfaces revealed nanostructure-mediated envelope penetration and significant loss of turgor pressure, indicating that the cell wall may have been ruptured, as predicted by the biophysical model (Li, 2015; Pogodin et al., 2013; Xue et al., 2015). However, due to the resolution limit of FIB-SEM, no qualitative evidence of cell wall rupturing was identified. Evidence of nanostructure-mediated envelope penetration was also observed on AH-NS-medium and TO-NS-short nanotopographies, but this did not result in cell rupture or loss of turgor pressure. One possible explanation for this observation could be the increased nanostructure density on PE-NS-short surfaces, which would lead to more points of contact with the bacterial envelope. Combined with envelope penetration, this could result in the cell rupturing. Current dogma holds that bacterial cells rupture interstitially between nanopillars (Ivanova et al., 2012). Rather, our observations suggest that cumulative levels of envelope penetration, or simply deformation localized at the bacterium-nanostructure tip interface, could be a principal driver of physical damage and subsequent antibacterial activity. This is consistent with our previous studies and is supported by recent modeling indicating that envelope deformation around nanopillar tips delivers sufficient in-plane strain to locally damage and penetrate bacteria (Velic et al., 2021). It is also possible that the biophysical model is only applicable to cicada wing-like nanotopographies such as PE-NS-short, where nanostructure height, spacing, and diameter are more uniform across the surface, whereas dragonfly wing-like nanotopographies, including AH-NS-medium, TO-NS-short and TO-NS-long, display an uneven distribution of height, density, and tip diameter. Thus, stretching and rupturing of the suspended bacterial cell wall may be unlikely on surfaces comprising nanostructures of random height and orientation.
Morphometric analysis of the 3D volume reconstructions revealed a strong correlation between cell impedance and cell dimensions. Cell impedance was observed for S. aureus cells incubated on TO-NS-short surfaces but not for E. coli. One possible explanation for these differences is the size and shape of E. coli relative to S. aureus. Given the longer, elongated shape and larger surface area of E. coli cells, adhesion is more likely to occur on top of the nanostructures rather than in between them. Furthermore, nanotopography density influenced the likelihood of cell impedance, since E. coli cells were mostly found to adhere between nanostructures on TO-NS-long surfaces, where nanostructure spacing was generally greater than the width of E. coli cells (~500 nm). In contrast, the smaller cell diameter and coccoid morphology of S. aureus increased the likelihood of attachment between nanostructures, irrespective of surface type. These findings are consistent with previous literature investigating the effects of microtopography on microbial retention. Titanium surfaces with 0.5 µm-2 µm pit sizes were found to retain significantly more S. aureus cells compared to P. aeruginosa or Candida albicans following 1 hr incubation, owing to the smaller diameter of S. aureus (Whitehead et al., 2005). A similar mechanism was recently observed on titanium nanostructure surfaces with pocket-like formations: S. epidermidis cells were found to settle inside the pockets, limiting biofilm growth (Cao et al., 2018). Considering these findings, it is reasonable to hypothesize that cell impedance would occur more frequently with smaller bacterial cells, such as S. aureus, according to the relative dimensions of bacteria and nanostructures. Other factors, including cell surface charge and hydrophobicity, may also have influenced bacterial adhesion to nanostructured surfaces (Krasowska and Sigler, 2014). Quantitative analyses were performed on each 3D model derived from slice-by-slice FIB-SEM analysis, providing an objective framework from which to derive the possible bactericidal mechanisms of each nanotopography. Definitions for each parameter are indicated below. Envelope surface area (µm²) - the total surface area of the bacterial envelope, expressed in µm². The FIB-SEM method presented in this study enabled physical contact points between bacteria and nanotopography to be visualized. This method was applied to a range of nanotopographies of varying nanostructure geometries and composition. The staining protocol used here enabled the envelope of S. aureus and E. coli to be resolved with nanometer resolution and was clearly distinguishable from the bacterial cytosol. A further advantage of this approach was that cross sections could be generated through bacteria at any desired location, enabling all points of contact between bacteria and nanotopography to be resolved. Additionally, by performing sequential slice-by-slice ion beam milling, three-dimensional volume reconstructions could be generated to allow 360° visualization and quantification of the morphometric information. The data generated from these analyses demonstrate how a framework for quantifying bacteria-nanostructure interface interactions can be developed to better assess the antibacterial mechanisms of nanotopographies. We anticipate that the FIB-SEM approach highlighted in this study could be widely used to progress our mechanistic understanding of nanotopography-mediated antibacterial activity.
Limitations of the study
The cross-sectional analyses of E. coli and S. aureus in contact with nanostructured surfaces performed in this study are representative of a single time point (3-hr surface incubation). This study explored only two bacterial species; future research should include a greater variety of microorganisms in both planktonic and biofilm phases of growth. Owing to cost, limited access to FIB-SEM equipment, and long data collection times for each bacterium, the analyses and conclusions in this study are representative of a small sample size of seven bacteria, which the authors recognize is a limitation. Additional access to FIB-SEM equipment is required to more comprehensively assess the morphological changes in Gram-negative and Gram-positive bacteria over a broader time range.
STAR+METHODS
Detailed methods are provided in the online version of this paper and include the following:
Alkaline hydrothermal treatment
Commercially pure titanium (Ti-Tek (UK) Ltd) with 0.7 mm thickness was laser cut into 11 mm circular disks by Laserit. All disks were mirror polished (TegraPol-15, Struers) before being washed with deionized water and alcohol for 10 minutes each. The disks were air dried, slotted into custom-made PTFE holders to keep them upright, and placed into a 125 ml PTFE acid-digestion vessel containing 1 M NaOH (52 ml). The vessel was tightly sealed and placed in a preheated oven (Gallenkamp Plus II) for 2 hours at 250 °C. After the hydrothermal treatment, the disks were cooled, washed with deionized water and absolute ethanol, dried and treated at 300 °C for 1 h before immersion in 0.6 M HCl to exchange the sodium ions for hydrogen ions. The disks were rinsed with copious amounts of deionized water and placed in a furnace for 2 h at 600 °C.
Plasma etching
Plasma etching of n-doped single-crystal silicon (100) wafers was performed in Oxford Instruments reactive ion etching (RIE) systems fitted with inductively coupled plasma (ICP) sources. ICP etching of a Si wafer was used to generate cicada wing-inspired, short nanopillars (0.2 µm in length).
FIB-SEM sample preparation
Samples were prepared using previously described methods. The exact methodology used is as follows: following overnight fixation in 2.5% EM grade glutaraldehyde at 4 °C, samples were washed (3 x 5 minutes) in 0.1 M sodium cacodylate buffer prior to OTO (osmium tetroxide-thiocarbohydrazide-osmium) processing. Briefly, this method included post-fixation in equal volumes of 4% osmium tetroxide (Agar Scientific Ltd., Essex, UK) and 3% potassium ferrocyanide (Sigma-Aldrich, St. Louis, MO, USA) for 60 minutes on ice. Following post-fixation, samples were rinsed (3 x 5 minutes) in dH2O before incubating with thiocarbohydrazide (Sigma-Aldrich, St. Louis, MO, USA) for 20 minutes. Additional dH2O washing steps (3 x 5 minutes) were applied before incubation in 2% aqueous osmium for 30 minutes at room temperature. Following OTO processing, bacterial samples were stained in 1% aqueous uranyl acetate (1 hour at 4 °C) followed by lead aspartate (200 mM) for 30 minutes in the dark. Between these steps, washing with dH2O was performed. After the final washing step, bacterial samples were dehydrated in a graded ethanol series of 25%, 50%, 70%, 90% and 100% (Sigma-Aldrich, St. Louis, MO, USA). Samples were then critical point dried using a Leica CPD300, following an established protocol for microbial cells (LiYu et al., 2014). Titanium discs were mounted onto 0.5'' aluminum stubs using colloidal silver paste (Agar Scientific Ltd., Essex, UK), before being coated with a 10 nm chromium layer using an Emitech K757X sputter coater system.
Sequential ion beam milling
Two microscopes were used to perform ion beam milling: 1) a Strata FIB201 (University of Manchester); 2) an FEI Scios (DESY NanoLab). Samples were loaded into the chamber and the system was pumped down to create a vacuum. Before cross-sectional analysis, the stage was tilted by 52°, moving the titanium discs perpendicular to the gallium ion beam. Area scans were performed at an accelerating voltage of 5 kV and a current of 50 pA. Prior to ion beam milling, a protective platinum layer (500 nm) was deposited onto each bacterium. Rough cut trenches were milled around coated bacteria to depths of 250 nm using an accelerating voltage of 30 kV and a current of 1 nA. Auto Slice and View software was used to carry out sequential sectioning in 30 nm slices for E. coli and 20 nm slices for S. aureus cells. This was performed with an accelerating voltage of 30 kV and a beam current of 30 pA. Images of each section were acquired using an electron beam accelerating voltage of 5 kV and a current of 50 pA.
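As a rough sanity check on the acquisition effort these parameters imply, the sketch below estimates slice counts from assumed typical cell lengths; the lengths are illustrative values, not measurements from this study.

# Rough slice-count estimate for the sequential milling parameters above.
def n_slices(cell_length_nm, slice_thickness_nm):
    return -(-cell_length_nm // slice_thickness_nm)   # ceiling division

print(n_slices(2000, 30))   # E. coli (~2 um long, assumed) at 30 nm -> 67 slices
print(n_slices(800, 20))    # S. aureus (~0.8 um, assumed) at 20 nm -> 40 slices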
FIB-SEM image processing and 3D volume reconstruction
Slice and view data were processed using previously described methods. The exact methodology used is as follows: the slice and view data acquired from sequential FIB milling were processed using the FIB-stack wizard tool in Avizo v9.7.0 (FEI). Briefly, this tool facilitates alignment of the FIB stack and correction of geometrical artefacts such as stage-tilt foreshortening and/or vertical shift. The Avizo segmentation editor was utilized to reconstruct 3D volumes of bacteria and to visualize interactions with all nanostructures.
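Avizo's FIB-stack wizard is proprietary, but the two corrections it performs can be illustrated generically. The sketch below, assuming translation-only drift between slices and the 52° stage tilt described above, aligns slices by phase cross-correlation and rescales the vertical axis by 1/sin(tilt); it is a simplified stand-in, not Avizo's actual algorithm.

# Generic slice alignment and tilt-foreshortening correction for a FIB stack.
import numpy as np
from scipy import ndimage
from skimage.registration import phase_cross_correlation

TILT_DEG = 52.0   # stage tilt assumed from the milling geometry above

def align_stack(stack):
    """Shift every slice onto the previous one using phase cross-correlation."""
    aligned = [stack[0]]
    for img in stack[1:]:
        shift, _, _ = phase_cross_correlation(aligned[-1], img)
        aligned.append(ndimage.shift(img, shift, order=1))
    return np.stack(aligned)

def correct_foreshortening(img, tilt_deg=TILT_DEG):
    """Stretch the vertical axis by 1/sin(tilt) so cross-section distances
    become true lengths rather than projected ones."""
    factor = 1.0 / np.sin(np.deg2rad(tilt_deg))
    return ndimage.zoom(img, (factor, 1.0), order=1)

stack = np.random.default_rng(0).random((5, 128, 128))   # placeholder slices
corrected = np.stack([correct_foreshortening(s) for s in align_stack(stack)])
print(corrected.shape)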
Morphometric analysis of 3D models
Morphometric analysis of the 3D models was performed using the NeuroMorph add-on in Blender (v2.9.0) 3D modeling software, as detailed previously (Jorstad et al., 2015). Briefly, the 3D models reconstructed in Avizo v9.7.0 software were exported as .obj files, which can be used in other 3D modeling software such as Blender, Microsoft Paint 3D or the AutoDesk suite. The exported 3D models were then imported into Blender, and the morphometric analysis was performed using NeuroMorph. Quantification of nanotopographies and bacteria-nanotopography interactions was performed using Microsoft Excel.
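Because the models are exported as .obj meshes, the same morphometric quantities can also be computed outside Blender. A minimal sketch using the trimesh library is shown below; the file name is a placeholder, and the reported units follow whatever units the mesh was exported in.

# Surface area and volume of an exported bacterial 3D model (.obj).
import trimesh

mesh = trimesh.load("s_aureus_cell1.obj")          # placeholder file name
area = mesh.area                                   # total envelope surface area
volume = mesh.volume if mesh.is_watertight else None
print(f"surface area: {area:.3f} (mesh units^2)")
print(f"volume: {volume} (mesh units^3; requires a watertight mesh)")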
ADDITIONAL RESOURCES
Our study has not generated or contributed to a new website/forum and it is not part of a clinical trial.
"Materials Science",
"Biology"
] |
Spin canting across core/shell Fe3O4/MnxFe3−xO4 nanoparticles
Magnetic nanoparticles (MNPs) have become increasingly important in biomedical applications such as magnetic imaging and hyperthermia-based cancer treatment. Understanding their magnetic spin configurations is important for optimizing these applications. The measured magnetization of MNPs can be significantly lower than that of their bulk counterparts, often due to canted spins. This has previously been presumed to be a surface effect, where reduced exchange allows spins closest to the nanoparticle surface to deviate locally from collinear structures. We demonstrate that intraparticle effects can induce spin canting throughout a MNP via the Dzyaloshinskii-Moriya interaction (DMI). We study ~7.4 nm diameter, core/shell Fe3O4/MnxFe3−xO4 MNPs with a 0.5 nm Mn-ferrite shell. Mössbauer spectroscopy, x-ray absorption spectroscopy and x-ray magnetic circular dichroism are used to determine the chemical structure of the core and shell. Polarized small angle neutron scattering shows parallel and perpendicular magnetic correlations, suggesting multiparticle coherent spin canting in an applied field. Atomistic simulations reveal the underlying mechanism of the observed spin canting. These show that strong DMI can lead to magnetic frustration within the shell and cause canting of the net particle moment. These results illuminate how core/shell nanoparticle systems can be engineered for spin canting across the whole of the particle, rather than solely at the surface.
Hyperfine parameters for the components of the Mössbauer spectrum measured at 10 K, where δ is the isomer shift, Bhf is the hyperfine field, Δ is the quadrupole splitting, and Γ is the line width.

Site           δ (mm/s)    Bhf (T)     Δ (mm/s)    Γ (mm/s)    Area (%)
I - Fe3+ A     0.549(5)    53.15(6)    0.026(9)    0.19(1)     26(7)
II - Fe3+ B    0.492(5)    51.78(7)    0.020(9)    0.22(2)

Figure S4. Field cooled (FC) and zero field cooled (ZFC) magnetization for (a) a dilute sample of the core/shell NPs. (b) Overlay of ZFC curves from both dilute and dense samples of nanoparticles. The blocking temperature of the dilute particles is lower than that of the dense samples, indicating that interparticle interactions have been reduced. The peak associated with the blocking temperature is also much more distinct for the dilute particles (the fall-off with increasing temperature is more abrupt), further indicating reduced interparticle interactions in the dilute sample. Here the applied field was 100 Oe.
Data Note S5. Using the PASANS method, we obtained data for all four possible scattering cross-sections (↑↑, ↑↓, ↓↑, or ↓↓), corresponding to incident neutrons that are either spin up (↑) or spin down (↓), combined with post-sample scattered neutrons that are again either spin up (↑) or spin down (↓). For example, ↑↓ indicates scattering from neutrons initially polarized spin up which, after scattering off the sample, were found to be polarized spin down; this cross-section (↑↓) and the related one (↓↑) thus involve "spin-flipping" of the incident neutron relative to the scattered one.
These scattering cross-sections are proportional to the squared sum of the spatial nuclear (N) and magnetic (M) Fourier transforms [1,2]:

\[ \widetilde{N}(\mathbf{Q}) = \sum_K \rho_N(\mathbf{R}_K)\, e^{i \mathbf{Q} \cdot \mathbf{R}_K}, \qquad \widetilde{M}_J(\mathbf{Q}) = \sum_K \rho_{M,J}(\mathbf{R}_K)\, e^{i \mathbf{Q} \cdot \mathbf{R}_K}, \]

where J is any Cartesian coordinate (X, Y, or Z), Q is the scattering vector, RK is the relative position of the Kth scatterer, and ρN, ρM are the nuclear and magnetic scattering length densities, respectively. Note that the nuclear scattering is assumed isotropic in many cases, although in some systems nuclear spins can be aligned. In contrast, the magnetic Fourier transform has directional components with selection rules governing the observed scattering; only the component of magnetic scattering perpendicular to Q can contribute.
While in general the complete angle-dependent polarization rules lead to complex expressions for the scattering cross-sections, these simplify in certain geometries and at key angles. In the present case, we then extract the quantities N², M²PAR and M²PERP in Fig. 6b from the underlying four cross-sections by taking sector averages of the data at the specified values of the angle between Q and the applied field. In the expression for N², we note that the sample is isotropic in our case, given that the polycrystals of ordered nanoparticle assemblies do not have particular preferred directions. The expression for M²PERP is explicitly for a particular portion of the spin-flip data, while the M²PAR equation is for the fraction that is coherent with the structural order.
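The paper's exact sector-average expressions are not reproduced in this excerpt. For orientation only, one standard extraction scheme, assuming the field H is applied perpendicular to the beam and θ is the angle between Q and H, is:

\[ N^2 = \tfrac{1}{2}\big(I^{\uparrow\uparrow} + I^{\downarrow\downarrow}\big)\big|_{\theta=0}, \qquad M^2_{\mathrm{PERP}} = \tfrac{1}{2}\big(I^{\uparrow\downarrow} + I^{\downarrow\uparrow}\big)\big|_{\theta=0}, \qquad M^2_{\mathrm{PAR}} = \tfrac{1}{2}\big(I^{\uparrow\uparrow} + I^{\downarrow\downarrow}\big)\big|_{\theta=90^{\circ}} - N^2. \]

In this scheme, the non-spin-flip sum along H isolates the nuclear term, the spin-flip sum along H contains only moment components perpendicular to H, and the θ = 90° sector carries the parallel component that interferes coherently with the structure.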
Data Note S6. Details of atomistic simulations. The structure of the measured nanoparticles is complex, and so we have adopted a simplified model based on a single crystal particle to capture the essential properties of the particles. We construct a single crystal of magnetite with an inverse spinel structure and lattice parameter 8.3941 Å, explicitly including the oxygen sites due to their contribution to the DMI on octahedral Mn sites. A spherical particle is then cut from the crystal and different magnetic parameters are assigned based on the distance from the centre, defining a core region 5.6 nm in diameter and a shell 0.7 nm thick. The magnetic properties of the system are described with a Heisenberg spin Hamiltonian of the form

\[ \mathcal{H} = -\sum_{i<j} \mathbf{S}_i \cdot \mathcal{J}_{ij} \cdot \mathbf{S}_j - \mu_s \sum_i \mathbf{S}_i \cdot \mathbf{B}_{\mathrm{app}}, \]

where \mathbf{S}_i and \mathbf{S}_j are unit vectors describing the directions of spins on sites i and j, \mathcal{J}_{ij} is the exchange tensor between sites i and j, \mu_s is the spin moment at site i, and \mathbf{B}_{\mathrm{app}} is a vector describing the direction of the externally applied field. The exchange tensor describes isotropic, anisotropic and anti-symmetric (DMI) exchange interactions between two spin sites i and j and is given by

\[ \mathcal{J}_{ij} = \begin{pmatrix} J_{xx} & D_z & -D_y \\ -D_z & J_{yy} & D_x \\ D_y & -D_x & J_{zz} \end{pmatrix}, \]

where the diagonal terms give the isotropic and anisotropic exchange and the antisymmetric off-diagonal terms correspond to the components of the DMI vector. The atomistic spin dynamics are computed numerically by solving the stochastic Landau-Lifshitz-Gilbert equation with Langevin dynamics [4] applied at the atomic level using the VAMPIRE software package [5]. The simulations were performed with critical Gilbert damping α = 1 to ensure rapid convergence to an equilibrium spin state.

Figure S7. Visualization of the simulated spin configuration of a decoupled superparamagnetic MnFe2O4 shell, taking a slice through the y-z plane. The coloring of the atoms indicates the direction of magnetization on each site, with oxygen sites shown as small dark spheres. The simulation temperature is set at 300 K in a 1 T externally applied field along the [001] crystal direction. The shaded core region shows a nearly single-domain state, while the shell shows much more disorder due to finite size effects. At high fields, the superparamagnetic shell is well aligned with the field direction, leading to a Langevin-type saturation of the total magnetization Ms for the nanoparticle.
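The sketch below illustrates the numerical core of such simulations: a Heun-scheme integrator for the stochastic LLG equation with a Langevin thermal field. It is a minimal stand-in, not VAMPIRE itself; exchange and DMI terms are omitted, and the moment, time step and run length are assumed values.

# Minimal stochastic Landau-Lifshitz-Gilbert (sLLG) integrator with a Langevin
# thermal field; exchange/DMI coupling between spins is deliberately omitted.
import numpy as np

MU_B = 9.274e-24  # Bohr magneton (J/T)
K_B = 1.381e-23   # Boltzmann constant (J/K)

def sllg_step(S, H_eff, gamma, alpha, dt):
    # Heun predictor-corrector for dS/dt = -g/(1+a^2) [S x H + a S x (S x H)],
    # with the stochastic field held fixed over the step; spins renormalized.
    def dSdt(S, H):
        SxH = np.cross(S, H)
        return -gamma / (1 + alpha**2) * (SxH + alpha * np.cross(S, SxH))
    S_pred = S + dt * dSdt(S, H_eff)
    S_pred /= np.linalg.norm(S_pred, axis=-1, keepdims=True)
    S_new = S + 0.5 * dt * (dSdt(S, H_eff) + dSdt(S_pred, H_eff))
    return S_new / np.linalg.norm(S_new, axis=-1, keepdims=True)

def thermal_field(T, mu_s, alpha, gamma, dt, shape, rng):
    # Fluctuation-dissipation variance for the Langevin field (in tesla).
    sigma = np.sqrt(2.0 * alpha * K_B * T / (gamma * mu_s * dt))
    return sigma * rng.standard_normal(shape)

rng = np.random.default_rng(0)
S = np.tile([0.0, 0.0, 1.0], (100, 1))   # 100 spins, initially along z
B_app = np.array([0.0, 0.0, 1.0])        # 1 T field along [001], as in Fig. S7
gamma, alpha, dt = 1.76e11, 1.0, 1e-16   # gyromagnetic ratio, damping, time step
mu_s, T = 2.5 * MU_B, 300.0              # assumed moment and temperature

for _ in range(1000):                    # short demonstration run (0.1 ps)
    H = B_app + thermal_field(T, mu_s, alpha, gamma, dt, S.shape, rng)
    S = sllg_step(S, H, gamma, alpha, dt)
print("mean z-magnetization after 0.1 ps:", S[:, 2].mean())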
"Physics"
] |
SHARPENING OF THE VNIR AND SWIR BANDS OF THE WIDE BAND SPECTRAL IMAGER ONBOARD TIANGONG-II IMAGERY USING THE SELECTED BANDS
The Tiangong-II space lab was launched at the Jiuquan Satellite Launch Center of China on September 15, 2016. The Wide Band Spectral Imager (WBSI) onboard the Tiangong-II has 14 visible and near-infrared (VNIR) spectral bands covering the range from 403-990 nm and two shortwave infrared (SWIR) bands covering the ranges from 1230-1250 nm and 1628-1652 nm, respectively. In this paper, a band-selection approach is proposed that exploits the closest spectral similarities between the VNIR bands at 100 m spatial resolution and the SWIR bands at 200 m spatial resolution. An evaluation of the Gram-Schmidt transform (GS) sharpening technique embedded in ENVI software is presented, based on four different choices of the low resolution pan band. The experimental results indicated that when the VNIR band with the higher CC value relative to the raw SWIR band was selected, more texture information was injected into the corresponding sharpened SWIR band image, while the other sharpened SWIR band image preserved spectral and texture characteristics similar to the raw SWIR band image.
INTRODUCTION
The Wide Band Spectral Imager (WBSI) onboard the Tiangong-II space lab, launched at the Jiuquan Satellite Launch Center of China on September 15, 2016, has 14 visible and near-infrared (VNIR) spectral bands covering the range from 403-990 nm at a spatial resolution of 100 m and two shortwave infrared (SWIR) bands with 200 m spatial resolution covering the ranges from 1230-1250 nm and 1628-1652 nm, respectively. The spectral range of each band is shown in Table 1. The WBSI images the Earth's surface in a 300-kilometer-wide swath, which supports a wide range of applications in such areas as plant science, agriculture, forestry, geology, geography, ocean, water resources, natural hazards, and coastal research. The SWIR bands are outside the visible light range. The SWIR bands can detect minerals, man-made materials, plant water status, soil moisture and fire, and can see at night and through smoke in a way that is invisible to the VNIR bands (Matheson and Dennison, 2012; Sanchez-Ruiz et al., 2014; Young, 2015). Therefore, it is often desirable to have high spectral resolution and spatial information combined in the same file (Padwick et al., 2010). The integration of the WBSI imagery of Tiangong-II is difficult because the VNIR and SWIR bands have different spatial resolutions. Unless the images are resized by nearest neighbour or cubic convolution interpolation methods so that all VNIR and SWIR bands have the same pixel size (Wahi et al., 2013; Pour and Hashim, 2013), sharpening should be used to combine the low resolution SWIR image with the high resolution VNIR image to produce a 100 m spatial resolution image including 14 VNIR bands and two SWIR bands.
Many sharpening techniques exist in the literature (Vaiopoulos and Karantzalos, 2016). Some of them have been embedded in ENVI software, a well-known commercial remote sensing package, which makes it convenient for image analysts to automatically fuse images of different spectral and spatial resolutions. However, the above methods cannot be directly used, because the WBSI onboard Tiangong-II does not have a panchromatic band. Moreover, the similarity between the low spatial resolution images and the high spatial resolution images is crucial for the quality of the fused images (Aiazzi et al., 2005; Nikolakopoulos, 2006; Pak et al., 2017). But the VNIR bands of the WBSI do not overlap spectrally with the SWIR bands. To sharpen a WBSI image of Tiangong-II, a specific multispectral band should be considered as an optimal panchromatic band with 100 m spatial resolution. Selecting and synthesizing bands from the existing multispectral bands is an effective approach, which has been successfully applied in sharpening the VNIR and SWIR bands of Sentinel-2 and the VNIR, SWIR and thermal infrared bands of ASTER (Aiazzi et al., 2005; Du et al., 2016; Vaiopoulos and Karantzalos, 2016; Wang et al., 2016; Park et al., 2017).
The main goal of this work was to sharpen the two SWIR bands of the WBSI imagery of Tiangong-II at 200 m spatial resolution using selected higher spatial resolution VNIR bands at 100 m, and to evaluate the Gram-Schmidt transform (GS) embedded in ENVI software for sharpening the SWIR bands of the WBSI.
Data and Experiment Area
The WBSI Level 2 data were acquired on March 18, 2017, as a geometrically corrected product covering small areas of southern Tianjin City and the southeast of Hebei Province, and large areas of the northwest of Shandong Province. Three sub-regions, each covering 400×400 pixels in the bands with 100 m spatial resolution, were selected for the experiment area, as shown in Figure 1. The main reasons for the selection were: (1) to contain a variety of land cover classes including crop, forest, bare soil, residential areas, aquaculture, river and sea. The left is Area 1, which mainly contained high-winter-wheat-yield land and residential areas. The middle is Area 2, which mainly contained middle- and low-winter-wheat-yield land, unsown land, residential areas, small reservoirs and winter jujube. The right is Area 3, which mainly contained middle-winter-wheat-yield land, residential areas, river, sea, aquaculture, unsown land and forest; (2) to exclude the lower-right areas containing cloud; (3) these regions are also core demonstration zones of the Chinese R & D Demonstration Project on Bohai-rim's Breadbasket, which aims to increase crop production by 3 billion kg by 2017 and 5 billion kg by 2020.
Figure 1. The relative location and size of the WBSI image of Tiangong-II for the three experiment areas (400×400 pixels in bands with 100 m spatial resolution; left, Area 1; middle, Area 2; right, Area 3). A false colour composite is displayed (RGB B5-B8-B10).
Sharpening Method Used in This Work
In ENVI software version 5.1, the image sharpening methods for byte-scaled RGB imagery include the HSV transform (HSV) and the colour normalization transform (Brovey), and the image sharpening techniques for spectral imagery include the Gram-Schmidt transform (GS), principal components transform (PC), and colour normalized transform (CN). Descriptions of the algorithms of the aforementioned sharpening methods can be found in the ENVI software version 5.1 help documents and in several references (Vrabel, 1996; Vrabel et al., 2002; Nikolakopoulos, 2005; Sarp, 2014). GS is one of the sharpening techniques that preserves spectral characteristics with minimal information loss (Sarp, 2014; Gao et al., 2016). In this work, the GS sharpening method was applied, and the sharpening results from three sub-regions were evaluated. The GS sharpening procedure was the same every time, except for the choice of the low resolution pan band: the SWIR Band 1, the SWIR Band 2, the average of the SWIR Band 1 and SWIR Band 2, or the downscaled VNIR band.
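The sketch below illustrates GS sharpening in its common detail-injection form: each upsampled low resolution band receives the difference between the chosen pan band and a simulated low resolution pan, scaled by a per-band gain. It is a simplified illustration of the general technique, not ENVI's exact implementation; using the band mean as the simulated pan is an assumption.

# Simplified Gram-Schmidt pan-sharpening (detail-injection form).
import numpy as np

def gs_pansharpen(ms_up, pan):
    bands = ms_up.shape[0]
    X = ms_up.reshape(bands, -1).astype(np.float64)
    p = pan.reshape(-1).astype(np.float64)
    pan_sim = X.mean(axis=0)                       # simulated low-res pan band
    # Histogram-match the real pan to the simulated pan (mean/std matching).
    p = (p - p.mean()) / p.std() * pan_sim.std() + pan_sim.mean()
    detail = p - pan_sim
    var = pan_sim.var(ddof=1)
    out = np.empty_like(X)
    for b in range(bands):
        gain = np.cov(X[b], pan_sim)[0, 1] / var   # GS gain for band b
        out[b] = X[b] + gain * detail
    return out.reshape(ms_up.shape)

# Placeholder data standing in for the upsampled 200 m SWIR bands and a 100 m
# VNIR band chosen as the high resolution pan; names are illustrative.
rng = np.random.default_rng(1)
swir_up = rng.random((2, 400, 400))
vnir_band1 = rng.random((400, 400))
print(gs_pansharpen(swir_up, vnir_band1).shape)    # -> (2, 400, 400)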
Assessments of the Sharpening Images
Images are prone to spectral distortions during sharpening operations, especially if the high spatial resolution image is not overlapped spectrally by the low spatial resolution multispectral bands or does not have similar spectral characteristics (Sarp, 2014; Pak et al., 2017). In this work, the statistical parameters of the histogram, the minimum, the maximum, the mean, the standard deviation (St. dev.) and the correlation coefficient (CC) were used in order to select the optimal VNIR band as the panchromatic band and to evaluate the quality of the sharpened images (Nikolakopoulos, 2006; Sarp, 2014).
Selected Band Analysis
The CC is used to statistically measure the strength of a linear association between two images. In this work, the CC was applied to determine the selected bands from the set of VNIR bands that had the highest correlations with each SWIR band. First, the VNIR bands were resized to match the spatial resolution of the SWIR bands using the nearest neighbour algorithm, which uses the nearest pixel values without interpolation in order to preserve the original pixel values of the SWIR bands. The opposite process was also performed to more fully understand the correlations between the VNIR and SWIR bands. A minimal sketch of this computation is given below.
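The band selection step amounts to computing a Pearson CC between co-registered band pairs after resampling; the array shapes and data in the sketch below are placeholders.

# Band-to-band correlation coefficient (CC) used for band selection.
import numpy as np

def correlation_coefficient(a, b):
    """Pearson CC between two co-registered band images."""
    return np.corrcoef(a.ravel().astype(np.float64),
                       b.ravel().astype(np.float64))[0, 1]

def upscale_nearest(img, factor=2):
    """Nearest-neighbour decimation to match a coarser band (100 m -> 200 m),
    keeping the original pixel values as described in the text."""
    return img[::factor, ::factor]

rng = np.random.default_rng(0)
vnir = rng.random((400, 400))   # 100 m VNIR band (placeholder data)
swir = rng.random((200, 200))   # 200 m SWIR band (placeholder data)
print(f"CC = {correlation_coefficient(upscale_nearest(vnir), swir):.3f}")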
Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6 describe the CC between the VNIR and SWIR bands. Greater similarity between the VNIR bands and the SWIR bands produced higher correlation coefficient values. The CC values of Area 3 were higher than those of Area 2, and those of Area 2 were higher than those of Area 1, indicating that the more water areas present, the higher the correlation coefficient values between the VNIR and SWIR bands.
Comparing Table 1 with Table 4, Table 2 with Table 5, and Table 3 with Table 6, the CC values between the VNIR and SWIR bands at 100 m spatial resolution were higher than those at 200 m spatial resolution for Area 1, whereas the CC value between the SWIR Band 1 and the SWIR Band 2 was lower. At Area 2 and Area 3, the opposite results were obtained. This showed that the CC values between the VNIR and SWIR bands with the downscaled results were higher than those with the upscaled results, and the CC values between the SWIR Band 1 and Band 2 with the downscaled results were slightly lower than those with the upscaled results, which indirectly reflected the sensitivities of the SWIR Band 1 and Band 2 to water areas. For the areas with more water bodies, such as Area 3, all of the CC values between the VNIR Bands 1-6 and the SWIR Band 1 and Band 2 were higher than 0.85. The maximum value in CC was observed at the VNIR Band 1, whereas the maximum values in Mean and St. dev. of these six VNIR bands were observed at the VNIR Band 6. For the areas with few water bodies, such as Area 2 and Area 1, the CC values between the VNIR Bands 1-6 and the SWIR Band 1 were higher than 0.67. The maximum value in CC was also observed at the VNIR Band 1, whereas the maximum values in Mean and St. dev. of these six VNIR bands were observed at the VNIR Band 6. Thus, these six VNIR bands need to be evaluated to select the optimal band for sharpening the SWIR Band 1.
In Area 1 and Area 2, for the SWIR Band 2, the CC values between the VNIR Bands 7-9 and the SWIR Band 2 were higher than 0.60. Except for Area 1 with the downscaled 200 m spatial resolution image (Table 4), the maximum value in CC occurred at the VNIR Band 7, whereas the maximum values in Mean and St. dev. of these VNIR bands were observed at the VNIR Band 9 and Band 7, respectively. Thus, these three VNIR bands need to be evaluated to select the optimal band for sharpening the SWIR Band 2.
It is noteworthy that the CC values between the VNIR bands and the SWIR Band 2 for Area 1 with the downscaled 200 m spatial resolution image were less than 0.50, and the maximum value was observed at the VNIR Band 10. However, Area 1 was one of the core demonstration zones for increasing winter wheat production. Therefore, the VNIR Bands 7-12 need to be evaluated to select the optimal band for sharpening the SWIR Band 2.
Comparison of the Sharpening Images
Quantitative and qualitative assessments could be performed by comparing the sharpened images. A qualitative comparison was done through visual inspection of gray level and texture changes (Figures 2-4). Quantitative evaluations could be done through consistency measurements between the sharpened images and available reference images (Nikolakopoulos, 2006). When no reference image exists, consistency measurements are known to be efficient at estimating a sharpened image using the correlation coefficient (CC), root mean square error (RMSE), structural similarity index (SSIM), erreur relative globale adimensionnelle de synthèse (ERGAS), universal image quality index (UIQI) and so on (Sarp, 2014; Vaiopoulos and Karantzalos, 2016; Pak et al., 2017). Nevertheless, we did not agree that it was a good idea to evaluate the sharpened images through consistency measurements between the sharpened images and the downscaled low spatial resolution images. Therefore, a quantitative evaluation was performed to examine the spectral information preservation through eight statistical parameters of the histogram: the minimum, the maximum, the mean, the St. dev., the median, the mode, the skewness and the kurtosis (Figures 5-7; see the sketch after this paragraph). The higher the CC values between the VNIR bands and the SWIR Band 1, the more similar the skewness values of the sharpened SWIR Band 2 images were to those of the raw 200 m SWIR Band 2; for Area 3 this was also true for the kurtosis values. For images with more water areas, such as Area 3, the statistical characteristics of the sharpened image with the downscaled VNIR bands as the low resolution pan band deviated obviously from those of the raw SWIR bands, although the spectral range was stretched, which may be helpful for image classification. The sharpened SWIR Band 1 image with the VNIR Band 1 and the SWIR Band 1 selected as the high resolution pan and the low resolution pan band, respectively, was more similar to the raw 200 m SWIR bands, which indicated that the VNIR Band 1 may be the optimal sharpening band for the SWIR for Area 3 and similar images. For Area 2, the optimal sharpened SWIR Band 1 image, which had spectral characteristics similar to the raw SWIR Band 1 and enhanced texture information, was obtained when the VNIR Band 1 was used as the high resolution pan band and the raw 200 m SWIR Band 1 was used as the low resolution pan band; the optimal sharpened SWIR Band 2 image, which had spectral characteristics similar to the raw SWIR Band 2 and enhanced texture information, was obtained when the VNIR Band 7 was used as the high resolution pan band and the downscaled 200 m VNIR Band 1 was used as the low resolution pan band. For Area 3, the difference between the SWIR Band 1 and SWIR Band 2 was small (Figure 4); moreover, the CC values between the SWIR bands and the first six VNIR bands were greater than 0.85 (Table 6), so it was difficult to evaluate the sharpened image by visual inspection; as mentioned above, the VNIR Band 1 may be the optimal sharpening band for the SWIR for Area 3 and similar images.
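The eight histogram statistics used in this evaluation can be computed directly, for example as in the sketch below; the input array is a placeholder standing in for a sharpened or raw SWIR band.

# The eight histogram statistics used to compare sharpened and raw SWIR bands.
import numpy as np
from scipy import stats

def histogram_statistics(img):
    x = img.ravel().astype(np.float64)
    mode = stats.mode(x, keepdims=False).mode   # most frequent value
    return {
        "min": x.min(), "max": x.max(),
        "mean": x.mean(), "st_dev": x.std(ddof=1),
        "median": np.median(x), "mode": mode,
        "skewness": stats.skew(x), "kurtosis": stats.kurtosis(x),
    }

band = np.random.default_rng(0).integers(0, 1024, (200, 200))  # placeholder
for name, value in histogram_statistics(band).items():
    print(f"{name}: {value:.3f}")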
For images with fewer water areas and more vegetation, such as Area 1, when the SWIR Band 1 or the SWIR Band 2 was used as the low resolution pan band, only one of the two sharpened SWIR band images showed enhanced spatial and texture information, whereas both sharpened SWIR band images from the first six VNIR bands used as the high resolution pan band gained spatial and texture enhancement when the average of the two SWIR bands or the downscaled VNIR band was selected as the low resolution pan band. By visual inspection, the latter was better (Figure 8). Moreover, when the downscaled VNIR band was used as the low resolution pan band, the sharpened SWIR band images from the fourteen VNIR bands were similar; in other words, the CC values between the VNIR bands and the SWIR bands had little effect on the sharpened images. Of course, the higher the CC values, the higher the quality of the sharpened images. However, it is noteworthy that this approach was susceptible to the registration between the downscaled VNIR bands and the SWIR bands. When a VNIR band with a high negative CC value with the SWIR Band 1 was used, such as the VNIR Bands 7-14 at Area 1, the sharpened SWIR Band 1 image with the SWIR Band 1 used as the low resolution pan band, the sharpened SWIR Band 2 image with the SWIR Band 2 used as the low resolution pan band, and both sharpened SWIR bands with the average of the two SWIR bands used as the low resolution pan band had spectral characteristics inverse to those of the raw 200 m SWIR bands. For the first six VNIR bands, by visual inspection, the sharpened SWIR Band 1 images using the same VNIR band as the low resolution pan band were most similar to the raw SWIR Band 1 image; the second most similar used the SWIR Band 2 as the low resolution pan band but had poor image quality; and those using the SWIR Band 1 or the average of the two SWIR bands as the low resolution pan band had more detailed texture information, especially those using the SWIR Band 1, which were similar to the VNIR band. Except when the SWIR Band 2 was used as the low resolution pan band, the sharpened SWIR Band 2 images from the other three types of low resolution pan band had spectral and texture characteristics similar to the raw SWIR Band 2 image; those using the average of the two SWIR bands as the low resolution pan band had more detailed texture information but also more noise. When the CC values between the VNIR bands and the SWIR Band 2 increased, the sharpened SWIR Band 2 image from the SWIR Band 1 used as the low resolution pan band and the sharpened SWIR Band 1 image from the SWIR Band 2 used as the low resolution pan band were more similar to the corresponding raw SWIR band images. Therefore, sharpening should be used to combine the low resolution SWIR image with the high resolution VNIR image to produce a 100 m spatial resolution image including 14 VNIR bands and two SWIR bands. In this work, the GS sharpening method embedded in ENVI software was applied based on four types of low resolution pan band: the SWIR Band 1, the SWIR Band 2, the average of the SWIR Band 1 and SWIR Band 2, and the downscaled VNIR band. For images with more water areas, such as Area 3, the VNIR Band 1 may be the optimal sharpening band. For Area 2, with few water and vegetated areas, the optimal sharpened SWIR Band 1 image was obtained when the VNIR Band 1 and the raw SWIR Band 1 were used as the high and low resolution pan bands, respectively, and the optimal sharpened SWIR Band 2 image was obtained when the VNIR Band 7 and the downscaled 200 m VNIR Band 1 were used as the high and low resolution pan bands, respectively. For images with fewer water areas and more vegetation, such as Area 1, the two sharpened SWIR band images from the first six VNIR bands used as the high resolution pan band were similar; by visual inspection, the sharpened SWIR images using the same VNIR band as the low resolution pan band were most similar to the raw SWIR band images. When the VNIR band with the higher CC value with the raw SWIR band was selected, more texture information was injected into the corresponding sharpened SWIR band image, while the other sharpened SWIR band image preserved spectral and texture characteristics similar to the raw SWIR band image. In the future, a comprehensive evaluation over more study areas, with different sharpening techniques, reference images and assessment indices, should be performed, including comparison of the spectral and texture information of the classified objects.
Figure 5, Figure 6 and Figure 7 showed that the mean and median of the sharpened images from the different VNIR bands as the high resolution pan band and the four different selections as the low resolution pan band rarely deviated from those of the raw 200 m SWIR band images for Area 1, Area 2 and Area 3, indicating that the overall radiance levels were stable before and after sharpening. The sharpened images and the raw 200 m SWIR band image had similar mode values at Area 1 and Area 2. For Area 3, most of the mode values

Figure 2. The raw and resized images on the SWIR bands of the WBSI of Tiangong-II at Area 1

Figure 4. The raw and resized images on the SWIR bands of the WBSI of Tiangong-II at Area 3

Figure 5. The minimum, maximum, mean, St. dev., median, mode, skewness and kurtosis of the sharpened images using the four types of low resolution pan band and the raw images on the SWIR bands of the WBSI of Tiangong-II at Area 1

Figure 7. The minimum, maximum, mean, St. dev., median, mode, skewness and kurtosis of the sharpened images using the four types of low resolution pan band and the raw images on the SWIR bands of the WBSI of Tiangong-II at Area 3

Figure 8. The sharpened SWIR image for Area 1
Table 1 .
The wavelength of each band of Tiangong-II
Table 1 .
The CC, Mean and St. dev.for the VNIR and SWIR bands with 100 m spatial resolution of the Area 1
Table 2 .
The CC, Mean and St. dev.for the VNIR and SWIR bands with 100 m spatial resolution of the Area 2
Table 3 .
The CC, Mean and St. dev.for the VNIR and SWIR bands with 100 m spatial resolution of the Area 3
Table 4 .
The CC, Mean and St. dev.for the VNIR and SWIR bands with 200 m spatial resolution of the Area 1
Table 5 .
The CC, Mean and St. dev.for the VNIR and SWIR bands with 200 m spatial resolution of the Area 2
Table 6 .
The CC, Mean and St. dev.for the VNIR and SWIR bands with 200 m spatial resolution of the Area 3 | 4,977 | 2018-04-30T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
Adipose tissue-derived mesenchymal stem cells and chitosan/poly (vinyl alcohol) nanofibrous scaffolds for cartilage tissue engineering
Osteoarthritis (OA) has been defined as a chronic inflammatory joint disease characterized by progressive articular cartilage degeneration. Recently, there has been growing interest in regenerative medicine using cell therapy and tissue engineering, in which cellular components combined with engineered scaffolds and bioactive materials are used to induce functional tissue regeneration. In the present study, nanofibrous scaffolds based on chitosan (CS)/poly(vinyl alcohol) (PVA) were used to develop a biologically functionalized biomaterial that mimics the extracellular matrix, allowing human adipose tissue-derived mesenchymal stem cells (ADSCs) to proliferate and differentiate into chondrogenic cells. The morphology of the nanofibrous mat was examined using a field emission scanning electron microscope (FE-SEM). The characteristic functional groups and the nature of the chemical bonds between atoms were evaluated from the Fourier transform infrared spectroscopy (FTIR) spectrum. The seeded cells were characterized morphologically by scanning electron microscopy and by flow cytometry for the expression of stem cell surface markers. The differentiation potential was verified after chondrogenic induction by analyzing the expression of chondrogenic marker genes using real-time PCR (RT-PCR). The current study suggests significant potential for the use of ADSCs with the nanofibrous scaffolds in improving osteoarthritis pathology.
great outcomes at 10 years (Bentley et al. 2013). However, complications may occur, such as graft failure, periosteal hypertrophy, and delamination (Wood et al. 2006; Peterson et al. 2000). Additionally, it has been reported that cells may lose their phenotype during expansion (Benya and Shaffer 1982; Takata et al. 2011). Consequently, there is growing interest in regenerative medicine, using cell therapy, in which cells are directly injected into the blood or into tissues, and tissue engineering, in which cellular components combined with engineered scaffolds and bioactive materials are used to induce functional tissue regeneration.
Mesenchymal stem cells (MSCs) are characterized by attachment to plastic culture vessels and by expression of the CD44, CD73, CD90, and CD105, but not the CD45, CD34, and CD14, cell surface markers. MSCs are multipotent cells with both hypo-immunogenic and immunomodulatory characteristics, which make them capable of homing to damaged tissues and promoting healing via repair processes. They are believed to possess low immunogenicity, as they express low levels of the major histocompatibility complex (Huang et al. 2016; Molina et al. 2015; Xu and Li 2014). It has recently been suggested that MSCs offer a new avenue for the management of osteoarthritis, in accordance with their capability of differentiating into chondrocytes and with the paracrine effects of secreted bioactive substances, which might be more important than the differentiated cells themselves in enhancing repair responses (Beris et al. 2005); the anti-inflammatory and immunomodulatory effects of MSCs may also delay the development of osteoarthritis (Counsel et al. 2015).
Designing a scaffold whose composition and biological, mechanical, and physicochemical properties imitate the extracellular matrix (ECM) of the damaged tissue is considered one of the significant tools of tissue engineering (Yang et al. 2007). Scaffolds should be non-immunogenic, non-toxic, biocompatible, and biodegradable. In addition, a scaffold should have suitable surface properties that provide in vitro cell adhesion and ingrowth and that provide the necessary space for neo-vascularization in vivo (Salgado et al. 2004). To target specific tissue engineering applications, many natural and synthetic polymers have been investigated.
Chitosan (CS) is a naturally derived biodegradable polysaccharide commonly used in tissue engineering because of its biodegradable, biocompatible, and non-toxic properties; it is therefore a safe material for use in biomedical applications (Ibrahim et al. 2016; Ismail et al. 2018). CS is a copolymer of glucosamine and N-acetylglucosamine linked in a β(1-4) manner. CS, as a derivative of chitin, has an intrinsic antibacterial activity; accordingly, it can decrease the infection rate of experimentally induced Staphylococcus aureus osteomyelitis in rabbits. It has been noted that CS combined with a variety of delivery materials, such as alginate, hydroxyapatite, hyaluronic acid, and growth factors, has potential applications in orthopedic tissue engineering (Li et al. 2005; Yamane et al. 2005; Hsieh et al. 2005).
Interestingly, it has been reported that CS blended with poly(vinyl alcohol) (PVA) has good mechanical and chemical characteristics (Charernsriwilaiwat et al. 2010). PVA is a water-soluble synthetic resin produced via polymerization of vinyl acetate monomer. PVA has been used in controlled-release systems and, owing to its biocompatible nature, has a variety of biomedical uses (Soppimath et al. 2000).
Water-soluble polymers, including polysaccharides (such as alginate) as well as synthetic polymers such as poly(ethylene oxide) (PEO), poly(vinyl alcohol) (PVA), and poly(vinyl pyrrolidone) (PVP), are known to be more biocompatible than other, organic-soluble polymers. The electrospinning process, which is of relatively low cost and low toxicity, is an interesting approach for regenerative medicine requirements (Jimmy and Kandasubramanian 2020; Krishnan et al. 2013).
Another important factor in tissue engineering is the scaffold fabrication method. Recently, researchers have focused on electrospinning for the manufacture of nanofibrous scaffolds suitable for 3D cell cultures for tissue regeneration (Li et al. 2002). Continuous nanofibers are formed in electrospinning owing to the electrostatic Coulombic repulsive forces applied throughout the elongation of the viscoelastic solution as it strengthens to form a fiber. Electrospinning is a simple method for producing nanofibers similar to the collagen component of the extracellular matrix (ECM). Fibers produced by this method feature the large surface-to-volume ratio and high porosity needed for tissue engineering, by which nanofibers allow better cellular spreading and attachment and an efficient nutrient supply to the cells (Hezma et al. 2017; El-Rafei 2015; El-Rafei et al. 2017).
The aim of the current study was to establish a suitable, physiologically and biochemically relevant microenvironment allowing ADSC proliferation and differentiation into chondrocyte-like cells using CS/PVA nanofiber scaffolds.
Preparation of CS/PVA solutions
Various combinations of the factors that control the quality of the electrospun fibers (e.g., the composition of the electrospinning solution and its viscosity, the applied voltage, and the distance between collector and nozzle) were investigated by a trial-and-error method. The reported conditions are the optimal ones, which gave fibers with a homogeneous structure and high quality. Fibers were prepared by dissolving chitosan (medium molecular weight, deacetylated chitin, poly(D-glucosamine), Aldrich) in 2% acetic acid solution for 2-3 h at room temperature until the formation of a clear solution. PVA (typical molecular weight = 124,000, 87-89% hydrolyzed, Sigma-Aldrich) was gradually added to the chitosan solution at 75 ± 5°C while stirring for an additional 2 h in order to enhance the dissolution of the PVA crystals. After complete dissolution, the prepared solution was stirred overnight with a magnetic stirrer at room temperature to obtain a homogeneous solution. The CS/PVA nanofibrous mat was prepared using an electrospinning apparatus (NaBond Company, China). The solution was transferred into a 10 ml plastic syringe equipped with a metallic capillary nozzle connected to a high-voltage power supply. The voltage was adjusted to 25 kV. The inner diameter of the nozzle was 0.49 mm, and its height above the collector was set at 10 cm. The selected flow rate was 0.7 mL/h. The electrospun fibers were collected on an aluminum foil collector. The electrospun mat was then collected, dried for 24 h, and stored for further characterization.
Characterization of the CS/PVA
The microstructure of the as-spun nanofibers was examined using field emission scanning electron microscopy (FE-SEM) (Philips XL30, Netherlands). The Fourier transform infrared spectroscopy (FTIR) spectrum of the nanofibrous mats was recorded using a Vertex 70 spectrometer (Bruker Optik, Germany). The nanofibers were mixed with KBr powder at a weight ratio of 1/100 nanofiber/KBr and then pressed to form a disc. The spectrum was recorded over the spectral range of 4000-400 cm⁻¹ with a spectral resolution of 2.0 cm⁻¹ and a scan speed of 2 mm⁻¹. The viscosity of the CS/PVA blend solutions was measured using a rotating viscometer (Brookfield viscometer DV-E, USA).
Isolation of adipose tissue-derived mesenchymal stem cells (ADSCs)
ADSCs were obtained from freshly isolated subcutaneous fat from healthy donors (n = 5, age: 22-41) undergoing cesarean section surgery, as described previously (Gimble and Guilak 2003), after written informed consent. The study was approved by the ethics committee of the National Research Centre (NRC). Adipose tissue was minced and washed 3 times with phosphate-buffered saline (PBS). Subsequently, the adipose tissue pieces were digested, while shaking, with 1 mg/mL collagenase type I (Gibco, USA) for 2 h at 37°C. The released cells and residual adipose tissue were centrifuged at 1200 rpm for 10 min. The pellet was re-suspended in a complete culture medium of alpha-DMEM (Gibco, USA) with 10% fetal bovine serum (FBS), 100 U/mL penicillin, and 100 μg/mL streptomycin. Cells were seeded at a density of 1 × 10⁵ cells/mL in 25 mm tissue flasks and incubated in a humidified atmosphere of 5% CO₂ at 37°C. Cells were subcultured when they reached 80% confluence (Alstrup et al. 2019).
Cell seeding of scaffolds and culture CS/PVA scaffolds were sterilized by UV light for 1 h before cell seeding. The cell seeding of scaffolds was performed in 6-well plates at a density of 1 × 10⁵ cells/well. Human adipose-derived stem cells (hADSCs) were harvested from the cell culture plates with 0.05% trypsin. The cell-seeded scaffolds were cultured in alpha-DMEM supplemented with 10% FBS and 1% penicillin-streptomycin.
Cell viability and proliferation assay
Cell viability on the scaffolds and on the tissue culture plate was assessed using the MTT cell proliferation assay kit (Roche Applied Science, Penzberg, Germany), following the manufacturer's instructions. After 1, 7, and 14 days of cell culture, 20 μl of MTT reagent was added to each well of the microtiter plates containing the scaffolds, and the cells were incubated for 4 h at 37°C. After 200 μl of solubilization solution (DMSO; Roche Diagnostics, Indianapolis, IN, USA) was added to each well, the plates were incubated overnight. The absorbance was measured at 595 nm using a microplate reader (Bio-Rad Laboratories, Inc., Hercules, CA, USA).
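Since the assay output is an absorbance reading, viability is usually reported relative to the plate control; the snippet below is a minimal Python sketch of that normalization. The blank-correction convention, variable names, and example values are assumptions for illustration, not part of the kit protocol described above.

    def relative_viability(a_sample, a_control, a_blank=0.0):
        # relative viability (%) from blank-corrected absorbance at 595 nm
        return 100.0 * (a_sample - a_blank) / (a_control - a_blank)

    # e.g., scaffold well A595 = 0.62, plate control A595 = 0.70, blank = 0.05
    # relative_viability(0.62, 0.70, 0.05) -> ~87.7%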
Cell adhesion assay
Cells were seeded on both the scaffold and a microtiter culture plate, as a control surface, at a density of 1 × 10⁴ cells/well and with the same surface area. Cells were allowed to adhere for 4, 16, and 24 h. The non-adherent cells were washed away gently with PBS, and the adherent cells in both conditions were counted.
Apoptosis assay
Cellular apoptosis was analyzed using an Annexin V-EGFP/PI kit (Nanjing KeyGEN Biotech, Nanjing, China). Briefly, cell pellets were re-suspended in a binding buffer, followed by incubation with 5 ml of Annexin V (conjugated with FITC) and propidium iodide (PI) staining in the dark for 10 min. Fluorescence was analyzed with the FACSCalibur flow cytometer. Cells positive for Annexin V-FITC and negative for PI were considered apoptotic, and those positive for both Annexin V-FITC and PI were considered necrotic.
Chondrogenic differentiation
Cell-seeded scaffolds were cultured in StemPro™ Chondrogenesis Differentiation media (Gibco, USA) for 21 days, and the media were changed twice a week.
Scanning Electron microscopy (SEM)
For both the differentiated and undifferentiated groups, cell attachment and differentiation were evaluated via SEM. The cell-seeded scaffolds were washed twice with PBS and fixed with 2.5% glutaraldehyde for 30 min, followed by 2% osmium tetroxide treatment for 30 min. After the washing steps, the scaffolds were dehydrated in a series of ethanol solutions of increasing concentration (30% to 100%) and finally dried with hexamethyldisilazane (HMDS). For SEM analysis, the cell-seeded scaffolds were sputter-coated with a 5 nm gold layer; a silver/carbon sputter coating was applied to the examined samples. The scaffolds were examined with a Quanta 400F scanning electron microscope (FEI Company, Oregon, USA) (Liu et al. 2013).
Reverse transcription quantitative polymerase chain reaction (RT-qPCR)
Total RNA was isolated from human MSCs using TRIzol. cDNA synthesis and real-time PCR were performed as described previously (Vimalraj and Selvamurugan 2015). cDNA was amplified in a 20-μl reaction mixture containing FastStart SYBR Green Master (Roche Applied Science, Penzberg, Germany) and a specific primer pair for each cDNA according to the published sequences. PCRs were prepared in duplicate, heated to 95°C for 10 min, and then run for 40 cycles of denaturation at 95°C for 15 s, annealing at 60°C for 1 min, and extension at 72°C for 20 s. RNA expression levels were quantified on a LightCycler 480 (Roche Diagnostics, Mannheim, Germany) relative to the glyceraldehyde-3-phosphate dehydrogenase (GAPDH) housekeeping gene. The primer sets were as follows: COL2A1: GAGACAGCATGACGCCGAG (forward) and GCGGATGCTCTCAATCTGGT (reverse); aggrecan: TCGAGGACAGCGAGGCC (forward) and TCGAGGGTGTAGCGTGTAGAGA (reverse); SOX-9: GTACCCGCACTTGCACAAC (forward) and GTAATCCGGGTGGTCCTTCT (reverse); MMP13: AACGCCAGACAAATGTGACC (forward) and AGGTCATGAGAAGGGTGCTC (reverse).
Statistical analysis
A t-test was used to evaluate the statistical significance of the results. Differences with P values < 0.05 were considered significant. The cycle threshold (Ct) values for each sample were given automatically by the iCycler according to the amplification curves. The baseline and threshold values were set manually as recommended by the PCR array user manual; the selected threshold was 20.0 and the baseline cycles were 2-10. The relative mRNA expression was calculated using the ΔΔCt (delta-delta cycle threshold) method (Vimalraj et al. 2014), and the data were normalized, across all plates, to the GAPDH housekeeping gene.
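For reference, the ΔΔCt method named above reduces to a few arithmetic steps. The sketch below implements the standard 2^(−ΔΔCt) fold-change calculation in Python; the example Ct values are invented for illustration.

    def fold_change(ct_target_treated, ct_gapdh_treated,
                    ct_target_control, ct_gapdh_control):
        # delta Ct: normalize the target gene to the GAPDH housekeeping gene
        d_ct_treated = ct_target_treated - ct_gapdh_treated
        d_ct_control = ct_target_control - ct_gapdh_control
        dd_ct = d_ct_treated - d_ct_control   # delta-delta Ct
        return 2.0 ** (-dd_ct)                # relative expression

    # Invented example: target Ct drops from 26 to 24 while GAPDH stays at 18
    # fold_change(24, 18, 26, 18) -> 4.0 (four-fold upregulation)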
Microstructure of electrospun mat
Various blended weight ratios of CS/PVA solutions were used to control the quality of the electrospun fibers, following the trial-and-error method. The optimal chitosan/PVA composition was chosen based on the morphology of the produced nanofibers.
When spinning blended CS/PVA solutions with weight ratios greater than 10:90, drops could not be ejected and consequently blocked the needle. This might be owing to the high viscosity of the concentrated solution, which caused clogging of the needle. The viscosity of solutions with compositions greater than CS/PVA = 20:80 was too high for them to be spun. At the composition CS/PVA = 20:80, electrospun fibers started to appear on the collector, but the ejection of the jet was non-uniform, as shown in Fig. 1. This can be attributed to the viscoelastic force of the polymeric solution, which made thinning of the charged jet less likely to happen; consequently, the jet segment was prevented from being stretched by the constant Coulombic repulsion force. There was a suitable viscosity range of the polymer solution within which the polymer solutions were electrospinnable and beyond which droplets were likely to form.
The suitable weight ratio for CS/PVA was 10:90, which yielded nanofibrous mats with a homogeneous architecture, as presented in Fig. 2. The following electrospinning parameters were applied: an electrical potential of 25 kV, a tip-to-collector distance of 10 cm, and a flow rate of 0.7 mL/h; the viscosity of the solution was approximately 1195 cP. The diameter of the nanofibers was within 50-200 nm, and beads had clearly disappeared.
FTIR of the Nanofibers
The FTIR spectrum of the CS/PVA nanofiber mat is shown in Fig. 3. The two peaks at 1427 cm⁻¹ and 1537 cm⁻¹ arise from the carboxylic acid [−COOH] groups and the symmetric deformation of the amino [−NH₃⁺] groups, respectively, owing to the ionization of the primary amino groups in the acidic medium. The peak at 1730 cm⁻¹ was ascribed to the carboxylic acid dimer (Alhosseini et al. 2012); this peak is due to the acetic acid employed for dissolving the chitosan. The peak positioned at 1248 cm⁻¹ is related to the C-O of the CH₂OH chitosan group forming a hydrogen bond with the OH group of the PVA, confirming the fabrication of CS/PVA blend nanofibers (Gholipour et al. 2009). The absorption peak at 1661 cm⁻¹ was attributed to the C=O stretching of the acetyl group (amide I), as well as the C=C stretching vibration of the PVA. The band at 1599 cm⁻¹ was assigned to the N-H bending and stretching (amide II). The C-O asymmetric stretching band was around 1089 cm⁻¹. The broad absorption peak in the range 3600-3000 cm⁻¹ could be related to the O-H and N-H stretching vibrations (Jia et al. 2007).
Characterization of isolated hADMSCs
Fibroblast-like cells started to develop from the adipose tissue cell pellets between the 7th and 10th days of primary culture and reached up to 80-90% confluence, in a whirlpool or radiating pattern, after 10 days (Fig. 4). Cells were characterized as strongly positive for the mesenchymal cell surface markers CD105 (endoglin; expression over 84%), CD90 (Thy-1, thymocyte antigen-1; expression in more than 82% of cells), CD271 (LNGFR, low-affinity nerve growth factor receptor; expression in more than 85% of cells), and CD73 (ecto-5′-nucleotidase; expression in more than 77% of cells), and showed low expression of HLA-DR (less than 12%) and CD34 (an early hematopoietic stem cell marker; expression less than 16%). These results confirm that the cultured cells presented the typical MSC immunophenotype: CD105+/CD73+/CD90+/CD271+/CD34−/HLA-DR− (Fig. 5). RT-PCR results demonstrated that the ADSCs expressed the pluripotency markers Oct-4, Nanog, and Sox-2 at passage 3 (Fig. 6). Differentiation potential was demonstrated at day 21 after induction through calcium mineralization, confirmed by positive Alizarin Red S staining, for osteogenesis; accumulation of lipid vacuoles, shown by Oil Red staining, for adipogenesis; and micromass pellet formation, shown by Alcian blue staining, for chondrogenesis.
Cell adhesion and viability
The adhesion capability of seeded cells was almost the same on the scaffold and on the plastic tissue culture plates, as estimated over the same period and surface area (Fig. 7a). The MTT assay demonstrated a statistically significant difference in viable cells between plate- and scaffold-seeded cells at days 7 and 14 of culture (p < 0.05) (Fig. 7b).
Apoptosis assay
Cells that were positive for Annexin V-FITC and negative for PI were considered apoptotic, while cells positive for both Annexin V-FITC and PI were considered necrotic. As shown in Fig. 8, most of the cells were living (more than 75%).
Cell attachment and differentiation potential
SEM images at day 21 showed that both undifferentiated ADSCs and ADSCs cultured with chondrogenic differentiation media were distributed regularly on the scaffolds, forming a cell monolayer on top of the constructs and filling the pore spaces (Fig. 9).
Expression of Chondrogenic marker genes
Quantitatively, the differentiation potential of ADSCs on the CS/PVA scaffolds was verified after chondrogenic induction by analyzing the expression of chondrogenic marker genes using real-time RT-PCR. Expression of COL2A1 [a fibrillar collagen present in cartilage and in the vitreous humor of the eye], SOX9 [which recognizes the sequence CCTTGAG along with other members of the HMG-box class of DNA-binding proteins and acts through chondrocyte differentiation, together with steroidogenic factor 1], and aggrecan [the most abundant proteoglycan in cartilage, which during early development makes up much of the skeleton] was differentially upregulated, while MMP13 [involved in the breakdown of the extracellular matrix in normal physiological processes] was downregulated (Fig. 10).
Discussion
Articular cartilage is a hyaline tissue without any blood, lymphatic, or nerve supply. It is characterized by a single cell type, the chondrocyte, which is responsible for the synthesis of a highly hydrated extracellular matrix composed of collagen fibers, which provide tensile strength, and proteoglycan aggregates, mainly aggrecan (responsible for the compressive strength, attached along a filament of hyaluronic acid). Cartilage provides protection to the subchondral bone and is considered a lubricant and a shock absorber (Baugé and Boumédiene 2015; Pujol et al. 2008). During life, acute trauma can cause articular cartilage defects. In addition, biochemical changes with aging may induce the degradation of the cartilage matrix and result in chronic diseases such as osteoarthritis. Recently, osteoarthritis has been defined as a progressive chronic inflammatory joint disease due to an abnormal immune response: the early inflammatory response is induced by innate immune cells, while the chronic, relapsing course of osteoarthritic inflammation is driven by adaptive immunity (Kandahari et al. 2015; Jaime et al. 2017).
Nanofiber scaffolds composed of ultra-fine biodegradable polymeric fibers, morphologically similar to the natural ECM, have widely emerged as potential scaffolds for cartilage tissue engineering. It merits mention that while nanofibrous structures can mimic the fiber diameters, composition, and alignment of the ECM of articular cartilage, it is the pairing of these scaffolds with appropriate cells that will accomplish the best tissue engineering outcomes for articular cartilage (Chiang and Jiang 2009). It has been reported that MSCs affect the joint microenvironment and facilitate tissue repair not only through their differentiation potential, but also via paracrine effects. The aim of this study was to measure the extent of biocompatibility of CS/PVA nanofibrous scaffolds fabricated by electrospinning to mimic the biological and biochemical milieu of the native ECM, in order to encourage the proliferation and chondrogenic differentiation of the isolated adipose tissue-derived mesenchymal stem cells (ADSCs). In the current study, the non-seeded CS/PVA nanofibrous scaffold provided a significantly high surface-area-to-volume ratio, as it consisted of long nanofibers with a large surface area, small pores, and a large pore volume. Cells used for the study were at the third passage of the 2D culture system, since the differentiation potential of ADSCs declines at higher passages. Isolated ADSCs had a typical spindle-shaped morphology with colony formation, as seen in Fig. 4. Cells expressed the immunophenotypic surface markers (CD105+/CD73+/CD90+/CD271+/CD34−/HLA-DR−) shown in Fig. 5 and the pluripotency markers (OCT4, SOX2, and Nanog), and also had multidifferentiation potential, detected by Alizarin red stain for osteoblasts, Alcian blue stain for chondrocytes, and oil red stain for adipocytes, as shown in Fig. 6. This is consistent with the International Society for Cellular Therapy (Krampera et al. 2013) position that three criteria must be fulfilled for the MSC phenotype: adherence to plastic with characteristic morphology, an appropriate immunophenotypic profile, and multipotent differentiation potential. Cell viability was assessed by Annexin V staining, which showed that more than 75% of cells were viable.
The MTT test was used to determine the viability and proliferation of cells on days 1, 7, and 14 of culture on the scaffolds. As shown in Fig. 7, the scaffold did not induce any negative impact or toxic effect on the proliferation of the seeded cells. It has been reported that MSCs show weak attachment and proliferation in the first 3 days of culture. This delay, which is a consistent biological phenomenon, may be due to the need for cellular adaptation to the new matrix and environment, as there is an alteration in nutrient consistency, or to the presence of solvent remnants in the media during the first few days (Kheradmandi et al. 2016).
SEM images also showed a uniform cell distribution and high scaffold biocompatibility and demonstrated good interaction between the cells and the scaffold, a promising solution for tissue engineering. For chondrogenic differentiation of ADSCs on the CS/PVA nanofiber mat, ready-to-use chondrogenesis supplements for MSCs (StemPro A10071-01 chondrogenesis differentiation kit; Gibco/Life Technologies, Darmstadt, Germany) were applied for 21 days. Detection methods for chondrogenic differentiation vary from lineage-specific immunological or histological assays to the direct detection of chondrocyte-specific extracellular matrix (ECM) protein and gene expression.
Chondrogenic gene markers known to be expressed at different stages of chondrogenic differentiation over time (Xu et al. 2008), including SOX9, aggrecan, COL2A1, and MMP13, were estimated 21 days after induction by qRT-PCR. Expression of SOX9 mRNA, which encodes the principal transcription factor for chondrogenic genes (Akiyama and Lefebvre 2011), was upregulated on CS/PVA scaffolds following chondrogenic stimulation. SOX9 is required for chondrogenic lineage commitment during mesenchymal stem cell condensation (Quintana 2009). It was also reported that SOX9 inhibits the key osteoblastogenesis transcription factor runt-related transcription factor 2 (RUNX2) (Studer et al. 2012). Aggrecan as well as COL2A1 were significantly upregulated, suggesting a promotive impact of CS/PVA on the chondrogenic differentiation of ADSCs in vitro. MMP-13 (matrix metalloproteinase 13) was significantly downregulated.
Fig. 10. The mRNA expression levels of pluripotent stem cell markers were analyzed by real-time PCR and normalized to their respective GAPDH levels. Values are represented as mean + SD.
Several studies have reported that the process of endochondral ossification takes place during the progression from the early commitment stage to the late hypertrophic stage of chondrogenic differentiation (Gawlitta et al. 2010). This late hypertrophic progression initiates the formation of a mineralized cartilaginous matrix, which is eventually converted into bone (Kronenberg 2003). It has been reported that in vitro chondrogenically differentiated bone marrow-derived MSCs express several hypertrophy-related genes, such as matrix metalloproteinase 13 (MMP-13) and type X collagen (Coleman et al. 2013), leading to unwanted formation of calcified matrix when subsequently implanted in the subcutis of nude mice (Dickhut et al. 2009).
Conclusion
Adult mesenchymal stem cells along with biomaterial scaffolds seem to be attractive candidates for regenerating dysfunctional articular cartilage, owing to their chondrogenic differentiation potential and immunomodulatory characteristics. The current study suggests significant potential applications for human adipose tissue-derived mesenchymal stem cells (ADSCs) with chitosan/poly(vinyl alcohol) nanofibrous scaffolds in improving osteoarthritis pathology. Our future plan is to establish a controlled animal model to study whether human mesenchymal stem cells along with CS/PVA nanofibrous scaffolds can induce structural joint improvement in osteoarthritis in vivo. | 5,731.6 | 2020-06-10T00:00:00.000 | [
"Biology",
"Materials Science",
"Medicine"
] |
Semantic Neural Machine Translation Using AMR
It is intuitive that semantic representations can be useful for machine translation, mainly because they can help in enforcing meaning preservation and handling data sparsity (many sentences correspond to one meaning) of machine translation models. On the other hand, little work has been done on leveraging semantics for neural machine translation (NMT). In this work, we study the usefulness of AMR (abstract meaning representation) on NMT. Experiments on a standard English-to-German dataset show that incorporating AMR as additional knowledge can significantly improve a strong attention-based sequence-to-sequence neural translation model.
Introduction
It is intuitive that semantic representations ought to be relevant to machine translation, given that the task is to produce a target language sentence with the same meaning as the source language input. Semantic representations formed the core of the earliest symbolic machine translation systems, and have been applied to statistical but non-neural systems as well.
Leveraging syntax for neural machine translation (NMT) has been an active research topic (Stahlberg et al., 2016; Aharoni and Goldberg, 2017; Li et al., 2017; Chen et al., 2017; Bastings et al., 2017; Wu et al., 2017; Chen et al., 2018). On the other hand, exploring semantics for NMT has so far received relatively little attention. Recently, Marcheggiani et al. (2018) exploited semantic role labeling (SRL) for NMT, showing that the predicate-argument information from SRL can improve the performance of an attention-based sequence-to-sequence model by alleviating the ''argument switching'' problem, one frequent and severe issue faced by NMT systems (Isabelle et al., 2017). Figure 1(a) shows one example of semantic role information, which only captures the relations between a predicate (gave) and its arguments (John, wife, and present).
Other important information, such as the relation between John and wife, cannot be incorporated.
In this paper, we explore the usefulness of abstract meaning representation (AMR) (Banarescu et al., 2013) as a semantic representation for NMT. AMR is a semantic formalism that encodes the meaning of a sentence as a rooted, directed graph. Figure 1(b) shows an AMR graph, in which the nodes (such as give-01 and John) represent the concepts, and the edges (such as :ARG0 and :ARG1) represent the relations between the concepts they connect. Compared with semantic roles, AMRs capture more relations, such as the relation between John and wife (represented by the subgraph within dotted lines). In addition, AMRs directly capture entity relations and abstract away inflections and function words. As a result, they can serve as a source of knowledge for machine translation that is orthogonal to the textual input. Furthermore, structural information from AMR graphs can help reduce data sparsity when training data is not sufficient for large-scale training.
We fill in this gap, taking an attention-based sequence-to-sequence system similar to Bahdanau et al. (2015) as our baseline. To leverage knowledge within an AMR graph, we adopt a graph recurrent network (GRN) as the AMR encoder. In particular, a full AMR graph is considered as a single state, with the nodes in the graph being its substates. State transitions are performed on the graph recurrently, allowing substates to exchange information through edges. At each recurrent step, each node advances its current state by receiving information from the current states of its adjacent nodes. Thus, with an increasing number of recurrent steps, each node receives information from a larger context. Figure 3 shows the recurrent transition, where all nodes work simultaneously. Compared with other methods for encoding AMRs (Konstas et al., 2017), the GRN keeps the original graph structure, and thus no information is lost. For the decoding stage, two separate attention mechanisms are adopted for the AMR encoder and the sequential encoder, respectively.
Experiments on WMT16 English-to-German data (4.17M sentence pairs) show that adopting AMR significantly improves a strong attention-based sequence-to-sequence baseline (25.5 vs 23.7 BLEU). When trained with small-scale (226K) data, the improvement increases (19.2 vs 16.0 BLEU), which shows that the structural information from AMR can alleviate data sparsity when training data are not sufficient. To our knowledge, we are the first to investigate AMR for NMT.
Related Work
Most previous work on exploring semantics for statistical machine translation (SMT) studies the usefulness of predicate-argument structure from semantic role labeling (Wong and Mooney, 2006; Wu and Fung, 2009; Liu and Gildea, 2010; Baker et al., 2012). Jones et al. (2012) first convert Prolog expressions into graphical meaning representations, leveraging synchronous hyperedge replacement grammar to parse the input graphs while generating the outputs. Their graphical meaning representation is different from AMR under a strict definition, and their experimental data are limited to 880 sentences. We are the first to investigate AMR on a large-scale machine translation task.
Recently, Marcheggiani et al. (2018) investigated SRL on NMT. The predicate-argument structures are encoded via graph convolutional network (GCN) layers (Kipf and Welling, 2017), which are laid on top of regular BiRNN or CNN layers. Our work is in line with exploring semantic information, but different in exploiting AMR rather than SRL for NMT. In addition, we leverage a GRN for modeling AMRs rather than GCN, which is formally consistent with the RNN sentence encoder. Since there is no one-to-one correspondence between AMR nodes and source words, we adopt a doubly attentive LSTM decoder, which is another major difference from Marcheggiani et al. (2018).
GRNs have recently been used to model graph structures in NLP tasks. In particular, previous work used a GRN model to represent raw sentences by building a graph structure of neighboring words and a sentence-level node, showing that the encoder outperforms BiLSTMs and Transformer (Vaswani et al., 2017) on classification and sequence labeling tasks, and built a GRN for encoding AMR graphs for text generation, showing that the representation is superior to a BiLSTM on serialized AMR. We extend that line of work by investigating the usefulness of AMR for neural machine translation. To our knowledge, we are the first to use a GRN for machine translation.
In addition to GRNs and GCNs, there have been other graph neural networks, such as graph gated neural network (GGNN) (Li et al., 2015b;Beck et al., 2018). Because our main concern is to empirically investigate the effectiveness of AMR for NMT, we leave it to future work to compare GCN, GGNN, and GRN for our task.
Baseline: Attention-Based BiLSTM
We take the attention-based sequence-to-sequence model of Bahdanau et al. (2015) as the baseline, but use LSTM cells (Hochreiter and Schmidhuber, 1997) instead of GRU cells (Cho et al., 2014).
BiLSTM Encoder
The encoder is a bidirectional LSTM on the source side. Given a sentence, two sequences of hidden states are generated, representing the input word sequence x_1, x_2, ..., x_N in the right-to-left and left-to-right directions, respectively; for each word, the hidden states in the two directions are later concatenated into a single vector.
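A minimal PyTorch sketch of such a bidirectional LSTM encoder follows; the vocabulary size and dimensions are illustrative assumptions rather than the paper's exact configuration (which used 500- or 800-dimensional embeddings and hidden states).

    import torch
    import torch.nn as nn

    class BiLSTMEncoder(nn.Module):
        def __init__(self, vocab_size, emb_dim=500, hidden_dim=500):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            # bidirectional=True runs left-to-right and right-to-left passes
            self.lstm = nn.LSTM(emb_dim, hidden_dim,
                                batch_first=True, bidirectional=True)

        def forward(self, token_ids):
            # token_ids: (batch, N) -> H: (batch, N, 2 * hidden_dim);
            # each position concatenates the forward and backward states
            H, _ = self.lstm(self.embed(token_ids))
            return H  # attention memory over source words

    # enc = BiLSTMEncoder(vocab_size=16000)
    # H = enc(torch.randint(0, 16000, (2, 12)))  # two sentences, 12 tokens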
Attention-Based Decoder
The decoder yields a word sequence in the target language y_1, y_2, ..., y_M by calculating a sequence of hidden states s_1, s_2, ..., s_M recurrently. We use an attention-based LSTM decoder (Bahdanau et al., 2015), where the attention memory H is the concatenation of the attention vectors among all source words; each attention vector h_i is the concatenation of the encoder states of an input token in both directions, and N is the number of source words. While generating the m-th word, the decoder considers four factors: (1) the attention memory H; (2) the previous hidden state of the LSTM model s_{m-1}; (3) the embedding of the current input (previously generated word) e_{y_m}; and (4) the previous context vector ζ_{m-1} from the attention memory H. When m = 1, we initialize ζ_0 as a zero vector, set e_{y_1} to the embedding of the sentence start token ''<s>'', and calculate s_0 from the last steps of the encoder states via a dense layer, $s_0 = W_1 [\overleftarrow{h}_1; \overrightarrow{h}_N] + b_1$, where W_1 and b_1 are model parameters.
For each decoding step m, the decoder feeds the concatenation of the embedding of the current input e_{y_m} and the previous context vector ζ_{m-1} into the LSTM model to update its hidden state, $s_m = \mathrm{LSTM}(s_{m-1}, [e_{y_m}; \zeta_{m-1}])$. Then the attention probability α_{m,i} on the attention vector h_i ∈ H for the current decoding step is calculated as $\epsilon_{m,i} = v_2^{\top} \tanh(W_h h_i + W_s s_m + b_2)$ and $\alpha_{m,i} = \exp(\epsilon_{m,i}) / \sum_{j=1}^{N} \exp(\epsilon_{m,j})$, where W_h, W_s, v_2, and b_2 are model parameters, and the new context vector is $\zeta_m = \sum_{i=1}^{N} \alpha_{m,i} h_i$. The output probability distribution over the target vocabulary at the current state is calculated by $P_{vocab} = \mathrm{softmax}(V_3 [s_m; \zeta_m] + b_3)$ (Equation 1), where V_3 and b_3 are learnable parameters.
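The attention step can be made concrete with a short PyTorch sketch of an additive (Bahdanau-style) scorer, one plausible reading of the equations as reconstructed above; layer names and dimensions are assumptions.

    import torch
    import torch.nn as nn

    class AdditiveAttention(nn.Module):
        def __init__(self, mem_dim, state_dim, att_dim=500):
            super().__init__()
            self.w_h = nn.Linear(mem_dim, att_dim, bias=False)   # W_h
            self.w_s = nn.Linear(state_dim, att_dim, bias=True)  # W_s, b_2
            self.v = nn.Linear(att_dim, 1, bias=False)           # v_2

        def forward(self, H, s_m):
            # H: (batch, N, mem_dim); s_m: (batch, state_dim)
            scores = self.v(torch.tanh(self.w_h(H)
                                       + self.w_s(s_m).unsqueeze(1)))
            alpha = torch.softmax(scores.squeeze(-1), dim=-1)    # (batch, N)
            # context vector zeta_m: attention-weighted sum of memory rows
            context = torch.bmm(alpha.unsqueeze(1), H).squeeze(1)
            return alpha, context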
Incorporating AMR
Figure 2 shows the overall architecture of our model, which adopts a BiLSTM (bottom left) and our graph recurrent network (GRN) (bottom right) for encoding the source sentence and the AMR, respectively. An attention-based LSTM decoder is used to generate the output sequence in the target language, with attention models over both the sequential encoder and the graph encoder. The attention memory for the graph encoder comes from the last step of the graph state transition process, which is shown in Figure 3.
Encoding AMR with GRN
Figure 3 shows the overall structure of our graph recurrent network for encoding AMR graphs. Formally, given an AMR graph G = (V, E), we use a hidden state vector a_j to represent each node v_j ∈ V. The state of the graph can thus be represented as $g = \{a_j\}|_{v_j \in V}$.
In order to capture non-local interactions between nodes, information exchange between nodes is executed through a sequence of state transitions, leading to a sequence of states g_0, g_1, ..., g_T, where $g_t = \{a^t_j\}|_{v_j \in V}$ and T is the number of state transitions, a hyperparameter. The initial state g_0 consists of a set of initial node states a^0_j = a_0, where a_0 is a vector of all zeros. A recurrent neural network is used to model the state transition process. In particular, the transition from g_{t-1} to g_t consists of a hidden state transition for each node (such as from a^{t-1}_j to a^t_j), as shown in Figure 3. At each state transition step t, our model conducts direct communication between a node and all nodes that are directly connected to it. To avoid gradient diminishing or bursting, an LSTM (Hochreiter and Schmidhuber, 1997) is adopted, where a cell c^t_j is taken to record memory for a^t_j. We use an input gate i^t_j, an output gate o^t_j, and a forget gate f^t_j to control information flow from the inputs to the output a^t_j. The inputs include representations of the edges that are connected to v_j, where v_j can be either the source or the target of the edge. We define each edge as a triple (i, j, l), where i and j are the indices of the source and target nodes, respectively, and l is the edge label; x^l_{i,j} is the representation of edge (i, j, l), detailed in Section 4.1.1. The inputs for v_j are grouped into incoming and outgoing edges before being summed up: $\phi_j = \sum_{(i,j,l) \in E_{in}(j)} x^l_{i,j}$ and $\hat{\phi}_j = \sum_{(j,k,l) \in E_{out}(j)} x^l_{j,k}$, where E_{in}(j) and E_{out}(j) are the sets of incoming and outgoing edges of v_j, respectively.
In addition to edge inputs, our model also takes the hidden states of the incoming and outgoing neighbors of each node during a state transition.
Taking v_j as an example, the states of its incoming and outgoing neighbors are summed up before being passed to the cell and gate nodes: $\psi_j = \sum_{(i,j,l) \in E_{in}(j)} a^{t-1}_i$ and $\hat{\psi}_j = \sum_{(j,k,l) \in E_{out}(j)} a^{t-1}_k$. Based on the above definitions of $\phi_j$, $\hat{\phi}_j$, $\psi_j$, and $\hat{\psi}_j$, the state transition from g_{t-1} to g_t, as represented by a^t_j, can be defined as:
$i^t_j = \sigma(W_i \phi_j + \hat{W}_i \hat{\phi}_j + U_i \psi_j + \hat{U}_i \hat{\psi}_j + b_i)$
$o^t_j = \sigma(W_o \phi_j + \hat{W}_o \hat{\phi}_j + U_o \psi_j + \hat{U}_o \hat{\psi}_j + b_o)$
$f^t_j = \sigma(W_f \phi_j + \hat{W}_f \hat{\phi}_j + U_f \psi_j + \hat{U}_f \hat{\psi}_j + b_f)$
$u^t_j = \sigma(W_u \phi_j + \hat{W}_u \hat{\phi}_j + U_u \psi_j + \hat{U}_u \hat{\psi}_j + b_u)$
$c^t_j = f^t_j \odot c^{t-1}_j + i^t_j \odot u^t_j$
$a^t_j = o^t_j \odot \tanh(c^t_j)$
where $i^t_j$, $o^t_j$, and $f^t_j$ are the input, output, and forget gates mentioned earlier, $u^t_j$ is the candidate update, $c^{t-1}_j$ and $c^t_j$ are the memory cells, and the W, Ŵ, U, Û, and b terms are model parameters.
With this state transition mechanism, information of each node is propagated to all its neighboring nodes after each step. So after several transition steps, each node state contains the information of a large context, including its ancestors, descendants, and siblings. For the worst case where the input graph is a chain of nodes, the maximum number of steps necessary for information from one arbitrary node to reach another is equal to the size of the graph. We experiment with different numbers of transition steps to study the effectiveness of global encoding.
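To make the message passing concrete, here is a simplified PyTorch sketch of one graph state transition. It folds the summed edge representations and neighbor states into the input of a standard LSTM cell, which approximates, but does not exactly reproduce, the gate parameterization above; the adjacency-matrix encoding of the graph is also an assumption made for brevity.

    import torch
    import torch.nn as nn

    class GraphStateStep(nn.Module):
        # one recurrent transition over all nodes of a labeled graph
        def __init__(self, edge_dim, hidden_dim):
            super().__init__()
            # input = [incoming-edge sum; outgoing-edge sum;
            #          incoming-neighbor sum; outgoing-neighbor sum]
            self.cell = nn.LSTMCell(2 * edge_dim + 2 * hidden_dim, hidden_dim)

        def forward(self, a, c, x_in, x_out, adj_in, adj_out):
            # a, c: (num_nodes, hidden_dim) node hidden/cell states
            # x_in, x_out: (num_nodes, edge_dim) summed edge representations
            # adj_in, adj_out: (num_nodes, num_nodes) 0/1 float adjacency
            psi_in = adj_in @ a    # sum of incoming-neighbor states
            psi_out = adj_out @ a  # sum of outgoing-neighbor states
            inputs = torch.cat([x_in, x_out, psi_in, psi_out], dim=-1)
            return self.cell(inputs, (a, c))  # new (a, c) for every node

    # Repeating this step T (e.g., 10) times lets information travel T hops.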
Input Representation
The edges of an AMR graph contain labels, which represent relations between the nodes they connect and are thus important for modeling the graphs. The representation for each edge (i, j, l) is defined as $x^l_{i,j} = W_4([e_l; e_i]) + b_4$, where e_l and e_i are the embeddings of edge label l and source node v_i, and W_4 and b_4 are model parameters.
Incorporating AMR Information with a Doubly Attentive Decoder
There is no one-to-one correspondence between AMR nodes and source words. To incorporate additional knowledge from an AMR graph, an external attention model is adopted over the baseline model. In particular, the attention memory from the AMR graph is the last graph state g_T. The contextual vector based on the graph state is calculated as $\tilde{\epsilon}_{m,i} = \tilde{v}_2^{\top} \tanh(W_a a^T_i + \tilde{W}_s s_m + \tilde{b}_2)$ and $\tilde{\alpha}_{m,i} = \exp(\tilde{\epsilon}_{m,i}) / \sum_{j=1}^{N} \exp(\tilde{\epsilon}_{m,j})$, where $W_a$, $\tilde{W}_s$, $\tilde{v}_2$, and $\tilde{b}_2$ are model parameters.
The new context vector $\tilde{\zeta}_m$ is calculated via $\tilde{\zeta}_m = \sum_{i=1}^{N} \tilde{\alpha}_{m,i} a^T_i$. Finally, $\tilde{\zeta}_m$ is incorporated into the calculation of the output probability distribution over the target vocabulary (previously defined in Equation 1): $P_{vocab} = \mathrm{softmax}(V_3 [s_m; \zeta_m; \tilde{\zeta}_m] + b_3)$.
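A sketch of how the two context vectors might be combined at the output layer is given below, reusing the attention sketch from earlier (a second instance of which would produce the graph context); the concatenation order and the single projection layer are assumptions consistent with the reconstructed equations.

    import torch
    import torch.nn as nn

    class DoublyAttentiveOutput(nn.Module):
        def __init__(self, state_dim, seq_ctx_dim, graph_ctx_dim, vocab_size):
            super().__init__()
            # plays the role of V_3 and b_3 over the concatenated features
            self.proj = nn.Linear(state_dim + seq_ctx_dim + graph_ctx_dim,
                                  vocab_size)

        def forward(self, s_m, zeta_m, zeta_graph_m):
            # s_m: decoder state; zeta_m: sequential context;
            # zeta_graph_m: AMR graph context from the second attention model
            logits = self.proj(torch.cat([s_m, zeta_m, zeta_graph_m], dim=-1))
            return torch.log_softmax(logits, dim=-1)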
The models are trained with a cross-entropy objective over the training instances, $\ell = -\sum_j \log p(Y^{(j)} \mid X^{(j)}; \theta)$, where X^{(j)} represents the inputs for the jth instance (a source sentence for our baseline, or a source sentence paired with an automatically parsed AMR graph for our model) and θ represents the model parameters.
Experiments
We empirically investigate the effectiveness of AMR for English-to-German translation.
Setup
Data We use the WMT16 English-to-German dataset, which contains around 4.5 million sentence pairs for training. In addition, we use a subset of the full dataset (News Commentary v11, containing around 243,000 sentence pairs) for development and additional experiments. For all experiments, we use newstest2013 and newstest2016 as the development and test sets, respectively. To preprocess the data, the tokenizer from Moses is used to tokenize both the English and German sides. Training sentence pairs where either side is longer than 50 words are filtered out after tokenization. To deal with rare and compound words, byte-pair encoding (BPE) (Sennrich et al., 2016) is applied to both sides. In particular, 8,000 and 16,000 BPE merges are used on the News Commentary v11 subset and the full training set, respectively. JAMR (Flanigan et al., 2016) is adopted to parse the English sentences into AMRs before BPE is applied. The statistics of the training data and vocabularies after preprocessing are shown in Tables 1 and 2; the AMR vocabulary covers more than 99.6% of the training set. For our dependency-based and SRL-based baselines (introduced under Baseline Systems), we choose Stanford CoreNLP (Manning et al., 2014) and IBM SIRE to generate dependency trees and semantic roles, respectively. Since both dependency trees and semantic roles are based on the original English sentences without BPE, we used the top 100K frequent English words, which cover roughly 99.0% of the training set.
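To give intuition for what the BPE preprocessing does, the toy Python sketch below learns merge operations by repeatedly joining the most frequent adjacent symbol pair, following the standard BPE algorithm. The actual experiments used the subword-nmt implementation; this miniature version is for illustration only.

    from collections import Counter

    def toy_learn_bpe(words, num_merges):
        # words: list of tokens; each token starts as a tuple of characters
        vocab = Counter(tuple(w) + ("</w>",) for w in words)
        merges = []
        for _ in range(num_merges):
            pairs = Counter()
            for sym, freq in vocab.items():
                for a, b in zip(sym, sym[1:]):
                    pairs[(a, b)] += freq
            if not pairs:
                break
            best = max(pairs, key=pairs.get)  # most frequent adjacent pair
            merges.append(best)
            new_vocab = Counter()
            for sym, freq in vocab.items():
                out, i = [], 0
                while i < len(sym):
                    if i < len(sym) - 1 and (sym[i], sym[i + 1]) == best:
                        out.append(sym[i] + sym[i + 1])  # apply the merge
                        i += 2
                    else:
                        out.append(sym[i])
                        i += 1
                new_vocab[tuple(out)] += freq
            vocab = new_vocab
        return merges

    # toy_learn_bpe(["low", "lower", "newest", "widest"], 10) yields merges
    # such as ('e', 's') and ('es', 't'), which would then be reapplied to
    # segment unseen words into subword units.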
Hyperparameters We use the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.0005. The batch size is set to 128. Between layers, we apply dropout with a probability of 0.2. The best model is picked based on the cross-entropy loss on the development set. For model hyperparameters, we set the graph state transition number to 10 according to development experiments. Each node takes information from at most six neighbors. BLEU (Papineni et al., 2002), TER (Snover et al., 2006), and Meteor (Denkowski and Lavie, 2014) are used as the metrics on cased and tokenized results.
For experiments with the NC-v11 subset, both word embedding and hidden vector sizes are set to 500, and the models are trained for at most 30 epochs. For experiments with full training set, the word embedding and hidden state sizes are set to 800, and our models are trained for at most 10 epochs. For all systems, the word embeddings are randomly initialized and updated during training.
Baseline Systems We compare our model with the following systems. Seq2seq represents our attention-based LSTM baseline (Section 3), and Dual2seq is our model, which takes both a sequential and a graph encoder and adopts a doubly attentive decoder (Section 4). To show the merit of AMR, we further contrast our model with the following baselines, all of which adopt the same doubly attentive framework with a BiLSTM for encoding BPE-segmented source sentences: Dual2seq-LinAMR uses another BiLSTM for encoding linearized AMRs. Dual2seq-Dep and Dual2seq-SRL adopt our graph recurrent network to encode original source sentences with dependency and semantic role annotations, respectively. The three baselines are useful for contrasting different methods of encoding AMRs and for comparing AMRs with other popular structural information for NMT.
We also compare with Transformer (Vaswani et al., 2017) and OpenNMT (Klein et al., 2017), trained on the same dataset and with the same set of hyperparameters as our systems. In particular, we compare with Transformer-tf, one popular implementation of Transformer based on TensorFlow, and we choose OpenNMT-tf, an official release of OpenNMT implemented with TensorFlow. For a fair comparison, OpenNMT-tf has one layer for both the encoder and the decoder, and Transformer-tf has the default configuration (N = 6), but with parameters shared among the different blocks.
Table 3: TEST performance. NC-v11 represents training only with the NC-v11 data, while Full means using the full training data. * represents a significant (Koehn, 2004) result (p < 0.01) over Seq2seq. ↓ indicates that lower is better.
Development Experiments
We first examine the effect of the number of graph state transitions on the development set. Dual2seq (self) represents our dual-attentive model, but its graph encoder encodes the source sentence itself, treated as a chain graph, instead of an AMR graph. Compared with Dual2seq, Dual2seq (self) has the same number of parameters, but no semantic information from AMR. Due to hardware limitations, we do not perform an exhaustive search by evaluating every possible state transition number, but only transition numbers of 1, 5, 10, and 12. Our Dual2seq shows consistent performance improvement by increasing the transition number both from 1 to 5 (roughly +1.3 BLEU points) and from 5 to 10 (roughly +0.2 BLEU points). The former shows a greater improvement than the latter, indicating that the performance starts to converge after five transition steps. Further increasing the transition steps from 10 to 12 gives a slight performance drop. We set the number of state transition steps to 10 for all experiments according to these observations.
On the other hand, Dual2seq (self) shows only small improvements with an increasing state transition number, and it does not perform better than Seq2seq. Both results show that the performance gains of Dual2seq are not due to an increased number of parameters.
Main Results
Table 3 shows the TEST BLEU, TER, and Meteor scores of all systems trained on the small-scale News Commentary v11 subset or the large-scale full set. Dual2seq is consistently better than the other systems under all three metrics, showing the effectiveness of the semantic information provided by AMR. In particular, Dual2seq is better than both OpenNMT-tf and Transformer-tf. The recurrent graph state transition of Dual2seq is similar to Transformer in that it iteratively incorporates global information. The improvement of Dual2seq over Transformer-tf undoubtedly comes from the use of AMRs, which provide complementary information to the textual inputs of the source language.
In terms of BLEU score, Dual2seq is significantly better than Seq2seq in both settings, which shows the effectiveness of incorporating AMR information. In particular, the improvement is much larger under the small-scale setting (+3.2 BLEU) than under the large-scale setting (+1.7 BLEU). This is evidence that the structural and coarse-grained semantic information encoded in AMRs can be more helpful when training data are limited.
When trained on the NC-v11 subset, the gap between Seq2seq and Dual2seq under Meteor (around 5 points) is greater than that under BLEU (around 3 points). Since Meteor gives partial credit to outputs that are synonyms to the reference or share identical stems, one possible explanation is that the structural information within AMRs helps to better translate the concepts from the source language, which may be synonyms or paronyms of reference words.
As shown in the second group of Table 3, we further compare our model with other methods of leveraging syntactic or semantic information. Dual2seq-LinAMR shows much worse performance than our model and only slightly outperforms the Seq2seq baseline. Both results show that simply taking advantage of the AMR concepts without their relations does not help very much. One reason may be that AMR concepts, such as John and Mary, also appear in the textual input and thus are also encoded by the other (sequential) encoder. (AMRs can contain multi-word concepts, such as New York City, but these also appear in the textual input.) The gap between Dual2seq and Dual2seq-LinAMR comes from modeling the relations between concepts, which can be helpful for deciding target word order by enhancing the relations in source sentences. We conclude that properly encoding AMRs is necessary to make them useful. Encoding dependency trees instead of AMRs, Dual2seq-Dep shows a larger performance gap with our model (17.8 vs 19.2) on small-scale training data than on large-scale training data (25.0 vs 25.5). This is likely because AMRs are more useful for alleviating data sparsity than dependency trees, since words are lemmatized into unified concepts when sentences are parsed into AMRs. For modeling long-range dependencies, AMRs have one crucial advantage over dependency trees: they model concept-concept relations more directly. This is because AMRs drop function words, so the distances between concepts are generally shorter in AMRs than in dependency trees. Finally, Dual2seq-SRL is less effective than our model, because the annotations labeled by SRL are a subset of AMRs.
We outperform Marcheggiani et al. (2018) on the same datasets, although our systems vary in a number of respects. When trained on the NC-v11 data, they report BLEU scores of 14.9 with their BiLSTM baseline, 16.1 using additional dependency information, 15.6 using additional semantic roles, and 15.8 taking both as additional knowledge. Using Full as the training data, the scores become 23.3, 23.9, 24.5, and 24.9, respectively. In addition to the different semantic representation being used (AMR vs SRL), Marcheggiani et al. (2018) laid GCN (Kipf and Welling, 2017) layers on top of a bidirectional LSTM (BiLSTM) layer and then concatenated the layer outputs as the attention memory. The GCN layers encode the semantic role information, while the BiLSTM layers encode the input sentence in the source language, and the concatenated hidden states of both layers contain information from both the semantic roles and the source sentence. For incorporating AMR, because there is no one-to-one word-to-node correspondence between a sentence and the corresponding AMR graph, we adopt separate attention models. Our BLEU scores are higher than theirs, but we cannot conclude that the advantage primarily comes from AMR.
Analysis
Influence of AMR Parsing Accuracy To analyze the influence of AMR parsing on our model performance, we further evaluate on a test set where gold AMRs for the English side are available. In particular, we choose The Little Prince corpus, which contains 1,562 sentences with gold AMR annotations. Since there are no parallel German sentences, we take a German version of The Little Prince novel and then perform manual sentence alignment. Taking the whole The Little Prince corpus as the test set, we measure the influence of AMR parsing accuracy by evaluating on the test set when gold or automatically parsed AMRs are available. The automatic AMRs are generated by parsing the English sentences with JAMR. Table 4 shows the BLEU scores of our Dual2seq model taking gold or automatic AMRs as inputs. Not listed in Table 4, Seq2seq achieves a BLEU score of 15.6, which is 1.2 BLEU points lower than using automatic AMR information. The improvement from automatic AMR to gold AMR (+0.7 BLEU) is significant, which shows that the translation quality of our model can be further improved with an increase in AMR parsing accuracy. However, the BLEU score with gold AMR does not indicate the potentially best performance that our model can achieve. The primary reason is that even though the test set is coupled with gold AMRs, the training set is not. Trained with automatic AMRs, our model can learn to selectively trust the AMR structure. An additional reason is the domain difference: The Little Prince data are in the literary domain, while our training data are in the news domain. There can be a further performance gain if the accuracy of the automatic AMRs on the training set is improved.
Table 4: BLEU of Dual2seq by AMR annotation type. Automatic: 16.8; Gold: 17.5*.
Figure 6: Example outputs (discussed in the Case Study below).
AMR: (s2 / say-01 :ARG0 (p3 / person :ARG1-of (h / have-rel-role-91 :ARG0 (p / person :ARG1-of (m2 / meet-03 :ARG0 (t / they) :ARG2 15) :mod (m / mutual)) :ARG2 (f / friend)) :name (n2 / name :op1 ''Carla'' :op2 ''Hairston'')) :ARG1 (a / and :op1 (p2 / person :name (n / name :op1 ''Lamb''))) :ARG2 (s / she) :time 20)
Src: Carla Hairston said she was 15 and Lamb was 20 when they met through mutual friends .
Ref: Carla Hairston sagte , sie war 15 und Lamm war 20 , als sie sich durch gemeinsame Freunde trafen .
Dual2seq: Carla Hairston sagte , sie war 15 und Lamm war 20 , als sie sich durch gegenseitige Freunde trafen .
Seq2seq: Carla Hirston sagte , sie sei 15 und Lamb 20 , als sie durch gegenseitige Freunde trafen .
AMR: (s / say-01 :ARG0 (m / media :ARG1-of (l / local-02)) :ARG1 (c2 / come-01 :ARG1 (v / vehicle :mod (p / police)) :manner (c3 / constant) :path (a / across :op1 (r / refugee :mod (n2 / new))) :time (s2 / since :op1 (t3 / then)) :topic (t / thing :name (n / name :op1 (c / Croatian) :op2 (t2 / Tavarnik)))))
Src: Since then , according to local media , police vehicles are constantly coming across new refugees in Croatian Tavarnik
AMR: (b2 / breed-01 :ARG0 (p2 / person :ARG0-of (h / have-org-role-91 :ARG2 (s3 / scientist))) :ARG1 (w2 / worm) :ARG2 (s2 / system :ARG1-of (c / control-01 :ARG0 (b / burst-01 :ARG1 (w / wave :mod (s / sound))) :ARG1-of (p / possible-01)) :ARG1-of (n / nervous-01) :mod (m / modify-01 :ARG1 (g / genetics))))
Src: Scientists have bred worms with genetically modified nervous systems that can be controlled by bursts of sound waves .
Performance Based on Sentence Length
We hypothesize that AMRs should be more beneficial for longer sentences: Those are likely to contain long-distance dependencies (such as discourse information and predicate-argument structures), which may not be adequately captured by linear chain RNNs but are directly encoded in AMRs.
To test this, we partition the test data into four buckets by length and calculate BLEU for each of them. Figure 5 shows the performance of our model along with Dual2seq-Dep and Seq2seq. Our model outperforms the Seq2seq baseline rather uniformly across all buckets, except for the first one, where they are roughly equal. This may be surprising. On the one hand, Seq2seq fails to capture some dependencies in medium-length instances; on the other hand, AMR parses are noisier for longer sentences, which prevents us from obtaining extra improvements with AMRs.
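The bucketed evaluation can be reproduced with a few lines of Python; the sketch below groups hypothesis-reference pairs by source length and scores each bucket with sacrebleu. The bucket edges and variable names are illustrative, not the paper's exact partition.

    import sacrebleu

    def bleu_by_length(sources, hyps, refs, edges=(10, 20, 30)):
        # edges split source lengths into buckets: <=10, 11-20, 21-30, >30
        buckets = {}
        for src, hyp, ref in zip(sources, hyps, refs):
            n = len(src.split())
            key = next((e for e in edges if n <= e), "longer")
            buckets.setdefault(key, ([], []))
            buckets[key][0].append(hyp)
            buckets[key][1].append(ref)
        # corpus-level BLEU per bucket
        return {k: sacrebleu.corpus_bleu(h, [r]).score
                for k, (h, r) in buckets.items()}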
Dependency trees have proven useful in capturing long-range dependencies. Figure 5 shows that AMRs are comparatively better than dependency trees, especially on medium-length (21-30) sentences. The reason may be that the AMRs of medium-length sentences are much more accurate than those of longer sentences, and thus better capture the relations between concepts. On the other hand, even though dependency parses are more accurate than AMR parses, they still fail to represent relations within long sentences, likely because such relations are more difficult to detect there. Another possible reason is that dependency trees do not incorporate coreference, which AMRs do.
Human Evaluation
We further study the translation quality of predicate-argument structures by conducting a human evaluation on 100 instances from the test set. In the evaluation, translations of both Dual2seq and Seq2seq, together with the source English sentence, the German reference, and an AMR are provided to a German-speaking annotator to decide which translation better captures the predicate-argument structures in the source sentence. To avoid annotation bias, translation results of both models are swapped for some instances, and the German annotator does not know which model each translation belongs to. The annotator either selects a ''winner'' or makes a ''tie'' decision, meaning that both results are equally good.
Out of the 100 instances, Dual2seq wins on 46, Seq2seq wins on 23, and there is a tie on the remaining 31. Dual2seq wins on almost half of the instances, about twice as often as Seq2seq wins, indicating that AMRs help in translating the predicate-argument structures on the source side.
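As a rough check (ours, not part of the paper's evaluation), a two-sided sign test on the 46-23 win/loss split, discarding the 31 ties, indicates the preference is unlikely to be chance:

from scipy.stats import binomtest

# 46 Dual2seq wins vs. 23 Seq2seq wins; ties are discarded for the sign test.
print(binomtest(46, n=69, p=0.5).pvalue)  # p < 0.01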
Case Study

The outputs of the baseline system (Seq2seq) and our final system (Dual2seq) are shown in Figure 6. In the first sentence, the AMR-based Dual2seq system correctly produces the reflexive pronoun 'sich' as an argument of the verb 'trafen' (meet), despite the distance between the words in the system output, and despite the fact that the equivalent English words 'each other' do not appear in the system output. This is facilitated by the argument structure in the AMR analysis.
In the second sentence, the AMR-based Dual2seq system produces an overly literal translation for the English phrasal verb 'come across'. The Seq2seq translation, however, incorrectly states that the police vehicles are refugees. The difficulty for Seq2seq probably derives in part from the fact that 'are' and 'coming' are separated by the word 'constantly' in the input, while the main predicate is clear in the AMR representation.
In the third sentence, the Dual2seq system correctly translates the object of breed as worms, while the Seq2seq translation incorrectly states that the scientists breed themselves. Here the difficulty is likely the distance between the object and the verb in the German output, which causes the Seq2seq system to lose track of the correct input position to translate.
Conclusion
We showed that AMRs can improve neural machine translation. In particular, the structural semantic information from AMRs can be complementary to the source textual input by introducing a higher level of information abstraction. A graph recurrent network (GRN) is leveraged to encode AMR graphs without breaking the original graph structure, and a sequential LSTM is used to encode the source input. The decoder is a doubly attentive LSTM, taking the encoding results of both the graph encoder and the sequential encoder as attention memories. Experiments on a standard benchmark showed that AMRs are helpful regardless of the sentence length and are more effective than other more popular choices, such as dependency trees and semantic roles.
SMART HELMET: SMART SOLUTION FOR BIKE RIDERS AND ALCOHOL DETECTION
An accident is an unexpected, unusual, unintended external event which occurs at a particular time and place. Carelessness of the driver is the major factor in accidents. The government has made rules that a rider must wear a helmet and must not drink and drive. Still, riders do not obey these rules, and accidents are caused by their negligence. Riding without a helmet exposes the rider to head injuries, which may lead to death. To overcome this, an intelligent system, the smart helmet, is proposed; it detects whether the helmet is worn and whether alcohol is present in the rider's breath. The system has a transmitter-receiver pair: the transmitter is placed in the helmet and the receiver at the bike's ignition. Different sensors ensure that the helmet is on the head; vibration sensors are placed in the helmet where the probability of impact is highest. An alcohol sensor is placed near the rider's mouth and detects the presence of alcohol in the rider's breath. The helmet- and alcohol-detection data are coded with an RF encoder and transmitted through a radio-frequency transmitter. The receiver on the bike receives the data, which is decoded using an RF decoder. The helmet-detection and alcohol-detection results are analyzed on a smart phone. The proposed system is designed so that if either of the two conditions is violated, the bike will not start; the bike starts only if both conditions are satisfied. This smart helmet will compel the rider to wear a helmet and restrict drink-driving. The MCU controls the relay and the ignition; it controls the engine through a relay and a relay-interfacing circuit.
Introduction:-
An accident is said to be any vehicle accident occurring on a public highway. These accidents therefore include collisions between vehicles and animals, vehicles and pedestrians, or vehicles and fixed obstacles. According to reports, there are around 1600 accidents per day in India, and about 550 people die each day because of road accidents. The main causes of road accidents are drink-driving and not wearing a helmet. The use of a helmet by two-wheeler riders is compulsory under the Motor Vehicles Act; Section 129 of the Motor Vehicles Act, 1988 makes it mandatory for a rider to wear a helmet. Consumption of alcohol reduces the concentration of the rider and impairs the rider's vision through giddiness. Alcohol suppresses fear and induces the rider to take risks. All of the above factors cause accidents while driving, often with dire consequences. The risk of an accident doubles for every increase of 0.05 in blood alcohol concentration. To make matters worse, Indian traffic officials are not well equipped with the equipment required to check these violations. There are laws against drunken driving and for helmet use, but there is no successful implementation of the law. The Motorcycle Act, 1939 has a clause which states that a motorcycle driven by a drunken rider shall be liable for punishment: for a first offense, imprisonment for a term of six months or a fine which may extend to two thousand rupees, or both for a subsequent offense. The law would be very effective if strictly enforced, but it usually fails because the concerned officers in charge are bribed. A drunken driver is comparable to a murderer, as he cannot carry out his own tasks without risk and endangers others. These are the two main reasons which motivated us to build the smart helmet. The very first steps are helmet detection and alcohol detection; only when both conditions are satisfied will the bike ignition start. An IR sensor, a PIR sensor, and an MQ303A alcohol sensor are used for this purpose. The results obtained from the sensors are analyzed on a smart phone, and the analyzed result is sent to the concerned authority.
Literature Review:-
To overcome the issues of riders not wearing helmets despite it being compulsory, and of drink-driving, a smart helmet was proposed. It involves two steps: first, detecting the helmet, and second, detecting alcohol. Only when both conditions are satisfied will the bike start. An IR sensor, a PIR sensor and an MQ-3 alcohol sensor are used for this purpose. An accelerometer is used to limit the speed of the bike and for fall detection; a detected fall indicates that an accident has occurred, in which case a message is sent to the rider's family through GSM. For this, an ADXL335 accelerometer and a GSM module are used. In a simple telemetry system, a sensor is activated by the pressure applied to the helmet's interior when the rider wears it. Once the sensor is activated, the transmitter sends a control signal to the receiver circuit and activates the relay connected to the power supply of the bike's ignition circuit. The prototype uses a DPDT electromechanical relay for detecting helmet wear and switching the circuit; however, on a larger scale, solid-state relays, which are much faster and have a better response, can be put to use.
A smart helmet with a radio-frequency link: as the user wears the helmet, an RF signal radiates from the transmitter; these RF signals are sensed and synchronized, with the help of address matching, by the receiver section placed in the ignition switch of the bike, and the bike starts. The bike stops working when the helmet is taken off the head. This ensures that the bike works only while the helmet is on the head.
Intelligent Traffic Systems (ITS), also called Intelligent Vehicle Highway Systems (IVHS), incorporate intelligence in both the roadway infrastructure and the vehicles with the intention of reducing congestion and environmental impact, and of improving traffic performance, by exploiting the distributed nature of the system and by making use of cooperation and coordination between the various vehicles and the various elements of the roadside infrastructure. IVHS comprise traffic management systems, driver information systems, and vehicle control systems. Automated Highway Systems (AHS) go one step further than IVHS and involve complete automation of the driving task. For better (network-wide) coordination of traffic activities, AHS also distribute the intelligence between the vehicles and the roadside infrastructure.
Nine existing safety-enhancing ITS systems for motorcycles were identified. In addition, eight emerging technologies currently in prototype form, and several additional 'potential' systems, have been described. These have been discussed in terms of the critical motorcycling safety issues, namely loss-of-control crashes, multiple-vehicle crashes, and additional factors such as conspicuity, alcohol, and unlicensed riding. While some of these systems address specific safety issues, such as interlocks for alcohol-related crashes, other systems show comprehensive benefits across a number of crash types. Importantly, this is one area of ITS that has shown a significant amount of development.
An efficient vehicle accident prevention system embedded with an alcohol detector. It consists of a PIC16F876A as the main controller, an alcohol sensor as the input, and three outputs: the ignition system, an LCD display, and an alarm system. This system is capable of alerting the driver about the level of drunkenness by indicating the condition on the LCD display. It also produces an alarm from a buzzer to make drivers aware of their own condition and to alert other people in the surrounding area. The most important safety element provided by this system is that a driver at a high level of drunkenness is not allowed to drive, as the ignition system is deactivated. Ultimately, this system helps prevent the driver from driving in a risky situation and avoids accidents on the road.
This real-time embedded system is a low-cost and simple solution to avoid accidents caused by rule-breaking and carelessness. The subsystems of the system are: alcohol detection — if the rider is found to have consumed alcohol, the vehicle does not start; and an emergency accident system — if any movement is detected by the vibration sensor, the location of the accident is sent to the help center using a GSM module.
A new secret-key generation scheme is defined to improve data security. A secret-key generation scheme using the received request message (RRM) is used for extraction of the secret key; it takes the user's request message as input. The basic idea is to generate a unique key which can provide data confidentiality and improve the strength of the extracted key. This scheme provides high-entropy data bits during extraction, thus ensuring that the strength of the generated secret key is acceptable. Extensive performance evaluation demonstrates that the proposed schemes outperform existing solutions in terms of highly efficient secret-key generation.
A helmet system designed to give road-hazard warnings to the rider, with wireless bike authentication and traffic-adaptive MP3 playback. The main aim is to protect the bike rider, encourage people to wear helmets, prevent road accidents, and promote following traffic rules.
This system checks helmet wearing and drunken driving. With it, a safe two-wheeler journey is possible, which would decrease head injuries during accidents and also reduce the accident rate due to drunken driving. The system also indicates no-parking areas, which would reduce vehicle crowding there. No-entry areas are mainly designated during road construction or repair; if the rider enters such an area, the system immediately signals 'no-entry area' and the vehicle stops automatically. In case of an accident, it sends messages about the location of the accident to the rider's friends continuously until first aid reaches the rider. The system also helps to locate the vehicle for recovery in case of theft.
[Table: literature survey with columns Sr. No., Paper Name, and Description; entry 1: "Smart Helmet System Using Alcohol Detection For Vehicle Protection".]

Proposed System:-
The proposed system mainly focuses on the avoidance of drunken driving and on making helmet wearing compulsory, which results in the avoidance of accidents and saves human lives. Under current government rules, this smart helmet will hopefully come into use. The proposed system includes helmet detection, alcohol detection, and an Android mobile application for sending alert messages about helmet detection and alcohol detection.
A.1. Helmet Unit:-
This unit provides a helmet sensor switch, an infrared sensor switch, an MCU, an encoder, and an RF transmitter. Both the helmet switch and the alcohol switch are fitted in the helmet unit. The MCU reads data from the sensors, and the sensors give their results to the MCU. Only if the driver has non-alcoholic breath and the helmet switch is in the closed position does the MCU give the corresponding digital output to the RF encoder. The encoder block checks that the conditions are satisfied and encodes the active inputs into a coded binary output. The RF transmitter transmits this coded binary output from the RF encoder block. The system uses the amplitude shift keying (ASK) modulation technique, in which the digital data is represented as variations in the amplitude of the carrier wave.
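A minimal sketch of the helmet-side gating logic just described (ours; the sensor reads are stubbed and the bit layout of the 4-bit word is an illustrative assumption):

# Helmet unit: build the 4-bit word handed to the RF encoder.
ALCOHOL_THRESHOLD_MG_L = 0.04  # threshold used in the alcohol-detection section

def read_helmet_switch():
    return 1  # stub: 1 = helmet switch closed (helmet worn)

def read_alcohol_mg_l():
    return 0.01  # stub: MQ303A breath reading converted to mg/L

def helmet_data_word():
    helmet_ok = read_helmet_switch() == 1
    sober = read_alcohol_mg_l() < ALCOHOL_THRESHOLD_MG_L
    return (int(helmet_ok) << 1) | int(sober)  # two status bits, two spare bits

print(helmet_data_word())  # 3 = both conditions satisfied, so the encoder transmits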
A.2. Vehicle Unit:-
The vehicle unit includes the RF receiver, the RF decoder, and the MCU. The receiver receives the coded binary data transmitted by the RF transmitter and passes it to the RF decoder. The RF decoder decodes this input and gives the four-bit digital data to the MCU (Micro Controller Unit) only if the address bits of the RF encoder and RF decoder match. The MCU receives the digital data and then operates the engine of the vehicle. It operates the engine through a relay circuit; since it cannot drive the relay directly, a relay interface is also used.
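A matching sketch of the vehicle-side logic (ours; the address value and bit positions are illustrative and must mirror the helmet unit):

# Vehicle unit: enable the ignition relay only on an address match AND
# when the decoded word shows helmet worn and non-alcoholic breath.
LOCAL_ADDRESS = 0b10110101

def ignition_allowed(rx_address, rx_data):
    if rx_address != LOCAL_ADDRESS:  # decoder rejects frames from other transmitters
        return False
    helmet_ok = bool(rx_data & 0b10)
    sober = bool(rx_data & 0b01)
    return helmet_ok and sober       # bike starts only if both conditions hold

def drive_relay(on):
    print("relay", "ON: engine can start" if on else "OFF: ignition blocked")

drive_relay(ignition_allowed(0b10110101, 0b11))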
Fig: Vehicle Unit

Alcohol detection:- Nowadays, people drive vehicles when drunk, which causes accidents; to avoid these accidents and save human lives, a solution is needed. In the alcohol-detection phase, the MQ-3 gas detector (alcohol sensor) is used for detecting alcohol content in the breath. The MQ303A gas detector can be placed below the face shield; the surface of the sensor is sensitive to various alcohol concentrations, and it detects alcohol in the rider's breath. Since the sensor responds to various alcohol concentrations, we programmed a threshold limit of 0.04 mg/L; the system can thus be integrated with the ignition system, allowing only sober riders to operate the vehicle. This sensor is manufactured by Hansel Electronic Co., Ltd. and is a high-sensitivity sensor able to detect BAC at different concentrations.

Helmet detection:- Many people lose their lives because they do not wear helmets; the proposed smart helmet will hopefully come into use and could reduce accident deaths by 35% to 45%, since the risk of death is 2.5 times higher among riders not wearing a helmet. Detection of the helmet is done using IR and PIR sensors.
PIR sensor:-
The PIR (passive infrared) sensor helps to detect whether the person is wearing the helmet. It detects motion by measuring changes in the infrared levels emitted by surrounding objects. The motion of the helmet is detected by checking for a high signal on a single input/output (I/O) pin: the rider's head is detected while he is putting on the helmet, and the movement of the head from outside to inside produces a high output from the PIR sensor. Pyroelectric devices such as the PIR sensor are made of a crystalline material that generates an electric charge when exposed to infrared radiation; when the amount of radiation striking the element changes, the generated voltage changes, and this change is measured by the on-board amplifier. A Fresnel lens focuses the infrared signals onto the element, and when the incoming signal changes rapidly, the on-board amplifier trips the output to indicate motion.
IR sensor:-
The IR sensors are fitted on the left and right sides of the helmet so that the human head can be detected. The IR sensors work on the obstacle-detection principle: the IR LED transmits an IR signal onto the object, the signal is reflected back from the surface inside the helmet, the reflected signal is received by an IR receiver, and the result is stored by the MCU.
Conclusion:
The government has taken the initiative by making helmets compulsory and prohibiting drink-driving. According to analysis, only 10% of bike riders follow these rules, and they are frequently violated. Previously developed helmets only detect the presence of the helmet and not alcohol. The proposed system provides a 'smart helmet' which detects the alcohol consumed by the rider and whether the rider has worn the helmet. The system includes an Android application: the results obtained from the sensors, i.e., the IR sensor for helmet detection and the MQ303A for alcohol detection, are analyzed on the smart phone. The proposed system will hopefully provide rider safety, restrict drink-driving, and help ensure that traffic rules are followed.
A regression based transmission/disequilibrium test for binary traits: the power of joint tests for linkage and association
Background In this analysis we applied a regression based transmission disequilibrium test to the binary trait presence or absence of Kofendred Personality Disorder in the Genetic Analysis Workshop 14 (GAW14) simulated dataset and determined the power and type I error rate of the method at varying map densities and sample sizes. To conduct this transmission disequilibrium test, the logit transformation was applied to a binary outcome and regressed on an indicator variable for the transmitted allele from informative matings. All 100 replicates from chromosomes 1, 3, 5, and 9 for the Aipotu and the combined Aipotu, Karangar, and Danacaa populations were used at densities of 3, 1, and 0.3 cM. Power and type I error were determined by the number of replicates significant at the 0.05 level. Results The maximum power to detect linkage and association with the Aipotu population was 93% for chromosome 3 using a 0.3-cM map. For chromosomes 1, 5, and 9 the power was less than 10% at the 3-cM scan and less than 22% for the 0.3-cM map. With the larger sample size, power increased to 38% for chromosome 1, 100% for chromosome 3, 31% for chromosome 5, and 23% for chromosome 9. Type I error was approximately 7%. Conclusion The power of this method is highly dependent on the amount of information in a region. This study suggests that single-point methods are not particularly effective in narrowing a fine-mapping region, particularly when using single-nucleotide polymorphism data and when linkage disequilibrium in the region is variable.
Background
As the characterization of the human genome continues, increasingly dense marker maps are being created using single-nucleotide polymorphisms (SNPs). It has been suggested that the availability of these dense marker maps will make gene mapping by joint linkage and association preferable to tests for either linkage or linkage disequilibrium (LD) alone [1,2] due to the use of only partial information in either of the individual tests.
Assessment of LD has given rise to many different methods, several of which emphasize the utilization of transmission disequilibrium information. The original transmission disequilibrium test (TDT) uses parent-offspring trios, in which at least one parent is heterozygous at the marker locus and the offspring is affected. A χ 2 test is then conducted to compare the transmission of an allele in an affected child to the non-transmitted allele [3]. A later extension, the Sib TDT, [4] incorporates the use of information from unaffected siblings when parental data is absent. The TDT has been expanded by Martin et al. [5] to allow for sibships with multiple affected individuals by modeling allele transmission to the affected sibship as a group instead of each sibling separately [5]. George et al. [6] developed a TDT that regresses a quantitative trait on a parentally transmitted allele. The approach allowed for a wide range of pedigree structures, including both concordant affected and discordant sibships, as well as nonindependent nuclear families. Additionally, the regression model can simultaneously estimate the magnitude of association with other covariates associated with the trait as well [6].
The TDT and its extensions were designed for assessing linkage to candidate genes that were already known to be associated with the trait of interest. Recently however, transmission-based tests have been applied to samples of non-independent families, becoming a test of linkage alone. The power of such methods has not been thoroughly investigated when there is no prior knowledge of association.
By incorporating the method proposed by George et al. into a regression-based association test for binary traits, we tested the power of a TDT-type test to detect linkage in the presence of LD. We use the Genetic Analysis Workshop 14 (GAW14) simulated data and assess the power and type I error rate.
Sample
The simulated SNP genome scan data (3-cM density) from all 100 replicates of the Aipotu population were used for this analysis, with the authors knowing the simulated parameters. To further explore the magnitude of the impact of sample size on power, data from the Aipotu, Danacaa, and Karangar populations were pooled and analyzed. We also examined the effect of denser marker maps on power by selecting SNPs with an average spacing of 1 cM and 0.3 cM from the additional genotyping packets.
Model definition
The outcome trait was presence or absence of Kofendred Personality Disorder (KPD). Covariates A through L from the three classification groups were considered for possible inclusion in the model. Correlations between covariates were examined using FCOR in Statistical Analysis for Genetic Epidemiology (S.A.G.E v. 4.6) in an effort to pare down the model. Because several covariates within each classification were highly correlated (r > 0.7), it was not possible to easily identify non-collinear covariates. Thus, we selected one covariate from each of the classification groups for inclusion in the model (C, G, and J; with prevalences 8.5%, 9.0%, and 15.6%, respectively).
Linkage and association analysis
To conduct the TDT analysis, we defined indicator variables for a transmitted allele (1 if allele of interest was transmitted, 0 if not) for each SNP in offspring of informative matings. In particular, all offspring from a heterozygous × homozygous mating and all homozygous offspring from a heterozygous × heterozygous mating yielded unambiguous identification of the transmitted allele.
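For illustration, the rule above can be coded directly (our sketch; the genotype encoding and handling of ambiguous cases are our own choices):

def transmitted_indicator(parent1, parent2, child, allele='A'):
    """Return 1/0 if transmission of `allele` is unambiguous, else None.

    Genotypes are tuples like ('A', 'a'). Informative matings:
    heterozygous x homozygous -> all offspring;
    heterozygous x heterozygous -> homozygous offspring only.
    """
    def het(g):
        return g[0] != g[1]

    if het(parent1) and het(parent2):
        if het(child):                    # ambiguous: cannot tell which parent sent which allele
            return None
        return 1 if child[0] == allele else 0
    if het(parent1) != het(parent2):      # exactly one heterozygous parent
        hom = parent2 if het(parent1) else parent1
        # the homozygous parent's contribution is fixed; the remaining child
        # allele is the heterozygous parent's transmitted allele
        other = child[0] if child[1] == hom[0] else child[1]
        return 1 if other == allele else 0
    return None                           # hom x hom matings are uninformative

print(transmitted_indicator(('A', 'a'), ('a', 'a'), ('A', 'a')))  # 1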
Significance of both the transmitted-allele indicator variable and covariates C, G, and J in pedigree data was assessed using the ASSOC program in S.A.G.E. This program uses a regression model with a logit link to obtain residuals that approximate normality, while at the same time allowing for the non-independence of family data. For any individual i, with a binary trait yᵢ and vector of covariates xᵢ, the regression model is of the form

logit(P(yᵢ = 1 | xᵢ)) = β′xᵢ + Gᵢ + Fᵢ + Mᵢ + Sᵢ + Eᵢ,

where Gᵢ is a random polygenic effect, Fᵢ is the random nuclear family effect, Mᵢ is the random marital effect, Sᵢ is the random sibship effect, and Eᵢ is the residual individual random effect [7].
In this analysis we included only the residual individual random effect, which was assumed to be normally distributed with mean zero and variance σ_E², so that

V[logit(P(yᵢ = 1 | xᵢ))] = σ_E²

was estimated. The likelihood was maximized numerically both with and without the specified test covariate (in this case the transmitted-allele indicator variable) and the corresponding likelihoods were calculated. Standard errors were determined by numerical double differentiation of the log likelihood, and p-values based on a Wald test were calculated for the random environmental variance and the covariate coefficients. p-Values are two-sided for all covariate coefficients and one-sided for all variances.
Type I error and power
To calculate type I error rate, we identified the number of replicates with a p-value less than 0.05 in unlinked regions greater than 10 cM away from the simulated disease locus.
To determine power, we calculated the number of replicates for which the p-value for the marker locus nearest the simulated disease locus was less than 0.05. This was done for chromosomes 1, 3, 5, and 9. Adjustments for multiple comparisons were not performed.
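Both quantities reduce to a proportion of replicates crossing the 0.05 threshold; a minimal sketch (ours) of this bookkeeping:

def proportion_significant(p_values, alpha=0.05):
    """Fraction of replicates with p < alpha: power when p_values come from
    the marker nearest the disease locus, type I error when they come from
    unlinked regions (>10 cM from the simulated locus)."""
    return sum(p < alpha for p in p_values) / len(p_values)

# e.g., power = proportion_significant(p_at_nearest_marker) over the 100 replicates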
Type I error and power
In both the Aipotu and the combined populations, the type I error rate was approximately 7%. The power of this method to detect each of the disease loci on chromosomes 1, 3, 5, and 9 was very low when using the Aipotu population alone (Figure 1). On chromosome 1 at a density of 3 cM, the power at the marker nearest the disease locus (C-01-R0052) was 13%; the power was 21% for both the 1-cM and the 0.3-cM scans. On chromosome 3, the power to detect linkage in the presence of association was highest (35%) at the marker nearest the disease locus at a density of 3 cM, and decreased to 25% at a density of 1 cM and 26% at 0.3 cM. Power was substantially greater 3 cM away from the disease locus on this same chromosome, reaching 93% using a 0.3-cM scan; of note, this is the beginning of the region for which LD was simulated. For chromosome 5, the power at the marker nearest the disease locus was 9% at 3-cM density, but increased to 11% and 16% using the finer 1-cM and 0.3-cM scans, respectively. On chromosome 9, 12% power was observed near the disease locus for the 3-cM scan, and for the 1-cM and 0.3-cM scans, power to detect linkage and association was very similar: 10% and 9%, respectively. Power was greater, in all cases, in regions where LD was said to have been simulated, whether those regions included the disease locus or not.
The power was improved in the combined population, but not markedly. On chromosome 1, the power was 17% at the marker nearest the disease locus when using the 3-cM scan, and 37% and 38% at the 1-cM and 0.3-cM scans, respectively. At the 3-cM density the power was 63% for the marker nearest the disease gene on chromosome 3, and 42% and 43% for the 1-cM and 0.3-cM scans. For chromosome 5 the power increased from 19% to 31% as the density increased from 3 to 0.3 cM. For chromosome 9, the power was 18% and 23% at 3- and 0.3-cM density, respectively. Again, power increased, in all cases, in regions where LD was said to have been simulated. In fact, on chromosome 3, the method had 100% power at the first marker in the region in which LD was said to be simulated.
Discussion
This study assesses the power and type I error of a regression-based TDT for binary traits using information from nuclear families. We were able to explore the strength of the method at detecting linkage at varying map densities and sample sizes.
Type I error was stable but slightly inflated (7%), possibly due to transmission distortion [8]. In terms of power, the method performed very well for one of the four simulated disease loci (chromosome 3) at the finest density map of 0.3 cM. The power was above 90% for both the Aipotu and the combined populations at SNP B03T3056, approximately 3 cM away from the disease locus. At the 3-cM density scan the power was much lower, however the marker was closer (approximately 2 cM) to the disease locus than the SNP that produced the maximal power. Other GAW submissions [9][10][11] that sought to identify association between marker and disease also found the strongest association at SNP B03T3056. We also note that on chromosome 3, the power was higher for the 3-cM scan than the more dense 1-cM map. This is likely due to the fact that 1-cM SNP markers were selected without regard to SNP informativity or proximity to the disease locus. While we would expect that markers closer to the disease locus would yield stronger signals, in this simulation this may not have been the case. Specifically, the strength of the signal appears to be highly dependent on LD in the region. This is certainly more of an issue with single-point methods and we would not expect to see these results if multipoint methods were employed.
Interestingly enough, the power of this method to detect linkage in a region where there was said to be no LD (chromosome 1) was actually higher (21%) than in regions on chromosomes 5 and 9 where LD was said to have been simulated (<20%). McCaskie et al. [10] and Song et al. [11] also detected association near the disease marker on chromosome 1 for the Aipotu population in replicate 1, suggesting that some association was present near the disease locus. Our results, as well as others' [10][11][12], further suggest that the LD reported to be on chromosome 5 is weak at best and hardly detectable. Our results suggest a similarly weak association on chromosome 9, although this was detected by Song et al. [11].
By using a transmitted allele as the test covariate in the regression model, the sample size was reduced substantially after excluding non-informative individuals (average: 244.4 ± 64.5 individuals, across all SNPs and replicates for Aipotu population). Nevertheless, we were able to explore the impact of increasing sample size by pooling populations. Surprisingly, tripling the average sample size (average: 741.2 ± 202.9 individuals) had a modest impact on power, but the high variability of the sample size makes interpretation of the effects of sample size less straightforward. The gains in power were highest on chromosomes 1 and 3, where presence of a signal was confirmed by other groups with association based tests. For example, power increased to 100% for the 0.3-cM density map on chromosome 3. Power also doubled for the 3-cM density map on chromosome 3 and the 1-cM density map on chromosome 1. Because all populations were simulated with the same disease loci, the modest gain in power due to larger sample size was not likely due to heterogeneity.
To better understand our results, we performed a subsequent analysis to characterize the amount of LD on chromosomes 1, 3, 5, and 9 by comparing the power at each of the SNPs in the 0.3-cM map to the informativity of those same SNPs (results not shown). There was indeed indication of LD between several of the markers on chromosome 1 and strong LD on chromosome 3, particularly in regions of strongest signal. These results suggest that our method performed well in the presence of LD. However, even our strongest signal was not very precise. This is likely because it is a single-point method and therefore does not make use of all of the information available in the region.
Conclusion
While our method performed reasonably well in regions where LD was confirmed, the power was highly dependent on the amount of information in a region, including density of markers and sample size. Certainly, the loss of sample size due to uninformative matings is a weakness of any transmission-based test, and the case in this study as well. Overall, this study suggests that single-point methods, particularly those based on transmitted alleles, are only marginally effective in narrowing a fine-mapping region, particularly when using SNP data containing varying degrees of LD in the region. Further assessment of this method will require detailed information about LD in regions containing the causal locus not available in this dataset.
Orientifold limits of singular $F$-theory vacua
We construct global orientifold limits of singular $F$-theory vacua whose associated gauge groups are SO(3), SO(5), SO(6), $F_4$, SU(4), and Spin(7). For each limit we show a universal tadpole relation is satisfied, which is a homological identity whose dimension-zero component encodes the matching of the D3 charge between each $F$-theory compactification and its orientifold limit. While for smooth $F$-theory compactifications which admit global orientifold limits the contribution to the associated universal tadpole relation comes from its Chern class, we show that for all singular $F$-theory compactifications under consideration, the contribution to the universal tadpole relation comes from its \emph{stringy} Chern class.
Introduction
F -theory compactifications of string vacua are related to weakly coupled type-IIB compactifications via S-duality, which relates strongly coupled regimes with weakly coupled ones. For F -theory compactified on an elliptic Calabi-Yau (n + 1)-fold X → B whose total space may be given by a global Weierstrass equation, Sen was able to identify a certain limit in the complex structure moduli space of such compactifications with an orientifold theory compactified on an n-fold which is the total space of a ramified double cover Z → B [18]. In particular, the j-invariant of the elliptic fibers in the limit constructed by Sen generically approach infinity, signaling weak coupling almost everywhere on B, and a monodromy analysis of the associated limiting discriminant reveals the presence of an O7-plane and a D7-brane (for n = 3).
A non-trivial consistency condition for an orientifold limit of F-theory consists of a comparison of the D3 tadpole between the two theories in the absence of fluxes, which should be equal. For F-theory compactified on a general elliptic Calabi-Yau 4-fold X → B, we have that the D3 tadpole is given by

Q_D3 = χ(X)/24,

where χ(X) denotes the topological Euler characteristic of X, while on the type-IIB side we have

Q_D3 = Σᵢ χ(Oᵢ)/12 + Σⱼ χ(Dⱼ)/48,

where Oᵢ and Dⱼ denote the supports of the O7-planes and D7-branes. As such, in an orientifold limit of F-theory, we expect to find

2χ(X) = 4Σᵢ χ(Oᵢ) + Σⱼ χ(Dⱼ).   (1.1)

In the limit constructed by Sen, the brane spectrum consists of an O7-plane and a single D7-brane, which wraps a singular divisor D in the total space of the orientifold Z → B, whose equation takes the form

D : (η² + 12ζ²ϑ = 0) ⊂ Z.
The surface D then acquires singularities along the curve η = ζ = 0, which enhance to pinch-point singularities along the zero-dimensional locus η = ζ = ϑ = 0. One must then be careful how to incorporate the charge of D into the D3 tadpole, since in string theory, classical invariants of singular varieties must often be replaced with their 'stringy' versions. In the case of Sen's limit, it was first shown in [2] that the modification of the D3 tadpole constraint (1.1) due to the singularities of D takes the form

2χ(X) = 4χ(O) + χ_str(D) − χ(S),   (1.2)

where O denotes the O7-plane, χ_str(D) denotes the stringy Euler characteristic of D, and S denotes the pinch-locus of D. A physical argument for χ_str(D) − χ(S) being the contribution of D to the D3 tadpole on the type-IIB side was provided in [6], while a top-down derivation of the tadpole constraint (1.2) using only mathematical considerations was provided in [13]. Further insights into the dictionary between F-theory and Sen's limit were also provided in [5]. From a purely mathematical perspective, a compelling aspect of the D3 tadpole constraint (1.2) associated with Sen's limit is that it is the dimension-zero component of a homological identity which holds in a much broader context than considered by physicists [2]. In particular, if we denote the projection of the F-theory elliptic fibration by ϕ : X → B, and the orientifold projection by ρ : Z → B, then equation (1.2) is the dimension-zero component of the homological Chern class identity given by

2ϕ_*c(X) = ρ_*(4c(O) + c_str(D) − c(S)),   (1.3)

where c_str(D) denotes the stringy Chern class of D. Moreover, not only does the identity (1.3) hold over a base B of arbitrary dimension, it also holds without any Calabi-Yau hypothesis on X (as the only requirement on X is that it may be given by a global Weierstrass equation). As such, equation (1.3) is often referred to as the universal tadpole relation associated with Sen's limit. Universal tadpole relations for orientifold limits of smooth F-theory vacua which do not admit global Weierstrass equations were also shown to hold in [3][10][4], and a universal tadpole relation associated with an oriented type-IIB limit was shown to hold in [15]. The fact that Sen's limit is defined exclusively for smooth Weierstrass fibrations X → B is quite restrictive. On the F-theory side, this prohibits one from geometrically engineering non-abelian gauge theories, as one may associate a non-abelian gauge theory with a Weierstrass fibration only once singularities are introduced into the total space of the fibration. As such, the state of the art in engineering non-abelian gauge theories associated with an F-theory compactification is to work with the Tate form of singular Weierstrass fibrations X → B [?], which is given by

y²z + a₁xyz + a₃yz² = x³ + a₂x²z + a₄xz² + a₆z³.   (1.4)

The Tate form of a Weierstrass fibration is particularly useful due to the fact that given a Kodaira fiber f, Tate's algorithm provides a precise recipe for tuning the coefficients aᵢ in such a way that f will appear over a divisor S ⊂ B upon a resolution of singularities X̃ → X [19]. The fiber f together with the Mordell-Weil group of X then determines the associated gauge group. In light of this, Sen's limit was generalized by Donagi and Wijnholt in [8] to singular Weierstrass fibrations in Tate form. The orientifold limits of singular Weierstrass fibrations constructed by Donagi and Wijnholt are used in their analysis to construct local models, thus global constraints such as the D3 tadpole are never considered in such limits.
Furthermore, as pointed out by Esole and Savelli [9], it is often the case with such limits (e.g. limits of SU(n) theories) that the associated orientifold admits conifold singularities whose crepant resolutions are not compatible with the orientifold involution. In this note, we then construct global orientifold limits of singular F-theory vacua which admit the gauge groups SO(3), SO(5), SO(6), F₄, SU(4), and Spin(7). Outside of the SO(3) case, each of our limits is distinct from the Donagi-Wijnholt limits, and we show that the D3 tadpole constraint given by (1.1) holds in each of our limits once χ(X) is replaced by the stringy Euler characteristic χ_str(X). This yields compelling evidence that the D3 tadpole associated with a singular F-theory compactification is given by

Q_D3 = χ_str(X)/24,

as opposed to being proportional to the usual topological Euler characteristic. And just as in the case of Sen's limit (and in the limits constructed in [3][10]), we show that each of the numerical tadpole relations is the dimension-zero component of a much more general homological identity involving the stringy Chern class c_str(X), which holds in a much broader context than its physical origins. Unlike Sen's limit, however, in all the limits we construct the branes which arise are supported on smooth divisors, so there is no need to modify the D3 charge on the type-IIB side as in Sen's limit. Moreover, the total space of the orientifold in all of our limits is smooth, so there is no issue with whether or not a crepant resolution is compatible with the orientifold involution. We also note that the F₄ and SO(6) cases admit the same limit, as do the SU(4) and Spin(7) cases, and as such, these cases provide distinct F-theory lifts of their orientifold limits.
As the Weierstrass fibrations we consider are all singular, either one defines the F-theory compactification on a crepant resolution X̃ → X, or one takes up the issue of defining F-theory on X itself [7]. While the language used in this note tends to favor the latter approach, we note that our results may be adapted to the former approach as well, since the stringy invariants of X coincide with the corresponding ordinary invariants of X̃.
The singular F -theory vacua under consideration
Let B be a compact complex manifold, L → B a holomorphic line bundle, and let E → B be the rank 3 vector bundle given by

E = O_B ⊕ L² ⊕ L³,   (2.1)

where Lᵏ denotes the kth tensor power of L. We then consider the projective bundle of lines in E → B, which is a P²-bundle π : P(E) → B whose tautological bundle we denote by O(−1). The vacua we consider are all elliptic fibrations X → B, whose total space is a hypersurface in P(E) which may be given by a global Weierstrass equation

X : y²z = x³ + Fxz² + Gz³.

In the equation for X, x, y and z are projective coordinates on the fibers of π : P(E) → B, which are sections of O(1) ⊗ π*L², O(1) ⊗ π*L³ and O(1) respectively. The coefficients F and G are then sections of π*L⁴ and π*L⁶, so that X corresponds to the zero-scheme of a section of O(3) ⊗ π*L⁶. The singular fibers of X → B lie over the discriminant of X, which is given by

∆ = 4F³ + 27G²,

so that ∆ is the zero-scheme of a section of L¹². Over a general point of the discriminant the fibers are nodal cubics, which then enhance to cuspidal cubics along F = G = 0. When L is the anti-canonical bundle O(−K_B), the canonical class K_X is trivial; thus the case L = O(−K_B) with B of dimension 2 or 3 is the case of physical interest. However, no such assumptions are needed for our calculations. For our purposes it will be more useful to work with the Tate form of X, which is obtained from the Weierstrass equation by a linear change of coordinates. In particular, the Tate form of X is given by

X : y²z + a₁xyz + a₃yz² = x³ + a₂x²z + a₄xz² + a₆z³.

In the Tate form of X, each aᵢ is a section of π*Lⁱ, and with regard to the Weierstrass form of X we have F = −c₄/48 and G = −c₆/864, where c₄ = b₂² − 24b₄ and c₆ = −b₂³ + 36b₂b₄ − 216b₆, with b₂ = a₁² + 4a₂, b₄ = a₁a₃ + 2a₄ and b₆ = a₃² + 4a₆. From here on, we will denote π*Lⁱ simply by Lⁱ for ease of notation.
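These relations are mechanical to check; the following sympy sketch (ours, not from the paper) builds F, G and the discriminant from the Tate coefficients and verifies, for instance, that the SO(5) specialization a₄ = s², a₁ = a₃ = a₆ = 0 of the next section yields a discriminant proportional to s⁴(a₂ − 2s)(a₂ + 2s):

import sympy as sp

a1, a2, a3, a4, a6, s = sp.symbols('a1 a2 a3 a4 a6 s')

# Standard Tate-to-Weierstrass dictionary
b2 = a1**2 + 4*a2
b4 = a1*a3 + 2*a4
b6 = a3**2 + 4*a6
c4 = b2**2 - 24*b4
c6 = -b2**3 + 36*b2*b4 - 216*b6
F = -c4/48
G = -c6/864

Delta = sp.expand(4*F**3 + 27*G**2)

# SO(5) specialization: a4 = s^2, remaining tuned coefficients set to zero
so5 = Delta.subs({a4: s**2, a1: 0, a3: 0, a6: 0})
print(sp.factor(so5))  # s**4*(2*s - a2)*(2*s + a2): the quoted factorization up to sign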
Each X we consider admits singularities over a smooth divisor S ⊂ B which determines an associated gauge group G_X (to see how the singularities of X determine G_X, one may consult for example [11]). In particular, we consider X whose associated gauge groups are SO(3), SO(5), SO(6), F₄, SU(4) and Spin(7). After denoting a regular section of O(S) by s, the singular locus of each fibration is given by x = y = s = 0.
2.1. SO(3) fibrations. For X → B an SO(3) fibration, the Tate form is given by

SO(3) : y²z = x³ + a₂x²z + sxz²,

so that a₄ = s in this case. Note that this constrains s to be a section of L⁴. The discriminant of SO(3) fibrations is then given by

∆_{SO(3)} : s²(a₂² − 4s) = 0.

2.2. SO(5) fibrations. For X → B an SO(5) fibration, the Tate form is given by

SO(5) : y²z = x³ + a₂x²z + s²xz²,

so that a₄ = s² in this case. Note that this constrains s to be a section of L². The discriminant of SO(5) fibrations is then given by

∆_{SO(5)} : s⁴(a₂ − 2s)(a₂ + 2s) = 0.

2.3. SO(6) fibrations. For X → B an SO(6) fibration, the Tate form is given by

SO(6) : y²z + a₁xyz = x³ + sx²z + s²xz²,

so that a₄ = s² and a₂ = s in this case. Note that this constrains s to be a section of L². The discriminant of SO(6) fibrations is then given by

∆_{SO(6)} : s⁴(a₁² − 4s)(a₁² + 12s) = 0.
2.4. F₄ fibrations. For X → B an F₄ fibration, the Tate form is given by

F₄ : y²z = x³ + c₁s³xz² + c₂s⁴z³,

so that a₄ = c₁s³ and a₆ = c₂s⁴ in this case. Note that this constrains s to be a section of L, while cᵢ is constrained to be a section of Lⁱ. The discriminant of F₄ fibrations is then given by

∆_{F₄} : s⁸(4c₁³s + 27c₂²) = 0.

2.5. SU(4) fibrations. For X → B an SU(4) fibration, the Tate form is given by

SU(4) : y²z + a₁xyz = x³ + c₁sx²z + c₂s²xz² + d₂s⁴z³,

so that a₂ = c₁s, a₄ = c₂s² and a₆ = d₂s⁴ in this case. Note that this constrains s to be a section of L, while cᵢ and dᵢ are constrained to be sections of Lⁱ. The discriminant of SU(4) fibrations is then given by

∆_{SU(4)} : s⁴(b₂²(c₂² − d₂b₂) + (72b₂c₂d₂ − 64c₂³)s² − 432d₂²s⁴) = 0,   where b₂ = a₁² + 4c₁s.

2.6. Spin(7) fibrations. For X → B a Spin(7) fibration, the Tate form is given by

Spin(7) : y²z = x³ + c₁sx²z + c₂s²xz² + d₂s⁴z³,

so that a₂ = c₁s, a₄ = c₂s² and a₆ = d₂s⁴ in this case. Note that this constrains s to be a section of L, while cᵢ and dᵢ are constrained to be sections of Lⁱ. The discriminant of Spin(7) fibrations is then given by

∆_{Spin(7)} : s⁶(c₁²c₂² − 4c₂³ + (18c₁c₂ − 4c₁³)d₂s − 27d₂²s²) = 0.
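For the F₄ case the factorization can be read off directly (a worked step we spell out, using F = c₁s³ and G = c₂s⁴):

\Delta_{F_4} \;=\; 4F^3 + 27G^2 \;=\; 4c_1^3 s^9 + 27 c_2^2 s^8 \;=\; s^8\left(4 c_1^3 s + 27 c_2^2\right),

so the discriminant vanishes to order 8 along s = 0, consistent with a IV* Kodaira fiber there.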
Orientifold limits
In this section, we review Sen's limit for smooth Weierstrass fibrations, and then construct orientifold limits for the six families of vacua whose equations were given in §2. In all cases B denotes a compact complex manifold, L → B a line bundle, and E denotes the rank 3 vector bundle given by (2.1).
3.1. Sen's limit revisited. Let ψ : W → B be a smooth Weierstrass fibration, so that W is a hypersurface in P(E) given by

W : y²z = x³ + fxz² + gz³.
As in the case of the singular Weierstrass fibrations introduced in the previous section, x, y, and z are sections of O(1) ⊗ L², O(1) ⊗ L³ and O(1) respectively, while f and g are sections of L⁴ and L⁶ respectively. To ensure that W is smooth, we assume that the hypersurfaces in B given by f = 0 and g = 0 are both smooth and intersect transversally. The singular fibers of W → B lie over the discriminant locus ∆ ⊂ B, which is given by

∆ = 4f³ + 27g².

In particular, over a generic point of ∆ the fibers are nodal cubics, which then enhance to cuspidal cubics over the codimension 2 locus given by f = g = 0.
The orientifold limit constructed by Sen is then achieved by taking

f = −3h² + ηt,  g = −2h³ + hηt + ϑt²,

where h, η and ϑ are general sections of L², L⁴ and L⁶ respectively, and t is a complex deformation parameter which varies over a disk D ⊂ C centered about the origin. Such redefinitions then give rise to a family W → D, whose total space is given by

W : y²z = x³ + (−3h² + ηt)xz² + (−2h³ + hηt + ϑt²)z³.

The central fiber of the family is then given by

W₀ : y²z = (x + hz)²(x − 2hz),

which is a degenerate fibration with only singular fibers. In particular, the fibers are generically nodal cubics, which signals weak coupling, as the j-invariant of an elliptic curve approaches ∞ as the curve approaches a nodal singularity. The fibers then enhance to cuspidal cubics along the hypersurface O ⊂ B given by h = 0, which is then identified as the orientifold hyperplane. We then take a double cover ρ : Z → B ramified along O, which is achieved by defining Z to be the hypersurface in the total space of L → B which is given by

Z : ζ² = h,

where ζ is a section (of the pullback to L) of L². The orientifold involution is then given by ζ → −ζ, and a simple adjunction calculation shows that K_Z is trivial if and only if K_W is, i.e., when L = O(−K_B). To arrive at the brane spectrum associated with the limit, we then take the flat limit of the associated family of discriminants as t → 0, and then pull it back to Z. In particular, expanding the associated family of discriminants viewed as a function of t yields ∆(t) = h²(η² + 12hϑ)t² + · · ·, thus the flat limit as t → 0 is given by

∆₀ : h²(η² + 12hϑ) = 0.

Pulling ∆₀ back to Z then yields ∆₀ : (ζ⁴(η² + 12ζ²ϑ) = 0) ⊂ Z.
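For concreteness, the quoted expansion of ∆(t) follows from a direct substitution into ∆ = 4f³ + 27g²; the order-t⁰ and order-t¹ terms cancel, leaving (we keep the overall constant that the text suppresses)

\Delta(t) \;=\; 4(-3h^2 + \eta t)^3 + 27(-2h^3 + h\eta t + \vartheta t^2)^2 \;=\; -9\,h^2\left(\eta^2 + 12 h\vartheta\right)t^2 \;+\; O(t^3).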
We then see that the brane spectrum associated with the limit consists of the orientifold hyperplane given by ζ = 0, together with a singular brane supported on D : (η² + 12ζ²ϑ = 0) ⊂ Z.

3.2. An SO(3) limit. Let X → B be an SO(3) fibration as defined in §2, whose defining equation is given by y²z = x³ + a₂x²z + sxz². We then let s = σt, a₂ = h, where t is a complex deformation parameter, varying over a disk D ⊂ C. As t varies we then arrive at a family X → D, whose central fiber is given by

X₀ : y²z = x³ + hx²z.
We immediately see that the fibers of X₀ over h ≠ 0 are nodal curves, thus signaling weak coupling almost everywhere over B. Over h = 0 the nodal curves then enhance to cuspidal curves, thus h = 0 is identified with the orientifold hyperplane. The double cover Z → B corresponding to the orientifold is then constructed in exactly the same way as in Sen's limit, and the associated family of discriminants expanded with respect to t is then given by

∆(t) = h²σ²t² + · · · .

It then follows that the pullback to Z of the flat limit of ∆(t) as t → 0 is given by

∆₀ : (ζ⁴σ² = 0) ⊂ Z,

where we recall ζ is such that the equation for Z is given by ζ² = h. We then see that the limiting brane spectrum in Z corresponds to a smooth orientifold hyperplane given by ζ = 0, and a stack of 2 branes supported on the smooth divisor σ = 0.
3.3. An SO(5) limit. Let X → B be an SO(5) fibration as defined in §2, whose defining equation is given by y²z = x³ + a₂x²z + s²xz². We then let s = ψt, a₂ = h, where t is a complex deformation parameter, varying over a disk D ⊂ C. As t varies we then arrive at a family X → D, whose central fiber is given by

X₀ : y²z = x³ + hx²z.

We immediately see that the fibers of X₀ over h ≠ 0 are nodal curves, thus signaling weak coupling almost everywhere over B. Over h = 0 the nodal curves then enhance to cuspidal curves, thus h = 0 is identified with the orientifold hyperplane. The double cover Z → B given by ζ² = h is then smooth, and the associated family of discriminants expanded with respect to t is then given by ∆(t) = h²ψ⁴t⁴ + · · · . It then follows that the pullback to Z of the flat limit of ∆(t) as t → 0 is given by

∆₀ : (ζ⁴ψ⁴ = 0) ⊂ Z.

We then see that the limiting brane spectrum in Z corresponds to a smooth orientifold hyperplane given by ζ = 0, and a stack of 4 branes supported on the smooth divisor ψ = 0.
3.4. An SO(6) limit. Let X → B be an SO(6) fibration as defined in §2, whose defining equation is given by y²z + a₁xyz = x³ + sx²z + s²xz². We then let which gives rise to a family X → D, whose central fiber is given by The Weierstrass coefficients of the central fiber are then given by We immediately see that the fibers of X₀ over ζ ≠ 0 are nodal curves, thus signaling weak coupling almost everywhere over B. Over ζ = 0 the nodal curves then enhance to cuspidal curves. We then define a double cover Z → B, whose total space is given by ζ² = h, where h is a general section of L². The total space of the double cover Z → B is then smooth, and the associated family of discriminants expanded with respect to t is then given by It then follows that the pullback to Z of the flat limit of ∆(t) as t → 0 is given by

∆₀ : (ζ¹⁰η² = 0) ⊂ Z.

Viewing the equation of ∆₀ as ζ⁴ · ζ² · ζ² · ζ² · η² = 0, we then see that the limiting brane spectrum in Z corresponds to a smooth orientifold hyperplane given by ζ = 0, a stack of 3 brane-image-brane pairs supported on the orientifold hyperplane, and a stack of 2 branes supported on the smooth divisor η = 0. If we specialize the situation by letting η = ζ, then the brane spectrum corresponds to a stack of 4 brane-image-brane pairs supported on the orientifold hyperplane.
3.5. An F₄ limit. Let X → B be an F₄ fibration as defined in §2, whose defining equation is given by y²z = x³ + c₁s³xz² + c₂s⁴z³. We then let s = ζ, c₁ = −3ζ, c₂ = 2ζ² + η²t, which gives rise to a family X → D, whose central fiber is given by

X₀ : y²z = x³ − 3ζ⁴xz² + 2ζ⁶z³.

The Weierstrass coefficients of the central fiber are then given by F = −3ζ⁴ and G = 2ζ⁶. We immediately see that the fibers of X₀ over ζ ≠ 0 are nodal curves, thus signaling weak coupling almost everywhere over B. Over ζ = 0 the nodal curves then enhance to cuspidal curves. We then define a double cover Z → B, whose total space is given by ζ² = h, where h is a general section of L². The total space of the double cover Z → B is then smooth, and the associated family of discriminants expanded with respect to t is then given by ∆(t) = ζ¹⁰η²t + · · · . It then follows that the brane spectrum is exactly the same as in the SO(6) limit constructed in §3.4, and as such, each of the SO(6) and F₄ fibrations may be viewed as distinct F-theory lifts of this orientifold limit.
3.6. An SU(4) limit. Let X → B be an SU(4) fibration as defined in §2, whose defining equation is given by y²z + a₁xyz = x³ + c₁sx²z + c₂s²xz² + d₂s⁴z³. We then let which gives rise to a family X → D, whose central fiber is given by The Weierstrass coefficients of the central fiber are then given by We immediately see that the fibers of X₀ over ζ ≠ 0 are nodal curves, thus signaling weak coupling almost everywhere over B. Over ζ = 0 the nodal curves then enhance to cuspidal curves. We then define a double cover Z → B, whose total space is given by ζ² = h, where h is a general section of L². The total space of the double cover Z → B is then smooth, and the associated family of discriminants expanded with respect to t is then given by It then follows that the pullback to Z of the flat limit of ∆(t) as t → 0 is given by

∆₀ : (ζ⁴γ⁴β² = 0) ⊂ Z.

We then see that the brane spectrum in Z corresponds to a smooth orientifold hyperplane given by ζ = 0, a stack of 4 branes supported on the smooth divisor γ = 0, and a stack of 2 branes supported on the smooth divisor β = 0. If we specialize the situation by letting γ = ζ, then the stack of 4 branes supported on γ = 0 becomes a stack of 2 brane-image-brane pairs supported on the orientifold hyperplane.

3.7. A Spin(7) limit. Let X → B be a Spin(7) fibration as defined in §2, whose defining equation is given by y²z = x³ + c₁sx²z + c₂s²xz² + d₂s⁴z³. We then let s = ζ, c₁ = ζ, d₂ = ηt³, c₂ = βt, which gives rise to a family X → D, whose central fiber is given by

X₀ : y²z = x³ + ζ²x²z.
The Weierstrass coefficients of the central fiber are then given by F = −ζ⁴/3 and G = 2ζ⁶/27. We immediately see that the fibers of X₀ over ζ ≠ 0 are nodal curves, thus signaling weak coupling almost everywhere over B. Over ζ = 0 the nodal curves then enhance to cuspidal curves. We then define a double cover Z → B, whose total space is given by ζ² = h, where h is a general section of L². The total space of the double cover Z → B is then smooth, and the associated family of discriminants expanded with respect to t is then given by ∆(t) = ζ⁸β²t² + · · · . It then follows that the pullback to Z of the flat limit of ∆(t) as t → 0 is given by

∆₀ : (ζ⁸β² = 0) ⊂ Z.

We then see that the brane spectrum in Z is precisely the same as in the SU(4) limit constructed in §3.6 specialized at γ = ζ, and as such, each of the SU(4) and Spin(7) fibrations may be viewed as distinct F-theory lifts of this limit.
Universal tadpole relations
As D3 charge is preserved under S-duality, the D3 tadpole of a consistent, global orientifold limit of F-theory should coincide with that of its F-theory lift. For a smooth F-theory compactification X → B, the D3 tadpole is given by

Q_D3 = χ(X)/24,

while on the type-IIB side the D3 tadpole is given by

Q_D3 = Σᵢ χ(Oᵢ)/12 + Σⱼ χ(Dⱼ)/48,

where the Oᵢ are the supports of the orientifold hyperplanes and the Dⱼ are the supports of the codimension 1 branes in the orientifold double cover Z → B. Equating the two tadpoles then yields the consistency condition

2χ(X) = 4Σᵢ χ(Oᵢ) + Σⱼ χ(Dⱼ).   (4.1)

In each of the limits constructed in §3, the brane spectrum consists of a single orientifold hyperplane O together with branes supported on smooth divisors in Z, and in each case, what we find is that the tadpole relation (4.1) holds if and only if the Euler characteristic χ(X) appearing on the LHS of the relation is replaced with the stringy Euler characteristic χ_str(X). In particular, for each limit constructed in §3, we find

2χ_str(X) = 4χ(O) + Σⱼ χ(Dⱼ).   (4.2)

We take this as strong evidence that for singular F-theory compactifications, the D3 tadpole should be given by

Q_D3 = χ_str(X)/24.

Moreover, in each of the limits constructed in §3, we find that equation (4.2) is the dimension-zero component of the homological identity

2ϕ_*c_str(X) = ρ_*(4c(O) + Σⱼ c(Dⱼ)),   (4.3)

where ϕ : X → B is the F-theory compactification and ρ : Z → B is the associated orientifold compactification. Stringy Chern classes are defined for varieties X with at most Gorenstein canonical singularities [1][12], and in the case that X admits a crepant resolution τ : X̃ → X, so that τ*K_X = K_X̃, the definition of the stringy Chern class yields τ_*c(X̃) = c_str(X). Stringy Chern classes reside in the Chow group of algebraic cycles modulo rational equivalence, and the stringy version of the Gauss-Bonnet theorem is then given by the formula

∫ c_str(X) = χ_str(X),

where the integral sign is notation for taking the degree of the zero-dimensional component of a Chow class.
For each singular F-theory compactification ϕ : X → B we consider, crepant resolutions τ : X̃ → X were constructed in [11], which were then used to compute explicit formulas for ϕ_*(τ_*c(X̃)) (which may also be recovered by taking the limit as y → −1 of the stringy Hirzebruch class formulas derived in [14]). And since τ_*c(X̃) = c_str(X), we have ϕ_*(τ_*c(X̃)) = ϕ_*c_str(X), which will enable us to compute the LHS of the universal tadpole relation (4.3) associated with each limit. As the degree of a zero-dimensional Chow class is invariant under proper pushforward [16], we also have ∫_B ϕ_*c_str(X) = ∫_X c_str(X) = χ_str(X); thus the dimension-zero component of (4.3) encodes the numerical identity (4.2) corresponding to the matching of the D3 tadpoles associated with an orientifold limit of F-theory.
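To make the degree bookkeeping concrete, the dimension-zero content of the relation verified in the next paragraph can be written out as follows (using the stringy Gauss-Bonnet formula on X and the ordinary Gauss-Bonnet theorem on the smooth brane supports; the notation is that of the text, and this is merely the above reasoning spelled out, not an addition to it):

\[
\int_B 2\,\varphi_* c_{\mathrm{str}}(X) = 2\chi_{\mathrm{str}}(X),
\qquad
\int_B \rho_*\bigl(4\,c(O) + 2\,c(D)\bigr) = 4\chi(O) + 2\chi(D),
\]

so the degree of a Chow-level identity such as 2ϕ_*c_str(X) = ρ_*(4c(O) + 2c(D)) is precisely a numerical matching of D3 tadpoles.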
In what follows, we verify the universal tadpole relation (4.3) associated with each limit constructed in §3. In each case, we denote the first Chern class of L → B by L, and we will use the fact that if D_a is a smooth divisor of class ρ^*(aL) in the total space of the orientifold ρ : Z → B, then ρ_*c(D_a) = (2aL/(1 + aL)) · ((1 + L)/(1 + 2L)) · c(B). We then have that 2ϕ_*c_str(X) = ρ_*(4c(O) + 2c(D)), as desired.
4.3. The SO(6) and F_4 universal tadpole relations. Let ϕ : X → B be an SO(6) or an F_4 fibration. As computed in [11], in both cases we have ϕ_*c_str(X) = (12L/(1 + 2L)) · c(B).
"Physics"
] |
Doppler Spectrum-Based NRCS Estimation Method for Low-Scattering Areas in Ocean SAR Images
The image intensities of low-backscattering areas in synthetic aperture radar (SAR) images are often seriously contaminated by the system noise floor and by the azimuthal ambiguity signal from adjacent high-backscattering areas. Hence, the image intensity of low-backscattering areas does not correctly reflect the backscattering intensity, which causes confusion in subsequent image processing or interpretation. In this paper, a method is proposed to estimate the normalized radar cross-section (NRCS) of low-backscattering areas by utilizing the differences between noise, azimuthal ambiguity, and signal in the Doppler frequency domain of single-look SAR images; the aim is to eliminate the effect of system noise and azimuthal ambiguity. Analysis shows that, for a spaceborne SAR with a noise equivalent sigma zero (NESZ) of −25 dB and a single-look pixel of 8 m × 5 m, the NRCS-estimation precision of this method can reach −38 dB at a resolution of 96 m × 100 m. Three examples are given to validate the advantages of this method in estimating low NRCS and in filtering azimuthal ambiguity.
Introduction
Areas with low normalized radar cross-section (NRCS) appear dark in synthetic aperture radar (SAR) images. Such dark areas are frequently seen in ocean SAR images, for example those of oil spills, organic films, low wind areas, fronts, upwelling, current shear zones, and dark strips of internal waves and swells [1,2]. Among land targets, the back sides of mountains and flat ground such as airport runways are also typical low-backscattering targets. The signal intensities of low-backscattering areas in SAR images are often close to, or even less than, the noise floor of the SAR system. Taking the ocean surface as an example, the mean NRCS of the ocean surface for the L, C, and X bands ranges from −15 dB to −25 dB at moderate wind speeds and incident angles. However, the NRCS of low-backscattering areas on the ocean surface is much lower than this mean value, often less than −30 dB, whereas the noise equivalent sigma zero (NESZ) of most spaceborne SAR systems ranges from −20 dB to −30 dB. Hence, the backscattering signal intensities of low-backscattering areas in ocean SAR images are often below the noise floor of SAR systems. The NESZ values of typical spaceborne SAR systems are listed in Table 1 [3,4]. Another factor that can affect the image intensities of low-backscattering areas is the azimuthal ambiguity effect of high-backscattering areas. It occurs because the Doppler frequency of the signal reflected from the area illuminated by the azimuthal sidelobe of the antenna exceeds the pulse repetition frequency (PRF). The azimuthal ambiguity signal of a target is located at a position with a certain displacement relative to its real position; this displacement depends on the PRF, the velocity of the platform, and the Doppler centroid frequency of the SAR system. A typical azimuthal ambiguity level of a spaceborne SAR is about −15 dB to −20 dB. If the NRCS of a high-backscattering area is 15 dB to 20 dB higher than that of a low-backscattering area located where the azimuthal ambiguity signal from the high-backscattering area falls, the azimuthal ambiguity signal can significantly affect the image intensity of the dark area. Azimuthal ambiguities are especially frequent at land-water junctions, because the NRCSs of land targets are much higher than that of the water surface.
The two analyses above indicate that, in order to estimate the true NRCS of low-backscattering areas, the effect of the azimuthal ambiguity must be taken into consideration. Nevertheless, the standard radiometric calibration algorithm for SAR images only takes the system noise into account and ignores the azimuthal ambiguity effect, as expressed in Equation (1) [5,6].
where I, R, α, and G are the image intensity, slant range, elevation angle, and system gain of a certain image pixel, respectively; g(α) is the two-way antenna gain at elevation angle α; N_0 is the system noise; K is the calibration constant; and R_ref, α_ref, and G_ref are the slant range, elevation angle, and system gain of the reference target, respectively. However, an accurate system noise N_0 is seldom provided in standard commercial spaceborne SAR data products. Moreover, even if a sufficiently accurate N_0 is provided, it is possible to obtain a meaningless NRCS of less than or equal to zero, because the image intensity is a stochastic variable that may be less than the system noise N_0, especially when the NRCS of the target is relatively low. Hence, in most practical NRCS calibration applications, the system noise is also ignored and Equation (1) is simplified to Equation (2) [5-10].
When calibrated using Equation (2), the NRCS of low-backscattering areas inevitably includes a significant contribution from the system noise and azimuthal ambiguity, which can cause confusion in subsequent image processing and interpretation.
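To see how easily the simple noise-subtracted estimate fails in dark areas, the following sketch simulates multi-look intensities around a noise floor and counts how often I − N_0 comes out non-positive. This is an illustration of the argument above, not code from the paper; the look number, NESZ, and target NRCS are hypothetical.

```python
import numpy as np

# Multi-look intensity is modeled as Gamma(shape=M, mean=sigma + N0);
# at low SNR the simple estimate I - N0 frequently goes negative,
# which is the "meaningless NRCS" failure mode described in the text.
rng = np.random.default_rng(0)

M = 80                     # number of looks averaged per pixel (hypothetical)
N0 = 10 ** (-25 / 10)      # noise floor, NESZ = -25 dB, in linear units
sigma = 10 ** (-35 / 10)   # true NRCS of a dark area, -35 dB, in linear units

# Gamma with shape M and mean (sigma + N0) has scale = mean / M
I = rng.gamma(shape=M, scale=(sigma + N0) / M, size=100_000)
est = I - N0               # the simple noise-subtracted estimate

print("fraction of meaningless (<= 0) estimates:", np.mean(est <= 0))
```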
In this paper, a method of NRCS estimation for low-backscattering areas based on the Doppler spectrum is proposed. The method first requires the noise floor N_0 and the antenna pattern to be known; if they are unknown, they can also be estimated from a single-look complex SAR image. The method can also eliminate the azimuthal ambiguity effect according to the shape of the Doppler spectrum, and it avoids meaningless NRCS estimates by using the maximum likelihood (ML) estimation method together with a modified Newton iteration method.
The rest of this paper is organized as follows. Section 2 gives the details of the algorithms and principles used in this method. In Section 3, three examples are presented to validate the advantages of this method. In Section 4, an analysis of the estimation precision and simulations are presented. Finally, some conclusions are given in Section 5.
Analysis of Doppler Spectrum Composition
From SAR imaging theory [4-11], it is well known that the system noise, the azimuthal ambiguity, and the backscattering signal present different shapes in the Doppler spectrum of the SAR raw signal (here, it is supposed that the range matched filtering and range cell migration correction have been done): the system noise power density is a constant across the Doppler spectrum, whereas the shapes of the Doppler spectra of the backscattering signal and the azimuthal ambiguity depend on the antenna pattern, corresponding to the main lobe and side lobes, respectively. The Doppler spectrum of the SAR raw signal can be expressed as
p_r(f, x_0, y_0) = Σ_n σ(x_0 − nD_x, y_0 − nD_y) P_a(f − f_0 − nF_r) + N_0. (3)
In Equation (3), (x_0, y_0) is the center position of the area to which the Fourier transformation is applied, with x_0 and y_0 the coordinates in the flight and look directions, respectively; E[·] refers to the mathematical expectation; f denotes the Doppler frequency; and p_r(f) denotes the azimuthal power spectrum of the SAR raw signal. P_a(f) is the power spectrum function of an ideal point target with a 0 dB NRCS, the shape of which is determined by the two-way antenna azimuthal pattern. Further, f_0 is the Doppler centroid, F_r refers to the pulse repetition frequency of the SAR system, N_0 is the intrinsic noise floor of the SAR system, and σ(x_0 − nD_x, y_0 − nD_y) is the mean NRCS of the pixels located between (x_0 − nD_x − L/2, y_0 − nD_y) and (x_0 − nD_x + L/2, y_0 − nD_y), where L is the data length for calculating the Doppler spectrum. D_x and D_y are the displacements in the flight and look directions, respectively, between the position of the azimuthal ambiguity signal and the real target position; they can be written in terms of the slant range R of the target, the radar wavelength λ, and the velocity V of the SAR platform.
In Equation (3), n = 0 corresponds to the signal reflected from the main lobe of the antenna, and n ≠ 0 corresponds to the contribution from the azimuthal ambiguity effect. In general, among the azimuthal ambiguity signals, only n = −1 and n = 1 matter, corresponding to the azimuthal ambiguity from the first azimuthal antenna side lobes. Hence, Equation (3) can be simplified as Equation (5). Equation (5) indicates that the shape of the averaged power spectrum of the backscattering signal is determined by the antenna pattern P_a(f) and N_0. To illustrate the shape difference of the power spectrum between system noise, azimuthal ambiguity, and the backscattering signal more clearly, a schematic diagram is given in Figure 1.
Figure 1 is only a schematic diagram. When the image distribution is relatively uniform, the noise is the main disturbance; when there is a strong target nearby, the azimuthal ambiguity comes mainly from that strong target. From Figure 1, it is clear that the shapes of the various components of the power spectrum of the SAR raw signal, composed of the backscattering signal, azimuthal ambiguity, and system noise, are very different. In general, the antenna pattern P_a(f) and system noise N_0 can be acquired from the external and internal calibration of the SAR system. Therefore, it is possible to eliminate the effect of the azimuthal ambiguity on the NRCS estimation by taking full advantage of these differences. However, the azimuthal resolution of the SAR raw signal is too coarse for most applications. To increase the azimuthal resolution, the azimuthal matched filter must be applied to the SAR raw signal to convert it to a single-look complex image. An unweighted azimuthal matched filter can be used, which only changes the phase of the Doppler spectrum without modifying its amplitude. Thus, the azimuthal power spectrum of the single-look complex image has the same shape characteristics as that of the SAR raw signal. The relation between the power spectra of single-look complex images and the SAR raw signal is given as p_s(f, x_0, y_0) = |H(f)|^2 p_r(f, x_0, y_0), where H(f) denotes the unweighted azimuthal matching filter, and p_s(f, x_0, y_0) and p_r(f, x_0, y_0) are the azimuthal power spectra of the single-look complex image and the SAR raw signal, respectively.
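As a concrete illustration of this spectrum model, the following sketch evaluates the mean Doppler spectrum of Equation (5) for one patch with hypothetical numbers; the Gaussian antenna pattern, the PRF, and all NRCS values are stand-ins, and the sign conventions follow Equation (3) as written above.

```python
import numpy as np

# Mean Doppler power spectrum of one patch per Equation (5):
# main-lobe term + first left/right ambiguity terms + flat noise floor.
Fr = 1680.0                      # pulse repetition frequency [Hz] (hypothetical)
f = np.linspace(-Fr / 2, Fr / 2, 128)

def Pa(freq):
    """Two-way azimuthal antenna pattern of an ideal 0 dB point target
    (a hypothetical Gaussian stand-in for the calibrated pattern)."""
    return np.exp(-0.5 * (freq / (0.3 * Fr)) ** 2)

N0 = 10 ** (-25 / 10)            # noise floor (linear)
sig_c = 10 ** (-35 / 10)         # NRCS at (x0, y0): a dark patch
sig_m = 10 ** (-15 / 10)         # NRCS at (x0 - Dx, y0 - Dy): bright neighbor
sig_p = 10 ** (-30 / 10)         # NRCS at (x0 + Dx, y0 + Dy)

# After centroid shifting (f0 = 0), the n = +/-1 terms enter through the
# antenna side lobes displaced by +/- Fr.
p_mean = sig_c * Pa(f) + sig_m * Pa(f - Fr) + sig_p * Pa(f + Fr) + N0
```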
The shape patterns shown in Figure 1 are the mathematical expectation of the power spectrum. The real power spectrum of a small patch of a single-look complex image is in fact a stochastic process. As the signal of a single-look complex image is a complex Gaussian process, the probability density function of each sample of the power spectrum is given by the well-known exponential distribution
g(p(f_i) | σ) = (1/p̄(f_i)) exp(−p(f_i)/p̄(f_i)),
where p̄(f_i) is the mean spectrum of Equation (5) evaluated at the discrete frequency f_i. Equation (5) indicates that the backscattering signal σ(x_0, y_0) of a certain area contributes to three spectra: p_s(f, x_0, y_0), p_s(f, x_0 − D_x, y_0 − D_y), and p_s(f, x_0 + D_x, y_0 + D_y). Hence, the joint conditional probability density function of all the frequency points is the product of the per-frequency densities over i = 1, ..., m, where m is the point number of the discrete Doppler spectrum.
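The corresponding negative log-likelihood is simple to write down; the sketch below does so for a single patch, continuing the hypothetical model of the previous snippet (the mean spectrum `pbar` follows Equation (5), and `sig_m`/`sig_p` are the NRCS values of the two ambiguity neighbors).

```python
import numpy as np

def neg_log_likelihood(sigma, p_obs, f, Pa, Fr, N0, sig_m=0.0, sig_p=0.0):
    """Negative log-likelihood of an observed periodogram p_obs under the
    exponential model: each sample p(f_i) is exponential with mean
    pbar(f_i) = sigma*Pa(f_i) + sig_m*Pa(f_i - Fr) + sig_p*Pa(f_i + Fr) + N0."""
    pbar = sigma * Pa(f) + sig_m * Pa(f - Fr) + sig_p * Pa(f + Fr) + N0
    return np.sum(np.log(pbar) + p_obs / pbar)
```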
Moreover, the image intensity is also a stochastic process related to the backscattering signal. In general, if the image patch is small enough, the probability density function of the multi-look image intensity of a patch can be modeled by a gamma distribution, where M is the look number and I(x_0, y_0) is the mean image intensity of a certain area with the center located at (x_0, y_0). E_C, E_L, and E_R are the main lobe, left side lobe, and right side lobe factors, respectively. To estimate a higher-resolution NRCS from the power spectrum, the single-look complex image is divided into many small patches and the Fourier transformation is applied to each image patch.
After obtaining P_a(f) and N_0, the local NRCS can be further estimated from the Doppler spectrum. In this step, L is selected according to the desired final resolution, and it cannot be made significantly larger than that.
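One plausible way to organize this patch-wise spectrum computation is sketched below: the azimuth axis is split into length-L blocks, each block is Fourier transformed, and a few adjacent range cells are incoherently averaged. Array shapes, block length, and the averaging factor are hypothetical choices, not values prescribed by the paper.

```python
import numpy as np

def patch_doppler_spectra(slc, L=128, n_range_avg=4):
    """Patch-wise Doppler power spectra of a single-look complex image.

    slc: complex array of shape (n_range, n_azimuth).
    Returns an array of shape (n_range_groups, n_blocks, L)."""
    n_range, n_az = slc.shape
    n_blocks = n_az // L
    blocks = slc[:, : n_blocks * L].reshape(n_range, n_blocks, L)
    periodograms = np.abs(np.fft.fft(blocks, axis=-1)) ** 2 / L
    # incoherent averaging over groups of adjacent range cells
    n_groups = n_range // n_range_avg
    periodograms = periodograms[: n_groups * n_range_avg]
    periodograms = periodograms.reshape(n_groups, n_range_avg, n_blocks, L)
    return periodograms.mean(axis=1)
```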
Methods and Solutions to Estimate the NRCS from the Doppler Spectrum
Suppose that a SAR single-look complex image has been corrected; for example, the range shift caused by the azimuthal ambiguity has been compensated, and the image has been interpolated k times in azimuth. Meanwhile, the azimuthal shift caused by the azimuthal ambiguity is X times larger than L. The corrected image is divided into small patches, each of size about R_m × R_a (range by azimuth). Choosing a row of azimuthal patches, and supposing that the scattering coefficients of the patches are σ_1, σ_2, ..., σ_T, respectively, the Doppler spectrum of the i-th block is denoted f_i_m.
Estimating the NRCS from the Doppler spectrum is a typical Bayesian estimation problem [12,13], which is expressed as the following equation.
The estimation equation of a single patch can be expressed as
σ̂(x_0, y_0) = argmax_{σ(x_0,y_0)} g(p(f_1), p(f_2), ..., p(f_m) | σ(x_0, y_0)) g_p(σ(x_0, y_0)), (12)
where σ̂(x_0, y_0) is the estimate of σ(x_0, y_0), f_i (i = 1, 2, ..., m) are the discrete frequency points, m is the point number of the discrete Doppler spectrum, g(p(f_1), p(f_2), ..., p(f_m) | σ(x_0, y_0)) refers to the conditional probability density function of the Doppler spectrum, and g_p(σ(x_0, y_0)) refers to the a-priori probability density of σ(x_0, y_0). Bayesian estimation is a globally optimal estimation method: it increases the estimation precision at NRCS values with high a-priori probability density, but decreases it at NRCS values with low a-priori probability density. In general, the a-priori probability density of the NRCS of a SAR image can be expressed by models such as gamma, inverse Gaussian, or other distribution models [14]. However, in these commonly used models, the probability densities of low NRCS values are relatively low, which would lead to less accurate estimation results for the low-backscattering areas. Hence, to acquire a higher estimation precision for the low-backscattering areas, the commonly used NRCS distribution models are not adopted; instead, it is assumed that the a-priori probability density of the NRCS is uniformly distributed. Another point which should be considered is that the NRCS should be greater than zero. Therefore, the a-priori probability density of the NRCS used in this paper is uniform over σ ≥ 0 and zero for σ < 0 (Equation (13)). Equation (13) is used as the new a-priori probability density in the proposed method, and the NRCS is then estimated by the maximum likelihood (ML) estimation method. The advantage of the proposed method is that it avoids meaningless estimation results less than or equal to zero.
Because the a-priori probability density given by Equation (13) is a discontinuous function, which is not convenient for the solving of Equation (11), it is approximated by a smooth function with a steepness parameter α, where, in order to match Equation (13), α should be greater than 10^16; 10^20 is chosen in this method.
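The exact smooth approximation is not reproduced here, so the sketch below uses a logistic step 1/(1 + exp(−ασ)) as one plausible choice of smooth stand-in for the positivity prior; at α ≈ 10^20 it is numerically indistinguishable from an indicator of σ ≥ 0 in double precision.

```python
import numpy as np

def smooth_positive_prior(sigma, alpha=1e20):
    """Logistic approximation to the uniform-on-positive prior of
    Equation (13); a hypothetical stand-in for the paper's formula."""
    # suppress the harmless overflow warning from exp() at large |alpha*sigma|
    with np.errstate(over="ignore"):
        return 1.0 / (1.0 + np.exp(-alpha * sigma))

print(smooth_positive_prior(np.array([-1e-6, 0.0, 1e-6])))  # -> [0.0, 0.5, 1.0]
```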
The estimate of Equation (12) can be obtained by solving the following equation, where p_n(f_i) is the i-th frequency point of the Doppler spectrum of the n-th block.
Because the signal of a single-look complex image is a complex Gaussian process, the probability density function of each sample of the Doppler spectrum is given by the exponential distribution introduced above, and the joint probability density function of all the frequency points is the corresponding product. Inserting Equations (5), (16), and (17) into Equation (15), and considering that the Doppler spectrum at different azimuthal locations has different components, the following functions at different azimuthal locations are derived, where n refers to the pixel location in the flight direction, T is the length of the azimuthal data, and X is the number of the azimuthal ambiguity; the cases 0 < n ≤ X and X < n ≤ 2X are treated separately. Combining all the equations above yields a system of equations, one per patch. To solve for all the unknown variables, the Newton iterative method is adopted. The Jacobian matrix of the derived functions is given in Appendix A.
To solve for σ(x_0, y_0) from the equations above, σ(x_0 − D_x, y_0 − D_y) and σ(x_0 + D_x, y_0 + D_y) must first be known. However, to obtain σ(x_0 − D_x, y_0 − D_y) or σ(x_0 + D_x, y_0 + D_y), a known σ(x_0, y_0) is also needed. This circular dependency can be addressed by using an iterative strategy: in the n-th iteration, Equation (17) is written in terms of σ_n(x_0, y_0), the estimation result in the n-th iteration. The initial guess of σ(x_0, y_0) is computed from I(x_0, y_0), the mean image intensity of the pixels between (x_0 − L/2, y_0) and (x_0 + L/2, y_0), together with the azimuthal ambiguity factor A. The convergence condition is expressed in terms of a certain small NRCS value σ_min and the pixel numbers N_x and N_y of the estimated NRCS image in the flight and look directions, respectively. The aforementioned σ(x_0, y_0) is a relative backscattering intensity rather than an absolute NRCS. If the K-constant needed in the radiometric calibration is available, the estimated relative backscattering intensity σ(x_0, y_0) can be further converted to the absolute NRCS by replacing I − N_0 in Equation (1) with the σ(x_0, y_0) estimated by this method (Equation (27)).
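The iteration can be organized as below. This is a schematic reading of the strategy just described: `estimate_patch` stands in for the per-patch ML/Newton solve, the neighbors offset by X patches supply the ambiguity terms, and the initialization (I − N_0)/(1 + 2A) is a guess at the omitted initial-value formula, flagged as such in the code.

```python
import numpy as np

def iterate_nrcs(p_obs, X, estimate_patch, I_mean, N0, A,
                 tol=1e-8, max_iter=50):
    """Self-consistent NRCS estimation over a row of T azimuthal patches.

    p_obs:          list/array of T observed Doppler spectra
    X:              patch offset of the azimuthal ambiguity
    estimate_patch: callable(spectrum, sigma_left, sigma_right) -> sigma
    I_mean, N0, A:  mean intensities, noise floor, ambiguity factor"""
    T = len(p_obs)
    # initial guess: (I - N0)/(1 + 2A) is an assumption, not the paper's formula
    sigma = np.maximum(I_mean - N0, 0.0) / (1.0 + 2.0 * A)
    for _ in range(max_iter):
        prev = sigma.copy()
        for n in range(T):
            left = prev[n - X] if n - X >= 0 else 0.0   # edge patches: no neighbor
            right = prev[n + X] if n + X < T else 0.0
            sigma[n] = estimate_patch(p_obs[n], left, right)
        if np.max(np.abs(sigma - prev)) < tol:          # convergence check
            break
    return sigma
```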
Algorithm Flow Chart and Summary
The azimuthal matching filters of the standard imaging algorithms of commercial SAR products are generally weighted filters, which do not satisfy the requirements of our method. Thus, our algorithm begins with the SAR raw data product. In the first step, SAR imaging, an unweighted azimuthal matching filter is used. A byproduct of SAR imaging is the Doppler centroid of each range cell, which is used in the second step: the method shifts the Doppler centroids of the single-look complex image to the zero frequency position, which involves substeps such as the fast Fourier transform (FFT), the inverse FFT, and spectrum shifting. In the last step, the single-look complex image is first divided into many subimage patches, whose size is selected based on the desired resolution; then an iteration strategy is used to estimate the signal intensity of each subimage patch. In each iteration, the signal intensities are estimated on the basis of Equation (23), which is solved by the Newton iteration algorithm. Finally, the estimated relative backscattering intensity σ(x_0, y_0) is converted to the absolute NRCS using Equation (27).
The algorithm used in this method is summarized in Figure 2.
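Of the three steps, the centroid-shifting step is the easiest to make concrete: it amounts to a circular shift of the FFT bins of each azimuth block. The helper below sketches that step only; the focusing and estimation steps are represented by the routines discussed earlier, and the function name and arguments are hypothetical.

```python
import numpy as np

def shift_centroid_to_zero(slc_block, fdc, Fr):
    """Shift the Doppler centroid fdc (Hz) of an azimuth block to 0 Hz.

    slc_block: complex array whose last axis is azimuth; Fr is the PRF."""
    L = slc_block.shape[-1]
    spec = np.fft.fft(slc_block, axis=-1)
    bins = int(round(fdc / Fr * L))        # centroid expressed in FFT bins
    spec = np.roll(spec, -bins, axis=-1)   # move the centroid to bin 0
    return np.fft.ifft(spec, axis=-1)
```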
Validation of the Proposed Estimation Method
In this section, three examples will be presented to demonstrate the advantages of this method in low NRCS estimation and azimuthal ambiguity filtering.
Example 1: Qualitative Analysis for the Estimation Method in Low NRCS
The first example, used for a qualitative analysis, is an ocean image acquired by ERS-2 (European remote sensing satellite (ERS) was the European Space Agency's first Earth-observing satellite) on 30 April 2005 in the South China Sea, shown in Figure 3. There are 4912 pixels in the look direction and 28,695 pixels in the flight direction in the single-look complex image used in this example. Frame 1 in Figure 3 is a subimage for the comparison between a conventional SAR image and the corresponding estimated NRCS image. The first step is estimating the Doppler centroid f_0 for each range cell [11,15-19], and then shifting the Doppler spectrum centroid of the single-look complex image to zero. Note that ocean currents can lead to an additional local shift of the Doppler centroid [20-22]; however, the Doppler shift resulting from the ocean current is generally less than 5% of the PRF, which can be neglected in the method proposed in this paper. Examples of unshifted and shifted Doppler spectra are shown in Figure 4. The second step is calculating Doppler spectra from the single-look complex image. In this example, each Doppler spectrum is a 128-point discrete spectrum that is averaged 224 times in the flight direction and 10 times in the look direction. A total of 491 Doppler spectra from the entire SAR image are obtained. The azimuthal length used for calculating one Doppler spectrum is about 121 km (i.e., L = 121 km).
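These bookkeeping numbers can be cross-checked from the pixel counts and the pixel sizes quoted below for this scene (about 4.2 m in the flight direction and 21 m in the look direction); the arithmetic here is mine, not the paper's.

```python
# Consistency check of the quoted numbers for Example 1.
flight_pixels = 28_695
az_pixel_m = 4.2
print(flight_pixels * az_pixel_m / 1e3)  # ~120.5 km, matching L ~ 121 km

look_pixels = 4_912
print(look_pixels // 10)                 # 491 spectra after 10x look averaging
```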
Frame 1 in Figure 3 is chosen to compare the conventional SAR image with the corresponding estimated NRCS image; the result is shown in Figure 5. The pixel size of the single-look complex image is about 21 m (range direction) × 4.2 m (flight direction). Figure 5a is a multi-look SAR image, in which each pixel is averaged over 80 adjacent pixels of the single-look complex image (4 pixels in the look direction × 20 pixels in the flight direction). Figure 5b is the estimated NRCS image, in which each pixel is estimated from 80 pixels of the single-look complex image (in each estimation, the Doppler spectrum is calculated from 20 pixels in the flight direction and averaged 4 times in the look direction). The pixel sizes of both images in Figure 5 are about 84 m × 84 m, and the image intensities of both images are shown by logarithmic grayscaling to display clear texture features of the dark area. The comparison of Figure 5a,b demonstrates that Figure 5b presents the features of the dark area more clearly. To compare these two images qualitatively, the image intensity profiles along the white lines are depicted in Figure 6.
In Figure 6, the signal intensity is normalized by the mean intensity of the entire image. The image feature near the white line is an oceanic internal wave. Four peaks of the internal wave are marked by dashed lines, and three troughs are marked by bold dashed lines. At the positions of the peaks, the estimated NRCS intensity is very close to the conventional SAR image intensity because the SNR of the peaks is sufficiently high. As a comparison, at the positions of the troughs, the SAR signal is buried by the noise floor (about −10 dB after normalization), making it hard to judge the exact trough positions. In contrast, the estimated NRCS can remove the effect of the noise floor to a large extent, and the trough positions of the estimated NRCS are near the midpoints of the two adjacent peaks, which indirectly validates the correctness of the proposed method.
Example 2: Quantitative Analysis for the Estimation Method in Low NRCS
This example, used for a quantitative analysis, is an image of atmospheric gravity waves acquired by ERS-2 on 11 March 2006 in the East China Sea, shown in Figure 7. There are 4912 pixels in the look direction and 28,695 pixels in the flight direction in the single-look complex image used in this example.
The four white data lines a, b, c, and d mark the profiles used for comparison among the proposed method, the SAR raw image intensity minus N_0, and the optimal parameter estimation method of internal waves [23].
In this example, the method of optimal parameter estimation of internal waves in SAR images [23] and the proposed method in this paper will be used to deal with the internal wave in Figure 7.
The optimal parameter estimation is the latest method for estimating the parameters of internal solitary waves; in this article, it is referred to as optimal parameter estimation. In order to verify the feasibility of this method, we selected a section at another location of the atmospheric gravity waves (the red solid line region in Figure 7). The estimation result is shown in Figure 8, which shows that the optimum estimators are very close to the Cramér-Rao bound (CRB). Therefore, the estimation method in the literature [23] is considered suitable to fit the atmospheric gravity waves' profiles in Figure 7.
We selected four profiles from the atmospheric gravity waves in Figure 7; at the trough positions, the burial of the SAR signal by the noise floor is more obvious at positions a and b than at positions c and d. The estimation method in the literature [23], the proposed method, and the SAR raw image are compared along the four profiles; the results of the comparison are shown in Figure 9.
In Figure 9, the signal intensity is normalized by the mean intensity of the entire image. The image features near the white lines represent atmospheric gravity waves. At the peak positions, the estimated NRCS intensity of all the profiles is very close to the conventional SAR image intensity due to the sufficiently high SNR of the peaks. As a comparison, at the trough positions, the SAR signal is buried by the noise floor (about −10 dB after normalization at positions a and b, and about −5 dB to −8 dB at positions c and d), making it hard to judge the exact trough positions. In contrast, the estimated NRCS can remove the effect of the noise floor, reaching −22 dB after normalization at positions a and b, and about −16 dB to −18 dB at positions c and d. Position a is selected as an example: the method of optimal parameter estimation of internal waves in SAR images from the literature [23] estimates the energy intensity value at the wave trough position to be around −22 dB. From Figure 9, the signal intensity curve estimated by the proposed method is very close to that of the optimal parameter estimation method, which directly validates the accuracy of the proposed method.
Example 3: Validation of the Azimuthal Ambiguity Analysis
The third example is a RADARSAT-1 (RADARSAT is a Canadian remote sensing Earth observation satellite program overseen by the Canadian Space Agency) image of Vancouver, which is shown in Figure 10. The SAR raw data of this example were obtained from the CD accompanying the literature [11].
There are 7940 pixels in the look direction and 19,425 pixels in the flight direction in the single-look complex image of this example. As in the first example, each Doppler spectrum is a 128-point discrete spectrum, averaged 151 times in the flight direction and 30 times in the look direction. A total of 264 Doppler spectra can be obtained from the entire single-look complex image. The azimuthal length used for calculating one Doppler spectrum is about 109 km (i.e., L = 109 km).
Using Frame 2 in Figure 10 as an example, the conventional SAR image and the corresponding estimated NRCS image are shown in Figure 11. The pixel size of the single-look complex image is about 8 m (look direction) × 5.6 m (flight direction). Figure 11a is a multi-look SAR image, in which each pixel is averaged over 192 adjacent pixels of the single-look complex image (12 pixels in the look direction × 16 pixels in the flight direction). Figure 11b is an estimated NRCS image, in which each pixel is estimated from 192 pixels of the single-look complex image (in each estimation, the Doppler spectrum is calculated from 16 pixels in the flight direction and averaged 12 times in the look direction). The pixel sizes of both images are about 96 m × 90 m.
The white frame in Figure 11a is contaminated by the azimuthal ambiguity signal from the strong land targets to the right. As a comparison, the azimuthal ambiguity signal is filtered out quite clearly at the same position in Figure 11b.
Points A and B in Figure 11a are selected to illustrate the difference in the Doppler spectrum between signals contaminated and uncontaminated by the azimuthal ambiguity. The Doppler spectra of points A and B are shown in Figure 12.
The centroids of the Doppler spectra depicted in Figure 12a,b have been shifted to zero. The blue solid lines are the measured Doppler spectra calculated directly from the single-look complex image, and the red dashed lines are the Doppler spectra modeled by Equation (5), in which σ(x_0, y_0), σ(x_0 + D_x, y_0 + D_y), σ(x_0 − D_x, y_0 − D_y), P_a(f), and N_0 are all known from the raw data. Point A is an uncontaminated target, so its Doppler spectrum satisfies a typical Gaussian function quite well (Figure 12a). The accordance between the measured and modeled spectra validates the accuracy of the proposed method. As a comparison, point B is a target seriously contaminated by the azimuthal ambiguity effect. The low-frequency and high-frequency parts of the Doppler spectrum of B (Figure 12b) are very high; they correspond to the ambiguity signals from the right and left sides of point B, respectively, and the spectrum at low frequency is especially high. From Figure 11a, it is known that both the left and right sides of point B are high-backscattering land targets, whereas point B is a water area with a very low NRCS, and the target on the right is much stronger than that on the left. The NRCS distribution of Figure 11a agrees with the analysis of the Doppler spectrum of point B, and the modeled spectrum matches the measured spectrum quite well, which further validates the proposed method.
Discussion
As SAR imaging becomes more and more widely used, it is worth emphasizing how the proposed method proceeds: the radar echo is analyzed, the relative value of the RCS is extracted from the Doppler spectrum, and the estimated relative backscattering intensity is converted to the absolute NRCS using Equation (27). The comparison between the proposed method and the traditional method is described in detail in Section 3, where three examples show the feasibility and superiority of the proposed method. In the following, the estimation accuracy of the proposed method and the traditional method is analyzed by simulation.
Because the normalized image intensity differs from the NRCS only by a constant offset, the proposed method treats the normalized image intensity as equivalent to the NRCS. In this paper, in order to simplify the calculation without loss of equivalence, the normalized image intensity is adopted in Figures 6, 8 and 9.
The Comparative Simulation Analysis of Estimation Accuracy for Different NRCS Estimation Methods
The simulations were performed under different signal-to-noise ratios (SNR) and azimuthal ambiguity conditions. The parameters of the simulations are given in Table 2 and correspond to low, intermediate, and high azimuthal ambiguity, respectively. Concerning the precision of ML estimation: according to mathematical statistics theory, ML estimation can reach the Cramér-Rao bound [12,13]; that is, the root-mean-square (rms) of the estimation error can be expressed accordingly, where σ̂(x_0, y_0) is the ML estimate of σ(x_0, y_0) and rms[·] refers to the root-mean-square. The rms of the modified method is shown in Figure 13. For comparison, the Cramér-Rao bound of ML estimation and the simple estimate I − N_0, which is used in Equation (1), are also depicted in Figure 13. In Figure 13, the SNR (x-axis) refers to σ(x_0, y_0)/N_0, and the estimation error (y-axis) is normalized by N_0. A comparison of Figure 13a-c clearly shows that the estimation error of the simple estimate I − N_0 increases significantly with increasing azimuthal ambiguity, whereas the proposed estimation method maintains almost the same estimation precision under the various azimuthal ambiguity conditions. Even under low azimuthal ambiguity conditions (Figure 13a), the estimation error of the proposed estimation method is significantly less than that of the simple estimate I − N_0. Figure 13a-c also indicates that the rms of the estimation error of the proposed method approaches the Cramér-Rao bound as the SNR increases. This result validates that the proposed estimation method can significantly increase the estimation precision at low SNR, i.e., in low-scattering areas of SAR images.
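A small Monte-Carlo experiment in the same spirit can be put together as follows; it compares the simple noise-subtracted estimate with the ML estimate under the exponential spectrum model, with no ambiguity term and purely hypothetical parameters, so it mirrors only the qualitative behavior reported above.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
Fr, m = 1680.0, 128
f = np.linspace(-Fr / 2, Fr / 2, m)
Pa = np.exp(-0.5 * (f / (0.3 * Fr)) ** 2)   # hypothetical antenna pattern
N0, sigma_true = 1.0, 0.1                   # SNR = -10 dB

def ml_estimate(p_obs):
    """Maximize the exponential-model likelihood over sigma >= 0."""
    def nll(s):
        pbar = s * Pa + N0
        return np.sum(np.log(pbar) + p_obs / pbar)
    return minimize_scalar(nll, bounds=(0.0, 10.0), method="bounded").x

simple, ml = [], []
for _ in range(500):
    p_obs = rng.exponential(sigma_true * Pa + N0)   # one periodogram draw
    # pattern-normalized analogue of the simple estimate I - N0
    simple.append((np.mean(p_obs) - N0) / np.mean(Pa))
    ml.append(ml_estimate(p_obs))

for name, est in [("simple", simple), ("ML", ml)]:
    err = np.asarray(est) - sigma_true
    print(name, "rms error:", np.sqrt(np.mean(err ** 2)))
```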
Considering the low-SNR condition, i.e., supposing σ(x_0, y_0) << N_0, and neglecting the contribution from the azimuthal ambiguity effect, the rms of the estimation error of the proposed estimation method can be obtained in closed form. The Doppler power spectrum can often be obtained by incoherently averaging the spectra of several uncorrelated signals; the estimation precision derived so far still applies in that case, provided the number m is replaced by the overall number of pixels contributing to the estimation.
For example, assume that the NESZ of a spaceborne SAR is −25 dB, the single-look pixel size is 8 m (look direction) × 5 m (flight direction), and the number of pixels contributing to one estimation is 240 (in each estimation, the Doppler spectrum is calculated from 20 pixels in the flight direction and incoherently averaged by 12 times in the look direction); an NRCS estimation precision of about −38 dB could be acquired in the low-backscattering area at a resolution of 96 m × 100 m.
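The quoted figure can be checked with back-of-envelope arithmetic, assuming the omitted low-SNR rms expression reduces to roughly N_0/√m with m the overall number of contributing pixels; the arithmetic below is mine and agrees with the −38 dB figure to within about a decibel.

```python
import math

nesz_db, n_pix = -25.0, 240
precision_db = nesz_db - 10 * math.log10(math.sqrt(n_pix))
print(round(precision_db, 1))   # ~ -36.9 dB, consistent with the quoted ~ -38 dB

# resolution bookkeeping: 12 look pixels x 8 m and 20 flight pixels x 5 m
print(12 * 8, 20 * 5)           # 96 100  ->  96 m x 100 m
```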
Conclusions
The image intensities of low-backscattering areas in SAR images are often affected by the system noise and the azimuthal ambiguity effect. In this paper, a method is proposed for estimating the NRCS of low-backscattering areas; the method can eliminate much of the effect of system noise and azimuthal ambiguity. This method is based on the single-look complex image, and the azimuthal matching filter in the imaging algorithm must be an unweighted filter. The parameters needed for this method can all be estimated from the single-look complex image itself, which makes the method easy to apply. An analysis of the estimation precision demonstrates that, for a typical spaceborne SAR with an NESZ of −25 dB and a single-look pixel size of 8 m × 5 m, the NRCS estimation precision in low-backscattering areas can reach −38 dB at a resolution of 96 m × 100 m.
Three examples are given for validation in Section 3. The first example is a SAR image of an oceanic internal wave. In the conventional SAR image, the troughs of the internal wave signal intensity are buried by the noise floor, making it hard to judge the exact trough positions; in contrast, the NRCS estimated by the proposed method recovers the texture features of the low-scattering area much better, and the recovered troughs of the internal wave are located near the midpoints of the adjacent peaks. This constitutes a qualitative analysis of the estimation method at low NRCS. The second example, in Section 3.2, concerns atmospheric gravity waves; the signal intensities estimated by the proposed method are very close to the theoretical values of the signal intensity in the low-scattering area of the original image, constituting a quantitative analysis of the estimation method at low NRCS. The third example is a SAR image of a land-water junction, in which the water area is seriously affected by the azimuthal ambiguity signals from high-backscattering land targets; the azimuthal ambiguity signals are filtered out quite clearly in the NRCS image estimated by the proposed method. The Doppler spectra of two points, one contaminated and one uncontaminated by the azimuthal ambiguity signal, were analyzed, and the analysis shows that the Doppler spectra modeled by the proposed method match the actual Doppler spectra calculated from the single-look complex image quite well. These three examples, both indirectly and directly, validate the feasibility of the proposed method.
The proposed method can be applied to SAR image processing in low-scattering areas of the ocean, such as internal waves, oil spills, low wind speed zones, and upwelling. Conversely, the proposed method can relax the NESZ requirement in the design of SAR satellite systems, which can reduce the cost of satellites and free resources to improve the bandwidth, resolution, and other specifications of the SAR system.
Figure 1. Schematic of the Doppler spectrum of synthetic aperture radar (SAR) raw signal and its various components.
Figure 3. ERS-2 (European remote sensing satellite (ERS) was the European Space Agency's first Earth-observing satellite) ocean SAR image of the South China Sea collected on 30 April 2005, at 02:28 UTC. Frame 1 is a subimage for the comparison between the conventional SAR image and the corresponding estimated NRCS image.
Figure 4. Unshifted and shifted Doppler spectra.
Figure 6. Image intensities along the white lines in Figure 5a (red dotted line) and Figure 5b (blue solid line).
Figure 7. ERS-2 ocean SAR image of the East China Sea collected on 11 March 2006, at 02:24 UTC. The four white data lines a, b, c, and d are the profiles for comparison between the proposed method, the SAR raw image intensity minus N_0, and the optimal parameter estimation method of internal waves [23].
Figure 8. The validation of the optimal parameter estimation of internal waves.
Figure 9. Image intensities along the four white lines in Figure 7: (a) profile a; (b) profile b; (c) profile c; (d) profile d. The green dashed line is the SAR raw image intensity minus N_0; the black solid line is the proposed method in this paper; the red dot-dash line is the optimal parameter estimation method of internal waves.
Figure 10. RADARSAT-1 (RADARSAT is a Canadian remote sensing Earth observation satellite program overseen by the Canadian Space Agency) SAR image of Vancouver collected on 16 June 2002, at 02:24 UTC.
Figure 12. (a) Doppler spectrum of point A in Figure 11a; (b) Doppler spectrum of point B in Figure 11b. Blue solid and red dashed lines are the measured and modeled Doppler spectra, respectively.
Figure 13. The comparison of three kinds of estimation precision: rms of the estimation error of the modified estimation method (black solid line), I − N_0 (green dashed line), and the Cramér-Rao bound of maximum likelihood (ML) estimation (red dotted line); (a) simulation 1; (b) simulation 2; (c) simulation 3.
The matrix elements of this estimation take the form of sums over the m spectral samples f_i, with the spectrum at azimuth cell n modeled as sigma_{n-X} P_L(f_i) + sigma_n P_C(f_i) + sigma_{n+X} P_R(f_i) + N_0; for example,

J(n, n - X) = sum_{i=1}^{m} P_C(f_i) P_L(f_i) [sigma_{n-X} P_L(f_i) + sigma_n P_C(f_i) + sigma_{n+X} P_R(f_i) + N_0],

J(n, n + 2X) = sum_{i=1}^{m} P_L(f_i) P_R(f_i) [sigma_n P_L(f_i) + sigma_{n+X} P_C(f_i) + sigma_{n+2X} P_R(f_i) + N_0].
"Environmental Science",
"Mathematics"
] |
Smart Cities for People with IDD - Foundations for Digitally Inclusive Healthcare Ecosystems
Smart cities require smart healthcare. In a smart city, citywide efforts share the fundamental objectives of livability, sustainability, and productivity. Some well-intentioned smart city programs unintentionally worsen inequality when they lack transparency, fail to involve the community, or ignore the varied requirements and preferences of residents. To address ongoing health disparities among persons with intellectual disabilities, patient-centred preventive healthcare that considers both their physical and mental health needs must be prioritized. Engagement and inclusion must be at the forefront of smart city initiatives that shift from being technology-centric to citizen-centric. We bring attention to pillars of interaction in inclusive smart cities in the context of care for people with intellectual and developmental disabilities (IDD). We explore the fundamentals of a digitally inclusive healthcare service ecosystem for people with IDD through the lens of the Actor-for-Actor framework to learn about the foundational facilities for IDD patients to engage and establish care pathways.
Introduction
The notion of inclusive healthcare ecosystems is strongly represented within the Sustainable Development Goals identified by the United Nations 2. These goals strengthen the resolution of the Convention on the Rights of Persons with Disabilities 3, adopted in 2006. Thus, the subject of disability is an essential component of sustainable development and integral to at least five of the 17 SDGs 4: healthcare (Goal 3); education (Goal 4); growth and employment (Goal 8); social, economic and political inclusion of all, including persons with disabilities (Goal 10); inclusive, safe and sustainable cities (Goal 11); as well as the overarching goal of data collection and monitoring of the SDGs (Goal 17). Healthcare is an essential component of the 2030 agenda of the United Nations' Sustainable Development Goals (SDGs). Concerning "Good Health and Well-being", SDG 3 has the objective of ensuring healthy lives and promoting well-being for all at all ages. Guiding principles for SDG 3 define the scope of equitable access to care for persons with disabilities and the removal of discriminatory barriers that prevent full access to health-care services 5.
In order to support the SDG 2030 agenda, individuals, companies and governments must invest in digital accessibility, proposing initiatives that encourage inclusion and social wellbeing. Digital accessibility is a cornerstone of removing barriers that prevent interaction with, or access to, digital tools and technologies by people with disabilities. Leadership in digital accessibility can inspire much-needed transformational change for people with intellectual and developmental disabilities. Digital inclusion and fairness should be considered while designing cities to promote a barrier-free, digital urban logic. That is, smart cities need to get smarter to include underserved populations, in our case people with intellectual and developmental disabilities (IDD), especially in providing access to equitable care.
Inclusion of people with intellectual disability in cultural and civic activities is an important point for discussion, particularly in the context of supporting the social sustainability of our local communities and cities. Inclusion is the idea that everyone should be able to use the same facilities, take part in the same activities, and enjoy the same experiences, including people who have a disability or other disadvantage 6. Yet people with intellectual disabilities may find it difficult to participate in community activities, despite their best efforts. The literature provides opportunities to investigate how the particular preferences of persons with intellectual disabilities might be correlated with the structure and degree of their involvement, in order to increase social inclusion in their local communities and to develop inclusive cities 7. Critical studies and discussions of inclusion and accessibility have been offered, not least on topics such as universal service [1], the digital divide [2], community networking [3], information technology development [4], and technology accessibility design [5]. Although there has been much work over the past two decades in understanding disability and in conceptualizing and critiquing inclusion and how it is produced through policies, practices, and technologies, we still lack answers to, and workable strategies for, the foundations of digitally inclusive healthcare ecosystems.
What are Intellectual and Developmental Disabilities (IDDs)?
Intellectual and developmental disabilities (IDD) are characterized by differences in both intellectual functioning or intelligence, which includes the ability to learn, reason, solve problems, and other skills, and adaptive behavior, which includes everyday social and life skills; they can begin at any time in early childhood.
IDDs are variations that are typically present at birth and that have a particular impact on how a person develops physically, intellectually, and/or emotionally. Many of these conditions affect multiple organ systems or body parts, and some may be degenerative. "Developmental disabilities" refers to a broader range of frequently lifelong difficulties that might be intellectual, physical, or both. People with IDD frequently experience neurological disorders, which can impair cognition and learning. These disorders may also bring on other problems, including behavioral disorders, speech or language challenges, seizures, and mobility problems. IDDs that involve the nervous system include cerebral palsy, Down syndrome, and autism spectrum disorders (ASDs). Others with IDD report that these conditions have an impact on their metabolism, senses (sight, hearing, touch, taste, and smell), or how the brain processes or interprets sensory data.

5 https://sdgs.un.org/goals/goal3
Motivation
The future of children with IDD enjoying a full life and thriving depends in large part on health equity. It is evident that, to achieve this, a number of contextual elements that affect health outcomes must be taken into consideration. The demand on currently available services for this population of children with severe medical requirements is increasing, which puts more strain on families to provide intensive, continuing care at home with no assistance or respite. There is some concern about the rise in children presenting with IDD, as well as the rise in children with more complicated medical demands or illnesses with a shorter lifespan. The already overburdened services for children with disabilities are coming under extra strain as a result of these population patterns.
"It is more important to know what sort of person has a disease than to know what sort of disease a person has". A quote that was attributed to Hippocrates, the father of medicine, more than 2500 years ago, is never more important than in our present day.
To address ongoing health disparities among persons with intellectual disabilities, patient-centered preventive healthcare that considers both their physical and mental health needs must be prioritized. Finding specialized professional services can be difficult, and the rising cost of living has put financial pressure on many people [6]. According to the literature, inclusive smart cities must be "accessible," "adaptable," and "affordable" [7].
We ask ourselves the question: "What does that mean for people with IDD - what services must health ecosystems provide for inclusive care in smart cities?" This paper explores potential service and social innovations for IDD-inclusive smart cities.
For this work, we start by introducing the concepts of smart cities, equitable care, and people with IDD to lay the conceptual foundation of inclusive smart cities and the care journey for people with IDD. Then, grounded in the theory of value co-creation and service-dominant logic [8], this manuscript proposes a conceptual framework for expressing the complex dynamics that are shaping actual conditions in healthcare for people with IDD.
To understand the engagement model of an inclusive healthcare service ecosystem, we look for answers through the Actor-for-Actor (A4A) lens to unpack the potential for IDD patients to engage, through meaningful interaction, and establish adaptive care pathways [9].
Smart Cities, equitable care and people with IDDs
In a smart city, citywide efforts have the fundamental objectives of livability, sustainability, and productivity. Livability is in the interest of all key actors: residents and end users, urban professionals, technology and other firms, and central and local governments all benefit from enhanced livability. Although livability is broad in concept, safety, the quality of built environments, the convenience of public facilities, and accessibility to services including healthcare are keys to livability [10]. Further, the sustainability of smart cities is often assisted by new technological inventions that provide the basis for renewal, continuity, and equity in the services that drive social innovation ideas concerned with improving and empowering underprivileged groups [11]. For a sustainable smart city, adaptive capacity is becoming more and more important, requiring timely technological and social innovation [12]. This leads to improved productivity, where all actors, including the underprivileged, can produce higher values that can be reflected in citywide economic growth. When cities attract skilled knowledge workers, they can bring new ideas, innovation, and prosperity [13].
Cities are using digital technology to regulate urban life and facilitate urban experiences for the benefit of inhabitants and businesses. New technologies and digital services, together with the enthusiastic participation of interested individuals, have been viewed as essential elements in smart cities because they connect service providers, consumers, infrastructures, and communities in a single ecosystem to facilitate value co-creation. Some well-intentioned smart city programs unintentionally worsen inequality when they lack transparency, fail to involve the community, or ignore the varied requirements and preferences of residents. Engagement and inclusion are at the forefront of smart city initiatives that shift from being technology-centric to citizen-centric; enablers include data and security, digital and technology, ecosystem, money and funding, internal structure, and policy and regulation 8. Modern smart cities combine data, digital technology, and human-centered design to encourage decision-making among all local stakeholders, including people, businesses, and other interested parties, as well as the government. Taking this democratization of urban development a step further, some communities are encouraging locals to co-create solutions to neighborhood issues by providing them with the necessary tools, training, and information. By standardizing digital access, promoting digital inclusion and fairness, building shared, multifaceted stakeholder commitment, improving regulatory urban performance, and putting persons with disabilities (e.g., people with IDD) at the center of smart projects, cities may become more inspirational, livable, and compassionate [14].
Inclusive smart cities and care journey for people with IDD
Smart cities require smart healthcare. Medical care is critical for smart city growth, so assuring high-quality (equitable) medical care for all constituents is the most difficult goal for city governments to achieve [15].
Healthy lives require access to primary care, healthy food and environment, transportation, adequate income, social support, health insurance coverage, and health literacy. People who have IDDs face obstacles to health care access, such as problems with communication and a lack of participation, making it challenging to deliver this type of treatment to a high standard and often leading to a shortened lifespan [16]. Therefore, it can be challenging for young people with intellectual and developmental disabilities to move between phases of care, navigate the healthcare system, and find support [17]. Key findings of studies about people with intellectual disabilities have indicated an overwhelming difficulty in articulating emotions [18]. They require additional assistance to navigate the system and access proper care. Practitioners are developing tip sheets for engaging people with IDD, recommending keeping communication simple, using visual aids, and respecting the person's choice of language or terminology 9.
The literature has recommended technology accessibility design for users with disabilities [5], where persons with disability are included in the design process. However, technology continues to be a major obstacle for many individuals with disabilities in accessing the digital city [14]. A more person-centered, humanizing approach to healthcare delivery has lately emerged, shifting the paradigm of care focused on people with disabilities [19]. Person-centered or patient-centered models of care enable patients to make educated decisions about how to manage their health requirements, improve the customization of care, and empower patients to share responsibility for their health [20]. This reflects a realization of the significance of humanizing values, such as empathy and respect for people's dignity, agency, individuality, sense of place, personal journey, and holistic well-being, as the foundation for care practices [21].
Methodological Approach
Service Science is an interdisciplinary effort to understand how service systems interact and co-create value [22]. Value co-creation is therefore conceptualized through the exchange of services among configurations of actors, focusing on service ecosystems [23], including institutions, humans, societies, and technological actors. This service-dominant (S-D) logic rests on the premise that humans apply their competences to benefit others and reciprocally benefit from others' applied competences through service-for-service exchange [8]. The development of this ecosystems perspective allows a more holistic, dynamic, and systemic view of value creation, emphasizing the contribution of all actors in the S-D logic. To that effect, service-dominant ecosystems, such as healthcare ecosystems, look at value as something created by multiple actors in an integrative approach, with coordination of services for value exchange, on the premise that service is the essential component of the exchange [24]. Ecosystems include human and non-human actors (such as technology) that integrate resources and exchange service, establishing a foundation for dynamic adaptation [9].
Healthcare may be envisioned as a complex ecosystem based on the interactions of many entities [25]. It must adjust its agenda to the community's unpredictability and include the anticipated value creation from all players, including those with IDD. The difficulty is in adapting particular characteristics to cope with the alignment to given services and service procedures for diverse demands [26]. The same institutional logic that underpins healthcare ecosystems promotes resource integration and re-bundling processes as a means of assuring sustainability and the well-being of all stakeholders [27].
Therefore, our topic under investigation is the essential technology-based services that health ecosystems must provide for inclusive care in smart cities. We set the stage for our work on the concepts of equitable care and people with IDD. We then use the Actor-for-Actor (A4A) approach to uncover the potential for IDD patients to engage, through meaningful interaction, and establish care pathways [9]. We believe that the A4A model establishes a contract within the healthcare ecosystem for value co-creation based on actors integrating their resources and acting with intention to obtain value by providing benefits to each other, thus including each other in the value delivery.
The Actor-for-Actor (A4A) relationships
The A4A relationships involve value co-creation based on actors integrating their resources and acting intentionally to obtain value by benefiting other parties and by being a part of the emerging viable system; each actor acts for other actors directly involved in the relationship, generating positive effects for the entire system in which it is contextualized. To comprehend this value chain, the A4A model represents interactions among ecosystem participants in a cycle of A4A stages (Fig. 1), while concentrating on the structural prerequisites for actors' connections and involvement, as well as the system determinants and their commitment to the value of delivering equitable care for underserved patients with IDD. Fig. 1. A4A relationships [9].
The A4A cycle starts with a situation where each actor becomes interested and 'engaged' in a given role within the care ecosystem (A4A - Actors' Engagement). After the first 'connection' of significant players has initiated the engagement, continual interactions can be observed that are driven by the alignment of objectives and shared purpose among all parties engaged.
This knowledge contributes to the process of identifying the numerous spheres of interaction that may have an impact on the outcome of the interaction (A4A -Subjective Awareness).
The development of these spheres should result from an understanding of the differences that affect each actor in terms of inclusion, equity, and ethical limits, so that there is empathy, trust, and support for all parties engaged to maintain focus on the shared objectives and purpose (A4A - Shared Intentionality).
When all parties involved collaborate, out of a sense of mutual satisfaction (implicit or explicit), they can work toward shared objectives and close potential gaps (A4A -Alignment of Finality).
During the subsequent stage of service transformation, every aspect of the healthcare system, including practitioners, patients, caregivers, information systems, governments, law and payment systems, delivery design, assessment, patient engagement/democratization, training, and research, cooperates to enhance the value and equity of healthcare (A4A - Resource Integration).
Regardless of difficulties, capacities, social challenges, or other factors that may restrict a patient's ability to access care, resources must be integrated to offer continuous and accessible care to all groups of patients once a care choice has been made.
Finally, the integration of resources is essential for a system of actors to produce an outcome that cares for the ever-evolving patient's case (A4A - Emergence in Action). Once the outcome is recognized, the cycle of engagement starts again with a new context to support the transformation strategy needed for sustainability and continuity.
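To make the cyclical structure concrete, the sketch below models the six A4A stages named above as an ordered cycle that restarts after Emergence in Action; the function and variable names are our own illustrative choices, not part of the A4A framework itself:

```python
from itertools import cycle

# The six A4A stages named in the text, in their cyclical order.
A4A_STAGES = [
    "Actors' Engagement",
    "Subjective Awareness",
    "Shared Intentionality",
    "Alignment of Finality",
    "Resource Integration",
    "Emergence in Action",
]

def a4a_cycle(n_stages: int):
    """Yield successive A4A stages; after 'Emergence in Action' the
    cycle restarts with a new engagement context, as described above."""
    stages = cycle(A4A_STAGES)
    for _ in range(n_stages):
        yield next(stages)

for stage in a4a_cycle(8):  # two stages past one full cycle
    print(stage)
```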
Actors' Engagement
The roles of the actors involved in healthcare processes are continuously being redefined from the standpoint of a new dimension of value-based healthcare: the inclusion of patients with IDD. Actors of the inclusive smart city must emerge with a barrier-free, digital urban logic intended to close the digital gap and create a digitally inclusive ecosystem. Consequently, actors in an inclusive, equitable healthcare ecosystem become interested in the connection between all people's access to and understanding of health services and their own health. Actors must participate in building inclusive cities by collaborating to create a shared experience of inclusion, with the goal of delving into the processes and experiences of people with intellectual disabilities, as well as community group leaders, and mapping pathways to a continuum of participation [28]. The emergence of a shared intentionality and evidence-based decisions for action aims to sustain inclusive societies. Consequently, value co-creation within the ecosystem contributes to mainstreaming and building inclusive public services that are welcoming and accessible to people with intellectual disability throughout the city [5]. Capacity-building initiatives that build inclusive cities for and with people with intellectual disability are therefore possible 10.
Subjective Awareness
Including people with IDD in a smart health service system requires a better understanding of how to promote health among the IDD population, which is often linked to the ability to monitor the life and risk factors of youth with IDD [29]. Healthcare providers must be aware of the needs of people with IDD; some literature even advocates establishing systems of care that integrate acute healthcare with long-term services and support, developing IDD medicine as a specialty [30]. After the first 'connection' of important actors has led to the start of the engagement, it is imperative to observe ongoing interactions, which are triggered by the alignment of aims and common purpose among all parties involved. Substandard access to care (resulting in insufficient treatment and a higher risk of complications), lower quality of care, and poorer self-care practices (including diet and exercise), for example, are all part of a postulated causal link between lower socioeconomic level and poor health care outcomes. When the context in which this engagement takes place is populated by multiple actors whose behavior can directly or indirectly (and reciprocally) influence the engagement, such awareness becomes part of recognizing the various spheres that can influence the success of the engagement. Therefore, clinicians, payers, and other healthcare stakeholders must be able to comprehend how non-clinical variables influence the health trajectories of the patients they care for if population health and cost management are to be successful and equitable. This awareness becomes part of recognizing the various spheres of equity that can influence the success of the engagement.
Shared Intentionality
A focus on health-care equity ensures that a health-care system is designed to decrease inequities in health-care processes and outcomes, especially for people from low-income families. All aspects of the health-care system, from law and payment systems to delivery design, assessment, patient engagement/democratization, training, and research, should be aligned with improving health-care value and equity during the next stage of transformation [31]. For the care of people with IDD, the necessity for caregiver respite is well documented. For example, through "complementary caregiving" activities that encourage involvement and educational possibilities for a care recipient (CR) with IDD, Socially Assistive Robotics (SAR) holds promise in alleviating the need for caregiver respite [32]. In other cases, the use of robots could be an agent of personal privacy and dignity for those with intellectual disabilities who need ongoing assistance with everyday activities [33]. Recognizing the distinctions that influence each agent in terms of equity and ethical boundaries should guide this development, so that there is empathy, trust, and support for all parties involved to keep focused on the common goals and purpose. A human-centered approach to healthcare service design builds on transformative service research and emphasizes the notion of service inclusion to highlight the necessity of improving the welfare of people with restricting biophysical or psychological features [34].
Finality Alignment
Certain interventions have the potential to improve health care outcomes by addressing patient resource, communication, and navigation barriers; these interventions are often facilitated by technology and are dependent on (1) disease or condition type; (2) phase of the care process; and (3) technology accessibility in many cases [35]. Studies reveal that medical professionals do not believe they are sufficiently trained to assist people with an intellectual impairment [36]. The training and equipment required to meet the needs of persons with IDDs in an equitable and empowering manner may not be available to health care professionals who provide some kind of health or social care assistance to people with IDDs [37]. Complexities include short consultation times, multi-morbidities (which can also make the care needs of people with IDDs more complex), a lack of health education, inadequate training for healthcare professionals [38], their negative attitudes [39], and the requirement for complex patient-caregiver-physician interaction [40]. Health care providers must have the skills and ability to initiate change and define the future of their discipline and practices for developing high-quality care inside the digital ecosystem in order to help shape these new practices [41]. When all parties involved cooperate because they believe that working together will result in mutual satisfaction (real or perceived), they can work toward common goals and bridge gaps.
Resource Integration
Once a care decision has been made, resources must be integrated to provide accessible and ongoing treatment to all patient groups, regardless of location, social issues, or other limitations that may limit patient movement to the point of care. The measures suggested by studies to decrease health inequalities for persons with IDD include developing better methods and systems to monitor and treat chronic illnesses common in the general population that are also experienced by people with IDD [42]. In the bulk of these systems, communications technology is the underlying facilitator for the patient-clinician connection, which, due to its relative ease of use, low cost, and ubiquitous nature, has the ability to create health care solutions that transcend socioeconomic class. For example, using telemedicine to treat numerous contagious diseases promotes continuity of care while preventing physical contact, allowing all actors to participate, reducing access gaps and improving patient outcomes [43,44]. However, the technical gap in the patient population must also be addressed [45]. Patients who do not have smartphones, for example, can still communicate with care professionals via SMS. Older people, on the other hand, have restricted access to internet-based services due to low socioeconomic status and limited computer skills; this disparity around telemedicine may exacerbate mental health issues and widen global health inequities [46]. Trust in technology, design, cognitive impairment, and physical limitations such as poor vision, hearing, or sensory impairment are among the issues that telemedicine for the elderly faces [47].
Emergence in Action to Adapt and Refine
While service inclusion gives service-seeking actors equal access to the service, safeguards must be developed against "deliberately or unintentionally failing to include or to adequately serve customers (patients and caregivers) in a fair manner" [48]. Such failures may induce intentional misbehavior on the part of people with IDD, due to a perception of threat and need for attention [49], thus leading to exclusion. People with IDD want their support teams to be trained in activities that enhance navigation and communication between the IDD community and the care ecosystem, while integrating their families and caregivers to reduce the risk of isolation and emphasizing public awareness for inclusion, better understanding, and compassion. To improve their chances of integration, calls to support inclusive educational experiences for students with intellectual and developmental disabilities (IDD) have been longstanding [50]. Technology could improve the vocational inclusion of people with disabilities, thus increasing their wellbeing and competence development [51]. Research must enrich data sets and data analysis focused on people with IDD to determine patient-centered approaches and align their outcomes [52].
Summary and Call to Action
A health care ecosystem's largest value proposition is to support a healthy lifestyle, including illness prevention and general health, in all facets of life, and for all its beneficiary stakeholders. Our investigation shows that a health ecosystem must enable patients and other public and private actors (including private businesses in the fields of diagnostics, pharmaceuticals, medical devices, healthcare delivery, support services, and technology) to interact actively in order to be inclusive of IDD patients (Fig. 2). For people with IDD to engage with their health ecosystem and enable their care pathways, they must easily understand what health services are available to them and have ready access to these services. This demands increased communication and participation from all actors in the ecosystem to provide assistance in navigating the system and accessing proper care. Special attention must be paid to keeping communication simple, potentially using visual aids, etc.
Actors in the ecosystem must gain a better understanding of how to promote health among the IDD population. Systems must be in place to monitor the life and risk factors of youth with IDD especially, so that they get a better chance at quality of life. Then, to address their ongoing needs, healthcare providers in the ecosystem must integrate acute healthcare with long-term services and support for people with IDD, to better learn how non-clinical variables influence their health trajectories. This will establish a common purpose and promote better inclusion. With that, all aspects of the health-care ecosystem should be aligned during all subsequent stages of care.
Results and Implications
This paper is a conceptual exercise to propose essential activities for engaging people with IDD in a digitally inclusive healthcare ecosystem. Looking through the A4A lens has identified that a positive outcome of an inclusive healthcare service must be rooted in the design of the ecosystem in a way that assures that patients are central to communication. While integrating their families and caregivers and surrounding them with teams that are trained in activities that enhance navigation and communication, the ecosystem must provide foundations for tailored and human-centered digital accessibility while emphasizing better understanding and compassion. The fundamentals of a digitally inclusive health service ecosystem for people with IDD must be tailored to engage them and understand their needs, so that all actors in the ecosystem become aware, converge, and align their resources as they refine and adapt care pathways, increase care satisfaction, and improve outcomes through an inclusive healthcare ecosystem in an inclusive smart city.
In context, digitally inclusive healthcare ecosystems must have ways to understand and address the capabilities of the caregiver, availing technology aids for their respite. Robotics and data-driven facilities ought to be included to provide tailored and smart healthcare that can adapt to the stages of need. A human-centered approach to healthcare service design should be considered; one that maintains empathy, trust, and support for all parties involved to stay focused on the common goals and purpose. Medical professionals must be sufficiently trained to assist people with IDD, facilitated by technology, with sufficient data on disease or condition type and phase of the process, and with a continuous focus on communication and removing digital accessibility barriers for patients. Thus, when integrating the available resources, patients, caregivers, and other ecosystem actors are all included in the process, engaging with the patients with IDD. Since patients' fragile trust in technology and other physical limitations that may be present (poor vision, hearing, or sensory impairment) can be hindering, tailored care must be provisioned with a thorough understanding of patient needs, placing technology at work for monitoring the progress of treatments.
Limitations and Future Research Paths
High-quality, equitable and inclusive care is about providing care that does not vary in quality because of personal characteristics such as gender, ethnicity, geographic location, socioeconomic status [53], and disabilities [54]. With our work, we identify stages of capability integration in a systems approach to co-create value for people with IDD in an inclusive way. In public health research, service design for inclusion is about patient engagement and 'empowerment'.
Our discovery of the fundamentals of a digitally inclusive health service ecosystem for people with IDD has just begun. Our review contributes a new conception of an ecosystem for healthcare for IDDs in terms of developing an inclusive service system, in addition to adding to the literature on transformational services, social innovation, and service inclusion. We therefore encourage using this work as a springboard for further research paths to evaluate the potential of IDD patient enablement in our ever-evolving quest for an inclusive, sustainable and smart healthcare ecosystem.
"Medicine",
"Computer Science"
] |
Estimating Adult Mortality in Cameroon from Census Data on Household Deaths: 1976-1987
Many African countries lack conventional data sources for the systematic assessment of adult mortality. Studies of mortality in Cameroon have mainly been concerned with infant and child survival, while the levels and structure of adult mortality have rarely been investigated. This paper employs the Generalized Growth Balance method to estimate adult mortality in Cameroon using data from the 1976 and 1987 censuses. More specifically, we use data on household deaths during the 12 months preceding the 1976 and 1987 censuses to assess the adult mortality situation in Cameroon prior to the onset of the HIV/AIDS pandemic. Results suggest that overall adult mortality in Cameroon prior to the HIV/AIDS era was high even by African standards. Ignoring potential methodological and data differences, a comparison of age-specific death rates from the two censuses to those from recent DHS results portrays a recent increase in mortality during the peak productive and reproductive years. However, a complete and reliably operational vital registration system remains the ultimate solution to estimating and fully understanding the trends in adult mortality. In the meantime, consistently collecting census data on household deaths can enhance knowledge and inform policy intervention.
Nearly all studies of mortality in Cameroon, as well as the efforts deployed, have focused on improving infant and child mortality. Conventional data for the systematic assessment of adult mortality are still lacking. In this article, we use the generalized growth balance method with data from the 1976 and 1987 censuses to assess the adult mortality situation in Cameroon. Except for infant and child mortality, the results obtained show that adult mortality remained unchanged over the decade preceding the advent of HIV/AIDS. A comparison of age-specific mortality rates for the period considered with those from the DHS II and III results portrays a recent increase in mortality among people of productive and reproductive ages. The data from the third census, once made public, will eventually allow us to assess this trend.
Introduction
Reducing mortality in Africa remains a major goal of public health and development efforts. Following substantial investments in medical technology and public health interventions, among other factors, dramatic improvements in mortality were recorded during the latter half of the 20th century, resulting in widespread extension of human lifespan beyond past predicted levels in most African countries. Improvements were particularly impressive in child mortality rates as recorded since the 1960s (Hill et al. 1999; Hill and Pebley 1989; Wilmoth 2000). However, mortality at the turn of the 21st century remains disturbingly high in most African countries. Moreover, in recent years, stagnation or reversals of mortality declines have been observed in many African countries. Current assessments of under-five mortality (U5M) suggest that the results of the previous decades of focused efforts to reduce mortality are gradually being eroded (Ahmad et al. 2000; Walker et al. 2002; Hill 1993; Zuberi et al. 2003).
Considerable efforts and resources in most developing countries, and Cameroon in particular, have been justifiably devoted to the study of, and struggle to curb, mortality at the lower extreme ages where mortality risks are known to be particularly high. This is also based on the conventional wisdom that investing in the health and well-being of children is an investment in the future development of the population. In recent years, adult mortality has been recognized as a serious threat to child survival and welfare. The growing literature on the dynamics of poverty requires a better understanding of prime-age adult mortality because of its potential effects on household behavior and welfare. For instance, recent evidence suggests that children who have lost their parents are at risk of worse schooling and health outcomes (Case et al. 2004; Case and Ardington 2006; Ainsworth et al. 2005; Noumbissi et al. 2005; McDaniel and Zulu 1996). Moreover, in the current context of mounting HIV/AIDS prevalence, a better understanding of the adult mortality situation is crucial for health and development planning, since human capital is highly specialized, scarce, and not easily replaceable. (Explaining the dramatic improvements in mortality during the 20th century has been a subject of intensive scholarly debate and research over the past three to four decades, particularly as regards the role of economic development and public/personal health measures in the process. Yet there is no consensus as to the relative importance of the different factors. Preston's (1975) groundbreaking work remains the cornerstone of the case for medical innovations and public health measures, while proponents of nutritional influence (see Fogel 2004) continue to emphasize the historical importance of rising incomes on living standards (nutrition). African countries, however, seem to have benefited greatly from medical technology and public health interventions thanks to the diffusion of innovations (see Easterlin 2004 and Soares 2007).)

In the absence of reliable vital registration, the study of mortality in such settings depends on indirect measurement techniques, which are limited in terms of the degree of accuracy. The cornerstone in the study of mortality under such circumstances has been the analytical tools pioneered by Brass and his scholars (Brass and Hill 1973; Brass 1975; Hill 1977; Hill and Trussell 1977) over three decades ago that have subsequently been refined and improved (Timaeus 1992; UN 1983) and are being continuously evaluated when and where minimum data are available (Blacker 2004; Timaeus 1999; Timaeus and Jasseh 2004; Gakidou and King 2006; Stanton et al. 2000). Of these analytical tools, the technique for estimating childhood mortality has been the most successful and universally accepted. For this reason, the two requisite questions on the number of children ever born and the number surviving to a woman by age have become standard for virtually all population censuses and surveys across Africa and in many other data-deficient settings. Thanks to this technique, a great deal has been uncovered about the causes and underlying determinants of child mortality, and ways to prevent childhood deaths in African countries, and developing countries in general.
On the other hand, attempts at developing techniques for the measurement of adult mortality have been more problematic, partly because of the relatively rare nature of adult deaths (Preston et al. 2001) relative to childhood deaths. In effect, given the relative infrequency of adult events, data on a large sample or on events over a long reference period are necessary in order to allow a precise measure to be obtained. Also, the likelihood of undercounting and multiple reporting is quite common due to the difficulty of identifying an appropriate informant for reliable information on adult deaths, unlike childhood deaths, whose details are more readily provided by their mothers (Timaeus 1991; Lopez et al. 2002). In order to circumvent such difficulties, alongside those associated with direct estimation of mortality in developing countries, numerous techniques have been proposed for the measurement of adult mortality. Unfortunately, none has so far acquired the widespread recognition typical of the technique for estimating childhood mortality. Preston (1996) provides a comprehensive review of measurement techniques for overall mortality based on an assessment of all mortality-related studies published in Population Studies from its inception in 1947 up to 1995. Similar reviews by Timaeus (1991) and Hill et al. (2005) have discussed some of the existing indirect or "unconventional" methods for estimation of adult mortality in developing countries where reliable and complete vital registration systems are lacking. Preston (1996), while acknowledging that new methods featured prominently among the key achievements of demography during the post-war period, concludes that improved methods for the assessment of adult mortality remain a major part of the unfinished business for demography. Timaeus (1991) suggests a need to adopt a more eclectic approach in trying to improve knowledge of adult mortality in developing countries; that is, the assessment of the most appropriate measures and methods should be done at the country level. Hill et al. (2005) conclude that adult mortality can be represented by partially registered deaths or deaths by age recorded in censuses, on condition that the sources are evaluated and, where need be, adjusted by the use of death registration methods.
Prominent among adult mortality techniques are the census-based death distribution techniques, such as the growth balance approach. While these techniques remain relatively untested and, hence, not extensively accepted, several rounds of African censuses have progressively collected the requisite information for their application (e.g., household deaths in the 12 months before the census) that has not been systematically analyzed. Recently, one of the proponents and pioneers of these indirect techniques has initiated a review of the death distribution techniques for performance (Hill and Choi 2004) and sensitivity of estimates to the use of different age ranges for adjustment (Hill and Thomas 2007). In the initial phase of this evaluation, they opted first for testing the methods under ideal circumstances where the requisite data are essentially complete. In this paper, we extend the evaluation of the technique to a particular setting where the requisite data are believed to be essentially incomplete. HIV/AIDS was first documented in Cameroon in the 1980s (MSP et OMS 1989). The current adult prevalence rate based on pregnant women at antenatal clinics is about 11%, while a recent household survey estimated national HIV prevalence in 2004 at 5.5% (UNAIDS 2006). Despite strong interest in, and common wisdom about, the impact of HIV/AIDS on adult mortality, the absence of conventional and reliable data severely constrains attempts to quantify deaths attributable to AIDS. Moreover, an understanding of the structural impact of this pandemic requires some information on mortality prior to its onset. It is the goal of the current paper to provide this baseline assessment of adult mortality in Cameroon.
Specifically, Hill's (1987) variant of the death distribution methods, applicable to non-stable populations, is applied to the 1976 and 1987 census data. The data are evaluated and, based on the adjustment factors obtained, reported household deaths and census counts are adjusted to produce corresponding life tables for Cameroon. Also, we examine the direct estimates to see how they compare with similar estimates from recent sources. To assess the performance of the technique, the potential implication of several assumptions (in relation to the coverage of household deaths) on possible estimates of adult mortality is examined. Three sets of adult mortality estimates are generated on the basis of three assumptions: one based on household deaths in 1986-87 under the assumption that the resultant age-specific mortality rates (ASMR) are observed all through the intercensal period, another based on a similar assumption for the 1976 ASMR, and lastly one based on the average ASMR for the two years over the intercensal period (1976-1987).
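As a worked sketch of how the three assumptions translate into intercensal deaths, the snippet below multiplies each set of ASMR by intercensal person-years; the numerical inputs are fabricated stand-ins, not the actual census tabulations:

```python
import numpy as np

# Hypothetical age-specific mortality rates (deaths per person-year)
# for three 5-year age groups, from the year preceding each census.
asmr_1976 = np.array([0.0120, 0.0060, 0.0080])
asmr_1987 = np.array([0.0100, 0.0050, 0.0070])

# Hypothetical person-years lived in each age group over the 11-year
# intercensal period (1976-1987).
pyl = np.array([9.5e6, 8.2e6, 6.9e6])

# Scenario 1: the 1987 rates prevail throughout the intercensal period.
deaths_s1 = asmr_1987 * pyl
# Scenario 2: the 1976 rates prevail throughout.
deaths_s2 = asmr_1976 * pyl
# Scenario 3: the average of the two sets of rates prevails.
deaths_s3 = 0.5 * (asmr_1976 + asmr_1987) * pyl

print(deaths_s1, deaths_s2, deaths_s3)
```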
Data and Methods
Among the notable efforts to improve the data environment in Africa is the African Census Analysis Project (ACAP), which has complemented various international organizations and governments through the creation of a unique data archive. This effort has helped prevent the disappearance (due to poor storage) of the 1970 and 1980 rounds of African censuses, and these censuses have become increasingly available to many researchers. For this study, we use the first two censuses of Cameroon available from the ACAP data archive to assess adult mortality in Cameroon. The first census was conducted from April 9-24, 1976, and the second followed 11 years later, from April 14-28, 1987. In terms of population estimates, there were 7.6 million and 10.5 million individuals in 1976 and 1987, respectively. Data are effectively available for 7,429,555 persons from the 1976 census data file, corresponding to the unadjusted census count. On the other hand, data are available for 8,883,643 persons from the 1987 census data file, which falls short of the unadjusted head count reported for this census (see BCR 1978 and Cameroun 1992 for details).
While no Brass-type questions on the survivorship of children were included, a direct question was consistently included on deaths in the household during the 12 months preceding the census, which constitute the mortality inputs for this analysis. A total of about 104,000 household deaths were recorded during the 12 months prior to the 1987 census. There is a slight indication of a male penalty, and about 40% of all deaths pertain to children under age five. Similarly, some 79,300 deaths were reported for the 12 months prior to the 1976 census, 45.9% of them being children under age 5. While the percent distribution of deaths by age and sex could be influenced by the size of each age-sex group relative to the total population, these preliminary numbers portray the very high and expected contribution of infants and children to overall deaths in 12 months. These numbers also imply a slight drop in the proportion of infants and children that succumbed to death during their first five years of life.
Methods
In the absence of reasonably operational vital registration systems, census- and survey-based estimation of adult mortality relies heavily on indirect methods that entail the use of information on the survivorship of close relations and/or the census age distribution combined with reported or registered deaths. Because reported household deaths in censuses are generally believed to be affected by reference period errors, the techniques developed for use under such conditions (Brass 1975; Preston and Hill 1980; Hill 1987; UN 2002a) require some data adjustments. The present analysis employs the generalized growth balance (GGB) technique and examines the potential implications of certain assumptions (particularly in relation to the coverage of household deaths) on possible estimates of adult mortality for Cameroon. This method is most appropriate because it equally serves as an evaluation tool for the data. To obtain the overall estimates of intercensal deaths required for the application of the method, the two census age distributions are combined with the household deaths reported 12 months prior to each census. The resultant age-specific mortality rates (ASMR) from each census are assumed to prevail all through the intercensal period and are applied to the person-years lived to obtain the deaths. By assuming that the 1987 ASMR prevail over the intercensal period, an estimate of adult mortality is obtained that should portray the conditions in 1987. Another set of estimates is obtained under a similar assumption for the 1976 ASMR, which should provide an idea of the situation in the mid-1970s, and lastly the average of the ASMR for 1976-1987 is used.

The GGB is a death distribution method derived from the basic demographic balancing equation. First pioneered by Brass (1975), the method describes that, for an open-ended age segment of a closed population (denoted by x+), the birth rate (entries into the age segment) is equal to the growth rate of the segment plus the death rate (departures from that segment). Initially defined only for stable populations, later developments relaxed the assumption of stability (Preston and Hill 1980) given two or more censuses, and a further refinement (Hill 1987) is designed to be less sensitive to age misreporting. This latter refinement has been referred to as the generalized growth balance (GGB) method. It focuses on changes in the sizes of age groups rather than on changes in cohort sizes, with allowance for differences in census coverage and incompleteness of death registration (Preston et al. 2001). By comparing the age distribution of deaths with the age distribution of the surviving population, it provides an age pattern of mortality for a given reference period.
The fundamental assumption here is that the population in question experiences negligible migration during the intercensal period.
Consequently, it is implicitly assumed in the analysis that the population experienced negligible net migration during the intercensal period. Migration is generally a more intricate issue in demographic estimation, since it is particularly unusual to have adequate information on it. Migration can affect a population in the same way mortality does and in some cases may potentially result in greater distortion, since it is historically age- and sex-selective in favor of males in the adult ages. While recognizing that migration may considerably affect the level of the estimates, in our view negligible net migration is a reasonable assumption for this period in the history of Cameroon, considering also that the analyses are restricted to the national level. In effect, it is only in the late 1990s, following the combined political crisis of the early 1990s and mounting economic depression, that Cameroon is believed to have witnessed a considerable wave of out-migration likely to have resulted in substantial net migration.
For an idea of the plausibility of this assumption for Cameroon, the UN migration stock database (UN 2002b) provides rough estimates indicating that annual net migration up to the early 1990s was about -0.1 per 1000 population for Cameroon. As such, it is reasonable in this case to assume negligible net migration. While this assumption is plausible for the current analysis, it is particularly important to include migration in the estimation procedure if these analyses were to be extended to areas where net migration is substantial. In this direction, the GGB method has been reformulated (Bhat 2002; Hill and Queiroz 2004) for application to populations that are substantially affected by migration. However, use of these refinements still requires some knowledge of the intercensal age distribution of migrants, which is hard to come by. By implication, the basic demographic balancing equation (see Hill 1987 and UN 2002a for details on the derivation) is applied to a segment of the population over any given age x such that:

b(x+) = r(x+) + d(x+),

where r(x+), b(x+) and d(x+) are, respectively, the growth rate, birth rate, and death rate of the population segment age x and above. The birth rate in this case refers to the ratio of the people attaining age x during the reference period to the total person-years lived by the population age x and above. According to the model proposed by Hill (1987), the basic accounting equation can be transformed into its component rates by the following formula:

N(x)/PYL(x+) - r(x+) = a + (1/c) * D(x+)/PYL(x+),

where PYL(x+) is the person-years lived by the population age x and above, r(x+) is the population growth rate at ages x and above, and D(x+) is the total deaths at ages x and over. N(x) denotes the number of persons reaching exact age x (or celebrating their xth birthday) during the interval and is estimated geometrically by interpolation of persons in the 5-year age groups x-5 to x, and x to x+5, of the two census distributions. As mentioned earlier, by comparing the two sides of the foregoing equation, the relative completeness of reported household deaths is estimated: the slope term 1/c reflects the completeness of death reporting relative to census coverage, while the intercept a reflects the relative coverage of the two censuses. The calculations are then extended to the evaluation of the two successive censuses and subsequently adjusted to produce an age pattern of mortality for a given reference period.
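A compact sketch of the GGB computation under these definitions follows: for each age x, the residual N(x)/PYL(x+) - r(x+) is regressed on D(x+)/PYL(x+), and 1/slope estimates the completeness of death reporting relative to census coverage. The geometric-mean approximations and the toy counts are illustrative assumptions, not the paper's actual data:

```python
import numpy as np

def ggb_fit(p1, p2, deaths, t):
    """Generalized growth balance (GGB) fit over 5-year age groups.

    p1, p2 : census counts by 5-year age group at the two censuses
             (last element is the open-ended group)
    deaths : total intercensal deaths in the same age groups
    t      : intercensal interval in years

    Returns the fitted intercept, slope, and 1/slope, where 1/slope
    estimates the completeness of death reporting relative to census
    coverage.
    """
    n = len(p1)
    xs, ys = [], []
    for i in range(1, n - 1):
        P1, P2 = p1[i:].sum(), p2[i:].sum()
        pyl = t * np.sqrt(P1 * P2)           # person-years lived above x
        r = np.log(P2 / P1) / t              # growth rate of segment x+
        # Persons reaching exact age x during the interval, estimated
        # geometrically from the two adjacent 5-year groups (one common
        # approximation).
        nx = (t / 10.0) * (np.sqrt(p1[i - 1] * p2[i - 1]) + np.sqrt(p1[i] * p2[i]))
        xs.append(deaths[i:].sum() / pyl)    # death rate d(x+)
        ys.append(nx / pyl - r)              # residual b(x+) - r(x+)
    slope, intercept = np.polyfit(xs, ys, 1)
    return intercept, slope, 1.0 / slope

# Toy illustration with fabricated counts (thousands) and deaths:
p1 = np.array([1200, 1000, 850, 700, 560, 430, 300, 180, 90], float)
p2 = np.array([1500, 1300, 1100, 900, 720, 560, 390, 230, 110], float)
d = np.array([90, 30, 35, 40, 45, 50, 55, 50, 40], float)
print(ggb_fit(p1, p2, d, t=11.0))
```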
Another variant of the death distribution techniques is the extinct generations method, which uses data only on registered deaths to reconstruct the cohort size at a particular age by counting all the cohort deaths until the cohort is extinct (Bennett and Horiuchi 1981; 1984). Also referred to as the Bennett-Horiuchi (BH) approach, the method indirectly provides a test of whether the data are accurate (UN 2002a). But, by definition, it cannot be applied to recent non-extinct generations (Preston et al. 2001). Hill and Choi (2004) have used simulations to evaluate the effect of common patterns of data errors on the performance of the GGB and BH methods. Their simulation results suggest that combining the GGB and BH methods can produce better estimates of death registration completeness. In line with this suggestion, the GGB is first used to estimate census coverage, the census count is then adjusted accordingly to be consistent, and the BH method is thereafter applied to arrive at an adjustment factor for deaths. But prior to applying the techniques and generating mortality estimates, it is appropriate to assess the data quality, particularly the age-sex composition on which the techniques employed rely.
Data Assessment
An examination of the percentage distribution of the population by single years of age (Figure 1) for the two censuses shows a general decline as age advances, with a considerable amount of heaping suggesting a strong preference for digits 0 and 5 in the reporting of ages.
Such age heaping and digit preference is common in demographic data. The graphical assessment indicates that this age heaping is somewhat similar in both censuses, though age heaping and digit preference appear to be somewhat lower in 1987 compared to 1976. Grouping the data into conventional age groups reveals a steady decline with age. A consistency check on the data, consisting of comparing 5-year and 10-year birth cohorts, portrays virtually parallel lines with a slight indentation for the younger birth cohorts, which seems consistent with lower fertility during the intercensal period. There is no crossover to suggest a large intercensal migratory movement or a large undercount in one census relative to the other.
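One conventional summary of the digit preference visible in Figure 1 is Whipple's index, which the paper itself does not compute; the sketch below applies it to a hypothetical single-year age distribution with artificial heaping on digits 0 and 5:

```python
import numpy as np

def whipples_index(counts_by_single_age: np.ndarray) -> float:
    """Whipple's index of digit preference for terminal digits 0 and 5,
    computed over ages 23-62: 100 * (persons at ages ending in 0 or 5)
    / (1/5 of all persons aged 23-62). A value of 100 indicates no
    heaping; 500 means all reported ages end in 0 or 5.
    """
    ages = np.arange(len(counts_by_single_age))
    window = (ages >= 23) & (ages <= 62)
    heaped = window & ((ages % 5) == 0)
    return 100.0 * counts_by_single_age[heaped].sum() / (
        0.2 * counts_by_single_age[window].sum())

# Hypothetical declining age distribution with extra mass on 0s and 5s.
counts = 1000 * np.exp(-0.03 * np.arange(100))
counts[np.arange(0, 100, 5)] *= 1.6        # simulate heaping
print(round(whipples_index(counts), 1))    # > 100 indicates heaping
```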
The age-specific sex ratios of the population are consistent with the expected pattern that typically implies more male births than females, with lower mortality during the life course allowing females to eventually catch up with the males towards the young adult ages and outnumber them at the older ages.The sex ratio in the Cameroon census is estimated at 96 males per 100 females for both 1976 and 1987, suggesting a constant fraction of males as a proportion of the total population.By age groups the sex ratios ranges from 104 for the youngest age group to below 100 by age group 15-19 where it remains through to the older ages.On the whole ignoring slight fluctuations, the pattern of age-specific sex ratios seems consistent with the global picture for Africa (see UN 2004;Shyrock, Siegel and Associates 1976) where sex ratios are generally around 100 for the age range 0-14, drop to a little above 90 for the broad age range 15-59, and then drop below 80 thereafter.Also, the age ratios are generally above 90 except above age group 55-59.Departures from the standard expected value of 100 are not large for ages below 60, a pattern suggestive of relatively reasonable data errors below this age.The overall sex ratio of household deaths reported for the 12 months preceding the 1987 census is 117 males per 100 females which is not significantly different from the 116 obtained for the 12 months prior to the 1976 census.The observed age-specific patterns are indicative of substantial excess male mortality.The sex ratios of reported deaths by age group start at 120 and remain steadily above 100 across age, except for ages 15-44 where the male mortality penalty seems to disappear and resurfaces at the end of the reproductive span (45-49).The pattern is same for the two census years, except that the trough between ages 15-49 is deeper for the first census than for the second.Figure 2 compares the ratio of observed male to female age-specific death rates from the two censuses and provides a closely similar picture to that portrayed by the sex ratios of reported deaths pointing to male excess mortality as well as high maternal mortality at the time of the first census that appears to have declined by 1987.A similar pattern emerges from the comparison of ratios of observed ASMR for 1987 to those for 1976 as depicted in Figure 3.A key component of the diagnostic checks on census coverage entails examining the intercensal age-specific growth rates (ASGRs).Except for situations where there has been an appreciable net migration, average annual growth rates have seldom been recorded outside the range of 0 to 3% (Shyrock, Siegel and associates 1976).Rates outside this range may also suggest that one or the other of the censuses was a substantial undercount.We also examined the mean annualized ASGRs and they follow virtually a similar pattern for both sexes.The rate falls a little off this expected range only in one or two cases: females age 35-39, where the ASGR is not very different from zero and age groups 10-14 for both sexes and 0-4 for males where the ASGR is at the upper boundary.The well-known mobile ages likely to be missed by a census are the young adults and to some extent the old adults.The observed trough does not seem consistent with the type of distortion that might be attributable to age-selective migration, since this is observable for both sexes, but steeper for females.This could be partly an attribute of potential downward shifting of persons as a result of age misstatement.Aside from this 
slight distortion, the growth rates are generally within the expected range, implying the data are perhaps fairly good. A better gauge of the growth trend would require a series of preceding censuses. In the absence of such a series, we turn to the GGB to evaluate and generate adjustment factors that allow for consistency between reported deaths and the population denominators.
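For reference, the annualized ASGRs discussed above are obtained from the two census counts as r(x) = ln(N2(x)/N1(x))/t. A minimal sketch follows; the counts are illustrative, and the intercensal interval of roughly 11 years between the 1976 and 1987 enumerations is an assumption:

```python
import numpy as np

# Intercensal age-specific growth rate: r(x) = ln(N2(x)/N1(x)) / t, where
# N1(x) and N2(x) are the counts of age group x in the two censuses and t is
# the intercensal interval in years. Counts below are purely illustrative.
def asgr(n1, n2, t):
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    return np.log(n2 / n1) / t

counts_1976 = [1200, 980, 850]    # hypothetical counts for ages 0-4, 5-9, 10-14
counts_1987 = [1650, 1320, 1100]
print(asgr(counts_1976, counts_1987, 11.0))  # mean annualized ASGRs
```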
Relative completeness of deaths and census enumeration
To minimize errors resulting from frequent exaggeration of the age at death of the elderly, the initial refinement of the growth balance method by Hill (1987) recommends that the completeness parameters be estimated by fitting lines to data points below age 60. The method has since been reviewed for performance and for the sensitivity of its estimates to the use of different age ranges for adjustment. In a preliminary evaluation, Hill and Thomas (2007) first tested the method under ideal circumstances where the requisite data are essentially complete. They compared three alternative age ranges for adjustment (15-55+, 5-65+, and 40-80+), and from the preliminary results, the two preferable age ranges are 15-55+ and 40-80+. They appear to favor the latter, which produces relatively stable results in the face of substantial net migration (Hill and Thomas 2007). The 15-55+ age range also yields fairly stable results, except that it tends to be affected by substantial net migration.
Considering the preceding assessment of the data for Cameroon, and the awareness that age at death for the elderly is likely to be distorted, especially for these early censuses, the data are fitted to the age range 15-55+. Completeness estimates were also fitted to the 40-80+ age range (results not shown here), but from the outcome, fitting to 40-80+ does not appear to work well for Cameroon, at least in this case.
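To make the fitting step concrete, the sketch below implements the core GGB regression as we understand it from Hill (1987): over the chosen age range, the partial birth rate minus the partial growth rate is linear in the partial death rate, b(x+) - r(x+) = delta + d(x+)/c, so the fitted slope estimates 1/c (c being the completeness of death reporting) and the intercept delta yields the relative census coverage via k1/k2 = exp(delta*t). The variable names and the use of ordinary least squares are our assumptions; practical implementations often use orthogonal regression.

```python
import numpy as np

# Schematic GGB fitting step (after Hill 1987). Inputs are, for each age x in
# the chosen range (here 15-55+): the partial birth rate b(x+), the partial
# growth rate r(x+), and the partial death rate d(x+) for the open interval
# x+. The identity b(x+) - r(x+) = delta + d(x+)/c is a straight line whose
# slope estimates 1/c and whose intercept gives k1/k2 = exp(delta * t).
def ggb_fit(b_xplus, r_xplus, d_xplus, t):
    y = np.asarray(b_xplus) - np.asarray(r_xplus)
    slope, intercept = np.polyfit(np.asarray(d_xplus), y, 1)  # OLS sketch
    completeness = 1.0 / slope          # completeness of death reporting, c
    k1_over_k2 = np.exp(intercept * t)  # relative census coverage
    return completeness, k1_over_k2
```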
Table 1 shows the estimated completeness of reported deaths for the 12 months preceding each census. It should be recalled that deaths are estimated under the assumption that each set of ASMRs prevails throughout the intercensal period.
Based on these estimates, the reported household deaths prior to the 1987 census seem to approximate fairly complete reporting. Generally, household deaths appear to be slightly less completely reported for females than for males. The overall reporting of male deaths based on the 1987 ASMR was about 87% complete relative to the population count, as compared to female deaths estimated at 81% complete. The male-female differential in reporting seems consistent across the two censuses, with a slightly larger difference in the case of 1976 deaths. Under the alternative assumption that the 1976 ASMR prevailed throughout the intercensal period, the estimated completeness suggests that reported deaths represent a shortfall of close to half of the expected intercensal deaths relative to the population count. This is particularly the case with reported female deaths (59% complete) as against male deaths (66% complete). As noted in Table 1, the application of the GGB and the BH adjusted method (Hill and Choi 2004) produces almost identical estimates of deaths.

The estimates of enumeration completeness also provided (k1 and k2 for the first and second censuses, respectively) are relative to one or the other census, with the reference census being a simple matter of choice. It is not possible to estimate the completeness parameters for both censuses individually because there is no way to distinguish the situation in which both censuses and deaths are under-reported by precisely the same magnitude from the situation in which both censuses and deaths are completely reported. This is not necessarily problematic, since equal under-reporting in both censuses and deaths cancels out (Preston et al. 2001; UN 2002a). At best, for a known level of death reporting in the intercensal period, or if we are able to make informed guesses about its value, it would be possible to estimate k1 and k2 individually. In the current situation we do not have sufficient information to venture informed guesses about the level of death reporting. A convenient way to proceed (UN 2002a) is to ascertain which of the two k values is larger, arbitrarily set that value equal to one, and then estimate the other k value by their ratio. Hence, if k1/k2 > 1, implying k1 > k2, then we assume k1 = 1 and k2 = 1/(k1/k2). On the other hand, if k1/k2 < 1, then k1 < k2, and we set k2 = 1 and k1 = k1/k2. By setting either parameter equal to 1, we are implicitly assuming that the corresponding (reference) census was at least complete.
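The normalization rule just described is a simple branch; a minimal sketch (the function name is ours):

```python
def normalize_coverage(k1_over_k2):
    # Set the more complete census's coverage to 1 and scale the other,
    # exactly as described in the text (UN 2002a). Returns (k1, k2).
    if k1_over_k2 > 1:                  # 1976 census more complete
        return 1.0, 1.0 / k1_over_k2
    else:                               # 1987 census more complete
        return k1_over_k2, 1.0
```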
Following this logic, it is estimated that the 1976 enumeration was more complete relative to the 1987 census. In effect, the estimates show that the 1987 census was about 93% complete relative to the 1976 census, with the male enumeration estimated at 95% complete and the enumeration of females at 90% complete. These findings seem fairly consistent with the different reports from the post-enumeration assessments of the two operations. In effect, the estimated undercount level was much larger for the second operation than for the first: an average of 7% in 1976 as against 12.7% for the 1987 census.
Adult Mortality Estimates
Using the estimated adjustment factors obtained, we adjusted the reported household deaths and census counts accordingly before generating the corresponding life tables for Cameroon. The general age pattern of mortality depicted by the adjusted age-specific mortality rates (ASMR) is presented in Figures 4a and 4b. The age pattern is consistent with the general expectation of approximately "U-shaped" mortality, reflecting high mortality during childhood and late adulthood. Both figures (males and females, respectively) portray some possible intercensal declines in mortality among those under 15 years old. But for ages above 15, the only visible sign of mortality gains is for females, whereas the ASMR for males above 15 are virtually identical between 1976 and 1987. An appropriate way of quantifying adult mortality risk in a population based on the life table approach is to express it as the conditional probability of dying between ages 15 and 60, denoted conventionally as 45Q15. In other words, this gives the proportion of persons aged 15 years who would die before their 60th birthday under given mortality conditions. Table 2 presents the observed and adjusted probabilities (45Q15), which are further broken down into the conditional probabilities of dying between ages 15 and 40 (25Q15) and between ages 40 and 60 (20Q40). This allows for a comparison of young adult mortality with that of older adults. The estimates for each sex differ depending on whether the censuses are adjusted for the relative undercount or not, but they are nevertheless quite close. Using the unadjusted census counts simultaneously produces a slightly higher percentage completeness of deaths and a higher adult mortality estimate.
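For readers who wish to reproduce the life-table quantities, one standard route from 5-year ASMRs (5Mx) to the probabilities in Table 2 converts rates to probabilities of dying (5qx) with a mid-interval approximation and then chains survivorship. The sketch below assumes an average of a = 2.5 person-years lived by those dying in each interval; the rates shown are illustrative, not the Cameroon estimates:

```python
import numpy as np

# Convert 5-year age-specific mortality rates (5Mx) into probabilities of
# dying (5qx) using nqx = n*m / (1 + (n - a)*m) with a = 2.5, then chain
# survivorship to obtain 45q15, 25q15, and 20q40.
def nqx(m, n=5.0, a=2.5):
    m = np.asarray(m, float)
    return (n * m) / (1.0 + (n - a) * m)

def q_between(mx_by_age, start, stop, lo_age=15, n=5):
    # mx_by_age: rates for age groups lo_age, lo_age+n, ...; returns the
    # probability of dying between exact ages `start` and `stop`.
    q = nqx(mx_by_age)
    i0, i1 = (start - lo_age) // n, (stop - lo_age) // n
    return 1.0 - np.prod(1.0 - q[i0:i1])

mx = [0.004, 0.005, 0.006, 0.008, 0.011, 0.015, 0.021, 0.030, 0.042]  # ages 15-59
print(q_between(mx, 15, 60))  # 45q15
print(q_between(mx, 15, 40))  # 25q15
print(q_between(mx, 40, 60))  # 20q40
```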
On the whole, the adjusted estimates suggest that under the 1987 mortality conditions, about 37% of Cameroon males reaching age 15 were unlikely to celebrate their 60th birthday. A slightly smaller proportion is obtained for females (32%), which implies a 15% excess in adult male mortality relative to adult female mortality. As expected, the mortality risk generally increases with age from age 15, but the slope is steeper after age 40. The probability of dying between ages 40 and 60 is almost twice that between ages 15 and 40. The BH adjusted estimates are lower but consistent with those produced by the GGB adjustment. As indicated previously, the adjusted ASMR and the corresponding life table values, 45Q15, were also estimated under the assumption that the 1976 rates prevailed throughout the intercensal period. Comparing these estimates with those based on the 1987 rates should provide some rough indication of adult mortality trends during this period. The results suggest that adult male mortality in Cameroon probably stagnated during the intercensal period at its 1976 level. In particular, the probability of dying between ages 15 and 60 under the 1976 mortality conditions, estimated at 38%, is very close to the corresponding estimate (37%) under the 1987 mortality conditions. On the other hand, adult female survival probability apparently improved during the same period, from about 63% in 1976, which was similar to that of males, to 68% in 1987. This is consistent with the results presented in the data assessment section, where an observed trough in the 1976 ratio of male to female ASMRs for the reproductive years disappears in 1987.
An alternative life table measure of adult mortality is the average expected remaining years of life at age 15 (Table 3). Based on the adjusted mortality conditions (ASMR) for 1976, the average expectation of life at age 15 (e15) was almost the same for both sexes (about 47.3 years for males as against 47.9 for females). The corresponding estimates based on the 1987 conditions suggest an increasing sex difference of about 2 years in favor of females (50.2 years), while that of males remained virtually at the same level (47.9 years) throughout the intercensal period. However, given that this is a cumulative measure of mortality, there are signs of a slight general increase in life expectancy for males, as average remaining years at age 5 (e5) went up from 54.4 to 56.0 years. Again, the increase for women is larger (from 55.2 to 58.3 years). Generally, the improvements in average remaining years tend to be minimal for both sexes towards the higher ages.
Discussion and Conclusion
This paper employs the generalized growth balance method to estimate adult mortality in Cameroon using data from the 1976 and 1987 censuses. Overall, there are indications that childhood mortality improved during the intercensal period. By contrast, mortality for those who survived through the childhood years apparently stagnated. Two nationwide surveys conducted as part of the worldwide effort, the Cameroon Demographic and Health Surveys (CDHS) of 1998 and 2004, collected sibling survival information. A comparative assessment of these data alongside similar data for other African countries by Timaeus and Jasseh (2004) estimates the probability of dying between ages 15 and 60 around 1995 at 28% and 23% for Cameroon men and women, respectively. These estimates are evidently on the low side compared to those presented in the current study, perhaps attributable to methodological and data differences. The CDHS estimates are much lower for 1990: 18% and 15% for males and females, respectively. It is informative to examine the direct estimates to see how well they compare with similar estimates from the sibling survival data. Figures 5a and 5b present the age-specific mortality rates for men and women aged 10-64 years from the 1976 and 1987 censuses compared with the estimates based on sibling information in the 1998 and 2004 DHS, as provided in the country reports. The latter estimates correspond to the periods 1989-1998 and 1998-2004, respectively, while those computed from the censuses correspond to the mortality situation during the 12-month period preceding each census. For both men and women, mortality rises rapidly with age, and rates are consistently higher for males than for females. Compared to the CDHS estimates, the 1987 mortality rates seem slightly higher than the rates for 1989-1998 but lower than the rates for 1998-2004. Irrespective of the estimates, there is a general indication of a male mortality penalty. In combination with the adjusted estimates presented earlier, the observed estimates also suggest little or no improvement in the adult mortality situation in Cameroon over the last two decades of the 20th century. The situation seems to have deteriorated or stagnated even prior to the onset of the HIV/AIDS pandemic. Based on this comparison, the ages at which mortality seems to have increased barely overlap and are apparently different for males and females. The increase for males (Figure 5a) seems to be more pronounced in the age range 30-45 years, and in the age range 20-35 years for women (Figure 5b). These ages correspond to the peak productive and reproductive adult years. These are also known to be the AIDS years, where the impact is most often pronounced. However, it should be noted that the indication of a pronounced increase at these ages is mostly portrayed by the recent DHS survey conducted in 2004. Given sample-size issues and other data limitations, more information (from other sources) on adult mortality risks in the country is needed before any conclusive statements can be drawn from these trends regarding current levels.
Considering that the first documented AIDS cases in Cameroon occurred in 1987 (MSP et OMS 1989), this paper provides a baseline assessment of the adult mortality situation in Cameroon. The overall picture from the results is that of a population that experienced stagnating adult mortality between the mid-1970s and mid-1980s. Adult mortality in Cameroon prior to the onset of the pandemic was high by African standards. However, results disaggregated by sex suggest that adult female mortality improved over the intercensal period. The female mortality advantage translates to a difference of less than three years in the expected remaining years of life at age 15. The estimated probabilities of dying between ages 15 and 60 imply an overall 15% excess in adult male mortality relative to female mortality. The indication that female adult mortality declined over the intercensal period is consistent with both national and international efforts to curb maternal mortality through the institution and implementation of safe motherhood and maternal and child health (MCH) programs during the period covered by these censuses.
Meanwhile, a potential suspect for the male mortality stagnation, for which there is as yet no empirical evidence, is the dramatic change in the socio-political and economic environment in Cameroon in the mid-1980s. The relatively poor performance of the Cameroon economy during the early 1980s and the dramatic changes in employment opportunities, along with other related political changes, are probably among the major factors. Mortality and, more probably, morbidity may have increased as a consequence of the economic crises, worsening living conditions, and mounting unemployment. Owing to data availability, studies relating socio-economic fluctuation and mortality focus mostly on overall mortality. A comprehensive attempt to assess the demographic response to economic shocks in Latin America (Hill and Palloni 1992) shows that recession has some expected adverse effects on the mortality of vulnerable groups such as women, though the mortality responses are quantitatively unimportant compared to the historical trend (see also Soares 2007). In the case of Cameroon, it is possible to suspect that, because men were traditionally the breadwinners and the main political and economic actors, the socio-political and economic shocks of the early 1980s might have induced a differential male-female mortality response. In particular, any negative effect on women might have been cushioned by the corresponding worldwide efforts during this era to curb maternal and child mortality.
Conclusion
There are no directly comparable estimates from other sources that could serve as external checks on our estimates. Nonetheless, the assessment of sibling histories from the DHS in Sub-Saharan Africa (Timaeus and Jasseh 2004) provides estimates for a few countries corresponding to 1985. As mentioned already, these DHS estimates are all on the low side, except for the Central African Republic, a close neighbor of Cameroon, for which the estimated probabilities of dying between ages 15 and 60 (0.308 and 0.348 for women and men, respectively) approach the estimates we obtained in this study. It may appear from this comparison that the census data (more precisely, household deaths) overestimate mortality compared to the results of the recent DHS surveys. More recent comparable data, or a much longer data series, are needed to ascertain the observed trend. The third Cameroon census, conducted in 2005, 18 years after the second, is not yet available. It will be interesting to evaluate the situation once these data eventually become available. This paper reiterates the usefulness to mortality analysis and policy intervention of consistently collecting information on household deaths in censuses.

Note 4: The economic depression that emerged during the early to mid-1980s developed into a political crisis in the early 1990s, and was further aggravated by the devaluation of the CFA Franc in 1994, accompanied by two successive salary cuts. In effect, Cameroon happens to be the only country (among a dozen former French colonies in Central and West Africa using the common currency, FCFA) where civil servants had to bear the weight of the franc devaluation alongside heavy cuts in salaries. Other neighboring countries instead increased salaries to counter the effect of the devaluation.
Figure 1: Distribution of population by single age, sex and year of census
Figure 2: Ratios of male to female age-specific death rates by 5-year age groups
Figure 3: Ratios of age-specific death rates for 1987 relative to 1976 by sex
Figure 4a: Male adjusted age-specific mortality rates for 1976 and 1987
Table 1 : Relative completeness of household deaths reporting in 1976 and 1987 censuses
Source: Author's computations based on the 1976 and 1987 censuses of Cameroon. Notes: k1 is the completeness of coverage for the 1976 census; k1/k2 is the relative completeness; GGB is the generalized growth balance method; BH is the Bennett-Horiuchi method.
Table 2 : Probability of dying for adults between exact ages 15 and 60 (45Q15), for young adults between exact ages 15 and 40 (25Q15), and old adults between exact ages 40 and 60 (20Q40) by sex, Cameroon 1976-1987
Source: Author's computations based on the 1976 and 1987 censuses of Cameroon
Table 3a : Male and female life tables based on adjusted 1987 deaths
Source: Author's computations based on the 1976 and 1987 censuses of Cameroon
Table 3c : Male and female life tables based on average 1976-1987 deaths
Source: Author's computations based on the 1976 and 1987 censuses of Cameroon | 9,842 | 2013-10-16T00:00:00.000 | [
"Economics"
] |
Linear cavity erbium-doped fiber laser with over 100 nm tuning range
We report a widely tunable single-frequency linear-cavity erbium-doped fiber laser covering both the conventional wavelength band (C-band) and the long wavelength band (L-band). The laser has a low threshold, high slope efficiency, and high signal-to-noise ratio. A large tuning range of over 100 nm is realized by optimization of the active fiber length. © 2003 Optical Society of America

OCIS codes: (060.2320) fiber optics amplifiers and oscillators; (140.3500) lasers, erbium; (140.3600) lasers, tunable

References and links
1. P. L. Scrivener, E. J. Tarbox, and P. D. Maton, "Narrow linewidth tunable operation of Er-doped single-mode fiber laser," Electron. Lett. 25, 549-550 (1989).
2. J. L. Zyskind, J. W. Sulhoff, J. Stone, D. J. Digiovanni, L. W. Stulz, H. M. Presby, A. Piccirilli, and P. E. Pramayon, "Electrically tunable, diode-pumped Erbium-doped fiber ring laser with fiber Fabry-Perot etalon," Electron. Lett. 27, 1950-1951 (1991).
3. Th. Pfeiffer, H. Schmuck, and H. Bülow, "Output power characteristics of Erbium-doped fiber ring laser," IEEE Photon. Technol. Lett. 4, 847-849 (1992).
4. S. Yamashita and M. Nishihara, "Widely tunable erbium-doped fiber ring laser covering both C-band and L-band," IEEE J. Select. Topics Quantum Electron. 7, 41-43 (2001).
5. A. Bellemare, M. Karasek, C. Riviere, F. Babin, G. He, V. Roy, and G. W. Schinn, "A broadly tunable Erbium-doped fiber ring laser: experimentation and modeling," IEEE J. Select. Topics Quantum Electron. 7, 22-29 (2001).
6. Y. T. Chieng, G. J. Cowle, and R. A. Minasian, "Optimization of wavelength tuning of Erbium-doped fiber ring lasers," J. Lightwave Technol. 14, 1730-1739 (1996).
7. M. Mignon and E. Desurvire, "An analytical model for the determination of optimal output reflectivity and fiber length in Erbium-doped fiber lasers," IEEE Photon. Technol. Lett. 4, 850-852 (1992).
8. E. Delevaque, T. Georges, M. Monerie, P. Lamouler, and J.-F. Bayon, "Modeling of pair-induced quenching in erbium-doped silicate fibers," IEEE Photon. Technol. Lett. 5, 73-75 (1993).
9. P. Myslinski, D. Nguyen, and J. Chrostowski, "Effects of concentration on the performance of erbium-doped fiber amplifiers," J. Lightwave Technol. 15, 112-120 (1997).
10. J. L. Wagener, P. F. Wysocki, M. J. F. Digonnet, H. J. Shaw, and D. J. Digiovanni, "Effects of concentration and clusters in erbium-doped fiber lasers," Opt. Lett. 18, 2014-2016 (1993).
11. X. Dong, N. Q. Ngo, P. Shum, B.-O. Guan, H.-Y. Tam, and X. Dong, "Concentration-induced nonuniform power in tunable erbium-doped fiber laser," to be published.
12. A. Bellemare, J.-F. Lemieux, M. Têtu, and S. LaRochelle, "Erbium-doped fiber ring lasers step-tunable to exact multiples of 100 GHz (ITU-grid) using periodic filters," ECOC'98, 153-154 (1998).
13. B.-H. Choi, H.-H. Park, M. Chu, and S. K. Kim, "High-gain coefficient long-wavelength-band erbium-doped fiber amplifier using 1530-nm band pump," IEEE Photon. Technol. Lett. 13, 109-111 (2001).
Introduction
Widely tunable, narrow linewidth, single-frequency erbium-doped fiber lasers (EDFLs) have been studied extensively as an essential laser source for wavelength-division-multiplexed (WDM) transmission systems and for performance testing of optical components [1][2][3]. They can be tuned over the large wavelength range of erbium-doped fiber amplifiers (EDFAs) and have low threshold, high signal-to-noise ratio, moderate effective linewidth (0.1~1.0 GHz), and excellent wavelength and power repeatability [4,5]. In recent years, there has been a growing demand for widely tunable fiber lasers that take advantage of the L-band of the EDFA to increase transmission capacity. Several investigations have studied the effect of laser cavity parameters, such as erbium-doped fiber (EDF) length, intra-cavity loss, output coupling ratio, and pump wavelength and power, on laser performance [3,5,6]. It was shown by theoretical analysis as early as 1992 that the tuning range of an EDFL could exceed 100 nm [7], but only in recent years has such a large tuning range been obtained experimentally, in a ring-cavity EDFL [5]. The key idea is to make the EDF operate in deep saturation by optimizing the EDF length and reducing the intra-cavity loss [5]. Compared with ring lasers, linear-cavity lasers have the advantage that the gain medium amplifies the laser light twice per round trip, which makes it easy to reach deep saturation. Therefore, a large tuning range as well as low threshold pump power and high slope efficiency can be readily achieved.
In this paper, we report a single-wavelength, linear-cavity EDFL with a large tuning range of over 100 nm obtained by optimizing the EDF length. The effects of the EDF length and output coupling ratio on the laser performance are investigated. The laser has a low threshold, high slope efficiency, and high signal-to-noise ratio.
Laser configuration
Figure 1 shows the proposed laser configuration. Two circulators with a port-to-port loss of 0.5 dB are used as the end mirrors, which allows us to insert the polarization controller (PC), output fiber coupler, and filter elements into the two loops rather than into the laser cavity to minimize the insertion loss. The lightwave propagates through the two loops once per round trip but passes through the cavity twice. Thus, the insertion loss introduced by these components is effectively halved. We can also choose to insert a given component, such as the filter, into one of the two loops to optimize the laser performance. The EDF was backward-pumped through a micro-optic WDM coupler by a 90 mW laser diode emitting at 1480 nm. At the left end of the EDF, the incident laser light, which is "reflected" by the circulator on the left, has relatively high power because it has already been amplified once by the EDF. This high-power laser light enters the EDF again, which allows the EDF to reach deep saturation easily so that a large tuning range can be obtained [5]. The EDF has a high erbium concentration of ~9.2 × 10²⁴ ions/m³, absorption coefficients of α(1480 nm) = 7 dB/m and α(1532 nm) = 15 dB/m, a cutoff wavelength of 1000 nm, and a numerical aperture of 0.29.
As a tunable filter with a sufficiently large tuning range is not commercially available, a tunable narrow-bandpass fiber Fabry-Perot (FFP) filter with a bandwidth of 0.2 nm and free spectral range (FSR) of 28 nm and six bandpass filters (BPFs) with 20 nm bandwidth are used as an alternative to demonstrate the effectiveness of the proposed method.The BPFs are used one by one.For each BPF, its 20-nm bandwidth is less than the FFP's 28-nm FSR and thus continuous wavelength tuning can be obtained within the 20 nm range by changing the voltage applied to the piezoelectric transducer of the FFP.Hence, a total tuning range of 120 nm can be achieved by using the six BPFs in turn.Furthermore, as each BPF is inserted into a separate fiber loop from that of the FFP, improvement on the laser performance is also expected because the limited 20-nm bandwidth of the BPF will suppress the amplified spontaneous emission (ASE) power in the laser cavity.The total insertion loss of the two filters is about 3.5 dB.
Results and discussion
Figure 2 shows the measured laser output powers for various EDF lengths. The coupling ratio of the output coupler is 0.5. For EDF lengths of 5.5, 8.0, and 12.5 m, the laser emission can be tuned over large wavelength ranges of 105 nm (1511-1616 nm), 110 nm (1511-1621 nm), and 98 nm (1523-1621 nm), respectively. It can be seen that optimization of the EDF length is necessary to obtain high power over a large tuning range; the length noticeably influences the short- and long-wavelength ends of the tuning range. For 5.5 m of EDF, the output power decreases more rapidly in the L-band region because the EDF is too short to provide sufficient gain. On the contrary, for 12.5 m of EDF, the power decreases more rapidly in the C-band region and is very small below 1525 nm due to absorption in the long EDF. Thus, the optimal EDF length is about 8.0 m, which gives the largest tuning range with the highest average power. For the 8.0 m EDF, the tunable ranges for 1 dB and 3 dB power flatness are 76 nm (1537-1613 nm) and 100 nm (1519-1619 nm), respectively.
Fig. 4. Laser power against pump power at 1580 nm for different output coupling ratios.
Figure 3 shows the laser output powers over the tuning range of 1511 nm to 1621 nm for output coupling ratios of 0.3, 0.5, and 0.7. The EDF length is 8.0 m. It can be seen that the output power increases with the output coupling ratio over the mid-band of the wavelength range but decreases at the short and long edges of the tuning range, which results in a smaller tunable range. This is because, as the output coupling ratio increases, the intra-cavity loss also increases, and a larger intra-cavity loss leads to a smaller tuning range in a tunable laser [5]. The maximum output powers are 10.6, 12.7, and 14.3 dBm at around 1580 nm for output coupling ratios of 0.3, 0.5, and 0.7, respectively. Figure 4 shows the measured laser output power at 1580 nm versus the pump power for the various output coupling ratios. The threshold pump powers are 4.4, 4.6, and 5.9 mW, and the calculated slope efficiencies are 13.5%, 21.3%, and 30.7% for output coupling ratios of 0.3, 0.5, and 0.7, respectively.
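The thresholds and slope efficiencies quoted above follow from the linear above-threshold relation P_out = η(P_pump − P_th). A minimal sketch of extracting both from measured points (the data here are invented for illustration, not the measurements of this paper):

```python
import numpy as np

# Fit the above-threshold line P_out = eta * (P_pump - P_th): the slope of a
# straight-line fit gives the slope efficiency eta, and the x-intercept gives
# the threshold pump power P_th. Data points are illustrative only.
pump_mw = np.array([10, 20, 30, 40, 50, 60])
out_mw = np.array([1.2, 3.3, 5.4, 7.6, 9.7, 11.8])

eta, b = np.polyfit(pump_mw, out_mw, 1)   # slope and intercept
p_threshold = -b / eta                    # x-intercept of the fit
print(f"slope efficiency ~ {eta:.1%}, threshold ~ {p_threshold:.1f} mW")
```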
We have also measured the threshold pump power and calculated the slope efficiency at other emission wavelengths. The results show that the threshold pump power increases and the slope efficiency decreases slightly at wavelengths shorter than 1540 nm, and this causes a power drop at around 1530 nm and hence a degradation of the power flatness (see Fig. 2 and Fig. 3). This phenomenon is mainly caused by concentration quenching due to clustering of erbium ions in the high-concentration EDF. Concentration quenching has been identified as the main cause of EDF performance degradation [8]; it may cause a decrease in the population inversion and hence quantum efficiency, resulting in a reduction of EDFA gain and an increase in the threshold of EDFLs [9,10]. Our recent theoretical studies on fiber ring lasers have shown that it may lead to nonuniform output power in tunable EDFLs because it influences the laser performance more in the C-band, especially around 1530 nm, than in the L-band [11]. The laser power profile we obtained in this paper is similar to the simulation result of Ref. [11]. Using low-concentration EDF can overcome this undesirable effect, but at the cost of increased device size due to the longer EDF length required. Figure 5 shows the laser spectrum from 1513 nm to 1618 nm measured using an optical spectrum analyzer (OSA). The EDF length is 8.0 m and the output coupling ratio is 0.7. It can be seen that the signal-to-noise ratio is better than 60 dB over the whole tuning range of 105 nm. The ASE noise is very small because the FFP filter was placed in front of the output coupler. The linewidth of each spectral peak is about 0.014 nm, measured with 0.01 nm resolution of the OSA. The asymmetric line shape is due to the OSA's response.
We also measured the laser output power when the BPF was inserted into the fiber loop in which the FFP filter was placed.We found that the laser tuning range was reduced by about 15 nm in the short wavelength region although the maximum power increased slightly by about 0.2 dBm.We believe that it is due to the ASE light generated in the EDF, which is usually high when an EDFL emits at wavelengths below 1530 nm and in the L-band [4,12].In the original design with the BPF and FFP filter in two separate loops, the ASE light was filtered and greatly suppressed by the BPF before it was fed back into the cavity.However, in the case of the BPF and FFP filter in the same loop, the ASE light was directly fed back into the EDF through the left fiber loop.This ASE light, with most of its power in the C-band, was amplified in the EDF when the laser was tuned to wavelengths below 1530 nm, resulting in a reduction of the gain of the EDF.However, when the laser emitted in the L-band, the ASE light provided an additional pump source to the EDF [13].This is the reason why the tuning range of the laser decreased at the short wavelength end and the maximum power increased slightly when we placed the BPF and FFP filter in the same fiber loop.From this observation, it is clear that the original design of the laser provides a larger tuning range.
Summary
We have demonstrated a widely tunable, single-wavelength, linear-cavity erbium-doped fiber laser covering both the C-band and the L-band. The dependence of the laser output power on the emission wavelength for different EDF lengths and different output coupling ratios has been studied. A large wavelength tuning range of over 100 nm has been obtained by optimizing the EDF length. The laser has low threshold power, high slope efficiency, and high signal-to-noise ratio.
Acknowledgement
This work was partially supported by the Research Council of the Hong Kong Polytechnic University.The authors would like to thank Dr. Bai-Ou Guan, Dr. Chunliu Zhao, Dr. W. H. Chung for useful discussions. | 3,056.2 | 2003-07-14T00:00:00.000 | [
"Physics"
] |
Detection of defects in atomic-resolution images of materials using cycle analysis
The automated detection of defects in high-angle annular dark-field Z-contrast (HAADF) scanning-transmission-electron microscopy (STEM) images has been a major challenge. Here, we report an approach for the automated detection and categorization of structural defects based on changes in the material’s local atomic geometry. The approach applies geometric graph theory to the already-found positions of atomic-column centers and is capable of detecting and categorizing any defect in thin diperiodic structures (i.e., “2D materials”) and a large subset of defects in thick diperiodic structures (i.e., 3D or bulk-like materials). Despite the somewhat limited applicability of the approach in detecting and categorizing defects in thicker bulk-like materials, it provides potentially informative insights into the presence of defects. The categorization of defects can be used to screen large quantities of data and to provide statistical data about the distribution of defects within a material. This methodology is applicable to atomic column locations extracted from any type of high-resolution image, but here we demonstrate it for HAADF STEM images.
Introduction
Structural defects can vastly alter the performance of materials, so that control of defect distribution and density is an important tool in engineering materials with novel functionalities. Even small concentrations of defects can often change the properties of materials, so it is important to quantify the type and concentration of defects [1][2][3]. Over the last two decades, aberration-corrected scanning transmission electron microscopy (STEM) has become a quantitative structural tool capable of locating atomic columns with picometer-level precision. The ability to achieve sub-pixel precision for the location of the center of an atomic column in STEM images has been demonstrated through image analysis techniques such as finding the center of mass and 2D function fitting with a Gaussian, allowing for accurate, consistent, and repeatable determination of the centers of atomic columns in STEM images [4][5][6][7][8]. Utilizing crystallographic nomenclature [9], STEM essentially images diperiodic structures, where thick structures are often referred to as 3D or bulk-like materials and thinner structures of just a few atomic layers are often termed 2D materials. Within a STEM image, it is possible to visually identify many defects such as impurities, interstitials, stacking faults, and a plethora of other complex defects for both types of diperiodic structures.
Several methods exist to detect and identify defects in STEM images, each having unique benefits and limitations. Defects within atomic columns can be detected by examining deviations in the contrast, looking for deviations in the local atomic-scale structure [10,11], overlaying an ideal atomic-scale structure on the image [12], and by using vector tracing [13]. These methods include measuring the distance between neighboring atoms in the structure and then using statistics and modeling to detect the presence and depth of a single defect in atomic columns [10,11]; measuring the relative positions of neighboring atoms and then applying principal component analysis (PCA) followed by K-means clustering to map the ideal atomic-scale structure and statistical deviations from this idealized structure [12]; and using the Fourier transform of the image to determine the crystal structure's lattice parameter and then overlaying the obtained periodic structure on the atomic coordinates [14]. Alternatively, defects may be detected using cross-correlation between the STEM image and a simulated STEM image based on coordinates obtained by relaxing a model structure via density-functional-theory (DFT) calculations and then detecting defects through areas of low correlation [15]. Determination of the composition of mixed-species atomic columns can also be accomplished through the use of a parametric model based on statistical-parameter-estimation theory, further combined with STEM image simulations to quantitatively improve the model [16,17]. All of these methods have achieved detection of defects that would not be possible, or would be extremely time-intensive, with the human eye. Furthermore, these methods provide a framework that is either general across materials or tailored to specific material systems in such a way that transferability is not limited by a known training set, as might be the case in machine-learning-based object detection algorithms.
In this paper, we report the development of a method that applies cycle analysis from geometric graph theory to the positions of atomic-column centers and is capable of detecting a wide range of defects in STEM images with no prior knowledge of the material. Although graph-theoretical techniques have been used previously for the segmentation of spatial regions and identification of voids in imaged materials [18][19][20], these applications do not necessarily provide information on slight structural deviations in the imaged material at an atomic-column by atomic-column level of detail. In graph theory, a cycle is a path between points that connects a point back to itself. Multiple types of cycles exist, such as the simple-walk cycle, which does not allow any point or connection to be repeated. For this paper, a particular type of cycle is created under the following conditions: no vertices may be repeated, no connecting line may intersect another connecting line, the cycle must enclose a reference atomic column, the cycle must not enclose any additional atomic columns, and, finally, the cycle must be the shortest path connecting the vertices. For every atomic column in an image, a single cycle is found to represent it. Based on the number of vertices and the area of the cycles, it is possible to detect and categorize defects in the STEM image. The concept of pre-filtering, wherein crystallographic information can inform the search, is also discussed; however, the use of such databases may be limited for real-time analysis during image acquisition since the crystallographic nature of the material may not be known a priori. The approach is applied to STEM images of both thick diperiodic structures (i.e., 3D or bulk-like materials) and thin diperiodic structures (i.e., "2D materials"). In bulk-like silicon doped with bismuth, we demonstrate the ability of cycles to detect the Bi dopants in the atomic columns and compare with Z-contrast. In monolayer MoS₂ doped with rhenium, sulfur vacancies are detected using two different cycle metrics.
Image filtering, finding centers of atomic columns
All of the raw STEM data are first processed to identify the centers of atomic columns as follows. At each pixel, a subimage is defined, centered at the pixel and encompassing an area roughly equal to the area per atomic column. These subimages are filtered using PCA to remove noise and surface contamination [21,22]. The subimages are then passed through a 2D correlation [23] with an ideal atomic column (a 2D Gaussian) defined by

$$r = \frac{\sum_{m}\sum_{n}\left(A_{mn}-\bar{A}\right)\left(B_{mn}-\bar{B}\right)}{\sqrt{\left(\sum_{m}\sum_{n}\left(A_{mn}-\bar{A}\right)^{2}\right)\left(\sum_{m}\sum_{n}\left(B_{mn}-\bar{B}\right)^{2}\right)}}$$

where A and B are two 2D matrices of the same size, while $\bar{A}$ and $\bar{B}$ are their mean values, respectively. This process returns a single normalized intensity. From the filtered image, the centers of atomic columns are then found using a simple intensity threshold followed by density-based clustering [24]. Any clusters that do not meet a minimum size requirement are rejected. The center of mass of each cluster is treated as the center of an atomic column. Further refinement of the positions of the atomic-column centers is performed using nonlinear least-squares curve fitting between the raw data and a 2D Gaussian. The center of the fitted Gaussian is then treated as the refined center of the atomic column.
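A minimal implementation of this normalized 2D correlation and the Gaussian template, as we read the description (the template size and width below are arbitrary choices):

```python
import numpy as np

# Normalized 2D correlation coefficient between two same-sized matrices,
# used here to score a subimage against an ideal (Gaussian) atomic column.
def corr2(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

def gaussian_template(size, sigma):
    # Centered 2D Gaussian on a size x size grid.
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    return np.exp(-(x**2 + y**2) / (2 * sigma**2))

# Example: score a noisy subimage against the ideal column template.
rng = np.random.default_rng(0)
template = gaussian_template(15, 3.0)
subimage = template + 0.1 * rng.standard_normal(template.shape)
print(corr2(subimage, template))  # close to 1 for a well-centered column
```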
Finding cycles for each atomic column
A cycle is a path that connects atomic-column centers or "vertices" in such a way that it forms a closed loop to the original vertex. First, all possible cycles are found, after which they are filtered based on the previously described restrictions. To find the cycles associated with an atomic column, one considers only its n nearest points to speed up calculation time. The connections between each vertex and its m nearest neighbors are mapped. Typically, n = 40 columns and m = 10 columns. Then, using the nearest neighbor (or one of the equidistant nearest neighbors) as a starting point, all valid connections are followed to neighboring vertices. This process is repeated until the starting vertex is encountered or no connection can be followed without causing a repeating vertex.
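A hedged sketch of this exhaustive search, using a k-d tree for the neighbor queries; the cap on cycle length is our simplification, and each loop is found in both traversal directions (deduplication is omitted):

```python
import numpy as np
from scipy.spatial import cKDTree

# Enumerate candidate closed loops on the m-nearest-neighbor graph of the
# n points closest to the reference column. Validity filtering (crossings,
# enclosure tests, etc.) is applied in a later step.
def candidate_cycles(points, ref_index, n=40, m=10, max_len=12):
    points = np.asarray(points, float)
    tree = cKDTree(points)
    _, local = tree.query(points[ref_index], k=min(n, len(points)))
    local = list(local)                   # indices of the local patch (ref first)
    nbrs = {i: set(tree.query(points[i], k=m + 1)[1][1:]) & set(local)
            for i in local}
    start = local[1]                      # nearest neighbor of the reference
    cycles, stack = [], [[start]]
    while stack:                          # iterative depth-first search
        path = stack.pop()
        for nxt in nbrs[path[-1]]:
            if nxt == start and len(path) > 2:
                cycles.append(path)       # closed loop found
            elif nxt not in path and len(path) < max_len:
                stack.append(path + [nxt])
    return cycles
```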
Filtering cycles
Once the cycles are found as described above, they must be filtered to find a single cycle to represent each atomic column. This filtering is done by checking every cycle against a series of rules. These rules are that no point may be repeated in the cycle, no connecting line may intersect another connecting line, the cycle must enclose the reference atomic column, the cycle must not enclose any other atomic column, the cycle must have no angle smaller than x degrees (x was set to 45° in the current work), and, finally, the cycle must be the shortest path connecting the points within a tolerance factor (within 1% of the shortest path in the current work) (Fig. 1). Once all of the cycles that do not meet these criteria are removed, the cycles with the largest number of points are selected. From these, the cycle with the largest area is chosen to represent the atomic column. The largest cycle is used for reproducibility: to ensure that we choose the same cycle each time, it must have a unique feature. Since the smallest cycle would always be a triangle and provide little information, the largest cycle is used instead. In cases of crystalline symmetry (i.e., rotationally equivalent cycles), a preferred cycle orientation can be chosen.
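Several of these rules are standard computational-geometry tests. Hedged sketches of three of them follow (the shoelace area used to pick the largest candidate, a ray-casting enclosure test for the reference column, and the minimum-angle check); the function names are ours, and the angle test treats every vertex as convex, which suffices for a sketch:

```python
import numpy as np

# A cycle is given as a (k, 2) array of vertex coordinates in order.

def polygon_area(cycle):
    # Shoelace formula; used to select the largest-area candidate cycle.
    x, y = cycle[:, 0], cycle[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def encloses(cycle, point):
    # Ray-casting point-in-polygon test (does the cycle enclose the
    # reference atomic column?).
    x, y = point
    inside = False
    for (x1, y1), (x2, y2) in zip(cycle, np.roll(cycle, -1, axis=0)):
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

def min_angle_deg(cycle):
    # Smallest vertex angle; cycles below the threshold (45 degrees in the
    # text) are rejected.
    prev = np.roll(cycle, 1, axis=0) - cycle
    nxt = np.roll(cycle, -1, axis=0) - cycle
    cosang = (prev * nxt).sum(1) / (np.linalg.norm(prev, axis=1)
                                    * np.linalg.norm(nxt, axis=1))
    return np.degrees(np.arccos(np.clip(cosang, -1, 1))).min()
```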
In order to provide a concrete example of defect detection, we demonstrate in Fig. 2 how the above-described algorithm finds a vacancy (i.e., more generally, a missing atomic column) for a 2D material with a hexagonal lattice like graphene (Fig. 2a). In Fig. 2b, three different orientations of a seven-sided cycle are shown. This is the optimal cycle based on the algorithm filter in the absence of defects, while the threefold symmetry is a manifestation of the point-group symmetries of the lattice. In the presence of strain, the lengths of the three bonds may be differentiated. In Fig. 2c, an atom that serves as a second-nearest neighbor to the vacancy site is shown, having a nine-sided cycle as its optimal cycle. In practical implementations, however, restrictions on the atomic-column search distance may cause the original seven-sided cycle to be found instead. In Fig. 2d, a third-nearest neighbor to the vacancy site retains the original seven-sided cycle (though the threefold symmetry is broken). For the atomic columns adjacent to the vacancy site (Fig. 2e), the optimal cycle is an eight-sided cycle. The resulting cycle mapping of the structure is shown in Fig. 2f, highlighting the location of the vacancy. Due to the atomic-column distance search limits, the purple atomic columns may be grouped with the blue atomic columns; there is no loss of detection of the vacancy due to the uniqueness of the directly neighboring sites. The uniqueness of the neighboring sites implies that the method is, in general, well suited to detection of missing columns with high accuracy in identification. Further classification due to variations in the vertex count of cycles is discussed in Section "Clustering cycles into defects using number of points in cycle".

Fig. 1 (a) Rejected cycle due to cycle lines crossing each other. (b) Rejected cycle due to an extra atom enclosed in the cycle. (c) Rejected cycle due to the cycle not being the shortest path connecting the points. (d) Accepted cycle that meets all parameters. (e) Accepted cycle that meets all parameters. (f) Accepted cycle that meets all parameters and is also the cycle with the most points.
Constructing cycles using Delaunay triangulation
Finding all possible cycles and checking them is a time-consuming process. A faster way to find a good guess of the best cycle is through the use of Delaunay triangulation [25]. For an atomic column, the positions of the nearest n neighbors (typically n = 40) are put into a Delaunay triangulation algorithm. Using the triangle that encloses the atomic column as the starting cycle, triangles from the Delaunay triangulation are combined with the cycle, testing whether the increased cycle at each step meets the selection criteria, until no further triangles can be added that meet the criteria. This method of finding the cycle for an atomic column is more than an order of magnitude faster than searching all possible cycles. However, it does not always find the true, correct cycle, as determined by a time-consuming exhaustive search, though it is often close. Generally, the fast process does not match the results of an exhaustive search only around defects that change the local structure, which does not affect the ability of this method to correctly detect defects (Fig. 3a, b).

Fig. 2 (a) Subset of a graphene-like structure in a hexagonal lattice (for the present example, the lattice is assumed to continue outside the drawn boundary). (b) Three rotationally equivalent seven-sided cycles indicating atoms in defect-free regions. Atomic columns inside such cycles are indicated in blue. (c) Optimal (nine-sided) and restricted (seven-sided) cycles for atomic columns that are second-nearest neighbors to a vacancy (indicated in purple). (d) Third-nearest-neighbor atomic columns retain the original cycle with loss of rotational symmetry. (e) Atomic columns adjacent to the vacancy site are described by an eight-sided cycle (indicated in orange). (f) Final cycle-based color coding of the observed atomic columns based on the algorithm.

Fig. 3 The centers of atomic columns for MoS₂ colored based on the number of points in the cycles associated with them for (a) searching all cycles, (b) using Delaunay triangulation, and (c) using Delaunay triangulation plus pre-filtering.
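A hedged sketch of the Delaunay-based shortcut described above, using SciPy; the validity predicate `cycle_is_valid`, which stands for the filtering rules of the previous section, is assumed rather than implemented here:

```python
import numpy as np
from scipy.spatial import Delaunay

# Start from the Delaunay triangle that encloses the reference column, then
# greedily merge adjacent triangles while the merged region's boundary still
# satisfies the cycle rules (checked by the assumed predicate).
def delaunay_cycle(points, ref_point, cycle_is_valid):
    tri = Delaunay(points)
    current = {int(tri.find_simplex(ref_point))}   # enclosing triangle
    grew = True
    while grew:
        grew = False
        frontier = {int(n) for s in current for n in tri.neighbors[s] if n != -1}
        for cand in frontier - current:
            trial = current | {cand}
            if cycle_is_valid(points, trial, tri, ref_point):
                current, grew = trial, True
    return current  # simplex indices whose union's boundary is the cycle
```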
Pre-filtering of cycles
To further improve the speed of the algorithm, prefiltering of cycles was tested. As all defect-free periodic crystalline materials can be described by a Bravais lattice with a given atomic basis and a set of symmetries, it is often possible to know the correct cycle before searching.
To take advantage of this, we created a small library of possible cycles. We can overlay a library cycle onto the points by aligning it to the reference point and its nearest neighbor (or one of the equidistant nearest neighbors) and scaling it to fit the distance between the two points. Starting with the largest cycles in the library, the cycles are overlaid onto the points. If every point in the cycle coincides, within some uncertainty, with a point in the image, it is selected as the correct cycle. This process sometimes yields too small a cycle, but that has no effect on defect detection (Fig. 3c). The speed improvement of pre-filtering depends on the size of the test library and the percentage of defect atomic columns, as this is an additional operation that must be performed on defect atomic columns.
Clustering cycles into defects using number of points in cycle
The first method of detecting defects is by looking at deviations in the number of vertices in the cycles. In "2D materials", atomic columns near defects that cause changes in the local geometric structure such as vacancies, interstitials, or stacking faults have cycles that contain a different number of points than in the perfect material's local structure. To detect a defect, we mark as acceptable any atomic column that has a cycle with the same number of vertices in it as a cycle in the perfect crystal. The remaining cycles are clustered together using a density-based clustering algorithm [24]. This algorithm randomly selects a point as the start of a cluster and then adds every point that is within a specified radius into the cluster. This is repeated until no point can be added to the cluster. These clusters are then grouped based on the number of atomic columns in the cluster. This procedure allows for the automatic detection and grouping of defects in STEM data.
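A minimal sketch of this grouping step, substituting scikit-learn's DBSCAN for the paper's density-based clustering (an assumption on our part, as is the choice of eps on the order of the nearest-neighbor spacing):

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Group atomic columns whose cycle vertex counts deviate from the perfect
# crystal's values into spatial defect clusters. `coords` are column centers,
# `n_vertices` the per-column cycle sizes, `perfect` the vertex counts seen
# in pristine regions, and `eps` the clustering radius.
def group_defects(coords, n_vertices, perfect, eps):
    coords = np.asarray(coords, float)
    anomalous = ~np.isin(n_vertices, list(perfect))
    labels = DBSCAN(eps=eps, min_samples=1).fit_predict(coords[anomalous])
    clusters = {}
    for point, lab in zip(np.flatnonzero(anomalous), labels):
        clusters.setdefault(lab, []).append(int(point))
    return list(clusters.values())  # each list = one defect's column indices
```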
Using cycle area to find defects
Another method of using cycles to find defects is to look at changes in the cycle's area. Using the cycle's area to look for defects allows for the detection of defects that do not change the local structure's geometric coordination relative to defect-free regions, such as interstitials and vacancies in bulk materials or substitutional impurities. Any cycle area that is much larger or smaller than the average cycle area, beyond some threshold, indicates the presence of a defect near that cycle. This works on a similar idea to previous work where single strontium and lanthanum vacancies were detected by measuring changes in the distance to nearest-neighbor atomic columns [10,11]. The accuracy of using the area to detect such defects is limited by how much deviation in the cycle areas exists in the nominally pristine regions and by the threshold for the change in area that is used to classify the defect.
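A minimal sketch of such an area-based flag; the use of a median/MAD robust score and the 3.5 cutoff are our assumptions, chosen so that the defects themselves do not distort the reference area:

```python
import numpy as np

# Flag cycles whose area deviates strongly from the typical cycle area.
def area_outliers(areas, threshold=3.5):
    areas = np.asarray(areas, float)
    med = np.median(areas)
    mad = np.median(np.abs(areas - med)) or 1e-12  # guard against zero MAD
    score = 0.6745 * (areas - med) / mad           # approximate robust z-score
    return np.flatnonzero(np.abs(score) > threshold)  # indices of flagged cycles
```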
Results and discussion
Here, we demonstrate the method for the detection of defects in 2D and 3D materials and discuss the method's efficacy at finding such defects.
3D bulk-like materials
The ability to study defects using cycles is first demonstrated in a Mo-V-M-O material system, where M can be one of any number of atomic species [26], with Te being the most likely one in the current sample. The Mo-V-M-O compound has been studied as a potential catalyst and can display a variety of interesting phases and defects. In certain areas, this material possesses large stacking faults and missing atomic columns, making it a good material to demonstrate the steps described in the "Methods" section (Fig. 4). In Mo-V-M-O, we believe that we can see the pooling of vacancies or M atoms under the surface of the material (Fig. 5c).

Fig. 4 (c) Atomic columns whose cycles are not part of the perfect crystal. (d) Non-perfect-crystal atomic columns (i.e., those with differing cycles) grouped into defects and colored based on the number of atoms in a defect (single missing column: yellow; two adjoining missing atomic columns: purple; large stacking fault: black).
The potential for large-scale vacancy clusters in this material and large stacking faults can be seen in Fig. 4. Ordinarily, if these types of defects occur below the surface layers, they may not be visible in Z-contrast imaging due to the presence of surface contamination which can obscure slight changes in column intensity. The ability of using cycle area to detect defects within an atomic column was tested using bismuth (Bi) doped silicon (Si) (Fig. 5d). This material was used because the Z-squared difference between Bi and Si makes identifying the Bi locations straightforward as an independent comparison. The ability of cycle size to identify Bi within an atomic column was found to be worse than the reference of Z-squared intensity. Using cycle size, it was only possible to identify the approximate location of roughly 80% of the Bi dopants (Fig. 5e, f ). In areas with more than one Bi dopant in close proximity, it is difficult to identify the number and exact location of the dopant. However, for isolated Bi, it is much easier. The 80% identification rate is due to the depth of the defects in the material, as intensity can help identify defects at a greater depth than using the distortion in the local structure on which cycle area identification relies. This result is in line with previous works that have used distortions in the local lattice to identify defects [10,11].
2D materials
For 2D materials, we have selected rhenium-doped MoS₂ (Fig. 6a). This material was chosen due to the nature of the defects available, namely rhenium (Re) dopants at molybdenum (Mo) sites along with single and double sulfur (S) vacancies. We filtered this image and found all the atomic columns using the procedure described in the Methods section. Using the centers of the atomic columns, the cycles are found for each atomic column more than 2.5 times the average nearest-neighbor distance from the edge. The number of points in each cycle is analyzed and the defects are subsequently categorized (Fig. 6b). Using the number of points in the cycle, all of the missing S columns were detected and categorized into single missing columns, two adjacent missing columns, and three adjacent missing columns. In MoS₂, a fully missing S column is a divacancy. To find the single S vacancies, the areas of the cycles were used (Fig. 6c). Sulfur vacancies cause a noticeable decrease in the area of cycles. No method was found to identify the Re dopants using cycles because the frequent occurrence of S vacancies near the Re makes the use of cycle area unreliable. The presence of S vacancies near Re dopants may be explained by the differing electronic properties of these defects in MoS₂. Rhenium dopant atoms are shallow donor defects [27,28], whereas sulfur vacancies are deep acceptor defects [29,30]. Therefore, the S vacancies act as electron traps which can compensate for the excess electrons introduced by the Re dopant atoms. The electron transfer leads to an energy gain, making the pairing of the two defects energetically favorable.
Conclusions
We described a method to detect structural defects in materials based on the concept of cycles from graph theory. We tested the method on STEM data for both thin and thick diperiodic materials (i.e., 2D and 3D or bulk-like materials, respectively). The method is best suited to finding defects in 2D materials, but it can supply useful information about the presence of defects in thicker materials as well. In 2D materials, we demonstrated the ability of the method to distinguish a number of common defects, including interstitials and vacancies. In practice, the present method provides a relatively fast, non-preconditioned approach to identify and classify defects in atomic-scale images, which can augment existing methods in an ensemble approach.
"Materials Science"
] |
Things to know about Bayesian networks: Decisions under uncertainty, part 2
Bayesian networks help us model and understand the many variables that inform our decision‐making processes. Anthony C. Constantinou and Norman Fenton explain how they work, how they are built and the pitfalls to avoid along the way
We constantly make decisions - some routine, such as what to wear; some more complex and important, such as choosing where to live and work. The more complex a decision, the less likely we are to have all the information we need to make the best possible choice. Because incomplete information requires us to reason with probabilities and risk, people (even experts) often arrive at non-optimal decisions.
More complex decisions are usually based on a host of factors or variables. For example, imagine you are the owner of a top-flight football (soccer) team and you must decide before the start of each season how much money to invest in new players. There are many factors to consider: the likely income from sales of unwanted players; the relative net spending of other teams; and the possible negative impact on team performance of making too many personnel changes at once. We can map the decision the team owner needs to make, and all the different variables, using a Bayesian network (BN). This is a graphical model that captures the relationship between variables under causal or influential assumptions.
In this article, we provide an overview of BNs and the kind of assumptions required to build useful networks for complex decision-making. We highlight the need to fuse data with expert knowledge, and we describe the challenges in doing so. Finally, we explain why, for fully optimised decision-making, extended versions of BNs, called Bayesian decision networks, are required.
The basics
A BN is a diagram which uses arrows ("directed arcs") to show how various factors - represented by elliptical nodes - influence one another. Each node comes with its own probability table, known as a conditional probability table (CPT), reflecting the chances of various outcomes resulting from the different influences directly affecting it. Figure 1 illustrates part of a BN model which provided long-term predictions for football team performance.¹
Building a Bayesian network
Constructing a BN involves determining both its structure and CPTs. We can do this by eliciting knowledge from domain experts (the knowledge-based approach), learning from data (the data-driven approach), or a combination of the two (information fusion). For the football model, we used both data and knowledge. Specifically, knowledge was used to construct the influential structure of the model, whereas data were used to learn the CPTs of the variables that make up the model.
In general, algorithms that deduce BNs from data (called "machine learning algorithms") can be classified as either "search-and-score" or "constraint-based" (or "hybrid", which is simply a combination of the two). In the search-and-score approach, we search over different structures and score them, using one or more scoring functions, based on how well the fitted distributions agree with the empirical distributions. This approach typically aims to maximise predictive accuracy associated with targeted variables of interest. In the constraint-based approach, we check for causal conditions between variables in sets of triples in order to discover causal relationships by performing what are called "conditional independence" tests. For example, we can reconstruct the BN fragment of Figure 3 by discovering that while changes in players' quality and league position are predictive of each other, the association is eliminated conditional on team performance.
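As a rough illustration of the search-and-score idea, the sketch below scores candidate structures with a BIC-style function (log-likelihood minus a complexity penalty) and keeps the best. The candidate structures, likelihood values, and sample size are invented for illustration; this is not a full learning algorithm.

```python
# Hypothetical sketch of search-and-score: score candidate structures by
# fit minus a complexity penalty (a BIC-style score), keep the best.
import math

def bic_score(log_likelihood: float, n_params: int, n_samples: int) -> float:
    """Higher is better: data fit penalised by model complexity."""
    return log_likelihood - 0.5 * n_params * math.log(n_samples)

# A real learner would enumerate (or greedily explore) candidate edge
# sets; here we just compare two hand-written candidates.
candidates = {
    "quality -> performance -> position": (-410.2, 5),   # (logL, n_params), invented
    "quality -> position, performance -> position": (-405.9, 7),
}
n = 300  # hypothetical sample size

best = max(candidates, key=lambda s: bic_score(*candidates[s], n))
print("best structure:", best)
```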
A reason why BNs are so powerful is that they can perform both predictive and diagnostic inference. For example, we can predict a team's league position for a given value (observation) of team performance, but we can also enter a required state of league position as an observation to examine what level of team performance could explain that observation. These standard algorithms are called "Bayesian propagation" algorithms because they rely on Bayes' theorem, in which the probability of an unknown variable is updated after evidence relevant to that variable is observed. In a BN, Bayesian probability inference is driven by the three causal classes illustrated in Figure 2. Specifically:
1. Causal chain: This describes variables that have a knock-on effect on each other. For example, changes in players' quality impacts team performance, which impacts league position. This means that league position is independent of changes in players' quality once we know team performance.
2. Common effect: This is where two different variables, such as transfers in and transfers out, both impact a third variable, such as net transfer spending. This means that transfers out is dependent on transfers in once we know net transfer spending.
3. Common cause: This is where two different variables, such as league position and attendance, are impacted by the same variable, such as team performance. This means that attendance is independent of league position once we know team performance.
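As a minimal illustration of the causal-chain case, the following sketch uses hypothetical probabilities (all numbers are invented) to show both predictive inference, by marginalising over the intermediate variable, and the conditional independence the chain implies.

```python
# Minimal sketch of propagation on a three-node causal chain:
# players' quality -> team performance -> league position.
# All probabilities are hypothetical, for illustration only.

# CPT: P(performance | quality)
p_perf = {
    "good": {"high": 0.8, "low": 0.2},
    "poor": {"high": 0.3, "low": 0.7},
}

# CPT: P(position = top | performance)
p_top = {"high": 0.7, "low": 0.1}

# Predictive inference: P(position = top | quality = good),
# marginalising over the intermediate variable (team performance).
p_top_given_good = sum(
    p_perf["good"][perf] * p_top[perf] for perf in ("high", "low")
)

# Conditional independence: once performance is known, quality adds nothing:
# P(top | performance = high, quality = anything) = p_top["high"].
print(f"P(top | good quality)     = {p_top_given_good:.2f}")  # 0.58
print(f"P(top | high performance) = {p_top['high']:.2f}")     # 0.70, regardless of quality
```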
These causal assumptions are also vital if we wish to reason about an intervention. Figure 3 shows two different representations of the relationships between four of the variables used in the BN model of Figure 1. If we rely on data alone to determine associations between variables, we would arrive at the model shown in (a), which is not a BN because it does not capture the direction of influence between factors. In contrast to model (a), model (b) indicates that an intervention on attendance will have no effect on other parts of the model, whereas an intervention on changes in players' quality will have an effect on all of the model variables. In model (a), the association between league position and attendance comes via the common cause, team performance. The causal assumptions established by model (b) allow us to simulate the effect of interventions, actions, or decisions and hence, in contrast to model (a), enable us to move from mere prediction to risk management. Scientific research is heavily driven by interest in discovering, assessing, and modelling cause-and-effect relationships as guides for action. This means research requires models more like (b) than (a). Although the distinction between association and causation is nowadays well understood, what has changed is mostly the way the results are stated, rather than the way the results are generated. Consequently, too often, important conclusions and recommendations are based on models similar to (a), and we believe this is a problem.
Issues with structure learning
Many of the proposed structure learning algorithms are assessed with simulated data, which are generated using hypothetical models that are assumed to represent reality, or "ground truth". Simulated data are then taken as input by an algorithm in an attempt to reconstruct the hypothetical model that was initially used to generate the input data. Figure 4 illustrates this process. Each algorithm is then judged both on its speed and accuracy in terms of how well the learnt model resembles the hypothetical model.
While structure learning algorithms tend to perform well when tested with simulated data, this level of performance is not repeated when the input data set represents observations from the real world. This is because of four main challenges:

1. Missing data points: Real-world data sets are often incomplete, yet many of the proposed structure learning algorithms require, and are assessed with, complete data sets. While some algorithms accept incomplete data sets as input, they tend to produce inadequate models.

2. The need for really big data: The accuracy of the learnt model depends on the available data. The number of possible models that can be discovered increases with the number of states and variables that form the input data set. High-dimensional data sets, and the resulting complex models, require extremely large volumes of data to learn a reasonably accurate BN. However, for many critical real-world problems we often have data for a large number of variables but with (relatively) insufficient sample size. For example, the BN model in Figure 1 is based on just 20 samples (teams) for each of the 15 Premier League seasons between 2000 and 2015. Moreover, even when there are plenty of data, there is still the issue of those data potentially being biased and thus likely to be precise but occasionally inaccurate.

3. Latent confounders: These are variables which are missing from the data and which may have a major impact in explaining observed data. In our football model, the squad stability factor was missing from the data and, had it not been incorporated by expert judgement, as we later illustrate in Figure 5, it would have been a latent confounder. Structure learning algorithms rarely account for such variables and their resulting impact on what can and cannot be discovered. In the real world, latent confounders are impossible to avoid, simply because they are often unknown unknowns.

4. Data quality: To be able to learn the "correct" causal network, we require the "correct" data. Simulated data satisfy this requirement, since they are generated based on clearly defined models that are assumed to represent the ground truth. However, real-world observations rarely adhere to causal representation in the same way simulated data do. Because of this, results from simulated data tell us very little about the extent to which modelling assumptions hold true for real-world applications.
In addition, there is no agreed evaluation process to determine which algorithm is "best". Different evaluation methods often lead to inconsistencies, whereby one evaluator determines algorithm A to be superior to algorithm B while another determines the reverse. Moreover, there is a risk of erroneously rejecting a good algorithm while accepting a poor one. In practice, we never know what the ground-truth model really is, and so we require different evaluation procedures when applying structure learning to a problem in the real world.
In domains such as bioinformatics, applying structure learning algorithms to large data sets can reveal new insights that would otherwise remain unknown. But structure learning algorithms are less effective in areas where domain experts have knowledge about the underlying mechanisms of the problem. Incorporating knowledge into the structure learning process inevitably leads to better-quality models. Such knowledge comes in the form of constraints in terms of what can and cannot be discovered.
One constraint is temporal order, whereby event B occurs after A, and hence B cannot influence A (for example, changes in players' quality occurs first, so cannot be influenced by future team performance). We may also specify constraints about direct relation (for example, indicating that there is an arc between attendance and earnings, with or without specifying the direction of the arc). It is also possible to constrain the graph structure by specifying a simplified "best-guess" model with a metric that forces the algorithm to assign higher scores to models closer to the best guess. For example, since the structure of the BN model in Figure 1 is based on knowledge, it can serve as an initial best-guess model for a structure learning algorithm. An important benefit of such constraints is that they help to reduce the search space significantly, which relaxes some of the structure learning issues discussed earlier.
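A hedged sketch of how such constraints might be encoded follows; the tier assignments and the helper function are illustrative conventions for this article's football variables, not a real library's API.

```python
# Hedged sketch of encoding a temporal-order constraint for structure
# learning: a variable in a later tier cannot point back to an earlier one.
TIER = {
    "changes in players' quality": 0,  # happens first
    "team performance": 1,
    "league position": 2,
    "attendance": 2,
}

def edge_allowed(parent: str, child: str) -> bool:
    """Temporal order: a cause cannot occur after its effect."""
    return TIER[parent] <= TIER[child]

# A learner would consult this check when proposing candidate arcs:
print(edge_allowed("team performance", "league position"))              # True
print(edge_allowed("team performance", "changes in players' quality"))  # False
```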
Information fusion
When the historical data fail to capture all of the key factors of interest, there are two options: either ignore the missing factors, or incorporate them as knowledge-based factors.
For example, the data used to construct the BN model in Figure 1 indicate that teams which increase wages and net transfer spending faster than their adversaries improve their performance in the league, on average. However, what the data fail to capture, but for which we have knowledge, is that extreme increased spending does not necessarily translate immediately into improved performance, partly due to some form of team instability caused by multiple substantial changes in players. This may have been the case for Manchester City after it was bought by the Abu Dhabi United Group in 2008: the new owners purchased many high-profile players to improve the squad, but the team finished in a lower position with fewer points than the previous season. This missing factor, squad stability, is captured as a knowledge-based variable in the revised football model in Figure 5, indicated with a dashed node and dashed arcs.
In the absence of such explanatory factors, we run the risk of deducing generalised expectations from data alone, which may negatively influence optimal decision-making. However, knowledge about missing explanatory factors enables us to question such inferences, improve the structure of a BN model, and improve accuracy. 1 But how do we properly fuse knowledge with data? It is vital to acknowledge that the statistical outcomes of data-driven variables are already influenced by the causes an expert might identify as variables missing from the data set. As a result, the incentive is to incorporate knowledge-based variables so that they do not influence data-driven expectations, as long as the knowledge-based factors remain unobserved within the model. 2 In the football example in Figure 5, this means that the incorporated variable, squad stability, preserves the expectation of team performance, as long as the knowledge-based variable remains unobserved. Increased spending continues to correlate positively with team performance in the same way, but the augmented model explains the correlation in the knowledge-based variable and hence reduces the risk of flawed interventions.
Bayesian decision networks
A BN model provides an effective graphical representation of a problem and can be used for multiple types of complex inference. Despite these benefits, a BN model alone is incapable of determining the optimal decision pathways of a problem. For example, we may want to determine the optimal net transfer spending to achieve certain improvements in team performance. To achieve this, a BN needs to be extended to a Bayesian decision network (BDN), also known as an influence diagram. A BDN contains additional types of nodes and arcs, as illustrated in Figure 6.
In a BN, all nodes are considered uncertain "chance nodes". But in a BDN, if a node corresponds to a decision to be made we distinguish it as a "decision node" (drawn as a rectangle). We also introduce "utility nodes" (drawn as diamonds), which are targeted for maximising or minimising a particular outcome of interest, and "information arcs" (drawn as dashed arcs) entering decision nodes, indicating that the decision is determined by information retrieved from parent nodes. In contrast to normal BN arcs, information arcs only pass information forward.
Specifying a BDN inevitably requires expert knowledge since we need to specify the decision options available to the decision-maker, and the utilities we seek to minimise or maximise. In fact, there are certain structural rules we need to follow when transforming a BN into a BDN, such as ensuring that only informational arcs enter a decision node. 3,4 Figure 6 illustrates how a BN fragment may look when modified into a BDN, depending on which variables we define as decisions and utilities. Note that the example in Figure 6 also illustrates how earnings will determine next season's decision on net transfer spending. This process enables temporal analysis from a past BN to a future BN, and models extended towards this kind of analysis are called dynamic Bayesian (decision) networks.
Costs and benefits
With the wide availability of software that makes it easy to build and run models efficiently, there has been an explosion of interest in BNs for solving complex decision problems under uncertainty. In addition to the examples in this article, BNs have been used to evaluate the probative value of complex forensic evidence, to provide more accurate diagnoses in medicine, and to deliver more accurate predictions of financial risk. However, for fully optimised decision-making we require BDNs, which enable us to maximise or minimise different outcomes of interest, whether these involve the most cost-effective route, maximum impact, minimum risk, or some equilibrium among them.
The benefits of BDNs, however, come at a cost of significant effort due to the high levels of manual model construction they currently require. This is because we need to establish the various decision support requirements and their associated cost and benefits, and then incorporate them into the model and appropriately combine them with data. For example, we may want to determine the optimal treatment combination in a particular sequence, to control unexplained symptoms or cure a disease, while at the same time taking care to minimise the risk of potential known and unknown side effects.
While some manual knowledge-based contribution is inevitable when building decision models, future research promises improvements in the ways we establish and model relationships (causal or other) between real-world factors, to allow for faster and more accurate optimal decision-making solutions under uncertainty. | 3,999.8 | 2018-04-01T00:00:00.000 | [
"Computer Science"
] |
The efficacy of machine learning models in lung cancer risk prediction with explainability
Among many types of cancer, lung cancer remains, to date, one of the deadliest around the world. Many researchers, scientists, doctors, and people from other fields continuously contribute to early prediction and diagnosis. One of the significant problems in prediction is the black-box nature of machine learning models: although detection rates are comparatively satisfactory, users cannot see how a model came to its decision, causing trust issues among patients and healthcare workers. This work applies multiple machine learning models to a numerical dataset of lung cancer-relevant parameters and compares their performance and accuracy. After comparison, each model is explained using different methods. The main contribution of this research is to give logical explanations of why a model reached a particular decision, in order to build trust. The research is also compared with a previous study that worked with a similar dataset and took expert opinions regarding its proposed model. We show that, using hyperparameter tuning, our approach achieves better results than both their proposed model and specialist opinion, with an improved accuracy of almost 100% in all four models.
Introduction
Lung cancer is one of the most commonly diagnosed cancers worldwide to date. It is a severe disease that affects individuals from all walks of life, regardless of their lifestyle and surrounding environment. This cancer lies within the intricate network of the human respiratory system and arises as a silent menace, often remaining unnoticed until it reaches an advanced stage [1]. It can manifest in various forms, from the insidious creep of small cell lung cancer to the more common non-small cell lung cancer, each presenting its own challenges in diagnosis and treatment. The impact is profound, not just physically but emotionally and socially as well. One's journey through lung cancer is often fearful and uncertain, requiring tough decisions, starting from the strategic decision about risk stratification in screening programs to determine eligibility [2,3].
In the United States, lung cancer is the second most common cancer in both men and women. The American Cancer Society estimates that there will be 234,580 new cases of lung cancer in 2024, including 116,310 in men and 118,270 in women, and about 125,070 deaths [4]. It also reports that the lifetime chance of developing lung cancer is about 1 in 16 for men and 1 in 17 for women, including both smokers and non-smokers, with much higher chances for smokers. Based on five-year relative survival rates, the Surveillance, Epidemiology, and End Results (SEER) database shows a 28% survival probability for non-small cell lung cancer and 7% for small cell lung cancer [5]. Data collected by Cancer Research UK (2016-2018) show that lung cancer is the third most common cancer in the UK, with around 48,500 new cases every year, including 23,300 in women and 25,300 in men, and a survival rate of only 10% reported for 2013-2017 [6]. In Asian countries such as China, 19.2% of cancer-related deaths were attributed to lung cancer in 2017 [7]; India had 8.1% of cancer-related deaths [8]; and Malaysia reports that lung cancer contributes 10% of all malignancies [7]. These figures underline the seriousness of prevention and early diagnosis.
Lung cancer has many risk factors; smoking is number one, as reported by the Centers for Disease Control and Prevention (CDC) [8]. According to the CDC, smoking is linked to 80-90% of lung cancer deaths in the USA. Secondhand smoke also puts people who do not smoke themselves, but are exposed to smoking, at high risk of lung cancer; the CDC reports that around 14 million children were affected from 2013 to 2014. The second leading cause of lung cancer is radon, a gas that forms in rocks, soil, and water; it can enter buildings through cracks and cause lung cancer if inhaled over a long time. Other factors include radiation therapy, abnormal diet, and family genetic history. Risk prediction is more important in lung cancer screening than clinical assessment, as demonstrated by many trials, including the Bach model [9], the Lung Cancer Risk Assessment Tool (LCRAT) [10], the Lung Cancer Death Risk Assessment Tool (LCDRAT) [10], the Liverpool Lung Project (LLP) model [11,12], and the PLCOm2012 model [13,14].
This study addresses the relationship between lung cancer risk factors and early symptoms. Four machine learning models are tuned to detect low, medium, or high lung cancer risk levels. Most studies only perform detection and provide no explanation due to the black-box nature of machine learning. This study overcomes that limitation by explaining each model's behaviour using different explanation methods, such as decision boundaries, Local Interpretable Model-agnostic Explanations (LIME), and tree extraction. The primary motivation is to explain model results to non-technical people or patients so they can trust the process more. The significant contributions are: 1. Exploring the dataset to identify relations between different features.
2. Tuning four machine learning algorithms to outperform previous best results.
3. Explaining the model behaviour and reasoning through explainable AI methods.
The rest of the paper is divided into multiple sections, including a literature review in section 3, materials and methods in section 4, result analysis in section 5, discussion in section 6, limitations in section 7 and conclusion in section 8.
Literature review
Though medical science has advanced considerably, cancer remains a highly critical and significant concern for oncology, healthcare professionals, and AI-based medical science researchers. Diagnosis of lung cancer mainly relies on manual pathology screening, which is highly prone to error due to the human nature of manual film reading. A good number of algorithms and methods have been developed using machine learning (ML) and deep learning (DL) to identify cancer from numerical or image-based datasets [15]. Early detection remains crucial for improving survival rates among patients.
Various ML and DL-based techniques have been applied to many kinds of cancer for diagnosis, prognosis, and risk-factor analysis [16][17][18][19]. Specifically, AI has been applied to lung cancer risk assessment, utilising diverse data sources such as medical imaging, genetic markers, clinical records, and environmental factors [20]. ML models incorporating clinical data, such as patient demographics, smoking history, and symptoms, have demonstrated efficacy in predicting lung cancer risk [21]. Among the models tested there, the Support Vector Machine (SVM) showed the highest detection accuracy, but no reasoning was provided, so model explainability was lacking. Four ML models, SVM, Naïve Bayes (NB), Decision Tree (DT), and Logistic Regression (LR), were applied to predict lung cancer from two datasets (collected from UCI and Data World), achieving the highest accuracy of 96.9% with LR on UCI and 99.2% with SVM on Data World [22]; in both cases, no model explainability was provided. Using five different data mining techniques, SVM, K-Nearest Neighbors (KNN), NB, DT, and Artificial Neural Network (ANN), lung cancer prediction has been performed under three case scenarios [23]. The best accuracy achieved was 93%, using the ANN algorithm and the SMOTE upsampling technique on an unbalanced Kaggle dataset [24]. Biomarkers are also used to identify early lung cancer by analysing combinations of metabolic factors with ML methods; among the models used, a Neural Network (NN) and NB achieved 100% classification accuracy [25]. None of these models explained the internal reasoning behind their accuracy, which affects the trustworthiness of the application. A customised Lung Cancer Prediction Tool (LCPT) has been developed to predict lung cancer using risk factors and compared with expert opinion to verify the result [26]; it achieved an accuracy of 93.33%, greater than the specialist opinion of 86.66%. A Random Forest (RF) model was also used to generate ten random trees to compare results with LCPT, and the authors explained how the factors led to the decision using a Degree of Importance (DOI).
Image processing of computed tomography (CT) scans has been widely used to diagnose lung cancer with different computer vision techniques [27]. Previously, 2D CT images were hugely popular for classification and segmentation; as computational power has increased, researchers are now exploring 3D images and have achieved excellent results. Researchers are using DL methods to identify high-risk smokers suitable for lung cancer screening CT using chest radiographs; such a model's performance was validated, showing promise in improving the selection process for lung cancer screening [28]. A customised deep CNN has been proposed to classify interstitial lung diseases (ILDs) from CT image patches, achieving around 85.61% accuracy, higher than VGG-Net's 78% [29]. 3D-VNet and 3D-ResNet architectures have been developed to train on 3D CT image slices for segmentation and classification [30], achieving a 99.3% Dice Similarity Coefficient (DSC) for segmentation and 99.2% accuracy for classification on the LUNA16 dataset. Though these accuracy and segmentation results are promising, model explainability is very difficult to show due to the complexity of CNN architectures. Another study developed a cascade 3D-UNet to detect lung cancer bone metastases (LCBM) from CT images; compared with five radiologists, the model performed better at detecting LCBM, with higher AUROC (0.875 vs. 0.819) and sensitivity (0.894 vs. 0.892) in an observer-independent study [31]. Combining a transformer with U-Net, an architecture named UNETR has been used to segment 3D images of lung cancer on the Decathlon dataset, achieving an accuracy of 97.83% with a DSC of 96.42% and a classification result of 98.77% [32]. The authors showed performance under different hyperparameters, such as the optimiser, number of epochs, and activation functions; still, CNN explainability remains unaddressed. An ensemble multi-view 3D CNN model has been designed for risk stratification of invasive lung adenocarcinoma using thin-slice CT scan images, with an AUC of 91.3% for benign/malignant diagnosis and 92.9% for pre-invasive/invasive nodule classification [33]. It also outperforms senior doctors in risk assessment, with 77.6% accuracy, but offers no information on why it can outperform doctors.
Explainable AI, or XAI, is gaining significant attention because of its ability to reveal the reasoning behind a model's prediction, classification, or segmentation. Various works explain AI models to understand the importance of features; for example, in chronic wound images, LIME has been applied to understand regions of interest [34]. A custom XAI diagnostic model has been proposed to interpret a TabNet model with causal graphs on mammography reports of breast cancer [35]. An ensemble learning framework with XAI has been developed by ref. [36] to determine breast cancer with explanations. The SHAP model has been used to explain lung cancer reasoning from biomarker values identified from CT scan reports by ref. [37], where an AI CAD model was developed using multiple ML methods. Many other works have used XAI methods to explain model behaviour in lung cancer detection with both ML and DL methods [38][39][40][41].
Regarding risk factors, image-based detection is almost entirely concerned with the diagnosis of lung cancer, whereas risk factors are mostly captured as numerical data covering vital factors and symptoms. This research explores the relationship between these factors and lung cancer risk using four popular machine learning models, SVM, KNN, DT, and RF, and explains why each model behaved as it did. This work improves on a previous study [26] that addressed issues on the same dataset and proposed a custom LCPT model. We also show that higher classification accuracy can be achieved using parameter tuning.
Dataset description
The dataset is taken from Data World [42]. The features are: Age, Gender, Air Pollution, Alcohol use, Dust Allergy, Occupational Hazards, Genetic Risk, Chronic Lung Disease, Balanced Diet, Obesity, Smoking, Passive Smoker, Chest Pain, Coughing of Blood, Fatigue, Weight Loss, Shortness of Breath, Wheezing, Swallowing Difficulty, Clubbing of Fingernails, Frequent Cold, Dry Cough, and Snoring. Details of the dataset are shown in Table 1. The association of features with risk-level classes is significant. People aged around 35-40 are more likely to have lung cancer. High air pollution levels also correspond to higher risk, along with alcohol use; people with less alcohol use show lower chances of getting lung cancer. A similar trend is seen among other features: high values generally lead to higher risk, except for shortness of breath, wheezing, clubbing of fingernails, and snoring, where a specific range of values carries the higher risk. The full feature distribution histogram is shown in Fig 1.
Feature distances have been calculated using a dendrogram, shown in Fig 2. The dendrogram represents a hierarchical clustering of the features based on their similarity or dissimilarity. The y-axis measures the distance between clusters, with lower values indicating greater similarity and likelihood of occurring together. Each feature on the x-axis corresponds to a symptom or factor of lung cancer. As we move up the hierarchy, clusters form by joining related symptoms and factors: for example, Fatigue and Snoring cluster together, suggesting they are likely to occur together. Broader clusters emerge as we ascend the dendrogram, where Age and Gender show that they are unlikely to be related to each other. The red cluster indicates a close relation between risk level and the other highly connected features.
A correlation matrix has also been generated to show the relationships between features, in Fig 3. Light colours represent higher correlation, and dark colours lower correlation. Age and Gender have the lowest correlations with Level, at 0.079 and -0.16, while Obesity and Coughing of Blood have higher correlations of 0.82 and 0.77. Among the features, high correlation values (0.82-0.88) are seen among Occupational Hazards, Genetic Risk, Alcohol use, and Dust Allergy. Further high correlations, between 0.79 and 0.82, are seen between Chest Pain and Occupational Hazards, Genetic Risk, Chronic Lung Disease, and Balanced Diet. The lowest correlation, -0.27, is between Smoking and Weight Loss.
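A minimal sketch of this correlation analysis follows, assuming pandas and matplotlib; the CSV filename is hypothetical, standing in for the Data World dataset.

```python
# Sketch of the correlation analysis described above.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("lung_cancer_risk.csv")    # hypothetical filename
corr = df.corr(numeric_only=True)           # pairwise correlations

fig, ax = plt.subplots(figsize=(12, 10))
im = ax.imshow(corr, cmap="viridis")        # light = high, dark = low
ax.set_xticks(range(len(corr)), corr.columns, rotation=90)
ax.set_yticks(range(len(corr)), corr.columns)
fig.colorbar(im)
plt.tight_layout()
plt.show()

# Spot checks against the values reported in the text, e.g.:
# corr.loc["Obesity", "Level"]   # reported as 0.82
```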
Model training and validation
Four widely known ML models (SVM, KNN, DT, RF) were selected to train and validate the data for predicting risk levels. The dataset was split 70%/30% into train and test sets, with 219 low, 235 medium, and 246 high risk-level data points in training and 84 low, 97 medium, and 119 high risk-level data points in test (details shown in Table 2). Training is fast, taking around 1-2 seconds for all four models. From the dataset analysis above, the correlation matrix suggests that Age and Gender might not contribute much to the risk factor. A Random Forest classifier with n_estimator set to 20 was used to calculate feature importance; it shows that Gender has zero importance and Age is the second lowest, with a score of 0.0022. Hence, these two features were removed from the training dataset. The full feature importances are shown in Table 3. Parameter tuning was applied to select the best parameters from given dictionaries using the Grid Search algorithm with cv = 5 and n_jobs = 5. An F-beta scorer was used with beta = 2 and a micro average. For SVM, parameters were given with suitable ranges, such as C = [0.1, 1, 10] and kernel = [rbf, poly, sigmoid, linear], along with class_weight and decision_function_shape options. Though parameter tuning was also tested on DT and RF, as they already achieve 100% accuracy, the tuned parameters are simply the first values of the dictionary. The improved parameters found by the grid search are {'C': 0.1, 'class_weight': 'balanced', 'decision_function_shape': 'ovo', 'kernel': 'linear'} for SVM and {'algorithm': 'auto', 'leaf_size': 10, 'n_neighbors': 3, 'p': 1, 'weights': 'uniform'} for KNN. All four models were trained again with the selected parameters, and this time SVM, KNN, and DT showed improved k-fold test accuracy.
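A minimal sketch of this tuning setup, assuming scikit-learn, is shown below. The SVM grid is reconstructed from the ranges and best parameters reported above, so treat the exact option lists as assumptions; X_train and y_train stand for the 70% training split.

```python
# Sketch of grid search with an F-beta (beta=2, micro-average) scorer.
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer, fbeta_score

scorer = make_scorer(fbeta_score, beta=2, average="micro")

svm_grid = {
    "C": [0.1, 1, 10],
    "kernel": ["rbf", "poly", "sigmoid", "linear"],
    "class_weight": [None, "balanced"],            # assumed option list
    "decision_function_shape": ["ovo", "ovr"],     # assumed option list
}

search = GridSearchCV(SVC(), svm_grid, scoring=scorer, cv=5, n_jobs=5)
# search.fit(X_train, y_train)   # X_train/y_train: the training split
# print(search.best_params_)     # e.g. {'C': 0.1, 'kernel': 'linear', ...}
```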
Model explainability
Machine learning model explainability refers to understanding and interpreting how a model arrives at its predictions, in this case risk levels. It is about demystifying the black-box nature of complex models and making their internal functionality more transparent and understandable to humans. Understanding why a model makes a specific prediction or classification can help build trust in its reliability and fairness and provide valuable insights for improving the model's performance or addressing biases. There are various techniques for enhancing model explainability, ranging from simple methods like feature importance analysis to more sophisticated approaches, such as generating human-readable explanations for individual predictions using popular algorithms like LIME or Shapley Additive Explanations (SHAP).
Support vector machine (SVM).
A decision boundary was plotted and analysed to explain the outcome of the SVM model, shown in Fig 4. A decision boundary is a 2D plot, whereas the dataset has many features; hence, principal component analysis (PCA) was applied to reduce the dimensionality of the data. After the 2D PCA, minimum and maximum values were extracted to create a mesh grid. The SVM was trained to produce Z-axis values over that grid. The decision boundary was then drawn using the mesh grid and Z-axis values, and a scatter plot was used to show the training and testing data points.
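The sketch below follows the described procedure with synthetic stand-in data (the real dataset is not reproduced here): project to 2D with PCA, fit the SVM on the projection, then evaluate it over a mesh grid to trace the boundary.

```python
# Hedged sketch of the PCA + mesh-grid decision boundary procedure.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, n_classes=3,
                           n_informative=6, random_state=0)  # stand-in data

X2 = PCA(n_components=2).fit_transform(X)
clf = SVC(kernel="linear", C=0.1).fit(X2, y)   # tuned values from the text

# Mesh grid spanning the extent of the two principal components.
x_min, x_max = X2[:, 0].min() - 1, X2[:, 0].max() + 1
y_min, y_max = X2[:, 1].min() - 1, X2[:, 1].max() + 1
xx, yy = np.meshgrid(np.linspace(x_min, x_max, 300),
                     np.linspace(y_min, y_max, 300))

Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.contourf(xx, yy, Z, alpha=0.3)          # decision regions
plt.scatter(X2[:, 0], X2[:, 1], c=y, s=15)  # projected data points
plt.show()
```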
K-nearest neighbors (KNN).
The LIME method has been used to explain the KNN model. LIME begins by generating a dataset of perturbed instances around the instance of interest: to explain a prediction made by the KNN model, LIME generates new data points by perturbing the features of the instance being explained. The KNN model is then used to predict the output for these perturbed instances. Since KNN is a lazy learner, it directly uses the nearest neighbours from the training data to make predictions without explicitly building a model. Next, LIME fits an interpretable model to the generated dataset of perturbed instances and their predictions (a usage sketch follows at the end of this subsection).

Decision tree.

A decision tree is a popular supervised learning algorithm for classification and regression tasks. It works by recursively partitioning the input space into smaller regions and assigning a label or value to each region. This process creates a tree-like structure where each internal node represents a decision based on a feature, each branch represents a possible outcome of that decision, and each leaf node represents the final prediction or value. The topmost node is the root node, representing the entire input space. Nodes that represent decisions based on features are called internal nodes; each internal node splits the data into two or more subsets based on a feature value. The edges connecting nodes are called branches, representing the possible outcomes of decisions. Nodes at the end of the branches that do not split further are terminal (leaf) nodes; they represent the final predictions or values. At each internal node, the decision tree algorithm selects the best feature and the corresponding threshold to split the data into subsets, aiming to maximise the homogeneity or purity of the subsets with respect to the target variable. After selecting the splitting criteria, the algorithm recursively applies the splitting process to each subset, creating a binary tree structure. The recursive partitioning continues until a stopping criterion is met; standard stopping criteria include reaching a maximum tree depth, achieving a minimum number of samples per leaf node, or no further improvement in homogeneity. Once the tree is constructed, it can be used to make predictions for new instances by traversing the tree from the root node to a leaf node based on the values of the input features. In our case, Frequent Cold is the root node, which links to Air Pollution and Obesity for direct high-risk prediction. A complete tree explanation is shown in Fig 6.
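The usage sketch referenced in the KNN subsection above assumes the `lime` package and an already-fitted KNN classifier; `knn`, `X_train`, `X_test`, and `feature_names` are placeholders not defined in the paper.

```python
# Hedged sketch of explaining one KNN prediction with LIME.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    training_data=np.asarray(X_train),
    feature_names=feature_names,            # e.g. ["Air Pollution", "Fatigue", ...]
    class_names=["Low", "Medium", "High"],
    mode="classification",
)

# LIME perturbs the chosen row, queries knn.predict_proba on the
# perturbations, and fits a local interpretable model around them.
exp = explainer.explain_instance(np.asarray(X_test)[0], knn.predict_proba,
                                 num_features=6)
print(exp.as_list())   # e.g. [("Fatigue > 5", 0.21), ...] (illustrative)
```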
Random forest.

Plotting a decision tree from a random forest ensemble can provide insight into how the individual decision trees within the forest make predictions collectively. While each decision tree in a random forest is trained independently, understanding the structure of a single tree can help in understanding the overall behaviour of the algorithm. However, it is important to note that plotting a single tree does not fully represent the complexity and diversity of the entire ensemble: each tree is trained on a bootstrapped subset of the original dataset and may use a random subset of features at each split. Plotting an individual tree can reveal the specific features and decision criteria it uses to make predictions; by analysing its splits and decisions, one can infer the importance of different features, since features that appear higher in the tree and are used for multiple splits are likely more important in decision-making. While a single tree provides insight into the decision-making process, the strength of random forests lies in aggregating predictions from multiple trees, so plotting several trees and analysing their commonalities and differences can show how the ensemble combines diverse predictions to improve overall accuracy and generalisation. Understanding the decision-making of individual trees within a random forest enhances the interpretability of the model, providing insight into why specific predictions are made and how different features contribute to those predictions. The first decision tree from the random forest model is shown in Fig 7.
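A short sketch of extracting and plotting one tree from the fitted forest follows, assuming scikit-learn; `rf` stands for the fitted RandomForestClassifier (n_estimators=20, as in the text) and `feature_names` is a placeholder.

```python
# Sketch: visualise the first tree of the fitted random forest.
import matplotlib.pyplot as plt
from sklearn.tree import plot_tree

plt.figure(figsize=(20, 10))
plot_tree(rf.estimators_[0],               # first tree of the ensemble
          feature_names=feature_names,
          class_names=["Low", "Medium", "High"],
          filled=True,
          max_depth=3)                     # truncated for readability
plt.show()
```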
Result analysis
Initially, all four models were trained with default parameters, using five-fold cross-validation. SVM achieved a cross-validation accuracy of 95% (+/- 0.02) and a total accuracy of 96.33%; KNN achieved a cross-validation accuracy of 92% (+/- 0.07) and a total accuracy of 99.66%. The decision tree achieved 99% (+/- 0.03) cross-validation accuracy, and the random forest achieved 100% in both cross-validation and total accuracy. Detailed results, including precision, recall, and F1-score, are shown in Table 4 for all four models. The confusion matrix of the test result is shown in Fig 8.
Though the classification accuracy of the tree algorithms is highest at 100%, SVM misses some values; with tuned parameters, its accuracy improves, reaching almost 100% test accuracy even under five-fold testing. Similar improvements are seen for KNN, which rises to 99% test accuracy from the previous 92%, an improvement of almost 7%. The decision tree also improved on the first k-fold test, increasing from 96% to 100%, and decreased 1% on the third k-fold test, but overall remains at 99% test accuracy. The random forest model is unchanged across all folds. A detailed comparison of results is shown in Table 5. The learning curves were also analysed before and after parameter tuning. Before tuning, SVM shows a slow learning curve in training and cross-validation; similar but lower validation scores are seen for KNN. Comparatively, DT and RF show better training and cross-validation scores from the beginning. Learning curves for the four models before parameter tuning are shown in Fig 9.
After parameter tuning, much better learning curves are seen for SVM and KNN. Previously, SVM took around 400 training samples to reach peak accuracy, but now it does so within 300; similarly, KNN's previous peak was at 500 training samples but is now reached within 400. A little performance disruption is seen for DT, but it ultimately maintains its peak accuracy. No noticeable changes were seen for RF, which has a stable learning curve and accuracy throughout the k-fold and fixed-parameter tests. Details of the learning curves after parameter tuning for the four models are shown in Fig 10.
Discussion
Machine learning has always been one of the most popular techniques for analysing numeric data. In particular, it has been widely used in medical science to help healthcare professionals with prognosis, diagnosis, factor analysis, and so on. In this paper, four ML models are used to predict lung cancer risk levels, and the explainability of these models' outcomes is analysed. Previous studies that worked with similar datasets used some standard models but achieved lower performance and provided no logical explanation [22,26]. Ahmad A. et al. even collected expert professional comments to compare with the ML-predicted result, which shows the superiority of ML prediction. Even with CT scans, a CNN shows better accuracy than senior doctors for risk assessment [33]. This underlines the importance of explainability in answering why these models can perform better than professionals.
This paper worked with a lung cancer risk assessment dataset, trained with four machine learning models, achieving higher accuracy than previous work [26]. Hyperparameter tuning was done using the Grid Search algorithm to find the best parameters within given ranges for all four models. After training with those parameters, improvements were seen in the SVM, KNN, and DT models. Random Forest has remained the top scorer from the beginning, due to its ability to generate multiple trees, each splitting on features according to feature weights. SVM, KNN, and DT are among the most popular models for training numeric datasets, and other Kaggle datasets have been trained using these models: one recent work achieved accuracy of around 95.4% for SVM, 93.7% for DT, and 95.2% for KNN [21]; another [22] achieved 99.2% with SVM and 90% with DT on the same dataset.
In both cases, no hyperparameter tuning was done, which might have increased their accuracy, and no model explanations were given. On a custom-collected dataset, SVM, RF, and KNN were applied, gaining 94.7%, 89.5%, and 89.5% model accuracy, respectively. A comparison of these results is shown in Table 6. This paper also explained the models' performance using different explainability methods. A decision boundary was drawn for both training and testing data for SVM; from it, one can see that the data points are well categorised and differentiable from one class to another. The LIME method was used to extract the reasoning behind KNN's classifications: for example, one instance is assigned the "High" risk level due to having Fatigue above 5, Passive Smoker above 5, Obesity between 4 and 7, Alcohol use above 7, and so on. Similarly, the reasons for other levels can be determined. For the Decision Tree and Random Forest classifiers, a single tree is shown to explain why the model came to a particular decision for given feature values.
Limitations of the study
While machine learning (ML) models hold promise for lung cancer risk prediction, there are limitations to their efficacy, especially regarding explainability:

Data Dependence: ML models are only as good as the data they are trained on. Biases in the data, such as under-representation of certain demographics, can lead to inaccurate predictions for those groups.
Focus on Established Risk Factors: Current models primarily focus on well-established risk factors like smoking history. They might miss subtle or emerging risk factors not yet incorporated into the training data.
False Positives and Negatives: ML models can generate false positives (identifying low-risk individuals as high-risk) and false negatives (missing high-risk individuals). This can lead to unnecessary procedures or missed opportunities for early detection.
Conclusion
Lung cancer remains one of the deadliest diseases, with a high mortality rate throughout the world regardless of economic or social conditions. Early prevention matters greatly: it can save not only one life but a whole family. To understand the possibility of lung cancer, it is critical to understand and analyse its risk factors and to know the early symptoms. In this work, a dataset having 22 such properties has been analysed. The machine learning models used are lightweight, easily reproducible, and usable in real life without much technical knowledge. With parameter tuning, almost 99-100% test accuracy was achieved for all four models (SVM, KNN, DT, RF), even with five-fold cross-validation. Each model's decisions were then explained with valid and accessible reasoning through methods such as decision boundaries, LIME, and tree representations. This research assumed a low correlation between Age and Gender and the target, which might not hold for other datasets; this can be regarded as a minor limitation. Future work can include multiple neural-network-based model architectures to deal with more complex datasets and compare results and explainability with ML models. | 6,403.6 | 2024-06-13T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Climate Change Awareness and Adaptation Strategy in Ibadan Metropolis, Nigeria
The discovery of renewable energy as an alternative means of generating power, without depleting the ozone layer or doubling CO2 in the ecosphere, is crucial in managing climate change risks. This study examines the awareness of people in the Ibadan metropolis about the adverse effects of climate change. 300 questionnaires were administered using a random sampling technique in six communities drawn from four local governments out of nine in the Ibadan metropolis in Oyo state. Descriptive statistics and the Chi-square estimation technique were used to analyse the data. The results reveal that both urban and rural respondents have knowledge of climate change and renewable energy, with school teaching highlighted as a key source of that knowledge. Rural respondents are also willing to pay for renewable energy as an adaptation strategy to minimise the effect of climate change caused by carbon emissions and environmental pollution. The study therefore concludes that renewable energy should be introduced as an alternative source of energy at a minimal cost. Keywords: Climate Change, Renewable Energy, Knowledge, Awareness, Carbon footprint, Adaptation. DOI: 10.7176/JRDM/69-05. Publication date: October 31st 2020
Introduction
Education and training are among the elements used over the years to drive human capital development, whether by raising awareness, transferring knowledge, or skills training. Knowledge about changes in the environment is critical to achieving human and sustainable development. Climate change has emerged as a subject of concern in the past few years as scientists have become increasingly worried about the greenhouse effect (Jain 1993). Changes in the climate are not unexpected, but they are expected to affect the frequency and intensity of hazards and the probability of extreme events (for instance, those linked to sea-level rise, extreme heat, and ozone depletion). Climate change is a major threat to people's health and has implications for sustainable development in Africa. Several pieces of evidence of climate change due to human activity have been collected over many years since the industrial revolution. According to the IPCC (2007), global greenhouse gas emissions due to human activities increased markedly after pre-industrial times; specifically, an increase of 70% between 1970 and 2004 was reported. Consequences of higher average temperatures due to the greenhouse effect, such as rises in sea level, desertification, extinction of plant and animal species, shifting agricultural patterns, and increased frequency of extreme weather phenomena such as cyclones, are now considered unequivocal by the scientific community (IPCC, 2007, 2014), with very few divergent views (Khilyuk and Chilingar, 2003).
Therefore, awareness of the adverse effects of climate change, and the need to embrace other sources of energy, especially renewable energy, together with proper waste management, is crucial in managing climate change risk and mitigating the carbon footprint. This study used structured questionnaires to gather information from respondents (workers and students) about their knowledge and understanding of climate change risk and their adaptation to renewable energy and proper waste management as ways of mitigating their carbon footprint. Apart from the introductory section, the paper contains four sections: the first reviews the literature on climate change and renewable energy; the second describes the estimation methods adopted for the study; the third presents the empirical results and findings; and the last contains the concluding remarks.
Literature Review
The state of knowledge regarding awareness of and adaptation to climate change is presented in this subsection. We start with Shahid and Piracha (2016), who analysed awareness of climate change, its impacts, and adaptation at the local level in Punjab, Pakistan. The results showed that local officials at union councils possess a low level of education, are poorly trained, and do not know about international climate change agreements; they are also not adequately equipped to mitigate the effects of climate change. De Sousa et al. (2018) examined farmers' climate awareness, their perceptions of changing climate patterns, and their choices of farming practices to adapt to these changes. The study indicated that reforestation was the preferred adaptation strategy among interviewed farmers and that educational profiles and the size of landholdings drive the adoption of this and other practices. The findings provided evidence to support the design of capacity development interventions targeting specific groups of farmers according to their main crop and education profile. Ahmad et al. (2019) analysed adaptation to climate change among islanders in Malaysia; a total of 400 islanders served as respondents, selected through a multi-stage sampling technique, and the study found moderate to high mean scores for adaptation aspects, namely awareness, dependency, and structure. Ole et al. (2009) examined the perception of climate change and the strategies for coping and adaptation by sedentary farmers in the savanna zone of central Senegal, using focus group discussions and a household survey. They found that households are aware of climate variability and identified wind and occasional excess rainfall as the most destructive climate factors; households also attributed poor livestock health, reduced crop yields, and a range of other problems to climate factors, especially wind. Bostrum et al. (1994) examined public understanding of climate change using a set of exploratory studies and mental models; they found that automobile use, heat and emissions from industrial processes, aerosol spray cans, and pollution in general were frequently perceived as primary causes of global warming, and that the "greenhouse effect" was often interpreted literally as the cause of a hot and steamy climate. Such misconceptions about the meaning of climate change underline the need to create awareness and knowledge of climate change among the citizenry.
Methodology
Global development agendas, past and present, including the Sustainable Development Goals (SDGs), rely on education as a means towards their achievement; the main elements of this are skills training, transferring knowledge, and raising awareness among individuals, institutions, and regional and national governments, thereby empowering them to act as active agents of sustainable human development. The theoretical underpinning of this study is the theory of human development, which emphasises education, skills, and knowledge in achieving economic growth and development. The methodological approach relies on structured questionnaires to gather information from respondents about their knowledge and understanding of climate change risk and their adaptation to renewable energy as a way of mitigating their carbon footprint. The study used random sampling to administer 300 questionnaires in six communities drawn from five local governments out of nine in the Ibadan metropolis in Oyo state: three rural communities (Eruwa, Iyana Church, Idi Ayunre) and three urban communities (Bodija, Akobo, Orogun). Descriptive statistics and the Chi-square estimation technique were used to analyse the data. These communities were chosen to show whether urban settlers are more informed than rural settlers. The questionnaires covered the socio-economic profile of the respondents, the availability of renewable energy and waste management in their communities (carbon footprint), their understanding of climate change, and their efforts to mitigate their carbon footprint by adapting to renewable energy and proper waste management.
Knowledge about climate change
This section discusses the respondents' knowledge of climate change using chi-square tests of independence and their significance. The first question asked respondents whether they have heard of climate change: 90.6 per cent of urban and 80.8 per cent of rural respondents say "yes", showing that respondents have information about climate change. A larger percentage of urban respondents (55.2 per cent) believed that climate change is caused by heavy rainfall, while about 25 per cent believed that all options (greenhouse gas, heat waves, ozone depletion, heavy rainfall, rising temperature/sea level) cause climate change. Among rural respondents, 13 per cent think climate change is caused by ozone depletion, 29.6 per cent chose all options, and 66.4 per cent heavy rainfall. The main greenhouse gas, according to all respondents, is carbon dioxide (61.4 per cent urban, 73.3 per cent rural), followed by nitrogen (24.3 per cent urban, 11.9 per cent rural). Respondents were also asked about the effect of a warmer climate on sea level, rainfall, sunshine, and farmers' crops. A larger percentage believed the sea level will rise (57.9 per cent urban, 54.5 per cent rural), while 27 per cent and 16.8 per cent, respectively, believed it will fall. On rainfall, 29.3 per cent of urban settlers believe it will be lower in some places and higher in others, while 51.5 per cent of rural respondents believe it will be higher in most places. Most respondents (over 40 per cent on average) think a warmer climate will bring more sunshine in most places and affect farmers' crops badly. A chi-square test applied to the data, as revealed in Table 1, shows insufficient evidence to reject the null hypothesis that rural and urban respondents have knowledge of climate change and that their responses are independent of each other; the p-values range between 0.05 and 0.01. Table 1 (extract, effect of a warmer climate on sunshine; urban %, rural %): Less sunshine in most places: 5.0, 8.9; Less in some places and more in others: 32.9, 19.8; The same in most places: 9.3, 11.9; Don't know: 5.7, 2.9. Source: Climate Change Survey 2015.
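A hedged sketch of the chi-square test of independence used here follows, assuming SciPy. The contingency counts are hypothetical, back-derived from the reported "heard of climate change" percentages under an assumed sample split of roughly 140 urban and 101 rural respondents.

```python
# Sketch of a chi-square test of independence on a 2x2 contingency table.
from scipy.stats import chi2_contingency

# Rows: urban, rural; columns: heard of climate change (yes, no).
# Counts are illustrative, not taken from the survey tables.
table = [[127, 13],
         [ 82, 19]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
```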
Sources of Climate Change Knowledge
Information about climate change is very important to how people can mitigate and adapt to climate change risk, and it can spread through awareness and education. Table 2 shows the sources of respondents' knowledge about climate change. Over 35 per cent of respondents learned about climate change in school, followed by radio (27.1 per cent urban, 40.9 per cent rural), television (15.7 per cent urban, 7.9 per cent rural), and books and magazines (12.9 per cent urban, 11.9 per cent rural). The Internet also played an important role, at 6.4 per cent for urban and 1.9 per cent for rural respondents.
Carbon Foot Print Knowledge
Carbon footprint is historically defined as the total set of greenhouse gas emissions caused by an organization, event, product, or person, and is often expressed in terms of the amount of carbon dioxide, or its equivalent in other GHGs, emitted. The single largest source of emissions for the typical household is driving (gasoline use). Transportation as a whole (driving, flying, and a small amount from public transit) is the largest overall category, followed by housing (electricity, natural gas, waste, construction), food (mostly from red meat, dairy, and seafood products, but including emissions from all other food), and services. The way a household handles its waste will determine how it manages the climate change risk caused by its carbon footprint. The most frequently used method of waste disposal identified by the respondents is government-designated refuse dumps, with 43.6 and 44.6 per cent of responses from the urban and rural respondents, respectively. This is followed by burning in open space (18.9 per cent for urban and 22.8 per cent for rural). Collection by a waste management firm recorded 15 per cent for urban settlers and 14.9 per cent for rural. Dumping in open space was also identified by the respondents as one of the most frequent waste disposal methods in the communities. To the urban settlers, the most important challenge associated with waste collection was "ignorance of the people on the benefits of good waste management behaviour" (27.9 per cent), while the rural respondents identified "lack of government-designated waste dumps/skips" as the major challenge (30.1 per cent). The tabulated challenges read (count and per cent, urban / rural): ignorance of the people on the benefits of good waste management behaviour, 39 (27.9) / 28 (27.7); lack of space for siting designated refuse dumps due to the crowded nature of buildings, 17 (12.1) / 13 (12.9); nature of food items consumed by the people, 5 (3.6) / 2 (1.9); nature of luxury items consumed by the people, 0 (0.0) / 0 (0.0); others, 4 (2.9) / 1 (0.9). Source: Climate Change Survey 2015.
Renewable Energy
Adapting and switching to renewable energy to minimize the effect of climate change caused by carbon emissions and environmental pollution is pertinent and is drawing the attention of world leaders. Information about the respondents' types of energy sources, their knowledge of renewable energy, and how much they are willing to pay for it is therefore discussed in this subsection. Table 5 presents the knowledge and usage of renewable energy by the sampled respondents. Respondents in the urban area ranked hydroelectricity as the renewable energy source they are most aware of, followed closely by solar, wind, biomass, and geothermal, in that order. For respondents in rural areas, solar was ranked higher than any other source. Hydroelectricity remains the type of renewable energy most used by both urban and rural respondents; sources like wind and geothermal have not been used by most respondents because they are not readily available. From Table 6, a larger percentage of urban and rural respondents (88.3 and 74.4 per cent, respectively) indicated that they would be able to pay if a renewable energy tariff were introduced, suggesting that respondents have an idea about other sources of energy and are willing to reduce their carbon footprint to avoid the consequences of climate change. The results also show that over 60 per cent of the respondents consider energy costs when building, buying, or renting a house.
Conclusion and Policy Recommendation
The purpose of this study was to generate information from the respondents about their knowledge and understanding of climate change risk and their adaptation to renewable energy as a way of mitigating their carbon footprint. The results revealed the respondents' understanding and knowledge of climate change and renewable energy, covering issues such as the causes and effects of climate change, changes in rainfall and temperature, the sources of their knowledge about climate change, and the actions they are prepared to take to mitigate their carbon footprint. Teaching in school was highlighted as one of the major sources of respondents' knowledge about climate change, supporting empirical findings that knowledge through education and information through awareness are key to mitigating climate change. The findings also revealed that respondents consider energy costs when building, buying, or renting a house. A striking feature of the results is the willingness of rural dwellers to pay for renewable energy; the study therefore concludes that renewable energy should be introduced as an alternative energy source at minimal cost to address issues such as deforestation and thereby reduce the carbon footprint that drives climate change. The study therefore recommends the following. First, since respondents are willing to pay for renewable energy as an adaptation strategy, the issues of affordability and accessibility, which are major problems, should be handled with the utmost urgency. Second, more awareness should be created about the causes of climate change risk in Nigeria and the corresponding adaptation and mitigation strategies. Third, the establishment of proper waste management systems by private and public institutions should be encouraged, since the study shows that waste disposal, energy consumption, and transportation account for the greatest sources of GHGs in Nigeria. Fourth, private and public investment should be made in renewable energy as an alternative energy source to mitigate the carbon footprint. | 3,652.6 | 2020-01-01T00:00:00.000 | [
"Economics"
] |
A Micro-CT Analysis of Initial and Long-Term Pores Volume and Porosity of Bioactive Endodontic Sealers
The evaluation of porosities within the interface of root canals obturated with endodontic materials is extremely important for the long-term success of endodontic treatments. The aim of this study was to compare the initial and long-term volume of pores (total, open, closed) and porosity (total, regional) of three bioactive endodontic sealers: GuttaFlow Bioseal, Total Fill BC Sealer, and BioRoot RCS. Root canals were obturated with the three "bioactive" sealers using the single-cone technique. The volumes of open and closed pores and the porosity were calculated using a micro-computed tomography (MCT) method. The measurements were performed after 7 days (initial) and after 6 months (long-term) of incubation. Statistical significance was considered at p < 0.05. The total volume of pores remained unchanged after the 6-month storage. GuttaFlow Bioseal exhibited a significantly higher long-term volume of open pores than Total Fill BC Sealer. The total porosity of all the tested sealers presented no statistically significant change after the 6-month storage, except for BioRoot RCS: the total porosity of this latter material significantly increased after long-term incubation, especially in the apical region. In conclusion, the use of bioactive sealers with an excessive tendency to create porosities in both short- and long-term periods of storage may compromise the long-term success of endodontic treatments.
Introduction
The root canal filling in endodontic treatments should provide a three-dimensional, fluid-tight seal of the prepared and disinfected space [1]. The most common material used for obturation is gutta-percha (GP), but due to the lack of adhesion to the root dentin, the application of a sealant is indispensable to achieve a suitable sealing at the interface [2]. Root canal sealers fill the space between the GP and the canal wall, flowing into lateral irregularities and accessory canals and spaces between GP points when used in a lateral compaction technique [3].
The first pre-mixed, ready-to-use calcium silicate-based sealer (CSBS) was introduced in 2007 (iRoot SP, IBC, Burnaby, Canada) and triggered significant development of these materials [4]. Today, CSBSs are available in a wide range of products varying in composition, properties, and consistency; CSBSs require water for the hydration reaction through which they set. The aim of this study was to compare the initial and long-term volume of pores (total, open, closed) and porosity (total, regional) of three endodontic sealers: two bio-ceramics, Total Fill BC Sealer (FKG Dentaire, La-Chaux-de-Fonds, Switzerland) and BioRoot RCS (Septodont, Saint Maur Des Fosses, France), and one silicone-based sealer, GuttaFlow Bioseal (Coltène/Whaledent AG, Altstätten, Switzerland). The null hypothesis was that there would be no differences between the three tested sealers in terms of the evaluated parameters and no change in either parameter over time.
Sample Preparation
After research ethics committee approval (RNN/36/20/KE; Lodz, Poland; 11/02/2020), 30 roots of freshly extracted first and second mandibular molars were included in the study. After manual debridement of the root surface with a curette, samples were disinfected in 5.25% NaOCl for 2 h and then stored in 0.5% thymol until experimentation. The sample size was calculated with the following parameters: effect size of 10%, standard deviation of 7%, significance level of 0.05, and power of 80%. A minimum sample size of 9 was determined.
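A sketch of how the stated power analysis can be reproduced, assuming a two-sample design and treating the 10% effect and 7% SD as a standardized effect of 10/7; the statsmodels call is an illustration, not necessarily the tool the authors used.

```python
# Reproducing the stated sample-size calculation (effect 10%, SD 7%,
# alpha 0.05, power 0.80), assuming a two-sample t-test design.
import math
from statsmodels.stats.power import TTestIndPower

effect_size = 10 / 7  # Cohen's d = expected difference / SD
n = TTestIndPower().solve_power(effect_size=effect_size, alpha=0.05, power=0.80)
print(math.ceil(n))  # ~9 per group, matching the reported minimum of 9
```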
The selection of the canals to be included in the study was performed based on a preliminary micro-computed tomography (MCT) analysis to ensure the homogeneity of the evaluated roots in terms of anatomy. The following morphological parameters were considered: a single, oval canal in a fully formed root, and curvature between 10° and 20° calculated by Schneider's method [33]. Additionally, roots with calcifications and internal or external resorption were excluded from the study. Moreover, the clinical inclusion criteria considered the absence of root caries and an initial foramen diameter of the canal equivalent to a size 15 K-file (Dentsply Sirona Endodontics, Ballaigues, Switzerland).
The crowns were removed using a diamond bur (4ZR.FG.012; Komet Dental Gebr. Brasseler GmbH & Co., Lemgo, Germany) under continuous water-cooling, leaving 13 ± 1 mm long roots. A size 10 K-file (Dentsply Sirona Endodontics, Ballaigues, Switzerland) was inserted into the canals until the tip was visible at the apical foramen under the microscope (8X, OPMI Pico; Carl Zeiss, Oberkochen, Germany) and measured in mm after removal. The working length (WL) amounted to the above-mentioned value reduced by 1 mm.
The canals were instrumented by one operator using the X-smart Endodontic Motor (Dentsply Sirona Endodontics, Ballaigues, Switzerland) according to the manufacturer's instructions. A glide path was created using several PathFiles (Dentsply Sirona Endodontics, Ballaigues, Switzerland) up to size 19, 0.02 taper. Apical patency was controlled with 10 K-file. The ProTaper Next (Dentsply Sirona Endodontics, Ballaigues, Switzerland) files were used with a constant speed of 300 revolutions per minute (rpm) and torque of 2 Ncm. The sequence was as follows: X1 (size 17, 0.04 taper), X2 (size 25, 0.06 taper), X3 (size 30, 0.07 taper), and X4 (size 40, 0.06 taper) up to the established WL. Each set (X1-X4) of rotary instruments was used for shaping one canal and then discarded. After using each file, copious irrigation with 5 mL of 5.25% NaOCl (CHLORAXiD, Cerkamed, Stalowa Wola, Poland) was performed for 120 ± 10 s. As a final rinse, canals were irrigated with 2.5 mL physiological saline and then with 5 mL 17% EDTA (Cerkamed, Stalowa Wola, Poland) for 1 min., 2.5 mL physiological saline, and 5 mL 5.25% NaOCl, followed by 2.5 mL of physiological saline. For irrigation, a 5 mL disposable plastic syringe with a 30-gauge Endo-Eze Tip (Ultradent, South Jordan, UT, USA) inserted without binding 2 mm shorter than WL was used. The solutions were manually activated with ProTaper Next X4 GP cone (Dentsply Sirona Endodontics, Ballaigues, Switzerland) using gentle strokes at a 100 times/minute cycle for 2 min. Finally, the canals were dried with paper absorbent points X4 (Dentsply Sirona Endodontics, Ballaigues, Switzerland).
In all study groups, canals were obturated using the single-cone technique with matched GP cones (40/0.06; Dentsply Sirona Endodontics, Ballaigues, Switzerland) and a standardized amount of sealer (0.30 ± 0.010 g). The teeth were coded and randomly distributed into three groups (n = 10) based on the root canal sealer used: GuttaFlow Bioseal (Coltène/Whaledent AG, Altstätten, Switzerland), Total Fill BC Sealer (FKG Dentaire, La-Chaux-de-Fonds, Switzerland), and BioRoot RCS (Septodont, Saint Maur Des Fosses, France) (Table 1). The sealants were applied on the selected GP cone. The GP cone was covered with a standardized amount of sealer and slowly introduced into the canal. Then, it was slowly and gently rotated (twice) to spread the sealer on the canal walls. Next, the GP cone was delicately pushed toward the apex to achieve tug-back at the estimated WL. The excess material was removed using a hot plugger (Fast Pack Plugger Tips, size Fine Medium (Yellow), 50/0.05 taper, E-Connect Eighteenth, China), and the teeth were cleaned with isopropyl alcohol and temporarily filled with glass ionomer Fuji IX (GC Europe, Leuven, Belgium) to seal the canal. All specimens were stored in Hank's balanced salt solution (HBSS) at 37°C for up to 6 months. The storage medium was renewed every 7 days.
Micro-CT Imaging
High-resolution MCT (SkyScan 1272; Bruker, Kontich, Belgium) scans were carried out under the following scanning conditions: X-ray source voltage, 90 kV; X-ray source current, 111 µA; and pixel size, 21 µm. A rotation of 180° was performed with a rotation step of 0.5°, and a copper + aluminum filter was selected. All specimens were scanned at four time points. Scan 1 was performed on the intact root canal for statistical analysis of volume variations between the evaluated canals, while scan 2 occurred after root canal preparation with rotary files to measure the pre-obturation root canal volume. Initial pore volume and porosity after 7 days were evaluated on scan 3, whereas long-term pore volume and porosity after 6 months were evaluated on scan 4.
The images were reconstructed using NRecon v.1.6.0 software (Bruker, Kontich, Belgium), and the parameters were calculated using CTAn v1.14.4 software (Bruker, Kontich, Belgium). Three-dimensional (3D) visualization was achieved using CTvol v2.3.2.0 software (Bruker, Kontich, Belgium). The CTAn software enables the selection of the region of interest (ROI) based on the contrast of various image areas, which results from the differing X-ray absorption of the elements of the scanned object due to differences in density. Namely, the X-ray absorption of the tooth differs from that of the filling material, facilitating the selection of the evaluated elements of the scanned object: the cone absorbed X-rays more than the sealant, while the surrounding air absorbed X-rays less than the sealant. However, to ensure the measurement of all the open pores in the sealant, the ROI ran tangent to the sealant boundary (Figure 1). A volume of interest (VOI; mm³) was established from the furcation level to the apex of the root, and canals were divided into three equal regions: coronal, middle, and apical (Figure 2). The evaluation of MCT images was performed by a blinded researcher.
Initial and Long-Term Volume of Pores
The initial total volume of pores was determined by subtracting the filling material volume after 7 days (scan 3) from the pre-obturation root canal volume (scan 2). Accordingly, the long-term total volume of pores was calculated by subtracting the filling material volume after 6 months (scan 4) from the pre-obturation root canal volume (scan 2). Closed pores were defined as not communicating with the root dentin and completely enclosed in the material (in the sealer or between the gutta-percha cone and the sealer) (Figure 3A,B). Conversely, open pores were defined as communicating with the dentin surface (located between the sealer and dentin wall) (Figure 3B). The volumes of open and closed pores were calculated analogously to the total pore volume. The change in pore volume (total, open, closed) over time was calculated by subtracting the initial volume (7 days) from the long-term volume (6 months).
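A minimal sketch of the subtraction arithmetic just described; all volumes are hypothetical placeholder values, not measured data.

```python
# Pore-volume arithmetic from the scan volumes described above; the
# mm^3 values are hypothetical placeholders, not measured data.
canal_vol_scan2 = 12.40    # pre-obturation root canal volume (scan 2)
filling_vol_scan3 = 12.10  # filling material volume after 7 days (scan 3)
filling_vol_scan4 = 12.15  # filling material volume after 6 months (scan 4)

initial_total_pores = canal_vol_scan2 - filling_vol_scan3
longterm_total_pores = canal_vol_scan2 - filling_vol_scan4
change_over_time = longterm_total_pores - initial_total_pores
print(initial_total_pores, longterm_total_pores, change_over_time)
# Open and closed pore volumes follow the same subtraction scheme.
```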
Initial and Long-Term Porosity
Porosity refers to the percentage (%) of pore volume in relation to total sealer volume and was calculated for two time periods: initial (after 7 days, scan 3) and long-term (after 6 months, scan 4). The total porosity refers to the entire canal, while the regional porosity refers to root regions of equal length: coronal, middle, and apical. The change in porosity over time was calculated by subtracting the initial porosity (7 days) from the long-term porosity (6 months).
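A small sketch of the porosity computation, assuming pore and sealer volumes per region have already been extracted from the scans; all numbers are hypothetical.

```python
# Porosity as a percentage of total sealer volume, per canal and per
# region; (pore volume, sealer volume) pairs in mm^3 are hypothetical.
regions = {
    "coronal": (0.030, 1.20),
    "middle": (0.015, 0.90),
    "apical": (0.020, 0.70),
}

total_pores = sum(p for p, _ in regions.values())
total_sealer = sum(v for _, v in regions.values())
total_porosity = 100 * total_pores / total_sealer

regional = {name: 100 * p / v for name, (p, v) in regions.items()}
print(f"total porosity: {total_porosity:.2f}%", regional)
```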
Statistical Analysis
The Shapiro-Wilk test was used to confirm the normality of the data, and the homogeneity of variance was assessed with Levene's test. The Student's t-test and the Friedman test were used for paired-sample comparisons. Moreover, the Kruskal-Wallis test was performed for independent samples. All statistical analyses were carried out with the statistical software package Statistica v. 13.1 (StatSoft, Inc., OK, USA), and statistical significance was considered at p < 0.05.
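A rough SciPy sketch of the test battery named above, on hypothetical paired porosity samples; note that the Friedman test applies when three or more related samples are compared.

```python
# Sketch of the statistical pipeline on hypothetical porosity data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
initial = rng.normal(1.0, 0.5, 10)             # 7-day porosities, one group
longterm = initial + rng.normal(0.5, 0.5, 10)  # 6-month porosities, paired
other_group = rng.normal(1.2, 0.5, 10)         # another sealer group

print(stats.shapiro(initial))              # normality check
print(stats.levene(initial, other_group))  # homogeneity of variance
print(stats.ttest_rel(initial, longterm))  # paired comparison over time
print(stats.kruskal(initial, other_group))  # independent-group comparison
# stats.friedmanchisquare(a, b, c) applies with >= 3 related samples.
```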
Initial and Long-Term Volume of Pores
There was no significant difference in volumetric variance between prepared canals in the three study groups.
Moreover, no statistically significant difference was found between the experimental groups regarding initial volumes of pores (open, closed, and total; p > 0.05) (Table 2). Nevertheless, after 6 months (long-term), the volume of open pores was significantly higher for GuttaFlow Bioseal compared with Total Fill BC Sealer (p < 0.05) (Table 3). In all groups, the mean percentage of open pores increased after 6 months, in particular for GuttaFlow Bioseal; however, the difference was not statistically significant (p > 0.05). Moreover, there were no statistically significant differences in initial and long-term total pore volumes between study groups (p > 0.05). The long-term total volume of pores decreased for GuttaFlow Bioseal and Total Fill BC Sealer, while it increased for BioRoot RCS, but these differences were statistically insignificant (p > 0.05). Representative 3D images of pore distribution after 7-day incubation are presented in Figure 4.
Initial and Long-Term Porosity
For BioRoot RCS, the long-term total porosity was significantly higher than the initial one (p < 0.05). The highest change in total porosity (long-term vs. initial) was observed for BioRoot RCS (2.35% ± 2.20%), followed by GuttaFlow Bioseal (1.54% ± 3.91%), and the least for Total Fill BC Sealer (0.02% ± 4.80%) (Figure 5); however, these differences were not statistically significant (p > 0.05). After a 7-day incubation, significantly higher regional porosity was found in the apical part compared to the coronal area for BioRoot RCS (p < 0.05). In the apical region, a significantly higher porosity was found for BioRoot RCS compared to GuttaFlow Bioseal (p < 0.05). After 6 months, no statistically significant differences in regional porosity between the root regions were detected (Figure 5).
The highest apical porosity, both initial and long-term, was observed for BioRoot RCS (2.24% ± 1.67% and 2.99% ± 2.46%, respectively).
Discussion
The null hypothesis was rejected because significant differences were found between tested sealers in terms of the evaluated parameters initially and over time.
The main purpose of root canal filling is to prevent micro-leakage and root canal re-infection. Root canal treatment aims to prevent the formation of periapical lesions or, if they exist, facilitate their proper healing. Interestingly, the obturation technique and materials employed in the treatment were found to exert no influence on the prognosis of apical periodontitis [34]. However, this statement should be considered with caution due to the low number of studies (10) included in the meta-analysis and a high risk of bias. Therefore, further research is needed to investigate the safety and success rate of obturation materials and techniques.
In the present study, three root canal sealants were selected for the analysis: two bio-ceramics (BioRoot RCS, a manually mixed powder with liquid, and Total Fill BC Sealer, a pre-mixed sealer), and a silicone-based, ready-to-use sealant (GuttaFlow Bioseal). These products were chosen for the study because they are modern and popular materials differing in composition and form of preparation. Contradictory classifications of GuttaFlow Bioseal can be found in the literature: due to the addition of calcium silicate, it is sometimes referred to as a bio-ceramic sealer [35][36][37]. However, sealants should be classified according to the main components of their compositions, which in this case is a silicone; therefore, GuttaFlow Bioseal should be classified as a silicone-based material [7,23,38]. The addition of substances, i.e., glass ceramic and/or calcium silicate, provides bioactive properties; thus, these sealers are considered bioactive [39]. Moreover, pre-mixed and syringe-mixed materials, with the ratio of ingredients controlled by the manufacturer, can be distinguished from manually mixed materials. The latter form does not provide the reproducible, fast, and clean mixing of the former [40].
The quality of a root canal filling in in vitro studies can be assessed with dye penetration, dye extraction (dissolution method) or fluid filtration (transportation method), a glucose penetration model using fluid filtration, protein micro-leakage tests, bacteria and toxin infiltration methods, electrochemical micro-leakage tests, radioisotope penetration methods, microscopy, or 3D imaging (Cone-Beam Computed Tomography, MCT) [41,42]. Most of the above-mentioned techniques lack quantitative measurements and may lead to sample destruction. Conversely, MCT is currently considered the most reliable non-destructive method for assessing the quality of a filling over time. It allows for distinguishing between gutta-percha, sealant, and voids [9,43]. Moreover, the evaluation of voids/porosity in different parts of the root (apical, middle, coronal) and differentiation of the type of pores (open, closed) are feasible [5,9,43]. However, disadvantages include the high cost of this investigation [43].
The presence of pores was most frequently analyzed because pores provide space in which bacteria can re-grow and proliferate, causing long-term treatment failures [9,26,28,31,44,45]. Unfortunately, none of the filling materials or application methods can provide a pore-free root canal filling [9,45,46]. It is important to know whether pores are open or closed in the material, because this can determine the clinical success of the treatment. Indeed, open pores can cause micro-leakage, while closed ones seem to exert no clinical impact [46][47][48]. However, the presence of closed pores as well as their percentage was considered in the present study because they may turn into open ones with time, due to sealant dissolution, causing a micro-leakage of clinical significance. Many different factors may influence the number of pores and porosity, including the filling technique; the wettability and flowability of the sealant; the form of the sealer (pre-mixed or manually mixed); the irrigation protocol; and the anatomy of the root canal [5].
In all studied groups (both initial and long-term), the mean volume and percentage of closed pores were greater than those of open ones. This finding is not supported by other studies [9,31,49]. These differences may be explained by the different anatomy of the roots examined in this study (distal roots of mandibular first molars), while others evaluated maxillary [9] or mandibular central incisors [49] or single-rooted pre-molars [31]. Additionally, the selection of the shaping system (rotary or reciprocating) and the final instrument tip size and taper may influence the results. In the present study, ProTaper Next in a single-length technique up to X4 (size 40, 0.06 taper) was applied. However, other studies used the crown-down technique along with rotary instruments: Reciproc R40 (size 40, 0.06 taper) (VDW, Munich, Germany) [31], Twisted Files Adaptive ML3 (size 50, 0.04 taper) (SybronEndo, Glendora, CA) [9], and EndoSequence (Brasseler, Savannah, GA), size 40 and 0.06 taper, as finishing files [49]. Moreover, the penetration of the sealant into the canal dentinal tubules and walls, and thus the presence of open pores, can be influenced by the irrigation protocol [50,51]. The application of chelating compounds immediately after NaOCl may contribute to dentin erosion, thereby reducing the adhesion surface of the sealant [50,51]. However, the use of an intermediate pH-neutral rinsing solution prevents the chelator inactivation caused by the oxidizing potential of NaOCl; as a result, the smear layer is removed and the risk of micro-leakage minimized [31]. In the present study, physiological saline was applied as the intermediate rinsing solution, while other studies used 17% EDTA (chelator) directly after 1% [9] or 5.25% [49] NaOCl. The latter procedure contributed to dentinal erosion and insufficient removal of the smear layer, and therefore insufficient sealer penetration into the canal walls and/or micro-leakage, increasing open pore formation. In contrast, Angerame et al. [31] used distilled water as the intermediate rinsing solution. The application of EDTA (chelator) as the last rinsing solution before obturation can impair the hydration of the calcium silicate due to calcium chelation; this may have interfered with the setting process of bio-ceramic sealers [1,8].
Moreover, the mean long-term percentage of open pores in all tested groups increased compared to the initial one, and it was statistically higher for GuttaFlow Bioseal than for Total Fill BC Sealer (p < 0.05). However, the pore volume of GuttaFlow Bioseal had not been evaluated previously; hence, it was not possible to compare the present findings with the results of earlier studies. It can be hypothesized that the higher detection of open pores is related to the higher solubility of this material and/or its weaker bonding to the dentin walls compared to Total Fill BC Sealer. The greater solubility of GuttaFlow Bioseal in comparison to other sealants was observed in the study conducted by de Camargo et al. [52].
The release of calcium ions over time can contribute to the formation of precipitates that can accumulate in pores, thereby reducing their number over time [53]. Such a decrease in the number of pores may be associated with improved sealing ability, thus reducing the risk of micro-leakage and its clinical implications, i.e., development or persistence of periapical inflammation leading to treatment failure [28]. In the present study, after 6 months, the total volume of pores decreased for GuttaFlow Bioseal and Total Fill BC Sealer, while it increased for BioRoot RCS; however, these differences (long-term vs. initial) were not statistically significant (p > 0.05). The results for Total Fill BC Sealer and BioRoot RCS are in agreement with previous studies [9]. However, a statistically significant increase in the total volume of pores after 8 weeks of storage in a phosphate-rich medium for BioRoot RCS was observed by Atmeh et al. [28].
Porosity and other defects in the micro-structure of an endodontic sealant can create foci of structural weakness, reduce the tensile strength of the material, and generate micro-cracks, which, in turn, can cause leakage within the endodontic cement in the root canal [54]. The total porosity for GuttaFlow Bioseal and Total Fill BC Sealer increased insignificantly over time (p > 0.05). However, the long-term total porosity of BioRoot RCS was significantly higher than the initial one (p < 0.05). Similar results for Total Fill BC Sealer and BioRoot RCS were observed by Milanovic et al. [9], but the difference was statistically insignificant. The increase in porosity may be attributed to the enhanced solubility of bioactive sealers due to ion release [55,56]. In addition, the form and method of mixing sealers can affect the porosity and the presence of pores. Manually mixed materials are more prone to subjective factors introduced by the operator, thus producing more structural defects [54,57,58]. This may be the reason behind the greater total porosity of the manually mixed BioRoot RCS. Considering the regional porosity of the root canal, a higher apical porosity was detected for BioRoot RCS after 7 days and 6 months. Similar results were also reported for lateral compaction using RealSeal sealer (SybronEndo, Orange, CA, USA), but some authors suggested that pore distribution can be unpredictable regardless of the filling method and sealer type [59]. The coronal region of roots filled with GuttaFlow Bioseal and Total Fill BC Sealer was found to exhibit greater initial and long-term porosity than the other regions. It can be hypothesized that the high apical porosity of BioRoot RCS resulted from the high density of the material, which impeded the migration of air bubbles (which create pores) toward the coronal part of the root during its application; as a consequence, apical porosity and a poor seal of the apical area occurred. On the contrary, the greater coronal porosity of GuttaFlow Bioseal and Total Fill BC Sealer might be induced by the movement of air bubbles from the apical part to the coronal part due to the lower density of these materials. The introduction and gentle pumping movement of the GP cone may have caused a translocation and entrapment of air bubbles within the coronal part of the root. Moreover, the coronal part is the area containing the most sealant, and, therefore, it may undergo volumetric changes during setting and dissolution, contributing to increased porosity. The highest porosity in this region was also found in other studies, regardless of the sealant and the obturation technique used [9,60,61]. It is worth emphasizing that the lack of regional sealing can lead to re-infection and create a critical condition for persistent bacteria accumulation, causing treatment failure. The apical area is probably the most "delicate" and important region within the entire canal system because, during chemo-mechanical shaping, it is the part that is least instrumented and decontaminated [62]. Consequently, the enhanced penetration of the sealant into this region may trap persistent micro-organisms, providing a hermetic seal of this crucial area.
Some limitations of the present study should be acknowledged. The results of these in vitro evaluations do not reflect the phenomena and relations occurring in a real clinical scenario (micro-biome, host immune system, seepage of fluids through accessory canals, temperature changes, occlusal load, root anatomy variations). Thus, clinical studies are necessary to provide a wider perspective on the investigated parameters. Moreover, the limited sample size could have contributed to the high variability of the obtained data and the high values of standard deviation; studies using a larger sample size are therefore recommended. Additionally, only three sealants were used in this study, so other sealants should be investigated. Moreover, future studies should evaluate sealants in teeth with complex root anatomy, e.g., greater curvatures, using various techniques of root canal shaping and obturation, including the application of thermal methods.
Conclusions
Within the limitations of the study, it can be stated that:
1. The total volume of pores remained unchanged after 6 months of storage.
2. GuttaFlow Bioseal exhibited a significantly higher long-term volume of open pores than Total Fill BC Sealer.
3. The total porosity of all investigated sealers remained unchanged after the 6-month storage, except for BioRoot RCS, whose total porosity significantly increased after long-term incubation.
4. The initial total porosity of BioRoot RCS was significantly higher in the apical region than in the coronal area.
5. BioRoot RCS exhibited a significantly higher initial total porosity in the apical area than GuttaFlow Bioseal.
Therefore, the use of bioactive sealers with an excessive tendency to create porosities in both short- and long-term periods of storage might compromise the long-term success of endodontic treatments.
Funding: This research was funded by grant No 503/2-148-04/503-21-001 from the Medical University of Lodz. These studies were financed partially from the "Innovative Textiles 2020+" (no. RPLD.01.01.00-10-0002/17-00) investment project within the Regional Operational Programme for Łódzkie 2014-2020.
Institutional Review Board Statement:
The study was conducted in accordance with the Declaration of Helsinki and approved by the Bioethics Committee of the Medical University of Lodz (RNN/36/20/KE). | 6,524 | 2022-09-26T00:00:00.000 | [
"Medicine",
"Materials Science"
] |
CNView: a visualization and annotation tool for copy number variation from whole-genome sequencing
Summary: Copy number variation (CNV) is a major component of structural differences between individual genomes. The recent emergence of population-scale whole-genome sequencing (WGS) datasets has enabled genome-wide CNV delineation. However, molecular validation at this scale is impractical, so visualization is an invaluable preliminary screening approach when evaluating CNVs. Standardized tools for visualization of CNVs in large WGS datasets are therefore in wide demand.
Methods & Results: To address this demand, we developed a software tool, CNView, for normalized visualization, statistical scoring, and annotation of CNVs from population-scale WGS datasets. CNView surmounts challenges of sequencing depth variability between individual libraries by locally adapting to cohort-wide variance in sequencing uniformity at any locus. Importantly, CNView is broadly extensible to any reference genome assembly and most current WGS data types.
Availability and Implementation: CNView is written in R, is supported on OS X, MS Windows, and Linux, and is freely distributed under the MIT license. Source code and documentation are available from https://github.com/RCollins13/CNView
Contact: <EMAIL_ADDRESS>
Introduction
Deletions and duplications of genomic segments, collectively known as copy number variants (CNVs), are the single largest influence in determining the content and organization of an individual genome (Sudmant et al., 2015) and are strongly associated with an increased risk of numerous cancers and neurodevelopmental disorders (McCarroll & Altshuler, 2007). Whole-genome sequencing (WGS) is the only currently practical method able to capture the full size spectrum of CNV in the human genome (Sudmant et al., 2015). Detection of CNV in WGS data commonly relies on measuring relative losses or gains in depth of sequencing coverage, but most algorithms yield too many candidate CNV calls to be molecularly validated at scale. Visual assessment of sequencing depth can quickly assess CNVs in silico, but there is presently a paucity of tools for CNV visualization from population-scale WGS data.
We present CNView, an R software tool for normalized visualization of sequencing depth in population-scale WGS datasets. CNView applies global intra-sample normalization and localized inter-sample normalization to delineate, annotate, and statistically score CNVs in individual samples or up to hundreds of WGS libraries simultaneously.
Methods & Application
As input, the BEDtools coverage and unionbedg commands are used to generate a matrix of uniformly binned sequencing coverages for each library (Quinlan & Hall, 2010; Online Documentation). Compressing coverage into bins of 100 bp-1 kb smooths visible noise in the sequencing depth while also lowering local computational requirements. After generating this input coverage matrix, CNView can assess and visualize any query region in up to 300 samples at once in under a minute on a laptop with a 2.3 GHz dual-core processor and 8 GB RAM.
CNView has six sequential steps: (1) matrix filtering, (2) matrix compression, (3) intra-sample normalization, (4) inter-sample normalization, (5) coverage visualization, and (6) genome annotation. Coverage is extracted from the query region including several flanking megabases (default=5Mb). CNView further compresses this subsetted matrix to reduce local noise. Each library is then normalized by dividing the coverage of each bin by the library's median nonzero binwise coverage. The intra-sample normalized coverage in each bin is then normalized across all samples to fit the standard normal distribution (µ=0, σ=1). This normalization procedure produces a t-score per sample per bin.
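CNView itself is implemented in R; purely as a hedged illustration, steps (3) and (4) can be sketched in Python/numpy on a hypothetical samples-by-bins coverage matrix.

```python
# Sketch of steps (3)-(4): intra-sample normalization by each library's
# median nonzero bin coverage, then per-bin scaling across samples to
# the standard normal. `cov` is a hypothetical (samples x bins) matrix.
import numpy as np

rng = np.random.default_rng(1)
cov = rng.poisson(30, size=(300, 500)).astype(float)  # samples x bins

# (3) intra-sample: divide by the median nonzero binwise coverage
med = np.array([np.median(row[row > 0]) for row in cov])
intra = cov / med[:, None]

# (4) inter-sample: fit each bin across samples to N(0, 1) -> t-scores
t = (intra - intra.mean(axis=0)) / intra.std(axis=0, ddof=1)
print(t.shape, round(t.mean(), 3), round(t.std(), 3))
```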
Coverage t-scores are plotted as semi-contiguous step functions for each sample specified by the user. Individual bins significantly depleted or enriched for normalized sequencing depth are indicated by red and blue outlines, respectively (α=0.05, Bonferroni correction). P-values of deletion and duplication are calculated for each highlighted interval by computing the mean t-score of all bins overlapping that interval. The background of each plot is shaded with measurements of central tendency (median) and deviation (median absolute deviation; MAD) per bin. Median and MAD identify regions with unusually high or low coverage variability across samples, which could occur at sites of multiallelic segmental duplications or across regions of heterochromatin, as examples. These features of the coverage distribution per bin cannot be captured by mean and standard deviation due to the normalization function applied in step four, but are readily reflected at regions where the median and MAD diverge significantly from the scaled mean (0) and standard deviation (1). Finally, CNView provides an extensible interface to the UCSC MySQL database and plots specified genomic annotations beneath the normalized coverage signal (Kent et al., 2002).
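Continuing with the same kind of hypothetical matrix, the per-bin significance outlines, interval-level p-values, and median/MAD background described above reduce to a few array operations; again an illustration in Python, not CNView's actual R code.

```python
# Per-bin scoring and background statistics described above; `t` is a
# hypothetical (samples x bins) t-score matrix like the previous sketch.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
t = rng.standard_normal((300, 500))        # stand-in t-score matrix

alpha = 0.05 / t.shape[1]                  # Bonferroni-corrected threshold
p = 2 * norm.sf(np.abs(t))                 # two-sided per-bin p-values
depleted = (t < 0) & (p < alpha)           # bins outlined in red
enriched = (t > 0) & (p < alpha)           # bins outlined in blue

# interval-level score: mean t-score of bins overlapping the interval
interval_t = t[0, 100:110].mean()          # hypothetical interval, sample 0
interval_p = 2 * norm.sf(abs(interval_t))

# background shading: per-bin median and median absolute deviation
med = np.median(t, axis=0)
mad = np.median(np.abs(t - med), axis=0)
print(depleted.sum(), enriched.sum(), round(interval_p, 3))
```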
Results
We previously applied an alpha version of CNView to delineate simple and complex CNVs in two independent WGS cohorts (Brand et al., 2014; Brand et al., 2015). Here, we also applied CNView to a recently described WGS cohort of 160 individuals comprising 40 quartet families (Turner et al., 2016) to show that CNView readily visualizes simple CNVs in individual samples, like the 46 kb paternally inherited, two-exon deletion of PDE11A shown in Figure 1A. Further, CNView can provide visual confirmation of unbalanced complex genomic rearrangements or compound CNV sites, as shown in Figure 1B. In this example, sequencing analysis predicted two large, rare, overlapping CNVs near the p-terminus of chromosome 7: a 467 kb distal deletion and a 449 kb proximal duplication. CNView assessment of this site provides supporting evidence of the compound CNV by illustrating copy loss of the deletion-specific interval, copy gain of the duplication-specific interval, and no change in copy number in the interval of overlap between the deletion and duplication. Links to the data used to create both panels of Figure 1 are available in the Supplementary Information. | 1,264.2 | 2016-04-20T00:00:00.000 | [
"Computer Science",
"Biology",
"Medicine"
] |
Amyloid-β Peptide (Aβ) Neurotoxicity Is Modulated by the Rate of Peptide Aggregation: Aβ Dimers and Trimers Correlate with Neurotoxicity
Alzheimer's disease is an age-related neurodegenerative disorder with its toxicity linked to the generation of amyloid-β peptide (Aβ). Within the Aβ sequence, there is a systemic repeat of a GxxxG motif, which theoretical studies have suggested may be involved in both peptide aggregation and membrane perturbation, processes that have been implicated in Aβ toxicity. We synthesized modified Aβ peptides, substituting glycine for leucine residues within the GxxxG repeat motif (GSL peptides). These GSL peptides undergo β-sheet and fibril formation at an increased rate compared with wild-type Aβ. The accelerated rate of amyloid fibril formation resulted in a decrease in the presence of small soluble oligomers such as dimeric and trimeric forms of Aβ in solution, as detected by mass spectrometry. This reduction in the presence of small soluble oligomers resulted in reduced binding to lipid membranes and attenuated toxicity for the GSL peptides. The potential role that dimer and trimer species binding to lipid plays in Aβ toxicity was further highlighted when it was observed that annexin V, a protein that inhibits Aβ toxicity, specifically inhibited Aβ dimers from binding to lipid membranes.
Given the significant role for Aβ oligomerization in AD pathogenesis, it is important to identify the sequence motifs within Aβ that modulate peptide oligomerization and toxicity. Recent literature has implicated a motif within Aβ as being potentially responsible for the conformational transition that precedes the oligomerization of Aβ (Liu et al., 2005). This motif comprises four glycine residues found within the Aβ25-37 segment (Fig. 1) and is known as the GxxxG repeat motif. Theoretical studies have indicated that this motif might facilitate the conversion of α-helical or random-coil Aβ to β-sheet and eventually fibril formation (Liu et al., 2005). The GxxxG repeat motif is also thought to be involved in modulating membrane helix-helix interactions (Russ and Engelman, 2000; Kleiger et al., 2002). Munter et al. (2007) demonstrated that transmembrane dimerization of APP has a direct effect on APP processing and specifically implicated the G29xxxG33 (Aβ sequence numbering) motif within APP as playing a significant role in modulating APP dimerization, processing, and Aβ generation (Munter et al., 2007).
The role of the GxxxG motif in Aβ oligomerization and toxicity was investigated using glycine-substituted-to-leucine (GSL) peptides (Fig. 1). These GSL peptides have single amino acid alterations at the respective glycine residues. Biophysical characterization of these GSL peptides suggests that alterations within the GxxxG repeat motif increase the rate of fibril formation, leading to a decrease in the concentration in solution of small soluble oligomers, particularly dimers and trimers. Furthermore, a reduction in the ability of these oligomers to bind to lipid membranes was observed. The differential membrane binding ability of the different Aβ dimers correlates well with the toxicity of their respective peptides. This critical role of dimers in Aβ toxicity was confirmed with annexin V: the capacity of annexin V to reduce dimer binding to synthetic membranes correlates with its ability to inhibit Aβ toxicity (Lee et al., 2002).
Methods
Peptide synthesis. Continuous-flow Fmoc-SPPS (solid-phase peptide synthesis) was used for all syntheses. Aβ42 and the GSL peptides were synthesized on a 0.1 mmol scale using Fmoc-L-Ala-PEG-PS resin as a solid support on an Applied Biosystems Pioneer Synthesizer as described previously (Tickler et al., 2001). GSL peptides were synthesized using the same method as wild-type (WT) Aβ, with single amino acid substitutions of glycine to leucine residues in the GxxxG repeat motif. G25L, G29L, G33L, and G37L had leucine replacements at positions 25, 29, 33, and 37, respectively.
Preparation and incubation/aggregation of Aβ peptides. Aβ peptides were dissolved in HFIP at a concentration of 1 mg/ml (w/v) to induce a monomeric and helical conformation of the peptides (Smith et al., 2006). Aliquots of 100 µl were dried under vacuum. Dried samples were stored at −20°C.
Samples used for analysis were prepared in the following way: 100 µg aliquots of peptide were dissolved in 50 µl of 20 mM NaOH (w/v) at pH 11 and sonicated for 15 min. After this, the sample was dissolved in 50 µl of 10 mM phosphate buffer (PB) (w/v) at pH 7.4 (10 mM Na2HPO4 and NaH2PO4) and 400 µl of milliQ H2O. The solution was filtered using 20 µm Minisart RC4 filters (Sartorius) to ensure preformed aggregates >20 µm were removed. The peptide concentration in solution was determined using a combination of amino acid analysis and spectrophotometric methods. The calculated molar extinction coefficient values of WT, G25L, G29L, G33L, and G37L were 144,999, 167,142, 143,588, 180,000, and 191,605 L·mol−1·cm−1, respectively, when using the absorbance value at 214 nm. This was performed to account for any loss of peptide caused by the removal of aggregated material during filtration. Samples were diluted to 10 µM (w/v) and incubated at 37°C shaking at 1400 rpm to induce fibril formation. Incubated samples were measured for various static parameters.
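The spectrophotometric step reduces to Beer-Lambert arithmetic (c = A / (ε·l)); in the sketch below the extinction coefficients are the 214-nm values quoted above, while the absorbance reading and 1-cm path length are assumed for illustration.

```python
# Concentration estimate via Beer-Lambert: c = A / (eps * l).
# eps values are the 214-nm coefficients given above; the absorbance
# and path length are hypothetical.
eps = {"WT": 144999, "G25L": 167142, "G29L": 143588,
       "G33L": 180000, "G37L": 191605}   # L mol^-1 cm^-1 at 214 nm

A214 = 0.29      # hypothetical absorbance reading
path_cm = 1.0    # assumed cuvette path length

for peptide, e in eps.items():
    c_uM = A214 / (e * path_cm) * 1e6    # mol/L -> uM
    print(f"{peptide}: {c_uM:.2f} uM")
```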
Thioflavin T binding assay for the generation of amyloidogenic structures. A kinetic aggregation assay was performed using the "Simple read" program on a Varian Cary Eclipse Fluorescence Spectrophotometer (Smith et al., 2006). Aβ peptides, prepared as described above, were diluted in 10 mM PB, pH 7.4, to a final concentration of 10 µM with thioflavin T (ThT) at 20 µM. Excitation and emission wavelengths were 444 and 480 nm, respectively. Excitation and emission slit widths were both 5 nm. The photomultiplier was set to 680 V. The signal was normalized by subtracting the signal of buffer containing ThT alone and dividing by the maximum signal seen with WT Aβ.
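The stated normalization is a blank subtraction followed by scaling to the WT maximum; a minimal sketch with hypothetical fluorescence values:

```python
# ThT signal normalization: subtract the buffer + ThT blank, then
# express each trace as a fraction of the WT maximum. All fluorescence
# values are hypothetical.
import numpy as np

blank = 12.0                                    # buffer containing ThT alone
wt_raw = np.array([15.0, 40.0, 160.0, 212.0])   # WT time course
gsl_raw = np.array([14.0, 120.0, 230.0, 280.0])  # a GSL time course

wt_max = (wt_raw - blank).max()
wt_norm = (wt_raw - blank) / wt_max
gsl_norm = (gsl_raw - blank) / wt_max           # relative to WT maximum
print(wt_norm, gsl_norm)
```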
Far UV circular dichroism spectroscopy. Circular dichroism (CD) spectroscopy was performed on a Jasco J815 CD spectropolarimeter. Measurements were performed in the far UV with the CD signal recorded in a 0.1 cm path length Hellma quartz cuvette. Investigation of secondary structure during aggregation was performed at a protein concentration of 10 µM (w/v) (Smith et al., 2006). A composite buffer containing 1 mM PB (w/v) and 2 mM NaOH (w/v) at pH 7.4 was used in all measurements. Measurements were recorded at 37°C from 185 to 260 nm with a 1 nm bandwidth, 0.1 nm resolution, interval speed of 500 nm/min, and a response time of 1 s. Peptide measurements were subtracted from background readings to give a normalized spectrum. Spectra were converted from machine units in millidegrees to delta epsilons (Lobley et al., 2002). After delta epsilon conversion, deconvolution of the resulting spectra was achieved using the CDSSTR analysis program provided in the Dichroweb database (Lobley et al., 2002). Using this program, the relative amounts of random coil, α-helix, β-sheet, and β-turn were determined from the normalized contribution of each secondary structure element function to the observed spectrum after curve fitting.
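A sketch of the millidegree-to-delta-epsilon conversion, using the common relation Δε = θ(mdeg) / (32,980 · c · l) with c in mol/L and l in cm; whether Dichroweb applies a per-residue scaling on top of this is an assumption here, and all numbers are hypothetical.

```python
# Conversion from machine units (millidegrees) to delta epsilon.
theta_mdeg = -4.2     # hypothetical observed ellipticity at one wavelength
c_molar = 10e-6       # 10 uM peptide, as above
path_cm = 0.1         # 0.1 cm cuvette, as above
n_residues = 42       # Abeta42

delta_eps_molar = theta_mdeg / (32980 * c_molar * path_cm)
delta_eps_mre = delta_eps_molar / n_residues  # mean-residue basis (assumed)
print(round(delta_eps_molar, 2), round(delta_eps_mre, 3))
```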
Primary neuronal cultures. Mouse cortical neuronal cultures were prepared under sterile conditions as described previously (Barnham et al., 2003; Ciccotosto et al., 2004). Briefly, embryonic day 14 BL6J×129sv mouse cortices were removed, dissected free of meninges, and dissociated in 0.025% (w/v) trypsin in Krebs' buffer. The dissociated cells were triturated using a filter-plugged fine pipette tip, pelleted, resuspended in plating medium (minimum Eagle's medium, 10% fetal calf serum, 5% horse serum), and counted. Cortical neuronal cells were plated into poly-D-lysine-coated 48-well plates at a density of 150,000 cells/well in plating medium. All cultures were maintained in an incubator set at 37°C with 5% CO2. After 2 h, the plating medium was replaced with fresh Neurobasal medium containing B27 supplements, geneticin, and 0.5 mM glutamine (all tissue culture reagents were purchased from Invitrogen unless otherwise stated). This method resulted in cultures highly enriched for neurons (>95% purity) with minimal astrocyte and microglial contamination, as determined by immunostaining of culture preparations using specific marker antibodies (data not shown).
Cell viability assays. The neuronal cells were allowed to mature for 6 d in culture before commencing treatment using freshly prepared Neurobasal medium plus B27 supplements minus antioxidants. For the treatment of neuronal cultures, freshly prepared soluble Aβ stock solutions were diluted to the final concentration in Neurobasal medium. The mixtures were then added to neuronal cells for up to 4 d in vitro. Cell survival was monitored by phase contrast microscopy, and cell viability was quantitated using the 3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium (MTS) assay, as described previously (Ciccotosto et al., 2004). Briefly, the medium was replaced with fresh Neurobasal medium supplemented with B27 lacking antioxidants, and 10% v/v MTS (Promega) was added to each well and incubated for 3 h at 37°C in a 5% CO2 incubator. Plates were gently shaken, and a 150 µl aliquot from each well was transferred to separate wells of a 96-well plate. The color change of each well was determined by measuring the absorbance at 490 nm using a PerkinElmer Wallac Victor Multireader, and background readings of MTS incubated in cell-free medium were subtracted from each value before calculations. The data were normalized and calculated as a percentage of untreated vehicle control values. The vehicle control in this study consisted of 4 mM NaOH in PBS buffer.
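The viability readout is simple arithmetic on background-subtracted absorbances; a sketch with hypothetical plate readings:

```python
# MTS readout: background-subtract the cell-free reading, then express
# each treatment as a percentage of the vehicle control. The absorbance
# values are hypothetical.
import numpy as np

background = 0.08                          # MTS in cell-free medium
vehicle = np.array([1.10, 1.05, 1.12])     # untreated control wells (A490)
treated = np.array([0.52, 0.49, 0.55])     # Abeta-treated wells (A490)

viability = 100 * (treated - background).mean() / (vehicle - background).mean()
print(f"{viability:.1f}% of vehicle control")
```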
Detection of oligomers. Arrays were washed twice with 5 µl of 10 mM PB, pH 7.4, on a shaking table for 2 min. PB was then wicked off, and 10 µM peptide samples (prepared as described above) were loaded onto the arrays and allowed to incubate for 2 h while shaking. Samples were wicked off, and arrays were washed twice with PB, followed by two 1 min washes with 1 mM HEPES, pH 7.2. The arrays were air dried and matrix was applied: 1 µl of 50% α-cyano-4-hydroxycinnamic acid was applied twice to each array, with arrays being air dried between each application (Guerreiro et al., 2006). The matrix, which is an energy-absorbing molecule, facilitates desorption and ionization of peptides in the MS. Arrays were then analyzed by surface-enhanced laser desorption/ionization time-of-flight mass spectrometry (SELDI-TOF MS), and the resulting spectra were examined using ProteinChip software, version 3.2.1. The various oligomeric forms of the peptides can be separated according to their mass-to-charge (m/z) ratio based on the time-of-flight detector. This translates to a spectral view with peaks representing peptides of different molecular masses. The area under the curve (AUC) of each peak was used to quantify the level of binding for each peptide.
Synthetic lipid binding assay. A novel lipid binding assay was designed to detect specific oligomers of Aβ42 WT and GSL peptides binding to lipid membranes. Liposomes [small unilamellar vesicles (SUVs)] were prepared as described below. Arrays were initially washed with 5 µl of 30 mg/ml CHAPS (3-[(3-cholamidopropyl)dimethylammonio]-1-propanesulfonate) (w/v), which was immediately wicked off. This was followed by three washes with 5 µl of 10 mM PB on a shaking table for 2 min. After washing, 5 µl of liposomes at 20 mM were placed onto the arrays, forming a monolayer of lipids. Control arrays had PB on the hydrophobic surface instead of lipid. Arrays were incubated at 37°C for 2 h to allow sufficient binding of lipid onto the hydrophobic surface of the chip. The arrays were washed with 10 mM PB to remove unbound lipid, and peptide samples at 50 µM were loaded onto the arrays and allowed to incubate for 5 min. Samples were removed, and chips were washed twice with 10 mM PB, followed by two 1 min washes with 1 mM HEPES at pH 7.2. The chips were then air dried, and 1 µl of 50% CHCA (w/v) in 50% acetonitrile (v/v) and 0.5% TFA (v/v) was applied onto each spot twice, with arrays being air dried between each application. Chips were then analyzed by SELDI-TOF MS, and the resulting spectra were examined using ProteinChip software, version 3.2.1. Oligomers binding to either the lipid or the hydrophobic surface (control arrays) can be separated according to their m/z ratio. The AUC for each peptide was used to quantify the level of oligomer binding to either the lipid layer or the H50 surface. The integrity of the lipid layer was determined by using melittin as a positive control and BSA as a negative control.
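The AUC quantification amounts to integrating intensity over each oligomer's m/z window; the sketch below builds a synthetic spectrum with a mock dimer peak near twice the ~4.5 kDa Aβ42 monomer mass, since the real exported spectra are not shown.

```python
# AUC quantification of a SELDI-TOF peak: integrate intensity over the
# m/z window of one oligomer. The spectrum here is synthetic.
import numpy as np

mz = np.linspace(4000, 20000, 4000)                    # m/z axis
intensity = np.exp(-((mz - 9030) ** 2) / (2 * 40**2))  # mock dimer peak

window = (mz > 8900) & (mz < 9200)                     # dimer m/z window
auc = np.trapz(intensity[window], mz[window])          # trapezoidal AUC
print(f"dimer AUC: {auc:.1f}")
```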
Annexin inhibition of oligomeric lipid binding. Inhibition of oligomers binding to the lipid-coated H50 array in the presence of annexin V was performed essentially as described above. To test whether annexin V altered the lipid binding of Aβ, an equal ratio of annexin V was incubated for 10 min on the lipid surface before the addition of Aβ42. All binding experiments were performed in triplicate.
The GxxxG mutant peptides have reduced neurotoxic activity
To determine whether the GxxxG repeat motif modulates Aβ toxicity, mouse cortical cultures treated with either 15 µM WT or GSL peptide were measured for cell viability by MTS assay. Aβ42 WT peptide decreased neuronal cell viability to 41.4 ± 1.7% (Fig. 2). The G25L and G29L peptides were significantly less toxic (p < 0.05 and p < 0.01, respectively) to neuronal cells, exhibiting 72.7 ± 6.8 and 81.2 ± 3.8% cell viability, respectively, whereas G33L- and G37L-treated cells (p < 0.01 for both peptides) exhibited minimal toxicity (95.9 ± 4.0 and 90.9 ± 9.6% cell viability, respectively). A range of assays was performed to ascertain which biophysical properties associated with changes to the GxxxG motif best correlated with the reduced toxicity and cell binding.
Time-dependent aggregation profiles of Aβ peptides; GSL peptides have increased rates of β-sheet and fibril formation
Initial ThT measurements of WT Aβ42 indicated minimal formation of amyloidogenic material (Fig. 3A) at day 0; the ThT signal is expressed as a percentage of maximum WT fluorescence. Corresponding EM detected small globular structures on day 0 (supplemental Fig. 1A, available at www.jneurosci.org as supplemental material). These structures exhibited diameters ranging from 12 to 24 nm. This was consistent with the "pseudospherical" structures described by Goldsbury et al. (2000). CD spectroscopy indicated that these structures were predominately random coil (Fig. 3B). As incubation time increased, the peptide underwent conformational changes, forming predominately β-sheet structures (Fig. 3C). After 7 d, there was an extensive network of fibrils (supplemental Fig. 1D, available at www.jneurosci.org as supplemental material). The appearance of fibrils correlated with maximum ThT fluorescence (Fig. 3A). These results on Aβ amyloid formation are in accordance with previously published studies on Aβ aggregation (Klunk et al., 1989; Jarrett and Lansbury, 1992; Sunde et al., 1997; Kowalewski and Holtzman, 1999; Tjernberg et al., 1999; Kirkitadze et al., 2001).
G25L had initial ThT fluorescence similar to that of the WT peptide; however, differences were observed at later stages of aggregation. Initially, G25L had low ThT fluorescence (Fig. 3A) and an unordered structure, as indicated by CD spectroscopy (Fig. 3B). These structures underwent conformational changes from random coil to β-sheet during the course of aggregation, as indicated by CD spectroscopy (Fig. 3C). The increase in β-sheet content of G25L was similar to that of the WT peptide. However, after a day, there was a sharp increase in ThT fluorescence, much more intense than that observed for the WT peptide (Fig. 3A), indicating that amyloidogenic material was being generated faster than for WT Aβ. By day 7, a network of fibrils similar to that of WT was observed (supplemental Fig. 2, available at www.jneurosci.org as supplemental material). Although the final fibril network is similar to WT, the ThT and CD data indicated that G25L had a much faster rate of aggregation.
The G29L peptide had an aggregation profile similar to that of WT. The ThT fluorescence on day 0 was minimal (Fig. 3A). Pseudospherical structures (supplemental Fig. 3A, available at www.jneurosci.org as supplemental material) were seen on this day with no ordered structure (Fig. 3B). ThT fluorescence gradually increased, reaching a maximum on day 7 (Fig. 3A). After 7 d of incubation, fully formed fibrils were present (supplemental Fig. 3D, available at www.jneurosci.org as supplemental material). The fibril formation process of G29L was therefore seen to follow a similar trend to WT.

Figure 2. Cell viability assay after treatment with Aβ peptides. Primary cortical neurons were grown at low density (1.25 × 10^5 cells/cm^2) for 6 d, and the viability of these cells after peptide treatment was determined by measuring the inhibition of MTS reduction. Cortical neurons were treated with 15 μM peptide for 96 h in serum-free media. Results are expressed as percentage of cell viability, mean ± SEM. Cell toxicity assays were done in triplicate and repeated at least three times. A one-way ANOVA using Tukey's multiple-comparison tests comparing WT to the other GSL peptides was performed (*p < 0.01; **p < 0.001).
The fibril formation profiles that differed the most from WT were those of G33L and G37L, especially during the initial stages of incubation. Although there was a similar transition of secondary structure from random coil on day 0 to predominantly β-sheet thereafter (Fig. 3B,C), these two peptides had a greater rate of fibril formation, as shown in the ThT studies (Fig. 3A). This indicated that these peptides underwent rapid fibril formation once in solution. EM studies showed that fibrils of a different morphology compared with the WT were detected after 7 d of aging (supplemental Figs. 4, 5, available at www.jneurosci.org as supplemental material), with aggregates showing smaller, more branched fibrils than observed for WT Aβ.
Detection of soluble oligomers using SELDI-TOF MS; GSL peptides have reduced concentrations of small soluble oligomers

SELDI-TOF MS is a mass spectrometry method allowing for detection of oligomers based on their differential molecular weights. The hydrophobic nature of the peptides facilitates their interaction with the carbon molecules on the surface of the H50 ProteinChip array used in this study. Therefore, by using SELDI-TOF MS, the different oligomeric states formed during aggregation of the various Aβ peptides were compared and correlated with toxicity.
In addition to the monomer, this method detected a range of WT oligomeric species, from dimers up to octamers (Fig. 4A), with the most predominant species present being the dimer when Aβ was aged for a day; a representative complete SELDI-TOF MS spectrum is shown in supplemental Figure 6 (available at www.jneurosci.org as supplemental material). There were distinct differences in the oligomeric profile of the GSL peptides compared with WT Aβ. Although all oligomeric species up to an octamer were detected for the WT, only species up to trimers were detected for the GSL peptides. In addition, there was a reduction in the levels of these oligomers compared with the WT (Fig. 4B). The GSL dimers were not as pronounced as those of the WT, with monomer levels being much higher than those of dimers. This indicated that there was a reduction in the quantity of smaller oligomers present in solution for the GSL peptides compared with the WT.
Aβ aggregation has long been recognized as a necessary condition for toxicity, and it has been hypothesized that dimers and trimers of Aβ are the principal toxic species (Podlisny et al., 1995; Roher et al., 1996, 2000; McLean et al., 1999); therefore, we examined whether there was a correlation between the levels of detectable monomer and the various oligomeric species and the toxicity of the respective peptides. The data shown in Figure 4 indicate that there is a negative correlation between the levels of monomer and toxicity (r^2 = 0.96; p = 0.004) (Fig. 4C), whereas there is a highly significant positive correlation between the levels of dimer (r^2 = 0.8267; p < 0.05) and trimer (r^2 = 0.996; p < 0.0005) (Fig. 4D,E) of the various peptides and their respective toxicity levels. In addition, it was observed that a dose-dependent increase in toxicity of Aβ was accompanied by a similar dose-response increase in the percentage of dimers present in the Aβ solution (Fig. 4F). At 5 μM, Aβ WT cell viability was 92% with corresponding monomer and dimer levels of 46 and 54%, respectively, whereas at 15 μM, Aβ WT cell viability was at 52% with corresponding monomer and dimer levels at 12 and 87%, respectively. This further demonstrates the crucial role small soluble oligomers may play in Aβ toxicity.
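The underlying statistics here are simple; as a minimal sketch of how such r^2 and p values can be obtained, assuming simple linear regression of oligomer abundance against toxicity across the five peptides (the function and argument names below are ours, not from the study):

    from scipy.stats import linregress

    def correlate_with_toxicity(oligomer_pct, toxicity_pct):
        # Linear regression of oligomer abundance (% of total SELDI-TOF MS
        # signal) against cell toxicity; returns r-squared and the two-sided
        # p-value, i.e., the quantities quoted in the text for Fig. 4C-E.
        fit = linregress(oligomer_pct, toxicity_pct)
        return fit.rvalue ** 2, fit.pvalue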
Detection of small oligomers bound to lipid surface by SELDI-TOF MS; GSL peptides have reduced oligomers detected on the lipid membranes
The SELDI-TOF MS method was extended to investigate the interaction of oligomeric species with membrane surfaces because various studies have shown a correlation between lipid interactions and Aβ toxicity (Kayed et al., 2004; Ambroggio et al., 2005).
Detection of oligomeric forms of Aβ peptides on a membrane surface was accomplished using a novel method developed in our laboratory (Giannakis et al., 2008). H50 ProteinChip arrays were coated with liposomes; the carbon molecules on the surface were able to interact with the hydrophobic chains of the lipids via hydrophobic interactions, thereby coating the surface of the chip with a lipid monolayer. Oligomers binding to the lipid surface were then detected by mass spectrometry. As a positive control, melittin, a bee venom protein that is known to bind to lipid membranes, demonstrated selective binding to the lipid-coated arrays, which confirmed the integrity of the lipid layer, whereas BSA (negative control) showed minimal binding to the lipid surface. The converse was true for the H50 arrays not coated in lipid (Giannakis et al., 2008). Figure 5 shows the differential binding affinity of the oligomers to the lipid monolayer. For the WT peptide, only monomers, dimers, and trimers could be detected binding to the lipid monolayer (Fig. 5A), although mass spectrometry analysis of the peptide in solution indicated that up to octamers were present (Fig. 4A). However, assessment of the AUC values revealed only small levels of dimers and trimers of G25L and G29L bound to the lipid monolayer, whereas the G33L and G37L peptides had minimal oligomer binding to the lipid, with only monomers detected (Fig. 5C). Therefore, one of the obvious differences between WT and the GSL peptides is the lipid binding ability of the dimeric species. Whereas predominantly dimeric species of WT were detected on the lipid after a day of aging (Fig. 5B), there was a significantly lower level of G25L and G29L dimers detected, with minimal binding of G33L and G37L dimers to the lipid surface (Fig. 5C).
Given that small soluble oligomers of Aβ have previously been implicated as the toxic species via membrane interactions (Podlisny et al., 1995; Roher et al., 1996, 2000; MacKenzie and Engelman, 1998; McLean et al., 1999; Walsh et al., 2002; Kayed et al., 2004; Tickler et al., 2005; Lesné et al., 2006), the ability of the dimer and trimer of each peptide to bind to the lipid, as detected by SELDI-TOF MS, was correlated to toxicity (Fig. 6B,C). Correlations of dimer and trimer to toxicity gave r^2 values of 0.96 and 0.90, respectively. Furthermore, monomers were negatively correlated to toxicity (r^2 = 0.956; p = 0.004), as seen in Figure 6A.
Annexin V inhibition of oligomeric lipid binding
Annexin V, which has a high affinity for phosphatidylserine lipid head groups, has previously been shown to inhibit Aβ toxicity by stopping Aβ binding to cell membranes (Lee et al., 2002). To investigate whether annexin V was able to alter the oligomeric profile of Aβ binding to lipid membranes, the binding of oligomeric Aβ to lipid membranes as determined by SELDI-TOF MS was repeated in the presence of annexin V. The results shown in Figure 7 indicated that, in the presence of annexin V, there was a large and specific decrease in the amount of dimer and trimer of WT Aβ42 bound to the lipid, a 70 and 85% reduction, respectively; however, because of the relatively small amounts of trimer observed, only the reduction in dimer reached statistical significance (p = 0.009). Importantly, there was no decrease in the amount of monomer bound to the lipid in the presence of annexin V, implying that the inhibition of dimeric and trimeric Aβ occurred through inhibiting a specific interaction between the oligomers and the lipid, and that the nature of this interaction is different from the interaction that the monomer has with the lipid.

Figure 4. A, Detection of oligomeric Aβ species using SELDI-TOF MS. Oligomeric species up to octamers were detected for WT Aβ. Oligomeric species of up to trimer were observed for the G25L, G33L, and G37L peptides. Tetramers were also detected for G29L. B, Areas under the curve of peaks as seen in A were used to quantify oligomers. The values shown here are expressed as percentage of total oligomeric detection after a day of aging. This assay was performed in duplicate and repeated at least three times. Correlations of the amount of monomer, dimer, and trimer present with toxicity are shown in C-E, respectively. The quantity of oligomers detected by SELDI-TOF MS after a day of incubation is represented on the y-axis and correlated to cell toxicity on the x-axis. Statistically significant correlation of monomer (r^2 = −0.96; p = 0.0035) (C), dimer (r^2 = 0.83; p = 0.03) (D), and trimer (E) detected during aggregation to toxicity (r^2 = 0.99; p = 0.0004). F, Graph showing concentration-dependent decrease in cell viability with Aβ42 WT. Concentrations of 5 and 15 μM Aβ were used. Similarly, monomers and dimers were also analyzed after a day of incubation using SELDI-TOF MS on a H50 hydrophobic surface at these concentrations. SELDI-TOF MS analysis revealed a similar dose-dependent decrease in the percentage of monomers present in the Aβ solution as well as a concentration-dependent increase in the percentage of dimers. Error bars indicate SEM.
Discussion
Alzheimer's disease is a neurodegenerative disorder that is related to protein misfolding and aggregation. The aggregation process induces fibril formation by Aβ peptides; it is believed that along the aggregation pathway toxic intermediates such as small soluble oligomers are generated (Walsh et al., 2002; Walsh and Selkoe, 2004; Lesné et al., 2006). Because it has been shown in theoretical studies that the GxxxG repeat motif within the Aβ sequence may have a role in fibrillization, investigation of peptides with alterations in this motif may provide insight into how aggregation of Aβ relates to toxic species generation (Liu et al., 2005). In addition, the GxxxG repeat motif has also been implicated in helix-helix interactions in various membrane proteins (Russ and Engelman, 2000).
The GSL peptides used in this study have leucine substitutions at the respective glycine residues; leucine and not alanine was used to replace the glycine residues because AxxxG and GxxxA motifs have also been implicated in modulating protein/protein interactions (Kleiger et al., 2002). The GSL peptides were seen to undergo a conformational transition to β-sheet and form fibrillar material (Fig. 3). These results do not support the theoretical studies undertaken by Liu et al. (2005), who postulated that glycine residues within the GxxxG repeat motif of Aβ facilitated amyloid formation and that substitution of these residues would inhibit fibril formation. Not only were these peptides capable of forming β-sheet fibrils, they formed more fibrils and at a faster rate than the WT. This effect on fibril formation may be attributable to the increase in hydrophobicity; according to the scale of Kyte and Doolittle (1982), replacing a glycine residue with leucine leads to a 0.15 increase in the mean hydrophobicity of the peptide. The effect of hydrophobicity on the rate of peptide aggregation has been previously demonstrated (Calamai et al., 2003; Chiti et al., 2003).
The lack of toxicity seen for these GSL peptides is consistent with literature showing that some effective inhibitors of Aβ-induced toxicity appear to alter Aβ aggregation by increasing the rate of peptide aggregation (Pallitto et al., 1999). The relationship between rate of aggregation and toxicity shown in this study may have parallels with plaque formation in vivo. Amyloid plaques, although being the main pathological marker of AD, do not correlate with disease progression, and it has been postulated that the deposition of plaque is a protective mechanism against the toxicity of soluble Aβ (Cuajungco et al., 2000). Plaque formation could therefore be a process by which the body attempts to deal with Aβ misfolding and aggregation by partitioning Aβ peptides into nontoxic aggregates via accelerating fibril formation. Indeed, elements that are associated with the plaques, such as zinc and neuroserpin, are known to accelerate the aggregation of Aβ peptides and have been shown to inhibit Aβ toxicity (Cuajungco et al., 2000; Kinghorn et al., 2006). Results seen here are also consistent with those of Cheng et al. (2007), in which transgenic mice carrying the Aβ Arctic mutation, which has the propensity to increase Aβ fibrillization, had normal neurological functions although there was an increase in plaque load.

Figure 5. A, Detection of oligomeric Aβ species binding to lipid via SELDI-TOF MS. Oligomeric species up to tetramers could be seen binding to the lipid. Oligomers belonging to the G25L and G29L peptides exhibit diminished ability to bind to lipid, whereas no oligomeric species for G33L and G37L were detected on the lipid. The membrane consists of equal ratios of POPC and POPS (20 mM of each lipid). B, Detection of wild-type Aβ oligomers binding to lipid via SELDI-TOF MS. Aβ WT dimeric species are found to be more abundant on the lipid compared with monomeric and trimeric species. C, Dimers of each peptide were compared; dimers of the WT are detected more readily on the lipid surface than the other GSL dimers. Values are quantified using the area under the curve for the oligomeric peak obtained by SELDI-TOF MS. The signals are expressed as percentage of total oligomers binding to the lipid after a day of aging. The synthetic lipid binding assay was done in duplicate and repeated at least three times. A one-way ANOVA using Tukey's multiple-comparison tests comparing dimers of WT and GSL peptides was performed (*p < 0.01). Error bars indicate SEM.
Interestingly, accelerated fibril formation resulted in a decreased ability of these peptides to generate small soluble oligomers. Results obtained from SELDI-TOF MS indicated that these GSL peptides have much lower levels of dimers and trimers than the WT, with minimal presence of "higher order" oligomers (Fig. 4) during the early stages of aggregation. This is consistent with the concentration-dependent toxicity of WT Aβ, which showed an increase in the percentage of oligomers present at higher, more toxic, peptide concentrations (Fig. 4F). Hence, there appears to be a relationship between the formation of such small oligomeric species and toxicity, with monomer disappearance associated with increased toxicity (Fig. 4C-E). This is in accordance with previous studies indicating that production of soluble oligomeric intermediates was responsible for Aβ toxicity (Yankner et al., 1989; Stern et al., 2004; Cappai and Barnham, 2007). Such oligomers have been shown to be toxic in CNS slice cultures (Lambert et al., 1998) and to inhibit hippocampal long-term potentiation in rats (Walsh et al., 2002). In particular, oligomers of low molecular weight such as dimeric and trimeric forms of Aβ are prime suspects in instigating toxicity. Putative dimers and trimers were detected in the culture media of Chinese hamster ovary cells expressing endogenous or transfected amyloid precursor protein (Podlisny et al., 1995). Likewise, dimers and trimers isolated from AD brain amyloid deposits were able to elicit neuronal death (Roher et al., 1996, 2000; McLean et al., 1999).
Because these small oligomers are thought to exert their toxic effects through membrane interactions, we extended the SELDI-TOF MS system to investigate the interaction of these oligomers with a lipid surface (Kayed et al., 2004; Tickler et al., 2005; Smith et al., 2006). In this method, a monolayer of lipid was coated onto the surface of a hydrophobic ProteinChip array. Any species interacting with the lipid layer were detected via mass spectrometry. By using this novel assay, we were able to demonstrate that there is a correlation between peptide toxicity and the lipid membrane binding propensity of Aβ dimers and trimers. This is consistent with the concept that Aβ toxicity is manifested via its interaction with neuronal cell membranes (Kayed et al., 2004; Tickler et al., 2005; Smith et al., 2006). Investigating this lipid system, we were only able to detect significant amounts of WT monomeric, dimeric, and trimeric species on the lipid, although there is a much greater range of oligomers present in the sample solution (Fig. 5). WT dimers were the most abundant of all WT species detected on the lipid membrane after a day of incubation (Fig. 5B). This is consistent with in vivo studies showing that Aβ dimers accumulate in lipid rafts at a time when memory impairment begins in Tg2576 mice (Kawarabayashi et al., 2004). These Aβ dimer levels increase steadily from 6-month-old mice and become the major form of Aβ present in lipid rafts in 11-month-old Tg2576 mice.
A reduction in the amount of detectable oligomers was observed for the GSL peptides on the lipid surface, with G33L and G37L having no detectable oligomers on the lipid surface. Therefore, the ability to generate significant quantities of oligomers capable of binding to a lipid membrane correlates with the respective toxicities of the various peptides (Fig. 6B,C). The negative correlation of monomer to toxicity (Fig. 6A) further validates the role of dimers/trimers in Aβ toxicity. This specificity of the dimers/trimers for a role in Aβ toxicity was confirmed with annexin V inhibition of oligomer binding to membranes. Lee et al. (2002) have shown that annexin V is able to inhibit binding of Aβ to lipid membranes by competitively binding to the negatively charged phosphatidylserine (PS) head groups. This inhibition resulted in attenuated Aβ toxicity (Lee et al., 2002). Moreover, it has been reported that cells with exposed PS were more sensitive to Aβ toxicity than non-PS-exposed cells (Simakova and Arispe, 2007). Annexin V in this study was able to specifically inhibit dimeric and trimeric species of Aβ from binding to a lipid surface with no effect on monomeric levels, suggesting that the dimer/trimer binds to the lipid in a different manner than the monomer. Studies performed by Kayed et al. (2004) showed that binding of soluble oligomers to lipids increased conductance across the bilayer membrane, an indication of membrane disruption. However, fibrillar Aβ did not have any effect on the membrane conductance, suggesting limited lipid binding by these species.

Figure 6. Correlation of membrane binding by monomeric, dimeric, and trimeric species with toxicity. The detection of monomer, dimer, and trimer by SELDI-TOF MS on day 1 is represented on the y-axis and correlated to cell toxicity of the peptides. Statistically significant correlation of monomer (r^2 = −0.96; p = 0.004) (A), dimer (r^2 = 0.96; p = 0.041) (B), and trimer (C) detected on lipid to toxicity (r^2 = 0.97; p = 0.013).
In conclusion, the GxxxG repeat motif is reported here to modulate the formation of oligomeric species of Aβ. Modification of this motif led to an increased rate of amyloid formation. SELDI-TOF MS results showed that the increased rate of fibril formation led to a decrease in the formation of toxic oligomeric species. In solution, Aβ peptides exist as an interconverting ensemble of various oligomeric forms, and recent studies have implicated dimers and trimers as the potential toxic species (Shankar et al., 2008); our data provide evidence that is consistent with this literature. The differential pattern of toxicity, oligomer formation, and lipid binding ability of the GSL peptides, along with the specific inhibition of the lipid binding by annexin V, implicates dimeric and trimeric species of Aβ interacting with lipid membranes as key modulators of Aβ toxicity.

Figure 7. Annexin inhibition of lipid binding by oligomeric Aβ species. Annexin V was incubated on the lipid membrane for 10 min before addition of Aβ42 WT. The lipid binding assay is the same as the one described in Figure 5. Aβ42 WT was incubated for 1 d, shaking at 37°C, before addition onto the lipid system. A, SELDI-TOF MS spectra of the monomer, dimer, and trimer with and without the presence of annexin V. B, Values are quantified using the area under the curve for the respective peaks obtained by SELDI-TOF MS. The signals are expressed as percentage of dimer present in the absence of annexin V. This assay was done in duplicate and repeated three times. A paired t test was performed comparing the oligomers in the presence and absence of annexin (*p < 0.01). Error bars indicate SEM. | 8,519 | 2008-11-12T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Theoretical Estimation of Evaporation Heat in Paper Drying Process Based on Drying Curve
At present, the theoretical estimation of the paper web's evaporation heat is based on sorption isotherms. The measuring conditions are harsh, and the test speed is slow. This paper attempts to explore a theoretical method that can quickly determine the evaporation heat of the paper web. In the new method, based on the measurement of the paper drying curve, a theoretical estimation model of paper evaporation heat was obtained by deriving the mechanism of heat and mass transfer. Compared with the traditional method based on sorption isotherms, the new model based on the drying curve has advantages in measurement speed and easy access to basic data. Finally, the paper verifies the reliability of the model in two application scenarios, the laboratory and the production line. The results calculated by the new method based on the drying curve are similar to those of the old method based on the measurement of the sorption isotherm in both scenarios. This shows that the new method can achieve the credibility of the old method. This paper provides a simple tool for estimating the heat of evaporation in the drying process, and it can also be used to estimate the total energy consumption of the drying process. Taking the total energy consumption as the objective, an optimal drying curve can be found through optimization calculation, and the drying process can then be optimized and adjusted. The focus of our next study is the quantitative application of this model to optimal operation during paper drying.
Introduction
Paper is a kind of hygroscopic material with a porous structure, mainly composed of fibers and other solid particles, such as fillers, sizing materials, additives and so on. Paper products play a significant role in every area of human activity, such as the recording, storage and dissemination of information, wrapping and packaging, writing and printing and so on. Papermaking is a basic raw material industry, closely related to the national economy.
The paper-making process is essentially a very large dewatering operation. The major sections of the paper machine consist of the forming section, press section, and dryer section. The dryer section removes between 1.1 and 1.3 kg of water per kg of paper production, compared with the 200 and 2.6 kg removed in the forming and press sections, respectively [1]. Although the dryer section is responsible for a small fraction of total dewatering, it is the major energy consumer in the paper mill, because porous and hygroscopic pulp fibers hold hard-to-remove water that is considered to be located in the fiber cell wall and trapped in the fiber network geometry. According to the report prepared by the Institute of Paper Science and Technology (IPST), 61.9% of the total energy required for paper making is consumed in the paper drying process [2], and about 65% of the thermal energy is used for water evaporation based on Chen's investigation [3,4]. In spite of its key role in papermaking and its high energy consumption (taking approximately 60% of the total physical length and accounting for almost 40% of the total capital cost of a common paper machine), paper drying is arguably the least understood papermaking operation. Perhaps the reason is the complexity of the paper drying process, which involves heat transfer, evaporation, and the water removal process, where steam pressure, air conditions, and condensate removal play key roles in determining the drying capacity and final product quality. Papermakers often treat the dryer section as a "black box". Nevertheless, rising energy costs are forcing papermakers to pay more attention to the dryer section, and especially steam usage.
The thermal energy consumed during paper drying is mainly used to evaporate the water in the moist paper. According to the diverse binding forces, the water in moist paper can be divided into different types, such as free water and bound water, whose thermodynamic and physical properties differ. For bound water, the evaporation heat is used not only to vaporize the water but also to overcome the interaction between cellulose fibers and water. The quantitative measurement of evaporation heat is important for understanding the interaction between fibers and water (the drying mechanism) and the resulting impact on paper drying behavior, in order to optimize the capital and operating costs of the dryer section.
At present, there are few studies on the quantitative measurement of evaporation heat, mainly because heat energy flow is difficult to measure directly. Heikkilä [5] studied the evaporation heat of coated paper and found that the evaporation heat of bound water is composed of two parts: the latent heat of vaporization and an extra amount of energy called the heat of sorption. The latent heat of evaporation is the same as that of pure water, which can easily be obtained by consulting the physical property data of water, but data on the sorption heat are difficult to obtain. At present, the traditional way to obtain the sorption heat is to establish a theoretical model based on the Clausius-Clapeyron equation, and the theoretical estimation needs the data of the paper sorption isotherm [1]. Taking mechanical pulp as an example, the theoretical estimation takes the following form: Equation (1) is the calculation model of the sorption heat (∆H_sorp), where R_v is the gas constant, T_p is the paper temperature, and u is the paper moisture; Equation (2) is an example of a fitting equation for the sorption isotherm (ϕ), where c_1, c_2, c_3, and c_4 are the equation parameters. This model has been widely used in modeling and simulation of the paper drying process. For instance, Slätteke [6] established a complete simulation model for the drying section, implemented in the object-oriented modeling language Modelica, in order to study the feedback control of paper drying. Roonprasang [7] made a theoretical study on the modeling and simulation of the impact of new design configuration geometry on the drying process by means of a theoretical estimation model of evaporation heat. Karlsson [8,9] presented several models for the drying section to describe the phenomena at a steady state; new control strategies applicable both for steady-state control and for grade changes were derived. Heo [10] used the theoretical estimation model of evaporation heat to model and simulate the state change of paper in the drying section, trying to solve the problem of the "black box" in the dryer section. Åkesson [11] used the theoretical estimation model of evaporation heat to present a Modelica library, DryLib, which enables users to rapidly develop complex models of the dryer section; in addition, parameter optimization by means of non-linear model predictive control is treated.
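Equations (1) and (2) can be written in the form commonly used in the paper-drying literature; this is a hedged sketch, since the exact placement of the coefficients c_1-c_4 in the isotherm fit is assumed here, not taken from this text:

\Delta H_{sorp} = R_v \, T_p^2 \, \frac{\partial \ln \varphi}{\partial T_p} \bigg|_{u} \qquad (1)

\varphi(u, T_p) = 1 - \exp\!\left( -c_1 u^{c_2} - c_3 T_p u^{c_4} \right) \qquad (2)

Differentiating the fitted isotherm (2) with respect to T_p at constant moisture u and inserting the result into (1) then yields the sorption heat as a function of moisture and temperature.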
Nevertheless, there are still many inconveniences in the practical application of the above model based on the sorption isotherm. The main reason is that sorption isotherm data for the different paper grades in industry are very scarce. The sorption isotherm is a correlation between relative humidity and the equilibrium moisture content. The measurement of the sorption isotherm is time-consuming and difficult, requiring strict test conditions such as constant temperature and humidity [12,13]. Measurement of sorption isotherms is a common practice in many fields of science and engineering, such as food science and material science. Traditional methods to determine sorption isotherms usually rely on conditioning above saturated salt solutions, which keep a constant relative humidity, and on gravimetric methods to determine the moisture content of the sample. In order to reduce the systematic error caused by the change of relative humidity around the sample during sampling and weighing, the whole experimental process, including water balance and weighing, needs to be kept at the same relative humidity. Many different relative humidities are needed to determine a complete sorption isotherm, and at each relative humidity the water balance of the sample takes a long time. Therefore, the measurement of a sorption isotherm is very time-consuming.
The sorption isotherm of paper depends on the structure of the paper and on the fibers and other raw materials used to manufacture the paper. Keränen [14] studied the sorption isotherms of handmade paper with different filler contents. Fitting the experimental data with the Soininen model, it was found that the model parameters were obviously different for the different filler contents. Popescu [15] studied the adsorption isotherm of wood and reached a similar conclusion. There are more than 1000 kinds of paper in the current paper-making industry, and according to market demand, each kind of paper's manufacturing process is frequently adjusted during actual production. How to rapidly estimate the evaporation heat of the frequently changing paper, and then guide the dryer section to optimize the process parameters in time, is a new problem that paper enterprises encounter in the pursuit of energy saving and consumption reduction. The traditional estimation method based on the sorption isotherm obviously cannot achieve the purpose of rapid measurement and cannot adapt to rapidly changing market demand. Therefore, developing a simple approach for rapidly measuring the evaporation heat of paper is urgent.
The drying curve is an overall indication of the paper drying process, covering all the information generated during paper drying, including the paper's evaporation heat [16,17]. The current study attempts to explore a simple method for rapidly measuring evaporation heat based on drying curve measurement. Generally, the drying curve shows the change in the moisture content of the material with drying time. In the papermaking industry, it is also expressed as a correlation between paper moisture content and drying position (cylinder number) [18]. The new method starts from the measurement of the drying curve and no longer needs the sorption isotherm, which makes the measurement of evaporation heat time-saving and convenient.
Theoretical Model Based on Drying Curve
Moist paper contains water in different fractions: free water in large pores and bound water in micro pores. These fractions have different thermodynamic and physical properties, such as different vapor partial pressures, enthalpies, and so on. The thermodynamic and physical properties of free water are the same as those of bulk water. The binding force of free water is small, so it generally evaporates first in the drying process. Drying decreases the level of fiber swelling by irreversibly closing the pores in the fiber cell wall. The removal of bound water needs to overcome such resistance, so as drying proceeds it becomes more and more difficult to remove water, and more heat energy is consumed.
During the drying process, wet paper passes through three distinct phases: the preheating phase, the constant drying rate phase, and the falling drying rate phase. In the preheating phase, part of the heat of the heating medium is used to heat the paper, and the paper temperature increases rapidly, but the moisture content of the paper changes little with the drying time. Then, the paper drying enters the constant drying rate phase, and the moisture content of the paper is basically linear with the drying time. The slope is constant, that is, the drying rate is constant. At this time, the heat transferred by the heating medium to the paper and the heat required by the evaporation of the water in the paper are in balance, and the surface temperature of the paper remains unchanged, being just equal to the wet bulb temperature of the hot air around the paper surface. After a period of time, when the moisture content of the paper falls below the critical moisture content (CMC), the paper drying enters the deceleration stage. In this stage, the drying characteristic curve tends to flatten, gradually approaching the equilibrium moisture content (EMC), and the drying rate decreases. Part of the heat provided by the heating medium is used to evaporate water, part is used to heat the paper, and the surface temperature of the paper continues to rise. The above idealized scheme occurs if drying conditions are similar over the entire drying process. In commercial dryer sections, the constant rate phase often does not exist.
The schematic diagram of the evaporation heat measurement based on the drying curve is shown in Figure 1. It shows that a one-to-one correspondence between the evaporation heat curve (Figure 1a) and the drying curve (Figure 1b) can be established theoretically during paper drying. The specific derivation process is as follows.

The evaporation heat (∆H) in the paper drying process refers to the energy consumed by the evaporation of water inside the paper sheet and its final release. As shown in Figure 1a, it is a curve where the y-axis is the evaporation heat (∆H) and the x-axis is the paper moisture (u). u is a state variable, which can easily be obtained by weighing the paper sheet; ∆H is a process variable, which can be calculated by Equation (3).

Energy consumption (∆E) is difficult to measure directly. According to the principle of energy conservation, it can be measured indirectly through the energy consumption of the drying system. As shown in Equation (4), K is the effective power of the drying system; over a period of time (t_1~t_2), the energy consumption (∆E) can be calculated by an integral formula, and the right-side approximation holds under the assumption that K is a constant.
Water evaporation (∆W) is also a process variable, a function of the drying rate (R) and time (t), as shown in Equation (5), where M_dp is the mass of the dry paper.
From Equations (3)-(5), the evaporation heat (∆H) can be calculated by Equation (6), under the following assumptions: (1) drying conditions are consistent; (2) energy loss is ignored, i.e., the energy provided by the drying system is used only for paper drying; (3) the absolute dry weight of the paper does not change during drying.
A relationship model between the evaporation heat (∆H) and the first derivative of the paper moisture (∆u/∆t) has been established in Equation (7). That means the evaporation heat curve can be obtained as long as the drying curve (Figure 1b) is measured. K/M_dp is the model parameter.
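From the variable definitions just given, Equations (3)-(7) can be written as follows; this is a reconstruction consistent with the surrounding text, and the signs and integration limits are our inference:

\Delta H = \frac{\Delta E}{\Delta W} \qquad (3)

\Delta E = \int_{t_1}^{t_2} K \, dt \approx K \,(t_2 - t_1) \qquad (4)

\Delta W = M_{dp} \int_{t_1}^{t_2} R \, dt = M_{dp}\left[ u(t_1) - u(t_2) \right] \qquad (5)

\Delta H = \frac{K \,(t_2 - t_1)}{M_{dp}\left[ u(t_1) - u(t_2) \right]} \qquad (6)

\Delta H = -\frac{K}{M_{dp}} \left( \frac{\Delta u}{\Delta t} \right)^{-1} \qquad (7)

The minus sign in (7) reflects that u decreases with time, so ∆u/∆t < 0 and ∆H remains positive.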
Through theoretical analysis, it is found that it is feasible to estimate the heat of evaporation theoretically by measuring the drying curve, but there is still a key problem to be solved: determining the constants K and M_dp. The method for determining the model parameter K/M_dp is introduced in Figure 2. In the initial drying period, free water evaporates preferentially. The thermodynamic and physical properties of free water are the same as those of bulk water, so its evaporation heat is just the latent heat of bulk water (∆H_lat). Therefore, Equation (8) can be obtained according to the law of energy conservation, where u_∞ is the paper moisture in the initial drying period, mainly referring to the constant drying rate phase.
∆H ≈ ∆H_lat, for u ≥ u_∞ (8)
The model parameter (K/M_dp) can then be calculated as follows. At atmospheric pressure, ∆H_lat is a single-variable function of the evaporation temperature, which is approximately equal to the paper temperature (T_p). By consulting the physical property data of water, the fitting Equation (9), obtained by regressing the physical property data of water at atmospheric pressure, can be used to calculate the latent heat of water vaporization (∆H_lat). Finally, we find that the evaporation heat is related to the drying curve (Figure 1b) and the paper temperature at atmospheric pressure. It can be calculated by Equation (10), which is obtained by substituting Equation (8) into Equation (6).
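Consistent with the surrounding text, the parameter calibration and the final model can be sketched as follows; the numerical coefficients in (9) are a standard linear regression of steam-table data, shown here as an assumed example rather than the paper's own values:

\Delta H_{lat} = -\frac{K}{M_{dp}} \left( \frac{\Delta u}{\Delta t} \right)^{-1} \bigg|_{u \ge u_\infty} \quad \Rightarrow \quad \frac{K}{M_{dp}} = -\Delta H_{lat} \cdot \frac{\Delta u}{\Delta t} \bigg|_{u_\infty} \qquad (8)

\Delta H_{lat}(T_p) \approx 2501 - 2.37 \, T_p \ \ \text{kJ/kg}, \quad T_p \ \text{in } ^\circ\mathrm{C} \qquad (9)

\Delta H(u) = \Delta H_{lat}(T_p) \cdot \frac{(\Delta u / \Delta t)\big|_{u_\infty}}{(\Delta u / \Delta t)\big|_{u}} \qquad (10)

In words: the calibration (8) pins K/M_dp to the constant-rate phase, where the evaporation heat is known to equal the latent heat, and (10) then scales the latent heat by the ratio of the initial drying rate to the local drying rate.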
Advantages of the New Method
In this paper, a convenient and rapid method to measure the evaporation heat of paper is proposed. The new method is based on the drying curve, while the old method is based on the sorption isotherm. Because the measurement of the drying curve is more convenient than that of the sorption isotherm, the new method has two great advantages.
As shown in Table 1, the first advantage is the measuring time. In the same laboratory environment, the old method based on the sorption isotherm generally needs 24 h in a constant temperature and humidity environment to reach water balance for each data point, whereas the new method based on the drying curve takes only 1-2 h to complete the measurement of the whole drying curve, or even less than 15 min when using a rapid moisture analyzer. For on-line measurement at the production site, the new method takes 10 min to measure a data point, while the old method cannot measure directly and can only use data measured in the laboratory. The second advantage is the measuring conditions. The measurement of the drying curve only needs a conventional experimental environment, while the measurement of the sorption isotherm needs a special experimental environment with constant temperature and humidity.
Laboratory Verification
In the laboratory, mechanical pulp was used as the raw material to make handsheets. The specific steps are as follows: Step 1: Using the Wally beater (VB-42F, China Pulp and Paper Research Institute Co., Ltd., Beijing, China), the mechanical pulp raw material was disintegrated and beaten to a specific beating degree (30 °SR).
Step 2: The pulp concentration was measured, and a certain amount of pulp was weighed out on the basis of a paper grammage of 60 g/m^2.
Step 3: The pulp was dissociated with a fiber standard dissociator (ZY-XW, Shandong Zhongyi Instrument Co., Ltd., Shandong, China) and formed into wet paper on a rapid Kaiser sheet former (ASM-32N2F, China Pulp and Paper Research Institute Co., Ltd., Beijing, China).
Step 4: The wet paper was pressed at 50 kPa on the multi-functional press (MASP22H803, China Pulp and Paper Research Institute Co., Ltd., Beijing, China).
Step 5: The sheet was sent to the rapid moisture analyzer (MB120, Aohaus Changzhou Co., Ltd., Jiangsu, China) to determine the drying curve (Figure 3), and the test data of the paper moisture (u) changing with time (t) were obtained and recorded.
The exponential model, Gaussian model, and Fourier model were used to fit the test data. The fitting curves are shown in Figure 3, and the model structures and parameter results are shown in Table 2. The Gaussian model has the best fit, with an R^2 value of 0.996; therefore, the Gaussian model is used for the subsequent evaporation heat estimation. The fitted drying curve is

u = 2.563 · exp(−((t + 78.25)/445.6)^2) (11)

Table 2. Fitting results of drying curve data in the laboratory.

Figure 4 shows a comparison of the calculation results of the evaporation heat between the new and old methods in the laboratory. The results calculated by the new method based on the measured drying curve are similar to those of the old method based on the measurement of the sorption isotherm. The MRE (mean relative error) was calculated as 0.81% by Equation (12).
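Equation (12) takes the standard mean-relative-error form consistent with the variable definitions below:

\mathrm{MRE} = \frac{1}{N} \sum_{i=1}^{N} \frac{\left| \Delta H_{1,i} - \Delta H_{2,i} \right|}{\Delta H_{2,i}} \times 100\% \qquad (12)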
where i denotes the i-th data group, N is the number of data groups, ∆H_1 is the evaporation heat obtained by the new method, and ∆H_2 is the evaporation heat obtained by the old method.
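As an illustration of the laboratory workflow, the following Python sketch fits the Gaussian model of Equation (11) to a drying curve and evaluates the evaporation heat via Equation (10). It is not from the paper: the data are synthetic stand-ins, the latent-heat fit plays the role of Equation (9) with assumed steam-table coefficients, and all names are ours.

    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(t, a, b, c):
        # Gaussian drying-curve model: u(t) = a * exp(-((t - b) / c)**2)
        return a * np.exp(-((t - b) / c) ** 2)

    def dudt(t, a, b, c):
        # Analytical derivative du/dt of the fitted drying curve
        return -2.0 * (t - b) / c ** 2 * gaussian(t, a, b, c)

    # Synthetic stand-in for a measured drying curve (t in s, u in kg water
    # per kg dry paper); in practice these arrays come from the analyzer log.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 900.0, 60)
    u = gaussian(t, 2.5, -80.0, 450.0) + rng.normal(0.0, 0.01, t.size)

    popt, _ = curve_fit(gaussian, t, u, p0=(u.max(), -50.0, 400.0))

    def latent_heat(tp_c):
        # Approximate latent heat of water in kJ/kg (linear steam-table fit),
        # standing in for Equation (9); tp_c is the paper temperature in C.
        return 2501.0 - 2.37 * tp_c

    # Equation (10): scale the latent heat by the ratio of the constant-rate-
    # phase drying rate to the local drying rate.
    t_const = 100.0  # a time assumed to lie inside the constant-rate phase
    dh = latent_heat(60.0) * dudt(t_const, *popt) / dudt(t, *popt)
    print(popt, dh[:5])

The same fit-then-differentiate procedure applies to the industrial drying curve, with the cylinder number N replacing time on the x-axis.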
The laboratory verification results show that the new method can achieve the credibility of the old method, while greatly simplifying the measurement of the water evaporation heat in the paper drying process.
Industrial Verification
The verification object is the pre-drying process of high-strength corrugated paper with an annual output of 80,000 tons. The paper machine has 48 dryer cylinders, a speed of 500 m/min, and a width of 4 m, and produces high-strength corrugated paper with a basis weight of 100 g/m^2. Figure 5 shows the schematic diagram of the paper drying process; the material flows in and out of the pre-dryer section are clearly marked. In actual production, paper drying is completed in the dryer section, that is, a group of horizontally arranged heated dryer cylinders. The paper is pressed onto the surface of the dryer cylinders to evaporate and dry. Production is continuous and the paper moves forward dynamically; generally, the speed of a paper machine is 1000~2000 m/min. In this case, it is difficult to obtain data on the moisture content of the paper as a function of time. Therefore, the drying curve of the production process is usually recorded as the change of the moisture content of the paper with drying position (u~N). The moisture content of the paper can be measured by a hand-held portable sensor, and the drying position is usually indicated by the cylinder number. Figure 6 shows the measurement and analysis of the drying curve in industrial production. The basis weight of the paper web was measured by a portable sensor (NDC 8110-F/104 from NDC Infrared Engineering Co., Ltd., Los Angeles, CA, USA). This kind of small single-sided sensor can be inserted into the narrow gap between the dryer cylinders and can be used for quantitative on-line measurement on the surface of the wet web without affecting normal production. The drying curve data, represented by paper moisture (u) and cylinder number (N), were recorded online as shown in Figure 6. The exponential model, Gaussian model, and Fourier model were used to fit the test data. The fitting curves are shown in Figure 6, and the model structures and parameter results are shown in Table 3. The Gaussian model has the best fit, with an R^2 value of 0.986; therefore, the Gaussian model is used for the subsequent evaporation heat estimation.
Figure 5. Schematic diagram of the paper drying process.

Figure 7 shows a comparison of the calculation results of the evaporation heat between the new and old methods in industrial production. As in the laboratory verification, the results calculated by the new method based on the measured drying curve are similar to those of the old method based on the measurement of the sorption isotherm. The MRE was calculated as 0.80%. This shows that the new method can achieve the credibility of the old method in industrial production as well.
Conclusions
In this paper, a mathematical model for rapid estimation of the water evaporation heat in the paper drying process was proposed. In the modeling, the drying curve is selected as the model input, from which the evaporation heat (∆H) can be calculated. Compared with the old method using the sorption isotherm as the model input, the new method has the advantages of easily obtained model inputs, short measurement time, and less harsh measurement conditions.
The current paper also compares and analyzes the calculation results of the new and old methods in the laboratory and in industrial production. The results calculated by the new method based on the measured drying curve are similar to those of the old method based on the measurement of the sorption isotherm in both scenarios. This shows that the new method can achieve the credibility of the old method.
This paper provides a simple tool for estimating the heat of evaporation in the drying process, and it can also be used to estimate the total energy consumption of the drying process. Taking the total energy consumption as the objective, an optimal drying curve can be found through optimization calculation, and the drying process can then be optimized and adjusted accordingly. The focus of our next study is the quantitative application of this model to optimal operation during paper drying.
| 7,606.2 | 2021-06-28T00:00:00.000 | [
"Engineering",
"Materials Science"
] |
Improving Vocabulary Mastery Through the Traditional Game “Engklek” For Children in Kalijaten Village, Kec. Taman, Kab. Sidoarjo
Keywords: Vocabulary, English, "Engklek", children, Kalijaten

Vocabulary is a basic thing that children should know before they learn English, because vocabulary makes it easier to understand and master English. In Kalijaten, children tend not to be able to master English, and they also have difficulty learning vocabulary. To help with this, the researchers implemented a method using "engklek" to help improve children's vocabulary in Kalijaten Village. Using this traditional game as a learning medium can make it easier for children to improve their vocabulary skills, and they will not get bored easily, so the possibility of improving their vocabulary increases. The purpose of holding vocabulary learning through the game "engklek" is to make it easier for children in Kalijaten Village to master vocabulary. The research method is qualitative, collecting data by means of observation of children in Kalijaten Village; the work took 10 days, in which the first and second days were used to survey the place and collect data on the children, and the game was held on the following days. The place used for this PKM research is Kalijaten Village, Taman Subdistrict, Sidoarjo Regency. The length of observation was two days, when the authors did practice in the field. The materials used were white chalk, gaco, laminating paper, and the vocabulary that had been given. The results show that 70% of the children in Kalijaten Village have low proficiency in mastering English, especially vocabulary; 20% have a standard ability; and 10% have high skills in mastering vocabulary.
A. Introduction
English is the official language of instruction in 42 nations around the world and a very popular language, studied by 1.5 billion people globally. It ranks first among the 839 languages spoken in 60 countries, ahead of French, Mandarin, Spanish, German, Italian, and Japanese (Iriance, 2018). According to Iriance (2018), Indonesia is one of three countries with a low level of English proficiency.
In English language learning, there is one very important thing to know from the outset, namely vocabulary (Kusumawati, 2017). Vocabulary in general is a language component that carries information about meaning and usage (Flyman Mattsson & Norrby, 2013). It comprises the words of a language, or the words that belong to a speaker or writer, and also a list of words arranged as in a dictionary, with short and practical explanations (Putri et al., 2020). A vocabulary is the number of words in a language, all the words a person knows or uses in a particular book, and also a list of words and their meanings, especially as contained in a foreign-language textbook (Hornby in Katemba & Sianipar, 2020).
Based on the above definitions, it can be concluded that vocabulary is the set of words in a foreign language (English) that students master. Cameron (in Katemba & Sianipar, 2020) concluded that teaching vocabulary in elementary school is difficult and that teaching vocabulary to young learners requires extra effort and techniques. Teachers should make more effort to teach them, since children have unique features that necessitate special attention (Derakhshan & Shirmohammadli, 2015; Thi To Hoa & Thi Tuyet Mai, 2016). Alam (in Katemba & Sianipar, 2020) identified various characteristics that make it harder for elementary school students to learn vocabulary. He confirmed that elementary school students face a variety of issues in dealing with English, including being too young to learn it, still preferring to play with each other during class hours, and lacking motivation to learn English.
So, to master English, children must learn vocabulary that is easy and clear (Artini, 2017). This is often an obstacle, especially for children in Kalijaten Village who want to improve their English skills. Some children use memorization to enrich their vocabulary (Copland et al., 2014). However, in our opinion, memorizing without any follow-up action to keep the memorized words in memory is of little use, especially for children, who tend to find memorization difficult and uninteresting (Syafrizal, 2019).
To solve this problem, the researcher tried to use games as a teaching medium (Noviyanti et al., 2019), especially traditional games, which have several advantages in learning English: through games, students can relax and enjoy learning, and games involve friendly competition that keeps their interest (Akbari et al., 2009). Games encourage students to be involved and to actively participate in learning activities, and vocabulary games can bring real life into the class context and improve the communicative use of English (Fitriyah & Khaerunisa, 2018). There are many traditional games in Indonesia, however; some are suitable for learning and some are not, so teachers must choose carefully. Here the author implements a traditional game called "engklek", one of the most famous Indonesian traditional games, which can also be used for teaching English to young learners (Wiranti & Mawarti, 2018). "Engklek" is quite popular, although its name differs across regions: the term "engklek" comes from Javanese; in Batak Toba the game is called "marsitekka", in Jambi "tejek-tejekan", in Sundanese "manda", in Betawi "dampu bulan", and it has many more names in other regions of Indonesia (Wiranti & Mawarti, 2018). "Engklek" is played on a flat surface on which squares are drawn in a certain pattern using chalk or scratched into the soil; each player must also have a "gaco", a thin plate that can be made from a broken piece of ceramic or a flat rock (players usually use a rock) (Supriadi & Arisetyawan, 2020). The game can be played by either gender, in groups or individually. To play, a player first throws the "gaco", then hops on one foot through the drawn squares; on a single square the player stands on one foot, and on two adjacent squares on both feet. When players reach the square before the one holding their "gaco", they stop according to the rules (one square: stand on one foot; two squares: stand on two feet), pick up the "gaco", and proceed, without stepping into the square where the "gaco" lay. After reaching the last square, the player returns the same way; a player who falls fails and returns the "gaco" to its previous place (Wiranti & Mawarti, 2018).
Apart from being fun, "engklek" can also be used in the learning process, because it can reportedly serve as a tool to remember and memorize lesson concepts (Ali & Aqobah, 2020). This holds especially for young learners under 12 years of age, who still really like to play, particularly games requiring physical action. Besides attracting students, the game also reminds children of traditional games in the midst of the modern era, often called the 4.0 era (Munawaroh, 2017). To implement the game in learning, the teacher first prepares what is needed: squares drawn on a flat surface and a "gaco" (Utami et al., 2018). The teacher must also prepare flashcards containing questions based on the material being taught. In learning, the game is organized in groups. Play proceeds as in the usual "engklek" game, but when players pick up their "gaco" they must also take the flashcard in the same square and quickly answer its question. Every child who answers correctly and completes the game according to the rules earns points for the group; those who fail do not lose points.
From the explanation above, mastering English is very important nowadays, yet not everyone in Indonesia can do so, especially the children of Kalijaten Village, Taman Subdistrict, Sidoarjo, who have low vocabulary mastery. The purpose of this activity was to make it easier for children in Kalijaten Village to improve their vocabulary skills through the "engklek" game.
B. Methodology
This research mixes qualitative and quantitative methods. Data were collected through observation. The researcher held an "engklek" game to test the children's vocabulary in Kalijaten Village: each vocabulary item was placed in a game box, and the children were asked to answer the vocabulary items provided.
This activity took 5 days: on the first and second days the researcher surveyed the location and collected data on the children in Kalijaten Village before the "engklek" game was held, and on the fifth day the researcher observed and collected data after the game. This community-service activity was carried out in Kalijaten Village, Taman District, Sidoarjo Regency. Observations were made for two days while the authors practiced in the field. The materials and tools used were white chalk, a "gaco", paper, laminating film, and the vocabulary items provided.
Data were collected through observation; the data are the children's vocabulary-mastery results in Kalijaten Village. The data were gathered by placing flashcards in each "engklek" box. The flashcards contain questions that the children must answer right away: each child must answer the flashcard in the box where the "gaco" landed. Every vocabulary item a child names is counted and converted into a grade. The data collected are qualitative, in the form of flashcards containing vocabulary. The data source is internal, taken from the environment of the object under study; in this program, the data come from the Kalijaten Village environment.
Figure 1. The Design of flashcards
Ten children in Kalijaten Village took part in the "engklek" game. The activity lasted about 50-60 minutes and was carried out in groups. Each playing group had to answer the vocabulary items provided in the "engklek" boxes. The group that answered the most vocabulary items was rewarded, keeping the children enthusiastic about playing "engklek" for vocabulary mastery.
C. Result and Discussion
"Engklek" game is a traditional Indonesian game that is almost extinct, because at this time the game of "engklek" is rarely played by children. This is due to the growing technological factor. Even so, children do not forget how to play this game. So that this game does not become completely extinct, it is necessary to have a few additions and changes in this game to make it even more interesting. Therefore, the researcher applied this game to improve vocabulary in children by adding a flashcard containing vocabulary in each "engklek" case. the way to play remains the same, the difference is that each player must take the flashcard in their gaco box. The flashcard contains questions about Indonesian vocabulary which they have to translate into English. Players must answer the questions on the flashcard. If the player manages to answer then the player can continue the game and vice versa if the player cannot answer the question then the player cannot continue the game. With this game can improve the player's English vocabulary. Players can find out new vocabulary that they don't know yet. And players who already know can recall the vocabulary they have memorized.
Figure 2. The implementation of "Engklek" in Kalijaten
This research began on September 5, 2020 in Kalijaten Village, Sidoarjo, with 10 participating children aged 7-12 years, and was carried out over 10 days. During the first 3 days the researcher surveyed the implementation site and observed the children in the village. The survey found that children in Kalijaten Village tend not to master English and have difficulty memorizing vocabulary; this became clear during 5 days of English lessons with them. Those lessons also showed that the children's proficiency could be classified: 3 children as intelligent, 5 as standard, and 2 as lacking. One day was then spent on preparations, and on the final day the traditional game "engklek" was used to help improve the children's vocabulary. Using this traditional learning medium makes it easier for children to improve their vocabulary, and since they do not get bored easily, vocabulary gains can grow. The results of the "engklek" game show that vocabulary increased in 80% of the children, judged by the number of questions they answered correctly; in the remaining 20% the increase was still lacking, judged by the few questions they answered. The reason was a lack of concentration while learning and playing "engklek"; this is not necessarily a bad sign, only that these children did not concentrate on the game and did not recall vocabulary they knew. Detailed results are given in the accompanying tables. Based on Table 1.1, all elementary-school-age children in Kalijaten Village took part in the "engklek" activity, 10 children in total. Based on Table 1.2, among the 10 participants, 5 children (50%) memorized many vocabulary items, 3 (30%) a moderate number, and 2 (20%) only a few. Based on Table 1.3, of the 10 children taking part, 3 (30%) were classified as intelligent, 5 (50%) as standard, and 2 (20%) as lacking. Based on Table 1.4, the children's interest was split evenly, 5 to 5 (50%-50%), between English lessons and other subjects, as observed over the 5 days of English lessons. Based on Table 1.5, of the 10 children, 3 (30%) often memorized vocabulary beforehand, 5 (50%) memorized only when instructed by the teacher, and 2 (20%) rarely memorized vocabulary, as observed over the 5 days of English lessons.
Based on the tables above, several factors discourage children from memorizing vocabulary, even though memorizing many words is very important for making English lessons easier. The "engklek" game method makes it easier for children to improve their vocabulary: the study results show that the game improved vocabulary in 80% of the children and helped them recall vocabulary they already knew.
D. Conclusion
The results showed that the traditional "engklek" game method can help children improve their vocabulary more easily. Beyond mere play, children also acquire a great deal of new vocabulary without the boring task of rote memorization. The game therefore deserves to be applied at school or elsewhere, so that children can expand their vocabulary while playing and increase their interest in English lessons. | 4,333 | 2021-08-31T00:00:00.000 | [
"Education",
"Computer Science"
] |
Correction of Retransformation Bias in Nonlinear Predictions on Longitudinal Data with Generalized Linear Mixed Models
Researchers often encounter discrete response data in longitudinal analysis. Generalized linear mixed models are generally applied to account for the potential lack of independence inherent in longitudinal data. When parameter estimates are used to describe longitudinal processes, random effects, both between and within subjects, need to be retransformed in nonlinear predictions on the response data; otherwise, serious retransformation bias can arise to an unanticipated extent. This study attempts to go beyond existing work by developing a retransformation method that derives a statistically robust longitudinal trajectory of nonlinear predictions. Variances of population-averaged nonlinear predictions are approximated by the delta method. The empirical illustration uses longitudinal data from the Asset and Health Dynamics among the Oldest Old study. Our analysis compares three sets of nonlinear predictions of the death rate at six time points, from the retransformation method, the best linear unbiased predictor, and the fixed-effects approach, respectively. The results demonstrate that failure to retransform the random components in generalized linear mixed models results in severely biased nonlinear predictions, as well as much reduced standard-error estimates.
Introduction
In longitudinal data analysis, researchers frequently encounter discrete response data. When the distribution of the response variable is not normal or the variance/covariance matrices are not homogeneous over time, the use of linear mixed models can lead to erroneous parameter estimates and unrealistic predictions of the response. There are different types of discrete longitudinal data, such as binary, ordinal, count, and multinomial. While there are particular model specifications and estimating procedures for each data type, the underlying expressions and statistical inferences in many of those models can be generalized by following the tradition of generalized linear models; the resulting models are referred to as generalized linear mixed models. Generalized linear mixed models incorporate subject-specific random effects, thereby addressing dependence among subject-specific observations. A variety of approaches have been advanced to yield statistically efficient, robust, and consistent estimators of the parameters of generalized linear mixed models [1][2][3][4][5][6][7].
In displaying analytic results of generalized linear mixed models, parameter estimates are often not directly interpretable due to transformation of distributional functions. With specification of the subject-specific random effects, covariates' regression coefficients in generalized linear mixed models do not necessarily describe changes in the mean response in the study population, and the actual effects must be evaluated by averaging over the distribution of specified random components [8]. The issue of interpretability in the analytic results of generalized linear mixed models thus calls for nonlinear predictions given the covariates' values, the estimated regression coefficients, and the average of random effects. When one converts a transformed linear function to predict the marginal mean at the untransformed scale, normality of the random components in the linear predictor must be retransformed to a non-normal distribution [9,10]; otherwise serious retransformation bias can arise. Even if true values of the parameters are known, the analytic results from generalized linear mixed models cannot be converted to unbiased nonlinear predictions without appropriately retransforming the random components.
In this article, we present an efficient, robust method for nonlinear predictions in generalized linear mixed models that corrects for retransformation bias. In Section 2, we briefly review general specifications of generalized linear mixed models. In Section 3, we present the classical best linear unbiased predictor for nonlinear predictions as an ancillary presentation of generalized linear mixed models. Section 4 describes the retransformation method for nonlinear predictions. Section 5 presents the approximation of the variance-covariance matrix for nonlinear predictions. Section 6 provides an example comparing the results of nonlinear predictions from the various methods. In the final section, we discuss the merits of, and remaining issues in, the methods described in this article.
Specifications and Inferences
Generalized linear mixed models are simply an extension of classical generalized linear modeling from univariate data to clustered measurements. In the longitudinal setting, let i denote subject i in a random sample of N subjects and j = 1, ..., $n_i$ index the repeated measurements nested within i. The response $Y_{ij}$ can be regarded as a discrete realization of a random variable with mean $\mu_{ij}$ and variance $\mathrm{var}(y_{ij})$. A nonlinear link function $g(\cdot)$ connects $Y_{ij}$ to the covariate vector $X_{ij}$ and a 1 × q vector of subject-specific random effects $b_i$. As the random components are unobservable, it is desirable to specify generalized linear mixed models in terms of the conditional expectation [1], and a variety of procedures are available to estimate the resulting parameters [2,4-7]. The variance-covariance matrix of within-subjects uncertainty, if specified, can be approximated by using the Hessian of the log-likelihood or log restricted-likelihood function. The analytic results from such procedures, however, are not sufficiently interpretable until they are converted into nonlinear predictions.
Best Linear Unbiased Predictor
There is a variety of approximation methods for nonlinear predictions on longitudinal data [1,7,11]. Among them, a popular approach is the best linear unbiased predictor, which uses estimates of the fixed effects and predicted values of the random effects. In this method, the estimated linear predictor in generalized linear mixed models can be written as $\hat\eta_{ij} = X_{ij}\hat\beta + Z_{ij}\hat b_i$, where $\hat\beta$ is the maximum likelihood or restricted maximum likelihood estimate of β and $\hat b_i$ is the prediction of $b_i$. The marginal likelihood is obtained by averaging over the distribution of the unobserved random effect $b_i$, with the corresponding joint likelihood over N subjects written as $L(\beta, G, \phi) = \prod_{i=1}^{N} \int f(y_i \mid b_i; \beta, \phi)\, f(b_i; G)\, db_i$. The maximum likelihood or restricted maximum likelihood estimates $\hat\beta$, $\hat G$, and $\hat\phi$ are the values of β, G, and φ that maximize this likelihood function. The random-effect predictions can then be obtained as the conditional mean $\hat b_i = E(b_i \mid y_i; \hat\beta, \hat G, \hat\phi)$. Among the various approximation techniques for $\hat b_i$, quadrature methods are regarded as generating accurate approximations of the integral in the marginal likelihood [4,5,8,12]. This popular procedure first uses the likelihood calculations given in McGilchrist [11] and then empirical Bayes estimates of the random effects to approximate the Gaussian quadrature integral [13]. The approximated integral, as a marginal likelihood, is optimized for the fixed effects, and the fixed-effect estimates are applied to produce the final prediction. In this approach, because the random-effect prediction is treated as a fixed effect, some inherent variability is overlooked, thereby causing retransformation bias in nonlinear predictions; the best linear unbiased predictor is thus only a partial empirical Bayes method for nonlinear predictions, and the bias grows when there is evidence that within-subjects uncertainty is not negligible. The marginal mean for a population subgroup can be predicted from subject-specific predictions by creating a scoring dataset that represents an actual or hypothetical population group taking selected covariate values. By retaining the predicted random effect for each subject, the mean and standard deviation of the subject-specific predictions in the scoring dataset approximate the population-averaged prediction and its standard error under the best linear unbiased predictor.
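As a rough illustration of the scoring-dataset idea described above, the following sketch simulates a random-intercept logit setting and averages subject-specific predictions at a common covariate value. The parameter values and the stand-in for the empirical Bayes predictions (`b_hat`) are assumptions for demonstration, not estimates from any fitted model.

```python
import numpy as np

rng = np.random.default_rng(1)

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

N = 5000
beta0, beta1 = -2.0, 0.3          # assumed fixed effects (intercept, time)
sigma_b = 0.8                     # assumed SD of between-subjects random intercepts

b_true = rng.normal(0.0, sigma_b, N)        # true random intercepts
b_hat = b_true + rng.normal(0.0, 0.2, N)    # stand-in for empirical Bayes predictions

# Scoring dataset: every subject evaluated at the same covariate value (time = 4),
# retaining each subject's predicted random effect.
t0 = 4.0
p_subject = expit(beta0 + beta1 * t0 + b_hat)   # subject-specific predictions

print("BLUP-averaged prediction: %.4f" % p_subject.mean())
print("fixed-effects only:       %.4f" % expit(beta0 + beta1 * t0))
print("true marginal mean:       %.4f" %
      expit(beta0 + beta1 * t0 + rng.normal(0.0, sigma_b, 10**6)).mean())
```

The averaged subject-specific predictions sit close to the true marginal mean, while the fixed-effects-only value falls short, which previews the retransformation argument developed next.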
Retransformation Method
One limitation of the best linear unbiased predictor is that $\hat b_i$ is treated as a fixed effect in nonlinear predictions. Consequently, a portion of variability is ignored in the retransformation process. For a population group, the entire set of random components needs to be retransformed to correctly approximate the population-averaged mean. We refer to this retransforming of random components in nonlinear predictions as the retransformation method. In this section, we present two such methods, one without and one with consideration of within-subjects uncertainty.

To fix notation, the conditional mean specification is

$$E(y_{ij} \mid b_i) = g^{-1}(X_{ij}\beta + Z_{ij}b_i), \qquad (1)$$

where $E(y_{ij})$ is the expected value of $Y_{ij}$, β is an M × 1 vector of unknown regression parameters to be estimated, including a time factor, and $Z_{ij}$ is a design block matrix. The 1 × q random-effects vector $b_i$ is distributed as N(0, G), where G is a q × q covariance matrix. The matrix $Z_{ij}$ can contain time or other covariates whose association with the response is assumed to vary across subjects. With the specification of β and $b_i$, the elements of $y_i = (y_{i1}, \ldots, y_{in_i})$ are taken to be conditionally independent, with linear predictor $\eta_{ij} = X_{ij}\beta + Z_{ij}b_i$. With the specification of a link function in generalized linear mixed models, random errors for nonlinear functions depend on the mean function, and accordingly the conditional variance of $y_{ij}$ is

$$\mathrm{var}(y_{ij} \mid b_i) = \phi\, \nu(\mu_{ij}), \qquad (2)$$

where ν is a specific variance function and φ represents a scale factor for over-dispersion. Given this flexible specification, $y_{ij}$ can follow a probability distribution other than multivariate normality. Equation (2) includes two distinctive variance components, between-subjects and within-subject. Given the specification of the variance function, the within-subjects variance cannot be specified freely in non-normal longitudinal data.

Equation (1) does not specify a within-subject error term, implying zero uncertainty given β and $b_i$. From Equation (2), however, it seems desirable to express $y_{ij}$ as a conditional function by including an error term to address this uncertainty [9]:

$$y_{ij} = \mu_{ij} + \varepsilon_{ij}, \qquad (3)$$

where $\mu_{ij}$ is the conditional mean after accounting for the within-subject random error $\varepsilon_{ij}$. Correspondingly, $\eta_{ij}$ is defined as

$$\eta_{ij} = g(\mu_{ij}) = X_{ij}\beta + Z_{ij}b_i. \qquad (4)$$

The specification of the variance matrix for within-subjects random errors depends on the specific link function, and this random term may be conveniently assumed to have local normality, $\varepsilon_{ij} \sim N(0, \phi\,\nu(\mu_{ij}))$, due to the retransformation of $\varepsilon_{ij}$ in expressing the expected value of $y_{ij}$. Let α be a correlation parameter and $R_i(\alpha)$ be the $n_i \times n_i$ matrix describing the correlation pattern within subject i. Then, if within-subjects uncertainty is considered, the within-subject variance-covariance structure can be written as

$$V_i = \phi\, A_i^{1/2} R_i(\alpha) A_i^{1/2}, \qquad (5)$$

where $\mu_i$ is the vector of means over the $n_i$ time points, $A_i$ is the $n_i \times n_i$ diagonal within-subject variance matrix containing the elements ν evaluated at $\mu_i$, and $R_i$ is an unknown matrix to be estimated. For analytic convenience and simplicity, the matrices in Equation (5) are assumed to be common to all subjects. When the subject-specific random effects are specified, the variance-covariance matrix of the linear predictor can be written as

$$\mathrm{var}(\eta_{ij}) = Z_{ij}\, G\, Z_{ij}'. \qquad (6)$$

The parameters β and G can be estimated by maximum likelihood, restricted maximum likelihood, or Bayesian procedures.
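To make Equation (5) concrete, the sketch below builds the working covariance $V_i = \phi\, A_i^{1/2} R_i(\alpha) A_i^{1/2}$ for a binary response under an assumed AR(1) working correlation; the conditional means and the value of α are illustrative, and any other valid correlation structure could be substituted.

```python
import numpy as np

def working_covariance(mu, alpha, phi=1.0):
    """V_i = phi * A^{1/2} R(alpha) A^{1/2} for a binary response, where
    A = diag(v(mu)) with variance function v(mu) = mu * (1 - mu), and
    R(alpha) is an assumed AR(1) working correlation matrix."""
    v = mu * (1.0 - mu)                      # binomial variance function
    A_half = np.diag(np.sqrt(v))
    n = len(mu)
    # AR(1): correlation alpha^{|j - k|} between time points j and k
    R = alpha ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    return phi * A_half @ R @ A_half

mu_i = np.array([0.05, 0.08, 0.12, 0.18])    # conditional means over n_i = 4 waves
V_i = working_covariance(mu_i, alpha=0.4)
print(np.round(V_i, 5))
```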
Retransformation method without considering intrasubjects variability
We start the description with the case of a log link. Let subject i be a typical case in a population of interest characterized by covariate vector $X_0$. The predicted value for this typical subject at time j can then be considered the population-averaged prediction for the population taking covariate values $X_0$. Given Equation (1), the marginal mean of $y_{ij}$ given $X_{0j}$ and $b_i$ is

$$E(y_{ij} \mid X_{0j}, b_i) = \exp(X_{0j}\beta + Z_{0j}b_i).$$

Averaging over the distribution of $b_i$ involves $E[\exp(Z_{0j}b_i)]$, which is defined by the moment generating function of the normal distribution:

$$\Phi_{0j} = E[\exp(Z_{0j}b_i)] = \exp\!\left(\tfrac{1}{2}\, Z_{0j}\, G\, Z_{0j}'\right).$$

Therefore, the prediction at time j for the population with covariate $X_{0j}$ is given by

$$\hat E(y_{0j}) = \exp(X_{0j}\hat\beta)\, \hat\Phi_{0j}. \qquad (10)$$

Equation (10) indicates that $\hat\Phi_{0j} > 1$ unless all elements of $\hat G$ take the value zero. Therefore, given the log link, the nonlinear response $y_{ij}$ will be under-predicted if retransformation of the between-subjects random effects is completely or partially neglected, with the magnitude of this retransformation bias depending on the size of $\hat\Phi_{0j}$.
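The moment-generating-function identity behind Equation (10) is easy to verify numerically. The sketch below compares a Monte Carlo estimate of $E[\exp(b_i)]$ with $\exp(\sigma_b^2/2)$ for an assumed random-intercept standard deviation, and shows how the naive prediction $\exp(X_{0j}\hat\beta)$ falls short of the retransformed one; the values of `sigma_b` and `eta0` are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma_b = 0.7                                  # assumed random-intercept SD
b = rng.normal(0.0, sigma_b, 10**6)

mgf_mc = np.exp(b).mean()                      # Monte Carlo estimate of E[exp(b)]
mgf_exact = np.exp(0.5 * sigma_b**2)           # normal MGF evaluated at 1

eta0 = -1.5                                    # assumed fixed part X_0j * beta
print("E[exp(b)]   MC: %.4f   exact: %.4f" % (mgf_mc, mgf_exact))
print("naive exp(eta):              %.4f" % np.exp(eta0))
print("retransformed exp(eta)*Phi:  %.4f" % (np.exp(eta0) * mgf_exact))
```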
Next, let g(·) represent the popular logit link. Given the logit function, neglect of retransformation of random effects can likewise lead to substantial retransformation bias for binary data. Let $y_{ij}$ denote a dichotomous response taking the value 0 or 1 for subject i at time j. The predicted probability that $y_{ij} = 1$ for person i at time j given $X_{0j}$ and $b_i$ can be written as

$$\mathrm{pr}(y_{ij} = 1 \mid X_{0j}, b_i) = \frac{\exp(X_{0j}\beta + Z_{0j}b_i)}{1 + \exp(X_{0j}\beta + Z_{0j}b_i)},$$

where, in the construct of the lognormal distribution, $\exp(Z_{0j}b_i)$ has expectation $\Phi_{0j}$ as above. Let the fixed-effects estimator be $\tilde p_{0j} = \exp(X_{0j}\hat\beta)/[1 + \exp(X_{0j}\hat\beta)]$. When $X_{0j}\hat\beta < 0$, which is generally the case in longitudinal data analysis of event rates below one half, we have $E\{\mathrm{pr}(y_{ij} = 1 \mid X_{0j}, b_i)\} > \tilde p_{0j}$. Therefore, in nonlinear predictions with mixed-effects logit models, ignoring retransformation of between-subjects random effects can lead to strong downward bias in the predicted probability. For other discrete data types, such as the multinomial, the retransformation bias in nonlinear predictions can be equally impactful [9].
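The downward bias under the logit link can likewise be checked by simulation. Assuming a normal random intercept, the sketch below compares the fixed-effects probability with the marginal probability at several negative values of the linear predictor; `sigma_b` and the `eta0` grid are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

sigma_b = 1.0                                 # assumed random-intercept SD
b = rng.normal(0.0, sigma_b, 10**6)

for eta0 in (-3.0, -2.0, -1.0):               # X_0j * beta < 0, as for low event rates
    p_fixed = expit(eta0)                     # fixed-effects value (no retransformation)
    p_marg = expit(eta0 + b).mean()           # retransformed marginal probability
    print(f"eta = {eta0:+.1f}  fixed = {p_fixed:.4f}  marginal = {p_marg:.4f}  "
          f"relative bias = {(p_fixed - p_marg) / p_marg:+.1%}")
```

The relative bias is largest when the event is rarest, which matches the pattern reported for the death-rate predictions later in this article.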
Retransformation method with intra-subjects variability
The equations specified in the above section do not specify a random term accounting for within-subjects uncertainty. In the application of generalized linear mixed models, between-subjects random effects are often perceived to reflect individual differences in unspecified characteristics thereby addressing within-subjects variability simultaneously [14]. In certain situations, however, this assumption does not reflect the true experiences generated by the stochastic longitudinal process.
If within-subjects variability is taken into account, the expectation of the nonlinear response y for subject i at time j can be written as

$$E(y_{ij}) = E_{b_i}\!\left\{ E_{\varepsilon_{ij}}\!\left[ g^{-1}(X_{ij}\beta + Z_{ij}b_i + \varepsilon_{ij}) \right] \right\},$$

where the inner expectation can be understood as a second-order smearing estimate evaluated at the linear predictor. Empirically, the within-subjects random term can be approximated from the partial derivative of the log-likelihood function with respect to β in estimating a generalized linear mixed model, giving the working variate

$$\tilde y_{ij} = \hat\eta_{ij} + (y_{ij} - \hat\mu_{ij})\, g'(\hat\mu_{ij}),$$

which is the local approximation of the within-subjects random error under local normality. Such an approximation is model-based; unlike in linear mixed models, its specification depends on the marginal mean. In nonlinear predictions, this local approximation step can be ignored only when there is strong evidence that between-subjects variability completely captures within-subjects uncertainty.
Some researchers recommend the application of the latent variable approach to estimate within-subjects random errors in generalized linear mixed models [15]. This standardized approach specifies a constant variance of within-subjects random errors, regardless of the response type and the number of covariates utilized in a particular longitudinal study. It is argued that, when the between-subjects random effects are specified, not that much variability remains in a binary or a multinomial response [15]. Furthermore, this approach does not specify a covariance structure when the data type is multinomial thereby overlooking the multivariate nature of the nonlinear data structure.
If within-subjects random errors are considered non-ignorable in specifying a generalized linear mixed model, the marginal mean of the nonlinear response $y_{ij}$, taking the logit link, is

$$E(y_{ij} \mid X_{0j}) = E_{b_i,\,\varepsilon_{ij}}\!\left[ \frac{\exp(X_{0j}\beta + Z_{0j}b_i + \varepsilon_{ij})}{1 + \exp(X_{0j}\beta + Z_{0j}b_i + \varepsilon_{ij})} \right].$$
Variance Matrix for Nonlinear Predictions
We propose to use the delta method to approximate the standard errors of nonlinear predictions, following the tradition in nonlinear predictions [12-14,16,17]. Suppose the discrete response is binomial. Let $\hat L_{ij}$ be the random variable of the linear predictor for subject i at time point j from Equation (4), with variance $\mathrm{var}(\hat L_{ij})$. A first-order Taylor expansion of $p_{ij} = g^{-1}(L_{ij})$ about $\hat L_{ij}$ gives

$$\mathrm{var}(\hat p_{ij}) \approx \left[ \frac{\partial p_{ij}}{\partial L_{ij}} \right]^2 \mathrm{var}(\hat L_{ij}). \qquad (15)$$

In Equation (15), $\hat L_{ij}$ and $\hat p_{ij}$ generalize, in the multinomial case, to vectors of linear predictors and predicted probabilities, where K denotes the number of non-reference response levels [9]; correspondingly, the variance-covariance matrix of the predictions is obtained from the K × K Jacobian of the inverse link. In generalized linear mixed models, $\mathrm{var}(\hat L_{ij})$ may consist of two components, between-subjects and within-subjects variances, as indicated earlier. Whereas the between-subjects variance is specified in a generalized linear mixed model and is thereby estimable, the within-subjects random component can be approximated from the variance of the intercept after the covariates are rescaled to be centered at selected values [9]. The rationale is that if the covariates are centered at specified values, the intercept represents the transformed margin; therefore, the variance of the estimated intercept plus the corresponding variance of the between-subjects random effects can be considered an approximation of the variance of the marginal mean.
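A minimal sketch of the delta-method approximation in Equation (15) for the binomial case follows; the linear-predictor estimates and their variances are assumed values, not results from the illustration in Section 6.

```python
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def delta_method_se(L_hat, var_L):
    """Delta-method SE of p = expit(L): var(p) ~= [p * (1 - p)]^2 * var(L)."""
    p = expit(L_hat)
    return p, np.sqrt((p * (1.0 - p)) ** 2 * var_L)

# Illustrative linear-predictor estimates and variances at six time points
L_hat = np.array([-3.0, -2.6, -2.2, -1.9, -1.7, -1.6])       # assumed values
var_L = np.array([0.010, 0.008, 0.007, 0.008, 0.011, 0.016])  # assumed values

for j, (L, v) in enumerate(zip(L_hat, var_L)):
    p, se = delta_method_se(L, v)
    print(f"t{j}: p = {p:.4f}  SE = {se:.4f}  "
          f"95% CI = ({p - 1.96 * se:.4f}, {p + 1.96 * se:.4f})")
```

Because the factor p(1 - p) shrinks as p approaches 0, omitting part of var(L) compresses the confidence limits most visibly for rare outcomes, which is the pattern reported in the illustration below.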
Illustration
Data used for the illustration come from the Survey of Asset and Health Dynamics among the Oldest Old, a nationally representative investigation of older Americans. The survey, conducted by the Institute for Social Research, University of Michigan, is funded by the National Institute on Aging as a supplement to the Health and Retirement Study. To date, the survey consists of nine waves of data collection. The Wave I survey was conducted between October 1993 and April 1994, identifying 9,473 households and 11,965 individuals in the target age range. The survey obtains detailed information on a number of domains, including demographics, health status, health care use, disability, and health and life insurance. Survival information throughout the follow-up waves is obtained through a link to the National Death Index. Because the first two waves (1993 and 1995) were based on a slightly different questionnaire from the succeeding waves, we use data from the six waves starting in 1998 (1998, 2000, 2002, 2004, 2006, and 2008). For details about the study design, see the Health and Retirement Study website (hrsonline.isr.umich.edu).
In this illustration, the outcome variable is survival status, with 1=death and 0=else. Operationally, we analyze the probability of death between two successive waves, defined as pr(Y ij =1) where j indicates a time interval between time point j-1 and j. Given the focus on nonlinear predictions on longitudinal trajectory of death rate, the main explanatory variable is time, measured as the number of years elapsed since the 1998 survey (t = 0, 2, 4, 6, 8, 10). Among the control variables considered in this illustration, gender is a dichotomous variable with 1=women and 0=men, and age and education are both continuous variables. In estimating the model parameters, the control variables are rescaled to be centered about sample means. The interaction between time and each of the controls was considered for capturing possible convergence of the covariate's effect, but its inclusion did not contribute significantly to the model fit and thus was removed.
Given the binary outcome data, we apply a mixed-effects binary logit model. For illustrative simplicity without loss of generality, we use the random-intercept logit model, assuming the effects of covariates on the logit to be fixed over time. The SAS PROC NLMIXED procedure (SAS Institute Inc., Cary, NC) is applied to compute the fixed and random effects, given its flexibility in estimating and predicting parameters in generalized linear mixed models [16]. We use adaptive Gaussian quadrature to approximate the integral of the likelihood over the random intercept, given its advantage over other approximation methods in deriving robust random-effect estimates and model-fit statistics [4,12]. With the specification of between-subjects random intercepts, time is treated as a continuous variable. A combination of time and time × time, the so-called quadratic polynomial time function, was found to best describe the longitudinal trajectory of the death rate. Given the high correlation between the two time components, the time variable t was rescaled as a centered covariate to reduce numeric instability and collinearity.
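For readers without access to SAS, the integral that adaptive Gaussian quadrature approximates can be sketched in a few lines. The example below evaluates one subject's marginal log-likelihood in a random-intercept logit model with ordinary (non-adaptive) Gauss-Hermite quadrature; adaptive versions additionally recentre the nodes at each subject's empirical Bayes mode. The fixed-effect values, random-intercept standard deviation, and response vector are illustrative, not estimates from the study data.

```python
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def marginal_loglik_i(y_i, eta_i, sigma_b, n_quad=20):
    """Marginal log-likelihood contribution of one subject in a random-intercept
    logit model, integrating the intercept out with Gauss-Hermite quadrature
    (probabilists' weight exp(-z^2/2), so weights sum to sqrt(2*pi))."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_quad)
    lik = 0.0
    for z, w in zip(nodes, weights):
        b = sigma_b * z                              # rescale node to N(0, sigma_b^2)
        p = expit(eta_i + b)
        lik += w * np.prod(p**y_i * (1.0 - p)**(1 - y_i))
    return np.log(lik / np.sqrt(2.0 * np.pi))

# One subject observed at six waves with a centered quadratic time trend
t = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0]) - 5.0
eta = -2.5 + 0.25 * t - 0.015 * t**2                 # assumed fixed-effect values
y = np.array([0, 0, 0, 1, 0, 1])
print("log L_i =", round(marginal_loglik_i(y, eta, sigma_b=0.6), 4))
```

Summing such contributions over subjects and maximizing over the fixed effects and variance component reproduces, in outline, what PROC NLMIXED does internally.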
To compare the statistical efficiency and robustness of different methods for nonlinear predictions in longitudinal analysis, we first create a "full" logit model consisting of all covariates and two random components in the estimation process, treating the population-averaged predictions and the corresponding variances/covariances derived from this model as exact. Then we purposefully exclude the variable "education" from the logit model. Because education has a statistically significant effect on the death rate, additional clustering certainly remains in the outcome data after its removal, and such a reduced model is therefore a misspecified model. We have three operational objectives in this illustration. First, we examine whether the retransformation method can capture the random effect after an important predictor is removed, relative to the results from the full model. Second, we assess whether the retransformation method yields much smaller retransformation bias than the best linear unbiased predictor in nonlinear predictions. Third, we demonstrate how the fixed-effects approach, though tending to generate unbiased estimates of regression coefficients [18], results in tremendous retransformation bias in nonlinear predictions. The best linear unbiased predictor and the retransformation method are based on the same model, eliminating education from the estimating process; we therefore develop three logit models in the illustration: the full mixed-effects logit model, the reduced mixed-effects logit model removing education from the regression, and the reduced fixed-effects logit model.
While the best linear unbiased predictor approximates the death rate for each observation, we obtain the population-averaged predictions from this method by creating a scoring dataset that specifies the outcome variable as missing and the independent variables as zero, with covariates rescaled to be centered at selected values corresponding to a target population group. In the scoring data, the random effect for each individual is retained; therefore, the mean of the predictions with y = missing and X's = 0 in the scoring dataset yields the predicted death rate for the population, or for a typical subject taking the selected covariate values. Table 1 displays the analytic results of the three models. All regression coefficients, full or reduced, random- or fixed-effects, are statistically significant at α = 0.05. In each model, the regression coefficient of time on the logit is positive while that of time × time is negative, which together suggest a decelerating time trend in old-age mortality. The between-subjects random effect on the logit is 0.381 for the full model and 0.380 for the reduced model, both statistically significant. The intercept, regression coefficients, and standard errors in the reduced fixed-effects model are close to those from the two random-effects logit models. Such similarity indicates that in this analysis the large-sample behavior holds, and consequently the asymptotic process $n^{1/2}(\hat\beta - \beta_0)$, where $\beta_0$ is the true parameter vector, tends to converge to a normal vector with mean 0 and covariance matrix approximated by the inverse of the observed information matrix. The fixed-effects logit model, however, is not capable of yielding a robust and consistent estimator for nonlinear predictions in the longitudinal setting, as will be shown in Table 2.
The predicted death rate at each time point and its variance can be estimated by applying Equations (14) and (15), respectively. In the construct of binary response data, the partial derivative of the death rate with respect to the logit function is

$$\frac{\partial p_{ij}}{\partial L_{ij}} = p_{ij}(1 - p_{ij}),$$

so the variance of the predicted population-averaged death rate $\hat p_{ij}$ can be approximated by

$$\mathrm{var}(\hat p_{ij}) \approx \left[ \hat p_{ij}(1 - \hat p_{ij}) \right]^2 \mathrm{var}(\hat L_{ij}),$$

where $\mathrm{var}(\hat L_{ij})$ consists of two components if within-subjects random errors are considered. Table 2 presents four sets of predicted death rates and standard errors at six time points, computed from the full model, the retransformation method, the best linear unbiased predictor, and the fixed-effects reduced model, respectively.
In Table 2, the death rate is shown to increase exponentially over time at the early and middle stages, and the increase then slows near the end of the observation period, reflecting a "selection of the fittest" process. The retransformation method, transferring both the between-subjects and within-subject random components in this analysis, generates the predictions closest to those from the full model, indicating its high efficiency and coverage in handling retransformation bias in nonlinear predictions; in the first two panels, the predicted death rates and their standard errors remain close throughout. (Note to the tables: randomness of the intercept is parameterized by the variance of the random effects.) The best linear unbiased predictor provides close predictions at the early times; in the last three time periods, however, its predicted death rates deviate markedly from those of the first two methods. Furthermore, this method results in severely underestimated standard errors of the predictions. The fixed-effects logit model generates the least valid predictions, with massive deviations from those of the mixed-effects models. More significantly, the fixed-effects approach severely underestimates standard errors, much more so than the best linear unbiased predictor. With such severe underestimation of standard errors, all predicted death rates from the best linear unbiased predictor and the fixed-effects model are very strongly statistically significant, thereby yielding erroneous test results. Figure 1 plots three longitudinal trajectories of the death rate and the corresponding 95% confidence limits, approximated from the retransformation method, the best linear unbiased predictor, and the fixed-effects approach, respectively. The solid line and shaded area in each panel are the trajectory and 95% confidence region from the full model, used as the standard for comparison. Panel A demonstrates how well the retransformation method predicts old-age mortality and its confidence limits after one theoretically important, statistically significant predictor is removed. In Panel B, the best linear unbiased predictor estimates the death rate fairly well in the early stage, but it then falls systematically below the solid line, with the 95% confidence limits dramatically contracted. In Panel C, the predicted time trend of the death rate is completely amiss, with the two 95% confidence limits scattered very narrowly around the predicted curve.
Discussion
As regression coefficients in generalized linear mixed models are often not directly interpretable, we have seen applications using the fixed-effect estimates for nonlinear predictions. Such an application can lead to considerable retransformation bias due to the neglect of retransforming the random components. If random disturbances in a generalized linear mixed model truly follow normality, this distribution needs to be converted to a non-normal function to correctly predict the nonlinear outcome. Consequently, the expectation of the random components is not zero in nonlinear predictions unless the identity link function is specified or both between- and within-subjects random disturbances are zero for all subjects. Without appropriately retransforming the random effects, the variance of nonlinear predictions will be underestimated, thereby degrading the quality of the significance test. The best linear unbiased prediction, widely applied in longitudinal analysis, accounts for only a portion of the between-subjects variability and overlooks the within-subject random component that may exist inherently even with the specification of between-subjects random effects. (Note: the retransformation, best linear unbiased predictor, and fixed-effects results in Table 2 are based on "reduced", i.e., misspecified, models.)
In this article, we compare the predicted death rates at six time points from three prediction methods: the retransformation method, the best linear unbiased predictor, and the fixed-effects approach, relative to a pre-specified full mixed-effects logit model. In particular, we examine how each method behaves in nonlinear predictions after an important predictor variable is removed. The results of our illustration show that failure to retransform random components in generalized linear mixed models can result in severe bias in nonlinear predictions and sizable underestimation of standard errors, even when the estimated regression coefficients are unbiased. Such retransformation bias substantially affects the quality of significance tests on nonlinear predictions. The fixed effects are population-averaged, and therefore subject-specific variability is disregarded; once random effects are considered in nonlinear predictions, the inherent variability increases considerably, thereby lowering the value of the chi-square statistic. The best linear unbiased predictor reduces some retransformation bias, but its effect is shown to be very limited. Relative to the fixed-effects approach and the BLUP estimator, both of which are associated with substantial retransformation bias, our retransformation method is shown to increase efficiency and coverage in nonlinear predictions. | 6,108.2 | 2015-06-24T00:00:00.000 | [
"Mathematics"
] |
Piloting moringa agribusiness to improve villagers’ economic community
Oki Wijaya, Lestari Rahayu, Nur Rokhim, Tsaniya Yusmiastuti, Surya Aditya Utama
a Agribusiness Study Program, Faculty of Agriculture, Universitas Muhammadiyah Yogyakarta, Jl. Brawijaya, Kasihan, Bantul, Yogyakarta 55183, Indonesia
b Agrotechnology Study Program, Faculty of Agriculture, Universitas Muhammadiyah Yogyakarta, Jl. Brawijaya, Kasihan, Bantul, Yogyakarta 55183, Indonesia
c Law Science Study Program, Faculty of Law, Universitas Muhammadiyah Yogyakarta, Jl. Brawijaya, Kasihan, Bantul, Yogyakarta 55183, Indonesia
1 <EMAIL_ADDRESS>; 2 <EMAIL_ADDRESS>; 3,4,5 <EMAIL_ADDRESS>
* Corresponding author
INTRODUCTION
Pilangrejo Village, Nglipar District, Gunungkidul Regency is a village vulnerable to landslides. During 2012-2013, three villages in Nglipar District suffered landslides: Kedungpoh, Pilangrejo, and Pengkol. As a result, 37 houses were destroyed, three people were injured, and one person was seriously injured, with an estimated loss of IDR 236,500,000.00 (Badan Penanggulangan Bencana, 2012). In 2017, a large landslide buried several houses, and 46 people were evacuated (Polda DIY, 2017). A similar case occurred in 2018, when Danyangan Hamlet, Pilangrejo Village suffered a landslide eight meters long and six meters wide (Putri, 2017). Nurohmah (2017) states that, apart from Pengkol Village and Kedungpoh Village, Pilangrejo Village in Nglipar District, Gunung Kidul Regency, Yogyakarta Special Region was included in the zone category with a high level of danger.
The area of Pilangrejo Village is approximately 875 ha, most of which is hilly. Part of the hilly area is used for dry-land agriculture (Badan Pusat Statistik, 2020). Most of the cultivated crops are seasonal, such as corn, rice, soybeans, and cassava. In addition to seasonal crops, residents plant elephant grass for animal feed. Corn, rice, and soybeans are partly consumed daily and partly sold to meet other needs. The villagers of Pilangrejo depend on the agricultural sector for their livelihood, which encourages them to maintain the planting pattern followed for years. On the other hand, the residents' farming system has left the land unable to store water during the rainy season. This is the cause of the rainy-season landslides in Pilangrejo Village, Nglipar District, Gunung Kidul Regency, because there are no roots holding the contours of the soil (Badan Pusat Statistik, 2020; Badan Penanggulangan Bencana, 2012; Nurohmah, 2017; Putri, 2017).
The dry and hilly land of Pilangrejo Village should be planted with perennial plants with taproots, which hold the soil and minimize the possibility of landslides. However, owing to economic conditions and food needs, residents of Pilangrejo Village, particularly Danyangan Hamlet, replaced these perennial crops with corn and cassava.
One alternative solution to this problem is nature conservation: planting perennial plants that can hold the soil contours while also having economic value. One such plant that can be grown in the area is Moringa. Moringa has a taproot, holds the soil contours, and stores water reserves, helping to prevent landslides. In addition, Moringa has economic value (Omotesho et al., 2013).
Based on the situation analysis and the problems found in the targeted community, this community service aims to improve the community's economic gain through Moringa agribusiness in Pilangrejo Village, Nglipar District, Gunung Kidul Regency.
METHOD
Time and venue of the activity
The Moringa agribusiness pilot project, an effort to conserve the environment and improve the community's economy, was carried out from January to June 2020 in Danyangan Hamlet, Pilangrejo Village, Nglipar District, Gunung Kidul Regency.
Participants
This activity was attended by the people of Danyangan Hamlet, both men and women. The male participants make a living as farmers, while the female participants are housewives without a permanent livelihood. The primary participants were 10 people who served as the pilot project group. In practice, however, the conservation activities carried out by this pilot group were usually joined by the public, with an average of 20 participants per activity.
The people of Danyangan Hamlet generally make a living as farmers on rainfed and dry fields. Rainfed rice fields are usually planted with rice, corn, and secondary crops, while the dry fields are planted with cassava or fruit for household consumption. In addition, most of the people in Danyangan Hamlet are categorized as poor, as their income from farming is relatively low (an average net income of IDR 1,500,000 per harvest season) and uncertain. People also plant secondary crops on sloping land, which often causes landslides.
Preparation
Preparation for this community service was undertaken in three stages: 1) observation; 2) focus group discussion (FGD); and 3) socialization. The observation was conducted to identify problems and potentials at the location, so that the activities would be carried out effectively. The FGD was then administered to plan the development of a participative village. To inform residents about the village development, the final preparatory stage was socialization.
Implementation
The implementation stages of this activity included: 1) Moringa planting and seedling; 2) training on post-harvest processing; and 3) providing a production tool. The implementation is based on the subsystems of agribusiness: the upstream (input) subsystem, the on-farm subsystem, and the downstream subsystem.
Monitoring and Sustainability Efforts
The last stage of this community service was monitoring and compiling a roadmap for the sustainability of the activities. It is expected that this activity will develop further and benefit both society and the environment.
RESULTS AND DISCUSSION
Pre-activity observation
The first stage of this community service was observation, carried out to identify the problems and potentials at the location so that the activities could be run effectively. The observation showed that the target location is a dry and hilly area, and the hills were reported to experience frequent landslides. Agriculture in the area relies on rainfed rice fields, and residents' yards were used for fruit crops and livestock. During the observation, many Moringa plants were found in residents' yards: almost all residents grow Moringa, but the plants had not been used optimally, and the residents did not know the efficacy of the Moringa plant.
Focus group discussion
After the observation, the next stage was a focus group discussion (FGD) with village community leaders. FGD was chosen to obtain in-depth information about the community and their expectations of this community service program; the method has proven effective in various qualitative studies (Paramita & Kristiana, 2013), particularly in the field of conservation (O. Nyumba et al., 2018). The result of the FGD was the selection of Moringa as the commodity to be developed in Pilangrejo Village, decided on the basis of the village's problems and potentials identified during observation, as well as the residents' expectation of improving the community's economy based on local wisdom and environmental sustainability.
Socialization on the activity plans
The results of the FGD, which was attended by several village community leaders, were then socialized to the wider village community. This stage was carried out so that residents would know the planned activities and become interested in participating. Not only does this increase the community's motivation to be actively involved, but an overview of the program's benefits is also believed to increase their commitment to carry out their roles (Asah & Blahna, 2013). The socialization resulted in the participation and enthusiasm of the residents: 50 residents took part in this activity. This stage became the starting point for developing Moringa agribusiness in Danyangan Hamlet.
Kelor (Moringa) planting and seedling
After the socialization, the next activity was planting and seedling Moringa together with the community. The activity began with a symbolic tree planting by village officials, Muhammadiyah branch leaders, UMY student representatives, Babinsa, and Babinkamtibmas. In his remarks during the symbolic planting, a representative of the village officials expressed his appreciation for the pilot, stating that it was the first community empowerment activity in Pilangrejo Village with a holistic and integrative concept. The socialization activity is documented in Figure 1. Considering the importance of community participation in conservation efforts (Rathnayake, 2016), this socialization was key to involving residents.
Besides planting, this Moringa agribusiness initiation also included Moringa seedling. Seedlings were propagated from seed, chosen because Moringa plants grown from seed have proven more durable; trees grown from seed develop deep taproots and fibrous roots, and the taproots hold the soil and minimize the possibility of landslides (Adhitya et al., 2016; Santoso & Parwata, 2018; Syarifuddin, 2017; Wasonowati et al., 2018). In the implementation phase, participants were taught how to process Moringa leaves into household food. Moringa leaf processing began with an explanation of how to dry the leaves, followed by how to use Moringa leaves and Moringa leaf powder. Community members were taught to make Moringa-based cakes, Moringa tea mixed with palm sugar, Moringa chips, and various other processed foods. The training on processing Moringa leaves was conducted at Danyangan Hall, as shown in Figure 2. In the future, the community is expected to develop Moringa leaf processing in more varied forms to increase its economic value. Many studies have discussed the health benefits of Moringa leaves and the compounds they contain (Arulselvan et al., 2016; Calderon-Montano et al., 2011; Leone et al., 2015), and the benefits of processed Moringa leaves extend beyond health to improving environmental quality (Camacho et al., 2017; Rahman et al., 2014). Providing more information will also increase residents' participation in such economic and environmental conservation activities (Rasoolimanesh et al., 2017).
Providing production tools
Post-harvest production tools are an important part of developing agribusiness in the village. One element of this community service activity was to provide a production tool in the form of a Moringa leaf flour machine, as shown in Figure 3. With this tool, it is expected that residents can produce Moringa powder. To support this stage, residents were introduced to the use and function of the tool. This stage was also supported by a manual module composed by the project team, through which the target group could easily learn to use the tool independently when the project team was away (Wulandari et al., 2017). This community service activity is still a pilot project of Moringa agribusiness in Pilangrejo Village, Nglipar District, Gunungkidul Regency. Therefore, there are still many shortcomings that must be addressed for improvement in the future. The next development plan was discussed through regular community meetings led by the Head of Danyangan Hamlet and attended by 30 residents. The result of the discussion was the residents' commitment to continue this activity, as well as to prepare a roadmap for developing a Moringa agribusiness-based village. Such discussions with residents were important to undertake, so that residents would not feel that this activity benefited the project team only, but would see it as an effort made for village development (Sidik, 2015). Furthermore, long-term planning and a detailed explanation of the role and involvement of the community in this economic and environmental conservation effort will ensure the effectiveness of the program's sustainability (Rodríguez-Izquierdo et al., 2010).
ACKNOWLEDGEMENT
Our gratitude goes to the Rector and Vice Rector of Universitas Muhammadiyah Yogyakarta and the Director of the Directorate of Research and Community Service, Universitas Muhammadiyah Yogyakarta, who provided funding for this activity, as well as to the partners who helped this program run smoothly.
INFLUENCE OF THE STRAIN RATE ON THE NOTCH TOUGHNESS OF COLD-FORMING STEELS
The goal of this paper is to analyse the influence of the loading rate on the notch toughness of cold-forming steels with higher tensile strength values and, based on this analysis, to assess the possibility of utilizing the notched-bar impact test to predict their formability at increased strain rates, which present practice necessitates. The experiments were made on hot-rolled micro-alloyed steels S 315 MC and S 460 MC with a thickness of 8 mm and on cold-rolled deep-drawing steels DC 06, H 220 B and H 220 P with thicknesses of 0.8 and 1 mm. The paper analyses the influence of the strain rate, at loading rates of 1.7×10⁻⁵ to 4.8 m·s⁻¹, on the notch toughness of these steels and discusses the possibility of utilising a modified notched-bar impact test to predict the formability of thin steel sheets at high strain rates.
Introduction
The influence of the strain rate in the forming process can be formulated as follows: the resistance of metal against dislocation movement increases with an increasing strain rate, which leads to an increase in the yield point and the tensile strength, a change in the deformation characteristics, etc. A localization of plastic deformation can occur, and at higher strain rates the whole process assumes an adiabatic character [1]. The influence of the strain rate on the strength and deformation characteristics is strongly affected by the structure of the metallic material.
The notched-bar impact test is one of the simplest tests making it possible to assess the behaviour of materials under dynamic loading conditions, expressing the active fracture resistance in a narrow zone of the tested cross-section [1,2]. Its disadvantage consists in the fact that it does not make it possible to obtain absolute values of material toughness that would characterize the fracture resistance. The notch toughness is influenced by the size and shape of the notch of the test bar, while their influence on the notch toughness depends on the internal structure of the material. Nowadays, the test is usually made using standard test bars with the dimensions of 10 × 10 × 55 mm and with a V notch with a depth of 2 mm, a root radius r = 0.25 mm and an angle of 45°. Even though there are more exact tests to determine material fracture resistance [1,2], the notched-bar impact test is the most used one in practice for its simplicity. Its application to testing the notch toughness of semi-products and products from which standard test bars cannot be made necessitated studying the influence of the test bar thickness, the notch shape, the specimen dimensions, as well as the loading rate, etc. on the characteristics that can be obtained using the notched-bar impact test [3,4,5,6,7].
The goal of this paper is to analyse the influence of the loading rate on the notch toughness of cold-forming steels with higher tensile strength values and, based on this analysis, to assess a possibility of utilizing the notched-bar impact test to predict their formability at increased strain rates, which is necessitated by the practice at present.
Experiments and their analysis
The experiments were made on hot-rolled micro-alloyed steels S 315 MC and S 460 MC with a thickness of 8 mm and on cold-rolled deep-drawing steels DC 06, H 220 B and H 220 P with thicknesses of 0.8 and 1 mm.
The basic mechanical properties and characteristics of the tested steels are given in Table 1.
From the strips of the tested steels, material was taken in the rolling direction and the following test bars were made: from steels S 315 MC and S 460 MC, bars with the dimensions of 10 × 8 × 55 mm and a V notch with a depth of 2 mm; and from steels DC 06, H 220 B and H 220 P, bars with the dimensions of 8 × (sheet thickness) × 28 mm and a V notch with a depth of 4 mm. The shape and dimensions of these test bars were based on practical knowledge obtained earlier.
The notched-bar impact test was made at two or three loading rates on different testing machines given in Table 2.
Table 2. Loading rates and the notched-bar testing machines used.

The required test bar failure energy at the loading rates (v1 and v2) was evaluated by planimetering the area of the force F - deflection diagram. It results from Fig. 1 that the KCV values of the S … MC steels are significantly higher than those of the other tested steels, during both the static loading (v1) and the impact loading (v3). This is, besides different mechanical values, due to the different dimensions of the test bars, which have a significant influence on the KCV value.
The evaluation of the influence of the property (structure) of the tested steels on the loading-rate sensitivity can be made, e.g., using the relationship

KCV_v3 = k · KCV_v1,

where KCV_v3 is the notch toughness at the loading rate v3 = 4.8 m·s⁻¹, KCV_v1 is the notch toughness at the loading rate v1 = 1.7×10⁻⁵ m·s⁻¹, and k is a material constant expressing the sensitivity of the steel to the change of the loading rate. Fig. 2 shows the relationship between the k constant and the yield point of the tested steels, demonstrating that the higher the yield point of the steel, the lower its sensitivity to the strain rate. The yield point can be considered a macroscopic structural characteristic. The matrix of all the tested steels is ferritic. The increase in the yield point of the H 220 B and H 220 P steels is due to the BH effect and the phosphorus content, and that of the S 315 MC and S 460 MC steels is due to fine grains and precipitation hardening. This means that the more obstructions to dislocation movement in the steel structure, the lower the sensitivity of the steel to the strain rate [1,8,9,10,11]. For the tested steels, the material constant k can be analytically expressed using the parametric equation

k = A − B · Re,

where the constant A = 1.8027 and the constant B = 0.0014. For the S 315 MC and S 460 MC steels, the temperature dependence of the notch toughness was constructed for the static loading (v1 = 1.7×10⁻⁵ m·s⁻¹) and the impact loading (v3 = 4.8 m·s⁻¹). The results are shown in Fig. 3 and Fig. 4. It results from the figures that at the static loading the KCV values in the super-transitional area are lower than at the impact loading, and the transitional temperature at the static loading is also lower. If we take the temperature at which KCVmax decreases by half (T50) as the transitional temperature, this temperature at the static loading is 41 °C lower than at the impact loading for S 315 MC steel and only 19 °C lower for S 460 MC steel (see Fig. 3 and Fig. 4). It can be concluded that with an increasing strain rate the susceptibility of metals to brittle failure increases and that the sensitivity to the strain rate under the same external conditions is a function of the structure.
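As a worked illustration of the two relationships above, the short Python sketch below computes k from the yield point and the predicted impact-rate notch toughness from a static measurement. The constants A and B come from the text; the example yield-point values are illustrative only, and the assumption that Re is expressed in MPa is ours, not stated explicitly in the paper.

    # A minimal sketch of the strain-rate sensitivity relationships above.
    # Assumption: Re is in MPa; the example yield points are illustrative.
    A = 1.8027   # material constant from the text
    B = 0.0014   # material constant from the text

    def k_constant(re_mpa: float) -> float:
        """Material constant k = A - B * Re for the tested steels."""
        return A - B * re_mpa

    def predicted_kcv_v3(kcv_v1: float, re_mpa: float) -> float:
        """Notch toughness at v3 = 4.8 m/s predicted from the static value
        at v1 = 1.7e-5 m/s via KCV_v3 = k * KCV_v1."""
        return k_constant(re_mpa) * kcv_v1

    for re_mpa in (220.0, 315.0, 460.0):   # illustrative yield points
        print(f"Re = {re_mpa:5.0f} MPa -> k = {k_constant(re_mpa):.3f}")
    # The printed k falls as Re rises: steels with higher yield points are
    # less sensitive to the loading rate, consistent with Fig. 2.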
It resulted from the experiments and their analysis that the notched-bar impact test can also be made, provided certain conditions are met, on thin steel sheets applied in the automotive industry. The notch toughness values in the super-transitional area are higher at a higher rate, and this increase is a function of the structure, whose macroscopic characteristic is the yield point. With an increasing strain rate, the susceptibility of the tested steels to brittle failure (unstable crack propagation) increases.
The above-mentioned conclusions enable us to assume that a modified notched-bar test can serve to predict the formability of drawing steels at increased strain rates. With an increasing strain rate, the strain work increases, in dependence on the strength characteristics of the steel (k = A − B · Re). If the notch toughness at the required strain rate vx (KCVx) is higher than at the static strain rate vs (KCVs), the formability of the steel sheet at this strain rate can be assessed according to traditional formability criteria. In case KCVx is less than KCVs, at the strain rate vx there is a risk of plastic instability and, as a result, a local failure. Such a condition may occur, for example, in S 315 MC steel at the temperature of −50 °C (see Fig. 3), but also when the critical strain rate is exceeded, where the relationship KCVx = k · KCVs does not apply and where there is a risk of sudden fracture.
Conclusion
The paper analyses, based on the experiments, the influence of the loading rate on the notch toughness of hot-rolled micro-alloyed steels suitable for cold working (S 315 MC and S 460 MC) and cold-rolled drawing steels (DC 06, H 220 B and H 220 P). Possibilities of utilizing the notched-bar impact test results to assess the formability of steels at higher strain rates are also discussed. It results from the analysis that:
- the notched-bar impact test can also be applied to thin sheets (~1 mm), but the test bar shape must meet certain conditions, in particular the ratio of the bar height to the notch depth;
- in the super-transitional area, the notch toughness increases with an increased loading rate in the interval from 1.7×10⁻⁵ to 4.8 m·s⁻¹, while this rate of increase is a function of the structure of the tested steel;
- with an increasing loading rate, the risk of unstable crack propagation increases (the transitional temperature increases);
- a modified notched-bar impact test can serve to predict the formability of steel sheets at increased strain rates, mainly as regards the prediction of the strain resistance, the loss of plastic stability and the possibility of using traditional formability criteria at increased strain rates; standardized tests of the deep-drawing properties of steel sheets are practically static (ε ≈ 10⁻³ s⁻¹), whereas in technical practice steel sheets are processed at rates of as much as 1 to 10 s⁻¹.
Topological Data Analysis as a Morphometric Method: Using Persistent Homology to Demarcate a Leaf Morphospace
Current morphometric methods that comprehensively measure shape cannot compare the disparate leaf shapes found in seed plants and are sensitive to processing artifacts. We explore the use of persistent homology, a topological method applied as a filtration across simplicial complexes (or more simply, a method to measure topological features of spaces across different spatial resolutions), to overcome these limitations. The described method isolates subsets of shape features and measures the spatial relationship of neighboring pixel densities in a shape. We apply the method to the analysis of 182,707 leaves, both published and unpublished, representing 141 plant families collected from 75 sites throughout the world. By measuring leaves from throughout the seed plants using persistent homology, a defined morphospace comparing all leaves is demarcated. Clear differences in shape between major phylogenetic groups are detected and estimates of leaf shape diversity within plant families are made. The approach predicts plant family above chance. The application of a persistent homology method, using topological features, to measure leaf shape allows for a unified morphometric framework to measure plant form, including shapes, textures, patterns, and branching architectures.
INTRODUCTION
Leaves are three-dimensional structures that grow through time, but flattened laminae provide a unique opportunity to reduce leaf morphology to a two-dimensional shape. Local features (such as serrations and lobes) and general shape attributes (like length-to-width ratio) can be measured, but other methods also exist to measure leaf shape more globally and comprehensively. A popular method to quantify leaf shape is to place (x, y) coordinates, known as landmarks, on homologous features that are related by descent from a common ancestor on every sample (Bookstein, 1997). The set of landmarks from each leaf can be superimposed by translation, rotation, and scaling using a Generalized Procrustes Analysis (Gower, 1975). Once superimposed, the Procrustes-adjusted coordinates of each shape can be used directly for statistical analyses. Landmark analysis excels in its interpretability, because each landmark is an identifiable feature with biological meaning imparted by the shared homology between samples. Because landmarks are homologous features, their use often reveals genetic and developmental patterns in shape variation (Chitwood et al., 2016a).
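As a concrete illustration of the superimposition step, the sketch below aligns one landmark configuration onto another by translation, scaling, and rotation. It is a minimal two-shape (ordinary) Procrustes fit, assuming 2D landmarks and omitting the reflection check; the generalized analysis of Gower (1975) iterates this kind of fit against a consensus shape.

    # Minimal two-shape Procrustes fit: center, scale to unit size, then
    # rotate Y onto X with the optimal orthogonal transform (via SVD).
    # A sketch only; reflections are not screened out here.
    import numpy as np

    def procrustes_fit(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
        """Align Y (n, 2) onto X (n, 2); returns the superimposed copy of Y."""
        Xc = X - X.mean(axis=0)            # remove translation
        Yc = Y - Y.mean(axis=0)
        Xc = Xc / np.linalg.norm(Xc)       # remove scale (unit centroid size)
        Yc = Yc / np.linalg.norm(Yc)
        U, _, Vt = np.linalg.svd(Yc.T @ Xc)
        Q = U @ Vt                          # optimal rotation of Yc onto Xc
        return Yc @ Q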
Not all leaves have obvious homologous features that can be used as landmarks. Further, when comparing leaves with disparate morphologies (e.g., simple vs. compound leaves), there may not be identifiable homologous points. Nearly all leaves have homologous landmarks at the tip and base, but if there are no other identifiable landmarks, an equal number of equidistant points on each sample between the landmarks can be placed (Langlade et al., 2005). The denser and more numerous such pseudo-landmarks are, the closer they come to approximating the contour itself.
Another method, the use of Elliptical Fourier Descriptors (EFDs), measures shape as a continuous closed contour, and can also be used when homologous features are absent. EFD analysis begins with a lossless data compression method called chaincode, in which the up, down, left, right, and diagonal relationship of each successive pixel to the next is recorded as a chain of numbers, 0-7, so that from this chain the closed contour can be faithfully reproduced (Freeman, 1974). The chain code is decomposed by a Fourier analysis into a harmonic series that is used to quantify an approximate reconstruction of the shape (Kuhl and Giardina, 1982).
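Since the chain code is lossless, the contour can be rebuilt exactly from a start point and the digit sequence. The sketch below shows such a decoder; the direction convention used (0 = east, numbering counterclockwise) is one common choice and is an assumption here, not necessarily Freeman's exact numbering.

    # Decode a Freeman chain code back into pixel coordinates: each digit
    # 0-7 encodes a step to one of the 8 neighboring pixels.
    MOVES = {0: (1, 0), 1: (1, 1), 2: (0, 1), 3: (-1, 1),
             4: (-1, 0), 5: (-1, -1), 6: (0, -1), 7: (1, -1)}

    def decode_chain(chain, start=(0, 0)):
        """Rebuild a closed contour from its chain code and a start point."""
        x, y = start
        points = [(x, y)]
        for code in chain:
            dx, dy = MOVES[code]
            x, y = x + dx, y + dy
            points.append((x, y))
        return points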
Both pseudo-landmarks and EFDs measure leaf shapes for which homologous features that can be used as landmarks are lacking (Bensmihen et al., 2008; Chitwood and Otoni, 2017b). Still, when comparing disparate leaf shapes, unless major sources of shape variance in the data (such as the number of lobes or leaflets) are present in every sample, individual pseudo-landmarks or harmonic coefficients will not correspond between samples in a way useful for analysis. For example, an EFD analysis of the abstract Monstera leaf shapes from La Gerbe (a work from Henri Matisse's cut-out period) demonstrates this problem: the leaves are similar in shape, but the differing numbers of lobes on each leaf create a circumstance where the harmonic coefficients do not correspond to comparable features, creating a nonsense morphospace and preventing statistical analyses (Supplementary Figure S1). The differing number of lobes also inherently prevents landmark-based approaches, as the lobes do not correspond with each other.
Recently, a computer vision method coupled with machine learning was used to classify leaves, with diverse vascular patterns and leaf shapes, into plant families and orders (Wilf et al., 2016). This method uses a visual descriptor to train a classifier. Since cleared leaves are used, this method relies on both internal features like branch points in the vasculature as well as features on the leaf margin. These internal features provide a rich set of information which the authors use to classify 7,597 cleared leaves from up to 29 families and 19 orders up to an accuracy of 72.14%. Computer vision circumvents the problems associated with traditional morphometric methods, above, that used defined features for analysis (either landmarks or harmonics from Fourier-based approaches) that prevent a broad comparison across diverse leaf shapes, and the venation patterns of cleared leaves provide abundant information for these methods to classify leaves.
Cleared leaves, though, are time consuming to prepare compared to simply scanning leaves and analyzing their outlines. It is much easier to sample the immense amount of leaf shape diversity using scans and photographs than preparing cleared leaves. Fossil leaves, too, may have shape information associated with a closed contour, and their venation pattern may not be available for analysis. There needs to be a morphometric method that can accommodate the closed contours of the diverse leaf shapes found in nature, and although outlines contain less information than the vasculature of cleared leaves, they potentially provide broader sampling of leaf shape diversity across plants.
To develop a morphometric method that (1) comprehensively measures shape features in leaves, both locally and globally, (2) can compare disparate leaf shapes, (3) is robust against noise commonly found in leaf shape data (e.g., internal holes because of overlapping leaflets or small defects introduced during imaging and thresholding), and (4) can be extrapolated to other plant phenotyping needs (e.g., measuring the branching architectures of roots and trees, the spatial distributions of plants in ecosystems, or the texture of different pollen types; Mander et al., 2013, 2017; Li et al., 2017), we used a persistent homology approach. Persistent homology is a topological data analysis method. Topology is the field of mathematics concerned with properties of space ("homology groups") preserved under deformations (e.g., bending) but not tearing or re-attaching (we stress that "homology" refers to unrelated concepts in biology and topology and that "persistent homology" does not refer in any way to "homology" in the biological sense of the word). Persistent homology measures topological features as a filtration across simplicial complexes (or, more simply, it is a method to measure topological features of spaces across different spatial resolutions; Edelsbrunner and Harer, 2008; Weinberger, 2011; Li et al., 2017).
Here, we present a morphometric technique based on topology, using a persistent homology framework, to measure the outlines of leaves and classify them by plant family. We analyze 182,707 leaves (freely available to download; Chitwood, 2017a), from both published studies and shapes analyzed for the first time, from 141 plant families and 75 sites throughout the world. We first compare the diverse shapes represented in a common morphospace using persistent homology, which captures traditional shape descriptors in varying combinations and novel morphological features, as well. Major phylogenetic groups of plants occupy distinct regions of the morphospace and we estimate plant families that have the most and least diverse leaf shapes. Using persistent homology, we then use a linear discriminant analysis (LDA) to classify leaves by plant family. Persistent homology predicts family at a rate above chance and 2.7 times the rate of traditional shape descriptors. Persistent homology, by measuring topological features, can be generally applied to branching architectures found in plants, providing a shared framework to quantify the plant form comprehensively.
Dataset and a Morphospace Defined Using Traditional Shape Descriptors
To broadly analyze seed plant leaf shape diversity collected from sites throughout the world, we used both published and unpublished data. In total, 182,707 leaves were analyzed (Table 1). Many of these datasets address specific genetic and developmental questions, focusing on genetic variability within a group or closely related species. Leaves were analyzed from the following groups of plants: Alstroemeria (2,392 leaves), apple (9,619 leaves), Arabidopsis (5,101 leaves), Brassica (1,832 leaves), Capsicum (3,277 leaves), Coleus (34,607 leaves), cotton (2,885 leaves), grapevine and wild relatives (20,121 leaves), Hedera (common ivy, 865 leaves), Passiflora (3,301 leaves), Poaceae (866 leaves), wild and cultivated potato (1,840 leaves), tomato and wild relatives (82,034 leaves), and Viburnum (2,422 leaves) (please see Table 1 for references and AUTHOR CONTRIBUTIONS for unpublished data). We also analyzed two datasets that sample broadly across seed plants and from sites throughout the world. The Leafsnap dataset, with 5,733 leaves, represents mostly tree species of the Northeastern United States, but other groups of plants as well (Kumar et al., 2012). The Climate dataset, with 5,812 leaves total, analyzes the relationship between leaf shape and present climates as indicators of paleoclimate (Huff et al., 2003; Royer et al., 2005; Peppe et al., 2011).
We analyzed all leaves using the traditional shape descriptors circularity, aspect ratio, and solidity (Figure 1). These shape descriptors are simple in the sense that they each measure a very specific aspect of shape, but they are powerful in that they can be applied to any shape, which is not necessarily true of other methods that measure shape more comprehensively (such as landmarks, pseudo-landmarks, and EFDs). Circularity is a ratio of area to perimeter (true perimeter, excluding holes in the middle of the object), measured as 4π × area / perimeter², and is sensitive to undulations (like serrations, lobes, and leaflets) along the leaf perimeter, but is also influenced by elongated shapes (like grass leaves) when comparing leaves with such different shapes, as in this analysis. Aspect ratio is measured as major axis / minor axis of a fitted ellipse, and it is a robust metric of the overall length-to-width ratio of a leaf. Solidity is measured as area / convex hull area, where the convex hull bounds the leaf shape as a polygon. Leaves with a large discrepancy between area and convex hull (such as compound leaves with leaflets, leaves with deep lobes, or leaves with a distinct petiole) can be distinguished from leaves lacking such features using solidity.
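For readers who want to reproduce these descriptors, the following Python sketch computes all three from a binary leaf mask with scikit-image. It mirrors the definitions just given; it is not the authors' own measurement pipeline, and the perimeter returned by regionprops is a pixel-based approximation of the true contour length.

    # A minimal sketch of the three traditional shape descriptors, computed
    # from a binary leaf mask with scikit-image. Mirrors the definitions in
    # the text; not the authors' own pipeline.
    import numpy as np
    from skimage import measure

    def shape_descriptors(mask: np.ndarray) -> dict:
        """mask: 2D boolean array, True inside the leaf."""
        props = measure.regionprops(mask.astype(int))[0]
        area = props.area
        perimeter = props.perimeter            # approximate contour length
        return {
            "circularity": 4 * np.pi * area / perimeter ** 2,
            "aspect_ratio": props.major_axis_length / props.minor_axis_length,
            "solidity": props.solidity,        # area / convex hull area
        }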
Differences between groups were visualized as scatterplots and density diagrams (Figure 1), using transformed values of aspect ratio (1 / aspect ratio) and solidity (solidity⁸) to create more normal distributions that allow the separation between groups to be better visualized. The linear leaves of grasses (Poaceae, in Figure 1, lavender) are perhaps the most distinct group of leaf shapes. The Brassicaceae (light green) are bimodal in their distribution, reflecting entire vs. highly lobed and compound leaves, as well as differences in petiole length. Passiflora (dark orange), Solanaceae (purple), and Viburnum (brown) exhibit broad, continuous distributions, which, like the Brassicaceae, reflect the diversity of leaf shapes in these groups. Alstroemeria (light blue), apple (light orange), Coleus (pink), cotton (dark green), grapevine (red), and common ivy (dark blue) all have more localized distributions in the morphospace, indicating that shape variation is expressed within a smaller range, relative to other groups, as measured using traditional shape descriptors.
Persistent Homology and Non-linear Relationships With Traditional Shape Descriptors
Although traditional shape descriptors can describe important shape features across diverse leaves, they do not measure shape comprehensively like landmarks, pseudo-landmarks, and EFDs. Comprehensive morphometric methods, however, cannot be applied across diverse shapes, only between leaves with similar shapes, as in natural variation studies. We crafted a persistent homology method to quantify the features of leaves. Persistent homology is a Topological Data Analysis method that examines (1) topological features as a (2) filtration across a simplicial complex (Munch, 2017). In the specific case we have devised to measure leaf shape, the topological features are simply "blobs," contiguous non-touching islands that are "born" and "die" across the filtration. The filtration is a function projected onto the shape. In this case, the filtration is a density function indicating the concentration of pixels. As the filtration passes from high to low density values, "blobs" are "born" and "die," and these features are monitored in the form of an Euler characteristic curve. The details of how persistent homology is implemented here to measure leaf shapes are described below.
We begin by conceptualizing shape as a two-dimensional point cloud of an outline defined by pixels (Li et al., unpublished; Migicovsky et al., 2018). A Gaussian density estimator, assigning each pixel a value that indicates the density of neighboring pixels, is calculated (Figure 2). In leaves, high-density pixels with many neighbors tend to reside in the sinuses of serrations or lobes, or at points of intersection, such as the attachment points of leaflets to the rachis of a compound leaf. Using a Gaussian density estimator, rather than focusing on the continuity of a closed contour (as in pseudo-landmarks and EFDs), minimizes the impact of internal or non-continuous features, such as holes or occlusions made by the overlap of leaflets and lobes (see the bottom palmately shaped leaf in Figure 2). Sixteen annuli (ring structures) emanating from the centroid of the shape (Figure 2A) serve to partition the leaf into subsets of features, increasing the ability to distinguish between shapes. An annulus kernel for each ring (Figure 2C) is multiplied by the density estimator (Figure 2B) to isolate density features that intersect with the annulus (Figures 2D,E). The resulting density function from each annulus is the function across which the topological space is measured. As shown in Figure 2F, beginning with the highest density level (that is, the pixels with the highest value of the Gaussian density estimator; in the graphs below the curves in Figure 2F, the intersecting plane is shown in red at high density levels and in blue at lower levels, going from left to right), the number of connected features with densities above that level is recorded. Counting the number of connected components minus the number of holes (a topological quantity known as the Euler characteristic) continues across the function, proceeding to lower density levels. The value of the curve (y axis in Figure 2F) at each density level (x axis in Figure 2F) records the topological structure across the values of the function, the crux of persistent homology. A curve is recorded for each annulus, so that, using our method, the shape of a single leaf is represented by 16 curves.
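The Euler characteristic curve itself is straightforward to compute once a density image is in hand. The sketch below, assuming a recent scikit-image that provides skimage.measure.euler_number, sweeps a plane from the highest to the lowest density level and records the Euler characteristic of each superlevel set; the number of levels and the array names are illustrative.

    # A sketch of the Euler characteristic curve (ECC): sweep a plane from
    # high to low density values and record, at each level, the number of
    # connected components minus the number of holes in the superlevel set.
    # Assumes skimage.measure.euler_number is available (recent scikit-image).
    import numpy as np
    from skimage.measure import euler_number

    def euler_characteristic_curve(density: np.ndarray, n_levels: int = 500) -> np.ndarray:
        """density: 2D array, e.g., an annulus-masked Gaussian density estimate."""
        levels = np.linspace(density.max(), density.min(), n_levels)
        ecc = np.empty(n_levels)
        for i, level in enumerate(levels):
            superlevel = density >= level      # pixels above the plane
            ecc[i] = euler_number(superlevel, connectivity=2)
        return ecc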
To analyze the persistent homology output, we discretize each Euler characteristic curve into 500 values (Figure 2F) and then concatenate these values over the 16 annuli, representing each leaf shape as 8,000 values. A principal component analysis (PCA) performed using the 8,000 values creates a leaf morphospace defined by persistent homology (Figure 3). To interpret this morphospace, we colored data using traditional shape descriptor values. Although clear patterns among aspect ratio (Figure 3A), circularity (Figure 3B), and solidity (Figure 3C) with persistent homology data are evident, the relationships are non-linear compared to the orthogonal PC axes. Aspect ratio, circularity, and solidity are similarly correlated with PC1 (ρ-values of −0.72, 0.70, and 0.61, respectively), demonstrating that persistent homology PCs can capture distinct attributes of shape simultaneously (Figure 3D).

FIGURE 2F | A plane traverses the density function from the highest to lowest densities (x axis). As the plane traverses the function, the topological space is recorded as the number of connected components minus the number of holes above the plane at any given point, the Euler characteristic (y axis). Three pink dotted lines correspond to the plane at three points along the density function, which are visualized below the graphs. Together, similar curves from the 16 annuli comprise the persistent homology description of leaf shape.
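A sketch of the downstream step follows. The original analysis was done in MATLAB; the Python version below, with illustrative variable names, flattens each leaf's 16 curves of 500 values into an 8,000-value vector and passes the stacked vectors to PCA.

    # Sketch: concatenate the 16 Euler characteristic curves per leaf and run
    # PCA. The authors did this in MATLAB; scikit-learn is used here instead.
    import numpy as np
    from sklearn.decomposition import PCA

    def build_morphospace(eccs, n_components=179):
        """eccs: iterable of (16, 500) arrays, one per leaf."""
        X = np.stack([leaf.reshape(-1) for leaf in eccs])   # (n_leaves, 8000)
        pca = PCA(n_components=n_components)
        scores = pca.fit_transform(X)                       # PC scores per leaf
        return scores, pca.explained_variance_ratio_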
The correlations between traditional shape descriptors and persistent homology PCs rapidly diminish among high-order PCs (Figure 3D). The non-linear relationship between traditional shape descriptors and persistent homology PCs indicates that persistent homology captures differing combinations of traditional shape descriptors (and novel features) in different ways among the represented leaf shapes. Such non-linear relationships are influenced by the different groups represented in the dataset (Figure 3E). If the Leafsnap and Climate datasets, representing 141 plant families and 75 sites from throughout the world, are superimposed as points on top of the independent dataset representing natural variation within a limited number of different groups (Figure 3F), then the overall shape of the persistent homology space defined by specific groups is recapitulated, suggesting that the overall shape and density of the persistent homology morphospace is partially saturated. This does not mean that there is no other significant leaf shape variation to be explored, only that some archetypal leaf shapes are well represented in our dataset. The boundaries of the persistent homology morphospace allow for speculation. Likely the morphospace is (1) bimodal, defined by elongated leaf shapes found in some Poaceae and gymnosperms (specifically Pinophyta in the Leafsnap and Climate datasets) compared to other leaf shapes, and (2) defined by variation spanning entire to deeply lobed (or even compound) leaf shapes, as represented by Passiflora, Solanaceae, and Viburnum across PC1. Of course, other leaf shape variation exists (and is even visually apparent in the plots of PC2 vs. PC1), and other PCs in this dataset remain to be explored. The dataset does not come near to sampling all existing leaf shapes.
Differences in Leaf Shape Between Phylogenetic Groups and the Most Diverse Plant Families
We were interested in detecting differences in leaf shape between phylogenetic groups and performed a PCA for just the Leafsnap and Climate datasets (Table 1), which together represent 141 plant families, but without the over-representation from the specific taxonomic groups presented earlier. Visualizing gymnosperms, magnoliids, rosids I, rosids II, asterids I, and asterids II across PCs 1-10 (representing 73% of shape variance), clear differences in persistent homology shape space can be detected (Figure 4). Differences in shape are most easily detected for the earliest-diverging lineages. For example, gymnosperms occupy a distinct region of morphospace defined by PCs 1-6 (Figures 4A-C) compared to angiosperms. Subtler differences between recently diverging groups can also be seen. Asterids II, for example, are excluded from some regions of morphospace occupied by rosids I/II and asterids I for PCs 1-4 (Figures 4A,B).
Differences in occupied morphospace between phylogenetic groups prompted us to ask: are plant families diverse across all PCs or just some, and what are the most and least morphologically diverse families? To answer the first question, we calculated variance across PCs 1-179 (representing >95% of all shape variance) for each plant family and then ranked families from most to least variable for each PC (Figure 5A). Visualizing the ranked variability of families across PCs (where PCs are color coded from the most variable, yellow, to the least, black; Figure 5A), it is apparent that the most diverse families tend to be the most diverse across all PCs. Increased variability in persistent homology PCs, though, might simply be due to more leaves in some families compared to others. Indeed, the most diverse plant families are also the most represented in our dataset, as seen when families are arranged by abundance (Figure 5A, see bar graph of counts on the right side). Because highly variable families tend to be variable across PCs, we took the median rank of variance across PCs as a measure of overall family leaf shape diversity. The relationship between median rank variance and log10(count) is linear (Supplementary Figure S2). Using linear regression, we took the residuals from the model as an estimate of plant family leaf shape diversity, corrected for differences in sample size (Figure 5B). A Wilcoxon signed rank test on residuals indicates that asterids I are marginally significant (p = 0.08) for lacking diversity (two-sided, µ = 0), but other groups (gymnosperms, p = 0.25; magnoliids, p = 0.20; rosids I, p = 0.97; rosids II, p = 0.63; asterids II, p = 0.63) show no detectable biases in diversity. The overall results indicate that, for the current dataset, leaf shape diversity within major phylogenetic plant groups is equivalent, but specific families have higher estimated leaf shape diversity than others.
Persistent Homology Predicts Plant Family and Outperforms Traditional Shape Descriptors
The separation of different groups in the traditional shape descriptor (Figure 1) and persistent homology (Figures 3, 4) morphospaces suggests the ability to predict the phylogenetic identity of a leaf based on its shape. Previous computer vision approaches coupled with machine learning have successfully predicted plant family and order using vein patterning and margin features (Wilf et al., 2016). Can the same be done using a persistent homology analysis of the outline alone? Using the Leafsnap and Climate datasets (Table 1), which together represent 141 plant families, we used an LDA on PCs 1-179, representing >95% of the persistent homology morphospace variation, to create a classifier scheme. Leaves were then reassigned to the linear discriminant space using a cross-validated "leave one out" approach (Venables and Ripley, 2002), and the results were visualized as a confusion matrix (Figure 6), plotting the actual family identity of leaves as a function of the proportion of their predicted family identity. Family was used as the taxonomic level of prediction because it was the most specific level of identification common to all leaves. Using persistent homology, 27.3% of leaves were assigned to the correct plant family. Using a bootstrapping approach permuting plant family identity against leaf shape information, a 27.3% correct reclassification rate or higher was never achieved in 1,000 bootstrapped simulations, indicating that assignment is above chance. This outperforms traditional shape descriptor prediction (at a rate of 10.2%) by 2.7 times (Table 2), and including both persistent homology and traditional shape descriptor data only marginally increases the prediction rate (to 29.1%) over that of persistent homology alone (27.3%), indicating that persistent homology largely captures the same shape features as traditional descriptors, but provides additional information as well. If order is used as the taxonomic level of prediction, prediction rates are almost identical to those for family (27.3% for persistent homology, 9.2% for traditional shape descriptors, and 29.1% for both).
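The classification step can be sketched as follows. The authors used lda() from the R package MASS with CV = TRUE and equal priors; the scikit-learn version below is an equivalent re-expression under those stated settings, not their code, and leave-one-out over the full dataset is slow in practice.

    # Sketch of the classifier: LDA on the leading persistent homology PCs
    # with "leave one out" cross-validation and equal priors across families.
    # An equivalent of the described R (MASS::lda, CV = TRUE) analysis.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    def family_prediction_rate(pc_scores: np.ndarray, families: np.ndarray) -> float:
        """pc_scores: (n_leaves, n_pcs); families: (n_leaves,) labels."""
        n_classes = len(np.unique(families))
        clf = LinearDiscriminantAnalysis(priors=np.full(n_classes, 1 / n_classes))
        accuracy = cross_val_score(clf, pc_scores, families, cv=LeaveOneOut())
        return accuracy.mean()   # proportion assigned to the correct family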
DISCUSSION
We have presented a new morphometric method using persistent homology, a topological approach, that can comprehensively measure leaf shape. Other methods measure leaf shape comprehensively, including traditional landmarks, pseudo-landmarks, and EFDs. However, no method comparatively analyzes the diverse shapes of leaves in seed plants (simple leaves, deeply lobed leaves, compound leaves of different shapes, leaves with differing numbers of leaflets or lobes, or large variation in petiole length and shape), only naturally varying leaves among related plant species (Supplementary Figure S1). Other morphometric methods that only analyze the external contour of shapes are sensitive to artifacts, such as internal holes made by the overlap of leaflets or lobes, or small errors during thresholding and isolation. Finally, although appropriate for plant organs that can be represented by discrete shapes, like leaves, petals, seeds, or other lateral organs, current morphometric techniques fail to capture other attributes of plant architecture, like the branching patterns of roots, shoots, and inflorescences. A framework that can not only measure shape, but other features that are important to the plant form, is currently lacking.
FIGURE 5 | Highly variable plant families are variable across Principal Components (PCs), and estimates of leaf shape diversity by family. (A) Variance was measured for each plant family and then ranked from most variable (yellow) to least variable (black) for each PC. Plant families are ordered by abundance, as seen in the bar graph (Right) indicating count number in the dataset. The most abundant plant families in the dataset tend to be the most variable. (B) Linear regression was used to model the median variance ranking for each plant family as a function of log10(count). The residuals are estimates of plant family leaf shape diversity, as corrected for representation in the dataset. Higher residual values indicate higher estimated leaf shape diversity. Gymnosperms, orange; magnoliids, yellow; rosids I, light blue; rosids II, dark blue; asterids I, light green; asterids II, dark green; other groups, gray.

FIGURE 6 | Predicting plant family using persistent homology. Using persistent homology data from the Climate and Leafsnap datasets, a linear discriminant analysis (LDA) was used as a classifier to predict plant family, cross-validated using a jackknifed "leave one out" approach. The vertical axis indicates actual plant family and the horizontal axis predicted plant family. Color indicates the proportion of leaves from each actual plant family assigned to each predicted family, such that proportions across the horizontal axis sum to 1. Black indicates no assignment. A phylogeny indicating key taxonomic groups is provided.

By converting shapes into a topological space, as defined by a function that isolates subsets of the shape and describes it in terms of neighboring pixel density (Figure 2), the described persistent homology approach can compare disparate leaf shapes across seed plants, allowing for the approximation of the overall leaf morphospace (Figure 3). By estimating pixel density, the method accommodates internal features (such as holes caused by leaflet overlap) or small processing artifacts, which do not unduly influence the output compared to the absence of such imperfections. The ability to compare shapes broadly and to be robust against processing artifacts will enable large-scale data analyses in the future, such as the analysis of digitized herbarium vouchers, ecological studies, or genetic and developmental insights into complex morphologies, for which current morphometric approaches are not designed. We detected clear differences in leaf shape between major phylogenetic groups (Figure 4) and estimated leaf shape diversity across plant families (Figure 5), demonstrating that a persistent homology approach is relevant for large-scale morphometric studies across plant evolution. The ability to comprehensively measure shapes permits alternative statistical approaches, moving beyond the descriptive statistics used with traditional shape descriptors (Figure 1) and allowing for classifier and prediction approaches (Figures 6, 7 and Table 2). Although the overall prediction rate of 27.3% for plant family is relatively low (Table 2), it is important to remember that it is above the level of chance (determined by bootstrapping, 1,000 simulations) and that the rates are not evenly distributed across families. Plant family prediction rates vary from 0 to 100% (Figure 7). The variability in rates is not overly influenced by sampling depth or variation within a group. For example, the prediction rate of plant family and abundance are correlated at ρ = 0.37, and the correlation with median rank PC variance is ρ = −0.24. It is also important to keep in mind that plants are usually identified using floral structures, and leaf morphology is not the most discriminating morphological factor between species. Additionally, our prediction is distributed over 141 plant families, whereas previous studies predicted over fewer, which decreases the correct assignment rate. Although comprehensive, our dataset does not begin to encompass the total shape variation present in a plant family, and there are undoubtedly collection biases in the data influencing prediction. Factors other than diversity within a group or the degree to which it is sampled likely influence the prediction rate, too. On the theoretical side, a unifying morphometric framework that can accommodate not only shapes but also the branching architectures of plants is lacking. Topology is a field of study concerned with features that are invariant under deformations, such as bending or stretching. Traditional morphometric methods (like landmarks and EFDs) work superbly with shapes that have either homologous features for landmarks or features that allow pseudo-landmarks and harmonic coefficients to correspond (some types of leaves, seeds, petals, etc.). Computer vision, machine learning, and deep learning approaches work especially well with two-dimensional gray-scale and color image data for prediction and identification (Joly et al., 2016; Wilf et al., 2016).
But the plant form is not a shape or a two-dimensional color image. Plants are branching architectures that develop through time. The connectedness of branches, irrespective of deformation or bending, is topology, and it is useful for describing variation in plant architecture. Describing branching patterns is relevant to describing phylogenetic trees, too, to which Topological Data Analysis approaches have been applied (Munch and Stefanou, 2018). We converted shape into a topological feature space to comprehensively describe leaf shape diversity where other methods have failed. But separate from this use (for shapes), persistent homology is a promising technique to describe diverse plant structures, including root architecture (Li et al., unpublished), serrations and branching patterns, and venation architecture in novel ways, quantifying complex morphological features relevant to botany and taxonomy that previously have only been described qualitatively. Topological Data Analysis and persistent homology approaches can also be applied in n-dimensional space. Plant development occurs in 4D: 3D, over time. Rather than describing development as a time series, plant morphology can be quantified as it truly is: a single, four-dimensional shape. There is a place for measuring plant morphology in terms of topological features, and this space has not been thoroughly explored yet; it can potentially drive a more comprehensive analysis of plant architecture across diverse cells, tissues, organs (pollen textures; Mander et al., 2013), organismal forms, or biomes (satellite images of the savannah; Mander et al., 2017), as we have used it here for leaves. The morphometric approach described here is compatible with similar persistent homology methods, creating a shared framework in which the plant form can be measured (Li et al., unpublished).
Leaf Shapes
The 182,707 leaf outlines from 141 plant families from 75 sites throughout the world used in this manuscript are available to download (Chitwood, 2017a). This file directory includes x,y coordinates that form the outlines of the leaves. Separate folders contain text files with x,y coordinates for the leaves from each of the indicated groups in Table 1. Within each folder, original x,y coordinates and scaled coordinates are provided. This dataset contains leaves from both published and unpublished sources (see Table 1 for details).
Persistent Homology
The MATLAB code necessary to recapitulate the persistent homology analysis in this manuscript can be found in the following GitHub repository (Li, 2017). Persistent homology is a flexible method to quantify branching structures (Edelsbrunner and Harer, 2008; Weinberger, 2011; Li et al., 2017), point clouds (Ghrist, 2008), two-dimensional and three-dimensional shapes (Gamble and Heo, 2010), and textures (Mander et al., 2013, 2017). Each of these different phenotypes can be described by a multidimensional vector (e.g., an Euler characteristic curve), integrating how homology (e.g., path-connected components) persists across the filtration of a simplicial complex.
Leaf contours are represented as two-dimensional point clouds extracted from binary images (Figure 2A). We use a Gaussian density estimator, which can be directly derived from the point cloud and is also robust to noise, to estimate the neighborhood density of each pixel. Denser point regions, such as serrations, lobes, or the attachment points of leaflets, have higher function values (Figure 2B). Formally, the Gaussian density estimator is defined as

f(x) = (1/n) Σᵢ exp( −‖x − yᵢ‖² / (2h²) ),

where y1, ..., yn are the data points and h is a bandwidth parameter. Because a set of local and regional topologies may often be more effective to represent shapes, we use a local persistent homology technique to subset the density estimator into 16 concentric annuli centered around the centroid of the leaf (Figures 2A,D). To achieve this, we multiply this function by a "bump" function K which highlights an annulus, defined as

K(x) = exp( −(‖x − y‖ − tσ)² / σ² ),

where y is the center of the annulus, tσ determines its radius, and the parameter σ is its width (Figure 2C). Each local function emphasizes the part of the density function falling in the annulus. Given a threshold and a local function, the points whose function values are greater than this threshold form a subset (a superlevel set). Changing this threshold value from the maximum function value to its minimum value, we get an expanding sequence of subsets, or a superlevel set filtration. Figure 2E shows the shapes above a plane, an example of a superlevel set filtration. For each subset, we calculate the Euler characteristic, which equals the number of connected components minus the number of holes. Thus, for a sequence of subsets, we get a sequence of numbers (a multidimensional vector). All 16 annuli yield 16 multidimensional vectors, which are concatenated into an overall vector used for analysis. PCA was performed in MATLAB on these vectors, and the PC scores and percent variance explained by each PC were used in subsequent analyses.
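The two kernels can be sketched directly from these definitions. In the code below, the Gaussian density estimator follows the standard form, and the annulus kernel is written as a Gaussian ring with center y, radius tσ, and width σ; the exact expression in the authors' MATLAB code may differ, so the ring form here is an assumption consistent with the text rather than a verbatim reimplementation.

    # Sketch of the two functions above. The Gaussian ring form of the
    # annulus ("bump") kernel is an assumption consistent with the text
    # (center y, radius t*sigma, width sigma), not the authors' verbatim code.
    import numpy as np

    def gaussian_density(x: np.ndarray, points: np.ndarray, h: float) -> np.ndarray:
        """Density at query pixels x (m, 2) from contour points (n, 2)."""
        d2 = ((x[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)   # (m, n)
        return np.exp(-d2 / (2 * h ** 2)).mean(axis=1)

    def annulus_kernel(x: np.ndarray, center: np.ndarray, t: int, sigma: float) -> np.ndarray:
        """Gaussian ring of radius t*sigma and width sigma around the centroid."""
        r = np.linalg.norm(x - center, axis=1)
        return np.exp(-((r - t * sigma) ** 2) / sigma ** 2)

    # The 16 local functions are density * annulus_kernel for t = 1, ..., 16.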
Statistical Analysis and Visualization
The R code (R Core Team, 2017) and data necessary to recapitulate the statistical analyses and figures in this manuscript can be found as a zipped folder directory on figshare (Chitwood, 2017b). Unless otherwise specified, all graphs were visualized using ggplot2 (Wickham, 2016). Scatterplots were visualized using the geom_point() function, density plots with the geom_density2d() function, and heatmaps with the geom_tile() function; colors were selected from ColorBrewer (Harrower and Brewer, 2003) and viridis (Garnier, 2017). Other visualization functions and specific parameters can be found in the code used to generate the figures (Chitwood, 2017b).
Variance was calculated for each plant family for each principal component using var(), and families were ranked for each principal component using rank() (Figure 5). Linear regression was performed using lm(), and residuals were retrieved to estimate leaf shape diversity for each plant family (Supplementary Figure S2). The Wilcoxon signed rank test was performed using wilcox.test() to test for higher or lower than expected phylogenetic diversity, using a two-sided test with µ = 0. LDA was performed using the lda() function in the package MASS (Venables and Ripley, 2002). LDA was performed using the number of principal components that contributed at least 95% of all variance (PCs 1-179 for phylogenetic prediction). Prediction using the discriminant space was performed with CV = TRUE for a "leave one out" cross-validated jackknifed approach, with the priors set equal across factor levels. Prediction rates were bootstrapped over 1,000 simulations: a for loop was used, permuting family against leaf identity, performing an LDA on the permuted data, and recording the correct prediction rate for each permuted simulation. A permuted correct prediction rate (out of 1,000 simulations) higher than the actual correct prediction rate was never detected.
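The permutation test described in the last two sentences can be sketched as below. The original was an R for loop around lda(); this Python version substitutes five-fold cross-validation for the jackknife purely for speed, which is our simplification, not the paper's procedure.

    # Sketch of the permutation test: shuffle family labels, refit the
    # classifier, and record the correct prediction rate for each simulation.
    # Five-fold CV replaces the paper's leave-one-out jackknife for speed;
    # that substitution is ours.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import KFold, cross_val_score

    def permutation_null_rates(pc_scores, families, n_sims=1000, seed=0):
        rng = np.random.default_rng(seed)
        cv = KFold(n_splits=5, shuffle=True, random_state=seed)
        rates = np.empty(n_sims)
        for i in range(n_sims):
            permuted = rng.permutation(families)   # break label/shape pairing
            clf = LinearDiscriminantAnalysis()
            rates[i] = cross_val_score(clf, pc_scores, permuted, cv=cv).mean()
        return rates   # compare the observed rate against this null distribution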
PHOTOGRAPHIC INTERPRETATION OF THE STRATIGRAPHY OF BLOCK ISLAND, RHODE ISLAND
Application of standard geologic photointerpretive techniques to a series of offshore photographs of the Block Island cliffs has shown significant correspondence with previous stratigraphic field studies. Constant scale while mapping and a vertical exaggeration for the final profile were achieved with the aid of a Zoom Transfer Scope. Photogeologic unit boundaries were defined by: the nature of bedding visible in the photographs, the extent of different erosional and drainage patterns on the cliff face, changes in texture of the cliff face, tonal variations, and variations in clast size. Five photogeologic units have been defined on the northern cliffs of the island by using the above criteria. Their boundaries correspond closely with those of units defined in the field by previous workers and observed in field work for this study. These include a basal outcropping Cretaceous unit, the Raritan Formation, and Pleistocene units equivalent to two members of the Montauk Drift of Sirkin (1976) and the New Shoreham Outwash and New Shoreham Till of Sirkin (1976). Nine photogeologic units, representing six depositional stages, were identified on the southern cliffs of the island. These six stages are equivalent to the three members of the Montauk Drift of Sirkin (1976), his New Shoreham Outwash and Till, and channel gravel deposits laid down during the final stages of glacial retreat (Sirkin, 1976). Where Sirkin provided unit thicknesses or outcrop locations, there is also agreement with the photointerpretive results.
The New England Islands, which include Long Island, Block Island, No Mans Land, Marthas Vineyard, and Nantucket, are cored by a cuesta of Cretaceous sediments (Johnson, 1925; Fenneman, 1938; McMaster and others, 1968). This cuesta is cut by several preglacial river systems running southward to ancient shoreline positions on the continental shelf (McMaster and Ashraf, 1973). It is overlain in places by Tertiary deposits (Woodworth and Wigglesworth, 1934) and capped by Pleistocene glacial sediments.
the Pleistocene stratigraphy and palynology.
STRATIGRAPHIC MAPPING BY PHOTOGRAPHY

Although remote sensing techniques have been used for geologic mapping from aerial photographs (Ray, 1960; van Bandat, 1962; American Society of Photogrammetry, 1966, 1968, 1975; Avery, 1977), little attention has been paid to the value of ground photographs.
As photographing the entire geologic section does not require detailed study at the time, the worker can cover large areas of the cliff in a short time. Later, by mosaicking the photographs, he can see larger areas than might be visible in the field, and with minimal distortion from offset in the section. In addition, he can review two or more sections which may be geologically separated by using the photographs, which a field worker would not have available.
For later, detailed studies, the geologist need only return to the photographs to remap the section in the required detail. In addition, he can easily map a section without the distortion induced by viewing an outcrop from one limited vantage point. In coastal studies, cliffed areas which are relatively inaccessible can be photographed from a distance offshore or from a plane.
The field time required to collect data for initial mapping is less for a photographic survey than for an equivalent on-site study.
The Block Island photographs were taken in one day from offshore, while even cursory field studies of the cliffs, which must be done on foot due to a lack of roads at the base, require two days of field time (Hollick, 1896; Bierschenk, unpub.).
AVAILABLE GROUND DATA
Ground data is any information collected on the ground, or derived from such data, to aid in the interpretation of remotely sensed data (American Society of Photogrammetry, 1975). For Block Island, such data are available from previous studies (Woodworth and Wigglesworth, 1934; Hansen and Schiner, 1964; Sirkin, 1976).
These data could be plotted on scale sections of the cliffs and used as standards against which to measure the detail recorded in the photographic study. There also exist a number of brief studies describing, but not illustrating, specific sections, and numerous articles and books on the regional geology, which were used to provide supplementary data to aid in the interpretation.
BEDROCK GEOLOGY
The crystalline bedrock, as exposed on the Rhode Island mainland, is primarily Paleozoic and Mesozoic granite or gneiss (Quinn, 1971). The metasediments of the Narragansett Basin may pass just to the east of Block Island (Woodworth and Wigglesworth, 1934; Tuttle, Allen, and Hahn, 1961). The bedrock surface on the mainland has been eroded to form an ancient peneplain (Davis, 1895) which dips to the south at five to ten meters per kilometer (Woodworth and Wigglesworth, 1934). Seismic work on the island showed the crystalline bedrock to be at a depth of about 330 meters at the north end of the island (Tuttle, Allen, and Hahn, 1961). Tuttle, Allen, and Hahn (1961) … (Fuller, 1906, 1914; Woodworth and Wigglesworth, 1934; Fetter, 1976). The unconsolidated sediments on Block Island are between 150 and 200 meters thick (Tuttle, Allen, and Hahn, 1961), the upper portions of which are known from well logs (Hansen and Schiner, 1964) and outcrops (Livermore, 1877; Woodworth and Wigglesworth, 1934; Christopher, 1967; Sirkin, 1976) … (Woodworth and Wigglesworth, 1934). Several boulders found near Clay Head contained fossils characteristic of the Calvert Formation of Maryland and Virginia, and regionally unique to this occurrence (Shimer, 1916; Woodworth and Wigglesworth, 1934). No associated strata are found nearby, so the boulders presumably weathered out of the till (which implies an as-yet unknown source to the north) or served as ballast for some boat (Shimer, 1916).
PLEISTOCENE GEOLOGY
By far the greatest amount of the exposed sediments on the island are Pleistocene. Upham (1879) … Woodworth and Wigglesworth (1934) and Fuller (1914), but did not attempt to relate his work directly to theirs.
Sirkin (1972, 1976) …, and this complexity has resulted in various contradictory interpretations of the stratigraphic sequence. Some of these will be discussed later. Shaler (1888, 1894, 1897; Shaler and others, 1896) reported high-amplitude, sharply compressed folds of considerable size in the Tertiary and Cretaceous beds of Marthas Vineyard. He also cited communications from Woodworth (Shaler, 1897) indicating similar features existing on Block Island. Shaler considered these folds to be the result of regional orogenic activity rather than of glacial deformation, for two reasons.
One was that he saw a well-developed preglacial topography superimposed on his folded sediments. Secondly, he observed that the apparent direction of applied stress on the folded sediments was at 90° to the direction of apparent motion of the ice sheet.
Upham considered some of the deformation in the lower beds on Marthas Vineyard to be the result of ice thrusting of the frozen sediment. He ascribed much of the volume of Shaler's "preglacial erosional remnants" to glaciofluvial deposition in contact with the ice sheet.
Shaler considered the southern margin of the ice sheet to have been very thin and, in fact, floating on the sea surface between supporting points on his erosional remnants; thus it would not be capable of deforming sediments. Upham, in contrast, considered that the sea had retreated out to the continental slope, and that the ice sheet was much thicker and more widespread than Shaler supposed, and therefore more likely to cause deformation.
Woodworth (1897) also considered the folding and thrusting on Block Island and Marthas Vineyard to be ice-pushed features, but he felt that positive evidence either way was lacking, as the deformation, even if caused by glacial ice movement, had occurred before the deposition of the glacial sediments.
Kaye (1964a) stressed the thrust-faulting and folding found on Marthas Vineyard and considered the deformation to be of glacial origin.
He considered that the thrusting may have resulted in displacements of up to several miles, and stated that this complicated interpretation of the sections, as it was difficult to determine whether differences in adjacent deposits were due to faulting or to some other factor.

METHODS OF PHOTOGEOLOGIC MAPPING

The photographs were scanned twice, with details being transferred to tracing paper at a constant horizontal scale of 1:1,000 and 2× vertical exaggeration. The first mapping was for the purpose of noting lineations and the lithologic variations, while the second was to determine apparent unit boundaries.
Constant-scale plotting was achieved by varying the photo magnification so that the cliff height, as determined from the U. S. Geologic Survey topographic map of the island (U. S. Geologic Survey, 1970), was to scale.
At least three control points per strip were used to provide the maximum possible accuracy.
Print enhancement by unsharp masking, a well-known remote-sensing technique (American Society of Photogrammetry, 1975; r2s, undated; Mirkin and others, 1972), was employed in apparently featureless or confused areas to help distinguish lineations. Duplicate negatives of increased contrast range, made for use in the unsharp masking process, were also printed directly to provide improved tonal discrimination between units.
These two techniques are applicable to different parts of the problem: unsharp masking is considered to provide increased line enhancement, while the increased tonal range achieved by printing the duplicate negatives without masks simplifies the discrimination of areas of different tonal values. As the preparations for the two processes are the same, both techniques were used wherever any enhancement was needed, the information from the resultant prints being transferred directly to the paper section with the Zoom Transfer Scope.
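The darkroom procedure described above has a direct digital analogue which may clarify what the masking accomplishes. The following is a minimal illustrative sketch, not the procedure used in this study; the function name and parameter values are our own, and an 8-bit greyscale image held in a NumPy array is assumed.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, blur_sigma=5.0, amount=1.0):
    """Digital analogue of darkroom unsharp masking.

    A blurred ("unsharp") copy of the image stands in for the
    out-of-focus duplicate negative; subtracting it from the
    original isolates high-frequency detail such as lineations,
    which is then added back, scaled by `amount`.
    """
    img = image.astype(float)
    blurred = gaussian_filter(img, sigma=blur_sigma)
    detail = img - blurred               # edge/lineation residual
    enhanced = img + amount * detail
    return np.clip(enhanced, 0, 255).astype(np.uint8)
```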
PHOTOGEOLOGIC MAPPING INTERPRETATION
Lithologic differences were determined by tonal variations and textural variations on the photographs, differences in erosional features and drainage patterns on the cliff face, the nature of the bedding, and, to a lesser extent, by the topography as determined from air photos and the U.S. Geologic Survey topographic map of the island.
Tonal variations within a single photograph, but not across photo boundaries, were considered. This was done to minimize errors due to differences in exposure and processing between photographs. As noted in standard references (Ray, 1960; van Bandat, 1962; American Society of Photogrammetry, 1966, 1975; Avery, 1977), changes in drainage patterns were considered significant indicators of lithologic changes.
Hoodoos developed in massive units composed of dominantly fine sediments, while fine parallel drainage was most apparent in coarse sediments or areas of recent slumping, presumably poorly compacted and relatively permeable, thus retarding growth of the drainage channels.
Both of these features can be considered variants on the vee-shaped gullies, as adapted to their special lithologic conditions. The time required to develop these features is probably also variable, with hoodoos and large vee-shaped gullies requiring the longest time to develop, and parallel drainage representing a short-term feature in areas of recent slumping. Bedding and structures were defined on the basis of the number of beds per five-meter vertical section of cliff, as codified in the sketch below. Fine-bedded sections had from three to twenty beds per five-meter vertical section (twenty beds per five-meter section represented the limit of clear resolution). In places, the texture appeared to be that which would be expected from sharply contrasting bands at a spacing just beyond the limit of resolvable detail. "Fine bedded" was chosen as it has no formal sedimentological connotations, and therefore would be less likely to be misconstrued than more traditional terms such as "thin-bedded." Between one bed per ten-meter section and three beds per five-meter section, units were considered to be thick-bedded, while beyond that point they were considered massive. Massive units in this case could include units with bed thicknesses of thirty cm or less (medium-bedded or below; Blatt, Middleton, and Murray, 1972), which were too fine to be distinguished.
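The thresholds above reduce to a simple decision rule on bed counts per metre of section. The following is a minimal illustrative sketch, not part of the original study; the function name and the assumption that counts scale linearly with interval length are ours.

```python
def classify_bedding(bed_count, interval_m):
    """Classify bedding style from a bed count over a vertical interval.

    Thresholds follow the text: 'fine' means 3-20 beds per 5 m of
    section, 'thick' spans one bed per 10 m up to 3 beds per 5 m,
    and anything sparser is 'massive'.
    """
    rate = bed_count / interval_m   # beds per metre of section
    if rate >= 3 / 5:               # 3 or more beds per 5 m
        return "fine"
    if rate >= 1 / 10:              # at least 1 bed per 10 m
        return "thick"
    return "massive"

assert classify_bedding(10, 5) == "fine"
assert classify_bedding(2, 10) == "thick"
assert classify_bedding(0, 10) == "massive"
```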
If the units, or portions thereof, appeared resistant, this was also noted, as a possible indication of thrust planes or coarse layers cemented by iron oxide deposits from groundwater action.
Topographic expression, studied with the aid of stereo-paired air photo coverage of the island and with the aid of a conventional 7½-minute topographic map (U. S. Geologic Survey, 1970), was plotted for those areas adjacent to the cliffs to aid in the delineation of the units present.
The possible influence of geology on topography was suggested by various authors (Woodworth and Wigglesworth, 1934; Merrill, 1896).
FIELD STUDIES
To assist in both the photogeologic mapping and interpretation, geologic field studies were conducted throughout southern New England and Long Island. In these areas, the type sections of several of the major units were studied, and interpretations were discussed with recent workers. These field studies included one on Long Island which included discussion by Dr. L. A. Sirkin. The photogeologic results were then compared with the interpretations of the three most informative works available on the island's geology (Woodworth and Wigglesworth, 1934; Hansen and Schiner, 1964; and Sirkin, 1976). The third glacial unit found is the Jameco Outwash, which lies unconformably on the Nebraskan sediments. Its basal member is a boulder bed, locally cemented by iron oxides, outcropping on Clay Head. Woodworth and Wigglesworth (1934, p. 39, 52) describe this unit as a late Sankaty (Yarmouth) sand laid down by a retreating sea, and twice (Woodworth and Wigglesworth, 1934, p. 52, 220) identify it as a glacial gravel and sand.
The next unit, the Manhasset Formation, is the fourth glacial unit (Hansen and Schiner, 1964).

SIRKIN, 1976

Sirkin (1976) describes only two glacial formations, the Montauk Formation and the New Shoreham Drift, and isolated occurrences of a presumed interstadial unit, the whole column being capped by scattered occurrences of channel gravels, peat bogs, and lake deposits (fig. 6, table 2).
The first formation, the Montauk, is of early Wisconsinan age and extends below sea level wherever it occurs on the island. The second glacial unit is the New Shoreham Drift. Photogeologic unit boundaries were determined in the cliff photographs by tonal and textural variations, differences in erosional and drainage patterns, and the character of the bedding. Unit boundaries were not plotted for cliffs less than ten meters high, as such cliffs did not provide sufficient continuity; cliffs of that height should present no difficulty to the field worker in any event.
Tonal, erosional, and drainage differences were used to determine unit boundaries. Textural variations were of primary importance in determining grain size. Texture is an indication of detail below the resolution of the film. Units of smooth or velvety texture were considered fine-grained (less than 2 mm diameter). Units with coarse texture were considered to contain sediments of between 4 and 128 mm diameter, while particles of greater than 128 mm diameter were considered to occur in areas of extremely coarse texture. Particles in this last class were normally visible as distinct objects, although at the limits of resolution of the film used.
Bed thicknesses, as discussed previously, were fine (three to twenty beds per five-meter interval), coarse (one to six beds per ten-meter section), and massive (less than one bed per ten-meter section). To the west the unit averages two meters thick. Displacement along the fault is two to three meters. This unit was studied only photographically. Unit J (fig. 9b), found at the base of Barlows Point, is up to twenty-two meters thick and consists of fine parallel beds and massive beds. It has a rough surface and texture, and grain size probably ranges from fine to coarse, with clast sizes up to twenty or twenty-five cm.
Descriptions apply only to the locations referred to.
Only photointerpretation was used to describe this unit.
Unit N (fig. 11), also from Vaills Beach, has massive bedding and a rough, resistant surface, and it appears to contain material with a size range from fine materials up to clasts of twenty cm diameter. It was studied from photographs only. Unit A occurs as an asymmetrical hump with the gentler slope to the north. It is overlain by unit B, which is strongly contorted at the southern end of the exposure and dips steeply to the south there. Unit C, which is flat-lying above these two units, appears to be extremely crumpled just south of them and at the same level. This suggests that units A and B may be the tip of a thrust plate, emplaced by ice-shove of the frozen ground, as has been observed on Long Island (Sirkin and Mills, 1975) and Marthas Vineyard (Kaye, 1964a). Relocating Hansen and Schiner's (1964) contacts to positions closer to those reported by Woodworth and Wigglesworth (1934) and Sirkin (1976) yields results which agree with those of the other workers, and which are further confirmed by this study.
North of Balls North Point, however, Hansen and Schiner show a till unit which, starting from the top of the cliff, dips to the south at 9° until it joins up with the unit A equivalent at Balls North Point. This unit is not visible in the photographs, which plainly show unit C, the only unit in the area, either flat-lying or dipping to the north at a very low angle, with bedding planes visible throughout the section.
Hansen and Schiner's (1964) inland section in this area (fig. 5, section E-P) shows the units in that area all dipping to the north. As the control for lithologic changes from the well logs (measured from the surface) has no perspective problems and provides better opportunities for examining the lithologies of the units involved, and as all other data support the probability of north-dipping units, it would seem that the upper portion of Hansen and Schiner's (1964) sections, at least, should be viewed with caution. The till Hansen and Schiner (1964) show at the base of the cliff in the northern Clay Head area corresponds to a possible outcrop of unit A under the thrust plane in the same area; unfortunately, fan deposits at the base of the cliff prevent tracing this contact on the photographs.
The deposits on the northern cliffs mapped in this study and by previous workers show a close correspondence. The boundaries of the five units were photogeologically mapped first and then were observed to be consistent with the units of previous workers (Sirkin, 1976) and the Nantucket Ground Moraine (Woodworth and Wigglesworth, 1934). The results of this study support Sirkin's (1976) interpretation in the areas where Sirkin has channel gravels mapped.
Predictions on the northern and southern cliffs were borne out by field checks.
APPENDIX 1 - RECOMMENDATIONS
This study was carried out using existing black and white photographs. In future studies, if funds and time are available, preliminary testing should be carried out to determine the most suitable film and filter combinations for resolution and photogeologic unit discrimination for a particular type of deposit.
To make full use of stratigraphic photointerpretation, photo coverage should also be obtained in color, whether by use of a color slide or negative film or by use of a tricolor separation process. Color slide and negative films are generally of poorer resolution and coarser grain than black and white films, so use of the tricolor method is preferable if optimal resolution is required. It has the disadvantage of requiring either more cameras or more complex equipment and some means of viewing or printing the combined separation negatives, but it permits, by addition of a fourth camera, procurement of "false color" infrared data and finer distinction between units by selective filtration. There are presently several multi-band or multi-camera systems available for this type of work. Offsetting the advantages of the tricolor method to some extent is the convenience of the color transparency or negative system, with only one camera and film to be handled, a significant help both in operation of the system and in viewing the final product.
Color negatives are somewhat less convenient than transparencies, as prints must be made before the section can be viewed with any facility.
In addition, paper prints have inherently less detail than images viewed by transmitted light. This is because in viewing an image by reflected light the light ray must pass through the image-bearing emulsion twice before reaching the eye, whereas an image viewed by transmitted light only requires the light ray to pass through the emulsion once. Thus, for the same image density, more detail will be visible in a transparency than in a print; or, given the same range of visible detail, a transparency will have more contrast. Transparencies may be viewed directly on a Zoom Transfer Scope or other camera lucida, or prints may be made and assembled into mosaic form if this is desired. Unmounted prints or transparencies are desirable for areas of great relief, as they can be viewed under stereoscopes, thus permitting more accurate models to be made.
In securing the initial photographs, a means of keeping the range constant should be provided. To ensure continuity of coverage, two cameras should be available,
or interchangeable magazines if the camera used has this feature. This will permit changing film at the end of the roll without leaving any gaps in the coverage. In 35-mm cameras, where the film must be rewound after each roll, or in aerial cameras, which must be unloaded and reloaded in total darkness, this is a distinct problem. In analyzing the photographs, stereo images may prove helpful, enabling resolution of details which may otherwise be lost in rubble, vegetation, or other masking material. With practice, the observer can also estimate slopes of resistant faces or loose sediments, the former giving an idea of the relative resistances of the units to erosion and the latter indicating the grain size of the sediments present. | 4,914 | 2022-01-01T00:00:00.000 | [
"Geology"
] |
Some q-Rung Picture Fuzzy Dombi Hamy Mean Operators with Their Application to Project Assessment
: The recently proposed q-rung picture fuzzy sets (q-RPFSs) can describe complex fuzzy and uncertain information effectively. The Hamy mean (HM) operator achieves good performance in the process of information aggregation due to its ability to capture the interrelationships among aggregated values. In this study, we extend the HM to the q-rung picture fuzzy environment, propose novel q-rung picture fuzzy aggregation operators, and demonstrate their application to multi-attribute group decision-making (MAGDM). First of all, on the basis of the Dombi t-norm and t-conorm (DTT), we propose novel operational rules of q-rung picture fuzzy numbers (q-RPFNs). Second, we propose some new aggregation operators of q-RPFNs based on the newly developed operations, i.e., the q-rung picture fuzzy Dombi Hamy mean (q-RPFDHM) operator, the q-rung picture fuzzy Dombi weighted Hamy mean (q-RPFDWHM) operator, the q-rung picture fuzzy Dombi dual Hamy mean (q-RPFDDHM) operator, and the q-rung picture fuzzy Dombi weighted dual Hamy mean (q-RPFDWDHM) operator. Properties of these operators are also discussed. Third, a new q-rung picture fuzzy MAGDM method is proposed with the help of the proposed operators. Finally, a best-project-selection example is provided to demonstrate the practicality and effectiveness of the new method. The superiorities of the proposed method are illustrated through comparative analysis.
Introduction
In the framework of multi-attribute group decision-making (MAGDM), decision-makers evaluate all alternatives from multiple aspects. Afterward, the best alternative is determined according to some techniques and methods. Hence, when using MAGDM models to deal with real decision-making problems, a very important issue is to express decision-makers' evaluation information over alternatives properly. Due to the complexity of decision-making problems and the inherent fuzziness of information, it is almost impossible for decision-makers to express their opinions in crisp numbers. Atanassov [1] provided a new methodology to deal with fuzzy information, called intuitionistic fuzzy sets (IFSs). Thus, IFSs have been widely and successfully applied in MAGDM [2][3][4][5][6][7][8]. IFSs are constructed from a series of ordered pairs, called intuitionistic fuzzy numbers (IFNs), each having a membership and a non-membership degree. The membership degree represents the degree to which an element belongs to a given set, and the non-membership degree denotes the degree to which the element does not belong to the given set. An obvious fact is that in IFSs, once the membership and non-membership degrees are determined, the indeterminacy or hesitancy degree follows by default. For example, let α = (0.3, 0.4) be an IFN; then the indeterminacy degree of α is 1 − 0.3 − 0.4 = 0.3. However, in some situations which require human opinions involving more answer types such as yes, abstain, no, and refusal (such as voting), IFSs are insufficient and unsuitable for expressing the decision-makers' opinion. Hence, Cuong [9] extended the classical IFSs and proposed the concept of picture fuzzy sets (PFSs), which have a positive membership degree, a neutral membership degree, and a negative membership degree. As an extension of IFSs, PFSs can deal with more of the decision-makers' opinion and are more flexible than IFSs. Therefore, MAGDM based on PFSs has become a promising research field. Recently, quite a few achievements on PFSs in MAGDM have been reported. Wei [10] extended the traditional TODIM to MAGDM with picture fuzzy (PF) information. Wei [11] proposed the PF cross entropy and applied it to solving MAGDM in which decision-makers are required to use PF numbers to express their evaluation values. Wei [12] introduced the cosine similarity measures between PFSs and showed their application in strategic decision-making problems. Wei [13] proposed operations of PF numbers. To deal with MAGDM problems in which attributes are dependent, Xu et al. [14] proposed PF Muirhead mean operators. Wei [15] investigated PF operational rules based on the Hamacher t-norm and t-conorm. Jana et al. [16] and Zhang et al. [17] put forward new PF aggregation operators based on the Dombi t-norm and t-conorm. Liu and Zhang [18] proposed the concept of a picture linguistic set by combining PFSs with a linguistic term set, and investigated picture linguistic aggregation operators. Wei [19] and Wei et al. [20] combined 2-tuple linguistic variables with PFSs and proposed picture 2-tuple linguistic sets as well as their aggregation operators. Wei [21] further introduced picture uncertain linguistic variables, proposed picture uncertain linguistic aggregation operators, and applied them in decision making.
The constraint of PFSs is that the sum of the positive membership degree, the neutral membership degree, and the negative membership degree should not exceed one. However, this constraint cannot always be satisfied in practical MAGDM problems. For example, suppose a decision-maker provides 0.6 as the positive membership degree, 0.7 as the neutral membership degree, and 0.8 as the negative membership degree. Then the evaluation value (0.6, 0.7, 0.8) cannot be represented by PFNs, as 0.6 + 0.7 + 0.8 = 2.1 > 1. In order to deal effectively with such a case, motivated by Yager's [22] q-rung orthopair fuzzy sets (q-ROFSs), Li et al. [23] proposed the concept of q-RPFSs. As we know, the q-ROFSs, satisfying the condition that the sum of the qth power of the membership degree and the qth power of the non-membership degree is equal to or less than one, are a good tool to express decision-makers' evaluation values in MAGDM [24][25][26][27][28][29]. Hence, q-RPFSs satisfy a similar constraint to q-ROFSs, i.e., the sum of the qth power of the positive membership degree, the qth power of the neutral membership degree, and the qth power of the negative membership degree is equal to or less than one. However, in Reference [23], Li et al. did not study an aggregation operator for q-rung picture fuzzy information. Thus, the purpose of this paper is to propose q-rung picture fuzzy aggregation operators.
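To make the constraint concrete, consider the evaluation value (0.6, 0.7, 0.8) above. Since each degree lies in [0, 1], raising q can only shrink the sum of the qth powers, so increasing q relaxes the constraint until the value becomes admissible:

\[ 0.6^3 + 0.7^3 + 0.8^3 = 0.216 + 0.343 + 0.512 = 1.071 > 1, \]
\[ 0.6^4 + 0.7^4 + 0.8^4 = 0.1296 + 0.2401 + 0.4096 = 0.7793 \le 1. \]

Hence (0.6, 0.7, 0.8) is not a valid q-RPFN for q = 3, but it is valid for q = 4 and for any larger q.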
When considering q-rung picture fuzzy aggregation operators, we should pay attention to two aspects: the operational rules and the aggregation functions. (1) For the first aspect, Li et al. [23] presented some basic algebraic operations of q-RPFNs. The DTT [30] are a general t-norm and t-conorm, which have the advantage of making the information aggregation process more flexible. Due to this characteristic, DTT have been applied in the information aggregation process of IFSs [31], single-valued neutrosophic sets [32,33], and hesitant fuzzy sets [34]. Therefore, this paper proposes new q-rung picture fuzzy operational rules. (2) For the second aspect, we should notice the fact that attributes are usually related in practical MAGDM problems. This means that not only the attribute values, but also the interrelationships among them, should be taken into account. The Bonferroni mean and Heronian mean are two powerful aggregation functions which consider the interrelationship between any two arguments. However, in most situations, such interrelationships exist among multiple attributes. The HM is an aggregation function which is able to reflect the interrelationship among multiple attributes. Up to now, the HM operator has been successfully applied to aggregate intuitionistic fuzzy numbers [35], Pythagorean fuzzy numbers [36], 2-tuple linguistic neutrosophic numbers [37], and linguistic neutrosophic numbers [38]. Hence, this paper utilizes the HM operator to fuse q-RPFNs based on Dombi operations and proposes a family of q-rung picture fuzzy Dombi Hamy mean operators. Moreover, a new MAGDM method is presented on the basis of the proposed aggregation operators.
We organize this paper as follows. Section 2 reviews basic concepts and proposes Dombi operations of q-RPFNs. Section 3 proposes the q-rung picture fuzzy Dombi Hamy mean operators and studies their properties. Section 4 introduces a new MAGDM method. Section 5 shows the performance of the proposed method in dealing with a real MAGDM problem. Concluding remarks are given in Section 6.
Preliminaries
In this section, we briefly review the concepts of q-ROFSs, DTT, and the HM operator. On this basis, we propose the Dombi operational rules of q-RPFNs.
2.1. q-Rung Orthopair Fuzzy Set (q-ROFS)

Definition 1 [39]. Let X be a finite universe of discourse. A q-rung orthopair fuzzy set (q-ROFS) A defined on X is given as follows: $A = \{\langle x, u_A(x), v_A(x)\rangle \mid x \in X\}$, where $u_A(x) \in [0, 1]$ is called the degree of membership of x to A and $v_A(x) \in [0, 1]$ is called the degree of non-membership of x to A. $u_A(x)$ and $v_A(x)$ satisfy the following condition: $0 \le u_A^q(x) + v_A^q(x) \le 1$, with $q \ge 1$. Li et al. [23] proposed the concept of q-rung picture fuzzy sets by taking the decision-makers' neutral membership degree into account in q-ROFSs.
Definition 2 [23]. Let X be an ordinary fixed set. A q-rung picture fuzzy set (q-RPFS) A defined on X is given as follows: $A = \{\langle x, u_A(x), \eta_A(x), v_A(x)\rangle \mid x \in X\}$, where $u_A(x)$, $\eta_A(x)$, and $v_A(x)$ represent the degree of positive membership, the degree of neutral membership, and the degree of negative membership, respectively, satisfying $u_A(x), \eta_A(x), v_A(x) \in [0, 1]$ and $0 \le u_A^q(x) + \eta_A^q(x) + v_A^q(x) \le 1$ with $q \ge 1$. Then $\pi_A(x) = \left(1 - u_A^q(x) - \eta_A^q(x) - v_A^q(x)\right)^{1/q}$ is called the degree of refusal membership of x to X. For simplicity, $(u_A(x), \eta_A(x), v_A(x))$ is called a q-RPFN, denoted by α = (u, η, v).
To compare two q-RPFNs, we propose a method to rank q-RPFNs.
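For orientation only, score and accuracy functions in this family of papers typically take a form such as

\[ S(\alpha) = u^q - v^q, \qquad H(\alpha) = u^q + \eta^q + v^q, \]

with α ranked above β when $S(\alpha) > S(\beta)$, and ties broken by the accuracy H. This is an assumed, illustrative form, not the authors' verbatim definition; variants that also penalize the neutral degree exist and should be checked against the original.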
Dombi T-Norm and T-Conorm
Definition 4 [30]. Let x and y be any two real numbers in [0, 1]. Then the Dombi T-norm and T-conorm (DTT) between x and y are defined as follows:

\[ D(x, y) = \frac{1}{1 + \left[\left(\frac{1-x}{x}\right)^{\lambda} + \left(\frac{1-y}{y}\right)^{\lambda}\right]^{1/\lambda}}, \qquad D^{c}(x, y) = 1 - \frac{1}{1 + \left[\left(\frac{x}{1-x}\right)^{\lambda} + \left(\frac{y}{1-y}\right)^{\lambda}\right]^{1/\lambda}}, \]

where $\lambda > 0$ and $(x, y) \in [0, 1] \times [0, 1]$. Based on the DTT, we provide new operations of q-RPFNs.
Hamy Mean
In 1998, Hara [40] proposed an aggregation operator for non-negative real numbers, HM, which captures the relationship between multiple input parameters.
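For reference, the HM of dimension k over non-negative reals $x_1, \dots, x_n$ has the standard form

\[ \mathrm{HM}^{(k)}(x_1, \dots, x_n) = \frac{1}{C_n^k} \sum_{1 \le i_1 < \cdots < i_k \le n} \left( \prod_{j=1}^{k} x_{i_j} \right)^{1/k}, \]

where $(i_1, \dots, i_k)$ ranges over all k-tuple combinations of $(1, \dots, n)$ and $C_n^k$ is the binomial coefficient.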
Furthermore, Wu et al. [37] proposed a dual form of HM, called the dual Hamy mean (DHM).
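The dual form interchanges the roles of sum and product:

\[ \mathrm{DHM}^{(k)}(x_1, \dots, x_n) = \prod_{1 \le i_1 < \cdots < i_k \le n} \left( \frac{\sum_{j=1}^{k} x_{i_j}}{k} \right)^{1/C_n^k}. \]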
Some q-Rung Picture Fuzzy Dombi Hamy Mean Operator
In this section, we utilize the HM and DHM to fuse q-rung picture fuzzy information based on DTT and develop the q-RPFDHM operator, the q-RPFDWHM operator, the q-RPFDDHM operator, and the q-RPFDWDHM operator. In addition, some properties and special cases of these new operators are also studied.
Based on the DTT operational rules of q-RPFNs, we can obtain the following theorem.
In the following, some desirable properties of the q-RPFDHM operator are introduced.
Theorem 3 (Idempotency).
Let $\alpha_j = (u_j, \eta_j, v_j)$ $(j = 1, 2, \dots, n)$ be a set of q-RPFNs, and suppose $\alpha_j = \alpha$ for all j. Then $q\text{-RPFDHM}^{(k)}(\alpha_1, \dots, \alpha_n) = \alpha$. Proof. Since $\alpha_j = \alpha$ for all j, the result follows by direct substitution. Theorem 4 (Monotonicity). Let $\alpha_j = (u_{\alpha_j}, \eta_{\alpha_j}, v_{\alpha_j})$ and $\beta_j = (u_{\beta_j}, \eta_{\beta_j}, v_{\beta_j})$ $(j = 1, 2, \dots, n)$ be two sets of q-RPFNs with $u_{\alpha_j} \le u_{\beta_j}$, $\eta_{\alpha_j} \ge \eta_{\beta_j}$, and $v_{\alpha_j} \ge v_{\beta_j}$ for all j. Then $q\text{-RPFDHM}^{(k)}(\alpha_1, \dots, \alpha_n) \le q\text{-RPFDHM}^{(k)}(\beta_1, \dots, \beta_n)$. Proof. Since $u_{\alpha_j} \le u_{\beta_j}$ holds for all j, we can obtain $u_\alpha \le u_\beta$ for the aggregated values. Similarly, we can also prove that $\eta_\alpha \ge \eta_\beta$ and $v_\alpha \ge v_\beta$. Therefore, Theorem 4 is proved.
Further, we shall discuss some special cases of the q-RPFDHM operator with respect to the parameters k and q.
3.2. The q-Rung Picture Fuzzy Dombi Weighted Hamy Mean Operator

Definition 9. Let $\alpha_j = (u_j, \eta_j, v_j)$ $(j = 1, 2, \dots, n)$ be a set of q-RPFNs with a weight vector $\omega = (\omega_1, \omega_2, \dots, \omega_n)^T$ such that $\omega_j \in [0, 1]$ and $\sum_{j=1}^{n} \omega_j = 1$. Then $q\text{-RPFDWHM}^{(k)}$ is called the q-rung picture fuzzy Dombi weighted Hamy mean (q-RPFDWHM) operator, where $C_n^k$ is the binomial coefficient.
In the following, some desirable properties of the q-RPFDWHM operator are presented.
Theorem 10 (Idempotency). Let $\alpha_j = (u_j, \eta_j, v_j)$ $(j = 1, 2, \dots, n)$ be a set of q-RPFNs, and suppose $\alpha_j = \alpha$ for all j. Then $q\text{-RPFDWHM}^{(k)}(\alpha_1, \dots, \alpha_n) = \alpha$. The proof of Theorem 10 is similar to that of Theorem 3 and is omitted here.
Theorem 12 (Boundedness). Let $\alpha_j = (u_j, \eta_j, v_j)$ $(j = 1, 2, \dots, n)$ be a set of q-RPFNs, and let $\alpha^{+} = (\max_j u_j, \min_j \eta_j, \min_j v_j)$ and $\alpha^{-} = (\min_j u_j, \max_j \eta_j, \max_j v_j)$. Then $\alpha^{-} \le q\text{-RPFDWHM}^{(k)}(\alpha_1, \dots, \alpha_n) \le \alpha^{+}$. The proof of Theorem 12 is similar to that of Theorem 5 and is omitted here. Further, we shall discuss some special cases of the q-RPFDDHM operator with respect to the parameters k and q.
Theorem 13. Let $\alpha_j = (u_j, \eta_j, v_j)$ $(j = 1, 2, \dots, n)$ be a set of q-RPFNs with a weight vector $\omega = (\omega_1, \omega_2, \dots, \omega_n)^T$ such that $\omega_j \in [0, 1]$ and $\sum_{j=1}^{n} \omega_j = 1$, let $(i_1, i_2, \dots, i_k)$ be all the k-tuple combinations of $(1, 2, \dots, n)$, and let $k = 1, 2, \dots, n$. Similarly to Theorem 2, we can prove that the value aggregated by the q-RPFDWDHM operator is still a q-RPFN. In the following, some desirable properties of the q-RPFDWDHM operator are presented.
Theorem 14 (Monotonicity). Let $\alpha_j = (u_{\alpha_j}, \eta_{\alpha_j}, v_{\alpha_j})$ and $\beta_j = (u_{\beta_j}, \eta_{\beta_j}, v_{\beta_j})$ $(j = 1, 2, \dots, n)$ be two sets of q-RPFNs satisfying the conditions of Theorem 4. Then $q\text{-RPFDWDHM}^{(k)}(\alpha_1, \dots, \alpha_n) \le q\text{-RPFDWDHM}^{(k)}(\beta_1, \dots, \beta_n)$. The proof of Theorem 14 is similar to that of Theorem 4 and is omitted here.
MAGDM Method Utilizing Proposed Operators
This section proposes a technique to solve MAGDM problems by utilizing the q-RPFDWHM and q-RPFDWDHM operators. For a MAGDM problem, assume that $A = \{A_1, A_2, \dots, A_m\}$ is a finite collection of m alternatives, $C = \{C_1, C_2, \dots, C_n\}$ a finite collection of n attributes, and $E = \{E_1, E_2, \dots, E_p\}$ a finite collection of p decision-makers. For every alternative $A_i$ $(i = 1, 2, \dots, m)$ on attribute $C_j$ $(j = 1, 2, \dots, n)$, the decision-maker $E_k$ $(k = 1, 2, \dots, p)$ is required to utilize a q-RPFN to express his/her evaluation value, which can be denoted as $\alpha_{ij}^{k} = (u_{ij}^{k}, \eta_{ij}^{k}, v_{ij}^{k})$. Finally, we can obtain a q-rung picture fuzzy decision matrix, which can be denoted as $A^{k} = (\alpha_{ij}^{k})_{m \times n}$. Let $\omega = (\omega_1, \omega_2, \dots, \omega_n)^T$ be the weight vector of the attributes, satisfying the condition that $\omega_j \in [0, 1]$ and $\sum_{j=1}^{n} \omega_j = 1$. The main steps for dealing with the MAGDM problem using the proposed operators are listed below.
Step 1. Normalize the original decision matrix. Generally, there are two kinds of attributes: benefit attributes and cost attributes. The original decision matrices can therefore be normalized by leaving benefit-type values unchanged and taking the complement of cost-type values, where $I_1$ and $I_2$ represent the benefit-type attributes and the cost-type attributes, respectively; a sketch of the assumed normalization is given below.
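In the picture fuzzy literature the usual complement swaps the positive and negative membership degrees, so a plausible form of the normalization, stated as an assumption rather than the authors' exact equation, is

\[ r_{ij}^{k} = \begin{cases} (u_{ij}^{k}, \eta_{ij}^{k}, v_{ij}^{k}), & C_j \in I_1, \\ (v_{ij}^{k}, \eta_{ij}^{k}, u_{ij}^{k}), & C_j \in I_2. \end{cases} \]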
Step 2. Utilize the q-rung picture fuzzy Dombi weighted average (q-RPFDWA) operator or the q-rung picture fuzzy Dombi weighted geometric (q-RPFDWG) operator to aggregate all decision-makers' evaluation matrices $A^k$ $(k = 1, 2, \dots, p)$, attribute value by attribute value, into a collective decision matrix $A = (\alpha_{ij})_{m \times n}$. The calculation process can be easily obtained from Definition 5.
Step 3. Utilize the q-RPFDWHM operator or the q-RPFDWDHM operator to obtain the overall preference value $\alpha_i$ $(i = 1, 2, \dots, m)$ of each alternative.
Step 4. Compute the score value $S(\alpha_i)$ of each overall preference value.

Step 5. Rank all alternatives in descending order of their scores and choose the optimal alternative(s).
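As a minimal illustration of Steps 4 and 5 only, the fragment below ranks alternatives by a score of the assumed form $S = u^q - v^q$ discussed earlier; the function, the score expression, and the numerical values are all illustrative and are not taken from the paper.

```python
def score(alpha, q=3):
    """Illustrative score for a q-RPFN alpha = (u, eta, v)."""
    u, eta, v = alpha
    return u**q - v**q

# Hypothetical overall preference values from Step 3.
overall = {"A1": (0.7, 0.2, 0.3), "A2": (0.5, 0.3, 0.4), "A3": (0.6, 0.1, 0.5)}

# Step 5: rank in descending order of score; the first entry is optimal.
ranking = sorted(overall, key=lambda a: score(overall[a]), reverse=True)
print(ranking)  # ['A1', 'A3', 'A2'] for the values above
```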
Application Examples
In this section, we introduce the decision-making process of the new MAGDM method through a numerical example of project assessment, and verify the effectiveness and superiority of the proposed operators through comparative analysis.
Suppose that there are five projects $A_1, A_2, A_3, A_4, A_5$, and three experts $E_1, E_2, E_3$ are required to evaluate the benefits achieved by each project with respect to the following four attributes: economic benefits ($C_1$), social benefits ($C_2$), sustainable benefits ($C_3$), and ecological benefits ($C_4$). The weight vector of the attributes is $\omega = (0.4, 0.2, 0.3, 0.1)^T$ and the weight vector of the three experts is $\lambda = (0.35, 0.20, 0.45)^T$. Each expert is asked to evaluate the five projects on the four aspects using q-RPFNs.
Based on this, we can obtain the decision matrices shown in Tables 1-3.
Decision-Making Process
In this section, we solve the above MAGDM problem by the proposed method.
Step 1. Since the attributes are of the same type, there is no need for normalization.
Step 2. Utilize the q-RPFDWA operator (Equation (36)) to aggregate all decision-makers' evaluation values for each attribute value of each alternative, supposing q = 3 and λ = 2. The resulting collective decision matrix $A = (\alpha_{ij})_{5 \times 4}$ is shown in Table 4.
Table 4. The collective decision matrix given by the q-RPFDWA operator.

Step 3. Compute the overall evaluation values of the alternatives by utilizing the q-RPFDWHM operator (Equation (38)).
Step 5. We can then obtain the ranking of the five alternatives; the best option is $A_1$. In Step 2, if we instead utilize the q-RPFDWG operator (Equation (37)) to aggregate all decision-makers' evaluation values for each criterion value of each alternative (again supposing q = 3 and λ = 2), we obtain the collective decision matrix in Table 5.
Therefore, the ranking of the five alternatives is $A_5 > A_1 > A_3 > A_4 > A_2$; the best option is $A_5$.
The Influence of the Parameters on the Results
Different parameter values will affect the aggregation process and the final results. In this section, we discuss the influence of different values of the parameters k, q, and λ on the evaluation of alternatives and the final ranking results, respectively.
In order to analyze the influence of the parameter k on the experimental results, we set different values of k to solve the above example, with q = 3 and λ = 2. The experimental results for different values of k are listed in Tables 6 and 7.
Table 6. Ranking results obtained with different values of the parameter k in the q-RPFDWHM operator (columns: k; $S(\alpha_i)$, i = 1, 2, 3, 4, 5; ranking result).

As can be seen from Figure 1, when we use the q-RPFDWHM operator, different values of the parameter q lead to different scores. However, the optimal result is always $A_1$. Furthermore, the scores of all alternatives decrease with increasing q and draw ever closer to 1. The value of the parameter q can reflect the attitudes of decision-makers: the more optimistic the decision-makers are, the smaller the q value, and the more pessimistic they are, the larger the q value. In real decision scenarios, decision-makers can choose an appropriate q value according to their preferences.

Figure 2 shows that the final score can differ when different q values are assigned in the q-RPFDWDHM operator. However, regardless of the value of q, the final ranking result is the same, that is, $A_5 > A_1 > A_3 > A_4 > A_2$. As with the q-RPFDWHM operator, when the q value is larger, the score value is closer to 1.

Then, we discuss the impact of the change of the λ value on the final score and ranking by setting different λ values in the application of the proposed operators. Let us still use the above example, assuming k = 2 and q = 3; the final results are shown in Figures 3 and 4.

We can draw a conclusion from Figures 3 and 4 that the aggregation results differ as the parameter λ increases in the proposed operators. However, for the q-RPFDWHM operator the optimal choice is always $A_1$, and for the q-RPFDWDHM operator the optimal choice is always $A_5$. Besides, with the increase of the λ value, the score value of the q-RPFDWHM operator decreases, while the overall evaluation score value of the q-RPFDWDHM operator shows an increasing trend. This shows that the value of λ can reflect the attitude of decision-makers. When using the q-RPFDWHM operator, the more optimistic the decision-maker is, the smaller the λ value, and the more pessimistic the decision-maker is, the larger the λ value. On the contrary, when using the q-RPFDWDHM operator, the more optimistic the decision-maker is, the greater the value of λ, and the more pessimistic the decision-maker is, the smaller the value of λ. In practical decision-making, the decision-maker can choose an appropriate λ value according to his preference.

Comparative Analysis

Recently, the application of fuzzy theory to multi-attribute group decision making has become a hot research area. Obviously, q-RPFNs developed from PFNs and q-ROFNs, which are the basis of our proposed method. Thus, in order to further demonstrate the advantages and superiorities of the proposed operators, we compare the proposed method with some picture fuzzy operators and some q-rung orthopair fuzzy operators, respectively.

Compared with Some Picture Fuzzy Operators

In this section, to better illustrate the validity of the proposed method, we compare our method with that proposed by Wei [13] based on the picture fuzzy weighted average (PFWA) operator, that introduced by Wei [15] based on the picture fuzzy Hamacher weighted average (PFHWA) operator, that presented by Jana et al. [16] based on the picture fuzzy Dombi weighted average (PFDWA) operator, that put forward by Zhang et al. [17] based on the picture fuzzy Dombi weighted Heronian mean (PFDWHM) operator, and that proposed by Ashraf et al. [41] based on the spherical fuzzy weighted average (SFWA) operator. In order to compare these operators, we use each method to solve the above example and present the score values and ranking orders of the various methods in Table 8.
Among these methods, Wei's [13] method based on the PFWA operator and Ashraf et al.'s [41] method based on the SFWA operator both use the simple weighted averaging operator, which leads to a lack of flexibility in aggregating information. Although Ashraf et al.'s [41] SFWA operator based on SFNs is better than Wei's [13] PFWA operator based on PFNs, it is far less general than our operators based on q-RPFNs. PFNs and SFNs are special cases of q-RPFNs (q = 1, 2). Furthermore, the simple algebraic operation is a special case of DTT. So, the method we propose is more general and flexible.
Wei's [15] method based on the PFHWA operator and Jana et al.'s [16] method based on the PFDWA operator use the Hamacher t-norm and t-conorm and the DTT, respectively. This makes them more flexible than the PFWA operator proposed by Wei [13] and the SFWA operator proposed by Ashraf et al. [41], but all of them ignore the correlation between attributes. Our method applies DTT and the Hamy mean to q-RPFNs, which takes into account the interrelationship among attributes, has strong flexibility, and is superior to these methods.
The method based on the PFDWHM operator proposed by Zhang et al. [17] rests on DTT and the Heronian mean. It has high flexibility and takes into account the relationship between attributes. However, it can only capture the relationship between any two parameters. The proposed q-RPFDWHM and q-RPFDWDHM operators, through the parameter k, can capture the relationship among more than two parameters (at most n − 1 arguments). In addition, our method based on q-RPFNs can contain more information and is more suitable for MAGDM problems.
To sum up, our method based on the q-RPFDWHM operator and the q-RPFDWDHM operator can not only capture the relationship among multiple attributes, imitating a more realistic decision-making environment, but can also make the information aggregation process more flexible and effective by using DTT. Compared with other methods, our methods are more flexible and suitable for addressing MAGDM problems.
Table 12. Score values and ranking results using our operator and some q-rung orthopair fuzzy operators.
Liu and Wang's [25] method is based on the q-ROFWA operator, which assumes that the attributes are relatively independent and does not take into account the correlation between the attributes. The proposed method based on the q-RPFDWHM operator and the q-RPFDWDHM operator can well reflect the correlation among attributes and uses the DTT operational rules to reflect the attitude of decision-makers.
Liu and Liu's [42] and Wei et al.'s [43] methods are based on the Bonferroni mean and Heronian mean operators, respectively. The advantage of these two operators over the method proposed by Liu and Wang [25] is that they take account of the correlation between attributes, but they can only capture the correlation between any two attributes, whereas our method can capture the correlation among multiple attributes (at most n − 1 arguments) by setting the parameter k. That is to say, our method is more practical and more suitable for MAGDM problems.
Wei et al.'s [44] method is based on the Maclaurin symmetric mean operator, which is a special case of the HM and can also capture the correlation between any two attributes. It is worth mentioning that our method is based on the q-RPFDWHM operator and the q-RPFDWDHM operator, which also have a parameter λ that can reflect the attitudes of decision-makers. Different values of the parameter λ represent different decision-making attitudes. Decision-makers can adjust the values of the parameters according to their own interests and actual needs, so as to obtain more appropriate solutions.
Through the above analysis, the advantages of our method based on the q-RPFDWHM operator and the q-RPFDWDHM operator are obvious, and can be summarized as follows. First, our method is based on q-RPFNs, which include positive, neutral, and negative membership degrees, and gives decision-makers a more flexible environment to avoid information loss in the decision-making process. Secondly, the attributes in real instances are often related; our method based on the q-RPFDWHM operator and the q-RPFDWDHM operator can capture the correlation between the attributes and simulate the real MAGDM process more effectively. Thirdly, the proposed method based on the q-RPFDWHM operator and the q-RPFDWDHM operator has three different parameters. Decision-makers can set different parameters according to their risk aversion, their own interests, and the actual situation, so as to reasonably obtain the most appropriate decision-making objectives, which creates a flexible decision-making environment for decision-makers. Furthermore, the proposed operators provide a new method to aggregate q-RPFNs based on the DTT, which is more general and powerful. Our method is more effective, flexible, and powerful, and more suitable for solving MAGDM problems.
Conclusions
At present, q-RPFNs have become popular for dealing with multi-attribute group decision-making problems, because they can not only contain more information, but can also take into account the neutrality of decision-makers. In this paper, we propose novel operational rules of q-rung picture fuzzy numbers (q-RPFNs) on the basis of the Dombi t-norm and t-conorm. Then, we apply the traditional Hamy mean operator to q-RPFNs based on the DTT and propose the q-RPFDHM, q-RPFDWHM, q-RPFDDHM, and q-RPFDWDHM operators. On this basis, a new solution to the MAGDM problem is proposed and applied to optimal project evaluation. In order to better verify the effectiveness and superiority of this method, we carried out parameter analysis and compared the method with some picture fuzzy operators and some q-rung orthopair fuzzy operators, respectively. Through this analysis, the main advantages of the method are as follows: (1) The use of q-RPFNs can capture more comprehensive information and effectively avoid information loss in the decision-making process. (2) It can capture the correlation among the attributes, which is more suitable for the real decision-making environment. (3) Different parameters can be set to meet various needs, with greater flexibility and versatility. (4) The proposed operators provide a new method to aggregate q-RPFNs based on the DTT, which is more general and powerful.
In future work, we will apply our method to more practical and more extensive MAGDM problems. Considering the validity and extensiveness of the Hamy mean operator and the DTT operational paradigm, we will study them in more ambiguous environments, such as hesitant decision-making.
Figure 1. Score values of the alternatives when q ∈ (1, 10), based on the q-RPFDWHM operator.
Figure 2. Score values of the alternatives when q ∈ (1, 10), based on the q-RPFDWDHM operator.
Figure 3. Score values of the alternatives for different values of λ, based on the q-RPFDWHM operator.
Figure 4. Score values of the alternatives for different values of λ, based on the q-RPFDWDHM operator.
Table 1. Decision matrix $A^1$ from expert $E_1$.
Table 2. Decision matrix $A^2$ from expert $E_2$.
Table 3. Decision matrix $A^3$ from expert $E_3$.
Table 8. Score values and ranking results using our operator and other picture fuzzy operators. | 7,589.4 | 2019-05-24T00:00:00.000 | [
"Computer Science"
] |
Omega-3 fatty acids and acute neurological trauma: a perspective on clinical translation *
: Acute neurological trauma remains one of the clinical areas with the most significant unmet needs worldwide. In the central nervous system, acute trauma has two stages: the primary injury and the secondary injury. The former is irreversible, and is a direct consequence of the impact. In the aftermath of the injury, a complex series of processes exacerbate the injury and amplify tissue damage. Some of these processes are local; others involve a systemic response. It is these processes which ultimately determine the clinical outcome. The aims of treatment are (a) to confer neuroprotection and (b) to promote neuroregeneration. The results reported so far with omega-3 fatty acids in animal models of neurotrauma suggest that these compounds have the potential to offer a novel therapeutic approach and to target both protection and regeneration. They lead to increased neuronal and glial survival, they can limit the damaging neuroinflammation, and they can also protect neurites. Long-chain omega-3 fatty acids such as eicosapentaenoic acid and docosahexaenoic acid have complex pharmacodynamics, which leads potentially to the activation of a multitude of targets, including voltage- and ligand-gated ion channels, transcription factors, and G-protein coupled receptors. They can produce tissue-specific metabolites which have intrinsic activity, either on the same or on different cellular targets. The apparent large therapeutic window of omega-3 fatty acids is an advantage in the context of trauma, with patients in an unstable state, with multiple injuries. The specific use of omega-3 fatty acids in spinal cord injury and peripheral nerve injury will be discussed, focusing on issues which need to be addressed in order to translate successfully to the clinic the efficacy reported in the initial proof-of-concept animal studies.
Omega-3 fatty acids and central nervous system injury - steps towards translation
Spinal cord injury (SCI) is a catastrophic event which can result in permanent and major disability. The estimated cost of treating an individual for life can reach over $3 million. In Europe, care costs are estimated at around €4 billion per year. Injury to the spinal cord arises as a consequence of many types of trauma; the initial trauma often leads to forces such as dislocation, distraction, and compression being exerted on the nervous tissue, which ultimately lead to irreversible injury and cell death at the point of impact.
In the aftermath of SCI, a complex chain of reactions is triggered around the injury epicentre, which will ultimately determine the degree of functional impairment (Profyris et al., 2004; De Biase et al., 2005). Haemorrhage develops
early, leading to tissue oedema, accompanied by a disruption of the blood flow in the cord. Compression of the spinal cord leads to anoxia that is proportional to the severity of the initial injury. Oedema develops at the injury epicentre and spreads rostrocaudally. Furthermore, injury also triggers a complex inflammatory reaction, which starts with local activation of the microglia, followed by infiltration of neutrophils, systemic macrophages, and T-cells. This inflammatory reaction is complex and may significantly enhance the primary damage, but there is also evidence that some elements of inflammation exert a protective role (Crutcher et al., 2006; Donnelly and Popovich, 2008). The two main therapeutic strategies in SCI are based on neuroprotection (early intervention to protect vulnerable tissue in the early phase of the initial trauma) and neuroregeneration (delayed intervention to promote repair) (Kwon et al., 2004).
Traumatic brain injury (TBI) is the leading cause of disability and mortality in those under 50 years old. It is generally the result of falls, motor accidents, sports, and war injuries. The incidence of TBI is increasing worldwide, and it is estimated that around 500 in every 100,000 individuals suffer from TBI annually in the US and in Europe. In a manner similar to SCI, tensile stretch forces are also an important factor in the pathology of TBI.
Treatment of TBI, as in SCI, is focused on neuroprotection and neurorepair. In spite of the importance of this condition, there are at present no neuroprotective treatments that can be used to protect the patient in the aftermath of injury. Such treatments would be associated with huge personal benefits for patients and carers, and would also have a very significant impact in terms of public health costs. As TBI and SCI share many elements of pathophysiology, certain neuroprotective treatments could be beneficial in both.
The exploration of treatments with neuroprotective properties has led to many promising results in animal models of injury, which attempt to reproduce the conditions of injury in humans. However, numerous clinical trials in SCI and TBI focused on neuroprotection have failed to lead to an effective treatment. Treatments tested so far include corticosteroids, opiate antagonists, calcium channel blockers (e.g., nimodipine), antioxidants and free radical scavengers (e.g., tirilazad), and glutamate NMDA receptor antagonists (Hawryluk et al., 2008). This is reminiscent of the stroke field, in which over one hundred clinical trials for acute stroke have failed. The reasons for the failures of acute SCI trials may reside partly in the intrinsic limitations of some of the trials and their design (Hawryluk et al., 2008), but also in decisions leading perhaps hastily from a promising preclinical observation to a clinical study.
The process of drug discovery in neurotrauma involves the use of in vitro and in vivo models. The former allow a detailed analysis of the response of neurones and glial cells to injury, and of the mechanisms underlying the beneficial effects of compounds. There is also a wide variety of in vivo animal models of CNS injury, which attempt to mimic human trauma. Unfortunately, drugs are sometimes tested in such models with very unrealistic time windows (many treatments lose their effects if delayed by a few hours) and in only one injury model or only one species. It is also sometimes difficult to achieve in humans the drug concentrations that are achieved in animals in order to obtain efficacy. Furthermore, the relative importance of rescuing a small amount of tissue in the area surrounding the lesion in a small rodent (rat, mouse) versus the need to protect a much more substantial volume of tissue in humans is often overlooked.
As the efficacy and safety of new treatments for neurotrauma must be investigated in animal models before initiation of clinical trials, a large variety of animal species, including dogs, cats, sheep, monkeys, rabbits, rats, and mice, have been used for the modelling of SCI and TBI. What has become clear over decades of failure in translation is that it is essential that a new treatment is validated in several injury models in the same species, and/or in injury models in different species. Rats and mice are the most widely used animals, and they differ in their response to injury of the CNS. Genetically modified mice are widely used to study pathological events, and although their response to injury may not be closer to that of humans, they offer the additional advantage of facilitating the analysis in vivo of some of the mechanistic aspects of new treatments, using specific genetic manipulation. It is well established that mice exhibit a very different response to SCI, not only in terms of functional recovery, but also in their tissue changes and in particular their inflammatory reaction, compared with that seen in other mammals. For example, in a study by Sroga and colleagues, microglia/macrophages showed peak activation at 7 days post-injury, similar to what is seen in rats, and subtle decreases in labelling over the next 2-5 weeks. In contrast, the onset and magnitude of lymphocytic infiltration were markedly different between rats and mice. Maximal T-cell accumulation occurred earlier but to a lesser extent in rats compared with mice. One distinct finding in mice was the presence of cell clusters that resembled lymphocytes but did not express lymphocyte-specific markers; these cells extended from blood vessels within the fibrotic tissue matrix, and their phenotype was characteristic of fibrocytes, which are involved in wound healing. These species-specific neuroinflammatory aspects may result in the formation of a distinct tissue environment at the site of SCI, and may also account for differences in neurological outcome (Sroga et al., 2003). Mice do not exhibit the progressive necrosis and the larger, dramatic central cavitation of the cord that occur in rats and other mammals, in which a rim of preserved white matter surrounds a fluid-filled cystic cavity after contusion/compression trauma. In contrast, the injured mouse spinal cord shows dense fibrous connective tissue after injury and, if present, only very small cavities (microcysts) at the lesion site. The connective tissue matrix at the lesion site decreases in size along its rostrocaudal axis over time, and the small cavities disappear at the chronic time points rather than enlarging, which differs from the trend seen in rats, in which cavitation increases over time.
In SCI research, two types of injury models are used: (i) models that aim to mimic as closely as possible the type of SCI that is observed clinically (i.e. injuries associated with contusion or compression forces), and (ii) models in which specific sections of the cord or tracts are lesioned, which are appropriate for the study of regeneration (i.e. hemisection and transection models).
Omega-3 polyunsaturated fatty acids (PUFA) were shown almost a decade ago to have significant neuroprotective potential (Blondeau et al., 2002), and in particular they appeared to protect the cord acutely after an episode of spinal cord ischaemia (Lang-Lazdunski et al., 2003). Following these observations with alpha-linolenic acid, the biosynthetic precursor of long-chain omega-3 PUFA such as docosahexaenoic acid (DHA), studies in our laboratory showed that the intravenous administration of a bolus of DHA 30 min after hemisection SCI dramatically improved functional and histological outcome in rats (King et al., 2006). These results were subsequently confirmed in a model of compression SCI (Huang et al., 2007a). In this model, we also showed that the neuroprotective effect of the acute intravenous DHA bolus is further enhanced by combination with sustained dietary DHA supplementation in the weeks following injury (Huang et al., 2007b; Ward et al., 2010). This confirmation of efficacy strengthens the probability that this treatment will indeed show protection in human SCI. However, in order to increase further the probability of successful translation to the clinic, it is important to show efficacy in more than one species and/or model of SCI and, ideally, later on to replicate the success of the treatment in more than one independent laboratory. Therefore, we recently performed a study on the effect of the acute DHA treatment in a mouse compression SCI model, using a similar paradigm to that used in the rat, which assesses the effect of a single early acute intravenous administration, associated or not with chronic DHA dietary supplementation in the period following injury (Lim et al., 2010).
Our observations confirmed the neuroprotective effect of a single bolus of DHA administered intravenously at a dose of 500 nmol/kg, 30 min after compression SCI. The treatment led to a significant increase in tissue protection, as reflected by a multitude of cellular markers. For example, DHA led to enhanced neuronal survival (figure 1), as well as an increase in oligodendrocyte survival. A marked reduction in the microglial response after injury was also seen (figure 2).
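For scale, this bolus corresponds to a very small mass dose; using the standard molar mass of DHA of about 328.5 g/mol (our figure, not stated in the original text), the conversion works out as:

```latex
500\ \mathrm{nmol/kg} \times 328.5\ \mathrm{g/mol}
  = 1.64\times 10^{-4}\ \mathrm{g/kg} \approx 0.16\ \mathrm{mg/kg}.
```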
In contrast to our previous study in the rat (Huang et al., 2007b), the significant neuroprotective effect of the acute DHA injection was not markedly enhanced for all the markers studied by combination with chronic dietary DHA supplementation (400 mg/kg/day) over a period of 28 days after injury. It is not possible to conclude from this first set of data in the mouse whether the effect of the acute single DHA injection would be enhanced in mice if a longer period of DHA dietary supplementation and/or a different dose were used. Furthermore, raised dietary DHA levels alone for 4 weeks following compression of the spinal cord did not protect significantly against either the neurological deficit or the histological damage.
This confirmation adds further support to the hope of successful clinical translation of DHA in SCI as an early intervention that could be delivered by emergency teams. The critical time window for this acute bolus intervention appears to be the 2 h period following injury, which is achievable in both civilian and military contexts. Finally, more extensive studies in the semi-acute and chronic period after injury are required in order to better understand the potential of long-chain omega-3 PUFA in this period. Interestingly, there is some evidence that the use of fish-oil-containing lipid emulsions (which contain DHA but also eicosapentaenoic acid (EPA)) may be of benefit in critically ill patients with multiple trauma (Heller et al., 2006). However, it is likely that the mechanisms triggered by the acute bolus and by sustained exposure to DHA (diet or infusion) are quite different.
Mechanism of action of omega-3 PUFA
After SCI, the primary injury area is compromised rapidly, but the injury also spreads. Metabolic and biochemical changes, over a period of hours to days, increase the area of cell death. Excitotoxicity and increased oxidation are key neurochemical processes involved in injury (Hall and Braughler, 1986). Some of the cellular changes that develop over time also attempt to create a boundary between healthy and damaged tissue, and to remove necrotic tissue. These changes include the appearance of reactive astrocytes, the activation of microglial cells and the infiltration of immune cells from the periphery. In tracts affected by the injury, Wallerian degeneration begins in the first days following injury and continues over a period of weeks. Gene profiling studies have identified hundreds of genes whose expression is either upregulated or downregulated at various time points after SCI (De Biase et al., 2005). The aim of neuroprotective treatments is to rescue CNS tissue that is under threat from the rapidly spreading damage. However, the extreme complexity of the reactions triggered by neurotrauma presents a tremendous challenge: to identify those events that are key to the evolving pathology and which could be critically targeted by DHA.
Omega-3 PUFAs are essential structural compounds in the CNS, but they also act as endogenous ligands at a variety of receptors and ion channels, and as substrates of enzymes. Their activity at potassium and sodium channels could be a major factor controlling hyperexcitability after injury (Vreugdenhil et al., 1996; Heurteaux et al., 2006). Continuous exposure to DHA can lead to significant changes in the biophysical properties of membranes. Increasing endogenous omega-3 PUFA levels through a raised dietary DHA level produces widespread effects on gene expression. One class of targets mediating the effects on gene expression is the nuclear receptor family of transcription factors. DHA acts as a ligand for the retinoid X receptor (RXR) (Mata de Urquiza et al., 2000). RXR can heterodimerize with retinoic acid receptors (RAR) and act as a modulator of gene expression at retinoid-responsive promoters. Omega-3 PUFAs can also activate PPARs (peroxisome proliferator-activated receptors). PPARs can bind DNA as a heterodimer with RXR, and have been shown to have therapeutic value as a target in SCI (McTigue et al., 2007).
It is important to note that following trauma, polyunsaturated fatty acids such as DHA are cleaved from membrane phospholipids to free (unesterified) DHA by phospholipase A2 enzymes, and the free DHA can then be converted to neuroactive metabolites such as neuroprotectin D1 (NPD1). Omega-3 PUFAs and their metabolites can up-regulate the expression of anti-apoptotic proteins such as the Bcl-2 family, whilst downregulating the apoptotic proteases caspase-3 and -9, and pro-apoptotic signalling proteins including Bax, Bad, Bid and Bik. It has also been suggested that omega-3 PUFAs enhance the expression of neurotrophins, including brain-derived neurotrophic factor (BDNF). An increase in dietary omega-3 PUFAs resulted in BDNF levels being restored after experimental TBI in rats (Wu et al., 2004).
Omega-3 fatty acids and their potential in the management of peripheral nerve injury
Peripheral nerve injury (PNI) occurs largely as a result of either direct mechanical trauma, disease (such as diabetes), or toxicity associated with certain drugs. The various types of mechanical trauma include crush and compression injury, transection injury, and stretch injury. The latter can follow displacement of fractures and dislocation of joints. Unlike axons in the CNS, axons in the adult peripheral nervous system (PNS) can regenerate when damaged. Partial or complete axonal regeneration is essential for the functional recovery of nerves after injury. Satisfactory recovery usually occurs only in minor nerve injuries, or when the distance over which regeneration must occur is small (Jaquet et al., 2001). After a PNI, rehabilitation can lead to some recovery. However, some patients may remain extensively incapacitated and are unable to return successfully to an active life (Rosberg et al., 2005). Hence there is a need for new therapies that protect injured peripheral neurons and enhance regeneration, thus improving functional outcome.
Vascular changes accompany the neural changes seen after PNI, and these can exacerbate hypoxia and ischaemia. There is also an inflammatory reaction following PNI, largely believed to be beneficial for recovery. Macrophages are involved in phagocytosis of degenerating nerve fibres, which is a critical step enabling regeneration, as myelin contains many growth-inhibitory molecules, such as Nogo, myelin-associated glycoprotein and oligodendrocyte myelin glycoprotein, which signal through the Nogo receptor and the p75 neurotrophin receptor. Macrophages release mitogens for Schwann cells and fibroblasts, and cytokines that stimulate the synthesis of growth factors.
Many key events in the pathology of PNI reproduce events occurring in CNS trauma. These include: (1) production of free radicals, which results in lipid peroxidation and oxidation of proteins and nucleic acids; (2) activation of pro-apoptotic proteases; (3) mitochondrial dysfunction; (4) activation of calpains, leading to damage of the cytoskeleton; (5) activation of phospholipase A2 enzymes, which release fatty acids such as arachidonic acid, triggering a local increase in deleterious prostaglandins and leukotrienes.
In humans, the distance over which a nerve must regenerate can be quite large; for example, after a brachial plexus injury it can take approximately 800 days for a nerve to regenerate from the shoulder to the hand. After such a time there will be irreversible damage to the denervated target organs, and full functional recovery will be unlikely. Therefore, therapies that can not only offer neuroprotection in the aftermath of PNI but also accelerate the rate of regeneration would be very beneficial in the clinic.
The encouraging effects seen with long-chain omega-3 PUFA such as DHA in SCI have led us to explore the effects of omega-3 fatty acids in PNI. The aim of our first study was to assess the effect of increasing tissue levels of omega-3 PUFA on the response to a PNI, sciatic nerve crush, in the mouse (Gladman et al., in press). Such an increase can be achieved through dietary supplementation, or alternatively through the use of the recently developed fat-1 mouse. These mice express the fat-1 gene from C. elegans, which encodes a fatty acid desaturase not normally present in mammals. This enzyme can convert omega-6 into omega-3 PUFAs, leading to enrichment in tissue omega-3 PUFA levels. This genetic manipulation allows us to produce two different tissue fatty acid profiles (i.e. a high vs. a low omega-6/omega-3 ratio) in wild type animals maintained on a diet enriched in omega-6 PUFA (WT-omega-6) vs. fat-1 mice maintained on the same diet.
In agreement with previously published findings using an in vitro analysis of the response of primary sensory neurones to fatty acids (Robson et al., 2010), the results of the study with fat-1 mice demonstrate the intrinsic neurotrophic properties of omega-3 PUFAs. Thus, in vitro, dorsal root ganglia primary sensory neurones from fat-1 mice showed much more complex neurite outgrowth compared to wild type animals. The response to PNI in mice expressing the fat-1 gene was then examined using the sciatic nerve crush model. Behavioural observations showed that higher endogenous omega-3 PUFAs had a positive effect on the rate of sensory functional recovery. The sciatic functional index reflects locomotion and combines the coordination of motor and sensory reflexes. At 7 days post-injury there was a small but significant difference, with fat-1 mice regaining comparatively more function, indicative of an increased rate of recovery. We used the von Frey test to assess the motor response to a sensory stimulus and, as expected, both groups initially lost sensation after the injury. By 4 days post-crush, the withdrawal threshold for fat-1 mice was significantly lower than for WT-omega-6 mice (figure 3). This trend continued up to day 7, when the experiments were completed, suggesting that omega-3 PUFAs increase the rate of regeneration. However, by 1 day post-injury there was already a small difference in the force required to induce a response between the two groups. This is a strong indication that the injury was less severe in the fat-1 mice, likely due to the neuroprotective properties of omega-3 PUFA leading to more spared axons, and this could be the explanation for the observed improvements in both the von Frey test and the sciatic functional index. Additionally, it could be hypothesised that omega-3 PUFAs led to an increase in collateral sprouting from spared axons in the sciatic nerve, and that this could further contribute to the increased rate of recovery seen in mice expressing the fat-1 gene. The fat-1 background also led to increased staining for neurofilament, a marker of axonal integrity (figure 4).
Conclusion
Omega-3 PUFA continue to represent a significant promise in CNS trauma, where progress towards clinical translation is being made, and there is also an emerging hope in the use of these compounds in treatment regimes for PNI. Many questions remain: what are the main mechanisms driving neuroprotection and possibly neuroregeneration with these compounds? Are the long-chain omega-3 PUFA acting only as pro-drugs, precursors to powerful metabolites which have distinct receptors and associated signalling pathways (Serhan et al., 2004), or do they have a unique therapeutic value without conversion to metabolites? How will omega-3 PUFA, which have powerful effects on systemic inflammation (Calder, 2003; Mori and Beilin, 2004), modulate complex tissue reactions such as neuroinflammation, which has a dual function after neurotrauma (Donnelly and Popovich, 2008; Kigerl et al., 2009)? These are complex questions awaiting the answers that will make safe clinical translation possible.
Figure 1. Neuronal protection in the dorsal horn and ventral horn of the spinal cord in mice injected with saline or DHA and then fed a DHA-enriched diet or control diet. Representative images taken from the injury epicentre show that the number of NeuN-labelled cells in the animals treated with saline injection (B and G) and DHA diet alone (D and I) was substantially less than in naïve control animals (A and F). Animals receiving a DHA injection alone (C and H) and DHA injection plus DHA diet (E and J) had more NeuN-labelled cells compared with saline-treated animals. Scale bar = 50 µm. (Lim et al., 2010).
Figure 2. Decreased microglial activation in mice treated with saline injection and DHA injection plus DHA-enriched diet after spinal cord injury. Representative images taken from the epicentre show that resting microglial cells with ramified thin processes were noted in the naïve tissue (A and D). The microglial shape in saline-treated injured controls was larger and irregular (B and E), reflecting activation. At 28 days after surgery, there appeared to be less Iba1 immunoreactivity in DHA-treated animals (C and F) compared with saline-treated animals (B and E). The effect is more marked in the dorsal horn compared to the ventral horn. Scale bar = 50 µm. (Lim et al., 2010).
Figure 3. Recovery after peripheral nerve injury in wild type (WT) and fat-1 mice. Animals were submitted to an injury of the sciatic nerve, and were tested functionally for 7 days. There was no difference in pre-injury withdrawal thresholds between WT-omega-6 and fat-1 groups, but at 4 days post-injury there was a statistical difference between the response of the two groups, and this continued to day 7 (*p<0.05). Error bars indicate S.E.M. for n = 5 animals/group. (Gladman et al., in press).
Figure 4. Neurofilament NF200 staining following sciatic nerve injury: immunoreactivity for NF200 was analysed in transverse sections of the mouse sciatic nerve, 6 mm from the crush site, 7 days after injury. NF200 staining shows that there was significantly less staining in the WT-omega-6 group (*p<0.05, means ± S.E.M., n = 11 animals/group). (Gladman et al., in press).
Donnelly DJ, Popovich PG. Inflammation and its role in neuroprotection, axonal regeneration and functional recovery after spinal cord injury. Exp Neurol 2008; 209: 378-88.
Gladman S, Huang W, Lim S, et al. Improved outcome after peripheral nerve injury in mice with increased levels of endogenous omega-3 polyunsaturated fatty acids. J Neurosci, in press.
Hall ED, Braughler JM. Role of lipid peroxidation in post-traumatic spinal cord degeneration: a review. Cent Nerv Syst Trauma 1986; 3: 281-94.
Hawryluk GW, Rowland J, Kwon BK, et al. Protection and repair of the injured spinal cord: a review of completed, ongoing, and planned clinical trials for acute spinal cord injury. Neurosurg Focus 2008; 25: E14. | 5,675.6 | 2011-11-01T00:00:00.000 | [
"Biology"
] |
π-phase modulated monolayer supercritical lens
The emerging monolayer transition metal dichalcogenides have provided an unprecedented material platform for miniaturized opto-electronic devices with integrated functionalities. Although excitonic light–matter interactions associated with their direct bandgaps have received tremendous research effort, wavefront engineering is less appreciated due to the suppressed phase accumulation effects resulting from the vanishingly small thicknesses. By introducing loss-assisted singular phase behaviour near the critical coupling point, we demonstrate that integration of monolayer MoS2 on a planar ZnO/Si substrate, approaching the physical thickness limit of the material, enables a π phase jump. Moreover, the highly dispersive extinction of MoS2 further empowers broadband phase regulation and enables binary phase-modulated supercritical lenses manifesting constant sub-diffraction-limited focal spots of 0.7 Airy units (AU) from the blue to yellow wavelength range. Our demonstrations downscaling optical elements to atomic thicknesses open new routes for ultra-compact opto-electronic systems harnessing two-dimensional semiconductor platforms with integrated functionalities.
I enjoyed reading the updated version of this manuscript. The authors have taken care of my comments in a satisfactory manner. After the removal of the interesting, but quite preliminary, photoluminescence results, I am now happy to recommend publication of this important work. I think the idea of using a single (few) atomic layer(s) to strongly manipulate the reflection phase will inspire a wave of new experiments. In my opinion it is still too early to compare the performance of the elements against other dielectric flat optics (e.g. diffractive optical elements or Mie-resonant metasurfaces). These elements had much more time to develop, and the performance and application areas of these unique, conceptually new devices are expected to improve over time.
Reviewer #2 (Remarks to the Author):
The authors demonstrate MoS2 binary phase-modulated supercritical lenses with sub-diffraction-limited focal spots of 0.7 Airy units (AU) in the blue to yellow wavelength range. The principle, method, 2D materials, and experimental results are not new to most people. The authors need to carefully address why their work should be published in Nature Communications. A few comments to help the authors clarify the importance of their work.
Reply:
We thank the reviewer for reviewing our manuscript again. To clearly illustrate the novelty and advances of our work, we have summarized them as follows.
Firstly, the high refractive indices of transition metal dichalcogenide (TMD) 2D materials have been appreciated for light-field manipulation applications only very recently [Refs. 21-22 in our revised manuscript]; however, samples of hundreds of nanometres in thickness have conventionally been perceived as essential to access Mie resonances and phase accumulation. When the thickness of 2D materials decreases to atomic monolayers, approaching the physical thickness limit, common light phase modulation methods such as Mie resonances and phase accumulation are no longer applicable. By utilizing the widely perceived adverse effect of losses in nanophotonics, we demonstrate that the integration of a monolayer MoS2 sheet with only 0.67 nm thickness on a uniform substrate can create the critical coupling point and hence a remarkable π phase shift, which is the first demonstration of its kind. The underlying physics makes it fundamentally different from previous reports.
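To make the loss-assisted mechanism concrete, the following minimal sketch computes the normal-incidence reflection of an air/MoS2/ZnO/Si stack with the standard Airy (transfer-matrix) recursion. All refractive indices and thicknesses here are illustrative assumptions, not values taken from the manuscript; the point is only that adding a sub-nanometre lossy layer strongly perturbs the reflection phase near the reflection minimum of the underlying stack, where the singular phase behaviour lives.

```python
import numpy as np

def stack_reflection(n, d, lam):
    """Complex reflection coefficient of a planar stack at normal incidence.
    n: refractive indices [ambient, layer_1, ..., layer_k, substrate]
    d: thicknesses of layer_1..layer_k (same length units as lam)."""
    # Fresnel coefficient at the deepest interface (last layer / substrate)
    r = (n[-2] - n[-1]) / (n[-2] + n[-1])
    # Wrap the remaining layers from bottom to top (Airy recursion)
    for j in range(len(d) - 1, -1, -1):
        prop = np.exp(2j * 2 * np.pi * n[j + 1] * d[j] / lam)
        r_top = (n[j] - n[j + 1]) / (n[j] + n[j + 1])
        r = (r_top + r * prop) / (1 + r_top * r * prop)
    return r

lam = 550.0                      # nm, probe wavelength (assumed)
n_mos2 = 5.0 + 1.3j              # monolayer MoS2 index (illustrative guess)
n_zno, n_si = 2.0, 3.9 + 0.02j   # ZnO buffer and Si substrate (illustrative)

for t_zno in np.linspace(40.0, 120.0, 9):   # sweep the ZnO buffer thickness
    r_bare = stack_reflection([1.0, n_zno, n_si], [t_zno], lam)
    r_mos2 = stack_reflection([1.0, n_mos2, n_zno, n_si], [0.67, t_zno], lam)
    dphi = np.angle(r_mos2) - np.angle(r_bare)
    print(f"t_ZnO = {t_zno:5.1f} nm  |r| = {abs(r_mos2):.3f}  "
          f"monolayer phase shift = {np.degrees(dphi):7.1f} deg")
```

Near the buffer thickness where |r| of the coated stack dips toward zero (the critical coupling condition), the computed reflection phase winds rapidly, which is the singular behaviour the reply invokes.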
Secondly, in addition to the demonstration of the nontrivial phase jump crossing the critical coupling point at a single wavelength, we discovered that the highly dispersive extinction of MoS2 sheets empowers a prominent broadband phase shift (greater than π/2) upon increasing the thickness of the MoS2 sheets from monolayer to bilayer. To showcase the capability in broadband light-field manipulation, a variety of meta-optics have also been successfully demonstrated, including broadband beam deflection by a meta-grating and broadband meta-holograms.
Thirdly, utilizing a facile direct laser writing method, we demonstrate binary phase-modulated meta-optics based on atomically thin surface corrugations, representing the thinnest planar optics. Specifically, we demonstrate an atomically thin supercritical lens that breaks the diffraction limit for super-resolved focusing, which stands on its own strength and makes our work fundamentally different from the previous demonstration of a diffraction-limited FZP lens on 2D materials.
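For readers unfamiliar with the unit: one Airy unit is the diameter of the diffraction-limited Airy disk, 1 AU = 1.22 λ/NA, so a 0.7 AU focal spot sits below the diffraction limit. A worked example follows; the numerical aperture here is an assumed value for illustration, not one quoted from the manuscript:

```latex
\lambda = 550\ \mathrm{nm},\quad \mathrm{NA} = 0.9:\qquad
1\,\mathrm{AU} = \frac{1.22 \times 550\ \mathrm{nm}}{0.9} \approx 746\ \mathrm{nm},
\qquad 0.7\,\mathrm{AU} \approx 522\ \mathrm{nm}.
```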
Overall, our work reports a remarkable π phase shift in monolayer 2D materials approaching the physical thickness limit of the material, and demonstrates its potential applications in light-field manipulation meta-optics. We believe our manuscript demonstrates a timely and important advance at the intersection of 2D materials and nanophotonics, and should meet the criteria of Nature Communications.
1. To focus light by using a 2D material is not a new topic. Many papers have been published, such as Nano Letters 18(11), 6961-6966 (2018), and https://arxiv.org/ftp/arxiv/papers/1411/1411.6200.pdf, etc. Previous works demonstrated not only focusing but also imaging. Please explain why your work is so important. Your manuscript shows a low-quality, weak focusing profile and point spread function, while others showed not only good focusing but also imaging capability. Please address the uniqueness and innovation in comparison with the many previously published articles.
Reply:
We thank the reviewer for suggesting relevant literature. Though the demonstration of light focusing by thick 2D materials is not new, achieving remarkable phase modulation with monolayer 2D materials remains elusive. The most notable difference that distinguishes our work from previously published results is that a remarkable π phase modulation is achieved with monolayer TMDs of sub-nanometre thickness, while others achieve π or 2π phase modulation usually with many layers of 2D materials, with thicknesses of tens to hundreds of nanometres. The two references mentioned by the reviewer, which are cited as Refs. 22 and 23 in our revised manuscript, demonstrate light focusing with thicknesses of 190 nm and 6.28 nm, respectively. Thus, they are less comparable with our demonstration of phase modulation by monolayer MoS2 with a thickness of 0.67 nm. Downscaling the TMD sheet from multilayer to monolayer brings a crossover from an indirect to a direct bandgap, making it a good candidate for excitonic applications such as photocurrent generation and photoluminescence. Our demonstration unlocks the full potential of a new class of 2D optics with long-sought integration and miniaturization capabilities, and opens a new route to develop photonic integrated circuits incorporating optical wavefront modulators and detectors on the same chip.
By the way, the reference list of this paper cites many papers by the authors that are not closely related to the contents of this paper. An updated and proper reference list is needed.
Reply:
We have followed the reviewer's suggestion and re-organized the reference list. Only papers closely related to this work are cited.
2. Similarly, phase modulation along the z-axis is not new. There are papers, such as Communications Physics 2, 156 (2019), etc., that demonstrated, with metalenses or zone plates, phase modulation covering the whole 2π range. However, in this manuscript there is only 1π phase modulation, which would cause some problems for the optical functionalities.
Reply:
We thank the reviewer for the professional comments. Indeed, there are many published works on metalenses and zone plates based on phase modulation. However, phase modulation strength of 1π or 2π is usually realized by dielectric resonators with a thickness on the scale of the wavelength. As shown in the paper mentioned by the reviewer (Ref. 8 in the revised manuscript), a dielectric metasurface-based zone plate was experimentally demonstrated with higher performance than conventional amplitude-modulated zone plates. However, the 2π phase modulation at a wavelength of 635 nm in that work was achieved through the electric and magnetic dipole resonances of α-Si nanorods with a thickness of 330 nm. By contrast, in our work we demonstrate a remarkable 1π phase shift with monolayer MoS2 approaching the physical thickness limit of the material, by utilizing the widely perceived adverse effect of losses in nanophotonics. Since only a single layer of MoS2 is employed, a binary phase contrast of 1π between the scribed and un-scribed regions can be achieved on the uniform substrate. Such binary phase modulation is sufficient and widely adopted for the construction of many flat optical devices with binary configurations. To illustrate the capabilities in complex light-field manipulation, we have demonstrated a planar metalens, a metagrating, holographic imaging, etc.
3. The feasibility of the applications is an important issue as well.
Reply:
In this work, we successfully demonstrated a remarkable π phase modulation with monolayer MoS2. The configuration of "semiconductor substrate / dielectric buffer layer / monolayer TMD film" used in this work is fully compatible with reported TMD devices with opto-electronic functions. An efficient phase modulation capability will release the full potential of monolayer TMDs, and could pave the way for 2D flat optical elements approaching the physical thickness limit of the material, with considerable miniaturization and superior integration capabilities.
Combining wavefront manipulation with the direct bandgap and tunable excitonic properties of monolayer TMDs, we envision a bright future in integrated optical circuits. It will significantly benefit many fundamental investigations, including exciton-polariton interactions, information valleytronics, and nonlinear optics, and may also find potential applications in the fields of self-modulating photoluminescence, function-integrated photodetection, exciton field-effect transistors, atomically thin augmented/virtual reality systems, and even next-generation computing.
Finally, although quite a few photonic devices have been demonstrated on monolayer MoS2, advancing progress toward integrated systems still requires significant effort in terms of cooperation among researchers in material synthesis, device fabrication, and system architecture design in the future. Reviewer #3 shares the same opinion in this respect: "it is still too early to compare the performance of the elements against other dielectric flat optics (e.g. diffractive optical elements or Mie-resonant metasurfaces) in terms of the performance.
These elements had much more time to develop and the performance and application areas of these unique, conceptually new devices is expected to improve over time."
4. The results (Fig. S19) of the meta-holograms on the atomically thin MoS2 sheet are not good enough; they have a low contrast. The authors should find out the cause of this problem and improve the results before re-submission. The current results in the supplementary material suggest that they were prepared in a hurry.
Reply:
We thank the reviewer for this valuable suggestion to further improve the quality of our manuscript. The low quality of the meta-holographic results shown in Fig. S19 comes from the relatively low pixel count of the hologram pattern, with only 400×400 pixels. If a larger hologram pattern is generated, the quality can be significantly improved. To verify this, a new meta-hologram with 1000×1000 pixels was generated with the same algorithm and patterned on the atomically thin MoS2 sheet with the fs laser scribing system. High-fidelity holographic images have been clearly demonstrated experimentally, as shown in Figure S19 in the revised supplementary materials and in panel B of Figure R1 below.
Figure R1. Comparison of reconstructed holographic images between patterns with 400×400 pixels in panel A (the results in the previous manuscript) and 1000×1000 pixels in panel B (in the revised manuscript).
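The reply does not name the hologram-design algorithm; a common choice for binary-phase holograms of this kind is a Gerchberg-Saxton-style iterative Fourier loop with the phase quantized to the two levels (0 and π) that the scribed/un-scribed regions provide. The sketch below is therefore an assumption-labelled illustration of how the pixel count enters the design, not the authors' actual code:

```python
import numpy as np

def binary_phase_hologram(target, iterations=50, seed=0):
    """Gerchberg-Saxton-style loop returning a two-level (0, pi) phase
    mask whose far-field intensity approximates `target`."""
    rng = np.random.default_rng(seed)
    amp = np.sqrt(target / target.sum())            # desired far-field amplitude
    far_phase = rng.uniform(0, 2 * np.pi, target.shape)
    for _ in range(iterations):
        far = amp * np.exp(1j * far_phase)          # impose target amplitude
        near = np.fft.ifft2(np.fft.ifftshift(far))  # back to the hologram plane
        # quantize to the two phase levels a scribed monolayer can provide
        mask = np.where(np.angle(near) > 0, np.pi, 0.0)
        far = np.fft.fftshift(np.fft.fft2(np.exp(1j * mask)))
        far_phase = np.angle(far)                   # keep phase, replace amplitude
    return mask

# A larger grid (e.g. 1000x1000 instead of 400x400) gives the loop more
# degrees of freedom, which is why the reconstruction contrast improves.
target = np.zeros((1000, 1000)); target[450:550, 450:550] = 1.0
mask = binary_phase_hologram(target, iterations=20)
```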
My recommendation is that the authors should make a major revision of this paper. An updated and proper reference list is needed.
Reply:
We deeply appreciate the reviewer's professional comments and valuable suggestions.
In response to those constructive suggestions, we have performed additional experiments on high-quality meta-holograms. Brief discussions about the potential applications of such a monolayer TMD system with dramatic phase modulation have been added. The references have been re-organized to make sure that citations are closely related to this work.
Reviewer #3 (Remarks to the Author):
I enjoyed reading the updated version of this manuscript. The authors have taken care of my comments in a satisfactory manner. After the removal of the interesting, but quite preliminary, photoluminescence results, I am now happy to recommend publication of this important work. I think the idea of using a single (few) atomic layer(s) to strongly manipulate the reflection phase will inspire a wave of new experiments. In my opinion it is still too early to compare the performance of the elements against other dielectric flat optics (e.g. diffractive optical elements or Mie-resonant metasurfaces). These elements had much more time to develop, and the performance and application areas of these unique, conceptually new devices are expected to improve over time.
Reply:
We thank the reviewer for reviewing our manuscript again and for recommending the publication of our work in Nature Communications. We share the reviewer's opinion that it is not yet time to compare the performance of such atomically thin elements with other dielectric flat optics; joining the competition with other diffractive optical elements is not the main focus of this work. The novelty of this work is to demonstrate a giant phase modulation capability in monolayer 2D materials by utilizing the widely perceived adverse effect of losses in nanophotonics, and to provide the exciting potential of a new class of 2D optics with long-sought integration and miniaturization capabilities. Advancing progress toward integrated systems will still require significant effort in material synthesis, device fabrication, and system architecture design in the future.
REVIEWERS' COMMENTS
Reviewer #2 (Remarks to the Author): The authors demonstrate MoS2 binary phase-modulated supercritical lenses with sub-diffraction-limited focal spots of 0.7 Airy units (AU) in the blue to yellow wavelength range. The principle, method, 2D materials, and experimental results are not new to most people. However, the authors keep insisting that their demonstrated integration of a monolayer MoS2 sheet with only 0.67 nm thickness on a uniform substrate can create the critical coupling point and hence a remarkable π phase shift, which is the first of its kind. Judging from their responses to my previous comments, and their efforts on a new meta-hologram with 1000×1000 pixels on the atomically thin MoS2 sheet made with the fs laser scribing system, as shown in Figure S19 in the revised supplementary materials, the revised manuscript can be accepted now. | 3,190.6 | 2021-01-04T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Twist-3 effect from the longitudinally polarized proton for $A_{LT}$ in hadron production from $pp$ collisions
We compute the contribution from the longitudinally polarized proton to the twist-3 double-spin asymmetry $A_{LT}$ in inclusive (light) hadron production from proton-proton collisions, i.e., $p^\uparrow \vec{p}\to h\,X$. We show that using the relevant QCD equation-of-motion relation and Lorentz invariance relation allows one to eliminate the twist-3 quark-gluon correlator (associated with the longitudinally polarized proton) in favor of one-variable twist-3 quark distributions and the (twist-2) transversity parton density. Including this result with the twist-3 pieces associated with the transversely polarized proton and unpolarized final-state hadron (which have already been calculated in the literature), we now have the complete leading-order cross section for this process.
Introduction
Twist-3 observables in high-energy semi-inclusive reactions provide us with an important opportunity to test theoretical frameworks for QCD hard processes and to understand the quark-gluon substructure of hadrons beyond the conventional parton model. Well-known examples are the experimental observation of hyperons with large transverse polarization produced in unpolarized proton-proton collisions, $pp \to \Lambda^\uparrow X$ [1-5], and the transverse single-spin (or left-right) asymmetry (SSA) $A_N$ of a produced hadron in the collision between a transversely polarized proton and an unpolarized proton, $p^\uparrow p \to h\,X$ ($h = \pi, K, \eta$, etc.) [6-16]. The magnitudes of the asymmetries were as large as a few tens of percent in the forward direction. In collinear factorization, these SSAs appear as twist-3 observables. They are driven by multi-parton (quark-gluon or purely gluonic) correlations [17,18] either in the initial-state hadrons or in the final-state fragmentation process. The formalism for deriving the twist-3 cross section for SSAs has been well developed, and the formulae involve the relevant multi-parton correlation functions instead of the usual (twist-2) parton densities or fragmentation functions [19-35]. The $A_N$ data for $\pi$, $K$, $\eta$, and jet production obtained at the Relativistic Heavy Ion Collider (RHIC) have been analyzed using this formalism [20,36-38].¹ Besides these large SSAs, the double-spin asymmetry (DSA) $A_{LT}$ for particle production (direct photon, Drell-Yan lepton pair, hadron, jet, etc.) in collisions between longitudinally and transversely polarized protons, $p^\uparrow \vec{p} \to C\,X$, is also a twist-3 observable [40-45].² Unlike SSAs, which are naively "T-odd" effects, DSAs like $A_{LT}$ are naively "T-even," which leads inherently to different forms for the corresponding twist-3 cross section (see the discussion below Eq. (2)). Therefore, $A_{LT}$ and $A_N$ probe different yet complementary aspects of hadronic structure, and both are critical to test the underlying mechanism for these asymmetries. Surprisingly, RHIC has never run an experiment for $A_{LT}$ despite being the only facility in the world with polarized proton beams and having measured every other combination of proton spins ($A_N$, $A_L$, $A_{TT}$, $A_{LL}$).
In this paper we compute the polarized cross section for $A_{LT}$ in the production of an unpolarized (light) hadron $h$ from proton-proton collisions,

$$A(P, S_\perp) + B(P', \Lambda) \to h(P_h) + X, \qquad (1)$$

where $S_\perp$ is the transverse spin vector for the nucleon $A$, $\Lambda$ is the helicity of the longitudinally polarized nucleon $B$, and the momenta of the particles are as shown. In the framework of collinear factorization, the first nonvanishing contribution to the cross section appears at twist-3, and it receives three contributions, where $f_{a/A(3)}$ represents the twist-3 distribution function for parton species $a$ ($a = q, \bar{q}, g$) in nucleon $A$, with the subscript (3) indicating the twist (and similarly for $f_{b/B(3)}$). Likewise, $D_{h/c(3)}$ represents the twist-3 fragmentation function for the parton species $c$ into the final-state hadron $h$.
¹ Data from RHIC are on tape for $A_N$ in prompt photon production, and several predictions exist for this asymmetry within collinear factorization [34,37,39].
² $A_{LT}$ in $ep$ collisions is also an interesting twist-3 asymmetry and has been studied in Refs. [46-48].
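The three-term factorized formula referred to as Eq. (2) did not survive extraction; a schematic reconstruction consistent with the three contributions just described (momentum-fraction arguments and flavor sums suppressed, with the subscript (2) marking twist-2 pieces for clarity; the original notation may differ) would read:

$$d\sigma \;\sim\; f_{a/A(3)} \otimes f_{b/B(2)} \otimes D_{h/c(2)} \otimes H
\;+\; f_{a/A(2)} \otimes f_{b/B(3)} \otimes D_{h/c(2)} \otimes H'
\;+\; f_{a/A(2)} \otimes f_{b/B(2)} \otimes D_{h/c(3)} \otimes H''. \qquad (2)$$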
The factors $H$, $H'$, and $H''$ are the partonic hard cross sections for each contribution, and $\otimes$ represents a convolution in the appropriate momentum fractions. So far, the leading-order (LO) cross section has been derived for the first term [43] and the third term [45] in Eq. (2). The first line of (2) involves twist-3 distributions in the transversely polarized nucleon coupled to the twist-2 helicity distribution. Unlike the SSA for $p^\uparrow p \to h\,X$, the partonic hard part for this term is given as a non-pole contribution [42,43]. In the third line of (2), the real part of the unpolarized chiral-odd twist-3 quark-gluon fragmentation function couples to the transversity parton density [45]. This is in contrast to SSAs, where the imaginary part of the same quark-gluon twist-3 fragmentation function contributes [31,32]. A recent analysis suggests that this imaginary part can be the main cause of the large $A_N$ observed for pion production in $pp$ collisions at RHIC [38]. This new insight is what motivated the calculation of the third line in Eq. (2) for the $A_{LT}$ case [45]. Again we emphasize that $A_{LT}$ in $p^\uparrow \vec{p} \to h\,X$ is a unique quantity that should be measured at RHIC.
To complete the LO cross section for the process (1), we will compute the second term in Eq. (2), where, as we will see in Sec. 3, chiral-odd twist-3 distributions for the longitudinally polarized nucleon enter along with the transversity parton density (the latter shows up when one employs QCD equation-of-motion and Lorentz invariance relations). Both of these couple to the transversity function for the transversely polarized nucleon. We note that two twist-3 terms analogous to the first two lines in Eq. (2) (with the fragmentation functions omitted) contribute to $A_{LT}$ in Drell-Yan when one integrates over the transverse momenta of the lepton pair, and both pieces are of a similar magnitude [41]. Therefore, it is possible that the second term of (2) for hadron production is just as important as the first and brings a non-negligible contribution. In addition, as alluded to above, the third term might also be significant (as in $A_N$). Thus, a detailed numerical study of all three parts of $A_{LT}$ will be needed and is the subject of future work.
The rest of this paper is organized as follows: in Sec. 2 we summarize the twist-3 distribution functions in the nucleon relevant for this computation and the relations among them. In Sec. 3, we derive the LO cross section for the second term of Eq. (2). We will see that, owing to a simple form of the partonic hard cross sections, the effect of the twist-3 quark-gluon correlation function in the longitudinally polarized nucleon can be expressed in terms of one-variable twist-3 quark distributions and the transversity parton density. Sec. 4 is devoted to a brief summary.
Twist-3 distribution functions for a longitudinally polarized proton
In this section we summarize the distribution functions in the nucleon relevant to our study. We first have a quark correlator in the nucleon that gives the two chiral-odd polarized functions needed in our calculation [40], where $\psi_i$ is a quark field with spinor index $i$, $M_N$ is the nucleon mass, $S$ is the nucleon spin vector normalized as $S^2 = -1$, and $\Lambda = M_N (S \cdot n)$ is its helicity. We also introduced two lightlike vectors $p^\mu$ and $n^\mu$, where $P = p + (M_N^2/2)\,n$ and $p \cdot n = 1$, with the only nonzero components being $p^+ = P^+$ and $n^-$ for a nucleon moving in the $+z$-direction. For simplicity, here and below we suppress the gauge-link operators and use the shorthand $\sigma^{np} \equiv \sigma^{\alpha\beta} n_\alpha p_\beta$. The F-type twist-3 distribution in the longitudinally polarized proton is defined as in [49], where $F^{\alpha n}$ is the gluon field strength tensor and $g_\perp^{\alpha\beta} \equiv g^{\alpha\beta} - p^\alpha n^\beta - p^\beta n^\alpha$. From Hermiticity and PT-invariance, $H_{FL}(x_1, x_2)$ is shown to be real and satisfies a symmetry property under the interchange of its arguments. The D-type distribution is defined analogously to (4) and is related to the F-type one through a principal-value relation, where $\mathcal{P}$ indicates the principal value. The function $\tilde{h}_L(x)$ is another real twist-3 distribution function, defined with the gauge links $[\infty n + z_\perp, \lambda n + z_\perp]$, etc., written explicitly in the first line so that the meaning of the derivative becomes clear. Using the QCD equation of motion, $h_L(x)$ can be expressed in terms of $H_{FL}(x_1, x)$ and $\tilde{h}_L(x)$, as in (8). In addition, the operator product expansion gives another relation, (9), among $h_L(x)$, $h_1(x)$, and $H_{FL}$. The combination of (8) and (9) leads to (10), which is known as a Lorentz invariance relation in the literature [48]. In Sec. 3, we will see that the relations (8) and (10) lead to a simple form for the cross section for the second term of (2).
Figure 1: Generic diagrams for the contribution to the process (1) from the second term in Eq. (2). The correlators for the longitudinally polarized nucleon (upper blob) couple to the transversity distribution (lower blob). Diagram (a) gives rise to the first and second terms in (11), and (b) and (c) are for the third term in (11). Mirror diagrams of (b) and (c) also contribute, and are included in Eq. (11).
Calculation of the polarized cross section for $A_{LT}$
We now derive the cross section for the second term of Eq. (2). As mentioned before, the twist-3 cross section for the naively T-even $A_{LT}$ arises from non-pole contributions. The method of calculation has been formulated both in Feynman gauge [32,44] and in lightcone gauge [31,42,43], and it has been confirmed that they give identical results for the twist-3 cross section in terms of the gauge-invariant distribution and fragmentation functions defined in the previous section [45,47,50].
Here we follow the Feynman gauge formulation (but have checked that the same result is obtained in lightcone gauge), which has the advantage that the gauge-invariant correlation functions appear manifestly. Since we are interested in the twist-3 effect from the longitudinally polarized nucleon, we factorize the transversity distribution $h_1(x)$ and the unpolarized fragmentation function for the hadron $D(z)$ from the rest of the cross section and perform a collinear expansion of the hard part. The generic diagrams for this contribution are shown in Fig. 1. According to the general formalism developed in [32], the twist-3 cross section is obtained as in Eq. (11), where $S = (P + P')^2$ is the center-of-mass energy squared; $M(x')$, $M_\partial^\beta(x')$, and $M_F^\beta(x'_1, x')$ are, respectively, defined in Eqs. (3), (7), and (4) with $p$ and $n$ replaced by $p'$ and $n'$ (with the momentum $P'$ similarly given by $P' = p' + (M_N^2/2)\,n'$ and $p' \cdot n' = 1$); and $\omega^\alpha{}_\beta = g^\alpha{}_\beta - p'^\alpha n'_\beta$. The partonic hard parts $S(k)$ and $S_{L\alpha}(x'_1 p', x' p')$ are shown as the middle blobs of Fig. 1(a) and Fig. 1(b),(c), respectively. (It is understood that $S$ and $S_{L\alpha}$ also depend on $xp$ and $P_h/z$.) Here $S_{L\alpha}(x'_1 p', x' p')$ represents the hard part for the diagram in which the coherent gluon line from $M_F^\beta(x'_1, x')$ is located to the left of the cut, and the effect of the mirror diagrams is taken into account by the principal value prescription and the factor of 2 in the third term of Eq. (11). The LO diagrams for the hard parts are shown in Figs. 2-4: they correspond to the $qq \to qq$ channel³ (Fig. 2), the $q\bar{q} \to q'\bar{q}'$, $q\bar{q} \to \bar{q}' q'$, $q\bar{q} \to q\bar{q}$, and $q\bar{q} \to \bar{q} q$ channels (Fig. 3), and the $q\bar{q} \to gg$ channel (Fig. 4). Inspecting these diagrams, it is not difficult to find that $S_{L\alpha}(x'_1 p', x' p')$ depends on $x'_1$ only through the factors $1/(x'_1 - x')$ and $1/x'_1$. Therefore the cross section can be decomposed as in Eq. (12), where $\sum_i \sum_{a,b,c}$ indicates a sum over channels $i$ and parton flavors in each channel (where $\{a, b\} \in \{q, \bar{q}\}$ and $c \in \{q, \bar{q}, g\}$). The partonic hard cross sections $\hat{\sigma}_L$, $\hat{\sigma}_{ND}$, $\hat{\sigma}_D$, $\hat{\sigma}_{F1}$, $\hat{\sigma}_{F2}$, $\hat{\sigma}_{SFP}$ are independent of $x'_1$ and are functions of the partonic Mandelstam variables $\hat{s}$, $\hat{t}$, and $\hat{u}$. By extracting the $1/x'_1$ component of $S_{L\alpha}(x'_1 p', x' p')$ we can see that $\hat{\sigma}_{SFP}$ has a structure identical to an SSA soft-fermion-pole (SFP) cross section (besides the projection tensor) with $x'_1 = 0$ [26,34,35]. By direct computation of all channels, we find that $\hat{\sigma}_{SFP} = 0$, $\hat{\sigma}_{ND} = \hat{\sigma}_{F1}$, and that the contribution from Fig. 1(c) is identically zero. This vanishing $\hat{\sigma}_{SFP}$ is reminiscent of the fact that the SFP hard parts of the chiral-odd contributions to $pp \to \Lambda^\uparrow X$ and $p^\uparrow p \to \gamma X$ (i.e., the pieces involving twist-3 distributions for the unpolarized proton) vanish [34,35]. Accordingly, using Eqs. (8) and (10) in Eq. (12), one can eliminate the two-variable correlators and obtain the twist-3 cross section in the form of Eq. (14).
Figure 2: Feynman diagrams in the $qq \to qq$ channel for the partonic hard parts $S(k)$ and $S_{L\alpha}(x'_1 p', x' p')$ in (11). Only the top two diagrams contribute to $S(k)$, while all the diagrams contribute to $S_{L\alpha}(x'_1 p', x' p')$. The circled cross indicates the fragmentation insertion. For $S_{L\alpha}(x'_1 p', x' p')$, it is understood for each diagram that the coherent gluon line coming out of the longitudinally polarized nucleon matrix element (upper side) attaches to one of the dots. Mirror diagrams also contribute, which is taken into account in (11).
(v) $q\bar{q} \to \bar{q} q$ channel and (vi) $q\bar{q} \to gg$ channel. For the charge-conjugated channels (where an antiquark comes from the longitudinally polarized proton) we find $\hat{\sigma}^{\bar{a}\bar{b} \to \bar{c}\bar{d}} = \hat{\sigma}^{ab \to cd}$, where the $\hat{\sigma}^{ab \to cd}$ are given in Eqs. (16)-(21). As shown in Sec. 2, the various twist-3 distributions are not all independent of each other. In particular, $h_L(x')$, $\tilde{h}_L(x')$, and $H_{DL}(x'_1, x')$ can be expressed in terms of $H_{FL}(x'_1, x')$ and the transversity distribution $h_1(x')$, and thus are "auxiliary" twist-3 distributions.⁵ However, the simple structure of the partonic cross section for $H_{FL}(x'_1, x')$ allows us to rewrite the cross section in terms of $h_1(x')$, $h_L(x')$, and $\tilde{h}_L(x')$, as shown in Eq. (14), for the LO twist-3 cross section. We recall that a similar simplification also occurred for the third term in Eq. (2) [45].
Summary
In this paper we have derived the twist-3 contribution from the longitudinally polarized nucleon to $A_{LT}$ in $p^\uparrow \vec{p} \to h\,X$. Along with the other two twist-3 pieces derived in the literature [43,45], we now have the complete LO cross section for this process at twist-3. As in the case of the twist-3 fragmentation contribution to $A_{LT}$ [45], we found that the twist-3 part for the longitudinally polarized proton can also be expressed in a simple form using one-variable quark distributions. This will be useful for phenomenological analyses. Given that $A_{LT}$ probes different yet equally important aspects of hadronic structure as $A_N$, and the fact that RHIC has never run an experiment for this asymmetry despite being the only accelerator in the world with polarized proton beams and having measured every other proton spin configuration, we plan to conduct such a numerical study in future work.
Figure 4: The same as Fig. 2, but for the $q\bar{q} \to gg$ channel. Only the top nine diagrams contribute to $S(k)$, while all the diagrams contribute to $S_{L\alpha}(x'_1 p', x' p')$. | 3,837.4 | 2016-03-25T00:00:00.000 | [
"Physics"
] |
Policy Analysis: An Analysis of National Education Policy 2020
This paper analyses the National Education Policy 2020 (pages 1-32), the official policy document issued by the Ministry of Human Resource Development (India). This paper is written in light of five pedagogical theories: a). Human Capital; b). Traditional Academic; c). Learner Centred; d). Social Efficiency; e). Social Reconstruction. By identifying the presence of the five theories in the policy and analysing the dominant ideologies of education across the whole text, the paper aims to explore the influence of pedagogical philosophies and the relevant factors on educational policy formation and upgrading in India. It discovers that although all five theories exist in the policy, Learner Centred theory is the most dominant. In addition, it also discusses political and socio-economic factors that impact the formation of the Indian educational policy.
Introduction
This paper analyses the National Education Policy 2020 released by the Republic of India's Ministry of Human Resource Development. The policy (hereafter referred to as the NEP) covers four parts of education: a). school education; b). higher education; c). critical areas of focus, such as adult education, promoting Indian languages and online education; d). the implementation of the policy. The analysis of the NEP in this paper focuses on school education. It is based on five theories in pedagogy, namely, in that order: the theories of Human Capital, Traditional Academic, Learner Centred, Social Efficiency, and Social Reconstruction. Next, this paper will discuss the theories dominating this policy. Finally, it will review various factors that contributed to the NEP's existence.
Five theories of education
2.1 Human Capital
A way to think about education is through the lens of human capital, a concept that has its roots in Economics. Human Capital theory in education is about investment and return. It claims that skills are a type of human capital that people develop via conscious investment in education, and that these skills will eventually contribute to economic activity. At the same time, people's income in the labour market is regarded as a reward for their productivity (Schultz, 1961, quoted in Little, 2003). It is a fundamental principle of endogenous growth theory that human capital investments, innovation, and knowledge have a sizeable positive impact on economic growth, implicating policymaking's significant role in boosting the economy. Seeing people as mere capital products, however, seems overly reductive and leaves out the meaning of being human (Gillies, 2015).
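As a concrete illustration of this investment-return logic (a standard formalization from labour economics, not drawn from the NEP itself), the Mincer earnings equation treats years of schooling $S_i$ and labour-market experience $X_i$ as the investments that explain log wages:

```latex
\ln w_i \;=\; \beta_0 + \beta_1 S_i + \beta_2 X_i + \beta_3 X_i^{2} + \varepsilon_i
```

Here $\beta_1$ is read as the average rate of return to an additional year of schooling, which is precisely the "reward for productivity" the theory describes.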
Traditional Academic
Traditional Academic education, also known as Liberal education, has a rich history that can be traced back thousands of years; the valuable knowledge accumulated throughout human history has been classified and organised into different academic disciplines. John Henry Newman (1996) states that liberal education is good for developing people's intellect and also shapes the people who have received such an education. Traditional education equips students with the confidence to handle intricacy, variety, and change through learning the curriculum. It aids students in developing a high level of social responsibility and powerful intellectual abilities, and fosters their problem-solving capacity in practical situations (Robbins, 2014, quoted in Scott, 2014).
Learner Centred
Learner-centred education is an instructional paradigm that places learners at the centre of the teaching process (Mahendra et al., 2005). It can be interpreted as opposing the traditional academic concept of education. Instead of focusing on the disciplines, Learner Centred education emphasizes the needs and interests of children and individuals, requiring schools to create an inspiring and enjoyable environment for pupils to grow at their own pace. Despite the importance of learner-centred education in today's classrooms, some academics have criticised it for being overly general. For instance, Schweisfurth (2015) argues that phrases such as learner-centeredness and related words are frequently employed indiscriminately and encompass a wide range of ideas and behaviours to the point where people may refer to anything as learner-centred to explain policy or practice.
Social Efficiency
It is possible to think of Social Efficiency as representing a particular aspect of the Enlightenment and its rationality and scientific method, which paved the way for modernity's industrialization and technical advancement. Mass education is a typical example of social efficiency applied in education, referring to a period of schooling mandated for all children and regulated by the government. During the first half of the 20th century, Social Efficiency was such a widespread concept in US educational theory that educators considered it to be the primary goal of education (Knoll, 2009). Proponents of this view believed education must provide the next generation with the knowledge necessary to contribute to society. Nevertheless, opponents criticize the concept of social efficiency for undermining democratic ideals and teacher autonomy in education (Kim, 2018).
Social Reconstruction
The Social Reconstruction theory of education, also called Critical Pedagogy, places a series of requirements on educators. Educators need to play a leading role in the fight for the fairness of society and the economy. The realities of public life and issues with democracy must be included in educators' teaching and writing (Giroux, 2006). One main point of Social Reconstruction ideology is that it calls for learning for sustainability and for developing children into global citizens. Another basic theme of this theory is enterprise in education, which means approaching the curriculum from the enterprise perspective. In contrast, some sceptics have said these ideas are internally inconsistent and cannot be combined into a coherent theory of schooling and education.
Human Capital
In the statements on school education, the authority claims that pupils' nutrition and health are essential conditions for learning: children are unable to learn when they are undernourished. Hence, children's nutrition and health (including mental health) will be addressed through the introduction of well-trained social workers and counsellors into the schooling system. Furthermore, research shows that the morning hours after a nutritious breakfast can be particularly productive for the study of cognitively more demanding subjects, and hence these hours may be leveraged by providing a simple but energizing breakfast in addition to midday meals (NEP:9). Human Capital theory emphasizes investment and return. The Government attaches great importance to children's physical and mental health and invests in meals to ensure nutrition.
Traditional Academic
At the beginning of the policy, the government introduces the assumed outcomes for educated children: instilling in young people knowledge of India and its varied social, cultural, and technological needs is considered critical for purposes of national pride, self-confidence, cooperation, and integration (NEP:4). By instilling knowledge of different subjects and diverse cultures, children can grow intellectually to tackle the challenges of the modern world. The policy puts forward a prerequisite requirement for pupils' study, namely developing the capacity to read and write and to carry out simple number operations. However, various governmental as well as non-governmental surveys indicate that a large proportion of students currently in elementary school (estimated to be over 5 crore in number) have not attained foundational literacy and numeracy (NEP:8).
Learner Centred
Regarding the evolution of pedagogy, the policy mentions "learner-centred, discussion-based, flexible, and, of course, enjoyable", which describes what learner-centred education should be. Learner-centred education pays attention to the discussions and interactions between teachers and students. It asks for a joyful environment where students can get inspired and realise their potential (NEP, 2020, p. 3). Besides, the curricular and pedagogical structure of school education will be reconfigured to make it responsive and relevant to the developmental needs and interests of learners at different stages of their development, corresponding to the age ranges of 3-8, 8-11, 11-14, and 14-18 years, respectively (NEP:11).
Social Efficiency
The modern world is full of rapid changes, and the growing emergence of epidemics also calls for collaborative research in infectious disease management and the development of vaccines, and the resultant social issues heighten the need for multidisciplinary learning (NEP:3). Environmental issues such as climate change, pollution of all kinds and energy shortages are placing new demands on the people of the new world. Social Efficiency thinking allows people to become innovative by making the most of "various dramatic scientific and technological advances". Once people combine a "skilled workforce" with "multidisciplinary abilities", chances are that they will develop the economy of the nation and ultimately change the world around them (NEP, 2020:3).
Social Reconstruction
The Indian Government hopes that implementing the NEP can effectively promote learning. Education is the most powerful tool for achieving social justice and equality. Inclusive and equitable education, a fundamental goal in its own right, is also critical to achieving an inclusive and equitable society in which citizens can thrive and contribute to the nation. The education system must aim to benefit India's children so that no child loses any opportunity to learn and excel because of circumstances of birth or background (NEP:24). Under the guidance of Critical Pedagogy, educators should try their best to help children from vulnerable backgrounds access high-quality education, minimizing the negative impacts of their original social and economic status.
Balance of Ideologies
It is stated in the policy that higher education plays a vital role in promoting human and social well-being and in developing what is envisaged in the Indian Constitution. Therefore, the government states that quality higher education must aim to produce excellent, thoughtful, well-rounded and creative people. Such education enables individuals to delve into one or more professional areas of interest and to develop character, moral and constitutional values, intellectual curiosity, scientific temperament, creativity, a spirit of service and 21st-century competence across a range of disciplines, including the sciences, social sciences, arts, humanities, languages, and professional, technical and vocational subjects. This suggests that the way education is delivered before the higher stage prepares higher education to produce graduates who can solve social problems.
Political factors
The unfavourable political environment for schooling in India fundamentally shapes how universities in other countries consider possible collaborations and engagements that could lift Indian education to the next level. The Hindu ideology of the ruling Bharatiya Janata Party government, particularly its anti-Muslim rhetoric and radicalism, will hinder the Indian higher education system's pursuit of international cooperation and global competitiveness. This radical ideology is deeply ingrained and has affected the development of educational standards in India. The Indian government should therefore cultivate students' knowledge and understanding of cultures worldwide from primary education onward. Social factors should also be given greater attention.
Socio-economic factors
India will have the world's largest youth population over the next decade, and the ability to provide them with quality education will determine India's future. The adoption of NEP is conducive to achieving the grand goal of "becoming a developed country as well as among the three largest economies in the world", establishing an education system that adapts to India's economic and social development, and improving the poor implementation of previous education policies (NEP, 2020:3).
It is in response to several political and socio-economic challenges such as these that India has enacted new policies. Overall, this policy aligns with the future trend of world education development, which makes it possible for India to position itself as a global knowledge superpower and achieve excellent results in international competition.
Conclusion
The purpose of this study is to examine how pedagogical philosophies and other pertinent aspects affect the development and improvement of educational policies in India by analysing the NEP (pages 1-32) in conjunction with five educational theories, namely the Human Capital theory, Traditional Academic theory, Learner Centred theory, Social Efficiency theory, and the Social Reconstruction theory. The formation of India's education policy is inseparable from the influence of educational philosophies. The analysis shows that all five ideologies are reflected in the policy, but their weight is not even. Among them, Learner Centred theory carries the greatest weight in the analysed text, which indicates that it is children and their development that the Indian government emphasizes. In addition to the influence of educational philosophy, political and socio-economic factors also play a significant role. Politically, the ruling party's anti-Muslim rhetoric and radicalism are unfavourable factors hindering the development of education in India, and it was imperative to change education policy and implement reform to reverse the negative educational situation. At the socio-economic level, the growth of India's youth population requires India to create a better educational environment for the next generation and establish an education system adapted to the Indian economy. The NEP is expected to improve India's global competitiveness and help India become a global knowledge-based economy.
"Education",
"Political Science",
"Economics"
] |
The Effect and Relative Importance of Neutral Genetic Diversity for Predicting Parasitism Varies across Parasite Taxa
Understanding factors that determine heterogeneity in levels of parasitism across individuals is a major challenge in disease ecology. It is known that genetic makeup plays an important role in infection likelihood, but the mechanism remains unclear as does its relative importance when compared to other factors. We analyzed relationships between genetic diversity and macroparasites in outbred, free-ranging populations of raccoons (Procyon lotor). We measured heterozygosity at 14 microsatellite loci and modeled the effects of both multi-locus and single-locus heterozygosity on parasitism using an information theoretic approach and including non-genetic factors that are known to influence the likelihood of parasitism. The association of genetic diversity and parasitism, as well as the relative importance of genetic diversity, differed by parasitic group. Endoparasite species richness was better predicted by a model that included genetic diversity, with the more heterozygous hosts harboring fewer endoparasite species. Genetic diversity was also important in predicting abundance of replete ticks (Dermacentor variabilis). This association fit a curvilinear trend, with hosts that had either high or low levels of heterozygosity harboring fewer parasites than those with intermediate levels. In contrast, genetic diversity was not important in predicting abundance of non-replete ticks and lice (Trichodectes octomaculatus). No strong single-locus effects were observed for either endoparasites or replete ticks. Our results suggest that in outbred populations multi-locus diversity might be important for coping with parasitism. The differences in the relationships between heterozygosity and parasitism for the different parasites suggest that the role of genetic diversity varies with parasite-mediated selective pressures.
Introduction
While most individuals in a host population carry few or no macroparasites of a particular species, a few individuals harbor high numbers of parasites [1]. The causal factors underpinning this general pattern, which typically fits a negative binomial distribution, differ as a function of the parasitic species under study and the environmental context of the host-parasite interaction. Nonetheless, it is generally recognized that this variation is due to the complex interaction of factors extrinsic and intrinsic to the host, including where a host lives, temporal variability in host-parasite interactions, and variability in host susceptibility, which depends on factors such as age, sex, body condition, or genotype.
A primary focus of many theoretical and empirical studies has been to understand the linkages between the genetic diversity of host populations and their susceptibility to pathogens [2][3][4][5][6][7][8]. The association between reduced levels of genetic diversity and increased prevalence or abundance of pathogens, termed "the monoculture effect" [9][10], has been demonstrated in studies of both agricultural and wild species [11][12][13]. In fact, pathogen-mediated selection has been proposed as a major underlying mechanism that promotes the accumulation and maintenance of genetic diversity in host populations through selection against common genotypes (Red Queen Hypothesis, [3]) or homozygous genotypes [14][15][16]. For example, Bérénos et al. [17] showed that lines of Red Flour Beetle (Tribolium castaneum) that had coevolved with a microsporidian parasite had higher levels of heterozygosity than control lines that had not coevolved with the parasite. Underlying cross-population and interspecific relationships between genetic variability and parasitism is the differential susceptibility of individuals. Within populations, variance in host genetic diversity (measured as heterozygosity) can be an important predictor of parasitism. In wild populations it has been shown that individuals with lower heterozygosity may have higher infection frequencies and greater morbidity [18][19][20][21]. This relationship has been especially explored in the context of inbreeding. When inbreeding occurs, the increase in the frequency of alleles that are identical by descent generates correlations in the extent of heterozygosity or homozygosity across loci throughout the genome (i.e., identity disequilibrium) [22][23][24]. Therefore, inbred individuals have a higher probability of homozygosity at all loci, including genes involved in disease resistance [20], and parasites would select against these hosts, ultimately favouring increased genome-wide genetic diversity in the host population [12,25].
Associations between individual host genetic diversity and parasitism have frequently been studied using multi-locus heterozygosity at neutral genetic markers (microsatellites) as a proxy for genome-wide diversity [i.e., heterozygosity-fitness correlations (HFCs)]. Several studies have shown that parasite HFCs were a consequence of inbreeding depression [18][19]. However, it is not uncommon to find significant HFCs in non-inbred populations [14,21,26], although in those cases multi-locus heterozygosity is thought to be a poor predictor of genome-wide diversity [27][28][29]. The alternative explanation for these findings is that, instead of inbreeding that affects the whole genome, linkage disequilibrium (i.e., non-random associations of different loci in the gamete) may occur between neutral and functional loci [23,27,30] due to the high polymorphism of microsatellites, which favors finding linkages between alleles and candidate genes ([31]; but see [24] and [32]). Although the exact underlying mechanisms at functional loci that cause these HFCs are still under debate, overdominance (i.e., heterozygote advantage; [23,30,33]) is frequently proposed as a likely explanation, especially for parasite resistance loci [14,15].
However, it has been difficult to generalize about correlations between neutral genetic variability and measures of parasitism since such patterns are sometimes equivocal. Results vary with the species of parasite studied and its fitness effects on the host as well as the general level of inbreeding of the host population. There are several examples where less heterozygous individuals were more susceptible to parasite infection [18,19,34]. There are also instances in which no association between genetic diversity and parasitism was found [4,8,35] or where such correlations were limited to loci that are physically close to immune response candidate genes [36]. Yet, as with other HFCs, the role of genetic diversity in predicting parasite loads is environment and context dependent [26]. Parasitism is influenced by many abiotic and non-genetic biotic factors that determine exposure and susceptibility. For example, if the fitness consequences of a parasite are mild, the role of genetic diversity relative to these other factors may be weak and difficult to detect. Indeed, in some cases genetic diversity has been found to play a role only for young individuals, for whom the effects of the studied parasite on fitness were stronger [37]. In addition, as with inbreeding depression, it is possible that the role of genetic diversity is stronger under stressful environmental conditions [18,25]. Finally, despite the fact that not all parasites influence hosts equally, and that HFCs may differ as a function of the examined parasitic taxon [35,38], most studies are carried out for a single parasite without consideration of the potential for differences in the effect of genetic diversity on susceptibility to different parasite species.
Here we address these issues by considering the relative importance of neutral genetic variability when compared to non-genetic factors that have been shown to be important in predicting parasitism. We focused on outbred, free-ranging populations of raccoons (Procyon lotor) for which we have collected data on behavioral, population, and habitat ecology and for which observational and experimental studies have identified non-genetic predictors of parasitism [39][40][41][42][43]. For these populations, non-genetic predictors of parasitism include both factors extrinsic (annual and seasonal variability, location of the host population, and contact rates) and intrinsic to the host (age, sex, and body condition). We recently measured the individual genetic diversity of animals in these populations and observed that even small differences in neutral genetic variability are associated with an animal's ability to overcome infection by canine distemper virus (CDV), a pathogen that causes high rates of mortality in raccoons [44]. Thus, genetic variability in combination with these non-genetic factors may enhance our ability to predict the extent of parasitism.
In this study we analyze data on infection by macroparasites, including two species of ectoparasites (ticks and lice) and a collective measure of the extent of infection by internal macroparasites (endoparasite species richness). While ecto- and endoparasites may have important effects on host fitness, presumably these effects are less extreme and more variable than those of CDV, which can directly kill hosts. Using an information theoretic approach we selected the best models for each parasite and assessed whether individual genetic diversity is a significant predictor of the extent of ecto- and endoparasitism when placed in the context of non-genetic factors already identified as important predictors [39][40][41][42], and determined whether the relationship between genetic diversity and parasitism differs among parasitic species or groups. We expected to find an overall negative relationship between levels of genetic diversity and parasite load, i.e., more heterozygous individuals will present fewer parasites. We also expected the relationship between genetic diversity and parasite load to differ across parasite taxa, being stronger for those parasites that may have greater fitness consequences and are known to trigger immune responses, because such parasites are more likely to impose selective pressures on the host. Endoparasites, for example, are known to trigger an immune response [45]; thus they may have larger effects on host fitness than ectoparasites, and we predicted that they would have a stronger relationship with genetic variability. Yet some ectoparasites also interact with hosts more closely than others, and thus we predict that female ticks that have remained on a host long enough to draw a blood meal and become replete will show a stronger relationship with genetic variability than ticks that have only recently attached to hosts or chewing lice that feed on skin debris and may not trigger an immune response [46][47]. We also tested whether such relationships are best explained by single locus effects or genome-wide neutral genetic diversity. Since HFCs in outbred populations have been frequently associated with strong effects of single loci [30,48], we predicted that the relationship is due to single locus effects.
Ethical Statement
Research was carried out under Missouri Department of Conservation permit #12869, which specifically approved this study, and University of Missouri Animal Care and Use Protocol #3927.
Sampling Hosts
Raccoons were sampled between 2006 and 2007 at 12 forested sites located within 60 km of Columbia, Missouri, USA (Figure 1). Details of the sites, raccoon populations and associated macroparasite communities, and host and parasite sampling protocols are given elsewhere [39][40][41][42][43]. In brief, all sites had similar raccoon population densities, and measures of genetic variability indicate that these populations are highly variable and outbred [41][42][43][44]. Sites received different experimental treatments as part of a study that measured the effects of differential contact rates and food provisioning on parasite communities [41,42]. One of the following three experimental treatments was randomly assigned to sites within geographically defined blocks: 1) a permanent feeding station stocked with 35 kg/wk of dried dog food at a single location to aggregate raccoons (n = 5 sites); 2) the same quantity of food, placed at highly dispersed and temporally variable locations to control for the effects of food addition without aggregating hosts (n = 3 sites), or 3) no food additions (n = 4 sites). Prior work on these host populations found that aggregation increased tick abundance and decreased lice abundance [41], while supplemental food decreased the number of indirectly transmitted endoparasites [42].
Raccoons were trapped for ≥10 days at each site two to three times per year between March and November. Individuals were anaesthetized and ear-tagged, weighed, sexed, measured, and aged [41,49] as kits (0-5 months) or age class I (6-14 months), II (15-38 months), III (39-57 months) or IV (>58 months). Data from kits, which formed only a small portion of the sampled individuals, were excluded from subsequent analyses because they were generally free of ecto- and endoparasites [39,40,42]. Residuals from a linear regression of body mass on body size were used to assess the relative body condition of each individual [50]. Hair samples and blood samples, collected via femoral venipuncture and placed in EDTA, were stored at -20 °C. Animals were released at the site of capture following recovery from anesthesia.
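Since the condition index is simply the residual from an ordinary least-squares fit of mass on size, it is easy to reproduce; a minimal sketch in Python, with made-up measurements rather than the study's data:

```python
import numpy as np

def relative_body_condition(mass, body_size):
    """Residuals of a linear regression of body mass on body size.

    Positive residuals indicate individuals heavier than expected
    for their size (better condition); negative, the reverse.
    """
    slope, intercept = np.polyfit(body_size, mass, deg=1)
    predicted = intercept + slope * np.asarray(body_size)
    return np.asarray(mass) - predicted

# Illustrative measurements only (mass in kg, body size in cm)
mass = [5.2, 6.8, 4.9, 7.4]
size = [62.0, 70.0, 60.0, 71.0]
print(relative_body_condition(mass, size))
```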
Sampling Parasites
We focused on two species of ectoparasites: the American dog tick Dermacentor variabilis, and the chewing louse Trichodectes octomaculatus [39][40][41]. Dermacentor variabilis is a 3-host metastriate tick that occurs on raccoons while feeding or mating. Adult D. variabilis are large (3-5 mm in length) and readily found and identified without magnification. We focused on adult ticks present between April and August, the primary period when this species parasitizes raccoons at our study area [39], and quantified abundance by a thorough search of the entire body. Ticks were classified as non-replete (i.e. males and females that are not engorged with blood), semi-replete (females that have entered the slow feeding stage but are not yet fully engorged), or replete (female is fully engorged, indicating that mating has occurred and the animal has entered a rapid feeding phase). Only ticks in the non-replete and replete groups were included in the analyses since they constitute two well differentiated categories. Lice were sampled via 10 strokes with a flea comb from the base of the neck to the base of the tail on the dorsal region. Lice were placed in sealed plastic bags and frozen until transfer to the laboratory where they were identified to species and counted under a dissecting scope. To avoid handling effects on ectoparasite abundance and to facilitate a similar likelihood for finding ectoparasites for each host, only the first capture events for each host were included in analyses.
Figure 1. Location of the 12 study sites contained within the 6 source areas in central Missouri, USA. Source areas were defined based on FST assessments as Baskett, Rudolf Bennitt, Davisdale, Reform, Prairie Forks and Whetstone Creek. Location and treatments of each site are indicated by closed circles (control sites which did not receive supplemental food and thus raccoons did not aggregate), open circles (sites which received food, but food was dispersed so as not to cause raccoons to aggregate) or open squares (sites which received supplemental food at a single site so as to cause raccoons to aggregate). doi:10.1371/journal.pone.0045404.g001
For sampling endoparasites, fresh feces were collected from within or below traps, homogenized, and stored in 10% formalin. The presence of endoparasite species was based on identification of ova and oocysts isolated by fecal flotation procedures using sugar and zinc sulfate centrifugation techniques. Additional methodological details and citations for endoparasite species descriptions and identification are provided elsewhere [42]. Based on the presence/absence data, we calculated prevalence of each endoparasite species and endoparasite richness for each host, which is a proxy of host endoparasite burden. Only one randomly chosen fecal sample per individual host was included in the analyses. Although using additional samples may give a more accurate parasite species richness index for an animal, this would have resulted in a disproportionately larger sampling effort for recaptured raccoons.
DNA Extraction and Genotyping
Total genomic DNA was extracted from blood samples using DNeasy Blood and Tissue Kits (Qiagen, Valencia, CA, USA) with the manufacturer's protocol and from hair samples using InstaGene Matrix kits (BioRad, Hercules, CA, USA) following [51]. Each individual was genotyped at 15 unlinked nuclear microsatellite loci developed for raccoons: PLM01, PLM03, PLM05, PLM06, PLM07, PLM08, PLM09, PLM10, PLM11, PLM12, PLM13, PLM14, PLM15, PLM16 and PLM17 [52]. A total of 203 individuals were previously genotyped at 12 of these loci [44]; these individuals were genotyped for the 3 additional loci (PLM1, PLM3 and PLM17). An additional 177 individuals were genotyped using a multiplex approach, as were 24 of the initial 203 individuals to ascertain genotyping consistency between the two datasets. The 15 microsatellites were co-amplified in three multiplex PCRs following the Multiplex PCR Kit (Qiagen) protocol for 40 cycles (blood extracts) or 45 cycles (hair extracts) and a 60 °C annealing temperature. Reactions for DNA extracted from blood were prepared in a final volume of 10 µL containing 1 µL DNA (15-20 ng), 1X Multiplex Master Mix, 0.065 µM each primer, and 0.8 mg/mL BSA. Reactions for the DNA extracted from hair were prepared in a final volume of 12.5 µL containing 3 µL DNA, 1X Multiplex Master Mix, 0.1 µM each primer, and 0.8 mg/mL BSA. Fragment length analyses were performed on an ABI 3730 DNA Analyzer (Applied Biosystems, Foster City, CA, USA), and alleles were scored using GENEMARKER 1.5 (SoftGenetics, State College, PA, USA). We repeated analyses of 35% of blood samples to calculate genotyping error rate. Because DNA derived from hair samples may have low DNA quality and quantity, heterozygous genotypes were confirmed in at least two separate reactions and homozygous genotypes were confirmed in at least three separate reactions.
Genetic Analyses
We tested for deviations from expected genotype frequencies under Hardy-Weinberg equilibrium and for linkage disequilibrium between all pairs of loci using GENEPOP 3.4 [53]. The mean number of alleles and mean expected and observed heterozygosity values were calculated with the program Arlequin 3.1 [54]. The probability of null alleles was estimated using Microchecker 2.2.3 [55]. Using the Excel-macro IRmacroN4 (www.zoo.cam.ac.uk/zoostaff/amos) we calculated three measures of individual genetic diversity: standard multilocus heterozygosity (sMLH; [18]), internal relatedness (IR; [56]) and heterozygosity weighted by locus (HL; [57]). sMLH represents general levels of heterozygosity and controls for the number of loci genotyped [18]. IR and HL are measures of homozygosity but differ in how they are calculated. IR weights homozygotes for rare alleles more heavily than homozygotes for common alleles, since the former are more likely derived from related parents, and thus gives a measure of the extent of inbreeding [56]. HL weights the contribution of each locus to overall homozygosity in proportion to its allelic variability [57]. To calculate these measures we determined whether there was strong variance across years or sites, since these measures have to be calculated at the population level. An Analysis of Molecular Variance (AMOVA) showed no significant partition of variance across years, and population structure analyses showed that, in spite of isolation-by-distance differences among some of the sites, these comprise a single population (overall FST = 0.008). Thus, sMLH, IR and HL were calculated based on gene frequencies pooled across all years and sites. These measures are frequently highly correlated, and testing all of them could lead to pseudoreplication [26]. Therefore we calculated pairwise Pearson correlation coefficients to assess the relationship of the three metrics. As expected, the three measures of genetic diversity were highly correlated (r_sMLH-IR = -0.980; r_IR-HL = 0.980; r_HL-sMLH = -0.992; all p < 0.001). Therefore, since HL and sMLH had a correlation coefficient higher than 0.99, we excluded sMLH from further analyses. Simulations suggest that HL may be better correlated with inbreeding coefficient and genome-wide homozygosity than IR in populations that present levels of heterozygosity greater than 0.6. However, this only occurred in study populations with high genetic structure and admixture, and the differences were clearer when ≥50 markers were used [57]. Because the population under study does not fulfil all these assumptions, we carried out analyses using both IR and HL. Both IR and HL were normally distributed, and thus we used one-way analyses of variance (ANOVA) to test for differences in individual genetic diversity across populations and years, and among age and sex classes.
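For readers who want to reproduce these metrics outside the Excel macro, a minimal sketch of sMLH and IR, assuming genotypes stored as allele pairs and pre-computed population allele frequencies; the data structures are illustrative, and HL is omitted for brevity:

```python
import numpy as np

def smlh(genotypes, locus_mean_het):
    """Standardized multi-locus heterozygosity: the proportion of typed
    loci that are heterozygous, divided by the mean population
    heterozygosity of those same loci (values center near 1).

    genotypes: list of (allele_a, allele_b) tuples, or None if untyped.
    locus_mean_het: population heterozygosity for each locus.
    """
    typed = [(g, h) for g, h in zip(genotypes, locus_mean_het) if g is not None]
    prop_het = np.mean([a != b for (a, b), _ in typed])
    return prop_het / np.mean([h for _, h in typed])

def internal_relatedness(genotypes, allele_freqs):
    """Internal relatedness, IR = (2H - sum(f_i)) / (2N - sum(f_i)),
    where H = number of homozygous loci, N = number of typed loci, and
    f_i = population frequency of each allele the individual carries.
    Higher IR means more homozygous (lower diversity).
    """
    H, N, freq_sum = 0, 0, 0.0
    for locus, geno in enumerate(genotypes):
        if geno is None:
            continue
        a, b = geno
        N += 1
        H += (a == b)
        freq_sum += allele_freqs[locus][a] + allele_freqs[locus][b]
    return (2 * H - freq_sum) / (2 * N - freq_sum)
```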
Effects of Multi-locus and Single-locus Heterozygosity on Parasitism
For assessments of genome-wide genetic diversity effects, we used information-theoretic model selection [58] to identify the importance of an individual's genetic diversity relative to other factors known to predict the likelihood or extent of parasitism. To assess whether individual genetic diversity is a significant predictor of parasitism when placed in the context of other factors already identified as important predictors of infection, we conducted a two-stage modeling approach. We first identified the best non-genetic model that included predictors extrinsic to the host (site, month, year, aggregation, food supplementation) and predictors intrinsic to the host (age, sex, body condition), and then evaluated whether the model was improved through the addition of individual genetic diversity measures. In both stages we calculated Akaike's information criterion corrected for small sample size (AICc) and then calculated the differences between the best approximating model (the model with the lowest AICc) and all the other models (ΔAICc), as well as Akaike weights (Wi) and the evidence ratio (Di), for each model in the candidate set [58].
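These model-selection quantities follow directly from the standard Burnham-Anderson formulas; a minimal sketch, with placeholder log-likelihoods and parameter counts standing in for the fitted models:

```python
import numpy as np

def aicc(log_lik, k, n):
    """Small-sample corrected AIC:
    AICc = -2*logL + 2k + 2k(k+1)/(n - k - 1)."""
    return -2.0 * log_lik + 2 * k + (2 * k * (k + 1)) / (n - k - 1)

def akaike_weights(aicc_values):
    """Delta-AICc and Akaike weights for a candidate model set."""
    a = np.asarray(aicc_values, dtype=float)
    delta = a - a.min()
    w = np.exp(-delta / 2.0)
    return delta, w / w.sum()

# Hypothetical candidate set: (logL, k) per model, n = 259 hosts
models = [(-612.4, 5), (-610.9, 7), (-615.0, 4)]
scores = [aicc(ll, k, n=259) for ll, k in models]
delta, weights = akaike_weights(scores)
print(np.round(delta, 2), np.round(weights, 2))
```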
In the first stage, selection of the non-genetic terms for inclusion in the models was carried out a priori based on previous studies that identified important predictors of parasitism for each parasitic group [39][40][41][42]. In the second stage we selected all the models from the first stage that were within 2 ΔAICc units of the best-fitting model and compared them to new models that included genetic diversity (both the variable and its quadratic term to control for potential non-linear relationships; [59]) as potential explanatory variables. Since significant population structure can lead to spurious significant HFCs [60], we controlled for the potential effects of the isolation-by-distance among the 12 study sites by including the 6 source areas (as defined by FST assessments; Ruiz-Lopez et al. unpublished data) as a covariate in all models that collectively included the 12 sites (Figure 1).
To compare the effect of genetic diversity and its quadratic term we standardized these continuous predictors by centering and dividing by 2 standard deviations (SDs) to force a mean of 0 and SD of 0.5. We calculated model-averaged estimates and odds ratios (±95% C.I.) for each variable in the 90% confidence set of models (i.e. inclusion of all parameters in top models that collectively sum to w = 0.90). All analyses were conducted using R v.13.1 (R Development Core Team 2011); the MASS and AICcmodavg libraries were used to carry out model selection, and the arm library to standardize the variables. Ectoparasite abundance was analyzed using generalized linear models with a negative binomial distribution and a log link function [41]. For D. variabilis, we analyzed non-replete and replete ticks separately because the importance of factors intrinsic and extrinsic to the host differs for these tick classes [39]. Endoparasite species richness was analyzed using a general linear model with a normal distribution [42].
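A minimal sketch of the same two steps, the two-SD standardization and a negative binomial log-link fit, using Python's statsmodels rather than the R stack the authors used; the data frame and its columns are invented for illustration, and the dispersion parameter is fixed for brevity:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def standardize_2sd(x):
    """Center and divide by 2 SD so binary and continuous predictors
    sit on a comparable scale (mean 0, SD 0.5)."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / (2.0 * x.std(ddof=1))

rng = np.random.default_rng(0)
# Hypothetical host data: replete-tick counts plus two predictors
df = pd.DataFrame({
    "ticks": rng.negative_binomial(1, 0.3, size=200),
    "ir": rng.normal(0, 0.13, size=200),        # IR-like values
    "aggregated": rng.binomial(1, 0.4, size=200),
})
df["ir_std"] = standardize_2sd(df["ir"])
df["ir_std_sq"] = df["ir_std"] ** 2             # quadratic term

X = sm.add_constant(df[["ir_std", "ir_std_sq", "aggregated"]])
# Negative binomial family with log link; alpha is fixed here,
# whereas in practice it would be estimated (e.g., MASS::glm.nb in R)
fit = sm.GLM(df["ticks"], X,
             family=sm.families.NegativeBinomial(alpha=1.0)).fit()
print(fit.summary())
```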
For cases where the best models included a measure of genetic diversity we assessed the potential effects of single-locus heterozygosity. To do so, we selected the top model and replaced the multi-locus heterozygosity measure with single-locus heterozygosity at each marker, incorporated as a binary variable (0 if homozygous and 1 if heterozygous). We compared both models (the one fitting multi-locus heterozygosity and the one fitting the single-locus effects) using an F-test to determine which of the two models explained significantly more variance [24]. Only individuals with complete genotypes were included in these analyses. From the models we calculated the effect size as the partial correlation coefficient [61]. We used a binomial sign test to investigate whether positive and negative effects occurred equally. We also tested whether metrics of genetic diversity (number of alleles, and expected and observed heterozygosity) were associated with the single-locus effect size by regressing the effect size on genetic diversity.
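The model comparison in that last step is a standard extra-sum-of-squares F-test between nested fits; a minimal sketch, assuming residual sums of squares and degrees of freedom have been extracted from the two models (all numbers below are placeholders, not the paper's values):

```python
from scipy import stats

def nested_f_test(rss_reduced, df_reduced, rss_full, df_full):
    """Extra-sum-of-squares F-test: does the full model (here, many
    single-locus terms) explain significantly more variance than the
    reduced model (a single multi-locus IR term)?"""
    num = (rss_reduced - rss_full) / (df_reduced - df_full)
    den = rss_full / df_full
    f = num / den
    p = stats.f.sf(f, df_reduced - df_full, df_full)
    return f, p

# Placeholder values; the df difference of 13 mirrors swapping one
# IR term for 14 locus terms, as in the endoparasite comparison
f, p = nested_f_test(rss_reduced=412.0, df_reduced=164,
                     rss_full=385.0, df_full=151)
print(round(f, 3), round(p, 3))
```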
Parasite Prevalence and Abundance
Across all years tick prevalence was 96%, and individual abundance ranged from 0 to 142 with a mean of 23.43 ticks per individual (n = 259; Table S1). Prevalence and abundance differed for the two tick categories, with non-replete ticks presenting higher prevalence and mean abundance (prevalence: non-replete = 0.95, replete = 0.55; mean abundance: non-replete = 21.43, replete = 1.99). Louse prevalence was 0.52; louse abundance ranged from 0 to 55 with a mean of 3.03 (n = 307). The endoparasite community of raccoons at the study sites comprised 16 taxa [42]. Prevalence of each endoparasite species ranged from 0.14 to 0.89 (Table S1).
Descriptive Genetic Analyses
Of 380 raccoons sampled between 2006 and 2007, 94.7% were genotyped at ≥12 markers and were included in subsequent analyses. Mean observed heterozygosity was 0.783 (SD = 0.094), mean expected heterozygosity was 0.793 (SD = 0.090), and mean number of alleles per locus was 11.3 (SD = 3.2; range = 6-19; Table S2). Loci PLM7 and PLM12 were marginally significant (p-value < 0.03) for a heterozygosity deficit after Bonferroni correction. Whereas PLM7 did not present evidence of null alleles, the probability of null alleles at locus PLM12 was close to 0.05 (Oosterhout estimate = 0.047). Therefore, we carried out all HFC analyses with and without the PLM12 locus, but kept PLM7 in all the analyses. While inclusion of the PLM12 locus did not significantly affect the results, to be conservative we excluded data for this locus, and therefore subsequent results are based on a final panel of 14 microsatellite loci. Over all possible pairs, no pair of loci showed significant linkage disequilibrium after Bonferroni correction. Mean sMLH was 1.035 (SD = 0.141), mean IR was 0.009 (SD = 0.130), and mean HL was 0.202 (SD = 0.108). Neither IR nor HL showed significant differences across experimental treatments, sites, years, age or sex categories.
IR and HL yielded similar results when included in the models, probably due to the lack of strong genetic structure among our populations and because we used 14 markers. Therefore, below we present only the IR-based results.
Effect of Multilocus Heterozygosity on Ectoparasite Abundance
The results of adding genetic diversity to the best non-genetic models differed for lice and non-replete and replete ticks (Table 1). Genetic measures were important in explaining tick abundance, especially for replete ticks, but not for explaining lice abundance. For replete ticks, the best-fitting non-genetic model included the parameters aggregation, food, and month. This model garnered the majority of model support (Wi = 0.63) and no other model fell within 3 ΔAICc units (Table S3a). In Stage 2 of the replete tick analyses, however, the top non-genetic model garnered less support (Wi = 0.18) and the top models contained genetic terms (Wi = 0.73) (Table 1a). Interestingly, the top two models included the quadratic term (IR²) either alone or together with the linear IR term, revealing an underlying curvilinear association between parasite loads and IR. Model-averaged estimates and odds ratios showed that this relation was negative (b = -0.52, odds = 0.59), indicating that moderately heterozygous individuals had, on average, more parasites (Figure 2a). Model-averaged estimates for IR² (-0.52) and aggregation (0.60) were similar in magnitude, and in both cases the 95% CI did not overlap 0, collectively indicating that their effects on tick abundance were significant and of similar relative importance (Table 2). Model estimates for IR were smaller and the 95% CI, although highly skewed towards positive values, overlapped 0 (Table 2). The three factors that had the greatest effect on replete tick abundance were area (with higher abundance in Davisdale CA, Rudolf Bennitt CA and Whetstone Creek CA), month (with higher abundance in July), and aggregation (with higher abundance in aggregated sites) (Table 2).
The top non-genetic models for non-replete ticks included temporal terms (month, year), treatment category (aggregation, food), and factors intrinsic to the host (age, sex) (Table S3b). This model (now including area to account for the fine genetic differences across the different conservation areas) remained the top model when genetic variability was included as a potential explanatory metric. The top model that included genetic diversity was the third-ranked model (ΔAICc = 2.2) (Table 1b). Estimates of b for IR and IR² were close to and overlapped 0. Model-averaged estimates indicated that the most important factors were area and month, followed by food, aggregation and sex. By comparison, year, age and genetic diversity had less relative importance (Table 2, Figure 2b).
For lice, the top non-genetic model, which included host aggregation, age, sex, and an age × sex interaction (Table S3c), was not improved when measures of genetic diversity were included as additional potential explanatory parameters. In fact, no models that included IR or IR² fell within the 90% confidence set of models of the top predictive model (Table 1c). The top model that included a measure of genetic diversity differed by 8.04 AICc units from the best-fitting model, and given the low weight of evidence in support of this model (Wi = 0.01), no genetic parameter was in the final model-averaged results (Table S4).
Effect of Multilocus Heterozygosity on Endoparasite Richness
The top non-genetic model predicting endoparasite species richness comprised age and year (Wi = 0.35). Two additional models had moderate support (Wi = 0.21) and included the parameters age, sex, year, and food, all of which were used in the second modeling stage, which incorporated genetic parameters and the six source areas (Table S3d). Adding IR to the best non-genetic models predicting endoparasite richness improved the fit of these models (Table 3). The top two models, with a combined Wi = 0.40, included IR, and most of the models within the 90% confidence set included either IR or the quadratic term. The top model that comprised solely non-genetic terms differed by ΔAICc = 0.91, and the top model identified during the first stage of the analyses (age + year) differed by ΔAICc = 8.44 from the best model that included genetic variability.
Model-averaged estimates indicated that IR (b = 0.39) and host sex (b = -0.37) had effects of similar magnitude (Table 4). Although the 95% CI for the model-averaged estimate of IR overlapped 0, it was skewed towards positive values (-0.05 to 0.84), indicating that individuals with higher IR (i.e., more homozygous individuals) harbor more species of endoparasites (Figure 2d). IR² also showed a positive relationship with endoparasite richness, but model-averaged estimates indicated that the magnitude of its effect was less than that of IR. The factors that best predicted endoparasite richness were area, food, age, and year (Table 4).
Single Locus Effects
Three markers were significantly correlated with replete tick abundance (PLM5, PLM14, and PLM16) and 2 with endoparasite abundance (PLM01 and PLM14). However, we did not find any overall evidence for significant single-locus effects, since most of the effect sizes were not significantly different from 0 (Figure 3) and the models including the single loci did not improve the variance explained by the models incorporating multi-locus heterozygosity measured as IR (endoparasites: F(13,151) = 0.852, p = 0.604; replete ticks: F(12,147) = 0.157, p = 0.999). However, there was a clear trend for single-locus effects to differ for ectoparasites and endoparasites. For replete ticks, effect sizes were not significantly more positive or negative (8 positive, 6 negative, p-value = 0.791), whereas for endoparasites the single-locus effects were significantly more negative (1 positive, 13 negative, p-value = 0.002), suggesting that higher levels of genetic diversity are associated with reduced endoparasite richness. There was no correlation between the single-locus effect sizes and the genetic diversity metrics of the loci.
Discussion
We used an information theoretic approach to assess the relative importance of genetic diversity for predicting parasitism as we simultaneously considered non-genetic factors that also influence host exposure to parasites and susceptibility. Despite the outbred character of the host study populations, genetic diversity was an important predictor of parasitism at the individual level. However, the relationship of diversity and parasitism varied in strength and shape across parasitic taxa. Endoparasite species richness displayed an inverse relationship between heterozygosity and parasitism, whereas the relationship between genetic diversity and D. variabilis abundance was better predicted by a curvilinear relationship, especially for replete ticks (Figure 2). In addition, we did not observe strong effects of single loci, suggesting a relationship between parasites and multi-locus diversity and supporting the idea that in some populations parasites generate selective pressure not only on the specific loci directly involved in pathogen resistance, but also throughout the genome [17,62]. To our knowledge, this is the first example of differing relationships within the macroparasite component community in an outbred wildlife population.
Previous studies in this population revealed that individuals that had antibodies to CDV had greater genetic diversity than individuals that were seronegative, suggesting that individuals with lower levels of genetic diversity were less likely to survive CDV [44]. Our results for endoparasite richness show a similar pattern: individuals with fewer endoparasite species had greater genetic diversity than individuals infected by more species. The most likely mechanism that might explain this pattern is that genetic variability may be important for overcoming parasitism. These HFC patterns agree with previous studies that have shown that parasites might act to maintain high genetic diversity in their host population through directional selection for heterozygous individuals [12,14,18,63]. The importance of this pattern has been emphasized repeatedly in the context of inbred populations, in that directional selection imposed by parasites would select against the most inbred hosts. Due to the increase in homozygosity throughout the genome, inbred individuals are more likely to express deleterious recessive alleles, have lower probabilities of carrying adaptive alleles that may aid in infection resistance, and have a lower likelihood of heterozygosity at loci under balancing selection [18,25]. However, our study population was not inbred. Raccoons are among the most abundant mid-sized mammals in North American temperate forests, with densities of 9-32 individuals/km² at our study sites [41] and a ubiquity throughout much of North America that presumably allows high rates of gene flow. The population examined for this study showed high rates of allelic diversity and heterozygosity and significant but low FST values across sampling areas (FST = 0.008). Our results suggest that the selective pressure of parasites might also be important for populations that are not inbred and add new evidence to support the importance of genetic diversity for coping with parasitism in outbred populations [5,14].
In contrast to the patterns observed for endoparasites and for CDV [44], the results for ectoparasites were more nuanced. Genetic diversity was not a predictor of parasitism for lice but it was a predictor of the abundance of D. variabilis, particularly for ticks that had been on the host long enough to obtain a blood meal. The cross-taxa differences are likely due to the different interactions of lice and ticks with the host immune system. Trichodectes octomaculatus is a raccoon-specific louse [64] that feeds primarily on skin debris and dried blood. As such, it is not clear whether it elicits an immune response. The suborder Ischnocera, to which it belongs, has not been shown to be affected by the host immune system in birds [47,[65][66] and for mammals may be better viewed as a commensal rather than a parasitic taxon [67].
Ticks, in contrast, are known to generate an immune response from the host, most prominently due to tick saliva [46,68]. Inhibition of the host immune system has been suggested to be beneficial not only for the attachment of the tick but also for the transmission of bacteria such as Borrelia and Ehrlichia spp. [69]. Given the important role that genetic variability plays in facilitating the function of immune genes [15], the link between tick attachment to a host and immune system response might explain why genetic diversity is a better predictor of tick abundance than louse abundance. Furthermore, this could also explain why the strength of the relationship is greater for replete ticks, which have been exposed to a host immune response for a longer period than have non-replete ticks. We propose that the ability of a replete tick to stay attached to the host would depend on both the ability of the host to mount a successful immune response and the ability of the tick to modulate that immune response. As a consequence, genetic diversity is more relevant for predicting the abundance of ticks that have entered the rapid feeding stage. Interestingly, the effect of genetic diversity on ectoparasite abundance was better predicted by a quadratic measure, indicating a curvilinear relationship. Reports of quadratic effects of genetic diversity are infrequent, but have been observed for survivorship [70], reproductive success and fluctuating asymmetry [59], and most notably for louse abundance [38].
Table 1. Ranking of models estimating abundance of (a) replete and (b) non-replete ticks (n = 259) and (c) lice (n = 307) in raccoons, including non-genetic and genetic terms.
The explanation for such quadratic effects is that, given a continuum between maximal inbreeding and maximal outbreeding, there should be an intermediate level of heterozygosity that maximizes fitness [71]. However, if this applied to our results we would expect more ectoparasites on highly heterozygous or highly homozygous individuals, the opposite of the curvilinear pattern we observed; a comparable quadratic pattern has been reported previously [72]. In that case, the authors interpreted their results as evidence for parasite-mediated disruptive selection, with both homozygous and heterozygous individuals having higher fitness than the moderately heterozygous individuals and having higher probabilities of survival.
Table 3. Ranking of models estimating endoparasite richness in a raccoon population (n = 250) including non-genetic and genetic terms.
When the relative importance of genetic diversity was compared with that of the non-genetic factors, we observed different patterns across taxa that result from the specific interactions of the parasite, the environment, and the host. In previous studies we have observed that the relative importance of abiotic factors (season, year) was greater for non-replete ticks [39], the relative importance of biotic factors (age, sex and body condition) was greater for lice and replete ticks [39,40], and the experimental increase in social aggregation could overwhelm these factors for both taxa [41]. For endoparasites, age, sex, aggregation, and year are all important predictors of species richness [42]. With the addition of genetic diversity, however, we gain additional insights into predictors of parasitism. Relative to other factors, host genetic diversity is not important in predicting louse or non-replete tick abundance. For replete ticks, the effects of genetic diversity were comparable to those of aggregation, and more important than food. Thus, under the high contact rates induced by aggregation, the number of replete ticks that persist on a host long enough to mate and gain a full blood meal will more strongly depend on genetic host susceptibility than on age or sex, despite the fact that age and sex are themselves strong predictors of replete tick abundance in the absence of aggregation. Similarly, for endoparasite richness genetic diversity was as important as host sex. These results are especially notable given the importance that host sex and host age are often considered to have in underpinning variance in the extent of parasitism by macroparasites [1].
After analyzing the role that genetic diversity played in predicting parasitism in each group, we determined whether the effects of genetic diversity were best explained as single-locus effects or genome-wide neutral genetic diversity for endoparasite richness and replete tick abundance. We expected to find single-locus effects, since the detection of HFCs with neutral markers in non-inbred populations has usually been attributed to a particular locus [30]. In fact, it has been suggested that the high polymorphism at microsatellite loci might favor the chances of one allele showing linkage disequilibrium with candidate genes [31]. However, in our study no single marker was disproportionately important in explaining any of the observed parasite-HFC models. These results agree with a recent study which suggests that frequently the effect of linked genes on HFCs is negligible compared to the effect of the rest of the genome [24]. In fact, it has been shown that the genetic architecture of resistance may be highly complex, with not only an effect of the few loci directly involved in resistance, but also strong epistatic effects between loci that can be on different chromosomes [62]. In addition, disease susceptibility might also be indirectly associated with genes involved in signaling pathways, or even metabolism [31]. Thus, it is possible that an association exists between genotypes and parasite resistance not only at the regions of the genome directly involved in resistance but in other genomic regions [17], and that here we captured this effect by using a small number of highly polymorphic microsatellites.
Interestingly, when the local effects were analyzed, they confirm the pattern found for the global HFC: for endoparasite richness most of the local effects were negative (more heterozygous individuals presented fewer parasites), but for the abundance of replete ticks there is a combination of negative and positive local effects. The combination of negative and positive local effects is probably what yields a curvilinear relationship for the global HFC in replete ticks. To our knowledge, this is the first reported case where the differences are not only based on the presence or absence of an HFC with parasitism, but also on varying types of HFC relationships. The mechanism underlying these differences is unclear. We hypothesize that selective pressure would differ among genes, with some loci being subject to overdominance that favours heterozygotes, and other loci showing underdominance that imposes selection against heterozygotes [33,48]. Both types of selection have been shown to act on immune genes [16,73], and have been proposed as potential selection mechanisms underlying HFCs for single loci [33]. Whether both selection mechanisms are simultaneously acting or there are other mechanisms underlying the observed results requires further study.
The extent of parasitism of an individual is a trade-off between exposure and susceptibility. Here we have shown that even small differences in genetic diversity, such as those found in an outbred population, are associated with differences in susceptibility to parasites across individuals. Furthermore, the relationship between genetic diversity and parasitism varies across taxa, probably as a result of different selective pressures. To understand the genetic basis of parasite susceptibility, future studies must simultaneously evaluate the effects of non-genetic and genetic measures in the same models.
Supporting Information
Table S1. Prevalence and mean abundance for 18 species of parasites identified from raccoons in Missouri. (DOCX)
Table S3. Ranking of models including only non-genetic terms to estimate parasite loads in a raccoon population: (a) abundance of replete ticks; (b) abundance of non-replete ticks (Dermacentor variabilis; n = 259); (c) abundance of lice (Trichodectes octomaculatus; n = 307); and (d) endoparasite richness (n = 250). (DOCX)
"Biology"
] |
Does Energy Poverty Reduce Rural Labor Wages? Evidence From China’s Rural Household Survey
Eliminating energy poverty helps to break the vicious circle between the lack of adequate, affordable energy services and low income in rural areas. We deconstruct energy poverty into extensive energy poverty and intensive energy poverty, analyze the net effect of energy poverty on rural labor wages and its heterogeneity with microeconometric methods, and further investigate the impact mechanism through an education effect and a health effect. The results show that both extensive and intensive energy poverty have a significant negative effect on the wages of rural workers, and that the marginal effect of extensive energy poverty is lower than that of intensive energy poverty. In addition, the net effect of energy poverty on rural wages shows labor and regional heterogeneity: the inhibitory effect is most pronounced for low-skilled workers, middle-wage workers, and workers in the Western region. Furthermore, energy poverty limits rural workers' access to education and damages their health, thereby suppressing their productivity and wages. Our results suggest that enhancing the accessibility of energy consumption in rural areas and reducing the incidence of energy poverty are critically important, and that the implementation and optimization of energy poverty alleviation policy should give full consideration to labor force heterogeneity and regional heterogeneity.
INTRODUCTION
Energy poverty is one of the three major challenges facing the world's energy system and an important symbol of poverty in developing countries, and it has long plagued the development of some countries and regions (Che et al., 2021). On the one hand, the rural household energy structure based on fossil energy and traditional biomass has not been transformed. The extensive use of such energy has caused environmental pollution and restricted improvements in the quality of life of rural families (Gupta et al., 2020). According to the third agricultural census report of China in 2017, the proportion of surveyed households using electricity was 58.6%; the proportion using gas, natural gas, and liquefied petroleum gas was 49.3%; the proportion using firewood was 44.2%; the proportion using coal was 23.9%; the proportion using biogas and solar energy was 0.9%; and the proportion using other energy was 0.5%. On the other hand, energy poverty has widened the quality-of-life gap among residents of different income classes and become a stumbling block for low-income rural families pursuing a happy life. According to the National Bureau of Statistics of China, the per capita disposable income of rural households in 2019 was 16,020.7 yuan, only 37.8% of the per capita disposable income of urban residents, and the housing expenditure of rural households (including water, electricity, gas, and heating expenditure) was 2,871.3 yuan, only 42.3% of that of urban residents. Rural household energy supply is insufficient and the utilization structure is unreasonable, which makes it difficult to escape the low-income dilemma.
Eliminating energy poverty and promoting balanced development between urban and rural areas have become common goals in developing countries (Bardazzi et al., 2021; Faiella and Lavecchia, 2021). The Chinese government attaches great importance to helping rural families escape energy poverty and increase their income, and has issued a series of policies that have achieved certain results. In 2018, the National Energy Administration of China issued the notice of the action plan for further supporting energy development in poor areas and boosting poverty alleviation (2018-2020), which clearly put forward the strategic goal of "orderly and effective promotion of energy development in poor areas and significant improvement of energy universal service level." Data released by the National Energy Administration of China in 2020 show that over the preceding 8 years, accumulated investment in major energy projects in poor areas exceeded 2.7 trillion yuan, which effectively drove local economic development and played an important role in poverty alleviation. So, whether and how energy poverty reduces rural labor wages at the micro level has become the focus of this study. The main marginal contributions of this study are twofold. First, we deconstruct energy poverty into extensive energy poverty and intensive energy poverty and implement a more comprehensive measurement using the Foster-Greer-Thorbecke (FGT) index and rural household micro-survey data from China. Second, we empirically study the net effect of energy poverty on rural labor wages and its heterogeneity with microeconometric methods, and further investigate the impact mechanism from the education effect and the health effect, which provides a new explanation for energy poverty alleviation.
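To make the two margins concrete, a minimal sketch of the FGT index applied to household energy consumption, assuming a consumption series in coal-equivalent units and an energy poverty line z; setting the aversion parameter α = 0 yields the headcount ratio (extensive energy poverty) and α = 1 the normalized consumption gap (intensive energy poverty). The numbers are illustrative only:

```python
import numpy as np

def fgt(consumption, z, alpha):
    """Foster-Greer-Thorbecke index over household energy use:
    P_alpha = (1/N) * sum over poor households of ((z - y_i)/z)^alpha;
    alpha = 0 reduces to the headcount ratio (incidence)."""
    y = np.asarray(consumption, dtype=float)
    poor = y < z
    if alpha == 0:
        return poor.mean()                   # extensive margin
    gap = (z - y[poor]) / z
    return (gap ** alpha).sum() / y.size     # intensive margin (alpha = 1)

# Hypothetical per-capita consumption (kgce) and energy poverty line
kgce = np.array([180.0, 95.0, 240.0, 60.0, 130.0])
z = 150.0
print("extensive (headcount):", fgt(kgce, z, alpha=0))
print("intensive (gap):", fgt(kgce, z, alpha=1))
```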
LITERATURE REVIEW AND HYPOTHESIS
Energy is the material basis for the survival and development of human society (Fabbri and Gaspari, 2021). Energy poverty not only restricts economic development but also affects physical and mental health and labor productivity. The British scholars Bradshaw and Hutton (1983) were the first to pay attention to the problem of energy poverty. According to the International Energy Agency (IEA, 2010), the energy-poor group is defined as the group that cannot obtain electricity or other modern clean energy services and mainly relies on traditional biomass energy or other solid fuels for cooking and heating. In the existing research, energy poverty is mainly manifested as extensive energy poverty and intensive energy poverty (Chang et al., 2020). Extensive energy poverty refers to the incidence of energy poverty in a country or region, that is, the proportion of households whose energy consumption is lower than the energy poverty line. Intensive energy poverty refers to the relative gap between the energy consumption of energy-poor families and the energy poverty line. Most existing studies have verified the negative correlation between comprehensive energy poverty and the income of rural residents (Liu et al., 2020) and hold that the key to poverty alleviation in rural areas lies in the realization of electrification (Dijk, 2012). However, it is rare to explore the impact of energy poverty on rural labor wages from the perspectives of extensive and intensive energy poverty. Specifically, extensive energy poverty largely reflects the loss of modern energy resources and services, which can not only create more employment opportunities (Dinkelman, 2011) but also improve labor productivity by powering modern tools (Ifeoluwa and Richard, 2021). Therefore, we infer that extensive energy poverty will have a negative impact on the wages of rural workers. In addition, intensive energy poverty reflects the difficulty that energy-poor families face in obtaining modern energy resources and services (Apergis, 2015); therefore, there is a negative correlation between intensive energy poverty and the wages of rural workers. Finally, although the positive effects of electricity and clean energy use on women's employment opportunities and labor productivity have reached consensus in academic circles, there are still differences over the effects of electricity and clean energy use on the productivity and wages of male workers (Grogan and Sadand, 2013; Topcu and Tugcu, 2019). This means that energy poverty will have a differential impact on the wages of different workers. Therefore, this article proposes Proposition 1:
Proposition 1. Energy poverty will have a negative effect on the wages of rural workers through both extensive and intensive energy poverty, and this effect will differ across workers.
So, how does energy poverty restrain the wages of rural workers? What researchers have discussed is that energy poverty limits the educational attainment and health of individual workers, and the positive correlation between education and health levels and individual labor productivity has been supported by existing studies (Lucas, 1988, 2004). From the perspective of education, family energy poverty leads school-age children to spend more time collecting firewood and other resources, and their access to education is thereby limited (Sothea, 2019). Moreover, this inhibitory effect is more obvious for rural female children, because they need to spend more time on household energy collection (Nankhuni and Findeis, 2004; Ndiritu and Nyangena, 2011). In addition, the research of Martins (2005), Khandker et al. (2012), and Aguirre (2014) shows that the promotion of electrification has a positive effect on the enrollment rate and home study time of school-age children and significantly improves the average education level in a region. From the perspective of health, energy poverty damages the health of residents and then limits their productivity and wages (Gordon et al., 2014; Sadath and Acharya, 2017; Zhang et al., 2019). For example, Barreca et al. (2014) found that reducing the use of coal in heating resulted in a decrease of about 1.25% in all-age mortality and 3.27% in infant mortality in the United States between 1945 and 1960. Based on Turkish data, Cesur et al. (2018) found that replacing coal with natural gas significantly reduced the risk of death for adults and the elderly, and every 1% increase in the household natural gas subscription rate resulted in a decrease of about 1.4% in the overall mortality rate for adults and the elderly. Maji et al. (2021) found that electrification can reduce the probability of cough by about 35–50%. In short, energy poverty not only limits the possibility of rural workers obtaining education resources but also damages their health, resulting in losses of human capital and labor productivity. Therefore, this article proposes the second and third theoretical hypotheses: Proposition 2: Energy poverty will limit the access of rural workers to education during school age, resulting in losses of human capital and labor productivity.
Proposition 3: Energy poverty will damage the health of rural workers, resulting in losses of labor productivity and wages.
Data
The data used in this article are derived from the Chinese General Social Survey (CGSS) of 2015, which includes six modules: the "Core Module," "Ten Years Review," "EASS Module," "ISSP Module," "Energy Module," and "Legal Module." The survey covers the basic personal information of the subjects, family information, social attitudes, energy use, and knowledge of laws and regulations. We chose CGSS 2015 as the research data for two reasons: first, the "Energy Module" only exists in CGSS 2015, and its data meet this article's demand for index data to measure extensive energy poverty and intensive energy poverty. Second, CGSS is the earliest national, comprehensive, and continuous academic survey project in China; it adopted a multi-stage stratified probability-proportionate-to-size (PPS) random sampling method and covered more than 10,000 households in 25 provinces across the country.
Due to the inseparable relationship between household energy access capacity and overall household resource endowment (Pachauri et al., 2004), we calculated the comprehensive energy consumption of each household and then obtained the county-level rural household energy poverty index by taking the arithmetic mean. However, in CGSS, the units of fuel consumption for electric power, pipeline natural gas, bottled liquefied gas, diesel, firewood, charcoal, and coal differ. We therefore first converted them into kilograms of standard coal equivalent (kgce) and then calculated the comprehensive energy consumption.
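A minimal sketch of this conversion step is given below. The conversion coefficients are indicative values only (the official standard-coal-equivalent coefficients should be taken from the China Energy Statistical Yearbook), and the record layout is hypothetical:

```python
# Convert heterogeneous household fuel records into total kgce.
# Coefficients are indicative placeholders, not authoritative values.
KGCE_PER_UNIT = {
    "electricity_kwh": 0.1229,  # kgce per kWh (equivalent-value method)
    "natural_gas_m3": 1.3300,   # kgce per cubic meter
    "lpg_kg": 1.7143,           # kgce per kg of bottled liquefied gas
    "diesel_kg": 1.4571,        # kgce per kg
    "firewood_kg": 0.5710,      # kgce per kg
    "coal_kg": 0.7143,          # kgce per kg of raw coal
}

def comprehensive_kgce(consumption: dict) -> float:
    """Sum a household's fuel records, each converted to kgce."""
    return sum(KGCE_PER_UNIT[fuel] * qty for fuel, qty in consumption.items())

household = {"electricity_kwh": 1200.0, "coal_kg": 300.0, "firewood_kg": 500.0}
print(round(comprehensive_kgce(household), 1))  # comprehensive consumption in kgce
```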
Identification of Energy Poverty
At present, the FGT index constructed by Foster, Greer and Thorbecke (1984) is widely used to measure energy poverty in academic circles. Since this study focuses on extensive energy poverty and intensive energy poverty, we extend the FGT index to identify the two types of energy poverty index. The formula is as follows:

P_a = (1/n) Σ_{i=1}^{q} ((z − x_i)/z)^a.

In this formula, n is the total number of rural households in the sample area; q is the number of rural households whose energy consumption is lower than the energy poverty line; z represents the energy poverty line; and x_i is the energy consumption of household i. In addition, we set the value of the parameter a to 0 or 1. When a equals 0, P_0 represents the incidence of energy poverty, which is used to measure the extensive energy poverty index. When a equals 1, P_1 reflects the relative distance between the energy consumption of energy-poor households and the energy poverty line, which is used to measure the intensive energy poverty index.
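Both indices follow directly from this formula; a minimal sketch (illustrative data, hypothetical variable names):

```python
import numpy as np

def fgt_energy_poverty(x: np.ndarray, z: float, a: int) -> float:
    """FGT energy poverty index P_a = (1/n) * sum over poor households of ((z - x_i)/z)^a.

    a = 0 -> incidence of energy poverty (extensive index P_0);
    a = 1 -> average relative shortfall (intensive index P_1).
    """
    n = len(x)
    shortfall = (z - x) / z
    return float(np.sum(np.where(x < z, shortfall ** a, 0.0)) / n)

x = np.array([200.0, 350.0, 600.0, 1500.0, 90.0])  # household consumption, kgce
z = 414.0                                          # energy poverty line, kgce
print(fgt_energy_poverty(x, z, a=0))  # P_0 = 0.6: three of five households are poor
print(fgt_energy_poverty(x, z, a=1))  # P_1 ~ 0.29: mean normalized gap
```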
As for the calculation of the energy poverty line, we first calculated rural household energy consumption based on rural per capita energy consumption and household size. According to the China Statistical Yearbook 2016, the average number of people per household in 2015 was 3.1. Combined with the 514.04 kgce per capita domestic energy consumption of Chinese rural households calculated by Qiu et al. (2015), the average domestic energy consumption of rural households in China is 1,593.52 kgce. Then, following the practice of Chang et al. (2020), the energy poverty line of rural households in China is set at 414 kgce, obtained by multiplying the average domestic energy consumption of rural households by the ratio of the national poverty line to the per capita net income of rural households.
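The arithmetic can be made explicit; the proportion coefficient is not reported directly in the text above, so it is backed out here from the stated figures:

```python
# Reproduce the energy poverty line arithmetic described above.
persons_per_household = 3.1        # China Statistical Yearbook 2016
kgce_per_capita = 514.04           # Qiu et al. (2015)

household_energy = persons_per_household * kgce_per_capita
print(round(household_energy, 2))  # -> 1593.52 kgce

# The 414 kgce line implies a coefficient of ~0.26, i.e., the ratio of the
# national poverty line to rural per capita net income used by Chang et al. (2020).
print(round(414.0 / household_energy, 3))  # -> 0.26
```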
Model Specification
Theoretical studies show that the impact paths of extensive energy poverty and intensive energy poverty on the wages of rural workers differ, which means that there will be a gap between their effects on rural labor wages. In order to identify this differential effect, this article constructs a wage decision model at the individual level to test the net effect of extensive energy poverty and intensive energy poverty on the wages of rural workers, as follows:

ln(wage)_ijt = α + β ex_poverty_jt + γ_1 ind_control_it + γ_2 fam_control_it + ε_ijt, (2)
ln(wage)_ijt = α + β in_poverty_jt + γ_1 ind_control_it + γ_2 fam_control_it + ε_ijt. (3)

In Equations 2 and 3, i indexes rural individuals, j indexes counties, t is the time, ln(wage) is the natural logarithm of the wages of rural workers, and ex_poverty and in_poverty represent extensive energy poverty and intensive energy poverty, respectively. ind_control and fam_control control for the individual attributes and family factors affecting the wages of rural workers, respectively. The coefficient β represents the net effect of energy poverty on the wages of rural workers, and γ represents the estimated coefficients of the control variables. ε is a random disturbance term.
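A sketch of how Equations 2 and 3 could be estimated in practice is given below (the column names are hypothetical stand-ins; the actual CGSS 2015 variable names differ, and the controls are described in the next paragraph):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cgss2015_rural.csv")  # hypothetical extract of CGSS 2015

controls = ("gender + age + I(age**2) + marriage + education + experience"
            " + np.log(h_income) + real_estate")

# Eq. (2): extensive energy poverty; replacing ex_poverty with in_poverty gives Eq. (3).
eq2 = smf.ols(f"log_wage ~ ex_poverty + {controls}", data=df).fit(cov_type="HC1")
print(eq2.params["ex_poverty"])  # the net effect beta, expected to be negative
```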
The individual attributes and family factors controlled in this article are gender, age, marriage, education, experience, total household income (h_income), and the number of household real estate holdings (real_estate). Specifically, for gender, females are assigned 0 and males 1. For marital status, unmarried takes the value 0, while first marriage, remarriage, divorce, and widowhood take the value 1. For education level, never having attended school takes the value 1, primary school (including literacy classes) 2, junior high school 3, senior high school (including technical secondary school) 4, junior college 5, undergraduate college 6, and graduate school 7. Work experience is measured as the time elapsed since the interviewee first engaged in non-agricultural work. In addition, we control for the squared term of age, following the general practice of the existing literature (Wu et al., 2020).
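A sketch of this variable coding (raw column names and category labels are hypothetical; the actual CGSS codings differ):

```python
import pandas as pd

df = pd.read_csv("cgss2015_rural.csv")  # hypothetical extract, as above

df["gender"] = (df["gender_raw"] == "male").astype(int)            # female = 0, male = 1
df["marriage"] = (df["marital_status"] != "unmarried").astype(int) # ever-married = 1

edu_codes = {"no_schooling": 1, "primary": 2, "junior_high": 3,
             "senior_high": 4, "junior_college": 5,
             "undergraduate": 6, "graduate": 7}
df["education"] = df["edu_raw"].map(edu_codes)
df["age_sq"] = df["age"] ** 2  # squared age term used in the wage equations
```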
Baseline Regression
Table 1 reports the baseline estimates for models (2) and (3). The net effect of extensive energy poverty on rural labor wages is reported in columns 1–3, and the net effect of intensive energy poverty in columns 4–6. We added the control variables to the estimation equations step by step, so as to reduce multicollinearity problems and enhance the robustness of the estimation results: columns 1 and 4 include no control variables; columns 2 and 5 include individual attribute factors; and columns 3 and 6 include both individual attribute factors and family factors. The results show that both extensive energy poverty and intensive energy poverty have a significant negative effect on the wages of rural workers, whether or not the control variables are added. Specifically, the marginal effect of extensive energy poverty on the wages of rural workers is −0.21, smaller in magnitude than that of intensive energy poverty, −0.36. This shows that although a higher incidence of energy poverty reduces the wages of rural workers, its inhibitory effect is smaller than that of intensive energy poverty. Therefore, to expand the effect of energy on economic poverty alleviation, in addition to enhancing the accessibility of energy consumption in rural areas and reducing the incidence of energy poverty, narrowing the gap between the energy consumption of rural low-income families and the energy poverty line is even more important.
From the estimation results for the control variables, the coefficient of gender is significantly positive with a marginal coefficient of 0.30, indicating that the average wage of rural male workers is 30% higher than that of female workers. This is because, in Chinese tradition, rural women undertake more domestic activities and also take care of the elderly and children, which affects their labor supply and productivity. The coefficient of age is significantly positive and that of the squared age term significantly negative, indicating an inverted U-shaped relationship between the wages and age of rural workers, with an inflection point at about 38 years old. The estimated coefficient of the marriage variable is significantly positive at 0.14, meaning that the average wage of married rural workers is 14% higher than that of unmarried individuals. Education and work experience are positively correlated with the wages of rural workers, with marginal coefficients of 0.08 and 0.005, respectively. In addition, total household income also has a positive effect on the wages of individual rural workers: every 1% increase in total household income leads to an increase of 0.69% in average individual wages. However, there is a significant negative correlation between the number of household real estate holdings and the wages of rural workers: each additional property reduces the wages of rural workers by 1.42%. This is because an increase in household real estate reduces the employment participation and labor time supply of individual workers and thereby pulls down their wages.
Quantile Regression Estimation
According to the statistical data, rural workers at different wage levels differ in their industries of employment and occupational distribution, which means that rural workers at different wage levels may be affected differently by energy poverty. In order to verify this inference, we use a quantile regression (QR) model to further investigate the differentiated effect of energy poverty on the wages of rural workers at different quantiles. Table 2 reports the response of the wages of rural workers to extensive energy poverty and intensive energy poverty at the 25th, 50th, and 75th quantiles. In both the extensive and intensive energy poverty estimations, energy poverty has a significant inhibitory effect on the wages of rural workers at all quantiles. Furthermore, compared with rural workers with higher and lower wage incomes, the middle wage group is more negatively affected by extensive energy poverty and intensive energy poverty. In summary, Proposition 1 is supported.
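A sketch of the quantile regression step (same hypothetical columns as in the OLS sketch above):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cgss2015_rural.csv")  # hypothetical extract, as above
formula = ("log_wage ~ ex_poverty + gender + age + I(age**2)"
           " + marriage + education + experience")

# Net effect of extensive energy poverty at the 25th, 50th, and 75th quantiles;
# replacing ex_poverty with in_poverty gives the intensive-poverty estimates.
for q in (0.25, 0.50, 0.75):
    fit = smf.quantreg(formula, df).fit(q=q)
    print(q, fit.params["ex_poverty"])
```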
Labor Force Heterogeneity
In reality, rural workers are not homogeneous individuals but exhibit obvious heterogeneity in human capital. Differences in human capital not only lead to labor stratification but also cause different types of workers to show different identities and levels of labor productivity in the labor market. Therefore, this part focuses on the net effect of energy poverty on the wages of rural workers with different skills. Following Borjas (1999), we divide rural workers into high-skilled and low-skilled workers according to their education or work experience. Workers with a university degree or above are classified as high-skilled, and workers with high school degrees or below are
classified as low-skilled. In addition, workers above the sample mean of work experience are classified as high-skilled, and workers below the mean as low-skilled. Table 3 reports the net effects of extensive energy poverty and intensive energy poverty on the wages of rural workers with different skills. The results show that, whether grouped by education or by work experience, extensive energy poverty and intensive energy poverty have a significant inhibitory effect on the wages of rural workers of all skill types. Comparatively speaking, the wages of low-skilled workers are more restrained by the two types of energy poverty. In addition, whether in the low-skilled or high-skilled sample, the labor heterogeneity analysis further confirms that the inhibitory effect of extensive energy poverty on the wages of rural workers is greater than that of intensive energy poverty. These findings further support Proposition 1.
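The skill grouping just described is a simple deterministic classification; a sketch (hypothetical columns, education coded 1–7 as above, with junior college or above taken as a university degree):

```python
import pandas as pd

df = pd.read_csv("cgss2015_rural.csv")  # hypothetical extract, as above

# Education split: code >= 5 (junior college or above) -> high-skilled.
df["high_skill_edu"] = (df["education"] >= 5).astype(int)

# Experience split: above the sample mean -> high-skilled.
df["high_skill_exp"] = (df["experience"] > df["experience"].mean()).astype(int)
```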
Region Heterogeneity
Due to the large gap in economic development and the obvious differences in energy resource endowment across Eastern, Central, and Western China, energy resource supply and labor employment policies differ across regions. Based on these considerations, we also examined the net effect of extensive energy poverty and intensive energy poverty on the wages of rural workers in different regions (Table 3). The results show that extensive energy poverty and intensive energy poverty have significant negative effects on the wages of rural workers only in the central and western regions, not in the eastern region. Specifically, the estimated coefficients of extensive energy poverty in the central and western regions are −0.15 and −0.41, and those of intensive energy poverty are −0.29 and −0.40, respectively. This means that the restraining effect of extensive energy poverty and intensive energy poverty on the wages of rural workers is most prominent in the western regions. Therefore, the implementation and optimization of energy poverty alleviation policy should give full consideration to regional heterogeneity.
Education Effect
Theoretical research shows that the negative impact of energy poverty on the academic and non-academic education of rural workers further affects their labor productivity and wages, and that this is more prominent for rural female workers. Due to the heavy labor cost of solid fuel collection, female workers have to forgo opportunities to participate in education and training, employment, and other income-generating productive activities (Cooke, 1998). In developing countries, female workers spend seven times as much time collecting fuel as adult male workers and 3.5 times as much as male workers of the same age. China is no exception: a survey on the time allocation of "indoor" activities of farmers in poor areas of China shows that female workers spend an average of 26 h a week on firewood collection and cooking activities, much more than the 9 h a week spent by male workers (Ding and Chen, 2002).
In order to verify the negative effect of energy poverty on the education of rural workers, we empirically test the education effect of energy poverty on rural workers with a mediation effect model. In the benchmark regression, we have already verified a significant positive correlation between education and the wages of workers. Therefore, according to the identification logic of the mediation effect model, we can confirm that energy poverty affects the wages of rural workers through the education effect as long as we verify a significant negative effect of energy poverty on the education of rural workers. In Table 4, columns 1 and 2 report the net impacts of extensive energy poverty and intensive energy poverty, respectively, on the education of rural workers. Both extensive energy poverty and intensive energy poverty reduce the average education level of rural workers, with marginal coefficients of −0.56 and −0.24, respectively, which is in line with Proposition 2 above. In addition, the estimation results for the female and male subsamples show that both types of energy poverty significantly reduce the average education level of female and male individuals. Comparatively speaking, the average education level of female workers is more restrained by energy poverty.
Health Effect
Theoretical research infers that energy poverty damages the health of rural workers and thereby inhibits their labor productivity and wages. In fact, that the extensive use of solid fuels such as firewood and coal damages residents' health has been fully verified in western countries. For example, Peabody et al. (2005) evaluated the health effects of various types of cooking fuels in terms of exhaled carbon monoxide content, maximum vital capacity, and the prevalence of chronic obstructive pulmonary disease and found that solid fuels were the most harmful source. Lim et al. (2012) assessed the global burden of disease and injuries and found that indoor air pollution caused by solid fuel utilization caused 3.55 million premature deaths worldwide in 2010. So, does the health damage effect of energy poverty exist in China? Analogously to the education effect test, we tested the health effect of energy poverty on rural workers with the mediation effect model. In the benchmark regression, we have already verified the significant positive correlation between health and the wages of workers. Therefore, as long as we verify that energy poverty has a significant negative effect on the health of rural workers, we can confirm that energy poverty affects the wages of rural workers through the health effect. According to the estimates in Table 5, both extensive energy poverty and intensive energy poverty significantly reduce the health level of rural workers, and this health damage effect is more obvious for female workers; that is, Proposition 3 is supported.
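The mediation logic used for both the education effect and the health effect reduces to one additional regression per mediator; a sketch (hypothetical columns, with health assumed to be an ordinal self-rated score):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cgss2015_rural.csv")  # hypothetical extract, as above

# Step 1 is the baseline finding that education and health raise log wages.
# Step 2, sketched here: each poverty measure must significantly reduce the mediator.
for mediator in ("education", "health"):
    for poverty in ("ex_poverty", "in_poverty"):
        fit = smf.ols(f"{mediator} ~ {poverty} + gender + age", data=df).fit()
        print(mediator, poverty, fit.params[poverty], fit.pvalues[poverty])
```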
CONCLUSION AND POLICY IMPLICATIONS
This study deconstructed energy poverty into extensive energy poverty and intensive energy poverty, analyzed the net effect of energy poverty on rural labor wages and its heterogeneity with microeconometric methods, and further investigated the impact mechanism of energy poverty on rural labor wages through the education effect and the health effect. The main conclusions are as follows. First, both extensive energy poverty and intensive energy poverty have a significant negative effect on the wages of rural workers; the marginal effect of extensive energy poverty is −0.21, smaller in magnitude than that of intensive energy poverty, −0.36. Second, rural workers with middle wages are more negatively affected by extensive energy poverty and intensive energy poverty. Third, extensive energy poverty and intensive energy poverty have a significant inhibitory effect on the wages of rural workers of all skill types, and the wages of low-skilled workers are more restrained by the two types of energy poverty. Fourth, the negative effect of extensive energy poverty and intensive energy poverty on the wages of rural workers is most prominent in the western regions. Fifth, energy poverty limits the access of rural workers to education and damages their health, resulting in decreases in labor productivity and wages. There is often a vicious circle between the lack of adequate and affordable energy services and low income. As an important part of the millennium development goals of China and other developing countries, eliminating energy poverty helps optimize the energy consumption structure in rural areas and break this vicious circle. To expand energy poverty alleviation and its positive spillover effects on economic poverty alleviation, in addition to enhancing the accessibility of energy consumption in rural areas and reducing the incidence of energy poverty, narrowing the gap between the energy consumption of rural low-income families and the energy poverty line is even more important. Furthermore, the implementation and optimization of energy poverty alleviation policy should give full consideration to labor force heterogeneity and regional heterogeneity, avoiding one-size-fits-all policy formulation and implementation.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s.
AUTHOR CONTRIBUTIONS
W-PW: conceptualization, writing – original draft, and methodology. W-KZ: data curation, software, and writing – review and editing. S-WG: methodology and writing – review and editing. Z-GC: data curation and supervision. All authors contributed to the article and approved the submitted version.
"Economics"
] |
Toward verification of electroweak baryogenesis by electric dipole moments
We study general aspects of the CP-violating effects on the baryon asymmetry of the Universe (BAU) and electric dipole moments (EDMs) in models extended by an extra Higgs doublet and a singlet, together with electroweak-interacting fermions. In particular, the emphasis is on the structure of the CP-violating interactions and the dependence of the BAU and EDMs on the masses of the relevant particles. In a concrete model, we investigate the relationship between the BAU and the electron EDM for a typical parameter set. As long as the BAU-related CP violation predominantly exists, the electron EDM has strong power in probing electroweak baryogenesis. However, once a BAU-unrelated CP violation comes into play, the direct correlation between the BAU and the electron EDM can be lost. Even in such a case, we point out that verifiability of the scenario still remains with the help of Higgs physics.
I. INTRODUCTION
The particle content of the standard model (SM) has been completed by the discovery of the 125 GeV Higgs boson at the Large Hadron Collider (LHC) [1]. So far, there is no clear signal beyond the SM in laboratory experiments. Nevertheless, the cosmological problems such as the origin of the baryon asymmetry of the Universe (BAU) and identification of the cold dark matter still remain unsolved within the SM.
One of the mechanisms for generating the BAU is electroweak baryogenesis (EWBG) [2]. In this scenario, the BAU arises during the electroweak phase transition (EWPT), and its feasibility depends on properties of models at the GeV/TeV scales. From the viewpoint of testability, EWBG is, among others, the first scenario that will be verified or falsified by ongoing and upcoming experiments. As is well known, the SM has two drawbacks that prevent it from generating the BAU: the absence of both a strong first-order EWPT [3] and a sufficient amount of CP violation [4]. Supersymmetric (SUSY) models may naturally solve both issues simultaneously. For example, in the minimal SUSY SM (MSSM), a light scalar top (stop) could induce the strong first-order EWPT, and the fermionic superpartners provide a substantial amount of CP violation. However, it turns out that the light stop scenario in the MSSM is not consistent with the LHC Run 1 data, such as the Higgs signal strengths and the direct stop searches [5]. Given this fact, colored particles may no longer be candidates for achieving the strong first-order EWPT. Therefore, whatever the UV theory might be, the possibility of EWBG can be investigated in the framework of an effective field theory of non-colored particles after integrating out irrelevant heavy degrees of freedom, i.e., UV theories ⊃ multi-Higgs + EW-interacting fermions.
The experiments most sensitive to CP violation are measurements of the electric dipole moments (EDMs) of the electron, neutron, atoms, etc. Clarifying the relationships between the BAU-related CP violation and the EDMs is indispensable for testing the EWBG scenario. In some analyses in the literature, the CP-violating effect is incorporated via higher-dimensional operators assuming only one Higgs doublet, and the BAU is evaluated therewith. In such a case, CP-violating effects peculiar to finite temperature, such as the resonant enhancement pointed out in Ref. [6], are missing, which drastically changes the correlation between the BAU and the EDM.
In this Letter, we clarify similarities and differences between the BAU-related CP violation and the EDM-related one, with particular emphasis on the structure of the interactions and the mass dependences of the relevant particles. As an illustration, we consider a framework in which the Higgs sector is augmented by an additional Higgs doublet and a singlet, and in which SU(2)_L doublet fermions and a singlet fermion are introduced to accommodate CP violation for baryogenesis. In our setup, the structure of the CP-violating interactions is more generic than in SUSY models. We evaluate the CP-violating source term for the BAU in the closed-time-path formalism and relate it to the electron EDM. The correlation between the two CP-violating quantities is elucidated as a function of the EW-interacting fermion masses.
As a specific example, we consider a next-to-MSSM-like model and work out the relationship between the BAU and the electron EDM. It is found that the electron EDM is a useful probe of the baryogenesis-favored region as long as the BAU-related CP violation predominantly exists in the model. However, a BAU-unrelated CP violation, if it exists, can alter the intimate connection between the BAU and the EDM, which makes it difficult to test EWBG via the electron EDM experiment alone. Nevertheless, such a specific case is possible only when the doublet-singlet Higgs boson mixing exists, which is needed for a tree-potential-driven strong first-order EWPT, and thus the scenario remains testable in combination with Higgs physics.
II. GENERAL ASPECTS OF CP-VIOLATING EFFECTS ON THE BAU AND EDMS
Before presenting our model, we give a simple but rather generic argument about the relationship between the BAU-related CP violation and the EDM. For illustrative purposes, we consider a framework in which two Higgs doublets and two species of EW-interacting fermions (denoted ψ_{i,j}) are present. To be definite, ψ_i is assumed to be a Dirac fermion and ψ_j a Majorana fermion. This setup applies, in proper limits, to the bino-driven EWBG in the MSSM [7], the singlino-driven EWBG in the next-to-MSSM [8], and the Z′-ino-driven EWBG in the U(1)′-MSSM [9]. We expect that the following discussion holds in other cases after an appropriate translation.
Let us parameterize the relevant interactions schematically as

−L_int ⊃ ψ̄_i (c_L v_a P_L + c_R v_b P_R) ψ_j + h.c., (2)

where v_{a,b} (a, b = 1, 2) denote the Higgs vacuum expectation values (VEVs) and c_{L,R} are complex parameters. With this Lagrangian, we evaluate the source terms in the diffusion equation of ψ_i in the closed-time-path formalism [6]. The vector current of ψ_i obeys

∂_μ j^μ_{ψ_i}(X) = S_{ψ_i}(X) + ⋯, (3)

where only the CP-violating source term is shown on the right-hand side. In the VEV insertion approximation [6], S_{ψ_i} to leading order is induced by the process shown in Fig. 1 and takes the form

S_{ψ_i}(X) ∝ κ_S Im(c_L c_R^*) v²(X) β̇(X) m_i m_j I_f^{ji}, (4)

where κ_S = +1 for (a, b) = (2, 1), κ_S = −1 for (a, b) = (1, 2), and κ_S = 0 for (a, b) = (1, 1), (2, 2); m_{i,j} are the masses of ψ_{i,j}; β̇(X) is the time derivative of β(X) = tan⁻¹(v_2(X)/v_1(X)); and I_f^{ji} denotes a thermal function discussed below. One can see that S_{ψ_i}(X) vanishes not only for Im(c_L c_R^*) = 0 but also when any of the following conditions is fulfilled: v(X) = 0, β̇(X) = 0, or I_f^{ji} = 0. Since the EWPT is of first order, the Higgs VEVs depend on a spacetime variable X, and their profiles can be determined by static bubble configurations at the nucleation temperature. In most cases, the shapes of v(X) and β(X) are well approximated by kink-type configurations, so β̇(X) is proportional to the variation of β(X) along the line connecting the broken and symmetric phases. In the MSSM, β̇(X) roughly scales as 1/m_A² [10], where m_A is the CP-odd Higgs boson mass, which implies that S_{ψ_i}(X) in Eq. (4) would completely disappear if the Higgs sector were composed of only one Higgs doublet, as already indicated by the case κ_S = 0. From this argument, the presence of an extra Higgs boson with a nonzero VEV is expected to be essential for successful EWBG, regardless of how the strong first-order EWPT is realized. It should be kept in mind that there is another type of source term that is not suppressed in the large-m_A limit, which may appear as a higher-order correction to the approximation made here (see, e.g., Refs. [11,12]). As long as the BAU is explained by a resonant enhancement, which is indeed the case in our analysis, such a source term does not play a central role.
The behavior of the thermal function I_f^{ji} is somewhat complicated, and in some specific regions it is strongly governed by finite-temperature physics. Its explicit form, given in Ref. [6], is expressed in terms of two functions I_{ij} and G_{ij} of the thermal energies and widths of ψ_{i,j}, with ω_± = ω_i ± ω_j and Γ_+ = Γ_i + Γ_j. One can see that I_f^{ji} vanishes if Γ_i = Γ_j = 0. Since Γ_{i,j} ≃ gT, where g represents a typical coupling in the model and T a temperature, S_{ψ_i}(X) first emerges at order O(g⁴), assuming |c_L| = |c_R| ≃ g.
As is well known, S_{ψ_i} has a resonant enhancement at m_i = m_j, a behavior that originates from G_{ij}: G_{ij} has a peak at ω_− = 0, which can yield the dominant source for the BAU. We now study the impact of Im(c_L c_R^*) on the EDM. Since the new fermions carry EW charges, gauge interactions with the W boson exist,
where ψ± denote the electrically charged members of the SU(2)_L multiplet fermions. We assume that ψ_i is the neutral member of the same multiplet. In this case, the WW-mediated Barr-Zee diagram is induced, as shown in Fig. 2. The EDM of a fermion f obtained with the mass insertion method takes the form

d_f^{WW}/e ∝ ∓ Im(c_L c_R^*) f_WW(m_i², m_j²), (10)

where the negative (positive) sign applies when f is an up-type (down-type) fermion, and the explicit form of the loop function f_WW is given in Ref. [14]. We emphasize that, unlike S_{ψ_i}(X) in Eq. (4), Eq. (10) does not vanish for (a, b) = (1, 1) or (2, 2); in addition, d_f^{WW}/e is not enhanced at m_i = m_j. These are the prominent differences between the two CP-violating quantities. One may also find that f_WW is rapidly suppressed, roughly as m_j/m_i³, when m_i ≫ m_j, which signifies another distinct feature of the EDM, as discussed below. In what follows, we confine ourselves to the cases (a, b) = (2, 1) and (1, 2).
It is worth commenting that the mass insertion method used in Eq. (10) not only makes the relationship between the CP-violating source term and the EDM transparent but also provides a numerically good approximation.
Eliminating Im(c_L c_R^*) in Eq. (4) using Eq. (10), one finds

S_{ψ_i}(X) ∝ (C_BAU/C_EDM^{WW}) v²(X) β̇(X) (d_f^{WW}/e), (11)

where C_BAU collects the mass and thermal factors of the source term and C_EDM^{WW} the corresponding loop factors of the EDM. In order to see the correlation between S_{ψ_i} and d_f^{WW}/e in more detail, we define

S̄_{ψ_i} ≡ (C_BAU/C_EDM^{WW}) |d_e^exp|, (12)

i.e., the size of the source term attainable when the EDM saturates its experimental bound. In what follows, we use the electron EDM constraint, |d_e^exp| = 8.7 × 10⁻²⁹ e·cm [15]. Here, we drop the factor v²(X)β̇(X) from C_BAU since it is rather model dependent.
In Fig. 3, S̄_{ψ_i} is plotted as a function of m_i with m_j fixed, or the other way around. As an example, we take tan β = 1, and the fixed mass is set to 500 GeV. As explained above, C_BAU has a peak at m_i = m_j. However, the decoupling behaviors in the large-mass limits are substantially different from each other: for varying m_j, S̄_{ψ_i} becomes more or less flat in the large-mass region, while it grows for varying m_i. The latter is due to the rapid suppression of C_EDM^{WW}, which scales as m_j/m_i³ as mentioned above. Note that once d_f^{WW}/e is fixed to the experimental bound, Im(c_L c_R^*) ≳ 1 is implied in the large-mass region. Now we move on to discuss the possibility that the aforementioned correlation between the CP-violating source term and the EDM is spoiled by contamination from BAU-unrelated CP violation. As delineated below, such a situation can arise when we address the issue of the strong first-order EWPT.
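The qualitative shape of Fig. 3 can be mimicked with a deliberately crude numerical toy model. The functions below are not the thermal function of Ref. [6] or the loop function of Ref. [14]; they encode only the two scalings stated above, namely a resonance at m_i = m_j and an EDM-side factor falling as m_j/m_i³:

```python
# Toy stand-ins for the stated scalings; all normalizations are arbitrary.
def c_bau_toy(mi: float, mj: float, gamma: float = 50.0) -> float:
    # Resonant enhancement at m_i = m_j, with a width mimicking thermal widths.
    return 1.0 / ((mi - mj) ** 2 + gamma ** 2)

def c_edm_toy(mi: float, mj: float) -> float:
    # EDM-side loop factor, suppressed as m_j / m_i^3 for m_i >> m_j.
    return mj / mi ** 3

mj = 500.0
for mi in (300.0, 500.0, 1000.0, 3000.0, 10000.0):
    print(mi, c_bau_toy(mi, mj) / c_edm_toy(mi, mj))
# The ratio peaks near m_i = m_j and grows again at large m_i, because the
# EDM-side factor decouples much faster than the BAU-side one.
```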
The SM Higgs sector has to be extended in such a way that the EWPT is of first order. There are two representative cases for achieving this:
• Thermal loop driven case
• Tree potential driven case
The former corresponds, for example, to the SM, the MSSM, a two Higgs doublet model (2HDM), and so on. In such cases, the cubic-like terms arising from bosonic thermal loops play the essential role in inducing the first-order EWPT. In the latter case, on the other hand, a specific structure of the tree-level Higgs potential is the dominant source for generating a barrier separating the two degenerate minima at the critical temperature. One such example is the EWPT in the SM with a real singlet Higgs boson (rSM) [16,17], in which nonzero doublet-singlet Higgs mixing terms are responsible for the strong first-order EWPT. Once the singlet Higgs field (S) exists, it is conceivable that interactions of the form

−L ⊃ S ψ̄ (g_S + i g_P γ_5) ψ (13)

may give rise to an extra source of CP violation.
If the doublet-singlet Higgs mixing is present, Higgs-photon(Z)-mediated Barr-Zee diagrams can then be generated, as depicted in Fig. 2. As far as EWBG is concerned, the new CP-violating phase appearing in Eq. (13) is not directly related to baryogenesis. Therefore, the linear correlation between the CP-violating source term and the EDM in Eq. (11) no longer holds. One interesting possibility is that, if a cancellation among those contributions becomes effective, d_f can be made highly suppressed while d_f^{WW} remains nonzero, so the BAU-related CP violation is not constrained by a single EDM experiment in this case.
Nevertheless, one may probe such a parameter space with Higgs physics since the nonzero doublet-singlet Higgs mixing parameter and g S,P would lead to some deviations in the Higgs signal strengths. We will explicitly demonstrate this possibility in the next section.
So far, we have exclusively focused on the relationship between the CP-violating source term and the EDM. Here, we briefly comment on the dependence of the baryon number density (n_B) on Im(c_L c_R^*). Under some mild assumptions, n_B may be written in terms of a coefficient κ_B, a CP-violating term S_CPV arising from the S_{ψ_i} discussed above, and a CP-conserving particle-changing rate Γ_CPC. For the latter, for example, the interactions in Eq. (2) induce a rate Γ_{ψ_i} expressed in terms of the thermal functions F_{ji} and R_{ji} presented in Ref. [9]. As studied in Ref. [18], Γ_{ψ_i} also has a resonant behavior at m_i = m_j, rendering n_B smaller. It should be emphasized that a cancellation between the first and second terms in Γ_{ψ_i} can occur depending on the choice of Arg(c_L c_R^*) and m_{i,j}. Therefore, n_B does not necessarily take its maximal value at Arg(c_L c_R^*) = π/2 or −π/2, which may relax the EDM constraint to some extent.
III. A MODEL
Now, we define our model and give the basic ingredients for calculating the BAU and the electron EDM. The particle content of the Higgs and new EW-interacting fermion sectors of the model is shown in Table I. The total Lagrangian is written in terms of the two-component spinors Φ̃_{1,2} and S̃⁰, with the convention ǫ_12 = −ǫ_21 = +1. As in the MSSM, to avoid lepton flavor violation, we impose a matter parity under which the new EW-interacting fermions are odd and the SM fermions are even. Furthermore, as in the ordinary 2HDM, another Z_2 symmetry (Φ_1 → −Φ_1 and Φ_2 → Φ_2) is enforced to evade tree-level Higgs-mediated flavor-changing neutral current processes. Depending on the Z_2 charge assignments for the fermions, four types of Yukawa interactions are possible. However, the following analysis does not depend on the type, since the top Yukawa coupling is the only relevant one and is common to all types.
The Higgs fields are parametrized in the standard way in terms of the doublet and singlet components. In the following, we consider an rSM-like limit in which sin(β − α) = 1, where α denotes the mixing angle between the two CP-even Higgs bosons (h_{1,2}). In this case, only one state (defined as h) has the VEV and gives the masses of the gauge bosons and fermions. Since the strong first-order EWPT is assumed to be driven by the tree-level Higgs potential, the heavy Higgs bosons need not exhibit the so-called nondecoupling effect that is required in the thermal-loop-driven strong first-order EWPT case [19]. A detailed comparison between the two cases will be given elsewhere [13].
Since we have the singlet Higgs boson in this model, h mixes with h_S through a mixing angle γ. In our scenario, H_1 is the SM-like Higgs boson whose mass is 125 GeV, and H_2 is the singlet-like Higgs boson, which is assumed to be heavier than H_1. The remaining CP-even Higgs boson originating from the Higgs doublets is denoted H_3 and is heavier than H_2.
Depending on the Z_2 charge assignments of Φ̃_{1,2} and S̃, several types of interactions among the new EW-interacting fermions and the Higgs bosons are possible [13]. Here, we focus on one of them as an example; the corresponding Z_2 charge assignment is listed in Table I.
IV. NUMERICAL ANALYSIS
Following the calculation method formulated and developed in Refs. [6,18,20], we estimate n_B in terms of Γ_B^{(s)}, the baryon-number-changing rate in the symmetric phase, v_w, the velocity of the bubble wall, D_q, the diffusion constant of the quarks, and R, a relaxation term, which is (15/4)Γ_B^{(s)} in our model. n_L is the total number density of all the left-handed quarks and leptons [20,21].
Since the EWPT reduces to that in the rSM, we adopt the S2 scenario investigated in Ref. [17] as a benchmark, in which m_H2 = 170 GeV, cos γ ≃ 0.94, and v_C/T_C = 206.75 GeV/111.76 GeV. In addition, we take tan β = 1, v_w = 0.4, Γ_H̃ = 0.025T, Γ_S̃ = 0.003T, and use the approximation β̇ = v_w Δβ/L_w with Δβ = 0.015. Under this assumption, n_B does not depend on L_w. Moreover, a constant VEV of v_C/2 is used in calculating n_B, which may give a simple approximation of the kink-type VEV profile [13]. For the heavy Higgs boson masses we set 400 GeV, and for the softly Z_2-breaking mass, which mixes Φ_1 and Φ_2, 250 GeV is taken. For the other parameters, we refer to the values adopted in Ref. [9]. In the following, the electron EDM is calculated in the mass eigenbasis of the neutral fermions rather than with the mass insertion method, although the two do not differ much numerically.
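As a quick sanity check on this benchmark, the quoted critical values give a ratio comfortably above the conventional strong first-order criterion v_C/T_C ≳ 1:

```python
# Benchmark EWPT strength, S2 scenario of Ref. [17].
v_C, T_C = 206.75, 111.76   # GeV
print(round(v_C / T_C, 2))  # -> 1.85, well above the v_C/T_C > ~1 criterion
```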
We first present the case where the electron EDM is induced only by the WW-mediated Barr-Zee diagram. In Fig. 4, contours of Y_B/Y_B^obs and |d_e| are shown in the (m_H̃, m_S̃) plane. We take |c_L^{H̃⁰S̃}| = |c_R^{H̃⁰S̃}| = 0.42, φ = 225°, and |λ| = 0. Here, φ is chosen in such a way that the cancellation in Γ_CPC is effective. In this figure, the orange region is excluded by the current experimental limit on the electron EDM, |d_e^exp| < 8.7 × 10⁻²⁹ e·cm, and the dashed line corresponds to |d_e| = 1.0 × 10⁻²⁹ e·cm, which is reachable by future experiments [22]. The black solid (dashed) line indicates Y_B/Y_B^obs = 1 (0.1). One can see that |d_e| is rapidly suppressed as m_H̃ increases but not in the large-m_S̃ case, as discussed in Sec. II. Furthermore, the BAU is sufficiently generated if m_H̃ ≃ m_S̃, due to the resonant effect. Our result shows that the successful EWBG region would be entirely verified by the future electron EDM experiments, even if the BAU calculated here is underestimated by a factor of 10 or more owing to the lack of precise knowledge of the bubble profiles, etc.
Next, we consider the case in which the BAU-unrelated CP-violating phase φ_λH comes into play. Fig. 5 shows the electron EDM in the (|λ|, φ_λH) plane. In this figure, we take the same input parameters as in Fig. 4, now with |λ| ≠ 0. In the region where the Higgs-mediated Barr-Zee contributions cancel the WW-mediated one, the electron EDM becomes highly suppressed, while the same parameters lead to d_n ∼ d_p ∼ O(1) × 10⁻²⁸ e·cm. Although the current experimental bounds on d_n and d_p are not strong enough to probe this parameter region, future experiments might be able to access it [23]. A detailed analysis will be conducted in Ref. [13].
Since d_e^{Hγ} is correlated with the signal strength of the Higgs decay to two photons (denoted μ_γγ; for the explicit formula, see, e.g., Ref. [24]), we also examine it. μ_γγ is represented by the gray lines: μ_γγ = 1.1, 1.0, 0.9, and 0.8 from top to bottom. The whole region is still within the 2σ range of the current LHC data, μ_γγ = 1.17 ± 0.27 (ATLAS) and μ_γγ = 1.14^{+0.26}_{−0.23} (CMS). We remark that the sensitivity to μ_γγ is expected to be improved to the O(5)% level, and that to the Higgs coupling to the gauge bosons (cos γ in the current setup) to the O(0.1)% level, at future colliders such as the high-luminosity LHC (HL-LHC) [25], the International Linear Collider (ILC) [26], and TLEP [27]. Therefore, the testability of EWBG in this scenario persists.
V. CONCLUSIONS
We have studied the relationship between the CP-violating source term for the BAU and the EDMs in a framework where an extra Higgs doublet and a singlet, as well as new EW-interacting fermions (ψ_{i,j}), are introduced. We scrutinized the ratio S̄_{ψ_i} (defined by Eq. (12)) as a function of the EW-interacting fermion masses. In the region where the new fermions are degenerate, S̄_{ψ_i} is resonantly enhanced due to the thermal effect appearing in the source term. In the large-mass limits of the fermions, on the other hand, S̄_{ψ_i} becomes milder or larger depending on the fermion species, a behavior mostly governed by the property of the loop function of the EDM rather than by the CP-violating source term for the BAU.
As a concrete example, we considered the next-to-MSSM-like model and investigated the correlation between the BAU and the electron EDM for a typical parameter set. It is found that, as long as the BAU-related CP violation predominantly exists, the current electron EDM bound places some constraints on the EWBG-favored region, and, more importantly, the whole region would be probed if the bound is improved to 1.0 × 10⁻²⁹ e·cm. However, once the BAU-unrelated CP violation comes into action, the strong connection between the BAU and the electron EDM is no longer guaranteed, which makes it challenging to probe the parameter space with the electron EDM alone. Nevertheless, even in such a case, the scenario could be probed with the aid of Higgs physics.
"Physics"
] |