Light-induced breathing in photochromic yttrium oxyhydrides
When exposed to air, metallic yttrium dihydride (YH2) films turn into insulating and transparent yttrium oxyhydride (YHO). The incorporation of oxygen causes the lattice expansion of YH2 and the emergence of photochromic properties, i.e., YHO darkens reversibly when illuminated with light of adequate energy and intensity. The bleaching of the photodarkened samples once the illumination has stopped, however, is much faster in air than in an inert atmosphere. According to this experimental evidence, the photochromic mechanism has to be related to an oxygen diffusion and exchange process. Since this process is accompanied by a lattice expansion/contraction, it can be said that YHO "breathes" when subjected to illumination/darkness cycling. Another interesting side effect of the breathing is the unexpected enhancement of the hydrophobicity of the YHO samples under illumination. A theoretical model able to explain the breathing in YHO is presented, together with a discussion of alternative explanations.
I. INTRODUCTION
Yttrium hydride and other rare-earth hydrides are extremely strong reducing agents, a feature that considerably complicates their study. For their adequate handling in air, rare-earth hydride thin films are usually protected against oxidation by, for example, Pd capping layers [1]. However, the incorporation of oxygen in rare-earth hydrides after intentional exposure to air [2][3][4], or even through accidental contamination [5], leads to the formation of oxyhydrides, materials that contain oxide and negatively charged hydride anions [6][7][8][9] and that exhibit very interesting properties. One of the pioneering works on this family of materials was carried out by Miniotas et al. [5], who reported gigantic electrical resistivity in oxygen-containing gadolinium hydride. Later, Mongstad et al. [10] reported photochromic properties in oxygen-containing yttrium hydride, a feature observed very recently by Nafezarefi et al. [2] in other rare-earth oxyhydrides, such as dysprosium, gadolinium, and erbium oxyhydrides.
The photochromism in Y-related compounds can be traced back to Ohmura et al. [11], who observed light-induced reversible darkening in yttrium hydride thin films subjected to high pressures (∼GPa). Despite the importance of the discovery, the emergence of this new inorganic photochromic material went unnoticed at that time, presumably because the pressure range required is not suitable for practical applications. Today, however, it is known that yttrium oxyhydride, as well as other rare-earth oxyhydrides, is photochromic at room temperature and ambient pressure; hence yttrium oxyhydride (YHO), as an inorganic photochromic material, has a multitude of potential applications [12]. Note that in the text we refer to yttrium oxyhydride simply as YHO, a notation that, in principle, is not related to the stoichiometry of the compound, which will be discussed later.
The origin of the photochromic mechanism in YHO is still open to debate and has been attributed to different causes [13][14][15]. In the present paper, we study the wettability of the YHO surface under illumination and in darkness, as well as the photochromic darkening/bleaching dynamics in air and in inert atmosphere, in order to unravel the cause that underpins the photochromism in YHO.
According to our observations, oxygen diffusion takes place during illumination (consequently, the YHO lattice contracts). The displaced oxygen atoms leave behind an oxygen-deficient structure responsible for the optical darkening, which is in agreement with our previous observations [13]. In darkness, the YHO lattice expands back as a consequence of the filling of the oxygen vacancies by oxygen atoms, allowing the film to bleach back to its original state. Since YHO expands/contracts reversibly under dark/illumination cycling, produced by the displacement inwards/outwards of oxygen atoms, we refer to this process as breathing.
Due to this breathing, the correct bleaching of the photodarkened YHO coatings depends on the availability of an oxygen source. Thanks to the light-induced oxygen diffusion, YHO could be used for other purposes such as sensing and optical memories, broadening the traditional fields of application of photochromic materials.
The wettability of YHO under illumination is unusual. All oxides and nitrides of low-electronegativity metals can exhibit hydrophobicity [16,17]; therefore, it can be expected that YHO exhibits hydrophobic properties as well. However, while the surface of yttrium oxyhydride increases its hydrophobicity when illuminated, other metal oxides become hydrophilic under UV illumination. In the latter case, the formation of electron-hole pairs under illumination leads to the creation of defect sites where hydroxyl groups can be adsorbed, leading to hydrophilic properties [18]. Generally, when metal oxides are stored in darkness for periods of time ranging from 7 to 50 days [19,20], oxygen replaces the adsorbed hydroxyl groups, giving rise to hydrophobicity. In the present work, the unexpected behavior observed in YHO, i.e., the enhancement of the hydrophobic properties under illumination, has been found to be caused by the same mechanism, that is, the oxygen enrichment of the surface under illumination.
II. METHODS
YHO thin films were prepared on glass substrates following a two-step deposition process. First, YH2 metallic films were fabricated by magnetron sputtering in a Leybold Optics A550V7 sputter unit. Second, postdeposition oxidation in air transformed YH2 into YHO. In order to achieve YHO upon air exposure, the precursor YH2 films have to be deposited at a chamber pressure above a certain critical value, which results in films with large structural disorder [21]. Further details on the synthesis process of photochromic YHO, both intrinsic and doped with Zr, can be found elsewhere [3,21,22]. A cold white LED array from Thorlabs (color temperature 4600-9000 K) was used as the illumination source for the photodarkening experiments. The crystallographic structure of the obtained films was characterized by x-ray diffraction (XRD) in a Bruker Siemens D500 diffractometer (Cu Kα radiation, parallel-beam geometry). The composition and surface oxidation states were studied by x-ray photoelectron spectroscopy (XPS) in an Ulvac PHI Quantera II instrument. Surface roughness characterizations were performed by atomic force microscopy (scan area of 5 μm²) with a PhotonIc Technologies picostation. The optical transmittance (T) of the YHO films in the clear and photodarkened states was measured using an Ocean Optics QE65000 spectrophotometer and a Perkin-Elmer Lambda 900 with an integrating sphere. Contact angle (CA) measurements were performed using a KSV Attension optical tensiometer in air. A 5-μl drop volume was used for each CA measurement, and three different sessile droplets were measured on several substrates for each value and averaged, with a standard deviation of ±2°. Equilibrium CA values (θ_e) for water, ethylene glycol (EG), and methylene iodide (MeI), both EG and MeI from Sigma-Aldrich, were used to calculate the surface free energies of the yttrium oxyhydride films in the clear and photodarkened states using the van Oss-Good-Chaudhury method [23,24].
The calculations were performed with the Vienna Ab initio Simulation Package (VASP) code [25][26][27], based on density functional theory (DFT), using a plane-wave pseudopotential method together with the projector augmented-wave (PAW) approach [28][29][30]. The generalized gradient approximation (GGA) in the Perdew-Burke-Ernzerhof (PBE) scheme is used to describe the exchange-correlation functional [27]. To describe the electron-ion interaction, standard PAW-PBE pseudopotentials [31] are used, with 1s1 for H, 2s2 2p4 for O, and 4s2 4p6 4d1 5s2 for Y as the valence-electron configurations. The valence-electron wave functions are expanded in a plane-wave basis set, and the use of PAW pseudopotentials allows a moderate plane-wave energy cutoff (E_cut): only plane waves with kinetic energies smaller than E_cut are used in the expansion. Reciprocal-space integration over the Brillouin zone is approximated through careful sampling at a finite number of k points using a Monkhorst-Pack mesh [30]. We choose the energy cutoff to be 700 eV, and the Brillouin-zone sampling mesh parameters for the k-point set are 8 × 8 × 8. In the optimization process, the energy change is converged to 1 × 10−6 eV, and the charge densities are converged to 1 × 10−6 eV in the self-consistent calculation. The range-separated hybrid Heyd-Scuseria-Ernzerhof (HSE06) functional is used for the density-of-states calculations [32][33][34]. The hybrid functional mixes a portion of (short-range) Hartree-Fock exchange (21%) with a portion of PBE exchange (79%) [33,34]. The mixing parameter was selected as the inverse of the high-frequency (infinity) dielectric constant, an approach that is valid if the energy band gap of the system is larger than 3 eV.
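For reference, the computational settings above map onto a handful of standard VASP input tags. The following is a minimal sketch, not the authors' actual input files; only the tags reported in the text are included, and the HFSCREEN value is the standard HSE06 choice, assumed here:

```python
# A sketch of the reported DFT settings as INCAR-style tags (Python dict for readability).
incar = {
    "ENCUT": 700,       # plane-wave kinetic-energy cutoff, eV (reported)
    "EDIFF": 1e-6,      # electronic convergence criterion, eV (reported)
    # Hybrid-functional block for the density-of-states run:
    "LHFCALC": True,    # switch on Hartree-Fock/hybrid exchange
    "HFSCREEN": 0.2,    # HSE06 range-separation parameter, 1/Angstrom (standard value, assumed)
    "AEXX": 0.21,       # 21% short-range Hartree-Fock exchange, i.e., 1/eps_infinity (reported)
}
kpoints = (8, 8, 8)     # Monkhorst-Pack Brillouin-zone sampling mesh (reported)
```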
III. RESULTS AND DISCUSSION

A. Hydrophobicity control through light illumination
YHO thin films exhibit photochromic properties, that is, YHO films undergo a reversible decrease of their optical transmittance when illuminated with light of adequate energy and intensity [22]. Figure 1(a) shows the transmittance in the clear and photodarkened states for a 1400-nm-thick YHO film. This film decreased its luminous transmittance T_lum [35] from 78.5% to 26.7% after illumination. The luminous efficiency of the human eye (photopic vision) is presented in Fig. 1(a) for comparison [35]. How to obtain such an optical contrast by illumination will be discussed in detail in the next section. Recent studies by Nafezarefi et al. [21] revealed that the bleaching dynamics and the photochromic contrast in YHO are affected by Zr doping.
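The luminous transmittance quoted here is the spectral transmittance weighted by the photopic response of the eye. A minimal sketch follows, using a Gaussian stand-in for the CIE V(λ) curve; the actual values in the text follow ref. [35], which relies on the tabulated CIE data:

```python
import numpy as np

wl = np.arange(380.0, 781.0, 5.0)                 # wavelength grid, nm
V = np.exp(-0.5 * ((wl - 555.0) / 42.0) ** 2)     # rough Gaussian stand-in for CIE V(lambda)

def t_lum(T):
    """Luminous transmittance: T is the spectral transmittance sampled on wl (0..1)."""
    return np.trapz(T * V, wl) / np.trapz(V, wl)

# e.g., a flat 78.5% transmittance trivially gives T_lum = 0.785
print(t_lum(np.full_like(wl, 0.785)))
```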
Nonilluminated (clear) YHO thin films show hydrophobicity, with equilibrium contact angle θ_e values of 95° for water; after illumination, however, the θ_e value for water increased to 115°, see Table I and Fig. 1(b). Table I also shows θ_e for ethylene glycol (EG) and methylene iodide (MeI) in the clear and photodarkened states. In the case of MeI, θ_e also increases after illumination, from 43° to 60°, yet it remains constant for EG. Atomic force microscopy (AFM) studies performed on such films revealed a relatively smooth surface, with an rms surface roughness of around 8 nm.
The observed initial hydrophobicity of the YHO films (clear state) can be explained by the electronic structure of rare-earth elements. According to a detailed experimental analysis of the entire rare-earth oxide series carried out by Azimi et al. [17], the unfilled 4f orbitals, shielded by a full octet of electrons in the 5s2 5p6 shell, result in a lower tendency of such compounds to form hydrogen bonds with adjacent water molecules [17,36]. Hydrophobicity is not exclusive to the lanthanide f-shell group; it is achievable in any metal oxide provided that the electronegativity of the metal is low enough [16]. The low electronegativity of Y and the prevalence of yttrium oxide at the surface [37] explain the high θ_e values shown in Table I.
One might expect a decreased hydrophobicity under illumination caused by electron-hole pairs. Such behavior occurs in other metal oxides [18,19,38-41]. As we stated previously, however, the hydrophobicity of YHO is enhanced under illumination. This light-induced decrease of wettability can be explained through changes in the oxygen-to-metal ratio at the surface. In metal oxides, coordinatively unsaturated oxygen atoms work as a Lewis base while the metal cations work as a Lewis acid. The combined Lewis acid and base orientation of the surface causes a high affinity towards water molecules [42]; therefore, the oxygen-to-metal ratio at the surface is crucial for understanding the wettability properties [43].
In order to study the compositional changes of the YHO surface, XPS measurements were performed before and after illumination. The results are presented for C1s, O1s, and Y3d in Fig. 1(c). See Table II for the quantification of the different elements by XPS. The quantification has been done using survey spectra (not shown) considering Y3s, O1s, and C1s (column A) or Y3p, O1s, and C1s (column B) for both the clear and dark state.
The carbon (adventitious) C1s signal can be deconvoluted into three different contributions. The signal corresponding to C-C has been established at 284.8 eV as a charge-correction reference. The other contributions are located at 286.3 eV (attributed to C-O-C and/or C-OH, which are expected to present a 1.0-eV difference in energy and hence are difficult to resolve) and at 288.8 eV (O-C=O) [44].
After illumination, the carbon content on the surface decreases, see Table II. This decrease is more pronounced in the C-C contribution, Fig. 1(c). Since the content of C in the surface decreases, the increase of adsorbed hydrocarbons is ruled out as the possible cause for the light-induced enhancement of the hydrophobicity [45][46][47]. Nevertheless, a decrease in carbon content can result in an increase of the intensity of the XPS contributions located at higher energies [48]. In particular, the decrease of carbon may result in an increase of the O1s intensity when compared to the intensity of the Y3d signal. For this reason, O/Y ratios in Table II have been calculated using Y3s and Y3p levels, which are closer in energy to O1s than Y3d.
The O1s signal is composed of two contributions at 529.0 and 531.2 eV. The former can be attributed to O atoms bound to Y atoms, whereas the latter can be assigned to atomic oxygen [49]. After illumination, the oxygen-to-yttrium atomic ratio of the surface increases, see Table II. This increase is consistent when comparing the O1s level to the Y3s and Y3p levels, and takes place both for O bound to Y as well as for atomic O. The largest increase is observed in the latter, Fig. 1(c). The contributions of the carbonates in the O1s region seem to be negligible. The obtained results for Y3d correspond very well to the Y2O3 stoichiometry. At the surface the samples consist of Y2O3, which agrees with our previous work [37]. The Y3d signal has well-resolved spin-orbit components, namely, Y3d3/2 and Y3d5/2. These components can be deconvoluted into Y2O3, with contributions at 156.6 and 158.4 eV, Fig. 1(c) [49]. An extra doublet is needed to complete the fitting, with contributions at 158.6 and 160.3 eV. In this energy range, the possibilities are Y-OH [50], yttrium carbonates [50], and Y-H [51], the latter being the best candidate, since no evidence of carbonates or hydroxides is found in C1s or O1s. There is no remarkable change in Y3d before and after illumination.

TABLE I. Equilibrium contact angle values (θ_e) for water, ethylene glycol (EG), and methylene iodide (MeI), as well as the measured total surface energy (γ_total) and its components: the Lifshitz-van der Waals interaction term (γ_LW) and the acid-base interaction term (γ_AB), calculated from the Lewis acid and base parameters (γ+ and γ−, respectively). All data are given for the clear and photodarkened states.
When comparing the current XPS results with previous published data [52], it is evident that the films studied here, obtained by an optimized sputtering process [3], present higher homogeneity.
Surface-energy calculations, performed using the van Oss-Chaudhury-Good method [23,24], confirm the lower wettability through the reduction of the total surface energy γ_total under illumination, see Table I. The enrichment in oxygen of the surface, confirmed by XPS, reduces the number of Lewis sites as the surface approaches the Y2O3 stoichiometry. The nonpolar Lifshitz-van der Waals surface-energy component γ_LW decreases from 38.07 to 28.57 mJ/m², while the polar acid-base component γ_AB decreases from 8.98 to 0.31 mJ/m² after illumination. Here γ_AB = 2(γ+ γ−)^(1/2), where γ+ is the Lewis acid and γ− the Lewis base parameter of the surface tension.
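The van Oss-Chaudhury-Good method amounts to solving a small linear system: for each probe liquid, (1 + cos θ)γ_L = 2[√(γ_S^LW γ_L^LW) + √(γ_S^+ γ_L^−) + √(γ_S^− γ_L^+)]. A minimal sketch follows, with commonly tabulated liquid parameters; the EG contact angle used in the example is a hypothetical placeholder, since its value is not quoted in this excerpt:

```python
import numpy as np

# (gamma_total, gamma_LW, gamma_plus, gamma_minus) in mJ/m^2, commonly tabulated values
LIQUIDS = {
    "water": (72.8, 21.8, 25.5, 25.5),
    "EG":    (48.0, 29.0, 1.92, 47.0),
    "MeI":   (50.8, 50.8, 0.0, 0.0),
}

def surface_energy(theta_deg):
    """Solve for (sqrt(gLW), sqrt(g+), sqrt(g-)) of the solid from three contact angles."""
    A, b = [], []
    for name, (g, g_lw, g_p, g_m) in LIQUIDS.items():
        A.append([np.sqrt(g_lw), np.sqrt(g_m), np.sqrt(g_p)])
        b.append((1 + np.cos(np.radians(theta_deg[name]))) * g / 2)
    x, y, z = np.linalg.solve(np.array(A), np.array(b))
    g_lw, g_ab = x**2, 2 * y * z  # gamma_LW and gamma_AB = 2*sqrt(g+ g-)
    return {"total": g_lw + g_ab, "LW": g_lw, "AB": g_ab}

# clear-state water and MeI angles from Table I; the EG angle (64 deg) is illustrative only
print(surface_energy({"water": 95.0, "EG": 64.0, "MeI": 43.0}))
```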
Hydrophobic yttrium-based oxides have been reported in the past [16,53]. In those works, as Y2O3−x coatings approached the Y2O3 stoichiometry, contact angles increased [53]. This pattern is consistent with surface studies of metal-to-oxygen ratios [43]. Consequently, the enrichment in oxygen of the surface under illumination causes the light-induced hydrophobicity enhancement observed in YHO thin films. In the next section, the exchange of oxygen atoms between the film and the atmosphere, induced by illumination, is demonstrated.
B. Light-induced breathing
Photochromic yttrium oxyhydride has been obtained by the oxidation in air of reactively sputtered metallic YH2 thin films. The effect that the ambient humidity plays in this transformation is unclear. The incorporation of oxygen into the YH2 lattice causes an increase of the lattice constant a from 5.20 to 5.34 Å [2,3,54] and hence a displacement of the diffraction peaks towards lower angles. Under illumination, the lattice of the YHO films contracts back, but without reaching the original oxygen-free YH2 lattice constant [55].
After the incorporation of oxygen, NMR studies revealed that most of the hydrogen atoms in YHO remained in a local environment very similar to the tetrahedral positions in YH2 [14]. Small signals, which can be attributed to mobile protons and to oxygen coordination, arise as well after air exposure. Figure 2(a) shows a grazing-incidence XRD pattern corresponding to a yttrium oxyhydride sample in its initial (clear), illuminated (photodarkened), and recovered (bleached) states. The standard diffraction peaks for YH2 are included for reference. The analysis of the XRD patterns revealed how the films undergo an accordionlike transformation: the YHO lattice contracts and expands when subjected to illumination/darkness cycles.
Our previous optical studies pointed to the reversible formation of oxygen-deficient YHO1−x metallic domains [13] within the dielectric YHO lattice as the cause of the photochromic behavior and the lattice expansion/contraction [Eq. (1)]. Since the filling factor ff of the YHO1−x domains is predicted to be very small [13], the factor η must satisfy η ≫ 1. Dilution of the YHO1−x domains in the dielectric YHO structure is necessary to achieve higher optical absorption rather than higher optical reflectance, which is consistent with the experimental observations [10,22].
Very few oxygen atoms need to be released under illumination to produce a large optical contrast. In fact, ff = 0.02 causes a drop of the visible transmittance larger than 30% [22]. In addition, not all the released oxygen atoms necessarily need to leave the sample, since the material is able to host the out-diffused O atoms [56]. This was also confirmed by XPS, Fig. 1(c). Therefore, although the effective medium approximation works very well for modeling the optical properties [22], it is very difficult to confirm the release of oxygen experimentally.
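To illustrate why such a tiny filling factor suffices, consider a generic effective-medium estimate. The sketch below uses the Maxwell-Garnett form with hypothetical permittivities for the dielectric YHO host and the metallic YHO1−x inclusions; the actual effective-medium model and optical constants are those of ref. [22]:

```python
import numpy as np

def eps_mg(eps_h, eps_i, f):
    """Maxwell-Garnett effective permittivity: inclusions eps_i in host eps_h, filling factor f."""
    return eps_h * (eps_i * (1 + 2 * f) + 2 * eps_h * (1 - f)) \
                 / (eps_i * (1 - f) + eps_h * (2 + f))

eps_host = 4.0 + 0.0j        # transparent dielectric host (n ~ 2), assumed
eps_metal = -10.0 + 8.0j     # Drude-like metallic domains, hypothetical values
eps_eff = eps_mg(eps_host, eps_metal, f=0.02)

n_eff = np.sqrt(eps_eff)                  # complex effective refractive index
alpha = 4 * np.pi * n_eff.imag / 550e-9   # absorption coefficient at 550 nm, 1/m
print(eps_eff, alpha)  # even f = 0.02 yields strong absorption in a micron-thick film
```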
However, we postulate that there must be some exchange of oxygen atoms under illumination/darkness between the sample and its surroundings. This exchange, as discussed before, is probably below the detection limit of most conventional techniques. To prove our hypothesis, YHO thin films were subjected to 2-h-period cycles (0.5 h illumination followed by 1.5 h darkness) inside a glovebox filled with N2. The O2 and H2O contents within the glovebox were below 0.1 and 1.4 ppm, respectively. The average transmittance of the film was measured between 600 and 800 nm during cycling and is plotted in Fig. 2(b). In the absence of air, the films lost part of their initial transparency in each cycle, not being able to recover fully. After 4 weeks of continuous cycling within the glovebox, the luminous transmittance T_lum of the samples decreased from 78.5% in the nonilluminated state to 26.7%. These heavily photodarkened films were allowed to bleach in total darkness, both in air and in N2 atmosphere (glovebox). The evolution of T_lum is presented in Fig. 2(c). The bleaching of the photodarkened samples in darkness was much slower inside the glovebox than in air. In addition, during the recovery, a series of transmittance measurements were performed over a period of 24 h, both in air [Fig. 2(d)] and in the glovebox [N2 atmosphere, Fig. 2(e)].
The films kept in air recovered their initial transparency after a few hours [T_lum clear, presented in Fig. 2(c) as a horizontal dashed line], while the films in N2 recovered very little in the same period of time. Since there are no significant differences between the temperature inside and outside the glovebox (both at ∼20 °C), the data presented in Figs. 2(c)-2(e) strongly indicate that a source of oxygen from the ambient is crucial for the adequate recovery of the photodarkened films. The need for ambient oxygen, and possibly water vapor, is consistent with the light-induced oxygen release hypothesis summarized in Eq. (1).
The dependence on the atmospheric composition rules out other possible explanations for the photochromic mechanism, including the light-induced formation of defect pairs or lattice distortion [14]. The release of hydrogen [15] (instead of oxygen) is an alternative explanation that can also be ruled out, but for a different reason. In this case, the reversibility of the process would require the rehydrogenation of the film, a process that cannot take place at ambient pressure [1]. Nevertheless, if hydrogen were released, the film could bleach by the incorporation of O, eventually approaching the Y2O3 stoichiometry. However, this hypothesis is not supported by the x-ray diffractograms shown in Fig. 2(a) and contradicts the reversibility of the process. Besides, the very large band gap of Y2O3 would lead to an increase of T_lum. Such increments are not observed, Figs. 2(c) and 2(d).
Considering the low electronegativity of Y, the idea of oxygen being pushed out of the YHO lattice by illumination may seem counterintuitive at first. The thermodynamics and kinetics of Eq. (1) need further studies for clarification. In the next section, a preliminary theoretical model for understanding the light-induced oxygen release in YHO films is presented.
C. Theoretical considerations
The experimental evidence presented above points to light-induced oxygen exchange between the film and the atmosphere. In the present section, this question is addressed by DFT modeling (ab initio calculations using VASP 5.3.5). It is known that photochromic YHO coatings are obtained experimentally by the partial oxidation of YH2 films in air. As discussed before, the incorporation of oxygen into YH2 results in the expansion of the YH2 lattice. The lattice parameter a increases, and hence the XRD peaks corresponding to YHO appear displaced towards lower angles when compared to oxygen-free YH2, Fig. 3(a). The data presented in Fig. 3(a) correspond to two different samples. The evolution of the XRD pattern for the same sample before, during, and after the YH2-to-YHO transformation can be found elsewhere [3].
The oxygen intake also causes a band-gap opening in the transformation to photochromic YHO. YH2 presents the optical behavior of a metal, but after the incorporation of oxygen it turns into YHO, a wide-band-gap semiconductor. The role played by the ambient humidity has not been studied yet.
Taking the crystalline structure of YH2 as the starting point [Fm-3m, space group symmetry number (SPGN) 225], we build diverse YHO lattices of stoichiometry YHxOy, Fig. 3. Systematic theoretical and experimental studies [3] pointed to F-43m (SPGN 216) as the most energetically favorable yttrium oxyhydride lattice, i.e., stoichiometry x = 1 and y = 1 in YHxOy. In this structure, the oxygen as well as the hydrogen atoms are located in tetrahedral sites. The x = 1 and y = 1 stoichiometry is consistent with recent exhaustive compositional experiments [37,58]. YH2 (225), Y4H6O2 (224), Y2H2O (134), as well as Y4H4O3 (111), present a metallic character, whereas YHO (216) is predicted to be a wide-band-gap semiconductor, as found experimentally. According to these results, YHO crystallizes into a cubic structure with a lattice constant a = 5.29 Å, which corroborates the lattice expansion that takes place in YH2 (a = 5.20 Å) when exposed to air. In particular, the expansion of a and the opening of the band gap after air exposure are predicted by DFT, see Fig. 3(d), where the calculated lattice constant and band gap are plotted as a function of the Y/O ratio. The predicted value of a for YHO is, however, slightly smaller than the experimentally observed value (a = 5.34 Å) [3,54]. Discrepancies may arise from the difficulty in measuring a due to lattice strains, defects, or other deviations from ideality in these thin films.
The Y, O, and H atoms occupy the Wyckoff positions 4c (1/4, 1/4, 1/4), 4a (0, 0, 0), and 4b (1/2, 1/2, 1/2), respectively, in this energetically favorable YHO (216) lattice. YHO belongs to the emerging family of materials called oxyhydrides [3]. The partial oxidation of YH2, and hence the formation of YHO, triggers the expansion of the unit-cell volume [57]. As a consequence of the lattice expansion, the bond distances in YHO are subjected to an oxygen-induced elongation, Fig. 3(c). We show there the Y-O and Y-H bond lengths and the splitting of the Y 3d states at the conduction-band minimum for different YHxOy stoichiometries.
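The Wyckoff description above is enough to build the cell programmatically. A minimal sketch using ASE follows; a = 5.29 Å is the calculated lattice constant quoted in the previous paragraph:

```python
from ase.spacegroup import crystal

# YHO in space group 216 (F-43m): Y on 4c, O on 4a, H on 4b; four formula units per cell
yho = crystal(symbols=["Y", "O", "H"],
              basis=[(0.25, 0.25, 0.25),   # Y, Wyckoff 4c
                     (0.00, 0.00, 0.00),   # O, Wyckoff 4a
                     (0.50, 0.50, 0.50)],  # H, Wyckoff 4b
              spacegroup=216,
              cellpar=[5.29, 5.29, 5.29, 90, 90, 90])

print(yho.get_chemical_formula())  # four YHO formula units
```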
The experimental XPS data and the calculated total density of states of YHO (216) are in good agreement, as shown in Fig. 3(c). The opening of a wide band gap as the oxygen atoms are incorporated in the YH 2 structure is also predicted by the ab initio calculations, Fig. 3(d). However, the model overestimates the band gap. The calculated value of 4.9 eV for YHO (216) is about 1 eV larger than the experimental band gap determined in the photochromic films by optical methods [3,22].
It should be noted that the YHO films, obtained by the oxidation of YH2 previously prepared by reactive sputtering, are polycrystalline and multiphase in nature; note the widening of the XRD peaks of YHO when compared to YH2 in Fig. 3(a). Therefore, the energy-band diagram of the material most likely corresponds to a type-II heterostructure with a staggered band gap.
Assuming YHO (216) as the possible structure of photochromic yttrium oxyhydride, we can now explain the photochromic effect. The projected density of states (DOS) for YHO (216) revealed that both O and H atoms strongly contribute to the topmost valence-band states. However, they are not hybridized with each other, because the H and O atoms are connected to the Y atoms independently from each other. On the other hand, the lowest conduction band is triply degenerate and formed mostly by Y d states, in particular t2g states, Fig. 3(c). This result suggests that the light-induced O release from the film can be caused by the pseudo Jahn-Teller distortion effect: Y atoms are located at the center of the tetrahedral H and O sublattices. Under illumination, the transfer of electrons from the valence band to the t2g bands will render the YHO (216) lattice unstable [59].
As the p orbitals of the O atoms are hybridized with the Y d orbitals, the degeneracy of the t2g states can be lifted by the removal of oxygen atoms. As a result, an O-deficient unit cell with a smaller lattice constant will be created. As reported by Pishtshev et al. [57], there are many O-deficient structural arrangements that can be obtained from YHO, Fig. 3.
Before illumination, the yttrium cations are in the 3+ oxidation state, which is very stable. After illumination, some (very few [13]) of the O atoms will be detached from the Y3+ cations. Those Y atoms evolve from the 3+ to the 2+ oxidation state, which is less stable. In darkness, the Y2+ atoms oxidize back to Y3+ by the incorporation of oxygen atoms that remained within the lattice [56], Fig. 1(c), or are newly incorporated from air.
As a result of illumination, metallic domains of smaller lattice constant will be created in the YHO (216) lattice, which results in the photochromic effect and the lattice contraction observed experimentally. The material seems to be able to host the out-diffused O atoms, which in some cases can reach the surface or even leave the film as demonstrated before. After stopping the illumination, the released O atoms can return to their former positions and the initial optical transparency will be restored.
IV. CONCLUSIONS
When exposed to air, the YH2 lattice expands from 5.20 to 5.34 Å due to the incorporation of oxygen. In addition to the lattice expansion, YH2 turns into YHO, which is transparent and photochromic. The reversibility of the photochromic effect depends on the surroundings of the films, a source of oxygen being necessary for the adequate bleaching of the samples. Therefore, the photochromic mechanism must involve oxygen diffusion and oxygen exchange between the sample and its surroundings. A consequence of the oxygen diffusion is the unusual enhancement of the hydrophobicity and the reversible lattice contraction of the YHO films under illumination. Although further studies are needed, a preliminary theoretical study points to the pseudo Jahn-Teller effect as the possible cause of the light-induced oxygen diffusion observed in YHO films.
"Materials Science",
"Physics"
] |
Are Smaller Cities More Sustainable? Environmental Externalities in Urban Areas. Evidence from Cities in São Paulo, Brazil
The objective of this essay is to explore the relationship between economies of agglomeration, city sizes and negative environmental externalities. In doing so, we contribute to illuminating the controversy on optimal city size, which has been much more concentrated on the reality of developed nations. We emphasize the environmental dimensions of this debate, focusing on urban agglomerations in a developing country. To that end, we test the hypothesis that smaller cities present better environmental quality indicators than bigger urban centers. Our attempt to reject this hypothesis was based upon data on more than 600 cities in the state of São Paulo, Brazil, including its capital city of São Paulo, one of the largest cities in the world with more than 12 million inhabitants. We used cluster analysis for a multivariable study with several environmental indicators (for water quality and for solid waste disposal and management) and an aggregated quality-of-life indicator very similar to the Human Development Index (HDI). Our results reject the hypothesis that smaller cities in a developing country are more environmentally sustainable than bigger cities.
population has grown only 0.1% p.a. (UN-HABITAT, 2005). On a global scale, the urban population will grow from 3.6 billion people in 2011 to about 6.3 billion in 2050, with the vast majority living in cities of low- and middle-income countries. Increases in population density and the intensification of economic activities in urban areas generate positive effects upon human well-being: expansion of employment opportunities and increases in productivity and income. Nonetheless, higher population density and production concentration also generate undesirable effects upon human well-being and a reduction in the quality of the urban environment: congestion and air pollution, for example. Nevertheless, these are not the only negative effects of urban crowding.
Expansion of urban areas and the increase of population density in these spaces demand the choice of better strategies for the development and management of urban agglomerations. Economic analyses argue that the size of a city is directly related to the availability of human, financial and economic resources. At the same time, urban economics reasoning suggests that as cities grow the benefits of agglomeration decrease and their negative effects increase rapidly. However, recent studies, such as the one developed by Au and Henderson (2006) for North American and Asian cities, suggest that big cities are best suited to the promotion of environmental conservation. The argument is based upon the fact that such cities, due to a higher concentration of income, have far more resources available for dealing with environmental issues.
It is worthwhile to emphasize that the arguments by Au and Henderson (2006) go against a widespread belief among policymakers and managers that smaller towns are better suited to maintain the environmental quality of urban areas. For many of these analysts, big cities are not desirable from an environmental perspective. In their opinion, the spatial organization of large urban centers is the worst possible option for sustainability due to the various negative externalities derived from the gigantic size of some cities: congestion, air pollution, intolerable noise levels, and reduced green areas, to mention only a few.
The controversy between those in favor of large-city advantages and those who postulate the greater environmental sustainability of smaller urban centers is far from being resolved. Spatial and environmental analyses are not among the most popular research themes among economists; both areas of investigation tend to be more attractive to other scientists in geography, agriculture, biology, engineering and architecture. Hotelling (1929) was probably a pioneer in introducing the spatial variable into economic analyses. However, even when considering the spatial dimension, economists tend to incorporate it in dealing with issues related to industry location, energy, telecommunications and transportation.
Observers of urban development and environmental issues, Gaigne, Riou and Thisse (2011) developed the study "Are compact cities environmentally friendly?". In so doing, they used the transportation system to measure the environmental and financial costs of large and small cities. The transportation system was chosen because the movement of people and commodities is responsible for about 30% of the total emission of greenhouse gases; in addition, 80% of this emission comes from private cars. Arguing that energy efficiency alone is not enough to solve the problems of pollution and environmental damage, these authors advocate the reduction of travel distances as a way to cut emissions. They conclude that large and polycentric cities are more environmentally efficient than compact cities.
Motivated by the research of Gaigne, Riou and Thisse (2011), this paper sets out to test the hypothesis that compact cities provide better environmental quality than major urban centers. However, unlike Gaigne, Riou and Thisse (2011), we take into consideration other environmental impacts related to urban areas besides the usual air pollution. It is our understanding that the use of a more diversified set of environmental indicators is better suited to illuminate the controversy highlighted above. A broad set of indicators enables broader and more easily understood analyses.
Due to the lack of data for Brazil as a whole, as well as the low reliability of existing data, we have decided to test our hypothesis only for the cities in the state of São Paulo, the powerhouse of the Brazilian economy. In this context, the question to be answered is: do smaller cities have more environmental benefits than large ones? In order to answer it, we use cluster multivariate analysis and factorial analysis to identify the relationship between the size of cities and the level of environmental quality reflected in the selected variables. This is done by means of a comparison between large and small municipalities with regard to environmental quality.
This paper has three sections besides this introduction and a conclusion. Section 2 presents a review of the literature on the existence of urban agglomerations, their origins, benefits and harms, and highlights the strong connection between spatial and environmental studies. Section 3 presents, in turn, the object of study: the municipalities in the state of São Paulo, the selected environmental indicators, the data sources and the methodological procedures (cluster and factorial analyses) followed in our analysis of the environmental indicators of the municipalities. Finally, Section 4 describes the results derived from the analyses. Special attention is given both to the techniques employed and to the difficulties encountered in each of our simulations.
Cities, Economies of Agglomeration, Externalities and the Environment
Urban agglomerations are nothing more than the spatial concentration of economic activities. They occur because of increasing returns to scale, known as agglomeration economies. The existence of cities has become a global phenomenon, as they speed up the economic development of societies (Fujita & Thisse, 1996 and 2002). In fact, the spatial agglomeration of economic activity and economic growth are difficult to separate. It is clear that the process of urban concentration is not new, but its intensification in recent years is impressive (Rosenthal & Strange, 2004). Studies estimate that in 1800 the world urban population concentrated in cities with more than 100,000 inhabitants was less than 2% of the total. In 1850, the proportion would have passed to 2.3%, and in 1900, to 5.5%. Furthermore, data from the World Bank indicate that currently more than 50% of the world's population is urban (Banco Mundial, 1992, 2009a and b). Lampard (1955) argued many years ago that it is hard to make generalizations about the origin and history of cities. This difficulty is due to the fact that cities vary according to time, function and location. Nevertheless, it is possible to identify some points that characterize life in cities: increased economic efficiency, optimization of working conditions, public services and market access. Cities produce a basket of goods that cannot be produced in isolated areas. As presented by Mills (apud Henderson, 1972 and 1974), even the simplest city is able to produce goods for domestic consumption and export, housing and transportation. Thus, the clustering in cities occurs due to economies of scale, arising both from the reduction in production costs (transportation, raw materials, skilled labor, public services) and from the existence of a large captive market.
According to Thisse (2002 and 2011), there is consensus that the economics of space can be considered the product of trade-offs between different types of economies of scale in production and the cost of mobility of goods, people and information. Although it has been repeatedly rediscovered, this trade-off has been at the heart of economic geography since the work of the first location theorists. This suggests that the location of economic activities is the result of a complicated balance of forces that pull and push consumers and businesses in opposite directions.
There are, in fact, various types of economies of scale, which do not exist in isolation. Agglomeration generates favorable situations for all actors involved (Verhoef & Nijkamp, 2003). One of the main arguments in the debate about urbanization is focused on whether the quantity and quality of services demanded or supplied for the urban population are higher than in less urbanized regions when compared in terms of income. The income elasticity of demand for public goods is positive and higher in areas that have a higher degree of urbanization. This positive income elasticity reflects the willingness to pay the taxes and contributions that may finance the provision of the services (Linn, 1982).
Cities, especially larger ones, offer several advantages for firms. They also offer facilities to citizens, such as housing, health and education services, and markets. Therefore, according to Lampard (1955), cities become the rational choice of economic agents. Although cities present several advantages (forces of attraction, or centripetal forces), they also have several factors of expulsion (centrifugal forces; Verhoef & Nijkamp, 2003). These dispersion forces are as important as those of agglomeration in determining the size and efficiency of an urban agglomeration.
It is clear that population density brings with it inherent problems. Among them are congestion, environmental damage, and social issues (slums and ghettos) that end up aggravating the problems of urban communities (Banco Mundial, 2009). Increased demand for homes drives up the rents charged (Glaeser & Gyourko, 2007). Edward Glaeser (2007) presents a series of studies on the increased bidding for land in cities. According to these studies, the price of homes reflects increased income and economies of agglomeration. Rosenthal and Strange (2001) state that the prices of homes are more related to the provision of public services (health, education and leisure) than to the proximity of urban centers. In American cities the poorest population is concentrated in the central areas, while the richest live in outlying areas (suburbs). For Glaeser et al. (2005), this distribution is related to the quality and costs of transportation, besides the income elasticity of demand for land.
In a setting like this, the poorest population makes use of public transport because of its lower cost and tends to live closer to their workplaces. The highest-income people, in turn, have cars and live in more remote regions with greater space. This leads to another diseconomy: urban congestion. Congestion in urban areas affects the costs of firms and the well-being of families. It is a kind of obstacle to trade, because it restricts access to market goods and services, increasing their costs. For Himanen, Perrels and Lee-Gosselin (2005), congestion is not something new, but it is currently enhanced by the accelerated growth of income in urban agglomerations and by consumer-focused policies that discourage the use of public transport by the high-income population. The use of private transport for commuting is inefficient: despite the time spent on the road, buses are the most efficient vehicles, considering the volume of passengers transported and the occupation of road space.
Intensification of trade, increased production of goods and more automobiles tend to worsen air quality, with consequences for the health of those living in big cities (Banister, 1999). Despite the great attention given by researchers to the effects of urban air pollution upon the health of urban center inhabitants, human health is also affected by other negative externalities of urban agglomeration.
Among these negative externalities, inadequate management of solid waste, deficiencies in basic sanitation and improper use of land sites are factors that encourage soil and water contamination. These negative externalities of urban agglomeration are widespread in cities of developing countries. As we will show later in this paper, they are perceived in major Brazilian cities. However, these externalities and their consequences have received less attention from scholars, in spite of the fact that they are probably the most common form of market malfunction in these peripheral societies.
It is widely perceived that as cities become bigger, the marginal benefits of agglomeration tend to decrease, as the diseconomies generated by large cities grow quickly. Eventually, a city becomes so large that the diseconomies exceed the benefits of scale. In some cases, activities may be motivated to transfer to other locations (Wheeler, 2002). Nevertheless, as the private and social costs of agglomeration are not equal, cities may operate at scales above the optimum. Nijkamp (1999) says that geographic space and the environment are like twins; what happens to one affects the other. The most noticeable relationship between environment and space are (positive or negative) externalities. In fact, one can say that externalities are the materialization of spillovers. All space-related activities (housing, transport, industrial development, etc.) are (positively or negatively) connected to environmental changes.
In relation to these aspects, Nijkamp (1999) observes that there are direct relationships between the environment and geographic space. In particular, he emphasizes that: 1) space is the physical market of environmental externalities; this relationship is valid for local issues (soil pollution) and for global issues (greenhouse gas emissions); 2) space is naturally heterogeneous, so the environmental externalities resulting from its occupation are also unevenly distributed; and 3) space and the environment are scarce resources, and to use one implies the consumption of the other (complementary goods); thus, the conservation of one requires good management of the other.
One attempt to establish possible analytical links between geographic space and the environment is the debate on the optimal size of the city, as evidenced in the study by Gaigne, Riou and Thisse (2011). They argue that when we consider that the structure of cities can be either monocentric or polycentric, possible links between environmental performance and population density become evident. Changes in population density affect income and wages; due to these effects, workers and firms may be encouraged to reallocate resources in a new pattern of agglomeration. Thus, policies encouraging the decentralization of cities would be efficient, as they would reduce pollution and increase social welfare. This would also be valid for decentralization within the city itself, with the creation of poles.
Gaigne, Riou and Thisse (2011) claim that the problem with the traditional analyses of the optimum size of cities is that the models have two substantial failures. First, the location of firms and individuals is treated as an exogenous variable; in fact, it is endogenous and determined by prices, wages and returns set by the market. Second, analyses are done for cities individually, without considering the spatial mobility of the factors of production. Taking into account these two factors, they defend the idea that polycentric cities are more efficient from an environmental point of view than compact cities.
It is not difficult to observe that the main novelty in this study is its analytical assumption of the mobility of factors. Gaigne, Riou and Thisse seek to show the effect of an increase in population density upon the environment when both firms and workers can move freely between towns (or within a polycentric city). Their analysis uses the transport system to obtain a measurement of the environmental and financial costs of large and small cities. The transport used in commuting is seen as one of the great market failures and a dead-weight loss to society.
The analysis considers the existence of a trade-off: on the one side, agglomeration reduces the pollution generated by transporting goods between cities; on the other, crowding increases pollution by increasing the distances people travel to work. Both are affected by population density and by the income level of the agglomerations, which are in turn influenced by the agglomeration effect. The model considers two cities, mobility of factors, three primary goods (labor, land and money), the distance between cities (or centers) and one industry. The result obtained by the authors was that large and polycentric cities are "greener" than compact cities.
These results by Gaigne, Riou and Thisse are quite unique among studies that predominantly argue that "small (city) is beautiful" from an environmental point of view. Their findings also motivated our research in a different setting. Will their results be valid for cities located in countries with lower average per capita income? Are large cities also the greenest in developing or emerging countries? What would happen to those results if other environmental characteristics, beyond air pollution, were considered in the comparison of sustainability between large and small cities? In the next pages of this paper, these questions are answered with empirical evidence for cities of different sizes in the state of São Paulo, Brazil.
Cities of Different Sizes and Sustainability: Methods and Procedures
Do smaller cities have more environmental benefits than large ones? Our main objective is to answer this question. We develop a multi-criteria analysis to see if residents of larger municipalities have lower environmental quality than that observed in smaller municipalities. The state of São Paulo, the richest state in Brazil, was selected as the object of analysis. The choice of São Paulo was not random, but rather guided by the availability of environmental data provided by the São Paulo State Environmental Agency (CETESB), besides the economic importance of this state to Brazil (Governo do Estado de São Paulo, 2012).
In this research we used both quality-of-life indicators and environmental indicators. Environmental indicators reveal, in a qualitative and quantitative manner, the result of certain human actions upon the environment (externalities). Quality-of-life indicators reveal the utility gain of families. Among our indicators, four are descriptive, two are performance indicators, two reveal efficiency and one is a welfare indicator. In more detail (a minimal sketch of the coverage-type calculations follows the list):

1) Proportion of households with sewage collection network: measures the supply of the sanitary sewage collection service in response to the increase in population in cities. It is a public and essential service for population health. This indicator varies between zero and one and is the ratio of the number of households served by the sewage collection service to the total number of residences in the city. The distance between the observed value and 1 represents the proportion of households that still have sanitary septic tanks or riverbeds as the final disposal of their waste. Data for this indicator are from the Brazilian Institute of Geography and Statistics (IBGE), collected in the 2010 Census.

2) Sewage treatment as a proportion of sewage collected: measures the proportion of the waste collected by the sewage network that is properly treated before being returned to the environment. The sewage treatment service is a public service that aims to guarantee both public health and the sound management of environmental resources, mainly to avoid the contamination of water and soil. The indicator, which varies between zero and 1, is the ratio of the volume of treated sewage to the sewage collected by the sanitary sewage network. The distance between the observed value and 1 represents the proportion of residues returned to the environment without proper treatment. The data for this indicator are from CETESB, available in the Groundwater Quality Report 2009.

3) Proportion of households served by piped water from the distribution network: measures the supply of the tap water service to the residents and businesses of the town. As a response indicator, it reflects measures taken by the government to supply quality water, which has positive effects upon population health. This indicator also varies between zero and one and is estimated as the ratio of the number of households served by piped treated water to the total number of residences in the city. The distance between the observed value and 1 represents the proportion of households that still rely on lakes or rivers as a direct source of water for survival and economic activities. The data for this indicator are from the IBGE, collected in the 2010 Census (IBGE, 2012).

4) Proportion of households served by regular solid waste collection: measures the proportion of households served by regular garbage collection. It is worth mentioning that it does not reveal the periodicity of collection, but only its existence. This service also has positive effects upon population health. The indicator, which varies between zero and one, is the ratio of the number of households served by regular solid waste collection to the total number of residences. The distance between the observed value and 1 represents the proportion of households that are not served by the urban cleaning service and that, therefore, give various destinations to their waste. The data for this indicator are from the IBGE, collected in the 2010 Census.

5) Number of contaminated areas: quantifies the number of areas contaminated by different sources (industry, trade, gas stations (the major contaminator), waste, accidents, agriculture). It reveals the inappropriate use and mismanagement of land sites. The indicator, however, only highlights the existence of a contaminated area and the source of its contamination; neither the extent of the area nor the severity of the contamination is identified. Thus, it is not possible to show the extent of contaminated areas in relation to the geographic area of the municipality. From the total of registered areas, those already recovered were subtracted. The data for this indicator are from CETESB, released and updated on the website of the institution, where it is also possible to monitor the stage of the investigation process and the restoration of the environmental damage caused.

6) Quality index of landfill waste (IQR): prepared by CETESB for all municipalities via the application of a standard questionnaire, this index aims to assess the quality of the treatment and final disposal of the solid waste produced by inhabitants and collected by the urban cleaning system of each municipality. Varying in a range of zero to ten, the index ranks the solid waste treatment facilities or final disposal sites from inadequate to adequate.

7) Daily production of garbage: measures the tonnage of garbage produced daily in the towns; it reveals the impact of final consumption upon the environment. The indicator is expressed in absolute terms, that is, the total tons produced each day in each of the municipalities. Therefore, its estimates are influenced by two basic characteristics: the size of the municipality and the income level (which directly influences consumption). The data are computed and published by CETESB in the State Inventory of Household Solid Waste and do not take into consideration the generation of waste by productive activities.

8) Daily per capita garbage production: a variation of the previous indicator that is no longer an absolute, but rather a relative, indicator. It is the ratio between the daily production of waste and the total population of the municipality. The data are sourced from the IBGE (population) and CETESB (waste).

9) Municipal Development Index (IFDM Firjan): prepared and released by FIRJAN (Federation of Industries of the State of Rio de Janeiro), the IFDM is an annual monitoring of 5564 Brazilian municipalities. Three areas of human well-being are considered in the IFDM: economic (employment and income), education and health. The IFDM varies from 0 to 1; the closer to 1, the greater the level of "development" of the locality. The IFDM was chosen because it is a quality-of-life indicator very similar to the HDI (Human Development Index) and, in addition to presenting municipal data, it has annual periodicity.
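As flagged above, the coverage-type indicators (1-4) and the per capita indicator (8) reduce to simple ratios. A minimal sketch with hypothetical figures for a single municipality (not data from IBGE or CETESB):

```python
# hypothetical figures for one municipality, for illustration only
households_total = 12_000
households_sewage = 10_200
sewage_treated_m3 = 8_500.0
sewage_collected_m3 = 9_000.0
garbage_tons_per_day = 9.6
population = 32_000

sewage_coverage = households_sewage / households_total           # indicator 1, in [0, 1]
treatment_ratio = sewage_treated_m3 / sewage_collected_m3        # indicator 2, in [0, 1]
garbage_per_capita = garbage_tons_per_day * 1000 / population    # indicator 8, kg/person/day
print(sewage_coverage, treatment_ratio, garbage_per_capita)
```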
Using these indicators, we developed a multivariate analysis. Our database covers 607 municipalities in the state of São Paulo, Brazil. Given the complexity of the analysis, two methods of multivariate analysis were used: cluster analysis and factorial analysis. It is well known that cluster analysis groups objects considering their similarities and differences. The formation of clusters ensures that, inside each cluster, the objects are as homogeneous as possible and, between clusters, as heterogeneous as possible. The method does not distinguish between dependent and independent variables, since the goal is to characterize the groups (Malhotra, 2006). This kind of analysis is particularly effective when the number of observations is large. In this way, cluster groups are created that make the data more malleable for analysis (Cluster Analysis, Chapter 23).
The cluster formation process has basically two steps: the estimation of similarity measures and the adoption of a technique for defining the number of groups. According to Bussab (1990 apud Albuquerque, 2005), there are a number of measures of similarity, and the choice of which to use depends on the convenience of the researcher. For this work we decided to use a non-hierarchical analysis. In fact, a hierarchical methodology was tested, but it proved unsatisfactory for the object of study. Thus, for the analysis in question, the k-means clustering method was used. For the definition of the number of clusters, the v-fold cross-validation method was used as an auxiliary tool.
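A minimal sketch of the clustering step, assuming the nine indicators are assembled into a 607 × 9 matrix; scikit-learn's k-means stands in for the package actually used, and the simple inertia scan below is only a stand-in for the v-fold procedure:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = StandardScaler().fit_transform(rng.random((607, 9)))  # placeholder indicator matrix

# scan candidate numbers of clusters (the paper used v-fold cross-validation instead)
for k in range(2, 9):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(k, round(km.inertia_, 1))

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)  # final grouping
```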
Factorial analysis is a technique that examines the interrelationships between variables so that they can be conveniently described by basic categories, called factors. In doing so, one explains the covariance between the variables by means of a statistical model, assuming the existence of n non-observable variables (Kings, 1997 apud Vicini, 2005). The application of this technique yields a matrix of factor loadings that explains the correlation existing between the common factors. For this purpose, the correlation matrix of the initial indicators of the model under study is used. Associated with the correlation matrix of the indicators are eigenvectors that provide the percentage of variance explained by the factors, in such a way that the sum of the variances of the factors equals the total variance of the model. Factorial analysis was used to construct a municipal quality indicator (IQM). To this end, we used the methodology developed by Smith (1999). The IQM, calculated from the factor loadings, summarizes the information about environmental quality and quality of life for each of the 607 studied municipalities. The indicator was calculated as follows:

$$\mathrm{IQM}_i = \sum_{j=1}^{k} \frac{\lambda_j}{\mathrm{tr}(R)}\, F_{ji},$$

where IQM_i is the quality index for municipality i; λ_j is the j-th characteristic root (eigenvalue) of the correlation matrix; k is the number of factors; F_ji is the factor score (factorial load) of municipality i on factor j; and tr(R) is the trace of the correlation matrix. A numerical sketch of this construction follows.
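The sketch below illustrates the construction under stated assumptions: it uses scikit-learn's maximum-likelihood factor analysis for the factor scores (the paper's exact extraction and rotation method is not specified here) and the eigenvalues of the correlation matrix as the weights, and it ends with the 0-100 normalization described next.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# X is the standardized 607 x 9 indicator matrix from the previous sketch.
k = 5                                     # factors retained (eigenvalues > 0.7)
F = FactorAnalysis(n_components=k, random_state=0).fit_transform(X)  # F_ji

R = np.corrcoef(X, rowvar=False)          # correlation matrix of the indicators
lam = np.sort(np.linalg.eigvalsh(R))[::-1][:k]   # k largest characteristic roots
print("explained share:", lam.sum() / np.trace(R))   # ~0.813 in the paper

iqm = F @ (lam / np.trace(R))             # IQM_i = sum_j lambda_j/tr(R) * F_ji

# Normalization to a 0-100 scale, so that no municipality gets a negative
# index and 100 marks the best quality given the selected variables
iqm_norm = 100 * (iqm - iqm.min()) / (iqm.max() - iqm.min())
```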
After the creation of the IQM, we normalized it to facilitate comparison between the municipalities, so that there were no negative values. Then, to answer the question proposed, a new cluster analysis was developed using the results obtained from the IQM. The k-means technique was used again in order to group municipalities that provide the same municipal quality standard when environmental and socioeconomic elements are considered.
Cluster Analysis: Results and Analyses
In the cluster analysis we considered the data entries for the selected variables for all 607 municipalities of the state of São Paulo. The v-fold test indicated that the number of clusters best suited for the data analysis would be five (5). After the definition of the number of clusters, the cluster analysis, based on the k-means method, was performed. Table 1 summarizes the results with the characterization of the clusters formed and the presentation of the averages (centroids) of each variable for each cluster under analysis. In Table 1 we also highlight the best result for each of the nine variables considered in the analysis.
Examining Table 1, we observe a heterogeneous distribution of best environmental practices within the state of São Paulo: no single cluster combines all the best environmental quality conditions. Nevertheless, by evaluating the characteristics of each cluster for each variable, the composition of each grouping can be analyzed, and this allows us to form a proper picture of the relationship between geographical and demographic size and environmental quality.
Thus, Cluster 1 is formed by 134 municipalities, representing 22.08% of the total. It has a total area of 50 thousand km² and a total population of 1.19 million inhabitants; its average population density is therefore 26.15 inhabitants per km². Each municipality has on average a population of 39 thousand inhabitants; thus, there is a concentration of small municipalities (fewer than 50 thousand inhabitants) in Cluster 1. The municipality with the largest population (São Roque) has 79 thousand inhabitants and the smallest (Nova Castilho), 1125 inhabitants. In terms of geographical area, the largest municipality covers 1556 km² (Teodoro Sampaio) and the smallest, 34 km² (Nova Guataporanga). The average per capita income for Cluster 1 is R$ 552.00, the highest being R$ 916.00 (Julio Mesquita) and the lowest R$ 289.00 (Irapuã).
Cluster 2 has 18.29% of all municipalities (111) in a total area of 348 km². The smallest municipality occupies 3.64 km² (Águas de São Pedro) and the largest, 1482 km² (Botucatu). In terms of population, the county of Guarulhos is exceptional: despite covering only 318 km², it registers 1.2 million inhabitants. The average per capita income in Cluster 2 is R$ 615.00, varying from R$ 1613.00 (São Caetano do Sul) to R$ 415.00 (Potim). It remains to examine whether the presence of a large city is detrimental to the environmental performance of the cluster. In order to rank the quality indexes of the municipalities and their clusters, we constructed an indicator of socio-economic and environmental quality for each municipality using factorial analysis.
Factorial Analysis: Municipal Quality Indicator
The Municipal Quality Indicator (IQM) allowed us to compare the environmental quality enjoyed by residents in each of the municipalities without disregarding the quality of life in each of them. Factorial analysis was used for the composition of the IQM. To use factor analysis, it is necessary to ensure that the selected variables are correlated, i.e., that they have non-zero correlation coefficients. Table 2 presents the correlation matrix for all variables over all 607 municipalities. Note that, despite the existence of correlation, it was very low in some cases, for example between the IQR and the proportion of households served by running water. The fact that some correlations are of small magnitude does not preclude the application of the method for the construction of the IQM.
After checking the feasibility of the application of factor analysis, it was necessary to define the number of factors used for the construction of the IQM, given that our objective was to reduce the number of database variables by creating a smaller number of unobservable variables. The definition of the number of factors depends on the number of variables, the number of observations, and the correlation between variables. To set this number, we retained factors whose eigenvalues were greater than 0.7. For this study, 5 (five) factors were used, with an explanatory power of 81.3%.
We calculated the value of the IQM for each of the municipalities. The results obtained, however, did not allow a direct comparison of the indicators, because there were negative indexes. To carry out the comparison, the IQMs were normalized, generating new values ranging from zero to 100 (one hundred), with the maximum (100) registering the best environmental quality considering the selected variables. Our calculation was done with the nine selected variables, and the largest IQM obtained was 60.76.
Our result indicates that the environmental quality of the municipalities of the state of São Paulo is at least 40% below the possible maximum. For the cluster analysis of the IQM, the k-means method was used, and the number of clusters was determined with the v-fold technique. In this context, the cluster with the best IQM was Cluster 5, and the cluster with the largest variance was Cluster 3. When the heterogeneity among the clusters was considered, the greatest distance was observed between Clusters 5 and 3; these were, therefore, the best and worst clusters, respectively.
Given these results, it is necessary to evaluate the composition of each cluster. Cluster 5 is formed by a single municipality: São Paulo. This result contradicts several studies claiming that larger urban agglomerations present worse environmental quality indicators. Cluster 4 is the second best ranked; it has 147 municipalities, among which the largest in terms of population is Guarulhos. In addition to Guarulhos, other major cities are also components of this cluster (Campinas, São Bernardo do Campo, and Santo André, for example). On the other hand, the cluster with the worst performance was Cluster 3. Composed of 50 municipalities, this cluster concentrates municipalities whose IQM ranges from 11.53 to 18.04. Cluster 3 is formed by small and small-to-medium municipalities. One of its municipalities, Redenção, with 3873 inhabitants, presented the worst result for the whole state of São Paulo.
The formation of the clusters after the creation of the IQM revealed that it is not possible to affirm that smaller cities have better environmental quality than large cities. However, even though the two worst clusters were formed by small municipalities, we cannot say that larger urban agglomerations had better environmental quality indicators. When the 20 best-placed municipalities in terms of IQM were considered, we observed the presence of municipalities of all sizes and with varying levels of per capita income. Our results confirm that, for the state of São Paulo in Brazil and for the selected variables, it is not possible to say categorically that smaller cities offer better environmental quality to their inhabitants.
It is important to emphasize that these results do not change when tested with a second group of variables. In this second group we did not take into consideration two variables: the number of contaminated areas and the daily production of garbage. These changes altered the clusters and the classification of cities; nevertheless, big cities retained the best results. In this context, the result obtained in this paper differs from the consensus that compact cities are better from an environmental point of view. However, it was not enough to validate the proposal submitted by Henderson (1974) and Thisse (2010) that larger urban agglomerations generate environmental gains of scale. The only thing that can be said is that, for the state of São Paulo, the answer to the question that gives this paper its name is no.
Conclusions
This paper discussed the relationship between the size of urban agglomerations and the environmental externalities they experience. Seminal studies in regional economics sought to understand the reasons to live and to produce in urban agglomerations; our research tries to understand the reasons not to live or not to produce in (big) cities. In this context, using the municipalities of the state of São Paulo in Brazil as the object of study, we sought to answer: do smaller cities have more environmental benefits than large ones?
To answer this question, we considered the main reasons for the formation of cities, their advantages, and their disadvantages. The rationale for the existence of cities is based on two fundamental points: economies of scale and the indivisibility of certain goods and services. The economies-of-scale issue is related to gains from the reduction of the marginal costs of production, the formation of a large consumer market, and the concentration of the labor supply. The indivisibility issue is closely related to public services such as electricity supply, water treatment, sanitation infrastructure, hospitals and health care, etc. For all these services, the sunk costs of provision are very high, requiring a significant scale of consumption in order to make their supply feasible.
Despite all these benefits, cities have a number of negative externalities, which must be considered in any economic analysis assessing the social welfare of urban inhabitants. Congestion; water, air, and soil pollution; and violence and crime, among others, are factors that must be weighed when conducting a cost-benefit analysis of whether to live or to set up business in urban agglomerations. As we have shown, the vast majority of scholars agree that as cities grow, the negative effects of agglomeration grow more rapidly than the gains arising from urbanization. The mismatch between the private and social cost curves would be the reason for the existence of cities larger than their optimal size, and the failure to internalize some costs of urbanization would be responsible for the disorderly growth of cities.
However, in looking at the reality of urban areas in a developing country, we arrive at a result different from that of a large portion of the literature. Our results are more in line with those of Henderson (1974), whose result was that, with a new allocation of income and a better quality of life, the big city would become attractive to new residents. Henderson's result was largely due to the fact that in larger cities income tends to be higher. The higher remuneration of agents enables them to demand new goods, including luxury goods such as the environment. The greater willingness and ability to pay, and the new consumption basket of residents of big cities, drive the demand for goods such as clean air, water treatment, and environmental preservation areas.
Motivated by the paper by Thisse (2011), our paper tested the hypothesis that the most efficient structure for environmental preservation and social welfare would exist in large cities. Our investigation was conducted using multivariate analysis. This choice was due to the fact that this approach allows the use of several variables and is quite useful in decision-making for public and private managers. We structured clusters able to combine cities with similar indicators, enabling a comparison between cities of environmental indicators and of indicators of the quality of life enjoyed by residents.
Our results indicate that, for the state of São Paulo, it is not possible to affirm that smaller cities provide better environmental quality to their population. It was also possible to verify that the largest cities, and those with higher income, are able to provide their inhabitants with better conditions of sanitation, water treatment, and garbage collection. This confirms the hypothesis that the scale of cities makes feasible the provision of the so-called indivisible services, including services related to good environmental management. The complexity of the matter recommends further studies on the topic and the consideration of other variables not covered here. However, this research has helped to reinforce the need to study the relationship between land use and occupation and their environmental impacts, as well as to quantify this relationship.
"Economics",
"Environmental Science"
] |
Texture Zero Neutrino Models and Their Connection with Resonant Leptogenesis
Within the low scale resonant leptogenesis scenario, the cosmological CP asymmetry may arise from radiative corrections through the charged lepton Yukawa couplings. While in some cases, as one expects, the decisive role is played by the $\lambda_{\tau}$ coupling, we show that in specific neutrino textures the cosmological CP violation is generated at the 1-loop level only upon inclusion of $\lambda_{\mu}$. With the purpose of relating the cosmological CP violation to the leptonic CP phase $\delta$, we consider an extension of the MSSM with two right handed neutrinos (RHN), which are degenerate in mass at high scales. Together with this, we first consider two texture zero $3\times 2$ Dirac Yukawa matrices of neutrinos. The see-saw-generated neutrino mass matrices, augmented by a single $\Delta L=2$ dimension five ($\rm d=5$) operator, give predictive neutrino sectors with calculable CP asymmetries. The latter are generated through the $\lambda_{\mu , \tau }$ coupling(s) at the 1-loop level. A detailed analysis of the leptogenesis is performed. We also revise some one texture zero Dirac Yukawa matrices, considered earlier, and show that the addition of a single $\Delta L=2$, $\rm d=5$ entry in the neutrino mass matrices, together with newly computed 1-loop corrections to the CP asymmetries, gives a good accommodation of the neutrino sector and the desired amount of baryon asymmetry via resonant leptogenesis, even for rather low RHN masses ($\sim$ few TeV -- $10^7$~GeV).
1 Introduction
Among them we pick up those which involve complexities and have potential for the CP asymmetry. With the updated neutrino data, we give updated results for the corresponding neutrino models, which are highly predictive and determine the cosmological CP violating phases in terms of the δ phase. In Section 4, applying the results of the previous sections, we determine the cosmological CP violation for each considered model and use it to calculate the baryon asymmetry. The latter is generated via resonant leptogenesis. We demonstrate that successful scenarios are possible for low RHN masses (in the range of a few TeV to 10^7 GeV). In Section 5 we revise the textures of Ref. [17] and improve the obtained neutrino mass matrices by adding single ∆L = 2, d = 5 mass terms to certain non-zero entries (in the spirit of Sect. 3). This makes the neutrino scenarios compatible with the best fit values of the neutrino data [3] and also proves to blend well with the leptogenesis scenarios. We stress that in the P4 neutrino texture scenario (discussed in Sect. 3), and also in the texture B2′ (considered in Sect. 5), the crucial role for successful leptogenesis is played by the λ_µ Yukawa coupling, which via a 1-loop correction generates a sufficient amount of cosmological CP asymmetry. Such a possibility has not been considered in the literature before.
(The general expressions for the corresponding corrections are presented in Sect. 2.) Sect. 6 includes a discussion and outlook, where we also summarize our results and highlight some prospects for future work. Appendix A includes some expressions, details related to the renormalization group (RG) studies, and a description of the calculation procedures we use. In Appendix B, the contribution to the net baryon asymmetry from the decays of the scalar components (RHS) of the RHN superfields is considered in detail. These analyses also include new corrections due to λ_µ and the corresponding soft SUSY breaking trilinear A_µ coupling (besides λ_τ, A_τ, and other relevant couplings).
Loop Induced Calculable Cosmological CP Violation
Before going to the calculations we first describe our setup. The framework is the MSSM augmented with two right-handed neutrinos N_1 and N_2. This extension is enough to build a consistent neutrino sector accommodating the neutrino data [3] and also to realize a successful leptogenesis scenario. The relevant lepton superpotential couplings are given by:

$W \supset l^T Y_e\, e^c h_d + l^T Y_\nu N h_u + \tfrac{1}{2} N^T M_N N,$   (2.1)

where h_d and h_u are the down- and up-type MSSM Higgs doublet superfields respectively, and l^T = (l_1, l_2, l_3), e^{cT} = (e^c_1, e^c_2, e^c_3), N^T = (N_1, N_2). We work in a basis in which the charged lepton Yukawa matrix is diagonal and real: Y_e^diag = Diag(λ_e, λ_µ, λ_τ). (2.2) Moreover, we assume that the RHN mass matrix M_N is strictly degenerate at the GUT scale, which will be taken to be M_G ≃ 2·10^16 GeV. Therefore, we assume:

$M_N(M_G) = M \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$   (2.3)

This form of M_N is crucial for our studies. Although it is interesting and worth studying, we do not attempt here to justify the form of M_N (and of the textures considered below) by symmetries. Our approach is rather phenomenological, aiming to investigate the possibilities, outcomes, and implications of the textures we consider. Since (2.3) at tree level leads to the mass degeneracy of the RHNs, it has interesting implications for resonant leptogenesis [16,17,22] and also, as we will see below, for building predictive neutrino scenarios [17], [18]. For the leptogenesis scenario two necessary conditions need to be satisfied. First of all, at the scale µ = M_{N_{1,2}} the degeneracy between the masses of N_1 and N_2 has to be lifted. And, at the same scale, the neutrino Yukawa matrix Ŷ_ν, written in the mass eigenstate basis of M_N, must be such that Im{[(Ŷ_ν†Ŷ_ν)_12]²} ≠ 0. [These conditions can be seen from Eq. (4.1) with the demand ε_{1,2} ≠ 0.] Below we show that both of them are realized by radiative corrections, and that the needed effect already arises at the 1-loop level, with a dominant contribution due to the Y_e Yukawa couplings (in particular from λ_τ and in some cases from λ_µ) in the RG.
As was shown in [17], [15], within the considered setup, radiative corrections are crucial for generating cosmological CP violation. In [15] it was shown that the needed asymmetry is generated at the 1-loop level due to the λ_τ Yukawa coupling, provided that the condition (Y_ν)_31 (Y_ν)_32 ≠ 0 is satisfied. Here, to be more generic and not to limit the class of models, we also include the effects of the λ_µ Yukawa coupling in the calculation. Thus, in this section we present the details of these calculations. We will start with radiative corrections to the M_N matrix. RG effects cause a lifting of the mass degeneracy and, as we will see, are also important for the phase misalignment (explained below).
At the GUT scale, M_N has the purely off-diagonal form (2.3), with vanishing diagonal entries; at lower scales, diagonal entries are generated radiatively (obeying the RG equations investigated below). That is why M_N was parametrized in the form

$M_N = M \begin{pmatrix} \delta_N^{(1)} & 1 \\ 1 & \delta_N^{(2)} \end{pmatrix}.$   (2.4)

With |δ_N^{(1,2)}| ≪ 1, the M (at the scale µ = M) will determine the masses M_1 and M_2 of the RHNs, while δ_N^{(1,2)} will be responsible for their splitting and for the complexity in M_N (the phase of the overall factor M does not contribute to the physical CP violation), as will be shown below [Eq. (2.7)]. In the N's mass eigenstate basis, the Dirac-type neutrino Yukawa matrix will be Ŷ_ν = Y_ν U_N. The CP asymmetries involve the components (Ŷ_ν†Ŷ_ν)_21; therefore, the CP violation should come from P_N* Y_ν† Y_ν P_N, whose matrix form defines the phase η′ [Eq. (2.9)]. We see that the difference (mismatch) η′ − η will govern the CP asymmetric decays of the RHNs. Without including the charged lepton Yukawa couplings in the RG effects we would have η′ ≃ η to high accuracy. It was shown in Ref. [13] that, ignoring the Y_e Yukawas, no CP asymmetry emerges at order O(Y_ν⁴), and non-zero contributions start only from O(Y_ν⁶) terms [14]. Such corrections are extremely suppressed for Y_ν ≲ 1/50. Since in our consideration we are interested in cases with M_{1,2} ≲ 10⁷ GeV, leading to |(Y_ν)_ij| < 7·10⁻⁴ (well fixed by the neutrino sector and the desired value of the baryon asymmetry), these effects (i.e., corrections of order ∼ Y_ν⁶) will not have any relevance. In Ref. [17], the effect of Y_e coming from 2-loop corrections was taken into account in the RG of M_N, and it was shown that sufficient CP violation can emerge. Below we show that including Y_e in the 1-loop RG of Y_ν induces a sufficient amount of CP violation; this happens mainly via the λ_τ and, in particular cases (considered below), the λ_µ Yukawa couplings. Thus, below we give a detailed investigation of the λ_{τ,µ} effects. Using the second lines of (2.10) and (2.11), the splitting parameter δ_N is obtained as an RG integral [Eq. (2.12)]; the omitted terms are either strongly suppressed or do not give any significant contribution to either the CP violation or the RHN mass splittings.
Here t = ln µ and t_G = ln M_G, and we have used the boundary condition δ_N(t_G) = 0 at the GUT scale. For the evaluation of the integral in (2.12) we need to know the scale dependence of Y_ν and Y_e, which is found in Appendix A.1 by solving the RG equations for Y_ν and Y_e. Using Eqs. (A.5) and (A.6), the integral of the matrix appearing in (2.12) can be written in terms of RG factors [Eq. (2.13)], where we have ignored the λ_e Yukawa coupling; for the definition of the η-factors see Eq. (A.6). Y_νG denotes the corresponding Yukawa matrix at the scale µ = M_G. On the other hand, we have the expression (2.16); derivations are given in Appendix A.1. Comparing (2.13) with (2.16) we see that the difference in these matrix structures (besides overall flavor-universal RG factors) lies in the RG factors r_{τ,µ}(M) and r̄_{τ,µ}(M). Without the λ_{τ,µ} Yukawa couplings these factors are equal, and there is no mismatch between the phases η and η′ [defined in Eqs. (2.7) and (2.9)] of these matrices. A non-zero η′ − η will be due to the deviations, which we parameterize as ξ_{τ,µ}. The values of ξ_µ and ξ_τ can be computed numerically by evaluating the appropriate RG factors, and approximate expressions can be derived for them [Eq. (2.20)]. Eq. (2.20) shows clearly that in the limit ξ_{τ,µ} → 0 we have η = η′, while the mismatch between these two phases is due to ξ_{τ,µ} ≠ 0. With ξ_{τ,µ} ≪ 1, from (2.20) we derive the relation (2.21). We stress that the 1-loop renormalization of the Y_ν matrix plays the leading role in the generation of ξ_{τ,µ}, i.e., in the CP violation. [This is also demonstrated by Eq. (2.18).] When the product (Y_ν)_31 (Y_ν)_32 is non-zero, the leading role in the mismatch between η and η′ is played by ξ_τ. However, for a Yukawa texture having this product zero, the contribution from ξ_µ becomes important. [As we will see in the working examples, this happens for T9 of Eq. (3.1) and texture B2 of Eq. (5.2).] The value of |δ_N(M)|, which characterizes the mass splitting between the RHNs, can be computed by taking the absolute values of both sides of (2.20) [Eq. (2.22)]. These expressions will be used in the calculation of the leptogenesis, which we carry out in Sections 4 and 5 for concrete models of the neutrino mass matrices.
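As a rough numerical orientation (not the paper's exact expressions, which use the full RG factors of Appendix A), the size of the deviations ξ_{τ,µ} can be estimated with a leading-log 1-loop approximation, ξ_f ∼ λ_f² ln(M_G/M)/(16π²), with the MSSM charged-lepton Yukawas λ_f = m_f/(v cos β). The sketch below assumes this approximation and illustrative input values.

```python
import numpy as np

# Leading-log 1-loop estimate of the RG-induced deviations xi_f
# (an assumption for illustration; the paper evaluates full RG factors).
MG, M = 2e16, 1e7            # GUT scale and RHN mass scale in GeV
v, tanb = 174.0, 30.0        # Higgs VEV and tan(beta), illustrative
cb = 1.0 / np.sqrt(1.0 + tanb**2)

def xi(m_lepton):
    lam = m_lepton / (v * cb)            # MSSM charged-lepton Yukawa coupling
    return lam**2 * np.log(MG / M) / (16 * np.pi**2)

xi_tau, xi_mu = xi(1.777), xi(0.1057)    # tau and muon masses in GeV
print(f"xi_tau ~ {xi_tau:.2e}, xi_mu ~ {xi_mu:.2e}")
# xi_mu is suppressed by (m_mu/m_tau)^2 ~ 3.5e-3, which is why textures
# relying on xi_mu need large tan(beta) to generate enough CP asymmetry.
```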
3 See-Saw via Two Texture Zero 3 × 2 Dirac Yukawas Augmented by a Single d=5 Operator: Predicting CP Violation
Within the setup with two RHNs, having at the GUT scale the mass matrix of the form (2.3), we consider all two texture zero 3 × 2 Yukawa matrices. As given in [18], there are nine such different matrices [Eq. (3.1)]. (Note that since the RG equations for M_N and Y_ν in the non-SUSY case have similar structures, besides some group-theoretical factors, the ξ_{τ,µ} would be generated also within a non-SUSY setup.)
where "×"s stand for non-zero entries. From these textures one can factor out phases in such a way as to make maximal number of entries be real. As it was shown in [18], phases can be removed from all textures besides T 4 , T 7 and T 9 . Thus, here we pick up only T 4,7,9 textures, which lead to cosmological CP violation and have potential to realize resonant leptogenesis [16], [17] (due to quasi-degenerate N 1 and N 2 states). Therefore, we can parametrize these three textures as: with with with The phases x, y and z can be eliminated by proper redefinition of the states l and e c . As far as the phases ω and ρ are concerned, because of the form of the M N matrix (2.3), they too will turn out to be non-physical. As we see, in textures T 4 , T 7 and T 9 there remains one unremovable phase φ (i.e. in the second matrices of the r.h.s of Eqs. (3.2) (3.4) and (3.6) respectively). This physical phase φ is relevant to the leptogenesis [17] and also, as it was shown in [18], it can be related to phase δ, determined from the neutrino sector. Integrating the RHN's, from the superpotential couplings of Eq. (2.1), using the see-saw formula, we get the following contribution to the light neutrino mass matrix: For Y ν in (3.8) the textures T 4,7,9 should be used in turn. All obtained matrices M ss ν , if identified with light neutrino mass matrices, will give experimentally unacceptable results. The reason is the number of texture zeros which we have in T i and M N matrices. In order to overcome this difficulty, in Ref. [18], the following single d = 5 operator was included for each case: whered 5 , x 5 and M * are real parameters. (3.9), together with (3.8) will contribute to the neutrino mass matrix. This will allow to have viable models and, at the same time because of the minimal number of the additions, we will still have predictive scenarios. The operators (3.9) can be obtained by another sector in such a way as to not affect the forms of T 4,7,9 and M N matrices (one detailed example was presented in [15]). See Sect. 6 for more discussion on a possible origin of the (3.9) type operators. Above we have written the Yukawa textures in the form: where P 1 , P 2 are diagonal phase matrices and Y R ν contains only one phase. Making the field phase redefinitions: the superpotential coupling will become: with: Now, for simplification of the notations, we will get rid of the primes (i.e. perform l ′ → l, e c′ → e c ,...) and in Eq. (3.8) using Y R ν instead of Y ν , from different T 4,7,9 textures we get corresponding M ss ν , and then adding the single operator (3.9) terms to zero entries of (3.8), one per M ss ν , obtain the final neutrino mass matrices. Doing so, one obtains the neutrino mass matrices [18]: where each type of texture originate as: where subscript for M indicates which Yukawa texture the see-saw part [of Eq. (3.8)] came from, while superscript denotes the non zero mass matrix element arising from the addition of the d =5 operator of type (3.9). Since within our setup we are deriving neutrino mass matrices, we are able to renormilize them from high scales down to M Z . With details given in the Appendix A of Ref. [15], we here write down P 1,2,3,4 textures at scale M Z and give results already obtained in [18]. Before doing this, we set up conventions, which are used below. 
Before doing this, we set up the conventions used below. Since we work in the basis in which the charged lepton Yukawa matrix is diagonal and real, the lepton mixing matrix U is related to the neutrino mass matrix as in Eq. (3.15), where m_i are the light neutrino masses, and the phase matrices and U are given in Eq. (3.16), with s_ij ≡ sin θ_ij and c_ij ≡ cos θ_ij. For normal and inverted neutrino mass orderings (denoted respectively by NH and IH) we use the notation of Eq. (3.19). As far as the numerical values of the oscillation parameters are concerned, since the best fit values of the works of Ref. [3] differ from each other by a few percent, we use their mean values:

sin²θ₁₂ = 0.308; sin²θ₂₃ = 0.432 (NH), 0.591 (IH); sin²θ₁₃ = 0.02157 (NH), 0.0216 (IH).   (3.20)

In models which allow us to do so, we use the best fit values (bfv) given in (3.20). However, in some cases we also apply value(s) of some oscillation parameter(s) which deviate from the bfv's by several σ. (A construction of U from such inputs is sketched below.)
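For reference, the standard PDG parameterization of the mixing matrix U used in such conventions can be assembled directly from the angles and the phase δ. The snippet below builds U with the NH mean best-fit values quoted above and checks unitarity; δ is set to an arbitrary illustrative value, since the models in the text predict it rather than take it as input.

```python
import numpy as np

def pmns(s12sq, s23sq, s13sq, delta):
    """Standard PDG parameterization of the lepton mixing matrix U."""
    s12, s23, s13 = np.sqrt([s12sq, s23sq, s13sq])
    c12, c23, c13 = np.sqrt([1 - s12sq, 1 - s23sq, 1 - s13sq])
    ep, em = np.exp(1j * delta), np.exp(-1j * delta)
    return np.array([
        [c12 * c13,                         s12 * c13,                         s13 * em],
        [-s12 * c23 - c12 * s23 * s13 * ep,  c12 * c23 - s12 * s23 * s13 * ep,  s23 * c13],
        [ s12 * s23 - c12 * c23 * s13 * ep, -c12 * s23 - s12 * c23 * s13 * ep,  c23 * c13],
    ])

# NH mean best-fit values from (3.20); delta = 1.3 is purely illustrative
U = pmns(0.308, 0.432, 0.02157, delta=1.3)
assert np.allclose(U @ U.conj().T, np.eye(3))   # unitarity check
```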
This texture, within our scenario, can be parametrized as in Eq. (3.21). With NH, sin²θ₂₃ = 0.451, sin²θ₁₂ = 0.323 and the best fit values for the remaining oscillation parameters, one obtains (m₁, m₂, m₃) = (0.00694406, 0.0110914, 0.0509217) eV and m_ββ = 0 (Table 1: results from the P1 type texture; masses are given in eV). The RG factors r_m and r_ν3 are given in Eqs. (A.17) and (A.18) of Ref. [15]. (For notations and definitions see also Appendix A.2 of the present paper.) Due to the texture zeros, it is possible to predict the phases and the values of the neutrino masses in terms of the measured oscillation parameters. Referring to [18] for the details, in Table 1 we summarize the results. [Only the normal hierarchical (NH) neutrino mass ordering scenario works for the P1 type texture.] The results from the P2 texture are given in Table 2, with the best fit values (bfv) of the oscillation parameters taken from Eq. (3.20); for the details of the analysis of this model we refer the reader to [18].
The results of the P3 texture for the NH and inverted hierarchical (IH) neutrino mass orderings are summarized in Table 3 (results from the P3 type texture; masses are given in eV).
The results obtained from the texture P4 for the NH and IH cases are presented in Table 4. The value of s²₂₃ we use deviates from the bfv, because the conditions (M_ν)₁,₃ = (M_ν)₃,₃ = 0 do not allow the use of the bfv's. Note that in NH case 2 and in the IH case the values of s²₂₃ deviate less from the bfv, but NH case 1, as it turns out, is preferred for obtaining the needed amount of baryon asymmetry. Without the latter constraint, just to satisfy the neutrino data, we could have used smaller values of s²₂₃, but this would give higher values of the neutrino masses, which would not satisfy the current cosmological constraint Σᵢ mᵢ < 0.23 eV (the limit set by the Planck observations [26]). In the leptogenesis investigation we will use NH case 1, given in Table 4.
Resonant Leptogenesis
The expression for δ_N(M), with the effects of λ_{µ,τ} included and λ_e ignored, is given by Eq. (2.20). The CP asymmetries ε₁ and ε₂, generated by the out-of-equilibrium decays of the quasi-degenerate fermionic components of the N₁ and N₂ states respectively, are given by [9], [10] [Eq. (4.1)]. Here M₁, M₂ (with M₂ > M₁) are the mass eigenvalues of the RHN mass matrix; within our scenario these masses are given in (2.6), with the splitting parameter given in Eq. (2.22). For the decay widths we use the more accurate expressions of Eq. (4.2) [5], where M_S is the SUSY scale and we assume that all SUSY states have a common mass equal to this scale; s_β and c_β are shorthand notations for sin β and cos β respectively. The N_i decays proceed via the N_i → h_u l_i and N_i → h̄_u l̄_i channels. In the derivation of (4.2) we took into account that h_u is a linear combination of the SM Higgs doublet h_SM and the heavy Higgs doublet H; the mass of h_SM has been ignored, while the mass of H has been taken ≃ M_S. Moreover, the imaginary part of [(Ŷ_ν†Ŷ_ν)₂₁]² is computed with the help of (2.8) and (2.9), with the relevant phase given in Eq. (2.21). Using the general expressions (2.21) and (2.22) for a given neutrino model, we compute η − η′ and |δ_N(M)|. With these, since we know the possible values of the phase φ [see Eqs. (4.6), (4.8), (4.10), (4.12)], and with the help of the relations (4.7), (4.9), (4.11), (4.13), we can compute ε_{1,2} in terms of |M| and a₂ or a₁ (depending on the texture we are dealing with). Recalling that the lepton asymmetry is converted to the baryon asymmetry via sphaleron processes [27], we can compute the baryon asymmetry [Eq. (4.3)]. The notation n_b^f is used for the baryon asymmetry created through the decays of the fermionic components of the N_{1,2} superfields. The net baryon asymmetry n_b also receives a contribution from the decays of the scalar components Ñ_{1,2}; this contribution, denoted ñ_b, is suppressed in comparison with n_b^f, and its computation is discussed in Appendix B. For the efficiency factors κ^f_{(1,2)} we use the extrapolating expressions of [5] [see Eq. (40) in Ref. [5]], with κ^f_{(1)} and κ^f_{(2)} depending on the mass scales m̃_{1,2} ≃ v_u² (Ŷ_ν†Ŷ_ν)_{11,22}/M_{1,2}. Within our studies we consider RHN masses ≃ |M| ≲ 10⁷ GeV; with this, we will not have the relic gravitino problem [28], [29]. For simplicity, we take all SUSY particle masses equal to M_S < |M|, with M_S identified with the SUSY scale, below which we have just the SM. As it turns out, via the RG factors, the asymmetry also depends on the top quark mass. (A minimal numerical sketch of these ingredients is given below.)
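For orientation only, the sketch below assumes the standard resonant-leptogenesis (Pilaftsis-type) form of ε_i, regulated by the RHN mass splitting, together with tree-level widths Γ_i = (Ŷ†Ŷ)_ii M_i/(8π). The Yukawa entries, mass splitting, phase mismatch, efficiency factor, and normalization are illustrative assumptions, not the paper's fitted values or its refined expressions (4.1)-(4.3).

```python
import numpy as np

# All inputs below are illustrative assumptions.
M = 1e6                        # mean RHN mass in GeV
dN = 1e-7                      # relative mass splitting |delta_N(M)|
M1, M2 = M * (1 - dN), M * (1 + dN)

# Yhat = Y_nu U_N in the RHN mass basis, with a small RG-induced phase
# mismatch eta' - eta ~ 1e-5 inserted by hand in one entry
Yhat = np.array([[2e-4, 2e-4],
                 [1e-4, 1e-4 * np.exp(1j * 1e-5)],
                 [0.0,  3e-4]])
H = Yhat.conj().T @ Yhat                       # (Yhat^dagger Yhat)

G = [H[0, 0].real * M1 / (8 * np.pi),          # tree-level widths Gamma_1,2
     H[1, 1].real * M2 / (8 * np.pi)]

def eps(i, j, Mi, Gj):
    """Resonant CP asymmetry in its standard form, maximized when the
    mass-squared splitting is comparable to Mi * Gj."""
    dM2 = M2**2 - M1**2
    num = (H[i, j] ** 2).imag * dM2 * Mi * Gj
    den = H[i, i].real * H[j, j].real * (dM2**2 + Mi**2 * Gj**2)
    return num / den

e1, e2 = eps(0, 1, M1, G[1]), eps(1, 0, M2, G[0])
kappa = 0.1                                    # illustrative efficiency factor
n_b = -(8.0 / 23.0) * kappa * (e1 + e2)        # MSSM sphaleron conversion factor
print(e1, e2, n_b)
```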
It is remarkable that within some models the observed baryon asymmetry (the recent value reported by WMAP and Planck [26]) can be obtained even for low values of the RHN masses. Below, we perform the analysis for each of the P_{1,2,3,4} cases (and for the revised models of Ref. [17], discussed in Sect. 5) in turn and present our results. As input for the top's running mass we use the central value, while for the SUSY scale M_S we consider the two cases of Eq. (4.4). The procedure of our RG calculation and the schemes used are described in Appendix A.3. As shown in [18], for the neutrino mass matrix textures P_{1,2,3,4} we are able to relate the cosmological phase φ to the CP violating phase δ. We introduce the notation of Eq. (4.5), which is convenient for writing down the expressions for φ and for expressing the neutrino Dirac-type Yukawa couplings in terms of one independent coupling element (the latter selected by convenience).

For the P1 texture: using the form of M_ν [given by Eq. (3.21) and derived within our setup] in the relation (3.15) and equating the appropriate matrix elements of both sides, we can calculate the phase φ [18], [15] [Eq. (4.6)]. Moreover, expressing a₃, b₂,₃ in terms of a₂ (taking a₂ as the independent variable) and other known and/or predicted parameters, we obtain Eq. (4.7). As we see from Eqs. (4.6) and (4.7), there is a pair of solutions: when the "+" sign is taken for a₃ in (4.7), the "−" sign should be taken in (4.6), and vice versa. (The same applies to the textures P_{2,3,4}.) For this case, the baryon asymmetry via resonant leptogenesis was investigated in Ref. [15]. In the present work, for the decay widths we use the more refined expressions of Eq. (4.2); because of this, the values of tan β (given in Table 5) are slightly different. Since in this model (Y_ν)₃₁ and (Y_ν)₃₂ are non-zero, according to Eq. (2.20) the mismatch η − η′ (i.e., the CP asymmetry) arises mainly from ξ_τ; in the numerical calculations, however, we have also taken into account the contribution of ξ_µ. The results are given in Table 5 (for more explanations see also the caption of that table). While in the table we vary the values of M and tan β, the cases I and II correspond respectively to cases (I) and (II) of Eq. (4.4) (i.e., M_S = 1 and 2 TeV respectively). For the definition of the RG factors given in this table see Appendix A.2 of Ref. [15]. To find the maximal values of the baryon asymmetries (given in Table 5) we varied the parameter a₂. As we see, the value of the net baryon asymmetry n_b differs only slightly from n_b^f; this is due to the contribution from ñ_b [coming from the right-handed sneutrino (RHS) decays], which is small (less than 3.4% of n_b^f). Details of the calculation of ñ_b are discussed in Appendix B.

For the P2 texture: with a quite similar procedure, for this case we get Eq. (4.8). Expressing a₃, b₂,₃ in terms of a₂ and other parameters (either known or predicted in this scenario), we obtain Eq. (4.9). Results for this case are presented in Table 6.

For the P3 texture: with the neutrino sector inputs of Table 3, the phase computed from Eq. (4.10) for the IH case is φ = ±3.124; for all cases r_ν3 ≃ 1.
(For notations and definitions see also Appendix A.2 of the present paper.) Expressing a₃, b₁,₃ in terms of a₁ and other fixed parameters, we obtain Eq. (4.11). Results for this texture for the NH and IH neutrino cases are presented in Tables 7 and 8, respectively.
For the P4 texture: for this case the cosmological phase is given by Eq. (4.12). Expressing a₁, b₁,₂ in terms of a₂ and other known and/or predicted parameters, we obtain Eq. (4.13). In this scenario, since (Y_ν)₃₁ and (Y_ν)₃₂ are zero, according to Eq. (2.20) the mismatch η − η′ (i.e., the CP asymmetry) arises from ξ_µ. Since the latter is suppressed by λ²_µ, it turns out that large values of tan β are required, and only in the NH case can the needed amount of baryon asymmetry be generated. Results are given in Table 9.
Revising Textures of Ref. [17] and Improved Versions
In this section we revise the textures considered in the work [17]. Since some of them are excluded by the current neutrino data [3] [see also Eq. (3.20)], we apply d = 5 contributions (in the spirit of Section 3) and achieve their compatibility with the best fit values. Together with this, we investigate resonant leptogenesis and show that one-loop corrections via λ_τ and/or λ_µ are crucial. In [17], while ignoring λ_µ, the two-loop correction due to λ_τ was taken into account, and this suggested specific lower bounds on the values of tan β for the textures A and B1. As demonstrated below, the one-loop effects of λ_τ (giving the dominant contribution for textures A and B1) and λ_µ (for the texture B2) significantly change the results.
In the setup of two degenerate RHNs, in Ref. [17] the following three possible one texture zero neutrino Dirac Yukawa couplings were considered [Eq. (5.1)], where for notational consistency with the rest of the paper we show the phases α_i, β_j, while assuming that the couplings a_i, b_j are real. Below we (re)investigate these textures in turn.
Texture A
The A Yukawa texture can be written as in Eq. (5.2). As we see, besides the phase φ all phases are factored out and have no physical relevance. With the RHN mass matrix of Eq. (3.13), via the see-saw [see the expression in Eq. (3.8)] we get the light neutrino mass matrix (5.3) [see also Ref. [15] and comments therein]. This texture has only two non-zero mass eigenvalues.
As was shown in [17], for the NH (m₁ = 0) and IH (m₃ = 0) neutrino mass patterns this gives, respectively, the predictive relations tan θ₁₃ = (m₂/m₃) s₁₂ and tan θ₁₂ = m₁/m₂. Both of these are in gross conflict with the current neutrino data, which excludes this scenario.
A ′ Neutrino Texture: Improved Version
The drawbacks of the A neutrino mass matrix can be avoided by adding d₅ terms to one of the entries. Here we consider this addition to the (2,3) and (3,2) elements of the light neutrino mass matrix, which makes the model viable. (We refer to this improved version as the A′ neutrino texture.) After this, M_ν has the form (5.5). With this modification, all masses are non-zero. One can check that, with the fixed phase redefinitions [given in Eq. (5.3)], d₅ is in general a complex parameter. Thus, together with an additional mass, we have one more independent phase. As it turns out, only the NH scenario can be realized; therefore, as additional independent parameters we take one of the masses and ∆ρ = ρ₁ − ρ₂. From the condition that the corresponding element of M_ν vanishes we obtain the relations (5.6). (Here and below we use the shorthand notation t_ij ≡ tan θ_ij.) From the first relation of (5.6) one can check that the IH scenario cannot be realized. As far as the NH scenario is concerned, it works with a lower bound on the lightest neutrino mass m₁; in fact, the first relation of (5.6) gives the allowed range for m₁. For example, with the bfv's of the oscillation parameters (3.20) we obtain the range (5.7). Thus, as independent parameters we take m₁ and ∆ρ, selected in such a way as to get the desired baryon asymmetry, for example as in (5.8). As far as the baryon asymmetry is concerned, using (5.5) in (3.15) for the CP phase φ and expressing the couplings a₁,₃, b₂,₃ in terms of a₂, we get the corresponding expressions; for the values of (5.8), (5.9) and the bfv's of s²₁₂,₂₃,₁₃ we get φ = −2.9297.

The B1 Yukawa texture can be written as in (5.11). With the RHN mass matrix of Eq. (3.13), via the see-saw we get the light neutrino mass matrix (5.12). This texture works only for inverted neutrino mass ordering [17] (with m₃ = 0) and has two predictive relations: in terms of the measured oscillation parameters we can calculate the phases δ and ρ₁, with the exact expressions given in (5.14). Although the first expression in (5.14) excludes the possibility of using the best fit values for all oscillation parameters, it allows keeping the values of s²₂₃ and s²₁₃ within 1σ, while confining s²₁₂ to 2σ. Remarkably, the needed baryon asymmetry can be achieved with relatively low values of tan β. By adding the d₅ term to the (1,3) and (3,1) entries of the B1 neutrino texture, the light neutrino mass matrix becomes (5.16), which makes all neutrinos massive and opens up the possibility of choosing two variables, such as m₃ and ∆ρ ≡ ρ₁ − ρ₂, as the independent ones to work with. From the condition M^(2,2)_ν = 0 we obtain (5.17). As far as the baryon asymmetry is concerned, using (5.17) in (3.15), we get the expression for φ. Using all these, we can calculate the baryon asymmetry; the results are given in the corresponding table.

The B2 texture is interesting because, due to the specific form of Y_ν, the radiative corrections through the λ_τ coupling do not generate a cosmological CP asymmetry; thus λ_µ may be important, which we investigate below. This model (and its slight modification discussed below) therefore serves as a good demonstration of the role of the ξ_µ correction in the emergence of the needed baryon asymmetry.
The B2 Yukawa texture can be written as in (5.20). Via the see-saw we get the light neutrino mass matrix (5.21). This texture works only for inverted neutrino mass ordering [17] (with m₃ = 0) and has two predictive relations: in terms of the measured oscillation parameters we can calculate the phases δ and ρ₁. In order to avoid the difficulties with the texture B2, we add a d₅ term to the (1,2) and (2,1) elements of the light neutrino mass matrix. After this, M_ν has the form (5.24). With this modification, all masses are non-zero, and therefore two additional parameters, m₃ ≠ 0 and ρ₂, enter; thus our relations involve two more independent quantities. For convenience we take m₃ and ∆ρ = ρ₁ − ρ₂ as such. From the condition M^(3,3)_ν = 0 we obtain relations from which the phases δ and ρ₁ can be calculated in terms of m₃ and ∆ρ.
As it turns out, in this improved version the IH case works well both for the neutrino sector and for the baryon asymmetry, so we start by discussing the IH case. For the measured oscillation parameters we take the best fit values given in (3.20) and select pairs (m₃, ∆ρ) in such a way as to get the needed baryon asymmetry. One such choice is given in (5.31); for the neutrinoless double-β decay observable this gives m_ββ ≃ 0.0193 eV. As far as the baryon asymmetry is concerned, using (5.29) in (3.15) for the CP phase φ and expressing the couplings a₂,₃, b₁,₂ in terms of a₁, we get the corresponding expressions; for the values of (5.31), (5.32) and the bfv's of the θ_ij angles we get φ = 2.2301. With these, for given values of M and tan β, by varying a₁ we can investigate the baryon asymmetry. Results are given in Tab. 13.
As far as the NH case is concerned, the neutrino sector can work well with a certain selection of (m₃, ∆ρ). However, in order to generate the needed baryon asymmetry we need to take values of sin²θ_ij that deviate from the bfv's. Note that the B2′ neutrino texture coincides with the texture P7 of Ref. [18] if all entries in (5.29) are taken to be real. As was shown in [18], the real neutrino texture with M^(3,3)_ν = 0 works for both NH and IH neutrinos (see Tab. 6 of Ref. [18]). The advantage of a complex d = 5 entry [as in texture (5.29)] is that it opens a good possibility for the generation of the baryon asymmetry, with the radiative correction of λ_µ playing the decisive role. Such a possibility has not been considered in the literature before.
Concluding, note also that the neutrino textures A′ and B1′ are generalizations of the textures P5 and P6 (respectively) considered in [18]. The latter two had no complex phases, while the A′ and B1′ scenarios, besides good neutrino fits, give the possibility of generating the baryon asymmetry.
Discussion and Outlook
In this work we have investigated resonant leptogenesis within an extension of the MSSM by two right-handed neutrino superfields with quasi-degenerate masses ≲ 10⁷ GeV. It was shown that in this regime the cosmological CP asymmetry arises at the one-loop level due to the charged lepton Yukawa couplings. In particular, the needed corrections may come from either of the λ_τ and λ_µ couplings; which of these two couplings is relevant depends on the structure of the 3 × 2 Dirac-type Yukawa matrix Y_ν. Aiming to make a close connection with the neutrino sector, we first examined all viable neutrino models (considered earlier in Ref. [18]) based on two texture zero Y_ν's augmented by single ∆L = 2, d = 5 operators. This setup is predictive and allows us to relate the leptonic CP violating phase δ to the cosmological CP violation. In one such scenario, the role of the λ_µ coupling in the CP asymmetry generated at the quantum level has been demonstrated. We have also revised the models of Ref. [17] and considered their improved versions by including proper ∆L = 2, d = 5 operators. This allowed a good fit to the neutrino data and the generation of the needed amount of baryon asymmetry.
Without specifying their origin, in our considerations we have extensively applied the ∆L = 2, d = 5 operators of the form given in Eq. (3.9). Such d = 5 couplings can be generated from a different sector via renormalizable interactions. For instance, introducing a pair of MSSM singlet states N, N̄ and suitable superpotential couplings, it is easy to verify that integrating out the heavy N, N̄ multiplets leads to the operator in Eq. (3.9) with the corresponding coefficient d̄₅. An important ingredient here is to maintain the forms of the matrices Y_ν and M_N. In [15], considering one such fully consistent extension, it was demonstrated that all obtained results (e.g., the neutrino masses and mixings, as well as the baryon asymmetry) can remain intact. Although the way demonstrated above is rather simple, alternative ways of generating these ∆L = 2 effective couplings can also be considered: either in the spirit of type II [30] or type III [31] see-saw mechanisms, or even exploiting alternative possibilities [32], [33] through the introduction of appropriate extra states. The details of such scenarios should be pursued elsewhere. Throughout our studies we have considered texture zero coupling matrices, but did not attempt to explain and justify the considered structures by symmetries. Our approach, being rather phenomenological, was to consider textures that give predictive and/or consistent scenarios allowing for transparent demonstrations of the suggested mechanism of loop-induced cosmological CP violation. It is desirable to have an explanation of the texture zeros at a more fundamental level, and exploiting flavor symmetries seems to be a good framework; we are planning to pursue this approach in a future work [34].
Since supersymmetry is a well motivated construction, we have performed our investigations within its framework. However, it would be interesting to examine the considered models also within a non-SUSY setup; for the latter, the scenarios with low tan β look encouraging as a starting point.
Finally, it would be challenging to embed the considered models in Grand Unified Theories (GUTs) such as SU(5) and SO(10). Due to the high GUT symmetries, additional relations and constraints would emerge, making the models more predictive. These and related issues will be addressed elsewhere.
Acknowledgments
Z.T. thanks the CERN theory division for warm hospitality and partial support during his visit there.
A Renormalization Group Studies
A.1 Running of the Y_ν, Y_e and M_N Matrices
The RG equations for the charged lepton and neutrino Dirac Yukawa matrices appearing in the superpotential of Eq. (2.1) have, at 1-loop order, the forms (A.1), (A.2) [35], [36], with c_a^e = (9/5, 3, 0). The RG for the RHN mass matrix at 2-loop level has the form (A.3) [36]. Let us start with the renormalization of the matrix elements of Y_ν. Ignoring in Eq. (A.2) the entries of order O(Y_ν³) (which are very small, because within our studies |(Y_ν)_ij| ≲ 10⁻⁴), and keeping from the charged fermion Yukawas λ_τ, λ_µ, λ_t and λ_b, we obtain (A.4). This gives the solution (A.5), where Y_νG denotes the Yukawa matrix at the scale M_G and the scale-dependent RG factors are given in (A.6) [15]. At the scale M, after the decoupling of the RHN states, the neutrino mass matrix is generated and has the form (A.7), where the '×' stand for entries depending on the Yukawa couplings. After renormalization, keeping λ_τ, λ_t, λ_b and g_a in the RGs, the neutrino mass matrix at the scale M_Z has the form (A.8) [15]. We will also need the RG factor relating the VEV v_u(M) to v(M_Z); thus we define the factor r_vu. An analytic expression for r_vu, derived from the appropriate RGs, is given by Eq. (A.20) of Ref. [15].
The factor p_t is p_t ≃ 1/1.0603 [41], while the recently measured value of the top's pole mass is given in (A.13) [42]. We take the values of (A.13) as boundary conditions for solving the 2-loop RG equations [43], [38] for λ_{t,b,τ,µ} and λ from the M_Z scale up to the scale M_S. Above the M_S scale we have the MSSM states, including the two doublets h_u and h_d, which couple to the up-type quarks and to the down-type quarks/charged leptons respectively. Thus, the Yukawa couplings we consider at M_S are ≈ λ_t(M_S)/s_β, λ_b(M_S)/c_β and λ_{τ,µ}(M_S)/c_β, with s_β ≡ sin β, c_β ≡ cos β. Above the scale M_S we apply the 2-loop SUSY RG equations in the DR-bar scheme [35]; thus, at µ = M_S we use the matching conditions between the DR-bar and MS-bar couplings. Throughout the paper, above the mass scale M_S, couplings written without the DR-bar superscript are assumed to be determined in this scheme.
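As a toy version of this procedure, the sketch below integrates 1-loop (not the paper's 2-loop) RG equations for λ_t and λ_τ from M_Z up to a SUSY scale, applies the tan β matching at M_S, and continues with 1-loop MSSM running. The beta-function coefficients are approximate (subdominant Yukawa terms dropped) and the gauge couplings are crudely frozen; this is a simplified illustration of the matching-and-running scheme, not the paper's calculation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Gauge couplings frozen at rough electroweak-scale values (deliberate
# simplification; a real analysis runs them as well). g1 is GUT-normalized.
g1, g2, g3 = 0.46, 0.65, 1.17

def beta_sm(t, y):
    """Approximate 1-loop SM beta functions for (lambda_t, lambda_tau);
    subdominant Yukawa terms are dropped."""
    lt, ltau = y
    k = 1 / (16 * np.pi**2)
    blt = k * lt * (4.5 * lt**2 + ltau**2
                    - (17/20) * g1**2 - (9/4) * g2**2 - 8 * g3**2)
    bltau = k * ltau * (2.5 * ltau**2 + 3 * lt**2
                        - (27/20) * g1**2 - (9/4) * g2**2)
    return [blt, bltau]

def beta_mssm(t, y):
    """Approximate 1-loop MSSM beta functions (b-quark Yukawa dropped)."""
    lt, ltau = y
    k = 1 / (16 * np.pi**2)
    blt = k * lt * (6 * lt**2 - (13/15) * g1**2 - 3 * g2**2 - (16/3) * g3**2)
    bltau = k * ltau * (4 * ltau**2 - (9/5) * g1**2 - 3 * g2**2)
    return [blt, bltau]

MZ, MS, M = 91.19, 2e3, 1e7          # matching at M_S = 2 TeV, case (II)
tanb = 30.0
sb, cb = tanb / np.hypot(1, tanb), 1 / np.hypot(1, tanb)

sol = solve_ivp(beta_sm, [np.log(MZ), np.log(MS)], [0.94, 0.0102])
lt, ltau = sol.y[:, -1]
# tan(beta) matching at M_S, then MSSM running up to the RHN scale M
sol = solve_ivp(beta_mssm, [np.log(MS), np.log(M)], [lt / sb, ltau / cb])
print("lambda_t(M), lambda_tau(M) =", sol.y[:, -1])
```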
B Baryon Asymmetry from RHS Decays
In this appendix we give details of the contribution to the net baryon asymmetry from the right-handed sneutrinos (RHS), the scalar partners of the RHNs. An estimate of this contribution for specific textures was given in [17], while a more detailed investigation was carried out in [15] (taking into account, from the lepton couplings, only λ_τ and A_τ in the relevant RGs). Since we have seen that in some cases the RG correction via the λ_µ Yukawa coupling is decisive for the cosmological CP asymmetry, here we extend the calculation by also taking into account the effects of λ_µ and A_µ in the asymmetry generated by the RHS decays. We consider the soft SUSY breaking scalar potential relevant for deriving the RHS masses and their couplings to the components of the l and h_u superfields. Using the general expressions of Ref. [35], we write down the 1-loop RGs for A_ν and B_N [Eqs. (B.1), (B.2)]. We parameterize the matrices B_N and A_ν as in (B.4), where the entries (M_N)₁₂, m_B, δ^(1,2)_BN and the elements of the matrix a_ν run (their RGs can be derived from the RG equations given above), while m_A is a constant. The A-term matrix for the charged leptons (similar in structure to the Y_e Yukawa matrix) is Diag(A_e, A_µ, A_τ). (B.5) Assuming proportionality/alignment of the soft SUSY breaking terms and the corresponding superpotential couplings, we use the boundary conditions (B.6). Using (B.3) for the entries of B_N in (B.4), we obtain (B.7). For the elements of a_ν we obtain (B.8), which shows the violation of the alignment between a_ν and Y_ν due to RG effects. On the r.h.s. of (B.8) we kept λ_{µ,τ,t}, A_{µ,τ,t}, the gauge couplings and the gaugino masses. From this we derive (B.9). Keeping in mind that powers of the Y_ν couplings can be ignored due to their smallness, m_B can be treated as a constant, and from (B.9), (B.7), (B.4) we obtain (B.10) and (B.11). The form of B_N given in Eq. (B.10) will be used to construct the RHS mass matrix. Before doing this, using Eq. (A.5) and ignoring the coupling λ_e (as it turns out, all relevant effects from the lepton Yukawa couplings are due to λ_{µ,τ}), for ε̄_{1,2} at the scale µ = M we obtain the expressions (B.12). With the transformation of the N superfields N = U_N N′ (according to Eq. (2.6), U_N diagonalizes the fermionic RHN mass matrix), we obtain (B.13). With the phase redefinition (B.15), and by going to the real scalar components and using (B.10), we obtain (B.17). From (B.14) and (B.17) we obtain the mass² terms (B.19) and (B.20). The coupling of the ñ⁰ states to the fermions emerges from the F-term of the superpotential l^T Y_ν N h_u; following the transformations indicated above, we obtain (B.21).
B.1 Calculating the ñ_b^s Asymmetry via ñ Decays
Due to the SUSY breaking terms, the masses of the RHS's differ from the masses of their fermionic partners. For the mass-eigenstate RHS's ñ_{i=1,2,3,4} we have the masses M̃_{i=1,2,3,4} respectively. With M_S/M ≲ 1/3, the states ñ_i remain nearly degenerate, and for the resonant ñ decays the resummed effective amplitude technique [9] is applied. The effective amplitudes for the real ñ_i decay into the lepton l_α (α = 1, 2, 3) and the antilepton l̄_α respectively are given by [9] Eq. (B.22), where S_αi is the tree-level amplitude and Π_ij is the absorptive part of the two-point Green function (the polarization operator of ñ_i − ñ_j). The CP asymmetry is then given by (B.25). With Y_F and Y_B given by Eqs. (B.23) and (B.24), we can calculate the absorptive part Π_ij of the polarization diagram (with external legs ñ_i and ñ_j). At 1-loop level these are given by (B.26), (B.27), where p denotes the external momentum in the diagram; upon evaluation of (B.26), for Π one should use (B.27) with p = M̃_i. In (B.27), taking into account the SUSY masses M_S of all non-SM states, we use the refined expression for Π_ij.
In the unbroken SUSY limit, neglecting finite temperature effects (T → 0), the Ñ decays do not produce a lepton asymmetry, for the following reason. The decays of Ñ into the fermionic and scalar channels are respectively Ñ → l h̃_u and Ñ → l̃* h_u*. Since the rates of these processes are the same due to SUSY (at T = 0), the lepton asymmetries created by these decays cancel each other. With T ≠ 0 the cancellation does not take place, and one has a temperature-dependent factor ∆_BF given in [44]. Therefore, we just need to compute ε_i(ñ_i → l h̃_u), the asymmetry created by the ñ_i decays into two fermions. Thus, in (B.25) we take S_αi = (Y_F)_αi and calculate ε_i(ñ_i → l h̃_u) with (B.26). The baryon asymmetry created from the lepton asymmetry due to ñ decays is given by (B.29), where an effective number of degrees of freedom (including the two RHN superfields) g* = 228.75 was used. The η_i are efficiency factors which depend on m̃_i ≃ (v sin β)² (Y_F† Y_F)_ii / M̃_i, and account for temperature effects once the integration of the Boltzmann equations is performed [44].
Calculating the contribution ∆n_b^s = ñ_b^s to the baryon asymmetry from the RHS decays, we have examined various values of the pairs (m_A, m_B) in the range of 100 GeV to a few TeV. As it turned out, the ratio ñ_b/n_b^f is always suppressed (< 3.4·10⁻²). The results for each neutrino scenario considered in this paper, for one specific choice of (m_A, m_B), are given in Table 14 (see its caption for more information). The ranges for ñ_b^s are due to the fact that for each scenario we considered different values of tan β, M and M_S. In the calculations, with the obtained values of m̃_i, we picked the corresponding values of η_i according to Ref. [44] and used them in (B.29). When giving the results for the net baryon asymmetry, for each case (see Sections 4 and 5) we included the corresponding contributions from ñ_b^s as well. As we see from the results of Table 14, ñ_b^s is suppressed/subleading in all cases. We have also verified (by varying the phases of m_{A,B}) that the complexities of m_A and m_B practically do not change the results. This happens because m_A appears in the Y_B coupling matrix in front of Y_ν [see Eq. (B.24)], which is strongly suppressed; the irrelevance of the phase of m_B can be seen from the structure of (B.19). The suppression of ñ_b^s always occurs for values of |m_B| in the range of 100 GeV to a few TeV, because the mass degeneracy of the ñ_i states is lifted in such a way that a resonant enhancement of ñ_b^s is not realized (unlike the case of soft leptogenesis [44]).
"Physics"
] |
The Status of Domestic Water Demand: Supply Deficit in the Kathmandu Valley, Nepal
United Nations Sustainable Development Goal 6 targets access to water and sanitation for all people in the next 15 years. However, for developing countries such as Nepal, it is more challenging to achieve this goal given its poor infrastructure and high population growth. To assess the water crisis in the most developed and populated area of Nepal, the Kathmandu Valley, we estimated available water resources and domestic water demand in the valley. We estimated a supply deficit of 102 million liters per day (MLD) in 2016, after completion of the first phase of the Melamchi Water Supply Project (MWSP). If the MWSP is completed within the specified timeframe, and sufficient treatment and distribution infrastructure is developed, then there would be no water deficit by 2023–2025. This indicates that the MWSP will make a significant contribution to the valley's water security. However, emphasis must be given to utilizing all of the water available from the MWSP by developing sufficient water treatment and distribution infrastructure. Alternate mitigation options, such as planning land use for potential recharge, introducing micro- to macro-level rainwater harvesting structures, conjunctive use of surface and groundwater resources, and water demand-side management, would also be helpful.
Introduction
United Nations Sustainable Development Goals (SDGs) [1], which aim to end poverty, protect the planet, and ensure prosperity for all, are scheduled to be achieved over the next 15 years. Of the 17 SDGs, Goal 6 aims to ensure access to water and sanitation for all people. Presently, approximately 663 million people in the world are without access to improved drinking water sources and about 1.8 billion are using drinking water that is fecally contaminated [1]. SDG 6 targets universal and equitable access to safe and affordable drinking water and adequate sanitation facilities for all people by 2030. It will be more challenging to achieve these targets in resource-poor developing countries. Nepal, a developing country in South Asia, had a total population of about 26.49 million in 2011, which was expected to reach 28.47 million by 2016 [2]. According to a report by the Department of Water Supply and Sewerage, Nepal's water supply coverage was 83.59%, and the sanitation coverage 70.28%, in 2014 [3]. To achieve SDG 6, Nepal should invest more in water supply and sewage system infrastructure; when doing so, it needs to consider the increasing population and lifestyle changes, as well as the available water resources.
The Kathmandu Valley (Figure 1) is the most urbanized and populated area of Nepal; it has three districts, Kathmandu, Lalitpur, and Bhaktapur, and is developing in an unplanned manner without proper land use planning. The valley is characterized by acute water shortage and degraded water quality due to rapid increases in the population and the degree of urbanization. The deteriorating water quality has a subsequent impact on public health. Kathmandu Upatyaka Khanepani Limited (KUKL) is responsible for the operation and management of the water supply and wastewater services in the valley. The government of Nepal's capital investment and asset management program of 2010 aims to provide 135 liters per capita per day (lpcd) of domestic water to the residents of the valley by 2025 [4]. Rapid and largely unplanned urban and population growth, a lack of sustainable water sources, dramatic land use changes, socioeconomic transformation, and a poor management system have resulted in low availability of potable water in the valley. According to a report by the Asian Development Bank [4], inadequate access to water has led to increased disease incidence, health risks and associated economic burdens, which disproportionately impact the poor and vulnerable population of the valley. Also, seasonal variability in the availability and cost of pure water, and inter-sectoral water conflicts, threaten water security in the valley.
These water security problems have motivated efficient and cost-effective water management strategies, through the accurate estimation of domestic water demand and available fresh water resources, as well as priority-setting, to cope with these problems. Water demand varies with the socioeconomic status of households, the setting (rural or urban), and existing infrastructure, etc., but in most cases, relevant data is not available to local administrative divisions in developing countries (including wards and villages in the Kathmandu Valley). In this context, we investigated the available water resources and domestic water demand in the Kathmandu Valley in a preliminary analysis. We present concise insights into the current and future status of domestic water demand in the valley, and the effect of the Melamchi Water Supply Project (MWSP) on the existing supply deficit, thereby highlighting the seriousness of the valley's water crisis. Potential methods to mitigate the water crisis in the immediate future are also discussed.
How Large is the Domestic Water Demand?
The population of the Kathmandu Valley grew from 1.59 million in 2001 to 2.42 million in 2011 [5,6]; it is expected to reach 3.08 million in 2016. When we assume a uniform demand of 135 lpcd, the total demand in the valley becomes 415.5 million liters per day (MLD), which is expected to increase to 540.3 MLD by 2021 (Table 1). Using the Bureau of Indian Standards (BIS) guidelines [7], water demand in the valley is estimated to be 366 MLD in 2016 (present) and is expected to reach about 482 MLD by 2021 (over the entire valley area, i.e., including all KUKL service areas) (see Supplementary Materials). Table 1 gives the population and corresponding estimated water demand in the valley for the period 2001-2021. These figures do not include 20% leakage (physical losses) through distribution networks [8]. Figure 2 shows Village Development Committee (VDC)- and municipality-wide demand in the valley using BIS guidelines. Presently (2016), Kathmandu Metropolitan City, Lalitpur Sub-metropolitan City, and the Madhyapur Thimi, Bhaktapur, and Kirtipur Municipalities have the highest water demand due to their highly dense populations and various lifestyle changes (such as the adoption of full flushing toilet facilities connected to the public sewage system, etc.). Table 2 gives the KUKL service area estimated water demand using BIS guidelines.
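As a quick sanity check of the 135-lpcd figures above, the demand numbers can be recomputed in a few lines of R (an illustrative sketch only; the BIS-based values additionally depend on the setting- and coverage-specific rates in the Supplementary Materials):

# Demand (MLD) = population x 135 lpcd / 10^6; populations as quoted above
population <- c(y2001 = 1.59e6, y2011 = 2.42e6, y2016 = 3.08e6)
demand_mld <- population * 135 / 1e6
round(demand_mld, 1)  # ~214.7, 326.7, 415.8 -- matches Table 1 up to population rounding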
What is the Present Status of Water Supply?
KUKL is an authorized agency supplying potable water to the Kathmandu Valley; it taps water in mountainous (conservation) zones of the valley, from 22 surface water sources, producing about 65.3 MLD and 131 MLD during the dry and wet seasons, respectively [8]. The available surface water from these conservation zones is estimated to be about 133.88 MLD and 199.79 MLD during the dry and wet seasons, respectively [9]. The KUKL is currently harnessing about 49% and 66% of the available surface water from the mountains during the dry and wet seasons, respectively. This shows that there is scope to harness additional surface water for potable purposes in the valley. According to a previous study by Thapa et al. [9], the estimated groundwater potential in the valley is about 1116 billion liters; however, its use is limited by quality concerns.
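The harnessing percentages quoted above follow directly from the production and availability figures in this paragraph (illustrative R, values in MLD):

round(100 * c(dry = 65.3 / 133.88, wet = 131 / 199.79))  # ~49% and ~66%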
The KUKL is responsible for supplying water to its 10 service areas (Figure 1). The maximum domestic water supply capacity of KUKL service areas was reported to be 151.19 MLD in 2013. However, the actual water supplies during the wet and dry seasons are 115 and 69 MLD, respectively. The total water demand in service areas is estimated to be (by considering 135 lpcd) approximately 361.6 MLD in 2016, with a supply deficit of 210 MLD. The present deficit is currently met through private groundwater pumping, traditional water spouts, wells, supplies from private vendors (through surface, spring, and groundwater), and bottled water industries. This causes over-exploitation of groundwater storage, resulting in drawdown of the groundwater level and drying of wells. The water demand estimated by using BIS guidelines (338 MLD for 2016) gives a supply deficit of about 187 MLD in KUKL service areas (Figure 3). The supply deficit in KUKL service areas in 2021 is estimated to be 322 MLD and 294 MLD, by KUKL and BIS guidelines, respectively, against the present maximum supply capacity of 151.19 MLD and without considering the impact of leakages and the MWSP.
Are Surface and Groundwater Resources in the Valley Drinkable?
To cope with insufficient surface water resources in the Kathmandu Valley, and to reduce the supply-demand deficit, the use of shallow (small diameter tube wells at the household level and public stone spouts) and deep groundwater resources (by private water vendors, hotels, and industries) is increasing. Nearly half of the valley's total water supply during the wet season, and 60%-70% during the dry season, comes from groundwater sources supplied by KUKL [10]. Recently, KUKL started to drill 40 deep tube wells to supply 40 MLD of water [11]. However, the suitability of groundwater, for drinking purposes, in the area is questionable. The extensive use of groundwater (beyond the rate of recharge), coupled with inadequate management of solid waste and wastewater from urban centers, has increased the vulnerability of the groundwater system to resource depletion, quality degradation, and land subsidence [12,13].
High levels of arsenic, ammonia, and iron in the deep groundwater, and nitrates and E. coli in shallow groundwater, exceeding World Health Organization (WHO) guidelines, have been reported in the Kathmandu Valley [14][15][16][17]. A study by Sakamoto et al. [18] showed that almost all water from the rivers and shallow wells in the core valley was not suitable for drinking because of the presence of E. coli, and only 29% of the deep tube wells had drinkable water [18]. Similar studies have identified surface and groundwater contamination (biological and chemical) in the valley, such as those of Jha et al. [19], Dongal et al. [20], Kannel et al. [21], Chapagain and Kazama [15], and Shrestha et al. [22]. This poses a limitation on the valley's drinking water resources-in the absence of proper treatment for biological or chemical contamination-in terms of coping with the present water crisis. Shrestha et al. [22] suggested reducing groundwater pollution by improving the sewer line infrastructure and septic tanks through long-term planning. However, treatment of groundwater at the household level, including disinfection, filtration, and boiling before consumption (particularly during the wet season), is recommended to deal with microbial contamination. For the treatment of physicochemical contamination, the simple-to-operate, low-cost, and energy-efficient ammonia and nitrate removal system developed by Khanitchaidecha et al. [17] could be used at the household or community scale. For iron removal, aeration, sedimentation, and filtration before denitrification could be applied, as suggested by Khanitchaidecha et al. [23].
What Will be the Role of the MWSP in the Supply Deficit?
The MWSP is a key initiative of the government of Nepal to supply clean water to the region's 2.5 million people. The first phase (2016-2017) of the project aims to transfer 170 MLD of fresh water from the adjoining Melamchi River (inter-basin transfer) to the Kathmandu Valley. The second phase (2016-2023) focuses on source augmentation by transferring 340 MLD from the Yangri and Larke rivers (each with 170 MLD) [4,24]. According to KUKL, the first phase of the MWSP will be completed by May 2017 (personal communication; it was originally expected to be completed by mid-April 2016, but was delayed) [25], when it will start pumping 170 MLD of water from the Melamchi River in Sindhupalchok to the valley. However, the Sundarijal water treatment plant, which is under construction and is scheduled to be completed by 2016, has a processing capacity of only 85 MLD, making it impossible to distribute the total 170 MLD of water delivered after Phase One of the MWSP. The remaining excess water (85 MLD) will be released into the Bagmati River until there is a further increase in the treatment and distribution capacity [26]. If we consider the water availability from the MWSP to be 85 MLD, then the total supply capacity will be about 236 MLD in 2016, against the KUKL service area's water demand of 338 MLD. This will result in a deficit of 102 MLD for 2016. The government of Nepal has proposed an additional loan from the Asian Development Bank (ADB) to expand the Sundarijal water treatment plant from 85 MLD to 170 MLD (by 2020), to fully utilize the expected supply delivered by the first phase of the MWSP [4]. If we assume that the full operational capacity of Phase One of the MWSP will be used successfully until 2021, then the remaining supply deficit in service areas will be 124 MLD (Figure 3). This highlights that the valley will be dragged inevitably into another era of chronic water shortages until 2021.
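The 102 MLD figure quoted above follows directly from the numbers in this section; a one-line recomputation in R (illustrative only, all values in MLD):

kukl_capacity <- 151.19  # maximum KUKL supply capacity (2013)
mwsp_usable   <- 85      # Phase One water limited by treatment capacity
demand_2016   <- 338     # BIS-based service-area demand
demand_2016 - (kukl_capacity + mwsp_usable)  # ~102 MLD deficit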
In addition to the existing supply capacity of 151 MLD, completion of Phase Two of the MWSP will provide about 510 MLD of additional water by 2023. However, its use will depend on the storage and treatment capacities of the valley's water and sanitation infrastructure. If all the water from the MWSP is treated, then it will be sufficient to meet the projected domestic water demand in the Kathmandu Valley (which is approximately 445 MLD for 2021). This will reduce exploitation of groundwater in the valley and highlights the need for planning additional water treatment infrastructure. However, unpredictable factors, such as earthquakes, fuel shortages, and inefficient management competency of contractors, could affect the completion of the project within the specified period, which could in turn lead to exacerbation of the present water crisis in the near future.
How Have Earthquakes Affected the Existing Water Supply?
A catastrophic earthquake of magnitude 7.6 on 25 April 2015 (and 300 aftershocks greater than magnitude 4.0 until 7 June 2015) took the lives of about 8790 people, as well as causing 22,330 injuries. Approximately eight million people (almost one-third of Nepal's population) have been impacted [27]. The earthquake damage was widespread, covering the entire development infrastructure including water supply pipes, wells and taps (with breaking of underground water supply pipes leading to more pressure on the groundwater). The earthquake caused damage and losses to the water and sanitation sector that were reported to cost NPR 11,379 million (the equivalent of US $106 million). A recent study by Thapa et al. [9] reported that the valley's water supply infrastructure suffered a reduction in the capacity of water distribution pipe networks of 28%, 30% and 18% in the Lalitpur, Kathmandu, and Bhaktapur districts, respectively. An approximate 40% reduction of supplied water by KUKL was also reported, affecting 0.15 and 0.24 million people during the dry and wet seasons, respectively. Repair of those damaged distribution networks may take longer than 1.5 years to complete (personal communication with KUKL, 7 December 2015). Sewerage systems and septic tanks may have also been broken and dislodged by earthquakes, causing leakages that would pollute groundwater. The earthquake has also delayed the construction of the MWSP, which was expected to complete its first phase in April 2016, and has since been postponed to May 2017.
Discussion and Concluding Remarks
The increase in population is in turn expected to increase pressure on the existing water supply infrastructure of the Kathmandu Valley. The present supply deficit (102 MLD) is expected to increase to 124 MLD after completion of the first phase of the MWSP, assuming KUKL's full supply capacity is realized in 2020-2021. The valley's water crisis should be solved by successful completion of the MWSP (in ideal conditions), with the KUKL's total supply capacity reaching 510 MLD.
The present water crisis will be further exacerbated if the MWSP goals are not achieved in the specified time (i.e., by 2023). Water shortages and poor water quality in the Kathmandu Valley may lead to serious health, environmental and socioeconomic consequences. The time and distance to fetch water will cause extra pressure on women's household activities, and may also result in water-related societal conflicts. Poor households are expected to suffer water shortages, resulting in a reduction in water consumption, which subsequently will cause health and sanitation issues. After completion of the MWSP (both phases), about four million people in the valley will have access to sufficient water; however, drinking water quality will remain a challenge. Therefore, this study recommends immediate planning for additional water treatment infrastructure to utilize the water from the MWSP fully by 2023-2025. We conclude that the effect of the MWSP on the valley's water supply will be positive in the short term, i.e., by 2023-2025. However, extreme climate change events such as droughts and unforeseen catastrophes such as earthquakes could affect the supply from the MWSP. This could threaten SDG 6, which targets 95% of the population of the Kathmandu Valley having a piped water supply, 99% having a basic water supply, and 90% having access to safe drinking water by 2030 [28].
A study by Shrestha [29] suggested alternative options to minimize the demand-supply deficit of the valley-and decrease the stress on groundwater resources-such as the development of urban centers outside the valley, optimum planning of land use for potential recharge, introduction of micro- to macro-level rainwater harvesting structures, and water demand management. The water from the mountainous region, which is also of sufficient quality for drinking purposes, can be harvested with community-based water resources management and used in conjunction with surface and groundwater in the valley. Strategies to save and reuse water, and avoid wastage, should be implemented at the household level.
In addition to water scarcity, the surface and groundwater of the core valley was found to be unsuitable for drinking purposes. Therefore, to use water from the valley (either surface or groundwater) or the MWSP, there is an immediate need for additional water treatment infrastructure. The Sundarijal water treatment plant is expected to utilize the water from the first phase of the MWSP (170 MLD) by 2020. However, treatment of the remaining 340 MLD of water is a challenge due to the limited capacity of existing water treatment plants. Additional water treatment capacity needs to be developed by 2023-2025 to treat this remaining 340 MLD of MWSP water. The construction of such infrastructure will require long-term planning and is hugely expensive.
If it is assumed that, under ideal conditions, all the water from the MWSP (510 MLD) is treated and distributed for drinking and other household purposes, then the operation and maintenance of treatment plants will be very expensive, and the risk of water contamination through distribution networks would also be high. Therefore, there is a need to develop small-scale, energy-saving, and highly efficient water treatment systems suited to local conditions. One possible solution could be the development of community- or household-level small water treatment facilities, where people treat water for drinking purposes only and not for other household uses such as bathing, flushing, washing clothes and utensils, and gardening, etc. The present Science and Technology Research Partnership for Sustainable Development (SATREPS) project in Nepal [30] aims to develop water security index maps and an appropriate, locally fitted, compact and decentralized (LCD) water treatment system for groundwater and surface water in the Kathmandu Valley, which is expected to generate practical solutions to deal with water scarcity and quality issues in the valley.
Limitations
This study used water supply and sanitation statistics for 2011 as constants to estimate water demand according to BIS guidelines. These data may change depending on VDC/municipality-level data availability, or based on the results of a government master plan to obtain 100% water supply and sanitation coverage by 2017. Furthermore, we did not consider the water supply to areas other than those serviced by the KUKL, or the floating (migrant) population in the valley.
Figure 1. Location of the Kathmandu Valley. Five metropolitan areas are shown with hatches.
Figure 2. Village Development Committee (VDC)/municipality domestic water demand using Bureau of Indian Standards (BIS) guidelines. Kathmandu Metropolitan City, Lalitpur Sub-metropolitan City, and the Madhyapur Thimi, Bhaktapur, and Kirtipur Municipalities have the highest water demand due to their highly dense populations.
Figure 3. KUKL service area maximum water supply capacity for 2013 and estimated demand deficit. From 2017, an additional supply of 170 MLD from the MWSP should be considered; however, this would be affected by the present capacity of the water treatment plant (85 MLD).
Table 1. Estimated water demand in the Kathmandu Valley. The BIS guidelines are based on water use that varies by setting (rural or urban) and by water supply and sanitation coverage (instead of assuming constant water demand). Water demand over the KUKL service area is expected to rise to 482 MLD by 2021.

Water demand in the valley (MLD)    2001    2006    2011    2016    2021
Assuming 135 lpcd                   214.6   262.8   327.1   415.5   540.3
Using BIS guidelines                183.9   224.9   282.5   366.0   481.5

* MLD, million liters per day; lpcd, liters per capita per day; BIS, Bureau of Indian Standards. Note: Populations for 2016 and 2021 are projected. Leakages during water distribution are not considered within the calculations.
Table 2 notes: * MLD, million liters per day; SA, service area. Note: From 2017, an additional supply of 170 MLD from the MWSP should be considered; however, this would be affected by the present capacity of the water treatment plant (85 MLD), and is therefore not included in the above estimations. | 6,421.8 | 2016-05-11T00:00:00.000 | [
"Economics"
] |
RatingScaleReduction package: stepwise rating scale item reduction without predictability loss
This study presents an innovative method for reducing the number of rating scale items without predictability loss. The "area under the receiver operator curve" (AUC ROC) method is used, implemented in the RatingScaleReduction package posted on CRAN. Several cases are used to illustrate how the stepwise method reduces the number of rating scale items (variables).
Introduction
Rating scales (also called assessment scales; "the scale" in this study) are used to elicit data about quantitative entities. Often, the predictability of rating scales can be improved. Rating scales often use values "1 to 10", and some rating scales may have over 100 items (questions) to rate. Sometimes, the scale is called a survey or a questionnaire. A questionnaire is a tool for data collection, while a survey may not necessarily be conducted by questionnaires, since some surveys may be conducted by interviews or by data gathered from web pages. In fact, the main and very important distinction between the scale and both questionnaires and surveys is that the scale is used for assessment (as in "the scale of disaster"), while questionnaires and surveys may be used only to collect data. In other words, some kind of "summation procedure" must be provided for questionnaires or surveys to become rating scales.
The recent popularity of rating scales is due to various "Customer Reviews" on the Internet where five stars are often used instead of ordinal numbers. However, the most important examples of rating scales are questionnaires used in examinations. We may risk a statement that the accelerated progress of granting academic degrees can be linked to a better use of rating scales.
Rating scales are predominantly used to express our subjective assessments, such as "on a scale of 1 to 5, express your preference" from "strongly agree" to "strongly disagree" (with 3 as the 'neutral' preference). The importance of subjectivity processing probably inspired the introduction of the idea of bounded rationality, proposed by Herbert A. Simon (the Nobel Prize winner), as an alternative basis for the mathematical modeling of decision making. It is often expressed as "good enough is perfect" and gained popularity in the software industry, where frequent updates are common. Objective data are more commonly used in so-called "strict sciences", while processing subjectivity is still under development. It is worth noticing that objectivity is illusory. Often, the difference between subjectivity and objectivity is a matter of an arbitrary decision. For example, an item listed for sale for, let us say, 100,000 monetary units will very likely be sold for 99,999 of such units if such an offer is made. If so, one may also not resist accepting 99,998 monetary units, and so on. Setting a limit (the so-called "bottom line") is often a highly subjective decision. Scales help in many cases, but a large number of items (questions) is often a discouragement to their use.
It seems that one of the first successful rating scale reductions (RSR) took place in [13]. The 17-item Hamilton Rating Scale for Depression (HAM-D17) was used to derive an even more reduced version (HAM-D7) with seven items. According to [10], "The clinical utility of the HAM-D17 is hampered, in part, by the length of time required to administer the interview and by the lack of inter-rater reliability."
Heuristic algorithm
A heuristic is, in essence, a simplified method for solving a problem more quickly when well-established methods fail to provide a sound algorithm. Usually, this is achieved by the "good enough is perfect" approach mentioned in the introduction as a characterization of the heuristic solution. Heuristics are expected to produce a reasonable solution when the time frame and accuracy are a problem. The "good enough" solution to an urgent problem is commonly practiced in computer science. It is usually not the best solution to our problem, but it may still be of great value. For example, the traveling salesman problem (TSP), often formulated as "find the shortest possible route to visit each city exactly once and return to the origin city", cannot be solved for 50 or more cities by verifying all possible combinations, since the total number of such combinations would easily exceed the number of atoms in the entire Universe. Using heuristics, we can solve the TSP for millions of cities with an accuracy of a small fraction of 1%. Most heuristics produce results by themselves, but many are used in conjunction with optimization algorithms to improve their efficiency (e.g., differential evolution).
In our case, the number of possible combinations for a rating scale with 100 items (which is not uncommon) is a "cosmic number", hence the complete search must be ruled out. Computing the area under the receiver operating characteristic curve for all items is the basis for our heuristic. Common sense dictates that the contribution of the individual items to the overall value of the area under the receiver operating characteristic curve needs to be somehow utilized. We have decided on a stepwise heuristic. Certainly, the results need to be verified and used only if the item reduction is substantial.
The package description
Rating scales are used to elicit data about qualitative entities (e.g., research collaboration). This study presents an innovative method for reducing the number of rating scale items without predictability loss. The "area under the receiver operator curve" (AUC ROC) method is used. The presented method has reduced the number of rating scale items (variables) to 28.57% (from 21 to 6), making over 70% of the collected data unnecessary.
Results have been verified by two methods of analysis: the Graded Response Model (GRM) and Confirmatory Factor Analysis (CFA). GRM revealed that the new method differentiates observations with high and middle scores. CFA proved that the reliability of the rating scale has not been deteriorated by the scale item reduction. Both statistical analyses evidenced the usefulness of the AUC ROC reduction method.
Rating scales (also called assessment scales) are used to elicit data about quantitative entities. Often, the predictability of rating scales could be improved. Rating scales often use values "1 to 10", and some rating scales may have over 100 items (questions) to rate. Other popular terms for rating scales are survey and questionnaire, although a questionnaire is a method of data collection, while a survey may not necessarily be conducted by questionnaires; some surveys may be conducted by interviews or by analyzing web pages. Rating itself is very popular on the Internet for "Customer Reviews", where five stars (e.g., Amazon.com) are often used instead of ordinal numbers. One may regard such a rating as a one-item rating scale.
In computer science and mathematical optimization, a heuristic is a technique designed to find an approximate solution to a problem when classic methods fail to find an exact solution. Often, finding such methods is achieved by trading completeness, accuracy, or optimality for speed.
The main objective of a heuristic is to produce a solution that is good enough to solve our problem. It may not be the "best" solution, and it may only approximate the solution, since the optimal solution may require a prohibitively long time. The traveling salesman problem and virus scanning are probably the most recognized problems where the need for using heuristics is evident. In both cases, the complete search for the optimal solution would take thousands of years using the fastest computers built. One of the shortest heuristics may be "22/7" as an approximation of the constant π to two decimal places (3.14), as it is easier to remember and sometimes easier to use.
Herbert A. Simon was originally the proponent of bounded rationality. In practice, it means that human judgments are based on heuristics. He is the only person who received both the Nobel Prize and the Turing Award.
Data collected by a rating scale with a fixed number of items (questions) are stored in a table with one decision (in our case, binary) variable. The parametrized classifier is usually created from the total score of all items. The outcome of such rating scales is usually compared to external validation provided by assessing professionals (e.g., grant application committees).
Our approach not only reduces the number of items, but also sequences them according to their contribution to predictability. It is based on the Receiver Operator Characteristic (ROC), which gives individual scores for all examined items. The term "receiver operating characteristic" (ROC), or "ROC curve", was coined for a graphical plot illustrating the performance of radar operators (hence "operating"). A binary classifier represented the absence or presence of an enemy aircraft. It was used to plot the fraction of true positives out of the total actual positives (TPR = true positive rate) vs. the fraction of false positives out of the total actual negatives (FPR = false positive rate). Positive instances (P) and negative instances (N) for some condition are computed and stored as the four outcomes of a 2 × 2 contingency table or confusion matrix, as follows. Each patient either has or does not have the disorder. The screening outcome can be positive (classifying a patient as having the disorder) or negative (classifying the patient as not having the disorder). The screening results for each patient may or may not match the subject's actual status.
It means that in medical terminology we may have:
• TP = true positive: a patient is correctly identified as having the disorder,
• FP = false positive: a patient is incorrectly identified as having the disorder,
• TN = true negative: a patient with no disorder is correctly identified as not having the disorder,
• FN = false negative: a patient with the disorder is incorrectly identified as not having the disorder.
In simple terms, positive = identified and negative = rejected; hence TPR = TP/(TP + FN) and FPR = FP/(FP + TN). In assessment and evaluation research, the ROC curve is a representation of a "separator" (or decision) variable. The decision variable is usually "has a property" or "does not have a property", or has some condition to meet (pass/fail).
The frequencies of positive and negative cases of the diagnostic test vary with the "cut-off" value for positivity. By changing the "cut-off" value from 0 (all negatives) to a maximum value (all positives), we obtain the ROC by plotting TPR (the true positive rate, also called sensitivity) versus FPR (the false positive rate, equal to 1 − specificity) across the varying cut-offs, which generates a curve in the unit square called an ROC curve.
According to [2], the area under the curve (the AUC or AUROC) is equal to the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one (assuming 'positive' ranks higher than 'negative').
This can be seen as follows: the area under the curve is given by (the integral boundaries are reversed, as a large threshold T has a lower value on the x-axis)

A = \int_{+\infty}^{-\infty} \mathrm{TPR}(T)\, \mathrm{FPR}'(T)\, dT = \langle \mathrm{TPR}(T) \rangle_{T \sim f_0} = P(X_1 > X_0),

where the angular brackets denote the average over the distribution f_0 of the negative samples. The AUC is closely related to the Mann-Whitney U test, which tests whether positives are ranked higher than negatives. It is also equivalent to the Wilcoxon test of ranks.
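Both the cut-off sweep and the rank formulation described above can be illustrated in a few lines of base R (a self-contained sketch with synthetic data; it is not taken from the package):

# Build an ROC curve by sweeping the cut-off over a score vector, integrate it
# with the trapezoidal rule, and check the result against the rank
# (Mann-Whitney / Wilcoxon) formulation of the AUC.
set.seed(42)
score <- c(rnorm(50, mean = 1), rnorm(50, mean = 0))  # higher score = "more positive"
label <- c(rep(1, 50), rep(0, 50))                    # 1 = positive, 0 = negative

cutoffs <- sort(unique(score), decreasing = TRUE)
tpr <- sapply(cutoffs, function(t) mean(score[label == 1] >= t))  # sensitivity
fpr <- sapply(cutoffs, function(t) mean(score[label == 0] >= t))  # 1 - specificity

x <- c(0, fpr, 1); y <- c(0, tpr, 1)                  # pad with (0,0) and (1,1)
auc_trapezoid <- sum(diff(x) * (head(y, -1) + tail(y, -1)) / 2)

n1 <- sum(label == 1); n0 <- sum(label == 0)          # rank (Mann-Whitney) form:
r1 <- sum(rank(score)[label == 1])                    # rank sum of the positives
auc_rank <- (r1 - n1 * (n1 + 1) / 2) / (n1 * n0)

all.equal(auc_trapezoid, auc_rank)                    # TRUE for tie-free scores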
The ROC method is implemented by many R packages, including pROC [11] and ROCR [12]. There is also one interesting web application, easyROC [3], which makes it possible to compute the confusion matrix and plot the curve on-line. The RatingScaleReduction package expands this analysis to carry out the procedure of rating scale reduction.
A package for preprocessing "messy" data into a form that is easily analyzed within R is presented in [8]. In [15], the new R package sbtools gives users direct access to the advanced online data functionality provided by ScienceBase, the U.S. Geological Survey's online scientific data storage platform. It can be used for harvesting other data sets.
Rating scale stepwise reduction procedure
The procedure follows the heuristic algorithm represented by Fig. 2. Technically, it is an algorithm, since the flowchart represented by Fig. 2 shows a finite number of steps. It is, however, a heuristic algorithm, since the optimality of the presented approach cannot be guaranteed (as pointed out in Section 2). However, common sense dictates selecting the "best" attribute and then adding to it the next "best" attribute, where "best" refers to the area under the curve (AUC) value, since it is the universally accepted criterion for classifiers (in statistical classification and machine learning). A rating scale total is a classifier. Classification in this study is regarded as an instance of supervised learning. Briefly, it requires a training set of correctly identified examples (observations) with an external evaluation. In our case, a trained professional is needed to determine whether a subject (a screened psychiatric patient) had a mental disorder or not. An algorithm that implements a concrete classification is called a classifier. The most common way of doing it with a rating scale is by using the total of all items. Some of them may be negative (e.g., in the Oxford Happiness Questionnaire, see [4]). In the RatingScaleReduction package, the implemented algorithm (when reduced to its minimum) uses a loop over all attributes (with the class excluded) to compute the AUC. Subsequently, attributes are sorted in ascending order by AUC. The attribute with the largest AUC is added to a subset of all attributes (evidently, it cannot be empty, since it is supposed to be the minimum subset S of all attributes with the maximum AUC). We continue adding the next-in-line (according to AUC) attribute to the subset S, checking the AUC. If it decreases, we stop the procedure. There is a lot of checking involved (e.g., whether the data set is empty or full of replications). These steps are implemented in the startAuc, totalAuc, and rsr functions of the package.
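For readers who prefer code to a flowchart, the loop just described can be sketched in a few lines of base R (a minimal illustration, not the package's internal code; the packaged rsr() additionally performs the input checks mentioned above). Here data is a data.frame of numeric item columns and D is the binary decision vector:

auc_of <- function(score, D) {          # rank-based AUC of a single score vector
  n1 <- sum(D == 1); n0 <- sum(D == 0)
  (sum(rank(score)[D == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}

stepwise_rsr <- function(data, D) {
  item_auc <- sapply(data, auc_of, D = D)
  order_ix <- order(item_auc, decreasing = TRUE)     # best single item first
  selected <- order_ix[1]
  best_auc <- item_auc[order_ix[1]]
  for (j in order_ix[-1]) {                          # keep adding the next item...
    cand_auc <- auc_of(rowSums(data[, c(selected, j), drop = FALSE]), D)
    if (cand_auc <= best_auc) break                  # ...until the AUC stops growing
    selected <- c(selected, j)
    best_auc <- cand_auc
  }
  list(items = names(data)[selected], auc = best_auc)
}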
Before running the RSR procedure, the data set should be analyzed to detect replicated examples and so-called "gray" examples. One example may be replicated m times, where m is the total number of examples, so that there are no other examples. Such a situation would skew the computations and should be detected early. Ideally, no example should be replicated, but if the replication rate is small, we can proceed to computing the AUC. There is no generally accepted "golden rule" for the acceptable replication rate. Moreover, the data may contain gray examples, which should also be detected. A gray example is an example for which there are other examples in the data set having identical values on all attributes but a different decision. This analysis of the data set can be carried out using the functions diffExamples, grayExamplesN, and grayExamples.
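Both checks can be expressed compactly in base R (an illustrative sketch following the definitions above; the package's diffExamples, grayExamplesN, and grayExamples functions perform them with additional reporting). Here att is the attribute data.frame and D the decision vector:

dup_rate <- mean(duplicated(att))       # share of replicated attribute rows

# gray examples: identical attribute values but a different decision
key  <- apply(att, 1, paste, collapse = "\r")
gray <- ave(as.numeric(D), key, FUN = function(v) length(unique(v)) > 1) == 1
which(gray)                             # row indices of the gray examples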
An important problem after the scale reduction by the RSR procedure is to check for the possible inclusion of the next attribute in the reduced rating scale, by maximizing the AUC of all included items. In a highly unlikely scenario, all attributes will be included in the reduced (that is, non-reduced) set of items. A reduced rating scale of one attribute may be created if there is an identifying attribute. To test the inclusion, the function CheckAttr4Inclusion is available in the package.
RatingScaleReduction: overview of the package functions
The RatingScaleReduction package implements the above-stated stepwise procedure using two functions of the pROC package: roc and roc.test. It works on data given as a matrix or data.frame containing columns of attributes and one decision column with two categories, e.g., (0,1). The rows in the data.frame represent the examples in the sample. All attributes and the decision vector must be numeric. There are two groups of functions available in the package. Because the essence of the procedure is to set the attributes in the correct order, it is good practice to enter their names using, e.g., colnames in R. The first group is dedicated to carrying out the RSR procedure:
1. startAuc(attribute, D) - computes the AUC value of every single attribute in the rating scale.
2. totalAuc(attribute, D, plotT=FALSE) - sorts the attributes by their individual AUC and computes the running-total AUC; setting plotT to TRUE plots the running total.
3. rsr(attribute, D, plotRSR=FALSE) - the main function of the package, reducing the rating scale according to the procedure illustrated by Fig. 1. Setting the argument plotRSR to TRUE creates the plot of the ROC curve of the sum of the attributes in the reduced rating scale.
Additionally, the package provides a second group of functions to support the reduction procedure:
1. CheckAttr4Inclusion(attribute, D) - carries out a statistical test for a difference in the AUC of two correlated ROC curves: ROC1 of the sum of the attributes from the reduced rating scale, and ROC2 of this sum plus the next ordered attribute. The function roc.test from pROC is used, and all tests implemented there are available, in particular delong and bootstrap.
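A compact end-to-end usage sketch, assembled from the signatures quoted above (attribute is the data.frame of numeric items and D the binary decision vector; consult the package manual for defaults and exact return values):

library(RatingScaleReduction)

single.auc <- startAuc(attribute, D)           # AUC of every single item
tauc <- totalAuc(attribute, D, plotT = TRUE)   # items sorted by AUC + running total
tauc$summary                                   # per-item and running-total AUC
red  <- rsr(attribute, D, plotRSR = TRUE)      # the stepwise reduction itself
red$rsr.label                                  # items kept in the reduced scale
red$rsr.auc                                    # AUC after each inclusion
CheckAttr4Inclusion(attribute, D)              # test whether one more item helps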
Data sets used for verification and working with the package
These examples present the capabilities of the RatingScaleReduction package. The full code is available for download from https://github.com/woali/RatingScaleReduction/example Rj.r.
The first demonstration example: rating scale reduction
We consider the BDI data set used in [7]. It is the BDI (Beck Depression Inventory) rating scale for depression, with 21 attributes in our relational database. The goal in this example is to show how to reduce the BDI rating scale using the three main functions of the package. The data.frame we work on contains 21 columns with attributes and one additional column as a decision. The R output tauc.bdi$summary shows the AUC of every single attribute in the second column, sorted in ascending order. The running total of the AUCs is in the third column. The initially selected variable (BDI 1) in the first row is the attribute with the largest AUC. Subsequently, we add to it the variable with the largest AUC among the remaining attributes. The process continues until the last attribute of the scale is added.
Values in the running total (from the top down to the current variable) are checked for growth. Evidently, the value 0.725 in the first row is the same for the running total as for the single variable (BDI 1). However, the value in the third row (0.795) is not for variable BDI 7 alone but for the total of variables BDI 1, BDI 14, and BDI 7. In particular, the value (0.812) in the last row is for the total of all variables. Their line plot can easily be created by setting the totalAuc parameter plotT to TRUE. The plot for our BDI scale is illustrated by Fig. 2. The curve peak is at the sixth variable, BDI 15. Printing the value tauc.bdi$item, we obtain the attribute labels in ascending order.
> rsr.bdi <- rsr(attribute, D, plotRSR=TRUE)
The criteria: Stop first MAX AUC
> rsr.bdi$rsr.auc
[1] 0.7250092 0.7765412 0.7945490 0.8095300 0.8131352 0.8221669
$rsr.label
[1] "BDI_1" "BDI_14" "BDI_7" "BDI_9" "BDI_10" "BDI_15"
Setting the rsr parameter plotRSR to TRUE generates the plot illustrated by Fig. 3. We assume that by selecting the "best" attribute in a loop, we are able to reduce the number of attributes for the best predictiveness. In our case, having the largest AUC is the "best" criterion. Adding the next "best" attribute from the subset of the remaining attributes, until the AUC of all selected attributes decreases, is the main idea of our heuristic. So far, each and every rating scale has been reduced.
Second illustrative example: using the entire RSR procedure
Let us consider the Hepatitis data set analyzed in [9] and located at http://archive.ics.uci.edu/ml/. It has 20 attributes and 312 examples used in [1]. The goal is to illustrate how our entire RSR procedure may be used. To reduce the Hepatitis data set, we use hepato as the decision variable and the following attributes:
> names(att[,-d3])
"time" "status" "trt" "age" "sex" "ascites" "spiders" "edema" "bili" "chol" "albumin" "copper" "alk.phos" "ast" "trig" "platelet" "protime" "stage"
Figure 3: AUC of BDI 1 + BDI 14 + BDI 7 + BDI 9 + BDI 10 + BDI 15, for the BDI scale
The following steps are needed:
• detect duplicates and gray examples in the data,
• reduce the rating scale,
• check possible inclusion.
Working on a full data set is a time-consuming process, since it requires all pairwise comparisons to be analyzed. A short optimization procedure is used for attributes in two categories. The code below shows how to list the gray examples by comparing the subset of attributes (status, sex, spiders, stage) using the function grayExamplesN. The key issue is to properly modify the data. After reduction, the scale contains two attributes: stage and bili. The plot in Fig. 4 illustrates the AUC of the reduced rating scale and the corresponding ROC curve. The p-value = 0.04701 shows that, according to the DeLong test, the null hypothesis "H0: true difference in AUC is equal to 0" should not be rejected in favor of the alternative. Fig. 5 illustrates the two tested ROC curves.
The potential application targets
Rating scales are far more important contributors to science than we can do justice to in this study. Most examinations for granting scientific degrees are rating scales of various shapes and forms. Simplifying them (or reducing their size) is needed, since we are subjected to more and more examinations for much-needed certifications. In bioinformatics, reporting the trade-off in sensitivity and specificity by using a Receiver Operating Characteristic (ROC) curve is becoming common practice. An ROC plot has the sensitivity on the y-axis against the false positive rate (1 − specificity) on the x-axis. The ROC curve plot provides a visual tool to determine the boundary limit (or the separation threshold) of a subset (or a combination) of scale items for the potentially optimal combination of sensitivity and specificity. The area under the curve (AUC) of the ROC curve indicates the overall accuracy and the separation performance of the rating scale. It can readily be used to compare different item subsets. As a rule of thumb, the fewer the scale items used to maximize the AUC of the ROC curve, the better.
World Health Organization estimates are cited below for the disorders behind selected rating scales for mental disorders. Rating scales are of considerable importance for psychiatry, where they are predominantly used for screening patients for mental disorders such as:
• depression (see [7]), which affects 60 million people worldwide according to [14],
• bipolar affective disorder (60 million people),
• dementia and cognitive impairment (47.5 million people),
• schizophrenia (21 million people),
• autism and autism spectrum disorder (e.g., [5]),
• mania and bipolar disorder,
• addiction,
• personality and personality disorders,
• anxiety,
• ADHD; and many other disorders.
Figure 5: ROC curves for the original and reduced rating scales for the Hepatitis data set
Usually, there are many scales for each mental disorder. The most important for screening are global scales. Reducing these global rating scales makes them more usable, as indicated in [7]. The World Health Organization Media Centre reports that "depression and anxiety disorders cost the global economy US $1 trillion each year", and this is no longer a local problem.
Conclusions
The presented method has reduced the number of rating scale items (variables) to 28.57% of the original number of items (from 21 to 6). It means that over 70% of the collected data was unnecessary. This is not only an essential budgetary saving, as data collection is usually expensive and may easily run into hundreds of thousands of dollars; excessive data collection may also increase data collection errors. The more data are collected, the more errors may occur, since a lack of concentration and boredom are realistic factors.
By using the proposed AUC ROC reduction method, the predictability has increased by approximately 0.5%. It may seem insignificant; however, for a large population, it is of considerable importance. In fact, [14] states that "Taken together, mental, neurological and substance use disorders exact a high toll, accounting for 13% of the total global burden." The proposed use of the AUC as a criterion for reducing the number of rating scale items is innovative and applicable to practically all rating scales. The R code is posted on the Internet as the RatingScaleReduction package for general use. Certainly, more validation cases would be helpful, and assistance will be provided to anyone who wishes to try this method using his/her data.
Future plans include using the presented method for financial data analysis, but the real aim of our collaborative effort is psychiatric scales. The reduced scales can be further enhanced by the method described in [5] and [6]. | 5,607 | 2017-03-16T00:00:00.000 | [
"Mathematics"
] |
THE BRAHMI FAMILY OF SCRIPTS AND HANGUL: Alphabets or Syllabaries?
A great deal of disagreement exists as to whether the writing systems of the Brahmi family of scripts from Southeast Asia and the Indian subcontinent and the Hangul script of Korea should be classified as alphabets or syllabaries. Both have in common a mixture of syllabic and alphabetic characteristics that has spawned vigorous disagreement among scholars as to their classification. An analysis of the alphabetic and syllabic aspects of the two writing systems is presented, and it is demonstrated that each exhibits significant characteristics of both types of writing systems. It is concluded that neither label entirely does either writing system justice.
Linguists studying the writing systems of the world have traditionally classified them according to three categories, those of logographic, syllabic, and alphabetic scripts.
The Brahmi writing systems found throughout the Indian subcontinent and Southeast Asia, as well as the Korean Hangul script, however, both defy classification. The two have in common a mixture of syllabic and alphabetic characteristics that has spawned vigorous disagreement among the scholars discussing them.
For example, Lambert (1953) refers to the Devanagari script used to write Sanskrit and its daughter languages as a syllabary, Shamasastry (1906) as an alphabet, Coulson (1976:3) describes it as 'halfway in character between an alphabet and a very regular syllabary,' while Cardona (1987) simply calls it a script and avoids the issue in his overview of Sanskrit. An examination of the various alphabetic and syllabic aspects of these writing systems is therefore in order, and indeed the results of such an investigation would seem to indicate that neither label fully does justice to them.
The Brahmi family of scripts, so named for their descent from the Brahmi script which is first attested in the third century B.C.,1 are distinctive in having in common, to a greater or lesser extent, a number of characteristics that begin to surface in their progenitor. Foremost of these is what Masica (1991:136) hails as 'The great innovation of the Brahmi script, its indication of vowels other than a ([ə]) by modifications added to the basic consonant symbols.' The vowel corresponding to (a) itself is regarded as assumed or inherent to each consonant in its most basic form, and any vowel pronounced after the consonant is represented by a marker appended in some fashion to the consonantal symbol. Vowels also tend to have distinct allographs when they occur in an initial position. A consonant standing alone must be so indicated by a special diacritic, and consonants otherwise not followed by any vowel, as in consonant clusters, tend to appear in some altered or abbreviated form.
Descendants of the Brahmi script are most commonly associated with the Indic and the Dravidian languages of India.
They are also represented in the two primary members of the Tibeto-Burmese family, as well as in significant members of the Khmer and Kam-Tai families. Brahmi-derived scripts have also made their way to such scattered locales in time and place as Sumatra, the Philippines, and the extinct Tokharian language.2 The most widely known member of this family of scripts, however, is the Devanagari script, most particularly as it is employed in writing Sanskrit.
It was also at the hands of the grammarians who adapted Devanagari to the writing of Sanskrit that the aforementioned qualities peculiar to Brahmi writing systems became perhaps most pronounced. While an analysis of Brahmi scripts should consider a representative sampling of them, Sanskrit Devanagari is generally taken as the most representative case, and is therefore the best point at which to begin.
The characters of the Devanagari script are elegant not only in appearance but, in Sanskrit at least, in operation as well. As mentioned above, Devanagari consonantal characters are considered to include in their basic, 'unmarked' form the vowel [a], corresponding to [ə] in Sanskrit and most of its daughter languages, pronounced after the articulation of the consonant itself.
Thus, the characters for Sanskrit's voiceless unaspirated plosives, क, च, ट, त, and प, stand for the syllables [ka], [ca], [ṭa], [ta], and [pa], respectively. When the consonant has no following sound, as utterance-finally or in isolation, a diacritic known as a virāma is placed to the lower right of the character, so that च् and त् indicate [c] and [t] alone.
The vowel [a] is overtly indicated only in an initial position, by the character अ. All other vowels and diphthongs have one allograph used initially, and another, smaller one when pronounced following a consonant. These latter allographs may be attached to the consonantal sign at almost any portion of it, such as to the right, as for ता [tā], ती [tī]; below, as in तु [tu], तू [tū], तृ [tṛ]; above, as in ते [te], तै [tai]; above and to the right, as for तो [to], तौ [tau]; and even to the left of the consonantal symbol, as in ति [ti].
The signs for these vowels in an initial position, on the other hand, are distinct characters such as आ [ā] and इ [i]. Two diacritics frequently modify vowels. The anusvāra ( ं ) indicates vowel nasalization and is customarily transcribed, e.g., -aṁ. The visarga ( ः ; ḥ) is an aspirated echo of the vowel it modifies (Coulson 9).
Although Devanagari does not readily lend itself to the representation of consonant clusters, such clusters are quite common in Sanskrit. These are represented by ligatures known as conjunct consonants, wherein two or more consonantal characters are modified to fit together in a larger conglomeration. The two most common means of effecting these combinations are horizontally, which generally involves deleting the vertical stroke where present for non-final members of the clusters, as in स्त [sta], from स् [s] and त [ta], or ब्य [bya], from ब् [b] and य [ya]; and vertically, as in ङ्ग [ṅga], from ङ् [ṅ] and ग [ga], or द्व [dva], from द् [d] and व [va]. Some combinations may be made in either fashion, as in च्च [cca], although the advent of printing has made the former method more desirable. These conjuncts can appear quite formidable and bewildering. Conjuncts involving the flap [r] are of particular interest.
[r] following a consonant is represented by a short diagonal mark to the lower left of the consonantal character, as in क्र [kra].
However, when [r] precedes a consonant, it is indicated by a small hook above and as far to the right of the character as possible, as in र्त [rta].
In syllables involving the diacritic anusvāra, this hook appears even to the right of it, as in यज्ञार्थं [yajñārthaṁ], 'for sacrificial purposes.' A question commonly invoked in determining whether a script might be considered alphabetic or syllabic is whether or not its most basic unit corresponds more or less with the phoneme; that is, whether it approaches an ideal principle of 'one sign per phoneme' (Gaur 1985:119; see also Kim 1987:888-9). However, this principle would seem to be for the most part irrelevant in Sanskrit Devanagari. Two points support this view. The first is what Masica (146) refers to as 'phonemic overkill' in the inventory of characters. He argues that the visarga is in fact an allophone of /s/, and that the velar and palatal nasals (ङ and ञ) were 'largely predictable' in their distribution. It is true that they virtually never appear apart from a homorganic obstruent, and this would tend to indicate that they are less than full-fledged phonemes in Sanskrit and may have been included in the script to provide symmetry by pairing nasals with the velar and palatal series of stops along with those of the retroflex, dental, and labial series (ण, न, म respectively).
The second of these points is embodied in the phenomenon of sandhi. Devanagari was adapted to Sanskrit with the goal of reproducing exact pronunciation as faithfully as possible, and the term sandhi, meaning 'juncture,' refers to all of the assimilation in voicing and place of articulation among consonants and the coalescence and glide formation among vowels at word boundaries and between lexical stems in compounding. A word-final segment analyzable phonemically as /t/ may be written, with pronunciation in mind, in several different ways, depending on the initial sound of the following word. Words are not separated from one another within clauses in written Sanskrit unless the first word ends in a vowel and the second begins with a consonant, or the first word ends with a visarga and the second begins with a voiceless consonant, or unless the regular and predictable sandhi rules result in hiatus between two vowels. In attempting to separate strings of words into their component members, students of Sanskrit must work their way backward through these sandhi rules. The rules for sandhi, given their predictability and their application across word boundaries, bear a striking resemblance to the post-lexical rules of the theory of lexical phonology (see J. T. Jensen 1990:84-7, 174-6). It must be concluded from the practices of regular Sanskrit orthography that the script was not adapted to the language with units corresponding to phonemes in mind. The implication of this fact would seem to be that, while the orthography was organized to capture each sound as it passes from the lips of the speaker, these individual sounds were not considered meaningful in and of themselves. One is therefore left with no unit of analysis between the phonetic segment and the syllable, conceived as a vowel preceded by any number of consonants (see Coulmas 1989:41-2).
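To make the post-lexical, rule-governed flavour of sandhi concrete, here is a toy sketch of our own (not part of the original discussion) covering a few external-sandhi outcomes of word-final /t/ in a romanized notation; real sandhi covers far more contexts than this.

```python
# A deliberately simplified model of Sanskrit external sandhi for final t.
def sandhi_final_t(word, next_word):
    first = next_word[0]
    if first in "aeiou" or first in "gdbvyr":  # voiced sounds: t -> d
        word = word[:-1] + "d"
    elif first in "nm":                        # nasals: t -> n
        word = word[:-1] + "n"
    elif first == "c":                         # palatal voiceless: t -> c
        word = word[:-1] + "c"
    elif first == "j":                         # palatal voiced: t -> j
        word = word[:-1] + "j"
    elif first == "l":                         # lateral: t -> l
        word = word[:-1] + "l"
    return word + " " + next_word

print(sandhi_final_t("tat", "eva"))    # -> 'tad eva'
print(sandhi_final_t("tat", "ca"))     # -> 'tac ca'
print(sandhi_final_t("tat", "namah"))  # -> 'tan namah'
```

Because such rules are exceptionless and run across word boundaries, the reader of Sanskrit can always undo them; this is the predictability the paragraph above appeals to.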
A sample of written Sanskrit, accompanied by a transcription and translation, follows (adapted from Katzner 174):
In Hastinapura there was a washerman named Vilasa.His donkey was near death, having become weak from carrying excessive burdens.So the washerman covered him with a tiger-skin and turned him loose in a cornfield near a forest.The owners of the field, seeing him from a distance, fled away in haste, under the notion that he was a tiger.
Most of the modern Indic languages employ Brahmi scripts, and indeed most of these scripts are fairly closely related to Devanagari. Aside from some relatively minor languages, however, only Hindi, Marathi, and Nepali are generally written in the Devanagari script. Masica explains this great number of different scripts by noting that there was no unifying political or religious force, such as the Roman Empire and Catholic Church in western Europe or the Koran in the Islamic world, over most of Indian history (137), so that the sundry language communities tended to develop their own scripts. Then, 'What may have been the high water mark of script differentiation unfortunately coincided with the introduction of printing, which had a tendency to freeze and accentuate many minor differences (144).' He also observes that in the linguistic hodgepodge that is India, languages are under tremendous pressure to maintain a distinct identity, so that 'there is a widespread feeling that a self-respecting language should have its own script (27).' Even Hindi and Nepali have some divergent orthographic customs for the script they share (145). For the purposes of this discussion, the Devanagari of Hindi and Marathi will be considered, along with the closely related but visually more distinct Gujarati script and the somewhat less closely related Bengali script.
The Devanagari characters as used for Hindi and Marathi are essentially identical to those of Sanskrit.4 The most significant innovation in shape involves the importation of non-Indic segments such as [q] and [f] from Arabic as well as Persian and English.
In these cases a subscript dot is added to the characters phonetically closest to the new sounds.
Thus क [ka] becomes क़ [qa], and फ [pha] becomes फ़ [fa]. There are, however, two more fundamental changes in the script, pertaining to the manner in which it is mapped onto the spoken language. The first of these renders the script less imposing in appearance. Sandhi rules are no longer taken into consideration, so that separation between words is always maintained. Such rules are not effective within words, either; the modern languages under discussion allow two consecutive vocalic syllable nuclei within a word, with the second represented by the initial allograph, as in कई [kaī] 'several,' or बुआ [buā] 'paternal aunt.' In Sanskrit, any such sequence would have been coalesced together, or reduced to a glide-vowel sequence. While individual words in Hindi and Marathi are easily distinguishable, the pronunciation of these words is rather less accessible to the non-native reader than in Sanskrit.
In some, but not all environments, the inherent vowel (a) is deleted.In these instances, a consonantal character stands for its corresponding segment alone, and no additional diacritic is necessary.
The most easily predictable environment is word-finally, as in पर [par] 'but,' or क्षण [kṣaṇ] 'moment.' Word-medial environments are less obvious. The best discussion of this phenomenon is in Ohala (1983). She argues that the most basic environment for deletion of the inherent vowel is VC_CV (121). This is fairly readily apparent where the two vowels are overtly marked, as in कोहनी [kohnī] 'elbow,' or चुनना [cunnā] 'to choose.' More troublesome are cases where one or both of the vowels are themselves the inherent vowel. Ohala argues that the deletion rule then applies right to left from a morpheme boundary. She bases her conclusion on such data as the following: the rare word pronounced [godnasin] 'adopted' is derived from /god+nasin/ 'lap+sitter' but is written in Devanagari as (godanasin).
If a speaker knows the word is /god+nasin/ he will not pronounce the (d) of /god/ as a CV syllable (i.e., [da]), but will correctly render it as simply the consonant [d]; he will also retain the /a/ in /nasin/.
However, if he doesn't know the true morpheme boundary then he applies his a-deletion rule from right to left and pronounces it as [godansin] (124).
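Ohala's rule is mechanical enough to state as a procedure. The following toy sketch is our own, in a romanized notation rather than her formalism; it reproduces the two cases just cited.

```python
# Toy model of Hindi schwa deletion: the inherent vowel 'a' deletes in
# the context VC_CV, scanning right to left (no morpheme boundaries).
VOWELS = set("aeiou")  # simplified inventory for this sketch

def delete_schwa(word):
    segs = list(word)
    for i in range(len(segs) - 1, -1, -1):  # right to left
        if (segs[i] == "a"
                and 2 <= i <= len(segs) - 3
                and segs[i - 1] not in VOWELS   # C before
                and segs[i - 2] in VOWELS       # V before that
                and segs[i + 1] not in VOWELS   # C after
                and segs[i + 2] in VOWELS):     # V after
            del segs[i]
    return "".join(segs)

print(delete_schwa("kohani"))    # -> 'kohni', elbow
print(delete_schwa("godanasin")) # -> 'godansin', as in Ohala's example
```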
Conjunct consonants do occur in Hindi, but they are rare relative to Sanskrit.
Lambert indicates that they do not occur across morpheme boundaries (77); when they do appear, they are often in environments where a-deletion cannot be predicted by Ohala's rule, such as word-initially: /sneh/ 'love.' However, they also quite frequently occur where a-deletion is predictable, as in /kacca/ 'raw, uncooked,' /taiksi/ 'taxi,' or /janmadin/ 'birthday.' Many of these are geminates, and Lambert takes pains to make clear that a-deletion cannot occur in loanwords from other languages, particularly Sanskrit (78-83). Nevertheless, anyone who has internalized Ohala's rule should be able to predict the pronunciation of most written Hindi words. A sample of written Hindi, in transcription, with a translation following: Hori use jate dekhta hua apna kaleja thandha karta raha. Ab larke ki sagai me der na karni cahie.
Gobar said nothing more.He put his staff on his shoulder and walked away.
Hori looked with pride at the receding figure of his son.He was growing into a fine young man.
The Gujarati script is fairly close in appearance to the Devanagari.
It differs chiefly in the absence of the distinctive headstroke. The phonotactics of Gujarati are quite similar to those of Hindi, except that consecutive vowels are not allowable within a word. A sample of written Gujarati follows, accompanied by a translation and transcription, adapted from Katzner. The appearance of the Bengali script is quite different from that of the Devanagari; broadly speaking, its characters can be described as tending toward a rather triangular shape. The Bengali language itself differs from the majority of Indic languages in that its vowel corresponding to a has drifted in articulation to [ɔ]. This then is its inherent vowel, and so the consonantal character ত is taken to stand for [tɔ]. The vowels ঐ and ঔ, corresponding to Devanagari ऐ and औ ([ai] and [au]), are pronounced [oi] and [ou].
One noteworthy feature of the Bengali script is that, in addition to তি [ti], other non-initial vowels are written before the consonantal character: তে [te], তৈ [toi]. Two others are written to either side of it: তো [to], তৌ [tou]. Signs for other non-initial vowels are not greatly different from their Devanagari counterparts. Unlike Hindi words, whose pronunciations are predictable from their written form but not the reverse, in Bengali neither is fully predictable, since inherent vowel-deletion is not regular. Thus, the written form মত, orthographically (mɔtɔ), may denote either /mɔt/ 'idea, opinion,' or /mɔto/ 'similar, like' (Lambert 185).
Further compounding difficulties, as is apparent from the latter example, the inherent vowel may also be pronounced [o], so that it overlaps with ও [o]. Ray et al. (1966:15) state that there 'are no simple rules' for this alternation of ɔ and o, and Lambert (185) asserts that the proper realization can be understood 'only by a knowledge of spoken Bengali.' A sample of written Bengali, with a transcription (albeit without taking into account the shift in pronunciation of the inherent vowel) and translation, follows (from H. Jensen 379-80):
Among the rich in the old days was a man called Amad Sultan.He possessed great wealth and also a numerous army.
Certain features of some non-Indic Brahmi scripts are worth noting, at least in passing.
Of note in the Tamil script is the puḷḷi. This is a raised dot corresponding in function to the Devanagari virāma, but, unlike its counterpart, as Steever (1987:734) observes, the use of the puḷḷi is instrumental in the correct representation of consonant clusters. Thus, in Tamil conjunct consonants are unnecessary.
The Thai script offers an example of diacritics used to indicate a fairly complex tone system. A consonant sign falls into one of three classes, and this class, in conjunction with any of four diacritics or the absence of one, determines the tone for that consonant's syllable (Hudak 1987:766).
Thai also appears to be unusual among Brahmi scripts in that consonantal characters have no inherent vowel; a character such as น stands simply for /n/.
In any comparison between the Brahmi family of scripts and the Korean Hangul script, that of the Tibetan language is particularly worthy of note as it is often mentioned as possibly having had some influence on the shaping of Hangul (Gaur 85, Diringer 1968:354, Lee 1983:7).
In this connection perhaps its most significant feature is the tsheg, a syllable-ending point; otherwise, only a narrow space separates each consonant character. Beyond this, it is fairly similar to the Devanagari script in appearance. In contrast with Devanagari, however, Tibetan syllables contain a staggering number of apparently superfluous consonantal signs called pre-, super-, sub- and postscripts, relics of the changes in spoken Tibetan since the script was invented, 'with auxiliary significance or none (Miller 1956:6),' which 'allow for variety in the writing of one and the same phonetic shape;' these 'just have to be memorized word by word: there is no rule to guide in their usage (8).' The Tibetan script does have largely the same system of vowel indication as the Devanagari.
[Translation of a Tibetan sample:] Even if you don't understand your neighbor, make allowances for him and his peculiarity.
There is no lack of scholarly opinion concerning the question of whether members of the Brahmi family of scripts should be considered alphabetic or syllabic; agreement alone is lacking on this topic. Masica refers to the scripts used for modern Indic languages as alphabets (145), while Snell & Weightman (1989:5) introduce Hindi Devanagari as a syllabary. Kachru (1987:474), also writing on Hindi, states that the script is 'syllabic in that every consonant symbol represents the consonant plus the inherent vowel /ə/,' but then on the next page the characters of the script are listed under the heading of an alphabet. Klaiman (1987:493), writing on Bengali, describes its script as 'organised according to syllabic rather than segmental units,' and Ray et al. declare that 'It is a syllabary, modified somewhat towards becoming an alphabet' (12). Lambert maintains that all of the Indic scripts set forth in her work are syllabaries. Hudak (764) refers to the Thai script as an alphabet, and Miller (1) calls the Tibetan system of writing 'an alphabetic script on syllabic principles.' Wheatley, writing on the Burmese Brahmi script, declares that the inherent vowel 'sometimes leads to Indic writing systems being incorrectly labeled "syllabic"' (1987:844), but Steever, discussing Tamil's Indic script in the same volume, refers to it as a syllabary (1987:734).
Disagreement among scholars of writing in general on the typological classification of Brahmi scripts arises in large measure from their differing definitions of alphabetic and syllabic systems.Gaur stresses that 'in alphabetic scripts... vowels and consonants have equal status' (119) and, since this is clearly not the case for Brahmi scripts, they are classified as syllabic.
Gelb (1965) is on the whole unwilling to commit himself. He declares, 'The main characteristic of the alphabet is the existence of special signs for both consonants and vowels' (184), but then observes that in Indic writing systems the vowel indicators are 'attached to the respective syllabic signs' (187). He describes the inherent vowel as an 'abnormal development' (239) and relinquishes the question by calling for 'sharper typological definitions' for future discussions (188). DeFrancis (1989) draws a sharp distinction between syllabic scripts such as that of Japanese, which represent syllables by means of unitary syllabic signs, and Indic scripts, which are 'syllabic' only in the quite different sense that they represent phonemes by means of nonunitary signs: graphemes representing phonemes which are grouped together to form a syllabic bundle. Such scripts must still be classified as basically phonemic systems. He goes on to observe, 'The unit of writing, the syllable, is not the same as the unit of underlying analysis, the phoneme.' For both Coulmas and DeFrancis, then, it is this unit of analysis that establishes a script's typological status.
H. Jensen and Diringer both gravitate toward the alphabetic viewpoint.
Jensen writes regarding the classification of Brahmi scripts as syllabic: There is some justice in this point of view; on the other hand, however, two things must be emphasized, first that there are no syllable-signs for [e.g.] ki, ku, kā, kai, etc.; on the contrary, in these cases a vowel sign is added, and the sign concerned thus has to lose its a and become a pure consonant-sign; and secondly that when several consonants come together... the many ligatures themselves... show that the signs are first and foremost pure consonant-signs and that the inherence of an a represents, not something essential, but a peculiarity. (362-3) Diringer, too, argues that the individual representation of sounds in the absence of an inherent vowel gives the Brahmi scripts an alphabetic classification: 'Syllabic forms of writing... are ultimately based on the fact that the smallest unit into which any spoken word or series of sounds can be subdivided is the syllable' (1962:23).
Later, however, he comes to view the inherent vowel as a flaw in the writing system and therefore calls the Devanagari script a 'semi-syllabary' (1968:283). Both alphabetic and syllabic arguments regarding the typological classification of Brahmi scripts unquestionably have merit. With the exception of postconsonantal /a/, every phoneme receives an explicit segmental representation and, as the scripts were originally conceived at least, /a/ could invariably be considered as present in the absence of any other mark. Still, it should be borne in mind that the existence of this inherent vowel is not some sort of aberration, but has been a part of these scripts from their origin, and that written words are divided down into the syllable-based units of which they are composed.
In contrast, in an unambiguously alphabetic script words are constructed directly from their member segments, and these segments always appear in the same linear order relative to pronunciation.
In Brahmi scripts, within syllabic units, although every individual segment may be in evidence, the reader must have at least some ability to arrange these items into the proper order of pronunciation, as the signs themselves may appear in virtually any order within their syllabic bundles.
In the Sanskrit word [arthin] 'wanting, petitioning,' the sequence r-th-i appears in reverse order relative to the left-to-right direction of the script. The assessments of Coulson and Diringer that the Devanagari script is neither wholly alphabetic nor wholly syllabic may therefore be said to possess considerable insight, for neither classification does the writing system complete justice.
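This divergence between phonetic order and visual order survives in modern computing: Unicode stores Devanagari in logical (phonetic) order and leaves the visual reordering to the rendering engine. A small illustration of ours, not from the original article:

```python
# The short vowel sign i (U+093F) is stored AFTER the consonant it
# follows in speech, but is drawn to its LEFT, as described above.
ki = "\u0915\u093F"               # KA + vowel sign I, displayed as कि
print([hex(ord(c)) for c in ki])  # ['0x915', '0x93f']: stored order k, i
# The visual order (i-sign, then k) is produced by the renderer;
# the encoding itself keeps the order of pronunciation.
```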
The Korean Hangul writing system has been widely praised for the logic and straightforwardness with which it was devised. Gale (1912:14), for example, writes, 'In simplicity, the Korean [script] has perhaps no equal, easy to learn and comprehensive in its power of expression.' Although it has forty signs corresponding to individual sounds, many of these are formed by regular principles from the more basic signs. The basic consonantal signs are: ㄱ /k/, ㄴ /n/, ㄷ /t/, ㄹ /l/ ([r] initially), ㅁ /m/, ㅂ /p/, ㅅ /s/, ㅇ (zero initially), ㅈ /c/, ㅎ /h/. Aspirated plosives are indicated by adding a stroke to the symbols for the unaspirated ones: ㅋ /kh/, ㅍ /ph/, ㅊ /ch/.
In like manner, there are eight basic vowel signs. Symbols for two other 'pure vowels' (N. K. Kim 889), /ü/ and /ö/, are formed by adding ㅣ to the signs for their back counterparts and are alternately analyzed as /wi/ and /we/ (Lukoff 1982:xvi).
One other diphthong combines /ŭ/ and /i/: ㅢ. These individual signs are grouped together to form syllable-based blocks, again according to regular principles. The vowel-sign always occupies the central position, thus becoming the 'nucleus' for the syllabic group. Then, depending on whether the vowel-sign is vertical or horizontal, the syllable-initial consonant is indicated either above or to the left of it: 자 /ca/. This initial position is never left empty; if there is no syllabic onset, a silent ㅇ appears in the initial position: 이 /i/, 요 /yo/.
The final position may be left empty; when it is filled, it always appears at the bottom of the block, beneath the other two signs: 잔 /can/, 돌 /tol/, 완 /wan/.
These syllabic blocks have customarily been written vertically, although they sometimes are arranged horizontally to accommodate printing.
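The regularity of the block structure is preserved in modern Unicode, where a precomposed syllable is a simple arithmetic function of its initial, medial, and final jamo indices. A brief illustration of ours (the particular syllable is arbitrary):

```python
# Unicode Hangul composition: syllable = 0xAC00 + (L*21 + V)*28 + T,
# with L, V, T the indices of initial, medial, and final jamo.
L, V, T = 18, 0, 4  # initial ㅎ, medial ㅏ, final ㄴ
syllable = chr(0xAC00 + (L * 21 + V) * 28 + T)
print(syllable)  # 한 : onset, nucleus, and coda packed into one block
```

That the mapping is pure arithmetic reflects exactly the regular onset-nucleus-coda layout described above.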
A sample of written Korean follows, accompanied by a transcription and translation (adapted from Katzner): na po-ki-ka yak-kya-ik' ka-sil e-run mal-aps-i ko-hi po-ri u-ri-ta.
From Mount Yag of Yongbyon An armful of azaleas I shall pick, And strew them in your path.
Go now, I pray, with short steps!Let each footstep gently tread The flowers which I have strewn for you.
When you take your leave, Tired of seeing me, Though I should die, I shall not weep.
The pronunciation of the individual signs is not unvarying.
For example, the alternation of /l/ with [r] has been noted, unaspirated stops are voiced word-medially, and in a syllable-final position ㅅ /s/ is pronounced [t] and the laryngealization contrast is neutralized.
All of these alternations, however, are completely predictable in any given environment, a fact which has by no means been lost on those analyzing the Hangul script. Taylor (1980:68), discussing the script's alphabetic aspects, comments, 'In Hangul the ideal of one symbol for one phoneme is almost realized.' Coulmas writes, 'Of all the systems that were actually invented as writing systems, the Korean script comes closest to treating distinctive features as the basic units of representation' (120).
DeFrancis goes even further, declaring, 'Korean as written today is more accurately designated as morphophonemic. That is to say, changes in pronunciation are generally not indicated in the spelling if they can be predicted from the environment' (193).
In Hangul, every spoken segment is accounted for in the script, and the phonetic value of any given sign can be ascertained from its environment.Such characteristics would not only tend to indicate that the Hangul script is an alphabet, but a very good one at that.
Taylor, however, stresses the syllabic aspects of the script as well,5 finding certain advantages to the fact that the primary visual object is a syllable rather than a phoneme: Sequencing and grouping sounds can be stages in word identification. Problems associated with these stages can be minimized in a syllabary where the syllabic breaks within a word are immediately apparent and a word requires only a short array of letters... Another advantage of a syllabary is that a syllable is a stable and concrete unit to compare with a phoneme. Often a consonant phoneme by itself cannot be pronounced or described until it is paired with vowels to form a syllable. Not surprisingly, a syllabary is easier to develop and to learn than an alphabet. Young children find it easier to segment words into syllables than into phonemes. (70) Coulmas, too, notes the advantages of the script's syllabic arrangement after observing its phonemic accuracy (120), and does not venture to classify it as either alphabetic or syllabic. Among other commentators, Gaur emphasizes the syllabic organization of the Hangul (84-5), although few scripts better meet the criterion of approaching the ideal of one sign per phoneme (119).
In DeFrancis' view, Hangul is no more syllabic than he sees the Indic scripts as being (193); he goes so far as to assert, 'Korean can be called syllabic only in the same sense that English can be called logographic because it groups its letters into words' (192). This, however, would seem to overlook Taylor's arguments regarding the different approach to the script necessitated for the reader by this different arrangement. H. Jensen calls Hangul a 'pure alphabetic script' (211), while Diringer describes it as 'practically an alphabet' (1968:352).
One apparent source of disagreement is terminological.To DeFrancis, Lukoff, and N. K. Kim, the component members of the syllabic blocks are letters of an alphabet, while for Taylor the blocks themselves are the letters, and J. P. Kim (1983) seems to use the term interchangeably.
Kim does also use the term 'syllabigraph' to refer to these units; he credits typographic designer Ann Sang-oo for coining this word, 'for lack of an existing one to express the way Korean units are constructed... Hangul combines the features of an alphabet and syllabary' (22).
A factor which may impel scholars to typologize such a script as an alphabet is that such prominent theorists of the subject as Gelb (201) and H. Jensen (52-3) explicitly regard alphabetic scripts as more evolved and therefore more advanced. To acknowledge the syllable-based aspects of a script might therefore seem to diminish its prestige by implying that it is somehow more 'primitive.' In this connection, it is worth noting, with Gaur, that some scripts do not shed their syllabic characteristics to evolve into full-fledged alphabets simply 'because syllabic scripts are an excellent vehicle for the representation of a large number of languages' (119).
It also remains true that the Korean script is a work of genius by whatever name one chooses to refer to it. DeFrancis aptly describes King Sejong, the script's reputed inventor, who ruled during the fifteenth century, as 'a monarch who, if rulers were ever measured by anything besides military exploits, would surely rank among the foremost of those who have appeared on the stage of history' (188).
In any event, while the Hangul writing system's phonemic representation is nothing short of remarkable, its syllabic orientation, as is true of the Brahmi scripts, is significant enough that it cannot be ignored.
Neither Hangul nor the Brahmi family of scripts may be classified as either alphabetic or syllabic with complete accuracy.One might therefore pause to consider where they fit relative to one another on a continuum between the two script types.
A particularly striking contrast between the two writing systems is the inherent vowel of the Brahmi scripts as opposed to what in Hangul might be considered an 'inherent initial consonant.' No syllabic block may appear with its initial position unfilled; if there is no pronounced syllabic onset, ㅇ is written but remains silent. Gale in fact notes that the script originally also had three other silent initials, but that ㅇ was eventually substituted for them (44).
As a result, every written Korean syllable must include an onset of some sort and a vocalic nucleus, although the coda remains optional. In Brahmi scripts such as Devanagari, however, the consonantal character conceived as the most significant element of a syllable may appear in certain circumstances with no following vowel if a virāma is attached.
This indeed is the fundamental difference between the two; in Hangul the vowel which modern theory refers to as the syllabic nucleus occupies the central and most prominent position, while in the Brahmi scripts, it is the consonant immediately preceding this vowel that is considered the basis upon which the rest of the syllable is built.
Immediately preceding consonants, conjoined to this segment, are considered part of this syllable, as Lambert (76) explicitly states. Also indicative of this is the fact that, if in Devanagari the vowel [i] is pronounced after a consonant cluster such as [str-], the vowel-sign is written before the entire cluster: स्त्रि. Hangul holds a more 'modern' conception of the syllable. It is also more regular and more linear in its organization of the syllable; consonants preceding the vowel are always written above or to the left of it, while those following are always below it. Brahmi vowel diacritics, on the other hand, may appear in any direction from the consonant, and even, in the Thai and Bengali scripts, on two sides of it.
It may therefore be concluded, on the whole, that while neither Hangul nor the Brahmi family of scripts is completely alphabetic, Hangul comes much closer to fitting this description.
Nevertheless, the relative typological similarity between the two writing systems, coupled with the recent origin of the Korean script, inevitably raises the question of whether any of the Brahmi scripts might have had some influence on the shaping of Hangul. Of course, by far the greatest outside influence on Korean culture was China, and the Hangul syllabigraphs certainly bear a greater casual resemblance to Chinese characters than to those of any of the Brahmi scripts. DeFrancis affirms, 'What Sejong did was to adapt the Chinese principle of equidimensional syllabic blocks by grouping the letters that comprise a Korean syllable into blocks separated from each other by white space' (191).
The fact remains, however, that Hangul is much closer typologically to the Brahmi writing systems than to that of Chinese.
H. Jensen reports that before the invention of Hangul Koreans had obtained some utility from various Chinese methods of rendering unfamiliar sounds by adapting existing characters to syllabic usage, and assumes that the Koreans thereby became aware of the syllabic principle (179, 211). Gale (1912) argues that one particular set of syllabic characters was in turn inspired by the Devanagari script (42). An indirect relationship at least is thus demonstrated.
Moreover, a number of scholars, among them Gaur (85) and Lee (6-7) suggest that the Sanskrit and Tibetan languages as well as the scripts with which they were written would quite likely have been known to literate Koreans, and Lee points to these as likely sources for the alphabetic aspects of Hangul.
H. Jensen also mentions a Korean writing system known as the Pumsŏ script, developed before the time of Sejong, which is used 'in Buddhist ceremonies of prayer and sacrifice for the transcription of foreign Sanskrit words' (216). This script was apparently fairly closely modeled on the Tibetan script.
DeFrancis, too, names India as a likely, if perhaps indirect, source of alphabetic principles (186).
Indeed, unless we are to believe that Sejong and his assistants conceived of representing a single sound with each sign entirely on their own, it is most difficult to imagine from what other source they might have learned of this principle.
Finally, one other question arises from the anomalous typological status of these two writing systems, one of which represents a very significant portion of the world's languages and population, while the other, although isolated, nevertheless presents linguists with an impressive specimen of phonemic analysis. The failure of most commonly accepted definitions for syllabic and alphabetic systems of writing to include such important scripts and script families would seem to suggest that a new typological category is needed to fill this void.
Suggestions such as 'alphabetic syllabary,' 'alphabet on syllabic principle,' or 'semi-syllabary' might not be the worst compromise, for the time being at least, as they take into account the elements found in these writing systems.
Despite the differences that do exist between Hangul and the Brahmi scripts, they clearly belong together in such a category. Although there is no definitive evidence, the majority of scholarly opinion is reasonably confident that the Brahmi script was derived from or at least inspired by a West Semitic source; see especially Shapiro 1969, Masica 1991:133-4, H. Jensen 1970:368-70, and Diringer 1962:144-5. In rather greater doubt is its precise date of origin. Diringer places it in the seventh century B.C., while H. Jensen (363) asserts that 'literary evidence shows it to have been in widespread general use in the fifth century B.C.' Masica, on the other hand, argues strongly that the script was still quite young in the time of Asoka, after whom the inscriptions bearing the first clear example of the Brahmi script are customarily named.
2.
For a comprehensive inventory of Brahmi scripts, see H. Jensen 361-404, or Diringer 1968:257-351. Lambert identifies those found in the text with Bombay printing houses and the Marathi language, preferring the latter for Sanskrit and Hindi (21, 102). In practice, however, associations are less rigid; Coulson as well as Snell & Weightman (1989) use the Bombay characters for their respective textbooks on Sanskrit and Hindi, and Katzner's (1977) sample of Hindi includes the Bombay characters, while the Marathi sample includes the other set.
The Bombay characters will be used in this discussion as they seem both more esthetically pleasing and easier to produce. The subject of Hangul's logographic aspects is briefly entertained in Taylor's article as well (73). This is based largely on the fact that some Korean words are monosyllabic, so that one syllabic block stands for one word, such as 닭 /talk/ 'hen.' This, however, might more appropriately be ascribed to the script's syllabic aspects.
Mentioned by Masica (150) and Lambert (103) is an effort in Marathi to regularize initial vowel signs so that they consist of the basic अ plus the post-consonantal allographs. However, this has not gained widespread currency and is certainly not in evidence in the following sample, necessarily brief and tentatively transcribed due to the poor quality of the printed original, adapted from Katzner 189: mala ugic nhuk nhuk asa gosti athvata. I have a sort of hazy recollection of certain events. | 8,758.4 | 1992-01-01T00:00:00.000 | [
"Linguistics"
] |
The Prevalence of T Cells Population in the Liver of Patients with Viral Hepatitis
Background: It has been widely known that viral hepatitis is a major cause of liver disease. This study examined CD4+ and CD8+ T cells, as well as regulatory T cells (CD25+ and Foxp3+ T cells), in the liver of patients with viral hepatitis in order to understand the comprehensive role of T lymphocytes in the progression of liver diseases attributed to viral hepatitis. Method: Liver biopsies were performed on adult patients presenting to a tertiary hospital in Surakarta, Indonesia with viral hepatitis from 2017 to 2018. Immunohistochemical staining was performed to identify cells expressing CD4+, CD8+, and CD25+/Foxp3+, which represent T helper, T cytotoxic, and T regulatory cells, respectively. Additional data were retrieved from the patients' medical records, including alanine aminotransferase (ALT) levels. Results: A total of 25 liver samples were collected from patients with chronic HBV infection (n = 21), chronic HCV infection (n = 2), acute HBV infection (n = 1), and from a patient with multiple liver nodules. Liver injury was minimal in all patients. The study found that CD8+ and CD4+ T cells were predominant, whilst the frequency of T regulatory cells was generally low. Conclusions: CD8+ and CD4+ T cells predominate in the liver of patients with viral hepatitis, whereas regulatory T cells expressing CD25+ and Foxp3+ are present at low frequencies.
Viral hepatitis causes systemic infection with the liver as the main target. The infection causes liver inflammation and injury. Hepatitis A and E viruses (HAV and HEV) are transmitted through the fecal-oral route and often cause acute, self-limiting diseases. In contrast, hepatitis B, C, and D viruses (HBV, HCV, and HDV) are transmitted parenterally and often cause chronic liver diseases that may develop into liver cirrhosis and hepatocellular carcinoma. Although chronic hepatitis is widely studied, there is a lack of knowledge in regard to the immune mechanisms leading to chronicity and irreversible liver damage. 1,2 Previous studies have reported the importance of T cells in the course of viral hepatitis as well as their involvement in response to treatment, particularly in HBV and HCV infection. [3][4][5][6][7] T cells are a subpopulation of lymphocytes that are derived from bone marrow, develop into several types in the thymic gland, and continue to differentiate in peripheral sites and blood circulation under certain conditions. Each group of T cells responds differently to internal and external stimuli. For example, CD8+ T cells have cytotoxic properties that enable them to directly destroy virus-infected cells and cancer cells. In contrast, CD4+ T cells, also known as T helper cells, perform indirect killing of foreign or infected cells by helping other immune cells exhibit appropriate immune responses. Further, CD4+ T helper cells are divided into several types: Th1, Th2, Th17, Th9 and Tfh, with different roles in immune defence. Another subset of T cells includes regulatory T cells. These cells are involved in immune tolerance mechanisms, thus preventing excessive immune responses that may provoke autoimmunity and self-injury. In this study, we sought to determine the prevalence of intrahepatic T cells in order to understand the mechanisms of chronicity and immune tolerance in viral hepatitis. METHOD We recruited adult patients with viral hepatitis who presented to Dr. Moewardi Hospital in Surakarta, Central Java, Indonesia from 2017 to 2018. Participation in this study was voluntary and all study subjects gave informed consent. We employed a total-sampling technique by approaching all the potential participants who met our criteria; i.e., adult, had a clinical diagnosis of viral hepatitis, presented as in- and out-patients at Dr. Moewardi hospital during the study period (2017-2018), and being able to give individual consent. Demographic data, pathology and radiology results were retrieved from medical records. The procedure of liver biopsy was performed by or under supervision of a gastroenterohepatology consultant (TYP), Department of Internal Medicine, Dr. Moewardi Hospital. A total of 25 patients were recruited into this study and, following the biopsy procedure, the liver samples were sent to the Pathology Anatomy Laboratories at both Dr. Moewardi hospital and the Faculty of Medicine, Universitas Sebelas Maret (UNS) for the assessment of disease severity and the evaluation of T cell subpopulations in the area of the portal triad, using a standard procedure for immunohistochemistry staining with anti-CD4, anti-CD8, anti-CD25, and anti-Foxp3 antibodies (Abcam). Three specimens were excluded from the analysis of T cell population because the specimens were too small and did not represent the area of the portal triad.
The assessment of disease severity was performed and reported by a pathologist at Dr Moewardi hospital and the results were categorized into no fibrosis (METAVIR score = F0), mild fibrosis (METAVIR score = F1), or more severe fibrosis (F2-F4). The enumeration of T cells in the portal triad was performed by another pathologist (BW) at the Faculty of Medicine, UNS, using the quantification and scoring system established by a previous study. 8 The frequencies of T cell subsets were categorized into low (0-5%), medium (6-25%), high (26-50%), and very high (>50%). Data were incorporated into a Microsoft Excel spreadsheet, in which we constructed a heat map to visualize each subpopulation of T cells presented in the samples. This study was part of a larger research project studying the molecular epidemiology of blood-borne viruses conducted by the A-IGIC research group of UNS. [9][10][11] The protocol of this particular study has been approved by the Human Research Ethics Committee at Dr Moewardi hospital (No. 548/ VIII/ HREC/ 2017).
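The scoring and visualization step is simple enough to sketch. The following Python fragment is our own illustration of the stated cut-offs and the heat-map layout, not the study's actual Excel workflow; all sample values are hypothetical.

```python
# Categorize T-cell frequencies with the cut-offs above and display them
# as a heat map (darker colour = higher frequency, as in Table 1).
import numpy as np
import matplotlib.pyplot as plt

CUTS = [(5, "low"), (25, "medium"), (50, "high")]

def category(freq_percent):
    for upper, label in CUTS:
        if freq_percent <= upper:
            return label
    return "very high"

markers = ["CD4+", "CD8+", "CD25+", "Foxp3+"]
freqs = np.array([[30, 40, 2, 1],    # hypothetical patient 1
                  [10, 55, 4, 3]])   # hypothetical patient 2
print([[category(v) for v in row] for row in freqs])

plt.imshow(freqs, cmap="Reds")
plt.xticks(range(len(markers)), markers)
plt.ylabel("patient")
plt.colorbar(label="% of cells in portal triad")
plt.show()
```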
RESULTS
A total of 25 liver samples were collected from 10 males and 15 females, with ages ranging from 18 to 70 years. The majority of samples were taken from patients with chronic hepatitis B (n = 21), followed by chronic hepatitis C (n = 2), and acute hepatitis B (n = 1). A 62-year-old female patient with multiple nodules in her liver (patient ID number 7) had a clinical diagnosis of viral hepatitis, but upon further laboratory examination the viral etiology could not be confirmed in this patient. All patients with chronic hepatitis B and C had mild liver injury, determined either by histological or biochemical parameters. The HBV DNA viral load ranged from 1.68x10^3 to 1.1x10^8 IU/mL and the HCV RNA ranged from 1.55x10^5 IU/mL to 7.07x10^5 IU/mL. Chronic hepatitis B is characterized by a detectable HBsAg for at least a 6-month period, with or without the presence of HBeAg. In our study, 28.6% (6/21) of patients with chronic hepatitis B also had reactive HBeAg. Both HBsAg and HBeAg were detected in the subject with acute hepatitis B, indicating a high level of viral replication and that the patient was highly infectious. In fact, this patient's HBV DNA was 4.22x10^7 IU/mL. The frequencies of T helper (CD4+) and T cytotoxic (CD8+) cells vary considerably among the samples (Table 1). In general, most samples had medium and high levels of T helper and T cytotoxic cells. In contrast, the frequencies of regulatory T cells (CD25+ and Foxp3+) were minimal except in a sample taken from a 29-year-old female patient with chronic hepatitis B (patient ID number 6). In this particular patient, the levels of alanine aminotransferase (ALT) and aspartate aminotransferase (AST) were just above the upper normal limit (54 U/l and 39 U/l, respectively). Table 1.
T-cell frequencies analyzed by immunohistochemistry staining
The proportion of T cell subsets is shown by graded colour, where a darker colour indicates a higher frequency. Samples ID number 5, 18 and 19 were excluded from the T cell analysis because the specimens were too small and not representative, as they did not contain a portal triad.
DISCUSSION
Viral hepatitis is a major health problem in the world, especially in endemic areas of low- and middle-income countries such as Indonesia. Therefore, control measures for viral hepatitis are needed to suppress morbidity and mortality as well as other impacts on the country's socio-economic conditions. According to the etiologies, viral hepatitis can be divided into hepatitis A to E. Hepatitis A and E often present as outbreaks, and the primary control measures include healthy behaviour and a high-quality environment. Hepatitis B, C and D can be transmitted perinatally, so prevention can only be achieved through avoiding the source of infection and through vaccination. The hepatitis B vaccine has proven to be effective in preventing hepatitis B (and hepatitis D) but, unfortunately, there is no available vaccine for preventing hepatitis C.
This study looked at the immunological aspects of the host's liver. We found that T helper and T cytotoxic cells are the predominant subsets of T cells in the liver of patients with HBV and HCV infection. A small proportion of regulatory T cells was also found, indicating the role of these cells in suppressing the effector function of T helper and T cytotoxic cells. The coexistence of these subsets of T cells indicates the balance between the effort of immune responses to eliminate the virus and the effort of immune responses to minimize tissue and organ damage. As a result, these contradictory events contribute to disease chronicity and the prevention of immune-associated liver injury. 12,13 The exact mechanism of proliferation, activation, and differentiation of T cells is not fully understood. Continuing exposure to viral antigens and cytokines may lead to the production and activation of T cells. As for the differentiation of T cells, it has been proposed that during chronic HBV infection, hepatic stellate cells produce tumor growth factor beta, which drives the differentiation of T cells into regulatory T cells. 14 Another study reported that interleukin 6 is associated with a higher risk. 15 The weakness of this study is that we were only able to recruit a few participants, because liver biopsy is a high-risk and costly medical procedure. Another weakness is that we had only a small variety of cases; most of the study subjects were patients with chronic hepatitis B. Despite these weaknesses, this study provides preliminary data on the intrahepatic subsets of T cells. Future studies need to involve more study subjects and a greater variety of cases so that more information can be obtained and a robust conclusion can be achieved.
CONCLUSION
In viral hepatitis, an adequate response of T cells is required to facilitate spontaneous resolution while limiting immunopathology. This study provides evidence of the involvement of intrahepatic T cells in those two important roles: eliminating the virus as well as preventing immune-associated liver injury. Further studies are needed to understand the comprehensive mechanism of immune responses during the course of viral hepatitis.
FUNDING
The research was supported by a Universitas Sebelas Maret (non-tax revenue) grant. | 2,353.4 | 2020-05-21T00:00:00.000 | [
"Medicine",
"Biology"
] |
Modelling of classical ghost images obtained using scattered light
The images obtained in ghost imaging with pseudo-thermal light sources are highly dependent on the spatial coherence properties of the incident light. Pseudo-thermal light is often created by reducing the coherence length of a coherent source by passing it through a turbid mixture of scattering spheres. We describe a model for simulating ghost images obtained with such partially coherent light, using a wave-transport model to calculate the influence of the scattering on initially coherent light. The model is able to predict important properties of the pseudo-thermal source, such as the coherence length and the amplitude of the residual unscattered component of the light, which influence the resolution and visibility of the final ghost image. We show that the residual ballistic component introduces an additional background in the reconstructed image, and that the spatial resolution obtainable depends on the size of the scattering spheres.
Introduction
The ghost imaging technique produces an image of an object from a measurement of the fourth-order (i.e. intensity) correlation function. A twin-beam configuration is used, in which partially coherent light is split into two paths with the object in one (test) arm (see figure 1). The intensity in that arm is obtained with a detector possessing no spatial resolution. The light in the second (reference) arm is detected with spatial resolution, and the final image is reconstructed from the coincidence between the two detectors. Neither of the detectors independently produces an image of the object, but the correlation between the two permits reconstruction of the image.
Ghost imaging has been demonstrated using twin-beams exhibiting quantum entanglement [1]- [3], and with classically correlated light [4]- [8]. The quantum correlation stems from the information-sharing inherent in any entangled system [9]. Quantum ghost imaging exploits the spatial and momentum correlations between the photons generated in a nonlinear medium, typically in a parametric down-conversion process. It was thought until recently that only a quantum-entangled source could produce the correlations needed to obtain ghost images. It has now been shown that pseudo-thermal light divided by a beam-splitter, producing two classically correlated beams, can replicate nearly all experiments performed using a quantum bi-photon source [6].
Ghost imaging can be performed with photon counting detectors [7], with current-measuring detectors such as photodiodes, or with a CCD camera [6]. Image visibility (signal-to-noise ratio) is generally superior with quantum fields and single-photon detectors [4], but applications of the ghost imaging protocol (e.g. medical imaging, image cryptography) would be more practical with CCD imaging detectors and classical light.
In this paper, we use the solution of a wave transport equation describing the propagation of partially coherent light through a multiple-scattering medium to simulate near-field ghost images. We show that the presence of the residual ballistic component of the partially coherent field leads to a background in the image, and that the spatial resolution obtainable deteriorates with increasing size of the scattering elements relative to the wavelength.
Background
A schematic of a generic ghost imaging experiment is shown in figure 1. In the case of classical ghost imaging, an intense beam of partially coherent light is divided by a beam-splitter, resulting in two beams of light, classically spatially correlated in both the near- and far-field [6,10].

[Figure 1. Schematic of a two-beam imaging experiment. The test arm corresponds to path 1, where the object is placed and a detector D1 is used, while path 2 is the reference arm, where a detector D2, possessing spatial resolution, is located.]

One beam is incident on the object to be imaged (the object or test beam), and the other propagates to a detector with spatial resolution (the reference beam). The function, in the one-dimensional case, describing the intensity fluctuation correlations G(x1, x2) contains the image of the object [6], where [11,12]:

G(x1, x2) = ⟨I1(x1) I2(x2)⟩ − ⟨I1(x1)⟩⟨I2(x2)⟩,    (1)

where I1 and I2 denote the intensities in the test and reference beams, respectively, x1 and x2 represent two points in the field and the brackets denote an average over all realisations of the field. As can be seen in (1), G depends on the two intensities at the detectors, and thus is second order in intensity. It is also apparent that a background term, ⟨I1(x1)⟩⟨I2(x2)⟩, must be subtracted from the pure intensity correlation term. We consider here a partially coherent, quasi-monochromatic, scalar classical light source described by Gaussian statistics with zero mean [13]. As a consequence, the fourth-order correlation function can be expressed in terms of the second-order (field) spatial correlation function, Γ(x1, x2), defined as

Γ(x1, x2) = ⟨E*(x1) E(x2)⟩,    (2)

where E is the spatial field distribution of the source. In the case of classical two-beam imaging it can be shown [14] that G is given by:

G(x1, x2) = | ∫∫ dx′1 dx′2 Γ(x′1, x′2) h1*(x1, x′1) h2(x2, x′2) |²,    (3)

where h1 and h2 are the impulse response functions describing the optical paths of the test and reference beams, respectively [15].

[Figure 2. Particular arrangement for obtaining a near-field transmission image of an object with two-beam correlated imaging, where in arm 1 the object is very close to the source and a lens is used to collect the light in a bucket detector, while in arm 2 an imaging configuration is used.]

It is apparent from (3) that the image obtained in classical
ghost imaging is therefore critically dependent on the second-order correlation function, Γ, of the source.
Although the approach adopted is quite general, we here examine the influence on the resulting image in the specific case where the reference beam, shown in figure 2, is in the 2f:2f imaging configuration. The object is described by a complex transmission function t(x). In this case it can be shown that [6]

G(x1, x2) ∝ |t(x1)|² |Γ(x1, −x2)|²,    (4)

where λ is the wavelength of the source, k = 2π/λ and f is the focal length of the lens, these entering through the impulse response of the 2f:2f system. Ghost imaging experiments often determine the marginal intensity, which is equivalent to having a 'bucket' detector in the test arm. It can easily be seen that if the correlation function, Γ(x1, x2), is extremely narrow and can be approximated by δ(x1 − x2), then the marginal intensity will be approximately proportional to the magnitude squared of the transmission function, |t(−x2)|², in the case of illumination with light of uniform intensity. The width of the correlation function corresponds to the spatial extent of correlations between two non-identical points in space, and is related to the coherence length of the source. In order to obtain an image of an object with high spatial resolution, Γ must be narrow; i.e. the coherence length of the light incident on the beam-splitter must be small. Simply stated, the smallest feature in the object must be approximately larger in size than the width of Γ(x1, x2) [14]. When creating a pseudo-thermal source by passing a coherent beam through a turbid medium, the second-order correlation function of the pseudo-thermal light consists of the superposition of a broad coherent ballistic (unscattered) portion and a narrow scattered component [16]. The relative fractions of ballistic and scattered components can be changed by altering the concentration and size of the particles in the turbid medium. Previous theoretical simulations of classical ghost imaging using partially coherent light assumed a beam described by a Gaussian-Schell model [17], which does not permit an analysis of the influence on the resulting image of the properties of the scattering medium or the contribution from the residual ballistic component.
A number of methods can be used to calculate the second-order correlation function of scattered light [16], [18]-[20]. Here, we use the method proposed by Cheng and Raymer [16], based on a wave-transport approach equivalent to using the extended Huygens-Fresnel principle in the small-angle scattering approximation [18]. Using this technique, it can be shown that after propagation through a medium with optical path length z, the spatial coherence function of an initially coherent Gaussian beam takes the form

Γ(x1, x2, z) = exp[−(x1² + x2²)/a²] exp{−µt z + Nσz exp[−k²θ0²(x1 − x2)²/4]},    (6)

where a is the width of the incoming coherent laser source (assumed to have a Gaussian profile), N is the number concentration of scatterers, σ is the total scattering cross-section, θ0 is the width of a Gaussian fitted to the differential scattering cross-section and µt is the extinction coefficient. These final three parameters can be obtained using Mie scattering theory [18]. Note that any absorption by the particles has been ignored in our simulations. The resulting correlation function can then be used in numerical simulations based on equation (3) to calculate the marginal intensity. Note that the spatial coherence function calculated using equation (6) refers to the field captured by the subsequent optical system. Since the aim of the simulations shown here is to highlight the salient features introduced by scattering, the effect of the finite numerical aperture of any real system is not included in any calculations shown here. Figure 3 shows Γ(x1, x2) for a Gaussian beam of 1/e width 2.12 mm that has passed through a 5 mm optical path length water/glycerol mixture containing 20 µm diameter polystyrene microspheres. A water/glycerol mixture is commonly used to achieve neutral buoyancy of polystyrene microspheres [21]. It can be seen that, in general, the correlation function consists of a superposition of a ballistic component, which appears as a circular background, and a scattered component, which has a narrow distribution about the diagonal x1 = x2. Hence, as the strength of the scattered component relative to the residual unscattered field increases, Γ(x1, x2) more closely approximates the narrow function required to produce ghost images in the configuration of figure 2. Any ballistic component still present in the beam as it leaves the scattering medium leads to a broad pedestal in Γ(x1, x2), which would be expected to produce a low-spatial-resolution background in any ghost image. It is evident that the strength of the ballistic component will decrease on increasing the concentration or the size of the particles, or on increasing the optical path length.
Diameter (µm)    Correlation function width (µm)
5                2.92 ± 0.01

Although the width of the ballistic component changes only slightly with changes in the parameters describing the scattering medium, the width of the scattered component, which is critical in determining the spatial resolution obtainable with the resulting pseudo-thermal source, is a sensitive function of the particle size. Table 1 shows the width of the scattered component of Γ(x, −x) (see figure 3(e)) for scattering media containing spheres of different diameters but with a fixed OD of 5. It can be seen that as the particle size increases, the width of the scattered component also increases and is of the same order as the particle diameter. Further simulations (not shown here) also indicate that the width of Γ(x, −x) is independent of the dimensions of the incident Gaussian beam. Implications for ghost imaging are discussed in the next section.
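The two-component structure just described can be mimicked with a phenomenological toy model. The snippet below is not equation (6): it simply superposes a broad ballistic pedestal of weight e^(−μt z) and a narrow Gaussian scattered peak whose coherence length is set equal to the sphere diameter, in the spirit of Table 1; the base-e optical-depth convention and the Gaussian shapes are assumptions.

```python
import numpy as np

# Phenomenological toy model of Gamma(x, -x): a broad ballistic pedestal plus
# a narrow scattered peak about x1 = x2. Not equation (6); the scattered
# coherence length ell is simply set to the sphere diameter (cf. Table 1).

a = 2.12e-3        # 1/e width of the incident beam (m)
mu_t_z = 5.0       # extinction optical depth (OD 5, base-e convention assumed)

def gamma_anti_diag(x, ell):
    w = np.exp(-mu_t_z)                          # residual ballistic fraction
    ballistic = np.exp(-2.0 * x**2 / a**2)       # essentially flat over microns
    scattered = np.exp(-(2.0 * x)**2 / ell**2)   # x1 - x2 = 2x on this cut
    return w * ballistic + (1.0 - w) * scattered

x = np.linspace(0.0, 100e-6, 20001)
for diam_um in (5, 20):
    g = gamma_anti_diag(x, diam_um * 1e-6)
    pedestal = np.exp(-mu_t_z)
    target = pedestal + (g[0] - pedestal) / np.e   # 1/e point of the peak
    half_width = x[np.argmin(np.abs(g - target))]
    print(f"{diam_um:2d} um spheres: scattered 1/e half-width "
          f"~ {half_width * 1e6:.1f} um")
```

The extracted width scales linearly with the assumed sphere diameter, reproducing the trend of Table 1, though the absolute numbers depend on the width convention adopted.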
Imaging simulation
The results of our simulations show that there are two critical parameters in estimating the quality of a ghost image: the relative strength of the ballistic component compared to the scattered component and the coherence length of the scattered component. To investigate the influence that the properties of the scattering medium have on the marginal intensity of a ghost image, correlation functions computed using equation (6) were used to simulate the results for the imaging arrangement shown in figure 2. Images of an opaque mask containing two slits of width 150 µm and separated by 300 µm were calculated. A scattering medium of path length 5 mm and containing particles with diameters of 20 µm was used and the impact of changing the particle concentration on the marginal intensity is shown in figure 4.
It can be seen that the marginal intensity is a superposition of a broad background produced by the residual ballistic component and an image of the object (modulated by the intensity of the field) resulting from the low-coherence scattered component. Note that this background is in addition to the intensity background that must be subtracted from the intensity correlations to determine G in equation (1). The ballistic component permits imaging only of detail no smaller than approximately the diameter of the original coherent beam; in this case, where the beam diameter is 2.12 mm and the slit width is 150 µm, the ballistic portion of the light therefore carries no object information. Figure 4 shows that the OD can be selected to reduce the ballistic contribution to an arbitrarily low level. Secondary methods, such as the inclusion of a rotating diffuser, are also widely employed to eliminate any remaining unscattered component.
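A rough number makes "arbitrarily low" concrete. Assuming Beer-Lambert attenuation and a base-e optical-depth convention (the paper's OD convention is not restated here), the residual ballistic power fraction is

$$ T_{\mathrm{b}} = e^{-\mu_t z} = e^{-\mathrm{OD}}, \qquad T_{\mathrm{b}}\big|_{\mathrm{OD}=5} = e^{-5} \approx 6.7 \times 10^{-3}, $$

so at OD 5 under one percent of the light remains unscattered, and each additional unit of OD suppresses the pedestal by a further factor of e.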
To demonstrate the effect that increasing the diameter of the scattering spheres has on the resolution of a ghost image, for a fixed OD of 5 and wavelength of 633 nm, the transmission function was changed to represent a double slit mask with a slit width of 20 µm and a slit separation of 30 µm. The results are shown in figure 5.
It is immediately clear from figure 5 that increasing the diameter of the scatterers, while keeping both the OD of the medium and the wavelength of the light fixed, leads to poorer spatial resolution in the ghost image. The larger-diameter spheres create a more coherent scattered wave with a narrower differential scattering cross-section. As a consequence, the object becomes less well resolved as the sphere diameter increases. It has previously been shown [22], by a consideration of the near-field speckle size, that the spatial resolution obtainable is of the order of magnitude of the particle size, which is consistent with the results obtained here. Note that the key parameter determining the differential scattering cross-section is the ratio of the particle diameter to the wavelength of the light used. Hence, it is anticipated that it is this parameter that will ultimately influence the spatial resolution. It should also be emphasised that our simulations show no influence of the incident beam size on the spatial resolution obtainable. Since perfect ensemble averaging is implicitly assumed in the use of the wave-transport approach, one of its limitations is that it provides no specific information about the obtainable image signal-to-noise ratio. Such issues are better addressed through a stochastic approach.

Figure 5. Ghost images formed with light that has passed through a scattering medium with a fixed OD of 5 containing spheres of different diameters; the wavelength is 633 nm.
Conclusion
The wave transport model used in this paper provides a means to predict the coherence of pseudo-thermal light created by passing initially coherent light through a scattering medium. The calculated coherence function can be used to simulate the results of classical two-beam ghost imaging experiments.
Two central conclusions can be drawn. Firstly, the presence of the residual ballistic component in the scattered wave-field leads to the presence of a low-resolution background in the marginal intensity. We have shown that increasing the OD of the scattering medium can reduce the influence of the ballistic component to an arbitrarily low level, although there are a number of secondary means that could also be used to reduce its strength. Secondly, for a fixed wavelength, the spatial coherence length of the scattered component of the field increases with increasing sphere diameter, leading to a deterioration in the spatial resolution obtainable in the resulting ghost image. More generally, the spatial resolution will be determined by the ratio of the particle size to the wavelength of light used.
"Physics"
] |
Energy conservation in dissipative processes: Teacher expectations and strategies associated with imperceptible thermal energy
Research has demonstrated that many students and some teachers do not consistently apply the conservation of energy principle when analyzing mechanical scenarios. In observing elementary and secondary teachers engaged in learning activities that require tracking and conserving energy, we find that challenges to energy conservation often arise in dissipative scenarios in which kinetic energy transforms into thermal energy (e.g., a ball rolls to a stop). We find that teachers expect that when they can see the motion associated with kinetic energy, they should be able to perceive the warmth associated with thermal energy. Their expectations are violated when the warmth produced is imperceptible. In these cases, teachers reject the idea that the kinetic energy transforms to thermal energy. Our observations suggest that apparent difficulties with energy conservation may have their roots in a strong and productive association between forms of energy and their perceptible indicators. We see teachers resolve these challenges by relating the original scenario to an exaggerated version in which the dissipated thermal energy is associated with perceptible warmth. Using these exaggerations, teachers infer that thermal energy is present to a lesser degree in the original scenario. They use this exaggeration strategy to productively track and conserve energy in dissipative scenarios.
I. INTRODUCTION
The Next Generation Science Standards [1] emphasize the importance of tracking and conserving energy through physical scenarios. A critical component of tracking and conserving energy is the recognition of the forms of energy present during a scenario. Forms of energy are generally identified by a perceptible indicator, such as motion, sound, height, or warmth, that provides sensory evidence for the presence of energy. In a rollercoaster scenario, for example, changes in height and speed of the rollercoaster are the perceptible indicators used to track energy as it transforms from gravitational energy to kinetic energy.
This method of tracking energy by its perceptible indicators is particularly useful in idealized scenarios that neglect dissipative processes (e.g., a rollercoaster moving on a frictionless track). These are the kinds of scenarios most often emphasized in physics courses. In the case of a real rollercoaster, gravitational energy does not all end up as kinetic energy; some ends up as thermal energy in the rollercoaster, the track, and the surrounding air. We observe that learners who engage with such dissipative processes recognize changes in energy associated with perceptible indicators (e.g., changes in gravitational energy associated with changes in the height of a rollercoaster), but often do not identify changes in energy associated with imperceptible indicators (e.g., the production of thermal energy in a scenario in which the rollercoaster doesn't feel hotter). The disappearance of perceptible indicators can seem to contradict the energy conservation principle. This strong association between forms of energy and perceptible indicators may account for some of the student difficulties described in previous research on applying energy conservation to everyday phenomena (e.g., [2]). Further, we find that this association leads to concern and puzzlement even for learners who do not have "difficulties" with energy conservation in the traditional sense.
Our observations of learners discussing dissipative scenarios in K-12 teacher professional development and high school classrooms have led us to better understand expectations learners have about energy transfers and transformations. We have also identified productive strategies that teachers-as-learners employ in successfully tracking and conserving energy through dissipative processes. In this paper, we make the following claims about learners' ideas regarding energy conservation in dissipative processes:

1. Learners expect that energy associated with a perceptible indicator will also be associated with another perceptible indicator when the energy transforms. In particular, learners expect that kinetic energy associated with visible motion will transform into thermal energy associated with palpable warmth. This expectation challenges their commitment to energy conservation when all energy indicators disappear from perception.

2. Learners accept the presence of thermal energy associated with the imperceptible indicator of warmth when they recognize that warmth would be perceptible in an exaggerated scenario. For example, learners accept the presence of thermal energy in a rollercoaster scenario when they recognize that warmth is perceptible in a space-shuttle re-entry scenario.
We support these claims by first describing the physics of energy dissipation and the perceptibility of indicators of energy forms (Section II). We then review previous research on learning about energy conservation (Section III) and introduce the context in which our research takes place (Section IV). Next, we share evidence of learners' expectations about perceptible indicators of energy as well as strategies that support their acceptance of imperceptible thermal energy (Sections V and VI, respectively). The significance of these results and the instructional implications are described in Section VII.
II. PHYSICS OF ENERGY DISSIPATION
Energy dissipation, as discussed in this paper, is the process of macroscopic kinetic energy transforming into thermal (or internal) energy through interactions among microscopic particles that randomize their motion and position and spread energy more uniformly throughout a system. Dissipated energy is sometimes described as "energy lost from an open system" [3], where "lost" energy indicates energy that is degraded, or cannot be used for the performance of work [3,4]. The NGSS, to which teachers are accountable, does not explicitly require understanding of energy dissipation [1]. However, the NGSS's primary learning goals about energy (that it is conserved, that it manifests in multiple ways, and that it is continually transferred from one object to another and transformed among its various forms) require accounting for energy wherever it goes in the scenario of interest. Further, the NGSS's emphasis on energy-efficient solutions to societal problems is reflected in its statements about scenarios involving "diffuse energy in the environment," usually in the form of thermal energy. Though the NGSS refers more to processes of conduction than dissipation (e.g., "When machines or animals 'use' energy, most often the energy is transferred to heat the surrounding environment"), dissipation is a significant feature of energy scenarios that embody NGSS priorities.
In many energy scenarios occurring near room temperature, the thermal energy produced by dissipation cannot be perceived by human senses (we cannot feel any indication of the energy's presence). For example, when a ball rolls to a stop, the motion associated with the ball's kinetic energy disappears and the warmth associated with the thermal energy produced in the ball, air, and ground is likely to be imperceptible. For humans, the disappearance of perceptible indicators of energy creates a contradiction between what we experience and what we expect to experience. Our intuition supports the assumption that sensory experiences have certain common dimensions that transcend specific modalities of the senses: "for example, bright is like loud because both are intense… In this view, then, the reason that brighter lights are perceived to be like louder sounds is because they share a common property, intensity… Bright and loud are conceptually understood as being about some amount of physical energy" [5]. It follows that a person who accepts that energy is conserved would also expect the perceptibility of that energy's indicators to be "conserved." For example, in the scenario of a ball rolling to a stop, the disappearance of a perceptible indicator (motion of the ball) without replacement by another perceptible indicator can seem to suggest the disappearance of energy and a violation of the principle of energy conservation. In many cases, energy associated with a perceptible indicator will not be associated with another perceptible indicator when it transforms to another form. Figure 1 shows examples of thermal and mechanical processes requiring varying amounts of energy. Changes in mechanical energy of about one joule may be associated with easily perceptible indicators (e.g., lifting a basketball ¼ m), but if all of that energy were transformed to thermal energy, it would only increase the temperature of a typical room (50 cubic meters) by an imperceptible 10⁻⁵ K (10⁻⁵ °F). To produce an easily perceptible quantity of thermal energy, such as that associated with raising the temperature of a typical room from 40 to 60 °F, one would need to drop almost 190,000 basketballs from a height of 1 meter. The difference in the perceptibility of energy indicators for various forms can cause learners to struggle with tracking energy in dissipative processes.
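The quoted figures are easy to check to order of magnitude. In the sketch below every property value is an assumption (typical basketball mass, dry-air density and heat capacity, and warming of the room air only), which is why the drop count lands near, rather than exactly on, the roughly 190,000 quoted above.

```python
# Back-of-envelope check of the room-warming figures; all property values
# here are assumptions (basketball mass, dry-air density, heat capacity).
m_ball, g = 0.62, 9.8                        # kg, m/s^2
rho_air, c_air, volume = 1.2, 1005.0, 50.0   # kg/m^3, J/(kg K), m^3

C_room = rho_air * volume * c_air            # room-air heat capacity, ~6e4 J/K

E_lift = m_ball * g * 0.25                   # lifting a basketball 1/4 m: ~1.5 J
print(f"dT if that energy dissipates in the room: {E_lift / C_room:.1e} K")

dT_target = (60.0 - 40.0) * 5.0 / 9.0        # a 20 F rise is about 11.1 K
E_drop = m_ball * g * 1.0                    # one 1 m drop: ~6 J
print(f"1 m drops needed for 40->60 F: {dT_target * C_room / E_drop:,.0f}")
```

The first print gives a few times 10⁻⁵ K, matching the "imperceptible" figure; the second gives roughly 10⁵ drops, the same order as the number in the text.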
III. PRIOR RESEARCH ON LEARNING ABOUT ENERGY IN DISSIPATIVE PROCESSES
The majority of research analyzing student understanding of energy in dissipative processes has appeared, almost entirely implicitly, in research focused on student understanding of the conservation of energy principle. Many of these studies use physics scenarios that involve dissipative processes (or idealized physics scenarios that would involve dissipation in the real world). For example, one study uses a car that coasts to a stop and a golf ball that is hit and bounces several times, reaching a smaller and smaller height before coming to a stop [6]. Other studies use a damped swinging pendulum [7,8]. Another uses a ball rolling up and down the sides of a bowl, asking students to neglect frictional effects [9]. These scenarios, in the real world, all involve a decrease in total kinetic and potential energy and a compensating yet imperceptible increase in thermal energy (e.g., as a pendulum slows to a stop, it does not feel warmer).
The general consensus of this research, across a variety of contexts, is that many students and some teachers have difficulty understanding and applying energy conservation [2,[6][7][8][9][10][11][12][13][14][15][16]. One study explicitly describes the transformation from kinetic to thermal energy as problematic in secondary education: interviews with 34 German students (15-16 years old) reveal that after physics instruction, students "have difficulties in using the idea of the transformation of kinetic energy to heat energy to explain relevant processes" [7, p. 99]. In a scenario in which a pendulum swings to a stop, only four out of 34 students described kinetic energy as transforming into thermal energy; the rest of the student responses were attributed to a lack of understanding of energy conservation.
Another way in which students and some teachers appear to contradict the conservation of energy principle is to describe the energy in dissipative thermal processes as being used up or lost [6-8, 12, 15]. For example, one British student explained her thinking about the energy conservation principle as it applies to the process of a lamp shining in this way: "That principle of conservation, Miss, I don't believe it. You know when you have a battery and a lamp, and the battery has electrical energy, right? And it goes to heat and light in the lamp. Well, I mean, the heat evaporates and the light goes dim. So the energy has gone. It isn't there is it?" [6] A similar finding appeared in a study in which many university introductory biology students were "unable to apply the idea of energy conservation" to biological settings even though almost 98% of them identified the correct statement of the conservation of energy principle [12]. Some "used the terms used up, created, made or lost in their explanations [of energy processes]" [12]. When students were asked to identify incorrect phrases in a number of sentences describing dissipative processes, "only 4% of the students in the whole group correctly underlined used up as an incorrect phrase and wrote in the scientifically acceptable phrase, converted to different forms" [12]. In our own earlier work, we argued that the idea that energy is used up or lost can be productively aligned with the concept of energy degradation [4]. In this paper, we focus on the challenge to energy conservation that is presented when thermal energy indicators are imperceptible.
Other research found that when students conserve energy in dissipative processes, they sometimes mistakenly describe kinetic energy as transforming into potential energy instead of thermal energy [6,8]. For example, British high school students analyzed the energy at the end of a scenario in which a golf ball bounces to a stop. Rather than describing the energy as dissipated, students claimed that the stopped ball had "stored up" the energy, and that the energy could be used again [6]. University students in the U.S. came to a similar conclusion when asked about a damped pendulum: they described the kinetic energy of the pendulum as transforming into potential energy as the pendulum slowed to a stop [8]. Their response shares features with a canonical account of the energy dynamics of the scenario: it respects the principle of energy conservation by inferring a transformation into a form of energy with no perceptible indicator. However, their response misconstrues "potential energy" as entirely hidden or latent [17], rather than associated with the configuration of interacting objects.
All of these studies characterize students as having difficulty understanding and applying energy conservation without mentioning the possible role of imperceptible energy indicators in dissipative processes. We take as a premise that learners at all levels have rich stores of intuitions about the physical world, informed by personal experience, cultural participation, schooling, and other knowledge-building activities [18][19][20]. Some of these intuitions are "productive," meaning that they align at least in part with disciplinary norms in the sciences, as judged by disciplinary experts [21,22]. Learners may only apply these intuitions episodically: at some moments of conversation with instructors and peers there may be evidence of productive ideas, whereas at other moments productive ideas may not be visible [23]. This perspective suggests that rather than having a "difficulty" or a "misconception" about conservation of energy, the learners in our study are attempting to reconcile understanding of the conservation of energy principle with their intuition that energy indicators should remain perceptible as energy transforms. Our work here aims to build on and reframe previous research about difficulties with energy conservation, showing that learners' intuitions about perceptibility can be used productively to support a greater understanding of energy conservation.
IV. RESEARCH CONTEXT

A. Research Methods
This paper reports on a phenomenological study using data gathered by the Energy Project, a six-year NSF grant focused on the teaching and learning of energy. As part of the Energy Project, a variety of classrooms were observed in an effort to better understand how learners view and apply energy concepts. "Learners" is a broad term that we use to refer to three populations: (1) elementary and (2) secondary teachers-as-learners in summer professional development courses held at Seattle Pacific University, and (3) students in high school science courses taught by some of these teachers. Observations of learners' discussions in these three contexts promoted the investigation of the following two research questions:
1. What challenges learners' commitment to energy conservation in dissipative processes?

2. What instructional strategies can help address the challenge that energy dissipation presents to the law of energy conservation?
We found examples of this challenge across these diverse groups, suggesting that certain intuitions and understandings of dissipative processes are common to a variety of different learners.
Researchers collaborating with the Physics Education Research Group at Seattle Pacific University observed professional development courses and recorded their observations in real time using field notes, photography, artifact collection (including written assessments and teacher reflections) and video recordings for each observation. In these courses, teachers generally worked in groups of 3-4, with 4-8 groups in each class; two groups were recorded daily. In real time, researchers identified particular moments of interest and marked them for later analysis. Later, researchers chose episodes that addressed the phenomenon of interest. For this analysis, video episodes were identified through (1) initial observations by videographers and (2) a search for key terms in the field notes which could relate to energy dissipation (e.g., dissipation, disappear, missing, spreading, diffusion, thermal energy). Episodes were selected when learners made visible the challenge to energy conservation. In each selected episode, learners articulated in some way the lack of evidence of the presence of energy, often asking "where did the energy go?" or describing the energy as disappearing. The groups in these episodes worked to solve this challenge for the remainder of the discussions. Detailed transcripts and narratives of each episode were produced and corroborated by multiple viewings from multiple researchers. A group of researchers then collaboratively analyzed several aspects of communication including gestures, facial expressions, interactions between participants, bodily behavior, and the context in which the activities occur [24,25]. Fifteen episodes from six distinct discussions were isolated and captioned to illustrate learner engagement with issues of imperceptibility of thermal energy in dissipative processes. These episodes are described in Sections V and VI.
B. Instructional Context
Instructors of both the professional development courses and the high school science courses in this study use Energy Tracking Representations to support learners in thinking about energy scenarios. These representations promote energy conservation and tracking in real-world scenarios [4,26-31]. One of the representations used in all courses is an embodied learning activity called Energy Theater [31]. The rules of Energy Theater are:
- Each person is a unit of energy in the scenario.
- Regions on the floor correspond to objects in the scenario.
- Each person has one form of energy at a time.
- Each person indicates their form of energy in some way, often with a hand sign.
- People move from one region to another as energy is transferred, and change hand sign as energy changes form.
- The number of people in a region or making a particular hand sign corresponds to the quantity of energy in a certain object or of a particular form, respectively.
An Energy Theater enactment illustrates a group's shared understanding of the energy scenario. For example, a group of teachers-as-learners shown in Fig. 2 analyzes the scenario of a ball being lowered at constant velocity by a person. This group's Energy Theater enactment begins with the configuration shown in the figure: Four teachers represent gravitational energy in the ball, standing in a region on the floor representing the ball with their hands raised over their heads. Two teachers represent chemical energy in the person, using a sandwich-eating gesture (making a chewing motion with their hands holding an imaginary sandwich near the mouth). Finally, two teachers represent kinetic energy, one located in the ball and one located in the person, by their own fists circling each other in front of their stomachs. As they act out the scenario, the teachers representing gravitational energy in the ball and chemical energy in the person each transform into kinetic energy and then into thermal energy. The two teachers representing kinetic energy do not change form or move to another location.
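For readers who think in code, the enactment just described can be written as a small state-transition sketch: each "person" is a record carrying a location and a form, and conservation is simply the invariant that no records are created or destroyed. The step ordering is illustrative, not a claim about the group's actual sequence.

```python
from collections import Counter

# Each Energy Theater participant is one unit of energy with a location
# (region on the floor) and a form (hand sign). Rule: units are never
# created or destroyed; only their location and form may change.
units = (
    [{"loc": "ball", "form": "gravitational"} for _ in range(4)] +
    [{"loc": "person", "form": "chemical"} for _ in range(2)] +
    [{"loc": "ball", "form": "kinetic"}, {"loc": "person", "form": "kinetic"}]
)
n_start = len(units)

# GPE in the ball and chemical energy in the person transform to kinetic,
# then to thermal; the two pre-existing kinetic units stay as they are.
for u in units[:6]:
    u["form"] = "kinetic"   # first transformation (change of hand sign)
for u in units[:6]:
    u["form"] = "thermal"   # second transformation

assert len(units) == n_start   # conservation: same number of units throughout
print(Counter((u["loc"], u["form"]) for u in units))
# Counter({('ball', 'thermal'): 4, ('person', 'thermal'): 2,
#          ('ball', 'kinetic'): 1, ('person', 'kinetic'): 1})
```

The assertion makes the conservation rule executable: any enactment step that dropped or duplicated a unit would fail it immediately.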
V. REJECTIONS OF THERMAL ENERGY IN DISSIPATIVE PROCESSES
In this section, we present data supporting our assertion that learners expect that energy associated with a perceptible indicator will be associated with another perceptible indicator when it transforms. In particular, we show that learners expect that kinetic energy associated with visible motion will transform into thermal energy associated with palpable warmth. We demonstrate this expectation by showing that learners initially reject ideas that violate it. Specifically, we show that learners reject suggestions that thermal energy is produced in dissipative processes. These rejections have been observed in all Energy Project professional development course levels and in high school classrooms, across a variety of dissipative scenarios (e.g., a ball being lowered at a constant velocity, water waves forming from the wind, an apple falling to the ground, a basketball rolling to a stop).
We categorize learners' rejections into four types, associated with varying degrees of adamancy. First, some teachers implicitly reject suggestions of thermal energy by ignoring thermal energy suggestions and continuing to search for perceptible energy indicators (Section V.A). Second, some teachers and high school students explicitly reject thermal energy as a possible product of a particular process (Section V.B). Third, some teachers accept the idea that some thermal energy is produced, but reject the idea that all energy ends as thermal energy (Section V.C). Lastly, some teachers accept the production of thermal energy, but do so with skepticism and reluctance (Section V.D). Each rejection of thermal energy suggests a violation of learners' expectation that perceptible indicators of energy should remain perceptible, thereby providing evidence for their commitment to energy conservation (Claim 1).
A. Implicit Rejection
One common reaction to a suggestion of thermal energy in our professional development courses comes in the form of an implicit rejection, in which listeners do not respond to suggestions about thermal energy. Sometimes they discuss another topic, suggesting that they may not have heard or attended to the suggestion. Other times, they respond to a non-thermal aspect of the suggestion, showing that they heard the statement but are not prioritizing its thermal energy content. Below, we share three episodes from a 20-minute discussion about a ball lowered by a person at a constant velocity (the "lowering scenario"). In conversations like this one, which center on tracking and conserving energy, learners must first identify the initial form and amount of energy by some indicator (e.g., motion) and then track that energy through a process. This group of eight secondary teachers quickly notice and articulate discrepancies in energy indicators and then repeatedly ignore (or do not perceive, or decline to take up) suggestions of thermal energy. Their struggle to track the energy exemplifies the conflict between the teachers' commitment to energy conservation and their expectation that the indicators of energy should remain perceptible through a process.
In the first five minutes of this discussion, the teachers contrast the lowering scenario to other scenarios. Kate focuses on the differences between the current scenario and dropping a ball to the ground. The comparison leads the group to articulate that (1) the gravitational energy of the ball and the person decreases, and (2) the kinetic energy of the ball does not increase. The first mention of thermal energy is in response to Kate, who asks where the people representing units of gravitational energy should go when they leave the ball. In this episode, Jennifer proposes that the energy is transformed into "heat" (or thermal energy; see the note below) twice in this first excerpt. After her first suggestion, Kate introduces the earth as a new object to consider. Ted and Marta focus instead on the role of the person's hand in lowering the ball. Although Jennifer's suggestion is clearly audible, no one in the group engages her idea, implicitly rejecting thermal energy. After 90 seconds of discussion of Marta's question (whether raising or lowering a bowling ball requires more work), Barry redirects the group to focus on the missing energy in this scenario. This time, instead of simply suggesting that the energy transforms into thermal energy, Jennifer argues for the production of thermal energy using a process of elimination. Thermal energy is the only option left: the energy "is not going anywhere useful," "is not going back into [the person]," and "can't go into the earth." Debra begins to voice support for Jennifer's idea after she states, "It's so much easier if we just have it be heat," but no one else acknowledges Jennifer's proposal.
None of the teachers in this exchange connect Irene's focus on lifting weights to Jennifer's argument for thermal energy. Instead, they consider Barry's suggestion that the energy stays in the ball as kinetic energy. Shortly thereafter, Leah directs the group back to the issue of the missing energy.
Leah: GPE needs to be getting fewer, and the kinetic needs to stay the same.
Jennifer: So then how about we have some of the people [units of energy] who are going from GPE to kinetic go away as heat or go into the earth or whatever you're....like they have to be, I mean
Irene: So in other words we need another circle [another rope to indicate the addition of the earth as an additional object]
Kate: Ok, consider you were the earth…

Once again, Jennifer offers a suggestion that thermal energy is produced, but this time accompanied by a possible location for the energy to end up (that the energy goes into the earth, an idea she dismissed in the last excerpt). Irene and Kate take up Jennifer's idea about the earth being involved, but do not address any content related to thermal energy. The group discusses the forces between the earth and the ball for about two minutes following this exchange.

Note: Learners (including secondary teachers) often use "heat" or "heat energy" to refer to a form of energy indicated by temperature (what we call thermal energy), rather than a transfer of energy driven by temperature difference (what we call heat) [29,33-35]. An association of heat with the temperature of an object is common in everyday speech, in non-physics textbooks, and in standards documents [35,36]. However, such an association is not aligned with disciplinary norms in physics, in which the energy associated with temperature is often termed "thermal energy" or "internal energy," and in which the term "heat" refers to energy transfer from a body at higher temperature to one at lower temperature. Differentiation of heat and temperature was not a learning goal in the specific instructional sequences represented here.
In the first 10 minutes of the discussion of this scenario, Jennifer suggests the idea that kinetic energy transforms into thermal energy a total of five times. Because her statements are clearly audible, and because in several cases members of her group take up other parts of her statements, it seems unlikely that the other participants do not hear her. Instead, it seems that they do not respond to thermal energy as a compelling solution for their missing-energy problem. Because their rejection is implicit, there is little opportunity to infer reasons for their inattention to the thermal energy idea in these episodes. However, in the sections that follow, we can begin to infer the reasons from more explicit rejections.
B. Explicit Rejection
We have observed explicit rejections of thermal energy in several courses. The first episode below is from the same group of teachers as above, and chronologically follows the previous episode. The next episode comes from a high school biology class.

i. "I don't think we need any heat."

After a series of implicit rejections of thermal energy described above, another teacher, Marta, continues the discussion about the lowering scenario, suggesting that thermal energy is produced.
Marta: It [the amount of kinetic energy] should be the same, but the amount of GPE is decreasing. Let's just lose one person [unit of energy] to heat or something.
Barry: I don't think we need any heat.
Marta: Alright, but GPE is decreasing.
Jennifer: What's happening over here [in the person] is that more food molecules are being converted to kinetic and then we're just going to say, to hell with heat!
Others: [echo this sentiment]
Irene: So do we need another circle [rope that represents an object] for the Earth?
In this exchange, Marta repeats the observation that the gravitational potential energy (GPE) in the ball decreases and suggests that some of the energy is lost to heat. This time, Barry explicitly rejects the use of any thermal energy in the representation. Jennifer also explicitly rejects thermal energy as a solution, discarding her original ideas. Her new suggestion (to produce more kinetic energy in the person) is supported by the group, but this does not solve the problem of the missing energy.
For a few minutes the conversation continues to focus on where the energy has gone. Several teachers (first Ted, then Irene, Ted again, Leah, and Ted for a third time) repeat the observation that the energy indicators decrease and revoice the question, "Where did the energy go?" In so doing, they collectively maintain a firm commitment to both conserving and tracking the energy. However, they persist in their attempt to make the representation work without the imperceptible thermal energy.

ii. "The apple's not giving off heat."

Another example of an explicit rejection of thermal energy comes from a high school Advanced Placement/International Baccalaureate biology course taught by a teacher who participated in our professional development. In this episode, eight 16-18 year old students participate in Energy Theater, discussing the scenario of an apple falling from a tree. Though the context is different, we observe the same struggle to identify thermal energy with imperceptible indicators while they work to conserve and track the energy in this activity. Prior to the following episode, the group assigns Lou, a senior student, to represent gravitational energy in the apple location as it hangs in the tree. They decide that Lou should transform (change hand signs) into kinetic energy as the apple falls. Another senior student, Aaron, asks the group what the kinetic energy in the apple (represented by Lou) should transform into as the apple hits the ground.
Aaron: Ok, the energy that Lou is right now [kinetic energy as the apple falls to the ground], he's being used by the apple, so he's not going to stay in there right?
Becky: He's not, the apple's not giving off heat.
Aaron: The apple, so then what happens with the kinetic energy? You can't stay in there.
Similar to teachers in our professional development courses, these high school students notice that some energy is unaccounted for and attempt to identify where it has gone. When Aaron recognizes that the kinetic energy in the apple is "being used" (which we interpret as "decreasing"), Becky responds with an unprompted and unexplained rejection of the production of thermal energy. Aaron asks the same question voiced in several of the above episodes: "What happens with the kinetic energy?" The students demonstrate their commitment to the principle of energy conservation in that they spend the majority of their remaining time striving to account for all of the energy.
In the end, these students do not identify thermal energy as the resulting energy form. Instead, they decide that Lou should act as potential energy after the apple hits the ground. Similar responses have been observed with university students discussing a swinging pendulum [8]. In that study, student responses were interpreted as indicating confusion between gravitational force and energy. Another possibility is that the students are using "potential energy" as a placeholder for an unidentified energy form, or any form of energy that is not associated with a perceptible indicator. Even without identifying the missing thermal energy, the students' use of potential energy shows a strong commitment to energy conservation within the scenario.
C. Partial Rejection
A third form of rejection observed in our professional development is to reject the idea that all of the kinetic energy in a scenario could transform to thermal energy, but accept that some of it could transform (a partial rejection). We see many instances of this partial acceptance of thermal energy. For example, Marta (Section V.B.i) states that the group should "just lose one person [unit of energy] to heat," not accounting for all of the energy using thermal energy. In the examples below, teachers in both the secondary and elementary professional development courses reject the idea that all of the energy transforms into thermal energy.
Partial rejection of thermal energy production seems to align with the treatment of thermal energy in many traditional physics problems, in which some of the energy dissipates due to friction or drag. A possible counter-claim to our claim that learners reject thermal energy because of its imperceptibility is the claim that learners incorrectly believe that thermal energy is always small in amount, since physics examples often mention thermal energy in reference to friction and minimize or neglect it. However, as we will show in Section VI, teachers in our courses spontaneously bring up examples in which thermal energy dissipates in large quantities with perceptible indicators, suggesting that they do not believe that thermal energy from dissipation is always small.
i. "I'm just saying all of it cannot be going into heat."
Roland, a secondary teacher in the professional development course discussed above, participated in a different group's Energy Theater about the lowering scenario. That other group concluded that the energy all winds up as thermal, but Roland argues against that conclusion. He states, "I'm just saying all of it [the energy] cannot be going into heat." Roland suggests that instead, the energy might transform back into gravitational energy (similar to the conclusion made in Section V.B.ii). In this episode, he concedes that some thermal energy is produced, but continues to search for the remaining energy. Roland repeatedly asks where the energy goes, agreeing that some of the energy transforms into thermal energy. However, he responds negatively to the idea that all of the energy "goes to heat." When the instructor suggests some of the energy transforms into sound energy, Roland expresses satisfaction that not all of the energy transformed into thermal energy. The fact that the lowering scenario does not produce any audible sound is not discussed.

ii. "It never made that much heat for all of us to be fanning!"

In a professional development course designed for elementary teachers, we find participants engaging in similar struggles with imperceptible indicators of energy, even when the scenarios differ. In the following episode, a group of K-8 teachers focuses on a scenario in which a basketball rolls to a stop. At the beginning, Brice, a middle school teacher, convinces the group to review their current understanding of the energy scenario by enacting Energy Theater. He narrates as they act out the energy processes.

Brianna: It never made that much heat for all of us to be fanning!
Brice: Yeah but we're just little amounts of energy.
Bart: We're very small.
Carrie: Think the ball.
Brice: We're like atomically sized.
Carrie: You are the ball.
Brianna: Very small, very small.
In this enactment, thermal energy is represented by a fanning motion. When Brianna, an elementary teacher, sees the group enact all of the kinetic energy in the ball transforming into thermal energy, she exclaims, "It never made that much heat for us all to be fanning!" That is, she states that the scenario does not produce a large amount of thermal energy. Brice, Bart, and Carrie reassure her that the units of thermal energy are "very small." The description of the energy units as being "very small" may contradict the rules of Energy Theater (and thus the principle of energy conservation) if the energy units are being described as smaller than they were before the energy transformed. In this interpretation, all four teachers may be seen as rejecting the idea that all energy has transformed into thermal energy, implicitly contradicting the principle of energy conservation. Alternatively, the teachers may be claiming that the total amount of energy in the scenario is very small (and conserved). Either way, the "small" size of the thermal energy units justifies the lack of perceptibility to them.
D. Skeptical Acceptance Without Justification
In addition to the above types of rejection (ignoring, explicitly rejecting, or partially rejecting thermal energy), teachers sometimes accept the production of thermal energy skeptically. In some cases teachers state their inability to identify perceptible indicators or mechanisms for its production as a reason for their skepticism. At other times, they indicate that they are relying on thermal energy as a catch-all or last-resort explanation when no other account is forthcoming. In this section we return to the group of secondary teachers discussing the lowering scenario and the elementary teachers discussing the rolling-basketball scenario. We then share an episode from another elementary teacher professional development course.

i. Using thermal energy is "just like a Hail Mary pass"
After the secondary teachers discussing the lowering scenario from Section V.B.i explicitly reject thermal energy, the group continues to talk through a series of questions about the missing energy and revisits the thermal energy suggestion.

Leah: I'm beginning to think that it [thermal energy] going to the air is a good idea. I really am-
Ted: That just seems so like,
Jennifer: like a giveaway.
Ted: It's just like a Hail Mary pass, it's just like I don't know, let's just go [throws an imaginary ball].
Leah's suggestion that thermal energy goes to the air is met with a rejection from Ted. He states that the production of thermal energy is "like a Hail Mary pass," using a term from American football for a long, low-probability throw made in desperation at the end of a game. In using this term and gesturing an aimless throw, Ted expresses a sense that this answer is a last-ditch attempt, unlikely to result in a successful outcome. Jennifer similarly describes Leah's suggestion as "a giveaway," as if thermal energy is the easy answer instead of the right one.
ii. Imperceptible energy indicators require "a leap of faith"

In the conversation among elementary teachers discussing the rolling-basketball scenario (Section V.C.ii), one teacher asks, "Why did it [the basketball] slow down? And where did the energy go, that would have kept it propelling at the same rate of speed?" The group reaches a consensus that some of the ball's kinetic energy is transformed into thermal energy. Brice accepts this conclusion, but also looks for other forms of energy, such as sound energy, to make up the rest.

Heat led to stopping the ball. The sound didn't lead to stopping the ball.
Jack: Did we actually hear anything?
Adrienne: Not me. But then I wasn't paying attention.
Bart: But by that same token we can't have any of these other [inaudible word] because we've got no way of measuring the ba-the energy that was in the ball, we just assumed there was some. And then, the energy just went away.
Carrie: So it's all a leap of faith.
In response to the suggestion that sound energy is also produced, Bart argues that in tracking energy, there are limitations to what you can measure. He states that they "assumed" the energy was there and then it went away. Carrie responds to the group as a whole that "it's all a leap of faith," possibly in reference to the presence of energy at the end of this scenario where it seems to disappear. These teachers may be arguing that anything that is not measurable requires a leap of faith, rather than making an argument specific to thermal energy. In any case, Bart and Carrie state that they must rely on assumptions and a leap of faith to accept thermal energy as the solution in this scenario.
iii. "I just have to say, okay, I believe it." In another elementary teacher professional development course, a small group of K-5 teachers discuss what happens to the energy of a vertically dropped object that hits the ground. The instructor describes bending a paperclip back and forth repeatedly and feeling the metal grow warmer. She uses this as an alternative, perceptible example of dissipation.
Instructor: There're some things, like when we did the paperclip, it seemed like we got a lot of heat out of very little motion. Vicki: Heat out of a little bit of motion -That's interesting too! Instructor: That's really interesting! You know, so. In her statement, Marissa explains that she must "just believe" that thermal and sound energy are present in certain scenarios because she "can't grasp what the evidence is" (i.e., she can't perceive warmth and sound). Earlier in the same conversation, Marissa expressed this concern by describing how she feels about her understanding of energy after it spreads into the atmosphere.
Marissa: I feel like once it gets to the air, atmosphere level, I have no conceptual understanding, and I know something happens, and that's where we got to the thermal discussion before-
Instructor: But you guys have been talking about that in terms of -so those guys have been moving against each other and bounding off one another in mass, what's happening at the molecular level?
Marissa: So we can guess that it's thermal.
Marissa expresses a concern that her lack of conceptual understanding leads her to "guess" that the result is thermal. Her doubtful acceptance is similar to Ted's "Hail Mary pass" and Carrie's "leap of faith" in the previous episodes. Marissa's statements are distinctive in that she claims that a lack of perceptible evidence limits her ability to reason about thermal energy.
E. Summary
The evidence above supports our claim that learners expect that energy associated with a perceptible indicator will be associated with another perceptible indicator when it transforms to another form. In particular, we have shown that learners expect that kinetic energy associated with visible motion will transform into thermal energy associated with palpable warmth. Evidence of this expectation is in the form of various degrees of rejection: Learners reject the idea that thermal energy is produced in scenarios in which warmth is not perceptible. We see these rejections from elementary teachers, secondary teachers, and secondary students in a variety of scenarios. These learners demonstrate a substantial commitment to the principle of energy conservation in that they strive to account for the kinetic energy that seems to have disappeared from the scenario. Our observations suggest that difficulties applying the conservation of energy principle to dissipative scenarios may have their roots in a strong association between forms of energy and their perceptible indicators.
We do not typically observe all four types of rejections and a successful identification of thermal energy in one conversation. However, the secondary teachers analyzing the lowering scenario articulate each of these reactions in a particularly illustrative conversation. Over the course of the 20-minute discussion, Jennifer and others suggest thermal energy as a possible solution seven times with various reasoning, and all of the suggestions are rejected (see Figure 3). The reasoning used in the suggestions of thermal energy grows in substance as the conversation progresses: from no reasoning, to arguing that the energy is lost, to suggesting that thermal energy goes to the air, to recognizing the warming of the body of the person lowering the ball. As their energy reasoning becomes more sophisticated, the teachers engage more fully in explaining their reactions. They begin by rejecting thermal energy suggestions implicitly: ignoring the suggestion, changing the subject, or addressing a different idea unrelated to thermal energy. When Marta suggests heat (the next-to-last suggestion in Figure 3), the rejection becomes explicit. In her statement, she is partially rejecting thermal energy herself by only suggesting a small quantity of energy to transform. Finally, when Leah makes the suggestion shown last in Figure 3, Ted and Jennifer articulate that their skepticism stems from a lack of evidence for the transformation. We attribute the progress of this conversation partly to teachers' use of the Energy Theater representation. Energy Theater is designed to support teachers in conserving and tracking energy in complex physical processes [27,31], including accounting for missing energy. Furthermore, Energy Theater's embodied action supports collaborative teams in theorizing mechanisms of energy transformation [29], including transformations from kinetic to thermal energy [32]. The development of the reasoning behind each suggestion of thermal energy and the teachers' investment in considering thermal energy highlight productive aspects of the Energy Theater activity and the resources that the teachers bring to the activity.
VI. EXAGGERATION STRATEGY FOR JUSTIFYING THE PRESENCE OF THERMAL ENERGY IN DISSIPATIVE PROCESSES
Some teachers successfully resolve the issue of imperceptibility by using exaggeration (Claim 2). These teachers exaggerate the total amount of energy in a scenario so that the thermal energy becomes perceptible, then extrapolate back to infer the presence of thermal energy in the original scenario. Some teachers produce the exaggeration effect by imagining that the scenario repeats many times, building up the effects of the energy changes until those effects become perceptible (Sections VI.A and VI.B). Other teachers relate the original dissipative scenario to an extreme version involving more total energy (Section VI.C). In all three episodes below, the exaggeration results in thermal energy that is either indicated by perceptible warmth or associated with burning.
A. "Couldn't we just have his body heat up?"

Jennifer, Marta, and Leah had all suggested thermal energy in earlier parts of the conversation (see Figure 3), but their suggestions were not taken up. One possibility is that the group needed multiple opportunities to accept the idea. This interpretation is weakened by Irene's enthusiastic reception of Jennifer's current suggestion, as though it offered a novel solution to their shared puzzle. Another possibility is that the present suggestion is substantively different from the earlier ones. Jennifer had formerly suggested that the energy doesn't go anywhere useful, goes away as heat, or goes to the earth; Marta had proposed they lose one unit of energy to heat; and Leah had suggested the energy goes into the air. Jennifer's latest question, "Couldn't we just have his body heat up?" relates the transformation into thermal energy to a familiar physical experience, and suggests a metabolic mechanism. Irene elaborates the physical experience of metabolic effort with repetitive "weightlifting" gestures (bicep curls), suggesting that even if lowering a ball at a constant velocity once does not produce perceptible thermal energy, doing so repeatedly would "make you sweat." This repetition is the primary difference between the weightlifting scenario and the original lowering scenario. In other words, this group productively uses an exaggeration strategy to identify the production of thermal energy.
B. "Same thing as doing a squat as slowly as you can"
In another professional development course for returning secondary teachers, Rita and Joe also use an exaggeration strategy to successfully locate missing thermal energy in the lowering scenario. Unlike the previous group of teachers, Rita and Joe first decide that the energy must transform into thermal energy, then work to justify the transformation. Rita relates the original scenario to an exaggerated version in which the effort involved in lowering a ball causes "shaking." While she describes the shaking, she acts out the difficulty and effort it takes to lower an extra large, heavy bowling ball by shaking her hands and straining her voice as she lowers the imaginary ball. Joe refers to the experience of doing squats to emphasize that lowering the ball is "hard" to do. The bodily experiences of controlling motion and shaking are used as perceptible indicators of effort that justify the production of thermal energy. The exaggerations, expressed primarily in Rita's imitation of lowering an extremely heavy object, make it more plausible that the indicators of thermal energy can be perceptible and support the idea that lowering a ball with lesser effort also produces thermal energy.
C. "We saw the space shuttle." The same K-8 teachers who discuss the rolled basketball scenario (Sections V.C.ii. and V.D.ii) use exaggeration to justify the presence of thermal energy. Several of them agree that thermal energy is produced and seek to justify why they have settled on thermal energy.
Carrie: How do you know that it's being transferred into heat energy?
Bart: Because [the instructor] said so. Or someone like her.
Brice: Because we saw the space shuttle, coming through the atmosphere.
Bart offers skeptical acceptance without justification, similar to the teachers cited in Section V.D. Brice, however, compares the rolling-ball scenario to the extreme scenario of a space shuttle reentering the atmosphere (in which the production of thermal energy is dramatic and consequential). Here, the space shuttle is slowing to a stop through the atmosphere in a similar fashion to the basketball slowing to a stop on the ground.
VII. CONCLUSION
The NGSS emphasize tracking and conserving energy through physical scenarios. In physics, we often track energy using perceptible indicators. However, in dissipative processes, the warmth associated with thermal energy is often imperceptible to human senses and its production goes unnoticed. We find that this imperceptibility of warmth counters learners' expectation that if energy is conserved, so too should the energy indicators be "conserved." The disappearance of perceptible indicators of energy can challenge learners' commitment to energy conservation by violating this expectation. We demonstrate this expectation by showing that learners engaged in tracking and conserving energy during Energy Theater initially reject ideas that violate it. Learners reject thermal energy implicitly, explicitly, or partially, or accept it only with skepticism. We see these rejections from learners with different levels of background knowledge and in the context of a variety of scenarios. In many cases learners do not identify thermal energy as the final product in dissipative processes, aligning with the findings in previous literature. However, we believe that their intuition of associating perceptible indicators with particular forms of energy is productive. Teachers in our courses use an exaggeration strategy along with this intuition to imagine scenarios in which perceptible warmth is created, and so successfully identify the production of thermal energy. We see this exaggeration strategy as a resource for supporting learners in better understanding the role of thermal energy in common scenarios and more readily accepting energy conservation. A similar resource was used by scientists in the demise of caloric theory, which followed from Count Rumford's experiments with machine boring of cannon barrels: scientists recognized that the violent and seemingly inexhaustible increase in thermal energy in this exaggerated scenario could not have been resident previously within the cannon as caloric.
The issue of imperceptible energy indicators is not isolated to dissipative processes involving thermal energy. It can also arise in the production of sound energy, chemical energy, and other forms. We have seen teachers compare the same quantity of energy in two different forms and express surprise that the perceptible indicators and actual amount of energy are not necessarily correlated. For example, Vicki, an elementary teacher in a professional development course, stated, "I always think about all the sound in the city. I mean there's a tremendous amount! It seems intuitively like sound energy, what's it doing? …not much because nothing is heating up much! I mean, there's an apparent amount of a lot of energy sometimes that does very little in the end." Vicki describes a difference in what seems to her to be a large amount of sound energy and the relatively small amount of thermal energy for which she sees evidence in a city. Future work could investigate learners' expectations about perceptible indicators of a variety of forms.
In a world of increasing concerns about energy usage, vast amounts of dissipated thermal energy are produced in day-to-day activities. An emphasis on real world examples can give K-12 teachers and students the opportunity to think about issues of energy use, waste, and efficiency, highlighting the sociopolitical ramifications of the production of thermal energy. Ultimately, instructors can support learners in tracking and conserving energy by (1) using real-world examples that include dissipation, (2) encouraging learners to use the exaggeration strategy, and (3) explicitly contrasting the perceptibility of energy indicators across a variety of forms. Learners who resolve the mysterious loss of energy using exaggerated examples will be better equipped to understand energy conservation, more aware of their own limitations of perception, and more conscious of their own energy use in everyday situations. | 11,768.4 | 2014-10-01T00:00:00.000 | [
"Education",
"Physics"
] |
Extractable Work from Correlations
Work and quantum correlations are two fundamental resources in thermodynamics and quantum information theory. In this work we study how to use correlations among quantum systems to optimally store work. We analyse this question for isolated quantum ensembles, where the work can be naturally divided into two contributions: a local contribution from each system, and a global contribution originating from the correlations among systems. We focus on the latter and consider quantum systems which are locally thermal, so that any extractable work can only come from correlations. We compute the maximum extractable work for general entangled states, separable states, and states with fixed entropy. Our results show that while entanglement gives an advantage for small quantum ensembles, this gain vanishes for a large number of systems.
Introduction
Thermodynamics and information theory are deeply related [1]. Recently, much attention has been dedicated to the problem of understanding thermodynamics for quantum systems. This has notably led to the development of a resource theory for quantum thermodynamics [2,3] and to the study of quantum thermal machines [4,5,7-9]. The role and significance of quantum effects (such as entanglement and coherences) in quantum thermodynamics has yet to be fully understood, although progress has recently been achieved [10][11][12][13][14].
A problem of particular importance in quantum thermodynamics is to understand which quantum states allow for the storage and extraction of work [15,16]. Such states are called non-passive, while states from which no work can be extracted are referred to as passive. Among passive states, one further distinguishes states which are completely passive, that is, states from which no work can be extracted even when an arbitrary number of copies of the state are jointly processed. Completely passive states are in fact simply thermal states [15,16].
In general, there exist two different ways in which work can be stored in a non-passive quantum system. First, and most commonly discussed, one stores work locally in the system, by rearranging the populations of the energy levels. However, there exists another possibility for storing work in a non-passive state, which makes use of correlations between subsystems. The main goal of the present work is to investigate this second scenario and to understand how to optimally make use of correlations among quantum systems for work storage.
Specifically, we consider a quantum system composed of n subsystems (particles or modes). Each subsystem is assumed to be in a thermal state, at the same temperature T. Hence each subsystem is passive and no work can be extracted from it locally. However, when the system can be jointly processed, here via a cyclic Hamiltonian process (or alternatively a global unitary operation), work can be extracted. This is because the subsystems are correlated, hence the global state is not passive in general. The main point of this approach is that it captures exactly the amount of work stored in the correlations, since locally (at the level of each individual subsystem) no work is available. First, we will see that if no restriction is made on the global state, then it is possible to store in the system the maximal amount of work compatible with the requirement that the reduced states are thermal. In other words, at the end of the protocol, the system is left in the ground state. Notably, this is possible thanks to quantum entanglement. It is then natural to ask if the same amount of work can be stored using a separable state, or even a purely classical state diagonal in the product energy eigenbasis, that is, with no coherences among different energy levels. We will see that, although the amount of work that can be stored in unentangled states is strictly smaller than in the entangled case for any finite n, in the thermodynamic limit (n → ∞) purely classical states already become optimal. In fact, quantum resources offer a significant advantage only for small n, while neither entanglement nor energy coherences are needed for optimal work storage in the thermodynamic limit. We also consider other natural limitations on the global state, such as fixing a bound on its entropy, and investigate the role of quantum coherence and entanglement in that case. Finally, we show that our results are also applicable to a different framework, where the system does not remain isolated but one has access to a bath or a reservoir.
Framework
We consider an isolated quantum system consisting of n d-level subsystems. The local Hamiltonian $h = \sum_j E_j\,|j\rangle\langle j|$ is taken to be the same for each subsystem and, without loss of generality, the ground-state energy is set to zero. We consider the situation where there is no interaction Hamiltonian between the subsystems, such that the total Hamiltonian is simply the sum of the individual local Hamiltonians, $H = \sum_i h_i$.
The class of operations that we consider is the class of cyclic Hamiltonian processes, i.e., we can apply any time-dependent interaction V(t) between the n subsystems for a time τ, such that V(t) is non-vanishing only when 0 ≤ t ≤ τ. The corresponding evolution is described by a unitary operator $U(\tau) = \overrightarrow{\exp}\big(-\tfrac{i}{\hbar}\int_0^{\tau}[H + V(t)]\,dt\big)$, where $\overrightarrow{\exp}$ denotes the time-ordered exponential. By varying over all V(t) we can generate any unitary operator U = U(τ), and therefore this class of operations can alternatively be seen as the ability to apply any global unitary on the system.
The task we are interested in is work extraction via a cyclic Hamiltonian process. Since the system is taken to be isolated, we quantify the extracted work by the change in average energy of the system under such a process. More precisely, we define the extracted work W as
$W = \operatorname{Tr}(\rho H) - \operatorname{Tr}(U\rho U^{\dagger} H). \qquad (1)$
Within this framework, it is well known that work can be extracted from a system if and only if the system is non-passive: a passive state, for a Hamiltonian $H = \sum_i E_i\,|i\rangle\langle i|$, is one of the form $\rho = \sum_i p_i\,|i\rangle\langle i|$ with $p_i \ge p_j$ whenever $E_i \le E_j$, i.e., a state is passive if and only if it is diagonal in the energy eigenbasis with eigenvalues non-increasing with respect to energy. Given a non-passive state ρ, the extracted work (1) is maximized by [17]
$W_{\max} = \operatorname{Tr}(\rho H) - \operatorname{Tr}(\rho_{\mathrm{passive}} H),$
where ρ and ρ_passive have the same spectrum. Importantly, we see that passivity is a global property of a system, and this raises interesting possibilities when considering a system comprised of a number of subsystems, as we do here. Indeed, global operations are capable of extracting more work than local ones, as a state can be locally passive but globally non-passive. Such an enhancement may have two origins: activation or correlations between subsystems. Activation occurs when $(\rho_{\mathrm{passive}})^{\otimes k}$ becomes non-passive for some k. Interestingly, thermal states are the only passive states that do not allow for activation, as any number of copies of a thermal state is also thermal [15,16]. On the other hand, states that are locally passive but have a non-product structure (i.e., they are correlated) also offer the possibility of work extraction. An extreme case, which is the focus of this article, is a set of correlated locally thermal states, since in such a case the global contribution comes uniquely from correlations. Our goal, in fact, is to understand how correlations allow for work extraction in systems that locally look completely passive.
We will then focus on a subset of all possible states of the system, specifically those which are locally thermal, that is, states ρ such that the reduced state of each subsystem satisfies
$\mathrm{Tr}_i\,\rho = \tau_\beta \qquad (4)$
for all i, where Tr_i denotes the partial trace over all subsystems except subsystem i. Here $\tau_\beta = e^{-\beta h}/Z$ is the thermal state of a subsystem at (a fixed but arbitrary) inverse temperature β = 1/T, where $Z = \operatorname{Tr}\,e^{-\beta h}$ is the partition function. Since ρ is locally thermal (4), and since H is a sum of local Hamiltonians, the first term on the right-hand side of (1) is fixed and given by $\operatorname{Tr}(\rho H) = nE_\beta$, where $E_\beta = \operatorname{Tr}(\tau_\beta h)$ is the average energy of the local thermal state. Apart from understanding how to exploit correlations to store work in the system, we will also study the role of entanglement and energy coherences in these processes. We consider three natural sets of correlated states: (i) arbitrarily correlated, and thus possibly entangled, states; (ii) separable states; and (iii) states diagonal in the product energy eigenbasis, which we also call classical. The latter clearly have no quantum coherence among the different energy levels, which are only classically correlated. We will study work extraction for these three different sets of correlated quantum states.
Optimal work extraction from correlations
We first show that within the above framework there is no restriction on the amount of work that can be stored in the correlations of a quantum system. From equation (1), given that the initial average energy is fixed, maximal work extraction amounts to minimising the final average energy of the system, a non-negative quantity (given the convention that the ground-state energy vanishes). Thus, by ending in the ground state one has clearly extracted the optimal amount of work, equal to $W = nE_\beta$. To that end, consider the state
$|\phi\rangle = \sum_{j=0}^{d-1} \sqrt{e^{-\beta E_j}/Z}\;|j\rangle^{\otimes n}. \qquad (6)$
It is straightforward to verify that $\mathrm{Tr}_i\,|\phi\rangle\langle\phi| = \tau_\beta$ for all i, so that |φ⟩ is locally thermal. Moreover, since the state (6) is pure, there exists a unitary matrix U such that $U|\phi\rangle = |0\rangle^{\otimes n}$. Thus all work can be extracted from the state |φ⟩. However, the state (6) is clearly entangled. Hence it is natural to ask whether the amount of extractable work changes if we restrict ourselves to separable, or even classical (i.e., diagonal), states. If this is the case, then entanglement clearly enhances work extraction in the scenario we consider.
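To make this concrete, here is a minimal numerical sketch (our own illustration, not code from the paper, with the GHZ-like form of the state (6) reconstructed above from its stated properties): it checks that the single-site marginals equal τ_β and that, the state being pure, the full W = nE_β is extractable.

```python
# Minimal sketch: |phi> = sum_j sqrt(e^{-beta E_j}/Z) |j>^{(x) n} is pure and
# locally thermal, so a global unitary can map it to |0...0>, extracting n*E_beta.
import numpy as np

d, n, beta = 2, 3, 1.0
E = np.arange(d, dtype=float)               # local spectrum, ground energy 0
p = np.exp(-beta * E); Z = p.sum(); p /= Z  # thermal populations of tau_beta

phi = np.zeros(d**n)
for j in range(d):
    idx = sum(j * d**k for k in range(n))   # index of the string |j j ... j>
    phi[idx] = np.sqrt(p[j])
rho = np.outer(phi, phi)

# reduced state of the first subsystem vs. the local thermal state tau_beta
rho1 = rho.reshape(d, d**(n-1), d, d**(n-1)).trace(axis1=1, axis2=3)
print("locally thermal:", np.allclose(rho1, np.diag(p)))   # True

E_beta = float(p @ E)                       # mean local energy
print("extractable work W = n*E_beta =", n * E_beta)
```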
Maximal work extraction from separable and classical states
We start by considering the case where the system is initially in a separable state ρ. Hence the conditional entropy of ρ is necessarily positive, that is, $S(\rho) \ge S(\tau_\beta)$. This condition places an upper bound on the maximal extractable work from separable states. Indeed, the global state with the least energy compatible with a given entropy is the thermal state [15,16],
$\rho_{\mathrm{th}} = \tau_{\tilde\beta}^{\otimes n},$
where $\tilde\beta$ is defined implicitly through the relation $S(\tau_{\tilde\beta}) = S(\tau_\beta)/n$, requiring that the thermal state ρ_th has the same fixed entropy as the initial state. The state ρ_th is the (possibly unachievable) optimal final state, as the minimal amount of energy is left in the state. The bound on the extractable work for separable states is thus
$W_{\mathrm{sep}} \le n\,(E_\beta - E_{\tilde\beta}).$
Hence optimal work extraction is impossible with separable states, as the above bound is always strictly smaller than nE_β. Moreover, notice that it is not clear whether this bound can be achieved, since we have assumed only conservation of entropy and took into consideration neither the restriction to unitary processes nor the constraint of local thermality. The tightness of this bound will be discussed in more detail later.
Let us now move to the case of purely classical states, i.e., states which are diagonal in the (global) energy eigenbasis. Consider the state
$\rho_{\mathrm{cl}} = \sum_{j=0}^{d-1} \frac{e^{-\beta E_j}}{Z}\,\big(|j\rangle\langle j|\big)^{\otimes n}, \qquad (9)$
which is simply the state (6) after being dephased in the (global) energy eigenbasis. Notice that (9) saturates the bound $S(\rho) \ge S(\tau_\beta)$, and in Appendix A we show that it is the only separable state that does so. The maximal extractable work $W_{\mathrm{cl}}$ from (9) is found, as before, by identifying its associated passive state and computing the resulting energy difference. This construction provides a lower bound on the maximal amount of work that can be extracted from purely classical states. As the number of subsystems n increases, W_cl rapidly converges to $W_{\max} = nE_\beta$. This shows that, in the thermodynamic limit (n → ∞), the difference in capacity between storing work in an entangled quantum state and in a diagonal state vanishes; quantum coherences and entanglement play essentially no role here. However, for finite n there is still a difference, and in particular in the regime of relatively small n the ability to store work in entanglement or coherences offers a significant advantage (see Fig. 1).
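The convergence can be checked directly; the sketch below (our own illustration, assuming qubits with gap 1) builds the classical state (9), sorts its d nonzero eigenvalues onto the d lowest global energy levels to obtain the passive state, and prints the ratio W_cl/(nE_β).

```python
# Sketch: work from the dephased classical state rho_cl of Eq. (9); the passive
# state places its d nonzero eigenvalues on the d lowest global energy levels.
import numpy as np
from itertools import product

d, beta = 2, 1.0
E = np.arange(d, dtype=float)
p = np.exp(-beta * E); p /= p.sum()
E_beta = float(p @ E)

for n in [1, 2, 3, 5, 10]:
    # all global product-level energies, sorted ascending
    energies = np.sort([sum(levels) for levels in product(E, repeat=n)])
    eig = np.sort(p)[::-1]                    # d nonzero eigenvalues, descending
    E_passive = float(eig @ energies[:d])     # largest weight -> lowest level
    W_cl = n * E_beta - E_passive
    print(n, W_cl / (n * E_beta))             # ratio -> 1 as n grows
```

For n = 1 the ratio is 0, as it must be: a single locally thermal qubit is already passive. The ratio then climbs rapidly towards 1 with n, illustrating the stated convergence.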
Protocol for maximal work extraction given an entropy constraint
The previous results can be intuitively understood from entropy considerations. When the correlations in the state are not restricted, it is possible to satisfy the requirement of local thermality with a pure entangled state, thereby attaining optimal work extraction. When the state is separable, the global entropy of the state cannot be zero, as it is lower bounded by the local entropy. This explains the gap between entangled and separable states. However, it is still possible to find the classical state (9) that, apart from being locally thermal, has an entropy that does not scale with n. In the limit of a large number of subsystems, this global entropy becomes negligible and the classical state turns out to be effectively optimal. In view of these considerations, it is natural to study how the previous bounds are affected when the global entropy of the state is bounded. In fact, it is a natural scenario to consider systems whose entropy scales with the number of subsystems, for example S ∝ n, that is, systems with a non-vanishing entropy per subsystem.
Following the same line of reasoning as in the previous section, we obtain the following bound on the extractable work given a constraint $S(\rho) = S$ on the entropy of the initial state:
$W \le n\,(E_\beta - E_{\beta'}),$
where β′ is defined implicitly through the relation $n\,S(\tau_{\beta'}) = S$. In what follows we provide protocols attaining this bound for all n, i.e., there is an initial state ρ which is locally thermal and can be brought to a product of n thermal states at inverse temperature β′ by application of a suitable unitary. We provide a detailed analysis for the case of qubits; for the general case of qudits see [18], where the same protocol is applied to creating correlations at minimal energy cost. For clarity, we work backwards and exhibit a unitary which takes the final state $\tau_{\beta'}^{\otimes n}$ to an initial state ρ which is locally thermal at any temperature β ≤ β′. We first consider the simplest case of two qubits. We define the unitary transformation U_α as a rotation by an angle α acting only on the subspace spanned by |00⟩ and |11⟩, and consider the initial state $\rho = U_\alpha\,\tau_{\beta'}^{\otimes 2}\,U_\alpha^{\dagger}$. Since U_α only generates coherences in the subspace where both qubits are flipped, it is clear that the reduced state of each qubit is diagonal. To calculate the local temperature β it is convenient to introduce the parameter $z = \langle 0|\rho_1|0\rangle - \langle 1|\rho_1|1\rangle$, i.e., the "bias" of the local (qubit) subsystem in state ρ_1. The bias and temperature are related through $z = \tanh(\beta E/2)$. A straightforward calculation shows that, under the action of U_α, the state $\tau_{\beta'}$ (with bias z′) transforms to an initial state ρ with bias $z = \cos(2\alpha)\,z'$. That is, we can achieve any bias |z| ≤ z′. As such, the local temperature of the initial state, which is simply given by $\beta = \frac{2}{E}\tanh^{-1}\!\big(\cos(2\alpha)\,z'\big)$, can take any value β ≤ β′ by an appropriate choice of α.
The above protocol can be readily generalised to the case of n qubits. Let us denote by |i| the Hamming weight of the bit string i labelling a computational-basis state. Given a set of angles α_k, we define U_α as the unitary that performs a rotation by α_k, with k = |i|, in each two-dimensional subspace spanned by |i⟩ and |ī⟩, where ī denotes the bit-wise negated string (with |ī| = n − |i|), i.e., such that $|\bar i\rangle = \sigma_x^{\otimes n}|i\rangle$. U_α thus rotates between the state |i⟩, of energy ⟨i|H|i⟩ = E|i|, and the flipped state |ī⟩, of energy nE − ⟨i|H|i⟩ = E(n − |i|). As we show in Appendix B, choosing α_j = α for all j, the state $\rho = U_\alpha\,\tau_{\beta'}^{\otimes n}\,U_\alpha^{\dagger}$ is locally thermal, and, exactly as in the case of two qubits, the local bias and temperature are given by $z = \cos(2\alpha)\,z'$ and $\beta = \frac{2}{E}\tanh^{-1}(z)$. Again, any bias |z| ≤ z′ can be reached, hence the initial state can take any temperature β ≤ β′. To summarise, we have constructed states ρ which are optimally correlated, in the sense that the maximal extractable work is equal to the thermodynamic bound arising from entropy considerations alone. As required, they have local thermal marginals (which can take any possible temperature), in such a way that the only origin of the extracted work is the correlation among the subsystems. Notice that the above protocol exploits coherence in all two-dimensional subspaces spanned by |i⟩ and |ī⟩. It turns out that, in the limit of large n, such coherences imply the presence of entanglement (see Appendix C for details). Nevertheless, by adapting the above protocol, it is possible to extract the maximal amount of work without generating any coherences. Specifically, for n sufficiently large, one can optimally correlate the state in a fully classical manner by considering unitaries which invert the populations (and therefore the temperature) between states with |i| = k and |i| = n − k, corresponding to energies kE and (n − k)E respectively, for a suitable choice of k of order np. For n sufficiently large, we can then achieve approximately any bias |z| ≤ z′, and hence any temperature β ≤ β′ (see Appendix D for details).
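The two-qubit case can be verified numerically. The sketch below (our own illustration; the explicit matrix form of U_α is reconstructed under the stated assumption that it rotates only within span{|00⟩, |11⟩}) checks both that the single-qubit marginal stays diagonal and that the bias obeys z = cos(2α) z′.

```python
# Sketch of the two-qubit protocol: rho = U_a (tau (x) tau) U_a^T,
# with U_a a rotation in span{|00>,|11>}; verify z = cos(2a) * z'.
import numpy as np

beta_p, Eq, a = 1.0, 1.0, 0.4              # final inverse temperature, gap, angle
q = np.exp(-beta_p * np.array([0.0, Eq])); q /= q.sum()
tau = np.diag(q)
zp = q[0] - q[1]                           # bias z' of tau_{beta'}

U = np.eye(4)
c, s = np.cos(a), np.sin(a)
U[0, 0], U[0, 3] = c, -s                   # acts only on |00> and |11>
U[3, 0], U[3, 3] = s, c

rho = U @ np.kron(tau, tau) @ U.T
rho1 = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)   # first qubit
z = rho1[0, 0] - rho1[1, 1]
print("z = cos(2a) z':", np.isclose(z, np.cos(2 * a) * zp))   # True
print("marginal diagonal:", np.isclose(rho1[0, 1], 0.0))      # True
```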
Work from energy coherences
In this section we consider states whose diagonal (in the energy eigenbasis) is set to be equal to that of a global thermal state, together with the initial condition of local thermality. More formally, this approach is equivalent to imposing that all moments of the energy distribution are those of the thermal state, i.e.,
$\operatorname{Tr}\!\left(H^{k}\rho\right) = \operatorname{Tr}\!\left(H^{k}\,\tau_{\beta}^{\otimes n}\right) \qquad (14)$
for all k. This contrasts with the previous sections, where only the first moment (i.e., the average energy) was constrained. Moreover, notice that the entropy of the initial state is here unconstrained.
Focusing first on the case of n qubits, we consider the state which is maximally entangled in every degenerate subspace:
$\rho_{\mathrm{deg}} = \sum_{k=0}^{n} \binom{n}{k}\, p^{k}(1-p)^{n-k}\; |D_{n,k}\rangle\langle D_{n,k}|, \qquad (15)$
where $p = e^{-\beta E}/Z$, and $|D_{n,k}\rangle = \binom{n}{k}^{-1/2}\sum_{|i|=k}|i\rangle$ is the Dicke state of n qubits with k excitations. It is straightforward to verify that the above state satisfies equations (4) and (14).
The passive state associated to (15) can be found as follows. Notice that the state (15) is a mixture of n + 1 orthogonal states. Therefore the optimal unitary amounts to rotating each of these states onto the n + 1 lowest energy levels. The work extracted via this transformation is then given by the difference between nE_β and the energy of the resulting passive state, so that it is possible to extract all the work contained in the initial state up to a correction of O(1). Moreover, a similar result holds for the general case of n qudits (see Appendix E). An interesting question is whether the state ρ_deg features entanglement. Intuition suggests that this may be the case, as large coherences are crucial in this scenario. However, using the techniques developed in [28], we have not been able to witness entanglement for n ≤ 50.
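A short numerical sketch (our own illustration, for qubits; the explicit work formula is not shown above) makes the O(1) correction visible: the n + 1 eigenvalues of ρ_deg follow a binomial distribution, and for qubits the n + 1 lowest global levels consist of one level at energy 0 and n levels at energy E.

```python
# Sketch: extractable work from rho_deg (Eq. (15)); rotate its n+1 orthogonal
# eigenstates onto the n+1 lowest global levels (one at 0, n at Eq).
import numpy as np
from math import comb

n, beta, Eq = 6, 1.0, 1.0
p = np.exp(-beta * Eq) / (1 + np.exp(-beta * Eq))
E_beta = p * Eq                                   # local mean energy

w = np.array([comb(n, k) * p**k * (1-p)**(n-k) for k in range(n+1)])

low = np.concatenate(([0.0], np.full(n, Eq)))     # n+1 lowest level energies
E_passive = np.sort(w)[::-1] @ np.sort(low)       # largest weight -> energy 0
W_deg = n * E_beta - E_passive
print(W_deg, "out of", n * E_beta)                # gap stays O(1) as n grows
```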
Access to a bath
Finally, we consider a different scenario in which the system is no longer isolated, but one has access to a bath at the same (local) temperature. Then it is well known that the extractable work is bounded by the free energy difference, W ≤ ΔF. In our set-up this reads
$W \;\le\; T\,\Big[\sum_{i=1}^{n} S(\rho_i) - S(\rho)\Big] \;=\; T\,\big[n\,S(\tau_\beta) - S(\rho)\big], \qquad (17)$
which reduces to the quantum mutual information in the bipartite case (thus reinforcing our argument that, in this setting, work is extracted only from correlations). The bound (17) is strictly bigger than (11), which is natural as we consider a bigger set of operations. Equality in (17) can be obtained by a quasistatic process [19,20], which essentially takes an infinite time, whereas (11) can be reached with an appropriate controlled unitary operation, which is a fast process (reinforcing the idea of a trade-off between time and extracted work). On the other hand, the states (6) and (9) maximize the right-hand side of (17), i.e., their free energy content is maximal, for entangled and separable states respectively, and thus our previous considerations also hold in this framework.
For the case of extracting work from energy coherences, one can readily use (17) by computing the entropy of (15). As the spectrum of ρ_deg follows a binomial distribution, its entropy can be straightforwardly calculated:
$S(\rho_{\mathrm{deg}}) = -\sum_{k=0}^{n} w_k \ln w_k, \qquad w_k = \binom{n}{k}\,p^k(1-p)^{n-k}, \qquad (18)$
which grows only logarithmically in n. Therefore, ρ_deg allows for storing all work in coherences except for a correction term of order ln n. Our results thus complement previous studies in this setting, such as detailed analyses of the extractable work from local/non-local operations [21][22][23], from correlated states [24,25], from entanglement with feedback control [26], and for deterministic work extraction [27]. It is also worth mentioning that when the correlations are present not between subsystems but rather between the system and the bath, they become a source of irreversibility [9].
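The logarithmic growth is easy to confirm numerically; the sketch below (our own illustration) compares the exact binomial entropy with the standard Gaussian approximation $\tfrac{1}{2}\ln(2\pi e\,np(1-p))$, which we use here as an assumption about the asymptotics.

```python
# Sketch: S(rho_deg) equals the Shannon entropy of a Binomial(n, p)
# distribution, which grows like (1/2) ln n (the "ln n" correction).
import numpy as np
from math import comb, log

p = 1 / (1 + np.e)                  # thermal excitation probability at beta*E = 1
for n in [10, 100, 1000]:
    w = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
    S = -sum(x * log(x) for x in w if x > 0)
    print(n, S, 0.5 * log(2 * np.pi * np.e * n * p * (1 - p)))
```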
Conclusion
We have investigated the problem of storing work in the correlations between subsystems. To ensure that no work can be extracted from subsystems locally, we focused on quantum states which are locally thermal; hence all extractable work must come from correlations between subsystems. This gives a new perspective on the problem of passivity, in particular for the case of composite systems.
In the absence of any further constraint, all work can be extracted from the system. Importantly, entanglement was shown to be necessary in this case. Imposing additional constraints on the initial state, such as being separable, having a fixed entropy, or having all moments of the energy distribution fixed, leads to a reduction of the amount of extractable work. Nevertheless, in the thermodynamic limit, we found that essentially all work can be extracted in all three cases. This was demonstrated by giving explicit examples of states.
An interesting open question is to investigate the scenario in which not only the local marginals are thermal, but also the k-body reduced states (e.g., of nearest neighbours) are thermal. This may give insight into the role of different types of multipartite entanglement in the context of work extraction. A further interesting question is to derive bounds in the other direction, i.e., to find correlated states with minimal work content. This question will be addressed in future work [29].
Appendix A: The only separable state saturating the entropy bound
Since Ω is separable, it can be written in the form $\Omega = \sum_x \lambda_x\,\rho^{S_1}_x \otimes \cdots \otimes \rho^{S_N}_x$ for some discrete index x, nonnegative λ_x summing to 1, and normalised states $\rho^{S_i}_x$ over S_i. Given the condition that the state of S_1, $\sum_x \lambda_x \rho^{S_1}_x$, is equal to τ, and the joint convexity of the relative entropy [1], we obtain the chain of inequalities (22). So the minimal possible value for S(Ω) is S(τ), and to find the purest Ω we have to saturate both inequalities in the chain (22). The second inequality is resolved trivially, giving that ρ_x is pure for all values of x. Doing the same with respect to, e.g., S_2, we get that all $\rho^{S_1}_x$ are also pure (denoted, as above, $|S_1^x\rangle$). The equality conditions for the first inequality of (22) are less trivial [1]. If we consider only the nonzero λ_x and denote their number by L, Theorem 8 of [1] [2] gives a condition (23) valid for all t > 0 and x = 1, ..., L, where the equality holds on the corresponding support, with P_x the projector onto that subspace [3]. Bearing in mind that we consider only nonzero λ_x, and repeating the same procedure for the other N − 1 systems, we obtain from (23) a set of equalities. We now concentrate on the first equality and, for simplicity, drop the index enumerating the subsystems. With that, and taking into account that $P_x \Omega P_x = \lambda_x P_x$, we take $\{|i\rangle\}_{i=1}^{d}$, the eigenbasis of τ in the Hilbert space of the subsystem, and construct the matrix $a_{xk} = |\langle S^x|k\rangle|^2 \ge 0$. With this we rewrite (25) as $\sum_{k=1}^{d} a_{xk}\,p_k = \lambda_x$ (26); also, from the normalization, we have $\sum_k a_{xk} = 1$ for all x (27). Finally, we impose the condition that all partial states are τ (28). First, let us show that L > d cannot be true. Indeed, substituting (26) into (28) gives $\sum_{x,l} a_{xk}\,a_{xl}\,p_l = p_k$; multiplying the left-hand side by $a_{xk}$, summing over k, and using $\sum_x \lambda_x = 1 = \sum_{x,k} a_{xk}\,p_k$, we see that the resulting equality (29) can hold only if $\sum_k a_{xk}^2 = 1$ for all x (30). But we have (27) and 0 ≤ a_{xk} ≤ 1, so (30) can be true only if each row of a = (a_{xk}) consists of zeroes and a single 1. Since none of the p_k is zero, (28) implies that there must be at least one 1 in each column of a. Let us arrange the x so that the first d rows of a look like the identity matrix. Then, since $\sum_x \lambda_x = 1$, we get λ_x = 0 for all x ≥ d + 1, which is impossible because of (26) and the fact that there must be at least one 1 in each row. By the same argument, d > L is not possible either. So d = L and (31) holds. Also, since now a = I, each $|S^x\rangle$ coincides with an eigenstate |x⟩ of τ, so that Ω is exactly the dephased state (9).
Appendix B: Protocol for maximal work extraction given an entropy constraint
In this appendix we show that the unitary U_α, with α = (α, ..., α), given by (33), produces a state $\rho = U_\alpha\,\tau_{\beta'}(H_S)^{\otimes n}\,U_\alpha^{\dagger}$ that is locally thermal, with local bias z and temperature β given by $z = \cos(2\alpha)\,z'$ (34), where z′ is the bias of $\tau_{\beta'}$ (for the sake of brevity, we write $\tau_{\beta'}$ in place of $\tau_{\beta'}(H_S)$, since no confusion should arise). To see that this is the case, note first that ρ is symmetric under permutations, since both the initial state $\tau_{\beta'}(H_S)^{\otimes n}$ and U_α are symmetric. Therefore it suffices to calculate $z_1 = \langle 0|\rho_1|0\rangle - \langle 1|\rho_1|1\rangle$, which can be rewritten so as to show that we approach the bias z′ by swapping only the populations of two subspaces within the typical subspace. By applying a sequence of unitaries of the form V, for different values of µ (i.e., corresponding to different subspaces), we can therefore change the local bias (and hence the local temperature) of the state in small increments, which can be made arbitrarily small by choosing n sufficiently large. We note, however, that the above analysis does not hold if p becomes too small, approximately of order $1/\sqrt{n}$. This constrains the entropy S of the initial state, which must grow approximately as $\sqrt{n}$.
Appendix E: Correlations in degenerate subspaces
Consider the total Hamiltonian $H = \sum_i h_i$, where each $h_i = h$. The number of orthogonal states in the mixture corresponds to the number of non-zero eigenvalues of (15); note that we switch notation for the binomial coefficients from $\binom{n}{k}$ to $C^k_n$. In order to find the passive state associated to (15), one has to move these eigenvalues to the lowest energy levels. This operation requires knowledge of the spectrum of $h_i$; nevertheless, it suffices for our purposes to move them to a sufficiently degenerate energy. The task is then to find the lowest energy $E_{\min}$ with sufficient degeneracy, so that the work extracted after such a transformation is simply given by $W_{\mathrm{deg}} = E(\rho_{\mathrm{deg}}) - E_{\min}$. Notice that E is of the order of the energy of one subsystem (for instance, choosing $k_2 = d$ and $k_j = 0$ for $j > 2$, we obtain $E = d^2$), and therefore we can take $E_{\min} = E$, obtaining the desired result.
[1] A. Jenčová and M. B. Ruskai, "A unified treatment of convexity of relative entropy and related trace functions, with conditions for equality", Rev. Math. Phys. 22, 1099 (2010).
[2] We use part (d) of the theorem in the mentioned paper. Their and our notation correspond as follows: their K is unity in our case; their $A = \otimes_j A_j$ is here $\Omega = \sum_x \lambda_x \rho^A_x \otimes \rho^B_x$; and their $B = \otimes_j B_j$ is $\tau \otimes \frac{I_B}{d_B} = \sum_x \lambda_x \rho^A_x \otimes \frac{I_B}{d_B}$.
[3] In that notation, $\Omega = \sum_x \lambda_x P_x$. | 6,685 | 2014-07-29T00:00:00.000 | [
"Physics"
] |
Charmonium Mass Spectrum with Spin-Dependent Interaction in Momentum-Helicity Space
In this paper we solve the nonrelativistic form of the Lippmann-Schwinger equation in momentum-helicity space by numerically inserting a spin-dependent quark-antiquark potential model. To this end, we use momentum-helicity basis states to describe a nonrelativistic reduction of the one-gluon-exchange potential. We then calculate the mass spectrum of charmonium $\psi(c\bar{c})$, and finally we compare the results with other theoretical results and experimental data.
INTRODUCTION
Over the past years, several models and methodological approaches based on solving the relativistic and nonrelativistic forms of the Schrödinger or Lippmann-Schwinger equation have been developed for studying light and heavy mesons in coordinate and momentum space, respectively.
Recently, a three-dimensional approach based on momentum-helicity basis states has been developed for studying nucleon-nucleon scattering and the deuteron state [1,2]. We extend this approach to particle physics problems by solving the nonrelativistic form of the Lippmann-Schwinger equation to obtain the mass spectrum of heavy mesons, using a nonrelativistic quark-antiquark interaction composed of a linear confinement term, a Coulomb term, and various spin-dependent pieces.
In the heavy-quark (c,b) mesons the differences between energy levels are small compared to the particle masses. Hence, the nonrelativistic Lippmann-Schwinger equation can be used to study their quantum behavior. To this end, we have used the nonrelativistic form of the Lippmann-Schwinger equation in the momentum-helicity representation to study the charmonium as a heavy meson. For this purpose, we have used a nonrelativistic quark-antiquark potential based on one-gluon exchange in the momentum-helicity representation.
This article is organized as follows. In Sect. 2, the nonrelativistic Lippmann-Schwinger equation in the momentum-helicity basis states, which leads to coupled and uncoupled integral equations for various quantum numbers, is presented briefly. In Sect. 3, a spin-dependent quark-antiquark potential model is described in the momentum-helicity basis states. The details of the numerical calculations and the results obtained for charmonium are presented in Sect. 4. Finally, a summary and an outlook are provided in Sect. 5.
LIPPMANN-SCHWINGER EQUATION IN MOMENTUM-HELICITY BASIS STATES
The nonrelativistic form of the homogeneous Lippmann-Schwinger equation describing the heavy-meson bound state is an integral equation in which V denotes the quark-antiquark interaction, m is the mass of the quark or antiquark, and $|\Phi^{M_j}_j\rangle$ is the meson bound state with total angular momentum j; M_j is the projection of the total angular momentum j along the quantization axis. The integral form of this equation in the momentum-helicity basis states is written as in [3], where p is the magnitude of the relative momentum of the quark and antiquark, S is the total spin of the meson, Λ is the spin projection along the relative momentum, and $d^j_{M_j\Lambda'}(\theta')$ are the rotation matrices. For an arbitrary total angular momentum j and the singlet case of the total spin, Eq. (2) leads to a single equation. Likewise, for j = 0 and the triplet case of the total spin, Eq. (2) leads to a single equation. For S = 1 and j > 0 the situation is more complicated; for example, for j = 1, Eq. (2) leads to one equation for the P channel and two coupled equations for the S and D channels. Here $\Psi_{lSj}(p)$ denotes the partial-wave component of the wave function, which is connected to the momentum-helicity component of the wave function as in [3], together with the corresponding inverse relation.
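The explicit equations of this section did not survive extraction. As a hedged reconstruction of the generic structure (not copied from the paper), the homogeneous bound-state equation in a partial-wave momentum basis reads, for equal quark and antiquark masses m (reduced mass m/2),

```latex
% Generic partial-wave form (a reconstruction, not the paper's exact equation):
\Psi_{lSj}(p) \;=\; \frac{1}{E - p^{2}/m}\,
  \sum_{l'} \int_{0}^{\infty} \! dp'\, p'^{2}\,
  V^{Sj}_{l\,l'}(p,p')\, \Psi_{l'Sj}(p') .
```

For S = 0 (and for S = 1, j = 0) the sum collapses to a single term, giving the uncoupled equations mentioned above, while for S = 1, j = 1 the tensor force couples l, l′ ∈ {0, 2}, giving the coupled S-D system.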
QUARK-ANTIQUARK POTENTIAL IN MOMENTUM-HELICITY BASIS STATES
The spin-dependent potential model that we use in our calculations is the sum of a linear confinement term and a simple nonrelativistic reduction of an effective one-gluon-exchange potential without retardation. In coordinate space this potential is given in [4], where σ is the string tension, α_s is the strong-interaction fine-structure constant, f_c is the color factor (equal to −4/3 for quark-antiquark and −2/3 for quark-quark), σ_1 and σ_2 are the Pauli matrices, and L is the total orbital angular momentum operator. Fourier transformation of this potential to momentum space yields an expression in terms of the momentum transfer $\mathbf{q} = \mathbf{p}' - \mathbf{p}$. The kernels of the integral equations are singular. To overcome this problem we use the regularized form of the linear confining and Coulomb parts of the potential [5]; details of the Fourier transformation of the regularized parts are given in Appendix A. We also use a Gaussian form factor, $\exp(-\frac{1}{2}\lambda^2 q^2)$, at the quark-gluon vertex, as in Ref. [6], to remove the singularity of the kernels due to the one-gluon-exchange potential; the parameter λ can be interpreted as the size of the quark. In Ref. [7] the pointlike quark-gluon vertex is instead replaced by a form factor $1/(q^2+\beta^2)$, in which $\beta^{-1}$ is the effective quark size. In this work we use both the regularized form and the Gaussian form factor for the Coulomb and $f_c\alpha_s p^2/(m^2 r)$ parts of the potential, which makes the numerical results converge faster. The final form of the potential in momentum-helicity space involves $\gamma = \hat{p}'\cdot\hat{p} = \cos\theta\cos\theta' + \sin\theta\sin\theta'\cos(\varphi-\varphi')$, where $|\mathbf{p};\hat{p}S\Lambda\rangle$ is the momentum-helicity basis state, an eigenstate of the helicity operator $\mathbf{S}\cdot\hat{p}$ [1]. If the vector p is taken along the z-direction, Eq. (15) simplifies accordingly. For the numerical calculations we need the matrix elements of the potential $V^S_{\Lambda\Lambda'}(p,p',\theta')$, which are related to the matrix elements of Eq. (13); combining Eqs. (13), (16) and (17), the final form of the matrix elements inserted into the numerical calculations is obtained, with $\gamma = \hat{p}'\cdot\hat{z} = \cos\theta'$.
DISCUSSION AND NUMERICAL RESULTS
For the numerical calculations, as a first step we use Gaussian quadrature grid points to discretize the momentum and angle variables. The integration interval for the momentum is covered by two different mappings of the Gauss-Legendre points from the interval [−1, +1]: a hyperbolic mapping onto [0, p_2] and a linear mapping onto [p_2, p_max]. We then calculate the matrix elements of the potential $V_{\Lambda\Lambda'}(p,p',\theta')$ from Eq. (18). According to Eq. (3), the integration over the spherical angle variable θ′ is carried out independently. Finally, we solve the integral equations (4)-(8) as eigenvalue equations. The integration over the momentum variable is cut off at q_max = 10 GeV; this value is chosen so that the numerical results do not depend on it. The typical values for p_1 and p_2 are 1 GeV and 3 GeV, respectively. These choices keep the total number of grid points for the momentum intervals small; other choices are possible but require different numbers of grid points for the momentum variables. The parameters of the potential model, shown in Table I, are fixed by a fit to the masses of the states η_c, J/ψ and h_c, similar to what is done in Ref. [9]. The resulting charmonium mass spectrum is shown in Table II, where it is compared with experimental data and another theoretical work. From Eqs. (7) and (8) it is clear that the tensor term in the potential mixes the S- and D-partial waves, but, as shown in Table III, this mixing is very weak. The mixed charmonium states are therefore labelled in Table II by their dominant partial wave. As a test of our numerical calculations, Table IV shows the convergence of the results as a function of the numbers of grid points N_P1, N_P2 and N_θ for the momentum and angle variables; N_P1 and N_P2 are the numbers of grid points for the intervals [0, p_2] and [p_2, p_max], respectively, and N_θ is the number of grid points for the spherical angle variable. In our calculations we choose N_P1 = 100, N_P2 = 100 and N_θ = 200 grid points to achieve an acceptable accuracy.
Table III (S-D mixing of the vector charmonium states):
  n^{2S+1}L_J (mixing)         cc̄ state   P_S%    P_D%
  1^3S_1 (1^3S_1 - 1^3D_1)     J/ψ        99.93   0.07
  2^3S_1 (2^3S_1 - 2^3D_1)     ψ′         99.90   0.10
  3^3S_1 (3^3S_1 - 3^3D_1)     ψ‴         99.88   0.12
  1^3D_1 (1^3D_1 - 1^3S_1)     ψ″         99.88   0.12
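The paper's explicit mapping formulas did not survive extraction; the sketch below (our own Python illustration, using a commonly used hyperbolic map as an assumption) shows how such a two-interval Gauss-Legendre momentum grid can be built.

```python
# Sketch of the momentum grid: hyperbolic map onto [0, p2], linear onto
# [p2, pmax]; weights carry the Jacobian of each mapping.
import numpy as np

def momentum_grid(n1, n2, p1=1.0, p2=3.0, pmax=10.0):
    # hyperbolic map of [-1, 1] onto [0, p2], denser near the origin
    x, wx = np.polynomial.legendre.leggauss(n1)
    c = 2.0 * p1 / p2
    p_a = p1 * (1 + x) / (1 - x + c)
    w_a = wx * p1 * (2 + c) / (1 - x + c) ** 2     # dp/dx Jacobian
    # linear map of [-1, 1] onto [p2, pmax]
    y, wy = np.polynomial.legendre.leggauss(n2)
    p_b = 0.5 * (pmax + p2) + 0.5 * (pmax - p2) * y
    w_b = 0.5 * (pmax - p2) * wy
    return np.concatenate([p_a, p_b]), np.concatenate([w_a, w_b])

p, w = momentum_grid(100, 100)
print(w.sum())     # ~ 10.0: the weights integrate 1 over [0, pmax]
```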
SUMMARY AND OUTLOOK
In this paper we extend an approach based on momentum-helicity basis states to the calculation of the mass spectrum of heavy mesons by solving the nonrelativistic form of the Lippmann-Schwinger equation. As an application, we use this approach to obtain the mass spectrum of charmonium. The advantage of working with helicity states is that they are eigenstates of the helicity operator appearing in the quark-antiquark potential. Thus, the helicity representation is less complicated than the spin representation with a fixed quantization axis for representing spin-dependent potentials. This work is the first step toward studying single, double, and triple heavy-flavor baryons in the framework of the nonrelativistic quark model by formulating the Faddeev equation in the 3D momentum-helicity representation. Furthermore, this formalism can be applied straightforwardly to the investigation of heavy pentaquark systems, which can be considered as two-body (heavy meson, baryon) systems with meson-nucleon potentials; this work is underway.
Conflict of Interests
The author declares that there is no conflict of interests regarding the publication of this paper.
Appendix A: Fourier transformation of the regularized linear confining and Coulomb parts of the potential
The three-dimensional Fourier transformation of the potential V(r) is defined in the standard way, with $q = |\mathbf{p} - \mathbf{p}'|$. The Fourier transformation of the regularized linear confining and Coulomb parts of the quark-antiquark potential is obtained by keeping the potential fixed at its value at the cutoff r_c. Therefore, inserting the linear, $V(r) = \sigma r$, and Coulomb, $V(r) = f_c\alpha_s/r$, parts of the quark-antiquark potential into this definition and calculating the corresponding integrals analytically yields
$V(p,p') = \sigma\left[\delta(\mathbf{q})\,r_c + \frac{1}{2\pi^2 q^4}\Big(2\cos(q r_c) - 2 + q r_c \sin(q r_c)\Big)\right],$
$V(p,p') = f_c\alpha_s\left[\frac{\delta(\mathbf{q})}{r_c} + \frac{1}{2\pi^2 q^2}\Big(1 - \frac{\sin(q r_c)}{q r_c}\Big)\right]. \qquad (A4)$ | 2,510.2 | 2017-02-19T00:00:00.000 | [
"Physics"
] |
An Efficient Pilot Assignment Scheme for Addressing Pilot Contamination in Multicell Massive MIMO Systems
The reuse of the same pilot group across cells, adopted to address bandwidth limitations in a network, results in pilot contamination. This causes severe inter-cell interference at the targeted cell. Pilot contamination is associated with multicell massive multiple-input multiple-output (MIMO) systems and degrades system performance even when extra arrays of antennas are added to the network. In this paper, we propose an efficient pilot assignment (EPA) scheme to address this issue by maximizing the minimum uplink rate of the target cell's users. To achieve this, we exploit the large-scale characteristics of the fading channel to minimize the amount of outgoing inter-cell interference at the target cell. Simulation results show that the EPA scheme outperforms both the conventional and the smart pilot assignment (SPA) schemes by reducing the effect of inter-cell interference. These results show that the EPA scheme significantly improves system performance in terms of the achievable uplink rate and the cumulative distribution function (CDF) of both the signal-to-interference-plus-noise ratio (SINR) and the uplink rate.
Introduction
Equipping the base station (BS) with a large number of antennas (also known as massive multiple-input multiple-output (MIMO)) is considered one of the fundamental technologies leading to 5G [1]. This technology was introduced to meet the increasing demand for mobile data in 5G [2]. Although the use of massive MIMO systems increases spectral efficiency, enhances energy efficiency, and reduces the effect of small-scale fading [3][4][5][6][7], it invariably promotes pilot contamination. In massive MIMO, the time-division duplex (TDD) protocol is preferred over frequency-division duplex (FDD) [8,9], as the former allows channel estimation in one direction only (the uplink) and, thanks to the channel reciprocity property, avoids estimation in the other direction (the downlink). In other words, the use of TDD-based channel reciprocity minimizes the overhead signals used for channel estimation, which largely saves network bandwidth. Although channel estimation ensures high utilization of TDD massive MIMO via uplink transmission, the channel coherence blocks are restricted in size. Therefore, orthogonal pilot sequences cannot be allocated to all users across the cells. To overcome this problem, the orthogonal pilot sequences have to be reused across the cells. Although the pilot reuse approach is a remarkable way forward in addressing this problem, the channel estimate obtained in a given cell will be contaminated by pilots transmitted by users in other cells. Specifically, the inter-cell interference exacerbates the estimation error and also makes the channel estimates of two or more users sharing the same pilot sequence correlated at a given cell [10]. Thus, the performance of multicell massive MIMO systems deteriorates during uplink and downlink transmission. This issue is referred to as pilot contamination, and is depicted in Figure 1. To address the issue associated with pilot contamination, several research methods have been proposed to eliminate or relieve it. Among these methods, the pilot assignment technique is identified as a potential solution. The smart pilot assignment (SPA) method proposed in [11] focused on adjusting the matching between users and pilot sequences, but did not consider the inter-cell interference which causes the pilot contamination. In this paper, we propose an efficient pilot assignment mechanism to improve the performance of users subject to intense pilot contamination in multicell massive MIMO systems. We summarize our contributions below:
• We formulate the pilot assignment as an optimization problem and develop a heuristic algorithm, in order to maximize the minimum throughput while reducing the inter-cell interference that causes pilot contamination.
• We evaluate the performance of the proposed mechanism in terms of signal-to-interference-plus-noise ratio (SINR) and uplink rate with an extensive MATLAB simulation.
• We compare our work with SPA and other conventional schemes.
The rest of this paper is organized as follows. Related work is summarized in Section 2, the system model is described in Section 3, the pilot contamination phenomenon and the achievable uplink rate are illustrated in Section 4, the EPA scheme is explained in Section 5, the simulation results are presented in Section 6, and finally the paper is concluded in Section 7.
Notation: Throughout this paper, bold lower-case letters represent vectors and bold upper-case letters represent matrices. $\mathbf{I}_M$ denotes the identity matrix of dimensions M × M. The operators $(\cdot)^{-1}$, $(\cdot)^{T}$, and $(\cdot)^{H}$ denote inverse, transpose and conjugate transpose operations, respectively. The expectation operator is represented by $\mathbb{E}\{\cdot\}$.
Related Work
Various traditional pilot-assignment-based algorithms have been proposed for pilot decontamination [12,13]. A vertex graph-coloring-based pilot assignment was proposed in [12], where pilot sequences are allocated to users according to an inter-cell interference (ICI) graph. The evaluation of the ICI graph depends on both the angle-of-arrival (AoA) correlation and the distances between users. However, this scheme requires second-order channel information to construct the ICI graph. A deep-learning-based pilot allocation scheme (DL-PAS) was proposed in [13] to address the pilot contamination problem in massive MIMO systems. This algorithm learns the relationship between pilot assignment and users' locations. However, the DL algorithm requires a large amount of data and consequently takes a long time to process it.
The authors in [14,15] developed location-based pilot assignment approaches for pilot decontamination. A new expression for line-of-sight (LOS) interference was derived in [14] and used as the criterion for pilot allocation. Although the sum spectral efficiency (SE) improved, the pilot assignment process takes a long time to run, especially in large networks. The work in [15] characterizes the angular region of the targeted user and assigns pilots with the aim of making this region interference-free. This angular region is characterized by both the number of BS antennas and the location of the targeted user. However, the pilot assignment problem is formulated as a joint optimization problem, which introduces high computational complexity.
In [16,17], pilot allocation based on pilot reuse (with a reuse factor greater than 1) is also considered as a technique for eliminating pilot contamination. A systematically constructed pilot reuse method was proposed in [16]. In this approach, neighboring cells are allowed to use different sets of pilot sequences according to a tree division. To improve performance, it ensures a larger distance between cells that share similar pilot sets, and the depth of the tree is increased as the severity of pilot contamination increases. This approach is effective when the ratio of the channel coherence time to the number of users in each cell is relatively large. To improve the quality of service (QoS) of edge users, a soft pilot reuse (SPR) scheme was proposed in [17]. The channel quality of each user is first compared with a predetermined threshold before the pilot allocation procedure, but an increase in complexity was recorded due to the additional computational cost of finding the optimal threshold value.
Considering fairness among users as a means of mitigating pilot contamination, pilot allocation schemes were proposed in [18,19]. Specifically, to maximize the sum rate of the system while guaranteeing fairness among users, a pilot allocation scheme was proposed in [18]. An optimization problem is formulated based on a max-product criterion, and both a min-leakage algorithm and a user-exchange-based greedy (UEBG) pilot allocation algorithm were suggested to solve it. Although this scheme achieves almost the same performance as the optimal exhaustive search algorithm (ESA), it still suffers from high complexity. For the purpose of pilot contamination mitigation, the pilot assignment scheme of [19], based on a harmonic SINR utility function, was introduced to regulate fairness among users. However, the system complexity increases as the number of users and the network size grow (beyond two cells).
Based on the performance degradation of users, a pilot assignment scheme was proposed in [20]: the degradation is first evaluated for all users according to the value of the achievable uplink rate, and the optimal pilot sequences are then assigned, in a greedy way, to the users who suffer the highest degradation. Obviously, this scheme is not effective under bad channel conditions.
In [11,21], the pilot allocation approaches aim at enhancing the performance of users who suffer from bad SINR. The pilot allocation in [21] focused on maximizing the sum capacity of the whole system for pilot decontamination; pilot sequences are first assigned to the users with bad channel conditions. However, the complexity of the pilot assignment procedure increases with the network size. The SPA scheme was proposed in [11] to improve the performance of users with poor SINR: users with low channel quality are assigned the pilot sequences that incur low interference. However, the achievement of this scheme is limited, as it does not consider the inter-cell interference which causes the pilot contamination.
Some authors have combined two schemes to obtain improved performance, as shown in [22,23]. A joint pilot assignment scheme was proposed in [22], in which the time-shifted scheme [24] and the SPA scheme [11] are combined to mitigate the effect of pilot contamination. Inter-group interference is suppressed following the strategy of [24], whereas SPA is used to reduce intra-group interference. Although improved overall performance was recorded, the mutual interference between downlink data and uplink pilot signals cannot be eliminated despite the use of the SPA scheme. New pilot assignment schemes, greedy-based and swapping-based, were implemented together with a pilot contamination precoding (PCP) design for massive MIMO downlinks [23]. This combination offers a considerable improvement over random pilot assignment, but the PCP matrix must be changed whenever the pilot assignment information is updated.
By exploiting channel sparsity in wideband massive MIMO systems, pilot contamination can be removed with the help of the pilot assignment policy in [25]. The pilot assignment policy is designed to help identify the subspace of the desired channel. The difficulty in this approach lies in dealing with the subspace estimation, which can be realized over multiple frames after randomizing the pilot contamination.
Differing from the aforementioned works [11,20], we consider the source of inter-cell interference throughout pilot assignment, which is essentially the cause of the pilot contamination. In some other works [12][13][14][15], the availability of certain quantities (e.g., user location, AoA, or LOS interference) is needed for pilot assignment, and these are not always easy to estimate, while our approach requires only the large-scale fading coefficients, which can be tracked easily as they do not change frequently during the coherence interval. Besides, compared to previous works [17][18][19][21], our algorithm is not computationally intensive, and therefore it can be applied to large-scale networks.
System Model
In this section, we describe the system model under which the TDD massive MIMO systems are implemented. In this model, the uplink comprises L cells, where each cell contains a BS equipped with M antennas. Furthermore, in each cell's coverage area, K single-antenna users communicate simultaneously with their designated BS, assuming M ≫ K [2,5]. The propagation channel connecting the k-th user located in the j-th cell to the BS in the i-th cell is modeled as Rayleigh block fading [26], with channel vector $\mathbf{h}_{ijk} = \sqrt{\beta_{ijk}}\,\mathbf{g}_{ijk}$, where $\mathbf{g}_{ijk}$ and $\beta_{ijk}$ denote the small-scale fading vector and the large-scale fading coefficient, respectively. The small-scale fading vector has a complex Gaussian distribution with zero mean and unit variance, $\mathcal{CN}(\mathbf{0}, \mathbf{I}_M)$, while the large-scale fading coefficient accounts for the effect of both path loss and shadowing, and can be tracked easily as it changes slowly over the coherence interval $\tau_c = B_c T_c$ [27][28][29]. We use B_c and T_c to denote the coherence bandwidth and the coherence time, respectively. Figure 2 illustrates the coherence block for the TDD protocol. We also consider the large-scale fading coefficient to be equal for all antenna elements, assuming that the distance between user k and the BS is significantly larger than the distances between antenna elements.
Pilot Contamination and Achievable Uplink Rate
Since the size of the channel coherence blocks is limited, it is difficult to assign orthogonal pilot sequences to all users in order to prevent pilot contamination. Thus, it is necessary to reuse the pilot sequences in all cells to overcome this limitation [2]. The pilot sequences $\Phi = [\boldsymbol{\phi}_1, \boldsymbol{\phi}_2, \ldots, \boldsymbol{\phi}_K]^T \in \mathbb{C}^{K\times\tau_p}$, of length τ_p, are assumed mutually orthogonal, with $\Phi\Phi^H = \tau_p\mathbf{I}_K$. During the pilot phase, the pilot sequences are distributed randomly to all users. Thus, the received signal $\mathbf{U}_i^\phi \in \mathbb{C}^{M\times\tau_p}$ at the BS in the i-th cell can be written as
$\mathbf{U}_i^\phi = \sqrt{\rho_\phi}\sum_{k=1}^{K}\mathbf{h}_{iik}\,\boldsymbol{\phi}_k^T + \sqrt{\rho_\phi}\sum_{j\neq i}\sum_{k=1}^{K}\mathbf{h}_{ijk}\,\boldsymbol{\phi}_k^T + \mathbf{N}_i^\phi, \qquad (2)$
where ρ_φ denotes the pilot transmission power and $\mathbf{N}_i^\phi \in \mathbb{C}^{M\times\tau_p}$ denotes the additive white Gaussian noise (AWGN) matrix, assumed to have independent and identically distributed (i.i.d.) entries with zero mean and variance $\sigma_N^2$. The received signal $\mathbf{U}_i^\phi$ is called the observation, from which the BS in cell i can estimate the channel responses. The first term in (2) represents the received pilot signals from users in the serving cell, whereas the middle term represents the inter-cell interference from the neighboring cells, which causes the pilot contamination. Correspondingly, the received uplink data signal $\mathbf{u}_i^d \in \mathbb{C}^{M}$ at the BS in the i-th cell can be represented by
$\mathbf{u}_i^d = \sqrt{\rho_u}\sum_{j=1}^{L}\sum_{k=1}^{K}\mathbf{h}_{ijk}\,x_{jk}^u + \mathbf{n}_i^u,$
where $x_{jk}^u$ denotes the uplink symbol transmitted by user k located in the j-th cell, ρ_u denotes the power of the uplink transmitted symbol with $\mathbb{E}\{|x_{jk}^u|^2\} = 1$, and $\mathbf{n}_i^u$ denotes the AWGN vector with zero mean and variance $\sigma_n^2$. The minimum mean square error (MMSE) criterion is exploited for the channel estimation [9]. The MMSE channel estimate $\hat{\mathbf{h}}_{ijk} \in \mathbb{C}^{M\times 1}$, based on the observation $\mathbf{U}_i^\phi$ in (2), is given in [10] in terms of the received processed signal, the inverse of the normalized correlation matrix $\Psi_{ijk}$, and the spatial correlation matrix $\mathbf{R}_{ijk}$ of the channel to be estimated. The estimated channel is then used to detect the uplink data symbols and to precode the downlink data. Herein, we consider both maximum ratio combining (MRC) and zero forcing (ZF) as linear detectors at the BS [30]. The received detected signal is evaluated by multiplying the received uplink data signal $\mathbf{u}_i^d$ by the decoding vector $(\mathbf{a}_{ik}^i)^H$, which is the k-th column of the matrix $\mathbf{A}_i$, where $\mathbf{h}_{ik}^i$ is the k-th column of the matrix $\mathbf{H}_i^i$. Therefore, the detected symbol of user k at the BS located in cell i can be expressed as
$\hat{x}_{ik}^u = (\mathbf{a}_{ik}^i)^H\,\mathbf{u}_i^d. \qquad (9)$
When expanded, the first term of (9) represents the desired signal, the second the intra-cell interference, the third the effect of pilot contamination (inter-cell interference), and the last the uncorrelated noise.
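The explicit expressions for the MMSE estimate and the two detectors did not survive extraction. As a hedged reconstruction (standard forms consistent with the description above; the paper's exact normalization may differ), one may write

```latex
% Standard MMSE estimate and linear detectors (a reconstruction, not verbatim):
\hat{\mathbf h}_{ijk} = \sqrt{\rho_\phi}\,\tau_p\,
    \mathbf R_{ijk}\,\boldsymbol\Psi_{ijk}\,
    \mathbf U^{\phi}_{i}\,\boldsymbol\phi_k^{*},
\qquad
\boldsymbol\Psi_{ijk} = \Big(\rho_\phi\,\tau_p \sum_{l=1}^{L}\mathbf R_{ilk}
    + \sigma_N^2\,\mathbf I_M\Big)^{-1},
\qquad
\mathbf A_i^{\mathrm{MRC}} = \hat{\mathbf H}_i^i,
\quad
\mathbf A_i^{\mathrm{ZF}} = \hat{\mathbf H}_i^i
    \big[(\hat{\mathbf H}_i^i)^H \hat{\mathbf H}_i^i\big]^{-1}.
```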
Consequently, the average SINR of the k-th user in the target cell i can be evaluated from (9) as in (10), where $\upsilon_{ik}^i$ denotes the intra-cell interference plus uncorrelated noise, whose effect becomes negligible as the number of antennas grows (M → ∞) [5]. In this limit, the uplink SINR is described by the large-scale fading coefficients $\beta_{ijk}$ alone (12): the effects of small-scale fading and thermal noise are averaged out as the number of antennas increases [5]. The ergodic achievable uplink rate of user k then follows from [26] (13), where $R_{ik}^u$ is calculated in bits per channel use and τ_u refers to the uplink duration. From (12), it is obvious that the average uplink rate of multicell massive MIMO systems is limited by pilot contamination and cannot be boosted by increasing either the number of serving antennas or the powers ρ_u and ρ_p.
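The large-M SINR expression (12) and the rate formula (13) referenced above are reconstructed here from the standard pilot-contamination analysis (the classic asymptotic limit); they are consistent with the surrounding text but not copied verbatim from the paper:

```latex
% Hedged reconstruction of the asymptotic SINR and ergodic uplink rate:
\mathrm{SINR}_{ik} \;\xrightarrow[M\to\infty]{}\;
  \frac{\beta_{iik}^{2}}{\sum_{j \neq i} \beta_{ijk}^{2}},
\qquad
R^{u}_{ik} \;=\; \frac{\tau_u}{\tau_c}\,
  \mathbb{E}\!\left\{ \log_2\!\left( 1 + \mathrm{SINR}_{ik} \right) \right\}.
```

The first expression makes the limitation explicit: only the large-scale cross gains of the pilot-sharing users survive, so neither more antennas nor more power removes the contamination.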
Proposed Scheme
In this section, an efficient heuristic algorithm is developed for addressing the problem associated with multicell massive MIMO. To do this, the assignment and reuse of the pilot group across the cells of the network is formulated as an optimization problem.
Problem Formulation
Formally, we formulate the pilot assignment as a max-min optimization problem, referred to below as (13): over all possible assignments of the K pilot sequences in each cell, maximize the minimum uplink SINR of the target cell's users. This formulation is based on the method proposed in [11], in which the number of antennas is assumed to be very large, so that the SINR is determined by the large-scale fading coefficients alone. To address problems related to pilot contamination, this study concentrates on assigning the pilot sequences for a specific cell in multicell massive MIMO systems. In the target cell, the number of possible assignments is given by the number of permutations of the K users, which is usually very large. In contrast, the conventional scheme assigns the pilot sequences $\Phi = [\boldsymbol{\phi}_1, \boldsymbol{\phi}_2, \ldots, \boldsymbol{\phi}_K]^T$ randomly to the K users.
The performance of multicell massive MIMO systems is severely degraded by strong inter-cell interference from the neighbor cells, and the degradation is exacerbated when the channel quality of the users in the target cell is poor. Specifically, in the SPA scheme, the set of users with the worst channel quality is assigned the pilot sequences with the lowest inter-cell interference. Although these pilot sequences have the lowest interference, they are still high-interference pilot sequences when used by users with poor channel quality. Therefore, the interference associated with such pilot sequences must be minimized.
Proposed Solution
To achieve minimal outgoing inter-cell interference among neighbor cells for the target cell, we ensure a weak channel cross gain of the interfering users relative to the desired users. The large-scale fading coefficients are used to measure the effect of inter-cell interference at the target BS. These coefficients are suitable for this purpose because they change slowly relative to the coherence interval τ_c, and every user's measurement is sent to its corresponding BS. The conditions required for obtaining the large-scale fading coefficients can be met in long term evolution-advanced (LTE-A) systems: the corresponding BSs hold the channel information of the available BSs, and each user keeps tracking these BSs until a reliable BS is identified for a suitable handover. To enhance cooperation among the BSs, we assume coordinated multipoint (CoMP) operation. Furthermore, a mobility management entity (MME) is connected to the BSs by the S1 interface and has large computational capacity. As a result, this unit can collect the large-scale fading coefficients from the connected BSs [31,32].
To mitigate the setback suffered by users due to poor channel quality or high interference, the SINR is optimized. This is done by assigning the pilot sequences associated with low interference to the users having poor channel quality.
In order to achieve this, we propose a heuristic algorithm based on SPA to solve the optimization problem in (13). Before illustrating the algorithm, we need to define a set of parameters η_jk which characterizes the squared cross gain of the interfering users from the neighboring cells:

η_jk = (β_{jk}^i)^2,   j ≠ i, k = 1, 2, ..., K.

The interference produced by the users who share the same pilot sequence φ_k can then be evaluated at the target cell as

ξ_k = Σ_{j≠i} η_jk.   (14)

In addition, the squared channel quality of the target cell's users is characterized by (β_{ik}^i)^2, k = 1, 2, ..., K. With these quantities, the optimization problem in (13) can be re-written in the form of problem (17) below. The proposed algorithm, EPA, is summarized in Algorithm 1 to solve this optimization problem.
Algorithm 1 Efficient Pilot Assignment (EPA).
4: Assign pilot sequences Φ to all users in all cells j, ∀j = 1, 2, ..., L
5: for each neighbor cell j ≠ i do
       for all users k = 1, 2, ..., K in cell j do
6:         Evaluate η_jk = (β_{jk}^i)^2
       end for
7:     Classify the users into the interference levels V_1, ..., V_K
   end for
8: Assign the pilot sequence φ_k to the users in V_k
9: Find the sum ξ_k of the interference associated with each pilot sequence φ_k

The available large-scale fading coefficients are exploited to measure the interference from the neighbor cells. In the above algorithm, the users in the neighbor cells are classified into different levels according to the value of the squared cross gain η_jk, which indicates the strength of the interference at the target cell i. The users that cause the highest interference (i.e., that have the largest η_jk) are classified as level-V_1 users; this level contains the worst interfering user from each neighbor cell. The second level V_2 contains the users that cause less interference than those in V_1. This classification continues until the last level V_K, which contains the users that produce the smallest interference. In other words, the k-th interference level V_k collects, from each neighbor cell, the user with the k-th largest squared cross gain. The amount of interference produced by the users in each level is given by (14). The interfering users in each level are then assigned the same pilot sequence; for instance, the users in V_1 and V_K are assigned the pilot sequences φ_1 and φ_K, respectively. As a result, the pilot sequence φ_1 suffers from the highest interference, whereas φ_K is the one with the lowest interference; the remaining pilot sequences carry intermediate levels of interference. After minimizing the inter-cell interference at the serving BS, the second step is to assign pilot sequences to its own users, which is achieved by solving problem (17): matching the squared channel qualities (β_{ik}^i)^2 of the target cell's users against the minimized outgoing interference ξ_k caused by the users sharing the same pilot sequence in level V_k. The optimization problem in (17) can be solved with the help of the SPA algorithm. In this algorithm, users that suffer from bad channel quality are exempted from pilot sequences that would cause them severe interference; thus, the set of users with the worst channel quality is assigned the pilot sequences with the lowest inter-cell interference. For the remaining cells, the process continues in a sequential way, excluding the cells that have already been handled together with the target cell.
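As an illustration of steps 5-9 together with the SPA step, here is a minimal NumPy sketch; the function name epa_pilot_assignment, the array layout beta[i, j, k], and the sorting-based tie-breaking are assumptions of this sketch, not the paper's implementation.

    import numpy as np

    def epa_pilot_assignment(beta, i):
        # beta[i, j, k]: large-scale fading from user k in cell j to BS i;
        # cell i is the target cell. Returns a pilot index per (cell, user).
        L, _, K = beta.shape
        assignment = np.zeros((L, K), dtype=int)

        # Steps 5-8: rank the users of every neighbor cell by squared cross
        # gain eta_jk; the users with the k-th largest gain form level V_k
        # and all share pilot phi_k.
        for j in range(L):
            if j == i:
                continue
            eta = beta[i, j, :] ** 2
            order = np.argsort(-eta)              # strongest interferer first
            assignment[j, order] = np.arange(K)

        # Step 9: outgoing interference xi_k accumulated at the target BS
        # by the users holding pilot k.
        xi = np.zeros(K)
        for j in range(L):
            if j == i:
                continue
            for k in range(K):
                xi[assignment[j, k]] += beta[i, j, k] ** 2

        # SPA step: target-cell users with the worst channel quality receive
        # the pilots carrying the least outgoing interference.
        users_by_quality = np.argsort(beta[i, i, :] ** 2)   # worst first
        pilots_by_interf = np.argsort(xi)                   # least interference first
        assignment[i, users_by_quality] = pilots_by_interf
        return assignment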
Furthermore, our algorithm is not computationally intensive in the sense that it ultimately relies on sorting within cells, so the time complexity it incurs is O(L K log K); it therefore runs faster than recent schemes. For example, EPA has lower computational complexity than the works in [19,21], which incur O(L K^3) and O(L^2 K log K), respectively. In addition, the scheme in [17] incurs O(M(K_e^2 + K_CS^2)), where K_e denotes the number of edge users in the network and K_CS represents the number of users in the largest cell; hence [17] is considerably heavier than EPA. The SPA scheme [11], being fundamentally limited to optimizing only a target cell, unsurprisingly incurs only O(K log K).
Simulation Results
The base code implemented is obtained from [26], and Monte Carlo simulation is used to evaluate the performance of the EPA scheme. A typical hexagonal cellular network made up of L cells is considered. Each of these cells comprises a BS equipped with M antennas and K single-antenna users under its coverage area [2,5]. A center cell surrounded by all other cells is considered the target cell. The system parameters are summarized in Table 1. The parameter β_{jk}^i is modeled in decibels as [10]

β_{jk}^i [dB] = Υ − 10 α log_10(d_{jk}^i / 1 km) + F_{jk}^i,

where d_{jk}^i (km) is the distance between the k-th user in the j-th cell and the BS in the i-th cell, α is the path-loss exponent, Υ determines the median channel gain at the reference distance of 1 km and can be calculated according to many propagation models [33], and F_{jk}^i ∼ N(0, σ_sf^2) is the shadow fading, which creates log-normal random variations around the nominal value Υ − 10 α log_10(d_{jk}^i / 1 km). We evaluate the SPA scheme [11] and the conventional schemes [2,5] against the EPA scheme.

Figure 3 depicts the average uplink rate per user of the EPA, SPA, and conventional schemes against the number of BS antennas using ZF as the linear detector. The average uplink rate of the EPA scheme clearly outperforms the other schemes. This improvement can be attributed to the pilot assignment policy implemented in the neighbor cells, which ensures a significant reduction of the inter-cell interference at the serving BS and thereby leads to a better throughput. Because the pilot assignment in the target cell is executed according to the users' channel quality, the SPA scheme achieves better performance than the conventional scheme. However, the performance of both the SPA and conventional schemes changes only slightly once the number of antennas exceeds a certain point (e.g., greater than 150).

Figure 4 shows the impact of the EPA scheme when using MRC as the linear detector. It can be clearly observed that the average uplink rate per user (bits/channel use) is substantially enhanced by the EPA scheme as the number of antennas increases. The superiority of the EPA scheme over the other schemes arises from the minimization of the inter-cell interference coming from the neighbor cells, which is achieved by letting the users in each interference level V_k share the same pilot sequence. Consequently, the EPA scheme shows lower interference from the neighbor cells compared to the other schemes.

Figure 5 depicts the performance of the EPA scheme compared with both the conventional and SPA schemes in terms of the cumulative distribution function (CDF) of the average SINR. When the number of BS antennas is 64 with the ZF detector, the probabilities of the average uplink SINR being less than −10 dB for the conventional, SPA, and proposed EPA schemes are almost 80%, 26.25%, and 10%, respectively. The improvement is achieved because the interference associated with the pilot sequences has only a slight effect on the channel quality of the users in the target cell, which effectively increases the SINR of the system.
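Returning to the channel-gain model above, the following hedged sketch draws one realization of β; the numerical defaults for Υ, α, and σ_sf are illustrative values borrowed from common urban macro models, not necessarily those of Table 1.

    import numpy as np

    rng = np.random.default_rng(0)

    def large_scale_fading_db(d_km, upsilon_db=-148.1, alpha=3.76, sigma_sf_db=10.0):
        # beta in dB: median gain at the 1 km reference distance, minus the
        # distance-dependent path loss, plus log-normal shadow fading.
        shadowing = rng.normal(0.0, sigma_sf_db)
        return upsilon_db - 10.0 * alpha * np.log10(d_km) + shadowing

    beta_linear = 10 ** (large_scale_fading_db(0.25) / 10.0)   # gain at 250 m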
Figure 6 depicts the CDF of the minimum SINR when M is 64. It is evident that the minimum SINR of the EPA scheme is significantly improved compared with the SPA and conventional schemes. For example, the probability of the minimum SINR being less than −20 dB is approximately 16.25% for the EPA scheme, while it is about 34.6% and 79.6% for the SPA and conventional schemes, respectively. The reason behind this improvement is the assignment of the lowest-interference pilots of the neighbor cells to the users with bad channel quality in the target cell; in consequence, the performance of these users is improved due to the reduction of their interference. Figures 7 and 8 depict the CDF of the average and minimum SINR, respectively, using the MRC detector when M is 64. As observed from Figures 7 and 8, the EPA scheme outperforms the SPA and conventional schemes. As shown in Figure 7, the EPA scheme increases the average SINR by 1.8 dB over the SPA scheme, whereas the gain reaches 4.69 dB for the minimum SINR, as illustrated in Figure 8. Across Figures 5-8, the improvement is always more pronounced for the minimum SINR; in other words, the performance of edge users is significantly enhanced. This is due to the fact that the inter-cell interference has been greatly reduced at the target cell while the users with poor channel quality are assigned suitable pilot sequences in order to maximize their SINR. Moreover, the results obtained using the ZF and MRC linear detectors are approximately comparable when run with the same parameter settings. This is because the inter-cell interference is greatly reduced by the EPA scheme, which runs before the signal detection process.
Using the ZF detector, the performance of the EPA scheme has been examined in terms of the CDF of the average uplink rate when M is 64, as shown in Figure 9. It can be seen that the CDF of the conventional scheme is highly influenced by pilot contamination: assigning the pilots randomly leads to the worst performance compared to the SPA and EPA schemes. The EPA scheme, in turn, outperforms the SPA and conventional schemes, since the users who cause the highest interference are paired, through the shared pilot, with users having good channel quality; as a result, these strong interferers are excluded from sharing the pilots of users with bad channel quality. The evaluation of the CDF of the minimum uplink rate is depicted in Figure 10. It is clear that the EPA scheme performs better than the other schemes; for example, the minimum uplink rate of the EPA scheme is double that of the SPA scheme. This improvement is achieved because the interference associated with the pilot sequences allocated to users with bad channel quality is effectively reduced by the EPA scheme.
Figures 11 and 12 represent the CDF of the average and minimum uplink rate, respectively, when MRC is utilized and M is 64. The EPA scheme achieves the highest performance compared with the other schemes, especially in the minimum uplink rate. Specifically, the achieved gain in minimum uplink rate is a factor of two, while it is a factor of 1.2 in average uplink rate, in comparison with SPA. The reason for this improvement in the minimum uplink rate is the priority given to the users having the worst channel quality during the pilot assignment process. To verify the effectiveness of the EPA scheme, the average uplink rate against the number of antennas has been evaluated in Figure 13 with different parameters, considering the ZF detector. These parameters, which are shown in Table 1, increase the interference severity at the target cell. The average uplink rate of the EPA scheme remains higher than that of the other schemes, despite the intensity of the interference.
Figure 1. The effect of pilot contamination in multicell massive MIMO systems at a cell a, where the solid line represents the direct gain and the dotted line represents the inter-cell interference.
Figure 3. The average uplink rate per user with zero forcing (ZF) for different numbers of antennas.
Figure 4. The average uplink rate per user with maximum ratio combining (MRC) for different numbers of antennas.
Figure 5. The cumulative distribution function (CDF) of the average signal-to-interference-plus-noise ratio (SINR) when M = 64 using ZF.
Figure 6. The CDF of the minimum SINR when M = 64 using ZF.
Figure 7. The CDF of the average SINR when M = 64 using MRC.
Figure 8. The CDF of the minimum SINR when M = 64 using MRC.
Figure 9. The CDF of the average uplink rate when M = 64 using ZF.
Figure 10. The CDF of the minimum uplink rate when M = 64 using ZF.
Figure 11. The CDF of the average uplink rate when M = 64 using MRC.
Figure 12. The CDF of the minimum uplink rate when M = 64 using MRC.
Figure 13. The average uplink rate per user with ZF for different numbers of antennas, K = 20, and R = 300 m.
"Engineering",
"Computer Science"
] |
The SAMME.C2 algorithm for severely imbalanced multi-class classification
Classification predictive modeling involves the accurate assignment of observations in a dataset to target classes or categories. There is a growing number of real-world classification problems with severely imbalanced class distributions. In these problems, minority classes have far fewer observations to learn from than majority classes. Despite this sparsity, a minority class is often considered the more interesting class, yet developing a scientific learning algorithm suitable for such observations presents countless challenges. In this article, we suggest a novel multi-class classification algorithm specialized to handle severely imbalanced classes, based on the method we refer to as SAMME.C2. It blends the flexible mechanics of the boosting techniques from the SAMME algorithm, a multi-class classifier, and the Ada.C2 algorithm, a cost-sensitive binary classifier designed to address high class imbalance. Not only do we provide the resulting algorithm, but we also establish a scientific and statistical formulation of our proposed SAMME.C2 algorithm. Through numerical experiments examining various degrees of classifier difficulty, we demonstrate the consistent superior performance of our proposed model.
Introduction
In machine learning, classification involves the accurate assignment of a target class or label to input observations. When there are only two labels, it is called binary classification; when there are more than two labels, it is often referred to as multi-class classification. Classification algorithms generally fall into three categories (see Hastie et al. (2009)):
• Linear classifiers: This type of algorithm separates the input observations using a line or hyperplane based on a linear combination of the input features. Examples include logistic regression, probit regression, and linear discriminant analysis.
• Nonlinear classifiers: This type of algorithm separates the input observations based on nonlinear functions. Examples include decision trees, k-nearest neighbors (KNN), support vector machines (SVM), and neural networks.
• Ensemble methods: This type of algorithm combines the predictions produced from multiple models. Examples include random forests, stochastic gradient boosting (e.g., XGBoost), and adaptive boosting (AdaBoost).
In imbalanced classification, the distribution of observations across classes is biased or skewed. In this case, minority classes have much fewer observations to learn from than majority classes. In spite of this sparsity, the minority class is often considered the more interesting class, yet developing a scientific learning algorithm suitable for the observations presents countless challenges. Several studies have dealt with imbalanced data, yet mostly in the context of binary problems. The most common and direct approach is to use the algorithms listed above and handle the class imbalance at the data level. In this case, the class distribution of the input observations is rebalanced by oversampling (or undersampling) the underrepresented (or overrepresented) classes. One popular approach is the oversampling of underrepresented classes based on SMOTE (Synthetic Minority Oversampling Technique), a technique developed by Chawla et al. (2002). It is worth noting that generating synthetic observations to rebalance class distributions, especially in multi-class classification, has the disadvantage of increasing class overlap with unnecessary additional noise.
One popular class of algorithms believed to be among the most powerful techniques is boosting, which trains a sequence of weak models into a strong learner in order to improve predictive power. A specific boosting technique primarily developed for classification is AdaBoost, a class of so-called adaptive boosting algorithms. AdaBoost.M1, which combines several weak classifiers to produce a strong classifier, is the first practical boosting algorithm, introduced by Freund and Schapire (1997). AdaBoost.M1 is an iterative process that starts with a distribution of equal observation weights. At each iteration, the process fits one weak classifier and subsequently adjusts the observation weights based on the idea that more weight is given to input observations that have been misclassified, allowing for increased learning. See Algorithm 1 in Appendix A.
AdaBoost.M1 has been extended to handle multi-class classification problems. One such extension is the so-called AdaBoost.M2, developed by Freund and Schapire (1997), which is based on the optimization of a pseudo-loss function suitable for handling multi-class problems. Another extension is AdaBoost.MH, developed by Schapire and Singer (1999), which is based on the optimization of the Hamming loss function. Both these extensions solve multi-class classification problems by reducing them to several different binary problems; such procedures can be slow and inefficient. A more popular multi-class AdaBoost extension is the algorithm called SAMME (Stagewise Additive Modeling using a Multi-class Exponential Loss Function) proposed by Zhu et al. (2009), which avoids these computational inefficiencies by not resorting to multiple binary problems. See So et al. (2021) for details of the iteration process of this algorithm. According to Friedman et al. (2000) and Hastie et al. (2009), the SAMME algorithm is equivalent to an additive model with a minimization of a multi-class exponential loss function and belongs to the traditional statistical family of forward stagewise additive models. Additional variations of these AdaBoost algorithms appear in Ferreira and Figueiredo (2012); a recent work of Tanha et al. (2020) provides a comprehensive survey.
In order to further improve prediction under imbalanced classification, cost-sensitive learning algorithms provide a necessary additional layer of complexity that takes costs into consideration. The work of Pazzani et al. (1994) was the first to introduce cost-sensitive algorithms that minimize misclassification costs in classification problems. The cost values, estimated as hyperparameters, are additional inputs to the learning procedure and are generally used to reduce misclassification costs, attaching a penalty to predictions that lead to significant errors. Within adaptive boosting algorithms, these costs are used to modify the updating of the observation weights at each iteration. For binary classification, Ada.C2 is the best-known and most attractive cost-sensitive variant of AdaBoost (Sun et al. (2007)). For details of this algorithm, please see So et al. (2021).
In this article, we suggest a novel multi-class classification algorithm, which we refer to as SAMME.C2, especially designed to handle imbalanced classes. This algorithm is inspired by combining the advantages of the two algorithms described earlier: (1) SAMME, one of the AdaBoost algorithms for multi-class classification, which does not decompose the classification task into multiple binary problems and thereby avoids computational inefficiencies, and (2) the cost-sensitive learning employed in Ada.C2. Zhu et al. (2009) showed that SAMME is equivalent to forward stagewise additive modeling with a minimization of a multi-class exponential loss function and proved that it yields a Bayes classifier. These mathematical proofs are important statistical justifications that the resulting classifiers are optimal. However, the training objective of the SAMME algorithm is to reduce test error rates, which works quite well when classes are generally balanced. When classes are severely imbalanced, the SAMME algorithm places more observation weight on classifying majority classes accurately, because this contributes more to decreasing test error, at a huge sacrifice in the ability to accurately classify minority classes. This leads us to embrace the idea of adding the attributes of cost-sensitive learning techniques to this algorithm. When cost-sensitive learning is added to SAMME, SAMME.C2 demonstrates superior control of these peculiar issues attributable to class imbalance. This article extends the mathematical proofs to show that, with the addition of cost values, SAMME.C2 retains the same statistical foundations as SAMME.
The practical importance of multi-class classification tasks, especially with severely imbalanced classes, extends to multiple disciplines. Various ad-hoc algorithms, some of which are described above, have been employed. The works of Liu et al. (2017), Yuan et al. (2018), Jeong et al. (2020), and Mahmudah et al. (2021) address real-life biomedical applications of such classification tasks in the detection of disease. Spam detection is widely studied in computer engineering; see Mohammad (2020), Talpur and O'Sullivan (2020), and Dewi et al. (2017). The study conducted by Kim et al. (2016) applies multi-class classification with cost-sensitive learning mechanisms to detect financial misstatements associated with fraud intention in finance. In operations research, Han et al. (2019) propose a fault diagnosis model for planetary gear carrier packs as a detection tool for manufacturing faults. Finally, in insurance, So et al. (2021) examine the frequency of accidents as a multi-class classification problem with highly imbalanced classes, using observations of insured drivers with additional telematics information about driving behavior through a usage-based insurance policy.
The remainder of the paper is organized as follows. Section 2 introduces the details of the new SAMME.C2 algorithm, which is largely based on the integration of SAMME and Ada.C2. Section 3 presents the mathematical proofs that SAMME.C2 follows a forward stagewise additive model and is an optimal Bayes classifier. To demonstrate the algorithmic superiority of SAMME.C2, Section 4 presents numerical experiment results based on simulated datasets; to show the many varied applications of our work, this section additionally lists some practical research on multi-class classification. Section 5 concludes the paper.
2 The SAMME.C2 algorithm

For our purpose, let us consider a set of N input observations denoted by (x_i, y_i) for i = 1, . . . , N, where x_i is a set of feature variables and y_i ∈ Y = {1, 2, . . . , K} is the target classification variable belonging to one of K classes. In the binary case, K = 2. An important input variable is the cost value, which we denote by C(y_i) to emphasize that it is a function of the target variable, pre-determined by the hyperparameter optimization technique described below. SAMME.C2 combines the benefits of boosting and cost-sensitive algorithms for handling class imbalances in multi-class classification problems. Given the input data (x_i, y_i, C(y_i)), the algorithm is an iterative process of fitting weak classifiers, denoted by h_t(x_i) at iteration t, and the process stops at time T. The stopping time T can be a tuned hyperparameter. At iteration t = 1, we set equal observation weights D_1(i) = 1/N. At each subsequent iteration t, we train a weak classifier using the distribution D_t. Any weak classifier can be used, but for our purpose we use the simplest weak classifiers, decision stumps. We update the distribution of the observation weights using

D_{t+1}(i) ∝ C(y_i) D_t(i) exp(α_t 1{h_t(x_i) ≠ y_i}),

which depends on the error rate of the t-th weak classifier given by

ε_t = Σ_{i: h_t(x_i) ≠ y_i} D_t(i),

and the weight of the t-th weak classifier given by

α_t = log((1 − ε_t)/ε_t) + log(K − 1).

The final classifier is then determined at the final iteration T using

H(x) = argmax_k Σ_{t=1}^T α_t 1{h_t(x) = k}.

Details of the algorithm are given in Algorithm 4 in the appendix.
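A minimal Python sketch of the loop just described follows, assuming class labels 0, . . . , K − 1 and a cost array indexed by class; it illustrates the mechanics under these assumptions and is not the exact Algorithm 4.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def samme_c2(X, y, cost, T=200, K=3):
        # X: features, y: integer labels in {0, ..., K-1},
        # cost: array of length K with the cost value C(y) per class.
        N = len(y)
        D = np.full(N, 1.0 / N)
        stumps, alphas = [], []
        for _ in range(T):
            h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=D)
            miss = h.predict(X) != y
            eps = max(D[miss].sum(), 1e-12)
            if eps >= 1.0 - 1.0 / K:      # weak learner no better than guessing
                break
            alpha = np.log((1.0 - eps) / eps) + np.log(K - 1.0)
            # Cost-sensitive reweighting: misclassified observations from
            # classes with large C(y) receive the biggest boost.
            D = cost[y] * D * np.exp(alpha * miss)
            D /= D.sum()
            stumps.append(h)
            alphas.append(alpha)

        def predict(Xnew):
            votes = np.zeros((len(Xnew), K))
            for h, a in zip(stumps, alphas):
                votes[np.arange(len(Xnew)), h.predict(Xnew)] += a
            return votes.argmax(axis=1)
        return predict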
Comparison with Ada.C2 and SAMME
The iteration process for all three algorithms (Ada.C2, SAMME, and SAMME.C2) is exactly the same. However, the primary differences lie in the computation of the error rate and the weight of the t-th classifier, as well as in the updating of the distribution of the observation weights.
For SAMME and SAMME.C2, the computation of the error rate is exactly the same, even though the SAMME algorithm does not have cost values. For Ada.C2, unlike SAMME.C2, the cost values enter the error rate of the t-th classifier through

ε_t = Σ_{i: h_t(x_i) ≠ y_i} C(y_i) D_t(i).

For SAMME and SAMME.C2, the computation of the weight of the t-th classifier is also exactly the same, even though the SAMME algorithm does not have cost values. For Ada.C2, the weight of the t-th classifier is given by

α_t = (1/2) log((1 − ε_t)/ε_t).

For the misclassified training samples to be properly boosted, the classification error at each iteration should be less than 1/2; otherwise α_t, which is a function of the classification error, will be negative and the observation weights will be updated in the wrong direction, in which case the classification error can no longer be improved after that iteration. In the binary case, as in Ada.C2, this only requires that each weak learner perform a little better than random guessing. However, when K > 2, the random guessing accuracy rate is 1/K, which is less than 1/2. Hence, multi-class problems need a much more accurate weak learner than binary problems, and if the weak learner is not chosen and trained accurately enough, the algorithm may fail. Zhu et al. (2009) pointed this out and suggested the SAMME algorithm, which directly extends AdaBoost.M1 to the multi-class case by adding one term, log(K − 1), to the updating equation of α_t at each iteration t.
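The effect of this extra term can be checked numerically; in the small snippet below, eps_random and the alpha names are local illustrations only.

    import numpy as np

    K = 3
    eps_random = 1 - 1 / K                 # error rate of uniform random guessing
    alpha_m1 = np.log((1 - eps_random) / eps_random)   # AdaBoost.M1 weight: negative
    alpha_samme = alpha_m1 + np.log(K - 1)             # SAMME weight: exactly zero
    # Any weak learner with eps < 1 - 1/K therefore receives a strictly
    # positive alpha under SAMME, while AdaBoost.M1 would already fail
    # once eps exceeds 1/2.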
The updating of the distribution of the observation weights for the subsequent iteration is exactly the same for the Ada.C2 and SAMME.C2 algorithms; this is not at all surprising since both algorithms consider cost values. For the SAMME algorithm, which does not have cost values, the distribution of the observation weights for the subsequent iteration is given by

D_{t+1}(i) ∝ D_t(i) exp(α_t 1{h_t(x_i) ≠ y_i}).

The updating principle is based on how the algorithm correctly classifies (or misclassifies) majority and minority classes. For the SAMME algorithm without cost values, there is an even redistribution upon correct classification (or misclassification) regardless of whether an observation belongs to a majority or minority class. For Ada.C2 and SAMME.C2, with the addition of cost values, the redistribution becomes uneven, assigning heavier weights to observations that belong to minority classes. This leads us to conclude that, after a sufficient number of iterations, weak classifiers under cost-sensitive learning mechanisms are trained with a heavy emphasis on misclassified observations in the minority class. See Figure 1 of So et al. (2021).
For a graphical display of the iteration process with emphasis on these differences, please refer to Figure 1. It can be noted that SAMME is a special case of SAMME.C2 obtained by setting all the cost values to 1, that is, C(y_i) = 1 for all y_i ∈ Y = {1, 2, . . . , K} and i = 1, 2, . . . , N.

Figure 1: Three AdaBoost algorithms: SAMME, Ada.C2, and SAMME.C2.
The cost optimization
The critical work involved in implementing SAMME.C2 is the process of determining the cost value given to each class. From the perspective of SAMME.C2, because cost values can be regarded as hyperparameters, this process amounts to optimizing or tuning a hyperparameter of a learning algorithm. Various hyperparameter optimization methods may be used to optimize the cost values. Frequently used strategies include grid search, random search (Bergstra and Bengio (2012)), and sequential model-based optimization (Bergstra et al. (2011)). The simplest and most widely used are grid search and random search; however, since the next trial set of hyperparameters is not chosen based on previous results, they are time-consuming. One of the most powerful strategies is sequential model-based optimization, sometimes referred to as Bayesian optimization, in which the subsequent set of hyperparameters is determined based on the results of the previously evaluated sets. Bergstra et al. (2011) and Snoek et al. (2012) showed that sequential model-based optimization outperforms both grid and random searches; however, using it requires an advanced level of statistical knowledge. For our purpose with SAMME.C2, we employ the Genetic Algorithm (GA), which is simple, easily understandable, and at the same time computationally efficient. Developed by Holland (1975) and described in Mühlenbein (1997), GA is a kind of random search technique, but its primary difference from general random searches is that the subsequent trial set of hyperparameters is decided based on the results of previously evaluated sets, just like in sequential model-based optimization.
In this algorithm, we first create a population set consisting of M arbitrary cost vectors. Each cost vector has K elements for a K-class problem. We then run SAMME.C2 and perform an evaluation step to get the performance metric corresponding to each cost vector. Here, the performance metric serves as the objective function. The population is then evolved through the three steps listed below.
• In the selection step, two cost vectors are chosen from the M vectors, employing the "choice by roulette" method typically used as an operator in GA, so that cost vectors with a larger performance metric are selected with higher probability.
• In the crossover step, we combine the two selected cost vectors into a single vector using the arithmetic average.
• In the mutation step, we pick a random number within a tiny interval that is used to adjust the elements in the cost vector.
Repeating these selection, crossover, and mutation steps, we produce a new population of M cost vectors; the procedure is iteratively repeated P times to generate the population that yields the optimal cost vectors.
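A hedged sketch of this tuning loop is given below; the function name tune_costs_ga and the population handling are assumptions of the sketch, and the bounds follow the interval reported in the experimental section.

    import numpy as np

    rng = np.random.default_rng(1)

    def tune_costs_ga(objective, K, M=10, P=8, low=0.95, high=0.999):
        # objective(c): evaluates the MAvG of SAMME.C2 trained with cost
        # vector c; assumed nonnegative (MAvG lies in [0, 1]).
        pop = rng.uniform(low, high, size=(M, K))
        pop[:, -1] = 0.999                        # most-minority class fixed
        for _ in range(P):
            fitness = np.array([objective(c) for c in pop])
            probs = fitness / fitness.sum()       # choice by roulette
            new_pop = []
            for _ in range(M):
                a, b = pop[rng.choice(M, size=2, p=probs)]
                child = (a + b) / 2.0             # arithmetic-average crossover
                child += rng.uniform(-0.001, 0.001, size=K)   # tiny mutation
                child = np.clip(child, low, high)
                child[-1] = 0.999
                new_pop.append(child)
            pop = np.array(new_pop)
        fitness = np.array([objective(c) for c in pop])
        return pop[fitness.argmax()]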
Proof of optimality
In this section, we provide a theoretical justification of the SAMME.C2 algorithm. Recall that an advantage of the SAMME algorithm is that it is statistically explainable and justifiable. In particular, Zhu et al. (2009) proved that the SAMME algorithm is equivalent to fitting a forward stagewise additive model using the multi-class exponential loss function

L(U, f) = exp(−(1/K) U^⊤ f(x)).   (8)

In the same fashion, we demonstrate that the addition of cost-sensitive learning to SAMME preserves these theoretical properties. To prove this, instead of (8), we use a loss function multiplied by cost values, which we call the multi-class cost-sensitive exponential loss function,

L_C(U, f) = C(y) exp(−(1/K) U^⊤ f(x)).   (9)

Just as in the work of Zhu et al. (2009), we justify the use of the multi-class cost-sensitive exponential loss function in (9) by first showing that the classifier minimizing (9) is the optimal Bayes classifier. The symbols U, f(x), and the cost vector C are defined in the subsequent subsections.
Terminology
Suppose we are given a set of data denoted by D = (x_i, y_i, C(y_i)) for i = 1, 2, . . . , N, where x_i is a set of feature variables, y_i is the corresponding response, a classification variable that belongs to the set Y = {1, 2, . . . , K}, and C(y_i) is the corresponding cost value, a function of y_i. For each observation, we attach a cost value that depends on the class observation i belongs to; these are generated outside the algorithm but are based on the minority/majority characteristics of the classification variable. The objective is to learn from the data so that we can build a predictive model for identifying the class a particular observation belongs to, given the set of feature variables. Without loss of generality, we re-code the response y_i as a K-dimensional vector U_i; all entries in this vector equal −1/(K − 1) except for a value of 1 in position k if the observation y_i = k. In effect, we have

U_{ji} = 1 if j = y_i,   U_{ji} = −1/(K − 1) if j ≠ y_i.

This re-coding carries over the symmetry of the class label representation in the binary case (Lee et al. (2004)). It is straightforward to show that Σ_{j=1}^K U_{ji} = 0 for all i = 1, 2, . . . , N. There is a one-to-one correspondence between U_i and y_i, and the two will be used interchangeably for convenience and clarity whenever possible; each equivalently refers to the class observation i belongs to.
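The re-coding is one line in code; the following small sketch assumes labels 0, . . . , K − 1 and verifies the zero-sum property.

    import numpy as np

    def recode(y, K):
        # Symmetric label re-coding: U has 1 in position y and -1/(K-1)
        # elsewhere, so its entries always sum to zero.
        U = np.full(K, -1.0 / (K - 1))
        U[y] = 1.0
        return U

    assert abs(recode(2, K=3).sum()) < 1e-12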
The loss function for the optimal Bayes classifier
This section provides a theoretical justification for the use of the multi-class cost-sensitive exponential loss function in (9) in the optimization leading to the SAMME.C2 algorithm. More specifically, we show here that the resulting classifier is an optimal Bayes classifier. It is well-known in classification problems that this produces a classifier that minimizes the probability of misclassification. See Hastie et al. (2009).
Lemma 3.1. Denote by y the classification variable with possible values in {1, 2, . . . , K}, by U the re-coding of this variable as explained above, by C = (C_1, C_2, . . . , C_K) the cost vector, and by f(x) = (f_1(x), . . . , f_K(x))^⊤ with f_1(x) + · · · + f_K(x) = 0 the classifier function. The optimal classifier function under the multi-class cost-sensitive exponential loss function, i.e., the minimizer of E[C(y) exp(−(1/K) U^⊤ f(x)) | x], is

f_k*(x) = (K − 1) ( log(C_k Prob(y = k|x)) − (1/K) Σ_{j=1}^K log(C_j Prob(y = j|x)) ),   k = 1, 2, . . . , K.   (10)
Proof. Under the sum-to-zero constraint, E[C(y) exp(−(1/K) U^⊤ f(x)) | x] = Σ_{k=1}^K C_k Prob(y = k|x) exp(−f_k(x)/(K − 1)), so the Lagrangian for this optimization can be written as

L(f, λ) = Σ_{k=1}^K C_k Prob(y = k|x) exp(−f_k(x)/(K − 1)) − λ (f_1(x) + · · · + f_K(x)),

where λ is the Lagrange multiplier. Taking derivatives with respect to f_k and λ, we reach

−(1/(K − 1)) C_k Prob(y = k|x) exp(−f_k(x)/(K − 1)) − λ = 0,   k = 1, 2, . . . , K,

and the constraint that f_1(x) + · · · + f_K(x) = 0. Solving each stationarity equation gives f_k(x) = (K − 1) log(C_k Prob(y = k|x)) − (K − 1) log(−(K − 1)λ). Next, by summing these K equations and using the constraint, we get

log(−(K − 1)λ) = (1/K) Σ_{j=1}^K log(C_j Prob(y = j|x)),

and by substituting this into the last expression, we obtain the population minimizer (10):

f_k*(x) = (K − 1) ( log(C_k Prob(y = k|x)) − (1/K) Σ_{j=1}^K log(C_j Prob(y = j|x)) ),   k = 1, 2, . . . , K.
Note that the constraints on f k in Lemma 3.1 allow us to find the unique solution. The following proposition allows us to choose the optimal Bayes classifier.
Proposition 3.2. Denote by y the classification variable with possible values in {1, 2, . . . , K}. Given the feature variables x, the minimizer of the multi-class cost-sensitive exponential loss yields the optimal Bayes classifier

argmax_k f_k*(x) = argmax_k C_k Prob(y = k|x).

Proof. It is clear that argmax_k f_k*(x) = argmax_k log(C_k Prob(y = k|x)), since the term (1/K) Σ_{j=1}^K log(C_j Prob(y = j|x)) in (10) is fixed for all k ∈ {1, 2, . . . , K}. It therefore follows from (10) that argmax_k f_k*(x) = argmax_k C_k Prob(y = k|x). Proposition 3.2 provides a theoretical justification for our estimated classifier in the SAMME.C2 algorithm, and the subsequent proposition provides a formula to calculate the implied class probabilities within this framework.
3.3 SAMME.C2 as forward stagewise additive modeling

In this section, we show that our SAMME.C2 algorithm is indeed equivalent to forward stagewise additive modeling based on the optimization of the multi-class cost-sensitive exponential loss function expressed in (9).
Given the training set D, (9) leads to the optimization problem

min_f Σ_{i=1}^N C(y_i) exp(−(1/K) U_i^⊤ f(x_i))   (11)

subject to f_1 + . . . + f_K = 0.
Using forward stagewise modeling for learning, the solution to (11) has the linear form

f(x) = Σ_{t=1}^T β^{(t)} g^{(t)}(x),   (12)

where T is the total number of iterations and the g^{(t)}(x) are basis functions with corresponding coefficients β^{(t)}. We require that each basis function satisfy the symmetric constraint Σ_{k=1}^K g_k^{(t)}(x) = 0 for all t = 1, 2, . . . , T, so that g^{(t)}(x) takes only one of the K possible values whose k-th entry is 1 and whose remaining entries equal −1/(K − 1). Then, at iteration t, the solution can be written as

f^{(t)}(x) = f^{(t−1)}(x) + β^{(t)} g^{(t)}(x).   (13)

Forward stagewise modeling finds the solution to (11) by sequentially adding new basis functions to the previously fitted model. Hence, at iteration t, we only need to solve

(β^{(t)}, g^{(t)}) = argmin_{β, g} Σ_{i=1}^N D_t(i) exp(−(β/K) U_i^⊤ g(x_i)),   (14)

where D_t(i) does not depend on either β or g(x) and is equivalent to the unnormalized distribution of observation weights at the t-th iteration in Algorithm 4. We notice that g^{(t)}(x) in (14) is in one-to-one correspondence with the multi-class classifier h_t(x) in Algorithm 4 in the following manner: g_k^{(t)}(x) = 1 if h_t(x) = k, and g_k^{(t)}(x) = −1/(K − 1) otherwise.
Therefore, in essence, solving for g^{(t)}(x) is equivalent to finding h_t(x) in Algorithm 4.
Proposition 3.4. The solution to the optimization problem (14) has the following form:

h_t = argmin_h Σ_{i=1}^N D_t(i) 1{y_i ≠ h(x_i)},   β^{(t)} = ((K − 1)^2 / K) α_t,   with α_t = log((1 − ε_t)/ε_t) + log(K − 1).

Proof. To find g^{(t)}(x) in (14), we first fix β^{(t)}. Let us consider the case where U_i = g(x_i). We have

U_i^⊤ g(x_i) = 1 + (K − 1) · 1/(K − 1)^2 = K/(K − 1).   (17)

On the other hand, when U_i ≠ g(x_i), we have

U_i^⊤ g(x_i) = −2/(K − 1) + (K − 2)/(K − 1)^2 = −K/(K − 1)^2.   (18)

Equations (17) and (18) allow us to split the objective in (14) as

Σ_{i=1}^N D_t(i) exp(−(β/K) U_i^⊤ g(x_i)) = e^{−β/(K−1)} Σ_{i=1}^N D_t(i) + ( e^{β/(K−1)^2} − e^{−β/(K−1)} ) Σ_{i=1}^N D_t(i) 1{y_i ≠ h_t(x_i)}.   (19)

From (19), since only the last sum depends on the classifier h_t, for a fixed value of β > 0 the solution for g^{(t)} results in

h_t = argmin_h Σ_{i=1}^N D_t(i) 1{y_i ≠ h(x_i)}.   (20)

Plugging (17), (18), and (20) into (14), and writing ε_t = Σ_{i: y_i ≠ h_t(x_i)} D_t(i) / Σ_{i=1}^N D_t(i), we are left with minimizing

(1 − ε_t) e^{−β^{(t)}/(K−1)} + ε_t e^{β^{(t)}/(K−1)^2}

over β^{(t)}; the overall summation factor Σ_i D_t(i) does not affect the minimization. Differentiating with respect to β^{(t)} and setting the derivative to zero, we get

−(1/(K − 1)) (1 − ε_t) e^{−β^{(t)}/(K−1)} + (1/(K − 1)^2) ε_t e^{β^{(t)}/(K−1)^2} = 0,

and factoring out the term (1/(K − 1)) e^{−β^{(t)}/(K−1)}, we get

−(1 − ε_t) + (1/(K − 1)) ε_t e^{β^{(t)} K/(K−1)^2} = 0.

Since we are minimizing a convex function of β^{(t)}, the optimal solution is

β^{(t)} = ((K − 1)^2 / K) ( log((1 − ε_t)/ε_t) + log(K − 1) ) = ((K − 1)^2 / K) α_t.

The terms ε_t and α_t are equivalent to those in Algorithm 4. Subsequently, we can deduce the updating equation for the distribution of the observation weights in Algorithm 4 after normalization.
Proposition 3.5. The distribution of the observation weights at each iteration simplifies to

D_{t+1}(i) = C(y_i) D_t(i) e^{−((K−1)/K) α_t} e^{α_t 1{y_i ≠ h_t(x_i)}}.   (21)

Equation (21) is equivalent to the updating of the weights in Algorithm 4 after normalization.
Proof. From equation (13), multiplying both sides by −(1/K)U_i, exponentiating, and multiplying both sides by the accumulated cost factor C(y_i)^{(t)}, we get

C(y_i)^{(t)} e^{−(1/K) U_i^⊤ f^{(t)}(x_i)} = C(y_i) · C(y_i)^{(t−1)} e^{−(1/K) U_i^⊤ f^{(t−1)}(x_i)} · e^{−(β^{(t)}/K) U_i^⊤ g^{(t)}(x_i)},

which can be written as

D_{t+1}(i) = C(y_i) D_t(i) e^{−(β^{(t)}/K) U_i^⊤ g^{(t)}(x_i)}.   (22)

The weight at each iteration can be further simplified using (17) and (18): with β^{(t)} = ((K − 1)^2/K) α_t, the exponential factor in (22) equals e^{−((K−1)/K) α_t} when y_i = h_t(x_i) and e^{(1/K) α_t} = e^{−((K−1)/K) α_t} e^{α_t} when y_i ≠ h_t(x_i), which proves our proposition. To show equivalence to the updating of the weights in Algorithm 4, we note that the factor e^{−((K−1)/K) α_t} is common to the two cases and disappears after normalization. It should be straightforward to show that the final classifier is the solution

H(x) = argmax_k f_k^{(T)}(x),

which is equivalent to H(x) = argmax_k Σ_{t=1}^T α_t 1{h_t(x) = k}.
Numerical experiments
This section examines the differences between SAMME.C2 and SAMME in how each model is trained, further exploring the superiority of SAMME.C2 over SAMME in handling imbalanced data. To accomplish this, we make use of a simulated dataset with a highly imbalanced three-class response variable. To generate the simulated dataset, we utilize the Scikit-learn Python module described in Pedregosa et al. (2011). The make_classification Application Programming Interface is employed with the following parameterization:

    """Make Simulation"""
    from sklearn.datasets import make_classification
    X, y = make_classification(n_samples=100000, n_features=50, n_informative=5,
                               n_redundant=0, n_repeated=0, n_classes=3,
                               n_clusters_per_class=2, class_sep=2, flip_y=0,
                               weights=[0.90, 0.09, 0.01], random_state=16)

This script generates 100,000 samples with 50 features and 3 classes, deliberately creating a highly imbalanced dataset by setting the ratios of the classes to 90%, 9%, and 1%, respectively. Changing the parameter class_sep adjusts the difficulty of the classification task; the samples are no longer easily separable for lower values of class_sep. To investigate and compare the running processes of the algorithms at different levels of difficulty, three datasets were created by adjusting this parameter: class_sep=1 for high classification difficulty, class_sep=1.5 for medium classification difficulty, and class_sep=2 for low classification difficulty. In Figure 2, we visualize these three classification tasks, with each difficulty level shown in a separate column. The figure clearly shows that low classification difficulty means the samples are easily separable; the opposite holds for high classification difficulty. For ease of visualization, we only use 3 of the 50 features, and we exhibit the 3-dimensional data structure by pairing the features and drawing three 2-dimensional graphs. We kindly ask the reader to refer to the package documentation for an explanation of the other input parameters. For training, we use 75% of the data, and the rest is used for testing.
For classification problems, the most common performance statistic is accuracy, the proportion of all observations that are correctly classified. For obvious reasons, this is an irrelevant measure for imbalanced datasets. As an alternative, we consider Recall to measure the performance. The Recall statistic, sometimes called the sensitivity, for class i, denoted R_i, is defined as the proportion of observations in class i correctly classified. It has been argued (Fernández et al. (2018)) that the Recall, or sensitivity, is usually a more interesting measure for imbalanced classification. To provide a single measure of performance for a given classifier, we use the geometric average of the Recall statistics, denoted MAvG:

MAvG = ( Π_{i=1}^K R_i )^{1/K}.

It is straightforward to show that taking the log of both sides of this performance metric yields an average of the logs of all the Recall statistics. This log transformation leads to a metric that treats the accurate classification of observations in all classes impartially. In the case of severely imbalanced datasets, the MAvG metric rewards correctly classifying more observations of the minority class even at the price of misclassifying observations in the majority class. In effect, MAvG is a sensible performance measure for severely imbalanced datasets; this performance metric is also used as the criterion for the hyperparameter optimization in GA to determine the cost values used in the SAMME.C2 algorithm. The concept of MAvG for imbalanced datasets originated from the work of Fowlkes and Mallows (1983). To examine the running processes of SAMME.C2 and SAMME, each algorithm is trained with 1,000 decision stumps on the three datasets. A decision stump is a decision tree of depth one, which plays the role of the weak learner in the algorithms. Figures 3, 4, and 5 show the resulting test errors and test MAvG of SAMME.C2 and SAMME, after training newly added decision stumps, for the datasets of low, medium, and high difficulty, respectively. All figures are produced for an increasing number of iterations, with each iteration corresponding to a new decision stump.

Figure 3: Comparison of test error and test MAvG between SAMME.C2 and SAMME with 1,000 decision stumps using the dataset of low level of classification difficulty.
Figure 4: Comparison of test error and test MAvG between SAMME.C2 and SAMME with 1,000 decision stumps using the dataset of medium level of classification difficulty.
Figure 5: Comparison of test error and test MAvG between SAMME.C2 and SAMME with 1,000 decision stumps using the dataset of high level of classification difficulty.
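As a small illustration of the metric, here is a hedged Python sketch using scikit-learn's recall_score and assuming integer class labels:

    import numpy as np
    from sklearn.metrics import recall_score

    def mavg(y_true, y_pred):
        # Geometric mean of per-class recalls; a single class with zero
        # recall drives the whole metric to zero.
        recalls = recall_score(y_true, y_pred, average=None)
        return float(np.prod(recalls) ** (1.0 / len(recalls)))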
With the SAMME algorithm, the objective is to reduce the test error, i.e., the misclassification rate. Therefore, when the model is trained with severely imbalanced data, it puts more weight on the majority class, since the majority class can significantly reduce the test error. For example, based on the simulated datasets in these numerical experiments, a model can be constructed that assigns all observations to the majority class; in this case, we get a misclassification rate of 10%, which can be deemed small. The test error is therefore not a meaningful performance metric for severely imbalanced datasets. All figures show small test errors for the SAMME algorithm, whereas for the SAMME.C2 algorithm the test errors are clearly low at the low level of classification difficulty and rapidly become worse at the high level of classification difficulty.
On the other hand, all three figures show that the SAMME.C2 algorithm produces a better MAvG performance metric at every level of classification difficulty. It is further noted that, in the case of high classification difficulty, SAMME.C2 produces a much improved MAvG metric compared to the SAMME algorithm, in spite of worse test errors. This leads us to infer that, in order to achieve higher accuracy for the minority class, SAMME.C2 has to sacrifice accuracy for the majority class. This becomes clearer in the subsequent figure.
In Figure 6, we can observe the mechanism of SAMME.C2 in more detail by examining the Recall statistics of each of the three classes. Regardless of the complexity of the classification task, SAMME.C2 classifies minority classes much more accurately than SAMME. However, the accuracy gained on minority classes comes at the price of the accuracy of the majority class. In other words, the primary difference between SAMME.C2 and SAMME lies in whether the model is trained to reduce test errors or to achieve a more balanced classification accuracy across all classes. As the difficulty of the classification task increases, Figure 6 shows that, to correctly classify observations in the minority class, SAMME.C2 has to correspondingly reduce the accuracy on observations in the majority class. This is a very important result, because when observations in the minority class of a severely imbalanced dataset are extremely difficult to classify, SAMME assigns nearly all observations to the majority class; put differently, SAMME assigns nearly no observations to the minority class.
The number of iterations needed to reach an optimal classifier is directly linked to the number of decision stumps we use as weak learners: the more weak learners we use, the closer we get to the desired convergence of the MAvG performance metric. In essence, this impacts the computational efficiency of our iterative algorithm. To investigate this, we examine a reasonable number of decision stumps for the SAMME.C2 algorithm by exploring the change in the value of MAvG against the number of decision stumps. Figure 7 exhibits the results of this investigation.
In the figure, for each level of difficulty of the classification task, we examine how changing the number of trees affects reaching the optimal MAvG performance metric during training. The figure shows the effects for the various difficulty levels, varying the number of decision stumps (trees) from 50 to 100 and then in intervals of 100 up to 1,000. Each time we train a model, the cost values are newly tuned through the Genetic Algorithm. The results in Figure 7 exhibit solid lines determined according to the 5-fold cross-validation MAvG; for reference, we also show the corresponding 5-fold cross-validation accuracy values as dashed lines. For all levels of difficulty, MAvG increases sharply up to 200 decision stumps; after that, it does not improve significantly with an increasing number of decision stumps. Therefore, we conclude that at least 200 decision stumps are necessary for SAMME.C2 to perform suitably and favorably.
Finally, we examine the proper number of populations P in the GA explained in Section 2.2 for tuning the cost values of each class in SAMME.C2. To narrow the interval of candidate values, the cost value of the most extreme minority class is fixed at 0.999. Since the largest cost should be given to the most extreme minority class, the cost values for the other classes should obviously lie between 0 and 0.999. Initial experiments demonstrated that, when we run SAMME.C2 with over 200 decision stumps, the best cost values chosen by the GA lie between 0.95 and 0.999. Based on these results, we determine the optimal cost values by choosing from the interval (0.95, 0.999), and we allow for randomness of around 0.001 in the mutation step of the GA. Figure 8 shows 10 values of MAvG according to the 10 cost values in each population. As explained in Section 2.2, the set of 10 cost values of each population is determined by the 10 MAvG values calculated from SAMME.C2 trained with the cost values of the previous population. For all three levels of classification difficulty, we arrive at the best cost values rather rapidly; already after the 4th population, the largest MAvG of each population is nearly identical. Hence, the tuning of cost values in SAMME.C2 does not slow the overall estimation and training of the algorithm.
We have used numerical experiments to gain a better understanding of SAMME.C2, especially in comparison to the SAMME algorithm. We find that SAMME.C2 provides a much superior algorithm for learning from and understanding observations in the minority class, regardless of the level of classification difficulty embedded in the data. We also examined how SAMME.C2 performs relative to other algorithms that handle severely imbalanced classes based on insurance telematics data; see So et al. (2021).
Concluding remarks
Because of its potential use in a vast array of disciplines, classification predictive modeling will continue to be an important toolkit in machine learning. One of the most challenging aspects of the classification task is finding an optimal procedure to handle observational data with a skewed distribution across several classes. There is now a growing body of literature that deals with real-world classification tasks related to highly imbalanced multi-class problems. In spite of this growing demand, there is insufficient work on methods to handle severely imbalanced data in multi-class classification.
In this paper, we presented what we believe is a promising algorithm for handling severely imbalanced multi-class classification. The proposed method, which we refer to as SAMME.C2, combines the benefits of iterative learning from weak learners through the AdaBoost scheme and increased repeated learning of observations in the minority class through a cost-sensitive learning scheme. We provided a mathematical proof that the optimal procedure resulting in SAMME.C2 is equivalent to an additive model with a minimization of a multi-class cost-sensitive exponential loss function. The algorithm therefore belongs to the traditional statistical family of forward stagewise additive models. We additionally showed that based on the same multi-class cost-sensitive exponential loss function, SAMME.C2 is an optimal Bayes classifier.
In order to expand our insights into SAMME.C2 relative to SAMME, our numerical experiments are based on understanding the resulting differences when differing levels of classification difficulty are used. We therefore synthetically generated three simulated datasets distinguished according to these degrees of difficulty. First, we note that the use of the straightforward misclassification rate, or test error, does not work well for severely imbalanced datasets. As has been proposed in the literature, MAvG, the geometric average of the recall statistics of all classes, is a more rational performance metric, as it emphasizes the ability to train and learn well from observations belonging to the minority classes. By recording and tracking test errors, MAvG, and recall statistics, the results of our numerical experiments reveal the superiority of SAMME.C2 in classifying objects that belong to the minority class, regardless of the degree of classification difficulty. This comes at the small expense of sacrificing recall statistics for the majority class: the recall statistics of the minority classes are much more improved at each iteration for SAMME.C2 than for SAMME, but SAMME.C2 has lower recall statistics for the majority classes at all iterations than SAMME. We also showed the computational efficiency of SAMME.C2 by investigating the optimal number of weak learners, or iterations, needed to reach convergence. Based on our analysis, training as few as 200 decision stumps as weak learners is a rational point at which to stop the iteration.
"Computer Science"
] |
A note on the adapted weak topology in discrete time
The adapted weak topology is an extension of the weak topology for stochastic processes designed to adequately capture properties of underlying filtrations. With the recent work of Bartl-Beiglböck-P. as starting point, the purpose of this note is to recover with topological arguments the intriguing result by Backhoff-Bartl-Beiglböck-Eder that all adapted topologies in discrete time coincide. We also derive new characterizations of this topology including descriptions of its trace on the sets of Markov processes and processes equipped with their natural filtration. To emphasize the generality of the argument, we also describe the classical weak topology for measures on $\mathbb R^d$ by a weak Wasserstein metric based on the theory of weak optimal transport initiated by Gozlan-Roberto-Samson-Tetali.
1. Introduction
An essential difference between the study of random variables and that of stochastic processes is that the latter come in conjunction with filtrations designed to model the flow of available information. Let us consider a path space X := Π_{t=1}^N X_t equipped with the product topology, where the (X_t, d_{X_t}) are Polish metric spaces and N ∈ N denotes the number of time steps. We write P(X) for the set of laws of stochastic processes, i.e., Borel probability measures on X. Canonically, we identify the law P ∈ P(X) with the process (X, (σ(X_{1:t}))_{t=1}^N, σ(X), P, X), where X = X_{1:N} is the coordinate process on X, X_{1:t} denotes the projection from X to Π_{s=1}^t X_s =: X_{1:t}, and σ(X_{1:t}) the σ-algebra generated by X_{1:t}. For P, Q ∈ P_p(X), that is, probabilities in P(X) with finite p-th moment, p ∈ [1, ∞), the p-Wasserstein distance W_p is given by

W_p(P, Q) := ( inf_{π ∈ Cpl(P,Q)} ∫_{X×X} d_X^p(x, y) π(dx, dy) )^{1/p},

where Cpl(P, Q) denotes the set of probabilities on X × X with marginals P and Q, and d_X^p(x, y) := Σ_{t=1}^N d_{X_t}^p(x_t, y_t). We equip P_p(X) with the topology induced by W_p and note that if d_X is bounded, W_p metrizes the weak topology on P(X).
The starting point for the study of adapted topologies is the fact that probabilistic operations and optimization problems that crucially depend on filtrations, such as the Doob decomposition, the Snell envelope, optimal stopping, utility maximization, and stochastic programming, are typically not continuous w.r.t. weak topologies. These shortcomings have been acknowledged by several authors from different communities, see e.g. [1,14,17,2,7] for more details. The purpose of this note is to recover and strengthen the main result of Backhoff et al. [5] that all adapted topologies on P(X) coincide. In comparison to the original proof, our argument is more conceptual: at its core lies the elementary fact that comparable compact Hausdorff topologies agree.
1.1. Stochastic processes and the adapted weak topology. Subsequently, we want to consider topologies that incorporate the flow of information encoded in filtrations, for processes on general filtered probability spaces. Therefore, we follow the approach of [7] by introducing the notion of a filtered process.
Definition 1 (Filtered process). A filtered process $\mathbf X$ with paths in $X$ is a 5-tuple consisting of a complete filtered probability space $(\Omega^X, (\mathcal F^X_t)_{t=1}^N, \mathcal F^X, \mathbb P^X)$ and an $(\mathcal F^X_t)_{t=1}^N$-adapted stochastic process $X$ with paths in $X$. We write $\mathrm{FP}$ for the class of all filtered processes with paths in $X$, and $\mathrm{FP}_p$ for the subclass of filtered processes that finitely integrate $d_X^p(\hat x, X)$ for some $\hat x \in X$. Although a priori $\mathrm{FP}$ is a proper class (that contains a lot of redundancy), in the following we will consider equivalence classes $[\mathbf X]$ of filtered processes in the sense of Hoover-Keisler [14] such that the corresponding factor space $\mathrm{FP}$ becomes a set, see for example [4]. This factorization can be seen similarly as in classical $L^p$-theory, where one considers equivalence classes modulo almost-sure equality in order to obtain a Banach space. This equivalence relation can be characterized by an adapted version of the Wasserstein distance, c.f. [7, Theorem 1.5], the adapted Wasserstein distance $AW_p$, which will be introduced in detail in Section 1.2 below: for $\mathbf X, \mathbf Y \in \mathrm{FP}_p$ we have $\mathbf X \equiv \mathbf Y$ if and only if $AW_p(\mathbf X, \mathbf Y) = 0$. Henceforth, we consider the factor space $\mathrm{FP}$ and remark that equivalent processes share the same probabilistic properties, e.g. being adapted, having the same Doob decomposition and Snell envelope, etc. Moreover, we write $\mathrm{FP}_p$ for those elements of $\mathrm{FP}$ with representatives in $\mathrm{FP}_p$. The topology induced by the adapted Wasserstein distance is denoted by $\tau_{AW}$ and called the adapted weak topology. When equipping $\mathrm{FP}$ with the adapted weak topology, we obtain a space rich in topological and geometric properties, see [7]. Importantly, we note that, as a consequence of the adapted block approximation introduced in [7], the values of $AW_p(\mathbf X, \mathbf Y)$ (and also $CW_p(\mathbf X, \mathbf Y)$, which will be introduced below) are independent of the particular choice of representatives. Similarly, we can equip $\mathrm{FP}_p$ with the $p$-th Wasserstein topology by letting $W_p(\mathbf X, \mathbf Y) := W_p(\mathcal L(X), \mathcal L(Y))$, and remark that $W_p$ is not point separating on $\mathrm{FP}_p$: processes can have the same law, but very different information structure, see for instance [2, Figure 1]. An important feature of $AW_p$ is the following Prokhorov-type result, which will be applied on several occasions in the proofs:

Theorem 2. A subset of $\mathrm{FP}_p$ is relatively compact w.r.t. $\tau_{AW}$ if and only if it is relatively compact w.r.t. the topology induced by $W_p$.

To emphasize the significance of Theorem 2 and to give the idea behind the main results, we formulate the following immediate corollary:

Corollary 3. Let $d$ be a metric on $\mathrm{FP}_p$ such that convergence in $AW_p$ implies convergence in $d$. Then $d$ metrizes the adapted weak topology $\tau_{AW}$.
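To illustrate why $W_p$ fails to separate points of $\mathrm{FP}_p$, consider the following standard-type example (a sketch in the spirit of [2, Figure 1]; the concrete numbers are ours):

Let $N = 2$, $\Omega = \{-1, +1\}$ with the uniform measure, and set $X_1 := 0$, $X_2(\omega) := \omega$. Let $\mathbf X$ carry the natural filtration (so $\mathcal F^X_1$ is trivial), and let $\mathbf Y$ be the same process equipped with the larger filtration $\mathcal G_1 := \sigma(X_2)$, which already reveals the outcome at time 1. Both processes have the same path law, whence $W_p(\mathbf X, \mathbf Y) = 0$. However, any coupling that is causal from $\mathbf X$ to $\mathbf Y$ forces the $\mathcal G_1$-measurable quantities, in particular $Y_2$, to be independent of $X_2$, so that
$$AW_1(\mathbf X, \mathbf Y) = \inf_\pi \mathbb E_\pi |X_2 - Y_2| = \tfrac12\,|1 - (-1)| = 1 > 0.$$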
Indeed, by (4), convergence in $AW_p$ implies convergence in $d$. On the other hand, to deduce the reverse implication of (5), let $(\mathbf X^k)_{k\in\mathbb N}$ be a $d$-convergent sequence with limit $\mathbf X$. Then the sequence is $W_p$-precompact and therefore $AW_p$-precompact by Theorem 2. Therefore, there exist $\mathbf Y \in \mathrm{FP}_p$ and a subsequence with $\lim_{j\to\infty} AW_p(\mathbf X^{k_j}, \mathbf Y) = 0$. By (5) this subsequence also converges w.r.t. $d$; thus, the triangle inequality yields $d(\mathbf X, \mathbf Y) = 0$. Finally, as $d$ is a metric, we get that $\mathbf X = \mathbf Y$ and thus $\lim_{k\to\infty} AW_p(\mathbf X^k, \mathbf X) = 0$.
1.2. Adapted topologies. In order to capture the properties of filtrations, numerous authors have introduced extensions of the weak topology of measures on $\mathcal P(X)$, which we frame in our setting and briefly introduce below. For a thorough overview of the topic and an introduction to those topologies we refer to [5] and the references therein.
(A) Aldous [1] introduces the extended weak topology $\tau_A$ by associating with a process $\mathbf X \in \mathrm{FP}$ a measure-valued martingale $pp^1(\mathbf X)$, the so-called prediction process, that is
$$pp^1(\mathbf X) := \big( \mathcal L(X \mid \mathcal F^X_t) \big)_{t=1}^N,$$
where $\mathcal L(X \mid \mathcal F^X_t)$ is the conditional law of $X$ given $\mathcal F^X_t$. Then $\tau_A$ is defined as the initial topology induced by $\mathbf X \mapsto \mathcal L(pp^1(\mathbf X))$ when $\mathcal P(\mathcal P(X)^N)$ is equipped with the weak topology.

(HK) Hoover-Keisler [14] introduce an increasing sequence of topologies $\tau^r_{HK}$ on $\mathrm{FP}$, where $r \in \mathbb N \cup \{0, \infty\}$ is called the rank. This is achieved by iterating Aldous' construction of the prediction process. Set $pp^0(\mathbf X) := X$ and, recursively for $r \in \mathbb N \cup \{\infty\}$, let $pp^r(\mathbf X)$ be the prediction process associated with $pp^{r-1}(\mathbf X)$, and $pp(\mathbf X) := pp^\infty(\mathbf X)$. Analogously to (A), for $r \in \mathbb N \cup \{0, \infty\}$, $\tau^r_{HK}$ is given by the initial topology w.r.t. $\mathbf X \mapsto \mathcal L\big((pp^k(\mathbf X))_{k=0}^r\big)$. We remark that $\tau^0_{HK}$ is equivalent to weak convergence of the law, $\tau^1_{HK} = \tau_A$, and $\tau^{N-1}_{HK} = \tau^r_{HK}$ for $r \ge N$ (see [7]); we then simply write $\tau_{HK} := \tau^{N-1}_{HK}$.

(OS) The optimal stopping topology $\tau_{OS}$ is defined in [5] as the initial topology w.r.t. the family of maps sending $\mathbf X$ to the value of the optimal stopping problem with cost $c$, where $c : \{1, \ldots, N\} \times X \to \mathbb R$ is continuous, bounded, and non-anticipative, that is $c(t, x) = c(t, y)$ if $x_{1:t} = y_{1:t}$ for $(t, x), (t, y) \in \{1, \ldots, N\} \times X$.

(H) The information topology $\tau_H$ of Hellwig [13] is based on a similar point of view as (A) and (HK). Properties of the filtration are encoded in the laws $\mathcal L\big(X_{1:t}, \mathcal L(X_{t+1:N} \mid \mathcal F^X_t)\big)$, that are measures on $X_{1:t} \times \mathcal P(X_{t+1:N})$.
(BLO) Let the path space $X$ be the $N$-fold product of a separable Banach space $V$, i.e., $X = V^N$. In this setting, Bonnier-Liu-Oberhauser [9] embed $\mathrm{FP}$ into graded linear spaces $\mathcal V^r$ via higher rank expected signatures, where $r \in \mathbb N \cup \{0, \infty\}$ is again the rank, and define $\tau^r_{BLO}$ as the initial topology w.r.t. the corresponding embedding $\Phi^r : \mathrm{FP} \to \mathcal V^r$.

Remark 4. In case $d_X$ is an unbounded metric on $X$, we fix for the rest of the paper $p \in [1, \infty)$ and consider the subset $\mathrm{FP}_p$ with the following topological adaptation. The topologies (A), (HK), (OS), (H) and (BLO) are then refined by additionally requiring continuity of the $p$-th moments, i.e., of $\mathbf X \mapsto \mathbb E[d_X^p(\hat x, X)]$ for some $\hat x \in X$. To avoid notational excess, we state all results on $\mathrm{FP}_p$ for some $p \in [1, \infty)$. All results are also true when replacing $\mathrm{FP}_p$ with $\mathrm{FP}$ (and, if necessary, $d_X$ with, for example, $d_X \wedge 1$).
Besides using the powerful concept of initial topologies, various authors have constructed adapted topologies based on ideas from optimal transportation. The essence of this approach is to encode filtrations into constraints on the set of couplings and thereby construct modifications of the Wasserstein distance suitable for processes. To illustrate the idea, recall that optimal transport has so-called transport maps $T : X \to X$ at its core, satisfying the push-forward condition $T_\# P = Q$ for $P, Q \in \mathcal P(X)$. We refer to [19] for a comprehensive overview of optimal transport. In our context, where $P$ and $Q$ are laws of processes, causal optimal transport suggests using adapted maps in order to transport $P$ to $Q$, i.e., $T_\# P = Q$ and $T$ is non-anticipative, which means that the $t$-th coordinate of $T(x)$ depends on $x$ only through $x_{1:t}$. When $X$ resp. $Y$ denote the first resp. second coordinate projection from $X \times X \to X$, then this additional adaptedness constraint on couplings can be formulated as
$$\sigma(Y_{1:t}) \perp_{\sigma(X_{1:t})} \sigma(X) \quad \text{for all } 1 \le t \le N, \quad (11)$$
where, for $\sigma$-algebras $\mathcal A, \mathcal B, \mathcal C$ on some probability space, $\mathcal A \perp_{\mathcal B} \mathcal C$ denotes conditional independence of $\mathcal A$ and $\mathcal C$ given $\mathcal B$. The couplings satisfying (11) form the set $\mathrm{Cpl}_c(P, Q)$, whose elements are called causal couplings. When one symmetrizes (11) one obtains the set of bicausal couplings $\mathrm{Cpl}_{bc}(P, Q)$, that are $\pi \in \mathrm{Cpl}_c(P, Q)$ such that $(Y, X)_\# \pi \in \mathrm{Cpl}_c(Q, P)$. These definitions can easily be extended to $\mathrm{FP}$.
Definition 5 (Causal and bicausal couplings). Let $\mathbf X, \mathbf Y \in \mathrm{FP}$. A coupling $\pi \in \mathrm{Cpl}(\mathbb P^X, \mathbb P^Y)$ is called causal (from $\mathbf X$ to $\mathbf Y$) if, for all $1 \le t \le N$, $\mathcal F^Y_t \perp_{\mathcal F^X_t} \mathcal F^X_N$ under $\pi$. We call $\pi$ bicausal if it additionally satisfies the symmetric condition $\mathcal F^X_t \perp_{\mathcal F^Y_t} \mathcal F^Y_N$ for all $1 \le t \le N$. Finally, we write $\mathrm{Cpl}_c(\mathbf X, \mathbf Y)$ resp. $\mathrm{Cpl}_{bc}(\mathbf X, \mathbf Y)$ for the set of causal resp. bicausal probabilities with first marginal $\mathbb P^X$ and second marginal $\mathbb P^Y$.
(SCW) Lassalle [16] and Backhoff et al. [6] coin the notion of causality, see Definition 5, and introduce the causal Wasserstein "distance" $CW_p$ on $\mathcal P_p(X)$. For $\mathbf X, \mathbf Y \in \mathrm{FP}_p$ we have
$$CW_p(\mathbf X, \mathbf Y) := \inf_{\pi \in \mathrm{Cpl}_c(\mathbf X, \mathbf Y)} \big( \mathbb E_\pi[ d_X^p(X, Y) ] \big)^{1/p}. \quad (14)$$
Clearly, $CW_p$ is not a metric as it lacks symmetry, which motivates consideration of the so-called symmetrized causal Wasserstein distance $SCW_p$, see [5], which constitutes a metric on $\mathrm{FP}_p$. We write $\tau_{SCW}$ for the induced topology.

(AW) Instead of symmetrizing as in (15), one can directly symmetrize the definition on the level of couplings via the notion of bicausal couplings. Approaches in this spirit, but to different extents, go back to Rüschendorf [18], Pflug-Pichler [17], Bion-Nadal-Talay [8], and Bartl et al. [7]. We define the adapted Wasserstein distance of $\mathbf X, \mathbf Y \in \mathrm{FP}_p$ by
$$AW_p(\mathbf X, \mathbf Y) := \inf_{\pi \in \mathrm{Cpl}_{bc}(\mathbf X, \mathbf Y)} \big( \mathbb E_\pi[ d_X^p(X, Y) ] \big)^{1/p}. \quad (16)$$
The adapted Wasserstein distance is a metric on $\mathrm{FP}_p$ and we denote its induced topology by $\tau_{AW}$.

(CW) Finally, we introduce here a new mode of convergence, the so-called topology of causal convergence $\tau_{CW}$, which we describe below: a neighbourhood basis of $\mathbf X \in \mathrm{FP}_p$ is given by sets of the form $\{\mathbf Y \in \mathrm{FP}_p : CW_p(\mathbf X, \mathbf Y) < \epsilon\}$, where $\epsilon > 0$. Hence, $\tau_{CW}$ can be equivalently described by its convergent sequences: $\mathbf X^k \to \mathbf X$ in $\tau_{CW}$ if and only if $CW_p(\mathbf X, \mathbf X^k) \to 0$.

Remark 6. It is apparent from the definitions in (2), (14), (15) and (16) that
$$W_p(\mathbf X, \mathbf Y) \le CW_p(\mathbf X, \mathbf Y) \le SCW_p(\mathbf X, \mathbf Y) \le AW_p(\mathbf X, \mathbf Y)$$
for $\mathbf X, \mathbf Y \in \mathrm{FP}_p$. Hence, we have $\tau_W \subseteq \tau_{CW} \subseteq \tau_{SCW} \subseteq \tau_{AW}$.
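The backward-induction principle behind (16) can be made concrete for finitely supported processes. The following self-contained sketch (our illustration, not from the paper; all function names are ours) computes $AW_1$ for two-step real-valued processes with their natural filtrations: the time-1 transport cost is augmented by the optimal transport distance between the associated conditional laws.

```python
import numpy as np
from scipy.optimize import linprog

def ot_cost(p, q, C):
    """Optimal transport cost between discrete distributions p, q
    for a cost matrix C, solved as a linear program."""
    m, n = len(p), len(q)
    A_eq, b_eq = [], []
    for i in range(m):                       # row-marginal constraints
        row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1
        A_eq.append(row); b_eq.append(p[i])
    for j in range(n):                       # column-marginal constraints
        col = np.zeros(m * n); col[j::n] = 1
        A_eq.append(col); b_eq.append(q[j])
    res = linprog(C.reshape(-1), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None))
    return res.fun

def adapted_w1(x1, p1, kx, y1, q1, ky):
    """AW_1 between two-step processes with natural filtrations.
    x1/p1: time-1 atoms and weights; kx[i] = (atoms, weights) of the
    conditional law of X_2 given X_1 = x1[i]; analogously for Y."""
    C = np.zeros((len(x1), len(y1)))
    for i, xi in enumerate(x1):
        for j, yj in enumerate(y1):
            ax, wx = kx[i]; ay, wy = ky[j]
            inner = ot_cost(wx, wy, np.abs(np.subtract.outer(ax, ay)))
            C[i, j] = abs(xi - yj) + inner   # time-1 cost + kernel distance
    return ot_cost(p1, q1, C)

# Classical example: X_1 = 0, X_2 = +/-1 versus Y_1 = +/-eps, Y_2 = sign(Y_1).
eps = 0.01
x = ([0.0], [1.0], [(np.array([-1.0, 1.0]), np.array([0.5, 0.5]))])
y = ([-eps, eps], [0.5, 0.5],
     [(np.array([-1.0]), np.array([1.0])), (np.array([1.0]), np.array([1.0]))])
print(adapted_w1(*x, *y))   # ~1.01: stays bounded away from 0 as eps -> 0
```

Here $AW_1 \approx 1 + \epsilon$ even though the plain Wasserstein distance between the path laws vanishes as $\epsilon \to 0$, illustrating the gap recorded in Remark 6.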
1.3. Characterizations of the adapted weak topology. In this subsection we formulate the main results of this paper. The core ingredient in the proofs of the main results, Theorems 8 and 11, and also Proposition 12, is the following simple observation of topological nature.
Lemma 7. Let $\tau$ and $\tau'$ be topologies on a set $A$ such that
(2) the topology $\tau$ is at least as fine as $\tau'$;
(3) every $\tau'$-convergent sequence is relatively compact w.r.t. $\tau$;
(4) limits of sequences in $(A, \tau')$ are unique.
Then $\tau$ and $\tau'$ have the same convergent sequences.
Note that Lemma 7 in combination with Theorem 2 has Corollary 3 as a consequence.
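For comparison, we record the classical fact alluded to above (comparable compact Hausdorff topologies agree), of which Lemma 7 is a sequential analogue:

Fact. If $\tau' \subseteq \tau$ are topologies on $A$, $(A, \tau)$ is compact and $(A, \tau')$ is Hausdorff, then $\tau = \tau'$.

Indeed, the identity $\mathrm{id} : (A, \tau) \to (A, \tau')$ is a continuous bijection that maps $\tau$-closed (hence $\tau$-compact) sets to $\tau'$-compact (hence $\tau'$-closed) sets; it is therefore a closed map and a homeomorphism.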
Next, we provide characterizations of the adapted weak topology on $\mathrm{FP}_p$. The equivalence of $\tau_{HK}$ and the adapted Wasserstein topology $\tau_{AW}$ is due to [7], whereas the characterization in terms of the symmetrized causal Wasserstein topology $\tau_{SCW}$ is novel. Moreover, we remark that the equivalence of the higher rank expected signature topology $\tau_{BLO}$ and $\tau_{HK}$ was already known when, for $t \in \{1, \ldots, N\}$, $X_t = V$ and $V$ is a compact subset of a separable Banach space, see [9, Theorem 2].

Theorem 8. On $\mathrm{FP}_p$ we have $\tau_{HK} = \tau_{SCW} = \tau_{AW}$. If the path space is the $N$-fold product of a separable Banach space, then these topologies also coincide with $\tau^{N-1}_{BLO}$, and $\tau^r_{BLO} = \tau^r_{HK}$.

When restricting to sets of processes that have a simpler information structure, e.g. Markov processes or processes equipped with their natural filtration, there are simpler ways to characterize the adapted weak topology. This motivates the next definition of higher-order Markov processes, where the transition probabilities are allowed to depend on more than the current state.

Definition 9. Let $n \in \mathbb N \cup \{\infty\}$. We call a process $\mathbf X \in \mathrm{FP}_p$ $n$-th order Markovian (or an $n$-th order Markov process) if, for all $1 \le t \le N - 1$,
$$\mathcal L(X_{t+1} \mid \mathcal F^X_t) = \mathcal L(X_{t+1} \mid X_{1 \vee (t-n+1):t}) \quad \text{almost surely}.$$
The set of all $n$-th order Markov processes is denoted by $\mathrm{FP}^{\mathrm{Markov}}_{p,n}$. Moreover, we may call $\infty$-th order Markov processes plain and write $\mathrm{FP}^{\mathrm{plain}}_p := \mathrm{FP}^{\mathrm{Markov}}_{p,\infty}$.
We endow $\mathrm{FP}^{\mathrm{Markov}}_{p,n}$ with the initial topology $\tau^n_{\mathrm{Markov}}$ that is given by the maps $\mathbf X \mapsto \mathcal L(T^n_t(\mathbf X))$, $1 \le t \le N-1$, where $T^n_t(\mathbf X) := \big( X_{1 \vee (t-n+1):t}, \mathcal L(X_{t+1} \mid \mathcal F^X_t) \big)$.

Remark 10. To illustrate Definition 9, let $n = 1$. Clearly, $\mathrm{FP}^{\mathrm{Markov}}_{p,1}$ is the subset of (time-inhomogeneous) Markov processes in $\mathrm{FP}_p$. A family of Markov processes $(\mathbf X^k)_{k\in\mathbb N}$ converges to a Markov process $\mathbf X$ w.r.t. $\tau^1_{\mathrm{Markov}}$ if and only if, for $1 \le t \le N-1$, the laws $\mathcal L\big(X^k_t, \mathcal L(X^k_{t+1} \mid X^k_t)\big)$ converge to $\mathcal L\big(X_t, \mathcal L(X_{t+1} \mid X_t)\big)$. In particular, if there exist continuous kernels $\kappa_t : X_t \to \mathcal P_p(X_{t+1})$ which satisfy $\kappa_t(X_t) = \mathcal L(X_{t+1} \mid X_t)$ almost surely, then convergence in $\tau^1_{\mathrm{Markov}}$ can be characterized by a corresponding convergence of the kernels along the marginals (for all $1 \le t \le N-1$ and $\epsilon > 0$, with $p$-th moments controlled at some $x \in X$). This can easily be deduced, e.g., by using continuity of the kernels $(\kappa_t)_{t=1}^{N-1}$ and Skorokhod's representation theorem.

The next result recovers and generalizes the main result of [5]. The novelty is two-fold: on the one hand, the case $n = \infty$ recovers the results of [5] and additionally gives a new description in terms of $\tau^\infty_{\mathrm{Markov}}$. On the other hand, the case $n \in \mathbb N$ extends this result to the subset of $n$-th order Markov processes.
Theorem 11 (All adapted topologies are equal). Let $n, r \in \mathbb N \cup \{\infty\}$. Then the traces on $\mathrm{FP}^{\mathrm{Markov}}_{p,n}$ of the topologies $\tau_A$, $\tau^r_{HK}$, $\tau_{OS}$, $\tau_H$, $\tau_{CW}$, $\tau_{SCW}$ and $\tau_{AW}$ are the same. In particular, they all coincide with the trace of $\tau^n_{\mathrm{Markov}}$.
1.4. Characterization of the weak topology. The line of reasoning prescribed by Lemma 7 can be utilized outside of the framework of the adapted weak topology, which is demonstrated by the proposition below.
Proposition 12. The $p$-Wasserstein topology on $\mathcal P_p(\mathbb R^d)$ can be metrized by $V_p(P, Q) := \theta_p(P, Q) + \theta_p(Q, P)$, where $\mathbb R^d$ is equipped with the euclidean norm $\|\cdot\|$ and
$$\theta_p(P, Q) := \inf_{\pi \in \mathrm{Cpl}(P, Q)} \Big( \int_{\mathbb R^d} \Big\| x - \int_{\mathbb R^d} y\, \pi_x(dy) \Big\|^p P(dx) \Big)^{1/p}, \quad (27)$$
with $(\pi_x)_x$ a disintegration of $\pi$ w.r.t. its first marginal.

Remark 13. The minimization problem depicted in (27) is a so-called weak optimal transport problem [12], which is a generalization of optimal transport. In particular, (27) vanishes if and only if there exists a martingale coupling between $P$ and $Q$. For more details we refer to [11,3] and the references therein.
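To spell out the first claim of Remark 13 (a short sketch; the attainment of the infimum in (27) relies on the compactness theory of weak optimal transport [12]): if $\pi \in \mathrm{Cpl}(P, Q)$ is a martingale coupling, i.e. $\int y\, \pi_x(dy) = x$ for $P$-a.e. $x$, the integrand in (27) vanishes identically and hence (27) is zero. Conversely, if an optimizer $\pi$ of (27) has zero cost, then
$$\Big\| x - \int_{\mathbb R^d} y\, \pi_x(dy) \Big\| = 0 \quad P\text{-a.s.},$$
that is, $\mathbb E_\pi[Y \mid X] = X$ and $\pi$ is a martingale coupling of $P$ and $Q$.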
2. Proofs
In order to prove the main results, we will verify the assumptions of Lemma 7. In doing so, we will encounter various martingales, which can be properly treated thanks to the next well-known fact. We recall that a process $X = (X_t)_{t=1}^N$ taking values in $\mathcal P(X)$ is called a measure-valued martingale if, for $f \in C_b(X)$, the real-valued, bounded process $(X_t(f))_{t=1}^N$ is a martingale. Here, we write $p(f)$ for the integral $\int f\, dp$ when $p \in \mathcal P(X)$ and $f \in C_b(X)$.

Lemma 14. Let $(X^1, X^2, X^3)$ be a measure-valued martingale taking values in $\mathcal P(X)$, where $X$ is a Polish space. If $X^1 \sim X^3$, then $X^1 = X^2 = X^3$ almost surely.
Proof. Since there exists a countable family in $C_b(X)$ that separates points in $\mathcal P(X)$, it suffices to show that, for $f \in C_b(X)$, $X^1(f) = X^2(f) = X^3(f)$ almost surely. As $(X^1, X^2, X^3)$ is a measure-valued martingale, $Y^i := X^i(f)$ defines a real-valued, bounded martingale with $Y^1 \sim Y^3$; hence $Y^1 = Y^2 = Y^3$ almost surely.
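The last step rests on the elementary observation (recorded here for completeness) that a bounded real-valued martingale whose initial and terminal values share the same law is almost surely constant: by orthogonality of martingale increments,
$$\mathbb E[(Y^3)^2] = \mathbb E[(Y^1)^2] + \mathbb E[(Y^2 - Y^1)^2] + \mathbb E[(Y^3 - Y^2)^2],$$
and $Y^1 \sim Y^3$ forces the two increment terms to vanish, so $Y^1 = Y^2 = Y^3$ a.s.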
2.1. The spaces $\mathrm{FP}^{\mathrm{Markov}}_{p,n}$. First, we justify that the $n$-Markov property is preserved under equivalence.

Lemma 15. Let $\mathbf X, \mathbf Y \in \mathrm{FP}_p$ with $\mathbf X \equiv \mathbf Y$. If $\mathbf X$ is $n$-th order Markovian, then so is $\mathbf Y$.
Proof. By Definition 9 the property of being $n$-Markovian can be deduced from observing the law of the corresponding first-order prediction process. Hence, we conclude by the fact that $\mathbf X \equiv \mathbf Y$ readily implies $\mathcal L(pp^1(\mathbf X)) = \mathcal L(pp^1(\mathbf Y))$.

Lemma 16. Plain processes are uniquely defined by their law, that is, for $\mathbf X, \mathbf Y \in \mathrm{FP}^{\mathrm{plain}}_p$, $\mathcal L(X) = \mathcal L(Y)$ implies $\mathbf X = \mathbf Y$.

Proof. By definition of a filtered process, $Y$ is adapted; therefore the coupling $\pi$ given by $(\mathrm{id}_{\Omega^Y}, Y)_\# \mathbb P^Y$ is causal from $\mathbf Y$ to $\mathbf X := (X, (\sigma(X_{1:t}))_t, \sigma(X), \mathcal L(Y), X)$, where $X$ denotes the canonical process on $X$. If $\mathbf Y$ is plain, c.f. Definition 9, then $\mathcal L(Y \mid \mathcal F^Y_t) = \mathcal L(Y \mid Y_{1:t})$ $\mathbb P^Y$-almost surely. Again, as $X$ is adapted, this translates to the conditional independence $\mathcal F^Y_t \perp_{\sigma(X_{1:t})} \sigma(X)$ under $\pi$, which means that $\pi$ is bicausal and $AW_p(\mathbf X, \mathbf Y) = 0$.

Corollary 17. For $n, m \in \mathbb N \cup \{\infty\}$ with $n \le m$ we have $\mathrm{FP}^{\mathrm{Markov}}_{p,n} \subseteq \mathrm{FP}^{\mathrm{Markov}}_{p,m}$. Moreover, processes in $\mathrm{FP}^{\mathrm{Markov}}_{p,n}$ are uniquely defined by their law.

Proof. The first claim is a direct consequence of the definition of $n$-th resp. $m$-th order Markov processes. The second claim then readily follows from Lemma 16.
Lemma 18. $(\mathrm{FP}^{\mathrm{Markov}}_{p,n}, \tau^n_{\mathrm{Markov}})$ is a sequential Hausdorff space.

Proof. First, we remark that, for $1 \le t \le N-1$, the map $\mathbf X \mapsto \mathcal L(T^n_t(\mathbf X))$ takes values in the Polish (and therefore first countable) space $\mathcal P_p\big(X_{1\vee(t-n+1):t} \times \mathcal P_p(X_{t+1})\big)$, from which sequentiality of the initial topology follows. For the Hausdorff property, let $\mathbf X, \mathbf Y \in \mathrm{FP}^{\mathrm{Markov}}_{p,n}$ with $\mathcal L(T^n_t(\mathbf X)) = \mathcal L(T^n_t(\mathbf Y))$ for all $t$. By the $n$-Markov property there exists a measurable map $f_t : X_{1\vee(t-n+1):t} \to \mathcal P(X_{t+1})$ such that $\mathcal L(X_{t+1} \mid \mathcal F^X_t) = f_t(X_{1\vee(t-n+1):t})$ almost surely. In particular, we have for $t = n$ that $\mathcal L(X_{1:n+1}) = \mathcal L(Y_{1:n+1})$. We proceed to show $\mathcal L(X) = \mathcal L(Y)$. Assume that we have already shown $\mathcal L(X_{1:t}) = \mathcal L(Y_{1:t})$ for some $n+1 \le t \le N-1$. By the disintegration theorem and the definition of $n$-th order Markovian, we may write
$$\mathcal L(X_{1:t+1}) = \mathcal L(X_{1:t}) \otimes \big( f_t \circ \mathrm{pr}_{1\vee(t-n+1):t} \big),$$
where we use the notation $\mu \otimes k$, for $\mu \in \mathcal P(X_{1:t})$ and a measurable kernel $k : X_{1:t} \to \mathcal P(X_{t+1})$, to denote the gluing of $\mu$ with $k$, that is, the probability defined by $(\mu \otimes k)(A \times B) := \int_A k(x)(B)\, \mu(dx)$. The same representation holds for $\mathbf Y$ with the same $f_t$, which yields $\mathcal L(X_{1:t+1}) = \mathcal L(Y_{1:t+1})$. This concludes the inductive step.
Finally, we can apply Lemma 16 and conclude X = Y.
By [7] we may assume w.l.o.g. that $\mathcal F^X_N = \mathcal F^X$, $\mathcal F^Y_N = \mathcal F^Y$, and that $(\Omega^X, \mathcal F^X)$ and $(\Omega^Y, \mathcal F^Y)$ are standard Borel spaces. This allows us to consider the conditionally independent product of $\pi$ and $\pi'$, denoted by $\bar\pi := \pi \otimes \pi' \in \mathrm{Cpl}(\mathbf X, \mathbf Y, \bar{\mathbf X})$, see Definition 22. Here, $\mathrm{Cpl}(\mathbf X, \mathbf Y, \bar{\mathbf X})$ denotes the set of couplings with marginals $\mathbb P^X$, $\mathbb P^Y$ and $\mathbb P^{\bar X}$, and we write $\bar X$ for the second $X$-coordinate in order to distinguish it from the first. By induction we show that, $\bar\pi$-almost surely,
$$pp^k(\bar{\mathbf X}) = pp^k(\mathbf Y) = pp^k(\mathbf X) \quad (28)$$
for all $k \in \mathbb N \cup \{0\}$. Since we know that $X = Y = \bar X$ $\bar\pi$-almost surely, we have verified (28) for $k = 0$. Assume that (28) holds for some $k$. By the causality of $\pi'$ and Lemma 24 we find, for $1 \le t \le N$, the corresponding identity (29) for conditional expectations, where we naturally extend the notation introduced in Definition 5 in order to write products of multiple $\sigma$-algebras. Since $pp^k(\mathbf X)$ is $\mathcal F^{X,Y,\bar X}_{N,0,0}$-measurable and $pp^k(\mathbf Y)$ is $\mathcal F^{X,Y,\bar X}_{N,N,0}$-measurable, we obtain, by combining (28), (29), and the tower property, the required martingale property of the rank-$(k+1)$ prediction processes, and similarly for the symmetric counterpart. Hence, the triplet $(pp^{k+1}_t(\bar{\mathbf X}), pp^{k+1}_t(\mathbf Y), pp^{k+1}_t(\mathbf X))$ satisfies the assumptions of Lemma 14, which concludes the inductive step. In particular, we have shown that $pp(\mathbf X) \sim pp(\mathbf Y)$, whence $\mathbf X = \mathbf Y$ by [7, Theorem 4.11].
First, we convince ourselves that $(\mathcal L(X^k))_{k\in\mathbb N}$ converges to $\mathcal L(X)$: assume that we have already shown that $\mathcal L(X^k_{1:t}) \to \mathcal L(X_{1:t})$ for some $1 \le t \le N-1$. The conditionally independent product $\otimes$, see [10, Definition 2.8], allows us to rewrite $\mathcal L(X^k_{1:t+1})$ in terms of $\mathcal L(X^k_{1:t})$ and $\mathcal L(T^n_t(\mathbf X^k))$. By [10, Theorem 4.1], that is, in our context, continuity of $\otimes$ at $(\mathcal L(X_{1:t}), \mathcal L(T^n_t(\mathbf X)))$, we obtain that $\mathcal L(X^k_{1:t+1}) \to \mathcal L(X_{1:t+1})$. Hence, $(\mathcal L(X^k))_{k\in\mathbb N}$ is convergent and therefore tight. Thus, by Theorem 2 there exists a subsequence of $(\mathbf X^k)_{k\in\mathbb N}$ converging in $\tau_{AW}$ to some $\mathbf Y \in \mathrm{FP}_p$. Due to $\tau_{AW}$-continuity, we get $\mathcal L(T^n_t(\mathbf Y)) = \lim_k \mathcal L(T^n_t(\mathbf X^k))$. Hence, there exist measurable maps $f_t : X_{1\vee(t-n+1):t} \to \mathcal P(X_{t+1})$ with the property $\mathcal L(Y_{t+1} \mid \mathcal F^Y_t) = f_t(Y_{1\vee(t-n+1):t})$ almost surely. In other words, $\mathbf Y \in \Lambda^{n,\mathrm{Markov}}$. Therefore the sequence $(\mathbf X^k)_{k\in\mathbb N}$ is also relatively compact in $(\mathrm{FP}^{\mathrm{Markov}}_{p,n}, \tau^n_{\mathrm{Markov}})$, and we conclude with Lemma 16 that $\mathbf Y(= \mathbf X) \in \mathrm{FP}^{\mathrm{plain}}_p$.
2.2. Causal gluing. This section is devoted to developing auxiliary results concerning the composition of causal couplings with matching intermediary marginal. We recall that due to [7] we can always assume w.l.o.g. that all spaces under consideration are standard Borel. Therefore, we assume for the rest of the section that we have chosen representatives of $\mathbf X, \mathbf Y, \mathbf Z \in \mathrm{FP}$ whose underlying probability spaces are standard Borel.

Definition 22. Let $\gamma \in \mathrm{Cpl}(\mathbf X, \mathbf Y)$ and $\eta \in \mathrm{Cpl}(\mathbf Y, \mathbf Z)$. We define the conditionally independent product of $\gamma$ and $\eta$ as the probability $\gamma \otimes \eta$ on $\Omega^X \times \Omega^Y \times \Omega^Z$ satisfying, for any $U$ bounded and $\mathcal F^{X,Y,Z}_{N,N,N}$-measurable, that
$$\int U\, d(\gamma \otimes \eta) = \iint U(\omega^X, \omega^Y, \omega^Z)\, \eta_{\omega^Y}(d\omega^Z)\, \gamma(d\omega^X, d\omega^Y), \quad (30)$$
where $\eta_{\omega^Y}$ is a disintegration kernel of $\eta$ w.r.t. the projection on $\Omega^Y$. Due to symmetry reasons, we have the analogous formula (31), obtained by disintegrating $\gamma$ w.r.t. the projection on $\Omega^Y$ and integrating against $\eta$. The term (31) clarifies the naming of $\gamma \otimes \eta$ as the conditionally independent product: conditionally on $\omega^Y$, the knowledge of $\omega^X$ does not affect $\omega^Z$ and vice versa. This suggests the following probabilistic formulation.

Lemma 23. Let $\gamma \in \mathrm{Cpl}(\mathbf X, \mathbf Y)$ and $\eta \in \mathrm{Cpl}(\mathbf Y, \mathbf Z)$. Under $\gamma \otimes \eta$ we have, for every bounded $\mathcal F^{X,Y,Z}_{0,0,N}$-measurable $W$, that $\mathbb E[W \mid \mathcal F^{X,Y,Z}_{N,N,0}] = \mathbb E[W \mid \mathcal F^{X,Y,Z}_{0,N,0}]$; equivalently, $\mathcal F^{X,Y,Z}_{N,0,0} \perp_{\mathcal F^{X,Y,Z}_{0,N,0}} \mathcal F^{X,Y,Z}_{0,0,N}$.
Proof. Let $U$, $V$, $W$ be bounded and $\mathcal F^{X,Y,Z}_{N,0,0}$-, $\mathcal F^{X,Y,Z}_{0,N,0}$- and $\mathcal F^{X,Y,Z}_{0,0,N}$-measurable, respectively. Write $\hat W$ for the bounded, $\mathcal F^{X,Y,Z}_{0,N,0}$-measurable random variable given by $\hat W(\omega^Y) := \int W(\omega^Z)\, \eta_{\omega^Y}(d\omega^Z)$. By Definition 22 and the tower property we get $\mathbb E_{\gamma\otimes\eta}[UVW] = \mathbb E_{\gamma\otimes\eta}[UV\hat W]$. Since $\hat W = \mathbb E_{\gamma\otimes\eta}[W \mid \mathcal F^{X,Y,Z}_{0,N,0}]$ and $V$ was arbitrary, we derive $\mathbb E_{\gamma\otimes\eta}[W \mid \mathcal F^{X,Y,Z}_{N,N,0}] = \mathbb E_{\gamma\otimes\eta}[W \mid \mathcal F^{X,Y,Z}_{0,N,0}]$, which shows the first statement.
The second statement is a consequence of applying [15, Proposition 5.8] to what was previously shown.
Lemma 24. Let $\gamma \in \mathrm{Cpl}_c(\mathbf X, \mathbf Y)$ and $\eta \in \mathrm{Cpl}_c(\mathbf Y, \mathbf Z)$. We have, for $1 \le t \le N$ and every bounded $\mathcal F^{X,Y,Z}_{0,t,t}$-measurable $W$:
(1) $\mathbb E_{\gamma\otimes\eta}[W \mid \mathcal F^{X,Y,Z}_{N,N,0}] = \mathbb E_{\gamma\otimes\eta}[W \mid \mathcal F^{X,Y,Z}_{t,t,0}]$;
(2) $\mathbb E_{\gamma\otimes\eta}[W \mid \mathcal F^{X,Y,Z}_{N,0,0}] = \mathbb E_{\gamma\otimes\eta}[W \mid \mathcal F^{X,Y,Z}_{t,0,0}]$.

Proof. To show item (1), let $W$ be bounded and $\mathcal F^{X,Y,Z}_{0,t,t}$-measurable. We obtain from Lemma 23 the first equality in (32), whereas the second stems from the causality of $\eta$: this causality yields under $\gamma \otimes \eta$ that, conditionally on $\mathcal F^{X,Y,Z}_{0,t,0}$, $\mathcal F^{X,Y,Z}_{0,N,0}$ is independent of $\mathcal F^{X,Y,Z}_{0,t,t}$. Since the last term in (32) is $\mathcal F^{X,Y,Z}_{t,t,0}$-measurable, the tower property yields item (1). To establish item (2), let $W$ be as above. Note that the causality of $\gamma$ provides under $\gamma \otimes \eta$ that, conditionally on $\mathcal F^{X,Y,Z}_{t,0,0}$, $\mathcal F^{X,Y,Z}_{N,0,0}$ is independent of $\mathcal F^{X,Y,Z}_{t,t,0}$. Using this in addition to item (1) and the tower property, we conclude item (2).

Corollary 25. Let $\gamma \in \mathrm{Cpl}_c(\mathbf X, \mathbf Y)$ and $\eta \in \mathrm{Cpl}_c(\mathbf Y, \mathbf Z)$. Then the $(\mathbf X, \mathbf Z)$-marginal of $\gamma \otimes \eta$ belongs to $\mathrm{Cpl}_c(\mathbf X, \mathbf Z)$.

Proof. This result is a direct consequence of item (2) of Lemma 24.
Lemma 26. Let $\mathbf X \in \mathrm{FP}_p$. The map $\mathrm{FP}_p \ni \mathbf Y \mapsto CW_p(\mathbf X, \mathbf Y)$ is 1-Lipschitz w.r.t. $AW_p$: gluing an (almost) optimal causal coupling for $CW_p(\mathbf X, \mathbf Y)$ with an (almost) optimal bicausal coupling for $AW_p(\mathbf Y, \mathbf Z)$ yields a causal coupling between $\mathbf X$ and $\mathbf Z$ by Corollary 25. Hence, we compute, via Minkowski's inequality, $CW_p(\mathbf X, \mathbf Z) \le CW_p(\mathbf X, \mathbf Y) + AW_p(\mathbf Y, \mathbf Z)$.
2.3. Postponed proofs of Section 1.
Proof of Lemma 7. Due to (2) it remains to show that convergence in $(A, \tau')$ implies convergence in $(A, \tau)$. To this end, let $(y_k)_{k\in\mathbb N}$ be a sequence in $(A, \tau')$ converging to $y$. By (3) we find a subsequence $(y_{k_j})_{j\in\mathbb N}$ that converges in $(A, \tau)$ to some element $z$. Again, by (2) we have that $(y_{k_j})_{j\in\mathbb N}$ also converges in $(A, \tau')$ to $z$, which yields by (4) that $y = z$. Therefore, $y$ is the only $(A, \tau)$-accumulation point of $(y_k)_{k\in\mathbb N}$, from where we conclude that $(y_k)_{k\in\mathbb N}$ has to converge to $y$ in $(A, \tau)$.

Proof of Theorem 8. We rely on [7, Proposition 8], where $\tau_W$ is the topology of $p$-Wasserstein convergence of the laws. Since $\tau_W$ and $\tau$ have the same relatively compact sets by Theorem 2, we conclude the same for $\tau'$. Hence, all assumptions of Lemma 7 are met, which yields the first two assertions of the theorem.
The last assertion of the theorem follows mutatis mutandis.

Proof of Theorem 11. The topology $\tau^n_{\mathrm{Markov}}$ is coarser than $\tau_H$, $\tau_A$, $\tau^r_{HK}$, $\tau_{OS}$, $\tau_{AW}$ and $\tau_{SCW}$. Similarly, we have that all of these topologies are coarser than $\tau_{AW}$. We remark that $\tau_{AW} \supseteq \tau_{OS}$ can be seen from the fact that the map which sends $\mathbf X \in \mathrm{FP}_p$ to its Snell envelope is $\tau_{AW}$-continuous.
Proof of Proposition 12. Let $A = \mathcal P_p(\mathbb R^d)$ and $\tau = \tau_W$. It is straightforward to check that $V_p$ is a pseudometric and $V_p \le W_p$. Moreover, as a simple consequence of Lemma 14 we find that $V_p$ separates points: if $V_p(P, Q) = 0$ then there exist martingale couplings $\pi \in \mathrm{Cpl}(P, Q)$ and $\pi' \in \mathrm{Cpl}(Q, P)$. Let $X = (X_t)_{t=1}^3$ be a Markov process with $(X_1, X_2) \sim \pi$ and $(X_2, X_3) \sim \pi'$. Then $X$ is a martingale with $X_1 \sim X_3$, and by Lemma 14 $X_1 = X_2$ almost surely, that is $P = Q$; hence $V_p$ is a metric on $\mathcal P_p(\mathbb R^d)$. We write $\tau_V$ for the topology induced by $V_p$ and get $\tau_V \subseteq \tau_W$. It remains to verify item (3) of Lemma 7.
To this end, let $(P_k)_{k\in\mathbb N}$ converge to $P$ in $\tau_V$; we want to show $W_p$-relative compactness of the sequence. By [3, Lemma 6.1], we have
$$V_p(P_k, P) = \inf\{ W_p(P_k, \eta) : \eta \le_{cx} P \} + \inf\{ W_p(P, \eta') : \eta' \le_{cx} P_k \},$$
where $\le_{cx}$ denotes the convex order on $\mathcal P_1(\mathbb R^d)$. Recall that, for $\mu, \nu \in \mathcal P_1(\mathbb R^d)$, $\mu \le_{cx} \nu$ means that $\int f\, d\mu \le \int f\, d\nu$ for all convex $f : \mathbb R^d \to \mathbb R$.
"Mathematics"
] |
Two Novel Mutations in Myosin Binding Protein C Slow Causing Distal Arthrogryposis Type 2 in Two Large Han Chinese Families May Suggest Important Functional Role of Immunoglobulin Domain C2
Distal arthrogryposes (DAs) are a group of disorders that mainly involve the distal parts of the limbs, and at least ten different DAs have been described to date. DAs are mostly described as autosomal dominant disorders with variable expressivity and incomplete penetrance, but recently an autosomal recessive pattern was reported in distal arthrogryposis type 5D. Mutations in contractile genes are found in about 50% of all DA patients. Of these genes, mutations in the gene encoding myosin binding protein C slow, MYBPC1, were recently identified in two families with distal arthrogryposis type 1B. Here, we describe two large Chinese families with autosomal dominant distal arthrogryposis type 2 (DA2) with incomplete penetrance and variable expressivity. Some unique overextension contractures of the lower limbs and some distinctive facial features were present in our DA2 pedigrees. We performed follow-up DNA sequencing after linkage mapping and identified, for the first time, two novel MYBPC1 mutations (c.1075G>A [p.E359K] and c.956C>T [p.P319L]) responsible for these Chinese DA2 families, of which one was introduced by germline mosaicism. Each mutation was found to cosegregate with the DA2 phenotype in each family but not in population controls. Both substitutions occur within the C2 immunoglobulin domain, which together with C1 and the M motif constitutes the binding site for the S2 subfragment of myosin. Our results expand the phenotypic spectrum of MYBPC1-related arthrogryposis multiplex congenita (AMC). We also propose possible molecular mechanisms that may underlie the pathogenesis of the DA2 myopathy associated with these two substitutions in MYBPC1.
Introduction
Distal arthrogryposis (DA) is a group of disorders that mainly involve the distal parts of the limbs and are characterized by congenital contractures of two or more different body areas [1]. Since Hall's classification of DA was revised [1,2], at least ten different forms of DA (DA1-DA10) have been reported. Distal arthrogryposes (DAs) have mostly been described as autosomal dominant disorders, but recently an autosomal recessive pattern was reported in distal arthrogryposis type 5D (DA5D) [3]. Gene discovery studies suggest that DA1 (MIM 108120), DA2B (Sheldon-Hall syndrome [SHS], MIM 601680) and DA2A (Freeman-Sheldon syndrome [FSS], MIM 193700) are the most common DAs. DA1, DA2B/SHS and DA2A/FSS share some major diagnostic criteria. However, they can be distinguished from one another based on diagnostic criteria that include the absence of facial contractures in most individuals with DA1, the presence of mild to moderate facial contractures in SHS [4], and the presence of moderate to severe facial contractures in FSS. Nevertheless, making the distinction between SHS and FSS based on clinical characteristics alone is so challenging that Stevenson and his colleagues proposed strict diagnostic criteria for FSS. In contrast to individuals with classical FSS, patients with SHS have a larger oral opening, a triangular face with a small pointed chin, and lack an H-shaped dimpling of the chin (H-chin) [5,6]. Additional features commonly found in FSS include scoliosis, prominent superciliary ridges, blepharophimosis, ptosis, strabismus, dental crowding, hypoplastic alae nasi, a long philtrum, and feeding difficulty at birth [2,5,7].
Myosin binding protein C (MyBP-C) constitutes a family of thick-filament-associated proteins; it contributes to the regular organization and stabilization of thick filaments and modulates the formation of cross-bridges between myosin and actin [19]. The core structure of MyBP-C is composed of seven immunoglobulin (Ig) domains and three fibronectin type III (Fn-III) repeats, numbered from the NH2-terminus as C1-C10. The C1 domain is flanked by two unique motifs: one enriched in proline and alanine residues, termed the Pro/Ala-rich motif, and a conserved linker referred to as the M motif (Fig. 1) [20]. Three isoforms of MyBP-C exist in striated muscles: cardiac, slow skeletal, and fast skeletal. To date, much of our knowledge on MyBP-C originates from studies that have focused on the cardiac form, owing to its direct involvement in the development of hypertrophic cardiomyopathy. By contrast, research into MYBPC1 is limited, with only three MYBPC1 mutations reported in human disease [12,21].
In this study, we performed follow-up DNA sequencing after linkage mapping in two large Chinese DA2 families and report two novel missense mutations in MYBPC1 that are involved in DA2. One of the DA kindreds was introduced by germline mosaicism of a de novo MYBPC1 mutation. Our results suggest that the immunoglobulin domain C2 of MYBPC1 may play an important role in binding to the S2 fragment of myosin. We also suggest possible molecular mechanisms that may underlie the pathogenesis of the DA2 myopathy associated with these two substitutions in MYBPC1.
Patients
These two autosomal dominant Han Chinese DA families (pedigrees X and H) are from Northeast China, and each kindred has multiple male and female affected individuals across four or five generations, with incomplete penetrance and variable expressivity (Fig. 1). Individuals were considered affected if they had at least one major diagnostic criterion in the context of an affected family. Eight affected individuals and one asymptomatic carrier in family X were available for clinical evaluation, while seven affected individuals and one asymptomatic carrier in family H were included in the investigation. Clinical findings in these two families include ulnar deviation of fingers, adducted stiff/clasped thumb, camptodactyly and hypoplastic and/or absent flexion creases in the upper limbs, and overriding toes, flexed toes and planovalgus in the lower limbs (details in Fig. 2 and Table 1). These clinical findings meet the major diagnostic criteria for DA. Notably, overextension contractures were observed at the metatarsophalangeal joints or the proximal interphalangeal joints of the toes of 4 affected individuals in family X and one in family H (Fig. 2 and Table 1).
Minor facial anomalies meeting the diagnosis of DA2, including downslanting palpebral fissures, prominent nasolabial folds, long philtrum, micrognathia, small mouth and small nose with small nostrils, were present in these two families. Hypertelorism, ptosis, strabismus, dental crowding and pinched/pouting lips were also observed (Table 1 and Fig. 3). Remarkably, a crease extending laterally and downward from the corners of the mouth or "non-H-shaped" cutaneous dimples on the sides of the chin were also present. In several affected individuals, the prominent nasolabial folds extended downward below the corners of the mouth so that they appeared like "parentheses" around the mouth (Fig. 3). Prominent superciliary ridges were well defined in three individuals of family X. Moreover, two vertical grooves paralleling "the parentheses" around the mouth on the cheeks of case X5, a very small mouth with pouting lips in case X6, and a deep vertical groove beside the left corner of the mouth of case H38 particularly caught our attention (Fig. 3).
Detailed neurological examinations were performed on the probands of the two families, and no weakness of any specific muscle group was noted. We could not obtain muscle biopsies from our patients affected with DA2 to evaluate myofibrillar changes. Feeding problems during the neonatal period were not reported. Physical examination showed a high degree of variability in expression, from the asymptomatic carrier to fully penetrant affected individuals with severe camptodactyly and facial contractures. Blood samples were collected from 20 and 40 members of families X and H, respectively.

MYH3 re-sequencing

Genomic DNA was isolated for direct sequencing and linkage mapping using standard methods [11]. We first performed mutation screening in MYH3, using previously reported primers, in the probands of these two DA2 families (case X6 and case H20), because mutations in this gene are the most common known cause of distal arthrogryposis [6].
Linkage analysis and haplotype analysis
After sequencing analysis of MYH3, linkage analysis was performed using 14 microsatellite markers flanking the 6 reported DA loci (detailed in Table 2) in these two DA2 families. It seems likely that recombination events happened at 2 of the three microsatellite markers flanking the MYBPC1 locus in family X; further linkage analysis using an additional set of 7 markers flanking the locus was therefore performed in this pedigree. Two-point linkage analysis was carried out using the MLINK program of the LINKAGE package (version 5.2) with the following parameters: autosomal dominant inheritance, penetrance of 0.95, a mutation rate of zero, equal male-female recombination rates, equal microsatellite allele frequencies, and a disease allele frequency of 1 in 10,000. To confirm that the DA2 phenotype was introduced into pedigree X by germline mosaicism of a de novo MYBPC1 mutation in case X1, haplotypes were constructed manually from the observed genotype data of the microsatellite markers.
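For orientation, the two-point LOD statistic underlying the MLINK analysis can be illustrated with a minimal sketch (ours, with hypothetical meiosis counts; the actual analysis used the LINKAGE package with the parameters above):

```python
import numpy as np

def lod(theta, n_recomb, n_nonrecomb):
    """Two-point LOD score for phase-known, fully informative meioses:
    log10 of the likelihood ratio between recombination fraction theta
    and free recombination (theta = 0.5)."""
    n = n_recomb + n_nonrecomb
    like_theta = (theta ** n_recomb) * ((1 - theta) ** n_nonrecomb)
    like_null = 0.5 ** n
    return np.log10(like_theta / like_null)

# Hypothetical counts: 11 informative meioses, no recombinants.
for t in np.linspace(0.001, 0.4, 5):
    print(f"theta={t:.3f}  LOD={lod(t, 0, 11):.2f}")
# With 0/11 recombinants the LOD peaks as theta -> 0 near log10(2**11) = 3.31,
# comparable in magnitude to the score of 3.01 reported for family X.
```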
Mutation analysis
According to the mapping results, the MYBPC1 (NM_002465.3) gene was sequenced in the probands of the two families (case X6 and case H20) using previously reported primers [12]. A novel MYBPC1 mutation was found in each proband. An Ecl136II recognition site is eliminated by the mutation in pedigree X, while a HinfI recognition site is introduced by the novel mutation in pedigree H. To confirm the variants, PCR was performed with primer pairs XF/XR or HF/HR (Table 3) using DNA from all family members of pedigree X or H, respectively, and the PCR product was digested with the restriction enzyme Ecl136II (pedigree X) or HinfI (pedigree H) according to the manufacturers' protocols. Digested products were fractionated by 8% polyacrylamide gel electrophoresis (PAGE) and analyzed by silver staining. We used this method to screen for the mutations in 220 ethnically matched control chromosomes. To demonstrate that somatic mosaicism in case X1 of pedigree X does not exist in her lymphocytes, the amplification refractory mutation system (ARMS) approach was applied. Primer pairs (sequences not given but available upon request) used in ARMS-PCR were designed with an online tool (http://primer1.soton.ac.uk/primer1.html) [22]. The ARMS-PCR was carried out with 2×Power Taq PCR MasterMix (BioTeke, Beijing, China). Products were detected and analyzed as in the RFLP analysis.
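The logic of the RFLP genotyping can be illustrated with a small sketch (ours; the 18-bp sequences below are hypothetical stand-ins, not the real MYBPC1 amplicons; the recognition sequences GAGCTC for Ecl136II and GANTC for HinfI are the enzymes' published sites):

```python
import re

# Published recognition sequences: Ecl136II cuts within GAGCTC (blunt);
# HinfI cuts within GANTC (N = any base).
SITES = {"Ecl136II": "GAGCTC", "HinfI": "GA[ACGT]TC"}

def find_sites(seq, enzyme):
    """Return 0-based start positions of an enzyme's recognition site."""
    return [m.start() for m in re.finditer(SITES[enzyme], seq.upper())]

# Hypothetical fragments for illustration only: a G>A change within the
# site abolishes the Ecl136II site (as the pedigree-X mutation does) ...
wt_x  = "ACCTGAGCTCAAGGTCCA"
mut_x = "ACCTGAACTCAAGGTCCA"   # G>A within GAGCTC -> no longer cut
print(find_sites(wt_x, "Ecl136II"), find_sites(mut_x, "Ecl136II"))  # [4] []

# ... while a C>T change can create a HinfI site (as in pedigree H).
wt_h  = "CCTAGATCCAGGTTACAG"
mut_h = "CCTAGATTCAGGTTACAG"   # C>T creates GATTC (matches GANTC) -> now cut
print(find_sites(wt_h, "HinfI"), find_sites(mut_h, "HinfI"))        # [] [4]
```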
In silico prediction of protein function
The possible impact of the amino acid substitutions on the structure and function of the human protein was estimated in silico with the following tools: SIFT (http://sift.jcvi.org/) and PolyPhen-2 (http://genetics.bwh.harvard.edu/pph2/). To estimate the importance of the novel mutations, the Vertebrate MultiZ Alignment and Conservation of the region surrounding each affected residue was obtained from the UCSC genome browser (http://genome.ucsc.edu/). The reference sequences used for the MYBPC1 gene with these tools were ENSP00000354849 and ENST00000361466 (Ensembl).
Ethics statement
The individual in this manuscript has given written informed consent (as outlined in PLOS consent form) to publish these case details. Parents or legal guardians provided written informed consent on behalf of minors and this study was approved by the Institutional Review Board at the Key Laboratory of Reproductive Health of the Liaoning Province (Shenyang, China).
Results
Linkage of the DA2 phenotype in these families to the MYBPC1 gene locus

We first performed mutation screening in MYH3 in these two DA2 families and no disease-causing mutations were found. We then performed linkage analysis using 14 microsatellite markers flanking the 6 reported DA loci (Table 2). Positive LOD scores of 3.01 or 7.22 were obtained at the MYBPC1 locus in families X and H, respectively.

Identification of two novel mutations in the MYBPC1 gene

Subsequent sequencing of MYBPC1 in the probands of the two families identified two novel missense mutations, c.1075G>A (p.E359K) in pedigree X and c.956C>T (p.P319L) in pedigree H (Fig. 1). Through polymerase chain reaction (PCR) using specific primer pairs (Table 3) followed by RFLP analysis, each of the mutations was shown to be present in all affected family members and the asymptomatic carrier, and absent in unaffected family members, of pedigree X or H (Fig. 1). Neither mutation was identified in 220 control chromosomes of Han Chinese ancestry. Both substitutions occur within the C2 immunoglobulin domain, which together with C1 and the M motif constitutes the binding site for the S2 subfragment of myosin (Fig. 1). Additionally, these two variants were not found in LOVD (http://www.lovd.nl/3.0/home), dbSNP (build 132, http://www.ncbi.nlm.nih.gov/projects/SNP/) or the 1000 Genomes Project pilot data (http://browser.1000genomes.org/index.html). The results of ARMS-PCR analysis in pedigree X were consistent with the RFLP analysis in this family, and the PAGE pattern (data not shown) was similar to that of the RFLP analysis (Fig. 1). Somatic mosaicism was not detected in the lymphocytes of case X1 in pedigree X, and other somatic samples such as buccal cells, urine sediment, vaginal, and/or cervical cells were not available for the detection of somatic mosaicism.
Prediction of protein function for the two MYBPC1 substitutions
Protein sequence alignment of MYBPC1 orthologs showed that the residues affected by the two variants are highly conserved down to zebrafish (Fig. 1). As with the previously reported DA1 mutations in MYBPC1, assessment using SIFT and PolyPhen-2 predicted tolerated and damaging effects for these DA2 mutations (Table 5). We believe these novel substitutions (p.E359K and p.P319L) in MYBPC1 are the pathogenic mutations in the two Chinese families with DA2.
Discussion
Classification of these two DA2 families and distinctive facial and limb features

DA is a group of clinically and genetically heterogeneous disorders. Classification of DAs may be difficult due to reduced penetrance, variable expressivity and overlapping features of the different forms, particularly among DA1, DA2B and DA2A. In a recent report, DA1 and DA2B were suggested to be phenotypic extremes of the same disorder [4]. In some studies DA1, DA2B and DA2A were even proposed to lie on a phenotypic continuum of the same disorder [23,24]. The findings of the most recent report indicate that DA3 and DA5 are etiologically related and perhaps represent variable expressivity of the same condition [18]. Despite the recognition of 2 distinct syndromes, DA2A (FSS) and DA2B (SHS), the differential diagnosis between these 2 disorders is so challenging that strict diagnostic criteria for classical FSS were proposed [5]. Besides the distal contractures in our two DA2 families (Fig. 2 and Table 1), substantial facial findings meeting the diagnosis of DA2 were observed (Table 1 and Fig. 3). As a result, the two Chinese families evaluated in this study were diagnosed with DA2. However, it is challenging to fit them accurately into DA2A/FSS or DA2B/SHS. In the opinion of Professor Michael Bamshad, "there is no typical H-chin in our affected individuals, and there are a few dimplings of the lower lip on several chins from our families; however, they are not more specific to that of FSS" (personal communication with Professor Michael Bamshad). Thus the clinical findings of these DA2 families seem to be more consistent with DA2B, even though there is no triangular face with small pointed chin in most affected individuals (Table 1 and Fig. 3). However, a crease extending laterally and downward from the corners of the mouth was found in 2 of 9 patients in pedigree X and 5 of 7 in pedigree H, respectively. This feature was first described by Fraser et al. in an adult with FSS [25] and has also been found in most reported FSS adults (Group 1 in Table 6). Moreover, this feature was remarkable in most of the reported infants or children with FSS with a classical H-chin (Group 2 in Table 6). More strikingly, in some reports the features of the chin of individuals with FSS could be described both as an "H-chin" and as "a crease extending laterally and downward from the corners of the mouth" (Group 3 in Table 6). In contrast, this feature was absent in most reported individuals with DA2B (Group 4 in Table 6). Additionally, other features commonly found in FSS, including dental crowding, pinched/pouting lips, strabismus and a very small mouth, were also observed in these two DA2 families (Table 1 and Fig. 3). Thus we could not exclude the possibility that these two DA2 families are affected with DA2A. Remarkably, in three affected individuals of pedigree X the prominent nasolabial folds extended downward below the corners of the mouth so that they appeared like "parentheses" around the mouth. Two vertical grooves paralleling "the parentheses" were present on the cheeks of case X5 (this feature was also found in another two DA2 patients from 2 other DA pedigrees of ours; unpublished data), and a deep vertical groove beside the left corner of the mouth of case H38 was also striking (Fig. 3). The deep vertical groove beside the corner of the mouth and the "parentheses" were not previously reported and are more likely to be residual forms of "dimpling of the lips".
Except for DA10, which is characterized by plantar flexion contractures and can be distinguished from other DA syndromes by this distinctive lower limb phenotype [26], the various forms of DA may have such similar limb phenotypes that they can hardly be distinguished from one another based on distal limb contractures alone [27]. However, considerable intrafamilial variability in the penetrance of hand contractures has been underscored in families with DA1 [12]. In the DA1B families with MYBPC1 mutations, the lower limb contractures were fully penetrant and more severely affected than the upper limbs [12]. In contrast, the upper limb contractures were more penetrant and more severely affected than the lower limbs in these DA2 families caused by MYBPC1 mutations (details in Table 1).

Table 6. Summary of the features of the chin of FSS or SHS cases from the literature. (Columns: features of the chin; cases and figures; reference; memo.) Notes from the table: a crease extending laterally and downward from the corners of the mouth (abbreviated "CFCM") was present in the reported limited number of adult FSS cases; in some cases the features of the chin of FSS could be described as either an "H-chin" or "the CFCM".

That mutations of the same type in the same gene, but located in different domains, can lead to different types of DA has been exemplified by missense mutations in MYH3: almost all FSS mutations are predicted to affect the ATP binding and hydrolysis domain of embryonic myosin, whereas mutations that cause SHS disturb amino acid residues on the surface of embryonic myosin [6]. Due to the limited number of DA cases with MYBPC1 mutations reported, we could not draw a positive correlation between genotype and phenotype similar to that of MYH3 mutations causing FSS or SHS, and thus it is not clear whether the limb phenotypic differences between the DA1B families and these DA2 families result from their respective genotypes or from other factors. On the other hand, together with the study of DA1B families caused by MYBPC1 mutations, our report of MYBPC1 mutations in DA2 families may add further genetic evidence supporting the hypothesis that DA1, DA2B and DA2A may lie on a phenotypic continuum of the same disorder. The overextension contracture was only reported in the first DA1B family, where it was present at the proximal interphalangeal joint of the fourth finger in two individuals [12]. Notably, overextension contractures were also observed in these two Chinese DA2 families caused by MYBPC1 mutations; however, these overextension contractures were observed at the metatarsophalangeal joints or the proximal interphalangeal joints of the toes of the affected individuals (Fig. 2 and Table 1). More studies are needed to ascertain whether overextension contracture is specific to DA patients with MYBPC1 mutations.
Germline with somatic mosaicism in case X1 of pedigree X

Mosaicism in germ cells has been recognized as an important and relatively frequent mechanism at the origin of genetic disorders [28]. Some pedigrees with DAs introduced by germline mosaicism (GM) of TNNI2 or TNNT3 mutations have been reported [4,9,23]. In pedigree X, case X1 was initially taken to be a clinically normal individual. Haplotype analysis indicated that the common haplotype shared by all affected individuals is derived from case X1 (Fig. 4). This finding suggests that she may be an asymptomatic carrier. However, the MYBPC1 mutation p.E359K in this pedigree was not detected in her lymphocytes by direct sequencing, RFLP analysis or ARMS-PCR. These findings suggest that DA2 in pedigree X may have been introduced by GM present in her germ cells. Case X1, the founder of kindred X, was then re-examined for clinical evaluation. Minor facial contractures were noted (Table 1 and Fig. 3), while both her upper and lower limbs were unaffected, indicating that she is a mildly symptomatic individual and may also carry somatic mosaicism of the MYBPC1 mutation. Although it was not detected in her lymphocytes, it may exist in cell populations that were not tested (e.g., buccal cells, urine sediment, vaginal, or cervical cells). Germline mosaicism violates the assumptions underlying classic genetic analysis and may lead to failure of such analysis. Fortunately, the common haplotype of the affected individuals was not shared by the unaffected individuals (X16 and X20) in pedigree X (Fig. 4), so that solid statistical evidence could be obtained from classical two-point linkage analysis in this kindred. Otherwise, the extended statistical model for genetic linkage analysis in the presence of germline mosaicism [29], or whole-exome sequencing, could be introduced to identify the disease-causing gene. This is the first report of a large DA2 pedigree introduced by germline mosaicism of a de novo MYBPC1 mutation.
Function prediction
A possible mutation hot spot at the NH2-terminus of MyBP-C slow. To date, only three MYBPC1 mutations have been reported in human disease. Besides the two distinct DA1B missense mutations [12], a homozygous MYBPC1 nonsense mutation was recently reported in autosomal recessive lethal congenital contracture syndrome type 4 (LCCS4) [21]. The DA1 MYBPC1 p.W236R and p.Y856H mutations are located within the M motif and the C-terminal C8 domain, respectively [12], while both the LCCS4 mutation (p.R318X) and our two DA2 mutations (p.P319L and p.E359K) in MYBPC1 are located in the C2 domain (Fig. 1). It seems likely that the fragment comprising the M motif and the C2 domain is a mutation hot spot (4/5) at the NH2-terminus of MyBP-C slow. Nevertheless, this observation is drawn from limited reports, and more MYBPC1 mutations involved in human disease are needed to confirm it.

Impact on energy metabolism and homoeostasis during muscle contraction. Muscle contraction requires high-energy fluxes that are supplied by muscle-type creatine kinase (MM-CK), which couples to the myofibril. This coupling is mediated by MYBPC1: MM-CK binds to the C-terminal domain of MYBPC1, which is also the binding site of myosin. Thus, MYBPC1 acts as an adaptor connecting the ATP consumer (myosin) and the regenerator (MM-CK) for efficient energy metabolism and homoeostasis [30]. Given that the coupling of MM-CK to the myofibril is mediated by the C-terminal fragments of MYBPC1 (domains C6-C10), but not its NH2-terminal fragments [30], the crucial role of MYBPC1 in energy homoeostasis during muscle contraction does not seem to be affected by our two DA2 mutations or by the reported DA1 mutation (p.W236R), located within the C2 domain and M motif, respectively. By contrast, the other DA1 mutation (p.Y856H) is located within the C8 domain, which is included in the COOH-terminal fragment of MYBPC1 that recruits MM-CK to myosin; the adaptor role of MYBPC1 in bridging MM-CK and myosin therefore seems likely to be impacted. It thus seems possible that an imbalance between ATP production and utilization in muscle contraction may also underlie the pathogenesis of the DA1 myopathy associated with the p.Y856H mutation.
Important functional role of domain C2 of MYBPC1 and possible pathogenic mechanism. Much of our knowledge on the ligands of MyBP-C originates from the numerous studies that focus on the cardiac isoform; the binding of the skeletal isoforms of MyBP-C to their partners is much less well characterized [19,31]. Characterization of the NH2-terminal binding of slow MyBP-C has merely shown that the first two immunoglobulin domains (C1-C2) bind to the S2 region of myosin [32], while the contribution of each domain (C1, M motif and C2) to this binding remains elusive. By contrast, the binding site of each domain of cardiac MyBP-C on the S2 region has been well characterized [19]. Specifically, the M motif has been shown to bind directly to the NH2-terminal 126 residues of the S2 fragment (S2Δ) [33]. Similarly, the C2 domain has also been shown to interact with the same S2Δ fragment, albeit with considerably lower affinity but highly specific binding compared to the M motif [34]. Similar to its cardiac isoform, key contributions of the M motif and domain C2 of slow MyBP-C may exist in its NH2-terminal binding to the S2 region of myosin. Therefore, the two DA2 MYBPC1 mutations in the present study (p.P319L and p.E359K) may point to an important role of domain C2 of MYBPC1 in binding to the S2 fragment of myosin.
Mutations in MYBPC3 causing autosomal dominant hypertrophic cardiomyopathy through haploinsufficiency [35] may lead us to suppose that mutations in MYBPC1, like those in its cardiac counterpart, cause disease through gene haploinsufficiency. However, MYBPC1 haploinsufficiency was not reported to cause muscle disease in the heterozygous carriers of the Bedouin kindreds with autosomal recessive LCCS4 due to homozygous MYBPC1 nonsense mutations [21]. Moreover, the presence of the DA1 missense mutations does not seem to affect the expression levels of MYBPC1 [36]. More importantly, dominant negative effects of human DA1 MYBPC1 missense mutations on muscle function have been demonstrated in zebrafish models of arthrogryposis [37]. Thus, it seems reasonable to speculate that these two DA2 families associated with the p.E359K and p.P319L mutations may be affected through dominant negative impairment of the regulatory properties of the NH2 terminus of MYBPC1.
Possible molecular mechanisms
There are extensive studies on domain mapping of cardiac MyBP-C binding with its partners [19]; exogenous expression of MYBPC1 containing the human DA1 mutations in murine muscle demonstrated correct sarcomeric localization of the MYBPC1 mutant proteins [12]; in vitro binding and motility assays showed that the actomyosin regulatory properties of MYBPC1 are completely abolished by the presence of the DA1 mutations [38]; and dominant-negative effects of human DA1 MYBPC1 missense mutations have been suggested in zebrafish models of arthrogryposis [37]. Nevertheless, the molecular mechanism that may underlie the pathogenesis of the DA2 myopathy associated with the p.P319L and p.E359K mutations is currently under investigation. The structure of domain C2 of human cardiac MyBP-C (cC2) determined by NMR spectroscopy, together with a realistic structural model of the cC2-S2Δ complex, has been proposed [34]. Domain cC2 has the β-sandwich structure expected of a member of the immunoglobulin I-set. One sheet is formed by strands ABED, and the other by strands C'CFGA'. According to the alignment of sequences of domain C2 from various MyBP-C isoforms [34], our DA2 mutations, p.P319L and p.E359K, are located in the linker between strands C' and D and in the C-terminal strand G, respectively; strand G is part of the C'CFGA' β-sheet on which the S2Δ-specific binding site is located. In the proposed structural model, cC2 alone binds to S2Δ with low affinity but exhibits a highly specific binding site arising from surface charge complementarity between key polar amino acids of both proteins [34]. For cC2, Glu296 (in the authors' numbering scheme) is among the residues that define the S2 binding site. Moreover, Glu296 is one of the residues making the most hydrogen bond and/or salt bridge interactions with adjacent residues in S2Δ, and multiple alignments revealed that this residue is extremely well conserved in C2 of all isoforms and species [34]. Significantly, the DA2 mutation p.E359K in MYBPC1 found in pedigree X corresponds to this key residue Glu296. In contrast to other IgI domains from MyBP-C there are no isoform-specific insertions or deletions, so the overall shape of the C2 domain is expected to be very similar in all isoforms [34]. Accordingly, it seems reasonable to speculate that the domain C2 of human slow MyBP-C (sC2) interacts with the S2 fragment of myosin similarly to cC2, and that glutamic acid-359 contributes directly to the binding of sC2 to the S2 fragment of myosin. Since the absence of positive charges in the S2 binding site of cC2 is well conserved in C2 of all isoforms and species [34], the substitution of the negatively charged glutamic acid-359 by a positively charged lysine may directly induce an unfavorable electrostatic potential change that impairs the binding of sC2 to S2 of myosin, leading to loss of regulation. Consequently, the finding of the MYBPC1 mutation p.E359K in DA2 seems to provide further genetic evidence, in the slow skeletal counterpart, supporting the proposed structural model of the cC2-S2Δ complex. The other DA2 MYBPC1 mutation, p.P319L, lies relatively far from the negatively charged S2Δ binding site, and both residues involved in this substitution are nonpolar and hydrophobic. Therefore, it seems likely that a different molecular mechanism underlies the pathogenesis associated with the p.P319L substitution, compared to p.E359K.
The linker between strands C' and D, within which the p.P319L substitution is located, lies at the C-terminal thin edge of the C2 domain, which has a wedge-like appearance, and the C terminus of the overall structure of cC2 makes the main contact with S2Δ in the model of the cC2-S2Δ complex [34]. In addition, the distinctive cyclic structure of proline's side chain locks its φ backbone dihedral angle at a fixed value, giving proline exceptional conformational rigidity compared to other amino acids. This distinct side chain/amine group interaction allows proline to play an important role in the formation of beta turns and other common turns. Although the linker between strands C' and D is not a defined beta turn, it does form a turn from strand C' to D, to whose formation proline-319 in human MYBPC1 may contribute substantially. Accordingly, it is tempting to speculate that substitution of the distinctive proline-319, with its cyclic side chain, by the linear branched-chain leucine may induce an unfavorable conformational change that affects the binding of sC2 to the S2 fragment of myosin.
Conclusion
In summary, we identified two novel MYBPC1 mutations in two large Han Chinese families with distal arthrogryposis type 2, and we observed some unique overextension contractures of the lower limbs and some distinctive facial features in our DA2 pedigrees. Our work represents the first report of a link between MYBPC1 and the DA2 phenotype, and one of the two pedigrees was introduced by germline mosaicism. Our results expand the phenotypic spectrum of MYBPC1-related arthrogryposis multiplex congenita (AMC), and we speculate that the domain C2 of MYBPC1 may play an important role in binding to the S2 fragment of myosin. The p.E359K substitution in DA2 may also support the proposed structural model of the cC2-S2Δ complex and the notion that most key interactions of the two partners are between polar amino acids. We expect that this report of two novel mutations (p.P319L and p.E359K) located in the C2 domain of MYBPC1 in DA2 patients, together with our suggestions on the possible molecular mechanisms that may underlie the pathogenesis of the associated DA2 myopathy, will stimulate future research to further refine the details of the NH2-terminal interaction of slow MyBP-C with myosin or its other partners and their importance for myopathy associated with AMC.
"Medicine",
"Biology"
] |
Evidence for leptonic CP phase from NO$\nu$A, T2K and ICAL: A chronological progression
We study the synergy between the long-baseline (LBL) experiments NO$\nu$A and T2K and the atmospheric neutrino experiment ICAL@INO for obtaining the first hint of CP violation in the lepton sector. We also discuss how precisely the leptonic CP phase ($\delta_{CP}$) can be measured by these experiments. The CP sensitivity is first described at the level of oscillation probabilities, discussing its dependence on the parameters -- $\theta_{13}$, mass hierarchy and $\theta_{23}$. In particular, we discuss how the precise knowledge or lack thereof of these parameters can affect the CP sensitivity of LBL experiments. We follow a staged approach and analyze the $\delta_{CP}$ sensitivity that can be achieved at different points of time over the next 15 years from these LBL experiments alone and/or in conjunction with ICAL@INO. We find that the CP sensitivity of NO$\nu$A/T2K is enhanced due to the synergies between the different channels and between the two experiments. On the other hand the lack of knowledge of hierarchy and octant makes the CP sensitivity poorer for some parameter ranges. Addition of ICAL data to T2K and NO$\nu$A can exclude these spurious wrong-hierarchy and/or wrong-octant solutions and cause a significant increase in the range of $\delta_{CP}$ values for which a hint of CP violation can be achieved. In fact in parameter regions unfavourable for NO$\nu$A/T2K, we may get the first evidence of CP violation by adding the ICAL data to these. Similarly the precision with which $\delta_{CP}$ can be measured also improves with inclusion of ICAL data.
I. INTRODUCTION
In the present status of neutrino oscillation physics, a fair amount of knowledge about the oscillation parameters has been gained from solar, atmospheric, accelerator and reactor experiments.
In the standard 3-flavour scenario there are 6 parameters governing the oscillation of the neutrinos.
These are the three mixing angles $\theta_{12}$, $\theta_{23}$, $\theta_{13}$, two mass squared differences $\Delta m^2_{31}$, $\Delta m^2_{21}$ ($\Delta m^2_{ij} = m_i^2 - m_j^2$), and the Dirac CP phase $\delta_{CP}$. Among these the unknown parameters are: (i) the sign of $\Delta m^2_{31}$ ($\Delta m^2_{31} > 0$ corresponds to Normal Hierarchy (NH); $\Delta m^2_{31} < 0$ corresponds to Inverted Hierarchy (IH)); (ii) the octant of $\theta_{23}$ ($\theta_{23} > 45^\circ$ corresponding to Higher Octant (HO) or $\theta_{23} < 45^\circ$ corresponding to Lower Octant (LO)); (iii) the CP phase $\delta_{CP}$; a value of this parameter different from $0^\circ$ or $180^\circ$ would signal CP violation in the lepton sector. CP violation has been observed in the quark sector and this can be explained by the complex phase of the CKM matrix [3][4][5]. The origin of this could be complex Yukawa couplings and/or complex vacuum expectation values of the Higgs field. In such cases it is plausible that there can be a complex phase, analogous to the CKM phase, in the leptonic mixing matrix as well. This can lead to CP violation in the lepton sector [6]. However, experimental detection of this phase is necessary to establish this expectation on a firm footing. The determination of the leptonic CP phase is interesting not only in the context of fully determining the MNSP mixing matrix but also because it could be responsible for the observed matter-antimatter asymmetry through the mechanism of leptogenesis [7,8].
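For orientation, in the standard (PDG-like) parameterization the MNSP matrix factorizes as (shown here for reference; sign conventions can differ between papers):
$$U = \begin{pmatrix} 1 & 0 & 0\\ 0 & c_{23} & s_{23}\\ 0 & -s_{23} & c_{23}\end{pmatrix} \begin{pmatrix} c_{13} & 0 & s_{13} e^{-i\delta_{CP}}\\ 0 & 1 & 0\\ -s_{13} e^{i\delta_{CP}} & 0 & c_{13}\end{pmatrix} \begin{pmatrix} c_{12} & s_{12} & 0\\ -s_{12} & c_{12} & 0\\ 0 & 0 & 1\end{pmatrix},$$
with $c_{ij} = \cos\theta_{ij}$ and $s_{ij} = \sin\theta_{ij}$. Since $\delta_{CP}$ enters only in products with $s_{13}$, a vanishing $\theta_{13}$ would render leptonic CP violation unobservable in oscillations.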
Since $\delta_{CP}$ occurs with the mixing angle $\theta_{13}$ in the MNSP matrix, the recent measurement of a non-zero and moderately large value of this angle by reactor and accelerator experiments is expected to be conducive to the measurement of $\delta_{CP}$. The current best-fit value of $\theta_{13}$ from global oscillation analyses is $\sin^2 2\theta_{13} \approx 0.10 \pm 0.01$ [1,9,10]. If $\theta_{13}$ were very small then any measurement of $\delta_{CP}$ would have required high intensity sources [11]. However, the moderately large value of $\theta_{13}$ makes it worthwhile to explore whether $\delta_{CP}$ can be measured and any evidence of CP violation can be obtained by the current and upcoming experiments using conventional beams. Many recent studies have investigated this issue [12][13][14][15][16] in the context of the LBL experiment T2K, which is currently running [17], and NOνA [18], which is expected to start taking data in the near future. Earlier studies on the measurement of leptonic CP violation by conventional superbeam experiments can be found in e.g. [19][20][21][22][23].
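The interplay of $\delta_{CP}$, $\theta_{13}$, and the matter effect can be made concrete with the well-known approximate $\nu_\mu \to \nu_e$ appearance probability in constant-density matter (a sketch for illustration only, using an expansion to first order in $\alpha = \Delta m^2_{21}/\Delta m^2_{31}$; the parameter values are indicative, and this is not the exact probability used in the paper's simulations):

```python
import numpy as np

def p_mue(E, L, dcp, rho=2.8, hierarchy=+1, antinu=False,
          s2_2th13=0.10, th23=np.pi / 4, th12=0.588,
          dm31=2.4e-3, dm21=7.6e-5):
    """Approximate nu_mu -> nu_e appearance probability in constant-density
    matter; E in GeV, L in km, rho in g/cm^3, hierarchy=+1 (NH) or -1 (IH).
    Based on the standard first-order expansion in alpha = dm21/dm31."""
    sgn = -1.0 if antinu else 1.0          # antineutrinos: dcp, a -> -dcp, -a
    dm31 = hierarchy * dm31
    th13 = 0.5 * np.arcsin(np.sqrt(s2_2th13))
    D31 = 1.267 * dm31 * L / E             # dimensionless Delta_31
    D21 = 1.267 * dm21 * L / E
    aL = sgn * L / 3500.0 * (rho / 3.0)    # matter term; a ~ (3500 km)^-1 at 3 g/cc
    sqrt_Patm = np.sin(th23) * np.sin(2 * th13) * np.sin(D31 - aL) / (D31 - aL) * D31
    sqrt_Psol = np.cos(th23) * np.sin(2 * th12) * np.sin(aL) / aL * D21
    return (sqrt_Patm ** 2 + sqrt_Psol ** 2
            + 2 * sqrt_Patm * sqrt_Psol * np.cos(D31 + sgn * dcp))

# NOvA-like baseline and energy: the probability swings by tens of percent
# (relative) as dcp is varied, which is the handle on leptonic CP violation.
for d in (-np.pi / 2, 0.0, np.pi / 2):
    print(f"dcp={d:+.2f}: P={p_mue(2.0, 810, d):.4f}")
```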
A potential problem in determining δCP comes from the lack of knowledge of the hierarchy, which gives rise to wrong-hierarchy/wrong-δCP solutions [24,25]. Prior knowledge of the hierarchy can eliminate these fake solutions, thereby enhancing the CP sensitivity. However, since the baselines of T2K and NOνA are not too large, they have limited hierarchy sensitivity. Moreover, this sensitivity depends on the true value of δCP chosen by nature [24,26]. It has been shown recently in [15] that these experiments can determine the hierarchy at 90% C.L. only for favourable combinations of parameters: {δCP ∈ [−180°, 0°], NH} or {δCP ∈ [0°, 180°], IH}. For the complementary, unfavourable combinations the hierarchy sensitivity of these experiments is low, because of which their δCP sensitivity is compromised.
In this paper, we expand on the observation made in [27] regarding the synergy between existing and upcoming atmospheric and long-baseline experiments for measuring δCP. The central idea is that, due to the large value of θ13, atmospheric neutrinos passing through the earth experience appreciable matter effects, leading to an enhanced hierarchy sensitivity. Moreover, this sensitivity does not depend crucially on the true δCP value. Thus the addition of atmospheric information to the data from LBL experiments can increase the hierarchy sensitivity in the region unfavourable for the latter. This feature leads to an enhanced CP sensitivity for LBL experiments when atmospheric data is included in the analysis [27], despite atmospheric neutrino experiments themselves not having any appreciable sensitivity to δCP. Usually, studies of CP sensitivity are done assuming the hierarchy and octant to be known. In our study we quantify explicitly the exposures required by a realistic atmospheric neutrino experiment to achieve this.
We analyze in detail the individual and combined δCP sensitivity of LBL and atmospheric neutrino experiments, both at the level of oscillation probabilities and with simulations of the relevant experimental set-ups. For the long-baseline experiments we consider T2K, which is already running, and NOνA, which is expected to start taking data in 2014. For atmospheric neutrinos we choose the magnetized iron calorimeter detector (ICAL) being constructed by the India-based Neutrino Observatory (INO) collaboration. For our study we adopt a staged approach, looking at the data that will be available from these experiments at different chronological points over the next 15 years. We explore whether the LBL experiments T2K and NOνA can give any evidence of δCP being different from 0° and 180° by themselves or in combination with data from ICAL@INO. We also present results for the precision measurement of δCP for the LBL experiments individually and in combination with ICAL@INO. In addition to the role of atmospheric neutrino data in resolving the hierarchy-δCP degeneracy, its impact on removing the octant-δCP degeneracy is also examined. We also study how the CP sensitivity of the LBL experiments varies with θ13 in its current range.
Many future experiments are being planned to address the resolution of the mass hierarchy and the determination of the CP phase δCP, including LBNE [30], LBNO [31], T2HK [32], ESS [33] etc. The planning of these facilities is expected to benefit from a detailed assessment of the capabilities of the currently running or under-construction experiments [34]. In this context it also makes sense to ask whether, if T2K and NOνA do not see CP violation within their currently projected run times, they can achieve this with extended run times [12,15,16]. In this paper we explore this possibility to assess the ultimate reach of these experiments to detect CP violation. The paper is organized as follows. In Section II, we describe the δCP dependence of the neutrino oscillation probabilities and how it is correlated with the other parameters. Section III gives the experimental details of the long-baseline experiments (NOνA and T2K) and the atmospheric neutrino experiment (ICAL) considered in our study. Section IV outlines the results for the δCP measurement and the CP violation discovery potential of NOνA and T2K with different exposures corresponding to different points of time in the future. In Section V we discuss the dependence of the CP sensitivity of NOνA and T2K on the neutrino parameters and the synergies between the different channels. Section VI analyzes the CP sensitivity of atmospheric neutrino experiments with a magnetized iron detector, focusing on the CP measurement potential for a combination of NOνA and T2K with ICAL.
We summarize the conclusions in Section VII.
II. EFFECT OF HIERARCHY AND OCTANT ON δ CP SENSITIVITY
The sensitivity to δCP and the potential for CP violation discovery can be understood from the oscillation probabilities in matter [29,35,36]. The predominant contribution to the δCP sensitivity is from the νµ → νe oscillation probability (Pµe), whose δCP dependence enters through a sub-leading term suppressed by the small solar mass-squared difference. In matter of constant density, Pµe can be expressed in terms of the small parameters α = ∆21/∆31 and s13 as [37][38][39]

$$P_{\mu e} \simeq \sin^2\theta_{23}\,\sin^2 2\theta_{13}\,\frac{\sin^2[(1-\hat{A})\Delta]}{(1-\hat{A})^2} + \alpha\,\sin 2\theta_{13}\,\sin 2\theta_{12}\,\sin 2\theta_{23}\,\cos(\Delta+\delta_{CP})\,\frac{\sin(\hat{A}\Delta)}{\hat{A}}\,\frac{\sin[(1-\hat{A})\Delta]}{1-\hat{A}} + \mathcal{O}(\alpha^2). \quad (1)$$

Here ∆ = ∆m²31 L/(4E) and Â = 2√2 G_F n_e E/∆m²31, with n_e the electron number density.
Since δCP appears in the expression coupled with the atmospheric mass-squared difference in the term cos(∆ + δCP), Pµe suffers from the hierarchy-δCP degeneracy, which potentially limits the CP sensitivity [25]. The ambiguity shows up only in specific half-planes of true δCP, depending on the true mass hierarchy [15]. For neutrinos, the probability Pµe is higher for NH than for IH due to matter effects, as seen from the first term in Eq. (1). For antineutrinos with test hierarchy NH, since the matter effect is weaker, the probabilities will be somewhat lower than the corresponding ones for neutrinos. In addition, the curves for +90° and −90° will be interchanged, but the separation of these two cases from the CP-conserving values remains comparable. When the test hierarchy is IH, the antineutrino probabilities are higher because of enhanced matter effects. This, compounded with the flipping of the +90° and −90° probability curves relative to NH, means that the LHP (the lower half-plane, δCP ∈ [−180°, 0°]) remains favourable for lifting the degeneracy.
For true IH, the opposite is true, i.e. δCP values in the UHP (the upper half-plane, δCP ∈ [0°, 180°]) are favourable for resolving the degeneracy for both neutrinos and antineutrinos.
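As a rough cross-check of this behaviour, the expansion in Eq. (1) is easy to evaluate numerically. The sketch below is ours, not the authors' code; the crust density ρ = 2.8 g/cm³, the electron fraction Ye = 0.5 and the function name pmue are illustrative assumptions:

```python
import numpy as np

def pmue(E, L, dcp, dm31=2.4e-3, dm21=7.6e-5, s2_23=0.5,
         s2_2th13=0.1, s2_2th12=0.86, rho=2.8, antinu=False):
    """Two-term expansion of P(nu_mu -> nu_e) in constant-density matter, Eq. (1).

    E in GeV, L in km, mass splittings in eV^2; dm31 < 0 selects IH.
    """
    if antinu:                              # nubar: flip delta_CP and matter sign
        dcp, rho = -dcp, -rho
    a_cc = 7.63e-5 * 0.5 * rho * E          # 2*sqrt(2)*G_F*n_e*E in eV^2 (Y_e = 0.5)
    Ahat = a_cc / dm31
    alpha = dm21 / dm31
    D = 1.267 * dm31 * L / E                # Delta = dm31 * L / (4E)
    th23 = np.arcsin(np.sqrt(s2_23))
    lead = s2_23 * s2_2th13 * np.sin((1 - Ahat) * D)**2 / (1 - Ahat)**2
    sub = (alpha * np.sqrt(s2_2th13) * np.sqrt(s2_2th12) * np.sin(2 * th23)
           * np.cos(D + dcp)
           * np.sin(Ahat * D) / Ahat * np.sin((1 - Ahat) * D) / (1 - Ahat))
    return lead + sub

# NOvA-like setup: the NH neutrino probabilities sit above the IH ones, and
# for NH, delta_CP = -90 deg enhances Pmue relative to +90 deg, as in the text.
for hier, dm31 in [("NH", 2.4e-3), ("IH", -2.4e-3)]:
    for d in (np.pi / 2, -np.pi / 2):
        print(hier, f"delta = {np.degrees(d):+4.0f} deg:",
              f"Pmue = {pmue(E=2.0, L=812.0, dcp=d, dm31=dm31):.4f}")
```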
The generalized octant-δCP degeneracy, between the parameter sets (θ23, θ13, δCP) and (θ′23, θ′13, δ′CP), has been elucidated in detail in [41,42], extending the conventional definition. This includes the possibility that a test value of θ23 lying anywhere in the 'wrong' octant may give the same probability. The recent tighter constraint on the value of θ13 helps to weaken the degeneracy with this parameter, but the ambiguity between the two octants for different values of δCP still remains.
Whether this degeneracy is manifested in the results for CP sensitivity from an experiment depends on whether the experiment (or combination of experiments) is capable of determining the octant and resolving the octant-δCP degeneracy to a good enough confidence level. The octant sensitivity arises from the sin²θ23 dependence in the leading term, and depends significantly on the true values of the oscillation parameters δCP and θ23 [41]. Since Pµe increases with θ23 and with δCP in the LHP, while δCP in the UHP pulls it down, a true value of θ23 lying in the higher octant (HO) is more likely to suffer from the octant-δCP degeneracy if the true value of δCP is in the UHP. On the other hand, if true θ23 is in the lower octant (LO), then true δCP lying in the LHP would raise Pµe and lead to an ambiguity with Pµe values corresponding to test θ23 in the HO. Hence the LHP is favourable for resolving the octant-δCP degeneracy in the case of true HO, and the UHP is favourable for true LO. These features are reflected in Fig. 2, where we show the effect of the octant degeneracy in distinguishing between the CP-conserving and maximally CP-violating cases for Pµe, for the NOνA baseline. The upper panel is for neutrinos whereas the lower panel is for antineutrinos. The shaded region corresponds to true LO. The first panel shows that for true LO and true δCP = −90° (LHP) the two cases cannot be distinguished, whereas in the second panel, for δCP = 90° (UHP), a clear separation is seen. For antineutrinos the behaviour for LHP and UHP is opposite. This indicates that a combination of neutrinos and antineutrinos would be conducive to the removal of the octant-δCP degeneracy [12,43]. For true HO the behaviour is opposite.
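A toy illustration of this degeneracy, using the vacuum limit (Â → 0) of Eq. (1); all parameter values here are illustrative choices of ours:

```python
import numpy as np

def pmue_vac(E, L, dcp, s2_23, dm31=2.4e-3, dm21=7.6e-5,
             s2_2th13=0.1, s2_2th12=0.86):
    """Vacuum limit (A-hat -> 0) of Eq. (1); enough to exhibit the octant
    degeneracy. All parameter values here are illustrative assumptions."""
    D = 1.267 * dm31 * L / E                # Delta, with L in km and E in GeV
    alpha = dm21 / dm31
    th23 = np.arcsin(np.sqrt(s2_23))
    lead = s2_23 * s2_2th13 * np.sin(D)**2
    sub = (alpha * D * np.sin(D) * np.sqrt(s2_2th13) * np.sqrt(s2_2th12)
           * np.sin(2 * th23) * np.cos(D + dcp))
    return lead + sub

E, L = 2.0, 812.0                           # NOvA-like energy and baseline
p_lo = pmue_vac(E, L, dcp=-np.pi / 2, s2_23=0.448)  # true LO (~42 deg), max CPV
p_ho = [pmue_vac(E, L, dcp=d, s2_23=0.552) for d in (0.0, np.pi)]  # HO, CP-cons.
print(f"LO, delta=-90 : {p_lo:.4f}")
print(f"HO, delta=0   : {p_ho[0]:.4f}")
print(f"HO, delta=180 : {p_ho[1]:.4f}")
# The LO maximal-CP-violation value lies inside the band spanned by the HO
# CP-conserving values, so neutrino data alone cannot separate the two cases.
```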
III. EXPERIMENTAL DETAILS
For the long-baseline experiments NOνA and T2K, the simulation is done using the GLoBES package [44][45][46][47]. T2K (L = 295 km) is assumed to have a 22.5 kt water Čerenkov detector and a 0.77 MW beam running effectively for 5(ν) + 0(ν̄) or 3(ν) + 2(ν̄) years by 2016. The initial plan of T2K was to run with 10²¹ protons on target (pot)/year for five years [48]. However, because of a natural calamity, T2K has not yet been able to achieve its full capacity. We take into account its present lower-power run as well as the planned upgrades to give a total of 5 effective T2K years till 2016 (i.e. a total of 5 × 10²¹ pot). We also consider the option of T2K running for 5(ν) + 5(ν̄) years by 2021, to ascertain whether such an extension would be advantageous. For NOνA (L = 812 km), we consider a 14 kt TASD detector with a 0.7 MW beam delivering 7.3 × 10²⁰ pot/year, running for 3(ν) + 3(ν̄) years by 2020 and 5(ν) + 5(ν̄) years by 2024. In this work we use a re-optimized NOνA experimental set-up with refined event selection criteria [14,49]. Detailed specifications of these experiments are given in [14,18,21,48,50-52].
For atmospheric neutrinos, we analyze a magnetized iron calorimeter detector (ICAL) of the type planned by the India-based Neutrino Observatory (INO), which will detect muon events with charge-identification capability [53]. We use constant neutrino energy and angular detector resolutions of 10% and 10° respectively, unless otherwise specified. Note that the neutrino resolutions using the INO simulation codes are currently being generated; however, we have checked that the resolutions used above give results similar to those obtained by the INO simulation code using muons [53]. We consider a 1 GeV neutrino energy threshold, 85% efficiency and 100% charge identification efficiency. We look at two detector exposures: 250 kt yr, corresponding to 5 years of running of a 50 kt detector, and 500 kt yr, i.e. 10 years of running of such a detector. The detector is currently under construction, with a projected time frame of 5 years to completion, so these data sets are expected to be available by about 2023 and 2028 respectively. Earth matter effects are included in the atmospheric neutrino analysis using the standard Preliminary Reference Earth Model (PREM) density profile of the earth [54].
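For orientation, the baselines sampled by atmospheric neutrinos span several orders of magnitude with zenith angle. A small geometric sketch (the 15 km production altitude is a typical assumption of ours, not a number from the text):

```python
import numpy as np

def baseline_km(cos_zenith, production_height_km=15.0, r_earth_km=6371.0):
    """Neutrino path length from production altitude h to a surface detector.

    Chord geometry: L = sqrt((R+h)^2 - R^2 (1 - c^2)) - R*c, with c = cos(zenith).
    Upward-going events (c < 0) traverse up to ~2R of earth matter, which is
    what drives the hierarchy sensitivity of ICAL.
    """
    R, h, c = r_earth_km, production_height_km, cos_zenith
    return np.sqrt((R + h)**2 - R**2 * (1.0 - c**2)) - R * c

for c in (1.0, 0.0, -0.5, -1.0):
    print(f"cos(zenith) = {c:+.1f}  ->  L = {baseline_km(c):8.1f} km")
```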
Henceforth, we give the exposure of NOνA or T2K as a + b where a and b respectively denote the number of years of neutrino and antineutrino running of the experiment.
For our study of T2K, NOνA and ICAL, we look at the following chronological points:
• 2016, when T2K will have completed either a (5+0) or a (3+2) run
• 2020, when NOνA will have completed a (3+3) run
• 2024, when NOνA will have completed a (5+5) run and ICAL will have at least 5 years of data
• 2028, when ICAL will have 10 years of data
We also consider the case of T2K going on to a (5+5) run, which can be taken into account in the 2024 analysis along with NOνA (5+5) and ICAL (5 years).
IV. CP SENSITIVITY OF T2K AND NOνA : CHRONOLOGICAL PROGRESSION
In this section, we study the prospects for CP violation discovery and δ CP precision measurement of NOνA and T2K for different exposures corresponding to progressive points of time in the next 10 years. The experimental capabilities are demonstrated using CP violation discovery plots and δ CP precision plots respectively.
The discovery potential of an experiment for CP violation is computed by considering a variation of δCP over the full range [0°, 360°) in the simulated true or 'experimental' event spectrum N_ex, and comparing this with δCP = 0° or 180° in the test or 'theoretical' event spectrum N_th. The discovery χ² in its simplest statistical form is defined as

$$\chi^2 = \sum_i \frac{\left(N_i^{ex} - N_i^{th}\right)^2}{N_i^{ex}},$$

where the sum runs over energy bins. In our calculation we include a marginalization over systematic errors and uncertainties for each experiment. The resultant χ² from the various experiments are then added and finally marginalized (unless specified otherwise) over the parameters θ23, θ13, |∆31| and the hierarchy in the test spectrum.
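Schematically, the marginalized discovery χ² can be organized as below. This is a minimal sketch of the procedure just described, assuming the Gaussian (simplest-form) χ² and leaving out the systematics pulls; the function names are ours:

```python
import numpy as np

def discovery_chisq(n_ex, spectrum, test_grid):
    """Minimal CP-violation discovery chi^2: compare the 'data' spectrum n_ex
    (generated at the true parameters) against CP-conserving test spectra,
    minimizing over the marginalized parameters.

    spectrum(params) -> binned event rates (numpy array);
    test_grid: iterable of test-parameter dicts, each with delta_CP fixed to
    0 or 180 deg and theta_23, theta_13, |dm31| and the hierarchy scanned.
    """
    chi2_min = np.inf
    for params in test_grid:
        n_th = spectrum(params)
        chi2 = np.sum((n_ex - n_th)**2 / np.clip(n_ex, 1e-9, None))
        chi2_min = min(chi2_min, chi2)
    return chi2_min
```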
As expected, the discovery potential of the experiments is zero for true δCP = 0° and 180°, while it is close to maximum at the maximally CP-violating values δCP = ±90°. We use the transformations of [60][61][62] relating the effective measured values of the atmospheric parameters ∆µµ and θµµ to their natural values ∆31 and θ23. The effective values ∆µµ and θµµ correspond to the parameters measured by muon disappearance experiments, and it is advocated to use them in the definitions of priors if the prior is taken from muon disappearance measurements. The corrected definition of θµµ is significant due to the large measured value of θ13, while for ∆µµ the transformation is valid even for small θ13 values. In our analysis we do not use any external priors for these parameters, as the experiments themselves are sensitive to them. Note, however, that for the effective parameters there is an exact mass hierarchy degeneracy between ∆µµ and −∆µµ and an exact intrinsic octant degeneracy between θµµ and 90° − θµµ. Therefore the use of these values in the analysis ensures that one hits the exact minima for the wrong hierarchy and wrong octant in the numerical analysis of the muon disappearance channel. Measurements with the appearance channel and the presence of matter effects can break these degeneracies. Also, the generalized octant degeneracy, occurring between values of θµµ in opposite octants for different values of θ13 and δCP, is still present for the effective atmospheric mixing angle. For such cases, a fine marginalization grid has to be used in the analysis in order to capture the χ² minima occurring in the wrong hierarchy and wrong octant.
The true values and test ranges of the oscillation parameters used in our computation are as quoted in the figures, with specific values of true θµµ and true δCP indicated in each case.
∆21 and sin²θ12 are fixed to their true values since their effect is negligible. External (projected) information on θ13 from the reactor experiments is added in the form of a prior on θ13,

$$\chi^2_{prior} = \left(\frac{\sin^2 2\theta_{13}^{true} - \sin^2 2\theta_{13}^{test}}{\sigma(\sin^2 2\theta_{13})}\right)^2,$$

with the 1σ error σ(sin²2θ13) = 0.005.
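The reactor prior enters as one more additive term in the total χ²; a one-line sketch (ours) of the form reconstructed above:

```python
def theta13_prior(s2_2th13_test, s2_2th13_true=0.1, sigma=0.005):
    """Gaussian reactor prior on sin^2(2 theta_13), added to the total chi^2."""
    return ((s2_2th13_test - s2_2th13_true) / sigma)**2
```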
The CP sensitivity of an experiment can also be quantified through the precision with which it can measure δCP. In this case we vary δCP over the full range [0°, 360°) in both the simulated true event spectrum N_ex and the test event spectrum N_th; the precision χ² has the same statistical form as the discovery χ² above, with the test δCP now scanned over its full range. We present precision plots which show the test δCP range allowed by the data for each true value of δCP, up to a specified confidence level. The allowed values of δCP are represented by the shaded regions in the figures. For an ideal measurement, the allowed values would be very close to the true value, so the allowed region would lie along the diagonal true δCP = test δCP. However, due to the finite precision on the parameters as well as the parameter degeneracies, other δCP values are also seen to be allowed.
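Reading off the allowed band at a given confidence level from a marginalized χ² grid then amounts to a simple threshold cut; a sketch (ours) under the one-parameter Gaussian assumption (∆χ² = 2.71 for 90% C.L.):

```python
import numpy as np

def allowed_test_dcp(chi2_grid, dcp_test_values, delta_chi2=2.71):
    """Test delta_CP values allowed for one true delta_CP.

    chi2_grid: chi^2 on the dcp_test_values grid (a numpy array), already
    marginalized over the other parameters; 2.71 is the one-parameter
    90% C.L. cut.
    """
    chi2 = chi2_grid - chi2_grid.min()
    return dcp_test_values[chi2 < delta_chi2]
```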
A. CP sensitivity of T2K (3+2) and (5+0) (2016)
T2K is expected to have a neutrino run of 5 years. There are also discussions of a break-up into neutrino and antineutrino runs, for which we consider the case of (3+2) years [12]. In Fig. 3, we depict the CP violation discovery (upper row) and 90% C.L. δCP precision (lower row) for T2K (3+2) (left panel) and T2K (5+0) (right panel) for θµµ = 39°, sin²2θ13 = 0.1 (true values) and true NH. The figure shows that the CP sensitivity of T2K alone is quite low, especially in the (5+0) case, where the discovery potential remains below χ² = 2 over the entire true δCP range. This is because the baseline of T2K (295 km) is relatively short and earth matter effects are minimal, so the hierarchy-δCP degeneracy predominates in both half-planes when only a neutrino beam is taken. When we consider a neutrino-antineutrino combination, the different behaviours of the neutrino and antineutrino probabilities partially resolve the degeneracy in the favourable half-plane (in this case the LHP) for a (3+2) run. Therefore, as pointed out in [12], a T2K (3+2) run provides better CP sensitivity than a T2K (5+0) run. This is also evident in the precision plots, where the allowed region of δCP (shaded area) is larger for the (5+0) case, indicating that fewer regions of δCP are excluded at 90% C.L.
B. CP sensitivity of T2K with NOνA (3+3) (2020)
T2K and NOνA show somewhat different dependences on the neutrino parameters. In particular, the degeneracies observed in Fig. 3 can be resolved in some areas by combining T2K with NOνA. We explore how the addition of NOνA affects the difference in CP sensitivity between the T2K (3+2) and (5+0) runs. A comparison of the left and right panels of Fig. 4 tells us that, for both discovery and precision, the advantage offered by T2K (3+2) over (5+0) is lost when we combine T2K with NOνA. While for T2K alone the discovery χ² can rise above 2 in the LHP for (3+2) but remains well below 2 for (5+0), the discovery χ² of NOνA + T2K is nearly identical for T2K (3+2) and (5+0), and rises to values above χ² = 6 (2.5σ) in the LHP. The allowed regions also look similar in the two cases.
This behaviour can be explained as follows. Since NOνA already includes a combined neutrino-antineutrino run, it is capable of resolving the hierarchy-δCP degeneracy and providing significant CP sensitivity in the favourable half-plane. Therefore the hierarchy degeneracy resolution provided by T2K (3+2) in the favourable half-plane is no longer required when T2K is combined with NOνA. Thus, in the combined analysis, the T2K CP sensitivity adds to the NOνA sensitivity irrespective of whether T2K has a (5+0) or a (3+2) run. For the subsequent chronological analysis, we choose the T2K run to comprise (5+0) years.
C. CP sensitivity of T2K (5+0) with NOνA (5+5) (2024)
Although the current projection of NOνA is to run for (3+3) years, we also consider the possibility of a (5+5) run of NOνA, in order to investigate the enhanced sensitivity to δCP achievable through upgrades of the current facilities. In Fig. 5, we plot the CP violation discovery (upper row) and the 90%/95% C.L. δCP precision (lower row) for true NH (left panel) and true IH (right panel).
Comparing with Fig. 4, it can be observed that the increased NOνA exposure adds to the discovery potential, giving values as high as χ² = 9 (3σ) for maximal CP violation in the favourable half-plane in each case, and reaching close to χ² = 4 (2σ) at some points in the unfavourable half-plane, even though the discovery minima there still lie in the wrong-hierarchy region. In the precision figures, the allowed regions shrink to an area along the major diagonal (true δCP = test δCP), corresponding to the right-hierarchy solutions, plus some off-axis islands corresponding to the wrong-hierarchy solutions arising from the hierarchy-δCP degeneracy. These are, as expected, in the UHP for true NH and in the LHP for true IH.
D. CP sensitivity of T2K (5+5) with NOνA (5+5)
In this section we consider the possibility of a (5+5) run for T2K in conjunction with a NOνA (5+5) run, a possible extension beyond the projected timescale of the experiments. Fig. 6 illustrates the CP violation discovery potential, the 90% C.L. δCP precision and the 95% C.L. δCP precision for NOνA (5+5) + T2K (5+5) for θµµ = 39°, sin²2θ13 = 0.1 and true NH (left panel) or true IH (right panel). It may be observed that in this case the discovery potential rises well above 3σ for maximal CP violation in the favourable half-plane, and stays above 3σ for −120° < true δCP < −60° (true NH) and 60° < true δCP < 120° (true IH). In the unfavourable half-plane a 2σ discovery signal is achieved over part of the true δCP range, but the discovery minima still occur with the wrong hierarchy. Similarly, while the off-axis islands in the precision plot corresponding to the wrong-hierarchy δCP solutions vanish at 90% C.L., they are still not ruled out at 95% C.L. This shows the need for some additional input in order to resolve the hierarchy-δCP degeneracy in the unfavourable half-plane.
It is worthwhile to analyze the relative contributions of NOνA and T2K in this case, where they have equal exposures with both neutrinos and antineutrinos. While T2K has better statistics, NOνA enjoys greater hierarchy sensitivity due to its longer baseline and stronger earth matter effects. To study this, we plot in Fig. 7 the allowed fraction of δCP values at 90% C.L. for T2K (5+5) and NOνA (5+5) as a function of true δCP. This quantity is the fraction of test δCP values which lie in the allowed region for each specific value of true δCP; smaller values of the allowed CP fraction signify better CP sensitivity.
The figure is plotted for true NH. The three panels correspond to test NH, test IH and a marginalization over hierarchy. It is observed that for a fixed NH, NOνA does slightly better than T2K. For test IH, NOνA and T2K perform similarly in the unfavourable half-plane (UHP), but NOνA is much better than T2K in the favourable half-plane (LHP) due to its superior hierarchy sensitivity. However, with a marginalization over the unknown hierarchy, NOνA does much worse than T2K in the unfavourable half-plane.
This anomalous feature can be explained from the 90% C.L. δCP precision plots for T2K (5+5) and NOνA (5+5) (true NH) in Fig. 8, shown for test NH and test IH. For T2K the wrong-hierarchy (test IH) allowed regions lie close to the diagonal, and the true UHP-test LHP range remains excluded in both cases. So a marginalization over hierarchy does not cause as much of an increase in the allowed CP fraction for T2K as it does for NOνA.
The reason for this difference in the behaviour of NOνA and T2K can be seen at the level of probabilities. Fig. 9 depicts the Pµe energy spectrum for the T2K and NOνA baselines for neutrinos and antineutrinos, showing the bands for NH and IH when δCP is varied over its full range. The curves for δCP = 90° and −90° are highlighted. Due to the greater separation between the NH and IH bands for NOνA, the true NH-test IH case shows a clear degeneracy between the two bands near true δCP = 90° and test δCP = −90°, leading to the true UHP-test LHP allowed region in the NOνA test-IH precision figure. T2K has a much greater overlap between the NH and IH bands, but in its case the overlap is more prominent in the true UHP-test UHP and true LHP-test LHP regions, corresponding to the allowed areas in these ranges in the T2K test-IH precision figure. Hence, in spite of the smaller allowed regions for NOνA compared to T2K, especially for true NH/test IH, the location of the allowed regions leads to an anti-synergistic combination for NOνA (5+5), giving an overall poorer CP sensitivity than T2K (5+5).
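The allowed CP fraction of Fig. 7 is the same threshold cut as before, expressed as a fraction; a minimal sketch (ours):

```python
import numpy as np

def allowed_cp_fraction(chi2_row, delta_chi2=2.71):
    """Fraction of test delta_CP values allowed at 90% C.L. for one true
    delta_CP; smaller values mean better CP sensitivity (cf. Fig. 7)."""
    chi2 = np.asarray(chi2_row)
    chi2 = chi2 - chi2.min()
    return float((chi2 < delta_chi2).mean())
```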
V. CP VIOLATION DISCOVERY POTENTIAL OF T2K/NOνA : SYNERGIES AND DEPENDENCE ON PARAMETERS
In this section, we study the behaviour of the CP violation discovery potential as a function of the neutrino parameters: θ13, the neutrino mass hierarchy and the octant of θ23. We also examine the synergy between the individual channels. The discussion of synergies and parameter dependence here is for the case T2K (5+0) + NOνA (5+5), i.e. with a time frame till 2024.
A. Synergy between appearance and disappearance channels of T2K/NOνA:
The event rates in T2K and NOνA get contributions from both the Pµµ and Pµe channels. Due to the different behaviours of the two channels as functions of δCP and the other oscillation parameters, there is a synergy between them which leads to an enhancement of the CP violation discovery potential of the combination. In Fig. 10, the CP violation discovery is plotted as a function of true δCP for the appearance and disappearance channels of NOνA (upper row), the sum of the appearance and disappearance channels of NOνA (lower row, left panel) and the sum of the appearance and disappearance channels of NOνA + T2K (lower row, right panel), for true θµµ = 39°, true sin²2θ13 = 0.1 and true NH. The following features can be observed:
1. The CP violation discovery potential principally arises from the appearance channel of NOνA/T2K, which is a function of Pµe, owing to its dependence on the quantity cos(∆ + δCP) in the sub-leading term of Eq. (1), as discussed in Section II. The disappearance channel offers a weaker δCP sensitivity through a sub-leading dependence on cos δCP [37]. The top right panel shows that, by itself, the disappearance channel (Pµµ) has negligible discovery potential.
2. Due to the different behaviours of the two channels as functions of δCP and the other oscillation parameters, there is a synergy between them which enhances the CP violation discovery potential of the combination: Pµe is a function of both sin δCP and cos δCP, while Pµµ depends only on cos δCP. From the bottom left panel, it can be seen that the discovery potential of the combination is significantly greater than the sum of the discovery χ² of the individual channels.
3. Both NOνA and T2K experience this synergy between the appearance and disappearance channels. In addition, there is a further enhancement of the discovery potential when the two experiments are combined, as discussed in the previous section.
B. Dependence on θ13:
The behaviour of the CP sensitivity as a function of θ13 can be understood by looking at the θ13-dependence of the νµ → νe oscillation probability Pµe. As seen in Eq. (1), Pµe has a leading-order term ∼ sin²θ13 that is independent of δCP, and a sub-leading term ∼ sin 2θ13 that is a function of δCP. In calculating the CP sensitivity χ², the leading-order δCP-independent term cancels between the true and test spectra in the numerator, but remains in the denominator. For illustrative purposes, the χ² can be expressed as

$$\chi^2 \sim \frac{P(\delta_{CP}) \sin^2 2\theta_{13}}{Q \sin^2\theta_{13} + R(\delta_{CP}) \sin 2\theta_{13}}.$$
P, Q, R are also functions of the other oscillation parameters apart from δCP. It is easy to show that for small values of θ13, χ² ∼ θ13, which is increasing. In the other limit, where θ13 is close to 90°, χ² ∼ (90° − θ13)², which decreases with θ13. This can be understood qualitatively by noting that the leading-order term is independent of δCP and therefore acts as a background to the CP signal [20]. Therefore, the CP sensitivity initially increases with θ13, peaks at an optimal value, and then decreases. These features are reflected in Fig. 11, where we plot the CP violation discovery potential of NOνA + T2K as a function of true sin²2θ13 for the two maximally CP-violating values of true δCP. We assume true θµµ = 39° and a fixed normal mass hierarchy. A marginalization over θ13 is done in the left panel, while θ13 is fixed to its true value in the right panel. It can be seen that the discovery χ² rises for very small values of sin²2θ13 and reaches its highest value in the range sin²2θ13 ∼ 0.08−0.2, before starting to drop off gradually. The vertical lines denote the current θ13 range (sin²2θ13 = 0.07−0.13). This figure shows that the range of θ13 that nature has provided us with is a fortuitous one, since it happens to lie in the region where the sensitivity to CP violation of such experiments is maximal. Fig. 12 depicts the CP violation discovery as a function of true δCP for NOνA + T2K (true NH, θ13 and hierarchy marginalized, true θµµ = 39°) for two values of true sin²2θ13 at the lower and higher ends of its present range, with the θ13 prior included. It can be seen that in the favourable half-plane of true δCP there is a slight increase in the χ² with increasing true θ13 in this range, as can be predicted from Fig. 11. In the unfavourable half-plane, the dependence of the discovery χ² is more complicated, involving the intrinsic CP violation discovery of the experiments as well as their hierarchy sensitivity; since the latter increases significantly with θ13, we observe a more definite improvement of the overall discovery potential with increasing θ13.
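The two limits quoted above can be checked numerically on the illustrative form of the χ²; the coefficients P = Q = R = 1 below are arbitrary choices:

```python
import numpy as np

def chi2_scaling(th, P=1.0, Q=1.0, R=1.0):
    """Toy chi^2 ~ P sin^2(2 th13) / (Q sin^2 th13 + R sin(2 th13))."""
    return P * np.sin(2 * th)**2 / (Q * np.sin(th)**2 + R * np.sin(2 * th))

for th in (0.01, 0.02):                 # small theta_13: chi^2/theta -> 2P/R
    print(f"theta = {th}: chi2/theta = {chi2_scaling(th) / th:.3f}")
for eps in (0.01, 0.02):                # theta_13 -> 90 deg: chi^2/eps^2 -> 4P/Q
    print(f"eps = {eps}: chi2/eps^2 = {chi2_scaling(np.pi/2 - eps) / eps**2:.3f}")
```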
C. Dependence on the neutrino mass hierarchy:
This aspect has been discussed in detail in [27]. Fig. 13 shows the CP violation discovery as a function of true δCP for NOνA + T2K for three values of true θµµ, true sin²2θ13 = 0.1 and true NH (left panel) or IH (right panel). As expected, there is a drop in the discovery χ² in the unfavourable half-plane in each case, i.e. in the UHP for true NH and in the LHP for true IH. In these regions, the discovery minima occur with the wrong hierarchy due to the hierarchy-δCP degeneracy, and the discovery χ² is a sum of the intrinsic discovery potential and the hierarchy sensitivity of NOνA + T2K [27]. In Fig. 13, we can also observe the dependence of the CP violation discovery χ² on the true value of θµµ. When true δCP lies in the favourable half-plane, the discovery potential decreases with increasing θµµ in its currently allowed range. In the unfavourable half-plane, the behaviour is more complicated, since the discovery minima lie in the wrong-hierarchy region for part of the range and the hierarchy sensitivity adds to the discovery χ². The hierarchy sensitivity grows with θµµ, and therefore the overall CP violation discovery potential in these regions also increases with θµµ.
D. Dependence on the octant of θ23:
As seen in Eq. (1), Pµe has a leading-order δCP-independent term ∼ sin²θµµ and a sub-leading δCP-dependent term ∼ sin 2θµµ, similarly to the θ13 behaviour. Thus for smaller values of θµµ the χ² is expected to rise, reaching a peak at an intermediate value of θµµ and decreasing thereafter. This is reflected in the left panel of Fig. 14, where the CP violation discovery potential of NOνA + T2K is shown as a function of true θµµ. This plot is drawn for the two maximally CP-violating values of true δCP, true sin²2θ13 = 0.1 and a fixed NH, with test θµµ fixed to its true value. The vertical lines give the present 3σ range of θµµ (θµµ = 35°−55°). Therefore, as we increase θµµ within its allowed range, we see a drop in sensitivity.
The right panel of Fig. 14 is obtained by marginalizing over the octant, i.e. assuming no prior knowledge of the octant in which θµµ lies. We find that for true θµµ < 40° or > 49°, the marginalization over the octant has no effect. This is because the octant sensitivity of NOνA + T2K is good enough (at least 2σ) in this range of true θµµ to rule out CP discovery solutions in the wrong octant [41]: the octant χ² adds to the CP discovery χ² in the wrong octant and excludes any minima occurring in that region. For 40° < true θµµ < 49°, the octant sensitivity of NOνA + T2K is not high enough to exclude wrong-octant solutions, and we see a wiggle in the discovery χ² curves signaling the octant-δCP degeneracy. The behaviour is different for true δCP = +90° and −90°, since the LHP is favourable for resolving the octant-δCP degeneracy for true HO and the UHP is favourable for true LO (in the neutrino mode, which gives the predominant contribution to these results). This is illustrated in Fig. 15, where the discovery potential of NOνA + T2K is plotted as a function of true δCP for true θµµ = 43° (left panel) and 49° (right panel), with and without a marginalization over the octant; true sin²2θ13 = 0.1 and a fixed NH are assumed. These values of true θµµ lie within the range of unresolved octant-δCP degeneracy, which shows up as a drop in the curve in the LHP for true θµµ = 43° and in the UHP for true θµµ = 49° when the octant is assumed to be unknown, as expected from the above argument. The favourable half-plane in each case suffers from no degeneracy. We also see that the drop due to the octant degeneracy is greater for true θµµ = 43° than for 49°, since the former value lies in the central part of the degenerate region while the latter is at its edge.
VI. CP SENSITIVITY OF ATMOSPHERIC NEUTRINO EXPERIMENTS AND THEIR COMBINATION WITH NOνA/T2K
In general, the CP sensitivity of atmospheric neutrino experiments is limited by the finite detector resolutions. In particular, the angular resolution needs to be very good for these experiments to have any intrinsic CP sensitivity [27,63,64]. It has been highlighted in [27] that atmospheric neutrino data can nonetheless help resolve the hierarchy-δCP degeneracy by excluding discovery χ² minima occurring with the wrong hierarchy in the unfavourable half-plane of true δCP. This is achieved due to the significant and largely δCP-independent hierarchy sensitivity of atmospheric neutrino experiments, and was demonstrated in [27] taking ICAL@INO as the atmospheric neutrino detector. In this work we perform a combined study for T2K + NOνA + ICAL with different exposures, for both the CP violation discovery potential and the δCP precision.
A. CP sensitivity of ICAL
In this sub-section we explore some details of the CP sensitivity of atmospheric neutrino experiments. The main issue here is that atmospheric neutrinos come from all directions, so these experiments face the additional challenge of accurately reconstructing the neutrino direction as well as its energy. We investigate how the intrinsic CP sensitivity of atmospheric neutrinos depends on the energy and angular resolutions, and how much sensitivity can be achieved with an ideal detector.
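A toy Monte Carlo shows the washout mechanism at work: smearing L and E blurs the phase ∆ = 1.267 ∆31 L/E, and the δCP-dependent factor cos(∆ + δCP) averages away. The bin choice (E = 3 GeV, L = 8000 km) and the crude mapping of the angular smearing onto a relative spread in L are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
dm31, E0, L0 = 2.4e-3, 3.0, 8000.0          # eV^2, GeV, km: one illustrative bin

def cp_term(E, L, dcp):
    """delta_CP-dependent factor cos(Delta + delta_CP) of P_mue."""
    return np.cos(1.267 * dm31 * L / E + dcp)

for sig_E, sig_L in [(0.03, 0.05), (0.10, 0.17), (0.15, 0.26)]:
    E = E0 * (1 + sig_E * rng.standard_normal(100_000))
    L = L0 * (1 + sig_L * rng.standard_normal(100_000))
    diff = cp_term(E, L, np.pi / 2).mean() - cp_term(E, L, -np.pi / 2).mean()
    print(f"sigma_E = {sig_E:.0%}, sigma_L = {sig_L:.0%}: "
          f"<cos> difference = {diff:+.3f}")
# The +-90 deg separation shrinks rapidly as the smearing widths grow,
# which is why the angular resolution is so critical.
```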
In Fig. 16 the CP violation discovery potential of ICAL is plotted as a function of the energy and angular resolution. The curve for the angular (energy) resolution is obtained by varying the corresponding smearing width between 3°−15° (3%−15%) while holding the energy (angular) resolution fixed at 10% (10°). The figure illustrates the significant role played by the angular resolution of an atmospheric neutrino detector in its CP sensitivity. With presently realistic values of detector smearing (15%, 15°), the CP sensitivity of such an experiment is washed out by the averaging over bins in energy and direction, due to the coupling between δCP and ∆ = ∆31 L/4E in the term cos(δCP + ∆) in Pµe [27]. With a hypothetical improved angular resolution of 3°, the CP violation discovery χ² may reach values close to 1, going up to 5 for ideal detector resolutions in both angle and energy.
B. CP sensitivity of NOνA and T2K combined with ICAL
When ICAL data is added to T2K (5+0) and NOνA, the unfavourable half-plane still exhibits a hierarchy-δCP degeneracy and has discovery minima with the wrong hierarchy, but the hierarchy sensitivity of ICAL (5 years) raises the discovery potential to about 2.5σ over the central part of the unfavourable half-plane for the NOνA (5+5) case, i.e. in the range −120° < true δCP < −60° (true IH) or 60° < true δCP < 120° (true NH). With NOνA (3+3), the discovery potential reaches up to 2.5σ for maximal CP violation in both half-planes when 5 years of ICAL data are added. From Fig. 18, it can be seen that 10 years of ICAL data provide a complete resolution of the hierarchy-δCP degeneracy, and the discovery potential goes up to 3σ for maximal CP violation in both the favourable and unfavourable half-planes.
The δCP precision plots for NOνA + T2K show wrong-hierarchy allowed regions with true δCP in the UHP and test δCP in the LHP for true NH, and vice versa for true IH; these correspond to CP minima occurring with the wrong hierarchy due to the hierarchy-δCP degeneracy. From the precision plots in Fig. 17, it can be observed that these wrong-hierarchy solutions go away at both 90% and 95% C.L. when atmospheric neutrino information from ICAL (5 years) is combined, since the degeneracy is resolved by the addition of the ICAL hierarchy sensitivity. Thus the combination of atmospheric neutrino experiments with NOνA/T2K can aid the δCP measurement potential of the long-baseline experiments by curtailing the allowed range, and for this purpose even 5 years of ICAL exposure is useful. We also study what happens to the octant-δCP degeneracy when ICAL is combined with NOνA and T2K. Fig. 19 shows the CP violation discovery potential of NOνA (5+5) + T2K (5+0), with and without ICAL (10 years), as a function of true θµµ for true δCP = ±90°, true sin²2θ13 = 0.1 and a fixed NH, with a marginalization over θµµ. Comparing with Fig. 14, we see that the wiggle in the 40° < true θµµ < 49° range corresponding to the octant-δCP degeneracy is reduced in amplitude and restricted to the range 41° < true θµµ < 48° when the ICAL data is added. In Fig. 20, the discovery potential of NOνA + T2K, with and without ICAL (10 years), is plotted as a function of true δCP for true θµµ = 43° (left panel) and 49° (right panel), with and without a marginalization over the octant, for true sin²2θ13 = 0.1 and a fixed NH. These values of true θµµ lie within the range of unresolved octant-δCP degeneracy even with the combination of ICAL, but an improvement in the discovery χ² is seen in the unfavourable half-plane in each case when ICAL is added. Since the drop due to the octant degeneracy is greater for true θµµ = 43° than for 49°, the addition of ICAL data entirely overcomes the degeneracy and compensates for the drop in the 49° case, while for 43° there is only a partial improvement in the χ², even when ICAL data is added.
The effect of ICAL information on the octant-δCP degeneracy is more modest than that on the hierarchy-δCP degeneracy, since the octant sensitivity of ICAL is not as good as its hierarchy sensitivity [41]. It is still helpful to an extent since, like the hierarchy sensitivity, the octant sensitivity of the atmospheric data is largely independent of δCP. Finally, we examine the benefits of adding ICAL to the projected combination of T2K (5+5) + NOνA (5+5). Fig. 6 showed that while this combination provides good discovery potential (> 3σ) over the central part of the favourable half-plane, the unfavourable half-plane still suffers from the hierarchy-δCP degeneracy and barely reaches a discovery potential of 2σ over its central region.
Further, the wrong-hierarchy solutions in the δ CP precision figure get ruled out at 90% C.L. but not at 95% C.L.
The favourable half-plane, as expected, is unaffected by the addition of ICAL. Also, the small off-axis allowed regions at 95% C.L. in the precision plot for T2K (5+5) + NOνA (5+5) get excluded when ICAL (5 years) is added. Hence the combination with ICAL constrains δCP with a higher level of sensitivity. With 10 years of ICAL data, the hierarchy-δCP degeneracy is fully resolved and the discovery potential of the NOνA + T2K + ICAL combination achieves values above 3σ over the ranges −120° < true δCP < −60° as well as 60° < true δCP < 120°, i.e. in both the favourable and unfavourable half-planes for both hierarchies. Thus the addition of ICAL provides a more consistent signature of CP violation and a more constrained measurement of δCP.
VII. CONCLUSIONS
Measuring CP violation in the lepton sector is one of the most challenging problems today.
We have performed a systematic chronological study of the CP sensitivity of the current and upcoming long-baseline experiments T2K and NOνA and of an atmospheric neutrino experiment with an ICAL@INO-type detector. We analyze the synergies between these set-ups which may aid CP violation discovery and a precision measurement of δCP. This has been done for the different combinations of these experiments that will be achievable at progressive points of time in the near future. The main role of the atmospheric data is to rule out the wrong-hierarchy solutions, which increases the CP sensitivity in the parameter regions unfavourable for T2K/NOνA. Usually the analysis of CP sensitivity is done assuming the hierarchy/octant to be known, in which case the wrong-hierarchy/wrong-octant solutions are excluded a priori. We show how a realistic atmospheric neutrino experiment can achieve this, and quantify the exposure which enables one to disfavour the wrong-hierarchy and/or wrong-octant solutions. Below we list the salient features of our results.
Study of synergies and parameter dependence: • While the CP sensitivity principally arises from the appearance channel of NOνA /T2K, the appearance and disappearance channels are synergistic due to their different dependences on δ CP . P µe depends on δ CP through the quantity cos(∆ + δ CP ), while P µµ only has a cos δ CP dependence. Thus their combination gives a CP sensitivity significantly higher than the sum of sensitivities of the two channels.
• The results for a combination of T2K and NOνA display the hierarchy-δCP degeneracy. This is manifested as a drop in the CP violation discovery potential in the unfavourable half-plane of δCP, i.e. the UHP (0° to 180°) for true NH and the LHP (−180° to 0°) for true IH.
• There is also a degeneracy of δCP with the octant. However, because of the significant octant sensitivity of the T2K + NOνA combination, this occurs over a restricted range of θµµ around the maximal value. For example, for a T2K (5+0) + NOνA (5+5) combination, the degeneracy with the octant occurs over the range 40° < true θµµ < 49°. The degeneracy shows up as a drop in the discovery potential in the LHP for true LO (θµµ < 45°) and in the UHP for true HO (θµµ > 45°).
• Although a non-zero θ13 is essential for any measurement of δCP, large values of this parameter can also impede the CP sensitivity [65]. This is because of the presence of the δCP-independent leading term ∼ sin²θ13 in Pµe, which can act as a background for the sub-dominant δCP-dependent term. However, we note that for smaller values of θ13 the CP-discovery χ² ∼ θ13 and hence increases with θ13, while for larger values of θ13 the CP-discovery χ² ∼ (90° − θ13)², which decreases with θ13. The discovery χ² attains its highest value in the range sin²2θ13 ∼ 0.08−0.2. This tells us that the range of θ13 provided by nature lies in an optimal region, favourable for CP sensitivity with such experiments.
Chronological study: In Table I we summarize the maximum values of the CP violation discovery potential in the unfavourable half-plane of true δCP, and the percentage of true δCP values capable of giving a CP violation discovery signal at 2σ and 3σ, for different combinations of the experiments T2K, NOνA and ICAL at progressive points of time over the next 15 years. The following observations can be made from these results:
• By 2016, T2K is expected to have an effective 5-year run with 10²¹ pot/year. We consider the cases of a (5+0) versus a (3+2) run, and find that with T2K alone a (3+2) run provides better CP sensitivity than a neutrino-only (5+0) run, due to the complementary behaviour of the neutrino and antineutrino probabilities, which partially resolves the hierarchy-δCP degeneracy in the favourable half-plane of δCP.
• By 2020, NOνA will complete a (3+3) run. We combine this with the T2K results for (3+2) and (5+0) and find that the combination offers similar CP sensitivity in both cases. This is because NOνA , with its combined neutrino-antineutrino run, helps in resolving the hierarchy-δ CP degeneracy in the favourable half-plane and overrides the necessity of resolving it with T2K. Thus a neutrino-only T2K run proves to be as efficient towards CP sensitivity as a combined (3+2) run when it is taken in tandem with NOνA . In this way the combination of T2K and NOνA provides a synergy, apart from the improved sensitivity of the combination purely due to the increased statistics and exposure.
• By 2024, NOνA may have a (5+5) run. Combining this with T2K (5+0) adds to the CP sensitivity due to the higher NOνA exposure, and can provide a CP violation discovery potential of up to 3σ in the favourable half-plane and up to 2σ at some points in the unfavourable half-plane. The δ CP precision determination is also improved but still displays some additional allowed regions in δ CP corresponding to the wrong-hierarchy solutions.
• We also look at an extended (5+5) run of T2K, and consider it with NOνA (5+5). In this case the CP violation discovery potential rises well above 3σ for maximal CP violation in the favourable half-plane. The unfavourable half-plane gives a discovery signal of 2σ over parts of the true δ CP range, but the discovery minima still occur with the wrong hierarchy.
In the δ CP precision plots, the wrong-hierarchy allowed regions are ruled out at 90% C.L.
• Finally we look at a combination of ICAL@INO with NOνA and T2K, and find that it resolves many of the issues with degeneracy observed in the NOνA +T2K results. By 2024, ICAL will have at least 5 years of data. With a T2K (5+0) + NOνA (5+5) + ICAL (5 years) combination, the CP violation discovery potential still exhibits a hierarchy-δ CP degeneracy and has discovery minima with the wrong hierarchy, but due to the hierarchy sensitivity of ICAL, the discovery potential is raised to about 2.5σ over the central region of the unfavourable half-plane. The favourable half-plane is unaffected by the addition of ICAL.
• Combining NOνA and T2K with ICAL (10 years) also gives a modest improvement in lifting the octant-δ CP degeneracy, reducing the range of its effect and improving the discovery potential in the unfavourable half-plane. The advantage in this case is less than for the hierarchy-δ CP degeneracy since the octant sensitivity of ICAL is not as good as its hierarchy sensitivity.
With 10 years of ICAL data, the hierarchy-δ CP degeneracy is fully resolved and the discovery potential of the NOνA +T2K+ICAL combination achieves values above 3σ over the central part of both the favourable and unfavourable half-planes. Thus the addition of ICAL can provide a more consistent signature of CP violation and a more constrained measurement of δ CP .
In conclusion, the combination of T2K and NOνA can provide reasonable CP sensitivity for some values of neutrino parameters but is severely compromised in this regard in other ranges.
The addition of atmospheric neutrino information bearing uniform hierarchy sensitivity may be crucial in measuring δCP and detecting CP violation in case nature has chosen parameter values unfavourable for LBL experiments. This fact has valuable ramifications for current experiments as well as for designing future LBL experiments like LBNO [31], where the inclusion of atmospheric neutrino data can significantly influence the exposures required for giving a high CP sensitivity over all allowed parameter values.
Shape Coexistence in Hot Rotating 100 Nb
Temperature- and angular-momentum-induced shape changes in well-deformed 100 Nb have been investigated within the framework of the statistical theory of hot rotating nuclei combined with the triaxially deformed Nilsson potential and the Strutinsky prescription. Two instances of shape coexistence, one in the ground state of 104 Nb between oblate and triaxial shapes, and another between oblate and the rarely seen prolate non-collective shape in excited hot rotating 100 Nb at mid-spin values around 14-16ℏ, are reported for the first time. The level density parameter indicates the influence of the shell effects and changes drastically at the shape transition. Band crossing is observed at the sharp shape transition.
INTRODUCTION
The hot rotating compound nucleus, a many-body system with a complex internal structure, is treated in the framework of the statistical model [1][2][3] with temperature, spin, isospin and deformation degrees of freedom, within a mean-field approximation, with excitation energy and angular momentum as the input parameters. With increasing excitation energy, the density of quantum mechanical states increases rapidly and the nucleus shifts from discrete levels to the quasi-continuum and continuum, where statistical concepts, especially the nuclear level density (NLD) [4][5][6][7][8][9], i.e. the number of excited levels around a given excitation energy, are crucial for the prediction of various nuclear phenomena and for astrophysics [10] and nuclear technology. The excitation due to temperature and rotation alters the nuclear structure significantly. The evolution of shapes [11][12][13] and phase transitions in such excited hot and rotating nuclei can be studied experimentally by the measurement of the GDR gamma rays [14]. Sometimes two shape phases appear to coexist at similar energies, leading to the phenomenon of shape coexistence [15][16] studied in our earlier works [2,[17][18][19]]. The shape phase transitions in excited nuclei also impact the level density and the particle emission spectra, as shown in our recent work [20,21], and have become a subject of current scientific interest both experimentally and theoretically.
Here we present our results on the shape evolution and coexistence in the odd-Z (Z = 41) Nb isotopes with A = 80-100. This mass region [22,23] is known to exhibit exotic nuclear structural phenomena, often characterized by shape coexistence. Since large parts of the valence N and Z are distributed in the fpg shells, the level density is high and there is an interplay of single-particle and collective motion. Also, the intruder 1g9/2 orbital located just above the N = 40 sub-shell closure plays an important role in shape coexistence, making this an ideal region to study shape evolution with spin, excitation energy and isospin. We also investigate the influence of the shape transitions on the level density parameter.
THEORETICAL FORMALISM
To evaluate the deformation and shape of the excited nucleus we calculate the excitation energy E* and the entropy S of the hot rotating nuclear system, using the statistical theory of hot rotating nuclei [1][2][3], for fixed temperature T and angular momentum M (given as inputs) as functions of the Nilsson deformation parameters β and γ. The excitation energy E* is built on the ground-state energy calculated using the triaxially deformed Nilsson-Strutinsky method [17], and the free energy F = E − TS [24] is then minimized with respect to the deformation parameters (β, γ) at each T and M.
The total energy at zero temperature is

E(β, γ) = E_LDM(Z, N) + δE_Shell + E_def,   (1)

where δE_Shell is the shell correction and E_def is the deformation energy due to Coulomb and surface effects; E_LDM(Z, N) is the macroscopic energy computed using the liquid-drop mass formula. The excitation energy is obtained as

E*(T, M, β, γ) = E(T, M, β, γ) − E(0, 0),   (2)

where E(0, 0) is the ground-state energy. The rotational energy (Eq. (3)) follows the statistical prescription of Refs. [1][2]. The level density parameter a is computed as

a = S² / (4E*),   (4)

and the inverse level density parameter is K = A/a. Free-energy minima are searched for various β (0 to 0.4 in steps of 0.01) and γ (from −180° (oblate) to −120° (prolate), with −180° < γ < −120° corresponding to triaxial shapes) to trace the nuclear shapes and equilibrium deformations. (The reader may refer to Refs. [1][2] for the detailed theoretical formalism.)
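The grid search over the (β, γ) plane described above can be sketched as follows. The free-energy surface used here is a purely illustrative toy of ours (the real F is built from the Nilsson levels plus Strutinsky corrections); only the grid ranges match the text:

```python
import numpy as np

def free_energy_toy(beta, gamma_deg, T, M):
    """Illustrative stand-in for F(beta, gamma; T, M) with competing minima."""
    g = np.radians(gamma_deg)
    return ((beta - 0.2)**2 + 0.05 * np.cos(3 * g) * beta**2
            - 0.01 * T * beta + 1e-4 * M**2 * beta)

betas = np.arange(0.0, 0.41, 0.01)          # beta: 0 to 0.4 in steps of 0.01
gammas = np.arange(-180.0, -119.0, 1.0)     # gamma: -180 (oblate) to -120 (prolate)
B, G = np.meshgrid(betas, gammas, indexing="ij")
F = free_energy_toy(B, G, T=0.7, M=16.5)
i, j = np.unravel_index(np.argmin(F), F.shape)
print(f"F minimum at beta = {betas[i]:.2f}, gamma = {gammas[j]:.0f} deg")
```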
RESULTS AND DISCUSSION
We compute the ground-state deformations and shapes of the Nb isotopes (Z = 41) with N = 40-66 using the triaxially deformed Nilsson potential and the Strutinsky prescription, adequately described in our earlier work [17]. The energy minima are traced for all the nuclei as functions of the deformation parameters β and γ, which give the equilibrium deformation and shape of each nucleus. The ground-state (GS) deformation (at T = 0), varying between 0.1 and 0.27, decreases as the temperature increases and vanishes at T = 1.5 MeV. The dominant shape phase in this region is found to be triaxial, with a few oblate shapes. The inclusion of triaxial shapes [26] makes our calculation more meaningful, especially in this region, which is expected to have a dominant triaxial deformation space. The deformation shows a minimum at N = 50 (A = 91), although it is not zero, as would be expected from a shell closure, for this odd-even nucleus in this highly deformed region. Shell effects are evident in Fig. 2(b), where we plot the level density (LD) parameter vs. A for T = 0.7, 1.0, 1.5, 2.0 MeV. The LD parameter a has a minimum at the shell closure and a maximum in the mid-shell region at low temperature (T = 0.7 MeV). With increasing T, shell effects melt away and a varies almost smoothly, increasing slightly with A.
Nuclear structure is strongly impacted as soon as we incorporate the collective and non-collective rotational degrees of freedom. Fig. 3 shows β (Fig. 3(a)) and γ (Fig. 3(b)) vs. angular momentum for temperatures T = 0.7, 1.0, 1.5, 2.0 MeV and M = 0.5ℏ to 60.5ℏ. The deformation remains high even at the high temperature T = 2 MeV, increases further with increasing M, and reaches values up to 0.2. A close inspection of the shapes in Fig. 3(b) reveals that at low temperature, T = 0.7 MeV, where the states are nearly yrast and the effects of rotation are dominant, we find the rarely seen prolate non-collective shape phase, first anticipated by Goodman [27] and then observed in our earlier works [1,2,28], which diminishes as T and M increase. At this sharp shape transition, band crossing is observed, evident in the plot of rotational energy vs. M (Fig. 3(c)). With increasing T the band-crossing effect diminishes, and the rotational energy E_rot varies gradually with angular momentum at all temperatures. Fig. 3(d) shows the influence of the sharp shape transition on the level density parameter, where we plot the inverse level density parameter K = A/a vs. M. Normally K increases with M because part of the excitation energy is spent in rotation, due to which the level density decreases and K increases. However, at the sharp shape transition at M = 16.5ℏ, we note a sharp drop in the value of K, which in turn enhances the nuclear level density and hence the particle emission probability, as shown in our recent works [20,21], which found good agreement with recent measurements [29][30] of nuclear level densities and emission spectra for the medium- and heavy-mass nuclei 119 Sb and 185 Re. Our observation [20] of an enhancement of the level density and a drop in the inverse level density parameter associated with deformation and shape changes has provided important inputs for understanding various experimental results [29][30][31]. However, experimental and other theoretical data for the Nb isotopes investigated in this region are awaited. The phenomenon of shape coexistence is observed in the excited hot and rotating nucleus 100 Nb, not reported so far in any other work to our knowledge. In Fig. 4, we plot the free energy F minima vs. β for various γ for M = 14.5ℏ, 16.5ℏ and 32.5ℏ. We find that the F minimum moves from prolate at M = 14.5ℏ (with an oblate minimum nearly coexisting, separated by 369 keV) to an oblate minimum (slightly deeper than the prolate one) at M = 16.5ℏ, coexisting with the prolate minimum within merely 52 keV. This identifies a shape coexistence in which the prolate and oblate non-collective shape phases appear at similar energies but have very different deformations. At the higher spin M = 32.5ℏ, the F minimum settles into the expected oblate non-collective shape phase with a well-defined single minimum.
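The link between K = A/a and the level density can be made explicit with the Fermi-gas relations implied by Eq. (4); the excitation energy of 30 MeV used below is an arbitrary illustration of ours:

```python
import numpy as np

def entropy_and_density(a, e_star):
    """S = 2*sqrt(a*E*), the inverse of Eq. (4), and rho ~ exp(S): a drop in
    K = A/a (i.e. a rise in a) exponentially enhances the level density."""
    S = 2.0 * np.sqrt(a * e_star)
    return S, np.exp(S)

for K in (8.0, 10.0, 12.0):                 # inverse LD parameter in MeV
    a = 100.0 / K                           # A = 100 for 100Nb
    S, rho = entropy_and_density(a, e_star=30.0)
    print(f"K = {K:4.1f} MeV -> a = {a:5.2f}/MeV, S = {S:5.2f}, rho ~ {rho:.2e}")
```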
CONCLUSION
Ground states and highly excited hot rotating states of the 80-100 Nb isotopes are studied within a microscopic approach. The odd-Z Nb isotopes are found to have predominantly triaxial shapes and a few oblate shapes, with high deformations ranging between 0.1 and 0.35. Shape coexistence between oblate and triaxial shapes in the ground state of 104 Nb is observed. The level density parameter shows the influence of shell effects. A sharp shape transition from prolate to oblate non-collective in hot rotating 100 Nb leads to band crossing and a sharp drop in the inverse level density parameter, which slowly fade away with increasing temperature, showing the structural effects disappearing as the temperature increases. While undergoing the shape transition from prolate to oblate non-collective, we observe a shape coexistence of oblate and prolate non-collective shapes at mid-spin values in 100 Nb.
Schwarzschild quasi-normal modes of non-minimally coupled vector fields
We study perturbations of massive and massless vector fields on a Schwarzschild black-hole background, including a non-minimal coupling between the vector field and the curvature. The coupling is given by the Horndeski vector-tensor operator, which we show to be unique, also when the field is massive, provided that the vector has a vanishing background value. We determine the quasi-normal mode spectrum of the vector field, focusing on the fundamental mode of monopolar and dipolar perturbations of both even and odd parity, as a function of the mass of the field and the coupling constant controlling the non-minimal interaction. In the massless case, we also provide results for the first two overtones, showing in particular that the isospectrality between even and odd modes is broken by the non-minimal gravitational coupling. We also consider solutions to the mode equations corresponding to quasi-bound states and static configurations. Our results for quasi-bound states provide strong evidence for the stability of the spectrum, indicating the impossibility of a vectorization mechanism within our set-up. For static solutions, we analytically and numerically derive results for the electromagnetic susceptibilities (the spin-1 analogs of the tidal Love numbers), which we show to be non-zero in the presence of the non-minimal coupling.
Introduction
Black holes (BHs) are arguably among the most interesting objects in the universe. Their experimental agreement with the gravitational wave (GW) emission of binary systems [1] and with the imaging of the event horizon of a supermassive BH [2] promises to be but the first phase of a research era that is bound to culminate in a deeper understanding of the nature of BHs and gravity, as well as of numerous other related questions in astrophysics and fundamental particle physics.
As with most physical systems, a powerful way to probe BHs is by perturbing them and then seeing how they respond. While we cannot do this in the lab, such perturbed BHs are naturally produced by the merger of compact astrophysical objects. The details of how the post-merger BH relaxes toward equilibrium may in principle be measured through the GWs emitted during the process, the so-called ringdown phase. Quantitatively, the dynamics of the ringdown can be modeled by a superposition of quasi-normal modes (QNMs), whose characteristic frequencies are in one-to-one correspondence with the observable GW signal.
Although radiation in the form of GWs is a universal outcome of perturbing a BH, it need not be the only one. Indeed, a dramatic event such as a BH merger may reasonably be expected to excite other fields besides the metric, and these too will subsequently relax back to equilibrium via emission of the corresponding radiation. This radiation can again be described by QNMs, i.e. characterized in particular by dissipation caused by the presence of the BH horizon. More importantly, the matter fields' QNM spectra depend on the underlying spacetime, thus serving as an alternative probe of the BH. Furthermore, and crucially, the QNMs of a field encode physical information that is not directly available in the GW signal, namely about the coupling of the respective field with gravity and the equivalence principle.
In view of these considerations, we see at least two reasons that motivate the study of QNMs of matter fields in a BH background. The first concerns the fields themselves. As we have said, BH mergers are phenomena unlike anything we might achieve with Earth-based experiments. Thus, we may hope to make use of them as a way to test the existence of new particles, especially those whose dominant interaction with the Standard-Model sector is indirect, through gravity. The second reason, to which we have already alluded, concerns the question of how fields couple to gravity. Establishing the existence of matter-gravity interactions beyond those dictated by the minimal coupling prescription is an exciting prospect that may in principle be achieved through the measurement of QNMs. In fact, as we will discuss later, QNMs offer a particularly clean signature of non-minimal gravitational interactions.
In this paper, we study QNMs of a massive vector field on a Schwarzschild BH background with a particular non-minimal coupling with gravity. Before describing our set-up in detail, let us briefly comment on the existing literature on the subject of vector-field QNMs in BH spacetimes. The study of massless, minimally coupled vector fields in four dimensions and with flat asymptotics dates back to the work of Chandrasekhar [27]. The Proca equation for a massive vector field and the corresponding QNM spectra have been investigated in [28][29][30][31] for a Schwarzschild(-AdS) BH and only recently in [32][33][34] for a Kerr BH.
Here we go beyond previous studies of spin-1 particles by considering the most general Lagrangian of a vector field A µ subject to the following assumptions: (i) The Lagrangian is quadratic in the vector field. This follows from our aim to investigate linear perturbations about vacuum solutions of general relativity (GR), specifically the Schwarzschild metric. Generically, this implies that the vector field must vanish at the background level, and therefore it is sufficient to focus on a quadratic theory for the purpose of deriving the QNM spectrum.
While this is true generically, we should remark that there exist vector-tensor theories that admit so-called "stealth" BH solutions, i.e. solutions that coincide with vacuum GR solutions in spite of having a non-trivial vector field background profile [35]. Our analysis therefore does not encompass this case.
A corollary of this premise is that metric and vector perturbations are decoupled at linear order. The QNM spectrum of GWs is thus exactly the one derived in GR [7,27] and so may be ignored.
(ii) The theory describes precisely five dynamical degrees of freedom, i.e. two in the metric and three in the vector field (or two in the case of a massless spin-1 field, which we will treat as a special case). In other words, we demand the absence of additional propagating modes associated to a loss of constraints or to higher-order equations of motion.
For a generic spacetime background, this Lagrangian extends the Proca theory by the addition of two non-minimal coupling operators. Unsurprisingly, these operators are precisely those obtained from linearizing the Lagrangian of the Generalized Proca theory of a self-interacting massive spin-1 field [36,37]. Our derivation thus serves as a proof of the uniqueness of Generalized Proca theory at the level of linear perturbations about the trivial state A_µ = 0. For a Ricci-flat background the theory further simplifies, leaving only one non-minimal coupling operator, eq. (1.1), built from the vector field strength F_µν and a coupling constant G_6. Our main objective in this paper is to numerically derive the QNM spectrum of vector field perturbations for a range of values of the parameter G_6 and the bare mass µ of the field. Interestingly, for a given BH mass, G_6 is restricted to the window of values −r_g²/2 < G_6 < r_g² (eq. (1.2)), where r_g ≡ 2GM is the Schwarzschild radius (with G the Newton coupling and M the BH mass). This criterion follows from the requirement of stability (absence of ghost and/or gradient instabilities) of the BH under perturbations of the vector field in the localized approximation, i.e. in the limit where the size of the perturbation is much shorter than the typical length scale characterizing the background variation [38,39]. Related to the question of the stability of BH spacetimes under perturbations of generalized vector fields, one may ask whether the criterion (1.2) is not only necessary but also sufficient for ensuring stability. While we plan to address this exhaustively in a dedicated work, here we provide evidence that this is indeed the case for a Schwarzschild BH. Our claim is based on the analysis of quasi-bound states of the vector field, that is, solutions of the generalized Proca equation which decay at spatial infinity. Like QNMs, quasi-bound states have an associated spectrum of complex frequencies, which are of particular interest as they may be used to diagnose the presence of instabilities: a quasi-bound state frequency with positive imaginary part signals an exponentially growing mode and thus an unstable system, at least within the linearized regime. It is worth remarking that the same judgment cannot be made based on the QNM spectrum because, as we will review, the imaginary part of a QNM frequency must be negative by the definition of a QNM.
The principal result of this first study of quasi-bound states of a non-minimally coupled vector field is that the fundamental frequency mode of each degree of freedom of the field has a negative imaginary part within the numerically accessible part of the range given by eq. (1.2). However, because the computational cost of our numerical routine increases as one approaches the bounds in (1.2), we are unable to numerically access values of the coupling G_6 arbitrarily close to the critical points. We partially address this shortcoming by providing an analytical argument, valid for a subset of the spectrum, which shows that quasi-bound states are stable whenever G_6 is within, but arbitrarily close to, the stability bounds.
We have mentioned that the QNM spectrum of matter fields may serve as a powerful tool to test the minimal coupling paradigm dictating the form of matter-gravity interactions. This question is of fundamental importance, so it behooves us to understand which signatures of a non-minimal coupling operator for a given field might be clean and robust enough so as to be potentially detectable. Perturbations of a massless field are arguably one such probe, since minimally coupled massless fields on BH spacetimes in GR are known to be very special, at least due to two properties: isospectrality of their QNM spectra and vanishing linear response coefficients in the static limit.
Isospectrality refers to the equivalence of the QNM spectra of parity-even and parity-odd perturbations [27,40]. The property is featured by massless fields in four dimensions with flat or de Sitter asymptotics, at least for spins s = 0, 1, 2. Isospectrality is however known to fail in higher dimensions [43], for asymptotically anti-de Sitter BHs [13,44], for massive fields [30,45], and for BHs in non-linear electrodynamics [46,47] or in the presence of higher-curvature corrections [16,18]. To our knowledge, the breaking of isospectrality due to non-minimal couplings has not been systematically addressed, although it is known to occur for certain couplings of scalar fields [21,23-26]. Here we fill this gap for spin s = 1 by showing through numerical results that the parity-even and parity-odd spectra of a massless vector field with the non-minimal coupling of eq. (1.1) are indeed distinct.
Although QNMs are the main focus of our work, static perturbations are also interesting in that they define the static response coefficients associated to a given field. For massless spin-2 perturbations the response coefficients physically encode the tidal deformability of the BH and are known as Love numbers [48], see also [49][50][51]. For a massless spin-1 probe field they may be interpreted as the electromagnetic susceptibilities of the field in a BH background. It is a remarkable and well-known property that the static response coefficients of massless fields of spin s = 0, 1, 2 exactly vanish for four-dimensional BHs in GR [52][53][54][55][56][57][58]. The property is however absent in higher dimensions [59,60] as well as for BHs in beyond-GR theories [16,61,62]. In addition, and similarly to isospectrality, the vanishing of Love numbers and susceptibility coefficients is not expected to hold in the presence of non-minimal couplings, although again we are not aware of any exhaustive analyses (see [61] for results in some particular models). Here, we compute the electric and magnetic susceptibilities of dipolar perturbations of a massless vector field as functions of the coupling G 6 in eq. (1.1), and show that they are non-vanishing in agreement with expectations.
We now give an outline of the paper's contents: In Sec. 2, we describe our set-up, including (i) our uniqueness argument for the non-minimal coupling, (ii) the decomposition of the vector field in spherical harmonics, and (iii) the definition of QNMs according to the boundary conditions for the mode equations. In Sec. 3, we present our main results, namely the calculation of the QNM spectra for each mode of the vector field and for a range of values of the coupling G 6 and mass µ. The spectra of a massless field and the breaking of isospectrality are treated as a special case. In Sec. 4, we consider quasi-bound states. This provides evidence for the stability of the system under consideration beyond the local approximation. In Sec. 5, we consider static perturbations and, focusing on a massless field and dipolar modes, derive the electric and magnetic susceptibilities as functions of G 6 . We discuss our results and give some final remarks in Sec. 6. In Appendix A, we provide details of the numerical method used in our calculations.
Non-minimally coupled Proca field
Our study will focus on linear perturbations of a massive vector field A µ about a GR background solution. The background state of the vector field is the trivial one, A µ = 0, as per our definition of a GR solution, i.e. one with vanishing vector hair. It therefore suffices to focus on Lagrangians that are precisely quadratic in the field A µ , while the dependence on the metric tensor is in principle arbitrary. Note that, a priori, we make no restriction on the number of derivatives acting on A µ .
We will additionally require that the theory describe exactly five degrees of freedom (two in the metric and three in the vector field) so as to avoid Ostrogradsky-type ghosts on all backgrounds. A sufficient condition to achieve this is to demand that the equations of motion of the Stückelberg formulation of the theory be of second order in derivatives. This condition is however not a priori necessary, as it may occur that the theory possesses the correct number of constraints even in the presence of higher derivatives in the field equations [63]. We shall nevertheless disregard this possibility here and focus on the simpler set-up with second-order equations of motion.
Our claim is that the most general four-dimensional Lagrangian subject to these assumptions is the one given in eq. (2.1), where M_Pl is the Planck mass, µ is the mass of the vector field, and G_{4,X} and G_6 are coupling constants. The notation chosen for the latter two coefficients is explained by the connection between the Lagrangian (2.1) and the Generalized Proca theory. As mentioned in the introduction, eq. (2.1) may be obtained upon linearizing the Generalized Proca Lagrangian about the trivial vector background A_µ = 0. (The Generalized Proca Lagrangian contains the functions G_4(X) and G_6(X), among others, with X ≡ −(1/2) A^µ A_µ; our coupling constants G_{4,X} and G_6 correspond respectively to G_{4,X}(0) and G_6(0), which are finite by our assumption that A_µ = 0 is a well-defined state.) In particular, the operators multiplying G_6 in the second line (which may be written more compactly in terms of the dual Riemann tensor) will be recognized as the unique extension, as demonstrated by Horndeski [64], of the standard Einstein-Maxwell theory, here restricted to quadratic order.

In this paper we confine our attention to a background given by the Schwarzschild metric and neglect the backreaction of the vector field on the geometry. This assumption is valid at linear order in perturbation theory since, as we remarked, metric and vector fluctuations do not couple at this order. The generalized Proca equation for a Ricci-flat spacetime reduces to eq. (2.2). In this set-up, we are therefore left with two dimensionless parameters: µ r_g and g_6 ≡ G_6/r_g² (with r_g the Schwarzschild radius). Observe that the Lorenz constraint, ∇_µ A^µ = 0 (eq. (2.3)), follows as a consequence of eq. (2.2) whenever µ ≠ 0. In the massless case, we shall instead impose a different constraint as a gauge condition. As mentioned in the introduction, eq. (2.2) features pathological solutions (ghosts and/or gradient-unstable modes) unless the coefficient g_6 is confined to the range −1/2 < g_6 < 1 (eq. (2.4)) [38,39]. While this result was obtained from an analysis of localized perturbations, these bounds on g_6 will be seen to translate into the statement that the mode functions of the vector field should be insensitive to additional poles appearing in the equations of motion. In terms of the Schwarzschild radial coordinate r, these poles occur at the zeros of the factors P_± = 1 − r_±³/r³ (eq. (2.5)), with r_+ ≡ g_6^{1/3} r_g and r_− ≡ (−2g_6)^{1/3} r_g. Demanding that these poles be hidden inside the event horizon then yields (2.4). Thus, although this range was originally derived from different considerations, it has the important implication that the equations will allow for consistent QNM solutions, which at least generically would not be possible if one had poles in the physical domain r > r_g. (QNMs are by definition everywhere regular and have fixed boundary conditions. The presence of a pole would impose an additional matching condition and thus an overdetermined system for the QNM frequency and the amplitude of the QNM function; such a system will generically have no solution. The same remark, of course, also applies to quasi-bound states.)
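As a quick numerical illustration (not from the paper), the following sketch evaluates the pole radii r_± for a given g_6 and checks that both lie inside the horizon precisely within the stability window −1/2 < g_6 < 1. Negative cube roots simply mean r_±³ < 0, so the corresponding factor P_± never vanishes for real r > 0, and the `max(...) < rg` criterion handles this automatically:

```python
import numpy as np

def pole_radii(g6, rg=1.0):
    """Radii r_+ = g6^(1/3) rg and r_- = (-2 g6)^(1/3) rg at which the
    factors P_+ and P_- vanish (signed cube roots for negative arguments)."""
    return np.cbrt(g6) * rg, np.cbrt(-2.0 * g6) * rg

for g6 in (-0.75, -0.25, 0.5, 1.25):
    rp, rm = pole_radii(g6)
    # A physical pole exists outside the horizon iff r_+ > rg or r_- > rg;
    # negative values correspond to no real pole at r > 0 at all.
    hidden = max(rp, rm) < 1.0
    print(f"g6 = {g6:+.2f}: r+ = {rp:+.3f} rg, r- = {rm:+.3f} rg, "
          f"poles hidden inside horizon: {hidden}")
```

Running this prints `False, True, True, False` for the four sample couplings, matching the window (2.4).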
Uniqueness
Generalized Proca theory was constructed as the most general model which reproduces the (shift-symmetric) scalar Horndeski theory in the so-called decoupling limit, where the longitudinal mode of the vector field becomes a dynamical scalar [65,66]. (Other prescriptions for constructing vector-tensor theories have been considered in the literature [67-70], leading to various extensions of Generalized Proca; see also [71] for an effective field theory approach.) As such, the theory is perfectly general, given the assumptions of its construction, on flat spacetime. The uniqueness of Generalized Proca is however not immediate when the coupling with gravity is taken into account, since the covariantization of the decoupling-limit theory need not match, term by term, that of the full theory. In particular, one cannot a priori disregard non-minimal couplings to the curvature tensor beyond those obtained in [37] (see also [72]), as the latter were derived as "counterterms" to cancel the pathological operators that appear upon minimal covariantization. Here, we provide a sketch of the proof of the uniqueness of the Lagrangian (2.1); a detailed proof will be given in a dedicated work where we analyze the general problem without assuming linearity in the vector field.
To reiterate the problem, we seek the most general Lagrangian for a vector field A µ and metric tensor g µν subject to the assumptions of (i) general covariance, (ii) quadratic order in the vector field, and (iii) second-order equations of motion for all the fields in the Stückelberg formulation. Note that we make no assumption on the derivative order of the fields at the level of the Lagrangian.
In the Stückelberg formulation of the theory, the Lagrangian is a functional of the fields (g_µν, A_µ, φ) and is invariant under diffeomorphism and U(1) gauge symmetries. The latter property implies that the vector and Stückelberg scalar can only appear through the invariants F_µν and D_µφ ≡ ∇_µφ + µA_µ (here µ is the mass of the Proca field). The main proposition is that these two building blocks cannot couple to each other in the Lagrangian. To establish this, one notes that covariant derivatives of D_µφ may be chosen as fully symmetrized without loss of generality. Indeed, any mixed-symmetric or antisymmetric projection of ∇_{µ1} ··· ∇_{µ(n−1)} D_{µn}φ can be traded for F_µν (and derivatives thereof) and/or curvature tensors contracted with fully symmetrized derivatives of D_µφ. Since derivatives of F_µν cannot be made fully symmetric, it follows that they cannot be contracted with the tensor ∇_{(µ1} ··· ∇_{µ(n−1)} D_{µn)}φ. An exception to this is the divergence of the field strength, ∇^µ F_µν, and its derivatives; for instance, ∇^µ F_µν D^ν φ is a valid operator that seemingly contradicts our claim. However, the divergence ∇^µ F_µν may in principle be solved for algebraically from the vector field equation of motion, implying that any instance of this term in the Lagrangian may be eliminated through a field redefinition.
The Lagrangian is therefore "separable" in the building blocks F_µν and D_µφ, which may then be analyzed independently. The operators involving only F_µν and the metric constitute purely vector-tensor gauge-invariant terms, hence they satisfy the assumptions of the Horndeski theorem for Einstein-Maxwell theory [64], with the known quadratic-order result L ⊃ (G_6/4) ∗R∗^{µνρσ} F_µν F_ρσ, where the double-dual Riemann tensor is defined as ∗R∗^{µνρσ} ≡ (1/4) ε^{µναβ} ε^{ρσγδ} R_{αβγδ}, together with the standard Maxwell, Einstein-Hilbert and cosmological constant terms. The remaining operators in the Lagrangian must then all be expressible in terms of D_µφ and covariant derivatives of this invariant. Because the conditions we are imposing on the equations of motion must hold for all field configurations, they must hold, in particular, when A_µ = 0. But in this case D_µφ → ∇_µφ and we have precisely the assumptions of the Horndeski theorem for scalar-tensor theory [73] (with the extra condition that φ may not appear without derivative), with the known quadratic-order result, together with the standard scalar kinetic term. In the general case with A_µ ≠ 0, we know that the scalar field derivative must appear "covariantized" in D_µφ, so that the result correctly reproduces the Generalized Proca term upon setting the unitary gauge φ = 0. This concludes our derivation of the Lagrangian (2.1), independently of its relation to the non-linear Generalized Proca theory. The implication is that any consistent extension of or alternative to Generalized Proca must reduce to (2.1) when expanded at quadratic order about the vacuum A_µ = 0, provided the theory admits this state.
Decomposition in vector spherical harmonics
We consider the exterior of a Schwarzschild BH spacetime with line element ds² = −f(r) dt² + f(r)^{−1} dr² + r²(dθ² + sin²θ dφ²), where f(r) = 1 − r_g/r, r_g = 2GM, G is the Newton coupling, and M is the mass of the BH. Given the background symmetries, the equation of motion for the vector field is separable after expanding in vector spherical harmonics (eq. (2.9)), which, in our convention, are defined in terms of the standard scalar spherical harmonics Y_{ℓm}(θ, φ). Under a parity transformation, (θ → π − θ, φ → π + φ), the even-parity and odd-parity harmonics transform with opposite signs, so the two sectors may be analyzed separately. The vector spherical harmonics satisfy an orthonormality condition with weight matrix M^{µν}_Z = diag(1/f², f², ℓ(ℓ+1)/r², ℓ(ℓ+1)/(r² sin²θ)), which is used to factor out the angular dependence in the equation of motion.
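The explicit harmonic expansion (eq. (2.9)) did not survive extraction. As an illustrative reconstruction modeled on common Proca-on-Schwarzschild conventions (e.g. those of Ref. [30]), with the caveat that the normalization factors here are assumptions rather than the authors' exact definitions, the expansion in the four radial mode functions u_1, …, u_4 takes the form:

```latex
A_\mu \, dx^\mu
= \sum_{\ell, m} \Big[
      u_1\, Y_{\ell m}\, dt
    + u_2\, Y_{\ell m}\, dr
    + \frac{u_3}{\ell(\ell+1)} \big( \partial_\theta Y_{\ell m}\, d\theta
        + \partial_\phi Y_{\ell m}\, d\phi \big)
    + \frac{u_4}{\ell(\ell+1)} \Big( \frac{\partial_\phi Y_{\ell m}}{\sin\theta}\, d\theta
        - \sin\theta\, \partial_\theta Y_{\ell m}\, d\phi \Big)
  \Big] ,
```

with the u_1, u_2, u_3 terms carrying parity (−1)^ℓ (the polar, or even, sector) and the u_4 term carrying parity (−1)^{ℓ+1} (the axial, or odd, sector).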
In the following, we suppress the superscripts ℓ and m in the mode functions u_i^{ℓm} and denote partial derivatives with respect to t and r respectively with dots and primes. We also introduce a differential operator (used in the mode equations below) written in terms of the tortoise coordinate r_*, defined by dr_* = f^{−1} dr. In the next subsection we provide the mode equations for the dynamical degrees of freedom. The case of a massless vector field requires a separate analysis, which is done in the following subsection. Readers interested only in the relevant equations and results may consult Tab. 1 for reference.
Mode equations
In order to eliminate the non-dynamical variables we make use of the Lorenz constraint, eq. (2.3), which reduces to eq. (2.16) upon substituting the expansion in eq. (2.9).
In the case of monopole (ℓ = 0) perturbations, u_3 and u_4 are absent from the spherical harmonic expansion. Using the constraint (2.16) we can further eliminate u_1 in favor of u_2 ≡ u_M, with the resulting equation (2.17). For axial perturbations with ℓ ≥ 1, it is convenient to define u_− ≡ P_+^{1/2} u_4, which produces eq. (2.18). For the polar modes with ℓ ≥ 1 we again eliminate u_1 using the constraint (2.16), obtaining the two coupled equations (2.19)-(2.20) for the variables u_2 and u_3. As anticipated in Tab. 1, the monopole mode is only sensitive to the pole P_−, axial modes are only sensitive to the pole P_+, while polar modes with ℓ ≥ 1 are affected by both.
We remind the reader that the parameter g_6 (implicit in the above equations, cf. (2.5)) is restricted to lie in the stability range (2.4), so that the poles P_± never vanish in the physical domain r > r_g. Nevertheless, the observation is pertinent, as we shall be interested in exploring values of g_6 close to the bounds.
Massless case
When the bare mass µ vanishes, the Lagrangian (2.1) is gauge invariant and the identification of the dynamical degrees of freedom requires a separate analysis. We will use the gauge freedom to set u_1 = 0, as was done in Ref. [30]. Note that this is a complete gauge fixing for perturbations compactly supported in space and time. For ℓ = 0, both u_3 and u_4 are again absent, while the generalized Proca equation implies that u_2 = 0, indicating as expected that there is no dynamical monopole mode. For the higher multipoles with ℓ ≥ 1 we introduce the variable u_0 (cf. Tab. 1), in terms of which the parity-even part of the equation of motion can be cast as eq. (2.24), so that there is a single polar mode (for each ℓ, m) in the massless case. As for the axial mode, being gauge invariant, one can directly set µ = 0 in eq. (2.18) to obtain the corresponding equation.
Boundary conditions
We seek solutions to the mode equations in the frequency domain, where they assume the form of eq. (2.25) for the modes u_M, u_−, u_{2,3} and u_0 (cf. Tab. 1), remembering that the functional V couples the two modes u_{2,3} in the polar sector. The frequency ω is in general complex, assuming without loss of generality a positive real part. (If u solves the mode equation for some frequency with Re ω > 0, then u* solves the conjugate equation with Re ω < 0.) Presently, we further assume Im ω < 0, deferring a discussion of the opposite case to Sec. 4. The BH horizon serves as a causal boundary admitting only ingoing modes, hence the physical boundary condition is u ∼ e^{−iωr_*} at the horizon, i.e. as r_* → −∞.
At spatial infinity, r_* → +∞, we have V(u, r) → µ²u for every mode. The general asymptotic solution at spatial infinity is therefore u ≃ c_in e^{−√(µ²−ω²) r_*} + c_out e^{+√(µ²−ω²) r_*} (eq. (2.27)). By definition, QNM solutions correspond to purely outgoing waves at infinity, i.e. with c_in = 0. (To see explicitly that e^{+√(µ²−ω²) r_*} is an outgoing wave, note that we choose the convention for the square root such that Re √(µ²−ω²) > 0, which implies that sign(Im √(µ²−ω²)) = −sign(Im ω) = +1.) Having fixed boundary conditions at both the event horizon and at spatial infinity, we are left with an eigenvalue problem with a discrete set of QNM solutions characterized by a spectrum of frequencies {ω_n}_{n=0}^∞.
Quasi-normal modes: numerical results
Recall that our mode equations depend on the two parameters µ and g_6. The standard Proca theory corresponds to g_6 = 0, whose QNM spectra on a Schwarzschild background were studied in Ref. [30]. Our main aim here is the extension of the analysis to non-zero values of g_6 within the stability range (2.4), sampling also over a range of mass values µ. We restrict our attention to the fundamental QNM frequency (n = 0) and the lowest multipoles ℓ = 0, 1, except in the massless-field case, for which we present results also for the first and second overtones (n = 1, 2) of the dipole modes.
We numerically solve the mode equations using a spectral (collocation) method with Chebyshev interpolation, using up to N = 80 collocation points to ensure converged results. In essence, the method turns a differential boundary-value problem into a non-linear eigenvalue problem with a finite-dimensional matrix. A brief summary of the approach is given in Appendix A; the reader may find a succinct but more general exposition in [33,74], which also provides references to the relevant mathematical literature.

Before proceeding, a word about terminology. The polar sector contains two degrees of freedom for each ℓ ≥ 1, hence two independent QNM spectra. We will refer to these modes as "scalar" and "vector", following [30]. The rationale behind these names is that, in the massless limit and with g_6 = 0, the polar mode equations match the form of the Regge-Wheeler (RW) equations for massless scalar and vector fields, in agreement with the Goldstone boson equivalence theorem. To see this explicitly, one sets µ = 0 and g_6 = 0 and introduces new variables y_2 and y_3 in place of u_2 and u_3; one then finds that y_3 is a pure gauge degree of freedom, while y_2 satisfies the massless vector RW equation. These considerations can be generalized to the case with g_6 ≠ 0, with the same conclusion: in the massless limit, the polar spectrum can be divided into two classes, one corresponding to a massless scalar field and another corresponding to a vector gauge field.
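For reference (a textbook result, not an equation recovered from this paper), the RW equation alluded to here, for a massless, minimally coupled field of spin s on Schwarzschild, reads:

```latex
\frac{d^2 u}{dr_*^2} + \left[ \omega^2 - V_s(r) \right] u = 0 ,
\qquad
V_s(r) = f(r) \left[ \frac{\ell(\ell+1)}{r^2} + (1 - s^2)\,\frac{r_g}{r^3} \right],
\qquad
f(r) = 1 - \frac{r_g}{r} ,
```

with s = 0 for the "scalar" class and s = 1 for the "vector" class (for s = 1 the r_g/r³ term drops out, leaving the pure centrifugal barrier).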
In Figs. 1-4, we present the results for the ℓ = 0, 1 fundamental (n = 0) QNMs in the case of non-vanishing mass, µ ≠ 0. The behaviour for 0 < µ r_g < 0.5 and −1/2 < g_6 < 1 is mapped out in terms of contour plots. To reveal pole-induced behaviour, we also plot the g_6-dependence at fixed exemplary values of µ. The chosen range for the vector field mass µ is motivated by the fact that one expects interesting physical effects when the Compton wavelength of the field is comparable to or larger than the size of the BH, i.e. µ r_g ≲ 1. This can also be understood more mathematically from the fact that the norm of the QNM frequency can typically be estimated as |ω|² ∼ V_max, where V_max is the height of the centrifugal potential barrier [75]. Now, for the QNM function to have the required wave-like behavior at spatial infinity, one also requires |ω| > µ. It follows that there will be no QNMs if µ² is greater than the height of the centrifugal barrier, i.e. if µ r_g ≳ O(1) for the lower multipoles of most physical interest.

The individual results can be understood by revisiting Tab. 1. Each of the modes diverges at the critical values g_6 = −1/2 and/or g_6 = +1 iff the respective perturbation equation is affected by the corresponding pole P_− (P_+). The ℓ = 0 monopole mode, cf. Fig. 1, is affected by P_− only. The ℓ = 1 axial mode, cf. Fig. 2, is affected by P_+ only. Finally, the two coupled polar multipole modes are affected by one pole each: the scalar mode, cf. Fig. 3, by P_−; the vector mode, cf. Fig. 4, by P_+. While we do not present the respective results, we expect the same pole-induced behaviour to persist for all higher ℓ > 1 modes.
In the case of vanishing mass, µ = 0, the perturbations reduce to one axial and one polar mode only, cf. Sec. 2.4. For minimal coupling to the background metric, i.e., for g_6 = 0, the two QNM spectra are known to be isospectral, i.e., the axial and polar spectra agree. Indeed, the respective perturbation equations (2.24) and (2.18) (with µ = 0) become redundant for g_6 = 0, i.e., for P_± = 1. For any g_6 ≠ 0, isospectrality is broken. We verify this explicitly by presenting the n = 0, 1, 2 massless modes in Fig. 5.
As in the massive case, the observed behaviour close to g_6 = −1/2 and/or g_6 = 1 is determined by the poles in the respective perturbation equations. The axial massless mode, cf. eq. (2.18), is affected by P_+ only. The polar massless mode, cf. eq. (2.24), is affected by both poles. The onset of this pole-induced behaviour can be seen explicitly for the n = 0 mode. For the overtones it becomes numerically challenging to resolve.
In fact, the numerical convergence of the spectral methods worsens considerably with growing n. This can intuitively be understood as follows. The number of oscillations in r increases with n. The modes thus become more and more challenging to resolve with spectral methods based on a fixed number of collocation points. The n > 0 results presented in Fig. 5 thus constitute the most challenging numerics of this work. Hence, we explicitly present convergence plots for exemplary points in App. A.
The behavior of the QNM spectrum for small values of g_6 is worth remarking on. As one can glean from Fig. 5 (although we have also verified it from the numerical data), for each n the polar and axial QNM frequencies display a symmetry in their g_6-dependence at linear order, exhibiting the same slope (within numerical precision) but with opposite sign. This feature may hint at the existence of an electromagnetic duality for small but non-zero values of G_6. We will encounter a similar phenomenon when we consider the electromagnetic susceptibilities in Sec. 5.
Finally, we comment on the observed crossing of the axial n = 1 and n = 2 imaginary parts close to g_6 = 1. We are not aware of other examples of such a crossing of imaginary parts. While we find no indication of convergence issues in the applied spectral methods, a confirmation of this result by independent numerical techniques would be welcome.

Figure 5. For any non-vanishing coupling, g_6 ≠ 0, isospectrality is broken. We show the fundamental (n = 0) as well as the first (n = 1) and second (n = 2) overtones in increasingly lighter shading. Where the curves end, spectral methods with N = 80 are found to be insufficient to ensure proper convergence. Exemplary convergence plots (for the points marked with triangles on the g_6-axis) are presented in App. A.
Quasi-bound states
Quasi-bound state solutions to the mode equations are defined by boundary conditions corresponding to an ingoing wave at the event horizon and a vanishing amplitude at spatial infinity. The latter requirement selects c_out = 0 in (2.27), in contrast to the case of QNMs. Importantly, the bound-state behavior u ∼ e^{−√(µ²−ω²) r_*} holds regardless of the sign of Im ω.
The last remark is apposite given our interest in establishing whether the theory admits unstable solutions with Im ω > 0, even if the coupling g_6 lies in the range (2.4) in which localized perturbations are stable. The latter condition is necessary for consistency, as it has been shown that localized modes must be either stable or else suffer from ghost- or gradient-type instabilities, while tachyon-type unstable solutions cannot occur [39]. The caveat to this statement is that tachyonic solutions cannot be fully diagnosed in the localized approximation, which is by definition oblivious to modes of physical size comparable to or larger than the length scales of the background. In other words, we would like to assess whether global solutions could undergo instabilities in the regime where the theory is free from pathologies related to ghosts and negative-gradient modes.
Here we provide strong evidence that tachyonic quasi-bound state solutions for a massive vector field cannot occur on a Schwarzschild BH background. Our first argument in support of this claim is given by the numerical results, presented in Sec. 4.1, for the fundamental (n = 0) bound-state solution for each of the lowest multipole modes of the field (ℓ = 0, 1), sampling over a range of values of the parameter g_6 and a few values of the mass µ. While this certainly does not constitute a full proof, one naturally expects tachyon modes, if they exist at all, to appear for the lowest values of n and ℓ. (For instance, in the case of a massive spin-2 field on a BH background it is the n = 0, ℓ = 0 mode, and only this mode, which is unstable for a certain range of parameters [42,45].) Indeed, tachyonic instabilities should, by definition, eventually disappear as the typical radial and angular wavelengths of the solutions (characterized respectively by n and ℓ) become short enough.
A more critical loophole in this numerics-based argument is our inability to access values of g_6 arbitrarily close to the stability bounds (2.4), as our numerical routine becomes increasingly less efficient as we approach those values. This is important in view of the expectation (suggested also by the numerical results) that quasi-bound state frequencies will differ the most from their values in standard Proca theory precisely near the critical g_6 points. Fortunately, we can patch this issue by means of an analytical proof which shows that the imaginary part of the frequency cannot be positive. This argument is also not a complete one, however: first because it does not apply to the polar modes with ℓ ≥ 1, and second because it assumes that g_6 lies sufficiently close to either of the critical points. These caveats notwithstanding, the argument is otherwise general, valid for any mass µ (with µ² > 0) and for all multipoles ℓ, m. We describe the argument and its application to the monopole and axial modes in Sec. 4.2.
Numerical results
As for the case of quasi-normal modes, cf. Sec. 3 and App. A, we numerically solve the quasi-bound state mode equations via spectral methods with Chebyshev interpolation. We restrict the analysis to the fundamental monopole (ℓ = 0) and lowest multipole (ℓ = 1) modes. We make sure that all of the following results are converged to at least 5% accuracy. (For most parameter values the accuracy is much higher; cf. App. A for exemplary convergence plots in the QNM case.) In Fig. 6, we summarize the behavior of the fundamental ℓ = 0, 1 quasi-bound states in the complex-frequency plane. (With some abuse of terminology, we refer to the polar ℓ = 1 modes as "scalar" and "vector" as we did for QNMs, although in reality quasi-bound states cease to exist in the massless limit, so the Goldstone boson equivalence limit is not meaningful.) To do so, we show the quasi-bound states for −0.4 < g_6 < 0.9 and the representative values µ × r_g = 1, 2/3, 1/2. As µ → 0, these all converge to Re[ω/µ] → 1 and 0 > Im[ω/µ] → 0. This holds for any constant g_6, at least in the investigated range. For the axial mode, cf. the upper-right panel in Fig. 6, we find that Im[ω/µ] approaches zero with a characteristic scaling for all investigated values of µ. We find no indications of the onset of such scaling for the other modes, at least within the investigated range.
In Fig. 7, we exemplify the power-law behaviour that all modes exhibit as µ → 0. Here, we choose to present results at the representative value g_6 = 1/2 only. The behaviour closely resembles the one previously found for g_6 = 0 [30].

Figure 6. Behaviour of the fundamental ℓ = 0, 1 bound states with changing µ and g_6. For each mode, we present the parametric curves mapped out by −0.4 < g_6 < 0.9 (cf. color legend on the right) for three different values of µ × r_g = 1, 2/3, 1/2. The dots indicate the respective value for g_6 = 0.
To summarize, we find no indication of the presence of an unstable mode (i.e., one with Im[ω] > 0). Whenever modes scale towards Im[ω] = 0, we have identified the respective power-law scaling. We view this as strong numerical evidence for the absence of unstable quasi-bound states.
Integral formula
Next we turn to the analytical proof of the fact that Im ω < 0 in our set-up. The method is essentially the one put forth in Ref. [76] in the context of asymptotically anti-de Sitter BHs (see also Ref. [44] for further applications). The interesting observation is that the argument also applies to bound state perturbations of asymptotically flat BHs, albeit with some differences.
We consider eq. (2.25) in the case of a single ODE, so that V(u, r) ≡ V(r)u. We introduce the redefined mode function v ≡ e^{iωr_*} u. The boundary conditions imply that v approaches a constant, v_+, at the horizon and that it decays exponentially at spatial infinity. We multiply the equation for v through by v* and integrate over r_* (eq. (4.3)). Note that each term in this integral gives a finite result thanks to the exponential decay of v (while V is non-singular by assumption). The first term can be integrated by parts, noting that the boundary term vanishes (eq. (4.4)). Taking the difference of this equation with its complex conjugate, we get a relation which can be plugged back into (4.3) to produce the integral formula (4.5) for Im ω. We see that if the potential function V were positive definite, then we would immediately infer that Im ω < 0 and conclude the proof. However, V is not positive definite in the equations within our set-up. Nevertheless, we can prove that, for each mode, its contribution to the integral is indeed non-negative whenever g_6 is sufficiently close to the critical points, i.e., for values such that the poles P_± coincide with the event horizon.

The effective potential of the monopole mode is given by eq. (4.6), and it is easy to see that V_M is not positive definite for all values of µ² and g_6. However, as we are interested in the case when g_6 lies near the bound g_6 = −1/2, we define ϵ ≡ g_6 + 1/2 and isolate the leading-order contribution to the integral (4.5) in an expansion in small ϵ (eq. (4.7)). This integral is manifestly positive. Similarly, the effective potential of the axial modes, eq. (4.8), is also not positive definite for all µ, ℓ, and g_6. The relevant pole is now at g_6 = 1, so we let ϵ ≡ 1 − g_6 and evaluate the integral to leading order in the limit of small ϵ (eq. (4.9)); the result is likewise manifestly positive. This establishes that Im ω < 0 for quasi-bound state perturbations corresponding to monopole and axial modes.

For the polar modes with ℓ ≥ 1 the argument does not readily apply, since in this case one has to deal with a system of coupled equations and with additional terms proportional to derivatives of the mode functions, cf. eq. (2.20). Even though an analogue of eq. (4.5) can be straightforwardly derived, we have been unable to find a bound for the integral of the resulting effective potential. Nevertheless, we see no reason why polar perturbations should behave qualitatively differently from the rest of the spectrum, and the numerical results certainly seem to confirm this. Moreover, as we remarked previously, the expectation is that unstable modes, if they exist, should manifest themselves at the lower end of the multipole ladder. Given our proof of the stability of monopole fluctuations, we take these combined results as strong evidence for the absence of instabilities in the whole quasi-bound state spectrum and the whole range of allowed values of the non-minimal coupling g_6.
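The display equations of this derivation did not survive extraction. As a hedged reconstruction (assuming the mode equation takes the standard tortoise-coordinate form u'' + (ω² − V)u = 0, which fixes the algebra; the labels (4.3)-(4.5) are those referenced in the text), the chain of steps reads:

```latex
% Setting u = e^{-i\omega r_*} v in  u'' + (\omega^2 - V)u = 0  gives
%   v'' - 2i\omega\, v' - V v = 0 .
% Multiplying by v^* and integrating over r_* (eq. (4.3)), then
% integrating the first term by parts (the boundary term vanishes):
\int_{-\infty}^{\infty} dr_* \left( |v'|^2 + 2 i \omega\, v^* v' + V |v|^2 \right) = 0 .
% Subtracting the complex conjugate and using \int (|v|^2)'\, dr_* = -|v_+|^2
% to eliminate \int v^* v' yields the integral formula (4.5):
\operatorname{Im}\omega \;=\; -\,\frac{|\omega|^2 \, |v_+|^2}
  {\displaystyle\int_{-\infty}^{\infty} dr_* \left( |v'|^2 + V |v|^2 \right)} .
```

In this form it is manifest that a (mode-by-mode) positive integral in the denominator forces Im ω < 0, which is the statement used in the text.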
Electromagnetic susceptibilities
Static response coefficients characterize the change of a system under an external time-independent field. For a gravitational field, these coefficients correspond to the tidal Love numbers, which are in principle directly measurable through gravitational wave observations, e.g. of binary systems. For a U(1) gauge field the response coefficients are the electric and magnetic susceptibilities defining the polarizability of the object (in analogy with electromagnetism, although the field of course need not be the Standard Model photon).
As mentioned in the introduction, a remarkable property of four-dimensional BHs in GR is that they do not polarize under the effects of a Maxwell-type field. Yet the expectation is that this attribute will be broken in more general set-ups, in particular if the external U (1) field contains additional interactions, either with itself or with the spacetime metric.
Within our set-up of a Schwarzschild BH and in linear response theory, we have seen that it is only the Horndeski non-minimal coupling operator, eq. (1.1), which can contribute to beyond-GR effects without introducing additional degrees of freedom. The question is then whether the electromagnetic susceptibilities are indeed non-vanishing when the coupling G_6 is non-zero. Here we confirm that they are, focusing for simplicity on the case of dipolar perturbations.
Boundary expansion
We consider the mode equations in the gauge invariant case, i.e. eq. (2.24) for the polar or "electric" field and eq. (2.18) (with µ = 0) for the axial or "magnetic" field, setting ω = 0 as we are interested in the static limit.
For each equation, only one linear combination of the two independent solutions is regular at the event horizon. Demanding regularity thus fixes one integration constant, while the other remains arbitrary, simply setting the overall amplitude of the mode function. Then, modulo this overall constant, the solution at spatial infinity is fully determined, and is in general given by a sum of two modes, one which grows and one which decays with the radius r, i.e., u ≃ c_ext r^{ℓ+1}(1 + …) + c_resp r^{−ℓ}(1 + …) (eq. (5.1)). The leading coefficient of the growing mode, c_ext, is interpreted as the strength of the applied field, while that of the decaying mode, c_resp, gives the corresponding response of the system. Their ratio, k ≡ c_resp/c_ext (eq. (5.2)), defines the linear susceptibility of the system for the given external field.

There are two remarks to keep in mind about the structure of the boundary expansion in eq. (5.1), both related to the fact that the expansion is a Frobenius series. The first is that the series multiplying r^{ℓ+1} in the growing mode may in general contain logarithmic terms. However, in four dimensions these are always subleading and do not affect the definition in (5.2). The second observation is that, because ℓ is an integer, the split between the growing and decaying modes is potentially ambiguous, as they contain the same powers of r after some order [77,78]. Various ways to deal with this issue have been proposed in the literature [51,59,79,80], although in our case it will suffice to simply define the growing mode such that it does not contain the power r^{−ℓ} (which is not to say that the series terminates, since all subsequent powers may a priori be present). This prescription makes the susceptibility k unambiguous and is physically justified by the fact that k so defined is an observable enjoying the property we seek: it vanishes in the absence of non-minimal coupling but is otherwise non-zero, as we now show.
For illustration, let us focus on the dipole modes (ℓ = 1). Interestingly, we find that there is no logarithmic term in this case. However, unlike in the ordinary Maxwell set-up, the series for the growing mode does not terminate. Explicitly, the first few subleading coefficients of the boundary series involve the combinations (4c_resp − g_6(10 − 11g_6)c_ext)/8 and 3(2c_resp + 5g_6 c_ext)/10 for the electric component, and (4c_resp + g_6(10 − g_6)c_ext)/8 for the magnetic component (eqs. (5.3)-(5.4)).
Although the mode equations do not seem to admit an exact solution, they may straightforwardly be solved iteratively by expanding in powers of the coupling g_6. (We recall that g_6 enters the equations through the combinations P_± = 1 − r_±³/r³, cf. (2.5), where r_± < r_g < r and r_±³ ∝ g_6. Therefore, for any r in the physical domain, the mode equations are indeed analytic at g_6 = 0 and the expansion in Taylor series is justified.) After selecting the regular solution in each case, as explained above, we are then able to infer k. We find, up to order O(g_6⁴), the expressions in eq. (5.5), respectively for the electric and magnetic susceptibilities. The expressions in (5.5) confirm the vanishing of the susceptibility in standard Maxwell theory, i.e. with g_6 = 0. For small but non-zero g_6 we find instead the expected dependence k = O(g_6). We observe that the electric and magnetic susceptibilities are equal, up to a sign, at linear order in g_6. We recall that the same phenomenon was observed for the QNM spectrum in the massless (i.e., gauge-invariant) case, cf. Fig. 5. Also remarkable is that the electric susceptibility does not appear to receive non-linear corrections. These results are intriguing and clearly beg for a deeper physical understanding. We hope to come back to this question in future work.
Numerical results
Having understood analytically the polarizability properties of a BH in the approximation of small g_6, we now turn to the exact results derived numerically using a shooting method. We compute the solutions of the mode equations in the vicinity of the BH horizon by expanding in powers of (r − r_g), up to order four. We then evaluate the function at some small (r − r_g), which is used as the initial condition to integrate numerically up to some large radius. The result is then matched to the boundary series discussed in the previous subsection, which we expand up to order r^{−8}, so as to obtain c_ext and c_resp, and hence the susceptibility k. We have checked that the results are robust against changes in the initial and matching radii, as well as in the order at which we terminate the series ansätze.

The results for the electric and magnetic susceptibilities are shown in Fig. 8, plotted as functions of the coupling g_6 within the stability range, eq. (2.4). The plots also show the comparison with the approximate analytical behaviors in eq. (5.5), which are indeed in perfect agreement with the numerical results. For the electric susceptibility we confirm the interesting outcome that the linear truncation in eq. (5.5) appears to be exact, within our numerical precision, even as we get very close to the critical values of g_6 (we can reliably compute k for |g_6 − g_6^crit| ≳ 10⁻³). In contrast, the magnetic susceptibility shows a clear departure from the polynomial approximation for sizable values of g_6, as one would generically expect. (From the truncated Taylor series for k_M in eq. (5.5) one may also construct a Padé approximant in order to obtain a better fit of the numerical results; we thank Hector Silva for pointing this out to us.)
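As a minimal illustration of this shooting pipeline (not the authors' code), the sketch below integrates a static dipole mode equation from a near-horizon series seed out to a large radius with scipy and fits the growing/decaying asymptotics of eq. (5.1) to extract k = c_resp/c_ext. The coefficients in `rhs` are a placeholder: the minimally coupled (g_6 = 0) Maxwell dipole stands in for eqs. (2.18)/(2.24) at ω = 0, so the printed susceptibility should come out numerically close to zero:

```python
import numpy as np
from scipy.integrate import solve_ivp

rg = 1.0  # Schwarzschild radius (units with rg = 1)

def rhs(r, y, g6):
    """Static (omega = 0) dipole equation as a first-order system y = (u, u').
    Placeholder coefficients: the minimally coupled Maxwell dipole,
    (f u')' = 2 u / r^2; g6 would enter through the P_+- factors in the
    full non-minimally coupled equations."""
    u, du = y
    f = 1.0 - rg / r
    fp = rg / r**2
    ddu = (2.0 * u / r**2 - fp * du) / f
    return [du, ddu]

def susceptibility(g6, r0=rg * (1.0 + 1e-4), r_max=400.0):
    # Regular near-horizon behavior u ~ 1 + l(l+1)(r - rg)/rg seeds the ODE
    u0, du0 = 1.0 + 2.0 * (r0 - rg), 2.0
    sol = solve_ivp(rhs, (r0, r_max), [u0, du0], args=(g6,),
                    rtol=1e-11, atol=1e-13, dense_output=True)
    # Match u ~ c_ext r^2 + c_resp / r (the ell = 1 asymptotics) at two radii
    r1, r2 = 0.5 * r_max, r_max
    A = np.array([[r1**2, 1.0 / r1], [r2**2, 1.0 / r2]])
    c_ext, c_resp = np.linalg.solve(A, [sol.sol(r1)[0], sol.sol(r2)[0]])
    return c_resp / c_ext

# For the minimally coupled stand-in the susceptibility vanishes (k ~ 0),
# reproducing the no-polarizability property of GR black holes.
print(susceptibility(g6=0.0))
```

In the actual computation one would additionally keep more terms in both the near-horizon and boundary series, as described in the text, and vary the matching radii to check robustness.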
Discussion
The aim of this paper was to initiate the study of global solutions for massive vector fields non-minimally coupled to gravity in the linear approximation about GR backgrounds. We focused on the simplest but physically important case of a Schwarzschild BH background, and restricted our attention to a single non-minimal coupling operator, namely the Horndeski term given in eq. (1.1). In spite of the simplicity of the model under consideration, we showed that the set-up is in fact unique, in the sense that any vector-tensor Lagrangian must reduce to (2.1) upon linearization of the vector field about the vacuum A µ = 0, assuming the theory describes 3 + 2 dynamical degrees of freedom.
Our principal result is the outcome of the numerical calculation of the fundamental QNM frequency for the lowest multipole modes of the vector field, i.e. the monopole (Fig. 1), the axial dipole (Fig. 2), and the polar scalar and vector dipoles (Figs. 3, 4). We explored a physically motivated range of values for the Proca mass µ, as well as the full range for the (normalized) non-minimal coupling parameter g_6 allowed by stability. However, our results exclude values very close to the bounds, eq. (2.4), where our numerics become unreliable.
It would be desirable to gain a better grasp on the behavior of QNMs when g_6 is at, or arbitrarily close to, the critical values. This is not merely an academic question, since we recall that g_6 ≡ G_6/r_g² depends on the BH mass, so for any given non-zero coupling G_6 there will be a BH mass value such that either of the bounds is saturated. Of course, whether such a BH mass is physical, and whether the theory at that scale still makes sense, is a different question.
In the case where the vector field is massless, the set-up simplifies considerably thanks to the U(1) gauge symmetry of the theory, and one is left with a single mode (for each ℓ, m) in each of the polar and axial sectors, i.e. the analogs of the electric and magnetic fields, allowing us also to compute the first two overtone QNM frequencies (n = 1, 2) as functions of g_6. One interesting, although perhaps not unexpected, conclusion is that the isospectrality between polar and axial QNMs is broken by the non-minimal coupling, cf. Fig. 5.

Another set of valuable observables in the gauge-invariant setting is given by the electromagnetic susceptibilities, corresponding to the linear response of the BH to a static external field. While for a minimally coupled U(1) field BHs in GR do not polarize, as recalled in the introduction, our results demonstrate that this property ceases to hold in the presence of the Horndeski non-minimal coupling that we studied. We have shown this here explicitly for the dipole modes, cf. Fig. 8, for which we also provided some analytical understanding of the dependence of the susceptibility coefficients at linear order in the parameter g_6, cf. eq. (5.5). We plan to undertake a more general analysis in a dedicated work.
The question of the stability of astrophysically relevant GR backgrounds under fluctuations of generalized vector fields motivated us also to study quasi-bound state solutions within our set-up. Unlike QNMs in asymptotically flat spacetimes, quasi-bound state frequencies may in principle develop a positive imaginary part, signaling a tachyon-type instability. Whether this can indeed occur for vector fields is an important issue, because a tachyonic destabilization is a possible mechanism to generate compact astrophysical objects with vector hair starting from a hairless initial state, a phenomenon known as vectorization [81-84]. The no-go result of [39], together with the more general analyses in [85,86], casts doubt on vectorization as a viable mechanism, as these works showed that localized perturbations must be either stable or else grow through wrong-sign kinetic or gradient operators. Our present results further supplement this claim by demonstrating that global bound-state solutions on a Schwarzschild BH background likewise do not exhibit tachyonic growth. We warn the reader that our argument does have some potential loopholes, as we explained at length in Sec. 4.2, although they are not expected to be critical. As an incidental outcome of our analysis, we also showed how an integral formula for the imaginary part of the quasi-bound state frequency due to Horowitz and Hubeny may be applied to asymptotically flat spacetimes. We hope to revisit this problem in a more general setting, e.g. by including matter and spin, in future investigations.
Acknowledgments
We would like to thank Luca Santoni, Hector Silva and Shuang-Yong Zhou for useful conversations and comments. The work of SGS and JZ at Imperial College London was supported by the European Union's Horizon 2020 Research Council grant 724659 MassiveCosmo ERC-2016-COG. The work of AH at Imperial College London was supported by the Royal Society International Newton Fellowship NIF\R1\191008. AH also acknowledges that the work leading to this publication was supported by the PRIME programme of the German Academic Exchange Service (DAAD) with funds from the German Federal Ministry of Education and Research (BMBF). SGS thanks the Peng Huanwu Center for Fundamental Theory at USTC for generous hospitality. JZ is also supported by a scientific research starting grant No. 118900M061 from the University of Chinese Academy of Sciences.
A.1 Method
In this appendix we describe the numerical method used to compute QNMs and quasi-bound states in this paper.
In Sec. 2.2 we decomposed the Proca equation into its angular and radial components. The QNMs and quasi-bound states can be found by solving the non-linear eigenvalue problem for the radial equations (2.17), (2.18), (2.19), (2.20) and (2.24), supplemented with the appropriate boundary conditions discussed in Sec. 2.5. For the mode equations to be amenable to our numerical routine, it is useful to factor out the wave behavior at the boundaries. This is achieved by redefining the mode function according to the ansatz (A.1), with the choice of + sign for QNMs and − sign for quasi-bound states, and where B(r) is a regular function of r (and implicitly of ω) that tends to constant values as r_* → ±∞. In the case of axial perturbations, as well as monopole and massless polar perturbations, the radial equation can be written as a single second-order differential equation for the function B(r).

In order to compute the eigenfrequencies, we first approximate the differential equations with finite-dimensional matrix equations using a collocation method with Chebyshev interpolation, introducing a compactified coordinate ξ(r) (eq. (A.2)). Note that one can choose other mappings ξ(r), the only requirement being that the singularities introduced by the non-minimal coupling terms are far enough from the domain of ξ that they do not dramatically affect the convergence. We then expand B(ξ) in terms of a set of cardinal polynomials p_k(ξ), where p_k(ξ) is defined by p_k(ξ_n) = δ_{nk}, and the ξ_n are the Chebyshev nodes,

ξ_n ≡ cos[π(2n + 1)/(2N + 2)], with n = 0, 1, . . . , N.

The differential equation then becomes the algebraic system (A.6), where

M_{nk}(ω) ≡ p_k''(ξ_n) + C_1(ω, ξ_n) p_k'(ξ_n) + C_2(ω, ξ_n) δ_{nk}.   (A.7)

The derivative matrices p_k'(ξ_n) and p_k''(ξ_n) can be computed using the second barycentric form [87,88]; explicitly,

p_k'(ξ_n) = (w_k/w_n)/(ξ_n − ξ_k) for n ≠ k,   p_n'(ξ_n) = −Σ_{k≠n} p_k'(ξ_n),   (A.8)

p_k''(ξ_n) = 2 p_k'(ξ_n)[p_n'(ξ_n) − 1/(ξ_n − ξ_k)] for n ≠ k,   p_n''(ξ_n) = −Σ_{k≠n} p_k''(ξ_n).   (A.9)

With a good initial guess for the eigenfrequency, in our case e.g. the eigenfrequency of the standard Proca field [30], one can solve eq. (A.6) for ω and the set B(ξ_k).
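As an illustrative sketch (not the authors' code), the following Python builds the barycentric differentiation matrices of eqs. (A.8)-(A.9) and the collocation matrix of eq. (A.7), and validates them on a toy problem with a known spectrum. The coefficient functions C_1, C_2 of the actual mode equations are not reproduced here; for the genuinely nonlinear eigenvalue problem in ω one would iterate from an initial guess, as described in the text. For the toy problem we use Gauss-Lobatto nodes so that Dirichlet conditions can be imposed exactly at the endpoints, whereas the paper's method uses first-kind nodes and encodes the boundary behavior in the B(r) ansatz, needing no boundary rows:

```python
import numpy as np

def cheb_nodes(N):
    """Chebyshev nodes of the first kind on (-1, 1), as in the text."""
    n = np.arange(N + 1)
    return np.cos(np.pi * (2 * n + 1) / (2 * N + 2))

def diff_matrices(xi):
    """Barycentric differentiation matrices p_k'(xi_n), p_k''(xi_n),
    following eqs. (A.8)-(A.9); valid for any set of distinct nodes."""
    diff = xi[:, None] - xi[None, :]
    np.fill_diagonal(diff, 1.0)                 # avoid division by zero
    w = 1.0 / np.prod(diff, axis=1)             # barycentric weights
    D1 = (w[None, :] / w[:, None]) / diff
    np.fill_diagonal(D1, 0.0)
    np.fill_diagonal(D1, -D1.sum(axis=1))       # negative-sum trick
    D2 = 2.0 * D1 * (np.diag(D1)[:, None] - 1.0 / diff)
    np.fill_diagonal(D2, 0.0)
    np.fill_diagonal(D2, -D2.sum(axis=1))
    return D1, D2

def collocation_matrix(omega, xi, D1, D2, C1, C2):
    """M_nk(omega) = p_k'' + C1 p_k' + C2 delta_nk, cf. eq. (A.7)."""
    return D2 + np.diag(C1(omega, xi)) @ D1 + np.diag(C2(omega, xi))

# Sanity check on a toy problem with a known spectrum: B'' + w^2 B = 0 with
# B(+-1) = 0 has eigenfrequencies w = m*pi/2, m = 1, 2, 3, ...
N = 32
xl = np.cos(np.pi * np.arange(N + 1) / N)       # Gauss-Lobatto nodes
_, D2l = diff_matrices(xl)
evals = np.linalg.eigvals(-D2l[1:-1, 1:-1])     # interior (Dirichlet) problem
w = np.sort(np.sqrt(np.abs(evals)))
print(w[:3] * 2 / np.pi)                        # ~ [1, 2, 3]
```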
For the massive polar modes, we instead have two coupled differential equations, say for B_2(r) and B_3(r), after we make the ansatz (A.1) for u_2 and u_3. The procedure is nevertheless the same, i.e. we approximate the two differential equations with a set of algebraic equations of the form (A.6), now with the vector of unknowns containing the values of both B_2 and B_3 at the collocation points and with the matrix M_{nk}(ω) enlarged accordingly.
A.2 Accuracy checks
For all the presented figures we have performed accuracy checks (i) at the points closest to the respective poles at g_6 = −1/2 and g_6 = 1, as well as (ii) at other random points. For the fundamental modes, these convergence tests agree very well with expectations from an analytical error estimate discussed, for instance, in [33, App. C.4]. In particular, we find exponential convergence with growing number N of Chebyshev nodes. Moreover, we can also see that the convergence properties worsen with proximity to singularities in the complex plane.
We also find that the higher overtones (n ≥ 1) beyond the fundamental mode (n = 0) are increasingly difficult to obtain because convergence worsens significantly. While we do not provide an analytical argument, we expect that the underlying reason is an increase in the number of oscillations (in the radial coordinate r) with growing n. The more oscillatory the behaviour, the harder it becomes to resolve with a fixed number of nodes N.
A.3 Values of QNM frequencies
We provide the numerical values of the frequencies in Tables 2-5 for readers to make comparisons.
"Physics",
"Mathematics"
] |
Automated Classification of Mental Arithmetic Tasks Using Recurrent Neural Network and Entropy Features Obtained from Multi-Channel EEG Signals
The automated classification of cognitive workload tasks based on the analysis of multi-channel EEG signals is vital for human-computer interface (HCI) applications. In this paper, we propose a computerized approach for categorizing mental-arithmetic-based cognitive workload tasks using multi-channel electroencephalogram (EEG) signals. The approach evaluates various entropy features, such as the approximation entropy, sample entropy, permutation entropy, dispersion entropy, and slope entropy, from each channel of the EEG signal. These features were fed to various recurrent neural network (RNN) models, such as long short-term memory (LSTM), bidirectional LSTM (BLSTM), and gated recurrent unit (GRU), for the automated classification of mental-arithmetic-based cognitive workload tasks. Two cognitive workload classification strategies (bad mental arithmetic calculation (BMAC) vs. good mental arithmetic calculation (GMAC); and before mental arithmetic calculation (BFMAC) vs. during mental arithmetic calculation (DMAC)) are considered in this work. The approach was evaluated using the publicly available mental arithmetic task-based EEG database. The results reveal that our proposed approach obtained classification accuracy values of 99.81%, 99.43%, and 99.81% using the LSTM-, BLSTM-, and GRU-based RNN classifiers, respectively, for the BMAC vs. GMAC cognitive workload classification strategy using all entropy features and a 10-fold cross-validation (CV) technique. The slope entropy features combined with each RNN-based model obtained higher classification accuracy compared with the other entropy features for the classification of the BMAC vs. GMAC task. We obtained average classification accuracy values of 99.39%, 99.44%, and 99.63% for the classification of the BFMAC vs. DMAC tasks using the LSTM, BLSTM, and GRU classifiers with all entropy features and a hold-out CV scheme. Our developed automated mental arithmetic task classification system is ready to be tested with more databases for real-world applications.
Introduction
The amount of mental effort exerted by a person in response to a given cognitive task is called the cognitive workload [1]. The human brain produces different responses to various cognitive tasks, and these responses can be further investigated by analyzing the brain's electrical activity [2]. Features derived from this electrical activity have been used for various applications, such as the detection of generalized and partial epileptic seizures [17][18][19], emotion recognition [20], and brain-computer interface (BCI) [21] applications.
Non-linear entropy features, such as dispersion entropy [22], slope entropy [23], and other entropy measures [24,25], have not yet been explored for mental-arithmetic-based cognitive task classification using EEG signals. Recently, deep learning approaches, such as the convolutional neural network (CNN) and recurrent neural network (RNN), have been used extensively for EEG signal processing applications [26]. An RNN is a deep neural network model used for the analysis of sequential data in natural language processing (NLP) and speech recognition applications [27].
This type of network exploits long-range dependencies for the modeling of time-series data [28]. It combines the information from the previous state and the current input to evaluate the output at the present time-step. The LSTM-based RNN model has been used for mental arithmetic task classification using EEG signal features [15]. RNN models can exploit the correlations between the features of EEG signals at different time-steps for the classification of mental arithmetic tasks.
Other RNN variants, such as the bidirectional LSTM (BLSTM) and gated recurrent unit (GRU) [29], have not been used for the categorization of mental arithmetic tasks such as BFMAC vs. DMAC and BMAC vs. GMAC using EEG signal features. The novelty of this work is to explore various state-space domain non-linear entropy features and RNNs for mental-arithmetic-based cognitive workload task classification using EEG signals. The major contributions of this work are highlighted as follows:
• The slope entropy, dispersion entropy, permutation entropy, sample entropy, and approximation entropy measures were computed from each EEG channel.
• The LSTM-, BLSTM-, and GRU-based RNN models were used to classify mental arithmetic tasks.
• The classification strategies BMAC vs. GMAC and BFMAC vs. DMAC are considered in this work.
The remaining parts of this paper are organized as follows. In Section 2, the multi-channel EEG database for mental-arithmetic-based cognitive workload classification is described. The proposed approach is explained in Section 3. In Section 4, the results obtained using the proposed approach are presented and discussed. Finally, Section 5 concludes the paper.
Multi-Channel EEG Database for Mental Arithmetic Tasks
In this work, we used a public database (the PhysioNet EEG mental arithmetic task dataset) to evaluate the proposed approach [30,31]. This database contains artifact-free multi-channel EEG recordings from 36 subjects. The multi-channel EEG signals were recorded using the Neurocom monopolar 23-channel data acquisition system [30]. The electrode setup used to record the multi-channel EEG signals is depicted in Figure 1.
The subjects satisfied clinical inclusion criteria such as no clinical manifestations of cognitive impairment, no verbal or non-verbal learning disabilities, and normal vision during the EEG recordings [30]. Subjects with drug or alcohol addiction or with psychiatric disorders were excluded from the recordings. Each subject performed an arithmetic task, namely the subtraction of two numbers. In this database, the multi-channel EEG recording of each subject comprises 180 s of resting-state EEG and 60 s of mental arithmetic calculation-based cognitive-state EEG data. The sampling frequency of the multi-channel EEG recordings was 500 Hz. The recordings were divided into two groups, good (G) and bad (B), based on the arithmetic calculations performed by each subject. The GMAC class corresponds to subjects who performed good-quality arithmetic calculations, with 21 ± 7.4 calculations per four minutes. Similarly, the BMAC class corresponds to subjects who performed bad-quality mental arithmetic calculations, with 7 ± 3.6 calculations per four minutes.
The annotations or the count quality of each multi-channel EEG recording are given in the database. The symbols 'B' and 'G' denote bad and good mental-arithmetic-based EEG recordings based on the number of subtractions performed by the subject. The annotations of the multi-channel EEG recordings for all subjects before performing any mental arithmetic calculations (BFMAC or rest state) and during the mental arithmetic calculations (DMAC or active state) are also given in the database. In this work, the cognitive task classification strategies, such as BMAC vs. GMAC and BFMAC vs. DMAC, are studied.
Method
In this section, we describe the proposed method for the automated classification of BFMAC vs. DMAC and BMAC vs. GMAC using multi-channel EEG signals. The flowchart for the mental arithmetic calculation-based cognitive workload classification task is depicted in Figure 2. The step-by-step procedure for the automated categorization of mental arithmetic calculation-based cognitive workload classification tasks is as follows:
• Segmentation of multi-channel EEG recordings into multi-channel EEG frames.
• Evaluation of the state-space domain non-linear entropy features from each multi-channel EEG frame.
• Classification of mental arithmetic tasks using RNN models.
We describe the detailed theory of each block of the flowchart in the following subsections.
Segmentation of Multi-Channel EEG Recordings
In this study, we considered a non-overlapping window of 2 s duration (2 × 500 = 1000 samples) for segmenting each multi-channel EEG recording. A total of 30 frames were extracted from each recording, giving 36 × 30 = 1080 multi-channel EEG frames in total. Figures 3 and 4 show the EEG frames of the Fp1 and O1 channels for the BMAC (Figure 3a,b) and GMAC (Figure 3c,d) tasks, as well as the DMAC (Figure 4a,b) and BFMAC (Figure 4c,d) tasks. We observed significant differences in the amplitude values and temporal characteristics of the EEG signals between the BMAC and GMAC tasks, and between the BFMAC and DMAC tasks.
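A minimal sketch of this framing step is given below; the array layout and function name are assumptions, and only the windowing parameters follow the text above.

```python
import numpy as np

def segment_frames(eeg, fs=500, win_s=2.0):
    """Split one multi-channel recording of shape (n_channels, n_samples)
    into non-overlapping 2 s frames -> (n_frames, n_channels, 1000)."""
    win = int(fs * win_s)              # 2 s x 500 Hz = 1000 samples per frame
    n_frames = eeg.shape[1] // win     # e.g. 30 frames for 60 s of data
    trimmed = eeg[:, : n_frames * win]
    return trimmed.reshape(eeg.shape[0], n_frames, win).swapaxes(0, 1)
```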
A study [32] reported that the positive level amplitude increased during mental arithmetic tasks. The temporo-centro-parietal activity in multi-channel EEG signal increased during the mental arithmetic calculation-based active state compared to the rest state [32,33]. These physiological changes affect both the temporal and spatial characteristics of multi-channel EEG signals. Therefore, the features evaluated from the EEG signals can be used for the automated classification of mental-arithmetic-based cognitive tasks. In the following subsection, the entropy features evaluated from the multi-channel EEG frames are described.
Non-Linear Entropy Features
In this study, we computed five entropy measures, viz. the slope entropy [23], dispersion entropy [22], permutation entropy [24], sample entropy [25], and approximation entropy [34], from each EEG channel for the classification of mental-arithmetic-based cognitive workload tasks. The slope entropy was evaluated using the difference between the consecutive amplitude values of each embedded vector extracted from the signal [23].
Here, we evaluated the slope entropy from each EEG channel. The step-by-step procedure for the $i$th channel EEG signal is as follows [23]:
Step 1: The $i$th channel EEG signal is denoted as $x_i = [x_i(1), x_i(2), \ldots, x_i(N)]$, where $N$ is the number of samples. The $j$th embedded vector extracted from the $i$th channel EEG signal is given by $y_{i,j}^{L} = [x_i(j), x_i(j+1), \ldots, x_i(j+L-1)]$, where $L$ is the embedding dimension and $j = 1, 2, \ldots, N - L + 1$.
Step 2: The difference between consecutive sample values of each embedded vector of the $i$th channel EEG signal is evaluated, and the slope signal for the $j$th embedded vector is
$$d_{i,j}(k) = x_i(j+k) - x_i(j+k-1), \qquad k = 1, 2, \ldots, L-1.$$
In vector form, the slope signal of the $j$th embedded vector is denoted as $d_{i,j} = [d_{i,j}(1), d_{i,j}(2), \ldots, d_{i,j}(L-1)]$.
Step 3: Each element of the slope signal for the $j$th embedded vector of the $i$th channel is mapped to negative and positive integer values. Following [23], this mapping can be written as
$$s_{i,j}(k) = \begin{cases} +2, & d_{i,j}(k) > \zeta \\ +1, & \eta < d_{i,j}(k) \le \zeta \\ 0, & |d_{i,j}(k)| \le \eta \\ -1, & -\zeta \le d_{i,j}(k) < -\eta \\ -2, & d_{i,j}(k) < -\zeta \end{cases}$$
where $\eta$ and $\zeta$ are the slope entropy parameters, with $\zeta > \eta$.
Step 4: For the $j$th embedded vector of the $i$th channel, the mapped pattern containing the positive and negative integer values is evaluated and denoted as $\pi_j$.
Step 5: The mapped patterns for all embedded vectors are evaluated. The relative frequency (RF) vector is computed using the pattern-matching concept over all patterns and is denoted as $r_i = [r_i(1), r_i(2), \ldots, r_i(T)]$, where $T$ is the total number of elements in the RF vector.
Step 6: The probability of the $t$th pattern for the $i$th channel is obtained by normalizing the RF vector [23],
$$p_i(t) = \frac{r_i(t)}{\sum_{t'=1}^{T} r_i(t')},$$
and the slope entropy of the $i$th channel EEG signal is evaluated as the Shannon entropy of these probabilities,
$$\mathrm{SlopeEN}_i = -\sum_{t=1}^{T} p_i(t)\,\ln p_i(t).$$
Similarly, the approximation entropy (AppEN), permutation entropy (PermEN), dispersion entropy (DisEN), and sample entropy (SampEN) features were also calculated from each EEG channel; for the $i$th EEG channel, these features are denoted as AppEN$_i$, PermEN$_i$, DisEN$_i$, and SampEN$_i$, respectively. For more details on AppEN, PermEN, DisEN, and SampEN, we encourage readers to refer to [22,24,25,34].
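For illustration, a compact single-channel slope-entropy sketch following Steps 1-6 is given below; the defaults for L, η, and ζ are illustrative placeholders, not the paper's tuned values.

```python
import numpy as np
from collections import Counter

def slope_entropy(x, L=3, eta=1e-3, zeta=1.0):
    """Slope entropy of one EEG channel x, following Steps 1-6 above."""
    d = np.diff(x)                                  # Step 2: slope signal
    sym = np.select(                                # Step 3: symbol mapping
        [d > zeta, d > eta, np.abs(d) <= eta, d >= -zeta],
        [2, 1, 0, -1], default=-2)
    # Steps 4-5: relative frequencies of symbol patterns of length L-1,
    # one pattern per embedded vector (j = 1, ..., N - L + 1).
    counts = Counter(tuple(sym[j:j + L - 1]) for j in range(len(sym) - L + 2))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()                                    # Step 6: probabilities
    return float(-np.sum(p * np.log(p)))            # Shannon entropy
```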
As five entropy features were computed from each EEG channel, a 95-dimensional feature vector (five entropies over 19 channels) was obtained from each multi-channel EEG frame. This 95-dimensional feature vector sequence, z(t), was used as the input to the RNN-based models for classification.
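Stacking every entropy measure over every channel then gives the per-frame feature vector; a one-function sketch (the ordering of features is an assumption):

```python
import numpy as np

def frame_features(frame, entropy_fns):
    """frame: (n_channels, n_samples) array; entropy_fns: list of callables,
    e.g. [slope_entropy, ...]. Returns n_entropies * n_channels values
    (5 x 19 = 95 in this study), ordered entropy-major."""
    return np.array([fn(ch) for fn in entropy_fns for ch in frame])
```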
Recurrent Neural Network (RNN)
In this study, we used three RNN variants, namely the LSTM, BLSTM, and GRU. The block diagram of the mental arithmetic task classification using the LSTM, BLSTM, and GRU classifiers is shown in Figure 5. The classification strategies BMAC vs. GMAC and BFMAC vs. DMAC were used in this work. The feature matrix is denoted as Z ∈ R^{q×s}, where q denotes the number of instances or time-steps and s = 95 denotes the number of features. The training and test instances for each type of RNN classifier were selected using both hold-out and 10-fold cross-validation (CV) techniques [6,35].
For hold-out CV, we considered 60%, 10%, and 30% of the instances for training, validation, and testing, respectively, for the LSTM, BLSTM, and GRU models [35]. The number of instances used for each class is shown in Table 1. As the table shows, the BMAC vs. GMAC classification task suffers from a class imbalance problem. We therefore applied random over-sampling during the training of each type of RNN classifier to overcome the class imbalance [36]. The LSTM classifier is a type of RNN [37] that has been used in different biomedical applications [28,38,39].
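A sketch of the hold-out split with minority-class over-sampling follows; the 60/10/30 fractions come from the text, while the function name and random seeding are illustrative.

```python
import numpy as np

def holdout_split_oversampled(y, rng=np.random.default_rng(0)):
    """60/10/30 train/validation/test split of instance indices, with
    random over-sampling of the minority class in the training set only."""
    idx = rng.permutation(len(y))
    n_tr, n_va = int(0.6 * len(y)), int(0.1 * len(y))
    tr, va, te = idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]
    classes, counts = np.unique(y[tr], return_counts=True)
    minority = classes[np.argmin(counts)]
    extra = rng.choice(tr[y[tr] == minority],
                       counts.max() - counts.min(), replace=True)
    return np.concatenate([tr, extra]), va, te
```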
The LSTM layer mainly comprises the cell, candidate value, input gate, forget gate, and output gate. The cell acts as a memory and is used to retain information across different time intervals [37], while the flow of information into and out of the cell is controlled by the gates. The architecture of the LSTM network is shown in Figure 6. In an LSTM, the output at the $t$th time-step is evaluated from the input $z(t)$ and the activation $g(t-1)$ of the $(t-1)$th time-step [40]. We denote the memory cell as $m(t)$, the candidate value as $\tilde{m}(t)$, the forget gate as FG, the update gate as UG, the output gate as OG, and the output at the $t$th time-step as $g(t)$ [37]. The LSTM relations for the forget gate, update gate, candidate value, memory cell, and output are given as follows [37]:
$$\mathrm{FG}(t) = f\big(W_{FG}\,[g(t-1), z(t)] + b_{FG}\big) \qquad (5)$$
$$\mathrm{UG}(t) = f\big(W_{UG}\,[g(t-1), z(t)] + b_{UG}\big) \qquad (6)$$
$$\tilde{m}(t) = \tanh\big(W_{m}\,[g(t-1), z(t)] + b_{m}\big) \qquad (7)$$
$$m(t) = \mathrm{FG}(t) \otimes m(t-1) + \mathrm{UG}(t) \otimes \tilde{m}(t) \qquad (8)$$
$$\mathrm{OG}(t) = f\big(W_{OU}\,[g(t-1), z(t)] + b_{OU}\big) \qquad (9)$$
$$g(t) = \mathrm{OG}(t) \otimes \tanh\big(m(t)\big) \qquad (10)$$
where $W_{FG}$ and $b_{FG}$ are the weight and bias values for the forget gate, $W_{UG}$ and $b_{UG}$ are the weight and bias values for the update gate, and $W_{OU}$ and $b_{OU}$ are the weight and bias values at the output gate. $f$ denotes the sigmoid activation function, and the operator $\otimes$ is the Hadamard product [41].
The BLSTM considers both past and future time-step information to model the current time-step of the RNN model [42]. It consists of forward and backward LSTM parts. The forward LSTM follows Equations (5)-(10), while the backward LSTM applies the same relations for the forget gate, update gate, candidate value, memory cell, and output gate to the time-reversed sequence, with its own weights and biases [42].
The GRU is a simplified version of the LSTM that consists of two gates, namely the update gate and the reset gate [43]. The GRU model architecture is shown in Figure 7. The reset gate (RG), update gate (UG), candidate value $\tilde{m}(t)$, and memory cell $m(t)$ at the $t$th time-step are given by [43]:
$$\mathrm{RG}(t) = f\big(W_{RG}\,[m(t-1), z(t)] + b_{RG}\big)$$
$$\mathrm{UG}(t) = f\big(W_{UG}\,[m(t-1), z(t)] + b_{UG}\big)$$
$$\tilde{m}(t) = \tanh\big(W_{m}\,[\mathrm{RG}(t) \otimes m(t-1), z(t)] + b_{m}\big)$$
$$m(t) = \mathrm{UG}(t) \otimes m(t-1) + \big(1 - \mathrm{UG}(t)\big) \otimes \tilde{m}(t)$$
In this study, the classification model shown in Figure 5 comprises the input layer, an RNN variant layer, a fully-connected (FC) layer, a softmax layer, and a classification layer. The input feature vector at the $t$th time-step is denoted as $z(t)$. The LSTM/BLSTM/GRU layer was used to process the input feature sequence, and the number of hidden neurons in this layer is denoted as $n_h$. The output of the LSTM/BLSTM layer is $g(t)$, with $t = 1, \ldots, T$, where $T$ is the total number of time-steps considered for the LSTM, BLSTM, and GRU layers. For the GRU layer, the output is the same as the memory, $m(t)$.
The FC layer was used to convert the LSTM/BLSTM/GRU layer activation into a feature vector containing two features, one per class for the $i$th instance, which is passed to the classification layer with a softmax-based activation function. The $k$th output neuron activation for the $t$th time-step is evaluated as
$$\sigma_k(t) = \frac{e^{a_k(t)}}{\sum_{k'=1}^{K} e^{a_{k'}(t)}},$$
where $K = 2$ is the number of classes and $a_k(t)$ is the $k$th FC-layer activation. The binary cross-entropy function was used as the cost function for the LSTM, BLSTM, and GRU classifiers [35]. The training parameters used for the LSTM, BLSTM, and GRU models are shown in Table 2. The Adam optimizer was used for the evaluation of the weight and bias parameters. The performance of the three RNN variants for the classification of mental arithmetic calculation tasks was evaluated using the accuracy, sensitivity, F1-score, and specificity measures [35].
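A hedged PyTorch sketch of this pipeline is shown below; the layer sizes and training settings are illustrative placeholders, not the values in Table 2.

```python
import torch
import torch.nn as nn

class EntropyRNNClassifier(nn.Module):
    """Input -> LSTM/BLSTM/GRU -> FC -> softmax over K = 2 classes."""
    def __init__(self, n_features=95, n_hidden=128, bidirectional=False):
        super().__init__()
        # Swap in nn.GRU here for the GRU variant of the classifier.
        self.rnn = nn.LSTM(n_features, n_hidden, batch_first=True,
                           bidirectional=bidirectional)
        out_dim = n_hidden * (2 if bidirectional else 1)
        self.fc = nn.Linear(out_dim, 2)    # two-neuron fully-connected layer

    def forward(self, z):                  # z: (batch, T, 95) entropy sequences
        g, _ = self.rnn(z)                 # g: (batch, T, out_dim)
        return self.fc(g[:, -1])           # class logits at the last time-step

# Training sketch: cross-entropy (softmax included) with the Adam optimizer.
# model = EntropyRNNClassifier(bidirectional=True)   # BLSTM variant
# opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# loss = nn.CrossEntropyLoss()(model(z_batch), labels)
```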
Results and Discussion
In this section, we discuss the statistical analysis of the selected entropy features for the BMAC vs. GMAC and BFMAC vs. DMAC classification tasks and the classification results obtained using the RNN-based models. Student's t-test was used to evaluate the statistical significance of the entropy features of the EEG signals for the BFMAC vs. DMAC and BMAC vs. GMAC classification tasks [44]. The significance level for the t-test of the entropy features of each channel for both classification tasks was set to 0.05.
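This per-feature screening can be scripted with SciPy; a sketch (array shapes and the function name are assumptions):

```python
from scipy.stats import ttest_ind

def significant_mask(feat_a, feat_b, alpha=0.05):
    """feat_a, feat_b: (n_frames, n_features) entropy values for the two
    classes (e.g. BFMAC vs. DMAC). Returns a boolean mask of the features
    whose two-sample t-test p-value falls below the 0.05 level."""
    _, p = ttest_ind(feat_a, feat_b, axis=0)
    return p < alpha
```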
Box-plots showing the within-class variations for the Fp1-channel EEG signal dispersion entropy, F7-channel EEG signal slope entropy, C4-channel EEG signal approximation entropy, O1-channel EEG signal sample entropy, and O1-channel EEG signal permutation entropy features for the BFMAC vs. DMAC classification task are shown in Figure 8a-e, respectively. From these plots, the median values of each entropy feature were different for the BFMAC and DMAC classes. Similarly, we show the mean and standard deviation values of different entropy features for the Fp1, F7, C4, and O1-channel EEG signals in Table 3.
For channel Fp1, the p-values of the approximation entropy, dispersion entropy, sample entropy, and slope entropy features were less than 0.05. Significant differences in the mean values of the Fp1 channel EEG signal entropy features in between the BFMAC and DMAC classes were also observed. Moreover, for the F7-channel EEG signal, the p-values of all entropy features were found to be less than 0.05.
For the C4-channel EEG signal, all entropy features except the dispersion entropy showed p-values of less than 0.05, and these selected features were clinically significant for the classification of BFMAC vs. DMAC tasks. For the O1-channel EEG signal, higher differences in the mean values of the sample entropy and permutation entropy were observed between the BFMAC and DMAC classes. Statistical variations were also observed for the entropy features of the other EEG channels. Similarly, the box-plots of the Fp1-channel approximation entropy, Fp1-channel sample entropy, F7-channel dispersion entropy, C4-channel slope entropy, and O1-channel permutation entropy features for the BMAC and GMAC classes are depicted in Figure 9a-e, respectively. The statistical analysis of all entropy features of the Fp1, F7, C4, and O1-channel EEG signals for the BMAC and GMAC classes is shown in Table 4. From the table, the Fp1-channel approximation entropy, dispersion entropy, sample entropy, and permutation entropy features had higher mean value differences between the BMAC and GMAC classes when compared to the slope entropy features.
Similarly, the F7-channel dispersion entropy, permutation entropy, and slope entropy features had p-values of less than 0.05. For the C4-channel EEG signal, higher mean value differences in the slope entropy, approximation entropy, permutation entropy, and sample entropy features were observed, and the p-values of these entropy features were less than 0.05, unlike that of the dispersion entropy feature. Similarly, for the O1-channel EEG signal, the dispersion entropy, permutation entropy, and slope entropy features had p-values of less than 0.05 for the BMAC vs. GMAC classification.
The p-values of the approximation entropy and sample entropy features of the O1-channel EEG signals were more than 0.05 for the BMAC vs. GMAC classification task. From the statistical analysis results, the various entropy features computed from the different channel EEG signals effectively captured the diagnostic information for the automatic classification of mental arithmetic cognitive tasks. The classification performance of the GRU, LSTM, and BLSTM models for the BMAC vs. GMAC classification task using hold-out CV is shown in Table 5. In this work, five trial-based random hold-out CVs were used, and the mean and standard deviation values of each performance measure were evaluated [45]. We observed from this table that all RNN models obtained accuracy, sensitivity, precision, specificity, and F-score values of more than 98% for the BMAC vs. GMAC classification task.
In Table 6, we show the performance of the RNN classifiers using 10-fold CV-based multi-channel EEG instance selection for the BMAC vs. GMAC classification task. For the GRU model, the accuracy, sensitivity, specificity, precision, and F-score values were more than 98% for each fold, and the average values of all performance measures were more than 99%. Similarly, for the BLSTM model, the accuracy, sensitivity, and F-score values were higher than 98% in each fold; however, the specificity values for fold 3 and fold 4 were 96.77% and 97.00%, respectively. For the GRU model, the specificity value was 96.43% for fold 1 and 100% for the remaining folds using all entropy features. The classification performance obtained using the LSTM, BLSTM, and GRU models with each individual entropy feature and all EEG channels is shown in Table 7. The slope entropy features combined with each type of RNN model obtained average accuracy values of more than 93%, higher than the other entropy features. The dispersion entropy features coupled with each RNN model also demonstrated average accuracy values of more than 99%. The sensitivity, precision, specificity, and F-score values of each type of RNN model were high using the slope entropy features when compared to the other entropy features extracted from the multi-channel EEG signals.
The classification performance of the LSTM, BLSTM, and GRU models was evaluated using the entropy features of each individual EEG channel, as shown in Table 8. The entropy features evaluated from the O2, Fz, Cz, and Pz channel EEG signals obtained average accuracy values of more than 84% with each type of RNN model, higher than the entropy features of the other EEG channels. Therefore, the O2, Fz, Cz, and Pz EEG channels were found to be significant for the classification of mental arithmetic tasks.
Table 6. The classification performance obtained using RNN classifiers with the 10-fold CV approach for the automated classification of the BMAC vs. GMAC task.
Table 7. The classification performance obtained using RNN classifiers for selected entropy features with the hold-out CV approach for the BMAC vs. GMAC classification task.
The plots of accuracy versus the number of iterations obtained using the LSTM, BLSTM, and GRU classifiers for the automated classification of the BFMAC vs. DMAC tasks are shown in Figures 10-12, respectively. Both the training and validation accuracy values were 100% for the LSTM classifier. Similarly, for the BLSTM classifier, the training and validation accuracy values were 100% and 98.54% after 200 epochs. The training and validation accuracy values approached 100% using the GRU classifier for the automated classification of the BFMAC vs. DMAC tasks.
Similar variations were also observed in plots of accuracy versus the number of iterations obtained using the LSTM, BLSTM, and GRU classifiers for the automated classification of the BMAC-vs. GMAC-based cognitive workload tasks. We also demonstrated the confusion matrices obtained using the LSTM, BLSTM, and GRU classifiers with one trial of hold-out CV for the automated classification of the BFMAC vs. DMAC and BMAC vs. GMAC tasks. In Figure 13a-c, we show the confusion matrices obtained using the LSTM, BLSTM, and GRU classifiers for the automated classification of the BFMAC vs. DMAC tasks.
The confusion matrix plots obtained using the LSTM, BLSTM, and GRU classifiers for the automated classification of the BMAC vs. GMAC tasks are depicted in Figure 13d-f, respectively. The true positive and true negative values were high for all three RNN classifiers for the BFMAC vs. DMAC classification tasks. The LSTM and BLSTM classifiers yielded higher false-positive values compared with the GRU classifier for the automated classification of the BMAC vs. GMAC tasks. We also evaluated the performance of the LSTM, BLSTM, and GRU classifiers using subject-independent CV for the automated classification of the BMAC vs. GMAC tasks.
The subject-wise accuracy values using the LSTM, BLSTM, and GRU classifiers from multi-channel EEG instances are shown in Figure 14a-c. From these figures, the average accuracy values obtained using subject-independent CV for the LSTM, BLSTM, and GRU classifiers were 58.70%, 55.74%, and 60.55%, respectively. The GRU classifier obtained the highest classification accuracy among the three RNN classifiers with subject-independent CV for the classification of the BMAC vs. GMAC tasks. The classification results evaluated using each type of RNN model with all entropy features of the multi-channel EEG signals for the BFMAC vs. DMAC classification task with hold-out CV are shown in Table 9. The LSTM, BLSTM, and GRU models obtained accuracy values of more than 99% using all entropy features of the multi-channel EEG signals for the BFMAC vs. DMAC classification task. The results obtained using the various RNN-based models with all entropy features of multi-channel EEG and 10-fold CV are shown in Table 10.
From these results, the LSTM and BLSTM models obtained accuracy, sensitivity, and specificity values of more than 97% for each fold. However, the specificity of the GRU-based classifier was less than 97% for fold 3 and fold 6. The average accuracy of each type of RNN classifier was more than 99% for the BFMAC vs. DMAC cognitive workload classification task. Therefore, the entropy features computed using multi-channel EEG demonstrated high classification performance for cognitive workload tasks. We compared the classification results obtained using our approach with existing methods for classifying mental arithmetic calculation-based cognitive tasks using multi-channel EEG signals obtained from the same database. The comparison results are shown in Table 11. The existing methods used features such as the mean amplitude, variance, Shannon entropy, energy, and other statistical features, and various supervised learning-based classifiers, such as SVM, LSTM, and decision trees, for the BFMAC vs. DMAC cognitive task classification scheme.
The L2-norm, mean amplitude, energy, and Shannon entropy features combined with the decision tree classifier obtained a classification accuracy of 95.80%. The statistical features of multi-channel EEG signals coupled with the decision tree model achieved an accuracy of 91.67%, which is less than the classification accuracy reported in [13]. Similarly, the SVM classifier combined with the variance, energy, and Shannon entropy features obtained an accuracy of 98.60% for the automated classification of the BFMAC vs. DMAC task. The performance of the SVM classifier depended on the proper selection of the kernel functions, kernel parameters, and number of iterations [46].
Similarly, the training parameters of the decision tree classifier were the depth of the tree, the number of splits in the decision tree, and the split criterion or information gain. The optimal training parameters of both the SVM and decision tree classifiers were selected using a grid search in the nested cross-validation domain [4,47].
Each RNN-based classifier in the proposed approach yielded superior classification performance compared with the existing methods for the automated classification of BFMAC vs. DMAC cognitive tasks using multi-channel EEG signals. The LSTM, BLSTM, and GRU models successfully quantified the dependencies in the entropy features of multi-channel EEG signals, which further helped to create a decision boundary for the classification of BFMAC- and DMAC-based cognitive tasks. The advantages of this mental-arithmetic-based cognitive classification approach are as follows:
• Various entropy features computed from various EEG channels were used for the classification of mental arithmetic tasks.
• The entropy features from the O2, Fz, Cz, and Pz channel EEG signals demonstrated higher classification accuracy using RNN-based classifiers.
• The slope entropy features combined with each type of RNN-based classifier obtained higher classification accuracy than the other entropy features.
• The proposed approach obtained classification accuracies above 99% for the BFMAC vs. DMAC and BMAC vs. GMAC cognitive workload classification tasks.
In this work, we used the multi-channel EEG signals from 36 subjects to evaluate the proposed approach for the automated classification of BMAC vs. GMAC and BFMAC vs. DMAC arithmetic tasks. Multi-lead EEG signals from more subjects are needed to develop accurate and robust automated classification of mental-arithmetic-based cognitive workload tasks. Other entropy features, such as the distribution entropy [48] and bubble entropy [49] obtained from multi-channel EEG signals, can be used for the automated classification of mental-arithmetic-based cognitive tasks.
In this work, the RNN-based models were used for the classification. In the future, we intend to use convolutional autoencoder [50], LSTM-autoencoder [51], convolutional neural network (CNN) [35,52] and CNN-RNN [53]-based deep learning models for feature extraction and classification tasks using a large database with multi-channel EEG signals.
Conclusions
An automated approach for the classification of mental arithmetic calculation-based cognitive tasks using various entropy features obtained from multi-channel EEG signals is proposed in this paper. The state-space domain entropy measures, namely the sample entropy, approximation entropy, dispersion entropy, permutation entropy, and slope entropy, were computed from the multi-channel EEG signals. We used recurrent neural network (RNN)-based models, namely long short-term memory (LSTM), bidirectional LSTM (BLSTM), and gated recurrent unit (GRU), as classifiers for two cognitive task classification schemes: bad mental arithmetic calculation (BMAC) vs. good mental arithmetic calculation (GMAC), and before mental arithmetic calculation (BFMAC) vs. during mental arithmetic calculation (DMAC).
We obtained classification accuracies of 99.88%, 99.43%, and 99.81% using LSTM, BLSTM, and GRU-based RNN models for the automated classification of the BMAC vs. GMAC classification task. Our proposed approach demonstrated a classification accuracy of more than 99% using all RNN-based models for the automated classification of BFMAC vs. DMAC tasks. The slope entropy features coupled with each type of RNN model obtained the highest classification accuracy for both BMAC vs. GMAC and BFMAC vs. DMAC automated cognitive classification tasks. In the future, our proposed approach can be tested with multi-channel EEG signals to classify more types of mental-arithmetic-based cognitive tasks for brain-computer interface (BCI) applications.
"Computer Science"
] |
Bioprinting Au Natural: The Biologics of Bioinks
The development of appropriate bioinks is a complex task, dependent on the mechanical and biochemical requirements of the final construct and the type of printer used for fabrication. The two most common tissue printers are micro-extrusion and digital light projection printers. Here we briefly discuss the required characteristics of a bioink for each of these printing processes. However, physical printing is only a short window in the lifespan of a printed construct—the system must support and facilitate cellular development after it is printed. To that end, we provide a broad overview of some of the biological molecules currently used as bioinks. Each molecule has advantages for specific tissues/cells, and potential disadvantages are discussed, along with examples of their current use in the field. Notably, it is stressed that active researchers are trending towards the use of composite bioinks. Utilizing the strengths from multiple materials is highlighted as a key component of bioink development.
Introduction
In the last 16 years, bioprinting has taken the world of regenerative medicine by storm. The number of publications featuring bioprinting has grown exponentially as laboratories have adopted this new fabrication technology for their research. Traditional techniques for creating cellularized constructs, including non-cellularized scaffolds seeded post-fabrication and the implantation of non-cellularized constructs that recruit cells from the host, have severe limitations. It is difficult to adequately and uniformly seed these devices and to place individually unique cell populations within the constructs, and those implanted without cells face the challenge of recruiting cells via migration into the scaffolds [1][2][3][4]. An alternative to these conventional manufacturing techniques, 3D printing, has several key strengths: it gives users full control over the shapes and compositions of their printed components, including the precise placement of diverse cell populations throughout the entirety of the construct [5]. In addition, bioprinting allows users to choose and tune the carrier material, or bioink, used during printing. This allows fine control over chemical and physical environments specific to the end use of each system. While these particular attributes have proven enticing to researchers who wish to pattern constructs with different cells, mechanical attributes, or chemical profiles, the realities of bioprinting have proven to be nuanced and complex, resulting in a broad field of study without a simple "push this button to print an organ" option. This article aims to give a broad overview of the biological components available as bioinks today, highlighting some of their many uses in the field.
Types of Bioprinters
Before bioink selection can begin, researchers must first ascertain two things: the environment they wish to replicate using the bioprinter and the type of bioprinter they will be using. Since the development of the first reported bioprinter in 2003, bioprinter technology has become a field of exponential growth [6]. While the field started with a single modified inkjet printer, today several types of 3D printing are available to scientists [7,8]. These include inkjet, micro-extrusion, laser-induced forward transfer (LIFT), digital light projection (DLP), selective laser sintering (SLS), and stereolithography printing. Choosing the appropriate printer for specific research is one of the first major decisions to be made. Each printer type requires different attributes from its bioink regarding viscosity, material type, and bonding mechanisms. This article focuses on bioinks developed for micro-extrusion and DLP printing, two commonly used systems for developing macro-constructs with cellular components.
Micro-extrusion printers are one of the more common 3D printers used today, popular among both hobbyists and professionals. These printers consist of motor-driven axes, a media reservoir, and an extrusion orifice (Figure 1A). To be labeled as a micro-extrusion printer, the orifice should measure less than 1 mm in diameter. This includes nearly all extrusion printers used with cells, as researchers move towards higher resolutions with the goal of recapitulating native cellular arrangements. The extrusion can be driven either through tunable pneumatic or mechanical pressure, depending on the viscosity of the printing material [9][10][11]. Importantly, for researchers working with cells, cellular viability must be maintained. This means that extrusion bioinks should be composed of non-cytotoxic materials that can be tuned to minimize shear stress due to printing pressures, as high shear stress leads to decreased cellular viability [12][13][14]. Another critical consideration for bioinks destined for micro-extrusion printing is shape fidelity. The gels must maintain their printed shape following extrusion, both in the x-y plane and along the z-axis, without structural deformation, a particular concern as print height increases [15]. Therefore, gels are often stabilized via crosslinking. Crosslinking can be performed either before or after printing [16][17][18]. This is often accomplished via either the addition of a chemical crosslinker or the initiation of free-radical crosslinking through the application of visible or ultraviolet (UV) light. These methods must also be tuned to have minimal impact on the cells seeded within the bioinks, and it is important that researchers take note of potential toxicity caused by the crosslinking process. UV light, in particular, has been shown to cause cell damage with prolonged exposure; while it is a popular curing technique, it must be carefully managed to prevent unwanted side effects.
DLP printers are a relatively recent addition to the bioprinting arsenal, entering the scene in 2015 [19]. These printers consist of a build platform that can move in the z-axis, a vat with a translucent bottom filled with liquid ink, and a projector, shown in Figure 1B. The DLP system uses projected light to cure bioinks at the bottom of the vat one plane at a time. Unlike extrusion printers that build constructs upwards from the print bed and require inks to support the weight of layers pushing down on them, DLP printers have the curing surface at the bottom of the print. Thus, the materials used must adhere to the layer above them (or the build plate), withstand disassociation from the bottom of the curing vat, and support the weight of additional layers hanging from them as the print grows in size. These two systems show the most promise in creating objects with sizable z-dimensions and thus have significant potential for creating tissues of implantable size. There is quite a bit of overlap in the properties these systems require from their inks: They should be non-cytotoxic (during printing and post-processing), possess low viscosity during printing (either in the vat or when being extruded through the needle), and be physically stable once on the build surface. Further, the bioinks should withstand cell culture environments (media bath, high temperature), either directly after printing or following a post-processing step. Finally, the bioink must provide a hospitable environment for continued cell growth and maturation. The cells must adhere, expand, and be afforded appropriate signaling during the culture phase. Finding a bioink that meets these biological requirements remains a challenge for researchers today.
Identifying the composition of the perfect bioink has been a continued debate, much of which depends on the printing process, the types of cells being used, and the mechanical, physical, and chemical environment the system demands. As such, a divide between synthetic, biologic, and combination bioinks has emerged. Synthetic bioinks are excellent at meeting the physical requirements of printing, maintaining shape post-printing and can be modified for appropriate load-bearing properties [21]. The manufactured synthetic bioinks are highly reproducible with minimal batch-to-batch variations. However, synthetics have relatively few biological binding sites and may be cytotoxic either before crosslinking or during crosslinking. In contrast, biological bioinks are derived from natural sources that are often rich in binding sites and can be derived from the same sources as the target tissues researchers want to replicate, leading to high compatibility between the ink and the cell type used. Here we will briefly review some of the most popular biological bioink components.
Agarose
Agarose is a polysaccharide made up of 1,3-linked β-galactose and 1,4-linked 3,6-anhydro-α-L-galactose derived from red algae. The molecule dissolves easily in hot water, after which the agarose chains form side-aligned aggregates as the solution cools. These aggregates result in an interlocking network of hydrogen bonds, creating a solid gel of agarose chains [22]. Gels can be tuned by using hydroxyethylated agarose for lower strength and melting temperatures, unmodified agarose, or a combination of the two [22]. Agarose on its own is not as cell-friendly as other biologically derived bioinks, presenting low rates of cellular proliferation, cell adhesion/spreading, and biosynthesis of cell components [23,24]. However, this lack of cellular interaction has made agarose an excellent material for creating molds for the 3D formation of cellular aggregates [24].
The physical attributes of agarose have been capitalized on by blending it with other bioink components. It has been combined with alginate, seeded with chondrocytes, and used on an extrusion printer to create honeycomb patterns that maintained cellular viability over 4 weeks, showing potential for cartilage engineering purposes (Figure 2A) [25]. Chemically modified carboxylated agarose (CA) has been used to create bioinks that can be tuned to specific elastic moduli by varying the degree of carboxylation without significantly altering the shear viscosity [26]. CA was shown to increase the survival rate of human mesenchymal stem cells (MSCs) by 33% compared to native agarose. Studies have shown that increasing the stiffness promoted chondrogenesis and maintained cell phenotypes in extrudable gels seeded with human articular chondrocytes [26,27]. In another study, CA was combined with pluronic, a synthetic polymer. Pluronic acted as a sacrificial material to form tubular structures inside a structure with controlled microporosity to develop a system that mimics the architecture of the ECM surrounding a blood vessel [28]. While this group has yet to test structures with cells, CA's ability to support cellular viability makes this an interesting step forward, as the mechanical properties of agarose can mimic those found in the human body. Agarose has also been used as a support material for freestanding constructs, providing mechanical stability to softer gels such as alginate and gelatin methacrylate (GelMA), which can be cultured while suspended in an agarose slurry to create complex cellularized structures that are supported during cellular maturation and from which the support gel is easily removed post-maturation [29]. While this is not a direct use of agarose as a bioink, it is an important way the molecule can further develop 3D printing processes.
Alginate
Alginate was used for drug and cell delivery long before its first appearance as a bioink, but has since been used for extrusion, LIFT, and inkjet printing applications. The nearly instantaneous ionic crosslinking has made this bioink of particular interest to researchers developing structured tissues, such as tubules, within their prints. Alginate was one of the first bioinks to be used with an extrusion printer fitted with a coaxial nozzle. Extruding alginate through the exterior needle while extruding a crosslinker through the interior allowed researchers to rapidly manufacture microvessel-like tubules throughout their print [39]. Researchers have also used tri-axial nozzles to create multilayered vessels containing independent layers of human umbilical vein endothelial cells (HUVECs) and human aortic smooth muscle cells, which were implanted into animals as aortic replacements (Figure 2B) [40].
Despite the strengths alginate has as a bioink which can quickly and non-toxically be crosslinked, alginate is a biologically inert molecule with little to no binding moieties for cell interaction and limited pathways for biodegradation. This limits its ability to act as a hospitable growth environment for cells. However, alginate can readily be combined with other biological molecules to create combination gels, improving these characteristics. One study shows that alginate supplemented with carboxymethyl cellulose maintained cellular viability while improving printability and biocompatibility for human MSCs [41]. Nanofibrillated cellulose has been combined with human chondrocytes and alginate to develop tissue-engineered ears, with the cellulose improving shape fidelity and printing resolution compared to pure alginate [42]. The ionic crosslinking of alginate has proven useful for those pursuing coaxial printing, as alginate blends can be extruded through the outer needle while calcium chloride is extruded through the inner nozzle, creating mechanically hollow tubes, which are being further developed as vascular models and replacements [39,[43][44][45][46][47][48][49]. Gelatin alginate hydrogels have been used for skin wound healing by multiple groups, and the combination of structural support from the alginate and improved cellular adhesion from the gelatin has led to cellularized skin constructs that have the potential for enhanced wound healing and accelerated wound closure [50][51][52][53].
Chitosan
Chitosan is a linear polysaccharide molecule derived from the deacetylation of chitin, an acetylated polysaccharide found in fungi, microorganisms, and the shells of crustaceans/insects [54]. The biopolymer was used for many years in tissue engineering as sponge scaffolds, wound dressings, and for cartilage regeneration, among other applications, prior to the advent of bioprinting [55]. It has many unique biological properties that make it enticing for the field, including mucoadhesion; hemostatic activity; the ability to interact with the cell membrane, leading to reorganization of tight junction proteins; antimicrobial properties; analgesic effects; and controllable degradation [55].
Bioprinting with pure chitosan is difficult due to its poor solubility in cell-friendly conditions and its low stiffness, which reduces its shape fidelity during the printing process. Chitosan precipitates when its solution has a pH above 6.2, making it difficult to keep in solution while maintaining cellular viability (Figure 2C) [56]. However, it can be printed without cells, soaked in a basic solution either post-printing or between layers, and then seeded with cells after curing [57][58][59]. To allow for a bioink that can directly deposit cells, chitosan can be modified with a carboxymethyl group to improve its solubility at physiological pH. Carboxymethyl-chitosan has been combined with alginate to develop constructs capable of supporting bone MSCs and human induced pluripotent stem cells [60,61]. Chitosan can also be altered through the addition of β-glycerophosphate, which allows the chitosan to remain soluble at a neutral pH and induces thermosensitive gelation at 37 °C. This modified chitosan has been used to print IMR-32 neuroblastoma cells with high post-printing viability [62,63]. Chitosan has also been combined with catechol to create a unique bioink that rapidly solidifies when exposed to serum, resulting in a system with high mechanical strength and cell viability that can be printed directly into media without the need for external crosslinking mechanisms [64].
Figure 2. (A) … [17]. (B) Alginate extruded through a triaxial nozzle for blood vessel replacement [32]. (C) A large multilayered chitosan construct showing the mechanical integrity of printed chitosan processed in an acidic environment [56]. (D) Collagen printed over a PCL scaffold (28 mm diameter) for use as a heart valve replacement [65]. (E) A combination of ECM and hyaluronic acid gel seeded with liver spheroids [66]. (F) DLP printing of branched ECM [67]. (G) A multilayer fibrin skin construct [68]. (H) Examples of multilayer gelatin constructs developed for use with osteoblasts [69]. (I) Hyaluronic acid extruded with chondrocytes for cartilage engineering [70]. (J) Complex structures printed using a DLP printer and silk fibroin ink [71].
Collagen
When referring to collagen as a bioink, researchers generally refer to collagen type I, a triple helical protein derived from the connective tissue of mammals which has limited variability among species, resulting in minimal immunological reactions [72,73]. Collagen is well known for enhancing cellular attachment and growth, which is attributed to the abundance of integrin-binding domains found on the protein. A unique aspect of this biologic is that collagen remains liquid at low temperatures and gels into a fibrous matrix when exposed to high temperatures (physiological and higher). This gelation is relatively slow-printed collagen can stay liquid for more than 10 min following extrusion and complete gelation can take more than 30 min [72].
These mechanical instabilities make pure collagen a challenging bioink for both micro-extrusion and DLP printers. To overcome this, collagen has been blended with synthetic materials, such as Pluronic, which acts as a support during the printing and gelation phases while collagen improves the cellular environment [74,75]. Synthetic supports such as polycaprolactone (PCL) have been used to enhance the structural integrity of pure collagen, for instance, to bioprint aortic heart valves implanted in mice (Figure 2D) [65]. Collagen has also been blended with other biologics such as GelMA to create a cross-linkable six-layered skin structure that could withstand implantation and showed accelerated wound healing [76]. Blends made with bioceramics or alginates have been used to fabricate bone constructs, and combinations of collagen and heparin have been used to create spinal constructs [77][78][79]. Collagen can also be chemically modified to alter its physical properties while maintaining its biochemical advantages. To this end, methacrylated collagen has been combined with hyaluronic acid to create liver constructs. The combination has been utilized to create a hospitable cell environment that can be stabilized through UV crosslinking for extended in vitro maturation [80]. Recombinant collagen has also been functionalized with a methacrylamide, making it cross-linkable via UV light and thus allowing for use in DLP printers alongside micro-extrusion systems [81].
Extracellular Matrix
Extracellular matrix (ECM) is a complex network containing collagens, elastin, proteoglycans, and glycoproteins extruded by cells, which is highly specific to the tissue where it is produced [82]. Thus, ECM has extreme advantages in creating microenvironments that recapitulate native tissues. The matrix is usually derived through the dissolution of cellular material from tissue (decellularization), using enzymatic, physical, and chemical processes [83]. The decellularized material can be solubilized, resulting in a soft gel that can be bioprinted, and has shown successful outcomes for cellular viability and proliferation/differentiation when printed [84]. However, similarly to other biological components, the gel resulting from pure ECM is soft and has difficulty physically supporting itself. This has led researchers to utilize ECM as a biological component printed into a PCL frame.
Along with PCL frames, ECM has been used as a component of multi-material bioinks that help drive the development of specific tissue types. Bioinks containing HA, gelatin, and crosslinkers (poly-ethylene glycol diacrylate, alkyne, or acrylate) were successfully used to bioprint organoids using ECM sourced from tissues such as liver, heart, and skeletal muscle. In one study, the developed liver bioink was successfully used to create extruded liver spheroids that maintained cell functionality during post-printing culture (Figure 2E) [66]. Cardiac ECM was printed in combination with vitamin B2 to allow UVA crosslinking, resulting in mechanically stable constructs that increased cardiomyogenic differentiation when seeded with cardiac progenitor cells [85]. ECM has also been successfully used in DLP printers, combined with GelMA and a photoinitiator, to allow crosslinking when the bioink was exposed to a light source. Researchers used this same base gel with ECM from different tissue sources to create tissue-specific bioinks for the liver and heart, maintaining high viability and improving cellular reorganization post-printing (Figure 2F) [67].
Due to the high efficiency of ECM scaffolds at maintaining and promoting cellular adhesion/growth within their confines, additional work is underway on alternative stabilization techniques. Recent publications have shown that ECM can be stabilized by adding ruthenium and sodium persulfate, compounds that allow the bioink to be crosslinked using visible light following extrusion [86]. These gels were utilized in developing constructs up to 1 cm in size with high cell viability post-printing. ECM can also be methacrylated to create a mono-material ECM bioink which can be stabilized through UV crosslinking postprinting [87]. This material has been used to improve gene expression in skeletal muscle constructs cultured in vitro compared to samples printed using gelatin methacrylate.
Fibrin
Fibrinogen and thrombin, proteins involved in the formation of blood clots, crosslink enzymatically to form fibrin, a unique hydrogel with non-linear elasticity that can be extensively deformed before rupturing [88]. The hydrogel has been used to develop skin grafts, capitalizing on the fact that the proteins are naturally involved in wound healing [89]. It contains amino acid sequences that promote cellular binding, thereby allowing for cell adhesion, growth, and development [90]. Fibrin also has a natural degradation process that promotes the replacement of fibrin with ECM. This is particularly interesting for researchers developing fully degradable constructs that result in new tissue without traces of the implant material. However, the degradation profile of fibrin is rapid due to active cleaving by the serine protease plasmin, which limits its use for long-term culturing. In addition, researchers must be cognizant of their fibrin sources, as fibrin from different hosts can result in immune reactions and transmission of infectious diseases [72].
Fibrin poses challenges as a bioink, as pre-crosslinked fibrinogen is very soft with poor shape fidelity, and post-crosslinked fibrin is extremely viscous, making it difficult to extrude [90]. One of the most common techniques to address this is to use pre-crosslinked fibrinogen in multi-material bioinks and crosslink it with thrombin post-printing. Methacrylated and thiolated hyaluronic acid (HA) have been used to increase the stiffness of fibrin gels [91,92]. Gelatin-fibrinogen gels have been used to make liver constructs that were both viable and functional. These gels have been shown to support the coculturing of cardiomyocytes and cardiac fibroblasts [93,94]. Alginate-fibrin blends have been used to create microchips with endothelialized vessels and hepatocytes and in vitro cervical tumor models. This combination takes advantage of the rapid crosslinking of the alginate and the improved biological interactions of fibrin [95,96]. The use of alginate was also shown to extend the degradation profile of fibrin. The degradation of fibrin can also be delayed through the addition of aprotinin [97,98]. Aprotinin-treated fibrinogen gels have been used for wound healing models implanted in vivo for 3 weeks (Figure 2G), as scaffolds for the culture and alignment of Schwann cells, and for the stabilization of human ear-shaped constructs seeded with auricular chondrocytes, which were implanted in vivo for two months [68,99,100].
Gelatin
Gelatin is a denatured collagen protein obtained from the skin, bones, and connective tissues of animals, including pigs, cows, and fish. The protein presents as random coils that self-associate at low temperatures into helical structures, thickening the material; the coils revert to their randomized conformation when heat is applied [101]. Due to the retention of the Arg-Gly-Asp (RGD) sequence found in non-denatured collagen, gelatin promotes cellular adhesion, proliferation, differentiation, and migration [102]. The ease of this reversible thermal gelation and the excellent cellular environment make gelatin a popular addition to bioinks. In extrusion printing, gelatin can thicken a gel during the printing process, allowing for better shape fidelity. In addition, uncrosslinked gelatin can easily be leached out of the system post-printing when combined with other cross-linkable components. This makes gelatin particularly intriguing to groups trying to tune the porosity of their gels post-printing. It has also made gelatin an ideal support material for bioprinting. Gelatin bioinks can be printed into areas intended to be hollow while softer gels are printed around them. Following stabilization of the exterior gel during post-processing, the gelatin is warmed and leached out, leaving hollow space behind. This technique has been of particular interest to those working on the development of vascular networks. Gelatin acts as a support material for many new printing techniques, including freeform reversible embedding of suspended hydrogels and coaxial printing [103,104]. However, gelatin used as a primary bioink requires additional crosslinking for cellularized prints that need to be maintained post-printing. Many methods of chemically crosslinking gelatin have been explored, but these agents are often cytotoxic, making them unsuitable for bioprinting [72]. Fortunately, several enzymatic crosslinkers, including transglutaminase, have been used to print unmodified gelatin bioinks seeded with HUVECs and human embryonic kidney cells on an extrusion printer without impacting cellular viability [105].
To preserve the positive qualities of gelatin while improving its mechanical characteristics as a bioink, researchers have modified the protein to include a methacrylate group. Gelatin methacrylate (GelMA) preserves many of the biologically friendly aspects of gelatin, but the addition of the methacrylate group allows for crosslinking through free-radical polymerization, activated by the application of UV light. This can improve structural stability after printing with an extrusion printer and makes GelMA a contender for DLP printing. GelMA maintains high cellular viability with high shape integrity post-printing, making it an excellent bioink for larger tissue constructs (Figure 2H) [69]. It has been extensively used in the bioprinting community to create a multitude of constructs, including skin, cartilage, tumors, cornea, blood vessels, and liver and cardiac tissues, capitalizing on the strong bioink-cell interactions and resulting in a bioink that can be used for a wide number of tissues [43,76,[106][107][108][109][110][111][112][113][114][115].
Hyaluronic Acid
Hyaluronic acid (HA) is a non-sulfated glycosaminoglycan found in nearly all connective tissues and cartilage [116]. This linear molecule consists of repeating disaccharide units of d-glucuronic acid and N-acetyl-d-glucosamine moieties linked by alternating β-1,3 and β-1,4 glycosidic bonds [117]. Three major functional groups primarily control the chemical activity of HA: the glucuronic carboxylic acid, the N-acetyl group, and the secondary hydroxyl group [72]. These groups play a major role in HA's ability to form flexible hydrogels and make it highly biocompatible, particularly for applications that focus on joints, such as osteoarthritis [118][119][120][121]. As a molecule, HA has been shown to play a role in early embryonic development, is cell-friendly, and is biocompatible throughout its degradation process. As a material for tissue engineering, HA has proven to be highly reproducible, both in its formation and degradation, with mechanics, architecture, and degradation that can easily be manipulated by researchers [72,122]. However, despite the high degree of biocompatibility, HA has relatively poor mechanical properties for bioprinting, a slow gelation rate, and a rapid degradation profile, limiting its use in structures designed for extended periods of culture/use. Therefore, like many of the bioinks discussed in this article, HA is often used either as a component of a multi-material bioink or is chemically modified before bioprinting. This allows researchers to maintain many of the biological characteristics provided by HA while improving upon the physicochemical properties for a specific use. HA has been combined with synthetic molecules such as hydroxyethyl methacrylate to create UV cross-linkable samples that could easily be printed using an extrusion printer, resulting in strong hydrogels that mimic the viscoelastic properties of natural tissues. These supported chondrocytes post-printing (Figure 2I) [70]. HA can be thiolated and combined with polyethylene glycol (PEG) or methacrylated and combined with GelMA to create photo-cross-linkable bioinks, both of which have been used to fabricate tubular structures reminiscent of vascular structures that supported cells in vitro and in vivo [123,124]. Thiolated HA has also been combined with poly(glycidol)s and extruded alongside alternating strands of PCL to create a strong grid supporting the hydrogel until polymerization. This configuration could extrude both human and equine MSCs, which maintained their viability post-curing for a reported 21 days [125]. HA has also been used in DLP systems, combined with polyurethane to create a structure with mechanical properties similar to articular cartilage. This bioink supported the growth and differentiation of Wharton's jelly MSCs that were seeded post-printing [126]. A combination of HA and gelatin, both modified with phenolic hydroxyl groups, was also used to print human adipose SCs on an extrusion printer. The construct was crosslinked using visible light and showed tunable stiffness and high cellular viability [127].
Scaffold-Free
Scaffold-free bioinks refer to systems which use cells alone; in comparison, the other inks presented here act as carriers, seeded with cells as a component of the ink prior to printing. Cell-only inks introduce cells in an already close-packed array during extrusion, allowing cells to quickly self-assemble into aggregates within the printed form [128]. Briefly, this technique utilizes coaxial printing or molding to confine cells in long fibers while coalescence occurs. Following the aggregation of cells within the fibers, these strands are then de-molded and re-extruded into the desired 3D shape. The final print is then allowed to fuse and mature [128]. This enables researchers to capitalize on the ability of cells to self-assemble. In addition, the cell-only nature allows for cell concentrations closer to those seen in vivo compared to bioinks that suspend cells in a carrier. However, this also stipulates that large numbers of cells are needed for this printing methodology, which may inhibit the development of large structures. Further, scaffold-free printing cannot be translated to DLP printing at this time due to the lack of cross-linking ability, and remains an option solely for micro-extrusion.
The use of scaffold-free bioinks has shown high rates of strand fusion and self-assembly to rapidly create aggregated structures [129]. The technique has been used to create strands from multi-cell cultures and develop co-cultures of βTC-3 cells and fibroblasts, which showed cells self-directing into distinct populations post-printing, maintaining their ability to produce insulin and presenting as a pancreatic tissue model [129]. Porosity has also been introduced into scaffold-free printing through the inclusion of porogens during filament maturation. These porous strands showed increased cell viability and proliferation rates compared to solid cell strands while maintaining the capability to fuse into a single tissue. In addition, porous scaffold-free samples seeded with adipocyte-derived stem cells showed high viability and functionality when differentiated into chondrogenic and osteogenic lineages [130].
Silk
Silk is a unique material for bioprinting in that it consists of two types of proteins, fibroin and sericin, which act as a core and glue within the fiber [131]. Silk fibers are common in the medical field, most familiar as sutures; however, these two proteins can be separated, and both have unique properties that make them attractive for bioink development. Fibroin is a biocompatible hydrophobic protein with a tailorable rate of biodegradation, high mechanical modulus, and the ability to self-assemble into a hydrogel when dissolved in an aqueous solution [132]. It forms thermodynamically stable β-sheets which improve mechanical integrity and slow degradation following gelation [133]. While this gelation process is naturally slow, fibroin can be crosslinked through several cytocompatible methods, enhancing its ability to maintain shape fidelity during the printing process [134]. Sericin is the hydrophilic portion of the silk fiber, often referred to as glue or gum. It is immunologically inert, non-cytotoxic when cultured with cells in vitro, and has been shown to stimulate cell migration/proliferation and collagen production at wound sites, facilitating wound healing [135][136][137][138][139]. It can form a hydrogel at low concentrations but has low mechanical strength, and its fragility has limited the use of pure sericin hydrogels as a tissue engineering building block [140,141].
Silk proteins have been incorporated into composite bioinks, modulating their viscosity and gelation to improve printing and allowing researchers to take advantage of their mechanical and biological properties post-printing. The addition of PEG to silk fibroin allowed micro-extrusion constructs to maintain their shapes for 12 weeks in vitro and 6 weeks in vivo when seeded with human bone marrow MSCs [142]. PEG has also been used to crosslink fibroin printed into a support medium, allowing samples to fully stabilize post-printing prior to cell seeding. These constructs supported the growth and differentiation of myoblasts [134]. Combining fibroin with gelatin has been shown to create mechanically stable bioinks that supported chondrogenic development, capitalizing on the entanglement and physical crosslinking of the two gels to stabilize the bioinks without additional crosslinkers [143]. Cartilage-like structures have also been developed by crosslinking fibroin with horseradish peroxidase, used to coculture osteogenic and chondrogenic cells as a strategy for osteochondral defect regeneration [144]. Fibroin has also been shown to be suitable for DLP printing as it can be methacrylated, allowing for light-sensitive cross-linking. Methacrylated silk fibroin was used to prepare complex structures via DLP printing that were mechanically robust and showed high cellular viability and proliferation post-printing (Figure 2J) [71]. Sericin has been combined with GelMA to create highly transparent hydrogels, which were used to cover wounds and allow for real-time monitoring of wound healing [145]. Combinations of fibroin, sericin, and collagen have capitalized on the interaction between the hydrophobic fibroin and hydrophilic sericin to reduce phase separation between the silk and collagen, creating a bioink that was structurally stable for use as a cardiac patch and maintained MSCs during in vitro culture [146].
Bioinks Today and Tomorrow
Biologics are an important facet of bioink development. As highlighted in this article, their strengths lie in their ability to facilitate cell-cell and cell-construct interactions, maintain high cell viability, and even provide chemical cues crucial for cellular development through their physical and chemical moieties, as summarized in Table 1. However, it can be easily seen that pure biologics have not proven ideal for 3D printing, primarily due to their limited physical integrity. As such, continued modification of the components to provide additional mechanical stability is important. Initial work creating methacrylated gelatin and ECM is an exciting first step toward creating strong hydrogels with high biocompatibility. Continued research into alternative crosslinking mechanisms with limited adverse effects is another area of current development that will broaden the use of bioprinting. Developments in benign curing techniques may permit researchers to further cure gels to improve mechanical stability and open the door to bioprinting easily damaged cell lines. Table 1. Summary of bioinks presented here, with advantages and disadvantages for each bioink. All components are considered in their non-modified (natural) states.
Bioink | Advantages | Disadvantages
Agarose | Immunologically inert | Poor mechanical properties
... | Stimulated cell migration/proliferation | Gelation at low concentrations

As bioprinting expands, it is not enough to simply develop new bioink options. Those individuals driving bioink development will need an in-depth understanding of what their specific systems require: the biologics needed to drive cellular proliferation/maturation and the physical requirements for printing and culturing their construct. This article has highlighted how specific biological components can be chosen and manipulated to provide appropriate, tissue-specific environments. This tailoring should become more finely tuned in the coming years, with researchers worldwide sharing findings on how each component and its blends allow for improved engineering of tissues in the lab.
While the components described in this review are only a few of the many options currently available for bioprinters, we have presented even fewer of the possible alterations and combinations that can be used or formulated to personalize a bioink. This library of biologics, appropriate to bioprinting, will only continue to be expanded and refined. The depths of bioink possibilities are far from plumbed, and the choice of which biologics to include and how to alter/combine them will remain a defining point in bioprinting research as the field evolves.
Conflicts of Interest:
The authors declare no conflict of interest. | 9,006.2 | 2021-10-28T00:00:00.000 | ["Biology", "Materials Science"] |
Assessment of an isogeometric approach with Catmull–Clark subdivision surfaces using the Laplace–Beltrami problems
An isogeometric approach for solving the Laplace–Beltrami equation on a two-dimensional manifold embedded in three-dimensional space using a Galerkin method based on Catmull–Clark subdivision surfaces is presented and assessed. The scalar-valued Laplace–Beltrami equation requires only C⁰ continuity and is adopted to elucidate key features and properties of the isogeometric method using Catmull–Clark subdivision surfaces. Catmull–Clark subdivision bases are used to discretise both the geometry and the physical field. A fitting method generates control meshes to approximate any given geometry with Catmull–Clark subdivision surfaces. The performance of the Catmull–Clark subdivision method is compared to the conventional finite element method. Subdivision surfaces without extraordinary vertices show the optimal convergence rate. However, extraordinary vertices introduce error, which decreases the convergence rate. A comparative study shows the effect of the number and valences of the extraordinary vertices on accuracy and convergence. An adaptive quadrature scheme is shown to reduce the error.
Introduction
Hughes et al. [33] proposed the concept of isogeometric analysis (IGA) in 2005. The early works on IGA [10,18,47] focussed on geometries modelled using Non-Uniform Rational B-Splines (NURBS) as these are widely used in computer aided design (CAD). NURBS can be used to model freeform, two-dimensional curves. However, a NURBS surface is a tensor product surface generated by two NURBS curves, thereby imposing limitations for modelling complex geometries with arbitrary topologies. Complex CAD models are typically composed of a number of NURBS patches. These patches are often poorly connected in the design stage. When such models are used for analysis, the unmatched patches must be treated carefully to ensure the geometries are watertight. Furthermore, because NURBS can not be locally refined, adaptive mesh refinement methods cannot be employed. A number of alternative CAD techniques were developed and adopted in IGA to overcome these limitations, including hierarchical B-splines [29,52], T-splines [11,49], PHT-splines [23,42], THB-splines [13,30] and LR B-splines [24,34]. Some of these recent techniques are being adopted by the engineering design market. However, the majority are the subject of academic research and not widely used in the CAD community. Moreover, computing the basis functions for analysis using these alternative approaches can be expensive. Catmull and Clark [14] developed a bicubic B-spline patch subdivision algorithm for describing smooth three-dimensional objects. The use of Catmull-Clark subdivision surfaces to model complex geometries in the animation and gaming industries dates back to 1978. Catmull-Clark subdivision surfaces can be considered as uniform bi-cubic splines which can be efficiently evaluated using polynomials.
In CAD, distortion of regular parametrizations is inevitable and indeed vital when modelling complex geometries. Allowing 'extraordinary vertices' ensures that Catmull-Clark subdivision surfaces can be used for modelling complex geometries with arbitrary topology. Cirak et al. [17] implemented Loop subdivision surfaces for solving the Kirchhoff-Love shell formulation. This was the first application of subdivision surfaces to engineering problems. Subdivision surfaces have subsequently been used in electromagnetics [19], shape optimisation [6,7], acoustics [15,38] and lattice-skin structures [56].
Catmull-Clark subdivision surfaces face a number of challenges when used for analysis. Many of these have been discussed in the literature; however, a unified assessment is lacking. This manuscript provides a clear and concise discussion of the challenges and limitations of Catmull-Clark subdivision surfaces.
Engineering designs often require exact geometries including circles, spheres, tori and cones. However, subdivision surfaces can not capture these geometries exactly. Moreover, there are always offsets between the control meshes and the surfaces. Fitted subdivision surfaces [37] aim to overcome this limitation. Although fitted subdivision surfaces still can not model arbitrary geometries exactly, since they are interpolated using cubic splines, they can approximate a given geometry closely through least-squares fitting. Another challenge of subdivision surfaces is that they can model smooth closed manifolds easily but require special treatment to model manifolds with boundaries. A common solution is to introduce 'ghost' control vertices to provide bases for interpolation. From the perspective of analysis, the shape functions will span into 'ghost' elements [17]. In addition, the spline basis functions do not possess an interpolating property. Thus it is difficult to directly impose Dirichlet boundary conditions. Meshless methods and extended finite element methods have developed strategies to overcome this problem [28,39]. A common strategy is to modify the weak form of the governing equation. Methods include the Lagrange multiplier method [5], the penalty method [3] and Nitsche's method [32,43].
Conventional Catmull-Clark subdivision surfaces can not be locally refined. Truncated hierarchical Catmull-Clark subdivision surfaces (THCCS), developed by Wei et al. [54], overcome this limitation. They generalise truncated hierarchical B-splines (THB-splines) to meshes with arbitrary topology. Wei et al. [55] subsequently improved their method using a new basis function insertion scheme and thereby enhanced the efficiency of local refinement. The extraordinary vertices introduce singularities in the parametrisation [41,51]. Catmull-Clark subdivision surfaces have C² continuity everywhere except at the surface points related to extraordinary vertices where, as demonstrated by Peters and Reif [45], they possess C¹ continuity. Stam [50] developed a method to evaluate Catmull-Clark subdivision surfaces directly without explicitly subdividing, thus allowing one to evaluate elements containing extraordinary vertices. Although the surface gradients can not be evaluated at the extraordinary vertices, they can be evaluated at nearby quadrature points. Thus, subdivision surfaces can be used as C¹ elements as required, for example, in thin shell theory [17]. Nevertheless, the evaluation of points around extraordinary vertices of Catmull-Clark surfaces introduces error. The conventional evaluation method repeatedly subdivides the element patch until the target point falls into a regular patch, allowing a uniform bi-cubic B-spline patch to be mapped to the subdivided element patch. The extraordinary vertex also introduces approximation errors because of the singular parameterisations at extraordinary vertices [40,44]. Stam's natural parametrisation can only achieve C⁰ continuity at extraordinary vertices. Recently, Wawrzinek and Polthier [53] introduced a characteristic subdivision finite element scheme that adopts a characteristic reparameterisation for elements with extraordinary vertices. The evaluated limiting surface is at least C¹ everywhere and the numerical accuracy is improved. Zhang et al. [57] optimised the subdivision scheme to improve its approximation properties when used for thin-shell theory.
Using the finite element method to solve partial differential equations (PDEs) on surfaces dates back to the seminal work by Dziuk [25], which developed a variational formulation to approximate the solution of Laplace-Beltrami problems on two-dimensional surfaces. This method was extended to solve nonlinear and higher-order equations on surfaces by Dziuk and Elliott [26]. Dziuk and Elliott [27] also provided a thorough review of finite element methods for approximating the solution of PDEs on surfaces. Dedner et al. [22] proposed a discontinuous Galerkin (DG) method for solving an elliptic problem with the Laplace-Beltrami operator on surfaces. Adaptive DG [21] and high-order DG [1] methods were also developed for solving PDEs on surfaces. However, the accuracy of these methods depends on the approximation of the mean curvatures of the surfaces. The geometrical error is dominant when a conventional Lagrangian discretisation is used to approximate solutions on complex surfaces. An isogeometric discretisation maintains the exact geometry and overcomes this limitation. Dedè and Quarteroni [20] proposed an isogeometric approach for approximating several surface PDEs involving the Laplace-Beltrami operator on NURBS surfaces. Bartezzaghi et al. [9] solved PDEs with high-order Laplace-Beltrami operators on surfaces using a NURBS-based isogeometric Galerkin method. More accurate results are obtained using an IGA approach over the conventional finite element method. Langer et al. [36] presented an isogeometric DG method with non-matching NURBS patches allowing the approximation of PDEs on more complex surfaces.
This work presents a thorough and unified discussion of several major issues related to an isogeometric Galerkin formulation based on Catmull-Clark subdivision surfaces. The difficulties associated with imposing Dirichlet boundary conditions, the reduction of the approximation power around extraordinary vertices, and the problem of sufficient numerical integration in elements with extraordinary vertices are examined and discussed. Previous studies [16,17] on Catmull-Clark subdivision surfaces for analysis introduce ghost degrees of freedom for constructing basis functions in elements at boundaries. We propose a method which modifies the basis functions at boundaries to ensure they are only associated with given control vertices. No additional ghost degrees of freedom are involved. A penalty method is employed to impose Dirichlet boundary conditions. This does not change the size or symmetry of the system matrix and is straightforward to implement. An adaptive quadrature scheme inspired by [35] is presented to increase the integration accuracy for elements with extraordinary vertices. The proposed method can perform isogeometric analysis on complex geometries using Catmull-Clark subdivision discretisations. A test for approximating Poisson's problem on a square plate is conducted to demonstrate the properties of the method in a simplified setting so as to distill the key features. The approach is also used for solving the Laplace-Beltrami equation, which is a benchmark problem for curved manifolds [35,41]. A comparative convergence study is conducted between the Catmull-Clark subdivision method and the conventional finite element method. The effects of the extraordinary vertices and the modified bases at boundaries on convergence are examined. Catmull-Clark subdivision surfaces are limiting surfaces generated by successively subdividing given control meshes. They are identical to uniform bi-cubic B-splines. Thus, they have difficulty representing desired geometries exactly. Here, a least-squares fitting method is used to fit any given geometry using Catmull-Clark subdivision surfaces.
This manuscript first summarises the subdivision algorithm and the evaluation method for Catmull-Clark subdivision surfaces. Then, techniques for using Catmull-Clark for numerical analysis and improving accuracy are presented in Sect. 3. Section 4 presents the Laplace-Beltrami problem and Sect. 5 shows a Galerkin method with Catmull-Clark subdivision surface bases. Section 6 showcases the numerical results.
Catmull-Clark subdivision surfaces
There exist a variety of subdivision schemes, but the basic idea is to use a subdivision scheme to generate a smooth surface through a limiting procedure of repeated refinement steps starting from an initial polygonal grid. The Catmull-Clark algorithm can generate curves and surfaces which are identical to cubic B-splines. The algorithms for curves and surfaces are shown in Appendices A.1 and A.2, respectively. This section briefly introduces the methods for interpolating and evaluating curves and surfaces using the Catmull-Clark subdivision algorithm.

Curve interpolation and evaluation based on the subdivision algorithm

Figure 1 shows a curve generated using a subdivision algorithm. The interpolated curve is identical to a cubic B-spline curve. The limiting curve can be interpolated using cubic basis splines and associated control points. With a control polygon containing n control points, the curve is naturally divided into n − 1 elements. Each element in the curve is associated with one segment of the control polygon. To interpolate on the target element, four control points including the neighbouring control points are required. For example, if one aims to evaluate the geometry of element 2 in Fig. 1, the four control points P_1, P_2, P_3 and P_4 are required and the curve point is evaluated as

x(ξ) = Σ_{A=1}^{4} N_A(ξ) P_A,   (1)

where ξ ∈ [0, 1] is the parametric coordinate within an element. The basis functions for element 2 are defined by

N_1(ξ) = (1 − ξ)³/6,  N_2(ξ) = (3ξ³ − 6ξ² + 4)/6,  N_3(ξ) = (−3ξ³ + 3ξ² + 3ξ + 1)/6,  N_4(ξ) = ξ³/6.   (2)

The bases are visualised in Fig. 2a. They are C² continuous across element boundaries. Element 1 in Fig. 1 contains the end of the curve, which has an end curve point that coincides with the control point. In order to evaluate this element, one needs to mirror the point P_2 to P_0 as

P_0 = 2P_1 − P_2.   (3)

The curve point can now be evaluated using basis splines with the set of control points shown in Fig. 2b. However, if one adopts a spline discretisation for analysis, this strategy of end-element treatment will introduce additional 'ghost-like' degrees of freedom. To avoid this problem, the expression for P_0 (3) is substituted into the interpolating equation, yielding

x(ξ) = [N_2(ξ) + 2N_1(ξ)] P_1 + [N_3(ξ) − N_1(ξ)] P_2 + N_4(ξ) P_3.   (4)

Hence only three control points are required to evaluate a curve point, and the modified basis functions for interpolating end elements are defined by

Ñ_1 = N_2 + 2N_1,  Ñ_2 = N_3 − N_1,  Ñ_3 = N_4.   (5)

Figure 2b illustrates the modified basis functions. They coincide with the basis functions of the cubic B-spline with p + 1 multiple knots at the two end points. The new basis functions do not possess the Kronecker delta property but do have the interpolating property at the boundary. The performance of the modified bases in analysis is discussed in Sect. 6.1. The global basis functions for interpolating the curve in Fig. 1 are shown in Fig. 2c. It is worth noting that this subdivision curve is a cubic B-spline curve and represents a special case of Lane-Riesenfeld subdivision; it can not model conical shapes exactly. This property is significantly different to NURBS and motivates Sect. 3.1 on geometry fitting.
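The curve-level evaluation above is compact enough to sketch in a few lines of Python. The following is a minimal illustration of ours (not code from the paper), assuming the standard uniform cubic B-spline segment polynomials of Eq. (2) and the mirrored point of Eq. (3):

```python
import numpy as np

def bspline_basis(xi):
    """Uniform cubic B-spline basis on one curve element, xi in [0, 1], Eq. (2)."""
    return np.array([(1 - xi)**3,
                     3*xi**3 - 6*xi**2 + 4,
                     -3*xi**3 + 3*xi**2 + 3*xi + 1,
                     xi**3]) / 6.0

def boundary_basis(xi):
    """Modified basis for an end element, Eq. (5): the mirrored control point
    P0 = 2*P1 - P2 is eliminated, leaving three basis functions."""
    n = bspline_basis(xi)
    return np.array([n[1] + 2*n[0],   # weight on P1
                     n[2] - n[0],     # weight on P2
                     n[3]])           # weight on P3

# Interior element: x(xi) = sum_A N_A(xi) * P_A over four control points.
P = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 0.0], [3.0, 1.0]])
x = bspline_basis(0.5) @ P            # curve point at the element midpoint

# End element: only three control points are needed, and the curve
# interpolates the first control point at xi = 0.
x_end = boundary_basis(0.0) @ P[:3]   # equals P[0]
print(x, x_end)
```

At ξ = 0 the modified bases evaluate to (1, 0, 0), which is precisely the interpolating property at the boundary noted above.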
Interpolating and evaluating Catmull-Clark subdivision surfaces
One defines the number of elements connected with a vertex as its valence. A regular vertex in a Catmull-Clark surface mesh has a valence of 4. A vertex with a valence not equal to 4 is called an extraordinary vertex. This allows subdivision surfaces to handle arbitrary topologies. In their seminal paper [14], Catmull and Clark proposed a way to modify the weight distributions for extraordinary vertices in order to describe complex geometries. With this simple solution, Catmull-Clark surfaces can use a single mesh to represent surfaces of arbitrary geometries, while other spline-based CAD tools, such as NURBS surfaces, need to link multiple patches. The limiting surface of the Catmull-Clark subdivision algorithm has C² continuity over the surface except at the extraordinary vertices, where it has C¹ continuity, as proven by Peters and Reif [45]. This section illustrates the methods for interpolating and evaluating both a regular element and an element with an extraordinary vertex in Catmull-Clark subdivision surfaces.

Element in a regular patch

Figure 3a shows a subdivision surface element (dashed) which does not contain an extraordinary vertex. In order to evaluate a point in this Catmull-Clark element, an element patch must be formed. The patch consists of the element itself and the elements which share vertices with it. A regular element patch has 9 elements with 16 control vertices. The surface point can be evaluated using the 16 basis functions associated with these control points as

x(ξ) = Σ_{A=1}^{16} N_A(ξ) P_A,   (6)

where ξ := (ξ, η) is the parametric coordinate of a Catmull-Clark subdivision surface element. A Catmull-Clark surface is obtained as the tensor product of two Catmull-Clark curves. The basis functions are defined by

N_A(ξ) = N_i(ξ) N_j(η),  with i = (A − 1) % 4 + 1 and j = ⌊(A − 1)/4⌋ + 1,   (7)

where N(ξ) and N(η) are the one-dimensional basis functions defined in Eq. (2) and presented in Fig. 3a.
Here ⌊•⌋ is the floor operator and % denotes the remainder operator, which gives the remainder of the integer division. Figure 3b shows the element patch of a subdivision surface element (shaded) that has an edge on the physical boundary. This type of element has only 5 neighbouring elements, so it belongs to an element patch which has 12 control vertices. To evaluate this element, a common solution is to generate a set of 'ghost' vertices outside the domain to form a full element patch [17]. However, this method involves additional degrees of freedom in numerical analysis. Instead, the curve basis functions in Eq. (5) are adapted to deal with the element on the boundary. The same strategy is used for elements which have two edges on the physical boundary, as shown in Fig. 3c.
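For a regular element the tensor-product structure makes the evaluation straightforward. A minimal Python sketch of ours, with one possible index ordering (the modulus-based numbering in Eq. (7) fixes an equivalent convention):

```python
import numpy as np

def bspline_basis(xi):
    # uniform cubic B-spline segment basis, as in the curve sketch above
    return np.array([(1 - xi)**3, 3*xi**3 - 6*xi**2 + 4,
                     -3*xi**3 + 3*xi**2 + 3*xi + 1, xi**3]) / 6.0

def surface_basis(xi, eta):
    """The 16 tensor-product basis functions of a regular Catmull-Clark
    element, Eq. (7); entry 4*j + i multiplies control point P_{i,j}."""
    return np.outer(bspline_basis(eta), bspline_basis(xi)).ravel()

N = surface_basis(0.25, 0.75)
print(N.shape, N.sum())   # (16,) and 1.0: partition of unity
```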
Element in a patch with an extraordinary vertex
Extraordinary vertices are a key advantage of Catmull-Clark subdivision surfaces which allows them to model complex geometries with arbitrary topologies. However, it increases the difficulty of evaluating the surfaces. Figure 4a shows a Catmull-Clark subdivision element which contains one extraordinary vertex.
In order to evaluate this element, one needs to re-number the control points as shown in Fig. 4a. After applying one level of subdivision, new control points are generated and this element is subdivided into four sub-elements, as shown in Fig. 4b. The sub-elements Ω_1, Ω_2 and Ω_3 are now in a regular patch. However, the last sub-element (grey) still has an extraordinary vertex. If the target point to be evaluated is in this region, we must repeatedly subdivide the element until the point falls into a sub-element with a regular patch. Then, the point can be evaluated within the sub-element with the new set of control points P_{n,k}, where n is the number of subdivisions required and k = 1, 2, 3 is the sub-element index shown in Fig. 4b. The new control point set is computed as

P_{n,k} = D_k Ā A^{n−1} P_0,   (8)

where D_k is a selection operator to pick control points for the sub-elements, A and Ā are two types of subdivision operators, and P_0 is the initial set of control points. The detailed approach is given in [50] and can also be found in Appendix A.3. P_{n,k} contains 16 control points. Then, a surface point in the element with an extraordinary vertex can be computed as

x(ξ) = Σ_{A=1}^{16} N_A(ξ̄) P_{n,k,A},   (9)

where ξ̄ is the parametric coordinate of the evaluated point in the sub-element, which can be mapped from ξ as

ξ̄ = (2ⁿξ − 1, 2ⁿη) for k = 1,  ξ̄ = (2ⁿξ − 1, 2ⁿη − 1) for k = 2,  ξ̄ = (2ⁿξ, 2ⁿη − 1) for k = 3.   (10)

Equation (9) can thus be rewritten as

x(ξ) = Σ_{A=1}^{2κ+8} N̂_A(ξ) P_{0,A},   (11)

where N̂ is the Catmull-Clark subdivision surface basis function. Define N̂ as a set of 2κ + 8 basis functions in an element with an extraordinary vertex of valence κ, and N as the set of 16 regular basis functions defined in Eq. (7). N̂ can be calculated in vector form as

N̂(ξ) = (D_k Ā A^{n−1})ᵀ N(ξ̄).   (12)

The derivatives of the Catmull-Clark subdivision surface basis functions for elements containing extraordinary vertices are expressed as

∂N̂/∂ξ = (∂N̂/∂ξ, ∂N̂/∂η)   (13)

and can be computed by the chain rule as

∂N̂/∂ξ = (D_k Ā A^{n−1})ᵀ (∂N/∂ξ̄) (∂ξ̄/∂ξ),   (14)

where ∂ξ̄/∂ξ can be considered as a mapping matrix defined by

∂ξ̄/∂ξ = 2ⁿ I₂.   (15)
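The bookkeeping of Eq. (10) is easy to get wrong, so a small sketch may help. The following Python function of ours, following Stam's parameterisation as summarised above, returns the subdivision depth n, the sub-element index k and the mapped coordinate ξ̄ (the extraordinary vertex itself at ξ = (0, 0) is excluded):

```python
import math

def map_to_regular_subelement(xi, eta):
    """Map a point of an irregular element (xi, eta in (0, 1]) to the regular
    sub-element containing it after n subdivisions; k in {1, 2, 3} as in
    Fig. 4b. Not valid at the extraordinary vertex (0, 0) itself."""
    n = math.floor(min(-math.log2(xi), -math.log2(eta))) + 1
    u, v = 2**n * xi, 2**n * eta     # scale into the level-n tiling of [0, 2]^2
    if v < 1.0:                      # sub-element Omega_1
        return n, 1, u - 1.0, v
    if u < 1.0:                      # sub-element Omega_3
        return n, 3, u, v - 1.0
    return n, 2, u - 1.0, v - 1.0    # sub-element Omega_2

print(map_to_regular_subelement(0.3, 0.1))    # two subdivisions suffice
print(map_to_regular_subelement(0.01, 0.02))  # n grows near the EV
```

The returned n is exactly the exponent appearing in the mapping matrix (15), which is why the basis derivatives blow up as the evaluation point approaches the extraordinary vertex.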
Remark 1
The calculation of the basis functions N̂ at a physical point x involves two mappings. The first is from the physical domain to the parametric domain of an element with an irregular patch, x → ξ. Because the irregular patch does not have the tensor-product nature, n levels of subdivision are required and the point is mapped to the parametric domain of a sub-element, ξ → ξ̄. This second mapping is defined in Eq. (10). The value of n approaches positive infinity when ξ approaches the extraordinary vertex, which has the parametric coordinate (0, 0). Hence the diagonal terms in the mapping matrix (15) tend to positive infinity as n → ∞. This results in the basis functions N̂ not being differentiable at ξ = 0. This problem is termed singular configuration in [35], and singular parameterisation in [41,51].
Techniques for analysis and improving accuracy
This section presents three techniques which are essential for using Catmull-Clark subdivision surfaces in numerical analysis. A geometry fitting method using Catmull-Clark surfaces is introduced in Sect. 3.1. Section 3.2 illustrates an adaptive quadrature scheme for integrating element with an extraordinary vertex to improve accuracy. Section 3.3 introduces the penalty method for applying essential boundary conditions.
Geometry fitting
Catmull-Clark subdivision surfaces are CAD tools which construct limiting surfaces from control polygons and meshes. However, in a number of engineering problems, the geometry is given as an industrial design and a limit surface that is a "best approximation" of this desired geometry is required. Litke et al. [37] introduced a method for fitting a Catmull-Clark subdivision surface to a given shape. They employed both a least-squares fitting method and a quasi-interpolation method to determine a set of control points for a given surface. The least-squares fitting method is used here. One first chooses a set of sample points S = {s_1, s_2, ..., s_{n_s}} ∈ Γ, where Γ is the geometry and n_s is the number of sample points. Each sample point is evaluated using Catmull-Clark subdivision bases with control points as

s_i = Σ_{A=1}^{n_b} N_A(ξ) P_A,   (16)

where n_b = 2κ + 8 is the number of local basis functions. Then the set of sample points can be evaluated as

S = L P,   (17)

where P = {P_1, P_2, ..., P_{n_c}} is a set of n_c control points and L is an evaluation operator of Catmull-Clark curves or surfaces. Setting ξ = (0, 0) ensures the sample points correspond to the control vertices; with n_s ≡ n_c, L is then a square matrix. The control points can be calculated as

P = L⁻¹ S.   (18)

If more sampling points n_s are chosen than the required number of control points n_c, then L is no longer square and cannot be inverted directly; a least-squares method is used to obtain a set of control points P̂ that minimises ‖S − LP‖² as

P̂ = (Lᵀ L)⁻¹ Lᵀ S.   (19)

Figure 5a shows that 6 sample points are chosen from the given curve and one assembles the evaluation operator for these sampling points. The control points can be obtained by solving (18). Using these control points, the limit curve can be interpolated. Since 6 sample points are not sufficient to capture the given curve, the limit curve is significantly different from the given curve. Figures 5b and c show the curve fitting with 11 and 21 sample points, respectively. Increasing the number of sample points, the limit curve converges to the given curve.
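The least-squares step of Eq. (19) is a standard normal-equations solve. A minimal Python sketch of ours (the evaluation operator below is a random stand-in for the matrix L that would be assembled from the subdivision bases):

```python
import numpy as np

def fit_control_points(S, L):
    """Least-squares fit of control points to sample points on the target
    geometry: minimises ||S - L P||^2 as in Eq. (19).

    S : (n_s, 3) sample points on the given geometry
    L : (n_s, n_c) evaluation operator; row i holds the subdivision basis
        functions evaluated at the parametric location of sample i
    """
    P_hat, *_ = np.linalg.lstsq(L, S, rcond=None)
    return P_hat   # (n_c, 3) fitted control points

rng = np.random.default_rng(0)
L = rng.random((20, 8))               # stand-in operator with n_s > n_c
S = rng.random((20, 3))               # stand-in sample points
P_hat = fit_control_points(S, L)
print(np.linalg.norm(S - L @ P_hat))  # least-squares residual
```

Using numpy's lstsq rather than forming (LᵀL)⁻¹ explicitly is numerically preferable, but the minimiser is the same.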
Adaptive quadrature rule for element with an extraordinary vertex
In numerical analysis, a Gauss quadrature rule is applied to integrate over Catmull-Clark subdivision elements. A one dimensional quadrature rule with n q Gauss points can exactly evaluate the integrals for polynomials of degree up to 2n q −1.
The polynomial degree of a cubic B-spline function is 3.
Because the basis functions of a Catmull-Clark subdivision element in a regular element patch are generated as the tensor product of two cubic splines, 2×2 Gauss points can be used in this case. However, if a Catmull-Clark subdivision element has an extraordinary vertex, the basis functions are generated by Eq. (12). In this case, the basis functions are not polynomials and the derivatives of the basis functions suffer from the singular parametrisation problem, see Remark 1. Thus, the standard Gauss quadrature can not be used to evaluate the element integral. Inspired by [35], an adaptive quadrature rule well suited to Catmull-Clark subdivision surfaces is adopted, integrating at a number of levels of subdivided elements.
With n_d levels of subdivision, the element is subdivided into 3n_d + 1 sub-elements, as shown in Fig. 4d. The sub-elements can be evaluated using cubic B-splines with new control vertices, except for the one containing the extraordinary vertex. Thus the Gauss quadrature rule can be used to evaluate the integrals in 3n_d sub-elements. With a number of subdivisions, the integration error can be reduced. In this work, n_d = 7 is chosen in order to obtain sufficiently accurate values of the integrals.
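The geometry of this rule is easiest to see in code. A minimal Python sketch of ours that places a standard 2×2 Gauss rule on the three regular tiles of each subdivision level and simply truncates the ever-shrinking corner tile containing the extraordinary vertex:

```python
import numpy as np

def adaptive_quadrature(n_levels, n_gauss=2):
    """Quadrature points and weights on the unit parametric element of an
    element whose extraordinary vertex sits at (0, 0)."""
    g, w = np.polynomial.legendre.leggauss(n_gauss)
    g, w = 0.5 * (g + 1.0), 0.5 * w                  # map rule to [0, 1]
    pts, wts = [], []
    for level in range(1, n_levels + 1):
        h = 0.5**level                               # tile edge length
        for ox, oy in [(h, 0.0), (h, h), (0.0, h)]:  # 3 regular tiles
            for i in range(n_gauss):
                for j in range(n_gauss):
                    pts.append((ox + h * g[i], oy + h * g[j]))
                    wts.append(w[i] * w[j] * h * h)
    return np.array(pts), np.array(wts)

pts, wts = adaptive_quadrature(n_levels=7)
print(len(pts), wts.sum())   # 84 points; weights sum to 1 - 4**(-7)
```

The neglected corner tile has area 4^(−n_d), so with n_d = 7 the untreated region occupies less than 10⁻⁴ of the parametric element.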
Penalty method for applying boundary condition
The basis functions do not have the Kronecker delta and interpolating properties, so boundary conditions can not be directly applied using conventional methods. The method used here is a penalty method, which uses a penalty parameter and a boundary mass matrix to apply the boundary conditions approximately. It preserves the symmetry of the system matrix and does not increase its size. However, the penalty parameter should be carefully selected: if fine meshes with more degrees of freedom are adopted, a larger penalty parameter must be chosen. The Dirichlet boundary condition is defined as

u = ḡ on ∂Γ.   (20)

An L² projection is used for applying the Dirichlet boundary condition, where for a test function v one obtains

∫_{∂Γ} v u dΓ = ∫_{∂Γ} v ḡ dΓ.   (21)

Using the cubic B-spline functions in Eq. (2) to discretise u and v, and the same strategy for formulating the system matrix, one introduces a boundary mass matrix M_b, assembled over the n_be boundary elements from the element integrals

(M_b)_{AB} = Σ_{e=1}^{n_be} ∫_{∂Γ_e} N_A N_B dΓ,

and the right-hand side vector for applying the boundary conditions is thus

(f_b)_A = Σ_{e=1}^{n_be} ∫_{∂Γ_e} N_A ḡ dΓ.

Then the discrete system of equations arising from (21) is M_b u = f_b. We note that the elements for applying boundary conditions are the discretisation of the surface boundary, which are one-dimensional cubic B-spline curves, and only a one-dimensional Gauss quadrature rule is used for integration. However, one uses the global degrees of freedom indices to assemble M_b and f_b, so that they have the same size as the system matrix and the global right-hand side vector, respectively. Assume the system of equations is expressed as Ku = f, where K is the system matrix, u is the global coefficient vector to be solved for, and f is the global right-hand side vector. Then, we scale M_b and f_b using a penalty factor β and combine them with the system of equations as

(K + β M_b) u = f + β f_b.   (26)

The Dirichlet boundary condition (20) is here weakly applied to the system of equations. A relatively large penalty factor β = 10⁸ is selected for all numerical examples. It is sufficiently large to ensure good satisfaction of the constraint but not too large so as to significantly impact the conditioning of the system.
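Once K, f, M_b and f_b are assembled, the penalty step itself is a two-line modification. A minimal Python sketch of ours, with all operators assumed pre-assembled at the global size:

```python
import numpy as np

def apply_penalty_bc(K, f, M_b, f_b, beta=1e8):
    """Weakly enforce Dirichlet data via the penalty method, Eq. (26):
    (K + beta * M_b) u = f + beta * f_b. M_b and f_b are assembled over
    the 1D B-spline boundary elements but stored at the full system size
    using global degree-of-freedom indices."""
    return K + beta * M_b, f + beta * f_b

# Hypothetical usage with pre-assembled operators:
# K_mod, f_mod = apply_penalty_bc(K, f, M_b, f_b)
# u = np.linalg.solve(K_mod, f_mod)
```

Because M_b is symmetric positive semi-definite, the modified matrix keeps the symmetry of K, which is the property emphasised above.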
Laplace-Beltrami problem
The governing partial differential equation which we want to solve to illustrate fundamental features of subdivision surfaces is given by

−Δ_Γ u = f on Γ,   (27)

where Γ is a two-dimensional manifold (with outward unit normal vector n) in three-dimensional space R³ and Δ_Γ(•) is the Laplace-Beltrami operator (also called the surface Laplacian operator). The Dirichlet boundary condition is expressed in (20). We will use a manufactured solution to compare against the approximate solution. The Laplace-Beltrami operator is defined by

Δ_Γ(•) = ∇_Γ · ∇_Γ(•),   (28)

where ∇_Γ(•) is the surface gradient operator defined by

∇_Γ(•) = (I − n ⊗ n) ∇(•).   (29)

Hence the surface gradient of a scalar function v can be calculated as the spatial gradient minus its normal part,

∇_Γ v = ∇v − (∇v · n) n,   (30)

where ∇(•) is the spatial gradient operator. Hence the surface Laplacian of v is given by

Δ_Γ v = Δv − n · (∇²v) n − c (∇v · n),   (31)

where ∇²v is the Hessian matrix of v, and ∇n is the gradient of the normal vector, which is arranged in a matrix as

(∇n)_{ij} = ∂n_i/∂x_j.   (32)

We define the total curvature at a surface point x ∈ Γ as the surface divergence of the normal, that is c(x) := ∇_Γ · n. For a given manufactured solution u_m, the right-hand side of Eq. (27) can thus be computed as

f = −Δ_Γ u_m.   (33)
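The tangential projection in Eq. (30) is the computational workhorse here. A minimal Python sketch of ours:

```python
import numpy as np

def surface_gradient(grad_v, n):
    """Tangential part of an ambient gradient, Eq. (30): the spatial
    gradient minus its component along the unit normal n."""
    return grad_v - np.dot(grad_v, n) * n

# Example on the unit sphere at x = (0, 0, 1), where n = x:
n = np.array([0.0, 0.0, 1.0])
grad_v = np.array([1.0, 2.0, 3.0])   # some ambient gradient at that point
print(surface_gradient(grad_v, n))   # -> [1. 2. 0.]
```

The same projection, combined with the Hessian and curvature terms of Eq. (31), is what turns a manufactured ambient solution u_m into the right-hand side f of Eq. (33).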
Galerkin formulation
The weak formulation of problem (27) is: find u satisfying the Dirichlet boundary condition such that

∫_Γ ∇_Γ u · ∇_Γ v dΓ = ∫_Γ f v dΓ,

where v is an admissible test function. The weak formulation is partitioned into n_e elements as

Σ_{e=1}^{n_e} ∫_{Γ_e} ∇_Γ u · ∇_Γ v dΓ = Σ_{e=1}^{n_e} ∫_{Γ_e} f v dΓ.   (34)

Discretising u, v, ∇u and ∇v using the Catmull-Clark basis functions N given in Eq. (7) produces element integrals of the form

∫_{Γ_e} ∇_Γ N_A · ∇_Γ N_B dΓ  and  ∫_{Γ_e} N_A f dΓ,

where J is the surface Jacobian for the manifold, given in matrix form as J = ∂x/∂ξ. For details on the computation of J⁻¹ see [46], and for a discussion of superficial tensors such as J in the context of the Laplace-Beltrami equation, see [31]. If the element contains an extraordinary vertex, the shape functions N_A are replaced by N̂_A in Eq. (12). The surface gradient of the shape functions is computed from the parametric derivatives via the (generalised) inverse of J as

∇_Γ N_A = J⁻ᵀ ∂N_A/∂ξ,  with J = ∂x/∂ξ.

Integrating the discrete problem using Gauss quadrature, the system of Eq. (34) becomes

K_{AB} = A_{e=1}^{n_e} Σ_{i=1}^{n_q} w_i ∇_Γ N_A(ξ_i) · ∇_Γ N_B(ξ_i) |J(ξ_i)|,   f_A = A_{e=1}^{n_e} Σ_{i=1}^{n_q} w_i N_A(ξ_i) f(x(ξ_i)) |J(ξ_i)|,

where A is the assembly operator, n_q is the number of quadrature points in each element, w_i is the weight of the i-th quadrature point, n_e is the number of elements and n_b is the number of basis functions of the element. The basis functions N_e are replaced by N̂_e if the element e contains an extraordinary vertex. In this case, the basis functions are not differentiable and their derivatives approach positive infinity when points are close to the extraordinary vertex (see Remark 1). Thus |J| approaches positive infinity at extraordinary vertices. Errors result if quadrature is adopted to integrate the contributions from elements containing extraordinary vertices.
The discrete system of equations to solve is thus given by Ku = f, augmented with the penalty terms as in Eq. (26) when Dirichlet boundary conditions are imposed.
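The quadrature sum above maps directly onto array operations. A minimal Python sketch of ours for one element stiffness matrix, with precomputed surface gradients, Jacobian determinants and weights at the Gauss points:

```python
import numpy as np

def element_stiffness(grad_N, detJ, w):
    """K_e[a, b] = sum_i w[i] * grad_N[i, a] . grad_N[i, b] * detJ[i].

    grad_N : (n_q, n_b, 3) surface gradients of the bases at Gauss points
    detJ   : (n_q,) surface Jacobian determinants
    w      : (n_q,) quadrature weights
    """
    return np.einsum('qad,qbd,q,q->ab', grad_N, grad_N, detJ, w)

# Hypothetical shapes for a regular element with a 2x2 Gauss rule:
n_q, n_b = 4, 16
rng = np.random.default_rng(1)
K_e = element_stiffness(rng.random((n_q, n_b, 3)),
                        np.ones(n_q), np.full(n_q, 0.25))
print(K_e.shape)   # (16, 16); symmetric by construction
```

For an element with an extraordinary vertex, n_b becomes 2κ + 8 and the Gauss points and weights are replaced by the adaptive rule of Sect. 3.2; the assembly itself is unchanged.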
Numerical results
A 'patch test' [58] on a two-dimensional plate is first presented to assess the consistency and stability of the proposed formulation in a simplified setting. Then, the Laplace-Beltrami equation is solved on both cylindrical and hemispherical surfaces. Convergence studies are conducted. The influence of extraordinary vertices is also investigated.
'Patch test'
The 'patch test' is performed on a two-dimensional flat plate, where the Laplace-Beltrami operator reduces to the Laplace operator. The problem proposed in Sect. 4 reduces to the Poisson problem given by

−Δu = f in Ω.

This partial differential equation is solved on the square plate shown in Fig. 6a, with essential boundary conditions applied on two opposite edges. The essential boundary conditions are imposed using the penalty method. Natural homogeneous boundary conditions are applied on the remaining two edges of the plate. Four different manufactured functions for f are used. The functions, the analytical solutions for u and their gradients ∂u/∂x₂ are given in Table 1. We investigate both a regular and an irregular mesh. The regular mesh is a 4 × 4 element patch without extraordinary vertices, as shown in Fig. 6b. In all of the tests, geometry error is absent.
For Test 1, the right-hand side is f = 0, so that ∂u/∂x₂ = 2. Solving the equation using the proposed Catmull-Clark subdivision method, the numerical gradient ∂u_h/∂x₂ is exactly 2 everywhere, as shown in Fig. 7b. The method thus passes the consistency test, and the eigenvalues of the system matrix are all positive and non-zero after application of the essential boundary conditions.

Table 1 Test 1 has no right-hand side term, thus the analytical solution u is linear and its gradient is a constant. The analytical solutions for Tests 2 and 3 are quadratic and cubic, respectively, and their gradients are linear and quadratic, respectively. Test 4 has a sine function as the right-hand side term, which gives a cosine function as the gradient of the analytical solution.

The gradients ∂u/∂x₂ for Tests 2 and 3 are linear and quadratic, respectively. Recall that when interpolating functions in elements with edges on physical boundaries, the basis functions are modified, see Eqs. (3) and (4). In other words, the gradients of the function u are expected to be constant at boundaries. Figure 7a, c and e show the numerical results for these tests. The results are smooth and capture the analytical solutions well. Figure 7d and f compare the numerical results of ∂u/∂x₂ to the analytical solution for Tests 2 and 3. The Catmull-Clark subdivision method is also compared to linear and quadratic Lagrangian finite element methods. There is a substantial error in both boundary regions in Test 2 for the Catmull-Clark subdivision method. This is because the method imposes the gradient to be constant at both boundaries. The numerical result of the Catmull-Clark subdivision method in Test 3 has a substantial error in the region close to the top boundary (x₂ = 2) but captures the gradient in the region close to the bottom boundary (x₂ = 0) well, because the analytical solution for the gradient in the bottom boundary region is near-constant. These errors at the boundaries pollute the numerical result in the interior of the domain, which reduces the convergence rate. The gradients approximated by the linear and quadratic Lagrangian finite elements are piecewise constant and piecewise linear, respectively. The results of the Catmull-Clark subdivision method for these two tests lie between the linear and quadratic Lagrangian elements. The gradient ∂u/∂x₂ in Test 4 is a cosine function, which is non-polynomial, and it behaves as a constant in both boundary regions, as shown in Fig. 7h. The Lagrangian elements only possess C⁰ continuity across elements and their gradients hence have jumps between elements. The Catmull-Clark subdivision elements capture the gradients of the given function better, as they are C¹ smooth. Figure 8 shows the plots of the normalised global L² and H¹ errors against the element size. The normalised global L² error is defined by

e_{L²} = ‖u_h − u‖_{L²} / ‖u‖_{L²},

where ‖•‖_{L²} is the L² norm defined as ‖•‖_{L²} = (∫_Γ |•|² dΓ)^{1/2}. The normalised global H¹ error is computed as

e_{H¹} = ‖u_h − u‖_{H¹} / ‖u‖_{H¹},

where ‖•‖_{H¹} is the H¹ norm defined as ‖•‖_{H¹} = (‖•‖²_{L²} + ‖∇_Γ(•)‖²_{L²})^{1/2}. We set the element size of the coarsest mesh to 1; the normalised element sizes for the refined meshes are then 1/2, 1/4, .... The convergence rates of Tests 2 and 3 are sub-optimal at 2.5 (L² error) and 1.5 (H¹ error). The optimal convergence rates for cubic elements should be p + 1 = 4 (L² error) and p = 3 (H¹ error), where p is the polynomial degree of the basis functions. The numerical result captures the analytical solution well and the convergence rate for Test 4 is optimal.
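For reference, the error measures above are straightforward to evaluate from quadrature-point samples. A minimal Python sketch of ours, with a fabricated 1D example purely to exercise the function (dGamma collects quadrature weights times surface Jacobians):

```python
import numpy as np

def normalised_errors(u_h, u, grad_u_h, grad_u, dGamma):
    """Normalised global L2 and H1 errors from quadrature-point samples."""
    e, ge = u_h - u, grad_u_h - grad_u
    l2 = np.sqrt(np.sum(e**2 * dGamma) / np.sum(u**2 * dGamma))
    h1_num = np.sum(e**2 * dGamma) + np.sum(np.sum(ge**2, axis=1) * dGamma)
    h1_den = np.sum(u**2 * dGamma) + np.sum(np.sum(grad_u**2, axis=1) * dGamma)
    return l2, np.sqrt(h1_num / h1_den)

x2 = np.linspace(0.01, 2.0, 200)
dG = np.full_like(x2, 2.0 / 200)              # crude quadrature weights
u, u_h = x2**2, x2**2 + 1e-3 * np.sin(x2)     # hypothetical exact/numerical
gu = np.stack([np.zeros_like(x2), 2 * x2, np.zeros_like(x2)], axis=1)
print(normalised_errors(u_h, u, gu + 1e-3, gu, dG))
```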
The same convergence study is now repeated starting from a mesh containing extraordinary vertices, as shown in Fig. 6c. Figure 8c and d show the plots of normalised element size against the L² and H¹ errors, respectively, for this mesh. The same convergence rates are obtained for Tests 2 and 3. However, the convergence rate of Test 4 is now also reduced to 2.5 (L² error) and 1.5 (H¹ error).
The Catmull-Clark subdivision method passes the patch test when the function gradient is constant, but has difficulty capturing gradients in boundary regions when they do not behave like a constant. When the gradient behaves like a constant in the boundary regions, the optimal convergence rate is obtained; if this is not the case, a reduction of the convergence rate is observed. The presence of an extraordinary vertex in the patch also reduces the convergence rate. It is also important to note that the Catmull-Clark subdivision elements have advantages in describing non-polynomial functions, since their basis functions are cubic and C² continuous.
Comparison with NURBS and Lagrangian elements
We now compare the convergence rate associated with Catmull-Clark elements against conventional Lagrangian elements and NURBS. Bézier extraction [12] is adopted to decompose a NURBS surface into C⁰ Bézier elements to provide an element structure for the isogeometric Galerkin method. This is a widely-used method for isogeometric analysis using T-splines [48]. As the Lagrangian and Bézier elements fully pass the 'patch test', they both have no approximation error for Tests 1, 2 and 3. Figure 9 compares their behaviour in approximating the non-polynomial solution of Test 4. Mesh 1 is used for all methods. All methods exhibit an optimal convergence rate. Since no geometry error is involved in the 'patch test', the Bézier element provides the same performance as the Lagrangian element without the advantage of exact geometry representation. The Catmull-Clark element is slightly more accurate than the other two methods for this specific test.
Cylindrical surface example
The first numerical example considered is a cylindrical surface; the analysis domain of the problem is the cylindrical surface shown in Fig. 10a. The surface fitting method of Sect. 3.1 is used to construct the control mesh. The first-level control mesh is shown in Fig. 10b; it has no extraordinary vertices. The Laplace-Beltrami problem on this manifold domain is solved using the Galerkin formulation presented in Sect. 5. Essential boundary conditions are applied on ∂Γ. The right-hand side function f is computed using the definition in Eq. (33). Figure 10c shows the numerical result u_h, which matches the manufactured analytical solution (47) very well. A convergence study is now conducted for this geometry. The refined control meshes are constructed using the least-squares fitting method described in Sect. 3.1. Figure 11 compares the convergence rates of Catmull-Clark subdivision surfaces with two different orders of Lagrangian elements. In this example, the shortcomings caused by extraordinary vertices and boundary gradients are not present, and the Catmull-Clark subdivision surfaces have the same convergence rate p + 1 as cubic Lagrangian elements.

Fig. 10 The geometry is given in (a), and (b) is the control mesh which constructs the best approximating Catmull-Clark subdivision surface of the given geometry; the control mesh is generated using least-squares fitting. (c) shows the numerical result u_h on the cylindrical surface.
Hemispherical surface example
The second geometry investigated is a hemispherical surface with radius equal to 1 as shown in Fig. 12a. We use the same strategy to fit the Catmull-Clark subdivision surfaces to the hemispherical surface. The control mesh shown in Fig. 12b is generated to discretise the surface into a number of Catmull-Clark elements. The control mesh has four extraordinary vertices. Figure 12e shows the solution u h .
Convergence study with an isogeometric approach
In engineering, designers usually do not know the geometry of the product in advance. The geometry information comes purely from the CAD model. Catmull-Clark subdivision surfaces, as a design tool, provide the geometry, which is the design of the engineering product. (Fig. 12: a the hemispherical surface; b the control mesh for constructing subdivision surfaces to fit the hemispherical surface; c the 1-level refined mesh; d the 2-level refined mesh; e the numerical result u h on this surface.) In this case, engineers do not need to approximate the given geometry with Catmull-Clark elements; they can directly adopt the discretisation from the CAD model for analysis. For example, we adopt the control mesh shown in Fig. 12b as the initial control mesh. It can be used to generate a limit surface approximating a hemisphere, as shown in Fig. 12a, with Catmull-Clark subdivision bases. It is important to note that the limit surface is not an exact hemisphere, since it is evaluated using cubic B-spline basis functions. However, this surface is the domain of our problem, and it stays exactly the same during the entire analysis (isogeometric); h-refinement with the subdivision algorithm does not change the geometry.
The same problem is solved on the subdivision surfaces. A convergence study is conducted with two further levels of subdivided control meshes, as shown in Fig. 12c and d. Note that refinement does not change the number of extraordinary vertices: the two new meshes still have four extraordinary vertices, and both control meshes can be used to evaluate the same limit surface shown in Fig. 12a. The Catmull-Clark subdivision surfaces are compared with quadratic and cubic Lagrangian elements. Generally, Catmull-Clark subdivision elements achieve higher accuracy per degree of freedom than Lagrangian elements. From the initial to the second level of mesh refinement, the Catmull-Clark subdivision elements have a convergence rate similar to cubic Lagrangian elements; after that, the convergence rate is equivalent to that of quadratic Lagrangian elements, as shown in Fig. 13. Figure 14a shows the sparsity pattern of the system matrix K for the Catmull-Clark subdivision discretisation. The size of the matrix is the same as that of the system matrix assembled using a linear Lagrange discretisation. However, because the Catmull-Clark subdivision discretisation uses cubic basis functions with non-local support, and there are 16 shape functions in a subdivision element with no extraordinary vertex, the number of non-zero entries in each column and row is larger than for the linear Lagrange discretisation (i.e. the sparsity is decreased and the bandwidth increased). Thus, the system matrix of a Catmull-Clark subdivision discretisation has the same size as, but is denser than, that of the linear Lagrange discretisation shown in Fig. 14b. Figure 14c shows the sparsity pattern of the cubic Lagrange discretisation. p-refinement increases the number of degrees of freedom as well as the number of non-zero entries in rows and columns, so there is no significant change in the density of the system matrices. The Catmull-Clark subdivision discretisation has the same number of non-zero entries in each row or column as the cubic Lagrangian discretisation, but a much smaller size.
Quadrature error
The presence of extraordinary vertices leads to difficulties in integration as described in Sect. 3.2. Figure 15a, c and e show the point-wise errors at surface points for three levels of mesh refinement using the standard Gauss quadrature rule. The number of extraordinary vertices remains 4 after refinement.
For the analysis using the initial mesh, the error in the regions around the extraordinary vertices has a magnitude similar to that in the other regions. However, after one level of refinement, the error in the other regions is reduced more than in the areas around the four extraordinary vertices. After the second refinement, the error is concentrated in the areas around the four extraordinary vertices. Figure 15b, d and f plot the point-wise errors for the same meshes analysed with the adaptive quadrature rule described in Sect. 3.2. The errors around the extraordinary vertices are now decreased.
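The idea behind such an adaptive rule can be sketched as follows. This is a minimal illustration, assuming a unit parameter square with the extraordinary vertex at the origin, and it is not the paper's exact implementation: the region near the irregular corner is recursively split, and a standard tensor-product Gauss rule is applied on each regular sub-square.

```python
import numpy as np

def gauss_2d(f, x0, y0, hx, hy, n=4):
    """Tensor-product Gauss-Legendre quadrature on [x0, x0+hx] x [y0, y0+hy]."""
    pts, wts = np.polynomial.legendre.leggauss(n)
    # Map points from [-1, 1] to the sub-square.
    xs = x0 + 0.5 * hx * (pts + 1.0)
    ys = y0 + 0.5 * hy * (pts + 1.0)
    total = 0.0
    for xi, wi in zip(xs, wts):
        for yj, wj in zip(ys, wts):
            total += wi * wj * f(xi, yj)
    return 0.25 * hx * hy * total

def adaptive_quadrature(f, depth=6, n=4):
    """Integrate f over the unit square with an extraordinary vertex at (0, 0).

    At each level, the square around the irregular corner is split into three
    regular sub-squares plus a smaller square that is recursed on; the
    innermost square is finally integrated with the standard rule (truncation).
    """
    total, h = 0.0, 1.0
    for _ in range(depth):
        h *= 0.5
        total += gauss_2d(f, h, 0.0, h, h, n)   # lower right sub-square
        total += gauss_2d(f, 0.0, h, h, h, n)   # upper left sub-square
        total += gauss_2d(f, h, h, h, h, n)     # upper right sub-square
    total += gauss_2d(f, 0.0, 0.0, h, h, n)     # innermost square at truncation
    return total

# Example: an integrand whose derivatives are singular at the corner.
f = lambda u, v: (u * u + v * v) ** 0.75
print(adaptive_quadrature(f))
```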
Approximation error
The presence of extraordinary vertices introduces approximation errors. We therefore investigate the effect of the number and valence of extraordinary vertices on numerical accuracy. Figure 16a, b and c show three control meshes with different numbers of extraordinary vertices. Figure 16a shows a control mesh without extraordinary vertices. Figure 16b shows a control mesh with four extraordinary vertices, comprising two vertices with a valence of 3 and two with a valence of 5. The control mesh in Fig. 16c has seven extraordinary vertices, comprising four vertices with a valence of 4, two with a valence of 5 and one with a valence of 6. It is important to note that the three control meshes construct different but similar geometries. The Laplace-Beltrami problem is solved using the Galerkin formulation with the same right-hand side function f computed in (33). Both the standard and adaptive Gauss quadrature rules are used for all cases. Figure 17a, c and e show the solution u on the surfaces constructed from the three meshes. Because of the similarity of the geometries and solutions, the three cases are used to investigate the influence of extraordinary vertices on accuracy; the point-wise errors are plotted in Fig. 17b, d and f. Meshes with extraordinary vertices have larger maximum point-wise errors close to the extraordinary vertices, while the mesh without extraordinary vertices has a more uniform point-wise error. Figure 18 shows the convergence rates for the three cases. The mesh without extraordinary vertices achieves the optimal convergence rate of p + 1 with p = 3. In general, the more extraordinary vertices a mesh contains, the larger the resulting error. The extraordinary vertices increase the global errors in the results and reduce the convergence rate. Since the global errors also include quadrature errors, the adaptive quadrature rule serves to reduce the quadrature errors. With the adaptive quadrature rule, the convergence rates are improved for the cases with 4 and 7 extraordinary vertices, but the results still agree with our expectation that increasing the number and valence of extraordinary vertices produces higher errors. Table 2 compares the computational cost of assembling the system matrix with the standard and adaptive quadrature rules. Because the number of extraordinary vertices remains constant after subdivision, the difference in computational time between the standard and adaptive quadrature schemes diminishes with refinement.
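For reference, the convergence rates quoted throughout can be estimated from pairs of successive refinements; a small sketch with hypothetical error values:

```python
import numpy as np

def observed_rates(h, err):
    """Observed convergence rates between successive mesh refinements.

    h   : normalised element sizes, e.g. [1.0, 0.5, 0.25, 0.125]
    err : corresponding L2 (or H1) error norms
    """
    h, err = np.asarray(h), np.asarray(err)
    return np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])

# Hypothetical errors decaying at the optimal rate p + 1 = 4 for cubic bases.
h = [1.0, 0.5, 0.25, 0.125]
err = [1e-2, 6.25e-4, 3.9e-5, 2.4e-6]
print(observed_rates(h, err))  # roughly [4, 4, 4]
```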
Complex geometry
This final example considers the ability of the Catmull-Clark method to provide high-order discretisations of complex geometry. The model considered is that of a racing car, taken from CAD and imported into Autodesk Maya [4] for removal of extraneous geometry and generation of the surface mesh shown in Fig. 19a. Modelling such geometry using NURBS surfaces would require a number of patches to be spliced together. A model based on a Catmull-Clark subdivision surface can directly evaluate the smooth limit surface shown in Fig. 19b, where n e = 9152 for this example. Figure 19c indicates the domains where essential (Dirichlet) and natural boundary conditions are applied. The essential boundary Γ d is composed of two parts. (Fig. 19: c domains defined for applying boundary conditions; f point-wise error on Γ n.)
The natural boundary condition is applied to the rest of the domain, Γ n = Γ \Γ d. The numerical result matches the analytical solution well, as shown in Fig. 19d. Figure 19e shows the results on Γ n, and a maximum point-wise error of 2.8% is observed in Fig. 19f.
Conclusions
A thorough study of the isogeometric Galerkin method with Catmull-Clark subdivision surfaces has been presented. The same bases have been used for both the geometry and the Galerkin discretisation. The method has been used to solve the Laplace-Beltrami equation on curved two-dimensional manifolds embedded in three-dimensional space using Catmull-Clark subdivision surfaces. An approach to fit given geometries using the Catmull-Clark subdivision scheme has been outlined. A method to model open boundary geometries without involving 'ghost' control vertices, but introducing errors in function gradients close to boundary regions, has also been described. The penalty method has been adopted to impose the Dirichlet boundary conditions. The optimal convergence rate of p + 1 has been obtained when using a cylindrical control mesh without extraordinary vertices.
A reduction of the convergence rates has been observed when the function gradients at the boundaries do not behave like a constant, or when the control meshes contain extraordinary vertices. The effect of the number and valence of the extraordinary vertices on the convergence rates has been investigated, and an adaptive quadrature rule has been implemented that significantly improves the accuracy and the convergence rates of the proposed method. The convergence rate of the proposed method is no worse than 2.5 (L 2 error) and 1.5 (H 1 error).
In future work, this method will be investigated with problems requiring C 1 continuity such as the deformations of thin shells.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
A.1 Lane-Riesenfeld subdivision algorithm for curves
The Lane-Riesenfeld algorithm successively refines a curve starting from an initial control polygon; after a number of subdivisions, the control polygon converges to a B-spline curve. Figure 20 illustrates a special case of this subdivision algorithm. The control point P^i_{2j} at the i-th level of refinement is computed from the control points of the previous level as P^i_{2j} = (P^{i-1}_j + P^{i-1}_{j+1}) / 2. The point P^i_{2j} is the mid-point of P^{i-1}_j - P^{i-1}_{j+1} and is called an 'edge point'. The control point P^i_{2j+1} is computed as P^i_{2j+1} = (P^i_{2j} + 2 P^{i-1}_{j+1} + P^i_{2j+2}) / 4. To compute this point, one connects the mid-points of P^i_{2j} - P^{i-1}_{j+1} and P^{i-1}_{j+1} - P^i_{2j+2}; the point P^i_{2j+1} is the mid-point of the connecting line. This type of point is called a 'vertex point', and each 'vertex point' is associated with a control point of the previous level. Figure 21 shows two levels of refinement using the Lane-Riesenfeld algorithm and the limit result, which is a cubic B-spline curve.
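A minimal sketch of one refinement step of this cubic rule for an open control polygon, using only the mid-point constructions described above (boundary modifications are omitted):

```python
import numpy as np

def lane_riesenfeld_step(P):
    """One refinement step for the interior of a control polygon P (n x dim).

    Edge points are midpoints of consecutive control points; each vertex point
    is the midpoint of the line connecting the midpoints of its two adjacent
    segments, which simplifies to (P[j-1] + 6 P[j] + P[j+1]) / 8.
    """
    P = np.asarray(P, dtype=float)
    edge = 0.5 * (P[:-1] + P[1:])                      # edge points
    vert = (P[:-2] + 6.0 * P[1:-1] + P[2:]) / 8.0      # vertex points
    out = [edge[0]]
    for j in range(len(vert)):                         # interleave the two kinds
        out.extend([vert[j], edge[j + 1]])
    return np.array(out)

# Two refinement levels of a square-ish control polygon.
P = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
for _ in range(2):
    P = lane_riesenfeld_step(P)
print(P.shape)
```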
A.2 Catmull-Clark subdivision algorithm for surfaces
The application of the subdivision algorithm to surfaces follows in a similar manner to curves: each face in the original mesh is subdivided into four new faces at every refinement level. Equipped with these formulae, the new control points at the i-th level of refinement, P^i, can be computed as P^i = S P^{i-1}, where S is a subdivision operator, a matrix consisting of a set of weights; each weight is associated with a control point in P^{i-1}. The weight distributions for the different types of control points are shown in Fig. 23, and the weight distributions for an extraordinary point are shown in Fig. 24. After successive levels of refinement, a smooth B-spline surface is obtained.
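For illustration, the standard Catmull-Clark averaging rules behind these weight distributions can be sketched as follows. This minimal version assumes a closed quadrilateral mesh (faces given as lists of four vertex indices) and does not reproduce the paper's boundary treatment.

```python
import numpy as np
from collections import defaultdict

def catmull_clark_points(V, faces):
    """Compute the new point positions of one Catmull-Clark step.

    Face point  : average of the face's vertices.
    Edge point  : average of the edge's endpoints and its adjacent face points.
    Vertex point: (Q + 2R + (n - 3) P) / n for a vertex P of valence n, where Q
    averages the adjacent face points and R the adjacent edge midpoints.
    """
    V = np.asarray(V, dtype=float)
    fpts = np.array([V[f].mean(axis=0) for f in faces])

    edge_faces = defaultdict(list)
    for fi, f in enumerate(faces):
        for a, b in zip(f, f[1:] + f[:1]):
            edge_faces[tuple(sorted((a, b)))].append(fi)

    epts = {e: (V[e[0]] + V[e[1]] + sum(fpts[fi] for fi in fs)) / (2 + len(fs))
            for e, fs in edge_faces.items()}

    vpts = np.zeros_like(V)
    for vi in range(len(V)):
        adj_f = [fi for fi, f in enumerate(faces) if vi in f]
        adj_e = [e for e in edge_faces if vi in e]
        n = len(adj_e)  # valence of the vertex
        Q = np.mean([fpts[fi] for fi in adj_f], axis=0)
        R = np.mean([0.5 * (V[e[0]] + V[e[1]]) for e in adj_e], axis=0)
        vpts[vi] = (Q + 2.0 * R + (n - 3.0) * V[vi]) / n
    return fpts, epts, vpts
```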
A.3 Computing control point set for sub-elements
We denote the control points of an irregular patch in Fig. 4a as a set P. The initial control points of the patch are expressed as P^0 = {P^0_0, P^0_1, ..., P^0_{2κ+6}, P^0_{2κ+7}}.
The subdivision step is represented as P^1 = A P^0, where A is the subdivision operator assembled from the blocks S, S_11, S_12, S_21 and S_22 defined in [50]; S is given in Eq. (61). To evaluate the sub-elements Ω_1, Ω_2 and Ω_3 in Fig. 4b, one needs to pick 2κ + 8 control points out of the new patch of 2κ + 17 control points. A selection operator D_k for sub-element Ω_k, k = 1, 2, 3, is used to select the necessary control points from P^1, that is, P_k = D_k P^1. As shown in Fig. 4c, after successive subdivisions, the non-evaluable element is restricted to a negligible region.
The sub-element index k is determined by the sub-domain of the parameter space in which the evaluation point lies (Eq. (68)).
"Physics"
] |
No-Reference Hyperspectral Image Quality Assessment via Quality-Sensitive Features Learning
Assessing the quality of a reconstructed hyperspectral image (HSI) is of significance for restoration and super-resolution. Current image quality assessment methods, such as the peak signal-to-noise ratio, require a pristine reference image, which is often not available in practice. In this paper, we propose a no-reference hyperspectral image quality assessment method based on quality-sensitive feature extraction. The differences in statistical properties between pristine and distorted HSIs are analyzed in both the spectral and spatial domains, and multiple statistics features that are sensitive to image quality are extracted. By combining all these statistics features, we learn a multivariate Gaussian (MVG) model as a benchmark from pristine hyperspectral datasets. In order to assess the quality of a reconstructed HSI, we partition it into local blocks and fit a MVG model on each block. A modified Bhattacharyya distance between the MVG model of each reconstructed HSI block and the benchmark MVG model is computed to measure the quality. The final quality score is obtained by average pooling over all the blocks. We assess five state-of-the-art super-resolution methods on Airborne Visible Infrared Imaging Spectrometer (AVIRIS) and Hyperspec-VNIR-C (HyperspecVC) data using our proposed method. It is verified that the proposed quality score is consistent with current reference-based assessment indices, which demonstrates the effectiveness and potential of the proposed no-reference image quality assessment method.
Introduction
A hyperspectral image (HSI), with rich spatial and spectral information of the scene, is useful in many fields such as mineral exploitation, agriculture, and environmental management [1][2][3]. Because the spatial resolution of acquired HSIs is limited, super-resolution is an important enhancement technique for improving their quality [4][5][6][7][8][9][10][11][12]. To evaluate a reconstructed high-resolution HSI, the conventional strategy is to degrade the original data to a coarser resolution by down-sampling; the original data are then used as the reference image and compared with the reconstructed high-resolution image. The disadvantage is that, as the invariance of super-resolution performance to scale changes cannot be guaranteed, the performance of a super-resolution method on the original data may not be as good as on the down-sampled data [13,14]. While it is naturally better to assess a super-resolution method on the original data rather than on down-sampled data, no reference image is available for assessment when super-resolution is applied to the original data.
To our knowledge, there is no published work on no-reference quality assessment for the reconstruction of HSIs. Alparone et al. proposed a no-reference pansharpening assessment method [14] in which a high-resolution panchromatic image is needed to assess the reconstructed multispectral image; this method is not applicable when the panchromatic image is unavailable. There are some other no-reference image assessment methods designed for color images [15][16][17], but they cannot be applied to hyperspectral images directly. These methods can only assess spatial quality and give quality scores that reflect human subjective visual perception. Furthermore, they cannot deal with spectral fidelity, which is important for the interpretation of HSIs.
In this study, we propose a no-reference quality assessment method for HSIs. A HSI possesses statistical properties that are sensitive to distortion; deviations of these statistics from their regular counterparts reflect the extent of the distortion, so these statistics can be extracted as quality-sensitive features. By analyzing the statistical properties of pristine and distorted HSIs, we extract multiple quality-sensitive features in both the spectral and spatial domains. After integrating all these features, we learn a multivariate Gaussian (MVG) model of the features from the pristine hyperspectral training dataset. The learned MVG is treated as a benchmark and compared with the MVG model fitted on the reconstructed HSI; the distance between the two MVG models is computed as the quality measure, with a high value representing low quality. To apply this method, we partition the reconstructed HSI into local blocks and measure the image quality for each block. The final quality score of the reconstructed HSI is obtained by average pooling.
This paper makes four contributions. Firstly, we propose the first no-reference assessment method for hyperspectral images. Our method does not require any reference image or down-sampling of the original image, which makes it well suited to practical applications. Secondly, in order to exploit both the spectral and spatial information for quality assessment, in addition to off-the-shelf spatial features, we analyze the statistical properties of the spectral domain, design quality-sensitive features for it, and integrate them with the spatial features to form a joint spectral-spatial quality-sensitive feature vector. Thirdly, compared with current color image assessment methods, our method can also blindly assess spectral fidelity. Finally, we verify the potential of our method as a HSI assessment tool by testing it on several real HSIs reconstructed by state-of-the-art super-resolution methods.
The remainder of this paper is organized as follows. In Section 2, we analyze the statistical properties of HSIs and extract quality-sensitive features. The methodology for computing the quality score is given in Section 3. We present the experimental results and discuss them in Sections 4 and 5, respectively. Conclusions are drawn in Section 6.
Quality-Sensitive Statistics Features
An image possesses statistics that deviate from their regular counterparts under distortion; extracting these statistics as features and measuring their deviations makes it possible to assess a HSI without any reference [17]. Previous quality-sensitive statistics features designed for color images mainly focus on the spatial domain [18][19][20][21][22]. In order to exploit the spectral correlation of a HSI, we also need to extract quality-sensitive features from the spectral domain. In this section, we first analyze the statistical properties of the spectral domain and design a quality-sensitive spectral feature extraction method. Then, we demonstrate that off-the-shelf spatial features are effective for HSIs. By integrating the proposed spectral features and the spatial features, we form a joint spectral-spatial quality-sensitive feature vector.
Statistics Features in Spectral Domain
In this sub-section, spectral quality-sensitive features are proposed after analyzing the statistics in the spectral domain. We observe that the locally normalized spectra of a pristine HSI follow a Gaussian distribution, while those of distorted HSIs deviate from it. Given a pristine HSI I ∈ R M×N×L, we first apply local normalization to a spectrum s:

ŝ(λ) = (s(λ) − µ(λ)) / (σ(λ) + C),

where λ = 1, 2, ..., L is the spectral coordinate, and C is a constant to stabilize the normalization when the denominator tends to zero. In our experiments, C is set to 1. µ(λ) and σ(λ) are, respectively, the local mean and standard deviation:

µ(λ) = Σ_{k=−K..K} w_k s(λ + k), σ(λ) = ( Σ_{k=−K..K} w_k (s(λ + k) − µ(λ))² )^{1/2},

where w = {w_k | k = −K, −K + 1, ..., K} is a Gaussian weighting window and K determines the width of the window. The local normalization removes the local mean displacements and normalizes the local variance, and thus has a decorrelating effect. The locally normalized spectrum is more homogeneous than the original spectrum. After the local normalization, the spectra of a pristine HSI approximately have zero mean and unit variance.
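A minimal sketch of this spectral normalization; the Gaussian window standard deviation of K/2 is an assumption, while C and K follow the settings quoted later in the text:

```python
import numpy as np

def normalize_spectrum(s, K=3, C=1.0):
    """Locally normalize a spectrum s(lambda) with a Gaussian weighting window.

    s_hat(l) = (s(l) - mu(l)) / (sigma(l) + C), where mu and sigma are the
    windowed local mean and standard deviation.
    """
    k = np.arange(-K, K + 1)
    w = np.exp(-0.5 * (k / (K / 2.0)) ** 2)  # assumed window std of K/2
    w /= w.sum()
    mu = np.convolve(s, w, mode='same')
    # Windowed variance about the local mean: E_w[s^2] - mu^2.
    var = np.convolve(s * s, w, mode='same') - mu ** 2
    sigma = np.sqrt(np.maximum(var, 0.0))
    return (s - mu) / (sigma + C)
```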
We crop a sub-image from the AVIRIS data [23] and apply the above local normalization to its spectra, as shown in Figure 1. Noise and blurring are common effects of distortion in HSIs [24][25][26], so we add noise to the pristine HSI or blur it to simulate distorted HSIs. Figure 2 shows the sub-images with different levels of added noise (Gaussian noise) and blurring (average filtering). We plot the histograms of all the spectra in the sub-image in Figure 3. It is observed that the distribution of the locally normalized spectra of the pristine HSI follows a Gaussian distribution with zero mean, while the locally normalized spectra of the distorted HSIs deviate from it. There are two interesting findings in Figure 3. Firstly, each type of distortion modifies the distribution in its own way: with noise added, the distribution curve becomes flat and tends towards a uniform distribution, while when the HSI is blurred, the distribution curve becomes thin and tends towards a Laplacian distribution. Secondly, heavier distortion causes a greater modification of the distribution. Noise with standard deviation σ = 0.20 makes the distribution curve much flatter than noise with σ = 0.05, and a 5 × 5 blurring kernel generates a narrower bell-shaped curve than a 3 × 3 kernel.
Therefore, some statistical properties in the spectral domain are modified by distortion, and measuring the changes in these statistics makes it possible to assess the spectral distortion. The generalized Gaussian distribution (GGD) can be used to capture the statistical changes between pristine and distorted HSIs. The density of a GGD with zero mean is

f(x; α, β) = (α / (2β Γ(1/α))) exp(−(|x|/β)^α), with β = σ (Γ(1/α)/Γ(3/α))^{1/2},

where Γ(·) is the gamma function, α and β represent the shape parameter and scale parameter, respectively, and σ is the standard deviation. The GGD model can broadly describe the statistics of multiple distributions: it reduces to a Laplacian distribution when α = 1 and to a Gaussian distribution when α = 2, and tends to a uniform distribution as α approaches infinity. When distortion is introduced, the locally normalized spectra deviate from a Gaussian distribution and tend towards a uniform-like or Laplacian-like distribution, all of which can be captured by a GGD model. The statistics of a GGD model are described by its parameters, so we select [α, β] as the spectral quality-sensitive features, which can be estimated using the moment-matching algorithm [17]. To show that our extracted feature is sensitive to image quality, we randomly crop 200 pristine sub-images of size 64 × 64 × 224 from the AVIRIS dataset [23]. Then, we introduce different types of distortion to each sub-image. After applying the local normalization to the spectra of each sub-image, we fit the histogram of the spectra in each sub-image with the GGD model. The extracted features [α, β] are plotted in Figure 4. As shown in Figure 4, features belonging to the same distortion form a cluster. It is easy to separate different distortions in the feature space, which demonstrates the sensitivity of the extracted features to image quality.
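The moment-matching estimate of [α, β] can be sketched as follows; it matches the sample ratio E[x²]/E[|x|]² against the corresponding analytic ratio of the GGD over a table of candidate shapes, in the spirit of [17]:

```python
import numpy as np
from scipy.special import gamma

def estimate_ggd(x):
    """Moment-matching estimate of the GGD shape alpha and scale beta.

    Matches rho = E[x^2] / E[|x|]^2 against
    r(a) = Gamma(1/a) Gamma(3/a) / Gamma(2/a)^2 over a grid of shapes.
    """
    alphas = np.arange(0.2, 10.0, 0.001)
    r = gamma(1.0 / alphas) * gamma(3.0 / alphas) / gamma(2.0 / alphas) ** 2
    rho = np.mean(x ** 2) / np.mean(np.abs(x)) ** 2
    alpha = alphas[np.argmin(np.abs(r - rho))]
    sigma = np.sqrt(np.mean(x ** 2))
    beta = sigma * np.sqrt(gamma(1.0 / alpha) / gamma(3.0 / alpha))
    return alpha, beta

# Sanity check on Gaussian samples: alpha should be close to 2.
x = np.random.randn(100000)
print(estimate_ggd(x))
```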
Statistics Features in Spatial Domain
Image quality distortion is reflected in local image structures [17], image gradients [19], and multi-scale, multi-orientation decompositions [20]. To exploit this information, we adopt multiple types of spatial features (originally proposed for color images [27]) and verify their effectiveness on HSIs in this sub-section.
Statistics of Panchromatic Image
A hyperspectral image often contains a large number of continuous spectral bands with narrow bandwidths, so extracting spatial features band-by-band would be time-consuming and would produce a huge number of redundant features. In order to extract features from the spatial domain in a fast and simple way, we analyze the statistics and extract the spatial features on a synthesized panchromatic image, which is simulated by [27]

P = w_r I_r + w_g I_g + w_b I_b,

where I_r, I_g, and I_b are the spectral bands of the HSI whose band centers correspond to the red, green, and blue bands. In the experiments, the weights w_r, w_g, and w_b are set to 0.06, 0.63, and 0.27, as suggested in [27]. The simulated panchromatic image of the HSI in Figure 1 is shown in Figure 5. The structural and textural information contained in the panchromatic image is exploited when extracting the spatial quality-sensitive features. Similar to the spectral domain, we apply local normalization to the simulated panchromatic image:

P̂(i, j) = (P(i, j) − µ(i, j)) / (σ(i, j) + C),

where i and j are the spatial coordinates, and µ(i, j) and σ(i, j) are the local mean and standard deviation, respectively, computed with a Gaussian weighting window w = {w_{s,t} | s = −S, ..., S, t = −T, ..., T}, whose size is determined by S and T [17]. After local normalization, the values of most pixels are decorrelated and close to zero; the locally normalized result exhibits a homogeneous appearance with a few residual edges, as shown in Figure 6a. In Figure 6b, we present the histograms of the locally normalized panchromatic images simulated from pristine and distorted HSIs. It has been observed that the locally normalized panchromatic image follows a Gaussian distribution with zero mean, while it deviates when distortion exists [17,27]. The pattern of the curves in Figure 6b is similar to Figure 3, and the statistics of the panchromatic image are modified by distortion in a similar way to what was observed in the spectral domain. We again use the GGD model to measure the difference in statistics between pristine and distorted HSIs; the shape and scale parameters of the GGD model are used as spatial quality-sensitive features.
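A minimal sketch of the panchromatic simulation and the 2D local normalization, assuming hsi is an M × N × L array and r, g, b are the indices of the bands nearest the red, green, and blue centers; the Gaussian window width is an assumption:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalized_panchromatic(hsi, r, g, b, C=1.0, sigma_w=7.0 / 6.0):
    """Simulate the panchromatic image and apply local (MSCN-style) normalization."""
    # Weighted combination of the red, green, and blue bands [27].
    P = 0.06 * hsi[:, :, r] + 0.63 * hsi[:, :, g] + 0.27 * hsi[:, :, b]
    mu = gaussian_filter(P, sigma_w)                      # local mean
    var = gaussian_filter(P * P, sigma_w) - mu ** 2       # local variance
    sigma = np.sqrt(np.maximum(var, 0.0))
    return (P - mu) / (sigma + C)
```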
Statistics of Texture
The quality of an image can also be revealed by the quality of its texture, which should be exploited for quality assessment. Log-Gabor filters decompose an image at multiple scales and orientations and can thus capture textural information. The textures of a HSI are captured in the panchromatic image, so we apply Log-Gabor filters to the simulated panchromatic image. The Log-Gabor filter is expressed in the frequency domain as [27]

G(ω, θ) = exp(−(log(ω/ω_0))² / (2σ_r²)) exp(−(θ − θ_j)² / (2σ_θ²)),

where θ_j = jπ/J, j = {0, 1, ..., J − 1} is the orientation; J is the number of orientations; ω_0 is the center frequency; and σ_r and σ_θ determine the radial and angular bandwidths of the filter, respectively. Applying Log-Gabor filters with N center frequencies and J orientations to the simulated panchromatic image generates 2NJ response maps {(e n,j, o n,j) | n = 0, ..., N − 1, j = 0, ..., J − 1}, where e n,j and o n,j represent the real part and the imaginary part of the response, respectively.
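A minimal frequency-domain construction of one such filter and its even/odd responses, assuming the polar-separable form given above; the default parameter values follow those quoted in Section 4:

```python
import numpy as np

def log_gabor_response(img, omega0=0.417, theta_j=0.0, sigma_r=0.60, sigma_theta=0.71):
    """Even (real) and odd (imaginary) Log-Gabor responses of a 2D image."""
    M, N = img.shape
    fy = np.fft.fftfreq(M)[:, None]
    fx = np.fft.fftfreq(N)[None, :]
    omega = np.sqrt(fx ** 2 + fy ** 2)
    omega[0, 0] = 1.0                      # avoid log(0); the DC gain is zeroed below
    theta = np.arctan2(fy, fx)
    radial = np.exp(-(np.log(omega / omega0)) ** 2 / (2.0 * sigma_r ** 2))
    radial[0, 0] = 0.0                     # Log-Gabor filters have no DC component
    dtheta = np.angle(np.exp(1j * (theta - theta_j)))  # wrapped angle difference
    angular = np.exp(-dtheta ** 2 / (2.0 * sigma_theta ** 2))
    response = np.fft.ifft2(np.fft.fft2(img) * radial * angular)
    return response.real, response.imag
```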
In Figure 7a, we present a response map o 1,3 (N = 3, J = 4) as an example. It shows that the texture and edges of the panchromatic image are extracted by the Log-Gabor filter. In order to analyze the statistical difference of the Log-Gabor filtering responses between pristine and distorted HSIs, we take the response map o 1,3 as an example and plot its histograms under different distortions in Figure 7b. It is clear that different distortions lead to different distributions of the Log-Gabor filtering response, so the distribution of the Log-Gabor response can be used as an indicator of distortion. We again use the GGD model to describe the distributions of the Log-Gabor responses e n,j and o n,j; the shape and scale parameters of the fitted GGD models form another type of spatial quality-sensitive features.
In order to further exploit the texture information, we also analyze the statistics of the directional gradients of the Log-Gabor filtering response maps. The vertical gradient of o 1,3 is shown in Figure 8a, and its histograms under different distortions are given in Figure 8b. The distribution of the directional gradient is modified by distortion in a similar way to the Log-Gabor response map; therefore, the GGD model is used to describe the distributions of the directional gradients (both horizontal and vertical) of e n,j and o n,j [19,27]. The shape and scale parameters of the fitted GGD models form another set of spatial quality-sensitive features.
In addition to the directional gradients, the gradient magnitude of the Log-Gabor filtering response map is also analyzed. The gradient magnitude of o 1,3 is shown in Figure 9a, and its histograms under different distortions are presented in Figure 9b. The histogram follows a Weibull distribution [27,28]:

f(x; λ, k) = (k/λ) (x/λ)^{k−1} exp(−(x/λ)^k), x ≥ 0,

where λ is the scale parameter and k is the shape parameter of the Weibull model. Since the distribution of the gradient magnitude can be fitted by the Weibull model, alterations of the fitted Weibull model can be used as an indicator of the degree of distortion. Thus, the parameters λ and k of the fitted Weibull model are used as quality-sensitive features.
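A minimal sketch of fitting the Weibull model to the gradient magnitude of a response map, using maximum likelihood with the location fixed at zero (a moment-based fit would also work):

```python
import numpy as np
from scipy.stats import weibull_min

def weibull_features(resp):
    """Fit [k, lambda] of a Weibull model to the gradient magnitude of resp."""
    gy, gx = np.gradient(resp)
    mag = np.sqrt(gx ** 2 + gy ** 2).ravel()
    # weibull_min returns (shape, loc, scale); fix the location at zero.
    k, _, lam = weibull_min.fit(mag[mag > 0], floc=0)
    return k, lam
```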
To demonstrate that the extracted features above are sensitive to image quality, we visualize the features as in Figure 4. We randomly crop 200 pristine sub-images of size 64 × 64 × 224 from the AVIRIS dataset and introduce different kinds of distortion to each sub-image. We apply Log-Gabor filters to each sub-image, then fit the histograms of o 1,3 and of the vertical gradient of o 1,3 with the GGD model, and fit the histogram of the gradient magnitude of o 1,3 with the Weibull model. The parameters of the fitted models are used as features, and the feature of each sub-image is plotted as a point. As shown in Figures 10-12, even though there is some overlap between different kinds of distortion, most features belonging to the same distortion tend to group into the same cluster. Different distortions occupy different regions in the feature space, which demonstrates the sensitivity of the extracted features to image quality.

In order to extract joint features that contain both structural and spectral information, we need to integrate the spatial features with the proposed spectral features. All the features extracted in the spatial domain are stacked and then concatenated with the spectral features, giving a joint spectral-spatial feature vector that is sensitive to image quality, as shown in Figure 13.
Quality Assessment: From Features to Score
If we can extract the spectral-spatial features from the pristine HSI training set and from distorted HSIs using the method in Section 2, the distortion of a HSI can be quantified by computing the distance of the quality-sensitive features between the training set and the distorted HSI. In this work, we adopt the strategy of multivariate Gaussian (MVG) learning originally proposed in [18]; the flow chart is shown in Figure 14. In the training stage, there are three main steps: collecting training hyperspectral data, extracting quality-sensitive features, and learning the MVG distribution.
A set of pristine HSIs is firstly collected as the training set. Noisy bands and water absorption bands are removed. Different local image regions contain different structures and make different contributions to the overall image quality [18,27]. In order to exploit the local structural information of the image, we divide each HSI into non-overlapping local 3D blocks. Quality-sensitive features are extracted from each block. By stacking all the spectral and spatial quality-sensitive features, a feature vector x ∈ R d×1 is extracted from each block. Supposing there are n blocks in the training set in total, a feature matrix X = [x 1, x 2, ..., x n] ∈ R d×n is obtained from the training set.
There are correlations among the different kinds of features; for example, the directional gradient and the gradient magnitude are highly correlated. In order to remove the correlation and reduce the computational burden, a PCA transform is applied to the feature matrix X, giving a projection matrix Φ and a dimension-reduced feature matrix

X' = Φ^T X,

where X' = [x' 1, x' 2, ..., x' n] ∈ R d'×n is the dimension-reduced feature matrix of the training data. Each feature vector in X' is extracted from a different block, and there is no overlap among the blocks. Thus, the feature vectors can be assumed to be independent of each other, and all the feature vectors should conform to a common multivariate Gaussian model [21,22]. The MVG model can be learned from X' with the standard maximum likelihood estimation algorithm:

f(x') = (1 / ((2π)^{d'/2} |Σ|^{1/2})) exp(−(1/2)(x' − µ)^T Σ^{−1} (x' − µ)),

where x' ∈ R d'×1 is a feature vector after dimension reduction, and µ and Σ are the mean vector and covariance matrix, respectively. Since there is no distortion in the training set, the normal distribution of the features is represented by the learned MVG model, which serves as a benchmark for assessing distorted images [18]. When distortion exists in a HSI, the distribution of its feature vectors deviates from the learned MVG model. The deviation can be measured, and a quality score of the distorted image can be computed.
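Putting the training stage together, a minimal sketch, assuming X is the d × n feature matrix already extracted from the pristine blocks:

```python
import numpy as np

def train_benchmark(X, energy=0.90):
    """Learn the PCA projection and benchmark MVG from pristine features X (d x n)."""
    mean_x = X.mean(axis=1, keepdims=True)
    U, S, _ = np.linalg.svd(X - mean_x, full_matrices=False)
    # Keep enough leading principal components to preserve the given energy.
    keep = np.searchsorted(np.cumsum(S ** 2) / np.sum(S ** 2), energy) + 1
    Phi = U[:, :keep]                       # projection matrix (d x d')
    Xp = Phi.T @ X                          # reduced features; MVG absorbs the offset
    mu = Xp.mean(axis=1)                    # MVG mean (maximum likelihood)
    Sigma = np.cov(Xp)                      # MVG covariance
    return Phi, mu, Sigma
```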
For each testing HSI, we divide it into blocks of the same size as those of the training data. After extracting the quality-sensitive features and stacking them into a feature vector as in the training stage, we obtain a feature matrix Y = [y 1, y 2, ..., y m] ∈ R d×m, where m is the number of blocks in the testing image. With the pre-learned projection matrix Φ, the dimension-reduced feature matrix is

Y' = Φ^T Y,

where Y' = [y' 1, y' 2, ..., y' m] ∈ R d'×m is the dimension-reduced feature matrix of the testing image. Different blocks make different contributions to the quality of the testing image, so we compute a quality score for each local block. Each block is fitted by a MVG model (µ i, Σ i) and then compared with the learned benchmark MVG model (µ, Σ).
It should be noted that the MVG model of each block could be estimated from its neighboring blocks, but this is complex and time-consuming. In this work, µ i and Σ i of the i-th block's features are simply approximated by y' i and by the covariance matrix of Y', denoted as Σ'. A modified Bhattacharyya distance is used to compute the distance between the benchmark MVG and the fitted MVG of the i-th block [27]:

D_i = ( (µ − µ i)^T ((Σ + Σ i)/2)^{−1} (µ − µ i) )^{1/2}.

The distance measures the disparity between the statistics of the i-th block and the pristine training data, and is used as the measurement of image quality: the smaller the distance, the better the image quality. The quality score of the whole image is computed by averaging the distances over all the blocks.
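The testing stage then reduces to the following sketch, assuming Y holds the d × m block features of the reconstructed HSI and (Φ, µ, Σ) come from the training stage:

```python
import numpy as np

def quality_score(Y, Phi, mu, Sigma):
    """No-reference quality score: average modified Bhattacharyya distance."""
    Yp = Phi.T @ Y                          # dimension-reduced testing features
    Sigma_t = np.cov(Yp)                    # shared covariance of the testing blocks
    Minv = np.linalg.pinv(0.5 * (Sigma + Sigma_t))
    d = Yp - mu[:, None]                    # each block's mean approximated by y'_i
    dists = np.sqrt(np.einsum('ij,ik,kj->j', d, Minv, d))
    return dists.mean()                     # average pooling over all blocks
```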
Experiment Setting and Data
To demonstrate the effectiveness of the proposed assessment method, we test whether the proposed quality scores are consistent with reference-based indices. We first apply five state-of-the-art super-resolution methods to simulated and real HSIs; the quality scores of the reconstructed HSIs are then computed and compared with reference-based evaluation indices to check for consistency.
The following super-resolution methods, selected for their good performance in both reconstruction accuracy and speed, are used to reconstruct the HSIs: SUn, BayesSR, SSR, CNMF, and sparseFU.
Two datasets are used in the experiments. The first dataset was acquired by the Airborne Visible Infrared Imaging Spectrometer (AVIRIS) sensor [23], which provides 224 spectral bands in the range of 400-2500 nm. This dataset includes four images collected over the Moffett Field, Cuprite, Lunar Lake, and Indian Pines sites, with dimensions of 753 × 1923, 614 × 2207, 781 × 6955, and 614 × 1087, respectively. The spatial resolution of Moffett Field, Cuprite and Lunar Lake is 20 m, and that of Indian Pines is 4 m. After discarding the water absorption bands and noisy bands, 162 bands remain. The second dataset was acquired by the airborne Headwall Hyperspec-VNIR-C (HyperspecVC) sensor over agricultural and urban areas in Chikusei, Ibaraki, Japan [29]. It was made public by Dr. Naoto Yokoya and Prof. Akira Iwasaki from the University of Tokyo. The dataset has 128 bands in the range of 363-1018 nm, a size of 2517 × 2335, and a spatial resolution of 2.5 m. After discarding noisy bands, 125 bands are used in the experiments.
We crop two sub-images from each dataset as testing images; the rest of each dataset is treated as pristine data and used in the training stage. We then apply the five super-resolution methods to the testing images and evaluate the enhanced sub-images using the proposed assessment method. The parameters of the algorithm are set as follows. The spatial size of each block is 64 × 64, and its spectral size equals the number of bands. The window sizes for local normalization in the spectral and spatial domains are set as K = 3 and S = T = 2, respectively. The feature dimension after PCA is determined by the number of leading principal components (PCs) that preserve at least 90% of the information in the original input. The parameters of the Log-Gabor filtering are adopted from [27]: N = 3, J = 4, σ r = 0.60, σ θ = 0.71, with center frequencies ω 0 = 0.417, 0.318, and 0.243 for the three scales. All the parameters of the super-resolution methods are tuned to achieve the best reconstruction results.
Reference-Based Evaluation Indices
The peak signal-to-noise ratio (PSNR), structural similarity index measurement (SSIM) [21], feature similarity index measurement (FSIM) [22], and spectral angle mean (SAM) are representative quantitative measures of image quality and have been widely applied to evaluate enhancement methods. They are selected for comparison with the proposed quality score. PSNR computes the mean square error of the reconstructed HSI, SSIM and FSIM calculate the similarity between the reconstructed HSI and the reference, and SAM measures the spectral distortion. Mathematically, the PSNR of the l-th band is computed as

PSNR_l = 10 log_10 ( I²_max,l / MSE_l ), MSE_l = (1/(MN)) Σ_{i,j} ( I^ref_l(i, j) − I^rec_l(i, j) )²,

where I max,l is the maximum of the image on the l-th band, I^ref_l and I^rec_l are the reference image and reconstructed image on the l-th band, and M and N are the numbers of rows and columns. The SSIM of the l-th band is computed as [21]

SSIM_l = (4 σ^{ref,rec}_l µ^ref_l µ^rec_l) / ( ((σ^ref_l)² + (σ^rec_l)²) ((µ^ref_l)² + (µ^rec_l)²) ),

where µ and σ² denote the mean and variance of the corresponding band and σ^{ref,rec}_l is their covariance. Our score measures the extent of distortion in the reconstructed HSI, with a higher score representing lower quality, which should correspond to, e.g., a lower PSNR. In each table and figure, the different methods are arranged in ascending order of PSNR from left to right. As shown in the tables and figures, the corresponding scores of the proposed method are in descending order from left to right, which means that our no-reference score is consistent with PSNR in assessing the reconstructed HSIs. We find that the no-reference score is not consistent with the SSIM and FSIM of BayesSR, as shown in Figure 17b,c. This is caused by the inconsistency of SSIM and FSIM themselves, as neither is consistent with the PSNR of BayesSR. Nevertheless, our score is consistent with SSIM and FSIM in most cases. PSNR, SSIM, and FSIM are the most common reference-based indices for evaluating image reconstruction, and the consistency between our scores and these three indices indicates that the proposed assessment method has potential as a no-reference measure for evaluating spatially enhanced HSIs.
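For completeness, the per-band PSNR and the SAM used here can be sketched as:

```python
import numpy as np

def psnr_band(ref, rec):
    """PSNR (dB) of one band; ref and rec are M x N arrays."""
    mse = np.mean((ref - rec) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)

def sam_mean(ref, rec, eps=1e-12):
    """Mean spectral angle (degrees) between M x N x L reference and reconstruction."""
    num = np.sum(ref * rec, axis=2)
    den = np.linalg.norm(ref, axis=2) * np.linalg.norm(rec, axis=2) + eps
    ang = np.arccos(np.clip(num / den, -1.0, 1.0))
    return np.degrees(ang.mean())
```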
It should be noted that the result of SSR on Chikusei-2 is inconsistent with the other indices. In Table 4, the PSNR values of SSR and SUn are 30.8350 dB and 35.4586 dB, showing that the quality of SSR is lower than that of SUn. However, the proposed method obtained scores of 23.3912 for SSR and 23.7168 for SUn, suggesting the former has better quality. If SSR were evaluated correctly, its score should be higher than that of SUn and slightly lower than that of sparseFU. This inconsistency may be attributed to the limited number of training samples. The AVIRIS dataset contains HSIs acquired over multiple sites, so more blocks can be extracted for training the benchmark MVG model, which leads to good consistency in the evaluations of Indian Pines and Moffett Field, as shown in Tables 1 and 2. However, the HyperspecVC data were taken only over Chikusei; the number of training blocks is smaller than for the AVIRIS images, which may explain the failure in evaluating SSR in Table 4.
Spectral Distortion Assessment
Spectral fidelity is of high importance for the interpretation of HSIs, so assessing spectral distortion is necessary for reconstructed HSIs. The spectral angle mean (SAM), a reference-based spectral assessment index, computes the disparity between the spectra of the original and reconstructed HSIs. In this sub-section, we compute the spectral distortion without a reference using the proposed method. Quality-sensitive features are extracted from both the spectral and spatial domains in our method; if we extract quality-sensitive features only from the spectral domain and then train the benchmark MVG model, the quality score measures the spectral deviation of the reconstructed HSI from the pristine HSI, which can be treated as a measurement of spectral distortion. The spectral quality scores of the reconstructed HSIs are given in Tables 5-8 and plotted as curves in Figure 23. The different methods are arranged in descending order of SAM in the tables. As shown in Tables 5 and 6, the corresponding spectral quality scores are in descending order as well, which demonstrates that our no-reference spectral quality score is consistent with SAM on the AVIRIS data. However, on Chikusei-1, the SAM values of SSR and sparseFU are 3.1424° and 2.4779°, indicating that SSR has the larger spectral distortion, while the spectral scores of SSR and sparseFU are 1.4210 and 1.4322, suggesting that sparseFU has the larger distortion. Similarly, the spectral score of SSR is inconsistent with SAM on Chikusei-2. The smaller number of training samples that can be extracted from this dataset, the same reason suggested in Section 4.3, may have caused this inconsistency. Nevertheless, most of our spectral quality scores are consistent with SAM on the HyperspecVC data.
Analyzing Each Type of Spatial Features
There are four types of statistics features extracted from the spatial domain: those based on the histogram of the normalized panchromatic image, the histograms of the Log-Gabor filtering responses, the histograms of the directional gradients of the Log-Gabor filtering responses, and the histograms of the gradient magnitudes of the Log-Gabor filtering responses. In order to analyze their contributions separately, we extract the spectral features and incorporate them with only one type of spatial features at a time, then train the benchmark MVG and compute the quality score. We report the quality scores in Tables 9-12 and plot them as curves in Figure 24.

We can draw two conclusions from the results. Firstly, integrating multiple types of spatial features performs better than using a single type. When only one type of spatial features is extracted, the curve is not monotonically descending, which means that some quality scores are not consistent with the reference-based indices, as shown in Figure 24. When all the spatial features are extracted, the scores are consistent with the reference-based indices in most cases, as presented in Section 4.3. Secondly, among all these spatial features, those based on Log-Gabor filtering are the most efficient. As shown in Figure 24, the curves of the Log-Gabor features are generally in descending order, while other features, such as those based on the locally normalized panchromatic image, do not lead to a satisfactory assessment, as shown in the tables. This phenomenon is reasonable because Log-Gabor filters describe texture, edges, and details, which play a key role in reflecting image quality [19,20].
Robustness Analysis Over Training Data
To further investigate the robustness of our method, we design an experiment varying the training data, in which the benchmark MVG model is trained on data from one sensor and used to evaluate enhanced data from another sensor. We train the benchmark MVG model on HyperspecVC data and then, with the trained model, evaluate the reconstructed images from AVIRIS data. We also compute the spectral distortion by training the benchmark MVG with only the spectral features. The quality scores are presented in Tables 13 and 14, the spectral scores in Tables 15 and 16, and the curves of the scores are plotted in Figures 25 and 26.
The quality scores of SUn, BayesSR, SSR, and CNMF are consistent with PSNR on both Indian Pines and Moffett Field, but sparseFU cannot be assessed correctly. On Indian Pines, the spectral scores of SUn and SSR are not consistent with SAM: the SAM values of SUn and SSR are 4.2875° and 4.1631°, respectively, showing that the spectral distortion of SUn is larger than that of SSR, while our spectral score indicates the opposite. On Moffett Field, except for sparseFU, the spectral scores are consistent with SAM. The above inconsistency may be caused by the large difference between the training datasets, as the HyperspecVC and AVIRIS data differ considerably in spatial resolution and number of spectral bands. Although there is minor inconsistency between the scoring method and the conventional reference-based indices, most of the super-resolution methods can still be assessed correctly, which demonstrates the robustness of the proposed method to some extent.
All the experiments were implemented in Matlab 2014a on an Intel Core PC (3.10 GHz, 12 GB RAM). The training of our method takes about 30 min, while assessing a reconstructed HSI takes about 2 min. All the codes for the super-resolution methods were provided by their authors.
Discussion
From the experiments, we can draw the following conclusions:

1. The spectral features based on locally normalized spectra are more effective than a single type of spatial features. If we only use the spectral features, the quality scores reflect the spectral distortion of the reconstructed HSIs and are consistent with SAM, as shown in Section 4.4. However, if we use only a single type of spatial features, the quality scores of some reconstructed HSIs are not consistent with PSNR, as shown in Section 4.5. The effectiveness of the spectral features for distortion characterization can be further verified by comparing Figure 4b with Figures 10-12, where spectral features belonging to the same distortion tend to form clusters that are more compact and more separable than those of the spatial features.

2. Texture information is necessary for reflecting image quality, which is verified by the effectiveness of the features based on the Log-Gabor responses and their gradients. We have tested several types of spatial features to characterize the spatial quality of the reconstructed images. By comparing the performance of the different types of spatial features, it is found that features based on statistics of the Log-Gabor responses and their gradients often lead to better results than statistics of the locally normalized panchromatic image, as shown in Section 4.4. It is worth noting that some other filters, such as wavelets and ridgelets [30,31], are also effective in texture analysis; extracting quality-sensitive features using these filters may lead to better results.
3. Integrating multiple features is helpful for enhancing the performance. Multiple features are extracted from the spectral and spatial domains and incorporated in the proposed method. By comparing the results in Sections 4.3-4.5, we find that if we only exploit a single type of features, some reconstructed HSIs cannot be assessed correctly, while if multiple features are exploited, most of the reconstructed HSIs can be assessed correctly, which means that these features are complementary to each other in predicting image quality. Additional statistics features could also be integrated in our framework to obtain better results.

4. The benchmark MVG is robust with respect to the training data. In this study, the training data come from the same sensor as the testing data. When we train the benchmark MVG model on HyperspecVC data and test it on AVIRIS data, we observe that, even though there is a large difference in the spatial and spectral configurations of the two sensors, we obtain comparable results: most of the scores are consistent with PSNR and SAM on the AVIRIS data. In real applications, if training data from the same sensor are not sufficient, training the benchmark MVG model with data from other sensors may be an alternative option.
5. The proposed method has the potential to be applied in practice. The speed of our assessment method is fast: it takes less than two minutes to evaluate a reconstructed HSI in the experiments. In addition, the proposed assessment method is fully blind: neither the reference image nor information about the distortion type in the HSI needs to be known. All these characteristics make it possible to apply the method in practice.
However, it should be noted that there are still some questions that need to be studied further in the future:
1. Researching models that are more efficient in representing the quality-sensitive features. In this study, we learn MVG models to represent the quality-sensitive features of pristine and reconstructed HSIs. The MVG model is simple and fast to implement, but it may not be the most efficient one for feature representation. Some other advanced machine learning models, such as sparse representation [32], which were not used in this work, could be more efficient. If we exploit such models to represent the quality-sensitive features, better performance may be obtained.
2. Determining the optimal number of features. According to our experiments, integrating multiple features is helpful. In this study, one type of spectral features and four types of spatial features are exploited. However, if more quality-sensitive features are exploited in the future, more training samples will be required and the computational burden will increase. In order to balance the computational burden and the performance, we need to determine the optimal number of features.
Conclusions
We propose a no-reference quality assessment method for reconstructed HSIs. Image distortion can be characterized by the statistics of an HSI, and measuring the deviation of these statistics makes it possible to assess the image quality of the HSI. Based on this principle, the statistical properties of pristine and distorted HSIs are analyzed, and then multiple statistics that are sensitive to image quality are extracted as features from both the spectral and spatial domains. An MVG model is built for the features extracted from pristine training data and treated as the benchmark. A reconstructed HSI is divided into several blocks, quality-sensitive features are extracted from each block, and an MVG model of the features is fitted for each block. The quality score of each block is computed by measuring the distance between the benchmark and the fitted MVG, and the overall quality score is obtained by average pooling. We apply five state-of-the-art super-resolution methods to AVIRIS and HyperspecVC data and then compute the quality scores of the reconstructed HSIs. Our quality scores show good consistency with PSNR, SSIM, FSIM, and SAM, which demonstrates the effectiveness and potential of the proposed no-reference assessment method.
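As a minimal sketch of this scoring pipeline, the snippet below fits a per-block MVG and pools the distances to the benchmark; it assumes a NIQE-style distance between Gaussians, since the paper's exact distance formula is not reproduced here.

```python
import numpy as np

def mvg_distance(bench, fitted):
    # NIQE-style distance between two multivariate Gaussians (mu, cov).
    (mu1, cov1), (mu2, cov2) = bench, fitted
    d = mu1 - mu2
    pooled = (cov1 + cov2) / 2.0
    return float(np.sqrt(d @ np.linalg.pinv(pooled) @ d))

def quality_score(benchmark, block_features):
    # benchmark: (mu, cov) fitted on pristine training features.
    # block_features: one (n_i, dim) feature array per image block.
    scores = []
    for feats in block_features:
        fitted = (feats.mean(axis=0), np.cov(feats, rowvar=False))
        scores.append(mvg_distance(benchmark, fitted))
    return float(np.mean(scores))  # average pooling over blocks
```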
Figure 1. Illustration of the local normalization of a spectrum: (a) the 20th band (589.31 nm) of a pristine sub-image cropped from AVIRIS data, of size 256 × 256; (b) spectral curves selected from two pixels; and (c) the locally normalized spectra.

Figure 3. Histograms of locally normalized spectra of a pristine hyperspectral image (HSI) and distorted HSIs. Firstly, each type of distortion modifies the distribution in its own way: with noise added, the distribution curve becomes flat and tends towards a uniform distribution, whereas when the HSI is blurred, the curve becomes thin and tends towards a Laplacian distribution. Secondly, heavier distortion causes greater modification of the distribution (e.g., noise with standard deviation σ = 0.20).
Figure 4. (a) The AVIRIS data of different scenes, from which 200 sub-images are randomly cropped; and (b) visualization of the spectral quality-sensitive features. Each point represents the feature of a sub-image; each color represents a type of distortion.
Figure 6. (a) The local normalization of the pristine panchromatic image in Figure 5; and (b) histograms of locally normalized panchromatic images under different kinds of distortions.
J is the number of orientations; ω₀ is the center frequency; and σ_r and σ_θ determine the radial bandwidth and angular bandwidth of the filter, respectively. Applying Log-Gabor filters with N center frequencies and J orientations to the simulated panchromatic image generates 2NJ response maps e_{n,j} and o_{n,j}, which represent the real part and the imaginary part of the response, respectively. In Figure 7a, we present the response map o_{1,3} (N = 3, J = 4) as an example, which shows that the texture and edges of the panchromatic image are extracted by the Log-Gabor filter. In order to analyze the statistical difference of the Log-Gabor filtering responses between the pristine and the distorted HSIs, we take the response map o_{1,3} as an example and plot its histograms under different distortions in Figure 7b. It is clear that different distortions lead to different distributions of the Log-Gabor filtering response, so the distribution of the Log-Gabor response can be used as an indicator of distortion. We also use the GGD model to describe the distributions of the Log-Gabor responses e_{n,j} and o_{n,j}; the shape and scale parameters of the fitted GGD model form another type of spatial quality-sensitive features.
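A frequency-domain Log-Gabor filter bank of this kind can be sketched as follows; the parameter values (ω₀, σ_r, σ_θ) are illustrative placeholders, not the paper's settings.

```python
import numpy as np

def log_gabor_responses(img, N=3, J=4, w0=0.25, sigma_r=0.55, sigma_theta=0.4):
    # Returns {(n, j): (e_nj, o_nj)}: even/odd response maps per scale n
    # and orientation j, computed by filtering in the frequency domain.
    rows, cols = img.shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                    # avoid log(0) at the DC term
    theta = np.arctan2(fy, fx)
    F = np.fft.fft2(img)
    responses = {}
    for n in range(N):
        f0 = w0 / (2.0 ** n)              # center frequency of scale n
        radial = np.exp(-np.log(radius / f0) ** 2 / (2 * np.log(sigma_r) ** 2))
        radial[0, 0] = 0.0                # Log-Gabor has no DC component
        for j in range(J):
            angle0 = j * np.pi / J
            dtheta = np.arctan2(np.sin(theta - angle0), np.cos(theta - angle0))
            angular = np.exp(-dtheta ** 2 / (2 * sigma_theta ** 2))
            resp = np.fft.ifft2(F * radial * angular)
            responses[(n, j)] = (resp.real, resp.imag)
    return responses
```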
Figure 7. (a) Log-Gabor filtering response map o_{1,3} of the pristine panchromatic image in Figure 5; and (b) histograms of the Log-Gabor filtering response map o_{1,3} under different kinds of distortions.

Figure 8. (a) Vertical gradient of the Log-Gabor response map o_{1,3} of the pristine panchromatic image in Figure 5; and (b) histograms of the vertical gradient of o_{1,3} under different kinds of distortions.

Figure 9. (a) Gradient magnitude of the Log-Gabor response map o_{1,3} of the pristine panchromatic image in Figure 5; and (b) histograms of the gradient magnitude of o_{1,3} under different kinds of distortions.
We fit the histogram of the vertical gradient of o_{1,3} with the GGD model, and we fit the histogram of the gradient magnitude of o_{1,3} with the Weibull model. The parameters of the fitted models are used as features, and the feature of each sub-image is plotted as a point. As shown in Figures 10-12, even though there is some overlap between different kinds of distortions, most features belonging to the same distortion tend to group into the same cluster. Different distortions occupy different regions in the feature space, which demonstrates the sensitivity of the extracted features to image quality.
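The two fits can be sketched as below; the moment-matching GGD estimator is a standard choice and stands in for whatever fitting procedure the paper uses.

```python
import numpy as np
from scipy.special import gamma
from scipy.stats import weibull_min

def fit_ggd(x):
    # Moment-matching GGD fit; returns (shape alpha, scale sigma).
    x = np.asarray(x, dtype=float).ravel()
    sigma_sq = np.mean(x ** 2)
    rho = sigma_sq / (np.mean(np.abs(x)) ** 2)
    a_grid = np.arange(0.1, 6.0, 0.001)
    r = gamma(1 / a_grid) * gamma(3 / a_grid) / gamma(2 / a_grid) ** 2
    alpha = a_grid[np.argmin(np.abs(r - rho))]   # invert rho(alpha) by search
    return alpha, np.sqrt(sigma_sq)

def fit_weibull(x):
    # Maximum-likelihood Weibull fit for non-negative gradient magnitudes.
    shape, _, scale = weibull_min.fit(np.asarray(x, dtype=float).ravel(), floc=0)
    return shape, scale
```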
Figure 10. Visualization of the spatial quality-sensitive features extracted from the Log-Gabor response map o_{1,3}. Each point represents the feature of a sub-image; each color represents a type of distortion.

Figure 11. Visualization of the spatial quality-sensitive features extracted from the vertical gradient of the Log-Gabor response map o_{1,3}. Each point represents the feature of a sub-image; each color represents a type of distortion.

Figure 12. Visualization of the spatial quality-sensitive features extracted from the gradient magnitude of the Log-Gabor response map o_{1,3}. Each point represents the feature of a sub-image; each color represents a type of distortion.
Figure 13. Flow chart of the quality-sensitive feature extraction for each HSI.

Figure 14. Flow chart of the proposed HSI assessment method.
Figure 15. Consistency of our score and the reference-based indices on Indian Pines of AVIRIS data: (a) our score and PSNR; (b) our score and SSIM; and (c) our score and FSIM.

Figure 16. Consistency of our score and the reference-based indices on Moffett Field of AVIRIS data: (a) our score and PSNR; (b) our score and SSIM; and (c) our score and FSIM.

Figure 17. Consistency of our score and the reference-based indices on Chikusei-1 of HyperspecVC data: (a) our score and PSNR; (b) our score and SSIM; and (c) our score and FSIM.

Figure 18. Consistency of our score and the reference-based indices on Chikusei-2 of HyperspecVC data: (a) our score and PSNR; (b) our score and SSIM; and (c) our score and FSIM.
Figure 20. Reconstructed HSIs of the different super-resolution methods, shown in RGB (bands 35, 25, 15). The sub-image of size 128 × 128 × 162 is cropped from Moffett Field of AVIRIS data: (a) original sub-image; (b) result of sparseFU; (c) result of SUn; (d) result of BayesSR; (e) result of SSR; and (f) result of CNMF.
Figure 24. Curves of the quality scores when a single type of spatial features is used: (a) on Indian Pines; (b) on Moffett Field; (c) on Chikusei-1; and (d) on Chikusei-2.

Figure 25. Consistency of our score and PSNR with HyperspecVC data used for training: (a) on Indian Pines; and (b) on Moffett Field.

Figure 26. Consistency of our spectral score and SAM with HyperspecVC data used for training: (a) on Indian Pines; and (b) on Moffett Field.
Table 2. Comparison among PSNR, SSIM, FSIM, and our score on Moffett Field of AVIRIS data.

Table 5. Comparison between SAM and the spectral quality score on Indian Pines of AVIRIS data.

Table 6. Comparison between SAM and the spectral quality score on Moffett Field of AVIRIS data.

Table 7. Comparison between SAM and the spectral quality score on Chikusei-1 of HyperspecVC data.

Table 8. Comparison between SAM and the spectral quality score on Chikusei-2 of HyperspecVC data.
Table 9. Comparison of each type of spatial features on Indian Pines of AVIRIS data.

Table 10. Comparison of each type of spatial features on Moffett Field of AVIRIS data.

Table 11. Comparison of each type of spatial features on Chikusei-1 of HyperspecVC data.

Table 12. Comparison of each type of spatial features on Chikusei-2 of HyperspecVC data.
Table 13. Performance on Indian Pines of AVIRIS data, trained on HyperspecVC data.

Table 14. Performance on Moffett Field of AVIRIS data, trained on HyperspecVC data.

Table 15. Spectral scores on Indian Pines of AVIRIS data, trained on HyperspecVC data.

Table 16. Spectral scores on Moffett Field of AVIRIS data, trained on HyperspecVC data.
Low-Power Failure Detection for Environmental Monitoring Based on IoT
Many environmental monitoring applications that are based on the Internet of Things (IoT) require robust and available systems. These systems must be able to tolerate the hardware or software failure of nodes and communication failure between nodes. However, node failure is inevitable due to environmental and human factors, and battery depletion in particular is a major contributor to node failure. Existing failure detection algorithms seldom consider the problem of node battery consumption. In order to rectify this, we propose a low-power failure detector (LP-FD) that can provide an acceptable failure detection service while saving on the battery consumption of nodes. Simulation results show that the LP-FD provides better detection speed and accuracy, and lower overhead and battery consumption, than other failure detection algorithms.
Introduction
The Internet of Things (IoT) has been gaining momentum in both industry and research communities due to an explosion in the number of smart mobile devices and sensors and the potential applications of the data produced across a wide spectrum of domains [1,2]. Among the IoT application domains, environmental monitoring is receiving increased attention as environmental technology becomes a key area of global sustainable development; examples include underwater resource management [3], wetland monitoring systems [4], emergency management communities [5], and urban public safety emergency management early warning systems [6]. These applications require the IoT to maintain high availability for reliable execution. However, failure is inevitable due to various environmental factors and sensor hardware or software malfunctions, and in particular the inability of sensors to recharge their batteries. Thus, it is a challenge to maintain the high availability of environmental monitoring IoT applications.
Failure detection is an essential component of building highly available systems, especially if there are safety applications in the system [7]. Failure detection can periodically identify the state of neighbor nodes and then output the results to support routing discovery, application deployment, real-time communication, etc. Thus, failure detection can ensure the high availability of IoT applications. An effective failure detection algorithm finds failed nodes accurately and promptly so that the behavior of the system can be adjusted as soon as possible. At present, many failure detection algorithms based on a heartbeat protocol have been proposed for distributed systems [8][9][10][11][12]. However, these failure detection algorithms do not consider the application environment of the IoT. For example, a large number of sensors in IoT applications do not have strong computing capabilities and lack a sufficient power supply due to their special application environment. Therefore, these failure detection algorithms are not adequate for the IoT.
In this paper, we focus on the problem of failure detection when remote nodes do not have a sufficient power supply [13]. Accordingly, our failure detection algorithm does not consume a large amount of node power and can mitigate the problem of sensor energy consumption to achieve environmental monitoring in remote areas. To facilitate environmental monitoring in remote or inaccessible areas without a sufficient power supply, we present a low-power failure detector (LP-FD) for IoT applications. A key design aspect of the LP-FD is that it employs a variable detection period. We assume that the online time of sensors follows the Weibull distribution, so the detection period of the LP-FD can be calculated from the reliability function of the Weibull distribution. When detection begins, the detection period of the LP-FD is set to be longer due to the high reliability of the sensors, while in subsequent detections, the detection period is set to be shorter due to the decreasing reliability of the sensors. Compared to traditional FDs, the LP-FD needs fewer heartbeat messages to achieve failure detection; thus, the LP-FD saves on communication overhead in order to reduce sensor battery consumption. The main contributions of this paper are as follows:

• We have designed a novel FD for environmental monitoring based on the IoT that ensures the high availability and reliable execution of applications.
• The detection period can be calculated from the reliability function of the Weibull distribution and is proportional to the reliability of the sensors.
• Due to the variable detection period, the number of communications per unit time is reduced, which saves on sensor power consumption and detection overhead.
The rest of this paper is organized as follows. In Section 2, related work regarding the environmental monitoring of the IoT and failure detection is introduced. Section 3 introduces the system model. The implementation of the LP-FD is proposed in Section 4. The simulation results are reported in Section 5. Finally, the work is concluded in Section 6.
Environmental Monitoring of IoT
The increasingly serious issue of environmental pollution has promoted the rapid development of environmental monitoring [14]. Environmental monitoring has been conducted for more than 50 years, and at present, IoT technology is being applied in this field as a new enabling technology [15]. In addition, it has brought new opportunities for many related technologies, such as intelligent-sensing environmental monitoring technology and embedded technology.
In many countries, intelligent environmental monitoring has become very popular. Many systems use various wireless LAN protocols to achieve environmental monitoring, such as Home Radio Frequency (Home-RF), which is used in the sensor networks of some home devices, and the ZigBee protocol, whose physical layer and medium access control layer follow the IEEE 802.15.4 standard [16][17][18]. An ecological monitoring system for the distribution and habits of toads has been developed by the Australian Government [19], and the seabirds of Great Duck Island are monitored by an ecological monitoring system [20]. IoT technology has also been widely used in the field of environmental monitoring. It uses monitoring devices rather than standalone sensing devices, connecting terminal testing devices with end customers, environmental protection departments, and personal digital display monitoring systems, allowing people to understand environmental conditions more intuitively and quickly.
There are three levels in environmental monitoring techniques based on the IoT. The first level is the intelligent sensing layer, the second level is the network communication layer, and the third level is the application layer (as shown in Figure 1).
The perception layer contains various sensors, and the systems on the sensors are used to obtain the environmental parameters.
The network layer is mainly used to transmit data via 5G, GPRS, and ZigBee [21]. Users can conveniently access these data with a terminal computer or mobile device.
The application layer is mainly used to analyze and process the information and data, to make reasonable control decisions, and to realize intelligent management, application, and service.
Failure Detection
With the development of distributed systems, failure detection technology has become an important part of building highly available distributed systems. This technology has received a lot of attention since its emergence, and many different types of failure detectors have been proposed; for example, the Cassandra distributed database uses an accrual failure detector to detect node failure [22]. Aiming at fault-tolerant distributed systems, Chandra and Toueg first proposed the concept of failure detection. At the same time, they defined two properties, completeness and accuracy, to describe the detection capability of a failure detector: completeness is the ability of a failure detector to eventually find a node failure, and accuracy is the ability of a failure detector to avoid false detection.
Many implemented failure detectors employ the heartbeat protocol or the ping protocol. In the heartbeat protocol, the monitored nodes periodically send heartbeat messages to a failure detector, which determines the state of the nodes according to whether it receives the heartbeat messages. In the ping protocol, a failure detector actively sends query messages to the monitored nodes and determines their state according to their responses. Some other important failure detectors work as follows.
Chen et al. [23] proposed a Quality of Service based (QoS-based) failure detector in accordance with the probability network model. In this failure detector, a node p sends a heartbeat message m to a node q every η units of time. A sliding window located at node q is used to store the last n heartbeat messages m_1, m_2, ..., m_n, whose receipt times according to q's local clock are A_1, A_2, ..., A_n. Subsequently, the expected arrival time of the next heartbeat message is estimated by

EA_{k+1} = (1/n) Σ_{i=1}^{n} (A_i − η·i) + (k + 1)·η,

where η is the sending interval, decided by the QoS requirement of the user. In this failure detector, the concept of a freshpoint is introduced: the freshpoint is the timeout threshold used to determine whether the monitored node has failed. The freshpoint τ_{k+1} of the next heartbeat message consists of EA_{k+1} and the constant safety margin SM. One has

τ_{k+1} = EA_{k+1} + SM,

where SM means that an additional amount of time is added to the timeout value to improve the detection accuracy. The arrival time of the next heartbeat message is thus estimated with a constant safety margin in this failure detector. Based on Chen's FD, Tomsic et al. [8] proposed a two sliding windows failure detector (2W-FD) that can adapt to sudden changes in unstable network scenarios. A sliding window is a space used to store the arrival times of heartbeat messages. In the 2W-FD, there are two sliding windows for storing past received messages: a small one used to store a few received messages, and a bigger one used to store a large number of received messages. The small window can cope with abrupt changes in network conditions, while the bigger window deals better with stable or slowly changing conditions. The 2W-FD computes two expected arrival times, EA^{n_1}_{l+1} and EA^{n_2}_{l+1}, according to the two sliding windows. Finally, the bigger estimation is used to compute the next freshness point τ_{l+1}:

τ_{l+1} = max(EA^{n_1}_{l+1}, EA^{n_2}_{l+1}) + SM,

where SM is a constant safety margin. A continuous value ϕ is used to represent the suspicion level of the monitored node in the ϕ-FD [9]. This method differs from a binary method, which uses only trusted or suspect as the output. In the implementation of the ϕ-FD, a sliding window is used to store the most recent arrival times of the heartbeat messages, and it is supposed that the inter-arrival times of the heartbeat messages follow a normal distribution. Subsequently, the value of ϕ can be calculated as follows:

ϕ(T_now) = −log₁₀(P_later(T_now − T_last)),

where T_last is the time when the freshest heartbeat message arrived, T_now is the current time, and P_later(t) is the probability that the arrival time of the fresh heartbeat message is more than t time units later than the previous one. Based on the assumption of normal distribution, P_later(t) can be computed as follows:

P_later(t) = 1 − F(t),

where F(t) is the cumulative distribution function of a normal distribution with mean µ and variance σ². The ϕ-FD provides a value of ϕ to the applications that query it at time T_now. Subsequently, each application can carry out different actions according to its threshold Φ, which is set by different QoS requirements. Thus, the different QoS requirements of multiple applications can be met simultaneously.
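For concreteness, Chen's freshpoint estimation can be sketched in a few lines of Python (assuming the sliding window stores (sequence number, arrival time) pairs; this is an illustration, not the authors' implementation):

```python
from collections import deque

class ChenEstimator:
    def __init__(self, window_size, eta, safety_margin):
        self.window = deque(maxlen=window_size)  # (seq number, arrival time)
        self.eta = eta                           # sending interval
        self.sm = safety_margin                  # constant safety margin SM

    def on_heartbeat(self, seq, arrival_time):
        self.window.append((seq, arrival_time))

    def next_freshpoint(self):
        # EA_{k+1} = (1/n) * sum_i (A_i - eta * i) + eta * (k + 1)
        n = len(self.window)
        base = sum(a - self.eta * i for i, a in self.window) / n
        k_next = self.window[-1][0] + 1
        return base + self.eta * k_next + self.sm  # tau_{k+1} = EA_{k+1} + SM
```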
The ED-FD [24], which is based on the exponential distribution, is similar to the ϕ-FD. In the ED-FD, it is assumed that the inter-arrival times of the heartbeat messages follow an exponential distribution. Thus, the suspicion level of the monitored node, e_d, can be calculated as follows:

e_d(T_now) = (T_now − T_last) / µ,

where T_now, T_last, and µ have the same meaning as for the ϕ-FD. For the ED-FD, the threshold is E_d.
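The two accrual suspicion levels can be sketched as follows; the normal-CDF form of ϕ matches the definition above, while the exponential form of e_d is our reading of [24] (the original equation is not reproduced here), so treat it as an assumption.

```python
import math

def phi_value(t_now, t_last, mu, sigma):
    # phi = -log10(P_later(t_now - t_last)) under the normal assumption.
    t = t_now - t_last
    p_later = 0.5 * math.erfc((t - mu) / (sigma * math.sqrt(2.0)))
    return -math.log10(max(p_later, 1e-300))  # clamp to avoid log10(0)

def ed_value(t_now, t_last, mu):
    # e_d under the exponential assumption with mean mu:
    # -ln(P_later(t)) with P_later(t) = exp(-t / mu) gives t / mu.
    return (t_now - t_last) / mu
```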
QoS Metrics of Failure Detection
For some distributed applications, there are timing constraints on the behavior of failure detectors. A failure detector cannot meet the requirements of these applications if a node starts to be suspected long after it fails, or if the failure detector makes too many mistakes. In order to solve this problem, Chen proposed a series of metrics to constrain the behavior of failure detectors. These metrics describe how quickly a node failure is found and how much erroneous detection is avoided; moreover, they describe the performance of a failure detector quantitatively. In these metrics, T represents that a node is trusted to work normally and S represents that a node is suspected of failure. When a T-transition occurs, the failure detector corrects a false suspicion; when an S-transition occurs, the failure detector suspects a node failure. Based on the above description, the following are the primary metrics describing the QoS of a failure detector:
• Detection time (T_D) is the time from the moment a node crashes to the moment it is permanently suspected, i.e., when the final S-transition occurs.
• Mistake rate (λ_M) is the number of false suspicions a failure detector makes per unit time, i.e., it describes the frequency of false suspicions of a failure detector.
• Query accuracy probability (Q_A) is the probability that the output of a failure detector is correct at a random time.
• Detection overhead (O_D) is the traffic used to find a failed node. It can be measured by recording the average number of messages sent for the detection purpose.
The first metric describes the detection speed of a failure detector; the second and third describe its detection accuracy; and the last describes its detection overhead. Because the mistake rate alone is not sufficient to describe the detection accuracy of a failure detector, the query accuracy probability is also employed. For example, node p is detected by FD1 and FD2 in Figure 2. During the whole detection process (16 s), node p is in a normal state. In Figure 2, T represents that the output of the failure detector is trusted, while S represents that the output is suspect. For FD1, there are two false suspicions in the whole detection process, so, by the definition of the mistake rate, the mistake rate of FD1 is 2/16 = 0.125. Its output of trust lasts 12 s, accounting for 12/16 = 0.75 of the overall output, which means that the query accuracy probability of FD1 is 0.75. For FD2, there are also two false suspicions, so its mistake rate is likewise 2/16 = 0.125. However, its output of trust lasts 8 s, accounting for 8/16 = 0.5 of the overall output, so the query accuracy probability of FD2 is 0.5. Both failure detectors have the same mistake rate (0.125), but they have different query accuracy probabilities (0.75 and 0.5).
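The worked example can be checked numerically with the short sketch below; the timestamps are illustrative values consistent with the totals quoted above, not the actual trace of Figure 2.

```python
def qos_metrics(trace, total_time):
    # trace: list of (timestamp, state) pairs, state in {'T', 'S'}, while the
    # monitored node is actually up; returns (mistake rate, query accuracy).
    mistakes = sum(1 for i in range(1, len(trace))
                   if trace[i][1] == 'S' and trace[i - 1][1] == 'T')
    trusted = 0.0
    for i, (t, state) in enumerate(trace):
        t_end = trace[i + 1][0] if i + 1 < len(trace) else total_time
        if state == 'T':
            trusted += t_end - t
    return mistakes / total_time, trusted / total_time

fd1 = [(0, 'T'), (5, 'S'), (7, 'T'), (11, 'S'), (13, 'T')]
print(qos_metrics(fd1, 16))  # -> (0.125, 0.75), matching FD1 above
```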
Network Model
The network model is a basic factor that must be considered in the design of a failure detector. Each failure detector records the states of the monitored nodes in its suspect list, and when a node is suspected by any failure detector, the failure detector must transmit this information to the other nodes in the network. However, it is very time consuming and load consuming to let all the nodes know this failure information in such a large-scale system. In this paper, we therefore let each failure detector connect only to a subset of the nodes and be responsible for detecting them. More specifically, each failure detector is responsible for detecting its 1-hop neighbor nodes, and failure information can be transmitted along neighbor nodes.
Link Failure
In the IoT, wireless communication channels are unstable, and radio interference is a major cause of link failure. If link failure occurs, packets will be lost. In most cases, a failure detector can correct its own false suspicions because link failure is temporary. In this paper, we consider the communication channels to be unreliable and assume that each communication channel is a fair lossy channel [25]. Such a channel allows packet loss, but it cannot duplicate or modify a message or create a new message. Additionally, node q will eventually receive message m if node p continuously sends that message.
Node Failure
In a hostile environment, sensors may suffer antenna failure, circuit failure, battery leakage, and other problems. These problems lead to sensor failure and affect the system performance. We model sensor failure as a crash-stop: when a sensor crash-stops, it can no longer send or receive messages. Under normal circumstances, a sensor always sends and receives messages without failure. Sensor p can determine whether its neighbor sensor q is normal according to the information in the received messages.
The Detection Period
In a failure detector, the detection performance is seriously affected by the detection period. For example, a longer detection period increases the detection time and reduces the detection accuracy, whereas a shorter detection period generates more heartbeat messages and increases the detection overhead, meaning that more communication and computation cost is consumed. In the IoT, it is common for self-powered sensors to fail due to battery exhaustion, and excessive detection overhead accelerates battery consumption and causes sensor failure. Thus, we need a reasonable method of configuring the detection period that balances the detection time, detection accuracy, and detection overhead. In this paper, we propose a new method for determining the detection period in the IoT (as shown in Algorithm 1); the definitions of the parameters involved in this method are given in Table 1.
Considering the general failure of sensors and the exhaustion of sensor batteries, we assume that the reliability of a sensor follows the Weibull distribution [26]. Therefore, the reliability of a sensor over time can be described by

R(t) = exp(−(t/α)^β),

where the parameters α and β are used to adjust the reliability function.

According to the reliability function, the reliability value R(t_i) of a sensor can be calculated at a certain time t_i. If this reliability value R(t_i) is greater than the preset reliability value R_req, we can calculate the detection period η (as shown in Algorithm 1).

By transforming the reliability function, we can attain

t = α · (−ln R(t))^{1/β}. (9)

We can obtain a time value t_req by introducing the preset reliability value R_req into Equation (9). Subsequently, we can attain the time difference

Δt = t_req − t_i.

If this time difference satisfies Δt > n · η_min, we use n · η_min as the detection period to ensure detection accuracy (lines 5 and 6). If η_min < Δt < n · η_min, we use Δt as the detection period (lines 7 and 8). Otherwise, we use η_min as the detection period (lines 9 to 14). Every time a heartbeat message is sent, we re-calculate the detection period.
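Algorithm 1's period selection can be summarized in the sketch below, using the Weibull form given above; α, β, R_req, η_min, and n are deployment-specific parameters, and boundary cases are resolved as in the prose.

```python
import math

def detection_period(t_i, alpha, beta, r_req, eta_min, n):
    # Invert R(t_req) = R_req: t_req = alpha * (-ln R_req)^(1/beta).
    t_req = alpha * ((-math.log(r_req)) ** (1.0 / beta))
    dt = t_req - t_i
    if dt > n * eta_min:
        return n * eta_min   # high reliability: cap the period for accuracy
    elif eta_min < dt:
        return dt            # use the reliability-derived period
    return eta_min           # low reliability: floor at the minimum period
```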
Implementation of Low-Power Failure Detector
In environmental monitoring based on the IoT, there are many sensors used to monitor the environment and transmit data (as shown in Figure 3). In such a large-scale system, sensor failure caused by software and hardware faults becomes inevitable. Thus, the system needs to know the status of sensors in a timely fashion to ensure the execution of applications. For example, when a sensor fails (the red node in Figure 3 is the failed node), all data transmitted through this node will not reach the destination, which means that the old path through the failed sensor is useless. If the system does not know how many such failed nodes exist, its availability will be greatly reduced. The purpose of a failure detector is to find the failed sensors in the system in time. By employing a failure detector, the system can find a failed sensor and remove it from the system topology; the system then builds a new path to transmit data using the remaining normal sensors. In the IoT, apart from sensor hardware and software failure, battery depletion is also an important factor in sensor failure. To reduce the impact of failure detection on sensor battery consumption, an LP-FD is proposed.

When the receiver obtains a heartbeat message, the message delay d_i can be calculated by

d_i = T_now − T_pre,

where T_pre is the arrival time of the previous heartbeat message and T_now is the arrival time of the new heartbeat message. If a message is lost, it is difficult to measure the communication delay between the sender and the receiver. In light of the impact of message loss, our approach uses an averaging method to deal with this problem. In detail, we recompute the value of the delay by

d_i = (T_now − T_pre) / (N_l + 1),

where N_l is the number of lost heartbeat messages. It is assumed that the value of d_i is equal to the message delay of the next heartbeat message d_{i+1}. Thus, the expected arrival time of the next heartbeat message can be calculated by

EA_{i+1} = T_now + η_k,

where ID_k is the sequence number of the heartbeat message and η_k is the k-th detection period. Based on the single exponential smoothing method, we can calculate the predictive delay d̄_{i+1} as follows:

d̄_{i+1} = k · d_i + (1 − k) · d̄_i,

where k (0 ≤ k ≤ 1) is a constant between 0 and 1 that controls how rapidly d̄_{i+1} adapts to the delay change. Therefore, the safety margin (SM) can be estimated by

SM_{i+1} = ε · d̄_{i+1},

where ε is a variable chosen so that there is an acceptably small probability that the delay of the heartbeat message will exceed the timeout. Finally, we can compute the freshpoint for heartbeat message (i + 1) by

τ_{i+1} = EA_{i+1} + SM_{i+1}.

In the LP-FD, the heartbeat approach is used as the basic failure detection strategy. To simplify the description, suppose that there are two sensors, p and q, in the system, where sensor q is responsible for detecting sensor p. Algorithm 2 shows the detailed detection algorithm.
for all i > 1 do
    at time η_i (the i-th detection period):
        send heartbeat message m_i to node q
end for
Sensor p, as the monitored sensor, sends heartbeat messages to sensor q every interval η_i (i > 0). Sensor q, as the detecting sensor, executes two tasks. One task adds sensor p to the suspect list when no heartbeat message from sensor p has been received by the last freshpoint. The other task is responsible for computing the freshpoint based on the heartbeat message just received: after sensor q receives a heartbeat message, it can compute the communication delay and the safety margin for the next heartbeat message.
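A minimal sketch of sensor q's side follows, under the delay, smoothing, and safety-margin equations as reconstructed above (the exact formulas are not fully shown in the original, so this is illustrative rather than the authors' code):

```python
class LPFDReceiver:
    def __init__(self, k=0.25, epsilon=1.5):
        self.k = k               # smoothing constant, 0 <= k <= 1
        self.epsilon = epsilon   # safety-margin multiplier
        self.t_pre = None        # arrival time of the previous heartbeat
        self.last_id = 0         # sequence number of the previous heartbeat
        self.d_smooth = 0.0      # smoothed delay estimate (0 until measured)
        self.freshpoint = None
        self.suspect = False

    def on_heartbeat(self, msg_id, t_now, eta_k):
        if self.t_pre is not None:
            n_lost = msg_id - self.last_id - 1           # lost heartbeats
            d_i = (t_now - self.t_pre) / (n_lost + 1)    # averaged delay
            self.d_smooth = self.k * d_i + (1 - self.k) * self.d_smooth
        sm = self.epsilon * self.d_smooth                # safety margin SM
        self.freshpoint = t_now + eta_k + sm             # tau for next message
        self.t_pre, self.last_id = t_now, msg_id
        self.suspect = False                             # trust p again

    def check(self, t_now):
        # Task 1: suspect p if no heartbeat arrived by the freshpoint.
        if self.freshpoint is not None and t_now > self.freshpoint:
            self.suspect = True
        return self.suspect
```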
Evaluation and Performance
We conducted extensive simulations using actual data to evaluate the performance of our proposed failure detector and compared it with three other existing failure detectors. To improve the correctness of the experiments, we used the same method as in [24], replaying the same data through the different failure detectors and then computing the QoS metrics. This ensured that the comparative experiments were carried out under the same network conditions.
Data Processing
Our experiments involved two nodes: one represented the detecting node, and the other represented the monitored node. There was a communication channel between the nodes through a WiFi (802.11g) network. One node, as the monitored node, was responsible for sending heartbeat messages, while the other, as the detecting node, was responsible for receiving them. Neither node failed during the experiment. The detecting node was equipped with a 900 MHz ARM Cortex-A7 processor, 1 GB of RAM, and the CentOS 6.5 operating system (Premier Farnell, Leeds, UK). During the 3 h that the experiment lasted, heartbeat messages were generated at a target rate of one heartbeat every 100 ms. All heartbeat messages were transmitted using the UDP/IP protocol. In total, 88,011 heartbeat messages were sent, among which 87,800 were received (about 0.24% message loss).
The distribution of the arrival times of the heartbeat messages is shown in Figure 4a. From the figure, we can see that the arrival times are concentrated around 100 ms, and the heartbeat messages near 100 ms account for 92% of the total. Therefore, it is suitable to use the arrival time of the last heartbeat message to predict the arrival time of the next one. Next, we selected the arrival times of heartbeat messages in three periods for observation (as shown in Figure 4b-d); the three periods represent the early, middle, and late stages of the experiment. From Figure 4b,c, we can observe that the arrival times are concentrated around 100 ms; additionally, the probabilities that adjacent heartbeat messages have the same delay are 78.6% and 80.6%, respectively, in the early and middle stages of the experiment. From Figure 4d, it can be seen that the arrival times are scattered, possibly because of dynamic network conditions; however, the probability that adjacent heartbeat messages have the same delay is still 76.6%.
Discussions on Parameters
How the value of the timeout is set directly affects the performance of failure detection. A large timeout means a longer detection time when an actual node failure occurs, which results in a possible drop in detection speed. On the other hand, a smaller timeout may cause a decrease in detection accuracy. In our failure detector, the value of the timeout was determined by the delay of the heartbeat messages and the safety margin, with two tuning parameters, k and ε. As fine-grained k values can affect the performance of a failure detector, in our simulations we computed the timeout through a series of k values, i.e., k = 0.1, 0.25, 0.5, and 0.75. ε was used to adjust the safety margin as another tuning parameter; we selected ε = 1, 1.5, and 2 to obtain the best failure detector performance. In practice, the optimal values of k and ε can be obtained via similar simulations or experiments.

In the 2W-FD, there is a common tuning parameter called the safety margin, SM, through which users can obtain different detection times by setting different safety margins. In the accrual failure detectors, the tuning parameter is the threshold of the ϕ-FD and the ED-FD. The parameters of the algorithms are configured as follows: SM ∈ [0, 1000]; for the ϕ-FD, the parameters are set the same as those in [7,9], Φ ∈ [0.5, 16]; and E_d ∈ [10⁻⁴, 10] for the ED-FD, as in [24]. The sliding window sizes are set as follows. For the 2W-FD, n₁ = 1000 and n₂ = 1; with these values, the algorithm presents its best failure detection performance compared to bigger sliding window sizes. The ϕ-FD and ED-FD window sizes are set at n = 1000: these failure detectors have better failure detection performance when they use large window sizes [27], and they obtain only minor improvements when the sliding window size exceeds n = 1000, as also reported in other papers [28]. The above parameter settings correspond to the specific settings used in the original experiments.
Comparison of Failure Detection Metrics
The experimental results of the mistake rate vs. detection time are shown in Figure 5. The x-coordinate indicates the detection time and the y-coordinate indicates the mistake rate. From Figure 5, we can see that the mistake rate of all failure detectors decreases with increasing detection time. However, our failure detector had a lower mistake rate than the other failure detectors at the same detection time. This improvement is because our failure detector can catch most late heartbeat messages by the freshpoint under the same network conditions. When Td < 0.29 s, the mistake rate of our failure detector is similar to those of 2W-FD and ED-FD, which shows that our failure detector can ensure detection accuracy during rapid detection. When 0.29 s < Td < 0.34 s, the mistake rate of our failure detector decreased noticeably compared with the other failure detectors. This is because the calculation approach of the freshpoint can adjust quickly, so our failure detector adapts to varying network conditions better than the other failure detectors.

The experimental results of query accuracy probability vs. detection time are shown in Figure 6. The x-coordinate indicates the detection time and the y-coordinate indicates the query accuracy probability. The query accuracy probability of all failure detectors increases consistently with detection time. When 0.29 s < Td < 0.34 s, the query accuracy probability of our failure detector showed an obvious improvement compared with the other failure detectors. This result is consistent with the measurement of the mistake rate.
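For reference (an illustration, not code from the paper), the two quality-of-service metrics discussed above can be computed from an experiment log as sketched below; the log representation and helper name are assumptions.

def qos_metrics(suspicions, total_time, queries):
    """Illustrative computation of failure detector QoS metrics.

    suspicions : list of booleans, one per suspicion raised by the
                 detector (True if the node had really failed).
    total_time : experiment duration in seconds.
    queries    : list of booleans, one per query of the detector's
                 state, True if the answer was correct at that time.
    """
    # Mistake rate: false suspicions per unit time.
    mistake_rate = sum(1 for real in suspicions if not real) / total_time
    # Query accuracy probability: fraction of queries answered correctly.
    query_accuracy = sum(queries) / len(queries)
    return mistake_rate, query_accuracy

# Example: 3 false suspicions over a 600 s run; 95 of 100 queries correct.
print(qos_metrics([True, False, False, True, False], 600.0,
                  [True] * 95 + [False] * 5))  # (0.005, 0.95)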
Figure 7 depicts the relative overhead comparison of two failure detectors: 2W-FD, with a fixed detection period, and the LP-FD, with a variable detection period. We observed that 2W-FD introduced more traffic than our failure detector in the early part of the experiment (experiment time less than 1.5 h). As time increases, the reliability of the sensor node decreases and the detection period becomes smaller; thus, the overhead of our failure detector continued to increase until it matched that of the failure detector with a fixed detection period.
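The conclusions state that this variable detection period is derived from the reliability function of the Weibull distribution; a minimal sketch of that idea follows, where the shape/scale parameters and the mapping from reliability to period are assumed for illustration rather than taken from the paper.

import math

def weibull_reliability(t, shape=1.5, scale=2000.0):
    """R(t) = exp(-(t/scale)^shape): probability the node survives past t.
    Shape and scale here are assumed values, not the paper's."""
    return math.exp(-((t / scale) ** shape))

def detection_period(t, base_period=10.0, min_period=1.0):
    """Illustrative mapping: scale a base detection period by the node's
    current reliability, so the period shrinks as failure becomes likely."""
    return max(min_period, base_period * weibull_reliability(t))

for t in (0, 1000, 2000, 4000):
    print(t, round(detection_period(t), 2))  # period shrinks: 10.0 -> 1.0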
Comparison of Battery Consumption
Sensors, as static devices, collect and transfer data to the sink node periodically. In addition, sensors can be used as relay nodes to forward data to other sensors [29,30]. Accordingly, we employed two nodes to simulate the working environment of the IoT. The two nodes were connected by a wireless link. Each node was not only a monitored node but also a detecting node running a failure detector. On each node, multiple processes were responsible for sending heartbeat messages or determining the state of the other node, respectively. The nodes were equipped with an 800 mAh battery. In every experiment, different failure detectors were deployed on the nodes, and the running time of the nodes was measured.
The accrual failure detectors (ϕ-FD and ED-FD) and 2W-FD generated more communication overhead than our failure detector over the same time. In addition, these failure detectors required more computation and storage, including the calculation of detector parameters and the storage of recent heartbeat messages in each detection period. To analyse the battery consumption of the failure detectors, several sliding window sizes were selected (from n = 100 to 10,000), and a detailed comparison was made. For experimental reliability, each experiment with a given failure detector was carried out five times under the same environment and parameters. Finally, the running time of the node with each failure detector was recorded. The experimental results are shown in Figure 8.
Figure 8 shows that the node without any failure detector deployed had the longest running time; it ran 10% longer than the node with the LP-FD. Among the nodes with a failure detector, the one that deployed ϕ-FD had the shortest running time, and the decrease becomes more pronounced as the sliding window size increases. This may be because the battery consumption of the node is exacerbated when more heartbeat messages are sent and many calculations are performed to ascertain the parameters of the normal distribution model; the fixed detection period also introduces more heartbeat messages. Among the nodes with a failure detector, the node with the LP-FD had the longest running time: from Figure 8, the improvement was up to 18% over ϕ-FD when the sliding window size was 10,000. In addition, the LP-FD did not need to maintain a sliding window and was thus unaffected by its size. The nodes only maintained five connections in the experiment; in an actual IoT, each node needs to connect to many neighbour nodes to ensure the connectivity of the system. Therefore, the battery savings of the node that deployed the LP-FD are even more significant in real systems.
Conclusions
In this paper, we introduced our failure detector for environmental monitoring based on the IoT, namely the LP-FD. This failure detector can achieve sensor failure detection in a timely and accurate way. In order to save battery consumption and detection overhead, we computed a variable detection period by using the reliability function of the Weibull distribution. Moreover, our failure detector used both the prediction method based on the last heartbeat message and a dynamic safety margin to ensure the accuracy of failure detection. According to the experimental results, the LP-FD offers better detection speed, accuracy, overhead, and battery consumption than traditional failure detectors. Therefore, the LP-FD is suitable for providing failure detection services in the IoT.
"Environmental Science",
"Engineering",
"Computer Science"
] |
A stronger acceptor decreases the rates of charge transfer: ultrafast dynamics and on/off switching of charge separation in organometallic donor–bridge–acceptor systems
To unravel the role of driving force and structural changes in directing the photoinduced pathways in donor–bridge–acceptor (DBA) systems, we compared the ultrafast dynamics in novel DBAs which share a phenothiazine (PTZ) electron donor and a Pt(II) trans-acetylide bridge (–C≡C–Pt–C≡C–), but bear different acceptors conjugated into the bridge (naphthalene diimide, NDI; or naphthalene monoimide, NAP). The excited state dynamics were elucidated by transient absorption, time-resolved infrared (TRIR, directly following electron density changes on the bridge/acceptor), and broadband fluorescence upconversion (FLUP, directly following sub-picosecond intersystem crossing) spectroscopies, supported by TDDFT calculations. Direct conjugation of a strong acceptor into the bridge leads to switching of the lowest excited state from the intraligand 3IL state to the desired charge-separated 3CSS state. We observe two surprising effects of the increased strength of the acceptor in NDI vs. NAP: a ca. 70-fold slow-down of the 3CSS formation, (971 ps)⁻¹ vs. (14 ps)⁻¹, and a longer lifetime of the 3CSS (5.9 vs. 1 ns); these are attributed to differences in the driving force ΔGet and to distance dependence. The 100-fold increase in the rate of intersystem crossing, to sub-500 fs, by the stronger acceptor highlights the role of delocalisation across the heavy-atom-containing bridge in this process. The close proximity of several excited states allows one to control the yield of the 3CSS from ∼100% to 0% by solvent polarity. The new DBAs offer a versatile platform for investigating the role of bridge vibrations as a tool to control excited state dynamics.
Introduction
The modular approaches to control charge separation, whereby, for example, separate molecular units are used for catalysis and light absorption, are accordingly of great interest [2–8]. Such an approach involves a combination of an electron donor (D), an electron acceptor (A), and a bridge (B) 'modules', assembled in a combinatorial fashion. The synthetic versatility of such modular DBA design, permitting one to change the donor and/or acceptor units, the bridge length, or the degree of coupling, has allowed for extensive investigation of how photoinduced electron transfer is affected by the structural and electronic properties of the components, knowledge that can guide the future design of efficient light-harvesting systems.
Recent experimental and theoretical studies of ultrafast dynamics in DBA systems have illuminated the crucial role of bridge-localised vibrations in mediating photoinduced electron transfer [9–13], with the outcome ranging from full inhibition [10,14] to acceleration of charge separation [15,22–25]. Multiple asymmetric DBA and symmetric (DBD, ABA) systems based on [–C≡C–Pt–C≡C–] bridges have been developed as models for charge transfer processes [18,26–31] relevant to applications including optoelectronics, power limiting, catalysis, and upconversion [32,33,35,36]. The specific family of DBA molecules which exhibited controllable charge separation contained an aromatic acid imide, 1,8-naphthalimide (NAP), as the acceptor and phenothiazine derivatives (PTZ) as donors, in which the coupling between, and the relative energies of, multiple excited states were tuned by varying the modes of attachment of the A and the D to the bridge. In "NAP–C≡C–Pt–C≡C–PTZ" DBA, absorption of light populates a charge-transfer (CT) manifold D–B⁺–A⁻, which with a rate of (14 ps)⁻¹ decays over three pathways, including formation of the CSS (D⁺–B–A⁻) via reductive quenching of the oxidised bridge by the PTZ donor [10]. Crucially, branching in such DBAs occurred on a picosecond timescale, commensurate with vibrational cooling, thus allowing one to perturb the electron transfer pathway with vibrational excitation. The intriguing phenomenon, previously unobserved in solution, was that mode-specific IR excitation of the bridge-localised acetylide modes in the course of charge separation controlled the population of the 3CSS, with up to 100% efficiency [14]. This work, along with other studies [37–40], established that the acetylide vibrational modes of the bridge are key contributors to the reaction coordinate in photoinduced electron transfer in these molecules. However, the lowest excited state in this DBA family was an acceptor-based intra-ligand state, 3IL, and not the charge-separated state. A logical next step is to design the analogous DBA system in which the potential for IR excitation of the bridge is preserved, but the lowest excited state becomes a charge-separated state.
Here, the goal of reaching the lowest charge-separated state in trans-acetylide DBA systems has been achieved by replacing the monoimide NAP acceptor with a stronger acceptor, naphthalene diimide (NDI; free NDI has a ca. 0.9 V less negative reduction potential than NAP) [41], whilst the bridge and the donor remain unchanged: NDI–C≡C–Pt–C≡C–PTZ (Fig. 1, 3). Such a design reverses the order of the IL and CSS states, potentially enabling a change from suppression (NAP) to enhancement of the population of the CSS by exciting bridge vibrations.
We report the unexpected ultrafast photophysics of the novel Pt(II) DBA complex 3 and of its 'building blocks' (Fig. 1, 1 and 2). The chosen donor and acceptor form stable radical cations and anions, respectively [42–46], each with distinct spectral features in UV/vis absorption spectra, allowing one to track electron transfer in real time. Further, the presence of strong carbonyl group vibrations ν(CO) in NDI/NDI¹⁻ and of the bridge-localised ν(C≡C) vibrations, the frequencies of which are highly sensitive to changes in electron density, enables the use of time-resolved infrared spectroscopy [47–50] to follow excited state evolution in the mid-IR region.
A combination of ultrafast time-resolved mid-infrared (TRIR) and transient absorption (TA) methods, flash photolysis, ns–µs time-correlated single photon counting (TCSPC), UV/vis/IR spectroelectrochemistry, broadband fluorescence upconversion (FLUP) spectroscopy and DFT calculations allowed us to map the time-resolved dynamics of the new family of Pt(II) DBA systems over several orders of magnitude in time. We show that the lowest excited state is indeed the desired charge-separated state, and that the relative order of the 3IL and 3CSS states can also be controlled by solvent polarity. Surprisingly, the DBA containing a stronger electron acceptor and a lower-energy 3CSS investigated in this work shows several profound differences compared to the previously reported analogues with weaker electron acceptors, namely: a decreased rate of charge separation, a significant increase in the rate of intersystem crossing, and an enhanced lifetime of the 3CSS. The role of structural changes across the bridge, the energetics of the individual steps in determining the differences between the ultrafast dynamics in systems bearing acceptors of variable strength, and solvent effects are discussed.
Synthesis
The acceptor ligand precursor 2-Br-NDI was synthesised via a two-step procedure: 1,4,5,8-naphthalene tetracarboxylic dianhydride was first treated with the brominating agent dibromoisocyanuric acid, followed by formation of the diimide through reaction with n-octylamine in refluxing acetic acid.
This synthetic route led to the formation of both 2-Br-NDI and the 2,6-dibrominated species, with the two compounds being efficiently separated by column chromatography.
During the synthesis of complex NDI–Pt–Cl (1), a second, near-identical complex was isolated, with mass spectrometry data indicating the formation of an NDI–Pt–Br complex, presumably through halide exchange with free bromide liberated from the 2-Br-NDI ligand precursor. This latter complex may additionally be identified through a subtle upfield shift of the ³¹P NMR resonance associated with the coordinated trans-phosphine groups (4.66 ppm vs. 7.37 ppm for 1 in CDCl₃). Bis-ethynyl 2 was formed via a Cl–Pt–Ph intermediate, itself synthesised through Hagihara coupling of cis-Pt(PBu₃)₂Cl₂ with phenylacetylene, with subsequent steps being analogous to those of 1. The triad DBA (3) was formed in an analogous manner. The donor fragment N-(4-ethynylbenzyl)phenothiazine was first complexed with cis-Pt(PBu₃)₂Cl₂, followed by a reaction with (Me)₃Si–C≡CH to form the trans-bisacetylide (Me)₃Si–Pt–PTZ. Finally, attachment of the NDI acceptor was achieved through in situ deprotection and coupling with 2-Br-NDI, with a yield of 60%. All characterisation and synthetic details are given in the ESI.† The X-ray crystal structure of 3 (NDI–Pt–PTZ) is depicted in Fig. 1, with details in the ESI.†
UV-vis absorption, emission and ultrafast broadband fluorescence upconversion (FLUP) spectroscopy
The ground state absorption spectra of 1–3 in CH₂Cl₂ (DCM) are shown in Fig. 2A. The region from 230 to 380 nm contains absorption bands due to ligand-localised transitions (π → π*). The sharp transitions centred at 380 and 360 nm, present in all the spectra, are assigned to NDI-localised π → π* transitions [35,48]. The transitions centred at 329 and 287 nm are common to 2 and 3 and hence assigned to transitions localised on the phenyl and acetylide fragments. The sharp transition at 257 nm, present only in the spectrum of 3, is assigned to the PTZ ligand.
The broad transitions at 494 nm (1) and 504 nm (2, 3) are due to a mixed metal–ligand-to-ligand charge transfer transition (ML-LCT), involving electron density shift from the Pt–bridge moiety towards the NDI acceptor. The energies of the ML-LCT transitions in 2 and 3 are almost identical, whereas 1, lacking the phenylacetylide fragment, has this transition at slightly higher energy. The broadness of the ML-LCT absorption band suggests that it comprises multiple electronic transitions. (TD)DFT calculations establish the presence of at least three low-lying ML-LCT states. Electron density difference plots (Fig. S15†) between the ground and the first four singlet excited states confirm that the transitions involve electron density shift from the whole Pt-acetylide bridge to NDI in 1–3. Calculated UV-vis spectra are in good agreement with the experiment (Fig. S13†).
The emission spectra of 1–3 in aerated DCM, following excitation of the ML-LCT absorption band, are shown in Fig. 2B. All complexes display weak emission in the region 550–650 nm. 1 and 2 also display near-IR emission, with peaks at 725 and 745 nm, which in similar [Pt–C≡C–NDI] compounds [32,51] was assigned to emission of 3IL states localised on the [–C≡C–NDI] fragment.
Ultrafast broadband fluorescence upconversion was used to spectrally and temporally resolve the emission of 1 and 2 in the sub-picosecond domain. The FLUP spectra of 2 (Fig. 2B, inset) show weak emission in the region 500–620 nm (λmax 565 nm) which decays fully within 1 ps. Such ultrafast decay suggests the emission emanates from the initially populated singlet FC state. The rate of intersystem crossing, >(300 fs)⁻¹, is similar to that reported recently for some Pt(II) trans-acetylides [52], but two orders of magnitude higher than in the NAP–Pt–PTZ analogue [53]. Complex 3 in DCM does not show the broad near-IR emission from a 3IL state observed in 1 and 2, suggesting the presence of a new excited state in 3 involving the PTZ unit: population of this state would offer an additional decay channel for the 3IL state, quenching its emission. However, in hexane 2 and 3 show identical 3IL emission (Fig. S1†), suggesting that the energy, and therefore the probability of population, of this new excited state are sensitive to solvent polarity.
Fourier transform infrared spectroscopy (FTIR)
The ground state FTIR spectra of 1–3 in DCM in the region from 1500 to 2150 cm⁻¹ are shown in Fig. 3. 1 has a single ν(C≡C) band at 2086 cm⁻¹, whilst the two acetylide groups in 2 and 3 yield asymmetric and symmetric group vibrations at 2072 and 2107 cm⁻¹, respectively. The range 1650–1750 cm⁻¹ is dominated by NDI carbonyl stretching vibrations ν(CO) at 1659, 1699, and 1708 cm⁻¹ (sh). Multiple bands below 1650 cm⁻¹ correspond to aromatic ring stretching modes of the NDI (and Ph/PTZ) groups. The calculated vibrational spectra of the electronic ground state of 1–3 (Fig. S14†) are in good agreement with the experiment.
Redox properties and (spectro)electrochemistry
Cyclic voltammetry of 1 and 2 in dry DCM (vs. Fc/Fc⁺) shows two reversible reduction processes (Fig. S4†) at E₁/₂ = −1.12 and −1.53 V for 1, and −1.23 and −1.64 V for 2. These potentials are similar to those of other systems [32,33] in which the NDI is bound to the Pt via direct attachment of the –C≡C– bond to the aromatic ring, and are assigned to the sequential reduction of the NDI moiety to NDI¹⁻ and NDI²⁻ (also confirmed by the spectroelectrochemical data, see below). The reduction potentials are slightly more negative than those of the free NDI, or of the Pt(II) trans-acetylides in which the NDI is linked to the bridge via the N-atom [34]. The changes in the IR spectrum in the course of the one-electron reduction of 1 (Fig. 3, dashed line) include the disappearance of the ν(CO) vibrations of the neutral complex and the appearance of new vibrations at lower energies, 1515, 1592 and 1628 cm⁻¹, due to ν(CO) of the NDI radical anion [48]. The UV-vis spectrum of the one-electron-reduced species 1⁻, generated electrochemically in dry DCM (Fig. S5†), exhibits new absorption bands at 486, 530 (sh), 585, 625, and 708 nm. The slight difference in these band positions from those of the unsubstituted, symmetric NDI¹⁻ (480, 608 nm; refs. 48, 55 and 56) is consistent with the significant electronic coupling between the NDI ligand and the Pt–C≡C bridge in 1.
Ultrafast dynamics of 1-3
The major transient signals for DBA complexes 1–3 in the UV-vis and IR spectral regions are given in Table 1. Kinetic parameters extracted from the different methods are given in the ESI, Table S1.†
Time-resolved infrared spectroscopy (TRIR)
Time-resolved infrared spectra of 1–3 in DCM, following excitation of the ML-LCT transition, are shown in Fig. 4, in the 'acetylide' region, 1750–2200 cm⁻¹ (Fig. 4B, D and F), and in the 'carbonyl' region, 1300–1800 cm⁻¹ (Fig. 4A, C and E). The complex dynamics were analysed using a combination of single-point kinetic analysis and global lifetime analysis (GLA). The resulting decay-associated difference spectra (DAS, parallel model) and evolution-associated spectra (EAS, sequential model) are shown in Fig. S6 and S7.†

The TRIR spectra of 1 following excitation into the ML-LCT band (500 nm, Fig. 4A and B) show an instantaneous bleach of the ground state ν(ring), ν(CO), and ν(C≡C) vibrations at 1567, 1657, 1696, and 2085 cm⁻¹, which reach their maximum negative absorption values by 400 fs. In the acetylide region, the ν(C≡C) transient initially observed at 1967 cm⁻¹ narrows and shifts to 1978 cm⁻¹ whilst increasing in intensity, then continues to shift to higher energy (1982 cm⁻¹) with a 3.6 ps lifetime: behaviour typical of vibrational cooling. In the carbonyl region, two ν(CO) transients at 1641 and 1619 cm⁻¹ appear after excitation. The transient at 1641 cm⁻¹ shifts to 1644 cm⁻¹ with a lifetime of 0.4 ps, concomitantly with a small increase in signal strength, likely due to vibrational cooling (usually a faster process in organic C=O groups vs. a metal-appended –C≡C–). The transient at 1619 cm⁻¹ shifts to 1608 cm⁻¹ with a time constant of 8.5 ps (the shift to lower energy rules out vibrational cooling as the sole process responsible). Partial decay of the bands at 1608 and 1982 cm⁻¹ is then observed over ≈600 ps, concomitant with the apparent recovery of the ν(CO) bleach at 1657 cm⁻¹ and the ν(C≡C) bleach at 2085 cm⁻¹. However, no such recovery is observed for the bleaches at 1567 and 1695 cm⁻¹ over the same timeframe, suggesting that the apparent partial recovery at 1657 and 2085 cm⁻¹ is due not to ground state recovery but to a new transient arising from the population of a new excited state. In both the carbonyl and acetylide regions, the TRIR signals only partially decay on the timescale of the experiment. The lifetime of the final excited state was determined as 60 ns by flash photolysis (Fig. S12†).

The TRIR spectra of 2 under excitation at 500 nm (Fig. 4C and D) show an instantaneous bleach of the symmetric/asymmetric ν(C≡C) bands at 2069 and 2102 cm⁻¹, the ν(CO) bands (1658, 1700 cm⁻¹) and the ν(ring) vibration (1565 cm⁻¹). In the acetylide region, the formation of the bleach signals is accompanied by the rise of a very broad ν(C≡C) IR absorption band with a maximum at 1885 cm⁻¹, with two very weak signals pronounced at 1962 and 2038 cm⁻¹ [49,50].

The TRIR spectra of 3 after excitation at 520 nm (Fig. 4E and F) show instant bleaches of the symmetric/asymmetric ν(C≡C) (2069, 2102 cm⁻¹), the ν(CO) (1663, 1700 cm⁻¹) and the ν(ring) vibrations (1565 cm⁻¹). In the acetylide region, an initial broad ν(C≡C) transient with a maximum at 1885 cm⁻¹ decays on sub-picosecond timescales, concomitant with the rise of a new ν(C≡C) band at 1963 cm⁻¹. This change is followed by a grow-in of a band at 1928 cm⁻¹, similar in spectral shape to the corresponding band of 2.
Fig. 4 TRIR spectra of 1–3 following excitation of the ML-LCT transition, 500 nm, in DCM, in the range up to 3 ns, at selected time delays as shown. Data for compound 1 (panels A and B), compound 2 (panels C and D), and compound 3 (panels E and F). The data in panels A, C and E correspond to the lower-frequency region, 1450–1750 cm⁻¹; the data in panels B, D and F to the higher-frequency "acetylide" region; characteristic IR bands attributed to the 3IL, 3IL(CT), and CSS states are labelled.

In the lower-frequency region for 3, immediately following excitation, two ν(CO) transients at 1623 (with a shoulder at 1612) and 1647 cm⁻¹ are observed. The spectral evolution involves an increase in signal at 1612 cm⁻¹ and on the high-energy side of the 1647 cm⁻¹ band, which was best modelled by GLA with time constants of 0.33, 3.6, and 61 ps (the last component is very minor and close in shape to the 1191 ps component in the global analysis; see Fig. S7F†). Significant decay of the ν(CO) transients at 1612/1647 cm⁻¹ is observed with a lifetime of 1.2 ns (Fig. 4E), concomitant with the formation of transient bands at 1512, 1592 and 1626 cm⁻¹. The final transients in the carbonyl region decay with the same lifetime, 5.8 ns, as the 2097 cm⁻¹ ν(C≡C) band. Thus, 3 populates a new excited state, with characteristic vibrations at 1512, 1592 and 1626 cm⁻¹ (NDI anion) and 2097 cm⁻¹ (Pt/C≡C): a charge-separated state.
Overall, the TRIR dynamics of 3 were analysed by GLA and lifetime density analysis [57].

Solvent polarity. The dynamics of 3 were highly sensitive to solvent polarity. A change of the solvent from DCM to hexane changes neither the TRIR spectra nor the dynamics of 2 (Fig. S9†), suggesting the lowest excited state is ligand-localised. TRIR spectra of 3 in hexane did not display the ν(C≡C) transients at 1928 and 2097 cm⁻¹ which were observed in DCM, indicating no CSS formation in hexane. The emission spectra of 2 and 3 in hexane are identical, and the TRIR and TA dynamics of 3 in hexane are similar to those of 2 (Fig. S9†), indicating that the PTZ donor plays no part in the excited state processes of 3 in non-polar solvents.
Ultrafast transient absorption spectroscopy (TA)
The transient absorption spectra of 1–3 in DCM in the range 370–700 nm (Fig. 7A, C, E and S11†) at 500 nm excitation show a grow-in of bleach signals (360–380, 480–530 nm), accompanied by the rise of transient signals across the spectral range probed. TA spectra of 1 display a grow-in of broad transients with peaks at 427 and 629 nm, concomitant with the apparent decay of a shoulder at 465 nm (the downward arrow in Fig. 7A) with τ = 0.18 ps. A slight redshift of the band at 427 nm to 429 nm, a small apparent loss in the bleach (475–530 nm), and a decrease in the transient at 629 nm (Fig. 7B) are observed with τ = 137 ps. The transient spectrum of 1 then remains unchanged over 5 ns.
Compound 2 displays very similar transient behaviour to that of 1, with a peak at 434 nm and broad featureless absorption in the 570–700 nm region. Similar to 1, there is an ultrafast decay of a shoulder to the main transient peak at 434 nm; however, the change is too small for GLA to extract a lifetime for this process. The later dynamics of 2 follow a similar pattern to 1: a redshift of the transient to 442 nm, and a decrease in the bleach (483–550 nm) and the transient (570–700 nm), τ = 142 ps (Fig. 7D).
Compound 3 has an initial TA spectrum almost identical to that of 2, with a major peak at 431 nm and featureless transients in the region 570–700 nm. As was the case in 1 and 2, fast decay (0.16 ps) of a small shoulder (470 nm) to the main peak is observed. A shift of the peak from 431 to 437 nm, concomitant with the apparent loss of the bleach (480–550 nm) and decay of the transient signal in the 570–700 nm range, occurs bi-exponentially, with τ of 2.1 and 88 ps. The subsequent dynamics involve a major decay of the transient signal at 437 nm, along with growth in the region 465–660 nm (Fig. 7F), τ = 929 ps. The final transient spectrum contains pronounced bands at 589, 625, and 706 nm (Fig. S5†, NDI¹⁻) and decays with a lifetime of 6.2 ns. The dynamics of the TA data match those of TRIR for 1–3.
Discussion
1–3 show a broad absorption envelope at around 500 nm, comprising multiple ML-LCT electronic transitions, as also evidenced by TDDFT calculations. Excitation populates a manifold of (singlet) excited states with varying degrees of CT character. At early times (<400 fs), neither ν(CO) bands corresponding to NDI¹⁻ nor electronic absorption features of NDI¹⁻ were observed, suggesting that the first state detected in TRIR and TA has only limited electron density on the acceptor, and/or that the FC state decays too quickly to be observed; in contrast, in the NAP analogue, NAP¹⁻ was detected at early times [35].
Ultrafast intersystem crossing
The excited state dynamics of 1–3 at early times are characterised by a growth of ν(CO) transients, with very similar frequencies for all complexes (Fig. 4 and Table 1). The growth is accompanied by a continuous shift of the ν(CO) transients to higher energy, suggesting that the dynamics reflect a growth in population of an excited state convolved with relaxation within this excited state. This suggests that the observed ν(CO) transients belong not to an initial excited state, but rather to one populated by the ultrafast decay of a higher state (likely an initially populated CT state). In similar Pt(II) NDI systems, ν(CO) transients at 1607 and 1647 cm⁻¹ were assigned to an excited state localised on the NDI moiety (3IL state) [48]. On the other hand, an initial (200 fs) broad transient at 1885 cm⁻¹ observed for 2 and 3 is characteristic of the oxidised –C≡C–Pt–C≡C– moiety, and confirms CT from the Pt bridge to the acceptor [34,35]. The ν(C≡C) transient at 1885 cm⁻¹ decays on the same timescale as is suggested for the ISC process from the FLUP data, and thus the broad transient at 1885 cm⁻¹ is assigned to a singlet CT state, which undergoes ISC (300 fs for 2 and 260 fs for 3) to 3MLCT. The 3 ps evolution observed in the carbonyl region of 2 and 3 could be attributed to the decay of the 3MLCT to a 3IL state.
In 1, the 1885 cm⁻¹ band (¹CT state) evolves to a narrow ν(C≡C) transient at 1982 cm⁻¹, typical of an acceptor-localised excited state and very similar in shape and position to the ν(C≡C) band of the 3NAP state [35]. As noted previously for the NAP analogues, ISC can occur over a range of timescales, and the same is the case in 1. This "continuous" ISC is approximated by a bi-exponential decay of the FLUP signal (0.24 and 3.3 ps) and of TRIR (0.4 and 3.6 ps).
Ultrafast population of a 3IL state from a CT state in 1–3 is also evident in the TA data, by a 200–300 fs decay of a transient at 470 nm and of stimulated emission, accompanied by grow-in at 430 and 600–700 nm due to 3IL states of the C≡C–NDI unit [32,33,51].

Multiple NDI-based triplet states in 2 and 3, and a charge-separated state in 3

DFT calculations corroborate the existence of a 3IL(CT) state in 2 in DCM. The electron density difference plots of 2 (Fig. S15 and S16†) between the lowest-energy excited triplet state and the ground state clearly show that, whilst the main change in the electron density is on the NDI ligand, some electron density change also occurs on the acetylide units. The calculated IR spectrum of the lowest triplet state (Fig. S17†, blue trace) is in good agreement with the transient IR spectrum of the final excited state of 2 in DCM (Fig. S17†, black trace); thus the lowest state is assigned as 3IL(CT). The TA spectra of 1 and 2 at 3 ns match those obtained by flash photolysis (Fig. S12†), with decay lifetimes of 58 ns (1) and 112 ns (2) that match the decay of the near-IR emission (60 and 111 ns for 1 and 2, respectively). The lowest excited state in DCM is thus 3IL in 1, and 3IL(CT) in 2. Importantly, 2 in hexane (Fig. 7, S8 and S9†) shows ν(C≡C) bands at 1885 and 1965 cm⁻¹, but not at 1936 cm⁻¹, confirming that the 3IL(CT) state is not formed in non-polar solvents.
In DBA 3, addition of the PTZ donor results in substantial decay of the transient signals of the 3IL(CT) state. The major decay of the ν(C≡C) transient at 1928 cm⁻¹ occurs with a lifetime of 793 ps, accompanied by the growth of a ν(C≡C) transient at 2097 cm⁻¹. Such a large shift in frequency, +169 cm⁻¹, suggests a large electron density change on the bridge [34,35]. In the carbonyl region, the ν(CO) transients (1612, 1647 cm⁻¹) due to the 3IL(CT) state decay with a similar lifetime (1191 ps) to the 1928 cm⁻¹ ν(C≡C) transient, along with the formation of new transients at 1592 and 1626 cm⁻¹. The new ν(CO) bands match the IR spectrum of 1⁻ and are characteristic of NDI¹⁻ (Fig. 3) [48]. The new excited state is therefore assigned as the fully charge-separated state 3CSS, NDI¹⁻–Pt–PTZ¹⁺. The calculated vibrational spectrum of the 3CSS is in excellent agreement with the data (Fig. S17†).
In TA, the decay of the TA signal of the 3IL(CT) state in 3 with a lifetime of 929 ps is accompanied by the formation of transient bands which match the electronic absorption bands of NDI¹⁻. The large change in the transient signal is due to formation of the 3CSS (Fig. S5†; the absorption of PTZ¹⁺ at 525 nm is obscured by the laser scatter).
The 3CSS absorption bands in TA reach their maximum at ∼2 ns, followed by their decay and a concomitant recovery of the bleach at 380 nm with a lifetime of 6.2 ns. The absence of longer-lived states is confirmed by the transient spectrum of 3 decaying fully within the ∼22 ns instrument response of the flash photolysis set-up. Thus, the TRIR and TA data and DFT are consistent in identifying the lowest excited state of 3 in DCM as a 3CSS.
Photophysical processes in 1–3 following ML-LCT excitation are summarised in Fig. 8. The initially populated CT state [evidenced by stimulated emission in TA, the FLUP data, and the TRIR transient at 1885 cm⁻¹ in 2 and 3] decays on an ultrafast timescale, <500 fs, into a 3MLCT state. In 1, the 3MLCT decays into the 3IL state (1.71 eV, 60 ns), and 3IL is the lowest state. In 2 and 3, two 3IL states are populated from the initial CT state, 3MLCT → 3IL → 3IL(CT), as evident from two distinct ν(C≡C) transients. In 2, the lowest excited state is 3IL(CT) (1.66 eV), with a lifetime of 114 ns. In 3, the lowest excited state is the 3CSS (1.56 eV). The absence of ground state recovery at the (970 ps)⁻¹ rate suggests that the 3IL(CT) state decays fully into the 3CSS, whilst the lack of growth of the 3CSS with the 0.26 or 4.5 ps lifetimes implies that the 3CSS is not populated from the higher-lying CT states in 3.
However, this consecutive model cannot explain the differences in the lifetime component attributed to the decay of the 3IL state in 3: TA (88 ps), acetylide-region TRIR (126 ps) and carbonyl-region TRIR (61 ps) differ by large margins, and attempts to fit multiple datasets from the three different methods with the same (fixed) value of this component did not give satisfactory results. We therefore propose a branching step in the excited state decay of 3: the 3IL state branches into the 3IL(CT) and 3CSS states. The ν(C≡C) bands are more sensitive to the changes of electron density on the C≡C bridge and hence more accurately detect the 3IL → 3IL(CT) process, approximated by the 126 ps time constant. The ν(CO) transients are more sensitive to the change of electron density on the NDI ligand and also detect the 3IL → 3CSS process (61 ps). To test this suggestion, lifetime density analysis (LDA) was performed on the TRIR data across the 1580–2200 cm⁻¹ region. The LDA, which shows a 2D map of the relative amplitudes of decay components at each wavenumber, confirms the presence of both the 61 ps and 126 ps components (Fig. S10†; note the broad distribution of the time constants for the evolution of ν(CO), Fig. S10c†), confirming two timescales of population of the 3CSS. It is not possible to distinguish between the two branching channels in the TA data, the modelling of which yields a decay of the 3IL state of 88 ps. Comparing the lifetimes extracted from TRIR and TA, we crudely estimate that approximately 40% of the 3IL state decays directly into the 3CSS, and 60% decays into the 3IL(CT) first.
Unexpectedly, the lifetime of the 3CSS in 3 (E_CSS = 1.56 eV) is six times longer than that in NAP–Pt–PTZ (E_CSS = 2.17 eV), despite a ≈0.5 eV smaller driving force for charge recombination, ΔG_CR = −E_CSS.
The ΔG_CR in both cases is sufficiently negative to place charge recombination in the inverted region (assuming a reorganisation energy <1.2 eV), yet this would have led to a shorter lifetime of the 3CSS state in 3 than in NAP–Pt–PTZ, contradicting the experiments.
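To make the inverted-region statement concrete (a worked check added here, not taken from the paper), the Marcus picture places recombination in the inverted region whenever the driving force exceeds the reorganisation energy:

\[
-\Delta G_{\mathrm{CR}}(\mathbf{3}) = 1.56\ \mathrm{eV} > \lambda,
\qquad
-\Delta G_{\mathrm{CR}}(\mathrm{NAP{-}Pt{-}PTZ}) = 2.17\ \mathrm{eV} > \lambda,
\quad \text{for any } \lambda < 1.2\ \mathrm{eV}.
\]

In the inverted region the rate falls as the driving force grows, so the smaller driving force of 3 would by itself predict faster recombination and a shorter 3CSS lifetime, which is the contradiction noted above.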
To explain this further, we considered the electronic structure of the 3CSS state in 3 and in NAP–Pt–PTZ. The frontier orbitals for these states are given in Fig. 9A and B, respectively. The de-excitation back to the S₀ ground state corresponds to the transition f₂₈₇ → f₂₈₆, whereas for 3-NAP it is f₂₆₆ → f₂₆₄. Inspection of the orbitals involved for both molecules shows that there is little difference between the molecules and very little overlap between the orbitals. Thus, the likely mechanism of charge recombination in this case is through-space electron tunnelling [58], whose rate constant kET decreases exponentially with the donor–acceptor distance r: kET = k₀ exp[−β(r − r₀)], where k₀ is the rate of electron tunnelling at the closest van der Waals donor–acceptor distance r₀, and β is a constant. Using the previously estimated value β = 1.36 Å⁻¹ for a cis-NDI-Pt-PTZ with r = 14.7 Å and τ = 36 ns [50], and with r = 15.5 Å and τ = 104 ns, one can predict a ∼6 ns lifetime of the 3CSS state in 3, which agrees well with the observed value. The increased lifetime of the 3CSS in 3, despite its comparatively low energy, could be caused by a slightly longer D–A through-space distance, 13.04 Å in 3 vs. 12.55 Å in NAP–Pt–PTZ, and suggests that charge recombination proceeds via a tunnelling mechanism.
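As a rough numerical check (added for illustration; the β, r and τ values are those quoted above), the exponential distance law with τ = 1/kET implies

\[
\frac{\tau(r_2)}{\tau(r_1)} = e^{\beta (r_2 - r_1)}:
\qquad
\tau(15.5\ \text{\AA}) \approx 36\ \mathrm{ns} \times e^{1.36 \times 0.8} \approx 107\ \mathrm{ns},
\]

close to the quoted 104 ns; extrapolating the same scaling to the shorter D–A distance in 3 yields a lifetime of a few nanoseconds, the same order of magnitude as the observed ≈6 ns 3CSS lifetime.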
TDDFT calculations identify two triplet manifolds for 3, starting from two distinct and stable T₁ geometries, which are close in energy (0.43 kJ mol⁻¹). The major differences between these two triplet states are the structure of the PTZ moiety and the relative order of the 3IL(CT) and 3CSS states. The small predicted energy difference between these two states explains why their order is readily modulated by solvent polarity, and hence the lack of population of the 3CSS in hexane.
Conclusions
The intriguing photophysics of a new donor–bridge–acceptor Pt(II) triad bearing a strong NDI acceptor directly conjugated into the –C≡C–Pt–C≡C– bridge, 3, and of its donor-free precursors 1 and 2, has been resolved by a combination of ultrafast TRIR, transient absorption and fluorescence upconversion spectroscopies, and TDDFT, aided by electrochemical studies. We show that such conjugation of the strong acceptor invokes a switch from an intra-ligand to a charge-separated lowest excited state.
Comparison of the photophysical properties of 1–3 with their analogues bearing the weaker acceptor NAP [10,35] revealed a set of unexpected and counterintuitive observations. First, the rate of intersystem crossing is orders of magnitude higher in the NDI- than in the NAP-based DBA, despite a similar energy gap of 0.5 eV between the ¹CT–3IL states in 1–3 and the ¹CT–³CT states in the NAP analogues. Whilst a vast variety of ISC rates has been reported for Pt(II) complexes, the role of many factors (driving force, density of states, vibronic coupling) remains unclear.
Here, the increase in the ISC rate could be tentatively ascribed to the singlet manifold being more delocalised across the C≡C–Pt–C≡C bridge in the DBA with a stronger acceptor, promoting structural change and enhancing interstate coupling.
Secondly, the triad DBA 3 and the dyad 2 (–C≡C–Pt–C≡C– bridge, no donor) possess two intra-C≡C–NDI states: a "pure" 3IL state and a state with charge-transfer character, 3IL(CT). The 3IL(CT) state in 2 has an energy of 1.61 eV and a lifetime of 114 ns, vs. the 1.9 eV, 190 µs 3IL state in the NAP analogues; the drastic change in the lifetime cannot be explained by the mere 0.3 eV difference in ΔG, pointing to the importance of delocalisation across the bridge.
Further, the rate of charge separation in NAP–Pt–PTZ, with the lowest 3IL excited state and ΔG(³CT → 3CSS) = −0.5 to −0.3 eV, is ∼70 times higher, (14 ps)⁻¹ vs. the (971 ps)⁻¹ 3IL(CT) → 3CSS charge separation in 3, probably due to the smaller driving force (ΔG = −0.1 eV) and a greater through-space D–A distance.
Finally, and unexpectedly still, the lifetime of the 3CSS is six times longer in 3 than in NAP–Pt–PTZ, despite a ≈0.5 eV smaller driving force for charge recombination (ΔG_CR = −1.58 vs. −2.17 eV), due to the distance dependence of electron transfer in the inverted region.
We conclude that in the new DBA trans-acetylide triad 3, the introduction of a stronger acceptor directly conjugated into the bridge changes the nature of the lowest excited state to a charge-separated state, 3CSS. We observe a surprising slow-down of both charge separation and charge recombination in the DBA system with a stronger acceptor vs. its analogues with the same bridge and donor but a weaker acceptor. We attribute the first effect to the changes in the driving force for the forward electron transfer, which lies in the Marcus normal region. The second effect is likely due to the distance dependence of electron tunnelling in the inverted region. The efficiency of charge separation in 3 is strongly modulated by solvent polarity, with a full shutdown of charge separation in non-polar solvents. The results provide unexpected insights into the factors governing ultrafast dynamics, whilst the new DBAs offer a versatile basis for investigating the role of bridge vibrations and transient structural change in controlling charge separation.
Experimental
Synthesis, IR/UV-vis spectroelectrochemistry and cyclic voltammetry are detailed in the ESI.† FTIR spectra were recorded on a PerkinElmer Spectrum One spectrometer, UV-vis spectra on a Cary 50 Bio spectrophotometer (Agilent), and emission spectra on a Fluoromax-4 or Duetta fluorimeter (Horiba Scientific).
Transient Absorption (TA) experiments were performed in the Lord Porter Ultrafast Laser Spectroscopy Laboratory, Sheffield, on a Helios spectrometer (HE-VIS-NIR-3200, Ultrafast Systems).
A Ti:sapphire regenerative amplifier (Spitfire ACE PA-40, Spectra-Physics) provided 800 nm pulses (40 fs FWHM, 10 kHz, 1.2 mJ). The amplifier was seeded by 800 nm pulses (25 fs FWHM, 84 MHz) from a Ti:sapphire oscillator (MaiTai, Spectra-Physics). The two amplification stages of the Spitfire ACE were each pumped by an Nd:YLF laser (Empower). 500/520 nm pump pulses (80 fs FWHM, 2.5 kHz, 0.4 µJ) were generated by a TOPAS Prime (Light Conversion), pumped by the 800 nm (40 fs FWHM, 10 kHz, 0.5 mJ) output of the Spitfire ACE. The pump pulse passed through a 2.5 kHz mechanical chopper, to allow for the probing of both pumped and un-pumped sample. The pump pulse was depolarised and focused onto the sample cell (fused silica, internal pathlength 2 mm), to a spot diameter ≤0.3 mm. A broadband white-light probe pulse (340–750 nm) was generated by focusing a portion of the 800 nm output of the Spitfire ACE onto a 3 mm CaF₂ crystal. Before generating the white light, the 800 nm pulses were sent to a computer-controlled 8 ns optical delay line (DDS300, Thorlabs; 1.67 fs step). The white light was focused on the sample by a protected silver concave mirror (f = 50 mm). Kinetic analysis was performed as single points (OriginPro 2018) or by Global Lifetime Analysis (GLA).
Ultrafast Broadband Fluorescence Upconversion (FLUP) experiments were performed in the Lord Porter Ultrafast Laser Spectroscopy Laboratory, on a set-up developed by Ernsting [59] and supplied by LIOP-TEC GmbH. 485 nm pump pulses (200 fs FWHM, 10 kHz, 0.5 µJ) were generated as described above for the transient absorption experiments. The pump pulses were sent through a computer-controlled optical delay line (M-IMS400LM, Newport) providing a pump-gate delay of up to 2.7 ns. The polarization was set to the magic angle with respect to the vertical axis with a half-wave plate. Pump pulses were focused by a lens (f = 200 mm, fused silica) onto the 1 mm quartz sample cell to a spot of d < 0.1 mm. The 1320 nm gate pulses (80 fs FWHM, 10 kHz, 60 µJ) were generated by the same laser system as the pump pulse. The polarization of the gate pulse was set to horizontal using a wire-grid polariser and a half-wave plate. Emission from the sample was collected in a forward-scattering geometry, with any transmitted pump light blocked by a beam-stopper. The emission was directed onto a 100 µm thick β-barium borate crystal (EKSMA OPTICS), where it was upconverted (sum-frequency mixed) with the 1320 nm gate pulses. The emission and gate beams met at an angle of ∼21° at the crystal, within a spot of d = 0.6 mm. Type II phase-matching was used to achieve the broadest spectral window. The upconverted emission signal was spatially filtered and focused by a concave mirror onto a fibre bundle (Ceram Optek). A home-built spectrograph dispersed the signal onto a CCD (Andor, iDus DU440A-BU2). The 286–500 nm detection range corresponds to 360–780 nm emission.
Computational details
The majority of calculations were performed with the SMP version of the Gaussian09 package, rev. D.01 [61–68]. Excited state calculations were performed with linear-response adiabatic time-dependent DFT within the Tamm–Dancoff approximation (LR-TDA-TDDFT) [69]. Benchmark studies [70] showed that excellent agreement with experimental data was obtained at a reasonable computational cost using the PBE0 functional with the dhf-SVP basis set on Pt [71,72] and the def2-SVP basis set [73,74] on all other elements. All geometry optimizations were done using ultrafine integrals and were followed by frequency calculations. All minima show zero imaginary frequencies.
Fig. 2 (A) UV-vis absorption spectra of 1 (black), 2 (red) and 3 (blue) in DCM. Spectra are normalised to the peak of the lowest-energy absorption band. Inset: a zoomed-in view of the ML-LCT absorption envelope in hexane for 1–3. (B) Emission spectra of 1 (black), 2 (red) and 3 (blue) in aerated DCM at r.t., excitation 504 nm. The small sharp peaks at 585 and 590 nm are solvent Raman bands. The emission spectra of 1 and 2 are normalised; the spectrum of 3 is displayed relative to 2 (measured under identical conditions). Global analysis of the data is given in Fig. S3.† (B), inset: a FLUP 2D-map for 2, excitation 485 nm. FLUP data for 1 are given in the ESI (Fig. S2).†
Fig. 3 Infrared absorption spectra of 1 (black), 2 (red) and 3 (blue) at r.t. in DCM. The spectrum of the one-electron-reduced species of 1, 1⁻, obtained spectro-electrochemically in DCM (applied potential −1.3 V vs. Fc/Fc⁺), is shown as a dashed line.
The 1885 cm⁻¹ transient almost fully decays by 1 ps (its decay is seen more clearly on the lower-energy side, 1800–1850 cm⁻¹, where there are no overlapping signals; single-point decay kinetics in this region yield 0.3 ± 0.1 ps), whilst the 1962 and 2038 cm⁻¹ bands grow. By 4 ps, whilst the intensity of the 2038 cm⁻¹ band remains the same, a new band starts growing at 1936 cm⁻¹ and the electronic offset decreases. By 12 ps, the peak at 1962 cm⁻¹ has almost disappeared. The final ν(C≡C) transients at 1936 and 2038 cm⁻¹ persist over the timescale of the experiment. In the low-frequency region, two ν(CO) transients appear at 1642 and 1618 cm⁻¹, which over the following 15 ps grow and shift to 1643 and 1608 cm⁻¹, respectively, and undergo limited evolution thereafter. A combination of single-point kinetics and GLA of the data yields time constants of 0.3, 3 and 120 ps, and a 'constant' >8 ns in the acetylide region, and 0.25, 4, 162 ps, and a 'constant' in the carbonyl region. Overall, the TRIR dynamics of 2 can be described by 0.4 ± 0.2, 3 ± 1, 140 ± 20 ps, and >8 ns (114 ns was obtained by flash photolysis, Fig. S12†).
Fig. 6 Expanded view of Fig. 4, panel F: the TRIR spectra of 3 in the region 2060–2130 cm⁻¹, excitation 520 nm, in DCM. Inset: kinetic trace at 2097 cm⁻¹; solid line: the best fit obtained from a global analysis of the acetylide-region data.
Fig. 7 Transient absorption spectra (left panels, time delays stated) and kinetics (right panels, wavelengths stated) following excitation at 500 nm in DCM, for 1 (A and B), 2 (C and D) and 3 (E and F). (Bottom panels) The first 20 ps; the time axis is shown on a log scale. The solid traces in the kinetics are the results of a global fit of the data (see Fig. S12† for DAS, EAS).
Unlike 1, in 2 and 3 the broad ν(C≡C) transient at 1885 cm⁻¹ (¹CT) evolves not into one but into two bands over 20 ps. The ν(C≡C) evolution proceeds as 1885 → 1962 → 1936 cm⁻¹, with lifetimes of 0.3 and 3 ps for 2, and in a very similar manner for 3, 1885 → 1963 → 1928 cm⁻¹ (0.26 and 4.5 ps). The formation of two distinct ν(C≡C) bands reflects a stepwise change in electron density on the bridge over 20 ps, whilst the ν(CO) bands show only a limited spectral shift, indicating little change of electron density on the NDI. This behaviour suggests the presence of two IL excited states in 2 and 3 with slightly different electron densities on the Pt/C≡C bridge. The shift of the ν(C≡C) from 1962/1963 to lower energies, 1936/1928 cm⁻¹, indicates an increase in CT character; hence the state with the ν(C≡C) at 1936/1928 cm⁻¹ (2/3) will be termed "3IL(CT)"; this state is populated from a 3IL state with a time constant of around 140 ps in 2 and 126 ps in 3.
The T₁ state of 3 involves orbitals f₂₈₅, f₂₈₆ and f₂₈₇, whereas the T₁ state of 3-NAP involves f₂₆₄, f₂₆₅ and f₂₆₆. Correspondingly, the T₂ state of 3-NAP is reached by the excitation f₂₆₄ → f₂₆₅.
Fig. 8 A summary of the excited state dynamics of 1–3, following excitation at 520 nm in DCM. The energy values are estimated from cyclic voltammetry and emission data. Lifetimes for 1 are given in black, for 2 in red and for 3 in blue. The differential electron density plot for the lowest excited state in 3, the 3CSS, is also shown. A comparison with the energy level diagram for 3-NAP is given in the ESI (Scheme S1).†
Fig. 9 (A) Frontier orbitals for the T₁ state of 3 for both the α and β manifolds. The HOMO for the α manifold is orbital 287a, whereas the HOMO for the β manifold is orbital 285b. (B) Frontier orbitals for the T₁ state of 3-NAP for both the α and β manifolds. The HOMO for the α manifold is orbital 266a, whereas the HOMO for the β manifold is 264b.
Table 1 Absorption, emission, electrochemical and photophysical properties of 1, 2 and 3 in aerated DCM. Footnotes: the peak separation is given in brackets; the CVs are given in Fig. S4; f major excited state absorption peaks in TA and TRIR following 500–520 nm excitation; g spectro-electrochemical data; h ref. 35; i the methods from which the timescales have been derived are stated. CSS form: charge-separated state formation; CR: charge recombination.
"Chemistry"
] |
Effects of the Gas-Atomization Pressure and Annealing Temperature on the Microstructure and Performance of FeSiBCuNb Nanocrystalline Soft Magnetic Composites
FeSiBCuNb powders prepared by the gas atomization method generally exhibit a wide particle size distribution and a high degree of sphericity. In addition, the correspondingly prepared nanocrystalline soft magnetic composites (NSMCs) show good service stability. In this paper, the effects of the gas-atomization pressure and annealing temperature on the microstructure and soft magnetic properties of FeSiBCuNb powders and NSMCs are investigated. The results show that the powders obtained at a higher gas-atomization pressure possess a larger amorphous ratio and a smaller average crystallite size, which contribute to the better soft magnetic performance of the NSMCs. After being annealed at 550 °C for 60 min, the NSMCs show a much better performance than those treated by the stress-relief annealing process at 300 °C, which indicates that the improvement of the soft magnetic properties resulting from the precipitation of the α-Fe(Si) nanocrystals largely outweighs the deterioration caused by the grain growth of the pre-existing crystals. In addition, the annealed NSMCs prepared from the powders atomized at 4 MPa show the best performance in this work: μe = 33.32 (f = 100 kHz), Hc = 73.08 A/m and Pcv = 33.242 mW/cm³ (f = 100 kHz, Bm = 20 mT, sine wave).
Introduction
FeSiBCuNb nanocrystalline soft magnetic composites (NSMCs) show excellent soft magnetic properties, such as a high saturation magnetic induction (Bs) and permeability (µe), and a low coercivity (Hc) and loss (Pcv). They are widely used in electronic devices such as sensors, inductors, and transformers [1–3]. Compared with flaky powders prepared by melt-spinning and strip breakage, gas-atomized powders show a slightly lower amorphous ratio, but higher sphericity, a wider particle size distribution and lower preparation energy consumption [4]. As the spherical powders are easily insulation-coated, the correspondingly prepared NSMCs generally have good stability, a low eddy current loss (Pe) and a low magnetostriction coefficient (λs) [5]. Furthermore, the wide particle size distribution of the gas-atomized powders allows them to pack densely, which is beneficial for improving µe [6].
The parameters of gas atomization play a key role in the preparation of high-quality soft magnetic powders. Shi et al. [7] found that, with increasing gas-atomization pressure, the gas kinetic energy per unit mass of melt increased and the average particle size of the obtained FeCrMoNiPBCSi powders was smaller. Gao et al. [8] studied the effects of the melt flow rate, gas-atomization pressure and melt superheat on gas-atomized AlSi10Mg powders. They showed that, with a larger gas-atomization pressure and a higher melt superheat, the melt flow rate tended to be smaller, which led to a smaller average particle size of the powders. Alvarez et al. [9] studied the relationship between the cooling rate and gas atomization process parameters such as the droplet temperature, gas temperature, thermal conductivity and particle size of the powders. Ciftci et al. [10] found that the gas consumption and the crystalline fraction of FeCoPSiNb powders could be reduced by using a higher gas temperature. However, few studies systematically address the influence of the gas-atomization pressure on the microstructure of FeSiBCuNb powders and on the soft magnetic properties of the corresponding NSMCs.
Nanocrystalline soft magnetic materials are generally obtained by annealing amorphous soft magnetic materials at an appropriate temperature. The annealing process also has a significant effect on the soft magnetic properties of nanocrystalline materials [11,12]. Zhao et al. [13] systematically studied the effects of the annealing temperature on the Bs and Hc of FeSiBCuNbP nanocrystalline powders and on the µe and Pcv of the corresponding NSMCs. The results showed that, when the annealing temperature was between 400 °C and 500 °C, only the soft magnetic α-Fe nanocrystalline phase precipitated, and the NSMCs showed good comprehensive soft magnetic properties. However, when the annealing temperature was 550 °C, the comprehensive soft magnetic properties deteriorated sharply due to the precipitation of a hard magnetic phase. Meng et al. [14] systematically studied the effect of annealing time on the properties of nanocrystalline Fe83Si4B10P2Cu1 ribbons. Li et al. [15] studied the effect of annealing time on the properties of an Fe-B-P-C-Cu nanocrystalline alloy. They found that, when the annealing time was too long, the grain size increased excessively even though the residual stress could be released, which resulted in an increase of Hc and a decrease of µe. Luo et al. [16] systematically studied the effect of magnetic-field annealing on the properties of FeCoSiBCu amorphous powders. They found that magnetic-field annealing could refine grains by increasing the nucleation rate of nanocrystalline grains, which improved the soft magnetic properties. It is worth noting that the cooling rate during the gas atomization process is not particularly high, so in most cases the obtained powders are composed of nanocrystalline and amorphous phases. Thus, both crystallization of the amorphous matrix and grain growth of the nanocrystallites may occur during the annealing process, where the latter is harmful to the soft magnetic properties of the NSMCs. However, there is little research in this regard.
In this work, FeSiBCuNb spherical powders are prepared by the gas atomization method. Effects of the gas-atomization pressure on the microstructure and soft magnetic properties of FeSiBCuNb powders are systematically and quantitatively studied. Then, the corresponding NSMCs are prepared. The effects of the gas-atomized powders with different average particle sizes and amorphous ratios on the soft magnetic properties of FeSiBCuNb NSMCs, as well as effects of the annealing temperature on the soft magnetic properties of the NSMCs, are then investigated.
Preparation of the Powders
The commercial 1K107 Fe73.5Si13.5B9Cu1Nb3 bulk alloy is used as the raw material. The raw alloy ingot is supplied by Advanced Technology (Bazhou, China) Special Powder Co., Ltd. The smelting temperature is 1200 °C. Both the smelting and gas-atomization processes are carried out in an argon atmosphere. The gas-atomization pressures are 2 MPa, 3 MPa and 4 MPa. Fine powders passed through a 325-mesh sieve are used for the study. For convenience, the powders without annealing treatment are named P2-raw, P3-raw, and P4-raw. A portion of the powders is annealed at 300 °C or 550 °C. Accordingly, the powders after annealing treatment are named PX-Y, where X and Y refer to the corresponding gas-atomization pressure and annealing temperature, respectively.
Preparation and Heat-Treatment of the NSMCs
The fine powders are made into the NSMCs. REN60 silicone resin is selected as the insulating coating agent and adhesive, and acetone is selected as the solvent. Firstly, 3 wt.% REN60 silicone resin is uniformly dispersed in acetone by ultrasonic and mechanical stirring. Secondly, the powders are added to the mixed solution and stirred continuously until the acetone is completely volatilized. Then, the insulated powders are put into the mold after drying in a 60 °C oven for 60 min. The annular NSMCs are pressed under 1400 MPa; the pressurization rate is 360 MPa/min and the holding time is 1 min. Lastly, the NSMCs are cured at 180 °C for 60 min. Following the naming convention of the powders, the NSMCs are named CX-Y.
Characterization Techniques
The particle size distribution of the powders is measured using a laser particle size analyzer (Bettersize2600, Bettersize Instruments Ltd., Dandong, China). The phase microstructure is analyzed using X-ray diffraction (Rigaku D/max-rB, Cu Kα). Differential scanning calorimetry (TA Instruments, New Castle, DE, USA) is used to characterize the crystallization behavior of the powders at a heating rate of 40 °C/min. A scanning electron microscope (Hitachi Regulus 8100, Chiyoda, Japan) is used to examine the morphology of the powders. A soft magnetic direct current measuring instrument (DSMC-8200SD, Loudi, China) is used to measure the Hc of the NSMCs. The µe and DC-bias performance of the NSMCs are measured with an LCR meter (LCR-8210, Taiwan, China, 10 kHz~1 MHz). The Pcv of the NSMCs is measured by a soft magnetic alternating current analyzer (MAST-3000SA, Loudi, China, 10 kHz~1 MHz) at 20 mT.
Structure and Soft-Magnetic Properties of the Gas-Atomized Powders
The particle size distribution and the characteristic sizes of the powders atomized at different gas-atomization pressures are shown in Figure 1. One sees from Figure 1 that the FeSiBCuNb powders prepared at a higher gas-atomization pressure are not only finer but also show a narrower particle size distribution. With increasing gas-atomization pressure, the gas kinetic energy per unit mass of melt becomes higher. As a result, the melt is broken up more thoroughly, and the average particle size of the powders is finer. P4-raw is the finest powder, with D50 = 15.19 µm.
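For readers who want to reproduce the characteristic sizes, percentile diameters such as D50 can be interpolated from a measured cumulative size distribution. The following is a minimal Python sketch; the bin sizes and cumulative fractions are made-up illustrative values, not the data behind Figure 1.

```python
import numpy as np

# Hypothetical laser-diffraction data: bin centers (um) and the cumulative
# volume fraction finer than each size, as a laser particle size analyzer
# such as the Bettersize2600 would report.
sizes_um = np.array([5.0, 10.0, 15.0, 20.0, 30.0, 45.0])
cum_frac = np.array([0.08, 0.30, 0.49, 0.65, 0.88, 1.00])

def percentile_diameter(p, sizes, cum):
    """Interpolate the diameter below which a fraction p of the volume lies."""
    return float(np.interp(p, cum, sizes))

d10 = percentile_diameter(0.10, sizes_um, cum_frac)
d50 = percentile_diameter(0.50, sizes_um, cum_frac)  # median diameter
d90 = percentile_diameter(0.90, sizes_um, cum_frac)
print(f"D10 = {d10:.2f} um, D50 = {d50:.2f} um, D90 = {d90:.2f} um")
```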
The SEM images of the powders are shown in Figure 2, which includes images of all the raw powders made at the different gas-atomization pressures at both low and high magnification. The SEM images of P2-raw, P3-raw, and P4-raw at low magnification are shown in Figure 2a-c, respectively, while the corresponding images at high magnification are shown in Figure 2d-f. One sees from Figure 2 that the powders possess excellent sphericity and a smooth surface, which indicates that the powders can easily be coated; the correspondingly prepared NSMCs should therefore have good stability and relatively low loss. Moreover, there are almost no satellite powders.
To identify the phases contained in the FeSiBCuNb powders, the XRD patterns are shown in Figure 3. One sees from Figure 3 that there is a broad diffuse peak corresponding to the amorphous phase, while sharp peaks corresponding to the (110), (200), and (211) crystal planes of α-Fe(Si) appear distinctly at 2θ = 44.8°, 65.3°, and 82.7°, respectively [13]. With increasing gas-atomization pressure, the intensity of these three peaks decreases, indicating a decrease of the crystalline ratio. In addition, when the pressure is 4 MPa, there is only a very small diffraction peak at 2θ = 44.8°. To quantify the results of the XRD patterns, both the volume fraction of the amorphous phase (Vam) and the average crystallite size (d) are estimated according to Equation (1) [17] and Equation (2) [18], respectively.
Vam = Iam / (Iam + Icr)   (1)

d = Kλ / (β cos θ)   (2)

where Iam and Icr are the integral intensities of the diffraction peaks of the amorphous and crystalline phases, respectively, K is the shape factor, equal to 0.94, λ is the wavelength of the X-ray used, β is the full width at half maximum, and θ is the angle between the incident and the scattered X-ray. The calculated Vam values of P2-raw, P3-raw, and P4-raw are about 56%, 62%, and 92%, respectively, while the estimated d are 36 nm, 28 nm and 22 nm, correspondingly.
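As a worked illustration of Equations (1) and (2), the Python sketch below computes Vam from integrated intensities and d from the Scherrer formula. The intensities and peak width are invented numbers; the Cu Kα wavelength of 0.15406 nm is an assumption consistent with the diffractometer listed above.

```python
import numpy as np

def amorphous_fraction(i_am, i_cr):
    """Equation (1): Vam = Iam / (Iam + Icr), from integrated XRD intensities."""
    return i_am / (i_am + i_cr)

def scherrer_size(beta_deg, two_theta_deg, wavelength_nm=0.15406, k=0.94):
    """Equation (2): d = K*lambda / (beta * cos(theta)).
    beta is the FWHM in degrees (converted to radians); theta is taken as
    half of the 2-theta peak position. The Cu K-alpha wavelength is an
    assumed default."""
    beta = np.deg2rad(beta_deg)
    theta = np.deg2rad(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * np.cos(theta))

# Illustrative numbers only (not the paper's raw data):
print(f"Vam = {amorphous_fraction(9200.0, 800.0):.0%}")
print(f"d  = {scherrer_size(beta_deg=0.42, two_theta_deg=44.8):.1f} nm")
```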
In order to find a proper annealing temperature, the crystallization behavior of the powders is shown in the DSC curves in Figure 4. One sees that there are two peaks in each DSC curve. The first exothermic peak corresponds to the precipitation of the α-Fe(Si) phase, while the second peak corresponds to hard magnetic phases, including the Fe-boron phases [19][20][21][22][23]. It can be seen that the gas-atomization pressure has little effect on the onset temperatures (Tx1, Tx2) and the peak temperatures (Tp1, Tp2) of both the primary and secondary crystallization, as the positions of the peaks barely change with increasing gas-atomization pressure. Likewise, the gas-atomization pressure does not significantly affect the thermal stability of the powders. Furthermore, the temperature intervals between the two peaks are relatively large for all the powders (>135 °C), indicating the good nanocrystallization-forming ability of the FeSiBCuNb system [24,25].
The hysteresis loops of the powders are shown in Figure 5. It is well known that the microstructure, notably the crystallite size, essentially determines the hysteresis loop of a ferromagnetic material. When d is below 100 nm, the Hc of the powders is positively correlated with d^6 [26]. One sees that the Hc of P2-raw, P3-raw, and P4-raw are 1407.1 A/m, 721.9 A/m, and 288.2 A/m, respectively. The Hc of materials can be simply represented as follows [27]:

Hc = Pc · K1^4 · d^6 / (Js · A^3)   (3)

where Pc is a constant, Js is the saturation magnetization, K1 is the magneto-crystalline anisotropy constant, and A is the ferromagnetic exchange constant between adjacent grains. As mentioned above, the powders prepared with a higher gas-atomization pressure show a smaller d. Thus, according to Equation (3), the Hc of the powders should decrease sharply with increasing gas-atomization pressure, which is in accordance with the results shown in Figure 5. The Bs of P2-raw, P3-raw, and P4-raw are 1.25 T, 1.18 T, and 1.22 T, respectively. There is no significant association between the gas-atomization pressure and Bs, since Bs is mainly related to the content of the ferromagnetic elements.
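The d^6 dependence in Equation (3) can be made concrete with a short numerical sketch. Since the material constants cancel when coercivities are compared at fixed composition, the snippet below only evaluates the relative Hc predicted for the estimated crystallite sizes; it illustrates the scaling, not a fit to the measured values.

```python
# Relative coercivity from the d^6 scaling of Equation (3):
# Hc is proportional to K1^4 * d^6 / (Js * A^3), so at fixed composition
# the material constants cancel and Hc scales as d^6.
def relative_hc(d_nm, d_ref_nm=36.0):
    """Hc(d) / Hc(d_ref) under Equation (3); the prefactors cancel."""
    return (d_nm / d_ref_nm) ** 6

for label, d in [("P2-raw", 36), ("P3-raw", 28), ("P4-raw", 22)]:
    print(f"{label}: d = {d} nm -> Hc/Hc(P2-raw) = {relative_hc(d):.3f}")
```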
Soft-Magnetic Properties of the NSMCs
After preparing the gas-atomized powders into the NSMCs, the effects of the annealing temperature on the soft magnetic properties of the NSMCs are investigated. The dependence of µe on frequency (f) for the NSMCs is shown in Figure 6a. One sees that all the samples exhibit excellent high-frequency stability of µe, which remains essentially constant below 100 kHz. One sees from Figure 6 that, after the heat treatment, the µe of the FeSiBCuNb NSMCs prepared from the gas-atomized powder with a higher gas-atomization pressure is larger. The µe of the FeSiBCuNb NSMCs also increases with increasing annealing temperature. Figure 6b shows the µe of the NSMCs at f = 100 kHz. Among all the samples, C4-550 has the highest µe, which is 33.32 (f = 100 kHz). As is known, µe is closely related to residual stresses, defects, and the content of non-magnetic materials [28], which can be expressed by Equation (4),
where a and b are constants, σ is the residual stress, µ is the intrinsic permeability of the soft magnetic powders, and c is the non-magnetic material content. The non-magnetic material content mainly includes the air gaps and defects between powders during forming and the organic insulating coating agent, since these materials contain no magnetic elements. Generally, the c is mainly related to the electrical resistivity of the NSMCs. According to Equation (4), the µ e of the FeSiBCuNb NSMCs with a smaller d should be higher, which is in accordance with the results shown in Figures 3 and 6. Meanwhile, as the powders under a higher gas-atomization pressure show a smaller average particle size (see Figure 1), they pile up more easily. Thus, there is less air gap in the NSMCs made by the finer powders, which reduces the c and then causes the increase of µ e to some extent. In other words, under the same annealing treatment, the higher µ e of FeSiBCuNb NSMCs made by the powders with a higher gas-atomization pressure should be attributed to the lower d and c.
In order to reveal the effect of the annealing treatment on the d of the NSMCs, and to exclude the effect introduced by the insulation coating agent, the XRD patterns of the gas-atomized powders after the different annealing treatments are shown in Figure 7. Comparing Figure 7a with Figure 3, one sees that the amorphous ratio and the d of the powders barely change after annealing at 300 °C, where the slightly decreased amorphous ratio and slightly increased d result from the growth of the pre-existing nanocrystallites in the amorphous matrix. However, comparing Figure 7a with Figure 7b, one sees that the amorphous ratio decreases to 0% and the d decreases sharply for all the powders after annealing at 550 °C, which indicates that large amounts of α-Fe(Si) nanocrystalline grains with a smaller d have precipitated from the amorphous matrix, although further growth of the pre-existing nanocrystallites is also promoted. It is known that annealing at an appropriate temperature can not only release the residual stress caused by crystallization and cold pressing, but also promote the precipitation of nanocrystallites. Relief of the residual stress in the NSMCs reduces the pinning effect and the magnetic anisotropy, which reduces λs and thus improves μe [10], while the precipitation of nanocrystallites effectively increases μe, as the λs of the α-Fe(Si) nanocrystalline phase is lower than that of the amorphous matrix. Thus, it can be inferred that the increased μe of the NSMCs after annealing at 300 °C mainly results from the relief of residual stress, since both the amorphous ratio and the d barely change compared with those without annealing treatment. Compared with annealing at 300 °C, the largely increased μe of the NSMCs after annealing at 550 °C should mainly be attributed to the obviously decreased d of the gas-atomized FeSiBCuNb powders, according to Equation (4). Furthermore, with increasing annealing temperature, the evaporation of the insulation coating agent reduces c to some extent, which also leads to an increase of μe.
The DC-bias performance of the NSMCs is shown in Figure 8. One sees that the NSMCs made from the powders with a higher gas-atomization pressure show a lower DC-bias performance. The DC-bias performance also worsens with increasing annealing temperature, which is opposite to the trend for µe: the higher the µe, the greater the magnetic induction intensity of the NSMCs under the same applied magnetic field and, in turn, the easier it is for the magnetic induction to reach saturation. At an applied magnetic field of 100 Oe, the best DC-bias performance in this study reaches 85%, corresponding to C2-raw (see Figure 8), while the worst is about 57%, as the precipitation of the nanocrystals in C4-550 largely increases its µe. NSMCs with excellent DC-bias performance can be widely used in high-current applications.
The dependence of the Hc and the Pcv of the NSMCs on frequency is shown in Figure 9a,b, respectively. One sees from Figure 9 that Hc and Pcv follow a similar trend. In addition, both the Hc and Pcv of the NSMCs prepared from the powders with a higher gas-atomization pressure are smaller under the same annealing treatment. Compared with those without heat treatment, the Pcv of the FeSiBCuNb NSMCs decreases after annealing at 300 °C, and decreases further after annealing at 550 °C. As is known, Pcv comprises the hysteresis loss (Ph), Pe and the residual loss (Pr) [29]. Pr generally results from magnetization relaxation and resonance of the domain walls, and it can be ignored in most cases [30]. Thus, the Pcv of the NSMCs can be represented as the sum of a hysteresis term and an eddy-current term, where Kh and Ke are the hysteresis loss coefficient and the eddy current loss coefficient, respectively, and Bm is the maximum magnetic induction strength; Ph scales linearly with frequency, while Pe scales with the square of the frequency. Due to the presence of the insulation coating, the Pe of the NSMCs is relatively small. As a result, Ph dominates the Pcv of the NSMCs, and Kh or Ph is positively correlated with the Hc of the NSMCs. According to Equation (3), Hc decreases with decreasing d. Thus, under the same annealing treatment, the lower Pcv of the FeSiBCuNb NSMCs made from the powders with a higher gas-atomization pressure should be attributed to the lower Hc of the raw powders, as these powders show a smaller d and a higher Vam. Compared with the NSMCs without annealing, both Hc and Pcv of the NSMCs decrease after annealing at 300 °C, and decrease further after annealing at 550 °C. As mentioned previously, the stress-relief annealing treatment at 300 °C has only a slight effect on grain growth but largely reduces the residual stress in the NSMCs, which reduces the Kh and Hc of the NSMCs. Meanwhile, the nanocrystallization annealing treatment at 550 °C can not only further reduce the residual stress but also promote the precipitation of α-Fe(Si) nanocrystals from the amorphous matrix, which in turn further decreases Hc and Pcv.
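Because Ph scales with f and Pe with f^2, the two contributions can be separated at fixed Bm by fitting Pcv/f as a linear function of f (the intercept gives the hysteresis term, the slope the eddy-current term). A minimal Python sketch of this standard loss-separation procedure follows; the frequency and loss values are fabricated for illustration and are not the measured data of Figure 9.

```python
import numpy as np

# Loss separation at fixed Bm: with Pr neglected, Pcv(f) = Kh'*f + Ke'*f^2,
# so Pcv/f is linear in f. The data points below are made up for
# illustration (units: kHz and mW/cm^3).
f = np.array([10.0, 25.0, 50.0, 75.0, 100.0])   # frequency, kHz
pcv = np.array([2.9, 7.8, 16.5, 26.0, 36.4])    # total loss, mW/cm^3

slope, intercept = np.polyfit(f, pcv / f, 1)    # fit Pcv/f = a + b*f
ph = intercept * f                              # hysteresis contribution
pe = slope * f**2                               # eddy-current contribution
for fi, phi, pei in zip(f, ph, pe):
    print(f"f = {fi:5.1f} kHz: Ph = {phi:5.2f}, Pe = {pei:5.2f} mW/cm^3")
```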
Conclusions
In this work, the effects of the gas-atomization pressure on the size distribution, microstructure and magnetic properties of FeSiBCuNb powders, as well as the effects of the annealing temperature on the soft magnetic properties of the corresponding NSMCs, are systematically studied. The main conclusions are as follows:
(1) The obtained powders contain the amorphous phase and the α-Fe(Si) phase. With increasing gas-atomization pressure, the soft magnetic properties of the corresponding powders and NSMCs improve, which can be attributed to the smaller d and the larger amorphous ratio of the powders. The powders prepared at a gas-atomization pressure of 4 MPa without annealing treatment show the highest amorphous ratio of 92% in this study.
(2) After annealing treatment, the µe of the NSMCs increases compared with the raw NSMCs, while Hc and Pcv decrease. In addition, the nanocrystallization annealing treatment at 550 °C optimizes the soft magnetic properties of the NSMCs much more than the stress-relief annealing treatment at 300 °C. It can thus be inferred that the improvement of the soft magnetic properties resulting from the precipitation of the α-Fe(Si) nanocrystals largely outweighs the deterioration caused by grain growth.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement:
All data included in this study are available upon request by contacting the corresponding author.
"Materials Science"
] |
Simultaneous Estimation of Overall Score and Subscores Using MIRT, HO-IRT and Bi-factor Model on TIMSS Data
In educational testing, there is an increasing interest in the simultaneous estimation of overall scores and subscores. This study aims to compare the reliability and precision of the simultaneous estimation of overall scores and subscores using the MIRT, HO-IRT and Bi-factor models. TIMSS 2015 mathematics scores were used as the data set in this study. The TIMSS 2015 mathematics test consists of 35 items, four of which are polytomously scored (0-1-2), while the rest are dichotomously scored (0-1). The four content domains are number (14 items), algebra (9 items), geometry (6 items), and data and chance (6 items). Ability parameters were estimated using the BMIRT software. The results showed that the MIRT and HO-IRT methods performed similarly in terms of precision and reliability for subscore estimates. The MIRT maximum information method had the smallest standard error of measurement for the overall score estimates. All three methods performed similarly in terms of overall score reliability. The findings suggest that, among the three methods compared, HO-IRT appears to be the better choice for the simultaneous estimation of the overall score and subscores for the TIMSS 2015 data. Recommendations for testing practice and future research are provided.
INTRODUCTION
Many tests in educational and psychological testing measure more than one ability, which makes them inherently multidimensional (Reckase, 1985; 1997). Tests may be inherently multidimensional due to their intended content or construct structure (Ackerman, Gierl, & Walker, 2003). Tests consisting of different content domains often measure a primary ability and additional abilities; thus, each item measures the primary ability and one additional secondary ability. Content categories can be considered the source of secondary abilities. That is, while the primary ability is the estimated overall score, subscores for content categories are considered secondary abilities (DeMars, 2005). Subscores estimated from secondary abilities have recently gained substantial importance (DeMars, 2005; Reckase & Xu, 2015; Sinharay, Haberman, & Wainer, 2011; Wedman & Lyren, 2015), because of their potential diagnostic value in remedial work, in which students have a chance to learn their weaknesses and strengths in the different content domains that the test measures (Haberman & Sinharay, 2010). Haberman (2008) and Sinharay (2010) focused on the added value of subscores over the total score by using Classical Test Theory methods. Brennan (2012) suggested a utility index similar to Haberman's method. Besides, the subscore augmentation method developed by Wainer, Sheehan, and Wang (2000) is used to examine whether getting information from other portions of the test (the augmented subscore) estimates the subscore more accurately.
In multidimensional tests, the reported overall score shows the test-takers' achievement level on the overall construct of the test subject. Subscores, on the other hand, give additional information about the strengths and weaknesses of test-takers in the domain abilities, while the overall score presents a general profile of the test-takers. For example, the TOEFL test, which is an English-language test, has four content domains (reading, listening, speaking, and writing). For this test, test-takers receive four subscores related to each skill and a total score representing general English-language ability. Since many tests have a multidimensional structure, interest in estimating and reporting overall scores and subscores simultaneously has increased (Liu & Liu, 2017). Simultaneous estimation of those scores provides test takers and educators with more detailed information about the primary and secondary ability levels of students (Yao, 2010). More clearly, as opposed to the separate estimation of the primary and secondary abilities, simultaneous estimation means one can obtain information on those abilities with one single analysis.
There are studies discussing the methods estimating the overall score and subscores simultaneously (de la Torre & Song, 2009;de la Torre & Song, 2010;Liu, Li, & Liu, 2018;Soysal & Kelecioğlu, 2018;Yao, 2010). In all these studies, it is emphasized that the reliability of scores is very important when the overall scores and subscores need to be reported. Yao (2010) states that the simple averaging method is the most commonly used method to obtain the overall score by averaging the domain scores. She also indicates that simply averaging the domain scores ignores (a) different maximum raw score points of different domains, (b) correlation between the domain abilities, and (c) the possibility of having a different relationship between overall scores and domain scores at different score points. In order to overcome these problems, Yao (2010) proposed using the Multidimensional Item Response Theory (MIRT) maximum information method for the overall score instead of the simple averaging method. The proposed method does not assume any linear relationship between the overall score and domain scores. In the study, subscores were estimated by using MIRT, and the overall scores were estimated by using the MIRT maximum information method. Estimated overall and subscores were compared to those obtained from the Higher-Order Item Response Theory (HO-IRT), Bi-factor, and unidimensional IRT methods. It is found that the MIRT method provides reliable subscores similar to the HO-IRT method and also reliable overall score. The MIRT maximum information method produced overall scores with the smallest standard error of measurement (Yao, 2010).
de la Torre and Song (2009) also proposed using a higher-order item response theory approach for the simultaneous estimation of overall and domain abilities. The HO-IRT method assumes a linear relationship between the overall score and the domain scores, unlike the MIRT method. In their study, the HO-IRT method was compared with unidimensional IRT (UIRT), in which the overall ability is estimated using all items, ignoring the multidimensional structure of the data, and the domain abilities are estimated separately using the corresponding subsets of items. The findings of the study show that the overall and domain abilities can be estimated more efficiently by using the HO-IRT method. Additionally, in the HO-IRT framework, it is possible to obtain efficient overall and domain ability estimates with small sample sizes and a small number of items (de la Torre & Song, 2010).
To estimate the overall score and domain scores based on the bi-factor model, Liu et al. (2018) introduced six methods in the framework of the bi-factor model and compared them with the MIRT method. The weights of the general and domain factors were calculated in different ways in those six bi-factor methods. It was found that the most accurate and reliable overall and domain scores in most conditions were obtained using the Bi-factor-M4 and Bi-factor-M6 methods, whose weights were computed using the discrimination parameters for a specific domain. In the bi-factor methods, the domain-specific factors are orthogonal to the general factor and to each other, unlike in the MIRT and HO-IRT methods.
Related research regarding the simultaneous estimation of the overall score and subscores is few in number (de la Torre & Song, 2010; Liu et al., 2018; Soysal & Kelecioğlu, 2018; Yao, 2010). The present study aims to contribute to this line of research. The purpose of the study is to investigate which method of simultaneous estimation of the overall score and subscores yields more accurate and reliable ability estimates. For this purpose, MIRT, HO-IRT, and the bi-factor general model, the most frequently suggested methods in the literature, were used. This study also differs from earlier research in that it runs the analysis on mixed-format data, including both dichotomously and polytomously scored items, whereas earlier studies used data consisting of only dichotomously or only polytomously scored items. Using mixed-format data is thought to be important, since tests containing a mixture of multiple-choice and constructed-response items are used in many testing situations (Lane, 2005; Yao & Schwarz, 2006).
Multidimensional Item Response Theory
Multidimensional Item Response Theory is a method that provides "a reasonably accurate representation of the relationship between persons' locations in a multidimensional space and the probabilities of their responses to a test item" (Reckase, 2009, p. 53) with a particular mathematical expression. An essential distinction between MIRT models related to the structure of the data is whether the probability of responses to any test item is influenced by one latent dimension or not. If this is the case, the structure of the data is defined as between-item dimensionality (simple-structure). If responses to one item are affected by more than one ability, then, it is denoted as within-item dimensionality (complex structure; Adams, Wilson, & Wang, 1997). In this study, the data were assumed to follow a simple structure because each item was modeled as depending on one specific ability dimension.
Additionally, there are several models within MIRT, varying basically in the number of possible score points for the items: MIRT models for dichotomously scored items and MIRT models for polytomously scored items. All of the MIRT models can be considered generalizations of unidimensional IRT models (Reckase, 1997). However, many tests contain both dichotomously and polytomously scored items on the same test form, which creates a need to use different item response models together (Yao & Schwarz, 2006). The TIMSS mathematics achievement test also contains mixed item types. Therefore, in the present study, the TIMSS data were examined using the multidimensional three-parameter logistic (M-3PL) model for dichotomously scored items and the multidimensional two-parameter partial credit model (M-2PPC) for polytomously scored items, as suggested in the study of Yao and Schwarz (2006). For a dichotomous item j, the probability of a correct response for an examinee with ability vector θi = (θi1, θi2, ..., θiD) under the M-3PL model (Reckase, 1997) is

P(xij = 1 | θi, βj) = β3j + (1 − β3j) · exp(β2j ⊙ θi − β1j) / [1 + exp(β2j ⊙ θi − β1j)]   (1)

where
xij = the response of examinee i to item j,
βj = the parameters for the j-th item (β2j, β1j, β3j),
β2j = a vector of dimension D of item discrimination parameters (β2j1, …, β2jD),
β1j = the scale difficulty parameter,
β3j = the scale guessing parameter, and
⊙ = the dot product of two vectors.
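A minimal Python sketch of the M-3PL expression reconstructed above is given below; the ability, discrimination, difficulty, and guessing values are invented for illustration.

```python
import numpy as np

def m3pl_prob(theta, a, b, c):
    """P(x = 1) for the M-3PL model as reconstructed above:
    P = c + (1 - c) * logistic(a . theta - b).
    theta: ability vector (D,); a: discrimination vector (D,);
    b: scale difficulty; c: scale guessing parameter."""
    z = np.dot(a, theta) - b
    return c + (1.0 - c) / (1.0 + np.exp(-z))

# Illustrative values for a 4-dimensional (content-domain) ability space;
# simple structure means the item loads on only one domain:
theta = np.array([0.5, -0.2, 0.0, 1.0])
a = np.array([1.2, 0.0, 0.0, 0.0])
print(f"P(correct) = {m3pl_prob(theta, a, b=0.3, c=0.2):.3f}")
```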
For a polytomous item j, the probability of a response k − 1 to item j for an examinee with ability vector θi under the M-2PPC model (Yao & Schwarz, 2006) is

P(xij = k − 1 | θi) = exp[(k − 1)(β2j ⊙ θi) − Σ_{t=1}^{k} βtj] / Σ_{c=1}^{Kj} exp[(c − 1)(β2j ⊙ θi) − Σ_{t=1}^{c} βtj]   (2)

where
xij = the response of examinee i to item j (0, …, Kj − 1),
β2j = a vector of dimension D of item discrimination parameters (β2j1, …, β2jD),
βkj = the threshold parameters for k = 1, 2, …, Kj, with β1j = 0, and
Kj = the number of response categories for the j-th item.
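Likewise, the M-2PPC category probabilities can be evaluated directly from the reconstructed expression. In the sketch below the item parameters are invented, and a log-sum-exp shift is used for numerical stability.

```python
import numpy as np

def m2ppc_probs(theta, a, thresholds):
    """Category probabilities for the M-2PPC model as reconstructed above.
    thresholds = (beta_1j, ..., beta_Kj) with beta_1j = 0; returns the
    vector of P(x = k - 1 | theta) for k = 1..K."""
    s = np.dot(a, theta)             # discrimination-weighted ability
    csum = np.cumsum(thresholds)     # running sums of the thresholds
    k = np.arange(len(thresholds))   # response levels 0..K-1
    logits = k * s - csum
    logits -= logits.max()           # log-sum-exp shift for stability
    p = np.exp(logits)
    return p / p.sum()

# A 3-category item (scored 0-1-2) loading on the first of four domains;
# all parameter values here are invented:
theta = np.array([0.8, 0.0, 0.0, 0.0])
a = np.array([1.0, 0.0, 0.0, 0.0])
print(m2ppc_probs(theta, a, thresholds=np.array([0.0, -0.2, 0.5])))
```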
Higher-Order Item Response Theory
de la Torre and Song (2009) proposed a higher-order multidimensional IRT approach in which overall and domain abilities can be specified simultaneously. In this model, the first order describes domainspecific abilities, while the second-order can be viewed as the overall ability. It is considered that each domain is unidimensional; the second-order ability contains all the domain abilities, so the overall ability is also viewed as unidimensional. de la Torre and Hong (2010) stated that a test is deemed multiunidimensional in the HO-IRT framework.
The HO-IRT method uses a hierarchical Bayesian framework (de la Torre et al., 2011), and the domain abilities are considered linear functions of the overall ability, expressed as

θi(d) = λ(d) · θi + εi(d)   (3)

where
θi = the overall ability,
θi(d) = the domain-specific ability, d = 1, 2, …, D,
λ(d) = the latent coefficient in regressing ability d on the overall ability, and
εi(d) = the error term, following a normal distribution with a mean of zero and variance of 1 − (λ(d))², with |λ(d)| ≤ 1.
The latent regression coefficient, λ(d), is also the correlation between the overall and domain abilities. Mathematically, λ(d) can take negative values, but it is generally expected to be positive, since domain abilities are typically positively related to the overall ability.
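Equation (3) also provides a direct recipe for generating domain abilities that are standardized and correlate λ(d) with the overall ability. The Python sketch below illustrates this; the λ values and sample size are illustrative assumptions, not estimates from the TIMSS data.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_ho_irt_abilities(n, lambdas):
    """Draw overall and domain abilities per Equation (3):
    theta_d = lambda_d * theta + e_d with e_d ~ N(0, 1 - lambda_d^2),
    so each domain ability has unit variance and
    corr(theta, theta_d) = lambda_d."""
    lambdas = np.asarray(lambdas, dtype=float)
    theta = rng.standard_normal(n)                    # overall ability
    noise = rng.standard_normal((n, lambdas.size))
    domains = theta[:, None] * lambdas + noise * np.sqrt(1.0 - lambdas**2)
    return theta, domains

# Illustrative latent regression coefficients for four domains:
theta, domains = simulate_ho_irt_abilities(5732, [0.9, 0.85, 0.8, 0.75])
print(np.corrcoef(theta, domains[:, 0])[0, 1])  # approximately 0.9
```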
Focusing on estimating the abilities of test-takers (Equation 3), the model parameters that need to be estimated are the overall ability, the domain abilities, and the latent regression parameters λ(1), λ(2), …, λ(D), which are estimated jointly within a hierarchical Bayesian framework (de la Torre & Song, 2009).
Bi-factor General Model
The bi-factor model (Gibbons & Hedeker, 1992) defines a general factor, on which all the items load, and domain-specific factors, on which only the items related to that dimension load. The domain-specific factors are orthogonal to the general factor. The method provides estimates of the overall ability and the domain abilities at the same time. The domain factors are considered nuisance traits within the bi-factor framework, which yields a more meaningful overall ability (DeMars, 2013; Yao, 2010). Cai, Yang, and Hansen (2011) demonstrated the factor pattern of the standard item bi-factor measurement structure as

a11  a12  0
a21  a22  0
a31  a32  0
a41  0    a43
a51  0    a53
a61  0    a63
As seen in the pattern, there are six items, one general factor and two domain-specific factors. The a's are the item discrimination parameters, which are similar to factor loadings. The first column corresponds to the general factor, and the last two columns refer to the domain factors (Cai et al., 2011).
As defined in Liu et al.'s (2018) study, in the vector of item discrimination parameters, only the discrimination parameter for the general factor (ag) and the discrimination parameter of the s-th subscale (as) have values other than zero. The ability vector of each examinee includes one overall ability for the general factor (θg) and domain-specific abilities for the S specific factors (θ1, …, θs, …, θS).
Based on the bi-factor model, the estimation of the overall score and domain scores can be expressed as

θoverall = wg1 · θg + ws1 · (θ1 + … + θS)   (7)

θs(domain) = wg2 · θg + ws2 · θs   (8)

where
wg1 = the weight of the general factor for the overall score,
ws1 = the weight of the domain factors for the overall score,
wg2 = the weight of the general factor for the domain scores, and
ws2 = the weight of the domain factors for the domain scores.
Thus, the overall score (Equation 7) is a weighted composite of the general factor (θg) and all domain factors (θ1, …, θs, …, θS), while the domain score (Equation 8) for the s-th factor is a weighted composite of the general factor (θg) and the relevant domain-specific factor (θs). In the current study, the bi-factor general model was employed using 1 and 0 as the weights, as in the study of Yao (2010): wg1 = 1, ws1 = 0 and wg2 = 0, ws2 = 1. In this method, the general factor represents the overall score, while the domain-specific factors represent the subscores.
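A small Python sketch of Equations (7) and (8) follows, using the weight setting of the bi-factor general model described above (wg1 = 1, ws1 = 0, wg2 = 0, ws2 = 1); the factor scores are invented for illustration.

```python
import numpy as np

def bifactor_scores(theta_g, theta_s, wg1, ws1, wg2, ws2):
    """Weighted composites per Equations (7)-(8): the overall score combines
    the general factor and all domain factors; each domain score combines
    the general factor and its own specific factor."""
    overall = wg1 * theta_g + ws1 * theta_s.sum(axis=1)
    domain = wg2 * theta_g[:, None] + ws2 * theta_s
    return overall, domain

# Invented factor scores for two examinees and four domain-specific factors:
theta_g = np.array([0.3, -1.1])
theta_s = np.array([[0.5, -0.2, 0.1, 0.0],
                    [0.0, 0.4, -0.3, 0.2]])
overall, domain = bifactor_scores(theta_g, theta_s, wg1=1, ws1=0, wg2=0, ws2=1)
print(overall)  # equals theta_g under this weight setting
print(domain)   # equals theta_s under this weight setting
```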
Data Description
Eighth graders' responses to the mathematics test in the Trends in International Mathematics and Science Study (TIMSS) 2015 were used in this study. Each country's data from the 1st booklet of the mathematics achievement test were merged into a single data set. The 1st booklet was chosen because it has the largest number of polytomously scored items (four items). For handling missing data, the listwise deletion method was utilized, because the researchers aimed to analyze the data of the subjects who answered all of the items. The final data set consists of 5732 students from all the countries who were administered the 1st assessment booklet in TIMSS 2015. Table 1 shows the distribution of scoring types and contents for the chosen test form. As shown in Table 1, the test has four content domains: number (14 items), algebra (9 items), geometry (6 items), and data and chance (6 items). The total number of items is 35, four of which are polytomously scored (0-1-2), while the rest are dichotomously scored (0-1).
Dimensionality analysis
In order to improve interpretations and uses of scores, the dimensional structure of the data is essential for obtaining evidence of validity (Reckase & Xu, 2015). Dimensionality shows the relationship between a test and response patterns, which gives clues about the latent structure measured by the test. Wainer and Thissen (1996) mention fixed and random forms of dimensionality. While random dimensionality is a concept explaining the possibility of encountering some "unexpected" dimensions, fixed dimensionality is a somewhat "expected" situation. In particular, it is usual to see multidimensionality in scores when the test has multiple content domains, so it can be assumed that the data have a multidimensional structure when the test has content domains. Under this circumstance, it is more reasonable and effective to use confirmatory dimensionality assessment (Zhang, 2016). Therefore, confirmatory methods were used to assess the dimensionality structure of the data in this study: Confirmatory Factor Analysis (CFA) and the content-based confirmatory mode of Poly-DETECT (Zhang & Stout, 1999a, 1999b; Zhang, 2007).
The Poly-DETECT analysis was carried out with the sirt package (Robitzsch, 2018). The analysis yields the DETECT, ASSI and RATIO indices; the criteria for evaluating these indices are presented in Table 2 (Jang & Roussos, 2007; Zhang, 2007).
Estimating overall score and subscores
Three estimation methods (MIRT, HO-IRT, and Bi-factor) were used to obtain the overall score (mathematics achievement) and subscores (number, algebra, geometry, and data and chance) for 5732 test takers who were administered the first booklet of TIMSS 2015. Ability parameters for the methods were estimated using the BMIRT software (Yao, 2003;Yao, 2013;Yao, Lewis, & Zhang, 2008). In the present study, the data were analyzed using the M-3PL model for dichotomously-scored items, and the M-2PPC applied to polytomously-scored items for all of the estimation methods. The following are brief explanations of the estimation methods and what they estimate in the context of the current data: -MIRT: the simple structure MIRT analysis was used to estimate abilities based on four content domains. It gives four thetas (θ), each of which represents single subscore. The overall score was obtained by domain scores using maximum information method as in Yao (2010).
-HO-IRT: It is assumed that there is a linear relationship between the overall score and subscores, so the parameters for the overall ability and domain abilities were estimated simultaneously.
-Bi-factor: The Bi-factor general model estimated five abilities. The first one was the general dimension, and the other four abilities were content-specific dimensions, respectively. In the bifactor model, content-specific dimensions are orthogonal to each other and the general dimension, and there is no correlation between dimensions.
The default priors of the BMIRT software were used for the analyses in this study. The mean and variance of the ability prior distribution were 0.0 and 1.0, respectively. The priors for the discrimination parameters were lognormal with a mean of 1.5 and variance of 1.5. For the difficulty or threshold parameters, a normal distribution with a mean of 0.0 and variance of 1.5 was used. The guessing parameter c had a beta(α, β) prior, with α = 100 and β = 400.
Evaluation criteria
The conditional standard error of measurement (cSEM) was used to evaluate the precision of the overall scores and subscores. The BMIRT program calculated the cSEM values of each student's ability parameters under the studied methods, which estimate the overall and domain scores simultaneously. Then, a repeated-measures analysis of variance (ANOVA) on the cSEM values was conducted to examine whether there is a significant difference among the mean errors of the estimation methods.
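A repeated-measures ANOVA of this kind can be run, for instance, with the AnovaRM class in the Python statsmodels package. The sketch below uses fabricated cSEM values for three methods measured on the same examinees; it only illustrates the analysis layout, not the study's results.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
n = 200  # examinees per method (the study used 5732)

# Fabricated cSEM values for one domain ability: every examinee has one
# cSEM under each of the three estimation methods (a within-subject design).
base = rng.normal(0.45, 0.05, n)
df = pd.DataFrame({
    "subject": np.tile(np.arange(n), 3),
    "method": np.repeat(["MIRT", "HO-IRT", "Bifactor"], n),
    "cSEM": np.concatenate([base + 0.02, base, base + 0.10])
            + rng.normal(0.0, 0.01, 3 * n),
})

# Repeated-measures ANOVA: do mean cSEMs differ across estimation methods?
print(AnovaRM(df, depvar="cSEM", subject="subject", within=["method"]).fit())
```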
The other criterion for the evaluation of the methods is reliability. A method proposed by de la Torre and Patz (2005), called Bayesian marginal (or empirical) reliability (Brown & Croudace, 2015), was applied in this study. The reliability of test d can be obtained from

ρd = σ²d(observed) / σ²d(marginal)   (9)

The observed (Equation 10) and marginal posterior (Equation 11) variances of the overall or domain ability estimates are computed from the estimated ability scores θ̂ and their standard errors (SE) in a sample of N test takers:

σ²d(observed) = (1/N) Σi (θ̂id − θ̄d)²   (10)

σ²d(marginal) = σ²d(observed) + (1/N) Σi SE(θ̂id)²   (11)

For this study, reliability measures for the one overall score and the four subscores were obtained from the equations above for each studied method. Higher marginal reliability indicates higher reliability of the scores from the method tested (Md Desa, 2012).
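A minimal Python implementation of the reliability computation reconstructed in Equations (9)-(11) is sketched below; the ability estimates and standard errors are simulated placeholders, not the study's estimates.

```python
import numpy as np

def empirical_reliability(theta_hat, se):
    """Bayesian marginal (empirical) reliability per Equations (9)-(11):
    observed variance of the ability estimates divided by the marginal
    variance (observed variance plus mean squared SE)."""
    obs_var = np.var(theta_hat)             # Equation (10)
    marg_var = obs_var + np.mean(se**2)     # Equation (11)
    return obs_var / marg_var               # Equation (9)

# Simulated placeholder estimates and standard errors for one domain:
rng = np.random.default_rng(1)
theta_hat = rng.normal(0.0, 0.9, 5732)
se = np.full(5732, 0.35)
print(f"rho = {empirical_reliability(theta_hat, se):.3f}")
```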
Dimensionality Analysis
Poly-DETECT (confirmatory mode) and Confirmatory Factor Analysis were conducted in order to examine the multidimensionality due to the content domains for the mixed-format TIMSS data used in this study. Table 3 shows the results of the content-based Poly-DETECT analysis. The results indicate an essential deviation from unidimensionality, with ASSI = .459 and RATIO = .522. The DETECT index of .406 indicates moderate multidimensionality. The values of the indices obtained from the Poly-DETECT analysis thus provide evidence of multidimensionality for the current data.
A four-factor model was tested through CFA. The content domains with related items were taken as factors, and the model fit was evaluated. Fit indices for the data and the associated criteria are presented in Table 4.
CFI and TLI indicated that the model fits the data well (≥ 0.95). Likewise, the RMSEA value (≤ 0.05) showed a good fit (Table 4). According to the results of CFA, the four-factor model had a good fit with the present data, which supported content-based multidimensionality. After providing evidence of the content-based multidimensionality of the data, the overall and domain abilities were obtained with the aforementioned methods.
Precision of Estimates
The three selected methods (MIRT, HO-IRT, and Bi-factor) were used, through the BMIRT program, to estimate the overall scores and subscores simultaneously. BMIRT also provided standard errors for the estimated scores. The means and standard deviations of the standard errors of the overall and domain ability estimates under each estimation method are summarized in Table 5. Generally, MIRT and HO-IRT yielded similar results, but the HO-IRT estimation method performed slightly better than MIRT for the domain abilities. The Bi-factor model gave the largest standard errors for the domain abilities among all the methods, and standard errors similar to MIRT for the overall ability. The repeated-measures ANOVA results on whether the differences between the standard errors are statistically significant are presented in Table 6. As shown in Table 5, the HO-IRT method had the lowest standard errors for all domain abilities, and MIRT had the second-lowest standard errors. The domain abilities from the Bi-factor model were not as accurate as those from the other two methods.
Therefore, it can be concluded that HO-IRT elicited a statistically significant reduction in the standard errors of the domain ability estimates. Likewise, the overall ability results showed that the standard errors were significantly affected by the type of estimation method (F(1.692, 9696.490)overall = 8162.767, p < .05, partial η² = .588). Post hoc tests using the Bonferroni correction revealed that all pairwise comparisons were significantly different from each other. HO-IRT had the highest mean standard error for the overall ability, while the MIRT and Bi-factor models had low and similar standard errors. In general, the three estimation methods differed significantly for all the abilities, including the overall and domain abilities.
Reliability of Scores
The overall and four domain ability estimates from the studied methods were compared in terms of marginal reliability; the estimated reliability coefficients are presented in Table 7, which shows the Bayesian marginal reliability of the overall score and the subscores based on the four content domains. In general, MIRT and HO-IRT had substantially higher reliability across all content domains compared to the Bi-factor model. The reliability of the Bi-factor model was extremely low for the domain scores, especially for geometry (0.253) and data and chance (0.161). In addition, the reliability varied slightly between domains for MIRT and HO-IRT. The reliability coefficients of the HO-IRT subscores were 0.894 for number, 0.838 for algebra, 0.824 for geometry, and 0.809 for data and chance. It can be concluded that HO-IRT was the most reliable method for estimating subscores, followed by MIRT, for all content domains for the data used in the current study. Furthermore, the reliabilities of all methods decreased as the number of items in a domain decreased. The reliability of the overall score was 0.816 for MIRT, 0.815 for HO-IRT, and 0.876 for Bi-factor. Unlike for the subscores, the Bi-factor model was the most reliable method for the overall score estimation. The other two methods (MIRT and HO-IRT) also estimated the overall score with high reliability.
DISCUSSION and CONCLUSION
When the overall and domain abilities are reported to test takers and used by authorities, it is important to obtain accurate and reliable estimates of the overall score and subscores. Overall scores are useful for reporting test takers' general achievement and for making important decisions such as rank-ordering the test takers. The subscores, on the other hand, provide test takers, teachers, and policymakers with more diagnostic information, such as strengths and weaknesses in each domain. The simultaneous estimation of both kinds of scores within a single model can address both needs.
This study examined three methods of estimating the overall score and subscores simultaneously in the same model (MIRT, HO-IRT, and Bi-factor) and compared the reliability and precision of these methods across the overall and domain ability estimates. For this purpose, real data with mixed item types from TIMSS 2015 were used. The results of Poly-DETECT and CFA provided evidence for the content-based multidimensional structure of the data. The study showed that the MIRT and HO-IRT methods performed similarly in terms of precision and reliability for subscore estimates, although HO-IRT had slightly lower standard errors and higher reliability than MIRT. Likewise, de la Torre and Song (2009) stated that domain ability estimates can be made more efficient by using the HO-IRT model, and Yao (2010) found that MIRT and HO-IRT were quite similar in terms of estimating subscores. The precise ability estimation and reliable scores obtained with HO-IRT also supported the use of subscores for reporting for the current data. The Bi-factor general model had the highest standard errors and the lowest reliability estimates for the domain scores. Liu et al. (2018) likewise did not recommend the original Bi-factor method for reporting scores; they proposed six other methods of reporting overall and subscores as weighted composite scores of the overall and domain-specific factors in a bi-factor model.
For the overall ability estimation, the MIRT maximum information method and the Bi-factor model outperformed the HO-IRT method with regard to standard errors. The MIRT maximum information method had the smallest standard error of measurement for the overall score estimates, as in the study of Yao (2010). While all three methods performed similarly and relatively well in terms of overall score reliability, the reliability of the Bi-factor model was slightly higher than that of the other two methods.
The analyses of the current study suggest that, overall, HO-IRT appears to be the best solution for the simultaneous estimation of the overall score and subscores for the data from TIMSS 2015. Soysal and Kelecioğlu (2018) also recommended the use of HO-IRT for estimating overall scores and subscores in their study.
In the present study, only real data were used to examine the relative performance of the three methods; since the true model underlying the data is not known, it is quite possible to obtain different results for other samples. Future research could therefore repeat the comparison using other real datasets. It is also advisable that, when the overall and domain abilities must be estimated simultaneously in testing practice, the relative performance of the estimation methods be checked before the scores are reported to test takers.
"Education",
"Mathematics"
] |
TECHNO-ECONOMIC ASPECT OF THE MAN-IN-THE-MIDDLE ATTACKS
This paper analyzes some aspects of man-in-the-middle (MITM) attacks. After a short introduction, which outlines the essence of this attack, the scientific methods and hypotheses used are presented. The next chapter presents the technology of MITM attacks and the benefits that a successful attack provides to the attacker. Some of the most significant examples of such attacks, which had a larger scale and a significant impact on the broader Internet community, are presented. This part of the article ends with an analysis of possible protection against MITM attacks. Later, on the basis of available data, an analysis of MITM attacks from an economic point of view is given. The Conclusion summarizes the whole analysis.
Introduction
Every IT expert has heard of man-in-the-middle (MITM) attacks, but this type of attack is rarely described in detail and well classified, and the benefits the attacker hopes to attain are rarely shown. The aim of this paper is to present an analysis of the technology of MITM attacks, of their relationship with other types of attacks, and of some economic factors in this regard.
In a successful MITM attack, the attacker gains the ability to receive data and retransmit it, either unchanged or after modifying it, so the result can be eavesdropping or manipulation.
Every IP implementation must include the Internet Control Message Protocol (ICMP). To provide services in a safe way, most Internet applications use encrypted connections provided by the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols on the application layer. Although SSL/TLS can create a two-way trust relationship, because of the complexity of administration it is mostly used with a one-way trust relationship, which means that only one participant can validate the connection. This way of applying SSL/TLS represents a weakness that can be exploited by an attacker.
Articles about MITM attacks can be found in many sources, for example [1-6].
In the past, MITM attacks mainly affected laptops, but now the mass population of cell phone users can be under attack, and it is hard to expect such a diverse crowd to protect itself. Beyond standard attacks on IP traffic and data, MITM attacks can also target mobile devices themselves, which is particularly worrying. A successful attack can allow a hacker to identify a person's location, intercept messages or even eavesdrop on conversations [7].
Used scientific methods and hypotheses
The methodological basis of this research includes the principles of the systemic-functional approach to the analysis of phenomena. In justifying the theoretical propositions and arguments, the following scientific methods were widely used: the hypothetico-deductive method, the axiomatic method, the analytical-deductive method, the comparative method, scientific abstraction, induction and deduction, synthesis, comparative analysis, analysis of time series, graphical interpretation, etc. Using the basic features, resolving power, and analytical base of each of these methods, in accordance with their epistemological potential, made it possible to achieve the goal of the article within a single algorithmic framework and to provide high representativeness of the results and conclusions.

The null hypothesis was set as H0: "MITM attacks are extremely rare and cause no damage to any user." The alternative hypothesis was set as H1: "MITM attacks are not extremely rare and can cause losses to victims."
Are MITM attacks rare?
Man-in-the-middle attacks existed long before the appearance of computers. One good example might be a malicious postman who opens people's letters and takes or changes their contents before handing the letters over to their recipients. Today, however, man-in-the-middle attacks are essentially network-based eavesdropping and/or manipulation attacks.
According to McAfee research [11], the most frequent attacks are denial-of-service and browser attacks; together, they make up 64% of all attacks, and together with SSL attacks they can form the building blocks of an MITM attack. Many protocols that are used every day are vulnerable to various attacks in one way or another, simply because it is quite hard to devise a protocol that is completely secure against MITM. Most solutions are only "best effort", not "completely and absolutely secure" solutions.
Are the "man in the middle" attacks actually rare in the real world? Data say that MITM is quite credible for concern. The Dutch High Tech Crime Unit's data say that according to their 32 data breach, statistics 15 involved MITM actions [12, p. 69].
In June 2015, 49 persons were arrested in Europe on suspicion of using MITM attacks to sniff out and intercept payment requests from email [13]. This fraud was at the level of €6 million and was conducted in a "very short time"; the targets were medium and large European companies. A similar attack targeted customers of Absa, one of the Big Four banks in South Africa, in 2013. In that case, fraudsters made a fake site that looked very professional; users who reached it by clicking on a link in a phishing e-mail (a good reason to avoid doing so; instead, type in the URL yourself) were asked to enter their passwords and the Random Verification Number code that Absa sends to mobile phones as a one-time password [13]. The whole scam was carried out with a lot of errors, but it was nevertheless successful in many cases. Although in e-banking some of the controls brought in by banks (two-factor authentication etc.) were applied to combat the attacks on customers, this case shows that they are not always sufficient.
There are also many other ways to attack e-banking users, such as using malware to place a Trojan on the client PC, but MITM is still relatively easy in most cases. The main reasons for carrying out MITM attacks are low risks (both physical and of being caught), the fact that some effort in coding the exploit can lead to real-world monetary gain, and the fact that the code can then be reused or sold to other criminals.
MITM technology
The man-in-the-middle attack, using different techniques, aims to intercept communication between two nodes, a client and a server. The attacker splits the original TCP connection into two new connections. Once the TCP connection is intercepted, the attacker gets the opportunity to read, insert and modify the data in the intercepted communication [5].
An example of an MITM attack is shown in Fig. 1: the attacker has interrupted the connection between the victims and usurped the role of a proxy.
Fig. 1 An example of the MITM attack
Source: Authors, based on [3]

How to conduct a simple man-in-the-middle attack is described in detail in the eponymous article [8]; therefore, it will not be presented here. After catching the username and the password, the attacker has all that he needs to attack. The attacker gains additional benefits if the victim uses the same username and password for all services and systems.
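Since the attack steps from [8] are not reproduced here, the following is only a minimal illustration of the ARP cache poisoning technique commonly used to take the proxy position shown in Fig. 1. It assumes the scapy library; all addresses are placeholders, and running it requires administrative privileges.

```python
# Illustrative sketch of ARP cache poisoning (educational use only).
# Assumes the scapy library; IP and MAC addresses are placeholders.
from scapy.all import ARP, send

VICTIM_IP, GATEWAY_IP = "192.168.1.10", "192.168.1.1"
VICTIM_MAC = "aa:bb:cc:dd:ee:ff"

# Tell the victim that the gateway's IP now maps to the attacker's MAC
# (scapy fills in the source MAC of the sending interface by default).
poison = ARP(op=2,             # op=2: ARP reply ("is-at")
             pdst=VICTIM_IP,   # victim receives the reply
             hwdst=VICTIM_MAC,
             psrc=GATEWAY_IP)  # spoofed sender IP: the gateway

send(poison, inter=2, loop=1)  # re-send every 2 s to keep the cache poisoned
```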
The main drawback of millions of HTTPS, SSH and VPN servers is that the prime numbers their Diffie-Hellman key exchanges rely on are all the same. With advancements in technology, new algorithms have appeared, such as the Number Field Sieve, which can very efficiently break Diffie-Hellman connections that use such shared primes. Nowadays, when two wireless devices share their secret keys by creating a secure channel between them, this is nothing but the Diffie-Hellman exchange [9]. More about scanning for victims, auto-detection of local interfaces and default gateways, setting up MITM attacks against victims and routers, IP forwarding, and restoring the victim after the attack can be found in numerous sources, e.g. [9] or [10].

One attack of enormous size using MITM technology was performed by the NSA in 2013, with the aim of compromising Tor. Previous attempts had failed to break Tor directly, but this attack was more successful, using vulnerabilities in Firefox to target certain Tor users. The attack was possible because major telcos let the NSA put servers directly on the backbone. A more detailed explanation of this attack can be found in [22].
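Returning to the Diffie-Hellman weakness discussed above: the exchange itself is a simple modular-exponentiation protocol, and its security rests on the size and uniqueness of the prime p. A toy sketch follows, with a deliberately small prime, purely for illustration.

```python
# Toy Diffie-Hellman exchange (insecure parameters, illustration only).
import secrets

p = 4294967291  # toy 32-bit prime; real deployments need large primes,
                # and reuse of one prime across millions of servers is
                # exactly what precomputation attacks exploit
g = 2

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)  # sent over the wire
B = pow(g, b, p)  # sent over the wire

# Both sides derive the same shared secret without transmitting it
assert pow(B, a, p) == pow(A, b, p)
```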
One of the more recent man-in-the-middle attacks, in July 2015, was the hacking of a Jeep Cherokee, which caused a major recall by Chrysler Corporation. Without important security safeguards being put in place and rigorously tested, hackers can eventually control a vehicle's basic functions, such as brakes, steering, and acceleration, which can be highly dangerous [23]. A modern car may be connected to multiple networks, including cellular, V2V/V2I/V2X, Bluetooth, Wi-Fi and wired Automotive Ethernet, and this appears as an added risk. Many people still do not realize it, but besides TVs, the IoT will soon involve many devices such as washing machines, refrigerators, etc. Each home device will have an IP address and will therefore be vulnerable to attacks.
On March 26, 2016, GitHub experienced the largest DDoS (distributed denial of service) attack in its history. The attack involved a wide combination of attack vectors, including every vector seen in previous attacks as well as some sophisticated new techniques that used the web browsers of unsuspecting, uninvolved people to flood github.com with high levels of traffic [24]. Netresec made a deeper analysis of this attack and concluded that China was using its active and passive network infrastructure to perform a packet-injection attack, known as a man-on-the-side attack, against GitHub [25]. The man-on-the-side attack is similar to the MITM attack and uses similar technology, but with less control over a network node.
On October 21, 2016, a series of DDoS attacks caused serious disruption of legitimate Internet activity in the US. The attacks targeted the Domain Name System and were perpetrated by directing huge amounts of bogus traffic at targeted servers belonging to Dyn, a major provider of DNS services to other companies. Many activities, such as online shopping and social media interaction, were unavailable for periods of time. The length of the disruptions varied, but in some cases it was several hours. Detailed information about the October 21 attack can be found in [26].
And finally, the answer to "Are MITM attacks rare?" is: No! Some more stringent analysts consider any instance of an SSL root issuing a bad certificate to be a sign of an attack. One should always bear in mind that an MITM can be part of a denial-of-service attack [27].
It seems that the larger problem for the attackers is how to launder the stolen money without being detected, rather than the risk of the fraud being revealed.
However, the theft of money is not always the goal of the scam. Some say that their "…employer does an MITM attack on us. They use it in order to monitor our email and prevent us from sending attachments" [14]. Michael Hex [15] claims that MITM attacks within companies happen daily, and more than once a day. Others think that MITM attacks are "common enough to be an official government policy" [16].
One of the most interesting incidents happened in 2008, when FARC (the Revolutionary Armed Forces of Colombia) was targeted by a series of DoS and MITM attacks in order to free 15 hostages. These attacks freed them without a single round of ammunition being fired [17].
One of the first well-known MITM attacks was the Mitnick attack. To take over a session, Mitnick exploited the basic design of the TCP/IP protocol. The attack was performed by identifying weaknesses of the network and collecting the necessary information, silencing the actual network server and replacing it with his own computer, and hijacking the session.
Mitnick's attack on Shimomura's computer is described in detail in [18]. An identical attack is impossible nowadays because rsh is no longer used; SSH is used instead [19].
Nowadays many other possible scenarios exist:
- command injection: useful where one-time authentication is used;
- malicious code injection: insertion of malicious code into an email or web page;
- key exchange attacks: modification of the public keys exchanged by server and client;
- parameter and banner substitution: parameters exchanged by server and client can be substituted at the beginning of a connection; for example, the attacker can force the client to initialize an SSH1 connection instead of SSH2;
- IPSEC failure: blocking the key material exchanged on UDP port 500; if the client is configured in rollback mode, there is a good chance that the user will not notice that the connection is in clear text;
- PPTP attacks: the Point-to-Point Tunneling Protocol, as a method for implementing VPNs, has many known security issues;
- transparent proxy: the attacker adds his own URL in front when the victim loads the URL of a defaced web page.

More details about the scenarios mentioned can be found in [20-21].
Unless the attacker takes some obvious action when he hijacks a session, one may never know that an attacker was there. A few things can be done to better defend against session hijacking [30]: do online banking from home, be cognizant and keep an eye out for things that seem unusual, and secure one's own internal machines, since such attacks are mostly executed from inside the network.
SSL hijacking is virtually undetectable from the server side, because for the server the communication with the client looks quite normal: it cannot see that it is communicating with a proxy. Some things can be done from the client's side [31]: ensure secure connections using HTTPS, do online banking from home, and secure one's own internal machines.
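One concrete client-side measure along these lines is certificate pinning: comparing the fingerprint of the certificate actually presented with a known-good value recorded earlier. A minimal sketch using only the Python standard library follows; the host name and the expected fingerprint are placeholders.

```python
# Sketch: detect SSL hijacking by pinning a server certificate.
# Host and expected fingerprint are placeholders.
import hashlib
import ssl

HOST, PORT = "bank.example.com", 443
EXPECTED_SHA256 = "0" * 64  # fill in the known-good fingerprint

pem = ssl.get_server_certificate((HOST, PORT))
der = ssl.PEM_cert_to_DER_cert(pem)
fingerprint = hashlib.sha256(der).hexdigest()

if fingerprint != EXPECTED_SHA256:
    raise SystemExit("Certificate mismatch: possible MITM proxy in path")
print("Certificate matches the pinned fingerprint")
```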
The Economic Aspect
It is rather rare to find real world data on MITM attacks. One of the reasons is that MITM attacks are by their nature usually targeted at individuals. On the other hand, "a lot of the attacks you hear about are just the tip of the iceberg. Banks often won't even tell an affected customer that they have been a victim of these man-in-the-middle attacks" [32]. Franklin also said: " 'man-in-the-browser' attacks are emerging to compete in popularity with middleman threat", and that (in Europe, Middle East and Africa, in 2007) "3.5 million adults remembered revealing sensitive personal or financial information to a phisher, while 2.3 million said that they had lost money because of phishing. The average loss is US$1,250 per victim".
The situation with defining costs caused by MITM attacks is more complicated when we know that, as mentioned earlier, MITM attacks are closely connected with the major attacks, including DDoS.
Fig. 2 The average attack length
Source: Authors, based on [34]

Analyzing the DDoS attack of October 21, 2016, Lafrance [33] noted that already in 2014, "for more than one-third of companies, a single hour of a DDoS attack can cost up to $20,000". Matthews [34], upon examination, concluded that "the data reveals there are no predictable patterns as to how long an assault will last"; some statistics are given in Fig. 2. It is easy to multiply the cost per hour by the number of hours and the number of attacks: for some companies, the total can reach millions. "The airline Virgin Blue lost $20 million in a period of IT outages that spanned 11 days in 2010" [33].
How to confront MITM attacks?
Michael Gregg [28] named six ways in which one can become a victim of an MITM attack, including:
- Man-in-the-browser,
- Man-in-the-mobile,
- Man-in-the-app,
- Man-in-the-cloud, and
- Man-in-the-IoT.
This is a great variety of possible attacks. Complete elimination of MITM attacks is a very difficult task, but a careful user can significantly reduce the risk.
Several security vendors have solutions to scan encrypted traffic (for example, Palo Alto Networks, Kaspersky Internet Security 2015, etc.), and companies can activate this feature. To do this, the firewall/proxy device is simply granted a certificate from an internal Certificate Authority (CA) which is already trusted by all clients. When an application asks for a secure connection, the firewall/proxy device generates a new certificate for the target server on the fly and sends it to the client. Since the client trusts the internal CA, it also trusts the device certificate and will happily start a "secure" connection.
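The on-the-fly certificate generation described above can be sketched with the Python cryptography library; ca_key and ca_cert are assumed to be the internal CA's key pair, loaded elsewhere, and the snippet is a simplified illustration rather than a production implementation.

```python
# Sketch: how an inspecting proxy could mint a per-host certificate
# signed by an internal CA. ca_key and ca_cert are assumed to exist.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

def mint_cert(hostname: str, ca_key, ca_cert) -> x509.Certificate:
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)])
    now = datetime.datetime.now(datetime.timezone.utc)
    return (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(ca_cert.subject)    # issued by the internal CA
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=1))
        .sign(ca_key, hashes.SHA256())   # clients trusting the CA accept it
    )
```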
MITM attacks are the preferred choice of attack for surveillance groups who want to sniff the data on a connection [9]. From the defender's point of view, ARP cache poisoning happens in the background, with very few chances for the user to notice it. Although difficult, some countermeasures can be adopted to provide a shield. There is no catch-all solution, but proactive and reactive measures can be taken.
New, patched and updated operating systems must be used on a network, and security should be a primary concern when designing the network [9]. If the network configuration does not change frequently, it is quite feasible to make a listing of static ARP entries and deploy them to clients via an automated script, as sketched below. This ensures that devices rely on their local ARP cache rather than on ARP requests and replies [6]. This way the process is a little less dynamic.
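A deployment script of the kind mentioned above can be as simple as iterating over a known host table. The sketch below targets Linux clients; the IP-to-MAC table is a placeholder, and the ip command requires root privileges.

```python
# Sketch: pin static ARP entries on a Linux client (run as root).
# The IP-to-MAC table is a placeholder for the real network inventory.
import subprocess

STATIC_ARP = {
    "192.168.1.1": "00:11:22:33:44:55",  # gateway
    "192.168.1.2": "00:11:22:33:44:56",  # file server
}
IFACE = "eth0"

for ip, mac in STATIC_ARP.items():
    subprocess.run(
        ["ip", "neigh", "replace", ip, "lladdr", mac,
         "dev", IFACE, "nud", "permanent"],
        check=True,
    )
```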
DNS spoofing is mostly passive by nature, so it is difficult to defend against. Users never know that their DNS is being spoofed until it has happened. In very targeted attacks it is possible that the user may never know that he has been tricked into entering his credentials into a false site until he receives a bill from his bank. But there are still a few things that can be done to defend against these types of attacks [29]: securing internal machines, not relying on DNS for secure systems, using an IDS, and using DNSSEC.
Fig. 3 shows the actors that influence the cost of incident resolving. Third-party involvement, for example, increased the cost by $14.
Attacks that prevent companies from doing business on the Internet mostly affect companies that are more Internet-oriented, and especially companies that operate in the most developed countries. The costs of business losses were particularly high in the case of US companies, as shown in Fig. 4.
These losses include reputation losses, diminished goodwill, increased customer acquisition activities, and abnormal customer turnover.
Conclusions
Theft and eavesdropping have existed since the beginning of time; today they are largely migrating to the Internet. The struggle is constant, and attackers usually take full advantage of the knowledge and technology at their disposal. MITM attacks, despite some limitations, remain an effective technique for carrying out attacks and acquiring illegal benefits. They are performed in different versions, but with the same basic idea, and an MITM is often combined with other attacks or built into them.
MITM attacks are usually performed in order to acquire some benefit, financial or non-financial. In cases where private individuals are attacked, the attacks often remain undiscovered and statistically unrecorded. In cases of attacks on economic operators, the attacks often remain hidden from the public to preserve the company's image, so in these cases it is also difficult to accurately assess the consequences. Only in cases of large-scale attacks, when they hit a lot of Internet users, does the extent of the damage come to light.
Despite difficulties in collecting relevant data, this analysis on some examples showed the extent of the damage that can be caused by MITM attacks. Also, the analysis showed that the most vulnerable are mobile devices and Wi-Fi data transmission and that the biggest threat to users is when they are connected to the Internet via a public Wi-Fi connection.
It is not possible to provide protection that would be effective in all circumstances and in all situations, but for all users it is a good idea not to use public Wi-Fi when doing anything sensitive and/or confidential.

The analysis also showed the great potential of the IoT, as well as the risks that may arise from insufficient protection.
Finally, the research showed that the null hypothesis H0 is rejected: the MITM threat is real and can bring significant losses to victims. In this way the alternative hypothesis H1 is supported.
Per the Ponemon global study of the 2016 cost of data breaches [35], which covered 383 companies, the average total cost increased from $3.79 million (in 2015) to $4 million (in 2016). The average cost of a stolen or lost record containing sensitive information increased from $154 (in 2015) to $158 (in 2016). Compared to 2013, the total cost of a data breach has increased by 29%, and the per-capita cost by 15%. It is interesting that the risks from a data breach are not evenly distributed: organizations in Brazil and South Africa are much more exposed to material data breaches than organizations in Germany and Australia.
The Ponemon analysis showed that the cost per compromised record, or per capita, is on average at the level of $158. The highest values are in healthcare organizations with $335, followed by education ($246), transportation ($129), research ($112), and the public sector ($80). Most data breaches were caused by hackers and criminal insiders; the analysis showed that 48% were caused by criminal attacks. The average per-record cost of resolving criminal attacks was $170, while the costs of system glitches and human errors were $138 and $133, respectively. Resolving attacks was the most expensive in the US ($236 per record) and the cheapest in India ($76 per record).
Fig. 4 Lost business costs for 383 companies in US$ million
Source: [35]
"Computer Science"
] |
Two-point Functions and Bootstrap Applications in Quantum Field Theories
We study two-point functions of local operators and their spectral representation in UV complete quantum field theories in generic dimensions focusing on conserved currents and the stress-tensor. We establish the connection with the central charges of the UV and IR fixed points. We re-derive c-theorems in 2d and show the absence of their direct analogs in higher dimensions. We conclude by focusing on quantum field theories with a mass gap. We study the stress tensor two-particle form factor, derive implications of unitarity and define concrete bootstrap problems in generic dimensions.
Introduction
The numerical S-matrix bootstrap program was recently revived in [1][2][3] and received further attention in [4][5][6][7][8][9][10][11][12][13][14][15]. This program allows one to numerically construct scattering amplitudes which obey crossing and unitarity at all energies. In [16] the authors proposed to extend the S-matrix bootstrap program to accommodate form factors and spectral densities of local operators in a general number of dimensions.¹ When preparing [16] it became clear that a systematic treatment of two-point functions, spectral densities and their relation to central charges in a generic number of dimensions was missing in the literature.

The first goal of this work is to fill this gap. The second goal is to define concrete bootstrap problems in higher dimensions. In sections 2 and 3 we provide the main definitions and set up the formalism. The main results are given in sections 4, 5 and 6. More precisely, in section 4 we compute explicitly the spectral densities of conserved currents and the stress-tensor in conformal field theories. In section 5 we show that in generic quantum field theories in d ≥ 3 the asymptotic behavior of the spectral densities of conserved currents and the stress-tensor is driven by the central charges, whereas in d = 2 we obtain integral sum-rules for the central charges which lead to the "c-theorems".² In section 6, focusing on quantum field theories with a mass gap, we discuss the stress-tensor two-particle form factor and partial amplitudes. We then derive semi-positive definite constraints coming from unitarity and discuss applications to bootstrap. Various computations and technical details supporting the main text are given in appendices A-G.

All our results and conclusions are clearly stated in sections 4, 5 and 6. As a consequence we do not dedicate a separate section to conclusions. In order to somewhat compensate for this, and also to facilitate the reading of the paper, we provide, however, an extended summary of the paper and its key points.
Summary of the paper
In section 2 we study Euclidean two-point functions. Their most general form compatible with rotational and translational invariance is given by (2.1) and (2.23) for conserved currents and the stress-tensor respectively. In the presence of conformal symmetry, the two-point functions of conserved currents and the stress-tensor are completely fixed up to the numerical coefficients C_J and C_T called the central charges, see (2.4) and (2.25). In a generic quantum field theory (QFT) we assume that both its UV and IR fixed points are described by the UV and the IR conformal field theory (CFT).³ At the level of two-point functions, this requirement translates into the conditions (2.7), (2.8) and (2.26), (2.27). The most important results of section 2 are the integral expressions for the difference of the UV and IR central charges, given in (2.12) and (2.32). In section 2 we also show that two-point functions may contain a parity odd part in d = 2 and d = 3 dimensions. In d = 2 it is completely fixed by the global anomaly C̃_J for conserved currents and by the gravitational anomaly C̃_T for the stress-tensor.⁴ In d = 3 the parity odd part does not contain any information about the UV and IR fixed points.
In section 3 we study Wightman and time-ordered two-point functions in the Lorentzian signature.⁵ In section 3.1 we define the spectral densities as Fourier-transformed Wightman two-point functions. We define the components of the spectral densities as the coefficients in their decomposition into a basis of tensor structures. This basis is constructed from the projectors (objects mapping finite irreducible representations of the Lorentz group into finite irreducible representations of the Little group), see (3.18) and (3.40) for their explicit expressions. We then show the non-negativity of the components of the spectral densities. The explicit spectral decomposition of the two-point function of conserved currents and of the stress-tensor is given in (3.26) and (3.51) respectively. We study time-ordered two-point functions in section 3.2. Their spectral decomposition, known as the Källén-Lehmann representation, in the case of conserved currents and the stress-tensor is given in (3.59) and (3.64) respectively. Under the Wick rotation the time-ordered two-point functions get mapped precisely onto the Euclidean two-point functions. This allows one to define the Källén-Lehmann spectral decomposition of Euclidean two-point functions.
In conformal field theories the Wightman two-point functions are completely fixed by the conformal symmetry, hence the spectral densities are also completely fixed. In section 4 we explicitly compute the components of the spectral densities in the case of Lorentz spin one and Lorentz spin two operators, see (4.10) and (4.13).
In section 5 we show that the central charges C_J and C_T in d ≥ 3 define the asymptotic behavior of certain components of the spectral densities, see (5.2) and (5.15). In d = 2 we recover the known integral expressions (5.6) and (5.19); the latter immediately prove the "c-theorems" in d = 2. In order to show all these statements systematically, we employ the sum-rules (2.12) and (2.32) and perform the Källén-Lehmann decomposition of their integrands. We provide the technical details of this strategy in appendix F.
In section 6 we discuss bootstrap applications. We start in section 6.1 by studying the two-particle form factor of the stress-tensor. We discuss its generic form, the relation to the stress-tensor spectral density and its projections to definite Little group spin, see (6.8), (6.19) and (6.29). In section 6.2 we derive the unitarity constraints as semi-positive conditions on the matrices involving partial amplitudes, the stress-tensor form factor and the stress-tensor spectral density, see (6.30), (6.36) and (6.41). In section 6.3 we define concrete bootstrap problems which can be studied with modern numerical techniques.
Notation
Let us comment on the notation of the paper. We will use Latin letters to indicate the Euclidean space directions, a, b = 0, 1, 2, ..., d − 1. Throughout the text we will also use the following (manifestly translation-invariant) objects: $x^a_{12} \equiv x^a_1 - x^a_2$ and $r \equiv |x_{12}|$. We will also sometimes use vector notation for spatial coordinates.
Euclidean two-point functions
We start by studying two-point functions in the Euclidean signature; we refer to them as the Euclidean two-point functions. We attribute the a = 0 component of the Euclidean coordinate x^a to Euclidean time. The Euclidean two-point functions are "time-ordered" with respect to this Euclidean time, see appendix C for details. In what follows we study Euclidean two-point functions of conserved currents and the stress-tensor at non-coincident points.⁶ We derive their most general form, fixed by rotational and translational invariance,⁷ define the central charges, and derive integral expressions (sum-rules) that they satisfy. This section builds on the ideas presented in [47,48].

⁶ Treating coincident points correctly is very difficult due to the presence of contact terms, see for example [35] and section 3.1 of [36] for a discussion of two-point functions in CFTs. Luckily, in position space one can often avoid talking about them. The situation is different in momentum space, where one has to integrate over the whole space including the coincident points; from this perspective, working with momentum-space correlators is much more difficult. For works on CFT correlators in momentum space see [35][36][37][38][39][40][41][42][43][44][45].

⁷ For a concrete perturbative computation of time-ordered two-point functions of the stress-tensor in gauge theories see [46].
Conserved currents
Consider the local conserved current J^a(x). Such an operator is generally present in systems with a U(1) symmetry; the generalization to the case of non-Abelian symmetries is trivial.⁸ Due to rotational and translational invariance, the Euclidean two-point function of conserved currents has the following generic form

$$\langle 0|J^a(x_1)\,J^b(x_2)|0\rangle_E = \frac{1}{r^{2(d-1)}}\Big( h_1(r)\,\delta^{ab} + h_2(r)\,\frac{x^a_{12}\,x^b_{12}}{r^2} + \sum_n i\,g_n(r)\,T^{ab}_n(x_1,x_2)\Big), \qquad r \equiv |x_{12}|, \quad (2.1)$$

where h_1(r), h_2(r) and g_n(r) are dimensionless functions which contain the dynamical information of a particular theory, and the T^{ab}_n are the parity odd tensor structures (structures containing a single Levi-Civita symbol).⁹ Since the form of the Levi-Civita symbol depends on the number of dimensions, the parity odd tensor structures should be discussed separately for each dimension; we postpone this discussion until the end of this section. Notice that, since r is a dimensionful quantity, one needs at least one dimensionful parameter in the theory in order for the functions h_1, h_2 and g_n not to be simply constants. Suppose we have a single dimensionful parameter a in the theory with mass dimension [a] = 1. Then the functions in (2.1) would have the following arguments

$$h_1(ar), \quad h_2(ar), \quad g_n(ar). \quad (2.2)$$

Notice also that we exclude the r = 0 point from the discussion in order to remove the contact terms, which do not play any role in our further investigation.
Since the Euclidean two-point functions are time-ordered, the following symmetry condition must be obeyed:

$$\langle 0|J^a(x_1)\,J^b(x_2)|0\rangle_E = \langle 0|J^b(x_2)\,J^a(x_1)|0\rangle_E. \quad (2.3)$$

Clearly, the parity even structures in (2.1) satisfy this condition automatically. In the presence of conformal symmetry there are further constraints on the two-point function (2.1). We derive them in appendix A; here we simply quote the final result,

$$\langle 0|J^a(x_1)\,J^b(x_2)|0\rangle_E = \frac{1}{r^{2(d-1)}}\Big( C_J\, I^{ab}(x_{12}) + i\,\widetilde C_J\, E^{ab}(x_{12}) \Big), \quad (2.4)$$

where we have defined the parity even structure

$$I^{ab}(x) \equiv \delta^{ab} - 2\,\frac{x^a x^b}{x^2}, \quad (2.5)$$

together with its parity odd counterpart E^{ab}, which exists only in d = 2 and is built out of the two-dimensional Levi-Civita symbol.

⁸ In case the system under consideration is invariant under a non-Abelian group, the corresponding conserved operator would be J^a_A(x), where A is the index in the adjoint representation of the non-Abelian group. The two-point function in (2.1) gets an additional overall tensor structure which depends on the adjoint indices, namely ⟨0|J^a_A(x_1)J^b_B(x_2)|0⟩_E ∼ tr(t_A t_B), where the t_A are the generators of the symmetry in the adjoint representation. One can always choose a basis of these generators such that tr(t_A t_B) = δ_{AB}.

⁹ The imaginary unit i in the parity odd part of (2.1) is introduced for future convenience.
The constants C_J and C̃_J (partly) characterize the dynamics of the conformal field theory. They are called the central charges of two currents.¹⁰ In unitary theories C_J > 0 and −C_J ≤ C̃_J ≤ +C_J, see (A.15) and appendix B for details. The central charge C_J was introduced in [49]; it corresponds to the parity even structure I^{ab} and is a universal quantity in any number of dimensions. The central charge C̃_J corresponds to the parity odd structure E^{ab} and can only be present in d = 2 dimensions. Notice that the parity odd structure E^{ab} automatically obeys the symmetry condition (2.3); this is not obvious at first glance, but can be shown using the identity (2.6). The two-point function (2.4) is a special case of (2.1) in which all the dimensionless functions h_1(r), h_2(r) and g_n(r) are constants (since there are no dimensionful parameters in CFTs), appropriately related so as to form the conformally covariant tensor structures (2.5). Given that we work with a UV complete QFT, at high energies (UV), or equivalently at small distances, we should recover conformal invariance, namely the conditions (2.7).¹¹ Analogously, at low energies (IR), or equivalently at large distances, we again recover conformal invariance, the conditions (2.8). In quantum field theories with a mass gap, such as QCD, the IR CFT is simply empty.
Parity even part
Let us first focus on the parity even terms in (2.1) and (2.4). We can rewrite the conditions (2.7) and (2.8) as

$$\lim_{r\to 0} h_1(r) = C_J^{UV}, \qquad \lim_{r\to\infty} h_1(r) = C_J^{IR}, \quad (2.9)$$

and similarly for h_2(r). In other words, the central charges C_J^{UV} and C_J^{IR} determine the asymptotic behavior of the functions h_1(r) and h_2(r).

¹⁰ For a generic local operator in a CFT, the constant appearing in its two-point function defines the normalization of this operator; its value can be set to one by rescaling the operator. For conserved operators this is no longer the case, since they obey a particular symmetry algebra which fixes their normalizations. In the Abelian case we have $Q \equiv \int d^d x\, J^0(x)\,\delta(x^0)$ and $[Q, \mathcal O] = q_{\mathcal O}\,\mathcal O$, where the $q_{\mathcal O}$ are the charges. In the non-Abelian case we have instead $Q_A \equiv \int d^d x\, J^0_A(x)\,\delta(x^0)$ and $[Q_A, Q_B] = i f_{ABC}\, Q_C$, where A, B and C are the adjoint indices of some non-Abelian group and f are its structure constants.

¹¹ Here we take the limit r → 0 in such a way that r is always positive; it can be infinitely close to zero but never becomes zero. In other words, it does not probe contact terms.
We will now derive an integral expression for the UV and IR central charges in terms of the two-point function of conserved currents in a generic QFT. Conservation of the currents implies the differential equation (2.10). Integrating both sides of (2.10) and using the asymptotic conditions (2.9), we obtain (2.11), which can be equivalently rewritten using (2.1) as (2.12).¹² As shown in appendix B.1, in unitary theories the constraints (2.13) hold for all r. As a result, the integrand in (2.11) does not have a definite sign, and one cannot derive any inequality for the difference of the UV and IR central charges simply by using (2.13); in other words, using (2.13) one cannot prove the "c-theorem" for conserved currents. A proof of the "c-theorem" for conserved currents does exist, however, and will be given in section 5.

¹² As we will see shortly, the parity odd tensor structures obey $\delta_{ab}\, T^{ab}_n = 0$ and $x_a x_b\, T^{ab}_n = 0$.
Parity odd part
Let us now focus on the parity odd terms in (2.1) and (2.4). Since the number of indices of the Levi-Civita symbol depends on the number of dimensions, we address the cases d = 2, d = 3 and d ≥ 4 separately. Let us start from d = 2. Using rotational and translational invariance one can write two parity odd tensor structures in (2.1), namely

$$\sum_n i\,g_n(r)\,T^{ab}_n(x_1,x_2) = i\,g_1(r)\,\epsilon^{ab} + i\,g_2(r)\,\frac{x^a_{12}\,\epsilon^{bc}\,x^c_{12}}{r^2}. \quad (2.14)$$

Requiring (2.3) and using (2.6), one obtains the constraint (2.15) on the unknown functions. As a result, we get the most general form of the parity odd part of the two-point function of two currents, given in (2.16).
Conservation implies the condition (2.17). In other words, the expression (2.16) for the parity odd part of the two-point function of conserved currents is identical to the one of conformal field theories in (2.4). The asymptotic conditions (2.7) and (2.8) then imply (2.18). This requirement shows that the central charge C̃_J^{UV} is well defined along the flow and remains unchanged in the IR: it is nothing but the anomaly coefficient of the global U(1) current.¹³ Using the standard 't Hooft anomaly matching argument, one can argue that the global anomaly must be an invariant quantity along the flow, in accordance with (2.18).
In d = 3 one can write only a single parity odd tensor structure, given in (2.19). The expression (2.19) automatically complies with the condition (2.3) and satisfies conservation. Since there are no allowed parity odd terms in the CFT two-point function in d = 3, the asymptotic conditions (2.7) and (2.8) require (2.20). Moreover, in unitary theories, due to reflection positivity, the following condition holds:

$$\forall r:\quad -h_1(r) \le g_1(r) \le +h_1(r). \quad (2.21)$$

This can be shown by plugging (2.19) into (B.10). The parity odd contribution (2.19) to the two-point function is a Chern-Simons-like term.¹⁴ It does not contain any information about the UV or IR fixed points and thus will not be studied further in this paper.
In the case of d ≥ 4 no parity odd structures can be constructed. This follows from the simple fact that the Levi-Civita symbol has too many indices and that contractions of the form $\epsilon^{abcd\ldots}\,x^c x^d$ trivially vanish.
Stress-tensor
Let us now turn our attention to the local stress-tensor T^{ab}(x), totally symmetric in its indices, T^{ab}(x) = T^{ba}(x). In a d-dimensional non-conformal quantum field theory the stress-tensor transforms in a reducible representation of the rotational group SO(d): it can be decomposed as a direct sum of the trivial and the traceless symmetric representations.

The trivial representation corresponds to the trace of the stress-tensor, which we denote by Θ(x).

¹³ For further reading on global anomalies in d = 2 see for example section 19.1 in [50]. See also section 6 in [51].

¹⁴ See for example chapter 5 of David Tong's lectures on the Quantum Hall Effect [52].
Logically, the discussion in this section is identical to the one for conserved currents, with several minor complications. The most general Euclidean two-point function consistent with rotational symmetry and translational invariance takes the form (2.23), where T^{abcd}_m and T^{abcd}_n denote parity even and parity odd tensor structures respectively; the imaginary unit i is introduced in the parity odd part for later convenience. The functions h_m(r) and g_n(r) multiplying these structures are dimensionless. Since the Euclidean correlation functions are time-ordered, one has the symmetry condition (2.24). In the presence of conformal symmetry the form of the two-point function (2.23) gets severely restricted and becomes (2.25), where the objects I^{ab} and E^{ab} were defined in (2.5). We derive this expression in appendix A. The coefficient C_T is called the stress-tensor central charge; it was first introduced in [49].¹⁵ It is a universal quantity in any number of dimensions, and in unitary theories C_T > 0. The quantity C̃_T is another central charge, allowed only in d = 2 dimensions; in unitary theories −C_T ≤ C̃_T ≤ +C_T, see (A.18). Given that our quantum field theory is UV complete, namely its UV fixed point is described by a UV CFT (and its IR fixed point by an IR CFT), we have the conditions (2.26) and (2.27).

¹⁵ The stress-tensor defines the conformal algebra; for instance, the dilatation operator is constructed by integrating a moment of the stress-tensor over a spatial slice.
Parity even part
Let us now focus on the parity even part of the two-point function (2.23). In a general number of dimensions one can write five linearly independent tensor structures, given in (2.28). In d = 2 only four tensor structures are linearly independent, due to the relation (2.29). Using (2.23), one can express h_2(r) as a certain contraction of the stress-tensor two-point function.
As a result, (2.31) can be brought to the equivalent form (2.32),¹⁶ where we have defined (2.33). In appendix B.1, using reflection positivity, we show that

$$\forall x:\quad \langle 0|\,\Theta(x)\,\Theta(0)\,|0\rangle_E \ \ge\ 0, \quad (2.34)$$

see in particular (B.9). Because of this, in d = 2 the integrand on the right-hand side of (2.31) is a non-negative function integrated over a positive region. As a result we get the simple inequality

$$C_T^{UV} \ \ge\ C_T^{IR}, \quad (2.35)$$

known as Zamolodchikov's c-theorem [28,47]. Using the machinery of appendix B.1, however, no positivity statement can be made about h_2(r), and thus no statement similar to (2.35) can be made in d ≥ 3 using these arguments. We will prove the c-theorem one more time, in a different way, in section 5.

¹⁶ For further details see [48] and section 2.6 in [16].
Parity odd part
As before we need to consider d = 2, d = 3 and d ≥ 4 dimensions separately.
We start with d = 2. One can naively write six parity odd tensor structures; however, only four of them are linearly independent, and due to the symmetry condition (2.24) there exist two additional constraints. Taking them into account, we are left with only two structures, the first of which reads

$$T^{abcd}_1(x_1,x_2) \equiv \delta^{ac}\epsilon^{bd} + \delta^{ad}\epsilon^{bc} + \delta^{bc}\epsilon^{ad} + \delta^{bd}\epsilon^{ac}. \quad (2.36)$$

Notice that the symmetry property required by (2.24) is not manifest here; one needs to use relations between different tensor structures in order to show that (2.36) obeys (2.24). Conservation of the stress-tensor implies the differential equations

$$g_1'(r) = 0, \qquad g_1'(r) + g_2'(r) = \frac{2}{r}\,\big(2\,g_1(r) + g_2(r)\big). \quad (2.37)$$

Solving them and taking into account the asymptotic constraints (2.26) and (2.27), we obtain (2.38). The central charge C̃_T^{UV} remains well defined and invariant along the flow all the way to the IR fixed point; one can identify C̃_T^{UV} with the gravitational anomaly in d = 2.¹⁷

In d = 3 one can construct two parity odd tensor structures which automatically satisfy the condition (2.24); they are given in (2.39). Conservation implies (2.40). We emphasize that even though no parity odd terms in the two-point function of stress-tensors are allowed at the fixed points, they can be present along the flow. The parity odd terms in d = 3 do not contain any information about the UV or IR CFTs and thus will not be studied further in this paper.

¹⁷ See for example [53] for a discussion of gravitational anomalies.
In d ≥ 4 no parity odd tensor structures can be constructed.
Trace of the stress-tensor
It is useful to make several statements about the trace of the stress-tensor. From (2.23) and the explicit expressions of the tensor structures (2.28), (2.36) and (2.39), it follows that the two-point function of the trace Θ takes the same form in any number of dimensions. Using the asymptotic conditions (2.30), one then obtains its UV and IR asymptotics. We define a particular quantum field theory as a deformation of some UV CFT: in practice this means that we pick a scalar operator O with conformal dimension ∆_O and the standard UV CFT two-point function. In other words, O must be irrelevant.
Lorentzian two-point functions
In this section we discuss two-point functions in the Lorentzian signature. Contrary to the Euclidean signature where only the time-ordered two-point functions exist, in the Lorentzian signature we can define Wightman, time-ordered, advanced and retarded two-point functions.
In what follows we discuss the first two. Wightman two-point functions are best suited for defining spectral densities: they are automatically well defined at coincident points and do not have any contact terms. Time-ordered Lorentzian two-point functions will be employed in section 5 due to the following property: they simply become the Euclidean two-point functions under the Wick rotation. It will be sufficient to work with time-ordered correlators at non-coincident points, which allows one to avoid complications due to the presence of contact terms. The discussion presented in this section is completely generic for d ≥ 4. In d = 2 and d = 3 two-point functions are allowed to have a parity odd contribution; below we will completely ignore this possibility.
Wightman two-point functions
The scalar Wightman two-point function in position space is defined as the ordered vacuum expectation value of two real scalar operators O(x), see (3.1). The two operators entering it may be taken in different bases: one basis is more natural for working at high energies and the other is more natural for working at low energies.
See appendix C for further details. The small imaginary part in the time component is needed to regularize various integrals of the Wightman correlation function. The prescription in (3.1) should be understood as follows: perform all the necessary manipulations with ϵ finite but small, and then take the limit. The notation 0⁺ indicates that we approach zero from positive values. We define the spectral density ρ_O of the local operator O(x) as the Fourier transform of its Wightman two-point function, see (3.3).¹⁹ The appearance of the Heaviside step function θ(p⁰) enforces the fact that we work with non-negative energies p⁰ ≥ 0 only. For convenience we also define the Mandelstam variable s ≡ −p², cf. (3.4); the reason why s ≥ 0 will be explained shortly. It is standard to rewrite the second entry in (3.3) by adding a δ-function and integrating over it, as in (3.5). We refer to the object ∆_W(x; s) as the scalar Wightman propagator; its explicit form can be found in (E.10).

¹⁹ Here we have simply performed a change of variables. Notice that ϵ enters the right-hand side of the above equations as e^{−ϵp⁰}; it plays the role of a damping factor.
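Since the displayed equations were lost in extraction, the definition just described (a Fourier transform supported on non-negative energies) can be written schematically as follows; the sign in the Fourier phase and the overall normalization are assumed conventions, not taken from the source.

```latex
% Spectral density as the Fourier transform of the Wightman function
% (schematic; phase and normalization conventions are assumptions).
\begin{equation}
\theta(p^0)\,\rho_{\mathcal O}(-p^2)
  \;\equiv\; \int d^d x \; e^{-i p \cdot x}\,
  \langle 0|\,\mathcal O(x)\,\mathcal O(0)\,|0\rangle .
\end{equation}
```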
In unitary Poincaré invariant QFTs the states transform in the unitary infinite-dimensional representations constructed by Wigner. They are labeled by −p² and by an irreducible representation of the Little group, to be defined shortly. There are three distinct possibilities, namely −p² < 0, −p² = 0 and −p² > 0.
In QFTs one deals only with the last two options. The reason for this is the necessity of having a unique vacuum state, defined to be the lowest energy state in the theory: states with −p² < 0 would allow for arbitrarily negative energies. In the case −p² > 0, using Lorentz transformations one can obtain any d-momentum p^µ from a standard frame, which is conventionally chosen to be
$$\bar p^\mu \equiv \{M, 0, \ldots, 0\}, \quad (3.6)$$

where M > 0 is some real constant. The group of transformations leaving (3.6) invariant is called the Little group; clearly, in this case it is SO(d − 1). The most universal irreducible representation of the Little group, which exists in any dimension, is the traceless symmetric representation (3.7), conventionally drawn as a row of Young-diagram boxes.
We refer to the representation (3.7) simply as the spin (Little group) representation. For further details in the d = 4 case see appendix A in [15]. In any particular QFT model we can choose a basis of states, which we denote schematically by |b⟩. As discussed above, these states transform in unitary representations of the Poincaré group. One chooses the basis to diagonalize the generators of translations P^µ, namely P^µ|b⟩ = p^µ_b|b⟩. The completeness relation (3.8) holds, where the summation over b is a schematic notation standing for a sum over the Poincaré labels and all the additional labels characterizing the state.
In the case −p 2 = 0 the standard frame is usually chosen to bep µ ≡ {M, 0, . . . , 0, M } leading to a different Little group which is ISO(d−2). It is usually assumed that "translation" generators of this group are realized trivially and the Little group in this case effectively becomes SO(d − 2). This changes the set of labels b needed to describe the state compared to the −p 2 > 0 case. In (3.8) and below we keep b at a schematic level, thus our discussion applies for both −p 2 = 0 and −p 2 > 0 cases. In future sections however when we need the explicit structure of the Little group we will restrict our attention to the −p 2 > 0 case only.
Let us now inject (3.8) into (3.1); we obtain (3.9), where in the second equality we have used translation invariance. Comparing (3.3) with (3.9), we get the desired expansion (3.11) of the spectral density. Since for each basis state we have p⁰_b ≥ 0 and −p²_b ≥ 0, we conclude from the expression (3.11) that p⁰ ≥ 0 and −p² ≥ 0, in accordance with (3.4).
The basis states |b⟩ are rather abstract at this point. There is, however, a large class of quantum field theories for which the basis |b⟩ can be defined in a straightforward constructive way as a tensor product of n free-particle states dressed with the Møller operators; see section 2.1 in [16] for further details. Such basis states are called asymptotic and are denoted here by |n⟩_in or |n⟩_out.²⁰ In other words,

$$|b\rangle = |n\rangle_{\rm in} \quad {\rm or} \quad |b\rangle = |n\rangle_{\rm out}. \quad (3.12)$$

Analogously to (3.11), for massive theories we get (3.13), where p^µ_n is the d-momentum of the asymptotic state |n⟩_in and the sum over n stands for summing over all possible numbers of particles and integrating over their relative motion. The matrix element ⟨n, out|O(0)|0⟩ is called the form factor; see section 2.4 in [16] for a discussion of form factors and their properties.
We will now define the spectral densities of conserved (Abelian) currents and of the stress-tensor.
Conserved currents
The Wightman two-point function of two Lorentz spin one currents J^µ(x) is defined in (3.14). As in the scalar case, the spectral density ρ^{µν}_J (which now carries two Lorentz indices) is defined as the Fourier transform of the Wightman two-point function (3.14), see (3.15). In QFTs where asymptotic states can be defined, the decomposition of the spectral density into the form factors ⟨n, out|J^µ(0)|0⟩ is given in (3.16). Because of Lorentz invariance, the spectral density ρ^{µν} can be written in the most general form (3.17), where ρ⁰_J and ρ¹_J are the (spin 0 and spin 1)²¹ components of the spectral density ρ^{µν} and the tensor structures are defined in (3.18). The overall factor p² and the minus sign in the second term of (3.17) are introduced for later convenience.

The objects (3.18) have a more profound meaning than being simply tensor structures; let us zoom in on this. We are in the situation −p² > 0. Consider the Lorentz spin one operator J^µ(p) in momentum space. It transforms in an irreducible representation of the Lorentz group; however, the states it creates from the vacuum transform in irreducible representations of the Little group SO(d − 1). It is thus important to know how to decompose (in other words, project) irreducible representations of the Lorentz group SO(1, d − 1) into irreducible representations of the Little group SO(d − 1). For the Lorentz spin one representation one has the decomposition (3.19). It is easy to perform this decomposition explicitly in the frame (3.6), see (3.20), where we have defined (3.21); in a generic frame the decomposition is achieved by (3.22). The equivalence of (3.20) and (3.22) is trivial to see in the frame (3.6). Thus, the objects in (3.18) are the Little group spin 0 and spin 1 projectors, and from their definitions it is straightforward to check that they satisfy the standard properties of projectors, namely (3.23).

The spectral density (3.15) is a d × d hermitian semi-positive definite matrix for any value of p^µ satisfying −p² > 0. We can therefore evaluate this matrix in the standard frame (3.6); semi-positivity then translates into the non-negativity of the components of the spectral density, (3.24). Since the components of the spectral densities are scalar quantities, they remain invariant under any Lorentz transformation, and thus the inequalities (3.24) hold true in any frame. It is also useful to deduce the mass dimensions of the components of the spectral density: since the Heaviside step function is dimensionless, from (3.15) we get (3.25).

Analogously to the scalar case, using the definition of the components of the current spectral density, we can bring the second entry in (3.15) to the very convenient form (3.26), where the ∆^{µν}_i are the Lorentz spin one Wightman propagators. Using the explicit expressions for the projectors (3.18) and integration by parts, these two propagators can be written in terms of the scalar Wightman propagator (3.5), see (3.27). The scalar Wightman propagator ∆_W in the limit s → 0 (for space-like separated points, x² > 0) is given in (E.26); it remains finite and depends only on x². Consequently, the Wightman propagators in (3.27) remain finite in the limit s → 0. This is the reason for introducing the overall factor p² in (3.17).
Finally, conservation of the current implies the conditions (3.28). Using (3.17) and (3.18), these conditions in turn imply (3.29), where A is some dimensionful constant whose mass dimension follows from (3.25).
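The explicit expressions (3.18) did not survive extraction; the standard form of such Little group projectors, consistent with the properties quoted above, would be the following (conventions assumed):

```latex
% Little group spin-0 and spin-1 projectors for -p^2 > 0
% (standard form consistent with the stated projector properties;
%  overall conventions are an assumption, not taken from the source).
\begin{align}
\pi_0^{\mu\nu}(p) &\equiv \frac{p^\mu p^\nu}{p^2}, &
\pi_1^{\mu\nu}(p) &\equiv \eta^{\mu\nu} - \frac{p^\mu p^\nu}{p^2}, \\
\pi_i^{\mu\rho}\,\pi_{j\,\rho}{}^{\nu} &= \delta_{ij}\,\pi_i^{\mu\nu}, &
\pi_0^{\mu\nu} + \pi_1^{\mu\nu} &= \eta^{\mu\nu}.
\end{align}
```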
Stress-tensor
Let us consider an operator T^{µν}(x) transforming in the two-index symmetric reducible representation and consider its Wightman two-point function (3.30). The spectral density ρ^{µν;ρσ}_T of the operators T^{µν}(x) is the Fourier transform of (3.30), see (3.31). In QFTs which can be described in terms of asymptotic states, the spectral density can be written as a sum over the form factors ⟨n, out|T^{µν}(0)|0⟩, see (3.32).

The operator T^{µν} is in a reducible representation; one can decompose it into two irreducible representations as in (3.33). The operators transforming in these two irreducible representations, given in (3.34), are the trace and the traceless symmetric part of T^{µν}. Instead of (3.30) one should then consider the three (generically independent) Wightman two-point functions (3.35). In the case when T^{µν} is the stress-tensor, the conservation condition mixes all three correlators in (3.35). Using the splitting (3.35), one can define the spectral densities (3.37). Analogously to ρ^{µν}_{ΘT}(p) one can define the spectral density ρ^{µν}_{TΘ}(p); one can show, however, that the latter is identical to the former. Using these, one can write (3.38).

The decomposition of the Lorentz spin 2 operator into irreducible representations of the Little group SO(d − 1) is given in (3.39). It can be performed by using three projectors constructed out of (3.18), given in (3.40). The projectors (3.40) are required to be symmetric and traceless in both pairs of indices (µν) and (ρσ); they also satisfy the relations (3.42). Using the projectors (3.40), we can write the decomposition (3.39) explicitly as in (3.43), where the Little group spin 0, 1 and 2 representations, analogously to (3.22), read as in (3.44). Using Lorentz invariance, one can write the decomposition of the spectral densities into components, see (3.45). It is also useful to deduce the mass dimensions of the components of the spectral density; one finds [ρ^{µν}_T] = d.
Apart from some singular points at p² = 0, the condition (3.48) leaves us with two components of the stress-tensor spectral density, namely ρ_Θ and ρ²_T, and we can compactly write the result as (3.49). Since the spectral density (3.32) is a hermitian matrix, we conclude that these components are real. Plugging (3.49) into (3.31), analogously to section 3.1, we obtain the spectral representation (3.51) for the conserved stress-tensor, written in terms of the propagators defined in (3.52). Up to an overall constant, the expression (3.51) matches precisely equation (3.1) of [25].
To conclude, let us express the Wightman propagators of the stress-tensor in terms of the scalar Wightman propagator (3.5). Taking (3.52) and expressing the momenta as derivatives, one straightforwardly arrives at (3.53).
Time-ordered two-point functions
Time-ordered correlators are widely used for several reasons. First, they can be straightforwardly computed in perturbation theory. Second, they appear in the LSZ reduction formula. Third, they can be easily mapped to Euclidean correlators using the Wick rotation. In this section we discuss two-point time-ordered correlators. Given two real scalar operators O₁(x) and O₂(x), the time-ordered two-point function is defined as in (3.54). Plugging in the expression (3.5) for the Wightman two-point functions in terms of the spectral density, we get (3.55), where ∆_F is called the Feynman propagator; its explicit expression can be found in (E.11). The representation (3.55) of the time-ordered two-point correlation function in terms of the spectral density is called the Källén-Lehmann representation. We also exclude the point x^µ = 0 from the discussion to avoid dealing with contact terms. Using (3.55) one can also express the spectral density in terms of the real part of the time-ordered two-point function. To show this, we use the principal-value (Sokhotski-Plemelj) relation 1/(y + iε) = P(1/y) − iπδ(y) inside the Feynman propagator, where P stands for the principal value. Performing the inverse Fourier transformation, we obtain the desired expression. In what follows we derive the analogs of (3.55) for conserved currents and the stress-tensor.
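For orientation, the scalar Källén-Lehmann representation referred to above can be written schematically as follows; this is a sketch in the mostly-plus metric used here, and the measure normalization (the factor of 2π) is an assumption rather than a verbatim copy of (3.55):

\[
\langle 0|T\{O_1(x)\,O_2(0)\}|0\rangle=\int_0^\infty \frac{ds}{2\pi}\,\rho(s)\,\Delta_F(x;s),
\qquad
\Delta_F(x;s)=\int\frac{d^dp}{(2\pi)^d}\,\frac{-i\,e^{ip\cdot x}}{p^2+s-i\epsilon}.
\]

The two-point function of the interacting theory is thus a positive superposition of free massive propagators, weighted by the spectral density.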
Conserved currents
Analogously to the scalar case, one defines the time-ordered two-point function of two Lorentz spin one operators. Plugging in the spectral representation (3.26) of the current Wightman function, we get the Källén-Lehmann representation (3.59) for the currents, with the corresponding Feynman propagators. Using (3.27), the derivative of the step function, and a property of the scalar Wightman propagator, one obtains simple expressions for these Feynman propagators.

(Aside: in order to rewrite the Feynman propagator defined in the second line of (3.55) in the conventional form given by its last line, one uses the integral representation of the step function and performs a change of variables. In most textbooks the equivalence of the second and third lines of (3.55) is shown in reverse, by integrating the last line of (3.55) over p⁰ using the residue theorem.)
Stress-tensor
An identical discussion holds for the time-ordered two-point correlation function of the stress-tensors. In what follows we only state its Källén-Lehmann representation (3.64) in terms of the components of the stress-tensor spectral density, together with the corresponding Feynman propagators.
Spectral densities in Lorentzian CFTs
In the previous section we defined spectral densities as Fourier transforms of Wightman two-point functions. In the presence of conformal symmetry, the two-point functions are purely kinematic objects; in other words, their form is completely fixed by the conformal symmetry. As a consequence, we can straightforwardly compute the CFT spectral densities. Let us start from the very well known case of a real scalar operator O with scaling dimension ∆_O. In unitary theories there is a lower bound on this scaling dimension, ∆_O ≥ (d − 2)/2. As in the previous section, we work here in the Lorentzian metric. The Wightman two-point function of the operator O is given in (4.1), where N_O is a normalization constant which can be set to one. Plugging (4.1) into the definition (3.3), performing the integration and taking the limit, we arrive at the expression (4.3) for the spectral density.
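Up to the overall normalization (which depends on the conventions for N_O and the Fourier measure), the resulting scalar spectral density has the power-law form dictated by scale invariance; schematically,

\[
\rho_O(s)\;\propto\;\theta(s)\,s^{\,\Delta_O-\frac{d}{2}},
\]

as follows from dimensional analysis: the position-space two-point function scales as x^{−2∆_O}, so its Fourier transform scales as (−p²)^{∆_O − d/2}.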
In what follows we derive the spectral densities of generic Lorentz spin one and Lorentz spin two operators. Notice that we completely ignore the parity odd part in d = 2.
Lorentz spin one operator
Consider a generic Lorentz spin one operator J^µ with scaling dimension ∆_J ≥ d − 1. In the Euclidean signature the two-point function of such operators was already given in (2.4); analogously, in the Lorentzian signature we have (4.5). For a generic Lorentz spin one operator, C_J is a normalization constant; when J^µ is a conserved current instead, C_J becomes the central charge and the scaling dimension saturates the unitarity bound ∆_J = d − 1. Plugging (4.5) into the first equation in (3.15), we get an expression whose integrals have already been evaluated in (4.2), and we are only left with taking derivatives. Using the properties (4.7) [25], together with (4.2) and (4.3) [24], one can show the result stated below,

[24] This result can be found for example in equation (2.22) of [42], see also [39]. Notice that for some special values of ∆_O the integration procedure leads to additional terms in (4.3). These terms are not present, however, for all the cases relevant to this section, due to the unitarity bounds on scaling dimensions.
[25] To see that the last entry in (4.7) indeed vanishes, notice that because of the δ-function it can be non-zero only if µ = 0 and q⁰ = 0. This leads, however, to a step function of a negative argument, which vanishes unless all q^i = 0. The latter case also gives zero because of the q^µ factor.
where we have defined the parameter given in (4.10). Let us now focus on the case when J^µ is a conserved current. The expressions then simplify and read as in (4.11) and (4.12). In d = 2 the spin 1 component vanishes and the spin 0 component is proportional to δ(s); this is in agreement with (3.29). By comparing (4.11) and (3.29) we can even determine the coefficient A introduced in (3.29): it reads A = 2π C_J. In d ≥ 3 the spin 1 component instead is always non-zero, whereas the spin 0 component always vanishes. This is again in agreement with (3.29), since the coefficient A introduced there is a dimensionful quantity and thus must vanish in CFTs, A = 0.
Lorentz spin two operator
Using identical logic we can derive the components of the spectral densities for the Lorentz spin two operator T̂^{µν}. We keep the hat in order to indicate explicitly that the operator is traceless. Skipping all the details, we provide only the final answer, given in (4.13), where the stress-tensor central charge C_T is defined in (2.25) [26].
Spectral densities and central charges
As explained in section 3, in the case of a conserved current J^µ(x) there is a single (Little group spin one) component of the spectral density, denoted by ρ¹_J(s). Analogously, in the case of the stress-tensor T^{µν}(x) there are two components of the spectral density, namely the trace part ρ_Θ(s) and the Little group spin two part ρ²_T(s) [27]. In what follows we explain how the information about the UV and IR central charges (for their precise definition see either section 4 or section 2) is encoded in the components of the spectral densities. We will see that the cases d = 2 and d ≥ 3 are drastically different. We consider only the continuous part of the spectral densities, excluding the point s = 0 from the discussion [28].

[26] The expression (2.25) is given in the Euclidean signature. Its parity even part in the Lorentzian signature is obtained straightforwardly by simply replacing the Kronecker delta with the Lorentzian metric.
[27] Spectral densities give a description of the two-point functions alternative to the position space functions h_i(r) introduced in section 2, see (2.1) and (2.23). For instance, in the case of conserved currents we have two functions h₁(r) and h₂(r) related by a single differential equation. In the case of the stress-tensor we have five functions h_i(r) with three differential constraints.
[28] We are allowed, however, to be infinitesimally close to s = 0.
[29] We remind the reader that s plays the role of energy squared. At very small and very large energies we expect to restore conformal invariance, since we approach the IR and UV fixed points.
Conserved currents
Taking into account (4.11), the requirement that the quantum field theory under consideration has UV and IR fixed points described by UV and IR conformal field theories is imposed, at the level of spectral densities, by asymptotic conditions on ρ¹_J(s) [29]. These are completely equivalent to the position space conditions (2.7) and (2.8). Due to (4.11), the above requirement can also be written as (5.2), where C^IR_J and C^UV_J are the usual IR and UV conserved current central charges and the numerical coefficient κ is given by (4.4). We see that in d ≥ 3 the central charges govern the asymptotics of ρ¹_J(s). In d = 2, instead, the right-hand side of (5.2) simply vanishes and, surprisingly, the dependence on the UV and IR central charges disappears.
In order to understand what is happening in d = 2 dimensions, let us re-derive (5.2) in a different way. Consider the integral expression (2.12) for the difference of UV and IR central charges, valid in d ≥ 2; it contains the Euclidean two-point function of conserved currents. We then write its spectral (Källén-Lehmann) decomposition in terms of ρ¹_J(s). This is done by applying the Wick rotation to the Lorentzian spectral decomposition (3.59). We discuss all the technical details in appendix F and state here only the final answer: in d ≥ 3 the sum rule (2.12) reduces to the asymptotic conditions (5.2); in d = 2, instead, one gets the integral expression (5.3). This result is not well known in the literature; nevertheless, it was obtained long before this paper, see [26].
In d = 2 another central charge, C̄_J, also exists, see (2.4); it is actually the global anomaly and, according to the discussion of section 2.1, remains invariant along the RG flow.
In other words, C̄_J^{UV} = C̄_J^{IR}; for more details see section 2.1. In order to write (5.3) in a canonical form, we define the holomorphic and the anti-holomorphic parts of the conserved currents. The associated central charges are denoted by k and k̄ respectively and are related to C_J and C̄_J as described at the end of appendix A. In terms of k and k̄, the sum rule (5.3), due to the condition (5.4), takes the form (5.6). In section 3.1 we proved that ρ¹_J(s) ≥ 0 for all energies. As a result, from (5.3) we conclude a positivity constraint on the difference of the UV and IR charges; alternatively, from (5.6) we conclude that k_UV ≥ k_IR.
Stress-tensor
An identical discussion holds for the stress-tensor. The requirement that the UV and IR fixed points are governed by UV and IR conformal field theories translates into conditions on the components of the stress-tensor spectral density. We start with the spectral density of the trace of the stress-tensor, ρ_Θ(s). In a conformal theory we have strictly Θ(x) = 0. In a quantum field theory, instead, the trace operator is given by the relevant scalar operator O deforming the UV CFT, Θ ∝ g O. The operator O has scaling dimension ∆_O < d and the coupling constant g has mass dimension d − ∆_O. This fixes the large-s asymptotics of ρ_Θ(s), see (5.11); the coefficient κ is given by (4.4) and N_O is the normalization constant of the operator O, see (4.1). Analogously, we can write the small-s behavior in terms of an irrelevant operator Õ describing the deformation of the IR conformal field theory.
We then have the corresponding limits of ρ_Θ(s) at small and large s. The operators O and Õ are related by some change of basis: one basis is more convenient for working at high energies, the other for working at low energies. For some extra details on the trace of the stress-tensor see the last paragraph of section 2.2.

Let us now address the Little group spin two component of the spectral density, ρ²_T(s). Its asymptotic behavior follows from (4.13), see (5.14); these conditions are equivalent to (5.15). As we can see, the asymptotic behavior of ρ_Θ(s) is governed by the properties of the UV and IR "deforming" operators, whereas the asymptotic behavior of ρ²_T(s) is governed by the central charges. We remind the reader, however, that ρ²_T(s) exists only in d ≥ 3. In d = 2 the central charge information is encoded instead in ρ_Θ(s), in a very non-trivial way. To understand how, we use (2.32) and plug in the Wick rotated spectral decomposition (3.64) of the two-point function of the stress-tensor. We recover the sum rule (5.16); for more details see appendix F. The sum rule (5.16) was derived in [25].

In d = 2 there is also another central charge, C̄_T, see (2.25), which is actually the gravitational anomaly and thus remains invariant along the RG flow, C̄_T^{UV} = C̄_T^{IR}; for more details see section 2.2. In order to write (5.16) in a canonical form, we define the holomorphic and the anti-holomorphic parts of the stress-tensor. The associated central charges are denoted by c and c̄ respectively and are related to C_T and C̄_T as described at the end of appendix A. Taking into account (5.17), the sum rule (5.16) can be written as (5.19). In section 3.1 we proved that ρ_Θ(s) ≥ 0 for all energies. As a result, from (5.16) we conclude (5.20); alternatively, from (5.19), we conclude (5.21). The inequalities (5.20) and (5.21) were found by A. Zamolodchikov [28], see also [47]; they are referred to as the "c-theorem". Notice that the equality sign can appear only if the theory is conformal, where ρ_Θ(s) = 0 for all energies according to (4.13). In d ≥ 3 the situation is very different. Due to unitarity, C^UV_T ≥ 0 and C^IR_T ≥ 0; however, from (5.15) one cannot deduce further relations between them. In other words, both options C^UV_T ≥ C^IR_T and C^UV_T < C^IR_T are perfectly viable.
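Schematically, the d = 2 sum rule and the resulting monotonicity statement take the form below; the overall positive normalization depends on conventions and is fixed in appendix F, so this is a sketch rather than a copy of (5.16):

\[
c_{UV}-c_{IR}\;\propto\;\int_0^\infty \frac{ds}{s^{2}}\,\rho_\Theta(s)\;\ge\;0 ,
\]

so positivity of ρ_Θ(s) immediately implies the c-theorem, with equality only when ρ_Θ vanishes identically, i.e. when the theory is conformal.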
Applications to bootstrap
In section 3 of [16] it was shown how to use unitarity to construct non-trivial constraints on partial amplitudes, form factors and spectral densities. This was done in the presence of a single scalar local operator. Here we extend the analysis of section 3 of [16] to include the full stress-tensor. We conclude this section by defining concrete bootstrap problems.
We focus on quantum field theories with a mass gap (equivalently, on QFTs with an empty IR fixed point). The spectrum of such theories is described by one-particle asymptotic in and out states. For simplicity we work with identical scalar particles; for precise definitions of asymptotic states see section 2.1 of [16]. One can build two-particle asymptotic in and out states by taking the symmetrized tensor product of two one-particle states [30]. We denote such two-particle states by |m, p₁; m, p₂⟩_in and |m, p₁; m, p₂⟩_out, see (6.1). The four-momenta of the one-particle asymptotic states by definition obey the mass-shell conditions (6.2). We also define in (6.3) the total momentum and s, the squared total energy of the two-particle state.
Stress-tensor form factor
Let us start by recalling the definitions of the form factor and its properties in the case of the stress-tensor (see also sections 2.4 and 2.6 of [16]). The trace of the stress-tensor two-particle form factor is defined as [31]

F_Θ(s) ≡ out⟨m, p₁; m, p₂|Θ(0)|0⟩.  (6.4)

The two-particle form factor of the full stress-tensor is defined as

F^{µν}_T(p₁, p₂) ≡ out⟨m, p₁; m, p₂|T^{µν}(0)|0⟩.  (6.5)

Analogously, one can define the stress-tensor form factors with the in asymptotic states; they are, however, related in a simple way to the ones above due to CPT invariance. One can decompose F^{µν}_T in a basis of tensor structures which are totally symmetric in the µ and ν indices. Conservation of the stress-tensor leads to the condition

(p₁ + p₂)_µ F^{µν}_T(p₁, p₂) = 0.  (6.7)

[30] Symmetrization is required for identical particles in order to make the state invariant under the exchange of the two particles.
[31] Compared to [16], in all the formulas here we drop the subscript 2 for the form factors in order to simplify the notation. Originally this subscript was introduced to stress that we deal with two-particle form factors.
As a result, the most general form of the stress-tensor form factor in d ≥ 3 is given in (6.8), where the functions F⁽⁰⁾ and F⁽²⁾ are the coefficients in the tensor structure decomposition [32]. We notice also that the first tensor structure in (6.8), due to (6.3), is precisely the Π^{µν}₁(p) projector defined in (3.18). In d = 2 the two tensor structures in (6.8) are equal to each other, thus we should keep only one of them in the decomposition; the simplest way to proceed is to keep the second tensor structure, which means that we effectively set F⁽⁰⁾(s) = 0 in d = 2. Contracting both sides of (6.8) with the metric η_{µν} and comparing with (6.4), we obtain the relation (6.10) between F_Θ, F⁽⁰⁾ and F⁽²⁾. Due to the fact that the stress-tensor enters the definition of the Poincaré generators, one can derive the normalization conditions (6.11), where const is some undetermined constant; for the detailed derivation see appendix G.

Consider now the Fourier transformed stress-tensor T^{µν}. Using (3.34), (3.43) and (3.44) we can split the stress-tensor form factor into three pieces with Little group spin 0, 1 and 2, as in (6.12), where p^µ was defined in (6.3). We notice immediately that the Little group spin 1 term vanishes identically, leaving us only with the first and the last terms. Let us now introduce the center of mass (COM) frame (6.13) for two-particle states, where due to the conditions (6.2) we have s = 4m² + k². Plugging (6.8) into (6.12) and going to the center of mass frame, one arrives at (6.14).

[32] Since we work in the Lorentzian signature, the two tensor structures introduced in (6.8) have poles when p₁ = ±p₂. The appearance of these poles is completely artificial; as a result, they must be removed in the full expression of the stress-tensor form factor by the presence of appropriate zeros in the components F⁽⁰⁾(s) and F⁽²⁾(s).
Here the expression on the left-hand side of (6.14) vanishes if µ = 0 or ν = 0, and the indices m and n are defined via µ = {0, m} and ν = {0, n}. The first term in (6.14) corresponds to Little group spin 0; taking into account (6.10), we see that it is simply driven by the trace of the stress-tensor form factor. The second term in (6.14) corresponds to Little group spin 2.
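As a quick consistency check, conservation (6.7) holds structure by structure for the pole-free tensor structures written later in (G.10), using only the on-shell conditions p₁² = p₂² = −m²:

\[
(p_1+p_2)_\mu\Big[(p_1+p_2)^2\eta^{\mu\nu}-(p_1+p_2)^\mu(p_1+p_2)^\nu\Big]=0,
\qquad
(p_1+p_2)\cdot(p_1-p_2)=p_1^2-p_2^2=0 ,
\]

so (6.7) is automatic for any choice of the scalar coefficients multiplying these structures.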
Relation with the spectral density
The stress-tensor spectral density in terms of its components ρ_Θ(s) and ρ²_T(s) is given in (3.49), which we use here again for convenience. In what follows we compute the components of the spectral density in terms of the components F_Θ and F⁽²⁾ of the form factor defined in (6.8) and (6.10). To do this we use (3.32). Writing explicitly the contribution of two-particle states and denoting by . . . the contribution of multi-particle states, we can write the expression (6.17) for 2πρ^{µν;ρσ}_T; the overall 1/2 factor appears because we deal with identical particles. Going to the center of mass frame (6.13) and switching to spherical coordinates according to (A.17) of [16], one gets (6.18). We can now perform the integration in (6.17) and plug the result into (6.18). One then obtains

2πρ_Θ(s) = ω² |F_Θ(s)|² + . . . ,  (6.19)

where the coefficient ω is given in (6.20), the spherical angle Ω_n is defined in (E.7), and the coefficient N_d is defined accordingly.
Projection to definite spin
Let us now introduce the two-particle in and out asymptotic states in the center of mass frame projected onto a definite SO(d − 1) Little group total spin j, see (6.22). The projector Π_j was defined in equation (2.14) of [16]; it involves the Gegenbauer polynomial C^k_j, where θ₁ is the angle of the (d − 1)-dimensional vector k with respect to the x^{d−1} spatial axis, and the coefficient γ_j defined in (6.24). We will also need to introduce the state (6.25), where the coordinate x̄^µ has a small imaginary part in the time direction according to (3.2); the factor m^{−d/2} is introduced to match the dimensions of the states (6.22). Let us now study the inner product of the states |ψ₂⟩ and |ψ₃⟩^{µν}; in the second line of the resulting expression we used (3.10) and the definition (6.5). Applying (6.23) to (6.14) we obtain

Π_j F^{µν}_T(p₁^{com}, p₂^{com}) = 0, ∀j ≠ 0, 2.  (6.28)

The only non-zero results appear for j = 0 and j = 2: in the former case only the first term in (6.14) gives a non-zero contribution, and in the latter case only the second term does. More precisely, one obtains (6.29).
Unitarity constraints
Having set up all the necessary ingredients, let us finally address unitarity. We start by taking all possible inner products of the states (6.22). Skipping the details, which were explained in section 3 of [16], we arrive at the matrix condition

\[
\begin{pmatrix} 1 & S_j^*(s) \\ S_j(s) & 1 \end{pmatrix}\succeq 0,
\qquad \forall j = 0, 2, 4, \ldots \text{ and } \forall s \ge 4m^2,
\qquad (6.30)
\]

which must hold in unitary theories according to the discussion of appendix B. Here S_j(s) is the partial amplitude, related to the full scattering amplitude S(s, t, u) (the amplitude containing the disconnected piece) via Π_j S(s, t(s, cos θ₁), u(s, cos θ₁)), see (6.31), where the Mandelstam variables can be explicitly expressed in terms of the scattering angle θ₁. The coefficient κ_j entering this relation was computed in equation (2.41) of [16], see (6.33). In order to proceed we also need to consider the inner product of the state (6.25) with itself; here we use (3.10), perform the change of variables, and employ (3.31).
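For later reference, semi-positivity of the 2 × 2 matrix in (6.30) is equivalent, via Sylvester's criterion (the diagonal entries and the determinant must be non-negative; the criterion is used again in the subsection below), to

\[
\det\begin{pmatrix} 1 & S_j^*(s)\\ S_j(s) & 1\end{pmatrix}=1-|S_j(s)|^2\ \ge\ 0
\quad\Longleftrightarrow\quad |S_j(s)|\le 1 .
\]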
Let us now consider a first triple of states. Taking all possible inner products of these states, we obtain a 3 × 3 hermitian matrix whose components were carefully derived in section 3 of [16]. Using the unitarity requirement, as explained in appendix B, we obtain the semi-positive definite constraint (6.36); this condition should be satisfied for all energies s ≥ 4m². We can also consider a second triple of states instead, built out of the spin-projected quantities defined alongside them. Taking all possible inner products of these states, removing the overall δ-function and using (6.29), we obtain the semi-positive definite condition (6.39). The condition (6.39) should be satisfied for all energies s ≥ 4m² and angles θ₁ ∈ [0, π].
Notice the appearance of the angle θ₁ compared to the trace case. As we will see shortly, the strongest bounds come from the θ₁ = 0 configuration; thus, the semi-positive constraint (6.39) simply reduces to the form (6.41). The semi-positive definite conditions (6.36) and (6.41) are the main results of this section. Here and below all the inequalities are given in the physical domain of squared energies s ≥ 4m². Several consequences follow. First, we recover the bound |S_j(s)| ≤ 1, see (6.43). Second, we recover the non-negativity of the components of the stress-tensor spectral densities,

ρ_Θ(s) ≥ 0, ρ²_T(s) ≥ 0,  (6.44)

already derived in section 3.1. Third, we derive the inequalities (6.45).
Sylvester's criterion
We notice that the strongest constraint in (6.45) comes from the θ₁ = 0 (or, equivalently, θ₁ = π) configuration, since the function f_{−2}(θ₁) has its minimum there. These bounds are in perfect agreement with (6.19). Finally, the determinants of the matrices (6.36) and (6.39) lead to the set of constraints (6.46). We notice that the first term in both equalities is non-negative; thus, the strongest bound occurs at the minimum of the function f_{−2}(θ₁), which is at θ₁ = 0, since there it becomes harder to compensate for the negative second term and for the potentially negative third and fourth terms.
Elastic unitarity
In the special region of energies

s ∈ [4m², 9m²],  (6.47)

called the elastic regime, the inequality (6.43) becomes saturated [34], namely

|S_j(s)|² = 1, ∀j = 0, 2, 4, . . .  (6.48)

Using this fact we can rewrite the equations (6.46) as (6.49); these are known as Watson's equations. They allow one to express the partial amplitudes in terms of the components of the (two-particle) stress-tensor form factor in the "elastic" range of energies (6.47).
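In their standard form (a sketch; the paper's equations (6.49) may differ by conventions), Watson's equations state that in the elastic regime the phase of each form-factor component equals the corresponding scattering phase:

\[
F_\Theta(s) = S_0(s)\, F_\Theta^*(s), \qquad F^{(2)}(s) = S_2(s)\, F^{(2)*}(s), \qquad s \in [4m^2, 9m^2],
\]

so that, writing S_j = e^{2i\delta_j}, the phase of the form factor is the elastic phase shift δ_j modulo π.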
Asymptotic behavior
Let us now study the inequalities (6.45) in the s → ∞ limit. Using (5.11) and (5.15) together with (6.20) and (6.21), we obtain (6.50) and (6.51).

[34] In d = 2 this can also happen for s > 9m² in the case of integrable models.
We remind the reader that the trace of the stress-tensor at high energy is given by the relevant scalar operator O (with scaling dimension ∆_O and two-point normalization N_O) which deforms the UV CFT, and that g is the dimensionful coupling governing the deformation. The numerical constant κ was defined in (4.4). In 2d the inequality (6.50) was first derived in [24], see formulas (3.33) and (3.34) there.
It is interesting to notice that even if one constructs a scattering amplitude such that all its partial amplitudes obey the unitarity condition (6.43) at all energies, it is not clear whether one can read off any UV CFT data from it [35]. The conditions (6.46) in the limit s → ∞, together with (5.11), (5.15) and (6.50), (6.51), could in principle provide this connection. Under closer investigation, however, it does not seem that one can draw any generic statements from them.
Bootstrap problems
One can use the semi-positive definite constraints (6.30), (6.36) and (6.41) to define several bootstrap problems [36]. There are at least two distinct possibilities.
Let us start with the first one. The constraint (6.30) allows one to bound various non-perturbative S-matrix coupling constants using the numerical procedure of [2,3]; see also section 1 of [15] for a concise summary. One can now re-run this procedure in the presence of (6.36) and (6.39), where we inject some known numerical [37] data about the stress-tensor form factors and the spectral density. This provides a more restrictive setup and injects model-specific information into the numerical procedure.
The second possibility in d ≥ 3 is to apply the numerical procedure of [2,3] to (6.30), (6.36) and (6.41), writing an ansatz for the components of the stress-tensor form factor and the spectral density, in order to minimize the quantities (6.52), where m and n are some real parameters. Their allowed range is constrained by the convergence of the integral at large values of s, due to the asymptotic behaviour (5.13) and (5.15) of the components of the stress-tensor spectral density; for example, one concludes that n > d/2 − 1. The only disadvantage of this procedure is that the quantities in (6.52) do not have any clear physical meaning. Notice that a non-trivial result of such a minimization procedure is guaranteed by the presence of the form factor normalization conditions (6.11).
In d = 2, instead of (6.52), we can minimize the UV central charge c_UV given by the integral expression (5.19). This was already employed in [16]. In the presence of global symmetries one can also minimize the conserved current central charge k_UV given by the integral expression (5.6). This can be employed, for example, to study further the O(N) models in d = 2.

[35] Notice, however, that in d ≥ 3, using holography, one can argue that the regime of hard scattering (high energy and fixed angle) should be directly related to the UV CFT [54].
[36] Notice that these constraints are already written in a form which is straightforward to implement in the semi-definite problem solver SDPB [55,56].
[37] One could obtain some numerical data using Hamiltonian truncation methods, see for instance [57][58][59].
A Correlation functions in Euclidean CFTs
The conformal group in d-dimensional Euclidean space is SO(1, d + 1). We consider local operators with spin ℓ, namely the ones transforming in the traceless-symmetric representation of the SO(d) subgroup [38]. Such operators can be encoded in the index-free objects (A.1), where z^a are real vectors called polarizations. One can invert (A.1), where (x)_ℓ ≡ x(x + 1) · · · (x + ℓ − 1) is the Pochhammer symbol and D_a is the Todorov differential operator defined in (A.3). The Todorov operator is strictly defined for d ≥ 3; one can still use it in d = 2 by keeping d generic and taking the limit d → 2 at the very end of the computation. The conformal group can be realized linearly in the D ≡ d + 2 dimensional embedding space. Using the formalism developed in [60], one can represent the traceless-symmetric local operator (A.1) as a function of D-dimensional light-cone coordinates X^A ≡ {x^a, X⁺, X⁻} and polarizations Z^A ≡ {z^a, Z⁺, Z⁻} [39]. The metric in the light-cone coordinates is given accordingly [40].

[38] In d = 3 all the bosonic representations are traceless-symmetric. In d ≥ 4 even bosonic representations can be non-traceless-symmetric.
[39] In order to work with more general representations, other embedding formalisms are required. For general representations (bosonic and fermionic) in d = 3 see [61]. For general representations in d = 4 see [62][63][64]. For general bosonic operators in d ≥ 4 see [65].
[40] The Cartesian coordinates in D dimensions read as X² = X^a X_a − (X^d)² + (X^{d+1})². The light-cone coordinates are then defined as X⁺ ≡ X^d + X^{d+1} and X⁻ ≡ X^d − X^{d+1}.
The map between the embedding space and the original space is given in (A.5). It is straightforward to construct n-point functions in the embedding formalism: they are sums of tensor structures T^I multiplied by functions g_I of the conformally invariant variables u, v, . . ., also known as the cross-ratios. For n = 2 and n = 3 there are no cross-ratios, thus the functions g_I can only be constants. Tensor structures are built as products of the conformally invariant objects (A.7). These are parity even objects. One can also construct various parity odd conformally invariant objects which contain a single D-dimensional Levi-Civita symbol; the number and the structure of such objects depend on the number of dimensions. For instance, in d = 2 (D = 4) and for n = 2 one can write the single object (A.8). Using the map (A.5) we can write the projection of the invariants to the original d-dimensional space, see (A.9). Analogously, for the parity odd invariant (A.8) we obtain (A.10), where the Levi-Civita symbol in Euclidean d = 2 space is normalized as ε₀₁ = ε^{01} = +1.
Examples
As a first application, consider the two-point functions of Abelian conserved currents, where C_J and C̄_J are some constants undetermined by the conformal symmetry; they are called the current central charges. The imaginary unit i was introduced in the second term for later convenience. Using the projections (A.9), (A.10) and the Todorov operator (A.3), we get the indexful expression (A.12) for the Euclidean two-point function, where we have introduced two auxiliary objects. Notice that both of these objects are translation invariant, as they should be; moreover, they are also invariant under the transformation a ↔ b and x_i ↔ x_j. It is straightforward to check that the two-point function (A.12) is automatically conserved. As will be discussed in appendix B.1, Euclidean two-point functions in unitary theories must obey reflection positivity. For a Lorentz spin one current this condition is given in (B.10). Plugging (A.12) into (B.10) and using (B.13), we get (A.14). The semi-positive condition (A.14) can be satisfied only if the matrix in (A.14) is hermitian; as a result, both C_J and C̄_J must be real. Furthermore, using Sylvester's criterion, the condition (A.14) leads to the constraints (A.15).

As a second example, let us consider the two-point function of the stress-tensor, where, as before, C_T and C̄_T are constants undetermined by the conformal symmetry, referred to as the central charges. Again using the projections (A.9), (A.10) and the Todorov operator (A.3), we get the indexful expression (A.17) for the Euclidean two-point function. This expression is automatically conserved. As in the case of conserved currents, reflection positivity imposes constraints on the central charges C_T and C̄_T: plugging (A.17) into (B.15) and using (B.13), we find that both must be real and obey the inequalities (A.18).

Notice the presence of parity odd terms in d = 2, both in (A.12) and (A.17). No such terms can be constructed in d ≥ 3. In general, it can be shown that two-point functions of local primary operators transforming in an irreducible Lorentz representation have a single tensor structure, see for example [66]. In d = 2, however, both J^a and T^{ab} transform in a reducible representation of the Lorentz group [42]. As we will shortly see, they can be split into irreducible representations which have a single tensor structure in their two-point functions.
Conventions in d=2
Let us now summarize the standard d = 2 notation. One defines the complex coordinates (A.19). In these coordinates one can define the holomorphic and anti-holomorphic components of the spin one Lorentz operators, with the coefficients k and k̄ given in (A.23).

[42] In d = 2 the Lorentz group is SO(2), with U(1) being its double cover. As a result, the SO(2) representations can also be labeled by the U(1) charges. For instance, the spin one SO(2) representation is the direct sum of the ±1 U(1) charges; analogously, the spin two representation is the direct sum of the ±2 U(1) charges.
From (A.15) it follows that k ≥ 0 and k̄ ≥ 0. As an example one can take the free theory of a massless Dirac fermion, which has k = k̄; using (A.23), these values can be read off from equation (5.6) in [49] [43]. Analogously, for the stress-tensor we can define the holomorphic and anti-holomorphic components, with coefficients c and c̄ defined accordingly; conformal invariance implies Θ(z, z̄) = 0. From (A.18) we conclude that c > 0 and c̄ > 0. In a free theory of a single real scalar, and also in a free theory of a single Dirac fermion, c = c̄ = 1, see equations (5.5) and (5.6) in [49].
[43] For an alternative derivation see also appendix C of [16].
B Unitarity
Unitary quantum field theories are defined to have non-negative norms for all their states. Consider some state |ψ⟩; unitarity then requires ⟨ψ|ψ⟩ ≥ 0, see (B.1). In the more complicated situation when we have N states |ψ_I⟩ with label I = 1, . . . , N, the above condition becomes the semi-positivity requirement (B.2) on the N × N hermitian matrix of inner products. In what follows we use (B.1) and (B.2) to derive some concrete constraints on two-point functions. We give the discussion in the Euclidean and in the Lorentzian signatures separately.
B.1 Implications in Euclidean signature
We start with the Euclidean signature. As indicated in the main text, we pick the first coordinate and assign to it the role of Euclidean time. Hermitian conjugation of local operators, contrary to the Lorentzian signature, is very non-trivial in the Euclidean signature. With the choice of Euclidean time made above, the hermitian conjugation of a generic real [44] operator with spin takes the form (B.4) [45], with suitably defined prefactors κ. Giving special names to the relevant coordinates, we then choose the state (B.7), where I is a collective index for the indices a, b, . . .
Second, if the operator is a vector, the condition (B.8) reads as (B.10).

[44] One defines real and complex operators in the Lorentzian signature and then analytically continues to the Euclidean signature; see appendix B.2 for some details.
[45] For the derivation of (B.4) see section 7.1 of [67].
[46] The choice of the state (B.7) is very particular. More generally, one should define a state by smearing the operator with some "test" function. By changing the test function one changes the state; as a result one obtains an infinite number of smeared constraints (B.8).
As an application, consider the parity even part of the two-point function (2.4). Plugging the resulting expression into (B.10) and using Sylvester's criterion for semi-positive definiteness of a real matrix, we get conditions on the constants. When dealing with two-point functions in conformal field theories, it is convenient to write their tensor structures explicitly in a "reflection-positive" frame (B.13). Finally, consider the case of the stress-tensor. The reflection-positivity condition (B.8) becomes a 4 × 4 block matrix spanned by the collective indices I and J, where I ≡ ab = {00, 0i, j0, ij}, J ≡ cd = {00, 0k, l0, kl} and i, j, k, l = 1, . . . , d − 1. In order to write a compact formula we also define the object K in (B.14). Then the condition (B.8) in terms of (B.14) reads

\[
\begin{pmatrix}
+K^{00;00} & +K^{00;0l} & +K^{00;k0} & +K^{00;kl}\\
-K^{0i;00} & -K^{0i;0l} & -K^{0i;k0} & -K^{0i;kl}\\
-K^{j0;00} & -K^{j0;0l} & -K^{j0;k0} & -K^{j0;kl}\\
+K^{ij;00} & +K^{ij;0l} & +K^{ij;k0} & +K^{ij;kl}
\end{pmatrix}\succeq 0 .
\]
B.2 Implications in Lorentzian signature
Consider now the Lorentzian space. We denote the Lorentzian time and the spatial coordinates in the usual way. Consider some real local operator with spin. Hermitian conjugation has a very straightforward action on such an operator in Lorentzian space, see (B.17). The coordinates x^µ are mostly real; however, we often include a small imaginary part in the time component in order to regularize two-point functions, see appendix C. This is the reason why we kept x* on the right-hand side of (B.17). Consider now the state [47]

|ψ⟩ ≡ O(x*)|0⟩,  (B.18)

[47] The same comment as in footnote 46 applies here.
where O is some real scalar local operator and, as in section 3, we use coordinates with a small imaginary time part. Similarly, for a Lorentz spin one operator we can construct analogous states. The unitarity condition (B.2), together with (B.17), then implies the constraint (B.22) on the ordered two-point function. It is also important to note that the reality condition (B.17) poses further constraints on ordered two-point functions of real operators; consider, for example, the case of conserved currents, see (B.23). As an example, let us consider the ordered two-point function (B.24) of conserved currents in a Lorentzian conformal field theory, with the definitions (B.25); the Levi-Civita symbol obeys ε₀₁ = −ε^{01} = +1. The expression (B.24) can either be derived from scratch, adapting appendix A to the Lorentzian signature, or simply translated from the Euclidean expression (A.12) using (C.15). Plugging (B.24) into (B.23) and taking into account the properties which follow from the definitions (B.25), one concludes that C_J and C̄_J are purely real. Plugging (B.24) into (B.22) we obtain the condition (B.27). From (B.27) we get conditions identical to the ones obtained in the Euclidean metric and given in (A.15).
C Euclidean vs. Lorentzian operators
Here we will discuss Euclidean and Lorentzian correlators. We then provide a formal way to define the latter as various analytic continuations of the former. Part of the discussion here is based on section 7 and appendix B of [67].
Euclidean correlators. In Euclidean space, two- (and higher-) point correlation functions are computed using the path integral approach; they are denoted as in (C.1). We introduce the subscript E for both the coordinates and the operators in order to emphasize that we work in the Euclidean metric. By construction, the correlation function (C.1) is time-ordered with respect to Euclidean time. We can also reinterpret the correlator (C.1) as the vacuum expectation value of local operators in some Hilbert space [48]. This is done as follows. The vacuum expectation value of two local operators is denoted by (C.3); the order of the operators in this expression is important. The correlator (C.3) makes sense only if x⁰_E > y⁰_E. This is easy to see by rewriting (C.3) as in (C.4), where H is the Hamiltonian of the system; here we simply used translation invariance.

[48] One can think of states and operators as vectors and matrices in an infinite-dimensional space. The vacuum state is the state with the lowest energy.
where P^a are the generators of translations and H ≡ P⁰. The Hamiltonian H is an infinite-dimensional matrix with non-negative eigenvalues; in other words, the eigenvalues of H are bounded from below. The operator e^{−H(x⁰_E − y⁰_E)} has all finite eigenvalues only if x⁰_E > y⁰_E; it becomes unbounded from above if x⁰_E < y⁰_E, in which case (C.4) formally diverges. The only way to avoid this and to define the two-point correlation function for any values of x_E and y_E is given in (C.6). By construction this is the time-ordered (with respect to Euclidean time) correlation function; we refer to it as the Euclidean correlator. The equivalence between the path integral formulation (C.1) and the operator formulation (C.6) relates the two representations.

Lorentzian correlators. Let us now consider the vacuum expectation value of local operators in the Lorentzian signature. This quantity is not well-defined, since it generically contains singularities when (x_L − y_L)² = 0, not only at coincident points x^µ_L = y^µ_L. In order to define the above vacuum expectation value correctly, one should specify how to deal with these singularities. In practice we allow a small imaginary part for the Lorentzian time and then send it to zero. Several different prescriptions exist; they define different types of correlators, namely Wightman, time-ordered (Feynman), advanced and retarded correlators. For instance, the Wightman function is defined as in (C.9), with ε₁ > ε₂. This is the simplest possible prescription. One can now use the Wightman correlator to define all the other types of correlators (instead of going through various prescriptions); for example, the time-ordered (Feynman) correlator is defined as in (C.10). By definition, the Wightman two-point function (C.9) is a distribution: when integrated with a test function, the iε prescription leads to an unambiguous result. The time-ordered two-point function (C.10), instead, is not a distribution, due to the presence of the step functions.
Relation between Euclidean and Lorentzian correlators
We can formalize the discussion of Lorentzian two-point functions by defining them as various analytic continuations in time of the Euclidean two-point function. Let us denote the Euclidean time by t_E ≡ x⁰_E and the Lorentzian time by t_L ≡ x⁰_L. Let us start from the Euclidean correlator with times shifted by ε₁ and ε₂, where ε₁ and ε₂ are real Euclidean times which obey ε₁ > ε₂. For ε₁ > ε₂ only the first term in (C.6) survives, and we can thus drop the subscript E. We then analytically continue this object to complex times and send both ε₁ and ε₂ to zero. The resulting object formally defines the Wightman two-point function, see (C.12). Here we have decided to relate the Euclidean and Lorentzian times as in (C.13). The equality between the last two entries in (C.12) leads to a formal relation between the Euclidean and Lorentzian scalar local operators. For local operators with spin, the relation between Euclidean and Lorentzian operators is more complicated; for instance, for vector operators we have (C.15). The analytic continuation which follows this path is known as the Wick rotation. Without loss of generality, let us set y = 0 using translation invariance. Using the Wick rotation one can define the (Feynman) time-ordered two-point function as in (C.18). Using the definition of the Euclidean propagator on the right-hand side of (C.18), the last equality can be rewritten using the fact that t_L ≈ +ε if t_L > 0 and t_L ≈ −ε if t_L < 0. Using the first line of (C.12), one then sees the equivalence between the resulting expression and (C.10).
D Källén-Lehmann representation in Euclidean signature
The Källén-Lehmann representation of the Lorentzian time-ordered two-point functions was derived in section 3.2. In this section we translate those results to the Euclidean signature. To do so we apply a change of variables in agreement with (C.13). Let us start with the Källén-Lehmann representation for scalar operators given by (3.55). Performing the above change of variables, we get its Euclidean counterpart; see appendix B of [16] for some additional details. Notice the absence of the iε, since there are no poles at real p² to be regularized. The Källén-Lehmann representations for the conserved currents and the stress-tensor were derived in (3.59) and (3.64). Analogously to the scalar case, one gets (D.3) and (D.4), where we have defined the Euclidean projectors in coordinate space.
E Scalar propagators
In this appendix we compute the explicit form of the scalar Euclidean, Wightman, Feynman, retarded and advanced propagators. In what follows we completely ignore the case of coincident points; in other words, we take x ≠ 0.
Let us first consider space-like separation of points. In this situation we can perform a Lorentz transformation to set x⁰ = 0; we can then perform the integral (E.4) as follows.
(E.6) In the first line of (E.6) we have switched to spherical coordinates in d − 1 dimensions and defined k ≡ |k⃗| together with χ ≡ |x⃗|. We performed the integration over d − 3 angles to obtain the spherical angle Ω_{d−2}, where

\[
\Omega_n \equiv \frac{n\,\pi^{n/2}}{\Gamma(n/2 + 1)}.\qquad (E.7)
\]
The variable η reflects the integration over the last remaining angle; for details see formulas (A.3)-(A.6) in [16]. In the second line of (E.6) we use result 3.387 2 of [68]; the function J_n(x) stands for the Bessel function of the first kind. Finally, in the third line of (E.6) we use formula 6.564 1 of [68]; the function K_n is called the Bessel function of the second kind. Let us now consider time-like separation of points, namely x² < 0. In this situation we can perform a Lorentz transformation to set x⃗ = 0. We can then perform the integral (E.4) as in (E.8). In the first line of (E.8) we switch to spherical coordinates in d − 1 dimensions and define k ≡ |k⃗|. In the second line of (E.8) we perform yet another change of variables, with ξ ≡ √(1 − s⁻¹k²). Finally, in the last line of (E.8) we use formula 3.387 4 of [68]; notice that it was crucial to have a small imaginary part in order to define the integral properly. The function H⁽¹⁾_n is called the Hankel function of the first kind. Notice also that the time component x⁰ does not have a definite sign.
We still need to perform some work to bring the result (E.8) to its final form. To do so, we split (E.8) into two parts, one with x⁰ > 0 and one with x⁰ < 0; using properties of the Hankel functions we then obtain (E.9). We can now perform a Lorentz transformation in order to write the expressions (E.6) and (E.8) in a generic frame. Effectively this is done by replacing χ → √(x²) and x⁰ → ±√(−x²). Taking this and (E.9) into account, we arrive at the final expression (E.10) for the scalar Wightman propagator.
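For orientation, the space-like branch of the final result takes the standard massive-propagator form; this is a sketch with m = √s, up to the normalization conventions of (E.10):

\[
\Delta_W(x;s)\Big|_{x^2>0}=\frac{1}{(2\pi)^{d/2}}\left(\frac{\sqrt{s}}{\sqrt{x^2}}\right)^{\frac{d-2}{2}}K_{\frac{d-2}{2}}\!\big(\sqrt{s\,x^2}\big),
\]

which is the familiar Bessel-K form of the Euclidean massive propagator and indeed stays finite as s → 0 for d ≥ 3. The time-like branch involves the Hankel functions H⁽¹⁾ instead, with a sign-dependent choice dictated by θ(±x⁰).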
Feynman propagator
The scalar Feynman propagator was defined in (3.55); using (E.10) we can obtain its explicit expression, given in (E.11). For completeness, let us also introduce the retarded and advanced propagators, −i∆_R(x; s) ≡ +θ(+x⁰) D(x; s) together with its advanced counterpart, where the function D(x; s) is defined in (E.13). To obtain this result we plugged (E.10) into the first line of (E.13) and rewrote the sum of two Hankel functions as a Bessel function of the first kind.
The results for the Feynman, retarded and advanced propagators are well known in d = 2 and d = 4 dimensions; see for example [69] for a clean summary. The scalar Feynman propagator in general dimensions was also computed in [70], see formula (27) there. Our results match the ones present in the literature. In order to perform the comparison, note the properties (E.14) of the special functions, together with their specializations for even d ≥ 2, where Y_n is the Neumann function and z is real.
Consistency check
For space-like separation of points, x² > 0, the Euclidean, Wightman and Feynman propagators are simply related, see (E.17). For time-like separation of points, x² < 0, one should be able to obtain the expressions (E.10) and (E.11) by analytic continuation of the Euclidean result. This re-derivation should be seen as a consistency check of the above computations. For simplicity, let us work in the frame where x⃗ = 0, in other words x² = −t²_L, where t_L is the Lorentzian time. Let us start with the Wightman propagator. Using the first line of (C.12) and the explicit expression (E.3), we get (E.19).
By splitting this expression into the two distinct cases t_L < 0 and t_L > 0, we can take the limit explicitly. The Bessel functions of the second kind are related to the Hankel functions via the relations (E.21), where z > 0.
The resulting expression is (E.23). As before, using the fact that lim_{ε→0} √(−s t²_L + iε) = +i√(s t²_L), together with the first entry in (E.21), we arrive at (E.24).
Here we have also used the second entry in (E.14). This expression is equivalent to (E.11) when x⃗ = 0, in other words when −x² = t²_L.
Massless limit
Let us study the massless limit s → 0 of the explicit expressions for the Euclidean and Wightman propagators, see (E.25). In the first expression γ is the Euler constant. We see that for d ≥ 3 the scalar Wightman propagator is completely finite at s = 0, while in d = 2 it has a divergent part. For space-like separation of points, x² > 0, due to the relation (E.17), the identical expression holds for the Euclidean and Feynman propagators. For time-like separation of points in d ≥ 3 we get the combination θ(x⁰) + (−1)^{d−2} θ(−x⁰) entering (E.26), up to O(s) corrections. Spinning Wightman propagators are obtained by taking derivatives with respect to the coordinates; thus, spinning propagators are finite at s = 0 for d ≥ 2. The identical conclusion holds for time-like separation.
F Spectral densities and central charges: technical details
In section 2 we have derived the sum rules for the central charges C J and C T in terms of the Euclidean two-point functions of the conserved current and the stress-tensor respectively. They are given in (2.12) and (2.32). In this appendix we will derive their implications for the spectral densities.
We can plug the expressions (D.3) and (D.4) into the sum rules (2.12) and (2.32) and obtain the desired relation between the central charges and the spectral densities. The details of these manipulations are subtle; in what follows we carefully derive the result for conserved currents and then simply state the answer for the stress-tensor. We explicitly evaluate the relevant function by using (E.3) and taking its derivatives; performing the straightforward algebra we get (F.3). We can now plug the function (F.3) into (F.1), exchange the order of the integrals and change variables from r to the dimensionless quantity u ≡ r√s, obtaining (F.4). It is important to stress that we cannot simply permute the limits with the integration in (F.4). To see this, consider for example the limit r_max → ∞: due to (F.6) the integrand can still give a finite contribution. We are thus required to split the integral over s into three pieces, see (F.8). Here the cut-off parameters Λ_min and Λ_max can be chosen arbitrarily, since nothing depends on them explicitly. At the end of the analysis we take their values to be very small and very big respectively; this explains the order of limits in (F.8).
We take the limits in r_min and r_max under the integral in (F.8) where this is allowed and use (F.6). We also replace the spectral density ρ¹_J by the CFT ones at small and large energies. We then arrive at the expression (F.9). We can now plug in the explicit expression (4.12) of the spectral density and obtain the final result. In d = 2 both the UV and IR CFT spectral densities vanish; as a result we are left only with the second term in (F.9). We thus get the final answer. Taking the limits [49], we arrive at a non-obvious equality, which we have also checked numerically.
Stress-tensor
Analogously, for the stress-tensor, let us plug the spectral decomposition (D.4) into the sum rule (2.32). One gets an expression in which we have defined

R_{abcd}(x) Π₂^{ab;cd}(∂; s) ∆_E(x; s).  (F.15)
We also remind the reader that the object R_{abcd}(x) was defined in (2.33).
Applying logic identical to the case of conserved currents, in d = 2 we simply get the corresponding sum rule.
G Form factor normalization
The stress-tensor defines the generators of translations, see (G.1). Let us now evaluate the matrix element of P^µ between one-particle states. By convention, the one-particle states obey the normalization

⟨m₁, p₁|m₂, p₂⟩ = 2p⁰₁ δ_{m₁m₂} × (2π)^{d−1} δ^{(d−1)}(p⃗₁ − p⃗₂).  (G.2)

[49] Notice that after taking the limits r_min and r_max, nothing depends on Λ_min and Λ_max.
Since the one-particle states are eigenstates of translations, for identical particles one gets

⟨m, p₁|P^µ|m, p₂⟩ = 2p⁰₁ p^µ₁ × (2π)^{d−1} δ^{(d−1)}(p⃗₁ − p⃗₂),  (G.3)

which follows from the definition of the one-particle states, see (G.4). Let us now recall the definition (6.5) of the stress-tensor form factor. Using crossing symmetry we conclude that [50]

⟨m, p₁|T^{µν}(0)|m, p₂⟩ = F^{µν}_T(p₁, −p₂).  (G.7)

Plugging (G.7) into (G.6) and using (G.4), we can rewrite the result as the following normalization condition on the stress-tensor form factor:

lim_{p₂→−p₁} F^{0µ}_T(p₁, p₂) = 2p⁰₁ p^µ₁.  (G.9)

The remaining task is to find the consequences of the condition (G.9) for the components of the stress-tensor form factor. In order to do that, we recall the decomposition of the stress-tensor form factor into tensor structures given by (6.8); here it reads

F^{µν}_T(p₁, p₂) = −F⁽⁰⁾(s) × [(p₁ + p₂)² η^{µν} − (p₁ + p₂)^µ (p₁ + p₂)^ν] + F⁽²⁾(s) × (p₁ − p₂)^µ (p₁ − p₂)^ν.  (G.10)

Compared to (6.8), we have slightly redefined the tensor structures in order to remove the kinematic singularities; the relation between the components of the form factor in (6.8) and (G.10) is given in (G.11).

[50] The crossing equations for the form factors in 2d are discussed, for example, in [17] and [71]. In general dimensions they can be derived in the QFT framework using the LSZ procedure. For the derivation of crossing equations in the case of scalar form factors in 4d, see chapter 7.2 of [72].
Plugging (G.10) into (G.9) we obtain

lim_{s→0} F⁽⁰⁾(s) = −const, lim_{s→0} F⁽²⁾(s) = 1/2,  (G.12)

where const is an undetermined constant not fixed by the normalization condition (G.9); the minus sign is introduced for convenience. We can now translate the result (G.12) into the original components of the form factor (6.8). We simply have

lim_{s→0} s⁻¹ F⁽⁰⁾(s) = const, lim_{s→0} F⁽²⁾(s) = −2m².  (G.13)

Furthermore, using the expression (6.10) of the trace of the stress-tensor form factor in terms of the F⁽⁰⁾(s) and F⁽²⁾(s) components, together with the normalization conditions (G.13), we obtain the normalization of the trace of the stress-tensor form factor,

lim_{s→0} F_Θ(s) = −2m².  (G.14)

For works on the stress-tensor form factor normalization in the presence of particles of non-zero spin, see for example [73,74] and references therein.
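As a consistency check of (G.14), one can trace (G.10) directly, using only the on-shell relations (p₁ + p₂)² = −s, (p₁ − p₂)² = s − 4m² and η_{µν}η^{µν} = d:

\[
F_\Theta(s)=\eta_{\mu\nu}F^{\mu\nu}_T=-(d-1)(p_1+p_2)^2\,F^{(0)}(s)+(p_1-p_2)^2\,F^{(2)}(s)=(d-1)\,s\,F^{(0)}(s)+(s-4m^2)\,F^{(2)}(s),
\]

so in the limit s → 0 the first term drops out and F_Θ(0) = −4m² F⁽²⁾(0) = −2m², in agreement with (G.14).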
"Physics"
] |
A deep learning-based intelligent decision-making model for tumor and cancer cell identification
ABSTRACT
INTRODUCTION
Deep learning (DL) plays a crucial role in healthcare systems, especially in the area of disease prediction [1]. As tumors evolve, it becomes harder for current algorithms and models to keep up. In the brain, tumors come in 120 different varieties; there are both rapidly developing and slowly growing types. Rapidly expanding tumors, which can progress into cancer, pose the greatest risk to people of all ages, and experts find it difficult to use automation to determine the stage of a tumor's progression [2]. Meanwhile, non-communicable diseases (NCDs) have become the leading cause of death worldwide. For a long time now, scientists have been working on AI-based systems that use decision-making strategies to lessen the burden of disease prediction [3]. Disease prediction problems can also be tackled with the use of computer-based clinical decision support systems (CDSSs) [4]. A CDSS can be used to classify the progression of cancer in a wide range of tumor types.
Several different types of automated methods, such as content-based image retrieval (CBIR), have been developed to identify brain cancers. The primary focus of this method is on comparing the outcomes of both low- and high-level visual feature extraction from magnetic resonance imaging (MRI) scans [5]; these tiers are utilized in order to reduce the search space. Disease prediction using IoT devices at the edge is a common application of edge computing [6], and edge-based methods are utilized to locate the true borders of malignancies. Using high-quality images, cancers can be detected with generative adversarial networks (GAN) [7]. When training on MRI scans of brain tumors, the convolutional neural network (CNN) is crucial for extracting high-quality features from the images. Various approaches, including segmentation, extraction, and detection from MRI images, are utilized to identify tumor regions; segmentation can, for example, separate the cerebral venous system using a fully automated algorithm [8]. Combining tumor segmentation with a CNN yields improved results. Both 3D CNN and U-Net, as segmentation networks, demonstrate superior performance in accurately predicting tumor regions, leading to precise tumor detection [9].
To address the issues raised in [10], we propose the intelligent decision-making approach model (IDMA) in this study. The proposed approach was designed with human brain tumor analysis in mind. DL is now the most popular choice for a wide variety of sophisticated applications; the primary goals of DL algorithms are speeding up computation and handling massive amounts of data. Input images are analyzed by the pretrained model for illness patterns, and sophisticated filters remove noise from the input images to produce high-resolution outputs. In this study, the high-resolution images obtained from the noise filters are used to identify malignant growths. Based on the assessed attributes of tumors, IDMA can identify them as either malignant or benign. To further improve the proposed system, a reinforcement learning method has been implemented; this learning method enhances both the accuracy and the efficiency with which tumors are located. Figure 1 is a representation of the IDMA architecture for classifying tumors as malignant or benign.
Figure 1. Proposed model framework
In section 1 of the literature review, we examine how disease prognosis is predicted using a variety of existing methodologies that are part of DL models. These algorithms' benefits and weaknesses, as well as how well they function, are discussed. The pre-trained model VGG-19, optimal threshold segmentation, and CNN are all discussed in section 2 of the paper. IDMA and the bilateral filter are both explained there. After discussing the implementation outcomes in section 3, a conclusion is presented in section 4. A health cyber-physical system (CPS) was proposed by Zhang et al. [11] for designing patient-focused healthcare delivery. The cloud and large amounts of data are integrated into this system. The proposed method involves accumulating several layers. When compared to earlier methods, the CPS's performance was a success. The semantic segmentation strategy was proposed for tongue tumor segmentation by Trajanovski et al. [12]. When applied to the HSI data, the proposed method yields reliable outcomes. To identify persistent MB signatures, Hyun et al. [13] created a 4-layer CNN. The superior performance of the 4-layer CNN can be attributed to its dual-mode channel design. A DL method was proposed by Krijgsman et al. [14] for the classification of breast cancers. The author created a method for identifying CD8+ cells in breast MRI scans, which are indicative of malignancy. The proposed method also aimed to identify dense areas. The multi-view knowledge-based collaborative (MV-KBC) approach was introduced by Xie et al. [15], and it can separate cancerous tumor cells from chest computerized tomography (CT) scans. Chest image characteristics were dissected using the 3D lung nodule method. To improve the detection rate, the KBC used ResNet-50 to create three distinct types of image patches. In the end, the results from this model were superior because they were more reliable. The three-step preprocessing technique (PPA) developed by Musallam et al. [16] is optimal for MRI scans. Glioma, meningioma, and pituitary tumors were all detected using the deep convolutional neural network (DCNN) employed by the PPA. This model is often referred to as a lightweight model because it contains relatively few layers. The PPA was able to reach 99% accuracy.
The local noise power spectra (NPS) were used by Divel and Pelc [17] to provide a framework for noise reduction in a given input image. The Fourier transform eliminates noise caused by localized areas by taking the square root of the spatial correlation between them. The spatially correlated noise is used to overlap the patches using the standard deviation. Image compression using a deep wavelet autoencoder (DWA), which combines the default reduction approach with an auto-encoder to break down a wavelet-transformed image, was first introduced by Mallick et al. [18]. A DNN is combined with the DWA to perform the classification. The DWA-DNN outperforms standard DNN methods in terms of accuracy. Noreen et al. [19] introduced a new feature extraction model that diagnoses brain cancers in their early stages; the performance of the classification technique is enhanced by employing two pre-trained models, namely Inception-v3 and DenseNet201.
In order to categorize brain tumors, Gumaei et al. [20] created an enhanced approach to extract the characteristics by employing the regularized extreme learning machine (RELM). Edge and location contrast in brain MRI images can be enhanced with the help of the preprocessing approach min-max normalization. The RELM is then used in the categorization process. The proposed strategy outperforms prior methods in terms of classification accuracy.
The combined multi-model fusion and CNN strategy was first introduced by Li et al. [21]. This method is a natural progression from 2D-CNN to 3D-CNN, allowing different model features to be used in 3D space for the extraction of brain lesions. The suggested method incorporates various potential layers, including pooling and the softmax layer, to improve performance. In this case, the loss function is employed to fine-tune the improved feature learning of the lesion region. When compared to the 2D-CNN, the accuracy has increased. The hybridized fully convolutional neural network (HFCNN) was proposed by Dong et al. [22] and is used for liver tumor segmentation. This method aids in the diagnosis of liver cancer in the patient. The HFCNN is a powerful tool for analyzing liver cancer.
To detect malignancies in MR images, Majib et al. [10] presented the VGG-SCN. Images of tumors and healthy tissue are automatically sorted by VGG-SCN. The proposal addresses multiple problems by facilitating faster and more precise work. Khan et al. [23] presented new research on a wide range of neurological disorders, including Alzheimer's disease, brain tumors, Parkinson's disease, and more. Several feature extraction methods, pre-trained models, and pre-processing methods are explored. Twenty-two datasets are investigated in this study to identify neurological disorders. A deep learning system (DLS) with data augmentation and transfer learning was proposed by Anaya-Isaza et al. [24]. To effectively detect cancers, the researchers employ ResNet50, a pre-trained network. Liver tumors were discussed by Zhang et al. [25]. Liver tumors can be spotted in a patient's CT scan images. Tumor segmentation is refined using a unique level-set method. The probability distribution of the liver tumor is calculated using fuzzy C-means (FCM) clustering. Segmentation is carried out using this method on two different datasets. Table 1 provides a comparative analysis of various deep learning models, their respective performances, and the datasets utilized in the study:

  Noreen et al. [19]       Inception-v3 and DenseNet201   Brain MRI         99.34 and 99.51
  Gumaei et al. [20]       RELM                           Brain MRI         Accuracy 94.23%
  Li et al. [21]           DenseNet-121                   NIH Chest X-ray   AUC 91%
  Dong et al. [22]         MobileNet-v2                   CAMELYON16        Accuracy 91%, AUC 96%
  Anaya-Isaza et al. [24]  ResNet-50                      TCGA-LGG          F1 score 92.34%
  Zhang et al. [25]        A 2D U-Net and a 3D FCN        MICCAI 2017       AUC 97.5%
METHOD
In order to detect malignancies in any human tissue, the authors of this research presented the IDMA. Several procedures are used to identify cancer cells: i) VGG 19; ii) preprocessing with a bilateral filter; iii) segmentation; iv) CNNs for classification; and v) cancer cell detection using IDMA. Regarding the extraction of features from the ImageNet dataset, we relied on the VGG19 pre-trained model. The input tumor MRI images are segmented using an optimal threshold, and then the tumor pictures are fed into a CNN for successful classification. The entire process of identifying the tumor and cancer cells using MRI images of the brain is depicted in Figure 1. Using sophisticated image filters like the bilateral filter (BF), we aim to eliminate the background noise in tumor images. BF's purpose is to smooth pictures and eliminate noise, allowing for clearer visualization of tumors. Then, we used our IDMA method to locate and detect cancer cells in the provided tumor images. Automatic brain tumor identification will help doctors and nurses quickly diagnose their patients and begin appropriate treatment.
VGG 19 model
For the provided training images, we utilize a pre-trained model called VGG-19, shown in Figure 2. To provide superior precision in large-scale image processing, this model employs 19 layers of 3×3 convolution filters at a stride of 1. VGG19 is a crucial component of this effort due to its efficacy as a model for accurately extracting features from huge datasets. In this study, we apply VGG19 to improve the precision of brain tumor categorization. The 19 layers of this model are broken down as follows: 16 are convolutional layers used for feature extraction, while 3 are dedicated to picture classification. Each of the five groups of feature extraction layers ends in a max-pooling layer. The model's output depicts the object in an input image of 224 by 224 pixels.
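To make the feature-extraction step concrete, the following is a minimal sketch of how a pretrained VGG-19 can be loaded and applied in Keras (the library named in the implementation section). The choice of include_top=False, which keeps only the 16 convolutional layers, and the random stand-in image are illustrative assumptions rather than the paper's exact configuration.

import numpy as np
from tensorflow.keras.applications import VGG19
from tensorflow.keras.applications.vgg19 import preprocess_input

# Load the ImageNet-pretrained model without its 3 dense classification
# layers, keeping the 16 convolutional feature-extraction layers.
base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

def extract_features(batch):
    """batch: float array of shape (n, 224, 224, 3), RGB values in 0-255."""
    return base.predict(preprocess_input(batch.astype("float32")))

# A random array stands in for a preprocessed 224x224 MRI slice.
features = extract_features(np.random.uniform(0, 255, (1, 224, 224, 3)))
print(features.shape)  # (1, 7, 7, 512): maps from the last pooling layer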
Preprocessing
The bilateral filter combines a Gaussian over the spatially dense neighborhood of a pixel with a Gaussian over intensity, so that smoothing preserves regions of extreme shade variation. This filter shines at the edges of the input image. Its pixel-level application to an image I can be expressed as

I'(x, y) = (1/Wp) Σ_(i,j) I(i, j) · ws(i, j) · wr(||I(i, j) − I(x, y)||)

where the input image pixel at location (x, y) has intensity value I(x, y), and I'(x, y) is the intensity value of the filtered pixel at coordinates (x, y) in the final image. The spatial weight ws(i, j) is a Gaussian function of the distance between pixel (i, j) and the center pixel (x, y). The range weight wr(||I(i, j) − I(x, y)||) is a Gaussian function of the intensity difference between (i, j) and (x, y). The normalization constant Wp is the total of the weights, in terms of both distance and intensity, over all neighboring pixels. The pixel brightness in the provided input image was calculated using (2), where C is an N-by-N square matrix, N is the number of grayscale levels, and each entry C(i, j) of the matrix has a specific definition given by (2). The width and height of the image I are denoted by the parameters W and H, respectively.
If an entry is 1, the intensity of the pixels is very high; if it is 0, the pixel intensity is very low.
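A minimal NumPy sketch of the bilateral filter formula above follows; the window radius and the two Gaussian widths are illustrative assumptions (in practice a library routine such as OpenCV's cv2.bilateralFilter would be used).

import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Naive bilateral filter for a 2-D grayscale image (float array)."""
    H, W = img.shape
    out = np.zeros_like(img, dtype=float)
    ax = np.arange(-radius, radius + 1)
    dx, dy = np.meshgrid(ax, ax)
    w_s = np.exp(-(dx ** 2 + dy ** 2) / (2 * sigma_s ** 2))  # spatial weights
    pad = np.pad(img, radius, mode="edge")
    for y in range(H):
        for x in range(W):
            win = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range weights from intensity differences to the center pixel.
            w_r = np.exp(-(win - img[y, x]) ** 2 / (2 * sigma_r ** 2))
            w = w_s * w_r
            out[y, x] = np.sum(w * win) / np.sum(w)  # normalize by Wp
    return out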
Segmentation
Segmenting an image into its foreground elements and its background elements is what image thresholding does.In this method, pixel values are assigned based on the specified thresholds.Thresholding is typically performed on grayscale images in computer vision.
Optimal threshold
Image thresholding is the method used to determine the unique areas in the input MRI image and estimate the threshold values. To minimize segmentation-based pixel misclassification, the best possible threshold is used. An iterative method determines the optimal threshold by measuring the misclassification loss of a pixel, with the background pixel PDF calculated as part of this iteration. In optimal thresholding, assigning a pixel to a class (foreground or background) based on its intensity value and the selected threshold results in an error that is quantified by the misclassification loss of that pixel. For each given image, the optimal threshold is the value of the threshold that produces the least amount of misclassification. The misclassification loss of a pixel in a binary image, where 1 and 0 represent the foreground and background classes, is defined as follows (a code sketch follows this list):
− If a pixel is mislabeled as background while it actually belongs to the foreground class (its true label is 1), the misclassification loss is 1.
− If the pixel's true label is 0 (i.e., it belongs to the background class), and its predicted label is 1 (i.e., it belongs to the foreground class), then the misclassification loss is also 1.
− If the pixel is correctly labeled, the misclassification loss is zero.
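The paper does not spell out the iteration itself, so the sketch below uses the standard Ridler-Calvard style iterative threshold as a stand-in, together with the 0/1 misclassification loss defined above; the stopping tolerance is an assumption.

import numpy as np

def optimal_threshold(img, eps=0.5):
    """Iteratively estimate a threshold separating foreground from background."""
    t = float(img.mean())                    # initial guess
    while True:
        fg, bg = img[img > t], img[img <= t]
        if fg.size == 0 or bg.size == 0:     # degenerate split: stop
            return t
        t_new = 0.5 * (fg.mean() + bg.mean())
        if abs(t_new - t) < eps:             # threshold has stabilized
            return t_new
        t = t_new

def misclassification_loss(pred, truth):
    """Per-pixel 0/1 loss for binary masks, as defined in the list above."""
    return (np.asarray(pred) != np.asarray(truth)).astype(int)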
Convolutional neural networks
By training the network using a dataset of tagged pictures of tumor and non-tumor cells, CNN techniques can be used to categorize tumor and non-neoplastic cells. During training, the network is fed a series of images and its weights are adjusted so that the error between the anticipated label and the actual label is as small as possible. The trained network can then be used to label previously unseen images.
The input images are often preprocessed to improve the properties of interest, such as the form, size, and texture of the cells, before being used for tumor and non-tumor cell classification. After the photos have been preprocessed, they are fed into a CNN, which consists of several convolutional layers to extract features and one or more fully connected layers to do the classification. The last layer produces a probability distribution over the tumor and non-tumor classes; the class with the highest probability is selected as the anticipated label. To indicate the ReLU's integration in the CNN, we write f(k) = max(0, k).
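A minimal Keras sketch of such a classifier follows; the exact layer sizes are illustrative assumptions, but the structure (convolutional feature extractors with ReLU f(k) = max(0, k), then fully connected layers ending in a two-class softmax) matches the description above.

from tensorflow.keras import layers, models

def build_classifier(input_shape=(224, 224, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),   # feature extraction
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),       # classification head
        layers.Dense(2, activation="softmax"),     # P(tumor), P(non-tumor)
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model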
Methods used to identify cancer cells
In each determined area, a filtered (noise-free) image is captured showing tumors or tumor classifications. Each pixel in the suspected cancerous area has its color intensity calculated and compared to a predetermined threshold; if the pixel's value is lower than the threshold, it is colored black, and if it is higher, it is colored white. Finally, the percentage of incorrectly labeled pixels is computed.
Each pixel in an image must be compared with its ground-truth label and its anticipated label in order to calculate the percentage of erroneously classified pixels. To determine how many correct identifications, incorrect identifications, false positives (FP), and false negatives (FN) there were, we can utilize a confusion matrix. An illustration of how to determine the fraction of improperly labeled pixels: i) find the image's pixel count N; ii) compute the number of mislabeled pixels as M = FP + FN; iii) determine the proportion of mislabeled pixels as P = (M / N) × 100%.
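In code, steps i) to iii) reduce to a few lines; the toy masks in the example are hypothetical.

import numpy as np

def pct_mislabeled(pred, truth):
    """P = (M / N) * 100, where M = FP + FN and N is the pixel count."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    n = truth.size                               # step i): pixel count N
    fp = np.sum((pred == 1) & (truth == 0))      # false positives
    fn = np.sum((pred == 0) & (truth == 1))      # false negatives
    return (fp + fn) / n * 100.0                 # steps ii) and iii)

print(pct_mislabeled([[1, 0], [1, 1]], [[1, 0], [0, 1]]))  # 25.0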
Decisions based on the accurate classification of cancer cells using deep learning can have a major impact on patient outcomes. Early detection and diagnosis of cancer can increase the likelihood of a positive outcome from treatment. Moreover, correct categorization of cancer cells might aid in directing therapy decisions and tailoring treatment strategies to patients.
RESULTS AND DISCUSSION
All algorithms were written in the Python programming language and implemented using high-powered library packages such as Pandas, Keras, NumPy, Matplotlib, Seaborn, and scikit-learn, on a high-configuration computer with an 11th-generation processor, 8 GB of RAM, and a 1 TB hard drive.
Dataset details
The Br35H dataset is split across two different folders, one for development and one for evaluation. There are a total of 1,000 training MRI pictures, 500 of which are benign and 500 of which are malignant. There are a total of 1,000 MRI scans in the testing folder, 600 of which are of tumors and 400 of which are of healthy tissue; among the 600 tumor scans, some are cancerous and others are not. The confusion matrix is applied to the data, taking into account parameters like sensitivity/recall, accuracy/precision, F1-score, and time in milliseconds. Tables 2 and 3 illustrate the results of a comparison between the performance produced by our proposed model and the performance of existing models for the detection of tumors and cancer cells.
In comparison to all of the other models, our suggested model demonstrated superior classification accuracy for malignant and tumorous growths.
Figures 3 and 4 display bar graphs indicating the performance of tumor and cancer cell identification. Our proposed model demonstrates superior accuracy compared to the other models in the comparison. The steps taken to locate and identify the brain tumor in the MRI image are depicted in Figure 5: Figure 5(a) shows a normal MRI image, Figure 5(b) the noise-removed image, Figure 5(c) the segmented image, and Figure 5(d) the tumor-affected region. Noise is removed from the input image and optimal thresholding is applied. The elimination of noise has a noticeable effect on productivity. Partitioning of the segmented image follows this process. The noise-reduced input image is used for edge identification to locate the true boundaries in the MRI images. The affected region of the tumor is shown in red in Figure 5(d).
Table 1. Comparison of existing surveys in the literature
Table 2. The performance of different models for the detection of tumor cells
Table 3. The performance of different models for the detection of cancer cells
| 4,110.4 | 2024-02-01T00:00:00.000 | ["Medicine", "Computer Science"] |
Stefan Blowing Impacts on Unsteady MHD Flow of Nanofluid over a Stretching Sheet with Electric Field, Thermal Radiation and Activation Energy
In this paper, a mathematical model is established to examine the impacts of Stefan blowing on the unsteady magnetohydrodynamic (MHD) flow of an electrically conducting nanofluid over a stretching sheet in the existence of thermal radiation, Arrhenius activation energy and chemical reaction. It is proposed to use the Buongiorno nanofluid model to synchronize the effects of magnetic and electric fields on the velocity and temperature fields to enhance the thermal conductivity. We utilized a suitable transformation to simplify the governing partial differential equations (PDEs) into a set of nonlinear ordinary differential equations (ODEs). The obtained equations were solved numerically with the help of the Runge–Kutta 4th-order method using the shooting technique in a MATLAB environment. The impact of the developing flow parameters on the flow characteristics is analyzed appropriately through graphs and tables. The velocity, temperature, and nanoparticle concentration profiles decrease for various values of the involved parameters, such as hydrodynamic slip, thermal slip and solutal slip. The nanoparticle concentration profile declines in the manifestation of the chemical reaction rate, whereas a reverse demeanor is noted for the activation energy. The validation was conducted using earlier works published in the literature, and the results were found to be incredibly consistent.
Introduction
Nanofluids contain suspended nanoparticles with a size of less than a hundred nm, which are used for improving thermal conductivity. Nanofluids have wide applications in the fields of coolants [1], reactors [2], bubble absorption technologies [3], pharmaceutical processes [4], refrigerator chiller systems [5], aggregate manufacturing [6], heat exchangers [7] and solar collectors [8]. Nowadays, nanofluids are developed for medical applications in treating brain tumors, heart surgery, cancer therapy and safe surgery by cooling. Masuda et al. [9] performed spearheading research on ultrafine scattered particles that were steadily suspended in a base fluid. Afterward, Choi et al. [10] named the suspension of ultrafine particles in a base fluid "nanofluids". The study of nanofluids in the scope of the boundary layer was started by Buongiorno [11], who emphasized the mechanisms of Brownian diffusion in what afterward became known as the Buongiorno model. The main feature of nanofluids is their ability to improve the efficiency of heat transfer apparatus, as recorded by Phillpot et al. [12]. Khairul et al. [13] examined the results regarding the magnificent thermal assets of nanofluids. Numerous noteworthy works have investigated the nanofluid boundary layer flow. Bagh et al. [14] examined the significance of nanoparticles for the dynamics of the boundary layer rotating flow and found that Brownian motion is responsible for enhancing the heat transfer of the base fluid. Bagherzadeh et al. [15] studied the dispersion of nanoparticles and boundary layer flow in a microchannel subject to a magnetic field. Ali et al. [16] conducted an analysis of the magnetic dipole influence on single- and multi-wall carbon nanotubes to examine the dynamics of the micropolar boundary layer flow. Irfan et al. [17] discussed the effect of activation energy and binary chemical reaction on the dual-nature structure of the time-dependent flow of Carreau magnetite nanofluid over a stretching/shrinking sheet. He demonstrated that Brownian motion and the thermophoresis of nanoparticles both escalate with growing values in the lower solution of the temperature distribution, while showing the opposite behavior in the upper solution. Maleki et al. [18] studied the Brownian motion impacts on the transportation of heat. Pordanjani et al. [19] discussed the thermal radiation and Brownian motion inside a cavity using nanoparticles. Kashif et al. [20] studied the significance of radiation and Soret effects on hybrid-based nanoparticles along the upright channel flow.
The past examinations neglected the electrically conducting character of the liquid. A lot of liquids mixed with salts transfer electrical charges. The Swedish physicist Hannes Alfven (1942) discovered the new field of physics, magnetohydrodynamics (MHD). Nowadays, MHD is a noteworthy nuclear engineering scheme. Similarly, with the imposition of a magnetic field, it is possible to deal adequately with the heat transfer rates in waterways, tubes, etc. The driving force of MHD is essentially the electromagnetic force, which can also be used to energize the transport of charged particles [21]. Magnetohydrodynamic flow with all its characteristics is still complex in the fields of atomic coolant pumping [22] and tokamak liquid metal structures [23]. Maleki et al. [24] analyzed the influence of the slip condition on the heat transfer and dynamics of nanofluids over a porous plate. Gireesha et al. [25] conducted an analysis of the magnetohydrodynamic boundary layer flow and the heat transfer of nanofluids over a flat stretching sheet. Several numerical studies have been reported to predict the characteristics of magnetized fluids, such as MHD mixed convection flows [26], MHD non-Newtonian fluid flows [27], the MHD impact on nanofluid flows [28] and the magnetic dipole effects on micropolar fluid flows [29,30].
The characteristic of the activation energy is that it is the minimum energy required to begin a specific chemical reaction. Generally, the relationship between a chemical reaction and mass transfer is complicated. It can be checked by the digestion of reactant species and fabrication at different rates in the mass transfer process and liquid flow. Arrhenius [31] proposed the use of species chemical reactions along with the Arrhenius activation energy for the first time ever. He said that a threshold energy is required to make the molecules or atoms in the chemical system work to initiate a chemical reaction. The value of the activation energy also imposes a great influence on nanoparticles' movement in basic carrier fluids. Recently, valuable works have targeted the activation energy effects on nanofluids. A binary chemical reaction and activation energy effects on the dynamics of the tangent hyperbolic nanofluid flow are considered by Ali et al. [32]; the Eyring-Powell nanofluid subjected to activation energy is studied by Reddy et al. [33]; the chemical reaction and thermodiffusion effects on Casson nanofluid dynamics are analyzed by Faraz et al. [34]; and the Maxwell nanofluid along with activation energy is examined by Ali et al. [35].
Far-field and wall conditions are significant in the problem of the convective transport of nanofluids. Fang and Jing [36] investigated the impacts of the Stefan blowing on species transfer in the transport of nanofluids, and then Fang [37] further extended Ref. [36] to the time-dependent analysis. It was found that the blowing of nanoparticles results in the enhancement of the blowing velocity and mass gradient. Hamid et al. [38] mentioned the potential application of Stefan blowing in the paper drying processes and studied the impacts of Stefan blowing on the mass transfer near the stagnation surface. Lund et al. [39] analyzed the significance of the Stefan blowing in the dynamics of Casson nanofluids subject to a stretching sheet. The species transfer was found to fluctuate over the flow field affected by the mass blowing at the wall. Uddin et al. [40] took the Stefan blowing into consideration and investigated the effects of multiple-slip by using the MATLAB nonlinear equation solver fsolve and ODE solver ode15s.
The above literature review suggests the Stefan blowing effects are of practical significance in various settings. Motivated by this fact, we intend to investigate the Stefan blowing effects on the unsteady magnetohydrodynamic flow of nanofluid over a stretching surface under the joint action of the electric field, activation energy and thermal radiation. The investigated problem has a configuration similar to the work of Daniel et al. [41], but Daniel et al. did not consider the Stefan blowing effects, activation energy and chemical reaction in their investigation. We used the appropriate similarity transformation to change the system of nonlinear governing equations to a system of nonlinear ordinary differential equations (ODEs). We also solved the nonlinear ODEs numerically with the Runge-Kutta 4th-order method with the shooting technique. The influences of the inserted parameters on the local Nusselt number −θ′(0), skin friction −f″(0) and Sherwood number −φ′(0) are distinguished with the help of the numerical solutions attained by the Runge-Kutta shooting technique. Some industrial applications of the current study are found in paper manufacturing, the extrusion of plastic sheets and glass blowing. The comparison between recently published information in the literature and the information revealed in this work suggests that the conclusions from this study constitute a magnificent improvement in the understanding of the Stefan blowing effects.
Physical Model and Mathematical Formulation
We consider the time-dependent magnetohydrodynamic (MHD) mixed convection, 2D incompressible, electrically conducting, laminar and viscous flow of a nanofluid over a stretching sheet, which is examined in the presence of chemical reaction, Arrhenius activation energy and viscous dissipation. The flow is subjected to the transverse magnetic and electric field strengths B and E, which are assumed to be applied in the direction y > 0 and normal to the surface (see Figure 1). The magnetic and electric fields follow Ohm's law, J = σ(E + V × B), where V, J and σ represent the velocity of the fluid, the Joule current and the electrical conductivity, respectively. The Joule (current) effect mentioned also plays a role in the transport of a charged particle in the electromagnetic field [42]. Since the magnetic Reynolds number is small, the induced magnetic field and Hall current effects are ignored [43]. The impact of Stefan blowing has also been considered. The velocity of the linearly stretching sheet is u_w(x, t) and the velocity of mass transfer is v_w(x, t), where the x-axis and y-axis are taken along and normal to the stretched sheet and t is time. Suppose that the values of the ambient concentration and temperature are symbolized by ζ_∞ and T_∞, respectively. It is also assumed that the temperature and concentration at the surface have the constant values T_w and ζ_w. Using these assumptions, the boundary layer equations governing the conservation of mass, linear momentum, thermal energy and nanoparticle volume fraction are written in vector form, where the coefficients denote, respectively, the external mechanical body forces, the dynamic viscosity, the density of the nanofluid, the specific heat, the heat capacity of the nanoparticles, the gravitational acceleration, the volumetric thermal and solutal expansion coefficients and the viscous dissipation function. On the right-hand side, kr²(T/T_∞)ⁿ exp(−E_a/(K_B T)) represents the modified Arrhenius function, where kr is the chemical reaction rate and n the fitted rate constant.
In light of the above-mentioned assumptions, the governing equations are written following [11,44,45] and the boundary conditions following [46,47], where α, σ*, K* and (ρc)_f stand for the thermal diffusivity, the Stefan-Boltzmann constant, the mean absorption coefficient and the heat capacity of the liquid, respectively. The formulation also involves the slip factor of temperature, the slip factor of velocity, the slip factor of concentration, the capacity ratio and the activation energy E_a. We then transform the flow problem into a dimensionless form by using dimensionless quantities [48,49].
In view of Equation (11), the system of nonlinear ODEs, Equations (12)-(14), is obtained from Equations (6)-(8), and the transformed boundary conditions, Equations (15) and (16), follow from Equations (9) and (10). Here the dimensionless velocity, temperature and concentration are f, θ and φ, respectively. The unsteadiness parameter is denoted by σ_t; L_φ is the slip parameter of the solutal concentration; λ = Gr/Re² is the mixed convection parameter, where Gr is the Grashof number and Re = u_w x/ν is the Reynolds number; N_r represents the buoyancy ratio parameter; Pr = ν/α is the Prandtl number; Nb is the parameter of Brownian motion; Nt is the parameter of thermophoresis; A_e = E_a/(K_B T_∞) is the activation energy parameter; and the last quantity is the temperature relative parameter.
Physical Quantities
The most important relationships of practical concern in the present exploration are the local skin friction coefficient C_f = τ_w/(ρ u_w²), together with the local Nusselt and Sherwood numbers introduced above.
Execution of Method
The numerical solution of the ordinary differential Equations (12)-(14) with the boundary conditions shown in Equations (15) and (16) has been obtained by using the Runge-Kutta 4th-order method with the shooting technique (see Figure 2). Different analytical and numerical methods have been implemented to examine the nonlinearity in the flow process; the most commonly used approximate analysis methods for flow problems include HAM (homotopy analysis method), VPM (variation of parameters method), ADM (Adomian decomposition method), VIM (variational iteration method), etc. However, the Runge-Kutta 4th-order method is more effective. This method with the shooting technique is a powerful scheme for solving ODEs: it solves boundary value problems precisely, adequately and rapidly. The Runge-Kutta strategy has been utilized in commercial software, for example, ADINA, ANSYS, ABAQUS, MATLAB and so on. We transformed Equations (12)-(14) into a set of first-order ordinary differential equations (ODEs) by introducing the new variables y_1 through y_7. To solve the resulting first-order initial value problem via the shooting technique, seven initial conditions are needed. Consequently, we estimated the three unknown conditions y_3(0) = a_1, y_5(0) = b_1 and y_7(0) = c_1. The suppositions for these three unknown missing conditions are selected such that the three boundary conditions at η → ∞ are closely fulfilled. To improve the missing initial conditions, Newton's iterative structure is applied until the desired approximation is reached. For the several developed parameters, the calculation is carried out on a bounded domain [0, η_max] in place of [0, ∞), where η_max is a positive real number chosen so that the resulting values do not change significantly for larger η_max. The criterion for stopping the iterative process is max{|y_2(η_max) − 0|, |y_4(η_max) − 0|, |y_6(η_max) − 0|} ≤ ζ_1, where ζ_1 is a very small positive real number.
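The following Python sketch illustrates the Runge-Kutta shooting loop just described (the paper worked in MATLAB). Since the full system (12)-(14) is not reproduced here, the classical Blasius boundary-layer equation f''' + 0.5 f f'' = 0 with f(0) = f'(0) = 0 and f'(η_max) → 1 stands in for it, and a bracketing root-finder replaces the Newton iteration; both substitutions are assumptions made purely to show the scheme.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

ETA_MAX = 10.0  # truncated domain [0, eta_max] in place of [0, infinity)

def rhs(eta, y):
    f, fp, fpp = y                      # y1 = f, y2 = f', y3 = f''
    return [fp, fpp, -0.5 * f * fpp]    # first-order form of the ODE

def residual(guess):
    # Integrate with the guessed missing initial condition f''(0) = guess,
    # then measure how far the far-field condition f'(eta_max) = 1 is missed.
    sol = solve_ivp(rhs, [0.0, ETA_MAX], [0.0, 0.0, guess], rtol=1e-8)
    return sol.y[1, -1] - 1.0

# Shooting step: root-find the missing initial slope.
fpp0 = brentq(residual, 0.1, 1.0)
print(f"f''(0) = {fpp0:.5f}")           # approx. 0.33206 for Blasius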
Results and Discussion
In this article, we discuss the numerical results of the dimensionless concentration, velocity and temperature profiles for different flow parameters, such as the Stefan blowing parameter S_B, the hydrodynamic slip parameter L_f, the electric current E_I, the solutal buoyancy ratio parameter N_r, the unsteady parameter σ_t, the magnetic parameter M, the thermal buoyancy λ, the radiation parameter Rd, the thermal slip parameter L_θ, the Eckert number Ec, the solutal slip parameter L_φ, the Brownian motion parameter Nb, the activation energy A_e, the thermophoresis Nt, the Schmidt number Sc, the chemical reaction rate Ω and the Prandtl number Pr. We verified our results against the existing literature (Tables 1 and 2) before presenting the final results. In Table 1, we compared our results for the skin friction −f″(0) for different values of the parameters σ_t and M with the works of Ali et al. [50] and Liaqat et al. [51]; an excellent agreement has been achieved. Excellent agreement with previous articles has also been accomplished for the heat transfer rate (see Table 2).

Figure 3a indicates that the velocity component reduces with increasing magnetic field strength. Exposing an electrically conducting nanofluid to a transverse magnetic field causes a surge in a resistive force recognized as the Lorentz force, which tends to inhibit the fluid velocity. From Figure 3b, it is clear that the momentum boundary layer rises with an increase in λ. Physically, the velocity profiles rise as more buoyancy forces are added with increases in the buoyancy parameter λ. The f′(η) profile shows a rising trend against a higher input of S_B: the fluid velocity is intensified for Stefan blowing (S_B = 1.0) as compared with mass suction (S_B = −1.0). The injection of tiny particles (nanoparticles) through the boundary energizes species diffusion, while the evacuation of tiny particles inhibits diffusion, so growing values of blowing increase the fluid velocity. The influence of the buoyancy ratio parameter and the unsteadiness parameter together with the Stefan blowing effects on the dimensionless velocity profile is demonstrated in Figure 4a,b. Figure 4a shows that an increasing value of the buoyancy ratio causes the fluid velocity to decline, and Figure 4b shows a similar trend against the growing strength of the unsteadiness parameter. The distribution shows that the thermal diffusion from the sheet surface to the ambient condition is prominent over the mass buoyancy forces of the tiny particles, so the strength of the convection current decreases the fluid velocity. Figure 5a,b show the impact of the electric field parameter E_I and the hydrodynamic slip parameter L_f with Stefan blowing S_B on the non-dimensional velocity profile. The effect of E_I is demonstrated in Figure 5a: as the electric field parameter increases, the momentum boundary layer increases significantly above the sheet. The electric field, associated with the Lorentz force, in general prompts stress exchange among the nanomaterial particles, changing the effective viscosity of the dispersion and yielding improvement because of energy transformations. Figure 5b shows that the escalation of the velocity slip parameter causes a reduction in the velocity profile. Figure 6a,b show the behavior of the nanofluid temperature profile θ(η) for distinct values of Rd, M and S_B.
The temperature distribution increases with an increase in M, Rd and S_B. As the intensity of M increases in an electrically conducting nanofluid, it starts to create a resistive force (the Lorentz force) and the nanoparticles dissipate the energy as heat; therefore, the thermal boundary layer gets thicker (see Figure 6a). The effect of conduction increases with growing values of radiation, and the fluid temperature rises at each point away from the surface, which intensifies the temperature (see Figure 6b). The impact of the Eckert number is exposed in Figure 7a: the Eckert number acts to expand the temperature due to frictional heating. Normally, such heat transfer arises in electronic chips, power generation systems, liquid metal fluids, as well as the cooling of nuclear reactors. The ratio between the flow kinetic energy and the enthalpy difference of the boundary layer is called the Eckert number. Physically, the temperature can be increased due to the multiplication of the Joule heating effect from the product of the magnetic parameter and the Eckert number. Figure 7b uncovers the physical attributes of the thermal slip boundary L_θ on θ(η); the impact is identical to that on the velocity profile (see Figure 5b). Basically, the temperature is represented by a nonlinear connection between the traction and slip in the polymer melt. The temperature decreases radically under permeable circumstances. Figure 8a,b present the behavior of the nanofluid temperature profile for different values of the unsteady parameter σ_t and thermal buoyancy λ. The impact of unsteadiness σ_t on the dimensionless temperature distribution θ(η) (see Figure 8a) is identical to its effect on the velocity profile (see Figure 4b). The performance of the thermal buoyancy parameter λ on θ(η) is presented in Figure 8b: the temperature of the nanofluid along the extending sheet is generous at the forced convection flow location (λ = 1.0), which results in different values of Stefan blowing; for the heated extending surface location (λ > 1.0), the convection current diminishes the temperature profile. The effect of Nb on the temperature is shown in Figure 9a. Basically, Brownian motion supports heating the fluid in the boundary layer and immediately reduces particle deposition away from the fluid on the surface; therefore, as Nb rises, the temperature increases. Figure 9b outlines the influence of the variation of Nt on the temperature profile: the thickness of the thermal boundary layer grows as Nt increments. In reality, nanoparticles move from the hot surface to the cold ambient liquid, so the temperature of the boundary layer rises, which impacts the development of the thermal boundary layer thickness. Figure 10a,b present the behavior of the nanoparticle concentration profile for different values of the thermophoresis parameter Nt, the Brownian motion parameter Nb and the Stefan blowing parameter S_B. The impression of the thermophoresis parameter Nt on the concentration profile (see Figure 10a) is identical to that on the temperature profile θ(η) (see Figure 9b). The impact of Nb on the distribution of the concentration is shown in Figure 10b: the concentration profile is diminished because of the uneven movement of the nanoparticles in the framework. This reinforces the thermal exhibition as the energy changes hugely, prompting a decrease in the layer thickness.
The behavior of σ_t on the concentration profile (see Figure 11a) is the same as on the velocity profile (see Figure 4b). Figure 11b indicates the behavior of L_φ on the concentration distribution: the liquid particles exhibit a flow conduct influenced by the solid boundary, and the thickness of the solutal layer and the concentration decrease radically underneath the linear permeable surface. Figure 12a,b display the impact of the activation energy parameter A_e and the chemical reaction parameter Ω on the nanoparticle concentration profile. Figure 12a emphasizes the graphical potential of the activation energy A_e on the concentration φ(η): an increasing activation energy parameter A_e asserts an expanding concentration distribution. The activation energy has a major effect in numerous interesting manners and assumes an important job in improving the reaction phenomenon. Figure 12b manifests that a stronger Ω leads to a decrease in the nanoparticle concentration. The clarification for this conduct is that the destructive chemical rate upgrades the mass transfer rate and results in a decrease in the nanoparticle concentration. These factors influence the dampness and temperature fields, causing the obliteration of yields due to freezing, the transfer of vitality to the drizzly cooling tower, and so on. The effect of the Schmidt number is shown in Figure 13a: expanding the Schmidt number relates to a low Brownian dispersion coefficient, which prompts a short penetration depth for the concentration profile; subsequently, the concentration at the surface abates with the expanding impact of the Schmidt number. Figure 13b illustrates the result of N_r on φ(η): the concentration profile boosts up with increasing N_r.
Conclusions
In this study, we examined the unsteady electrical MHD mixed convection flow and heat transfer of a nanofluid on a linearly stretching sheet in the existence of thermal radiation, magnetic and electric fields, chemical reaction, activation energy and the Stefan blowing effects. The outcomes of the current work are confirmed as:
• The growing values of the magnetic field M, solutal buoyancy N_r and hydrodynamic slip parameter L_f slow down the fluid velocity, but the velocity profile rises against growing values of the thermal buoyancy λ and electric current parameters.
• The temperature profile θ(η) rises with rising values of the Brownian motion Nb, magnetic field M, Eckert number Ec, thermophoresis Nt and thermal radiation Rd.
• The fluid velocity, temperature and nanoparticle concentration profiles are observed to deteriorate with augmentation of the unsteadiness parameter σ_t.
• The temperature of the fluid declines with increasing values of the thermal slip L_θ and thermal buoyancy λ parameters.
• Due to an enhancement in the activation energy A_e, thermophoresis Nt and buoyancy ratio N_r parameters, the nanoparticle concentration profile φ(η) increases.
• With increasing values of the Brownian motion Nb, L_φ, Ω and Sc, the nanoparticle concentration profile φ(η) decreases.
• The nanoparticle volume fraction, temperature and velocity functions are decreased against wall suction S_B = −1.0 and enhanced against wall injection S_B = 1.0.
| 5,510.4 | 2021-08-30T00:00:00.000 | ["Physics", "Engineering", "Materials Science"] |
An improved MIMLRBF natural scene image classification based on spectral clustering
Natural scene image classification problems can be modeled by the multi-instance multi-label (MIML) learning framework, and the MIMLRBF algorithm has achieved good results. The MIMLRBF algorithm is based on clustering technology and a neural network for classification. Related experiments show that the distance measure between packages and the selection of the cluster centers have an important impact on the result of image classification. In order to obtain better clustering accuracy, this article first introduces the spectral clustering method into the training process, which makes the sample package centers more reasonable; second, we redefine the distance between sample packages, to effectively overcome the influence of isolated examples on the distance between sample packages. The experimental results show that the proposed approach can effectively improve the classification accuracy, and it is better than the MIMLRBF algorithm on the various performance measures.
Introduction
An early method to solve the problem of MIML (multi-instance multi-label) learning is to decompose it into multi-instance or multi-label problems. This degradation strategy ignored the correlations between instances and labels and had the disadvantages of low classification accuracy and long running time. The neural network is a kind of practical technique in machine learning, and it plays a role in the problem of MIML. Zhang M-L [7] proposed MIMLRBF (multi-instance multi-label radial basis function) based on the neural network; the algorithm made full use of the relationships between instances and labels and achieved better effects. One key of the MIMLRBF algorithm is obtaining the clustering centers. Another key of the MIMLRBF algorithm is the way of measuring the distance between two packages, and the performance of the algorithm was improved when improving the distance between two packages in the literature [6] and [7].
In this paper, in order to obtain better clustering accuracy, we make the following improvements based on the MIMLRBF algorithm framework. The first improvement is using the method of spectral clustering instead of k-medoids in the training process to obtain the clustering centers. Not all instances are effective expressions of the target characteristics. The k-medoids clustering algorithm cannot effectively eliminate the effects of these ineffective instances, and the clustering centers of the image packages are not accurate, while the spectral clustering algorithm can effectively tap the similarity of the samples, which can effectively improve the accuracy of the clustering centers. The other improvement is improving the way of measuring the distance between two packages by redefining the distance between two packages.
Improved way of measuring the distance between two packages
The Hausdorff distance is a way of measuring the distance between two point sets, and it has been applied well in some algorithms. The Hausdorff distance can be divided into three kinds: the maximum Hausdorff [6], the minimum Hausdorff [6] and the average Hausdorff [7]. Experiments show that the average Hausdorff gets the best performance on the problem of MIML. The average Hausdorff is the average of the minimum distances between every sample of one package and all samples of the other package. When solving for the average value, sometimes individual far-away instances may increase the distance between two packages and reduce the contribution of some minimum distances between the two packages, and sometimes individual close instances may also affect the real distance between two packages. On this issue, the improved algorithm makes a further revision to the average Hausdorff through a linear combination with the maximum Hausdorff and the minimum Hausdorff, and proposes the weighted distance formula:

D(X1, X2) = w1 · avgH(X1, X2) + w2 · maxH(X1, X2) + w3 · minH(X1, X2)    (1)

where avgH(X1, X2), maxH(X1, X2) and minH(X1, X2) respectively express the average, maximum and minimum Hausdorff distances between the two point sets X1 and X2, and w1, w2, w3 are the combination weights.
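A short NumPy sketch of the three Hausdorff variants and their weighted combination follows; the weight values are illustrative placeholders, since the paper's coefficients are not reproduced here.

import numpy as np

def _min_dists(A, B):
    """For every instance a in bag A, the distance to its nearest b in B."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return D.min(axis=1)

def hausdorff_distances(A, B):
    dA, dB = _min_dists(A, B), _min_dists(B, A)
    avg = (dA.sum() + dB.sum()) / (len(A) + len(B))   # average Hausdorff
    mx = max(dA.max(), dB.max())                      # maximum Hausdorff
    mn = min(dA.min(), dB.min())                      # minimum Hausdorff
    return avg, mx, mn

def weighted_bag_distance(A, B, w=(0.8, 0.1, 0.1)):
    """Linear combination of the three variants, as in formula (1);
    the weights here are assumed placeholders, not the paper's values."""
    avg, mx, mn = hausdorff_distances(A, B)
    return w[0] * avg + w[1] * mx + w[2] * mn

# Example with two packages of nine 15-dimensional instances (as in SBN).
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(9, 15)), rng.normal(size=(9, 15))
print(weighted_bag_distance(X1, X2))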
Clustering center based on spectral clustering
This paper adopts the k-way Ncut algorithm. Let {(X_i, Y_i)} be the training sample sets, where X_i denotes the image packages, each expressed as nine 15-dimensional feature vectors, Y_i denotes the pre-determined sets of labels, N is the number of samples, and the number of labels is m. Firstly, solve the distances between every two image packages of the N training samples according to formula (1) and construct the N × N distance matrix of the training sample sets. Then use the standard normalized Laplacian matrix

L = D^(-1/2) (D − S) D^(-1/2)    (2)

where D is the degree matrix and S is the similarity matrix. Reduce the dimension of the distance matrix of the samples, and construct the new distance matrix of the samples with higher similarity. In the end, cluster through the k-medoids clustering algorithm and solve for the clustering center of each cluster.
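The step can be sketched as follows in NumPy; the Gaussian scale sigma used to turn distances into similarities is an assumed choice, and the final k-medoids call on the embedded rows is left abstract.

import numpy as np

def spectral_embed(dist, k, sigma=1.0):
    """Ncut-style embedding of N packages from their pairwise distances
    (built with formula (1)); rows of the result are then clustered."""
    S = np.exp(-dist ** 2 / (2 * sigma ** 2))          # similarity matrix
    d_inv_sqrt = np.diag(1.0 / np.sqrt(S.sum(axis=1)))
    L = np.eye(len(S)) - d_inv_sqrt @ S @ d_inv_sqrt   # normalized Laplacian (2)
    vals, vecs = np.linalg.eigh(L)                     # ascending eigenvalues
    U = vecs[:, :k]                                    # k smallest eigenvectors
    return U / np.linalg.norm(U, axis=1, keepdims=True)

# The rows of spectral_embed(dist, k) are handed to k-medoids, and the
# medoid of each cluster is kept as a center, as described above.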
The hidden layer nodes use the Gaussian kernel φ_i(X) = exp(−‖X − c_i‖² / (2σ_i²)), where X is the image package of the input sample, and c_i and σ_i are respectively the center parameter and width parameter of the i-th hidden layer node.
In the training process, according to the distances between the input sample and the centers retained in the process of spectral clustering, the method of gradient descent is used to train the weights. In the test process, the input sample gets its output value according to its distances to the centers retained in the process of spectral clustering and the weight function.
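A simplified sketch of this forward pass and one gradient-descent step is given below; it assumes a squared-error loss and trains only the output weights, which is a simplification of the full training procedure.

import numpy as np

def rbf_forward(dists, widths, W):
    """dists: distances from one input package to the retained centers;
    Gaussian hidden units phi_i = exp(-d_i^2 / (2 sigma_i^2)); W maps
    the hidden units to the m label outputs."""
    phi = np.exp(-dists ** 2 / (2 * widths ** 2))
    return phi, phi @ W

def train_step(dists, target, widths, W, lr=0.01):
    """One gradient-descent update of W under squared error."""
    phi, out = rbf_forward(dists, widths, W)
    err = out - target                  # prediction error on this sample
    W -= lr * np.outer(phi, err)        # dE/dW = outer(phi, err)
    return W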
Experimental design
The data sets consist of 2,000 natural scene images belonging to the classes desert, mountains, sea, sunset and trees, and each image has already been split into nine subdomains by the SBN method, with every subdomain represented by a 15-dimensional feature vector. There are 22.85% of samples belonging to more than one class.
The basic procedure of the MIMLRBF algorithm is as follows: 1) Input the sample sets {(X_1, Y_1), ..., (X_N, Y_N)} with the label set {y_1, ..., y_m}; use the method of ten-fold cross validation to divide these samples into 10 portions, and select nine of them as the training set T and the remaining one as the test set R.
2) For each label l, take U_l as the package sets with the label l; divide U_l into several unrelated groups through the spectral clustering algorithm and the improved distance formula (1); solve for and retain the centers of each cluster. 3) Train the network model according to the distances between the input samples and the clustering centers retained in the process of spectral clustering. 4) For each test sample, solve for the output value of this test sample on each label according to formula (3).
Experimental results
In this experiment, two improvements are mainly put forward based on the original MIMLRBF algorithm. One is to improve the way of measuring the distance between two sample packages, and the other is using the method of spectral clustering instead of k-medoids in the training process. Below are the experiments aimed at the two improvements respectively.
The effect of the improved distance formula on the algorithm
In this experiment, we only use the distance formula (1) to measure the distance between two instance packages, and the other parts are the same as MIMLRBF. We call it MIMLRBF-DISTANCE, and the experimental result is shown in Table 1. It can be seen from Table 1 that the algorithm performance is improved.
The effect of the spectral clustering on the algorithm
In this experiment, we use the method of spectral clustering instead of k-medoids in the training process to cluster the samples, and the other parts are the same as MIMLRBF. We call it MIMLRBF-SC, and the experimental result is shown in Table 1. It can be seen from Table 1 that the algorithm performance is improved. Note: "↑" shows that bigger is better, "↓" shows that smaller is better.
Experimental analysis
Table 1 shows the experimental results of these algorithms on the same sample sets, and the data with better performance are displayed in bold type. As an accurate measurement of the distance between two packages plays a key role in the classification results for the MIML problem, the performance of MIMLRBF-DISTANCE is better than that of MIMLRBF. As the spectral clustering algorithm is based on the characteristics of the correlation and can dig out the similarity between samples, which effectively improves the accuracy of the clustering centers, the performance of MIMLRBF-SC is better than that of MIMLRBF. The imMIMLRBF algorithm combines the above improvements, and its performance is the best.
Conclusions
In this paper, we study the problem of MIML based on the RBF neural network, and introduce the spectral clustering algorithm into the problem of MIML to improve the RBF neural network. There are many instances in image packages without effective information about the target characteristics. Although the k-medoids algorithm can eliminate the impact of these instances without effective information to a certain extent, the effect is not very good. However, the spectral clustering algorithm clusters based on the characteristics of the correlation, so it can get more accurate clustering centers when clustering samples. The experiments in this paper have also proved that the improved algorithm with the introduction of spectral clustering can get a better classification effect.
Fig 1 is the framework of the algorithm in this paper; the whole process is divided into a training process and a test process. The training process has two steps. The first step is to train the clustering centers according to the distance matrix constructed from the distance between every two SBN (Single Blob with Neighbors) packages, then train the clustering center of every group with the method of spectral clustering. The second step is to train the neural network according to the distances between the samples and the clustering centers calculated in the first step. In the test process, output values are obtained according to the distances between the sample and the clustering centers retained in the training process and the network model trained in the training process.
2.4 Train and test based on RBF neural network
The MIMLRBF neural network trains samples with a two-layer network structure, as shown in Fig 2. This paper uses the mathematical model of the RBF network with a single hidden layer of Gaussian kernel functions.
Table 1. Performance comparisons of the improved algorithms
| 2,300.2 | 2016-12-31T00:00:00.000 | ["Computer Science"] |
Protection Against SQL Injection Attack in Cloud Computing
Cloud computing is a promising paradigm that allows customers to obtain cloud resources and services according to an on-demand, self-service, and pay-by-use business model. There are a number of web application threats in cloud computing, one among them being the SQL injection attack (SQLiA). In this attack, the attackers produce a query of their interest to gain illegal access to the database. To prevent this, the Twofish encryption algorithm is used to secure the client's sensitive information. In this scheme, the file uploaded by the data owner is encrypted using the Twofish algorithm and then stored in the database.
Data which is stored in the cloud is secure from accidental erasure, which is related to security. If one machine in the cloud crashes, the data is duplicated on other devices in the cloud.
C. SQL Injection Attack (SQLIA) Process
Data-driven websites are vulnerable to SQL injection attacks, where the database is a black box in a three-tier architecture. In this architecture, the SQL statements are generated in response to HTTP requests. These HTTP requests may have parameters that are utilized by attackers to generate a query of their concern and gain illegal access to the database, as shown in Fig. 2.

// connect to the database
mysql_connect($servername, $username, $password);
// store the user input collected from the login form
$username = $_POST['username'];
$password = $_POST['password'];
// dynamically build the query from the user input
$query = "SELECT * FROM the_users WHERE username='$username' AND password='$password'";
// execute the query
$result = mysql_query($query);
if ($result) return true;
else return false;

Fig. 5. PHP code snippet to generate a dynamic query in response to client input.
Next, in Fig. 8, the user attempts a simple SQLIA on the same form to bypass the authentication.
SELECT * FROM tbl_users WHERE username='user_Name' OR 1=1 --' AND password='Whatever'

Fig. 8. Dynamically generated query in reply to the above input.
In Fig. 8, the attacker tries to bypass the password check by using the comment operator, as everything after the comment operator is ignored, even the password check. In these circumstances, the username condition is forced to be true using the OR operator. This is the simple situation; with different methods, intruders can add queries of their interest to gain access to data of their interest.
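The mechanics can be reproduced in a few lines of Python (used here only for readability; the function name is hypothetical): string concatenation pastes the attacker's input straight into the SQL text.

def build_query(username, password):
    # Vulnerable: user input is concatenated directly into the SQL text.
    return ("SELECT * FROM tbl_users WHERE username='" + username +
            "' AND password='" + password + "'")

print(build_query("alice", "secret"))
# SELECT * FROM tbl_users WHERE username='alice' AND password='secret'

print(build_query("user_Name' OR 1=1 --", "Whatever"))
# SELECT * FROM tbl_users WHERE username='user_Name' OR 1=1 --' AND password='Whatever'
# OR 1=1 makes the WHERE clause true for every row, and the comment
# operator (--) discards the password check entirely.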
E. Consequences of SQL Injection Attacks
To gain information about database fingerprints like the type of database, the SQL dialect used, etc.; this information helps the attacker proceed with or use more sophisticated attacks.
1) To gain knowledge about user credentials.
2) To get the database schema.
3) To extract and modify the database.
4) To carry out denial of service, like shutting down the database, dropping tables, etc.
5) To alter files with false or forged information.
6) To execute remote commands.
7) Shoplifting, account balance changes.
8) To interact with the underlying operating system.
II. TWOFISH ENCRYPTION ALGORITHM
Twofish is our submission to the AES selection process. It meets all the required NIST criteria (128-bit block; 128-, 192-, and 256-bit keys; efficiency on various platforms; etc.) and some strenuous design requirements of our own.
A. TWOFISH DESIGN GOALS
Twofish was intended to meet NIST's design criteria for AES. Specifically, they are: • A 128-bit symmetric block cipher.
• Efficiency, both on the Intel Pentium Pro and on other hardware and software platforms.
• Flexible design: e.g., accept additional key lengths; be executable on a wide variety of platforms and applications; and be appropriate for a stream cipher, hash function, and MAC.
• Simple design, both to ease analysis and to ease implementation.
Furthermore, we imposed the following performance criteria on our design:
• Accept any key length up to 256 bits.
• Encrypt data in less than 500 clock cycles per block on an Intel Pentium, Pentium Pro, and Pentium II, for a fully optimized version of the algorithm.
• Be capable of setting up a 128-bit key (for optimal encryption speed) in less than the time required to encrypt 32 blocks on a Pentium, Pentium Pro, and Pentium II.
• Encrypt data in less than 5000 clock cycles per block on a Pentium, Pentium Pro, and Pentium II with no key setup time.
• Not contain any operations that make it inefficient on other 32-bit microprocessors.
• Not contain any operations that make it inefficient on 8-bit and 16-bit microprocessors.
• Not contain any operations that reduce its efficiency on proposed 64-bit microprocessors, e.g., Merced.
• Not include any elements that make it inefficient in hardware.
• Have a variety of performance tradeoffs relating to the key schedule.
• Encrypt data in less than 10 milliseconds on a commodity 8-bit microprocessor.
• Be executable on an 8-bit microprocessor with only 64 bytes of RAM.
• Be implementable in hardware using fewer than 20,000 gates.
Our cryptographic goals were as follows:
• 16-round Twofish (without whitening) should have no chosen-plaintext attack requiring fewer than 2^80 chosen plaintexts and less than 2^N time, where N is the key length.
• 12-round Twofish (without whitening) should have no related-key attack requiring fewer than 2^64 chosen plaintexts and less than 2^(N/2) time, where N is the key length.
B. TWOFISH
Twofish is a symmetric block cipher with a 128-bit block size and key lengths of up to 256 bits. Twofish is related to the earlier block cipher Blowfish.
Twofish's characteristic features are the use of precomputed key-dependent S-boxes and a comparatively complex key schedule. One half of the n-bit key is used as the actual encryption key, and the other half is used to modify the encryption algorithm (the key-dependent S-boxes). Twofish borrows some elements from other designs: it has a Feistel structure like the Data Encryption Standard, and it uses a Maximum Distance Separable (MDS) matrix. Being a Feistel network means that in each round, half of the text block is passed through an F function and then XORed with the other half of the text block.
In every round of the Twofish algorithm, two 32-bit words serve as input to the F function. Each word is broken up into four bytes. The four bytes are passed through four different key-dependent substitution boxes (S-boxes). The four output bytes (the S-boxes have 8-bit input and output) are combined using a Maximum Distance Separable (MDS) matrix into a 32-bit word. The two resulting 32-bit words are then combined using a Pseudo-Hadamard Transform (PHT), added to two round subkeys, and XORed with the right half of the text. There are also two 1-bit rotations, one before and one after the XOR. Twofish additionally applies pre-whitening and post-whitening: supplementary subkeys are XORed into the text block both before the first round and after the last round.
Each step of the round function is a bijective function, i.e., every output is attainable. Too many attacks have been mounted against ciphers lacking this property not to include it. The round function mixes operations from different algebraic groups: S-box substitution, an MDS matrix over the Galois field GF(2^8), addition modulo 2^32, addition in GF(2) (also called XOR), and 1-bit rotations. This makes the algorithm difficult to attack analytically.
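To make the "different algebraic groups" point concrete, the following Python sketch implements multiplication in GF(2^8), the operation underlying the MDS matrix step. The reduction polynomial 0x169 (x^8 + x^6 + x^5 + x^3 + 1) is, to our recollection of the Twofish specification, the one used for the MDS matrix; any primitive degree-8 polynomial would illustrate the same idea.

def gf_mult(a, b, modulus=0x169):
    """Multiply two elements of GF(2^8), reducing modulo `modulus`."""
    result = 0
    while b:
        if b & 1:
            result ^= a      # addition in GF(2) is XOR
        b >>= 1
        a <<= 1
        if a & 0x100:        # degree 8 reached: reduce
            a ^= modulus
    return result

print(hex(gf_mult(0x57, 0x13)))  # product of two arbitrary field elements

Mixing this field multiplication with ordinary addition modulo 2^32 and with XOR is exactly what prevents the cipher from being described within a single algebraic structure.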
The key-dependent S-boxes were designed to be strong against the two major attacks, linear cryptanalysis and differential cryptanalysis, and to be resistant against whatever unknown attacks come along later. The algorithm is intended to be good enough against known attacks, and conservative enough to resist unfamiliar ones; key-dependent S-boxes are one way this was accomplished.
The key-dependent S-boxes were not chosen randomly, as they are in Blowfish. Instead, S-box construction rules were defined and tested with all possible 128-bit keys (and a subset of possible longer keys) to make sure that all the resulting S-boxes were indeed strong. This approach allowed the strength of fixed, strong S-boxes to be combined with the strength of secret S-boxes. Twofish also has no weak keys, as Blowfish does in reduced-round variants.
The MDS matrix was carefully chosen to give good diffusion, to retain its MDS property even after the 1-bit rotation, and to be fast in both software and hardware. This meant searching through all possible matrices to find the one that best met these criteria.
The PHT and the key addition provide diffusion between the sub-blocks and the key. Moreover, using the Load Effective Address (LEA) instruction on Pentium processors, all four additions can be done in just two operations. The round subkeys are carefully computed, using a mechanism similar to the S-box construction rules, to prevent related-key attacks and to provide good key mixing. One of the lessons of this process is that a good key schedule is not bolted onto a cipher but designed in tandem with it. The 1-bit rotation is intended to break up the byte structure; without it, everything operates on bytes. This operation exists to frustrate cryptanalysts, and it certainly frustrated our attempts at cryptanalyzing Twofish.
Fig. 9. TWOFISH FUNCTIONAL BLOCK DIAGRAM
The pre-whitening and post-whitening appear to add at least a round to the complexity of any attack. Given that eight XORs are cheaper than a round, it makes sense to leave them in.
C. TWOFISH'S PERFORMANCE
The Twofish algorithm offers a range of implementation options. One takes longer for key setup but encrypts faster, which makes sense when encrypting large amounts of plaintext with the same key; another sets the key up quickly but encrypts more slowly, which makes sense when encrypting a series of short blocks with rapidly changing keys.
On smart cards, Twofish likewise offers a range of tradeoffs. The RAM estimates assume that the key must be stored in RAM; if the key can be stored in EEPROM, the algorithm needs only 36 bytes of RAM to run. The code size comprises both the encryption and decryption code. If only encryption is to be implemented, the code size and speed figures improve accordingly.
Full key setup on this processor takes about 1750 clocks per key, which can be cut significantly at the cost of two additional 512-byte ROM tables. The 6805's lack of a second index register has a significant impact on code size and performance; a processor with multiple index registers would be a better fit for the algorithm.
These estimates are for a 128-bit key. For larger keys, the additional code size is negligible: less than 100 bytes for a 192-bit key and less than 200 bytes for a 256-bit key. The encryption time increases by less than 2600 clocks for a 192-bit key and by about 5200 clocks for a 256-bit key. Likewise, the key schedule precomputation increases to 2550 clocks for a 192-bit key and to 3400 clocks for a 256-bit key.
The plaintext is divided into four 32-bit words. In the input whitening step, these are XORed with four key words. This is followed by 16 rounds. In each round, the two words on the left are used as input to the g functions. Each g function consists of four byte-wide key-dependent substitution boxes, followed by a linear mixing step based on an MDS matrix. The results of the two g functions are combined using a PHT, and two key words are added. These two results are then XORed into the words on the right (one of which is rotated left by 1 bit first, the other is rotated right afterwards). The left and right halves are then swapped for the next round. After all the rounds, the swap of the last round is reversed, and the four words are XORed with four more key words to produce the ciphertext. More formally, the 16 bytes of plaintext p_0, ..., p_15 are first split into 4 words P_0, ..., P_3 of 32 bits each using the little-endian convention:
P_i = Σ_{j=0}^{3} p_{4i+j} · 2^{8j}, i = 0, ..., 3.
In the input whitening step, these words are XORed with 4 words of the expanded key:
R_{0,i} = P_i ⊕ K_i, i = 0, ..., 3. In each of the 16 rounds, the first two words are used as input to the function F, which also takes the round number as input. The third word is XORed with the first output of F and then rotated right by one bit. The fourth word is rotated left by one bit and then XORed with the second output word of F. Finally, the two halves are exchanged.
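The data flow of one round, as just described, can be sketched in Python. Note that g below is a structural placeholder (a real implementation would apply the four key-dependent S-boxes followed by the MDS multiply), and the subkeys are supplied by the caller; this illustrates the wiring of the round rather than providing a working Twofish.

MASK = 0xFFFFFFFF

def rol(x, r):
    return ((x << r) | (x >> (32 - r))) & MASK

def ror(x, r):
    return ((x >> r) | (x << (32 - r))) & MASK

def g(word):
    # Placeholder: the real g applies four key-dependent 8-bit S-boxes
    # to the word's bytes, then mixes them with the MDS matrix.
    return word

def twofish_round(r0, r1, r2, r3, k0, k1):
    t0 = g(r0)
    t1 = g(rol(r1, 8))              # per the spec, the second word is pre-rotated by 8 bits
    f0 = (t0 + t1 + k0) & MASK      # PHT first half: a' = a + b, then subkey added
    f1 = (t0 + 2 * t1 + k1) & MASK  # PHT second half: b' = a + 2b, then subkey added
    r2 = ror(r2 ^ f0, 1)            # XOR first F output, then rotate right by 1
    r3 = rol(r3, 1) ^ f1            # rotate left by 1, then XOR second F output
    return r2, r3, r0, r1           # swap halves for the next round

Sixteen applications of twofish_round, preceded by input whitening and followed by undoing the last swap and output whitening, reproduce the overall structure of Fig. 9.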
D. CRYPTANALYSIS OF TWOFISH
Our best attack works against five rounds of Twofish, without the pre-whitening and post-whitening. It requires 2^22.5 chosen-plaintext pairs and 2^51 work. We anticipate that further research and new techniques will extend this attack a few more rounds, but we do not believe that there are any attacks against more than nine or ten rounds.
We also have a related-key attack: a partial chosen-key attack on 10 rounds of Twofish without the pre-whitening and post-whitening. To mount the attack, we use a pair of related keys and get to choose 20 of the 32 bytes of each key. We have complete control over those 20 bytes of both keys. We do not know the remaining 12 bytes of key, but we do know that they are the same for both keys. The attack ends up requiring about 2^64 chosen plaintexts under each key and about 2^34 work to recover the remaining unknown 12 bytes of key. It is not a practical attack, but it is the best we could do. We also have reduced-round attacks on simplified variants: Twofish with fixed S-boxes, Twofish without the 1-bit rotations, and so on.
III. SURVEY PAPER
Ensuring data storage security in cloud computing
The main objective of this paper is to resolve security issues by preventing illegal access; this is achieved with the help of a distributed scheme that uses homomorphic tokens to secure data in the cloud. Drawbacks: the servers are required to operate on specified rows to check correctness and to verify the computation of the requested tokens.
Privacy-preserving public auditing for data storage security in cloud computing
This paper proposes a secure cloud storage system that supports privacy-preserving public auditing, in which the TPA carries out audits for multiple users simultaneously and efficiently. Drawbacks: it can entail weaker security models.
Hidden attribute-based signatures without anonymity revocation
This paper presents hidden attribute-based signatures constructed from pairings.
Drawbacks:
It is susceptible to certain reset attacks.
Dynamic audit services for integrity verification of outsourced storage in clouds
This paper proposes a dynamic audit service for verifying the integrity of untrusted, outsourced storage. Drawbacks: it introduces a small, constant amount of overhead.
Provable data possession at untrusted stores
This paper introduces a model for provable data possession (PDP) that permits a client who has stored data at an untrusted server to verify that the server possesses the original data without retrieving it. The model generates probabilistic proofs of possession by sampling random sets of blocks from the server, which drastically reduces I/O costs. Drawbacks: it requires a small, constant amount of communication per challenge.
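The block-sampling idea behind PDP can be illustrated with a toy Python sketch. Real PDP uses homomorphic tags so that the client keeps only constant state; here, plain salted hashes stand in for those tags (so this toy client stores one tag per block), which still preserves the key point that each challenge touches only a small random sample of blocks.

import hashlib, os, random

def tag(index, block, secret):
    return hashlib.sha256(secret + index.to_bytes(8, "big") + block).digest()

secret = os.urandom(16)
blocks = [os.urandom(4096) for _ in range(1000)]             # stand-in for file blocks
tags = {i: tag(i, b, secret) for i, b in enumerate(blocks)}  # client keeps these

# The file itself lives at the (untrusted) server; challenge a random sample.
challenge = random.sample(range(len(blocks)), k=20)
response = {i: blocks[i] for i in challenge}                 # an honest server's reply

ok = all(tag(i, response[i], secret) == tags[i] for i in challenge)
print("possession check passed:", ok)

Because only 20 of the 1000 blocks are read per challenge, a server that has silently dropped a fraction of the file is caught with high probability after a few challenges, while the I/O cost per challenge stays small.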
IV. PROPOSED SOLUTION
In this article, a solution based on the Twofish encryption algorithm is proposed.
A. Proposed Solution Architecture
The proposed model, based on Twofish, works as follows: the data uploaded by the owner are encrypted to protect them from the vulnerabilities exploited by hackers. The key generated during encryption is visible only to the owner. When a data user wishes to download a file uploaded by the owner, he requests the generated key from the owner. By accepting the request, the owner provides the key to the data user, who can then decrypt and download the file.
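A minimal sketch of this owner/user key-exchange flow is given below in Python. Because no Twofish implementation ships with the Python standard library, the Fernet construction from the third-party cryptography package stands in for the Twofish encryption step; the class and method names are our own, and the whole snippet illustrates the architecture rather than reproducing the authors' implementation.

from cryptography.fernet import Fernet

class Owner:
    def __init__(self):
        self.key = Fernet.generate_key()       # visible only to the owner

    def upload(self, data: bytes) -> bytes:
        return Fernet(self.key).encrypt(data)  # only ciphertext reaches the cloud

    def grant_key(self, approved: bool):
        return self.key if approved else None  # key released upon approval

class DataUser:
    def download(self, ciphertext: bytes, key: bytes) -> bytes:
        return Fernet(key).decrypt(ciphertext)

owner = Owner()
stored = owner.upload(b"sensitive record")     # cloud stores ciphertext only
key = owner.grant_key(approved=True)           # user requests, owner accepts
print(DataUser().download(stored, key))        # b'sensitive record'

The design point is that the cloud never sees the key: even a successful SQLIA against the storage backend yields only ciphertext.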
C. IMPLEMENTATION AND EVALUATION OF PROPOSED SOLUTION
To evaluate the proposed solution, its performance is compared with the approaches described in the previous sections. These algorithms and the proposed solution are applied to detect and block SQLIA in various types of web application.
2) EVALUATION SCENARIOS
The following criteria are used to judge the performance of the Twofish encryption algorithm: 1) registration of the data owner or data user; 2) uploading the file to be encrypted; 3) encryption of the data in the file; 4) downloading the file; 5) decryption of the file; 6) storage of the data in the database. The dataset described below is used to evaluate the conditions mentioned above.
V. CONCLUSION
Among the many web application threats, the SQL Injection Attack has emerged as a significant one. Many solutions have been proposed to detect SQLIA vulnerabilities in web applications. The proposed solution based on the Twofish algorithm performs well in detecting and blocking SQLIA. One significant advantage of the proposed solution is that it can handle advanced SQLIA techniques, as its knowledge base is updated to handle modern types of threats.
D. Evaluation Results
Uniform Hausdorff dimension result for the inverse images of stable Lévy processes
We establish a uniform Hausdorff dimension result for the inverse image sets of real-valued strictly α-stable Lévy processes with 1 < α ≤ 2. This extends a theorem of Kaufman for Brownian motion. Our method is different from that of Kaufman and depends on covering principles for Markov processes.
Introduction
Let X = {X(t), t ≥ 0, P^x} be a real-valued strictly α-stable Lévy process with α ∈ (0, 2]. Its characteristic exponent is given, for ξ ∈ R, by ψ(ξ) = σ^α |ξ|^α (1 − iβ sgn(ξ) tan(πα/2)) when α ≠ 1, and ψ(ξ) = σ|ξ| when α = 1, with constants σ > 0 and β ∈ [−1, 1] which are respectively the scale parameter and the skewness parameter. Throughout, log = log_e denotes the natural logarithm. Notice that, in the case α = 1, X is a symmetric Cauchy process. When α = 2, X is a (scaled) Brownian motion. For 0 < α < 2, X shares with Brownian motion the properties of self-similarity and of independent and stationary increments, but it has heavy-tailed distributions and its sample functions are discontinuous. As such, stable Lévy processes form an important class of Markov processes. Many authors have studied the asymptotic and sample path properties of Lévy processes. We refer to the monographs [2] and [21] for systematic accounts of Lévy processes, and to [24,26] for information on their fractal properties.
This note is concerned with a uniform Hausdorff dimension result, Theorem 1.1, for the inverse images of real-valued strictly α-stable Lévy processes and is motivated by the following results of Hawkes [8] and Kaufman [11].
Hawkes [8] considered the Hausdorff dimension of the inverse image X^{-1}(F) = {t ≥ 0 : X(t) ∈ F} and proved that if 1 ≤ α ≤ 2 and F ⊆ R is a fixed Borel set, then for every x ∈ R,
dim_H X^{-1}(F) = 1 − 1/α + (dim_H F)/α, P^x-a.s. (1.1)
Note that the null event on which (1.1) does not hold depends on F. It is natural to ask if the following uniform Hausdorff dimension result holds: for every x ∈ R,
P^x( dim_H X^{-1}(F) = 1 − 1/α + (dim_H F)/α for all Borel sets F ⊆ R ) = 1. (1.2)
Such a result, when it is valid, is more useful than (1.1) because, outside of a single null event, the dimension formula holds not only for all deterministic Borel sets F ⊂ R but also for random sets F that depend on the sample path of X.
We claim that, in the case 0 < α < 1, there is no uniform result like (1.2). The referee has asked us the following question, which complements this claim: for every x ∈ R, does (1.3) hold? Here C is the family of all deterministic Borel sets F ⊂ R with dim_H F ≥ 1 − α. To answer this question, we first recall Theorem 2 of Hawkes [8]: if 0 < α < 1 and F ⊂ R is deterministic and satisfies dim_H F ≥ 1 − α, then (1.4) holds; and, with F* defined as in [8] (see [8, p.93] for the notation), (1.5) holds as well. The answer to the referee's question is "yes" because we can choose a Borel set F ∈ C such that F \ F* is polar for X (cf. [8, p.96]); it then follows from Hawkes' result (iii) that for any x ∈ R the probability in (1.3) is not more than the quantity in (1.6). Motivated by the referee's question and Hawkes' result (1.5), one may further ask to characterize the family G of deterministic Borel sets F such that, for some x ∈ R (depending on G), (1.7) holds. This question seems rather nontrivial. We can imagine that (1.7) may hold for a certain family of self-similar sets on R, but this goes beyond the scope of the present paper. Our objective in this paper is to study the uniform dimension problem (1.2) for 1 ≤ α ≤ 2. The validity of (1.2) in the case α = 2 (X is a Brownian motion) is due to Kaufman [11]. His proof relies on the uniform modulus of continuity of Brownian motion as well as the Hölder continuity of the Brownian local time in the time variable. For 1 ≤ α < 2, the sample paths of an α-stable Lévy process are discontinuous, hence Kaufman's method is not applicable.
In the special case F = {z}, it follows from Barlow et al. [1, (8.7)] that if 1 < α ≤ 2 then, for every x ∈ R, P^x( dim_H X^{-1}(z) = 1 − 1/α for all z ∈ R ) = 1. This gives a uniform Hausdorff dimension result for the level sets of X. However, for 1 ≤ α < 2, it had been an open problem to prove (1.2) for all Borel sets F ⊆ R; see [26, Sec. 8.2] for a discussion.
In this note, we verify (1.2) by proving the following theorem.
Theorem 1.1. Let X be a real-valued strictly α-stable Lévy process with 1 < α ≤ 2. Then (1.2) holds for every x ∈ R.
As mentioned above, the case of α = 2 has already been proved by Kaufman [11] whose proof relies on special properties of Brownian motion. Our proof of Theorem 1.1 provides an alternative proof of his theorem.
The proof splits naturally into an upper-bound part and a lower-bound part. To show the upper bound, we design a new covering principle (see Lemma 2.2 below) for the inverse images of recurrent processes (it is thus applicable to α = 1). This covering lemma constitutes the key technical contribution of the present paper, and we expect it to be useful for other discontinuous Markov processes. Note that Lemma 2.2 in this paper is different from the covering lemma of [22, Lemma 2.2], which is only applicable to transient Markov processes (see Remark 2.3 in Section 2 of this paper). To prove the lower bound in (1.2), we make use of the uniform modulus of continuity (in time) of the maximum local time of X due to Perkins [18], together with a covering principle for the range of X in [10,26,22]. Since X has no local time when α = 1, the proof of the lower bound in Theorem 1.1 is valid only for 1 < α ≤ 2. We think that (1.2) holds for α = 1 as well, but have not been able to give a complete proof.
Proof of the upper bound
In this section we assume that 1 ≤ α ≤ 2. We will show that, for every x ∈ R, P^x-a.s.,
dim_H X^{-1}(F) ≤ 1 − 1/α + (dim_H F)/α for all Borel sets F ⊆ R. (2.1)
For any Borel set B, we denote by T_B the first hitting time of B by the process X. We first state an asymptotic result due to Port.
Lemma 2.1. (1) If 1 < α ≤ 2, then for any bounded interval B and any x ∈ R, P^x(T_B > t) ∼ L_B(x) t^{−(1−1/α)} as t → ∞, where L_B(x) is bounded from above on compact sets and is positive for x ∈ B̄, the closure of the set B. Here, f(t) ∼ g(t) means lim_{t→∞} f(t)/g(t) = 1.
(2) If α = 1, then for any bounded interval B and any x ∈ R, P^x(T_B > t) ∼ L_B(x)/log t as t → ∞, where L_B(x) is bounded from above on compact sets and is positive for x ∈ B̄.
The main tool for obtaining our upper bound is the following covering lemma. Before stating it, we introduce some notation. Let U_n be any partition of R into intervals of length 2^{−n} and let D_n be any partition of R_+ into intervals of length 2^{−nα}; the choice of partitions has no effect on the result. Fix U ∈ U_n, let τ_0, τ_1, τ_2, ... be the successive times at which X^{-1}(U) meets new intervals of D_n before time T, and let p_n be the probability that a further new interval of D_n is reached before T. By spatial homogeneity and scaling, p_n does not depend on U. Due to the right continuity of the sample paths, X(τ_{k−1}) ∈ Ū on {τ_{k−1} < T}. By the strong Markov property and induction, the probability that X^{-1}(U) ∩ [0, T] meets more than k intervals of D_n is at most p_n^k. Next we show that there exists a constant c_T such that p_n ≤ 1 − c_T 2^{−nα(1−1/α)}; this follows from Lemma 2.1(1), the independence of increments, and the fact that X(1) has full support on R ([23, Thm. 1]). For n, K ≥ 1, define A_n^δ to be the event that, for some U ∈ U_n ∩ [−K, K], the set X^{-1}(U) ∩ [0, T] cannot be covered by 2^{nδ} intervals of length 2^{−nα}.
Here U ∈ U_n ∩ [−K, K] means that U ∈ U_n and U ⊂ [−K, K]. For δ > α − 1, the probabilities P^x(A_n^δ) are then summable in n. Since any interval of length 2^{−nα} is covered by two intervals from D_n, the conclusion for all U ⊂ [−K, K] follows from the Borel–Cantelli lemma. Letting K → ∞ completes the proof.
(2) Now consider α = 1. The proof in this case is basically the same as that of Part (1), except that 1 − p_n ≥ c_T/n by Lemma 2.1(2). We omit the details.
Remark 2.3. As noted in the Introduction, the covering principle in [22, Lemma 2.2] is not applicable here. Intuitively, a recurrent process visits a fixed interval infinitely often, so we cannot expect the inverse images to be covered by a finite number of intervals.
Mathematically, the condition in [22] is a covering bound that must hold for some δ, p > 0 with Σ_{n=1}^{∞} r_n^p < ∞, which is not satisfied for recurrent Markov processes.
Let us prove the upper bound (2.1).
Fix T > 0 for now. By Lemma 2.2, each X^{-1}(U_i) ∩ [0, T] can be covered by 2 · 2^{n_i δ} intervals {I_{i,k}} (of length 2^{−n_i α}) in D_{n_i}. From this covering, a direct computation bounds dim_H(X^{-1}(F) ∩ [0, T]) by 1 − 1/α + (dim_H F)/α, and letting T ↑ ∞ yields the desired upper bound. Now consider the case α = 1. One could repeat the argument above and use Lemma 2.2 to get the desired conclusion; here we present an alternative argument. It follows from Hawkes and Pruitt [10] (see also [22]) that the following uniform dimension result holds: P^x( dim_H X(E) = dim_H E for all Borel sets E ⊆ R_+ ) = 1. Applying this with E = X^{-1}(F) ∩ [0, T] gives dim_H X^{-1}(F) ≤ dim_H F, which is (2.1) for α = 1.
Proof of the lower bound
We assume that 1 < α ≤ 2. It follows from Kesten [12] and Hawkes [9] that X hits points and has local times {L^x_t, t ≥ 0, x ∈ R}. The local times characterize the sojourn properties of X via the occupation density formula: for all t ≥ 0 and all Borel measurable functions f : R → R_+,
∫_0^t f(X(s)) ds = ∫_R f(x) L^x_t dx.
Moreover, there is a version of the local times, still denoted by {L^x_t, t ≥ 0, x ∈ R}, which is jointly continuous in (t, x); see, e.g., [2,16].
We use the Hölder continuity of the local times of X to prove the uniform lower bound for the inverse image sets. This approach was previously used by Kaufman [11] and was extended by Monrad and Pitt [17] in their study of inverse images of recurrent Gaussian fields. In both articles, the uniform modulus of continuity of the sample paths was used. Since the sample paths of the α-stable Lévy process X are discontinuous, we instead apply a covering principle from [26,22] for the range of X. Denote by C_n any partition of R_+ into intervals of length 2^{−n}. We recall here the covering principle, tailored to our situation.
Lemma 3.1. There exists a finite positive integer K such that P^x-a.s., for all n large enough, X(I) can be covered by K intervals of diameter 2 · 2^{−nγ}, for all I ∈ C_n.
Let L*([s, t]) = sup_{x∈R} (L^x_t − L^x_s) be the maximum local time of X on [s, t]. We now recall a result of Perkins [18] (Lemma 3.2 below) on the uniform modulus of continuity (in time) of the maximum local time of a strictly α-stable Lévy process X with index α ∈ (1, 2]; roughly, it states that L*([a, a + h]) is uniformly of order h^{1−1/α} (log(1/h))^{1/α} as h ↓ 0. See [14,15,16] for more sample path properties (in the space variable) of the local times of symmetric Markov processes.
We are ready to give the proof of the lower bound in Theorem 1.1.
Proof of Theorem 1.1: lower bound. It suffices to consider compact sets F. For any compact F ⊂ R and ε > 0, by Frostman's lemma (cf. [6]) there exists a probability measure µ supported on F such that µ(B) ≤ (diam B)^{dim_H F − ε} for any interval B ⊂ R with diam B ≤ 1. Define the random measure λ by
λ([0, t]) = ∫_R L^x_t µ(dx). (3.2)
It is clear that λ(dt) is supported on X^{-1}(F) ⊂ R_+ and that λ(R_+) > 0. Let n be sufficiently large; by Lemma 3.2, L*([a, a + 2^{−n}]) is bounded by a constant multiple of 2^{−n(1−1/α)} n^{1/α}, uniformly for a ∈ [0, 1 − 2^{−n}]. On the other hand, by Lemma 3.1 there exists a sequence of intervals {I_i}_{1≤i≤K} of length 2^{−nγ}, with γ < 1/α, such that the closure of X([a, a + 2^{−n}]) is covered by the union of the I_i; therefore
λ([a, a + 2^{−n}]) ≤ L*([a, a + 2^{−n}]) Σ_{i=1}^{K} µ(I_i). (3.3)
We thus obtain λ(B) ≤ (diam B)^{1 − 1/α + γ dim_H F − 2ε} for all Borel sets B with sufficiently small diameter. This and Frostman's lemma imply that dim_H X^{-1}(F) ≥ 1 − 1/α + γ dim_H F − 2ε, P^x-a.s. Letting γ ↑ 1/α and then ε ↓ 0 yields the desired lower bound for dim_H X^{-1}(F). This finishes the proof of Theorem 1.1.
Concluding remarks
This note raises several interesting questions for further investigation. In the following, we list three of them and discuss briefly the main difficulties. Solutions of these questions will require developing new techniques for Lévy processes.
(i) As mentioned in the Introduction, we think that Theorem 1.1 also holds for α = 1.
However, without a local time, it is not clear to us how to construct a random Borel measure supported on X^{-1}(F) to which Frostman's lemma is applicable. (ii) In [20, Thm. 22.1], an asymptotic result for the hitting times was obtained for recurrent Lévy processes with regularly varying λ-potential densities; see also the recent development by Grzywny. We believe that a similar result also holds for a large class of more general Markov processes, including stable jump diffusions, stable-like processes, and Lévy-type processes as considered in [22]. However, proving such a result would first require establishing asymptotic results for the hitting times and local times of these Markov processes. This is quite challenging and goes well beyond the scope of the present paper; we will try to tackle it in a subsequent paper. (iii) It is natural to expect that the packing dimension analogue of Theorem 1.1 also holds.
Namely, if X is a real-valued strictly α-stable Lévy process with 1 ≤ α ≤ 2, then for any x ∈ R one has
P^x( dim_P X^{-1}(F) = 1 − 1/α + (dim_P F)/α for all Borel sets F ⊆ R ) = 1. (4.1)
Here dim_P denotes packing dimension; see Falconer [6, Chapter 3] for its definition and properties, and [24,26] for examples of its applications in studying sample path properties of Markov processes. By using the connection between packing dimension and the upper box-counting (Minkowski) dimension (cf. [6]), one can see that the proof of the upper bound of Theorem 1.1 also implies that, P^x-a.s., dim_P X^{-1}(F) ≤ 1 − 1/α + (dim_P F)/α for all Borel sets F ⊆ R. In order to prove the reverse inequality, one may apply the lower density theorem for packing measure in [25, Theorem 5.4] and prove, for any γ < 1/α and ε > 0, a corresponding density estimate for the random measure λ defined in (3.2), with c_2 a finite constant. We are not able to prove this because (unlike in the Hausdorff dimension case) the terms µ(I_i) in (3.3) cannot be controlled for all i by the same n.
"Mathematics"
] |
Magnetic-field-induced dielectric behaviors and magneto-electrical coupling of multiferroic compounds containing cobalt ferrite/barium calcium titanate composite fibers
Multiferroics have broad application prospects in various fields such as multi-layer ceramic capacitors and multifunctional devices owing to their high dielectric constants and coupled magnetic and ferroelectric properties at room temperature. In this study, cobalt ferrite (CFO)/barium calcium titanate (BCT) composite fibers are prepared from BCT and CFO sols by an electrospinning method, and are then oriented by magnetic fields and sintered at high temperatures. The effects of magnetic fields and CFO contents on the nanostructures and magnetoelectric properties of the composites are investigated. Strong coupling between magnetic and ferroelectric properties occurs in CFO/BCT composites with magnetic orientation. More interestingly, the dielectric constants of CFO/BCT composites with magnetic orientation are found to be enhanced (by ∼1.5–3.5 times) as compared with those of BCT and CFO/BCT without magnetic orientation. The boost of dielectric constants of magnetic-field orientated CFO/BCT is attributed to the magneto-electrical coupling between CFO and BCT, where the polar domains of BCT are pinned by the orientated CFO. Therefore, this work not only provides a novel and effective approach in enhancing the dielectric constants of ceramic ferroelectrics, which is of tremendous value for industrial applications, but also elucidates the interaction mechanisms between ferromagnetic phase and ferroelectric phase in multiferroic compounds.
Introduction
With the rapid advancement of electronics, electronic materials are currently developed towards miniaturization, multifunction and integration to satisfy different application circumstances and requirements [1,2]. Multiferroics, which refer to the functional materials that possess two or more properties of the ferromagnetism, ferroelectricity and ferroelasticity over a certain range of temperature, are important members of electronic materials family because of their promising applications in advanced electronic devices [3–5]. Among multiferroics, functional materials possessing electro-magneto-coupling are known as magnetoelectric (ME) materials [6,7], which have wide and important applications in the fields of microwave devices and sensors for magnetic field detection [3–5,8–10].
Recently, composite materials combining multi-phase ferroelectric and ferromagnetic materials for magnetoelectric applications are of great interest [11–15]. Particularly, it has been of great interest in achieving strong magnetoelectric coupling by combining ferroelectric barium titanate (BTO) with magnetic cobalt ferrite (CFO) [16–18]. BTO is one of the most typical ferroelectric materials with high dielectric constant and low dielectric loss [19]. As a widely used soft magnetic material, CFO is a spinel ferrite which exhibits excellent electromagnetic properties, high chemical stability, magneto-crystalline anisotropy and large magnetostriction coefficient [20]. Nevertheless, the desired magnetoelectric coupling properties of the composites have to be adjusted by the volume ratio of the constituent phases, as well as the degree of interconnectivity and the properties of the interface between BTO and CFO [16–18,21].
The BTO/CFO multi-phase magnetoelectric materials have been prepared by various methods. Stenaciu et al. prepared BTO/CFO multi-phase magnetoelectric materials by the spark plasma sintering (SPS) method and investigated their magnetoresistance effect [22]. They found that the magnetic performance of the multi-phase material at 150 K reached an optimum at a CFO content of 30%. Raidongia and Kalyan prepared core–shell BTO/CFO composite nanomaterials and studied the dielectric properties of core–shell nanoparticles under magnetic fields [23]; they concluded that the dielectric constant decreased with the application of magnetic fields. Zhang et al. prepared BTO/CFO nanocomposite films with good ferroelectric and ferromagnetic properties at room temperature via a method combining sol–gel processing and electrophoretic deposition [24]. However, the barium titanate crystals in the nanocomposites exhibit a tetragonal-to-orthorhombic phase transformation at ~9 °C, which might damage the composite and cause adverse effects in applications of BTO/CFO [25]. Therefore, in this work, calcium is first doped into BTO to prepare barium calcium titanate (BCT) crystals, suppressing this phase transformation of BTO in the magnetoelectric composite.
Furthermore, fiber-like BCT/CFO magnetoelectric composites are prepared by electrospinning, which is a facile route for preparing polymeric and inorganic nanofibers [26–28]. It is envisaged that the huge specific surface area of BCT/CFO nanofibers could enhance the surface or interfacial effects on the magnetoelectric coupling and thus improve the magnetoelectric properties of the BCT/CFO nanofibers [29,30].
The preparation route for the magnetoelectric nanofibers is illustrated in Scheme 1. First, BCT/CFO composite nanofibers are prepared by electrospinning from precursors made by the sol–gel method. Subsequently, the fibers are dried under a magnetic field to form a film, which is then collected and pressed into a wafer. The wafer is sintered at 900 °C to promote the crystallinity of the samples. The morphology, crystal structure, and thermal properties of the aligned BCT/CFO nanofibers are characterized by scanning electron microscopy (SEM), X-ray diffraction (XRD), and TG–DTA analysis. The effects of the materials' compositions and microstructure on their magnetic, ferroelectric, and dielectric properties are studied. It is found that the dielectric constants of BCT/CFO composites with magnetic orientation are enhanced (by ~1.5–3.5 times) as compared with BCT and BCT/CFO without magnetic orientation, although their remnant polarizations are smaller than those of the non-orientated samples. The temperature-dependent dielectric loss (tan δ) of the BCT/CFO composites is measured to reveal the interaction between CFO and BCT in the compound. The mechanisms by which the dielectric constants of BCT/CFO composites are boosted through a magnetic-field induction approach are elucidated, which has not been well revealed in previous work. This work not only provides a novel and effective approach to enhancing the dielectric constants of ceramic materials, which is much desired for electronic applications such as multilayered ceramic capacitors, but also elucidates the interaction between ferromagnetic and ferroelectric phases in multiferroic compounds.
Materials
All the chemical reagents used in the preparation of the materials are of analytical grade. Barium acetate, calcium acetate, tetrabutyl titanate, ethylene glycol, and glacial acetic acid were purchased from Sinopharm Group Chemical Reagent Co., Ltd. CoFe2O4 material with an average nanoparticle diameter of ~40 nm was purchased from Beijing Dk Nano Technology Co., Ltd. PVP was purchased from Zhangjiakou Kerma Fine Chemicals Co., Ltd.
Preparation of BCT/CFO precursor sol
A certain amount of barium acetate was weighed, dissolved in a hot aqueous solution of glacial acetic acid (36 wt%, 60 °C), and stirred for 30 min. Then, the corresponding amount of calcium acetate was added to the mixture, and stirring was continued for 30 min. The resulting clear solution was cooled to room temperature. In the next step, viscous tetrabutyl titanate was added dropwise into the ethylene glycol methyl ether solution and stirred with a magnetic stirrer to obtain a clear solution, which was then added slowly, dropwise, to the above acetic acid solution under magnetic stirring to obtain a pale yellow, transparent BCT solution. CoFe2O4 nanoparticles (with Co:Ti molar ratios of 1:3, 1:9, and 1:15) were added to the BCT solution. The mixture was heated and stirred with a magnetic stirrer at 80 °C for 20 min, then cooled to 40 °C and stirred for another 10 h. Finally, 1 wt% of PVP was added at room temperature and dissolved uniformly in the aforementioned mixture to obtain the BCT/CoFe2O4 precursor sols.
Scheme 1. Schematic illustration of the preparation of the aligned BCT/CFO nanocomposite fibers via a magnetic-field induction approach.
Preparation of BCT/CFO composite fibers
The BCT/CoFe2O4 (BCT/CFO) precursor sol was processed into composite fibers by electrospinning at a spinning voltage of 20 kV. The as-prepared primary fibers were calcined at 500 °C and then ground to obtain fiber powders. The BCT/CFO composite fiber powders were pressed into wafers (Φ10 mm × 2 mm) and sintered at 900 °C for 3 h to form ceramic sheets for testing, which were labelled BCT/CFO-x, where x = 0.0625, 0.1, 0.25 is the content of CFO in the ceramics.
Preparation of aligned BCT/CFO composite fiber
At room temperature, the fiber powders were first dispersed in the PVP solution. The solution was dried under a magnetic field to form a film, which was then collected and pressed into a wafer. The wafer was sintered at 900 °C for 3 h to obtain the BCT/CFO ceramic sheet for testing, which was labelled BCT/CFO-x-orientated.
Characterization
The microstructure and particle size distribution of the samples were observed by scanning electron microscopy (SEM, Hitachi S-4300). The phase structure and crystal structure of the samples were studied with an X-ray diffractometer (Bruker, D8) using Cu-Kα radiation (λ = 0.15418 nm). Thermogravimetric analysis (TGA Q50, PerkinElmer) was carried out in air to determine annealing conditions for the samples. Measurements of magnetic properties were performed on a vibrating sample magnetometer (VSM, Riken Denshi, BHV-525). Polarization-electric field (P-E) hysteresis loops were characterized using a standard ferroelectric tester (Radiant Precision, Germany). Measurement of the dielectric properties was performed with an impedance analyzer (WK5000B, UK). Capacitance-temperature (C-T) characteristics of the samples were measured using an LCR meter (HP 4284A, Agilent, Palo Alto, CA).
Characterization of BCT/CFO composite samples
The CoFe2O4 nanoparticles used in this work were analyzed by SEM; the results are shown in Fig. S1. The CoFe2O4 nanoparticles are mainly spherical and relatively uniform in morphology, with a primary diameter of ~45 nm and an agglomerate diameter of ~210 nm. The agglomeration occurs because of the intrinsic magnetic interactions of the CoFe2O4 nanoparticles. Fig. 1 shows the SEM images of the as-spun composite fibers prepared with different CoFe2O4 contents. As shown in Fig. 1a, the as-spun fiber of pure barium calcium titanate, with a diameter of ~625 nm, has a smooth surface. Fig. 1b–d shows the as-spun composite fibers with different contents of cobalt ferrite, which have diameters of ~600–1500 nm. With increasing CFO content, the diameter of the BCT/CFO as-spun fibers increases significantly. During the processing of the precursors for electrospinning, magnetic stirring was maintained to avoid agglomeration of the CoFe2O4 nanoparticles, which results in uniformly distributed nanoparticles on the surfaces of the BCT/CFO as-spun composite fibers, as shown in Fig. 1. In addition, it can be observed that the BCT/CFO as-spun fibers possess a dense structure before sintering. Fig. 2a shows the thermogravimetric analysis curve of the BCT/CFO as-spun fibers, indicating that the weight loss of the sample can be divided into three stages. The first stage extends from room temperature to ~276 °C with a mass loss of 2.5%, mainly due to the removal of physically absorbed water and a small amount of volatile organic matter. The second stage, which involves an exothermic reaction, lies in the temperature range of 276–528 °C with a mass loss of 19%, mainly due to the combustion of PVP. In the third stage, very little weight loss occurs in the range of 528–900 °C; the TG curve becomes essentially parallel to the temperature axis, indicating no further weight loss. The thermogravimetric analysis indicates that heat treatment at 900 °C can completely remove the PVP from the as-spun fibers. The effects of sintering temperature on the XRD patterns of BCT/CFO composite fibers are shown in Fig. S2a. The diffraction patterns of samples sintered at 700 °C show main peaks corresponding to witherite BaCO3 (CaCO3) and even CoTiO3 phases (PDF #15-0866), as marked in the XRD patterns. Other phases (e.g., Ba2TiO4), if any, are below the XRD detection limit. The results imply that BaCO3, CaCO3, and CoTiO3 may be present as impurities. When the sintering temperature is increased to 800 °C, strong diffraction peaks of the perovskite phase, such as (100) and (220), appear [31,32]. After calcination at 900 °C, the XRD diffraction peaks become sharp, as shown in Fig. S2a. The XRD patterns of BCT/CFO composite fibers with different contents of cobalt ferrite calcined at 900 °C correspond well to the CFO and BCT crystal phases. The diffraction peaks can be indexed to the orthorhombic perovskite structure with space group Amm2 for BCT (Fig. S2b), consistent with those reported by Acosta et al. [33], and to the (220) and (311) crystal planes of CFO (inset in Fig. 2b). Fig. 2b also indicates that the BCT/CFO composites with lower BCT contents have sharper diffraction peaks. Fig. 3 shows the SEM images of BCT/CFO composite fibers with different CoFe2O4 contents calcined at 900 °C. The fibers with different CFO contents are all found to have rough structures, which could be caused by the removal of PVP from the as-spun fibers during sintering at 900 °C. As compared with the non-sintered samples shown in Fig. 1, the diameter of the fibers does not change significantly after sintering. In addition, it is noted that the surfaces of sintered fibers with higher BCT contents are smoother and denser.
As illustrated in Scheme 1, the fibers with different CoFe2O4 contents were milled, pressed into pellets, and then sintered at 900 °C into ceramic wafers. Fig. S3 shows the SEM images of the ceramic wafers. It is observed that the particle sizes of the BCT/CFO samples are slightly larger than those of the pristine BCT sample. This could be caused by the addition of CFO to the BCT matrix, which reduces the melting temperature of BCT and facilitates the growth of BCT particles in the fibers. It is also observed that the fibers with higher BCT content exhibit more continuous microstructures and could have higher density. The results indicate that BCT may facilitate the homogeneous distribution of the liquid phase of the fiber during sintering. Specifically, the BCT/CFO-x fiber powders with x = 0.25 calcined at 900 °C were added to a 10 wt% PVP and ethanol solution. The mixture was orientated under an external magnetic field and dried into a film. The morphology of the obtained film was characterized by SEM, as shown in Fig. 4. The image clearly indicates a one-dimensional orientation of fibers in the film, with the fibers aligned into bundles with a diameter of 4–6 μm. The orientation and alignment process could be related to the effects of the applied magnetic field during drying: when the external magnetic field is applied, the BCT/CFO nanoparticles immersed in the liquid mixture can align with the magnetic field because of the interaction between the magnetic moments of CFO and the applied field, and they eventually accumulate along the direction of the magnetic field. The uniform distribution of CFO and BCT particles inside the fibers is confirmed by energy dispersive spectrometer (EDS) mapping, as shown in Fig. 4c. The elements (oxygen, calcium, iron, cobalt, and barium) are found to be distributed uniformly throughout the sample, which can be attributed to the advantages of the processing method. Fig. 5 shows the hysteresis loops of the BCT/CFO-x composite fibers with different contents of CoFe2O4. For simplicity, the molar ratio Co:Ti of a sample is denoted as n(Co):n(Ti), and 'orientated' indicates that the composite fibers were subjected to magnetic orientation treatment during processing. All the samples exhibit hysteresis loops characteristic of typical soft magnetic materials, indicating that the magnetism of CFO is not affected by incorporating BCT into the fibers. The saturation magnetization of the composite fibers without magnetic orientation treatment decreases markedly with increasing BCT content (as shown in Fig. 5), which could mainly result from the continuity of the ferromagnetic CFO phase, or the magnetic ordering, being interrupted by the presence of fine BCT grains [32]. As a consequence, the regional magnetic moments decrease significantly with increasing BCT content. The saturation magnetization of the orientated composite fibers, measured under a magnetic field perpendicular to the fiber orientation, is found to be dramatically higher than that of the composite fibers without magnetic orientation treatment. Magnetic anisotropy can be clearly observed in the composite fibers with magnetic orientation treatment. Fig. 6 shows the typical ferroelectric hysteresis (P-E) loops of BCT/CFO-x composite fibers with different amounts of CoFe2O4. Fig. 6a indicates that the composite fibers exhibit ferroelectricity; the remnant polarization, saturation polarization, and coercive field of the samples all increase with increasing BCT content.
Fig. 6b shows the P-E loops of composite fibers with and without magnetic orientation treatment. It can be seen that the coercive field, saturation polarization, and remnant polarization of the sample with magnetic orientation treatment are all smaller than those of the sample without magnetic orientation treatment. Fig. 7 shows the dielectric constants of the samples measured over the frequency range from 20 Hz to 15 MHz. It is found that the dielectric constant of the BCT/CFO-x composite fibers decreases noticeably with increasing CFO content x. The XRD analysis has indicated that the BCT/CFO composite fibers form a two-phase system, in which CFO has a resistivity of ~10^7 Ω·m and is thus a semiconductor, whereas BCT has a resistivity of ~10^12 Ω·m and is an insulator [34]. Therefore, BCT has a higher dielectric constant than CFO. Since BCT has a content of ~75–93.7 wt% and acts as a continuous phase or matrix, the addition of CFO decreases the dielectric constant of the two-phase composite; thus the dielectric constant decreases with increasing CFO content, consistent with the results reported by Qu et al. [34]. It is also found that, within the frequency range of 20 Hz–15 MHz, the dielectric constant of the BCT/CFO composite fibers increases slightly with increasing frequency. When the frequency reaches 8.68 MHz, the dielectric constant of the BCT/CFO composite fibers exhibits a resonance peak and then decreases rapidly with further increasing frequency. The BCT/CFO-x composite fibers with different CFO contents exhibit similar natural resonant frequencies. For ceramic dielectric materials, non-uniformity in the material's structure causes non-uniformity in the dielectric constant and electrical conductivity. Under an external electric field, charges accumulate at the grain boundaries of the dielectric material, resulting in the Maxwell–Wagner interfacial effect [35]. When the testing frequency is close to the natural resonant frequency, the grain boundaries, dislocations, and surface lattice defects in the dielectric material have enough time to accumulate a large number of induced charges, resulting in higher dielectric constants. When the frequency of the electric field is higher than the resonant frequency, however, the interfacial accumulation of charges cannot keep up with the varying electric field, resulting in lower dielectric constants. Most importantly, it is observed that the dielectric constants of the BCT/CFO-0.25 composite fibers with magnetic orientation are significantly enhanced (by ~1.5–2 times) as compared with those of BCT and BCT/CFO-x without magnetic orientation. For the BCT/CFO-0.0625 sample (that is, n(Co):n(Ti) = 1:15), the orientated sample has a dielectric constant ~3.5 times as large as that without magnetic orientation, as shown in Fig. 7 and Fig. S4.
3.5. Temperature dependence of the dielectric loss (tan δ) of BCT/CFO composite fibers
Fig. 8a–e shows the temperature-dependent dielectric loss (tan δ) of BCT and the BCT/CFO-x composites, measured at different frequencies. For BCT, tan δ measured at 10 kHz exhibits a maximum at around 121.2 °C, which corresponds to the ferroelectric-to-paraelectric phase transition temperature, or Curie temperature T_c. The BCT/CFO-x samples show different dielectric behaviors for different CFO contents x. The peak values of the dielectric loss and T_c are listed in Table 1. The Curie temperature T_c is found to decrease significantly with increasing CFO content. Meanwhile, the dielectric losses at T_c of the BCT/CFO-x samples are larger than that of BCT.
For the BCT/CFO-0.25 composites, the sample with magnetic orientation has a noticeably higher T_c and tan δ than the sample without magnetic orientation.
Dielectric properties of BCT/CFO composite fibers
The dielectric relaxation of the BCT ceramics and the BCT/CFO-x composite ceramics can be well described by the Arrhenius relation [36]:
f = f_0 exp(−E_a / k_B T), (1)
where f_0 is an attempt frequency and E_a is the apparent activation energy characterizing the relaxation process. The Arrhenius plots for the BCT and BCT/CFO-x samples are shown in Fig. 9a, and the activation energies fitted by Eq. (1) are listed in Table 1. The activation energy for the BCT ceramics, and for the BCT/CFO-x composite ceramics with low CFO content (x = 0.0625), is about 2.6–2.7 eV, which is typical for the relaxation of domain walls in ferroelectric phases close to T_c. However, the activation energy for the BCT/CFO-x composite ceramics with high CFO contents (x > 0.0625) is much lower than 1.0 eV, suggesting that the dielectric relaxation could be related to ions such as oxygen ions, or to atomic defects. In particular, the dielectric loss of BCT/CFO-0.25 with magnetic orientation is much higher than that of BCT/CFO-0.25 without magnetic orientation, suggesting that the applied magnetic field could lead to ordering of those oxygen ions or atomic defects. Given that T_c of BCT/CFO-0.25 with magnetic orientation is much higher than that without magnetic orientation, it is envisaged that the ordered oxygen ions or atomic defects could act as pinning centers for domain-wall motions or rotations.
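In practice, the activation energy in Eq. (1) is extracted from a linear fit of ln f against 1/(k_B T), using the loss-peak temperatures measured at several frequencies. The short Python sketch below shows the procedure; the frequency and temperature values are invented placeholders, not data from this study.

import numpy as np

kB = 8.617e-5  # Boltzmann constant in eV/K

# Hypothetical loss-peak temperatures (K) measured at several frequencies (Hz).
f = np.array([1e3, 1e4, 1e5, 1e6])
T = np.array([380.0, 391.0, 403.0, 416.0])

# Arrhenius: f = f0 * exp(-Ea / (kB * T))  =>  ln f = ln f0 - Ea * (1 / (kB * T))
slope, intercept = np.polyfit(1.0 / (kB * T), np.log(f), 1)
Ea = -slope             # apparent activation energy, eV
f0 = np.exp(intercept)  # attempt frequency, Hz
print(f"Ea = {Ea:.2f} eV, f0 = {f0:.2e} Hz")

The sign convention matters: a relaxation that shifts to higher temperature at higher measurement frequency gives a negative slope in these coordinates, and its magnitude is the activation energy.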
Mechanisms of enhanced dielectric properties of BCT/CFO composite fibers
As shown in Fig. 7, the BCT/CFO composite ceramics without magnetic orientation mainly show the dielectric constant-frequency characteristics of BCT, while the influence of the CFO ingredient on the relationship between dielectric constant and frequency is relatively weak. Considering that the dielectric constant of BCT/CFO is mainly attributable to the ferroelectric BCT phase, the effect of CFO content on the dielectric constants of these BCT/CFO composites is also weak over the frequency range from 20 Hz to 15 MHz. For the magnetic-field orientated BCT/CFO composite ceramics with 25 wt% CFO, although the dielectric constant-frequency characteristic is similar to that of the BCT/CFO composite ceramics without magnetic orientation, the dielectric constants are significantly boosted at all testing frequencies as compared with those of the BCT/CFO composite ceramics without magnetic orientation. Therefore, the effect of the orientated magnetic CFO phase on the dielectric constants of the ferroelectric BCT phase could be significant.
From a macroscopic point of view, the dielectric constant ε_r of an isotropic dielectric material is defined by the following equation [37]:
P = ε_0 χ E, (2)
where P is the macroscopic polarization, E is the applied electric field, χ is the susceptibility, ε_0 is the vacuum permittivity, and ε_r = 1 + χ. Since crystalline solids have different dielectric characteristics along different crystallographic orientations, the dielectric constant depends on the arrangement of molecules or ions in the crystals. In general, the direction of polarization and the applied electric field are not the same; therefore, ε_r is determined by the magnitude of P, which is an average over all polarization orientations in a dielectric material. In the BCT/CFO composite fibers without magnetic orientation, the fibers are randomly distributed, and so are the polarizations of the BCT/CFO fibers. Under a magnetic field applied along the sample surface (in the lateral direction), the BCT/CFO fibers are aligned along the lateral direction, as shown in Fig. 4. When an external electric field E is applied perpendicular to the sample surface, the magnitude of P for the aligned BCT/CFO fibers should be larger than that for the un-aligned fibers, resulting in the clearly enhanced ε_r.
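The orientation-averaging argument can be made concrete with a small numerical estimate. The following Python sketch is a toy model resting on idealized assumptions of our own (identical, non-interacting polarization vectors that are either isotropically distributed or fully aligned); it compares the average field-direction component in the two cases.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Random fiber polarizations: isotropically distributed unit vectors.
v = rng.normal(size=(n, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)

# Average magnitude of the component along the field direction (z axis).
random_avg = np.abs(v[:, 2]).mean()   # tends to 0.5 for isotropic orientations
aligned_avg = 1.0                     # perfectly aligned fibers

print(f"random: {random_avg:.3f}, aligned: {aligned_avg:.3f}")
# In this toy model the aligned ensemble projects roughly twice as much
# polarization onto the field direction as the random one.

While this idealization ignores domain interactions and partial alignment, it illustrates why the averaged P, and hence ε_r through Eq. (2), is larger for the magnetically orientated samples.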
The enhanced dielectric constant of the BCT/CFO composite ceramics with magnetic orientation can also be explained by a microscopic mechanism, based on the magnetic, ferroelectric and dielectric analyses described in Secs. 3.2–3.5. Owing to the electro-magneto-coupling, the CFO nanoparticles in the BCT matrix could act as pinning centers for the polar domains in BCT, as illustrated in Fig. 9b. Essentially, the motions or rotations of the domain walls of polar domains in the ferroelectric BCT regions could be hindered by the magnetic CFO nanoparticles through the electro-magneto-coupling effect. In the BCT/CFO composite ceramics without magnetic orientation, it is suggested that the un-orientated CFO ferrite nanoparticles could spatially interrupt the ordering of the ferroelectric BCT phase, although the electro-magneto-coupling between CFO and BCT might be weak [38], as illustrated in Fig. 9b. In the BCT/CFO composite ceramics with magnetic orientation, the magnetic ordering of the CFO phase driven by the applied magnetic field could lead to enhanced local magnetic fields around the CFO nanoparticles. As a result, the electro-magneto-coupling between the CFO nanoparticles and the polar domains in BCT is increased and, consequently, the ordering of the ferroelectric phase of BCT could be weakened [39]. Such an effect of CFO on the ferroelectric properties of BCT is well demonstrated by the ferroelectric P-E loop results (Table 1) shown in Fig. 6, which indicate significantly reduced saturation polarization and remnant polarization of the BCT/CFO-0.25 composite ceramics with magnetic orientation as compared with those of the ceramics without magnetic orientation. On the other hand, such pinning effects of the CFO nanoparticles on the ferroelectric properties of BCT are also consistent with the dielectric relaxation results, which indicate a significantly increased dielectric loss at the ferroelectric-to-paraelectric transition in the BCT/CFO-0.25 composite ceramics with magnetic orientation as compared with the ceramics without magnetic orientation. Therefore, the strong pinning effect of the magnetically orientated CFO nanoparticles on the motions or rotations of the domain walls of polar domains in the ferroelectric BCT regions leads to the increased dielectric constants.
Conclusions
In summary, BCT/CFO composite fibers with enhanced dielectric constants were prepared by a combination of electrospinning and a magnetic orientation approach. The effects of magnetic field orientation and CFO content on the magnetic, ferroelectric and dielectric properties were studied. The BCT/CFO composite ceramics exhibit large magnetic anisotropy. The dielectric constant of the BCT/CFO composite ceramic with magnetic orientation is 1.5–3.5 times as high as that without magnetic orientation. This study thus demonstrates that magnetic-field-induced orientation in multiferroic composites can be a facile and effective way of boosting their dielectric constants, providing a novel technology for further enhancing the dielectric properties of lead-free ferroelectric materials.
"Materials Science",
"Physics"
] |
Waste Management in the Circular Economy. The Case of Romania.
Applying the principles of sustainable development in Romania involves a new, ecological approach to waste that uses the basic concepts of the circular economy to weigh proposed projects in this area accurately, taking into account existing environmental resources and zero-waste objectives. The paper focuses on: quantitative and qualitative measures of waste prevention in Romania, changing the status of waste by selling it as a product, mechanisms for paying for treatment and/or disposal that discourage waste generation, and the use of financial resources obtained from secondary raw materials to improve the efficiency of waste management.
Introduction
According to the Executive Director of the European Environment Agency, Hans Bruyninckx, the "circular economy concept has advanced in European policy decision as a positive perspective, based on solutions for achieving economic development while respecting environmental limits". The Romanian European Environment Agency is ready to support the transition towards a circular economy through analyses and evaluations.
Unlike the traditional, linear-economy perspective, the circular economy tries to respect environmental limits by increasing the share of renewable or recycled resources and reducing the consumption of raw materials and energy. In this way, both emissions and the wasting of resources are reduced. Concepts such as eco-design, distribution, reuse, repair, and recycling of products and materials also play an important role in maintaining the use and value of products, components and materials. Moving to a circular economy is essential in order to meet the resource efficiency agenda established under the Europe 2020 strategy for smart, sustainable and inclusive growth [1]. Higher performance and sustainable, efficient use of resources can be achieved, bringing major economic benefits. In the industrial sector, it is already recognized that improving resource productivity makes a strong business case. It is estimated that improving resource efficiency along the value chain could reduce material needs by 17-24% by 2030 [2], and that better use of resources could represent an overall potential saving of EUR 630 billion per year for European industry [3].
Waste generation
Waste generation is highly dependent on local consumer habits, the type of buildings, general living conditions, and the type of industry and commerce. For example, in areas where homes have individual fireplaces burning wood, coal or coke, waste is considerably more plentiful and denser in winter than it is where buildings have central heating. In rural areas, the amount of waste collected per capita is lower than in urban areas because of in-situ composting. In some communities, waste is very wet in autumn and has a high organic content owing to the preserving of vegetables; this can be problematic for waste incineration installations, which may need additional fuel. Homes with a relatively low income produce less waste, but a more organic mix, than homes with a relatively high income. The only direct measurement of waste generation is made at the weighbridges of landfills, recycling stations and treatment facilities, with records of the waste type and the origin of each transport. Based on these statistics, quarterly or seasonal variations in waste are recorded and forecasts can be made.
Waste classification
In terms of nature and place of production, waste is classified as follows: 1. Household waste - waste from the household sector or similar sectors (including any hazardous waste it contains) that can be taken by the usual pre-collection systems or village collection systems; 2. Street waste - waste specific to public traffic routes, arising from the daily activity of the population, city green spaces, animals, and the deposition of solids from the atmosphere; 3. Waste similar to municipal waste - waste from small or large industry, commerce, and the public and administrative sectors, which has a composition and properties similar to household waste and can be collected, transported, processed and stored together with it; 4. Bulky waste - solid waste from various sources which, due to its size, cannot be taken by the usual pre-collection or collection systems and requires differentiated treatment; 5. Construction waste - waste from the demolition of industrial and civil constructions; 6. Hazardous waste - toxic, flammable, explosive, infectious or otherwise dangerous waste which, once introduced into the environment, can harm plants, animals or humans; 7. Agricultural waste - waste from agricultural and livestock units (manure, animal waste from slaughterhouses and the meat industry); 8. Industrial waste - waste from technological processes; 9. Hospital waste - waste from the activities of hospitals and health units, which is incinerated in crematoria. Depending on specific local conditions, other types of waste may require special attention, for example marine ballast (polluted with oil/chemicals) and mining waste.
The main objectives of solid waste management
The main objectives of solid waste management are: protecting public health; protecting the environment; keeping public places clean so that they are aesthetically acceptable; and conserving natural resources through waste reduction and recycling policies. All these objectives are achieved through good collection and safe treatment of waste, and through proper waste storage and disposal. In my opinion, integrated waste management is vital for the community, for the following reasons: 1. Landfill capacity decreases continuously, and siting and constructing new landfills is a difficult and expensive process. 2. Many waste materials are scarce natural resources, which makes their recovery necessary, reducing environmental impact and improving people's quality of life.
3. The materials found in the waste stream may offer an opportunity to start a business. 4. A system that does not rely on a single alternative is more adaptable to economic, technological and legislative changes. 5. Investors and creditors favour capital projects that are part of a carefully designed ecological strategy. Local authorities are in an advantageous position when assessing proposals for a new facility, as they have the chance to examine the entire system thoroughly. Industrial pollutant emissions are numerous and highly diverse. In such a vast area, only general principles for reducing pollutant emissions can be advanced, and these need to be adapted to each industrial branch separately.
Industrial Product Life Cycle Stages
A life cycle approach means considering the environmental impacts of, and the resources used by, products (goods or services) throughout their life cycle.
A life cycle approach helps us identify the sensitive points and aspects of a product that can be improved, such as a lower environmental impact, a reduced use of resources along the life cycle stages, or trade-offs between different product options. Sensitive issues can arise at any stage of the life cycle, from raw material extraction and conversion, through manufacturing and distribution, to customer use and/or consumption.
The life cycle ends with reuse, recycling, energy recovery and final disposal. The key objective of a life-cycle-based approach is to inform decisions without simply shifting burdens: it means minimizing environmental impacts at one life cycle stage, in one geographic region, or in one impact category, while avoiding increases in environmental impacts elsewhere. For example, it means saving energy during the use phase of a product without simultaneously increasing the amount of material required for its delivery and the impacts associated with that supply. At each stage of the life cycle, raw materials are consumed and chemicals are released as emissions; these contribute to various environmental impacts and issues such as resource scarcity.
Measures to reduce waste in order to achieve a circular economy in Romania
Circular economy approaches exclude the concept of waste and usually involve innovation along the entire value chain, rather than relying solely on end-of-life solutions. For example, these approaches may include:
1. Reducing the amount of material required to provide a given service;
2. Extending the useful life of products (durability);
3. Reducing the use of energy and materials in the production and use phases (efficiency);
4. Reducing the use of materials that are hazardous or difficult to recycle in new products and production processes (substitution);
5. Creating markets for secondary raw materials, based on standards and public procurement (recycling);
6. Designing products that are easy to maintain, repair, refurbish, remanufacture or recycle (eco-design);
7. Developing the services consumers need in order to reduce waste (maintenance/repair services, etc.);
8. Stimulating and supporting waste reduction and high-quality separation by consumers;
9. Stimulating separation and collection systems that minimize the costs of recycling and reuse;
10. Facilitating clustering activities that prevent industrial by-products from becoming waste (industrial symbiosis), and encouraging wider and better consumer choice. In this case, customers could use leasing and service exchange as an alternative to owning products, and consumers' interests need to be protected (in terms of cost, protection, information, contract terms, insurance-related issues, etc.).
In order to show the relationship between the industrial product life cycle and industrial emissions, I have constructed a matrix named "Eco-friendly methods for reduction of industrial pollutant emissions - Industrial Product Life Cycle Stages Matrix" (Table 1). According to Luminiţa I. Popa and Vasile N. Popa [4], the industrial product life cycle stages are as follows: market needs; research and development; idea generation; opportunity identification and concept definition; research design and development; prototype & production; distribution and manufacturing; marketing; sales; maintenance & service; product feedback; removal & disposal. As can be seen in the matrix, the "research and development" and "research design and development" life cycle stages have the highest score, 7. Another two life cycle stages ("prototype & production" and "distribution and manufacturing") have a high score, 6. Industrial company managers should therefore take into consideration that these stages are very important, because they can affect the entire industrial product life cycle. In addition, three other metrics, "Eco-friendly production output" (score 10), "Eco-friendly operational life cycle" (score 10) and "Implementing TQM method" (score 8), are the most important ways to reduce the cost of green acquisitions.
European Union Resource efficient Scoreboard
The Resource Efficiency Scoreboard presents indicators covering the themes and subthemes of the Roadmap to a Resource Efficient Europe. The scoreboard aims to monitor the implementation of the roadmap, to communicate the link between resources and the economy, and to engage stakeholders. Indicators are arranged in three groups: lead, dashboard and theme-specific indicators. For several indicators it is more meaningful to view the data at country level rather than at EU-28 level. By default, the scoreboard presents the data for Belgium, the first country in the Member States list (sorted in protocol order); data for the EU and other Member States can be selected from the country list. The waste generation indicator shows trends in waste generation, both EU-wide and for individual EU countries. It covers both non-hazardous and hazardous waste from all sectors of the economy (production) and from households (consumption). It does not cover mineral wastes or soil; over 90% of these come from the mining and construction sectors, which are subject to considerable fluctuation over time. Waste generation excluding major mineral wastes therefore reflects general trends more accurately than statistics on total waste generated. The indicator shows the amount of waste generated annually in the EU as a whole and in individual countries, expressed in kilograms per inhabitant. It is based on data collected as stipulated by the Waste Statistics Regulation and is available for every second year as of reference year 2004 [5] (source: http://ec.europa.eu/eurostat/web/waste/generation-of-waste-excluding-major-mineral-wastes).
Landfill rate of waste excluding major mineral wastes
The indicator is defined as the volume of waste landfilled (directly or indirectly) in a country per year divided by the volume of waste treated in the same year. The waste taken into account excludes major mineral wastes, dredging spoils and contaminated soils. This exclusion enhances comparability across countries, as mineral waste accounts for high quantities in some countries due to economic activities such as mining and construction. One exception, however, is that the indicator explicitly includes combustion wastes and solidified, stabilised and vitrified wastes, despite these being completely or partly mineral. The indicator is derived from the two-yearly reporting of the countries according to the Waste Statistics Regulation. It covers landfilling of hazardous and non-hazardous waste from all economic sectors and from households, including waste from waste treatment (secondary waste) [5] (source: http://ec.europa.eu/eurostat/cache/metadata/FR/t2020_rt110_esmsip.htm).
Recycling rate of municipal waste
The recycling rate is the tonnage recycled from municipal waste divided by the total municipal waste arising. Recycling includes material recycling, composting and anaerobic digestion. Municipal waste consists to a large extent of waste generated by households, but may also include similar wastes generated by small businesses and public institutions and collected by the municipality; this latter part of municipal waste may vary from municipality to municipality and from country to country, depending on the local waste management system. For areas not covered by a municipal waste collection scheme the amount of waste generated is estimated. [5] (http://ec.europa.eu/eurostat/cache/metadata/FR/t2020_rt120_esmsip.htm).
Electrical and electronic equipment waste (WEEE)
WEEE poses, on the one hand, a risk to the environment (hazardous components); on the other hand, it has a high potential for recycling, replacing raw materials with secondary raw materials such as precious metals and other highly valuable special materials. For the calculation of recycling rates it is crucial to know the volume of end-of-life electrical and electronic equipment. As this is difficult to determine for many devices and many countries, the volume of EEE put on the market during the previous three years (considered easier to determine) is used as a proxy for the volume of WEEE in the reference year (see Article 7 of the WEEE Directive 2012/19/EU). The collection rate is calculated as the volume of WEEE collected in the reference year, divided by the average volume of EEE put on the market in the three previous years. The recycling rate of e-waste (this indicator) equals the total collection in the present year divided by the average of the 'put on the market' volumes of the three preceding years, multiplied by the reuse and recycling rate (of treatment facilities), on the assumption that the total amount of collected e-waste is sent to treatment/recycling facilities [5] (http://ec.europa.eu/eurostat/cache/metadata/FR/t2020_rt130_esmsip.htm).
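The rate definitions above reduce to simple ratio arithmetic. The sketch below illustrates them in Python; the tonnages and the 0.80 treatment-facility rate are invented placeholders, not Eurostat figures:

```python
def weee_collection_rate(collected_t, put_on_market_t_prev3):
    """Collected WEEE in the reference year divided by the average EEE
    put on the market in the three preceding years (Art. 7 of the
    WEEE Directive 2012/19/EU)."""
    avg_pom = sum(put_on_market_t_prev3) / len(put_on_market_t_prev3)
    return collected_t / avg_pom

def weee_recycling_rate(collected_t, put_on_market_t_prev3, reuse_recycling_rate):
    """Collection rate multiplied by the reuse-and-recycling rate of the
    treatment facilities, assuming all collected e-waste is sent to
    treatment/recycling."""
    return weee_collection_rate(collected_t, put_on_market_t_prev3) * reuse_recycling_rate

# Placeholder tonnages for illustration only:
print(weee_recycling_rate(40_000, [95_000, 100_000, 105_000], 0.80))  # -> 0.32
```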
Comparison of "Turning waste into a resource" indicator. Romania vs. European Union.
Using the indicator"Recycling rate of e-waste" (table 2) for Romania and EU, it is made an average of three (2010-2012) as following: Romania: 13,125; European Union: 31,433; it is chosen this period of time because there is after 2007, the year in which Romania has become full member of European Union.This relevant indicator shows that Romania is 2,4 times less efficient in the field of "Recycling rate of e-waste" than EU countries. Using the indicator "Recycling rate of municipal waste" (table 2) for Romania and EU, it is made an average of three years (2010-2013) as following: Romania: 13,125; European Union: 56,425; it is chosen this period of time because there is after 2007, the year in which Romania has become full member of European Union.This relevant indicator shows that Romania is 4.3 times less efficient in the field of "Recycling rate of municipal waste" than EU countries [5]. It is important to note that the term "consumption" as used in DMC denotes apparent consumption and not final consumption. DMC does not include upstream hidden flows related to imports and exports of raw materials and products. The indicator is a Resource Efficiency Indicator. It has been chosen as a lead indicator presented in the Resource Efficiency Scoreboard for the assessment of progress towards the objectives of the Europe 2020 flagship initiative on Resource Efficiency. The DMC is defined as the total amount of material directly used in an economy and equals direct material input (DMI) minus exports Domestic material consumption -is represented in "tonnes per capita". (Source: http://ec.europa.eu/eurostat/cache/metadata/DE/t2020_rl110_esmsip.htm) (2007-2014), the domestic material consumption in Romania, has increased 1.72 times, compared with EU average in the same period of time (8 years). That means that Romanian people has consumed as a consuming society, as is shown in the figure 1.
Conclusions
The benefits of the transition towards a circular economy in Europe could be considerable: reduced environmental pressures and a drastic reduction in the Romanian economy's dependence on imports, which, if it gradually increases, can become a source of national vulnerability. Increasing global competition for natural resources has contributed to higher prices and greater volatility. Applied circular economy strategies could also reduce costs and increase the competitiveness of Romanian industry, with net benefits that include job opportunities. Creating a circular economy in Romania requires fundamental changes in the value chain, from product design and production processes to new circular business models and consumption patterns.
In this manner, recycling will turn waste into a resource, and product life extension will contribute to reducing the consumption of natural resources. Some Romanian companies are already experimenting with new circular business models, such as those based on the functions and services of the collaborative consumption model specific to the circular economy. In the future, Romanian managers could take several measures for waste reduction: changing the status of waste by selling it as a product; payment mechanisms for treatment and/or disposal that discourage waste generation; and using the financial resources obtained from secondary raw materials to make waste management more efficient.
| 3,903.2 | 2016-11-01T00:00:00.000 | ["Environmental Science", "Materials Science"] |
High-temperature ultrafast ChipHPLC-MS
Herein, we present a miniaturized chip-based HPLC approach coupled to electrospray ionization mass spectrometry that utilizes temperature to achieve high-speed separations. The approach benefits from the low thermal mass of the microfluidic chip and can form an electrospray from the pre-heated mobile phase. With this technology, isothermal and temperature-programmed operations up to 130°C were used to perform reversed-phase separations of pesticides in methanol- and ethanol-containing eluents in less than 20 s. Supplementary Information: The online version contains supplementary material available at 10.1007/s00216-023-05092-w.
Introduction
Developing analytical systems that provide chemical data in ever shorter time frames is a current trend in chemical research. It can increase productivity and reduce operating costs, particularly in separation science [1][2][3][4]. Within this field, HPLC-MS has established itself as the preferred method for qualitative and quantitative analysis of complex chemical samples. Although the combination of HPLC and MS is a powerful and high-resolution technique, its chromatographic front end is often time-consuming and leads to extended analysis times. Therefore, various approaches have been pursued to accelerate analysis times from minutes to a few seconds, giving rise to the field of high-speed or ultrafast HPLC-MS [5].
In addition to conventional approaches to accelerating HPLC, such as UHPLC, which use sub-2-µm particles and pressures above 600 bar, high temperature offers an exciting route to getting even faster [6][7][8]. The ability of high-temperature HPLC (HT-HPLC) to speed up chromatographic separation is based on the reduced viscosity of the mobile phase solvent and the increased mass transfer kinetics between the stationary and mobile phases [9][10][11][12].
Another attractive feature of HT-HPLC is that the elution strength of the mobile phase increases with temperature, so that gradient elution can be achieved by temperature programming, similar to gas chromatography [13][14][15][16]. In addition, thermal equilibration of the column between runs occurs much faster than equilibration after a solvent gradient and is another factor in accelerating HPLC cycle times. Since the speed of temperature adaptation scales with thermal mass, miniaturized systems such as the recently introduced high-temperature chip-based HPLC (HT-chipHPLC) are particularly promising when rapid heat exchange is required [17][18][19]. Despite the additional advantages that result from the miniaturization of HT-HPLC, such as low microliter solvent consumption, absence of eluent preheating, and blazingly fast separations by thermal gradients, detector coupling in HT-chipHPLC is challenging, as the column temperatures normally exceed the boiling point of the eluents used [20,21]. This can lead to uncontrolled evaporation of the solvent before detection and cause signal instabilities [22]. As a remedial strategy to avoid this phase transition, the integration of back pressure stabilization in the post-column region of the HT-HPLC microchip has been proposed in the scientific literature. With this technology, the boiling point of the eluent is raised and the eluent remains in its liquid state.
Back pressure stabilization is convenient, especially with optical detection techniques such as fluorescence, because conventional HPLC back pressure regulation equipment can be used without contributing to extra-column band broadening. Although maximum temperatures of up to 180°C at back pressures of 69 bar can be reached briefly, undesirable temperature-dependent effects, such as signal quenching, limit fluorescence detection in HT-chipHPLC [23].
In contrast to optical detection methods, coupling HT-chipHPLC to mass spectrometry is very appealing due to the higher information content. The electrospray process tolerates nanoliter-per-minute flow rates of heated eluents under ambient conditions. Although temperature alters ESI-critical parameters, such as the eluent's surface tension and viscosity, numerous studies have emphasized the positive impact of temperatures up to 300°C on the electrospray process, leading to improved MS signal sensitivity without thermal degradation of the analyte [24][25][26][27][28].
Depending on the column's inner diameter, interfacing strategies for HT-LC-MS are based on pre-detection cooling and flow restriction [29][30][31][32][33]. As both approaches require additional post-column volume, either for sufficient heat exchange or for attaching an external restrictive capillary, their use has a detrimental effect on separation performance. In this context, lab-on-a-chip technology, with its unique feature of seamlessly integrating multiple functions onto a single chip device, can bring significant improvements by avoiding additional extra-column band broadening while providing the rapid heat exchange necessary to reduce the temperature of the heated eluent [34].
Various high-temperature chip chromatographs have been developed. However, a coupling between HT-chipHPLC and MS has remained lacking. Motivated by this gap, the present study aims to demonstrate the feasibility of coupling MS to HT-chipHPLC. To this end, we functionalized a glass-made microchip that exploits the advantages of microfluidics to semi-automatically separate nanoliter sample fractions within a few seconds and achieve efficient MS detection of heated eluents using a monolithic chip emitter. Such a technology would push the boundary of analysis run times closer toward real-time analysis and broaden the analytical scope of HT-chipHPLC by providing unambiguous identification through accurate mass measurements.
Microfluidic chip fabrication
The borosilicate (BOROFLOAT®33)-based microfluidic HT-HPLC chip (45 mm × 10 mm × 2.2 mm) was manufactured by iXfactory, now part of Micronit (Germany), according to our design, and is illustrated in Fig. 2A. Chip fabrication was based on photolithographic, wet-etching, powder-blasting, and fusion-bonding techniques [35]. A microfluidic channel network is etched into the bottom slide during this process. The etched microfluidic channel network consists of a column structure (l 35 mm, w 90 µm, d 45 µm) with flow-restrictive weir structures (l 10 µm, w 45 µm, d 10 µm) at both ends. Furthermore, the column structure contains a packing channel branching orthogonally halfway along the column and a microfluidic cross for sample injection and flow splitting. Peripheral access to the microfluidic channel system was ensured by six conically shaped connection openings integrated into the cover plate. The stationary phase was implemented by pressure-driven slurry packing of temperature-stable BEH C18 particles (dp = 2.5 µm) via the packing channel [36]. The chip was submerged in a sonication bath to avoid particle aggregation in the slurry (1-3 mg·mL−1, prepared in ACN).
Porous monolithic frits were inserted at the beginning and end of the column before the packing process to retain the stationary phase material. After completing the slurry packing process, the packed column was sealed with a non-porous polymer. Both integrated polymers, the porous monolithic frits and the non-porous plug, were selectively introduced by LED-assisted radical polymerization.
For MS interfacing, a monolithic electrospray emitter was integrated into the microfluidic glass chip. To this end, the cuboid front end of the glass microchip was ablated using a rotary grinder (Proxxon, Luxembourg). The resulting pyramid-like shape served as the emitter tip. To increase contact angles and facilitate electrospray formation with hydrophilic eluents, the monolithic emitter underwent silanization by dip-coating. After hydrophobization, the functionalized HT-HPLC microchip was installed in the chip thermostat and in front of the MS orifice for measurement (Fig. 3F).
Chip thermostat
The microcolumn thermostat's components and basic functionality have been described previously [19]. In brief, two cylindrical micro thermoelements (20 W, tmax = 260°C) embedded in a polyether ether ketone (PEEK) housing and driven by a 24 V DC power supply serve as infinite heat sources. Temperature surveillance by three Pt100 sensors provides the sensing element for an integrated PID loop controlled by custom-built LabVIEW software. The HPLC chip and chip thermostat are interfaced by a clamp fixture that contact-heats the microcolumn directly from both sides up to 200°C at a rate of 4.7°C/s. Before chromatographic operation, the microcolumn was thermally equilibrated for 60 s. Infrared radiometric imaging was conducted using a FLIR PRO One (USA) mobile camera.
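The control loop itself runs in custom LabVIEW software; the following Python sketch is only a minimal illustration of how such a discrete PID temperature controller works. The gains, loop rate, and duty-cycle output are invented for illustration, not taken from the instrument:

```python
class PID:
    """Minimal discrete PID loop of the kind used for the chip thermostat.
    Gains are placeholders, not the values of the real controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint_c, measured_c):
        error = setpoint_c - measured_c
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # Output interpreted as heater duty cycle, clipped to 0..1
        # (heating only, no active cooling).
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(0.0, min(1.0, u))

pid = PID(kp=0.5, ki=0.05, kd=0.1, dt=0.1)            # invented gains, 10 Hz loop
duty = pid.update(setpoint_c=130.0, measured_c=25.0)   # Pt100 reading in deg C
```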
Sample injection and elution
Sample injection onto the packed column is realized by an adapted hydrodynamic pinched-injection scheme, as described earlier [37]. Briefly, this is realized by two pump-driven fluidic situations, referred to as injection and elution.
An overview of both fluidic situations, injection and elution, is provided in the Electronic Supplementary Material, Fig. S1 and S2. Switching from injection to elution mode is done via two external 10-port nano switching valves (Cheminert, 100 µm bore size, VICI, Switzerland), of which the first nano valve is equipped with a 4.6-µL sample loop to forward a sample plug to the second nano valve. The second nano valve selectively directs the sample and pinch streams or the elution stream towards the injection cross on the microchip. During sample injection, a partial volume of the sample plug is loaded onto the column head and remains there until its elution. During the study, the injection mode was maintained for 15 s before switching to elution mode. During elution, the eluent pump flow increases the pressure at the injection cross and flushes the sample plug from the column head along the column. The entire process was executed by a semi-automated injection protocol using the Clarity Software Package (Data Apex, Czech Republic).
Mass spectrometric detection
A high-resolving quadrupole-orthogonal time-of-flight instrument (micrOTOF-Q II, Bruker Daltonics, Germany) was used for mass spectrometric detection.
A home-built chip interface allowed the micro-thermostat, the two 10-port nano valves, and a pressure sensor (Duratec, Germany) to be securely affixed near the microchip. An XYZ-axis stage micromanipulator (Thorlabs, USA) ensured precise spatial positioning in front of the inlet of the mass spectrometer, whose inlet capillary extension was customized for chip interfacing. For chromatographic experiments, all mass spectra were acquired at 4-8 Hz in full-scan mode (mass range 100-415 m/z). At the MS inlet, a potential between 2.5 and 3.5 kV was applied. A low dry gas flow of 1 L/min at 225°C was used during operation. Ion chromatograms and mass spectra were analyzed with Data Analysis 4.3 (Bruker Daltonics, Germany).
Results and discussion
The main focus of the study is to develop a technical approach for interfacing high-temperature chipHPLC with ESI mass spectrometry. By increasing the column temperature, the increased eluotropic strength of the eluent and the enhanced mass transfer kinetics should be exploited to shorten the analysis time of chip-based chromatography. This objective raises the question of the temperature limits within which electrospray formation on the heated chip emitter is possible. In a previous HT-chipHPLC study, the chip outlet was connected to an external HPLC back pressure regulator to prevent evaporation of the mobile phase in the microchip through back pressure stabilization [19]. Assuming that only a minor part of the microchip, containing the packed column, is heated and the emitter is located in the non-heated part, trouble-free electrospray operation could be possible due to the rapid heat exchange and pressure drop within the microchip. This was investigated in a preliminary experiment by dosing a sample plug (20 nL of 150 µM metolachlor dissolved in MeOH) onto a heated soda-lime glass test chip with subsequent MS analysis.
The home-built soda-lime glass test chip contained a single micro-structured channel and was designed so that parts of the channel are heated when the chip is placed into the microchip heater. For MS interfacing, an emitter was integrated on-chip by cutting and grinding off glass material at the front of the test chip to form a monolithic emitter of pyramidal shape. The emitter tip underwent hydrophobization to prevent spray instabilities caused by low contact angles. The final soda-lime glass test chip was connected to a nano injection valve (Cheminert, VICI, Switzerland) to allow injection of a small nanoliter volume into the 1 µL/min eluent stream consisting of 70:30 v/v MeOH:H2O with 0.1% FA. Due to the fluidic connection between the external injection valve and the test chip, the chip operated at ground potential and could therefore form an electrospray between the emitter tip and the high-voltage MS inlet to detect protonated metolachlor ions, [M+H]+, under ambient conditions.
Since this preliminary experiment aimed to investigate MS performance at higher temperatures, the chip thermostat gradually heated the soda-lime glass test chip from 30 to 180°C. At each temperature step, a nanoliter sample plug of metolachlor was injected onto the heated chip, and its extracted ion chromatogram (EIC) at 284 m/z was recorded. The recorded data indicate that it is possible to record ESI-MS signals of metolachlor at temperatures up to 180°C. Furthermore, the detected maximum intensities of the metolachlor ion chromatograms increased with temperature up to 130°C.
Since phase separation is observed when working at high temperatures, it was unexpected that electrospray formation could be maintained so far above the boiling point of the solvent (bp = 71°C) in a microfluidic channel that provides a back pressure of 1 bar under the given conditions. The observed effect of increasing ESI-MS sensitivity at elevated temperatures is consistent with work on ESI-MS using dedicated ESI sources with heated nitrogen or heated inlet capillaries [24,27,28].
More information about the preliminary investigation, including an illustration of the soda-lime test chip, a detailed experimental setup, and the recorded MS data, can be found in the Electronic Supplementary Material, Fig. S3.
After these encouraging first experiments with a simple, partially heated test chip with a non-restrictive channel and an emitter tip, we developed a functional HT-chipHPLC. A photographic image of the HT-HPLC microchip and a general overview of the experimental fluidic setup used in this HT-chipHPLC approach are shown in Fig. 1A and B.
The layout of the developed HT-HPLC microchip consists of an injection cross and a 35-mm-long HPLC column packed with temperature-stable C18 BEH particles. In contrast to the previous HT-chipHPLC approach, there is now an electrospray emitter at the end of the chip (4 mm from the column end) instead of a capillary attached to a conventional HPLC back pressure regulator. The electrospray emitter, shown in Fig. 1C, was manufactured by grinding and made hydrophobic by silanization [35].
The HT-HPLC microchip was connected to the external fluidic circuitry via home-built steel clamps to ensure operation [38]. The connecting clamps provide a pressure-stable connection even at 200°C. With regard to high-temperature compatibility, the flow restrictions in the pre-column region were adapted to apply the pressure necessary to avoid phase separation of the heated eluents.
For temperature control, the HT-HPLC microchip assembly was installed in the microchip heater (Fig. 1D), securely affixed to the XYZ micromanipulator, and positioned in front of the aperture of the mass spectrometer. An electrospray could be formed after precise alignment with the MS inlet while the microchip was held at ambient temperature. The developed assembly was evaluated by injecting a sample mixture of five phenylurea pesticides. To this end, the eluent composition first had to be optimized at a constant column temperature; an isocratic eluent of 50:50 v/v MeOH:H2O with 0.1% FA was used (see below). After the eluent optimization was completed, experiments were carried out to investigate the performance of the developed chipHPLC-MS setup under high-temperature conditions. For this purpose, the column temperature was increased stepwise without sample injection. During the temperature increase, electrospray formation and the behavior of the total ion current (TIC) were observed.
Using an MS inlet voltage of 3.5 kV, the developed HT-chipHPLC-MS system could generate an MS ion signal at a column temperature of 110°C.
Microscopic examination confirmed the formation of an electrospray consisting of a cone, a jet, and a spray plume between the emitter tip and the MS inlet (Fig. 2C). Since cone formation requires a liquid state, it can be assumed that the heated eluent in the post-column area undergoes strong heat exchange; for the eluent used, 50:50 v/v MeOH:H2O with 0.1% FA, cooling below its boiling point of 76°C is therefore assumed. Furthermore, it was found that the fluctuations of the TIC increase with increasing column temperature. For example, the relative standard deviation of a 1-min TIC acquisition is 1.8% at 30°C and rises to as much as 24% at 130°C (Fig. 2B). To investigate possible causes of the TIC fluctuations, the temperature distribution over the entire microchip operating at 130°C was visualized by thermographic imaging using a mobile thermal imaging camera (Fig. 2D). For better orientation, a true-scale illustration of the HT-chipHPLC arrangement in front of the MS inlet is displayed in Fig. 2F.
The thermographic image illustrates that the surface temperatures of areas in direct contact with the microchip thermostat correspond to the set column temperature. In contrast, areas at the chip edges generally have lower surface temperatures, indicating the rapid heat transfer of the microchip. For example, the surface temperature in the post-column region dropped from 124°C at the end of the column to 70°C at the emitter tip (Fig. 2E).
This heat exchange was sufficient to reduce the surface temperature below the boiling temperature of the eluent. Realizing an even larger temperature difference in the post-column region remains a challenge, since the borosilicate glass substrate cannot dissipate the applied heat fast enough. More efficient heat dissipation should be possible by active means, e.g., integrated microfluidic cooling channels or a cooling gas flow directed at the ESI tip.
In the presented setup, gas bubbles were observed to elute from the column into the post-column region of the HT-HPLC chip at excessively high temperatures.
Since the phase transition can be prevented by raising the post-column pressure, the ability of the post-column channel (trapezoidal cross-section, l 4 mm, w 70 µm, d 30 µm) to build up sufficient back pressure was called into question. Estimates using the Hagen-Poiseuille equation (viscosity of 50:50 MeOH:H2O = 1.34 cP, total elution flow rate = 80 µL/min) show that the pressure drop in the post-column channel is in the millibar range, which explains the presence of gas bubbles in the post-column region of the chip and the increasing fluctuations of the TIC. Nevertheless, it is remarkable that the TIC oscillates continuously, even with more significant fluctuations, and never breaks off entirely up to a column temperature of 130°C. This indicates a certain tolerance towards an outgassing eluent [39].
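This order-of-magnitude estimate can be reproduced in a few lines. Since the Hagen-Poiseuille equation strictly applies to circular channels, the trapezoidal post-column channel is approximated here by a rectangular duct; the flow actually passing through the column and post-column channel is assumed to be a ~1 µL/min split fraction of the 80 µL/min total elution flow (an assumption, since the injection cross splits the flow):

```python
# Order-of-magnitude pressure drop in the post-column channel.
mu = 1.34e-3          # Pa*s, viscosity of 50:50 MeOH:H2O
L = 4e-3              # m, post-column channel length
w, h = 70e-6, 30e-6   # m, channel width and depth (rectangular approximation)
Q = 1e-9 / 60         # m^3/s (1 uL/min, assumed split flow, not from the paper)

# Laminar pressure drop in a rectangular duct with h < w:
dp = 12 * mu * L * Q / (w * h**3 * (1 - 0.63 * h / w))
print(f"post-column pressure drop ~ {dp / 100:.0f} mbar")  # ~8 mbar
```

With these assumptions the result lands at a few millibar, consistent with the estimate quoted in the text.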
In the following set of experiments, injections of the pesticide mixture were made at column temperatures increasing from 30 to 130°C. A 60-s equilibration period was implemented before each new temperature step to avoid undesired temperature gradients. A plot of the recorded isothermal separations, with column temperature ascending from top to bottom, is shown in Fig. 2A.
Based on the recorded chromatograms, selected parameters such as retention time, peak width, and maximum peak intensity of the separated compounds were analyzed to evaluate the impact of the different isothermal column conditions. Before discussing each parameter individually, it is worth noting that increased column temperature had a positive effect on all of the parameters mentioned. For example, the total run time of the recorded chromatograms was dramatically reduced from over 5 min at 30°C to less than 20 s at 130°C, with a retention time reproducibility of 0.7% for selected analytes (n=3, retention of metolachlor at 70°C, illustrated in Electronic Supplementary Material Fig. S5). To assess the retention time data at the level of individual compounds, a Van't Hoff plot was created by plotting the logarithmic separation factor of selected analytes against the reciprocal microcolumn temperature (see Electronic Supplementary Material Fig. S6). The observed linear relationships indicate that increasing temperature affects the retention mechanism of each compound in the same way. A reduction in peak width accompanies the decreased chromatographic run time of the pesticide sample mixture (see Electronic Supplementary Material Fig. S7). This confirms the effect of reduced longitudinal diffusion of the sample, as the residence times on the microcolumn are drastically reduced due to the temperature-induced increase in flow rate. In addition to the reduced retention time and peak width, the maximum MS signal intensity as a function of temperature was investigated as a third parameter from the series of injections under isothermal conditions ranging from 30 to 130°C. For this purpose, the maximum peak intensities of the extracted ion chromatograms of the protonated analyte ion species were retrieved, analyzed, and plotted in Fig. 3.
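The Van't Hoff analysis mentioned above amounts to a linear fit of logarithmic retention data against reciprocal temperature. A minimal sketch with made-up retention factors for a single analyte (the paper fits separation factors, and the measured values are in the Supplementary Material):

```python
import numpy as np

# Hypothetical retention factors at the isothermal steps
# (illustrative numbers only, not the measured data).
T_c = np.array([30, 50, 70, 90, 110, 130])       # column temperature, deg C
k = np.array([9.0, 5.1, 3.1, 2.0, 1.35, 0.95])   # retention factor

x = 1.0 / (T_c + 273.15)                         # reciprocal temperature, 1/K
slope, intercept = np.polyfit(x, np.log(k), 1)   # linear Van't Hoff fit

# For a linear Van't Hoff plot, slope = -dH/R (transfer enthalpy).
R = 8.314  # J/(mol K)
print(f"transfer enthalpy dH ~ {-slope * R / 1000:.1f} kJ/mol")
```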
In Fig. 3, an analyte-dependent 5- to 10-fold increase in the MS signal is observed for cyanazine, fluometuron, and metolachlor when the column temperature is increased from 30°C to 130°C. Since temperature reduces the surface tension of the methanol-water eluent used, the desolvation process during electrospray ionization is improved. Studies investigating ESI signal intensities using non-miniaturized columns reported a 1.7-fold increase when column temperatures were raised to 150°C [31]. When evaluating the chromatograms of the isothermal separations of the pesticide mixture, it is noticeable that the early-eluting hydrophilic compounds are insufficiently resolved in separations above 70°C. Since the developed HT-chipHPLC-MS system allows rapid temperature adjustment, a thermal gradient can be applied to improve the separation. A two-step temperature program was started simultaneously with sample elution, raising the column temperature from 60 to 140°C within 30 s. This resulted in a total analysis time of only 36 s. Compared to isothermal operation, the thermal gradient significantly improved resolution and peak shape, as can be seen for two selected critical peak pairs (*R C, **R C) in Fig. 4.
Fig. 3
Fig. 3 Dependency between column temperature and maximal peak intensity of selected analytes; data are taken from the separations illustrated in Fig. 2A
For completeness, a comparative illustration of a solvent versus a thermal gradient can be found in the Electronic Supplementary Material, Fig. S8 and Table S1. As in other areas of chemical research, sustainability awareness is becoming increasingly important in the separation sciences. So-called green chromatography is therefore currently a highly active field of research [40,41]. An essential aspect of greening liquid chromatography involves the use of environmentally less harmful eluents, such as ethanol. Replacing conventional methanol-based eluents with ethanol-based eluents is much easier in HT-chipHPLC, as viscosity and column back pressure are reduced at higher temperatures. The decrease in column back pressure of the HT-HPLC chip used is displayed in the Electronic Supplementary Material, Fig. S9.
The results of the successful use of an ethanol-based eluent are shown in Fig. 5; a detailed version can be found in the Electronic Supplementary Material, Fig. S10 and Table S2. There, a pesticide mixture is separated with a binary ethanol-water eluent using a temperature gradient of 60 to 110°C, accelerating the chip chromatography under ambient conditions from over 500 s to 60 s. It should be pointed out that, compared to methanol, ethanol's stronger eluotropic strength allows the organic modifier's volume fraction to be reduced to 30%. As the higher surface tension of the ethanol-containing eluent and the assumed phase transition make it difficult to form an electrospray, the column temperature is limited to 110°C [42].
Since the phase transition must be avoided in order to accelerate high-temperature chip chromatography-MS further, prospective developments should focus on integrating pressure-controlling elements in the post-column region. A restrictive micro-channel connecting column and emitter, manufactured by state-of-the-art micro-machining, or the implementation of a microfluidic pressure regulator are promising options [43][44][45].
Conclusion
This study demonstrates the first successful coupling between high-temperature chip-based HPLC and ESI-MS. The developed HT-chipHPLC-ESI-MS system could operate at column temperatures of up to 130°C, separating pesticide mixtures in less than 20 s. Due to the low thermal mass of the HT-HPLC chip and the associated rapid heat exchange, it was possible to apply a thermal gradient using environmentally friendly ethanol-water eluents as an alternative to the solvent gradient.
The low microliter system volume of the HT-HPLC chip and the use of temperature as an external, easy-to-use elution control parameter provide an ideal combination to push cycle times in ultrafast chromatography further into the range of a few seconds and below.
Fig. 1
Fig. 1 Overview of the microchip design and experimental setup of the HT-chipHPLC-MS approach. A Photographic image of the functionalized HT-HPLC microchip. B Schematic drawing of the external fluidic circuitry in injection mode. It includes the microfluidic design of the microchip (top view) and fluidic components, such as HPLC pumps, valves, and a pressure sensor. The flow paths of the sample stream (green), pinch stream (blue), and eluent stream (black) are colored. Since the sample and pinch streams leave the chip at the waste outlet to flow into one of the restriction coils (R), the corresponding tubing is colored cyan. Capillary tubings not connected to a pumping device in this fluidic situation are colored gray. In addition, arrows indicate the direction of flow. C Insight into the post-column layout of the HT-microchip. D Photographic image of the microchip thermostat with the functionalized HT-HPLC microchip installed. More details about the experimental setup used in the presented study, including the fluidic circuitry for elution mode, visual inspections of the microfluidic cross injecting a fluorescent sample, and pressure data, are given in the Electronic Supplementary Material, Fig. S1 and S2
Fig. 4
Fig. 4 Illustration of an HT-chipHPLC-ESI-MS measurement utilizing a thermal gradient. Two-step thermal gradient from 60 to 140°C; column length: 35 mm; material: XBridge C18 BEH, dp = 2.5 µm; maximal elution pressure: 133 bar; sample solution identical to Fig. 2A. R C, resolution of the critical peak pair
| 5,509 | 2023-12-19T00:00:00.000 | ["Chemistry", "Engineering", "Environmental Science"] |
Crab cavities for colliders: past, present and future
The numerous parasitic encounters near the interaction points of some particle colliders can be mitigated by introducing a crossing angle between the beams. However, the crossing angle lowers the luminosity due to the reduced geometric overlap of the bunches. Crab cavities allow head-on collisions to be restored at the interaction point, thus increasing the geometric luminosity. Crab cavities also offer a mechanism for luminosity leveling. KEKB was the first facility to implement the crab crossing technique, in 2007, for the interaction of electron and positron beams. The High Luminosity Large Hadron Collider (HL-LHC) project envisages the use of crab cavities for increasing and leveling the luminosity of proton-proton collisions in the LHC. Crab cavities have also been proposed and studied for future colliders such as CLIC, ILC and eRHIC. This paper reviews the past, present and future of crab cavities for particle colliders.
Introduction -first ideas
A crossing angle is sometimes introduced between beams at the interaction point of colliders in order to mitigate parasitic collisions and/or get rid of the spent beam and debris from the collision. The crossing angle however reduces the peak luminosity of the collisions because it reduces the geometric overlap of colliding bunches as shown in Fig. 1.
In 1988, R. B. Palmer proposed the crab crossing scheme for an electron-positron linear collider [1], although it actually applies to any kind of collider. The scheme allows large crossing angles without loss in luminosity, as it reestablishes head-on collisions. Fig. 2 illustrates the crab crossing scheme.
A crab cavity is a deflecting cavity operated such that the phase is zero when the bunch is at the cavity center. The center of the bunch receives a null kick, whereas its head and tail receive opposite kicks. The bunch will wiggle along its path due to the crabbing kick. The phase advance between the crab cavity and the IP must be 90 degrees so that the momentum kick provided by the crab cavity fully transforms into a rotation of the bunch at the IP. The bunch can be uncrabbed by another set of crab cavities after the IP (local scheme) or can wiggle all around the accelerator (global scheme).
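In the thin-lens approximation, a particle at longitudinal offset z from the bunch center receives a transverse kick proportional to sin(ωz/c), which vanishes at the center. A small numerical sketch using the KEKB-like parameters quoted in this paper (the one-rms head/tail offsets are illustrative):

```python
import math

c = 299_792_458.0   # m/s
f = 509e6           # Hz, KEKB crab cavity deflecting-mode frequency
V0 = 1.4e6          # V, deflecting voltage per cavity
E = 8e9             # eV, KEKB electron beam energy

def crab_kick(z):
    """Transverse kick angle (rad) for a particle at offset z (m) from the
    bunch center; for an ultrarelativistic unit-charge particle the kick
    is ~ (e*V0/pc) * sin(omega*z/c), zero at the zero-crossing phase."""
    omega = 2 * math.pi * f
    return (V0 / E) * math.sin(omega * z / c)

for z in (-6.4e-3, 0.0, +6.4e-3):   # head, center, tail at one rms bunch length
    print(f"z = {z*1e3:+.1f} mm -> kick = {crab_kick(z)*1e6:+.1f} urad")
```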
First implementation of crab crossing technique
The B-factory KEKB was the first collider to implement the crab crossing scheme, in 2007. Beam-beam studies had predicted that head-on collisions would increase the beam-beam tune shift from 0.055 to about 0.15, leading to a luminosity gain beyond the purely geometric one [2].
KEKB was an 8 GeV electron and 3.5 GeV positron circular collider with a single IP. A global crab crossing scheme was implemented to reduce the costs of cavities and cryogenics, so there was only one crab cavity per ring. The required deflecting voltage per cavity was 1.4 MV at 500 MHz to compensate for a horizontal crossing angle of 22 mrad.
The KEKB crab cavities were single-cell structures working at 4.5 K and operating in the TM110 mode at 509 MHz. The cavity had a coaxial coupler to extract the TM010 (fundamental) mode and large beam pipes for the HOMs. The cell had a squashed shape to select the polarization mode [3].
The cavities successfully crabbed the KEKB bunches of high intensity beams to provide head-on collisions and maximize the geometric luminosity gain. The measured vertical beam-beam tune shift, 0.088, was however below the predicted value from simulations and the luminosity gain from beam-beam tune shift was therefore below the expected value [4,5].
KEKB operation terminated in June 2010 for the upgrade towards SuperKEKB. The maximum peak luminosity reached with the crab crossing scheme was 21.1 × 10³³ cm⁻² s⁻¹. To date, KEKB has been the only facility where the crab crossing scheme has been implemented.
Crab cavities for the luminosity upgrade of LHC
LHC will reach a luminosity of 2 × 10³⁴ cm⁻² s⁻¹, twice the nominal peak luminosity, with 14 TeV collisions by 2023 (expected integrated luminosity of 300 fb⁻¹). The High Luminosity LHC project, or HL-LHC, aims at increasing the integrated luminosity of LHC by one order of magnitude by 2035.
The HL-LHC will require installing new magnets and collimators, updating the vacuum, cryogenics and machine protection systems, upgrading the injectors, and implementing new beam optics and crossing schemes, among other actions [6]. As part of these upgrades, β* will be reduced from 0.55 to 0.15 m in order to increase the luminosity of LHC. As the two beams of LHC share the same vacuum pipe along a 120-meter-long section at each IP, a smaller beta function value at the IP will result in larger Long-Range Beam-Beam (LRBB) effects. The crossing angle will therefore be almost doubled, from 290 to 590 µrad, in order to reduce these LRBB effects [7].
Crab cavities thus become instrumental to fully benefit from the β* reduction, as crab crossing can reestablish head-on collisions and so increase the peak luminosity. With a 400 MHz crab cavity system and β* = 0.15 m, the expected improvement in peak luminosity is about 70%. Fig. 3 shows the luminosity dependence on β* for the scenarios with normal crossing, no crossing angle, and crab cavities. Crab cavities also provide mechanisms for luminosity leveling. The crossing angle can be varied from a large value to a head-on configuration as the particles burn off during collisions. Alternatively, crab cavities allow the implementation of the recently proposed crab kissing technique, in which the two bunches collide over their longitudinal plane. This technique allows not only luminosity leveling but also pile-up density reduction [8].
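The geometric loss that crab crossing recovers is commonly quantified by the reduction factor R = 1/sqrt(1 + Φ²), with Piwinski angle Φ = (θc/2)(σz/σx). The sketch below uses assumed, illustrative HL-LHC-like beam sizes (not values from this paper); note that the ~70% figure quoted above corresponds to the planned 12 MV crab cavity system, not to full compensation of the crossing angle:

```python
import math

def reduction_factor(theta_c, sigma_z, sigma_x):
    """Geometric luminosity reduction for a full crossing angle theta_c."""
    phi = (theta_c / 2) * (sigma_z / sigma_x)   # Piwinski angle
    return 1 / math.sqrt(1 + phi**2)

# Assumed illustrative values: full crossing angle 590 urad, rms bunch
# length 75 mm, rms transverse beam size ~7 um at beta* = 0.15 m.
R = reduction_factor(590e-6, 75e-3, 7e-6)
print(f"R = {R:.2f}; full crabbing would recover a factor {1/R:.1f}")
```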
The HL-LHC upgrade envisages the implementation of crab crossing at the IPs of both ATLAS and CMS (respectively, IP1 and IP5) following a local scheme configuration. Bunches from the two colliding beams will be crabbed before and uncrabbed after each IP. The local scheme requires twice the number of cavities than the global scheme but is preferred to avoid the severe phase advance constraints between IPs required by the global scheme.
The LHC crab cavity program consists of four main stages. The first stage started in 2003 and was dedicated to the conceptual design, feasibility study and development of the LHC crab crossing scheme.
The cavities will be SRF CW single-cell cavities operated at 2 K. The deflecting mode frequency is chosen to be 400 MHz as a compromise between compactness and reduced head-tail effect on the bunches. In the KEKB case, bunches were 5.8 mm long for the positron beam and 6.4 mm long for the electron beam, with the half-wavelength of the KEKB crab cavity deflecting mode being more than 46 times the bunch length of the KEKB beam. The half-wavelength of the LHC crab cavity deflecting mode will be only 5 times larger than the nominal LHC bunch length of 75 mm.
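The half-wavelength-to-bunch-length ratios quoted above follow from λ/2 = c/(2f):

```python
c = 299_792_458.0  # m/s

for name, f_hz, bunch_m in [("KEKB", 509e6, 6.4e-3), ("LHC", 400e6, 75e-3)]:
    half_wl = c / (2 * f_hz)
    print(f"{name}: lambda/2 = {half_wl*1e3:.0f} mm "
          f"= {half_wl / bunch_m:.0f} x bunch length")
# KEKB: 295 mm, ~46x the 6.4 mm bunch; LHC: 375 mm, 5x the 75 mm bunch.
```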
A total deflecting voltage of 12 MV must be provided at 400 MHz for full head-on collisions. The present layout foresees the total deflecting voltage being shared among four identical cavities.
The distance between the two beam pipes of LHC constrains the maximum width and height of the crab cavities to be 194 mm at room temperature for horizontal (CMS) and vertical (ATLAS) kick configuration, respectively.
The cavities will be installed between D2 and Q4 on both sides of IP1 and IP5. There are only 10 meters available at each crabbing site to fit 8 cavities (4 per beam) with their helium vessels and cryostats. Design studies have therefore focused on providing very compact cavities that satisfy the above-mentioned space constraints in LHC [9].
Three compact designs were selected: the RF dipole cavity designed by ODU-SLAC [10], the Double Quarter Wave (DQW) cavity designed by BNL [11] and the 4-rod cavity designed by Lancaster University [12]. Fig. 4 illustrates how compact the LHC crab cavities are by comparing the KEKB crab cavity with the DQW cavity.
The high bunch intensity of LHC requires a strong damping of HOMs. Dedicated HOM filters are being developed for the LHC crab cavities to mitigate the appearance of instabilities. The second stage comprised the design finalization, construction and cryogenic tests of one Proof-of-Principle (PoP) cavity for each of the three designs. All of them went through successful cryogenic tests reaching the nominal deflecting voltage of 3.4 MV, and exceeding it in the case of the RF dipole and the DQW cavities [11,13]. Larger deflecting voltages than the nominal might be required to implement the crab kissing technique. Fig. 5 shows the three PoP cavities. The third stage started in 2014 and is devoted to the design and manufacturing of prototype cavities and cryomodules for a validation test with beam in SPS by 2016-2017. It will be the first time that crab cavities are exposed to hadron beams. In this stage one cryomodule with two DQW cavities and another one with two RF dipole cavities will be tested. The test has the main scope of validating the cavity operation with beam: deflecting voltage, cryogenic performances, tuning system, HOM damping and impedances, effective bunch crabbing, performance limits, emittance growth and non-linearities of the cavity field.
LLRF controls are crucial for successful crab crossing. The phase synchronization between the crab cavities of the two colliding beams is important to avoid introducing any transverse offset and thus guarantee full head-on collision. Phase synchronization between the cavities at both sides of the IP is also important to crab and fully uncrab the bunches, in order to avoid the propagation of instabilities in the accelerator. The cavities at one side of the IP will be distributed along a non-uniform beta region, which may have implications for control and operation of the cavities. The control system will also need to deal with quench detection and response. Cavities must be able to become transparent to the LHC beam during injection, ramp and squeeze. The SPS tests will then need to validate the RF control systems for the crab-uncrab operation, quench detection and response. Machine protection mechanisms will also be evaluated. Important information can be extracted to prepare for different cavity failure scenarios and define instrumentation and interlocks for operation in LHC. Cavity transparency during injection, ramp and squeeze must also be proved.
The fourth stage extends from 2017 until 2023 and envisages the preparation of cavities and cryomodules for LHC, installation and commissioning.
The baseline layout foresees two-cavity cryomodules to ease maintenance and reduce complications during operation. So a minimum of 32 cavities and their corresponding 16 cryomodules should be prepared. An alternative layout would consist of an eight-cavity cryomodule per side per IP for reduced warm-to-cold transitions.
Future linear colliders: CLIC and ILC
The two proposed future linear colliders, CLIC and ILC, will require crab cavities at the end of the linacs that reestablish head-on collisions for maximal peak luminosity. The crab cavities for both machines share common challenges related to synchronization and vertical wakefield kicks.
CLIC beams will collide in a single collision point with a 20 mrad crossing angle. Crab crossing will increase the luminosity to 95% of the head-on case.
The CLIC crab cavities will be located just before the Final Doublet (FD), at a distance of 90° phase advance from the IP. The cavities are normal-conducting multi-cell traveling-wave structures operating at 11.994 GHz. The deflecting voltage required at this frequency is 2.55 MV. Control of voltage phase and amplitude will have a major impact on luminosity: the phase must be controlled to within 0.02 degrees (4.6 fs for 11.994 GHz cavities) for a luminosity loss of 2%. Higher-order modes must be effectively damped to reduce the impact of vertical wakefield kicks [14].
A CLIC crab cavity prototype is currently being prepared for high-gradient testing in the XBox2 facility at CERN.
ILC will have a single collision point with a 14 mrad crossing angle in the horizontal plane [15]. Two 3.9 GHz superconducting 9-cell cavities will be located 13.4 m from the IP. The cavities will deliver a 5 MV/m deflecting kick, enough to reestablish head-on collisions for a 500 GeV beam. Phase jitter between the cavities for the positron and electron beams must be tightly controlled to ensure maximal bunch overlap. A feasibility test of a 7-cell 1.5 GHz cavity conducted at the JLab ERL showed that it is possible to maintain the phase jitter within 37 fs. Strong damping of higher- and lower-order modes, as well as of the vertically polarized same-order mode, is required to limit vertical deflections at the IP.
eRHIC
The future electron-Relativistic Heavy Ion Collider (eRHIC) will have a crossing angle of 10 mrad in the horizontal plane. Crab cavities will be needed to reach a luminosity of 10³³ cm⁻² s⁻¹ for collisions of a 21.2 GeV electron beam with a 250 GeV polarized proton beam [16]. The collider will have 2 IPs where crab crossing will be performed following the local crossing scheme.
Based on the DQW design, one crab cavity of about 676 MHz will be enough to provide the 1.9 MV deflecting voltage required to tilt the bunches of the 21.2 GeV electron beam. The rms bunch length is 50 mm for the 250 GeV proton beam [16], so several harmonic cavities will be needed to correct the non-linear kick.
One 676 MHz cavity will provide a deflecting voltage of 0.76 MV, two 450 MHz cavities will provide 2.79 MV each and four 225 MHz cavities will provide 6.19 MV each.
Overview
Crab cavities open the possibility of increasing luminosity in colliders with a finite crossing angle, and they also offer mechanisms for luminosity leveling and pile-up density reduction.
The phase control of crab cavities and the appropriate damping of modes other than the deflecting one are the most important technical issues for guaranteeing successful crab crossing. The LHC crab cavities pose additional challenges for fabrication and cleaning due to their complex geometries, which may ultimately impact cavity performance.
The development of compact crab cavities for LHC has given birth to a variety of cavities that might be of interest for other applications. Crab cavities can be used as deflecting cavities when operated at a different phase. In this context, an RF dipole cavity, similar to the RF dipole cavity for HL-LHC, has recently been proposed as an alternative to the kicker currently under construction for the beam switching system of LCLS-II [17,18].
| 3,371.2 | 2016-04-01T00:00:00.000 | ["Physics"] |
A new lapsiine jumping spider from North America, with a review of Simon’s Lapsias species (Araneae, Salticidae, Spartaeinae)
Abstract A new spider genus and species from México and Guatemala, Amilaps mayana gen. et sp. nov., is described, distinct from other members of the jumping spider tribe Lapsiini (subfamily Spartaeinae) by its four retromarginal cheliceral teeth and the large sclerite cradling the embolus. It is the first living lapsiine known outside of South America. This tribe has received attention recently for new species and genera in Ecuador and Brazil, but Simon's original four species of Lapsias, described from Venezuela in 1900 and 1901, remain relatively poorly known. Accordingly, new illustrations of Simon's type material are given, and a lectotype is designated for L. cyrboides Simon, 1900. The three forms of females in Simon's material from Colonia Tovar, Aragua, are reviewed and illustrated, and they are tentatively matched with the three male lectotypes of his species from the same location.
Introduction
For more than 100 years after Eugene Simon's (1900) description of the jumping spider genus Lapsias Simon, 1900, the only known species were the four he described from Venezuela (Simon 1900, 1901). Indeed, these were the only species described of the broader group now recognized as the Lapsiini, one of only two salticid groups in the New World that fall outside the major subfamily Salticinae (the other being the Lyssomaninae). Considerably more lapsiine diversity has been revealed since 2006 through work by Maddison (2006, 2012), Makhan (2007), Ruiz and Maddison (2012), and Ruiz (2013), giving us now five described genera containing 21 species (WSC 2019). All of the living lapsiine species known to date are from South America, but recently García-Villafuerte (2018) described a fossil of Galianora Maddison, 2006 from Miocene amber in Chiapas, México.
Here I report the north-westernmost known living lapsiine, Amilaps mayana sp. nov., from southern México and Guatemala. In addition, new illustrations of Simon's four species of Lapsias from Venezuela are provided to supplement Galiano's (1963) redescriptions, and the matching of males and females is reconsidered.
Materials and methods
The preserved specimens were examined under both dissecting microscopes and a compound microscope with reflected light. Photographs were taken under an Olympus SZ61 stereo microscope (bodies) and a Nikon ME600L compound microscope (palpi) and focus stacked using Helicon Focus 4.2.7. Drawings were made with a drawing tube on an Olympus BH-2 compound microscope (Amilaps mayana sp. nov.) and a Nikon ME600L compound microscope (Simon's species).
Terminology is standard for Araneae. Measurements are given in millimetres. Carapace length was measured from the base of the anterior median eyes not including the lenses to the rear margin of the carapace medially; abdomen length to the end of the anal tubercle.
Etymology. An arbitrary combination of letters, composed to contain a reference to the Mayan word for spider ("äm", Christensen 1987) and to Lapsias, to be treated grammatically as feminine.
Diagnosis. Differs from all described lapsiines in having a large sclerite (p in Fig. 3) cradling the tip of the embolus, and in having four retromarginal teeth on the chelicerae (two in all others; see Ruiz and Maddison 2012, Maddison 2012, Ruiz 2013). Differs from Lapsias, Soesiladeepakius, and Thrandina in lacking a prolateral pre-embolic spermophore loop (see Ruiz and Maddison 2012), although the loop may be present on the retrolateral side (see below under "Relationships"). Unlike most Lapsias species, Amilaps has the PME displaced medially, as far as the medial edge of the ALE.
Relationships. The four retromarginal cheliceral teeth suggest that Amilaps is outside a clade including all previously described lapsiine genera, which share the synapomorphy of a reduction to two teeth (Ruiz and Maddison 2012) from the plurident condition in other Spartaeinae. There are no clear characters linking Amilaps to any particular lapsiines: it lacks the highly reduced RTA of Lapsias, the round tegulum of Galianora, the large PME and robust median apophysis of Thrandina, and the many peculiarities of Soesiladeepakius and Lapsamita. The spermophore of Amilaps appears to lack the pre-embolic loop approaching the median apophysis, widespread in lapsiines (e.g., Figs 14, 21, 30, 41; Maddison 2012: figs 7, 11, 12; see Ruiz and Maddison 2012: character 17). In Amilaps mayana the spermophore does in fact closely approach the median apophysis (MA), but on the retrolateral side of the bulb. In ventral view, it passes just retrolateral to the MA, but in retrolateral view it can be seen to be curved, reaching its ventralmost point just proximal to the MA. If this is the same pre-embolic loop but displaced retrolaterally, it hints at the possibility that the base of the embolus of A. mayana may be unusually large, occupying a large proportion of the prolateral side of the bulb.
If Amilaps is outside the clade of previous lapsiines, then an open question is whether it belongs with them at all. The tribe Lapsiini has no known morphological synapomorphies (Maddison 2015) other than the reduction in cheliceral teeth (Ruiz and Maddison 2012). Our understanding of morphology gives little reason to expect that the salticids in the Americas left over once salticines and lyssomanines are removed would form a clade, but the molecular data suggest this, at least among those species studied (Maddison et al. 2014). Amilaps is exactly that: a generalized salticid that is not a salticine or lyssomanine. Were it to have been found in New Guinea, Amilaps would fit equally happily among the cocalodines according to our current knowledge. Thus, its current placement among the lapsiines is tentative.

Etymology. Refers to the distribution of this species in the lands of the Maya.

Description. Male (holotype). Carapace length 2.0; abdomen length 1.7. Carapace (Figs 6, 7) with long fovea; anterior eye row approx. as wide as carapace, and wider than posterior row. PME small, displaced medially to lie behind medial edge of ALE. Ocular area medium brown under alcohol and darker around eyes, dusted with dull brown and tan scales that are oriented concentrically around the unusually large PLE. Thoracic area brown, with paler medial longitudinal band, and paler spots just above each of the leg coxae. Clypeus (Fig. 5) narrow and with a few scattered whitish hairs and scales. Chelicerae vertical and relatively small. Four small but distinct teeth on retromargin of chelicerae (Fig. 9); promargin not observed (on the specimen from Guatemala, three promarginal teeth). Palp (Figs 1-4) with embolus arising on prolateral side, narrowing abruptly, then bending directly to the retrolateral, where it meets a large sclerotized projection (p in Fig. 3) that envelops it so completely that the terminal third of the embolus is most easily seen as a dark line within the projection; the tip of the embolus rests within the tip of the projection. The projection consists of a plate at the distal edge of the bulb, which then narrows before swelling and curving to a point that projects ventrally. (Regarding its homology to the conductor in Lapsias, see comments below.) Median apophysis distinct (separated from the tegulum by a membrane) but relatively small, almost hidden by the sclerotized projection. Cymbium with proximal prolateral conical projection. Retrolateral tibial apophysis a short flange (Fig. 4) whose ventral edge extends proximally and forms a round pocket facing the retrolateral side. Patella with two retrolateral apophyses, the larger one being hooked. Legs (Figs 10, 11) pale honey-coloured, darkening to nearly black on distal half of femora, and with broad darker annuli on tibiae and metatarsi. First tibia macrosetae as follows: three pairs of ventral, two to three anterior lateral, two posterior lateral, and one dorsal. First metatarsus macrosetae as follows: two ventral pairs, two anterior lateral, and two posterior lateral. Fourth legs distinctly longest; leg formula 4132. Abdomen (Figs …).

Natural history. My field notes for the holotype indicate it was found on a limestone rock face, and the back of the vial's label says "on limestone cliff face on forested slope". Both the holotype from México and the male from Guatemala (according to its locality) were associated with caves. The holotype was not in the cave, but on a cliff near the cave.
Although Galiano (1963) redescribed Simon's original four species, her illustrations are limited in number and detail. Thus, I give new figures of Simon's original four species, including the first published figures of their bodies and more detailed illustrations of their genitalia. Among Simon's specimens are three forms of female, only one of which (under L. cyrboides Simon, 1900) was described by Simon and Galiano. As these females are all from the same site (Colonia Tovar) from which the males of three Lapsias species were described, we are faced with a puzzle as to which females match which males. This is considered below under the notes for each species.
All four of Simon's species have two retromarginal teeth on the chelicera, and three pairs of ventral macrosetae on both the tibia and metatarsus of leg 1. The median apophysis of the palp is a long narrow blade, hooked at the tip and separated from the tegulum by a membrane. There is a small apophysis just retrolateral from the base of the embolus in L. estebanensis, L. tovarensis Simon, 1901, and possibly L. ciliatus Simon, 1900 (see c? in Figs 14, 43) that by position is likely homologous to that called the conductor by Maddison (2012) in L. canandea Maddison, 2012 and in Thrandina species (see discussion by Ruiz 2013). The sclerite functioning as a conductor in Amilaps mayana (p in Fig. 3) is likely not homologous, given its more distal position outside the loop of the spermophore. The female spermathecae of all three species (Figs 28, 38, 49) are thick-walled and bear a pale rough-edged extension to the anterior (most easily seen in Fig. 28; partially hidden behind the fertilization ducts in Figs 38 and 49).

Lapsias estebanensis Simon, 1900

Figs 12-18

Type material. In MNHN, 2 males from La Cumbre, San Esteban, Carabobo State, Venezuela, with label "21196 Laps. estebanensis E.S., S. Esteban! La Cumbre!". Galiano (1963) designated one male as lectotype, which I presume to be that in a separate microvial with her label "Typus? M.E. Galiano II 1959". The type vial also has a recent label "det Szűts 0015". Because the old handwritten label was fragile and fragmenting, I made a copy, which I added to the type vial.
Notes. This is the most robust of the four Venezuelan species, with males having enlarged chelicerae (Figs 15-17). The retromarginal tooth closest to the fang is larger and curved (Fig. 18). The palp bears a close resemblance to that of L. tovarensis, but differs in the shorter, straighter embolus and distinctly larger apophysis (c? in Fig. 14) accompanying the embolus.

Lapsias ciliatus Simon, 1900

Figs 19-29

Type material. In MNHN Paris, 25 males from Colonia Tovar, Aragua State, Venezuela, most in a single vial with label "21083 Laps. ciliatus E.S., Tovar!" and more recent label "det Szűts 0012". When I received the specimens from the MNHN, one male matching this species was in a separate vial without label except one in Galiano's handwriting reading "Typus? M.E. Galiano II 1959" and another "det Szűts 0013". Insofar as Galiano (1963) indicated she designated a lectotype from the type vial, this specimen can be safely considered that specimen. I have therefore made a copy of the label "21083 Laps. ciliatus E.S., Tovar!" and placed it in that male lectotype's vial. The same applies to a female separated and with only Galiano's label "Allotypus ♀ det. M.E. Galiano II 1959". The vial with most specimens also includes 7 females, which cannot be considered type material because Simon's description makes no mention of females.
Notes. The female is illustrated for the first time in Figs 27-29. The epigynal openings are beneath a common central hood. Although Galiano separated off a female and labelled it as allotype, neither she nor Simon gave any acknowledgement or description of a female of L. ciliatus. The matching of these females to males of L. ciliatus is nonetheless reasonably secure, even though three species of Lapsias occur at Colonia Tovar. The females of the form shown in Figs 27-29 and the males matching the lectotype appear to have been abundant together, judging by the numbers of specimens. Both are larger and more robust, with wider carapaces, than the other two smaller, more delicate species from Colonia Tovar (L. cyrboides and L. tovarensis). Both male and female show a faint pale spot just posterior to the PLE.

Lapsias cyrboides Simon, 1900

Figs 30-40

Type material. In MNHN, 3 males, 4 females, 3 juveniles from Colonia Tovar, Aragua State, Venezuela, in a vial with label "20924 Laps. cyrboides E.S., Tovar!" and a more recent label "det Szűts 0014". Galiano (1963) designated one male as a lectotype, in a separate microvial with her label "Typus? M.E. Galiano II 1959". She mentions one female designated also as lectotype, but no female is separated and labelled as such. Because Galiano (incorrectly) designated two lectotypes, the name is not yet fixed to a single specimen. This ambiguity is resolved here by designating her male lectotype as the only lectotype.
Notes. Simon (1900) described a male and female. However, as noted by Galiano (1963), there are two species of female among the four females in the type vial, similar in body but easily distinguished by the epigyne. Two of the females (Figs 37-40) have an anteriorly placed guide (Fig. 37), while the other two females (Figs 48-51) lack such a guide and instead show two wing-shaped atria extending laterally (Fig. 48). It is reasonable to assume that these two kinds of female belong to the two smaller-bodied Lapsias at Colonia Tovar, L. cyrboides and L. tovarensis. Under L. cyrboides Simon described the female kind with anterior guide ("Plaga genitalis...longior quam latior"), but he did not justify this choice nor even mention the second form of female. Galiano followed Simon's choice of matching female. The two forms of female are approximately the same size and carapace shape and are too faded to supply distinctive markings by which to match to the males. Nonetheless, I tentatively support Simon's and Galiano's matching, based on an expected correlation between the form of the female's guide and that of the male's tibial apophysis: a simple or small guide along the epigastric furrow is common in salticids, while the male of L. cyrboides has an unusual dorsally projecting tibial apophysis, which predicts an unusually-placed female guide. Thus, the female with anterior guide is tentatively considered that of L. cyrboides, and the female with wing-shaped atria is considered that of L. tovarensis.

Lapsias tovarensis Simon, 1901

Figs 41-51

Type material. In MNHN, three males from Colonia Tovar, Aragua State, Venezuela, with label "21092 Laps. tovarensis E.S., Tovar!". Galiano (1963) designated one male as lectotype, in a separate microvial with her label "Typus? M.E. Galiano II 1959".

Notes. This is one of the two smaller-bodied species from Colonia Tovar. See the discussion under L. estebanensis for how to distinguish it from that species, and the discussion under L. cyrboides regarding the identity of the female.
"Biology"
] |
Increased Resting Intracellular Calcium Modulates NF-κB-dependent Inducible Nitric-oxide Synthase Gene Expression in Dystrophic mdx Skeletal Myotubes*
Background: The mechanisms by which NF-κB signaling is up-regulated in dystrophic muscles are unclear. Results: [Ca2+]rest is elevated in mdx myotubes as a result of both sarcolemmal Ca2+ entry and SR release, resulting in NF-κB-induced iNOS expression. Conclusion: Ca2+ alterations at rest modulate NF-κB transcriptional activity and pro-inflammatory gene expression. Significance: This allows for understanding the mechanism that relates elevated resting calcium and altered gene expression in muscular dystrophy. Duchenne muscular dystrophy (DMD) is a genetic disorder caused by dystrophin mutations, characterized by chronic inflammation and severe muscle wasting. Dystrophic muscles exhibit activated immune cell infiltrates, up-regulated inflammatory gene expression, and increased NF-κB activity, but the contribution of the skeletal muscle cell to this process has been unclear. The aim of this work was to study the pathways that contribute to the increased resting calcium ([Ca2+]rest) observed in mdx myotubes and its possible link with up-regulation of NF-κB and pro-inflammatory gene expression in dystrophic muscle cells. [Ca2+]rest was higher in mdx than in WT myotubes (308 ± 6 versus 113 ± 2 nM, p < 0.001). In mdx myotubes, both the inhibition of Ca2+ entry (low Ca2+ solution, Ca2+-free solution, and Gd3+) and blockade of either ryanodine receptors or inositol 1,4,5-trisphosphate receptors reduced [Ca2+]rest. Basal activity of NF-κB was significantly up-regulated in mdx versus WT myotubes. There was an increased transcriptional activity and p65 nuclear localization, which could be reversed when [Ca2+]rest was reduced. Levels of mRNA for TNFα, IL-1β, and IL-6 were similar in WT and mdx myotubes, whereas inducible nitric-oxide synthase (iNOS) expression was increased 5-fold. Reducing [Ca2+]rest using different strategies reduced iNOS gene expression, presumably as a result of decreased activation of NF-κB. We propose that NF-κB, modulated by increased [Ca2+]rest, is constitutively active in mdx myotubes, and this mechanism can account for iNOS overexpression and the increase in reactive nitrogen species that promote damage in dystrophic skeletal muscle cells.
Duchenne muscular dystrophy (DMD) is a lethal human X-linked genetic disorder caused by mutations in the dystrophin gene (1). DMD is a progressive muscle-wasting disease characterized by loss of the ability to walk between 6 and 12 years of age, and by death caused by respiratory failure and cardiac dysfunction in the patients' twenties (2). Like humans with DMD, mdx mice lack dystrophin due to an X-linked mutation, providing an accepted model to study the human disease (1). In normal skeletal muscle, dystrophin is associated with a complex of glycoproteins known as the dystrophin-glycoprotein complex, providing a linkage between the extracellular matrix and the cytoskeleton (3). Dystrophin has an important role in stabilizing the sarcolemma, so in muscle fibers that lack this protein, membrane damage is recurrent (4, 5). However, although membrane fragility is an important factor, it does not fully explain the onset and progression of DMD.
The microenvironment of dystrophic muscle consists of activated immune cell infiltrates and up-regulated inflammatory gene expression (6). Nuclear factor-κB (NF-κB) is a family of transcription factors that play critical roles in inflammation, immunity, cell proliferation, differentiation, and survival (7). The NF-κB transcription factor family in mammals consists of five proteins, p65 (RelA), RelB, c-Rel, p105/p50 (NF-κB1), and p100/p52 (NF-κB2), that form homo- and heterodimeric complexes (7). NF-κB has been implicated in mdx pathology, because blockade of this pathway through pharmacological or genetic approaches improves muscle histology, reduces pro-inflammatory gene expression, and ameliorates damage (8-12). NF-κB activity is increased in muscles from mdx mice and DMD patients (10, 13-15). The p65/p50 heterodimer is the predominant form of NF-κB in most cells and controls the expression of a wide array of genes critical in the immune response and inflammation (16). IκBα retains the p65/p50 heterodimer in the cytoplasm. Upon stimulation, IκBα is quickly phosphorylated by the IKK complex, ubiquitinated, and degraded, thus allowing translocation of the NF-κB complex to the nucleus (7).

IKKα/β or p65 gene ablation in transgenic animals or by adeno-associated virus improves pathology in mdx mouse muscles (8-12). Acharyya et al. (10) have shown that NF-κB activity can be seen in both muscle and immune cells and that mdx muscle pathology was improved in mdx/p65+/− but not mdx/p50+/− mice.
iNOS or NOS2, originally discovered in cytokine-induced macrophages, is a largely calcium-independent NOS, which is expressed at highest levels in immunologically activated cells and is normally absent in resting cells (21). iNOS expression is increased in muscles from mdx mice and can be reversed by curcumin (9,22,23). High levels of nitric oxide (NO) production lead to the formation of peroxynitrite, a highly reactive species contributing to muscle oxidative damage (21,24). In addition, iNOS expression has been associated with S-nitrosylation of type 1 ryanodine receptor (RyR1), calcium dysregulation, and muscle pathology in mdx mice (22).
Although there are many examples in the literature indicating that the resting intracellular free Ca2+ concentration ([Ca2+]rest) is higher in skeletal muscle cells from mdx mice and DMD patients compared with normal cells (25-30), other authors have not found this (31, 32). The mechanism that has been proposed to cause this elevation in [Ca2+]rest assumes recurrent membrane damage due to the failure of dystrophin to stabilize the sarcolemma (5, 33), allowing Ca2+ leak into the cell through the damaged membrane. An alternative explanation is increased Ca2+ entry through transient receptor potential channel 1 (TRPC1) and hyperactive store-operated calcium entry (SOCE) in mdx muscle fibers (25, 34-38).

Several studies have shown that NF-κB activity can be modulated by intracellular Ca2+ levels (39-42). In skeletal muscle cells, depolarization with high K+ or electrical stimulation activates NF-κB through Ca2+ signals elicited by the ryanodine (RyR) and inositol 1,4,5-trisphosphate (IP3R) receptors (40).

In dystrophic muscle cells, increased [Ca2+]rest has mainly been thought to cause necrosis through calpain activation and the mitochondrial permeability transition pore (29, 43, 44). Here, we have revisited the issue of elevated [Ca2+]rest in dystrophic mdx skeletal muscle cells, showing that it is a complex process that involves sarcolemmal Ca2+ entry as well as SR Ca2+ leak through both RyRs and IP3Rs. In addition, we demonstrate that the level of [Ca2+]rest modulates the transcription factor NF-κB activity and iNOS expression in mdx myotubes.
MATERIALS AND METHODS
Cell Culture-All procedures for animal use were in accordance with guidelines approved by the Bioethical Committee at the Facultad de Medicina, Universidad de Chile. Primary myotubes from wild type C57BL/6 and mdx mice were isolated according to the method of Rando and Blau (45). The myoblasts were grown and differentiated as described previously (46).
Determination of [Ca2+]rest by Ca2+-selective Microelectrodes-Double-barreled Ca2+-selective microelectrodes were prepared and calibrated as described previously (47). Only those electrodes with a linear relationship between pCa 3 and pCa 8 (Nernstian response, 28.5 mV per pCa unit at 24 °C) were used experimentally. To better mimic intracellular ionic conditions, all calibration solutions were supplemented with 1 mM Mg2+. All electrodes were re-calibrated after making measurements of [Ca2+]rest, and if the two calibration curves did not agree within 3 mV from pCa 7 to pCa 8, the data from that microelectrode were discarded. Myotubes were impaled with the double-barreled microelectrode, and potentials were recorded via a high impedance amplifier (WPI Duo-773). The potential from the 3 M KCl barrel (Vm) was subtracted electronically from the Ca2+ electrode potential to produce a differential Ca2+-specific potential (VCa) that represents [Ca2+]rest. Vm and VCa were filtered to improve the signal-to-noise ratio and stored in a computer for further analysis. The experiments were performed in Krebs-Ringer solution (in mM: 125 NaCl, 5 KCl, 2 CaCl2, 1.2 MgSO4, 6 glucose, and 25 Hepes/Tris, pH 7.4). The low Ca2+ solution was prepared by replacing the CaCl2 with MgCl2 (≈7 μM Ca2+). Ca2+-free solution was prepared by omitting Ca2+ and adding Mg2+ (2 mM) and EGTA (2 mM). We avoided measurements of [Ca2+]rest after long incubations in both solutions (more than 5 min) because, despite the solution being supplemented with 2 mM Mg2+, all myotubes began to show a significant depolarization (>8 mV) after this interval.
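As an illustration of the calibration arithmetic behind these measurements, the sketch below fits a Nernstian calibration line and converts a differential potential VCa into a free Ca2+ concentration. All calibration numbers are invented for the example; only the ~28.5 mV-per-pCa acceptance criterion comes from the text:

```python
import numpy as np

# Sketch of a Ca2+-selective microelectrode calibration.  The data
# points are hypothetical; the potential rises ~28.5 mV per decade of
# [Ca2+] (i.e. per unit drop in pCa) for a Nernstian electrode at 24 C.
cal_pCa = np.array([3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
cal_mV = np.array([85.5, 57.0, 28.5, 0.0, -28.5, -57.0])  # invented

slope, intercept = np.polyfit(-cal_pCa, cal_mV, 1)
assert 25.5 <= slope <= 31.5, "electrode would be rejected"

def ca_from_potential(v_ca_mV):
    """Convert the differential potential V_Ca into [Ca2+] in nM."""
    pCa = -(v_ca_mV - intercept) / slope
    return 10.0 ** (-pCa) * 1e9

print(f"[Ca2+]rest for V_Ca = -20 mV: {ca_from_potential(-20.0):.0f} nM")
# -> ~200 nM for this hypothetical calibration
```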
In every experiment, we determined [Ca2+]rest under control conditions in both WT and mdx myotubes, and the data are expressed as the total average basal [Ca2+]rest for WT and mdx myotubes (Fig. 1).
Sarcoplasmic Reticulum Ca2+ Content-To estimate the total amount of Ca2+ stored in the intracellular compartments, primarily the SR stores, myotubes were loaded with 5 μM Fluo-4 AM or Fluo-5N AM for 30 min at 37 °C. Cells were placed on the stage of an inverted microscope equipped with epifluorescence illumination (XCite Series 120 or Lambda DG4) and a cooled CCD camera (Retiga 2000R or Stanford Photonics 12 bit digital). The cell-containing coverslips or μ-clear 96-well plates (Greiner Bio-One) were placed in the microscope for fluorescence measurements after excitation with a 488-nm wavelength filter system (Lambda 10-2 or DG4, Sutter Instruments). The emission signal was acquired at a frequency of 10 frames/s. The amount of SR Ca2+ was estimated by taking the area under the curve of the signal induced by 5 μM ionomycin in Ca2+-free solution to minimize Ca2+ entry. Fluorescence data (F) were analyzed by normalizing with respect to basal fluorescence (F0) and expressed as (F − F0)/F0.
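A minimal sketch of the area-under-the-curve estimate just described, run on a synthetic trace (the trace, sampling rate, and baseline window are placeholders, not the paper's data):

```python
import numpy as np

# Normalize a Fluo-4 trace to its baseline and integrate the
# ionomycin-induced transient as a proxy for SR Ca2+ content.
fs = 10.0                                   # frames per second (as above)
t = np.arange(0.0, 60.0, 1.0 / fs)
trace = 100 + 80 * np.exp(-((t - 20) / 8) ** 2) * (t > 15)  # fake signal

f0 = trace[: int(5 * fs)].mean()            # basal fluorescence F0
dff = (trace - f0) / f0                     # (F - F0)/F0
auc = np.sum((dff[1:] + dff[:-1]) / 2 * np.diff(t))  # trapezoid rule
print(f"area under the curve: {auc:.1f}")
```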
Resting Ca2+ Entry-Myotubes were loaded with Fura-2 AM (5 μM) for 30 min at 37 °C, and the cells were perfused with low Ca2+ solution for 1 min; then the perfusion system was switched to a Mn2+-containing solution (in mM: 125 NaCl, 5 KCl, 0.5 MnCl2, 2.7 MgSO4, 6 glucose, and 25 Hepes/Tris, pH 7.4) for 1 min. During the quench, the perfusion system was switched to the Mn2+-containing solution with gadolinium trichloride (Gd3+, 20 μM) for an additional 1 min, to study the effect of the latter on Ca2+ entry. To calculate the fluorescence quench rate, the stable part of the signal was fitted to a linear regression, and the derived slope was expressed as arbitrary fluorescence units/s. Mn2+ quench of Fura-2 was monitored using a 357/7-nm excitation and 510/80-nm emission filter.
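The quench-rate determination reduces to a linear fit over the stable part of the trace; a sketch with synthetic data (the trace and fitting window are placeholders):

```python
import numpy as np

# Fit the stable part of the Fura-2 (357 nm) signal during Mn2+
# perfusion to a line; the slope, in arbitrary fluorescence units/s,
# reports resting Ca2+ entry.
fs = 10.0
t = np.arange(0.0, 60.0, 1.0 / fs)
rng = np.random.default_rng(1)
fluo = 1000 - 4.2 * t + rng.normal(0, 3, t.size)   # synthetic quench

stable = (t >= 10) & (t <= 50)            # window chosen by eye in practice
slope, _ = np.polyfit(t[stable], fluo[stable], 1)
print(f"quench rate: {slope:.2f} a.u./s") # more negative = more Ca2+ entry
```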
Immunofluorescence and Confocal Microscopy-For immunofluorescence localization of the NF-κB p65 subunit, differentiated myotubes were fixed in 4% paraformaldehyde for 10 min at RT. Cells were rinsed with PBS, then blocked with PBS, 1% BSA for 1 h at room temperature, and incubated overnight with p65 antibody at a 1:200 dilution at 4 °C. Cells were washed and then incubated for 1 h with Alexa Fluor 488 anti-rabbit antibody (Invitrogen). Hoechst was used for nuclear visualization. Immunofluorescence was observed in a confocal microscope (Carl Zeiss, Axiovert 200, LSM 5-Pascal), and images were deconvolved using the Iterative Deconvolution plugin of ImageJ.

To determine the nuclear localization of the NF-κB p65 subunit, the fluorescence intensities of nuclear and cytosolic regions of interest were calculated for at least 10 different myotubes in three different experiments and averaged to calculate the ratio of nuclear over cytoplasmic intensity using ImageJ. z-stack images were reconstructed using the Interactive 3D Surface Plot ImageJ plugin (rsb.info.nih.gov), which translates the luminance of an image into height for the plot. DAF-FM diacetate (Molecular Probes) fluorescence was detected according to the manufacturer's instructions by confocal microscopy with excitation by a 488 nm argon laser.
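The nucleus/cytosol quantification amounts to a ratio of mean ROI intensities; a schematic version, with random data standing in for the confocal image and the ImageJ ROIs:

```python
import numpy as np

# Mean pixel intensity inside a nuclear ROI divided by that of a
# cytosolic ROI.  Image and masks are hypothetical placeholders.
rng = np.random.default_rng(0)
image = rng.uniform(50, 200, size=(512, 512))

nuc_mask = np.zeros((512, 512), dtype=bool)
nuc_mask[200:260, 200:260] = True            # hypothetical nuclear ROI
cyt_mask = np.zeros((512, 512), dtype=bool)
cyt_mask[100:160, 100:160] = True            # hypothetical cytosolic ROI

ratio = image[nuc_mask].mean() / image[cyt_mask].mean()
print(f"nuclear/cytosolic p65 intensity ratio: {ratio:.2f}")
```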
siRNA Transfection-NF-κB p65 and scrambled siRNA were purchased from Santa Cruz Biotechnology. The NF-κB p65 siRNA is a pool of four target-specific double-stranded siRNAs. Myoblasts at 50-70% confluence were transfected with the siRNAs (50 nM) using DharmaFECT Duo (Dharmacon) for 3 h at 37 °C and 5% CO2 in 35-mm culture plates in Opti-MEM (Invitrogen). Following transfection, myoblasts were differentiated for 48 h and lysed for protein detection by Western blot.

NF-κB Luciferase Reporter Activity Determinations-A plasmid containing five tandem repeats of NF-κB-binding sites cloned upstream of a luciferase reporter gene (pNF-κB-Luc) was obtained from Agilent Technologies and subcloned into a lentiviral vector with neomycin resistance, and lentiviral particles were produced by transient transfection of HEK 293T cells as described (48). Supernatants were collected, and myoblast cultures were transduced immediately after isolation at a multiplicity of infection of 1:500 in the presence of 6 μg/ml protamine sulfate for 3 h. Cells were allowed to recover for 48 h and then selected with neomycin (400 μg/ml) for 9 days. After infection and selection, myoblasts were completely normal and differentiated into myotubes similarly to uninfected cells. To minimize clonal variations, we pooled together more than 100 G418-resistant clones from each transduction to perform the experiments. Luciferase activity was determined using a Dual-Luciferase reporter assay system (Promega) according to the manufacturer's instructions, and light detection was carried out in a Berthold F12 luminometer. Results were normalized to total protein and expressed as luciferase activity per mg of protein. We studied the response to lipopolysaccharide (LPS), a strong activator of NF-κB, as a control (data not shown).
Real Time PCR-Total RNA from myotube cultures was obtained using TRIzol reagent (Invitrogen) according to the manufacturer's protocol. cDNA was prepared by reverse transcription of 1 μg of total RNA using SuperScript II (Invitrogen). Real time PCR was performed using a Stratagene Mx3000P as follows. Primers were used at a final concentration of 400 nM. Briefly, 1-4 μl of the cDNA reaction together with the appropriate primers was added to 10 μl of Brilliant III UltraFast SYBR Green QPCR master mix (Agilent Technologies) to a total volume of 20 μl. No-template control reactions were also prepared for each gene. The cycling parameters for all genes were as follows: 95 °C for 3 min, then 50 cycles of 95 °C for 20 s and 60 °C for 20 s. Expression values were normalized to GAPDH and are reported in units of 2^−ΔΔCt ± S.E. (49). PCR products were verified by melting-curve analysis, resolved by electrophoresis on a 2% agarose gel, and stained with ethidium bromide.
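For readers unfamiliar with the 2^−ΔΔCt notation, a minimal sketch of the calculation (the Ct values are invented for illustration):

```python
# Livak 2^-ddCt relative-expression calculation, as used above.
def fold_change(ct_gene, ct_gapdh, ct_gene_ref, ct_gapdh_ref):
    """Expression relative to the reference (WT) sample, normalized to GAPDH."""
    d_ct = ct_gene - ct_gapdh              # normalize to housekeeping gene
    d_ct_ref = ct_gene_ref - ct_gapdh_ref
    return 2.0 ** -(d_ct - d_ct_ref)

# e.g. a hypothetical iNOS measurement in mdx vs WT:
print(fold_change(24.1, 18.0, 26.6, 18.2))   # -> ~4.9-fold increase
```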
The TNF-α, IL-1β, IL-6, iNOS, and GAPDH mRNA transcripts were quantified using oligonucleotide primers designed based on sequences published in NCBI GenBank with the open-source PerlPrimer software (50). The forward and reverse primer sequences used in this study are shown in Table 1.

Statistics-All values are expressed as mean ± S.E. from at least three different determinations. Results of luciferase activity, p65 immunofluorescence, DAF-FM fluorescence, and Western blot were transformed with the WT basal average (y = y′/(WT basal average)) to normalize to 1 with S.E. Statistical analysis was performed using an unpaired two-tailed t test or analysis of variance with Bonferroni correction to determine significance (p < 0.05).
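A sketch of the normalization and test just described, with invented values (WT basal values are scaled to a mean of 1, then groups are compared with an unpaired two-tailed t test):

```python
import numpy as np
from scipy import stats

# Scale every measurement by the WT basal mean, then compare groups.
wt = np.array([0.95, 1.10, 1.02, 0.93, 1.00])   # invented values
mdx = np.array([2.3, 2.8, 2.1, 2.6, 2.7])

wt_n, mdx_n = wt / wt.mean(), mdx / wt.mean()   # WT normalizes to 1
t_stat, p = stats.ttest_ind(wt_n, mdx_n)        # unpaired, two-tailed
print(f"mdx/WT = {mdx_n.mean():.2f}, p = {p:.2g}")
```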
RESULTS

Blockade of Ca2+ Entry in mdx Myotubes Reduced but Did Not Normalize [Ca2+]rest-Several studies have suggested that [Ca2+]rest is increased in mdx skeletal muscle cells due to an increased Ca2+ entry from the extracellular space through TRPC1 and/or SOCE channels (25, 34-38). To explore the contribution of extracellular Ca2+ to [Ca2+]rest in WT and mdx myotubes, we used four different strategies as follows: low Ca2+ solution, Ca2+-free solution, Krebs-Ringer solution supplemented with gadolinium trichloride (Gd3+, 20 μM), and Ca2+-free solution with Gd3+ (see under "Materials and Methods"). Cells were incubated for 2 min before [Ca2+]rest determinations were made. We observed a nonsignificant reduction in [Ca2+]rest in WT myotubes after the addition of low Ca2+ solution, Ca2+-free solution, and Gd3+ solution (92.9 ± 1 nM, n = 20; 91.0 ± 1 nM, n = 15; and 86.3 ± 1 nM, n = 20; all p > 0.05 compared with the WT basal value) (Fig. 2A). In mdx myotubes, there was a significant decrease in [Ca2+]rest in all conditions (184 ± 8 nM, n = 12, with low Ca2+; 148 ± 6 nM in Ca2+-free solution, n = 10; and 147 ± 1 nM, n = 11, after Gd3+; all p < 0.001 compared with the mdx basal value) (Fig. 2A). The addition of Gd3+ to the Ca2+-free solution did not cause a further reduction of [Ca2+]rest, suggesting that Gd3+ by itself was able to block the active Ca2+-entry pathway. In addition, we estimated Ca2+ entry by Mn2+ quench of Fura-2 fluorescence. Rates of Mn2+ quench were significantly higher in mdx myotubes (Fig. 2, B and C), and this was completely blocked by the addition of Gd3+ (20 μM). Although inhibition of Ca2+ entry by either Gd3+ or removal of extracellular Ca2+ reduced [Ca2+]rest in mdx myotubes, it did not return it to WT levels, suggesting an additional mechanism(s) causing [Ca2+]rest dysregulation in mdx myotubes.
Inhibition of RyRs and IP3Rs Reduced [Ca2+]rest in mdx Myotubes-We have previously reported that [Ca2+]rest depends largely on a Ry-insensitive leak through RyR1 ("RyR1 leak") and is unaffected by Ry treatment (47). Bastadin 5 (B5), a brominated macrocyclic derivative of dityrosine isolated from the marine sponge Ianthella basta (51), interacts with RyR1, modulating RyR1 gating behavior in an FKBP12-dependent manner. B5 can be used as a pharmacological tool to convert RyR1 from its leak conformation into a gating conformation, restoring Ry sensitivity (47). We studied the contribution of RyR1 to [Ca2+]rest in mdx myotubes (Fig. 3A). Ry treatment alone did not modify [Ca2+]rest in WT myotubes but did cause a significant reduction of [Ca2+]rest in mdx myotubes (99 ± 3 nM, n = 15, p > 0.05, and 213 ± 5 nM, n = 19, p < 0.001, compared with WT and mdx basal values, respectively). Addition of B5 in the presence of Ry significantly diminished [Ca2+]rest in both WT and mdx myotubes (80 ± 1 nM, n = 9, p < 0.05, and 166 ± 9 nM, n = 9, p < 0.001, compared with WT and mdx basal values, respectively) but also did not reduce mdx basal values to those seen in WT.
We have previously demonstrated that the expression of IP3Rs, as well as the total mass of inositol 1,4,5-trisphosphate, is increased in both mdx and human DMD-derived cell lines compared with normal cells (52). U-73122 (a PLC inhibitor) and Xestospongin C (XeC, an IP3R blocker) significantly reduced [Ca2+]rest only in mdx myotubes (241 ± 7 nM, n = 24, and 232 ± 6 nM, n = 20, respectively, both p < 0.001 compared with the mdx basal value), without any significant effect in WT myotubes (113 ± 2 nM, n = 21, and 97 ± 2 nM, n = 16, respectively, p > 0.05 compared with WT basal values) (Fig. 3A), whereas U-73343 (the inactive analog of U-73122) did not modify [Ca2+]rest in either WT or mdx myotubes.
To quantify the level of the SR Ca2+ store, we exposed WT and mdx myotubes loaded with Fluo-4 AM to 5 μM ionomycin in Ca2+-free solution. Under these conditions, the total Ca2+ released was significantly smaller in mdx myotubes compared with WT myotubes (area under the curve = 23.7 ± 2.4 versus 40.4 ± 4.2, p < 0.01) (Fig. 3, B and C, and representative fluorescence images in supplemental Fig. S1). Similar results were obtained in Fluo-5N-loaded myotubes (supplemental Fig. S2). Moreover, treatment (3 h) with either Ry (30 μM) or XeC (5 μM) partially restored the SR Ca2+ content in mdx myotubes (Fig. 3C), suggesting that the reduction in SR Ca2+ levels is due to a leak through these channels.
NF-κB Activity Is Up-regulated in Dystrophic Myotubes and Can Be Reversed with Inhibitors That Reduce [Ca2+]rest-We studied the subcellular distribution of the p65 subunit of NF-κB. Fig. 4A shows an increased nuclear localization of p65 in mdx myotubes compared with WT myotubes, measured by immunofluorescence and confocal microscopy. Three-dimensional reconstruction of z-stack images shows that p65 is located primarily in the cytosol, but in mdx myotubes the distribution is diffuse, with both cytoplasmic and nuclear localization. The basal fluorescence ratio of p65 between nucleus and cytosol was increased about 50% in dystrophic myotubes compared with normal myotubes (Fig. 4B). We assessed the transcriptional activity of NF-κB using a reporter that contains five tandem repeats of NF-κB-binding sites cloned upstream of a luciferase gene (see "Materials and Methods"). Luciferase activity was increased 2.5-fold in mdx myotubes compared with WT myotubes (Fig. 4C). To establish a correlation between [Ca2+]rest and NF-κB transcriptional activity, we treated myotubes for 6 h with Gd3+, Ry, and XeB (53, 54) as described previously at the same concentrations. None of these drugs caused a significant change in the luciferase reporter activity in WT myotubes (p > 0.05) (Fig. 4C). However, blockade of sarcolemmal Ca2+ entry with Gd3+ reduced the luciferase reporter activity by 19% (p > 0.05), and pretreatment with Ry or XeB reduced it by 58 and 38% in mdx myotubes, respectively (p < 0.001 and p < 0.01, respectively, compared with the mdx basal value). Furthermore, BAPTA-AM (1,2-bis(2-aminophenoxy)ethane-N,N,N′,N′-tetraacetic acid tetrakis(acetoxymethyl ester), 50 μM) treatment reduced NF-κB transcriptional activity in mdx myotubes by 43% (p < 0.05) without any significant effect in WT myotubes. To confirm the contribution of Ca2+ released by the SR, we measured the subcellular distribution of p65 in the presence of Ry or XeB. Both inhibitors reduced the nuclear/cytosol p65 fluorescence ratio in mdx myotubes (Fig. 4B).
TNF-α, IL-1β, and IL-6 Gene Expression Was Similar in WT and mdx Myotubes-To identify gene targets that can be modulated by [Ca2+]rest-dependent NF-κB up-regulation, we studied the levels of mRNA for TNF-α, IL-1β, and IL-6 in both myotube models. We did not observe any significant difference in the mRNA levels of these cytokines between WT and mdx myotubes under resting conditions (supplemental Fig. S3).
iNOS Is Overexpressed in mdx Myotubes and Is Dependent on [Ca2+]rest and NF-κB Activity-We observed increased iNOS mRNA levels and protein expression in mdx myotubes (p < 0.001 and p < 0.01, respectively, compared with WT myotubes) (Fig. 5, A and B). Moreover, nitric oxide (NO) production, assessed by DAF-FM fluorescence, was ≈20% higher in mdx compared with WT myotubes (Fig. 5C). In myotubes transfected with p65 siRNA, the expression of p65 protein after 48 h was reduced by 89 and 82% in WT and mdx myotubes, respectively (Fig. 5D). p65 knockdown in mdx myotubes normalized iNOS protein levels to WT values, showing that the latter is regulated by the activity level of the former (Fig. 5D). Moreover, treatment with compounds that lower [Ca2+]rest for 6 h significantly reduced iNOS mRNA levels in mdx myotubes (by 75% for Gd3+, 86% for Ry, and 66% for XeB), but had no significant effect in WT myotubes (Fig. 5A).
p38 MAPK Is Involved in NF-κB Up-regulation in mdx Myotubes-Several Ca2+-sensitive pathways can modulate the activity of the NF-κB signaling pathway (41, 55). To determine the signal transduction pathways involved in the [Ca2+]rest-dependent NF-κB up-regulation, we used specific pharmacological blockers of ERK1/2, JNK, p38 MAPKs, Ca2+/calmodulin-dependent kinase II, calcineurin A, and protein kinase C (PKC) (Fig. 6). Only p38 MAPK inhibition with SB-203580 (10 μM) significantly reduced the NF-κB luciferase reporter activity, in both WT and mdx myotubes, by 82 and 73%, respectively (p < 0.001).
DISCUSSION
In dystrophic skeletal muscle cells, increased [Ca2+]rest has mainly been related to calpain activation and opening of the mitochondrial permeability transition pore as factors that induce death in skeletal muscle fibers. Here, we show the first evidence that elevated [Ca2+]rest in dystrophic myotubes causes altered function of the transcription factor NF-κB, leading to iNOS expression. In addition, our data show that the increased [Ca2+]rest in mdx myotubes is multifactorial, involving both Ca2+ entry and SR Ca2+ leak through RyRs and IP3Rs.
We have previously shown that [Ca2+]rest in DMD muscle fibers was ~370 nM, whereas in normal muscle fibers it was ~100 nM (27). Other authors have shown similar calcium concentrations in adult mdx fibers compared with WT fibers (25, 28). A possible explanation for why some authors did not find elevated [Ca2+]rest in dystrophic muscle cells may lie, in part, in methodological differences in fluorescent dye calibration, the previous contractile activity, and the age of the fibers. Moreover, fluorescent dyes such as 1,2-bis(2-aminophenoxy)ethane-N,N,N′,N′-tetraacetic acid derivatives are Ca2+ chelators and can artificially reduce [Ca2+]rest. Thus, in most cases the [Ca2+]rest values that have been reported in muscle cells using this method are significantly lower (range 20-80 nM) than those reported using Ca2+-selective microelectrodes (100-120 nM). TRPC1-dependent Ca2+ entry is increased in mdx muscle fibers (25, 34, 36). Both GsMTx4 and streptomycin reduced [Ca2+]rest and prevented the rise of [Ca2+]rest following eccentric contractions, improving muscle function and increasing myofiber regeneration in mdx mice (25, 28, 36). In addition to stretch channel activation, SOCE has emerged as another contributor to increased resting Ca2+ entry in mdx fibers (35, 37, 38). We have found that Gd3+, an unspecific blocker of Ca2+ entry through SOCE and transient receptor potential channels (56), reduced [Ca2+]rest by 52% in mdx myotubes and that long-term treatment with Gd3+ was associated with a reduction in NF-κB activity and iNOS expression. However, blocking Ca2+ entry did not completely normalize [Ca2+]rest, suggesting a possible intracellular contribution.
In primary mdx myotubes, treatment with Ry reduced [Ca2+]rest by 31%, and adding B5 decreased it further, to 46%. We have previously shown that [Ca2+]rest depends largely on a Ry-insensitive leak of RyR1 channels (RyR1 leak) that can be blocked by Ry + B5 treatment in normal myotubes (47). Bellinger et al. (22) have shown that RyR1 isolated from mdx skeletal muscle shows an age-dependent increase in S-nitrosylation coincident with muscle pathology, which depleted the channel complex of FKBP12, resulting in "leaky channels." Depletion of FKBP12 from the RyR1 channel due to nitrosative stress may render it sensitive to Ca2+-mediated activation (22). IP3R blockade with XeC (an IP3R blocker) and treatment with U-73122 (a PLC inhibitor) resulted in 25 and 22% reductions in [Ca2+]rest in mdx myotubes, respectively. These combined data strongly demonstrate that the SR plays an important role in the dysregulation of [Ca2+]rest observed in dystrophic myotubes.
There is controversy concerning the SR Ca2+ levels in mdx skeletal muscle cells. Robert et al. (57) demonstrated an increased SR Ca2+ loading capacity after depletion in mdx compared with WT. However, other authors have shown reduced expression of calsequestrin-like proteins, lower SR Ca2+ loading (58), and reduced sarco/endoplasmic reticulum Ca2+-ATPase activity in mdx muscles (59). Recently, Robin et al. (60) demonstrated an elevated passive SR Ca2+ leak in mdx fibers, using fibers voltage-clamped at −80 mV and exposed to cyclopiazonic acid. Our results show that Ry- or XeC-treated mdx myotubes have an increase in SR Ca2+ store content, suggesting that SR leak occurs through these Ca2+ channels. SERCA1a overexpression in mdx diaphragm muscle by adeno-associated virus gene transfer resulted in a reduction of centrally located nuclei and reduced susceptibility to eccentric contraction-induced damage (61). More recently, δ-sarcoglycan-null and mdx mice that overexpress SERCA1 through transgenesis showed an improvement in muscle damage and excitation-contraction coupling and restored [Ca2+]rest and [Ca2+]SR in both dystrophic models (62). Together, this suggests that the filling state of the SR contributes significantly to the dysregulation of [Ca2+]rest observed in mdx muscles.
Several reports indicate that resting membrane potentials are more positive in mdx muscle fibers than in WT (27, 63-65). We have found that mdx myotubes show a partial membrane depolarization compared with WT. None of the drugs used in this study, all of which have a major effect on [Ca2+]rest, induced a significant repolarization in the mdx myotubes. These are not surprising results, because none of them has any effect on ion permeability or on the ion-translocating enzymes involved in maintaining the resting membrane potential.
Numerous facts indicate that dystrophic skeletal muscle cells have impaired excitation-contraction coupling. Comparisons of the cytosolic Ca2+ transients evoked by a single action potential have shown that the Ca2+ transients are reduced in mdx and mdx;utr−/− fibers compared with WT fibers (66-68). Muscle weakness observed in isolated fibers from mdx mice and DMD patients has not been fully explained. The reduction in the Ca2+ transient evoked by a single action potential, the increased Vm, the increased [Ca2+]rest, and a reduced Ca2+ loading capacity of the SR could provide a mechanism for contractile dysfunction and impaired force production in DMD patients.
Several studies have shown that NF-κB activity is increased in mdx skeletal muscles (8-15), but the mechanisms causing this abnormality have not previously been unveiled. Acharyya et al. (10) reported increased NF-κB DNA binding activity and IKK activation, without any change in IκBα expression and phosphorylation, and normal levels of p65 with increased phosphorylation. The authors proposed direct p65 activation by IKK (10). On the contrary, Singh et al. (15) found an increase in the expression of both p65 and IκBα and increased IκBα phosphorylation, indicating that NF-κB activation in mdx muscles is due to a complex mechanism and not only IKK activation. Both examined activation of NF-κB in whole muscle extracts. Because dystrophic muscles are associated with a large amount of activated immune cell infiltrates, which have increased NF-κB activity (7, 10), it is possible that this increase was not due to changes in muscle cells. Here, we used the myotube model to determine whether NF-κB can be activated in dystrophic skeletal muscle cells without contribution from the immune system. We observed that NF-κB transcriptional activity, measured by a luciferase reporter, was increased in mdx myotubes, and we observed a significant increase in the p65 nucleus/cytosol fluorescence ratio. Both luciferase activity and p65 nuclear localization could be reduced by agents that modulate [Ca2+]rest in mdx myotubes but were not changed by these drugs in WT myotubes.
We do not know the exact mechanism that accounts for [Ca2+]rest-dependent activation of NF-κB in muscle cells. Several Ca2+-sensitive pathways can modulate the activity of NF-κB (41). We have previously shown that membrane depolarization activates NF-κB through increases in intracellular Ca2+ mediated by RyR and IP3R. This Ca2+-dependent modulation has been attributed to calcineurin A, PKC, and ERK1/2 pathway activation in normal myotubes (40). We did not find any significant effect on luciferase activity when we preincubated WT and mdx myotubes with specific blockers of these signaling pathways. Similar results were obtained with Ca2+/calmodulin-dependent kinase II and JNK inhibitors. Surprisingly, p38 inhibition by SB-203580 dramatically reduced the luciferase activity of the NF-κB reporter. p38 MAPK is activated by various stimuli, including exercise, contraction, insulin, environmental stress, and pro-inflammatory cytokines (69). SB-203580 is a specific blocker of p38 MAPK that inhibits the catalytic activity of this protein (70).
Badger et al. (71) have shown that SB-203580 blocks IL-1-induced p38 kinase activity, NO production, and iNOS expression in chondrocytes. In addition, in C6 glioma cells, stimulation with LPS increases iNOS mRNA expression, NO production, phosphorylation of p38, and the activation of NF-κB. Treatment with SB-203580 reduced iNOS expression and NO production; however, it did not modify the NF-κB DNA binding activity (72).
Nakamura et al. (73) have shown that the calcineurin A, JNK1, and p38 signaling pathways are constantly activated in dystrophic mdx;utr−/− hearts, associated with increased p38 phosphorylation. However, in skeletal muscle a reduction in p38 phosphorylation has been shown, accompanied by an increase in p38 protein expression, in whole lysates from mdx tibialis anterior muscles (74). Several reports have shown that calcium activates p38 MAPK, but the mechanisms by which it does so are poorly understood. In cerebellar granule cells, glutamate stimulates the activity of p38 through Ca2+ entry from the extracellular space and Rho GTPase activation (75, 76). In myotubes, caffeine increases p38 phosphorylation via Ca2+/calmodulin-dependent kinase II activation and participates in the expression of PGC-1α and mitochondrial biogenesis (77). Further studies will be required to clarify this issue in mdx skeletal muscle cells and the precise mechanism involved in NF-κB activation.
Finally, we observed that iNOS expression can also be modulated by [Ca2+]rest through NF-κB under resting conditions. p65 knockdown normalized iNOS protein levels in mdx myotubes to WT levels, similar to the effect that agents lowering [Ca2+]rest had on iNOS mRNA expression. iNOS overexpression by this mechanism could be responsible for the oxidative and nitrosative stress observed in mdx muscles (26) and can provide a positive feedback loop for Ca2+ deregulation in dystrophic skeletal muscle cells.
Overexpression of TRPC3 (in skeletal muscle-specific transgenic mice) and the associated increase in calcium influx resulted in a phenotype of muscular dystrophy (78). The authors showed an increase in central nucleation of fibers, increased numbers of smaller myofibers, fibrosis, and infiltration of inflammatory cells. Moreover, sarco/endoplasmic reticulum Ca2+-ATPase overexpression in Sgcg−/−, mdx, and TRPC3 transgenic mice mitigated the biochemical and histological features of muscular dystrophy, improving the altered intracellular Ca2+ handling (62). As described above, S-nitrosylation of RyR induces Ca2+ alterations related to an augmented spontaneous Ca2+ spark frequency (22). In addition, transient receptor potential channels, especially TRPC5, elicited robust elevations of Ca2+ in response to the NO donor S-nitroso-N-acetyl-DL-penicillamine (79). TRPC5, TRPA1, and TRPM1 channels were increased in mdx skeletal muscle at certain stages (80). These modifications induced by NO could exacerbate the pathology in mdx muscles.
We did not find any difference in cytokine expression in mdx myotubes (supplemental Fig. S3). Because macrophages and lymphocytes are specialized immune cells (infiltrated in dystrophic muscle), we think that they may be responsible for the secretion of these cytokines. This hypothesis is reinforced by IKK (an upstream activator of NF-κB) deletion in myeloid cells from mdx mice, a procedure that reduced inflammation and, concomitantly, TNF-α and IL-1β expression (10). In addition, production of pro-inflammatory cytokines is probably a complex process that requires simultaneous activation of pathways other than NF-κB. The iNOS promoter has two bona fide NF-κB-binding sites (reviewed in Ref. 81). TNF-α is often described as one of the classical NF-κB-dependent cytokines. However, there are numerous contradictory data on the role of NF-κB as a classic activator of TNF-α, and it seems that expression of this cytokine requires nuclear factor of activated T-cells activation, as well as other co-activators (reviewed in Ref. 82).
In summary, we have found that increased [Ca2+]rest is modulated by Ca2+ entry, as a result of SR unloading caused by Ca2+ leak through RyRs and IP3Rs in dystrophic myotubes, and that this alteration increases NF-κB activity and iNOS expression, likely through p38 activation. These mechanisms provide several potential therapeutic targets to improve the muscle degeneration observed in DMD patients and explain the progressive damage observed in this pathology (Fig. 7).
"Biology",
"Medicine"
] |
Pay “Attention” to your Context when Classifying Abusive Language
The goal of any social media platform is to facilitate healthy and meaningful interactions among its users. But more often than not, it has been found that it becomes an avenue for wanton attacks. We propose an experimental study that has three aims: 1) to provide us with a deeper understanding of current data sets that focus on different types of abusive language, which are sometimes overlapping (racism, sexism, hate speech, offensive language, and personal attacks); 2) to investigate what type of attention mechanism (contextual vs. self-attention) is better for abusive language detection using deep learning architectures; and 3) to investigate whether stacked architectures provide an advantage over simple architectures for this task.
INTRODUCTION & RELATED WORK
Any social interaction involves an exchange of viewpoints and thoughts. But these views and thoughts can be caustic. Often we see that users resort to verbal abuse to win an argument or overshadow someone's opinion. On Twitter, people from every sphere have experienced online abuse, be it a famous celebrity with millions of followers or someone representing a marginalized community such as LGBTQ people or women. We want to channel Natural Language Processing (NLP) for social good and aid in the process of flagging abusive tweets and users. Detecting abuse on Twitter can be challenging, particularly because the text is often noisy. Abuse can also have different facets. [10] released one of the initial data sets from Twitter with the goal of identifying what constitutes racism and sexism. [9] in their work pointed out that hate speech is different from offensive language and released a data set of 25k tweets with the goal of distinguishing hate speech from offensive language.
Stop saying dumb blondes with pretty faces as you need a pretty face to pull them off !!! #mkr
In Islam women must be locked in their houses and Muslims claim this is treating them well
Table 1: Tweets from the [10] data set demonstrating online abuse

They find that racist and homophobic tweets are more likely to be classified as hate speech, but sexist tweets are generally classified as offensive. [4] introduced a large, hand-coded corpus of online harassment data for studying the nature of harassing comments and the culture of trolling. Keeping these motivations in mind, we make the following salient contributions:

• We build a deep context-aware attention-based model for abusive behavior detection on Twitter. To the best of our knowledge, ours is the first work that exploits context-aware attention for this task.
• Our model is robust and achieves consistent performance gains on all three abusive language data sets.
• We show how context-aware attention helps focus on certain abusive keywords when they are used in a specific context, improving the performance of abusive behavior detection.
RELATED WORK
Existing approaches to abusive text detection can be broadly divided into two categories: 1) feature-intensive machine learning algorithms, such as Logistic Regression (LR) and the Multilayer Perceptron (MLP); and 2) deep learning models, which learn feature representations on their own. [10] released the popular data set of 16k tweets annotated as belonging to the sexism, racism, or none class, and provided a feature-engineered model for detection of abuse in their corpus. [9] use a similar handcrafted feature-engineered model to identify offensive language and distinguish it from hate speech. [2] in their work experiment with multiple deep learning architectures for the task of hate speech detection on Twitter, using the same data set by [10]. Their best-reported F1-score is achieved using Long Short Term Memory networks (LSTM) + Gradient Boosting.
On the data set released by [10], [5] experiment with a two-step approach of detecting abusive language first and then classifying it into specific types, i.e., racist, sexist, or none. They achieve the best results using a Hybrid Convolutional Neural Network (CNN), with the intuition that character-level input would counter purposely or mistakenly misspelled words and made-up vocabularies. [6] in their work ran experiments on the Gazetta dataset and the DETOX system ([12]) and show that a Recurrent Neural Network (RNN) coupled with deep, classification-specific attention outperforms the previous state of the art in abusive comment moderation. In their more recent work, [7] explored how user embeddings, user-type embeddings, and user type biases can improve their previous RNN-based model on the Gazetta dataset. Attentive neural networks have been shown to perform well on a variety of NLP tasks ([13], [11]). [13] use hierarchical contextual attention for text classification (i.e., attention at both word and sentence level) on six large-scale text classification tasks and demonstrate that the proposed architecture outperforms previous methods by a substantial margin. We primarily focus on word-level attention because most tweets are single-sentence tweets.
The best choice for modeling tweets was Long Short Term Memory networks (LSTMs) because of their ability to capture long-term dependencies by introducing a gating mechanism that ensures proper gradient propagation through the network. We use bidirectional LSTMs because of their inherent capability of capturing information from both the past and the future states. A bidirectional LSTM (BiLSTM) consists of a forward LSTM →f that reads the sentence from x_1 to x_T and a backward LSTM ←f that reads the sentence from x_T to x_1, where T is the number of words in the sentence under consideration and x_i is the i-th word in the sentence. We obtain the final annotation for a given word x_i by concatenating the annotations from both directions:

h_i = [→h_i ; ←h_i]   (Eq. 1)

[1] show that LSTMs can benefit from depth in space: stacking multiple recurrent hidden layers on top of each other, just as feed-forward layers are stacked in conventional deep networks, gives performance gains. Hence we choose stacked LSTMs for our experiments.
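A minimal sketch of such a stacked bidirectional encoder (PyTorch is used for illustration; the sizes are our choices, not the paper's exact configuration):

```python
import torch
import torch.nn as nn

# Stacked BiLSTM over embedded tweets, producing one annotation per word.
emb_dim, hidden, vocab = 300, 128, 660_000     # 300-d GloVe, 660K vocab
embed = nn.Embedding(vocab, emb_dim)
bilstm = nn.LSTM(emb_dim, hidden, num_layers=2,    # stacked, 2 layers
                 bidirectional=True, batch_first=True)

tokens = torch.randint(0, vocab, (32, 40))     # a batch of 32 tweets
h, _ = bilstm(embed(tokens))                   # (32, 40, 2 * hidden)
# h[:, i, :] concatenates the forward and backward annotations of word i,
# i.e. Eq. 1 above.
```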
Word Attention
The attention mechanism assigns a weight to each word annotation obtained from the BiLSTM layer. We compute a fixed representation v of the whole message as a weighted sum of all the word annotations, which is then fed to a final fully-connected softmax layer to obtain the class probabilities. We first feed the LSTM output h_i of each word x_i through a Multilayer Perceptron to get u_i as its hidden representation. u_c is our word-level context vector, which is randomly initialized and learned as we train the network. Once u_i is obtained, we calculate the importance of the word as the similarity of u_i with u_c and obtain a normalized importance weight α_i through a softmax function. The context vector u_c can be seen as a filter that decides which words matter most among all the words in the message. Figure 2 shows the high-level architecture of this model. W_h and b_h are the attention layer's weights and biases. More formally,

u_i = tanh(W_h h_i + b_h),  α_i = exp(u_i^T u_c) / Σ_t exp(u_t^T u_c),  v = Σ_i α_i h_i.

Table 2: Data sets and their total tweet counts
Data Set | Tweet Count
[10] | 15,844
[9] | 25,112
[4] | 20,362
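A minimal PyTorch sketch of this attention layer, directly following the equations above (all dimensions are illustrative assumptions):

```python
import torch
import torch.nn as nn

class WordAttention(nn.Module):
    def __init__(self, annot_dim=300, att_dim=300):
        super().__init__()
        self.mlp = nn.Linear(annot_dim, att_dim)       # W_h, b_h
        self.u_c = nn.Parameter(torch.randn(att_dim))  # randomly initialized context vector

    def forward(self, h):                      # h: (batch, T, annot_dim)
        u = torch.tanh(self.mlp(h))            # u_i = tanh(W_h h_i + b_h)
        scores = u.matmul(self.u_c)            # similarity of u_i with u_c -> (batch, T)
        alpha = torch.softmax(scores, dim=1)   # normalized importance weights
        v = (alpha.unsqueeze(-1) * h).sum(dim=1)  # fixed message representation
        return v, alpha                        # alpha is also used for the heat maps
```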
EXPERIMENTS
In this section we first describe the data sets and then present our results on all three. We also show some examples where our model failed. Finally, we show how attention helps us understand the model better.
Data Sets
We use three benchmark data sets for abusive content detection on Twitter. At the time of the experiment, the [10] data set had a total of 15,844 tweets, of which 1,924 were labelled as racism, 3,058 as sexism, and 10,862 as none. The [9] data set had a total of 25,112 tweets, of which 1,498 were labelled as hate speech, 19,326 as offensive language, and 4,288 as neither. The [4] data set had 20,362 tweets, of which 5,235 were positive harassment examples and 15,127 were negative.
We refer to the [10] data set as D1, the [9] data set as D2, and the [4] data set as D3. For tweet tokenization, we use Ekphrasis, a text processing tool built specially for social platforms such as Twitter.
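A minimal sketch of Ekphrasis-based tokenization is shown below; the exact option set is an illustrative assumption, not necessarily the configuration used in these experiments — consult the library's documentation for the full list of options.

```python
from ekphrasis.classes.preprocessor import TextPreProcessor
from ekphrasis.classes.tokenizer import SocialTokenizer

text_processor = TextPreProcessor(
    normalize=['url', 'email', 'user', 'number'],   # map Twitter artifacts to tags
    annotate={'hashtag', 'allcaps', 'elongated'},   # mark social-media phenomena
    segmenter='twitter',                            # word segmentation for hashtags
    corrector='twitter',                            # spell correction statistics
    unpack_hashtags=True,                           # split #TwoWords into words
    tokenizer=SocialTokenizer(lowercase=True).tokenize,
)

tokens = text_processor.pre_process_doc("@user CANT WAIT for #mkr tonight!!!")
print(tokens)
```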
[3] use a large collection of Twitter messages (330M) to generate word embeddings, with a vocabulary size of 660K words, using GloVe ([8]). We use these pre-trained word embeddings to initialize the first (embedding) layer of our neural networks.
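A sketch of how such pre-trained vectors might initialize the embedding layer; the file name, dimensionality, `word_index` mapping, and the choice to fine-tune (`freeze=False`) are all illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

def build_embedding(path, word_index, dim=300):
    """Build an embedding layer; rows for out-of-vocabulary words stay random."""
    matrix = np.random.uniform(-0.05, 0.05, (len(word_index) + 1, dim))
    with open(path, encoding='utf-8') as f:
        for line in f:
            parts = line.rstrip().split(' ')
            if parts[0] in word_index and len(parts) == dim + 1:
                matrix[word_index[parts[0]]] = np.asarray(parts[1:], dtype='float32')
    return nn.Embedding.from_pretrained(torch.tensor(matrix, dtype=torch.float),
                                        freeze=False)  # allow fine-tuning
```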
Results
The network is trained at a learning rate of 0.001 for 10 epochs, with a dropout of 0.2 to prevent over-fitting. The results are averaged over 10-fold cross-validation for D1 and D3 and 5-fold cross-validation for D2, because [9] reported results using 5-fold CV. Because of the class imbalance in all our data sets, we report weighted F1 scores.
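A sketch of this evaluation protocol using scikit-learn utilities; `model_factory` is a stand-in for constructing the BiLSTM-attention model, and the fold count is a parameter (10 for D1/D3, 5 for D2).

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import f1_score

def cross_validate(model_factory, X, y, k=10):
    """Average weighted F1 over stratified k-fold cross-validation."""
    scores = []
    for train_idx, test_idx in StratifiedKFold(n_splits=k, shuffle=True,
                                               random_state=42).split(X, y):
        model = model_factory()
        model.fit(X[train_idx], y[train_idx])
        preds = model.predict(X[test_idx])
        # weighted F1 accounts for class imbalance in the data sets
        scores.append(f1_score(y[test_idx], preds, average='weighted'))
    return np.mean(scores)
```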
Table 3 shows our results in detail. We compare our model with the best models reported in each paper. Because [4] is a data set paper, we cannot fill in the corresponding row. * denotes numbers taken from the baseline papers. All the results were reproducible except for the one marked in red. For the (Waseem and Hovy, 2016) data set, (Badjatiya et al., 2017) claim that using Gradient Boosting with LSTM embeddings obtained from random word embeddings boosted their performance by 12 F1 points, from 81.0 to 93.0. When we tried to reproduce this result, we did not find any significant improvement over 81. The results show that our model is robust in terms of performance on all three data sets.

Table 3: Data sets and the results of different models. We reproduced the results for each model on all three data sets.
We also share, in Figure 2, some examples from the three data sets that our BiLSTM attention model could not classify correctly. On closer investigation, we find that most cases where our model fails are instances where the annotation is either noisy or the differences between classes are very blurred and subtle.
Why Contextual Attention?
The attention mechanism enables our neural network to focus on the relevant parts of the input more than on the irrelevant parts while performing a prediction task. But relevance often depends on the context, so the importance of words is highly context dependent. For example, the word islam may appear in the realm of racism as well as in any normal conversation. The top tweet in Figure 3 illustrates this: the same word receives different attention weights in two different tweets, depending on the context in which it appears.
Attention Heat Map Visualization
The color intensity corresponds to the weight given to each word by the contextual attention.
Figure 4: The first tweet is a sexist tweet from [10], whereas the second tweet is an example of a racist tweet from the same data set. The third tweet is from the [9] data set, labelled as offensive language.
CONCLUSION AND FUTURE WORK
We successfully built a deep context-aware attention-based model and applied it to the task of abusive tweet detection. We ran experiments on three relevant data sets and empirically showed that our model is robust at detecting abuse on Twitter. We also showed how context-aware attention helps us interpret the model's behavior by visualizing the attention weights and conducting a thorough error analysis. As future work, we want to experiment with a model that learns user embeddings from users' historical tweets. We also want to model abusive text classification on Twitter by taking tweets in context, because standalone tweets often do not give a clear picture of a tweet's intent.
Figure 2: The first tweet is from [10], the second tweet is from the [9] data set, and the third is from the [4] data set; the top tweet belongs to the None class while the bottom tweet belongs to the Racism class.
Figure 3: An example showing how our model captures diverse context and assigns context-dependent weights to the same word in two different tweets.
| 2,672.6 | 2018-09-24T00:00:00.000 | [
"Computer Science",
"Linguistics"
] |
Foreign Direct Investment (FDI) in Retail in India: Raison d'Être of Growth
According to the Investment Commission of India, the retail sector is expected to grow to almost three times its current level, to $660 billion, by 2015. Investments are also sought by Indian retailers to give the necessary push to the evolution of organized retailing in India, which has been much slower than in the rest of the world. It is significant to mention that, despite the ongoing wave of incessant liberalization and globalization, the absence of political will to attract advanced technology and to adopt new retail formats is holding back the retail revolution. FDI has been present in the Indian economy for ages, though the chart reveals that there are some states and cities where FDI inflows are larger than in the rest of India. Maharashtra, Tamil Nadu, Delhi, Karnataka and Andhra Pradesh attract two thirds of the total investment, being the main centers of IT development over the last 15 years. Moreover, on a city-to-city basis, the inflows are highly concentrated: more than 50 percent go to a few cities, namely Bengaluru, Mumbai and the National Capital Region (NCR).
Indian Economy: paradigm shift towards mass consumerism
The Indian economy is considered one of the most rapidly growing economies in the world, as is evident from the attention it has grabbed from all corners of the global economy. The recent spurt in growth, especially sectoral growth rates, reveals a growth spree across newly emerging sectors such as Fast Moving Consumer Goods (FMCG), including wholesale and retail, Information Technology Enabled Services (ITES), health and education. It is widely noted that performance in the post-recession period was dismal and that the economy lost the pace of growth it had earlier. However, under the new regime a recovery is anticipated. There were several obvious reasons for the debacle, and certain steps taken by the new government are pushing the economy back onto the path of development. In the previous regime, opening up the economy to the rest of the world was the centre of discussion. It was believed that opening the retail trade sector to foreign investment was a change ushered in by policy makers to project the Indian economy as an ever-expanding market, in order to attract investment in technology and innovation 1. Still, the Indian masses, the business class, and even policy makers, carrying strong reservations driven by voting politics, remain wary of this issue, citing employment opportunities, procurement from the international market, competition and loss of share for local entrepreneurs, and they air these fears by constantly debating the issue in the public domain. The government tried to show some courage in a series of moves to open up the retail sector slowly to Foreign Direct Investment (FDI), but it is finding it very difficult to make inroads amid strong opposition, on account of the lack of a strategy and a defined road map. Recently, the government has also brought in major policy changes in terms of FDI in various sectors, especially defense and railways 2.
According to the Investment Commission of India 3, the retail sector is expected to grow to almost three times its current level, to $660 billion, by 2015. Investments are also sought by Indian retailers to give the necessary push to the evolution of organized retailing in India, which has been much slower than in the rest of the world. It is significant to mention that, despite the ongoing wave of incessant liberalization and globalization, the absence of political will to attract advanced technology and to adopt new retail formats is holding back the retail revolution. This paper is a genuine effort to evaluate the willingness of domestic retailers, both organized and unorganized, and of experts, under the various stimuli present, regarding the ongoing issues, some of which are real while many are simply perceived.
Review of Literature
N.V. Shaha and M.A. Shinde (2013) 4 analyzed how India, being a signatory to the World Trade Organization's General Agreement on Trade in Services, which includes wholesale and retailing services, had to open up its retail trade sector to foreign investment. There were initial reservations about this issue, arising from fears of job losses, procurement from the international market, competition and the loss of entrepreneurial opportunities for locals. However, the government, in a series of moves, opened up the retail sector slowly to Foreign Direct Investment (FDI).
Gaurav Bisaria (2012) 5 discussed the various modes of Foreign Direct Investment (FDI) in retailing in India. FDI, or foreign investment, refers to net inflows of investment to acquire a lasting management interest (10 percent or more) in an enterprise operating in an economy other than that of the investor. Foreign direct investment is the sum of equity capital, reinvestment of earnings, and other long- or short-term capital as shown in the balance of payments. It usually involves participation in management, joint ventures, and the transfer of technology and expertise. 6 Retailing is one of the world's largest private industries. Liberalization of FDI has caused a massive restructuring of the retail industry, and the benefits of FDI in retail outweigh its costs. Opening the retail industry to FDI will bring benefits in terms of increased employment, organized retail stores, and the availability of quality products at better and cheaper prices. It enables a country's products and services to enter the global market.
Research Methodology
An analytical, descriptive and comparative methodology was adopted for this study. To make the study more pressing and direct, reliance was distributed equally between primary and secondary data sources such as books, journals, newspapers and online databases. The interpretation of the data and the suggestions made assume importance for the healthy growth of the retail sector in the country. For the convenience of the study, it was assumed, on the basis of the deliberations in the preceding segment of the paper, that departmental stores are the most sought-after avenue for FDI and that local kirana stores are going to face more competition. Brand value was taken as the dependent variable, because FDI appears to be attracted by organized branded retailers in both the single-brand and multi-brand retail segments, and its relationship with three other variables was studied. Moreover, the perceptions of retailers, customers and experts were also taken into account for detailed analysis using correlation, regression and tabular presentation.
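As a sketch of the regression step described above, with brand value regressed on the three explanatory variables (the column names and the data file are hypothetical stand-ins for the survey data):

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv('retail_survey.csv')  # hypothetical coded survey responses

# brand value as the dependent variable; three explanatory variables
X = sm.add_constant(df[['location', 'service_delivery', 'product_quality']])
model = sm.OLS(df['brand_value'], X).fit()

print(model.rsquared)               # the R^2 reported in Table 4
print(df.corr()['brand_value'])     # pairwise correlations with brand value
```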
So far as FDI in the Indian economy is concerned, the following statistics are important for confirming the necessary positioning of FDI across sectors, economically, geographically and on an inter-sectoral basis.
FDI in India: Schematic Representation
Foreign Direct Investment has been one of the most sought-after and debated issues in India since the inception of the New Economic Reforms and the policy of Liberalization, Privatization and Globalization (LPG). The issue of FDI in retail, though debated strongly on the grounds of employment and losses to small local entrepreneurs, is closely related to other aspects of mass consumerism, socio-economic development and a transforming economy. Before going into a detailed discussion of the various dimensions of FDI in retail pertaining to consumer perception, it is better to provide adequate insight into FDI in India over the years.
4.1 FDI: Pathways and Servicing
In the Indian economy, FDI and other investments are routed through strict governmental control. There are distinct pathways for various investments, but the automatic route is open for FDI (Fig-1).
Regional FDI flow in India
FDI has been present in the Indian economy for ages, though the chart reveals that there are some states and cities where FDI inflows are larger than in the rest of India. Maharashtra, Tamil Nadu, Delhi, Karnataka and Andhra Pradesh attract two thirds of the total investment, being the main centers of IT development over the last 15 years. Moreover, on a city-to-city basis, it is revealed that the inflows are highly concentrated: more than 50 percent go to a few cities, namely Bengaluru, Mumbai and the National Capital Region (NCR) (Fig-2).
Fig-4 Series of steps to open up FDI for various sectors in India
It is evident from the regional inflow of FDI that FDI is not sector-specific but business-specific, and that the route plays an important role: where investment is routed automatically, a surge is visible. Comparing the approvals with the actual arrival of investment in the domestic circuit clearly reveals the intensity of inflows. Despite 100 percent approval in cash-and-carry, inflows have been meager to date because the investments are routed through government approval (http://indialiaison.com/fdifinal.htm, March 13, 2015).
FDI inflows and need of the hour
The need for foreign investment in any economy depends more or less upon its capital formation and economic growth. Growth accompanied by foreign investment, in the form of technology and in monetary terms, is always desirable and pushes an economy forward. The adjoining figure clearly reveals that the potential of FDI in India increased over the last 10 years but could not be converted into actual investment.
4.3 FDI in various sectors
Ironically, FDI was not realized in equal terms despite being fully approved and routed in many ways. Barring 4-5 sectors, such as telecom, advertising and pharmaceuticals, FDI in figures has not been very attractive. Even in telecom, foreign companies are facing many structural problems in operations and distribution. Here it is advisable to note that the Indian retail sector and its structure are going to be major factors in attracting investment.
Structure of Indian Retail Sector
Retail is a sale for final consumption, in contrast to a sale for further sale or processing (i.e., wholesale): a sale to the ultimate consumer. Retailing is thus the last link connecting the individual consumer with the manufacturing and distribution chain. A retailer is involved in the act of selling goods to the individual consumer at a margin of profit. In operation, however, the retail industry in India is divided into exclusively organized retailing, unorganized retailing, and local retailing without establishment. Organized retailing refers to trading activities undertaken by licensed and integrated retailers, that is, those who are registered for sales tax and income tax and who apply various modes and applications of marketing and operational integration. Exclusive organized retailing comprises MNC- and corporate-backed hypermarkets and retail chains, exclusive departmental stores, and individual, localized large retail businesses. Unorganized retailing, on the other hand, refers to the traditional formats of low-cost retailing, for example, the local kirana (provision or grocery) shops, sole-proprietor general stores, etc. Various formats of individual, disaggregated, localized vending shops, convenience stores, hand-cart and pavement vendors, etc., constitute local retail shops and vendors without establishment. The Indian retail sector is highly fragmented on the basis of the domestic geographical and socio-economic set-up, where most of the business is held by local retailers serving the needs of their localities.
Given the structure of the retail sector in India, it is envisaged that applying FDI to any retail format is not the issue; the issue is to identify avenues of investment in a suitable format and business. The mere statement that the Indian retail industry has huge growth potential, and that mass consumerism in rural and urban segments over the last decade has attracted all big retailers to focus on India, is not going to help in finalizing the road map for FDI in retail.
Avenues for FDI in retail
The structure of retail in India is self-driven and has its own ecosystem in which to thrive; not the whole of retail is in need of FDI. It is noteworthy that FDI in the retail sector became part of the business strategies of Multinational Corporations (MNCs) and of various countries interested in trading with India. FDI in Indian retail can be explained on the basis of the five types of FDI given by Chryssochoidis, Millar & Clegg, 1997 7 (see notes). First, Indian retail FDI corresponds to the first type, which is usually found in countries like India and is undertaken to gain access to specific factors of production, e.g., natural resources such as coal, land and labour, technical knowledge, and material know-how. Such factors of production are not readily available in many foreign countries and are not easy to transfer; therefore foreign firms try to invest in India in order to secure access. The recent government initiative "Make in India" also dwells upon this first type of FDI. The Make in India initiative also targets the second type of FDI, in which a company invests in order to gain access to cheaper factors of production, e.g., low-cost labour 8. However, the third and fifth types of FDI are the most discussed in the recent perspective on FDI in retail. The third type of FDI, especially targeted at the retail sector, involves international competitors undertaking mutual investment in one another, e.g., through cross-shareholdings or through the establishment of joint ventures, in order to gain access to each other's product ranges. The fifth type of FDI, which is very recent in the Indian perspective, relates to the trade-diversionary aspect of regional integration. This type occurs when there are location advantages for foreign companies in their home country but the existence of tariffs or other barriers to trade prevents the companies from exporting to the host country. The foreign companies therefore jump the barriers by establishing a local presence within the host economy in order to gain access to the local market. The local manufacturing presence need only be sufficient to circumvent the trade barriers, since the foreign company wants to retain as much of the value added as possible in its home economy 9.
On the basis of these types of FDI, Indian retail would like to invite FDI in order to find employment for low-cost labour, to utilize available infrastructure, to acquire advanced retail formats and technology, and to provide a competitive edge to domestic retailers.
Avenues and fear of FDI
As regards FDI and its avenues in Indian retail, there are several fears and issues which need to be addressed in order to provide the necessary backing for investment.
Customer Perception
As regards customer perception of organized retailing and the application of FDI, customers were found to be concerned with the various services and facilities at stores in malls and markets. From the data it is observed that there are many common areas of preference for customers in both local kirana stores and organized departmental stores. In terms of the attributes, and the preference levels for these attributes, the number is quite large and clearly significant for the attributive advantage of departmental stores and local kirana stores. Out of 150 customers contacted, 99 preferred departmental stores, especially those in malls and organized retail, over local kirana stores.

Chart 5: Retailers' perception of expansion to take advantage of FDI in retail
Perception of FDI
Most of the retailers who have not expanded much and who operate with large investments accept that the number of footfalls may go down and that their market share will be affected. However, retailers with investments lower than Rs 4 lakhs foresee no considerable impact at all.
Chart 5: Retailer"s Perception for effect of FDI in retail on their establishment and business
6.3 Experts
All of the experts from various domains refute any fear of job losses or the loss of local entrepreneurs on account of FDI in the single-brand and multi-brand retail segments; rather, they foresee significant improvement in the competitiveness and growth of all sectors, not just retailing, in India.
Result & Discussion:
The R² value is significant for the dependent variable, and the location of outlets has a positive correlation with brand name. The findings reveal a preferred perception among all respondents towards brands, organized retailers and quasi-organized retailers (domestic retailers camouflaging as organized big retailers). It is also established that retailers with large investments are more concerned with competition and with their parallel growth and expansion for survival.

Table 4: Dependency of brand value on location, service delivery and product quality
Limitations of the Study
The major limitations of the presented study are given below:
1. The primary limitation is the uncontrollability of some variables, such as the cultural impact on the buying behaviour of customers.
2. There is a possibility of sample respondent bias in the reporting of perceptual and attitudinal underpinnings on certain statements.
3. There is a problem in generalizing some findings, as there are some unique variables at play.
4. The sample size may also be an issue, as it may not reflect the true behaviour of the universe.
5. The study was conducted from January 2014 to February 2014, a period observed as a time of sentiment running against the policies of the incumbent government.
| 3,886.4 | 2015-05-08T00:00:00.000 | [
"Economics",
"Business"
] |
Magnetic properties and giant magnetoimpedance effect for CoFeMoSiB surface modified amorphous ribbons covered by water based ferrofluid
The giant magnetoimpedance (GMI) effect is a powerful technique for magnetic label detection. Co-based amorphous ribbons are cheap materials showing a high GMI effect at low operation frequencies for compositions with close-to-zero magnetostriction. In this work, the magnetic properties and GMI were studied for CoFeMoSiB amorphous ribbons in the as-quenched and surface-modified states, both without and in the presence of a water-based ferrofluid with electrostatic stabilization of γ-Fe2O3 nanoparticles. Surface modification by ultrasound treatment resulted in the appearance of round defects with an average diameter of about 150 micrometers. The GMI difference for as-quenched ribbons in the absence and in the presence of the ferrofluid was measured over the frequency range of 0.5 to 10 MHz. Although the proposed surface modification by ultrasound treatment did not improve the sensitivity limit for ferrofluid detection, it did not decrease it either. The observed changes of GMI are useful for understanding the functionality of GMI biosensors.
Introduction
The giant magnetoimpedance (GMI) effect in soft ferromagnets is promising for applications in the area of small magnetic field sensors, including magnetic biosensors [1][2][3][4]. The magnetoimpedance phenomenon is a change of the total impedance of a ferromagnetic conductor under application of an external magnetic field when a high-frequency alternating current flows through it [2]. Different kinds of materials have been proposed for GMI biosensors [3,[5][6][7]]. The idea of a GMI biosensor based on an amorphous ribbon is attractive because this kind of material is cheap and the sensitive element can be designed as a disposable strip [5]. Two types of GMI-based biosensors are discussed in the literature. The first is related to magnetic label detection [3] and the second to a label-free detection process [8]. In the second case, the rapidly quenched ribbon sensitive element slightly changes its mass during the sensing process. The surface layer properties of a rapidly quenched ribbon differ from the properties of the central part; therefore, gradual removal of the surface layer causes a change of the effective magnetic anisotropy and of the GMI [8].
A parameter such as surface roughness is important in the case of amorphous ribbon based biosensors. On one hand, it might be seen as an obstacle in the case of magnetic label detection, influencing the sensitive element response (due to the strong contribution of the surface anisotropy). On the other hand, one can expect an improvement of the magnetic flux closure for a rough surface with magnetic ferrofluid spread on it. The origin of the sensitivity is still not fully understood, and it depends on surface features, the size and agglomeration state of the magnetic labels, and other parameters [6,9].
Mechanical processing for surface modification of amorphous materials is practically impossible due to their physical properties. For the magnetic detection of MNPs, Mo- or Cr-doped compositions with enhanced corrosion stability are usually selected. Consequently, chemical processing of these kinds of ribbons (such as a lithographic process including chemical etching) is also not simple. There have been attempts to use advanced lithography techniques for the creation of surface defects, or even mechanical drilling [10,11,12], but the employment of such expensive techniques makes the whole device less suited for applications. The search for new methods of creating artificial surface defects in ribbons with enhanced corrosion stability is therefore a challenge.
In this work, spherical magnetic nanoparticles were fabricated by laser target evaporation. Electrostatically stabilized water-based suspensions were prepared on the basis of the obtained MNPs. Their physical properties were studied by different techniques prior to GMI measurements, which were performed both without and with the sensitive element immersed in the magnetic suspension.
Experimental
Iron oxide nanoparticles (MNPs) were synthesized by the laser target evaporation (LTE) method using an ytterbium (Yb) fiber laser with a 1.07 μm wavelength. More details of the LTE technique can be found elsewhere [12][13]. Structural studies of the as-prepared MNPs were performed by transmission electron microscopy (TEM) using a JEOL JEM2100 microscope operating at 200 kV. X-ray diffraction (XRD) studies of the MNPs and the ribbons were performed with a DISCOVER D8 (Bruker) diffractometer using Cu-Kα radiation (wavelength λ = 1.5418 Å). An electrostatically stabilized ferrofluid (FF) was prepared by ultrasound treatment using sodium citrate solution (5 mM) in distilled water. The final concentration of MNPs in the ferrofluid was 5.0%. The specific surface area of the MNPs was measured by low-temperature sorption of nitrogen (the Brunauer-Emmett-Teller physical adsorption method, BET) using a Micromeritics TriStar 3000 analyzer.
Co68.6Fe3.9Mo3.0Si12.0B12.5 amorphous ribbons, 0.7 mm wide and 20 μm thick, were prepared by rapid quenching onto a Cu wheel (tangential velocity of ~30 m/s). The saturation magnetostriction of this ribbon is close to zero [14]. Surface modification of the amorphous ribbon was done in an ultrasonic bath with a 5% H3PO4 acid concentration during a 120-minute treatment, for the creation of equidistantly distributed artificial surface defects. The as-quenched ribbon is denoted AQ, the surface-modified one SM.
Surface features of the ribbons were studied by scanning electron microscopy (SEM) using a TM3000 HITACHI instrument. Magnetic measurements at room temperature were done with a vibrating sample magnetometer (VSM, Lake Shore 7404). The total impedance (Z) was measured by the four-point technique for frequencies of 0.5 to 10 MHz and a driving current peak-to-peak intensity of 5 mA. The giant magnetoimpedance ratio for the total impedance was defined as ΔZ/Z = 100 × (Z(H) − Z(Hmax))/Z(Hmax), where Hmax = 200 Oe. For GMI measurements in the presence of the water-based ferrofluid, a 3.5 cm long sample was placed into a plastic tube of 30 mm length and 1 mm diameter. The ribbon was located in the centre of the tube filled with ferrofluid.
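A minimal sketch of the GMI ratio computation as defined above, for one measured field sweep:

```python
import numpy as np

def gmi_ratio(H, Z, H_max=200.0):
    """dZ/Z = 100 * (Z(H) - Z(Hmax)) / Z(Hmax), with Hmax = 200 Oe.

    H: applied field values (Oe); Z: measured total impedance at each field.
    """
    z_ref = Z[np.argmin(np.abs(H - H_max))]  # impedance at the reference field
    return 100.0 * (Z - z_ref) / z_ref

# (dZ/Z)max for a given frequency is then simply gmi_ratio(H, Z).max()
```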
Results and discussion
Fig. 1 shows the XRD spectrum with evidence of the amorphous structure of the ribbon (a very wide peak near 2θ = 45 ± 3°). The SEM micrographs (Fig. 1, insets) show the surface morphology before and after ultrasound treatment in acid, revealing a sufficiently smooth surface on the bright side of the AQ ribbon, with defects typical of the rapid quenching technique oriented in the direction of ribbon displacement during solidification. The SM ribbon has surface defects which can be described as round indentations with an average diameter of about 150 micrometers, with no anisotropy in the shape of the defects or their orientation. The surface defects were equidistantly separated from each other. Surface modification by the controlled ultrasound treatment results in a slight decrease of the saturation magnetization from 87 to 76 emu/g and does not affect the coercivity. The saturation magnetization decrease is consistent with a surface modification that includes the removal of the initial surface layer and the formation of a passivation layer with lower saturation magnetization compared with the material of the AQ ribbon.
The XRD spectra of the MNPs gave a mean crystallite size for air-dried LTE MNPs of 19 ± 2 nm (log-normal distribution), in good agreement with the TEM data and the specific surface area evaluation (Fig. 2). Although the experimental XRD data were well fitted by the magnetite database, it was impossible to distinguish magnetite from maghemite on the basis of the XRD studies alone. The chemical composition of the LTE MNPs was determined by a combination of redox titration and the lattice period analysis provided by XRD: it was close to stoichiometric maghemite (Fe2.72O4). The Ms of the MNPs is about 57 emu/g; this value is lower than for bulk maghemite, as expected for MNPs of the observed average size due to nanoscaling effects [12]. At the same time, the magnetic measurements confirm the concentration of MNPs in the ferrofluid previously determined by the chemical titration technique.

The frequency behavior of the GMI is consistent with skin depth changes: the condition of strong skin effect appears for frequencies above 4-5 MHz. In the frequency range 0.5-10 MHz, the GMI responses of the as-quenched and surface-modified ribbons had very similar shapes, but the GMI values were slightly higher for the AQ ribbons. The M(H) hysteresis loops are similar; we therefore attribute the GMI difference to the surface anisotropy contribution and the dynamic magnetic permeability. An increase of the surface roughness and the partial removal of a very thin surface layer during ultrasound treatment, due to the formation of the round defects, can be the reason for the decrease of the (ΔZ/Z)max value. The (ΔZ/Z)max(f) curves were strongly affected by the presence of FF for the AQ ribbons and less so for the SM ribbons. For the AQ ribbons, covering by the FF resulted in a decrease of the GMI ratio in comparison with the response without FF. The change in the GMI ratio due to the presence of the MNPs can be explained by the effect of their fringe fields on the superposition of the applied direct current magnetic field and the induced transverse alternating current field created by the driving current. The field dependences of the GMI ratios (Fig. 3) show that the general shapes of ΔZ/Z(H) in both cases (AQ and SM ribbons) correspond well to a longitudinal effective magnetic anisotropy (one-peak GMI curve) [13][14]. At the same time, the appearance of a double-peak shape, related to a small contribution of the surface component of transverse magnetic anisotropy [8], is evident at small fields. Increasing the frequency from 3 to 9 MHz changes the shape of the ΔZ/Z(H) curves, and the double-peak behaviour becomes more obvious: the dip of the double peak at zero field deepens as the frequency increases. For the frequency of 3 MHz, the maximum GMI in the presence of FF decreases from 65% to 46%, and for the surface-modified ribbon it decreases from 53% to 43%; i.e., the GMI response of the AQ ribbon offers more stable detection with better signal-to-noise characteristics. For the frequency of 9 MHz a similar conclusion can be drawn, as the maximum GMI in the presence of FF decreases from 85% to 61% for the AQ ribbon and from 78% to 68% for the SM ribbon.
The creation of round defects makes the ribbon less sensitive to the presence of FF, but the origin of this sensitivity is not trivial. The GMI sensing process involves the magnetic field of the alternating current passing through the ribbon, the constant external magnetic field applied during GMI measurements to change the magnetic permeability of the sensitive element, the stray fields created by the MNPs, and the stray fields of the created round defects. Surface modification changes the demagnetizing fields: the defects corresponding to rapid solidification are removed, but new defects in the shape of round indentations are formed. The indentations are large compared with typical quenching defects, but outside the border of the round defects the surface roughness becomes even smaller. We can suppose that flux closure by the magnetic chains formed in the FF [15] contributes to the GMI.
Conclusions
Amorphous Co68.6Fe3.9Mo3.0Si12.0B12.5 ribbons with good corrosion stability were prepared by the rapid quenching technique. Magnetic and GMI properties were studied in the as-prepared state and after the creation of artificial surface defects by ultrasound treatment in acid.
The GMI responses in the as-quenched and surface-modified states, in the absence and in the presence of the LTE MNP-based ferrofluid, model the functionality of a magnetic biosensor for magnetic label detection. The GMI curves were affected by the presence of the ferrofluid in the as-quenched case and much less in the surface-modified case. Covering by the ferrofluid resulted in a decrease of the GMI ratio over the whole frequency range. For the frequency of 3 MHz, the maximum GMI in the presence of FF decreases from 65% to 46%, and for the surface-modified ribbon from 53% to 43%. The GMI response of the AQ ribbon shows better signal-to-noise characteristics. The change in the GMI ratio upon ferrofluid covering can be explained by the effect of the fringe fields of the MNPs and the magnetic flux closure due to MNP chain formation. The obtained results can be useful for the development of a magnetic biosensor prototype with a cheap, disposable, amorphous ribbon based GMI sensitive element operating at low frequencies of the order of 5 MHz.
The Government of the Basque Country is acknowledged for financial support under the Elkartek Program, Project Micro4Fab (KK-2016/00030). The work was also supported in part by the Ministry of Education and Science of the Russian Federation, Project Nº 055, within state assignment 3.6121.2017/8.9. We thank I.V. Beketov, A.I. Medvedev and A. Larranga for special support. Selected measurements were made at the SGIKER services of UPV/EHU.
Fig. 1. XRD pattern for the Co68.6Fe3.9Mo3.0Si12.0B12.5 amorphous ribbon in the as-quenched state; the inset shows an SEM image of the bright side of the AQ ribbon (a). VSM hysteresis loops for the AQ and SM states; the inset shows an SEM image of the bright side of the SM ribbon (b).
Fig. 3 shows the frequency dependences of the maximum GMI ratio (ΔZ/Z)max for the AQ and SM ribbons in the presence and absence of FF: up to f = 5 MHz the GMI maximum increases, and afterwards it changes very little.
Fig. 2. XRD pattern for LTE MNPs from the ferrofluid; the inset shows a TEM image of the LTE MNPs (a). Histogram of the particle size distribution with log-normal fit (blue line) (b). Hysteresis loops of the as-prepared LTE MNPs and of the electrostatically stabilized water-based suspension of LTE MNPs (c).
Fig. 3. Frequency dependences of the (ΔZ/Z)max ratio in the presence and absence of ferrofluid for the as-quenched (a) and surface-modified (b) ribbons, and field dependences of the GMI ratio in the presence and absence of ferrofluid for the as-quenched (c) and surface-modified (d) CoFeMoSiB amorphous ribbons.
| 2,948.4 | 2018-07-04T00:00:00.000 | [
"Materials Science"
] |
Study on Mesoscopic Damage Evolution Characteristics of Single Joint Sandstone Based on Micro-CT Image and Fractal Theory
The different directions of joints in rock lead to great differences in damage evolution characteristics. This study utilizes DIP (digital image processing) technology to characterize the mesostructure of sandstone and combines DIP technology with RFPA2D. The mesoscale fracture mechanics behavior of 7 groups of jointed sandstones with various dip angles was studied numerically, and its reliability was verified through theoretical analysis. Based on the digital image storage principle and box dimension theory, a box dimension algorithm for rock mesoscale fracture was written in MATLAB, a method for calculating the fractal dimension of mesoscale fracture was proposed, and the correspondence between the mesoscale fractal dimension and the degree of fracture damage was established. The study shows that the compressive strength and elastic modulus of the sandstone follow a U-shaped trend as the joint dip increases. There are six final failure modes of the joint samples with different inclination angles. The failure mode and the damage degree can be quantified by D (the fractal dimension) and ω (the mesoscale fracture damage degree), respectively: the larger ω is, the more serious the damage, and the larger D is, the more complex the failure mode. The accumulated AE energy increases exponentially with the loading step, and its growth can be divided into a gentle period, an acceleration period, and a surge period. The mesoscale fracture damage calculation based on the fractal dimension can be used to quantitatively evaluate the spatial distribution characteristics of mesoscale fracture, providing a new way to study the law of rock damage evolution.
Introduction
Because of the long-term influence of various geological processes, rock masses are cut by structural planes of different directions and sizes, forming discontinuous bodies with special structures; this produces complex mesoscopic structures, and the failure mechanism becomes correspondingly complicated [1][2][3][4]. In the process of rock failure, deformation phenomena such as crack initiation, shear zone formation, and the distribution of stress concentration areas are closely related to the internal mesostructure. The heterogeneity of rock and the geometric distribution characteristics of joints with different dip angles have a vital effect on the macroscopic failure mode and the mesoscale damage evolution process of rock. Therefore, studying the macroscopic failure mode and the mesoscale damage evolution process has important theoretical significance for revealing the macroscopic nonlinear mechanical behavior and damage mechanics of the fracture process in jointed sandstone.
In recent years, scholars at home and abroad have continuously studied the effects of joint orientation on the failure modes and damage evolution laws of rock masses and have achieved rich results. Lou et al. carried out a detailed study of the relationship between joint dip angle, failure modes, and shale strength through numerical simulation experiments [5]. Yang et al. examined the impact of joint inclination and spacing on the fracture behavior of sandstone and proposed a calculation model for jointed rock failure [6]. Reik et al. conducted true triaxial compression tests on jointed rock mass specimens and studied the effects of joint direction and intermediate principal stress on the specimens' compressive strength [7]. Sun et al. performed triaxial and uniaxial compression tests on jointed rock specimens in the laboratory and systematically analyzed the relationship between mechanical parameters, such as elastic modulus, and the joint inclination angle [8]. Qian et al. studied the mechanical response and damage process of jointed rock masses under stress waves [9]. Wasantha et al. carried out uniaxial compression on cement mortar joint specimens, showing that compressive strength is affected by trace length, inclination angle, and joint position [10]. Morteza et al. studied the effect of joint direction and spacing on the macroscopic rupture of jointed rock masses [11]. As'habi and Lakirouhani studied the peak strength and damage pattern of jointed rock by numerical simulation [12]. Liang et al. showed that the mechanical process of rock fracture has self-similarity and that rock failure has fractal characteristics, with the stress during loading determining the fractal dimension of the damage [13]. There have been many research results on the relationship between the fractal character of rock mesostructure and compressive strength [14,15]. Zhao et al. studied the propagation of rock cracks through rock mechanics experiments and established a fractal damage constitutive model for rock based on fractal theory [16]. Li et al. used numerical simulation methods to study the failure and fractal characteristics of rock in uniaxial compression tests [17]. Zhang et al. used physical tests and numerical simulation to study the association between the fractal characteristics of the geometrical distribution of cracks after rock failure and the mechanical properties in uniaxial compression tests [18].
Although the above-mentioned research findings provide valuable reference points for understanding the damage and mechanical properties of jointed rock masses at the macroscopic scale, only a few studies have investigated, at the mesoscale, the local failure caused by the uneven stress distribution resulting from the meso-inhomogeneity of the rock mass. The mechanical characteristics and failure modes of rock masses are strongly connected to their mesostructure; that is, the macroscopic mechanical properties and fracture processes of a rock mass depend on the material's mesoscale behavior and mesostructure. By properly representing the mesostructure of the rock in a mesoscopic mechanical model, it becomes possible to gain a better understanding of the failure mechanism and damage evolution process of the rock.
For this reason, this paper utilizes DIP technology to characterize the real mesostructure of sandstone and combines it with the rock fracture process analysis system (RFPA2D) to establish numerical models of the real mesostructure of jointed sandstone with different inclination angles. The macroscopic mechanical characteristics and damage evolution of 7 groups of jointed sandstone with different dip angles under uniaxial compression are simulated, and the influence of mesostructure on the macroscopic mechanical behavior and damage evolution of jointed sandstone with various dip angles is analyzed. Based on fractal theory, the distribution of acoustic emission, the damage evolution, and the fractal characteristics of the failure mode during the rock fracture process are discussed in depth.
Regional Geological Characteristics
e "Golden Triangle" of Yunnan, Guizhou, and Guangxi is a part of the Youjiang Basin that lies on the southwest edge of the Yangzi block. Near about 50,000 square kilometers area is covered by it and it extends approximately 400 kilometers east to west (Figure 1(a)).
In addition to being among the most significant gold resources on the planet, Carlin-type gold deposits are one of the world's primary sources of gold. This kind of deposit is distinguished by ore hosted in sedimentary rocks and by gold that is fine-grained and dispersed throughout [19][20][21]. The Lannigou gold deposit in the "Golden Triangle" is the world's largest Carlin-type gold deposit. In terms of geology, this is a classic fault-controlled deposit [22]. The Bianyang Formation, Niluo Formation, and Xuman Formation are the most exposed strata in the mining region. The dominant lithology is terrigenous clastic turbidite, composed mostly of calcium-bearing sandstone, mudstone, and siltstone from deep-water basins. Faults control the morphology of the ore body, which is mostly concentrated along the northwest-trending fault F3 and where it meets the northeast-trending fault F2. Calcareous fine-grained mudstones and sandstones, ranging in age from the Xuman Formation to the Bianyang Formation, host the mineralization of the region's ore (Figure 1(b)). Sandstone cores from the F3 fault fracture zone of the Bianyang Formation were selected for high-resolution CT scanning (Figure 2) and X-ray mineral diffraction analysis; the sampling locations are shown in Figure 1. The mineral composition of the sandstone, determined with an X-ray diffractometer, is given in Table 1. Table 1 shows that quartz is the most abundant mineral in the sandstone, accounting for 50.9 to 62.9% of the total mineral composition. In the sandstone samples, illite is the most abundant clay component, followed by mixed-layer illite-montmorillonite (6-17%), with smaller amounts of chlorite (3-28%) and kaolinite. The test results therefore show that the samples are composed primarily of brittle minerals such as quartz.
Finite Element Method for the Rock Failure Process
Digital Image Characterization of Sandstone Mesostructure

DIP technology identifies the spatial distribution and geometric shape of the mesocomponents of materials based on variations in gray-scale value and color, rather than on their physical properties. The technique is used to evaluate segmentation thresholds for the different media within rock on the basis of brightness and color; once the segmentation thresholds are determined, the image is classified into the various media, generating an image that characterizes the non-uniformity of the material [23]. High-resolution CT scanning was performed by Tianjin Sanying Company, resulting in the CT slice shown in Figure 3, a true-color, 24-bit image of sandstone with a calcite-filled joint. The dark-colored material is sandstone and the light-colored material is calcite. The image has a resolution of 500 × 500 pixels and a real size of 50 mm × 50 mm. To detect the color change, multi-threshold segmentation was performed by analyzing the variation of the intensity (I) values in the HSI (Hue, Saturation, Intensity) color space, with contrast stretching applied during image processing to increase the tonal distinction between features [23,24]. Figure 3 shows the location of the scan line AA′ as it moves through the image, and also plots how the I value changes along AA′. Many trials in the ImageJ software were conducted to determine the segmentation threshold; by comparing the mineral medium through which the scanning line travels with the change in the curve, the threshold was determined to be 150. This splits the I values into two sections, 0 to 150 (sandstone) and 150 to 255 (calcite), indicating that the internal mesoscopic medium of the test sandstone sample can be divided into two categories according to the I value. Figure 3 also shows the characterization image obtained after image processing; the characterization imagery obtained from threshold segmentation shows the shape and spatial distribution of calcite in the sandstone sample precisely.
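A minimal sketch of this intensity-threshold segmentation; the file name is a hypothetical stand-in for the CT slice, and I = (R + G + B)/3 is used for the intensity channel of the HSI color space.

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open('ct_slice.png').convert('RGB'), dtype=float)
intensity = img.mean(axis=2)          # I = (R + G + B) / 3 per pixel
calcite_mask = intensity >= 150       # I in [150, 255]: calcite (light)
sandstone_mask = ~calcite_mask        # I in [0, 150): sandstone matrix (dark)

print(f"calcite area fraction: {calcite_mask.mean():.3f}")
```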
Constitutive Relationship for Damage on the Mesoscopic Scale

As per the strain equivalence assumption, in RFPA2D the damage variable ω is described as a change in elastic modulus [25]. The constitutive relation following material damage caused by an external force can be written as [26,27] E = (1 − ω) E0, where ω is the damage variable and E0 and E are the elastic moduli of the undamaged and damaged material, respectively. Because sandstone's compressive strength is significantly higher than its tensile strength, we adopt the Mohr-Coulomb strength criterion for element failure in compression or shear, with the tensile criterion taking precedence. The constitutive relation of a mesoscopic element under uniaxial tension or compression is shown in Figure 4. Initially the stress-strain curve is linearly elastic and no damage occurs; the meso element suffers brittle breakage once it reaches the maximum tensile strain. Brittle rocks are especially vulnerable to tensile-induced failure, which is the most common kind of failure [28]. According to the primary damage criterion, damage occurs if the tensile stress exceeds the element's tensile strength f_t; the tensile damage function is expressed in terms of the principal stress vector σ [29].

The damage evolution is controlled by the residual strength coefficient λ of the mesoscopic element, defined through f_tr = λ f_t (where f_t is the element's uniaxial tensile strength and f_tr is the residual strength at the element's initial tensile failure), and by the element's ultimate tensile strain ε_tl. When the element's uniaxial tensile strain reaches the ultimate tensile strain, the element enters the tensile fracture state, i.e., complete failure. The ultimate strain coefficient η is defined by ε_tl = η ε_t0, where ε_t0 is the tensile strain at the elastic limit (the tensile failure strain threshold), calculated as [31] ε_t0 = f_t / E0.

When a mesoscopic element is exposed to uniaxial compression, as illustrated in the first quadrant of Figure 4, the Mohr-Coulomb criterion is used as the second damage criterion, defining element damage under compressive or shear stress conditions [30]: σ1 − σ3 (1 + sin ϕ)/(1 − sin ϕ) ≥ f_c, where ϕ is the friction angle, σ1 and σ3 are the principal stresses, and f_c is the uniaxial compressive strength. The element's damage variable under uniaxial compression is given analogously [32,33], with the coefficient of residual strength λ defined by f_tr/f_t = f_cr/f_c = λ, and the compressive strain at the elastic limit ε_c0 determined as [32,33] ε_c0 = f_c / E0.
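The piecewise tensile damage law implied by these definitions can be sketched as follows; the λ and η values are illustrative, and this follows the standard RFPA formulation rather than being the authors' exact code.

```python
def tensile_damage(eps, f_t, E0, lam=0.1, eta=1.5):
    """Damage variable omega for a meso element under uniaxial tensile strain eps.

    Assumed piecewise law: elastic below eps_t0 = f_t/E0, residual strength
    lam * f_t up to the ultimate strain eps_tl = eta * eps_t0, complete
    failure beyond.
    """
    eps_t0 = f_t / E0            # tensile failure strain threshold
    eps_tl = eta * eps_t0        # ultimate tensile strain
    if eps <= eps_t0:
        return 0.0               # linear-elastic range: no damage
    if eps <= eps_tl:
        return 1.0 - lam * eps_t0 / eps   # partial damage with residual strength
    return 1.0                   # tensile fracture state: complete failure

# the damaged modulus is then E = (1 - omega) * E0
```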
Establishment of the Numerical Model

This research makes use of DIP techniques in conjunction with finite element modeling. In FEM, the study object must be discretized into many small grid elements. Given that a digital image is made up of pixels organized in a rectangle, where every pixel is a small square, each pixel may be treated as a finite element mesh cell (Figure 5). The entire characterization image can thus be transformed into a finite element grid, in which the material parameters of every constituent are allocated according to color and the heterogeneity coefficients of the different components are incorporated into the numerical model. The numerical simulation in this work is carried out using the rock failure process analysis system RFPA2D, which is capable of simulating the mesoscopic fracture progression as well as the whole process of rock fracture [31]. In the numerical computations, we assume that the mechanical characteristics of calcite and of the sandstone matrix follow the Weibull distribution function [34], which takes the heterogeneity of the material into consideration:

f(u) = (m/u0) (u/u0)^(m−1) exp[−(u/u0)^m],

where u stands for a variable such as strength, Poisson's ratio, or Young's modulus; u0 is the corresponding characteristic (scale) value, often referred to as the mean value; m describes the shape of f(u), signifying the degree of heterogeneity, and can be called the homogeneity index; and f(u) is the statistical distribution density of the mechanical properties of the material elements. In this model the inhomogeneity of sandstone and calcite is captured by assigning the mesoelements' mechanical parameters with the Monte-Carlo technique [35,36]. Table 2 shows the mechanical parameters of the mesomedia inside the sandstone [37]. The actual size of the numerical model is 50 mm × 50 mm; the mechanical loading diagram is shown in Figure 6. Displacement-controlled compression loading is adopted in the axial direction, and plane stress is assumed. The initial displacement is 0.001 mm, with a single-step increment of 0.001 mm, loading until specimen failure.
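A sketch of the Monte-Carlo assignment of element properties; here u0 is treated as the Weibull scale parameter of f(u) above, and all numerical values are illustrative.

```python
import numpy as np

def assign_properties(n_elements, u0, m, seed=0):
    """Sample element properties from f(u) = (m/u0)(u/u0)^(m-1) exp(-(u/u0)^m)."""
    rng = np.random.default_rng(seed)
    # numpy's Weibull sampler has scale 1, so multiply by the scale u0
    return u0 * rng.weibull(m, size=n_elements)

# e.g. element elastic moduli (MPa) for a 500 x 500 pixel model, homogeneity index m = 3
E_elements = assign_properties(500 * 500, u0=20e3, m=3.0)
```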
To study the sandstone's mechanical properties at distinct dip angles, and the effects of mesoscopic inhomogeneity caused by the size, distribution, and shape of the calcite-filled joints on macroscopic fracture, it is necessary to keep the basic medium of the image unchanged irrespective of dip angle. To do this, frames from the same section of a two-dimensional micro-CT image are clipped as 50 mm × 50 mm squares at various angles. The square center is fixed and digital images are recorded in 15° counterclockwise steps, giving a total of 7 images. Figure 7 shows the digital images' azimuth angles, α = 0°, 15°, 30°, 45°, 60°, 75°, and 90°, where α is the angle between the horizontal direction and the calcite-filled joint.
Mechanical Properties of Sandstone under Uniaxial Compression

The simulation results for the specimen at α = 45° are selected to analyze the stress distribution characteristics. Figure 8 shows the elastic modulus distribution and the principal stress in the specimen at α = 45° at the initial loading stage. Due to the heterogeneity of the rock mesostructure, the brightness of different areas in the picture differs noticeably. Compared with the elastic modulus map, the internal stress distribution of the specimen filled with calcite veins is found to be inhomogeneous: at the interface (weak structural surface) between the calcite veins and the sandstone, the brightness is higher and a significant stress concentration appears, indicating that greater brightness corresponds to greater stress. This shows that the presence of calcite veins in sandstone and the heterogeneity of the mesostructure have an important influence on the stress distribution. Table 3 lists the elastic modulus and peak strength of the joints at distinct inclination angles. As demonstrated in Figure 9, the compressive strength and elastic modulus of the jointed sandstone show clear anisotropy and vary in a U-shape as the joint inclination increases. This may be due to the inherent anisotropy of the sandstone and the poor cementation of the calcite arising from the arrangement of matrix and minerals. This conclusion agrees well with the findings of Wang et al. [38] and Sun et al. [8], which also indicates that the numerical simulation results are reliable. The compressive strength of the sandstone reaches its maximum of 81.47 MPa at α = 0° and its minimum of 55.68 MPa at α = 60°. When the joint inclination is 60°, the resolved shear stress on the calcite-sandstone contact surface exceeds the combined frictional resistance and cohesive force, leading to shear failure along the calcite-sandstone contact and a very low compressive strength. When the dip angle is 0° or 90°, no sliding occurs along the joint surface, which substantially increases the compressive strength. As shown in Figures 9 and 10, owing to the influence of the sandstone mesostructure, the compressive strength and macroscopic failure modes of the jointed sandstone reflect significant anisotropic characteristics.
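The U-shaped anisotropy is qualitatively consistent with Jaeger's single plane-of-weakness model, in which slip on the joint controls the strength at intermediate dips. Below is a sketch under assumed joint parameters (φ_w = 30°, which places the minimum at α = 45° + φ_w/2 = 60°, and c_w ≈ 16 MPa, chosen so the minimum is near the reported 55.68 MPa); this is an interpretive illustration, not the paper's numerical model.

```python
import numpy as np

def jaeger_strength(alpha_deg, c_w=16.0, phi_w_deg=30.0, sigma_c=81.47):
    """Uniaxial strength (MPa) of a sample with a single joint dipping alpha_deg."""
    if alpha_deg <= phi_w_deg or alpha_deg >= 90.0:
        return sigma_c                 # slip kinematically impossible: matrix failure
    a, phi = np.radians(alpha_deg), np.radians(phi_w_deg)
    # slip on the joint: sigma_1 = 2 c_w / [(1 - tan(phi) cot(alpha)) sin(2 alpha)]
    sigma_slip = 2.0 * c_w / ((1.0 - np.tan(phi) / np.tan(a)) * np.sin(2.0 * a))
    return min(sigma_slip, sigma_c)    # the weaker mechanism governs

for alpha in range(0, 91, 15):
    print(f"alpha = {alpha:2d} deg  ->  {jaeger_strength(alpha):5.1f} MPa")
```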
In the acoustic emission diagrams of Figure 10, white shows compressive-shear damage created by elements during the current loading step, yellow denotes tensile damage at the current step, and black elements indicate all accumulated damage.
As the figure shows, when α = 0°, cracks initiate at the calcite veins on the left side of the specimen at about 45°; the accumulation of tensile failure drives stable crack growth, which eventually leads to crack coalescence and an oblique Z-shaped failure. When α = 15°, cracks initiate at the left end of the specimen and spread steadily perpendicular to the calcite vein. With increasing axial stress, a large amount of tensile failure occurs inside the specimen, accompanied by a small amount of shear failure, producing crack growth and coalescence and eventually an M-shaped failure. When α = 30°, cracks begin on the left side of the specimen along the weak surface of the calcite vein ... When α = 60°, cracks initiate at the lower end of the calcite veins and extend along them. As the stress increases, the cracks grow steadily along the direction of maximum principal stress; a macroscopic shear zone forms from the large amount of tensile failure inside the sample, and finally an N-shaped failure develops. When α = 75°, cracks start at the upper end of the calcite vein and extend through it. As the stress increases, the calcite veins are penetrated, and the specimen eventually fails in a linear mode. When α = 90°, cracks initiate at both ends of the calcite veins. As the stress rises, two key cracks appear on the two sides of the calcite vein, oriented at about 30° to it; as loading progresses, a large amount of tensile failure occurs inside the sample, the calcite veins are penetrated, and an oblique N-shaped failure forms. Figure 10: Crack propagation characteristics and fracture process evolution diagram of jointed sandstone with different dip angles.
From the acoustic emission evolution diagram, it can be seen that most failed elements are tensile (yellow), and that the macroscopic shear bands formed at specimen failure are connected mostly by tensile failure elements.
This is because there is a stress concentration zone in the compressed rock: the calcite veins filling the sandstone are weak structural planes, so damage and failure occur there first under mechanical loading. The stress first exceeds the rock strength at specific local positions and causes damage; this effect of stress concentration is the main reason for the variations in the damage evolution and mechanical failure behavior of the different mesostructures.
Acoustic Emission Evolution Characteristics.
In rock deformed under load, acoustic emission (AE) is a measurable response that occurs every time a microfracture forms. It is a useful instrument for investigating the development of internal damage in rock, since it is caused by the rapid release of acoustic energy during microcrack formation and growth. Because in RFPA2D the failure of every element represents the source of an acoustic event, and because element failure releases the stored elastic energy during the deformation process, RFPA2D may be used to simulate acoustic emission activity [23]. The evolution of the acoustic emission process describes the whole rock fracture process, making it feasible to evaluate the fracture evolution law as the rock fractures by counting the number of damaged elements and the energy released by the resulting acoustic emission. Figure 11 is a trend diagram of load-displacement, AE energy, and accumulated AE energy with loading step under the different loading conditions. The clear differences in acoustic emission counts during the failure of jointed sandstone with different dip angles are related to the failure modes of the samples. Figure 11 shows that stress increases with loading step. Although there is an evident stress drop after the peak strength, significant residual strength remains. All samples pass through three stages during failure: the elastic stage, the yield stage, and the failure stage. Since no element damage occurred at the initial loading stage, the AE count and accumulated AE energy were essentially zero. Under the continuing axial compressive stress, element damage appears first in the α = 90° sample. AE energy is the elastic energy released under compression; the figure shows that AE energy gradually rises as the load increases and reaches its maximum near the peak strength. This is because the greater the load, the more elements are damaged and the more elastic energy is released. When α = 90°, two microcracks sprout from the two ends of the calcite vein (Figure 10), so the AE energy distribution is the densest and the internal damage is the most severe (Figure 11). Next, the AE energy distribution is relatively dense in the interval α = 0°-45° (Figures 11(a)-11(d)), and, finally, relatively sparse in the interval α = 60°-75° (Figures 11(e)-11(f)). Accumulated AE energy increases exponentially with loading step, and its growth can be divided into a gentle period, an acceleration period, and a surge period. The gentle period occurs in the initial loading stage: AE events are few, the AE signal is relatively weak, the stress is in the linear elastic stage, and no obvious cracks are generated. In the acceleration period, the cumulative AE events increase linearly, AE events multiply, a large amount of elastic energy is suddenly released, the AE signal is strong, and cracks extend and propagate rapidly. When the stress reaches its peak, the surge period begins: the sandstone sample suddenly fails, and the cumulative AE energy rises instantly to its maximum.
Fractal Analysis of Mesoscale Damage of Sandstone Images
Fractal Analysis Based on Box Dimension.
Mandelbrot proposed fractal theory, which analyzes and investigates the various unstable, irregular, and extremely complex phenomena that occur in nature on the basis of mathematical calculation, and from which fractal damage theory developed. It has found widespread use in a variety of areas, including geology and nonlinear science [24,39]. Specifically, in this work we choose a self-similarity box dimension computation technique, described as follows [40,41]:

$$D_s = -\lim_{r_k \to 0} \frac{\lg N_{r_k}(A)}{\lg r_k},$$

where D_s is the self-similar fractal dimension of the damaged region, r_k is a decreasing sequence of square box sizes, and N_{r_k}(A) is the least number of boxes of size r_k needed to cover the target set A. This study uses the box dimension to investigate the fractal character of the mesoscale failure-element area of jointed sandstone with varying dip angles in order to better understand the failure mechanism; the fractal dimension of the mesoscale failure-element area can be determined for the different dip angles at different stress levels. Figure 12(a) presents the binary image of mesoscale fracture evolution at different stress levels when α = 30°. Different areas differ in the density and coverage of the failure elements, and hence in the number of pixels covering the failure-element area; accordingly, the fractal dimension also differs. The box-covering technique is used to calculate the fractal dimension of the failure elements from the number of pixels covered by the damaged-element area; the image resolution is 500 × 500 pixels. Figure 12(b) shows the box covering of the various regions: the mesoelement failure area is divided into a small square grid with side length r_k (the side of one image pixel is taken as length 1 in this paper), and the number N_{r_k} of all boxes containing failure elements is counted. The sequence r_k is constructed by successive halving (a dichotomy). If the failure-element distribution in the area has fractal features, then as r_k → 0, -lg N_{r_k}/lg r_k → D, where D is the fractal dimension of the failure-element field (acoustic emission field) in the area. Hence, in a double logarithmic coordinate system, the data points (lg r_k, lg N_{r_k}) are fitted linearly by the least squares method, and the straight-line equation

$$\lg N_{r_k} = -D \lg r_k + b$$

can be found, where D is the box-counting dimension of the failure-element field and b is the fit intercept. MATLAB programming is used to automatically mesh and statistically analyze the acoustic emission evolution image of mesoscale rock failure and to determine the fractal dimension of the rock failure by the approach above. The calculation procedure is depicted in Figure 13.
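As a concrete illustration of the dyadic box-covering scheme just described (the paper implements it in MATLAB), here is a minimal Python sketch; the synthetic image stands in for a 500 × 500 binarized acoustic emission map and is not the paper's data.

```python
import numpy as np

def box_counting_dimension(img, box_sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension of a binary image.

    img       : 2-D boolean array, True where a failure (AE) element lies.
    box_sizes : dyadic box sizes r_k in pixels, as in the halving scheme above.
    """
    counts = []
    for k in box_sizes:
        # Pad the image so it divides evenly into k x k boxes.
        m = int(np.ceil(img.shape[0] / k)) * k
        n = int(np.ceil(img.shape[1] / k)) * k
        padded = np.zeros((m, n), dtype=bool)
        padded[: img.shape[0], : img.shape[1]] = img
        # N(r_k): number of boxes containing at least one failure pixel.
        boxes = padded.reshape(m // k, k, n // k, k).any(axis=(1, 3))
        counts.append(boxes.sum())
    # Least-squares fit of lg N(r_k) against lg r_k; the slope is -D.
    slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
    return -slope

# Synthetic stand-in for a binarized acoustic emission map.
rng = np.random.default_rng(0)
ae_map = rng.random((500, 500)) < 0.05
print(f"D = {box_counting_dimension(ae_map):.2f}")
```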
Whereas a damage variable may be used to statistically characterize the progression of microfracture, it does not reflect the spatial distribution of the microfractures, so the results are limited. Xie has proven that the rock failure process has fractal properties from the moment a fracture begins [42]. Following this understanding, the fractal dimension of the acoustic emission field is used as the characteristic parameter describing the mesoscale damage evolution of the rock. The fractal dimension based on the acoustic emission field not only quantitatively captures the evolution of mesoscale rock failure but also unifies the damage evolution of the mesoelements with the macrodamage characteristics of the material. The connection between the degree of destruction of the rock mesoscale fracture and the fractal dimension D of the acoustic emission field in the specimen may thus be developed and stated as

$$\omega = \frac{D - D_0}{D_{\max} - D_0},$$

where ω is the fracture damage degree, D is the fractal dimension of the damaged area of the mesoscopic elements of the rock after stress loading, D_0 is the fractal dimension of the initial damage area before stress loading, and D_max is the fractal dimension when the mesoscopic elements reach the maximum damage area. Here D_0 = 0 for the initially undamaged specimens, and D_max = 2, the dimension of the fully damaged plane. The acoustic emission charts for the different dip angles at the corresponding stress levels are analyzed using MATLAB. Figure 14 shows the process of calculating the box dimension of an acoustic emission binary image, and Figure 15 is the fractal fitting diagram of sample failure at a stress level of 50% and α = 30°. The damage degree, acoustic emission energy values, and fractal dimensions of the specimens at the various stress levels are given in Table 4.
It can be observed in Figure 16 that below a stress level of 70% the acoustic emission energy curve rises relatively gently for every azimuth angle, with a consistent trend. Above a stress level of 80%, the acoustic emission energy curve of the sample rises rapidly, reaching its maximum at α = 90°. The samples at α = 15° and α = 60° take second place, and at α = 75° the acoustic emission energy is the minimum. This shows that at α = 90° the energy released by sample failure under load is the largest and the ultimate damage the most severe. It can be seen from Figures 17 and 18 that, for all azimuth angles, the fracture damage degree and the fractal dimension increase continuously with stress level, and the two follow the same trend: the fractal dimension of the rock damage zone is positively correlated with load, and its rise is synchronized with the growth of damage. At a stress level of 10% the samples are in the elastic stage and D_s = 0, meaning that the samples are undamaged. Below a stress level of 30%, D_s increases rapidly for all azimuth angles. Above a stress level of 40%, the fractal dimension and the fracture damage of the samples increase with stress level, with comparable trends. When α = 90° at the 100% stress level, the fractal dimension is 1.80 and the fracture damage degree is 0.90, both maxima. When α = 75° at the 100% stress level, the fractal dimension is 1.52 and the fracture damage degree is 0.76, both minima. As shown in Section 4.2, when α = 90° the sample shows an oblique N-shaped failure; the final failure mode is the most complex and the damage the most serious, so the fracture damage and fractal dimension are the largest. When α = 75°, the cracks initiated, expanded, and penetrated along the calcite veins, so the fracture damage and fractal dimension are the smallest. For specimens whose final fracture modes are V-shaped, oblique Z-shaped, inverted N-shaped, and M-shaped, the fractal dimension lies between those of the linear and oblique N-shaped failures. Thus, the larger D_s, the more complicated the final failure mode, the greater the fracture damage degree, and the more severe the final damage of the specimen.
Reliability Verification of Numerical Simulation Results.
Since the above research is based on numerical simulation, the established numerical model needs to be verified to confirm its reliability. This section verifies it by theoretical analysis. Figure 19 shows a mechanical model with a single joint, where β is the angle between the joint and the direction of the maximum principal stress. According to Mohr circle theory, the normal stress σ and shear stress τ acting on the joint surface are

$$\sigma = \frac{\sigma_1 + \sigma_3}{2} - \frac{\sigma_1 - \sigma_3}{2}\cos 2\beta, \qquad \tau = \frac{\sigma_1 - \sigma_3}{2}\sin 2\beta. \tag{12}$$

For specimens with a single joint, the condition for failure along the joint is

$$\tau \ge c_j + \sigma\tan\varphi_j, \tag{13}$$

which gives

$$\sigma_1 - \sigma_3 \ge \frac{2\,(c_j + \sigma_3\tan\varphi_j)}{\sin 2\beta\,(1 - \tan\varphi_j\tan\beta)}. \tag{14}$$
In these formulas, c_j and φ_j are the cohesion and internal friction angle of the joint surface, and β is the angle between the joint and σ_1.
Under uniaxial compression (σ_3 = 0), formulas (12) and (14) simplify to the compressive strength of the jointed model,

$$\sigma_1 = \frac{2c_j}{\sin 2\beta\,(1 - \tan\varphi_j\tan\beta)}. \tag{15}$$

For the case analyzed here, (1 − tan φ_j tan β) = 0.42265 > 0 (k = 3). The results show that the jointed model will fail along the joint surface under uniaxial compression. Figure 20 shows the numerically simulated fracture process of the jointed sandstone: the initial cracks start at the calcite veins and propagate along both ends of the vein, which is consistent with the results of Wang et al. [38].
Figure 15: Fitting curve diagram of fractal characteristics of the damage area when the joint inclination is 30° (stress level 50%).
However, Wang et al. [38] and Sun et al. [8] only studied the macromechanical properties of rocks and did not consider the influence of the rock mesostructure on its macromechanical behavior. Nevertheless, the stress distribution and failure mode of rock are closely related to its mesostructure, and the macroscale fracture process and mechanical properties depend on the mesoscale behavior and mesostructure of the material. In this paper, the failure mode of jointed sandstone with different dip angles is studied with the mesostructure included in the numerical model, and the outcomes demonstrate that the energy released by the fracture at α = 90° is the largest, the final failure degree is the most severe, and the internal damage is the most serious. Further, the more fully the rock is broken, the more complex the fracture mode. In mining activities, it is therefore necessary to understand the geological conditions fully and to choose a position where the joint inclination is close to vertical for blasting. This will make the ore crush more fully, thereby improving mining efficiency.
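As a numerical companion to this verification, here is a minimal Python sketch of the single-joint criterion as reconstructed above; the cohesion and friction angle are illustrative assumptions rather than the paper's calibrated values.

```python
import numpy as np

def joint_slip_strength(beta_deg, c_j=10.0, phi_j_deg=30.0):
    """Uniaxial strength for slip along a single joint, per Equation (15) above.

    beta_deg  : angle between joint and maximum principal stress (degrees)
    c_j       : joint cohesion (MPa) -- illustrative assumption
    phi_j_deg : joint friction angle (degrees) -- illustrative assumption
    """
    beta = np.radians(np.asarray(beta_deg, dtype=float))
    phi = np.radians(phi_j_deg)
    denom = np.sin(2 * beta) * (1 - np.tan(phi) * np.tan(beta))
    # Slip along the joint is only possible where the denominator is positive;
    # otherwise the specimen must fail through the intact rock instead.
    return np.where(denom > 0, 2 * c_j / denom, np.inf)

# The check quoted in the text: 1 - tan(30 deg) * tan(45 deg) = 0.42265 > 0.
print(1 - np.tan(np.radians(30)) * np.tan(np.radians(45)))  # ~0.42265

# Strength is lowest near beta = 45 - phi/2 = 30 degrees, i.e., dip alpha = 60,
# consistent with the minimum compressive strength reported at alpha = 60.
for b in (15, 30, 45, 75):
    print(b, joint_slip_strength(b))
```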
Fractal Characteristics and Application of Box Dimension of Acoustic Emission Evolution Image.
The fractal study shows that the fractal dimension can quantitatively characterize the complexity of the sandstone failure mode, and the fracture damage degree can quantitatively describe the extent of sandstone damage. Figure 15 shows a correlation coefficient of R² = 0.973, which indicates that the damage evolution of sandstone is fractal, that the mesoscale fracture distribution has good self-similarity, and that the fractal dimension has high credibility, consistent with the conclusion of Liang et al. [13]. However, the damage fractal of rock throughout loading is related to the stress [14,15]. Zhao et al. studied the propagation of rock cracks through rock mechanics experiments and established a fractal damage constitutive model for rock based on fractal theory [16]. Zhang et al. used physical tests and numerical simulations to study the correlation between the fractal characteristics of the geometrical distribution of cracks and the mechanical properties after final rock failure in uniaxial compression tests [18]. Rock failure is in fact a process of cumulative damage development. If the fractal dimension of the damage occurring during failure is calculated for rocks at various stress levels, the variation law of the fractal dimension during damage evolution can be observed. However, the above studies only consider the variation of the fractal dimension during a certain stage of the fracture process, and there has been little research on the entire process from initial damage to the final failure of the rock. Moreover, numerical simulation of the fractal characteristics of rock is usually based on the assumption that the rock microstructure is randomly distributed, without considering the nonuniformity of real rock.
In this study, through the development of a numerical model that takes into account the real mesostructure of rock, the authors investigated the fractal characteristics of acoustic emission at various stress levels. The results of the fractal analysis revealed that the material damage evolution process is fractal and that the fractal dimension is an attribute quantity reflecting the degree of material damage. The use of fractal theory in geotechnical engineering allows a better knowledge of the rock itself. Using the box dimension of the acoustic emission field as a parameter describing rock mesoscale fracture not only solves the otherwise difficult problem of quantifying discontinuous interfaces in the rock but also links the evolution of microcracks in the rock with the macroscopic mechanical behavior, overcoming the disadvantages of other approaches. On the basis of this research, the authors developed a digital-image box-dimension calculation program that can be used to compute the fractal dimension and analyze the mechanical characteristics of rock damage, in order to further reveal the rock's failure mechanism.
Conclusion
(1) On the basis of digital image storage and box-dimension theory, MATLAB is used to develop a box-dimension algorithm for mesoscale rock failure based on digital images, and a mesoscale fracture damage assessment index based on the box dimension of the acoustic emission field is established. This approach can be used to describe the progression of rock mesoscale failure in quantitative terms: the larger the fractal dimension, the greater the rock damage.
(3) In the present study, the oblique N-shaped failure mode has the greatest fractal dimension, 1.80. The linear failure mode has the smallest, 1.52. The fractal dimensions of the M-shaped, N-shaped, V-shaped, and oblique Z-shaped modes lie between these two values. The fractal dimension can thus effectively characterize the failure mode of the jointed sandstone: the larger the fractal dimension, the more complicated the rock failure mechanism. (4) When the acoustic emission field is used as a parameter characterizing rock mesoscale failure, the evolution of mesoscale failure is connected to the evolution of macromechanical behavior, which overcomes the problem that other damage definition techniques require several rock characteristic parameters, and provides a new way to quantitatively evaluate the damage degree of the rock acoustic emission field. (5) AE energy increases with load and reaches its maximum near peak strength, because the greater the load, the more elements are damaged, the more elastic energy is released, the denser the AE energy distribution, and the more severe the internal damage. The AE energy distribution is densest at α = 90°, denser in the interval α = 0°-45°, and relatively sparse in the interval α = 60°-75°. Accumulated AE energy increases exponentially with loading step, and its growth can be divided into a gentle period, an acceleration period, and a surge period. | 8,909.6 | 2021-10-04T00:00:00.000 | [
"Physics"
] |
Physicochemical properties and lubricant potentials of Blighia sapida Sapindaceaeae seed oil in solid dosage formulations
Purpose: To investigate and compare the physicochemical properties and lubricant potential of Blighia sapida seed oil (BSSO) with those of magnesium stearate, a commercial lubricant. Methods: Dried, powdered Blighia sapida (BS) seeds were macerated with n-hexane for five days to extract the oil. The physicochemical properties of the oil (solubility profile, acid value, saponification value, and iodine value) were determined using standard methods. Batches of ascorbic acid tablets, compressed at the same compression settings using different concentrations of BSSO as lubricant, were evaluated for friability, weight uniformity, hardness, disintegration, and dissolution. Results: BSSO had a density of 0.9 g/ml, an acid value of 2.65 ± 0.20 mg KOH/g, a saponification value of 141.65 ± 0.75 mg KOH/g, and an iodine value of 62.50 ± 3.71 mg I2/g, among other parameters. Fatty acid methyl ester (FAME) analysis revealed 96.89 % monounsaturated fatty acids and esters in the C15-C23 range; a C23 compound, 22-tricosenoic acid, was the dominant compound (46.82 %). The oil showed excellent lubrication properties in ascorbic acid tablets at a low concentration (0.5 %), similar to 2 % magnesium stearate. However, a higher concentration (5 %) of BSSO resulted in granules that could not be compressed into tablets. Tablets containing BSSO demonstrated satisfactory friability, weight uniformity, hardness, disintegration, and dissolution characteristics. Conclusion: Blighia sapida seed oil is a potentially useful low-cost tablet lubricant. However, further investigations on the excipient, including stability, toxicity, etc., are required to ascertain its suitability.
INTRODUCTION
Plant materials are a major source of low-cost and novel excipients. These materials can be tailored for many applications, since they can be modified to yield new materials with various physicochemical and functional properties. The search for under-utilized, novel, and renewable materials as sources of low-cost excipients for the pharmaceutical industry has been a major focus of research [1,2].
Excipients are essential for the manufacturing and administration of a dosage form and also contribute to the stability and bioavailability of the active drug [3]. Exploring local sources of low-cost substitutes could lead to the discovery of excipients with outstanding properties [4]. Products derived from plant sources can be used as binders, disintegrants, diluents, vehicles, and lubricants in the formulation of different dosage forms.
Lubricants improve powder processing by reducing or preventing friction, heat, and wear. The fundamental principles of lubrication, in terms of mechanisms of action in pharmaceutical processes, have been well documented [5]. In terms of their chemical structure, commonly used boundary lubricants include long-chain molecules with active end-groups such as -NH2 (long-chain amines), -OH (long-chain alcohols), and -COOH (long-chain fatty acids), as well as metal ions such as Mg2+ [6].
Blighia sapida (ackee) is a fruit tree which originates from West Africa [7]. The mature fruit is noted for its food, medicinal, and aesthetic values [8]. The black shiny seeds revealed after ripening are usually thrown away in most places after consumption of the fruit aril. Djenontin et al. [9] reported that BSSO contains monounsaturated and saturated fatty acids. However, efficient utilization of the seed oil as a lubricant would require adequate information on its characteristics and its functional, physicochemical, and even storage properties.
To the best of our knowledge, there is no report available on the use of BSSO as a pharmaceutical lubricant. This work was aimed at investigating the lubricant potential of BSSO and its possible application as a novel, effective, and low-cost alternative lubricant in solid dosage formulations.
EXPERIMENTAL
Materials
Ascorbic acid powder, maize starch, magnesium stearate, sodium starch glycolate and talc were gifts from Ecomed Pharma Ltd, Ota, Ogun State, Nigeria. All other chemicals and solvents used were of analytical grade and were manufactured by BDH Chemicals, Poole, England.
Plant collection and extraction of seed oil
The dried seeds of Blighia sapida were collected in August 2015 from local farmers in Ogbomosho, Nigeria. They were identified and authenticated by Mr. O. O. Oyebanji of the Department of Botany, University of Lagos, Nigeria, and given the voucher number LUH 6709. The specimen sample was deposited in the University of Lagos herbarium, Department of Botany, for future reference. The seed coats were removed manually by peeling with knives, and the seeds were dried at 40 °C for 48 hours. The dried seeds were milled and screened through a mesh with a sieve aperture of 0.5 mm. The resulting powder was packed in sealed polyethylene bags and stored at room temperature until use. Two kilograms of the powder was macerated with n-hexane for 5 days. The oil was thereafter separated and stored at 4 °C [9].
Physicochemical characterisation of Blighia sapida seed oil
The macroscopic and organoleptic properties of the ackee seed oil were evaluated; appearance, colour, odour, and flavour were examined and documented. The oil was also evaluated for solubility in water, ether, acetone, benzene, chloroform, and ethanol in accordance with standard protocols [10].
For the determination of acid, saponification, iodine and peroxide values of BSSO, standard methods were employed [11].
Fatty acid methyl ester (FAME) analysis of Blighia sapida seed oil
The BSSO fatty acid methyl ester (FAME) analysis was carried out using gas chromatography-mass spectrometry (GC-MS; GCMS-QP2010 Plus, Shimadzu, Japan); column oven: 50 °C, injection volume:
Formulation of ascorbic acid granules and tablets
Six batches of granules containing ascorbic acid (Table 1) were prepared using the wet granulation method: batch A1, the reference standard, contained magnesium stearate as lubricant, while batches A2 to A6 contained various concentrations of the seed oil as lubricant. Each batch was prepared by sieving the required amounts of the bulking agent (maize starch) and ascorbic acid into a clean bowl and mixing them intimately. A sufficient quantity of the binder (2 % w/v maize starch mucilage) was added to the dry powders and mixed to form a wet mass. The wet mass was dried at 50 °C for an hour in a tray-drier oven (RDTD-48, RidhiPharma, India) and then sieved through an American Standard Sieve No. 16. The resulting granules were further dried for about 12 hours. The disintegrant (4 % w/w sodium starch glycolate) and glidant (4 % w/w talcum) were added and mixed with the granules for 5 minutes before mixing with the lubricant for 2 minutes.
The granules prepared were evaluated for their flow properties and then compacted using a rotary press (Double Rotary Press, Cadmach Ahmebad-B, India) at the same compression settings.However, batch A6 containing 5 % BSSO as lubricant could not be compressed into tablets at the same compression settings as other batches.
Evaluation of ascorbic acid tablets
The weight uniformity, tablet thickness, tablet hardness, tablet friability and disintegration times of the different ascorbic acid formulations were determined using the methods reported in a previous study [12].
The average weight of each batch was determined by weighing twenty randomly selected tablets individually and collectively on a digital weighing balance (Contech CB series, Greifensee, Switzerland). The deviation of each individual tablet in each batch from the average weight of the sample was determined. The percentage deviation was also calculated.
The thicknesses of ten tablets were evaluated using a calliper (Mitoyo Digimatic calliper, Mitoyo, Japan), and the mean, standard deviation, and confidence interval were then computed.
For the hardness test, ten tablets were randomly selected and the force required to crush each of them was determined using a hardness tester (Monsanto, Dolphin TM, Mumbai, India).
The friability of ten randomly selected tablets was determined for each batch in a friabilator (ET-2 model, Electrolab, India). The drum was rotated at 25 rotations per minute (rpm) for 4 minutes. The loss of tablet weight with respect to the initial weight was then calculated after the tablets were re-dusted, weighed, and observed for capping and lamination.
The disintegration time of six randomly selected tablets from each batch was determined using a USP disintegration tester (model ED-2AL, Electrolab, Mumbai, India). The apparatus was operated with distilled water as the medium and immersion fluid, and the setup was maintained at 37 ± 2 °C. One tablet was placed in each of the six tubes of the disintegration basket. The time taken for all tablet particles in each unit to pass through the mesh was recorded.
The dissolution test was carried out using the method stated in the USP [13]. Drug release was determined using a USP dissolution tester (TDT 08L, Electrolab, Mumbai, India) of the paddle type. The dissolution medium was 900 mL of distilled water at a paddle speed of 50 rpm. The apparatus ran for 45 minutes, after which 20 mL of each sample was withdrawn and filtered. It was immediately titrated against standard dichlorophenol-indophenol VS. The end point was detected when a rose-pink colour that persisted for at least 5 s was observed. Ascorbic acid (A) content was calculated using Equation (1):
$$A\,(\%) = \frac{(v_s - v_b)\,F\,V_M}{a\,L} \times 100, \tag{1}$$

where v_s is the titrant volume consumed by the sample (mL), v_b is the titrant volume consumed by the blank (mL), F is the factor (concentration of the titrant in terms of the equivalent of ascorbic acid, mg/mL), V_M is the volume of the medium (900 mL), a is the volume of the aliquot taken for analysis, and L is the labelled amount of ascorbic acid (mg/tab).
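For illustration, a minimal Python sketch of Equation (1) as reconstructed above from its variable definitions; the numbers fed into it are placeholders, not the study's measurements.

```python
def ascorbic_acid_released(vs, vb, F, a, VM=900.0, L=100.0):
    """Percent of the labelled ascorbic acid released, per Equation (1)
    as reconstructed above.

    vs, vb : titrant volumes consumed by sample and blank (mL)
    F      : titrant factor (mg ascorbic acid equivalent per mL titrant)
    a      : aliquot volume taken for analysis (mL)
    VM     : dissolution medium volume (mL); 900 mL in this study
    L      : labelled ascorbic acid per tablet (mg) -- illustrative default
    """
    return (vs - vb) * F * VM / (a * L) * 100.0

# Illustrative numbers only: a 20 mL aliquot titrated with 8.6 mL
# against a 0.2 mL blank at F = 0.24 mg/mL.
print(f"{ascorbic_acid_released(vs=8.6, vb=0.2, F=0.24, a=20.0):.1f} % released")
```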
Statistical analysis
The data emanating from the study were analyzed using OriginPro 2016 (64-bit) software (OriginLab Corporation, Northampton, MA 01060, USA). Comparison of means with the standard was evaluated using one-way analysis of variance (ANOVA) at the 95 % confidence level (p < 0.05). Significant differences between mean values were determined by the Tukey test.
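For readers reproducing the analysis, a minimal Python sketch of the same ANOVA-plus-Tukey workflow is shown below (the study used OriginPro); the hardness numbers are synthetic placeholders, not the study's raw data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic hardness data (kgf) for three batches -- illustrative only.
a1 = np.array([3.1, 2.9, 3.0, 3.2, 2.8])   # 2 % magnesium stearate
a2 = np.array([3.0, 3.1, 2.9, 3.0, 3.1])   # 0.5 % BSSO
a5 = np.array([2.1, 2.0, 2.2, 1.9, 2.0])   # 4 % BSSO

# One-way ANOVA at the 95 % confidence level (p < 0.05), as in the paper.
f_stat, p_value = stats.f_oneway(a1, a2, a5)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's test identifies which batch means differ significantly.
values = np.concatenate([a1, a2, a5])
groups = ["A1"] * 5 + ["A2"] * 5 + ["A5"] * 5
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```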
Percentage yield, organoleptic properties and solubility of Blighia sapida seed oil
The percentage yield of the oil was 12.5 %. The oil had a faint sweet smell and was soluble in benzene and chloroform but insoluble in ethanol and water.
Physicochemical properties of Blighia sapida seed oil
The chemical properties of BSSO are presented in Table 2. The acid value of BSSO was 2.65 mg KOH/g, while the saponification value was 141.65 mg KOH/g. Blighia sapida seed oil had an iodine value of 62.50 mg I2/g, and a peroxide value of 8.68 mg reactive O2/g was obtained for the oil. The FAME composition is given in Table 3: the BSSO contained predominantly monounsaturated and saturated fatty acids, along with straight-chain hydrocarbons and an aromatic compound.
Properties of ascorbic acid granules and tablets
The properties of the ascorbic acid granules (Table 4) showed that the Hausner's ratios of the granules containing different concentrations of the oil (1.00-1.04) were less than 1.15, while the compressibility index varied from 0.34 % to 11.59 %. The lowest concentration of the seed oil (0.5 %) gave the lowest Hausner's ratio (1.00) and compressibility index (0.34 %), while the highest concentration (5 %) gave the highest Hausner's ratio (1.13) and compressibility index (11.59 %). Batch A2 granules containing 0.5 % BSSO therefore had the best flow properties. The properties of the formulated ascorbic acid tablets are presented in Table 5. All batches (A1 to A5) passed the weight uniformity test. The hardness test results showed no significant difference between formulation A1, containing 2 % magnesium stearate, and A2, containing 0.5 % BSSO. Formulation A5, containing 4 % BSSO as lubricant, had the least crushing strength (Table 5).
The disintegration time results (Table 5) showed that A1 and A2 had the same disintegration time (2.5 minutes), while A4 had the shortest (1.3 minutes); 0.5 % BSSO gave the same disintegration time (2.5 minutes) as 2 % magnesium stearate. The dissolution test indicated that all the formulated ascorbic acid tablets released 85 % of the drug content at 45 minutes.
DISCUSSION
The low yield of the oil (12.5 %) compared to those recorded in previous studies (21.6 % [9], 15.61 % [14], 21.0 % [15]) might be attributed to various factors, such as differences in the genetic make-up of the plants, the maturity of the plant at the time of collection, the place of collection of the plant materials [16], and the mode of extraction employed in the different studies. Hence, there is a need for optimization of the extraction process prior to large-scale seed valorization.
The good flavor and faint sweet pleasant smell of the seed oil are desirable in materials to be used as excipients in the pharmaceutical industry so as to ensure good appeal to the patient and to encourage compliance.
Acid value is a measure of the free fatty acids (FFA) present in the oil. The lower the acid value of an oil, the fewer free fatty acids it contains, which makes it less prone to rancidity [14,16]. The acid value can also be used to check the level of oxidative deterioration of an oil by enzymatic or chemical oxidation. The acid value obtained for the BSSO (2.65 mg KOH/g) was lower than those reported in earlier studies (39.49 mg KOH/g, 14.2 mg KOH/g) [14,18] but was similar to that of beniseed oil (2.77 mg KOH/g and 2.74 mg KOH/g) [19]. The lower acid value of BSSO might be an advantage in terms of stability when used in formulations. The low acid value reported for BSSO might be attributed to the absence of polyunsaturated fatty acids in the oil.
The saponification value (141.65 mg KOH/g) of the BSSO reported in this study was similar to the value (145.0 mg KOH/g) reported by Djenontin et al. [9]. The saponification value is the quantity of potassium hydroxide (in mg) needed to neutralize the acids and saponify the esters contained in 1 g of the lipid [17]. The higher the saponification value of an oil, the higher the lauric acid content and the better its suitability for soap and shampoo formulations. Hence, BSSO could be a good ingredient for soap and shampoo formulations.
The iodine value of 62.50 mg I2/g obtained for BSSO was similar to those reported in earlier studies (66.0 [9], 65.4 [20], and 65.0 [21]) but higher than that reported by Onuekwusi et al. [14]. The iodine value measures the degree of unsaturation in a fat or vegetable oil; the value shows that BSSO contains a reasonable number of carbon-carbon double bonds. The BSSO peroxide value of 8.68 mg reactive O2/g is within the WHO/FAO stipulated maximum peroxide level of not more than 10 milliequivalents of peroxide oxygen/kg for food oils [17]. Hence, BSSO might be considered safe for oral consumption.
The fatty acid methyl ester composition of Blighia sapida oil obtained from different sources by different authors has been reported [9,14,15], and some results differed significantly. The main features of the FAME analysis of BSSO in this study were its content of fatty acids in the C23 range, which accounted for 46.82 wt%, followed by the C20 range (24.07 wt%), then the C18 range (21.39 wt%). Djenontin et al. [9] reported 63.6 wt% monounsaturated fatty acids, while this study revealed 73.4 wt% of monounsaturated fatty acids and 23.49 wt% of the monounsaturated oleic acid ethyl ester (ethyl oleate); only 3.11 % of saturated fatty acids were present, and no polyunsaturated compounds were detected. Owing to the high concentration of monounsaturated fatty acids and the absence of polyunsaturated compounds, the seed oil would be expected to be stable against oxidation; hence, formulations containing BSSO would be more stable than those containing oils with polyunsaturated fatty acids. Oleic acid, found in a significant amount in the seed oil, is an omega-9 fatty acid with very good food, medicinal, and health benefits. Ethyl oleate, the more lipid-soluble form of oleic acid, is widely used as a solvent for steroids. Thus, BSSO could be used as a food, surfactant, lubricant, etc. The lubricant effect stems from the carboxylic acid groups present, since boundary lubrication can be achieved with long-chain molecules (long-chain fatty acids) bearing active -COOH end-groups [6].
The four batches formulated using Blighia sapida seed oil as lubricant, as well as A1 containing 2 % magnesium stearate, passed the weight uniformity test (Table 5). The BP [10] specifies that for a tablet of 350 mg weight, the weight variation must be within ±5 %. One reason for this observation might be the lubricants employed in the formulations, which contributed to the improved flow properties of the granules and the uniform filling of the die cavity. However, batch A6 granules could not be compressed into tablets, as the mass was not cohesive. This might be attributed to the high concentration (5 %) of the lubricant BSSO: over-lubrication can interfere with the bonding of particles, resulting in weak tablets [22].
For hardness, 0.5 wt% BSSO gave tablets with a hardness of 3 kgf; subsequent batches gave lower values, which implies that the higher the concentration of BSSO as lubricant, the softer the tablets become. It can be deduced that 0.5 wt% BSSO gave the best result, comparable to magnesium stearate, regarding tablet hardness. BSSO (0.5 %) gave the same disintegration time (2.5 min) as 2 % magnesium stearate. The disintegration time decreased with increasing concentration of BSSO until the 4 % concentration, where the hydrophobic effect of the oil set in. Generally, all the tablets passed the disintegration test, which specifies not more than 15 minutes [10]. Lubricants generally have a strong negative effect on water uptake if tablets contain no disintegrant; if a strong disintegrant is present (e.g., the sodium starch glycolate used in this study), disintegration time is rarely affected. Sodium starch glycolate has been reported to be unaffected by the presence of hydrophobic lubricants, unlike other disintegrants [23].
For most dosage forms to be efficacious, the active pharmaceutical ingredient (API) must be absorbed into the systemic circulation so that it can be transported to its site of activity. Batches A1 to A5 of the ascorbic acid tablets all released 85 % of the drug at 45 min, thereby complying with the compendial requirement [13]. This observation implies that BSSO did not interfere with the dissolution profile of the ascorbic acid tablets.
CONCLUSION
Blighia sapida seed oil (BSSO), with some desired functional and physicochemical properties, has been successfully extracted from Blighia sapida seeds. The findings of this study show that BSSO compares favourably with the well-known magnesium stearate as a tablet lubricant and therefore has potential for use as a low-cost lubricant in solid dosage form manufacture. However, further investigations on the excipient, including stability, toxicity, etc., are required to fully ascertain its suitability.
Table 1: Composition of ascorbic acid granules/tablet formulations
Table 3: Fatty acid methyl ester composition of Blighia sapida seed oil
Table 5: Properties of formulated ascorbic acid tablets | 4,223.8 | 2017-03-06T00:00:00.000 | [
"Medicine",
"Materials Science",
"Chemistry"
] |
Relativistic Cosmology with an Introduction to Inflation
In this review article, we study the development of relativistic cosmology and the introduction into it of inflation as an exponentially expanding early phase of the universe. We study the properties of the standard cosmological model developed in the framework of relativistic cosmology and the geometric structure of spacetime coherently connected with it. The geometric properties of space and spacetime ingrained in the standard model of cosmology are investigated in addition. The big bang model of the beginning of the universe is based on the standard model, which failed to explain the flatness and the large-scale homogeneity of the universe demonstrated by observational evidence. These cosmological problems were resolved by introducing a brief acceleratedly expanding phase in the very early universe, known as inflation. By setting the initial conditions of the standard big bang model, cosmic inflation resolves these problems of the theory. We discuss how the inflationary paradigm solves these problems by proposing a period of fast expansion in the early universe. Inflation and dark energy in f(R) modified gravity are also reviewed.
Introduction
With the advent of general relativity in 1916, gravity, one of the four fundamental interactions of the universe, came to be described dynamically through the geometrical structure of spacetime [1]. The force of gravity was replaced by the curvature of spacetime, which is mirrored in the geometric structure of the metric tensor g_{μν}. Spacetime became an integral part of the universe and a dynamical medium in which the whole phenomenal universe exists. Any solution of the field equations of general relativity entails a certain structural geometry of spacetime, or simply a spacetime that represents a universe itself; therefore, finding a solution of the field equations amounts to finding a specific model of the universe.
Cosmology studies the universe as a whole [2], encompassing its beginning in spacetime (or as spacetime itself), its evolution, and its eventual ultimate fate. The history of cosmology dates back to the ancient Greeks, Indians, and Iranians, with its roots at that time in philosophy and religion. Before modern scientific cosmology emerged, it was nurtured within the Abrahamic religions, especially Judaism, Christianity, and Islam. Cosmology as modern science begins with the surfacing of general relativity, when Einstein himself first put it to use to formulate a mathematical cosmological model of the universe. The model brought about a dynamic universe but was rendered static, as there was no cosmological evidence of contraction or expansion at that time [3]. Einstein's static model was afterward proved to be inconsistent with cosmological observations and was discarded; however, its formulation as the first mathematical model based on the field equations of general relativity laid the foundation stone for the inception of modern relativistic cosmology as a science.
Cosmology takes into account the largest scale of spacetime, the causally connected maximal patch of the cosmos, from the perspective of its origin, evolution, and eventual fate. It gives the universe a mathematical description as large as the cosmological observational parameters reveal and allow. Modern relativistic cosmology was established on general relativity, which brought forth the big bang model of the universe. The big bang model was marred by some internal problems, which were removed by introducing an exponentially expanding phase in the early universe known as inflation. de Sitter presented a model of the universe devoid of ordinary matter but with the cosmological constant term retained; the geometry of the model was shown to be accelerating [4]. The de Sitter universe corresponds to the specific case related to one of the very early solutions of Einstein's Field Equations (EFE). The importance of the de Sitter model was not recognized until the introduction of inflation in the late 20th century, as the actual universe must be considered a local set of perturbations in a de Sitter geometry valid at large. de Sitter geometry represents Euclidean space with a metric that depends on time. It was found that inflation could correspond to de Sitter, or quasi-de Sitter, geometry, which has an innate impact on the evolution of the geometry of FLRW spacetimes; it further bears a relation to the late-time accelerated expansion of the universe and to the dynamic geometry of the spacetime intrinsically cohering with it. The paradigm of inflation, as propounded, has a profound impact on the evolution of the universe as the geometry of spacetime. The de Sitter universe represents the inflationary phase of the universe with slightly broken time-translational symmetry. Alexander Friedmann predicted theoretically that the universe is dynamic, one that can expand, contract, or even be born out of a singularity [5]. Georges Lemaître, unaware of Friedmann's work at that time, independently reached the same conclusion. In 1931, he also proposed the theory of the primeval atom, which later came to be known as the big bang theory, a name coined accidentally by Fred Hoyle [6]. Edwin Hubble first proved the existence of other galaxies besides the Milky Way and afterward, in 1929, discovered on the basis of observational evidence that the universe is actually expanding [7]; this confirmed what Friedmann had already predicted theoretically in 1922. In the late 1940s, George Gamow (1904-1968) and his collaborators, Ralph Alpher (1921-2007) and Robert Herman (1914-1997), independently worked on Lemaître's hypothesis and transformed it into a model of the early universe. They supposed the initial state of the universe to be a very hot, compressed mixture of nucleons and photons, thereby introducing the big bang model on the basis of comparatively strong evidence. They did not associate a particular name with the early state of the universe. Based on this model they succeeded in calculating the amount of helium in the universe, but unfortunately there was no authentic observational evidence against which their calculations could be compared [8].
The standard relativistic model of cosmology underpinning the big bang theory could not explain the global structure of the universe or the origin of matter in it; the homogeneous distribution of matter on large scales and the spatial flatness also remained enigmatic. The big bang model merely assumed these features but could not explain them. In the framework of effective field theory, aspects of nonsingular cosmology were explored by Yong Cai et al., who showed that effective field theory helps clarify the origin of the no-go theorem and assists in resolving it [9].
The inflationary era was proposed within the standard model of cosmology, which propounds the big bang theory of the creation of the universe; inflation solves the problems encountered in big bang cosmology. Gliner, in 1965, hypothesized an era of exponential expansion for the universe before any significant inflationary model had surfaced [10]. It was found that scalar fields are dynamic in nature, and in 1972 it was proposed that during phase transitions the energy density of the universe, as a scalar field, changes [11]. Andrei Linde, in 1974, realized that scalar fields can play an important role in describing the phases of the very early universe. He speculated that the energy density of a scalar field can play the role of the vacuum energy, dubbed the cosmological constant [12].
In 1978, Englert, Brout, and Gunzig [13] put forward the "fireball" hypothesis in an attempt to resolve the primordial singularity problem. They based their investigations on the entropy contained in the universe and approached the issue of the early evolution of the universe by introducing particle production into it. They inferred from their hypothesis that a universe undergoing a quantum mechanical effect would appear in a state of negative pressure and would be subject to a phase of exponential expansion. Linde mentioned in his review article [14] a work in which he sought, in collaboration with Chibisov, to develop a cosmological model based upon the facts known about scalar fields. Considering the supercooled vacuum as a self-contained source of entropy, they tried to connect the exponential expansion of the universe with it. However, they soon discovered that the universe becomes very inhomogeneous after the bubble wall collisions take place.
Slightly before Alan Guth's original proposal of inflation surfaced, Alexei Starobinsky in 1980 proposed a model of inflation on the basis of a conformal anomaly in quantum gravity. His proposal was presented in the framework of general relativity, where a slight modification of the equations of general relativity in the matter sector was proposed and quantum corrections were applied in order to produce an early accelerated phase of the universe. Starobinsky's model can be considered the first semirealistic model of inflation, and it evades the graceful exit problem [15]. It was hardly concerned with the problems of homogeneity and isotropy which occur in the relativistic big bang cosmological model. His model, as he himself emphasized, can be considered the extreme opposite of the chaos in Misner's model. The model is found to agree with cosmological observations, with slight deviations from recent measurements. Tensor perturbations representing gravitational waves have also been predicted in Starobinsky's model, with a spectrum that is flat.
Alan Guth employed the dynamics of a scalar field and, with a clear physical motivation, presented an inflationary model [16] in 1981 on the basis of supercooling during cosmic phase transitions, where the universe expands in a supercooled false vacuum state. A false vacuum is a metastable state containing a huge energy density without any field or particle, so that when the universe expands in this heavy state of nothingness its energy density does not change and empty space remains empty; inflation thus occurs in the false vacuum [17]. The duration of the inflationary phase in Guth's original scenario is too short to resolve any of the problems it was supposed to solve, and consequently the universe becomes very inhomogeneous, which leads to the graceful exit problem [18,19]. The problem, inherent in Guth's originally proposed version, prevents the universe from evolving to later stages. The graceful exit problem was addressed independently by Linde, and by Steinhardt and Albrecht [20][21][22][23][24][25], who introduced a phase of slow roll inflation at the end of the normal inflationary phase, inclusively known as new inflation. The resolution of the problem was sought by constructing a new inflationary paradigm in which inflation can begin either in an unstable state at the top of the effective potential or in the state of false vacuum. In this scenario, the dynamics of the scalar field is such that it rolls gradually down to the minimum of its effective potential. It is of great importance to note that the shifting of the scalar field away from the false vacuum state to later states has remarkable consequences. When the scalar field rolls slowly towards its minimum, in the so-called slow roll inflation, density perturbations are generated which seed the structure formation of the universe [26][27][28]. The amplitude of the density perturbations produced during the phase of slow roll inflation is inversely proportional to the speed of the scalar field [29,30]. The basic difference between the new inflationary scenario and the old one is that the advantageous portion of inflation in the new scenario, which is responsible for the large-scale homogeneity of the universe, does not take place in the false vacuum state, where the scalar field vanishes. This means that new inflation could explain why our universe is so large only if it was very large initially and contained many particles from the very beginning.
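For reference, the standard slow-roll estimate that encodes this inverse dependence on the field's speed (quoted here, not derived in this article) is

$$\frac{\delta\rho}{\rho} \sim \frac{H^{2}}{2\pi\,\dot{\phi}},$$

so the more slowly the field moves (the smaller $\dot{\phi}$), the larger the generated density perturbations.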
The course of the 20th century presented many challenges to standard cosmology. In the framework of the standard model, in addition to inflation, another breakthrough came in 1998, when the observationally based accelerated expansion of the universe was discovered [31][32][33]. Before this discovery, it was thought that, with all known forms of matter and energy obeying the strong energy condition ρ + 3p > 0, the expansion of the universe would slow down with the passage of time. This was a natural consequence of the Friedmann equations, which play a central role in the evolution of the universe. From the acceleration equation

$$\frac{\ddot a}{a} = -\frac{4\pi G}{3}(\rho + 3p),$$

the universe must be decelerating, as characterized by the deceleration parameter $q_0 = -a\ddot a/\dot a^2$; however, astoundingly, the value $q_0 < 0$ was determined observationally, meaning that the expansion of the universe is accelerating rather than decelerating. The discovery of accelerated expansion won the Nobel Prize in 2011. To explain the cause of the accelerated expansion, an exotic form of energy density was introduced hypothetically, usually known as dark energy. The present energy budget of the universe from observational data is 70% dark energy, 25% dark matter, and 5% ordinary baryonic matter [34,35]. Dark energy is effective on the largest scales, beyond galaxies, and does not affect gravitationally bound systems. There is a large number of proposed models to explain the origin of dark energy, and many independent observations lend support to its existence, such as CMB, SN Ia, BAO, etc. Today dark energy constitutes a very significant subject of relativistic cosmology, with observational data providing information about its basic nature; for reviews, see [36][37][38]. In this article, we study the standard model of cosmology by investigating the geometric structure of spacetime related to it in the framework of general relativity. Beginning with Euclidean space, we study spacetime in the special and general theories of relativity. We discuss problems encountered in standard big bang cosmology and the inflationary solutions introduced into it by proposing a phase of accelerated expansion in the early universe. A discussion of f(R) modified gravity is also presented, including how inflation and dark energy can be described in its framework.
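A one-line worked consequence of the acceleration equation quoted above: for matter obeying the strong energy condition,

$$\rho + 3p > 0 \;\Longrightarrow\; \ddot a < 0 \;\Longrightarrow\; q_0 = -\frac{a\ddot a}{\dot a^{2}} > 0,$$

so the observed $q_0 < 0$ requires a component with $\rho + 3p < 0$, which is the role dark energy plays.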
The layout of the paper is as follows. In Section 2, we discuss the structure of Euclidean space, beginning with the axioms of Euclidean geometry and the significant role played by the Pythagoras theorem in its development; it has four subsections discussing space, time, and spacetime in relativity and pre-relativity physics. Section 3 begins with relativistic cosmology and a discussion of its underlying principles. The standard model of cosmology is discussed in Section 4, with nine subsections about its geometric structure. In Section 5, the derivation of the Friedmann equations is carried out. Section 6 describes different aspects of embedding a geometrical object in a space of higher dimensions; it has four subsections. Section 7 presents the very first relativistic model, developed by Einstein himself; it has two subsections that discuss the instability of Einstein's universe and de Sitter's empty universe model, respectively. In Section 8, a discussion of conformal FLRW line elements is presented, in addition to the vacuum-, radiation-, and matter-dominated eras; it has 12 subsections covering related topics. Section 9, with its four subsections, is devoted to the cosmological problems faced by the standard model. In Section 10, we embark on inflation and discuss its dynamics. Section 11 describes how the proposal of exponential expansion in the early universe solves the cosmological problems; ΛCDM and f(R) are discussed in this section. In the last section, we provide a summary of the paper. Four appendices are added at the end.
Euclidean Space
Euclidean geometry is established on a set of simple axioms and the definitions derived from them. These axioms were first stated by Euclid in about 300 B.C. [39]. A space, at the level of mathematical abstraction, is a set of points, each point representing a specific position in it. When an abstract space is mapped onto a physical space, each point of it represents a physical location. Euclidean space is what follows from the axioms of Euclidean geometry. Geometrically, a space can be described by reducing it to a specification of the distance between each pair of its neighboring points. To reduce all of the geometry of a space to such a specification, we use the metric, or line element, which measures the space and describes its nature. A line element specifies a certain geometry, and its form varies with the coordinate system. Five basic postulates lie at the core of Euclidean space and are the basis of the standard laws of geometry:
1. Any two points can be joined by a straight line, i.e., the shortest distance between two points is a straight line.
2. A straight line can be extended to any length.
3. A circle can be drawn with a given line segment as radius and one of its ends as the center of the circle.
4. All right angles are congruent.
5. Given a line and a point not on the line, it is possible to draw exactly one line through the given point parallel to the line, i.e., parallel lines remain a constant distance apart.

The Pythagorean theorem was known before Euclid; it can also be derived from the five postulates and is used to find the distance between any two points in Euclidean space. A mathematical space is an abstraction used to model the physical space of the universe. Euclidean space consists of geometric points and has three dimensions. The Pythagorean theorem for a right triangle describes how to calculate the length of the hypotenuse when the lengths of the other two sides, the base and the altitude, are given; the length of the hypotenuse gives the distance between two points. Figure 1 illustrates the Pythagorean theorem. Since the space consists everywhere of geometric points, we can relate every three infinitesimally close points forming a right triangle, so that the element of distance between any two points can be determined with the help of the Pythagorean theorem. Using a rectangular Cartesian coordinate system, the distance between two neighboring points can be expressed in differential form as

$ds^2 = dx^2 + dy^2$. (2)

The distance measure in Equation (2) is known as the metric, or line element, in two dimensions and defines the Euclidean metric for two-dimensional space. The distance measured between two points by the metric in Equation (2) does not change when the coordinate system in which the two points are specified is rotated, as Figure 2 shows: the distance between the two points remains invariant, which means that

$ds^2 = dx^2 + dy^2 = dx'^2 + dy'^2$.

The Pythagorean theorem extends to three dimensions, where three mutually perpendicular planes along the three dimensions of the Cartesian coordinate system divide it into 3-planes, as shown in Figure 3. In reference to a coordinate system, each point of this space has three coordinates (x, y, z) in the Cartesian scheme, i.e., each point is represented by three coordinates, which are the distances measured from the origin along the corresponding axes, the x-axis, y-axis, and z-axis, respectively. These three axes stand for the three dimensions of space. For points separated infinitesimally, the distance between two points in Cartesian coordinates gives the metric of three-dimensional space,

$ds^2 = dx^2 + dy^2 + dz^2$. (7)

The distance between two points with Cartesian coordinates (x, y, z) and (p, q, r) is

$d = \sqrt{(x-p)^2 + (y-q)^2 + (z-r)^2}$,

and the infinitesimal distance between any two points (x, y, z) and (x + dx, y + dy, z + dz) can be obtained from the metric in Equation (7) for three-dimensional Euclidean space.
In tensor form,

$ds^2 = \delta_{\mu\nu}\, dx^{\mu} dx^{\nu}$, (11)

where $\delta_{\mu\nu}$ is the Kronecker delta, a symmetric tensor of rank two that can be expressed as the 3 × 3 matrix

$\delta_{\mu\nu} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$,

with trace 3. More generally,

$\delta_{\mu\nu} = \mathrm{diag}(+1, +1, \ldots, +1)$ (13)

defines an n-dimensional Euclidean space. Equation (11) can be expanded using the Einstein summation convention,

$ds^2 = (dx^1)^2 + (dx^2)^2 + (dx^3)^2$, (21)

which can also be written in the form

$ds^2 = d\mathbf{x} \cdot d\mathbf{x}$. (22)

From Equation (22) we see that the inner product in three-dimensional Euclidean space is perfectly well defined, which is why three-dimensional Euclidean space is an example of a complete inner product space. An explanatory discussion of maximally symmetric 3-spaces can be consulted in Appendix B.
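As a quick numerical illustration of Equations (11)-(22), the following sketch (ours, not part of the original development; the variable names are illustrative) verifies that the Euclidean interval built from the Kronecker delta is invariant under a rotation of the coordinate system:

```python
# A minimal check that ds^2 = delta_{mu nu} dx^mu dx^nu is rotation-invariant.
import numpy as np

delta = np.eye(3)                      # Kronecker delta as a 3x3 identity matrix

def interval(dx):
    """ds^2 = delta_{mu nu} dx^mu dx^nu, the squared Euclidean distance."""
    return dx @ delta @ dx

theta = 0.7                            # arbitrary rotation angle about the z-axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

dx = np.array([0.3, -1.2, 2.5])        # an arbitrary displacement vector
print(np.isclose(interval(dx), interval(R @ dx)))   # True: ds^2 is invariant
```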
Newtonian Mechanics: The Structure of Space and Time
Space and time are absolute structures in classical physics and can be distinguished from one another independently. Newtonian mechanics is based on three laws of motion, a law of gravitation, and the Galilean principle of relativity, which are inherently related to the properties of space and time. Newtonian space is the three-dimensional extension around us and constitutes absolute space. In Newton's own words, "Absolute space, in its own nature, without relation to anything external remains always similar and immovable"; space is therefore rigid and motionless and can be viewed as a colossally empty three-dimensional box in which material objects reside and all physical phenomena take place. Newtonian space has the properties of Euclidean space: the shortest distance between any two points is a straight line, and if three points constitute a right-angled triangle, the three sides are related by the Pythagorean theorem, which ascribes to it the properties of a flat space. The sum of the angles of a triangle in such a space is 180°. Newtonian space is homogeneous and isotropic, and Newtonian mechanics reflects this. Homogeneity implies translational invariance of the properties of space, meaning that it has the same properties at every point; this leads to the invariance of physical laws formulated in two or more coordinate systems, and Newton's third law, the laws of conservation of momentum and energy, etc., follow as consequences of the homogeneity of space. Space is also isotropic, which implies rotational invariance of its properties: it has the same properties in all directions and is therefore direction-independent. Isotropy about every point implies homogeneity, but the converse is not true. Absolute time was enunciated as follows: "Absolute time, and mathematical time of itself and from its own nature flows equably without relation to anything external, and is otherwise called duration". Such time exists independently of space and of whatever happens dynamically in it, and flows uniformly in one direction; an interval of time has an unchanging meaning for all times. This is presented figuratively in Figure 4. According to Newtonian mechanics, gravitation and relative motion do not affect the rate at which time flows. From Newton's second law, F = ma, the time-reversal symmetry of the dynamics can be seen: a dynamical system does not change under the transition from +t to −t, because the law does not incorporate time explicitly. This implies that past and future are indistinguishable, which is paradoxical, because time is unidirectional and always flows from past to future. Consider two observers in two inertial frames of reference in relative motion, equipped with standard clocks, who record the spacetime coordinates of an event as (t, x, y, z) and (t′, x′, y′, z′), respectively. According to the Galilean principle of relativity, the coordinate transformations are

$x' = x - vt, \quad y' = y, \quad z' = z, \quad t' = t$. (23)

We can calculate the addition of velocities according to these transformations by differentiating the spatial part of Equation (23) with respect to time t:

$\frac{dx'}{dt} = \frac{dx}{dt} - v$. (24)

As t′ = t, we infer that $dx'/dt' = dx'/dt$. Differentiating Equation (24) once again gives the acceleration,

$\frac{d^2 x'}{dt^2} = \frac{d^2 x}{dt^2}$. (25)

We observe from Equation (25) that the accelerations in the two frames are the same.
The time coordinate t of one inertial frame remains unaffected under transformation to another inertial frame of reference in classical physics and does not depend on the spatial coordinates x, y, and z. The set of equations in Equation (23) is known as the Galilean transformations. The motion along the y and z spatial dimensions remains unaffected, and the time coordinates in the two frames are equivalent, which implies that time is absolute, as Newton believed: for all inertial observers the time interval between any two events is invariant. We notice that for two events with coordinates (t, x, y, z) and (t′, x′, y′, z′), respectively, the differential of distance, the Euclidean spatial interval of Equation (21), $ds^2 = dx^2 + dy^2 + dz^2$, and the time interval ∆t = t′ − t both remain separately invariant under the Galilean transformations in Equation (23). This fact makes us consider space and time as absolute entities in Newtonian mechanics. We identify the quantity ds² as the square of the distance between points of three-dimensional Euclidean space, and the invariance of this differential of distance alludes to the fact that it is a structural, geometrical property of the space itself in its own right. This describes the geometry of space and time according to Newton's views.
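A small numerical sketch (ours; the trajectory and boost velocity are arbitrary) confirms the conclusion of Equation (25), that accelerations agree in two frames related by a Galilean boost:

```python
# Verify that d^2x'/dt'^2 = d^2x/dt^2 under x' = x - v t, t' = t.
import numpy as np

v = 5.0                                   # relative velocity of the primed frame
t = np.linspace(0.0, 10.0, 1001)
x = 2.0 + 3.0 * t + 0.5 * 1.7 * t**2      # a uniformly accelerated trajectory

xp = x - v * t                            # Galilean boost; t' = t

a  = np.gradient(np.gradient(x,  t), t)   # numerical d^2x/dt^2
ap = np.gradient(np.gradient(xp, t), t)   # numerical d^2x'/dt'^2

print(np.allclose(a, ap))                 # True: acceleration is invariant
```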
Special Theory of Relativity: The Structure of Spacetime
Special relativity is a theory of the structure of spacetime and in this way constitutes a geometric theory [40]. Fields and particles live on this spacetime structure, and relativistic mechanics is developed according to it, in correspondence with the postulates of special relativity. According to the Lorentz transformations implied by the theory, space and time are not separate quantities but innately constitute a single continuum known as spacetime. One of Einstein's 1905 papers brought forward this theory, founded upon two postulates [41]:
1. The principle of special covariance.
2. The principle of invariance of the velocity of light (c).

The laws of physics remain form-invariant, i.e., covariant, for a privileged class of observers known as inertial frames; this is also called the principle of relativity. These two principles overthrew the pre-relativity notions of absolute space and absolute time, proposing relative concepts instead. In classical physics, as we saw earlier, the coordinates of two observers are related by Galilean transformations, whereas according to special relativity the coordinates in two frames are related by Lorentz transformations.
The Lorentz transformations contain all the geometric information about space and time and describe the structure of spacetime. According to the Galilean transformations, the space and time coordinates of two inertial observers in relative motion are absolute, and the time coordinate has the same magnitude in pre-relativity physics. According to special relativity, however, which obeys the Lorentz transformations, the time coordinate of one coordinate system is connected to the time coordinate of the second through both the time and the space coordinates, which means that space and time coordinates must now be dealt with on an equal footing. It is obvious from the Lorentz transformations that the time coordinates in the two frames are not equivalent, i.e., t′ ≠ t; rather, t′ is innately tied to both the time and space coordinates, t and x, respectively. The time t of one coordinate frame thus converts partially into the space and partially into the time coordinate of the other. Therefore t does not remain independent but partially coalesces with the space coordinates, losing its absolute nature, and since the principle of relativity forbids us to locate a preferred frame of reference, the absolute notion of time disappears logically. This fact was first perceived by Minkowski when he was recasting special relativity in the language of geometry; he presented a very profound and significant geometrical structure underlying special relativity. While delivering a lecture at the meeting of the Göttingen Mathematical Society on 5 November 1907, he introduced the concept of the spacetime continuum, asserting that independent space and time are doomed to fade away into mere shadows and only a union of the two can preserve an independent reality. Minkowski observed that the principle of special relativity can be described by the metric $-dt^2 + dx^2 + dy^2 + dz^2$ on the four-dimensional space $\mathbb{R}^4$, which established the concept of the spacetime continuum and paved the way for the formulation of general relativity. A Minkowski metric g on the linear space $\mathbb{R}^4$ is a symmetric, non-degenerate bilinear form with signature (−, +, +, +). This means there exists a basis $\{e_0, e_1, e_2, e_3\}$ such that $g(e_\mu, e_\nu) = g_{\mu\nu}$, where $\mu, \nu \in \{0, 1, 2, 3\}$ and $g_{\mu\nu}$ is expressed in the form

$g_{\mu\nu} = \begin{pmatrix} g_{00} & g_{01} & g_{02} & g_{03} \\ g_{10} & g_{11} & g_{12} & g_{13} \\ g_{20} & g_{21} & g_{22} & g_{23} \\ g_{30} & g_{31} & g_{32} & g_{33} \end{pmatrix}$,

so that we have an orthonormal basis and can construct a coordinate system $(x^0, x^1, x^2, x^3)$ of $\mathbb{R}^4$ such that at each point $e_0 = \partial_t$ and $e_j = \partial_{x^j}$, with j = 1, 2, 3. With respect to this coordinate system, we can write the (0, 2) metric tensor in the form

$g = g_{\mu\nu}\, dx^{\mu} dx^{\nu} = -dt^2 + \sum_{j=1}^{3} (dx^j)^2$,

or

$ds^2 = -dt^2 + dx^2 + dy^2 + dz^2$.

The negative sign of the time component in the metric indicates that this is not a Euclidean space but a pseudo-Euclidean one, known as Minkowski space, and it also guarantees that the speed of light is the same in all inertial frames. An expanding Minkowskian spacetime, the simplest of all dynamic spacetimes, can be written as

$ds^2 = -dt^2 + a^2(t)\left(dx^2 + dy^2 + dz^2\right)$.

On dimensional grounds it is convenient to introduce the coordinates $(x^0, x^1, x^2, x^3) = (ct, x, y, z)$. The Pythagorean theorem applied in the Euclidean space $\mathbb{R}^3$ of three spatial dimensions gives the distance between two points as an invariant, as we observed in the previous section.
Here ds, the length element, is a scalar quantity, which means that all observers within a given class of reference frames will agree upon the length of the measured object. In 1905, Einstein deduced that measurements of space and of time taken separately would not give identical results [42] for observers in relative uniform motion. However, Minkowski noted that the four-dimensional interval

$ds^2 = \eta_{\mu\nu}\, dx^{\mu} dx^{\nu}$, (29)

where

$\eta_{\mu\nu} = \begin{pmatrix} g_{00} & g_{01} & g_{02} & g_{03} \\ g_{10} & g_{11} & g_{12} & g_{13} \\ g_{20} & g_{21} & g_{22} & g_{23} \\ g_{30} & g_{31} & g_{32} & g_{33} \end{pmatrix}$,

would remain invariant for all such observers. The basic and significant idea Minkowski took notice of was that the spacetime interval remains invariant for all observers in uniform relative motion, meaning that it too is a scalar upon which they all agree. The metric of Minkowski space, which is homogeneous and isotropic, is given by

$g_{\mu\nu} = \eta_{\mu\nu} = \mathrm{diag}(-1, +1, +1, +1)$, (31)

and thus the geometry of spacetime in special relativity is flat. It is notable that here it is spacetime that is flat, whereas in classical mechanics it is space rather than spacetime.
If the Minkowskian geometry of spacetime is required to be expanding, it can be made so; however, in the framework of special relativity it does not need to expand. Figure 5 shows the structure of Minkowskian spacetime as a null-cone structure, and Figure 6 shows how the time dimension is placed on the same footing as space.
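The invariance of the interval in Equation (29) can be checked directly; the sketch below (ours, with c = 1 and an arbitrary boost velocity) applies a standard Lorentz boost along x and compares the intervals:

```python
# The Minkowski interval ds^2 = -dt^2 + dx^2 + dy^2 + dz^2 is boost-invariant.
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])          # Minkowski metric, signature (-,+,+,+)

def interval(dx):
    return dx @ eta @ dx

beta = 0.6                                    # boost velocity in units of c
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[ gamma,      -gamma*beta, 0.0, 0.0],   # boost along the x-axis
              [-gamma*beta,  gamma,      0.0, 0.0],
              [ 0.0,         0.0,        1.0, 0.0],
              [ 0.0,         0.0,        0.0, 1.0]])

dx = np.array([2.0, 1.0, -0.5, 0.3])          # (dt, dx, dy, dz)
print(np.isclose(interval(dx), interval(L @ dx)))   # True: ds^2 is invariant
```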
General Theory of Relativity: The Structure of Spacetime
The essence of general relativity is that geometry is gravity, which comes from the equivalence principle; it models gravity into the dynamic structure of spacetime. In general relativity, the structure of spacetime is described by a fundamental quantity called the spacetime metric $g_{\mu\nu}$, or line element, which characterizes the geometry of spacetime by giving the distance between two infinitesimally neighboring points in it. The geometrical structure of spacetime is embodied [43] in two basic principles:

1. The spacetime continuum is four-dimensional: every point of it is distinct from every other, with four coordinates, a quadruplet $(x^1, x^2, x^3, x^4)$, assigned consecutively to each of them.
2. The spacetime continuum has, at each of its points, a quadratic structure of coordinate differentials $ds^2 = g = g_{\mu\nu}\, dx^{\mu} dx^{\nu}$, known as the "square of the interval" between the two points under consideration.

The metric is denoted by $g_{\mu\nu}$; in matrix form, with components, it is written as

$g_{\mu\nu} = \begin{pmatrix} g_{00} & g_{01} & g_{02} & g_{03} \\ g_{10} & g_{11} & g_{12} & g_{13} \\ g_{20} & g_{21} & g_{22} & g_{23} \\ g_{30} & g_{31} & g_{32} & g_{33} \end{pmatrix}$.

The intrinsic properties of spacetime are completely determined by the spacetime metric. An example of the locally curved spacetime around the Sun, in two dimensions, is displayed in Figure 7. A detailed discussion of space, time, and spacetime is presented in Appendix A.
The Basics of General Relativity
It is convenient to take a retrospective look at the basics of general relativity, whose role has been fundamental to modern cosmology. We briefly review the structure of the theory, specifically in connection with the geometrical structure of spacetime in it. General relativity at its core describes gravity as the geometry of four-dimensional spacetime, manifested through its curvature. It is a theory of spacetime and gravitation, the very basic components of the universe. Einstein's journey towards general relativity, in seeking to introduce gravity into his previous theory, led to a fascinating geometric structure of spacetime in which gravity as a field force disappears and is assimilated into the very geometry of spacetime. In constructing the framework of the new theory, Einstein was influenced and guided by Mach's principle, which states that it is the a priori existence and distribution of matter that determines the geometry of spacetime; in the absence of matter there would be no geometric structure of spacetime in the universe, and hence no inertial properties in an otherwise empty universe. In general relativity, gravitation and inertia are essentially indistinguishable: the metric tensor $g_{\mu\nu}$ describes the effect of both combined, and since it is arbitrary to ask which contributes more and which less, it is suitable to call the combination by a single name, either inertia or gravitation [4]. In general relativity, gravitation, inertia, and the geometry of spacetime are coalesced into a single entity represented by a symmetric tensor of second rank, $g_{\mu\nu}$, which owes its existence to the presence and distribution of matter, represented by another symmetric tensor, $T_{\mu\nu}$, known as the energy-momentum tensor. The metric tensor $g_{\mu\nu}$ is the fundamental object of study in general relativity and encodes all the causal and geometrical structure of spacetime. General relativity rests on five fundamental principles, incorporated implicitly or explicitly:

1. Mach's principle;
2. principle of equivalence;
3. principle of covariance;
4. principle of minimal gravitational coupling;
5. correspondence principle.

In light of the principle of general covariance, the theory requires that the laws of physics be formulated in a coordinate-independent way. Coordinate independence requires the replacement of partial derivatives by covariant derivatives, which introduces the connection coefficients $\Gamma^{\lambda}_{\mu\nu}$, the Christoffel symbols of the 2nd kind; all the geometric structure of spacetime is built upon these connection coefficients. The field equations of general relativity read $G_{\mu\nu} = 8\pi G\, T_{\mu\nu}$, where $G_{\mu\nu} = R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R$ is the Einstein tensor, expressed in terms of the Ricci tensor, the metric tensor, and the Ricci scalar, and $T_{\mu\nu}$ is the energy-momentum tensor. The spacetime continuum of general relativity is postulated to be a 4-dimensional Lorentzian manifold (M, g), where M denotes the manifold and g is the metric defined over it. The geometry of a spacetime is encoded in its metric, which has a geodesic structure, though a complex one, frequently solved numerically for a specific bunch of geodesics. These geodesics specify the physical properties of the geometry of spacetime, which are interpreted by drawing them graphically in a certain spacelike volume. Gravity is the geometry of spacetime itself, described through its dynamic structure in the framework of general relativity. The interaction between spacetime and the content it contains, which together form the universe, is the pith and marrow of general relativity: matter tells spacetime how to curve, and spacetime tells matter how to move. General relativity thus transforms gravitation from being a force to being a property of spacetime, so that gravity does not remain a force but becomes curvature of the geometric structure of spacetime. Einstein worked out a relation between the matter-energy content of the universe and its gravitating effects in the form of the geometry of spacetime, employing the language of tensors to describe it. The invariant interval between two events separated infinitesimally, with coordinates (t, x, y, z) and (t + dt, x + dx, y + dy, z + dz), is defined according to special relativity as

$ds^2 = \eta_{\mu\nu}\, dx^{\mu} dx^{\nu}$,

which defines a Lorentz-invariant Minkowski flat spacetime whose geometry is encoded in $\eta_{\mu\nu}$. Under a change of coordinates, ds² remains invariant; the interval is spacelike for ds² > 0, timelike for ds² < 0, and lightlike for ds² = 0. A photon path is described by ds = 0, and baryonic matter follows a path between two events for which

$\delta \int ds = 0$, (36)

i.e., one that generates stationary values and conforms to the shortest distance between two points being a straight line, meaning that no external forces deviate its path. Tensors are geometric objects defined on a manifold M that remain invariant under changes of coordinates. A tensor is composed of a set of quantities called its components; it is therefore a generalization of a vector, in that it may have more than three components. Tensors represent mathematical entities that conform to certain transformation laws; the properties of the components of a tensor do not depend on the coordinate system used to describe it, and the transformation laws relate its components in two different coordinate systems.
The mathematical representation of a tensor is usually displayed by a bold-face letter such as A, B, T, or P, with an index or a set of indices as superscripts, subscripts, or both in mixed form. A mixed tensor is a tensor that has contravariant as well as covariant components. The number of indices appearing in the symbol representing a tensor is known as its rank; the appearing indices can be contravariant, covariant, or both. The order of a tensor is the same thing as its rank, only the name differs. The number of components of a tensor is related to its rank and to the dimension of the space in which it is described: in an n-dimensional space, a tensor of rank k has $n^k$ components, i.e., (number of dimensions of space)^rank. The spacetime of general relativity is pseudo-Riemannian, with four dimensions, three spatial and one temporal. Coordinate patches are needed to map the whole of spacetime. Each point-event of a coordinate patch in the four-dimensional pseudo-Riemannian spacetime is labeled by a general coordinate system whose indices conventionally run over 0, 1, 2, and 3, where 0 stands for time and the rest for the space coordinates. A frame of reference, inertial or otherwise, characterized by a coordinate system can be attached to every point-event of the spacetime, and coordinate transformations between any two coordinate systems can be found; these can be written as

$x'^{\mu} = x'^{\mu}(x^0, x^1, x^2, x^3)$.

On switching to Riemannian geometry for non-Euclidean spaces, ordinary partial differentiation is generalized to covariant differentiation, defined using a semicolon as

$A^{\mu}{}_{;\nu} = A^{\mu}{}_{,\nu} + \Gamma^{\mu}_{\nu\lambda} A^{\lambda}$,

where a comma denotes ordinary partial differentiation with respect to the corresponding variable and the semicolon signifies covariant differentiation. In covariant differentiation, indices can also be raised or lowered with the metric tensor; the covariant derivative of the metric itself vanishes, i.e., $g_{\mu\nu;\alpha} = 0$. The interval between infinitesimally separated events $x^{\mu}$ and $x^{\mu} + dx^{\mu}$ is given by

$ds^2 = g_{\mu\nu}\, dx^{\mu} dx^{\nu}$.

The contravariant counterpart of $g_{\mu\nu}$ is $g^{\mu\nu}$, and together they give the Kronecker delta, $g^{\mu\alpha} g_{\alpha\nu} = \delta^{\mu}_{\nu}$. Moreover, indices can be lowered or raised using the metric tensor in either form, as

$A_{\mu} = g_{\mu\nu} A^{\nu}, \qquad A^{\mu} = g^{\mu\nu} A_{\nu}$.

In general relativity, all the geometry of curved spacetime is contained in the second-rank symmetric tensor $g_{\mu\nu}$, known as the fundamental or metric tensor, a function of the four coordinates, $g_{\mu\nu} = g_{\mu\nu}(x^0, x^1, x^2, x^3)$; $g_{\mu\nu}$ encodes all the information about the gravitational field induced by the presence of matter. It governs other matter in response, mimicking the role of the gravitational potential of Newtonian gravity, so that paths no longer remain straight, and the action in Equation (36) determines the path of a free particle, known as a geodesic:

$\frac{d^2 x^{\mu}}{ds^2} + \Gamma^{\mu}_{\nu\lambda} \frac{dx^{\nu}}{ds} \frac{dx^{\lambda}}{ds} = 0$,

where

$\Gamma^{\mu}_{\nu\lambda} = \frac{1}{2} g^{\mu\sigma}\left(\partial_{\nu} g_{\sigma\lambda} + \partial_{\lambda} g_{\sigma\nu} - \partial_{\sigma} g_{\nu\lambda}\right)$

are the Christoffel symbols, which through the geodesic equation specify the world lines of free particles. The "acceleration due to gravity" of Newton's law of gravitation is described by these symbols in Einstein's picture of gravity, as geometric properties of spacetime encoding the same information. These symbols vanish locally in an inertial frame of reference in free fall; under coordinate transformations from $x^{\mu}$ to $x'^{\mu}$ they do not transform as the components of a tensor and therefore do not represent a tensor.
The Riemann tensor is defined as

$R^{\sigma}{}_{\mu\nu\lambda} = \partial_{\lambda} \Gamma^{\sigma}_{\mu\nu} - \partial_{\nu} \Gamma^{\sigma}_{\mu\lambda} + \Gamma^{\sigma}_{\rho\lambda} \Gamma^{\rho}_{\mu\nu} - \Gamma^{\sigma}_{\rho\nu} \Gamma^{\rho}_{\mu\lambda}$.

It has symmetry properties and satisfies the following Bianchi identity:

$R^{\sigma}{}_{\mu\nu\lambda;\rho} + R^{\sigma}{}_{\mu\lambda\rho;\nu} + R^{\sigma}{}_{\mu\rho\nu;\lambda} = 0$. (45)

The Ricci tensor is obtained from the Riemann tensor by contraction,

$R_{\mu\nu} = R^{\lambda}{}_{\mu\nu\lambda}$.

Another expression for the Ricci tensor can be written in terms of the determinant g of the metric tensor $g_{\mu\nu}$, envisaged as a matrix, using $\Gamma^{\lambda}_{\mu\lambda} = \partial_{\mu} \ln\sqrt{-g}$. The Ricci scalar, or scalar curvature, is described as

$R = g^{\mu\nu} R_{\mu\nu}$.

Contraction of the Bianchi identity in Equation (45) gives

$\left(R^{\mu\nu} - \tfrac{1}{2} g^{\mu\nu} R\right)_{;\nu} = 0$,

where $G^{\mu\nu} = R^{\mu\nu} - \tfrac{1}{2} g^{\mu\nu} R$ is the Einstein tensor. We can now write the basic equations of general relativity,

$G_{\mu\nu} = R_{\mu\nu} - \tfrac{1}{2} g_{\mu\nu} R = 8\pi G\, T_{\mu\nu}$, (52)

or, written with the cosmological constant,

$R_{\mu\nu} - \tfrac{1}{2} g_{\mu\nu} R + \Lambda g_{\mu\nu} = 8\pi G\, T_{\mu\nu}$.

From Equation (52), the energy-momentum tensor $T_{\mu\nu}$ is the source term for the metric tensor $g_{\mu\nu}$. For the most general matter-energy fluid consistent with the assumption of homogeneity and isotropy, it represents a perfect fluid and has the form

$T_{\mu\nu} = (\rho + p)\, u_{\mu} u_{\nu} + p\, g_{\mu\nu}$,

where $u^{\mu} = (1, 0, 0, 0)$ is the four-velocity in a comoving frame of reference, ρ is the energy density, and p is the pressure.
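To make the contraction chain Riemann → Ricci → scalar concrete, here is a small symbolic check (ours, a sketch rather than the paper's code) that applies the definitions above to a 2-sphere of radius a, whose scalar curvature should come out as 2/a²:

```python
import sympy as sp

theta, phi, a = sp.symbols('theta phi a', positive=True)
x = [theta, phi]
g = sp.Matrix([[a**2, 0], [0, a**2 * sp.sin(theta)**2]])   # 2-sphere metric
ginv = g.inv()
dim = 2

# Christoffel symbols of the 2nd kind
Gamma = [[[sum(ginv[l, s] * (sp.diff(g[s, m], x[nu]) + sp.diff(g[s, nu], x[m])
               - sp.diff(g[m, nu], x[s])) for s in range(dim)) / 2
           for nu in range(dim)] for m in range(dim)] for l in range(dim)]

# Riemann tensor R^s_{m nu l}, with the index convention used in the text
def riemann(s, m, nu, l):
    expr = sp.diff(Gamma[s][m][nu], x[l]) - sp.diff(Gamma[s][m][l], x[nu])
    expr += sum(Gamma[s][r][l] * Gamma[r][m][nu]
                - Gamma[s][r][nu] * Gamma[r][m][l] for r in range(dim))
    return expr

# Ricci tensor R_{m nu} = R^l_{m nu l} and Ricci scalar R = g^{m nu} R_{m nu}
ricci = sp.Matrix(dim, dim, lambda m, nu: sum(riemann(l, m, nu, l)
                                              for l in range(dim)))
R = sp.simplify(sum(ginv[m, nu] * ricci[m, nu]
                    for m in range(dim) for nu in range(dim)))
print(R)   # prints 2/a**2, the scalar curvature of a sphere of radius a
```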
Relativistic Cosmology
Relativistic cosmology was founded on three fundamental principles:

1. the cosmological principle;
2. Weyl's principle;
3. general relativity.

These are discussed in the following subsections.
Cosmological Principle
The cosmological principle states that on sufficiently large scales the universe is homogeneous and isotropic at any time; it therefore looks the same to all observers and has similar properties on large scales. The principle is a generalization of the Copernican principle, and almost all standard cosmological models of spacetime rest upon it. It has two forms: (1) the cosmological principle with respect to spatial invariance, and (2) the cosmological principle with respect to temporal invariance. In spatial invariance, we suppose the invariance of space with respect to translations and rotations, known as homogeneity and isotropy, respectively; under both invariance properties the space remains isomorphic. A perfect cosmological principle additionally incorporates temporal homogeneity and isotropy; it was employed by the steady-state theory of an eternal universe, was not supported by observation, and was disfavored. For a local observer the principle might not appear to be satisfied, since the Earth and the solar system are not homogeneous and isotropic: matter clumps together to form objects such as planets, stars, and galaxies, with vacuum-like voids between them. On sufficiently large scales (of order 100 Mpc and above), however, the universe obeys the cosmological principle. The uniformity of the CMBR in all directions provides confirmatory evidence for it. Homogeneity means location independence, i.e., all places in the universe at such scales are indistinguishable; isotropy means direction independence, i.e., in whatever direction we look, the universe appears the same. Isotropy about every point implies homogeneity, but the converse is not true. To better understand the geometric properties involved, we will begin with lower-dimensional spaces, work up to four dimensions, and then observe how the geometrical properties of four-dimensional spacetime can be understood in this perspective; it is necessary to understand what is meant by the embedding of a geometric object in an n-dimensional space, because the FLRW metric incorporates an example of embedding three-dimensional spaces in four-dimensional spacetime. Figure 8 delineates the homogeneity and isotropy properties of space.
Weyl's Principle
Weyl's principle allows us to regard the material content of the universe as a fluid whose particles are the galaxies; what we call "the universe" is, in this picture, a cosmic fluid. In the cosmological spacetime, the world lines of the fundamental observers form a smooth bundle of timelike geodesics that never meet, except at the past singularity from which the universe emerged, or at a future singularity if one occurs. The fundamental observers are those who comove with the cosmic fluid, and the world lines of the galaxies, as fluid particles, are always and everywhere orthogonal to a family of spatial hypersurfaces. The postulate was presented by Hermann Weyl in 1923 and is essentially a statement about the nature of matter in the universe [44]: he regarded the material content of the universe as a fluid whose constituent particles form a substratum.
In the substratum of spacetime, then, we may consider the structure of the universe as a fluid. The Weyl principle introduces a further symmetry into the structure of spacetime described by the metric tensor by considering the galaxies as test particles and postulating that the geodesics on which they move do not intersect: the world lines of galaxies, considered as "test particles", form a 3-bundle of non-intersecting geodesics orthogonal to a series of spacelike hypersurfaces. A simple illustration of Weyl's principle is given in Figure 9.
General Relativity
General relativity provides the best existing theory of gravitation on cosmological scales, modeling gravity as part of the geometric structure of spacetime. Its basic ingredients were discussed in Section 3.
The Standard Model of Cosmology
The standard model of cosmology is established on the most general homogeneous and isotropic spacetime. The line element of the standard model, which propounds the hot big bang model of the universe, is the Friedmann-Lemaitre-Robertson-Walker (FLRW) line element, which in Cartesian-type coordinates reads

$ds^2 = -dt^2 + a^2(t)\, \delta_{ij}\, dx^i dx^j$, (56)

and in spherical coordinates

$ds^2 = -dt^2 + a^2(t)\left[\frac{dr^2}{1 - kr^2} + r^2 d\theta^2 + r^2 \sin^2\theta\, d\phi^2\right]$, (57)

or, equivalently, in matrix form,

$g_{\mu\nu} = \mathrm{diag}\!\left(-1,\; \frac{a^2(t)}{1 - kr^2},\; a^2(t)\, r^2,\; a^2(t)\, r^2 \sin^2\theta\right)$.

The predictions for the quantitative behavior of the expanding universe are suitably expressed by the metric tensor through the scale factor as a function of time: a(t) describes the scale of the coordinate grid, relating coordinate distance to physical distance in a smooth and homogeneously expanding universe.
Geometric Properties of the FLRW Line Element
Consider the line element in Equation (57). As time flows only in one direction and space obeys the cosmological principle, we are allowed to separate the metric into temporal and spatial parts. To understand the four-dimensional spacetime geometry of the FLRW universe, we begin with the geometry of the spatial part of the line element,

$d\sigma^2 = a^2(t)\left[\frac{dr^2}{1 - kr^2} + r^2 d\theta^2 + r^2 \sin^2\theta\, d\phi^2\right]$.

This is the spatial part of the metric in Equation (59) and is characterized by the scale factor a(t), a function of time, and the curvature constant k of the space; these are determined by the self-gravitating properties of the matter-energy content of the universe. The spatial part of the metric incorporates the cosmological principle, implying homogeneity and isotropy, and provides the kinematics of the geometry of spacetime; we will observe later that the Einstein equations provide its dynamics through the scale factor a(t).
Comoving Coordinates and Peculiar Velocities
The coordinates (r, θ, φ) form the cosmological rest frame and are known as comoving coordinates. They can be considered constant because the particles remain at rest in these coordinates. Peculiar velocity is the motion of particles with respect to the comoving coordinates; the peculiar velocities of galaxies and supernovae are ignored in cosmology in the expanding spacetime. Since the momentum of a freely moving particle scales as $p \propto 1/a(t)$, momentum is redshifted in the expanding spacetime and freely moving particles come to rest in comoving coordinates. The physical distance between two points is calculated as the scale factor a(t) times the coordinate distance. The expression without the scale factor,

$\frac{dr^2}{1 - kr^2} + r^2 d\theta^2 + r^2 \sin^2\theta\, d\phi^2$, (61)

is the purely kinematical statement of the geometry of spacetime and represents the line element of the three-dimensional space with the hidden symmetry of being homogeneous and isotropic. It represents three geometries for the three values of k.
The Geometry of the Spherical World
For k = +1, the hypersurface is

$x_1^2 + x_2^2 + x_3^2 + x_4^2 = a^2$,

which represents a three-dimensional sphere embedded in a four-dimensional Euclidean space. This space is finite and closed.
The Geometry of the Hyperbolic World
For k = −1, the hypersurface is

$x_1^2 + x_2^2 + x_3^2 - x_4^2 = -a^2$,

which represents a three-dimensional hyperboloid embedded in a four-dimensional pseudo-Euclidean space. This space is infinite and open.
The Geometry of the Euclidean World
For k = 0, the hypersurface is ordinary three-dimensional Euclidean flat space. This space is also infinite and open. To determine the Friedmann equations, we first write the components of the metric tensor. Since the metric is diagonal, owing to homogeneity and isotropy, the diagonal components are

$g_{00} = -1, \quad g_{11} = \frac{a^2(t)}{1 - kr^2}, \quad g_{22} = a^2(t)\, r^2, \quad g_{33} = a^2(t)\, r^2 \sin^2\theta$.

We now turn to solving the FLRW metric, beginning with the Christoffel symbols of the 2nd kind, or affine connections, which are given by

$\Gamma^{\lambda}_{\mu\nu} = \frac{1}{2} g^{\lambda\sigma}\left(\partial_{\mu} g_{\sigma\nu} + \partial_{\nu} g_{\sigma\mu} - \partial_{\sigma} g_{\mu\nu}\right)$. (66)

In four dimensions these have $4^3 = 64$ components, which can be grouped into generalized cases according to whether the indices µ, ν, λ, and σ are temporal or spatial (one purely temporal case, two sets of twelve mixed cases, and one set of twenty-four); most of these vanish for the FLRW metric.
Non-Vanishing Christoffel Symbols
With the help of the formula in Equation (66), we determine the following non-vanishing Christoffel symbols of the 2nd kind for the metric in Equation (57), the metric of the universe in standard cosmology:

$\Gamma^{0}_{11} = \frac{a\dot{a}}{1 - kr^2}, \quad \Gamma^{0}_{22} = a\dot{a}\, r^2, \quad \Gamma^{0}_{33} = a\dot{a}\, r^2 \sin^2\theta,$

$\Gamma^{1}_{01} = \Gamma^{2}_{02} = \Gamma^{3}_{03} = \frac{\dot{a}}{a}, \quad \Gamma^{1}_{11} = \frac{kr}{1 - kr^2},$

$\Gamma^{1}_{22} = -r(1 - kr^2), \quad \Gamma^{1}_{33} = -r(1 - kr^2)\sin^2\theta,$

$\Gamma^{2}_{12} = \Gamma^{3}_{13} = \frac{1}{r}, \quad \Gamma^{2}_{33} = -\sin\theta\cos\theta, \quad \Gamma^{3}_{23} = \cot\theta.$
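As a cross-check of this list, a short symbolic computation (our sketch, not from the paper) reproduces the symbols directly from Equation (66):

```python
import sympy as sp

t, r, th, ph, k = sp.symbols('t r theta phi k')
a = sp.Function('a')(t)
x = [t, r, th, ph]
g = sp.diag(-1, a**2 / (1 - k * r**2), a**2 * r**2,
            a**2 * r**2 * sp.sin(th)**2)          # FLRW metric, Equation (57)
ginv = g.inv()

def Gamma(l, m, nu):
    """Christoffel symbols of the 2nd kind, Equation (66)."""
    return sp.simplify(sum(ginv[l, s] * (sp.diff(g[s, m], x[nu])
                       + sp.diff(g[s, nu], x[m]) - sp.diff(g[m, nu], x[s]))
                       for s in range(4)) / 2)

for l in range(4):
    for m in range(4):
        for nu in range(m, 4):                    # symmetric in the lower indices
            G = Gamma(l, m, nu)
            if G != 0:
                print(f"Gamma^{l}_{{{m}{nu}}} =", G)
# e.g. Gamma^0_{11} = a a'/(1 - k r^2), Gamma^1_{01} = a'/a, Gamma^2_{12} = 1/r
```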
Riemann Curvature Tensor
The Riemann curvature tensor $R^{\sigma}{}_{\mu\nu\lambda}$ has $4^4 = 256$ components in four dimensions, of which only twenty can possibly be non-vanishing. The Riemann tensor is given by

$R^{\sigma}{}_{\mu\nu\lambda} = \partial_{\lambda} \Gamma^{\sigma}_{\mu\nu} - \partial_{\nu} \Gamma^{\sigma}_{\mu\lambda} + \Gamma^{\sigma}_{\rho\lambda} \Gamma^{\rho}_{\mu\nu} - \Gamma^{\sigma}_{\rho\nu} \Gamma^{\rho}_{\mu\lambda}$.

Its non-vanishing components for the FLRW metric follow from the Christoffel symbols listed above; they are built from the combinations $a\ddot{a}$ and $\dot{a}^2 + k$, multiplied by the appropriate metric factors.
Ricci Curvature Tensor and Ricci Scalar
The Ricci tensor $R_{\mu\nu}$ is obtained by contracting the Riemann tensor $R^{\sigma}{}_{\mu\nu\lambda}$: we set λ = σ, so that $R^{\sigma}{}_{\mu\nu\sigma} = R_{\mu\nu}$. In four dimensions it has $4^2 = 16$ components; the non-vanishing ones for the FLRW metric are

$R_{00} = -3\frac{\ddot{a}}{a}, \quad R_{11} = \frac{a\ddot{a} + 2\dot{a}^2 + 2k}{1 - kr^2},$

$R_{22} = \left(a\ddot{a} + 2\dot{a}^2 + 2k\right) r^2, \quad R_{33} = \left(a\ddot{a} + 2\dot{a}^2 + 2k\right) r^2 \sin^2\theta.$

The Ricci scalar R is obtained by contracting the Ricci tensor,

$R = g^{\mu\nu} R_{\mu\nu}$.

Using double sums and simplifying, in four dimensions we have

$R = 6\left[\frac{\ddot{a}}{a} + \frac{\dot{a}^2}{a^2} + \frac{k}{a^2}\right]$.
Einstein Tensor $G_{\mu\nu}$
The Einstein tensor is defined in terms of the Ricci tensor $R_{\mu\nu}$, the Ricci scalar R, and the metric tensor $g_{\mu\nu}$ as

$G_{\mu\nu} = R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R$.

In four dimensions it has $4^2 = 16$ components; the non-vanishing ones for the FLRW metric are

$G_{00} = 3\left[\frac{\dot{a}^2}{a^2} + \frac{k}{a^2}\right], \quad G_{11} = -\frac{2a\ddot{a} + \dot{a}^2 + k}{1 - kr^2},$

$G_{22} = -\left(2a\ddot{a} + \dot{a}^2 + k\right) r^2, \quad G_{33} = -\left(2a\ddot{a} + \dot{a}^2 + k\right) r^2 \sin^2\theta.$ (86)

In Equation (86), the spatial components of the Einstein tensor can be written as a single tensorial equation,

$G_{\mu\nu} = -\left(\frac{2\ddot{a}}{a} + \frac{\dot{a}^2}{a^2} + \frac{k}{a^2}\right) g_{\mu\nu},$
where µ = ν = 1, 2, 3, and the mixed Einstein tensor can be found from

$G^{\mu}{}_{\nu} = g^{\mu\lambda} G_{\lambda\nu}$. (88)

Next we calculate the energy-momentum tensor of a perfect fluid in mixed form; the cosmological principle and Weyl's postulate imply that the material content of the universe is to be regarded as a perfect fluid [1][2][3],

$T^{\mu}{}_{\nu} = (\rho + p)\, u^{\mu} u_{\nu} + p\, \delta^{\mu}_{\nu}$.
The non-vanishing components of the energy-momentum tensor are

$T^{0}{}_{0} = -\rho, \quad T^{1}{}_{1} = T^{2}{}_{2} = T^{3}{}_{3} = p$. (89)

Substituting the Einstein tensor and energy-momentum tensor components from Equations (88) and (89), respectively, into the Einstein field equations $G^{\mu}{}_{\nu} = 8\pi G\, T^{\mu}{}_{\nu}$ yields the component equations (92).
Derivation of Friedmann's Equations
Now, using the Einstein field equations, we derive the Friedmann equations, which describe the evolution of the universe by relating the large-scale geometrical characteristics of spacetime to the large-scale distribution of matter, energy, and momentum. From the 0-0 component of Equation (92) we can write

$\frac{\dot{a}^2}{a^2} + \frac{k}{a^2} = \frac{8\pi G}{3}\rho$. (93)

Of the other components listed in Equation (92), the 2nd and 3rd repeat the 1st, so we write it only once:

$\frac{2\ddot{a}}{a} + \frac{\dot{a}^2}{a^2} + \frac{k}{a^2} = -8\pi G\, p$. (94)

From Equation (93),

$H^2 = \frac{8\pi G}{3}\rho - \frac{k}{a^2}$, (95)

where $H = \dot{a}/a$ is the Hubble parameter, which gives the expansion rate. Combining Equations (93) and (94) yields the acceleration equation,

$\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}(\rho + 3p)$. (96)

Since $\dot{H} = \ddot{a}/a - H^2$, Equation (95) can also be written in terms of the Hubble parameter as

$\dot{H} = -4\pi G(\rho + p) + \frac{k}{a^2}$.
Now, differentiating Equation (93) with respect to time (after shifting the factor 3 to the right-hand side) and combining with the difference of Equations (94) and (93), one finds after simplification the conservation equation

$\dot{\rho} + 3\frac{\dot{a}}{a}(\rho + p) = 0$. (108)

The cosmological principle compels us to consider a fluid in which inhomogeneities are smoothed out, so the evolution of the universe is described by a perfect fluid characterized by energy density ρ and isotropic pressure p. We further assume that the pressure depends only on the density, neglecting its dependence on volume and temperature, i.e., p = p(ρ), which defines a barotropic fluid. In addition, pressure and density bear a linear relationship,

$p = w\rho$, (109)

where $w = p/\rho$ is a dimensionless constant known as the equation-of-state parameter. Substituting Equation (109) into Equation (108), we have another form of energy conservation in terms of w,

$\dot{\rho} + 3\frac{\dot{a}}{a}(1 + w)\rho = 0$. (110)

Equations (95), (96), and (108) represent the two Friedmann equations, namely the evolution and acceleration equations, and the equation of conservation, respectively. According to the latter, the evolution of all kinds of matter is determined by the conservation of energy and momentum.
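A quick symbolic solution (our sketch) of Equation (110), rewritten in terms of the scale factor so that the explicit time dependence drops out, recovers the familiar scaling $\rho \propto a^{-3(1+w)}$:

```python
import sympy as sp

a, w = sp.symbols('a w', positive=True)
rho = sp.Function('rho', positive=True)

# a d(rho)/da = -3 (1+w) rho: Equation (110) with d/dt = (da/dt) d/da
sol = sp.dsolve(sp.Eq(a * rho(a).diff(a), -3 * (1 + w) * rho(a)), rho(a))
print(sol)   # rho(a) = C1 * a**(-3*w - 3), i.e. rho is proportional to a^{-3(1+w)}
```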
Friedmann Equations with Cosmological Constant Λ
We must incorporate dark matter and dark energy in the matter-energy content because of the significance of their roles in the current accelerated expansion and the present spatially flat geometry of the universe; their role in the evolution of the universe is unavoidable. The solution of the FLRW line element gives the Friedmann equations from the Einstein field equations with cosmological constant Λ, usually written in the form

$R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R + \Lambda g_{\mu\nu} = 8\pi G\, T_{\mu\nu}$, (111)

and the Friedmann equations with cosmological constant Λ can be worked out as

$H^2 = \left(\frac{\dot{a}}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{k}{a^2} + \frac{\Lambda}{3}$, (112)

$\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}(\rho + 3p) + \frac{\Lambda}{3}$. (113)

The equation of energy conservation can also be calculated from these Friedmann equations in the presence of the cosmological constant Λ. Multiplying Equation (112) by $3a^2$, differentiating with respect to time, and then dividing by $\dot{a}$, we have

$6\ddot{a} = 8\pi G\left(\dot{\rho}\,\frac{a^2}{\dot{a}} + 2\rho a\right) + 2\Lambda a$.

Substituting the 2nd Friedmann equation, Equation (113), into this and simplifying, we obtain

$\dot{\rho} + 3\frac{\dot{a}}{a}(\rho + p) = 0$,

where ρ and p receive contributions from everything that exists and constitutes the universe.
A Geometric Object Embedded in an n-Dimensional Space
An object cannot be placed in a space whose dimension is equal to or less than that of the object; the space must have a larger number of dimensions for the object to rest in it. The presence of an object in a space of higher dimension than the object is called the embedding of the object in that space.
Intrinsic Geometry
The properties of the geometry of a surface that are accessible to two-dimensional beings confined to it are called intrinsic, because such beings cannot observe how the surface is shaped within a three- or higher-dimensional space.
Extrinsic Geometry
The properties of the geometry that are accessible to a higher-dimensional creature are called extrinsic, because such a creature can observe how surfaces are shaped within three- or higher-dimensional spaces. Extrinsic geometrical properties describe how an object has been embedded in some higher-dimensional space: they depend on how bodies are placed in the space and how they affect it, and the geometry that comes into existence through the interaction between the space and the body placed in it describes the extrinsic properties. General relativity considers the geometry of spacetime as an extrinsic property in this sense, owing its existence to the bodies present in it.
The Geometry of 2-Sphere Embedded in Three-Dimensional Space
We consider a three-dimensional Euclidean space whose three dimensions, length, width, and height, are represented by three coordinate axes. This space consists of points separate from time, and therefore we do not call its points events. We assign a triplet of Cartesian coordinates (x, y, z) to each point, where x, y, and z are measured along the three axes. The sketch of the embedding of the geometry of the 2-sphere in three-dimensional space is drawn in Figure 10. The line element in this space is given by

$ds^2 = dx^2 + dy^2 + dz^2$. (118)

Consider now a sphere with its center at the origin of this coordinate system and radius a. The equation of this sphere is

$x^2 + y^2 + z^2 = a^2$. (119)

Taking differentials of Equation (119),

$x\, dx + y\, dy + z\, dz = 0$, (121)

and solving Equation (121) for dz,

$dz = -\frac{x\, dx + y\, dy}{z}$.

Finding the value of z from Equation (119), $z = \sqrt{a^2 - x^2 - y^2}$, and substituting, we obtain

$dz = -\frac{x\, dx + y\, dy}{\sqrt{a^2 - x^2 - y^2}}$.

Substituting for $dz^2$ in Equation (118), the line element takes the form

$ds^2 = dx^2 + dy^2 + \frac{(x\, dx + y\, dy)^2}{a^2 - x^2 - y^2}$. (126)

Equation (126) represents the line element for the sphere in terms of the Cartesian coordinates (x, y). We observe that the line element in Equation (126) has a coordinate singularity at $a^2 = x^2 + y^2$, corresponding to the equator of the sphere relative to the point A; in the intrinsic geometry of the 2-sphere, no such physical situation exists at the equator. The embedding scenario shows how the coordinates (x, y) cover the whole surface of the sphere uniquely up to this point A, and the geometry of the 2-sphere in these coordinates becomes geometrically meaningful in three-dimensional Euclidean space. We can transform the line element in Equation (126) into spherical polar coordinates by taking

$x = r\sin\theta\cos\phi, \qquad y = r\sin\theta\sin\phi$, (127)

and differentiating each of x and y with respect to r, θ, and φ:

$dx = \sin\theta\cos\phi\, dr + r\cos\theta\cos\phi\, d\theta - r\sin\theta\sin\phi\, d\phi,$
$dy = \sin\theta\sin\phi\, dr + r\cos\theta\sin\phi\, d\theta + r\sin\theta\cos\phi\, d\phi.$ (128)

Squaring and adding the expressions for x and y in Equation (127) gives

$x^2 + y^2 = r^2\sin^2\theta$, (129)

and squaring and adding dx and dy in Equation (128) gives

$dx^2 + dy^2 = (\sin\theta\, dr + r\cos\theta\, d\theta)^2 + r^2\sin^2\theta\, d\phi^2$. (130)

We also find the expression

$x\, dx + y\, dy = r\sin\theta\,(\sin\theta\, dr + r\cos\theta\, d\theta)$, (131)

and squaring Equation (131),

$(x\, dx + y\, dy)^2 = r^2\sin^2\theta\,(\sin\theta\, dr + r\cos\theta\, d\theta)^2$. (132)

Substituting Equations (129), (130), and (132) into Equation (126) and simplifying, we obtain

$ds^2 = \frac{(\sin\theta\, dr + r\cos\theta\, d\theta)^2}{1 - r^2\sin^2\theta / a^2} + r^2\sin^2\theta\, d\phi^2$. (133)

Equation (133) gives the line element for the sphere in terms of the spherical polar coordinates (r, θ, φ). Introducing $\xi = r\sin\theta$, the line element in Equation (126) results in the alternative form

$ds^2 = \frac{d\xi^2}{1 - \xi^2/a^2} + \xi^2\, d\phi^2$. (135)

The line element in Equation (135) in addition gives us the freedom to choose an arbitrary point on the surface of the sphere, ξ = 0, as the origin of the coordinate system; this freedom is a hidden symmetry. We can develop ξ and φ coordinate curves on the surface of the sphere by generating a standard coordinate system (ξ, φ) on the tangent plane at the point A and projecting it vertically downward onto the surface of the sphere. We further observe that the line element in Equation (135) has a coordinate singularity at ξ = a, corresponding to the equator of the sphere relative to the point A; in the intrinsic geometry of the 2-sphere there is no trace of any such situation at the equator.
The embedding picture manifests how the coordinates (ξ, φ) cover the whole surface of the sphere uniquely up to this point A. The geometry of 2-sphere in these coordinates becomes geometrically meaningful in three dimensional Euclidean space.
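The elimination of dz leading to Equation (126) can be reproduced symbolically; the following sketch (ours; dx and dy are treated as formal symbols) performs the substitution:

```python
import sympy as sp

x, y, a = sp.symbols('x y a', positive=True)
z = sp.sqrt(a**2 - x**2 - y**2)               # upper hemisphere of x^2+y^2+z^2=a^2
dx, dy = sp.symbols('dx dy')

dz = sp.diff(z, x) * dx + sp.diff(z, y) * dy  # total differential of z(x, y)
ds2 = sp.simplify(dx**2 + dy**2 + dz**2)      # substitute into ds^2 = dx^2+dy^2+dz^2
print(ds2)   # algebraically equal to dx^2 + dy^2 + (x dx + y dy)^2/(a^2-x^2-y^2)
```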
The Geometry of 3-Sphere Embedded in Four Dimensional Euclidean Space
Spaces with dimensions higher than three are now significant in the mathematical sciences for a proper description of the physical universe. We consider a four-dimensional Euclidean space, a mathematical extension of three-dimensional Euclidean space. Minkowski used a four-dimensional spacetime to explain the phenomena of the physical world as required by special relativity; the structure of Euclidean four-dimensional space is simple compared with the Minkowskian structure of spacetime, Minkowskian four-dimensional spacetime being pseudo-Euclidean. In four-dimensional Euclidean space, we assign a quadruplet of Cartesian coordinates (x, y, z, w) to each point, where x, y, z, and w are measured along the four axes. The line element in this space is given by

$ds^2 = dx^2 + dy^2 + dz^2 + dw^2$. (136)

Consider now a sphere with its center at the origin of this coordinate system and radius a; its equation reads

$x^2 + y^2 + z^2 + w^2 = a^2$. (137)

Taking differentials of Equation (137),

$x\, dx + y\, dy + z\, dz + w\, dw = 0$, (138)

and finding the value of dw from Equation (138),

$dw = -\frac{x\, dx + y\, dy + z\, dz}{w}$.

Finding w from Equation (137), $w = \sqrt{a^2 - x^2 - y^2 - z^2}$, and substituting,

$dw = -\frac{x\, dx + y\, dy + z\, dz}{\sqrt{a^2 - x^2 - y^2 - z^2}}$.

Substituting for $dw^2$ in Equation (136), the line element takes the form

$ds^2 = dx^2 + dy^2 + dz^2 + \frac{(x\, dx + y\, dy + z\, dz)^2}{a^2 - x^2 - y^2 - z^2}$. (144)

Equation (144) gives the line element for the sphere in terms of the Cartesian coordinates (x, y, z). We observe that it has a coordinate singularity at $a^2 = x^2 + y^2 + z^2$, corresponding to the equator of the sphere relative to the point A; in the intrinsic geometry of the 3-sphere, no such situation exists at the equator. The embedding picture shows how the coordinates (x, y, z) cover the whole surface of the sphere uniquely up to this point A, and the geometry of the 3-sphere in these coordinates becomes geometrically meaningful in four-dimensional Euclidean space. We transform the line element in Equation (144) into spherical polar coordinates,

$x = r\sin\theta\cos\phi, \qquad y = r\sin\theta\sin\phi, \qquad z = r\cos\theta$, (145)

whose differentials are

$dx = \sin\theta\cos\phi\, dr + r\cos\theta\cos\phi\, d\theta - r\sin\theta\sin\phi\, d\phi,$
$dy = \sin\theta\sin\phi\, dr + r\cos\theta\sin\phi\, d\theta + r\sin\theta\cos\phi\, d\phi,$
$dz = \cos\theta\, dr - r\sin\theta\, d\theta.$ (146)

Squaring and adding x, y, and z in Equation (145) gives

$x^2 + y^2 + z^2 = r^2$, (147)

and squaring and adding dx, dy, and dz in Equation (146) gives

$dx^2 + dy^2 + dz^2 = dr^2 + r^2 d\theta^2 + r^2\sin^2\theta\, d\phi^2$. (148)

We also calculate

$x\, dx + y\, dy + z\, dz = r\, dr$, (149)

and squaring Equation (149),

$(x\, dx + y\, dy + z\, dz)^2 = r^2\, dr^2$. (150)

Substituting Equations (147), (148), and (150) into Equation (144) and simplifying, we get

$ds^2 = \frac{dr^2}{1 - r^2/a^2} + r^2 d\theta^2 + r^2\sin^2\theta\, d\phi^2$, (151)

which can further be expressed in the form

$ds^2 = \frac{dr^2}{1 - kr^2} + r^2\left(d\theta^2 + \sin^2\theta\, d\phi^2\right), \qquad k = \frac{1}{a^2}$. (152)

It is important to note that for a → ∞, Equation (152) reduces to the metric of ordinary three-dimensional Euclidean space in spherical polar coordinates (cf. Equation (148)). The metric in Equation (151) has a singularity at r = a, which is just a coordinate singularity and has nothing to do with the physical reality of the sphere. The line element in Equation (152) also results in an alternative form,

$ds^2 = \frac{d\xi^2}{1 - \xi^2/a^2} + \xi^2\left(d\theta^2 + \sin^2\theta\, d\phi^2\right)$, (155)

which in addition gives us the freedom to choose an arbitrary point on the surface of the sphere, ξ = 0, as the origin of the coordinate system.
This freedom is implied as a hidden symmetry. We can develop ξ and φ coordinate curves on the surface of the sphere by generating a standard coordinate system (ξ, φ) on the tangent plane at the point A and projecting it vertically downward onto the surface of the sphere. We further observe that the line element in Equation (155) has a coordinate singularity at ξ = a, corresponding to the equator of the sphere relative to the point A; in the intrinsic geometry of the 3-sphere no such situation occurs at the equator. The embedding picture shows how these coordinates cover the whole surface of the sphere uniquely up to the point A, and the geometry of the 3-sphere in these coordinates becomes geometrically meaningful in four-dimensional Euclidean space.
Einstein's Static Universe
Albert Einstein himself applied general relativity to the largest scales of spacetime [3] and presented the very first relativistic model of the universe, laying the foundations of modern theoretical cosmology; the model was later called the Einstein world, or Einstein universe. For this purpose, Einstein modified his field equations by proposing a built-in energy density, the cosmological constant Λ, in the geometrical structure of spacetime itself, providing a repulsive gravity to keep the universe from contracting:

$R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R + \Lambda g_{\mu\nu} = 8\pi G\, T_{\mu\nu}$. (156)

Equation (156), when solved for the most homogeneous and isotropic geometry of FLRW spacetime, produces the Friedmann equations derived earlier,

$H^2 = \frac{8\pi G}{3}\rho - \frac{k}{a^2} + \frac{\Lambda}{3}$, (157)

$\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}(\rho + 3p) + \frac{\Lambda}{3}$. (158)

For a static universe H = 0, which also implies $\ddot{a}/a = 0$. A static universe contains cold matter, meaning it has no pressure, p = 0, so Equations (157) and (158) reduce to, respectively,

$\frac{8\pi G}{3}\rho + \frac{\Lambda}{3} = \frac{k}{a^2}$, (159)

$-\frac{4\pi G}{3}\rho + \frac{\Lambda}{3} = 0$. (160)

From Equation (160) we have

$\Lambda = 4\pi G \rho$. (161)

Substituting this value of Λ into Equation (159) and simplifying, we again get the value of Λ in terms of the curvature term k and the scale factor a(t), that is,

$\Lambda = \frac{k}{a^2}$. (162)

The line element for the static Einstein universe can now be written using the FLRW metric. From Equation (162), for k = +1 we have $a^2(t) = \Lambda^{-1}$; substituting into Equation (59), the static solution for the closed universe becomes

$ds^2 = -dt^2 + \Lambda^{-1}\left[\frac{dr^2}{1 - r^2} + r^2\left(d\theta^2 + \sin^2\theta\, d\phi^2\right)\right]$. (163)

Using Schwarzschild-type coordinates, rescaling the radial coordinate by defining R = ra, we have

$ds^2 = -dt^2 + \frac{dR^2}{1 - \Lambda R^2} + R^2\left(d\theta^2 + \sin^2\theta\, d\phi^2\right)$.

Case I (empty universe): for Λ = 0, Equation (161) gives $4\pi G\rho = 0 \Rightarrow \rho = 0$, which implies k = 0, a Euclidean flat universe. It does not qualify as an Einstein static universe because it is empty.
Case II (non-empty universe): the Einstein universe corresponds to Λ ≠ 0 and ρ ≠ 0, implying k > 0, which represents a universe whose spatial hypersurfaces have Riemannian (spherical) geometry. In Einstein's universe ρ > 0, so a positive cosmological constant Λ > 0 is required, which also implies k > 0.
The equation of energy conservation can be obtained from Equations (157) and (158).
After simplification, we get

$\dot{\rho} + 3\frac{\dot{a}}{a}(\rho + p) = 0$.

For the cold-matter universe p = 0, and the resulting equation is separable,

$\frac{\dot{\rho}}{\rho} = -3\frac{\dot{a}}{a} \;\Rightarrow\; \ln\rho = -3\ln a + \ln Z$,

where Z is some positive constant of integration, Z > 0, so that

$\rho = \frac{Z}{a^3}$. (174)

Further, since the universe does not expand, $a(t) = a(t_0) = a_0$; replacing a(t) with $a_0$ in Equation (174),

$\rho = \frac{Z}{a_0^3}$. (176)

Substituting the value of ρ from Equation (176) into Equation (158), the 2nd Friedmann equation, with p = 0, we obtain

$\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\frac{Z}{a^3} + \frac{\Lambda}{3}$, (175)

and substituting into Equation (161) gives

$\Lambda = \frac{4\pi G Z}{a_0^3}$, (177)

where $4\pi G Z / a_0^3 > 0$ since Z > 0. Now we perturb the static solution slightly with the perturbation

$a(t) = a_0\left(1 + \varepsilon(t)\right), \qquad \varepsilon \ll 1$.

Substituting this into Equation (175), using the Maclaurin series expansion (since ε ≪ 1) so that $(1 + \varepsilon)^{-2} \approx 1 - 2\varepsilon$, and using the value $\Lambda = 4\pi G Z / a_0^3$ from Equation (177), the perturbation equation can be expressed in the form

$\ddot{\varepsilon} = \Lambda\, \varepsilon$.

As the cosmological constant is Λ > 0, the solution of this equation reads

$\varepsilon(t) = P\, e^{\sqrt{\Lambda}\, t} + Q\, e^{-\sqrt{\Lambda}\, t}$.

Because the first term of this solution is growing, for an arbitrary initial perturbation with P ≠ 0, Q ≠ 0 the perturbation grows and does not remain small, which implies that the static solution is unstable; P = 0 is possible only for specialized initial conditions, such as singular ones.
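A minimal numerical sketch (ours; Λ is set to unity in arbitrary units, purely for illustration) integrates the perturbation equation $\ddot{\varepsilon} = \Lambda \varepsilon$ and displays the runaway growth responsible for the instability:

```python
import numpy as np
from scipy.integrate import solve_ivp

Lam = 1.0                                     # cosmological constant (arbitrary units)

def rhs(t, y):                                # y = [eps, eps_dot]
    return [y[1], Lam * y[0]]

sol = solve_ivp(rhs, (0.0, 5.0), [1e-6, 0.0])  # tiny initial perturbation, at rest
print(sol.y[0, -1] / 1e-6)                     # ~ cosh(sqrt(Lam)*5) ~ 74: it grows
```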
De Sitter Universe
When the energy density of matter is removed from Einstein's static model with positive cosmological constant, the de Sitter model results. The de Sitter model of the universe, presented in 1917, was proposed just after Einstein presented his static closed model. Einstein, resorting to Mach's principle, held the view that it is matter density alone in the universe that causes inertia and gravitation. To check the status of this belief, de Sitter posed a second model of the universe devoid of matter density, $T_{\mu\nu} = 0$, while retaining the cosmological constant, that is, $G_{\mu\nu} + \Lambda g_{\mu\nu} = 0$. The de Sitter model is the maximally symmetric solution of Einstein's field equations with vanishing matter density. The geometric structure of the spacetime of the de Sitter model is comparatively more complicated than that of Einstein's model of the universe. A characteristic of the de Sitter model is that it predicts redshift even though it contains neither matter density nor radiation. We review the de Sitter model using the Friedmann equations; it is important to note, however, that these equations were worked out after the development of the de Sitter model. We derived the Friedmann equations above in the presence of the cosmological constant Λ, Equations (112) and (113). The de Sitter universe corresponds to ρ = 0 (and hence p = 0) with k = 0, so Equation (186) takes the form

$\left(\frac{\dot{a}}{a}\right)^2 = \frac{\Lambda}{3}$. (188)

Integrating with respect to time,

$a(t) = a_0\, e^{\sqrt{\Lambda/3}\, t}$. (190)

From Equation (188), $H = \dot{a}/a = \sqrt{\Lambda/3}$ is constant, so Equation (190) can be expressed as

$a(t) = a_0\, e^{H t}$,

which corresponds to the modified Einstein field equations with the cosmological constant alone as source.
The Conformal FLRW Line Element
The metric in Equation (57) can be conformally recast by defining conformal time as $\tau = \int dt / a(t)$, so that

$dt = a(t)\, d\tau$. (194)

After substituting Equation (194) into Equation (57) and simplifying, we get the line element in the form

$ds^2 = a^2(\tau)\left[-d\tau^2 + \frac{dr^2}{1 - kr^2} + r^2\left(d\theta^2 + \sin^2\theta\, d\phi^2\right)\right]$. (195)

With conformal time, the scale factor a(τ) multiplies the spatial as well as the temporal components of the metric. A function f(t) of time can now be differentiated as

$\dot{f} = \frac{f'}{a}$,

where the dot and the prime denote derivatives with respect to cosmic and conformal time, respectively, and $\mathcal{H} = a'(\tau)/a(\tau)$ is the conformal Hubble parameter. Replacing f(t) and its derivatives with a(t), in correspondence with cosmic time t and conformal time τ, we similarly find

$\dot{a} = \frac{a'}{a} \qquad \text{and} \qquad \ddot{a} = \frac{a''}{a^2} - \frac{(a')^2}{a^3}$.

Now we solve the energy conservation equation, Equation (108), to obtain the relation between the energy density ρ, the scale factor a, and the equation of state $p/\rho = w$. Integrating Equation (108) with p = wρ,

$\frac{d\rho}{\rho} = -3(1 + w)\,\frac{da}{a}$,

which gives

$\rho \propto a^{-3(1+w)}$.

Now, from the 1st Friedmann equation, after simplification and integration, we find

$a(t) \propto t^{\frac{2}{3(1+w)}} \qquad (w \neq -1)$.

For w = −1, 0, and 1/3 we find the pressure, energy density, and scale factor characterizing the expansion of the universe in its three phases, namely vacuum-dominated, matter-dominated, and radiation-dominated, respectively.
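The power-law exponents $a \propto t^{2/(3(1+w))}$ can be tabulated at a glance; this trivial sketch (ours) evaluates them for the radiation- and matter-dominated cases:

```python
# Exponents of the flat, single-component power-law solutions a(t) ~ t^n.
for name, w in [("radiation", 1.0 / 3.0), ("matter", 0.0)]:
    n = 2.0 / (3.0 * (1.0 + w))
    print(f"{name:9s}: a(t) ~ t^{n:.3f}")   # 0.500 for radiation, 0.667 for matter
# w = -1 (vacuum) is the degenerate case: H is constant and a(t) grows exponentially.
```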
Radiation Domination
For w = 1/3, $\rho \propto a^{-4}$ and $a(t) \propto t^{1/2}$.
Matter Domination
For w = 0, $\rho \propto a^{-3}$ and $a(t) \propto t^{2/3}$. Now, from the 1st Friedmann equation, Equation (112), with Λ = 0 and $H = \partial_t \ln a$, we relate the curvature of spacetime k and the expansion characterized by the scale factor a(t) to the energy density ρ(t) of the universe, and find the expression for the critical density required to sustain the current rate of expansion:

$H^2 = \frac{8\pi G}{3}\rho - \frac{k}{a^2}$. (215)
For the critical density $\rho_c$, the curvature k of the spacetime geometry must vanish, so that Equation (215) reduces to $H^2 = \frac{8\pi G}{3}\rho_c$, from which we obtain the expression for the critical density,

$\rho_c = \frac{3H^2}{8\pi G}$. (217)

Dividing both sides of Equation (215) by $H^2$ and rearranging,

$1 = \frac{8\pi G \rho}{3H^2} - \frac{k}{a^2 H^2}$, (218)

where $3H^2/(8\pi G) = \rho_c$; therefore Equation (218) becomes

$\Omega - 1 = \frac{k}{a^2 H^2}$, (220)

where $\Omega = \rho/\rho_c$ is the density parameter, in terms of which we can make predictions about the geometry of the universe: the local geometry is probed by observing whether the relative density is smaller than, greater than, or equal to unity. Figure 11 represents the three geometries the density parameter allows: spherical geometry for Ω₀ > 1, hyperbolic geometry for Ω₀ < 1, and flat geometry for Ω₀ = 1. Equation (220) can also be derived from Equation (215) in an alternative way. Multiplying and dividing the 1st term on the right-hand side of Equation (215) by $\rho_c$,

$H^2 = \frac{8\pi G}{3}\rho_c\,\frac{\rho}{\rho_c} - \frac{k}{a^2}$, (221)

and using the density parameter $\Omega = \rho/\rho_c$ in Equation (221), we can write

$H^2 = \frac{8\pi G \rho_c}{3}\,\Omega - \frac{k}{a^2}$. (222)

From the critical density expression in Equation (217), $8\pi G \rho_c / 3 = H^2$; substituting this into Equation (222) and using the density parameter, we get

$H^2 = H^2\,\Omega - \frac{k}{a^2}$,

which gives the following form, similar to Equation (220):

$\Omega - 1 = \frac{k}{a^2 H^2}$.

The density parameter is thus decisive in describing the evolution of the universe. Its present value is denoted by Ω₀, and it gives the following three geometries of the universe: Ω₀ > 1, a closed universe with spherical geometry; Ω₀ < 1, an open universe with hyperbolic geometry; and Ω₀ = 1, a flat universe with Euclidean geometry.
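As a compact restatement of this trichotomy, the following sketch (ours; the tolerance is arbitrary) maps a density parameter to the corresponding spatial geometry:

```python
# Classify the spatial geometry from Omega = rho / rho_c.
def geometry(omega, tol=1e-9):
    if abs(omega - 1.0) < tol:
        return "flat (k = 0, Euclidean)"
    return "closed (k = +1, spherical)" if omega > 1.0 else "open (k = -1, hyperbolic)"

for omega in (0.3, 1.0, 1.7):
    print(omega, "->", geometry(omega))
```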
Particle Horizon
When the scale factor a(t) is multiplied by the comoving coordinate distance, we get the proper distance. In cosmology, causality is one-directional, since we only receive photons from the outer world, which is a self-sufficient approach. The horizon, or horizon distance, of the universe is defined as the maximum distance light could have traveled to us on Earth in the time since the beginning of the universe, i.e., since it first became transparent to electromagnetic radiation [45]; the horizon thus represents the causal distance in the universe.
so that $d_H(t) \sim H^{-1}(t)$. The particle horizon is defined as the distance traveled by a photon from the time of the big bang up to some later time $t$; it puts limits on communication from the deep past.
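As an illustrative sketch, using the standard textbook expression $d_p(t) = a(t)\int_0^t dt'/a(t')$ (not written out explicitly above), the particle horizon of a matter-dominated toy universe evaluates to $3ct$:

```python
# Minimal sketch: proper particle horizon d_p(t) = a(t) * \int_0^t dt'/a(t')
# for a matter-dominated toy model a(t) = (t/t0)**(2/3), units c = t0 = 1.
from scipy.integrate import quad

a = lambda t: t ** (2.0 / 3.0)
chi, _ = quad(lambda t: 1.0 / a(t), 0.0, 1.0)   # comoving horizon (integrable singularity)
print(a(1.0) * chi)   # -> 3.0, i.e. d_p = 3 c t = 2 c / H for matter domination
```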
Event Horizon
An event horizon is the set of points from which light signals sent at some given time will never be received by an observer in the future. It sets limits on the horizon distance and on communication to the future, so that as long as it exists, the size of the causal patch of the universe remains finite.
Deceleration Parameter ($q_0$)
A Taylor series is a series expansion of a function about a given point. Here we require the one-dimensional Taylor series of a real function $f(x)$ about a point $x = a$,
$$f(x) = f(a) + f'(a)(x-a) + \tfrac{1}{2!}f''(a)(x-a)^2 + \cdots$$
We take $f$ to be the scale factor $a(t)$ and expand about the present time $t = t_0$,
$$a(t) = a(t_0) + \dot a(t_0)(t-t_0) + \tfrac12 \ddot a(t_0)(t-t_0)^2 + \cdots \qquad (233)$$
Dividing Equation (233) throughout by $a(t_0)$, ignoring the higher terms, multiplying and dividing the third term on the right-hand side by $\dot a(t_0)$ and then again by $\frac{\dot a(t_0)}{a(t_0)}$ and its reciprocal, and putting $\frac{\dot a(t_0)}{a(t_0)} = H_0$, the present value of the Hubble parameter, and $\frac{a(t_0)\ddot a(t_0)}{[\dot a(t_0)]^2} = -q_0$, Equation (237) reduces to
$$\frac{a(t)}{a(t_0)} = 1 + H_0(t-t_0) - \tfrac12 q_0 H_0^2 (t-t_0)^2,$$
where $q_0$ is called the deceleration parameter: the larger the value of $q_0$, the faster the deceleration. It can be further related to the acceleration equation
$$\frac{\ddot a}{a} = -\frac{4\pi G}{3}(\rho + 3p). \qquad (240)$$
Putting Equation (240) in Equation (239), with $p = 0$ for a matter-dominated universe and present energy density $\rho = \rho_0$, and dividing and multiplying by 2, we obtain $q_0 = \frac{4\pi G}{3H_0^2}\rho_0$. Now, since the critical density from the first Friedmann equation is $\rho_c = \frac{3H_0^2}{8\pi G}$, Equation (242) takes the form
$$q_0 = \frac{\rho_0}{2\rho_c} = \frac{\Omega_0}{2}.$$
The measurement of the deceleration parameter $q_0$ determines how much bigger the universe was in earlier times. Redshift measurements of Type Ia supernovae have shown, astoundingly, that $q_0 < 0$ at present, which means that the expansion of the universe is accelerating rather than decelerating, and affirms that the concept of dark energy must be acknowledged. Accelerated expansion corresponds to $q_0 < 0$, whereas $q_0 > 0$ corresponds to decelerated expansion. It is interesting to notice that for all of these components $H > 0$, i.e., an increasing scale factor giving the expansion of the universe. Moreover, to better understand the properties of each species, it is useful to write the deceleration parameter for a universe dominated by a single fluid as
$$q = \frac{1 + 3w}{2},$$
such that for both a matter-dominated and a radiation-dominated universe the expansion is decelerating. It is also interesting to notice that components with $w < -\tfrac13$ give an accelerated expansion.
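For a mixture of components the same reasoning gives $q_0 = \tfrac12\sum_i \Omega_i (1 + 3w_i)$; the sketch below evaluates this with assumed Planck-like density parameters (illustrative values, not from the text).

```python
# Minimal sketch: deceleration parameter today for an assumed Lambda-CDM mix,
# q0 = sum_i Omega_i * (1 + 3 w_i) / 2.
Omega = {"matter": 0.31, "radiation": 9e-5, "lambda": 0.69}   # assumed values
w     = {"matter": 0.0,  "radiation": 1/3,  "lambda": -1.0}

q0 = sum(Omega[s] * (1 + 3 * w[s]) / 2 for s in Omega)
print(f"q0 = {q0:.3f}")   # negative -> accelerated expansion
```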
Friedmann Equations in Terms of Density Parameter
We found the Friedmann equations earlier. In Equation (245), in order to incorporate vacuum energy, we can write the energy density as the sum of all energy components, $\rho = \rho_m + \rho_r + \rho_\Lambda$, so that the equation becomes
$$H^2 = \frac{8\pi G}{3}\left(\rho_m + \rho_r + \rho_\Lambda\right) - \frac{k}{a^2},$$
where $\frac{\dot a}{a} = H$ is the Hubble parameter and $\rho_\Lambda = \frac{\Lambda}{8\pi G}$. The remaining density $\rho = \rho_m + \rho_r$ can be split further into its contributing ingredients, $\rho_m = \rho_b + \rho_{CDM}$ and $\rho_r = \rho_\gamma + \rho_\nu$. We also found earlier the critical density $\rho_c = \frac{3H^2}{8\pi G}$, whose present value is $\rho_{c,0} = \frac{3H_0^2}{8\pi G}$, and the density parameters $\Omega_{i,0} = \rho_{i,0}/\rho_{c,0}$. It is convenient to express the curvature term $k$ through an effective curvature density as well, $\Omega_{k,0} = \rho_k/\rho_{c,0}$; with the present value of the scale factor set to $a(t_0) = 1$, Equation (251) written at $H = H_0$ gives
$$\Omega_{k,0} = -\frac{k}{H_0^2}.$$
Equation (249) can then be written in general form. We know that the energy density $\rho$ for the matter, radiation, and vacuum domination eras changes with the scale factor according to
$$\rho_m \propto a^{-3}, \qquad \rho_r \propto a^{-4}, \qquad \rho_\Lambda = \text{const}, \qquad (255)$$
respectively. Thus, using Equation (255), Equation (254) takes the form
$$H^2 = H_0^2\left[\Omega_{r,0}\, a^{-4} + \Omega_{m,0}\, a^{-3} + \Omega_{k,0}\, a^{-2} + \Omega_{\Lambda,0}\right]. \qquad (256)$$
Equation (256) represents the Friedmann equation in terms of density parameters. For $a = \frac{a(t)}{a(t_0)} = \frac{1}{1+z}$, Equation (256) can be expressed in terms of redshift as
$$H^2(z) = H_0^2\left[\Omega_{r,0}(1+z)^4 + \Omega_{m,0}(1+z)^3 + \Omega_{k,0}(1+z)^2 + \Omega_{\Lambda,0}\right]. \qquad (257)$$
We can discuss various models of the universe using Equation (256) for the matter-, radiation-, $\Lambda$-, and curvature-dominated eras.
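A minimal numerical sketch of Equation (257), with assumed Planck-like density parameters, shows how strongly the expansion rate grows with redshift:

```python
# Minimal sketch: E(z) = H(z)/H0 from the density parameters (assumed values).
import numpy as np

Om_r, Om_m, Om_L = 9e-5, 0.31, 0.69
Om_k = 1.0 - (Om_r + Om_m + Om_L)          # curvature closes the budget

def H_over_H0(z):
    return np.sqrt(Om_r * (1 + z)**4 + Om_m * (1 + z)**3
                   + Om_k * (1 + z)**2 + Om_L)

for z in (0, 1, 1100):
    print(z, H_over_H0(z))
```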
For matter domination, Equation (256) with $\Omega_{m,0} = 1$ and the rest of the terms vanishing gives
$$a(t) = \left(\tfrac32 H_0 t\right)^{2/3},$$
an expanding universe with expansion rate inversely proportional to time, i.e., $H = \tfrac23 t^{-1}$, and an age of the universe $t_0 = \tfrac23 H_0^{-1}$. Such a model must decelerate as time goes on.
For radiation domination, Equation (256) with $\Omega_{r,0} = 1$ and the rest of the terms vanishing gives
$$a = \sqrt{2H_0 t}, \qquad t = \frac{a^2}{2H_0}, \qquad (259)$$
which gives an expanding universe with expansion rate inversely proportional to time, i.e., $H = \tfrac12 t^{-1}$, and an age of the universe $t_0 = \tfrac12 H_0^{-1}$. The expansion is subject to deceleration in this radiation-dominated era.
For $\Lambda$ domination, Equation (256) with $\Omega_{\Lambda,0} = 1$ and the rest of the terms vanishing gives
$$a(t) = e^{H_0(t - t_0)},$$
an exponentially expanding universe with constant expansion rate $H = H_0$ and a formally infinite age. For $k$ domination, i.e., the otherwise empty universe, Equation (256) with $\Omega_{k,0} = 1$ and the rest of the terms vanishing gives $a \propto t$, an expanding universe with expansion rate inversely proportional to time, $H = t^{-1}$.
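The single-component solutions above can be cross-checked by direct integration; the sketch below (units $H_0 = 1$, flat, $\Omega = 1$, all assumed) integrates $\dot a = a^{-(1+3w)/2}$ and compares with the closed forms:

```python
# Minimal sketch: verify a(t) for matter, radiation and Lambda domination.
import numpy as np
from scipy.integrate import solve_ivp

def max_error(w, a_exact, t_span):
    rhs = lambda t, a: a ** (-(1 + 3 * w) / 2)   # da/dt in H0 = 1 units
    sol = solve_ivp(rhs, t_span, [a_exact(t_span[0])],
                    dense_output=True, rtol=1e-10, atol=1e-12)
    t = np.linspace(*t_span, 50)
    return np.max(np.abs(sol.sol(t)[0] - a_exact(t)))

print(max_error(0.0, lambda t: (1.5 * t) ** (2 / 3), (0.1, 2.0)))  # matter
print(max_error(1 / 3, lambda t: np.sqrt(2 * t), (0.1, 2.0)))      # radiation
print(max_error(-1.0, lambda t: np.exp(t), (0.0, 2.0)))            # Lambda
```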
Cosmological Redshift
We consider the FLRW geometry. Note that the coordinates $(r, \theta, \phi)$ in the metric of Equation (262) are comoving spatial coordinates; galaxies, treated as the point particles constituting the cosmological fluid, therefore remain at fixed coordinates, and it is the geometry of spacetime itself that expands, characterized completely by the scale factor $a(t)$. The three kinds of interval, spacelike, timelike, and lightlike (null), are expressed as $ds^2 > 0$, $ds^2 < 0$, and $ds^2 = 0$, respectively. Light propagates along null intervals, $ds = 0$, i.e., it covers zero spacetime interval. We consider a ray of light propagating radially, all points in space being equivalent at a given time, from zero radius to some value of $r$ at a later time. For radial propagation the angular coordinates are constant, so $d\theta = d\phi = 0$, and the null condition $ds = 0$ then relates $dt$ to $dr$; Equation (262) gives
$$\frac{dt}{a(t)} = \frac{dr}{\sqrt{1 - kr^2}}. \qquad (264)$$
In order to calculate the total time elapsed as the ray travels from $r = 0$ to some value $r = r_0$, we integrate Equation (264) between the emission and reception times $t_e$ and $t_r$, respectively.
A second ray of light is now emitted after a short interval of time $dt_{emi}$, so that its time of emission is $t_{emi} + dt_{emi}$ and, from an integral of the same form as Equation (265), its time of reception is $t_{rec} + dt_{rec}$. For deriving the redshift relation as the universe expands we use Figure 12. The slices are very narrow, so each area is just that of a rectangle, i.e., width times height, giving
$$\frac{dt_{rec}}{a(t_{rec})} = \frac{dt_{emi}}{a(t_{emi})}. \qquad (268)$$
For an expanding universe $a(t_{rec}) > a(t_{emi})$, so Equation (268) implies $dt_{rec} > dt_{emi}$: as the universe expands, the time interval between the two rays increases. Considering now successive crests (or troughs) of a single ray instead of two rays, the wavelength $\lambda$ is directly proportional to the time interval between two successive crests, $\lambda \propto dt$, and $dt \propto a(t)$, so $\lambda \propto a(t)$. We now define the redshift as
$$1 + z = \frac{\lambda_{rec}}{\lambda_{emi}} = \frac{a(t_{rec})}{a(t_{emi})}.$$
Luminosity ($L$), Brightness, Luminosity Distance ($d_L$) and Angular Diameter Distance ($d_A$)
We can deduce these relations from the properties of electromagnetic radiation and the quantities contained in the FLRW line element. The velocity of electromagnetic waves is constant and finite. Light and electromagnetic radiation act as cosmological messengers, and all cosmological distances are extracted from their properties. Since the velocity of light is finite, light takes time to reach us, and the universe may have expanded significantly during this time.
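A trivial numerical illustration of $1 + z = a(t_{rec})/a(t_{emi})$, with assumed values:

```python
# Minimal sketch: a photon emitted when the universe was half its present
# size (assumed a_rec = 1) arrives with z = 1, its wavelength doubled.
a_emi, a_rec = 0.5, 1.0
z = a_rec / a_emi - 1
print(z)                    # -> 1.0
lam_emi = 500e-9            # emitted wavelength in metres (illustrative)
print(lam_emi * (1 + z))    # observed wavelength: 1000 nm
```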
Luminosity ($L$)
Luminosity is defined as the absolute measure of the electromagnetic power, i.e., the energy radiated per unit time by an astronomical object such as a star, a galaxy, or a cluster of galaxies. It is denoted by $L$ and is measured in joules per second (J s$^{-1}$), i.e., watts. Usually luminosity is expressed in terms of the luminosity of the Sun, denoted by $L_\odot$.
Brightness
Brightness refers to how bright an object appears to an observer; it depends upon the luminosity, the distance between the observer and the object, and the absorption of light along the path between them.
Luminosity Distance ($d_L$)
We consider a point source $S$ radiating electromagnetic light equally in all directions; the amount of light passing through an element of surface area varies with its distance from the source.
In Figure 13 below, light of luminosity $L$ is being radiated, and we consider a hollow sphere centered on the point source $S$. The interior of the hollow sphere is illuminated throughout. As the radius of the sphere increases, so does its surface area, so that a constant (absolute) luminosity has to spread over an expanding sphere: the larger the radius, the more surface area the same luminosity must illuminate, which decreases the observed brightness. If an observer at a distance equal to the radius of the sphere receives the electromagnetic radiation $L$ per unit time and $F$ is the energy flux per unit time per unit area from the point source, then in Euclidean geometry we have
$$F = \frac{L}{A} = \frac{L}{4\pi d_L^2}, \qquad (272)$$
where $F$ is the flux density over the illuminated sphere, $L$ the luminosity, and $A$ the area of the illuminated sphere, which gives
$$d_L = \sqrt{\frac{L}{4\pi F}}. \qquad (274)$$
Figure 13. A source S radiating electromagnetic energy.
We next look at how the luminosity distance is related to the expansion of the universe. For an expanding sphere the radius is the product of the scale factor and the comoving radius, $a(t)r = a(t)d_L$, so that the emitted energy gets diluted over an area
$$4\pi r^2 \rightarrow 4\pi\left(a(t)r\right)^2, \qquad (275)$$
and, in addition, each photon loses energy as $F \propto \frac{a(t_e)}{a(t_0)}$; combining this with the redshift relation (a further factor of $1+z$ arises from the dilation of the photon arrival rate), we have
$$F = \frac{L}{4\pi a_0^2 r^2 (1+z)^2}, \qquad d_L = a_0\, r\,(1+z).$$
If $L$ is known for a source, it is known as a standard candle. Type Ia supernovae were used as standard candles at large cosmic redshifts, which led to the discovery of accelerated expansion.
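As an illustrative sketch (assumed flat ΛCDM parameters), the luminosity distance can be evaluated from the standard relation $d_L = (1+z)\, c \int_0^z dz'/H(z')$:

```python
# Minimal sketch: luminosity distance in flat Lambda-CDM (assumed parameters).
import numpy as np
from scipy.integrate import quad

c = 299792.458                       # km/s
H0, Om_m, Om_L = 67.7, 0.31, 0.69    # km/s/Mpc and density parameters (assumed)

def H(z):
    return H0 * np.sqrt(Om_m * (1 + z)**3 + Om_L)

def d_L(z):
    chi, _ = quad(lambda zp: c / H(zp), 0, z)   # comoving distance in Mpc
    return (1 + z) * chi

print(f"d_L(z=1) ~ {d_L(1.0):.0f} Mpc")
```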
Angular Diameter Distance ($d_A$)
The angular diameter distance relates the physical (proper) size of an object, fixed when the light left its surface, to the angular size under which the object is observed after that light has been redshifted on its way to us. The schematic diagram for the angular diameter distance is shown in Figure 14. It is defined in terms of the object's proper size and the angular size of the object as seen from the surface of the Earth: if the size of the source is $S$ and its angular size is $\theta$, then
$$d_A = \frac{S}{\theta},$$
where $d_A$ is the angular diameter distance of the source. From the FLRW line element, for photons with $dr^2 \approx d\phi^2 \approx 0$, we have $S = a(t_e)\, r\, \theta$, so that
$$d_A = a(t_e)\, r = \frac{a_0 r}{1+z}.$$
Problems Faced by the Standard Model of Cosmology
From the first Friedmann equation we see that the curvature $k$ is negligible according to observation, with $\Omega \approx 1$, which means that it would have to have been finely tuned in the very early universe. From the second Friedmann equation we see that as long as $(\rho + 3p)$ remains positive the acceleration is negative, meaning that the expansion of the universe should keep slowing down. Further, far-flung parts of the universe display the same properties, as observational evidence shows, despite never having been in causal contact with each other.
Monopole Problem
The problem is the question of why we do not observe magnetic monopoles in the universe today. It results from combining the big bang model with GUTs in particle physics, and is thus a problem of particle cosmology concerning the phase transitions considered during symmetry breaking. In the very early universe, when these phase transitions are considered to occur, they are expected to create magnetic monopoles with enormous energy density, which could come to dominate the total energy density of the universe. When phase transitions take place during symmetry breaking they give rise to flaws known as topological defects; GUTs predict that the point-like topological defects created during GUT phase transitions act as magnetic monopoles. Being non-relativistic, the monopoles do not get diluted efficiently: their energy density decays like $a^{-3}$ [46], more slowly than that of radiation, so the radiation- and matter-dominated eras as we know them could not have taken place. Yet we observe that the universe did evolve through these later eras, and how this occurred is the heart of the problem.
Horizon Problem
On the basis of the standard big bang model it is difficult to understand the uniform temperature of the CMB, to 1 part in $10^5$. The horizon problem concerns the causal contact implied by the uniform temperature of the cosmic microwave background (CMB) across all parts of the sky. To understand the problem we must understand horizon size and causal contact. At any instant of time the horizon size is defined as the largest distance over which two events could be in causal contact: the maximum distance a photon could have traveled since the birth of the universe, or since the universe became transparent. From the FLRW metric it is found to be
$$R_H = c \int_0^{t_0} \frac{dt}{a(t)},$$
which shows that the size of the horizon depends upon the history of the universe as it evolves through time. It is also called the comoving horizon: causal contact develops between two events while the expanding universe is carrying them apart. In the standard big bang theory the universe was matter dominated at the time of last scattering ($t_{ls}$), so the horizon distance at that time can be approximated by $d_H(t_{ls}) = 2cH^{-1}(t_{ls})$. Now, the Hubble distance at the time of last scattering was $cH^{-1}(t_{ls}) \approx 0.2$ Mpc, and the horizon distance at last scattering was $d_H(t_{ls}) \approx 0.4$ Mpc. Therefore, points separated by more than 0.4 Mpc at the time of last scattering were not causally connected in the big bang scenario. Further, the angular diameter distance $d_A$ to the last scattering surface is 13 Mpc; therefore, points on the last scattering surface separated by a horizon distance have an angular separation
$$\theta_H = \frac{d_H(t_{ls})}{d_A} \approx \frac{0.4\ \text{Mpc}}{13\ \text{Mpc}} \approx 0.03\ \text{rad} \approx 2^\circ$$
as viewed today from the Earth. This means that points separated by an angle as small as $\sim 2^\circ$ on the last scattering surface were not in causal contact with each other when the CMB, with its temperature fluctuations, was emitted. However, we find that $\frac{\delta T}{T}$ is as small as $10^{-5}$ even on scales with angular separation $\theta > 2^\circ$. So here is the problem: regions that were never in causal contact with each other at the time of last scattering nevertheless have homogeneously similar properties.
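The angular estimate quoted above is simple arithmetic; the sketch below reproduces it from the two distances given in the text:

```python
# Minimal sketch of the horizon-problem estimate: angle subtended today by a
# horizon-sized patch on the last-scattering surface (values from the text).
import math

d_H_ls = 0.4    # horizon distance at last scattering, Mpc
d_A_ls = 13.0   # angular diameter distance to last scattering, Mpc

theta_H = d_H_ls / d_A_ls   # small-angle approximation, radians
print(f"{theta_H:.3f} rad = {math.degrees(theta_H):.1f} deg")   # ~0.03 rad ~ 2 deg
```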
Flatness Problem
The flatness problem arises when Friedmann's equations evolve in a universe containing only radiation and baryonic matter, with no vacuum energy density present [47]. From the first Friedmann equation, dividing through by $H^2$ and using $\frac{3H^2}{8\pi G} = \rho_c$, Equation (285) becomes
$$\Omega - 1 = \frac{k}{a^2 H^2},$$
so that the spatial curvature of the universe is related to the density parameter $\Omega$ through Friedmann's equation. Observational evidence shows that the universe is nearly flat today, i.e., $\rho \approx \rho_c$, so $\Omega = \rho/\rho_c \approx 1$. This requires the value of $\Omega$ to have been extraordinarily close to 1 at the Planck era $t_{pl}$: the initial conditions of the universe were finely tuned. Because of this, the flatness problem is also known as the fine-tuning problem; it arises because the entropy in a comoving volume is conserved. Further, from Equation (284), the energy density of the universe without vacuum energy, as in the big bang model, is $\rho = \rho_R + \rho_M$, and we can write
$$H^2 = \frac{8\pi G}{3}(\rho_R + \rho_M) - \frac{k}{a^2}.$$
The term $-\frac{k}{a^2}$ is clearly proportional to $a^{-2}$, while the energy density terms fall off faster, $\rho_M \propto a^{-3}$ and $\rho_R \propto a^{-4}$. The ratio of $\frac{k}{a^2}$ to $\frac{8\pi G}{3}\rho$ therefore grows as the universe expands; for it to be at most of order unity today, after the scale factor has increased by a factor of $\sim 10^{30}$ since the Planck era, it must have been vanishingly small initially.
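Running the argument backwards gives the familiar fine-tuning number; the sketch below uses the $10^{30}$ expansion factor from the text and an assumed present-day flatness bound:

```python
# Minimal sketch: in the radiation era |Omega - 1| = |k|/(aH)^2 grows ~ a^2,
# so near-flatness today demands extreme early fine-tuning (rough estimate).
a_growth = 1e30            # expansion factor since the Planck era (from text)
omega_dev_today = 0.01     # assumed bound on |Omega_0 - 1| today

omega_dev_planck = omega_dev_today / a_growth**2
print(f"|Omega - 1| at Planck era ~ {omega_dev_planck:.1e}")   # ~1e-62
```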
Entropy Problem
The adiabatic expansion of the universe following the first law of thermodynamics is related to the flatness problem [48] discussed above. Temperature plays a significant role in the early universe, because at early epochs the age and expansion rate $H = \partial_t \ln a$ are described in terms of it together with the number of relativistic degrees of freedom. From the first Friedmann equation we have the expression for the density parameter, $\Omega - 1 = \frac{k}{a^2(t)H^2}$, and the expansion rate in the radiation-dominated era in terms of temperature is $H^2 \sim \frac{T^4}{M_{pl}^2}$, so that the density parameter expression becomes $\Omega - 1 = \frac{k M_{pl}^2}{a^2(t)T^4}$. Now the entropy density is $s \sim T^3$ and the entropy per comoving volume is $S \propto a^3(t)s \propto a^3(t)T^3$, and we have
$$\Omega - 1 = \frac{k M_{pl}^2}{S^{2/3}\, T^2},$$
so the near-flatness of the universe today is equivalent to its comoving entropy being enormous, a number the adiabatically expanding big bang model leaves unexplained.
Introduction to Inflation
Inflation is a period of accelerated, superluminal expansion of the universe taking place sometime in its very early history. It is now a widely accepted paradigm, describing a monumental growth spurt during the tiniest fraction of the first second, between roughly $10^{-36}$ and $10^{-32}$ s. Inflation maintains that just after the big bang, exponential stretching of the spacetime geometry took place, the universe growing by a factor of $e$ at least some 60-70 times over before slowing down. Alexei Starobinsky arrived at an exponentially expanding phase in the early universe by modifying the Einstein field equations, whereas Alan Guth, in 1980, approached the scenario from particle physics, proposing a new picture of the tiny fraction of the first second in which the universe spent its earliest moments growing exponentially faster than it does today. There is a large number of inflation models available today, but every model has its own limitations in drawing the true picture of what actually happened in the early universe.
As the theory of inflation stands today, it has myriad models describing the inflationary phase of the early universe. Among this heap of competing models, which differ only slightly from one another, none can claim a complete and all-embracing account of what actually happened to produce the fast expansion of spacetime. All the energy density that can be ascribed to the early exponentially expanding phase resided in the very fabric of spacetime itself, rather than in the form of radiation or particles. The early accelerating phase is now best described by a de Sitter model with slightly broken time symmetry. The earliest patch of spacetime that came into being would be stretched apart, in an incredibly small time span of the order of a tiny fraction of the first second, to such a colossally larger size that its geometry and topology would be hardly distinguishable from Euclidean geometry. Similar initial conditions for the energy density then follow at every point in the fabric of spacetime, and the same holds for the temperature in this early phase. That is why the quantum fluctuations which later seed structure formation impart a uniform temperature to all parts of the universe, thereby resolving the homogeneity problem: all the fluctuations that grew into the observable universe were once causally connected in the deep past. The universe may have attained a maximum temperature within, or below, the limits of the Planck scale ($10^{19}$ GeV). At the energy scale mentioned earlier, inflation comes to an end and transforms into the uniform, very hot, very dense, cooling and expanding state we ascribe to the hot big bang. This takes the universe from a lower entropy state to a higher one in the panorama of the hot big bang, where the entropy carries on growing, as it does in our observed universe. The earliest time at which the universe can be viewed even approximately as classical is known as the Planck era; prior to it, the universe should presumably be described by some as yet unknown quantum theory, such as quantum gravity. This era corresponds to $E_P \sim 10^{19}\ \text{GeV} > E > E_{GUT} \sim 10^{15}\ \text{GeV}$, with Planck-scale energy, temperature, and time of $E_P \sim 10^{19}$ GeV, $T_P \sim 10^{32}$ K, and $t_P \sim 10^{-43}$ s, respectively. Grand unified theories hold that at such high energies the electroweak and strong forces are unified into a single force, and that these interactions bring the particles present into thermal equilibrium. The electroweak era corresponds to phase transitions that occur through spontaneous symmetry breaking (SSB), characterized by scalar parameters known as Higgs fields acquiring non-zero values. While the Higgs field is zero the symmetry remains manifest; it breaks spontaneously the moment the Higgs field becomes non-zero. The idea of phase transitions in the very early universe thus suggests the existence of scalar fields and motivates considering their effect on the expansion of the universe.
The power spectrum of the CMB is obtained by measuring the magnitude of the temperature variations versus the angular size of hot and cold spots. To understand the nature of the CMB, the spectrum of a perfect blackbody is given in Appendix C. These measurements reveal a series of peaks of different strengths and frequencies, which conforms to the predictions of inflation theory and confirms that all the sound waves were indeed produced at the same moment by inflation. Inflation is believed to have given rise to sound waves, waves traveling in the primordial vacuum-like medium with different frequencies, starting in phase after the big bang at $10^{-35}$ s and oscillating through the radiation era for 380,000 years. These acoustic oscillations of the early universe are measurable as a power spectrum, much as one measures the sound spectrum of a musical instrument. The history of the evolution of the universe up to the present epoch is sketched in Figure 15; Figures 16 and 17 show how the inflationary period is driven by the inflaton field.
Starobinsky $R^2$-Inflation
Alexei Starobinsky proposed a cosmological inflationary phase of the universe shortly before Alan Guth, in 1980, working in the framework of general relativity. The model is founded on the semiclassical Einstein field equations, which provide a self-consistent solution for an exponentially accelerating era [49]. Starobinsky modified general relativity to describe the behavior of the very early universe undergoing an exponential period by introducing quantum corrections to the energy-momentum tensor $T_{\mu\nu}$, calculated by taking its expectation value. We begin with the Einstein equations
$$G_{\mu\nu} = 8\pi G\, \langle T_{\mu\nu} \rangle, \qquad (290)$$
where $\langle T_{\mu\nu} \rangle$ represents the expectation value of the energy-momentum tensor: the probabilistic value of a measurement, fundamental to all quantum mechanical systems, and intuitively the arithmetic mean of a large number of independent measurements of the variable under consideration. The energy-momentum tensor $T_{\mu\nu}$ usually accounts for the classical components of the universe in the form of matter and radiation, in the context of the flat spacetime evidenced by recent parametric observations. In curved spacetimes, however, even when the classical matter and radiation components are absent and $T_{\mu\nu}$ vanishes, the quantum fluctuations of matter fields give non-trivial contributions to $\langle T_{\mu\nu} \rangle$; these are the quantum corrections to the energy-momentum tensor that Starobinsky utilized. In the background we consider the FLRW spacetime
$$ds^2 = -dt^2 + a(t)^2\left[\frac{dr^2}{1 - kr^2} + r^2 d\theta^2 + r^2 \sin^2\theta\, d\phi^2\right]. \qquad (291)$$
The spatial part $\frac{dr^2}{1-kr^2} + r^2 d\theta^2 + r^2\sin^2\theta\, d\phi^2$ of the metric represents three geometries depending on the value of $k$. For $k = +1$ it represents the spherical geometry of a 3-sphere, which is finite, closed, and without boundary. For $k = 0$ it represents the flat Euclidean geometry of 3-planes, in principle infinite in extent, open, and without boundary. For $k = -1$ it represents the hyperbolic geometry of 3-hyperboloids, infinite, open, and without boundary. In the presence of conformally invariant, free, massless fields the quantum corrections take a simple form, and the expectation value of the energy-momentum tensor can be written as
$$\langle T_{\mu\nu} \rangle = k_1 H^{(1)}_{\mu\nu} + k_2 H^{(2)}_{\mu\nu}, \qquad (292)$$
where $k_1$ and $k_2$ are numerical coefficients in standard notation. In order to find $\langle T_{\mu\nu} \rangle$ we have to compute the constants $k_1$ and $k_2$ and the tensors $H^{(1)}_{\mu\nu}$ and $H^{(2)}_{\mu\nu}$. The coefficient $k_1$ is determined experimentally and can assume any value. $H^{(1)}_{\mu\nu}$ is a tensor that is identically conserved, being obtained by varying a local action quadratic in the curvature (the $\sqrt{-g}\,R^2$ term) with respect to the metric tensor. The coefficient $k_2$ of $H^{(2)}_{\mu\nu}$ is defined uniquely in terms of the numbers $N_0$, $N_{1/2}$, and $N_1$ of quantum fields of spin $0$, $\tfrac12$, and $1$, respectively. In certain GUT theories, owing to the larger multiplying factor of $N_1$, the value of $k_2$ is dominated by the vector-field contribution.
Now, $H^{(2)}_{\mu\nu}$ is also a tensor, but it is not conserved in general: it is conserved only in spacetimes that are conformally flat, such as FLRW spacetimes in particular, and, unlike $H^{(1)}_{\mu\nu}$, it cannot be obtained by varying a local action. Multiplying both sides of Equation (292) by $8\pi G$, it can be written as Equation (297). We now introduce, for convenience, the parameters $H_0$ and $M$, both positive, i.e., $H_0 > 0$ and $M > 0$, in terms of which Equation (297) takes the form of Equation (299). Equation (299) serves as a reasonable approximation for certain GUT models in the limit $R > \mu^2$, where $\mu$ represents the unification energy scale. Conformally invariant field equations usually describe spinor and massless vector fields, which contribute to $\langle T_{\mu\nu} \rangle$ in the form of Equation (299). Further, if the number of matter fields is sufficiently large, the corrections to Einstein's field equations due to gravitons can also be ignored.
Trace Anomaly
The trace of the expectation value of the energy-momentum tensor $\langle T_{\mu\nu} \rangle$ does not vanish; rather, it has a non-zero, anomalous trace, and this is what we call the trace anomaly. It is interesting to note that the trace of the energy-momentum tensor without the expectation value, i.e., of $T_{\mu\nu}$ itself, vanishes for all classical fields that are conformally invariant. The trace of $\langle T_{\mu\nu} \rangle$ is given by Equation (300). The masses of the fields can be neglected in the limit of high curvature, i.e., when $R \gg m^2$, and in the same limit this remains true for asymptotically free gauge theories, where the interactions between the fields become negligible. In de Sitter space we can take
$$R = 12H_0^2, \qquad (301)$$
where $R$ is the constant curvature term, i.e., the Ricci scalar. Substituting Equations (300) and (301) into Equation (290) gives $R = 12H_0^2$ for the non-trivial solution, and the corresponding de Sitter solutions for $k = 0, +1, -1$, respectively, are
$$a(t) = a_0 e^{H_0 t}, \qquad a(t) = H_0^{-1}\cosh(H_0 t), \qquad a(t) = H_0^{-1}\sinh(H_0 t). \qquad (302)$$
Inflation and de Sitter Universe
In a very short time, about $10^{-35}$ s after spacetime came into being, the inflationary era of accelerated superluminal expansion, known as the de Sitter phase, took place. The de Sitter phase removed all the wrinkles of curvature and warpage of spacetime, so that the universe is observed to be flat, and it smoothed out the distribution of radiation and matter. One significant remnant of this fast expansion survives, known later as the cosmic background radiation. In the de Sitter universe there exists no ordinary matter; instead it retains the cosmological constant, representing vacuum energy smeared through the structure of spacetime. We can define the energy density of this component as
$$\rho_\Lambda = \frac{\Lambda}{8\pi G},$$
with $p_\Lambda = -\rho_\Lambda$, an exotic form of matter with negative pressure, unlike ordinary matter, for which the scale factor $a(t)$ keeps increasing while $\dot a(t)$ decreases. From Equation (307), for the $\Lambda$-dominated case, we can write
$$\ddot a = \frac{\Lambda}{3}a, \qquad (313)$$
which has the form of an oscillator-type equation. From Equation (306), for vanishing curvature, i.e., $k = 0$, where $\Lambda$ dominates and $\frac{\dot a}{a} = H$,
$$H^2 = \frac{\Lambda}{3}. \qquad (315)$$
Substituting Equation (315) into Equation (313) and simplifying, we can write the solution of Equation (317) as
$$a(t) = C_1 e^{\sqrt{\Lambda/3}\,t} + C_2 e^{-\sqrt{\Lambda/3}\,t}. \qquad (318)$$
Differentiating Equation (318) twice with respect to time, substituting the result together with the value of $\rho_\Lambda$ from Equation (315) into Equation (306), simplifying, and inserting the values of $a(t)$ and $\dot a(t)$ from Equations (318) and (319) into Equation (323), one finds
$$k = \frac{4\Lambda}{3} C_1 C_2. \qquad (325)$$
Equation (325) means that the curvature term $k$ depends upon the constants of integration $C_1$ and $C_2$. For a flat universe either $C_1 = 0$ or $C_2 = 0$, and the solution of Equation (318) becomes, accordingly,
$$a(t) = C_1 e^{\sqrt{\Lambda/3}\,t} \qquad \text{or} \qquad a(t) = C_2 e^{-\sqrt{\Lambda/3}\,t}.$$
Further, the Einstein equations are
$$G_{\mu\nu} + \Lambda g_{\mu\nu} = 8\pi G\, T_{\mu\nu},$$
and the form of their solution upon which big bang standard cosmology is based, worked out independently by Alexander Friedmann (1922), Georges Lemaitre (1927), and afterwards by Robertson and Walker (1935) on the basis of the cosmological principle (homogeneity and isotropy), is
$$ds^2 = -dt^2 + a^2(t)\left[\frac{dr^2}{1-kr^2} + r^2 d\Omega^2\right], \qquad (330)$$
where $d\Omega^2 = d\theta^2 + \sin^2\theta\, d\phi^2$. The metric in Equation (330) is characterized by the scale factor $a(t)$ and the curvature of spacetime $k$, which are determined by the self-gravitation of all the matter-energy content of the universe. We include dark matter and dark energy in the matter-energy content because their role is unavoidable in the accelerated expansion and the present flat (Minkowskian) geometry of the universe. This line element, via the Einstein field equations, yields the Friedmann equations governing the time evolution of the universe:
$$\dot a^2 = \frac{8\pi G}{3}\rho a^2 - k + \frac{\Lambda}{3}a^2, \qquad (331)$$
$$\frac{\ddot a}{a} = -\frac{4\pi G}{3}(\rho + 3p) + \frac{\Lambda}{3}. \qquad (332)$$
The presence of the cosmological term $\Lambda$ in these equations is equivalent to that of a fluid with equation of state $p = -\rho$, satisfied by $\rho_\Lambda = \frac{\Lambda}{8\pi G}$. Classically, we may approach the period of exponential expansion using the first Friedmann equation with vanishing density $\rho$ of radiation and baryons and vanishing curvature $k$ in the $\Lambda$-dominated era, which corresponds to having a fluid with $p = -\rho$; Equation (331) then becomes
$$\dot a^2 = \frac{\Lambda}{3}a^2, \qquad (334)$$
and, after integrating and simplifying, we get
$$a(t) \propto e^{\sqrt{\Lambda/3}\,t}. \qquad (335)$$
Equation (335) gives the exponential expansion of the scale factor: when the universe was dominated by the cosmological constant $\Lambda$, the rate of expansion was much faster than in the present-day scenario.
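The general solution of Equation (313) quoted above can be verified symbolically; the sketch below confirms that the two-exponential form satisfies $\ddot a = \frac{\Lambda}{3}a$:

```python
# Minimal sketch: check that a(t) = C1*exp(H0*t) + C2*exp(-H0*t), with
# H0 = sqrt(Lambda/3), solves a'' = (Lambda/3) * a.
import sympy as sp

t, Lam, C1, C2 = sp.symbols('t Lambda C1 C2', positive=True)
H0 = sp.sqrt(Lam / 3)
a = C1 * sp.exp(H0 * t) + C2 * sp.exp(-H0 * t)

print(sp.simplify(sp.diff(a, t, 2) - Lam / 3 * a))   # -> 0
# Flat (k = 0) solutions follow for C1 = 0 or C2 = 0: pure exponentials.
```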
From Equation (332), considering a closed volume with energy $U = \rho V = \rho\frac{4\pi}{3}a^3$, we now see how an inflationary period is obtained from the perspective of particle physics, where a negative pressure is required for it to take place. Friedmann solved the EFE with $\Lambda = 0$, so
$$\frac{\ddot a}{a} = -\frac{4\pi G}{3}(\rho + 3p). \qquad (338)$$
Equation (338) is known as the acceleration equation. The inflationary period, as its definition implies, is the acceleratingly expanding phase of the universe within a very small fraction of the first second; since the expansion is characterized by the scale factor $a$, such an era has $\ddot a > 0$. For inflation to occur and set the universe into an accelerating phase, we require matter with an equation of state of negative pressure, $p < -\tfrac13\rho$, a possibility arising from symmetry breaking in modern models of particle physics. For $\ddot a > 0$ the scale factor increases faster than $a(t) \propto t$, and the term $\frac{8\pi G}{3}\rho a^2$ grows during this accelerated era, so that the curvature term $k$ becomes negligibly small and effectively vanishes. The inflationary era is also defined through the shrinking of the Hubble sphere [43], owing to its direct link to the horizon problem and its fundamental role in the production of quantum fluctuations. The shrinking Hubble sphere is defined by
$$\frac{d}{dt}\left(\frac{1}{aH}\right) < 0,$$
which implies accelerated expansion, $\ddot a > 0$. At $t = 0$ the scale factor $a$ characterizing the expansion of the universe takes some specific value. In Equation (337), when $\rho = \rho_\phi$ is very large and dominates over the curvature term $k$, we obtain an essentially constant $H$, and the de Sitter line element
$$ds^2 = -dt^2 + e^{2Ht}\left(dx^2 + dy^2 + dz^2\right)$$
follows. Inflation has to terminate, yet in exact de Sitter space $H$ is strictly constant, meaning that the de Sitter phase cannot give a perfect inflationary era; however, for $|\dot H| \ll H^2$ it is a good approximation. It is interesting to note here that Z. G. Liu and Y. S. Piao have shown that the universe we observe today may have emerged from a de Sitter background without requiring a large tunneling in the potential, and at a low energy scale [50].
The Conditions under Which Inflation Occurs
The shrinking Hubble sphere has been taken as the basic definition of the inflationary era, owing to its direct connection with the horizon problem and with the mechanism generating quantum fluctuations [51]. Differentiating the comoving Hubble radius $(aH)^{-1}$ with respect to time,
$$\frac{d}{dt}\left(\frac{1}{aH}\right) = \frac{d}{dt}\left(\frac{1}{\dot a}\right) = -\frac{\ddot a}{\dot a^2}.$$
We see that $-\frac{\ddot a}{\dot a^2} < 0$; multiplying the inequality by $-1$ and simplifying gives $\ddot a > 0$, which means that a shrinking comoving Hubble sphere $(aH)^{-1}$ points toward accelerated expansion. The Hubble parameter $H$ remains nearly constant during inflation; to make "nearly constant" precise, we follow its slow variation by treating $H$ as a variable and defining
$$\varepsilon = -\frac{\dot H}{H^2},$$
known as the slow-roll parameter. It can be inferred that $\frac{\dot H}{H^2} < 0$, with $\varepsilon < 1$, implies a shrinking Hubble sphere.
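As a small numerical illustration (toy expansion laws, assumed forms of $a(t)$), the comoving Hubble radius indeed shrinks for de Sitter-like expansion and grows in a decelerating era:

```python
# Minimal sketch: comoving Hubble radius (aH)^-1 for two toy expansion laws.
import numpy as np

t = np.linspace(1.0, 2.0, 5)
print(np.exp(-t))         # inflation: a = e^t, H = 1 -> (aH)^-1 = e^-t, shrinking
print(1.5 * t ** (1/3))   # matter era: a = t^(2/3), H = 2/(3t) -> growing
```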
Slow Roll Inflation: The Dynamics of the Scalar Field
Elementary particles in modern physics are represented by quantum fields, and oscillations of these fields are interpreted as particles. Scalar fields represent spin-zero particles in field theories and resemble vacuum states, because they carry the same quantum numbers as the vacuum. Matter with negative pressure $p = -\rho$ represents a physical vacuum-like state in which quantum fluctuations of all types of physical fields exist. These fluctuations can be considered as waves of all possible wavelengths of the physical fields, moving freely in all directions. The negative pressure violates the strong energy condition, which is necessary for inflation to occur. To keep things simple, a single scalar field, namely the inflaton $\phi = \phi(x, t)$, is considered present in the very early universe; the value of the scalar field depends upon the position $x$ in space, which assigns a potential energy to each field value. Being a function of time $t$ it is also dynamical and has kinetic energy as well, i.e., the energy density $\rho(\phi)$ associated with the inflaton is $\rho(\phi) = \rho_p + \rho_k$. The ratio of the potential and kinetic energy terms of $\phi(x, t)$ decides the evolution of the universe. The Lagrangian of the scalar inflaton field $\phi$ is the difference between its kinetic and potential terms,
$$\mathcal{L} = -\frac{1}{2}g^{\mu\nu}\partial_\mu\phi\,\partial_\nu\phi - V(\phi).$$
It is assumed that the background FLRW universe is sourced by the energy-momentum associated with the inflaton, which dominates the universe in the beginning. We shall see under what conditions this causes accelerated expansion of the FLRW universe.
The energy-momentum tensor of the inflaton field is
$$T_{\mu\nu} = \partial_\mu\phi\,\partial_\nu\phi + g_{\mu\nu}\mathcal{L},$$
which for $\mu = 0$, $\nu = 0$ gives the energy density, and for $\mu = \nu = j$ the pressure. The gradient term is taken to vanish; otherwise the pressure gained is much less than the value required to give the impetus for inflation. We therefore obtain
$$\rho_\phi = \frac{1}{2}\dot\phi^2 + V(\phi) \qquad (361)$$
and
$$p_\phi = \frac{1}{2}\dot\phi^2 - V(\phi). \qquad (362)$$
The condition $V(\phi) \gg \dot\phi^2$ corresponds to the negative pressure condition $p_\phi \approx -\rho_\phi$, which means that the potential (vacuum) energy of the inflaton drives inflation. Using the Euler-Lagrange equations we find the equation of motion for the inflaton field,
$$\ddot\phi + 3H\dot\phi + V'(\phi) = 0, \qquad (365)$$
which can also be obtained by substituting the energy density and pressure of Equations (361) and (362) into the energy conservation equation
$$\dot\rho + 3H(\rho + p) = 0. \qquad (366)$$
Here $V'(\phi) = \frac{dV(\phi)}{d\phi}$, and the term $3H\dot\phi$, with $H = \frac{\dot a}{a}$, is known as the friction term: it damps the inflaton field as it rolls down its potential during the expansion of the universe. Figures 18 and 19 show how the scalar field drives the evolution of the universe in the beginning and how it slow-rolls afterwards, respectively. Figure 18. How the universe springs into being through a scalar field. Old inflation: (a) the scalar field sits in a stable false vacuum; (b) the scalar field causes inflation, which ends suddenly through quantum tunneling; (c) due to the abrupt ending of inflation, the energy is dissipated without evolution of the universe, i.e., an empty universe results. New inflation: (d) the scalar field begins in the false vacuum; (e) instead of quantum tunneling, the scalar field decays by slowly rolling down towards its minimum, hence the name slow-roll inflation; (f) the energy does not dissipate; instead reheating occurs and the universe evolves through the radiation and subsequent phases.
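Equation (365), coupled to the Friedmann equation, can be integrated directly; the toy sketch below (assumed quadratic potential, assumed mass and initial field value, reduced Planck units) shows the field slow-rolling and generating tens of e-folds:

```python
# Minimal sketch: integrate phi'' + 3 H phi' + V'(phi) = 0 with
# H^2 = rho_phi / 3 for V = (1/2) m^2 phi^2 (reduced Planck units, Mpl = 1).
import numpy as np
from scipy.integrate import solve_ivp

m, phi0 = 1e-2, 16.0    # assumed mass and initial field value (toy choices)

def rhs(t, y):
    phi, dphi, lna = y
    rho = 0.5 * dphi**2 + 0.5 * m**2 * phi**2
    H = np.sqrt(rho / 3)
    return [dphi, -3 * H * dphi - m**2 * phi, H]   # d(ln a)/dt = H counts e-folds

sol = solve_ivp(rhs, (0, 2000), [phi0, 0.0, 0.0], rtol=1e-8, atol=1e-10)
print(f"e-folds generated: N ~ {sol.y[2, -1]:.1f}")   # ~ phi0**2 / 4 ~ 64
```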
Conditions of the Slow Roll Inflation
According to the big bang model, the currently accepted model, the universe is about 14 billion years old. At the point of its coming into existence the curvature of spacetime was very large; equivalently, space was strongly warped and curved, only quantum effects could prevail, and the very question of the existence of time likely becomes absurd. How a very brief era of exponential expansion emerges from this state is answered by the assumption of a scalar field, which takes responsibility for the state just mentioned. We know the second Friedmann equation, which is the acceleration equation,
$$\frac{\ddot a}{a} = -\frac{4\pi G}{3}(\rho + 3p) > 0. \qquad (371)$$
Substituting $p_\phi$ and $\rho_\phi$ from Equations (361) and (362) into Equation (371), solving the inequality, and keeping in mind that $\dot\phi^2$ is a squared (positive) term, we get
$$\dot\phi^2 < V(\phi), \qquad (373)$$
which means that the inflaton field must roll slowly down its potential. Differentiating Equation (373) with respect to time gives a corresponding condition on $\ddot\phi$. Now, in Equation (369) we neglect the acceleration term $\ddot\phi = \frac{d^2\phi}{dt^2}$, since the inflaton field has to roll slowly, i.e., decelerate, to escape the graceful exit problem of inflation, so we write
$$3H\dot\phi \simeq -V'(\phi). \qquad (376)$$
Plugging Equation (376) into Equation (374) and neglecting the constant factor gives one slow-roll condition. Differentiating Equation (376) with respect to time, and noting that $H$ remains nearly constant during inflation so that $\dot H$ vanishes, we have
$$3H\ddot\phi \simeq -V''(\phi)\,\dot\phi. \qquad (380)$$
Putting Equation (380) into Equation (378) yields the second slow-roll condition.
Parameters for the Slow Roll Inflation
Two slow-roll parameters, $\varepsilon$ and $\eta$, which quantify slow-roll inflation, are defined in terms of the Hubble parameter $H$ as well as of the potential $V$.
The first is
$$\varepsilon_H = -\frac{\dot H}{H^2}.$$
Using the relation $N = \ln a$, where $N$ is the number of e-folds, it can also be expressed in the form
$$\varepsilon_H = -\frac{d\ln H}{dN},$$
and the second is defined as
$$\eta_H = -\frac{\ddot\phi}{H\dot\phi}.$$
For $\rho = \rho_\phi$ with, from Equation (361), $\rho_\phi = \frac12\dot\phi^2 + V(\phi)$, and since during inflation $V(\phi) \gg \dot\phi^2$, we have $\rho_\phi \simeq V(\phi)$; the curvature term $k$ is also negligibly small, so Equation (386) becomes
$$H^2 = \frac{8\pi G}{3}V(\phi). \qquad (387)$$
Differentiating Equation (387) with respect to time and simplifying gives $\dot H$, and substituting $3H\dot\phi \simeq -V'(\phi)$ from Equation (376) into Equation (388) and then into Equation (383), we have
$$\varepsilon_H = \frac{1}{16\pi G}\left(\frac{V'}{V}\right)^2 \equiv \varepsilon_V.$$
Again from Equation (376), $\eta_H$ can also be expressed in terms of the potential: from Equation (387), $H^2 = \frac{8\pi G}{3}V(\phi)$, which gives $8\pi G\,V(\phi) = 3H^2$, and substituting this into Equation (396) we have
$$\eta_V = \frac{1}{8\pi G}\frac{V''}{V}.$$
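The potential slow-roll parameters are easy to evaluate symbolically; the sketch below does so for an assumed monomial potential $V = \lambda\phi^p$ (reduced Planck units, $8\pi G = 1$):

```python
# Minimal sketch: eps_V = (1/2)(V'/V)^2 and eta_V = V''/V for V = lambda*phi**p
# in reduced Planck units (8*pi*G = 1).
import sympy as sp

phi, lam, p = sp.symbols('phi lambda p', positive=True)
V = lam * phi**p

eps_V = sp.simplify(sp.Rational(1, 2) * (sp.diff(V, phi) / V)**2)
eta_V = sp.simplify(sp.diff(V, phi, 2) / V)
print(eps_V)   # p**2/(2*phi**2)
print(eta_V)   # p*(p - 1)/phi**2
```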
Number of e-Folds
It is usual practice to quantify inflation, and the quantity that does this is called the number of e-folds, denoted by $N$, counted before the end of inflation. As time goes by, $N$ decreases and becomes zero when inflation ends: it is measured backwards in time from the end of inflation, so that $N = 0$ at the end of inflation and $N$ grows to its maximal value towards the beginning of inflation. It measures the number of times space grows by a factor of $e$ during the inflationary period. The number of e-folds required to resolve the big bang problems of horizon, flatness, monopoles, entropy, etc. is $N \sim 60$-$75$, depending on the model and on reasonable estimates of the observational parameters. To find the number of e-folds between the beginning and end of inflation, we recall that during inflation the scale factor evolves as $a(t) \propto e^{Ht}$, so the factor $Ht$ constitutes the number of e-folds, i.e.,
$$N = \ln\frac{a(t_f)}{a(t_i)} = \int_{t_i}^{t_f} H\,dt. \qquad (400)$$
Differentiating Equation (400) with respect to time gives $dN = H\,dt$. Further, the relation between the Hubble parameter $H$ and the number of e-folds $N$ can be written down. We derived earlier the evolution equation for the inflaton field,
$$\ddot\phi + 3H\dot\phi + V'(\phi) = 0. \qquad (403)$$
During slow-roll inflation $\ddot\phi \simeq 0$, so Equation (403) becomes
$$3H\dot\phi \simeq -V'(\phi). \qquad (404)$$
Moreover, during slow roll the first Friedmann equation, with $k = 0$ and $\dot\phi^2 \ll V(\phi)$, evolves as
$$H^2 = \frac{8\pi G}{3}V(\phi). \qquad (407)$$
Dividing Equation (405) by Equation (407) gives $\frac{H}{\dot\phi}$. Now, from Equation (400), since $t = t_f - t_i$, we can write $N = \int_{t_i}^{t_f} H\,dt$; dividing and multiplying by $d\phi$ and substituting from Equation (408) after inverting,
$$N = \int_{t_i}^{t_f} H\,dt = \int_{\phi_i}^{\phi_f}\frac{H}{\dot\phi}\,d\phi = 8\pi G\int_{\phi_f}^{\phi_i}\frac{V}{V'}\,d\phi.$$
Thus the number of e-folds can be found in terms of the potential of the inflaton field. Further, the slow-roll parameter $\varepsilon_H$ can be expressed in terms of the number of e-folds $N$.
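The e-fold integral above is straightforward to evaluate for a concrete potential; the sketch below does so for an assumed $V = \frac12 m^2\phi^2$ in reduced Planck units ($8\pi G = 1$), ending inflation where $\varepsilon_V = 1$:

```python
# Minimal sketch: N = \int_{phi_end}^{phi_i} (V/V') dphi for V = (1/2) m^2 phi^2,
# reduced Planck units (8*pi*G = 1); phi_i is an assumed initial value.
import sympy as sp

phi, m = sp.symbols('phi m', positive=True)
V = sp.Rational(1, 2) * m**2 * phi**2

eps_V = sp.Rational(1, 2) * (sp.diff(V, phi) / V)**2   # = 2/phi**2
phi_end = sp.solve(sp.Eq(eps_V, 1), phi)[0]            # end of inflation: sqrt(2)
phi_i = 16                                             # assumed initial field value

N = sp.integrate(V / sp.diff(V, phi), (phi, phi_end, phi_i))
print(float(N))   # (phi_i**2 - 2)/4 = 63.5 e-folds
```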
Inflationary Solutions to the Big Bang Problems
The horizon, flatness, entropy, and monopole problems are initial-value problems, which inflation solves in one go. Inflation explains why the observable universe is spatially flat, isotropically homogeneous, and so large in size.
Inflation and Horizon Problem
We consider that inflation begins at a time $t_i$ and comes to an end at some time $t_f$, and that during inflation the curvature term $k$ and the energy density of matter and radiation, $\rho = \rho_M + \rho_R$, are negligible while the expansion rate $H = \partial_t\ln a$ is essentially constant, so that
$$a(t) = a_i\,e^{H(t - t_i)}.$$
We will find how long inflation must be sustained to resolve the horizon problem, i.e., the corresponding e-folding number
$$N = H(t_f - t_i) = \ln\frac{a_f}{a_i}.$$
Now, the horizon scale observed today, $H_0^{-1}$, was reduced during inflation to a value $\lambda_{H_0}(t_i)$, which must be smaller than the horizon length during inflation.
Dividing and multiplying Equation (422) by $a(t_f)$, evaluating at the time when the inflationary period comes to an end, and identifying the beginning of the radiation-dominated era with the end of the inflationary phase, the requirement becomes a condition on $\frac{a_i}{a_f}$. Using $\frac{a_i}{a_f} = e^{-N}$, Equation (438) takes the form of a lower bound on $N$; taking $|\Omega - 1|_{t=t_i} \approx 1$, one finds
$$N \gtrsim 70. \qquad (442)$$
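The order of magnitude of this bound follows from simple arithmetic: during inflation $|\Omega - 1| = |k|/(aH)^2 \sim e^{-2N}$ with $H$ nearly constant, so suppressing an $\mathcal{O}(1)$ initial deviation to the $\lesssim 10^{-60}$ level required at the start of the radiation era gives

```python
# Minimal sketch: e-folds needed to drive |Omega - 1| from ~1 down to ~1e-60,
# using |Omega - 1| ~ e^{-2N} during inflation (H approximately constant).
import math
N = 0.5 * math.log(1e60)
print(N)   # ~69 e-folds, consistent with N >~ 70 above
```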
Inflation and Entropy Problem
The entropy problem can be resolved if a large amount of entropy is created non-adiabatically in the very early universe [48,51], which is accomplished by the inflationary era within a finite time in the early history of the universe. Let the entropy at the end of inflation be $S_f$ and at the beginning $S_i$, with $S_f = M^3 S_i$, where $M$ is a numerical factor with $M^3 \approx 10^{90}$, i.e., $M \approx 10^{30}$, and $S_f = S_U$ is the entropy of the observed universe. We know that $S \sim (aT)^3$, so we can write $S_i \sim (a_i T_i)^3$ and $S_f \sim (a_f T_f)^3$, where $T_i$ and $T_f$ are the temperatures at the beginning and end of the inflationary period. Dividing Equation (445) by Equation (444),
$$\frac{S_f}{S_i} = \left(\frac{a_f T_f}{a_i T_i}\right)^3.$$
Now $\frac{a_f}{a_i} = e^N$, and considering that at the beginning of the inflationary phase the total entropy of the universe was of order 1, i.e., $S_i \sim 1$ with $S_f = S_U$, Equation (448) takes the form
$$S_U \sim e^{3N}\left(\frac{T_f}{T_i}\right)^3,$$
which reaches the required $\sim 10^{90}$ for $N \sim 70$; therefore, the entropy problem is resolved by the inflationary period.
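The required e-fold number is again a one-line estimate, assuming $T_f \sim T_i$ after reheating:

```python
# Minimal sketch: S_f/S_i ~ e^{3N} (for T_f ~ T_i), so S_U ~ 1e90 from S_i ~ 1
# fixes the number of e-folds.
import math
N = math.log(1e90) / 3
print(f"N ~ {N:.0f}")   # ~69, consistent with the horizon/flatness estimates
```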
Inflation and Monopole Problem
In grand unified theories (GUTs), the standard model group SU(3) × SU(2) × U(1) of particle physics emerges from the breaking of a simple symmetry group. In these theories, very heavy particles of very high density, known as magnetic monopoles, are predicted to be created. The cosmological monopoles are supposed to be produced before the period of inflation: they form during the symmetry-breaking phase transitions, with the inflationary era taking place just afterwards. Inflation then dilutes the number density of these magnetic monopoles, $n_{mp} \propto \frac{N_{mp}}{a^3} \rightarrow 0$, to a negligibly small value, so small that they are beyond detection today [52]. During inflation the monopole density falls exponentially, and their initially abundant presence drops to a level that is hardly detectable.
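With $a^3$ growing as $e^{3N}$, the dilution factor is immediate:

```python
# Minimal sketch: monopole number density dilution during inflation,
# n ~ n_i * e^{-3N}; with N ~ 70 the suppression is overwhelming.
import math
N = 70
exponent = -3 * N / math.log(10)   # log10 of the dilution factor
print(f"n/n_i ~ 10^{exponent:.0f}")   # ~ 10^-91
```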
Inflation and Observations
Cosmological perturbations are an important relic of inflation, used to describe the anisotropies of the cosmic microwave background (CMB) and the formation and evolution of structure in the universe. The seeds of the inhomogeneities that represent all the structure in the universe were produced during the inflationary phase and stretched to astronomical scales by the exponential expansion; these inhomogeneities are what we see today as stars, galaxies, etc., in the form of baryonic matter. From the theory of linear perturbations and from the correspondence $\delta\phi \Leftrightarrow \delta g_{\mu\nu}$, we know how to categorize FLRW metric perturbations at first order into scalar, vector, and tensor perturbations of spin 0, 1, and 2, respectively. A very important quantity determining the properties of the perturbations of the scalar field is the power spectrum $p_\phi(k)$. The scalar field power spectrum at the time of horizon crossing comes out to be
$$p_\phi(k) = \left(\frac{H}{2\pi}\right)^2,$$
the curvature power spectrum is calculated to be
$$p_{\mathcal{R}}(k) = \left(\frac{H}{\dot\phi}\right)^2\left(\frac{H}{2\pi}\right)^2,$$
and the power spectrum for tensor perturbations is given in terms of $H$ alone. The tensor-to-scalar ratio of the spectra is found in terms of the first slow-roll parameter,
$$r = 16\,\varepsilon.$$
We can now define the scalar and tensor spectral indices, $n_s$ and $n_T$. Scale invariance of the scalar power spectrum is characterized by $n_s - 1 = 0 \Rightarrow n_s = 1$, and deviations from scale invariance give an inflationary model its specific features. At lowest order the spectral indices can be described in terms of the potential slow-roll parameters $\varepsilon_V$ and $\eta_V$, as $n_s - 1 = 2\eta_V - 6\varepsilon_V$. In the case of a large-field model with a general polynomial potential $V(\phi) \propto \phi^p$, we find
$$r = \frac{4p}{N}. \qquad (459)$$
For a particular inflationary model, $p$ must be assigned a value greater than unity [53].
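Putting the large-field results together (using Equation (459) and the standard slow-roll expression $n_s - 1 = -(p+2)/(2N)$ for $V \propto \phi^p$, which follows from the parameters computed earlier), the observable predictions are a two-line calculation:

```python
# Minimal sketch: lowest-order predictions for a large-field model V ~ phi**p,
# with n_s - 1 = -(p + 2)/(2N) and r = 4p/N (standard slow-roll results).
def predictions(p, N):
    return 1 - (p + 2) / (2 * N), 4 * p / N

for p in (2, 4):
    n_s, r = predictions(p, N=60)
    print(f"p={p}: n_s = {n_s:.3f}, r = {r:.3f}")
# p=2 -> n_s ~ 0.967, r ~ 0.133; such large r is in tension with current bounds.
```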
ΛCDM
The standard model of cosmology describes a universe that evolves from a singularity at $t = 0$. This singularity is known as the big bang and marks the instant when the universe begins its evolution in time; a detailed discussion of the big bang theory of creation is presented in Appendix D. Its kinematics is described by the FLRW spacetime, and its dynamics is governed by the Friedmann equations in the framework of general relativity. The standard model is usually known as the big bang model [54], from the extrapolation of redshifts back towards the big bang singularity. The observational parameters are not fixed by the standard model itself, which means that the big bang model is parameterizable. The ΛCDM model constitutes one such parameterization and shows remarkable consistency with recent observations, which is why it has gathered the support of the majority of cosmologists. It incorporates two ingredients, namely the cosmological constant Λ and cold dark matter (CDM), in addition to ordinary matter. It is interesting to note that the nature of both of these ingredients is colossally unknown, and theoretical and observational developments to uncover it are underway. ΛCDM is also called the concordance model, for its agreement with recent measurements of the parameters. The dynamics of ΛCDM is, of course, governed by general relativity. Λ was introduced by Einstein himself [3] to balance the gravitational effect of ordinary matter and obtain a static model of the universe; the energy density of Λ is thus contained in the structure of spacetime itself, in other words, it is the vacuum energy of space. However, Einstein dropped it after the expansion was confirmed in 1929, calling it the biggest blunder of his life. After the accelerated expansion was discovered in 1998, Λ came back once again to accommodate the accelerated expansion, though this time it is expected to have a dynamic nature. According to recent observations [35,36], dark energy makes up ~70%, dark matter ~25%, and ordinary (baryonic) matter only ~5% of the universe. Within the ΛCDM model, the nature of dark energy presents one of the most challenging issues of present-day cosmology. In ΛCDM space is spatially flat, and the radius of curvature is therefore infinitely large. ΛCDM adopts a minimal set of parameters to describe the universe, namely six. Recent measurements of these parameters from different sources are given in Table 1 [35,55]. From these six free parameters we can deduce other parameters, such as the Hubble constant, with some assumptions about the cosmological model; details can be found in [35]. In the context of ΛCDM we can find how the energy densities are related to quantities such as the scale factor $a(t)$, time $t$, and the Hubble parameter in the corresponding sectors, namely radiation, (cold) matter, and dark energy.
Using Equations (96) and (108), we find for the radiation sector, for which $\rho = 3p \Rightarrow w = \tfrac13$,
$$\rho_r \propto a^{-4}, \qquad a(t) \propto t^{1/2};$$
for the matter sector, for which $p = 0 \Rightarrow w = 0$,
$$\rho_m \propto a^{-3}, \qquad a(t) \propto t^{2/3};$$
and for the dark energy sector, for which $\rho_\Lambda = -p_\Lambda \Rightarrow w_\Lambda = -1$,
$$\rho_\Lambda = \text{const}, \qquad a(t) \propto e^{Ht}.$$
Using the Friedmann equation and the total energy density of all forms, we can determine the background dynamics in the form of two very significant parameters, $\Omega_m$ and $\Omega_\Lambda$. The observations of the Planck collaboration on the cosmic microwave background radiation (CMB) give [35] the values of these parameters as $\Omega_{m0} = 0.3089$, $\Omega_{r0} = 5.38916 \times 10^{-5}$, and $\Omega_{\Lambda 0} = 0.691046$; for a flat ΛCDM model, the observations of Type Ia supernovae give [3] $\Omega_{m0} = 0.295$, and the others can be estimated from $\Omega_\Lambda = 1 - \Omega_{DM} - \Omega_b$. Figure 20 shows the dark energy and cold matter densities [56] in terms of the density parameters $\Omega_\Lambda$ and $\Omega_m$.
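One immediate consequence of these scalings, using the Planck values just quoted, is the epoch at which dark energy overtakes matter:

```python
# Minimal sketch: matter-Lambda equality in flat Lambda-CDM, where
# rho_m ~ a^-3 crosses the constant rho_Lambda (Planck values from the text).
Om_m, Om_L = 0.3089, 0.691046

a_eq = (Om_m / Om_L) ** (1 / 3)
z_eq = 1 / a_eq - 1
print(f"a_eq = {a_eq:.3f}, z_eq = {z_eq:.3f}")   # ~0.77 and z ~ 0.3
```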
Inflation and Dark Energy in $f(R)$ Modified Gravity
The Einstein field equation (EFE) of general relativity is well known:
$$R_{\mu\nu} - \frac{1}{2}g_{\mu\nu}R = 8\pi G\,T_{\mu\nu}. \qquad (471)$$
Equation (471) corresponds to the Einstein-Hilbert action
$$S = \frac{1}{16\pi G}\int d^4x\,\sqrt{-g}\,R + S_m. \qquad (472)$$
In scalar field models we usually modify the RHS, i.e., the energy-momentum tensor (the matter sector), by adding terms for a scalar field. If instead the RHS is kept unaltered and the LHS, which stands for the geometry of spacetime playing the role of gravity, is modified, the result is called a model of modified gravity. The LHS of the EFE is built purely from the curvature, i.e., the Ricci scalar $R$; in modified gravity we replace it by a general function of $R$ [57-60]. Replacing the Ricci scalar $R$ in the Einstein-Hilbert action of Equation (471) by a general function of $R$, that is, $R \rightarrow f(R)$, we have
$$S = \frac{1}{16\pi G}\int d^4x\,\sqrt{-g}\,f(R) + S_m. \qquad (473)$$
The variation of Equation (473) yields, through tedious calculations, the modified gravity equation
$$F(R)R_{\mu\nu} - \frac{1}{2}f(R)g_{\mu\nu} - \nabla_\mu\nabla_\nu F(R) + g_{\mu\nu}\,\Box F(R) = 8\pi G\,T_{\mu\nu}, \qquad (475)$$
where $\Box = \nabla^\nu\nabla_\nu$ and $F(R) = f_{,R}(R)$. The LHS of Equation (475) is the modified form of $R_{\mu\nu} - \frac12 g_{\mu\nu}R = G_{\mu\nu}$. Contracting Equation (475) with $g^{\mu\nu}$ gives the trace of the modified EFE,
$$F(R)R - 2f(R) + 3\,\Box F(R) = 8\pi G\,T. \qquad (477)$$
For a vacuum solution, $T = 0$, and for a de Sitter space with constant curvature $R$, Equation (477) represents an inflationary solution with the term $3\,\Box F(R) = 0$, i.e.,
$$F(R)R - 2f(R) = 0. \qquad (478)$$
If the condition in Equation (478) is fulfilled, the late-time de Sitter solution can be obtained in an $f(R)$-based dark energy model. The Friedmann equations for modified gravity can be determined for a spatially flat, expanding FLRW universe: going through the lengthy calculations, the $\mu = \nu = 0$ component of Equation (475), using Equations (480) and (484), leads to Equation (486), while the $\mu = \nu = j$ components, using Equations (481), (484), and (488) in Equation (487), give Equation (489). Equation (480) can be further re-expressed using Equation (482), where the trace of $T_{\mu\nu}$ is $(-\rho_M + 3p_M)$. Equations (486) and (489) together determine the background dynamics of a flat FLRW universe governed by $f(R)$. From Equation (486), dividing by $3H^2F$ and taking $\rho = \rho_R + \rho_M$, we can construct a dynamical system in the framework of $f(R)$ by defining suitable dimensionless parameters; Equation (493) can then be recast using Equation (482). Now, with the matter and radiation pressures specified, $N = \ln a$, and $\dot H = H'H$ (the prime denoting $d/dN$), we can determine the dynamical system. The effective equation of state can also be written from Equations (486) and (489) by division, or by using Equation (496), and further equivalent forms can be obtained. We now study $f(R)$ inflation by first considering a general form of $f(R)$ and determining its dynamics; afterwards, Starobinsky inflation in $f(R)$ will be discussed. Let us consider
$$f(R) = R + bR^n, \qquad F(R) = 1 + nbR^{n-1},$$
and substitute Equations (507)-(510) into Equation (486), where the $k\rho_M$ term vanishes during the inflationary phase, to obtain Equation (511). Cosmological acceleration can be realized in the regime $F \gg 1 \Rightarrow 1 + nbR^{n-1} \gg 1$, i.e., $nbR^{n-1} \gg 0$, which implies $1 + nbR^{n-1} \approx nbR^{n-1}$. Dividing Equation (511) by $3nbR^{n-1}$, we obtain, after simplification, the inflationary dynamics in this regime.
We next find the inverse of the perturbed metric of Equation (454), that is, $g^{\mu\nu}$, such that
$$g^{\mu\zeta}g_{\zeta\nu} = \delta^\mu_\nu, \qquad (541)$$
where $\delta^\mu_\nu$ is the Kronecker delta and $g^{(0)}_{\zeta\nu}$ is simply the unperturbed FLRW line element. The inverse of $g^{(0)}_{\mu\nu}$ is simply $g^{\mu\nu}_{(0)}$, since for a diagonal unperturbed metric the inverse components are the reciprocals of the metric components, so we can write $g_{\mu\nu} = g^{(0)}_{\mu\nu} + \delta g_{\mu\nu}$ and $g^{\mu\nu} = g^{\mu\nu}_{(0)} + \delta g^{\mu\nu}$. For $\mu = 0$, $\nu = 0$ in Equation (541), expanding the products, simplifying, and neglecting the second-order terms $-2AX$ and $\partial_i Y\,\partial^i B$, we obtain $\delta g^{00}$. Again, from Equation (541) with $\mu = 0$, $\nu = i$, substituting the values and neglecting the higher-order products $2A\,\partial_i B$, $\partial_i Y\cdot 2\psi$, and $\partial_j D_{ji}\,Y\,E$, we obtain $\delta g^{0i}$ on integrating. Finally, from Equation (541) with $\mu = i$, $\nu = j$, keeping the non-vanishing terms, substituting the values with a suitable change of indices, and comparing the coefficients of $\delta_{ij}$ and $D_{ij}$, we get $Z = \psi$ and $K = -E$, so that the inverse metric of the perturbed line element is fully determined.
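The de Sitter condition of Equation (478) can be checked symbolically for the toy model just introduced; the sketch below (with an assumed exponent $n = 3$) solves $F(R)R = 2f(R)$ for the constant-curvature solution:

```python
# Minimal sketch: vacuum de Sitter condition F(R)*R = 2*f(R) (box F = 0)
# for the toy model f(R) = R + b*R**n with n = 3 (assumed exponent).
import sympy as sp

R, b = sp.symbols('R b', positive=True)
n = 3
f = R + b * R**n
F = sp.diff(f, R)

sols = sp.solve(sp.Eq(F * R, 2 * f), R)
print(sols)   # nontrivial root R = 1/sqrt(b): a constant-curvature solution
```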
Summary
Relativistic cosmology was founded on the general theory of relativity, with the cosmological principle and Weyl's principle implicitly assumed. In the beginning, Einstein's and de Sitter's cosmological models were presented; though now of historical interest, both are very significant, as the first initiated modern relativistic, scientific cosmology, while the latter was later used, with a slight change, to provide the initial conditions of the big bang model. The first theoretical models of a dynamic universe began with Friedmann and Lemaitre and were confirmed observationally by E. Hubble: in 1929 Hubble found exactly the expanding universe that Friedmann had derived theoretically in 1922. It was therefore Friedmann who championed the cause of dynamical universes, although his work was only recognized after his death. The big bang theory based on the standard cosmological model faces the horizon, flatness, and entropy problems, among others. To resolve these problems, a phase of exponential expansion, known as inflation, was introduced into the very early history of the universe, occurring within a very small fraction of the first second (about $10^{-43}$ s after the creation of time). Inflation provides the initial conditions under which the big bang might have taken place. Its introduction gave rise to the name inflationary cosmology, now about forty years old. The inflationary paradigm stands on firm observational footing and is widely accepted in cosmology as a viable description of the early universe. Starobinsky, Guth, and Linde are credited with setting the foundations of inflationary cosmology. Inflationary cosmology is hailed as successful in explaining the origin of structure formation through cosmological quantum fluctuations, relics of cosmic inflation. Observations of the microwave background radiation and the recent discoveries of gravitational waves and black holes lend confirmatory support to its underlying principles. Dark energy is one of the most challenging issues of standard cosmology, on both theoretical and observational grounds. In the framework of ΛCDM it has equation of state (EoS) $w = -1$; however, Λ faces a fine-tuning problem. An alternative remedy for the problems of Λ are models consisting of canonical and non-canonical scalar fields. The scalar field models modify the matter sector on the right-hand side of the EFE, whereas in $f(R)$ gravity it is the geometry, the curvature of spacetime, that is modified. The ΛCDM model is accepted for its good agreement with recent observations. Note that there exists a well-elaborated scenario unifying inflation with dark energy in modified gravity, first proposed by S. Nojiri and S. D. Odintsov [61].
Appendix A. Space, Time and Spacetime
A background arena of space and time is necessarily required for all physical phenomena to play out in, and the known physical laws must be made compatible with the structure of space and time. Space, time, and motion are concomitant ingredients cohered to matter and can never be disengaged from each other. The universe exists in space and evolves in time, so that universe, space, and time are inseparable and coherently related to each other. Space is understood as possessing three dimensions, while time is speculated to have only one. Newtonian mechanics has accordingly been formulated so as to consider the spatial dimensions as existing independently of the single dimension of time. Euclidean geometry provides the necessary machinery for dealing with such notions of space and time. In this regard Euclidean space becomes important: it proposes three independent perpendicular dimensions of space, and the dimension of time is not affected by it. Space and time are envisaged as independent absolute entities which are not affected by each other. The Euclidean structure of space is flat, and distances are measured using the standard Pythagorean theorem in three dimensions as
$$ds^2 = x^2 + y^2 + z^2, \qquad (A1)$$
or, in terms of differentials of the coordinates,
$$ds^2 = dx^2 + dy^2 + dz^2, \qquad (A2)$$
where the displacement is (x, y, z) or (dx, dy, dz), respectively. The time coordinate does not appear anywhere in this distance-measuring formula, which means that in the geometry of space the dimension of time is dealt with separately. Newton's notions of space and time as described in the Principia Mathematica are given as "Absolute space, in its own nature, without regard to anything external, remains always similar and immovable. Relative space is some movable dimension or measure of the absolute spaces which our senses determine by its position to bodies: and which is vulgarly taken for immovable space. Absolute motion is the translation of a body from one absolute place into another: and relative motion, the translation from one relative place into another", and absolute time is defined in these words: "Absolute, true and mathematical time, of itself, and from its own nature flows equably without regard to anything external, and by another name is called duration. Relative, apparent and common time, is some sensible and external (whether accurate or inequable) measure of duration by the means of motion, which is commonly used instead of true time".
In 1905, Einstein's paper entitled "On the electrodynamics of moving bodies" put forth, on the basis of two postulates, that time might be dealt with on an equal footing with space, as one of the dimensions of a unified arena. Minkowski (1864-1909) translated the mixing of space and time coordinates as requiring a four-dimensional scenario in which physical phenomena take place; the geometry of such a four-dimensional spacetime, with time as one dimension, is described by the spacetime interval, the generalized form of the Pythagorean theorem,
$$ds^2 = -dt^2 + dx^2 + dy^2 + dz^2, \qquad (A3)$$
or, more compactly,
$$ds^2 = \eta_{\mu\nu}\, dx^\mu dx^\nu. \qquad (A4)$$
Minkowski first understood that the spacetime interval given in Equation (A3) remains invariant for all observers and carries the same meaning for all observers in uniform relative motion, whereas intervals taken with respect to time or space separately do not remain identical for all such observers. Minkowski avowedly said, in a conference address to the German scientists: "Ladies and gentlemen! the views of space and time which I wish to lay before you have sprung from the soil of experimental physics, therein lies their strength, they are radical. Henceforth space by itself and time by itself are doomed to fade away into mere shadows and only a union of the two will preserve an independent reality" [42]. General relativity was formulated on the basis of the four-dimensional spacetime as Minkowski had laid it out, but in order to incorporate gravity into it Einstein utilized the power of tensors and modeled the curved geometry of spacetime, describing its curvature as gravity. The geometry of curved spacetime is encoded in a rank-two symmetric tensor, known as the fundamental tensor, through the spacetime metric or line element
$$ds^2 = g_{\mu\nu}\, dx^\mu dx^\nu, \qquad (A5)$$
where $g_{\mu\nu}$ is given by
$$g_{\mu\nu} = \begin{pmatrix} g_{00} & g_{01} & g_{02} & g_{03} \\ g_{10} & g_{11} & g_{12} & g_{13} \\ g_{20} & g_{21} & g_{22} & g_{23} \\ g_{30} & g_{31} & g_{32} & g_{33} \end{pmatrix}. \qquad (A6)$$
In the absence of matter, the curvature of spacetime vanishes and the geometry becomes flat, i.e., $g_{\mu\nu} = \eta_{\mu\nu}$, yet non-Euclidean, as required by special relativity.
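Minkowski's claim that the interval (A3) carries the same meaning for all observers in uniform relative motion is easy to check numerically. The sketch below is an illustration, not part of the text: it boosts an arbitrary coordinate separation along x and verifies that $ds^2$ is unchanged.

```python
import numpy as np

def boost_x(v):
    """Lorentz boost along x with speed v, in units where c = 1."""
    g = 1.0 / np.sqrt(1.0 - v * v)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = g
    L[0, 1] = L[1, 0] = -g * v
    return L

eta = np.diag([-1.0, 1.0, 1.0, 1.0])    # Minkowski metric of Eq. (A3)
dx = np.array([2.0, 1.0, 0.5, -0.3])    # an arbitrary (dt, dx, dy, dz)

ds2 = dx @ eta @ dx                     # interval in the original frame
dxp = boost_x(0.6) @ dx                 # same separation, boosted observer
ds2p = dxp @ eta @ dxp

print(ds2, ds2p)                        # agree to rounding error
assert np.isclose(ds2, ds2p)
```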
Appendix B. Maximally Symmetric 3-Space (Spherically Symmetric Space)
The more symmetrical a space is, the fewer functions are required to determine its properties. It is the curvature of a space, and its nature, that determines how close the space is to being maximally symmetric. If the curvature K of the space under consideration does not depend on the coordinates of its points and has a constant value, then the space is maximally symmetric, and spaces possessing curvature of this kind logically entail the cosmological principle, i.e., homogeneity and isotropy. The spacelike coordinates x¹, x², x³ span the 3-space which we require to be maximally symmetric. The Riemann curvature tensor $R^\sigma{}_{\mu\nu\rho}$ in three-dimensional space has $3^4 = 81$ components, which in general depend on the coordinates; of these, only six components are independent, so six functions of the coordinates are required to specify intrinsically the geometric properties of the three-dimensional space. For maximally symmetric spaces the Riemann tensor takes the simplest form it can adopt, depending only on the curvature K and the metric tensor $g_{\mu\nu}$:
$$R_{\mu\nu\zeta\pi} = K\left(g_{\mu\zeta} g_{\nu\pi} - g_{\mu\pi} g_{\nu\zeta}\right). \qquad (A8)$$
Contracting with the inverse metric,
$$g^{\mu\pi} R_{\mu\nu\zeta\pi} = K\left(g^{\mu\pi} g_{\mu\zeta} g_{\nu\pi} - g^{\mu\pi} g_{\mu\pi} g_{\nu\zeta}\right) = K\left(\delta^\pi_\zeta g_{\nu\pi} - (\delta^1_1 + \delta^2_2 + \delta^3_3)\, g_{\nu\zeta}\right), \qquad (A9, A10)$$
so that
$$R_{\nu\zeta} = -2K g_{\nu\zeta}. \qquad (A11)$$
Then the Ricci scalar, or curvature scalar, is obtained from Equation (A11) by contraction with the inverse metric tensor $g^{\nu\zeta}$:
$$R = g^{\nu\zeta} R_{\nu\zeta} = -2K g^{\nu\zeta} g_{\nu\zeta} = -6K. \qquad (A12)$$
The metric of an isotropic 3-space must depend only on the rotational invariants $\mathbf{x}\cdot d\mathbf{x}$, $d\mathbf{x}\cdot d\mathbf{x}$ and $\mathbf{x}\cdot\mathbf{x}$, and in spherical polar coordinates (r, θ, φ) it should take the form
$$d\sigma^2 = C(r)\, r^2 dr^2 + D(r)\left(dr^2 + r^2 d\theta^2 + r^2 \sin^2\theta\, d\varphi^2\right). \qquad (A16)$$
Redefining the radial coordinate as $\bar{r}^2 = r^2 D(r)$ and dropping the bars, Equation (A16) can be written in the form
$$d\sigma^2 = B(r)\, dr^2 + r^2 d\theta^2 + r^2 \sin^2\theta\, d\varphi^2, \qquad (A17)$$
where B(r) is an arbitrary function of r. For the metric in Equation (A17) the components are
$$g_{11} = B(r), \quad g_{22} = r^2, \quad g_{33} = r^2 \sin^2\theta. \qquad (A18)$$
Computing the non-vanishing Christoffel symbols of this metric and imposing the maximally symmetric condition (A11) on its Ricci tensor fixes $B(r) = 1/(1 - Kr^2)$, so we obtain the metric
$$d\sigma^2 = \frac{dr^2}{1 - Kr^2} + r^2 d\theta^2 + r^2 \sin^2\theta\, d\varphi^2. \qquad (A26)$$
Equation (A26) incorporates a hidden symmetry characterized by homogeneity and isotropy, and represents the line element of a maximally symmetric 3-space. Because of the arbitrariness of the origin of the radial coordinate system and the symmetry of the space, all points of this space are equivalent and the origin can be chosen at any point, which means that no privileged center exists in this space. Therefore the maximally symmetric space has no boundary. Further, for K > 0 the line element is perfectly equivalent to the metric of a 3-sphere embedded in a four-dimensional Euclidean space, which has spherical symmetry as well.
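The constant-curvature property $R_{\nu\zeta} \propto K g_{\nu\zeta}$ claimed in (A8)-(A11) for the metric (A26) can be verified symbolically. The sympy sketch below is our illustration, not from the text: it computes the Christoffel symbols and Ricci tensor of (A26) from scratch and prints the ratio $R_{ii}/g_{ii}$, which comes out as $2K$ componentwise (the $-2K$ of Equation (A11) corresponds to the opposite overall sign convention for the curvature tensor).

```python
import sympy as sp

r, th, K = sp.symbols('r theta K', positive=True)
ph = sp.Symbol('phi')
x = [r, th, ph]
g = sp.diag(1/(1 - K*r**2), r**2, r**2*sp.sin(th)**2)   # the metric (A26)
ginv = g.inv()
n = 3

def christoffel(a, b, c):
    """Gamma^a_{bc} of the Levi-Civita connection."""
    return sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, b], x[c])
                                       + sp.diff(g[d, c], x[b])
                                       - sp.diff(g[b, c], x[d]))
                           for d in range(n))/2)

Gam = [[[christoffel(a, b, c) for c in range(n)] for b in range(n)]
       for a in range(n)]

def ricci(b, c):
    """R_{bc} = d_a Gam^a_{bc} - d_c Gam^a_{ba} + Gam*Gam - Gam*Gam."""
    expr = 0
    for a in range(n):
        expr += sp.diff(Gam[a][b][c], x[a]) - sp.diff(Gam[a][b][a], x[c])
        for d in range(n):
            expr += Gam[a][a][d]*Gam[d][b][c] - Gam[a][c][d]*Gam[d][b][a]
    return sp.simplify(expr)

for i in range(n):
    print(sp.simplify(ricci(i, i) / g[i, i]))   # prints 2*K for each i
```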
Appendix C. Spectrum of the Black Body
A blackbody hypothetically absorbs radiation of all wavelengths falling on it, reflecting nothing at all. The distribution of blackbody radiation over the different wavelengths is shown in Figure A1 (Radiation distribution of a blackbody at different wavelengths).
In the early universe, when matter and radiation decoupled from each other (the so-called decoupling), the primordial radiation given off provides a snapshot of the universe at that time and is known as the cosmic microwave background radiation (CMBR), observed accidentally in the 1960s. Recent observations of the cosmic microwave background reveal that it is a near-perfect blackbody with an average temperature of 2.7255 kelvin. We know that the wavelength distribution of a blackbody is given by
$$u(\lambda, T)\, d\lambda = \frac{8\pi h c}{\lambda^5} \frac{d\lambda}{e^{hc/\lambda k T} - 1}, \qquad (A27)$$
where $u(\lambda, T)\,d\lambda$ is the energy per unit volume of the radiation with wavelength between λ and λ + dλ emitted by a blackbody at temperature T. We now consider blackbody radiation from the big bang, from when the universe first became transparent to photons, about 400,000 years after the big bang, to the present, about 14,000,000,000 years later. The wavelength λ of the primordial photons is Doppler shifted to λ′ owing to the expansion of the universe, with λ′ > λ. Let $f(\lambda', T')\,d\lambda'$ be the current energy per unit volume of the residual big bang radiation as measured from the earth. As the shell of charged particles that emitted the radiation is moving away from the Earth at an extremely relativistic speed, we should use the relativistic Doppler shift for light from a receding source to relate λ′ to λ, that is,
$$\lambda' = \lambda \sqrt{\frac{c+v}{c-v}}, \qquad (A28)$$
where v is the speed of recession of the charged shell. As v < c, clearly λ′ > λ by the factor
$$B = \sqrt{\frac{c+v}{c-v}}. \qquad (A29)$$
Equation (A29) can be interpreted through the generalization that all distances have grown by this factor since the first radiation was emitted. Relating the currently observed spectrum $f(\lambda', T')\,d\lambda'$ to the original blackbody distribution, Equation (A33) says that the radiation from a receding blackbody still has a blackbody spectral distribution, but its temperature T and its energy element $u(\lambda, T)\,d\lambda$ are reduced by factors of B and B⁴, respectively (A34).
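The scaling claimed in (A33)-(A34) is easy to check numerically from the Planck law (A27). In the sketch below (an illustration with B = 3, not from the text), stretching all wavelengths by B while lowering the temperature to T/B reproduces the same curve up to a factor $B^5$ per unit wavelength, i.e. $B^4$ for the energy element $u\,d\lambda$ once $d\lambda' = B\,d\lambda$ is included.

```python
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23     # SI values of h, c, k_B

def u(lam, T):
    """Planck energy density per unit wavelength, Eq. (A27)."""
    return 8*np.pi*h*c / lam**5 / np.expm1(h*c/(lam*k*T))

T = 2.7255                                   # CMB temperature in kelvin
lam = np.linspace(0.2e-3, 5e-3, 2000)        # wavelengths in metres

B = 3.0                                      # recession factor sqrt((c+v)/(c-v))
# u(B*lam, T/B) = u(lam, T)/B**5 per unit wavelength; multiplying by the
# stretched measure dlam' = B*dlam gives the B**4 drop of the energy element.
assert np.allclose(u(B*lam, T/B), u(lam, T)/B**5)
print("effective temperature after recession:", T/B, "K")
```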
Appendix D. Big Bang Theory of Creation
Historically, the name "big bang" is due to Fred Hoyle, one of the inventors and staunch proponents of the steady-state theory, who coined the term offhandedly while expressing his abhorrence of the idea. The steady-state theory, once a rival of the big bang, supports an eternally evolving universe without a beginning or an end. The big bang theory explains the evolutionary phases of the universe, beginning with a tiny fraction of the very first second and continuing to the present age. The warp and woof of the theory is woven from the equations of general relativity and the developments made in their context. The theory traces its theoretical origin back to the Friedmann equations and the discovery of the expansion of the universe by Edwin Hubble in 1929. Further, it rests upon the relative abundances of the light elements computed by George Gamow in the 1940s and the accidental discovery of the CMB by Penzias and Wilson in the 1960s. The theory stands on the basis of the standard cosmological model and describes how our universe had a beginning, erupting from an extremely dense, point-like singularity about 14 billion years ago. In the singularity state, all the basic interactions of nature had coalesced symmetrically, and all matter-energy had melted down into an indistinguishable quark-gluon primordial soup. Einstein expressed his views on the nature of this singularity in his later years: "The theory is based on a separation of the concepts of the gravitational field and matter. While this may be a valid approximation for weak fields, it may presumably be quite inadequate for very high densities of matter. One may not therefore assume the validity of the equations for very high densities and it is just possible that in a unified theory there would be no such singularity" [62]. It is speculated that during the Planck time, of the order of 10⁻⁴³ s, all the forces of nature, namely the weak nuclear, strong nuclear, electromagnetic, and gravitational forces, were merged into one another such that they were indistinguishable, bearing perfect symmetry. The span from the beginning of time, t = 0 s, to the Planck time t_p ∼ 10⁻⁴³ s within the very first second is known as the Trans-Planckian era, whose physics is still incomplete and remains open to investigation. It is conjectured that during the period from 10⁻⁴³ s to 10⁻³⁵ s the gravitational force freed itself from the rest of the interactions, and during this period there existed the particles that supersymmetry predicts, together with quarks, leptons, their antiparticles, and certain massive particles. In the interval that begins at 10⁻³⁵ s and ends shortly afterwards, at about 10⁻³² s, the universe expanded exponentially and gradually cooled, with the strong force separating from the electroweak force. As the universe continued to cool, around 10⁻¹⁰ s the electroweak force split into the weak and electromagnetic forces, and within a few minutes protons and neutrons started to condense out of the cooling quark-gluon plasma soup. During the first minutes of creation, the universe can be viewed as a thermonuclear bomb fusing protons and neutrons into deuterium and then helium, producing most of the helium nuclei that exist now. From the big bang until about 400,000 years later, the radiation-dominated era prevailed.
Vibrant photonic radiation prevented matter from clumping, or even from forming single hydrogen or helium atoms: photon-atom collisions would instantly ionize any atom that happened to form, so atoms had no chance to survive, and the universe remained opaque to electromagnetic radiation owing to the incessant Compton scattering experienced by photons off the abundant free electrons. On further cooling, electrons could bind to protons and to helium nuclei; with the reduction in the number of charged particles, and hence in the absorption and scattering of photons, the universe rather suddenly became transparent to photons. The radiation-dominated era thus came to an end, and the domination of neutral matter began, in the form of atoms, molecules, gas clouds, stars and, in the end, galaxies: the universe today. This is the whole saga of the big bang theory of creation.
"Physics"
] |
Discrete Cosine Transformation based Image Data Compression Considering Image Restoration
Discrete Cosine Transformation (DCT) based image data compression considering image restoration is proposed: an image data compression method based on DCT featuring an image restoration method. DCT image compression is widely used but suffers from several major image defects. In order to reduce the noise and distortions, the proposed method expresses a set of parameters for the assumed distortion model based on an image restoration method. The results of the experiment with Landsat TM (Thematic Mapper) data of Saga show good image compression performance in both compression factor and image quality; namely, the proposed method achieved a 25% improvement of the compression factor compared with the existing DCT method, with almost comparable image quality between the two methods. Keywords—Discrete Cosine Transformation; data compression; image restoration; Landsat TM
I. INTRODUCTION
Image data compression methods can be divided into two types, information lossless and information lossy. The former exploits image redundancy and guarantees no image degradation, although the attainable compression ratio is not so high; the latter sacrifices some image fidelity to obtain a high compression ratio. JPEG image data compression based on DCT is one of the information lossy methods and is a popular and widely used compression method. Although its compression ratio is satisfactorily good, the image quality degradation is severe in comparison with other information lossy methods. As a consequence, block noise, mosquito noise, color distortion noise, etc. appear in JPEG-compressed images.
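The lossy behaviour of block-DCT coding described above can be illustrated in a few lines. The Python sketch below is our toy example, not the paper's implementation: the triangular coefficient mask is a crude stand-in for JPEG's quantization tables, and the random block stands in for real image data.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, keep):
    """2-D DCT of one 8x8 block, zeroing all but the low-frequency corner."""
    coeff = dctn(block, norm='ortho')
    u, v = np.indices(coeff.shape)
    coeff[u + v >= keep] = 0.0        # crude stand-in for JPEG quantization
    return coeff

def decompress_block(coeff):
    return idctn(coeff, norm='ortho')

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)  # stand-in image block
rec = decompress_block(compress_block(block, keep=4))
print("kept coefficients:", int(np.sum(np.indices((8, 8)).sum(0) < 4)))
print("RMS error:", np.sqrt(np.mean((block - rec) ** 2)))
```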
The image data compression method proposed here is based on the well-known JPEG compression method. By using image restoration methods, the aforementioned noises are removed as much as possible. This is the basic idea of the proposed method, which allows a comparatively high data compression ratio with relatively small image degradation.
Research on image restoration is divided into work on restoration methods themselves and work on methods for estimating restoration parameters from degraded images. Restoration methods can be roughly classified into linear restoration filters and nonlinear restoration filters [1]. The former start with the classic Wiener filter and the parametric Wiener filter, which restore only the best approximation of the image on average and evaluate the difference between the restored and original images not on the space of the original image but on the observed image. The general inverse filter, the constrained least-squares filter, and the projection filter and partial projection filter, which may be significantly affected by noise in the restored image, have also been proposed [2]. However, the linear approaches remain insufficient with respect to the optimization of the evaluation criteria and are still under study.
On the other hand, the latter is essentially a method for finding a nonlinear solution, so it can only proceed iteratively, and various iterative schemes have been tried. Among them are stationary iterative methods, typified by the successive over-relaxation (SOR) method, and non-stationary iterative methods, typified by the conjugate gradient method [3], [4], [5]. In general, the former require a large number of iterations but achieve high accuracy, while the latter have excellent convergence but suffer from the accumulation of rounding errors. When applied to image restoration, attention must be paid to noise resistance.
The maximum entropy method has also been proposed as an image restoration method, because it can take constraints (or prior knowledge) and resistance to noise into account [6]. As parameter estimation methods, approaches using stationary iterative schemes such as Newton's and quasi-Newton methods, and non-stationary iterative schemes such as the conjugate gradient method, have already been proposed [7], [8]. Furthermore, an annealing method has been proposed [9].
From the viewpoint of image compression, the degradation operator (the restoration parameters) can be estimated with high accuracy on the transmission side, because both the images before and after compression can be referred to there. By encoding these restoration parameters and sending them to the receiving side together with the compressed image, the receiving side can restore the image deteriorated by compression based on the decoded restoration parameters [10]. This is the basis of the compression method with image restoration proposed in this paper.
To show the effect of this method, image compression based on orthogonal expansion is taken as an example. This paper reports that a high-quality reconstructed image can be obtained by devising the encoding of the restoration parameters.
The following section describes the research background, followed by the theoretical background. Then the proposed method is described, followed by the experiment. Finally, the conclusion is presented together with some discussion.
II. RESEARCH BACKGROUND
Facsimile data compression by rearranging picture elements has been proposed [11]. Data compression for archiving Advanced Earth Observing Satellite (ADEOS) data is well reported [12]. Methods for image compression with cosmetic restoration have also been proposed [13], [14].
Meanwhile, a study of lossy data compression using JPEG/DCT and fractal methods has been conducted and well reported [15]. A preliminary study on information lossy and lossless coding for archiving ADEOS data has also been conducted and reported [16].
A method for video data compression based on space and time domain seam carving, maintaining the original quality on replay, has been proposed [17]. A data hiding method robust against run-length data compression, based on the lifting dyadic wavelet transformation, has been proposed [18]. A method for retrieving and displaying portions of comparatively large imagery data on a relatively small screen, suitable for block coding in image data compression, has been proposed and evaluated [19].
A prediction method for El Nino Southern Oscillation events by means of wavelet-based data compression with an appropriate support length of the base function has been proposed and validated with actual data [20]. Meanwhile, a data hiding method based on the LeGall 5/3 (Cohen-Daubechies-Feauveau: CDF 5/3) wavelet with data compression and random scanning of the secret imagery data has been proposed and evaluated for effectiveness and efficiency [21].
A. Transmission of Compressed Data and Restoration Parameters
Compression methods that allow image quality degradation are roughly classified into predictive coding, orthogonal transform coding (represented by the Fourier transform), and approximate coding (represented by vector quantization). Of these, orthogonal transform coding is the compression method for which the restoration parameters can most easily be estimated on the transmission side. In this method, image compression is realized by discarding the high-order coefficients after the orthogonal transformation, but image quality deteriorates because the high-order information is lost. We consider restoring this image quality degradation in an integrated manner. Fig. 1 shows the outline of the method.
In the past, only the compressed data obtained by compressing the original image with orthogonal transform coding was transmitted. In the method proposed here, however, the transmitting side also creates the data necessary to configure the restoration filter and transmits it together with the compressed data. On the receiving side, a filter is constructed from the transmitted restoration parameters, and the deteriorated image is restored.
B. Creating a Restoration Filter
The image quality degradation model is represented by a convolution operation, as shown in equation (1):
$$g(x, y) = h(x, y) * f(x, y) + n(x, y). \qquad (1)$$
Here, g represents the compressed image, h the deterioration operator, f the original image, and n noise. Denoting the respective Fourier transforms by G, H, F and N, equation (1) can be expressed as
$$G(\mu, \nu) = H(\mu, \nu) F(\mu, \nu) + N(\mu, \nu). \qquad (2)$$
If the restoration filter is B(μ, ν), the restored image $\hat{F}(\mu, \nu)$ can be expressed as $\hat{F}(\mu, \nu) = B(\mu, \nu) G(\mu, \nu)$.
Equation (2) could be used directly if G, H, F, and N were all known. On the transmitting side both the original image data and the compressed image data exist, so G and F are known there, but H and N are unknown. Since H is difficult to find, we assume H(μ, ν) = 1 and consider a model in which noise alone causes the image quality degradation. Then the noise N is simply the difference between G and F:
$$N(\mu, \nu) = G(\mu, \nu) - F(\mu, \nu).$$
The Wiener filter is considered as an example of the restoration filter.
With H = 1 it becomes
$$B(\mu, \nu) = \frac{1}{1 + |N(\mu, \nu)|^2 / |F(\mu, \nu)|^2},$$
and applying this filter to G, the original image can be restored. At this point, since the deterioration operator is collected into the frequency spectrum of the S/N ratio, the receiving side can restore the image simply by adding this information to the compressed image. Restoration filters other than the Wiener filter have many degradation-operator parameters and are considered unsuitable for image compression.
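A minimal numpy sketch of this transmit/receive split follows; the function names are ours, not the paper's, and the small eps guards against division by zero in empty spectral bins.

```python
import numpy as np

def build_wiener(f_img, g_img, eps=1e-12):
    """Transmit-side Wiener filter with H = 1: both f (original) and
    g (compressed) are available there, so N = G - F is computable."""
    F = np.fft.fft2(f_img)
    G = np.fft.fft2(g_img)
    N = G - F                                    # noise model with H = 1
    B = 1.0 / (1.0 + np.abs(N)**2 / (np.abs(F)**2 + eps))
    return B

def restore(B, g_img):
    """Receive-side restoration: apply the transmitted filter to G."""
    G = np.fft.fft2(g_img)
    return np.real(np.fft.ifft2(B * G))
```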
C. Restoration Filter Parameterization
As the orthogonal transform, the discrete cosine transform is used as an example. There have already been proposals on spectrum estimation after the transform and on the configuration of the accompanying restoration filter [7]. Here, since the Wiener filter is used as the restoration filter, a new parameter configuration for the restoration filter is devised. The filter is expressed in equation (9):
$$B(\mu, \nu) = \frac{1}{1 + N(\mu, \nu)/F(\mu, \nu)}. \qquad (9)$$
Looking at the portion N(μ, ν)/F(μ, ν), it can be seen that this is the reciprocal of the S/N ratio of the original image to the noise. It is therefore natural to first parameterize the S/N ratio for transmission. The S/N ratio over the image is shown in Fig. 2, and Fig. 3 shows the concept of its parameterization. In addition, when inverse quantization is performed, the maximum value (MAX) and minimum value (MIN) of the low-frequency components are required, so these must also be transmitted. In the end, what is transmitted is the S/N ratio of the quantized low-frequency components. In Fig. 2 the S/N ratio of ever higher frequency components is represented toward the center of the image; the blacker the pixel, the lower the S/N ratio, and the whiter the pixel, the higher the S/N ratio.
From these observations it can be seen that white and black points are mixed in the low-frequency components, so that values with considerably large absolute values are mixed and the values oscillate violently. Conversely, the values of the high-frequency components change relatively smoothly. It is therefore considered difficult to parameterize the low-frequency components, so we transmit the low-frequency components as they are, and parameterize and transmit only the high-frequency components.
The above does not hold for arbitrary images. However, it needs no argument that the image quality is better when the high-frequency components are approximated and sent than when they are simply deleted after the discrete cosine transform, as in the existing method. A natural parameterization is polynomial approximation, in which the degree of the polynomial and the coefficients of each term are transmitted; these can be determined by a method based on regression analysis. When finding the regression surface from data (x₁, y₁), (x₂, y₂), ..., (xₙ, yₙ), we fit
$$y = a_0 + a_1 x + \dots + a_{n-1} x^{n-1} + a_n x^n, \qquad (10)$$
and determine the coefficients so that the sum of squared errors
$$J = \sum_i \left\{ y_i - \left( a_0 + a_1 x_i + \dots + a_{n-1} x_i^{n-1} + a_n x_i^n \right) \right\}^2 \qquad (11)$$
is minimized. Differentiating with respect to a₀, ..., aₙ and setting the derivatives to 0 yields the normal equations; solving these gives the coefficients a₀, ..., aₙ, and only these coefficients are transmitted. Since in the actual approximation a plane in three dimensions is considered, the calculation is performed with n = 2. Furthermore, since the S/N ratio of the low-frequency components is a floating point number, it is quantized with 8 bits per element to reduce the capacity as much as possible; that is, one element is represented by one byte.
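The normal-equation fit of Eqs. (10)-(11) is ordinary polynomial regression; a minimal numpy sketch follows (the sample S/N data here are hypothetical, used only to show the mechanics).

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)                    # hypothetical frequency axis
y = 3.0 - 2.0*x + 0.5*x**2 + 0.05*rng.standard_normal(x.size)  # S/N samples

n = 2                                            # quadratic fit, as in the text
V = np.vander(x, n + 1, increasing=True)         # columns 1, x, x^2
a = np.linalg.solve(V.T @ V, V.T @ y)            # normal equations (V^T V)a = V^T y
print("coefficients a0..a2 to transmit:", a)
```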
The quantization first finds the maximum value among the low-frequency components and sets it to MAX, and sets the minimum value at the boundary between the low-frequency and high-frequency components to MIN. Quantization is then performed so that MAX becomes 255 and MIN becomes 0.
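This MAX/MIN byte quantization amounts to the following pair of helper functions (a sketch; the names are ours).

```python
import numpy as np

def quantize8(values, vmax, vmin):
    """Map floats in [vmin, vmax] onto one byte each, MAX -> 255, MIN -> 0."""
    q = np.round(255.0 * (values - vmin) / (vmax - vmin))
    return np.clip(q, 0, 255).astype(np.uint8)

def dequantize8(q, vmax, vmin):
    """Receive-side inverse quantization from the transmitted MAX and MIN."""
    return vmin + (vmax - vmin) * q.astype(float) / 255.0
```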
Furthermore, since this S/N ratio is a value in the frequency domain, the same value appears at positions symmetric with respect to the highest frequency component. In particular, for the imaginary part, a value with reversed polarity appears at the symmetric position. By exploiting this property, the capacity can be further reduced by half.
This series of parameterization steps is performed on the transmission side; on the reception side, the low-frequency components of the S/N ratio are inversely quantized, and for the high-frequency components the plane equation is reconstructed from the transmitted coefficients so that the actual values can be calculated. Fig. 6 shows the result of approximating Fig. 3, and Fig. 4 shows the original image. The sample image used here is a Landsat TM image near Ogi and Taku in the western part of Saga city; it is a PPM image using the blue, green, and red wavelength bands, with an image size of 256 × 256 pixels. Among the various image data formats, PPM (color) and PGM (black and white) are uncompressed formats. The luminance of one pixel is represented by 8 bits in PGM and 24 bits in PPM (8 bits each for red, green, and blue). Image data are recorded with 1 byte per pixel for PGM and 3 bytes per pixel for PPM, each with a header of about 15 to 30 bytes.
A. Original Image
In PGM, since one pixel occupies one byte, the number of pixels essentially equals the image data capacity. For a 512 × 512 image the data capacity is about 262,144 bytes, which is considerable. For a color image the bits per pixel rise from 8 to 24, tripling the capacity to 786,432 bytes. For color video, a large number of still frames are involved, increasing the capacity further. Fig. 5 shows an image compressed by the discrete cosine transform. The Q factor is specified as an integer in the range 0 to 100, with 100 the best image quality and 0 the worst; the Q factor of 10 used here corresponds to a considerably high compression ratio.
In this case the capacity of the original image was 196,720 bytes, the compressed data occupied 2,869 bytes, and the compression ratio was about 69. Fig. 6 shows the results of restoring the image with Wiener filters: one constructed without any approximation of the S/N ratio, in which the details are clearly recovered, and one constructed from the approximated S/N ratio, which, although slightly blurred, could still be restored in considerable detail.
B. Compression with Image Restoration
The image quality after restoration was improved compared with that before restoration, and it was found that the high-frequency components could be restored to some extent. Without approximation, the data capacity of the S/N ratio is 3,816,050 bytes, about 3.8 Mbytes; in that format the S/N ratio of each frequency component is represented by a floating point number and output as text data.
As a result of the approximation, the data capacity of the S/N ratio fell to 5,053 bytes, a remarkable compression. When the compressed image and the decompression parameters (the encoded S/N ratio) were combined, the total capacity became 7,922 bytes and the compression ratio 24.83. It was also found that, to obtain the same image quality by plain discrete cosine transform compression, a compression ratio of only about 20 could be achieved. This is slightly less than JPEG compressed with a Q factor of 98; comparing image quality, the proposed result is no worse than the Q factor = 98 image.
V. SOME DISCUSSIONS
The image was compressed at a fairly high compression ratio by the discrete cosine transform, and a filter was created to correct the degraded image and restore a good-quality image. The data (the S/N ratio) necessary to construct the filter were then parameterized and transmitted together with the compressed data.
As a result, it was possible to obtain an image that was somewhat blurry but quite close to the original. The original image used here has a capacity of about 196 Kbytes, the data capacity of the compressed image (Q factor = 10) is 2,869 bytes, and the capacity of the restoration filter is about 3.6 Mbytes (1.2 Mbytes each for red, green, and blue); with the approximation of the S/N ratio, the capacity of the restoration filter could be compressed to about 53 Kbytes.
If this is added to the 2,869-byte capacity of the compressed image, the total is about 56 Kbytes, giving a compression ratio of about 7, slightly less than that of JPEG compressed with Q factor = 98, while the image quality is no worse. The Wiener filter created here was able to restore the original image completely, and even when the S/N ratio used to construct it was approximated, an image close to the original could still be obtained.
The compression ratio obtained was relatively effective, about 25, and the effect of the S/N-ratio approximation on image quality was small. It was found that to obtain the same image quality by a compression method using only the discrete cosine transform, the compression ratio would be about 20, so the compression effect would be reduced by 25%. This is the result of a subjective image-quality evaluation experiment using a one-to-one (paired) comparison method based on the Thurstone scheme.
Compressed images with the JPEG Q factor varied in 10 steps from 10 to 90 (the compression ratio ranging from about 90 down to 5) were prepared and compared one-to-one with the image compressed by the proposed method; the image quality was evaluated by 40 subjects. Fig. 7 shows the image with a compression ratio of 20 whose quality was judged comparable to that of the proposed method.
VI. CONCLUSION
Discrete Cosine Transformation (DCT) based image data compression considering image restoration has been proposed: an image data compression method based on DCT featuring an image restoration method. DCT image compression is widely used but suffers from several major image defects. In order to reduce the noise and distortions, the proposed method expresses a set of parameters for the assumed distortion model based on an image restoration method.
The results of the experiment with Landsat TM (Thematic Mapper) data of Saga show good image compression performance in compression factor and image quality; namely, the proposed method achieved a 25% improvement of the compression factor compared with the existing DCT method, with almost comparable image quality between the two methods.
VII. FUTURE RESEARCH WORKS
Further research is required on the applicability of the proposed data compression method to other remote sensing images.
ACKNOWLEDGMENT
The author would like to thank Prof. Dr. Hiroshi Okumura and Prof. Dr. Osamu Fukuda of Saga University for their valuable comments and suggestions.
"Computer Science",
"Environmental Science"
] |
Design of power control system for student dormitory
This design takes the residential building of a boarding school as its research object. The STC89C52RC single-chip microcomputer is used as the main control chip, and a clock chip, temperature chip, smoke collection module, A/D conversion, LED digital tube display circuit, matrix keyboard, infrared wireless remote control and other peripherals are used to control the power system and the access control system. This system is applicable to all kinds of colleges and universities that manage students' electricity consumption strictly. It can automatically control the electricity consumption of student dormitories according to the school's management regulations, and can automatically respond to emergencies such as fire.
Introduction
With the sustained and rapid development of the economy and the remarkable improvement of people's living standards, boarding students' requirements on the management of electricity use are also increasing. The intelligentization and humanization of the dormitory can improve students' feeling for dormitory life and improve its quality and safety [1]. Relying on human resources alone to achieve the required management level often entails a huge cost. Therefore, the automation of the building power system and access control system is particularly important. This design realizes the basic functions of such a system at low cost; it can be extended according to need, and has a certain market prospect and promotion value.
System design scheme
The design uses the STC89C52RC as the core control chip. In combination with the work and rest time regulations of the student dormitory, it uses a clock chip as the time base of the power system control and an 8-bit common-anode 7-segment digital tube as the display circuit, through which the date, clock and temperature information can be displayed, and it drives the corresponding relays through a drive circuit to meet the requirements of controlling each circuit (LED lamps display the working status of each relay). The design can also start up and shut down the dormitory access control system, and uses the smoke sensor module, photosensitive resistor, A/D conversion and other circuits to realize the emergency plan.

Hardware design
Minimum system design of single chip computer
The STC89C52 is a low-power, high-performance CMOS 8-bit microcontroller produced by STC, with full-duplex serial port communication [2]. The minimum system is mainly composed of the power supply, reset circuit, oscillation circuit, etc. Since many external devices are connected to the P0 port of the single-chip computer in the later stage, an 8 × 10K pull-up resistor network is used here to enhance its driving ability and ensure accuracy and stability.
Power circuit input and output design
The control board of this design adopts a DC 5V power supply, using a power adapter with input Vi of AC 220V and output Vo of DC 5V directly. Because the STC89C52RC single-chip microcomputer is the core control element and is easily subject to electromagnetic interference, several 0.1uF capacitors are connected in parallel to enhance the filtering ability of the power supply circuit as a protective measure and improve its anti-interference ability.
LED display
In order to visually display the current time, and thus accurately judge the current working state of the controller, an 8-bit red common-anode LED digital tube is chosen to display the current time.
Clock circuit
The DS1302 clock chip is a low-power, high-performance real-time clock chip launched by Dallas Semiconductor, with 31 bytes of static RAM attached. It transmits multiple bytes of clock signal and RAM data in burst mode and uses an SPI three-wire interface to synchronize with the MCU [3]. The design is powered by the filtered DC 5V supply of the power adapter. To ensure that the chip can retain its data in case of power failure and continue to work when the supply is restored, a DC 3V button battery is used as the backup power supply.
Relay control circuit
Because the single-chip microcomputer system is a DC 5V weak-current control system, while the dormitory building is powered by AC 220V, 50Hz mains current, the single-chip microcomputer cannot directly control the power system and access control system of the dormitory building. In addition, because a boarding school dormitory building has many rooms and the whole building consumes a large amount of power, reliable isolation technology must be used to separate the mains part from the controller circuit. This design uses optocoupler isolation technology here.
Matrix keyboard circuit
The matrix keyboard is a keyboard group laid out as a matrix, used as external equipment of the single-chip computer. When there are many keys, they are usually arranged in matrix form to reduce the occupation of I/O ports. The keyboard is designed to adjust the manually input DS1302 clock, and also to manually control the access control and power system in case of emergency. A matrix keyboard is used instead of connecting keys directly to the P3 pins of the single-chip computer, not only to save I/O ports but also to prepare for later expansion of functions.
A/D conversion circuit
A/D conversion is a circuit that quantizes an analog or continuously changing quantity and converts it into a corresponding digital quantity. Since smoke sensors, photoresistors, etc. are used in this design, the single-chip microcomputer needs to convert the analog signals collected by the smoke sensor module, photoresistors and other sensors into digital signals through the A/D conversion chip for analysis, so that the controller can react. In this design, an AT24C02 chip is used as the external EEPROM memory; when necessary, it can save working data that the single-chip computer needs to store, which is convenient to recall.
Sensor circuit
The sensor converts the detected danger signal into an electrical signal (usually an analog value), which is then converted through A/D conversion into a digital signal that the computer can recognize, and a corresponding response is made after analysis. Considering that high-power consumption is likely to lead to sudden fire and other unexpected situations, a smoke sensor module, photosensitive resistor and temperature chip are used as detection sensors in this design, so that an emergency response can be made in time under special circumstances, and the digital tube periodically displays the current temperature.
Infrared wireless receiving circuit
Infrared wireless remote control is a widely used means of communication and remote control in real life. The infrared wireless remote control circuit includes a transmitting circuit and a receiving circuit. This design adopts the infrared wireless remote control mode, decoding directly in the single-chip computer: a common DIY electronic remote controller is used as the transmitter and a VS838 infrared remote control receiver as the receiver, which passes the command signal to the single-chip computer for analysis and execution.
Drawing of control board and relay drive board PCB
In order to save space and resources, the control board in this design uses surface-mount packaging. To improve the anti-interference ability of the PCB design, besides meeting the frequency characteristics of each device, the PCB layout should be as even as possible and large line widths should be used for wiring. The copper-clad areas of the top and bottom layers and the bonding pads are optimized accordingly.
Software design
The system software makes each part of the hardware run by means of programs written for the corresponding modules, divided into a main program and subprograms for each part. In this system, three modes (manual, automatic and infrared wireless remote control) can be used to realize power supply/power-off and door opening/closing at the corresponding times.
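Although the published design implements this logic in 8051 firmware, the decision rules can be sketched in a few lines of Python. The thresholds, curfew times and relay names below are illustrative assumptions, not values from the paper.

```python
from dataclasses import dataclass

# Illustrative values only; real thresholds and curfew times would come
# from the school's regulations and the particular sensors used.
SMOKE_LIMIT = 600            # hypothetical A/D reading from the smoke module
TEMP_LIMIT = 60.0            # degrees Celsius from the temperature chip
LIGHTS_OFF = (23, 0)         # curfew: power off at 23:00
LIGHTS_ON = (6, 30)          # power restored at 06:30

@dataclass
class Readings:
    hour: int
    minute: int
    smoke: int
    temp_c: float
    manual_override: bool = False   # matrix keyboard / IR remote input

def control(r: Readings) -> dict:
    """One pass of the main loop: decide relay states from clock and sensors."""
    emergency = r.smoke > SMOKE_LIMIT or r.temp_c > TEMP_LIMIT
    curfew = (r.hour, r.minute) >= LIGHTS_OFF or (r.hour, r.minute) < LIGHTS_ON
    return {
        "power_relay": r.manual_override or (not curfew and not emergency),
        "door_unlocked": emergency or not curfew,   # doors open during a fire
        "alarm": emergency,
    }

print(control(Readings(23, 30, smoke=100, temp_c=24.0)))  # curfew: power off
```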
Conclusion
As an intelligent building electricity system, the power control system can monitor the temperature, smoke and other conditions of the equipment at all times, and can respond to emergencies such as fire. In addition, the control mode of this design is very flexible and can meet the control requirements of the electrical system in student dormitories. A series of low-power devices is adopted, keeping the cost low. The system not only saves operating cost effectively, but also strengthens the management of the electrical and access control systems in the student dormitory and ensures safe use.
"Computer Science"
] |
Quantum entropy and exact 4D/5D connection
We consider the AdS2/CFT1 holographic correspondence near the horizon of rotating five-dimensional black holes preserving four supersymmetries in N = 2 supergravity. The bulk partition function is given by a functional integral over string fields in AdS2 and is related to the quantum entropy via Sen's proposal. Under certain assumptions we extend the idea of equivariant localization to non-rigid backgrounds and show that the path integral of off-shell supergravity on the near-horizon background, which is a circle fibration over AdS2 × S2, reduces to a finite-dimensional integral over nV + 1 parameters CA, where nV is the number of vector multiplets of the theory, while the C0 mode corresponds to a normalizable fluctuation of the metric. The localization solutions, which rely only on off-shell supersymmetry, become, after a field redefinition, the solutions found for localization of supergravity on AdS2 × S2. We compute the renormalized action on the localization locus and show that, in the absence of higher derivative corrections, it agrees with the four-dimensional counterpart computed on AdS2 × S2. These results, together with possible one-loop contributions, can be used to establish an exact connection between five- and four-dimensional quantum entropies.
Introduction
In string theory, or in any consistent quantum theory of gravity, we should be able to describe a black hole as an ensemble of quantum states. The statistical entropy of the black hole, or simply the quantum entropy, is given by the Boltzmann formula S = ln d(Q), (1.1)
with d(Q) the number of states with charge Q. In the thermodynamic limit, or large charge regime, the expression above is well approximated by the famous Bekenstein-Hawking area formula [1][2][3][4][5][6] S ≃ A/4, (1.2) which gives the semiclassical, leading contribution to the black hole's quantum entropy. Hawking's formula is in a sense very general and universal, and therefore does not say much about the microscopic details of the theory. On the other hand, for extremal black holes, which have an AdS2 near-horizon geometry, finite charge corrections to the area formula can be used to test the holographic correspondence [7] beyond the thermodynamic limit and to infer details of the dual quantum theory. In this sense it is of great interest to compute finite charge corrections to the entropy and compare them, for example, with known contributions from BPS state counting. Sen's proposal [8, 9] relates the quantum entropy of an extremal black hole to a path integral of string fields over AdS2 with Wilson line insertions at the boundary. Via the AdS2/CFT1 correspondence, it counts the number d(Q) of ground states of the dual conformal quantum mechanics in a particular charge sector Q. The entropy is then given by the statistical formula S = ln d(Q). By putting the boundary of AdS2 at finite radius we generate an IR cutoff [10] which can be used to extract relevant information via holographic renormalization. This definition respects all the symmetries of the theory and reduces to the Wald formula in the corresponding limit of low curvatures or large horizon radius.
The quantum entropy function constitutes a powerful tool to compute finite charge corrections to the area law, which can then be compared to microscopic calculations. For many examples of supersymmetric black holes in both four-dimensional N = 4 and N = 8 string theories there are microscopic formulas for an indexed number of BPS states [11][12][13][14][15][16][17][18][19], valid in a large region of the charge configuration space. Using either the Cardy formula or an asymptotic expansion in the large charge limit [20, 21], the microscopic index agrees with the exponential of the Wald entropy. Additional subleading corrections can then be compared with those obtained via the AdS2 quantum entropy framework. For instance, one-loop determinants of fluctuations of massless string fields over the attractor background give logarithmic corrections to the area formula that are in perfect agreement with the microscopic answers [22][23][24].
Despite all this success, the techniques involved are quite limited, which makes the computation of further perturbative corrections an extremely difficult problem. However, for supersymmetric theories we can hope to use localization to compute all of them exactly. At least for rigid supersymmetric theories the principle is quite simple. We deform the original action by adding a Q-exact term of the form tQV, where Q stands for some supersymmetry of the theory and the functional V is chosen such that Q²V = 0. Then,
using the fact that both the action and the deformation are Q-invariant, it can be shown that the path integral does not depend on t. In the limit t → ∞ the path integral therefore collapses onto the saddle points of the deformation, and the semiclassical approximation becomes exact. This explains the concept of localization. This technique has been used extensively, and with great success, to compute exactly many observables in non-abelian gauge theories defined on a sphere [25, 26] and, recently, on many other cousins of these spaces. Recently the same technique was applied with great success to supergravity on AdS2 × S2 [27] in the context of black hole entropy counting. A spectacular simplification was observed, in that only a particular mode of the scalar fields was allowed to fluctuate, with the other fields fixed to their attractor values. We say that the path integral localized over a finite-dimensional subspace of the configuration space. The renormalized action has a very simple dependence on the prepotential of the theory and is a function of nV + 1 parameters C^I which have to be integrated. More recently, in [28], these results were applied to four-dimensional big black holes in toroidally compactified IIB string theory. The microscopic degeneracy, given as a Fourier coefficient of a Jacobi form, can be rewritten using the Rademacher expansion and then compared with the gravity computation. The leading term of this expansion was reproduced exactly from these considerations. The nonperturbative corrections to this result, possibly coming from additional orbifolds, are more subleading, rendering the agreement between microscopics and macroscopics almost exact.
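For completeness, the t-independence invoked above is a one-line computation. The following standard manipulation is a sketch: it assumes that Q is a symmetry of the measure as well as of the action, and that V decays fast enough for the total Q-variation to integrate to zero,

```latex
\frac{d}{dt}Z(t)
  = \frac{d}{dt}\int \mathcal{D}\phi\; e^{-S[\phi]-t\,QV}
  = -\int \mathcal{D}\phi\; (QV)\, e^{-S[\phi]-t\,QV}
  = -\int \mathcal{D}\phi\; Q\!\left(V\, e^{-S[\phi]-t\,QV}\right)
  = 0 ,
```

where QS = 0 and Q²V = 0 were used to pull the integrand into a total Q-variation.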
It would be interesting, in view of the AdS2/CFT1 correspondence, to test these ideas in other examples. The study of higher-dimensional black holes in this context is of particular interest for two main reasons. First, there is an interesting connection relating the microscopic partition functions of four- and five-dimensional black holes, called the 4d/5d lift [29][30][31][32]. It would be very important to understand this connection from a bulk point of view at the quantum level. For instance, the microscopic partition functions of four- and five-dimensional black holes in toroidally compactified string theory are the same. Since the quantum entropies have to agree, one expects the five-dimensional theory to "reduce" to four dimensions exactly. Secondly, we want to understand how localization works in the presence of gravity, that is, on a non-rigid background. Since the four- and five-dimensional answers are related, it is expected that some mode of the five-dimensional metric is left unfixed. As a matter of fact, the near-horizon geometry of a supersymmetric five-dimensional black hole has the form of a circle fibered over AdS2 × S2 [35] (which we denote as AdS2 × S2 ⋉ S1). The fiber, which carries angular momentum, gives rise to a U(1) gauge field after dimensional reduction. Rigid supersymmetric localization is quite well understood; localization on non-rigid backgrounds, however, constitutes a new challenge and an interesting problem from a technical point of view.
At the level of the two-derivative supergravity action, the Bekenstein-Hawking entropy of the five-dimensional BMPV black hole [36] equals that of the four-dimensional supersymmetric black hole after identifying the five-dimensional angular momentum with four-dimensional electric charge. For N = 8 black holes in toroidally compactified string theory this equality should hold even at the quantum level, since the microscopic answers in 5d [11] and 4d [17] are the same. However, in the case of N = 4 black holes the 4d/5d lift is non-trivial and the equality of quantum entropies no longer holds, already at the two-derivative level [37], though always in agreement with the microscopic answers. In view of the existing results for four-dimensional black holes in N = 8 string theory [28], we give support for an exact bulk derivation of this property. We will find that the renormalized actions of the five- and four-dimensional theories are the same.
This work therefore has two fundamental purposes: to compute the partition function of supergravity on AdS2 × S2 ⋉ S1 using localization techniques, and to establish a quantum version of the 4d/5d lift from a bulk perspective. Instead of reducing the theory down to AdS2, as in other perturbative computations [22][23][24], we consider the five-dimensional theory and apply localization to the off-shell N = 2 theory. Our work focuses on the perturbative part of this computation, that is, on finding the saddle points of the localization action. We compute the renormalized action on the localization solutions in the case where higher derivative corrections are absent, which is appropriate for five-dimensional N = 8 black holes.
The use of localization in supergravity on AdS2 × S2, even though not fully understood, has produced remarkable results. As found in [27], the scalar in the compensating vector multiplet of four-dimensional N = 2 off-shell supergravity is left unfixed by the localization equations. This means that, from a five-dimensional perspective, we expect some mode of the metric to be left unfixed, namely the dilaton that measures the size of the fiber. In other words, we need to consider localization on a non-rigid background. While it is straightforward to show localization in rigid supersymmetric theories, this is not the case when the background itself is dynamical, as in supergravity. The problem resides in the fact that it is very difficult to construct an exact deformation, if such an object exists, that is both gauge invariant and background independent. For rigid theories we usually pick a Killing spinor whose associated Killing vector generates a compact symmetry of the background; this is then enough to show exactness of the deformation. In general this choice of Killing spinor breaks the symmetries of the background, and therefore the deformation cannot be diffeomorphism invariant. We explain this in more detail in section 3. In supergravity, however, this only makes sense in regions of the configuration space where we can use a partially fixed background. In these regions we can use a partially fixed Killing spinor that still generates a compact symmetry. For instance, we will show that by fixing the four-dimensional metric to be AdS2 × S2, while leaving the fiber off-shell, it is still possible to construct an exact deformation that can be used to localize the theory. Even though our understanding of localization in supergravity is only partial, our belief is that supergravity, at least on AdS2 spaces, localizes, that is, there are just a finite number of modes that capture all quantum corrections. This is a strong statement, but it is quite clear from the microscopic answers that something similar might be happening.
In this sense we adopt a different approach in this work. We start from an ansatz for the metric and do the same for the Killing spinor. To be able to use localization we need a fermionic symmetry δ that generates a compact bosonic symmetry. More specifically, we need
$$\delta^2 = \mathcal{L}_v + G, \qquad (1.3)$$
where $\mathcal{L}_v$ is the Lie derivative along a Killing vector v and G denotes gauge transformations.
The operator δ is known in differential geometry as the twisted de Rham operator. As we will see later, in order for δ to act equivariantly, that is, in the sense of (1.3), we need to impose certain conditions on the fields. In order to use localization we need an off-shell realization of supersymmetry in five dimensions. A beautiful construction is given by the N = 2 five-dimensional superconformal formalism developed recently in [38][39][40][41]. Although our interest is in BPS black holes in N = 4, 8 theories, for which we have microscopic answers, we will use the N = 2 formalism in which these black holes can be embedded.
The localization solutions, presented in section 5, look very complicated from a five dimensional perspective. A great number of fields, both from the Weyl multiplet and from the vector multiplets, are left unfixed by the localization equations. The hypermultiplet fields, however, remain fixed to their background values; this is only an assumption, since we do not have an off-shell representation of their supersymmetric variations. After a field redefinition, these solutions are recognized to be the solutions found in [27, 42] for localization of supergravity on AdS2 × S2. They are parametrized by n_V + 1 parameters C^A, with n_V the number of vector multiplets, and label normalizable fluctuations of the four dimensional scalar fields. These four dimensional scalar fields result from a combination of the five dimensional scalars σ together with the fifth component χ of the gauge fields and a mode, which we denote as α, coming from the auxiliary antisymmetric field T_ab in the Weyl multiplet. At asymptotic infinity α is related to the five dimensional angular momentum, J_ψ ∝ sinh(α). The reduced euclidean theory has SO(1,1) R-symmetry, which explains the use of paracomplex scalars X± (euclidean N = 2 supersymmetry in four dimensions has SO(1,1) R-symmetry [43], and the vector multiplet scalars are "charged" ± under this non-compact R-symmetry). The localization equations leave unfixed the "real" part of the paracomplex scalar fields, which in hyperbolic coordinates, in which the AdS2 metric is written as ds² = dη² + sinh²(η) dθ², acquires a spacetime dependence of the form (*) + C/cosh(η), where (*) denotes the on-shell value. On the other hand the dilaton Φ, which measures the size of the five dimensional circle, also combines with α to give an additional paracomplex scalar.
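The precise combinations appear in section 5; our schematic reading, inferred from the structures in (5.66) and (5.59) rather than quoted from an equation of the text, is
\[
X^I_\pm \;\sim\; e^{\pm\alpha}\left(\sigma^I \pm \chi^I\right), \qquad X^0_\pm \;\sim\; \Phi\, e^{\pm\alpha}\,.
\]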
This combination gives an extra mode, parametrized by an arbitrary constant C0, that also has to be integrated out. As in [27] these modes can fluctuate only if certain auxiliary fields are also allowed to fluctuate above their attractor values. The localization equations show that any dependence on the five dimensional coordinate drops out, rendering the 5d/4d reduction exact. We should nevertheless be cautious about possible contributions from Kaluza-Klein modes in the one-loop determinants; we do not consider this problem here. Note that the localization analysis uses off-shell supersymmetry and is therefore independent of the particular details of the action. As explained later in section 5.1, to guarantee that the supergravity action is invariant under the fermionic symmetry δ we need to add appropriate boundary terms, Wilson lines in this case. These boundary terms are important to guarantee a consistent variational principle. These Wilson lines are different from the electric Wilson lines used in [8, 9, 27], as they do not carry explicit information about the five dimensional charges, as one could have expected. Nonetheless, after some algebra, the renormalized action shows dependence on the five dimensional charges in a way that is consistent with the four dimensional results of [27]. The final answer for the quantum degeneracy d(Q), in the absence of higher derivative terms, is a finite dimensional integral over n_V + 1 variables,

d(Q) = ∫ ∏_I dφ^I M(φ) e^{S_ren(φ, Q, J)},

where M denotes an effective measure on the space of the φ^I, Q and J are the charges and angular momentum respectively, and the renormalized action is cubic in the potentials, with coefficients given by a completely symmetric constant matrix C_IJK. After a suitable analytic continuation of φ0 the renormalized action matches the four dimensional counterpart, as expected from the 4d/5d lift. The measure M should in principle be computed from the one-loop determinants. However, since the localization equations are valid only in a region of the full configuration space, we do not have access to all fluctuations orthogonal to the localization locus but only to a subset. The paper is organized as follows. In section 2 we review the quantum entropy function formalism. In section 3 we explain the technique of equivariant localization, starting from finite dimensional integrals and then turning to the infinite dimensional case. In section 4 we review the five dimensional superconformal formalism: we introduce the supersymmetric variations of the various supermultiplets, the lagrangian and also the full BPS attractor equations. Finally, in section 5 we perform localization of the supergravity theory and compute the renormalized action on the localization locus.
2 Quantum entropy function and AdS2/CFT1 correspondence

The quantum entropy function [8][9][10][44], based on the AdS2/CFT1 correspondence, is a proposal for the quantum entropy of an extremal black hole. By quantum entropy we mean a generalization of Wald's formula that captures the entropy of a black hole described as an ensemble of quantum states. At least for BPS black holes, for which there are precise microscopic answers, it is believed that such a formula might exist. We stress that, notwithstanding the microscopic answer being an index, it has been argued in [10, 20] that for black holes preserving at least four supercharges index equals degeneracy for the near horizon degrees of freedom.
The quantum degeneracy d(q), where q labels the charges of the black hole, counts the number of ground states of the conformal quantum mechanics and is related, via the AdS/CFT correspondence, to the partition function of supergravity on AdS2 with Wilson lines inserted at the boundary:

d(q) = ⟨ exp[ −i q_I ∮ dθ A^I_θ ] ⟩^finite_AdS2, (2.1)

where the geometry has euclidean signature.¹⁰ The Wilson line insertions can be understood in two different but equivalent ways. From a holographic point of view, the electric part of the gauge fields carries a non-normalizable component at asymptotic infinity, that is, in coordinates where the boundary is at r → ∞ the gauge field goes as A ∼ er, and therefore via the usual bulk/boundary correspondence dictionary [45] these modes have to be fixed, while the normalizable components, the chemical potentials, have to be integrated out. This is in contrast with higher dimensional examples like AdS4. The microcanonical ensemble is natural from this point of view. But we can also see these Wilson lines as a requirement for a consistent formulation of the path integral. Without the Wilson lines the equations of motion for the gauge fields are not obeyed at the boundary, because they carry a non-normalizable component.¹¹ Much like the Gibbons-Hawking terms, the path integral on AdS2 requires appropriate boundary terms, Wilson lines in this case, that restore the validity of the equations of motion throughout all of space. We develop this idea further in section 5.1.
This formalism surpasses in many ways other attempts to compute quantum corrections to the entropy. The success comes essentially from two basic facts. Firstly, there is a natural UV cutoff, the string scale l_s. Secondly, it introduces via holographic renormalization an IR cutoff, which is essential for extracting relevant information even at the classical level. Besides, this formalism respects all the symmetries of string theory and reduces to Wald's formalism in the limit of low curvatures or large horizon. To see this consider the following simple example. The relevant near horizon data of an extremal black hole is given

¹⁰ As usual we perform a Wick rotation t → −iθ. In other instances we have learned to take t → iθ such that the path integrand becomes e^{−S} with S positive, providing a convergent integral. Here, however, the euclidean action is already divergent due to the infinite volume of AdS2 and it is the renormalized action that provides the correct damping exponential. In short, Ren e^S = e^{−S_ren}.
¹¹ In other words, the boundary terms that arise from varying the action do not vanish at asymptotic infinity.
by the metric

ds² = v ( (r² − 1) dθ² + dr²/(r² − 1) ), (2.2)

with conformal boundary at r → ∞, and by gauge fields and scalar fields of the form A^I = −i e^I (r − 1) dθ and X^I = X^I_* respectively (2.3). The leading contribution to (2.1) comes from evaluating the action on the configuration described above. Since AdS2 has infinite volume we introduce a cutoff at r = r0 and discard the terms that are linearly divergent in r0; Ren denotes renormalization by appropriate boundary counterterms that remove the r0 dependence. The finite remainder is just the exponential of the Wald entropy.
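Schematically, and this is our paraphrase of the holographic renormalization step rather than a formula recovered verbatim from the text, the evaluation proceeds as
\[
S(r_0) \;=\; c_1\, r_0 \;+\; \mathcal{S}_{\rm Wald} \;+\; \mathcal{O}(1/r_0)\,, \qquad \operatorname{Ren}\, e^{S} \;\equiv\; e^{\mathcal{S}_{\rm Wald}}\,,
\]
with c_1 a constant that is removed by the boundary counterterms.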
If we want to compute quantum contributions we look at normalizable fluctuations order by order in perturbation theory. This is essentially the work done in [22][23][24][37]. The authors consider the reduced theory on AdS2 and look at normalizable fluctuations of the background (2.2) and (2.3). They compute, using the heat kernel method, one-loop determinants in the two derivative supergravity action. These terms give corrections of order ln(A) to the entropy, where A is the horizon area in appropriate units. When performing localization we will instead consider the path integral defined on the five-dimensional space AdS2 × S2 ⋉ S1, rather than reducing all the fields down to AdS2. This method is favorable for a number of reasons. However, in the context of localization it requires the addition of appropriate boundary terms. Some of them arise by demanding that the equations of motion be obeyed also at the boundary, as explained before; others are necessary to restore gauge invariance. As we will explain later in section 3, these terms are necessary for invariance of the action under supersymmetry, an essential ingredient for localization. This condition will allow us to define in five dimensions an entropy function à la Sen [46]. Currently available attempts [47] circumvent this problem by reducing to four dimensions, which is clearly unsatisfactory in view of our main goal.
3 Localization principle
For illustrative purposes consider the example of a finite dimensional integral I(t) over a compact manifold M, with integrand the exponential of t times a function f(m). If f(m) has a finite number of non-degenerate critical points p ∈ {f′(m) = 0}, a saddle point approximation gives an asymptotic expansion in t whose coefficients a_l(p) can be computed in terms of f(m).
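Schematically, in our transcription rather than the paper's own normalization, the setup is
\[
I(t) \;=\; \int_{\mathcal M} d\mu\; e^{\,i t f(m)} \;\sim\; \sum_{p\,:\,f'(p)=0} e^{\,i t f(p)} \sum_{l \ge 1} a_l(p)\, t^{-l}\,,
\]
where dμ is a fixed measure on M.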
For a certain class of integrals an extraordinary simplification occurs. If M is a symplectic manifold and f(m) is the hamiltonian of an S1 action on M, then the "higher loop" corrections, that is, the l > 1 terms, vanish and the saddle point approximation becomes exact. This is the simplified version of the Duistermaat-Heckman theorem [48]. In such a case the integral localizes exactly over the critical points of f(m), which are also the fixed points of the S1 action on M generated by the associated vector V, where 2n is the dimension of M. The fact that the integral only depends on neighborhood data of the fixed points is commonly referred to as localization.
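For reference, the standard form of the Duistermaat-Heckman formula, in our transcription of the classical result with e(p) the product of the weights of the linearized S1 action at the fixed point p, reads
\[
\int_{\mathcal M} \frac{\omega^{n}}{n!}\; e^{-t f} \;=\; \left(\frac{2\pi}{t}\right)^{\!n} \sum_{p\,:\,V(p)=0} \frac{e^{-t f(p)}}{e(p)}\,,
\]
where ω is the symplectic form on M.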
To better understand the mechanics of localization we need to study equivariant cohomology. This is particularly well understood for finite dimensional integrals. A good mathematical reference is [49], while [50] is more convenient from a physicist's point of view.
Without entering into too many mathematical details, we briefly explain localization of finite dimensional integrals. The idea behind localization is that it is possible to define on a manifold M an operator D which has the property that it squares to an isometry of the space. In other words,

D² = L_V,

where L_V is the Lie derivative. On the space of forms the operator D, also called the twisted de Rham differential, has the form

D_V = d + i_V,

where d is the de Rham differential and i_V is the contraction operator by the vector V.
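The square follows from Cartan's magic formula: acting on forms,
\[
D_V^{\,2} \;=\; (d + i_V)^2 \;=\; d\, i_V + i_V\, d \;=\; \mathcal{L}_V\,,
\]
using d² = 0 and i_V² = 0.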
Since the vector generates an isometry of the manifold we have L_V g_mn = 0, that is, V is a Killing vector. This operator then allows one to define a cohomology on the space of forms which are left invariant under the isometry generated by V, that is, on the space of forms α for which L_V α = 0. These forms are also called equivariant forms. It turns out that the integral over M of a closed form α, that is, one with D_V α = 0, localizes on the fixed points of the action of V. To show localization we consider the auxiliary integral

I(t) = ∫_M α e^{−t D_V β},

parametrized by t, with β an equivariant differential form. Since both α and D_V β are closed under D_V, we can show by integration by parts that I(t) is independent of t; in other words, D_V β is an exact deformation. A clever choice for β is to take

β = g(V, ·), that is, β_µ = g_µν V^ν,
with g_µν a metric on the manifold M. The property L_V g_µν = 0 ensures that β is an equivariant form. This is a clever choice because the "deformation" D_V β = dβ + g(V, V) contains the function g(V, V) = |V|², which is positive everywhere on M except at the fixed points of V, where it vanishes. This can be used to show that in the limit t → ∞ the integral collapses onto the fixed points of V, rendering the saddle point approximation exact. This is the equivariant localization principle.
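To make the integration-by-parts step explicit, a standard manipulation rather than a computation specific to this paper:
\[
\frac{d}{dt} \int_{\mathcal M} \alpha\, e^{-t D_V \beta} \;=\; -\int_{\mathcal M} \alpha\, (D_V\beta)\, e^{-t D_V \beta} \;=\; -\int_{\mathcal M} D_V\!\left(\alpha\, \beta\, e^{-t D_V \beta}\right) \;=\; 0\,,
\]
where the second equality uses D_V α = 0 and D_V(D_V β) = L_V β = 0, and the last follows because the top-form component of a D_V-exact equivariant form is d-exact and therefore integrates to zero on the compact manifold M.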
The same idea can be applied to infinite dimensional integrals. The idea is to extend the properties of the operator D to the space of fields. Since it mixes forms of even and odd degree, it behaves much like a supercharge in supersymmetric field theories, which sends bosonic to fermionic fields and vice-versa. The twisted de Rham operator becomes a functional operator and can be identified with the action of a real supersymmetry transformation, while the analog of a closed equivariant form is given by a supersymmetric functional. By the same token we can deform the integral by an exact equivariant functional and show localization of the theory.
To make things simple consider the case of one-dimensional N = 1/2 supersymmetric quantum mechanics on a circle with period T [50].
There exists a supersymmetry S that takes the boson to the fermion and vice-versa,

S X = Ψ, S Ψ = Ẋ,

with τ the coordinate on the circle. These transformations can be used to define the functional equivariant operator

D = Q + I, with Q = ∫ dτ Ψ(τ) δ/δX(τ) and I = ∫ dτ Ẋ δ/δΨ(τ). (3.12)

It is an easy exercise to show that this operator squares to translations, as expected from the supersymmetry algebra Q² = H, with H the hamiltonian. The space of equivariant functionals is determined by functionals W[X, Ψ] that vanish under the action of D². This immediately gives the condition that the boundary terms generated by τ-translations vanish. Since this should be valid for any X and Ψ, the condition is satisfied if we impose periodic conditions on both the scalars X and the fermions Ψ. In other words, the space of equivariant functionals is the space of functionals with both X and Ψ fields periodically identified on the circle. The localization principle follows analogously: we deform the original integral by adding an exact deformation t DW to the action.
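A convenient choice, our own illustration using the operator D of (3.12) rather than a deformation quoted from the text, is
\[
W \;=\; \int_0^T d\tau\, \Psi\, \dot X\,, \qquad
D W \;=\; \int_0^T d\tau\, \left(\dot X^2 \;-\; \Psi\,\dot\Psi\right),
\]
whose bosonic part is a sum of squares, so in the limit t → ∞ the path integral is supported on configurations with Ẋ = 0.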
Using the fact that the deformed integrand is equivariantly closed, we can show as in (3.7) that the integral does not depend on the parameter t, and consequently the limit t → ∞ can be used to prove localization of the theory on the space of configurations with Ẋ = 0. That is, the theory localizes on constant fields X. Further corrections, which include the contribution from the Kaluza-Klein modes, are one-loop exact. Without much effort the same idea can be applied to higher dimensional supersymmetric theories. In general, localization in rigid supersymmetric gauge theories is quite straightforward as long as there is a fermionic symmetry that squares to a compact Killing symmetry of the background. More generally, there is an odd symmetry δ with the property

δ² = L_v + G_a, (3.18)

where v^µ is a Killing vector and G_a denotes a gauge transformation with parameter a.
With a set of fields that respect this algebra we can easily construct an exact deformation of the physical action: any deformation of the form δW, with W gauge invariant and ∂_v W = 0, will be exact provided the fields respect periodic boundary conditions along the compact v direction. Pestun, in his seminal work [25], gives a beautiful application of this formalism in the computation of Wilson loops in N = 4, 2 SYM defined on S4. In this case he uses a fermionic symmetry which is a combination of a conventional Q-supersymmetry and a special S-supersymmetry. This fermionic symmetry squares to an anti-self-dual rotation of the sphere plus R-symmetry and gauge transformations.
For non-rigid supersymmetric theories it is not known if the same idea can be applied. We do not know how to construct, if it exists, an exact deformation that is both gauge invariant and background independent. In general it is difficult to find an odd symmetry that satisfies the condition (3.18). The case is even worse when there is gravity. However, in a certain region of the configuration space it is possible to realize such an algebra linearly. For instance, the authors of [27] claim to have computed exactly the path integral of N = 2 supergravity on AdS2 × S2. The results are quite astonishing. Assuming that the background remains fixed, they localize the gauge theory sector and find that for each vector multiplet a normalizable fluctuation of the scalars is allowed if the corresponding auxiliary scalar also fluctuates. They have found that the theory localizes on the set of fluctuations of the form X = X* + C/r, K ∝ C/r², with X a scalar and K the auxiliary scalar field, in the coordinates (2.2). Integration over the constants C yields a finite dimensional integral which agrees with the microscopic predictions for 1/8 BPS black holes in N = 8 string theory [28]. Since in general the susy transformations of supergravity do not respect equivariant properties, the strategy that we pursue here is to find the region of configuration space in which those properties are realized. This brings additional constraints on the fields. On this restricted subspace we can deform the path integral and show localization. We believe that in the full quantum gauge fixed theory such a restriction would follow naturally.
4 5D superconformal gravity and near horizon analysis
In this section we introduce the N = 2 off-shell superconformal formalism for five dimensional supergravity. We present the various multiplets and respective supersymmetric transformations. We introduce the lagrangian with supersymmetric higher derivative corrections and present the BPS attractor equations for the AdS 2 × S 2 ⋉ S 1 near horizon geometry of the BMPV black hole.
4.1 Superconformal formalism
The superconformal calculus was originally constructed for N = 2 supergravity in four dimensions [51][52][53], but a formulation in five dimensions was developed only recently [38][39][40][41]. The idea is to construct a supersymmetric theory for the five dimensional conformal group by gauging the global generators and then imposing appropriate gauge fixing conditions. This is similar to the example of a scalar conformally coupled to the Einstein-Hilbert term: by gauge fixing the scalar to a constant we recover Poincaré gravity. One major distinction between the four and five dimensional formulations is that while the first has SU(2) × U(1) R-symmetry, the five dimensional theory has only SU(2) R-symmetry. This means, for instance, that the scalars in the vector multiplets are real.
In the following we give a summary of the content of the various supermultiplets, namely the Weyl multiplet, the vector multiplet, the linear multiplet and the hypermultiplet, and respective supertransformation rules. We follow closely the paper [54] where more details can be found.
Weyl multiplet: the independent fields consist of the fünfbein e^a_µ, the gravitino field ψ^i_µ, the dilatational gauge field b_µ, the R-symmetry gauge fields V^i_µj (an anti-hermitian traceless matrix in the SU(2) indices i, j), a real tensor field T_ab, a scalar D and a spinor field χ^i. The fields V^i_µj, T_ab, D and χ^i are all auxiliary. For the problem we want to solve we set b_µ = 0 and gauge fix the parameters Λ^a_K of the special conformal transformations K_a to zero. The conventional Q- and special S-supersymmetry transformations, parametrized respectively by the spinors ξ^i and η^i, involve the superconformally covariant derivatives D_µ.
Vector multiplet: the vector multiplet consists of a real scalar σ, a gauge field W_µ, a triplet of auxiliary fields Y^ij and a fermion field Ω^i. The superconformal transformations involve Y_ij = ε_ik ε_jl Y^kl and the supercovariant field strength.

Linear multiplet: though it does not play any relevant role in our work, we include the linear multiplet for completeness of the exposition. It consists of a triplet of scalars L^ij, a divergence-free vector Ê^a, an auxiliary scalar N and a fermion field φ^i. The divergence-free condition on Ê^µ can easily be solved by introducing a rank-three antisymmetric tensor E_µνρ via the equation Ê = *dE.
Hypermultiplet: hypermultiplets are usually associated with target spaces of dimension 4r that are hyperkähler cones. The superconformal transformations are written in terms of local sections A^α_i of an Sp(r) × Sp(1) bundle. The covariant derivative contains the Sp(r) connection Γ^α_aβ associated with rotations of the fermions. Moreover, the sections A^α_i are pseudo-real, in the sense that they obey a constraint built from a covariantly constant skew-symmetric tensor Ω_αβ, whose complex conjugate satisfies Ω^αβ Ω_βγ = δ^α_γ. The information on the target space metric is contained in the hyperkähler potential. Note that the hypermultiplets do not exist as an off-shell supermultiplet: the superconformal transformations close only up to fermionic equations of motion.
4.2 The Lagrangian
We present the bosonic part of the Lagrangian, which is essentially the sum of three pieces,

L = L_VVV + L_H + L_VWW.

The first term L_VVV is cubic in the vector multiplet fields, with constants C_IJK, symmetric in all their indices, that encode the different couplings of the fields; the function C(σ) is the contraction C(σ) = C_IJK σ^I σ^J σ^K. The term L_H encodes the lagrangian for the hypermultiplets, while L_VWW, given in (4.10), contains higher derivative corrections with couplings between vector and Weyl multiplet fields. The constants c_I encode the couplings of the higher derivative terms. The symbol e denotes e = det(e^a_µ) = √−g.
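For orientation, and only as a schematic transcription of the standard two-derivative superconformal result rather than the paper's own equation, the vector multiplet lagrangian takes the form
\[
e^{-1} L_{VVV} \;\sim\; C_{IJK}\,\sigma^I \left( \tfrac{1}{2}\, \mathcal D_\mu \sigma^J \mathcal D^\mu \sigma^K + \tfrac{1}{4}\, F^J_{\mu\nu} F^{K\,\mu\nu} - Y^J_{ij} Y^{K\,ij} + \dots \right) \;-\; \tfrac{1}{24}\, e^{-1} \varepsilon^{\mu\nu\rho\lambda\sigma}\, C_{IJK}\, W^I_\mu F^J_{\nu\rho} F^K_{\lambda\sigma}\,,
\]
where the ellipsis stands for couplings to the Weyl multiplet fields T_ab and D that are fixed by superconformal symmetry.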
Note that R and R_ab are respectively the Ricci scalar and tensor, while R_abcd is the superconformal Weyl tensor. Other conventions can be found in the appendix.
In the following we show how to obtain on-shell Poincaré supergravity by integrating out the auxiliary fields. The equation of motion for the auxiliary field D reduces, on the attractor background (4.14), to an algebraic constraint. For simplicity, consider the theory with a unique vector multiplet and without higher derivative corrections, that is, c_I = 0. The function C(σ) becomes C(σ) = σ³. The gauge theory sector of the lagrangian, composed of a scalar σ, a vector W_µ and auxiliary fields Y^ij, becomes, after reintroducing the fermion fields, invariant under rigid superconformal transformations. Due to scale invariance we fix the scalar to a constant. If we further use the attractor equations Y^ij = 0 and T_ab = (4σ)^−1 F_ab (4.14), we obtain a lagrangian which, upon including the gravitino field, is equal to that of pure five-dimensional supergravity. Newton's constant is identified as G_N = σ^−3, so that the Ricci scalar appears with the canonical prefactor (16πG_N)^−1.
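Schematically, as our illustration of the gauge-fixing logic rather than the paper's own equations, the scale fixing works as
\[
e^{-1} L \;\supset\; \frac{C(\sigma)}{16\pi}\, R \;\xrightarrow{\ \sigma \,=\, {\rm const}\ }\; \frac{1}{16\pi G_N}\, R\,, \qquad G_N = C(\sigma)^{-1} = \sigma^{-3}\,,
\]
up to the normalization conventions of this section.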
4.3 BPS attractor equations and near horizon geometry
In this section we present the attractor field configuration that preserves full supersymmetry. The analysis is completely off-shell and therefore it does not depend on the specific higher derivative corrections the theory may contain. To fully determine the black hole attractor background these equations must be supplemented with the values of the charges which depend on details of the higher derivative corrections. For further details we refer the reader to [54].
Since ultimately we are interested in Poincaré supergravity, we want to study the vanishing of the fermionic variations modded out by S-supersymmetry variations. This is achieved by constructing fermionic fields which are invariant under S-supersymmetry, which is basically the approach first outlined in [55]. The resulting attractor equations fix the Weyl, vector and hypermultiplet fields, and the geometry has the form of a circle non-trivially fibered over AdS2 × S2, with T_01, T_23 the only non-vanishing components of T_ab, where (0, 1, 2, 3) are the local Lorentz indices. For T_01 ≠ 0 the line element (4.15) can be rewritten for r ≫ 1 as in (4.18), whose second term is, up to the conformal factor (4v_2 ρ²)^−1, diffeomorphic to flat space. However, for p0 ≠ 1 we have a conical singularity at the origin; requiring smoothness of the solution, we fix p0 = 1. Since the theory is scale invariant we set v = 1/4 for convenience. The geometry is left with only one parameter β ∈ [0, π/2[, defined via equation (4.17).
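As a sketch, consistent with the fibered ansatz (5.18) used later and with the smoothness condition p0 = 1, but not a verbatim reproduction of (4.15), the near horizon geometry is of the form
\[
ds^2 \;=\; v \left[ (r^2 - 1)\, d\tau^2 + \frac{dr^2}{r^2 - 1} + d\theta^2 + \sin^2\!\theta\, d\varphi^2 \right] \;+\; \Phi_*^2 \left( d\psi + B \right)^2,
\]
with B a connection with one leg along the AdS2 electric direction and one along p0 cos(θ) dφ on the sphere.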
The line element (4.15) then describes the near horizon geometry of a rotating black hole with angular momentum J ∝ sin(β). The limiting case β = π/2, or T_01 = 0, has a line element whose first three terms describe a local AdS3, so effectively we have the space AdS3 × S2. If we insist on the identification ψ ∼ ψ + 4π we have the near horizon geometry of a black ring, while for noncompact ψ we have an infinite black string. In this work we will be interested only in the case of a rotating black hole. The AdS3 case, which is very interesting, is postponed to future work.
In summary, we have a one parameter family of geometries, which are locally AdS 2 × S 2 × S 1 , that interpolate between the non-rotating black hole with near horizon geometry AdS 2 × S 3 and the black ring/string with near horizon geometry AdS 3 × S 2 [56].
To find the gauge field attractor configuration we decompose the gauge field into four dimensional and fifth components, A_4d and χ respectively (4.30), respecting the symmetries of the near horizon geometry (4.26). From the components of the field strength F_5d we can construct the five-dimensional gauge field; note that dψ + cos(θ)dφ is a globally defined form on S3. Since the action contains Chern-Simons terms, we want to use globally defined forms: for instance, in the case of black rings the topology of the horizon is S1 × S2, so we can put magnetic flux on the S2, and this requires a careful treatment of the gauge fields, as explained in [54].
4.4 Entropy, angular momentum and electric charges
To compute the entropy, electric charges and angular momentum we can use the usual Noether procedure. Since the theory contains Chern-Simons terms this requires a careful treatment of the various fields. We present the results of [54].
Entropy: the entropy follows from the integral over the horizon 3-surface of the Noether potential associated with space-time diffeomorphisms. This is particularly difficult due to the higher derivative terms, and subtle due to the presence of the Chern-Simons terms. Nevertheless, we will see later in section 5 that the entropy comes out naturally by computing the renormalized entropy function on the attractor background. Its value, given in (4.40) below, is a function of the attractor value σ* of the scalar.
Angular momentum: considering the Noether potential associated with the Killing vector ∂/∂ψ, we compute the angular momentum.

Electric charges: the electric charges are determined by considering the Noether potential associated with abelian gauge transformations.

In the euclidean theory the entropy, angular momentum and electric charges are continued accordingly; note that the angular momentum then carries an imaginary factor i. This is a consequence of the fact that the fiber B becomes real in the euclidean theory while the other gauge fields become imaginary. Once the charges and angular momentum are specified, the attractor background is fully determined.
5 Localization of 5D supergravity on AdS2 × S2 ⋉ S1

To proceed with equivariant localization we need two basic ingredients. Firstly, we need a fermionic symmetry δ that can be used to define a twisted de Rham operator in the sense of (3.18). Secondly, to show localization of supergravity on the asymptotic AdS2 × S2 ⋉ S1 background, we have to ensure that the integrand is equivariantly closed, that is, invariant under the fermionic symmetry δ. Even though for supersymmetric theories defined on compact manifolds the last condition is satisfied by construction, for spaces with boundaries, which is our case, the action functional is equivariantly closed only up to boundary terms. A different but equivalent way to understand this is to observe that the equations of motion for the gauge fields are not obeyed at the boundary, as they carry non-normalizable components. To cure the theory we add appropriate boundary terms that compensate for the anomalous transformations. These terms take the form of five dimensional Wilson lines.
5.1 Boundary terms and Wilson lines
The lack of δ invariance can be restored by adding appropriate boundary terms. For the problem at hand it is enough to consider δ variations of the fields that carry non-normalizable components at the boundary, that is, the gauge fields and the fiber B (4.16).
For illustrative purposes consider the model with a single gauge field and Lagrangian

L = F ∧ ⋆F + α F ∧ ⋆T + β A ∧ F ∧ F, (5.1)

in which the two derivative sector of our theory fits naturally. A variation of L under A → A + δA (note that δA is an anticommuting field, though this analysis is independent of the commuting character of the variation) gives bulk plus boundary terms

δL = 2δA ∧ d⋆F + α δA ∧ d⋆T + 3β δA ∧ F ∧ F + 2d(δA ∧ ⋆F) + α d(δA ∧ ⋆T) + 2β d(δA ∧ A ∧ F) (5.2)

at "order" δA. The last three terms, being total derivatives, give contributions at the boundary. Consequently, to make the action δ invariant we add compensating boundary terms (5.3). To compute these boundary terms we use the attractor values of the fields. Paying careful attention to the orientation chosen, namely dτ ∧ dη ∧ dφ ∧ dθ ∧ dψ = dτ dη dφ dθ dψ, we can show that the boundary term simplifies to a Wilson line (5.4) with coefficient Q̂, where g = lim h(r0) is the induced metric at the boundary of AdS2 with cutoff r0 (2.4), and for the attractor solution (4.34) the value of Q̂ follows directly. Even though the last term in (5.4) cannot contribute to the on-shell renormalized action, since its on-shell value is proportional to the cutoff r0, it can contribute at the quantum level. In four dimensions Q̂ becomes the four dimensional charge, and the boundary term (5.4) reduces to a Wilson line insertion on the thermal boundary as in [27]. The fiber also carries a non-normalizable component, so we need to worry about possible new boundary terms. From a four dimensional point of view the fiber gives rise to an electric field. Small variations of the fiber generate total derivative terms that have to be compensated by boundary terms. We compute these terms by reducing the theory to four dimensions.
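As a check of the Lagrangian (5.1) quoted above, our own algebra rather than the paper's, the Chern-Simons piece indeed produces the corresponding terms in (5.2):
\[
\delta\left(\beta\, A \wedge F \wedge F\right) \;=\; \beta\, \delta A \wedge F \wedge F + 2\beta\, A \wedge d\delta A \wedge F \;=\; 3\beta\, \delta A \wedge F \wedge F + 2\beta\, d\!\left(\delta A \wedge A \wedge F\right),
\]
using d(δA ∧ A ∧ F) = dδA ∧ A ∧ F − δA ∧ F ∧ F; the Maxwell and F ∧ ⋆T pieces work analogously.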
Reducing to four dimensions and studying the Maxwell kinetic term of the second derivative lagrangian, we obtain an additional boundary Wilson line for the fiber. The discussion with higher derivative corrections follows exactly the same recipe; however, the computation of the different Wilson lines is a hard task. For technical reasons we decided to postpone the effects of higher derivative corrections on the computation of the quantum entropy to future work. Nonetheless, we present here the analysis of the boundary terms.
The electric Wilson lines then receive an additional contribution from the higher derivative terms. The computation of the boundary terms for the fiber in the higher derivative lagrangian is not trivial: special attention is needed for the term that couples the hypermultiplet scalar χ to the Ricci scalar. With these boundary terms we can show that the on-shell renormalized action correctly reproduces the Wald entropy. We will come back to this point later.
In summary, closure of the path integrand under δ requires the supergravity action S_sugra to be supplemented with additional boundary terms, as in (5.15). Notice that neither Q̂_I + Q̃_I nor J̃ + Ĵ match the five dimensional electric charges and angular momentum, respectively. However, as we shall see later on, the on-shell renormalized action correctly reproduces the entropy computed using the Noether methods.
5.2 Localization
So far we have not specified the fermionic symmetry δ. As a matter of fact the analysis done in the previous section is equivalent to requiring that the equations of motion be obeyed also at the boundary [10]. This means that the boundary terms (5.15) are independent of the choice of δ.
For the problem at hand we have to consider a fermionic symmetry δ composed of conventional Q- and special S-supersymmetries, parametrized by spinors ξ^i and η^i as in (5.16). Generally the QV deformation breaks most of the isometries of the problem, because we choose to localize with a supercharge parametrized by a particular Killing spinor. That is, by choosing a Killing spinor ξ we are in a sense gauge fixing part of the diffeomorphisms, so this cannot be an exact deformation, at least in supergravity. The susy transformations of section 4 realize local superconformal symmetry and in general do not close to a circle action. Instead we will look at a region of the configuration space that realizes the equivariant algebra, that is, on which the fermionic symmetry closes to a compact symmetry modulo gauge transformations. The superconformal transformations are very complicated and contain a large number of fields. To make things practical we consider the following ansatz for the five dimensional metric

ds² = g_µν dx^µ dx^ν + Φ² (dψ + B)², (5.18)

with g_µν the four dimensional metric, which asymptotes to AdS2 × S2. We assume that g_µν is independent of the five dimensional coordinate, while both Φ and B remain completely off-shell. Since the geometry is asymptotically a circle times AdS2 × S2, the fermionic symmetry δ is expected to square at infinity to a Killing symmetry of AdS2 × S2 × S1. From the AdS2 point of view, there is a supercharge Q in the near horizon superconformal algebra [57] that squares to a compact symmetry,

Q² = L0 − J, (5.19)

where L0 generates rotations around the origin of AdS2 and J, the two dimensional R-symmetry operator, generates azimuthal rotations on the sphere S2. This was the supercharge used for localization in [27, 42]. However, while in [27, 42] the fermionic symmetry is generated by a Killing spinor of AdS2 × S2, here this is true only asymptotically. Because the localization equations allow the five dimensional metric to fluctuate, as we will show, both ξ^i and η^i in (5.16), the five dimensional susy parameters, will also have non-trivial profiles on the AdS2 space while preserving the geometry at infinity. We stress that this analysis is not exhaustive and a more general ansatz could in principle be taken. For practical
purposes we will only allow a particular mode of the Killing spinor to fluctuate. We will show that the solutions are consistent with this assumption. In the case that the background, along with the other fields in the Weyl multiplet, is kept fixed to the on-shell values, both ξ and η are determined by the vanishing of the susy variation of the gravitino. In the gauge η = 0 there are eight independent Killing spinors; we refer the reader to appendix B. Our choice of spinor generates a Killing vector that in the case α = 0 recovers the symmetry (5.19). Since we expect the parameter ξ not to be completely fixed by the localization procedure, we will use an ansatz in which ξ is dressed by an arbitrary function α(x, ψ) that asymptotes to a constant α*. The boundary conditions are as usual: we fix the non-normalizable modes and integrate over the normalizable ones. This means Φ = cosh(α) + O(1/r).
5.3 Localization in non-rigid background
The strategy that we pursue here is to find a truncation in which the superconformal susy transformations generate an equivariant algebra, in the sense that

δ² = L_v + G, (5.24)

with L_v the Lie derivative and G a gauge transformation. Due to the large number of fields and the complexity of the equations involved, we consider an ansatz for the metric and the susy parameter ξ. By computing second variations of the susy transformations we are led to impose constraints on the fields such that (5.24) is satisfied.
In the following we discuss the fermionic variations of both the Weyl and vector multiplet fields. This discussion is closely related to the off-shell reduction of five dimensional supergravity studied in [58]. However, since we are working in Euclidean space, there are important details that have to be reconsidered. In this discussion we will try to keep the fields as far off-shell as possible.
Weyl multiplet: we assume a Kaluza-Klein decomposition of the gravitino fields, with M and µ five and four dimensional coordinate indices respectively, and i the SU(2) R-symmetry index. The field B_µ is the fiber in the metric (5.18). Since we are considering an off-shell fiber we need to keep the extra gravitino mode ψ̃^i; for a standard reduction, that is, when the fiber does not depend on the five dimensional coordinate, this term is zero. Similarly we write the decomposition of the other fields in the Weyl multiplet. Note that the field T goes to −iT in Euclidean space. Since we are decomposing the five dimensional fields into four dimensional ones, we need to ensure that certain gauge conditions of the superconformal algebra are still preserved. As explained in [58], this amounts to including additional Lorentz and special conformal compensating transformations of the fields.
After some algebra, which is explained in appendix C, the susy transformations of the fields Φ, B and e^i_µ, the four dimensional vielbein, take the form of four dimensional susy transformations with γ̃_µ = e^i_µ γ_i. We would like to stress that most of the susy transformations that we consider here are not supercovariant: additional fermionic terms have to be added to the susy transformations of the fermionic fields. However, since our main interest is in the solutions to the localization equations, we can focus on the bosonic contributions only.
The action of δ² on the fields Φ and B can be computed using a number of properties of the spinor ξ described in the appendix, with the vector V^M given by the corresponding spinor bilinear. We see that neither transformation obeys the algebra (5.24) as it stands. In δ²Φ only the first term corresponds to the action of L_v on Φ, and therefore we need to impose the vanishing of the remaining terms. For δ²B_µ we have a bit more freedom, because the fermionic symmetry can close up to gauge transformations, since B_µ is now a four dimensional vector. In addition, δ²B_µ contains non-linear terms acting on B_µ which have to vanish; among them, terms such as ⟨ξ|ξ⟩ ∂_ψα B_µ must vanish. To solve these constraints we need to impose ∂_ψ(B, Φ, α) = 0, since we want to keep B, Φ and α as independent fields. In addition, there is in δ²B_µ a term proportional to the gravitino's variation δψ_µ. If we impose δψ_µ = 0 together with ψ_µ = 0, we ensure that the vielbeins e^i_µ do not transform under δ². In view of equivariant localization we want L_v to commute with the background, so that in deforming the action by an exact term δW[X, Ψ] we can pull the action of L_v through the functional and show that the deformation integrates to zero. This would not be necessary if we knew how to construct a diffeomorphism invariant and background independent exact deformation. In other words, we need L_v g_µν = 0, that is, the vector V^M should generate an isometry of the four dimensional metric. This can be achieved by imposing the four dimensional gravitino equation δψ_µ = 0. The deformation δW[X, Ψ] may depend on other parameters, like the Killing spinor, so we have to guarantee that these are also invariant under the flow generated by v. From our parametrization (5.23) of the Killing spinor this is not immediately true, as α is an independent off-shell field. To circumvent this problem we also need α to transform under supersymmetry, in such a way that the action of δ² on α is a translation along v, that is, δ²α = v^M ∂_M α.
The deformation δW [X, Ψ] may depend on other parameters like the Killing spinor so we have to guaranty that they are also invariant under the flow generated by v. From our parametrization (5.23) of the Killing spinor this is not immediately true as α is an independent off-shell field. In order to circumvent this problem we also need α to transform under supersymmetry as such that the action of δ 2 on α is a translation along v, that is, The fact that α transforms under supersymmetry is natural from a four dimensional point of view where the theory has an additional SO(1, 1) R-symmetry. As a matter of
fact, the field α joins the scalar Φ to form a paracomplex scalar in the four dimensional theory [43, 58]. As we show soon, the physical existence of α is offset by A_i = T_5i via the condition δψ_µ = 0, so that we are not adding additional degrees of freedom. In view of these results the leftover expression in δ²B_µ must be a gauge transformation, which is trivially satisfied after solving δψ_µ = 0. From this analysis we conclude that in order for the fermionic transformation (5.16) to square to a circle action on the Weyl multiplet fields we need the conditions ∂_ψ(B, Φ, α) = 0 together with δψ_µ = ψ_µ = 0. Let us now solve the four dimensional gravitino equation δψ_µ = 0. It has been solved in [42], giving AdS2 × S2 as the unique solution; however, the authors considered the problem in Minkowski signature. The Euclidean case is not so different, except that some fields have to be analytically continued to imaginary values. For instance, we rewrite the equation δψ_µ = 0 in terms of *T^ij = ½ ǫ^ijkl T_kl and the spin connection ω^0kl_µ of AdS2 × S2; at the on-shell level ω^kl_µ becomes ω^0kl_µ. The field T̃^0 corresponds to the on-shell value of T̃ computed on AdS2 × S2 and has only one component, T̃_01 = 1. The spinor ξ(0) denotes the Killing spinor of AdS2 × S2, that is, ξ for α = 0.
According to the authors of [42] the only solution corresponds to AdS2 × S2, that is, ω^kl_µ = ω^0kl_µ. Assuming V^i_µj = 0, we can then determine the remaining fields (see the appendix for more details). With this result it is now easy to show that equation (5.30) becomes a gauge transformation with J = Φ^−1 cosh(α) and H = Φ^−1 sinh(α), as expected.
Vector multiplet: we now perform a similar analysis for the vector multiplet fields, starting from the relevant susy transformations. Along the standard Kaluza-Klein reduction we decompose the five-dimensional gauge field into a four dimensional component W_µ and a "scalar" W_ψ. Note that this vector is still completely off-shell. We have separated the five dimensional component into two pieces to put in evidence the Wilson line along ψ; this ensures that ∂_ψ(χΦ) = 0 and ∮ dψ W̃_ψ = 0. However, we do not make any other assumption about the coordinate dependence of W̃. After some algebra it is possible to show that the action of δ² on the bosonic fields closes with the gauge parameter

Λ = (σ sinh(α) + χ cosh(α)) cos(θ) + (σ cosh(α) + χ sinh(α)) cosh(η). (5.41)

From the first equation we conclude that η must be "orthogonal" to ξ in the sense that ξ†η = 0. The choice for η (C.6) trivially satisfies this condition after using the property that ξ†γ_ab ξ = 0. The rest of the algebra is already of the required form (5.24). Before proceeding with localization we make a brief summary of what we have done so far. Starting from an ansatz for the metric and Killing spinor, and assuming a particular Kaluza-Klein reduction, we computed the action of δ² on the bosonic fields of both the Weyl and vector multiplets. Since we want δ² to generate a circle flow, this imposes the additional constraints ∂_ψ(B, Φ, α) = 0 and δψ_µ = ψ_µ = 0 on the fields. Note that we need to impose the condition δψ_µ = 0 before using the localization argument.
A similar analysis should also be carried out for the fermionic fields, even though it should
follow just from supersymmetry. We will skip this analysis and proceed to solving the localization equations, whose perturbative analysis requires only the supersymmetric transformations δΨ.
On the field configuration space where δ² acts equivariantly we can add an exact deformation of the form

t δ ∫ Σ_Ψ (δΨ)† Ψ,

where Ψ runs through all the fermions of the theory, and we keep the four dimensional metric fixed to AdS2 × S2 and ψ_µ = 0. From the analysis done before it is easy to show why this is an exact deformation: for any scalar functional W[X, Ψ] the action of δ² is simply ∫ V^M ∂_M W, which vanishes after an integration by parts, which we can perform because ∂_M V^M = 0. The bosonic action that results from this deformation is (δΨ)† δΨ, so in the limit t → ∞ we derive the localization equations δΨ = 0. (5.46)
5.4 Localization solutions
In this section we solve the localization equations for the Weyl and vector multiplet fields under the conditions derived in the previous section.
Weyl multiplet: using the condition ∂_ψ(B, Φ, α) = 0, the equation δψ^i = 0 (C.2) takes the form of a susy transformation of a vector multiplet fermion, up to a couple of imaginary factors. As a matter of fact, it becomes the supersymmetry transformation of the four dimensional compensating vector multiplet fermion [58]. If we denote by F̃ the field strength of the fluctuations δB above the attractor value B*, the equation involves the combinations H = Φ^−1 sinh(α) − tanh(α*) and J = Φ^−1 cosh(α) − 1, both of which vanish at the boundary. If the fields δB, H and J take real values this leads to an infinite number of solutions. To avoid this situation we perform a Wick rotation of the field δB to iδB_E,¹⁹ which does not change the boundary conditions, and take the imaginary branch of V^1_1, that is, V^1_1 = −V^2_2 = iK, with the other components zero; recall that V^i_j is an anti-hermitian traceless matrix in the SU(2) indices.

¹⁹ Analogously we could have considered the complexified version of (δψ)† in (5.44), in the sense that we take F(B)_ij to be a complex field with the reality condition F(B)†_ij = F(B*)_ij − F(δB)_ij, with B* the on-shell value. The resulting action would not be positive definite in this case. However, this can be avoided by integrating over imaginary values of δB_µ.
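Indeed, with the boundary behaviour Φ → cosh(α*) and α → α* implied by (5.23), both combinations vanish at infinity, a check that is ours rather than the paper's:
\[
H \;\to\; \frac{\sinh\alpha_*}{\cosh\alpha_*} - \tanh\alpha_* \;=\; 0\,, \qquad J \;\to\; \frac{\cosh\alpha_*}{\cosh\alpha_*} - 1 \;=\; 0\,.
\]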
Another possibility would be to Wick rotate both H and J. However, this would spoil the reality condition of α, inducing important changes in the localization equations.
Parametrizing the supersymmetry transformations as in (5.49), we construct the bosonic part of the localization lagrangian as a sum of squares; more details about this construction can be found in appendix D. We use notations such that the vector V^a/⟨ξ|ξ⟩ has unit norm. Since the bosonic lagrangian is written as a sum of squares, the localization equations follow directly from the zero locus of each of these squares. For the problem we are considering we need to set both L and J to zero; if this were not the case, we could generate an infinite number of solutions to the localization equations. One possibility would be to consider a space-time dependent analytic continuation of the auxiliary fields as in [42].
Observe that some of the equations are not independent. For instance equation (5.53) comes from equation (5.52) after contraction with the vector V . Analogously, equation (5.55) comes from contraction of (5.54) with the vector V after using equation (5.56).
Under the parametrization (5.49) we can read off the various squares. Equation (5.52) is easily solved to give B̂ = B̂*,
that is, the fiber must be fixed to its on-shell value. From (5.55) we deduce two further equations; the second translates into the fact that the gauge parameter Λ in (5.35) becomes a constant C on the localization locus. The equations that follow from the "master equation" (5.54) have been solved before in [27, 42], with C an arbitrary constant (to be identified with the gauge parameter Λ, as pointed out just before). In terms of the fields Φ and B this gives

Φ = cosh(α), tanh(α) = tanh(α*) + C/cosh(η), K = cosh²(α) C/cosh²(η), B = B*. (5.59)

From here we see that C must be defined in the interval [−1 − tanh(α*), 1 − tanh(α*)]. We proceed with localization and consider the remaining fermionic fields in the Weyl multiplet. The field χ^i has an intricate susy transformation; instead we use the results of [58], where its decomposition in terms of four dimensional fields is presented. Since both δχ^i and δψ^i vanish at the localization locus, this implies that δχ^i_4D = 0, which has a much simpler expression that we use instead.
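As a quick consistency check, ours rather than the paper's: at the boundary η → ∞ the solution (5.59) gives
\[
\tanh\alpha \;\to\; \tanh\alpha_*\,, \qquad \Phi = \cosh\alpha \;\to\; \cosh\alpha_*\,, \qquad K \;\to\; 0\,,
\]
so only normalizable modes are turned on, in agreement with the boundary conditions stated below (5.23).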
In this expression D̃ is defined in terms of the four dimensional Ricci scalar R and the covariant derivative ∇_a. We have used the fact that η̃ = 0 and set V_µ = 0. Note that D̃ vanishes on-shell. Substituting back the value of A_µ, the covariant constancy of the tensor T̃ then determines the solution completely. This finishes the analysis for the fields in the Weyl multiplet.
Vector multiplet: as explained before, we perform an off-shell Kaluza-Klein decomposition of the five dimensional gauge field, with U = χΦ the Wilson line along ψ. To obtain non-trivial solutions to the localization equations we have to analytically continue the field χ to imaginary values. This is consistent with the on-shell solution discussed in section 4.3. From a four dimensional point of view this is a consequence of the fact that N = 2 euclidean supersymmetry has SO(1, 1) R-symmetry [43], so that the vector multiplet scalars are real. The δ variation of the fermion Ω^i in the vector multiplet involves both five and four dimensional tangent space indices, a, b and k, l, m respectively. With the help of the equation δψ = 0 (5.48) and the fact that η̃ = 0, we can rewrite it in terms of F̃, which is taken to be real and denotes the fluctuations of the gauge fields above the attractor background, and of (*), the on-shell value of (σ + γ_4 χ)e^{αγ_4}. The bosonic part of the localization lagrangian can again be written in the form (5.50), with
G_01 = ½ (σ cosh(α) + χ sinh(α) − (*)), G_23 = ½ (σ sinh(α) + χ cosh(α) − (*)). (5.66)

It immediately follows from the localization equations that the r.h.s. does not depend on ψ; so, in order to preserve the periodicity of σ, we must have ∂_ψσ = 0. We therefore conclude that σ must live on AdS2 × S2. The remaining equations are analogous to the system (5.57) and can be solved in terms of an arbitrary constant C̃. We also observe an equation involving v^r, the five dimensional unit vector V^r/⟨ξ|ξ⟩, and F̃ = dA, with A denoting the fluctuations above the attractor value; contraction of this equation with v^r gives

v^m F̃_mn = 0. (5.75)

Before trying to solve these equations, note that the vector V^M has non-vanishing components, so these conditions leave no non-trivial fluctuations unless M contains non-trivial one-cycles. Without entering into details about the topological properties of M, we will assume that this is the case.
5.5 Quantum entropy function
Our task now is to compute the action on the localization solutions. As discussed in section 2, the action suffers from IR divergences due to the infinite volume of AdS2. However, these can be renormalized systematically by introducing appropriate local boundary counterterms.
In [58] the authors not only performed the Kaluza-Klein reduction of the five dimensional off-shell multiplets, but also rewrote part of the five dimensional action in terms of four dimensional fields. Their results are very interesting. They observe that the two derivative lagrangian, together with part of the higher derivative corrections, can be rewritten in terms of four dimensional chiral superspace invariant terms, usually called F-terms. This part of the action can be written in terms of the holomorphic prepotential function

F(X, Â) = a C_IJK X^I X^J X^K / X^0 + b c_I (X^I / X^0) Â (5.79)
with a, b some numerical constants. This type of lagrangian falls into the class of theories reviewed in [63], relevant for BPS black holes in N = 2 supergravity and, more recently, for localization of supergravity on AdS2 × S2 [27]. Interestingly, though, some of the higher derivative terms give unexpected contributions in four dimensions. For instance, they give rise to Gauss-Bonnet type corrections in four dimensions, which have never been written in N = 2 supergravity. Other terms can be written as integrals over the full superspace, usually known as D-terms. This class of terms was extensively analyzed in [64]. Their analysis, however, is not fully complete, as there are a number of terms whose reduction can be ambiguous because of integrations by parts. On the other hand, our analysis in section 5.1 gives a consistent treatment of the boundary terms that are required by the closure of the action under the fermionic symmetry δ.
5.5.1 Absence of higher derivative corrections
In this section we compute the renormalized action for the case c_I = 0, that is, when we do not have higher derivative corrections. Due to the form of the localization solutions it is convenient to introduce paracomplex variables, which are the natural variables in theories with SO(1, 1) R-symmetry. With this parametrization the localization solutions take a simple form. Since no field depends on the fifth coordinate, the Kaluza-Klein reduction is exact. The reduction goes much like in [58], except for the fact that the theory now has manifest SO(1, 1) R-symmetry.
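In these variables the solutions quoted in (5.59) and around (5.66) can be summarized, in our schematic rewriting rather than the paper's own equations, as
\[
\operatorname{Re} X^I \;\equiv\; \sigma^I \cosh\alpha + \chi^I \sinh\alpha \;=\; \left(\operatorname{Re} X^I\right)^{*} + \frac{\tilde C^I}{\cosh\eta}\,,
\]
with the remaining components fixed at their attractor values.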
It is observed that the hypermultiplet lagrangian vanishes exactly on the localization locus, that is, T²|_loc = 0 (5.80), even though we were not able to localize in the hypermultiplet sector. We do not know if this is always true or just accidental. After a field redefinition we can write the relevant non-zero part of the action in terms of a prepotential F(X), with the last two terms written in differential form for ease of reading, and with t = 2χΦ. The total derivatives arise after expressing the Chern-Simons terms in terms of four dimensional quantities. Notice that the action, apart from the total derivatives, has the same form as the one used for localization in [27]; we borrow their results. In the absence of higher derivative corrections the boundary terms are given in terms of the on-shell values of the fields, once more denoted by *. The boundary action contributes not just to the on-shell renormalized action, that is, to the on-shell entropy, but also at the quantum level. The boundary quantum correction, which is linear in C, offsets an equal contribution coming from the bulk action. So overall the renormalized action has, in a Taylor expansion around the attractor background, no linear dependence on C, which is equivalent to saying that the equations of motion are satisfied at C = 0; the quantum parts of these two contributions cancel, as expected. The constant piece, on the other hand, can be written in terms of q_I and J, the five dimensional electric and angular momentum charges respectively, and of e^I_4d and e^0_4d, the corresponding four dimensional electric fields. The bulk renormalized action, in turn, gives a contribution expressed through the four dimensional charges, with X* denoting the on-shell values of the scalar fields. Note that even though a term linear in C appears explicitly in (5.85), the second term of the expression gives another with opposite sign, so overall we do not have a linear C dependence.
The four dimensional charges are related to the five dimensional charges by a proportionality factor. Putting together boundary and bulk contributions we arrive at the final expression (5.86) for the renormalized action, with φ^I = −e^I_4d + C^I and φ^0 = e^0_4d + C^0, where the index I runs over the number of vector multiplets in the theory. On the other hand, the renormalized action for the four dimensional N = 2 theory, derived in [27], is

S_ren(φ) = −π q_I φ^I + 4π Im F((φ^I + i p^I)/2), (5.87)
where F(X) is the prepotential of the theory and q_I, p^I are the four dimensional electric and magnetic charges respectively. Here the index I runs over the range I = 0 . . . n_V. Note that in four dimensions we can turn on magnetic fluxes, which appear in the renormalized action as the magnetic charges p^I. However, in five dimensions, for a horizon with S3 topology this cannot happen. In the case of the black ring the horizon has S1 × S2 topology, which allows for dipole magnetic charges [54]. Under the analytic continuation φ^0 → iφ^0 and J → −iq_0, the five dimensional renormalized action (5.86) acquires the form (5.87) with p^0 = 1 and p^I = 0.
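For the two derivative prepotential F(X) = C_IJK X^I X^J X^K / X^0, our own evaluation of this continuation, with p^0 = 1 and p^I = 0, gives
\[
\operatorname{Im} F\!\left(\frac{\phi^I + i p^I}{2}\right) \;=\; \operatorname{Im}\left[\frac{C_{IJK}\,\phi^I \phi^J \phi^K}{4\,(\phi^0 + i)}\right] \;=\; -\,\frac{C_{IJK}\,\phi^I \phi^J \phi^K}{4\left((\phi^0)^2 + 1\right)}\,,
\]
so the four dimensional answer is a rational, cubic-over-quadratic function of the potentials, matching the structure of the five dimensional result (5.86).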
5.5.2 On-shell renormalized action with higher derivative corrections
The computation of the renormalized action in the presence of higher derivative terms is technically cumbersome and for this reason it is still work in progress. It would be very interesting if we could put it in a form like (5.86), that is, as a function of the potentials φ. Notwithstanding this technical difficulty, we decided to present here the "tree-level" computation of the renormalized action. The interest is to show that this formalism agrees with the traditional Noether procedure, giving an entropy function "à la Sen", in the sense that the entropy equals the on-shell five dimensional lagrangian density.
The final answer for the entropy, after computing both the bulk renormalized action and the boundary terms, is a function of C(σ) = C_IJK σ^I σ^J σ^K and c·σ = c_I σ^I evaluated on the attractor background, and it agrees with the result for the entropy computed using the Noether procedure (4.40).
As an aside, it is easy to show that the quantum contributions coming from each Wilson line cancel. This is in agreement with the fact that the renormalized action for the higher-derivative terms does not contain terms linear in C, a fact verified numerically in Mathematica. This confirms the validity of our boundary terms.
Discussion and conclusion
In this work we considered the problem of computing the quantum entropy of five-dimensional rotating supersymmetric black holes using localization techniques. We focused on N = 2 supergravity, within the context of the off-shell superconformal formalism, and showed using localization that, in the absence of higher-derivative corrections, the quantum entropy function is the same as the four-dimensional counterpart after a suitable analytic continuation. The inclusion of higher-derivative corrections in the computation of the quantum entropy is more complicated. The reduction to four dimensions gives, besides the usual chiral content, Gauss-Bonnet-type corrections in addition to D-type term corrections. Even though our analysis is independent of the higher-derivative content, because it relies only on the off-shell susy transformations, the computation of the renormalized action in a form that depends only on the unfixed modes proved to be very difficult.
When those corrections are absent, we were able to compute the quantum renormalized action and showed that it matches the four-dimensional counterpart. However, this is not the full answer to the problem, as there can be additional one-loop contributions. Within localization we used a partially fixed background together with some other gauge-fixing conditions. As explained before, it is not known whether it is even possible to construct an exact deformation in supergravity that we can use to localize the theory in a background-independent way. Our method can only probe the perturbative part of this computation, since it only requires the equations of motion that result from the localization action. Instead, we can think of an effective measure on the space of the localization solutions. To understand this, we write the final answer as an integral over the unfixed modes φ, weighted by an effective measure M(φ) that should be computed from the one-loop effects we have just mentioned. Since we do not know how to compute the one-loop contribution from first principles, we can try to determine the measure as in [28]. The idea is to construct an induced metric on the space of collective coordinates using duality symmetry. We know via the microscopic 4d/5d lift that the quantum entropies of four- and five-dimensional black holes are intimately related. For instance, the microscopic BPS partition functions of black holes in toroidally compactified four- and five-dimensional string theory are the same. By the equality of index and degeneracy for the near-horizon degrees of freedom, the black holes must have the same quantum entropy. We expect to explore this idea with concrete examples in a future publication.
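The display equation for this final answer is missing from the extraction; schematically, and as an assumption consistent with the surrounding text, it would take the form

```latex
% Schematic form of the quantum degeneracy with an effective measure;
% the precise normalization is an assumption of this sketch.
\begin{equation}
  d(q) \;\sim\; \int \prod_{I} d\phi^{I}\; M(\phi)\;
  e^{\,\mathcal{S}_{\mathrm{ren}}(\phi,\, q)} ,
\end{equation}
```

with M(φ) the effective measure on the space of the unfixed modes discussed above.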
The higher-derivative content of the five-dimensional theory can be used to address very interesting questions about the four-dimensional black holes. Since supersymmetry is highly restrictive, not every four-dimensional term can be uplifted to five dimensions. The converse is also interesting. The reduction to four dimensions gives rise to terms that cannot be written within the off-shell N = 2 formalism. For instance, the reduced four-dimensional action contains, apart from the usual 4d N = 2 chiral higher-derivative content, a Gauss-Bonnet contribution and D-terms. It was observed in [64] that D-terms do not contribute to the on-shell entropy, and it was later conjectured that their quantum contribution should also vanish [28, 64]. We believe that by understanding how higher-derivative terms contribute to the five-dimensional quantum entropy, we can shed light on the role of non-chiral corrections to the black hole entropy.
In this work we considered geometries that have an AdS_2 horizon. This is the near-horizon geometry of a supersymmetric black hole. As discussed in section 4.3, there is also an AdS_3 solution to the off-shell equations. Depending on how we identify the fifth coordinate, we can have the near-horizon geometry of a black ring or a black string. The
AdS_3 case is richer but at the same time more difficult. For instance, we have to consider the contribution of SL(2, Z) orbifolds of AdS_3, the usual BTZ black holes, to the path integral in a way consistent with localization. This has been attempted in [65], but the answer is still unsatisfactory.
Acknowledgments
It is a pleasure to thank Atish Dabholkar
• In the Lorentzian theory, Levi-Civita tensors are defined with ε_{012345} = 1.
They generate a complex structure, in the sense that the right-hand side of the defining relation is just the projector onto the space transverse to V.
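The defining relation itself was lost in extraction; based on the surrounding sentence, it presumably has the schematic form below (an assumption of this sketch, with J the complex structure and V the associated vector):

```latex
% Presumed schematic relation: J squares to minus the projector onto
% the space transverse to V.  An assumption of this reconstruction.
\begin{equation}
  J^{\mu}{}_{\rho}\, J^{\rho}{}_{\nu}
  \;=\; -\left( \delta^{\mu}{}_{\nu} - \frac{V^{\mu} V_{\nu}}{V^{2}} \right).
\end{equation}
```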
C KK reduction and Susy variations
In this section we work out the susy variations for the Kaluza-Klein fields. For the metric ds^2 = g_{μν}(x) dx^μ dx^ν + Φ^2 (dψ + B)^2 we compute the spin connections, with e^j and ω^{ij} respectively the vielbein and spin connections of the four-dimensional metric, and e^4 = Φ(dψ + B). We have defined F(B)_{μν} = B_{[μ,ν]}. We then rewrite the spin connections accordingly. The susy variations δψ^i_μ and δψ^i in the Kaluza-Klein reduction (5.25) are expressed with γ̄_μ = e^i_μ γ_i. In addition we obtain a contribution which vanishes for ∂_ψ α = 0, as expected. We have put in evidence a common term in the susy transformations, which we denote by η̃. The other susy transformations also contain a term proportional to η̃. If we choose η appropriately we can set η̃ = 0. Note that η vanishes on-shell, thereby respecting the boundary conditions.
D.1 Solving δψ_μ = 0 In this section we solve the gravitino equation studied in section 5.2.1,

\[
\big(\omega^{kl}{}_{\mu} - \omega^{0\,kl}{}_{\mu}\big)\,\gamma_{kl}\,\xi^{i}_{(0)}
+ \tfrac{1}{4}\big(T^{kl} - T^{0\,kl}\big)\,\gamma_{kl}\,\gamma_{\mu}\,\xi^{i}_{(0)} = 0,
\qquad \mathrm{(D.4)}
\]

after setting the auxiliary fields to zero. In order to find a finite set of solutions we need to consider the analytic continuation of Δω = ω^{kl}_μ − ω^{0 kl}_μ to imaginary values. Interestingly, this does not happen in the Minkowski case, for which the authors of [42] found AdS_2 × S^2 as the unique solution to the gravitino equation. We can study the equation in components, or we can construct the auxiliary "Lagrangian" δψ†_μ δψ^μ, whose vanishing locus is in one-to-one correspondence with the solutions we are looking for. In this sense we can use the formulas described previously for the square of susy transformations.
"Physics"
] |
Shp2–Mitogen-Activated Protein Kinase Signaling Drives Proliferation during Zebrafish Embryo Caudal Fin Fold Regeneration
ABSTRACT Regeneration of the zebrafish caudal fin following amputation occurs through wound healing, followed by formation of a blastema, which produces cells to replace the lost tissue in the final phase of regenerative outgrowth. We show that ptpn11a−/− ptpn11b−/− zebrafish embryos, lacking functional Shp2, fail to regenerate their caudal fin folds. Rescue experiments indicated that Shp2a has a functional signaling role, requiring its catalytic activity and SH2 domains but not the two C-terminal tyrosine phosphorylation sites. Surprisingly, expression of Shp2a variants with increased and reduced catalytic activity, respectively, rescued caudal fin fold regeneration to similar extents. Expression of mmp9 and junbb, indicative of formation of the wound epidermis and distal blastema, respectively, suggested that these processes occurred in ptpn11a−/− ptpn11b−/− zebrafish embryos. However, cell proliferation and MAPK phosphorylation were reduced. Pharmacological inhibition of MEK1 in wild-type zebrafish embryos phenocopied loss of Shp2. Our results suggest an essential role for Shp2a–mitogen-activated protein kinase (MAPK) signaling in promoting cell proliferation during zebrafish embryo caudal fin fold regeneration.
The SHP2 protein consists of two SH2 domains, followed by a catalytic PTP domain and a C-terminal domain (30). SHP2, like all classical PTPs, mediates dephosphorylation of its substrates through a mechanism involving a catalytic cysteine (C460 in zebrafish Shp2a) and an assisting arginine (R466 in zebrafish Shp2a) in the PTP domain, and mutation of either of these residues abolishes catalytic activity (16, 31–34). The crystal structure of SHP2 shows a closed conformation, with the N-terminal SH2 domain interacting with residues close to the catalytic pocket, thus blocking access of substrates to the catalytic site and impairing catalytic activity (30). Activation of SHP2 is facilitated by dissociation of the SH2 domains from the PTP domain, engendering an open conformation, which allows access of target substrates to the catalytic site (35). Mutation of key residues, such as D61 in the N-terminal SH2 domain, which was identified as causing Noonan syndrome (NS), results in an open conformation of SHP2 and increased catalytic activity (36–38). In Noonan syndrome with multiple lentigines (NS-ML), mutations were identified close to the catalytic cysteine, such as A461 (A462 in zebrafish Shp2a), which result in strongly reduced activity (38–40).
Importantly, the SH2 domains and C-terminal domain of SHP2 are required for the function of SHP2 in response to growth factor stimulation. The SH2 domains bind to phosphotyrosine-containing target proteins (25). The C-terminal domain mediates interactions with other proteins. Two tyrosines (Y542 and Y580) are particularly important because, when phosphorylated, they constitute binding sites for SH2 domain-containing proteins (41–43), which mediate MAPK activation in response to growth factors (44). Collectively, the studies on the function of the domains of SHP2 show that both the SH2 domains and the C-terminal domain potentiate, but are not strictly required for, the stimulation of MAPK signaling by the PTP domain of SHP2.
Regeneration requires cell survival, migration, proliferation, and differentiation for effective wound healing and replacement of lost tissue (4, 45, 46). MAPK activation following injury is associated with regenerative competence across species (47, 48). The need for MAPK signaling in zebrafish caudal fin regeneration has also been implied (49–51). However, not only MAPK but also phosphoinositide 3-kinase (PI3K), phospholipase Cγ (PLCγ), and signal transducer and activator of transcription (STAT) signaling is activated (52), complicating the conclusion that MAPK signaling is required. Whereas PI3K signaling is essential for zebrafish caudal fin regeneration (45, 53), the evidence supporting a role for MAPK signaling is inconclusive. Hence, the role of SHP2 and MAPK signaling in zebrafish caudal fin regeneration remains to be determined definitively.
We investigated the role of Shp2 in zebrafish embryo caudal fin fold regeneration using homozygous ptpn11a−/− ptpn11b−/− zebrafish embryos, which lack functional Shp2 (26), and found that Shp2 is required for normal caudal fin fold regeneration. Rescue experiments with mutant Shp2a indicated that functional SH2 domains and catalytic activity were required for its capacity to rescue regeneration, whereas the two tyrosine residues in the C terminus of Shp2a were dispensable. Characterization of the regeneration defect in ptpn11a−/− ptpn11b−/− zebrafish embryos suggested that formation of the wound epidermis and distal blastema occurred similarly to that in their siblings but that cell proliferation and MAPK phosphorylation were significantly reduced during regenerative outgrowth. In a similar manner, pharmacological inhibition of MEK1, upstream of MAPK, in wild-type zebrafish embryos inhibited regeneration and reduced proliferation during regenerative outgrowth. Collectively, our results demonstrate that Shp2a requires its SH2 domains and catalytic activity for its function in zebrafish embryo caudal fin fold regeneration and likely acts to activate MAPK signaling, which is required to stimulate proliferation during regeneration.
RESULTS
Shp2a requires its SH2 domains and catalytic activity for its function in regeneration of the zebrafish embryo caudal fin fold. We have previously shown that homozygous ptpn11a−/− ptpn11b−/− zebrafish embryos, lacking functional Shp2, fail to regenerate their caudal fin fold following amputation, demonstrating that Shp2a is required for zebrafish caudal fin fold regeneration (54). To validate that impaired regeneration is indeed due to the lack of functional Shp2, we performed rescue experiments using wild-type Shp2a (WT). Next, we determined which signaling domains of Shp2a are required for its function, using SH2 domain or C-terminal domain mutants of Shp2a. To this end, zebrafish embryos from a ptpn11a+/− ptpn11b−/− incross were microinjected at the one-cell stage with synthetic mRNA encoding wild-type Shp2a; Shp2a-R32M-R138M (SH), in which the essential arginine residues in both SH2 domains were mutated; or Shp2a-Y542F-Y580F (YF), which lacks the two tyrosine phosphorylation sites that are important for signaling. The mRNAs encoding (mutant) Shp2a proteins also encode enhanced green fluorescent protein (eGFP) linked by a peptide-2a cleavage sequence (55). At 2 dpf, eGFP-positive zebrafish embryos were selected, their caudal fin folds were amputated (referred to here as "amputated zebrafish embryos"), and they were allowed to regenerate for 3 days, which results in ~80% complete regeneration in wild-type zebrafish embryos. Representative photographs of regenerated caudal fin folds of ptpn11a−/− ptpn11b−/− zebrafish embryos expressing (mutant) Shp2a protein at 3 days postamputation (dpa) are shown in Fig. 1A. Caudal fin fold lengths were determined and are presented as percent caudal fin fold growth, normalized to that of uncut control ptpn11a+/+ ptpn11b−/− zebrafish embryos (Fig. 1B and C). All the zebrafish embryos were subsequently genotyped.
Expression of wild-type Shp2a or the tyrosine phosphorylation site mutant (YF) resulted in significant rescue (P < 0.001) of regeneration in ptpn11a−/− ptpn11b−/− zebrafish embryos. In contrast, expression of the SH2 domain mutant of Shp2a (SH) was unable to rescue regeneration (Fig. 1A and B). Rescue of regeneration did not reach the 80% level normally exhibited by ptpn11a+/+ ptpn11b−/− sibling controls. This was probably due to a combination of the mosaicism that occurs when using mRNA injections, resulting in a fraction of the cells not expressing the Shp2a protein, and the transient nature of mRNA injections (56). These results demonstrate that the SH2 domains, but not the C-terminal tyrosine phosphorylation sites, are required for Shp2a function in zebrafish embryo caudal fin fold regeneration.
Next, we tested the rescue capacity of Shp2a mutants with altered catalytic activity. We have previously shown that zebrafish Shp2a mutants with an NS-associated mutation, D61G, or an NS-ML mutation, A462T, have increased or reduced activity, respectively, when tested in vitro (57). In addition, we used a Shp2a mutant with a mutation of the conserved arginine (R466M), which lacks catalytic activity, rather than the catalytic cysteine mutant (C460S in zebrafish Shp2a), because Shp2a-C460S may trap substrates (31, 58) and thus have inadvertent dominant effects. Interestingly, the amputated caudal fin folds of ptpn11a−/− ptpn11b−/− zebrafish embryos expressing Shp2a-D61G (DG) or Shp2a-A462T (AT), but not Shp2a-R466M (RM), regenerated to similar extents (Fig. 1A and B). Small differences were observed between the different Shp2a mutants that rescued caudal fin fold regeneration in the ptpn11a−/− ptpn11b−/− zebrafish embryos (Fig. 1B). However, using a Mann-Whitney U test with a post hoc Monte Carlo exact test, we established that these differences were not statistically significant. The fin folds of uncut controls expressing any of the Shp2 mutants were not significantly affected and showed normal growth of the caudal fin fold, with lengths comparable to those in noninjected zebrafish embryos and siblings (Fig. 1C).
A trivial explanation for the inability of the catalytically inactive mutant of Shp2a (RM) or the SH2 domain mutant of Shp2a to rescue caudal fin fold regeneration might be that these proteins have reduced stability. Due to low expression levels of (mutant) Shp2a proteins in zebrafish embryos, it was not possible to monitor protein expression in vivo. However, transfection of constructs encoding wild-type Shp2a, Shp2a-R32M-R138M, Shp2a-Y542F-Y580F, Shp2a-D61G, Shp2a-A462T, or Shp2a-R466M proteins in HEK293T cells revealed that all Shp2a proteins are expressed to similar extents, although expression levels vary from mutant to mutant (Fig. 2). This suggests that the inability of the SH and RM mutants to rescue caudal fin fold regeneration is not due to greatly reduced protein expression or stability, but rather reflects functional differences between the mutant proteins.
FIG 1
Functional Shp2a is required for regeneration. Zebrafish embryos from a ptpn11a+/− ptpn11b−/− incross were microinjected at the one-cell stage with synthetic mRNA encoding wild-type Shp2a, SH2 domain mutant Shp2a-R32M-R138M, C-terminal tyrosine mutant Shp2a-Y542F-Y580F, Shp2a-D61G, Shp2a-A462T, or Shp2a-R466M or were not injected (−). At 2 dpf, the caudal fin fold was amputated, and regeneration was assessed at 3 dpa (i.e., 5 dpf and 3 dpa); equivalent uncut controls were included (i.e., 5 dpf, uncut). All the embryos were genotyped. (A) Representative images of amputated ptpn11a−/− ptpn11b−/− embryo caudal fin folds at 3 dpa. A ptpn11a+/+ ptpn11b−/− sibling in which regeneration of the caudal fin fold was 80% complete by 3 dpa is shown for comparison (top left). (B and C) Regeneration was quantified by measuring the distance from the tip of the notochord to the edge of the caudal fin fold, as indicated (bars in panel A). The means of caudal fin fold growth are depicted relative to caudal fin fold growth of uncut ptpn11a+/+ ptpn11b−/− controls. Means of microinjected amputated (amp) (B) or uncut (C) ptpn11a−/− ptpn11b−/− embryos were compared to those of noninjected amputated or uncut ptpn11a−/− ptpn11b−/− embryos. The data were pooled from multiple experiments. Statistical evaluation was performed using a Mann-Whitney U test for comparison of ptpn11a−/− ptpn11b−/− zebrafish embryos with siblings within amputated or uncut groups, not between amputated and uncut groups. The error bars indicate standard errors of the mean. ***, P < 0.001; n.s., not significant.
Taken together, these results demonstrate that functional Shp2 is required for zebrafish embryo caudal fin fold regeneration. Although the two tyrosine phosphorylation sites in the C-terminal domain are dispensable, both the catalytic activity and the SH2 domains of Shp2a are required for its function in zebrafish embryo caudal fin fold regeneration. Furthermore, the level of catalytic activity harbored by Shp2a appears not to affect the extent of zebrafish embryo caudal fin fold regeneration.
Markers for formation of the wound epidermis and distal blastema suggest the initial response to amputation occurs normally in zebrafish embryos deficient for Shp2. Following amputation, wound healing occurs, and an apical epidermal cap is produced that signals for the formation of the blastema. Thus, successful blastema formation is indicative of successful wound healing (1, 2). Zebrafish embryos from a ptpn11a+/− ptpn11b−/− incross were fixed at 3 h postamputation (hpa) and subjected to in situ hybridization using probes specific for mmp9 and junbb, which are normally upregulated following caudal fin fold amputation (59, 60) and mark the wound epidermis and distal blastema, respectively. All the zebrafish embryos were subsequently genotyped. Expression of mmp9 (Fig. 3A) and junbb (Fig. 3B) was clearly induced in amputated caudal fin folds but not in uncut controls. Homozygous ptpn11a−/− ptpn11b−/− zebrafish embryos expressed mmp9 and junbb to an extent similar to that of their siblings following caudal fin fold amputation, suggesting that formation of the wound epidermis and the distal blastema occurred in Shp2-deficient zebrafish embryos.
Arrested proliferation in zebrafish embryos deficient for Shp2 during regenerative outgrowth. During regenerative outgrowth, proliferation is upregulated to generate the cells required to form and replace the lost tissue (2, 3). We analyzed cell proliferation during the regenerative outgrowth stage by immunohistochemistry using an antibody specific for proliferating cell nuclear antigen (PCNA). Zebrafish embryos from a ptpn11a+/− ptpn11b−/− incross were fixed at 2 dpa and subjected to whole-mount immunohistochemistry for detection of PCNA expression. All the zebrafish embryos were subsequently genotyped. At 2 dpa, PCNA immunofluorescence was dispersed and significantly reduced (P < 0.05) at the edges of the amputated caudal fin folds of ptpn11a−/− ptpn11b−/− zebrafish embryos, whereas in siblings that regenerated normally, PCNA immunofluorescence was concentrated between the amputation plane and the wound margin (Fig. 4). PCNA immunofluorescence remained low in uncut controls (Fig. 4B). These results indicate that proliferation is reduced in ptpn11a−/− ptpn11b−/− zebrafish embryos by 2 dpa compared to siblings in the regenerative outgrowth phase following caudal fin fold amputation.
Reduced MAPK signaling in zebrafish embryos deficient for Shp2. Loss of SHP2 in tissue culture cells or in knockout mice results in reduced MAPK signaling, leading to reduced proliferation and differentiation and developmental defects (10, 17–19, 32, 61). Furthermore, activated MAPK signaling following injury is associated with regenerative competence across species (47, 48). To determine if loss of Shp2 affected MAPK signaling in ptpn11a−/− ptpn11b−/− zebrafish embryos, we performed whole-mount immunohistochemistry using a phospho-MAPK-specific antibody (p-MAPK; phospho-p44/42 MAPK [Thr202/Tyr204]). In comparison to their siblings, ptpn11a−/− ptpn11b−/− zebrafish embryos displayed significantly reduced p-MAPK levels following caudal fin fold amputation at 4 dpf (P < 0.01) (Fig. 5A). Note that, compared to their siblings, p-MAPK levels were also significantly reduced in the caudal fin folds of uncut ptpn11a−/− ptpn11b−/− zebrafish embryos (P < 0.05) (Fig. 5B). These immunohistochemistry experiments indicated that MAPK signaling is reduced in zebrafish embryos lacking functional Shp2.
FIG 2
Similar expression levels of Shp2a mutants. Human embryonic kidney 293T cells were transfected with cytomegalovirus (CMV) promoter-driven expression vectors for zebrafish wild-type Shp2a and Shp2a mutants: Shp2a-Y542F-Y580F, Shp2a-R32M-R138M, Shp2a-D61G, Shp2a-A462T, and Shp2a-R466M. The cells were lysed, and the lysates were run on an SDS-PAGE gel. The blots were probed using a SHP2-specific antibody and developed using enhanced chemiluminescence. The blots were stripped and reprobed for tubulin as a loading control. All the samples were loaded on the same blot.
Reducing MAPK signaling by pharmacological inhibition of MEK1 phenocopies loss of Shp2 in zebrafish caudal fin fold regeneration. We hypothesized that the reduced MAPK signaling observed in Shp2-deficient zebrafish embryos was responsible for the lack of caudal fin fold regeneration. We therefore tested if pharmacological inhibition of MAPK signaling in zebrafish embryos impaired caudal fin fold regeneration. The caudal fin folds of wild-type zebrafish embryos (ptpn11a+/+ ptpn11b+/+) were amputated at 2 dpf and allowed to regenerate for 3 days in the presence of 50 nM of the MEK1 inhibitor PD184352 (also known as CI-1040) or solvent (1% dimethyl sulfoxide [DMSO]) as a control. Treatment of zebrafish embryos with PD184352 significantly impaired caudal fin fold regeneration (P < 0.001) (Fig. 6A). Uncut control zebrafish embryos showed that PD184352 treatment by itself did not affect normal caudal fin fold growth (Fig. 6A).
We investigated if the impaired regeneration of wild-type zebrafish embryos treated with PD184352 was associated with defective wound healing or distal blastema formation. Wild-type zebrafish embryos were amputated at 2 dpf and treated with PD184352 or DMSO until fixation at 3 hpa. Equivalent uncut controls were treated and fixed. Zebrafish embryos were subjected to in situ hybridization using an mmp9-specific or junbb-specific probe for detection of the wound epidermis and distal blastema, respectively. Expression of mmp9 (Fig. 6B) and junbb (Fig. 6C) was clearly induced in amputated caudal fin folds but not in uncut control zebrafish embryos. PD184352-treated zebrafish embryos expressed mmp9 and junbb to an extent similar to that of solvent-treated control embryos following caudal fin fold amputation, suggesting that formation of the wound epidermis and subsequent formation of the distal blastema were not affected by PD184352-mediated inhibition of MAPK signaling. Next, we analyzed cell proliferation during the regenerative outgrowth stage of zebrafish embryos treated with PD184352. Wild-type zebrafish embryos had their caudal fin folds amputated at 2 dpf and were treated with PD184352 or solvent until fixation at 2 dpa. The zebrafish embryos were subjected to whole-mount immunohistochemistry for detection of PCNA expression. At 2 dpa, PCNA immunofluorescence in amputated caudal fin folds was dispersed and significantly reduced (P < 0.01) in zebrafish embryos treated with PD184352 compared to control zebrafish embryos treated with DMSO (Fig. 6D). Baseline PCNA staining in the caudal fin folds of uncut control zebrafish embryos at 4 dpf was low but was also significantly reduced following PD184352 treatment (P < 0.01) (Fig. 6E). These results demonstrate that MAPK signaling is required for normal caudal fin fold regeneration of zebrafish embryos and promotes proliferation during regenerative outgrowth.
FIG 4
Proliferation is arrested at the amputated caudal fin fold margin of Shp2-deficient embryos. At 2 dpf, the caudal fin folds of embryos from a ptpn11a+/− ptpn11b−/− incross were amputated and allowed to regenerate. The embryos were fixed at 2 dpa (4 dpf, 2 dpa) and subjected to whole-mount immunohistochemistry using an antibody specific for the cell proliferation marker PCNA (red). The embryos were counterstained with DAPI (4′,6-diamidino-2-phenylindole) (blue). Maximum-intensity projection images of the caudal fin folds were taken, and all the embryos were genotyped. (A) Representative images of amputated embryo caudal fin folds, with the edges of the fin folds indicated with dashed lines. The numbers of embryos showing similar patterns/total numbers of embryos analyzed are indicated in the bottom right corners of the images in the right-hand column. Scale bars, 100 μm. (B) PCNA immunofluorescence between the tip of the notochord and the edge of the caudal fin fold was quantified by mean particle count, with thresholding and size restriction to remove background signal. Equivalent uncut controls were also quantified, and the mean values of all the caudal fin folds are shown. The statistical significance of the means was determined relative to ptpn11a+/+ ptpn11b−/− zebrafish embryos within the amputated group, and likewise within the uncut group. *, P < 0.05; the error bars represent standard deviations.
FIG 5
Reduced p-MAPK in regenerating caudal fin folds of Shp2-deficient embryos. At 2 dpf, the caudal fin folds of embryos from a ptpn11a+/− ptpn11b−/− incross were amputated and allowed to regenerate. The embryos were fixed at 2 dpa (4 dpf, 2 dpa) and subjected to whole-mount immunohistochemistry using a p-MAPK-specific antibody (Thr202/Tyr204) (green). The embryos were counterstained with DAPI (blue). Maximum-intensity projection images were taken of the caudal fin folds, and all the embryos were genotyped. (A) Representative images of amputated embryo caudal fin folds, with the edges of the fin folds indicated with dashed lines. The numbers of embryos showing similar patterns/total numbers of embryos analyzed are indicated in the right-hand column. Scale bars, 100 μm. (B) p-MAPK was quantified by the mean intensity of the region between the notochord and the edge of the caudal fin fold. Equivalent uncut controls were also quantified, and the mean values of all the caudal fin folds are depicted. The statistical significance of the means was determined relative to ptpn11a+/+ ptpn11b−/− zebrafish embryos within the amputated group, and likewise within the uncut group. **, P < 0.01; *, P < 0.05; the error bars represent standard deviations.
DISCUSSION
Our results demonstrate a critical role for Shp2 and MAPK signaling in zebrafish embryo caudal fin fold regeneration. Zebrafish embryos lacking functional Shp2 (ptpn11a−/− ptpn11b−/−) show severely impaired regeneration of their caudal fin folds following amputation (Fig. 1). Expression of wild-type Shp2a rescues regeneration, which relies on its SH2 domains and catalytic activity (Fig. 1). The initial response to amputation includes formation of the wound epidermis and distal blastema, which are characterized by increased expression of mmp9 and junbb, respectively, and our in situ hybridization results suggest that these two processes do occur in Shp2-deficient zebrafish embryos (Fig. 3). Critically, immunohistochemistry for PCNA revealed that proliferation was arrested during the regenerative outgrowth phase in Shp2-deficient zebrafish embryos (Fig. 4). We propose that the reduced p-MAPK levels in ptpn11a−/− ptpn11b−/− zebrafish embryos (Fig. 5) cause impaired caudal fin fold regeneration, which is consistent with our observation that MEK1 inhibition phenocopies loss of Shp2 in zebrafish embryo caudal fin fold regeneration, impairing zebrafish caudal fin fold regeneration and reducing proliferation during regenerative outgrowth (Fig. 6).
FIG 6
Impaired caudal fin fold regeneration in wild-type zebrafish embryos treated with MEK1 inhibitor. At 2 dpf, the caudal fin folds of wild-type embryos were amputated and allowed to regenerate in the presence of 50 nM PD184352 (MEK1i) or 1% DMSO (solvent control). (A) Regeneration after 3 days was quantified by measuring the distance from the tip of the notochord to the edge of the caudal fin fold. By 3 dpa, regeneration of the caudal fin fold of control zebrafish embryos was 80% complete. The means of caudal fin fold growth are depicted relative to caudal fin fold growth of DMSO-treated uncut controls. The statistical significance of the mean of PD184352 (MEK1i)-treated amputated embryos was determined relative to the mean of DMSO-treated amputated embryos, and likewise for the uncut treated and untreated embryos. The number of embryos is indicated (n). ***, P < 0.001; n.s., not significant; the error bars indicate standard errors of the mean. (B and C) Embryos were fixed at 3 hpa, or the equivalent for uncut controls, and subjected to in situ hybridization for mmp9 (B) or junbb (C). Representative images of caudal fin folds of embryos are shown, and the numbers of embryos showing similar patterns/total numbers of embryos analyzed are indicated in the bottom right corner of each image. (D) Embryos were fixed at 2 dpa (4 dpf, 2 dpa) and subjected to whole-mount immunohistochemistry using an antibody specific for the cell proliferation marker PCNA (red). The embryos were counterstained with DAPI (blue). Maximum-intensity projection images were taken of the caudal fin folds. Representative images of amputated embryo caudal fin folds are shown, with the edges of the fin folds indicated with dashed lines. The numbers of embryos showing similar patterns/total numbers of embryos analyzed are indicated in the right-hand column. Scale bars, 100 μm. (E) PCNA immunofluorescence between the tip of the notochord and edge of the caudal fin fold was quantified by mean particle count, with thresholding and size restriction to remove background signal. Equivalent uncut controls were also quantified. The means of the amputated PD184352 (MEK1i)-treated group were compared to those of the amputated DMSO-treated group, and likewise, the means of the uncut PD184352 (MEK1i)-treated group were compared to those of the uncut DMSO-treated group. **, P < 0.01; the error bars represent standard deviations.
Recently, we demonstrated that Shp2a and Shp2b are two of the eight PTPs that are oxidized and hence inactivated in response to caudal fin amputation (54). Here, we demonstrate that Shp2 signaling is required for regeneration of the zebrafish embryo caudal fin fold, which seems to contrast with the finding that Shp2 is oxidized and thus inactivated upon amputation of the zebrafish caudal fin. However, production of reactive oxygen species (ROS) in response to caudal fin amputation is transient (62). Presumably, Shp2 is transiently inactivated by the production of ROS following zebrafish caudal fin amputation, and Shp2 is subsequently reduced again to an active form that is required for caudal fin regeneration. Whether transient inactivation of Shp2 is required for caudal fin regeneration remains to be determined.
SHP2 has an important signaling role in many cellular processes (35, 63) and interacts with associated proteins and substrates through its SH2 domains and/or C-terminal domain (25, 41, 42). Our rescue experiments indicated that the SH2 domains, but not the tyrosine phosphorylation sites in the C-terminal domain of Shp2a, are required to rescue caudal fin fold regeneration of ptpn11a−/− ptpn11b−/− zebrafish embryos (Fig. 1). Mutation of the two SH2 domains impairs the association of SHP2 with phosphorylated growth factor receptors and substrates (64) and has been shown to inhibit EGF stimulation of MAPK activation in cells (65). Thus, the inability of Shp2a-R32M-R138M to rescue caudal fin fold regeneration suggests that Shp2a binding to substrates or interacting proteins via its SH2 domains is required. Mutating Y542 and Y580 prevents binding of SHP2 to GRB2 and reduces, but importantly does not abolish, the activation of MAPK in response to stimulation by some growth factors in tissue culture cells, suggesting that the SHP2-GRB2 interaction is dispensable in some contexts (11, 41, 66). We conclude that expression of Shp2a-Y542F-Y580F in ptpn11a−/− ptpn11b−/− zebrafish embryos apparently mediated sufficient MAPK activation to rescue caudal fin fold regeneration.
The catalytic activity of SHP2 is paramount for regulation of MAPK signaling (11). We provide evidence that Shp2a-R466M, which lacks detectable catalytic activity, fails to rescue caudal fin fold regeneration in ptpn11a−/− ptpn11b−/− zebrafish embryos (Fig. 1), yet Shp2a-A462T, which harbors very low, but detectable, catalytic activity, did rescue regeneration. Shp2a-D61G, with enhanced catalytic activity compared to wild-type Shp2a, rescued regeneration in ptpn11a−/− ptpn11b−/− zebrafish embryos to an extent similar to that of Shp2a-A462T (Fig. 1B). These results are surprising, particularly because tight regulation of signal transduction has been demonstrated to be essential for zebrafish caudal fin regeneration (45, 67, 68). It is not unlikely that differences in the conformational dynamics of the Shp2a mutants affect Shp2a function. The current model for SHP2 is that under control conditions, it is in the closed conformation, through interactions between the SH2 domains and the PTP domain (30). Ligation of the SH2 domains to phosphotyrosine residues on other proteins prompts an open conformation, allowing the PTP domain to dephosphorylate substrates. "Activating" mutations, such as D61G in the N-terminal SH2 domain, disrupt the interaction between the SH2 domains and the PTP domain and stabilize the open conformation of SHP2 (36–38). Recently, it has been hypothesized that while the SHP2 A461T mutant has reduced catalytic activity, it is also stabilized in an open conformation, allowing prolonged association with substrates that compensates for its reduced activity (39, 69). This would explain why both Shp2a-D61G and Shp2a-A462T rescue zebrafish embryo caudal fin fold regeneration. It would be interesting to investigate the effects of Shp2a-D61G and Shp2a-A462T on downstream signaling during caudal fin fold regeneration.
The expression of mmp9 and junbb has been shown to be specifically increased in the wound epidermis and distal blastema, respectively, following zebrafish embryo caudal fin fold amputation (59). Furthermore, junbb expression is maintained well into the initial stage of regenerative outgrowth (60), indicating that junbb is a definitive distal blastema marker. We show that the amputated caudal fin folds of ptpn11a−/− ptpn11b−/− zebrafish embryos express mmp9 and junbb, like those of their siblings (Fig. 3), suggesting that both wound healing and distal blastema formation occur in the absence of Shp2. This appears to contrast with previous results showing that Fgfr1 signaling is required for blastema formation (50, 68). Although FGFR1 and SHP2 signaling overlap (52), Fgfr1 and Shp2a signaling in zebrafish apparently differ to such an extent that distal blastema formation depends on Fgfr1 but not on Shp2.
The next stage of regeneration, regenerative outgrowth, is characterized by proliferation and differentiation of cells to replace the lost tissue. Our whole-mount immunohistochemistry experiments demonstrated that cell proliferation and MAPK phosphorylation are significantly reduced in the regenerating caudal fin folds of ptpn11a−/− ptpn11b−/− zebrafish embryos compared to their siblings (Fig. 4 and 5). Inhibiting MAPK signaling using an inhibitor of MEK1 was sufficient to phenocopy the effect of loss of Shp2 (Fig. 6). Our results using the MEK1 inhibitor endorse the conclusion that MAPK signaling is required to drive proliferation during regenerative outgrowth of zebrafish embryo caudal fin fold regeneration. These results are consistent with previous work demonstrating a requirement for Fgfr1 signaling in proliferation during zebrafish caudal fin regeneration (49, 51, 68) and reduced proliferation and regeneration in response to MEK1 inhibition during zebrafish heart regeneration (70). Considering this, Shp2a-MAPK signaling may have a conserved role in the regeneration of various tissues.
Collectively, our results suggest an essential role for Shp2a-mediated MAPK signaling in promoting cell proliferation during the regenerative outgrowth phase of regenerating zebrafish embryo caudal fin folds. Recent work has shown that loss of SHP2 in mice, resulting in reduced MAPK signaling and reduced proliferation, leads to impaired muscle regeneration, and this was attributed to satellite cell quiescence (71). Possibly, the loss of Shp2 in zebrafish embryo caudal fin folds induces quiescence in dedifferentiated cells of the distal blastema. This would certainly be in concordance with our results showing that regenerative outgrowth was impaired, despite apparently normal distal blastema formation. In addition to promoting MAPK signaling, SHP2 has been shown to promote or inhibit PI3K signaling (72, 73). Interestingly, the symptoms that present in vivo as a result of loss of ptpn11 or activating mutations of SHP2 appear to be primarily due to the effect on MAPK signaling. For example, mice expressing the activating SHP2 mutant Q79R display MAPK hyperphosphorylation and congenital heart defects, while both these phenotypes are ameliorated in Q79R × Erk1−/− mice (74). In comparison, genetic ablation of PTPN11 in retinal cells results in reduced MAPK phosphorylation but does not affect AKT phosphorylation (61). The resulting defects are also not rescued by mutating phosphatase and tensin homolog (PTEN), the antagonist of PI3K signaling, which would normally increase PI3K signaling (75). However, hyperactive KRas, which has been shown to alleviate the requirement for SHP2 in the maintenance of hematopoietic stem cells (17), does rescue the retinal defects. As PI3K signaling has previously been shown to be required for blastema formation (45) and our results suggest that distal blastema formation occurs in zebrafish embryos lacking functional Shp2, we conclude that it is unlikely that Shp2 acts through PI3K signaling during zebrafish embryo caudal fin fold regeneration.
In conclusion, we have demonstrated that Shp2a signaling is indispensable for zebrafish embryo caudal fin fold regeneration. Our results are consistent with Shp2a acting to promote MAPK signaling, thus coordinating proper proliferation during regenerative outgrowth of the zebrafish embryo caudal fin fold.
MATERIALS AND METHODS
Zebrafish husbandry. All procedures involving experimental animals were performed under license number GZB/VVB 2041019 of the Hubrecht Institute/Royal Academy of Arts and Sciences (Koninklijke Nederlandse Akademie van Wetenschappen [KNAW]) and approved by the local animal experiment committee according to local guidelines and policies in compliance with national and European laws.
Caudal fin fold amputation. Zebrafish embryo amputations were performed as previously described (78) at 2 dpf for all experiments. Regeneration was allowed to proceed until analysis at 3 dpa or fixation at 3 hpa or 2 dpa. PD184352 (Sigma) or dimethyl sulfoxide (Sigma) was administered directly following recovery of amputated zebrafish embryos in E3 medium (5 mM NaCl, 0.17 mM KCl, 0.33 mM CaCl2, 0.33 mM MgSO4). Whole zebrafish embryos were lysed for genotyping or fixed in 4% paraformaldehyde (PFA) in phosphate-buffered saline (PBS), either at 3 hpa for in situ hybridization or at 2 dpa for immunohistochemistry.
In situ hybridization. In situ hybridizations were performed as previously described (79), using mmp9 or junbb digoxigenin-UTP-labeled antisense riboprobes. Parts of mmp9 and junbb were amplified from zebrafish cDNA using specific primers (Table 1), and the resulting products from the nested PCR were used as DNA templates for synthesis of digoxigenin-UTP-labeled antisense riboprobes. Following staining, the caudal fin folds of zebrafish embryos were severed and mounted in 70% glycerol in PBS for imaging on a Zeiss Axioskop 2 Mot Plus microscope with a Plan-Neofluar 10×/0.30-numerical-aperture or 20×/0.50-numerical-aperture objective. The rest of the zebrafish embryo was lysed for genotyping.
Immunohistochemistry. Zebrafish embryos fixed in 4% PFA were washed in PBS-0.1% Tween 20, and antigen retrieval was performed depending on the antibody used: ice-cold acetone for 20 min for PCNA and 10 mM Tris, 1 mM EDTA, pH 9.0, for phospho-p44/42 MAPK (Thr202/Tyr204). Whole zebrafish embryos were incubated overnight at 4°C in mouse anti-PCNA (1:200; number M0879; Dako Agilent Pathology Solutions) or rabbit anti-phospho-p44/42 MAPK (Thr202/Tyr204) (1:100; number 4370; Cell Signaling Technology). Secondary antibodies conjugated to Cy5, goat anti-mouse or goat anti-rabbit IgG (numbers 115-175-146 and 111-175-144; Jackson ImmunoResearch), were used at 1:500 and 1:200, respectively. Nuclei were shown by DAPI (4′,6-diamidino-2-phenylindole) staining. Caudal fin folds of zebrafish embryos were mounted for imaging in 70% glycerol in PBS, and the rest of the zebrafish embryo was lysed for genotyping. Z-stacks (6-μm step size) of the caudal fin fold were acquired for every embryo. PCNA immunostaining between the tip of the notochord and the edge of the caudal fin fold of zebrafish embryos from a ptpn11a+/− ptpn11b−/− incross was quantified by cropping the z-projections to remove the signal adjacent to the tip of the notochord and applying rolling-ball background subtraction with an average of 15 px. Particles were counted for PCNA immunofluorescence following black-and-white thresholding of 40 to 255, applying watershed, and using a size restriction of 0.00009 in² to infinity. PCNA immunostaining between the tip of the notochord and the edge of the caudal fin fold of wild-type zebrafish embryos treated with PD184352 or DMSO was quantified in an identical manner, applying rolling-ball background subtraction with an average of 100 px. Quantification of p-MAPK immunostaining was performed following rolling-ball background subtraction with an average of 50 px. The mean intensity of p-MAPK was measured from the wound margin inward using a region of interest with a height (h) of 2.08 μm (625 px) and various widths (w) for each sample, i.e., for embryos at 2 dpa, 0.82 μm (246 px) for amputated zebrafish embryos and 1.00 μm (300 px) for uncut zebrafish embryos.
Genotyping. All the zebrafish embryos that were used in these assays were genotyped to establish their ptpn11a status. To this end, genomic zebrafish DNA was extracted through lysis of the zebrafish embryos in 100 μg/ml proteinase K (Sigma) diluted in SZL buffer (50 mM KCl, 2.5 mM MgCl2, 10 mM Tris, pH 8.3, 0.005% NP-40, 0.005% Tween 20, and 0.1% gelatin). Lysis was performed by incubating at 60°C for 1 h, followed by 95°C for 15 min in a thermal cycler (Bio-Rad T100). The ptpn11a hu1864 allele in nonfixed tissue was analyzed by Kompetitive allele-specific PCR (KASP): primers for ptpn11a containing nonsense mutations of the ptpn11a hu1864 allele (Table 1) were mixed with genomic zebrafish DNA and KASP master mix (LGC Group). Amplification was carried out according to the manufacturer's instructions, and the resulting PCR products were analyzed in a Pherastar microplate reader (BMG Labtech). Klustercaller software (LGC Group) was used to identify the mutations. For fixed tissue, genotyping for the ptpn11a hu1864 allele was performed by nested PCR with primer sets 1 to 4 (Table 1), followed by Sanger sequencing (Macrogen Inc., Europe) to detect the mutations.
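The particle-count pipeline described above (rolling-ball background subtraction, fixed 40-to-255 thresholding, watershed, size-restricted counting) was presumably run in ImageJ; a minimal Python sketch of the same steps using scikit-image follows. The file name, the pixel-area cutoff, and other constants are illustrative assumptions rather than values from this study.

```python
# Sketch of the PCNA particle-count quantification; the original analysis
# was presumably done in ImageJ.  File name, pixel calibration, and
# MIN_AREA_PX are illustrative assumptions, not values from the paper.
import numpy as np
from scipy import ndimage as ndi
from skimage import io
from skimage.restoration import rolling_ball
from skimage.feature import peak_local_max
from skimage.segmentation import watershed
from skimage.measure import label, regionprops

ROLLING_BALL_RADIUS = 15   # px; 100 px was used for the MEK1i data set
THRESHOLD = 40             # fixed 40-255 threshold on an 8-bit image
MIN_AREA_PX = 50           # stand-in for the 0.00009 in^2 size cutoff

img = io.imread("pcna_max_projection.tif").astype(float)  # hypothetical file

# 1) Rolling-ball background subtraction.
background = rolling_ball(img, radius=ROLLING_BALL_RADIUS)
corrected = np.clip(img - background, 0, None)

# 2) Fixed black-and-white thresholding.
mask = corrected >= THRESHOLD

# 3) Watershed to split touching nuclei.
distance = ndi.distance_transform_edt(mask)
coords = peak_local_max(distance, labels=label(mask),
                        footprint=np.ones((3, 3)))
markers = np.zeros(mask.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
segmented = watershed(-distance, markers, mask=mask)

# 4) Particle count with a size restriction.
particles = [r for r in regionprops(segmented) if r.area >= MIN_AREA_PX]
print(f"PCNA-positive particles: {len(particles)}")
```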
Statistics. For analysis of caudal fin fold lengths, histograms of whole data sets were examined to determine nonnormal distribution of the data. Statistical analysis of unequal variances was performed using a Kruskal-Wallis test. Differences between experimental conditions were assessed for significance using a Mann-Whitney U test. Differences were considered significant at a P value of <0.001 and if they satisfied a confidence interval of 95% in a Monte Carlo exact test. All tests for regenerating caudal fin folds were performed in SPSS (IBM). For analysis of immunohistochemistry measurements, differences between experimental conditions were assessed for significance using a Mann-Whitney U test with the confidence level set to 95%. All tests for immunohistochemistry measurements were performed in GraphPad Prism (GraphPad Software). Differences were considered significant at a P value of <0.05.
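For concreteness, the nonparametric comparisons described above can be sketched with SciPy as follows; the data arrays are placeholders, and SciPy's p-values stand in for the Monte Carlo exact procedure run in SPSS and the tests run in GraphPad Prism.

```python
# Minimal sketch of the statistics described above; the fin fold growth
# arrays are placeholders, not measured values from this study.
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

mutant_amp = np.array([42.0, 45.1, 39.8, 41.2])   # hypothetical % growth
sibling_amp = np.array([78.5, 81.0, 79.9, 82.3])  # hypothetical % growth

# Kruskal-Wallis test across experimental conditions (unequal variances).
h_stat, p_kw = kruskal(mutant_amp, sibling_amp)

# Pairwise Mann-Whitney U test between two conditions.
u_stat, p_mwu = mannwhitneyu(mutant_amp, sibling_amp,
                             alternative="two-sided")
print(f"Kruskal-Wallis p = {p_kw:.4g}; Mann-Whitney U p = {p_mwu:.4g}")
```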
"Biology"
] |
Engineering nonlinear optical phenomena by arbitrarily manipulating the phase relationships among the relevant optical fields
Nonlinear optical processes are intrinsically dominated by the phase relationships among the relevant electromagnetic fields, including the phase of the nonlinear polarization produced in them. If one can arbitrarily manipulate these phase relationships at a variety of desired interaction lengths, direct and highly designable manipulations of the nonlinear optical phenomenon can be achieved. Here, we report a proof-of-principle experiment in which a high-order Raman-resonant four-wave-mixing process is used as a representative nonlinear optical process and is tailored to a variety of targets by implementing such arbitrary manipulations of the phase relationships in the nonlinear optical process. We show that the output energy is accumulated into a specific, intentionally selected Raman mode on demand; at the opposite extreme, we can also distribute the output energy equally over broad high-order Raman modes in the form of a frequency comb. This concept in nonlinear optical processes enables an attractive optical technology: a single-frequency tunable laser broadly covering the vacuum ultraviolet region, which will pave the way to frontiers in atomic-molecular-optical physics in the vacuum ultraviolet region. Adaptive optics permits control of linear and nonlinear optical phenomena in order to achieve the desired output signal. Here, arbitrary manipulation of phase relationships is used to engineer nonlinear interactions in a higher-order Raman-resonant four-wave-mixing platform.
Since the field of nonlinear optics was established by Bloembergen et al. 1,2, one of its core challenges has been how nonlinear optical phenomena excited at each point in space can be coherently accumulated (phase-matched) over a long interaction length, because nonlinear optical processes are intrinsically inefficient. Previous research has established a variety of phase-matching technologies, including use of the birefringence properties of crystals 3,4 and implementation of periodic structure in a medium (quasi-phase-matching, QPM) 1,2,5,6. These achievements have defined a route for applying nonlinear optics to practical uses, including industrial applications such as laser processing and biomedical lasers.
Over the past two decades, a variety of methods in which engineered functions are incorporated into nonlinear optical processes have been investigated, opening a new chapter of nonlinear optics. Such methods include QPM techniques that incorporate a variety of periodic [7–9] or non-periodic structures [10–13] for operating more than two nonlinear optical processes within a single device, and gas-filled hollow-core fibers or photonic crystal fibers in which refractive-index dispersions are designed to enhance a specific wavelength region in high-harmonic 14,15 or supercontinuum generation 16. Also, an angled-beam geometry in two dimensions has been numerically investigated to manipulate broad Raman generation in solid hydrogen 17. Three-dimensional photonic crystals, including a metamaterial as a constituent element, have also been investigated to engineer nonlinearly converted radiations into a variety of specific emission directions in which the reciprocal lattice vectors assist in phase matching 18,19. Furthermore, other techniques involve zero-index materials that enable phase-matching-free nonlinear optical processes in all directions, demonstrating simultaneous forward and backward four-wave mixings [20–22]. The common conceptual idea among all these methods is that each realizes a specific function by implementing an additional design in a coherent accumulation (phase-matching) process of nonlinear optical phenomena (Fig. 1a).
In contrast, there is another approach to incorporating engineered functions into nonlinear optical processes, in which we directly manipulate the scheme of the nonlinear optical process itself by manipulating the phase relationships among the relevant electromagnetic fields (Fig. 1b). This approach has not been studied extensively so far, even though it has been implicit in the formal description of nonlinear optical processes since the birth of nonlinear optics that they are intrinsically affected by the phase relationships among the relevant electromagnetic fields. This approach is more direct and more designable with respect to engineering nonlinear optical processes. In fact, on the basis of the concept in Fig. 1b, Zheng and Katsuragawa 23 showed theoretically and numerically that a high-order Raman-resonant four-wave-mixing (Rr-FWM) process can be tailored to a variety of targets, leading to a single-frequency tunable laser entirely covering the vacuum ultraviolet (VUV) region from 120 to 200 nm as one of the attractive applications. Furthermore, Ohae et al. 24 experimentally demonstrated that the Rr-FWM process in gaseous para-hydrogen could be tailored on the basis of this concept.
Here, we again focus on the Rr-FWM process as a representative nonlinear optical process in investigating the concept in Fig. 1b. Although Ohae et al. 24 demonstrated the concept, the efficiencies of the Rr-FWM generation were on the order of 10−3, limiting the Rr-FWM process to first-Stokes and first-anti-Stokes generation, because the experiment was executed at room temperature (non-optimal) owing to a technical limitation. To overcome this limitation, we developed a Raman cell system (Fig. 1c) that can arbitrarily manipulate the phase relationships among the relevant electromagnetic fields at a variety of interaction lengths in a nonlinear optical medium held at liquid nitrogen temperature (optimal). With this Raman cell system, we experimentally tailor the Rr-FWM process, with efficiencies of tens of percent, to a variety of targets; namely, we demonstrate accumulation of the output energy into a selected single Raman mode on demand and, at the opposite extreme, an equal distribution of the output energy over high-order Raman modes in the form of a frequency comb. These results are also well reproduced in numerical calculations and are discussed with respect to the physical mechanism.
Results and discussion
Phase-engineered Raman-resonant four-wave-mixing process. First, we describe the theoretical framework by which the high-order Rr-FWM process can be engineered by implementing relative-phase manipulations during its evolution. Figure 1d, e represents the Rr-FWM process. In this process, a high Raman coherence, ρ_ab, is adiabatically driven by precisely controlling the two-photon detuning, δ, where a pair of long-pulse laser fields at the two wavelengths, Ω_0 and Ω_−1, are applied [25–29]. The produced molecular ensemble with high Raman coherence functions as an ultra-high-frequency phase modulator and thereby deeply modulates arbitrary incident laser radiation, Ω^T_0 [28–30]. This optical process simultaneously generates a substantial magnitude of high-order Rr-FWM radiations (high-order Stokes, Ω^T_−q, and anti-Stokes, Ω^T_+q, modes) coaxially along the incident laser beam, Ω^T_0, without being restricted by angle phase matching.
Maxwell-Bloch equation. A standard framework based on the Maxwell-Bloch equations describes this Rr-FWM process well (see Methods). Equation 1 represents one of the coupled propagation equations, expressing the generation of the qth Raman mode. To clearly visualize the role of the relative phase, Δφ^T_q, in this nonlinear optical process, we express the electric-field amplitude at the qth mode, E^T_q, and the Raman coherence, ρ_ab, in terms of their phases, φ^T_q and φ_ρ, respectively; the relative phase, Δφ^T_q, is then defined from these phases. The other terms are as follows: n^T_q and ω^T_q denote the photon number density and the angular frequency at Ω^T_q, respectively; d^T_q is a coupling coefficient between modes Ω^T_q and Ω^T_{q+1}; z is the interaction length; N is the medium density; ħ is the reduced Planck constant; ε_0 is the vacuum permittivity; c is the speed of light.
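The display equations for Eq. 1 and the phase definitions were lost in extraction. A schematic reconstruction consistent with the description that follows — the prefactor and the sign conventions are assumptions of this sketch — reads:

```latex
% Schematic reconstruction of Eq. (1) and the relative-phase definition;
% prefactor and sign conventions are assumptions of this sketch.
\begin{align}
  \frac{\partial n^{T}_{q}}{\partial z}
  &\propto \frac{N \hbar\, \omega^{T}_{q}}{\varepsilon_{0} c}\,
    \lvert \rho_{ab} \rvert
    \left[ d^{T}_{q-1} \sqrt{n^{T}_{q-1}\, n^{T}_{q}}\,
           \sin \Delta\phi^{T}_{q}
         - d^{T}_{q} \sqrt{n^{T}_{q}\, n^{T}_{q+1}}\,
           \sin \Delta\phi^{T}_{q+1} \right], \\
  \Delta\phi^{T}_{q} &= \phi^{T}_{q-1} + \phi_{\rho} - \phi^{T}_{q} .
\end{align}
```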
The first term on the right-hand side of Eq. 1 implies energy flow of the electromagnetic fields from Ω^T_{q−1} to Ω^T_q, and the second term from Ω^T_{q+1} to Ω^T_q. According to the signs of the relative phases, Δφ^T_q and Δφ^T_{q+1}, the directions of these energy flows change, and their speeds vary depending on their values over a full dynamic range of −π to +π. Therefore, we can tailor the obtained nonlinear optical phenomenon toward a variety of targets by providing the freedom to arbitrarily manipulate the directions of these energy flows, including their flow speeds, at a variety of interaction lengths desired in the evolution of this phenomenon (Fig. 1f–i).
Fig. 1 Conceptual illustrations of the engineered Raman-resonant four-wave-mixing (Rr-FWM) process. Conceptual comparison between engineering approaches in nonlinear optical processes: a manipulation of the spatial phases, k(ω, r); b manipulation of the relative phases, φ(ω, r). c Schematics: the apparatus is a gas cell filled with gaseous para-hydrogen and cooled to 77 K by liquid nitrogen. Six dispersive plates (fused silica) are inserted in the interaction region, where the relative phases, Δφ^T_q, are manipulated by precise control of the effective optical thicknesses of the plates. d Scheme of the adiabatic excitation of the vibrational coherence, ρ_ab, at the pure vibrational Raman transition |ν″ = 1⟩–|ν′ = 0⟩. e Scheme of the high-order Rr-FWM process initiated by laser radiation, Ω^T_0. The relative phases, Δφ^T_q, dominate the photon flow between the adjacent Raman modes, Ω^T_q and Ω^T_{q−1}. f–i Expected high-order Stokes and anti-Stokes generation in the engineered Rr-FWM processes obtained by manipulating the relative phases, Δφ^T_q: f no manipulation applied; g, h output energy accumulation onto a specific Raman mode; i broad comb-like generation.
On-demand phase manipulation among many discrete spectral modes. To implement such phase manipulations in a nonlinear optical medium, one must determine how to practically realize a physical mechanism that can simultaneously set the many relevant phases among the high-order Raman modes to arbitrary values. Previous work showed that a simple device, in which the optical thickness of a transparent dispersive plate is precisely tuned over a relatively large plate thickness, can manipulate the relative phases among many discrete spectral modes nearly on demand [31][32][33][34]. This method includes the high-order temporal Talbot effect as one of its many solutions 35. Here, we use this tunable plate-thickness technology to arbitrarily manipulate the relative phases of interest, Δϕ_q^T (Fig. 1c).
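The sketch below estimates how strongly a small tilt of one fused-silica plate shifts the field-phase part of Δϕ_q^T (the ϕ_ρ term is unaffected by the plate). It uses the standard room-temperature Sellmeier fit for fused silica and the simple internal-path model t/cos θ_r; both are our assumptions for illustration, since the cell is at 77 K and the exact phase bookkeeping in the experiment is more involved.

```python
import numpy as np

def n_silica(lam_um):
    # Room-temperature Sellmeier fit for fused silica (Malitson);
    # the plates sit at 77 K, so the actual indices differ slightly.
    l2 = lam_um ** 2
    return np.sqrt(1 + 0.6961663 * l2 / (l2 - 0.0684043**2)
                     + 0.4079426 * l2 / (l2 - 0.1162414**2)
                     + 0.8974794 * l2 / (l2 - 9.896161**2))

def plate_phase(lam_um, theta_deg, t_mm=5.0):
    # Phase accumulated on the refracted path t/cos(theta_r) inside the plate.
    n = n_silica(lam_um)
    theta_r = np.arcsin(np.sin(np.radians(theta_deg)) / n)
    return 2 * np.pi * n * (t_mm * 1e-3 / np.cos(theta_r)) / (lam_um * 1e-6)

lams = {-1: 0.4806514, 0: 0.4005408}       # Omega_-1^T and Omega_0^T (um)
for dth in (0.0, 0.015):                   # 0.015 deg = stated angular resolution
    phi = {q: plate_phase(l, 5.0 + dth) for q, l in lams.items()}
    print(f"tilt 5 deg + {dth}: (phi_0 - phi_-1) mod 2pi "
          f"= {(phi[0] - phi[-1]) % (2 * np.pi):.3f} rad")
# One resolution step moves the relative phase by a tangible fraction of a
# radian, so a +/-7.5 deg scan sweeps each relative phase over many cycles.
```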
Experimental. We developed a Raman cell in which six transparent dispersive plates (fused silica, 5 mm thick) were installed in a nonlinear optical medium refrigerated to a liquid-nitrogen temperature of 77 K. The effective optical thickness of each plate was electronically controlled by adjusting its incident angle from outside the cell (Fig. 1c) so as to arbitrarily manipulate the relative phases, Δϕ_q^T. We used gaseous para-hydrogen (purity > 99.9%) as the nonlinear optical medium, with the density controlled at 4.2 × 10^19 cm−3 at a temperature of 77 K (Lamb-Dicke regime; experimentally found), providing the smallest inhomogeneous broadening (<200 MHz) at the pure vibrational Raman transition, |ν″ = 1⟩-|ν′ = 0⟩ (124.7460 THz). We generated a pair of single-frequency nanosecond pulsed laser radiations at 801.0817 nm (Ω_0) and 1201.6375 nm (Ω_−1) 33 that overlapped in time and space (pulse duration of 6 ns), softly focused them into the medium (waist diameter of 200 μm), and then adiabatically drove a vibrational coherence, ρ_ab (δ = 0.5 GHz). We used the second harmonic, Ω_0^T (400.5408 nm, 6 ns), of the driving laser radiation, Ω_0, as the initiation laser for the Rr-FWM process. The initiation laser radiation, Ω_0^T, was temporally and spatially overlapped with the driving laser radiations, Ω_0 and Ω_−1, and introduced into the medium, with the polarization of Ω_0^T set orthogonal to those of Ω_0 and Ω_−1. A series of high-order Stokes and anti-Stokes radiations, Ω_−2^T (600.8188 nm), Ω_−1^T (480.6514 nm), Ω_+1^T (343.3195 nm), Ω_+2^T (300.4038 nm), and Ω_+3^T (267.0250 nm), was generated via the vibrational coherence, ρ_ab, and was manipulated in a variety of ways by controlling the phase relationships among the high-order Raman modes, Ω_q^T. These high-order Raman radiations, Ω_q^T, were picked out at appropriate interaction lengths in the evolution of the process and detected by photodiodes with relatively calibrated sensitivities.
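As a quick cross-check of these numbers (our own arithmetic), the listed mode wavelengths form an equally spaced frequency comb: each mode sits at the Ω_0^T frequency plus q times the difference frequency of the two driving fields.

```python
c = 299_792.458          # speed of light in nm*THz
nu_p = c / 801.0817      # Omega_0 frequency (THz)
nu_s = c / 1201.6375     # Omega_-1 frequency (THz)
dnu = nu_p - nu_s        # ~124.748 THz, close to the 124.7460 THz transition
nu_T0 = 2 * nu_p         # second harmonic -> 400.5408 nm
for q in range(-2, 4):
    print(f"q = {q:+d}: {c / (nu_T0 + q * dnu):9.4f} nm")
# Reproduces 600.8188, 480.6514, 400.5408, 343.3195, 300.4038, 267.0250 nm.
```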
Physical mechanism in engineering the Rr-FWM process. A high vibrational coherence, ρ_ab, was adiabatically driven, and thereby a series of high-order Stokes and anti-Stokes radiations, Ω_±1^T, Ω_±2^T, …, was generated coaxially without being restricted by angle phase matching.
First, we studied how the relative-phase manipulation implemented in this nonlinear optical process functioned in reality by comparing the resultant Stokes and anti-Stokes generation with that calculated in numerical simulations. For this purpose, we focused on the generation of the first Stokes and anti-Stokes modes, where only two dispersive plates (A_1 and B_1 in Fig. 2a) were used. Figure 2b, c shows contour plots of the photon number densities of the generated first Stokes, Ω_−1^T (Fig. 2b), and anti-Stokes, Ω_+1^T (Fig. 2c), modes as a function of the angles of the two inserted dispersive plates, A_1 and B_1, corresponding to manipulations of their effective optical thicknesses (resolution of 0.015°, scanning range of ±7.5°). In these plots, a value of 1 represents unity quantum conversion efficiency. Enhancement and suppression of the mode densities at Ω_−1^T and Ω_+1^T occurred almost periodically, whereas the near-maximal densities appeared irregularly. These observed features were well reproduced in the corresponding numerical simulations (Fig. 2d, e), including the appearance rates of the near-maximal densities. That is, the numerical simulations captured the intrinsic physical nature of this engineered nonlinear optical process, although they did not perfectly reproduce all properties, including the absolute values, which should depend on the initial phases and the initial absolute plate thicknesses.
Although it is difficult to measure the phase relationships among the Raman modes, Ω_q^T, in the experiment, especially during the evolution of the nonlinear optical process, we can discern these phase relationships in detail from the numerical calculation, since the simulations reproduce the intrinsic physical nature of the engineered Rr-FWM process well (Fig. 2f, g). The periodic behavior of the observed mode densities was due mainly to the inversion of the signs of the relative phases, Δϕ_0^T or Δϕ_1^T (see Eq. 1). The irregular appearances of the near-maximal densities arose from finer details: more than three relative phases (Δϕ_−1^T, Δϕ_0^T, Δϕ_1^T in the case of the Raman mode Ω_−1^T) were non-negligibly involved, with different sign-inversion periodicities and different magnitudes that determine the speeds of the relevant photon flows (see Supplementary Fig. S1 for details including the higher-order Raman modes).
In Fig. 2h-j, we show how the relative-phase relationships among the high-order Raman modes, Δϕ_q^T, were manipulated along the interaction length at the point marked by the white cross in Fig. 2d, where the maximal mode density at Ω_−1^T was found within the explored range. In Fig. 2k-m, we illustrate the flows of the relevant photon number densities governed by Δϕ_q^T, revealing that the phase relationships were successfully manipulated at interaction lengths Z_1 and Z_2 so that the photons accumulated in the first Stokes mode, Ω_−1^T. Here, the relative phases Δϕ_0^T and Δϕ_1^T were manipulated to have negative signs with large amplitudes; therefore, the photon flows were accelerated from Ω_0^T to Ω_−1^T and from Ω_1^T to Ω_0^T, respectively. Although Δϕ_−1^T was not optimally set for maximizing the mode density at Ω_−1^T, it was manipulated so that the photon flow from Ω_−1^T to Ω_−2^T was negligible. The photon flows from Ω_+2^T to Ω_+3^T and the manipulations of Δϕ_2^T and Δϕ_3^T were ignored here, as the corresponding mode densities were nearly zero.
Up to this point, we confirmed how the physical mechanism of the relative phase manipulations implemented in nonlinear optical processes can work in reality by comparing the fundamental experiment with the detailed numerical simulation.
Application of the engineered Rr-FWM process
On-demand enhancement of a selected Raman mode. The concept of relative-phase manipulation in nonlinear optical processes has generality; we can apply it to a variety of purposes 23. In Fig. 3a-p, we applied this physical principle to targets where the output energy in the Rr-FWM process was accumulated onto a specific Raman mode selected from the range between the second Stokes mode, Ω_−2^T, and the second anti-Stokes mode, Ω_+2^T. For this purpose, we implemented engineered phase manipulations comprising three layers along the interaction length in the Rr-FWM process, each layer having a pair of transparent dispersive plates, as illustrated in Fig. 3a. The contour plots in Fig. 3b, c are the same as those in Fig. 2b, c. The white cross in each figure indicates the point at which the photon number density (normalized to that at Ω_0^T) was maximized at the first Stokes mode, Ω_−1^T, in Fig. 3b, or the first anti-Stokes mode, Ω_+1^T, in Fig. 3c, within the explored range. We fixed the phase manipulations at these conditions as the optimal first-layer solutions for the targets described above.
Next, we moved to the second layer (Fig. 3d-g). Here, again using two dispersive plates, A_2 and B_2, we engineered the photon flows to extend them, starting from the optimal first-layer solutions, to each of the four Raman modes, Ω_−2^T to Ω_+2^T, and maximized each of the four Raman mode densities: Ω_−2^T in Fig. 3d, Ω_−1^T in Fig. 3e, Ω_+1^T in Fig. 3f, and Ω_+2^T in Fig. 3g. In Fig. 3d, we substantially enlarged the magnitude of the photon flow to the mode Ω_−2^T from Ω_−1^T (maximized in Fig. 3b). We also simultaneously enhanced the photon flows to Ω_−1^T from Ω_0^T and to Ω_0^T from Ω_+1^T. Conversely, in Fig. 3g, the directions of these photon flows were reversed; that is, we enlarged the photon flows from Ω_+1^T to Ω_+2^T, from Ω_0^T to Ω_+1^T, and from Ω_−1^T to Ω_0^T. In Fig. 3e, f, we again implemented the same manipulations as in the first layer. The white crosses in Fig. 3d-g indicate the conditions that maximized the photon number densities for each of the four Raman modes in the explored range, which were then fixed as the optimal solutions of the second layer.
Finally, in the third layer, we repeated the same conceptual phase manipulations as in the second layer with the two dispersive plates A_3 and B_3, thereby completing the output energy accumulation onto each of the four Raman modes in the Rr-FWM process. The white crosses in Fig. 3h-k indicate the optimal solutions, i.e., the conditions giving maximal photon number densities in the final layer: Ω_−2^T in Fig. 3h, Ω_−1^T in Fig. 3i, Ω_+1^T in Fig. 3j, and Ω_+2^T in Fig. 3k. Figure 3m-p shows the photon number density distributions among the Raman modes ultimately achieved through this phase engineering. When no manipulation was applied (Fig. 3l; dispersive plates not inserted), the nonlinear optical process intrinsically evolved to broaden into both the positive and negative high-order Raman modes. By implementing the three relative-phase manipulation layers in the Rr-FWM process, we achieved significant enhancements of specific single-Raman-mode densities: Ω_−2^T: 0.22 ± 0.01 (no manipulation (nom): 0.047), Ω_−1^T: 0.50 ± 0.03 (nom: 0.18), Ω_+1^T: 0.50 ± 0.02 (nom: 0.13), and Ω_+2^T: 0.29 ± 0.02 (nom: 0.038). The light red bars overlaid in Fig. 3m-p show the mode densities at the targets, simulated numerically with the optimal phase manipulations.
Generation of broad comb-like Raman modes. As already noted, the physical concept of the engineered nonlinear optical process described here can be applied to a variety of purposes. In Fig. 3q-t, we examined another target, the opposite extreme of those pursued in Fig. 3m-p: the generation of an equal photon number density distribution over broad high-order Raman modes. Compared with the no-manipulation scenario in Fig. 3l, a very flat photon number density distribution in the form of a frequency comb was realized via the same three layers of relative-phase manipulations: Ω_−2^T: 0.19 ± 0.02, Ω_−1^T: 0.19 ± 0.02, Ω_0^T: 0.22 ± 0.01, Ω_+1^T: 0.17 ± 0.01, Ω_+2^T: 0.14 ± 0.01, and Ω_+3^T: 0.10 ± 0.01. The produced spectrum was phase coherent in time and space 33, enabling the generation of ultrafast pulses with a temporal duration of 1.2 fs at an ultrafast repetition rate of 125 THz (inset in Fig. 3t).
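A short Fourier-synthesis sketch (our own illustration) shows how the measured six-mode comb supports such pulses; it assumes a flat spectral phase across the modes, consistent with the stated phase coherence.

```python
import numpy as np

dnu = 124.748e12                  # mode spacing (Hz): the Raman interval
n_q = np.array([0.19, 0.19, 0.22, 0.17, 0.14, 0.10])  # densities, q = -2..+3
amp = np.sqrt(n_q)                # field amplitudes; flat spectral phase assumed
q = np.arange(-2, 4)

t = np.linspace(-4e-15, 4e-15, 8001)            # one comb period is ~8 fs
field = (amp[:, None] * np.exp(2j * np.pi * q[:, None] * dnu * t)).sum(axis=0)
I = np.abs(field) ** 2                          # envelope (carrier factored out)

sel = t[I >= I.max() / 2]
print(f"FWHM ~ {(sel.max() - sel.min()) * 1e15:.2f} fs, "
      f"train period = {1e15 / dnu:.2f} fs")    # roughly 1.2 fs at 8.0 fs spacing
```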
Fig. 3 (caption excerpt) b-k Contour plots of the photon number densities of the Raman modes, Ω_−2^T to Ω_+2^T, observed after each of the three relative-phase manipulation layers (1st: b, c; 2nd: d-g; 3rd: h-k). The crosses in the contour plots indicate the optimal conditions explored in each of the three layers for each target, maximizing a specific Raman mode density from Ω_−2^T to Ω_+2^T. l-p Photon number density distributions among the Raman modes, Ω_q^T (q = −2 to 3; Ω_−2^T: 600.8188 nm, Ω_−1^T: 480.6514 nm, Ω_0^T: 400.5408 nm, Ω_+1^T: 343.3195 nm, Ω_+2^T: 300.4038 nm, Ω_+3^T: 267.0250 nm), finally achieved at the apparatus exit; for l, no manipulation was applied; for m-p, the plates were engineered toward each target to maximize a specific Raman mode density (red bars) from Ω_−2^T to Ω_+2^T. The overlaid light red bars show the photon number densities calculated by the numerical simulations. q-t Generation of an equal photon number density distribution over broad high-order Raman modes, Ω_−2^T to Ω_+2^T. The measurements were conducted at four interaction lengths, Z_1 (q), Z_3 (r), Z_5 (s), and Z_7 (t), at the exit of the cell; this target is the opposite extreme of those executed in m-p. Each error bar in l-t represents the standard deviation of one hundred measurements of the respective Raman mode density.

Discussion on the conceptual differences from related studies. Last, we briefly comment on the conceptual differences between previous studies and the work we present here. In the terminology of adaptive optics, a large number of studies involving the control of nonlinear optical phenomena have been reported. The key concept in such works is the implementation of an artificial design in the distributions of amplitude, phase, or polarization of the incident light to achieve the optimal solution for a specific target, where feedback control is applied to the light at incidence and the light-matter interaction is, in general, treated as a black box [36][37][38][39]. Recently, Tzang et al. demonstrated the manipulation of a multimode stimulated Raman scattering cascade in a multimode fiber by applying adaptive-optic control to the wavefront of the incident light, thereby substantially shaping the angle phase-matching conditions 40. In addition, the conceptual idea of metamaterials, i.e., designing optical susceptibility tensors including their phases by creating artificial structures at the nanoscale, has been extended to nonlinear optical processes, and many works on nonlinear beam shaping, including nonlinear optical holograms, have been reported [41][42][43].
Conclusion
We conducted a proof-of-principle experiment showing that nonlinear processes can be tailored in a variety of ways by manipulating the relative phases among the relevant electromagnetic fields during their evolution. A high-order Rr-FWM process was used as a representative nonlinear process and was tailored to a variety of targets by installing practical phase-manipulation devices (thickness-controlled dispersive plates) in three layers in gaseous para-hydrogen controlled in the Lamb-Dicke regime (77 K, 4.2 × 10^19 cm−3). We showed that a specific, intentionally selected Raman mode can be enhanced on demand with a maximum photon number density of 0.5. We also demonstrated the opposite extreme, in which the photon number densities were distributed equally over the high-order Raman modes in the form of a frequency comb.
The physical concept described here is simple and general, paving the way to incorporating engineered functions into nonlinear optical processes. An attractive potential application of this technology is the realization of a single-frequency tunable laser entirely covering the VUV region from 120 to 200 nm, as suggested in a previous study 23 where Ω_0^T was set at 210 nm. Mature single-frequency tunable solid-state laser technology in the near-infrared region set a major trend in atomic-molecular-optical (AMO) physics, leading from the initiation of laser cooling to the realization of Bose-Einstein condensation. If an equivalent laser technology can be established in the VUV region, frontiers of AMO physics, such as the laser cooling of antimatter (Lyman α: 121.6 nm) 44 or optical frequency standards locked to nuclear transitions (Th, 149 nm) 45,46, may be deeply explored.
Methods
Lasers and operating conditions. The Ω_0 laser radiation was generated by an injection-locked nanosecond pulsed Ti:sapphire laser (repetition rate: 10 Hz), with a homemade external-cavity-controlled continuous-wave diode laser (800 nm) as the seed laser. The Ω_−1 laser radiation was generated by an injection-locked optical parametric oscillator (OPO) followed by an optical parametric amplifier (OPA), with the Ω_0 laser radiation used as the pump for both the OPO and the OPA. A homemade external-cavity-controlled continuous-wave diode laser (1200 nm) was used as the seed laser for the OPO. The third laser radiation, Ω_0^T, was generated by taking the second harmonic of the driving laser radiation, Ω_0. The temporal overlap of the three laser radiations (Ω_0, Ω_−1, and Ω_0^T) was very stable (pulse duration: 6 ns), as all of them were provided by the single Ti:sapphire laser system. We also coaxially overlapped the three laser beams and softly focused them into the Raman cell. The beam diameters at the waist were set to 220 μm at 1/e² for Ω_−1 and Ω_0 and to 90 μm at 1/e² for Ω_0^T. We set the wavelengths of the two driving laser radiations to 1201.6375 nm for Ω_−1 and 801.0817 nm for Ω_0 and adiabatically drove the molecular coherence, ρ_ab [25][27][28][29][30]33, with the frequency difference of the two driving laser radiations set so that the two-photon detuning, δ, from the vibrational Raman transition (|b⟩: |ν″ = 1⟩, |a⟩: |ν′ = 0⟩) was optimal (0.5 GHz). The pulse energies of the two driving laser radiations were adjusted to 5.3 mJ for Ω_−1 and 5.0 mJ for Ω_0.
Theoretical framework: Maxwell-Bloch equation. The high-order Rr-FWM process is well described by the standard framework of the Maxwell-Bloch equations. In the far-off-resonant Λ-scheme, the entire system reduces to a two-level system with an effective Hamiltonian in which Ω_aa and Ω_bb are the ac Stark shifts of the ground state, |a⟩, and the excited state, |b⟩, respectively, and Ω_ab is the effective two-photon Rabi frequency. Equation 6 gives the equations of motion of this reduced two-level system in the density-matrix formalism; here, ρ_aa and ρ_bb are the populations of the ground state, |a⟩, and the excited state, |b⟩, respectively, and ρ_ab is the coherence associated with the Raman transition between |a⟩ and |b⟩. The coefficients γ_a, γ_b, and γ_c are the decay rates of the populations ρ_aa and ρ_bb and of the coherence ρ_ab, respectively. The coupled propagation equation for the complex electric-field amplitude, E_q (qth Raman mode), propagating in the z direction is expressed in the slowly varying envelope approximation (Eq. 7), where N is the molecular density of para-hydrogen, ω_q is the angular frequency of the qth Raman mode, ℏ is the reduced Planck constant, and ε_0 is the vacuum permittivity. The parameters a_q and b_q determine the dispersion of para-hydrogen, and d_q determines the coupling strength between neighboring Raman modes. To show the role of the relative phase more explicitly, we transformed Eq. 7 into Eqs. 8 and 9 by writing the complex electric-field amplitude and the molecular coherence as E_q^T = |E_q^T| e^{iϕ_q^T} and ρ_ab = |ρ_ab| e^{iϕ_ρ}, respectively, and by using the photon number density, n_q^T ∝ |E_q^T|²/(ℏω_q^T), and the relative phase, Δϕ_q^T. The relative phases, Δϕ_q^T and Δϕ_{q+1}^T, are defined as

Δϕ_q^T ≡ ϕ_q^T − ϕ_{q−1}^T + ϕ_ρ. (10)

Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Code availability
The codes that support the findings of this study are available from the corresponding author upon request.
Characteristics and Determinants of Domestic Food Waste: A Representative Diary Study across Germany
As is the case in many industrialized countries, household food waste accounts for a large share of total food waste in Germany. Within this study, the characteristics of edible and inedible domestic food waste, the reasons for discarding food, and the potential influence of socio-demographic factors on food waste generation are assessed. A data set of 6853 households who participated in a diary study in 2016 and 2017 was analyzed by use of descriptive statistics, parametric tests, and linear regression. The results indicate that perishable products such as vegetables, fruits, and bread are mainly affected by disposal. Moreover, household food waste occurs due to quantity problems at purchase for small households and quantity problems at home for larger households and households with children. Despite statistically significant differences in food waste amounts between household lifecycle stages, age of the head of household, household size, and size category of the municipality, socio-demographic factors have limited power in predicting a household's food waste level. The study has important implications for food waste policy and research regarding food waste prevention measures, quantification methodologies, and monitoring implementation.
Introduction
Domestic food waste is highlighted by Sustainable Development Goal 12.3 as one of the food waste streams that should be reduced by 50% by 2030 [1]. This focus is justified because, in industrialized regions, households contribute the highest share of food waste compared to other stages of the food supply chain (FSC). At the same time, the invested resource inputs, corresponding emissions, and impacts on the environment accumulate along the entire FSC until food reaches consumers [2,3]. Thus, the prevention of food waste at the very end of the FSC seems especially desirable and effective. The design of a proper framework, strategy, and bundle of prevention measures to tackle household food waste requires comprehensive information on the generation of, characteristics of, and factors influencing domestic food waste. Nevertheless, as households differ in socio-demographic characteristics and behave very differently (due to external framework conditions, past and present experiences, knowledge, motivation, lifecycle status, etc.), the collection of representative data sets requires great effort. The first research study on household food waste was conducted as early as 1895, and research intensity has increased enormously in recent years [4]. According to Xue et al. [3], up to 2015, 49% of the screened global literature on food loss and waste targeted domestic food waste. Nevertheless, there is still a lack of representative, reliable primary data at the household level related to the generation of food waste and especially to the complex interaction of individuals and existing framework conditions leading to domestic food waste [3]. Thus, research analyzing households is still necessary in order to close this gap.
Data Set
The sample was drawn from the ConsumerScope Panel of GfK SE, whose participants are already familiar with the diary reporting procedure. For each month, a sample representative of the Federal Republic of Germany (min. 500 households) was selected according to the criteria of the Federal Bureau of Statistics applied in the frame of the micro-census, namely region, age of the head of household, and household size. The necessary material to undertake a diary survey (such as a paper-and-pencil diary, operating instructions, and Supplementary Materials) was sent to the selected households. The response rate of these households was 85%. The pool of responsive households was again adjusted with regard to the criteria of region, age of the head of household, and household size to prevent skewness of the sample.
Each household participating in the survey recorded all food and drink waste (further called "food waste") accruing within the household over a period of 14 days. Each month, a different set of households reported for such a 14-day-period. In total, 6853 households reported their food waste within the study, which means on average 571 households per month. In addition to the mass of the discarded food items, a set of further characteristics of the wasted food, as well as of the household itself, was selected and surveyed for each disposal act (Table 1). For further details on the surveyed socio-demographic characteristics, see also Tables A1 and A2 in the Appendix A. Table 1. Characteristics of wasted food and socio-demographic characteristics of the households sampled within the survey.
Characteristics of wasted food: food waste mass per disposal act; product group; disposal route; food condition at disposal; disposal reason.
Socio-demographic characteristics of the sample household (1): household size; household lifecycle stage; age of head of household; size category of municipality.
(1) Detailed characteristic values are presented in the results part and in Tables A1 and A2.

Moreover, all food waste had to be classified as edible or inedible, in the sense that, for example, the peel of certain fruits and vegetables, such as banana or watermelon, is generally presumed to be inedible. However, the classification of food waste as edible or inedible took place without any clear definition and hence was at the participants' discretion. To simplify the classification, examples of inedible food fractions were listed in the diary material, including peels and cores of fruit and vegetables, bones, skin, cheese rinds, coffee grounds, and tea bags. To determine the mass of food waste per disposal act, the participants could decide for themselves whether to measure or estimate the mass or volume or to indicate the number of pieces discarded. A conversion sheet (piece to mass) was provided with the survey diary for estimation. In the case of piece indication, the respective mass was calculated subsequently by GfK SE with the aid of a conversion table. The final data set was provided by BMEL to the authors.
Extrapolation
An extrapolation to national scale is valid since the sample households were selected representatively based on the criteria mentioned above, and the extrapolation was carried out according to Equation (1). Based on the assumption that the 14-day sampling period can be seen as representative for the respective month, the total mass of food waste in the Federal Republic of Germany was calculated per year. To support this assumption, the sampled households were asked to select a 14-day period that represents the common behavior of their household. The individual weighting factor (f, Equation (1)) weights each household according to the respective characteristic values of the representative criteria mentioned above. This means that households which represent the population more accurately than others are assigned a higher weight. The weighting factor was based on the household characteristics presented in Table S1. Although there are indications of underreporting, no arithmetical adjustment of the data was made for this paper apart from extrapolating the reported data to annual waste quantities. The problem of underreporting is discussed further in Section 4.
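Equation (1) itself is not reproduced here, but the described procedure amounts to a weighted mean scaled to the national household count and a full year. A minimal sketch with invented masses and weights:

```python
import numpy as np

# Illustrative reported masses (kg per 14 days) and representativeness
# weights f; the real study uses Equation (1) with criteria-based weights.
m = np.array([2.1, 4.0, 3.3, 0.9])
f = np.array([1.2, 0.8, 1.1, 0.9])

N_households = 41.3e6                       # households in Germany
mean_14d = np.average(m, weights=f)         # weighted 14-day mean per household
annual_total_Mt = mean_14d * (365 / 14) * N_households / 1e9
print(f"extrapolated total: {annual_total_Mt:.2f} million t/a")
```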
Statistical Analysis
The statistical analysis was carried out in three subsequent steps. In a first step, a descriptive analysis of the data set was carried out to get a general overview of characteristics, such as edibility, product group, condition at disposal and disposal route.
In a second step, the reasons of households for discarding food were examined with regard to product groups, household size groups and lifecycle stages. This focus was selected because the disposal reason has major implications for potential prevention actions. The first two steps were carried out with the edible fraction of food waste only, which is of specific interest regarding potential policy and prevention measures.
The third step was undertaken with the whole data set, including the edible and inedible fractions of food waste, following the guidelines for food waste monitoring set by the European Commission. Potential drawbacks of the monitoring methodology suggested by the European Commission may thus be detected. Within this step, an explorative analysis (boxplots) was applied in the statistical software R Studio (R) to get an overview of the food waste masses of households with regard to the sampled socio-demographic household characteristics. Subsequently, inductive statistics were applied in R to test for statistically significant differences in food waste levels between groups of households. In a final step, potential dependencies of the amount of food waste (dependent variable) on other variables were examined with the aid of weighted multiple linear regression. For both analyses, the specific weighting factors of each household (f, Equation (1)) were applied to ensure inference on the parent population. As both tests demand normal distribution, the original data were transformed by use of the Box-Cox power transformation (Figure S1 in Supplementary Materials). Normal distribution of residuals and homogeneity of variances of the transformed data, as well as of the residuals, were given for all variables.
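As a minimal sketch of the transformation step (synthetic data; the study itself used R, while this illustration uses Python's scipy):

```python
import numpy as np
from scipy import stats

# Synthetic right-skewed waste masses (kg per 14 days), mimicking Figure 1a
rng = np.random.default_rng(1)
waste = rng.lognormal(mean=1.0, sigma=0.6, size=500)

transformed, lam = stats.boxcox(waste)   # lambda chosen by maximum likelihood
print(f"lambda = {lam:.2f}")             # near 0 would correspond to a log transform
print("skewness before/after:",
      round(stats.skew(waste), 2), round(stats.skew(transformed), 2))
```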
Results
The distribution of food waste amounts per household and 14-day period is right-skewed (Figure 1a), indicating that many households reported smaller amounts of food waste and a smaller number of households reported comparably large amounts. A total of 5% of all households reported no food waste at all and a further 12% only inedible food waste (Figure 1b). Another 8% of the sample households indicated having discarded only edible food waste, while the majority of 75% reported both types. Those 346 households reporting no food waste were excluded from any further analysis. The reason was that it was assumed to be very unlikely to have no food waste at all within a period of 14 days, particularly as households on vacation were excluded in advance. A sample of 6507 households remained. The food waste amount per household and 14-day period (excluding households reporting no food waste) ranged up to 37.8 kg, with a mean of 3.4 kg and a median of 2.7 kg. After extrapolation to national scale, the data accumulate to a total amount of household food waste of about 3.7 million tons within the study period of one year. With a total number of 41.3 million households and 82.8 million inhabitants in Germany [17], household food waste sums up to 89.5 kg per household and year and 44.6 kg per person and year, respectively.

Of all food waste generated, 56% was classified as inedible and 44% as edible by the participating households (Figure 2a). Figure 2b-d provides more detailed information on the edible portion of reported food waste. With respect to the product categories (Figure 2b), fresh fruit and vegetables clearly represent the main disposed food categories (both 17.1%), followed by cooked and prepared food (16.2%). Bread and baked goods also represent a product group disposed in large amounts, with 13.8% of all food waste. Animal products such as dairy (9.4%) and meat, fish, and eggs (3.7%) were discarded to a smaller extent. When looking at the condition of edible food at disposal, more than half of all discarded food was described as loose/unpacked, while another 21% was prepared or cooked and 13% in opened packaging. Only 6% was still in its original unopened packaging.

The major disposal routes of the participating households were the organic waste bin, used for 34% of all edible food waste, and, to a similar extent, the residual waste bin with 33%. Another 14% of edible food waste was discarded into the sewer, while 9% and 6% were recycled via home composting and reused as animal feed, respectively. The underlying data regarding absolute numbers of food waste masses are provided in the supplement (Table S2).
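The per-household and per-capita figures follow directly from the extrapolated total; a quick check (our arithmetic, using the rounded 3.7 million t):

```python
total_kg = 3.7e9                        # extrapolated annual total (3.7 million t)
households, people = 41.3e6, 82.8e6     # Germany [17]
print(round(total_kg / households, 1))  # ~89.6 kg per household and year
print(round(total_kg / people, 1))      # ~44.7 kg per person and year
# Matches the reported 89.5 and 44.6 kg up to rounding of the 3.7 Mt total.
```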
Reasons of Disposal in Relation to Socio-demographic and Food Characteristics
An important indicator for the identification of potentials for action is the reason for disposal (Figure 3). Classes and answer options were predefined by GfK SE within the diary and could be answered by the households by indicating their main reason and sub-reason for the disposal of the respective food item. The vast majority of edible foodstuffs (57.6%) was disposed of due to the durability of the product, as indicated by the participants. Most of these products were apparently spoilt, while only 5.8% were disposed of as a consequence of an expired best-before date. A further 21.3% of all food waste was discarded due to a quantity-related problem at home, e.g., too much food had been cooked or prepared within the household. Another 11.9% was wasted as a result of a quantity-related problem at purchase, such as oversized packaging. Only 1.7% of the households indicated that an oversized package was bought because it was cheaper or on offer. Other reasons, such as bad taste, wrong preparation, and wrong storage, play a minor role in the disposal of the recorded food products.
A detailed look at the disposal reasons with respect to household size and structure (see also Table S3) shows that especially small households, such as one-person and older single households (both ca. 16%), discard food products as a result of quantity-related problems at purchase (Figure 4b,c). Moreover, households without children indicate quantity-related problems at purchase as the disposal reason in around 13% of all cases, while only 8% to 9% of households with children listed this as the main reason. Larger households of three persons or more, as well as households with small children, disproportionately often indicate quantity-related problems at home as a major reason for disposal. Single households and young households without children seem to be less affected by this category of disposal reasons. The overall main reason recorded is durability, with a relevance of 54% to 62%, which can be explained by the product types wasted (Figure 4a).
The identification of the main disposal reasons by product group revealed that particularly fresh fruit and dairy products are affected by limited durability, followed by bread and baked goods and fresh vegetables. Quantity-related problems at home occur mainly for cooked and prepared food as well as for beverages such as coffee and tea. Quantity-related problems at purchase, for instance large packaging or portion sizes, disproportionately often result in the disposal of convenience products (including canned food) but also of bread and baked goods. Convenience products are moreover often discarded for "other reasons," which can be traced back to an "accident" (32% of other reasons), such as a freezer defect or infestation, and needed shelf space (25% of other reasons).
Differences in Food Waste Amounts between Socio-Demographic Household Characteristics
A weighted analysis of variance (ANOVA) was performed to test for significant differences in food waste amounts (edible and inedible) between households with distinct socio-demographic characteristics. The individual weighting factors of the households according to the characteristic values (f, Equation (1)) were applied within the analysis. This enables the transfer of results from the sample to the whole population of the Federal Republic of Germany. The boxplots (Figure 5) seem to indicate that between-group differences with respect to household lifecycle stage, age of the head of the household, household size, size category of municipality, and level of education are often negligible in absolute terms. Substantial differences become visible only for household lifecycle stage and household size. Nevertheless, according to the ANOVA, statistically significant differences in the amount of food waste exist between the factor levels of all tested groups (Table 2). The post-hoc Tukey-Kramer test revealed that significantly less food waste occurs in the group of "younger singles; young couples; young families (without children)" and in the group of "older singles" than in all other groups, with a p-value of the ANOVA below 0.001 (Table 2). The household lifecycle groups with children, as well as the older families without children, reported average food waste levels between 3.97 and 4.24 kg per 14 days, while the younger group without children and the older singles reported only about 2.6 kg within the period of two weeks (Figure 5). For head-of-household age groups, only the youngest (up to 39 years) and the oldest group (60 years and older) differ significantly from each other with regard to the amount of household food waste they produce (p < 0.01). However, the oldest household age group, at 3.54 kg/14d on average, discards only 340 g more than the youngest age group. It should be noted here that the medium age group disposes of about 10% and the oldest age group about 20% more inedible food than the youngest age group. An analysis of edible food waste only led to a noticeably different outcome concerning age and lifecycle groups (see also Figures S2 and S3), which will be discussed further in Section 4. With respect to household size groups, all factor levels differ significantly from each other (p < 0.001). Unsurprisingly, more food waste accrues in larger households. However, the difference between 2-person (3.9 kg/14d) and 3+-person households (4.3 kg/14d) is rather small, although 3+-person households, with an average of 3.61 persons per household, are almost twice as large as a 2-person household. This shows that the average per capita mass of food waste decreases with increasing household size. The post-hoc test for the size categories of municipality led to the finding that households in cities of more than 100,000 inhabitants, with an average of 3.32 kg/14d, waste significantly less than households in rural areas and smaller municipalities (<20,000 inhabitants; 3.86 kg/14d on average; p < 0.001) and households in medium-sized municipalities (20,000-100,000 inhabitants; 3.62 kg/14d on average; p < 0.01). With regard to the level-of-education groups, the post-hoc test, as opposed to the ANOVA, indicated that none of the groups differ significantly from each other. The boxplots and average values also reveal quite small differences between education-level groups within the sample, which only become significant in the ANOVA after the implementation of the weighting factors.
This means that a correlation between formal education and household food waste mass cannot clearly be drawn from this analysis.
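For readers unfamiliar with the post-hoc step, the sketch below runs an unweighted Tukey HSD on synthetic group data shaped like the reported household-size means; note that the study used a weighted ANOVA with the Tukey-Kramer test, for which statsmodels offers no direct weighted equivalent, so this is a structural illustration only.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic per-household masses (kg/14d) around plausible group means;
# the real analysis additionally applied the representativeness weights f.
rng = np.random.default_rng(0)
groups = np.repeat(["1 person", "2 persons", "3+ persons"], 200)
waste = np.concatenate([rng.normal(2.6, 1.0, 200),
                        rng.normal(3.9, 1.2, 200),
                        rng.normal(4.3, 1.4, 200)])
print(pairwise_tukeyhsd(waste, groups))   # pairwise mean differences + p-values
```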
Multiple Dependencies between Waste and Household Characteristics
Weighted linear regression models were created with the transformed food waste amounts as the dependent variable and f (Equation (1)) as the weighting factor. First, the regression was carried out for each independent variable separately before implementing all variables in one weighted multiple linear regression model. The analysis aligns with the ANOVA in indicating that the considered independent variables indeed show significant differences among characteristic values. Nonetheless, the selected variables are not sufficient to predict the amount of household food waste, as indicated by low adjusted r² values below 0.1 (Table 2). The weighted simple linear regressions with lifecycle stage and household size as independent variables resulted in the highest adjusted r² values of 0.086 and 0.088, respectively. The weighted multiple linear regression model resulted in an only slightly higher adjusted r² value of just above 0.09, indicating that the addition of more independent variables does not lead to a markedly better fit of the regression model. Thus, the individual socio-demographic household characteristics, as well as their combination, predict less than 10% of the variance of the dependent variable.
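A minimal weighted-regression sketch (synthetic data, invented effect sizes) reproduces the qualitative finding of a significant but weakly predictive socio-demographic effect:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic design: transformed waste vs. a household-size dummy, with
# representativeness weights f; coefficients are invented for illustration.
rng = np.random.default_rng(2)
n = 1000
size_3plus = rng.integers(0, 2, n)            # 1 if household has 3+ persons
f = rng.uniform(0.5, 1.5, n)                  # weighting factors
y = 0.4 * size_3plus + rng.normal(0, 1.2, n)  # weak effect, large residual noise

res = sm.WLS(y, sm.add_constant(size_3plus), weights=f).fit()
print(res.params)          # slope ~0.4, clearly significant at this sample size
print(res.rsquared_adj)    # adjusted R^2 well below 0.1, as in Table 2
```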
Design and Realization of the Survey
The data set showed that 5% of the households did not record any food waste within 14 days, although the households were asked to select a representative period. It must be asked whether it is realistic that neither edible nor inedible food waste occurs in these households within 14 days. Corresponding information from literature suggests that between 15% and 40% of respondents to questionnaires stated not wasting any food or edible food within a regular week or during the previous week [18][19][20]. In contrast, household food waste collection in Denmark led to the finding that only 3% did not have any food waste in the bag [21]. In Spain, 20% of all households did not record any food waste in diary surveys [20]. The Netherlands Nutrition Centre [22] compared food waste generation self-assessments from Dutch households with waste sorting analyses and summarizes that "every household throws things away." The methodological procedure applied here should be better reflected within scientific literature in order to gain more experience on "zero food waste" households.
Methodological questions arise with regard to the extent of underreporting in diary studies, whether all respondents underreport in the same way, and whether specific food waste fractions are more or less affected. For the present data set, GfK SE [16] estimates an underreporting of 18% by comparing the reports with panel data on daily shopping behavior, assuming that all food waste is underreported in the same way by all respondents. Findings from the literature show that this might not be the case. According to Hoj [12], the unavoidable food waste fraction was not statistically underreported by all respondents, while the avoidable and possibly avoidable food waste fractions were underreported to a huge extent by households with multiple members. A smaller extent of underreporting was detected by Giordano et al. [23], who report a shortfall of 23% for edible food waste on average. Quested et al. [15] compared the diary methodology with waste collection in five studies in the UK, Saudi Arabia, and the US and came to the conclusion that the underreporting of diaries lies between 7% and 40%. The main reasons for the underreporting of diary studies according to Quested et al. [15] are behavior change resulting from the reporting, misreporting, measurement bias (if not all items are weighed), and self-selection bias. In the present paper, the primary data set was not corrected, but for future use of such diaries, more research needs to be carried out on the level of undercoverage for distinct product groups and the differing behavior of respondent groups. The German baseline-2015 study by Schmidt et al. [5] derived household food waste mainly from official waste statistics, from the underlying GfK SE data set (to consider the relevance of disposal paths), and from waste composition analyses of bins. That study suggests a food waste amount of 75 kg per person, which represents 1.7 times the mass of the diary study presented here and 1.3 times the mass resulting from the correction methodology using the panel data on daily shopping behavior described above.
Furthermore, other methodological questions arise, for instance concerning the impact of the independent categorization of edible and inedible food waste by the participants themselves. However, the perception of edibility of a product can vary between households (e.g., for peels of different vegetables) which complicates a clear categorization of edible and inedible food products in advance. Moreover, participants were allowed to weigh or estimate the mass of food waste in grams or liters or indicate the number of pieces, leading to further uncertainties. Conversion tables were provided, but it was not recorded which households weighed and which estimated the food waste mass. This information could support assumptions on uncertainties of the reported data. Additionally, the prior aggregation of the data by GfK SE, e.g., for household size and lifecycle groups complicated the statistical analysis. For future surveys, a more detailed household characterization, such as the exact number of household members, would be desirable.
Product Characteristics
Unsurprisingly, perishable products such as fruit, vegetables, and bread are the main discarded products, as has been shown in previous studies [13,24]. In addition, the study was able to show that cooked/prepared and loose/opened food products rank among the main discarded food products. Our findings indicate that beverages contribute a major share of domestic food waste and should not be excluded from quantification methodologies, as is quite common in the current literature on household food waste, for example [9,18,20]. Their relevance in terms of environmental impact should not be underestimated, as liquid waste streams consist of coffee, tea, fruit juices, alcohol, or soft drinks. These may have a major impact during production related to the use of fertilizer and pesticides, water demand, etc. Findings from Schmidt et al. [25] showed that the environmental impact of domestically wasted beverages is considerable in Germany.
With respect to disposal paths, the results show that the toilet or sewer represents the third most relevant disposal option. The present share is lower than that of Dutch households, who dispose of nearly 30% of their total food waste (including, e.g., yogurt, soup, dairy drinks, coffee, tea, soft drinks, fruit juices, milk, wine, and beer) into the toilet or sewer, which ranks this disposal option in second place [26]. Against the background of such findings, it seems critical that sewer/toilet disposal is excluded in current EU legislation [6]. Moreover, the alternative disposal ways of home composting and feeding to animals should not be neglected. Although waste compositional analysis can be referred to as the more accurate methodology, it is a drawback that the disposal ways sewer, home composting, and animal feed cannot easily be captured by it [15,27].
The residual waste bin is used almost as often as the bio waste bin, which underlines the importance of taking into account both of these disposal options for any household food waste survey. The fact that separate bio-waste collection is not available everywhere should also be taken into account when comparing (inter)national food waste amounts.
Disposal Reasons
The respondents chose durability or spoilage as the main reason for edible food wastage within their homes, which corresponds well with other literature, for example [9,13,28]. This result should be interpreted as the perceived rather than the actual reason for disposal, as spoilage arises due to poor planning or wrong storage in the first place rather than due to poor quality of the food produce.
The relevance of the quantity-related problems at home is quite similar to other studies [9, 13,19]. Quantity-related problems at purchase play a greater role for smaller households within the present study. Here, the offer of smaller packaging units, re-sealable packaging, or piece-by-piece withdrawal for small households could contribute to prevention, especially for canned food and convenience products.
In contrast to other studies [9,18,19,28], the best-before date, accounting for less than 6% of all wasted items, was not highly ranked as a reason for disposal. This finding aligns with Schmidt et al. [25], who found that 88% of their participants in Germany check the edibility of the particular product after expiry of the best-before date. Only 7% of the participants usually discard all products after expiry of the best-before date. Moreover, the present respondents state that they do not waste a high share of food due to oversized packages being cheaper or on offer, which is supported by other authors as well [3,9,19].
Socio-Demographic Characteristics
The inedible fraction was included in the analysis of socio-demographic characteristics due to the legal requirement within the European Union to report total food waste masses from 2020 onwards. This represents a different approach from most of the available literature, which is often dedicated to edible domestic food waste only. The findings on the dependency of generated domestic food waste on the age and lifecycle stage of participants differ within the literature. Koivupuro et al. [9] could not find any significant connection between food waste level and age. However, most studies found that older households waste less than younger households [18,20,29,30,31,32]. In contrast to these studies, the present results indicate that older age groups and lifecycle stages account for a relatively high share of food waste. The reason is that the inedible fraction of food waste is much higher for older households than for other age groups (Figure S2), which has specific implications for monitoring. First, it makes a noticeable difference whether one targets edible or inedible domestic food waste. Therefore, both fractions should be reported separately, with a clear indication of what is covered. Second, the shares of edible and inedible food waste vary between household types, making a more specific targeting of prevention measures necessary. At present, European legislation asks for separate reporting of the edible and inedible fractions on a voluntary basis only.
Many studies suggest that food waste amounts are lower in smaller households than in larger households and that the amount of waste per person decreases with increasing household size [9, 13,28,29,31,33]. This result was also found within our data set.
Families with children waste significantly more within this study, which is in line with findings of other household surveys [28,32]. Parizeau et al. [28] offer the explanation of time and money constraints with children in the household, while Neff et al. [34] suggest that the eating behavior of children is not always predictable, resulting in too much food served on the plate. Since too much food served on the plate was not a major disposal reason in our study, this explanation seems unsuitable in this context. Taking into account the assumed underreporting of a diary study, the impact of household composition on the recorded food waste should be considered as well. According to Hoj [12], households with children and multiple adults underreported substantially in diaries, by 40% of the total food waste disposed into municipal waste collection, whereas single-occupancy households recorded the same amount of food waste as parallel compositional analyses found for them. This puts the comparably high per capita mass of food waste of small households somewhat into perspective.
The present findings concerning food waste in rural and urban areas seem to be contrary to other studies. Neff et al. [34], who conducted an online survey that did not actually aim at food waste quantification, found few differences in reported food waste amounts between rural and urban status. Koivupuro et al. [9] did not find a significant correlation for avoidable food waste across all disposal paths. Secondi et al. [30], who statistically analyzed survey data from telephone interviews within the EU, found that households living in towns indicate that they produce more food waste than those living in rural areas. This is supported by Schneider and Obersteiner [35] as well as Lebersorger and Schneider [27], who conducted waste sorting analyses covering residual waste only. These apparent differences may be grounded in the coverage of the disposal options surveyed. All disposal paths were covered in the present study, whereas the latter two studies included residual waste only; according to the present results, this means that nearly two thirds of the food waste was not covered within these studies. The findings of Neff et al. [34] and Secondi et al. [30] rely on questionnaire and telephone surveys, which are not an adequate method for deriving exact food waste quantities. The comparison of the different coverage of disposal paths and domestic food waste generation may suggest that in rural areas, non-residual disposal paths such as the separate collection of bio-waste (municipal collection or home composting) or animal feeding are more relevant than in urban areas. This issue should receive more attention in future research.
Similar to our study, Cecere et al. [36] could not draw clear conclusions on the correlation between education and household food waste. Neff et al. [34] also found few differences in reported food waste amounts between differently educated groups. Visschers et al. [32], who analyzed a much smaller sample of fewer than 900 households, did not find a correlation either. Similarly, in Finland, Koivupuro et al. [9] did not find any significant correlation between the food waste amount and the educational level of the householders who filled in the diary. Only Secondi et al. [30] found that less educated individuals state that they waste less than more educated ones. As mentioned above, the study by Secondi et al. [30] relies on respondents' own estimations of food waste amounts via questionnaires and hence on a completely different methodology than the present study.
Prediction of Food Waste Amounts through Socio-Demographic Variables
In the present study, the tested independent variables of the regression explain only a very small share of food waste emergence. Similar results were found by Giordano et al. [13]: the socio-demographic characteristics included in their study, such as household size, together with food-related habits such as shopping and eating behavior, could only explain about 30% of the variation in food waste quantities within their random forest regression. In a similar way, Grasso et al. [20] stated that their findings "underscore the modest role of socio-demographic characteristics in predicting food waste behavior in Europe".
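As an illustration of "explained share", the hedged sketch below fits a random forest regression, in the spirit of the Giordano et al. analysis cited above, to purely synthetic socio-demographic data and reports the cross-validated R²; nothing here reproduces the original studies' data or pipelines:

```python
# Minimal sketch (not the authors' pipeline): quantify how much of the
# variation in food waste a random forest can explain from socio-demographic
# predictors alone, using the R^2 score. All data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.integers(1, 6, n),      # household size
    rng.integers(18, 80, n),    # age of head of household
    rng.integers(0, 2, n),      # children present (0/1)
])
# Food waste depends only weakly on these predictors; most variance is noise,
# mimicking the "small explained share" reported in the text.
y = 5.0 * X[:, 0] + 2.0 * X[:, 2] + rng.normal(0, 25, n)

r2 = cross_val_score(RandomForestRegressor(n_estimators=200, random_state=0),
                     X, y, cv=5, scoring="r2")
print(f"mean cross-validated R^2: {r2.mean():.2f}")
```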
De Hooge et al. [37] showed that, besides demographics, waste behavior is influenced by personality characteristics, such as value orientation, commitment to environmental sustainability, and perceived consumer effectiveness in saving the environment, and by individual waste aspects, such as the perceived food waste of the household, the perceived importance of food waste, and engagement in shopping and cooking. Visschers et al. [32] performed a Tobit analysis on self-reported household food waste and also found that personal attitudes and norms, such as perceived behavioral control and a good provider identity, are important influencing factors. Parizeau et al. [28] observed that food and waste awareness in general, as well as family and convenience lifestyles, are connected with food waste behavior.
Food waste is a multi-dimensional problem which is influenced by purchasing behavior, general waste prevention habits, as well as the importance of materialistic and environmental values [38]. Stancu et al. [31] strengthen this view on food waste generation by showing that psychological factors and household-related routines perform better in explaining food waste behavior than socio-demographic factors. Therefore, food waste can be described as an unintended result of several practices in a broader context of values and factors and should, with regard to domestic food waste reduction, also be addressed as such a multi-faceted issue [39].
Conclusions
The study presents, for the first time, findings on household food waste behavior and characteristics in Germany in a representative way, grounded in a quantitative statistical analysis. It shows that levels of household food waste indeed differ between distinct socio-demographic groups, which had not become clear in previous studies with smaller sample sizes. Nonetheless, the socio-demographic factors considered in the analysis explain only a small share of the variance in households' food waste levels, which must be substantially affected by parameters not taken into account in the frame of this study. Food waste not only depends on selected socio-demographic characteristics of households but also on many other conditions that relate to behavior, routines, lifestyles, attitudes, and norms. This implies that policies targeting certain population groups, such as single-person households, young people, or households with children, might be limited in their effectiveness. As the comparison with other research on household food waste showed, a focus on overall consumer behavior, waste prevention habits, daily routines and environmental values could be more accurate. More quantitative research on potential influencing factors of household food waste should be carried out as a scientific basis for targeted prevention policies. The results on disposal reasons indicate that food waste prevention measures should not rely solely on information provision regarding the best-before date and perishability, but rather focus on adequate packaging sizes for smaller households, especially of canned foods and convenience products, better meal planning options for larger households and households with children, and re-use ideas for surpluses.
Regarding the monitoring of household food waste, further issues need to be taken into account. The disposal path sewer/toilet should not be neglected, as it represents a major disposal path, particularly for beverages. The present dataset demonstrates a survey methodology for households that integrates this component in a consistent way. Further, the disposal options composting and animal feed, and the different use of these paths between regions, must also be taken into consideration.
With regard to the recent EU requirements for food waste monitoring, the problem of distinct shares of inedible fractions between households must be discussed in further detail. Differences in the share of edible and inedible food waste among age groups may result in distortions if an indication of the potential edibility of the food product is not provided during reporting.
From a methodological viewpoint, it is challenging to compare existing studies and draw clear conclusions on food waste masses, behavior, and the influence of socio-demographic factors. The existing studies apply different methodologies and rely partly on households' own perceptions and estimations of food waste levels. Moreover, the studies differ in their inclusion of distinct disposal routes, liquid products, and inedible food waste. Future research should focus more on adequate methods to quantify domestic food waste and to better estimate potential underreporting and the neglect of certain food waste fractions. In particular, methodologies for measuring wasted beverages and the disposal path sink should be discussed. Research on household food waste quantities should moreover clearly unveil and reflect on the advantages and disadvantages of the applied methodologies to facilitate comparison between studies. Furthermore, the phenomenon of "no food waste at all" reporting should be investigated with respect to methodological interpretation, impact on results, and implications for other households. Recent literature already states the disadvantages of diary studies for quantifying household food waste [5,13,15]. Nonetheless, our study showed that information on all relevant disposal paths can readily be captured by use of food waste diaries. Thus, a combination with other methodologies should be applied for a proper assessment. Finally, time series should be established to capture potential trends in the development of food waste, which are as yet unclear for households in Germany.
Supplementary Materials:
The following are available online at http://www.mdpi.com/2071-1050/12/11/4702/s1, Figure S1: Histogram (left) and QQ-Plot (right) showing the normal distribution of transformed data on food waste masses by use of Box-Cox transformation, Figure S2: Total amount of edible and inedible food waste by age group of head of household per household and year [kg], Figure S3: Total amount of household (hh) edible food waste only (in contrast to Figure 5 addressing total food waste) within 14 days per (a) household lifecycle stage, (b) age of the head of the household, (c) household size, (d) size category of municipality, and (e) level of education (n indicates sample sizes; outliers are excluded), Table S1: Criteria for calculation of the weighting factor (f) for each household by GfK SE, Table S2: Underlying data of descriptive statistics concerning edibility, food product types, condition of disposal and disposal routes in percentages and absolute annual numbers (absolute numbers are extrapolated to the whole population of Germany), Table S3: Underlying data of descriptive statistics concerning disposal reasons by product group, household size group, and lifecycle group in percentages and absolute annual numbers (absolute numbers are extrapolated to the whole population of Germany).
Conflicts of Interest:
The authors declare no conflicts of interest. Table A2. Detailed specifications of the socio-demographic characteristic "lifecycle stage".
"Environmental Science",
"Economics"
] |
Neutralizing antibody titers six months after Comirnaty vaccination: kinetics and comparison with SARS-CoV-2 immunoassays
Objectives: mRNA vaccines, including Comirnaty (BNT162b2 mRNA, BioNTech-Pfizer), elicit high IgG and neutralizing antibody (NAb) responses after the second dose, but the progressive decrease in serum antibodies against SARS-CoV-2 following vaccination has raised questions concerning long-term immunity, as decreased antibody levels are associated with breakthrough infections after vaccination, prompting the consideration of booster doses. Methods: A total of 189 Padua University-Hospital healthcare workers (HCW) who had received a second vaccine dose were asked to provide serum samples for determining Ab at 12 (t12) and 28 (t28) days, and 6 months (t6m), after their first Comirnaty/BNT162b2 inoculation. Ab titers were measured with the plaque reduction neutralization test (PRNT) and three chemiluminescent immunoassays: two targeting the receptor binding domain (RBD) and the trimeric Spike protein (trimeric-S), and one surrogate viral neutralization test (sVNT). Results: The median percentage decreases (interquartile range) in antibody values 6 months after the first dose were 86.8% (67.1-92.8%) for S-RBD IgG, 82% (58.6-89.3%) for trimeric-S, 70.4% (34.5-86.4%) for VNT-NAb, 75% (50-87.5%) for PRNT50 and 75% (50-93.7%) for PRNT90. At 6 months, neither PRNT titers nor VNT-NAb and S-RBD IgG bAb levels correlated with age (p=0.078) or gender (p=0.938), while they were correlated with previous infection (p<0.001). Conclusions: After 6 months, a method-independent reduction of around 90% in anti-SARS-CoV-2 antibodies was detected, while no significant differences were found between values of males and females aged between 24 and 65 years without compromised health status. Further efforts to improve analytical harmonization and standardization are needed.
Introduction
The coronavirus disease 2019 (COVID-19) pandemic caused by SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2) continues to sustain a global public health crisis, despite the availability of vaccines that have dramatically reduced severe disease, hospitalization and mortality [1]. Several studies have demonstrated that mRNA vaccines, such as Comirnaty [BNT162b2 mRNA, BioNTech-Pfizer, Mainz, Germany/New York, United States (US)], elicit high IgG and neutralizing antibody (NAb) responses, especially after administration of the second 30 µg dose [2,3]. However, the decrease in serum antibodies against SARS-CoV-2 has raised questions concerning long-term immunity, since decreased antibody levels have been associated with breakthrough infections, prompting the consideration of additional vaccine booster doses [4].
Although initial studies demonstrated that Ab titers elicited by Comirnaty can persist for more than 6 months after the second dose [5,6], there is a pressing need for further data on the time-dependent kinetics and persistence of immunization [5,7]. Recent papers in the literature have reported lower antibody levels 3 months post-vaccination, with 6-month values being comparable to those of subjects vaccinated with one dose, thus indicating a progressive waning of the immune response over time [7]. Moreover, analogously to studies evaluating Ab levels in convalescent individuals [8], further findings suggest that the dynamics of the decrease in Ab elicited by vaccines might not be homogeneous across individuals [5,9]. Antibodies may decrease earlier in the elderly and in patients with chronic renal disease, underweight or solid malignancy, as well as in those on immunosuppressive medication, whereas they can increase in females [5,7,9]. On the other hand, in some individuals (4%), Ab increase slowly after the second dose, peaking 3 months later (late responders) [7].
The humoral response and its dynamics after vaccination have mainly been demonstrated by measuring binding antibodies (bAb) using commercially available assays, often based on chemiluminescent technology targeting different forms of the Spike protein or its receptor binding domain (RBD) moiety [10,11]. Only a few studies have focused on the short- and long-term dynamics of NAb [5,12], which have been found to correlate with protection from infection [13]. The use of neutralizing assays in clinical practice is cumbersome, since the technique is labor-intensive, has a long turnaround time and needs bio-safety level 3 (BSL-3) containment, which is unavailable in most laboratories. Therefore, anti-S IgG immunoassays closely correlated with neutralizing antibody titers could be used to determine the relationship with protection and to evaluate the level of immunity of a vaccinated or COVID non-naïve individual.
In this study we describe the dynamics of the neutralizing response in sera from HCW with and without prior SARS-CoV-2 infection up to 6 months after administration of a primary cycle of the Comirnaty/BNT162b2 vaccine, measured with the plaque reduction neutralization test (PRNT), which is considered the gold standard for anti-SARS-CoV-2 NAb measurement [14]. The antibody kinetics were evaluated further with two commercially available CLIA assays, targeting either the RBD portion or the trimeric form of the viral Spike protein, and with a surrogate viral neutralization test (sVNT) that probes the interaction between SARS-CoV-2 and host cells as a proxy for Ab neutralization activity.
Materials and methods
This study included a cohort of 189 Padua University-Hospital healthcare workers (HCW) who underwent a primary cycle of vaccination (first dose, followed by a second after 21 days) between December 26th 2020 and March 10th 2021. HCW were consecutively enrolled from the Emergency Department and the Infectious Disease and Laboratory Medicine wards of the University-Hospital of Padova. All subjects underwent weekly nasopharyngeal swab testing from March 2020 to September 2021, while their immunological status for SARS-CoV-2 was determined weekly between April 8th and May 29th, 2020, as described elsewhere [11]. Thirty-five post-graduate medical trainees were included in the cohort. Seventeen HCW had previously been diagnosed with natural COVID-19 infection on the basis of at least one positive nasopharyngeal swab test and clinical confirmation; the time elapsed since infection ranged from 3 to 9 months. Overall, the numbers and percentages of subjects within the age classes <30 years, 30-40 years, 40-50 years, 50-60 years and >60 years were: 32 (16.9%), 55 (29.1%), 41 (21.7%), 48 (25.4%) and 13 (6.9%), respectively.
Of the whole cohort, 179 underwent a second vaccine administration 21 days after the first dose; the remaining 10, who were non-naïve to SARS-CoV-2 infection, had a single dose. All HCW were asked to undergo collection of serum samples for determining Ab at 12 (t12) and 28 (t28) days after the first Comirnaty/BNT162b2 inoculum; a pre-vaccination sample (t0) was collected, within the 24 h before vaccination, only from the 35 resident staff. Around 6 months (t6m) after the first vaccine administration (median time from first dose 185 days, min-max 13-214, 25th and 75th percentiles 180-195 days), a further blood sample for Ab assessment was obtained from all participants.
Results
Among the HCW included in the study, 58 (30.7%) were males and 131 (69.3%) females. The overall mean age, which did not significantly differ by gender (Student's t=−0.562, p=0.574), was 42.3 (range, 24-66) years with a standard deviation (SD) of ±11.8 years. Seventeen individuals (8.9%) presented one or more comorbidities [11 had cardiovascular diseases, alone or in association with diabetes (n=1), respiratory diseases (n=1) or severe obesity (n=7); three had respiratory diseases; one had diabetes; two had past or current cancer]. Of the 17 individuals with previous SARS-CoV-2 natural infection, 8 (47.0%) were females and 9 (53.0%) males. Figure 1 shows the differences in bAb, VNT-NAb and NAb by gender. Multivariate regression analysis demonstrated that S-RBD IgG bAb levels at 6 months were correlated neither with age (p=0.079) nor with gender (p=0.466), while they were correlated with previous infection (p<0.001). Differently, trimeric-S IgG bAb levels at 6 months correlated with age (p=0.013) and with previous infection (p<0.001), but not with gender (p=0.723). VNT-NAb levels at 6 months were correlated neither with age (p=0.078) nor with gender (p=0.938), whereas they were correlated with previous infection (p<0.001). Likewise, PRNT50 and PRNT90 titers were correlated neither with age (p=0.064 and p=0.674, respectively) nor with gender (p=0.356 and p=0.563, respectively), while they were correlated with previous infection (p<0.001 for both). Figure 2 reports the kinetics of bAb and VNT-NAb in all HCW, and Figure 3 shows PRNT50 and PRNT90 titers at 12 days (t12), 28 days (t28) and 6 months (t6m) after the first vaccine administration. The medians and IQRs of bAb and PRNT at the different time points are reported in Table 2 for the entire cohort, as well as for those with and without previous COVID-19. In order to ascertain the decrease in bAb and PRNT over time in individuals with Ab measured both at t28 and t6m, the difference (in percentage) between levels at 6 months and at t28 was calculated; Figure 4 shows these differences. Table 3 also reports the results of multivariate analyses, conducted in order to establish whether age and gender were significantly associated with the percentage of Ab decrease. Supplementary Figure 1 shows the dot plots of the corresponding decreases by age and gender. Supplementary Figure 2 reports the results of Passing and Bablok analyses (including equations, 95% CI of slopes and intercepts) across bAb, VNT-NAb and PRNT.
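The percentage decrease mentioned above is a simple per-subject calculation; the sketch below (with hypothetical titer values, not study data) shows the convention, where a negative value corresponds to a "late responder" whose level rose between t28 and t6m:

```python
# Minimal sketch of the percentage-decrease computation described above:
# for each subject measured at both t28 and t6m, the relative change between
# 6 months and t28 is calculated. All titer values below are hypothetical.
import numpy as np

t28 = np.array([1200.0, 850.0, 400.0, 2300.0])   # hypothetical Ab levels at t28
t6m = np.array([150.0, 110.0, 60.0, 2500.0])     # hypothetical Ab levels at t6m

pct_decrease = 100.0 * (t28 - t6m) / t28
print(pct_decrease)    # positive = decrease; negative = "late responder"
print(np.median(pct_decrease), np.percentile(pct_decrease, [25, 75]))
```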
Discussion
Vaccines against COVID-19 have been demonstrated to be effective in preventing severe disease, hospitalization and death [1]. However, studies evaluating Ab levels in response to vaccination have reported contradictory results, and it is of utmost importance to gain a sound understanding of the extent and duration of protection following natural SARS-CoV-2 infection and vaccination. The waning of serum antibodies and neutralizing antibodies against SARS-CoV-2 has been widely documented [16,17], although several studies have underlined that levels of vaccine-induced Ab persist even 6 months after the second dose [18]. In view of these results, the heterogeneous levels reported might be explained by the different Ab types evaluated [binding antibodies (bAb) or neutralizing antibodies (NAb)] and/or by age- or gender-dependent differences. Poor standardization of analytical methods, as well as the different targets evaluated by commercial assays, might also explain the reported discrepancies. Overall, NAb are considered the most accurate available measure for ascertaining protection against COVID-19, since they correlate with the capability of the immune response to neutralize the entry of the virus into host cells [13].
In this study, a cohort of HCW was followed up for 6 months after vaccination with Comirnaty (BNT162b2, BioNTech/Pfizer). Using live virus, NAb titers were measured by the gold standard, the plaque reduction neutralization test (PRNT) [14], at low (PRNT50) or high (PRNT90) stringency thresholds, and the results were compared with those obtained with three different CLIA assays: two assays measuring bAb, and one sVNT measuring VNT-NAb. Investigation into previous SARS-CoV-2 infections is of utmost importance in understanding the humoral response. In this cohort, all HCW underwent repeated nasopharyngeal swabs from March 2020 onwards, and were further interviewed to collect data on previous infection and comorbidities. A total of 17 subjects were not infection-naïve, since they previously had COVID-19, while comorbidities were present in only a limited number of individuals. Multivariate analyses were performed to evaluate the correlation of Ab levels at 6 months adjusted for age, gender and previous infection. Our results demonstrate that, for all the investigated methods, neither bAb nor NAb correlated with gender (Figure 1), while only trimeric-S was slightly associated with age. These results contradict findings made by other authors, who have reported age- and gender-dependence of bAb and/or NAb at 6 months [5,16,19], while they are in agreement with the absence of correlation of Ab levels with age or gender found by us and other groups [2,11,20]. The different levels detected by gender, as well as by age, could be attributed to the heterogeneity of the individuals included in the studies. Contrasting results concerning the significance of the relationship between Ab levels, age and gender are indeed reported by other authors. Levin et al., for example, studied Ab dynamics and found that bAb levels did not differ by gender but were associated with age classes; likewise, NAb varied by gender and age groups [5]. A different pattern was reported by Khoury et al., who underline that gender differences are appreciable only for individuals above 50 years of age [16]. The slight differences found in our enrolled cohort were not statistically significant. The limited analytical standardization might also explain the reported differences among published studies, as confirmed by our data when comparing different methods (Supplementary Figure 2).
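Passing and Bablok regression, referenced above, is a rank-based method-comparison technique; the following simplified sketch estimates the slope as the median of pairwise slopes, omitting the shift correction and rank-based confidence intervals of the full procedure, and uses hypothetical paired titers:

```python
# Simplified sketch of a Passing-Bablok-style method comparison: the slope is
# taken as the median of all pairwise slopes and the intercept follows as
# median(y - b*x). The full procedure adds an offset K for negative slopes and
# rank-based confidence intervals; values below are hypothetical paired titers.
import numpy as np

def passing_bablok_sketch(x, y):
    slopes = []
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            dx = x[j] - x[i]
            if dx != 0:
                slopes.append((y[j] - y[i]) / dx)
    b = np.median(slopes)       # slope estimate (no K-shift applied here)
    a = np.median(y - b * x)    # intercept estimate
    return a, b

x = np.array([10.0, 40.0, 80.0, 160.0, 320.0])   # e.g., PRNT50 titers
y = np.array([12.0, 35.0, 90.0, 150.0, 300.0])   # e.g., sVNT values
print(passing_bablok_sketch(x, y))
```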
The evaluation of Ab dynamics revealed that, independently of the assay used to determine bAb or NAb levels, at 6 months the majority of subjects had about a 90% decrease in their anti-SARS-CoV-2 Ab levels (Figure 4). These results were independent of age, gender and previous infection, as confirmed by the multivariate analyses, with the exception of a slight significance found for PRNT50 and PRNT90 (Table 3). Interestingly, a limited number of individuals presented Ab levels at 6 months higher than 1 month after the first dose. According to Naaber et al., these individuals could confidently be considered "late responders", rather than statistical outliers, as they slowly developed Ab after vaccination [7].
The present study has some limitations. First, the number of HCW with a previous infection is limited, although our data confirm previously reported patterns [2]. Second, to elucidate the waning of the humoral response, a longer study period is required. Third, the full spectrum of analytical performance of the CLIA methods (except for Maglumi SARS-CoV-2 S-RBD IgG [15]) was not verified. The strength of this study is the characterization of its cohort of HCW, who were followed weekly, undergoing molecular testing to identify any early infection. Further, NAb titers were determined using live virus (PRNT), as this method is consensually accepted as a valuable tool for appropriately estimating the risk of re-infection and protection against SARS-CoV-2.
Conclusions
The findings made in the present study demonstrate that a method-independent reduction of around 90% in anti-SARS-CoV-2 antibodies occurs 6 months post-vaccination, and that in individuals aged 24-65 years without severe health issues, no significant differences between males and females are to be expected. However, SARS-CoV-2 antibody assays still yield contradictory results, and there is an urgent need for comparability and standardization, particularly in view of the fact that PRNT determination is labor-intensive, has a long turnaround time and calls for bio-safety level 3 (BSL-3) containment, which is unavailable in most clinical laboratories, thus precluding its widespread utilization in clinical practice.
"Medicine",
"Biology"
] |
GATA1 knockout in human pluripotent stem cells generates enhanced neutrophils to investigate extracellular trap formation
Human pluripotent stem cell (hPSC)-derived tissues can be used to model diseases and validate targets in cell types that are challenging to harvest and study at-scale, such as neutrophils. Neutrophil dysregulation, specifically unbalanced neutrophil extracellular trap (NET) formation, plays a critical role in the prognosis and progression of multiple diseases, including COVID-19. hPSCs can provide a limitless supply of neutrophils (iNeutrophils) to study these processes and discover and validate targets in vitro. However, current iNeutrophil differentiation protocols are inefficient and generate heterogeneous cultures consisting of different granulocytes and precursors, which can confound the study of neutrophil biology. Here, we describe a method to dramatically improve iNeutrophils' yield, purity, functionality, and maturity through the deletion of the transcription factor GATA1. GATA1 knockout (KO) iNeutrophils are nearly identical to primary neutrophils in cell surface marker expression, morphology, and host defense functions. Unlike wild type (WT) iNeutrophils, GATA1 KO iNeutrophils generate NETs in response to the physiologic stimulant lipopolysaccharide (LPS), suggesting they could be used as a more accurate model when performing small-molecule screens to find NET inhibitors. Furthermore, through CRISPR/Cas9 deletion of CYBB we demonstrate that GATA1 KO iNeutrophils are a powerful tool in quickly and definitively determining the involvement of a given protein in NET formation.
Introduction
Neutrophils are the most abundant immune cells in the human body and make up approximately 70% of circulating leukocytes. They migrate to the site of infections where they recruit other immune cells, and independently destroy invading microorganisms through phagocytosis, the release of granules, and the formation of neutrophil extracellular traps (NETs) (1,2).
While neutrophils are an important first line of defense in the innate immune system, their overactivation can have a proportionally negative impact on many diseases and affect numerous organ systems. The dysregulation of neutrophils is correlated with the progression of multiple diseases including rheumatoid arthritis, atherosclerosis, psoriasis, chronic obstructive pulmonary disease, and gallstone formation (3)(4)(5)(6)(7). COVID-19 clearly establishes the link between overactive neutrophils and disease severity and highlights the need for improved methods to model neutrophil dysregulation. SARS-CoV-2, the virus which causes COVID-19, has been shown to directly induce NETs (8,9) and the overproduction of NETs and neutrophil reactive oxygen species (ROS) exacerbate COVID-19 complications including blood clots, cytokine storm, organ damage, and respiratory failure (10)(11)(12)(13). Unsurprisingly, there is a strong positive correlation between NET production, disease severity and patient outcome (14,15).
While inhibiting overactive neutrophils has the potential to mitigate COVID-19 severity (15)(16)(17), the nature of primary neutrophils severely restricts their utility in drug discovery. Primary neutrophils, like all donor tissues, are limited by access and cross-patient variability, survive ex vivo for less than 24 hours, are transcriptionally silent and non-proliferative, and cannot be cryopreserved (18). These shortcomings preclude large-scale drug screening and make unbiased genetic screens and target validation experiments using CRISPR/Cas9 challenging.
Human pluripotent stem cells (hPSCs) provide an inexhaustible source of material that can overcome these challenges. hPSCs self-renew indefinitely, can be differentiated into a variety of highly relevant cell types, and are easy to genetically modify. hPSC-derived cells have been successfully used in a variety of pharmaceutical efforts, ranging from high-throughput phenotypic drug screens to model and correct neurological disorders, to the generation of hepatocytes to screen drug-mediated toxicity (19,20). The production of homogeneous cultures of mature cells is crucial for assay relevance and reproducibility.
Neutrophil specification is tightly regulated by an interplay of cell fate-determining transcription regulators.
While eosinophils and neutrophils both rely on the expression of CEBPE and GFI1, post-translational acetylation of CEBPE at K121 and K127, along with the reduction of GATA1, ultimately determines neutrophil commitment (26,27). During neutrophil maturation, CEBPE is downregulated and expression of the terminal granulopoiesis genes CEBPD and SPI1 escalates (28). The small molecules and cytokines governing these events are largely unknown. Based on transcriptional analysis of sorted iNeutrophils along with mouse genetic studies, we surmised that the deletion of GATA1, a transcription factor important for the development of eosinophils and basophils, would force hPSC-derived granulocytes into a neutrophil-specific program and eliminate contaminating cells (29,30). We demonstrate that knocking out GATA1 in H1 human embryonic stem cells (hESCs) using CRISPR/Cas9 (GATA1 KO) followed by granulocyte differentiation produces pure populations of iNeutrophils that are nearly identical to their primary counterparts. Compared to wild type (WT) iNeutrophils, GATA1 KO iNeutrophils have dramatically improved levels of the neutrophil surface markers CD182, CD11b, CD15, CD16 and CD66b and retain their host defense functions including phagocytosis, ROS production and myeloperoxidase (MPO) activity.
Unlike WT iNeutrophils, GATA1 KO iNeutrophils form NETs after treatment with the physiologic NET stimulant lipopolysaccharide (LPS). Moreover, GATA1 KO iNeutrophils can be genetically manipulated further through CRISPR/Cas9 to evaluate the role of individual genes in neutrophil functions. GATA1 KO iNeutrophils with deletion of CYBB, which encodes a protein involved in NET formation, show reduced NET production in response to the NET stimulant phorbol myristate acetate (PMA).
GATA1 KO using CRISPR/Cas9
An Amaxa Nucleofector II Device and Nucleofector Kit (Lonza, VPH-5012) were used to transiently express 5 µg of a GATA1 gRNA plasmid (gRNA sequence: ggtgtggaggacaccagagcagg) containing a puromycin resistance gene in 1 x 10^6 H1 hESCs constitutively expressing Cas9 from the AAVS1 locus. After nucleofection, cells were plated onto a 6 cm dish in mTeSR plus media with 10 µM Y-27632; two days later they were selected using 1 µg/ml puromycin for 2 days and clonally expanded, after which genomic DNA was extracted (Invitrogen, K1820-1) and the target locus PCR-amplified (forward primer: gatgcaggagggaaaagagagga, reverse primer: gcaaccaccacatacttccagt) using Platinum Taq DNA Polymerase (Invitrogen, 11304011). Amplicons were analyzed using Sanger sequencing, and clones with frame-shift deletions were picked for expansion. All experiments were performed with a GATA1 KO clone containing a 13 base-pair frame-shift deletion (Figure S1). GATA1 is located on the X-chromosome, so only one allele of H1 hESCs (XY karyotype) required editing.
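A quick way to see why the 13 base-pair deletion is disruptive: any deletion whose length is not a multiple of three shifts the reading frame. A minimal illustrative check (the 13 bp figure comes from the text; the helper itself is hypothetical):

```python
# A deletion whose length is not a multiple of 3 shifts the reading frame of
# the downstream coding sequence, typically producing a nonfunctional protein.
def is_frameshift(deletion_length_bp: int) -> bool:
    return deletion_length_bp % 3 != 0

print(is_frameshift(13))  # True: a 13 bp deletion disrupts the GATA1 frame
print(is_frameshift(9))   # False: in-frame deletion, protein may stay partly intact
```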
Hematoxylin and eosin staining
To visualize morphology, 1.5 x 10^5 cells were suspended in 200 µl PBS plus 1% BSA (Sigma-Aldrich, A9576) and spun onto glass slides using a Thermo Scientific Cytospin 4 centrifuge at 300 x g for 5 minutes, processed through a Siemens Hematek 2000 for staining, and sealed/preserved using DPX mounting media (Sigma-Aldrich, 06522). Stained cells were then visualized using a Nikon Eclipse Ci-L microscope.
Flow cytometry
After washes, 200 µl of stain buffer were added to each sample well, followed by centrifugation at 300 x g for 5 minutes at 4°C. All steps were performed in dark conditions and on ice. Antigen-specific antibodies reacted with UltraComp eBeads (Invitrogen, 01-3333-42) were used as single-color compensation controls, and corresponding isotype controls were included for each antigen-specific antibody. The experiment was run using a Yeti (Propel Labs) flow cytometer and analysis was performed using FlowJo software. To calculate positive marker expression, a cut-off of no more than 3% isotype background was used. A list of antibodies used is available in Supplemental Tables S1A and S1B.
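The 3% isotype-background cut-off described above can be read as a gating rule: set the gate at the 97th percentile of the isotype control, then count stained events above it. A minimal sketch with synthetic intensities (not the authors' FlowJo workflow):

```python
# Minimal sketch of the 3%-isotype-background rule: the gate is set so that no
# more than 3% of isotype-control events fall above it, and marker positivity
# is the fraction of stained events beyond that gate. Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
isotype = rng.lognormal(mean=1.0, sigma=0.5, size=10_000)   # control events
stained = rng.lognormal(mean=2.0, sigma=0.6, size=10_000)   # antibody-stained

gate = np.percentile(isotype, 97)            # allows <= 3% background
percent_positive = 100.0 * np.mean(stained > gate)
print(f"gate = {gate:.2f}, positive = {percent_positive:.1f}%")
```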
Cell sorting
10 x 10^6 Day 19 WT iNeutrophils were resuspended in 4 ml PBS + 1% BSA (Sigma-Aldrich, A9576) and sorted using a FACSAria III (BD Biosciences) based on the low and high forward- and side-scatter populations gated in Figure 1. Once sorted, the two populations were immediately processed for RT-qPCR.
RT-qPCR
Approximately 2 x 10 6 cells were harvested, and RNA extracted and purified using the RNeasy Mini Kit (Qiagen, 74104), reverse transcribed using SuperScript IV Reverse Transcriptase (Invitrogen, 18090010), and gene expression analyzed by TaqMan assay using Fast Advanced Master Mix (Applied Biosystems, 4444963) and the QuantStudio Flex Real-Time PCR System (Applied Biosystems, 4485701).
Fold-changes relative to WT H1 hESCs were calculated using the delta-delta Ct method and normalized using the housekeeping gene GAPDH; experimental error was calculated as the standard deviation (33). For the time-course study, samples were collected from three independent differentiations and analyzed in technical triplicates. TaqMan probes used are listed in Supplemental Table S2.
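The delta-delta Ct method named above reduces to two subtractions and an exponentiation; the sketch below uses hypothetical Ct values:

```python
# The delta-delta Ct calculation as a minimal sketch: Ct values are normalized
# to the housekeeping gene (GAPDH) within each sample, then the sample of
# interest is compared with the WT hESC reference.
def fold_change_ddct(ct_gene_sample, ct_gapdh_sample,
                     ct_gene_reference, ct_gapdh_reference):
    d_ct_sample = ct_gene_sample - ct_gapdh_sample            # delta Ct (sample)
    d_ct_reference = ct_gene_reference - ct_gapdh_reference   # delta Ct (reference)
    dd_ct = d_ct_sample - d_ct_reference                      # delta-delta Ct
    return 2.0 ** (-dd_ct)

# Hypothetical example, e.g. ELANE in Day 12 cells vs. WT hESCs:
print(fold_change_ddct(22.0, 18.0, 28.0, 18.0))  # 64.0, i.e. ~64-fold upregulation
```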
ROS production
ROS release was measured using the CM-H2DCFDA (General Oxidative Stress Indicator) kit (Invitrogen, C6827) following the manufacturer's protocol. Briefly, cells were resuspended to 1 x 10^6 cells/ml. Plates were washed with HBSS, then treated with 10 nM PMA stimulant or DMSO for one hour at 37°C. Fluorescence of the 2',7'-dichlorofluorescein generated by ROS-induced oxidation of the DCFDA reagent was measured on a CLARIOstar plate reader at 488/535 nm excitation/emission, then adjusted down by 75% of the highest well to bring all wells into range. Mean fluorescence from cell-free wells was subtracted to control for background fluorescence. The experiment was performed on four independent differentiations and three independent donors in at least five technical replicates per experiment.
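The background subtraction described above is straightforward; the rescaling step ("adjusted down by 75% of the highest well") is ambiguous in the source, so the sketch below simply normalizes to the brightest well and should be read as an assumption, with synthetic readings:

```python
# Minimal sketch of the plate-reader post-processing: subtract the mean
# fluorescence of cell-free wells, then rescale readings relative to the
# brightest well. The rescaling only loosely mirrors the step described in
# the text; treat it as an assumption.
import numpy as np

wells = np.array([52000.0, 61000.0, 8000.0, 9500.0])  # raw RFU, synthetic
cell_free = np.array([1500.0, 1600.0])                # background wells

signal = wells - cell_free.mean()                     # background subtraction
signal_scaled = signal / signal.max()                 # normalize to top well
print(signal_scaled.round(3))
```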
Primary neutrophil isolation
Peripheral blood from healthy donors (defined as not having asthma or allergies and not having taken NSAIDs within the previous 5 days) was obtained at Novartis Institutes of Biomedical Research using informed consent under an approved Institutional Review Board research protocol. Fresh blood was EDTA anti-coagulated and used within two hours of donation. Primary donor neutrophils were extracted using Ficoll density centrifugation. Per 10 ml of fresh blood, 5 ml of PBS and 5 ml of 4% Dextran (Sigma-Aldrich, 31392-50G) in PBS (Gibco, 10010-023) were added and mixed in a 50 ml tube by gently inverting 2.5 times, then allowed to settle for 30 minutes at room temperature, separating into a dense layer topped with a supernatant containing leukocytes. 75% of the supernatant volume of Ficoll-Paque Premium (Sigma-Aldrich, GE17-5442-03) was added to a new 50 ml Falcon tube. The supernatant was carefully transferred on top of the Ficoll, then centrifuged at 650 x g for 20 minutes at room temperature with low acceleration (2) and no brake (deceleration set to 0). The supernatant was removed, then the pellet was resuspended in 10 ml of water (Ultrapure diH2O) and mixed for no more than 30 seconds to lyse red blood cells. Then 10 ml of 2x PBS (made from 10x PBS, Gibco, 70011044) was added, and tubes were centrifuged at 300 x g for 10 minutes at room temperature (acceleration and deceleration reset to 9).
The supernatant was aspirated, and the pellet containing granulocytes was resuspended in IMDM (Gibco, 21056023) and counted with a ViCell Cell Counter. For the ROS assay, primary neutrophils were resuspended in Hanks Balanced Salt Solution (HBSS, Gibco, 14025-092) and counted.
Phagocytosis
Plates were spun in a centrifuge and the supernatant was removed. Cells were fixed for 5 minutes with 100 µl/well 4% paraformaldehyde in PBS, then washed twice with ice-cold PBS. After the second wash, the supernatant was removed and 200 µl of PBS, followed by 100 µl of 0.4% trypan blue, was added to each well. Cells were then analyzed by flow cytometry for uptake of fluorescent particles. The experiment was performed on at least three independent differentiations and three independent donors in at least technical triplicates.
MPO activity
The MPO activity of cell lysates was measured using an EnzChek Myeloperoxidase Activity Assay Kit (Invitrogen, E33856) following the manufacturer's instructions. Briefly, cells were resuspended at 5 x 10^5 cells/ml in PBS and lysed through freeze-thaw cycles, and 25 µl were added to each well of a 384-well dish (PerkinElmer, 6007270). Chlorination was measured by addition of the APF reagent, and fluorescence was measured using a BMG PHERAstar at excitation and emission wavelengths of 485 nm and 520 nm, respectively. Mean fluorescence from cell-free wells was subtracted from experiment wells to control for background fluorescence. Experiments were performed on three independent differentiations and three independent donors using at least technical triplicates.
NET formation and small molecule inhibition
Cells were fixed, permeabilized, and stained using 4% paraformaldehyde (Electron Microscopy Services, 15710), 0.1% Triton X100 (Sigma-Aldrich, X100-100ML) and 50 nM Sytox Green (Invitrogen, S7020). Nine fields per well were imaged using the Yokogawa CV8000 automated microscope at 20x magnification. Image features were extracted using CellProfiler (version 4.2.4), followed by analysis using custom supervised machine-learning software to classify NET versus non-NET nuclei based on nuclei features including size, shape, and intensity. Experiments were performed on at least three independent differentiations and three independent donors in at least technical triplicates.
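The authors' supervised classification software is custom and not described further, so the following sketch stands in for it with a logistic regression over two synthetic CellProfiler-style features (area and eccentricity); it only illustrates the NET/non-NET labeling step, not the actual model used:

```python
# Illustrative stand-in for the NET vs. non-NET nuclei classifier: NET nuclei
# are typically larger and more irregular, so two synthetic features suffice
# to sketch the idea. The feature table and labels below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 2000
area = np.concatenate([rng.normal(80, 15, n // 2),    # compact nuclei
                       rng.normal(220, 60, n // 2)])  # decondensed NET nuclei
eccentricity = np.concatenate([rng.uniform(0.2, 0.6, n // 2),
                               rng.uniform(0.5, 0.95, n // 2)])
X = np.column_stack([area, eccentricity])
y = np.array([0] * (n // 2) + [1] * (n // 2))         # 0 = non-NET, 1 = NET

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
print(f"%NET in test wells: {100 * clf.predict(X_te).mean():.1f}")
```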
Conventional cytokine differentiation yields heterogeneous iNeutrophils
First, we generated hemogenic endothelium using cytokines and small molecules following previously published protocols (28,29) (Figures 1A and 1B). Next, we supplemented the hematopoietic progenitors' media with G-CSF to push the cells towards the neutrophil lineage (34). Hematoxylin and eosin images of Day 19 iNeutrophils revealed a variety of cell types with morphologies consistent with different granulocytes and progenitors (Figure 1C). Flow cytometry analysis identified two major populations distinguishable by size (forward-scatter) and granularity (side-scatter) (Figure 1D). The larger, more granular cells expressed high levels of the non-neutrophil granulocyte surface markers Siglec-8 and CD193, while the smaller, less granular cells expressed high levels of the neutrophil surface markers CD15 and CD16. These smaller, less granular cells also expressed lower levels of the hematopoietic progenitor marker CD33 and higher levels of the mature granulocyte marker CD66b, suggesting this population is immunophenotypically like mature neutrophils. As expected, the pan-hematopoietic marker CD45 was similar in both populations of floating cells (Figures 1E and 1F).
Next, we sorted the low and high forward-and side-scatter populations using FACS and compared the transcript levels of five regulators of granulocyte specification. While most regulators had a five-fold difference or less in transcript levels between the two groups, GATA1 was upregulated more than 25-fold in the non-neutrophil population ( Figure 1G). Studies in mice demonstrate that Gata1 is critical in the development of eosinophils and basophils, and while it is expressed in the common myeloid progenitor, it is dispensable for the differentiation and function of neutrophils (29,30). These findings suggest that GATA1 is a key gene responsible for specifying the non-neutrophil population and downregulation could restrict cells toward the desired neutrophil fate.
GATA1 KO improves iNeutrophil specification
We devised a novel differentiation approach by deleting GATA1 in our hESCs to restrict their differentiation capacity to the desired neutrophil cell type. Like their WT counterparts, GATA1 KO hESCs were able to self-renew and expressed high levels of the pluripotency markers OCT4 and NANOG ( Figure 2A). Upon differentiation, both the WT and GATA1 KO cells downregulated these pluripotency genes and began expressing the hematopoietic transcription regulators SPI1 and GFI1 ( Figure 2B). By Day 12, the WT and GATA1 KO hematopoietic progenitors showed differences in gene expression suggesting the GATA1 KO cells were more neutrophil-like than the WT cells. The Day 12 GATA1 KO hematopoietic progenitors expressed significantly higher levels of the neutrophil genes AZU1, AQP9, ELANE and MPO, and significantly lower levels of the eosinophil and basophil-specific gene CLC (Figures 2C and 2D). As seen with primary neutrophils, Day 19 WT and GATA1 KO iNeutrophils expressed low levels of mRNA which prevented us from comparing gene expression with early time-points (35,36).
Hematoxylin and eosin images of Day 19 GATA1 KO iNeutrophils showed a dramatic increase in the number of cells with the classic neutrophil multilobulated nuclear morphology (Figures 2E and S2).
Furthermore, the GATA1 KO cells generated on average 17 x 10 6 cells from each 6 cm dish, more than double that of the WT cells, demonstrating that this improved differentiation method can produce at-scale numbers of homogeneous iNeutrophils ( Figure 2F).
GATA1 KO iNeutrophils share many characteristics of primary neutrophils
Surface proteins on immune cells mediate cell communication and signal transduction and are often used to distinguish different granulocytes. We used fluorophore-conjugated antibodies specific to basophil, eosinophil, and neutrophil surface proteins ( Figure 3A) and flow cytometry to compare WT and GATA1 KO iNeutrophils versus primary neutrophils. Staining WT iNeutrophils using antibodies against Siglec-8 showed that roughly 50% of the cells adopted an eosinophil phenotype, supported further by the coexpression of CD193 in 25% of the total floating cells (Figures 3B and S3A). Additionally, 26% of the WT cells expressed the non-neutrophil granulocyte marker CD49d. Alternatively, 5% of the GATA1 KO iNeutrophils expressed Siglec-8 and 4% co-expressed CD193. Only 6% of the GATA1 KO iNeutrophils expressed CD49d (Figures 3B and S3A).
In contrast, we saw a dramatic increase not only in the number of CD182 (94%) and CD11b (98%) positive GATA1 KO iNeutrophils, but also in the magnitude of signal (Figure 3C). More than 90% of the GATA1 KO iNeutrophils expressed CD15 and CD16, and more than 85% expressed CD66b. Multiplexed staining revealed that more than 80% of the GATA1 KO iNeutrophils were co-positive for the neutrophil surface markers tested, compared to 14% of the WT iNeutrophils (Figure S3B). A list of surface proteins and expression percentages for primary neutrophils, WT and GATA1 KO iNeutrophils is available in Supplemental Table S3.
The GATA1 KO iNeutrophils produced a homogeneous forward-and side-scatter profile which largely localized to the previously determined neutrophil-like cell population seen in WT cells (Figures 1D, 1E, and 3D). The diffuse, non-neutrophil like population was dramatically reduced. Interestingly, the GATA1 KO iNeutrophil population overlaps with the forward-and side-scatter profile seen in primary neutrophils ( Figure 3D). Taken together, these results clearly show a remarkable similarity in surface protein expression, size, and granularity between the GATA1 KO iNeutrophils and primary neutrophils.
GATA1 KO does not impact host defense functions
Neutrophils are a critical component of innate immunity and kill invading microorganisms through phagocytosis, MPO release, and ROS production. Analysis revealed that the GATA1 KO iNeutrophils retained these important functions.
The WT and GATA1 KO iNeutrophils were able to phagocytose human serum opsonized fluorescent microspheres in vitro; however, uptake in the WT cells was reduced relative to the GATA1 KO cells (22 ± 8% vs. 41 ± 14%, respectively). While the GATA1 KO iNeutrophils had a slightly lower rate of phagocytosis relative to the primary neutrophils (46 ± 29%), the GATA1 KO iNeutrophils were less variable. As expected, baseline phagocytosis was inhibited in all groups after treatment with the actin polymerization inhibitor cytochalasin D (Figures 4A and 4B).
The GATA1 KO iNeutrophils retained their MPO activity to the same degree as WT iNeutrophils, but at elevated levels relative to primary neutrophils ( Figure 4C). Additionally, WT and GATA1 KO iNeutrophils along with primary neutrophils generated baseline ROS and were further stimulated through treatment with 10 nM phorbol myristate acetate (PMA). As expected, ROS production was inhibited with the selective protein kinase C (PKC) inhibitor sotrastaurin in all groups ( Figure 4D).
GATA1 KO iNeutrophils form NETs like primary neutrophils
The formation of NETs was assessed in the WT and GATA1 KO iNeutrophils after stimulation with PMA, the calcium ionophore A23187, and the bacterial toxin LPS. These stimulants were chosen because they induce NETs using diverse pathways. Treatment with 50 nM PMA stimulated NETs in similar numbers of WT (50 ± 5%), GATA1 KO iNeutrophils (51 ± 13%), and primary neutrophils (67 ± 4%). Similarly, A23187 was able to induce NETs in both WT and GATA1 KO iNeutrophils, while WT iNeutrophils produced more NETs after stimulation (83 ± 4%) compared to both GATA1 KO iNeutrophils (62 ± 6%) and primary neutrophils (51 ± 8%) ( Figures 5A and S4A). Considering both PMA and A23187 also generate extracellular traps (ETs) in other granulocytes, and flow cytometry determined that roughly 50% of the WT cells expressed the eosinophil surface marker Siglec-8, the large number of ETs seen in the WT iNeutrophils could be a product of non-neutrophil stimulation (37,38).
While LPS is a well-described, physiologically relevant NET stimulant, it failed to induce significant NET formation in WT iNeutrophils (17 ± 12%) relative to DMSO controls (6 ± 1%), highlighting a severe limitation with this cell model (Figures 5A and 5B). Importantly, we observed that GATA1 KO addresses this gap through restoring sensitization to LPS and generating significant NETs (72 ± 14%) relative to DMSO controls (17 ± 12%).
NET formation can be inhibited in GATA1 KO iNeutrophils using small molecules as in primary neutrophils
While PMA is non-physiologic, it is a commonly used tool to study NETs in vitro because it reliably activates relevant pathways (39). Like many physiologic NET-forming stimulants, PMA activates PKC and subsequently generates ROS through the NADPH oxidase (NOX) complex. This releases MPO, which helps decondense chromatin and expel DNA into the extracellular environment through the pore-forming protein Gasdermin D (40)(41)(42). We investigated the fidelity of this pathway in the WT and GATA1 KO iNeutrophils using the PKC inhibitor sotrastaurin, the NOX inhibitor diphenylene iodonium (DPI), the MPO inhibitor 4-aminobenzoic acid hydrazide (4-ABAH) and the proposed Gasdermin D inhibitor disulfiram. In line with primary neutrophils, NET formation in both the WT and GATA1 KO iNeutrophils was significantly reduced after pre-treatment with these selective inhibitors (Figures 5C, 5D and S4B).
These results demonstrate that the GATA1 KO iNeutrophils respond to known NET inhibitors and can be used in screens to find novel small-molecule NET inhibitors.
GATA1 KO iNeutrophils can be genetically edited and used for target validation
CYBB encodes gp91phox, a component of the multi-protein NADPH oxidase complex that is critical for NOX-dependent NET formation (39,43). To test whether GATA1 KO iNeutrophils could be leveraged for functional genomic approaches, we knocked out CYBB using CRISPR/Cas9 in the GATA1 KO hESCs, differentiated the cells to Day 19 iNeutrophils and stimulated the cells with PMA to induce NETs. Upon stimulation with PMA, 44 ± 13% of GATA1 KO iNeutrophils with intact gp91phox (control gRNA) generated NETs compared to 6 ± 4% without gp91phox (Figures 6A and 6B). These results establish that our GATA1 KO iNeutrophils form NETs with diverse stimuli like primary neutrophils, and that NET formation can be inhibited pharmacologically and genetically. We conclude that GATA1 KO iNeutrophils overcome a major limitation associated with primary neutrophils by enabling the identification and validation of targets modulating neutrophil functions.
Discussion
In this study, we developed a novel method that dramatically increases the efficiency of differentiation, maturity, and functionality of hPSC-derived neutrophils. Our method was adapted from previously published protocols to generate hematopoietic progenitors, with the addition of G-CSF between Day 12 and 18 to establish a granulocyte program, generating WT iNeutrophils. Recent reports optimizing the generation of iNeutrophils utilize gene overexpression to overcome differentiation challenges and deficiencies in host defense functions (37,38). Overexpression and modification of genes can enhance iNeutrophil behavior in vitro, but strays from primary neutrophils in ways that may not be readily apparent.
These methods also require genetic manipulation during each round of cell production, adding delivery challenges and/or FACS to purify targeted cells. Unlike these protocols, our method is amenable to engineering at the self-renewing hPSC stage where modified cells can be expanded and banked for further use.
Flow cytometry analysis of WT iNeutrophils confirmed previous observations that they are composed of two distinct populations: one characterized by an immunophenotype typical of primary neutrophils, and the other of either non-neutrophil granulocytes or hematopoietic progenitors. Sorting these two populations and comparing the gene expression of five granulocyte regulators revealed that the neutrophil-like population expressed low GATA1 while the non-neutrophil population highly expressed GATA1.
From this observation, we hypothesized that knocking out GATA1 in the hESCs before differentiation would push them towards neutrophils and away from other fates. hESCs with CRISPR/Cas9 deletion of GATA1 expressed high levels of pluripotency genes, which were lost upon differentiation, consistent with the behavior and expression changes seen in WT cells. After hematopoietic induction, levels of the hematopoietic progenitor markers SPI1 and GFI1 rose in the Day 6 monolayer cells. Large numbers of cells began shedding off the supporting monolayer between Day 7 and Day 8, and floating GATA1 KO cells on Day 12 expressed higher levels of neutrophil specific genes relative to the WT control, supporting our hypothesis that GATA1 removal encourages a neutrophil program.
In line with these findings, flow cytometry analysis showed a very high degree of similarity between the GATA1 KO iNeutrophils and primary neutrophils. Other methods report producing populations with roughly 50% CD11b positive cells, and low levels of CD66b (37). GATA1 KO enhances neutrophil specification and produces greater than 95% CD11b+ and 85% CD66b+ cells, with 81% expressing all the neutrophil surface proteins tested, precluding time-consuming and resource-intensive sorting. Our method therefore constitutes a substantial improvement over previously described approaches.
Prior to this study, the formation of NETs in iNeutrophils had mostly been assessed using PMA, and while PMA does robustly activate specific NET pathways, it is not a physiologic stimulant. Additionally, PMA-stimulated ETs are not unique to neutrophils and occur in eosinophils (37,38). This suggests that PMA-stimulated ETs observed in cells made following previous iNeutrophil protocols (which generate heterogeneous granulocyte populations) could come from non-neutrophil cells. Conversely, LPS is a physiologic bacterial cell wall component known to stimulate NETs in vitro in primary neutrophils, and by itself does not evoke DNA release in eosinophils (44). Furthermore, studies demonstrate differential production of ETs from neutrophils and eosinophils in human disease, stressing the mechanistic differences between the two cell types (45). Current iNeutrophil protocols generate heterogeneous populations of different granulocytes, and while these cells form PMA-stimulated ETs, they likely do not capture the disease-relevant nuances of neutrophil-specific NETs. The relevance of these cells in NET studies is therefore limited. The GATA1 KO iNeutrophil model overcomes these limitations by generating pure cultures of neutrophil-like cells that respond to diverse NET stimulants like their primary counterparts. This is highlighted by the restoration of NET formation after stimulation with LPS.
While screens using primary neutrophils have uncovered drugs that inhibit NET formation (41), the targets of these drugs remain extremely challenging to pinpoint. Even if a ligand partner is discovered, this does not rule out off-target modalities. For instance, a group using the potent neutrophil elastase inhibitor GW311616A concluded that neutrophil elastase is critical for NET formation (46). Follow-up work employing selective neutrophil elastase inhibitors and knock-out mice disputes these findings, suggesting GW311616A's NET inhibition mechanism is likely off-target (47,48). Furthermore, validating targets using CRISPR/Cas9 knockouts in primary human cells is challenging due to the neutrophils' extremely short lifespan ex vivo, and classic gene silencing techniques such as siRNA or shRNA may not be effective in neutrophils, which are rather stable and transcriptionally silent. Through the deletion of CYBB, we show how our iNeutrophils provide a clean method to quickly validate targets without the uncertainty of compound off-target effects.
While targeting overactive NETs in disease is therapeutically attractive, interfering with other host-defense activities like phagocytosis and ROS release leaves patients vulnerable to infections (49,50).
Because GATA1 KO iNeutrophils retain their other host defense capabilities, NET target knockout cells can serve as a tool to address the impact on these critical functions.
In conclusion, our differentiation method overcomes the limitations of previously published protocols by deleting GATA1 at the self-renewing hPSC stage, yielding pure, mature, and functional iNeutrophils at scale.
"Medicine",
"Biology"
] |
Monte Carlo Simulation of a Modified Chi Distribution Considering Asymmetry in the Generating Functions: Application to the Study of Health-Related Variables
Random variables in biology, social and health sciences commonly follow skewed distributions. Many of these variables can be represented by exGaussian functions; however, in practice, they are sometimes treated as Gaussian functions when statistical analysis is carried out. The asymmetry can play a fundamental role which cannot be captured by central tendency estimators such as the mean. By means of Monte Carlo simulations, the effect of a small asymmetry in the generating functions of the chi distribution is studied. To this end, the k generating functions are taken as exGaussian functions. The limits of this approximation are tested numerically for the practical case of three health-related variables: one physical (body mass index) and two cognitive (verbal fluency and short-term memory). This work is in line with our previous works on a physics-inspired mathematical model to represent the reaction times of a group of individuals.
Introduction
The chi and chi-squared distributions are well-known continuous probability distributions widely used in Applied Statistics [1][2][3][4][5]. The fact that they can be generated by a set of Gaussian-distributed random variables makes them amenable to simulations. We devoted a previous work to study the percentile ratios in a chi distribution [6].
A chi distribution of k = 3 degrees of freedom is found in physics to model the velocities of the independent particles of an ideal gas in thermodynamic equilibrium. Similarly, a chi-squared distribution models the energies of the particles in the same physical system. Another typical case from physics is the Rayleigh distribution (chi of k = 2 degrees of freedom).
In one of our previous works [7], we found a new interesting application of the chi distribution of k = 3 degrees of freedom, that is, of the Maxwell-Boltzmann (MB) distribution. In reference [8] we proved that the reaction times (RTs) of children responding independently to visual stimuli in a short time (hundreds of milliseconds) and without exchanging information are correlated. We interpreted this fact as an experimental evidence for the existence of a system of individuals (or collective). In order to gain insights into this correlation, we developed a physics-inspired mathematical model in reference [7] to represent these correlations. In fact, we could elucidate a correspondence between a system of particles and a group of correlated individuals.
We are rather interested in the conceptual modelling of different situations using the chi distribution. In this respect, in our recent work we have been studying the limits within which the chi distribution can still represent well probability distributions originating from generating functions which are not necessarily Gaussians of equal variances. For example, in reference [9], we studied the limits of the chi modelling for the case of unequal variances in the generating Gaussians. We also proposed a discrete model and an ansatz to calculate the only parameter of this distribution as a function of the unequal variances of the generating Gaussians.
In line with our previous works [7,9], we have extended here the simulation study of the chi distribution for the case of asymmetric generating functions. In this respect, the exGaussian function is a simple, flexible and intuitive function that can be used to represent a skewed distribution as it results from the convolution between the Gaussian and exponential decay functions. The convolution between two functions can be easily simulated as the sum of the respective randomly generated variables. Two practical examples which can be represented by exGaussians are the reaction time distributions in Experimental Psychology [10][11][12][13][14][15][16][17] or the peaks in Chromatography [18].
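As an illustration of this convolution property, the following minimal sketch (our own Python illustration, not the FORTRAN 90 code used in this work; all names are ours) draws exGaussian samples as the sum of a Gaussian and an exponential variate and compares the empirical skewness with the theoretical value γ = 2τ³/(σ² + τ²)^(3/2):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def exgaussian_sample(mu, sigma, tau, size):
    """Sample an exGaussian variable as Gaussian(mu, sigma) + Exponential(tau)."""
    return rng.normal(mu, sigma, size) + rng.exponential(tau, size)

mu, sigma, tau = 0.0, 1.0, 0.6
z = exgaussian_sample(mu, sigma, tau, 10**6)

# Theoretical skewness of the exGaussian versus the sample estimate
gamma_theory = 2 * tau**3 / (sigma**2 + tau**2) ** 1.5
print(gamma_theory, stats.skew(z))  # the two values should agree closely
```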
In this paper we will carry out Monte Carlo simulations to study the distribution Z that originates from combining k generating functions with a certain asymmetry ($Z_j$), as $Z = \left(\sum_{j=1}^{k} Z_j^2\right)^{1/2}$, and we will evaluate the result by means of a fit to a chi distribution. The level of asymmetry is introduced by using exGaussians as generating functions. Our aim is to explore the level of asymmetry for which the fit to a chi distribution can still be considered reasonably good for practical applications. Our approach is useful to model multiple situations in the health and social sciences where random variables commonly follow asymmetrical distributions. In this respect, an example involving health-related variables is also included in this work.
Generalities on the Chi Distribution
The chi distribution is a continuous probability distribution of a random variable defined as
$$X = \left(\sum_{j=1}^{k} X_j^2\right)^{1/2}, \qquad (1)$$
where each of the $X_j$, $j = 1, \dots, k$, is a Gaussian-distributed independent random variable with mean zero and variance normalised to unity. In Ref. [9] we analysed the case of a different value of the variance for each Gaussian component j. In the present paper we study the case in which the $X_j$ components deviate from exact Gaussianity, instead exhibiting a certain degree of asymmetry. We model the deviation of each component from pure Gaussianity by considering them as exGaussian distributions. The exGaussian distribution [10] is given by
$$f(x;\mu,\sigma,\tau) = \frac{1}{2\tau}\exp\!\left[\frac{1}{2\tau}\left(2\mu + \frac{\sigma^2}{\tau} - 2x\right)\right]\operatorname{erfc}\!\left(\frac{\mu + \sigma^2/\tau - x}{\sqrt{2}\,\sigma}\right), \qquad (2)$$
where erfc is the complementary error function. The above $f(x;\mu,\sigma,\tau)$ is the result of convoluting the pure Gaussian
$$g(x;\mu,\sigma) = \frac{1}{\sigma\sqrt{2\pi}}\exp\!\left[-\frac{(x-\mu)^2}{2\sigma^2}\right] \qquad (3)$$
with the exponential distribution
$$h(x;\tau) = \frac{1}{\tau}\,e^{-x/\tau}, \quad x \ge 0, \qquad (4)$$
where μ and σ are the mean and standard deviation of the Gaussian component and τ is the decay constant of the exponential. Let us recall that, if a random variable X is distributed according to (3), and a random variable Y is distributed according to (4), then the sum X + Y is distributed according to the exGaussian (2). In this way, the parameters μ, σ are not the mean and the standard deviation of the exGaussian distribution (2), but those of its Gaussian component (3). The parameter τ is thus a measure of the skewness of the exGaussian distribution, i.e., of its deviation from pure Gaussianity; the value τ = 0 corresponds to a pure symmetric Gaussian [9].

On the other hand, the probability density function corresponding to (1) is given by
$$f(x;k) = \frac{x^{k-1}\,e^{-x^2/2}}{2^{k/2-1}\,\Gamma(k/2)}. \qquad (5)$$
In the case when all the variances of the k generating Gaussians take a value different from one, the above $f(x;k)$ generalises to [9]
$$f(x;k,B) = \frac{x^{k-1}\,e^{-x^2/(2B)}}{2^{k/2-1}\,B^{k/2}\,\Gamma(k/2)}, \qquad (6)$$
where B is related to the variance of distribution (6). The chi distribution as stated in (1) describes a k-dimensional ideal gas of free, independent particles; in this latter case, the k variances are all equal. The cumulative distribution function corresponding to (6) is given by
$$F(x;k,B) = 1 - \frac{\Gamma\!\left(k/2,\, x^2/(2B)\right)}{\Gamma(k/2)},$$
where Γ(a, b) is the upper incomplete gamma function [19].

A particularly interesting case of the above appears in the statistical mechanics of ideal gases [20]: the chi distribution with k = 3 degrees of freedom. Then the random variables $X_j$ are the three components $v_x$, $v_y$, $v_z$ of the velocities of the particles. These components are Gaussian-distributed, and their modulus $(v_x^2 + v_y^2 + v_z^2)^{1/2}$ is distributed according to (6). This special case, called the Maxwell-Boltzmann distribution [21,22], is such that all three component distributions are centered around $v_j = 0$, and the three variances are all equal (and proportional to the temperature of the gas). A k-dimensional ideal gas would be represented by (6).
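The pdf (6) and the CDF above can be checked numerically with a short sketch (our own illustration; the function names are ours, not from the paper). The regularised upper incomplete gamma function Γ(a, b)/Γ(a) is available in SciPy as gammaincc:

```python
import numpy as np
from scipy.special import gamma, gammaincc

def chi_pdf(x, k, B):
    """Generalised chi pdf of Eq. (6); B = 1 recovers the standard chi pdf (5)."""
    return x ** (k - 1) * np.exp(-(x**2) / (2 * B)) / (
        2 ** (k / 2 - 1) * B ** (k / 2) * gamma(k / 2)
    )

def chi_cdf(x, k, B):
    """CDF: 1 - Gamma(k/2, x^2/(2B)) / Gamma(k/2)."""
    return 1.0 - gammaincc(k / 2, x**2 / (2 * B))

print(chi_cdf(8.0, 3, 1.0))  # ≈ 1, as expected far in the upper tail
```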
Monte Carlo Simulations for Non-Gaussian Generating Distributions Z j
In this paper we perform Monte Carlo simulations to generate one random variable Z obtained as $Z = \left(\sum_{j=1}^{k} Z_j^2\right)^{1/2}$. Each of the $Z_j$ is a random variable whose probability density function resembles a Gaussian but has some degree of asymmetry, γ = 2τ³/(σ² + τ²)^{3/2} for the exGaussian [10]. In this work we considered γ < 1.7 (see Figure 1 and Tables 1 and 2). This asymmetry is implemented by considering distributions such as the exGaussians presented in (2). As stated above, a random variable following an exGaussian can be simulated by summing a random variable following a Gaussian distribution and another following an exponential decay distribution. We first simulate k generating exGaussians, each with a vanishing mean and with standard deviations σ_j all equal to one. A total of 10^6 random numbers were generated to obtain the probability distribution of the variable Z. The generating exGaussian random variables ($Z_j$) are chosen to have different levels of asymmetry. All fittings are performed using the non-linear Levenberg-Marquardt algorithm [23,24]. We used the FORTRAN 90 programming language for all calculations; the machine epsilon is 2.220446 × 10^−16 for the "double precision" real type. The same methodology was followed as in our previous article [9], where the modified chi distribution was studied for the case of unequal variances in the generating Gaussians. The cases of k = 3 and k = 5 degrees of freedom are developed hereafter in a general way.
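The simulation loop can be sketched as follows (an illustrative Python transcription of the procedure, not the authors' FORTRAN 90 source; for simplicity, the standardisation by the mode described below is omitted). For unbounded problems, scipy.optimize.curve_fit uses the Levenberg-Marquardt algorithm:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma

rng = np.random.default_rng(1)

def chi_pdf(x, B, k=3):
    # Generalised chi pdf of Eq. (6) with free parameter B
    return x ** (k - 1) * np.exp(-(x**2) / (2 * B)) / (
        2 ** (k / 2 - 1) * B ** (k / 2) * gamma(k / 2)
    )

# k generating exGaussians (mu = 0, sigma = 1, illustrative tau values)
k, n, taus = 3, 10**6, [0.4, 0.5, 0.6]
Zj = np.array([rng.normal(0, 1, n) + rng.exponential(t, n) for t in taus])
Z = np.sqrt((Zj**2).sum(axis=0))

# Histogram of Z and Levenberg-Marquardt fit of Eq. (6)
counts, edges = np.histogram(Z, bins=200, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
(B_fit,), _ = curve_fit(lambda x, B: chi_pdf(x, B, k), centers, counts, p0=[1.0])
print("fitted B:", B_fit)
```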
Measurement of Goodness of Fit between Data and Models
Statistical analysis was performed to measure the quality of the proposed model with respect to the data, including both simulations and real health-related variables. The goodness of fit of the model with respect to the data was first studied by the coefficient of determination R². It quantifies the fraction of data variance that can be explained by the model [25], with values in the range [0, 1] ranging from a null fit to a perfect fit.
In addition, a non-parametric test was applied to quantify the equality of the two continuous probability distributions under comparison in each case: the exGaussians, and the distributions of the simulated or real data. The Kolmogorov-Smirnov (KS) distance is the maximum vertical distance between the cumulative distribution functions (CDFs) of the simulated/real data and of the model [26,27]. This statistic is sensitive to differences in both the location and the shape of the CDFs [28,29]. We also checked the associated p-value to determine whether both distributions can be considered to follow the same distribution (i.e., whether the null hypothesis holds).
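These goodness-of-fit measures can be computed along the following lines (a self-contained sketch with hypothetical data; the reference distribution, here a standard normal, is purely illustrative):

```python
import numpy as np
from scipy.stats import kstest, norm

rng = np.random.default_rng(2)

def r_squared(y_obs, y_model):
    """Coefficient of determination: share of data variance explained by the model."""
    ss_res = np.sum((y_obs - y_model) ** 2)
    ss_tot = np.sum((y_obs - np.mean(y_obs)) ** 2)
    return 1.0 - ss_res / ss_tot

# KS distance and p-value between a sample and a candidate CDF
data = rng.normal(0, 1, 10_000)
ks_stat, p_value = kstest(data, norm.cdf)
print(ks_stat, p_value)  # small distance, large p-value: same distribution
```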
Finally, quantile-quantile (Q-Q) plots are also depicted to study the goodness of fit between data and models [30]. Each point (x, y) of the plot corresponds to one of the quantiles of the first distribution (simulated/real data) plotted against the same quantile of the second distribution (the model). Thus, the points of the Q-Q plot lie approximately on the line y = x when the data follow the same distribution. We used these probability plots to confirm that both probability distributions (simulated/real data and the model) were in good agreement.

Figure 1 shows the probability densities of the exGaussian random variables (Z_j) obtained for four different values of the parameter τ, and therefore of the asymmetry γ, in the generating exGaussians. The coefficient of determination (R²) of a Gaussian fit (red solid line) has also been included. The larger the value of γ, the lower the value of R², as expected. For the sake of clarity, quantile-quantile plots of these probability densities are also shown in Figure 2. The good quality of the fittings can be observed, since the points lie approximately on a line. This is also reflected in the low values of the Kolmogorov-Smirnov distances and in p-values higher than 0.5, which indicate that the null hypothesis (the data follow the same distribution) cannot be rejected.

Figure 3 shows the asymmetry of the generating exGaussian random variables (Z_j) as a function of τ for μ = 0 (mean) and σ = 1 (standard deviation). It can be seen that an almost constant value of the asymmetry γ is reached as τ increases. Values of asymmetry in the linear region of this curve were chosen for this work (0.18 < γ < 1.67); a linear fit in this range yields a coefficient of determination of 0.97.

The resulting distributions of the variable Z for three and five generating exGaussians, and for different values of the asymmetry, were fitted using chi distributions of k = 3 and k = 5 degrees of freedom, respectively. All the generating exGaussians were centered at, and divided by, the corresponding mode in order to standardise the resulting distribution. The mean asymmetry is calculated over the asymmetry values within each of the sets shown in Tables 1 and 2. The values of the coefficient of determination R² are represented in Figure 4 versus the mean asymmetry for several increasing values of asymmetry. A total of 10^6 random numbers were used to obtain the probability distributions to which the chi distributions were fitted.

Table 1. Results for different levels of asymmetry when considering k = 3 generating exGaussians. The columns show, in order: the set number; the values of τ for each of the generating exGaussians (τ1, τ2, and τ3); the corresponding percentage difference between the smallest and the largest value of τ (e_τ); the mean asymmetry among the three generating exGaussians (⟨γ⟩); the calculated parameter (B_calc); the fitted parameter (B_fit); the mean coefficient of determination (⟨R²⟩); and the mean percentage difference between B_calc and B_fit over the values within the set (⟨e_B⟩).

Table 2. Results for different levels of asymmetry when considering k = 5 generating exGaussians. The columns show, in order: the set number; the values of τ for each of the generating exGaussians (τ1, τ2, τ3, τ4, and τ5); the corresponding percentage difference between the smallest and the largest value of τ (e_τ); the mean asymmetry among the five generating exGaussians (⟨γ⟩); the calculated parameter (B_calc); the fitted parameter (B_fit); the mean coefficient of determination (⟨R²⟩); and the mean percentage difference between B_calc and B_fit over the values within the set (⟨e_B⟩).

A change in the τ values of the generating exGaussians leads to a change in their variances (S²), which depend on this parameter as S² = σ² + τ² [10]. In Figure 5, the values of B_fit (the fitted B parameter of the chi distribution (6)) are compared with the B value calculated from B_calc = [(S1 + S2 + S3)/3]² − ⟨γ⟩². This expression extends the ansatz defined in our previous reference [9], B = [(S1 + S2 + S3)/3]², to the case when a small asymmetry is present. It should be noticed that the exGaussian parameters involved in the calculation of B_calc (i.e., σ and τ) should be divided by the corresponding mode of each generating exGaussian.
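In code, the extended ansatz reads as follows (our sketch; the τ values are illustrative, and the division by the mode prescribed above is omitted for brevity):

```python
import numpy as np

sigma, taus = 1.0, np.array([0.4, 0.5, 0.6])
S = np.sqrt(sigma**2 + taus**2)                      # standard deviations of the exGaussians
gammas = 2 * taus**3 / (sigma**2 + taus**2) ** 1.5   # asymmetries of the exGaussians

B_calc = S.mean() ** 2 - gammas.mean() ** 2          # extended ansatz with the mean asymmetry
print(B_calc)
```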
The results shown in Figures 4 and 5 are summarised in Tables 1 and 2, respectively. For very low values of asymmetry (< 0.8), the difference between B_calc and B_fit (i.e., the error e_B) remains very small (< 6%) and R² is reasonably good (> 0.97). However, for asymmetry values larger than 1.2 the results for e_B are higher and R² is lower, especially for k = 5, where R² falls below 0.9 and e_B exceeds 18%.

In order to illustrate the presented work with real data, three health-related variables were chosen from the seventh wave of SHARE (Survey of Health, Ageing and Retirement in Europe) (released 17 December 2020) [31][32][33]. The three variables in this example were one related to physical health (body mass index, BMI) and two related to cognitive frailty (verbal fluency, measured as the number of animals named within a minute; and short-term memory, measured as the number of words the participant was able to repeat from a 10-word list). We considered a sample of 1503 participants from several European countries. We chose these three variables to illustrate the results because they are numeric and not categorical, in which case the proposed representation might be compromised.

The empirical probability distributions of these variables were fitted by exGaussian functions (Figure 6A-C) with a good coefficient of determination. For the sake of clarity, quantile-quantile plots of these probability densities are also shown in Figure 7, where the quantiles of both probability distributions lie close to a straight line. In addition, the good quality of the fittings can be checked through the small Kolmogorov-Smirnov distance values, while the p-values are higher than 0.05, showing non-significant differences. Figure 6D shows the probability distribution of the random variable $Z = \left(\sum_{j=1}^{3} Z_j^2\right)^{1/2}$ and the corresponding fit to a MB distribution (see (6) for k = 3). The variables Z_j stand for body mass index, verbal fluency, and short-term memory, which were standardised [7,9]. The parameters, uncertainties, and coefficients of determination (R²) from the exGaussian and MB fittings are included in Table 3. This new variable, Z, combines the values of the three health-related variables into a unique value for each individual, which can be considered as a new index able to characterise each individual in the sample. The MB-like distribution in Figure 6D models the probability distribution of Z in the sample. Thus, the entire sample can be modelled by only one parameter, namely, the parameter B of the MB distribution. We illustrated this methodology for three variables, but it can be extended to any number k of variables. The methodology developed in this work can have potential applications in diverse areas, for instance, to model health-related [34] and psychological variables [16,35].

Figure 6: (a) BMI, (b) verbal fluency, (c) short-term memory, and the corresponding fit to a MB distribution (d). Each panel also shows the Kolmogorov-Smirnov distance and the associated p-value.

Table 3. Parameters (μ, σ, and τ), uncertainties (Δμ, Δσ, and Δτ), and coefficient of determination (R²) from the exGaussian fitting of the analysed variables. The last two rows include the results of the MB fitting. The fitted parameter (B_fit) is compared with the calculated parameter (B_calc, with the ansatz introduced in this work), yielding a percentage difference of e_B = 7.17%.
Conclusions
The influence of asymmetry on the chi distribution was investigated by fitting this function to the distribution resulting from taking three and five generating exGaussian functions. The results indicate that, for very small asymmetries (γ < 0.8) in the generating functions, good values of the coefficient of determination (R² > 0.97) are still obtained when the simulated distribution is fitted with a chi function, for both k = 3 and k = 5. The results for k = 3 remain good for asymmetries larger than 1.2, while they worsen for k = 5. We also extended the ansatz proposed in [9] to include small asymmetries. As a practical example illustrating the results of the Monte Carlo simulations, three health-related non-dichotomic variables (body mass index, verbal fluency and short-term memory) were studied. These variables were combined by taking the square root of the sum of their squares. The resulting new variable can be fitted by a Maxwell-Boltzmann (MB) distribution; thus, the entire sample can be characterised by a one-parameter distribution, namely by B. The values of the MB variable can be considered as a new index Z able to characterise each individual in the sample. In this article we chose three variables, but this methodology can be extended to any number of variables that can be combined into a single scalar, which is the variable of the resulting chi distribution.
"Mathematics",
"Medicine"
] |
New iodine-apatites: synthesis and crystal structure
The paper describes methods for the preparation of compounds with an apatite structure containing only iodine atoms in the "halogen" position. The crystal structure of the compounds was refined by the Rietveld method. The resulting apatites have a structure with space group P63/m and the following unit cell parameters: Ba(4f)1.78(2)Ba(6h)2.75(2)(PO4)3I0.04(2) (a = 10.18609(34) Å, c = 7.71113(30) Å, V = 692.889(54) Å³, R = 5.448%), Pb(4f)1.82(2)Pb(6h)2.75(2)(PO4)3I0.13(2) (a = 9.87882(18) Å, c = 7.43222(16) Å, V = 628.144(26) Å³, R = 8.533%), Pb(4f)1.90(2)Pb(6h)2.68(2)(PO4)3I0.16(2) (a = 9.87058(48) Å, c = 7.41255(46) Å, V = 625.437(72) Å³, R = 5.433%). The study of the crystal structure showed a relatively low efficiency of the binding of iodine in the apatite matrix.
Introduction
Currently, 37 isotopes of iodine are known. Among them, the greatest attention is paid to iodine-129: it is one of the seven long-lived fission products of uranium and plutonium, and significant quantities of it entered the atmosphere as a result of the nuclear tests of the 1950s and 1960s [1]. Like the more rapidly decaying iodine-131, it poses a danger to humans, since by its nature it can accumulate in the body.
To bind various isotopes of iodine, including iodine-129, many approaches have been proposed [2,3]:
• the mercurex process, which fixes iodine as mercury compounds;
• the iodox process, which fixes iodine in the form of a solid HI3O8 precipitate;
• the use of sorbents based on titanium oxide, copper, and silver.
In a number of works [4][5][6], it was proposed to use a matrix with the structure of the mineral apatite for binding isotopes of iodine, in particular the long-lived iodine-129 (T1/2 = 1.57(4)×10^7 years).
The structural type of apatite is known for its high isomorphic capacity, owing to which almost all atoms of the periodic table can substitute into the structure, within wide quantitative limits [7,8]. It should also be noted that, unlike the structural types traditionally used as the basis of matrices for binding radionuclides (garnet, perovskite, hollandite, monazite, etc. [9,10]), apatites are able to bind not only cations but also anions; therefore, the attention directed at them in the context of radioactive iodine binding is quite justified.
The general formula of apatites can be written as M(4f)2M(6h)3(AO4)3L, where M stands for cations of different oxidation states located in two crystallographically distinct positions of the structure, A refers to atoms most often prone to the formation of tetrahedral coordination polyhedra (for example, P, V, Si, S), and L is a halogen or an OH− group, as well as O2−, CO3²−, and other ions. The content of cations in the structure is rather high; therefore, geoceramics, including apatites, are studied for binding strontium-90 [11,12]. Taking into account the crystal-chemical similarity of apatite to the mineral part of the native bone of mammals, similar processes can be observed in the human body, and thermodynamic modeling of the limit of strontium accumulation in bone tissue is possible [13].
Attempts to theoretically predict the possibility of binding iodine in the apatite structure were undertaken earlier [14], but a more detailed result was obtained in [15], where, based on machine learning, density functional theory (DFT), and experimental data, 18 new thermodynamically stable compounds with the apatite structure containing iodine anions were predicted. Since attention to the efficiency of the binding of iodine by the apatite matrix does not decrease [16,17], an attempt is made in this work to partially reproduce and expand the results of the theoretical modeling.
Synthesis
To obtain iodine-apatites, two approaches were considered, namely a wet (solution) and a solid-state approach. In the solution method of synthesis, 0.5 M solutions of the nitrate of the corresponding divalent cation and of ammonium hydrogen phosphate, and a solution containing a 2-fold excess of potassium iodide, were used. The general scheme of the reaction can be represented as follows:
5MII(NO3)2·nH2O + 3(NH4)2HPO4 + KI → MII5(PO4)3I + 6NH4NO3 + KNO3 + 3HNO3 + 5nH2O (1)
The resulting precipitates were kept in the mother liquor for a day, then centrifuged, rinsed with hot bidistilled water, and dried in air.
Solid-phase synthesis implied the preparation of a stoichiometric mixture of ammonium hydrogen phosphate together with the nitrate and iodide of a divalent cation (reaction scheme (2)). The reaction mixture was calcined sequentially at temperatures of 300 °C, 500 °C, and 800 °C. The calcination time was 4 h at each stage, with grinding of the mixture in an agate mortar during the transition to each next stage. Both approaches are simpler in terms of hardware than the microwave synthesis proposed in [18], the mechanochemically activated method of [19], or the synthesis using electro-pulse plasma sintering in [4]. The particular amounts of the compounds used are given in Table 1.
We used reagents of analytical grade and chemically pure grade manufactured by the Vekton company, except for lead(II) iodide, which was synthesized by the solution method from saturated solutions of lead nitrate and potassium iodide and was characterized by X-ray diffraction.
Research methods
The phase purity of the obtained compounds was monitored using a Shimadzu XRD-6000 powder diffractometer. Powder X-ray diffraction patterns were recorded in the 2θ range of 10-60° using an X-ray tube with a copper anode (λ(CuKα) = 1.5406 Å) at a voltage of 30 kV and a current of 30 mA.
The chemical purity and composition of the obtained samples were studied with a Shimadzu XRF-1800 spectrometer using the fundamental parameters (FP) method with a standard sample. The intensities of the BaKα, CaKα, PbKα, IKα, and PKα lines were measured three times at 40 kV and 50 mA on a Rh anode, with an FPC detector for P and an SC detector for Ca, Ba, Pb, and I (Table 1). The chemical composition of the samples was also investigated by energy-dispersive X-ray microanalysis (EDXMA) with an Oxford Instruments X-MaxN 20 detector.
To refine the crystal structure, full-profile X-ray analysis (the Rietveld method) was used [20]. The X-ray diffraction patterns were recorded on the same diffractometer in the 2θ range of 10-120°, with an X-ray tube voltage of 40 kV, a current of 40 mA, and an exposure of 11 s per point. The structures of the known apatites with large halogens Sr5(PO4)3Br [21], Cd5(VO4)3I [22], and Pb5(VO4)3I [19] were taken as primary models. The pseudo-Voigt function (PV_TCHZ) was used to describe the peak profile. The crystal structure was refined using the Topas 3.0 software package.
To estimate the particle size, we used both the data from the crystal structure refinement in Topas 3.0 and calculations using the Scherrer formula
$$d = \frac{K\lambda}{\beta \cos\theta},$$
where d is the average crystallite size, K is the dimensionless particle shape factor (the Scherrer constant, taken as 0.9 for spherical particles), λ is the wavelength of the X-ray radiation, β is the width of the reflection at half height, and θ is the diffraction angle [23].
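A minimal numerical sketch of this estimate (our own illustration; the reflection parameters are placeholders, not the paper's data):

```python
import numpy as np

def scherrer_size(beta_deg, two_theta_deg, wavelength=1.5406, K=0.9):
    """Crystallite size d = K*lambda/(beta*cos(theta)); beta is the FWHM."""
    beta = np.radians(beta_deg)            # FWHM converted to radians
    theta = np.radians(two_theta_deg / 2)  # Bragg angle theta = (2theta)/2
    return K * wavelength / (beta * np.cos(theta))  # in angstroms for lambda in Å

# e.g. a reflection at 2theta = 30° with a FWHM of 0.45°
print(scherrer_size(0.45, 30.0) / 10, "nm")  # ≈ 18 nm
```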
Microscopic studies by high-resolution transmission electron microscopy were performed on a JEOL JEM-2100F transmission microscope at a voltage of 200 kV, and scanning electron microscopy was performed on an AURIGA CrossBeam workstation (Carl Zeiss).
Results
In this work, an attempt was made to obtain iodide-trisphosphates of a number of divalent cations (Ca, Sr, Ba, Cd, Pb) with an apatite structure.
X-ray phase analysis of the precipitates obtained in the course of the solution synthesis, as well as of the polycrystalline samples obtained by the solid-phase method, showed that, in the overwhelming majority of cases, orthophosphates of the corresponding divalent cations were obtained. The exceptions were the solution synthesis with lead (hereinafter PbPI (w)) and the solid-phase syntheses with barium and lead (BaPI and PbPI (ss), respectively): their diffraction patterns were similar to the X-ray diffraction patterns of compounds with the apatite structure presented in the Inorganic Crystal Structure Database (ICSD).
X-ray fluorescence analysis showed (Table 2) that the amount of bound iodine in the resulting precipitates is significantly smaller than expected from the theoretical stoichiometry of the compounds (especially in the case of the barium compound).
Full-profile X-ray analysis of the obtained compounds showed that their crystal structure corresponds to the apatite type with space group P63/m of the hexagonal system. In addition, quantitative phase analysis showed that the BaPI sample contains a significant admixture of barium phosphate, the structure of which was taken from [24]. The phase analysis of the lead-containing samples showed the absence of any secondary phases in the final product (Figure 1a-1c, Table 3); in the case of PbPI (ss), this can be explained by the absorption of most of the lead iodide melt by the material of the alundum crucible (Tm(PbI2) = 412 °C). As can be seen from Table 4, all three obtained apatites are characterized by a high defectiveness of the positions occupied by the halogen, which indicates the low efficiency of the apatite structural type with respect to the binding of iodine ions, despite the rather optimistic forecast in [15]. In this case, the iodine ions are located in the crystallographic position 2b (0; 0; 0), which lies, on the one hand, in the hexagonal tunnel of the structure formed by the tricapped trigonal prisms M(4f)O9 (Figure 2a) and, on the other hand, between the quasi-layers formed by the phosphate tetrahedra and the M(6h)O6I2 polyhedra (Figure 2b), which is typical for halogens larger than fluorine in the apatite structure [25][26][27].
The significantly broader diffraction maxima of PbPI (w) compared with the other obtained apatites indicate that the particles of this sample are nanoscale. According to calculations using the Scherrer formula and to the Rietveld structure refinement, the particle size is 19.6 and 27.6 nm, respectively. The particle size was also confirmed by a direct method, scanning electron microscopy (Figure 3).
The anomalous values of the crystal structure distortion index of the obtained apatites are presumably related to the particle size. In a number of works by T. White and colleagues [7,28], the angle φ, defined as the angle of rotation of the bases of the tricapped trigonal prisms M(4f)O9 relative to each other, is considered as such an index (Figure 4). As can be seen from Table 5, the value of this index for the PbPI (w) sample is as close to 0° as possible, while BaPI and PbPI (ss) have values of the angle φ typical for apatites with large divalent cations [7]. It should also be noted that, according to high-resolution transmission electron microscopy data, whiskers, i.e., one-dimensional dislocation-free crystals 50-100 nm long and 5-10 nm wide, formed in the sample during the synthesis of PbPI (ss) (Figure 5a). Figures 5b and 5c also show the crystal structures of agglomerated whiskers. Such whiskers can play the role of a native reinforcing agent in the creation of a ceramic material based on this compound.
Conclusion
Despite the theoretical prediction of the possibility of obtaining various iodine-apatites, in particular iodide trisphosphates, the solution and solid-phase methods yielded only two individual compounds, barium and lead iodide-apatites, with refined compositions such as Ba(4f)1.78(2)Ba(6h)2.75(2)(PO4)3I0.04(2) and Pb(4f)1.90(2)Pb(6h)2.68(2)(PO4)3I0.16(2) (w); pentalead iodide trisphosphate of stoichiometric composition was not obtained. Despite the low iodine content in the obtained phases, such nonstoichiometric compounds nevertheless proved to be quite stable, but only for the largest cations of the studied series, barium and lead. This can be explained by the fact that the iodine anions are located between the layers of the structure formed by the phosphate tetrahedra, while the interlayer distance is determined precisely by the size of the cation in the 4f crystallographic position. In addition, phosphate phases with completely vacant halogen positions (for example, Pb9(PO4)6□ [29]) are known, which have the apatite structure despite the absence of a halogen. The stability of the obtained phases can be attributed to this: they have a largely similar composition and structure and differ only in the presence of a small amount of halogen and of the additional amount of cation necessary to maintain electroneutrality.
An analysis of the crystal structure of the obtained compounds showed that these phases bind about 80% less iodine than the theoretically expected amount, which does not speak in their favor as a promising basis for creating a matrix for binding radioactive iodine. However, it should be noted that, during the solid-phase synthesis of the PbPI apatite, nanowhiskers form directly in the polycrystalline sample, which can have a favorable effect on the strength characteristics of ceramic materials based on it. Moreover, phosphates with the apatite structure are characterized by higher melting and polymorphic-transformation temperatures [30], which makes them preferable as the chemical basis of ceramics for binding radioactive isotopes of iodine.
"Materials Science",
"Geology"
] |
Specifics of Development of the Integral Method of Knowledge Estimation
The problem of objective, testing-based estimation of knowledge acquires new forms and content in the context of new paradigms. An analysis of current test methods suggests that questions with multiple-choice answers, or with answers combining multiple choice and formulation, sometimes do not allow objective estimation of students' knowledge, which reduces the stimulating effect of pedagogical grades on the cognitive activity of students and on the quality of the educational process in general. This article suggests an integral method of knowledge estimation based on a new approach to question and answer formulation that enables free formulation of a test answer. The theoretically justified and experimentally verified data can be used to improve the control and estimation of knowledge in social and humanitarian subjects.
Introduction
In the XXI century, didactics is oriented to strict control of all stages of the educational process, from purpose and content development to verification of results. Therefore, pedagogical science actively seeks ways and means of knowledge control and estimation in order to increase education quality. The scientist K. Ingeskamp thinks that "modern scientifically based didactics is bound to be defeated if it is not based on many tools of maximally objective methods of pedagogic diagnostics." Naturally, this means objective control, i.e., such knowledge assessment methods and, broadly speaking, such pedagogic diagnostics as enable a teacher or a researcher to obtain accurate and complete information on the knowledge level and on the quality of the educational process in general. At the present stage of development, pedagogic science considers testing to be such a means.
Testing is a targeted examination, equal for all testees, performed in strictly controlled conditions and enabling objective estimation of the studied characteristics of the pedagogical process. Objectivity, i.e., independence of the verification and knowledge assessment from the qualified teacher proficient in this field, is an advantage of this form of control. However, from our point of view, the application of this method sometimes does not allow objective estimation of the knowledge level. Here the following question arises: has the testee managed to answer the test through logical reasoning or randomly? Besides, mechanical memorization of the training material is always possible. The commonly used forms of test answers are the following:
1) Selection of the correct and complete answer from a series of proposed ones (correct or incorrect, complete or incomplete, accurate or inaccurate);
2) Selection comprising two parts (the first part requires a selection, and the second one requires justification of the selection);
3) Alternative selection (yes/no, 0/1, true/false);
4) Arrangement of elements from the proposed list in the correct succession;
5) Matching elements from two lists;
6) Statement completion, with indication or selection of omitted words;
7) A one-word (or number) answer;
8) An answer in many words, limited in word order or inter-word connections.
The available forms of presentation of test answers are appropriate, and their applicability in the training process is justified. It is beyond any doubt that didactic tests are, by many characteristics, the most up-to-date method of control and the definite leader among traditional forms of knowledge checks.
The form is defined as a communication method for the arrangement, adjustment, and existence of the content in the general composition of test jobs. The main difficulty of the problem is the contradiction between theoretical and practical attitudes towards the form. The majority of practitioners of the test process find the form of test jobs familiar and quite understandable; therefore, they do not see any problem here. Correspondingly, the practitioners do not see any reason to change the forms or to learn the development of forms and of methods for creating new test jobs. Such a position of practitioners results in the degradation of testing.
Current computer environments enable the development of tests including multiple-choice, numerical, and formulated answers. Multiple-choice answers are the most commonly used: they are easier to prepare (they do not require multiple samples of correct answers, which are difficult to make complete) and, most importantly, easier to use. In the case of multiple-choice answers, students direct their main efforts to performing their tasks and not to choosing answers.
Many years' experience of using testing in the educational process shows that this method has many advantages: audience coverage; simple and efficient verification of results; the possibility of integrated computerization of the testing process; and a reduction in estimation subjectivity.
However, despite all the evident advantages of testing, we have to acknowledge that it also has some disadvantages. These include a deep distortion in students' perception of the integrity of the material; the development of stereotyped thinking; the absence of an innovative approach; verbatim learning of answers; memorizing and copying of test keys; a great dependence of the test reliability rate on the variability of the test scores of the testees in a certain group; etc. This situation implies strong negative consequences: a reduction in the motivating action of grades on the cognitive work of students and on the quality of the educational process in general.
Test questions give pause for thought. The point is that not every subject can be formalized. Formalization is obvious for such subjects as physics, mathematics, mechanics, and others, whereas it is not always possible to formalize knowledge in social and humanitarian subjects. The main reason for the disparity between the prospective and operational capabilities of computers is the delay in the development of methodological problems and of new knowledge evaluation methods.
In order to make test-based knowledge control efficient for the specified subjects, it is necessary to find out the knowledge acquisition level at every educational stage. Herewith, the test tools have to cover all the required characteristics of knowledge acquisition, for example, the ability to illustrate an answer with examples, the ability to express one's own thoughts logically, properly, correctly, etc. Only such forms of knowledge estimation, which are as good in substance as oral control, allow using the great advantages of such an efficient method of accurate and objective knowledge estimation as testing.
Nevertheless, there is such an important problem as the complexity of recognizing the meaning of a text answer. As is known, standard methods of system analysis and computer modeling, based on the precise processing of numerical data, cannot essentially cover the great complexity of the processes of human reasoning and decision-making. Thus, it is hard to escape the conclusion that meaningful estimates of humanitarian systems, including education, require abandoning the high standards of accuracy and rigidity usually expected from the mathematical analysis of clearly defined mechanical systems, in favour of more tolerant approaches similar in nature.
The testing method can be improved through a new approach to the formulation of the answer to a test question, assuming free text formulation of the answer; the answer can then be recognized based on a subject knowledge base developed by experts and estimated based on a package of criteria of its formation, provided that the algorithms of the estimation criteria for the quality of answers have been developed.
Definition of the Integral Method of Knowledge Estimation
The integral method is a method enabling objective estimation of students' knowledge by a series of criteria of its formation. The core of the method is the estimation of students' knowledge by test questions assuming free formulation of the answers.
A Series of Criteria for Estimation of Test Answers
We estimate the answer analysis by the following series of criteria:
• Objectivity. It reflects the basic level of subject knowledge and is determined by comparing the correspondence of the applied descriptors with the thesaurus descriptors or their synonyms. The quality of the criterion "objectivity" is characterized by the coefficient δ, the ratio between the number of correctly applied descriptors and the total number of descriptors relevant to every test question in the thesaurus. This criterion is evaluated by the following formula:
$$\delta = \frac{N}{M},$$
where N is the number of descriptors in the answer corresponding to the thesaurus for every question of the test, and M is the total number of descriptors corresponding to the thesaurus for every question of the test.
• Literacy. This criterion is determined by the rules for structuring text documents. In this research, we restrict ourselves to particular factors, e.g., a spelling check determined by comparing every word in the answer text with the words in the spelling dictionary. The quality of the criterion "literacy" is characterized by the coefficient γ, the ratio between the number of correctly spelled words and the total number of words in the answer text. This criterion is evaluated by the following formula:
$$\gamma = \frac{K}{R},$$
where K is the number of correctly written words in the text of the answer, and R is the total number of words in the answer text.
• Presence of examples. An example in the answer text vividly explains the main question of the test. This criterion is determined by comparing the correspondence of the examples used in the answer text with the words in the database of examples or their synonyms. The quality of the criterion "presence of examples" is characterized by the coefficient φ, the ratio between the number of correct examples in the text of the answer and the total number of examples in the database for every test question. This criterion is evaluated by the following formula:
$$\varphi = \frac{D}{F},$$
where D is the number of examples corresponding to the database of examples or their synonyms for every test question, and F is the total number of examples in the database of examples or their synonyms for every question of the test.
• Logic connections between sentences. These connections are determined by the rules of the language for connections between sentences. The quality of the criterion "logic connections between sentences" is characterized by the coefficient μ, the ratio between the number of logic connections in the answer, determined by the answer incidence matrix (Figure 1), and the maximum possible number of logic connections between the sentences used in the answer text. This criterion is evaluated by the following formula:
$$\mu = \frac{E}{L},$$
where E is the number of logic connections determined by the answer incidence matrix and L is the maximum number of logic connections between the sentences used. The latter is evaluated by the following formula:
$$L = \frac{n(n-1)}{2},$$
where n is the number of sentences in the text of the answer.
The diagonal elements of the matrix determine the number of sentences in the text of the answer. The superdiagonal elements of the matrix determine the connections between the concerned sentences and the subsequent sentences of a text answer.
• Complexity. It characterizes the quality of the test answer in general and is determined by the presence of connections between the considered criteria: objectivity, literacy, presence of examples, and logic connections between sentences. The quality of the criterion "complexity" is characterized by the coefficient η. This criterion is evaluated by the following formula:
$$\eta = \frac{\delta + \gamma + \varphi + \mu}{z},$$
where δ is the coefficient obtained by the criterion "objectivity"; γ is the coefficient obtained by the criterion "literacy"; φ is the coefficient obtained by the criterion "presence of examples"; μ is the coefficient obtained by the criterion "logical connections between sentences"; and z is the number of considered criteria.
The complexity analysis is provided by expertise. The general score for every question of the test is assigned by the following formula:
$$S = \kappa \, \frac{\delta + \gamma + \varphi + \mu + \eta}{z}, \qquad (7)$$
where δ is the grade assigned by the criterion "objectivity"; γ is the grade assigned by the criterion "literacy"; φ is the grade assigned by the criterion "presence of examples"; μ is the grade assigned by the criterion "logic connections between sentences"; η is the grade assigned by the criterion "complexity"; z is the total number of criteria of the answer analysis; and κ is the complexity coefficient for every test question.
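A compact sketch of these criteria (our own illustration of the formulas above; the counts and the question weight κ are hypothetical):

```python
def criteria_scores(N, M, K, R, D, F, E, n, kappa=1.0):
    """Criteria coefficients of the integral method and the general score (7)."""
    delta = N / M                # objectivity: correct descriptors / thesaurus descriptors
    gamma = K / R                # literacy: correctly spelled words / total words
    phi = D / F                  # presence of examples: correct / expected examples
    L = n * (n - 1) / 2          # maximum number of logic connections for n sentences
    mu = E / L                   # logic connections: found / maximum possible
    eta = (delta + gamma + phi + mu) / 4   # complexity over the four criteria
    score = kappa * (delta + gamma + phi + mu + eta) / 5
    return delta, gamma, phi, mu, eta, score

# Example: 8/10 descriptors, 95/100 words spelled correctly, 3/5 examples,
# and 4 logic connections in a 4-sentence answer
print(criteria_scores(N=8, M=10, K=95, R=100, D=3, F=5, E=4, n=4))
```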
• Estimation of the content of a "verbal and linguistic" answer. Let us consider an example of a verbal and linguistic answer to the question "What is an array of information?"
This question may have one of the following answers: "An array of information is an information structure consisting of one or more records, such that the records describe objects and the array describes the class of objects. Several arrays form a system or series of arrays. A record is a specified set of data which characterizes some objects or processes. Examples of records are sales vouchers, work orders, invoices, questionnaires, and statistic records."
Series of Oral Answer Criteria
As is obvious, an oral answer is a specific text, the content of which discloses the essence of the raised question. Accordingly, the estimation of the answer first of all requires estimation of its content by the developed complex of criteria:
• Objectivity. It determines the basic level of subject knowledge and is found by comparing the correspondence of the applied terms with the thesaurus terms. This criterion relies on the basic vocabulary of the subject.
The basic vocabulary of the subject field is the main reference didactic material used for the analysis of every text answer. It is divided into the following components:
• A thesaurus is a kind of vocabulary with terms placed in a certain order (the principal sense-bearing items) and with connections fixed between them. A thesaurus is used to search for the words determining the basic level of subject knowledge. Any thesaurus consists of an introduction, an alphabetic index (vocabulary), and a classified catalog.
The main method of thesaurus formation is the selection of descriptors, which are usually nouns, from any training module, lecture notes, working programs, synonyms, and other linguistic sources specific to the subject.
• An example database is a vocabulary consisting of words placed in a certain order which vividly explain the main questions of the test. The main method for the formation of this vocabulary is the selection of examples provided by teachers at lectures, from the answers of students at practical studies, seminars, etc.
Example database development consists of two interrelated stages:
1) Initially, it is necessary to form a set of various example words.
2) Then, the most frequently used examples are selected. Moreover, all forms of the example words are considered.
Let us separate the descriptors from the text of the considered answer by a morphological analysis of the sentences. The analysis of a testee's answer results in the division of the answer into sets of various parts of speech: a set of nouns, a set of verbs, a set of adjectives, etc. We need to count the nouns in the sentences in order to select the descriptors from the text of the considered answer. These are the following words: array, information, structure, record, object, class, system, series, data, and process. Herewith, the various forms of every word and its synonyms must be taken into account. Consequently, the thesaurus must include the selected word-descriptors and consider all forms of the corresponding words, including synonyms.
Based on the above, the following computational procedures for the determination of the criterion "objectivity" have been selected: 1) sentence analysis.
• Literacy is determined by the grammar rules of text document structure. The quality of the criterion "literacy" is characterized by the coefficient γ. The literacy of an answer is determined by several characteristics: 1 - presence of a subject; 2 - presence of a predicate; 3 - presence of objects; 4 - presence of attributes; 5 - presence of adverbial modifiers; 6 - absence of grammar mistakes; 7 - absence of errors of style; 8 - presence of various syntactic constructions.
The selected characteristics determine the overall literacy rate of the answer, for example: A is a complete sentence, 1 ∩ 2 ∩ 3 ∩ 4 ∩ …, where k is the number of structural and semantic components which determine this rate.
If k = max I, where I is the maximum number of structural and semantic components complying with the definite condition, the answer shall be considered complete. Further: B is an incomplete sentence; C is a limited sentence (only 1 ∩ 2); D is an irregular sentence (there is no γ1 or γ2); E is a faulty sentence, with grammar mistakes; F is an inaccurate sentence, with errors of style.
A series of various rules for the evaluation of the grammatical quality of the text answer in general may serve as the estimation criterion for γ.
- As we know, examples in the text of an answer vividly explain the main question. The quality of the criterion "presence of examples" is characterized by the coefficient φ and is determined by comparing the correspondence of the applied examples with the words from the example database. The above answer to the question "What is an array of information?" contains five examples, as has been specified: sales vouchers, work orders, invoices, questionnaires, and statistic records.
- The rules of language connections between sentences determine the logic connections between sentences. It is commonly known that a text consisting of two or more sentences, which are interconnected by sense or structure and have an equal function on a compositional or stylistic level, is a complex syntactical unity.
In terms of structure and semantics, complex syntactical unities are divided into complex syntactical unities with chain connection and complex syntactical unities with parallel connection between sentences.
μ may have the following estimation criteria: all the sentences of the answer shall have logic connections with the principal sentence of the answer; all the sentences, except for the principal one, shall be subordinate to the principal sentence; and all the sentences shall complete the content of the principal sentence, disclose its essence in more detail, explain the principal sentence, classify the subjects of the answer, etc.
We propose to construct an incidence matrix of the text answer in order to formalize this complex criterion.
The diagonal elements of the matrix determine the number of sentences in the text of the answer. The superdiagonal elements of the matrix determine the connections between the concerned sentences and the subsequent sentences of the answer, i.e., successive logical connections. In the above answer to the question "What is an array of information?", the diagonal of the matrix consists of four elements, because the answer text consists of four sentences. There is a logic connection between the first and the second sentence, determined by the keyword "array." There are logic connections between the first and the third, the first and the fourth, and the third and the fourth sentences of the answer, determined by the keyword "record." The criterion "logic connections between sentences" is specified by the number of logic connections in the answer text.
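For the four-sentence example above, the incidence matrix and the resulting coefficient μ could be sketched as follows (our illustration of the described scheme; the binary encoding of the matrix is an assumption):

```python
import numpy as np

# 4 sentences: ones on the diagonal mark the sentences themselves;
# ones above the diagonal mark logic connections (1-2 via "array";
# 1-3, 1-4, and 3-4 via "record")
A = np.array([
    [1, 1, 1, 1],
    [0, 1, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 0, 1],
])

n = A.shape[0]
E = np.triu(A, k=1).sum()    # number of logic connections found
L = n * (n - 1) // 2         # maximum possible connections between n sentences
mu = E / L
print(E, L, mu)              # 4, 6, 0.666...
```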
- Complexity. This criterion characterizes the quality of every text answer in general and is determined by the presence of connections between the considered criteria: objectivity, literacy, presence of examples, and logic connections between sentences. An answer is considered complex, in the sense of this general characteristic, if in the answer to the main question the testee manages to reach a sufficiently high level on the criterion "objectivity", forms a grammatically correct text of the answer, demonstrates examples explaining the substance of the main question, and gradually develops his/her idea in the answer text. The complexity is analyzed by counting the number of connections in the answer and is determined by expertise.
The general score for every question of the test is assigned by formula (7).
Scoring
Grades are assigned with consideration of every selected criterion according to a scale of one to ten. The main problem in final score assignment is the determination of the limits between two points of the traditional scale of one to five, when knowledge can be estimated higher or lower than the corresponding grade. In this research, fuzzy sets of criterion grades are applied in order to determine the grade limits, and a heuristic approach is applied for the determination of the final score.
Further, it is necessary to determine the limits of the fuzzy assessment of the dedicated parts. Herewith, the main purpose and difficulty is to determine the limit values of the dedicated parts. In the case of a normal law for the grade distribution function, its parameters can be used for the selection of the grade limits. For example, a property of the normal probability law is the standard deviation σ. We take the function describing the confidence probability as the membership function, and the standard deviation of the normal distribution as the criterion for identifying the fuzzy set limits. The confidence probabilities of the estimand lying within the confidence interval are calculated for the standard intervals by standard integrals (Table 2). For example, if the confidence interval is 2σ, the lower limit of the grade is X1 = M − 2σ and the upper limit is X2 = M + 2σ, where M is a parameter of the grade distribution function. Thus, it is valid to say that the score is equal to the considered grade with a probability of 0.9545, and to the previous or following grade with a probability of 0.0227.
The confidence interval is selected based on the result accuracy requirements: the more accurate the required results, the smaller the value of the confidence interval.
Similarly, it is possible to determine the limits of the fuzzy grade for all criteria applied in the determination of the final score.
Let N grades be given by the criterion "objectivity" for a certain answer.
The mathematical expectation is calculated by the following formula:
$$M = \frac{1}{n}\sum_{i=1}^{n} x_i,$$
where n is the number of grades and $x_i$ is the grade with sequence number i. The result is M = 8.
The dispersion is evaluated by the following formula:
$$D = \frac{1}{n}\sum_{i=1}^{n} (x_i - M)^2,$$
where n is the number of grades, $x_i$ is the grade with sequence number i, and M is the mathematical expectation. The result is D = 1.4.
The standard deviation is calculated by the following formula:
$$\sigma = \sqrt{D},$$
where D is the dispersion.
When the parameters of the grade distribution function (mathematical expectation, dispersion, standard deviation) are determined, for example, for the criterion "objectivity", we can determine the limits of the fuzzy grade depending on the desired accuracy.
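A numerical sketch of these steps (our illustration; the list of grades is hypothetical, chosen so as to reproduce M = 8 and D = 1.4):

```python
import numpy as np

grades = np.array([6, 7, 7, 8, 8, 8, 8, 8, 10, 10])  # hypothetical grades on a 1-10 scale

M = grades.mean()        # mathematical expectation -> 8.0
D = grades.var()         # dispersion (population variance) -> 1.4
sigma = np.sqrt(D)       # standard deviation -> ~1.18

# Fuzzy grade limits for the 2-sigma confidence interval (P = 0.9545)
X1, X2 = M - 2 * sigma, M + 2 * sigma
print(M, D, sigma, (X1, X2))
```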
When fuzzy sets are applied, a preference function is composed for every grade, the confidence probabilities are selected, the borders of fuzzy sets are determined, and then the grade on a scale of one to five is assigned.
Since the formation of the final grade in the integral method is based on a set of criteria, this procedure is applied for every criterion.
Pedagogically, the structure of knowledge of every student must be considered in grade assignment. Therefore, we have developed the following heuristic algorithm: every form of the test consists of 10 proposed test questions, and the numbers of the questions are ranked in order of increasing difficulty. The first 3 (three) test questions are simple, the following 5 (five) questions are moderately difficult, and the last 2 (two) questions are difficult.
For the cases of the heuristic approach to estimation, we have developed an algorithm, which is a task of combinatorial theory and is formulated as follows: it is required to determine the various values of α, β, and λ that meet the conditions for obtaining a grade on a scale of one to five, with the threshold values k, m, and n,
where α, β, λ are various combinations of difficult, moderately difficult, and simple questions, correspondingly; k is the total number of difficult questions in the test; m is the total number of moderately difficult questions in the test; n is the total number of simple questions in the test; d is the total number of questions in the test; and a, b, c are the numbers of correct answers meeting the conditions for being assessed on a scale of one to five.
Specifics of Development of the Expert Control System and Integral Method of Knowledge Estimation (IMKE)
This integral method of knowledge estimation is the basis for the development of the system of expert control and estimation of knowledge, IMKE.
The core of the system is the basic vocabulary of the subject field, developed and accumulated in the process of the formation and education of students, which is the main reference didactic material. It is used for the analysis of every text answer and consists of the following elements:
• Thesaurus. The main method of thesaurus development is the selection of descriptors, which are usually nouns, from lecture notes, a working program, synonyms, and other linguistic sources specific to the subject.
• Example database. The main method for the development of this base is the selection of examples provided by teachers at lectures, from the answers of students at practical studies, seminars, etc.
• Spelling dictionary. The input information of the testee shall be fixed and entered into the lexical analyzer of the expert system IMKE. The lexical analyzer receives the original text of an answer directly from the input interface elements and transforms it into an array of lexical items. The analyzer searches for every lexical item in the basic vocabulary of the subject field. A search is successful in the case of an exact match of the analyzed word with a descriptor of the basic vocabulary of the subject area. In this case, the corresponding information is transferred to the program.
The criterion "Objectivity" is calculated based on this information.Similar information is selected and prepared for settlement of other criteria.
The system of control and estimation of knowledge IMKE assigns final test score providing a corresponding result summary based on the tests results for each testee.
An example of the result summary is shown in Table 1 where coefficients enable a detailed analysis of testees' answers to each question of the test and justify the final test score.
Results
Perennial research in the field of knowledge estimation performed by various authors shows that the grades used for estimation of the students' knowledge in the groups shall be distributed under a normal law.Accordingly, the most efficient knowledge estimation system is a system preventing underestimating or shifting average grade in the group answers, i.e. the main operational hypothesis is a hypothesis of normal point allocation in the process of education control.
Using this hypothesis, we checked the validity of the knowledge estimation results obtained with the IMKE control and estimation system. For benchmarking, we selected a theoretically "perfect" knowledge control and estimation system, which we call theoretical. The accuracy and efficiency of the developed method of knowledge estimation can be determined by comparing the results of knowledge estimation by testing (the control group) and by the method of integral estimation of knowledge (the experimental group) against the theoretical evaluation. The closer the parameters of the investigated system are to the theoretical one, the better the system.
In order to ensure the validity of the results, we determined: the interconnection (r) between the considered knowledge estimation methods (the testing method (T) and the method of integral estimation of knowledge (I)); the statistical characteristics of the estimation results by type of control, namely the average score (T_av; I_av) and the standard deviation (σT; σI); and whether the sampled data conform to the hypothesis of a normal distribution of the population, by the χ² method (Pearson's criterion) with a significance level of 0.05. The results of the statistical processing of the experimental data demonstrate that the score distribution functions in the control and experimental groups are close and governed by the same law. At the same time, the grade frequency distribution functions of the experimental groups are closer to the theory than those of the control groups (the probability is lower than the critical values obtained from the data processing in the control groups): P_T(χ² ≥ χ_q²) = 0.0047 with k = 1 and χ_q² = 7.514485, and 0.0047 < 0.005; P_I(χ² ≥ χ_q²) = 0.0833 with k = 1 and χ_q² = 2.985654, and 0.0833 > 0.005.
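A χ² goodness-of-fit check of this kind can be reproduced with standard tools. The following Python sketch uses made-up grade frequencies, since the raw frequency tables are not reproduced here; only the testing procedure mirrors the one described above.

```python
import numpy as np
from scipy import stats

# Hypothetical observed grade frequencies (grades 2, 3, 4, 5) in a group.
observed = np.array([6, 18, 21, 5])

# Hypothetical expected frequencies under a normal law, discretized
# over the grade bins and normalized to the same total count.
expected = np.array([7.0, 17.5, 19.0, 6.5])
expected *= observed.sum() / expected.sum()

# Pearson's chi-squared goodness-of-fit test.
chi2, p_value = stats.chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {chi2:.4f}, p = {p_value:.4f}")
# The hypothesis of a normal score distribution is retained when the
# p-value exceeds the chosen significance level (0.05 in the paper).
```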
Thus, the developed method of integral estimation of knowledge enabled an improvement in education by providing teachers with objective information on the level of training material assimilation by students; by enabling a detailed analysis of the content of knowledge estimation, which increases interest and motivation to study, as evidenced by the data available to the teacher after testing; and by allowing purposeful correction of the education process with due consideration of testing results, complexity index selection, and alteration of the fuzzy grade boundaries when assigning the final grade.
Discussion
The problem of objective, testing-based estimation of knowledge acquires new forms and content in the context of new paradigms. The wide spread of this method of knowledge estimation has turned the estimation of students' educational results by the teacher into an independent line of pedagogical science.
An analysis of various approaches to objective estimation of knowledge by testing suggests that, notwithstanding all the results in this field, the questions of the method's objectivity and of whether current estimation systems sufficiently reflect the real level of knowledge remain open. The problem of objectivity is determined by the multidimensionality of the issue from the pedagogical, psychological, and methodological points of view.
An analysis of the current state of automated systems of knowledge control and estimation revealed that the available methods for formulating questions and answers based on didactic tests (multiple-choice questions and questions of the multiple-choice-and-formulation type) have many advantages: they reduce estimation subjectivity, make complex computerization of the knowledge control and estimation process possible, raise productivity, and enable simple and efficient verification of the results.
However, experience indicates that the available methods for formulating questions and answers based on didactic tests sometimes prevent objective estimation of the actual knowledge level of students, especially in the subjects of the social and humanitarian cycle. These methods have the following disadvantages: deep distortion in students' perception of the integrity of the material; development of stereotypes; absence of an innovative approach; verbatim learning of answers; memorizing and copying of the test keys; and the frequent difficulty of determining how the student arrived at an answer (by logical reasoning or at random). This situation has negative consequences: a reduction in the motivating effect of grades on the cognitive work of students and in the quality of the overall educational process.
Routine control practice using pedagogical tests makes it possible to obtain objective information on the specific knowledge level and skills of an individual and to relate these data to the training tasks in order to enable timely correction of the process of acquiring new knowledge. The need for well-developed methods of estimating students' knowledge level is constantly felt in the process of study.
The search for pedagogically efficient ways and the development of a method that would improve the knowledge control and estimation process so as to raise the education level remain the main problems of our research, whose purpose is the theoretical justification and practical development of the method of integral estimation of knowledge.
In order to improve the knowledge control and estimation process and thereby raise the education level, we propose to improve the testing method with a new approach to the formulation of answers to test questions that assumes a free text form of the answer. An answer is recognized based on the subject knowledge base developed by experts, and its analysis is performed by estimation against a series of formation criteria, including algorithms of quality control estimation. There are definite difficulties: not every subject can be formalized. Formalization is applicable and obvious for subjects such as physics, mathematics, and mechanics, whereas it is not always possible to formalize knowledge in social and humanitarian subjects. In order to make test-based knowledge control efficient for such subjects, it is necessary to determine the level of knowledge acquisition at every educational stage and to cover all the required characteristics of knowledge acquisition with test tools: for example, the ability to make an answer concrete by means of examples, knowledge of the facts, and the ability to express one's own thoughts logically, properly, and correctly. Only forms of knowledge estimation that are as good in substance as oral control allow exploiting the great advantages of such an efficient method of accurate and objective estimation of knowledge.
Nevertheless, there remains the important problem of the complexity of recognizing the meaning of a text answer. Standard methods of system analysis and computer modeling based on the precise processing of numerical data essentially cannot cover the great complexity of the processes of human reasoning and decision-making. Thus, it is hard to escape the conclusion that meaningful estimates of humanitarian systems, including education, require abandoning the high standards of accuracy and rigor usually expected from a mathematical analysis of clearly defined mechanical systems, in favor of more tolerant approaches that are similar in nature. Possibly, only such approaches will make computer modeling a really efficient method of humanitarian system analysis.
Naturally, innovations are associated with a certain risk due to the difficulty of specifying the final result and avoiding erroneous assumptions. Achieving a qualitative result from innovation requires mature reflection, detailed analysis, and competent arrangement.
The accuracy and objectivity of knowledge estimation depend not only on the formulation of text answers or on the underlying criteria and the particular designated factors of student knowledge estimation, but also on the grade scale or system used.
Conclusion
This article suggests a method of integral estimation of knowledge based on a new approach to question and answer formulation that enables a freely formulated form of test answers. The theoretically justified and experimentally verified data can be used to improve the control and estimation of knowledge in the subjects of the social and humanitarian cycle.
In the present context, the analysis of the theory and practice of knowledge control and estimation leads us to the following conclusions: 1) Currently, automated systems of knowledge control and estimation based on didactic tests and various approaches to grade assignment are widely used to verify the process of knowledge acquisition. Dedicated constructions of test questions and answers (multiple-choice or multiple-choice-and-formulated questions) do not always provide an objective estimation of student knowledge, especially in the social and humanitarian subjects. This situation has strong negative consequences: a reduction in the motivating effect of grades on the cognitive work of students and in the quality of the educational process in general.
2) Currently, there are no automated systems of knowledge control and estimation that would enable assessment of such factors as the ability to specify an answer with examples, demonstrate awareness of facts, or express oneself logically and correctly. The need for a system of knowledge control and estimation that would enable estimation of the real level of students' knowledge in the social and humanitarian subjects, where knowledge and reasoning play the main role, is especially acute.
3) The main reason for the disparity between the prospective and operational capabilities of computers is a delay in the development of methodological problems and new methods of knowledge estimation. 4) In considering the problem of improving knowledge control and estimation to improve education quality, we have examined the ways and means of improving this method. There is a theoretically justified need to apply a new approach to the formulation of test answers assuming a freely formulated form of the test question and answer, and also a need to develop criteria for the analysis of such answers and a scientifically based approach to estimation. Thus, using the developed integral method of knowledge estimation, we managed to achieve an improvement in education quality through: providing teachers with objective information on the level of training material assimilation by students (the shape and nature of the estimation density function curves match); a detailed analysis of the estimation content, which increases interest in and motivation for studying, as evidenced by the data available to the teacher after testing (Table 1); and purposeful correction of the education process with due consideration of testing results, complexity index selection, and alteration of the fuzzy grade limits when forming the final score.
In summary: an analysis of the theory and practice of the current state of knowledge control and estimation has allowed us to justify theoretically the need to develop a method of integral estimation of knowledge; a search for the ways and means of knowledge control and estimation enabled the development of a model of integral estimation of knowledge; an integral method of knowledge estimation has been developed, based on a new approach to question and answer formulation in didactic tests that allows a freely formulated form of test questions and answers; an automated system of knowledge control and estimation, IMKE, has been developed on the basis of the integral method; and the research results, which are theoretically justified and experimentally verified, can be used to improve the control and estimation of knowledge and thereby the quality of education, which proves the suggested hypothesis.
The following methodological recommendations can be formulated on the basis of the conclusions of the practical and experimental work: the automated system of knowledge control and estimation IMKE, based on the developed integral method of knowledge estimation, can be applied in teaching students and senior high-school pupils, especially in the social and humanitarian subjects. The obtained research results cannot settle all the issues of the quality of knowledge acquired in the process of education. Further theoretical and practical development of this method requires addressing such issues as the improvement of the integral method of knowledge estimation in terms of expanding the number of knowledge estimation criteria, developing criteria for their quality assessment, developing the knowledge base, involving other analyzers, etc.
Table 2. Grade confidence figures.
Recent Advances in Laser Self-Injection Locking to High-Q Microresonators
I. INTRODUCTION
Laser sources with narrow linewidth and low noise are of paramount importance for nearly all laser applications, such as timing, communication, spectroscopy, metrology, navigation, as well as fundamental research. State-of-the-art chip-scale semiconductor laser diodes emit continuous-wave light at wavelengths from the ultraviolet to the mid-infrared with sufficient optical power, and are produced at low cost and in high volume. However, their Achilles' heel is inevitable frequency fluctuations due to low cavity finesse. Several methods to frequency-stabilize diode lasers have been demonstrated. One solution to achieve simultaneously high laser power and narrow linewidth is to transfer the narrow frequency spectrum of a well-stabilized but low-power master laser to a high-power broad-spectrum slave diode laser using optical injection [1]. However, such a system is complicated to implement and very sensitive to ambient perturbations. Linewidth reduction can also be achieved by locking the laser diode to an external high-finesse reference cavity. Active locking, like the Pound-Drever-Hall (PDH) technique [2][3][4], is conventionally and widely used, requiring optical modulation and electronic feedback circuitry. The side-of-fringe stabilization [5] provides locking without optical modulation but requires a stable laser intensity and reference level.
Passive stabilization of semiconductor lasers uses resonant optical feedback from an external optical element. One of the most effective approaches to stabilize the laser frequency using an external cavity is based on the self-injection locking effect. Self-injection locking is a profound phenomenon observed in oscillatory circuits. For many years this effect has been used in radio-physics, radio-engineering, and microwave electronics to improve the spectral purity of devices [6][7][8][9][10][11][12][13][14][15][16]. It has also been widely applied for stabilizing laser sources and has enabled various practical applications [17][18][19], including high-resolution spectroscopy and high-precision metrology. Self-injection locking of a chip-scale semiconductor laser to an optical microresonator results in a sub-kilohertz laser linewidth that is orders of magnitude smaller than the original linewidth of free-running semiconductor lasers (typically megahertz to hundreds of megahertz) [20,21].
FIG. 1. Artistic vision of self-injection locking of a diode laser to a microring resonator, which enables frequency comb generation seeded by its narrowed-linewidth emission.

Self-injection locking of oscillators has been extensively studied for the last three decades. It was shown initially
that adding a partially transparent mirror at the output of a Fabry-Pérot (FP) laser can lead to laser noise reduction [22][23][24][25][26][27]. However, this stabilization scheme has significant limitations due to the dynamic instability arising from excessively strong optical feedback: even a relative feedback power at the level of 10^-4 can be sufficient to destabilize the system. The instability can be reduced if the feedback is frequency-selective. Although resonant reflectors are also used in this case (diffraction, Bragg, or holographic gratings in the Littrow or Littman configuration [28,29]), the main effect comes from the resonator formed by one of the diode facets on one side, the external stabilizing element on the other side, and the distance between them. In this case, the output edge of the diode is often covered with an antireflective coating. Such narrow-linewidth lasers are also called external-cavity lasers. The regimes of optical feedback in such systems have been heavily studied for simple mirror feedback [27,30,31] and even used for distance measurement [32].
Self-injection locking of a laser generation line to a high-quality-factor (high-Q) mode of an external resonator provides fast frequency-selective optical feedback, which leads to improved stabilization of the laser frequency [17,[33][34][35][36]. This configuration is dynamically stable and can produce coherent light even when the relative feedback power exceeds tens of percent. It was initially demonstrated with tilted (or V-shaped) FP resonators [17,19,35] and then studied for other large resonators such as discrete mirror ring cavities [37] and monolithic total-internal-reflection resonators (TIRRs) [36]. It was shown that the locking results in a reduction of the phase and amplitude noises [17,38], while simultaneously allowing frequency tuning of the laser emission and facilitating efficient frequency doubling [36]. The laser linewidth can be narrowed by six orders of magnitude if a high-Q microresonator is involved [21,39]. A theory of self-injection locking was developed for larger optical cavities nearly three decades ago [34,38], indicating that a high Q factor of the optical modes, low modal density, and a highly stable optical path are required to achieve prominent linewidth reduction. Unfortunately, the stabilization technique using large optical cavities has drawbacks due to the sensitivity of the cavities to the environment. It was shown recently that a high-Q FP cavity can also be miniaturized to make compact self-injection-locked narrow-linewidth lasers [40].
Whispering-gallery-mode (WGM) microresonators [41][42][43][44][45][46][47][48][49][50][51], combining high Q factors in a wide spectral range with small size, simple construction, and reduced environmental sensitivity, have proven suitable for implementing self-injection locking. The first use of a high-Q optical microresonator for laser linewidth narrowing by optical feedback from the microresonator was reported in Ref. [52], although the term "self-injection locking" was not mentioned in that paper. The authors first encountered a parasitic effect that prevented the resonance curve from being obtained because the tunable laser was clinging to the resonance. They immediately realized that this is an effect similar to the frequency pulling of radio-frequency generators by additional high-Q circuits, and that the feedback is formed by resonant Rayleigh back-scattering in the resonator. A comprehensive analysis of the back-scattering and counter-propagating mode formation in WGM microresonators was performed in Ref. [53]. The possibility of realizing robust and effective laser stabilization without an external electronic feedback chain was implemented in Refs. [54,55], where the existence of optimal coupling and back-scattering coefficients was experimentally shown. When a laser is locked to a microresonator mode, the laser wavelength can be fine-tuned by changing the microresonator mode frequency, e.g., by mechanical compression or stretching of the microresonator [56,57]. A detailed theoretical analysis of the thermodynamic and quantum limits of the resonance frequency stability of solid-state WGM microresonators was performed in Ref. [58]. An efficient method for numerical calculation of the thermo-refractive noise (TRN) was suggested in Ref. [59], and experimental characterization of the TRN of integrated silicon nitride microresonators has been shown in Ref. [60]. Substantial progress from laboratory demonstrations to off-the-shelf devices began when Ref. [20] demonstrated narrow-linewidth DFB lasers self-injection-locked to a WGM microresonator in a package of 15 mm size and 3 mm thickness, with an instantaneous linewidth narrower than 200 Hz.
Here we review some recent key advances in the theoretical and experimental study of the physics and applications of laser self-injection locking. We note that most of the effects are described here using the example of WGM microresonators but can generally be implemented with any type of high-Q optical resonator, e.g., ring or FP resonators.
II. STABILIZATION OF LASERS BY MEANS OF SELF-INJECTION LOCKING
The fast optical feedback of laser self-injection locking enables a significant reduction in laser linewidth. First demonstrated with fused silica microspheres [100], this method is now actively used to control the spectral characteristics, e.g., to narrow the linewidth and stabilize the frequency, of various laser sources [52,61,101], including fiber-ring [102] and DFB lasers [20]. Over the past decade, significant progress has been made in the applications of this technique. In 2010, it was reported that the linewidth of an external-cavity semiconductor laser was reduced by a factor of 10^4, achieving an instantaneous linewidth of less than 200 Hz [20]. In 2015, the laser linewidth was further decreased by a factor of 10^7, reaching the sub-Hertz level [21].
A. Basics of self-injection locking
The schematic of laser self-injection locking is presented in Fig. 2, where a refocused laser beam is resonantly coupled to a high-Q WGM microresonator via a prism coupler. Any other resonantly reflecting element can replace the WGM microresonator; the general idea and qualitative results are the same. As shown in Fig. 2a, a part of the laser radiation is resonantly back-scattered (e.g., due to Rayleigh scattering in WGM microresonators [53] or direct reflection in FP cavities) into the laser cavity, locking the laser radiation frequency to the frequency of the microresonator mode. Note that the bottom left panel of Fig. 2a shows the laser cavity resonance, not the free-running laser emission line, whose width δω_free is defined by the laser noises and limited by the Schawlow-Townes relation.
To describe self-injection locking, one can start with the general phase and amplitude lasing criteria for a FP laser diode with amplitude reflectivity coefficients R_o and R_e of the output and end facets:

ω_LC τ_LC + arg(R_e R_o) + α_g g τ_LC = 2πN,  (1)

where ω_LC is the generation frequency of the free-running diode, τ_LC is the light round-trip time in the diode laser cavity, g is the diode medium gain, α_g is the Henry factor, and N is an integer that can be attributed to the mode number of the system. If a reflector with amplitude reflectivity Γ is introduced to induce self-injection, we can unite it with the output facet, composing an effective reflector. This effective reflector can be considered as a FP cavity with a length corresponding to the diode-reflector round-trip time τ and with effective reflectivity R_eff.

FIG. 2. (b) An optimal self-injection locking curve with ψ = 0, κ_m τ_s << 1, and K_0 = 35. The unstable branches are shown with dashed lines, the locking range is marked with a thick red line, and the bi-stable transitions are marked with blue arrows. Panel (a) is taken from Ref. [103] and panel (b) from Ref. [104].
Similarly, the dependence of the self-injected laser generation frequency ω can be derived by inserting R_eff into Eq. (1) instead of R_o. Solving both systems and using 2πN ≈ ω_LC τ_LC >> 1 (see Ref. [105]), we obtain the relation between the free-running laser frequency and the locked laser frequency, Eq. (3). This relation is usually called the "tuning curve". It shows the dependence of the system generation frequency ω on the free-running laser generation frequency ω_LC (i.e., without feedback). For a free-running laser, the tuning curve is a 1:1 line. For weak self-injection locking, the tuning curve approaches the 1:1 line. For strong self-injection locking, the tuning curve has nearly horizontal parts, the locking ranges. A common form of the tuning curve is shown in Fig. 2b (there, frequencies are taken relative to the reflector resonance and normalized by its resonance width). When the laser frequency is tuned (e.g., by changing the laser current) far from the resonance frequency of the reflector (i.e., the microresonator), the laser generation frequency follows the 1:1 line (see the green line in Fig. 2b). When the laser frequency approaches the reflector resonance, i.e., the multistable part of the red curve in Fig. 2b, it can jump to the stable central part of the curve (thick red line in Fig. 2b). In this regime, varying the laser cavity frequency (e.g., by changing the laser current or due to noises and fluctuations) results in a negligible change of the system generation frequency; i.e., the laser is locked to the microresonator. Finally, the locking is lost if the laser cavity is tuned far from the microresonator resonance frequency (see the outer blue arrows in Fig. 2b).
The inverse slope of the tuning curve in the locked region, K = ∂ω_LC/∂ω, is called the "stabilization coefficient", and its square represents the locked laser linewidth narrowing factor [19,104,106]. To avoid singularities, this derivative should be averaged over the initial linewidth around the laser cavity detuning,

K^{-1} = (1/(2δω_free)) ∫ from ω_LC − δω_free to ω_LC + δω_free of |∂ω/∂ω_LC| dω_LC,

as the linewidth can be viewed as a frequency fluctuation. Note that the jump between the metastable branches can also happen before the turning point [107,108] in the case of a higher microresonator Q factor. However, such spontaneous locking usually happens to the branch with the highest stabilization coefficient. For complex reflectors, Eq. (3) is difficult to analyze. A simple and common approach was considered in Ref. [19] for tilted FP cavities and in Ref. [104] for WGM microresonators. The equivalence with Eq. (3) in a specific range of feedback and front-facet reflectivity was shown in Ref. [105]. The system can be described by the nonlinear rate equation (4), where ω_LC and κ_LC are the laser cavity eigenfrequency and loss rate, κ_do is the loss rate of the laser output mirror divided by the front facet reflectivity, g = g(|A|²) is the laser gain, ω is the actual laser generation frequency, A is the slowly varying complex amplitude of the laser field, and B is the complex amplitude of the field reflected from the microresonator.

FIG. 3. a. Tuning curves for different initial phases ψ. Points I-IV correspond to phases ψ = [0, π/3, 2π/3, π] with κτ_s << 1. The envelope of the family of curves with different ψ is shown with the black dash-dotted line; in comparison, the solid green line shows a tuning curve for a free-running laser. b. Tuning curves I-IV corresponding to ψ = 0 and long delay, so that κ_m τ_s/2 = [0, 1, 2, 3]. This figure is taken from Ref. [104].

In the case of small feedback, B can be described by Eq. (5), where Γ(ζ) is the microresonator amplitude reflection coefficient. It depends on the detuning of the laser oscillation frequency ω from the nearest microresonator eigenfrequency, ζ = 2(ω − ω_m)/κ_m, where ω_m and κ_m are the microresonator mode frequency and loaded linewidth (loss rate). In Eq. (5) we explicitly substituted the reflectivity of a WGM microresonator, Eq. (6) [53], where η is the dimensionless pump coupling coefficient, β is the normalized mode-splitting coefficient (Rayleigh scattering), and Θ is the power mode coupling factor, proportional to the ratio of the laser aperture area S_LC to the final beam area S. The results for FP cavities are qualitatively the same [19]. As discussed earlier, Eq. (5) does not account for the power reflected back and forth between the front facet and the microresonator, but it works well for small feedback. It is convenient to use the normalized tuning curve to analyze the self-injection locking effect. The curve shows the dependence of the effective frequency detuning ζ on the normalized detuning of the laser cavity frequency ω_LC from the microresonator eigenfrequency, ξ = 2(ω_LC − ω_m)/κ_m. Equation (4) should be split into amplitude and phase parts, with the former discarded, and then the tuning curve can be determined (we will return to the amplitude part in Section II B). The tuning curve can be described by Eq. (7) [104], where

ψ = ω_m τ_s − arctan(α_g) − 3π/2

is the locking phase [104], determined by the round-trip time τ_s from the laser to the microresonator, the microresonator resonant frequency ω_m, and the Henry factor α_g, and

K_0 = 8ηβ (κ_do/κ_m) √Θ √(1 + α_g²)

is the zero-point stabilization coefficient (the Henry-factor-related laser cavity frequency shift is included in ω_LC). The coupling coefficients can also be expressed in terms of the more common coupling rates, η = κ_c/(κ_0 + κ_c) and β = 2γ/(κ_0 + κ_c), where κ_c and 2γ are the pump and forward-backward wave coupling rates and κ_0 is the intrinsic microresonator loss rate (κ_m = κ_c + κ_0).

FIG. 4. (a, b) Tuning curves for β = 0.1 and β = 10, respectively, with K_0/β = 400 and κ_m τ_s = 0.011. The red dashed lines show the slope of the locking bands, and the red crosses mark the optimal points ζ = ζ_0 of Eq. (18). (c) Tuning curves for different K values: curves I-III correspond to ψ = π and K = [5, 3, 1], respectively; curves IV-VII correspond to ψ = 0 and K = [0, 2, 4, 6], respectively. All quantities are plotted in dimensionless units. Panels (a, b) are taken from Ref. [103] and panel (c) from Ref. [104].

It was shown in Ref. [105] that Eq. (7) can be obtained from Eq. (3) assuming R_o|Γ| << 1, and that it remains qualitatively valid in an even broader region.
Both parts on the right-hand side of Eq. (7) depend on the feedback round-trip time τ_s. In the following, we consider ψ to be "independent" of τ_s, since the self-injection locking process is periodic in the locking phase and its absolute value is therefore trivial. The scales of κ_m τ_s and ω_m τ_s (the latter being a part of ψ) are quite different for high-Q microresonators, so these values need to be treated separately. More formally, the parameter ψ can also be tuned independently via the locking mode frequency. Figure 3 illustrates the influence of the phase ψ and the normalized delay κ_m τ_s/2. Essentially, Eq. (6) is sinusoidal along the 1:1 line, which can be attributed to an external cavity [105] filtered by the resonant envelope; a change in the phase moves the sinusoid along the 1:1 line, while the normalized delay changes its frequency. The latter can also bring undesired fringes inside the resonant envelope, making the system more multistable.
The value K_0 is a universal constant of self-injection locking [104]. First, it defines the stabilization coefficient in the case of optimal phase and zero laser cavity detuning: dξ/dζ|_{ψ=0, ζ=0} = K_0 + 1. It leads to a realistic estimation of the maximal line narrowing, δω ≈ δω_free/K_0². Second, it defines the locking range, δω_lock/κ_m = 3√3 K_0/16 ≈ 0.32 K_0 for small β. Note that if the locking range is greater than twice the finesse (the ratio of the microresonator intermode distance to its linewidth), it can overlap with the locking ranges of neighboring modes. Third, the tuning curve has a pronounced locking region only if K_0 > 4. In this sense, the zero-point stabilization coefficient K_0 is analogous to the feedback parameter C used in the theory of simple mirror feedback [31,32], where self-injection locking is achieved with a frequency-independent reflector forming an additional FP cavity. However, in the resonant feedback setup, the self-injection locking coefficient does not depend on the laser-to-reflector distance but on the parameters of the reflector instead. Though the system has regimes qualitatively similar to the simple one [27], their ranges and thresholds are different [19,103,104]. The value K_0 > 4 is required for pronounced locking with a sharp transition and thus naturally becomes a locking criterion. Figure 4c shows the tuning curves for small K_0 values; it also includes several curves with different phase values. In the case of high-Q microresonators, K_0 can be no less than several hundred.
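These figures of merit are straightforward to evaluate numerically. The Python sketch below uses the formulas quoted above with illustrative parameter values that are not taken from any particular experiment.

```python
import math

def sil_figures_of_merit(K0, kappa_m_hz, dw_free_hz):
    """Self-injection-locking figures of merit from the zero-point
    stabilization coefficient K0 (formulas as quoted in the text)."""
    locked_linewidth = dw_free_hz / K0**2                      # max narrowing
    locking_range = (3 * math.sqrt(3) / 16) * K0 * kappa_m_hz  # ~0.32*K0*kappa_m
    pronounced = K0 > 4                                        # locking criterion
    return locked_linewidth, locking_range, pronounced

# Illustrative numbers: K0 = 500, loaded microresonator linewidth of
# 1 MHz, free-running laser linewidth of 2 MHz.
lw, rng, ok = sil_figures_of_merit(K0=500, kappa_m_hz=1e6, dw_free_hz=2e6)
print(f"locked linewidth  ~ {lw:.1f} Hz")          # ~8 Hz
print(f"locking range     ~ {rng / 1e6:.0f} MHz")  # ~162 MHz
print(f"pronounced locking: {ok}")
```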
Illustrative tuning curves for high and low values of the mode-splitting coefficient β are presented in Fig. 4(a, b). Note that the tuning curve experiences a splitting similar to the resonance splitting at increased forward-backward wave coupling [53]. The splitting impacts the self-injection locking process, and the stabilization can be worse at larger splitting values.
Experimentally, it is more convenient to monitor the transmission resonance of the system: the light at the output of the coupler is directly registered with a photodetector and an oscilloscope while the frequency is tuned. The resulting diagrams are also called "light-current (LI) curves" because the frequency is usually tuned by changing the injection current of the laser diode. The theoretical LI curve for a WGM microresonator can be obtained from the transmission resonance curve given by Eq. (8) [53]. Note that if the detuning is controlled by the injection current I, the power B_in ∝ I ∝ −ζ and the curve is slightly tilted.

FIG. 5. Calculated normalized transmitted power versus the detuning of the laser frequency from the WGM frequency for different locking phases. The sum of the forward and backward locking ranges (FLR and BLR) can be measured experimentally; it is a good approximation of the total locking range δω_lock in the case of a high Q factor, when the inner overlap band δω_in is negligible. This figure is taken from Ref. [109].

Calculated dependencies of the transmitted power on the laser detuning from the WGM frequency are shown in Fig. 5. A free resonance is observed in the absence of back-scattering (black dotted line in Fig. 5), and a locked resonance appears when the back-scattering causes self-injection locking (blue dashed line in Fig. 5). In the self-injection locking regime, one can determine the locking range: the bandwidth on the LI curve where self-injection locking suppresses the frequency change. Sharp edges bound the locking range at locking phases ψ ∈ [−π/2; π/2]. For locking phases ψ = nπ, the shapes of the LI curves during forward and backward frequency scans are different, which can be observed experimentally; both scans are needed to capture the full locking band correctly. The theoretical predictions are shown in Fig. 5 with solid lines with triangle markers. The sum of the forward locking range (FLR), measured with a forward scan, and the backward locking range (BLR), measured with a backward scan, is connected with the locking range δω_lock as FLR + BLR = δω_lock + δω_in (see Fig. 5). The FLR-BLR overlap, or inner band δω_in, is significant only for a low intrinsic quality factor Q_int or for an overcoupled microresonator [109].
B. Multi-frequency laser locking
Self-injection locking is efficient for laser linewidth reduction. Moreover, in the case of a diode laser self-injection-locked to a high-Q WGM microresonator, a broad multi-frequency emission spectrum can collapse into a single line with a linewidth at the kilohertz level or even below. Due to mode competition, the total initial power is redistributed in favour of the locked mode (or, in some cases, of several locked modes), providing a single-frequency (or few-frequency) regime with an energy concentration reaching 96% [68]. For example, when a multi-frequency laser diode with a 1535 nm central wavelength, 100 mW output power, a spectrum initially consisting of 50 lines, and a linewidth of the order of megahertz is locked to a high-Q magnesium fluoride (MgF2) microresonator, one can obtain a single-frequency laser with a power of 50 mW and a linewidth of less than 1 kHz. This also brings out the so-called Bogatov effect [110,111]: a nonuniform energy distribution among the suppressed modes.
The amplitude relations of the modes should be considered to model self-injection locking of a multi-frequency laser. The standard multi-mode laser model can be represented as a system of differential rate equations, Eqs. (9) and (10) [23,112], where I is the diode current, e is the electron charge, N is the number of excited electrons, τ_s is the lifetime of an excited electron, S_l is the number of photons, G_l is the stimulated emission coefficient in laser mode l, G_th is the threshold gain (in other words, the total losses), and F_l is the spontaneous emission contour. Note that Eqs. (9) and (10) are real; they can be considered as a form of the amplitude part of Eq. (4) that was discarded while deriving the tuning curve. The magnitude of the threshold gain is determined by the design features of a particular laser; in the simplest case of a laser cavity consisting of two mirrors with reflection coefficients R_0 and R_e, one obtains the expression (11), where L_LC is the length of the diode, τ_LC is the diode round-trip time, and α_loss is the material loss factor. The gain in each mode depends on a combination of effects: stimulated emission of photons, spectral hole burning due to neighboring modes, and asymmetric mode interaction. For the gain factor G_l, an expression of the form (12) can be written [112,113], where G^(1)_l is the linear gain, G^(3)_{l(k)} is the coefficient of symmetric cross-saturation (spectral hole burning due to neighboring modes), and G_Bogatov is the asymmetric mode interaction coefficient (Bogatov effect) [114].
The linear gain coefficient is determined by both the number of excited electrons and the dispersion of the linear gain, Eq. (13), where g_l is the differential gain, N_g is the number of excited electrons at which the laser diode becomes optically transparent, D_g is the linear gain dispersion coefficient, λ_l is the wavelength of mode l, and λ_peak is the central wavelength of the laser. The effect of asymmetric mode interaction was first described by Bogatov in Refs. [110,111], where a model of stimulated scattering of laser light on dynamic electron density inhomogeneities was introduced as the theoretical explanation of this effect. The model describes the change in the permittivity, δε, caused by the dynamic inhomogeneity of the electron density due to the stimulated emission of the excited electrons under the influence of mode interference. The expression obtained by Bogatov for the variation of the dielectric constant can be rewritten in terms of the gain of the laser active region, yielding the expression (14) for the coefficient of asymmetric gain (the Bogatov coefficient), where Ω_{l(k)} = ω_l − ω_k are the laser mode offsets, S = Σ_l S_l is the total number of photons, and α_g is the linewidth enhancement factor.
In the self-injection-locked regime, a feedback term for the electric field amplitude E_l is introduced. Taking into account that the photon number obeys Ṡ_l ∝ 2Ė_l E_l, the expression (15) for the feedback contribution δS_feedback to the dynamics of the mode intensity in Eq. (10) can be obtained, where (R_o/τ_LC)Γ(ω_l) is the total feedback rate, φ_l(t) is the phase of the mode, and ψ_l = ω_l τ + arg(Γ(ω_l)), with τ being the round-trip time from the laser to the reflector and back.
Spectrum Collapse
For the case when the high-Q microresonator acts as an external mirror, it is sufficient to replace the reflection coefficient of the mirror, Γ(ω_l), in the expression for the total feedback rate in Eq. (15) with the expression for the frequency-selective reflection coefficient of the WGM microresonator [53]. Each laser mode is assumed to interact efficiently with only one mode of the microresonator; this assumption is evidently justified when the free spectral range (FSR) of the laser is larger than the resonator mode spacing. By tuning the laser frequency, one can reach the regime where a certain laser mode ω_{l=p} becomes close enough to some mode ω_m of the optical microresonator. In this case, the feedback to this laser mode from the microresonator increases dramatically, and the laser mode locks to the high-Q mode of the microresonator. In the stationary regime we can assume that Γ(ω_{l=p}) << 1, so the feedback expression simplifies to δS_feedback = δ_lp 2κ_dl S_l cos(ψ_l), where δ_lp is the Kronecker symbol, meaning that the feedback is added only to the mode closest to the frequency of the WGM.
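The spectrum collapse can be illustrated with a toy numerical experiment. The dimensionless Python sketch below keeps only the essential ingredients of Eqs. (9)-(10) and (15): a shared carrier reservoir, a parabolic linear gain dispersion, a spontaneous-emission seed, and an extra feedback rate applied to a single mode p. All coefficients are made up for illustration, and the symmetric and asymmetric mode-interaction terms (spectral hole burning, Bogatov effect) are omitted.

```python
import numpy as np

# Toy dimensionless multi-mode rate equations: carrier number N and
# photon numbers S_l for 50 modes, with threshold gain G_th = 1.
modes = np.arange(-25, 25)        # mode indices l
D = 2e-3                          # linear gain dispersion coefficient
pump = 5.0                        # pump rate (units of the threshold scale)
seed = 1e-9                       # spontaneous-emission seed per mode
fb = 0.1                          # feedback rate 2*kappa_dl*cos(psi), psi = 0
p = 5                             # index of the self-injection-locked mode

N = 1.0
S = np.full(modes.size, seed)
dt, steps = 1e-2, 200_000

for _ in range(steps):
    gain = N * (1.0 - D * modes.astype(float) ** 2)  # linear gain per mode
    dS = (gain - 1.0) * S + seed                     # G_th = 1
    dS[p + 25] += fb * S[p + 25]                     # feedback on mode p only
    dN = pump - N - np.sum(gain * S)
    S += dt * dS
    N += dt * dN

eta = S[p + 25] / S.sum()
print(f"energy concentration in the locked mode: {eta:.4f}")
```

Here the feedback rate fb exceeds the gain-dispersion penalty D·p² of mode p, so this mode clamps the carrier number below the threshold of the central mode and collects nearly all the photons, mimicking the collapse to single-frequency operation.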
The emission spectrum envelope in the model of a free-running multi-frequency laser [Eqs. (9)-(10)] is mainly defined by the linear gain dispersion, Eq. (13), and the spontaneous emission F_l. In the model of a laser with optical feedback, the frequency-selective feedback coefficient introduced in Eq. (15) also plays an important role in addition to the dispersion of the linear gain. If one laser mode p is self-injection-locked to the microresonator mode, the feedback coefficient of this mode can compensate for the dispersion term of the linear gain. The total gain of this mode then exceeds the gain of the central (zero-dispersion) mode with wavelength λ_peak, which raises the power of mode p to a level comparable to the central mode. Further feedback enhancement can lead to strong feedback: "complete" suppression of the other modes, S_l << S_p. In this case, mode p uses all the excited electrons produced by the injection current, which simplifies the electron/photon dynamics [Eq. (9)], allowing the summation to be omitted. Consequently, this process effectively transfers the energy of the other laser modes into the locked mode. It is in the strong-feedback regime that a multi-frequency laser effectively becomes a single-frequency one.

FIG. 6. (a, b) Experimental (blue line) and numerically calculated (red line) emission spectra of the self-injection-locked multi-frequency diode laser. (c) Experimentally obtained spectra of the self-injection-locked laser at different feedback levels (colored solid lines) with numerically calculated envelopes (black lines) for Γ1 = 1 × 10^-2, Γ2 = 1.2 × 10^-2, and Γ3 = 1.5 × 10^-2, respectively; the green spectrum is not approximated well. (d) Numerically calculated dependence of the single-mode energy concentration η on the feedback level (blue line) and the experimentally obtained energy concentration points (squares); the circle corresponds to the green spectrum in panel (c), and the triangle corresponds to the free-running laser. This figure is taken from Ref. [68].

Figure 6 shows the comparison of the analytical solution of the model with [panel (b)] and without [panel (a)] the feedback term, together with the experimental measurement [68]. The non-locked spectrum (Fig. 6a) matched automatically once the modelling parameters were slightly adjusted to match the Bogatov spectrum (see Fig. 6b).
To obtain a strong-feedback condition, we derive N from G^(1)_p in the stationary form of Eq. (10) for the locked mode and substitute it into G^(1)_l in the stationary form of Eq. (10) for the other modes. In this way, the relation between S_p and S_l is found. For consistency of our initial assumption and solution, the condition S_l << S_p must follow from this solution, which yields the criterion (16) for strong feedback. The physical meaning of this statement is that, for efficient spectrum conversion, the strong feedback should be greater than the spontaneous emission rate. A series of measurements of the self-injection-locked multi-frequency laser emission spectrum at different feedback levels has been carried out. Figure 6c shows several experimentally obtained states of the self-injection locking regime, where the optical feedback level was controlled via the gap between the microresonator and the prism. Performing numerical modelling with different feedback levels, the theoretical curves shown in Fig. 6c were obtained, and a good correspondence between theory and experiment is observed. In the strong coupling regime (see the purple curve in Fig. 6c), a single narrow line and the maximum suppression of the other spectral lines are observed. When the gap increases, the suppression decreases. The green, red, and blue lines in Fig. 6c show that the intensity of the suppressed modes begins to grow at lower coupling efficiency. At a certain threshold level of the feedback, other lines start to appear in the optical spectrum (see the green curve in Fig. 6c), above which the locking is destroyed. The feedback is weak near the threshold level; in other words, the power of the backward wave is not sufficient for stable self-injection locking.
To estimate the efficiency of the spectrum collapse, the parameter η = S_p/Σ_l S_l, describing the concentration of energy in the locked mode, was introduced and calculated using the developed model. The curve in Fig. 6d shows the numerical estimation of η for different feedback levels Γ together with the experimental points. Note that after entering the strong-feedback regime (according to Eq. (16)), the concentration quickly grows and reaches a value of about 96%, which corresponds to the transition to single-frequency generation. The energy concentration near the threshold level can only be considered an estimate. All numerical results obtained from the developed model are in good agreement with the experimental data.

FIG. 7. Optical spectra of the multi-frequency emission of the self-injection-locked laser. a. Two-frequency regime; b. four-frequency regime; c. six-frequency regime. Additional feedback is added to the corresponding modes; blue is the experimental data, and the orange curve is from the analytic model. d. Upper panel: scheme of the laser (red) and WGM (blue) mode frequencies in the model; dξ is the difference between the laser and microresonator intermode distances, and cκ is the first-to-second microresonator mode width ratio. Lower panel: theoretical (red and blue-dashed) and modelled (blue, green, yellow) tuning curves for the same WGM mode widths (left) and different mode widths (right). Panels a-c are taken from Ref. [68], and panel d from Ref. [115].

The measured power feedback level |Γ(ω_p)|² (about 10^-4) was sufficient for single-frequency lasing. Similar measurements with the laser diode stabilized by other WGM modes give estimates of the optical feedback level of about 10^-4 to 10^-3. The same feedback level has been demonstrated with a DFB laser locked to a high-Q microresonator [100]. The obtained results suggest that a higher level of single-mode energy concentration and line narrowing can be achieved by developing a technique for increasing the feedback. It should be noted, however, that an arbitrary feedback increase will decrease the output power of the stabilized laser.
Multi-frequency locking
In addition to the collapse of the diode laser's multi-frequency spectrum into one narrow line, simultaneous self-injection locking of a few laser modes to different microresonator modes can be achieved [68]. This effect results in effective discrimination of these locked modes and transformation of the initial spectrum into a spectrum with just a few locked narrow lines. The locking occurs on modes spaced by an integer number of the microresonator FSR interval Δf_WGR and of the laser FSR interval Δf_d, i.e., Δf_mult = M Δf_WGR = N Δf_d (which is sometimes called the Vernier effect). In this case, the mode competition near each diode frequency acts in the same way as in the case of single-frequency self-injection locking: the spectrum in the vicinity of the resonant frequency is suppressed, and energy is redistributed in favour of the locked spectral line. The spectral width of the locked mode also decreases significantly. This situation is depicted in Fig. 7, which shows a two-frequency regime (panel a), a four-frequency regime (panel b), and a six-frequency regime (panel c). Note that if different mode families in the microresonator have slightly different FSRs [116,117], different spacings between locked modes can be observed [Fig. 7(b, c)]. Good correspondence with experiment is achieved if the feedback is added to the corresponding modes in the numerical model, Eq. (15).
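The Vernier condition above is easy to explore numerically. A small Python sketch follows; the FSR values and the matching tolerance are illustrative assumptions, not parameters of a specific device.

```python
# Search for integer pairs (M, N) with M * fsr_wgr close to N * fsr_d,
# i.e. the Vernier condition delta_f_mult = M * delta_f_WGR = N * delta_f_d.
def vernier_pairs(fsr_wgr_hz, fsr_d_hz, tol_hz, m_max=200):
    pairs = []
    for m in range(1, m_max + 1):
        n = round(m * fsr_wgr_hz / fsr_d_hz)
        if n >= 1 and abs(m * fsr_wgr_hz - n * fsr_d_hz) < tol_hz:
            pairs.append((m, n, m * fsr_wgr_hz))
    return pairs

# Illustrative values: 12.5 GHz microresonator FSR, 46 GHz diode FSR,
# 100 MHz matching tolerance.
for m, n, f in vernier_pairs(12.5e9, 46e9, 100e6):
    print(f"M = {m:3d}, N = {n:2d}, locked-mode spacing ~ {f / 1e9:.1f} GHz")
```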
In Ref. [115], numerical modelling for three laser modes and two microresonator modes was performed. The three laser mode frequencies were swept through the microresonator modes with different mode spacings and widths (see Fig. 7d, upper panel). The results showed that self-injection locking occurs to the mode that is the first to match a laser mode (see Fig. 7d, lower left: the orange curve locks) or to the narrowest mode (Fig. 7d, lower right: the green curve locks). All laser modes then oscillate stably at the locking WGM frequency without switching. At the same time, if several WGM modes have similar parameters (linewidth and detuning from the closest laser mode), self-injection locking can happen to both WGM modes, and the laser modes oscillate at both frequencies simultaneously. It was demonstrated that for multi-mode locking the allowed linewidth discrepancy is ±0.005κ, and the allowed difference between laser-WGM mode distances is dξ < 1. This partly explains why multi-locking occurs only within a single transverse mode family: it is the only way for the modes to have sufficiently close losses and regular spacing.
C. Optimization of the self-injection locking regime
One can see that five main parameters define the laser performance in the self-injection locking regime: 1. the coupling strength of the forward and backward waves in the cavity, β; 2. the locking phase ψ, determined by the optical path between the laser and the microresonator and by the frequency of the microresonator locking mode; 3. the optical round-trip time τ_s between the laser and the microresonator; 4. the laser cavity-microresonator frequency detuning ξ; and 5. the pump coupling efficiency η. In what follows, we consider the effective detuning ζ instead of the normalized frequency difference ξ between the laser cavity mode and the WGM, since ξ >> 1 in the case of tight locking. The laser linewidth is reduced proportionally to the square of the stabilization coefficient [19,118] determined by the slope of the tuning curve, K(η, β, ζ, ψ) = ∂ξ/∂ζ: the free-running and locked laser linewidths are related as δω_locked = δω_free/K².
This simple formula for the linewidth reduction was obtained in Ref. [104] under small back-scattering (β << 1), zero locking phase (ψ = 0), resonant tuning (ζ = 0), and critical coupling (η = 0.5). It has been tested in several works [65,119]. In Ref. [103], a five-parameter (ψ, ζ, η, β, κ_0 τ_s) optimization study of the stabilization coefficient was performed. Contrary to common belief, increasing the back-scattering, described by the parameter β, does not monotonically enhance the stabilization coefficient but leads to its eventual saturation (see Fig. 8). An optimal selection of the system parameters reduces the laser linewidth by several orders of magnitude. Here we consider the parameters ψ = 0 and κ_0 τ_s << 1 as the most common and illustrative. In this case, the resonance curve of the locking mode (which, for a laser diode, coincides with the light-current curve) can be observed while the laser frequency is scanned broadly in and out of the locking range, and it has a nearly rectangular shape.
To optimize the laser performance, we look for the effective detuning ζ_0 that maximizes the stabilization coefficient K. The condition ∂K(ζ, ψ = 0, η, β)/∂ζ|_{ζ_0} = 0 results in a ζ-multiplied bi-cubic characteristic equation that depends on β only. Solving it, we obtain the optimal detuning ζ_0 for different β values, Eq. (18). This expression has a simple physical meaning: the linear interaction of the counter-propagating waves leads to resonance splitting [53], with a splitting value approximately equal to βκ_m (see Fig. 2a) [120]. The locking band also splits into two for large β [see Fig. 4b; the crosses mark the points ζ = ±(β − 1/√3), and the tips of the peaks are close to ζ = ±β].
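The detuning rule of Eq. (18) can be sketched numerically. The piecewise form below is an assumption pieced together from the text (ζ_0 = 0 for small β, and ζ_0 = ±(β − 1/√3) once β exceeds 3^(-1/2), as marked by the crosses in Fig. 4b); the exact expression of Eq. (18) is not reproduced here.

```python
import math

def optimal_detuning(beta):
    """Assumed piecewise sketch of the optimal effective detuning
    zeta_0 versus the mode-splitting coefficient beta: zero for
    small beta, +/-(beta - 1/sqrt(3)) for beta > 1/sqrt(3)."""
    b0 = 1.0 / math.sqrt(3.0)
    if beta <= b0:
        return (0.0,)
    return (beta - b0, -(beta - b0))

for beta in (0.1, 0.5, 1.0, 10.0):
    print(f"beta = {beta:5.2f} -> zeta_0 = {optimal_detuning(beta)}")
```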
The slope has a maximum inside its validity range if β ≤ 1. For the case κ_0 τ_s << 1, the maximum of the stabilization coefficient is given in Ref. [103] by Eq. (19). This expression shows that the optimal stabilization coefficient increases with the laser-microresonator distance. However, one should not increase the distance uncontrollably, as this degrades the laser signal quality and produces redundant metastable fringes on the tuning curve [104]. The criterion of stable operation was approximated as κ_m τ_s < 9.4(8η... It is helpful to introduce another parameter, µ = 2γ/κ_0 (so that β = µ(1 − η)), which is a constant for a given resonator. Using this notation, it is possible to perform a complete parametric optimization of the stabilization coefficient. Figure 8a shows the results of the numerical optimization for zero phase, ψ = 0, and the optimal frequency detuning of Eq. (18). It can be seen that critical coupling is optimal for a short laser-microresonator distance (κ_0 τ_s < 0.1) and for large back-scattering, β ≥ 1.
The dependence of the optimal pump coupling coefficient η on the normalized forward-backward coupling rate µ is shown by the solid line. While increasing µ, we should increase the loading to keep β < 1, preventing the resonance splitting (β = 1 is shown in Fig. 8 with the dotted line). This is also clearly seen in the map of the optimal detuning (see Fig. 8b). At some point (µ ≈ 5 for the considered parameters), increasing the detuning is no longer advantageous, and critical coupling becomes optimal.
The stabilization coefficient can also be optimized over the locking phase ψ [103]. For small β (β ≤ β_cr ≈ 0.68), and thus small µ, we get the same ζ_opt = 0 as for the zero-phase case (see also Fig. 9b). The critical value β_cr increases with the round-trip time and pump coupling coefficient (κ_m τ_s) but always stays less than unity [103]. It was also shown that ζ_opt = 0 implies ψ_opt = 0 (see Fig. 9c). The map of the stabilization coefficient under the optimal detuning and locking phase conditions is shown in Fig. 9a for different combinations of η and µ, and the maps of the optimal detuning and optimal phase are shown in Figs. 9(b, c). Zero phase (ψ = 0) is an exact optimum for β < 0.68 (see Fig. 9c), which is connected to the optimal condition ζ = 0 (see Fig. 9b). Since the optimal value of β found earlier for the zero-phase case, β_max = 3^(-1/2), is less than 0.68, the maximum stabilization coefficient value for the zero locking phase is a global maximum. According to our model, optimizing the self-injection locking can result in a significant reduction of the laser linewidth compared with the best experimental results. For example, a diode laser linewidth reduction from 2 MHz in the free-running regime to below 100 Hz in the locked regime was demonstrated in Ref. [21]. The linewidth reduction in the case of the optimal parameters η_opt(µ), ψ_opt(µ), and ζ_opt(µ) for µ = 3 can be improved by 15 times, which is at least an order of magnitude better than the result obtained for non-optimal coupling. Furthermore, if the mode of the resonator is optimally selected (µ = 1.16), the linewidth reduction can be improved by 94 times.
A non-monotonic saturation of the stabilization coefficient K with respect to the back-scattering was shown (see Fig. 9a). The maximum value of the stabilization coefficient is reached at β ∈ [3^(-1/2); 1], which determines the optimal "semi-split" mode. The saturation happens due to the formation of the doublet back-scattering resonance by the microresonator's counter-propagating modes. Microresonator modes with high back-scattering rates require the laser frequency to be tuned to the inner slope of the doublet back-scattering resonance to achieve the highest stabilization coefficient (see Fig. 9b).

FIG. 10. Numerically obtained transmission resonance curves, Eq. (8), for the parameters taken from Ref. [21]. The parameter ζ(ξ) was evaluated using Eq. (6) (dashed lines), and the corresponding LI curves were evaluated for increasing frequency (solid lines). The points on the transmission curves mark the optimal detuning values. All quantities are plotted in dimensionless units. This figure is taken from Ref. [103].
In experiment, one can choose and control the corresponding parameters by analysing the light-current (LI) or transmission-resonance curves (see Fig. 10). If the detuning and phase are close to the optimal ζ = 0 and ψ = 0, the result is the typical close-to-rectangular LI curve (see the blue curve in Fig. 10); the optimal detuning is marked with a point. For a sufficient Q factor of the microresonator, the entrance point of the locking region is close to zero detuning, which also helps to tune to the optimum. By decreasing the pump coupling to the optimal value, we set it close to critical coupling; the transmittance will thus be reduced. If β is greater than the critical value β_cr ≈ 0.68, the optimal phase becomes nonzero. The correct phase can also be controlled using the shape of the transmission resonance [see the difference between the green and orange curves in Fig. 10]. If the laser-microresonator distance cannot be changed in a particular setup, the phase can be tuned by switching the operational mode; this step, however, can also modify the scattering coefficient µ. The LI curve analysis also suggests the optimal mode, which should have a particular shape (see Fig. 10, red curve).
In general, the following optimization recommendations based on the developed theoretical model can be made: 1. If we can estimate µ, we select a mode with the optimal µ.
2. We set up the critical coupling regime, which is indicated by the nearly maximal depth of the dip, at which the LI-curve width is also maximal.
3. We adjust the phase so that the LI curve acquires the correct shape: the first corner (counting in the frequency-scanning direction) should be sharp, and the second rounded.
4. If we do not know µ, have not yet selected a mode, and/or cannot change the laser position, we look for a resonance with the correct shape (see above).
Any particular experimental realization of the laser requires an adjustment of the optimization algorithm in accordance with the theoretical model described above; a schematic outline of the procedure is sketched below.
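The recommendations above can be summarized as procedural pseudocode. All helper methods (select_mode, dip_depth_maximal, adjust_gap, ...) are hypothetical placeholders standing for setup-specific actions; this is an outline, not a working instrument driver.

```python
def optimize_self_injection_locking(setup, mu_estimate=None):
    """Schematic outline of the tuning recommendations 1-4 above."""
    if mu_estimate is not None:
        # Step 1: pick the resonator mode whose back-scattering gives
        # the optimal normalized coupling mu.
        setup.select_mode(closest_to=mu_estimate)

    # Step 2: approach critical coupling; the LI-curve dip should be
    # nearly maximal in depth, and the locking range maximal in width.
    while not setup.dip_depth_maximal():
        setup.adjust_gap()

    # Step 3: tune the feedback phase until the LI curve has the correct
    # asymmetric shape (sharp first corner, rounded second corner in the
    # frequency-scan direction).
    while not setup.li_curve_shape_ok():
        setup.adjust_phase()

    # Step 4 (fallback, when mu is unknown and the phase is untunable):
    # search across modes for a resonance with the correct LI-curve shape.
    return setup
```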
In some cases, the optimal parameters are hard to achieve. Recently, a self-injection locking scheme for a laser via a high-Q WGM microresonator has been proposed in which a drop-port-coupled mirror adjusts the optical feedback [120] (see Fig. 11). The adjustment enables tuning the stabilization coefficient and optimizing it for any level of Rayleigh scattering. In this way, the self-injection locking scheme with a mirror solves the problem of the non-ideal Rayleigh back-scattering rate, which is strongly suppressed in high-Q crystalline WGM microresonators. It has been noticed that the optimal regime of the proposed scheme is far from critical coupling (unlike the classic self-injection locking scheme), which results in lower radiation losses.
Based on the quasi-geometrical approach, the resonant optical feedback for the mirror-assisted scheme (see Fig. 11a) was derived [120], where β_m = κ_mirror/κ_m is the ratio between the mirror-coupling rate and the rate of total losses in the microresonator. It was found that, for the stabilization-coefficient optimization problem, the ratio between the mirror-coupling rate and the rate of internal losses in the microresonator (µ_m = κ_mirror/κ_0) is more convenient [120].
We note that such a scheme can also be implemented on-chip [91] by means of a Sagnac reflector.
The maximal values of the stabilization coefficient for the self-injection locking scheme with a drop-port-coupled mirror and for the classic scheme with optimal Rayleigh scattering reported in Ref. [103] are approximately the same (see Fig. 11b). However, for the classic scheme, reaching the maximal level of laser stabilization requires precise tuning of the Rayleigh scattering rate, which is not a trivial task compared with tuning the drop-port mirror coupling rate.
D. Broadening the usable spectral range
Though the first experiments and applications were focused on the telecommunication (1.55 µm) and visible bands, there are no principal limitations for the implementation of self-injection locking at other wavelengths, and several main directions can be distinguished in this area. A gallium nitride (GaN) semiconductor FP laser diode operating at a wavelength of 446.5 nm with a linewidth of less than 1 MHz has been demonstrated [67], as well as a sub-100-kHz UV laser at 370 nm [70]. It is worth noting that crystalline MgF2 microresonators with Q factors exceeding 10^9 were available. Also, a hybrid integrated laser composed of a GaN-based laser diode and a Si3N4 photonic chip-based microresonator operating at the record low wavelength of 410 nm in the near-ultraviolet region was reported [93]. It is suitable for addressing transitions of atoms and ions used in atomic clocks and quantum computing, or for underwater LiDAR. By self-injection locking of an FP diode laser to a high-Q (0.4 × 10^6) photonic integrated microresonator, the optical phase noise at 461 nm was reduced by a factor greater than 100, limited by the device Q factor and back-reflection. A chip-scale visible-laser platform was created by using tightly confined, micrometer-scale Si3N4 resonators and commercial FP laser diodes [121]. Tunable, narrow-linewidth lasers in the deep blue (450 nm), blue (488 nm), green (520 nm), red (660 nm) and near-IR (785 nm), with coarse wavelength tuning up to 12 nm, fine tuning up to 33.9 GHz, linewidths down to the sub-kilohertz level, side-mode suppression ratio > 35 dB, and fiber-coupled power up to 10 mW, were achieved.
The possibility of stabilizing lasers at longer wavelengths is also of interest. In particular, the stabilization of a GaSb-based distributed-feedback diode laser at a wavelength of 2.05 µm by a WGM microresonator was studied [62]. The measured frequency noise of the stabilized laser was below 100 Hz/Hz^{1/2} in the offset range starting from 10 Hz. The instantaneous linewidth decreased by four orders of magnitude compared with the free-running laser and amounted to 15 Hz at a measurement time of 0.1 ms; the integral linewidth was 100 Hz. Even better results were obtained in later work [64]: the frequency noise of the laser was below 50 Hz/Hz^{1/2} at 10 Hz, reaching 0.4 Hz/Hz^{1/2} at 400 kHz. The instantaneous linewidth of the laser improved by almost four orders of magnitude compared with the free-running laser and amounted to 50 Hz at a measurement time of 0.1 ms. The Allan deviation of the laser frequency was about 10^{-9} from 1 to 1000 s. In addition, the possibility of stabilizing a quantum cascade laser at a wavelength of 4.3 µm was studied; the linewidth decreased to 10 kHz for integration times from 1 ms to 1 s [122]. The Q factor of the CaF2 microresonator at this wavelength was about 2.2 × 10^7. The possibility of frequency tuning in a 1 GHz band by controlling the cavity temperature was also demonstrated. We note that the Q factor of crystalline microresonators made of many standard materials, including MgF2 and CaF2, decreases in the mid-IR due to multi-phonon absorption [50], which limits their application for laser frequency stabilization in this range. One possible way out is to use a microresonator made from crystalline silicon, which has a comparable Q [51]; self-injection locking to a crystalline silicon microresonator was demonstrated at 2.64 µm [109]. Also, a tunable, single-mode, mid-IR laser at 3.4 µm using a tunable high-Q silicon microring cavity and a multi-mode interband cascade laser was developed [91]. Single-frequency lasing with 0.4 mW output power via self-injection locking and a wide tuning range of 54 nm with 3 dB output-power variation were achieved. An upper-bound effective linewidth of 9.1 MHz was estimated, and a side-mode suppression ratio of 25 dB of the locked laser was measured.
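The quoted frequency-noise floors and instantaneous linewidths are related by the textbook white-noise relation FWHM = π S_ν, with S_ν the one-sided frequency-noise PSD; a minimal sketch (the example ASD values are illustrative, not exact numbers from Refs. [62,64]):

```python
import math

def lorentzian_fwhm(asd_hz_per_rthz: float) -> float:
    """Lorentzian FWHM from a white frequency-noise floor.

    asd_hz_per_rthz: amplitude spectral density in Hz/Hz^(1/2);
    the one-sided PSD is S_nu = ASD**2 (Hz^2/Hz), and FWHM = pi * S_nu.
    """
    return math.pi * asd_hz_per_rthz**2

# The 0.4 Hz/Hz^(1/2) floor reported at 400 kHz offset [64] would
# correspond to a sub-Hz Lorentzian linewidth:
print(lorentzian_fwhm(0.4))   # ~0.5 Hz
# while a ~2 Hz/Hz^(1/2) floor gives ~13 Hz, of the order of the quoted
# 15-50 Hz instantaneous linewidths.
print(lorentzian_fwhm(2.0))
```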
E. Additional thermal stabilization
The thermo-refractive coefficient, as well as the thermal expansion of the microresonator, imposes the major limitations on self-injection-locked laser performance, causing both thermal drift and inevitable thermodynamic fluctuations [65,123]. One way to increase the thermal stabilization efficiency is to develop methods of engineering the connection between the microresonator and the thermal bath. For example, WGM microresonators are usually characterized by a relative frequency temperature sensitivity on the order of 10^{-5}/°C; the ambient temperature should then be stabilized at the µK level for the linewidth of the cavity-stabilized laser to stay below 10 kHz over 1 s. The problem can be solved by using a thermally self-compensated resonator, where thermal compensation is achieved by a specially developed composite resonator design. For microresonators made of MgF2, a sandwich structure with layers of Zerodur was developed, which led to a sevenfold decrease in sensitivity to fluctuations [123]. For CaF2, a composite structure with layers of Zerodur provided a threefold decrease in sensitivity. However, for CaF2 it turned out to be more promising to create a composite structure with ceramic layers possessing a negative coefficient of thermal expansion [124]. This approach made it possible to reduce the sensitivity to thermal fluctuations by more than 100 times and to achieve a frequency stability level of 10^{-12} at normal atmospheric pressure with an integration time of 1 s [119]. To further improve laser stability, the integration of lasers in evacuated, thermally stabilized packages, the introduction of active stabilization of the optical path, and the use of high-Q thermally compensated cavities are promising. For a thermally compensated MgF2 microresonator in a rigid evacuated shell, we obtained a linewidth of less than 25 Hz and a relative frequency stability of 1.67 × 10^{-13} (5.0 × 10^{-12}) for an integration time of 0.1 s (1.0 s) at 191 THz optical frequency [123]. Active methods of thermal stabilization are also being developed. In particular, using cross-polarized two-mode temperature stabilization for a birefringent high-Q WGM microresonator, the long-term stability was improved by a factor of 51 at an integration time of 1000 s [125]. A cavity temperature instability level of 10 µK has been maintained up to an integration time of 1000 s, allowing this compact optical cavity module to serve as a high-performance frequency reference in potential metrology, synchronization, and frequency-transmission applications.
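The microkelvin-level requirement follows directly from the quoted sensitivity; a minimal estimate (illustrative numbers, not taken from Refs. [123-125]):

```python
# Allowed temperature excursion for a WGM-resonator reference, assuming
# only the relative sensitivity quoted above (d(nu)/nu ~ 1e-5 per degC)
# and a 193 THz (1550 nm) optical carrier.

nu0 = 193e12          # optical carrier frequency, Hz
sens = 1e-5           # relative frequency shift per degC

def temp_budget(target_linewidth_hz: float) -> float:
    """Temperature excursion (degC) allowed for a given frequency budget."""
    return target_linewidth_hz / (nu0 * sens)

print(f"{temp_budget(10e3) * 1e6:.1f} microK for a 10 kHz budget")
# -> ~5 microK, consistent with the microkelvin-level requirement above.
```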
F. Self-injection locking to Fabry-Pérot cavities
The performance of narrow-linewidth lasers based on self-injection locking to WGM microresonators and on-chip microring resonators is ultimately limited by the absorption, various nonlinear effects and thermal expansion of the host material [48,126,127]. For example, although crystalline WGM microresonators have reached a maximum Q factor of 10^{11}, they typically have to be loaded to Q < 10^9 to avoid Kerr nonlinear effects [21]. On the other hand, Fabry-Pérot (FP) cavities have long been the best optical cavities ever made and are frequently used as the optical reference in the most demanding applications, including optical clocks, frequency combs, and precision measurements [128]. In terms of Q factor and frequency stability, hollow FP cavities made of super-mirrors and ultra-low-expansion supporting material are still far ahead of WGM microresonators and on-chip microring resonators. Compared with WGM microresonators or on-chip microring resonators, hollow FP cavities have the following advantages. Firstly, the only part of the cavity that bears the very high intensity of the built-up optical field is the mirror coating, with typical field penetration depths of a few micrometers. Combined with the fact that the modal volume is orders of magnitude larger, a hollow FP cavity exhibits much weaker thermal effects, nonlinear effects and induced frequency drift. Hence it is possible to use FP cavities of very high finesse and Q factor in self-injection locking without inducing nonlinear effects. Secondly, the body of an FP cavity can be built with zero-expansion materials such as ULE or Zerodur, which significantly reduces its long-term frequency drift due to temperature fluctuations.
An FP cavity was also the first type of external high-Q cavity used to demonstrate linewidth narrowing of diode lasers by self-injection locking [17-19]. Almost ten years ago, self-injection locking of a single laser and of two lasers to a bulky, high-finesse FP cavity was demonstrated [39,130,131] in TIRR, linear and V-shaped configurations, producing very narrow linewidths. However, this approach never matured into a viable product because of the large FP cavity and optical bench used in the experiments. To take this technology out of the lab and make commercial products comparable in size to other competing narrow-linewidth laser technologies, a miniaturized FP cavity with a high Q factor is needed. Some recent research efforts have been devoted to developing miniature FP cavities and using them as references in PDH locking schemes [94,132], and hertz-level linewidths have been achieved. Despite the excellent results, the cavity needs to be stabilized in a vacuum, acoustically isolated chamber, and this complexity confines the system to the laboratory. In addition, there are also reports of using a 10-mm-long confocal FP (CFP) cavity with self-injection locking to achieve sub-kilohertz linewidths [133,134]. However, the finesse of CFP cavities cannot reach very high levels, and the package size is still many times that of a standard butterfly package. In a recent effort, a miniature FP cavity of sub-milliliter volume and 10^8-10^9 Q factor was developed and utilized in self-injection locking to make compact narrow-linewidth lasers [40]. Figure 12a shows a ring-down measurement of the linewidth of the FP cavity, and Fig. 12b shows the schematic diagram of a narrow-linewidth laser locked to the FP cavity. The earlier result [40], utilizing a µ-FP cavity with Q of 10^8, has already shown the potential to beat the performance of a fiber laser in a very compact package. By improving the Q factor to 7.7 × 10^8 [135], we also demonstrated frequency noise much better than that of leading commercially available narrow-linewidth laser products such as the NKT E15 and OE4040-XLN; it is also ahead of the recently reported heterogeneously integrated narrow-linewidth laser utilizing a high-Q microring resonator [94], if only the self-injection locking technique is employed. This work marks a major step toward a new category of compact narrow-linewidth lasers of superior performance utilizing ultra-high-Q miniature FP cavities.
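The quoted Q range follows from elementary FP relations; a short sketch (the example length and finesse are illustrative, only loosely matched to Ref. [40]):

```python
# Relating the finesse F, length L and Q factor of a hollow FP cavity:
# FSR = c / (2L), linewidth = FSR / F, Q = nu0 / linewidth = 2*L*nu0*F / c.

c = 299_792_458.0     # m/s
nu0 = 193e12          # Hz (1550 nm)

def fp_quality_factor(length_m: float, finesse: float) -> float:
    fsr = c / (2 * length_m)
    return nu0 * finesse / fsr

# A ~10 mm cavity with finesse 1e5 already reaches Q ~ 1.3e9,
# i.e. the 1e8-1e9 range quoted for miniature FP cavities.
print(f"Q = {fp_quality_factor(10e-3, 1e5):.2e}")
```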
It is also worth mentioning that, in addition to hollow FP cavities, solid FP cavities based on low-loss fibers have been used in self-injection locking to achieve narrow-linewidth lasers [136,137]. This platform has also demonstrated a Kerr frequency comb with external fiber-laser pumping [138]. In a recent work, a compact Kerr frequency comb engine based on self-injection locking of an 80 mW DFB laser to fiber FP resonators with various FSRs from 1 to 10 GHz was demonstrated [139].
III. INFLUENCE OF THE MICRORESONATOR NONLINEARITY
Because of the high Q factor and small mode volume of optical microresonators, the Kerr nonlinearity and thermal effects should be considered at sufficient intracavity power. Above a certain pump power, the tuning curve is distorted: the locking coefficient and bandwidth change, and the generation frequency is shifted. Furthermore, if the modulation instability threshold is exceeded, comb states can be generated. Self-injection locking was found to be beneficial for soliton generation [87,107,140], since the pump is stabilized at the desired regime. Moreover, self-injection locking allows solitonic pulse generation even in microresonators with normal group velocity dispersion (GVD) [90,107,141], which otherwise requires specialized techniques [142-144]. On the other hand, thermal effects, such as the thermo-optic effect and thermal expansion, are inevitable in high-Q optical microresonators [126,145] at high power. Microresonator thermal effects are often considered parasitic, especially in the context of nonlinear optical processes, where thermally induced drifts, fluctuations, and instabilities [145-148] can strongly impact the generation of optical frequency combs and solitonic structures [149-151].
A. Kerr nonlinearity
It is reasonable to begin with the analysis of the Kerr effect. Consider the microresonator coupled-mode equations [149] with back-scattering [152] for the forward and backward (clockwise- and counter-clockwise-propagating) mode amplitudes a_µ and b_µ, which are analogous to the linear self-injection locking model [104] with additional nonlinear terms [Eq. (21)]. Here f is the normalized pump amplitude (f = 1 corresponds to the modulation instability threshold), β is the normalized coupling rate between the forward and backward modes (the mode splitting in units of the mode linewidth), α_x is a cross-modulation coefficient derived from mode-overlap integrals [152], and ζ_µ = 2(ω_eff − ω_µ + µD_1)/κ is the normalized detuning between the laser emission frequency ω_eff and the µ-th cold microresonator resonance ω_µ on the FSR grid (with µ = 0 being the pumped mode and D_1/2π the microresonator FSR). Note that in Ref. [140], as in other previous experimental works, the detuning ζ is defined with the opposite sign, to be co-directional with the diode pump current and the wavelength; here we conventionally stick to the definition where the detuning is co-directional with frequency. For numerical estimations, α_x = 1 can be used for modes with the same polarization. Equation (21) provides the nonlinear resonance curve and the soliton solution [149,152]. For analyzing self-injection locking, we combine Eq. (21) with the standard laser rate equations Eq. (4), which are similar to the Lang-Kobayashi equations [23] but with resonant feedback [104]. The pumped mode, corresponding to µ = 0, is of main interest. We search for the stationary solution [Eq. (22)]:

(−1 + iζ)a + iβb + ia(|a|² + 2α_x|b|²) + f = 0,

where a = a_0, b = b_0 and ζ = ζ_0 for simplicity. These equations define the complex reflection coefficient of the WGM microresonator used in the self-injection locking theory. To solve Eq. (22) for the reflection coefficient and to make the resemblance to the linear case [53,104] apparent, the nonlinear detuning shift δζ_nl and the nonlinear coupling shift δβ_nl are introduced. We then transform ζ̄ = ζ + δζ_nl, β̄² = β² + δβ²_nl to bring the reflection coefficient to the same form as in the linear self-injection locking model, Eq. (5). After the redefinition ξ̄ = ξ + δζ_nl, the nonlinear tuning curve in the new coordinates ξ̄-ζ̄ becomes the same as Eq. (6). Note that the laser cavity resonant frequency ω_LC entering ξ is also assumed to include the Henry factor in its definition. The quantity κτ_s/2 is usually considered small, i.e. κτ_s/2 ≪ 1, so the locking phase ψ̄ ≈ ψ = ω_0τ_s − arctan α_g − 3π/2 depends on both the resonance frequency ω_0 and the round-trip time τ_s from the laser output facet to the microresonator and back. The nonlinear detuning and coupling shifts are given by Eqs. (23)-(25), which can be solved numerically and plotted in the ζ = ζ̄ − δζ_nl, ξ = ξ̄ − δζ_nl coordinates. One can observe that the calculated tuning curve in the nonlinear case, where the Kerr nonlinearity is present, differs drastically from the tuning curve predicted by the linear model. It can also be seen from Eq. (24) that the nonlinear detuning shift is positive and allows for a larger negative detuning ζ (proportional to the pump power). The detuning in the locked state can be estimated by assuming ζ̄ = 0 in Eqs. (23)-(25).
For low β ≪ 1 and ψ̄ = 0, a simple estimate was obtained [153]. It is a reasonable estimate if the pump is moderate, so that the tuning and resonance curves are symmetric and without self-intersections, and good stabilization can be achieved (see the intersection of the black 1:1 line with the tuning curves in the left panel of Fig. 13). It can be shown that this detuning always lies inside the bi-stability region. The proposed nonlinear self-injection locking model is valid for both anomalous and normal GVD. The theoretical tuning and resonance curves are shown for different pump values f in Fig. 13.
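A numerical sketch of the stationary nonlinear resonance is given below, using the forward-mode equation quoted above [Eq. (22)]. The backward-mode equation is written here by symmetry (same losses and cross-coupling, no direct pump); since the full system Eq. (21) is not reproduced in the text, that symmetric form is an assumption of this sketch, as are the parameter values.

```python
import numpy as np
from scipy.optimize import fsolve

# Stationary Kerr resonance with back-scattering, Eq. (22):
#   (-1 + i*zeta)*a + i*beta*b + i*a*(|a|^2 + 2*alpha_x*|b|^2) + f = 0,
# plus the (assumed symmetric, unpumped) backward-mode counterpart.

beta, alpha_x, f = 0.5, 1.0, 1.5

def residual(x, zeta):
    a = x[0] + 1j * x[1]
    b = x[2] + 1j * x[3]
    ra = (-1 + 1j * zeta) * a + 1j * beta * b \
         + 1j * a * (abs(a) ** 2 + 2 * alpha_x * abs(b) ** 2) + f
    rb = (-1 + 1j * zeta) * b + 1j * beta * a \
         + 1j * b * (abs(b) ** 2 + 2 * alpha_x * abs(a) ** 2)
    return [ra.real, ra.imag, rb.real, rb.imag]

# Sweep the detuning and track one branch of the (possibly multivalued)
# nonlinear resonance by continuation from the previous solution.
zetas = np.linspace(-8, 4, 400)
x = np.zeros(4)
powers = []
for z in zetas:
    x = fsolve(residual, x, args=(z,))
    powers.append(x[0] ** 2 + x[1] ** 2)   # forward intracavity power |a|^2

# 'powers' traces the tilted (bi-stable) Kerr resonance; feeding a and b
# into the microresonator reflection coefficient would then give the
# deformed self-injection-locking tuning curve discussed above.
```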
It can be seen that the locking band shrinks with increasing pump power, while the effective detuning moves into the comb region. One can also note the points with infinite derivative ∂ω_LC/∂ω, formally suggesting an infinite stabilization coefficient. However, since the linewidth can be viewed as a fluctuation of the frequency, this derivative should be averaged near the laser cavity detuning over its initial linewidth to obtain the real line narrowing.
Complex transient dynamics can occur in the blue-detuned region, where the instability of the lower branch of the unperturbed curve appears earlier than the bi-stability (the second stable branch) of the nonlinear curve (compare the violet and blue dashed curves in Fig. 13, right panel). A spontaneous transition from the linear-model curve (blue dashed line in Fig. 13, right panel) to the locked state happens [108,154]. However, the intracavity power increases in the locked state, and in the nonlinear regime there is no stable stationary solution at this laser detuning (see the orange and purple curves in Fig. 13, right panel). So the frequency unlocks and the power tries to go back, causing dynamical oscillations [154]. Note that this effect arises from any nonlinearity, e.g. from both the thermal and the Kerr one, and is not connected with nonlinear generation (the single-mode regime was verified in Ref. [154]). If the scan is stopped in this oscillatory regime, the detuning continues to perform periodic evolution.
B. Thermal nonlinearity
The above theory can also be modified for the case of thermal nonlinearity. The thermal equation can be added to the system Eq. (21) following Ref. [155] [Eq. (29)]. Here the triple products aaa*_µ and bbb*_µ stand for the Kerr nonlinear summations, and P_a and P_b stand for the average forward and backward wave powers, compared with the system Eq. (21). The thermal variable θ is the temperature averaged with the optical-field mode power and the thermo-refractive coefficient over the cavity volume, thus representing the thermal frequency shift. The thermal parameters κ_θ and r_θ are the inverse thermal relaxation time and the thermal-to-Kerr-nonlinearity coefficient ratio [156]. Note that a negative thermo-refractive coefficient corresponds to negative r_θ and, consequently, negative θ. From the above equations we see that the comb and soliton parameters are now governed by ζ_eff = ζ + θ rather than by ζ.
In the stationary regime, Eq. (29) yields θ = r_θ(P_a + P_b), and the system can be solved in a way similar to the above case, introducing the nonlinear detuning shift δζ_nl and the nonlinear coupling shift δβ_nl. In this case, the 2α_x in Eq. (24) is modified by 2r_θ [154]. It can also be shown that, after the renormalization of the fields a, b → a, b/√(1 + r_θ) in the modified equations, the system is reduced to Eq. (21) with an effective pump and cross-modulation coefficient. This result leads to formal complexities for r_θ ≤ −1, but the solution remains correct. We observe that thermal effects further deform the tuning and resonance curves of the self-injection locking. Figure 14 shows the curves for different values of the thermal-to-Kerr-nonlinearity coefficient ratio. We can see that the locking-band slope increases with r_θ, meaning lower stabilization efficiency; at the same time, the range of allowed generation detunings grows. Another important note is that, for positive thermo-refraction, the points of ∂ζ/∂ξ = 0 (infinite stabilization coefficient), which are also natural boundaries holding the locking range inside, exactly correspond to the bi-stability criterion of the nonlinear resonance over the generation detuning ζ with the effective normalized pump amplitude f_θ (see the green dashed and dash-dotted lines in the left panel of Fig. 14).

This stationary model has some issues related to the time scales. As the Kerr nonlinearity is much faster than the thermal one, the stationary regimes of the equations are reached at different times: first, the system comes to the stationary regime of the non-thermalized model (r_θ = 0) and then evolves to the presented one. From the point of view of the comb dynamics, this results in an effective broadening of the incoming pump line and an additional detuning increase; however, this additional detuning does not drive the system out of the soliton existence range. Another issue is that, while the Kerr nonlinearity constitutes the complex triple sum of the modal amplitude products, allowing soliton formation [152], the thermal part depends on the total power. This difference makes the frequency comb parameters depend on f and ζ_eff = ζ + θ rather than on f_θ and ζ. It can be shown that the ∂ζ_eff/∂ξ = 0 points of the tuning curves for the effective detuning ζ_eff coincide with the bi-stability criterion for the nonlinear resonance with pump f, similarly to ζ and f_θ. This directly shows that, if the working point is in the self-injection-locked state, it is automatically inside the bi-stability range, corresponding to the soliton existence domain. Note also that the locked state corresponds to a small effective comb detuning.
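The stationary thermal correction is a one-line post-processing step on the Kerr solver sketched above (the value of r_θ here is illustrative):

```python
# In the stationary regime the thermal variable reduces to
# theta = r_theta * (P_a + P_b), so the comb-relevant effective detuning
# is zeta_eff = zeta + theta.

r_theta = 2.0   # illustrative thermal-to-Kerr-nonlinearity ratio

def effective_detuning(zeta, x, r_theta=r_theta):
    """x holds (Re a, Im a, Re b, Im b) from the Kerr solver above."""
    p_a = x[0] ** 2 + x[1] ** 2
    p_b = x[2] ** 2 + x[3] ** 2
    return zeta + r_theta * (p_a + p_b)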
C. Frequency comb generation
We consider the coupled-mode equations with back-scattering for the forward and backward mode amplitudes a⁺_µ, a⁻_µ (having frequencies ω_µ = ω_0 + D_1µ + D_2µ² + D_3µ³ + ...) together with the laser rate equations for the normalized carrier density Ñ_l and the laser field a_l [107], Eqs. (32)-(35). Here ω⁽¹⁾_µ = ω_0 + D_1µ is the first-order estimate of the microresonator mode frequency (the FSR grid), with respect to which the laser-cavity-to-FSR-grid mismatch is defined; Ω_l is the laser cavity frequency; t_s = τ_s/2 is the one-way trip time from the laser to the microresonator, defining the locking phase; κ/2π is the WGM resonance linewidth (Q = ω_0/κ is the loaded Q factor); τ = κt/2 is the loss-normalized time; r·g/κ is a combination of the laser-gain-to-microresonator-nonlinearity ratio and the laser-to-microresonator coupling coefficient [107]; f_l is the normalized pump amplitude; κ̃_l and κ̃_N are the normalized laser optical and carrier loss rates; α_g is the laser Henry factor; ξ_l is the normalized laser cavity detuning from its initial value Ω_l; ζ_µ = 2(ω⁽¹⁾_µ − ω_µ)/κ is the microresonator effective detuning; β_µ is the normalized forward-backward mode coupling (back-scattering coefficient) for the µ-th mode (equal to the mode splitting in units of κ_0); κ̃_W is the normalized laser-to-microresonator back-coupling rate; and K̃_l is the resonator-to-laser coupling coefficient, which can be defined through the stabilization coefficient. The terms S±_µ in Eqs. (34)-(35) represent the nonlinear sums, including the self- and cross-phase modulation [107,152]. Equation (32) describes the carrier-concentration dynamics, and Eq. (33) describes the field amplitude in the laser. The laser field is normalized to the stationary solution of the feedback-free equation, so that in the stationary regime a_l is close to 1. The last term of Eq. (33) is the sum of the fields coming from the WGM microresonator (the backward wave); in practice, this sum is truncated at offsets of a few laser cavity linewidths κ_l/2π to avoid modeling redundant fast-oscillating terms. The second pair, Eqs. (34)-(35), describes the WGM field in the high-finesse limit [152] [note the δ-symbol in the pumping term of Eq. (34)]. The terms with β stand for the forward-backward mode coupling; the backward wave, Eq. (35), is excited only through this term. The forward wave has two pumps: the backward wave (the iβ_µa⁻_µ term) and the laser (the last term). In Ref. [107], the Lugiato-Lefever equation (LLE) was obtained. The main feature here is that the feedback amplitude is written as if gathered from a point rotating around the microresonator. This is because the symmetry of the microresonator is broken by the introduction of the coupling element, which fixes the origin of the azimuthal angle at the touching point: it is where the field going to the laser originates in the laboratory frame, and the rotation arises because the LLE is usually written in the rotating frame. Equations (32)-(35) can easily be expanded to the case of several lasers by treating the index l as enumerating the lasers and adding a summation over it in Eq. (34).
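As a minimal illustration of the solitonic states this system is driven into, the sketch below integrates a fixed-pump Lugiato-Lefever equation, i.e. Eqs. (34)-(35) stripped of back-scattering and of the laser rate equations (32)-(33). The normalization and parameter values are generic soliton-supporting choices, not taken from Ref. [107], and the sech seed is an approximate ansatz that relaxes to the true soliton.

```python
import numpy as np

# Split-step integrator for the fixed-pump LLE
#   da/dtau = -(1 + i*zeta0)*a + i*d2*d^2a/dphi^2 + i*|a|^2*a + f
# with anomalous GVD (d2 > 0).

N = 512                                   # azimuthal grid points / modes
phi = np.linspace(-np.pi, np.pi, N, endpoint=False)
mu = np.fft.fftfreq(N, d=2 * np.pi / N) * 2 * np.pi  # integer mode numbers

f, zeta0, d2 = 2.0, 4.0, 0.02             # pump, detuning, dispersion
dt = 1e-3

# Seed: low-amplitude cw background plus an approximate sech soliton.
a = f / (1 + 1j * zeta0) \
    + np.sqrt(2 * zeta0) / np.cosh(np.sqrt(zeta0 / d2) * phi)

L = -(1 + 1j * zeta0) - 1j * d2 * mu**2   # linear operator, Fourier domain
expL = np.exp(L * dt)
for _ in range(100_000):
    a = np.fft.ifft(expL * np.fft.fft(a))        # exact linear half-step
    a += dt * (1j * np.abs(a) ** 2 * a + f)      # Kerr + pump (Euler step)

# np.abs(a)**2 now holds the intracavity waveform; a single bright soliton
# on a cw background is the expected stationary state for these parameters.
```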
The generation of dissipative Kerr solitons (at anomalous GVD) and platicons (at normal GVD) was demonstrated numerically for the self-injection-locked pump [107]. Different regimes exist for different combinations of the locking phase (laser-microresonator round-trip time), back-scattering coefficient, and pump power (see Fig. 15 and Fig. 16). Generation of both types of solitonic pulses was shown to be possible in a certain range of the locking phase and becomes less stable at high pump powers. Generation of dissipative Kerr solitons was not very sensitive to the normalized back-scattering β, while for platicon generation this is a key parameter; the threshold value of the back-scattering coefficient was found to grow with the pump power. Some nontrivial dynamics, such as drift and breathing of the self-injection-locked platicons, were revealed. Self-injection-locked solitons were demonstrated in Refs. [66,86,87,140,157] and platicons in Refs. [90,141].
D. Multilaser locking
Though most studies have been performed to lock a single laser to one or more intrinsic modes of a high-Q microresonator, such a scheme cannot provide a multi-frequency pump with controllable detuning of each mode line. A multi-frequency pump can be realized by coupling two or more diode lasers to different resonances of a single microresonator with nonlinearity (see Fig. 17). This phenomenon is called "multilaser self-injection locking" or "dual-self-injection-locking" (in the case of two lasers). Driving a single microresonator with two pump lasers features a number of advantages.

Figure 17. Dual-self-injection-locking concept. Two laser diodes are coupled to an integrated high-Q microresonator. Both lasers are simultaneously self-injection-locked to different frequency modes of the microresonator, resulting in a stable and narrow-linewidth bichromatic output. This figure is taken from Ref. [158].
Dual-self-injection-locking is more complicated than the single-laser case because of the nonlinear interactions between the lasers inside the microresonator. The reason for this interaction is that the resonance shift caused by one pump applies to all microresonator modes: it shifts the other laser's resonance, changing the frequency of the back-scattered waves and the self-injection locking dynamics. The theoretical description of multilaser self-injection locking can be started from Eqs. (32)-(35), expanded to several lasers as described above. A semi-analytical study was performed in Ref. [158] for two lasers locked to two separate modes. The solution was developed analogously to the single-laser case (see Ref. [140] and Section III A), but with the number of equations and detunings doubled. Exactly the form of Eq. (23) was obtained for the two laser detunings ξ̄± and the two generation detunings ζ̄±. The equations for the nonlinear shifts δζ± and δβ± were found to be coupled [Eqs. (37); see the Supplemental Materials of Ref. [158]]. Equations (37) were solved numerically, and the values of the nonlinear shifts were determined. As a result, a 4D dual-self-injection-locking tuning surface can be obtained, as shown in Fig. 18. The (ξ+, ζ+) view in Fig. 18b is shown to simplify the interpretation of this surface. The black curve is obtained for f+ = f− = 0.05 and approaches the weak-pump limit governed by the single-laser linear self-injection locking theory, described by Eq. (23) with ζ̄± → ζ± and Γ̄± → Γ±. In this limit, the two lasers do not affect each other. At large detuning from the microresonator resonances, the lasers are in the free-running regime, so ζ+ = ξ+ or ζ− = ξ−. Close to resonance, self-injection locking ensues, characterized by the plateau ∂ζ±/∂ξ± ≪ 1 [104].
The colored curves in Fig. 18b correspond to pump amplitudes f+ = f− = 0.5, where nonlinear effects are significant. The locking regions (the tuning curves' plateaus ∂ζ+/∂ξ+ ≪ 1) are shifted to lower frequency, resulting from the red shift of the microresonator resonances caused by the nonlinearity. The overall shift is due to the nonlinearity of the "+" laser. The nonlinear effect of the "−" laser is manifested by additional surface shifts for different ζ− values, shown by different hues. When this laser is closer to resonance (lighter hues), the power inside the microresonator is higher, so the shift is more significant. The cross-influence of the lasers is also evident in the (ζ−, ζ+) view (Fig. 18c), manifesting as a red shift of the high-color-gradient region (corresponding to the locking regime of the second laser, ∂ζ−/∂ξ− ≪ 1) in the central region of the plot. The light-green and aqua colors in Fig. 18c show the locking regions of the two lasers, whose boundaries are defined by ∂ζ±/∂ξ± = ∞. Their intersection determines the dual-self-injection-locking region, displayed by the black and blue areas in Fig. 18d for the weak and strong pumps, respectively. The square shape of the black area is a manifestation of the lasers' mutual independence in the linear case. The blue area's overall red shift and the red-sided curvature of its boundaries represent the self-phase and cross-phase modulation of the fields inside the microresonator by the two lasers.
Recently, several experiments have been conducted to investigate the dual-self-injection-locking phenomenon. Reference [99] shows the simultaneous locking of two vertical-cavity surface-emitting lasers (VCSELs) to a single WGM microresonator made of Hydex glass. A two-port configuration was used, in which 90% of the radiation from the drop port is redirected back to the VCSEL outputs. Such feedback allowed significant compression of the linewidths of both lasers, from 3.5 and 5 MHz to 20.9 and 24.1 kHz, respectively. Frequency noise suppression of more than 60 dB was demonstrated in this configuration for Fourier offset frequencies of 100 kHz and higher.
Another application of dual-self-injection-locking is the realization of all-optical dissipative time crystals (DTCs), as shown in Ref. [171]. In this work, DTCs formed in Kerr-nonlinear microresonators are presented. Two independent CW DFB diode lasers were locked to different eigenmodes of a MgF2 microresonator with an FSR of 32.8 GHz and a loaded resonance bandwidth of 200 kHz. It was demonstrated that two locked lasers, with normalized powers above (f > 1) and below (f < 1) the nonlinear threshold for the one and the other laser, respectively, and with frequency spacings in the locked regime of an arbitrary number M of FSRs, allow the generation of dissipative soliton time crystals with periodicity T = (m/M)T_R, where m is an integer and T_R is the round-trip time. Although the temporal structure of the time crystals was not obtained experimentally, the agreement of the measured spectrum with theoretical simulation leads to the conclusion that DTCs were achieved. These DTCs are stable over hundreds of cavity-photon lifetimes, which makes their lifetime much longer than that of DTCs in other physical systems. Such an approach opens new horizons for investigating all-optical DTCs at room temperature in dissipative Kerr systems.
The first experimental investigation of simultaneous locking of two multi-frequency FP lasers to a single integrated Si3N4 microresonator with 1 THz FSR, coupled to a single waveguide, was presented in Ref. [158]. The authors experimentally investigated three possible configurations of dual-self-injection-locking: 1. both lasers locked to different modes of the same microresonator mode family with a frequency spacing of 2 FSR; 2. lasers locked to different modes of different mode families; 3. both lasers locked to the same eigenmode of the microresonator. In all cases, the spectral signatures of dual-self-injection-locking, such as spectral collapse, phase-noise suppression, and linewidth compression of both lasers, were observed by heterodyne detection. The laser linewidths obtained by Lorentzian approximation were of the order of several kHz for both locked lasers. Phase-noise measurements showed that the reference phase-noise level was reached at offset frequencies above 10^4 Hz. The spectrograms showed that the lasers experienced nonlinear interaction inside the microresonator, and the locking range of the dual-self-injection-locking was estimated. The spectrogram measurements for the case of locking to different modes of different mode families are presented in Fig. 19. The sharp dips in Fig. 19f at about 1-2 ms and 10-11 ms correspond to the nonlinear interaction between the lasers when the dual-self-injection-locking regime is achieved. For the locking of two lasers to the same eigenmode, the authors also observed coherent addition of the laser output signals, which can be a basis for developing high-power narrow-linewidth compact laser sources.
IV. SPECIAL REGIMES AND NOVEL APPLICATIONS
In recent years, a number of interesting and promising applications of microresonator-stabilized lasers have been realized. First, an effective pump source for frequency comb generation [85,173-175] has been demonstrated. The implementation of microresonators and compact laser diodes makes it possible to decrease the size and weight of comb generators in comparison with conventional mode-locked systems. Moreover, the generation of coherent, solitonic frequency combs in the form of bright-soliton [66,86,87,140,157] or platicon [90,141] trains has been shown. Such coherent optical frequency combs are actively used in different areas of science and technology, such as high-precision metrology and optical clocks [132,176], high-resolution spectroscopy [177,178], ultrafast optical ranging [179,180], astrophysics [181,182], and high-volume telecommunication systems [183-185]. Besides, a high-efficiency compact hybrid dual-comb system, important and necessary for spectroscopy, has been developed [153]. Self-injection locking makes it possible to compensate the thermal effects inevitable in practical microresonator systems [126,145,146,149] and to facilitate access to soliton states. For the generation of dark pulses and platicons, self-injection locking can simplify the experimental setup by avoiding complex multi-microresonator systems [144,186] and pump modulation schemes [143,187]. Self-injection locking also provides possibilities for the realization of other nonlinear processes, e.g., SHG [188].
Besides, it has been shown recently that self-injection locking can be applied to a laser in the gain-switching regime [172], where the laser current is rapidly modulated above and below the lasing threshold. As a result, a frequency comb with a line spacing equal to the modulation frequency is formed (see Fig. 20). In this case, self-injection locking allows stabilizing and narrowing every comb line to a sub-100-Hz limit, as in plain self-injection locking. This was also demonstrated in an add-drop setup with a Si3N4 on-chip microresonator, with the linewidth narrowed to 4 kHz [97,189]. Interestingly, the line spacing of such stabilized combs can be tuned by changing the modulation frequency; such high-contrast, electrically tuned optical frequency combs with line spacings from 10 kHz to 10 GHz were investigated in Ref. [172]. Adjusting the modulation voltage can be used to control the width of the frequency comb in terms of the number of spectral lines. The unique combination of a gain-switched laser with self-injection locking enables a broad, flat comb spectrum with sub-kilohertz linewidths.
A. Brief introduction to integrated photonics
With the advances of integrated photonics, particularly the development of low-loss photonic integrated circuits (PICs), high-Q optical microresonators can now be realized on silicon chips. Integrated material platforms, which allow the fabrication of PIC-based microresonators using CMOS foundry processes, have been widely explored for linear and nonlinear photonics, including frequency comb generation [174], supercontinuum generation [85], wideband frequency translation [202], and Brillouin lasers [203]. While silicon-on-insulator (SOI) wafers, ubiquitously used for microelectronic circuits, have also been the mainstream integrated platform for photonics, it is well known that silicon has intrinsic material limitations, such as two-photon absorption in the telecommunication bands, that preclude high power handling and ultralow optical loss. In the past decade, a myriad of material platforms have emerged to complement or even replace silicon, particularly for nonlinear photonic applications [85,89]. In addition to the optical nonlinearity of the material itself, the optical loss in the waveguide (inversely proportional to the microresonator Q factor) is a critical figure of merit when comparing different platforms. The optical loss depends not only on material properties such as intrinsic optical absorption, but also on fabrication processes. Although the Q factors of PIC-based microresonators currently remain orders of magnitude lower than those obtained in the best bulk fluoride crystalline WGM resonators and suspended SiO2 microdisks [77], the main interest and focus have been put on integrated platforms, with a continuous effort to reduce optical loss.
The key advantages of PIC-based microresonators over bulk fluoride crystalline WGM resonators and SiO2 microdisks are: • Fabrication of microresonators can employ mature CMOS technology that has been developed over decades for microelectronic circuits. CMOS fabrication allows scalable manufacturing of integrated devices with high volume and low cost.
• Both microresonators and bus waveguides can be fabricated together directly on the same chip; thus the coupling between them is much more robust compared with the case where tapered fibers or prisms are used to couple light into fluoride crystalline WGM resonators and suspended SiO2 microdisks. In addition, the coupling strength between the microresonator and the bus waveguide, as well as the back-reflection strength with loop mirrors in the drop port [93], is lithographically controlled with high precision.
• Integrated microresonators do not have to be perfectly circular, as long as the shape forms a closed loop. For example, microresonators with microwave-rate FSR (e.g. less than 20 GHz) can be designed and fabricated as optimized racetrack microresonators [204,205], or even in spiral shapes to achieve extremely low FSRs down to the RF domain, e.g. 135 MHz in Ref. [206]. This design freedom allows a significant reduction of the device footprint on chip.
• Heterogeneous integration enables co-fabrication of the laser and the external microresonator on the same monolithic substrate [157], offering critical robustness and stability to the laser-microresonator coupled system. In addition, tuning elements can be simultaneously implemented, allowing fast (megahertz to gigahertz rate) resonance frequency actuation via e.g. piezoelectric MEMS [92] or electro-optic lithium niobate [207].
All these features highlight that bridging with integrated photonics significantly broadens the technological scope and maturity of high-performance, self-injection-locked lasers and microcombs with large-volume and low-cost manufacturing.
Among all material platforms used in integrated photonics so far, Si3N4 [78,79,81,208-212] has become the leading platform for applications that rely critically on ultralow loss [213,214]. Silicon nitride has a long history as a CMOS material for diffusion barriers and etch masks in microelectronics. Its first use in integrated photonics dates back to 1987 [215] or even earlier. Silicon nitride has many properties that make it suitable for building ultralow-loss optical waveguides and high-Q photonic microresonators. Its refractive index n_0 ≈ 2 enables strip waveguides with tight optical confinement in SiO2 cladding. Compared with silicon, the smaller refractive-index difference between the Si3N4 waveguide core and the SiO2 cladding reduces the scattering losses induced by interface roughness and facilitates fiber-chip coupling with reduced mode mismatch. Amorphous Si3N4 has a wide transparency window from the visible to the mid-infrared and a large bandgap of 5 eV that makes Si3N4 immune to two-photon absorption in the telecommunication band around 1550 nm. In addition, Si3N4 has a dominant Kerr nonlinearity, nearly an order of magnitude larger than that of SiO2, but simultaneously negligible Raman and Brillouin nonlinearities, which would otherwise limit the maximum allowed optical power [216]. All these features make Si3N4 an excellent integrated platform for realizing laser self-injection locking with chip devices.
B. Experimental progress
Self-injection locking of a semiconductor laser to a chip-based Si3N4 microresonator, without an optical isolator in between, can be realized via hybrid or heterogeneous integration [83,86,90,140,157]. Here, "hybrid integration" refers to the approach in which a semiconductor laser diode or a gain chip is seamlessly edge-coupled to a Si3N4 chip, such that light is coupled from the laser into a Si3N4 microresonator, as shown in Figs. 21(a, b). This approach has been widely used to build narrow-linewidth semiconductor chip lasers [217,218] in which the CW laser emission frequency is thermally controlled. Meanwhile, once the circulating laser power inside the external Si3N4 microresonator exceeds a certain threshold, nonlinear parametric oscillation and soliton formation occur in the presence of laser self-injection locking [66,127]. Photodetection of the generated soliton stream produces an ultralow-noise microwave carrier at the soliton repetition rate [127].
In addition, when using thin-core, ultrahigh-Q Si3N4 microresonators, laser linewidths down to the hertz level have been achieved [90,94,206]. In this case, the laser noise or linewidth is ultimately limited by the thermo-refractive noise (TRN) of the Si3N4 microresonator [60] rather than by the microresonator Q. Using ultrahigh-Q, spiral-shaped Si3N4 microresonators with 135 MHz FSR, a 40 mHz Lorentzian laser linewidth has been achieved in Ref. [206].
To build low-noise chip-scale lasers, employing laser self-injection locking in a Si3N4 chip device enables a soliton microcomb module in a highly compact form, consisting only of a laser chip or a laser diode with a high-Q Si3N4 chip. As shown in Fig. 21b, a multi-longitudinal-mode laser diode chip with typical output power exceeding 100 mW is edge-coupled to a Si3N4 chip [86]. By current tuning of the laser diode such that the laser emission frequency matches a high-Q resonance of the Si3N4 microresonator, light is coupled from the diode into the microresonator. This triggers laser self-injection locking and transforms the free-running, megahertz-linewidth, multi-longitudinal-mode laser diode into a single-mode laser with significantly reduced Lorentzian linewidth [86,104]. Meanwhile, as shown in Fig. 21c, it is observed that resonances with prominent mode splitting, indicating back-scattering [53], can often be advantageous for laser self-injection locking. In addition, the intensity of back-reflected light from the Si3N4 microresonator can be varied and controlled by adding a loop mirror in the drop port [93], as shown in Fig. 21d. With Si3N4 microresonators of Q factors exceeding 10^7 and anomalous GVD, the soliton generation threshold power of a few tens of milliwatts can easily be reached in this scheme [219]. Via laser current tuning, a soliton microcomb can be electrically initiated, and its state can be controlled and switched from chaotic states to breathing-soliton states, and finally to multi-soliton and single-soliton states [86,140].

Figure 21. Ultralow-noise lasers and turnkey soliton microcombs using laser self-injection locking to chip-based Si3N4 microresonators. a. Schematic of a tunable, self-injection-locked laser [92]. A DFB laser chip is butt-coupled to a chip-based Si3N4 microresonator. When the laser emission is coupled into the microresonator, back-reflected light from the microresonator into the DFB laser can trigger laser self-injection locking, which changes the laser dynamics. As a result, the DFB laser frequency is locked to a resonance mode of the microresonator, and its linewidth is significantly reduced. Piezoelectric actuators can be integrated directly on the Si3N4 microresonator, offering fast frequency tuning. b. Close-range photo of a self-injection-locked laser consisting of a laser diode chip edge-coupled to a Si3N4 chip [86]. c. Experimental results of the laser frequency locked to different microresonator resonances [86]. The top panel shows the transmission spectrum of a Si3N4 microresonator of 1.02 THz FSR, where the fundamental TE mode family is marked with red circles. The bottom panel shows the laser emission spectrum, as well as a typical split resonance that triggers laser self-injection locking. d. Increasing back-reflection, to enhance laser self-injection locking, can be realized using a loop mirror in the microresonator drop port [93]. e. Images of a self-injection-locked soliton microcomb module in a compact butterfly package [87]. f. Demonstration of turnkey operation in the soliton module [87]. The top panel shows the measured comb power versus time upon power on and off. The bottom panel shows the spectrogram of the measured soliton repetition rate during power switching. Images are taken from Ref. [92] (Panel a), Ref. [86] (Panels b, c), Ref. [93] (Panel d), and Ref. [87] (Panels e, f).
In a conventional experimental setup, the light emitted from a laser passes through an isolator and is then coupled into a Si3N4 microresonator. Soliton generation in Si3N4 requires complex initiation techniques in which the laser quickly scans across a resonance from the blue-detuned to the red-detuned side [149,156]. Meanwhile, accessing the single-soliton state requires delicate switching and feedback control. This is due to the thermo-optic effect in the microresonator: the resonance experiences a significant shift under the drastic intracavity power variation during soliton initiation and switching [220-223]. It has been observed that nonlinear laser self-injection locking can overcome this issue and enables soliton generation via "turnkey" operation [87]. As shown in Fig. 21e, once the device is carefully assembled, packaged and stabilized, a set of optimized parameters (e.g. laser current and laser-chip gap distance) that allows the device to generate solitons is searched for and found in a first trial. Later, as long as the setup is configured with this set of parameters, upon laser power-on the same soliton state is immediately generated without any parameter tuning [87], as shown in Fig. 21f. The turnkey operation results from the ultrafast (gigahertz-level bandwidth) feedback between the external photonic microresonator and the laser cavity, which is much faster than the thermo-optic response (typically kilohertz to tens of kilohertz). Therefore, this feature eliminates complex soliton initiation and electronic control, and offers a compact solution for field-deployable soliton microcomb modules.
In Ref. [140], a set of practical parameters was studied numerically and experimentally, leading to the following key conclusions. First, the effective detuning ζ predominantly locks into the red-detuned region, where solitons can be initiated. Second, in the self-injection locking regime, soliton generation can be observed for both directions of the laser current sweep, as shown in Figs. 22(c, e). This is impossible in the conventional case where an isolator and an independent laser are used. Also, larger values of the detuning ζ can be obtained in the locking regime using backward tuning, as shown in Figs. 22(c, e); meanwhile, the span of detuning in the locked state can be shorter for backward tuning than for forward tuning. Third, while decreasing the diode current (i.e. backward tuning, increasing the free-running laser frequency), as shown in Fig. 22e, the detuning ζ can grow, which is counter-intuitive. Moreover, such non-monotonic behaviour of ζ can take place in the soliton existence domain and affect the soliton dynamics. As shown in Ref. [156], decreasing the detuning value in the soliton regime can trigger switching to different soliton states. In addition, besides bright dissipative solitons, it has been demonstrated that laser self-injection locking also allows dark-pulse or platicon generation [90,141,224,225].
C. Heterogeneous integration advances laser self-injection locking
In parallel with the study of laser self-injection locking dynamics, there are equally important advances in the photonic integration of laser self-injection locking for even more compact sizes and extra functions. The first example is the heterogeneous integration of high-power, narrow-linewidth InP/Si semiconductor lasers with ultralow-loss Si3N4 microresonators on a monolithic silicon substrate [157,227], as shown in Fig. 23a. Heterogeneous integration [228-230] can further improve device stability and performance, and allows high-volume manufacturing. Translating laser self-injection locking from hybrid to heterogeneous integration can enable thousands of narrow-linewidth lasers and soliton microcombs to be produced from a single wafer using CMOS-compatible techniques and foundry pilot lines.
Figure 22. The nonlinear tuning curve ζ(ξ) is compared with the linear tuning curve ζ = ξ (thin black lines in (c, e)). While tuning the laser, the actual effective detuning ζ and the intracavity power |a(ξ)|² follow the red or blue lines, with jumps due to the multistability of the tuning curve. The triangular nonlinear resonance curve (thick black in (b)) is deformed when translated from the ζ frame to the detuning ξ frame (d, f), with the corresponding tuning curve ζ(ξ) (c, e). The width of the locked state is larger for the forward scan, but the backward scan can provide larger detuning ζ, which is crucial for soliton generation. This figure is taken from Ref. [140].

In Ref. [157], a single soliton microcomb module occupying a footprint of less than 2 mm² has been demonstrated. This module consists of an InP/Si DFB laser, a thermo-optic resistive heater on silicon, and a high-Q Si3N4 nonlinear microresonator. It is worth mentioning that the thermo-optic resistive heater is used to tune and stabilize the locking phase, while in the case of hybrid integration the gap distance between the laser chip and the Si3N4 chip is mechanically controlled for phase tuning. As shown in Fig. 23b, these elements are combined on a monolithic substrate by leveraging multilayer heterogeneous integration [230] through sequential wafer bonding of an SOI wafer and an InP multiple-quantum-well epitaxial wafer to a patterned and planarized Si3N4 substrate [212]. The CW laser output from the DFB laser passes through the thermo-optic phase tuner and couples into the high-Q Si3N4 microresonator where solitons are formed. Laser self-injection locking is optimized by electric control of the microresonator-laser relative locking phase using the thermo-optic phase tuner. The entire device outputs a CW laser with more than 1000-fold linewidth reduction and a single soliton with a 100 GHz repetition rate [157]. Furthermore, hertz-level instantaneous laser linewidth can be achieved using thin-core Si3N4 microresonators with higher Q and lower FSR (down to 5 GHz) [227].
The second example is the monolithic integration of piezoelectric thin films for fast frequency modulation. For many metrology applications of lasers and optical frequency combs, frequency agility (the ability to achieve megahertz-to-gigahertz frequency-actuation bandwidth) is critical. In the case of laser self-injection locking, instead of direct modulation of the laser current, laser frequency actuation can be realized by actuating the external microresonator, which drags the laser frequency along. In integrated photonics, metallic heaters deposited and patterned directly on microresonators are commonly used for frequency shifting [231,232] and phase modulation [233], employing the thermo-optic effect. However, heaters have several disadvantages, including kilohertz-limited modulation bandwidth and strong cross-talk. Heaters also have low tuning efficiency, as the thermo-optic coefficient [234] of Si3N4, dn_mat/dT = 2.5 × 10^{-5} K^{-1}, is nearly one order of magnitude smaller than that of silicon [235].
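An order-of-magnitude sketch of the thermo-optic frequency shift implied by this coefficient (neglecting the confinement factor and thermal expansion, so the result reproduces the tens-of-megahertz-per-0.01-°C scale discussed in the Outlook only to within a factor of about two):

```python
# Thermo-optic frequency shift of a Si3N4 microresonator resonance:
#   d(nu)/nu ~ (dn/dT) * dT / n_eff  (confinement factor ignored).

nu0 = 193.4e12        # Hz at 1550 nm
dn_dT = 2.5e-5        # 1/K, Si3N4 thermo-optic coefficient [234]
n_eff = 2.0           # effective index (illustrative assumption)

drift = nu0 * dn_dT / n_eff * 0.01   # shift for a 0.01 degC change
print(f"{drift/1e6:.0f} MHz per 0.01 degC")   # ~24 MHz
```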
Figure 23. a. Photographs showing a completed 100-mm-diameter wafer and a zoomed-in view of chips and elements [157]. A Si3N4 microring resonator and its interface with silicon are shown. b. Schematic of laser soliton microcomb devices consisting of DFB lasers, phase tuners, and high-Q microresonators on a monolithic substrate. The bottom panel shows the simplified device cross-section. The laser is based on InP/Si, and the microresonator is based on Si3N4. The intermediate silicon layer with two etch steps is used to deliver light from the InP/Si layer to the Si3N4 layer. c. False-coloured scanning electron microscope (SEM) image of the Si3N4/AlN device cross-section, showing Al (yellow), AlN (green), Mo (red), Si3N4 (blue) and the optical mode (rainbow) [226]. d. The left panel shows a false-coloured SEM image of the sample cross-section with a PZT actuator integrated on the Si3N4 photonic circuit. The piezoelectric actuator is composed of Pt (yellow) and PZT (green) layers on top of Si3N4 (blue) buried in SiO2 cladding [92]. The right panel shows the optical micrograph of a disk-shaped PZT actuator on top of a Si3N4 microring with 100 GHz FSR [92]. Images are taken from Ref. [157] (Panels a, b), Ref. [226] (Panel c), and Ref. [92] (Panel d).

One approach to achieving high-speed on-chip actuators on Si3N4 microresonators is the monolithic integration of piezoelectric actuators [92,226,236] on the Si3N4 PIC. One suitable piezoelectric material is aluminium nitride (AlN), widely used in commercial micro-electro-mechanical-systems (MEMS) technology for wireless communications. Figures 21a and 23c show the top-view optical microscope image and the scanning electron microscope (SEM) image of the cross-section. The piezoelectric actuators [226,236] are made from polycrystalline AlN as the main piezoelectric material, molybdenum (Mo) as the bottom electrode (ground) and as the substrate for growing polycrystalline AlN, and aluminium (Al) as the top electrode. The piezoelectric control employs the stress-optic effect [237,238] for actuation speeds up to a megahertz [92,226], and bulk acoustic waves for megahertz-to-gigahertz actuation speeds [236]. Figure 21a shows the principle and structure of self-injection-locked, low-noise, frequency-agile lasers [92]. By fast piezoelectric actuation of the Si3N4 microresonator, the laser locked to the microresonator inherits the frequency actuation and can achieve a flat actuation response up to 10 MHz with optimized designs and mechanical damping. This low-noise, frequency-agile laser features a gigahertz frequency tuning range and megahertz tuning speed. Similarly, electro-optic modulation of the laser frequency can also be implemented on the heterogeneous Si3N4-LiNbO3 platform with laser self-injection locking [207]. The advantage of electro-optic modulation is the wider modulation bandwidth and smoother modulation response enabled by LiNbO3, however at the cost of a more complex fabrication process and a lower device Q factor.
Besides high speed, other key features of such piezoelectric AlN actuators are their high linearity, low hold-on electric power consumption, and preservation of the ultralow optical loss of the underlying Si3N4 PIC [226]. The main disadvantages are the uneven actuation response due to the presence of mechanical modes and the low stress-optic tuning efficiency (a few tens of megahertz per volt) [226]. The response flatness can be improved by damping the mechanical modes [92], and the stress-optic tuning efficiency can be improved by using ferroelectric lead zirconate titanate (PZT) [92,239,240] and by optimizing the actuator geometry [241]. For example, Fig. 23d, taken from Ref. [92], shows the top-view optical microscope image and the SEM cross-section image of a PZT actuator integrated on Si3N4.
VI. OUTLOOK
Despite all the advances and milestones mentioned above, there are still many open questions and targets for laser self-injection locking. Below we outline a few topics.
• Exploring new physics of nonlinear laser self-injection locking dynamics. Currently, Si3N4 is predominantly used as the material for external microresonators in integrated photonics. However, as mentioned earlier, Si3N4 has a dominant Kerr nonlinearity but simultaneously weak Raman and Brillouin nonlinearities. Therefore, if materials other than Si3N4 are used, novel dynamics can be observed in the presence of other optical nonlinearities such as the Raman effect, Brillouin scattering and photorefraction. How do these optical nonlinearities affect self-injection locking?
• Laser self-injection locking to complex microresonator structures. So far, nearly all reported works have used a single laser locked to a single microresonator. Could the nonlinear locking dynamics be qualitatively different with coupled microresonators? For example, it has recently been reported that a dual coupled-microresonator system can significantly boost the CW-to-soliton power conversion efficiency [242-244]. Can this scheme be used with laser self-injection locking and simultaneously enable "turnkey operation" [87]?
• Improving long-term stability and overcoming frequency drift of self-injection-locked lasers. Despite promising progress in achieving ultralow-noise lasers with hertz-level linewidth, the long-term frequency stability of self-injection-locked lasers has not yet reached a level comparable to that of fiber lasers without active locking (e.g. using a PDH lock). The laser frequency stability is ultimately limited by the thermal stability of the external microresonator. For example, Si3N4 has a thermo-optic coefficient of dn_mat/dT = 2.5 × 10⁻⁵ K⁻¹; thus a 0.01 °C temperature change induces a laser frequency drift of around 50 MHz at 1550 nm wavelength, much larger than the laser linewidth. Thermal stabilization and isolation of high-Q external microresonators is becoming a central issue, particularly for chip-based devices where the laser chip and the external microresonator chip are closely packaged together. Therefore, an open question is: how can this long-term frequency drift be overcome without significantly increasing the size, weight and power consumption of the chip module?

There are many open questions to be studied and answered on laser self-injection locking. It is encouraging to see that this field has become an active area of optics (particularly in integrated photonics), and we are confident that future achievements will bring laser self-injection locking to next-generation chip-scale lasers and frequency combs.
ADDITIONAL INFORMATION
This review article is submitted to Frontiers of Physics, Special Topic on "Embracing the Quantum Era: Celebrating the 5th Anniversary of Shenzhen Institute for Quantum Science and Engineering" (Editors: Dapeng Yu, Dawei Lu and Zhimin Liao).
ACKNOWLEDGMENTS
The authors are grateful for the fruitful discussion and collaboration with colleagues at EPFL, UCSB, Caltech, Purdue, and OEWaves, during and especially prior to the preparation of this review. The results presented in section 3.
"Physics",
"Engineering"
] |
Studying the Dynamic Properties of a Distributed Thermomechanical Controlled Plant with Intrinsic Feedback. II
The dynamic properties of the response of a one-dimensional elastic mechanical system to an external mechanical action are examined. Transfer functions are calculated in two channels: from the force action at one of the system boundaries to the displacement of the medium sections and to the temperature. The asymptotic behavior of the transfer function is analyzed for each channel in the neighborhood of the origin on the complex plane. The case of no heat exchange between the system and the environment is considered separately.
INTRODUCTION
Thermomechanical systems with mechanical vibrations and heat transfer processes are widely used in modern engineering. Therefore, it is necessary to study the dynamic properties of such systems in mathematical terms and develop control methods for them.
The literature on the thermoelasticity phenomenon is quite extensive. The early works [1][2][3] were followed by [4], where thermoelasticity was investigated as part of general elasticity effects. In the recent literature, we mention the publications [5][6][7] devoted to various properties of thermoelastic media. The book [8] developed a modern theory of thermomechanics of elastoplastic deformation. A coupled dynamic thermoelasticity problem for a one-dimensional medium was stated in [9].
In this paper, we analyze the dynamic properties of a one-dimensional distributed elastic thermomechanical system. The mathematical model of processes in such a system is based on the classical work [4]. In contrast to [10], the system is subjected to a mechanical (force) action at one of its boundaries instead of a thermal action. The system dynamics equations have the form (1.1), where t ≥ 0, 0 ≤ x < l, and a, c, β, and β_therm are positive constants. (For details, we refer, e.g., to [4].) In these equations, ϕ(x)(t) denotes the displacement of the section located at a distance l − x from the point of application of the force action, and θ(x)(t) is the temperature of the medium in the section x.
The initial conditions with respect to the time variable are assumed to be zero. The boundary conditions are as follows: (a) for the function ϕ, condition (1.2), where u is the control action (with the physical sense of a mechanical (force) action applied to the system); (b) for the function θ, condition (1.3), where α and λ are positive constants.
CALCULATION OF THE VECTOR TRANSFER FUNCTION
We perform the Laplace transform of Eqs. (1.1) with the boundary conditions (1.2), (1.3) to obtain the system of ordinary differential equations (2.1), where κ is a nonnegative parameter characterizing the heat exchange with the environment. In this system, the pair of unknown functions consists of the Laplace images of the desired functions (ϕ(x)(t), θ(x)(t)).
Solving the boundary-value problem (2.1)-(2.3) yields the following expressions for the transfer functions in the channels u → ϕ(x) and u → θ(x).

Theorem 1. The transfer functions of the system in the channels u → ϕ(x) and u → θ(x) are given by (2.4) and (2.5), respectively. In these formulas, Δ_A = a11·a22 − a12·a21.

ASYMPTOTIC BEHAVIOR OF TRANSFER FUNCTIONS AS p → 0

We study the dynamic properties of the system, beginning with the asymptotic behavior of its transfer functions in the neighborhood of the origin on the complex plane C.

Theorem 2. In the neighborhood of the origin on the plane C, the transfer function of the channel u → ϕ(x) can be represented as (3.1), and the transfer function of the channel u → θ(x) as (3.2). Here, O(p) denotes a function f(p) (p ∈ C) such that the ratio f(p)/p remains bounded as p → 0.
Thus, the system has the double integrating property in the channel u → ϕ(x) and the differentiating property in the channel u → θ(x).
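Since the explicit transfer functions (2.4) and (2.5) are not reproduced above, the following sketch only illustrates, on hypothetical stand-in transfer functions sharing the asymptotic structure of Theorem 2, how the double-integrating and differentiating properties can be read off from a series expansion at the origin.

```python
import sympy as sp

p = sp.symbols('p')
k1, k2 = sp.symbols('k1 k2', positive=True)

# Hypothetical stand-ins (NOT the paper's formulas (2.4)-(2.5)) that share
# the asymptotics stated in Theorem 2:
W_phi = k1 / (p**2 * (1 + p))    # channel u -> phi(x): double integrator
W_theta = k2 * p / (1 + p)       # channel u -> theta(x): differentiator

# Leading behavior in the neighborhood of the origin of the complex plane:
print(sp.series(W_phi, p, 0, 1))    # k1/p**2 - k1/p + k1 + O(p): pole of order 2
print(sp.series(W_theta, p, 0, 3))  # k2*p - k2*p**2 + O(p**3): zero at p = 0
```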
Remark. According to (3.2), the asymptotic formula for the transfer function as p → 0 includes a ratio involving the parameter κ. Therefore, the case κ = 0 (no heat exchange with the environment) should be considered separately; see the next section.
THE CASE κ = 0

In this case, the functions a_jk (j, k = 1, 2) take a modified form. Thus, in the case κ = 0, the transfer function of the system in the channel u → θ(x) has a finite nonzero limit as p → 0. This property can be called static.
The proofs of Theorems 1-3 are given in Appendices A-C, respectively.
CONCLUSIONS
As has been demonstrated by this study, the thermomechanical controlled plant subjected to a mechanical (force) external action possesses the following dynamic properties: double integration in the channel from the force action to the displacement of the one-dimensional medium and (but only under heat exchange with the environment) differentiation in the channel from the force action to the temperature.
The resulting conclusions should be taken into account when designing control systems for thermomechanical plants with dynamic properties described by (1.1)-(1.3).
According to the results of [10] and this paper, the intrinsic feedback of the plant (from the displacement of the sections to the temperature) complicates the description of its dynamic properties compared with the case of no feedback, which was investigated in [11].
"Engineering",
"Physics"
] |
Flexible Wireless Wall Temperature Sensor for Unsteady Thermal Field
We present a novel flexible wireless wall temperature sensor with high spatio-temporal resolution and evaluate its performance in an unsteady thermal field. The base part of the sensor is made of thermally stable polyimide and copper films. Using a Si hard mask fabricated by standard lithography and a DRIE process, a 1 mm-sized sensing resistor is sputtered on the copper coil. We enhance the time response of each measurement by reducing the number of frequency sweeping points. Based on a series of error-estimation analyses, the accuracy of the present temperature measurement is shown to be within an acceptable range for most combustion studies. A temperature measurement uncertainty of ±6.4 °C has been achieved with a measurement time interval as small as 2.48 ms.
Introduction
The measurement of wall temperature in unsteady thermal fields is one of the major issues in combustion studies [1]. Contact thermometry using thermocouples has been widely used, but it requires wiring and physical contact, which can easily introduce external disturbances [1,2]. As non-contact thermometry, infrared pyrometry and thermographic phosphors are also available, but existing techniques require optical access, which can be problematic in combustion studies [1-4].
To overcome such limitations, we have proposed a wireless wall temperature sensor based on the TCR (temperature coefficient of resistance) changes of an Au thin film [5,6]. The prototype sensor was successfully fabricated with a standard MEMS process, and the measured resonant frequency shows a quadratic increase in response to an increase of the temperature. However, the spatial resolution of the temperature measurement is as large as the diameter of the sensor coil, i.e., 10 mm, and its temporal resolution has not yet been examined.
In the present study, we propose a novel flexible wireless wall temperature sensor with improved spatio-temporal resolution.
Design of the wireless temperature sensor
The present sensor consists of the following elements: a planar spiral coil, a capacitor, and a temperature-dependent resistor. The sensor coil is inductively coupled with the read-out coil, and the circuit impedance changes in response to the resistance change of the resistor on the sensor. One advantage of this approach is that the sensor consists only of passive devices, without any semiconductors, so that it is applicable to high-temperature applications. Figure 1 shows the equivalent circuit model for the sensor and its coupled read-out circuit under an external driving voltage v_e. Based on the circuit analysis, the phase angle of the read-out circuit impedance ∠Z_e(ω) can be derived as equation (1) [5,6], where L, C, R, and ω represent the inductance, the capacitance, the resistance, and the angular frequency, respectively. Subscripts e and s denote the external read-out circuit and the sensor. R_p is the parasitic resistance due to the sensor coil, and the coupling coefficient k is determined by the self-inductances of the two coils and their mutual inductance M, which is highly dependent on their geometric relationship. The resonant frequency is obtained from the extremum condition of equation (1), and its shift is determined by the sensor resistance and thus by the temperature. For the coupled series LCR circuit, the Q factor depends both on the resistance and on the mutual inductance M. To achieve a higher Q factor, and hence higher accuracy in the resonant frequency, the sensor resistance should be minimized and the mutual inductance maximized within the device constraints. In our previous study [5,6], however, due to the large sheet resistance of the sensor coil with sputtered Au films, the parasitic resistance of the coil itself was large, precluding the integration of a separate sensing resistor. Therefore, the sensor coil itself was used as the sensing resistor, which significantly deteriorates the spatial resolution of the temperature measurement.
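Equation (1) itself is not reproduced above, so the sketch below evaluates the standard reflected-impedance model of an inductively coupled series LCR sensor and locates the resonance from the phase extremum; all component values are hypothetical stand-ins except the 130 Ω sensing resistance and 1 Ω parasitic resistance quoted below.

```python
import numpy as np

# Hypothetical component values (only R_s and R_p are taken from the text).
L_e, L_s = 1e-6, 1e-6            # read-out and sensor coil inductances [H]
C_s = 100e-12                    # sensor capacitance [F]
R_s, R_p = 130.0, 1.0            # sensing resistor and parasitic resistance [ohm]
k = 0.1                          # coupling coefficient
M = k * np.sqrt(L_e * L_s)       # mutual inductance [H]

f = np.linspace(10e6, 25e6, 200001)
w = 2 * np.pi * f

# Standard reflected-impedance model of the coupled series LCR circuit:
Z_sensor = R_s + R_p + 1j * w * L_s + 1.0 / (1j * w * C_s)
Z_e = 1j * w * L_e + (w * M) ** 2 / Z_sensor

phase = np.angle(Z_e)            # the measured quantity, cf. equation (1)
f_res = f[np.argmin(phase)]      # resonance appears as a dip in the phase angle
print(f"resonant frequency ~ {f_res / 1e6:.2f} MHz")
# ~15.9 MHz = 1/(2*pi*sqrt(L_s*C_s)); a change of R_s (i.e., of temperature)
# shifts the phase extremum, which is how the sensor encodes temperature.
```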
In order to suppress the unwanted effect of the parasitic resistance, the thickness of the sensor coil should be on the order of 10 μm. In the present study, we employ a Cu-laminated polyimide film as the sensor substrate. Note that polyimide, which has been widely used for flexible printed circuits (FPC), has properties well suited to combustion fields: high thermal durability up to 400 °C, chemical stability, and flexibility. Figure 3d shows a photograph of the successfully fabricated prototype sensor with a 1 mm sensing resistor. The measured resistance of the sensing resistor and the designed value of the parasitic resistance are 130 Ω and 1 Ω, respectively. Note that the dimension of the resistor can be further reduced to as small as 50 μm for better spatial resolution.
Improvement of temporal resolution
When determining the resonant frequency, the sensor is inductively coupled with the read-out coil and its impedance is measured using a network analyzer. In our previous studies, the frequency sweeping required a long time, so that the average time for one measurement remained on the order of 100 ms. We therefore shorten the measurement time by reducing the number of frequency sweeping points. As shown in figure 4, the measured phase angle data with different numbers of scanning points are fitted with equation (1). With a decreased number of data points, the fitting error on the resonant frequency increases. The fitting error and the measurement time interval are plotted as a function of the number of sweeping points in figure 5. Note that the data measured with 601 points are used as a reference. When the number of sweeping frequencies is decreased to 7 points, the sweep takes approximately 0.6 ms; when the time required for data transfer is included, this corresponds to 2.3 ms. Due to the increase of the fitting error, the uncertainty of the temperature measurement is also increased. However, the uncertainty around 100 °C is estimated to be ±5.2 °C, which should be acceptable for most combustion studies. Figure 6 shows the experimental setup for the performance evaluation in an unsteady thermal field. The sensor is fixed on a plate and the read-out coil is attached on the other side at a distance of 1.6 mm. The read-out coil is connected to a network analyzer (ZNB20, Rohde & Schwarz) through a coaxial cable, and the wall surface temperature is simultaneously monitored with a K-type thermocouple once in every 100 impedance measurements. Both the network analyzer and the data acquisition unit are connected to a host PC through Ethernet for remote control and data transfer. VISA (Virtual Instrument Software Architecture) is used as the application programming interface, with SCPI (Standard Commands for Programmable Instruments) commands.
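Returning to the reduced-sweep fitting described above, here is a minimal sketch of estimating the resonant frequency from only 7 sweep points; the synthetic data and the simple quadratic model of the phase extremum are assumptions for illustration, not the paper's exact fit of equation (1).

```python
import numpy as np

rng = np.random.default_rng(0)

f0_true = 15.92e6                        # "true" resonance [Hz], hypothetical
f7 = np.linspace(15.5e6, 16.3e6, 7)      # only 7 sweep points
# synthetic phase-dip data: quadratic extremum plus measurement noise [rad]
phase = 1e-3 * ((f7 - f0_true) / 1e5) ** 2 + rng.normal(0.0, 2e-5, 7)

x = (f7 - f7.mean()) / 1e6               # centered/scaled for a well-conditioned fit
a, b, c = np.polyfit(x, phase, 2)        # least-squares parabola
f0_est = f7.mean() + (-b / (2 * a)) * 1e6  # vertex -> resonance estimate [Hz]
print(f"estimated f0 = {f0_est/1e6:.4f} MHz, error = {abs(f0_est - f0_true):.0f} Hz")
# fewer points -> faster sweep, but the fit (and thus the temperature reading)
# becomes more sensitive to noise, as figure 5 quantifies.
```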
Performance evaluation in unsteady field
To provide an unsteady thermal field at the wall surface, the sensor is suddenly brought to a distance of 2 mm from a hot plate preheated to 350 °C. The measurement is performed until the surface temperature reaches 230 °C. The measured resonant frequency data are converted to temperature and plotted as a function of the elapsed time, as shown in figure 7. The measured data in the vicinity of 225 °C are magnified in figure 8. The average time for each measurement is 2.48 ms, and the uncertainty of the temperature measurement is found to be ±6.4 °C with 95% coverage. By improving the measurement method, a temporal resolution of less than 1 ms is expected to be achievable.
Conclusions
We have developed a wireless wall temperature sensor with high spatio-temporal resolution for use in combustion studies. The sensing system consists of a sensor composed of an LCR circuit and an inductively coupled read-out coil. The sensor is fabricated on a Cu-laminated polyimide film, and an additional 1 mm-sized sensing resistor is formed by a standard MEMS process. The parasitic resistance of the coil is markedly suppressed by using thick Cu layers, so that the additional sensing resistor dominates the whole sensor resistance. The fabricated sensor shows high flexibility, so that it can fit well on a curved surface. The sensor performance has been evaluated in an unsteady thermal field. An uncertainty in the temperature measurement of ±6.4 °C with a mean measurement time interval of 2.48 ms has been achieved.
"Physics"
] |
Hydraulic Research for Lateral Plant Obstruction in an Open Channel
Hydraulic model test results for the flow of a water stream through a lateral plant obstruction are presented in this paper. The plant element was formed by natural deciduous and coniferous tree branches filling the space between tree trunks set in a chequered pattern. The experiments, conducted in a horizontal, rectangular hydraulic channel, included measurements of the flow intensity, the difference in water surface level before and after the obstruction, and the filling of the channel on the upstream and downstream sides. A preliminary analysis of the results was performed, determining the coefficient of local losses of the lateral plant obstruction. The coefficient of water discharge for the fascine overfall case assumed for the plant obstruction was determined as well.
INTRODUCTION
The ever increasing application of plant build-up in stream channels encourages the search for better calculation methods that take into consideration the reaction of plants to flow in open channels (Chow, 1959; Tsujimoto, 1999; Järvelä, 2004; Gurnell, 2015; Solari et al., 2016; Radecki-Pawlik et al., 2017; Kałuża et al., 2018). The biological reaction of channel build-up on the flow conditions is characterized, among others, by a damming-up (accumulation) effect (change in water surface level), an increase in flow resistance, a decrease in the flow capacity of the channel, etc. (Chow, 1959; Dąbkowski and Pachuta, 1996; Tymiński and Kałuża, 2012; Wolski et al., 2018). One way of studying such hydraulic reactions of plants is laboratory investigation (Tal and Paola, 2010; Västilä and Järvelä, 2014; Ebrahimi et al., 2017). Nowadays, little is still known about this problem, and not enough field or laboratory data are available. An important contribution is the classic investigation by Klaassen and van der Zwaard (Klaassen and Van Der Zwaard, 1974).
In this paper, a hydraulic characterization of a plant obstruction situated laterally to the direction of water flow was undertaken. Such a situation occurs in broad obstructions of stream channels or during flow into polders. In such cases, water accumulates above the obstruction (Klaassen and Van Der Zwaard, 1974). The hydraulic characteristic of the plant obstruction can be the coefficient of local losses, if we treat the plant element as a local linear obstruction, or the overflow water discharge coefficient, if we treat the obstruction as an overflow. In the latter case, this overflow can be called a fascine overfall.
MATERIAL AND METHODS
The experimental investigations were performed in the water laboratory of the Franzius Institute at the University of Hannover. Figure 1 shows a general schematic of the measurement set-up together with the supply installation.

The test model consisted of a straight rectangular channel with a length of 20 m and a width of 0.98 m. The channel walls were made of glass, and the horizontal, flat bottom was a PVC board. The maximum depth of the channel was 0.85 m. The total delivery of the pumps supplying the model amounted to 0.22 m³·s⁻¹ (220 l·s⁻¹). The water surface level drop was measured as the difference of the readings of two Wavo-type limnigraphs (manufacturer: Delft Hydraulics Laboratory) located in the channel axis at a defined distance from each other. The filling of the channel was additionally checked on the water gauges along the measuring section and regulated with a movable regulating overfall provided with a scale and located at the outlet of the channel. Water was supplied from a top equalizing tank through a steel pipeline with a diameter of 300 mm. A MAG-X Plus type induction flow meter (manufacturer: Fischer & Porter) and a sluice valve with servo control were installed on the pipeline.

The plant element under study is presented in Figures 2 and 3. The characteristic quantities were: diameter of the tree trunks d_p = 0.10 m, and spacings a_x = 0.35 m and a_z = 0.20 m. The free space between the trunks arranged in a chequered pattern was filled with deciduous and coniferous tree branches. Apart from the measurements of flow intensity Q, the depth before (H_g) and after (H_d) the plant obstruction, and the difference in water surface level (ΔH), the water temperature was also measured for each set of measurements. The results of the measurements and their elaboration are presented below. The data labeling in the schematic calculation system is presented in Figure 4 (explanation under the equations).
RESULTS AND DISCUSSIONS

Plant element as a fascine overfall
The lateral plant obstruction in the laboratory channel was considered as a kind of fascine overfall. The experimentally determined characteristic Q = f(ΔH) of such an overfall is presented in Figure 5. The fascine overfall discharge, with the parameters described above, can be determined from the universally applied formula (Finnemore and Franzini, 2009):

Q = m · b · √(2g) · H^(3/2)    (1)

where m is the coefficient of overfall discharge [-], b the overfall width [m], g the acceleration due to gravity [m·s⁻²], and H the "thickness of the overflowing water layer", here H = ΔH (the difference of water surface level ahead of and behind the obstruction) [m].

Table 1 presents an example of the measurement results and the coefficients of discharge of the tested overfall determined on their basis, taking into consideration (m(v)) and not considering (m) the flow velocity ahead of the obstruction v_g (Finnemore and Franzini, 2009). The coefficient of discharge m of the tested fascine overfall, for flows in the inflow channel characterized by Reynolds numbers Re > 50,000, settles at a certain level (Figure 6) and is constant. For Reynolds numbers Re < 50,000, the coefficient m decreases with decreasing Re.
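As an illustration of how the discharge coefficients m and m(v) in Table 1 can be computed from such measurements, the following sketch applies formula (1); the sample numbers are hypothetical, and the approach-velocity correction shown is one common convention rather than necessarily the one used by the authors.

```python
import math

g = 9.81           # gravitational acceleration [m*s^-2]
b = 0.98           # overfall (channel) width [m], from the test model

def m_coeff(Q, dH):
    """Coefficient of overfall discharge m from formula (1),
    neglecting the approach velocity (H = dH)."""
    return Q / (b * math.sqrt(2 * g) * dH ** 1.5)

def m_coeff_v(Q, dH, v_g):
    """m(v): one common convention accounting for the approach-velocity
    head h_v = v_g^2 / (2g) ahead of the obstruction."""
    h_v = v_g ** 2 / (2 * g)
    return Q / (b * math.sqrt(2 * g) * ((dH + h_v) ** 1.5 - h_v ** 1.5))

# hypothetical measurement (not a row of Table 1)
Q, dH, v_g = 0.120, 0.15, 0.30    # [m^3/s], [m], [m/s]
print(f"m = {m_coeff(Q, dH):.3f}, m(v) = {m_coeff_v(Q, dH, v_g):.3f}")
# m(v) < m here because part of the head is supplied by the approach velocity
```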
Coefficient of local resistances of lateral plant barrier
In the case under consideration, the plant element under study is treated as a local linear cluster of plants, e.g. at an inter-embankment. For this type of plant obstruction, the coefficient of local resistances ζ was determined.
Writing the Bernoulli equation for the cross-sections before and after the obstruction (Figure 4) (Chow, 1959; Järvelä, 2004):

H_g + α·v_g²/(2g) = H_d + α·v_d²/(2g) + h_M    (2)

where α is the St. Venant coefficient [-], H_g and H_d are the water depths ahead of and behind the obstruction [m], ΔH = H_g − H_d [m], h_M denotes the local energy losses [m], g is the acceleration due to gravity [m·s⁻²], and v_g and v_d are the mean flow velocities ahead of and behind the obstruction [m·s⁻¹].

The local loss of energy (local resistance h_M) on the plant element under investigation can thus be determined as

h_M = ΔH + α·(v_g² − v_d²)/(2g)   [m]    (3)

hence, with h_M = ζ·v²/(2g), v being the reference mean velocity, the coefficient of local resistances ζ (Table 1) characterizing the obstruction is

ζ = 2g·h_M / v²    (4)

Figure 7 depicts the flow curves obtained from the measurements: the channel without plants, H = f(Q), and the channel with the built-in plant zone, H = f(Q). For the examined range of flow rates Q (Figure 7), the resulting drop in the channel flow capacity reaches 30.3% (ΔQ). For example, the plant element under investigation at a unit flow of q = 0.122 m²·s⁻¹ (Q = 120 l·s⁻¹) causes a change in water surface level in the channel reaching up to 32% (damming-up effect).
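A minimal sketch of the corresponding computation of h_M and ζ from equations (3) and (4) follows; the measurement values are hypothetical, and α = 1 and the use of the downstream velocity as the reference velocity are assumptions.

```python
import math

g = 9.81
alpha = 1.0                     # St. Venant coefficient, assumed ~1 here

def local_loss(dH, v_g, v_d):
    """Local energy loss h_M from the Bernoulli balance, equation (3)."""
    return dH + alpha * (v_g**2 - v_d**2) / (2 * g)

def zeta(h_M, v_ref):
    """Coefficient of local resistances, equation (4): h_M = zeta * v^2/(2g)."""
    return 2 * g * h_M / v_ref**2

# hypothetical measurement (not a row of Table 1)
dH, v_g, v_d = 0.15, 0.30, 0.45     # [m], [m/s], [m/s]
h_M = local_loss(dH, v_g, v_d)
print(f"h_M = {h_M:.3f} m, zeta = {zeta(h_M, v_d):.1f}")   # h_M ~ 0.144 m, zeta ~ 14
```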
The variability of the coefficient of local resistances ζ of the plant element under investigation as a function of the Reynolds number, ζ = f(Re), is presented in Figure 8.
Figure 1. General schematic of the measurement set-up together with the supply installation.

Figure 2. View of the lateral plant obstruction located in the laboratory channel: A) along the direction of water flow, B) perpendicular to the direction of water flow.

Figure 4. Data labeling in the schematic calculation system (explanation under the equations).

Table 1. Sample results of measurements. Q - flow intensity; ΔH - difference in water surface level (damming up/accumulation); H - depth; v - mean velocity; m - coefficient of overfall discharge; h_M - local resistances (local energy loss); ζ - coefficient of local resistances; index g - upstream water; index d - downstream water.
"Environmental Science",
"Engineering"
] |
An application of kernel methods to variety identification based on SSR markers genetic fingerprinting
Background In crop production systems, genetic markers are increasingly used to distinguish individuals within a larger population based on their genetic make-up. Supervised approaches cannot be applied directly to genotyping data due to the specific nature of those data which are neither continuous, nor nominal, nor ordinal but only partially ordered. Therefore, a strategy is needed to encode the polymorphism between samples such that known supervised approaches can be applied. Moreover, finding a minimal set of molecular markers that have optimal ability to discriminate, for example, between given groups of varieties, is important as the genotyping process can be costly in terms of laboratory consumables, labor, and time. This feature selection problem also needs special care due to the specific nature of the data used. Results An approach encoding SSR polymorphisms in a positive definite kernel is presented, which then allows the usage of any kernel supervised method. The polymorphism between the samples is encoded through the Nei-Li genetic distance, which is shown to define a positive definite kernel between the genotyped samples. Additionally, a greedy feature selection algorithm for selecting SSR marker kits is presented to build economical and efficient prediction models for discrimination. The algorithm is a filter method and outperforms other filter methods adapted to this setting. When combined with kernel linear discriminant analysis or kernel principal component analysis followed by linear discriminant analysis, the approach leads to very satisfactory prediction models. Conclusions The main advantage of the approach is to benefit from a flexible way to encode polymorphisms in a kernel and when combined with a feature selection algorithm resulting in a few specific markers, it leads to accurate and economical identification models based on SSR genotyping.
Background
Genetic markers are target sites in the genome that differ between individuals of a population. These differences can occur in DNA that codes for specific genes or, more usually, in the vast areas of intergenic DNA. These differences in the make-up of the genetic content at a specific site in the genome are often referred to as polymorphisms (literally "multiple forms"). These polymorphisms are detected with a range of different technologies, of which simple sequence repeat markers (SSRs) [1] and single nucleotide polymorphisms (SNPs) are currently the most commonly used types. The markers used in this study are SSRs. The SSRs of interest for marker development include di-nucleotide and higher-order repeats (e.g. (AG)_n, (TAT)_n, etc.). The number of repeats usually ranges from just a few units to several dozen. The polymorphism can exist at a locus containing a microsatellite between individuals of a population and is characterized as a different number of repeat units of the microsatellite, which is reported by several authors to result from an unbiased single-step random walk process [2,3].
The detection of these differences occurs by site-specific amplification using polymerase chain reaction (PCR) [4] of the DNA followed by electrophoresis in which the DNA fragments are essentially separated by size. Fragment sizes at a specific locus in the genome are also referred to as "alleles". Depending on the ploidy level of the organism being studied (haploid, diploid, tetraploid), an individual can have one or more alleles at a specific locus. The set of alleles that has been collected for a given individual (often representing a single sample in the study) is referred to as the "genotype" of that individual.
Our purpose is to propose an approach for using SSR marker genotypes to build predictive models to identify commercial tobacco varieties. Predicting unknown samples requires genotyping. When large numbers of samples and SSR markers are involved, the genotyping process can be costly in terms of laboratory consumables, labor and time. As a consequence, it generally makes sense to select a minimal set of markers to build the prediction model.
As mentioned above, the primers associated with an SSR marker, when amplified by PCR on a DNA sample, lead to several amplicon sizes (the "alleles") defining the genotype of the sample. The result of such an amplification on one sample is of the form g1 = a1/a2/.../am, where ai is an integer depending on the number of microsatellite repeats between the two flanking primers and m depends on the ploidy type of the organism from which the DNA is extracted (it can vary from one to several). For SSR markers, the number ai is qualitative only and not quantitative, as (ai, ai + 10) is no more different than (ai, ai + 2) from the point of view of genetics. A snapshot of such a dataset is given in Table 1.
The challenge in building a supervised prediction model is therefore to handle these data, which are neither continuous, nor nominal, nor ordinal. A straightforward approach would be to code all the alleles as 0-1 data in the feature space whose dimensions are defined by the distinct alleles in the training set. However, a prediction model built on this feature space of fixed dimension cannot take new alleles from new samples into account unless the initial feature space is enriched with extra dimensions and the prediction model is retrained. Defining the feature space as the infinite (countable) direct sum of {0, 1} spaces and using a kernel overcomes this limitation.
Geneticists usually compute the Nei-Li distance [21] to estimate the evolutionary distance between samples, and unsupervised methods, like hierarchical clustering or principal coordinate analysis on the Nei-Li distance matrix, are commonly used to treat SSR data; but these are not suited to predicting new DNA samples. To our knowledge, only artificial neural networks have been used in a supervised manner in this context [22], where the binary allele coding was used.
The purpose of this article is twofold: 1) show that encoding the SSR marker polymorphism into the Nei-Li similarities indeed defines a positive definite kernel that will allow the usage of supervised methods to address specific discrimination tasks; 2) describe a simple filter method [23] for selecting identification kits, consisting of a small number of SSR markers that have acceptable discrimination ability for a specific task.
Results and Discussion
In this study, Nicotiana tabacum, a functional diploid, was used. The methods described above are applied to four datasets with distinct discrimination purposes. The description of the materials and methods for primer development and genotyping of the samples can be found in [24]. Four datasets were developed: a) tobType: a set of 91 varieties genotyped on 186 SSR markers without replicates, leading to 91 observations (see additional file 1); the objective is to discriminate the following tobacco types: Burley, Flue Cured and Oriental. b) landRace: a set of 10 different landraces of a given variety (5 plants with 5 replicates) genotyped on 19 SSR markers, for a total of 250 observations (see additional file 2); the groups to discriminate are the 10 landraces of this variety. c) geoVar: a set of 67 different varieties from the same geographic region genotyped on 48 SSR markers, for a total of 93 observations (see additional file 3); the objective is to discriminate the 12 known subtypes. d) ORvar: a set of 38 different varieties from the same tobacco type (Oriental) genotyped on 48 SSR markers, for a total of 88 observations (see additional file 4); the objective is to discriminate 8 predefined families.
Mutual-Information-based Feature Selection (MIFS) [25], maximum Relevance - Minimum Redundancy (mRMR) [26], and our method (the naive case a = 0 and the cases a > 0) are compared on these four internally generated datasets. The comparison is done over a range from N = 2 to 8 markers. For MIFS, the additional parameter b [25] (which balances the importance and the complementarity of a feature) is chosen by cross-validation over the set of values 0, 0.75, 1, 1.25, and for our method a is chosen over the same set. The cross-validation loop includes the feature selection step, to avoid a possible selection bias. The results shown in the tables are the best 10-fold cross-validated results over the parameters of each method, and the classification error rates for the different kit sizes, when combined with kernel linear discriminant analysis (KLDA) or kernel principal component analysis followed by linear discriminant analysis (KPCLDA), are shown in Table 2 and Table 3. The number of markers in the kit, N, is kept as a separate parameter, as a consensus between performance and kit size has to be reached. Overall, the proposed method leads to satisfying results, comparable to or better than the other methods. Only in four cases (both classification methods taken together) did the other selection methods improve performance by an error rate at least 3% lower. Out of 56 cases, the proposed method obtained the best results (equal to or better than the compared methods) in 42 cases. Though the improvements are generally slight, in a few cases the relative difference in error rates is substantial.
It is interesting to consider the case a = 0 separately, as it forbids skipping features and thus allows an evaluation of the benefit of skipping markers. In the vast majority of comparisons, skipping markers is beneficial, and the differences in error rate range from 1% to 15% (ORvar dataset, N = 2).
Comparing the results obtained with all three methods to the classification error rates obtained using all the markers (see Table 4), one can observe that better error rates can be achieved by the selected kits for all the datasets except the geoVar dataset, where only KPCLDA with 6 markers comes close to the error rate of the full set of SSRs.
Table 4. Cross-validation results using the full set of markers.

In order to evaluate how the selected set of markers performs versus the other subsets of cardinality 5, an exhaustive search (11,628 possibilities) was performed. The 10-fold cross-validation results from these simulations are summarized in Table 5. Among all the subsets of size 5, the one chosen by the algorithm belongs to the best 0.5% of sets of size 5 for KPCLDA (best selected kit error rate = 7% (mRMR) to 8% (FS), best subset = 4%) and to the best 0.5% for KLDA (best selected kit error rate = 3% (FS), best subset = 1%), which shows the ability of the feature selection algorithm to capture the few most important markers. The final kit sizes retained for the datasets under consideration are 3 for tobType, 5 for landRace, 5 for geoVar, and 6 for ORvar. For the first two datasets no marker is skipped, for the third dataset the fifth most powerful marker is skipped, and for the fourth dataset the third and fourth most powerful markers are skipped by the algorithm.
Conclusions
The Nei-Li similarity was shown to define a positive definite kernel on the set of marker genotypes and therefore is a very convenient way to encode the polymorphisms contained in SSR marker data. It has shown its ability to be used further for SSR fingerprint based predictions. To our knowledge, the usage of kernel methods in this context is new.
On the four case studies presented, the proposed algorithm for selecting SSR marker kits can definitely lead to economical and efficient prediction models for discrimination. The algorithm is independent of the supervised method chosen in the modelling process (so-called filter method).
The results also show that as a general rule, the full set of markers is not necessarily the most predictive kit, and for all case studies presented, similar classification performance can be achieved with less than 8 markers.
Simulation studies show that the kit selection algorithm performs well compared to the best subset selection when combined with KLDA or KPCLDA, both methods leading to low classification error rates. Feature selection strategies that can deal with categorical data in classification are not so common, and the proposed filter approach might be useful in other contexts as well.
The main advantage of the approach is to benefit from a fast algorithm that results in a few specific markers for a given task. An exhaustive search is generally infeasible or very time consuming. The choice of the constant a can be made by cross-validation. However, in our experience, a = 1 is consensually a good default choice and performs well.
The choice a = 0 (i.e., no consideration for the redundancy), leading to a very straightforward approach, is usually less performant, even though it leads to the best results in 9 cases. Hence, this possibility should not be disregarded when performing a cross-validation experiment on a.
When the number of markers becomes smaller, the mis-genotyping effect becomes more pronounced, and new genotypes on newly measured samples affect the genetic dissimilarities more (even with a smaller proportion of prototypes). Therefore, it should be stressed that choosing the minimum number of markers for a given problem can lead to weaker generalization properties of the classifier, because new samples whose type or landrace is unknown are perhaps not in the original dataset and may have new genotypes. It is therefore recommended, in practice, to use at least 5 markers in a selection kit if the number of classes to discriminate is greater than 4. Moreover, the pre-processing and identification of the electrophoresis amplicons, as well as the marker usage, have to be well established in order to test new samples. The quality of the laboratory work and of the SSR marker development used here also contributed to the efficiency of the models.
Kernel methods for genotyping data
As mentioned in the introduction, genotyping data are neither continuous, nor nominal, nor ordinal. Considering the allele (and not genotype) data as nominal and using a 0 -1 coding can be done but is not without problems.
The difficulty in using this special type of data is discussed in [27], where the authors argue against the use of Fisher Discriminant Analysis due to the discrete nature of the data, preferring the usage of Artificial Neural Networks based on the allele frequencies. A possible way to handle the binary data is to build a model using the DISQUAL approach as presented in [24,28]. Despite the presence of Multiple Correspondence Analysis (which is intended to make the model more robust), this approach is rather sensitive to genotyping error (misassignment of alleles).
Indeed, the natural binary-coding feature space whose dimensions are the alleles in the training set ({0, 1}^N, where N is the number of distinct alleles in the training set) is not the best option because, for a given SSR marker, the alleles obtained on new samples can often be lacking in the original training set; therefore, any metric defined on this fixed-dimension space cannot account for them.

Geneticists usually estimate the degree of polymorphism between two sample genotypes by computing the Nei-Li genetic distance between them. The similarity associated with that distance will be shown to define a positive definite kernel on the set of the genotyped samples. Hence, this kernel will be our preferred choice.
Given two samples S1 and S2 on which m SSR markers are amplified, leading to m genotypes g^1_1, ..., g^1_m for the first sample and m genotypes g^2_1, ..., g^2_m for the second, let G1 and G2 denote the corresponding pooled amplicon sets. The Nei-Li genetic distance between S1 and S2 is computed as

δ_NeiLi(S1, S2) = |G1 Δ G2| / (|G1| + |G2|),

where Δ denotes the symmetric difference of the two sets and |·| the set cardinality.
This approach overcomes the issues mentioned above, as new alleles are implicitly used in the computation of the Nei-Li distance. Moreover, it is well suited to these data due to their biological meaning and is coherent with the fact that the set of genotypes is partially ordered by set inclusion: g1 = a1/···/an ≤ g2 = a'1/···/a'm if and only if {a1, ..., an} is contained in {a'1, ..., a'm}, which reflects the biological comparison of genotypes. Therefore, given a dataset of samples on which m SSR markers are amplified, one obtains a dissimilarity matrix whose entries are the estimated genetic distances between pairs of samples. The purpose here is not to accurately estimate the evolutionary distances between the varieties (as those distances are supposed to do, see [21]) but to exploit the polymorphism encoded in the SSR data in a meaningful way.
The basic concept of kernel discrimination methods is to model a classifier in a feature space (which will be a Hilbert space) based only on a "similarity" matrix which is assumed to be positive definite. Indeed, if the measure of similarity between the samples is a positive definite kernel [29,30], then classifiers can be trained in the reproducing kernel Hilbert space associated with it [30]. It turns out that the Nei-Li similarity defines a positive definite kernel.
Lemma 1. 1 − δ_NeiLi defines a positive definite kernel over the set of genotypes associated with SSR markers.
Proof. Let S1 and S2 be two genotyped samples and consider them as binary vectors x1, x2 in ⊕_{n≥1} {0, 1}^n, so that ⟨x1, x2⟩ = |G1 ∩ G2| and ‖xi‖² = |Gi|.

Then 1 − δ_NeiLi can be rewritten as

1 − δ_NeiLi(S1, S2) = 2|G1 ∩ G2| / (|G1| + |G2|) = 2⟨x1, x2⟩ / (‖x1‖² + ‖x2‖²).

The linear kernel ⟨x1, x2⟩ is positive definite, and, using the fact that a pointwise product of positive definite kernels is also positive definite (see e.g. [29]), it is sufficient to note that (S1, S2) ↦ 1/(‖x1‖² + ‖x2‖²) is positive definite, which follows from the integral representation 1/(‖x1‖² + ‖x2‖²) = ∫₀^∞ e^{−t‖x1‖²} e^{−t‖x2‖²} dt. This proves the lemma. Once this valid kernel is defined, a wide range of supervised methods can be applied. The supervised approaches investigated in our examples are Kernel Linear Discriminant Analysis (KLDA, [31]) and Kernel Principal Component Linear Discriminant Analysis (KPCLDA, kernel principal component analysis followed by linear discriminant analysis, as described in [29,32]). To our knowledge, kernel approaches have not yet been applied to SSR data.
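As an illustration of the encoding, a minimal Python sketch of the Nei-Li kernel of Lemma 1 follows; the marker names and allele sizes are invented, and real amplicon sets would come from the genotyping pipeline described in [24].

```python
import numpy as np

def nei_li_kernel(s1, s2):
    """1 - Nei-Li distance (the positive definite kernel of Lemma 1).
    s1, s2: sets of (marker, allele) pairs pooled over all amplified markers."""
    return 2 * len(s1 & s2) / (len(s1) + len(s2))

# toy genotypes on two SSR markers (hypothetical allele sizes)
sA = {("SSR1", 150), ("SSR1", 154), ("SSR2", 201)}
sB = {("SSR1", 150), ("SSR2", 201), ("SSR2", 205)}
print(nei_li_kernel(sA, sB))          # 2*2/(3+3) = 0.667

samples = [sA, sB, {("SSR1", 154), ("SSR2", 205)}]
K = np.array([[nei_li_kernel(x, y) for y in samples] for x in samples])
print(K)   # Gram matrix usable with any kernel method (e.g., KLDA, KPCLDA)
```

Note that a new sample carrying a previously unseen allele simply contributes extra elements to its amplicon set, so the kernel value remains well defined without retraining any feature encoding.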
Identification kit selection: Discrimination power of an SSR marker
The cost of the SSR analysis is to be taken into account when building a predictive model: the classification results should be obtained with a minimal number of SSR markers in order to be used at a reduced cost.
An exhaustive subset selection is obviously too computationally expensive, as subsets of size 5 to 20 would have to be extracted from hundreds of SSR markers. Hence, a strategy has to be developed to address this issue.
As feature selection in the reproducing kernel Hilbert space associated with our kernel is not useful and, by the very construction of the kernel, classical embedded methods [23] like the Lasso [33] or L1-SVM cannot be applied, filter methods for SSR selection are natural in our context. Additionally, as the data generated have a long life cycle and can be used in the long run, the set of markers proposed for a given task is preferably independent of the classification method used.
The criterion for a suitable identification SSR marker kit can be stated as follows: "Choose the set of markers that shows the largest polymorphism between the groups to discriminate and the lowest polymorphism within the groups." This criterion is analogous to the famous Fisher "between/within" maximization criterion used in canonical discriminant analysis.
A score will be computed for each SSR marker, representing the ability of the marker to discriminate between the groups. Additionally, a redundancy score will be computed in order to assess whether the polymorphism contained in a marker A is "similar" to the polymorphism of a second marker B. If this is the case, one of the markers should be dropped in favor of another one explaining a different polymorphism.
Due to the nature of the genotype data, information-theoretic measures are well suited here: the association between a marker and the group to discriminate is measured through the asymmetric uncertainty coefficient [34], which reflects the dependency between the SSR marker and the group to be discriminated, and the redundancy between two markers is quantified by the uncertainty coefficient (a normalized version of the mutual information).
For X and Y two discrete variables, let H(X) and H(X, Y) denote the entropy and the joint entropy, respectively. Empirical estimates are used to evaluate these quantities (p_{i.} = n_{i.}/n and p_{ij} = n_{ij}/n).
Following [34], we have: 1) the symmetric uncertainty coefficient is defined by

U(X, Y) = 2 [H(X) + H(Y) − H(X, Y)] / [H(X) + H(Y)],

where the numerator is twice the mutual information between X and Y.
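For concreteness, the following sketch evaluates the symmetric uncertainty coefficient from a marker-by-group contingency table; the counts are invented for illustration.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector (zero entries skipped)."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def symmetric_uncertainty(table):
    """U(X,Y) = 2*I(X;Y)/(H(X)+H(Y)) from a contingency table of counts."""
    p = table / table.sum()
    Hx = entropy(p.sum(axis=1))     # marginal over rows (marker genotypes)
    Hy = entropy(p.sum(axis=0))     # marginal over columns (variety groups)
    Hxy = entropy(p.ravel())        # joint entropy
    return 2 * (Hx + Hy - Hxy) / (Hx + Hy)

# toy counts: rows = genotypes of one SSR marker, columns = variety groups
n = np.array([[20,  2,  1],
              [ 3, 18,  2],
              [ 1,  1, 22]], dtype=float)
print(symmetric_uncertainty(n))     # close to 1 => marker discriminates well
```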
"Computer Science"
] |
Recurrent mutations, including NPM1c, activate a BRD4-dependent core transcriptional program in acute myeloid leukemia
Recent evidence suggests that inhibition of bromodomain and extra-terminal (BET) epigenetic readers may have clinical utility against acute myeloid leukemia (AML). Here we validate this hypothesis, demonstrating the efficacy of the BET inhibitor I-BET151 across a variety of AML subtypes driven by disparate mutations. We demonstrate that a common 'core' transcriptional program, which is HOX gene independent, is downregulated in AML and underlies sensitivity to I-BET treatment. This program is enriched for genes that contain 'super-enhancers', recently described regulatory elements postulated to control key oncogenic driver genes. Moreover, our program can independently classify AML patients into distinct cytogenetic and molecular subgroups, suggesting that it contains biomarkers of sensitivity and response. We focus on AML with mutations of the nucleophosmin gene (NPM1) and present evidence suggesting that wild-type NPM1 has an inhibitory influence on BRD4 that is relieved upon NPM1c mutation and cytoplasmic dislocation. This leads to the upregulation of the core transcriptional program, facilitating leukemia development. This program is abrogated by I-BET therapy and by nuclear restoration of NPM1. Finally, we demonstrate the efficacy of I-BET151 in a unique murine model and in primary patient samples of NPM1c AML. Taken together, our data support the use of BET inhibitors in clinical trials in AML.
INTRODUCTION
Acute myeloid leukemia (AML) is an aggressive hematological malignancy in which fewer than 30% of all patients are long-term survivors [1]. Over 11,000 patients per year die of this disorder in the United States alone, and novel therapeutics are urgently required [2-4]. Among the molecular lesions implicated are aberrant transcription and epigenetic dysfunction, which provide a potential basis for therapeutic intervention [5]. In particular, the dynamic plasticity of the epigenome lends itself well to therapeutic manipulation. As an exemplar of this principle, we and others have recently shown the efficacy and mechanism of action of protein-protein interaction inhibitors of the bromodomain and extra-terminal (BET) family of epigenetic readers in animal models of mixed lineage leukemia (MLL)-rearranged leukemias [6,7], multiple myeloma [8] and non-Hodgkin lymphoma [9]. However, the efficacy and potential mechanism(s) of action of BET inhibitors in other forms of AML are largely unknown.
One of the most common mutations in AML, occurring in 35% of cases, involves the nucleophosmin (NPM1) gene [10]. NPM1 is a pleiotropic protein with roles in processes as diverse as ribosome biogenesis, histone chaperoning and centrosome duplication. The functional integrity of NPM1 depends on its ability to shuttle between the nucleus and cytoplasm, and this ability is severely compromised in NPM1-mutated AML [11]. The mutations (termed NPM1c mutations) uniformly alter one or both critical tryptophan residues in the C-terminus of the protein, which prevents proper folding [12] and destroys a nucleolar localization signal. In addition, the most common mutation (type A, accounting for 75% of all mutations) [11] also generates an aberrant extra nuclear export signal. Although NPM1c mutations are heterozygous, hetero-/homodimerization with wild-type (WT) NPM1 results in cytoplasmic mislocalization of both mutant and WT protein. This alteration in subcellular location perturbs normal NPM1 function, including the mislocalization and stabilization of critical proteins such as the TP53 regulator p14ARF [13,14], and leads to transformation. This process also generates a distinct transcriptional signature in NPM1c AML that facilitates the generation of leukemia [15,16]. However, the nature of aberrant transcriptional regulation in NPM1c AML remains obscure.
In this report, we address these questions and demonstrate that multiple AML subtypes are sensitive to BET inhibition in vitro. In addition, we identify a BRD4-dependent transcriptional program that maintains multiple subtypes of AML and underlies I-BET sensitivity. In mechanistically interrogating the sensitivity of the common NPM1c subtype of AML, we further identify a previously unknown function for NPM1 as a negative regulator of BRD4-dependent transcription. This function is perturbed in NPM1c AML and inhibited by I-BET. Finally, we present convincing preclinical evidence suggesting the efficacy of BET inhibition in NPM1c AML.
MATERIALS AND METHODS
Cell culture

MV4;11, MOLM13, NOMO1, Kasumi, ME-1, SKM1, KG-1, NB4, HEL, HL60 and K562 cells were grown in RPMI-1640 medium (Sigma-Aldrich, St Louis, MO, USA) supplemented with 10-15% fetal calf serum (FCS). OCI-AML3 cells were grown in 80% alpha-MEM plus 20% FCS. All growth media contained 1% penicillin/streptomycin. Murine progenitors retrovirally transformed with MOZ-TIF2 or NUP98-HOXA9 were grown in RPMI-1640 medium supplemented with 20% FCS and 10 ng/ml IL3. Primary human leukemia cells were grown in RPMI-1640 medium supplemented with 20% FCS in the presence of 10 ng/ml IL3, 10 ng/ml IL6 and 50 ng/ml SCF. Cells were incubated at 37 °C and 5% CO2. K562 cells were transfected using the Amaxa Nucleofector system according to the manufacturer's instructions. Cell proliferation assays and clonogenic assays in methylcellulose were performed as previously described [6]. Immortalized MOZ-TIF2 and NUP98-HOXA9 cell lines were generated through retroviral transduction of whole bone marrow with MSCV MOZ-TIF2 and MSCV NUP98-HOXA9, as previously described [26].

Gene expression and chromatin immunoprecipitation

Gene expression analysis and chromatin immunoprecipitation, followed by downstream analysis with next-generation sequencing or RT-PCR, were performed as previously described [27]. The following primer pairs were used in the chromatin immunoprecipitation analysis. The following primer pairs were then used in the gene expression analysis.
Murine models of disseminated NPM1c leukemia
Three separate NPM1c AML samples, each containing different collaborating mutations, were chosen for in vivo studies. Ten million NPM1c AML cells from each leukemia were intravenously injected into 6-8-week-old sublethally irradiated (300 cGy) NOD-SCID mice, as previously described [6]. Treatment with I-BET at 15 mg/kg was commenced on day 11, and mice were inspected twice daily. This dosing schedule maintained plasma levels of the compound above the in vitro IC50 (Supplementary Figure 1). Mice were killed upon signs of distress/disease. All mice were kept in a pathogen-free animal facility. All experiments were conducted under UK Home Office regulations. Mouse histology and tissue sample preparation were performed as previously described [6].
Flow cytometry analysis
Cell apoptosis and cell cycle analyses were performed as previously described [6] on an ADP flow cytometer (Dako, Stockport, UK), and all data were analyzed with FlowJo software (Tree Star, Inc., Ashland, OR, USA).
Patient material
Peripheral blood or bone marrow containing >80% blasts was obtained from patients following consent and under full ethical approval at each involved institute.
Immunofluorescence microscopy. Haematopoietic cells were washed once in 1× phosphate-buffered saline before cytocentrifugation onto polylysine-coated microscope slides. Cells were fixed with buffered 4% paraformaldehyde and, following stepwise incubation with primary and then secondary fluorescent antibodies (see antibodies), stained with Hoechst 33258 (Sigma-Aldrich) and mounted with Vectashield mounting medium (Vector Laboratories, Peterborough, UK). Confocal laser images were captured with an Olympus Fluoview FV1000 microscope equipped with a 40× oil lens (Olympus, Southend-on-Sea, UK). Image processing was carried out using PHOTOSHOP (Adobe Systems, San Jose, CA, USA).
Bioinformatics analysis
Microarray and bioinformatics analysis. RNA from OCI-AML3 cells was extracted after 6 h of treatment with I-BET151 and processed as described [6] before hybridization to Illumina Human HT12 v4 BeadChips (Illumina, San Diego, CA, USA). Gene expression data were processed using the lumi [28] package in R. Probes were filtered to remove those where the detection P-value (representing the confidence that expression is above the background of the negative control probe) was greater than 0.01 in all samples. Expression data were transformed using variance stabilization [29], then normalized using quantile normalization. Comparisons between the dimethyl sulfoxide (DMSO)- and BET-inhibitor-treated samples for all three cell lines (OCI-AML3, MV4;11 and MOLM13) were performed using the R package limma. Genes with a false discovery rate below 5% and a fold-change greater than two were considered significant. Agglomerative hierarchical clustering of the most downregulated genes was performed in R using complete linkage on the pairwise Euclidean distance between gene signatures, and clusters were identified based on a cutoff of 10.
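To make the significance rule concrete, here is a minimal, hypothetical sketch of the thresholding step; the gene names and numbers are invented, and the actual analysis was performed with limma in R as described above.

```python
import pandas as pd

# toy differential-expression table (hypothetical values, limma-style output)
res = pd.DataFrame({
    "gene":  ["MYC", "BCL2", "HOXA9", "GAPDH"],
    "logFC": [-2.1,  -1.4,   -0.2,    0.1],    # log2 fold-change, I-BET vs DMSO
    "adj_p": [1e-6,  3e-4,    0.4,    0.9],    # FDR-adjusted p-value
})

# rule used in the text: FDR < 5% and fold-change > 2 (|log2FC| > 1 <=> FC > 2)
sig = res[(res["adj_p"] < 0.05) & (res["logFC"].abs() > 1.0)]
print(sig)    # MYC and BCL2 pass; HOXA9 and GAPDH do not
```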
ChIP-seq analysis. Sequenced reads were mapped to the reference human genome (hg19) using the Burrows-Wheeler Aligner [30] with default parameters. Only reads mapped with a mapping quality score >10 were retained, and multiple reads mapping to identical genomic loci were removed to limit potential PCR bias. Reads were extended along their strand to the estimated fragment size of 300 bp. Peaks were called using Model-based Analysis of ChIP-Seq [31] with standard parameters. The distribution of reads about the transcriptional start site of all protein-coding genes >1 kb, as well as lincRNAs, miRNAs, snoRNAs and snRNAs, was calculated using the Repitools package in R.
Assigning enhancer and super-enhancer regions to specific genes. Enhancer and super-enhancer regions were defined as per Young and colleagues, based upon ChIP-seq enrichment for the mediator complex member MED1, the histone mark H3K27Ac and distance from promoters as defined by H3K4Me3 in the MM1.S cell line. 17 As there was an exceptionally strong correlation between MED1 and BRD4 binding intensity at enhancer regions in MM1.S cells, we estimated the number of ChIP-seq reads mapped to these enhancer regions for BRD4 binding in the OCI-AML3 cell line following treatment with dimethyl sulfoxide and I-BET, respectively. For both treatments, reads per million were estimated and enhancer regions were ranked according to increasing BRD4 signal. Genes (N = 6725) that were within 50 kb were assigned to the enhancer regions. Only a few genes were assigned to more than one enhancer.
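The gene-to-enhancer assignment reduces to an interval test around each enhancer; the sketch below shows the ranking-by-RPM and 50 kb window logic. All coordinates, signal values and gene TSS positions here are illustrative only, not taken from the study.

```python
# Rank enhancers by BRD4 reads-per-million and assign genes within 50 kb.
enhancers = [
    {"chrom": "chr8", "start": 128_700_000, "end": 128_720_000, "rpm": 42.0},
    {"chrom": "chr18", "start": 60_790_000, "end": 60_800_000, "rpm": 17.5},
]
genes = {"MYC": ("chr8", 128_748_315), "BCL2": ("chr18", 60_790_579)}  # illustrative TSSs

WINDOW = 50_000  # 50 kb, as in the text

enhancers.sort(key=lambda e: e["rpm"])  # rank by increasing BRD4 signal
for enh in enhancers:
    enh["genes"] = [
        name for name, (chrom, tss) in genes.items()
        if chrom == enh["chrom"]
        and enh["start"] - WINDOW <= tss <= enh["end"] + WINDOW
    ]
print(enhancers)
```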
BET inhibition demonstrates efficacy across a number of AML subtypes
To determine if BET proteins are valid therapeutic targets in other AML subtypes, we assessed the sensitivity of a representative panel of cell lines with common recurrent AML mutations to inhibition by I-BET151 (hereafter I-BET) in growth and colony formation assays (Figures 1a and b and data not shown). These data demonstrate that, in addition to cell lines harboring MLL translocations, a number of AML cell lines including OCI-AML3 (which is the only human cell line that contains an NPM1c mutation), KG-1 (FGFR1OP2-FGFR1 rearrangement), SKM1 (EZH2 Y641C mutation), Kasumi (AML1-ETO rearrangement) and ME-1 (CBFβ-MYH11 rearrangement) show sensitivity to treatment with I-BET, both in liquid culture and clonogenic assays (Figures 1a and b and data not shown). Moreover, murine bone marrow progenitors retrovirally transduced with poor-risk AML-associated fusion oncogenes for which no human cell lines exist, such as MOZ-TIF2 and NUP98-HOXA9, also demonstrated sensitivity (Figure 1c). Although the sensitivity varied over a relatively wide range, the majority of cell lines were inhibited at concentrations predicted to be achievable in vivo (Figure 1a and Supplementary Figure 1). Similar to our observations in the MLL-rearranged leukemias, I-BET induced a rapid and profound apoptosis and G0/G1 cell cycle arrest in non-MLL fusion AML cell lines (Figures 1d and e and data not shown).
Finally, sensitivity to I-BET was also demonstrated in clonogenic assays in primary samples from patients with non-MLL fusion AML (Figure 1f and Supplementary Table 1), and I-BET was shown to induce apoptosis across multiple non-MLL fusion patient samples (Figures 1g and h). Taken together, our findings demonstrate the efficacy of I-BET against a broad range of AML cells with disparate oncogenic mutations and suggest clinical utility across a wide range of AML subtypes.
BET proteins regulate a 'core' transcriptional program in AML
We have previously demonstrated that inhibition of BET proteins alters a specific transcriptional program in MLL-rearranged leukemias, with downregulation of genes within this program, such as BCL2 and C-MYC, leading to the induction of apoptosis and cell cycle arrest. 6 On the basis of our demonstration of similar cellular phenotypic consequences following I-BET treatment in sensitive AML cell lines, we hypothesized that downregulation of a similar transcriptional program may have mediated these findings.
To test this hypothesis, we analyzed the changes in gene expression following 6 h of I-BET treatment (before significant changes in cell cycle and apoptosis are evident) in the sensitive
cell lines OCI-AML3 and SKM1, containing NPM1c and EZH2 Y641C mutations, respectively. Similarly to the MV4;11 and MOLM13 MLL-rearranged cell lines, only a relatively small number of genes demonstrated a significant alteration in expression in either cell line, with the majority of genes unchanged (Figures 2a and b). This corroborates our previous finding of the specificity of inhibition of BET proteins on overall gene expression in AML. Although a number of genes were uniquely altered, remarkably, the degree of overlap of the transcriptional changes between OCI-AML3, SKM1 and each of the MLL cell lines was similar to the overlap between the two MLL cell lines themselves, and there was a high degree of correlation between the genes (Figure 2c and Supplementary Figure 2A). Importantly, the expression of 26 genes was commonly downregulated in all four cell lines following I-BET treatment (Figure 2c and Supplementary Table 2). This gene set, which includes several critical regulators of myelopoiesis and leukemia including BCL2, C-MYC and IRF8, was also downregulated in other AML cell lines sensitive to I-BET (Supplementary Figure 2B). Moreover, decreased expression of a subset of the same genes was demonstrated in primary patient AML samples following I-BET treatment (Figure 2d and Supplementary Table 1). Of particular interest, genes from the HOXA cluster do not form part of this core transcriptional program (Supplementary Table 2). Taken together, this inhibition of common critical leukemia regulators following I-BET treatment in a range of AML subtypes demonstrates that BET proteins regulate the expression of a 'core' transcriptional program in AML and that this program is HOX gene independent. Although BRD4 is present ubiquitously at promoter and enhancer elements, it has recently been demonstrated that the expression levels of certain genes are more susceptible to BRD4 inhibition. 17 A feature of these responsive genes appears to be the presence of super-enhancers. Using the same methodology described by Young and colleagues, 17 we find that nearly half (12/26) of the genes present within the core transcriptional program contain super-enhancers, and BRD4 binding to these regions is dramatically decreased following just 6 h of inhibition with I-BET151 (Figure 2e).
The BET-dependent core transcriptional program classifies patients with AML into groups that differ in their specific molecular subtype and may predict response to BET inhibition
We next assessed the expression levels of the genes within the BET-dependent program in a large series of untreated AML patients at diagnosis. Of the 26 genes within the program, we were able to assess the levels of expression of 18 of these in a large series of 436 patients. 18 Using unsupervised clustering, this gene set classified the patient samples into six groups (Figure 3a). Moreover, these groups demonstrated statistically significant differences in the proportions of known prognostic factors such as karyotype (P < 0.0001, Figure 3b) and mutation status for the NPM1c (P = 0.019, Figure 3c) and FLT3-ITD mutations (P = 0.02, Supplementary Figure 3). Of particular interest, patients with predicted sensitive genotypes, such as those patients with MLL-fusion leukemias, significantly clustered within specific groups (Figure 3b). Taken together with the downregulation of this core program upon experimental treatment with I-BET, these data raise the possibility that the genes within the core program may not only provide putative transcriptional biomarkers of response, but may also assist with determining sensitive patients before therapy.
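The reported karyotype enrichment is the kind of result a contingency-table test yields. Below is a sketch with entirely synthetic counts, assuming a chi-square test of independence between expression-defined groups and karyotype categories; the original statistical test is not specified in this excerpt.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Synthetic counts. Rows: patient groups 1-6; columns: karyotype categories.
counts = np.array([
    [20, 5, 3], [8, 30, 4], [10, 6, 25],
    [15, 12, 9], [5, 18, 10], [22, 7, 6],
])
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}, dof = {dof}")
```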
Figure 3. The core transcriptional program classifies human AML: (a) 18 of 26 genes (69%) from the BET-responsive core signature were differentially expressed across a cohort of 436 AML patients, as shown in the heat map. The gene set could classify this cohort into six groups through the use of unsupervised clustering. (b) Significant differences in cytogenetic characteristics were shown for individuals in each of the groups (P < 0.0001), with significant differences in molecular prognostic factors including mutational status for (c) NPM1c and FLT3-ITD (Supplementary Figure 3) also noted (P = 0.02 and P = 0.02, respectively). (+8 = trisomy 8, NK = normal karyotype, Other HR = other high risk (t(6;9), 3q abnormality and del 5q).)
The NPM1c mutation relieves inhibition of BET proteins, facilitating upregulation of the 'core' AML transcriptional program
Our results suggest that BET inhibition is likely to be effective in a broad range of AML subtypes. However, the molecular mechanism underpinning this efficacy is likely to vary depending on the mutational spectrum of individual cases, with BET proteins serving as a common terminal effector of transcription downstream of these mutations. We had previously described the underlying molecular mechanism for BET inhibitors in MLL-fusion AML, and we next chose to address the mechanism of action of this emerging epigenetic therapy in NPM1c AML. Our previous characterization of the nuclear BET protein interactome had identified an interaction between BRD4 and a proportion of WT NPM1 with three separate proteomic methodologies in HL60 cells. 6 We could further document that a portion of BRD4 colocalizes with WT NPM1 in primary samples from AML patients (Supplementary Figure 4). Although NPM1c has previously been shown to result in an excess of cytoplasmic NPM1, 19 we noted that the vast majority of BRD4 in OCI-AML3 was retained in the nucleus (Supplementary Figure 5A). We therefore hypothesized that the NPM1-BRD4 interaction may repress the transcriptional activity of the proportion of BRD4 that interacts with NPM1, and reasoned that cytoplasmic dislocation of NPM1 may abrogate this repressive interaction, leading to aberrant gene expression. Normal cytoplasmic shuttling of NPM1 occurs in a CRM1-dependent manner, and to test our hypothesis we restored nuclear NPM1 with the CRM1 inhibitor Leptomycin B (LMB). 11 The prediction of our hypothesis would be that LMB treatment would restore nuclear NPM1c and negatively regulate BRD4-dependent transcription at critical loci. As LMB treatment may lead to pleiotropic effects, we first established that LMB treatment for a period of 6 h did not lead to discernible phenotypic effects on apoptosis and cell cycle progression in the cell lines (Supplementary Figure 5B and C and data not shown). We then tested the effects of LMB treatment on gene expression and BRD4 binding at two critical loci from the core transcriptional program, BCL2 and C-MYC, in NPM1c mutant OCI-AML3 cells. As a further control we also treated the I-BET-sensitive but NPM1 WT KG-1 cells with LMB. For comparison, the effects on transcription and BRD4 binding were also assessed in both cell lines following I-BET treatment. In OCI-AML3 (NPM1c) cells, relocation of NPM1c into the nucleus with LMB phenocopied treatment with I-BET in downregulating expression of both BCL2 and C-MYC (Figures 4a and b). In contrast, in NPM1 WT KG-1, no effect was seen following LMB treatment and only I-BET treatment decreased expression of BCL2 and C-MYC (Figures 4a and b). In OCI-AML3,
as anticipated, we found that the decrease in gene expression of BCL2 and C-MYC following LMB and I-BET treatment was accompanied by a concomitant decrease in BRD4 binding at the transcriptional start sites of these genes (Figures 4c and d). However, in KG-1 cells, BRD4 binding only decreased following I-BET treatment (Figures 4c and d). To assess if this relationship was observed at a global level, we performed chromatin immunoprecipitation followed by next-generation sequencing (ChIP-seq) experiments for BRD4 binding following treatment with dimethyl sulfoxide vehicle, I-BET or LMB in OCI-AML3 cells. In keeping with the expression patterns following these perturbations, the binding of BRD4 was only marginally altered at the majority of loci whose expression remained unchanged following treatment with either LMB or I-BET (Figure 4e). However, in genes whose expression was significantly downregulated following I-BET administration, an obvious decrease in BRD4 binding was demonstrated at the transcriptional start site following treatment with either I-BET or LMB (Figures 4e-g).
To expand on these data, we used K562 cells (which carry germline NPM1 alleles) and expressed GFP-NPM1c in a proportion of these cells, allowing us to mechanistically test our hypothesis in an isogenic background (Supplementary Figure 6). In these cells we could demonstrate that LMB relocated the majority of the NPM1c to the nucleus (Figure 5a), and this restored the interaction between BRD4 and NPM1c (Figure 5b). Furthermore, in these cells the expression of NPM1c increased the expression of exemplar genes from our core transcriptional program (Supplementary Figure 6), in comparison with control cells. Taken all together, these data are consistent with our hypothesis that nuclear NPM1 exerts a repressive effect on BRD4 and decreases transcription at certain critical loci via a reduction of BRD4 chromatin binding. Loss of this inhibitory effect via NPM1c mutation would explain, at least in part, the aberrant transcriptional regulation evident in NPM1c AML and would also explain the sensitivity of this disease to BET inhibition.
I-BET is a promising preclinical agent against NPM1c AML
NPM1c AML represents one of the largest subtypes of AML patients and has a variable prognosis, dependent upon the presence of cooperating mutations. 20 For example, co-occurrence of NPM1c with internal tandem duplication of the FLT3 gene (FLT3-ITD) and DNMT3A mutations (as occurs in the OCI-AML3 cell line) is common and confers a relatively poor prognosis. 20,21 Therefore, to further assess the efficacy of I-BET against NPM1c AML in cooperation with a number of other mutations, we utilized leukemia cells from an elegant, recently described murine model. In this model, NPM1c drives leukemia in cooperation with a range of other mutations generated through transposon-based insertional mutagenesis, leading to unique tumors in each case with identified cooperating mutations in addition to NPM1c (Vassiliou et al. 15). Leukemia cells from all six independent tumors tested demonstrated sensitivity to I-BET in both liquid culture (with IC 50 values ranging from 120-312 nM) and colony formation assays (Figures 6a and b). Furthermore, sensitivity of NPM1c AML to I-BET in vivo was confirmed following transplant of leukemia cells from three separate NPM1c AML tumors into NOD-SCID IL-2Rγ−/− recipient mice. In these preclinical models, I-BET therapy was not instituted until overwhelming disease burden
Figure 5. Relocation of NPM1c into the nucleus leads to a re-association with BRD4: in a cell line diploid for wild-type NPM1 we transfected mutant NPM1 N-terminally tagged with green fluorescent protein (GFP-NPM1c). Transfection efficiency was between 10-20%. We were able to distinguish wild-type NPM1 from mutant NPM1 with an antibody raised against amino acids 1-100 of NPM1, which therefore does not recognize GFP-NPM1c. In this isogenic cellular background, (a) confocal immunofluorescence microscopy images show that the subcellular localization of NPM1c is within the cytoplasm. However, following treatment with LMB, NPM1c is relocated back into the nucleus/nucleolus. (b) From the subset (10-20%) of cells expressing GFP-NPM1c, we demonstrate that the relocalization of NPM1c into the nucleus/nucleolus leads to an increased association with BRD4. Also demonstrated are 5% input and 5% of the flow-through (FT) fraction following immunoprecipitation (IP).
was demonstrated by flow cytometry of peripheral blood (day 11 post transplant), mimicking the presentation of patients with AML in the clinic. Across all tumors, a significant survival advantage (P = 0.01) was demonstrated for mice treated with I-BET in comparison with vehicle-treated controls (Figure 6c and Supplementary Tables 3 and 4). Of specific relevance for human AML, in leukemia driven by both NPM1c and a FLT3 activating mutation a highly significant survival advantage was demonstrated (P = 0.008) (Figure 6d). Although the survival advantage observed in this in vivo model is modest, it is important to note that, in contrast to many published studies assessing the efficacy of small molecules in vivo, we commenced treatment only once established disease was demonstrated, to mimic the scenario observed clinically. Despite this, we still established a significant survival advantage, in keeping with those previously reported with BET inhibitors and other novel epigenetic therapies in models of AML, 7,22 lymphoma 9 and multiple myeloma. 8 Moreover, at necropsy, tumor bulk was considerably lower in all recipient mice treated with I-BET, as evidenced by histology, peripheral white cell count and spleen weight (Figures 6e-g).
Finally, we tested the efficacy of I-BET to inhibit clonal growth of samples from NPM1c AML patients. The majority (four out of five) of these samples also carried FLT3-ITD mutations (Supplementary Table 1). As is shown, colony number was significantly decreased in all cases, and increased apoptosis and characteristic transcriptional changes were also evident (Figure 6h, Figure 1e and Figure 2c). Together with the murine studies, these data demonstrate that, regardless of the nature of the cooperating mutation, NPM1c leukemias are highly sensitive to growth inhibition by I-BET.
DISCUSSION
AML remains an unmet medical need. Novel therapies are therefore urgently required, particularly therapeutics with toxicities acceptable for the increasingly elderly population who present with AML. In this report, we demonstrate that a wide range of disparate AML subtypes are sensitive to growth inhibition in vitro following treatment with I-BET151, a specific inhibitor of BET proteins that we have previously shown to be of low toxicity in preclinical studies. 6,23 These data suggest the clinical utility of BET inhibition across a number of AML subtypes. Of note, the related BET inhibitor I-BET762 24 is currently in clinical trials for NUT midline carcinoma, an aggressive epithelial carcinoma characterized by genomic rearrangement of BRD4 or BRD3 (ClinicalTrials.gov identifier: NCT01587703). Our findings with I-BET151, and those with another inhibitor of BET proteins, JQ1, in AML 6,7 now provide the basis for early phase clinical trials in these often fatal malignancies.
Our findings also demonstrate that BET proteins are transcriptional regulators of genes critically required for leukemogenesis. We have previously defined a set of genes in MLL-rearranged AML cell lines whose expression is altered following treatment with I-BET, but it was unknown whether a similar transcriptional program was required in other sensitive AML subtypes. When similar experiments were performed in other AML cell lines, an obvious overlap with the MLL program was evident. This overlap contains 26 genes that are consistently downregulated and includes the critical regulators of myelopoiesis and leukemogenesis BCL2, C-MYC and IRF8. Although MLL-fusion and NPM1c AML subtypes are associated with the aberrant expression of HOXA genes, 16,25 the common transcription program and the profound in vitro and in vivo effects of BET inhibitors in these leukemias are independent of downregulation of the HOXA genes. Interestingly, a number of the common genes derived from our analyses in AML cell lines contain super-enhancers that are exquisitely sensitive to BRD4 inhibition by I-BET151. Notably, many of the genes downregulated in the human AML cell lines were also downregulated following I-BET treatment in multiple primary AML samples from patients with disparate genotypes. In addition, our 26-gene signature could classify AML into distinct molecular subgroups, raising the possibility that this core transcriptional signature can provide biomarkers to predict sensitivity to BET inhibition and may also be used prospectively to monitor patient response to BET inhibition in real time. We propose that this gene set comprises a 'core' BET-responsive transcriptional program abnormally regulated in AML, and that abrogation of this program underpins the apoptosis and cell cycle arrest that are uniform upon I-BET therapy.
Our studies also shed direct light on one of the prevailing mysteries in AML biology: the mechanisms of transcriptional dysregulation in NPM1c AML. We have previously demonstrated that WT NPM1 and BRD4 interact, and further demonstrate this interaction in primary AML cells with WT NPM1. Our data predict that NPM1c mutations relieve an inhibitory interaction with BRD4 through cytosolic dislocation of NPM1c and WT NPM1, allowing BRD4 to activate aberrant transcription in NPM1c AML (Figure 7). In support of our hypothesis, we demonstrate that restoration of nuclear NPM1c following treatment with the CRM1 inhibitor LMB restores the NPM1-BRD4 interaction and downregulates expression of critical genes; the nuclear relocation of NPM1c following LMB treatment closely phenocopies the inhibitory effects of I-BET on transcription and BRD4 chromatin binding (Figure 7).
We and others have previously demonstrated the preclinical efficacy of BET inhibition in MLL-rearranged leukemias, and here we report a similar in vitro sensitivity of AML with NPM1c mutations. NPM1c AML represents one of the single largest AML subgroups, comprising around 35% of all cases, and has a variable prognosis. This prognosis is largely dependent upon the presence or absence of other 'cooperating' mutations, particularly the FLT3-ITD mutation. Importantly, our findings demonstrate that NPM1c AML cells are uniformly sensitive to I-BET inhibition regardless of the nature of the cooperating mutations (Figures 6a-h and Supplementary Tables 1, 3 and 4). These include mutations known to predict poor prognosis, including DNMT3A and the FLT3-ITD. 20 Of particular clinical relevance, around 15% of cases of AML harbor both an NPM1c mutation and a FLT3-ITD, and these patients have a relatively poor prognosis. 20 Here, we demonstrate sensitivity to I-BET in both primary human AML cells and a mouse model that carry both an NPM1c mutation and an activating mutation of FLT3, suggesting significant clinical utility in this poor-risk subgroup.
Taken together, our data greatly inform the pathogenesis of NPM1c AML and provide compelling evidence supporting clinical trials of BET inhibitors across multiple AML subtypes. Moreover, our data identify potential predictive biomarkers of sensitivity and response to inform these studies. Novel, nontoxic, targeted therapies are desperately required in AML, and we eagerly await results of trials of BET inhibition in this aggressive hematological malignancy.
Figure 1. I-BET151 has activity in a broad range of AML. (a) A panel of human AML cell lines encompassing a variety of oncogenic drivers were tested in cell proliferation assays using I-BET151. We have previously reported some of these data 6 and report them again only to provide an overall appreciation of the sensitivity of AML cell lines to I-BET151. (b) Clonogenic assays performed in cytokine-supplemented methylcellulose in the presence of vehicle (dimethyl sulfoxide (DMSO)) or I-BET151 show a marked reduction in colony numbers (enumerated in the bar graph) after treatment with I-BET151. (c) Primary murine hematopoietic progenitors were isolated from mouse bone marrow and retrovirally transformed with MOZ-TIF2 or NUP98-HOXA9. These cells were propagated in liquid culture as well as being used in clonogenic assays. Both proliferation and clonogenic assays (enumerated in the bar graph) demonstrate a marked sensitivity to I-BET151. (d) The degree of apoptosis in OCI-AML3 was assessed using the vital dye 7-amino-actinomycin D (7-AAD) and Annexin V in cells following 72 h incubation with DMSO or I-BET. These data demonstrate a marked induction of apoptosis. (e) Cell cycle progression in OCI-AML3 was assessed 24 h after incubation with DMSO or I-BET151. These data demonstrate a marked increase in the G0/G1 fraction, which was accompanied by a concomitant decrease in the number of cells in S and G2/M phases. (f) Clonogenic assays with primary human AML cells from five different patients (Supplementary Table S2). Cells were plated in cytokine-supplemented methylcellulose in the presence of vehicle (DMSO) or I-BET151. These show a marked reduction of colony formation in the presence of I-BET151. AML patient samples demonstrate apoptosis following treatment with I-BET. A representative sample is shown (g) and the results from five separate patients are enumerated in the bar graph (h).
Figure 2. A core transcriptional program is affected by I-BET151 in AML. (a) OCI-AML3 and (b) SKM1 cells were treated for six hours with either I-BET151 or DMSO (vehicle) followed by mRNA extraction. The mRNA from three biological replicates was used to generate the gene expression data set. Volcano plots for the DMSO- versus I-BET151-treated samples, showing the adjusted significance P-value (log10) versus the fold change (log2), are shown. These plots identify a small subset of genes that demonstrate a significant change in expression (P ≤ 0.01). This is represented as either twofold downregulation (blue) or twofold upregulation (red) on treatment with I-BET151. (c) A Venn diagram of all the significantly downregulated genes shows that 26 genes are commonly downregulated in all four cell lines. Several of these genes are also downregulated in another sensitive AML cell line, KG-1 (Supplementary Figure S3). (d) Similar transcriptional changes were demonstrated in both NPM1c mutated and wild-type AML patients for the exemplar genes C-MYC, BCL2 and IRF8. (e) Total BRD4 ChIP-seq signal in units of reads per million is charted at all enhancer regions. Enhancers are ranked by increasing BRD4 ChIP-seq signal in the presence (red) or absence of I-BET151 (blue). Super-enhancers are enriched in the vertically rising ranked enhancers to the right of the graph. Treatment with I-BET151 markedly decreases the BRD4 read count at these enhancers.
Figure 4. Nuclear relocalization of NPM1c phenocopies treatment with I-BET151: treatment with LMB reduces the expression of (a) BCL2 and (b) MYC in OCI-AML3 but not KG-1. In contrast, I-BET151 reduces the expression of these genes in both cell lines. The gene expression changes shown were measured by real-time PCR (RT-PCR) on cDNA prepared from independent biological replicates. The expression level of target genes in the presence of DMSO was assigned a value of 100 following normalization to the β2-microglobulin (B2M) house-keeping gene, whose expression in all cell lines is unaltered by I-BET151 or LMB treatment. The fold change following treatment with I-BET151 or LMB for 6 h is shown (after normalization to the B2M house-keeping gene). Chromatin prepared from OCI-AML3 cells after 6 h of treatment with DMSO, LMB or I-BET151 was used in chromatin immunoprecipitation (ChIP) assays, followed by real-time PCR analysis. In comparison with DMSO, LMB reduces the chromatin binding of BRD4 at the transcriptional start site (TSS) of (c) BCL2 and (d) MYC in OCI-AML3 but not KG-1. In contrast, I-BET151 reduces BRD4 binding at both these target genes in both cell lines. Bar graphs are represented as the mean enrichment relative to input, and error bars reflect s.d. of results derived from biological triplicate experiments. (e) Density of BRD4 ChIP-seq reads in OCI-AML3 shown as heat maps centered on the TSS of annotated genes with 5 kb of flanking sequence on either side. Heat maps are shown for BRD4 binding following treatment with DMSO, I-BET151 and LMB. Red color indicates a higher density of reads. The decrease in BRD4 binding occurs primarily over genes that show a significant decrease in expression following treatment with I-BET151 (red dotted line). (f) The mean enrichment pattern for BRD4 binding was profiled across all annotated TSSs following treatment of OCI-AML3 with DMSO, I-BET151 and LMB. These data demonstrate that, similar to treatment with I-BET151, the relocation of NPM1c with LMB reduces BRD4 binding at chromatin. (g) The decrease in BRD4 binding by LMB and I-BET151 is demonstrated across the BCL2 and MYC loci.
Figure 6. I-BET151 is efficacious in vitro and in vivo in a murine model of NPM1c AML and primary human NPM1c AML samples. Six different murine NPM1c AML were tested in (a) cell proliferation and (b) clonogenic assays. These data demonstrate that I-BET151 is effective in vitro in multiple NPM1c AML cases that carry a variety of other collaborating mutations. (c) Kaplan-Meier curve demonstrating that NOD-SCID mice transplanted with 1 × 10^7 murine NPM1c leukemic cells show a significant increase in overall survival following treatment with I-BET151 at the experimental end point. Here, 24 mice were split into three equal groups and transplanted with three different NPM1c AML. Half of each group were treated with vehicle and half treated with I-BET151. Treatment was commenced on day 10 post transplantation. (d) Kaplan-Meier curve from the subgroup of mice that received the NPM1c AML containing a concurrent gain-of-function mutation in FLT3. These data show a significant increase in overall survival following treatment with I-BET151 at the experimental end point. (e) Top panel: Romanowsky stain of a peripheral blood smear from a vehicle- and an I-BET151-treated mouse, showing the morphological appearance of the increased circulating leukemic cells in the control mice. Middle panel: haematoxylin and eosin stained histological sections of the renal parenchyma and lung (lower panel) of control and treated mice. These data demonstrate overt extramedullary leukemic infiltration of the kidney and lung in the control mouse. In contrast, a relatively normal architecture is seen in the treated animal. (f) Spleen weights and (g) total circulating white cell count (WCC) from all the vehicle and treated mice at the time of necropsy. (h) Clonogenic assays with 5 × 10^3 to 1 × 10^4 primary human NPM1c AML cells from five different patients. Cells were plated in cytokine-supplemented methylcellulose in the presence of vehicle (DMSO) or I-BET151. These show a marked reduction of colony formation in the presence of I-BET151.
Figure 7. Model for the molecular mechanism of action for I-BET in NPM1c AML. Wild-type nucleophosmin 1 (NPM1) associates with a small nuclear pool of BRD4 (left panel) and exerts an inhibitory effect on its transcriptional activity. The NPM1c mutation in AML alters this equilibrium (middle panel), as a significant proportion of NPM1 is dislocated into the cytoplasm without BRD4, which is then free to drive the transcription of its target genes. I-BET displaces the binding of BRD4 from chromatin (right upper panel), leading to the repression of the target genes; relocation of NPM1c into the nucleus with LMB phenocopies I-BET (left lower panel), as it leads to a re-association of NPM1c with BRD4 off the chromatin template, also resulting in transcriptional repression. | 8,513.4 | 2013-11-13T00:00:00.000 | [
"Biology"
] |
A Method for the Measurement of Photons Number and Squeezing Parameter in a Quantum Cavity
Measurement of the photon number in a quantum cavity is very difficult, and the photon number changes after each measurement. Recently, many efforts have been devoted to nondemolition measurement methods. Haroche et al. succeeded in recognizing the existence or nonexistence of one photon in a quantum cavity. In this paper, we employ their experimental setup for a quantum nondemolition measurement and pump a coherent state into their quantum cavity. In this case, we can detect more photons in the quantum cavity by measuring a displaced Wigner function. It is also shown that the measurement of more than one photon is possible with the Haroche method by measuring just one point of the displaced Wigner function. Furthermore, if the cavity field is filled by a superposition of two number states, the average number of photons within the cavity is measurable. We show that their setup is also suitable for measuring the squeezing parameter of a squeezed number state of photons in the quantum cavity.
Introduction
The formulation of quantum mechanics in phase space was proposed by Wigner [1]. This formulation is very useful in various fields of physics including quantum mechanics [2,3], quantum optics [4][5][6], and condensed matter [7,8]. Physical concepts are extractable from the Wigner function. The Wigner function may take negative values for a quantum state; negative values or interference structure in the Wigner function are a nonclassicality indicator for quantum systems [9][10][11]. On the other hand, the Wigner function is a measurable quantity. Many authors introduced methods to measure the Wigner function for trapped ions [12], photonic number states in a quantum cavity [13][14][15], and Schrödinger cat and coherent states [16]. Bertet et al. measured a complete Wigner function for the vacuum and a single-photon state [17]. Lutterbach and Davidovich presented a method to measure the Wigner distribution function of a photonic state in a quantum cavity field [18,19]. They used an ingenious experimental setup made of one high-Q-factor and two low-Q-factor cavities.
Nogues et al. (members of the Haroche group) measured the Wigner distribution functions of electromagnetic fields in a cavity with the number states n = 0 and n = 1 at the origin of phase space [20]. The Wigner distribution function at the origin of phase space is positive for n = 0 and negative for n = 1. Therefore, the sign of the measured Wigner distribution function itself gives us the number of photons in the cavity, and its value is not important [20]. So, if there is more than one photon, it would not be possible to recognize the number of photons. In this paper we use the Haroche method to measure larger numbers of photons by measuring just one point of the displaced Wigner distribution function in a quantum cavity. We use the experimental setup for the measurement of the displaced Wigner function proposed by Deléglise et al. [16] (Haroche group). It is shown that their experimental setup is useful for the measurement of the number of photons even for n > 1. This method is also suitable for measuring the average number of a superposition of two number states. The extension of the method to an arbitrary superposition of states requires measurements at many more points of the Wigner function, which is not discussed in this paper. Furthermore, this method is also applied to measure the squeezing parameter for a squeezed number state of photons.
In the next section, the Wigner distribution function is calculated for four values of n and its plot in phase space is illustrated. It is shown that the Wigner functions at the origin of phase space have a positive value for an even and a negative value for an odd number of photons, so by measuring the Wigner function at the origin we only find whether the number of photons is even or odd. The value of the displaced Wigner function, however, depends on the photon number. We find a point in phase space at which the Wigner functions have different values for different photon numbers; therefore we determine the photon number (or the average photon number) by measuring the displaced Wigner function. In this section the Lutterbach and Davidovich method is developed for the displaced Wigner function and its experimental setup is introduced. The displaced Wigner function measured in a quantum cavity is compared with the values of the Wigner function of different number states, and the number of photons is then obtained. In Section 3 the quantum cavity field is set in a superposition of two number states. In this case the average number of photons in the cavity is measured by calculating the Wigner function of the quantum cavity field and comparing the result with the Wigner function measured by the Lutterbach and Davidovich experimental setup. Section 4 is devoted to the measurement of the squeezing parameter for squeezed number states in a quantum cavity. The displaced Wigner function is measured for the squeezed field by the developed Lutterbach and Davidovich experimental setup, and the squeezing parameter is obtained from the measured displaced Wigner function for any value of n. Finally, the last section is devoted to the conclusions.
Measuring Number of Photons in a Quantum Cavity
The Wigner function can be written as the expectation value of the displaced parity operator,
W(α) = (2/π) Tr[ρ̂ D̂(α) P̂ D̂⁻¹(α)],   (1)
where ρ̂ = |ψ⟩⟨ψ| is the density operator, P̂ = e^(iπâ†â) is the parity operator, and D̂(α) = exp(α↠− α*â) is the displacement operator in phase space [21,22]. The operation of the parity and displacement operators on |n⟩ is given by [21]
P̂|n⟩ = (−1)ⁿ |n⟩,  D̂(α)|n⟩ = |α, n⟩,   (2)
where |α, n⟩ is the displaced number state [21]. Using D̂(α)P̂D̂⁻¹(α) = D̂(2α)P̂, equation (1) can be written as [21]
W(α) = (2/π) Σₘ (−1)ᵐ ⟨m| ρ̂ |2α, m⟩,   (3)
where |m⟩ is a number state. By (2) and (3), the Wigner function of the number state ρ̂ = |n⟩⟨n| is obtained as
W(α) = (2/π) (−1)ⁿ ⟨n|2α, n⟩.   (4)
The value of ⟨m|2α, n⟩ for m < n and m ≥ n is written in terms of the associated Laguerre polynomials as
⟨m|2α, n⟩ = √(m!/n!) (−2α*)^(n−m) e^(−2|α|²) Lₘ^(n−m)(4|α|²),  m < n,
⟨m|2α, n⟩ = √(n!/m!) (2α)^(m−n) e^(−2|α|²) Lₙ^(m−n)(4|α|²),  m ≥ n,   (5)
respectively. By the above relations, for m = n,
⟨n|2α, n⟩ = e^(−2|α|²) Lₙ(x),   (6)
where x = 4|α|². Let us consider a cavity with n photons. The exact value of n is not definite, but suppose there are a few photons, for example between 0 and 3. From (4) to (6), the Wigner function is calculated for the number states as
Wₙ(α) = (2/π) (−1)ⁿ e^(−2|α|²) Lₙ(4|α|²),  n = 0, 1, 2, 3.   (7)
In Figure 1, Re(α) and Im(α) are the axes of phase space. The cross section of the Wigner functions for the number states is plotted in terms of Re(α) for Im(α) = 0. These Wigner functions take just two values at the origin: positive for the even number states and negative for the odd number states. Thus, the value of the Wigner function at the origin is not sufficient to specify the number of photons in the quantum cavity. At other points of phase space these values are not the same for different photon-number states, and one can apply this feature to specify the number of photons in the cavity.
To select a point in phase space at which the Wigner function, and consequently the number of photons, is to be measured, we note that the values of the Wigner function for each n should differ from one another as much as possible. In order to have an exact and significant measurement, the discrepancy should be much larger than the measurement errors. In Figure 2 the value of the Wigner functions versus the number of photons has been plotted for Re(α) = 0.5.
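The Fock-state Wigner function used throughout this section is easy to evaluate numerically. The sketch below assumes the closed form Wₙ(α) = (2/π)(−1)ⁿ exp(−2|α|²) Lₙ(4|α|²) reconstructed above and checks that the four values separate at Re(α) = 0.5 but not at the origin.

```python
import numpy as np
from scipy.special import eval_laguerre

def wigner_fock(n, alpha):
    """Wigner function of the number state |n> at phase-space point alpha."""
    x = 4.0 * abs(alpha) ** 2
    return (2.0 / np.pi) * (-1) ** n * np.exp(-2.0 * abs(alpha) ** 2) * eval_laguerre(n, x)

# At the origin only the sign separates even from odd n ...
print([round(wigner_fock(n, 0.0), 4) for n in range(4)])
# ... while at Re(alpha) = 0.5 the four values are mutually distinct
print([round(wigner_fock(n, 0.5), 4) for n in range(4)])
```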
The Measurement of Photons Number.
In this section we apply the Lutterbach and Davidovich method to measure the Wigner function of the electromagnetic field in the cavity, where the number of photons is one of a few possible values (e.g., n = 0, 1, 2, and 3).
As shown in Figure 3, the experimental setup is made of three cavities. The quantum field in the cavity C is prepared in an eigenstate of the number states, while the cavities R1 and R2 contain classical fields. Each cavity is a Fabry-Perot resonator made of two spherical superconducting mirrors [20].
In this experiment, a beam of Rydberg rubidium atoms passes through the cavities and interacts with their electromagnetic fields. If one of the electrons of the atoms (usually the valence electron) is excited and jumps to a level with a higher quantum number, the atom settles into a Rydberg state. Here, two Rydberg levels with quantum numbers 50 and 51 are investigated, and their states are denoted by |g⟩ and |e⟩, respectively (see Figure 4). The frequency of the field of each cavity is set close to the transition frequency between these two levels, so the rubidium atom interacts with the field as a two-level atom. The interaction of the rubidium atom with the fields of the R1 and R2 cavities is resonant. In the cavity R1, the atom in the |e⟩ state emits a photon during the interaction with the field and jumps to the |g⟩ state. The atom may absorb this photon and come back to the |e⟩ state again. This happens over and over, and the atom oscillates between the two levels. These oscillations are called Rabi oscillations, and the Rabi frequency is denoted by Ω.
In general, during the interaction, the state of the atom evolves into a superposition of |e⟩ and |g⟩ [23], with weights set by the Rabi phase Ωt acquired by the atom while passing through the cavity R1. Here we consider Ωt = π/2; then the atom leaves R1 in an equal superposition of |e⟩ and |g⟩. In the cavity C, by applying a uniform electric field, the Stark effect causes a small difference between the cavity field frequency ω and the two-level-atom transition frequency ω_eg = (E_e − E_g)/ℏ, which is called the detuning δ = ω − ω_eg. In a nonresonant interaction there is no transition between the atomic levels. Such a nonresonant interaction applies a phase shift to the atomic state; this phase shift is given by [23] the factors e^(iφ(n+1)) and e^(−iφn) for the |e⟩ and |g⟩ states, respectively, where n is the photon number of the cavity field and φ is the dispersive phase per photon. The relative phase between the cavities R1 and R2 is η [18]. The atom then makes a second resonant interaction, with phase Ωt = π/2, in the cavity R2. Thus each part of the state in (11) changes accordingly, and after the atom leaves the last cavity the total state of the system, and from it the atom-field density matrix, is written in terms of ρ_α = D̂(α) ρ̂ D̂⁻¹(α). The outgoing atom is then detected by an ionization detector in the |e⟩ or |g⟩ state. This experiment should be repeated many times, and the probabilities of detecting the atoms in the |e⟩ and |g⟩ states are P_e = N_e/(N_e + N_g) and P_g = N_g/(N_e + N_g), respectively.
Here N_e and N_g are the numbers of detected atoms in the |e⟩ and |g⟩ states, respectively. By (14), these probabilities, and also the difference between them, are obtained in terms of the Wigner function [18], where ΔP = P_e − P_g. If we set φ = −η = π/2, then ΔP is proportional to a displaced Wigner function:
W(α) = (2/π) ΔP.   (16)
The number of photons in the cavity is obtained by a comparison between the Wigner function measured at the point α, (16), and the values of the displaced Wigner functions for n = 0, 1, 2, 3. We can extend this method to measure a larger number of photons, although the accuracy of the measurement is the main limitation.
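Assuming the relation W(α) = (2/π)ΔP reconstructed above, the photon-number inference amounts to a nearest-value lookup among the candidate Fock-state Wigner values; a noiseless sketch:

```python
import numpy as np
from scipy.special import eval_laguerre

def wigner_fock(n, alpha):
    # Same closed form as in the previous sketch
    x = 4.0 * abs(alpha) ** 2
    return (2.0 / np.pi) * (-1) ** n * np.exp(-2.0 * abs(alpha) ** 2) * eval_laguerre(n, x)

def infer_photon_number(delta_p, alpha, n_max=3):
    """Map a measured population difference onto the nearest Fock state,
    assuming W(alpha) = (2/pi) * delta_p."""
    w_measured = (2.0 / np.pi) * delta_p
    return min(range(n_max + 1), key=lambda n: abs(w_measured - wigner_fock(n, alpha)))

# Noiseless example: an ideal measurement on a two-photon field at alpha = 0.5
delta_p_ideal = (np.pi / 2.0) * wigner_fock(2, 0.5)
print(infer_photon_number(delta_p_ideal, 0.5))  # -> 2
```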
Measuring the Average Number of Photons in a Quantum Cavity
In the previous experimental setup the cavity was in a number state |n⟩, and the proposed experiment reveals an integer photon number within the cavity. In this section we show that if the cavity is in a superposition state, the average photon number is also measurable by the same experimental method. This method can be applied to different superpositions of number states. For example, consider the field of the quantum cavity to be in a superposition of |0⟩ and |1⟩,
|ψ⟩ = √(1 − |ε|²) |0⟩ + ε |1⟩,   (18)
where ε = |ε| e^(iθ). The average number of photons is independent of the phase θ:
⟨n⟩ = |ε|².   (19)
In the next subsection we show that the measurement of the displaced Wigner function for a superposition of states gives us the average photon number and the phase θ. Equations (5) are utilized to determine the values of ⟨m|2α, n⟩ for m, n = 0, 1, where α = Re(α) + i Im(α). By substituting (18) into (20), the Wigner function is obtained in terms of the average photon number and the phase θ; the values of the Laguerre polynomials are calculated for x = 4|α|² [24]. By (21) and (22), the Wigner distribution function for Im(α) = 0 is obtained as
W(α) = (2/π) e^(−2α²) [1 − 2⟨n⟩ + 4⟨n⟩α² + 4α √(⟨n⟩(1 − ⟨n⟩)) cos θ].   (23)
Clearly, the measurement of the Wigner function is more difficult for large values of |α|, because increasing |α| decreases the value of the Wigner function. For |α| ≪ 1, (23) reduces to W(α) ≃ (2/π) exp(−2|α|²)(1 − 2⟨n⟩), which is linear in ⟨n⟩, as shown in Figure 5. For this example our method gives us the value of the average number of photons even for α = 0.
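At the origin the small-|α| relation inverts to ⟨n⟩ = [1 − (π/2)W(0)]/2, so a single measured point yields the mean photon number; a minimal numerical check:

```python
import numpy as np

def mean_photons_from_w0(w0):
    # Invert W(0) = (2/pi) * (1 - 2<n>) for the |0>,|1> superposition
    return 0.5 * (1.0 - (np.pi / 2.0) * w0)

print(mean_photons_from_w0(2.0 / np.pi))  # vacuum: <n> = 0
print(mean_photons_from_w0(0.0))          # equal superposition: <n> = 0.5
```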
Measuring the Average Number of Photons.
Before running the experiment, the field of the cavity C is prepared in the state (18) and then displaced by applying a coherent beam of light. The incoming atom is in the state |e⟩; it interacts resonantly with the cavities R1 and R2, with a Rabi phase Ωt = π/2, and nonresonantly with the cavity C, where it acquires a dispersive phase factor. The outgoing atom-field state, and from it the atom-field density matrix, is obtained in terms of the dispersive phase factors e^(±iφ(n+1)) and e^(∓iφn) acting on ρ_α = D̂(α) ρ̂ D̂⁻¹(α), plus nondiagonal terms of the atomic state. The probability of finding the outgoing atom in the state |e⟩ or |g⟩ is measured by an ionization detector. The difference between these probabilities, P_e − P_g, is given by (15). By (15) and (27), and for φ = −η = π/2, similar to (16), the measured Wigner function is obtained as W(α) = (2/π) ΔP. The Wigner function, obtained from an experimental measurement of ΔP, is compared with the value of the displaced Wigner function, shown in Figure 5, to obtain the average photon number ⟨n⟩. The extension of this method to a different superposition of two number states is straightforward. Figure 5 also shows the Wigner function for a superposition of the |0⟩ and |3⟩ number states, which has many applications in the construction of a GHZ state [25]. As expected from (18), the average number of photons is independent of the phase θ. Therefore, by measuring the Wigner function at the point α = 0 we can obtain the average number of photons and consequently the superposition coefficient |ε|. Then, by replacing the average number of photons in (23), the Wigner function is obtained in terms of the phase θ; measuring the Wigner function shown in Figure 6 leads us to obtain the phase θ for each superposition.
Measuring the Squeezing Parameter
4.1. Measuring the Squeezed Wigner Function. Usually, squeezed light is produced by a nonlinear interaction of light and matter [26,27]. Almeida et al. used two-photon interactions to produce squeezed states [28]. If the field of the cavity is set to be in a displaced squeezed number state |α, s, n⟩ = D̂(α)Ŝ(s)|n⟩, where Ŝ(s) = exp[(s*â² − s↠²)/2] is the squeeze operator [6], we show that it is possible to determine the squeezing parameter s with the proposed experimental setup. The states |e⟩ and |g⟩ of the rubidium atoms interact resonantly with the R1 and R2 cavities, where the Rabi phase Ωt is π/2. Similar to Section 2, due to the nonresonant interaction of the atom and field in the quantum cavity C, the state of the atom is changed by a dispersive phase factor. The density matrix of the atom-field system before any measurement is obtained as in Section 2, and the measured signal is proportional to the squeezed displaced Wigner function. The squeezed Wigner function is obtained by replacing Re(α) and −Im(α) with e^s Re(α) and e^(−s) Im(α). For a number state, the Wigner functions are plotted versus Re(α) for the 0 to 3 number states and different values of the squeezing parameter s in Figure 7.
In order to set α at the best point for the measurement of the squeezing parameter s, we partition the horizontal axis into three domains.
(1) The A-B domain corresponds to small Re(α). This domain extends from the origin to the first intersection point of the curves. Near the middle, the difference between the Wigner functions is larger than at the other points, so selecting α in the middle of this domain is more suitable for the measurement of the squeezing parameter s. In Figure 8 the Wigner functions are plotted versus s for the 0 to 3 photon numbers in the domain A-B. It is shown that for a single value of the Wigner function and a specific number state, the value of the squeezing parameter is unique. The sensitivity of the Wigner function is higher for larger s. We conclude that smaller values of Re(α) are more suitable for measuring larger squeezing parameters. (2) The B-C domain corresponds to middle values of Re(α). This domain extends from the first intersection point of the curves to the last intersection.
Usually, selection of α in this domain is not suitable at all, since for a single value of the Wigner function the value of the squeezing parameter is not unique. Figure 9 shows this nonuniqueness of the squeezing parameter for a given value of the Wigner function.
(3) The domain greater than C corresponds to larger values of Re(α). In this domain the squeezing parameter is unique for any value of the Wigner function. It should be noticed that for larger values of the squeezing parameter the Wigner functions are very small. Figure 10 illustrates the Wigner functions versus the squeezing parameter for Re(α) = 1.7 and Im(α) = 0. The Wigner function decreases for larger s, and its sensitivity is higher for smaller s; therefore, this domain is more suitable for the measurement of small s. Here the value of the Wigner function for larger s is very small, so it is not a suitable measurable value. Therefore, we choose Re(α) to be not very far from the point C.
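A numerical sketch of this trade-off, assuming the coordinate substitution Re(α) → e^s Re(α), Im(α) → e^(−s) Im(α) described above (an assumption reconstructed from the garbled source, not a verified formula from the original paper), evaluated in the small-Re(α) A-B domain used in Figure 8:

```python
import numpy as np
from scipy.special import eval_laguerre

def wigner_squeezed_fock(n, alpha, s):
    """Squeezed-number-state Wigner function under the assumed coordinate
    substitution Re(a) -> e**s * Re(a), Im(a) -> e**-s * Im(a)."""
    a = np.exp(s) * alpha.real + 1j * np.exp(-s) * alpha.imag
    x = 4.0 * abs(a) ** 2
    return (2.0 / np.pi) * (-1) ** n * np.exp(-2.0 * abs(a) ** 2) * eval_laguerre(n, x)

# At Re(alpha) = 0.02 (A-B domain), W varies monotonically with s in this
# range for each n, so a measured value picks out a unique squeezing parameter.
for s in (0.5, 1.0, 1.5):
    print(s, [round(wigner_squeezed_fock(n, 0.02 + 0j, s), 4) for n in range(4)])
```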
Conclusion
In this paper we use the measurement of the displaced Wigner function for measuring the photon number in a quantum cavity. In this method a two-level Rydberg rubidium atom is used to make a nondemolition measurement of the average number of photons in cavities. Detection of the atom states gives us P_e and P_g. It is shown that the difference between P_e and P_g gives us the displaced Wigner function, which in turn gives us the average number of photons in the cavity in a nondemolition measurement. This setup has also been used for the nondemolition measurement of the squeezing parameter of the field in a quantum cavity: one may measure the squeezing parameter by measuring the displaced squeezed Wigner function. In order to obtain either a unique s for each measured Wigner function or to increase the sensitivity of the measurement, the value of the displacement α should be set appropriately. We find that for larger values of the squeezing parameter the displacement should be small, and for smaller values of the squeezing parameter the displacement should be large.
Figure 1: The Wigner function is plotted versus Re(α) for n = 0, 1, 2, 3. W_0, W_1, W_2, and W_3 are the Wigner distribution functions for n = 0, 1, 2, 3, respectively. The Wigner distribution functions at the origin take only two values, for the even and the odd numbers of photons. Outside the origin, for example at Re(α) = 0.5, the values of the Wigner distribution functions are not the same for different numbers of photons.
Figure 3: The experimental scheme for measuring the Wigner function of the electromagnetic field in the cavity C, which leads us to measure the number of photons. The cavities R1 and R2 contain the classical electromagnetic fields.
Figure 4: The Rydberg levels of the rubidium atom, which interacts with all cavities in Figure 3.
3.1. Calculating the Wigner Function for a Superposition of Number States. Suppose the cavity field is given by the superposition state |ψ⟩ = √(1 − |ε|²) |0⟩ + ε |1⟩. By substituting the density operator ρ̂ = |ψ⟩⟨ψ| into (3), the Wigner function for the field in the cavity is obtained.
Figure 7: Plots of the Wigner function versus the real part of α in the regions between A and B, between B and C, and greater than C. The point A is at the origin of phase space, B is the first intersection point of the curves, and C is the last intersection point of the curves.
Figure 8: Plots of the Wigner function versus s for Re(α) = 0.02 and Im(α) = 0. As illustrated, these plots are more suitable for measuring the greater values of s, because the difference between the Wigner functions is larger than for small values of s.
[23], in which E₀ is equal to the electric field amplitude, and Ê = (ℏω/ε₀V)^(1/2) (↠+ â) sin(kz) [23]. In order to measure the displaced Wigner function, a coherent electromagnetic field is pumped into the cavity C. The effect of this coherent field on the number state of photons is a displacement, which is shown by |α, n⟩ = D̂(α)|n⟩. The state of the atom leaving the cavity is then analyzed as in Section 2. 4.2. Calculating the Squeezed Wigner Function. As shown in Section 2, the Wigner function is written in terms of the Laguerre polynomials for a number state. It is also possible to write the Wigner function for a squeezed number state in terms of the Laguerre polynomials. For a squeezed number state, the Wigner function can be obtained by replacing Re(α) and −Im(α) with e^s Re(α) and e^(−s) Im(α), for the Wigner function at θ = 0° [27], respectively. Therefore the Wigner function of a squeezed displaced number state is given by Wₙ(α; s) = (2/π)(−1)ⁿ e^(−2|α_s|²) Lₙ(4|α_s|²), where α_s = e^s Re(α) + i e^(−s) Im(α). | 4,943.4 | 2013-12-31T00:00:00.000 | [
"Physics"
] |
Phosphoflow cytometry to assess cytokine signaling pathways in peripheral immune cells: potential for inferring immune cell function and treatment response in patients with solid tumors
Tumor biopsy is often not available or difficult to obtain in patients with solid tumors. Investigation of the peripheral immune system allows for in-depth and dynamic profiling of patient immune response prior to and over the course of treatment and disease. Phosphoflow cytometry is a flow cytometry‒based method to detect levels of phosphorylated proteins in single cells. This method can be applied to peripheral immune cells to determine responsiveness of signaling pathways in specific immune subsets to cytokine stimulation, improving on simply defining numbers of populations of cells based on cell surface markers. Here, we review studies using phosphoflow cytometry to (a) investigate signaling pathways in cancer patients’ peripheral immune cells compared with healthy donors, (b) compare immune cell function in peripheral immune cells with the tumor microenvironment, (c) determine the effects of agents on the immune system, and (d) predict cancer patient response to treatment and outcome. In addition, we explore the use and potential of phosphoflow cytometry in preclinical cancer models. We believe this review is the first to provide a comprehensive summary of how phosphoflow cytometry can be applied in the field of cancer immunology, and demonstrates that this approach holds promise in exploring the mechanisms of response or resistance to immunotherapy both prior to and during the course of treatment. Additionally, it can help identify potential therapeutic avenues that can restore normal immune cell function and improve cancer patient outcome.
Background
Tumor tissue biopsy allows for in-depth profiling of the primary tumor immune microenvironment at the time of surgical resection. However, tissue biopsies of metastatic lesions of patients with most solid tumors are often not available or difficult to obtain, and typically provide information only from a single lesion and time point in the evolution of a tumor mass. Utilizing "blood biopsies" to study the peripheral immune response in cancer patients is less invasive and more dynamic, allowing for monitoring over the course of treatment and disease, and can complement analysis of the tumor microenvironment.
Currently, investigation of the peripheral immune response is largely focused on quantification of peripheral immune cell frequencies and expression of cell surface markers using flow or mass cytometry, whole transcriptome sequencing, epigenetic changes, T and B cell receptor sequencing, and levels of serum and plasma factors [1]. Despite the ability to quantify in-depth profiles of peripheral immune cell subsets, focusing on frequencies and cell surface markers of immune cells alone may not provide sufficient insight into their function.
Cytokines can regulate the immune response through activation of key signaling pathways in immune cells [2]. The ability of immune cells to respond to cytokines therefore influences their function, and signaling in immune cells may reflect their ability to mount an anti-cancer response. Exploration of cytokine signaling in peripheral immune cells may provide insight into immune cell dynamics and altered immune cell function, and may help to predict cancer patient response to therapy.
Common methods to visualize activation of cellular signaling pathways include immunohistochemistry (IHC) and immunofluorescence (IF), western blot, immunoprecipitation (IP), real-time quantitative PCR (RT-qPCR), RNA sequencing, and flow cytometry. Certain staining techniques like IHC and IF are often not practical for the study of peripheral immune cells. Other methods, such as western blot, IP, RT-qPCR, and bulk RNA sequencing, can be used to study signaling within peripheral immune cells, but they do not allow for analysis of signaling in single cells. Single-cell RNA sequencing can be employed to examine signaling pathways in peripheral immune cells of cancer patients. One such study found that the interferon gamma (IFN-γ) signaling pathway was upregulated in CD4+ and CD8+ T cells of gastrointestinal cancer patients who responded to anti-PD-1 treatment, compared to non-responding patients [3]. However, this method can be costly and time-consuming, and does not easily allow for quantification of signaling response to cytokines.
Phosphoflow cytometry is a flow cytometry-based method to analyze basal activation and sensitivity of immune cell signaling pathways to cytokine stimulation (Fig. 1). The protocol involves short-term stimulation of a diverse population of cells with cytokines. These cytokines attach to cell surface receptors, causing phosphorylation of intracellular signaling proteins (Fig. 1A). Following this, the cells are fixed to maintain their phosphorylation state and then stained with extracellular antibodies labeled with fluorophores to define immune subset populations. Subsequently, the cells are permeabilized to access intracellular proteins and stained with fluorescently labeled intracellular antibodies, which specifically identify phosphorylated forms of proteins of interest (Fig. 1B). Through flow cytometry analysis, the labeled cells are identified to determine phosphorylation levels in specific subsets of immune cells in both cytokine-stimulated and unstimulated groups (Fig. 1C). This method allows for detection of phosphorylated signaling proteins within single cells [4][5][6].
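For a concrete sense of the quantification step, the sketch below computes the stimulated-minus-unstimulated median fluorescent intensity difference (ΔMFI) that such analyses typically report; the subset, cytokine, and intensity values are hypothetical stand-ins, not data from any cited study.

```python
# Sketch of the quantification step, assuming per-cell phospho-protein
# intensities have already been gated into an immune subset. The subset,
# cytokine, and intensity values below are hypothetical stand-ins.
import numpy as np

def delta_mfi(stimulated: np.ndarray, unstimulated: np.ndarray) -> float:
    """Responsiveness as the difference in median fluorescent intensity
    (MFI) between cytokine-stimulated and unstimulated cells."""
    return float(np.median(stimulated) - np.median(unstimulated))

rng = np.random.default_rng(0)
unstim = rng.lognormal(mean=4.0, sigma=0.3, size=5_000)  # basal p-STAT1
stim = rng.lognormal(mean=4.6, sigma=0.3, size=5_000)    # after IFN-alpha

print(f"IFN-alpha-induced p-STAT1 dMFI: {delta_mfi(stim, unstim):.1f}")
```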
Phosphoflow cytometry was first reviewed in 2004 by Krutzik et al., who discussed several studies using this technique, as well as the methods, technical considerations, and clinical applications existent at that time [7]. Krutzik and Nolan then went on to publish a comparison of staining techniques to optimize phosphoflow cytometry [4]. A review of phosphoflow cytometry was published in 2010 by Wu et al. [8], which centered on phosphoflow cytometry methods and preclinical applications for monitoring immune cells and cancer cells. The review also covered a limited number of studies that had employed phosphoflow cytometry to evaluate human immune responses, and highlighted several technical limitations existing at that time that needed to be addressed to improve the utility of phosphoflow cytometry as a potential immunomonitoring tool. At that time, these limitations included the need for (a) better fixation and permeabilization methods/reagents coupled with brighter fluorochrome conjugates to detect phosphorylated proteins, (b) flow cytometry technology compatible with small sample sizes, slow flow rates, and increased sensitivity, (c) identification of a wider panel of monoclonal antibody clones to identify various immune subsets that are compatible with the fixatives and permeabilization buffers needed for phosphoflow applications, (d) optimization and standardization of staining methods, and (e) new strategies to interpret and organize the increasing amount and complexity of experimental data on multiple signaling pathways in a single experiment. Since 2010, many of these limitations have been addressed, and subsequent studies, covered in the current review, have extended the utilization of phosphoflow cytometry in cancer immunology.
Phosphoflow cytometry has been used to investigate immune cell signaling and function in various peripheral blood mononuclear cell (PBMC) types such as regulatory T cells (Tregs) [9], total T cells [10], and B cells [11]. The method has been applied to various physiologic states, including aging [12], immunodeficiency [13], autoimmune disease [14,15], and cancer, with the first application of this method to evaluate peripheral immune cells of patients with solid tumors published in 2004 by Lesinski et al. [16]. Methods for studying immune cells in cancer patient blood typically involve phenotyping by flow or mass cytometry and functional assays. Phenotyping provides information on the types and numbers of cells present and gives clues to their activation state, whereas functional assays inform on the ability of immune cells to perform their key functions. Functional assays of immune cells, however, often require large quantities of peripheral blood and can be time-consuming, and therefore may not always be feasible to perform in large numbers of patient samples. Phosphoflow cytometry informs on the ability of cells to respond to external cytokines by measuring activated signaling pathways; this signaling capacity may mirror function, and can be thought of as a bridge between phenotypic and functional assays. Here, we present a current and comprehensive review of the application of phosphoflow cytometry in the field of cancer immunology. This review focuses on peripheral immune cells to identify activation and sensitivity of signaling pathways, evaluate the effect of various agents on the immune system, and potentially predict treatment response and patient outcome. Furthermore, we explore the use of phosphoflow cytometry in preclinical murine cancer models to demonstrate the potential applications of this method.
Altered cytokine signaling in peripheral immune cells of cancer patients
Table 1 summarizes studies demonstrating the variances in cytokine signaling pathways in healthy donors and cancer patients, along with the immune cell types investigated. Below, we review studies reporting on these differences in cytokine signaling between healthy donors and cancer patients, and in cancer patients with varying plasma cytokine levels.
Altered interferon signaling in cancer patient peripheral immune cells
Antigen recognition, costimulation, and cytokine support are required to develop an effective T cell response [22]. IFN-γ is a cytokine primarily produced by activated natural killer (NK) and T cells and is essential for T cell function, including full activation, clonal expansion, and memory development. IFN-γ promotes T cell function through upregulation of MHC molecules [23,24] and MHC class I and II processing machinery [25][26][27], promotes T cell differentiation [28,29] and CD8+ T cell memory development [30], skews helper T cell responses toward a T helper 1 (Th1) phenotype [31], and improves motility and cytotoxicity of lymphocytes [31]. Interferon alpha (IFN-α) is a type I IFN that plays a key role in the innate immune response to viral infection and enhances cross-priming of CD8+ T cells exposed to antigen [32,33]. IFNs bind to specific receptors on the surface of cells and activate signaling pathways that lead to the transcription of interferon-stimulated genes (ISGs). The signaling pathways activated by IFN-α and IFN-γ are complex and involve multiple signaling proteins, including Janus kinases (JAKs) and signal transducers and activators of transcription (STATs), including STAT1 [34].
Reduced IFN-α and IFN-γ signaling have been observed in peripheral blood lymphocytes from cancer patients. Downregulation of ISGs was found in T and B cells from melanoma patients compared to 12 healthy controls using single cell gene expression profiling [17]. When phosphoflow cytometry was used to determine the signaling response to IFN-γ, IFN-α, and IFN-β, phosphorylated STAT1 (p-STAT1) was reduced in melanoma patient lymphocytes compared to healthy controls only upon IFN-α stimulation, with these differences observed in CD8+ and CD4+ T cells, but not B cells. Upon IFN-α stimulation, ISGs remained lower in melanoma patients compared to healthy controls. The mechanism for the reduced IFN signaling was investigated by measuring gene expression levels of components of the IFN signaling pathway, including STAT2, JAK1, JAK2, and Tyk2, which were not significantly different between healthy donor and melanoma patient lymphocytes. Therefore, the mechanism for impaired IFN signaling in T cells from melanoma patients was not determined.
IFN signaling has also been assessed in peripheral blood lymphocytes (including total, naïve and effector memory T cells, B cells, and NK cells) from patients with breast cancer, melanoma, and gastrointestinal cancer via RT-qPCR, phosphoflow cytometry, and functional assays [18]. ISGs, including STAT1, were found to be downregulated in blood lymphocytes from breast cancer patients compared to healthy controls via RT-qPCR. Reduced IFN-α and IFN-γ stimulation of p-STAT1 was identified by phosphoflow cytometry in cancer patient peripheral blood lymphocytes compared to healthy controls. IFN-α-induced p-STAT1 was impaired in B, T, and NK cells from all cancer groups. IFN-γ-induced p-STAT1 was diminished in B cells, but not T and NK cells, from all cancer groups. Defective signaling was present in memory, effector, and naïve T cells from melanoma patients.
There was no difference in impaired IFN-α and IFN-γ phosphorylation of STAT1 between early- and late-stage breast cancer patients, or between chemotherapy-treated and untreated breast cancer patient groups. Finally, the authors showed that the ability of T cells to be activated by anti-CD3/CD28 stimulation was reduced in breast cancer patients compared to healthy donors. NOS1 secretion by melanoma cells is one mechanism that has been identified as contributing to dysregulated IFN signaling through in vitro analysis [35]. The effects of soluble factors released by 12 melanoma cell lines on IFN signaling in healthy donor PBMCs were investigated using a Transwell system. Following 1 week of co-culture, PBMCs were stimulated with IFN-α and p-STAT1 induction was quantified by phosphoflow cytometry. NOS1 was identified as an inhibitory factor released by melanoma cells, which led to dysfunctional IFN-α p-STAT1 signaling in PBMCs.
Altogether, these studies using phosphoflow cytometry reveal that cancer patients, including those with melanoma and breast cancer, exhibit defects in IFN-α and IFN-γ signaling in their peripheral blood lymphocytes, as evidenced by reduced activation of p-STAT1. This impairment is observed in various immune cell types, and the exact mechanism of T cell dysfunction in patients remains unknown.
Altered interleukin 6 signaling in cancer patients' peripheral immune cells
Interleukin 6 (IL-6) is a cytokine that plays a central role in the immune system, influencing the activation and differentiation of immune cells and stimulation of acute phase responses [36]. IL-6 signaling is initiated when IL-6 binds to its receptor (IL-6R) on the surface of cells, leading to the phosphorylation and activation of JAK1 and STAT3. Activated STAT3 translocates to the nucleus and regulates the transcription of target genes. IL-6 signaling is necessary for proper T cell function; it is a costimulatory factor for T cells and promotes proliferation [37] and T cell survival [38], induces the initial production of IL-4 in CD4+ T cells, polarizing them to an effector T helper 2 (Th2) phenotype [39], aids effector T cells in overcoming Treg-mediated suppression, supports CD4+ T cell memory development [40], and skews T cell differentiation towards a Th17 phenotype and away from a Treg phenotype [41]. However, IL-6 is also a pleiotropic proinflammatory cytokine that reflects a negative prognosis in cancer patients, and prolonged signaling can lead to T cell dysfunction, thereby suppressing anti-tumor immune responses [42].
Impaired IL-6 signaling has been detected in peripheral T cells from non-small cell lung cancer (NSCLC) patients with high vs low plasma IL-6 concentrations, and in breast cancer patients compared to healthy donors. In patients with NSCLC, the investigators sought to determine differences in T cells between patients with high levels of plasma IL-6 (> median of 6.41 pg/ml) and those with low levels (< 6.41 pg/ml), as high levels of IL-6 are associated with poor prognosis in these patients [43]. They found that patients with high levels of IL-6 had a higher percentage of peripheral Tregs and higher expression of PD-1 on both CD4+ and CD8+ T cells. The responsiveness of peripheral T cells to IL-6 was assessed in patients with high and low plasma IL-6 levels using phosphoflow cytometry, measuring p-STAT1, p-STAT3, and p-ERK1/2. Patients with high IL-6 levels had lower activation of p-STAT1 in CD4+ T cells. No differences in induction of p-STAT3 or p-ERK1/2 were noted between the two groups. Furthermore, in vitro experiments were conducted using PBMCs from a healthy donor, where constant exposure to high levels of IL-6 was found to attenuate STAT signaling in T cells, leading to weaker signaling responses through STAT1, STAT3, and MAPK1/2 in CD4+ T cells.
The responsiveness of IL-6 signaling in peripheral blood T cells has also been evaluated in breast cancer patients compared to healthy donors [19]. Treatment-naïve breast cancer patients and age-matched healthy donors were assessed for responsiveness of peripheral T, B, NK, and myeloid cells to IL-6 using phosphoflow cytometry. IL-6-induced p-STAT1 and p-STAT3 were lower in naïve CD4+ T cells from breast cancer patients compared to healthy donors (Fig. 2A). Peripheral levels of IL-6 were not elevated in this cohort of breast cancer patients compared to healthy donors, and no correlation was found between the impaired IL-6 signaling response in naïve CD4+ T cells and plasma IL-6 levels. However, the IL-6 receptors IL-6Rα and gp130 were found by flow cytometry to be reduced on naïve CD4+ T cells from breast cancer patients compared to healthy donors, and expression levels correlated with signaling responsiveness to IL-6, identifying a potential mechanism of impaired signaling (Fig. 2B). Furthermore, elevated mRNA levels of ADAM19, which can cleave IL-6Rα, were found in breast cancer patient T cells.
Aberrant p-STAT3 expression has been identified in peripheral CD4+ and CD8+ T cells and related to development of hepatocellular carcinoma (HCC) [20]. In this study, basal levels of p-STAT3, measured by phosphoflow cytometry, were higher in peripheral blood CD4+ and CD8+ T cells of HCC patients compared to healthy controls. Serum cytokine levels of IL-4, IL-6, and IL-10 were increased, and IFN-γ levels were decreased, in HCC patients compared to healthy controls. IFN-γ levels and the IFN-γ/IL-4 ratio negatively correlated with p-STAT3 expression in CD4+ and CD8+ T cells in HCC patients, whereas levels of IL-4, IL-6, and IL-10 positively correlated with p-STAT3 levels.
Overall, these studies highlight the importance of balanced IL-6 signaling in T cell function and demonstrate the significance of impaired IL-6 signaling in various cancer types, including breast cancer, NSCLC, and HCC.
Altered interleukin 7 signaling in cancer patients' peripheral immune cells
Interleukin 7 (IL-7) plays a role in the development, maintenance, and function of immune cells, particularly T and B cells, and is required for T cell development and memory function [44]. IL-7 signaling is activated when IL-7 binds to its receptor (IL-7R) on the surface of cells, leading to the phosphorylation and activation of JAK1 and STAT5. Activated STAT5 translocates to the nucleus and regulates the transcription of target genes.
Impaired IL-7 signaling was found in T cells from breast cancer patients [21]. Using phosphoflow cytometry, cancer patients' PBMCs had lower constitutive p-STAT5 levels in CD4+ and CD8+ T cells compared to healthy donors. The signaling responsiveness of p-STAT5 to IL-7 stimulation was tested in CD4+ and CD8+ T cells; while all healthy donors (19/19) responded to IL-7, only 6/19 breast cancer patients responded to cytokine stimulation. In addition, IL-7Rα expression was reduced on peripheral CD4+ T cells from breast cancer patients compared to healthy donors. A functional assay testing immune effector function was carried out to corroborate and extend the phosphoflow cytometry findings. Intracellular cytokine production of IFN-γ and IL-2 upon PMA/Ionomycin or T cell receptor (TCR) crosslinking stimulation was reduced in CD4+ and CD8+ T cells of breast cancer patients (9/19) compared to healthy donors' (19/19) PBMCs, further indicating impaired function of these cells.
Overall, alterations in IFN (including both IFN-α and IFN-γ responses), IL-6, and IL-7 signaling pathways have been noted in cancer patients' peripheral immune cells compared to those of healthy donors. Further investigation is needed into the prevalence, cause, and consequences of this varied cytokine signaling in cancer patients' PBMCs.
Differences between tumor-infiltrating lymphocytes and PBMCs
The relationship between peripheral and intratumoral immune cells is complex and not fully understood. Limited numbers of studies have assessed the correlation between signaling responsiveness in peripheral immune cells and that of intratumoral immune cells. The few studies that have attempted this comparison have found both similarities and differences in signaling response in the two domains. In one study, suppressed cytokine signaling was found in intratumoral lymphoma T cells compared to peripheral T cells [45]. Using phosphoflow cytometry, reduced IL-4-, IL-10-, and IL-21-induced p-STAT6 and p-STAT3 were identified in tumor infiltrating lymphocytes (TILs) in follicular lymphoma tumors. In particular, CD4+CD45RO+CD62− follicular lymphoma TILs were unresponsive to cytokines; this was not observed in the autologous peripheral blood subset. In addition, CD4+PD-1hi follicular lymphoma TILs lost their cytokine responsiveness, whereas PD-1neg TILs had normal cytokine signaling.
Signaling differences have also been noted in peripheral and intratumoral T cells from colorectal cancer (CRC) patients [46]. Peripheral blood and paired tumor tissue from 63 CRC patients and peripheral blood from 33 healthy donors were evaluated by phosphoflow cytometry for IL-6-, IL-10-, and IL-2-induced phosphorylation of STAT1, STAT3, and STAT5, respectively, in helper T, Treg, and cytotoxic T cell subsets. The signaling response to cytokines in TILs tended to be lower compared to CRC patient and healthy donor PBMCs. IL-2-induced p-STAT5 in Tregs was the only signaling pathway with no difference between healthy donor PBMC and cancer patient PBMC and TIL groups. Certain signaling pathways were increased in cancer patient PBMCs (IL-10-induced p-STAT3 in helper T, Treg, and cytotoxic T cells) while others were decreased (IL-6-induced p-STAT1 in helper T, Treg, and cytotoxic T cells) compared to healthy control PBMCs.
Overall, these studies in both lymphoma and colorectal cancer patients have demonstrated differences in signaling responsiveness between peripheral and intratumoral immune cells, suggesting altered cytokine signaling pathways in TILs compared to their peripheral counterparts. It should be noted that in these and other studies, the differences or similarities observed could be due to the time of sampling of both tumor tissue of primary or metastatic lesions and PBMCs.
Similarities between TILs and PBMCs
As few studies have been published on the association between signaling responsiveness in TILs and PBMCs, limited data exist on similarities between the two compartments. However, similarities have been observed in the signaling responsiveness and function of a specific population of peripheral blood Tregs and intratumoral Tregs in breast cancer patients [47]. In this study, peripheral Tregs were categorized into three distinct subgroups: Treg I (CD45RAhi FoxP3lo), II (CD45RAlo FoxP3hi), and III (CD45RAlo FoxP3lo) (Fig. 3A). Treg II cells in the periphery of breast cancer patients were found to be most phenotypically like intratumoral Tregs via expression of CD25 and other markers (Fig. 3B) and T cell receptor clonal overlap (Fig. 3C). Signaling responsiveness was tested by phosphoflow cytometry in conventional T cells, total Tregs, and Treg I, II, and III cells; Treg II cells were found to be more sensitive to immunosuppressive cytokines (IL-10-induced p-STAT1 and TGF-β-induced p-SMAD2/3) and less responsive to immunostimulatory cytokines (IL-4-induced p-STAT6 and IFN-γ-induced p-STAT1) compared to the other cell types evaluated. The combined effect of the cytokine signaling response was captured with a cytokine signaling index (CSI), in which a higher CSI indicated more responsiveness to immunosuppressive cytokines and less responsiveness to immunostimulatory cytokines (Fig. 3D). There was no association between plasma levels of IL-10, TGF-β, IL-4, and IFN-γ and signaling responsiveness. Associations between signaling responsiveness and function were investigated; Treg II cells were more suppressive of responder T cells (CD4+CD45RA+CD25−) than Treg I and Treg III cells, and Treg II cells with a higher CSI correlated with increased suppressive ability (Fig. 3E).
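One plausible reading of such an index, consistent with the figure legend's description (z-scores of stimulated-minus-unstimulated MFI differences), is sketched below; the aggregation rule and the example ΔMFI values are assumptions for illustration, not the authors' exact computation.

```python
# A sketch of a cytokine signaling index (CSI): z-scored dMFI responses,
# with immunosuppressive cytokine responses counting positively and
# immunostimulatory ones negatively. The exact aggregation used in the
# original study may differ; values below are simulated.
import numpy as np

def zscore(values: np.ndarray) -> np.ndarray:
    return (values - values.mean()) / values.std(ddof=1)

def csi(suppressive: dict, stimulatory: dict) -> np.ndarray:
    """Per-patient CSI: higher values = more responsive to immunosuppressive
    cytokines and less responsive to immunostimulatory ones."""
    z_sup = np.mean([zscore(v) for v in suppressive.values()], axis=0)
    z_sti = np.mean([zscore(v) for v in stimulatory.values()], axis=0)
    return z_sup - z_sti

# Hypothetical dMFI vectors (one entry per patient) for Treg II cells
rng = np.random.default_rng(1)
sup = {"IL-10_pSTAT1": rng.normal(50, 15, 20),
       "TGF-b_pSMAD2/3": rng.normal(30, 10, 20)}
sti = {"IL-4_pSTAT6": rng.normal(80, 20, 20),
       "IFN-g_pSTAT1": rng.normal(60, 18, 20)}
print(csi(sup, sti)[:5])  # CSI for the first five patients
```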
Connections between impaired IFN-γ signaling responsiveness via p-STAT1 in peripheral monocytes and tumor associated macrophage (TAM) infiltration in paired primary tumors have been noted in patients with non-metastatic breast cancer [48]. Patients with lower IFN-γ signaling responsiveness in peripheral monocytes had lower levels of TAM infiltration. TAMs are known to be recruited to tumors through the expression of CSF1R. A negative correlation was found between the expression of CSF1R on monocytes and the responsiveness of peripheral monocytes to IFN-γ signaling, which provides a potential mechanism for the reduced infiltration of TAMs in the tumors of these patients.
In summary, while limited studies have explored the association between signaling responsiveness in TILs and PBMCs, similarities have been observed in the signaling responsiveness and function of a specific population of peripheral blood Tregs and intratumoral Tregs in breast cancer patients. Furthermore, impaired IFN-γ signaling in peripheral monocytes may contribute to reduced TAM infiltration in non-metastatic breast cancer patients, potentially mediated by the expression of CSF1R on monocytes.
High-dose IL-2
Phosphoflow cytometry has been used as a tool to determine the effect of cancer treatment agents on the immune system. High-dose IL-2 treatment is administered to cancer patients with the goal of activating immune cells to destroy cancer cells. The activation of signaling pathways in immune cells in response to IL-2 therapy has been studied using phosphoflow cytometry. In one study, PBMCs from 11 patients with metastatic melanoma and renal cell carcinoma were analyzed for p-STAT5 at baseline and 1 h after high-dose IL-2 [49]. The activation of p-STAT5 in immune cells varied between individuals but persisted in CD4+ and CD8+ T cells and CD56+ NK cells up to 3 weeks in patients who responded clinically to treatment. In another study, 17 patients with metastatic renal cell carcinoma and melanoma were treated with high-dose IL-2 and sorafenib, a kinase inhibitor, and activation of p-STAT5 in peripheral T cells was assessed [50]. Elevated p-STAT5 was noted in CD4+ and CD8+ T cells 1 h after IL-2 administration and was not altered by the addition of sorafenib. These findings highlight the significance of using phosphoflow cytometry to understand the effect and efficacy of a cancer immunotherapy agent on the immune system.
STAT3 inhibition
Targeting immune cell signaling has been explored as a way to enhance clinical response to immunotherapy. One study examined the ability of the STAT3 inhibitor WP1066 to enhance T cell anti-tumor activity through Treg inhibition in patients with melanoma brain metastases [51]. The baseline percentage of p-STAT3+ PBMCs was higher in melanoma patients compared to healthy donors. WP1066 reduced IL-6-induced p-STAT3 expression and enhanced CD3+ T cell cytotoxicity against melanoma. The response was dependent on the presence of Tregs, as WP1066 was able to inhibit FoxP3+ Treg induction through the inhibition of p-STAT3, contributing to the anti-tumor response. These findings suggest that specifically targeting relevant immune cell signaling pathways may be a promising approach for enhancing the effectiveness of cancer treatment.
Signaling in peripheral blood immune cells to predict cancer patients' response to treatment and outcome
Signaling pathways activated in PBMCs have been studied as a way to predict cancer patient response to treatment and outcome. Studies reporting associations of phosphoflow cytometry-detected cytokine signaling pathways and cancer patient response are summarized in Table 2.
Association of phosphoflow cytometry with clinical outcome in melanoma
The association of signaling responsiveness measured by phosphoflow cytometry with cancer patient outcome has been studied in patients with melanoma. Phosphoflow cytometry was used to analyze p-STAT1 expression in PBMCs from 17 healthy donors and 19 melanoma patients before and after treatment with IFN-α [16]. Healthy donors had higher basal levels of p-STAT1 in total PBMCs, NK cells, and T cells compared to melanoma patients. p-STAT1 was detected in PBMCs of two patients receiving IFN-α treatment and increased with treatment in one patient. In a later study, 14 melanoma patients were treated with high-dose IFN and assessed for signaling responsiveness through phosphoflow cytometry of IFN-α-induced p-STAT1 in peripheral blood lymphocytes [52]. Patients who responded clinically to treatment had a significant increase in IFN-α signaling capacity from days 0 to 29 in total lymphocytes and CD8+ T cells, which was not seen in non-responders (Fig. 4A). In addition, the p-STAT1 response to IFN-α was compared to disease-free and overall survival. Patients with a greater increase in signaling responsiveness to IFN-α post-treatment had a trend towards better outcome (Fig. 4B). These findings suggest that defects in IFN signaling can be overcome with high-dose IFN therapy in some patients, and that early changes in cytokine signaling detected by phosphoflow cytometry may predict response to therapy.
In another study, PBMC samples from melanoma patients were analyzed for p-STAT3 expression after surgical resection and adjuvant nivolumab treatment [53]. An increase in p-STAT3+ Tregs and CD8+ T cells at 13 weeks compared to baseline was observed in non-relapsing patients, but not in relapsing patients (Fig. 4C). In addition, conventional T cells from non-relapsing patients had upregulation of p-STAT3. Similar results were seen in unresectable stage III/IV melanoma patients treated with nivolumab. The change in p-STAT3 at week 13 compared to baseline in both Tregs and CD8+ T cells from patients with metastatic disease was also positively correlated with survival (Fig. 4D). In vitro experiments revealed that PD-1 blockade increased p-STAT3 expression in Tregs, conventional T cells, and CD8+ T cells. This induction of p-STAT3 was accompanied by a reduction in Treg suppressive capacity. Overall, phosphoflow cytometry findings in melanoma patients indicate that signaling, as measured by p-STAT1 and p-STAT3 expression, can potentially be used to predict patient response to high-dose IFN-α and nivolumab therapies, respectively, with increased signaling correlating with better treatment response.
Association of phosphoflow cytometry with outcome in breast cancer
Other studies have identified a relationship between cytokine signaling pathways in PBMCs, as measured by phosphoflow cytometry, and clinical outcome in breast cancer. Wang et al. found that impaired IL-6-induced p-STAT1 and p-STAT3 activation in peripheral T cells was observed at baseline in breast cancer patients who later relapsed compared to patients who did not relapse following standard treatment (Fig. 5A) [19]. In addition, higher levels of IL-6-induced p-STAT1 and p-STAT3 (above the median) significantly associated with improved relapse-free survival (Fig. 5B). Another study investigated the IFN-γ signaling response via p-STAT1 in peripheral monocytes from breast cancer patients who later relapsed or remained relapse-free, and compared them to healthy donors [48]. In that study, IFN-γ-induced p-STAT1 in peripheral monocytes was lower in breast cancer patients who relapsed compared to those who remained relapse-free, as well as compared to healthy donors (Fig. 5C). Additionally, IFN-γ receptor 1 (IFNγR1) levels were higher in healthy donor monocytes compared to relapsed breast cancer patients (Fig. 5D). A lower IFN-γ signaling response in peripheral blood monocytes at diagnosis correlated with worse relapse-free survival (RFS), independent of clinicopathologic features and plasma IFN-γ levels (Fig. 5E). The researchers also found a significant positive correlation between IFN-γ-induced p-STAT1 in monocytes and IL-6-induced p-STAT1/3 in CD4+ naïve T cells.
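As an illustration of the median-split survival analysis described above, the sketch below uses the lifelines package on simulated ΔMFI values, follow-up times, and relapse flags; none of these numbers come from the cited studies.

```python
# A sketch of a median-split relapse-free survival comparison using the
# lifelines package. All patient values here are simulated stand-ins.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)
n = 40
dmfi = rng.normal(100, 30, n)                 # IL-6-induced p-STAT1 dMFI
time = rng.exponential(60, n)                 # months of follow-up
relapse = rng.integers(0, 2, n).astype(bool)  # relapse observed?

high = dmfi >= np.median(dmfi)

kmf_high = KaplanMeierFitter()
kmf_high.fit(time[high], relapse[high], label="dMFI >= median")
kmf_low = KaplanMeierFitter()
kmf_low.fit(time[~high], relapse[~high], label="dMFI < median")

res = logrank_test(time[high], time[~high],
                   event_observed_A=relapse[high],
                   event_observed_B=relapse[~high])
print(f"log-rank p = {res.p_value:.3f}")
```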
In another study in breast cancer patients, signaling responsiveness was investigated in peripheral Treg II cells, which were identified as being most phenotypically similar to intratumoral Tregs [47]. Breast cancer patients with increased signaling responsiveness to immunosuppressive cytokines (IL-10 and TGF-β) in Treg II cells had worse RFS, whereas patients with increased signaling responsiveness to immunostimulatory cytokines (IL-4 and IFN-γ) in Treg II cells had better RFS. Calculation of a cytokine signaling index, combining signaling responsiveness to all four cytokines, determined that patients whose peripheral Treg II cells had a higher cytokine signaling index, indicative of increased sensitivity to immunosuppressive cytokines and decreased sensitivity to immunostimulatory cytokines, had worse RFS.
To summarize, phosphoflow cytometry studies in breast cancer patients have revealed that impaired activation of cytokine signaling pathways, such as IL-6-induced p-STAT1 and p-STAT3 in peripheral T cells, and lower IFN-γ-induced p-STAT1 in peripheral monocytes, are associated with increased risk of relapse. In addition, increased signaling responsiveness in Treg II cells to immunosuppressive cytokines and decreased signaling responsiveness to immunostimulatory cytokines are linked to worse relapse-free survival. Further investigation of the association of phosphoflow cytometry signaling in peripheral immune cells with cancer patient outcome is warranted.
Phosphoflow cytometry in preclinical mouse models
There have been several studies that have used phosphoflow cytometry to investigate signaling pathways in peripheral blood of mice, albeit fewer than in humans. These include investigation of T cell receptor signaling, phosphorylated signaling proteins in B cell subpopulations, the activation of signaling pathways in peripheral blood cells of mice with various diseases, and a tumor-targeting vaccine. Phosphoflow cytometry was used to analyze the differences in T cell receptor signaling between Tregs and CD4+ T cells [9]. The study found that Treg and non-Treg CD4+ T cells displayed differences in TCR-dependent signaling responses following in vitro or in vivo stimulation. The researchers used phosphoflow cytometry to profile the kinetics and extent of TCR signaling (ZAP-70 and PKC-θ phosphorylation) in Tregs and non-Tregs. The experiments were performed using cells from 6-8-week-old C57BL/6J mice or OT-II TCR transgenic mice, with cells harvested from both lymph nodes and spleens. Another study developed a phosphoflow protocol to evaluate the status of phosphorylated signaling proteins in murine and human B cell subpopulations [54].
Phosphoflow cytometry has also been employed in mouse models of various disease states, including graft-versus-host disease [55], aortic valve stenosis [56], leukemia [57], and poxviral infection [58]. In a mouse model of aortic valve stenosis, researchers assayed SMAD2/3 phosphorylation in circulating leukocytes and platelet-leukocyte aggregates and found that p-SMAD2/3 staining was more intense in leukocytes of hypercholesterolemic mice that developed aortic valve stenosis, suggesting increased circulating active TGF-β1 levels [56]. In the study on poxviral infection, researchers found that the STAT1 and STAT3 pathways were rapidly activated in C57BL/6 mice resistant to poxviral infection, whereas in susceptible BALB/c mice, IL-6-dependent STAT3 activation did not occur [58]. Phosphoflow cytometry has been investigated in mouse models of blood-based cancers [59,60], but it has not yet been studied in mouse models of solid tumors. It has, however, been applied to the investigation of the activation of MUC-1-specific T cells following vaccination with a MUC-1-targeted vaccine in non-tumor-bearing mice [61]. The investigators found that MUC1-specific T cells from MUC1-transgenic mice 3 h after a dendritic cell vaccination showed higher levels of ZAP-70, ERK1/2, and PKC-θ phosphorylation compared to non-transgenic mice. MUC-1 is the target of various cancer vaccines currently in development [62].
Despite the small number of studies, phosphoflow cytometry has shown promise in the examination of immune responses and signaling pathways in mouse models. This includes the evaluation of the effects of anti-cancer therapeutics on signaling in immune cells and the investigation of defective immune cell signaling in various disease states. However, since there have been no studies utilizing phosphoflow cytometry in preclinical solid tumor mouse models, there is a need for further research in this area. The use of this method has the potential to improve our understanding of cancer biology and to evaluate the effectiveness and mechanism of action of potential treatments, including immunotherapies.
Conclusions
Phosphoflow cytometry is a useful technique for studying the activation of signaling pathways and the sensitivity of these pathways to cytokine stimulation in peripheral immune cells. Herein, to our knowledge, is the first review of the application of phosphoflow cytometry in the field of cancer immunology. The use of phosphoflow cytometry in peripheral blood immune cells allows for dynamic and non-invasive monitoring of the activation of signaling pathways in immune cells at baseline and changes over the course of disease and treatment, which may not be as easily accessible with other methods. Many of the limitations that existed when phosphoflow cytometry was initially developed, restricting its potential use as a widespread immunomonitoring tool, have since been addressed. Further studies, including Good Laboratory Practice (GLP) applications, will be required to apply phosphoflow cytometry as a method in clinical practice. In addition, more work is needed to optimize the type of cytokine stimulation and the flow cytometry panels needed to measure the key signaling pathways relevant to the physiologic state/indication being evaluated. Cytokines and the signaling pathways they activate are essential for regulation of the immune response. Therefore, this method goes beyond providing basic information on immune cell numbers and/or phenotype and may inform on the function of immune cells by identifying activated and defective signaling pathways.
Despite the potential value of phosphoflow cytometry for understanding immune responses in cancer patients, its use in this context has been relatively limited. In this review, we show that phosphoflow cytometry has been applied to study a number of important cytokine signaling pathways in peripheral blood of cancer patients, including signaling induced by interferon, IL-6, and IL-7. However, there are additional key signaling pathways in cancer immunology that have not been explored with phosphoflow cytometry, including signaling related to immune checkpoints such as PD-1, PD-L1, and CTLA-4, NF-κB, Wnt/β-catenin, PI3K-Akt-mTOR, STING, the adenosine pathway, and additional cytokines activating the JAK/STAT signaling cascade such as IL-12 and IL-15 [63][64][65][66][67][68][69]. Application of phosphoflow cytometry to these additional pathways represents an opportunity for further research, as the mechanisms underlying response to immunotherapy in cancer patients are often poorly understood. By investigating numerous signaling pathways in peripheral immune cells, it may be possible to identify therapeutic targets that could restore normal immune cell function and improve treatment outcomes, and to help in the dynamic assessment of patient response to therapy prior to and during the therapeutic regimen.
Fig. 1
Fig. 1 Phosphoflow cytometry method. A heterogeneous population of cells is briefly stimulated with cytokines, which bind to receptors and phosphorylate intracellular signaling proteins A. Cells are then fixed to preserve their phosphorylation status and stained with fluorescently labeled extracellular antibodies to define cell populations. Next, cells are permeabilized to allow access to intracellular proteins and stained with fluorescently labeled intracellular antibodies that specifically recognize phosphorylated forms of proteins of interest B. Labeled cells are then detected by flow cytometric analysis to determine phosphorylation levels in specific immune cell subsets of cytokine-stimulated and unstimulated groups C. MFI, mean fluorescent intensity. p-STAT, phosphorylated signal transducer and activator of transcription
Fig. 2
Fig. 2 IL-6 signaling responsiveness in breast cancer patients' CD4+ naïve T cells compared to healthy donors measured by phosphoflow cytometry. Median fluorescent intensity (MFI) of p-STAT1 and p-STAT3 between CD4+ naïve T cells stimulated with IL-6 minus unstimulated in healthy donors and breast cancer patients A. Flow cytometry expression of IL-6Rα and GP130 on CD4+ naïve T cells in healthy donors and breast cancer patients, and the correlation with IL-6 signaling responsiveness of p-STAT1 and p-STAT3 B. Modified from Wang et al., Cancer Res. 2017 [19]. Copyright© 2017. American Association for Cancer Research
Fig. 3
Fig. 3 Association of signaling responsiveness in regulatory T cell (Treg) II cells with intratumoral Tregs and suppressive function. Peripheral Treg populations were defined based on differential expression of CD45RA and FoxP3 and compared to intratumoral Tregs A. Peripheral Treg populations were compared to intratumoral Tregs for expression of CD25 and other markers (not shown) B. T cell receptor sequencing was performed and the proportion of overlapping clones among the top 50 clones was compared between peripheral Treg populations and matched intratumoral Tregs C. A cytokine signaling index (CSI) was calculated from the z-score of the difference in median fluorescent intensities of cytokine-stimulated minus unstimulated groups D. Suppression of responder T cells (CD4+CD45RA+CD25−) by Treg I, II, and III cells was compared, and Treg II cell suppression was correlated with the Treg II CSI E. Modified from Wang et al., Nat Immunol. 2019 [47]. Copyright© 2019, Springer Nature
Fig. 4
Fig. 4 Associations of phosphoflow cytometry signaling with outcome in melanoma patients. In panels A and B, 14 melanoma patients were treated with high-dose IFN-α (HDI) and p-STAT1 signaling responsiveness to IFN-α was analyzed both prior to and after treatment. p-STAT1 induction at baseline and after 4 weeks of treatment was analyzed in total lymphocytes and CD8+ T cells from responding (R) and non-responding (NR) patients A. The relationship between IFN-α-induced p-STAT1 activation and clinical outcome was assessed by Kaplan-Meier analysis by generating a ratio (post/pre) of p-STAT1 fold induction in lymphocytes before and after treatment, and comparing patients above and below the median B. In a separate study, in panels C and D, melanoma patients were analyzed for p-STAT3 expression after surgical resection and adjuvant nivolumab. The geometric mean fluorescent intensity (gMFI) of p-STAT3 from baseline to week 13 in regulatory T cells (Tregs) and CD8+ T cells was compared between patients with no evidence of disease (NED) and relapse C. The correlation between overall survival and percent change in p-STAT3 at week 13 compared to baseline was assessed in Tregs, conventional T cells (Tcon), and CD8+ T cells D. Panels A and B are modified from Simons et al., J. Transl Med. 2011 [52]. Copyright© 2011, Simons et al., Licensee BioMed Central Ltd. Panels C and D are modified from Woods et al., Clin Cancer Res. 2018 [53]. Copyright© 2018, American Association for Cancer Research
Fig. 5

Fig. 5 Associations of phosphoflow cytometry signaling with outcome in breast cancer patients. In panels A-B, IL-6 signaling response was assessed in peripheral blood CD4+ naïve T cells prior to any therapy. The delta median fluorescent intensity (ΔMFI) of the p-STAT1 and p-STAT3 response to IL-6 was compared between non-relapsed and relapsed patients A. The signaling responsiveness (ΔMFI) above and below the median was also compared by Kaplan-Meier analysis to assess relapse-free survival B. In panels C-E, IFN-γ signaling response was assessed in peripheral monocytes from breast cancer patients prior to therapy. IFN-γ p-STAT1 signaling responsiveness in peripheral blood monocytes of breast cancer patients associates with relapse and relapse-free survival. MFI of p-STAT1 in monocytes stimulated with IFN-γ minus unstimulated in relapsed and relapse-free breast cancer patients compared to healthy donors (HD) C. MFI of IFNγR1 on monocytes in relapsed and relapse-free breast cancer patients compared to healthy donors D. Kaplan-Meier survival curves of relapse-free survival (RFS) between patients with high (≥ 25% quantile) vs low (< 25% quantile) IFN-γ responsiveness in both a discovery and validation cohort E. Panels A and B are modified from Wang et al., Cancer Res. 2017 [19]. Copyright© 2017. American Association for Cancer Research. Panels C-E are modified from Wang et al., EBioMedicine 2020 [48]. © Wang et al. Published by Elsevier B.V.

Table 1

Differences in basal- or cytokine-stimulated phospho-protein levels in cancer patients' peripheral immune cells compared to healthy donors

Table 2

Cytokine signaling pathways associated with survival outcome following treatment in cancer patients
"Biology",
"Medicine"
] |
A forensic-driven data model for automatic vehicles events analysis
Digital vision technologies have expanded rapidly into all areas of life to watch, play, control, or track events. Security checkpoints have also benefited from these technologies by integrating dedicated cameras at studied locations. The aim is to manage the vehicles accessing the security inspection point and to detect any suspicious ones. However, the volume of gathered data increases continuously each day, making its analysis hard and time-consuming. This paper uses semantic-based techniques to model the data flow between the cameras, checkpoints, and administrators. It uses ontologies to deal with the increased data size and its automatic analysis. It considers forensics requirements throughout the creation of the ontology modules to ensure the records' admissibility for any possible investigation purposes. Ontology-based data modeling will help in the automatic search and correlation of events to track suspicious vehicles efficiently.
INTRODUCTION
Recent years have witnessed an increasing number of violent events occurring all around the world (Organization, 2014). Security services work continuously and at utmost readiness to anticipate eventual crimes by identifying suspicious vehicles used for illegal purposes. In most cases, vehicles are used by criminals to spy, hit, transport, and escape from justice. Therefore, security forces improve the security provisions at checkpoints and on sensitive roads using cameras. Enhancing the efficiency of the cameras monitoring inspection points in identifying suspicious targets depends on the suitable models and approaches used for image recognition and classification. It is of paramount importance to connect all checkpoints and share relevant information through enhanced communication techniques to improve recognition performance. Thus, the database of each checkpoint will benefit from the cumulative extracted data and analysis of the others. For instance, the collected critical details of a vehicle that escaped from an inspection point will be immediately communicated to the others, enabling fast processing of the detected images and, therefore, taking the required measures against that vehicle before it escapes the next checkpoint.
Rapid data processing and sharing are the main factors for effective checkpoint traffic control and management while preserving the collected data's admissibility in a forensically sound manner. This research is motivated by the current and increased need of Makkah's security services to effectively handle the big data received from the various inspection points. Thus, this paper presents the checkpoint system specification and the different challenges related to the required devices, processing speed, and data size before creating the model.
The checkpoint system consists of several static cameras installed in several checkpoints containing multiple roads. Each vehicle is scanned by two digital cameras and one 3D camera. The checkpoint's cameras are of type Grasshopper 2.0 MP Color FireWire (https://www.flir.com/support/products/firewire-cameras). They provide a 1624 × 1224 resolution image at a 30 fps frame rate. The camera has a 14-bit analog-to-digital converter (ADC) and a 32 MB image buffer. The power consumption is 3.5 W at 12 V. Each car requires 3 s to be fully inspected, producing an image size of about 2 MB. The images should be carefully stored for possible security and forensics needs after analysis and possible detection of suspicious vehicles. Since critical information may be identified from the received data, checkpoints require reliable communication to share information and improve detection performance. It is mandatory to consider the different source camera models during camera selection to enhance the preservation of forensics requirements (Amerini et al., 2021).
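To make the storage implications of these specifications concrete, a minimal back-of-envelope sketch follows; the vehicles-per-day figure is a hypothetical assumption, not a measured traffic count.

```python
# A back-of-envelope sketch of the data load implied by the stated camera
# specifications. The vehicles-per-day figure is a hypothetical assumption.
SECONDS_PER_VEHICLE = 3        # full inspection time per car
MB_PER_VEHICLE = 2             # image data produced per inspected car

vehicles_per_day = 20_000      # assumed peak-season traffic at one checkpoint
daily_volume_gb = vehicles_per_day * MB_PER_VEHICLE / 1024
max_vehicles_per_lane = 24 * 3600 // SECONDS_PER_VEHICLE

print(f"Daily image volume: {daily_volume_gb:.1f} GB")
print(f"Theoretical max vehicles per lane per day: {max_vehicles_per_lane:,}")
```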
The above case study presents several challenges related to camera management, fast real-time vehicle recognition, information sharing between checkpoints, and rapid analysis of events from multiple distant locations while considering forensics constraints. The main challenges are as follows:
• Data processing scalability. This concerns the checkpoint control system's ability to respond to any considerable number of vehicles accessing the checkpoints, in terms of data processing and storage scalability. For instance, during the Hajj season in Makkah, the roads witness an exponential increase in vehicle numbers. The checkpoints must process them reliably and detect any violation.
• Information and network security. Its importance increases with the sensitivity of the data transferred between the involved parties. Information such as car plate number, color, type, checkpoint passing time, and possibly the driver's general description must not be altered or modified during transmission.
• Information sharing. It is about the fast, secure, and forensically sound exchange of information between different checkpoints, which will reduce the processing time and improve the security service agent's readiness.
• Forensically sound data processing. It aims to preserve the gathered data's admissibility for any further investigation need. The system must adhere to forensics rules without decreasing its performance.
To deal with the above challenges, this study proposes a two-fold approach that consists of Cluster-based checkpoint design and forensically sound data modeling, detailed in the following sections.
CLUSTER-BASED CHECKPOINT MULTI-CAMERA MANAGEMENT FRAMEWORK
The design of the Cluster-based framework is driven by the required specifications mentioned in section II. Figure 1 presents the different architectural ingredients. The framework takes advantage of the distributed architecture as follows:
• The replication and independence of databases, processing units, and network devices increase transaction reliability, availability, and fault tolerance.
• The enhanced modularity of distributed architectures enables easy modification of the distributed database without affecting other system modules or causing scalability issues.
• The distribution of the analysis load over several processing units improves the ability to handle and process big data.
Framework ingredients
The proposed checkpoint vehicle-inspection design encompasses two layers:
Management layer
The management layer deals with system management issues. It adopts the ISO telecommunications management model FCAPS (Fault, Configuration, Accounting, Performance, and Security) to organize the network management functions into five categories covering all telecommunications issues (Goyal, Mikkilineni & Ganti, 2009). The FCAPS model is widely used in big organizations to manage any networked system (Kwiecień & Opielka, 2012). This paper adopts best practices in the literature (Widjajarto, Lubis & Syahputra, 2019) to implement and realize the framework.
Application layer
The application layer includes application specifications and requirements. According to the case study specification, the application requirements are divided into four phases. Each phase has specific tasks distributed in the architecture according to each task's requirements. The phases are:
• 3D scans reconstruction phase: this phase reconstructs the 3D scans based on the images and shapes provided by the camera and 3D scan devices, respectively. The scanning process takes 3 s and delivers 2 MB for each vehicle. After capturing the 2D image scans, the data are transmitted to the onsite Front-End layer processing units (GPU-based server) to reconstruct the 3D shapes in real-time using monocular methods (Zollhöfer et al., 2018; Russell, Yu & Agapito, 2014). The system must adhere to the best practices in handling data to maintain their admissibility once required in a possible investigation case. Section 4 presents the proposed data model to ensure the integrity of the data chain of custody.
• Car classification/recognition phase: the application classifies each car using accurate classification algorithms executed by the GPU to determine the car class based on the different classification features.
• Car recognition/classification training database: the car recognition database includes images of car models or any photograph of a suspicious car. It has a non-deterministic size and should be shared between all checkpoints. Thus, this study adopts a central Cloud recognition database shared between all checkpoints (clusters) to deal with scalability requirements. A Cloud server manages the training database for any updating or synchronization requirements.
• Application phase: deals with the management of the different architectural components.
Processing architecture
The proposed framework encompasses two processing layers based on the detection and analysis time:
• The Front-End layer is represented by each Cluster (checkpoint), detailed in Fig. 2. It deals with real-time 3D scan reconstruction, car classification, recognition, and onsite data storage. The classification algorithms must also consider real-time requirements to enable fast suspicious car detection. Mainly, the used algorithms are the finite gamma mixture-based method (Al-Osaimi & Bouguila, 2015), beta-Liouville and Dirichlet distribution-based methods, and the Pitman-Yor process mixtures of Beta-Liouville distributions method.
• Back-End layer is a cloud-based solution responsible for storing and maintaining the recognition/classification training database. The back-end layer includes the management unit, which controls all connected clusters and receives their status to intervene in any failures.
FORENSICALLY SOUND SEMANTIC DATA MODELING SCHEMA
To enhance data integration, offer a reliable and extensible content description, and fit the need for automated vehicle tracking and analysis, the paper adopts Semantic Web technologies; namely, it uses an ontology. This technology improves data sharing between the different checkpoints and integrates heterogeneous resources of various hardware and software technologies. The ontology also allows the deduction of new details, such as detecting contradictions or validating hypotheses, through the use of reasoning engines such as Pellet (https://www.w3.org/2001/sw/wiki/Pellet). This feature is used to automatically track and identify the suspicious vehicles deduced by the reasoning engine.
The key feature of the proposed data model resides in its easy integration with existing tracking systems through the use and extension of current standards. Using standards is required to increase its interoperability and integration within already implemented vehicle tracking systems. Thus, this paper creates a semantic data model by reusing several existing ontology standards and selecting only the suitable ontology modules related to the research topic. Then, it adds and completes the different required classes and relations specific to the case study based on scoping and tailoring techniques. During the scoping process, this effort distinguishes three relevant standards and research efforts to be adopted. Mainly, they include:
• The Incident Object Description Exchange Format (IODEF) (Danyliw & Takeshi, 2016).
Then, the paper proceeds to the tailoring process to select only the associated, relevant items and to propose missing elements and modules as required. Thus, every established ontology module is either newly proposed or extracted from existing efforts and extended by new attributes. This process aims to ensure the integration of forensics requirements into the ontology without negatively impacting the performance and reliability of the vehicle tracking system. Table 1 depicts the different selected standards and research efforts and their adaptation through the tailoring process. Also, it shows the newly proposed modules and extensions to existing standards.
The proposed ontology encompasses several components covering all activities and used technologies related to vehicle control inspection points. Also, it considers forensics requirements throughout the design of the semantic-based data model. Figure 3 presents the overall proposed ontology named Forensics-aware Checkpoint's Vehicle Recognition Ontology (FCVRO).
FCVRO modules overview
The following includes the details of the main ontology modules, showing their goals and advantages. Several new ontology modules depicted in Table 1 are not described since their purpose can be easily estimated from their naming.
Vehicle modeling
The vehicle module (see Table 2) encompasses all attributes that distinguish a vehicle from another. It determines whether the car is self-driving (auto) or not (nauto). Also, it determines the legal status of the vehicle. This module communicates with the Contact module, since each vehicle has a driver and possibly passengers, and with the Checkpoint module, where the vehicle is located and subject to the recognition process.
This paper treats only car-type vehicles since, currently, only algorithms dealing with car processing are implemented.
Contact modeling
The purpose of the Contact module is to model all human beings in contact with the recognition process. Contact may be the police agents, system administrators, vehicle drivers, and any possible person that may impact the final recognition process. The contact module (see Table 3) is connected directly with the vehicle (in this case, the contact may be the driver or passengers), the fraud (the person(s) that commits the scam), and the event (the person(s) involved in the events) classes. Also, the contact module includes all persons working in the checkpoints and those maintaining the recognition system.
Incident & event modeling
This module includes the Incident and Event classes (see Table 4). Each incident may have one or several events. An event is the smallest complete task performed by an active party. The incident describes all actions and events within checkpoints and/or within the intermediate systems and tools. Thus, an incident may cover several events generated from different checkpoints and/or intermediaries. An incident may be internal (caused by an internal contact) or external (caused by an external contact such as a new suspected vehicle). Each incident is evaluated via the Assessment class to determine its monetary, time, and technical losses and its severity impact. Since forensics requirements differ from one country to another, the Assessment class could be adapted and extended by new forensics metrics using metrics elicitation frameworks that reflect the country's regulations (Akremi, 2021a). The incident module has connections with the Contact, Fraud, Record, and Technologies modules.
Technologies modeling
The technologies module (see Table 5) covers all classes and attributes related to the system's hardware and software tools. Mainly, it distinguishes three categories: hardware (encompasses the different hardware elements such as cameras, routers, computers, etc.), software (includes all software requirements such as recognition and transmission methods and soft security tools), and network (defines the various used network interfaces and the different connections, as well as the used technologies). The technology module components are secured using the Security module provisions. It is essential to use trusted software that processes the gathered data in an integrity-aware manner to avoid admissibility issues. A good solution is their validation using code review tools (Akremi, 2021b) before deployment.
Records and security modeling
The record module (see Table 6) mainly deals with the forensically sound processing and preservation of generated files and data for further use. Within the record module, the forensics requirements are incorporated into the ontology design and ensured following (Casey, 2018) and related standards: a record is admissible when it preserves its authenticity through the preservation of the record's identity and integrity (Akremi et al., 2020; Duranti & Rogers, 2012); privacy, by avoiding any kind of private information breach during data seizure (Akremi et al., 2020); comprehensiveness, by ensuring that no information is missing from the final report (Grobler, Louwrens & von Solms, 2010); relevance, by presenting only evidence pertinent to the case; and not being hearsay, since ''electronic documents generated and made in the usual and ordinary course of business are not hearsay'' (Duranti & Rogers, 2012). The proposed ontology models the admissibility requirements of the record via both the Record and Security modules.
The record module is connected to the contact handling or generating the record, the hardware that may create, handle, or store the documents, the security provisions that guarantee the admissibility and security of records, and the communication network responsible for their safe and reliable transmission.
The Security module's purpose is to provide all required security provisions for the other modules (see Table 6), including software or hardware tools. Aside from protecting the system from malicious actors and hacker penetration, this module provides the means for forensically sound record processing and preservation, such as integrity techniques (MD5, SHA-1, etc.).
Fraud modeling
The fraud module (see Table 7) describes the various possible frauds the system aims to identify. It is connected only with the Event class, to determine the fraud type of each event.
Automatic fraudulent vehicle detection using the ontology
The proposed ontology's main objective is to validate or discard, automatically and in a forensically sound manner, a hypothesis about vehicle suspicion based on real-time verification of already stored information against currently captured data. The idea is to infer SWRL rules that define facts and possible recognition patterns over the ontology and to verify the proposed hypothesis's conformity. This paper describes three vehicle fraud scenarios and shows how the system automatically identifies them when the fraudulent vehicle passes through checkpoints, immediately alerting the authorities reliably and securely. These scenarios are: 1. Vehicles that may be stolen. 2. Vehicles that may have a fraudulent license plate. 3. Vehicles that may be involved in a hit-and-run accident. The paper presents the definition and implementation of the different SWRL rules enabling the validation of each scenario's hypothesis through their inference over the proposed ontology. It then uses the PELLET reasoning engine (Sirin et al., 2007) to extract and fire rules over the proposed ontology; the reasoner determines whether the defined rules are satisfied and notifies the administrator of any deduced event. Table 8 describes the variables used by the OWL rules.
The object properties used within the OWL rules are listed below; a hedged declaration sketch in OWL (Turtle syntax) follows the list.
• isStolen - holds when a vehicle is identified/deduced as stolen.
• hasFraudLicensePlate-holds when a vehicle is identified/deduced as having an illegal license plate.
• isHitRun-holds when a vehicle is identified as running away after a hitting accident.
• locatedIn - holds when the vehicle is identified at a checkpoint.
• relatedTo-holds when two incidents have relation to the same activity.
• happenedBefore-holds when an event happens before another event.
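These properties could be declared in OWL's Turtle syntax as in the sketch below. The namespace prefix and the domain/range classes are illustrative assumptions, not the paper's published vocabulary; marking relatedTo symmetric and happenedBefore transitive is likewise a plausible modeling choice, not confirmed by the text.

@prefix fcvro: <http://example.org/fcvro#> .   # hypothetical namespace
@prefix owl:   <http://www.w3.org/2002/07/owl#> .
@prefix rdfs:  <http://www.w3.org/2000/01/rdf-schema#> .

fcvro:locatedIn      a owl:ObjectProperty ;
    rdfs:domain fcvro:Vehicle ;
    rdfs:range  fcvro:Checkpoint .
fcvro:relatedTo      a owl:ObjectProperty , owl:SymmetricProperty ;
    rdfs:domain fcvro:Incident ;
    rdfs:range  fcvro:Incident .
fcvro:happenedBefore a owl:ObjectProperty , owl:TransitiveProperty ;
    rdfs:domain fcvro:Event ;
    rdfs:range  fcvro:Event .
# isStolen, hasFraudLicensePlate, and isHitRun would be declared analogously.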
Scenario 1 -vehicles that may be stolen
Identifying stolen vehicles is a daily police mission, since this type of fraud is widely committed. Based on already collected or calculated data, the objective is to identify stolen cars passing the checkpoints. For instance, the SWRL rule in Listing 1 identifies any non-self-driving vehicle with a blue color, a 2021 model, a sedan body type, and a ''5694 SA 23'' license plate number. These data are provided to the system via a graphical interface that enables vehicle searching and editing. Figure 4 shows the knowledge inferred after executing SWRL rule 1: new information is generated about the car owner's name, the checkpoint where the car was last detected, and the fraud number associated with the car theft.
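A minimal sketch of such a rule in human-readable SWRL syntax is given below. All property names are illustrative assumptions rather than the exact vocabulary of Listing 1 and Table 8, and isStolen is simplified here to a Boolean datatype property.

# Hypothetical sketch of the stolen-vehicle rule (not the verbatim Listing 1).
Vehicle(?v) ^ isSelfDriving(?v, false) ^
hasColor(?v, "blue") ^ hasModelYear(?v, 2021) ^
hasBodyType(?v, "sedan") ^ hasLicensePlate(?v, "5694 SA 23") ^
locatedIn(?v, ?cp)
  -> isStolen(?v, true)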
Scenario 2 - vehicles that may have a fraudulent license plate
As with the stolen vehicle rule, identifying cars with fraudulent license plates is based on the same data. The system compares these data with the data extracted from passing cars and notifies the administrator about wanted vehicles. The rule in Listing 2 identifies a non-self-driving car with a fraudulent license plate number.
Scenario 3 -vehicles that may be involved in a hit and run accident
This scenario (see Listing 3) models cars involved in a hit accident that then flee the accident scene. In this case, the vehicle is searched for based on information provided by witnesses and by already installed control cameras. The search is focused on the checkpoints that the driver may pass through. Each time, the system compares the detected time of cars passing the checkpoint with the accident reporting time received from other checkpoints, and it processes only vehicles that arrived at or after the determined accident time. Finally, the system notifies the police agents about the detection of any suspected car. The rule in Listing 3 uses the following variables: Local, the accident coordinates; Color, the color of the vehicle; BodyType, the vehicle body type; Model, the vehicle manufacturing year; LicensePlate, the full or partial vehicle license plate; Gender, the driver's gender; and HitTime, the accident time.
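A hedged sketch of such a rule, using illustrative property names and the standard SWRL comparison built-in, might read as follows; it is not the verbatim Listing 3.

# Hypothetical sketch of the hit-and-run rule.
Vehicle(?v) ^ hasColor(?v, ?Color) ^ hasBodyType(?v, ?BodyType) ^
hasLicensePlate(?v, ?LicensePlate) ^ locatedIn(?v, ?cp) ^
detectionTime(?v, ?t) ^ Accident(?acc) ^ reportedAt(?acc, ?HitTime) ^
swrlb:greaterThanOrEqual(?t, ?HitTime)
  -> isHitRun(?v, true)

Only vehicles detected at or after the reported accident time can match, mirroring the time filter described above.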
Events tracking and research
The events research and tracking module aims to help traffic control system users, such as investigators and security agents, identify information quickly and deduce new relations between data through task automation. Also, inferring over the semantic data representation helps detect low-level tracking algorithm errors, which enhances the overall system performance (Greco et al., 2016).
Events tracking and ordering
The following rules aim to link the events coming from the same hardware resources at given checkpoints and to order the aggregated events associated with the same or separate incidents. The rule in Listing 4 verifies whether two incidents are related to the same activity (RelatedActivity). The activity is identified by a car driver (an actor) crossing the checkpoint with a specific address; comparing the detected addresses of different drivers may reveal that the incidents belong to the same related activity. The rules in Listings 5 and 6 use the newly defined OWL object property ''happenedBefore'' to order the events associated with the same or multiple incidents. The ''happenedBefore'' object property uses the event detection time to determine event order. Inferring SWRL rules using reasoning inference tools (i.e., Pellet) creates new links between events and automatically deduces new knowledge in near real time.
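The two hedged sketches below illustrate rules in the spirit of Listings 4-6; the property names are assumptions. The first links two incidents through a shared actor address; the second orders events by detection time.

# Sketch in the spirit of Listing 4: incidents sharing an actor's address.
Incident(?i1) ^ Incident(?i2) ^ hasActor(?i1, ?a) ^ hasActor(?i2, ?b) ^
hasAddress(?a, ?addr) ^ hasAddress(?b, ?addr) ^ differentFrom(?i1, ?i2)
  -> relatedTo(?i1, ?i2)

# Sketch in the spirit of Listings 5 and 6: time-based event ordering.
Event(?e1) ^ Event(?e2) ^ detectionTime(?e1, ?t1) ^
detectionTime(?e2, ?t2) ^ swrlb:lessThan(?t1, ?t2)
  -> happenedBefore(?e1, ?e2)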
Events research
Since the size of the data collected by checkpoints is significant and continuously increasing, this paper defines SPARQL (Protocol and RDF Query Language) queries (Steve Harris, Seaborne & Consortium, 2013) for extracting partial searched data from the ontology as a tree, in order to limit the search area. SPARQL's query syntax uses clauses similar to SQL, with the advantage of enabling the querying of semi-structured data from multiple heterogeneous local or remote sources. Consequently, it improves the search time, aside from automating the search task through predefined queries for specific objects. The first query, in Listing 7, constructs a graph containing the queried data (i.e., the car driver Eric in this instance).
Listing 7: SPARQL query that extracts the partial queried data graph.
The second query, in Listing 8, is executed over the graph generated by the first query. It selects the subjects, the predicates between them, and the objects (instances related to the car driver Eric in this query) that exist in the graph.
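The following pair of hedged SPARQL sketches shows the intended pattern: a CONSTRUCT query that builds a focused subgraph around one driver, followed by a SELECT over that subgraph. The prefix and property names are illustrative assumptions, not the verbatim Listings 7 and 8.

PREFIX fcvro: <http://example.org/fcvro#>   # hypothetical namespace

# Sketch in the spirit of Listing 7: extract the subgraph around driver Eric.
CONSTRUCT { ?s ?p ?o }
WHERE {
  ?s ?p ?o .
  ?s fcvro:hasName "Eric" .
}

# Sketch in the spirit of Listing 8: enumerate the triples of that subgraph.
SELECT ?subject ?predicate ?object
WHERE { ?subject ?predicate ?object }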
RELATED WORKS AND DISCUSSION
This paper reviewed the existing efforts dealing with video surveillance data representation for self- and non-self-driving vehicle tracking, and their solutions to address scalability, big data search, and forensically sound record processing. According to the literature review, no existing ontology-based representation encompasses all of the proposed forensics-aware checkpoint's vehicle recognition ontology modules; only a few research works were identified that tackle video event description, event tracking, and scalability issues.
Mostfa & Ridha (2019) proposed vehicle plate number recognition through a distributed camera-based subsystem installed at several checkpoints connected to a central database. They could detect license plates effectively within 2 to 4 s. However, their approach does not support 3D images, and therefore they focus only on plate recognition, without detecting and identifying fraud scenarios.
A recent paper (Patel et al., 2021) introduced a semantic representation of suspicious vehicle features used to detect malicious activities. Automatic tracking systems using ontologies are the purpose of Greco, Ritrovato & Vento (2017), who define several SWRL rules to track cars and pedestrians while tagging them as suspects or not. However, their ontology does not consider records management, the specifications of the different devices used, or the actors involved in the tracking event.
SanMiguel, Martínez & García (2009) proposed a semantic representation of prior knowledge related to video event analysis. Their ontology mainly models domain knowledge and system knowledge. Their data representation model was extended by SanMiguel & Martinez (2013) to include more details about context knowledge, the scene, and user preferences. Francois et al. (2005) proposed the Video Event Representation Language (VERL) to describe video events and the Video Event Markup Language (VEML) to annotate event instances; their semantic representation is considered among the first dealing with video event modeling. Recently, automated driving vehicles have received increased attention due to the wide spread of autonomous cars. Elgharbawy et al. (2019) and Li, Tao & Wotawa (2020) proposed similar approaches using ontologies to test several generated scenarios in order to validate the required functional safety. Indeed, the effort in Elgharbawy et al. (2019) uses a data mining technique to extract representative scenarios witnessed in real-world traffic from the ontology-based database. Table 9 depicts the features required to solve the challenges posed by this paper and compares the relevant existing efforts in terms of whether they satisfy those features. Briefly, the comparison features are: • Extensibility: the ability of the proposed approach to incorporate new requirements, such as new technologies, or to be linked with other similar vehicle tracking systems.
• Forensics consideration: does the proposed approach consider forensics requirement attributes.
• Scalability: are the system management and associated data model able to deal with a significant increase in vehicles or captured data.
• Interoperability: can the proposed approach be easily integrated with already existing systems.
• Automatic reasoning: does the proposed approach implement and enhance automatic reasoning to detect and deduce committed frauds.
• Multiple fraud types: does the proposed approach model several fraud scenarios or not.
According to the depicted results, none of the related efforts relevant to this paper fully provides the required features. More specifically, none of them merges forensics requirements into its proposed ontology; thus, any evidence such systems produce is subject to admissibility issues. This paper, however, identifies relevant suspicious patterns of vehicle fraud in a forensically sound manner. This is achieved by using a scalable, secure management framework and an extensible forensics-aware auto-reasoning data model. This paper also uses standards to build the data model, which provides high interoperability and enables easier integration. Scalability is achieved through the cluster-based distributed management framework that offers on-site data processing followed by deep processing through the cloud-based solution. Extensibility, automatic reasoning, and the definition of multiple fraud scenarios are ensured by adopting an ontology-based data model and using reasoning engines to infer new knowledge triggered by the defined SWRL rules and SPARQL queries.
Since forensics legislation and understanding differ from country to country, the integrated forensics requirements and implemented rules must be adapted to each country's laws. This requires experts to integrate them, although the ontology's use of standards makes it easy to modify. Also, it is essential today to consider including IoT requirements, once they are used within checkpoint management, in the ontology, to keep pace with the world's tendency towards smart cities such as NEOM, the newly established fast-growing smart city in Saudi Arabia (https://www.neom.com/en-us). More specifically, the ontology may incorporate the intelligent routing of self-driving vehicles (Celsi et al., 2017). This feature will enable the ontology to respond to future requirements while preserving the admissibility of the records.
CONCLUSION
The increased use of cameras to detect road frauds raises several issues: the significant volume of data communicated between cameras and processing units, scalability and real-time detection, automation of the search for important information, and the preservation of data admissibility.
To address these gaps, this paper defines the specifications and design requirements of the checkpoint control system. It then proposes a new architectural framework that adheres to the system specification. In addition, this study provides a new checkpoint vehicle recognition ontology to identify suspicious vehicles, track them, and search the associated events. Aside from the proposed cluster-based multi-checkpoint management system, the main contribution of this study is the forensics-oriented design of the ontology, which responds to all court requirements regarding the admissibility of the gathered evidence. The paper uses standards during the establishment of the new data model to improve and ensure its easy integration with already existing similar systems.
As future work, the plan is to extend the ontology to include prior knowledge of the scene, which helps improve tracking performance. Furthermore, anonymization techniques (Akremi & Rouached, 2021; El Ouazzani & El Bakkali, 2020) will be used to protect the privacy of the ontology data without decreasing the real-time detection of suspicious vehicles or the scalability of system control.
ADDITIONAL INFORMATION AND DECLARATIONS Funding
This work was supported by the Deanship of Scientific Research at Umm Al-Qura University under grant number 18-COM-1-01-0011. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Grant Disclosures
The following grant information was disclosed by the author: | 6,277.6 | 2022-01-05T00:00:00.000 | [
"Computer Science"
] |
Existentialist Perspective: A Study of Bharati Mukherjee’s Fiction The Holder of the World
The present study explores and analyses the existential perspective in Bharati Mukherjee's novel The Holder of the World; in it, Mukherjee tries to represent the fluid nature of immigrants subject to frequent dislocations and relocations, which facilitate the characters' transformation and assimilation into a new environment. The protagonist, Hannah Easton, discards nostalgic feelings and celebrates the new opportunities of a liberal environment. Being a fearless, brave, and bold-spirited woman, she chooses her ways of life freely and accepts the consequences frankly. Bharati Mukherjee, one of the path-breaking Indian American novelists and short story writers, has constantly made efforts to voice the immigrant experience of women. Hannah initially suffers from culture shock but, being resourceful, establishes her authentic existence by understanding the new environment and utilising her full potential via free choices.
the first true feminist she had met in life. She had gained great things for women at a time when women were treated as mere slaves" (Stephen 15).
Bharati Mukherjee's characters reveal tolerance, love, and harmony, where no community or sex is superior to another and each individual has equal rights. They struggle to live with freedom and to search for the 'self' and existence. She projects her characters as persistently struggling with their conflicting selves and their environment. She has also worked on the various dimensions of pressure exerted on modern man by the complex nature and demands of society. Her writings relate to Rogers' assertion about "the operation of inherent forces impelling each person to want to 'become' or 'realise' himself" (qtd. in McDavid & Harari 87). The present work attempts to explore Mukherjee's novel The Holder of the World in the context of existentialist philosophy and to investigate the existential perspective, where there is a tendency in the characters to act with freedom and to exist in their own way.
Existential philosophy emerged in the writings of Kierkegaard and Jaspers and in the later contributions of Heidegger, Sartre, and others. There is no single existential philosophy; existentialism is instead oriented towards understanding the nature and meaning of man's existence. It emphasises that man is not a readymade machine; rather, he has the freedom to make vital choices and to assume responsibility for his existence. It lays stress on subjective experience as a sufficient criterion of truth. As stated by John Macquarrie and others, man exists before he acquires an essence, a definite individuality; the difference is between 'being' and 'becoming.' Other things also exist, but man differs from them in that he is free to become a personality. Strong men transcend the oppressive discipline of a dull society and create their own values; they commit themselves to a cause to change the culture and overcome life's complexities, while weak men make vain efforts to escape from them.
The consciousness of the concept "'exist' that inspired the word 'existentialism' was first articulated by Kierkegaard, and Nietzsche was among the first writers to expose the intimate relationship between experience, practice and the world that came to play a central role in existential philosophy" (Cooper 31). The single common factor for existential thinkers is freedom, which is almost a synonym of existence. In the 20th century, Jean-Paul Sartre, the French philosopher, adopted Kierkegaard's perspective, and for him human action occurs within a zone of freedom. Sartre's assertion that 'existence precedes essence' means that "man first of all exists, encounters himself, surges up in the world ─ and defines himself afterward. If man, as the existentialists see him, is not definable, it is because to begin with he is nothing. He will not be anything until later, and then he will be what he makes of himself." Existentialism thus "names a distinctive, and systematically coherent, picture of the world shared by a 'family' of thinkers." Generally, "existentialists assert the uniqueness of the human situation in the world (i.e., they reject a theoretically reductive philosophical naturalism). This situation is characterised by ambiguity and estrangement, but also by a sense of freedom and responsibility for meaning" (qtd. in Crowell 15). Human beings are prone to a sense of estrangement or alienation from the world.
Martin Buber presented alienation as the main subject of his book I and Thou: the person who lives in an 'I-It' relation to the world lives in "severance and alienation," without a home, a dwelling in the Universe (58). The 'I' used for the 'self' conveys a deeply personal, subjective and familiar meaning, while the 'It' is foreign and unknown, hence an alien presence. This estrangement is responsible for arousing the notion of existence and makes the name existentialism an appropriate one. Kierkegaard restricts the term "existence" to individual human beings, who are "infinitely interested in existing" and "constantly in the process of becoming" (253). For Macquarrie, existence means that "Man fulfils his being precisely by existing, by standing out as the unique individual that he is and stubbornly refusing to be absorbed into a system" (66). In its root sense, existence means 'standing out', going beyond what one is at a given moment, and moulding one's life accordingly. Kierkegaard, in his works, calls the individual to come out from the crowd and bear the burden of his being upon himself. One should not seek help from theories and principles or the illusion of conventionality, because the existence of each existent is 'distinct' and 'unique' from the existence of everyone else. The other important aspect of existence is self-relatedness, which means the individual is the centre of everything and has to evolve his own value system; that system does not remain fixed or static but keeps changing with time.
For existentialists, "we have to start from freedom if we are to understand man" (Roubiczeck 122). Freedom means acting entirely by our own free will. For Sartre, "freedom and existence are indistinguishable. One does not first exist and then become free; rather, to be human is already to be free" (qtd. in Macquarrie 177). Kierkegaard, Nietzsche, and Jaspers use the terms 'the public,' 'the herd,' and 'mass existence,' respectively, for that which works autocratically and becomes a barrier inhibiting freedom. Existentialists believe that individuals do not enter or leave a well-structured universe with a coherent design. In their dealing with freedom, people are responsible for their choices, life plans, and the world. The inescapable reality of death gives meaning to existence; it is also the source of existential or normal anxiety. Existentialists sometimes seem preoccupied with death, for it is in facing death that an individual is most likely to come to an understanding of life. Frankl sees "death not as a threat but as urging for individuals to live their lives fully and to take advantage of each opportunity to do something meaningful" (qtd. in Sharf 176). Accordingly, death awareness can lead to creativity and to living fully, rather than to fear and dread.
The novel depicts the story of a strong character who utilises her potential to counter the restrictive perspectives of gender, class, and culture and to accept new possibilities of selfhood. Hannah Easton, the protagonist, is exposed to a wide range of experiences in the alien land of Mughal India. She was born and brought up in an orthodox Puritan society where she has been trained to suppress her passions, but she remains unsatisfied by her mechanical existence. The narration in the novel is similar to that of Nathaniel Hawthorne's The Scarlet Letter (1850), where Hester Prynne, like Hannah Easton, tries to assert her unique and independent spirit by challenging the imposed rigid Puritan rules and crossing thresholds.
Both of them face similar circumstances, and both transform themselves according to the situation and conditions required. Hester Prynne's scarlet letter 'A' represents 'Adultery' to the Puritan world, but with her consistent efforts she transforms the same letter 'A' into 'Ability', or potential, which enables her to assimilate and exist in the same old Puritan world. Hannah's story is meant to suggest that there were passages to and from India even in colonial New England and that lives have been lived across cultures in all centuries.
Moreover, Mukherjee seems to be telling her readers that if we care to bring together the stories scattered in history, we will come to realise how intertwined lives are. Hannah's mother runs away with her American-Indian lover, and a Nipmuc woman drops the child on the doorstep of the Fitch family.
The little girl was brought up under the rigid norms of Puritan society. Hannah Easton, a sensitive and bold girl, imagines the vulnerable position of her mother as a widow in orthodox Puritan society and loves her "more profoundly than any daughter has ever loved a mother" (30).
Hannah grows up as the adopted child of a devout Puritan couple, Robert and Susannah Fitch, who try to instil in her all the conventional wisdom and housekeeping tips supposedly required of an agreeable bride. Hannah, a revolutionary figure like Jasmine, refuses to accept the imposed rules and regulations of Puritan society, since she wants to shape her own fate, embodying a human being's striving for transcendence (Kierkegaard). Hannah, searching for liberation, rebels whenever the situation demands and charts out her own path. In the beginning, she rebels against her husband "for having a bibi but within a few months, she willingly becomes one herself, suspending all morality, all expectations of conventional relationships" (Mehta 197). She justifies the existentialist belief that individuals do not enter or leave a structured universe with a coherent design; it is in their dealing with freedom that they create their own world. Hannah chooses freely to exist in her own way, transforming from an "English married woman on the Coromandel Coast to pregnant sari-wearing bibi of a raja; a murderer, a widow, a peacemaker turned prisoner of the most powerful man in India" (271).
The love she receives from Raja Jadav Singh strengthens her determination. She attains the courage that enables her to face the 'holder of the world', the 'Alamgir', the Great Mughal Emperor Aurangzeb. Even Aurangzeb is fascinated by Hannah's personality and hails her with a very precious title: "I call you Precious-as-Pearl" (270).
She ends up as Mukta, Bhagmati's word for 'pearl,' precious as a pearl in the court of Aurangzeb. Hannah frees herself from the confines of geographical and religious boundaries, social distinctions, cultural differences and linguistic hurdles and progresses to fulfil her quest for self-realisation. Beigh Masters recognises Hannah's potential and acknowledges her as a woman with a free spirit and having the extraordinary enthusiasm to progress in life.
Hannah rejects the stereotypical world of Puritans and emerges as a real fighter in life.
Malashri Lal's comment related to the protagonists of Bharati Mukherjee's fiction is very apt for Hannah Easton and Jasmine, as they are ". . . neither nostalgic for their personal past nor afraid of the unfamiliar present. The main strategy is adaptation without surrender". Also, women protagonists are ". . . confident, sophisticated, poised --who will not melt into . . . | 2,559.2 | 2021-10-28T00:00:00.000 | [
"Philosophy"
] |
(α, β)−Pythagorean Fuzzy Numbers Descriptor Systems
By combining Pythagorean fuzzy sets and T-S fuzzy descriptor systems, new (α, β)-Pythagorean fuzzy descriptor systems are proposed in this paper. Their definition is given first, the stability of this kind of system is studied, and the relation between (α, β)-Pythagorean fuzzy descriptor systems and T-S fuzzy descriptor systems is discussed. The (α, β)-Pythagorean fuzzy controller and the stability of (α, β)-Pythagorean fuzzy descriptor systems are researched in depth. The (α, β)-Pythagorean fuzzy descriptor systems can be better used to solve actual nonlinear control problems. The (α, β)-Pythagorean fuzzy descriptor systems will be a new research direction and will become a universal method for solving practical problems. Finally, an example is given to illustrate the effectiveness of the proposed method.
Introduction
Pythagorean fuzzy sets [1][2][3][4] were proposed by Yager in 2013 as a new tool to deal with vagueness. Pythagorean fuzzy sets maintain the advantages of both membership and non-membership, but the value range of the membership and non-membership functions is expanded from a triangle to a quarter circle. The expansion of the value area makes the amount of information in Pythagorean fuzzy sets about 1.57 times that of intuitionistic fuzzy sets and ensures that all intuitionistic fuzzy sets are Pythagorean fuzzy sets. They can characterize uncertain information more sufficiently and accurately than intuitionistic fuzzy sets. Pythagorean fuzzy sets have attracted great attention from many scholars, have been extended to new fields, and these extensions have been used in many areas such as decision making, aggregation operators, and information measures. Because their wide descriptive scope covers cases that are very common in diverse real-life issues, Pythagorean fuzzy sets have given a boost to the management of vagueness and have provided novel algorithms for decision-making problems under a Pythagorean fuzzy environment.
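As a quick check of the ''1.57 times'' figure: an intuitionistic pair (μ, ν) must satisfy μ + ν ≤ 1 (a triangle of area 1/2), while a Pythagorean pair only needs μ² + ν² ≤ 1 (a quarter disc of area π/4), so

\[
\frac{\text{Area}_{\mathrm{PFS}}}{\text{Area}_{\mathrm{IFS}}}
  = \frac{\pi/4}{1/2}
  = \frac{\pi}{2} \approx 1.57 .
\]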
Takagi-Sugeno (T-S) fuzzy systems [5][6][7][8][9] have been applied in intelligent computing research and to complex nonlinear systems. T-S fuzzy systems have also been extended to new fields, and these extensions have been used in many areas by many scholars. However, the membership functions of T-S fuzzy systems cannot make full use of all the uncertain information in the premise conditions. We therefore study the new (α, β)-Pythagorean fuzzy descriptor systems in order to solve practical control problems more easily and feasibly. The main motivations and advantages are as follows:
1. Pythagorean fuzzy sets maintain the advantages of both membership and non-membership, but the value range of the membership and non-membership functions is expanded from a triangle to a quarter circle. The expansion of the value area makes the amount of information in Pythagorean fuzzy sets about 1.57 times that of intuitionistic fuzzy sets. They can characterize uncertain information more sufficiently and accurately than intuitionistic fuzzy sets.
2. The membership and non-membership functions of Pythagorean fuzzy sets are easy to define. Their value ranges are also more consistent with objective reality, with many hesitation problems, and with people's thinking.
3. Pythagorean fuzzy sets ensure that all intuitionistic fuzzy sets are Pythagorean fuzzy sets, i.e., intuitionistic fuzzy sets are special cases of Pythagorean fuzzy sets. Intuitionistic fuzzy control systems can therefore be changed into (0,1)-Pythagorean fuzzy control systems.
4. (α, β)-Pythagorean fuzzy descriptor systems are a broader generalization of T-S fuzzy descriptor systems, i.e., T-S fuzzy descriptor systems are special cases of (α, β)-Pythagorean fuzzy descriptor systems.
5. We can judge the degree of weight in the control process according to the values of the membership and non-membership functions of the rules. By setting the values of α and β, we decide whether a rule participates in the final calculation, thereby reducing the calculation process and improving control efficiency and effectiveness.
6. In fact, (α, β)-Pythagorean fuzzy descriptor systems are consistent with human control methods. The method imitates the human control process and also addresses problems that are most difficult for humans.
The rest of this paper is organized as follows. In Section 1, the basic concepts of T-S fuzzy descriptor systems are introduced. In Section 2, (α, β)-Pythagorean fuzzy descriptor systems are proposed for the first time. The relationship between T-S fuzzy descriptor systems and (α, β)-Pythagorean fuzzy descriptor systems is then discussed in Section 3. The (α, β)-Pythagorean fuzzy controller and the stability of (α, β)-Pythagorean fuzzy descriptor systems are researched in depth in Section 4. In Section 5, a numerical example is given to show that the corollaries are correct; we discuss in detail the effects of control in several cases. Through this practical example, we find that the selection of the Pythagorean fuzzy membership functions in the premise conditions of the rules has a great influence on the control effect. Therefore, the choice of Pythagorean fuzzy membership functions must be determined after extensive testing, and we cannot completely trust the originally given functions. Finally, the conclusion is given in Section 6.
Notations: Throughout this paper, R^n and R^{n×m} denote the n-dimensional Euclidean space and the set of n × m real matrices, respectively. PFS denotes a Pythagorean fuzzy set.
Preliminaries
This section briefly introduces some basic definitions and theorems on Pythagorean fuzzy sets and T-S fuzzy descriptor systems. Definition 1.1 [1][2][3][4]: Let X be a universe of discourse. A PFS P in X is given by P = {⟨x, μ_P(x), ν_P(x)⟩ | x ∈ X}, where μ_P, ν_P : X → [0, 1] satisfy μ_P(x)^2 + ν_P(x)^2 ≤ 1 for all x ∈ X.
For convenience, a Pythagorean fuzzy number (μ_P(x), ν_P(x)) is denoted by p = (μ_P, ν_P). Definition 1.2 [10, 11]: T-S fuzzy descriptor systems consist of rules of the form

Rule i: IF x_1(t) is F_1^i and ... and x_n(t) is F_n^i, THEN E ẋ(t) = A_i x(t) + B_i u(t), y(t) = C_i x(t) + D_i u(t),

where x(t) = [x_1(t), ..., x_n(t)]^T ∈ R^n and u(t) ∈ R^m are the state and control input, respectively; A_i, B_i, C_i and D_i are known real constant matrices with appropriate dimensions; E is a singular matrix; and F_1^i, F_2^i, ..., F_n^i (i = 1, 2, ..., r) are the fuzzy sets. By fuzzy blending, the overall fuzzy model is inferred as

E ẋ(t) = Σ_{i=1}^{r} h_i(x(t)) [A_i x(t) + B_i u(t)],  y(t) = Σ_{i=1}^{r} h_i(x(t)) [C_i x(t) + D_i u(t)],

where h_i(x(t)) = w_i(x(t)) / Σ_{j=1}^{r} w_j(x(t)) is the normalized grade of membership, with w_i(x(t)) = Π_{j=1}^{n} F_j^i(x_j(t)).
(α, β)-Pythagorean fuzzy descriptor systems
As T-S fuzzy descriptor systems are very familiar to us, and Pythagorean fuzzy sets are a new tool to deal with vagueness, we study the new (α, β)-Pythagorean fuzzy descriptor systems in order to solve practical control problems more easily and feasibly. Next, the related definitions of (α, β)-Pythagorean fuzzy descriptor systems are given step by step. Definition 2.1: (α, β)-Pythagorean fuzzy descriptor systems consist of rules of the form

Rule i: IF x_1(t) is P_1^i and ... and x_n(t) is P_n^i, THEN E ẋ(t) = A_i x(t) + B_i u(t), y(t) = C_i x(t) + D_i u(t),

where x(t) = [x_1(t), ..., x_n(t)]^T ∈ R^n and u(t) ∈ R^m are the state vector and the control input vector, respectively; y(t) is the measurable output vector; A_i, B_i, C_i and D_i are known real constant matrices with appropriate dimensions; E is a singular matrix; and P_1^i, P_2^i, ..., P_n^i (i = 1, 2, ..., r) are all Pythagorean fuzzy sets. By fuzzy blending, the overall fuzzy model is inferred as sketched below.
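One plausible formalization of the blended model, stated here as an assumption consistent with the remarks that follow (the product/max aggregation choices are illustrative, not confirmed by the text), is:

% Active rule set under the (alpha, beta) thresholds: strong enough
% membership, weak enough non-membership.
\[
R_{\alpha,\beta}(x) = \bigl\{\, i : \mu_i(x) \ge \alpha,\ \nu_i(x) \le \beta \,\bigr\},
\qquad
\mu_i(x) = \prod_{j=1}^{n} \mu_{P_j^i}(x_j),\quad
\nu_i(x) = \max_{1 \le j \le n} \nu_{P_j^i}(x_j),
\]
% Normalized blending restricted to the active rules.
\[
E\,\dot{x}(t) = \sum_{i \in R_{\alpha,\beta}(x)} \tilde h_i(x)\,\bigl(A_i x(t) + B_i u(t)\bigr),
\qquad
\tilde h_i(x) = \frac{\mu_i(x)}{\sum_{k \in R_{\alpha,\beta}(x)} \mu_k(x)} .
\]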
Here h̃_i(x(t)) denotes the normalized grade of membership, defined from μ_{P_j^i}(x_j(t)) and ν_{P_j^i}(x_j(t)), which are respectively the positive (membership) and negative (non-membership) functions of the Pythagorean fuzzy sets. Two remarks are in order. 1. We can judge the degree of weight in the control process according to the values of the positive and negative membership functions of the rules. By setting the values of α and β, we decide whether a rule participates in the final calculation, thereby reducing the calculation process and improving control efficiency and effectiveness.
2. In fact, (α, β)-Pythagorean fuzzy descriptor systems are consistent with human control methods. People generally apply appropriate control at one point based on past experience, i.e., people's decisions are made and implemented at roughly one point; the method imitates this human control process.

3. The relations between (α, β)-Pythagorean fuzzy descriptor systems and T-S fuzzy descriptor systems

First, the relation between T-S fuzzy descriptor systems and (α, β)-Pythagorean fuzzy descriptor systems is studied through an example. When α = 0 and β = 1, every rule participates in the calculation, and the special (0,1)-Pythagorean fuzzy descriptor systems reduce to T-S fuzzy descriptor systems. In other words, T-S fuzzy descriptor systems are all special (0,1)-Pythagorean fuzzy descriptor systems. Therefore, it is easy to obtain the following Theorem 3.1: every T-S fuzzy descriptor system is a special (0,1)-Pythagorean fuzzy descriptor system.
Proof: It is straightforward and is therefore omitted.
(α, β)-Pythagorean fuzzy numbers controller
Now we continue to study the feedback control and stability of Pythagorean fuzzy descriptor systems, following the traditional research path of control systems. Suppose the controller rules take the form

Controller rule i: IF x_1(t) is P_1^i(x_1(t)) and ... and x_n(t) is P_n^i(x_n(t)), THEN u(t) = G_i x(t),   (3)

where μ_{P_j^i}(x_j(t)) and ν_{P_j^i}(x_j(t)) are respectively the positive and negative membership functions, i.e., the degrees to which x_j(t) belongs and does not belong to the Pythagorean fuzzy set P_j^i. Substituting (3) into (1) and (2), we obtain the closed-loop system.
The system stability is guaranteed by determining the feedback gains G_j. Basic LMI-based stability conditions guaranteeing the stability of the above closed-loop control system, in the form of (4) and (5), are given in the following theorem.
Theorem 4.1: The system (3) is asymptotically stable if there exist matrices N_j ∈ R^{m×n} (j = 1, 2, ..., r) and K = K^T ∈ R^{n×n} such that the following LMIs are satisfied, where the feedback gains are defined as G_j = N_j K for all j. Proof: Consider the quadratic Lyapunov function.
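The displayed Lyapunov function is a standard one for descriptor systems; the sketch below follows the usual argument in the LMI literature, under the customary assumption E^T P = P^T E ≥ 0, and is not necessarily the authors' exact derivation.

% Quadratic Lyapunov candidate for the closed loop E\dot{x} = A_c x.
\[
V(x) = x^{T} E^{T} P\, x, \qquad E^{T}P = P^{T}E \ge 0 ,
\]
% Along trajectories, using E\dot{x} = A_c x and the symmetry condition,
\[
\dot V(x) = \dot x^{T} E^{T} P x + x^{T} P^{T} E \dot x
          = x^{T}\bigl(A_c^{T} P + P^{T} A_c\bigr) x ,
\]
% so \dot V < 0 for all x \ne 0 whenever A_c^{T}P + P^{T}A_c < 0, which the
% theorem's LMIs enforce for every admissible blend A_c of the closed-loop
% vertex matrices (A_i + B_i G_j).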
Hence the system (3) is asymptotically stable.
Simulation example
Example 5.1: Consider an inverted pendulum, subject to parameter uncertainties [12][13][14][15], as the nonlinear plant to be controlled. The dynamic equation for the inverted pendulum is given below.
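In the benchmark literature cited above, the dynamics with these parameters commonly take the following form; this is a standard sketch, not necessarily the authors' exact display:

\[
\ddot{\theta}(t) =
\frac{g\sin\theta(t)
      - \dfrac{a\, m_p L\, \dot{\theta}^2(t)\,\sin\bigl(2\theta(t)\bigr)}{2}
      - a\cos\theta(t)\, u(t)}
     {\dfrac{4L}{3} - a\, m_p L \cos^2\theta(t)} ,
\qquad a = \frac{1}{m_p + M_c} .
\]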
Here θ(t) is the angular displacement of the pendulum, g = 9.8 m/s² is the acceleration due to gravity, m_p ∈ [m_p,min, m_p,max] = [2, 3] kg is the mass of the pendulum, M_c ∈ [M_min, M_max] = [8, 12] kg is the mass of the cart, a = 1/(m_p + M_c), 2L = 1 m is the length of the pendulum, and u(t) is the force (in newtons) applied to the cart. The inverted pendulum is considered to be working in the stated operating domain. Next, following ideas based on the principles of interpolation and interval coverage, we first change the interval-valued T-S fuzzy model of the inverted pendulum into the special (α, β)-Pythagorean fuzzy descriptor system of the inverted pendulum as follows.
Rule 1: If x_1(t) is P_1^1 ... (the full rule base follows the interval-valued T-S model). According to Theorem 4.1, we can obtain the feedback gains.
In the second case (the interval-valued T-S fuzzy model of the inverted pendulum), suppose x_1(0) = −11π/29 and x_2(0) = −0.88; then take the variable x_1(t) as the main factor of the control, and, according to Table 1, we can control in three steps, i.e.
Thus, the stable control time of the (0.30, 0.25)-Pythagorean fuzzy descriptor system of the inverted pendulum is 4.836 seconds shorter than that of the interval-valued T-S fuzzy descriptor system of the inverted pendulum (Figure 2). Remark 5.1: In this way, the (0.30, 0.25)-Pythagorean fuzzy descriptor system achieves a better effect than the interval-valued T-S fuzzy model of the inverted pendulum. It is easy to see that the (0.30, 0.25)-Pythagorean fuzzy descriptor system provides the best control and can reduce the number of rules and thus the amount of calculation.
In this way, it achieves a better effect than the interval-valued T-S fuzzy model of the inverted pendulum. Because feedback always takes a little time, by the moment the system carries out its feedback instructions the moment has already passed, so the feedback that has been given is lagging and out of date. (α, β)-Pythagorean fuzzy descriptor systems can be closer to the actual system and make it easy to control the error range. The new control method is more convenient and feasible.
Conclusions
In this paper, the new (α, β)-Pythagorean fuzzy descriptor systems are introduced for the first time; they are more consistent with the human way of thinking, more likely to be set up, and more convenient to popularize. The new (α, β)-Pythagorean fuzzy descriptor systems are very simple and quick: even without knowing the underlying control principle, we can directly achieve a good control effect. The new theory can be studied in parallel with the basic framework of the original theories, easily extends the old theories, and achieves good results. In addition, we can judge the degree of weight in the control process according to the values of the positive and negative membership functions of the rules. By setting the values of α and β, we decide whether the rules participate in the final calculation, thereby reducing the number of rules and the calculation process and improving control efficiency and effectiveness. Moreover, T-S fuzzy descriptor systems are special cases of (α, β)-Pythagorean fuzzy descriptor systems. The (α, β)-Pythagorean fuzzy controller and the stability of (α, β)-Pythagorean fuzzy descriptor systems are researched in depth. Finally, a numerical example is given to show that the corollaries are correct. However, the theoretical part of the new systems still needs in-depth study, and specific applications are also to be further developed. For example, (α, β)-Pythagorean fuzzy descriptor systems can also be used as a model for autonomous learning in order to establish intelligent control and can be used well in future unmanned driving. Thus, (α, β)-Pythagorean fuzzy descriptor systems meet the requirements of reality. | 3,154.2 | 2022-01-12T00:00:00.000 | [
"Engineering",
"Mathematics",
"Computer Science"
] |
EUROPEAN STANDARDS IN THE FIELD OF COMBATING CYBER CRIME
Cyber crime is a phenomenon which is often written and spoken about, ever since its inception, in theory and in the judicial and legislative practice of developed countries and international institutions. It developed rapidly in the last decade of the 20th century, and in the 21st century its evolution has become even more evident. Countries have responded by introducing new measures in their criminal legislation, in an effort to reconcile traditional criminal law with the demands for the perception, investigation and demonstration of new criminal acts. This paper presents and analyzes the most significant European standards adopted in order to create more effective national legislation in the field of combating cyber crime. The standards given in the Council of Europe Convention, as well as in European Union Directives, have to a large extent been a guide for national legislations seeking to regulate, in the most adequate manner, the new situations arising from the misuse of information and communication technologies. Among other things, this paper pays special attention to the most important convention in the field of combating cyber crime, the Council of Europe Convention on cyber crime, whose objectives include: harmonization of national legislations with regard to substantive provisions in the field of cyber crime; introduction of adequate instruments into national legislations with regard to procedural provisions, in order to create the necessary basis for the investigation and prosecution of offenders in this field; and the establishment of quick and efficient institutions and procedures for international cooperation.
Introduction
There is no technical or technological achievement in the history of mankind that has not encountered various forms of misuse. What differs are the phases of development in which an invention has been subject to misuse and the efforts made in combating this type of criminal offence. Comparative analysis is not possible without reference to the only relevant international instrument in the field of cyber crime, the Council of Europe Convention on cyber crime, and without pointing out its significance within global frameworks. Cyber crime is a phenomenon which is often written and spoken about, ever since its inception, in theory and in the judicial and legislative practice of developed countries and international institutions. All its aspects are being considered, and we are trying to provide more complete answers to numerous and ever more complex questions. The complexity of the issues we have been facing for over thirty years may partly be gauged from the numerous papers in this field.
The subject of this paper is to present and analyze the most significant European standards adopted in order to create more efficient national legal solutions in the field of combating cyber crime.
Council of Europe standards
The Council of Europe is a regional international organization headquartered in Strasbourg. Its purpose is reflected in the achievement of basic personal and democratic rights and freedoms in Europe, and its most important acts are the adoption of the European Convention on Human Rights in 1950 and the establishment of the European Court of Human Rights in 1998 as a permanent legal protection system. The Council of Europe has 47 member states, which are also signatories to the European Convention on Human Rights, 1 candidate state, and 5 observer states.
Ever since the 1980s, the Council of Europe has been dealing with the issue of combating cyber crime. Two recommendations issued by the Committee of Ministers (Recommendation No. R (89) 9 and Recommendation No. R (95) 13 of the Committee of Ministers to Member States, concerning problems of criminal procedure law connected with information technology), which are considered the first international documents on cyber crime, are mainly devoted to the beginnings of the combat against the misuse of computer technology, covering: criminalisation of illicit behavior, separation of legal and illegal actions in national legislations, provisions on the conduct of investigations, obligations of providers to cooperate with investigating authorities, etc. These recommendations were not binding for member states; their primary goal was to draw attention to the emergence of a new type of criminal activity with a strong international component, and to make clear to the states that they needed to react in a timely manner in order to prevent the spread of such illegal and malicious use of new technologies (Prlja & Reljanović, 2009, p. 161). Bearing in mind the hazards brought by the development of information technology, during the second half of the nineties the European Committee on Crime Problems (CDPC) of the Council of Europe founded an expert group, the Committee of Experts on Crime in Cyberspace (PC-CY), with the mission of preparing the text of the first international convention whose substance would include the prevention, capture and punishment of offenders in the field of cyber crime. The Council of Europe Convention on cyber crime was adopted on November 23rd, 2001 in Budapest. It entered into force on July 1st, 2004 and is open for signature by countries that are not members of the Council of Europe. Serbia signed the Convention on April 7th, 2005 and ratified it on April 14th, 2009 by adopting the Law on Ratification of the Convention on cyber crime. The Additional Protocol to the Convention on cyber crime, concerning the criminalisation of acts of a racist and xenophobic nature committed through computer systems, was adopted in 2003. It entered into force on March 1st, 2006. Serbia ratified the Additional Protocol by adopting the Law on Ratification of the Additional Protocol to the Convention on cyber crime concerning the criminalisation of acts of racist and xenophobic nature, committed through computer systems.
Convention on cyber crime
The Convention has, above all, the following objectives: 1) harmonization of national legislations with regard to substantive provisions in the field of cyber crime; 2) introduction of adequate instruments into national legislations with regard to procedural provisions, in order to create the necessary basis for the investigation and prosecution of offenders in this field; 3) establishment of quick and efficient institutions and procedures for international cooperation. Accordingly, "an important part of the Convention on cyber crime is dedicated to the states' obligations to create normative preconditions for the introduction of additional procedures and powers, in order to enable efficient detection and processing of cases of cyber crime. In this sense, the essential importance lies in the establishment of special state authorities specialized in the combat against cyber crime. Formally and legally speaking, such obligations have become current only after the ratification of the said Convention and the Additional protocol". The Convention consists of four chapters: (I) use of terms; (II) measures to be undertaken at the national level (substantive and procedural law); (III) international cooperation; (IV) final provisions. The first chapter of the Convention provides a brief overview and definitions of the basic terms used in the text of the Convention. Thus, a computer system (Article 1a) is a group of connected devices, of which at least one can perform automatic data processing; a computer datum (Article 1b) is any information in a form suitable for processing in a computer system, including programs that can be used to perform certain functions of a computer; a provider (service provider, Article 1c) is any natural or legal person that provides services enabling communication through a computer network, as well as any person that keeps or processes computer data created during such communication; traffic data (Article 1d) are any computer data related to communication within the system, or created as part of such communication, carrying information about the origin and destination of the communication, its path, date, time, size, duration, or type of service.
The second chapter of the Convention, which includes Articles 2-22, is divided into several parts and includes substantive and procedural legal provisions. The substantive provisions stipulate nine criminal offences, grouped into four categories.
The first group of incriminated acts consists of acts against computers and computer systems in the narrow sense. The Convention calls this group "offences against the confidentiality, integrity and availability of computer data and systems". This group includes the following offences: 1) unauthorized access to information contained in a computer or computer system, in order to obtain, modify or destroy such information (Art. 2); 2) illegal interception of non-public transmissions of computer data (Art. 3); 3) interference with data (data disruption) on a computer, in terms of intentional full or partial damage, deletion, modification of content or any other kind of change to the original data (Art. 4); 4) interference with the system (system disruption), an act defined like the previous criminal act but related to a computer system whose operation is disabled or altered via illegal access and the modification of data on the network (Art. 5); 5) misuse of devices, a general provision through which the signatory states commit themselves to punish any intentional illegal manufacture, possession, use or supply, sale or any other form of distribution and making available, to anyone not entitled to them, of any device, including computer programs, as well as any form of data which may assist in the execution of the criminal offences set forth in the previous Articles of the Convention (Art. 6).
The second group of incriminated acts consists of classic criminal acts whose execution is related to information technologies. This group includes the following offences: 1) computer forgery, i.e., intentional, unauthorized insertion, deletion, modification or concealment of computer data that results in altered content of such data, regardless of whether the data thereby obtain a different purpose and meaning or become unusable (Art. 7); 2) computer fraud, i.e., intentional, unauthorized insertion, deletion, modification or concealment of computer data, as well as any other kind of interference with the operation of a computer system, in order to obtain unlawful property gain for oneself or a third person (Art. 8).
The third segment of the second chapter deals with acts related to the content of communication on a computer network and is dedicated to criminal acts related to child pornography (Article 9). Signatory states are obliged to criminalise the following activities as criminal offences under national legislation: production of child pornography for the purpose of its distribution through a computer system; offering or making available child pornography through a computer system; distributing or sending child pornography through a computer system; procuring child pornography for oneself or others through a computer system; and possession of child pornography on a computer system or on a medium for the storage of computer data. Thus, any behavior related to child pornography is criminalised. The fourth segment of the second chapter is dedicated to criminal acts related to the infringement of copyright and related rights (Art. 10).
The fifth segment of the second chapter includes the criminalisation of the attempt to commit, assistance in, and incitement of the said offences (Art. 11), the liability of legal persons (Art. 12), and the prescription of sanctions for offences committed under the Convention (Art. 13).
Procedural law is dealt with in the second part of the second chapter of the Convention. These regulations address the procedural powers of state bodies during the investigation of criminal acts related to new technologies. The Convention has introduced some classic instruments for the investigation of criminal acts into the new virtual environment, thereby respecting the specific nature of cyberspace. According to the Convention, competent state bodies have the authority to examine and seize any computer or data storage medium which contains, or is suspected of containing, incriminating material, as well as to collect data, primarily related to the use of the Internet and credit cards, from electronic communication providers, through which they can obtain data on potential perpetrators of computer crime (Art. 19 and 20). One of the probably most far-reaching provisions relates to so-called data interception, i.e., a type of wiretapping of electronic communications (Art. 21). This measure is undertaken only when proving the existence of an offence requires evidence collected at the time the communication takes place. Such treatment practically impinges on the rights to privacy and correspondence, and the Convention itself does not contain appropriate restrictions to prevent the misuse of these powers. It is stated that the measure shall be undertaken for "serious offences", but the Convention itself does not specify what kind of offences those are. Article 22 deals with the jurisdiction of a signatory state in case of the occurrence of any offence under the Convention. The state shall have jurisdiction to prosecute should the offence be committed in its territory, or on a ship or airplane carrying its flag, as well as should the offence be committed by a citizen of that state, provided that another state acknowledges the same kind of criminalisation, or should the offence be committed outside any state's territory (e.g., international waters).
The third part of the Convention deals with the international cooperation of states in combating computer crime, primarily in a way that should overcome practical obstacles in the implementation of national legislation for offences that typically cross national boundaries and often involve the participation of individuals from several countries around the world. The main provisions of this part are dedicated to the cooperation of states in the organized or spontaneous exchange of data concerning the possible execution of a criminal offence related to the use of electronic communications, as well as the possibility of extraditing the perpetrators of such offences from one signatory state to another (Art. 26). Each signatory state must entrust a particular body with the task of cooperating with other states in the field of computer crime, and in cases of emergency, cooperation can be established directly between the judicial authorities of the two countries, as well as through Interpol and other relevant channels of cooperation (Art. 27). According to Art. 31, each signatory state may request another to carry out certain investigations in its territory, should that be necessary for the purposes of an investigation into one of the offences provided for in the Convention.
When it comes to extradition, there are situations where a state is not obliged to extradite a person. This is especially the case when there is a lack of double criminalisation, but the Convention also provides an additional condition: the act must be labeled as serious within the law itself, i.e., its execution should be punishable by a minimum sentence of one year in prison, unless provided otherwise by some other international agreement between the states in question that may be applied in the given situation (Art. 24). Also, among states that do not have mutual bilateral or multilateral extradition treaties, the Convention shall serve as the basis for extradition. There is also an interesting provision regarding the establishment of a 24/7 network in each country, which would serve as support to the police and other authorities, as a contact point for all information and a starting point for all requests concerning the processing and investigation of criminal acts of computer crime (Art. 35).
Additional Protocol to the Convention on cyber crime concerning the criminalisation of acts of a racist and xenophobic nature, committed through computer systems
The Additional Protocol to the Convention on cyber crime concerning the criminalisation of acts of a racist and xenophobic nature, committed through computer systems 23 was adopted on January 28th, 2003. In addition to the preamble, the Additional Protocol consists of four chapters: I - Common provisions, II - Measures to be taken at national level, III - Relations between the Convention and this protocol, IV - Final provisions.
The main purpose of the adoption of this protocol is the criminalisation of behavior that is not covered by the Convention and which is related to the spread of hatred, intolerance and animosity towards racial, national, religious and other groups and communities through the use of computers as a means of communication and dissemination of propaganda. 24 For the purpose of this protocol, "racist and xenophobic material" is any written material, any image or any other representation of ideas or theories that advocates, promotes or incites hatred, discrimination or violence against any individual or group of individuals, based on race, skin color, descent, national or ethnic origin, as well as religion, if used as a pretext for any of these factors (Article 2 of the protocol).
In the second chapter, entitled "Measures to be taken at national level", the protocol introduces an obligation for signatory states to criminalise the following conduct in their national legislation: 1.) Dissemination of racist and xenophobic material through computer systems (Article 3 of the protocol) - this means any action by which such material is made available to the public using a computer or a computer system.
Each party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally and without right, the following conduct: distributing, or otherwise making available, racist and xenophobic material to the public through a computer system (paragraph 1, Article 3 of the protocol).
A party may reserve the right not to attach criminal liability to conduct as defined by paragraph 1 of this article, where the material, as defined in Article 2, paragraph 1, advocates, promotes or incites discrimination that is not associated with hatred or violence, provided that other effective remedies are available (paragraph 2, Article 3 of the protocol).
Notwithstanding paragraph 2, a party may reserve the right not to apply paragraph 1 to those cases of discrimination for which, due to established principles in its national legal system concerning freedom of expression, it cannot provide for effective remedies as referred to in the said paragraph 2 (paragraph 3, Article 3 of the protocol).
2.) Racist and xenophobic motivated threat (Article 4 of the protocol) - this is presenting to an individual or a group that a serious criminal offence, as defined in the domestic legislation of the countries, shall be committed against them through a computer or computer system. The individual or group should be distinguished according to their race, skin color, origin, or national, ethnic or religious affiliation in order for this offence to take the specific form provided by the protocol.
Specifically, "Each party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally and without right, the following conduct: threatening, through a computer system, with the commission of a serious criminal offence as defined under its domestic law, (i) persons for the reason that they belong to a group, distinguished by race, color, descent or national or ethnic origin, as well as religion, if used as a pretext for any of these factors, or (ii) a group of persons which is distinguished by any of these characteristics" (Article 4).
3.) Racist and xenophobic motivated insult (Article 5) - has the same elements as the previous case, but concerns not threats but insults directed at an individual or a group, based on race, skin color, origin, or national, ethnic or religious affiliation.
Each party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally and without right, the following conduct: insulting publicly, through a computer system, (i) persons for the reason that they belong to a group distinguished by race, color, descent or national or ethnic origin, as well as religion, if used as a pretext for any of these factors; or (ii) a group of persons which is distinguished by any of these characteristics (paragraph 1, Article 5 of the protocol).
A party may: a) require that the offence referred to in paragraph 1 of this Article has the effect that the person or group of persons referred to in paragraph 1 is exposed to hatred, contempt or ridicule; or b) reserve the right not to apply, in whole or in part, paragraph 1 of this Article (paragraph 2, Article 5 of the protocol). 4.) Denial, gross minimization, approval or justification of genocide or crimes against humanity (Article 6) - introduces the interesting concept of punishing acts committed through computers or computer systems in cases that were the subject of decisions by international courts. Such content must in some way be made available to a larger number of people who use computers and the Internet or any other computer network.
Each party shall adopt such legislative measures as may be necessary to establish the following conduct as criminal offences under its domestic law, when committed intentionally and without right: distributing or otherwise making available, through a computer system to the public, material which denies, grossly minimizes, approves or justifies acts constituting genocide or crimes against humanity, as defined by international law and recognized as such by final and binding decisions of the International Military Tribunal, established by the London Agreement of 8 August 1945, or of any other international court established by relevant international instruments and whose jurisdiction is recognized by that party (paragraph 1, Article 6 of the protocol).
A party may: a) require that the denial or the gross minimization referred to in paragraph 1 of this Article is committed with the intent to incite hatred, discrimination or violence against any individual or group of individuals, based on race, color, descent or national or ethnic origin, as well as religion if used as a pretext for any of these factors, or otherwise b) reserve the right not to apply, in whole or in part, paragraph 1 of this Article (paragraph 2, Article 6 of the protocol).
5.) Aiding and abetting (Article 7) -Each party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally and without right, aiding or abetting the commission of any of the offences established in accordance with this protocol, with intent that such offence be committed.
The third chapter, entitled "Relations between the Convention and this protocol" specifies which provisions of the Convention shall be applied to this protocol mutatis mutandis, as well as the provisions of the Convention whose scope may be extended to the application of the protocol by each party.
Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data
The Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data 25 was opened for signature to member states on January 28th, 1981, and came into legal force on October 1st, 1985.
Namely, it is necessary to prevent persons who have access to information and data stored in computers and computer systems from committing fraud or making any illegal use of such data. This concern is particularly significant in situations of so-called "cross-border information flow", since it has turned out that the quality of personal data protection weakens as the observed space expands geographically. On the other hand, when this issue is observed within national frameworks, it may be noted that not all national legislations provide a sufficient level of protection to citizens in this field.
The Convention relates to personal data collected in both the public and private sectors. The essential part of the Convention contains substantive provisions in the form of basic principles that apply to all major segments of this field (quality and categories of data collected, security, safety measures, exceptions and limitations, sanctions, etc.). The Convention also resolves the issue of cross-border traffic of automatically collected personal data and provides mechanisms for cooperation among the contracting states.
The intention of the Convention is that the signatory states harmonize their national legislations with the basic principles and recommendations contained in this document. Respecting the rule of law, human rights and basic freedoms, the Convention intends to bring its members together to extend the protection of the basic rights and freedoms of individuals, especially the right to privacy, when it comes to automatic processing of personal data. The states have the initiative, in the process of regulating this matter, to decide on the content, scope and coverage of personal data protection, with the possibility of expressing certain specificities. In doing so, each state must adhere to the established principles. 26

One of the basic principles of personal data protection is the principle of legality and impartiality. This means that personal data are collected, processed and used in accordance with the law. (…) This also implies that personal data are collected, processed and used impartially and in a manner that does not injure personal dignity. (…) Regulations governing the protection of personal data should also include provisions based on the principle of data accuracy. (…) The rights of persons whose data are collected and processed to be informed of which collections contain data related to them, which data they contain, who processes them, for what purpose and on what basis, as well as who the users of such data are, are all contained in the principle of purpose determination. (…) The principle of data availability includes the right of the person whose data are recorded to be informed of the existence of a collection or other records containing personal data; the right to have access to his personal data; to request the correction of inaccurate data related to him; to have data deleted should their processing be contrary to law or contract; and to prohibit the use of inaccurate, outdated and incomplete data related to him, i.e. to prohibit the use of such data should they not be used in accordance with the law or a contract. The rights granted to the person whose data are collected and processed by the principle of purpose determination and the principle of data availability may be restricted, but only to the extent necessary for the protection of national security, public safety, the monetary interests of the state or the suppression of criminal offences, as well as the rights and freedoms of others.
Convention on the protection of children against sexual exploitation and sexual abuse
The Convention on the protection of children against sexual exploitation and sexual abuse 28 was opened for signature to member states on October 25th, 2007, and came into legal force on July 1st, 2010.
The purpose of the Convention is to establish the possibilities for more effective criminal proceedings in which children appear as victims of sexual exploitation and abuse.
From the aspect of the combat against cyber crime, the Convention is a backbone for the harmonization of national legislations with regard to substantive criminal law in all cases in which elements of computer technology are used for the purpose of distribution, exchange and storage of illegal content. The Convention, among other things, was adopted due to a perceived increase in the degree of sexual exploitation of children, especially in the form of child pornography and prostitution, as well as all other forms of child abuse that are destructive to children's health and their psychosocial development. Particularly noteworthy is the significance of the need for a comprehensive international instrument for the prevention of, protection against, and criminal-law response to all forms of sexual exploitation and sexual abuse of children, with particular importance placed on the creation of a special mechanism for monitoring the implementation of the Convention.
The Convention contains the following chapters: I - Purposes, non-discrimination principle and definitions; II - Preventive measures; III - Specialised authorities and co-ordinating bodies; IV - Protective measures and assistance to victims; V - Intervention programmes or measures; VI - Substantive criminal law; VII - Investigation, prosecution and procedural law; VIII - Recording and storing of data; IX - International co-operation; X - Monitoring mechanism; XI - Relationship with other international instruments; XII - Amendments to the Convention; XIII - Final clauses.
The purposes of this Convention are to (paragraph 1, Article 1 of the Convention): a) prevent and combat sexual exploitation and sexual abuse of children; b) protect the rights of child victims of sexual exploitation and sexual abuse; c) promote national and international co-operation against sexual exploitation and sexual abuse of children.
It is interesting to mention the types of preventive measures foregrounded by the Convention in order to prevent all forms of sexual exploitation and sexual abuse of children and to protect children. These measures are the following: recruitment, training and awareness raising of persons working in contact with children (Article 5), education for children (Article 6), preventive intervention programmes or measures (Article 7), measures for the general public (Article 8), and participation of children, the private sector, the media and civil society (Article 9).
The chapter related to substantive criminal law includes the following offences: sexual abuse (Article 18), offences concerning child prostitution (Article 19), offences concerning child pornography (Article 20), offences concerning the participation of a child in pornographic performances (Article 21), corruption of children in order to perform sexual abuse, where the act is considered complete at the moment of recruitment/corruption of the child (Article 22), and acts that involve the misuse of information and communication technology in order to promote pornographic material for the purpose of committing any of the above acts, as well as the performance of such acts (Article 23).
In relation to the misuse of computers and information technology in general, stress is put on criminal offences related to child pornography (Article 20 of the Convention), for which the Convention recommends criminalization at the national level of: producing child pornography; offering or making available such materials; distributing or transmitting them; procuring child pornography for oneself or for another person; possessing such material; and knowingly obtaining access to child pornography through information and communication technologies.
In the chapter relating to investigation, prosecution and court proceedings, the Convention requires that each signatory state take the necessary legislative or other measures to ensure that investigations and criminal proceedings are carried out in the best interests of the child and with respect for the child's rights. Also, each state should adopt a protective approach towards victims, ensuring that investigations and criminal proceedings do not aggravate the trauma experienced by the child, and that proceedings are treated as a priority and carried out without unjustified delay (Article 30).
Convention on the prevention of terrorism
With regard to the combat against terrorism, in 1977 the Council of Europe adopted the Convention on the suppression of terrorism, amended in 2005 by the Convention on the prevention of terrorism, 29 which entered into force on December 1st, 2009.
The part which presents in detail the manifestations of computer crime also shows the connections between, and mutual impact of, terrorism and information technology. It is pointed out that information technologies may be targets of attacks by terrorist organizations, 30 but the capacities of computer technology can also be used for the distribution of various content, as well as for raising funds that enable further terrorist activity (spreading propaganda, sending threats, fundraising, etc.). Finally, the Internet as a global network also serves as a means of communication among group members, as well as an instrument of planning and support. [Footnote 29: Council of Europe, 2005, CETS No. 196. Footnote 30: More recently, the term "cyberterrorism" has been used to denote a special kind of terrorist attack directed towards computer systems and networks with the intention of achieving certain political goals.]
Important articles of the Convention are Articles 5-7, which relate to certain preparatory acts that are of such quality and importance that they have the potential to cause or assist terrorist acts (public invitations, recruiting for the commission of terrorist acts, training future terrorists).
Article 5 of the Convention is related to "public provocation to commit a terrorist offence".
For the purposes of the Convention, "public provocation to commit a terrorist offence" means the distribution, or otherwise making available, of a message to the public, with the intent to incite the commission of a terrorist offence, where such conduct, whether or not directly advocating terrorist offences, causes a danger that one or more such offences may be committed (paragraph 1, Article 5).
Each party shall adopt such measures as may be necessary to establish public provocation to commit a terrorist offence, as defined in paragraph 1, when committed unlawfully and intentionally, as a criminal offence under its domestic law (paragraph 2, Article 5).
Article 6 of the Convention is related to "Recruitment for terrorism".
For the purposes of this Convention, "recruitment for terrorism" means to solicit another person to commit or participate in the commission of a terrorist offence, or to join an association or group, for the purpose of contributing to the commission of one or more terrorist offences by the association or the group (paragraph 1, Article 6).
Each party shall adopt such measures as may be necessary to establish recruitment for terrorism, as defined in paragraph 1, when committed unlawfully and intentionally, as a criminal offence under its domestic law (paragraph 2, Article 6).
Article 7 of the Convention refers to "Training for terrorism". For the purposes of this Convention, "training for terrorism" means to provide instruction in the making or use of explosives, firearms or other weapons or noxious or hazardous substances, or in other specific methods or techniques, for the purpose of carrying out or contributing to the commission of a terrorist offence, knowing that the skills provided are intended to be used for this purpose.
European Union standards
Apart from recommending that the member states sign and adopt the conventions and conclusions of the Council of Europe, the European Union has adopted specific acts aimed at the effective combat against cyber crime.
In September 1990, the Commission of the European Community published a decision in the field of information security, which consisted of six sections related to personal data protection and information security. This decision upheld, for a period of two years, an activity plan that did not explicitly involve criminal-law assistance, but included the following activities: development of a strategic order of information security, analysis of needs for information security, solutions for emergency and temporary needs, specification, standardization and verification of information security, integration of technological and operational achievements within information security through a general strategy, and integration of reliable security functions into the information system. 31 In 2000, within the framework of the European Union, the Directive on electronic commerce 32 was adopted, which paid special attention to the problem of malevolence; numerous other acts followed, from the Decision of the Council of Europe on the prevention of child pornography on the Internet to the recommendations and strategies for the new millennium on protection against and control of computer crime. Each of these acts contributes to constructing a safer information society through improvement of the security of information infrastructure and the combat against computer crime.
There are two Directives that are significant in the combat against cyber crime, which are presented below.
Directive of the Council of the European Community on the legal protection of computer programmes
This Directive 33 was published in the Official Journal of the European Community on May 17th, 1991, with a duty of implementation in the member states starting from January 1st, 1993, prior to which date they were obliged to harmonize their national legislation with the content of the Directive. The need for uniform solutions in the field of legal protection of computer programs was imposed by the differences in the national legislations of member states, which had an adverse impact on the functioning of the common market.
In accordance with the provisions of the Directive, the member states protect computer programs by copyright as literary works, in terms of the provisions of the Berne Convention for the protection of literary and artistic works, and the term "computer program" includes the preparatory design material. The concept of the computer program author has also been determined, and legal protection is offered to any natural or legal person falling under the provisions of national legislation in the field of copyright law applicable to literary works.
The Directive provides for an obligation to sanction precisely specified conduct, duration of protection provided for computer programs, and also prescribes the obligation of seizure of any illegal copy of a computer program in accordance with the procedure laid down by procedural rules of national legislations.
Directive 2006/24/EC of the European Parliament and of the Council on storing data generated or processed in the provision of publicly available electronic communications services or of public communications networks
From the standpoint of the efficient detection and prosecution of criminal offences whose execution leaves "electronic trails" that, in properly conducted proceedings, may gain the force of incontrovertible evidence before the court, this Directive, 34 as well as the procedures set out in its provisions, represents an essential step towards the suppression of activities that endanger the security of computer data.
The main objective of the Directive is to harmonize the provisions of the member states concerning the obligations of providers of publicly available electronic communications services and public communications networks to store certain data received or processed, in order to ensure that these data are available for the purpose of the investigation, detection and prosecution of serious criminal offences. It should be noted that the Directive applies only to data on the traffic and location of legal and natural persons and to the related data necessary for the identification of the subscriber or registered user.
For the purposes of the Directive, definitions of the most significant terms are given at the beginning, and Art. 4 specifies who may gain a right of access to information of a member state and under what conditions. The central part of the Directive is the categorization of the stored data, which are enumerated and sorted into categories and subcategories. In accordance with these provisions, the member states undertake to store all of the said data categories for a period of not less than six months and not more than two years from the date of communication (Art. 6), and the provisions of the Directive also regulate the issue of legal protection of persons whose data are collected and stored for a certain period of time.
Conclusion
Although life and the functioning of society as a whole are impossible today without the use of computers and modern information technology, there is growing awareness that these useful and necessary assets may be used for illicit, illegal purposes, primarily for obtaining illegal material gain for a certain person or for causing harm to others. Since our society has recorded numerous cases of computer misuse for criminal purposes in recent years, it is high time to recognize the need for adequate legislation in the field of cyber crime, which will to some extent be able to respond to the irresponsible conduct of individuals and groups in this segment.
Hundreds of millions of people who use cyberspace daily for business or personal purposes often do not have enough attention, time or will to protect themselves properly and to learn about the potential mishaps that may befall them if they are gullible or insufficiently careful when entering into various types of transactions or communications. The fact is that many classic criminal acts may be committed on the Internet, and that by obtaining information about users one may prepare or enable the execution of almost any criminal offence against life and physical integrity, property, copyright and many others. In addition, there are crimes whose emergence and development are related exclusively to the development of electronic communications and the Internet. This is a wide range of conduct that may be harmless, but may also lead to the most serious crimes. All these illegal forms of conduct are included in the definitions of new criminal offences, which entail a series of procedural and forensic specificities. 35

This paper presents and analyzes the most significant European standards adopted in order to create more effective national legislation in the field of combating cyber crime. Standards given in the Conventions of the Council of Europe, with special regard to the Convention on cyber crime, but also the Directives of the European Union, have to a large extent been a guide for national legislations in order to regulate new situations regarding the misuse of information and communication technologies in the most adequate manner.

Summary. High-tech crime is a phenomenon that, from its very emergence, has been written and spoken about extensively in the theory, judicial practice and legislative practice of developed states and international institutions. It developed rapidly in the last decade of the twentieth century, and its evolution in the twenty-first century is even more evident. States have responded by introducing new measures into their criminal legislations, attempting to reconcile traditional criminal law with the demands of perceiving, investigating and proving new criminal offences. This paper presents and analyzes the most significant European standards adopted with the aim of creating the most effective possible national legislative solutions in the field of combating high-tech crime. The standards given in the Conventions of the Council of Europe, as well as in the Directives of the European Union, have largely guided national legislations so that new situations involving the misuse of information and communication technologies may be regulated as adequately as possible. Among other things, particular attention is devoted to the most significant convention in the field of combating high-tech crime, the Council of Europe Convention on cyber crime, whose objectives are: the harmonization of national legislations with regard to substantive provisions in the field of high-tech crime; the introduction of adequate procedural instruments into national legislations in order to create the necessary basis for the investigation and prosecution of perpetrators of criminal offences in this field; and the establishment of fast and efficient institutions and procedures of international cooperation. Keywords: high-tech crime, European standards, Council of Europe conventions, European Union directives.
"Law",
"Computer Science",
"Political Science"
] |
Theoretical Calculation of the Gas-Sensing Properties of Pt-Decorated Carbon Nanotubes
The gas-sensing properties of Pt-decorated carbon nanotubes (CNTs), which provide a foundation for the fabrication of sensors, have been evaluated. In this study, we calculated the gas adsorption of Pt-decorated (8,0) single-wall CNTs (Pt-SWCNTs) with SO2, H2S, and CO using the GGA/PW91 method based on density functional theory. The adsorption energies and the changes in geometric and electronic structures after adsorption were comprehensively analyzed to estimate the responses of Pt-SWCNTs. Results indicated that Pt-SWCNTs can respond to all three gases. The electrical characteristics of Pt-SWCNTs change in different ways after adsorption. Pt-SWCNTs donate electrons and increase the number of hole carriers after adsorbing SO2, thereby enhancing their conductivity. When H2S is adsorbed on the CNTs, electrons are transferred from H2S to the Pt-SWCNTs, converting Pt-SWCNTs from p-type to n-type sensors with improved conductivity. However, Pt-SWCNTs gain electrons and show decreased conductivity when reacting with CO gas.
Introduction
Carbon nanotubes (CNTs) have structures with abundant pores, large surface-to-volume ratios, and strong adsorption and desorption capabilities for gases. Gas molecules that adsorb on the surface of CNTs change the shape of the CNTs and trigger a redistribution of electrons, leading to a macroscopic change in resistance [1]. Kong et al. [2] used chemical vapor deposition (CVD) to fabricate single-wall CNTs (SWCNTs) on SiO 2 /Si substrates to detect NO 2 (2 ppm to 200 ppm) and NH 3 (0.1% to 1%) diluted in air or Ar. Results showed that the conductivity of SWCNTs decreases threefold after adsorbing NH 3 , whereas the conductivity increases threefold after adsorbing NO 2 . Unlike traditional gas sensors, CNT gas sensors exhibit faster response, higher sensitivity, smaller size, and lower working temperatures [3,4]. These advantages make CNTs suitable for application in industry, the medical field, and environmental protection. The CNT gas sensor has recently received extensive research attention and yielded interesting results. Studying the gas-sensing properties of CNTs is important because these properties are the foundation of sensor design; in this paper, they are investigated mainly through theoretical calculation. Intrinsic CNTs can only detect a few strongly oxidizing and reducing gases, while other gases are only weakly adsorbed with low sensitivity because of the structure and chemical properties of intrinsic CNTs [5-7]. To overcome this limitation, some researchers have proposed various physical and chemical modifications, such as the introduction of new active sites on the surface of CNTs. The authors of [8] showed that polar groups (COOH, NH 2 , NO 2 and H 2 PO 3 ) are promising candidates for enhancing CO 2 and CH 4 adsorption capacity by strengthening adsorption and activating exposed edges and terraces to introduce additional binding sites. Peng et al. [9] found that B-doped CNT gas sensors have good sensitivity to CO and H 2 O. In addition, transition metals are rich in d-electrons and empty orbitals, so small gas molecules can bond strongly to the metal when adsorbed on the surface [10,11]. Studies [12] indicated that CNTs with metal depositions have better sensitivity than intrinsic CNTs. For instance, Pt- and Au-functionalized CNTs are more sensitive by an order of magnitude for NO 2 and NH 3 detection than intrinsic CNTs. The response characteristics of sensors largely depend on the number of active sites, which can strengthen the response of sensors [11]. Pt can adsorb small molecules [13-16] and has good catalytic activity. CNTs have excellent physical and chemical properties and unique structures, so they can be used as a supporting material that influences the activity of Pt catalysts [17]. Conversely, the catalytic activity of Pt can improve the gas-sensing properties of CNTs. Therefore, the present study introduces Pt as a new active site. The adsorption processes and properties of Pt-SWCNTs for SO 2 , H 2 S, and CO are calculated. These gases are highly toxic to the human body, making this research significant by providing theoretical support and a good foundation for the fabrication of suitable CNT-based sensors.
Computation Model and Methods
We built (8,0) SWCNT and gas molecule models using Materials Studio (Accelrys, San Diego, CA, USA), a molecular dynamics simulation software package. The geometries and properties of the system were derived using the quantum mechanics program DMol 3 (Accelrys). We adopted the generalized gradient approximation (GGA) to treat the electronic exchange and correlation effects, as described by PW91 [18]. Pt is a heavy metal with an atomic number of 78; therefore, DFT semi-core pseudopotentials [19] were used to treat the interactions between the nucleus and the valence electrons. To ensure accuracy, the energy threshold and self-consistent field convergence criteria were set to 2.72 × 10 −4 and 2.72 × 10 −5 eV, respectively. The spatial orbital cutoff radius was set to 0.40 nm, whereas Brillouin zone k-point sampling was performed on a 1 × 1 × 2 [20,21] Monkhorst-Pack mesh. A 2.50 nm × 2.50 nm × 0.85 nm periodic boundary was adopted to avoid interactions between adjacent cells.
Two SWCNT unit cells were selected as intrinsic CNTs to build Pt-SWCNTs. According to references [11,22], Pt can easily adsorb on the vacancy defects of SWCNTs, and its adsorption energy is 6.400 eV, which is greater than that of Pt adsorbed on perfect crystal surface (2.750 eV). Accordingly, the present research selected this Pt-SWCNT model with geometry optimization structure, as shown in Figure 1.
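The original calculations used the commercial DMol 3 code, whose input files are not reproduced in the paper; as a rough open-source analogue, the sketch below assembles a comparable Pt-on-vacancy model with the Atomic Simulation Environment (ASE). The vacancy site index and the Pt placement offset are illustrative assumptions, not values taken from the paper.

```python
# Sketch: assembling a Pt-decorated (8,0) SWCNT with a single vacancy
# using ASE. The vacancy index and the Pt offset are illustrative
# assumptions and do not reproduce the paper's DMol3-optimised geometry.
import numpy as np
from ase import Atoms
from ase.build import nanotube

cnt = nanotube(8, 0, length=2, bond=1.42)   # two (8,0) unit cells
cnt.center(vacuum=12.5, axis=(0, 1))        # ~2.5 nm lateral separation

# Create the vacancy by removing one wall atom (site chosen arbitrarily).
vacancy_index = 10
vacancy_pos = cnt.positions[vacancy_index].copy()
del cnt[vacancy_index]

# Place Pt slightly outside the wall at the vacancy, along the local
# radial direction (Pt protrudes from the wall, as described below).
axis_xy = cnt.positions[:, :2].mean(axis=0)
radial = vacancy_pos[:2] - axis_xy
radial /= np.linalg.norm(radial)
pt_pos = vacancy_pos + np.array([0.8 * radial[0], 0.8 * radial[1], 0.0])
cnt += Atoms('Pt', positions=[pt_pos])

print(cnt)  # this geometry would then be relaxed at the GGA/PW91 level
```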
Results and Discussion
The radius of a Pt atom is 0.183 nm, which is greater than that of a C atom (0.070 nm); thus, Pt protrudes from the CNT surface. The bond lengths between the Pt atom and the three adjacent C atoms changed from 0.142 nm to 0.199, 0.199, and 0.189 nm for Pt-C1, Pt-C2, and Pt-C3, respectively. These results are consistent with reference [22].
The quantum chemical energies of Pt-SWCNTs and gas molecules, as well as the optimized structure of the adsorption systems (E Pt-SWCNTs , E gas , and E gas-Pt-SWCNTs ), were calculated. The adsorption energy (E b ) between a gas molecule and the CNTs can be calculated by the following formula:

E b = E gas-Pt-SWCNTs − (E Pt-SWCNTs + E gas ).

At E b < 0, the energy of the adsorption system is less than the total energy of the gas molecule and Pt-SWCNTs. Therefore, the reaction is exothermic and spontaneous. A greater (more negative) adsorption energy releases more energy during the reaction process. However, when E b > 0, it is relatively difficult for the reaction to proceed because of the energy required.
In actual practice, the gas-sensitive response of the sensor is evaluated through the changes in the electrical characteristics (e.g., resistance) of the sensor. Therefore, we also calculated and analyzed the electronic structures of Pt-SWCNTs, the gas molecules, and the adsorption systems. E HOMO and E LUMO represent the highest occupied molecular orbital (HOMO) energy and the lowest unoccupied molecular orbital (LUMO) energy, respectively. E g is the difference between E LUMO and E HOMO , and Q is the net charge of the system. The parameters are defined as follows: E g = E LUMO − E HOMO ; E H-L = E LUMO (gas) − E HOMO (Pt-SWCNTs), the barrier for an electron to transfer from the Pt-SWCNTs to the gas molecule; and E L-H = E LUMO (Pt-SWCNTs) − E HOMO (gas), the barrier for an electron to transfer from the gas molecule to the Pt-SWCNTs.
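As a concrete illustration of this bookkeeping, the sketch below evaluates E b and the directional frontier-orbital barriers from total energies and orbital levels. The absolute input energies are placeholders chosen only so that the outputs echo the SO 2 values quoted later in the text (Tables 1 - 3); they are not results of any calculation.

```python
# Sketch: post-processing DFT outputs into the descriptors used in this
# paper. All input numbers are placeholders, tuned only to echo the
# SO2-case values quoted in the text.

def adsorption_energy(e_complex, e_substrate, e_gas):
    """E_b = E(gas-Pt-SWCNTs) - (E(Pt-SWCNTs) + E(gas)); E_b < 0 => exothermic."""
    return e_complex - (e_substrate + e_gas)

def frontier_barriers(homo_sub, lumo_sub, homo_gas, lumo_gas):
    """Electron-transfer barriers in each direction (all energies in eV)."""
    e_hl = lumo_gas - homo_sub    # substrate electron -> gas molecule
    e_lh = lumo_sub - homo_gas    # gas electron -> substrate
    return e_hl, e_lh

# Placeholder total energies (eV), chosen to reproduce E_b = -1.225 eV:
print(adsorption_energy(-1000.000, -990.000, -8.775))       # -> -1.225

# Placeholder orbital levels (eV), reproducing E_H-L = 0.158, E_L-H = 3.818:
e_hl, e_lh = frontier_barriers(homo_sub=-5.000, lumo_sub=-4.700,
                               homo_gas=-8.518, lumo_gas=-4.842)
print(e_hl, e_lh)   # -> 0.158, 3.818: transfer towards SO2 is far easier
```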
SO 2
SO 2 is colorless, corrosive, and has a strong pungent odor. Moreover, when SO 2 is dissolved in atmospheric water, acid rain is generated, which is harmful to the environment. SO 2 also forms sulfurous acid when dissolved in water, which can irritate the mucous membranes of the eyes and nose.
The full geometric optimization of the Pt-SWCNT and SO 2 adsorption model is shown in Figure 2. An oxygen atom, O1, points to Pt, with Pt-O1 and Pt-S distances of 0.212 and 0.245 nm, respectively. The reaction adsorption energy is -1.225 eV (Table 1), which denotes an exothermic and spontaneous reaction. By contrast, the adsorption energy on intrinsic SWCNTs is -0.830 eV, so Pt doping enhances the interaction between SO 2 and SWCNTs. Pt is not only a sensing element of Pt-SWCNTs, but also an active site. The strong interaction with gas molecules adsorbed on the surface results in deformation of the Pt-SWCNTs and elongation of the Pt-C bonds. The frontier orbital energy differences of SO 2 and Pt-SWCNTs satisfy E H-L << E L-H . A Pt-SWCNT electron only needs to overcome a 0.158 eV energy barrier to transfer to SO 2 , whereas a SO 2 electron needs to overcome a 3.818 eV energy barrier to transfer to Pt-SWCNTs. Therefore, Pt-SWCNTs provide electrons to SO 2 in the adsorption process. A portion of the electrons fills the anti-bonding orbital of S-O1, changing the bond length from 0.143 nm to 0.165 nm. O2 is far from the CNT surface, so the interaction is small, allowing only a small change in the bond length of S-O2 (0.150 nm).
According to the respective Mulliken charge populations, the SWCNT part of Pt-SWCNTs has a 0.147 positive charge and Pt has a 0.147 negative charge before adsorption. After the adsorption process, the SWCNTs have a 0.509 positive charge, whereas Pt has a 0.116 negative charge. SO 2 obtains 0.393 electrons during the adsorption reaction with Pt-SWCNTs, which is 4.6 times that obtained with intrinsic SWCNTs (Table 2). The charge variations (ΔQ SWCNTs , ΔQ Pt ) of the SWCNTs and Pt are 0.362 and 0.031, respectively (Table 3). Therefore, SO 2 obtains electrons mainly from the SWCNTs, whereas Pt exhibits only a small charge change. The transfer of a large number of electrons during adsorption causes a redistribution of the system charges. The density of states (DOS) near the Fermi level exhibits impurity states (for example, a peak at -0.5 eV), and the DOS between the HOMO and LUMO changes. Figure 3 shows that these impurity states are caused by SO 2 adsorption. The p orbitals of the S and O atoms have a large overlap with the d orbitals of the Pt atom, which demonstrates that SO 2 can strongly hybridize with Pt [23]. This has a significant effect on the frontier orbitals of the adsorption system, changing the HOMO and LUMO orbital composition and hence the DOS. Figure 4a shows that the p orbitals of C1 and C3 form a σ bond with S. In Figure 4b, the d orbitals of Pt and the p orbitals of S are hybridized.
The frontier orbital energy gap E g of the system is 0.285 eV after adsorbing SO 2 , which is 0.047 eV smaller than before SO 2 adsorption. This is beneficial for the transfer of electrons between the HOMO and LUMO, thereby enhancing conductivity. SO 2 adsorption on the surface of Pt-SWCNTs has a large adsorption energy and can form a stable structure. The p-type Pt-SWCNTs [11] donate electrons and increase the number of hole carriers, reducing the frontier orbital energies, diminishing the energy gap E g , and enhancing conductivity. Pt-SWCNTs are highly responsive to SO 2 ; the doped Pt effectively improves the adsorption sensitivity of SWCNTs to SO 2 .
H 2 S
H 2 S, the simplest hydride of sulfur, is a colorless toxic gas that smells like rotten eggs and is strongly corrosive. It is also harmful to human health. The S in H 2 S is in its lowest valence state, so H 2 S is strongly reducing.
The adsorption reaction of Pt-SWCNTs and H 2 S is also exothermic, with an E b of -0.977 eV, larger in magnitude than for intrinsic SWCNTs (-0.591 eV; Table 2). The frontier orbital energy differences are E H-L = 4.438 eV and E L-H = 1.519 eV; therefore, H 2 S provides electrons to Pt-SWCNTs in this reaction. Mulliken charge analysis (Table 3) shows that H 2 S donates 0.285 electrons, almost 22 times more than with intrinsic SWCNTs. Pt and the SWCNTs gain 0.019 and 0.266 electrons, respectively, after H 2 S is adsorbed on Pt-SWCNTs (Figure 5). This large electron transfer converts Pt-SWCNTs from p-type to n-type. E g of the adsorption system is 0.283 eV, which is 0.049 eV smaller than before H 2 S adsorption, thus enhancing conductivity. The H 2 S-Pt-SWCNT frontier orbitals are concentrated on the Pt-SWCNTs, and H 2 S is not involved in the composition of the HOMO and LUMO orbitals. Figure 6 shows that the DOS of H 2 S is not distributed between the HOMO and LUMO, and the DOS near the Fermi level is basically the same as that of Pt-SWCNTs, which is consistent with the results for the frontier orbitals (Figure 7). The p orbitals of S have a large overlap with the d orbitals of Pt, and their strong interaction enhances the adsorption between H 2 S and the nanotube surface. From the comparison of adsorption energies and transferred charges, it can be seen that the doped Pt clearly improves the adsorption ability of intrinsic SWCNTs towards H 2 S. H 2 S adsorbs on the surface of Pt-SWCNTs and donates substantial electrons to the Pt-SWCNTs, which converts the CNTs from p-type to n-type. The frontier orbital energies are increased, while the conductivity is enhanced because of the decrease in the frontier orbital energy gap.
CO
CO is a colorless, non-irritating gas. However, when it enters the human body, CO combines with blood hemoglobin, which prevents the union of hemoglobin and oxygen, leading to body tissue hypoxia and even suffocation. The C atom in CO is in the +2 valence state and can be further oxidized to +4. Accordingly, CO is a reducing gas that provides electrons in reactions.
The optimized adsorption structure of CO adsorbed on Pt-SWCNTs is shown in Figure 8 and is consistent with reference [24]: the C atom points to Pt, whereas the O atom points away from the CNT surface. The adsorption reaction is exothermic. The high adsorption energy results in a tight bond between the gas and the CNTs, with an interaction distance of 0.198 nm.
As with the previous two gases, the adsorption energy and transferred charge between Pt-SWCNTs and CO are increased markedly, enhancing the adsorption. Figure 9 shows that the p orbitals of C in CO overlap with the d orbitals of Pt, especially near the Fermi level. The DOS near the Fermi level is changed, and the peak at -7 eV is split, which is related to CO adsorption. The contributions of CO to the HOMO and LUMO come mainly from the p orbitals of the C and O atoms (Figure 10), which change the composition of the frontier orbitals and hence the DOS of the system. During adsorption, CO provides 0.181 electrons (Table 3); the p-type CNTs gain electrons, and the number of hole carriers decreases. The frontier orbital energies and energy gap E g increase, decreasing conductivity.
Discussion
Pt is a heavy metal with an atomic number of 78. Its outer 5d orbitals hold nine electrons, including an unpaired d electron, so Pt readily accepts electrons to reach a stable configuration. Pt doped into SWCNTs obtains electrons from C. In Pt-SWCNTs, Pt has 0.147 electrons, and C1, C2, and C3 adjacent to Pt have 0.028, 0.029, and 0.097 electrons, respectively, which form an electron accumulation zone around the Pt atom. Given that Pt easily obtains electrons, when SO 2 reacts with Pt-SWCNTs, the SWCNTs donate most of the electrons, and only a small charge change in Pt (ΔQ Pt ) occurs. On the contrary, when the target gases are H 2 S and CO, Pt exhibits a large charge change.
Conclusions
In this study, the adsorption of three gases on the surface of Pt-SWCNTs was calculated based on DFT. The gas-sensing properties of Pt-SWCNTs were assessed according to the changes in adsorption energy, geometric structure, and electronic structure during adsorption. The main conclusions are as follows:
1. The doped Pt effectively improves the adsorption sensitivity of intrinsic SWCNTs to all three kinds of gases.
2. The adsorption energy of the reaction between Pt-SWCNTs and SO 2 is large, and numerous electrons are transferred from the CNTs to the target gas. The frontier orbital energies (E HOMO and E LUMO ) and E g are decreased, and the electrical conductivity of Pt-SWCNTs is enhanced. Pt-SWCNTs have high sensitivity to SO 2 .
3. H 2 S acts as a reducing agent when reacting with Pt-SWCNTs. H 2 S provides a large number of electrons, converting the CNTs from p-type to n-type. The frontier orbital energies are increased, whereas E g is decreased, thereby enhancing conductivity.
4. When CO is adsorbed on Pt-SWCNTs, CO provides electrons to the p-type CNTs, decreasing the number of hole carriers. The frontier orbital energies and E g are increased, decreasing conductivity.
Results of the theoretical calculations show that Pt-SWCNTs can respond to all three gases. The electrical characteristics of Pt-SWCNTs show different degrees of change after adsorption of the test gases. As SO 2 is adsorbed on Pt-SWCNTs, the CNTs lose electrons, the number of hole carriers increases, and conductivity is enhanced. As H 2 S is adsorbed on the surface of the CNTs, Pt-SWCNTs receive a large number of electrons and transform from p-type into n-type; the conductivity of Pt-SWCNTs is also enhanced. Comparing the adsorption energies and charge transfers, the sensitivity of Pt-SWCNTs to SO 2 is higher than their sensitivity to H 2 S. Moreover, CO is an electron-donor gas, which reduces the hole carriers and weakens conductivity. Therefore, Pt-SWCNTs can be used to fabricate gas sensors for detecting SO 2 , H 2 S, and CO.
"Chemistry",
"Materials Science"
] |
Particle Acceleration Due to Coronal Non-null Magnetic Reconnection
Various topological features, for example magnetic null points and separators, have been inferred as likely sites of magnetic reconnection and particle acceleration in the solar atmosphere. In fact, magnetic reconnection is not constrained to solely take place at or near such topological features and may also take place in the absence of such features. Studies of particle acceleration using non-topological reconnection experiments embedded in the solar atmosphere are uncommon. We aim to investigate and characterise particle behaviour in a model of magnetic reconnection which causes an arcade of solar coronal magnetic field to twist and form an erupting flux rope, crucially in the absence of any common topological features where reconnection is often thought to occur. We use a numerical scheme that evolves the gyro-averaged orbit equations of single electrons and protons in time and space, and simulate the gyromotion of particles in a fully analytical global field model. We observe and discuss how the magnetic and electric fields of the model and the initial conditions of each orbit may lead to acceleration of protons and electrons up to 2 MeV in energy (depending on model parameters). We describe the morphology of time-dependent acceleration and impact sites for each particle species and compare our findings to those recovered by topologically based studies of three-dimensional (3D) reconnection and particle acceleration. We also broadly compare aspects of our findings to general observational features typically seen during two-ribbon flare events.
Introduction
Magnetic reconnection was first conceived as a way to explain the possible generation of high-energy particle populations during a solar flare (Giovanelli, 1946). Since then, it has been established as the fundamental way by which complex magnetic fields commonly restructure into a lower energy state. This restructuring also allows stored magnetic energy to be converted from initially complex field configurations into other forms of energy. Magnetic reconnection in three dimensions (3D) is also generically associated with parallel electric fields (indeed, the only necessary and sufficient condition for 3D reconnection to take place is that ∫ E ∥ ds ≠ 0, see e.g. Schindler, Hesse, and Birn, 1988; Hesse and Schindler, 1988; Schindler, Hesse, and Birn, 1991, where E ∥ is the component of the electric field parallel to the magnetic field and ds is a line element along which a component of the magnetic field persists). Direct acceleration through these electric fields at the primary energy release sites in the solar corona is likely to be a significant contributor to accelerated particle populations (as discussed in detail in e.g. Birn and Priest, 2007; Zharkova et al., 2011).
A common approach in the study of magnetic reconnection is to use test particles. This is one of several methods that allow us to bridge the gap between the macroscopic description of a system undergoing magnetic reconnection and the kinetic scales of the plasma response. Other methods, for example particle-in-cell (PIC) simulations, are better at describing the interplay between macroscopic and kinetic scales (e.g. Baumann, Haugbølle, and Nordlund, 2013) but, due to numerical constraints, they can only be applied to a severely restricted domain size and also require rescaling of certain parameter regimes. Test particles, on the other hand, omit any back-reaction upon the global field (and each other) caused by their motion.
The equations that govern test-particle motion (for example guiding centre theory, e.g. Vandervoort, 1960; Northrop, 1963) are relatively well understood. Advances in computational power have allowed test-particle studies to move from simple idealised 2D configurations to fully 3D models of complex macroscopic configurations with reconnection taking place in different guises and at multiple sites. Many commonly focus on isolated topological features that are known to be likely sites of current sheet formation and reconnection in 3D, typically motivated by flare particle acceleration. Examples include several regimes of 3D null-point reconnection (e.g. Dalla and Browning, 2005, 2008; Guo et al., 2010; Stanier, Browning, and Dalla, 2012), magnetic separator reconnection (e.g. Threlfall et al., 2015, 2016a) or reconnection at fragmented current sheets (e.g. Turkmani et al., 2005, 2006; Onofri, Isliker, and Vlahos, 2006; Gordovskyy, Browning, and Vekstein, 2010). The application of test-particle analysis to simulations of larger structures embedded in the solar atmosphere, including coronal loops (e.g. Gordovskyy et al., 2014) or indeed entire active regions (Threlfall et al., 2016b), has also uncovered evidence of significant particle acceleration. It is noteworthy that such large-scale structures often contain many locations and topological features (such as nulls, separatrix surfaces, spine lines, and separators) where reconnection takes place. Any resulting acceleration is also intrinsically linked to the chosen parameter regime, specifically depending on resolution and magnetic Reynolds number. When studying separator reconnection, Threlfall et al. (2016a) showed that simple analytical models reproduce all of the essential features of much more complex numerical separator reconnection models at a fraction of the computational effort. In addition, orbit calculation results based on such models can be rescaled without recalculation, allowing access to wider ranges of parameter space than models with specific numerical constraints.
In the present investigation, we examine test-particle behaviour in the vicinity of a simple analytical (scale-free) model of 3D magnetic reconnection without topological features (e.g. magnetic nulls or separators) associated with reconnection. The magnetic field in this model (based on Hesse, Forbes, and Birn, 2005) has similarities to that of an erupting magnetic flux rope. The primary objective of the present work is to investigate the overall response of test particles to this reconnection scenario, which is free of such topological features. A key question that we seek to answer is how the particle acceleration in this model relates to other (topologically underpinned) models of magnetic reconnection and particle acceleration. Despite the simplicity of this model, we also wish to determine whether the resulting particle orbits tie in with observational features that are typically associated with particle acceleration in the solar atmosphere, for example during a flare.
The article is organised as follows. In Section 2 we discuss the model itself, which combines a test-particle approach (whose governing equations are outlined in Section 2.1) with a simple kinematic global field that models the eruption of a magnetic flux tube due to (nonnull) magnetic reconnection (described in Section 2.2). Our results are presented in Section 3, with analysis in Section 4 (including a comparison with other topologically-based models of magnetic reconnection in Section 4.1, and a broad comparison of our results with aspects of solar flare observations in Section 4.2). We finally outline our conclusions and future areas of study in Section 5.
Model Setup
Our approach is broadly split into two components. The guiding centre test-particle orbit motion equations, which are solved numerically for a given set of electric and magnetic fields, form the first component. The second component is an analytical global model that describes a reconnection event in the absence of magnetic null points (in the context of the eruption of a magnetic flux rope) used by the orbit calculations. A brief overview of each component follows.
Relativistic Particle Dynamics
We first outline the equations that govern the particle behaviour. Our investigation makes use of the full relativistic set of guiding-centre-motion equations outlined in Northrop (1963), which govern the evolution of the guiding-centre position, the relativistic parallel velocity, and the perpendicular drifts (Equations (1) - (4)). We note that μ r is the relativistic magnetic moment for a particle with rest mass m 0 and charge q, whose guiding centre is located at R, subject to an electric field E and a magnetic field B with magnitude B = |B| and unit vector b = B/B. Local conditions will dictate aspects of the orbit behaviour, particularly through guiding-centre drifts; the largest in magnitude is typically the E × B drift, which has a velocity u E = E × b/B. The component of velocity parallel to the magnetic field is v ∥ = b · Ṙ, while E ∥ = b · E is the magnitude of the electric field parallel to the local magnetic field, Ṙ ⊥ = Ṙ − v ∥ b is the component of the velocity perpendicular to b, and s is a line element parallel to b. Finally, γ is the Lorentz factor, relating velocities to the speed of light through γ² = 1/(1 − v²/c²). Using this factor, we define a relativistic parallel velocity u ∥ = γ v ∥ for simplicity of notation. For a given magnetic field strength B, the effective fields B̃ and B̃ ∥ that appear in the guiding-centre equations are defined as B multiplied by dimensionless factors; i.e. B̃ and B̃ ∥ retain the dimensions of the magnetic field. Equations (1) - (4) are dimensionless, and are related to physical quantities by multiplying the relevant non-dimensional quantity by a magnetic field strength b scl , a length-scale l scl , a timescale t scl , or some combination thereof (e.g. B = b scl B̄, x = l scl x̄, t = t scl t̄), where the barred quantities represent dimensionless counterparts of specific variables. The choice of these quantities fixes the remaining normalising constants. Our normalising parameters are chosen so that the resulting particle behaviour may reflect that found in the solar corona; hence, unless otherwise stated, all experiments take b scl = 0.001 T, l scl = 100 km and t scl = 100 s. The guiding-centre equations (Equations (1) - (4)) are further simplified by considering only electrons or protons in this study. The rest mass m 0 = m e = 9.1 × 10 −31 kg and charge q = e = −1.6022 × 10 −19 C are fixed for electrons, or m 0 = m p = 1.67 × 10 −27 kg and q = |e| = 1.6022 × 10 −19 C for protons. This allows us to express several normalising constants in terms of a normalising electron or proton gyrofrequency, Ω scl = q b scl /m 0 , which controls the scales at which certain guiding-centre drifts become important.
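As a quick check on these scales, the sketch below evaluates the gyrofrequencies implied by the quoted normalising values, together with the E × B drift for sample field vectors. It is a minimal illustration using arbitrary sample fields, not output from the flux-rope model.

```python
# Sketch: normalising gyrofrequencies and the E x B drift velocity.
# The field vectors are arbitrary samples, not model output.
import numpy as np

b_scl = 1.0e-3          # T
q_e, m_e = 1.6022e-19, 9.1e-31    # electron charge magnitude, mass
q_p, m_p = 1.6022e-19, 1.67e-27   # proton charge, mass

omega_e = q_e * b_scl / m_e       # electron gyrofrequency (rad/s)
omega_p = q_p * b_scl / m_p       # proton gyrofrequency (rad/s)
print(f"electron gyroperiod ~ {2*np.pi/omega_e:.2e} s")   # ~3.6e-8 s
print(f"proton gyroperiod   ~ {2*np.pi/omega_p:.2e} s")   # ~6.5e-5 s

def exb_drift(E, B):
    """u_E = (E x b)/B for field vectors E (V/m) and B (T)."""
    Bmag = np.linalg.norm(B)
    return np.cross(E, B / Bmag) / Bmag

E = np.array([0.0, 10.0, 0.0])      # sample field, V/m
B = np.array([0.0, 0.0, 1.0e-3])    # sample field, T
print(exb_drift(E, B))              # -> [1.0e4, 0, 0] m/s
```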
The guiding-centre equations (1) - (4) are evolved in time using a fourth-order Runge-Kutta scheme, subject to the global electric and magnetic fields in our model. Errors are minimised through the use of a variable time step, constrained by comparisons of the fourth- and fifth-order Runge-Kutta calculations at each step. This scheme has been directly used in several recent works (e.g. Threlfall et al., 2015, 2016a; Borissov, Neukirch, and Threlfall, 2016), with similar implementations in other recent investigations (e.g. Gordovskyy et al., 2014). By controlling the timescales of the guiding-centre approximation code and the timescale of the global flux-rope eruption model so that they never become of a similar size, we are justified in our use of this test-particle approach. Similarly, we monitor the spatial scales of the gyromotion and the global environment, in order to ensure that they remain disparate as well. However, in order to proceed, we must now define the environment in which the particles will gyrate.
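Before doing so, we note that the adaptive-stepping idea is standard. As a minimal sketch, the fragment below integrates a reduced (non-relativistic, uniform-field) guiding-centre system using SciPy's embedded RK45 pair in place of the authors' own fourth/fifth-order comparison; the field values and tolerances are illustrative assumptions.

```python
# Sketch: adaptive Runge-Kutta integration of a reduced guiding-centre
# system. Uniform fields are assumed, so mirror and curvature drifts
# vanish, and the relativistic gamma factor is dropped for brevity.
import numpy as np
from scipy.integrate import solve_ivp

q, m = 1.6022e-19, 1.67e-27            # proton
B = np.array([0.0, 0.0, 1.0e-3])       # T, uniform sample field
E = np.array([0.0, 10.0, 5.0e-2])      # V/m, with a small parallel part

Bmag = np.linalg.norm(B)
b_hat = B / Bmag
u_E = np.cross(E, b_hat) / Bmag        # E x B drift
E_par = float(E @ b_hat)               # parallel electric field

def rhs(t, y):
    # y = [x, y, z, v_par]; dR/dt = v_par b + u_E, dv_par/dt = q E_par / m
    v_par = y[3]
    drdt = v_par * b_hat + u_E
    return [drdt[0], drdt[1], drdt[2], q * E_par / m]

sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 0.0, 0.0, 1.0e5],
                method='RK45', rtol=1e-8, atol=1e-10)
print(sol.y[:, -1])   # final guiding-centre position and v_par
```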
Non-null Magnetic Reconnection Model
For this study, we modify a simple analytical kinematic model, first proposed by Hesse, Forbes, and Birn (2005), which aims to describe how an arcade of closed sheared magnetic field lines may reconnect to form helical field lines, mimicking the evolution and eruption of a flux rope within the solar atmosphere.
To begin, we create a magnetic vector potential, $\mathbf{A}$, of the form given in Equation (5), for a magnetic field strength $b_0$ and length scale $l_0$, using Cartesian variables expressed in non-dimensional form. From this point on, we endeavour to use real (rather than normalised) variables and set $l_0 = l_{\rm scl}$ and $b_0 = b_{\rm scl}$ throughout. Equation (5) fully defines the electric field in the model (in the absence of an electrostatic potential). We note that a magnetic vector potential was not specified in the original model of Hesse, Forbes, and Birn (2005). The constant in the z-component (here set to 0.2) is arbitrary; this value defines the level of shear in the magnetic field resulting from $\nabla\times\mathbf{A}$. We have also rotated the original coordinate system of the magnetic field specified in Hesse, Forbes, and Birn (2005): in our model, the z-axis is aligned with the vertical direction (perpendicular to the solar surface), and the model photosphere lies in the xy-plane where z = 0. The model includes a spatial perturbation, controlled by a time-dependent parameter, $\epsilon(t)$. When combined, these factors control how the flux rope and electric fields evolve. We select a form for the time-dependence once we have considered how the spatial and temporal parts of the perturbation appear in the equations for the electric and magnetic field. The resulting magnetic field, $\mathbf{B} = \nabla\times\mathbf{A}$, is given by Equation (6). Parameters $L_x$, $L_y$, and $L_z$ anchor the model at a specific location and have the same dimensions as x, y, and z (hence, $L_x = l_0\bar{L}_x$, etc.). Equation (6) describes a series of sheared arcade-like field lines at $\epsilon(t) = 0$, which then reconnect due to the addition of a (circuital) magnetic field perturbation. The form of this perturbation is chosen to reconnect both short magnetic loops and the overlying helical field; in this way, this simple model reproduces the basic effects that are thought to occur during the eruption of a flux rope. As noted in Hesse, Forbes, and Birn (2005), this field is non-vanishing everywhere and does not contain topological features that are often considered likely sites of magnetic reconnection, such as magnetic null points, separators, or separatrix surfaces. The addition of the time-varying perturbation also generates an electric field through Faraday's law, in the absence of an electrostatic potential, i.e. Equation (7).
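Since the explicit form of Equation (5) is not reproduced here, the following hedged Python sketch shows only the generic relationship $\mathbf{B} = \nabla\times\mathbf{A}$ evaluated via central finite differences; `A_func` is a placeholder for the analytical potential, and the toy potential in the usage lines is our own, not the model's.

```python
import numpy as np

def curl_from_vector_potential(A_func, x, y, z, h=1e-4):
    """Evaluate B = curl(A) at a point by central differences.
    A_func(x, y, z) -> (Ax, Ay, Az) stands in for the analytical potential
    of Equation (5), whose explicit form is given in the original article."""
    def dA(comp, axis):
        e = [0.0, 0.0, 0.0]
        e[axis] = h
        plus = A_func(x + e[0], y + e[1], z + e[2])[comp]
        minus = A_func(x - e[0], y - e[1], z - e[2])[comp]
        return (plus - minus) / (2.0 * h)
    return np.array([dA(2, 1) - dA(1, 2),   # Bx = dAz/dy - dAy/dz
                     dA(0, 2) - dA(2, 0),   # By = dAx/dz - dAz/dx
                     dA(1, 0) - dA(0, 1)])  # Bz = dAy/dx - dAx/dy

# Toy placeholder potential (NOT Equation (5)): A = (0, x, 0) gives a
# uniform field B = (0, 0, 1), which the finite differences recover.
B0 = curl_from_vector_potential(lambda x, y, z: (0.0, x, 0.0), 1.0, 2.0, 3.0)
```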
The strength of the electric field depends on the chosen form of $\epsilon(t)$; with our photosphere at z = 0, the electric field increases linearly with height above the photosphere before decreasing much more slowly above a height determined by $L_z$. We illustrate the behaviour of both the magnetic field and the strength of the electric field at different values of $\epsilon$ (which may also be thought of as different times) in Figure 1. By considering $\epsilon = t/\tau$ for a constant global timescale, $\tau$, we recover an electric field that is constant in time. We also note the findings of Hesse, Forbes, and Birn (2005) regarding this model, in which the flux rope only begins to form at $\epsilon \approx 10$. By setting $\tau = t_{\rm scl}$ (our normalisation time for the particle orbit code) and initialising particles at integer multiples of $t_{\rm scl}$ (so that t = 0, $t_{\rm scl}$, $2t_{\rm scl}$, etc.), we automatically evolve the global model on much longer timescales than those over which the particles gyrate (recalling that $t_{\rm scl}$ = 100 s, while a typical gyroperiod is of the order of milliseconds or shorter). We want to emphasise again that the key aspect of this model is that it is fully analytical and that the details of the reconnection can be tuned to match desired properties, e.g. to match observations or simulations. In this investigation, we hope to gain a general sense of the particle response to a simple flux rope eruption event, using a configuration that does not contain nulls or other topological features of interest. This would then provide a benchmark for later, more complex environments.

Figure 1 (caption): Evolution of (non-dimensional) magnetic fields and isosurfaces of the electric field strength in our chosen model, outlined in Equations (5) – (7), seen at different stages of the flux rope eruption. The magnetic field is illustrated by blue field lines (with arrows denoting orientation), while purple contours show the strength of the electric field (opacity at 10, 25, and 50 % representing the same percentage of $E_{\rm max}$, which reaches 50 V m$^{-1}$ when $l_{\rm scl}$ = 100 km, $b_{\rm scl}$ = 0.001 T, and $t_{\rm scl}$ = 100 s, for example).

Figure 2 (caption): (a) Corresponds to Case 1, which uses a uniform grid of initial particle positions, all at 20 eV kinetic energy and 45° pitch angles. (b) Shows a Case 2 example, where initial positions and pitch angles are randomised, with particle energies selected according to a Maxwellian distribution. Each orb in either figure represents an initial particle position and is colour coded according to the initial particle energy (see colour bar). The field configuration at $\epsilon = 0$ has been included for reference, with the overlying field identified by grey lines and the low-lying field by black field lines.
Orbit Calculation Results
In order to study different aspects of the particle response to this model, we analysed more than 8000 particle orbit calculations based on two different sets of test-particle initial conditions. We outline these in Section 3.1, before describing the resulting final orbit positions (Section 3.2), energies, and spectra (Section 3.3) recovered using the two sets of initial conditions. The conditions of the global model remain fixed regardless of the choice of particle initial conditions. We choose global parameters $L_x = 5\sqrt{2}\,l_{\rm scl} \approx 707$ km and $L_y = L_z = 5 l_{\rm scl} = 500$ km (corresponding to a peak electric field of 50 V m$^{-1}$ at z = 500 km, x = 0 km) for $l_{\rm scl}$ = 100 km. We only studied the region in the vicinity of the flux rope expansion and eruption, using the range $x, y \in [-10, 10]\,l_{\rm scl}$ and $z \in [0, 40]\,l_{\rm scl}$ (i.e. $x, y \in [-1, 1]$ Mm and $z \in [0, 4]$ Mm when $l_{\rm scl}$ = 100 km) for all test-particle calculations. We analysed the behaviour of a new set of particles at different times over the course of the evolution of the global magnetic field. With $t_{\rm scl}$ = 100 s and $\epsilon = t/\tau = t/t_{\rm scl}$, we initialised a new set of particles every 100 s; this corresponds to initialisation of orbits at integer values of $\epsilon$.
Test-Particle Initial Conditions
To study different aspects of the model behaviour, we used two different sets of initial conditions for the orbits, which we label Case 1 and Case 2. A visual representation of these cases (and the initial field structure of the global model) is provided in Figure 2.
Case 1 comprises an evenly spaced grid of initial orbit positions containing particles with identical initial kinetic energy and pitch angle. Such a uniformly spaced initial set of positions is physically unrealistic, but allows us to build up a clear picture of the particle response to the reconnection region and its surroundings. A total of 8192 particles are divided evenly between eight separate z-planes (at 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, and 0.8 Mm above the photosphere) and are distributed in a 32 × 32 grid in (x, y) (ranging from −0.6 to 0.6 Mm). Each orbit begins with an initial pitch angle of 45° and 20 eV of kinetic energy. These values are deliberately chosen to limit the initial parallel velocity of each orbit and establish general trends in particle behaviour within the reconnection region. Case 1 initial conditions are illustrated in Figure 2a.
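A minimal sketch of how such a Case 1 grid could be generated (our own illustration; variable names are not from the original code):

```python
import numpy as np

# Case 1 initial conditions: 8192 particles on eight z-planes,
# each plane holding a 32 x 32 grid in (x, y).
x = np.linspace(-0.6e6, 0.6e6, 32)        # [m] (-0.6 -> 0.6 Mm)
y = np.linspace(-0.6e6, 0.6e6, 32)
z_planes = np.linspace(0.1e6, 0.8e6, 8)   # 0.1 -> 0.8 Mm above photosphere

X, Y, Z = np.meshgrid(x, y, z_planes, indexing="ij")
positions = np.column_stack([X.ravel(), Y.ravel(), Z.ravel()])
assert positions.shape == (8192, 3)

kinetic_energy_eV = np.full(len(positions), 20.0)  # identical 20 eV
pitch_angle_deg = np.full(len(positions), 45.0)    # identical 45 degrees
```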
Owing to the importance of the energies that result from our calculations, Case 2 comprises a more "realistic" set of initial conditions. In Case 2, the initial orbit energies adhere to a Maxwellian distribution that peaks at an approximately coronal temperature (1 MK). These orbits are also given random initial positions throughout the domain and random initial pitch angles. An example of the initial conditions found in Case 2 is illustrated in Figure 2b. We also note that earlier investigations (e.g. Threlfall et al., 2015) showed that local parallel electric fields (if present) completely dominate the energetics of test-particle orbits, but these investigations have so far only used simplistic (uniform) initial conditions. One of our objectives here is to show how our results are affected by the choice of initial conditions.
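A hedged sketch of Case 2-style initial conditions, sampling kinetic energies from a 1 MK Maxwellian by drawing three Gaussian velocity components, with isotropic pitch angles; the sampler and the domain bounds below are our own illustrative choices, not the authors' exact implementation:

```python
import numpy as np

rng = np.random.default_rng(42)
kB = 1.380649e-23   # Boltzmann constant [J/K]
eV = 1.6022e-19     # [J]
T = 1.0e6           # approximately coronal temperature [K]
m_e = 9.1e-31       # electron rest mass [kg]
n = 8192

# Maxwellian kinetic energies via three Gaussian velocity components.
sigma = np.sqrt(kB * T / m_e)               # thermal speed per component
v = rng.normal(0.0, sigma, size=(n, 3))
energy_eV = 0.5 * m_e * np.sum(v**2, axis=1) / eV

# Random positions within the test-particle domain, isotropic pitch angles.
pos = rng.uniform([-1e6, -1e6, 0.0], [1e6, 1e6, 4e6], size=(n, 3))  # [m]
pitch_deg = np.degrees(np.arccos(rng.uniform(-1.0, 1.0, size=n)))
```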
Final Orbit Positions
The final positions of each of the orbits recovered using Case 1 and Case 2 are very similar, but those for Case 2 are somewhat complicated by the random initial positions. For simplicity, we describe the key findings for this aspect of our study using the initial conditions of Case 1; the corresponding final positions are shown in Figure 3. For clarity, we have chosen to colour field lines in the global field model depending on their trajectories. Black field lines denote a magnetic field that never rises above z = 0.75 Mm. Purple lines indicate helical magnetic field lines (which, for the purposes of identification, we define to be lines that contain more than one maximum in z). Grey lines denote a magnetic field that is not helical and reaches above 0.75 Mm (and typically forms overlying parts of the magnetic arcade). Thus the appearance of increasing numbers of purple field lines in each row of Figure 3 implies that the global field passing through the reconnection region is becoming more helical as time progresses.
All of the orbits in Figure 3 terminate upon impact with the edge of our artificial domain. This domain is arbitrary, but encompasses and extends beyond the reconnection region. No orbits show evidence of magnetic mirroring (whereby an orbit entering a region of increasing magnetic field strength may be reflected due to the invariance of the magnetic moment).
Before the flux rope is formed, the first row of Figure 3 shows that all electrons are accelerated along the sheared magnetic arcade towards the positive y-region of the photosphere, while protons are accelerated towards the negative y-region of the photosphere (due to the difference in charge). Lower energy electrons typically impact the photosphere closer to the polarity inversion line (PIL), with a small minority of highly accelerated electrons terminating predominantly in the negative x-quadrant. The proton results (Figure 3b) display similar trends, but with all orbits accelerated in the opposite direction (towards the negative y-quadrant of the photosphere) and with the same minor asymmetry in x (where highly accelerated protons appear only slightly more likely to impact the solar surface in the positive x-quadrant). This minor asymmetry is most likely the result of the original shear imposed upon the magnetic field.
As time progresses and the flux rope starts to develop, this minor asymmetry in the spread of protons and electrons parallel to the PIL develops further. At $\epsilon = 10$, the final electron positions seen in the second row of Figure 3 are farther from the PIL in y, with many more strongly accelerated electrons now arriving in the negative x-quadrant than seen at $\epsilon = 0$. Many more protons (Figure 3d) also terminate in the positive x-region, while the entire distribution of final positions again lies at the same distance from the PIL in y as that of the electrons, but on the opposite side.
The asymmetry in the final position parallel to the PIL is extremely pronounced by $\epsilon = 19$. The final positions of electrons (Figure 3e) and protons (Figure 3f) fall into two highly distinct categories. Orbits with weak energy gains uniformly terminate at the photosphere in a region $|x| \le 0.6$ Mm, with $y \ge 0.4$ Mm for electrons and $y \le -0.4$ Mm for protons. Strongly accelerated orbits are exclusively found in the (x < 0, y > 0) quadrant for electrons and in the (x > 0, y < 0) quadrant for protons. At this stage of the experiment, many orbits impact the side walls of our domain (which was not the case for earlier stages seen in the other rows of Figure 3).
We are also able to relate regions of strongly or weakly accelerated orbits to specific initial locations using Figure 4, which maps the peak kinetic energy gain over the orbit lifetime back onto the initial particle position. For brevity, we only discuss the proton results at the initial and final stages of the global eruption model (as the electron results typically mirror the proton results in all cases and differ only in the direction of acceleration and the time taken to impact the domain boundary). Firstly, note that all particles at initial positions $(x_0, y, z_0)$ experience the same initial (parallel) electric field regardless of their y coordinate. Thus all these particles start to accelerate in the same direction. This acceleration causes those protons (electrons) starting with y > 0 (y < 0) to travel along the field lines through an extended region of non-zero parallel electric field, allowing them to be accelerated further and to gain higher energies. On the other hand, the protons (electrons) starting with y < 0 (y > 0) simply travel away from the acceleration region and therefore only achieve low energies.
Additionally, we observe that the largest energy gains are strongly correlated with the locations where highly twisted helical field lines form. At $\epsilon = 0$, no yellow positions exist in Figure 4a, indicating that the configuration is incapable of producing the largest energy gains at this time. These energies are only recovered at the later stages of the experiment. Since the electric field in our experiment is independent of time, the particles that achieve these higher energies must have done so by spending more time in the acceleration zone. Specifically, yellow orbits only begin to appear when the first helical field lines form, e.g. at $\epsilon = 10$. The initial positions of these high-energy orbits are well aligned with the purple field lines, but associate with opposite ends of the flux rope depending on the particle species. This can be seen at $\epsilon = 19$ in Figure 4b. The size of the region that produces the most highly accelerated orbits increases with height and also with time. By the time $\epsilon = 19$, more reconnected helical field allows additional particles to enter the region where $E_\parallel$ is strongest and remain there for longer, leading to acceleration to higher energies.
Orbit Energies and Spectra
After broadly describing the impact and acceleration sites of orbits in this reconnection model, we now describe the acceleration and energy gains.
In this section, we first clarify that we report specific orbit findings that are not based on particle fluxes and that should not be linked to observational spectra without great care. These spectra simply reflect the response of our specific particle population to the global environment according to their chosen initial conditions. The energetics are intrinsically linked to our choice of initial conditions and are limited by the fact that we are sampling a discrete set of orbits over a limited range of parameter space for initial position, energy, and pitch angle in both Cases 1 and 2. To illustrate this point, we note for example that the region of smallest energy gains in the bottom plane of Figure 4 actually grows with time. Average energies in this plane decrease from 102 keV at $\epsilon = 0$ to 35.5 keV ($\epsilon = 10$) and ultimately to 7.79 keV ($\epsilon = 19$). This finding is misleading, however, as sampling the entire population reveals an overall increase in average kinetic energy with time. Thus orbits that sample a different range of initial parameter space may have significant effects on the energies and resulting spectra.
While the lowest plane of initial positions suffers a reduction in energy gained in Case 1 as the experiment progresses, the uppermost z-plane is able to achieve much higher energy gains. At $\epsilon = 0$, the average electron energy gain in the top plane is approximately 155 keV (orange orbits in Figure 4a). This average value increases to 289 keV and 369 keV at $\epsilon = 10$ and $\epsilon = 19$, respectively. For identical initial conditions, both electrons and protons typically recover the same final energy (to five significant figures or better).
In order to more fully describe the energetic impact of the "eruption" (formation and expansion of the flux rope) on the particle orbits, Figure 5 illustrates how the maximum and average kinetic energy gained in the experiment by all the particles for Case 1 changes with time. At $\epsilon = 0$, the maximum kinetic energy gained by any orbit is 0.396 MeV, while the average kinetic energy gained for all electrons or protons is 0.135 MeV. Figure 5 allows us to broadly categorise the experiment as having three stages. The first stage begins at $\epsilon = 0$ and sees a slow rise in both the average and peak kinetic energies of the orbits until $\epsilon \approx 6$, where the maximum value has risen to 0.620 MeV and the average has risen to 0.151 MeV. The second stage is marked by a much steeper, sustained increase in the maximum orbit energy gain, peaking at 1.92 MeV by $\epsilon = 10$, accompanied by a smaller rise in average orbit energy (to 0.184 MeV). Following $\epsilon \approx 11$, the third stage sees a small decrease in peak energy gains, while the average energies remain at similar levels. At the end of the experiment, the maximum and average energy gains for Case 1 orbits are 1.59 MeV and 0.198 MeV, respectively. Figure 5 also demonstrates that for the vast majority of the experiment, both protons and electrons yield nearly identical maximum and average kinetic energy gains at each stage of the global field evolution.
The three stages illustrated by this experiment again reflect the evolution of the global kinematic model. Following $\epsilon = 0$, the footpoints of the sheared arcade widen. Strongly accelerated orbits are still accelerating as they reach the photosphere, and reaching the photosphere terminates the orbit calculation. Footpoint widening acts to slightly increase the amount of time each orbit has to gain energy through direct acceleration. The second observed stage of behaviour coincides with the first visual identification of the formation of the flux rope. Once formed, the flux rope allows additional field lines to repeatedly re-enter the reconnection region and again experience direct acceleration by the regions of strongest electric field. The third stage sees the peak and average energies remain almost constant, with perhaps a small decline, which could be due to the footpoint separation. In the final stages of the experiment, most orbits terminate at the side boundaries of our artificial domain, rather than at the photosphere. Orbits at later stages may therefore traverse slightly shorter field lines as $\epsilon$ increases, again raising the possibility of slightly limiting the acceleration process.

Figure 6 (caption): Energy spectra of our test-particle calculations of the Case 2 initial (black) and final energies (red) for both electrons (solid line) and protons (dashed line). The left column represents orbits beginning when $\epsilon = 0$, while the right column shows $\epsilon = 10$. The spectra in the top row are calculated at normalising length scale $l_{\rm scl}$ = 100 km, while those in the bottom row are for $l_{\rm scl}$ = 10 km.
Another way to evaluate the properties of the kinetic energies of the test particles is to form energy spectra, in which the orbit energies are binned and the number of orbits in each bin is presented. To show how this energy spectrum changes over time, we use the initial conditions of Case 2. Figure 6 contains example spectra at the start and at the end of the orbit lifetimes for four different cases. The top row (Figures 6a and 6b) illustrates the initial and final energies of electrons and protons at two different stages of the global eruption model, $\epsilon = 0$ and $\epsilon = 10$, respectively. The bottom row (Figures 6c and 6d) shows the initial and final energy spectra at the same stages, but with a tenfold reduction in the global model length scale (i.e. $l_{\rm scl}$ = 10 km).
Electron and proton final spectra are a close match in all cases in Figure 6. At the original length scale used in earlier experiments ($l_{\rm scl}$ = 100 km), all stages of the global eruption model recover large energy gains. The final energy spectrum terminates abruptly at the high-energy end when $\epsilon = 0$ (Figure 6a), where there are fewer particles above 0.4 MeV than when $\epsilon = 10$. Figure 6b again shows that the later stages of the global eruption model allow particles to achieve MeV energy gains or higher, with a bump on the high-energy tail of the final distribution.
The effect of reducing the length scale of the experiment by a factor of 10 is clear from Figures 6c and 6d, which show that the particle orbits now gain much less kinetic energy over their lifetimes. The final energy spectra peak at approximately 1 – 2 keV, almost exactly a hundredfold reduction in the peak value for the spectra in Figures 6a and 6b. At the beginning of the eruption ($\epsilon = 0$) at the smaller length scale ($l_{\rm scl}$ = 10 km), no particles achieve kinetic energies greater than 4 keV in Figure 6c. Later stages of this eruption increase the peak kinetic energy value to approximately 20 keV, as shown in Figure 6d. This increase in peak kinetic energy between the early and later stages of the experiment strongly resembles the increase seen when the experiment used a higher value of $l_{\rm scl}$.
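For reference, spectra of this kind can be produced by simple logarithmic binning of the orbit energies; the helper below is our own minimal re-implementation, not the authors' analysis code, and assumes strictly positive energies.

```python
import numpy as np

def energy_spectrum(energies_eV, n_bins=60):
    """Bin orbit energies logarithmically and count the orbits per bin,
    in the spirit of the spectra in Figure 6. Assumes energies > 0."""
    lo, hi = energies_eV.min(), energies_eV.max()
    edges = np.logspace(np.log10(lo), np.log10(hi), n_bins + 1)
    counts, _ = np.histogram(energies_eV, bins=edges)
    centres = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centres
    return centres, counts

# Example with a toy energy population:
rng = np.random.default_rng(1)
centres, counts = energy_spectrum(rng.lognormal(3.0, 1.0, size=8192))
```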
Analysis
In this experiment, our aim is to study particle acceleration in a simple analytical reconnection model in the absence of topological features commonly associated with reconnection. While our findings can be compared to those recovered by other test-particle studies of various models of magnetic reconnection (discussed in Section 4.1), several aspects of our results also bear a strong resemblance to certain morphological features observed during typical (two-ribbon) solar flares. We also include a brief discussion of these aspects in Section 4.2.
Comparison with Topological Reconnection Models
Before placing our findings in context with other models of magnetic reconnection, we would first like to emphasise that the nature of magnetic reconnection changes significantly when one moves from a two- to a three-dimensional picture. The reduced number of degrees of freedom in 2D restricts magnetic reconnection to X-type null points, where field lines are cut and pasted together in pairs and the electric field is perpendicular to the magnetic field; thus no acceleration due to parallel electric fields is possible. In 3D, magnetic reconnection occurs within a finite volume in which the component of the electric field parallel to the magnetic field is non-zero and through which magnetic connectivity changes continuously (see e.g. Schindler, Hesse, and Birn, 1988; Hesse and Schindler, 1988; Biskamp, 2000; Priest and Forbes, 2000; Birn and Priest, 2007, and references therein). Thus, in contrast to 2D scenarios, the parallel electric field is a key particle acceleration mechanism in 3D. We therefore limit the scope of our comparison here to fully 3D magnetic reconnection models.
Previous models of particle acceleration at 3D magnetic reconnection sites are underpinned by the specific topology of the reconnection site itself. 3D magnetic null points are likely sites of reconnection and are known to be abundant in the solar atmosphere (e.g. Longcope and Parnell, 2009). The magnetic field configuration in the local vicinity of each null characteristically comprises specific structures, known as the fan or separatrix surface (a surface of field lines radiating out from or in towards the null) and the spine (where field lines asymptotically approach or extend away from the null along a single axis). When considered in isolation (depending on how the reconnection itself proceeds), various effects including drifting, mirroring, and acceleration have been studied for both protons and electrons in a number of different experiments (e.g. Dalla and Browning, 2005, 2008; Stanier, Browning, and Dalla, 2012). In our experiment, we see no evidence of mirroring and a minimal impact of drifts. Because there are no significant or dramatic changes in magnetic field strength, we find that rapid direct acceleration by the local electric field dominates the particle motion. Some models of acceleration at 3D nulls have shown a tendency to accelerate protons more efficiently than electrons (Guo et al., 2010); in our approach, both electrons and protons are equally efficiently accelerated. The structures associated with 3D magnetic nulls must, in reality, be embedded in and form part of the local magnetic environment in the solar corona (in a number of possible configurations). Observational examples of nearly circular post-flare loops (e.g. Masson et al., 2009) use potential field extrapolations to map the loop locations to a local separatrix dome formed from the fan plane of an overlying coronal null. Baumann, Haugbølle, and Nordlund (2013) modelled a similar configuration using a novel combination of particle-in-cell (PIC) and magnetohydrodynamic (MHD) approaches and revealed that the underlying particle acceleration was driven by direct interaction with the local reconnection electric field. We would also note the discussion of Pontin, Galsgaard, and Démoulin (2016) regarding the need for a self-consistent model of particle acceleration at a null-point current sheet. As with 3D null-point reconnection models, we also use a simplified analytical model as a crude first step towards predicting energetic particle locations or impact sites in a flux rope eruption event.
In addition, reconnection and particle acceleration have also been studied in detail at magnetic separators. These are special magnetic field lines that link pairs of magnetic nulls (marking the intersection of the fan planes of both nulls). Like null points, separators are also preferred sites for current sheet formation and reconnection, and have been inferred as sites that undergo magnetic reconnection in the solar corona (e.g. Longcope et al., 2005). A series of recent articles (Threlfall et al., 2015, 2016a) has investigated the implications of separator reconnection for local particle dynamics, using both simple analytical and complex numerical models of several idealised isolated separator reconnection configurations. As here, direct acceleration by the local parallel electric field dominated the particle motion. Impact sites were seen to closely align with the spines and local separatrix surfaces of each null threaded by the separator. Drifts and magnetic mirroring were also identified, but played a lesser role than in the isolated 3D null cases. Electrons were typically accelerated towards two specific impact sites associated with the spines of one null in the system. Accelerated proton impact sites were aligned with the spines of the other null, while the protons themselves took fractionally longer to accelerate. Our present investigation also recovers impact sites at opposite corners of the domain depending on particle charge. A model of separator reconnection that is embedded in a coronal environment is needed to provide a direct comparison with our model (and indeed with the embedded 3D null models described earlier). As an aside, we also note that the scaling of energy gains described in detail in Threlfall et al. (2016a) is also present in our results. Any scale-free model of magnetic reconnection where particle energy gains are due to a (field-aligned) potential difference may be rescaled without recalculation of the particle orbits, according to

$$ \Delta E = \frac{q\, b_{\rm scl}\, l_{\rm scl}^{2}}{t_{\rm scl}}\, \Delta\bar{E}, $$

where $\Delta E$ is the change in energy, and barred quantities represent the normalised values of their counterparts. In Section 3.3 we describe how a tenfold reduction in length scale yields a hundredfold reduction in energy gained by the particle orbits. This is exactly as expected from this simple expression derived in Threlfall et al. (2016a). Thus scale-free models like this one allow for the coverage of larger ranges of parameter space without additional computational effort.
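A one-line consequence of this scaling relation, assuming $\Delta E \propto q\, b_{\rm scl}\, l_{\rm scl}^2 / t_{\rm scl}$ as reconstructed above, is easy to verify numerically (our own sketch):

```python
def energy_scale_factor(b_scl, l_scl, t_scl,
                        b_ref=1.0e-3, l_ref=1.0e5, t_ref=1.0e2):
    """Ratio by which orbit energy gains rescale when the normalising
    scales change, assuming Delta E ~ q * b_scl * l_scl**2 / t_scl
    (the scale-free relation discussed above)."""
    return (b_scl / b_ref) * (l_scl / l_ref) ** 2 * (t_ref / t_scl)

# Tenfold reduction in length scale -> hundredfold reduction in energy:
print(energy_scale_factor(1.0e-3, 1.0e4, 1.0e2))   # prints 0.01
```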
The final type of model that we broadly compare with here is the reconnecting coronal loop model. Turkmani et al. (2005, 2006) inserted test particles into a magnetic cylinder that modelled a coronal loop subjected to photospheric driving, which built up fragmented current sheets throughout the volume. The peak absolute electric field within the current sheets reached approximately 900 V m$^{-1}$, and consequently, both protons and electrons were able to achieve energies of up to 100 GeV. Energised particles in the simulations were either trapped by repeated mirroring due to interaction with multiple electric field regions or exited the loops via the loop footpoints. Our model is able to better control the peak electric field values through the use of a single monolithic electric field at a single broad location, which causes all orbits to be accelerated to some degree. With only a single electric field region, this electric field mirroring (also seen in e.g. Threlfall et al., 2016b, for simulations of a full active region) cannot be studied. A future extension of our work should aim to reduce the size of the reconnection region (for example using the parameters $L_x$ and $L_z$, which were chosen here to match the original model of Hesse, Forbes, and Birn, 2005) and include complexity through the insertion of additional reconnection regions.
A similar, more recent model of reconnecting coronal loops (Gordovskyy et al., 2014) also shows high-energy proton and electron precipitation towards the loop footpoints. These authors recover a high-energy power-law tail using initially Maxwellian particle energies, likely due to the reconnection of multiple fragmented current sheets; this again implies that, for the parameter values specified here, our model lacks an electric field of the appropriate size and sporadic nature. While our simple model certainly has drawbacks, it also has advantages over these models. For example, it is unclear whether the reconnection in coronal loop models takes place at, or in the presence of, specific topological features (e.g. nulls or separators) or geometric features (e.g. quasi-separatrix layers, or QSLs; see Titov, Hornig, and Démoulin, 2002). In our case, we can definitively state that the reconnection modelled here is not associated with separators, separatrix surfaces, or magnetic nulls in any way.
Qualitative Comparison with Flare Observations
Several facets of the results presented earlier are reminiscent of motions, associated with or underpinned by reconnection, that are observed during flares. A comprehensive overview of flare observations can be found in e.g. Fletcher et al. (2011). By comparing our results with flare observations, we do not seek to suggest that this model underpins or models a solar flare, but merely to highlight some interesting similarities between this simple model and aspects of flare observations. In this discussion, we refer to the classic two-ribbon flare picture (discussed in e.g. Savcheva et al., 2016, and based on references therein). Two types of apparent motion are prevalent in observations of such flares. One type is a fast elongation (often termed a "zipper" motion) running parallel to the PIL during the impulsive phase of these flares. The other type is a gradual expansion of the ribbons perpendicular to the PIL during the decline phase of these flares.
Our particle results suggest that both types of motion may be present in our model. The footpoints of our initial arcade clearly separate over time, as seen in Figure 1. The particle impact sites themselves (seen in Figure 3) track this separation, with the highest energy impact sites also steadily diverging from the lowest as the footpoints move apart. This divergence from one to two impact sites would likely generate observational signatures suggesting an apparent motion perpendicular to the PIL. Using the original normalising length and timescale in our experiment ($l_{\rm scl}$ = 100 km, $t_{\rm scl}$ = 100 s), we estimate an apparent velocity of 0.15 km s$^{-1}$ for this motion, well below typical speeds recovered by observations. Flare footpoint widening observations are not always definitive, however, with footpoint widening (and indeed contraction) varying from flare to flare across a range of speeds (e.g. Fletcher and Hudson, 2002). Varying the parameters in our model (such as using an appropriate non-linear, as opposed to linear, function $\epsilon(t)$) would lead to more rapid motions, as the approximate locations of highly accelerated particle impact sites at given times would be affected.
We also recover evidence of an apparent motion that runs parallel to the PIL. Over time, the asymmetry (in x) of the final positions of highly accelerated orbits continues to develop, as can clearly be seen in Figure 3. Observed Hα and hard X-ray footpoint sources sometimes move along extreme ultraviolet (EUV) flare ribbons rather than away from the PIL. These locations (and their apparent motions) are known to represent intense energy deposition in the lower solar atmosphere (e.g. Hudson, Wolfson, and Metcalf, 2006). By averaging the final x-position of the orbits that achieved the largest energy gains, we again broadly estimate an apparent velocity of approximately 0.2 km s$^{-1}$ for the impact sites of this population (when $l_{\rm scl}$ = 100 km, $t_{\rm scl}$ = 100 s). This speed is also much slower than the typical velocities of apparent PIL-parallel motions recovered by observations, but as before, it depends on the parameter values and box size we have chosen.
Our apparent parallel and perpendicular velocities appear much too low when compared with those uncovered in flare observations. We would once again note, however, that we chose our model parameters and non-dimensionalisation without any consideration of modelling a solar flare. Our model can be rescaled in a number of ways, for example through $l_{\rm scl}$, $b_{\rm scl}$, and $t_{\rm scl}$ (or indeed through the parameters $L_x$, $L_y$, and $L_z$), in order to yield speeds of apparent motions that are much closer to those found in solar flare observations. The factor 0.2 is also arbitrary, as is the functional dependence of $\epsilon$ on $t/\tau$.
In addition to velocities, there are other minor similarities between our model results and solar flare observations. For example, we note that our model arcade is initially sheared. The probability that an active region or arcade structure will produce a flare has often been linked to the buildup of shear along the polarity inversion line (e.g. Hagyard et al., 1984; Leka and Barnes, 2007; Qiu, Gary, and Fleishman, 2009). It is also interesting to note that here the footpoints of the magnetic field do not actually move (since there is no plasma flow at the base), despite the apparent motions of the particle impact sites. Equation (6) confirms that the photospheric magnetic field in this model is fixed, while there are significant changes to the coronal magnetic field. This confirms that apparent motions of particle impact sites may be important tools for unraveling the behaviour of coronal magnetic fields (and, in turn, for inferring the nature of magnetic topology and reconnection) during solar flares (e.g. Masson et al., 2009).
Conclusions and Future Work
In this article, we have adapted a scale-free model of a magnetic flux-rope eruption, originally proposed by Hesse, Forbes, and Birn (2005), and employed it to study particle acceleration in a 3D magnetic reconnection configuration that, crucially, does not depend on specific topological features, such as magnetic nulls and separators. In this model, the eruption of the magnetic flux rope and the formation of helical magnetic field lines allow electron and proton orbits to repeatedly re-enter the reconnection region, whereupon they receive strong direct acceleration from the electric field that is associated with the magnetic reconnection. The impact sites of these particles move over time, according to which particle species is being considered; both electrons and protons are ultimately able to achieve the same energies in this model. The model itself is inherently scale free, allowing us to show not only how the final orbit energy gains change over the course of the flux-rope eruption, but also how they scale with different normalisations (with a peak energy gain of 2 MeV or 20 keV depending on the parameter regime used here), according to the formula given in Section 4.1. Our orbit calculation results also bear a notable resemblance to aspects of solar flare observations, although we make no attempt to model such a scenario.
A natural first extension of this work would be to investigate the presence of QSLs, which are geometric (rather than topological) features where reconnection and particle acceleration are thought to take place. It is also worth exploring whether we can reduce the reconnection region size and add more reconnection sites (in the manner seen in simulations of reconnection at fragmented current sheets within coronal loops), and how this would affect the flux rope formation in this model. In line with previous work, a more extended next step would be to attempt to model this eruption using high-resolution MHD simulations. The localised magnetic reconnection that takes place in this model results from a specific magnetic perturbation, introduced to allow the flux rope to erupt. It would be of interest to see whether or how (for example) footpoint motions or an initially non-equilibrium field configuration might build up current layers within the original arcade structure. How these current layers would then dissipate (through reconnection), and whether the configuration would continue to be devoid of magnetic null points or separators, are also open questions.
"Physics",
"Environmental Science"
] |
Uncertainty assessment in 3-D geological models of increasing complexity
The quality of a 3-D geological model strongly depends on the type of integrated geological data, their interpretation and associated uncertainties. In order to improve an existing geological model and effectively plan further site investigation, it is of paramount importance to identify existing uncertainties within the model space. Information entropy, a voxel-based measure, provides a method for assessing structural uncertainties, comparing multiple model interpretations and tracking changes across consecutively built models. The aim of this study is to evaluate the effect of data integration (i.e., update of an existing model through successive addition of different types of geological data) on model uncertainty, model geometry and overall structural understanding. Several geological 3-D models of increasing complexity, incorporating different input data categories, were built for the study site Staufen (Germany). We applied the concept of information entropy in order to visualize and quantify changes in uncertainty between these models. Furthermore, we propose two measures, the Jaccard and the city-block distance, to directly compare dissimilarities between the models. The study shows that different types of geological data have disparate effects on model uncertainty and model geometry. The presented approach using both information entropy and distance measures can be a major help in the optimization of 3-D geological models.
Introduction
Three-dimensional (3-D) geological models have gained importance in structural understanding of the subsurface and are increasingly used as a basis for scientific investigation (e.g., Butscher and Huggenberger, 2007; Caumon et al., 2009; Bistacchi et al., 2013; Liu et al., 2014), natural resource exploration (e.g., Jeannin et al., 2013; Collon et al., 2015; Hassen et al., 2016), decision making (e.g., Campbell et al., 2010; Panteleit et al., 2013; Hou et al., 2016) and engineering applications (Hack et al., 2006; Kessler et al., 2008). Overall, 3-D geological models are usually preferable to 2-D solutions because the object of study is intrinsically three-dimensional in space; they therefore offer a higher degree of data consistency and superior data visualization. Moreover, they enable the integration of many different types of geological data such as geological maps, cross sections, outcrops, boreholes and data from geophysical (e.g., Boncio et al., 2004) and remote-sensing methods (e.g., Schamper et al., 2014). Nevertheless, input data are often sparse, heterogeneously distributed or poorly constrained. In addition, uncertainties from many sources such as measurement error, bias and imprecision, randomness, and lack of knowledge are inherent to all types of geological data (Mann, 1993; Bárdossy and Fodor, 2001; Culshaw, 2005). Furthermore, assumptions and simplifications are made during data collection, and subjective interpretation is part of the modeling process (Bond, 2015). Hence, model quality strongly depends on the type of integrated geological data and its associated uncertainties.
In order to assess the quality and reliability of a 3-D geological model as objectively as possible, it is essential to address underlying uncertainties. Numerous methods have recently been proposed that enable estimates, quantification and visualization of uncertainty (Tacher et al., 2006; Wellmann et al., 2010; Lindsay et al., 2012, 2013, 2014; Lark et al., 2013; Park et al., 2013; Kinkeldey et al., 2015). A promising approach is based on the concept of information entropy (Shannon, 1948). Wellmann and Regenauer-Lieb (2012)
applied this concept to 3-D geological models. In their study, they evaluated uncertainty as a property of each discrete point of the model domain by quantifying the amount of missing information with regard to the position of a geological unit (Wellmann and Regenauer-Lieb, 2012). They consecutively added new information to a 3-D model and compared uncertainties between the resulting models at discrete locations and as an average value for the total model domain, using information entropy as a quantitative indicator. Through their approach, they addressed two important questions: (1) how is model quality related to the available geological information and its associated uncertainties, and (2) how is model quality improved through the incorporation of new information?
Wellmann and Regenauer-Lieb (2012) illustrated their approach using synthetic 3-D geological models, showing how additional geological information affects model uncertainty. The present study goes a step further. It applies the concept of information entropy, as well as model dissimilarity, to a real case, namely the city of Staufen, Germany, at the eastern margin of the Upper Rhine Graben. In contrast to the previous study, the present study evaluates the effects of the consecutive addition of data from different data categories to an existing model on model uncertainty and overall model geometry. We hypothesize that different data types have disparate effects on model uncertainty and that the quantification of these effects provides a trade-off between costs (i.e., data acquisition) and benefits (i.e., reduced uncertainty and therefore higher model quality). Thus, several 3-D geological models of the study site were consecutively built with increasing complexity, each of them based on an increasing amount of (real) categorized data. An approach was developed that uses information entropy and model dissimilarity for the quantitative assessment of uncertainty in the consecutive models. Results indicate that the approach is applicable to complex and real geological settings. The approach has large potential as a tool to support both model improvement through successive data integration and cost-benefit analyses of geological site investigations.
Study site
The city of Staufen (southwest Germany, Fig. 1) suffers from dramatic ground heave that has resulted in serious damage to many houses. Ground heave with uplift rates exceeding 10 mm month$^{-1}$ started in 2007, after seven wells were drilled to install borehole heat exchangers (BHEs) for heating the local city hall (LGRB, 2010). After more and more houses in the historic city center showed large cracks, an exploration program was initiated by the state geological survey (LGRB – Landesamt für Geologie, Rohstoffe und Bergbau) in order to investigate the case. Results showed that the geothermal wells hydraulically connected anhydrite-bearing clay rocks with a deeper aquifer, and the resulting water inflow into the anhydritic clay rock triggered the transformation of the mineral anhydrite into gypsum (Ruch and Wirsing, 2013). This chemical reaction is accompanied by a volume increase that leads to rock swelling, a phenomenon typically encountered when tunneling in such rock (e.g., Einstein, 1996; Anagnostou et al., 2010; Butscher et al., 2011b, 2015; Alonso, 2011), but recently also observed after geothermal drilling (Butscher et al., 2011a; Grimm et al., 2014). The abovementioned exploration program was aimed not only at finding the cause of the ground heave but also at better constraining the complex local geological setting. The hitherto existing geological data were not sufficient to explain the observed ground heave, locate the geological units that are relevant for rock swelling, and plan countermeasures.
Staufen is located west of the Black Forest at the eastern margin of the Upper Rhine Graben (URG). It is part of the "Vorbergzone" (Genser, 1958), a transition zone between the eastern main border fault (EMBF) of the graben and the graben itself. This zone is characterized by staggered fault blocks that were trapped at the graben margin during the opening and subsidence of the graben. The strata of this transition zone are often steeply inclined or even vertical (Schöttle, 2005) and are typically displaced by west-dipping faults with a large normal displacement. The fault system, kinematically linked to the EMBF, has a releasing bend geometry and today experiences sinistral oblique movement (Behrmann et al., 2003). The major geological units at the site comprise Triassic and Jurassic sedimentary rocks, which are covered by Quaternary sediments of an alluvial plain in the south (Sawatzki and Eichhorn, 1999) (Fig. 1).
Three geological units play an important role in the swelling problem at the site: the Triassic Gipskeuper ("Gypsum Keuper") formation, which contains the swelling zone, and the underlying Lettenkeuper and Upper Muschelkalk formations, which are aquifers providing groundwater that accesses the swelling zone via pathways along the BHEs. The Gipskeuper formation consists of marlstone and mudstone and contains the calcium-sulfate minerals anhydrite (CaSO$_4$) and gypsum (CaSO$_4\cdot$2H$_2$O). The thickness of this formation varies between 50 and 165 m, with an average thickness of 100–110 m (LGRB, 2010), depending on the degree of leaching of the sulfate minerals close to the ground surface. It is underlain by the Lettenkeuper formation (5–10 m thickness), consisting of dolomitic limestone, sandstone and mudstone, and the Upper Muschelkalk formation (≈ 60 m thickness), dominantly consisting of limestone and dolomitic limestone.
Input data
Input data for the 3-D geological modeling include all available geological data that indicate (1) boundaries between geological units, (2) the presence of geological units and faults at certain positions, and (3) the orientation (dip and azimuth) of the strata. These data were classified into four categories (Fig. 2): (1) non-site-specific, (2) site-specific, (3) direct problem-specific and (4) indirect problem-specific data.
The non-site-specific data category comprises geological data that are generally available from published maps (Sawatzki and Eichhorn, 1999), the literature (Genser, 1958; Groschopf et al., 1981; Schreiner, 1991) and the database of the state geological survey, LGRB. Furthermore, a digital terrain model (DTM) of 1 m grid size is included in the non-site-specific data. Outcrop and borehole data are mostly scarce and irregularly distributed in space. The site-specific data comprise drill logs of the geothermal drillings, which provided a pathway for uprising groundwater that finally triggered the swelling. Problem-specific data comprise all data collected during the exploration program that was conducted after heave at the ground surface caused damage to the local infrastructure (LGRB, 2010, 2012). This exploration program was initiated because geological knowledge of the site was insufficient for an adequate understanding of the swelling process in the subsurface and for planning and implementing suitable countermeasures. The problem-specific data were further divided into direct data from drill cores of the three exploration boreholes (Fig. 2; EKB 1+2 and BB 3), which add very accurate point information, and indirect data from a seismic campaign (Fig. 2; Profiles 1–5), which add rather "fuzzy" 2-D information that has to be interpreted.
3-D geological modeling
The 3-D geological models were constructed using the geomodeling software SKUA/GoCAD® 15.5 by Paradigm. They cover an area of about 0.44 km² and have a vertical extent of 665 m. A smaller area of interest (AOI, 300 m × 300 m, 250 m vertical extent) was defined within the model domain, including the drilled wells and the area where heave at the ground surface was observed and the problem-specific data were collected.
The strata of the models cover 10 distinct geological units, including Quaternary sediments, Triassic and Jurassic bedrock, and crystalline basement at the lower model boundary (Fig. 3). The Triassic strata are further divided (from top to bottom) into four formations of the Keuper (Steinmergelkeuper, Schilfsandstein, Gipskeuper and Lettenkeuper), two formations of the Muschelkalk (Upper Muschelkalk, Middle to Lower Muschelkalk) and the Buntsandstein formation. Figure 3 provides an overview of the modeled geological units and the average thicknesses used in the initial models.
Four initial models were consecutively built, according to the four previously described data categories. Model 1 was constructed based only on non-site-specific data (maps, literature, etc.); Model 2 additionally considered site-specific data (drill logs of the seven geothermal drillings); Model 3 also included "direct" problem-specific data (exploration boreholes); and finally, Model 4 included "indirect" problem-specific data (seismic campaign).
Figure 2 (caption excerpt): site-specific data comprise additional geological data and information on a local to regional scale; problem-specific data have direct reference to the area of interest (AOI); direct problem-specific data refer to the AOI and were collected explicitly to address the swelling problem.

Data density and structural model complexity increase from Models 1 to 4, and the models required successively higher efforts in data acquisition in the field.
First, an explicit modeling approach (Caumon et al., 2009) was used to create representative boundary surfaces for the geological units and faults of the initial model, because the available input data were, in terms of spatial coverage, not sufficient to directly use an implicit approach. Discrete smooth interpolation (DSI), provided by GoCAD® (Mallet, 1992), was used as the interpolation method, resulting in Delaunay-triangulated surfaces for both horizons and faults. Subsequently, based on the explicitly constructed surfaces, a volumetric 3-D model was built by implicit geological modeling, implemented in the software SKUA®. The implicit modeling approach uses a potential field interpolation considering the orientation of strata (Frank et al., 2007) and is based on the U–V–t concept (Mallet, 2004), where horizons represent geochronological surfaces.
General approach
Our approach for assessing uncertainties in the 3-D geological models consists of four distinct steps (Fig. 4):

i. Building the initial 3-D geological models of increasing data density and structural complexity (see above).
ii. The definition of fault and horizon uncertainties. Horizon uncertainties were specified in SKUA® by a maximum displacement parameter or by alternative surface interpretations, resulting in a symmetric envelope of possible surface locations around the initial surface.
To constrain the shape of generated horizons, SKUA® uses a variogram that spatially correlates perturbations applied to the initial surfaces (Paradigm, 2015). Fault uncertainties were defined by a maximum displacement parameter and a Gaussian probability distribution around the initial fault surface (Caumon et al., 2007; Tertois and Mallet, 2007).
iii. The creation of 30 model realizations for each initial model based on the surface variations defined above, applying the Structure Uncertainty workflow of SKUA®.
iv. The extraction of the geological information from all model realizations for analysis, comparison and visualization. For this purpose, the AOI was divided into a regular 3-D grid of 5 m cell size, resulting in 180 000 grid cells. The membership of a grid cell to a geological unit was defined as a discrete property of each grid cell and extracted for all 30 model realizations. Based on these data, we calculated the probability of each geological unit being present in a grid cell in order to derive the information entropy at the level of (1) a single grid cell, (2) a subset representing the area of extent of a geological unit and (3) the overall AOI. Furthermore, the fuzzy set entropy was calculated to determine the ambiguousness of the targeted geological units Gipskeuper (km1), Lettenkeuper (ku) and Upper Muschelkalk (mo) within the AOI. Calculations were conducted using the statistics package R (R Core Team, 2016). The underlying concepts and equations used to calculate probabilities and entropies are described in the following section.
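The original calculations were performed in R; purely as an illustration of step iv, the following Python sketch (our own, with hypothetical array names) converts a stack of discrete-unit realizations into per-cell probabilities $P_x(U)$:

```python
import numpy as np

def unit_probabilities(realizations, n_units):
    """Turn n model realizations on a common grid into per-cell unit
    probabilities. `realizations` has shape (n, n_cells) and stores an
    integer unit ID for every grid cell."""
    P = np.zeros((n_units, realizations.shape[1]))
    for u in range(n_units):
        P[u] = np.mean(realizations == u, axis=0)
    return P   # P[u, x] = fraction of realizations with unit u in cell x

# Example: 30 realizations of 180 000 cells with 10 geological units.
rng = np.random.default_rng(0)
demo = rng.integers(0, 10, size=(30, 180_000))
P = unit_probabilities(demo, 10)
assert np.allclose(P.sum(axis=0), 1.0)   # probabilities sum to 1 per cell
```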
Information entropy
The concept of information entropy (or Shannon entropy) was first introduced by Shannon (1948) and is well known in probability theory (Klir, 2005). It quantifies the amount of missing information and, hence, the uncertainty at a discrete location x, based on a probability function P of a finite data set. When applied to geological modeling, information entropy expresses the "degree of membership" of a grid cell to a specific geological unit. In other words, information entropy quantitatively describes how unambiguously the available information predicts that unit U is present at location x. Information entropy was recently applied to 3-D geological modeling by Wellmann et al. (2010) and Wellmann and Regenauer-Lieb (2012) in order to quantify and visualize uncertainties introduced by the imprecision and inaccuracy of geological input data. A detailed description of the method can be found in the cited references and is briefly summarized here. By subdividing the model domain M into a regular grid, a discrete property can be assigned to any cell at location x in the model domain. In a geological context, the membership of a grid cell to a geological unit U can be defined as such a property by an indicator function:

$$ I_U(x) = \begin{cases} 1, & \text{if unit } U \text{ is present at } x,\\ 0, & \text{otherwise.} \end{cases} \qquad (1) $$

Applied to all n realizations k of the model space M, the indicator function yields a set of n indicator fields I, with each of them defining the membership of a geological unit as a property of a grid cell. Considering the combined information of all indicator fields, it follows that membership is no longer unequivocally defined at a location x and hence has to be expressed by a probability function $P_U$:

$$ P_x(U) = \frac{1}{n} \sum_{k=1}^{n} I_{U,k}(x). \qquad (2) $$

From the probabilities of occurrence $P_x(U)$, the uncertainty (or amount of missing information) associated with a discrete point (grid cell) can be obtained by calculating the information entropy $H_x$ (Shannon, 1948) for the set of all possible geological units U:

$$ H_x = -\sum_{U} P_x(U)\,\log_2 P_x(U). \qquad (3) $$

In a next step, information entropy $H_M$ can be calculated as an average value of $H_x$ over the entire model space:

$$ H_M = \frac{1}{|M|} \sum_{x \in M} H_x, \qquad (4) $$

where |M| is the number of elements within M. $H_M = 0$ denotes that the location of all geological units is precisely known (no uncertainty), and $H_M$ is maximal for equally distributed probabilities of the geological units ($P_{U_1} = P_{U_2} = P_{U_3} = \ldots$), which means that a clear distinction between geological units within the model space is not possible. Similarly, average information entropy can also be applied to only a subset of the model space ($S \subseteq M$):

$$ H_S = \frac{1}{|S|} \sum_{x \in S} H_x. \qquad (5) $$

$H_S$ can be used to evaluate the contribution of a specific sub-domain to overall uncertainty. In the case of a drilling campaign, for example, the sub-domain can comprise a targeted depth or a geological formation of specific interest. In this study, we used the probability function $P_x(U)$ with $H_S$ conditioned by $P_x(U) > 0$ to define subsets within the model space. Thus, each subset represents the probability space of a geological formation of interest, namely the Lettenkeuper ($S_{ku}$), Gipskeuper ($S_{km1}$) and Upper Muschelkalk ($S_{mo}$) formation.
Wellmann and Regenauer-Lieb (2012) also adapted fuzzy set theory (Zadeh, 1965) in order to assess how well-defined a single geological unit is within a model domain. A fuzzy set of n model realizations introduces a certain degree of indefiniteness to a discrete property (e.g., membership of a geological unit), resulting in imprecise boundaries which can be referred to as fuzziness. The fuzziness of a fuzzy set (De Luca and Termini, 1972) in the context of a geological 3-D model can be quantified by the fuzzy set entropy H_U (Leung et al., 1992; Yager, 1995):

$$H_U = -\frac{1}{|S_U|} \sum_{x \in S_U} \big[ P_x(U) \log_2 P_x(U) + (1 - P_x(U)) \log_2 (1 - P_x(U)) \big], \quad (6)$$

where the probability function P_x(U) with an interval [0, 1] represents the degree of membership of a grid cell to a fuzzy set. H_U equals 0 when P_x(U) is either 0 or 1 everywhere within the set, and H_U equals 1 when all cells of the set have an equal probability of P_x(U) = 0.5.
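A matching sketch of Eq. (6), restricted as in the study to the probability space of the unit (cells with P_x(U) > 0); `p_ku` is the per-cell probability vector from the previous sketch and remains an assumed name.

```r
# Fuzzy set entropy H_U (De Luca & Termini form), Eq. (6)
fuzzy_entropy <- function(p) {
  p <- p[p > 0]                        # subset S_U: cells where the unit can occur
  term <- ifelse(p == 1, 0,            # a cell with P = 1 contributes no fuzziness
                 -p * log2(p) - (1 - p) * log2(1 - p))
  mean(term)
}
H_U_ku <- fuzzy_entropy(p_ku)
```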
Model dissimilarity
The stepwise addition of input data to the models (see Sect. 3.1) not only affects uncertainties associated with a geological unit but also the geometry of the units and therefore their position, size and orientation in space. New data may significantly change the geometry of a geological unit but only marginally change the overall uncertainty. Thus, both model uncertainty and dissimilarity should be evaluated. In order to quantify the dissimilarity d between consecutive models in terms of the probability of a specific geological unit occurring in a given voxel, two measures, the Jaccard and the city-block distance (Fig. 5), are proposed to complement information entropy. However, dissimilarities between models, and therefore uncertainties, have recently also been addressed very effectively using geo-diversity metrics such as formation depth and volume, curvature and neighborhood relationships together with principal component analysis (Lindsay et al., 2013) and through topological analysis, which quantifies geological relationships in a model (Thiele et al., 2016a, b). The set of locations for which the probability P_x(U) of belonging to a particular geological unit U is greater than a threshold value t can be defined by

$$M_t(U) = \{x \in M \mid P_x(U) > t\} \quad (7)$$

A threshold value of t = 0 was applied in order to capture and consider the same sample space as in H_U. This definition is highly sensitive to outcomes of small probability and might, in some cases, be more robust using a threshold value greater than 0 (e.g., t > 0.05). The Jaccard similarity measure (Webb and Copsey, 2003) is then defined as the size of the intersection divided by the size of the union (overlap) of two sample sets (M1, M2), which in our case represent the similarity in position of a geological unit U between two models:

$$s_{JAC} = \frac{|M1_t(U) \cap M2_t(U)|}{|M1_t(U) \cup M2_t(U)|} \quad (8)$$

Accordingly, the dissimilarity between models can be expressed by the Jaccard distance

$$d_{JAC} = 1 - s_{JAC}, \quad (9)$$

where d_JAC = 1 indicates maximum dissimilarity (no match in position of a geological unit U between two models) and d_JAC = 0 indicates complete overlap.
Even though the use of binary dissimilarities is straightforward and suitable to quantify absolute changes in position of a geological unit between models, it does not account for fuzziness (see Sect. 3.3.2). Hence, the dissimilarity may be overestimated by the Jaccard distance. In order to include fuzziness, the normalized city-block distance was employed, adopting the probability function P_x(U) as a dimension to compare dissimilarities between the two sample sets (M1, M2) (Webb and Copsey, 2003; Paul and Maji, 2014):

$$d_{NCB} = \frac{1}{N} \sum_{x \in M1 \cup M2} \left| P_x^{M1}(U) - P_x^{M2}(U) \right|, \quad (10)$$

where N is the size of M1 ∪ M2 (i.e., the number of grid cells present within the union). The distance is greatest for d_NCB = 1.
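Both distance measures reduce to a few vectorized lines; in this hedged R sketch, `p1` and `p2` are assumed to be the per-cell probability vectors of unit U for the two model sets (computed as in the entropy sketch above), with the threshold t = 0 as in the study.

```r
in1 <- p1 > 0                                    # M1_t(U), Eq. (7) with t = 0
in2 <- p2 > 0                                    # M2_t(U)
d_jac <- 1 - sum(in1 & in2) / sum(in1 | in2)     # Jaccard distance, Eqs. (8)-(9)

idx   <- in1 | in2                               # union of both sets; its size is N
d_ncb <- sum(abs(p1[idx] - p2[idx])) / sum(idx)  # normalized city-block distance, Eq. (10)
```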
Initial 3-D models
The four consecutively constructed initial models show a stepwise increase in structural complexity (Fig. 6). Model 1 is based solely on non-site-specific data, which makes the extrapolation of structures difficult, especially into depth (Jessell et al., 2010). Dip and strike were assumed uniform (40 and 35°) for all horizons across the model domain (see Fig. 6). Information from geological maps and outcrop data revealed a normal fault within the AOI, which was assumed to be ENE-WSW striking with a moderate displacement of about 50 m. In Model 2, horizon positions of the Schilfsandsteinkeuper (km2), Gipskeuper (km1) and Lettenkeuper (ku) were locally constrained by site-specific information provided by drill logs of the geothermal wells, slightly impacting fault displacement and thickness of the formations. However, changes in model geometry were minor, as no further information on horizon orientations was available and no additional faults could be located. By adding the direct problem-specific data from the exploration wells to Model 3, a horst-graben structure was identified that entailed a considerable displacement at two normal faults between and to the northwest of the wells, with displacements of 120 and 70 m, respectively. Furthermore, the drill logs included orientation measurements of the strata, resulting in a shift in position and inclination of layers compared to the previous models. Thus, large parts of the model domain within the AOI changed from Model 2 to Model 3, and, as a consequence, dissimilarities between these models are particularly high (see Sect. 4.4). Finally, Model 4, which included data from a seismic campaign, has the highest degree of structural complexity. The information provided by seismic sections revealed uncertainties which were present previously but not captured by the simpler Models 1 to 3. Ultimately, seismic data force the interpreter to add complexity down to a certain scale. However, seismic surveys are inherently ambiguous and allow alternative interpretations, especially concerning the orientation and number of faults as well as the type of fault contact to a fault network (e.g., branching) (Røe et al., 2014; Cherpeau and Caumon, 2015; Julio et al., 2015). In our case, seismic sections and interpretations were adopted from LGRB (2010).
The indirect problem-specific data from the seismic 2-D survey located several additional faults within the AOI and in some cases caused a shift in position of faults compared to Model 3. The AOI was strongly fragmented by the added faults, and the orientation of layers is no longer uniform but varies strongly between fault blocks. In summary, the stepwise integration of data according to the four data categories improved our general knowledge of subsurface structures at the study site (Fig. 2). In addition, the effect of data integration from different exploration stages on modeled subsurface geometry could be evaluated and visualized.
Multiple model realizations
The multiple (30) model realizations created by the Structural Uncertainty workflow of SKUA® are illustrated in Fig. 7 using 2-D cross sections of Models 1 and 4 as examples. A total number of 30 realizations and a cell size of 5 m were chosen as a compromise between model detail, the lowest practical limit for statistical viability and data handling. For the same reason, we did not base our number of realizations on an estimate of convergence. Instead, we used the estimate of 30 realizations for a stable fluctuation in fuzzy entropy in a model developed by Wellmann et al. (2010) as a guideline value for our model. Perturbations in horizon location are based on (1) alternative surface interpretations, which reflect a maximum deviation in dip and azimuth (±5°) from the initial surface, and (2) constant displacement values, which were assigned in order to account for uncertainties in formation thickness and boundary location. For a more detailed explanation of our choice of parameters, assigned probability distributions and specific input modes of the Structural Uncertainty workflow, please refer to the Supplement (Tables S1 and S2). In Model 1, the non-site-specific data set includes minimal constraints, resulting in faults and horizons of the realizations that are widely dispersed but parallel. In contrast, the faults and horizons of the Model 4 realizations are more narrowly dispersed where problem-specific data were available within the AOI. The workflow handles equal uncertainties consistently across models by producing a similar pattern of horizontal displacement in Models 1 and 4. This can be seen in particular for structures located close to the NW boundary, which were not further constrained by consecutively added geological data. However, it is also apparent from the mostly uniform orientation of the surfaces in the 30 realizations of each model that the perturbation measures implemented in the Structural Uncertainty workflow did not allow for large variations in dip and azimuth of horizons or faults. Therefore, uncertainty may be systematically underestimated, especially at greater depths.
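For readers without access to the proprietary workflow, the following toy R sketch mimics the orientation part of the perturbation: it draws 30 dip/azimuth deviations within the stated ±5° bound around the Model 1 orientation. The uniform distribution is our assumption, since the workflow's internal sampling scheme is not reproduced here.

```r
set.seed(1)
n_real  <- 30
dip     <- 40 + runif(n_real, min = -5, max = 5)  # perturbed dip, degrees
azimuth <- 35 + runif(n_real, min = -5, max = 5)  # perturbed azimuth, degrees
```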
Distribution of information entropy
Information entropy, quantified at the level of individual grid cells, can be visualized in 3-D to identify areas of uncertainty and evaluate changes in geometry resulting from successive data integration. Figure 8a shows the distribution of information entropy for Models 1 and 4. It can also be seen that the approach is suitable for locating areas with high degrees of uncertainty, indicated by dark red colors (hot spots) in this figure. Furthermore, Fig. 8b highlights where additional constraints from the data helped to optimize the model by reducing uncertainties (ΔH_x < 0) and whether further constraints are needed in locations of specific interest. The overall distribution of uncertainty was clearly affected by additional geological information from site- and problem-specific input data (Model 4). This effect is highlighted by the changes in entropy between the models (Fig. 8b). Additional constraints on horizon and fault boundaries caused a shift in position and orientation of geological units, followed by a large redistribution of uncertainties, indicated by the changes in entropy. It can be seen that new hot spots of uncertainty were introduced in proximity to the faults identified by the exploration boreholes and the seismic data incorporated into Model 4 (see Fig. 6). However, these new areas of uncertainty can be considered an optimization of the model because large parts of the preceding Model 1 did not reflect the complex local geology. Model 1 (wrongly) predicted low uncertainties for areas where information on unidentified but existing structures (i.e., faults) was missing. This illustrates that epistemic uncertainties at the study site are likely substantial. Even Model 4 will inevitably still underrepresent the true structural complexity at this site, especially in areas of low data density. In a risk-assessment and decision-making process, this can be problematic because low-uncertainty areas might in fact be no-information areas. In such a case, the respective model area would actually be highly uncertain. However, ambiguities in data interpretation (e.g., seismic sections) can lead to incorrectly identified structures and uncertainty in any case, even in areas of high data density. Nevertheless, the approach allows one to assess and visualize uncertainties related to structures that have been identified during site investigation. To lessen the limitations posed by non-sampled locations, Yamamoto et al. (2014) proposed a post-processing method for uncertainty reduction, using multiple indicator functions and interpolation variance in addition to information entropy. Based on information theory, Wellmann (2013) further proposed joint entropy, conditional entropy and mutual information as measures to evaluate correlations and reductions of uncertainty in a spatial context. However, uncertainty from a lack of evidence of a geological structure (e.g., a fault), known as imprecise knowledge (Mann, 1993), still depends on the density and completeness of available input data.
Average information entropy
The calculated average information entropy H_M of the consecutive models steadily decreases with higher data specificity (i.e., from non-site- to problem-specific, see Fig. 2) from Models 1-4 (Fig. 9). Mean values of H_M ranged from 0.56 (Model 1) to 0.39 (Model 4), where H_M = 0 would denote no structural uncertainty. The decrease from Models 1 to 4 is approximately linear, indicating that all four categories of geological data had a similar impact on overall model uncertainty, even though the added information resulted in quite different model geometries and, as discussed above, in some cases in a local increase in entropy (see Fig. 8b). A similar but more pronounced trend was observed for the average entropy H_S of the subsets S_km1, S_ku and S_mo, which represent the domain of the three geological units that are of particular importance to the swelling problem. However, entropy, i.e., the amount of uncertainty, is considerably higher within the domain of these geological units than for the overall model space, especially for the subsets S_ku and S_mo, identifying them as areas of a particularly high degree of uncertainty. Note that these units are the aquifers that have been hydraulically connected to the swellable rocks via the geothermal drillings. Nevertheless, all entropy values are comparably moderate, considering that a maximum of (only) five different geological units was found in any one grid cell across all four models, yielding a possible maximum entropy of H_M = 2.32 for an equal probability distribution (P_1 = P_2 = P_3 = P_4 = P_5). For comparison: if all 10 geological units were equally probable, the maximum entropy would be 3.32. Furthermore, median values and interquartile range dropped from 0.51 (0-0.99) in Model 1 to 0 (0-0.84) in Model 4. This helps to illustrate that the share of grid cells with H_x = 0 (indicating no inherent uncertainty) increased notably by 34.8 % in relative terms, from 40.6 % (Model 1) to 54.8 % (Model 4), and that the remaining entropies in Model 4 are limited to a considerably smaller number of cells within the model domain.
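The maximum-entropy figures quoted above follow directly from Eq. (3) for a uniform distribution over k units, H_max = log2(k), which a one-line check confirms:

```r
log2(5)   # 2.32: five equally probable units in a grid cell
log2(10)  # 3.32: all ten geological units equally probable
```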
Overall, comparing the pre- to post-site-investigation situations (Models 1-4), site- and problem-specific investigations were all equally successful in adding information to the model and reducing uncertainties in the area of the targeted horizons. While the benefits from the different data are equal, the costs of data acquisition (i.e., work, money and time required) may vary considerably, depending on the exploration method (e.g., drillings and seismic surveys). An economic evaluation was not within the scope of this study. Nevertheless, the approach presented could improve cost and benefit analyses by quantifying the gain in information through different exploration stages.
Fuzzy set entropy
The fuzzy set entropy was calculated to indicate how well-defined a geological unit is within the model space. Applied to the swelling problem of our case study, a high degree of uncertainty remains with regard to the position of the relevant geological units (km1, ku, mo) after full data integration. We obtained fuzzy set entropy values (H_U) ranging between 0.329 and 0.504 (Fig. 10). The fuzziness of these geological units only slightly changed from Models 1 to 4, indicating that higher data specificity did not translate into more clearly defined geological units within the model domain. This can be partially attributed to the complex geological setting of the study site. In the process of data integration, additional boundaries between geological units are created at newly introduced faults, increasing the overall fuzziness of a unit.
In the case of the Lettenkeuper formation (unit ku), boundaries are even slightly less well-defined in Model 4 compared to Model 1. This is likely related to the low thickness of the formation (5-10 m, Fig. 3) relative to the mesh size (5 m). A finer grid could reduce this effect; however, computation time would increase significantly. Wellmann and Regenauer-Lieb (2012) propose using unit fuzziness to determine an optimal representative cell size and reduce the impact of spatial discretization on information entropy. As previously discussed in Sect. 4.2, our workflow does not explicitly consider uncertainties through dip and strike variations by dedicated parameter values but through perturbations based on alternative surface interpretations, which in our case likely underestimates the fuzziness of the targeted geological units at greater depths. Thus, overall fuzziness, particularly in Model 1, may be significantly higher than calculated.
Model dissimilarity
A gain in structural information through newly acquired data usually not only impacts model uncertainty but is also associated with a change in model geometry. The calculated distances between models can identify the data category with the strongest impact on model geometry and make it possible to determine whether model geometry and uncertainty are related. Figure 11 shows the calculated Jaccard and city-block distances between the models with respect to the targeted geological units km1, ku and mo. Calculated distances between models are rather high, with values of up to 0.78, indicating a pronounced shift in position of the geological units after data were added. The addition of both direct and indirect problem-specific data to Model 3 had a strong impact on model geometry, which can be seen by comparing the calculated distances between Models 2, 3 and 4 for both the Jaccard and city-block measures (Fig. 11). In contrast, site-specific data had a much lower effect, with less than a 20 % (0.2) change in unit position, except for ku of the Jaccard distance (see the distance between Models 1 and 2).
Overall, the city-block distance, which considers the fuzziness of geological boundaries, shows a similar trend to the Jaccard distance; however, changes are much less pronounced, especially for unit ku. According to the low city-block distance, absolute changes in probability P_x(U) for each grid cell are small, whereas high Jaccard distances indicate a large number of grid cells being affected by newly added data. Thus, the Jaccard distance likely overestimated the actual dissimilarity between models. Comparing unit ku for both distances, the disparity between values hints at a large number of low-degree changes in membership of the grid cells (ΔP_x(U) ≪ 1). These predominately low-degree changes are likely related to the abovementioned high degree of unit boundary fuzziness and the resulting ill-defined geological unit ku being shifted within the model domain. However, a direct comparison of fuzzy set entropy to the corresponding city-block distance yields no quantifiable relationship between model geometry and structural uncertainty.
Nonetheless, both distance measures allow the quantification and assessment of different aspects of dissimilarities and, therefore, of changes in geometry across models. Nevertheless, the city-block distance is preferable when sets of multiple realizations are compared because it factors in the probability of the occurrence of a geological unit at a discrete location. In recent years, various distance measures have already been applied in other contexts to create dissimilarity distance matrices and compare model realizations in history matching and uncertainty analysis, particularly in reservoir modeling (Suzuki et al., 2008; Scheidt and Caers, 2009a, b; Park et al., 2013). These include the Hausdorff distance, which, similar to our approach, directly compares the geometry of different structural model realizations, but also more sophisticated measures that calculate distances between realizations based on flow model responses from a transfer function.
Summary and conclusions
Prior work has demonstrated the effectiveness of information entropy in assessing model uncertainties and providing valuable insight into the geological information used to constrain a 3-D model. Wellmann and Regenauer-Lieb (2012), for example, evaluated how additional information reduces uncertainty and helps to constrain and optimize a geological model using the measure of information entropy. Their approach focused on a hypothetical scenario of borehole data and cross-section information newly added to a synthetic model. In the present study, information entropy and, in addition, model dissimilarity were used to assess the impact of newly acquired data on model uncertainties using actual site-investigation data in the complex geological setting of a real case.
We presented a new workflow and methods to describe the effect of data integration on model quality, overall structural understanding of the subsurface and model geometry. Our results provide a better understanding of how model quality can be assessed in terms of uncertainties during the data acquisition process of an exploration campaign, showing that information entropy and model dissimilarity are powerful tools to visualize and quantify uncertainties, even in complex geological settings. The main conclusions of this study are as follows:

1. Average and fuzzy set entropy can be used to evaluate uncertainties in 3-D geological modeling and, therefore, support model improvement during a consecutive data integration process. We suggest that the approach could also be used to perform a cost-benefit analysis of exploration campaigns.
2. The study confirms that 3-D visualization of information entropy can reveal hot spots and changes in the distribution of uncertainty through newly added data in real cases. The method provides insight into how additional data reduce uncertainties in some areas and how newly identified geological structures may create hot spots of uncertainty in others. Furthermore, the method stresses that parsimonious models can locally underestimate uncertainty, which is only revealed after new data are available and being considered.
3. Dissimilarities in model geometry across different sets of model realizations can effectively be quantified and evaluated by a single value using the city-block distance. A combination of the concepts of information entropy and model dissimilarity improves uncertainty assessment in 3-D geological modeling.
However, some limitations of the presented approach are noteworthy. Although it was designed to assess uncertainties in the position and thickness of horizons, uncertainty in orientation could only be included indirectly through perturbations based on alternative surface interpretations, not by explicit dip and azimuth parameter values intended for this purpose. This may result in a systematic underestimation of uncertainties at greater depths of the model domain. Furthermore, our study site (Vorbergzone) is a highly fragmented geological entity, and epistemic uncertainties due to missing information about unidentified but existing geological structures are likely substantial.
Future work should therefore aim to include "fault block uncertainties" more effectively in the workflow, for example by including multiple fault network interpretations (Holden et al., 2003; Cherpeau et al., 2010; Cherpeau and Caumon, 2015) or by considering fault zones that produce a given displacement by a variable number of faults. Finally, all data of the investigated site were collected prior to our analysis; therefore, additional data were not explicitly collected in order to reduce detected uncertainties within the consecutive models. Applying this approach during an ongoing site investigation could improve targeted exploration and allow a well-founded cost-benefit analysis through uncertainty hot-spot detection.
Data availability. The underlying research data were collected and provided by the state geological survey, LGRB. They are freely available in the form of two extensive reports (LGRB, 2010, 2012) summarizing the findings of the exploration campaigns conducted in the city of Staufen (Germany). Both reports can be downloaded from http://www.lgrb-bw.de/geothermie/staufen. Since the size of the simulation data sets is too large for an upload, the authors encourage interested readers to contact the coauthors.
The Supplement related to this article is available online at doi:10.5194/se-8-515-2017-supplement.
Figure 1. Study site and location of the model area and area of interest (AOI).
Figure 2. Data categories and geological input data used to build four initial 3-D geological models. The green square indicates the area of interest (AOI), where data were extracted for further analysis. For the geological formation color code, see Fig. 1.
Figure 5. Distance measures used to calculate dissimilarities between models (M1, M2). (a) Jaccard distance (d_JAC) using a true/false binary function and (b) normalized city-block distance based on a probability function.
Figure 6. (a) Cross section through the AOI of all four initial geological models with projected borehole tracks (black lines) and 3-D representations of (b) Model 1 and (c) Model 4.
Figure 7. Cross sections through Models 1 and 4. The multiple lines show 30 model realizations with shifted faults and horizons (for the location of the cross sections, see Fig. 6). The horizontal lines indicate the land surface (purple) and the base of the Quaternary (blue).
Figure 8. 3-D view of the AOI with a discretization of 5 m for (a) average information entropy H_M of Models 1 and 4 and (b) change in entropy ΔH_x between both models.
Figure 9. Average entropy H_M calculated for the different models (mean and median) and for subsets of the model space of each model (S_km1, S_ku, S_mo).
Figure 10. Fuzzy set entropy H_U of the targeted geological units km1, ku and mo of the different models.
Figure 11. Dissimilarities between the different models expressed by (a) Jaccard distance and (b) city-block distance.
"Geology"
] |
Research Progress of Group II Intron Splicing Factors in Land Plant Mitochondria
Mitochondria are important organelles that provide energy for the life of cells. Group II introns are usually found in the mitochondrial genes of land plants. Correct splicing of group II introns is critical to mitochondrial gene expression, mitochondrial biological function, and plant growth and development. Ancestral group II introns are self-splicing ribozymes that can catalyze their own removal from pre-RNAs, whereas group II introns in land plant mitochondria have undergone degeneration of their RNA structures and have thus lost the ability to self-splice. Instead, splicing of these introns in the mitochondria of land plants is promoted by nuclear- and mitochondrial-encoded proteins. Many proteins involved in mitochondrial group II intron splicing have been characterized in land plants to date. Here, we present a summary of research progress on mitochondrial group II intron splicing in land plants, with a major focus on protein splicing factors and their probable functions in the splicing of mitochondrial group II introns.
Introduction
Mitochondria are proposed to have originated from α-proteobacteria through endosymbiosis [1]. During endosymbiosis and evolution, most ancestral bacterial genes were either lost or transferred into the host nuclear genome, so that nearly all mitochondrial proteins are encoded by nuclear genes. As a result of their proteobacterial origin and of sequence deletion/insertion or horizontal gene transfer during long-term evolution, introns are commonly present in the mitochondrial genomes of land plants. According to their RNA-folding patterns and splicing mechanisms, introns in plant mitochondria are classified into two groups, I and II. Most mitochondrial introns in land plants are group II introns, and only a single group I intron is present in the mitochondrial cox1 of some flowering plants [2,3]. Correct excision of these introns is critical to the gene expression and biological function of mitochondria and to the growth and development of plants. Canonical group II introns are self-splicing introns that can remove themselves from pre-RNAs in vivo. However, group II introns in plant mitochondria have degenerated, resulting in the lack of regions required for their self-splicing and the loss of their ability to self-splice [4]. Instead, splicing of plant mitochondrial group II introns is promoted by numerous nuclear- and mitochondrial-encoded protein cofactors [5]. So far, proteins from diverse families have been found to function in mitochondrial group II intron splicing in land plants.
Here, we summarize research progress on mitochondrial group II intron splicing in land plants, with a major focus on protein splicing factors and their probable functions in the splicing of mitochondrial group II introns.
Group II Intron Structure
Group II intron RNAs are characterized by a conserved secondary structure that consists of six stem-loop domains (DI to DVI) extending from a central wheel [6] (Figure 1). DI is the largest domain; it interacts with other domains of group II introns and functions in RNA folding [7]. DII and DIII are smaller domains, and they interact with each other to form key elements of the active site of group II introns [8]. DIV is the most diverse domain among group II introns, and the ancestral DIV often carries a sequence expressing an intron-encoded protein (IEP). IEPs are multi-functional proteins with reverse transcriptase and maturase activities and are vital to the self-splicing of group II introns. DV is the most conserved domain, mostly composed of a 34 bp stem-loop structure. DV has a catalytic triad AGC and a binding site for Mg2+ ions, and it can interact with DI to form a catalytic core [9]. DVI harbors a branch-point adenosine that is generally located 7-8 bp upstream of the 3′ end of the intron. Not all group II introns in plant mitochondria possess these conventional secondary structural features [2]. For example, nad1 intron 1 has a larger DV loop and a DVI without a branch-point adenosine. nad1 intron 2 has a short and strongly base-paired DVI. nad4 intron 2 does not have a branch-point adenosine residue at the expected position.
Group II Intron Splicing
Group II introns are commonly spliced by the branching pathway, which involves the same two-step transesterification reactions as the splicing of spliceosomal introns in nuclei (Figure 2). Firstly, the 2′-OH of the branch-point adenosine attacks the phosphate at the 5′ splice site and attaches to the 5′ end of the intron, forming a free 3′-OH at the 5′ splice site and a lariat intermediate composed of the intron and the 3′ exon. Secondly, the free 3′-OH of the 5′ exon attacks the phosphate at the 3′ splice site, connecting the 5′ and 3′ exons and releasing the intron in a lariat form [10]. In the mitochondria of land plants, most group II introns are spliced by the branching pathway, and several of them are spliced through the hydrolytic pathway or the circularization pathway. For example, because of the lack of a branch-point adenosine in DVI, splicing of nad1 intron 1 proceeds through the hydrolytic pathway [2]. During this pathway, H2O or -OH, instead of the branch-point adenosine in DVI, attacks the phosphate at the 5′ splice site, and the intron is released in a linear form (Figure 2). Splicing of nad1 intron 2 in wheat mitochondria is accomplished through the circularization pathway [6]. In the first step, the free 3′-OH of an external free exon, proposed to be generated by the spliced-exon reopening reaction, attacks the phosphate at the 3′ splice site, generating ligated exons and a splicing intermediate. In the second step, the 2′-OH at the 3′ end of the intron attacks the phosphate at the 5′ splice site, releasing an independent exon and an intron in a circular form (Figure 2). In vivo, the self-splicing and mobility of group II introns need the aid of IEPs. IEPs are multi-functional proteins and usually have a reverse transcriptase (RT) domain and an RNA-binding (X) domain at the N-terminus. The RT domain participates in the reverse transcription of intron RNAs into DNAs. The X domain is related to RNA splicing and maturase activity. Some IEPs also contain a DNA-binding (D) domain and an endonuclease (En) domain following the X domain, both of which are critical for the mobility of group II introns. During the self-splicing and retrotransposition of group II introns, the IEP functions as a maturase and reverse transcriptase, recognizing its parent intron RNA and forming an IEP-intron ribonucleoprotein complex to promote splicing and reverse splicing of group II introns [11]. For group II introns in plant mitochondria, the coding sequences of IEPs have been lost or degenerated, resulting in the lack of their ability to self-splice. Instead, the splicing of group II introns in plant mitochondria is promoted by proteins encoded by nuclear and mitochondrial genes [5].
Maturase
In the mitochondrial genomes of angiosperms, only one maturase-related (matR) gene has been maintained, within intron 4 of nad1 [12,13]. MatR, encoded by the matR gene, is well conserved in angiosperms; it contains a shortened RT domain, an intact X domain, and fragments of the D/En motif [5,14]. MatR in angiosperms is closely related to the maturases encoded by bacterial group II introns [15]; thus, it is proposed to have similar functions in angiosperms. In Brassicaceae, MatR was found to be related to the splicing of many group II introns in mitochondria, including its host intron, nad1 intron 4 [14].
In addition to matR in mitochondrial genomes and matK in chloroplast genomes, there are four maturase genes, designated nMat 1 to 4, in the nuclear genomes of angiosperms. nMat genes exist as stand-alone open reading frames and encode proteins closely related to the maturases encoded by group II introns [16-18]. Based on their topology and proposed evolutionary origins, the four nMATs encoded by nMat genes are divided into type I and type II maturases [16,17]. nMAT1 and nMAT2 are type I maturases, which contain the RT domain and the X domain but have lost the D/En motif, whereas nMAT3 and nMAT4 are type II maturases, harboring the RT domain, the X domain, and a predicted non-functional D/En motif. It is thus expected that all four nMATs in angiosperms have kept splicing activities but lack mobility-associated functions. Subcellular localization experiments indicate that nMAT 1 to 4 are all targeted to mitochondria [17], and genetic and biochemical data have shown that they are all related to the splicing of mitochondrial group II introns in Arabidopsis or maize. nMAT1 is essential for the splicing of mitochondrial intron 1 of nad1, intron 1 of nad2, and intron 2 of nad4 in Arabidopsis [18]. nMAT2 facilitates the splicing of 11 mitochondrial group II introns in Arabidopsis, including introns 2 and 3 of nad1; introns 1 and 4 of nad2; intron 2 of nad4; introns 1, 2, and 3 of nad5; intron 2 of nad7; and the introns of rps3 and cox2 [17,19]. nMAT3 and nMAT4 seem to be related in evolution [20], and they both function during the splicing of mitochondrial nad1 introns 1, 3, and 4 in Arabidopsis and/or maize [21-23].
PPR Proteins
Pentatricopeptide repeat (PPR) proteins typically have tandem arrays of a motif of 35 amino acids [24]. Based on their motif architecture, PPR proteins are classified into the P and PLS classes. P-class PPR proteins harbor only PPR (P) motifs of 35 amino acids, while PLS-class PPR proteins usually contain P, L (longer), and S (shorter) motifs that form tandemly repeated PLS triplets [25]. Based on the conserved C-terminal domains following the tandem arrays of PLS triplets, the PLS-class PPR proteins are further classified into the PLS, E, E+, and DYW subclasses. PPR proteins are widely present in land plants and compose a large RNA-binding protein family. Most known PPR proteins are imported into mitochondria and/or chloroplasts and function in multiple steps of RNA metabolism [26,27]. The P-class PPR proteins generally function in diverse aspects of organellar RNA processing, such as RNA splicing, RNA stabilization, and RNA cleavage or translation, while the PLS-class PPR proteins mostly function in RNA editing [28]. Recently, increasing genetic and biochemical data have indicated that PPR proteins have critical functions in the splicing of organellar group II introns in land plants. A detailed summary of PPR proteins involved in mitochondrial group II intron splicing in maize and Arabidopsis is given in Table 1. Most characterized PPR proteins are needed for the splicing of one or several group II introns in the mitochondria of land plants. For instance, DEK2, EMP16, and PPR18 in maize [29,39,44] and EMB2794, MID1, and OTP43 in Arabidopsis [54,55,59] are specifically involved in the splicing of a single group II intron in mitochondria. DEK41, EMP8, and PPR278 in maize [32,35,47] and BLX, MISF68, and MISF74 in Arabidopsis [53,57] are involved in the splicing of several group II introns in mitochondria. Additionally, two P-class PPR proteins in maize, PPR-SMR1 and SPR2, participate in the splicing of a large proportion of mitochondrial group II introns. PPR-SMR1 has a small MutS-related (SMR) domain at the C-terminus and functions in the splicing of 16 mitochondrial group II introns in maize [48]. SPR2 is a small PPR protein merely harboring four PPR motifs, and it is required for the splicing of 15 mitochondrial group II introns in maize [49].
mTERF Proteins
Similar to PPR proteins, mitochondrial transcription termination factors (mTERFs) are characterized by harboring various numbers of tandem repeats of mTERF motifs, each containing 30 amino acids that form three α-helices [63]. mTERF proteins are widely present in metazoans, green algae, and plants [63]. In metazoans, the mTERF proteins have been grouped into four subfamilies, mTERF1 to mTERF4, which target mitochondria, bind to nucleic acids, and regulate DNA replication, transcription, or translation of mitochondrial genes [64-68]. In land plants, there are more mTERF members; 31 mTERFs in maize and 35 mTERFs in Arabidopsis have been identified [69,70]. However, only 14 mTERFs have been functionally characterized in plants; they are all localized in chloroplasts and/or mitochondria and regulate organellar gene expression [71]. Present studies indicate that plant mTERFs can regulate the expression of organellar genes at the transcriptional or post-transcriptional level, including transcription [72], intron splicing [73], and translation [74]. Two plant mTERFs have been shown to be involved in mitochondrial group II intron splicing. mTERF15 in Arabidopsis is required for the splicing of mitochondrial nad2 intron 3 [75]. ZmSMK3, an mTERF protein in maize, contains two mTERF motifs and plays an important role in the splicing of the fourth nad1 intron and the first nad4 intron in mitochondria [76].
CRM Domain Proteins
CRM domain proteins have a conserved RNA-binding domain denoted the chloroplast RNA splicing and ribosome maturation (CRM) domain. The CRM domain is derived from an ancient ribosome-associated protein and has been maintained in eukaryotes only within the genomes of algae and plants [77,78]. In archaea and bacteria, the CRM domain exists as a stand-alone protein encoded by single-copy genes, while in plants, CRM proteins form a family containing one to four CRM domains [77]. There are 16 CRM domain proteins in Arabidopsis and 14 in rice, and according to their domain organization, they can be divided into four subfamilies: chloroplast RNA splicing 1 (CRS1), CRS2-associated factor (CAF), and subfamilies 3 and 4 [77]. Most known CRM domain proteins are involved in RNA splicing in plant chloroplasts or mitochondria. To date, at least 10 CRM domain proteins have been confirmed to promote group II intron splicing in chloroplasts, such as CRS1, CAF1, CRM family member 2 (CFM2), and CFM3 [79]. These CRM domain proteins are required for the splicing of nearly all group II introns in chloroplasts, and the group II introns spliced by each CRM domain protein are overlapping but not identical.
The functions of CRM domain proteins in chloroplasts have been well characterized, but their roles in the splicing of mitochondrial introns have rarely been studied. Arabidopsis mitochondrial CAF-like splicing factor 1 (mCSF1) is a member of the CAF subfamily and contains two CRM domains. mCSF1 has been demonstrated to be localized exclusively to mitochondria and to be involved in the splicing of 13 group II introns in mitochondria, including the introns of cox2 and rps3; introns 2 and 3 of nad1; introns 1, 2, 3, and 4 of nad2; introns 1, 2, and 3 of nad5; and intron 2 of nad7 [80]. Zm-mCSF1 is the ortholog of Arabidopsis mCSF1 in maize; it has two CRM domains and is required for the splicing of six mitochondrial group II introns, including introns 2 and 3 of nad2, introns 1 and 2 of nad5, intron 3 of nad7, and the intron of ccmFc [48]. CFM6 to CFM9, members of subfamily 3, harbor one CRM domain and are proposed to localize to mitochondria or nuclei [77]. Genetic and biochemical data imply that Arabidopsis CFM9 targets mitochondria and mediates the splicing of 17 group II introns, including nad1 introns 1, 2, and 3; nad2 introns 1, 2, 3, and 4; nad4 introns 2 and 3; nad5 introns 1, 2, and 3; nad7 introns 1, 2, and 4; the rps3 intron; and the cox2 intron [81]. Arabidopsis CFM6 was recently characterized to localize to mitochondria and to be specifically involved in the splicing of nad5 intron 4 [82]. Loss-of-function mutations in these mitochondrial CRM domain proteins generally hinder the assembly and function of mitochondria and then result in severely retarded growth or defective seed development [48,80-82].
DEAD-Box RNA Helicase
RNA helicases are enzymes that catalyze the unwinding of duplex RNA and the rearrangement of ribonucleoprotein complexes [83,84]. Based on their conserved motifs and structures, RNA helicases are divided into six superfamilies, SF1 to SF6 [85]. DEAD-box RNA helicases constitute the largest subfamily of SF2. They are characterized by nine conserved motifs and named after the conserved amino acid sequence DEAD (Asp-Glu-Ala-Asp) in motif II [86]. The nine conserved motifs of DEAD-box RNA helicases constitute the helicase core, which has been reported to be essential for ATP binding, ATP hydrolysis, and RNA binding [87-90]. DEAD-box RNA helicases exist in prokaryotes and eukaryotes, where they play important roles in RNA processing [91]. About 60 DEAD-box RNA helicase genes have been found in plants [92], but most of them have not been functionally characterized. Only two members in Arabidopsis and one member in maize have been shown to be required for the splicing of mitochondrial group II introns. The putative mitochondrial RNA helicase 2 (PMH2) was characterized to be essential for the splicing of 15 group II introns in Arabidopsis mitochondria, including nad1 introns 2 and 3; nad2 introns 1, 2, and 4; nad4 introns 2 and 3; nad5 introns 1, 2, and 3; nad7 introns 1 and 4; the rps3 intron; the cox2 intron; and the rpl2 intron [93]. ABA OVERLY SENSITIVE 6 (ABO6) is related to the splicing of 12 mitochondrial group II introns in Arabidopsis, including nad1 introns 1, 2, 3, and 4; nad2 introns 3 and 4; nad4 introns 1, 2, and 3; and nad5 introns 1, 2, and 3 [94]. Maize ZmRH48 was recently found to be required for the splicing of mitochondrial nad2 intron 2; nad5 intron 1; nad7 introns 1, 2, and 3; and the ccmFc intron [95].
Other Proteins
The plant organelle RNA recognition (PORR) domain was previously designated the 'domain of unknown function 860' (DUF860) and was later renamed the PORR domain because of its RNA-binding ability [96]. PORR proteins constitute a small family in angiosperms, with 15 members in Arabidopsis, 15 in maize, and 17 in rice [96]. Nearly all PORR proteins in land plants are postulated to be located in chloroplasts or mitochondria, and two PORR proteins have been determined to function in plant mitochondrial group II intron splicing. WTF9 is required for the splicing of the introns in mitochondrial rpl2 and ccmFc in Arabidopsis [97]. OsPORR1 is associated with nad4 intron 1 splicing in rice mitochondria [98].
The regulator of chromosome condensation 1 (RCC1) proteins typically contain RCC1-like domains. The RCC1-like domain was first identified in RCC1 [99] and has tandemly repeated RCC1 motifs, a conserved domain of about 50 amino acids [100]. RCC1 proteins are predicted to be critical to the regulation of nuclear gene expression. In Arabidopsis and maize, 25 and 31 RCC1 proteins have been identified, respectively, but only two of them, RUG3 and DEK47, have been found to be splicing factors in mitochondria. RUG3 contains seven RCC1 motifs and is essential to the splicing of mitochondrial nad2 introns 2 and 3 in Arabidopsis [101]. DEK47 also contains seven RCC1 motifs and is involved in the splicing of introns 1, 2, 3, and 4 of the nad2 transcript in maize mitochondria [102]. As the RNA-binding activity of RCC1 proteins has not been proven, they are predicted to recruit RNA-binding factors or other proteins to the splicing complex [101,103].
In land plant mitochondria, the requirement of multiple protein factors for the splicing of a single group II intron hints at the potential involvement of a splicing complex. Indeed, increasing evidence indicates that large ribonucleoprotein complexes associated with the splicing of group II introns may exist in the mitochondria of land plants. Both nMAT2 and PMH2 are required for the splicing of 10 out of 23 group II introns in Arabidopsis mitochondria [19] (Figure 4). Analysis of pull-down experiments and native mitochondrial extracts showed the interaction between nMAT2 and PMH2 and the presence of nMAT2, PMH2, and their intron RNA targets in a large ribonucleoprotein complex. OZ2 functions in the splicing of seven group II introns in Arabidopsis mitochondria [104]. Three mitochondrial splicing factors, ABO5 [51], MISF26 [57], and PMH2 [93], share target introns with OZ2, and physical interactions between OZ2 and these three proteins were observed in yeast two-hybrid and bimolecular fluorescence complementation assays [104]. EMP603, a PPR protein, is specifically involved in mitochondrial nad1 intron 2 splicing in maize and interacts with the DEAD-box RNA helicase PMH2-5140, the RAD52-like proteins ODB1-0814 and ODB1-5061, and Zm-mCSF1 in vitro and in vivo [42].
PPR-SMR1 and SPR2 in maize are involved in the splicing of 16 and 15 group II introns in mitochondria, respectively, and both are essential for the splicing of 13 out of 22 group II introns in mitochondria [48,49] (Figure 3). PPR14, Zm-mCSF1, and EMP16 are specifically involved in the splicing of one or several of these 13 introns [39,43,48]. Bimolecular fluorescence complementation and pull-down experiments revealed that PPR-SMR1, PPR14, and Zm-mCSF1 directly interact with one another [43,48] and that SPR2 directly interacts with PPR-SMR1, Zm-mCSF1, PPR14, and EMP16 [49]. Meanwhile, a luciferase complementation imaging assay indicated that the interaction of PPR-SMR1 with EMP16 is mediated by the bridging of SPR2 [49]. Moreover, ZmRH48 in maize was recently reported to be involved in the splicing of three out of the 13 mitochondrial group II introns spliced by both PPR-SMR1 and SPR2 (Figure 3), and direct interactions between ZmRH48 and PPR-SMR1, SPR2, and Zm-mCSF1 were confirmed through pull-down and bimolecular fluorescence complementation experiments [76]. These data imply that the splicing of group II introns in plant mitochondria is also performed by a spliceosomal complex. Splicing factors act as components of a spliceosomal complex in the mitochondria of land plants, in which some splicing factors, such as SPR2/PPR-SMR1 in maize and nMAT2/PMH2 in Arabidopsis, are proposed to serve as the core components of the spliceosomal complex and to exert intron splicing through dynamic interactions with other intron-specific splicing factors in plant mitochondria.
Roles of Protein Splicing Factors in the Splicing of Mitochondrial Group II Introns in Land Plants
In group II introns, the formation of exact and conserved ribozyme-like tertiary structures is required for their removal from pre-RNAs. Compared with bona fide group II introns in bacteria, group II introns in plant organelles have degenerated and lost some essential fragments required for the formation of the active tertiary structures. Factors of the splicing machinery in land plant organelles are thought to aid group II introns in folding into the correct structure suitable for splicing. In chloroplasts, several splicing factors have been shown to promote or maintain a proper intron structure by direct intron binding, such as CRS1, PPR4, and PpPPR_66 [79].
During the splicing process, the exact roles of splicing factors in plant mitochondrial group II intron RNA folding or sequence recognition have not been established. Arabidopsis nMAT1 may function in the release of 5′ exons from its target introns [20]. Two DEAD/DExH-box RNA helicases in Arabidopsis, PMH2 and ABO6, are predicted to function as RNA chaperones required for resolving stable inactive secondary structures within introns and promoting the correct folding of introns in mitochondria [93,94]. ODB1, a RAD52-like protein, is thought to facilitate the splicing of intron 2 in nad1 and intron 1 in nad2 by stabilizing the correctly folded structures of DV and DVI in the two introns, either through directly binding to the introns or by interacting with other components of the spliceosomal complex [105]. In Arabidopsis mitochondria, nMAT2, PMH2, and their intron RNA targets are proposed to be present in a large ribonucleoprotein particle [19], implying that nMAT2 and PMH2 promote the splicing of their target introns by directly binding to intron RNAs or by interacting with other splicing factors in a spliceosomal complex.
Among the known mitochondrial splicing factors, PPR proteins make up the largest group. As members of an RNA-binding protein family, some PPR proteins facilitate RNA editing in plant organelles by specifically binding to their target RNAs through a one repeat-one nucleotide mechanism [106-108]. Additionally, the PPR proteins EMB2654, OTP51, PBF2, PTSF1, PpPPR_66, PPR4, and THA8, which function in group II intron splicing in chloroplasts, have been shown to directly bind to their target introns in vitro [79]. In plant mitochondria, more than thirty PPR proteins have been found to function in mitochondrial group II intron splicing in land plants to date (Table 1), and several of them have been identified to interact with various mitochondrial splicing factors in vivo and in vitro [48,49]. Accordingly, it is hypothesized that PPR proteins function predominantly by recognizing and binding to specific RNA sequences within their target introns, binding with other splicing factors to promote or maintain the folding of the intron RNA into a correct active conformation, and then initiating splicing.
These data suggest that splicing factors from diverse families may function in the splicing of plant mitochondrial group II introns by recognizing target RNA sequences or by interacting with other splicing factors to form or maintain the active spliceosomal complex. However, further biochemical studies are necessary to confirm their exact functions in the splicing of group II introns in plant mitochondria.
Conclusions and Prospects
Due to the loss or degeneration of the sequences encoding IEPs, the splicing of plant mitochondrial group II introns is promoted by protein cofactors. Indeed, some splicing factors in plant mitochondria have been reported (Figures 3 and 4), but only a few of them have been confirmed to function by binding to their target intron RNAs or by interacting with other splicing factors [19,20,42,48,49,76,104,105]; the precise roles of most splicing factors in the splicing of plant mitochondrial group II introns are not yet understood. Group II introns are considered to be the ancestors of spliceosomal introns in nuclei; thus, it is proposed that group II intron splicing in vivo is promoted by a spliceosome. The involvement of multiple protein factors in the splicing of a single group II intron in plant mitochondria hints at the potential involvement of a splicing complex (Figures 3 and 4); however, it remains largely unclear which components are present in such a complex and whether they form a larger complex in the same manner as a nuclear spliceosome. Most known splicing-related proteins are members of the PPR protein family, and more than twenty PPR proteins in maize have been identified to be involved in the splicing of group II introns in mitochondria (Table 1). The direct binding of PPR proteins to their target RNAs has been observed in RNA editing and in the splicing of chloroplast group II introns [79,106-108], while little is known about the interaction between PPR proteins and their target introns in plant mitochondria. In addition, splicing factors have been characterized for most introns in mitochondrial genes, but very few have been reported for several mitochondrial genes, such as cox2, ccmFc, and rps3 in maize mitochondria and rpl2 and ccmFc in Arabidopsis mitochondria (Figures 3 and 4). Thus, more biochemical analysis techniques, such as gel shift and co-immunoprecipitation, are needed to confirm the presence of a spliceosome in plant mitochondria, the binding of splicing factors to specific RNA sequences in their target introns, or the interaction of splicing factors with other components of the spliceosome. More genetic studies are needed to identify additional splicing factors in plant mitochondria, especially those associated with the splicing of the introns of cox2, ccmFc, and rps3 in maize mitochondria and rpl2 and ccmFc in Arabidopsis mitochondria.
Figure 1. Diagrammatic sketch of the predicted secondary structure of nad1 intron 4. Six domains (DI-DVI) of the intron and the open reading frame (ORF) of matR are labeled.
Figure 2. Three splicing pathways of group II introns. The grey boxes represent the exons. The curves represent the introns. The dashed arrows represent the transesterification reactions. The branch-point adenosine in DVI is labeled.
Figure 3. Group II introns and protein factors involved in their splicing in maize mitochondria. The black boxes represent the exons. The closed curves represent the cis-spliced introns. The open curves represent the trans-spliced introns. The different colored ellipses represent different protein splicing factors. The partially overlapped ellipses represent proteins that have been shown to interact with each other.
Figure 4. Group II introns and protein factors involved in their splicing in Arabidopsis mitochondria. The black boxes represent the exons. The closed curves represent the cis-spliced introns. The open curves represent the trans-spliced introns. The different colored ellipses represent different protein splicing factors. The partially overlapped ellipses represent proteins that have been shown to interact with each other.
Table 1. List of mitochondrion-targeted PPR proteins required for the splicing of mitochondrial group II introns in maize and Arabidopsis.
Reliable Portfolio Selection Problem in Fuzzy Environment: An mλ Measure-Based Approach
This paper investigates a fuzzy portfolio selection problem with guaranteed reliability, in which fuzzy variables are used to capture the uncertain returns of different securities. To effectively handle the fuzziness in a mathematical way, a new expected value operator and variance of fuzzy variables are defined based on the mλ measure, which is a linear combination of the possibility measure and necessity measure to balance pessimism and optimism in the decision-making process. To formulate the reliable portfolio selection problem, we particularly adopt the expected total return and the variance of the total return to evaluate the reliability of the investment strategies, producing three risk-guaranteed reliable portfolio selection models. To solve the proposed models, an effective genetic algorithm is designed to generate approximate optimal solutions to the considered problem. Finally, numerical examples are given to show the performance of the proposed models and algorithm.
Introduction
The portfolio selection problem is a well-known problem in the field of economics, which aims to allocate the capital to a pre-given set of securities and meanwhile obtain the maximum return. Theoretically, this problem can be characterized through a standard linear programming model, in which the decision variable corresponds to the investment ratio of the involved capital to each considered security, and the total return is typically a linear form with respect to the decision variables. If all of the parameters in the portfolio selection problem are pre-specified, the corresponding model can be easily solved by the simplex method or some existing classical algorithms.
In real-world applications, there exist two types of uncertainties in the decision-making process. One is randomness; the other is fuzziness. In general, if enough sample data are available, we can use statistical methods to estimate the probability distribution of the involved uncertain parameters, and probability theory can be used as an effective tool to deal with them. On the other hand, when there are not enough sample data or even no sample data, a common method is to treat the uncertain parameters as fuzzy variables by using professional judgments or expert experiences. With these concerns, two classes of methods have been adopted in the literature to investigate the portfolio selection problem, i.e., random optimization and fuzzy optimization, in order to maximize the total return and decrease the risks in the uncertain environment. In the following discussion, we aim to review the existing works in the literature along these two lines.
For the earlier works, Markowitz [1,2] first proposed the mean-variance models in stochastic environments, in which the variance is usually used to quantify the existing risks in the uncertain return. In detail, to measure and control the risks of the investment, a threshold is firstly pre-given for the portfolio, and an investment strategy is called a feasible plan if the variance of its random return is not over this threshold. Afterwards, for the stochastic portfolio selection problem, a variety of existing research works focused on improving or extending this type of model to more complex decision environments. For instance, Shen [3] investigated a mean-variance-based portfolio selection problem in a complete market with unbounded random coefficients, which was solved by the stochastic linear-quadratic control theory and the Lagrangian method. Lv et al. [4] explored a continuous-time mean-variance portfolio selection problem with random market parameters and a random time horizon in an incomplete market, which was formulated as a linear constrained stochastic linear quadratic optimal control problem. He and Qu [5] considered a multi-period portfolio selection problem with market random uncertainty of asset prices. They formulated the problem as a two-stage stochastic mixed-integer program with recourse and designed a simplification and hybrid solution method to solve the problem of interest. Najafi and Mushakhiann [6] considered three indexes to model the portfolio selection problem, including the expected value, semivariance and conditional value-at-risk. A hybrid algorithm combining a genetic algorithm and a particle swarm optimization algorithm was designed to solve the proposed model. Low et al. [7] estimated the expected returns by sampling from a multivariate probability model that explicitly incorporated distributional asymmetries to enhance the performance of mean-variance portfolio selection. Shi et al. [8] proposed three multi-period behavioral portfolio selection models under cumulative prospect theory. Shen et al. [9] discussed a mean-variance portfolio selection problem under a constant elasticity of variance model on the basis of a backward stochastic Riccati equation. Kim et al. [10] resolved the high cardinality of mean-variance portfolios by applying the semi-definite relaxation method to a cardinality-constrained optimal tangent portfolio selection model. Zhang and Chen [11] investigated a mean-variance portfolio selection problem with regime switching under the constraint that short-selling is prohibited. Chiu and Wong [12] further enriched the literature of mean-variance portfolio selection by considering correlation risk among risky asset returns. Fulga [13] proposed a quantile-based risk measure, which is defined using the modified loss distribution according to the decision maker's risk and loss aversion. For other research, interested readers can refer to Alexander et al. [14], Villena and Reus [15], Maillet et al. [16], Huang [17], etc.
In the condition of incomplete information, fuzzy set theory can be used as an efficient tool to deal with this situation. Along this line, Zhang and Zhang [18] investigated a multi-period fuzzy portfolio selection problem to maximize the terminal wealth imposed by risk control, in which the returns of assets are characterized by possibilistic mean values, and a possibilistic absolute deviation is defined as the risk control of the portfolio. Li and Xu [19] studied the multi-objective portfolio selection model with fuzzy random returns for investors through three criteria, i.e., return, risk and liquidity, and a compromise approach-based genetic algorithm was designed to solve the proposed model. Gupta et al. [20] proposed a multi-objective credibilistic model with fuzzy chance constraints for the portfolio selection problem, which was solved by a fuzzy simulation-based genetic algorithm. Mehlawat [21] dealt with fuzzy multi-objective multi-period portfolio selection problems. A fuzzy credibilistic programming approach with multi-choice goal programming was proposed to obtain investment strategies. Huang and Di [22] discussed a new uncertain portfolio selection model in which background risk was considered, and the returns of the securities and the background assets were given by experts' evaluations instead of historical data. Huang and Zhao [23] investigated a mean-chance model for portfolio selection based on an uncertain measure and developed an effective genetic algorithm to solve the proposed nonlinear programming problem. Bermudez [24] extended genetic algorithms from their traditional domain of optimization to a fuzzy ranking strategy for selecting efficient portfolios of restricted cardinality. Rebiasz [25] presented a new method for the selection of efficient portfolios, where the parameters in the calculation of effectiveness were expressed by interactive fuzzy numbers and probability distributions. For other research about the fuzzy optimization technique on this topic, we can refer to Zhang et al. [26], Liu et al. [27], Bhattacharyya et al. [28], Saborido [29], etc.
As can be seen in the literature, in the case of incomplete historical data, a variety of fuzzy approaches have been proposed to deal with portfolio selection under uncertain environments. The evaluation indexes are usually associated with the possibility measure, credibility measure, expected values, variance, semivariance, etc. In this paper, we aim to propose new definitions for the expected value operator and variance to characterize the features of fuzzy variables, which are defined on the basis of the mλ measure proposed by Yang and Iwamura [30]. Actually, the mλ measure is a linear combination of the possibility measure and necessity measure, which provides an effective method to make a trade-off between optimistic and pessimistic decisions. Based on the newly-proposed expected value operator and variance, we handle the portfolio selection problem with different indexes from the literature. To the best of our knowledge, no related research can be found that investigates the portfolio selection problem in the fuzzy environment with the mλ measure, which motivates us to study this problem to obtain suitable strategies with fuzzy parameters.
The rest of this research is organized as follows. Section 2 gives a detailed description of the considered problem and formulates the reliable mathematical model with fuzzy parameters. In Section 3, some equivalent models are proposed based on rigorous mathematical analysis. In Section 4, an effective genetic algorithm is proposed to search for the approximate optimal solution of the proposed model. Finally, some numerical experiments are implemented to show the application and performance of the proposed methods.
Problem Statement and Mathematical Models
The portfolio selection problem deals with how to allocate the pre-given capital to a set of securities so as to maximize the return. When all of the parameters are deterministic variables in this process, this classical problem can be essentially formulated as a linear programming model with the selection ratio constraints. In order to characterize this problem, we first introduce some notations in the formulation process.
S: total number of involved securities; s: index of involved securities, s ∈ {1, 2, . . ., S}; r_s: return of the s-th security; x_s: the investment ratio for security s, which is a decision variable. If all of the parameters in this process are constant, we can formulate this problem as the following linear programming model: max ∑_{s=1}^{S} r_s x_s, subject to ∑_{s=1}^{S} x_s = 1 and x_s ≥ 0, s = 1, 2, . . ., S.
In this model, the objective function is to maximize the total return among different feasible investment strategies. The constraint ensures that the sum of the investment proportions should be unity, and each proportion is larger than or equal to zero. Obviously, if all of the parameters are pre-given constants, it is typically a linear programming model and can be solved by the simplex method. Note that the portfolio selection plan is usually made before the return can be fulfilled. Thus, the actual return of each security is practically uncertain. In the following discussion, we particularly treat the return of each security as a fuzzy variable to suitably describe the practical uncertainty. Theoretically, if we consider the uncertainty in the portfolio selection problem, this problem should be formulated as a robust or reliable optimization programming with fuzzy parameters. For the completeness of this study, we shall introduce the basic knowledge of fuzzy set theory in the following discussion.
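As a concrete illustration, this deterministic model can be solved directly with an off-the-shelf LP routine. The sketch below uses scipy.optimize.linprog; the return vector is hypothetical illustration data, not taken from this paper.

```python
# Deterministic portfolio LP: maximize sum_s r_s * x_s subject to
# sum_s x_s = 1 and x_s >= 0 (a minimal sketch with made-up returns).
import numpy as np
from scipy.optimize import linprog

r = np.array([0.08, 0.12, 0.05, 0.10])      # hypothetical returns r_s
res = linprog(c=-r,                          # linprog minimizes, so negate r
              A_eq=np.ones((1, r.size)),     # sum_s x_s = 1
              b_eq=[1.0],
              bounds=[(0.0, 1.0)] * r.size,  # x_s >= 0
              method="highs")
print(res.x, -res.fun)                       # optimal ratios and total return
```

As expected for a linear objective, the optimum concentrates all capital on the highest-return security, which is precisely why the risk-aware reformulations below are needed once the returns become uncertain.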
mλ Measure and Expected Value Operator
Fuzzy set theory was first proposed by Zadeh in 1965 and further developed by many researchers, such as Nahmias [31], Liu and Liu [32], etc. In this theory, the possibility measure and necessity measure are two effective tools to characterize the chance of a fuzzy event. For instance, let ξ be a fuzzy variable with membership function μ_ξ(x) and B a subset of real numbers. Then, a fuzzy event can be expressed as {ξ ∈ B}. Its possibility and necessity, respectively, can be calculated as follows: Pos{ξ ∈ B} = sup_{x ∈ B} μ_ξ(x) and Nec{ξ ∈ B} = 1 − sup_{x ∉ B} μ_ξ(x). In general, we have the following relationship between these two measures, i.e., Pos{·} ≥ Nec{·} for any fuzzy event {·}. Additionally, even if the possibility of a fuzzy event achieves one, it cannot necessarily guarantee the occurrence of this event; on the other hand, if the necessity of a fuzzy event is zero, it is still possible that this event occurs. In this sense, in the process of optimizing the chance of fuzzy events, the possibility measure is more suitable for optimistic decision makers, while the necessity measure is more favored by pessimistic decision makers. In order to make a trade-off between optimism and pessimism in the decision-making process, Yang and Iwamura [30] proposed a linear combination of these two fuzzy measures by introducing a weighting parameter λ, called the mλ measure, which has been successfully applied to a variety of fields in handling the fuzziness of the decision-making process, for instance carbon capture, utilization and storage (Dai et al. [33]), water quality management (Li et al. [34]), etc. In detail, the mλ measure is defined as mλ{A} = λPos{A} + (1 − λ)Nec{A} for any fuzzy event A. Theoretically, if parameter λ is close to one, this measure is more suitable for risk-loving decision makers; on the contrary, if λ is close to zero, it is more suitable for risk-averse decision makers. In particular, if λ is set as 0.5, this measure degenerates to the credibility measure proposed by Liu and Liu [32]. For this measure, Yang and Iwamura [30] proved that for any fuzzy event A, we have mλ{A} + m_{1−λ}{A^c} = 1, which implies that mλ and m_{1−λ} are two dual measures in scaling the chance of fuzzy events. With this property, we hereinafter define the expected value of a fuzzy variable, which has the form of a scalar integral.
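The following numerical sketch (our illustration, with a simple grid evaluation that is not taken from the paper) computes the possibility, necessity, and mλ measures of the event {ξ ≥ r} for a triangular fuzzy variable, showing how λ interpolates between the two extreme attitudes.

```python
# Pos, Nec, and m_lambda of {xi >= r} for a triangular fuzzy variable,
# evaluated on a dense grid over the support (illustrative sketch).
import numpy as np

def tri_mu(x, a, b, c):
    """Membership function of the triangular fuzzy variable (a, b, c)."""
    return np.where(x < b, np.clip((x - a) / (b - a), 0, 1),
                    np.clip((c - x) / (c - b), 0, 1))

def m_lambda_geq(r, lam, a, b, c, n=100001):
    x = np.linspace(a - 1, c + 1, n)
    mu = tri_mu(x, a, b, c)
    pos = mu[x >= r].max(initial=0.0)       # Pos = sup of mu over the event
    nec = 1.0 - mu[x < r].max(initial=0.0)  # Nec = 1 - sup over the complement
    return lam * pos + (1 - lam) * nec

# lam = 1 recovers the possibility, lam = 0 the necessity, and lam = 0.5
# the credibility measure of Liu and Liu [32].
for lam in (0.0, 0.5, 1.0):
    print(lam, m_lambda_geq(6.0, lam, a=5, b=7, c=10))
```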
Definition 1.
Let ξ be a fuzzy variable with the membership function μ_ξ(x). Then, the λ-expected value of this fuzzy variable is defined by E[ξ; λ] = ∫_0^{+∞} mλ{ξ ≥ r} dr − ∫_{−∞}^{0} mλ{ξ ≤ r} dr, provided that at least one of the two integrals is finite.
Typically, this definition takes the same form as the expected value operator of random variables, for which the probability measure is self-dual. To illustrate the calculation of the expected value operator, we here give several examples for clarity.
Example 1. Let ξ = (a, b, c) be a triangular fuzzy variable with the following membership function: μ_ξ(x) = (x − a)/(b − a) for a ≤ x ≤ b, μ_ξ(x) = (c − x)/(c − b) for b < x ≤ c, and μ_ξ(x) = 0 otherwise. Then, we have E[ξ; λ] = [(1 − λ)a + b + λc]/2. Proof. In the following, we only consider the case of 0 < a < b < c, and the other situations can be proven similarly. In this case, it is sufficient to calculate the first integral in the definition, which yields E[ξ; λ] = a + (1 + λ)(b − a)/2 + λ(c − b)/2 = [(1 − λ)a + b + λc]/2. Remark 1. If we take λ = 0.5, the expected value of the fuzzy variable can be simplified as E[ξ; 0.5] = (a + 2b + c)/4, which just coincides with the expected value proposed by Liu and Liu [32].
Example 2. Let ξ = (a, b, c, d) be a trapezoidal fuzzy variable with the following membership function: μ_ξ(x) = (x − a)/(b − a) for a ≤ x ≤ b, μ_ξ(x) = 1 for b < x ≤ c, μ_ξ(x) = (d − x)/(d − c) for c < x ≤ d, and μ_ξ(x) = 0 otherwise. Then, we have E[ξ; λ] = [(1 − λ)(a + b) + λ(c + d)]/2. Example 3. Let ξ = [a, b] be an interval fuzzy variable with the following membership function: μ_ξ(x) = 1 for a ≤ x ≤ b and μ_ξ(x) = 0 otherwise. Then, we have E[ξ; λ] = (1 − λ)a + λb. In this definition, we have the following relationship with respect to the different parameter λ.
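The closed forms in Examples 1-3 can be cross-checked against Definition 1 numerically. The sketch below does so for a triangular fuzzy variable with positive support (so the second integral vanishes); the grid integration scheme is our own choice, not the paper's code.

```python
# Numerical check of E[xi; lam] against the closed form of Example 1
# for xi = (5, 7, 10); illustrative sketch.
import numpy as np

a, b, c = 5.0, 7.0, 10.0
grid = np.linspace(a - 1, c + 1, 20001)
mu = np.where(grid < b, np.clip((grid - a) / (b - a), 0, 1),
              np.clip((c - grid) / (c - b), 0, 1))

def m_lam_geq(r, lam):
    pos = mu[grid >= r].max(initial=0.0)
    nec = 1 - mu[grid < r].max(initial=0.0)
    return lam * pos + (1 - lam) * nec

def expected_value(lam, n=4000):
    rs = np.linspace(0.0, c + 1, n)          # the support is positive here
    return np.trapz([m_lam_geq(r, lam) for r in rs], rs)

for lam in (0.0, 0.5, 0.8, 1.0):
    closed = ((1 - lam) * a + b + lam * c) / 2
    print(f"lam={lam}: numeric={expected_value(lam):.4f}, closed={closed:.4f}")
```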
Theorem 1. Let ξ be a fuzzy variable and λ_1, λ_2 two parameters with 0 ≤ λ_1 ≤ λ_2 ≤ 1 (see Yang and Iwamura [30]). Then, we have E[ξ; λ_1] ≤ E[ξ; λ_2], since mλ{A} = Nec{A} + λ(Pos{A} − Nec{A}) is nondecreasing in λ. The proof is thus completed.
To capture the variation of the expected value with respect to parameter λ, we show the curve of the λ-expected value of a triangular fuzzy variable in Figure 1. Clearly, the expected value has a linearly increasing relationship with respect to λ. If λ = 1, the expected value turns out to be (b + c)/2. Next, we aim to investigate the linearity of the proposed λ-expected value operator. To this end, we firstly introduce some closely related concepts and theorems in the following discussion. Definition 2. (Liu [35]) Suppose that ξ is a fuzzy variable and α ∈ (0, 1]. Then, ξ_sup(α) = sup{r | Pos{ξ ≥ r} ≥ α} is called the α-optimistic value to ξ; and ξ_inf(α) = inf{r | Pos{ξ ≤ r} ≥ α} is called the α-pessimistic value to ξ. Theorem 2. (Liu [35]) Suppose that ξ and η are two fuzzy variables. Then, for any α ∈ (0, 1], we have (ξ + η)_sup(α) = ξ_sup(α) + η_sup(α) and (ξ + η)_inf(α) = ξ_inf(α) + η_inf(α). A fuzzy variable ξ with membership function μ_ξ(x) is called a normalized fuzzy variable if it has the following characteristics: (1) there exists a real number x* with μ_ξ(x*) = 1; (2) μ_ξ(x) is nondecreasing for x ≤ x* and nonincreasing for x ≥ x*. Typically, the interval fuzzy variables, triangular fuzzy variables, trapezoidal fuzzy variables and fuzzy variables with unimodal membership functions are all normalized fuzzy variables. Theorem 3. Let ξ be a normalized fuzzy variable with finite expected value. Then, we have E[ξ; λ] = ∫_0^1 [λ ξ_sup(α) + (1 − λ) ξ_inf(α)] dα, in which ξ_sup(α) and ξ_inf(α), respectively, are the α-optimistic and α-pessimistic values of the fuzzy variable ξ.
Proof of Theorem 3. Since ξ is a normalized fuzzy variable, there exists a real number x* with μ_ξ(x*) = 1, and μ_ξ(x) is nondecreasing for all x < x* and nonincreasing for all x ≥ x*. In the following, we only consider the case of x* > 0, and the other situation can be proven similarly. In this case, the two integrals in Definition 1 can be rewritten in terms of ξ_sup(α) and ξ_inf(α) by a change of variable, which gives the stated identity. The proof is completed.
The linearity of the expected value operator of random variables is an important property in real-world applications. Likewise, for the λ-expected value operator proposed in this study, we also have a similar linearity for fuzzy variables. Theorem 4. Let ξ and η be normalized fuzzy variables with finite expected values. If a and b are two real numbers, we then have E[aξ + bη; λ] = aE[ξ; λ] + bE[η; λ]. This theorem can be easily proven by using Theorems 2 and 3.
In addition, based on the expected value operator, we can define the concept of variance to measure the diversity degree of the uncertain information, given below. Definition 3. Let ξ be a fuzzy variable with λ-expected value e. Then, the λ-variance of ξ is defined as V[ξ; λ] = E[(ξ − e)²; λ]. Example 4. Let ξ = [a, b] be an interval fuzzy variable. Then, the λ-variance of ξ is calculated as V[ξ; λ] = λ max{λ, 1 − λ}²(b − a)². Remark 2. Typically, if parameter λ approaches one, the λ-variance of ξ is close to (b − a)². On the other hand, if parameter λ is close to zero, the λ-variance is also close to zero. If ξ represents the fuzzy return, this shows that optimistic decision makers face a large risk when they set a large λ, and pessimistic decision makers have a small risk when they take a small parameter λ. Actually, this situation just coincides with the real conditions. In detail, if λ is equal to zero, the expected return should be a, which is the least realization in the fuzzy return and typically can be realized. Then, the return risk will be reduced to zero. However, if λ is taken as one, the expected return should be b, which is the upper bound of realizations in the fuzzy return, and can be realized with the largest risk.
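A quick numerical cross-check of the interval-variance expression in Example 4, for a hypothetical interval [2, 6] with λ = 0.8 (the discretized evaluation below is a sketch, not the paper's code):

```python
# lambda-variance of the interval fuzzy variable [a, b] via Definition 3,
# compared with lam * max(lam, 1-lam)^2 * (b-a)^2.
import numpy as np

a, b, lam = 2.0, 6.0, 0.8
e = (1 - lam) * a + lam * b                 # lambda-expected value (Example 3)
eta = (np.linspace(a, b, 200001) - e) ** 2  # realizations of (xi - e)^2

def m_lam_geq(r):
    pos = 1.0 if (eta >= r).any() else 0.0  # membership is 1 on [a, b]
    nec = 0.0 if (eta < r).any() else 1.0
    return lam * pos + (1 - lam) * nec

rs = np.linspace(0.0, (b - a) ** 2, 4001)
print(np.trapz([m_lam_geq(r) for r in rs], rs))      # ~8.192
print(lam * max(lam, 1 - lam) ** 2 * (b - a) ** 2)   # 8.192
```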
Theorem 5. Let ξ be a fuzzy variable with λ-expected value e and a, b two real numbers. We then have V[aξ + b; λ] = a²V[ξ; λ]. Proof of Theorem 5. It follows from Definition 3 that V[aξ + b; λ] = E[(aξ + b − (ae + b))²; λ] = E[a²(ξ − e)²; λ] = a²E[(ξ − e)²; λ] = a²V[ξ; λ]. The proof is thus finished.
New Reliable Models
In this problem, if we use the expected return to evaluate the quality of the portfolio selection strategy, we can formulate the following expected return model for the fuzzy portfolio selection problem.
In this model, the objective function optimizes the expected return in the decision-making process. However, we note that the optimal strategy for this model is not always favored in all situations, because a strategy evaluated by expected return alone, occasionally creating large, systemic and poorly understood risks, might be subject to high risks under some extreme scenarios and would be specifically undesirable to risk-averse investors. By recognizing this critical requirement, we here particularly propose three models to reduce the diversity of the fuzzy total return. In addition, to avoid the excessive decentralization of the final investment, a threshold for each security needs to be given. That is, let h_s be the threshold of the investment ratio x_s; then, we should have x_s = 0 or x_s ≥ h_s, s = 1, 2, . . ., S in the process of the formulation.
Firstly, we add the variance of the total return into the objective function, and then an expectation-variance reliable model is formulated, in which the objective maximizes the expected total return penalized by the variance of the total return, weighted by the coefficient β, subject to the same investment ratio constraints as above. In the literature, this type of index has been successfully applied to the transportation field (e.g., Xing and Zhou [36], Sen et al. [37]) as a guidance for finding a reliable path. In this model, the parameter β is a reliability coefficient to reflect the significance of total return variability. This reliability coefficient can also vary for different decision makers. In general, if the decision maker is risk-averse, he/she can set a relatively large reliability coefficient; otherwise, a smaller coefficient can be taken for optimistic decision makers. Typically, in the process of maximizing the objective, a small variance of the fuzzy return is desirable in evaluating the investment strategies.
Secondly, to reduce the risk, the variance of the total return is assumed to be less than a threshold V, which is regarded as a constraint in the formulation. This constraint can be referred to as a side constraint, which has been widely used to handle routing optimization problems effectively (e.g., Wang et al. [38], Wang et al. [39]). By maximizing the expected return subject to this variance bound and the investment ratio constraints, we hereinafter reformulate the risk-guaranteed reliable model as a variance-constrained reliable model. In Model (9), the parameter V is an upper limit of the variance, which is determined by decision makers. Obviously, a risk-seeker often selects a relatively large threshold V, while a smaller V will lead to a risk-averse strategy.
Thirdly, in the condition of minimizing the variance of the total return, we regard the expected return as a resource constraint, and thus the expectation-constrained reliable model is formulated. Model (10) aims to minimize the risk under the condition that the expected return is not less than a given threshold Ē. In other words, the parameter Ē is regarded as the lower bound of the expected return. In this model, a risk-seeker prefers a larger Ē, while a risk-averse decision maker often selects a smaller one.
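To make the three criteria concrete, the sketch below scores one candidate strategy under all of them for interval fuzzy returns, using the closed forms of Examples 3 and 4; the returns, thresholds, and the reading of the expectation-variance objective as E − βV are illustrative assumptions.

```python
# Evaluating a strategy x under the three reliable criteria for
# hypothetical interval fuzzy returns r_s = [a_s, b_s].
import numpy as np

lam, beta, V_bar, E_bar = 0.8, 1.0, 0.004, 0.08
a = np.array([0.04, 0.07, 0.02])    # lower return of each security
b = np.array([0.09, 0.15, 0.06])    # upper return of each security

def evaluate(x):
    lo, hi = float(a @ x), float(b @ x)                # total return interval
    E = (1 - lam) * lo + lam * hi                      # Example 3
    V = lam * max(lam, 1 - lam) ** 2 * (hi - lo) ** 2  # Example 4
    return E, V

E, V = evaluate(np.array([0.2, 0.5, 0.3]))
print("expectation-variance objective :", E - beta * V)
print("Model (9) feasibility, V bound :", V <= V_bar)
print("Model (10) feasibility, E bound:", E >= E_bar)
```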
Property Analysis
Next, we shall analyze the computational properties of some special cases. If all of the returns are interval fuzzy variables, we have the following results. Theorem 6. Let r_s = [a_s, b_s] be interval fuzzy variables, s = 1, 2, . . ., S. Then, the variance can be calculated according to the following equation: V[∑_{s=1}^{S} r_s x_s; λ] = λ max{λ, 1 − λ}² (∑_{s=1}^{S} (b_s − a_s) x_s)².
Proof of Theorem 6. Since the returns r_s, s = 1, 2, . . ., S are interval fuzzy variables, the total fuzzy return is also an interval fuzzy variable for each feasible solution, given below: ∑_{s=1}^{S} r_s x_s = [∑_{s=1}^{S} a_s x_s, ∑_{s=1}^{S} b_s x_s]. Typically, if all of the parameters r_s, s = 1, 2, . . ., S are interval fuzzy variables, we can use analytical methods to calculate the objective function for each feasible solution X. Theorem 7. (i) Let r_s = (a_s, b_s, c_s) be a triangular fuzzy variable, s = 1, 2, . . ., S. Then, the expected total return can be calculated according to the following equation: E[∑_{s=1}^{S} r_s x_s; λ] = (1/2) ∑_{s=1}^{S} [(1 − λ)a_s + b_s + λc_s] x_s.
(ii) Let r_s = (a_s, b_s, c_s, d_s) be a trapezoidal fuzzy variable, s = 1, 2, . . ., S. Then, the expected total return can be calculated according to the following equation: E[∑_{s=1}^{S} r_s x_s; λ] = (1/2) ∑_{s=1}^{S} [(1 − λ)(a_s + b_s) + λ(c_s + d_s)] x_s. By using Examples 1 and 2, the proof of this theorem is obvious.
In general, for a common fuzzy variable with a complex membership function, it is difficult to calculate its expected value and variance by analytic methods. In this situation, we have to use a simulation algorithm to obtain approximate values of these two indexes. In the following, we shall design the detailed simulation algorithm to simulate the expected value and variance of a fuzzy variable.
Consider a function f(x, ξ). We next design the simulation procedure of the expected value E[f(x, ξ); λ]. Typically, if we set f(x, ξ) = ξ, this procedure corresponds to the simulation process of the expected value of ξ; moreover, if we set f(x, ξ) = (ξ − E[ξ; λ])², this procedure corresponds to the simulation process of the variance of ξ. For this purpose, we firstly need to compute the chance measure of the form mλ{f(x, ξ) ≤ r}, and the chance measure mλ{f(x, ξ) ≥ r} can be simulated similarly. Yang and Iwamura [30] give the simulation procedure as follows.
Next, we aim to give a numerical example to show the accuracy of the proposed simulation algorithm. Specifically, let us consider the sum of ten triangular fuzzy variables ξ_i = (5 + i, 7 + i, 10 + i), i = 1, 2, . . ., 10. If we set λ = 0.8, by using the linear property of the expected value operator, we typically have E[∑_{i=1}^{10} ξ_i; 0.8] = ∑_{i=1}^{10} (8 + i) = 135. On the other hand, when we use the simulation algorithm, we then have E = 134.90 after implementing a total of 1000 cycles, which has a relative error of 0.07%. More specifically, we simulate a total of eleven cases for parameter λ (i.e., λ = 0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0); the comparison between the exact values and simulated values is illustrated in Table 1. The results demonstrate the effectiveness of the proposed simulation algorithm.
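The check above can be reproduced with a few lines; the sketch below follows our reading of the procedure, taking the membership of a sampled vector as the minimum of its componentwise memberships.

```python
# Fuzzy simulation of E[f(xi); lam] for f = xi_1 + ... + xi_10 with
# xi_i = (5+i, 7+i, 10+i) and lam = 0.8; the exact value is 135.
import numpy as np

rng = np.random.default_rng(0)
lam, N = 0.8, 5000
i = np.arange(1, 11)
a_i, b_i, c_i = 5.0 + i, 7.0 + i, 10.0 + i

U = rng.uniform(a_i, c_i, size=(N, 10))      # sample points u_k
mu_comp = np.where(U < b_i, (U - a_i) / (b_i - a_i), (c_i - U) / (c_i - b_i))
mu = mu_comp.min(axis=1)                     # membership of each vector
f = U.sum(axis=1)                            # f(u_k)

def m_lam(mask):
    pos = mu[mask].max(initial=0.0)
    nec = 1.0 - mu[~mask].max(initial=0.0)
    return lam * pos + (1 - lam) * nec

lo, hi = f.min(), f.max()
e = sum(m_lam(f >= r) if r >= 0 else -m_lam(f <= r)
        for r in rng.uniform(lo, hi, size=N))
print(max(lo, 0) + min(hi, 0) + e * (hi - lo) / N)   # close to 135
```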
Solution Method
As discussed above, the proposed models are typically non-linear programming models due to the complexity of the objective function or constraints. In this case, it is difficult to adopt off-the-shelf optimization solvers to find a near-optimal solution to the considered problem. In the following, we shall design a genetic algorithm-based approach to solve the proposed model. The genetic algorithm is an effective algorithm to solve optimization problems. Up to now, this type of algorithm has been successfully applied to a variety of real-world fields, such as transportation, economy, finance, etc. Next, we aim to design the technical details of the genetic algorithm for the problem of interest in this paper.
Solution Representation
In this paper, the decision variables are the investment ratios associated with different securities. In total, there are S securities that can be invested. Then, we can use x_s to denote the decision variable, and in the genetic algorithm, we can use an S-dimensional array to represent the decision variables, listed below: X = (x_1, x_2, . . ., x_S), where we need to guarantee x_s ∈ [0, 1], s = 1, 2, . . ., S. Take Model (9) for example: three types of constraints should be satisfied by the decision variables. Then, in the process of initializing the population, we need to generate a total of pop_size feasible solutions in the initial population. This process can be fulfilled through the following operation. Firstly, we need to generate a solution that satisfies the first constraint. In general, we can finish this part according to the following procedure.
Step 1. Randomly generate a sequence of nonnegative real numbers y_1, y_2, . . ., y_S; Step 2. Let x_s = y_s / ∑_{s=1}^{S} y_s, s = 1, 2, . . ., S. Based on the above procedure, we can obtain a solution satisfying the first constraint (i.e., the sum of the investment ratios is a unit). With this form, we need to further correct this solution by the following procedure.
Step 1. Let s = 1; Step 2. If x_s = 0 or x_s ≥ h_s, go to Step 4; otherwise, go to Step 3; Step 3. Randomly find an index s′ ≠ s with x_{s′} > 0, and let x_{s′} ← x_{s′} + x_s, x_s ← 0; Step 4. If s < S, let s ← s + 1 and go to Step 2; otherwise, stop.
After the above operations, we finally produce a solution that satisfies the first and third constraints. If it also satisfies the variance constraint, it is necessarily a feasible solution for the proposed model. When a total of pop_size chromosomes are generated, the initial population is produced.
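A sketch of the two procedures combined into one helper (sizes and thresholds are illustrative):

```python
# Generate a random unit-sum solution, then repair it so each ratio is
# either 0 or at least its threshold h_s (sketch of the steps above).
import numpy as np

rng = np.random.default_rng(1)

def random_solution(S, h):
    y = rng.random(S)                 # Step 1: nonnegative y_1..y_S
    x = y / y.sum()                   # Step 2: x_s = y_s / sum(y)
    for s in range(S):                # correction procedure
        if 0 < x[s] < h[s]:
            t = rng.choice([j for j in range(S) if j != s and x[j] > 0])
            x[t] += x[s]
            x[s] = 0.0
    return x

x = random_solution(20, np.full(20, 0.1))
print(x.sum(), x)                     # sums to 1; entries are 0 or >= 0.1
```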
Selection Operation
The selection operation is used to select the chromosomes for the crossover and mutation operations, which is the basis of the genetic algorithm. In this paper, we shall adopt a common selection operation frequently used in the literature (e.g., Yang and Iwamura [30]) to perform this operation. However, for the completeness of this paper, we still give a detailed description in this part.
To implement the selection operation, we first arrange all of the chromosomes in the population from good to bad according to their objective functions, denoted by X_1, X_2, . . ., X_{pop_size}. This operation is performed based on the fitness of each chromosome. To show the superiority of each chromosome, we can define different evaluation functions for each chromosome. In the following, we introduce two commonly-used methods.
Objective function-based evaluation: In this method, the objective function value of each chromosome is directly used as its evaluation.
Rank-based evaluation: In this method, the fitness function can be defined according to the ranked order from good to bad, i.e., Eval(X_i) = α(1 − α)^{i−1}, i = 1, 2, . . ., pop_size, where α ∈ (0, 1) is a pre-given parameter. In the following, the roulette wheel will be used to select chromosomes for the crossover operation. In detail, we firstly make an increasing sequence {q_i}_{i=0}^{pop_size} according to the following formula: q_0 = 0 and q_i = ∑_{j=1}^{i} Eval(X_j), i = 1, 2, . . ., pop_size.
We repeat the following procedures for pop_size times to select the new population for the crossover operation: randomly generate a number l in the interval (0, q_{pop_size}]; if there exists a k such that q_{k−1} < l ≤ q_k, then X_k will be selected for the new population. After a total of pop_size cycles, a new population, which may have overlapped chromosomes, finally comes into being.
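A sketch of the rank-based roulette wheel (the fitness Eval(X_i) = α(1 − α)^{i−1} is the common convention assumed above):

```python
# Roulette-wheel selection over a ranked population (illustrative sketch).
import numpy as np

rng = np.random.default_rng(2)

def roulette_select(pop, objectives, alpha=0.05):
    order = np.argsort(objectives)[::-1]                # best to worst
    ranked = [pop[i] for i in order]
    evals = alpha * (1 - alpha) ** np.arange(len(pop))  # rank-based fitness
    q = np.concatenate([[0.0], np.cumsum(evals)])       # increasing q_i
    new_pop = []
    for _ in range(len(pop)):
        l = rng.uniform(0, q[-1])                       # spin the wheel
        k = int(np.searchsorted(q, l))                  # q_{k-1} < l <= q_k
        new_pop.append(ranked[max(k, 1) - 1])
    return new_pop                                      # may contain duplicates
```

A usage example: `roulette_select(pop, [objective(x) for x in pop])` yields the mating pool for the crossover operation below.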
Crossover Operation
The crossover operation is a key operation in the procedure of the genetic algorithm, which aims to produce new chromosomes in the population. Through this process, we can expectedly find better solutions as soon as possible in the subsequent solution process. The crossover operation is carried out on the basis of the selected chromosomes. In the process of biological evolution, not all of the chromosomes can finally produce offspring, due to the crossover probability. Thus, following this rule, we finally select an expected total of pop_size · P_c chromosomes to implement the crossover operations, in which P_c is the pre-given crossover probability. In this operation, we select chromosomes according to the following procedure: for each individual in the population, we randomly generate a number w in [0, 1]; if w ≤ P_c, this individual will be selected to take part in the crossover operation.
Next, we denote the selected chromosomes by X_1, X_2, . . ., X_H ("H" represents the number of selected chromosomes). In the following operation, any two individuals can be grouped as a pair of parents for the crossover operation. Without loss of generality, assume that X_1 and X_2 are grouped as a pair of parents. We carry out the crossover operation according to the following formula:
X_1′ = λ·X_1 + (1 − λ)·X_2 and X_2′ = (1 − λ)·X_1 + λ·X_2, where λ is a pre-specified parameter in the crossover process. Obviously, the offspring X_1′ and X_2′ satisfy the first constraint in our Model (9), but they do not necessarily satisfy the second and third constraints. To satisfy the third constraint, the procedure proposed in initializing the population should be implemented to correct the offspring. Once X_1′ and X_2′ also satisfy the variance constraint, they will be used to replace their parents in the population. At the end of this operation, at most a total of 2⌊H/2⌋ chromosomes in the population can be expectedly updated.
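In code, the crossover is a pair of convex combinations, so the unit-sum constraint is preserved automatically; the weighting parameter (called λ above) is renamed w in this sketch to avoid clashing with the measure parameter.

```python
# Arithmetic crossover of two parents with unit-sum entries (sketch).
import numpy as np

def crossover(x1, x2, w=0.7):
    c1 = w * x1 + (1 - w) * x2     # both children still sum to 1
    c2 = (1 - w) * x1 + w * x2
    return c1, c2                   # then repair thresholds / check variance
```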
Mutation Operation
In the process of biological evolution, mutation always occurs to increase the diversity of individuals. To simulate this process, the genetic algorithm also includes the mutation operation on the population. Theoretically, different approaches can be designed to mutate a chromosome. In this study, we perform this operation as follows.
Since only a part of the individuals is involved in this operation, we first consider a selection probability P_m in the process of mutating chromosomes. Thus, an expected number P_m · pop_size of individuals will be selected. The following is the detailed method. For each chromosome in the population, we generate a random number w in [0, 1]; if w ≤ P_m, this chromosome will be selected to take part in this operation. Let X_i be a selected chromosome. We only change the values of two elements in this array. For instance, assume that X_i has the form X_i = (x_1, . . ., x_{s′}, . . ., x_{s″}, . . ., x_S), where at least one of x_{s′} and x_{s″} has a positive value. We can transfer the value between these two elements, e.g., letting x_{s′} ← x_{s′} + x_{s″} and x_{s″} ← 0 if x_{s″} > 0. Through this operation, the new chromosome typically still satisfies the first constraint. We can also correct this chromosome so as to satisfy the third constraint by the procedure in Section 4.1. If this chromosome also satisfies the second constraint, it can be used to replace the original one in the population. At the termination, a new population with the updated chromosomes is produced.
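A sketch of this mutation (random index choices, as described):

```python
# Move the value of one positive element onto another slot, keeping the
# unit-sum constraint intact (illustrative sketch).
import numpy as np

def mutate(x, rng):
    x = x.copy()
    src = rng.choice(np.flatnonzero(x > 0))          # element to empty
    dst = rng.choice([j for j in range(len(x)) if j != src])
    x[dst] += x[src]
    x[src] = 0.0
    return x                                          # then repair / check
```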
• Procedure of the genetic algorithm: With the technical details designed above, the framework of the genetic algorithm can be summarized as follows.
Step 1. Determine the parameters of the algorithm, including the population size pop_size, the fitness parameter α, the crossover probability P_c, the mutation probability P_m, the number of generations M, etc.; Step 2. Initialize the population, in which a total of pop_size feasible individuals should be produced; Step 3. Implement the selection operation based on the objective functions of the different chromosomes; Step 4. Implement the crossover operation with crossover probability P_c; Step 5. Implement the mutation operation with mutation probability P_m; Step 6. Repeat Step 3 to Step 5 for M times; Step 7. Output the best individual found in this procedure as the near-optimal solution to the proposed model.
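The sketch below assembles the operations into a self-contained miniature of Steps 1-7; for brevity it optimizes only the λ-expected return of hypothetical triangular returns via Theorem 7(i), omitting the variance side constraint of Model (9), and it reports the best individual of the final population rather than tracking the best across generations.

```python
import numpy as np

rng = np.random.default_rng(4)
S, lam, h = 20, 0.8, 0.1
POP, GEN, PC, PM = 40, 500, 0.6, 0.7
a = rng.uniform(1, 5, S)                     # hypothetical triangular returns
b = a + rng.uniform(1, 3, S)
c = b + rng.uniform(1, 4, S)

def expected_return(x):
    # E[sum_s r_s x_s; lam] for triangular returns (Theorem 7(i))
    return 0.5 * np.dot((1 - lam) * a + b + lam * c, x)

def repair(x):
    # normalize, then force every ratio to be 0 or at least h
    x = np.clip(x, 0.0, None)
    x = x / x.sum()
    for s in range(S):
        if 0 < x[s] < h:
            t = rng.choice([j for j in range(S) if j != s and x[j] > 0])
            x[t] += x[s]
            x[s] = 0.0
    return x

pop = [repair(rng.random(S)) for _ in range(POP)]
for _ in range(GEN):
    # rank-based roulette-wheel selection
    ranked = sorted(pop, key=expected_return, reverse=True)
    q = np.concatenate([[0.0], np.cumsum(0.05 * 0.95 ** np.arange(POP))])
    pop = [ranked[max(int(np.searchsorted(q, rng.uniform(0, q[-1]))), 1) - 1]
           for _ in range(POP)]
    for i in range(0, POP - 1, 2):           # arithmetic crossover
        if rng.random() < PC:
            x1, x2 = pop[i], pop[i + 1]
            pop[i] = repair(0.7 * x1 + 0.3 * x2)
            pop[i + 1] = repair(0.3 * x1 + 0.7 * x2)
    for i in range(POP):                     # mutation: move one ratio
        if rng.random() < PM:
            x = pop[i].copy()
            s = rng.choice(np.flatnonzero(x > 0))
            t = rng.choice([j for j in range(S) if j != s])
            x[t] += x[s]
            x[s] = 0.0
            pop[i] = repair(x)

best = max(pop, key=expected_return)
print(round(expected_return(best), 4), np.round(best, 2))
```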
Numerical Examples
In this section, we aim to implement a series of numerical experiments to test the performance of the proposed approaches for the variance-constrained Model (9). All of the experiments are implemented on a personal computer with a 1.60-GHz CPU and 4.00 GB of memory.
Assume that there are 20 securities in our decision-making process. We use triangular fuzzy variables to denote the security returns in the model, given in Table 2. In addition, to generate a favorite investment strategy, we set the threshold for each security as 0.1 in the experiments. Thus, at most ten securities can be finally selected for investment. (1) Steadiness of the proposed algorithm. With the above-mentioned decision data, we implement this experiment by the genetic algorithm in C++, in which the relevant model parameters are set as λ = 0.8 and V = 40. The computational results are listed in Table 3. Specifically, we randomly choose the critical parameters in the algorithm, including the crossover probability, mutation probability and population size, to test the steadiness of the algorithmic implementation. The algorithm terminated after 500 cycles, i.e., M = 500, and a total of ten tests are finally executed. Typically, among all of these experiments, the relative errors are not greater than 4.00%, which implies the steadiness of the proposed algorithm with respect to these critical parameters. For the parameter setting P_c = 0.6, P_m = 0.7 and pop_size = 40, we can find the near-optimal solution with objective value 80.23, and the corresponding optimal solution is x_3 = 0.11, x_6 = 0.13, x_8 = 0.76. (2) Sensitivity w.r.t. parameter λ. In the proposed model, we use the mλ measure to characterize the feature of the involved decision makers. Then, we are particularly interested in investigating the sensitivity of the near-optimal solution with respect to parameter λ. In detail, we discretize this parameter into the following numbers, i.e., λ = 0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, and the critical parameters in the genetic algorithm are taken as follows: P_c = 0.6, P_m = 0.7 and pop_size = 40. We list the computational results in Table 4. In particular, to further give a straightforward overview, we also give Figure 2 to show the variation of the objective function with respect to different parameters λ. Typically, in the experimental results, the returned optimal objective values show an almost increasing tendency when we enhance the parameter λ, except for λ = 0.7, at which an opposite tendency occurs in comparison to the adjacent values. As expected, when we take different parameters λ, the optimal solutions can be different. For instance, when λ = 0.4, the optimal solution turns out to be x_2 = 0.29, x_3 = 0.39, x_6 = 0.13, x_8 = 0.19; if λ = 0.9, the optimal solution is changed to x_2 = 0.36, x_3 = 0.30, x_9 = 0.23, x_16 = 0.11, which is typically different from the solution of λ = 0.4. (3) Sensitivity w.r.t. parameter V. In our proposed variance-constrained Model (9), the parameter V is regarded as the upper bound of the variance to denote the risk-averse degree in the decision-making process. Practically, since the variance can be used to denote the risk of an investment strategy, a small parameter corresponds to a risk-averse decision. In order to show the influence of this parameter on the optimal solution, we here especially implement a set of experiments to test this performance. Specifically, we take a total of ten cases for parameter V, i.e., V = 40, 41, 42, 43, 44, 45, 46, 47, 48, 49. In addition, we set λ = 0.8 in the mλ measure, and P_c = 0.6, P_m = 0.7, pop_size = 40 in the genetic algorithm. The computational results are listed in Table 5. Clearly, different parameters V might produce different optimal solutions. For instance, if we take V = 43, the optimal
solution turns out to be x_2 = 0.10, x_6 = 0.42, x_8 = 0.48; on the other hand, when we set V = 48, the outputted optimal solution is x_3 = 0.11, x_6 = 0.37, x_8 = 0.52.
Conclusions
This paper proposed a new model for the portfolio selection problem in the fuzzy environment. To balance optimism and pessimism in the decision-making process, we adopted the mλ measure to scale the chance of a fuzzy event. Then, the expected value and variance of a fuzzy variable were defined based on the mλ measure. Some properties of the expected value and variance were also investigated, e.g., linearity. Based on these definitions, we developed three risk-guaranteed models for the portfolio selection problem. Since the proposed models are nonlinear, we in particular designed a genetic algorithm to search for near-optimal solutions. A set of numerical experiments was also implemented to show the performance of the proposed variance-constrained model and algorithm.
In this paper, we proposed some basic reliable models for the portfolio selection problem in the fuzzy environment. Here, we need to mention that the proposed model framework can also be suitable for a variety of practical problems so as to optimize the decision risks. In future study, we will investigate real-world applications of the proposed models through real cases.
Figure 1. Expected value curves with respect to parameter λ.
By using Examples 1 and 4, we can easily prove this theorem.
Step 3. Set a = min_{1≤k≤N} f(x, u_k) and b = max_{1≤k≤N} f(x, u_k); Step 4. Randomly generate r in the interval [a, b];
Figure 2. Variation of the optimal objective with parameter λ.
If λ = 0, the expected value takes the value (a + b)/2. Actually, if ξ denotes the fuzzy return, this expected value function is reasonable for different types of decision makers, since optimistic decision makers usually tend to overrate the expected return, and pessimistic decision makers prefer to underestimate it.
Table 1. Comparison of the expected values (Exp.) and simulated values (Sim.).
Table 2. Fuzzy returns of each security in the decision-making process.
To further show the performance of the computational results, we compute the relative errors of the different results with respect to the best objective value (i.e., 80.23) according to the following equation: error = (80.23 − objective value)/80.23 × 100%.
Table 3. The computational results with different parameters.
Automatic Recognition Reading Method of Pointer Meter Based on YOLOv5-MR Model
Meter reading is an important part of intelligent inspection, and current meter reading methods based on target detection suffer from low accuracy and large error. In order to improve the accuracy of automatic meter reading, this paper proposes an automatic reading method for pointer-type meters based on the YOLOv5-Meter Reading (YOLOv5-MR) model. Firstly, in order to improve the detection performance for small targets in the YOLOv5 framework, a multi-scale target detection layer is added to the YOLOv5 framework, and a set of anchors is designed based on the lightning rod dial dataset; secondly, the loss function and up-sampling method are improved to enhance the model training convergence speed and obtain the optimal up-sampling parameters; finally, a new method for fitting the external circle of the dial is proposed, and the dial reading is calculated by the center angle algorithm. The experimental results on the self-built dataset show that the Mean Average Precision (mAP) of the YOLOv5-MR target detection model reaches 79%, which is 3% better than the YOLOv5 model, and outperforms other advanced pointer-type meter reading models.
Introduction
Pointer-type meters are widely used in power systems, manufacturing systems, the military, and aerospace due to their simple structure, low design and manufacturing costs, strong anti-interference ability, and high reliability. The traditional manual method of periodically checking meter readings is not only inefficient and inaccurate, but also cannot provide real-time readings. The traditional method requires a significant amount of human and material resources, and in certain extreme environments, such as those characterized by high temperature, high pressure, and radiation, manual reading of pointer instruments can be inconvenient. Currently, meter reading models are mainly based on traditional image processing techniques, which fail to effectively address challenges such as uneven illumination, complex backgrounds, tilted pointers, and blurred images. As a result, the processing is cumbersome, the accuracy is low, and the error is significant. Thus, there is a pressing need to adopt deep learning technology to develop a meter reading model for pointer-type instruments.
In early research, both domestic and international researchers used a series of traditional image processing techniques to process pointer-type instrument panels. Alegria et al. [1] were the first to use image processing and computer vision techniques to automatically read pointer-type instrument readings for data communication interfaces. In recent years, Yue et al. [2] implemented automatic reading of pointer-type instruments based on machine vision technology, proposing a distance discrimination method based on the distance from the pointer to the adjacent scale line on the left, for scale line and pointer positioning. However, this method required two clearer images for subtraction to extract the pointer, and was not robust against strong external lighting changes and other interferences. Sun et al. [3] proposed using the concentric ring search method to determine the deflection angle of the instrument pointer, which had higher accuracy for reading pointer-type instruments. In order to achieve automatic reading of meters, Belan et al. [4] used a method combining radial projection with the Bresenham algorithm to identify the position of the pointer in the instrument panel, thereby obtaining the meter reading. Liu et al. [5] proposed a machine vision-based automatic meter reading method that used a region-growing algorithm to locate the center of the dial and extracted the pointer through contour fitting; it can automatically read instruments with evenly or unevenly distributed scale lines, and has good accuracy. Fang et al. [6] used the SIFT algorithm to match the instrument dial area, and then used algorithms such as the Hough transform to further process the instrument pointer, calculating the deflection angle of the pointer to achieve automatic reading of the pointer-type instrument. Huang et al. [7] proposed an instrument detection algorithm for multiple instrument types, and a single-camera visual pointer reconstruction algorithm to accurately read the scale of the pointer-type instrument. Gao et al. [8] extracted the connected components of the pointer by analyzing the contour of the instrument, and proposed using the support vector machine and histogram gradient method to recognize the numbers on the instrument panel. They then used Newton interpolation linear relationships to determine pointer errors and achieved an automatic reading of the instrument. Ma et al. [9] proposed a method based on symmetric binarization thresholds to segment the pointer region, and an improved random sample consistency algorithm to identify the pointer, which had high adaptability to background interference and pointer shadow issues in complex environments. The research on automatic reading of pointer-type instruments has gradually matured both domestically and internationally, but many algorithms are still not sufficiently robust or accurate against factors such as lighting and background interference.
In recent years, with the development of computer technology and artificial intelligence [10,11], automatic meter reading based on deep learning has been widely developed. Currently, most meter reading methods based on deep learning technology are implemented based on object detection methods [12], and deep neural networks have become the main method for object detection. The task of object detection is to find the regions of interest in an image and determine their position, size, and category information. Existing deep learning detection methods can be mainly divided into two categories. One category is the two-stage detection model represented by R-CNN [13], SPP-NET [14], Faster R-CNN [15], and Mask R-CNN [16]. These models have high accuracy but slow detection speed. The other category is the one-stage detection model represented by the YOLO series [17] and SSD [18], which is based on regression. These models have faster computation speeds but lower detection accuracy. In 2016, Redmon et al. proposed the YOLOv1 [17] model, which for the first time regressed the problem of object localization and classification as a whole. The model divided the image into S × S grids and predicted two candidate boxes of different sizes on each grid, and then used non-maximum suppression (NMS) to eliminate duplicates and obtain the final prediction boxes. Although YOLOv1 had a fast detection speed, its detection performance was poor due to the small number and fixed size of predicted boxes on each grid. In 2017, Redmon et al. proposed the YOLOv2 [19] model, which retained the idea of grid division in YOLOv1, and introduced the Anchor concept from Faster R-CNN, greatly increasing the detection accuracy of the model. However, the scale of Anchor in YOLOv2 was relatively single, and its detection performance for multi-scale targets was not very good. In 2018, Redmon et al. [20] proposed the YOLOv3 model, which introduced the feature pyramid and achieved multi-scale object prediction. In 2020, Bochkovskiy et al. proposed the YOLOv4 [21] model, which combined the idea of the cross-stage local network and improved the main network, achieving a dual improvement in detection accuracy and speed. In 2020, the Ultralytics team proposed the YOLOv5 [22] model, which has four versions of different sizes, from small to large: YOLOv5s, YOLOv5m, YOLOv5l, and YOLOv5x. The YOLOv5 models of different sizes have similar structures, and only control the network layers and the number of input and output channels of each layer through two parameters, depth and width, to obtain four models of different sizes.
The entire YOLOv5 network can be divided into an input end, a backbone network, a Neck network, and a prediction end. The backbone network is used to extract features, the Neck network is used to fuse features, and the prediction end regresses the predicted results based on the input features. Although YOLOv5 has not proposed a novel model system for the YOLO series, it is a culmination of many optimization strategies and training techniques, achieving the highest performance of YOLO series of object detection algorithms, and providing a very convenient model training and deployment scheme.
Deep learning is a popular topic in computer vision [23], and studies have investigated recognition and reading algorithms for pointer meters. For example, Liu et al. [24] used Faster R-CNN to locate the meter position and employed the Hough transform for pointer detection and reading. Similarly, Wang et al. [25] used Faster-RCNN to detect the target meter panel area and proposed a Poisson fusion method to expand the dataset. They used image processing methods for preprocessing and the Hough transform for pointer centerline detection. Wu et al. [26] proposed an automatic reading system based on computer vision, addressing issues of poor robustness, high training costs, inadequate compensation correction, and low accuracy. They designed a meter image skew correction algorithm using binary masks and improved Mask-RCNN for different types of pointer meters. Zou et al. [27] used Mask-RCNN for pointer segmentation and achieved high-precision meter reading recognition. Li et al. [28] presented a dial reading algorithm for inspection robots that solved problems of uneven image illumination, complex backgrounds, and interference through techniques like image enhancement, circle detection, pointer detection, and automatic reading. However, most of these algorithms are based on the Faster-RCNN algorithm. YOLOv5 has several advantages over Faster R-CNN. For instance, YOLOv5 has a faster detection speed due to its new network structure, which can process input images more efficiently. Moreover, YOLOv5 has a simpler network structure, making it easier to implement and use. Additionally, YOLOv5 can run directly on a single GPU, while Faster R-CNN requires multiple GPUs to achieve high-speed detection. Furthermore, YOLOv5 has higher detection accuracy on some public datasets than Faster R-CNN, and supports multiple input sizes to adapt to different image sizes. In terms of model deployment, YOLOv5 has smaller model files and can be deployed on low-power devices more easily, while Faster R-CNN requires more custom code for deployment, which may increase deployment difficulty and time.
In engineering applications of object detection, single-stage object detection models are more widely used than two-stage models due to their high real-time requirements. Among them, the YOLO series of models have become the most user-friendly object detection models with high accuracy, fast speed, and ease of deployment through continuous development [21]. However, the YOLOv5 model still has shortcomings in detecting small objects. The first reason for the poor detection of small objects by the YOLOv5 model is that the sample size of small objects is small, and the down-sampling factor of the YOLOv5 model is relatively large, making it difficult for deeper feature maps to learn the feature information of small objects. The second reason is that the Generalized Intersection over Union (GIoU) [29] loss function used by YOLOv5 for small object detection is slow and unstable in convergence. The third reason is that the YOLOv5 model uses the nearest-neighbor interpolation method for up-sampling, which requires manually designed up-sampling parameters and is difficult to obtain optimal up-sampling parameters.
In this paper, a new pointer automatic meter reading model, named YOLOv5-MR, is proposed and combined with a new method of fitting the dial outer circle to improve the accuracy of meter reading. Compared with the original YOLOv5 model, YOLOv5-MR adds detection on the C2 feature layer, which can extract more target features and in particular improves the detection of small objects. In order to solve the problem of edge length error amplification in the convergence process of the model, this paper uses the Efficient Intersection over Union (EIoU) [30] loss function to improve the original GIoU loss function, which makes the model convergence more stable. Meanwhile, in order to learn better up-sampling parameters, this paper adopts the transposed convolution method instead of the fixed nearest-neighbor interpolation method. Finally, this paper proposes a new method to fit the outer circle of the dial to calculate the meter readings using the circular angle algorithm. After experimental validation, the model performs well in small object detection, while the introduced penalty term helps to avoid erroneous length amplification, thus achieving faster and better model convergence. In addition, the proposed method of fitting the dial outer circle outperforms existing object detection-based meter reading methods in terms of accuracy, speed and robustness. We also present a new dataset for lightning arrester meter reading named MR-Meter.
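The sketch below shows the EIoU loss for one box pair, following our reading of the formula published in [30]; it is not code released with the paper.

```python
# EIoU loss for boxes given as (cx, cy, w, h): 1 - IoU plus a center-
# distance term and explicit width/height penalty terms.
import numpy as np

def eiou_loss(p, g):
    px1, py1, px2, py2 = p[0]-p[2]/2, p[1]-p[3]/2, p[0]+p[2]/2, p[1]+p[3]/2
    gx1, gy1, gx2, gy2 = g[0]-g[2]/2, g[1]-g[3]/2, g[0]+g[2]/2, g[1]+g[3]/2
    iw = max(0.0, min(px2, gx2) - max(px1, gx1))
    ih = max(0.0, min(py2, gy2) - max(py1, gy1))
    inter = iw * ih
    iou = inter / (p[2] * p[3] + g[2] * g[3] - inter)
    cw = max(px2, gx2) - min(px1, gx1)               # enclosing box width
    ch = max(py2, gy2) - min(py1, gy1)               # enclosing box height
    rho2 = (p[0] - g[0]) ** 2 + (p[1] - g[1]) ** 2   # squared center distance
    return (1 - iou + rho2 / (cw**2 + ch**2)
            + (p[2] - g[2]) ** 2 / cw**2
            + (p[3] - g[3]) ** 2 / ch**2)

print(eiou_loss(np.array([0.50, 0.50, 0.40, 0.30]),
                np.array([0.55, 0.50, 0.50, 0.30])))
```

Because the width and height errors are penalized directly rather than through an enclosing-box or aspect-ratio proxy, the regression signal does not degenerate when the predicted box encloses, or is enclosed by, the target, which is one reason cited for its faster and more stable convergence than GIoU.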
The paper is organized as follows. Section 2 presents related work in the field of automatic meter reading. Section 3 describes the proposed YOLOv5-MR model, including the improvement of the original YOLOv5 model and a new method for fitting the dial outer circle. Section 4 presents the experimental results and analysis. Finally, Section 5 discusses the advantages of the model proposed in this paper over other models. Section 6 summarizes the paper and discusses future work.
Object Detection
Deep object detection models can be categorized into two types: two-stage detectors and one-stage detectors. Faster R-CNN [15] is a classic two-stage deep learning model that uses a Region Proposal Network (RPN) to propose candidate object regions and performs classification and refinement on these proposals. However, for small objects like watch dials, it is difficult to match the size of the candidate regions with the size of the objects, leading to inaccurate detection results.
One-stage detectors, such as YOLO [17,19-21] and SSD [18], perform detection in a single stage, and YOLO is simpler and easier to train and adjust compared to SSD. Building on YOLOv2 [19], YOLOv3 [20] introduces multi-scale prediction, where the model predicts objects at three different scales, allowing it to detect small objects more accurately. YOLOv4 [21] builds upon the success of previous YOLO versions and introduces various improvements, such as a new backbone network, a better neck network, and a new loss function. These improvements led to state-of-the-art performance on the COCO dataset. YOLOv5 [22], on the other hand, is a completely new architecture that uses a novel approach to object detection.
YOLOv5 Network Architecture
In the YOLOv5 model, the Neck network is responsible for feature enhancement: it processes the features extracted from the backbone network to improve prediction accuracy. The original Feature Pyramid Network (FPN) structure in the Neck uses top-down feature fusion to address the multi-scale variation problem in object detection and has been widely used in many models. However, if only the FPN structure is used to fuse contextual information, information cannot flow from lower-level to upper-level feature maps. The YOLOv5 model therefore adds a Path Aggregation Network (PAN) structure on top of the FPN, introducing a bottom-up information flow; the combined top-down and bottom-up flows enhance the detection ability of the network. The resulting PANet structure, shown in Figure 1, fuses shallow features rich in positional information into deeper features, accurately preserving spatial information and effectively improving the detection of large and medium-sized objects.

As shown in Figure 2, the YOLOv5 model first divides the image into S × S grids, with the center coordinates of each grid cell denoted C_x and C_y. In each cell the network outputs the predicted center offsets (σ(t_x), σ(t_y)) and the relative width t_w and height t_h, and the final predicted box (the orange solid line) is obtained from the actual position, width, and height of the cell. The dashed box in Figure 2 is the prior (anchor) box, with width p_w and height p_h. For example, for the cell in the second row and second column of Figure 2, the center (b_x, b_y), width b_w, and height b_h of the predicted box are obtained from Equation (1), which in the standard YOLO parameterization reads b_x = σ(t_x) + C_x, b_y = σ(t_y) + C_y, b_w = p_w e^{t_w}, b_h = p_h e^{t_h}.

The values entering the loss function are the width, height, and center position of the predicted box, its confidence score, and the classification information. The confidence score of a predicted box is defined as the Intersection over Union (IoU) between the predicted box and the ground truth box. Generally, predicted boxes with IoU > 0.7 are treated as positive examples, meaning successfully detected targets, while those with IoU < 0.3 are treated as negative examples, representing the background; the remaining predicted boxes are ignored. The loss function is calculated using only positive and negative examples, while striving to maintain a balance between the two classes. The classification information is the probability that the predicted box contains an object of a certain category. The IoU is defined in Equation (2) as IoU = |Box1 ∩ Box2| / |Box1 ∪ Box2|, where Box1 is the predicted box and Box2 the ground truth box.
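As a concrete illustration, here is a minimal sketch of the IoU of Equation (2) for axis-aligned boxes in (x1, y1, x2, y2) corner format; the function name and box format are our own choices, not from the paper:

```python
def iou(box1, box2):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Intersection rectangle
    ix1, iy1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    ix2, iy2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union = area1 + area2 - intersection
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    union = area1 + area2 - inter
    return inter / union if union > 0 else 0.0

# A predicted box with IoU > 0.7 against its ground truth is a positive example:
print(iou((10, 10, 50, 50), (12, 12, 52, 52)))  # ~0.82
```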
The YOLOv5 model's loss function consists of three components: classification loss, localization loss, and object confidence loss. YOLOv5 employs GIoU [29] as the bounding box regression loss function. The GIoU method overcomes the shortcomings of IoU while fully utilizing its advantages. Let A be the predicted box, B the ground truth box, and A_c the minimum convex region (in practice, the smallest enclosing box) containing both A and B. In its standard form, the GIoU of Equation (3) is GIoU = IoU − |A_c \ (A ∪ B)| / |A_c|, with regression loss 1 − GIoU. During training, binary cross-entropy (BCE) loss is used as the classification loss. The complete loss function, Equation (4), therefore consists of the bounding box regression loss (first term), the object confidence prediction loss (second and third terms), and the class prediction loss (fourth term). In Equation (4), S is the grid scaling factor, B is the number of predicted boxes per grid cell, C is the total number of classes, p is the class probability, and x_i, y_i, w_i, h_i are the center coordinates, width, and height of the predicted box in the i-th grid cell. The weight coefficient for the bounding box coordinates is denoted λ_coord, and the penalty coefficient for objectness predictions in empty cells is denoted λ_noobj.
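A sketch of the GIoU of Equation (3), reusing the iou helper from the previous snippet; the enclosing region A_c is taken as the smallest enclosing axis-aligned box, the usual practical choice:

```python
def giou(box1, box2):
    """Generalized IoU: IoU minus the fraction of the enclosing box not covered by the union."""
    # Smallest enclosing box A_c
    ex1, ey1 = min(box1[0], box2[0]), min(box1[1], box2[1])
    ex2, ey2 = max(box1[2], box2[2]), max(box1[3], box2[3])
    enclose = (ex2 - ex1) * (ey2 - ey1)
    # Union of the two boxes
    ix1, iy1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    ix2, iy2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    union = area1 + area2 - inter
    return iou(box1, box2) - (enclose - union) / enclose

def giou_loss(box1, box2):
    """Bounding box regression loss used by the original YOLOv5."""
    return 1.0 - giou(box1, box2)
```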
To address the low accuracy and large errors of current object-detection-based meter reading methods, we present a novel pointer-type automatic meter reading model based on YOLOv5.
Multi-Scale Feature Detection
To improve the model's ability to detect small targets and extract more target features, this paper proposes a new object detection model, YOLOv5-MR, whose overall structure is shown in Figure 3. The model is divided into the input layer, feature extraction layer, feature fusion layer, and detection head. To enable YOLOv5 to detect more small-target features, we add the C2 target detection layer, which detects shallower features; a set of custom anchors on the C2 detection layer helps the model converge to and fit the small targets in the dataset. Second, the EIoU loss function makes convergence more stable and eliminates the erroneous side-length amplification that can occur during convergence. Finally, transposed convolution is used to learn better up-sampling parameters.

In computer vision, small object detection, i.e., accurately detecting objects with very few visual features (roughly 32 × 32 pixels or smaller), has long been a challenge. In CNN-based feature extraction, the information on the feature map decreases as the network deepens through repeated convolution and pooling operations. For example, Faster R-CNN uses VGG16 as the backbone network, and after the last convolution the height and width of the feature map are one-sixteenth of those of the original image. As convolution proceeds, the information about small objects on the feature map keeps shrinking, so some small objects go undetected and subsequently fail in classification and bounding box regression.
After repeated convolution and pooling operations, the information on the feature map keeps decreasing. In Figure 3, the C2 feature layer contains more object information than the C3 feature layer, so this paper adds detection on the C2 layer. The original YOLOv5 algorithm uses the plain FPN model, which has only three detection layers corresponding to three sets of initialized anchor values. For an input image of 640 × 640, the detection layer corresponding to C3 is 80 × 80 and can detect objects larger than 8 × 8; the layer corresponding to C4 is 40 × 40 and detects objects larger than 16 × 16; and the layer corresponding to C5 is 20 × 20 and detects objects larger than 32 × 32. To improve small object detection, the feature map is up-sampled again after the M3 layer to enlarge it further, and the resulting 160 × 160 feature map is concatenated with the C2 feature layer of the backbone for feature fusion, yielding a larger feature map for small object detection. This paper therefore adds the P2 detection layer in the feature fusion part and uses four feature layers (P2, P3, P4, P5) for detection.
The original YOLOv5 detection model uses default anchor boxes designed for the COCO dataset: nine anchors of different scales and aspect ratios, with sizes (10,13), (16,30), (33,23); (30,61), (62,45), (59,119); and (116,90), (156,198), (373,326), where the small anchors detect small objects on large feature maps.
However, these anchor sizes are designed for the object sizes in the COCO dataset and are not suitable for the MR-Meter dataset used in this paper; with the default anchors the network cannot converge well on the small objects in the dataset. Therefore, in consideration of the size characteristics of the meter targets in MR-Meter, a new set of anchors with sizes (5,6), (8,14), (15,11), smaller than the default settings, was added to the network to detect the small objects on the meter. During feature extraction, larger feature maps contain more information about small objects, so smaller anchor sizes are typically assigned to larger feature maps, while larger anchors are assigned to smaller feature maps to detect larger objects. Additionally, non-maximum suppression is used to eliminate redundant candidate regions. The new small anchors are used in the newly added large-scale detection layer C2 of Figure 3 to detect small objects in the C2 feature map. During training, therefore, four sets of twelve anchors are used, on the C2, C3, C4, and C5 detection layers, with sizes (5,6), (8,14), (15,11); (10,13), (16,30), (33,23); (30,61), (62,45), (59,119); and (116,90), (156,198), (373,326).
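A sketch of the four anchor sets described above, laid out as (width, height) pairs per detection layer in the style of a YOLOv5 model configuration; the variable name is ours:

```python
# Four anchor sets for the P2/P3/P4/P5 detection layers (width, height in pixels).
ANCHORS_MR = [
    [(5, 6), (8, 14), (15, 11)],         # P2 (160x160): new small anchors for meter targets
    [(10, 13), (16, 30), (33, 23)],      # P3 (80x80)
    [(30, 61), (62, 45), (59, 119)],     # P4 (40x40)
    [(116, 90), (156, 198), (373, 326)], # P5 (20x20)
]
```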
Loss Function
The YOLOv5 model adopts the GIoU loss function [29] for bounding box regression. Although the GIoU loss addresses the gradient-vanishing problem of the IoU loss, it is unstable and converges slowly. Therefore, this paper uses the EIoU loss function [30] for bounding box regression.
The EIoU loss function is an improvement upon the Complete Intersection over Union (CIoU) loss function [29], which is itself an improvement upon GIoU. In its standard form, the CIoU loss of Equation (5) is L_CIoU = 1 − IoU + ρ²(b, b^gt)/c² + αv, where c is the diagonal length of the smallest box enclosing the two bounding boxes, b and b^gt are the centers of the predicted and ground-truth boxes, ρ(·) is the Euclidean distance between them, and α is a weight coefficient. The aspect-ratio term v of Equation (6) is v = (4/π²)(arctan(w^gt/h^gt) − arctan(w/h))². CIoU has two problems. First, it uses the relative proportion of width and height instead of their actual values: by the definition of v, if the predicted width and height satisfy w = k w^gt and h = k h^gt for some k ∈ ℝ₊, the CIoU penalty on the relative proportion is ineffective. Second, from Equations (7) and (8) one derives ∂v/∂w = −(h/w) ∂v/∂h, i.e., the gradients of v with respect to width and height have opposite signs. This poses a problem during training: when one of the two values increases the other must decrease, so the width and height cannot grow or shrink together, which slows the convergence of the model. As shown in Figure 4, GIoU uses the area of the enclosing box minus the union as a penalty term, which leads to the detour of first enlarging the union area and only then optimizing the IoU (first row of Figure 4); in CIoU the width and height cannot increase or decrease simultaneously (second row of Figure 4). Figure 4 shows that EIoU has the best convergence behavior.
In Figure 4 the anchor is a bounding box whose width and height are both larger than those of the object to be detected, yet GIoU still amplifies the width of the predicted box during optimization. Compared with the two loss functions above, EIoU converges faster. Based on this observation, EIoU directly penalizes the predicted values of w and h, as in Equation (9): L_EIoU = 1 − IoU + ρ²(b, b^gt)/c² + ρ²(w, w^gt)/C_w² + ρ²(h, h^gt)/C_h², where c is the diagonal length of the smallest box enclosing the two bounding boxes, C_w and C_h are the width and height of that enclosing box, b and b^gt are the centers of the predicted and ground truth boxes, w and w^gt are their widths, h and h^gt are their heights, and ρ(·) denotes the Euclidean distance. Figure 4 shows the convergence of the GIoU, CIoU, and EIoU losses for the same anchor and ground truth: the yellow box is the ground truth and the black box the initial position; the first, second, and third rows show the convergence of GIoU, CIoU, and EIoU, respectively; and the pink, brown, and green boxes trace the predicted box from the 10th to the 150th iteration.
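A sketch of the EIoU loss of Equation (9), again for corner-format boxes and reusing the iou helper above; helper names are ours:

```python
def eiou_loss(box1, box2):
    """EIoU: 1 - IoU plus center-distance, width, and height penalty terms (Eq. (9))."""
    # Centers, widths, heights of predicted and ground truth boxes
    cx1, cy1 = (box1[0] + box1[2]) / 2, (box1[1] + box1[3]) / 2
    cx2, cy2 = (box2[0] + box2[2]) / 2, (box2[1] + box2[3]) / 2
    w1, h1 = box1[2] - box1[0], box1[3] - box1[1]
    w2, h2 = box2[2] - box2[0], box2[3] - box2[1]
    # Smallest enclosing box: width C_w, height C_h, squared diagonal c^2
    Cw = max(box1[2], box2[2]) - min(box1[0], box2[0])
    Ch = max(box1[3], box2[3]) - min(box1[1], box2[1])
    c2 = Cw ** 2 + Ch ** 2
    dist2 = (cx1 - cx2) ** 2 + (cy1 - cy2) ** 2
    return (1.0 - iou(box1, box2)
            + dist2 / c2                  # center-distance term
            + (w1 - w2) ** 2 / Cw ** 2    # direct width penalty
            + (h1 - h2) ** 2 / Ch ** 2)   # direct height penalty
```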
Transposed Convolution
In neural networks it is often necessary to up-sample low-resolution feature maps to a higher resolution. Common up-sampling methods include nearest-neighbor interpolation [31], bilinear interpolation [32], and bi-cubic interpolation [33]. These methods are based on prior knowledge and follow fixed, non-learnable rules, which is not ideal in many scenarios. We therefore introduce transposed convolution [34] to learn better up-sampling parameters. Unlike traditional up-sampling, transposed convolution is not a preset interpolation; like standard convolution it has learnable parameters, so the optimal up-sampling can be obtained through model learning. Standard convolution multiplies the elements of the kernel element-wise with the corresponding elements of the input matrix and sums them, then slides the kernel over the input in steps of the stride until the whole input is traversed. Suppose the input is a 4 × 4 matrix and a 3 × 3 standard convolution is applied with padding = 0 and stride = 1; the output is then a 2 × 2 matrix. If x is the 4 × 4 input, y the 2 × 2 output, and C the matrix built from the 3 × 3 kernel, the standard convolution process can be written as Cx = y. Standard convolution is illustrated in Figure 5a, while Figure 5b shows the transposed convolution.
The main purpose of transposed convolution [34] is up-sampling. Transposed convolution is not the inverse operation of convolution: it restores only the size of the feature map, not its original values. It also does not enlarge the receptive field; it learns the content of one feature point and maps it through the kernel, so the receptive field of every point on the kernel is unchanged, and stacking further convolution layers behind it is what expands the receptive field. Using transposed convolution for up-sampling lets the model automatically learn better up-sampling parameters and achieve higher accuracy. Therefore, in the feature extraction part of the YOLOv5-MR model, transposed convolution is used to improve the up-sampling ability.
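A sketch in PyTorch of replacing fixed nearest-neighbor up-sampling with a learnable transposed convolution; the kernel size, stride, and channel count are illustrative, and as noted in the ablation study these hyper-parameters must be tuned to the dataset:

```python
import torch
import torch.nn as nn

channels = 256  # illustrative channel count

# Fixed, non-learnable 2x up-sampling (the YOLOv5 default)
up_nearest = nn.Upsample(scale_factor=2, mode="nearest")

# Learnable 2x up-sampling via transposed convolution
up_transposed = nn.ConvTranspose2d(channels, channels, kernel_size=2, stride=2)

x = torch.randn(1, channels, 40, 40)
print(up_nearest(x).shape)     # torch.Size([1, 256, 80, 80])
print(up_transposed(x).shape)  # torch.Size([1, 256, 80, 80]), parameters learned in training
```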
Recognition and Reading of Dial Numbers
The final scale value of the dial is calculated from the coordinates of the pointer, the center point of the dial, and the coordinates of six scale targets. First, the YOLOv5-MR model is trained to detect eight types of targets: the pointer, the center point of the dial, and the six scale values (0, 2, 4, 6, 8, 10). After training, the model locates the corresponding candidate regions in the original image, as shown in Figure 6. The YOLOv5-MR model yields the center point of the dial C(C_x, C_y), the center of the pointer R(R_x, R_y), and the centers of the scale marks N_i(N_ix, N_iy). However, because the dial is viewed at an angle, the predicted center point is the center of an ellipse rather than of a circle; calculating directly with Equations (6)-(10) would produce large errors, so the circumscribed circle of the ellipse must be fitted first. The fitting of the minimum circumscribed circle of the elliptical dial is sketched in Figure 7. In Figure 7, C(C_x, C_y) is the center of the elliptical dial predicted by the YOLOv5-MR rotated object detection model, R(r_x, r_y) is the predicted pointer center, P(C_x, r_y) is the point with the same horizontal coordinate as the dial center and the same vertical coordinate as the pointer center, and O(O_x, O_y) is the center of the fitted circumscribed circle. Since a rotated object detection model is used, the rotation angle of the pointer is predicted directly and denoted θ_1. By equality of opposite angles, θ_1 = θ_2 and θ_3 = 90° − θ_2. The length of segment RP is l_RP = C_x − r_x, and l_PO follows from tan θ_3 = l_PO / l_RP. The center of the circumscribed circle is then O_x = C_x and O_y = r_y + l_PO. When computing the reading from the predicted center point, pointer position, and dial scale positions, one must check that the detection boxes are complete. The conditions for computing a reading are: (1) the center point; (2) the pointer; and (3) at least two scale marks. If these conditions are not satisfied, the dial reading cannot be computed.
The process for calculating the angle between the pointer and each scale on the dial is shown in Figure 8, where 0, 2, 4, 6, 8, and 10 denote the dial scales, (x_c, y_c) is the center of the minimum enclosing circle, (x_1, y_1) is the center of the pointer, and (x_2, y_2) are the coordinates of the 0-scale. The angle between the pointer and each scale, denoted θ_i (i = 1, 2, ..., 6), is obtained from Equation (10). This calculation gives the angle required to rotate the pointer counterclockwise or clockwise from (x_1, y_1) to the current scale with (x_c, y_c) as the rotation center. Counterclockwise angles are negative and clockwise angles are positive, with θ ∈ (−180°, 180°), forming an angle vector V = [θ_1, θ_2, θ_3, θ_4, θ_5, θ_6].
The mathematical principle for calculating the angle between two vectors from their coordinates is as follows. Let m and n be two non-zero vectors and let <m, n> denote the angle between them. Then cos<m, n> = (m · n)/(|m||n|). With m = (x_1, y_1, z_1) and n = (x_2, y_2, z_2), the dot product and norms are
m · n = x_1 x_2 + y_1 y_2 + z_1 z_2, (11)
|m| = √(x_1² + y_1² + z_1²), (12)
|n| = √(x_2² + y_2² + z_2²). (13)
Substituting Equations (11)-(13) into Equation (14) gives the cosine of the angle between the two vectors, and thus the angle itself:
cos<m, n> = (x_1 x_2 + y_1 y_2 + z_1 z_2) / (|m||n|). (14)
Setting z = 0 in Equation (14) yields the two-dimensional plane case, Equation (15):
cos<m, n> = (x_1 x_2 + y_1 y_2) / (√(x_1² + y_1²) √(x_2² + y_2²)). (15)
The angle between two vectors lies in θ ∈ [0, π]. If the scale to be read is on the left side of the pointer, the rotation is counterclockwise and the angle is negative; the larger the negative value, the closer the scale is to the pointer. Similarly, if the scale is on the right side of the pointer, the rotation is clockwise and the angle is positive; the smaller the positive value, the closer the scale is to the pointer. After the angles between each scale and the pointer are obtained, the smallest positive angle θ_i and the largest negative angle θ_j are determined, and the scale value at the pointer is interpolated from the proportions of these two angles within the full 360° circle. The proposed method uses the recognized pointer coordinates, scale coordinates, and fitted center coordinates directly, which is simpler and more convenient than the method of Zou et al. [27], which first extracts the pointer from the image with Mask R-CNN and then fits the pointer line with a Hough transform to obtain the rotation angle.
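A sketch of the signed-angle reading logic described above, using the 2-D case of Equation (15) with the sign taken from the cross product; the names, the sign convention, and the interpolation helper are our own illustrative choices:

```python
import math

def signed_angle(center, pointer, scale):
    """Signed angle (degrees) rotating the pointer vector onto the scale vector
    around the dial center; one rotation sense is positive, the other negative."""
    mx, my = pointer[0] - center[0], pointer[1] - center[1]
    nx, ny = scale[0] - center[0], scale[1] - center[1]
    cos_a = (mx * nx + my * ny) / (math.hypot(mx, my) * math.hypot(nx, ny))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))  # in [0, 180]
    cross = mx * ny - my * nx  # sign distinguishes the rotation direction
    return angle if cross > 0 else -angle

def read_value(center, pointer, scales):
    """Interpolate the reading between the two scale marks flanking the pointer.
    `scales` maps scale value -> (x, y) coordinates of the detected mark."""
    angles = {v: signed_angle(center, pointer, p) for v, p in scales.items()}
    pos = {v: a for v, a in angles.items() if a > 0}
    neg = {v: a for v, a in angles.items() if a <= 0}
    v_hi, a_hi = min(pos.items(), key=lambda kv: kv[1])  # smallest positive angle
    v_lo, a_lo = max(neg.items(), key=lambda kv: kv[1])  # largest negative angle
    frac = -a_lo / (a_hi - a_lo)  # pointer position between the two marks
    return v_lo + frac * (v_hi - v_lo)
```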
Experimental Environment
The hardware environment used to conduct the experiments consisted of an NVIDIA RTX 3090 GPU, an Intel(R) Core(TM) i7 CPU with a clock speed of 2.3~4.6 GHz, and 32 GB of RAM. The algorithms were implemented with PyTorch, Python 3.9, and CUDA 10.0, and the experiments were conducted on an Ubuntu 18.04 operating system.
All models were trained, validated, and tested under the same hyper-parameters, set as shown in Table 1. Detection performance is evaluated from the IoU between the predicted and ground truth bounding boxes; a predicted bounding box is considered successful if its IoU is greater than 0.5. In object detection, both precision and recall must be considered when evaluating a network model, and mAP is generally used to evaluate overall performance. Precision and recall are calculated by Formulas (16) and (17):
Precision = TP / (TP + FP), (16)
Recall = TP / (TP + FN), (17)
where TP is the number of true positive detections, FP the number of false positive detections, and FN the number of false negative detections. Average Precision (AP) is defined as the average precision at different recall levels and is commonly used to evaluate the detection performance of a specific class. mAP is the mean of AP across all object classes and is commonly used to evaluate the overall performance of a detection model. The mAP is calculated by Equation (18):
mAP = (1/n) Σ_{i=1}^{n} AP_i, (18)
where AP_i is the detection accuracy of a certain class and n is the number of classes.
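A sketch of Equations (16)-(18); the per-class counts and AP values are assumed to have been accumulated elsewhere:

```python
def precision(tp, fp):
    """Equation (16): fraction of detections that are correct."""
    return tp / (tp + fp) if tp + fp > 0 else 0.0

def recall(tp, fn):
    """Equation (17): fraction of ground truth objects that are detected."""
    return tp / (tp + fn) if tp + fn > 0 else 0.0

def mean_average_precision(ap_per_class):
    """Equation (18): mAP as the mean of the per-class AP values."""
    return sum(ap_per_class) / len(ap_per_class)
```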
Dataset
As there is no publicly available dataset for lightning arrester meter reading, this paper presents a new dataset named MR-Meter, which consists of 2000 lightning arrester images and 14,000 annotated targets. The images were mainly collected from a well-known power station in the region and were uniformly resized to 640 × 640. The dataset was annotated in the VOC2007 format; the annotations include the rd_pointer class, which marks the position of the pointer, the Center class, which marks the center of the dial, and the positions of the zero, two, four, six, eight, and ten scale marks. During annotation, care was taken to keep the center of each annotation box on the scale mark as much as possible. The training and testing sets were randomly sampled at a ratio of 7:3, with 1400 images for training and 600 images for testing. All annotation information, including location and class information, was recorded in xml format.
To obtain better prior knowledge of the dial targets, this paper uses the K-means algorithm to generate eight prior boxes on the MR-Meter dataset. The clustering results are shown in Figure 9, where (a) clusters the target sizes in the dataset, (b) clusters the target center positions, and (c) shows the statistics of the number of labels. From Figure 9 it can be seen that many points cluster in the lower left corner and many target sizes are smaller than 0.04, indicating that the dial dataset contains many small targets.
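A sketch of K-means clustering on the normalized box widths and heights described above, using scikit-learn; the library choice and function name are ours, not the paper's implementation:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_anchors(wh, k=8):
    """Cluster (width, height) pairs, normalized to [0, 1], into k prior boxes."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(wh)
    anchors = km.cluster_centers_
    # Sort anchors by area so the small anchors come first
    return anchors[np.argsort(anchors.prod(axis=1))]

# wh would be an (N, 2) array of annotated box sizes from the MR-Meter labels;
# the placeholder below mimics a dataset dominated by small targets (< 0.04).
wh = np.random.rand(1000, 2) * 0.1
print(cluster_anchors(wh, k=8))
```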
Results of the YOLOv5-MR Reading Model
This section uses comparative experiments to verify the effectiveness of the YOLOv5-MR model, with the average error δ of Equation (19) as the evaluation metric, where n denotes the number of experimental groups, a_i the value predicted by the model, and A_i the manual reading. We used 10 sets of data; the recognition results are shown in Table 2. The data in Table 2 show that YOLOv5-MR effectively improves the detection of small targets in the dataset and has strong convergence ability; its predictions therefore have a smaller error than those of the YOLO-series object detection models. The final recognition results of YOLOv5-MR are shown in Figure 10.
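A sketch of the metric of Equation (19), assuming δ is the mean absolute deviation between model readings and manual readings; the exact normalization is not recoverable from the extracted text:

```python
def average_error(pred, manual):
    """delta = (1/n) * sum(|a_i - A_i|) over the n experimental groups (assumed form)."""
    assert len(pred) == len(manual)
    return sum(abs(a - A) for a, A in zip(pred, manual)) / len(pred)
```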
Experimental Results
To compare the performance of our YOLOv5-MR model with the YOLOv5 model, we conducted experiments with the same dataset and parameter settings and plotted the loss curves of the two models from the log files saved during training, as shown in Figure 11. The panels, from left to right, show Box_Loss, Obj_Loss, Cls_Loss, Ang_Loss, and Total_Loss. From the Total_Loss curves, the YOLOv5-MR model converges faster, more stably, and to a smaller loss value than the YOLOv5 model. This indicates that the EIoU loss function, which directly penalizes the side lengths of the predicted box, effectively avoids the GIoU loss function's problem of enlarging the side lengths of incorrectly predicted boxes, thus improving the convergence ability of the network. Furthermore, the YOLOv5-MR model adds detection of the C2 feature layer, which contains more object information, and sets a group of smaller anchors based on the target size characteristics of the MR-Meter dataset, speeding up the convergence of predicted boxes for small targets and improving the model's small-target detection ability. Finally, the YOLOv5-MR model learns better up-sampling parameters through transposed convolution, which helps the model converge faster and more stably.
Ablation Studies
To evaluate the effects of multi-scale feature detection, the EIoU loss function, and transposed convolution on detection performance under the same experimental conditions, ablation experiments were conducted. The Ultralytics 5.0 version of the YOLOv5m model was used as the baseline, with an input resolution of 640 × 640 and 300 training epochs. The results are shown in Table 3. The second row of Table 3 shows that introducing multi-scale feature detection increased the average precision by 1.2% but reduced the computation speed. The third row shows that the EIoU loss function increased the average precision by 1.8% without reducing speed. The fourth row shows that replacing nearest-neighbor interpolation with transposed convolution decreased the average precision by 0.3%, owing to unsuitable hyper-parameter settings such as the kernel size, stride, and padding of the transposed convolutional layer, which need to be adjusted to the MR-Meter dataset. Incorporating all three improvements into the YOLOv5-MR model raised the average precision by 3.0% and significantly improved the detection of small targets.
Comparative Experiment
In this section we compared the performance of the proposed YOLOv5-MR model against several state-of-the-art object detection models, including the Ultralytics 5.0 versions of YOLOv3 [20], YOLOv3-spp [35], and YOLOv5 [22]. The models were trained and validated on the MR-Meter dataset and compared in terms of Recall, mAP, GFLOPS, and Weights, as presented in Table 4. Table 4 shows that the proposed YOLOv5-MR model achieved significantly higher mAP than the YOLOv3, YOLOv3-spp, YOLOv5s, and YOLOv5m models. Although the GFLOPS of the proposed model is lower than that of the YOLOv5s and YOLOv5m models, it still satisfies the real-time and accuracy requirements. The YOLOv5-MR model introduces multi-scale feature detection on the C2 feature layer, which carries more object feature information, to improve the extraction of small-object features. Furthermore, the EIoU loss function, which directly penalizes the side lengths, solves the problem of enlarging the wrong side length during bounding box convergence and greatly improves the convergence speed of the model, and transposed convolution in the backbone learns better up-sampling parameters and improves accuracy. This paper also compares the YOLOv5-MR algorithm with YOLOv7 [36] and YOLOv8 [37]; the proposed algorithm exhibits higher mAP and recall than both.
Discussion
In this paper we propose a new automated meter reading model, YOLOv5-MR. The model adds a small object detection layer to predict small objects on the C2 feature layer and, in accordance with the characteristics of the MR-Meter dataset, a new set of anchors for small objects, greatly improving small-object detection. The YOLOv5-MR model directly uses the side lengths of the predicted box as the penalty term, effectively avoiding the amplification of wrong lengths in the original GIoU loss function and making the model converge quickly and well. Experimental results demonstrate that YOLOv5-MR outperforms the original YOLOv5 model in small object detection, and that the new outer-circle fitting method is more accurate, faster, and more robust than current detection-based meter reading methods. Compared to recent single-stage YOLO networks, specifically YOLOv7 [36] and YOLOv8 [37], the proposed algorithm likewise shows higher mAP and recall.
Conclusions
This paper proposes an automatic dial-indicator meter reading model based on YOLOv5-MR, addressing the low accuracy and large errors of current detection-based meter reading methods. The algorithm has high accuracy and robustness. Through improvements to the multi-scale detection, the loss function, and the up-sampling method, the detection ability of the model is improved, and a new dial circle-fitting algorithm is proposed, achieving accurate reading of the dial scale. In the next phase, we intend to use a more lightweight backbone network and apply a model pruning algorithm to further reduce the weight of the model and improve its overall efficiency, enabling it to operate with fewer computational resources and making it more suitable for resource-constrained settings.
Future research can explore how to apply this method to other fields while further improving the model's performance and accuracy. The YOLOv5-MR model can be applied to fields such as substation automatic inspection robots and autonomous driving to improve their performance and accuracy.
Overlaps and fermionic dualities for integrable super spin chains
The $\mathfrak{psu}(2,2|4)$ integrable super spin chain underlying the AdS/CFT correspondence has integrable boundary states which describe set-ups where k D3-branes get dissolved in a probe D5-brane. Overlaps between Bethe eigenstates and these boundary states encode the one-point functions of conformal operators and are expressed in terms of the superdeterminant of the Gaudin matrix, which in turn depends on the Dynkin diagram of the symmetry algebra. The different possible Dynkin diagrams of super Lie algebras are related via fermionic dualities, and we determine how overlap formulae transform under these dualities. As an application we show how to consistently move between overlap formulae obtained for k = 1 from different Dynkin diagrams.
Introduction
The study of the AdS/CFT correspondence with the presence of defects has led to the discovery of a number of integrable boundary states of the psu(2, 2|4) super spin chain underlying N = 4 SYM, among these certain matrix product and valence bond states [1,2,3,4,5]. The overlap between these particular boundary states and the Bethe eigenstates of the spin chain encode information about the one-point functions of conformal operators in the domain wall versions of N = 4 SYM [1], and other overlaps are of relevance for the study of quantum quenches in statistical mechanics [6]. Not least the AdS/dCFT motivation has sparked the derivation of a number of exact overlap formulae. From the first exact expression involving the overlap between the Bethe eigenstates of the Heisenberg spin chain and the Néel state, obtained in statistical physics [7,8,9], the catalogue of exact formulae has been extended to overlaps with a large class of matrix product states [1] and arbitrary valence bond states [10], as well as to overlaps in several bosonic spin chains where nesting is involved [11,12,13]. The latest addition consists of overlap formulae for integrable super spin chains [2,3,4,5].

All known overlap formulae contain as a key ingredient the Gaudin matrix [14] of the Bethe eigenstate, or more precisely an object which can be expressed as the superdeterminant of the Gaudin matrix [4]. The Gaudin matrix encodes the norm of the Bethe eigenstate [14,15] and can be expressed in a closed form given the Bethe roots of the state plus the Cartan matrix and Dynkin label describing the Lie algebra and its particular representation underlying the integrable spin chain in question. Dynkin diagrams and Cartan matrices for super Lie algebras are not unique [16] but related via a set of fermionic dualities [17], and this immediately raises a question in relation to the newly derived overlap formulae for integrable super spin chains, namely: how do these formulae transform under fermionic dualities? This question is the main focus of our work. We argue that for consistency reasons the overlap formulae have to transform covariantly under fermionic dualities, a property to be defined more precisely in the following, and that this requirement puts very strong constraints on these formulae. Furthermore, we derive the transformation properties of the superdeterminant of the Gaudin matrix for all fermionic dualities that are needed to move between the various possible Dynkin diagrams of psu(2, 2|4). As an application we transform an overlap formula found in [5] for the Dynkin diagram corresponding to the alternating grading to the cases of the Beauty and the Beast Dynkin diagram [18]. The transformation rules that we derive permit us to check the general formula for one-point functions [5] against explicit field-theory calculations presented in [4] in a different grading.

Our paper is organized as follows. We begin in section 2 by describing the general structure of overlap formulae for integrable boundary states, where the superdeterminant of the Gaudin matrix plays a key role. We furthermore review how fermionic dualities allow one to move between different Dynkin diagrams and associated Cartan matrices of a super Lie algebra, and correspondingly between different sets of Bethe equations determining the eigenstates of the integrable super spin chain in question. Then in section 3 we determine how the superdeterminant of the Gaudin matrix transforms under fermionic dualities, treating first the dualization after a non-momentum-carrying node and subsequently the slightly more complicated case of dualization after a momentum-carrying node, where the superdeterminant becomes singular and needs regularization. In both cases we start from a simple example and work our way towards the general case. With the transformation rules for the superdeterminant in place we, in section 4, turn to the translation of overlap formulae between different gradings, starting by going through the procedure in some detail for su(2|2) and finally demonstrating how to translate the overlap formulae of psu(2, 2|4) between any two gradings. Section 5 contains our conclusion.
2 Integrable Overlaps and Fermionic Duality
Overlap Formulae
The Bethe-ansatz spectrum of an integrable spin chain with a rational R-matrix is neatly encoded in the group-theory data. Each eigenstate is characterized by the rapidities of constituent magnons u_{aj} assigned to the nodes of the Dynkin diagram. The spectral equations depend on the Cartan matrix M_{ab} and the Dynkin labels q_a of the spin representation; their solutions enumerate all eigenstates of the Hamiltonian.
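In a standard rational one-loop form (our conventions; normalizations may differ from the original equations), the spectral equations read
$$\left(\frac{u_{aj}+\tfrac{i}{2}\,q_a}{u_{aj}-\tfrac{i}{2}\,q_a}\right)^{L}=\prod_{(b,k)\neq(a,j)}\frac{u_{aj}-u_{bk}+\tfrac{i}{2}\,M_{ab}}{u_{aj}-u_{bk}-\tfrac{i}{2}\,M_{ab}}\,.$$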
Local single-trace operators in the N = 4 SYM correspond to the Bethe states |{u_{aj}}⟩ ≡ |u⟩ of the psu(2, 2|4) spin chain [19], equivalently represented via the AdS/CFT duality by on-shell states of the dual string theory. Likewise, the boundary states of the spin chain describe D-branes in field theory and, whatever that means, the boundary-state descriptions proved highly efficient in computations of correlation functions in the presence of domain walls [1] or of very large determinant operators [20,21]. Expectation values of local operators induced by the D-brane are naturally given by an overlap between the boundary state and on-shell Bethe eigenstates.

An example is the D3-D5 domain wall, a codimension-one defect across which the SU(N) symmetry of the SYM is broken to SU(N − 1). We shall consider the case where symmetry breaking is effected by Neumann/Dirichlet boundary conditions imposed on the sundry field components [22]. The D3-D5 defect preserves scaling symmetry, but allows for non-trivial one-point functions with power-law fall-off away from the defect. The expectation value of a local operator O_u in the presence of the domain wall at x_3 = 0 is thus given by the overlap with the bra ⟨D3D5|, which denotes a boundary state in the psu(2, 2|4) spin chain. It can be explicitly constructed in perturbation theory by evaluating Feynman diagrams in the presence of the defect [23,4]. The combinatorial prefactor that depends on the length is just a matter of convention. Moreover, a non-perturbative solution for the overlap was obtained by bootstrapping the scattering theory of magnons off the D5-brane [2,3,5], with no reference to the explicit form of the boundary wavefunction. This was possible because the D3-D5 system preserves integrability and scattering off the D5-brane is completely elastic. By definition, an integrable boundary state is a coherent superposition of magnon pairs with opposite momenta [24,6], and has non-zero projections only on parity-invariant Bethe eigenstates. We will assume that parity uniformly flips all the rapidities u_{aj} → −u_{aj}, as it does in the D3-D5 case, but more generally it can also permute nodes of the Dynkin diagram, as exemplified in [20]. The Bethe roots in a parity-even state are either paired, {u_{aj}, −u_{aj}}, j = 1, ..., K_a/2, or lie exactly at zero. We denote levels with zero roots by a_α, α = 1, ..., ν.

Another way to characterize admissible Bethe states is to require their Q-functions to have definite parity, even or odd. The reduced Baxter functions, with zero roots removed, are manifestly even.
The main ingredient of the overlap formulae is the Gaudin matrix, the Jacobian of the transformation from rapidities to phases in the Bethe equations. Parity is a linear Z_2 automorphism on the space of Bethe roots and therefore defines the superdeterminant (2.6). For any integrable boundary state known so far, the overlap with admissible Bethe states is expressed through the Gaudin superdeterminant decorated with reduced Baxter functions evaluated at specific points (2.7), or by a linear combination of such terms. In view of its universal significance, we introduce for this formula the graphic notation shown in fig. 1.
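In a minimal sketch consistent with the definitions above (writing the Bethe equations as $e^{i\phi_{aj}}=1$ for some choice of phases; the normalization is ours), the Gaudin matrix is
$$G_{aj,bk}=\frac{\partial\phi_{aj}}{\partial u_{bk}}\,,$$
and the parity map $u_{aj}\mapsto -u_{aj}$ splits the space of roots into even and odd subspaces, with respect to which the superdeterminant of $G$ is taken.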
Diagonalizing the Z_2 symmetry brings the Gaudin matrix into a block-diagonal form [8], and its superdeterminant can be expressed as a ratio of two ordinary determinants (2.8), sdet G = det G^+ / det G^-, where the components G^+ / G^- are (K/2 + ν) × (K/2 + ν) and K/2 × K/2 matrices, respectively, with the matrix elements given in (2.10). For q_a = 0 or M_{ab} = 0, 1/q_α or 1/M_{ab} should be set to zero. The overlaps of the D3-D5 boundary state, for example, are given by (2.11).¹ This formula corresponds to the Bethe equations in the alternating grading of the psu(2, 2|4) Dynkin diagram and is illustrated in fig. 2.

¹ This is the weak-coupling limit of a more general asymptotic formula [3,5]. It is important to keep in mind that the roots denoted by y^(1,2)_j in [3,5] can go to zero or to infinity at weak coupling. Large roots land on the 3rd and 5th nodes, while small roots appear on nodes 1 and 7.

It would be interesting to compare this formula, derived with bootstrap techniques, with direct field-theory computations, but the latter are presented in different gradings [12,4], making a direct comparison impossible beyond the simplest su(2) sector. The Bethe equations (at one loop) can be transformed to any grading by a chain of fermionic dualities, which we review below, and one may expect that the overlap formulae make sense in any grading as well. Formulating transformation rules of the overlaps under fermionic duality is the main goal of this paper.
Fermionic Duality
Consider a fragment of the Cartan matrix and of the weight vector around an auxiliary (non-momentum-carrying) fermionic node. The Bethe equations for the fermionic roots are expressed in terms of the Q-functions on the adjacent nodes, where ± denotes a shift of the argument by ±i/2. The fermionic duality is expressed by an equation in which Q is the Baxter polynomial on the middle node and Q̃ is a new, dual Q-function. It is easy to see that the fermionic duality is equivalent to the original Bethe equations, because the left-hand side evaluates to zero on any root of Q(u). The dual polynomial Q̃(u), of degree K_l + K_r − K − 1, absorbs the "unused" roots. Quite obviously, the dual roots satisfy the same set of Bethe equations.
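A standard form of this duality relation, presumably the one intended here (our normalization; shifts denoted $f^{\pm}(u)=f(u\pm\tfrac{i}{2})$), is the QQ-relation
$$Q(u)\,\tilde{Q}(u)\;\propto\;Q_l^{+}(u)\,Q_r^{-}(u)-Q_l^{-}(u)\,Q_r^{+}(u)\,.$$
The right-hand side has degree $K_l+K_r-1$ (the leading terms cancel in the difference), so the dual polynomial $\tilde Q$ indeed has degree $K_l+K_r-K-1$, and evaluating the relation at a root of $Q$ reproduces the fermionic Bethe equation.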
The fermionic roots enter the Bethe equations on the adjacent nodes, and the scattering phases therein have to be re-expressed through the dual roots to close the system. The duality equation suffices to do so. Indeed, setting u = u_{rk} ± i/2 therein and taking the ratio of the resulting identities gives, together with its analogue (2.17), precisely the scattering phases of the Bethe equations to the left and to the right of the fermionic node. In addition to the dualized Q-functions they contain an extra self-scattering term that either cancels or reintroduces interactions among the u_{lj} and u_{rj} roots.

It is easy to see that this extra self-scattering reverses the grading of the two nodes at hand. Imagine the right node were bosonic. Its Bethe equations then contained the ratio Q_r^{++}/Q_r^{--}, exactly inverse to the Q-functions emerging from the duality transformation. The cancellation renders the node fermionic. If the node were fermionic from the beginning, the self-scattering induced by the duality makes the node bosonic, with the correct self-scattering phase. All in all, the Cartan matrix transforms accordingly. The duality equation is slightly different if the fermionic node is momentum-carrying: there, [±q] shifts the argument by ±iq/2 and G(u) = u^L, with q the Dynkin label of the fermionic node (2.20). The same manipulations as above now introduce an extra phase due to the G-function. The additional contribution can be attributed to the momentum phase and does not jeopardize the structure of the Bethe equations in the five cases listed in (2.22). The fermionic duality reflects the non-uniqueness of the Cartan basis in a superalgebra [16] and relates Bethe equations for Cartan matrices of different grading [17]. It has multiple uses in the spin-chain description of the SYM spectrum [25,26,27,28]. The Q-functions generated by the fermionic duality are labelled by two integers from ∅ to 4, Q_{a|ā}, and can be placed at the vertices of a 5 × 5 rectangle (fig. 3) [26]. A particular grading is a path connecting the opposite corners; the nodes where the path turns are fermionic. The duality acts diagonally on each plaquette, relating the Q-functions at the opposing corners. This is not the end of the story: the full set of Q-functions is defined on the Hasse diagram, with the duality equations acting along each square [29]. This algebraic structure underlies the solution of the AdS/CFT spectral problem via the Quantum Spectral Curve [27] and has been extensively studied from different angles [30,31,28,32]. While this extended structure is indispensable at the non-perturbative level, fermionic dualities alone are sufficient to solve the one-loop psu(2, 2|4) spin chain. For the Q-functions that fit on the square, the fermionic duality relations are actually equivalent to the full set of Bethe equations [31].

At the level of the Q-functions no grading is distinguished. One can choose any path on the square or, better said, consider all the Q-functions on the same footing. The overlap formulae are formulated in a completely orthogonal way: they require fixing the grading and finding the Bethe roots. Ideally, we would like to have an invariant formulation where the path (the choice of grading) does not matter at all. This we will not be able to achieve, but at least we will formulate the rules for how to transform overlaps from one grading to another, thus making the overlap formulae, if not invariant under the fermionic duality, then at least covariant.
Determining Transformation Laws
In this section we determine how Gaudin superdeterminants (2.8) transform under fermionic duality transformations.The section consists of two parts.In the first part we focus on the dualization of non-momentum-carrying nodes.
In the second part we then extend our results to momentum-carrying nodes.
Two-Node Example
The simplest example of the fermionic duality occurs for the O − X Dynkin diagram, where the bosonic node is momentum-carrying. The dual diagram is X − X, and the Dynkin labels are the same in both cases. The fermionic duality is expressed by the equation (3.3), in which the reduced Baxter polynomials (2.4) only include paired roots. Indeed, the fermionic roots satisfy their Bethe equation, and so do the roots of the dual polynomial; the unused entries are the dual roots, which obviously satisfy the same Bethe equation. It is easy to see that trading u_{2j} for ũ_{2j} cancels the self-scattering for the u_{1j} and flips the u_1 − u_2 interaction, resulting in the dual Cartan matrix M̃.

We assume that the original roots are fully paired, i.e., {u_{aj}, −u_{aj}} for j = 1, ..., K_a/2, in order for the state to be bosonic. The dual roots then form (K_1 − K_2)/2 − 1 pairs plus a zero root, the latter being the origin of the variable u on the right-hand side of equation (3.3). One advantage of using reduced Baxter polynomials is thus that zero roots are always clearly visible in the duality equations.

Note that although the original operator is bosonic, the dual operator is fermionic. For example, the state with two u_1 roots corresponds to a bosonic operator, while the dual state has the same u_1 roots but one additional ũ_2 root at zero, and the corresponding operator is fermionic. The two operators are related by supersymmetry and belong to the same multiplet, but the notion of highest weight changes with the grading: while the first (bosonic) operator is primary for M, it becomes a descendant in the M̃ grading. Consequently, overlaps with primaries will map to overlaps with descendants under the duality transformations. Let us consider the simplest configuration {{u_1, −u_1}, {}}, corresponding to the bosonic operator above, and its dual {{u_1, −u_1}, {0}}. The Gaudin factors for this collection of roots are given in (3.7), from which the general transformation rule follows. This relation must be a general algebraic fact but was found numerically, and it holds semi-off-shell: the u_1 roots can be arbitrary numbers, while the u_2 roots must be chosen such that the duality equation (3.3) is fulfilled. The Bethe equations for the u_2 roots follow as a consequence.

The overlap formulae will transform covariantly provided that the u_2's only enter through Q_2(0) and D, and only in the combination Q_2(0)D. The dual formula will then contain D/Q_2(0). The factor K_1 takes into account that the original operator becomes a descendant in the dual frame. Factors like that are indeed expected to appear in the overlaps of descendants (see appendix A of [33]).
Three-Node Example
Next, we consider the su(2|2) extension of the above Dynkin diagram, pictorially given by O − X − O, where as before the left node is momentum-carrying. The dual diagram is X − X − X, with corresponding Cartan matrices M and M̃ and with the same Dynkin labels in both cases. Assuming that the roots at the neighboring nodes are fully paired, the fermionic duality is expressed by the corresponding duality equation involving, as before, the reduced Baxter polynomials (2.4). Trading the roots u_{2j} for dual roots ũ_{2j} obviously cancels the self-scattering for u_{1j} and u_{3j} and flips the signs of the interactions, resulting in the Cartan matrix M̃.

Again, we assume that the original roots are fully paired; in particular, the u_2 roots are paired as well. The dual roots then form (K_1 + K_3 − K_2)/2 − 1 pairs plus a zero root. In this case the general transformation rule for the Gaudin superdeterminant (2.8) was found numerically and checked for various examples.
General Case
Let us finally turn to the most generic situation, characterized by a fermionic, non-momentum-carrying node with an arbitrary number of neighbors of arbitrary nature on both sides. Since the duality transformation only acts on nearest neighbors, we can account for this situation by considering a parametric 3 × 3 Cartan matrix which might be part of some bigger Cartan matrix. The nearest neighbors can be either bosonic or fermionic, as parametrized by η_2 and η_3, and might or might not carry momentum depending on whether the Dynkin labels V_l and V_r are non-zero. The variable η_1 parametrizes the interactions between the different families of Bethe roots. Dualizing the middle node maps the above Cartan matrix and Dynkin labels to their duals. If all roots at the neighboring levels are fully paired, the fermionic duality is expressed by an equation involving the reduced Baxter polynomials (2.4) associated with the left and right nodes. Obviously, there is a zero root associated with the middle node in this case, which can be part of either the original set of Bethe roots or the set of dual roots. When considering integrable overlaps one can also face a situation with a single unpaired zero root at one of the neighboring levels. In this case the fermionic duality equation takes the form (3.17), where the upper signs pertain to a zero root at the right node and the lower signs to a zero root at the left node.

Finally, we state how the Gaudin superdeterminant (2.8) transforms under the above transformation. We begin with the case where all roots are paired in the original grading. The dual roots then form (K_l + K_r − K_m)/2 − 1 pairs plus a zero root, and the transformation law reads D̃ = J D (3.18), with a prefactor J whose explicit form was, as before, found numerically and checked for various examples. If the zero root associated with the middle node is part of the original set of roots instead of the dual set, J must be replaced by (−J)^{−1}, which is consistent with the fact that the duality transformation needs to square to the identity.² Finally, we note that the transformation law (3.18) also pertains to the situation with an unpaired zero root among the right or left set of Bethe roots. However, in this case the fermionic duality equation (3.17) evaluated at u = 0 in fact ensures that J = 1, so that effectively the superdeterminant is unchanged, eq. (3.20).
Two-Node Example
Dualizing momentum-carrying nodes can potentially lead to different results.
To address this issue we consider the Dynkin diagram X − O and dualize the left node, which is momentum-carrying in our case. The dual diagram is obviously X − X, with the corresponding Cartan matrices and Dynkin labels.

² Recall that the duality transformation flips the sign of η_1. The minus sign is hence necessary to ensure proper cancellation of pre-factors after the duality is applied twice.
The transformation law of Gaudin superdeterminants in general depends on the root configuration considered. In the original X − O grading the momentum-carrying roots always have to come in pairs, while the roots at the auxiliary level can either all be paired or be paired up to a single unpaired zero root. In what follows we treat these two cases separately.

Fully Paired Roots. In this paragraph we assume that the original roots are fully paired, i.e. {u_{aj}, −u_{aj}} for j = 1, ..., K_a/2. In this case the fermionic duality is expressed by the corresponding duality equation, and for even lengths the dual roots form (L + K_2 − K_1)/2 − 1 pairs plus a zero root.

In this case the general transformation rule for the Gaudin superdeterminant (2.8) was, just as before, found numerically. The combination of Baxter polynomials is obviously the same as in the non-momentum-carrying case.
Zero Root at the Auxiliary Level. The second situation to consider is characterized by a set of fully paired momentum-carrying roots, while the auxiliary roots are paired up to a single zero root, where u_j are the momentum-carrying roots and y_k the auxiliary roots. The fermionic duality equation is in principle universal, in the sense that it does not care about the different classes of root configurations; however, since we chose to work with reduced Baxter polynomials (2.4), the two equations (3.23) and (3.26) look slightly different. Our notation makes it clear that the situation with a zero root among the auxiliary roots needs special care, as this zero inevitably leads to dual roots located at ±i/2. Schematically, the dual configuration in the X − X grading thus contains a pair of roots at ±i/2. Naively, roots at ±i/2 render the BAE as well as the overlap formulae divergent. For this reason we first need to introduce an adequate regularization scheme. To establish such a scheme, we first look at the BAE in the dual X − X grading (3.28). In order to regularize the singular roots, we make the ansatz of [34], where N = K_1 + K_2 is the total number of roots and c_1 and c_2 are constants yet to be determined. Plugging the regularized expressions into the BAE (3.28) and requiring that they hold in the limit ε → 0 yields explicit expressions for the two constants. In addition to the BAE, the Gaudin determinant needs to be regularized as well. However, we note that the regularization prescription (3.29) is not symmetric, and the regularized Gaudin determinant hence no longer factorizes.
We will now describe how to overcome this issue.We begin by considering the full Gaudin determinant and set the singular roots as well as the zero root to their regularized values.The remaining roots stay undetermined for the moment.Expanding the matrix elements in ε yields the following expression: .
Note that we only listed the divergent contributions explicitly while packaging all finite contributions into the expressions Φ i,j .In parts, the latter are polynomials in ε but for our purposes it is sufficient to consider them at ε = 0.By definition, the Φ i,j elements are thus independent of ε.We now perform row and column manipulations in such a way that all off-diagonal ε L poles are canceled.To achieve this, we first add the first and the second column to the last column.Then we add the first and the second row to the last row.After that, the leading divergences sit on the diagonal in the upper left corner and the Gaudin determinant takes the following form: , where G red is the reduced Gaudin matrix , where Laplace expanding the above Gaudin determinant shows that the coefficient of the leading divergence is given by the determinant of the reduced Gaudin matrix (modulo the product of c 1 and c 2 ), i.e.
The key insight to make everything work out nicely is to treat the reduced Gaudin determinant as if it were the Gaudin determinant itself. If all the spectator roots (with respect to regularization) are fully paired, the reduced Gaudin determinant indeed factorizes,
leading to a well-defined regularized Gaudin superdeterminant. Having defined the regularized Gaudin superdeterminant in this way, it is a straightforward exercise to check that its transformation rule exactly mirrors the situation encountered for non-momentum-carrying roots (3.20).
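The factorization for fully paired configurations rests on a block identity: ordering a parity-symmetric root set as (u_1, . . . , u_n, −u_1, . . . , −u_n) gives a Gaudin matrix of the form [[A, B], [B, A]], whose determinant equals det(A + B) · det(A − B). The following numpy sketch checks this numerically for an XXX-type Gaudin matrix; the kernel conventions are illustrative stand-ins (the paper's su(2|2) conventions and normalizations are not reproduced), and the rapidities need not even solve the BAE for the block identity itself to hold.

```python
import numpy as np

def kernel(u, a=1.0):
    """Even kernel K_a(u) = 2a / (u^2 + a^2), a common XXX-type convention."""
    return 2 * a / (u**2 + a**2)

L_site = 12                              # chain length (illustrative)
u = np.array([0.31, 0.77, 1.42])         # one representative per pair
roots = np.concatenate([u, -u])          # fully paired configuration

n = len(roots)
G = np.empty((n, n))
for j in range(n):
    for k in range(n):
        G[j, k] = kernel(roots[j] - roots[k])
    # diagonal: source term minus the sum over all interactions
    G[j, j] += L_site * kernel(roots[j], a=0.5) \
               - sum(kernel(roots[j] - r) for r in roots)

m = len(u)
A, B = G[:m, :m], G[:m, m:]              # G has the block form [[A, B], [B, A]]
assert np.isclose(np.linalg.det(G),
                  np.linalg.det(A + B) * np.linalg.det(A - B))
```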
General Case
We close this section by considering a more generic situation where the momentum-carrying node is embedded into a longer Dynkin diagram, so that it has neighbors on both sides which are either bosonic or fermionic in nature. More precisely, we consider the BAE associated with a parametric Cartan matrix with a fermionic middle node that carries momentum (V ≠ 0). Here, η_2 and η_3 parametrize the neighboring nodes, which can either be bosonic or fermionic, while η_1 parametrizes the interactions between different families of Bethe roots. One might think of this Cartan matrix as being part of a bigger Cartan matrix. Under a duality transformation of the momentum-carrying middle node the above Cartan matrix and Dynkin labels are mapped to their duals, and the fermionic duality is expressed by an equation involving the reduced Baxter polynomials associated with the left and right nodes. The original roots are assumed to be fully paired, i.e. {u_{aj}, −u_{aj}} for j = 1, . . . , K_a/2, so that for even lengths the dual roots form pairs plus a single zero root. An important point to note concerns the total momentum phase of a Bethe state in the above set-up. Since the dual roots inevitably contain an uncompensated zero root, one may wonder about the fate of the zero-momentum condition, which apparently seems to evaluate to −1 after the dualization. However, from (3.38) it follows that this sign is compensated: since the duality transformation interchanges the role of the vacuum and its excitations, the exchange statistics of the spin vacuum is flipped after the transformation. This is reflected by the sign (−1)^{L+1}, which nicely compensates the −1 stemming from the zero root; see [25] for more details.
In the considered case the general transformation rule for the Gaudin superdeterminants (2.8) was again found numerically. The combination of reduced Baxter polynomials is thus the same as in the non-momentum-carrying case. Since the transformation formulae in the momentum-carrying and the non-momentum-carrying case are very similar, we strongly suspect the equation to hold for root configurations containing a neighboring unpaired zero root once proper regularization has been performed. However, we have only checked the last equation for the above example.
Dualizing Overlap Formulae
In this section we apply the above insights to study how overlap formulae transform under a change of grading of the underlying algebra. We begin by working through an su(2|2) example in great detail to illustrate the procedure. Finally, we leverage the newly gained insights to give a concise graphical procedure for transforming overlap formulae between any two gradings.
Dualizing su(2|2) Overlaps
We begin by focusing on the su(2|2) case and consider the X − O − X grading as well as the O − X − O grading, where the left node is momentum-carrying in both cases. Both gradings are connected by fermionic duality transformations, i.e. going from X − O − X to O − X − O is achieved by first dualizing the third node and then dualizing the second node.
In the X − O − X grading the overlap formula for valence bond states slightly extends a result reported in [4]. Here, u_1, u_2, u_3 are the three families of roots and Q_i are the associated reduced Baxter polynomials, i.e. zero roots simply contribute a factor of 1. There are two root configurations that need to be considered: (1.1) the numbers of roots K_1, K_2 and K_3 are all even; (1.2) K_1 and K_3 are even while K_2 is odd (zero root at the second level). We begin by dualizing the third node of the Dynkin diagram X − O − X, which results in X − X − X, with correspondingly related Cartan matrices. The Dynkin labels are the same in both cases, q = q̃ = (1, 0, 0). Since we are using reduced Baxter polynomials (2.4), the duality equation has to be phrased in a slightly different fashion depending on the considered root configuration. The number of dual roots ũ_3 is therefore odd for an even number of roots K_2 (1.1) and even for an odd number of roots K_2 (1.2). For the transformation law of the Gaudin superdeterminants we find two formulae, in which we list the zero roots explicitly for maximal clarity. Obviously, both formulae only differ by a sign. The pre-factor in the lower equation is actually equal to 1, as explained below equation (3.18), but we prefer to keep it for later convenience.
The second step consists of dualizing the second node of the X − X − X Dynkin diagram, so that the final diagram reads O − X − O, with the corresponding Cartan matrix, while the Dynkin labels remain unchanged. Again, there are two different root configurations that we need to consider: (2.1) K_1 and K_2 are even while K_3 is odd; (2.2) K_1 and K_3 are even while K_2 is odd.
The Bethe roots are dualized with the help of the corresponding duality equations, which also determine the number of dual roots. For the transformation law of the Gaudin superdeterminants we find two formulae in which, once again, we list the zero roots explicitly for maximal clarity.
Combining the results (4.6) and (4.10) yields the transformation laws (4.11). In order to see that both formulae actually agree, we evaluate the upper equation of (4.8) at u = 0; using the resulting relation, the upper transformation law in equation (4.11) can be rewritten, proving that both transformation laws in equation (4.11) are in fact equivalent. For both root configurations that are of interest to us we therefore obtain a single transformation law, in which we have dropped the notation highlighting the zero roots. This result is in perfect agreement with (2.11) up to a factor which accounts for the fact that the state is a descendant in the new grading.
Dualizing psu(2, 2|4) Overlaps
Suppose duality on a fermionic node a maps Dynkin diagram M to M̃, and an explicit expression for an overlap is known in the original grading. How will the overlap formula look in the grading M̃? From the previous sections we know that the action of the duality on the Gaudin superdeterminant produces a Jacobian factor. Upon substituting this into the overlap formula, the Q-functions in the pre-factor re-arrange themselves nicely if and only if the original pre-factor contained Q_a(0), and no other instances of Q_a. Then Q_a(0) flips to 1/Q̃_a(0) and additional factors of Q_{a±1}(i/2) appear on the adjacent nodes. Using the graphic notations of fig. 1, the resulting transformation rule is shown in fig. 4 on the left.
When the pre-factor contains 1/Q_a(0), the overlap formula also transforms nicely. The original grading then corresponds to M̃, and this results in the inverse transformation illustrated in fig. 4 on the right. These are the two basic rules for transforming the overlaps. They are obviously quite restrictive: the only fermionic Q-functions allowed are [Q_a(0)]^{±1}. The D3-D5 overlap (2.11) in fig. 2 is consistent with this requirement, but one should remember that duality changes bosonic nodes into fermionic ones, so new restrictions arise. It is quite remarkable that the application of all possible duality transformations does not jeopardize the strict requirement imposed on the fermionic Q-functions. We are going to demonstrate this shortly.
We are going to apply the transformation rules to the overlap formula (2.11), written for the alternating Dynkin diagram in fig. 2. The first example is given in fig. 5, which transforms the overlap to the SO(6)-friendly grading (dubbed in [35] the "Beauty" diagram). The result is shown in fig. 6(a). When restricted to the SO(6) sector (the three middle nodes), the result agrees with the overlap derived in [4], thus establishing a link between the bootstrap approach of [3] and direct field-theory calculations.
Another case considered in [4] is an SU(2|1) subsector of fermionic excitations Ψ_α on top of the vacuum composed of Zs. Comparison to the general formula of [3] requires a chain of duality transformations illustrated in fig. 7. This results in the overlap formula for the grading with a fermionic momentum-carrying node in fig. 6(b). The SU(2|1) subsector corresponds to the central node plus the one immediately to its right. The overlap again agrees with the results of the direct calculation [4].
Finally, we can transform the overlap to the distinguished Dynkin diagram of psu(2, 2|4), called the "Beast" diagram in [35]. The chain of dualities is shown in fig. 8 and results in the overlap illustrated in fig. 6(c). We can again make contact with [4] by considering the gluonic spin-1 su(2) subsector formed by the self-dual components of the field strength. The Bethe ansatz for this spin chain is obtained by restricting to the momentum-carrying bosonic node in fig. 6(c). The overlap with the D3-D5 state is proportional to 1/Q(0)Q(i/2), in agreement with the direct inspection of the spin chain, where the boundary state is the Néel-type SU(2) singlet [4].
Apart from complete agreement with the direct field-theory computations, our findings reveal a remarkable internal consistency of the overlap formula (2.11). The fermionic duality happens to work like a miracle, enabling all elementary moves along the way. This is a very strong consistency condition, and we cannot exclude that duality covariance fixes the overlap formula completely.
Conclusion
Recent results on integrable one-point functions in domain wall versions of N = 4 SYM [5,4] have triggered a need to know how overlap formulae for super spin chains can be translated between different gradings of the underlying super Lie algebra. We recall that one-point functions in the D3-D5 defect set-up are expressed as overlaps between Bethe eigenstates and matrix product states for k ≥ 2, and as overlaps between Bethe eigenstates and valence bond states for k = 1. Invoking bootstrap arguments, reference [5] presented a closed formula for the one-point functions of the D3-D5 set-up for k ≥ 2, with the fields involved characterized by quantum numbers referring to the alternating grading of the underlying psu(2, 2|4) algebra. Earlier, an expression for the tree-level one-point functions of all scalar operators for k ≥ 2 was found in the grading corresponding to the so-called Beauty Dynkin diagram of the Lie algebra [12]. Furthermore, using the k = 1 version of the D3-D5 set-up as a shortcut for going beyond the scalar sector, one-point function formulae for certain fermionic as well as gluonic operators were derived in [4], where it was also shown that for the simplest subset of scalar operators the k = 1 result followed by analytical continuation from k ≥ 2. In order to translate the expression of [5] to different gradings of the super Lie algebra, and in particular to show its compatibility with the other results mentioned above, it is imperative to know how the key component of all overlap formulae, the superdeterminant of the Gaudin matrix, looks in different gradings. By means of consistency and covariance arguments, aided by numerical investigations, we have identified the transformation properties of the superdeterminant of the Gaudin matrix under the fermionic dualities which allow one to pass between the different possible gradings of superalgebras of the gl(M|N) type. Applied to the case of psu(2, 2|4), our transformation rules have allowed us to demonstrate the compatibility of the results of [5] with all earlier results and to express the one-point functions in any grading of the super Lie algebra. The fermionic dualities considered here constitute only a subset of the duality transformations possible for the Bethe equations corresponding to an integrable spin chain based on a super Lie algebra. In general, the possible sets of Bethe equations and QQ-relations are encoded in the Hasse diagram, which for the psu(2, 2|4) spin chain involves 256 nodes [29]. In addition to the fermionic dualities, the Hasse diagram entails two series of bosonic dualities. It would be very interesting if the transformation properties of the superdeterminant of the Gaudin matrix, discovered here, could be generalized to the full Hasse diagram. We plan to address this question in future work [36]. Likewise, it would be interesting to find an analytical derivation of the transformation formulae. One could envision a recursive strategy as in the derivation of formulae for overlaps between Bethe states, as implemented most recently for the general case of gl(M|N) spin chains [37]. Alternatively, one could envision a constructive proof determining the superdeterminant of the Gaudin matrix as the unique quantity transforming covariantly under fermionic duality.
Figure 1: A graphic notation for the pre-factor in the overlap formula.
Figure 2: Dynkin diagram of psu(2, 2|4) in the alternating grading.Numbers in the lower row represent Baxter functions in the overlap formula, according to the conventions of fig. 1.
Figure 3: The Bethe equations are defined on a path connecting the two trivial Q-functions Q_{∅|∅} = 1 = Q_{4|4}. The horizontal direction roughly corresponds to S^5 and the vertical direction to AdS_5 in the dual string picture. The nodes at which the Dynkin diagram makes a turn are fermionic, and those which it passes straight are bosonic. The duality acts by flipping across a plaquette.
Figure 4: Transformation rules for the overlaps.
Figure 5: Duality transformations from the alternating to the Beauty diagram: red to green to black.
Figure 6: Overlap formulae in different gradings, in the graphic notations of fig. 1: (a) for the Beauty diagram; (b) in the grading where a fermionic node is momentum-carrying; (c) in the grading corresponding to the distinguished Dynkin diagram of psu(2, 2|4) (the Beast diagram).
Figure 7: A graphic representation of the transformation discussed in sec. 4.1 from the alternating to the fermionic grading: red to black.
Figure 8: From fermion to Beast: red to green to orange to blue to jade to purple to black.
"Physics"
] |
TeraHertz Exploration and Zooming-in for Astrophysics (THEZA): ESA Voyage 2050 White Paper
This paper presents the ESA Voyage 2050 White Paper for a concept of TeraHertz Exploration and Zooming-in for Astrophysics (THEZA). It addresses the science case and some implementation issues of a space-borne radio interferometric system for ultra-sharp imaging of celestial radio sources at the level of angular resolution down to (sub-)microarcseconds. THEZA focuses on millimetre and sub-millimetre wavelengths (frequencies above ∼300 GHz), but allows for science operations at longer wavelengths too. The THEZA concept science rationale is focused on the physics of spacetime in the vicinity of supermassive black holes as the leading science driver. The main aim of the concept is to facilitate a major leap by providing researchers with orders of magnitude improvements in the resolution and dynamic range in direct imaging studies of the most exotic objects in the Universe, black holes. The concept will open up a sizeable range of hitherto unreachable parameters of observational astrophysics. It unifies two major lines of development of space-borne radio astronomy of the past decades: Space VLBI (Very Long Baseline Interferometry) and mm- and sub-mm astrophysical studies with "single dish" instruments. It also builds upon the recent success of the Earth-based Event Horizon Telescope (EHT), the first-ever direct image of a shadow of the super-massive black hole in the centre of the galaxy M87. As an amalgam of these three major areas of modern observational astrophysics, THEZA aims at facilitating a breakthrough in high-resolution, high image quality studies in the millimetre and sub-millimetre domain of the electromagnetic spectrum.
Keywords: Radio interferometry · VLBI · mm- and sub-mm astronomy · space-borne astrophysics · super-massive black hole

1 THEZA science rationale

Astronomical advances are typically driven by technological advances and the expansion of the parameter space made available for observing the Universe. Imaging has been and still is at the front line of astronomical research. There are two extreme ends one can consider. One extreme frontier is covered by large survey telescopes which chart large areas of the entire sky to make ever-more complete inventories of cosmic sources. Gaia has measured positions of billions of stars, revealing important aspects of Galactic structures. Euclid will map two billion galaxies to understand the structure of the Universe and the nature of dark energy. On the ground, telescopes like the Square Kilometre Array (SKA) promise to map billions of radio sources, including all powerful radio galaxies back to the beginning of the Universe.
However, the other extreme end of the astronomical parameter space is higher resolution, high quality imaging. Rather than understanding the entire Universe at once, individual objects are studied with ever sharper vision. In optics, the Hubble Space Telescope has certainly changed our view of the Universe and made a big impact not only on science as such, but also on its perception by the general public.
Both approaches are necessary. After all, we cannot understand the large scales of the Universe if we do not understand the objects that populate it, and we cannot understand cosmic processes if we only look at them with blurred vision.
Here we argue that the time is ripe for a major leap forward in astronomical resolution and image quality, providing us with orders of magnitude improvements in both areas and a stunning view of the most exotic objects in the Universe: black holes.
One of the major challenges of fundamental physics in the coming decades will be understanding the nature of spacetime and gravity, and black holes are at the centre of these challenges. Spacetime provides the underlying theatre within which the entire drama of our Universe unfolds. Gravity and the geometry of space are in principle well-described by the theory of General Relativity (GR). Yet this theory is still one of the biggest mysteries in theoretical physics. The presence of dark energy in the cosmos tells us that our understanding of spacetime is not complete and may require quantum corrections, which can even affect the largest scales. Similarly, the notion of Hawking radiation suggests that quantum theory and classical black holes seem to be incompatible. However, after many decades of research the unification of GR and quantum theory is still a major problem and perhaps experiments may now need to lead the way.
Fortunately, we are now entering an era when experimental tests of gravity, even under the most extreme conditions, are becoming possible. The nature of dark energy is targeted by large-scale surveys, as mentioned before. The Square Kilometre Array (SKA) will have the ability to detect and measure new pulsar systems to significantly improve on existing tests of GR. New X-ray missions like Athena will allow us to perform spectroscopic measurements of hot gas orbiting black holes, and gravitational wave experiments like LISA or the Einstein Telescope will provide detailed measurements of the dynamical nature of spacetime. Hence, one could claim that the past century was the century of particle physics and astrophysics, while this century promises to be the century of experimental spacetime physics, the ultimate synthesis of the two.
However, of all the techniques, there is only one that allows one to make actual images of black holes and the astrophysical processes surrounding them: interferometry. Optical interferometry with ESO's Gravity experiment has imaged the motion of gas around the Super-massive Black Hole (SMBH) in the centre of the Milky Way (Gravity Collaboration et al., 2018a,b), and Very Long Baseline Interferometry (VLBI) in the radio domain has imaged the innermost structures of jets and recently even the black hole shadow in the centre of the radio galaxy M87 (Event Horizon Telescope Collaboration et al., 2019a,b,c,d,e). These new results are quantum leaps, and they only mark the beginning. By going into space, the imaging resolution and fidelity can be enhanced enormously, and we will be able to see black holes and their immediate environment in unprecedented detail and quality (Fig. 1). Any physical and astrophysical theory of black holes and their activity will need to be able to predict in detail how they look.
In this paper we discuss the promise of space-based VLBI for the study of black holes within the concept dubbed TeraHertz Exploration and Zooming-in for Astrophysics (THEZA). Of course, once one has access to the extreme resolution provided by Space VLBI, science cases other than SMBH imaging also become possible with the same technology; we briefly touch upon these as well.
2 Science and technology heritage

2.1 Space VLBI

For more than half a century, since its first demonstrations in 1967, Very Long Baseline Interferometry (VLBI) has held the record in sharpness of studying astronomical phenomena. This record reaches angular resolutions of milliarcseconds and sub-milliarcseconds at centimetre and millimetre wavelengths owing to the ultimately long baselines, comparable to the Earth's diameter. However, the hard limit of VLBI angular resolution defined by the size of the Earth does not allow us to address astrophysical phenomena requiring even sharper sight. Not surprisingly, soon after the demonstration of first VLBI fringes with global Earth-based systems, a push for baselines longer than the Earth's diameter materialised in a number of design studies of Space VLBI (SVLBI) systems.

Fig. 1 Zooming into the central area of a galaxy containing a SMBH. Artwork by Beabudai Design; the background is a Cen A free stock image combined with a simulated image of a central area of a galaxy containing a supermassive black hole (Mościbrodzka et al., 2014).
Over the past decades, several dozen SVLBI concepts have been presented, with widely varying depth of development and level of detail (Gurvits, 2018, 2020b). They paved the way for the first SVLBI demonstration in the middle of the 1980s by the TDRSS Orbital VLBI experiment (Levy et al., 1986), and two dedicated SVLBI missions, VSOP/HALCA launched in 1997 and RadioAstron launched in 2011 (Kardashev et al., 2013). First VLBI "fringes" on baselines longer than the Earth's diameter were obtained with NASA's geostationary spacecraft of the Tracking and Data Relay Satellite System (TDRSS, shown in the left panel of Fig. 2) in 1986 (Levy et al., 1986). This was a very efficient example of ad hoc use of existing orbiting hardware not originally designed for conducting SVLBI observations. The main outcome of several TDRSS observing campaigns was two-fold. First, the very concept of getting a coherent interferometric response (the so-called interferometric "fringes") on baselines longer than the Earth's diameter was proven experimentally. Second, observations of two dozen of the strongest Active Galactic Nuclei (AGN) at 2.3 GHz in 1986-87 (Linfield et al., 1989), and in a dual-frequency mode at 2.3 GHz and 15 GHz in 1988 (Linfield et al., 1990), provided indications that at least some extragalactic sources of continuum radio emission were more compact, and therefore brighter, than expected. These milestones supported growing momentum for the first generation of dedicated Space VLBI missions.
VSOP/HALCA (Fig. 2, middle) operated in orbit in the period 1997-2003. Its science heritage is summarised in Hirabayashi et al. (2000a,b) and Hagiwara et al. (2009, Parts 3-4). The RadioAstron mission (Fig. 2) was operational in the period 2011-2019. Its science outcome is still being worked out; some preliminary summaries are presented by Kardashev and Kovalev (2017) and in the Special Issue of Advances in Space Research (Gurvits, 2020a). A very brief list of major legacy achievements of the VSOP and RadioAstron missions with associated references is given in Gurvits (2018). The major qualitative result of the first-generation SVLBI missions VSOP and RadioAstron can be expressed as follows: the sub-milliarcsecond angular structure of continuum (AGN, pulsars) and spectral line (maser lines of hydroxyl and water molecules) sources is consistent with the current general understanding of the astrophysics of these sources, but various enigmatic details require further in-depth studies with similar or sharper resolution and higher sensitivity.
The first-generation SVLBI era came to completion in 2019. It provided a solid proof of concept of radio interferometers exceeding the size of the Earth and serves as a stepping stone toward the future advanced SVLBI systems presented in this White Paper.
2.2 Millimetre and sub-millimetre space-borne radio astronomy

Over the last three decades, developments for ground and space-borne millimetre and sub-millimetre heterodyne detection instruments have generated important synergies. The SWAS (Melnick et al., 2000), Odin (Frisk et al., 2003) and NASA Earth Observing System Microwave Limb Sounder (Waters et al., 2006) orbital missions were all equipped with room-temperature or cooled Schottky receivers. One of the most important steps in advancing modern heterodyne instruments was the development of cryogenic systems with superconducting (SIS) mixer elements. Such technology was essential not only for achieving quantum-limited noise, but also because it could work with much lower local oscillator (LO) power, a condition permitting very compact and versatile solid-state LO generators. The complex Herschel HIFI instrument adopted this technology for the on-board instrumentation, enhanced by stringent quality assurance schemes (de Graauw et al., 2010). The success of HIFI showed that SIS and hot-electron bolometer (HEB) technology is very well suited for space application.
This technology path found its way into advanced mm/sub-mm systems for Earth-based facilities (e.g., the ALMA and IRAM NOEMA interferometers). The recent study for Origins (one of the four NASA flagship candidates for the next Decadal survey) included a thorough look at the heterodyne needs for the HERO (HEterodyne Receiver for Origins) on-board instrument (Wiedner et al., 2018). This study showed that achieving a Technology Readiness Level (TRL) sufficient for implementation within the timeframe of the 2020 US Decadal survey is possible and no showstoppers are foreseen. This also holds for direct detectors, as studied for SPICA and Origins, and in a different incarnation for ESA's Cosmic Vision L2 mission Athena (the X-IFU instrument). Transition Edge Sensor (TES) and Kinetic Inductance Detector (KID) devices provide ultimate sensitivity and, if needed, high multiplexing capabilities. The technology required for these detectors has been developed, in particular, at SRON in the Netherlands. Finally, very powerful broadband digital signal processing is now available, enabling data processing with extreme efficiency, while the power consumption of these digital operations is expected to decrease within the coming years to sufficiently low levels for space application. Other important improvements in cryogenic heterodyne instruments have taken place and are still moving forward. Instantaneous bandwidth is now orders of magnitude larger than 20 years ago, also due to the greatly improved performance of cryogenic IF amplifiers.
The Herschel telescope was passively cooled to about 80 K, while the instruments were kept cold through the boiling of helium gas in a large cryostat. With the development of cryocoolers for Planck, the cryocoolers of Hitomi and the current developments for Athena and SPICA, we expect that any mission needing low temperatures can rely on closed-cycle cryocooling rather than liquid-helium cryostats. Planck also showed the value of the V-groove system, currently employed in Ariel and studied for the SPICA satellite. All in all, the available heritage of mm/sub-mm space missions and their instrumentation, together with ongoing developments, assures beyond doubt that the technical specifications for THEZA can be met within the Voyage 2050 time-frame. Fig. 3 presents artist's impressions of four completed missions (SWAS, Odin, Herschel and Planck) as well as two concepts (SPICA and Millimetron) which can be seen as stepping stones toward a mission addressing the THEZA concept.
2.3 Ground-based radio astronomy arrays: EVN, Global VLBI, GMVA, EHT

With a collecting area comparable to that of the first phase of the mid-frequency Square Kilometre Array (SKA1-MID), today the European VLBI Network (EVN) is a joint facility of independent European, African, Asian and North American radio astronomy institutes, with thirty-two radio telescopes throughout the world operating in a range of wavelengths from 92 cm to 0.7 cm. The EVN offers (sub-)milliarcsecond angular resolution with µJy sensitivity in the bands with best performance (21/18 cm and 6/5 cm). It is the most sensitive regular VLBI array adopting open skies policies. Since the start of its operations in the early 1980s, the EVN has been undergoing continuous development to match scientific requirements, the most relevant developments being: the increased recording rate at each telescope, from the initial 4 Mbps to 2 Gbps at present; the increased number of member observatories, which has brought the number of antennas from 5 to the current 32, with a remarkable jump in image fidelity; and real-time operations with a subset of the array through e-VLBI, whereby the recorded signal is transferred to the EVN correlator at JIVE (the Joint Institute for VLBI European Research Infrastructure Consortium) through fibre-link connections, a feature which has made the e-EVN array an SKA pathfinder.
Owing to the broad frequency range and high sensitivity offered to the community, the EVN delivers outstanding science in almost any astrophysical area: from the distribution of dark matter probed through gravitational lensing, to the origin of relativistic jets in AGN and their evolution with cosmic time, to stellar evolution from the pre-main-sequence stage to the post-asymptotic-giant-branch stage, and to the successful detection of exoplanets. More recently, it has become clear that VLBI can uniquely contribute to what is now referred to as the science of transient phenomena, a broad range of very energetic events covering fast radio bursts, gamma-ray bursts, tidal disruption events, and the follow-up of electromagnetic counterparts to gravitational wave events, thanks to the combination of superb angular resolution, sensitivity, and microarcsecond-precision localisation.
While the EVN as such has no frequency overlap with the THEZA concept presented here, the vast area of complementary science makes the EVN highly synergistic with THEZA science. Moreover, the most important synergy between the EVN and THEZA is in human capital: most members of the THEZA Team are active EVN functionaries and users. In short, the EVN is a major "breeding ground" for the ideas, technologies and know-how behind the THEZA initiative.
The already superb angular resolution of the EVN can be sharpened by observing at shorter wavelengths, as has been achieved by the Global Millimetre VLBI Array (GMVA), another world-spanning array, consisting of sixteen antennas operating at 3.5 mm (the 85-95 GHz band). The array may be further expanded with the addition of the phased ALMA and the Korean VLBI Network. Further potential enlargement of the GMVA will involve the Greenland Telescope and the Mexican Large Millimetre Telescope. Exploration of the inner jet regions of nearby AGN and blazars is one of the main scientific drivers for 3 mm GMVA observations. The experience acquired with the GMVA, in operation since the early 2000s, for which observations, calibration, and data analysis pose several challenges due to weather dependence and the heterogeneity of antennas in the array, has paved the way for the Event Horizon Telescope (EHT), which has pushed the angular resolution and observing frequencies to the limits of the current capabilities of ground-based VLBI. Imaging the black hole shadow in the nearby galaxy M 87 has been one of the most remarkable achievements of the VLBI technique to date (Event Horizon Telescope Collaboration et al. (2019a) and references therein), and this has been possible owing to 50 years of a worldwide collaborative effort based on scientific, technological, and human capital investment.
The EHT is the ultimate development of Earth-based VLBI in terms of the wavelength and geometry of the array. It serves as a benchmark for the THEZA concept presented here. The next steps in the development of VLBI at frequencies above 100 GHz and into the THz regime are necessitated by astrophysical motivations. They promise transformational science results and require VLBI systems above the Earth's atmosphere. This is the essence of the THEZA concept.
Event horizon in SMBH: physics and cosmology
In 2019, after decades of developmental effort by a global collaboration of scientists, the EHT collaboration presented the first image of a black hole (Event Horizon Telescope Collaboration et al., 2019a). The EHT image, revealing the supermassive black hole (SMBH) in M87, was captured using a global VLBI network operating at a wavelength of 1.3 mm. The image is formed by light emitted near the black hole and lensed towards the photon orbit of the 6.5 billion solar mass black hole at the galaxy's core. The combination of light bending in curved spacetime and absorption by the event horizon leads to a characteristic black hole shadow embedded in a light ring, which was predicted by Falcke et al. (2000) to be observable with mm-wave VLBI. The results constrain the object to be more compact than the photon orbit, at a size ≤ (3/2) R_S, where R_S is the Schwarzschild radius. This is comparable to the compactness currently probed by LIGO/VIRGO with gravitational waves, but on mass scales eight orders of magnitude larger.
The capability to image black holes on event horizon scales enables entirely new tests of General Relativity (GR) near a black hole, e.g., Johannsen and Psaltis (2010a), and opens a direct window onto the astrophysical processes that drive accretion onto a black hole and the formation of relativistic jets (Mościbrodzka et al., 2016). Jets are a major source of energy output and high-energy emission for black holes across all mass scales. Their origin and formation are of fundamental importance to high-energy astrophysics and take place on event horizon scales.
For precise tests of GR and time-domain studies of accretion flows and jet formation, we therefore need sharper angular resolution, higher observing frequencies, and faster and more complete sampling of interferometric baselines.
The angular resolution of ground-based VLBI is approaching fundamental limits. Interferometer baseline lengths are currently limited by the diameter of the Earth, imposing a corresponding resolution limit for ground arrays of ∼22 µas at an observing frequency of 230 GHz. Observations at higher frequencies can improve the angular resolution but become increasingly challenging because of strong atmospheric absorption and rapid phase variations, severely limiting the number of suitable ground sites and the windows of simultaneous good weather at many global locations. The fixed telescope locations also limit the number of Fourier components of the image that can be sampled, and hence the image quality. Going into space would overcome these limitations and open new scientific possibilities for horizon-scale studies of SMBHs.
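The quoted numbers follow from the standard diffraction estimate θ ≈ λ/B. Below is a short back-of-the-envelope sketch; the baseline choices are illustrative, and actual beam sizes depend on the baseline coverage and weighting.

```python
import numpy as np

c = 299_792_458.0                   # speed of light, m/s
R_EARTH = 6.371e6                   # Earth radius, m
RAD_TO_UAS = 180 / np.pi * 3.6e9    # radians -> microarcseconds

def resolution_uas(freq_ghz, baseline_m):
    """Diffraction-limited fringe spacing, theta ~ lambda / B."""
    return c / (freq_ghz * 1e9) / baseline_m * RAD_TO_UAS

print(f"{resolution_uas(230, 2 * R_EARTH):.1f} uas")    # ~21: Earth-diameter limit
print(f"{resolution_uas(230, 25 * R_EARTH):.1f} uas")   # ~1.7: ~25 Earth-radii baseline
print(f"{resolution_uas(690, 25 * R_EARTH):.2f} uas")   # ~0.56: higher frequency
```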
Measuring the shape and size of the shadow and the surrounding lensed photon ring in M87* and Sgr A* provides a null-hypothesis test of GR (Psaltis et al., 2015). Better images allow one to measure spin, test the no-hair theorem, measure the structure of the spacetime, and test possible black hole alternatives, e.g., Mizuno et al. (2018).
The big advantage of SMBH imaging with respect to gravitational wave experiments is that the sources are stable and their parameters can be determined independently with ever better accuracy. Extraction of parameters is achieved by comparing matched image templates from GRMHD simulations (Event Horizon Telescope Collaboration et al., 2019e,f). For example, in Sgr A* the mass and distance are already measured to within ∼1% (Gravity Collaboration et al., 2018a). Therefore the precision of testing GR with Sgr A* is already limited by the fidelity of the observing data and the ability to extract the emission corresponding to the black hole photon orbit and interior shadow. With the current resolution, the sharply delineated lensed photon ring and the more extended lensed emission structures are still blurred together, making detailed tests or spin measurements almost impossible.
Higher angular resolution from Space VLBI will allow a more precise measurement of the shadow size and shape, and increased dynamic range will improve image fidelity. Space VLBI observations allow us to extract the thin, lensed photon ring feature from the more diffuse surrounding emission. Such high-resolution images of the black hole shadow will allow us to constrain the physics of the black hole itself. Fig. 4 shows that the difference between accretion onto a Kerr black hole and a dilaton black hole, which represents a modification of general relativity, becomes apparent in the size and shape of the photon ring at a resolution of 5-10 µas. The resolution of the ground-based EHT is not sufficient to distinguish between these cases based on reconstructed images, but Space VLBI concepts will be able to reach the required resolution (see also Section 4). Another quantity that becomes measurable at this resolution is black hole spin. For a Kerr black hole, the effect of black hole spin on the shadow size is limited to about 4% (Johannsen and Psaltis, 2010b), which is impossible to measure with ground-based VLBI. Fig. 5 shows that using machine learning techniques, the black hole spin starts becoming measurable at 5 µas resolution at 230 GHz (van der Gucht et al., 2020). High-frequency imaging at 690 GHz or higher will therefore allow for stronger constraints on the spin.

Fig. 5 Left: Black hole spin recovery accuracy from GRMHD simulations by a machine-learning classifier network as a function of telescope beam size, where 1 means perfect accuracy and 0.2 means random guesses, as five black hole spin values between −1 (maximal retrograde spin) and 1 (maximal prograde spin) were considered in this analysis. At the EHT resolution at 230 GHz (∼20 µas), the spin recovery accuracy is low. Only at high resolutions of around 5 µas, as could be attainable with Space VLBI, is the classifier network capable of recovering the correct spin value. Right: Size of black hole shadows for three different mass ranges of SMBHs as a function of redshift. We use as examples M87 (6.4 × 10^9 M⊙), OJ287 (2 × 10^10 M⊙), and TON618 (6.6 × 10^10 M⊙), but allow them to be at any distance.
The relatively small mass of Sgr A*, 4.1 × 10^6 M⊙ (Gravity Collaboration et al., 2018a), results in correspondingly short dynamical timescales (of order ten minutes) for the system. The current EHT array lacks sufficient baseline coverage to form images on these timescales. The rapid baseline sampling of an orbiter is necessary to recover the complex structure in an evolving accretion flow. Movies of Sgr A* composed of multiple snapshot images will clarify the nature of coherent orbiting features such as "hotspots" (Gravity Collaboration et al., 2018b) and the origin of the flaring events observed in many wavebands, e.g., Marrone et al. (2008).
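The "order ten minutes" dynamical timescale can be checked from the orbital period at the innermost stable circular orbit (ISCO). A minimal sketch, using the Bardeen-Press-Teukolsky ISCO radius and the Kerr orbital frequency Ω = √M/(r^{3/2} + a√M) in G = c = 1 units; the spin values are illustrative assumptions, not measured quantities.

```python
import numpy as np

G_N, c, M_SUN = 6.674e-11, 2.998e8, 1.989e30

def isco_period_min(mass_msun, spin=0.0):
    """Orbital period at the prograde ISCO of a Kerr black hole, in minutes."""
    t_g = G_N * mass_msun * M_SUN / c**3          # GM/c^3 in seconds
    # Bardeen-Press-Teukolsky ISCO radius in units of M (prograde branch):
    z1 = 1 + (1 - spin**2)**(1/3) * ((1 + spin)**(1/3) + (1 - spin)**(1/3))
    z2 = np.sqrt(3 * spin**2 + z1**2)
    r = 3 + z2 - np.sqrt((3 - z1) * (3 + z1 + 2 * z2))
    return 2 * np.pi * (r**1.5 + spin) * t_g / 60

print(f"{isco_period_min(4.1e6, 0.0):.0f} min")   # ~31: non-spinning Sgr A*
print(f"{isco_period_min(4.1e6, 0.9):.0f} min")   # ~9: rapid prograde spin
```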
Magnetic fields play an important role in accretion and jet formation. The magneto-rotational instability (MRI; Balbus and Hawley (1998)) in the accretion disc is thought to transport angular momentum and drive accretion onto the central black hole. Magnetic fields can also cause instabilities and flaring on horizon scales (Tchekhovskoy et al., 2011). Polarimetric imaging of polarised synchrotron radiation with Space VLBI can reveal the structure and dynamics of magnetic fields near the horizon. It will allow us to probe the degree of ordering, the orientation, and the strength of the magnetic field through Faraday rotation studies. Power spectral analysis will provide information on the turbulent accretion flow on very fine spatial scales, for the first time observationally testing our understanding of the MRI and angular momentum transport in the inner part of the accretion disc (Balbus and Hawley, 1998).
Finer angular resolution also provides access to additional targets with spatially resolved black hole shadows. At ∼5 µas angular resolution, the number of known nearby SMBHs for which the black hole shadow can be resolved will increase from two (Sgr A* and M87*) to six (with the addition of M85, Cen A, M104 and IC1459), allowing more robust tests. At ∼2 µas, IC4296 and M81 also become accessible, and, more importantly, so do large SMBHs at cosmological distances, e.g. objects like OJ287 at z = 0.3, or SMBH monsters such as TON618 at all redshifts.
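For orientation, the apparent shadow diameter scales as θ ≈ 2√27 GM/(c²D) in the Schwarzschild case (spin changes this only at the few-percent level, as noted above). A minimal sketch, using commonly quoted masses and distances as assumed inputs:

```python
import numpy as np

G_N, c, M_SUN, PC = 6.674e-11, 2.998e8, 1.989e30, 3.086e16   # SI units
RAD_TO_UAS = 180 / np.pi * 3.6e9

def shadow_uas(mass_msun, dist_pc):
    """Schwarzschild shadow diameter, theta ~ 2*sqrt(27)*G*M/(c^2*D)."""
    r_g = G_N * mass_msun * M_SUN / c**2
    return 2 * np.sqrt(27) * r_g / (dist_pc * PC) * RAD_TO_UAS

print(f"{shadow_uas(4.1e6, 8.1e3):.0f} uas")    # Sgr A* at ~8.1 kpc: ~52
print(f"{shadow_uas(6.5e9, 16.8e6):.0f} uas")   # M87* at ~16.8 Mpc: ~40
print(f"{shadow_uas(1e9, 100e6):.1f} uas")      # a 10^9 Msun SMBH at 100 Mpc: ~1
```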
So, an order of magnitude increase in resolution will provide at least an order of magnitude more sources for black hole shadow tests and access to the jet launching regions of hundreds to thousands of powerful AGN.
Physics of inner jets in AGN
The processes that govern the formation, acceleration, and collimation of powerful relativistic jets in active galactic nuclei (AGN) and X-ray binaries are a half-century-old mystery in black hole physics. Without an instrument capable of resolving the accretion flow and the eventual ejection of plasma in the immediate vicinity of the central black hole, most of the recent advances in our understanding of jet formation have resulted from general relativistic magnetohydrodynamic (GRMHD) simulations, e.g., McKinney and Blandford (2009); Tchekhovskoy et al. (2011); Liska et al. (2018). These simulations show that relativistic jets can be powered by magnetic extraction of the rotational energy of either the central black hole itself, as originally proposed by Blandford and Znajek (1977), or the accretion disc (Blandford and Payne, 1982). However, we still do not understand the details of this process and cannot answer such basic questions as how the properties of the accretion flow and black hole are connected to jet formation, and why only a fraction of the actively accreting supermassive black holes produces powerful jets in the first place.

Fig. 7 Jet model (Fromm et al., 2016, 2019); the southwestern part is partly obscured by a dusty torus. The model is based on NGC1052 but can be used as a generic model for AGN jets. It was scaled to 8 Jy here to represent a typical bright AGN. Middle: simulated ground-based VLBA observation of this model. Right: simulated Space VLBI observation of this model with the Event Horizon Imager (EHI, see Sec. 4).
In these magnetically driven scenarios, the jet is initially triggered by the magneto-centrifugal force, with further acceleration and collimation produced by magnetic pressure gradients and tension forces. In these regions the jet is expected to be characterised by a parabolic collimation profile and a gradual transition from a predominantly poloidal to a helical or toroidal magnetic field configuration. While the jet launching takes place in the innermost few Schwarzschild radii, it appears that the collimation and acceleration extend up to ∼10^{5±1} Schwarzschild radii from the black hole (Asada and Nakamura, 2012; Homan et al., 2015; Kovalev et al., 2020), with the bulk of the acceleration taking place within the first ∼10^3 Schwarzschild radii (Mertens et al., 2016). To test jet formation models with actual observations it is necessary to probe linear scales smaller than this. With the exception of M 87, the angular resolution required to resolve these structures is of the order of ten microarcseconds or better, and observations at short mm and sub-mm wavelengths are required to see through the self-absorbed synchrotron-emitting plasma at the jet base. This calls for Space VLBI with orbiting antennas operating at mm and sub-mm wavelengths.
Space VLBI can provide the necessary angular resolution to study jet formation, collimation, and acceleration in other nearby AGN by directly resolving the sites of these physical processes (Meier, 2009). This would allow us to study what determines the jet power, and how this is related to the black hole spin, the rate of accretion, and the disc magnetisation. In particular, polarisation observations using Space VLBI are fundamental for reconstructing the three-dimensional magnetic field structure of the jet as near as possible to the black hole, thus helping us to understand jet formation, the conversion of magnetic energy into the kinetic energy of the flow, and the dissipation of this energy. The likely co-existence of relativistic, projection and other effects makes it challenging to reconstruct the intrinsic orientation and strength of the magnetic fields along the jet, where the very nature of these fields is still highly debated; see Boccardi et al. (2017) and references therein. In particular, it is unclear whether the observed polarisation is due to compression of a random ambient magnetic field in a shock, or to the presence of a large-scale, ordered field permeating the plasma flow. Space VLBI polarisation imaging at millimetre wavelengths, both along and transverse to the jet direction, of a significant number of jets will enable us to reduce many of these uncertainties by distinguishing between the different possible magnetic field configurations proposed, e.g., Gómez et al. (2016); Marscher et al. (2008, 2010). Furthermore, multi-frequency Space VLBI observations would allow the measurement of a spatially resolved polarisation spectrum near the black hole, which can be used to probe the line-of-sight component of the magnetic field and the low-energy end of the electron energy distribution, e.g., Homan et al. (2009). Both of these are difficult to constrain in other ways.
To illustrate the jet imaging possibilities with Space VLBI, we consider the jet model of Fromm et al. (2016, 2018, 2019). VLBI observations of radio galaxies reveal highly collimated jets surrounded by obscuring dusty tori. Detailed studies of the jet structure reveal several regions of enhanced emission. These regions could be interpreted as travelling shock waves compressing the underlying jet flow, or as stationary recollimation shocks. Recollimation shocks are formed due to the mismatch between the pressure in the jet and in the surrounding medium. They cause a distinctive radiative signature in the form of an edge-brightened flow converging into a central bright region, as seen between the two brightest spots in the model image in Fig. 7. The recollimation shock profile cannot be resolved by the Earth-based VLBA, but a simulated observation with the two-satellite Event Horizon Imager Space VLBI concept (see Section 4 and Martin-Neira et al. (2017); Kudriashov et al. (2018); Roelofs et al. (2019) for details) shows that these features can be resolved with Space VLBI.
Combining Space VLBI with multi-wavelength observations that recover the complete spectral energy distribution (SED) from radio to gamma-rays will shed light on the mechanisms of energy dissipation in the innermost part of the jet, which are responsible for the spectacular flaring observed in blazars, e.g., Marscher et al. (2008, 2010). Correlated monitoring campaigns will help to localise emission features that fall outside the radio regime, as we see structural changes related to flares at other wavelengths.
Binary AGN -Gravitational Wave precursors
The early growth of massive black holes is believed to go through intense phases of accretion and mergers, and to be tightly coupled to the evolution of their host galaxies (Hopkins et al., 2008; Kormendy and Ho, 2013). Since mergers drive both gas and the black holes to the nucleus, this should often lead to pairs of active galactic nuclei (AGN). On kpc scales these can be easily resolved from the radio to the X-rays, but well-established dual AGN are still rare (Komossa and Zensus, 2016). Direct imaging of such dual AGN on parsec to tens-of-parsec scales requires milliarcsecond angular resolution and therefore can only be done in the radio, using the very long baseline interferometry (VLBI) technique. This means that a multi-band approach to confirm candidates is becoming problematic. The best-established case, with a projected separation of 7 pc, is 0402+379 (Rodriguez et al., 2006).
Below about 10 parsec separation the dual black hole system becomes gravitationally bound, and we refer to these systems as massive black hole binaries (MBHB), e.g., Bogdanović (2015) and references therein. Over 100 MBHB candidates have been identified in optical quasar variability surveys (Graham et al., 2015a; Charisi et al., 2016; Liu et al., 2019), and orders of magnitude more are expected to be found in the first 5 years of operations of the much deeper all-sky time-domain survey to be conducted by the Vera Rubin Observatory in the framework of the Legacy Survey of Space and Time (LSST). It is estimated that as much as 1% of the AGN below z ∼ 0.6 could reside in massive binaries (Kelley et al., 2019). This regime is particularly important because the processes that lead to the hardening of the binary are not well understood. The energy loss in mergers of black holes is initially dominated by dynamical friction against stars and dark matter, and subsequently by stellar scattering, but below about a few parsecs this becomes inefficient. In this sub-parsec regime, gas may play a role in promoting binary inspiral.
Somewhere between 100-1000 gravitational radii the dominant energy loss becomes gravitational wave radiation, and MBHBs become visible to LISA.
Revealing the rates at which the various processes work at intermediate scales is possible in principle, given a large sample of MBHBs, via a measurement of the relative abundances of MBHBs over a range of separations (Haiman et al., 2009).
The possibility of directly resolving MBHBs with mm-VLBI imaging was first discussed by D'Orazio and Loeb (2018). They focused on MBHBs that have orbital periods of less than 10 years and are resolvable with ground-only baselines. They predict that 100 such systems might be detectable down to a 1 mJy limit within z < 0.5 (their selection criteria for the tightest orbits and low Eddington accretion rates bias towards the low-luminosity population). Studying the few-hundred to few-thousand gravitational radii regime for the most massive AGN (∼10% of which are radio-loud) at very high redshifts becomes feasible from space; estimates for the fraction of MBHBs in the AGN population start from about 5%.

Fig. 8 Left: simulation (Tang et al., 2017) showing that tidal forces truncate mini discs in MBHBs, allowing the search for electromagnetic signatures of the binary. Right: apparent angular size of a 100-1000 gravitational radii separation, where GW radiation gradually becomes the dominant process in hardening the binary, for various SMBH masses in the local Universe. Binaries are expected to be long-lived (spending ∼100,000 years) in this regime (Haiman et al., 2009) and can be directly probed by mm-VLBI for masses of 10^8 − 10^10 M⊙. The most massive sub-pc separation binaries, as well as jet formation in LISA-detected SMBH mergers, will be observable at cosmological distances by THEZA (see also Fig. 5).
The caveat is that space mm-VLBI observations require both black holes to be active, which is quite rare in SMBH pairs with kpc separations. There are good reasons to believe this is not the case for sub-pc MBHBs. This is because the most common MBHBs are likely formed in minor mergers involving unequal-mass black holes; below about 0.1 pc the binary is embedded in an accretion disc and gas interaction becomes important (Armitage and Natarajan, 2002). Hydrodynamic simulations indicate that in unequal-mass binaries the secondary gets most of the accretion and starves the primary. This will likely make the secondary super-Eddington and the primary sub-Eddington (aka the "hard state"); both of these Ṁ regimes are associated with jets and radio emission.
Depending on the configuration, a space mm-VLBI array operating at 230 GHz would have an angular resolution from 10 µas (baselines of a few Earth radii) to 1 µas (∼25 Earth radii), allowing direct imaging of a number of candidates. For reference, the diameter of the estimated orbit of the z = 0.3 periodic binary candidate quasar PG 1302−102 extends over ≈10 µas (Graham et al., 2015b).
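The ≈10 µas figure can be reproduced with a small-angle estimate θ = s·r_g/D for an orbit of s gravitational radii; the mass and angular-diameter distance below are hypothetical round numbers of the order inferred for PG 1302−102, not measured values.

```python
import numpy as np

G_N, c, M_SUN, PC = 6.674e-11, 2.998e8, 1.989e30, 3.086e16
RAD_TO_UAS = 180 / np.pi * 3.6e9

def separation_uas(mass_msun, sep_rg, dist_pc):
    """Apparent size of an orbit of sep_rg gravitational radii."""
    r_g = G_N * mass_msun * M_SUN / c**2
    return sep_rg * r_g / (dist_pc * PC) * RAD_TO_UAS

# A 10^9 Msun binary with a 1000 r_g orbit at ~960 Mpc (roughly z ~ 0.3):
print(f"{separation_uas(1e9, 1000, 960e6):.0f} uas")   # ~10
```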
Time- and X-ray-domain synergies
Radio transients are both the sites and the signatures of the most extreme phenomena in our Universe: e.g. exploding stars, compact object mergers, black holes and ultra-relativistic flows. Essentially all explosive events in astrophysics are associated with incoherent synchrotron emission, resulting from ejections at velocities in excess of the local sound speed that compress ambient magnetic fields and accelerate particles. These events range from relatively low-luminosity flares from stars to the most powerful events in the Universe, associated with gamma-ray bursts and relativistic jets from supermassive black holes in active galactic nuclei. Crucially, radio observations can act as a calorimeter for the kinetic feedback, probe the circumburst environment, and provide localisation and resolution unachievable at other wavelengths. Follow-up radio observations of high-energy astrophysical transients have a rich history of important discoveries, including the first galactic superluminal source (Mirabel and Rodríguez, 1994), the beamed nature of Gamma-Ray Bursts (GRBs) and their association with unusual supernovae (Kulkarni et al., 1998), and the association of highly relativistic jet-like flows with the tidal disruption and accretion of a star by a supermassive black hole (Zauderer et al., 2011). Most recently, the LIGO-Virgo binary neutron star merger GW170817 was associated with a relativistic jet and radio afterglow (Mooley et al., 2018; Ghirlanda et al., 2019). Fig. 9 illustrates the luminosity-timescale space for all radio transients, including coherent sources (see below).
A prominent example of transient radio emission is accretion onto black holes, in which one of the most relativistic and energetic processes in the Universe occurs on a range of spatial and time scales extending over more than seven orders of magnitude. On the smallest scale is accretion onto stellar-mass black holes in binary systems, in which we can track the full evolution of the radio jet and its connection to the varying accretion flow on humanly accessible timescales. Since these systems are typically at the same distance as Sgr A* (the closest known is about a factor of 8 closer), but are 100,000 times less massive, we will never be able to image them on scales comparable to their event horizons. However, we are able to probe fundamental regions of the inner relativistic jet and, given the unique empirical coupling with the accretion flow established in such systems, probe the physics of time-variable jet formation in a way not really possible for AGN. Fig. 10 illustrates the spatial scales associated with a typical, nearby black hole binary in our Galaxy and shows how the innermost regions of the jet could be imaged by space-based mm-VLBI. On these scales we would expect to see variations in the ∼THz jet base occurring only minutes after X-ray variations, providing clues to the formation of jets in unprecedented detail. On larger physical scales, highly transient radio emission has been clearly associated with the transient accretion onto massive black holes resulting from the tidal disruption of a star. In a non-thermal Tidal Disruption Event (TDE), this emission is likely to arise in a jet in most, if not all, cases, and VLBI offers a unique opportunity to see how accretion and jets evolve in pristine environments rather than in the very-long-timescale steady states associated with most AGN; see Yang et al. (2016); Mattila et al. (2018).
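The angular scales in Fig. 10 follow from the parallax-like relation θ[arcsec] = a[AU]/d[pc]. A two-line check with an illustrative 0.05 AU orbital separation (an assumed, typical value, not a measured system):

```python
def scale_uas(sep_au, dist_pc):
    """Apparent angular size in microarcseconds: theta[arcsec] = a[AU] / d[pc]."""
    return sep_au / dist_pc * 1e6

print(f"{scale_uas(0.05, 1000):.0f} uas")   # ~50: a 0.05 AU binary at 1 kpc
print(f"{scale_uas(0.05, 8000):.1f} uas")   # ~6: the same system at 8 kpc
```

Both numbers sit at or above the 4 µas beam quoted in Fig. 10 for 690 GHz, consistent with the claim that the binary separation is resolvable.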
At the most explosive end of the stellar scale are gamma-ray bursts (GRBs). The long variant is associated with the death of the most massive stars, while the short variant now appears quite clearly to be associated with the merger of two neutron stars. The end product in both cases can be a stellar-mass black hole accreting at highly super-Eddington rates, and these are known to be powerful engines of relativistic jets. For a long time, long GRBs have been known to produce relativistic jets that can be studied by ground-based VLBI (Taylor et al., 2004). Recently, superluminal motion was detected in a jet associated with the late-peaking (∼6 months) radio afterglow of the neutron star merger GW170817 (Mooley et al., 2018; Ghirlanda et al., 2019). In exceptional cases, we might be able to study the ultra-relativistic phase of "engine-driven" stellar jets with Space VLBI, if these are nearby and beamed along our line of sight.
Coherent transients and variables are amongst the most important sources in astrophysics right now, in particular Fast Radio Bursts (FRBs) and pulsars. Often the broad-band nature of these phenomena is not well established (e.g., for FRBs). Prominent counter-examples include pulsars and neutron stars, which are observed across the whole electromagnetic spectrum, but even in those cases the THz window is mostly unexplored. Potential sources for this frequency range are, for instance, magnetars, which have been detected up to 291 GHz (Torne et al., 2017) with rather flat flux density spectra spanning from ∼1 GHz up to, now, ∼300 GHz. Whether this makes them prominent sources at THz frequencies will be interesting to see. The current status of the transient sky at radio frequencies is summarised in Fig. 9, as compiled in Keane (2018). It is likely that THz frequencies will mostly probe the incoherent population, but the magnetar detections at 300 GHz suggest that we should be open to surprises.
Water maser science with Space VLBI
Water vapour is found out to cosmic distances, e.g., Impellizzeri et al. (2008). It can emit thermally or through stimulated emission, i.e., masers. Since masers are compact and bright, they are excellent astrometry targets. Due to heavy atmospheric absorption, the mm and sub-mm masers are much harder to observe from Earth than the well-studied 22 GHz transition. While bright water masers, at, e.g., 183 GHz and 325 GHz, can be observed by dedicated ground-based telescopes (see Humphreys et al. (2017)), others have only been detected from above the main atmospheric layers, by the Kuiper Airborne Observatory and SOFIA, e.g., Herpin et al. (2017); Neufeld et al. (2017); Richards et al. (2020) and references therein. There are over a hundred predicted water maser transitions at GHz-THz frequencies (a few tens have been detected), mostly excited by collisional pumping under distinctive combinations of temperature, number density and other parameters (Gray et al., 2016). ALMA and RadioAstron observations found that mm and 22 GHz water maser sources can have both extended and extremely compact morphologies, down to just a few solar radii (Hirota et al., 2012; Sobolev et al., 2018). The millimetre water transitions are thus very suitable for space-space VLBI covering a range of baselines from hundreds of km to tens of Earth radii.

Fig. 9 Luminosity-timescale parameter space for radio transients and variables. The blue shaded region delimits a brightness temperature of 10^12 K; sources inside this region are likely to be incoherent synchrotron emitters, those in the white region coherent. From Keane (2018) and Pietka et al. (2015).

Fig. 10 Illustration of the angular scales associated with a low-mass black hole binary system at a distance of 1 kpc (equal to the smallest currently known distance; typical distances are 8 kpc). For a space-VLBI resolution of 4 µas at 690 GHz, both the jet base and the binary separation are clearly resolvable. Coupled with X-ray (inner accretion flow) and infrared (innermost regions of the jet, close to the launch zone) observations, we could achieve unprecedented understanding of how jets form and couple to accretion.
Typically, Galactic water masers arise in the disc-jet systems of young stellar objects and in the envelopes around evolved stars. The AU-scale spatial distributions of a number of mm water maser transitions can put strong constraints on the density, temperature and water distribution in the gas surrounding a young stellar object, which can be compared with current theories of star formation, e.g., Klassen et al. (2016); they also provide powerful diagnostics of the disc chemistry (Banzatti et al., 2015). Proper motions can trace the dynamics of the gas flow and reveal the disc or jet origin of the emitting gas, as these have distinct kinematic signatures (Sanna et al., 2015). For evolved stars, the various water transitions are observed at different stellar radii, see Fig. 11, tracing the change of physical conditions across the stellar envelope (Richards et al., 2014).
For nearby and distant galaxies, bright mm water maser emission is an excellent tracer of both star formation processes and the physics of the central engines, which in some cases are AGN. For example, in the interacting galaxy Arp 220, ALMA observations of mm water masers find unresolved emission that is best modelled by a large number of pc-scale molecular clouds (König et al., 2017); only space-space VLBI could provide images at these scales. On the other hand, towards Circinus and NGC 4945 the mm water maser emission is associated with the circumnuclear region (Hagiwara et al., 2013; Pesce et al., 2016). Water masers from circumnuclear discs have been found to have extremely compact components on Space-Earth baselines, see Fig. 11. (Sub)mm water maser emission from the circumnuclear discs surrounding supermassive black holes holds the promise of yielding independent estimates of geometric distances, and perhaps even an independent verification of the Hubble constant.
One of the major benefits of space-space VLBI is the absence of the terrestrial atmosphere: (sub)mm water masers are currently difficult or even impossible to observe at most ground-based sites (see Gray et al. (2016) for model predictions of millimetre maser transitions). SOFIA observations have already detected THz water maser emission at 1.3 THz (Herpin et al., 2017). The more transitions of water that can be observed from the same volumes of gas, ideally quasi-simultaneously, the better constrained the radiative transfer modelling used to determine density and temperature becomes. Detecting lines for the first time also provides a good test of the predictions of maser theory and models.
The quest for water in protoplanetary discs
Water is an important molecule in many astrophysical environments. In the case of protoplanetary discs, two key issues emerge from previous studies. First, water can freeze out on dust grains where conditions are sufficiently cold and shielded. In the heavily discussed core accretion scenarios of planet formation, the presence and location of such ice lines are essential ingredients for determining where and how efficiently small dust grains can stick together and begin the growth to larger agglomerates as a first step toward planets, e.g., Birnstiel et al. (2016). Analysing the water vapour signal from the gas phase gives crucial constraints on the distribution of the water phases in a disc.
Second, water is intimately linked to the composition of exoplanets, e.g., Pontoppidan et al. (2019) and references therein. During the process of planet formation, the abundance and phase (solid or gas) of water trace the flow of volatile elements, with implications for the bulk constitution of the planets, the composition of their early atmospheres, and the ultimate incorporation of such material into potential biospheres, e.g., Marty (2012); Pearce et al. (2017). Furthermore, water, as a simple molecule with high abundance, is the dominant carrier of oxygen. Hence, its distribution in a disc can also steer the C/O ratio, which can provide a tell-tale connection between a planet's composition and the composition and location of its birth site in the natal disc (especially for giant planets).
Ultimately, the complicated interplay between grain evolution, grain surface chemistry and freeze-out, photodesorption and photodissociation, and radial and vertical mixing processes regulates the abundance of water in its different phases, especially in the outer disc, e.g., Henning and Semenov (2013). But integrated models of these processes can only advance if we have observational access to all phases of water in all temperature regimes of a protoplanetary disc. Here, we will still face major deficits even in the 2030s.
Though water vapour lines have been detected from the ground, such observations are mostly limited to high-excitation thermal lines (E_up/k ≳ 700 K) that are not excited in the Earth's atmosphere, or to certain maser lines. Water lines seen in the near-infrared arise only from the very inner hot gas disc. In the mid-infrared, one still probes very warm water gas of many hundreds of kelvin. The MIRI instrument on the James Webb Space Telescope (JWST) will make very sensitive observations of such warm water lines for many protoplanetary discs. But the spatial resolution of JWST in the mid-infrared towards longer wavelengths is limited, and typical discs of just 1-4 arcsec in size will be only moderately resolved. Furthermore, JWST does not offer sufficient spectral resolution to resolve the velocity structure of the detected lines. To get access to the bulk of the water vapour reservoir in a disc, which contains the colder gas, one needs to include the far-infrared and sub-millimetre range. Given the aforementioned difficulties of sensitive observations of cooler water vapour from the ground, which severely hamper even ALMA at its good observing site, observations from space are pivotal to making progress in this field. To obtain spatially resolved information on such lines in a typical disc, one should aspire to an angular resolution of 0.1". Thus, this field of research would demand short baselines on the order of several hundred metres to a kilometre, as the sketch below illustrates.
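The baseline requirement follows directly from the diffraction limit θ ≈ λ/B. A minimal sketch, assuming as illustrative examples the 557 GHz ortho-water ground-state line and a ∼1.11 THz para-water line (the specific line choices are assumptions, not requirements of this proposal):

```python
# Baseline needed for a target resolution, B = lambda / theta.
# The two line frequencies below are illustrative choices, not requirements.
C = 2.998e8                                      # speed of light, m/s
ARCSEC = 3.141592653589793 / (180.0 * 3600.0)    # radians per arcsecond

def baseline_m(freq_hz, theta_arcsec):
    lam = C / freq_hz                    # observing wavelength, m
    return lam / (theta_arcsec * ARCSEC)

print(baseline_m(557e9, 0.1))    # ~1.1 km at 557 GHz
print(baseline_m(1113e9, 0.1))   # ~0.56 km at 1.113 THz
```

Both numbers land in the "several hundred metres to a kilometre" range quoted above, confirming that compact formation-flying configurations suffice for this science case.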
Exoplanets
Thousands of exoplanets have been discovered in recent decades, demonstrating the ubiquity of planetary systems. The discovery of new worlds will continue even more efficiently in the coming years with new and more sophisticated instruments, which will help to build a complete picture of the formation and evolution of these objects. Observations are being carried out across the electromagnetic spectrum, from visible and infrared to radio wavelengths. However, despite the extraordinary contribution of the ALMA interferometer to protoplanetary discs, the observation of exoplanets at millimetre and sub-millimetre wavelengths, up to THz frequencies, is yet to be developed. Ground-based THz astronomy is heavily hampered by absorption features in the Earth's atmosphere, caused by molecular oxygen and water. Although the atmospheric opacity can be partially alleviated by ground observatories built at high altitude, only space-based interferometers would be definitively free of the Earth's atmospheric limitations, fully exploiting the science offered by sub-mm wavelengths.
Detection of exoplanets via astrometric monitoring of the reflex motion of the parent star can be successfully applied in the sub-mm range. The stellar photosphere is dominated by permanent and more stable thermal, black-body emission (∝ ν²), free from the on/off nature of the stellar flares present in active stars at longer wavelengths. For the nearest stars, the radio luminosities of normal, solar-like stars would correspond to tens of mJy at hundreds of GHz (Lestrade, 2008). Efficient monitoring of selected samples would resolve the orbital inclination ambiguity inherent in planetary masses determined by radial velocity. Additionally, the population of planets in the outer parts of planetary systems, less accessible to radial velocity techniques, could be characterised. Sub-milliarcsecond-precision astrometry would suffice to detect Jovian planets within 10 pc, as the sketch below shows.
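The sub-milliarcsecond requirement follows from the size of the astrometric wobble, θ ≈ (M_p/M_*)·(a_p/D). A minimal sketch, assuming a Jupiter analogue (1 M_Jup at 5.2 AU) around a solar-mass star at 10 pc, all illustrative values:

```python
# Photocentre wobble of a star due to an orbiting planet,
# theta ~ (M_p / M_star) * a_p / D.  Jupiter-analogue values are assumed.
def wobble_mas(m_planet_mjup, m_star_msun, a_au, dist_pc):
    mass_ratio = m_planet_mjup * 9.546e-4 / m_star_msun  # 1 Mjup ~ 9.546e-4 Msun
    # a_au / dist_pc is the angle in arcseconds by the definition of the parsec.
    return mass_ratio * a_au / dist_pc * 1e3             # -> milli-arcseconds

print(wobble_mas(1.0, 1.0, 5.2, 10.0))   # Jupiter analogue at 10 pc: ~0.5 mas
```

A wobble of ∼0.5 mas over a ∼12 yr orbit is comfortably above sub-mas astrometric precision, consistent with the claim above.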
Direct detection of exoplanets at sub-mm wavelengths would require µJy sensitivities. Most of the continuum exoplanet emission would correspond to thermal radiation, which at THz frequencies may reach 10-100 µJy for a Jupiter-like planet within 10 pc (Villadsen et al., 2014). This emission is 1-2 orders of magnitude brighter than at the lower frequencies accessible to ground-based telescopes, showing that direct detection of exoplanets will necessarily require space missions. The importance of these measurements is fundamental, as they would bridge the gap between the far- and mid-IR studies of exoplanets and the, so far elusive, emission at long radio wavelengths. Prospects for detecting exoplanets during their early stages of formation are particularly favourable at sub-mm wavelengths. Protoplanets still embedded in circumstellar discs will radiate through re-emission by the heated surrounding dust. Even modest milliarcsecond resolution would discriminate between the emission of the disc and that of the circumplanetary material for a Jupiter at 1 AU within 100 pc. Measurements of non-Keplerian motions of the protoplanet would provide direct information on the density and viscosity of the disc (Wolf and D'Angelo, 2005; Pinte et al., 2018).
The sub-mm range of the spectrum (100-1000 GHz) is particularly rich in water lines which, with adequate sensitivity and resolution, may not only characterise the atmospheres of newly discovered planets but also unambiguously trace signs of biological activity. Sub-mm spectroscopy of exoplanets highly irradiated by their host stars constitutes the best scenario for detecting these absorption features. Such studies are beyond the capabilities of existing, or even planned, ground-based observatories and are only feasible for space-based missions with µJy sensitivities (Öberg et al., 2018). The characterisation of the atmospheres of Earth-like planets by high-resolution sub-mm observations is of extraordinary relevance, as the presence of water is intimately linked to the development of life.
SETI -search for technosignatures
Over the last few years, the field of SETI (Search for Extraterrestrial Intelligence) has undergone a major rejuvenation, see, e.g., Price et al. (2020); Siemion et al. (2013). The Kepler mission's discovery that most stars host planetary systems, with around 20% of these planets located within the traditional habitable zone, together with the continually growing evidence that the basic pre-biotic constituents and conditions we believe necessary for life are common and perhaps ubiquitous in the Galaxy, has brought new focus to one of the most important questions humankind can ask itself: Are We Alone?
A space-based mm-VLBI interferometer operating outside the Earth's atmosphere would enable the first serious SETI searches to be conducted across the full mm and sub-mm domains of the electromagnetic spectrum. The recent upsurge of interest in the search for "techno-signatures" (Wright, 2019) has a strong focus on covering as much of the electromagnetic spectrum as is sensible, and searches at mm and sub-mm wavelengths are extremely well motivated. In particular, advanced civilisations would be well aware of the advantages of operating communication systems within this "high frequency" domain, especially for the long-distance communication systems likely to be associated with powerful interplanetary (and indeed interstellar) networks. Large-bandwidth sub-mm systems offer significant carrying capacity, yet operate in a regime where scattering by ionised gas and absorption by dust are usually negligible. This part of the high-frequency radio spectrum is also relatively free of human-made radio frequency interference. Although this almost pristine environment is likely to change in the coming decades, a space-based, sub-mm long-baseline interferometer is largely immune to the effects that plague ground-based arrays of limited spatial extent.
The detection of "leakage" radiation, or perhaps of deliberately established beacons from another technical civilisation, would be possible with a space-based sub-mm interferometer. Such signals are likely to be entirely unresolved, even at resolutions approaching 10 µas. Narrow-band signals (like beacons) would probably exhibit very large Doppler accelerations, and transmitters associated with exoplanets located within 1 kpc of the Earth, with orbital periods similar to Earth's (∼1 yr), would show changes in proper motion that a space-based sub-mm interferometer would be sensitive to on timescales as short as a few days; the estimate sketched below illustrates the expected Doppler drift.
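The scale of these Doppler accelerations can be estimated from the orbital acceleration alone, df/dt ≈ ν·a/c. A minimal sketch, assuming an illustrative observing frequency of 230 GHz and an Earth-like 1 AU, 1 yr orbit:

```python
# Doppler drift of a narrow-band transmitter on an Earth-like orbit,
# df/dt = nu * a / c with a = (2*pi/P)^2 * a_orb the orbital acceleration.
# The 230 GHz observing frequency is an illustrative assumption.
import math

C = 2.998e8       # speed of light, m/s
AU = 1.496e11     # astronomical unit, m
YEAR = 3.156e7    # year, s

def drift_hz_per_s(freq_hz, a_orb_au=1.0, period_yr=1.0):
    accel = (2 * math.pi / (period_yr * YEAR))**2 * a_orb_au * AU
    return freq_hz * accel / C

print(drift_hz_per_s(230e9))   # ~4.5 Hz/s at 230 GHz
```

Drifts of several Hz per second are enormous by cm-wave SETI standards, so any sub-mm search pipeline would need to search the drift-rate dimension explicitly.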
A space-based interferometer might also be able to detect non-coherent technosignatures. For example, the emission of waste heat from highly efficient mega-structures (such as Dyson spheres/swarms) could re-emerge as relatively cold black-body emission in the sub-mm. In particular, discontinuous structures such as sharp edges or holes (as might be commonly associated with artificial mega-structures in general) would induce ringing in the uv-plane and a highly correlated response in the visibility data.
THEZA implementation options
The key capabilities of the THEZA concept require a space-borne VLBI system able to observe at frequencies from above 200 GHz (1.5 mm wavelength) to at least 1 THz (300 µm wavelength), or even higher. Extension of the observing range toward lower frequencies, e.g., down to 86 GHz, might be considered as an attractive broadening of the THEZA science scope. The design of a mission addressing the THEZA science outlook will be the subject of several major engineering trade-offs. One of them is between interferometers employing Space-Earth baselines and those with Space-Space-only baselines.
The former would have an enhanced baseline sensitivity due to the larger collecting area of Earth-based antennas, ultimately that of a phased ALMA, as demonstrated recently by Event Horizon Telescope Collaboration et al. (2019a). However, such a system would be limited in frequency coverage by atmospheric opacity, and would thus likely operate efficiently only at frequencies up to 350-400 GHz (in practice at 230 GHz, due to a foreseeable lack of Earth-based facilities able to operate simultaneously at frequencies above 230 GHz). That said, the ongoing design studies of several Earth-Space mm-VLBI mission concepts, such as Millimetron (Kardashev et al., 2014), should provide useful input into assessing the feasibility of various models of THEZA implementation.
Interferometers with Space-only baselines offer a clear advantage in frequency coverage: they are not subject to atmospheric limitations and can operate at frequencies above the practical cut-off for Earth-based VLBI of around 300 GHz. A further advantage is the possibility, in principle, to cover the uv-plane efficiently by using free-flying spacecraft formations. However, these advantages come at a price: it is unrealistic to expect that, on the timescale of the Voyage 2050 programme, large mm/sub-mm space-borne apertures comparable in size to Earth-based antennas can be deployed. Yet, by using receivers with system temperatures near the quantum limit together with wide-band data acquisition systems, an acceptable baseline sensitivity can be achieved even with moderately sized space-borne antennas. This approach is pursued in several ongoing studies, including IRASSI (Infrared Astronomy Satellite Swarm Interferometry) for far-infrared astronomy (Linz et al., 2020b) and the concept of the Event Horizon Imager (EHI), a step beyond the EHT requiring space-borne interferometric elements (Martin-Neira et al., 2017; Kudriashov et al., 2018; Roelofs et al., 2019).

4.1 Event Horizon Imager: a case study of a space-only sub-mm aperture synthesis system

The EHI concept considers two or three satellites in circular medium-Earth orbits (MEOs). By setting a small difference between the orbit radii, the satellites drift apart as they orbit Earth, and the baseline length increases while its orientation constantly changes. The resulting uv-plane coverage has the shape of dense and isotropic spirals, which is especially suitable for high-fidelity and high-resolution imaging (Fig. 12). In fact, the uv-coverage will be unlike that of any interferometer before, behaving almost like a filled aperture for integration times of weeks. The filling of the uv-plane is essentially done by Earth's gravity and happens, in principle, without any active orbital control during an observation; a minimal sketch of this drift is given below. Adjusting the orbital height separation allows one to fill the uv-plane within a desired time scale, from a few days to months, commensurate with the variability or integration time scale of the source to be observed. For small orbital height separations, even a compact configuration with baselines of only a few hundred metres to hundreds of kilometres could be achieved and maintained for an extended period of time (e.g., offering a smooth extension of the resolving power of ALMA).
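The gravity-driven filling of the uv-plane can be sketched with two coplanar circular orbits whose radii differ slightly: the baseline then oscillates slowly between the difference and the sum of the radii, tracing a spiral. A minimal sketch, using orbit radii of the order of those assumed in the EHI simulations discussed below; the in-plane separation stands in for the projected baseline:

```python
# Separation of two satellites on circular orbits with slightly different
# radii (values of the order of those in the EHI studies; Keplerian, coplanar).
import math

MU = 3.986e14                   # Earth's gravitational parameter, m^3/s^2
r1, r2 = 13.892e6, 13.913e6     # orbit radii in metres (illustrative)
w1, w2 = (math.sqrt(MU / r**3) for r in (r1, r2))   # angular rates, rad/s

def baseline_km(t_s):
    """In-plane separation of the two satellites at time t (km)."""
    x = r1 * math.cos(w1 * t_s) - r2 * math.cos(w2 * t_s)
    y = r1 * math.sin(w1 * t_s) - r2 * math.sin(w2 * t_s)
    return math.hypot(x, y) / 1e3

for day in (0, 7, 14, 21):       # baseline slowly spirals outward over weeks
    print(day, round(baseline_km(day * 86400)))
```

With a 21 km radius difference, the ∼4.5 h orbits dephase by only a few degrees per day, so the baseline sweeps from tens of km to tens of thousands of km over roughly a month, which is the origin of the dense spiral coverage quoted above.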
Otherwise, the baselines will be significantly longer than any ground baselines and shorter than some of the longest baselines in past Space VLBI experiments, but ideally matched to the desired resolution with maximum image fidelity.
Space-space operation allows one to go to much higher frequencies than with Earth-based arrays or Space-Earth experiments, which increases resolution further.
The concept aims to exchange the data between the satellites via a laser-based intersatellite link and to correlate the data on board, using an orbit model provided by, e.g., measurements with GNSS satellites. Circular MEOs provide a relatively stable orbit to start with. Further processing of the data is then done on the ground using a refined orbit model based on, e.g., intersatellite ranging measurements and astronomical calibrators. The local oscillator signals may also be shared between the satellites in order to increase phase stability. An ongoing technological study is investigating the feasibility of this concept (Martin-Neira et al., 2017; Kudriashov et al., 2018).
The laser-based intersatellite link provides high data rates over very large space-space distances and hence allows wide bandwidths and accordingly high sensitivity even with modest-sized dishes. The laser links could also be used to directly transport the IF signal down to Earth. That would allow one to perform Space-Earth VLBI during special campaigns, e.g. to do snap-shot observations of highly variable or very faint objects together with sensitive ground-arrays (e.g. EHT/ALMA or ngVLA). This is obviously only possible at longer wavelengths, e.g. 1.5 mm, as demonstrated by the EHT, or 3 mm, as done regularly by the Global mm-VLBI Array.
The concept of a multi-element space-borne interferometer has also been considered by Fish et al. (2019). Their concept differs from the EHI in the choice of orbits, the number of space-borne antennas and the overall interferometric configuration of the system. These concepts have a lot in common and offer convenient starting points for further mission design studies.

Simulations of the Event Horizon Imager

Roelofs et al. (2019) performed imaging simulations of the EHI concept. Using GRMHD models of Sgr A* at 690 GHz from Mościbrodzka et al. (2014) as input, complex visibilities were sampled at the EHI uv-spiral points, and thermal noise was added based on preliminary system parameters. Orbit radii of 13,892 km and 13,913 km were assumed for the two satellites, which gives a nominal resolution of 3.6 µas at 690 GHz. Each satellite was assumed to carry an antenna with a diameter of 4.4 m, which would fit in an ESA Ariane 6 launcher; larger dishes would, of course, yield even better results. Fig. 13 shows a GRMHD model image and its reconstructions for two EHI system variants. The middle panel shows the case of a perfectly phase-stable EHI configuration with two satellites, limited by thermal noise only. Because of the dense and isotropic uv-coverage, the visibilities could be gridded in the uv-plane and an image could be reconstructed by taking the FFT of the complex visibilities, which were averaged over six iterations of the uv-spiral to build up signal-to-noise ratio on long baselines. The reconstructed image shows many of the detailed features that are present in the input GRMHD model. Roelofs et al. (2019) show that visibility averaging also helps to mitigate source variability (which occurs on a timescale of minutes for Sgr A*), allowing an average image of a variable source to be reconstructed. The basic reason is that averaging in Fourier space is the same as averaging in image space, and the average structure is dominated by GR, which is not changing.

Fig. 13 Left: GRMHD model image of Sgr A* at 690 GHz (Mościbrodzka et al., 2014); Middle and Right: image reconstructions with the EHI consisting of two satellites with long phase stability, allowing the use of complex visibilities for imaging (Middle), and with the EHI consisting of three satellites with short phase stability, relying only on the bispectrum for imaging (Right). The total integration time is 6 months in both cases. EHI reconstructions from Roelofs et al. (2019).
In practice, the EHI may not be phase stable over multiple months, depending on the attainable post-processing orbit reconstruction accuracy and clock stability. If the system is phase stable within an integration time (which is limited to timescales of minutes because of visibility smearing on the arcs swept out in the uv-plane), a system consisting of three satellites would allow the use of closure phase, which is the sum of the visibility phases on a triangle of baselines; a minimal numerical illustration is given below. Closure phases are robust against station-based phase errors such as those resulting from an inaccurate orbit model. The right panel of Fig. 13 shows a reconstructed image for such a system, made with the maximum entropy method implemented in the EHT-imaging library (Chael et al., 2016, 2018). The image quality is slightly lower than for the idealised phase-stable system, but the model features and the size and shape of the black hole shadow are still recovered robustly.
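The cancellation of station-based errors in the closure phase is easy to verify numerically: each satellite's phase error enters two baselines of the triangle with opposite signs. A minimal sketch with arbitrary illustrative phases:

```python
# Station-based phase errors cancel in the closure phase
# phi_12 + phi_23 + phi_31 (indices label baselines between stations 1,2,3).
import numpy as np

rng = np.random.default_rng(0)
true_phase = {"12": 0.4, "23": -1.1, "31": 0.7}    # intrinsic phases (rad)
station_err = rng.uniform(-np.pi, np.pi, size=3)   # per-satellite errors

def observed(ij):
    """Baseline (i,j) picks up the error difference err_i - err_j."""
    i, j = int(ij[0]) - 1, int(ij[1]) - 1
    return true_phase[ij] + station_err[i] - station_err[j]

closure = observed("12") + observed("23") + observed("31")
print(closure, sum(true_phase.values()))   # identical: the errors cancel
```

This is why a three-satellite EHI with only short-term phase stability can still image: the bispectrum (visibility triple product) inherits exactly this immunity.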
Mission outlook and key technologies
The THEZA concept aims at addressing cutting-edge, multi-disciplinary topics of modern astrophysics. While many of the topics presented in Section 3 are complementary in their science content and synergistic in terms of engineering implementation requirements, a single mission addressing them all would likely be in the L-class category. However, optimisation of the THEZA mission science composition might lower some technical requirements (e.g., frequency band coverage, data acquisition rate, number of space-borne elements) and thereby shift the mission toward the M-class envelope. We also note a significant overlap in technology requirements and, partially, in science rationale between the THEZA concept and the concepts of the Origins Space Telescope and of a mission for the far-infrared spectral domain with subarcsecond angular resolution, submitted as White Papers for the current ESA Voyage 2050 Call for Proposals by Wiedner et al. (2020) and Linz et al. (2020a), respectively.
In general, all key engineering components required for THEZA implementation are well within the mainstream development of relevant Earth-based and space-borne technologies. While we foresee a detailed analysis of the TRL figures for THEZA implementation at the pre-design study stage of a specific mission, a preliminary evaluation conducted by the EHI study team has not identified insurmountable technological problems preventing a project with a launch well within the Voyage 2050 time frame.
VLBI is international, and in fact global, in its very nature. Obviously, this is even more so for Space VLBI. Not surprisingly, all three implemented SVLBI experiments and missions to date, as described in subsection 2.1, have been widely international. We foresee that the implementation of the THEZA concept in the form of a specific mission would benefit greatly from involving more than one major space agency. Such a collaboration would not only enhance the mission potential by drawing on the best available technologies but might also help in meeting the budgetary limitations of all parties involved. The THEZA Core Team is aware of several highly compatible initiatives prepared within the ongoing US Decadal Astronomy Survey. Close coordination and collaboration with the respective projects in the US, as well as in other countries, will be highly beneficial for THEZA implementation.
This White Paper presents a concept developed within the framework of ongoing studies in Europe, the United States, Japan, and Russia. A series of dedicated workshops was initiated in 2018, with the first held in Noordwijk, the Netherlands, in September 2018 (a part of its materials published and referred to in Gurvits (2020a)) and the second in Charlottesville, VA, USA, in January 2020 (Lazio et al., 2020), to discuss the scientific and technological issues of what is here called THEZA. We expect that this series of workshops will provide an important contribution toward advancing the THEZA concept.
"Physics"
] |
Pressure Management of Water Distribution Networks Based on Minimum Ground Elevation Difference of DMAs
Water network partitioning (WNP) is an efficient strategy to improve the management of water distribution networks, reduce water losses and monitor water quality. It consists of physically dividing a water distribution network (WDN) into district metered areas (DMAs) through the placement of flow meters and isolation valves on the boundary pipes between DMAs. In this paper, a novel methodology for designing DMAs is proposed that provides districts with similar node elevations and minimizes the number of boundary pipes, in order to simplify pressure management and reduce the number of devices to be placed in the network.
Introduction
District metered areas (DMAs) [1] are one of the main methodologies for improving water system management and reducing water losses. The main benefits of applying DMAs are: (a) improved management of the supply system through continuous monitoring of hydraulic quantities (pressure and flow), in order to prevent crisis situations and to plan maintenance and expansion works [2]; (b) simplified assessment of the district water balance; (c) differential regulation of pressure [3]. Recent studies proposed DMA techniques to protect water networks against both accidental and intentional contamination events [4,5], using innovative sensors to measure water quality [6]. However, the partitioning of a water supply network inevitably reduces the hydraulic performance of the system.
Traditionally, the definition of DMAs was based on empirical guidelines and criteria (number of users per DMA, pipe length, and minimum or maximum DMA size) [7], or on trial-and-error hydraulic simulation. In recent years, various authors have proposed automatic procedures for water network partitioning. These procedures are usually organized in two phases [8]: the first, the clustering phase, defines the shape and size of the DMAs and thus the boundary pipes; the second, the dividing phase, provides the optimal placement of flow meters and isolation valves on the boundary pipes, maximizing or minimizing one or more objective functions. Several algorithms have been proposed for the clustering phase: spectral techniques [9,10], community detection [11][12][13], and graph partitioning [3,14], while the placement of flow meters and isolation valves can be solved using heuristic optimization approaches based on economic criteria coupled with hydraulic performance indicators [15,16], or on the reduction of background water leakage [12].
Generally, pressure reducing valves (PRVs) can be installed at DMA entry points to regulate pressure and reduce water losses; indeed, DMAs were introduced for this purpose as a management technique for water distribution networks [17].
As is well known, pressure varies across the network depending on the distance from the supply tank to a given point and on the ground elevation of that point; e.g., two nodes located at equal distance from the tank (same head losses) but at different elevations will have different pressures. Therefore, defining a DMA with highly varying node elevations reduces the effectiveness of pressure regulation: low-elevation nodes will be areas of high pressure, while high-elevation nodes will be areas of low pressure, as depicted in Figure 1. Pressure regulation within a DMA with a large range of ground elevations will thus have a negligible effect on the high-pressure areas, because it is necessary to ensure demand supply in the low-pressure areas (a minimal numerical sketch of this constraint follows this paragraph). For this reason, some authors modified the clustering phase to define DMAs whose shape and size minimize the difference in ground elevation within each DMA. Gomes et al. proposed dividing water supply networks into DMAs using the Floyd-Warshall algorithm, with the flow directions computed at peak-hour consumption and several design criteria, including minimizing the differences in ground elevation within DMAs [16]. Other authors [18] defined DMAs using the Walktrap algorithm, a well-known community detection technique, coupled with different design criteria: total demand supplied in the DMA, total length of DMA pipes and the highest elevation in the DMA. Specifically, Brentan et al. [18] observed that the criteria of total length and of maximum elevation are the most appropriate to facilitate pressure management after dividing the network into DMAs.
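The limitation can be made concrete with the relation p_i = H − z_i between pressure, hydraulic head and node elevation (local head losses neglected). A minimal sketch, using illustrative elevations spanning the range of the case study presented later and the minimum service pressure h* = 10 m:

```python
# Pressure p_i = H - z_i for nodes sharing the same hydraulic head H
# (local head losses neglected).  Elevations are illustrative values.
def pressures(head_m, elevations_m):
    return [head_m - z for z in elevations_m]

z = [264.0, 280.0, 300.0, 327.0]   # wide elevation range within one DMA (m)
h_min = 10.0                       # minimum service pressure h* (m)

# The PRV setpoint must keep the highest node above h_min ...
head = max(z) + h_min
# ... so the lowest nodes remain heavily over-pressurised:
print(pressures(head, z))   # [73.0, 57.0, 37.0, 10.0]
```

Grouping nodes of similar elevation into the same DMA shrinks max(z) − min(z) and hence the unavoidable over-pressure at the low nodes, which is exactly what the proposed method optimizes.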
Following these considerations, a new DMA design approach is proposed; it considers the ground elevation of the network and can be applied to a wide range of clustering algorithms (spectral clustering, community detection, graph partitioning) that take a weighted graph as input. The proposed methodology, based on heuristic optimization, was applied to a case study in Mexico with promising results in terms of pressure management and water loss reduction.
Methodology
The proposed DMA design approach improves the clustering phase of water network partitioning by grouping nodes with similar ground elevation and minimizing the difference between the minimum and maximum elevation within a DMA. As is well known, many clustering algorithms take a weighted network as input. It is therefore possible to assign weights to the nodes, the links, or both, and the choice of weights can significantly affect the result of the clustering phase; in other words, the choice of weights changes the shape and size of the DMAs. Taking advantage of this feature, the developed methodology provides a combination of weights, assigned to nodes or links (depending on the clustering algorithm), which forces the definition of clusters with similar elevation. The algorithm thus produces a sequence of weights that generate DMAs with minimal ground elevation differences. The developed weight search algorithm is described as follows:

Step a. Select a clustering algorithm; several are possible, such as graph partitioning, community detection or spectral clustering. In this work, for the sake of brevity, a multi-level recursive bisection (MLRB) algorithm [19] was applied to test the proposed procedure.
Step b. Define the number of DMAs, N_DMA.
Step c. Set the initial number of weights, n_w, to assign to nodes or links equal to the number of DMAs; n_w is the decision variable of the weight search algorithm.
Step d. Divide the nodes of the network into n_c classes according to their elevations, with n_c equal to the number of weights n_w (step c).
Step e. Assign the weights w_j, with j = 1, …, n_w, to the elements of the network (nodes or links) belonging to the same class; this step simplifies the allocation of weights and reduces the number of variables, because each weight is assigned to a class rather than to an individual network element.
Step f. Modify the weights w_j and divide the network into clusters using the algorithm selected in step a, minimizing the standard deviation of the DMA ground elevations. To this aim, a genetic optimization algorithm was implemented; the optimization variables are the weights w_j and the objective function (OF) to minimize is

OF = Σ_{k=1}^{N_DMA} σ_{z,k} (1)

where σ_{z,k} represents the standard deviation of the node ground elevations computed for the k-th cluster.
Step g. Repeat steps c to f, increasing the number of weights: n_w = n_w + 1.
Step h. The algorithm ends when the condition OF_i < OF_{i-1} is satisfied, i.e., when the value of the objective function at the i-th iteration is lower than that of the previous iteration.
It is worth noting that the proposed procedure does not involve hydraulic simulations, but is completely based on the topology of the network.
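A compact sketch of steps a-h is given below, reading the stopping rule of step h as "stop when the objective no longer improves". The helpers `cluster` (e.g., a wrapper around an MLRB implementation) and `ga_minimize` are hypothetical placeholders, not functions of any specific library:

```python
# Sketch of the weight search (steps a-h).  `cluster` and `ga_minimize`
# are hypothetical helpers supplied by the user, not library functions.
import numpy as np

def objective(labels, z, n_dma):
    """Equation (1): sum of per-DMA standard deviations of node elevation."""
    return sum(np.std(z[labels == k]) for k in range(n_dma) if np.any(labels == k))

def weight_search(z, n_dma, cluster, ga_minimize):
    z = np.asarray(z, dtype=float)
    n_w, best_of, best_w = n_dma, np.inf, None            # step c
    while True:
        edges = np.quantile(z, np.linspace(0, 1, n_w + 1)[1:-1])
        classes = np.digitize(z, edges)                   # step d: elevation classes

        def of(w):                                        # steps e-f: class weights -> OF
            labels = cluster(node_weights=np.asarray(w)[classes], k=n_dma)
            return objective(labels, z, n_dma)

        value, weights = ga_minimize(of, n_vars=n_w)      # step f: GA over the w_j
        if value >= best_of:                              # step h: no further improvement
            return best_w
        best_of, best_w = value, weights
        n_w += 1                                          # step g
```

Note that, as stated above, nothing in this loop calls a hydraulic solver: only node elevations and the topology-based clustering are needed.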
Case Study and Results
The weight search algorithm was applied to the water supply network of a part of Mexico City, which consists of 217 nodes, 289 pipes and one tank located at 348 m a.s.l. The average daily supplied demand is about 120 l/s and water losses amount to about 45% of the total inflow. The hydraulic simulations were carried out with the EPANET software [20], while the leakages were modeled using the following relationship between pressure, h, and leak flow, q:

q = c · h^e (2)

where the coefficient c was considered constant for all nodes and computed iteratively (a minimal calibration sketch follows), and the exponent e is equal to 1.18, as reported in [21]. The minimum desired pressure to ensure delivery of the total demand at the nodes is h* = 10 m.
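The iterative computation of the coefficient c can be sketched as a simple bisection that matches the total simulated leakage to the observed 45% share of the inflow. In this minimal sketch the node pressures are fixed illustrative values, whereas in practice they would come from the EPANET simulation:

```python
# Equation (2), q_leak = c * h**e with e = 1.18.  Calibrating c so that total
# leakage matches a target share of the inflow; node pressures are assumed
# fixed here, whereas in practice they come from the hydraulic simulation.
def total_leak(c, pressures, e=1.18):
    return sum(c * h**e for h in pressures if h > 0)

def calibrate_c(pressures, inflow, target_share=0.45, iters=60):
    lo, hi = 0.0, 1.0                    # bisection on the emitter coefficient
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if total_leak(mid, pressures) < target_share * inflow:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

pressures = [25.0, 40.0, 55.0]           # illustrative node pressures (m)
print(calibrate_c(pressures, inflow=120.0))   # target: 45% of 120 l/s
```

Because total leakage is monotone in c, the bisection always converges; a full calibration would re-run the hydraulic solver at each step since pressures themselves depend on c.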
The minimum and maximum elevations are 264.00 m and 327.28 m, respectively. However, a large part of the network (about 150 nodes) lies at an elevation lower than 300 m, as shown in Figure 2; a single PRV placed on the main pipe therefore could not regulate pressures effectively, because the valve set point is strongly constrained by the few higher-elevation nodes.
In order to improve pressure management and reduce water losses, the network was divided into two, three, four and five DMAs using the MLRB technique with the weights provided by the developed algorithm. To test the effectiveness of the proposed procedure, the results were compared with DMAs obtained by MLRB without weights. Tables 1-3 show the results of the clustering phase: the number of nodes in each DMA; the number of boundary pipes, N_ec; the balance index I_B, computed as the ratio between the largest cluster size (in terms of nodes) multiplied by k (the number of DMAs) and the total number of nodes; the average elevation, z_mean; the difference between the minimum and maximum elevation, ∆z; and the standard deviation of elevation, σ_z, within each DMA. The clustering results show that the weights provided by the proposed methodology significantly affect both the shape and size of the DMAs generated by MLRB. The unweighted MLRB generates perfectly balanced DMAs (the balance index is about 1 for all studied configurations), while, using the weights computed by the weight search algorithm, the DMAs are unbalanced: the balance index varies between 1.11 (3 DMAs) and 1.59 (5 DMAs). In addition, the weighted MLRB algorithm produces a larger set of boundary pipes than the unweighted MLRB for all DMA layouts, which could be a disadvantage from an economic, hydraulic and management point of view.
By contrast, employing the weights provided by the proposed methodology makes it possible to reduce the standard deviation of the ground elevations for each DMA, so that the difference between the minimum and maximum elevation within the same DMA decreases significantly. Comparing all the studied DMA configurations, the smallest difference for unweighted DMAs is ∆z = 31.28 m (4 DMAs and 5 DMAs), while for weighted DMAs it is ∆z = 12.03 m. The use of weights therefore improves the minimization of the elevation range within a DMA: the weighted MLRB provides DMAs with a smaller elevation difference than the unweighted MLRB. Then, to evaluate the influence of the computed weights on water loss reduction by placing PRVs at the DMA entry points, the dividing phase was carried out by minimizing a multiobjective function (MOF, Equation (3)) that combines the number of flow meters to install, n_fm, with the delivered flow Q_i and head H_i at the i-th node, subject to the constraint h_i ≥ h* on the pressure h_i at each of the n network nodes. The constraint of Equation (3) is necessary to ensure a minimum level of service for customers in terms of pressure. The minimization of Equation (3) was carried out with the multi-objective genetic algorithm NSGA-II [22]. At the end of the dividing phase, PRVs were placed at the DMA entry points. Pressure regulation was applied during the hours of lowest consumption, from 00:00 to 7:00 and from 18:00 to 23:59. The PRV settings were defined by minimizing the total inlet water volume while ensuring the minimum design pressure h* throughout the network.
Table 3 reports the minimum number of flow meters to install, the number of PRVs, n_prv, required to regulate pressures in the network, and the percentage of leakage. The number of PRVs is greater than the number of flow meters because an additional valve is placed on the main pipe of the network. The results in Table 3 show that, despite the greater number of boundary pipes, the DMA configurations obtained by the weighted MLRB allow the placement of a greater number of isolation valves and, consequently, reduce the number of flow meters to install on the boundary pipes: for all weighted configurations, the number of meters is lower than for the unweighted ones. It is worth observing that the non-partitioned network layout with a single PRV downstream of the tank has a low impact on water loss reduction (the leakage percentage remains about 43%). Moreover, the minimization of elevation differences within DMAs has a significant effect on water loss reduction: the percentage of losses is reduced from 45% to 33.49% for the weighted DMAs, whereas for the unweighted DMA layouts it is reduced at best to 37.03%.
Conclusions
The weight search algorithm makes it possible to minimize ground elevation differences within DMAs and improves the performance of the unweighted clustering algorithm. A network weighted according to the proposed procedure has advantages both in the dividing phase and in the reduction of water losses obtained by installing PRVs at DMA entry points. The weight search algorithm computes the weights without the need for any hydraulic simulation, and it can be coupled with any clustering algorithm that accepts a weighted network as input. To further test the effectiveness of the proposed methodology, it will be necessary to study other networks and to implement other clustering algorithms.
Author Contributions: All authors have read and agreed to the published version of the manuscript. All authors contributed equally to the paper.
"Engineering",
"Environmental Science"
] |
Dendrite‐accelerated thermal runaway mechanisms of lithium metal pouch batteries
High-energy-density lithium metal batteries (LMBs) are widely accepted as promising next-generation energy storage systems. However, the safety features of practical LMBs are rarely explored quantitatively. Herein, the thermal runaway behaviors of a 3.26 Ah (343 Wh kg−1) Li | LiNi0.5Co0.2Mn0.3O2 pouch cell over the whole life cycle are quantitatively investigated by extended volume-accelerating rate calorimetry and differential scanning calorimetry. Through thermal failure analyses of a pristine cell with fresh Li metal, an activated cell with once-plated dendrites, and a 20-cycled cell with large quantities of dendrites and dead Li, dendrite-accelerated thermal runaway mechanisms, including the reaction sequence and the heat release contributions, are established. Suppressing dendrite growth and reducing the reactivity between the Li metal anode and the electrolyte at high temperature are effective strategies to enhance the safety performance of LMBs. These findings can greatly enhance the understanding of the thermal runaway behaviors of Li metal pouch cells under practical working conditions.
INTRODUCTION
Rechargeable batteries are of great significance to various energy storage processes for environmental sustainability and the availability of renewable energy. [1][2][3] With the booming development of electric vehicles and portable electronic devices, advanced electrode materials and high-energy-density battery systems are urgently demanded. [4][5][6][7] Lithium metal batteries (LMBs) promise a theoretically high energy density due to the lowest potential (−3.04 V vs. the standard hydrogen electrode) and ultra-high specific capacity (3860 mAh g−1) of the Li anode. [8][9][10] LMBs were in fact tried in commercial products, where their high energy density enabled practical applications. 11,12 However, frequent fire accidents led to the abandonment of LMBs, while safe lithium-ion batteries (LIBs) based on intercalation principles have achieved great success since the 1990s. Therefore, the safe operation of LMBs is among the most urgent issues to resolve before practical application. 13,14 During the practical operation of batteries, abuse conditions can occur, such as mechanical abuse, [15][16][17] electrical abuse, [18][19][20][21] and thermal abuse, [22][23][24][25] leading to safety problems. Under these abuse conditions, thermal runaway is the intrinsic manifestation of the safety risk, mainly controlled by the balance between the heat generated by the cell (which grows exponentially with temperature) and the heat dissipated (which grows linearly). 26 The trigger for thermal runaway involves a series of uncontrolled exothermic chain reactions contributed by the components inside a battery, including the anode, the cathode, and the electrolyte. [27][28][29] Calorimetric techniques such as the C80 microcalorimeter, 30,31 extended volume-accelerating rate calorimetry (EV-ARC), [32][33][34][35] and differential scanning calorimetry (DSC) are adopted to evaluate the contribution of different exothermic reactions to thermal runaway. 36 EV-ARC can measure the intrinsic thermal safety of a battery under adiabatic conditions. Combined with DSC, EV-ARC can provide an accurate quantification of the thermal runaway properties of cells. These methods are beneficial for enhancing the understanding of the thermal runaway processes of LMBs. However, the thermal runaway behaviors of high-energy-density LMBs have rarely been investigated quantitatively.
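The exponential-versus-linear balance mentioned above can be illustrated with a Semenov-style toy model, in which heat generation follows an Arrhenius law while dissipation grows linearly with the temperature excess. All parameter values below are illustrative and not fitted to any cell in this work:

```python
# Toy Semenov-style balance: Arrhenius generation vs. linear dissipation.
# A, EA, K and T_AMB are illustrative values, not fitted to any cell.
import math

R = 8.314             # gas constant, J mol^-1 K^-1
A, EA = 1e16, 1.1e5   # pre-exponential (W) and activation energy (J mol^-1)
K, T_AMB = 0.5, 298.0 # dissipation coefficient (W K^-1) and ambient (K)

def net_heating(t):
    q_gen = A * math.exp(-EA / (R * t))   # grows exponentially with T
    q_dis = K * (t - T_AMB)               # grows linearly with T
    return q_gen - q_dis

for t in range(300, 500, 10):             # scan for the crossover temperature
    if net_heating(t) > 0:
        print("generation outruns dissipation near", t, "K")
        break
```

Above the crossover (about 410 K with these toy numbers) every degree of heating accelerates the next, which is why lowering the onset temperatures of the exothermic reactions, as dendrites do, directly worsens safety.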
Compared with LIBs, LMBs suffer from problems such as dendritic Li deposition and an unstable solid electrolyte interphase (SEI). [37][38][39][40][41][42] During cycling, the SEI is continuously broken and repaired on the Li anode, leading to the continuous accumulation of thermally unstable SEI and hence to potential safety risks at elevated temperature. [43][44][45] Meanwhile, Li dendrites increase the specific surface area of the Li anode and intensify the exothermic reactions between Li metal and other cell components, such as the SEI, electrolytes (non-aqueous and solid-state), and cathodes. [46][47][48][49] Besides these, the thermal decomposition and combustion of the electrolytes themselves can also aggravate the safety risks. 50 However, the detailed reaction sequence and heat contributions during thermal runaway of high-energy-density LMBs under working conditions remain largely unknown. Therefore, it is of great importance to understand the safety characteristics of working LMBs, and their root causes, based on practically adopted cell formats such as pouch cells.
In this contribution, the thermal runaway features of 3.26 Ah Li | LiNi0.5Co0.2Mn0.3O2 (NCM523) pouch cells with a 1.0 M lithium hexafluorophosphate (LiPF6) ethylene carbonate (EC)/diethyl carbonate (DEC) electrolyte were comprehensively investigated over their whole life cycle. Combined with exothermal, morphological, and compositional characterizations of the cycled Li, the trigger for thermal runaway of the cycled cell is identified as the exothermic reactions between the large amounts of inorganic SEI and the LiPF6 salt, while the reactions between Li metal and the electrolyte lead to the final occurrence of thermal runaway. Compared with the activated cell, the self-heating and triggering temperatures of the cycled cells are significantly reduced, from 112.2 °C to 72.7 °C and from 215.3 °C to 88.2 °C, respectively. The pristine cells do not undergo thermal runaway up to 300 °C. These results quantitatively illustrate the thermal runaway features of LMBs and clearly explain the origin of thermal runaway.
RESULTS AND DISCUSSION
The adopted Li | NCM523 pouch cell consists of an NCM523 cathode (4.0 mAh cm−2), a Li metal anode (50 µm), and a routine ester electrolyte (1.0 M LiPF6 in EC/DEC (1:1 by vol.)). A polypropylene/polyethylene/polypropylene (PP/PE/PP) separator is employed in the pouch cell (Figures S1 and S2). The assembled Li | NCM523 cells are cycled at 0.03 C (1.0 C = 180 mA g−1) for the first two cycles and then at 0.1 C for long-term cycling (Figure S3A). The discharge capacity of the first cycle is 3.26 Ah and the energy density of the cell is 343 Wh kg−1 (Figure 1A; calculation method in Table S1). After 13 cycles, the discharge capacity declines significantly due to the rapid depletion of active Li and the accumulation of dead Li and SEI in the working cell, accompanied by an increasing polarization voltage and a rapid decay of the Coulombic efficiency (CE, Figure S3B). [51][52][53] EV-ARC is a frequently adopted instrument for probing the thermal safety performance of cells in an adiabatic test environment (Figure S4). 54 In the heating stage, the EV-ARC works in a heat-wait-seek mode; a sketch of this logic is given below. When the battery self-heating rate reaches 0.02 °C min−1, the instrument switches to the self-heating stage; the temperature at this point is defined as T1. When the self-heating rate reaches 1 °C s−1 (the corresponding temperature is defined as T2), the temperature rises rapidly in the thermal runaway stage. The maximum temperature reached by the cell after thermal runaway is T3. In general, the lower T1 and T2 are, the easier it is to trigger thermal runaway, and a higher T3 indicates more heat released during thermal runaway.
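A sketch of the heat-wait-seek logic with the two rate thresholds quoted above is given below; `step_heat`, `wait`, and `read_rate` stand in for a hypothetical instrument interface and are not the vendor's control software:

```python
# Sketch of the heat-wait-seek logic with the thresholds quoted in the text.
# `step_heat`, `wait`, and `read_rate` are hypothetical instrument hooks.
T1_RATE = 0.02 / 60.0   # self-heating threshold: 0.02 C/min, expressed in C/s
T2_RATE = 1.0           # thermal-runaway threshold: 1 C/s

def arc_run(temp, step_heat, wait, read_rate, t_max=300.0, step=5.0):
    t1 = t2 = None
    while temp < t_max:
        rate = read_rate()                  # "seek": measure self-heating rate
        if t1 is None and rate >= T1_RATE:
            t1 = temp                       # onset of self-heating (T1)
        if t1 is not None and rate >= T2_RATE:
            t2 = temp                       # thermal runaway triggered (T2)
            break
        if t1 is None:
            temp = step_heat(temp + step)   # "heat" by one step, then "wait"
            wait()
        else:
            temp += rate                    # crude adiabatic exotherm tracking
    return t1, t2
```

Below T1 the calorimeter supplies the heating; above T1 it only follows the cell, so the measured trajectory reflects the cell's intrinsic adiabatic behavior.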
Thermal runaway feature of the cycled cell
The thermal runaway features of a 20-cycled Li | NCM523 cell at 100% state-of-charge (SOC) were characterized by EV-ARC. Only a few residues of the pouch cell remain after thermal runaway (Figure 1B). The thermal runaway of the cell is initiated at 72.7 °C (T1) and then triggered at 88.2 °C (T2) after 1.38 h (Figure 1C). The cell reaches a maximum temperature of 407.4 °C (T3). During thermal runaway, the pouch bag ruptures and the residual electrode materials are ejected, leading to a low T3. A stable voltage curve is observed until the temperature rise rate reaches 1 °C s−1, suggesting that internal short circuits appear later than T2 (Figure 1D) and that internal short-circuiting is not the cause of thermal runaway. 55 The EV-ARC results show that LMBs carry a larger risk of thermal runaway than LIBs. 56 The cell components and their mixtures were characterized by DSC to trace the origin of thermal runaway. The DSC and EV-ARC results do not match exactly, owing to their different working principles, but they are instructive and complementary to each other. The DSC curve of NCM523 with electrolyte has an exothermic peak at 283.5 °C, attributed to the reaction of thermally released oxygen with the electrolyte (Figure 2A). 57 The cathode therefore cannot contribute to heat release below the thermal runaway temperature (T2). The heat contribution of the Li metal anode was also analyzed. There is an exothermic peak at 143.7 °C in the DSC curve of cycled Li, with an onset temperature of 89 °C (Figure 2B), indicating that the cycled Li anode can contribute to initiating thermal runaway. The exothermic reactions between Li metal and electrolyte (128.1 and 158.0 °C) can further raise the temperature to T2, leading to thermal runaway. The exothermic peak of the anode-cathode reaction (234.5 °C) generates further heat and brings the battery temperature to T3 rapidly. 58,59 The DSC results for the cell components, together with the EV-ARC results, reveal that exothermic reactions involving cycled Li cause the thermal runaway of the cell: the Li anode itself contributes to the self-heating of the pouch cell (T1), the reactions between the Li metal anode and the electrolyte lead to the occurrence of thermal runaway (T2), and exothermic reactions among cathode, anode, and electrolyte produce the highest temperature (T3) during thermal runaway. Cycled Li was further characterized to probe the origin of the initial exothermic reaction (T1). Comparing images of cycled Li and pristine Li, the cycled anode contains numerous dendritic dead Li after cycling, some of which even adheres to the separator (Figure 3A). The powdery and porous structure has a very high specific surface area, which renders it highly reactive toward other cell components. 60 Meanwhile, X-ray photoelectron spectroscopy (XPS) spectra confirm that cycled Li is composed mainly of LiPF6 decomposition products, inorganic components, organic components, and some metallic Li (Figure 3B, Figure S5). 61,62 It is worth noting that both Li2CO3 and Li2O are main components of the inorganic SEI. 63,64 Based on these chemical compositions, LiPF6 and the inorganic SEI products were tested by DSC. The reaction at 77 °C (near T1) between LiPF6 and Li2O contributes to the self-heating of the cell (Figure 3C, Figure S6A).
Furthermore, the exothermic peak at 143 °C for LiPF6 and Li2CO3 is consistent with the exothermic peak of cycled Li, causing the cell temperature to rise continuously and triggering thermal runaway. These exothermic reactions are due to the reaction of PF5 (from LiPF6 decomposition) with Li2O and Li2CO3. 65,66 By contrast, the reaction of residual Li with the inorganic SEI components does not produce the exothermic peak of cycled Li at 143.7 °C (Figure S6B). These conclusions are also confirmed with a Li | NCM523 cell using 1.0 M LiTFSI EC/DEC electrolyte (Figure 3D, Figure S7) and with a Si/C anode using 1.0 M LiPF6 EC/DEC electrolyte (Figure S8). Therefore, the reaction between LiPF6 and the inorganic SEI is responsible for the initial exotherm (T1).
According to the above discussion, the thermal runaway mechanism of the cycled Li | NCM523 cell with 1.0 M LiPF6 EC/DEC is proposed (Figure 4). When the cell temperature increases under abuse conditions, the LiPF6 decomposition product (PF5) reacts with the Li2O and Li2CO3 in the SEI on the dendritic Li to produce mild heat, while the organic components of the SEI are unstable and transform into inorganic compounds. 43 The cell then begins to self-heat (T1) and its temperature rises gradually. During this process, the SEI is gradually consumed, exposing fresh dendritic Li that reacts with the electrolyte to form new SEI. These reactions continue to release heat and raise the temperature of the cell, which in turn causes the Li and the electrolyte to react more vigorously. Eventually, the temperature rise rate reaches 1 °C s−1 (T2), resulting in thermal runaway.
As the temperature increases sharply, the separator collapses and the voltage starts to drop. The large-scale internal short circuit generates Joule heat. Next, the NCM523 cathode releases oxygen, and the reactive oxygen reacts exothermically with the organic solvents (EC and DEC) in the electrolyte. The oxygen diffusing into the anode also reacts exothermically with the Li metal. 67 All these reactions produce the highest temperature during thermal runaway, and the volatilization of the electrolyte and the gases generated by these reactions cause the cell to bulge and rupture. The difference in T3 between the cycled cell and the activated cell (Figure 5C) is that the activated cell has a fully charged capacity of 3.26 Ah, while the cycled cell retains a fully charged capacity of only 0.2 Ah; the reduced stored energy lowers the maximum temperature of the cell. 56 Moreover, the cycled cell, with its numerous dendrites, SEI, and dead Li, produces enough gas at elevated temperature to eject the electrode materials, leading to incomplete combustion reactions after T2 and thus to a lower final T3, in contrast to the activated cell. The triggers for T1 and T2 of the activated cell are the highly reactive dendritic Li formed on the anode surface after the initial charge (Figures S9 and S10). The pristine cell shows no thermal runaway: T1 is 176.1 °C and no T2 is reached (Figure 5D,E); only eruptions of flammable electrolyte occur (Figure 5F, Movie S1). The pristine cell, with its dendrite-free Li anode, thus exhibits good thermal stability and a high T1.
The thermal behaviors over the whole life cycle of Li | NCM523 batteries were comprehensively investigated by comparing the key parameters extracted from the EV-ARC test results (Table 1). The pristine cell exhibits superior thermal stability without thermal runaway, because all electrode materials, especially the Li foil, are fresh. After the first charge, Li+ from the cathode is deposited as dendrites. Dendrites increase the amount of SEI on the anode and greatly strengthen the exothermic reactions related to the Li anode. Consequently, T1 is reduced from 176.1 °C to 112.2 °C.

Table 1. Key parameters extracted from the EV-ARC results over the whole life cycle of the Li | NCM523 pouch cell. Note: (a) Owing to the large amount of gas generated during thermal runaway, the cell components rush out of the cell, leading to incomplete reactions of the electrode materials.

After 20 cycles, large amounts of dendrites accumulate on the anode, and the SEI on the Li anode surface grows substantially through repeated rupture and regeneration, bringing T1 down to 72.7 °C and T2 from 215.3 °C to 88.2 °C. Besides reducing T1 and T2, dendrites on the anode also shorten the interval between these characteristic temperatures. The parameter Δt (the interval from T1 to T2) is introduced to evaluate the escape time during thermal runaway. 68 The Δt of the cycled cell (1.38 h) is much shorter than the 13.99 h of the activated cell, indicating a greater risk during thermal runaway, in agreement with the conclusions drawn from T1 and T2. Therefore, the thermal stability of Li | NCM523 batteries drops severely as dendrites accumulate on the anode. If Li can be deposited and stripped stably without dendrites, the safety of LMBs can approach that of LIBs owing to the reduced reactivity between fresh Li metal and the liquid electrolyte (Figure S11). Besides suppressing dendrite growth, enhancing the thermal stability of electrolytes against Li metal, for example with fire-resistant solvents and Li salts in liquid electrolytes or with solid-state electrolytes, can also greatly improve the safety of LMBs.
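As a rough illustration of how T1, T2, T3, and Δt can be extracted from ARC temperature logs, the Python sketch below detects T1 as the onset of sustained self-heating and T2 as the point where the heating rate reaches 1 °C s−1. The data format and the 0.02 °C min−1 onset sensitivity are assumptions (the latter is a common ARC default), not details taken from this study.

```python
# Hedged sketch: extracting the characteristic temperatures and the escape
# time Δt from ARC time (h) / temperature (°C) logs. The 0.02 °C/min onset
# sensitivity is a typical ARC setting, assumed here rather than reported.

def arc_characteristics(times_h, temps_c, onset_c_per_min=0.02):
    t1 = t2 = t1_time = t2_time = None
    for i in range(1, len(temps_c)):
        dt_min = (times_h[i] - times_h[i - 1]) * 60.0
        rate = (temps_c[i] - temps_c[i - 1]) / dt_min          # °C per minute
        if t1 is None and rate >= onset_c_per_min:
            t1, t1_time = temps_c[i], times_h[i]               # self-heating onset
        elif t1 is not None and t2 is None and rate >= 60.0:   # 1 °C/s in °C/min
            t2, t2_time = temps_c[i], times_h[i]               # runaway trigger
    t3 = max(temps_c)                                          # peak temperature
    delta_t_h = t2_time - t1_time if t2_time is not None else None
    return t1, t2, t3, delta_t_h
```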
CONCLUSION
The thermal behaviors of a high-energy-density (343 Wh kg−1) Li | NCM523 pouch cell were comprehensively investigated over its whole life cycle.

Cell preparation and electrochemical measurements

The Si/C composite anode was prepared by ball-milling Si powder (100-200 nm, Aladdin), Super P conductive carbon black, and polyvinylidene fluoride (PVDF) binder at a weight ratio of 6:3:1 in N-methyl pyrrolidone solvent (Lizhiyuan Battery Materials Co., Ltd.) to form a homogeneous slurry. The slurry was coated directly onto a Cu current collector and dried at 60 °C for 12 h. The Si/C electrode was punched into disks with a diameter of 13.0 mm; the loading was ∼1.5 mg cm−2. The cell with the Si/C anode was made of Li (600 µm), a PP separator, and 1.0 M LiPF6 in EC/DEC (50 µL, 1:1 by volume). The cell was assembled in a 2025-type coin cell and tested from 0.01 to 1.5 V at 1 C (1 C = 2887 mAh g−1). All cells were cycled in galvanostatic mode on a Land CT2001 multichannel battery tester. Both coin and pouch cells were measured at room temperature (25 °C) without additional stress.
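As a quick back-of-envelope check of the test conditions above, the snippet below estimates the 1 C current implied by the quoted loading, disk diameter, and specific capacity. It assumes the ∼1.5 mg cm−2 loading and the 2887 mAh g−1 figure refer to the same mass basis, which may differ from the authors' convention.

```python
# Back-of-envelope 1 C current for the 13.0-mm Si/C disk described above,
# assuming the ~1.5 mg/cm² loading and 2887 mAh/g share the same mass basis.
import math

loading_mg_cm2 = 1.5
diameter_cm = 1.30
area_cm2 = math.pi * (diameter_cm / 2.0) ** 2      # ≈ 1.33 cm²
mass_g = loading_mg_cm2 * area_cm2 / 1000.0        # ≈ 2.0 mg of electrode material
current_ma = 2887.0 * mass_g                       # ≈ 5.7 mA at 1 C
print(f"electrode mass ≈ {mass_g * 1000:.1f} mg, 1 C ≈ {current_ma:.1f} mA")
```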
Materials characterizations
The morphologies of the Li electrodes were observed with a JSM 7401F scanning electron microscope (SEM; JEOL Ltd., Tokyo, Japan) operated at 3.0 kV. XPS experiments were performed on a scanning X-ray microprobe (Thermo Fisher Scientific, ESCALAB Xi+) operated at 15 kV with monochromated Al Kα radiation. The Ar+ sputtering rate for XPS depth profiling, calibrated on a SiO2 surface, was ∼30 nm min−1, and the sputtering time was 6 min. The thermal runaway tests of pouch cells were conducted in an EV-ARC system produced by Thermal Hazard Technology. K-type thermocouples were inserted into the cell to monitor the internal temperature, and the cell voltage was recorded with a Hitachi data logger. A typical heat-wait-seek mode was adopted, testing from 40 °C to 300 °C with a temperature step of 5 °C; the wait and seek times were set to 30 and 20 min, respectively. The characteristic temperatures (T1, T2, and T3) were extracted from the data. The thermal stability of the cell materials was also tested under an Ar atmosphere in a simultaneous thermal analyzer (STGA/DSC1/1600LF, METTLER) at a heating rate of 10 °C min−1 from 30 °C to 300 °C. The tested materials were sealed in crucibles with a DSC press, and the heat flow of each mixture was normalized to the sum of the masses of its components.
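For readers unfamiliar with the heat-wait-seek procedure, the sketch below mirrors the parameters given above (5 °C steps from 40 °C to 300 °C, 30 min wait, 20 min seek). The instrument calls are hypothetical stand-ins, not a real API, and the 0.02 °C min−1 seek sensitivity is a common ARC default rather than a value stated in the text.

```python
# Hedged sketch of the heat-wait-seek (HWS) logic used in the EV-ARC test.
# The three instrument functions are hypothetical stubs, not a real API.

def heat_to(temp_c): print(f"heat oven to {temp_c:.0f} °C")              # stub
def hold(minutes): print(f"wait {minutes} min for thermal equilibrium")  # stub
def self_heating_rate(minutes): return 0.0   # stub: observed °C/min during seek
def track_adiabatically(): print("exotherm found: tracking adiabatically")

def heat_wait_seek(start_c=40.0, end_c=300.0, step_c=5.0,
                   wait_min=30, seek_min=20, sensitivity_c_per_min=0.02):
    temp = start_c
    while temp <= end_c:
        heat_to(temp)                                    # "heat" step
        hold(wait_min)                                   # "wait" step
        if self_heating_rate(seek_min) >= sensitivity_c_per_min:   # "seek" step
            track_adiabatically()   # follow the cell's own heat release to T2/T3
            return
        temp += step_c

heat_wait_seek()
```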
"Materials Science"
] |
DNA Damage and Repair in Eye Diseases
Vision is vital for daily activities, yet the most common eye diseases, namely cataracts, diabetic retinopathy (DR), age-related macular degeneration (ARMD), and glaucoma, lead to blindness in aging eyes. Cataract surgery is one of the most frequently performed surgeries, and the outcome is typically excellent if no concomitant pathology is present in the visual pathway. In contrast, patients with DR, ARMD, and glaucoma often develop significant visual impairment. These often multifactorial eye problems can have genetic and hereditary components, and recent data support the role of DNA damage and repair as significant pathogenic factors. In this article, we discuss the role of DNA damage and repair deficits in the development of DR, ARMD, and glaucoma.
Eye Diseases
Chronic ocular pathologies such as diabetic retinopathy (DR), age-related macular degeneration (ARMD), and glaucoma are leading causes of blindness worldwide [1]. Indeed, patients with these conditions are predominantly referred by ophthalmologists to low-vision rehabilitation services due to vision impairments [2,3]. Hence, further understanding of the underlying mechanisms of pathology is crucial for the development of novel therapeutics to prevent irreversible vision loss in these patients. DNA damage, in particular, is thought to play an important role in the progression of these conditions. The purpose of this review is to highlight different mechanisms of DNA damage and repair pertinent to the progression of ocular pathologies and their roles in short- and long-term treatment outcomes.
DNA Damage and Repair Mechanisms
Environmental and metabolic agents such as reactive oxygen species (ROS), UV radiation, alkylating agents, and heterocyclic aromatic amines damage DNA, causing the loss of genetic information, and can lead to cell death and aging if not repaired accurately and in a timely manner (reviewed in [1,2]). DNA damage also induces constellations of mutations, such as base changes, single-base insertions and deletions (indels), short-sequence insertions and deletions, and larger chromosomal rearrangements, that are common to many human diseases [2][3][4]. Importantly, the location and functions of the eye make it particularly susceptible to exogenous DNA damage such as radiation and chemical exposure, and several DNA repair defects are associated with impaired eye functions [5,6]. Cells have evolved multiple mechanisms to detect and eliminate DNA lesions (so-called DNA repair) to avoid harmful consequences to cells and organisms and to sustain genetic integrity (Figure 1). Defects in DNA damage repair and signaling thus predispose humans to a host of diseases and accelerated aging [1]. Below we briefly outline the types of DNA damage most relevant to eye diseases and the basic framework of the repair mechanisms and pathways responsible for removing these lesions.

Figure 1. Overview of DNA damage and repair pathways. Environmental agents (i.e., radiation, chemotherapy, and reactive oxygen species) or cellular metabolism (oxidation, alkylation, or hydrolysis) induce diverse types of lesions in DNA. Specific pathways (base excision repair, nucleotide excision repair, non-homologous end joining, and homologous recombination) are primarily responsible for repairing these lesions.
Base excision repair (BER): ROS and alkylating agents produce DNA base damage that can trigger base mispairing and substitutions [7]. ROS is also linked to dysfunctional mitochondria and to the cellular stress of hyperglycemia, conditions commonly associated with eye diseases [8].
BER is a specialized repair pathway that senses these base lesions; one of several glycosylases cleaves the glycosidic bond of the modified base, producing a nucleotide without a base, known as an apurinic/apyrimidinic (AP) site, as the key repair intermediate (reviewed in [9]). The AP site is subsequently cleaved by the AP endonuclease, and the missing nucleotide is replaced by DNA polymerase β. Alternatively, repair DNA synthesis can extend beyond a single nucleotide and replace a longer stretch of DNA at the lesion [10,11]. The late steps of BER are shared mechanistically with single-strand break repair and are tightly coupled to cellular metabolic and energy status [12]. Since oxidative stress produces the primary DNA lesion type in eye diseases, both hyperactive and reduced BER have been implicated in DR [13,14]. Polymorphisms of the BER genes MUTYH and hOGG1 are also associated with age-related macular degeneration [15].
Nucleotide excision repair (NER): Vision depends on light transmitted through the cornea to the retina [16]. UV radiation and UV-mimetic agents, however, induce pyrimidine dimers and bulky DNA adducts, distorting the DNA structure and impeding faithful DNA replication and transcription (reviewed in [17]). NER senses such DNA-distorting lesions, excises both sides of the aberrantly modified bases using two DNA endonucleases (ERCC1/XPF and XPG), and catalyzes resynthesis of the excised DNA sequence [18]. NER has two variants that differ primarily at the lesion-sensing step. Transcription-coupled NER (TC-NER) links lesion sensing and subsequent repair to the transcription process, thereby focusing on eliminating DNA damage in transcribed DNA [19]. Alternatively, global genomic NER (GG-NER) recognizes DNA lesions anywhere in the genome using the DDB/XPC/hHR23B complex [20]. NER defects cause xeroderma pigmentosum, a recessive human genetic disorder with extreme UV sensitivity, cancer predisposition, and ocular manifestations [21].
Double-strand break repair (DSBR): Ionizing radiation and radiomimetic agents induce DNA double-strand breaks (DSBs), the most severe type of DNA damage and a major threat to the survival of cells and host organisms (reviewed in [22]). Unrepaired DSBs can also induce chromosome breakage and translocation and the production of fusion genes, the hallmarks of cancer cells. Even though the formation of DSBs is not inherent to eyes, such damage can arise as a secondary lesion if the original lesions persist and are further modified by cellular events such as DNA replication [23]. In all eukaryotic cells, the efficient elimination of DSBs relies on two evolutionarily conserved mechanisms: homologous recombination (HR) and non-homologous end joining (NHEJ). HR begins with the formation of 3′ single-strand DNA (ssDNA) by nucleolytic degradation of the 5′ DNA ends, called 5′-to-3′ end resection [24][25][26]. The ssDNA is then bound by the Rad51 recombinase, which searches for and copies from homologous templates across the DNA break [27]. NHEJ, instead, seals broken DNA with DNA ligase IV after the juxtaposition and alignment of the chromosome ends [28]. Importantly, 5′-to-3′ end resection plays a pivotal role in repair pathway choice by inhibiting NHEJ [29]. End resection also commits cells to HR and to an unconventional form of end joining called microhomology-mediated end joining (MMEJ) [29]. The importance of DSB repair is further underscored by the finding that multiple human diseases leading to immune dysfunction or cancer predisposition result from mutations in DSB repair genes [28].
DNA damage response (DDR): DNA damage triggers complex, cell-wide responses encompassing cell cycle arrest, gene expression, chromatin remodeling, energy control, programmed cell death, and autophagy, in addition to DNA repair, many of which are essential for the integrity of eye function and age-related disease progression [30]. Central to the DDR lie two apical protein kinases, ataxia telangiectasia mutated (ATM) and ataxia telangiectasia and Rad3 related (ATR), which initiate a cascade of signal transduction through multiple kinases and effector molecules and orchestrate DNA damage repair with cellular homeostasis [31]. Accordingly, eye diseases often feature dysregulation of this multifaceted DDR and impaired coordination of cellular physiology and function in the eye and optic nerve systems, even when DNA repair itself is intact.
DNA damage and repair in mitochondria: Emerging evidence suggests that mitochondrial DNA integrity and homeostasis are essential in a subset of age-related diseases, including eye diseases [32]. Mitochondria are cytoplasmic organelles that produce cellular energy by oxidative phosphorylation. Mitochondrial DNA is particularly vulnerable to DNA damage because oxidative phosphorylation generates reactive oxygen species and high-energy electrons capable of inflicting further DNA damage (reviewed in [33]). Evidence suggests that most DNA repair pathways also operate in mitochondria, albeit with substantial differences from their nuclear counterparts [34]. Nuclear DNA damage signaling is also intimately linked with mitochondrial integrity, function, and aging [35]. The details of mitochondrial DNA damage repair and signaling are still emerging and are currently the subject of intense investigation. However, a few recent findings have firmly established the causal effect of mitochondrial DNA integrity on the prevention and treatment of age-associated diseases [32], warranting further analysis of this topic. For more on DNA damage repair and signaling in cancers and aging-associated diseases, we direct readers to the excellent reviews cited for each repair pathway.
Diabetic Retinopathy
Diabetic retinopathy (DR), the most prevalent microvascular complication of diabetes, is a leading cause of blindness that affects nearly 100 million people globally. Although DR is primarily characterized by vascular dysfunction and capillary nonperfusion, it is driven by both vascular dysfunction and neurodegeneration [36]. Indeed, neuropathy, indicated by diminished electrical activity on electroretinograms and thinning of the inner retinal layer that includes ganglion and amacrine cells, is present even before apparent retinal ischemia. Patients with DR can also develop diverse conditions including macular ischemia, diabetic macular edema, preretinal hemorrhage, vitreous hemorrhage, and tractional retinal detachment, underscoring its complex pathology. DR is classified as nonproliferative or proliferative based on the presence of extraretinal neovascularization and/or proliferation [37].
Growing evidence suggests that diabetes induces accelerated DNA damage by multiple means, which could explain some of its complex pathologies [38,39]. The results suggest that hyperglycemia is at least partially responsible for the elevated DNA damage in diabetes (Figure 2). Indeed, hyperglycemia can trigger oxidative damage and single-strand breaks in cultured endothelium [40] and several other cell types in vitro [41]. Transcriptomic analysis indicates that the expression of BER genes, including apurinic/apyrimidinic endodeoxyribonuclease 1 (APEX1) and N-methylpurine-DNA glycosylase (MPG), correlates strongly with protection from microvascular complications in DR patients [42]. High carbohydrate exposure also results in the depletion of NAD+ (nicotinamide adenine dinucleotide) relative to NADH (its reduced form) and in defective DNA double-strand break (DSB) repair. Accordingly, reconstituting DSB repair prevents the fibrosis instigated by metabolic stress [43].
Elevated blood glucose levels are also linked to cellular senescence and DNA damage, which could be responsible for organ fibrosis in diabetes complications. Hyperglycemia can also induce DNA damage through advanced glycation end products (AGEs); in a mouse model of type 2 diabetes, the level of DNA advanced glycation end products (DNA-AGEs) was increased in urine and in liver and kidney tissue [44]. Alternatively, hyperglycemia induces high insulin levels to control blood glucose, and the consequent hyperinsulinemia causes a significant increase in DNA damage in vitro that coincides with the generation of ROS. 8-oxo-7,8-dihydroguanine (8-oxoG) is a hallmark of oxidative DNA damage and a primary mutagenic intermediate of oxidative stress [45]. Among diabetic patients, those with proliferative DR had significantly higher 8-oxoG levels than those with nonproliferative DR or without DR [46]. In support of this, treatment with antioxidants, IGF-1 receptor and insulin blockers, or a phosphatidylinositol 3-kinase inhibitor reduces ROS [47]. Furthermore, free fatty acids (FFA), which can lead to insulin resistance and increase the risk of diabetes, may be responsible for mitochondrial DNA damage [48,49]. The results of human antioxidant studies remain controversial despite encouraging outcomes in in vitro studies; however, some combined antioxidant therapies appear promising [50].
Surprisingly, DR tends to continue to progress despite strict control of glucose levels after prolonged hyperglycemia. This hyperglycemic memory is also illustrated by sustained microvascular damage, indicated by the loss of retinal pericytes. It has been suggested that this metabolic memory phenomenon and mitochondrial DNA (mtDNA) damage by reactive oxygen species are potentially responsible for the prolonged, progressive course of DR [51]. The mtDNA repair system does not function adequately in chronic, unlike acute, hyperglycemia. Peripheral blood mtDNA damage could serve as a biomarker for DR, since rodents with DR had increased blood mtDNA damage and decreased copy numbers compared with diabetic rodents without retinopathy and with nondiabetic controls [52]. Prominent consequences of mtDNA damage are subnormal complex I and III activity with reduced membrane potential. This creates a positive feedback loop in which hyperglycemia induces superoxide, which damages mtDNA, impeding the electron transport chain and resulting in superoxide overproduction [51]. Moreover, hyperglycemia increases mtDNA sequence variants in the displacement loop (d-loop), which contains transcription and replication components [53].
Epigenetic modification also takes part in the pathogenesis of DR [54]. Hyperglycemia activates histone deacetylase (HDAC) and increases its expression in the retina and capillary cells, while concurrently downregulating the activity of histone acetyltransferase (HAT) and inhibiting the acetylation of histone H3. These diabetes-induced changes in HDAC and HAT expression persist even after the termination of hyperglycemia, suggesting that the deacetylation of retinal histone H3 could contribute to the metabolic memory phenomenon and the pathogenesis of DR [55]. Long noncoding RNAs (LncRNAs) are noncoding transcripts longer than 200 nucleotides that can bind specifically to DNA, RNA, or proteins. Diabetes leads to the overexpression of several LncRNAs that can translocate into the mitochondria, such as LncMALAT1 and LncNEAT1, which are encoded in the nucleus and participate in mitochondrial homeostasis. High glucose aggravates LncMALAT1 and LncNEAT1 expression, impairing mtDNA and the mitochondrial membrane potential [56].
In DR, mtDNA is hypermethylated, with increased 5mC levels, particularly at the d-loop region. The inhibition of DNA methylation accordingly decreases diabetes-induced base mismatches in the d-loop [57]. Overexpression of the enzyme Mlh1, which associates with polymerase γ, mitigated the sequence variants in endothelial cells and decreased respiration rates while increasing apoptosis [53]. During the Diabetes Control and Complications Trial (DCCT), DNA methylation persisted over time at key genomic loci associated with diabetic complications in type 1 diabetes patients [58]. Further analysis is required to define the roles of DNA methylation, mismatch repair, and base mismatches in the d-loop.
Age-Related Macular Degeneration
Age-related macular degeneration (ARMD) is the leading cause of vision loss in individuals over 55. ARMD is characterized by the progressive deterioration of photoreceptors and outer retinal layers and the buildup of macular deposits called drusen. ARMD pathogenesis involves lipid deposition, chronic inflammation, oxidative stress, and inhibited extracellular matrix maintenance [59]. ARMD is classified as non-neovascular or neovascular based on the presence of choroidal neovascularization [60]. Although age is the primary risk factor, other factors contributing to the progression of ARMD include genetic susceptibility, diet, smoking, and cardiovascular status. The development of ARMD is accompanied by the loss of integrity of retinal pigment epithelium (RPE) cells, photoreceptors, and the choriocapillaris; the macula relies on peripheral RPE to replace damaged central RPE cells. However, with senescence, degenerated areas in the macula cannot be replaced or regenerated [61]. Senescent cells share characteristics of cancer cells, such as more prominent DNA damage (especially DSBs), an elevated DDR, and chromosome aberrations. Senescent cells also secrete inflammatory cytokines, matrix-remodeling proteases, growth factors, and chemokines, for example through the CXCR2 protein, that contribute to low-grade inflammation and aging-associated changes [61,62].
Consistent with the role of DNA damage in ARMD pathology, oxidative stress damages the DNA of RPE cells, and DNA repair ability declines with age. In neutral comet and pulsed-field gel electrophoresis assays, cells from ARMD patients display greater endogenous DNA damage, but not more double-strand breaks. In ARMD patients, oxidative DNA base modification is greater than in controls, as probed with the DNA repair enzymes NTH1 (endonuclease III-like protein 1) and Fpg (DNA formamidopyrimidine glycosylase). Furthermore, DNA repair is less effective in lymphocytes from ARMD patients, and these lymphocytes are highly sensitive to hydrogen peroxide and UV radiation [63]. Patients with exudative ARMD had elevated 8-oxoG levels compared with controls [61]. In most cases, the repair of 8-oxoG is initiated by 8-oxoguanine DNA glycosylase (hOGG1) via the BER pathway [64]. If 8-oxoG escapes this process and a replicative DNA polymerase misinserts adenine instead of cytosine opposite 8-oxoG, an alternative BER pathway can be activated in which the MutY glycosylase homologue (MUTYH, hMYH) removes the adenine [65,66]. Genetic variability in the hOGG1 and hMYH genes may be associated with ARMD occurrence and progression in human studies [15]. For the prevention of ARMD progression, antioxidants including lutein, zeaxanthin, and vitamins C and E are currently used [67]. In addition, metformin, a common diabetic medication, can act as an antioxidant and anti-inflammatory agent; metformin use is associated with a decreased risk of developing ARMD and is currently under investigation for this clinical use [68].
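To make the comet-assay readout mentioned above concrete, the sketch below computes the Olive tail moment, a standard per-cell damage score equal to the tail DNA fraction times the head-to-tail centroid distance. The function name and the numbers in the example are invented for illustration and do not come from the cited study [63].

```python
# Illustrative scoring of a single comet: Olive tail moment =
# (fraction of DNA in the tail) × (distance between head and tail centroids).
# The intensities and distances below are invented example values.

def olive_tail_moment(head_intensity, tail_intensity,
                      head_center_um, tail_center_um):
    tail_fraction = tail_intensity / (head_intensity + tail_intensity)
    return abs(tail_center_um - head_center_um) * tail_fraction

# A comet with 30% of its DNA in a tail whose centroid lies 25 µm from the head:
print(olive_tail_moment(70.0, 30.0, head_center_um=0.0, tail_center_um=25.0))  # 7.5
```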
In addition to BER defects, the sensitivity of RPE cells to blue and UV light, combined with subnormal DNA repair, may promote the development of ARMD [63]. Consistent with a role for a UV damage repair deficit in ARMD, mutations in the ERCC6 gene, a key factor in transcription-coupled NER (TC-NER) for UV lesion repair, cause Cockayne syndrome (CS), an autosomal recessive disorder characterized by severely impaired physical and intellectual development and clinically recognized by features including photosensitivity, pigmentary retinopathy, and retinal degeneration [69]. Moreover, the single nucleotide polymorphism (SNP) of the G allele in ERCC6 C-6530>G is associated with a risk of ARMD development [70].
The dysfunction of DNA repair in mitochondria also contributes to the pathogenesis of ARMD [71]. More mtDNA lesions occur in RPE cells from the macular region than from the periphery, and mtDNA repair capacity is particularly impaired in the macular region as well. Puzzlingly, unlike in aging, which only affects the common deletion region of the mitochondrial genome, mtDNA lesions in ARMD patients increase significantly across all regions of the mitochondrial genome, and this mtDNA damage is associated with ARMD progression [72]. Indeed, mtDNA damage correlates positively with ARMD stage, while mtDNA repair capacity correlates negatively. In addition, more mitochondrial heteroplasmic mutations, in which two or more variants coexist within the same cell, are present in ARMD [73]. Notably, overactive initiation of DNA repair by alkylating agents can lead to retinal damage and blindness in mice through the BER-initiating alkyladenine DNA glycosylase (AAG). Thus, balancing the degree of DNA damage against repair capacity may be essential to preserve retinal function [74].
Glaucoma
Glaucoma is an optic neuropathy characterized by the degeneration of ganglion cells and related to increased intraocular pressure (IOP). Controlling this pressure, which is governed by the secretion of aqueous humor by the ciliary body and its drainage through the trabecular meshwork and uveoscleral outflow pathways, by medication, laser, or surgery is therefore the primary therapeutic strategy for glaucoma. However, open-angle glaucoma patients have prominent outflow resistance through the trabecular meshwork [75]. The trabecular meshwork is a complex, perforated, three-dimensional extracellular matrix structure composed of trabecular meshwork cells (TMC) [76]. Various signaling pathways are involved in the pathogenesis of glaucoma, including TGF-beta, MAP kinase, Rho kinase, BDNF, JNK, PI-3/Akt, PTEN, Bcl-2, caspase, and calcium-calpain signaling. These pathways converge on proapoptotic gene expression, suppression of neuroprotective and pro-survival factors, and fibrosis at the trabecular meshwork, which increases resistance to aqueous humor drainage and elevates IOP [77]. IOP-induced mechanical stress can also distort and remodel the lamina cribrosa, leading to impaired axonal transport of essential trophic factors from relay neurons in the lateral geniculate nucleus to ganglion cells. In addition, under the metabolic stress of high IOP, retinal ganglion cells may have difficulty producing sufficient energy because of mitochondrial dysfunction. The characteristic glaucomatous appearance of the optic nerve head, with a greater cup-to-disc ratio and decreased retinal nerve fiber layer thickness, develops with the death of retinal ganglion cells and optic nerve fibers. These changes, the most crucial aspect of a glaucoma diagnosis, are apparent in the visual field test [75].
One pathogenic factor in glaucoma development is oxidative DNA damage. The oxidatively modified DNA base 8-hydroxy-2′-deoxyguanosine (8-OHdG) is a marker of oxidative DNA damage [78]. 8-OHdG is increased in the aqueous humor and serum of glaucoma patients [79]. In open-angle glaucoma patients, the amount of 8-OHdG is significantly elevated in the trabecular meshwork and correlates positively with IOP and visual field deterioration [80]. With 8 weeks of oral antioxidant supplementation, 8-OHdG can be reduced in glaucoma patients with relatively high oxidative stress [81]; antioxidant supplementation in glaucoma patients may therefore be a promising therapy [82]. Furthermore, BER is deficient in glaucoma patients [83]. The expression of poly(ADP-ribose) polymerase (PARP1) and 8-oxoguanine DNA glycosylase (hOGG1), two key BER enzymes, is significantly decreased in glaucoma patient cells. PARP1 detects DNA damage and facilitates the repair process by decondensing chromatin structures and interacting with multiple DNA repair factors; OGG1 removes the modified base by cleaving its glycosidic bond [78]. Furthermore, the 399 Arg/Gln genotype of the X-ray repair cross-complementing group 1 (XRCC1) gene is associated with poor DNA repair ability and with an increased risk of primary open-angle glaucoma (POAG) occurrence and progression [83]. POAG patients also exhibit a variety of mitochondrial abnormalities, and the accumulation of mtDNA damage is implicated in its pathogenesis. Increased mtDNA deletion is accompanied by a reduced mitochondria count per cell and by cell loss in POAG; mtDNA deletions are transferred to mitochondrial progeny and increase progressively with age [84]. The mtDNA-to-nDNA ratio, representing the degree of mitochondrial DNA damage, is inversely correlated with impaired ocular blood flow in male patients with severe open-angle glaucoma [85]. In a whole-mitochondrial-genome sequencing study, half of the POAG patients carried pathogenic mitochondrial mutations, and 36.4% of these were in complex I mitochondrial genes [86].
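As an illustration of how the mtDNA-to-nDNA ratio referenced above is commonly estimated, the sketch below applies the standard qPCR ΔCt method. The gene choices and the factor of 2 for the diploid nuclear genome are common conventions assumed here, not details of the cited studies.

```python
# Minimal sketch of the standard qPCR ΔCt estimate of relative mtDNA copy
# number. Assumes one mitochondrial target (e.g., mt-ND1) and one single-copy
# nuclear gene (e.g., B2M); these choices are conventions, not from the text.

def relative_mtdna_copy_number(ct_mito: float, ct_nuclear: float) -> float:
    delta_ct = ct_nuclear - ct_mito     # mtDNA is abundant, so it has a lower Ct
    return 2.0 * (2.0 ** delta_ct)      # mtDNA copies per diploid nuclear genome

print(relative_mtdna_copy_number(ct_mito=18.2, ct_nuclear=26.5))  # ≈ 630
```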
Evidence suggests that glaucoma is associated with epigenetic changes that alter histone acetylation and DNA methylation and modulate gene expression [77]. Acute optic nerve injury significantly increases histone deacetylase (HDAC) 2 and 3 transcripts and decreases histone H4 acetylation in retinal ganglion cells. Histone deacetylase inhibitors such as trichostatin A (TSA) and valproic acid reduce the loss of ganglion cells and even enhance axonal regeneration after optic nerve injury. These findings suggest that abnormal histone acetylation/deacetylation may be related to retinal ganglion cell damage in glaucoma. Significant genomic DNA methylation has also been found in peripheral mononuclear cells from patients and in lamina cribrosa cells from human donors with open-angle glaucoma compared with healthy controls [87,88].
POAG is also associated with an increased number of DNA breaks, both locally in the trabecular meshwork and systemically in circulating leukocytes [89]. If double-strand breaks in neurons, the most harmful form of DNA damage, are not repaired appropriately, persistent activation of the DNA damage response can dysregulate the cell cycle and force re-entry into G1, leading to neural dysfunction, apoptosis, and senescence. In the early stage of double-strand break signaling, the MRN complex, comprising the Mre11, Rad50, and Nbs1/Nbn proteins, activates the ataxia telangiectasia mutated (ATM) kinase for DNA damage responses such as cell-cycle arrest, repair, and apoptosis [90]. Another early response to double-strand breaks is the formation of γH2AX through phosphorylation of the Ser-139 residue of the histone variant H2AX [91]. Compared with controls, laser-induced chronic glaucoma modeled in rhesus monkeys showed higher expression of 8-hydroxyguanosine (8-OHG), indicating oxidative stress, and of γH2AX, indicating DNA double-strand breaks, in neurons of the lateral geniculate nucleus (LGN), primary visual cortex (V1), and secondary visual cortex (V2). Apurinic/apyrimidinic endonuclease 1 (APE1) and the DNA repair proteins Ku80, Mre11, proliferating cell nuclear antigen (PCNA), and DNA ligase IV were also elevated in the LGN, V1, and V2 [92]. The DNA damage response is important for neural development; however, a persistent DNA damage response might be responsible for aging and for neurodegenerative diseases such as Alzheimer's disease and amyotrophic lateral sclerosis [93]. Clinically, optic nerve crush injury models resemble glaucoma to the extent that retinal ganglion cell (RGC) death is the main pathologic phenomenon [94]. DNA damage responses can be attenuated through inhibition of Mre11 in the MRN complex or of the ATM kinase. Interestingly, attenuating the DNA damage response after an optic nerve injury is neuroprotective to retinal ganglion cells and promotes regeneration of their neurites [90]; it also promotes functional recovery after spinal cord injuries [90]. These results suggest that the DNA damage response can contribute to glaucoma development even though it normally promotes DNA stability and mutation prevention.
Conclusions
In vision-threatening conditions such as DR, ARMD, and glaucoma, DNA damage and repair is a relevant pathogenic mechanism that suggests a potential future direction for the prevention and therapy of these disabling conditions. In addition, it may hold value as an indicator of prognosis. Both mitochondrial and nuclear DNA damage and their associated repair mechanisms appear to be crucial. However, not all repair responses after DNA damage are beneficial, and a balance needs to be maintained. Hence, further research is required to better understand the role of DNA damage and repair in the progression of these disorders and to pave the way for the development of new agents for better clinical outcomes.
"Medicine",
"Biology"
] |
Robot-assisted excision of urachal cyst: case report in a child
The urachus is an embryological structure derived from the urogenital sinus and allantois that connects the allantois to the early bladder in fetal life and then remains as the median umbilical ligament connecting the umbilicus to the dome of the bladder. An early laparoscopic procedure could trigger a quiescent urachal remnant to become symptomatic, through a lesion or infection caused either by carbon dioxide contamination or insufflation or by periumbilical or suprapubic port placement. A 15-year-old girl presented complaining of suprapubic abdominal pain. About 2 months previously, she had undergone laparoscopic appendectomy for acute appendicitis, and the early postoperative period was uneventful. She underwent robot-assisted excision of a urachal cyst. It has been suggested that early laparoscopic procedures could trigger previously asymptomatic urachal remnants to become symptomatic. Robot-assisted excision of a urachal cyst is a safe, effective alternative to open surgery in children.
Background
The urachus is an embryological structure derived from the urogenital sinus and allantois that connects the allantois to the early bladder in fetal life and then remains as the median umbilical ligament connecting the umbilicus to the dome of the bladder [1][2][3]. Abnormalities in the involution of the urachus may result in a patent urachus, umbilical urachal sinus, vesico-urachal diverticulum, or urachal cyst [1]. In particular, a urachal cyst is reported to occur in 0.02% of live births but is symptomatic in just 0.00067% of the population [4]. Rarely, an inadvertent rupture of a quiescent urachal remnant may occur during a laparoscopic procedure at the time of port placement.
We report the case of a young girl who underwent robot-assisted excision of a symptomatic urachal cyst following laparoscopic appendectomy.
Case presentation
A 15-year-old girl was admitted to our institution complaining of suprapubic abdominal pain that had lasted for 2 weeks. About 2 months previously, she had undergone a laparoscopic appendectomy for acute appendicitis, and the early postoperative period was uneventful. At that time, her preoperative Pediatric Appendicitis Score (PAS) was 7/10, and abdominal ultrasound had shown an appendix with a 7-mm diameter and peri-appendiceal fluid.
The constant pain she complained of did not radiate and was rated 5/10 on the pain scale. There were no lower urinary tract symptoms. Urinalysis and blood tests were unremarkable, and C-reactive protein (CRP) was negative. Abdominal ultrasound and magnetic resonance imaging (MRI) documented no pathological abnormality other than a supravesical cyst consistent with an enlarged urachal cyst (Fig. 1). For this reason, after informed consent was obtained, the patient underwent robot-assisted excision of the urachal cyst. Briefly, with the patient supine in the Trendelenburg position, a 2-cm-long incision was made above the navel, the fascia was opened, and the abdominal cavity was accessed with trocar placement for the optics. After induction of pneumoperitoneum at 12 mmHg, another 8-mm robotic trocar and two assistant ports (one 5 mm and one 12 mm) were positioned according to the scheme for surgery in the lower pelvis. Lysis of some adhesions between the intestine and the abdominal wall was performed. The urachus was identified between the two umbilical ligaments and followed cranially to its end, where it was dissected. It was carefully separated from the bladder, identifying a tiny passage (Fig. 2). The urachus was extracted using an endobag and sent for definitive histological examination. To ensure a tight closure, the bladder was sutured with a V-lock 3-0 stitch (Fig. 3), and the parietal peritoneum was closed (Fig. 4). The postoperative course was uneventful, oral feeding was started the day after the procedure, and the patient was discharged after 7 days. Histology confirmed a urachal cyst. Three months later, a follow-up ultrasound was normal.
Discussion
The signs and symptoms of urachal abnormalities range from a completely asymptomatic, incidentally found lesion to pain, infection, lower urinary tract symptoms, and, rarely, malignant degeneration [1]. While the management of an asymptomatic urachal remnant is still controversial, surgical excision of a symptomatic lesion is strongly recommended. Although open surgery was considered the mainstay for many years, minimally invasive techniques are now employed and considered a safe, effective alternative, with the additional advantages of improved anatomical visualization and cosmesis [5,6]. Robot-assisted laparoscopy for the surgical management of urachal anomalies in children was first described by Yamzon et al. [7]; later, several case series of children who underwent robot-assisted laparoscopic urachal cyst excision were reported [1,8]. Against longer operating times, including the increased time for robotic setup, the surgeon learning curve, and the increased cost of robotic equipment, this technique offers the advantages of 3-dimensional visualization, easier intracorporeal suturing, and a more precise excision of the lesion compared with standard laparoscopy [9,10]. In our case in particular, the patient had undergone a previous laparoscopic appendectomy, and the robotic approach was useful in carrying out a complete lysis of adhesions.
Recently, it has been highlighted that early laparoscopic procedures could trigger previously asymptomatic urachal remnants, causing them to become symptomatic. Port-site injuries to urachal remnants have been reported in nine other cases: two involving urachal cysts [11,12], two a possible patent urachus [11,13], and five a urachal diverticulum [14][15][16][17][18]. Our case may represent a third reported case of a possible patent urachus injured during port placement.
A possible explanation could be that a lesion and contamination, or the insufflation of carbon dioxide, during placement of an umbilical or suprapubic port might cause the enlargement of quiescent, asymptomatic urachal remnants [11]. Moreover, it is worth noting that even with an emptied bladder, an asymptomatic patent urachus or urachal diverticulum is susceptible to iatrogenic damage on insertion of a suprapubic port, as these remnants are sited in the space of Retzius [18].
In this regard, we carefully reviewed the video recorded during the laparoscopic appendectomy for signs of a urachal remnant, and no urachal anomalies were visualized during the procedure. Moreover, an abdominal ultrasound carried out before the appendectomy had not shown any urachal abnormality.
Conclusions
We believe that our case highlights at least two relevant concepts. Firstly, the placement of a periumbilical or suprapubic port during laparoscopic surgery could cause a lesion or infection of a latent, asymptomatic urachal remnant. A symptomatic urachal remnant should be suspected if abdominal pain occurs after laparoscopic surgery, and this rare complication should be discussed with patients or parents before any laparoscopic procedure involving umbilical or suprapubic access. Secondly, in expert hands, robot-assisted excision of a urachal cyst can be considered a safe, effective alternative to laparoscopy and open surgery in pediatric patients as well, especially after previous abdominal surgery, where postoperative adhesions should be expected and managed.
Abbreviations: CRP, C-reactive protein
"Medicine",
"Engineering"
] |
The Development of Scientific Growth in Latin America and the Caribbean. An Economic and Social Approach
In Latin American and Caribbean countries, research and development processes have been determinant factors in productivity, innovation, and economic growth, since they have contributed not only to a more competitive and egalitarian society but also to greater welfare indices, which aid in problem-solving through the enrichment of scientific knowledge. Economic growth models based on research, development, and innovation (R&D&I) have pursued long-term sustainability and have stimulated new discoveries that improve quality of life and the production policies of developing territories. This literature review led to designating researchers as generating agents of scientific capital and to recognizing them as a means of growth and impact in today's society. However, there is little coherence between the materialization of projects and the coverage of current needs in Latin America and the Caribbean among some government agents, along with a lack of prioritization of the current problems of some developing countries.
Introduction
The development of knowledge in society has highlighted the great importance that innovation and intellectual resources have both as sources of competitiveness and as drivers of long-term economic development. According to the Inter-American Development Bank (IDB) in 2010, the solutions to the most pressing challenges confronted by developing countries, such as climate change, energy accessibility, and disease control, increasingly require substantial technological progress [1]. According to the IDB, Latin America and the Caribbean (LAC) have been working on implementing adequate means to meet the basic needs of their populations through nutrition and sanitation programs, poverty reduction, universal quality education, and economic modernization. This effort has helped make evident that promoting the opportunities provided by technological change in a globalized economy is vital for any emerging economy. Part of this development has been possible thanks to the scientific community, which has taken on different developmental functions: social, economic, environmental, and public health [1]. According to the United Nations Educational, Scientific and Cultural Organization (UNESCO), research has now become the pillar of scientific development; its growth is demonstrated by professional academic training and by the number of university institutions in Latin America that have been working towards the development of scientific methods applicable to problem-solving continent-wide.
Taylor and Bogdan (1992) determined that the application of certain scientific methods depends on how problems are perceived and on the capacity to solve them. This intellectual capacity is currently being aimed at the identification of vulnerabilities and, at the same time, at intervention strategies in different educational processes, such as the human, natural, economic, and pure sciences, mediums through which a continual drive to generate innovation processes flourishes. Despite the low innovation rate in Latin America, its materialization has been recognized in the creation of new products, processes, sectors, and activities that boost transformation and development in a virtuous process of growth, in which the generation of positive, knowledge-based changes is increasingly appreciated [2]. A deeper understanding of this knowledge arises from three basic questions that have distinctly marked research approaches in a social context: How are the nature of knowledge and reality conceived? What is the nature of the relationship between the researcher and the knowledge produced? What is the researcher's way of building knowledge? [3] Based on the above, it is necessary to recognize the importance of investment in researcher education and of deploying resources in innovation processes, which should go hand in hand with recognition of the interaction with nature, the needs of society, and the discernment that mediates between the two. This process should be reflected not only in academic settings but also in the ability to execute projects and, at the same time, to encourage adherence to public policies in some lagging countries, this being the way to achieve a greater contribution to education and scientific training.
The United Nations Economic Commission for Latin America and the Caribbean has stated that innovation policies, in coordination with those of science and technology, are mandatory for efficiently linking efforts from the industrial, government, and academic sectors, which will bring about not only the strengthening of national innovation systems but also synchronization with leading global economic trends [4]. Some of the efforts identified in the academic sector are reflected in the rankings of the best Latin American universities: according to 2019 rankings, fourteen Latin American universities are among the top 600 in the world, highlighting that Latin America enjoys considerable recognition across borders, including five Nobel prizes in Chemistry and in Physiology or Medicine, and twelve more in Literature [5].
Academic Evidence
In roughly the last ten years, there has been progress in university enrollment, as indicated by the total number of undergraduate graduates, which rose from nearly 1.76 million in 2006 to 2.46 million in 2015. The social sciences had the greatest enrollment rate for undergraduate students in Latin America, with 5 out of 10 students coming from these fields. In step, the total number of students who completed their doctoral studies in Latin America grew significantly, by roughly 23 thousand additional students over a nine-year period. Unlike undergraduate and master's degrees, Ph.D. degrees are distributed evenly among the natural and exact sciences, the social sciences, and the humanities, at rates of 24%, 24%, and 21%, respectively [6]. Similarly, according to the Scopus database, the number of articles published in scientific journals by LAC (Latin America and the Caribbean) authors between 2006 and 2015 increased by 96%, with Brazil standing out: the country managed to increase its number of publications in that database by 102%. Latin America also managed to increase its participation by 55%, reaching 7.9% of worldwide scientific production. The total number of patents requested in the national offices of Latin American countries increased by 32% between 2006 and 2015, while LAC applicants accounted for 27% of them [6]. In comparison, within the same time frame, Portugal stood out as one of the most prominent countries in patent production, with an increase of 83%, versus 14% for Spain. In contrast, in LAC the rise was led by Chile, with a fivefold increase in patent applications, while Colombia tripled them; however, their impact was small relative to total applications in LAC. It should be noted that 9 out of 10 patent applications in Latin America correspond to foreign companies seeking to protect their products in overseas markets (Figure 1) [6]. Similarly, the 2015 report of the national science and technology organizations of each country showed that Brazil concentrated the largest number of researchers, surpassing Argentina in a 4:1 ratio. Along the same lines, about a tenth of Latin American researchers are Mexican, with smaller shares being Colombian, Chilean, and Ecuadorian (Table 1).
Participation of universities in patent ownership, 2010-2015
In developed countries, the share of research investment contributed by companies is usually above 60%, while the Latin American average is one third; consequently, this has generated very low demand from companies to universities, demonstrating the urgent need for support in both the execution and the materialization of projects [6]. The universities of Latin America and the Caribbean have gradually positioned themselves as centers of basic and applied research. This rise is due to greater government support and the creation of policies that fostered innovation. In sharp contrast, according to RICYT (the Ibero-American and Inter-American Network on Science and Technology Indicators), one of the characteristics of Latin America has been the low innovation rate of companies and their low involvement in research and development activities [7].
Despite the figures, resources were not limited; on the contrary, there was an increase in the effectiveness and success of Latin American research internationally. [7] This can be observed in the number of scientific publications and co-publications at the international level, and as well as in the Scopus database, which showed a growth of 37% from 2010 to 2015 of articles belonging to Latin American institutions. This growth is not only indicated by the number of publications but also by the participation of Latin America. [8] Figure 2.
The above results show the positive progress of Latin America, in line with the view presented by Alicia Bárcena at the fifth biennial meeting of the Development Cooperation Forum of the United Nations Economic and Social Council (ECOSOC), who asserted that the change needed for (1) sustained channeling towards new development and (2) the achievement of the Sustainable Development Goals (SDGs) requires a new global and regional technological governance aimed at acquired skills and knowledge, thereby generating a change in favor of science and technological development in Latin America, in order both to continue research-based progress and to compete with the countries that are currently the world powers in this field [9,10].
Main Science and Technology Producing Countries in Latin America and the Caribbean
Latin America has been characterized as a medium-income economy with highly developed levels in countries such as Argentina, Chile, and Uruguay; Chile was the country with the highest GDP per capita in 2014 and Honduras the lowest. Even so, LAC exhibits some of the greatest inequalities of any group of countries worldwide, as stated by the United Nations Economic Commission for Latin America and the Caribbean (ECLAC), which makes clear that the four countries with the highest levels of poverty are Honduras, Brazil, the Dominican Republic, and Colombia [11,12]. Paraguay, Nicaragua, Mexico, Chile, and Ecuador are the countries with the highest GDP growth, while in other countries, such as Colombia, Guatemala, and El Salvador, growth has been stable. In contrast, the GDP of some countries, such as Cuba, Venezuela, and Honduras, was observed to have declined due to their political and economic situations. It should be noted, then, that the vast majority of governments in Latin America and the Caribbean dedicate more than 1% of gross domestic product (GDP) to higher education, which is comparable to the investment made in developed countries [11,12].
In the last 20 years, Latin American countries have created specific funds for research and innovation, funds that were originally granted through national loans financed by the Inter-American Development Bank (IDB). This funding has influenced the design of national policies on research and innovation, establishing the regulations required to grant such funds as contests, credits, and scholarships [11,13].
Consequently, in the last ten years Colombia and Chile have seen sizeable growth in university enrollment, as well as in the expenditure that each student generates for the institutions. According to the UNESCO Institute for Statistics, two million undergraduate degrees were awarded in Latin America in 2012. With respect to the proportion of doctorates in the general population, the data for the most advanced countries of Latin America proved comparable to the figures for China, India, Russia, and South Africa; however, LAC remains far from the most developed countries [6,11,14]. Brazil, with more than 25,000 students, Mexico with around 25,000, and Colombia with about 20,000, lead in students who decided to study outside Latin America, mainly choosing countries in North America or Western Europe, although many students still prefer to stay in Latin America to study.
It is important to recognize that, in recent years, some Latin American countries have worked to strengthen national knowledge networks. In the case of Argentina, the "Raíces" program, which became state policy in 2008, has permitted the repatriation of 1,323 scientists since its inception in 2003, in parallel with promoting the creation of networks of Argentine scientists in developed countries. Similarly, Colombia, Ecuador, and Uruguay have taken initiatives to finance the repatriation of highly competent scientists, in coordination with industrial development and production policies, promoting sophisticated mechanisms to carry out this process and thus facilitating the incorporation of highly qualified staff into the national economy [6,15,16]. In the case of Colombia, the National Science and Technology Council (COLCIENCIAS) has established the recognition of STI (science, technology, and innovation) as a support for increasing productivity and competitiveness, which highlights the need for adequate policies and resources to boost the generation, use, and appropriation of knowledge necessary for socially profitable innovation, as the country currently requires [16]. In 2008, former Colombian President Álvaro Uribe Vélez introduced the national policy coined "Colombia Siembra Futuro," which called for the promotion of research and innovation based on the generation of scientific and technological knowledge, framing contributions to development as a generator of economic growth and the main axis for the reduction of inequality. The policy made clear that this goal is not only the responsibility of COLCIENCIAS, nor of any one sector, be it business, the public sector, or the scientific community, but of the entire community as a whole, with the primary objectives for 2019 being sustained social development through the reduction of poverty, inequality, and insufficient coverage, together with the improvement of the quality of health and educational services [16]. On January 25, 2019, the Congress of the Republic of Colombia approved the creation of the Ministry of Science, Technology and Innovation, through which it intends to build capacities and promote scientific and technological knowledge, contributing to the development and growth of the country, anticipating future technological challenges, always seeking the well-being of Colombians, and consolidating not only a more productive, competitive economy but also a more equitable society.
Juan Francisco Miranda Acosta, for his part, asserted that the investment destined for research should be greater, since by 2010 Colombia was below countries such as Brazil, Mexico, and Argentina in scientific production; for that year, the goal for national investment in science, technology, and innovation was to reach 1% of gross domestic product (GDP). Moreover, he called for generating more economic and social incentives for those dedicated exclusively to research, so that they would not have to look for new horizons in other countries.
In relation to the above, it should be noted that interdisciplinarity in labor development and in the execution of ideas has made it complex to reach agreements that favor the development of knowledge and new opinions, not of any one specialty, but of a highly complex intellectual structure that generates results. The effects obtained to date are not satisfactory, with acknowledged irregularities in the fulfillment of certain goals, as evidenced in the update of the National Science, Technology and Innovation Policy 2016-2025 prepared by the National Council for Economic and Social Policy of the Republic of Colombia (CONPES), which readjusted the policy due to low effectiveness and non-compliance with the following aims: the generation of knowledge for the solution of national and regional problems (CONPES 3080 of 2000) [17]; the capacity to generate, use, and transfer knowledge relevant to competitiveness and development (CONPES 3527 of 2009) [18]; and the increase of the country's capacity to generate and use scientific and technological knowledge in order to contribute to the productive transformation of the country (CONPES 3582 of 2009). [19] This set off an alarm in Colombia, forcing the government to make decisions in favor of the new approaches, which, as was to be expected, were reflected in scientific production and its applicability. Annexed to the above is the National … In a global view, it could be said that the results have had a reciprocally positive impact on scientific productivity. However, globally and in the same year, 179,021 articles in medicine and 19,214 articles in mathematics were published; that is to say, Colombia's global participation is 0.06% for medical articles and 0.08% for mathematical articles. The foregoing evidences the needs that the country has presented to date in relation to the percentage of investment in science, technology, and innovation. [20]

To the extent that Colombia takes advantage of the potential of science and technology, new development models can be organized in collaboration with the government, where knowledge deemed "public property for all" is guaranteed to be accessible, allowing capacity building across the entire population and thus defining the role of technology, with human development as the ultimate goal. A society more empowered in scientific and technological knowledge requires greater investment in education, research, and development, as well as policies and strategies through which citizens can assess the importance of knowledge and its application from the scientific and technological results generated in the country. [22] Finally, it is acknowledged that each of the Latin American countries has a diversity of knowledge; for this reason it is necessary to work in an interdisciplinary manner, so that this knowledge is disseminated in the national and international community and is represented in the knowledge economy model, where its various forms are valued. This in turn generates better results in terms of economic growth, social equality, political decisions based on facts, and greater transparency and ethics, these being the key components of knowledge-based societies.
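The participation shares quoted above are simple ratios; a quick back-of-envelope check follows. The implied Colombian article counts in the comments are derived from the quoted shares, not stated in the source:

```python
# Back-of-envelope check of the participation shares quoted above.
world_medicine = 179_021   # medical articles published globally that year
world_math = 19_214        # mathematical articles published globally that year

med_share, math_share = 0.0006, 0.0008  # 0.06% and 0.08%

# Implied (hypothetical) Colombian article counts:
print(round(world_medicine * med_share))  # ~107 medical articles
print(round(world_math * math_share))     # ~15 mathematical articles
```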
[23] These types of societies are built from a diversity of knowledge and culture that is a public good available to all, influenced by scientific advances and the use of cutting-edge technologies, where education, critical thinking, and the promotion of diversity and innovation are fundamental to implementing a knowledge society.
Knowledge-based societies generate greater awareness of the importance of science and technology as key elements for assessing and optimizing the use of the goods, products, and services a country has, producing citizens with greater skills to face current changes and greater awareness in decisions that promote social welfare, respect for others, and equality.
Discussion
The universities of Latin America have increased their participation in the national science and technology systems of each country; they have also increased the quality of their scientific processes and worked to improve investment conditions, which has strengthened research centers. As a counterpart, greater industrial participation in strengthening research and development is lacking, given the evidently low rate of innovation in the Latin American business system; greater integration between entrepreneurs and producers of science and technology is therefore needed in order to boost the demand for technological knowledge from companies to universities.
Little by little, Latin American universities have positioned themselves as high-level research centers. This has been demonstrated by a significant increase in the number of articles registered in international databases: 82% of the articles published in Latin America have university authors. In some countries, such as Chile, Colombia, and Brazil, participation has been greater, approaching 90% of the total scientific articles published in the database (Scopus). [24] One of the countries with the greatest scientific production has been Brazil: of the twenty most productive institutions in Latin America, ten are Brazilian, three Argentine, three Chilean, two Mexican, and two Colombian. This continues to confirm that Latin American universities are leading actors in science and technology at the international level.
According to Carlos G. Mejía in his article "Notes From a Researcher," the main aspect to be taken into account in the educational, research, and quality context is the way people handle knowledge and thereby achieve great social relevance. In turn, he argues that the engine of human growth and development is the way in which an economy develops, uses, and extracts value from knowledge, which finds its innovative foundation in the ability to articulate different kinds of knowledge, such as combining modern information technologies with a new accounting system, or graphic design with the technical skills for creating Web pages. [25] To the extent that countries take advantage of the potential of science and technology, new development models can be organized in collaboration with the government, where knowledge as "public property for all" is guaranteed to be accessible, allowing the development of capacities in the entire population and thus defining the role of technology, with human development as the ultimate goal. A society more empowered in scientific and technological knowledge requires greater investment in education, research, and development, as well as policies and strategies through which citizens can assess the importance of knowledge and its application from the scientific and technological results generated in the country. [26] Thus, the applicability of what has been learned will be the manifestation of scientific growth in countries that still demand attention and investment in their academic, political, and economic development. That is why the effectiveness of a tacit correlation between science and democracy will improve to the extent that scientific dynamism drives development and, in turn, the government's ability to implement reforms that achieve the expected impact. [26]
Conclusions
Information and telecommunications knowledge has had a great impact on all sectors of social activity, from production processes to educational components and health services. However, government support must be consolidated in order to materialize the ideas that have been developed in Latin America and the Caribbean. [27] Latin America, despite not being the most vulnerable region in the world at a socioeconomic level, has been one of the most economically inequitable, which in some way is reflected in the per capita investment generated even in the countries most representative of the scientific field. Therefore, it is essential to strengthen annual investment not only in science and technology but also in the training of professionals, so that the execution of projects becomes viable.
It is important to recognize that the foundations of the scientific pillars in Latin America have run counter to the current needs of each country. Therefore, there must be greater coherence between what is done and what each region urgently requires; this point is one of the biggest differentiators with respect to the countries most developed in science and innovation. Unfortunately, this reality has been pressing on these societies, labeled underdeveloped more than 40 years ago, showing the little impact that the materialization of projects has had to date. [28] A minimal relationship was identified between scientific and technological activity and the basic development problems facing some countries. In turn, there is a poor relationship between university institutions and some government agencies. As for professionals specialized in different areas of knowledge, they have been forced into decision-making with little scientific evidence, revealing a rush to implement the more practical measures that promise quicker solutions rather than relying on the tools and evidence that allow possible levels of intervention to be classified. [28] Research, science, and technology are an effective instrument for the transformation of society. Therefore, it is necessary to counteract the attitudes of rulers who lag behind progress and still think that research is a luxury for first-world countries, and of companies that settle into passive attitudes, endangering a scientifically inactive nation that is also completely cut off from the most advanced countries. | 5,654.6 | 2019-12-05T00:00:00.000 | [
"Economics",
"Political Science"
] |
Minimal quasi-stationary distribution approximation for a birth and death process
In a first part, we prove a Lyapunov-type criterion for the $\xi_1$-positive recurrence of absorbed birth and death processes and provide new results on the domain of attraction of the minimal quasi-stationary distribution. In a second part, we study the ergodicity and the convergence of a Fleming-Viot type particle system whose particles evolve independently as a birth and death process and jump onto one another when they hit $0$. Our main result is that the sequence of empirical stationary distributions of the particle system converges to the minimal quasi-stationary distribution of the birth and death process.
Introduction
Let $X$ be a stable birth and death process on $\mathbb{N} = \{0, 1, 2, \dots\}$ absorbed when it hits $0$. The minimal quasi-stationary distribution (or Yaglom limit) of $X$, when it exists, is the unique probability measure $\rho$ on $\mathbb{N}^* = \{1, 2, \dots\}$ such that $\rho(\cdot) = \lim_{t\to\infty} \mathbb{P}_x(X_t \in \cdot \mid t < T_0)$ for all $x \in \mathbb{N}^*$, where $T_0 = \inf\{t \ge 0,\ X_t = 0\}$ is the absorption time of $X$. The probability measure $\rho$ is called a quasi-stationary distribution because it is stationary for the conditioned process, in the sense that $\rho = \mathbb{P}_\rho(X_t \in \cdot \mid t < T_0)$ for all $t \ge 0$.
These notions and important references on the subject are recalled in more detail in Section 2, together with important definitions and well-known results on quasi-stationary distributions. We also provide a new Lyapunov-type criterion ensuring that a probability measure $\mu$ belongs to the domain of attraction of the minimal quasi-stationary distribution, which means that
$$\lim_{t\to\infty} \mathbb{P}_\mu(X_t \in \cdot \mid t < T_0) = \rho(\cdot). \tag{1.1}$$
These results are illustrated with several examples.
We use these new results in Section 3 to extend existing studies on the long-time and large-number-of-particles limit of a Fleming-Viot type particle system. The particles of this system evolve as independent copies of the birth and death process $X$, but they undergo rebirths when they hit $0$ instead of being trapped at the origin. In particular, the number of particles that are in $\mathbb{N}^*$ remains constant as time goes on. Our main result is a sufficient criterion ensuring that the empirical stationary distribution of the particle system exists and converges to the minimal quasi-stationary distribution of the underlying birth and death process.
We conclude the paper in Section 4 with a numerical study of the speed of convergence of the expectation of the Fleming-Viot empirical stationary distribution to the minimal quasi-stationary distribution, for a linear birth and death process and a logistic birth and death process. These numerical results suggest that the bias of the approximation is surprisingly small for linear birth and death processes and even smaller for logistic birth and death processes.
2 Quasi-stationary distributions for birth and death processes

Let $(X_t)_{t\ge0}$ be a birth and death process on $\mathbb{N} = \{0, 1, 2, \dots\}$ with birth rates $(b_i)_{i\ge0}$ and death rates $(d_i)_{i\ge0}$. We assume that $b_i > 0$ and $d_i > 0$ for any $i \ge 1$ and $b_0 = d_0 = 0$. The stochastic process $X$ is an $\mathbb{N}$-valued pure jump process whose only absorption point is $0$ and whose transition rates from any point $i \ge 1$ are given by
$$i \to i + 1 \text{ with rate } b_i, \qquad i \to i - 1 \text{ with rate } d_i, \qquad i \to j \text{ with rate } 0 \text{ if } j \notin \{i - 1, i + 1\}.$$
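These transition rates translate directly into a Gillespie-type simulation. The sketch below is ours and purely illustrative (the function name and the rate callables are not from the paper); it simulates one path of the absorbed process:

```python
import random

def simulate_bd(b, d, x0, t_max, rng=random):
    """Simulate one path of the absorbed birth and death process.

    b, d : callables returning the rates b_i and d_i (with b(0) = d(0) = 0).
    x0   : initial state in {1, 2, ...}.
    Returns (t, x): the time and state when the path is stopped, i.e. at
    absorption (x = 0) or at the horizon t_max, whichever comes first.
    """
    t, x = 0.0, x0
    while x > 0:
        total = b(x) + d(x)               # total jump rate from state x
        t += rng.expovariate(total)       # exponential holding time
        if t >= t_max:
            return t_max, x
        # jump up with probability b_x / (b_x + d_x), otherwise down
        x = x + 1 if rng.random() < b(x) / total else x - 1
    return t, x

# example: linear rates b_i = i, d_i = 2i, started from state 5
print(simulate_bd(lambda i: i, lambda i: 2 * i, 5, t_max=100.0))
```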
Such processes are extensively studied because of their conceptual simplicity and their pertinence as demographic models. It is well known (see for instance [20, Theorem 10 and Proposition 12]) that $X$ is stable, conservative and hits $0$ in finite time almost surely (for any initial distribution) if and only if a certain series diverges; the divergence of this series will be assumed throughout the paper. In particular, for any probability measure $\mu$ on $\mathbb{N}$, the law of the process with initial distribution $\mu$ is well defined. We denote it by $\mathbb{P}_\mu$ (or by $\mathbb{P}_x$ if $\mu = \delta_x$ with $x \in \mathbb{N}$) and the associated expectation by $\mathbb{E}_\mu$ (or $\mathbb{E}_x$). For any subset $F \subset \mathbb{N}$, $\mathcal{M}_1(F)$ denotes the set of probability measures on $F$.
A quasi-stationary distribution for $X$ is a probability measure $\rho$ on $\mathbb{N}^* = \{1, 2, \dots\}$ such that
$$\rho = \mathbb{P}_\rho(X_t \in \cdot \mid t < T_0), \quad \forall t \ge 0.$$
The probability measure $\rho$ is thus stationary for the conditioned process (and, as a matter of fact, was called a stationary distribution in the seminal work [7]). The property "$\rho$ is a quasi-stationary distribution for $X$" is directly related to the long-time behaviour of $X$ conditioned on not being absorbed. Indeed (see for instance [22] or [20]), a probability measure $\rho$ is a quasi-stationary distribution if and only if there exists $\mu \in \mathcal{M}_1(\mathbb{N}^*)$ such that
$$\rho(\cdot) = \lim_{t\to\infty} \mathbb{P}_\mu(X_t \in \cdot \mid t < T_0). \tag{2.2}$$
We refer the reader to [25,20,10] and references therein for an account of classical results concerning quasi-stationary distributions for different models.
For a given quasi-stationary distribution $\rho$, the set of probability measures $\mu$ such that (2.2) holds is called the domain of attraction of $\rho$. It is nonempty, since it contains at least $\rho$, and may contain an infinite number of elements. In particular, when the limit in (2.2) exists for any $\mu = \delta_x$, $x \in \mathbb{N}^*$, and does not depend on the initial position $x$, then $\rho$ is called the Yaglom limit or the minimal quasi-stationary distribution. Thus the minimal quasi-stationary distribution, when it exists, is the unique quasi-stationary distribution whose domain of attraction contains $\{\delta_x,\ x \in \mathbb{N}^*\}$. From a demographic point of view, the study of the minimal quasi-stationary distribution of a birth and death process aims at answering the following question: knowing that a population is not extinct after a long time $t$, what is the probability that its size equals $n$ at time $t$?
One of the oldest and best understood questions for quasi-stationary distributions of birth and death processes concerns their existence and uniqueness. Indeed, van Doorn [22] gave the following picture of the situation: a birth and death process can have no quasi-stationary distribution, one unique quasi-stationary distribution, or an infinity (in fact a continuum) of quasi-stationary distributions. In order to determine whether a birth and death process has $0$, one or an infinity of quasi-stationary distributions, one defines inductively a sequence of polynomials $(Q_n(x))_{n\ge0}$, for all $x \in \mathbb{R}$, by a three-term recurrence (2.3). As recalled in [22, eq. (2.13)], one can then uniquely define a non-negative number $\xi_1$ from this sequence. Also, the useful quantity $S$ can be easily computed (see [1, Section 8.1]), via a series expression valid for any $z \ge 1$. The following theorem answers the question of existence and uniqueness of a QSD for birth and death processes.
3. If $S = +\infty$ and $\xi_1 > 0$, then there is a continuum of QSDs, given by the one-parameter family $(\rho_x)_{0 < x \le \xi_1}$, and the minimal quasi-stationary distribution is given by $\rho_{\xi_1}$.
Remark 1. Theorem 2.1 gives a complete description of the set of quasi-stationary distributions for a birth and death process, but it is not well suited for the numerical computation of the Yaglom limit of a given birth and death process. Indeed, the polynomials $Q_n$ have in most cases quickly growing coefficients, so that the value of $\xi_1$ cannot easily be obtained by numerical computation.
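To make Remark 1 concrete, here is a sketch of the recurrence-based evaluation of the $Q_n$. The displayed recurrence (2.3) did not survive extraction above; the normalization used below ($Q_0 = 0$, $Q_1 = 1$, so that $LQ_\cdot(x) = -x\,Q_\cdot(x)$, consistent with the proof of Lemma 2.5 later in this section) is our assumption:

```python
def Q_values(b, d, x, n_max):
    """Evaluate Q_1(x), ..., Q_{n_max}(x) by a three-term recurrence.

    Assumed normalization: Q_0 = 0, Q_1 = 1, and for n >= 1
        b_n Q_{n+1}(x) = (b_n + d_n - x) Q_n(x) - d_n Q_{n-1}(x),
    which is equivalent to L Q.(x) = -x Q.(x).
    b, d : callables returning the rates b_n, d_n for n >= 1.
    """
    Q = [0.0, 1.0]                        # Q_0, Q_1
    for n in range(1, n_max):
        Q.append(((b(n) + d(n) - x) * Q[n] - d(n) * Q[n - 1]) / b(n))
    return Q[1:]

# The values typically blow up in floating point as n grows, which is why
# xi_1 cannot be located by naively scanning the sign pattern of Q_n(x).
print(Q_values(lambda n: 1.0, lambda n: 2.0, 0.1, 30)[-1])
```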
Theorem 2.1 is quite remarkable since it describes completely the possible outcomes of the existence and uniqueness problem for quasi-stationary distributions.However, it only partially answers the crucial problem of finding the domain of attraction of the existing quasi-stationary distributions and in particular of the minimal quasi-stationary distribution.The following theorem answers the problem when there exists a unique quasi-stationary distribution.
Theorem 2.2 (Martínez, San Martín, Villemonais 2013 [19]). Let $X$ be a birth and death process such that $S < +\infty$. Then there exists $\gamma \in [0, 1)$ such that, for any probability measure $\mu$ on $\mathbb{N}^*$, the conditional distribution $\mathbb{P}_\mu(X_t \in \cdot \mid t < T_0)$ converges to $\rho$ in the total variation norm $\|\cdot\|_{TV}$ at geometric rate $\gamma$, where $\rho$ is the unique quasi-stationary distribution of the process. In particular, the domain of attraction of the unique quasi-stationary distribution is the whole set $\mathcal{M}_1(\mathbb{N}^*)$ of probability measures on $\mathbb{N}^*$.
A weaker form of Theorem 2.2 was also proved in [28], but the strong form (with uniform convergence in total variation norm) is necessary to derive the results of the next section. A generalized version of Theorem 2.2 has recently been derived in [8], with complementary results on the so-called Q-process (the process conditioned to never be absorbed).
The case where there exists an infinity of quasi-stationary distributions is trickier and can be partially solved, as we will show, when the birth and death process is $\xi_1$-positive recurrent.
Definition. The birth and death process $X$ is said to be $\xi_1$-positive recurrent if $\xi_1 > 0$ in Theorem 2.1 and if, for some $i \in \{1, 2, \dots\}$, and hence for all $i \in \{1, 2, \dots\}$, the corresponding $\xi_1$-positivity condition holds.

In the following theorem, we provide a new Lyapunov-type criterion ensuring the $\xi_1$-positive recurrence of a birth and death process. As will be shown in the examples below, this criterion can be checked on a wide variety of examples and is of independent interest in the domain of $\xi_1$-classification for birth and death processes (see [17] and [24] for an account of this area).
Theorem 2.3. Let $X$ be a birth and death process with infinitesimal generator $L$. We assume that there exist $C > 0$, $\lambda_1 > d_1$ and $\varphi : \mathbb{N} \to \mathbb{R}_+$ such that $\varphi(i)$ goes to infinity when $i \to \infty$ and $L\varphi \le -\lambda_1 \varphi + C$. Then $X$ admits a quasi-stationary distribution and the birth and death process $X$ is $\xi_1$-positive recurrent.
In the next theorem, we assume that the process is $\xi_1$-positive recurrent and we exhibit a subset of the domain of attraction of the minimal quasi-stationary distribution.
Theorem 2.4. Let $X$ be a $\xi_1$-positive recurrent birth and death process with infinitesimal generator $L$. Then the domain of attraction of the minimal quasi-stationary distribution of $X$ contains the set $D$ defined by
$$D = \Big\{\mu \in \mathcal{M}_1(\mathbb{N}^*) : \sum_{i\ge1} \mu(i)\, Q_i(\xi_1) < +\infty\Big\}.$$
Assume moreover that there exist $C > 0$, $\lambda_1 > \xi_1$ and $\varphi : \mathbb{N} \to \mathbb{R}_+$ such that $\varphi(i)$ goes to infinity when $i \to \infty$ and $L\varphi \le -\lambda_1 \varphi + C$. Then the domain of attraction of the minimal quasi-stationary distribution of $X$ contains the set $D_\varphi$ defined by
$$D_\varphi = \Big\{\mu \in \mathcal{M}_1(\mathbb{N}^*) : \sum_{i\ge1} \mu(i)\, \varphi(i) < +\infty\Big\}.$$
As will be shown in the proof, we have $D_\varphi \subset D$ for every function $\varphi$ satisfying the assumptions of Theorem 2.4. However, $Q_\cdot(\xi_1)$ can be computed explicitly only in a few situations. As a consequence, we will not be able to use the first criterion to determine whether a probability distribution $\mu$ belongs to the domain of attraction of the minimal quasi-stationary distribution.
On the contrary, we will be able to give explicit functions φ satisfying the Lyapunov criterion of our theorem for a wide range of situations.
Note that, since $d_1 \ge \xi_1$, our results immediately imply that, if the process $X$ fulfils the assumptions of Theorem 2.3 with a Lyapunov function $\varphi$, then the process is $\xi_1$-positive recurrent and the domain of attraction of its minimal quasi-stationary distribution contains $D_\varphi$. This consequence is used in the following examples.
Example 1. We consider the case where $b_i = b\, i^a$ and $d_i = d\, i^a$ for all $i \ge 1$, where $b < d$ are two positive constants and $a > 0$ is fixed. Defining $\varphi(0) = 0$ and $\varphi(i) = (d/b)^{i/2}$ for $i \ge 1$, one computes $L\varphi(i) = -i^a(\sqrt{d} - \sqrt{b})^2\, \varphi(i)$ for all $i \ge 2$. Since $i^a \to \infty$ when $i \to \infty$, we immediately deduce that there exist $C > 0$ and $\lambda_1 > d_1$ such that $\varphi$ satisfies $L\varphi \le -\lambda_1 \varphi + C$. Theorem 2.3 now implies that the process is $\xi_1$-positive recurrent, and Theorem 2.4 implies that the domain of attraction of the minimal quasi-stationary distribution contains $D_\varphi$.

Example 2. We consider now the case where the birth and death rates are constant for all $i \ge 2$, that is $b_i = b > 0$ and $d_i = d > 0$, where $b < d$ are positive constants. We assume that $(\sqrt{d} - \sqrt{b})^2 > d_1$, and the value of $b_1 > 0$ can be chosen arbitrarily. Using the same function as in the previous example, that is $\varphi(i) = (d/b)^{i/2}$, we obtain $L\varphi(i) = -(\sqrt{d} - \sqrt{b})^2\, \varphi(i)$ for all $i \ge 2$. In particular, there exist $C > 0$ and $\lambda_1 > d_1$ such that $\varphi$ satisfies $L\varphi \le -\lambda_1 \varphi + C$. Once again, we deduce from Theorem 2.3 that the process is $\xi_1$-positive recurrent, which was already known in this case (see [23, eq. (6.6)]). We also deduce the following new result from Theorem 2.4: the domain of attraction of the minimal quasi-stationary distribution contains the set $D_\varphi$.

Example 3. In the two previous examples, the birth and death rates are nondecreasing and proportional to each other. This is coincidental and is only useful to get straightforward calculations. The aim of the present example is to illustrate the criterion in a particular case without monotonicity or proportionality between the birth and death rates: we choose $b_i = |\sin(i\pi/2)|\, i + 1$ together with a death rate sequence growing fast enough that, for all $i \ge 2$, the drift inequality $L\varphi \le -\lambda_1 \varphi + C$ holds for a suitable $\varphi$. As above, we deduce that the process is $\xi_1$-positive recurrent and that the domain of attraction of the minimal quasi-stationary distribution contains the corresponding set $D_\varphi$.

The end of this section is dedicated to the proof of Theorems 2.3 and 2.4.
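The drift identity used in Examples 1 and 2 can be checked numerically. The sketch below (our code) verifies, for Example 1, that $L\varphi(i)/\varphi(i) = -i^a(\sqrt{d} - \sqrt{b})^2$ for $i \ge 2$ with $\varphi(i) = (d/b)^{i/2}$, the Lyapunov function named above:

```python
import math

def drift_ratio(b, d, a, i):
    """Return L phi(i) / phi(i) for b_i = b * i**a, d_i = d * i**a and
    phi(i) = (d/b)**(i/2); the text's computation gives
    -i**a * (sqrt(d) - sqrt(b))**2 for i >= 2."""
    r = math.sqrt(d / b)
    phi = lambda j: r ** j   # phi(0) = 0 only matters at i = 1, unused here
    lphi = b * i**a * (phi(i + 1) - phi(i)) + d * i**a * (phi(i - 1) - phi(i))
    return lphi / phi(i)

b, d, a = 1.0, 2.0, 1.0
for i in (2, 5, 10, 50):
    expected = -i**a * (math.sqrt(d) - math.sqrt(b)) ** 2
    assert abs(drift_ratio(b, d, a, i) - expected) < 1e-9
# The ratio tends to -infinity, so L phi <= -lambda_1 phi + C holds for
# any lambda_1 > d_1 once C absorbs the finitely many remaining states.
```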
Lemma 2.5. We assume that there exist $\lambda_1 > \xi_1$, $C > 0$ and $\varphi : \mathbb{N} \to \mathbb{R}_+$ such that $\varphi(i)$ goes to infinity when $i \to \infty$ and $L\varphi \le -\lambda_1 \varphi + C$. Then there exist a constant $i_0 \ge 1$ and $\alpha > 0$ such that $\varphi(i) \ge \alpha\, Q_i(\xi_1)$ for all $i \ge i_0$.

Proof. For all $i \ge i_0$, we have $\varphi(i) \ge 1$ and $L\varphi(i) \le -\lambda_1 \varphi(i)$. Hence, since $Q_{i_0}(\xi_1) > 0$ (see [22, eq. (2.13)]), replacing $\varphi$ by $Q_{i_0}(\xi_1)\,\varphi$, we can assume without loss of generality that $\varphi(i_0) \ge Q_{i_0}(\xi_1)$. Our aim is now to prove that, for all $i \ge i_0$, $\varphi(i) \ge Q_i(\xi_1)$; because of the change made to the function $\varphi$, this implies the inequality claimed in the lemma with the constant $\alpha = 1/Q_{i_0}(\xi_1)$. Assume the contrary: then there exist $i_1 \ge i_0$ and some $x$ close enough to $\xi_1$ such that $\varphi(i_1 + 1) < Q_{i_1+1}(x)$. This is feasible, since $x \mapsto Q_i(x)$ is a polynomial function of $x$ and is therefore continuous in $x$ for any fixed $i$. Then the function $\varphi_x : \mathbb{N} \to \mathbb{R}_+$ defined by $\varphi_x(i) = Q_i(x)$ satisfies $\varphi(i_1 + 1) < \varphi_x(i_1 + 1)$. Let us now prove that this inequality extends to any $j \ge i_1 + 1$. Indeed, using the equality $L\varphi_x = -x\,\varphi_x$ and the inequality $L\varphi \le -\lambda_1 \varphi$, we can compare, for all $j > i_1 + 1$, the successive increments of $\varphi$ and $\varphi_x$. We deduce that the gap between the two functions cannot close, and thus $\varphi(j) < \varphi_x(j)$ for all $j \ge i_1 + 1$.
Lemma 2.6. Let $\varphi : \mathbb{N} \to \mathbb{R}_+$ be a function such that $\varphi(0) = 0$, $\varphi(1) = 1$ and $L\varphi \le 0$. Then $\varphi(i) \ge 1$ for all $i \ge 1$.

Proof. Under our assumption, $(\varphi(X_t))_{t\ge0}$ is a supermartingale. As a consequence, for all $i \ge 1$, $\varphi(i) \ge \mathbb{E}_i(\varphi(X_{T_1}))$. But $T_1 \le T_0 < \infty$ almost surely and $X_{T_1} = 1$ almost surely, so that $\varphi(i) \ge \varphi(1) = 1$.

Proof of Theorem 2.3. The main difficulty is to prove that the minimal quasi-stationary distribution $\rho_{\xi_1}$ for $X$ exists and that
$$\sum_{i\ge1} \rho_{\xi_1}(i)\,\varphi(i) < +\infty. \tag{2.5}$$
Once this is proved, Lemma 2.5 implies that $\sum_{i\ge1} \rho_{\xi_1}(i)\,Q_i(\xi_1) < +\infty$, which is a sufficient condition for $X$ to be $\xi_1$-positive recurrent (see [23, Theorem 5.2]). Let us prove that (2.5) holds. For any $M \ge 1$, let us denote by $(P^M_t)_{t\ge0}$ the semi-group of the process $X^M$ evolving in $\{0, 1, \dots, M\}$ and defined as the process $X$ stopped at $T_M = \inf\{t \ge 0,\ X_t = M\}$. We also define $\varphi_M$ by $\varphi_M(M) = 0$ and $\varphi_M(i) = \varphi(i)$ for $i \in \{0, 1, \dots, M-1\}$. Now, denoting by $L^M$ the generator of the stopped process $X^M$ and setting $\phi(i) = 1_{i\ge1}$, we obtain a drift inequality for $\varphi_M$ under $L^M$. Hence, using the Kolmogorov equations for the finite state space continuous-time Markov chain $X^M$, we deduce an exponential bound on $P^M_t \varphi_M$.
This implies that, for any $t \ge 0$, we obtain a bound on $\mathbb{E}_1(\varphi_M(X^M_t))$, since $X^M_0 = 1$ under $\mathbb{P}_1$. Now, by dominated convergence and by monotone convergence (letting $M$ tend to $\infty$), we finally deduce that the analogous bound holds for the process $X$ itself, for all $t \ge 0$. The first consequence of this inequality is that the process $X_t$ conditioned on the event $\{X_t \ne 0\}$ does not diverge to infinity. As a consequence, $\xi_1 > 0$ and there exists a minimal quasi-stationary distribution $\rho_{\xi_1}$ for $X$ (see [22, Theorem 4.1]). In particular, $X_t$ conditioned on $\{X_t \ne 0\}$ converges in law to $\rho_{\xi_1}$. Hence, we deduce from the above inequality that, for any $K \ge 0$, the truncated sum $\sum_{i\le K} \rho_{\xi_1}(i)\,\varphi(i)$ is bounded uniformly in $K$. By monotone convergence, we obtain by letting $K$ tend to $\infty$ that (2.5) holds.

Proof of Theorem 2.4. Let $X$ be a $\xi_1$-positive recurrent birth and death process with minimal quasi-stationary distribution $\rho_{\xi_1}$. We prove that the domain of attraction of $\rho_{\xi_1}$ contains the set of probability measures $D$. Once this is proved, the second assertion of Theorem 2.4 follows immediately from Lemma 2.5.
Approximation of the minimal quasi-stationary distribution
This section is devoted to the study of the ergodicity and the convergence of a Fleming-Viot type particle system.
Fix $N \ge 2$ and let us describe precisely the dynamics of this system with $N$ particles, which we denote by $(X^1, X^2, \dots, X^N)$. The process starts at a position $(X^1_0, X^2_0, \dots, X^N_0) \in (\mathbb{N}^*)^N$ and evolves as follows:
- the particles $X^i$, $i = 1, \dots, N$, evolve as independent copies of the birth and death process $X$ until one of them hits $0$; this hitting time is denoted by $\tau_1$;
- then the (unique) particle hitting $0$ at time $\tau_1$ jumps instantaneously onto the position of a particle chosen uniformly among the $N - 1$ remaining ones; this operation is called a rebirth;
- because of this rebirth, the $N$ particles lie in $\mathbb{N}^*$ at time $\tau_1$; then the $N$ particles evolve as independent copies of $X$, and so on.
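A minimal simulation sketch of this particle system follows (our illustrative code, not the authors'; the rate functions are passed as callables):

```python
import random

def fleming_viot(b, d, init, t_max, rng=random):
    """Simulate the N-particle Fleming-Viot system described above and
    return the particle positions at time t_max.

    b, d : callables giving the birth/death rates (b(0) = d(0) = 0).
    init : list of N starting states in {1, 2, ...}.
    """
    x = list(init)
    N, t = len(x), 0.0
    while True:
        rates = [b(xi) + d(xi) for xi in x]   # jump rate of each particle
        t += rng.expovariate(sum(rates))
        if t >= t_max:
            return x
        # the particle that jumps is chosen proportionally to its jump rate
        i = rng.choices(range(N), weights=rates)[0]
        if rng.random() < b(x[i]) / rates[i]:
            x[i] += 1
        else:
            x[i] -= 1
            if x[i] == 0:
                # rebirth: jump onto a particle chosen uniformly among the others
                x[i] = x[rng.choice([k for k in range(N) if k != i])]

# example: 100 particles, linear rates b_i = i, d_i = 2i, horizon t = 50
positions = fleming_viot(lambda i: i, lambda i: 2 * i, [5] * 100, 50.0)
```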
We denote by $(\tau_n)_{n\ge1}$ the increasing sequence of rebirth times. Since the rate at which rebirths occur is uniformly bounded above by $N d_1$, $\lim_{n\to\infty} \tau_n = +\infty$ almost surely. As a consequence, the particle system $(X^1_t, X^2_t, \dots, X^N_t)_{t\ge0}$ is well defined for any time $t \ge 0$ in an incremental way, rebirth after rebirth (see Figure 1 for an illustration of this construction with $N = 2$ particles). This Fleming-Viot type system was introduced by Burdzy, Holyst, Ingermann and March in [5] and studied in [6], [12], [27], [13] for multidimensional diffusion processes. The study of this system when the underlying Markov process $X$ is a continuous-time Markov chain on a countable state space was initiated in [11] and followed by [4], [2], [14], [3] and [9]. We also refer the reader to [15], where general considerations on the link between the study of such systems and front propagation problems are presented.
We emphasize that, because of the rebirth mechanism, the particle system $(X^1, X^2, \dots, X^N)$ evolves in $(\mathbb{N}^*)^N$. For any $t \ge 0$, we denote by $\mu^N_t$ the empirical distribution of $(X^1, X^2, \dots, X^N)$ at time $t$, defined by
$$\mu^N_t = \frac{1}{N} \sum_{i=1}^N \delta_{X^i_t},$$
where $\mathcal{M}_1(\mathbb{N}^*)$ is the set of probability measures on $\mathbb{N}^*$. A general convergence result obtained in [26] ensures that, if $\mu^N_0 \to \mu_0$, then $\mu^N_t$ converges in law, for any fixed $t \ge 0$, to the conditional distribution $\mathbb{P}_{\mu_0}(X_t \in \cdot \mid t < T_0)$. The generality of this result does not extend to the long-time behaviour of the particle system, which is the subject of the present study. We provide a sufficient criterion ensuring that the process $(\mu^N_t)_{t\ge0}$ is ergodic. Denoting by $\mathcal{X}^N$ its empirical stationary distribution (a random measure whose law is the stationary distribution of $\mu^N$), our criterion also implies that $\mathcal{X}^N$ converges in law, as $N \to \infty$, to $\rho$, where $\rho$ is the minimal quasi-stationary distribution of the birth and death process $X$. Our result applies (1) to birth and death processes with a unique quasi-stationary distribution (such as logistic birth and death processes) and (2) to birth and death processes with a minimal quasi-stationary distribution satisfying an explicit Lyapunov condition (fulfilled for instance by linear birth and death processes). These two different conditions are summarized in Assumptions H1 and H2 below.
Assumption H1. There exist a function $\varphi : \mathbb{N} \to \mathbb{R}_+$ and two constants $C > 0$ and $\lambda_1 > d_1$ such that $\varphi(i)$ goes to infinity when $i \to \infty$ and $L\varphi \le -\lambda_1 \varphi + C$.

Assumption H2. The birth and death process $X$ admits a unique quasi-stationary distribution ($S < +\infty$).
Theorem 3.1. Assume that Assumption H1 or Assumption H2 is satisfied. Then, for any $N > \frac{\lambda_1}{\lambda_1 - d_1}$ under H1 and any $N \ge 2$ under H2, the measure-valued process $(\mu^N_t)_{t\ge0}$ is ergodic, which means that there exists a random measure $\mathcal{X}^N$ such that $\mu^N_t$ converges in law to $\mathcal{X}^N$ when $t \to \infty$. If H1 holds, then moreover $\mathbb{E}(\mathcal{X}^N(\varphi))$ is bounded uniformly in $N$. If Assumption H1 or H2 is satisfied, then $\mathcal{X}^N$ converges in law, as $N \to \infty$, to $\rho$, where $\rho$ is the minimal quasi-stationary distribution of $X$.
1. Assumption H1 is the Lyapunov criterion used in Theorem 2.3 to ensure $\xi_1$-positivity (and hence the existence of a minimal quasi-stationary distribution). This assumption also implies that the conditions of Theorem 2.4, where we determine a subset of the domain of attraction of the quasi-stationary distribution, are satisfied. For instance, the birth and death processes of Examples 1, 2 and 3 in the previous section satisfy Assumption H1.
2. Assumption H2 is satisfied for processes that come back quickly from infinity to compact sets, such as the logistic birth and death process (where $b_i = b\,i$ and $d_i = d\,i + c\,i(i-1)$ for all $i \ge 1$, with $b, c, d > 0$). Note that, in this particular example, an easy calculation shows that Assumption H1 is also satisfied with $\varphi(i) = 2^i$. However, Assumption H2 is useful in any situation where it is easy to check that $S < \infty$ but difficult to find an explicit Lyapunov function satisfying Assumption H1. By contrast, for the pure drift birth and death process we cannot apply Theorem 2.3 on $\xi_1$-positivity and, in fact, it is known that this process is not $\xi_1$-positive recurrent (see [23]). As a consequence, the additional difficulty is not a technical one and the following proof cannot work in the pure drift situation. We emphasize that Theorem 3.1 for pure drift birth and death processes remains an open problem. See for instance [3] and the numerical investigation in [18] for more details.
Since the proof of Theorem 3.1 differs according to whether one assumes H1 or H2, it is split into two subsections: in Subsection 3.1, we prove the theorem under Assumption H1 and, in Subsection 3.2, we prove the result under Assumption H2.
Proof under Assumption H1: exponential ergodicity via a Foster-Lyapunov criterion
Step 1. Proof of exponential ergodicity by a Foster-Lyapunov criterion.

We define the function $f(\mu) = \mu(\varphi)$, where $\varphi$ is the Lyapunov function of Assumption H1. Fix $N \ge 2$ and let us express the infinitesimal generator $\mathcal{L}^N$ of the empirical process $(\mu^N_t)_{t\ge0}$ applied to $f$ at a point $\mu \in \mathcal{M}_1(\mathbb{N}^*)$ given by $\mu = \frac{1}{N}\sum_{i=1}^N \delta_{x_i}$, where $(x_1, \dots, x_N) \in (\mathbb{N}^*)^N$. In order to shorten the notation, we introduce, for any $y \in \mathbb{N}^*$, the probability measure obtained from $\mu$ after a rebirth onto the position $y$. For a fixed $N > \frac{\lambda_1}{\lambda_1 - d_1}$ and any constant $k > 0$, the sublevel set $\{\mu : f(\mu) \le k\}$ is a suitable small set and the process is irreducible (this is an easy consequence of the irreducibility of the birth and death process $X$). Thus, using the Foster-Lyapunov criterion of [21, Theorem 6.1, p. 536] (see also [16, Proposition 1.4] for a simplified account of the subject), we deduce that the process $\mu^N$ is exponentially ergodic and, denoting by $\mathcal{X}^N$ a random measure distributed according to its stationary distribution, we also obtain the moment bound (3.2) on $\mathbb{E}(\mathcal{X}^N(\varphi))$, uniformly in $N$. This concludes the proof of the first part of Theorem 3.1.
Step 2. Convergence to the minimal QSD.

Since $\varphi(i)$ goes to infinity when $i \to \infty$, we deduce from (3.2) that the family of random measures $(\mathcal{X}^N)_N$ is tight. In particular, the family admits at least one limiting random probability measure $\mathcal{X}$, which means that $\mathcal{X}^N$ converges in law to $\mathcal{X}$, up to a subsequence.
Let $\mu^N_t$ be the random position at time $t$ of the particle system with initial (random) distribution $\mathcal{X}^N$. On the one hand, the stationarity of $\mathcal{X}^N$ implies that $\mu^N_t \sim \mathcal{X}^N$ for all $t \ge 0$, and thus $\mu^N_t$ converges in law to $\mathcal{X}$ along the same subsequence. On the other hand, the general convergence result of [26] implies that $\mu^N_t$ converges in law to $\mathbb{P}_{\mathcal{X}}(X_t \in \cdot \mid t < T_0)$. As an immediate consequence, $\mathcal{X}$ and $\mathbb{P}_{\mathcal{X}}(X_t \in \cdot \mid t < T_0)$ have the same law. Using Theorem 2.4, we deduce that $\mathcal{X}$ belongs to the domain of attraction of the minimal QSD $\rho$ almost surely, that is, $\lim_{t\to\infty} \mathbb{P}_{\mathcal{X}}(X_t \in \cdot \mid t < T_0) = \rho$ almost surely. Thus the random measure $\mathcal{X}$ converges in law to the deterministic measure $\rho$, which implies that $\mathcal{X} = \rho$ almost surely.
In particular, $\rho$ is the unique limiting probability measure of the family $(\mathcal{X}^N)_N$, which ends the proof of Theorem 3.1 under Assumption H1.
Proof under Assumption H2: exponential ergodicity by a Dobrushin coefficient argument
Fix $N \ge 2$ and let us prove that the process is exponentially ergodic. Under Assumption (H2), it is well known (see for instance [19]) that the process $X$ comes back in finite time from infinity to $1$, meaning that the hitting time of $1$ is bounded in probability uniformly over the initial condition. Since the particles of a Fleming-Viot type system are independent up to the first rebirth time, we deduce that the whole particle system returns to a fixed compact set in a time bounded in probability, uniformly over its initial position. This implies that the FV process is exponentially ergodic.
Let us now denote by $\mathcal{X}^N$ the empirical stationary distribution of the system $(X^1, \dots, X^N)$, for each $N \ge 2$. Theorem 2.2 implies that there exists $\gamma > 0$ such that, for any initial distribution $\mu_0$ and any bounded function $f$, the conditioned expectation $\mathbb{E}_{\mu_0}(f(X_t) \mid t < T_0)$ converges to $\rho(f)$ at exponential rate $\gamma$, uniformly in $\mu_0$. But, for any $t \ge 0$, [26] implies that $\mathbb{E}(\mu^N_t(f))$ converges to this conditioned expectation when $N \to \infty$. As a consequence, for any $\varepsilon > 0$, there exist $t_\varepsilon$ and $N_\varepsilon$ such that $|\mathbb{E}(\mu^N_t(f)) - \rho(f)| \le \varepsilon$ for all $t \ge t_\varepsilon$ and $N \ge N_\varepsilon$. But $\mu^N_t$ converges in law to $\mathcal{X}^N$ as $t \to \infty$, so that $|\mathbb{E}(\mathcal{X}^N(f)) - \rho(f)| \le \varepsilon$ for all $N \ge N_\varepsilon$. This inequality being true for any $\varepsilon > 0$, this concludes the proof of Theorem 3.1 under Assumption (H2).
4 Numerical simulation of the Fleming-Viot type particle system

In this section, we present numerical simulations of the Fleming-Viot particle system studied in Section 3. Namely, we focus on the distance in total variation norm between the expectation of the empirical stationary distribution (i.e. $\mathbb{E}(\mathcal{X}^N)$) and the minimal quasi-stationary distribution of the underlying Markov process $X$, when $N$ goes to infinity. This means that we aim at studying the bias of the approximation method.
We start with the linear birth and death process case in Subsection 4.1. This is one of the rare situations where an explicit computation of the minimal quasi-stationary distribution can be performed (see for instance [20]). In Subsection 4.2, we provide the results of numerical simulations in the logistic birth and death case.
The linear birth and death case
We assume in this section that $b_i = i$ and $d_i = 2i$ for all $i \ge 0$. This is a sub-case of Example 1, so one can apply Theorem 3.1: the empirical stationary distribution $\mathcal{X}^N$ of the process exists and converges in law, when the number $N$ of particles goes to infinity, to the minimal quasi-stationary distribution $\rho$ of the process, which is known to be the geometric distribution $\rho(n) = (1 - b/d)(b/d)^{n-1} = 2^{-n}$, $n \ge 1$ (see [20]). The results of the numerical estimations of $\|\mathbb{E}(\mathcal{X}^N) - \rho\|_{TV}$ for different values of $N$ (from $2$ to $10^4$) are reported in Table 1. One interesting point is the confirmation that $\mathbb{E}(\mathcal{X}^N)$ is a biased estimator of $\rho$. A second interesting point is that the bias decreases quickly when $N$ increases. To our knowledge, there exists today no theoretical justification of this fact, despite its practical implications.
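Given the explicit geometric form of $\rho$ stated above, the total variation distance reported in Table 1 can be computed as in the following sketch; the empirical estimate of $\mathbb{E}(\mathcal{X}^N)$ (e.g. averaged over long runs of the particle system) is assumed as input, and truncation of the state space is our simplification:

```python
def minimal_qsd_linear(b, d, n_max):
    """Minimal QSD of the linear birth and death process b_i = b*i, d_i = d*i
    (b < d): the geometric law rho(n) = (1 - b/d) * (b/d)**(n - 1)."""
    q = b / d
    return [(1.0 - q) * q ** (n - 1) for n in range(1, n_max + 1)]

def tv_distance(mu, nu):
    """Total variation distance between two probability vectors of equal
    length (states beyond n_max are neglected, a truncation assumption)."""
    return 0.5 * sum(abs(m - n) for m, n in zip(mu, nu))

rho = minimal_qsd_linear(1.0, 2.0, n_max=50)   # rho(n) = 2**(-n) here
# tv_distance(estimated_mean_of_XN, rho) would reproduce a Table 1 entry
```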
Figure 1: One path of a Fleming-Viot system with two particles.
Remark 3. The pure drift birth and death process ($b_i = b$ and $d_i = d$ for all $i \ge 1$, where $b < d$ are two positive constants) satisfies neither Assumption H1 nor Assumption H2. Note that this process is the same as in Example 2 but does not satisfy $(\sqrt{d} - \sqrt{b})^2 > d_1$.
Figure 2: Estimated value of the minimal quasi-stationary distribution $\rho(n)$ for a logistic birth and death process. | 6,819.4 | 2014-04-26T00:00:00.000 | [
"Mathematics"
] |
Energy balance following diets of varying fat content: metabolic dysregulation in a rodent model of spinal cord contusion
Abstract Within the spinal cord injured (SCI) population, metabolic dysfunction may be exacerbated. Models of cord injury coupled with metabolic stressors have translational relevance for understanding disease progression in this population. In the present study, we used a rat model of thoracic SCI at level T10 (tSCI) and administered diets comprised of either 9% or 40% butterfat to create a unique model system for understanding the physiology of weight regulation following cord injury. tSCI rats that recovered on chow for 28 days had reduced body mass, lean mass, and fat mass, but no differences in the percentage of lean or fat mass composition. Following 12 weeks on either low-fat diet (LFD) or high-fat diet (HFD), tSCI rats maintained on LFD did not gain weight at the same rate as tSCI animals maintained on HFD. tSCI-LFD rats had reduced feed conversion efficiency in comparison to Sham-LFD, whereas tSCI-HFD rats were equivalent to Sham-HFD rats. Although tSCI rats still maintained lower lean body mass, by the end of the study HFD-fed rats had a higher body fat percentage than LFD-fed rats. Macronutrient selection testing demonstrated that tSCI rats had a significant preference for protein over Sham rats. Analysis of metabolic cage activity showed that tSCI rats had elevated energy expenditure despite reduced locomotor activity. Muscle triglycerides and cholesterol were reduced only in tSCI-LFD rats. These data suggest that consumption of HFD by tSCI rats alters the trajectory of metabolic dysfunction in the context of spinal cord disease progression.
Introduction
Innovations in specialized healthcare have greatly improved the longevity of persons with injury to the spinal cord (Strauss et al. 2006). As the average age of this injured subpopulation continues to rise, the potential for the various comorbidities of Metabolic Syndrome (MetS) also increases (Maruyama et al. 2008). In fact, persons with SCI are at increased risk for dyslipidemia, cardiovascular disease, and glycemic dysregulation, all contributing factors to MetS (Akkurt et al. 2017; Alves et al. 2017; Berg-Emons et al. 2008). Development of MetS can complicate long-term care for these patients. Therefore, understanding how and why MetS develops in this vulnerable population is important for improving quality of life for these individuals.
Obesity is a disorder of energy balance in which intake and output are mismatched, resulting in increased storage of energy as adipose tissue. The rate of obesity among subpopulations of SCI individuals varies from 25 to 57% (Gater 2007; Gorgey et al. 2014; Gater et al. 2019), which is somewhat higher than the national range of 22-35% (Ogden et al. 2013). Reduced ability to exercise and increased consumption of calories may contribute to this increased risk. Altered neural connectivity, hormones, and inflammatory factors may also add to this disparity, but no specific factors have been identified to date that explain the elevated risk for MetS. SCI individuals suffer from increased incidence of two subcategories of obesity: 1) "visceral obesity," or obesity primarily centered around the abdominal region, and 2) "sarcopenic obesity," or obesity due to reduced lean mass or quality (Stenholm et al. 2008). Visceral obesity has great relevance to the obesity epidemic because of its high association with insulin resistance, type 2 diabetes and cardiovascular disease (Bays et al. 2013). Sarcopenic obesity is the result of significant muscle atrophy due to reduced ability to ambulate and exercise (Stenholm et al. 2008). One study estimates that following SCI anywhere from 27 to 65% of the muscle fibers atrophy in the first 6-18 months (Castro et al. 2000). This loss of muscle mass also contributes to reduced basal metabolic rate and energy expenditure (Castro et al. 2000). Methods to estimate energy expenditure following spinal injury suggest that resting metabolic rate is 14-27% lower in persons with SCI compared to those without injury (Buchholz and Pencharz 2004), and lower still in injuries with complete transections located higher in the cord (Mollinger et al. 1985).
Along with the obvious reduction in energy output in persons with SCI, caloric intake in persons with SCI may differ from that of noninjured individuals. Recent work concerning food intake patterns of persons with SCI reports increased total food intake in comparison to individuals who are immobile for other reasons (Pellicane et al. 2013; Farkas et al. 2019). Food intake pattern analysis of overweight or obese persons with SCI suggests that overall calories are distributed within the acceptable range for energy needs but that there is increased consumption of calories from fat (Silveira et al. 2019).
In the present work, we used a rodent model of thoracic spinal contusion at level T10 and then administered two diets (9% fat vs. 40% fat) to create a unique model system to understand the physiology of body-weight regulation following cord injury. The two diets were equivalent in protein content but varied in fat and carbohydrate content. We hypothesized that tSCI rats would overconsume the high-fat diet (HFD) and that this would exacerbate the metabolic dysfunction (i.e., obesity) of the SCI animals. We report measures of body weight gain, body composition, food intake, macronutrient preference and metabolic rate to understand how these diets with two different concentrations of fat affect long-term injury recovery.
Male Long Evans rats were initially multiply housed and maintained in a room on a 12/12-h light/dark cycle at 25°C and 50-60% humidity, with ad libitum access to water and standard chow (#8640, Envigo, 3.0 kcal/g; 17% fat, 54% carbohydrate, 29% protein). Rats were acclimated to the vivarium for 1 week prior to injury. Rats were assigned to either Sham-laminectomy (Sham) (N = 17) or thoracic spinal cord injury (tSCI) (N = 22) groups in a counterbalanced fashion based on body weight on the day prior to the start of surgery. Surgery was performed and animals were allowed to recover for 28 days following surgery while consuming standard chow. Rats were singly housed from the time of surgery to the end of the study. After 28 days, rats were switched to one of two protein-matched diets: high-fat diet (HFD) (#D03082706, Research Diets, New Brunswick, NJ, 4.54 kcal/g; 40% fat, 45% carbohydrate, 15% protein) or low-fat diet (LFD) (#D03082705, Research Diets, New Brunswick, NJ, 3.81 kcal/g; 9% fat, 76% carbohydrate, and 15% protein) for the remaining 12 weeks of the study, for a total of 16 weeks. The numbers of animals in each group at the end of the study were: Sham-LFD (N = 8), tSCI-LFD (N = 10), Sham-HFD (N = 8) and tSCI-HFD (N = 8). One Sham rat died of unknown causes after 2 months. Some animals were euthanized because of autophagic behavior resulting from significant neuropathic pain, which is common in Long Evans rats following SCI (Mills et al. 2001; He and Nan 2016).
Surgical procedures
All surgical procedures were performed on animals that were deeply anesthetized using 5% isoflurane with a gradual decrease to 2.5%. tSCI surgeries were performed as previously described (Scheff et al. 2003). During the surgery, the animal was placed on a heating pad set to 41°C. The heating pad was removed during the impact portion of the surgery and replaced for the suture portion of the surgery. Incisions were made on the animals' dorsal skin and overlying muscles and the vertebral column was exposed. A laminectomy was performed at thoracic level 10 (T10) and the vertebral column was stabilized using Anderson Forceps that grasp the ventral surface of the lateral spinous processes at vertebral levels T9 and T11. Using an Infinite Horizon Spinal Impactor Device (Precision Systems and Instrumentation, LLC, Fairfax Station, VA), moderate contusion injury was delivered to the T10 spinal cord using 150 kdynes of force with a 1 sec dwell. The area was inspected for bruising and the digital trace was observed to ensure there was no bone obstruction and therefore an appropriate injury. The dura mater remained closed for the entire duration of SCI surgeries. Immediately following tSCI, the overlying muscles were sutured and the skin was securely closed using stainless steel wound clips.
Sham-laminectomy surgery consisted of incisions to the animals' dorsal skin exposing the musculature and vertebral column. A laminectomy was performed at the T10 vertebra, and then the overlying muscles were sutured and the skin securely closed using stainless steel wound clips.
Postoperative care
Animals received one dose of buprenorphine SR (Sustained Release; 1.0-1.2 mg/kg SQ, ZooPharm, Laramie, WY) and, 72 h later, standard buprenorphine for postsurgical pain management (0.025 mg/kg, twice daily for a period of 2 days, then as needed). Animals also received (1) the antibiotic Naxcel (5 mg/kg SQ, Zoetis, NJ) once daily for a period of 5 days, and (2) 3-5 mL of 0.9% saline, twice daily for a period of 3 days, to ensure hydration. Beginning the day of spinal injury, each rat's urinary bladder was manually expressed two to three times daily until the animal recovered the ability to void its bladder. The T10 contusion disrupts the supraspinal pathways that are responsible for bladder voiding. In our hands, control of neurogenic bladder function returns in approximately 14 days. As a rule, bladder care was discontinued for an animal when it exhibited an already-voided bladder on two consecutive bladder care sessions.
Hindlimb locomotor function assessment
Hindlimb locomotor function was assessed using the Basso, Beattie, and Bresnahan (BBB) open-field locomotor scale (Basso et al. 1995). BBB scores were initially assessed on days 1, 7, 14 and 28 postinjury. Only animals that achieved a score of 1 or lower on day 1 postinjury were allowed to continue in the study. Following diet induction on day 28, BBB was tested at weeks 8, 12, and 16 on diet. Briefly, the rat was placed into the open field and allowed to move freely for approximately 4 min. Movement and articulation of the joints of each hindlimb were scored using a scoring sheet. The 21-point BBB Open-Field Rating Scale was used for the determination of intact locomotor behavior. When an animal reached the score of 21, it was no longer tested. For each animal, the locomotor scores for both hindlimbs were averaged to produce one score per test session.
Body weight and composition
Following surgery, animals were weighed daily for the first 14 days and then weekly thereafter. Lean and fat mass were analyzed using Echo Magnetic Resonance Imaging (echoMRI) (EchoMedical Systems, Houston, TX) at weeks 4 (prior to start of diet), 8, 12, and 16.
Blood collection and measurements
During postinjury week 12, tail vein blood collection was performed. This time point was chosen prior to the complex TSE scheduling. Tail bleeds occurred within 2 h of lights-on to obtain plasma for ad libitum-fed analyses. Food was then removed from the animals for 24 h, and the following morning, within 2 h of lights-on, an additional fasting plasma sample was procured. Non-esterified fatty acids (NEFA), phospholipids, β-hydroxybutyrate, total plasma triglycerides, and total plasma cholesterol were measured. During postoperative week 15, animals were fasted for ~6 h following lights-on. Baseline blood glucose was measured using an AccuChek glucometer. The remaining analytes were measured from terminal trunk blood that was obtained after food was removed, 6-8 h after lights-on (Table 1).
Feed conversion efficiency (FCE) calculation
FCE was calculated as the change in body weight over the first week on diet divided by the kilocalories consumed during that week.
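In code, the calculation is a single ratio (the function name and example values are illustrative, not from the study):

```python
def feed_conversion_efficiency(delta_weight_g, kcal_consumed):
    """FCE: grams of body-weight change over the first week on diet
    per kcal consumed during that week."""
    return delta_weight_g / kcal_consumed

# e.g. a rat gaining 35 g while eating 420 kcal in week one:
print(feed_conversion_efficiency(35.0, 420.0))  # ~0.083 g/kcal
```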
Macronutrient selection testing
During week 12 postinjury, three pure macronutrient diets (Harlan Teklad; TD.02521 [carbohydrate], TD.02522 [fat], and TD.02523 [protein]) were presented in separate containers simultaneously for 4 days. Animals were acclimatized to the new diets during the first 48 h. The total amount of each macronutrient consumed during the final 24-h period was converted to kcal and then reported as a percentage for each macronutrient.
Metabolic system monitoring
During postinjury weeks 13-15, rats were placed in metabolic cages (AccuScan Instruments Inc, Columbus, OH) in a staggered fashion for about 64 h. The first 24 h were considered an acclimatization period and only the data collected during the second 24 h were used in the calculations. Rats were housed individually in an acrylic cage (16 × 24 × 17 cm) equipped with an oxygen sensor to measure oxygen consumption (VO2) and infrared beams to determine motor activity. VO2 was measured for 2 min at 10-min intervals using a zirconia oxygen sensor. This system also measured carbon dioxide production (VCO2). Respiratory quotient was calculated as RQ = VCO2/VO2 (Evans et al. 2004). Heat production was derived from the formula (4.33 + 0.67 × RQ) × VO2 × weight (grams) × 60. Energy expenditure was calculated post hoc according to Weir using the following equation: total EE (kJ h⁻¹) = 16.3 × VO2 (L h⁻¹) + 4.57 × VCO2 (L h⁻¹) (Weir 1949). Animal motor activity was determined using infrared light beams mounted in the cages along the X, Y, and Z axes. Precise measurements of food and liquid consumption were taken manually every 24 h.
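For clarity, these derived quantities can be expressed as the sketch below; the parenthesis grouping of the heat formula is our reconstruction of the garbled original, while the Weir coefficients are as printed above:

```python
def respiratory_quotient(vco2, vo2):
    """RQ = VCO2 / VO2, volumes measured over the same interval."""
    return vco2 / vo2

def weir_energy_expenditure(vo2_l_per_h, vco2_l_per_h):
    """Weir (1949): total EE (kJ/h) = 16.3 * VO2 (L/h) + 4.57 * VCO2 (L/h)."""
    return 16.3 * vo2_l_per_h + 4.57 * vco2_l_per_h

def heat_production(rq, vo2, weight_g):
    """Heat formula as given in Methods, with the grouping reconstructed as
    (4.33 + 0.67 * RQ) * VO2 * weight (g) * 60."""
    return (4.33 + 0.67 * rq) * vo2 * weight_g * 60.0
```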
Tissue harvest
During week 16 postinjury, rats were euthanized by conscious decapitation starting at 6 h following the onset of the light cycle. Tissues excised included terminal plasma, liver, and gastrocnemius. Tissue was flash frozen with methylbutane on dry ice and then stored at −80°C until further processing.
Statistical analyses
All statistical analyses were performed using GraphPad Prism version 7.02 (GraphPad Software, San Diego, CA). Differences between two groups were assessed using an unpaired Student's t test with two-tailed distribution. Statistical significance was determined with two-way analysis of variance followed by Tukey's post hoc test for the variables of injury and diet. To observe time-wise differences, repeated-measures two-way ANOVA (groups: Sham-LFD, Sham-HFD, tSCI-LFD, and tSCI-HFD) with Tukey post hoc test was used. All results are given as means ± SEM. Results were considered statistically significant when P < 0.05.
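The analyses were run in GraphPad Prism; for illustration only, an equivalent two-way ANOVA for the injury × diet design could be set up in Python as below (the dataframe layout and column names are our assumptions):

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def two_way_anova(df: pd.DataFrame) -> pd.DataFrame:
    """Two-way ANOVA with interaction, assuming one row per rat and columns
    'value' (outcome), 'injury' (Sham/tSCI) and 'diet' (LFD/HFD)."""
    model = smf.ols("value ~ C(injury) * C(diet)", data=df).fit()
    return anova_lm(model, typ=2)  # main effects of injury and diet + interaction
```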
Changes in body mass and composition during first 28 days
Male tSCI rats lost a significant amount of body weight during the first 7 days postinjury in comparison to Sham rats (main effect of injury, P < 0.001; main effect of time, P < 0.001; interaction time × injury, P < 0.05) (Fig. 1A). Rats having received tSCI returned to their preinjury weight by approximately 21 days postinjury (Fig. 1A).
Although the tSCI rats increased in body weight along a trajectory parallel to sham-injured animals, they did not exhibit catch-up body weight gain during their early recovery period (Fig. 1A). tSCI rats consumed fewer calories from chow during the first 28 days after injury in comparison to Sham rats (main effect of injury, P < 0.001; main effect of time, P < 0.001; interaction time × injury, P < 0.05) (Fig. 1B and C). Overall, the tSCI rats weighed less than the Sham rats (P < 0.0001) (Fig. 1D), had less lean body mass (P < 0.0001) (Fig. 1D), and less fat mass (P < 0.05) (Fig. 1D). However, the percentages of lean and fat mass normalized to body weight did not vary among the groups (Fig. 1E). Thus, as the Sham animals increased proportionately in size during this time frame, tSCI animals also grew proportionately but remained reduced in overall size. BBB testing to assess locomotor function was first performed on day 1 postinjury. For tSCI rats, the average BBB score was approximately 0 on day 1 and rose to 11 for both hindlimbs by day 28 postinjury (Fig. 1F). On day 1, Sham rats had a score of approximately 15, which was restored to 21 by day 7 postinjury (main effect of injury, P < 0.001; main effect of time, P < 0.001) (Fig. 1F).
Changes in body mass and composition during 12-week diet phase
Following the 28-day recovery period, rats were placed on either LFD or HFD (Fig. 2A). Irrespective of group, all the rats continued to gain weight throughout the 12-week time frame. LFD-fed tSCI animals continued to have a significantly reduced body weight in comparison to Sham (main effect of injury, P < 0.05; main effect of time, P < 0.0001) (Fig. 2A). HFD-fed tSCI rats increased in body weight along a trajectory similar to Sham (main effect of time, P < 0.0001) (Fig. 2A). tSCI animals on LFD displayed an attenuated body weight change from the time of diet induction to the end of the 12 weeks on diet (main effect of injury, P < 0.05; main effect of time, P < 0.0001) (Fig. 2B). However, tSCI animals placed on HFD showed body weight change similar to the Sham-HFD animals (main effect of time, P < 0.001) (Fig. 2C). This suggests that the altered macronutrient content of the HFD caused accelerated weight gain for the tSCI rats in comparison to the tSCI-LFD rats, but similar to Sham-HFD. At the end of the study, tSCI rats remained lighter than Sham, and LFD-fed animals continued to weigh less overall than HFD-fed animals (main effect of injury, P < 0.01; main effect of diet, P < 0.05) (Fig. 2D). Overall, tSCI rats continued to have less lean body mass than Sham rats (main effect of injury, P < 0.01) (Fig. 2E) following 12 weeks of LFD/HFD feeding. Although overall body fat increased over time for all animals, there was no difference in body fat as a result of diet or injury (Fig. 2F), only of time (P < 0.0001). When normalized to body weight, there was no difference in lean mass percentage by diet or injury (Fig. 2G). However, the HFD-fed Sham and tSCI rats had a higher percentage of fat in comparison to LFD-fed Sham and tSCI rats (main effect of diet, P < 0.05) (Fig. 2G). We also assessed BBB scores every 4 weeks. Whether on LFD or HFD, tSCI animals had relatively similar BBB scores that continued to increase to a final mean score of 14.5 during week 16 of the study (Fig. 2H). On the other hand, sham-injured rats received an average BBB rating of 21 by day 7 postinjury and remained at this score for the subsequent 16 weeks of the study (main effect of injury, P < 0.0001; main effect of time, P < 0.0001) (Fig. 2H).
Diet consumption
Overall, the impact of injury on food intake was diminished, such that there was only a significant effect of diet on the number of calories ingested; that is, HFD-fed animals consumed more calories than LFD-fed rats (main effect of diet, P < 0.01) (Fig. 3A). We performed a feed conversion efficiency (FCE) calculation to more closely analyze the first week on the respective new diets, and there was a significant main effect of injury (P < 0.05). We determined that the tSCI-LFD rats were not converting the consumed calories into body mass gain at the same rate as Sham-LFD rats (P < 0.05), whereas the HFD-fed tSCI rats were converting the calories into body weight at a rate equal to the Sham-HFD rats (Fig. 3B).
Metabolic cage analysis
Using metabolic cages, we determined that tSCI rats had significantly increased energy expenditure (main effect of injury, P < 0.05) (Fig. 4A). Twenty-four-hour locomotor activity was lower in tSCI animals as a result of injury (main effect of injury, P < 0.01) (Fig. 4B). During this time, food intake was reduced in the tSCI rats in comparison to Sham (main effect of injury, P < 0.05) (Fig. 4C).
Macronutrient selection test
During postinjury week 12, we performed a macronutrient selection test, offering pure fat, carbohydrate, and protein. When normalized to each animal's total kcal intake, there was no difference in the percentage of fat (Fig. 5A) or carbohydrate consumed (Fig. 5B). tSCI rats consumed a significantly greater percentage of protein than Sham rats during the test period (main effect of injury, P < 0.05) (Fig. 5C).
Plasma analytes
We compared blood obtained in the ad libitum-fed and 24-h fasted states in order to determine differences in energy utilization. We observed no differences in fed or fasted NEFA and phospholipid levels among groups (Table 1) and elevated fasting levels of β-hydroxybutyrate in HFD-fed animals, P(diet) < 0.05 (Table 1). Total triglycerides were significantly increased in HFD-fed animals in comparison to LFD-fed animals in the fasting condition, P(diet) < 0.01 (Table 1), whereas total cholesterol levels were reduced in HFD-fed animals after 24 h of fasting, P(diet) < 0.01 (Table 1). No differences among groups were identified in plasma glucose measured 15 weeks postinjury (Table 1). Fasting plasma insulin, C-peptide (a marker of insulin production), and leptin were elevated in both groups of HFD-fed rats (Sham and tSCI) in comparison to LFD-fed animals, P(diet) < 0.05 (Table 1). In order to determine whether fatty infiltration of the skeletal muscle was occurring, we measured triglycerides and cholesterol in the skeletal muscle. Whereas there were no differences in triglyceride and cholesterol content between Sham-HFD and tSCI-HFD, there was a significant reduction of triglycerides and cholesterol in tSCI-LFD in comparison to Sham-LFD, P(diet) < 0.05 (Table 1).
Discussion
Spinal cord injury has a debilitating impact on motor and sensory control of the limbs. The long-term reductions in mobility and locomotor activity negatively affect the metabolic health of injured persons (Smith and Yarar-Fisher 2016). Even beyond the altered energy expenditure complexities following SCI, changes in energy intake and utilization may contribute to the increased MetS rates in this population (Smith and Yarar-Fisher 2016). Chronic injury to the cord can also alter the functioning of the central nervous system as a whole, changing the neural connectivity within and between higher-order brain centers, including, germane to the current work, those regulating body weight (Smith and Yarar-Fisher 2016). Finally, in the context of nutrient-rich, palatable Western diets that are high in both saturated fats and carbohydrates, persons with SCI may fare worse than noninjured individuals because of the various neural and hormonal impairments associated with their injury (Smith and Yarar-Fisher 2016).
In the present work, we used a rodent model of thoracic-10 level spinal contusion, compared to Sham laminectomy-operated rats, to determine whether two macronutrient compositions, LFD versus HFD, would alter metabolic outcomes, with the various end-point measures culminating 16 weeks postinjury (12 weeks on the special diet). We probed energy intake and energy expenditure by metabolic monitoring. We predicted that under HFD conditions, SCI rats would have significantly increased metabolic dysfunction over LFD-fed SCI rats.
Body-weight regulation following spinal cord injury: body weight gain
A robust body of work has documented that significant weight loss occurs with thoracic lesions in rats. Rats with high thoracic (T3), complete lesions will typically recover presurgical body weight within 10 days of injury (Primeaux et al. 2007; Ramsey et al. 2010), but body weight may be permanently reduced in comparison to uninjured rats (Primeaux et al. 2007). Using a T9-10 contusion model, rats recover presurgical body weight around 14-21 days postinjury (Jeong et al. 2011; Gaudet et al. 2019). However, this may vary not only with the strain of the rat but also with sex (Gaudet et al. 2019). The male Long Evans rats in the present study recovered presurgery body weight between 14 and 21 days postinjury and then continued on their growth trajectory without ever experiencing catch-up growth; this is in line with the rodent literature (Jeong et al. 2011; Gaudet et al. 2019). Taken together, the first month of recovery of our rats is in line with data in the field.
Dietary manipulations to improve health and provide neuroprotection are of great interest in the SCI field. Specialized diets rich in nutrients such as DHA, EPA, and choline, for example the Fortasyn® diet, have been administered to SCI rats to enhance motor recovery (Pallier et al. 2015). Given the recent public interest in ketogenic diets, their anti-inflammatory and neuroregenerative potential has been applied to SCI research by supplying rats with diets very high in fat and simultaneously low in carbohydrates, with beneficial results (Streijger et al. 2013). Still others have directly administered omega-3 or omega-6 fatty acids to SCI rodents in the hope of enhancing neuroprotection following injury (King et al. 2006; Lim et al. 2013). To our knowledge, our previous microarray study was the first in which tSCI animals were placed chronically (8 weeks) on a western-style, high-fat, high-carbohydrate diet to exacerbate metabolic dysfunction and obesity (Spann et al. 2017). In the current study, we again utilized this western-style HFD but, in parallel, expanded the study to include a control LFD; we also extended use of the diet to a total of 12 weeks to maximize its effects. Despite the significant body weight loss during the first several weeks following injury, tSCI animals accelerated their weight gain when placed on HFD but not as drastically when consuming LFD; this is in direct contrast to the Sham rats, which put on weight equally with these diets. Body weight gain in the SCI rats consuming HFD tracked far more consistently with the Sham rats on HFD than did the body weight gain of tSCI rats on the LFD. From these data, tSCI animals may be more susceptible to weight gain when fed a diet high in fat and carbohydrates than when fed a LFD; this is exactly what we hypothesized.
Body-weight regulation following spinal cord injury: food intake
In some studies, SCI individuals have been shown to consume a greater amount of calories when compared with individuals whose other injuries rendered them less mobile (Pellicane et al. 2013). We did not observe differences in cumulative food intake in tSCI animals when directly comparing them to Sham animals maintained on the same diet. The only differences in the amount of calories consumed were group differences between LFD and HFD. This is expected, since HFD is more calorically dense than LFD (4.54 kcal/g vs. 3.81 kcal/g, respectively). Shifts in the types of macronutrients consumed by injured persons have also been reported; SCI individuals are reported to consume higher levels of fats and simple carbohydrates (as opposed to complex carbohydrates) (Sabour et al. 2012). Another report suggests that persons with SCI consume far more protein and carbohydrates than recommended by the USDA. We specifically used the macronutrient selection test to determine whether tSCI rats preferred a particular macronutrient. This test has been used successfully in other models (Wilson-Perez et al. 2013). The stark difference we observed was a marked preference for protein in tSCI animals in comparison to Sham rats. With the reduced lean (muscle) mass due to atrophy that is clearly a hallmark of SCI, there may be an increased physiologic drive to ingest protein to stave off muscle loss. This could be further explored in future studies.
Body composition changes following SCI
Thoracic spinal cord injury has a significant impact on lean body mass that persists to 16 weeks postinjury. This is in line with previous reports showing reduced lean mass in T3-lesioned rats by NMR but no difference in lean body mass composition when normalized to body weight (Primeaux et al. 2007). In SCI individuals, the loss of lean body mass can lead to a 50% reduction in skeletal muscle cross-sectional area in comparison to able-bodied controls (Castro et al. 2000). This atrophy stems only partially from peripheral denervation of the muscles and predominantly from reduced muscle loading and movement disuse (Bauman and Spungen 2000). This reduced muscle mass persisted in the tSCI rat for the course of the study. Beyond the mass of the muscle, the quality of the muscle is also altered in SCI; infiltration of fat is high, resulting in sarcopenic obesity. In this study, tSCI rats consuming LFD had reduced accumulations of triglycerides and cholesterol within the muscle in comparison to Sham-LFD. The HFD-fed tSCI rats had levels of triglycerides and cholesterol equivalent to those of the Sham-HFD-fed rats. So even in the rodent, reduction in lean mass coupled with consumption of a HFD contributes to the sarcopenic obesity observed in this population.
Because standard laboratory chow cannot be matched in micronutrients or ingredients to the butterfat HFD, for this study we intentionally used the manufacturer-suggested, nutrient-matched "control" LFD consisting of 9% butterfat. This palatable LFD also has considerable obesogenic potential, as evidenced by the increase in body weight and adiposity of the Sham-LFD rats. After 16 weeks, both diets caused substantial increases in body fat mass. Although the lack of a lean control group can be viewed as a weakness of the current study, its strength is that we tightly controlled the micronutrient content while varying the macronutrient percentages under which adiposity developed: adiposity developed either through high fat or through high carbohydrates. The variability of adiposity gain at the individual level reduced our ability to detect group differences. Nonetheless, HFD consumed by tSCI rats clearly resulted in gains in fat mass akin to the adiposity gain of the Sham animals. In the human population, fat mass in the lower body measured by DEXA increases substantially following injury (Singh et al. 2014). In this study, we did not measure changes within specific fat depots. We conjecture that visceral and mesenteric adiposity was increased in the tSCI rats. Future work will necessitate DEXA analysis of the rats over time to determine depot-specific changes.
Rats with a T9 or T10 spinal contusion have a remarkable return of locomotor function, as evidenced by improvements in BBB scores during the course of the 16-week period of this study. BBB scores reflect the locomotor function of the rat at the time of injury and the return of mobility. However, the field of spinal cord injury uses a vast array of methods to produce injury, from complete lesions to impactor-induced and ball-drop contusions. In a T3 complete lesion, for instance, even after 18 weeks, BBB scores may not surpass 4 (Primeaux et al. 2007). In the early weeks of a T8 contusive lesion, BBB scores may range between 5 and 8 (Vasconcelos et al. 2016). On the other hand, early scores of rats with a T9-10 lesion begin at 0 and can improve to 12-15 in a matter of weeks (Mills et al. 2001; Gaudet et al. 2017). This remarkable recovery within the rat model often diminishes its translatability to the human condition, where recovery of function is very slow. However, it does allow us to study the immediate effects of spinal cord injury and the effects of diet during the recovery period.
Body-weight regulation following spinal cord injury: energy expenditure
We placed animals in the metabolic cages to assess various components of energy expenditure starting at week 9 on diet (13 weeks after injury). tSCI animals had significantly increased heat production, resulting in increased calculated energy expenditure. The tSCI rats also had reduced locomotor activity in the metabolic cages. The reduced mobility exhibited by the tSCI animals may require that more energy be used for thermogenesis to maintain body temperature. We did not measure body temperature during these experiments but would predict it to be similar to that of Shams. Body temperature in paraplegics is typically akin to that of able-bodied individuals, whereas core temperature is more unstable and variable in tetraplegics (Thijssen et al. 2011).
Circulating analytes of metabolic disease
In general, the circulating analytes typically elevated with MetS are increased in the HFD-fed animals, irrespective of whether they are Sham or tSCI. Leptin and insulin, peripheral markers of adiposity and glycemic control, are increased in the animals fed a HFD. Nonfasted triglyceride levels are very high, as are fasted cholesterol levels, in the HFD-fed animals. All of these are indicative of long-term consumption of an obesogenic diet high in saturated fat.
Summary
SCI partially altered energy balance through reduced mobility. Initially, as innervation to the limbs is compromised, lean and fat mass are lost, reducing overall body weight as locomotor activity declines. Despite the potential of the LFD to produce obesity, its lower fat and higher carbohydrate content preserved the reduced feed conversion efficiency and lower lipid content within the muscle of tSCI rats. The high-butterfat diet, however, accelerated metabolic dysregulation in the SCI animals. This suggests that lipid metabolism may be affected in SCI rats, particularly with higher fat loads. More work needs to be performed to determine how this occurs.
"Biology"
] |
The Emergence and Development of Bioethics in the UK
ABSTRACT Bioethics emerged in a specific social and historical context. Its relationship to older traditions in medical ethics and to environmental ethics is an ongoing matter of debate. This article analyses the social, institutional, and economic factors that led to the development of bioethics in the UK in the 1980s, and the course it has taken since. We show how phenomena such as globalisation, the focus on 'ethical, legal and social issues' and the empirical turn have affected the methods employed, and argue that ongoing controversies about the nature and possibility of ethical expertise will affect its future.
I. INTRODUCTION
Written by a historian and a bioethicist, this article presents an overview of the emergence and development of bioethics in the UK since the 1980s. It is by no means comprehensive, but reflects our perspective after years working on and in bioethics. We believe it provides important context for the topics discussed in other articles in this special issue; not least by helping us reflect on why academics from philosophy, law, and the social sciences began to discuss and help regulate matters that had long been the preserve of doctors and scientists. We aim to identify some key trends without purporting to offer a detailed history.
Derived from the Greek words bios (life) and ethike (ethics), 'bioethics' is one of the most recognisable neologisms of recent decades. The term initially denoted an approach few of us would recognise today. During the 1920s the German pastor Fritz Jahr defined 'bio-ethik' as the assumption of more compassionate attitudes towards animals and plants based on scientific research that showed commonalities across species barriers. 1 (Jahr's work went largely unnoticed during his lifetime, but several authors have recently analysed it as part of the longer 'pre-history' of bioethics and as a possible bridge between bioethics and environmental ethics.) Unaware of Jahr's work, and claiming the term came to him 'with a Eureka feeling', the American biochemist Van Rensselaer Potter characterised bioethics in 1970 as a new 'science of survival' that drew on ecology and biomedical science in order to underpin decision-making in the face of a looming environmental crisis. 2 In contrast to Jahr, who sought to extend moral consideration to non-humans, Potter viewed bioethics as an anthropocentric system of ethics designed to secure 'the future of earth's biological resources for human needs'. 3 Independently of Potter, the Dutch obstetrician André Hellegers and the political activist Sargent Shriver also coined the term 'bioethics' in 1970 when they opened the Joseph and Rose Kennedy Institute for the Study of Human Reproduction and Bioethics at Georgetown University, a private Jesuit institution in Washington DC. 4 Hellegers and Shriver's definition of bioethics is the one we recognise today. Amid growing discussion of the social impact of biological research, the rationing of new medical technologies such as kidney dialysis and the rights of patients and experimental subjects, they viewed bioethics as the scrutiny of ethical issues raised by medicine and the biological sciences. This definition quickly rose to prominence. Between 1972 and 1974, the theologian Warren Reich began work on an Encyclopedia of Bioethics, the philosopher Daniel Callahan wrote an article on 'Bioethics as a Discipline' and the Library of Congress adopted 'bioethics' as a subject heading. 5 'Bioethics' in all these instances focused on advances in biomedical research and clinical practice, not on issues associated with ecology or environmental science.
The focus on medical practice and research appeared to continue a long-standing tradition that had been labelled as 'medical ethics' since the early 19th century, but bioethics differed in one crucial respect. In the USA and elsewhere, medical ethics had long been considered a matter for doctors. Discussion of ethics was confined to professional books and regulatory codes, and few people questioned whether doctors were best placed to determine what constituted good conduct in their own field. 6 When lawyers and religious figures engaged with medical ethics during the 19th century and for much of the 20th century, they sought to consolidate the authority of doctors by clarifying the legal and ethical aspects of contentious issues such as abortion. 7 Pointing to declining confidence in professions during the 1960s and 1970s, caused in part by the growth of radical politics, the exposure of unethical experiments on vulnerable populations and concerns over dilemmas raised by new procedures such as organ transplantation, advocates of bioethics claimed this paternalistic stance had become untenable. Callahan argued that people no longer believed 'a good training in medicine' led to 'good ethical decisions', and concluded that lawyers, philosophers, theologians, and others should now play an active role in drawing up codes of conduct for medicine and the biological sciences. 8 This argument appealed to doctors and scientists concerned by public criticism of medical research, and to academics in fields such as philosophy who were keen to utilise their training 'in a more applied way'. 9 It also appealed to politicians such as Senator Edward Kennedy, who argued that federal policy should not emanate 'just from the medical profession, but from the ethicists, the theologians, the lawyers and many other disciplines'. 10 Kennedy was instrumental in persuading President Nixon to establish a National Commission for the Protection of Human Subjects in Biomedical and Behavioural Research; and the act that established this commission notably stipulated that no more than five of its 11 members should be scientists or doctors, with the majority drawn from law, philosophy, theology, social sciences, and the general public. 11 In 1978, the commission issued binding guidelines for experiments involving human subjects in the Belmont Report, which ruled that all researchers should adhere to three core principles of respect for persons, beneficence, and justice. This principles-based approach, outlined in detail by Tom Beauchamp and James Childress's influential 1979 book The Principles of Biomedical Ethics, 'set out a clear and simple statement of the ethical basis of research' and quickly became the dominant framework in American bioethics. 12 By the turn of the 1980s, as the historian David Rothman remarks, 'it was clear that the monopoly of the medical profession in medical ethics was over. The issues were now public and national - the province of an extraordinary variety of outsiders'. 13 Although this definition of bioethics emerged in the USA, it soon became a global phenomenon.
Members of several disciplines now scrutinise ethical issues and help regulate the conduct of doctors and biomedical scientists across Europe, in Australia, Canada, Israel, Latin America, Japan, Pakistan, Singapore, and South Korea. 14 As we examine bioethics in these locations it becomes clear that we cannot generalise from its history in the USA. The sociologist David Reubi, for example, shows how the development of bioethics in Singapore during the 1990s owed little to radical politics or the exposure of unethical research, but was part of state efforts to encourage foreign investment in biomedicine. Politicians, Reubi argues, viewed bioethics as central to reassuring incoming scientists and companies that Singapore had rigorous ethical standards and was a safe place to invest. 15 These findings prevent us from mistakenly viewing bioethics as a monolithic entity with a universal history, and encourage us to recognise instead that what count as 'bioethical' problems, approaches and solutions differ across specific times and places. This was certainly the case in the UK. Despite public criticism of medical research in the 1960s and 1970s, politicians believed the best solution here was for 'the medical profession to get its house in order', while the British Medical Journal labelled bioethics 'an American trend'. 16 By the 1980s and 1990s, however, members of several professions began to play a leading role in developing laws for new procedures such as in vitro fertilisation (IVF) and embryo research; students increasingly learnt about ethical issues in medicine not from doctors but from philosophers and lawyers, who often worked in new academic centres for medical law and bioethics; interdisciplinary journals considered problems that were previously confined to medical publications; and newspapers portrayed a growing number of philosophers, lawyers, and theologians as 'ethics experts' whose input was central to debates concerning medicine and the biological sciences. 17 In this article, we detail how bioethics emerged as a high-profile and valued approach in the UK thanks to the interplay between changing political agendas and institutional, professional and personal concerns. We argue that a significant factor in the development of UK bioethics was that politicians in the 1980s and 1990s no longer believed medical researchers should be solely responsible for discussing and resolving ethical questions that arose in the course of their work. From the 1979 election onwards, members of successive Conservative and 'New Labour' governments argued that professions should be exposed to outside scrutiny in order to make them publicly accountable. This political shift benefitted individuals who promoted bioethics for different reasons, including the academic lawyer Ian Kennedy, an advocate of civil rights politics who argued it was vital to democratising medicine, and philosophers such as Mary Warnock, among others, who believed engagement with practical issues would make their field relevant. The development of bioethics also stemmed from the way in which early bioethicists presented their work as a vital intermediary: claiming outside involvement with ethical decision-making would 'reduce the burden of responsibility' on doctors and scientists whilst reassuring politicians and the public that 'no nameless horrors were going on in laboratories'. 18
This argument resonated with healthcare professionals who acknowledged 'the era which required paternalism is past', and journals that dismissed bioethics as 'an American trend' in the 1970s now portrayed it as vital to ensuring 'scientific progress'. 19 We also detail how UK bioethics was generally regarded less as a stable discipline and more as what Onora O'Neill calls 'a meeting ground for a different number of disciplines, discourses and organisations'. 20 Opinions regarding appropriate methods and solutions remained divided within as well as between the disciplines that constituted this new 'meeting ground'. This was evident in debates concerning whether or not bioethicists were moral experts who could foster agreement on difficult ethical issues, and in more recent discussions about the benefits and drawbacks of empirical, global, communitarian, and feminist 'turns' in bioethics. Yet these differences of opinion did not prevent the continued growth of bioethics, evidenced by the formation of several academic centres in UK universities throughout the 1980s and 1990s, and of the national Nuffield Council on Bioethics in 1991. Nor did they shake the enthusiasm for bioethics on the part of politicians and the media, with the BBC nominating Mary Warnock as one of the most influential people of the 1980s, the Labour government knighting Ian Kennedy for 'services to bioethics' in 2001 and the Independent newspaper selecting the philosopher John Harris as one of the UK's most influential thinkers in 2006. 21 With this in mind, we argue the continued appetite for bioethics can best be explained by viewing it as what Bill Readings calls a 'community of dissensus': where a lack of consensus is productive because, to quote the sociologist Les Back, 'it drives us to think harder about the key issues and problems of our time'. 22
II. THE EMERGENCE OF BIOETHICS IN REGULATORY COMMITTEES, PUBLIC DEBATES AND UK UNIVERSITIES IN THE 1980S AND 1990S
During the 19th century British doctors viewed medical ethics as an internal concern that functioned as what the historian Harold Perkin calls a 'strategy of closure'. 23 It helped doctors consolidate their professional expertise by limiting disputes, excluding unqualified practitioners and allowing them to position themselves as the only group capable of providing an essential service. Thomas Percival's 1803 book Medical Ethics, for instance, asserted the need for cordial relations and self-regulation among orthodox doctors to maintain the support of patients who could just as easily choose the services of alternative therapists such as homeopathists or bonesetters. 24 To Percival and the medical reformers he influenced in the mid-19th century, any discussion of medical ethics should be produced by doctors and for doctors. This argument resonated with Victorian laissez-faire attitudes towards regulation, and the 1858 Medical Act officially granted doctors 'self-governing authority' by leaving them in charge of the new General Medical Council (GMC) that controlled registration, education, and discipline. 25 This situation persisted well into the 20th century. When Clement Attlee's Labour government sought to implement its 1946 National Health Service Act, doctors agreed to reform on the condition there would be as little scrutiny as possible of their 'privileged clinical position or research practices'. 26 Support for self-regulation was strengthened during the 1950s thanks to advances such as effective anti-tuberculosis drugs, open-heart surgery, kidney transplants and the discovery of DNA's helical structure. Many doctors and scientists hailed these projects as evidence of the benefits of professional freedom, and celebratory press coverage portrayed them as pioneering figures who were central to a 'new Elizabethan' era of progress and discovery. 27 But simply focusing on the arguments of doctors or medical researchers cannot tell the whole story. As the sociologist Andrew Abbott argues, professions do not emerge or develop in isolation and we need to move from 'an individualistic to a systematic view'. 28 We cannot fully appreciate the persistence of the belief that ethics was an internal concern without also studying the 'hands off' approach other professions adopted when they considered medical practice and research. The decisions in two medical negligence cases from the 1950s demonstrate how lawyers and judges believed, like doctors and politicians, that 'the medical profession should be held in special regard and interfered with as little as possible'. 29 The first case, Hatcher v Black, arose after a patient claimed they were not informed about possible nerve damage during thyroid surgery. Ruling in favour of the doctors, the judge, Denning J, warned that giving courts the power to decide what constituted negligent behaviour would lead to 'defensive medicine' where doctors thought 'more of their own safety than the good of their patients'. 30 The second case, Bolam v Friern Hospital Management Committee, arose in 1957 when a patient sued doctors for injuries that arose after they failed to restrain him during electroconvulsive therapy and did not warn him of the risks beforehand. 31 Here, as in Hatcher v Black, the judge ruled in favour of the doctors. The decision hinged not on the possibility of 'defensive medicine' but on the argument that the patient's treatment conformed to standard medical practice.
This ruling became known as the 'Bolam test' and was applied to virtually all medical negligence cases, until the UK Supreme Court's 2015 ruling in Montgomery v Lanarkshire Health Board 32 established that, in the context of informed consent, a patient should be told whatever they would want to know about the nature and risks of medical procedures, and not simply 'what the doctor thinks they should be told'. 33 As Margaret Brazier notes, by deciding that medical conduct should be judged according to professional norms, not the expectations of patients or the public, the underlying presumption in the courts for nearly 60 years 'was that "doctor knew best"'. 34 Philosophers adopted a similar stance, albeit for different reasons. In his influential 1903 book Principia Ethica, G E Moore argued that notions of 'good' so central to moral philosophy did not refer to a natural property and that we could not prove an action was good in the same way that, for example, we can demonstrate blood flows around the body. 35 In his iconoclastic 1936 book Language, Truth and Logic, A J Ayer drew on Moore's argument and the logical positivism of the Vienna circle to portray moral statements as simply 'expressions of emotion that can be neither true nor false'. 36 To say a course of action was right or wrong, in effect, amounted to little more than saying 'Hurrah!' or 'Boo!'. 37 Ayer claimed that since philosophers should only scrutinise verifiable propositions, 'a strictly philosophical treatise on ethics should make no ethical pronouncements'. 38 His work had a lasting effect on mid-20th century UK philosophy, and on the rare occasions that philosophers responded to the ethical work of doctors and scientists, it was to reaffirm why they avoided normative issues. When the biologist Conrad Waddington told Ludwig Wittgenstein he was writing an essay for Nature on 'science and ethics' in 1942, the horrified philosopher replied it 'was a terrible business - just terrible! You can at best stammer when you talk of it'. 39 C E M Joad was the only philosopher who publicly responded to Waddington's essay, but this was only to chide him for presuming that notions such as 'good' could be easily identified. 40 This collective 'hands off' attitude was evident following the 1967 publication of Human Guinea Pigs by the medical whistleblower Maurice Pappworth, who outlined how NHS patients had been unwittingly exposed to unnecessary and dangerous procedures, such as cardiac catheterisation, as part of medical research. Pappworth claimed that in order to prevent future 'dangers and indignity', it was essential that 'our laws do not place the entire authority to decide what is permissible and what is not in the hands of one professional class'. 41 He argued that medical ethics should no longer be considered a matter for doctors alone, and urged the government to pass a law requiring all research projects to be scrutinised by a 'consultation committee' that contained at least one outsider, 'preferably but not essentially a lawyer'. 42 Despite favourable media coverage of Pappworth's work, the majority of politicians continued to endorse laissez-faire attitudes to regulation. Members of Harold Wilson's Labour government, elected on a promise to turn the 'white heat' of science and technology into economic prosperity, were reluctant to interfere with medical expertise and reiterated that ethical questions were 'for the profession to consider'. 43
The lawyer Cecil Clothier, meanwhile, drew on the ruling in Hatcher v Black when he wrote to Pappworth rejecting calls for statutory oversight. Clothier argued that formal scrutiny was inappropriate when doctors were faced with severely ill patients whose only chance of survival 'could include trying a newly-devised drug if nothing else had done any good'. 44 Fear of litigation and criminal prosecution might prevent doctors from trying experimental procedures in such cases, he concluded, and 'individual assessment' remained the best form of governance. 45 In contrast to lawyers, philosophers and politicians, growing numbers of religious figures began to endorse what the Cambridge theologian Ian Ramsey called 'trans-disciplinary' involvement with medical ethics during the 1960s. There were obvious professional motivations behind their argument. Attendance at Sunday school, Protestant churches and religious rites of passage fell away dramatically in the 1960s, and a young generation were less concerned with ethics surrounding faith, God and the afterlife than with the environment, gender, nuclear weapons, and political activism. 46 Ramsey argued it was only by placing itself within interdisciplinary discussion of contemporary issues, including 'medical moral problems', that theology 'may find a new prospect and a new relevance'. 47 He was also clear that interdisciplinary involvement with ethical issues would benefit doctors, helping reconcile them to the problems of increasingly secular and 'pluralist societies' where there was no longer agreement on what constituted a right course of action. 48 Crucially, and in contrast to bioethicists in the USA, Ramsey reassured doctors that input from theologians, philosophers, and others did 'not in any way compromise the surgeon's or physician's responsibility for making decisions', but was simply intended to facilitate 'responsible debate' and help them better understand the moral implications of issues such as organ transplantation or IVF. 49 While theologians were central to redefining medical ethics as a 'trans-disciplinary' endeavour in the UK, it was lawyers who went a step further from the late 1970s onwards and began to demand that members of other professions should play an active role in determining what constituted good professional conduct. These calls were led by Ian Kennedy, who notably labelled this more interventionist approach 'bioethics'. 51 Influenced by civil rights politics in the 1960s and 1970s, Kennedy believed that professions should 'respect each person's autonomy, his power to reach his own decisions and act on them'. 52 After encountering bioethics during a spell teaching in the USA in the early 1970s, he claimed to find 'much of value' in the work of the lawyers, philosophers, and religious figures who endorsed outside involvement with medical decision-making. 53 On returning to the UK he argued that discussion of medical ethics here was 'too narrow' and criticised lawyers, politicians, and others for 'saying these are medical matters and shifting responsibility for decisions back to the hapless doctor'. 54 In journal articles and several documentaries for BBC radio, on subjects such as withdrawing treatment from patients with no hope of recovery, Kennedy claimed that doctors and medical scientists 'function within a framework of legal and social rules that go beyond the rules of their particular profession and must be observed'. 55
Like the American bioethicists whose 'brilliant insights' he praised, Kennedy believed the solution was for 'all interested parties' to have a say in developing codes of practice for new or publicly contentious procedures. 56 Kennedy discussed these proposals in detail during his 1980 BBC Reith Lectures, broadcast with the provocative title Unmasking Medicine. The major thrust of the six lectures was that standards for doctors and medical scientists 'will have to be set by others, and the principle of outside scrutiny, a key feature of consumerism, seems inevitable'. 57 This was especially the case with teaching ethics to medical students, which Kennedy argued should be central to the curriculum and undertaken 'not by some superannuated elder statesman nor by the latest star in the medical firmament, but by an outsider, someone who is not deafened by the rhetoric of medicine'. 58 The seemingly confrontational tone of these proposals led some to dismiss Kennedy's lectures as 'doctor bashing'. 59 But he again emulated American bioethicists such as the Yale lawyer Jay Katz, who promised not to 'indict or stifle research', by portraying outside involvement as a help rather than a hindrance. 60 He argued lawyers, philosophers, and others were trained to scrutinise ethical issues and that when confronted by particular dilemmas 'it may be the doctor who is the layman'. 61 Bioethics would, therefore, provide 'great help to doctors in that it offers a guide to what they need to do where none existed before'. 62 Kennedy reassured doctors that he wanted to establish 'a relationship of partners in the enterprise of health', in which outsiders were 'not interfering but trying to help'. 63 Several commentators pointed out that Kennedy's promotion of bioethics resembled Maurice Pappworth's calls for outside involvement in the regulation of medical research. 64 Yet while Pappworth's proposals were dismissed in the 1960s, senior doctors were far more receptive to Kennedy's arguments in the 1980s. This change can be explained by the shifting political landscape that followed the election of Margaret Thatcher's Conservative Party in 1979. Thatcher's government lauded private enterprise and regarded state-supported and self-regulating professions as unresponsive to the entrepreneurial outlook they saw as vital to regenerating the country. Their solution, as Nigel Lawson set out in 1980, was to remodel professions on market lines; and throughout the 1980s, in cases such as teaching, local government and social services, reliance on professional expertise gave way to forms of outside scrutiny that were designed to ensure transparency, value-for-money and accountability to end users who were increasingly viewed as 'consumers'. 65 Ian Kennedy's political background ensured he was no fan of the Conservative government, and he often criticised its neo-liberal belief that many aspects of public life 'could be regulated (if that is the right word) entirely by market forces'. 66 But his demands for outside involvement and patient empowerment nevertheless mapped onto the government's desire for publicly accountable and 'customer focussed' professions. This was not lost on doctors. John D Swales, head of the University of Leicester's medical school, acknowledged Kennedy's 'views enjoy the enormous advantage of following the current political tide' and recommended that 'doctors should look closely at what he is saying'. 67
Sir Douglas Black, President of the Royal College of Physicians, similarly believed that Kennedy's 'views have to be taken seriously, both for their own sake and because they are representative of the forces that seek to effect a radical change in the focus of medicine'. 68 The changing 'political tide' was evident in 1982, when the government responded to growing press disquiet surrounding the 'aberrations of the baby revolution' by announcing a public inquiry into IVF and embryo research. 69 In a break from long-standing reliance on scientific or medical expertise, figures at the Department of Health and Social Security prioritised the appointment of an 'outside chairman'. 70 The government's decision to appoint the philosopher Mary Warnock as head of an inquiry where members of various professions outnumbered doctors and scientists was notably praised by Ian Kennedy as 'evidence that progress along the lines I advocate has recently been made'. 71 Like Kennedy and members of the government, Warnock presented outside scrutiny as vital to ensuring public accountability. Writing for the popular New Scientist magazine in 1984, she argued that when medical research raised a moral dilemma, there was 'no reason why scientists should be responsible by themselves for solving it . . . Increasingly, and rightly, people who are not experts expect, as of right, to help determine what is or is not a tolerable society to live in'. 72 Warnock also presented outside scrutiny as beneficial to scientists and doctors. She argued it would safeguard public and political trust by ensuring 'that no nameless horrors are going on in laboratories', which would allow researchers 'to get on with their work, without the fear of private prosecution or disruption by those who object to what they are doing'. 73 Like Kennedy, Warnock promoted outside scrutiny of biomedical research for specific reasons. She was one of a growing number of philosophers who believed the mid-century reluctance to engage with practical issues had rendered the field irrelevant. In her 1960 book Ethics Since 1900, Warnock complained that philosophy had for too long been characterised by 'the refusal of philosophers to commit themselves to moral opinions'. 74 But she closed the book on an optimistic note by claiming 'the most boring days were over'. 75 Warnock drew here on the work of Philippa Foot, who wrote a 1958 article seeking to counter Moore's 'naturalistic fallacy' by arguing that moral statements could not be separated from the benefits or harms they produced in specific contexts. 76 To Warnock, Foot's work allowed philosophers to focus on 'both description of the complexities of actual choices and actual decisions, and also discussion of what would count as reasons for making this or that decision'. 77 In a 1978 edition of Ethics Since 1900, Warnock argued this approach was vital if philosophy was to become 'a practical subject and therefore more urgent and interesting'. 79 Other philosophers, in turn, believed Warnock's role as chair of the government inquiry into IVF and embryo research demonstrated the value of 'applied ethics', even if they disagreed with her committee's policy recommendations. To Singer, for example, her appointment showed how 'the broader community has willingly accepted the relevance and value of philosophers to practical issues', which was 'particularly notable in bioethics'. 80
Other philosophers were prompted to assert the value of practical approaches after the government cut the block grant it distributed to universities through the University Grants Committee (UGC) in 1981. The government announced that reductions were to be imposed selectively between institutions and subject areas; and given the government's emphasis on meeting 'national needs' and enthusiasm for commercial approaches, academics rightly predicted the UGC would prioritise disciplines that were seen to contribute to economic growth, while penalising those it viewed as unproductive. 81 Letters sent to each university advised Vice-Chancellors to protect 'big science' from budget cuts and 'downgrade the arts'. 82 Senior academics in fields such as philosophy were encouraged to take early retirement and were not replaced, making it easier for politicians and administrators to criticise shrinking departments as 'weak and ineffectual'. These pressures were compounded in 1988, when a new Universities Funding Council announced plans to distribute money based on new 'research assessment exercises' that judged the 'quality' of a department's research according to levels of grant income and journal publications.
Many academics in arts and humanities departments recognised that these new criteria favoured the sciences and engineering, and believed they stood a better chance of gaining funding and meeting expectations that research had to confer 'social benefits' if they worked in areas with practical relevance. 83 Some academics and university managers also argued it was 'possible to improve both performance and image by casting down old-fashioned departmental barriers and abandoning worn-out subject divisions'. 84 This combination of factors prompted growing numbers of academics to assert the value of bioethics during the 1980s and 1990s. Bioethics appealed to staff in the humanities who sought funding for applied work, and its presentation as a 'partnership' made it an obvious subject for interdisciplinary collaboration. 85 While budget cuts were not their sole motivation, academics keen to work with like-minded colleagues in other disciplines began to promote outside involvement in teaching medical ethics to senior figures in university medical schools. Their efforts received support from a 1987 Institute of Medical Ethics report and the GMC's 1993 report, Tomorrow's Doctors, which both recommended that ethics should be central to the medical curriculum and presented input from a variety of perspectives as important to giving students 'a clear grasp of the issues involved'. 86 With medical students often demanding that more time be spent discussing ethics, senior doctors welcomed outside involvement as a 'splendid idea'. 87 Growing numbers of philosophers, lawyers, and others subsequently taught ethics to medical students and established new postgraduate degrees aimed primarily at healthcare professionals. Keen to formalise their collaborative work, the academics who taught on these new degrees also began to establish centres dedicated to research and teaching in bioethics and medical law; and by the late 1990s, these new centres brought together individuals from different fields at Bristol, Cardiff, Edinburgh, Glasgow, Keele, King's College London, Liverpool, Manchester, Newcastle, Oxford, Nottingham, Preston, and Swansea. Many of the new centres received praise from university managers as they secured postgraduate fees and grant income from external funding bodies such as the European Commission and the Wellcome Trust: helping academics in philosophy, law and, later, the social sciences assert the value of their work in an increasingly austere and competitive climate. 88
III. METHODS IN BIOETHICS
The extent to which national factors shaped UK bioethics is also apparent when we survey the methods bioethicists employed and endorsed in their work. With notable exceptions such as the physician and philosopher Raanan Gillon, who replaced Alastair Campbell as editor of the Journal of Medical Ethics, prominent UK bioethicists largely rejected the principles-based approach endorsed by the majority of their counterparts in the USA. 89 There was little agreement, however, on which methods should take precedence. While some believed that bioethicists with a philosophical background were 'moral experts' or 'specialists in ethics', who could foster consensus by providing a framework for analysing specific issues, others argued that adherence to a particular theory could not capture the range of viewpoints held in pluralist societies and was likely to leave 'people more dogmatic or muddled than before'. 90 The handbook for a module at the University of Manchester's Centre for Social Ethics and Policy, which was established in 1987, embodied this latter viewpoint when it argued that the value of bioethics did 'not lie in its ability to provide answers . . . to the difficult problems faced by healthcare professionals and others', but lay instead 'in its ability, first, to widen awareness of the issues involved and sensitivity to them; secondly to clarify one's thinking about these issues'. 91 But there were also broad similarities between the US and the UK. During the 1980s the disciplines that constituted bioethics on both sides of the Atlantic were primarily law and 'applied ethics', with theologians involved to a lesser extent. Even though many social scientists worked on issues such as IVF and organ transplantation, the majority shied away from engaging with bioethics and instead criticised it for what they saw as a 'tendency to distance and abstract itself from the human settings in which ethical questions are embedded and experienced'. 92 Some philosophers and lawyers took offence at this negative characterisation of their work, viewing the social scientist as 'the team member who does nothing to help but only criticizes team performance', and relations remained 'tentative, distant and susceptible to strain' throughout the 1980s. 93 This changed during the 1990s, however, as social scientists began to outline how bioethics might benefit from sociological or ethnographic perspectives. Motivated in part by the continued demand for practically oriented work, UK sociologists argued that a more 'bottom up' approach could help connect bioethics to the actual expectations of doctors and patients, who often displayed preferences, values, and forms of reasoning different to those prioritised in bioethical texts. 94 Their arguments were well received by many lawyers and philosophers who worked in university centres and the Nuffield Council on Bioethics; and as social scientists published in bioethics journals and helped determine public policy, many began to talk of an 'empirical turn' in bioethics. 95 By the 21st century, social scientists joined colleagues from law and philosophy in describing bioethics as a 'dynamic, changing, multi-sited field', where individuals from a growing number of disciplines 'claim the title of bioethicists'. 96
This development fostered a new research agenda that scrutinised the ways in which the disciplines that constitute bioethics can and should relate to one another, addressing questions such as the extent to which theoretical reflection of a philosophical sort can be integrated with empirical work done by sociologists, anthropologists, and psychologists. It was by no means the case that all scholars working in the field wanted to work in a multidisciplinary way, and disagreement continued over the role of empirical evidence in bioethics; while some took the view that integration was possible and desirable, differences remained over what form it should take. What was called 'integrative bioethics' suggested the emergence of a new discipline. 97 Advocates of 'integrated bioethics', 98 on the other hand, called for a deep and continual interaction between the constituent disciplines, while others argued that the disciplines needed to maintain their distinctive methodologies and distance, working together in a 'complementary' way. 99 These debates, along with attempts to define the boundaries of bioethics, continue.
Stemming in part from the reaction to issues such as genetically modified food, the empirical turn was also marked by an increasing focus on public involvement with bioethics. While it might be argued that this was not a part of bioethics per se, public participation and 'engagement' was something that bioethicists had to consider in both their funding proposals and outputs. Another change, again related to the relationship between bioethics and different funding agendas, was the 'ELSI-fication' of work from the 1990s onwards.
ELSI emerged after a proportion of the Human Genome Project budget in the USA was set aside to investigate 'ethical, legal and social issues' (ELSI). Other countries quickly followed suit. In Europe, for instance, ELSA, designating a focus on 'ethical, legal and social aspects', was the acronym of choice. The choice of 'aspects' over 'issues' provided greater scope for input from social scientists, offering a more rounded than linear approach. Thus, funding bodies (including not only the European Commission but also the Wellcome Trust) had a significant impact in encouraging social science input.
The phenomenon of 'ELSI-fication' meant that scholars who had for years been working on ethical, legal, and social issues now had a recognisable 'brand', but it was not always regarded with approval by those who preferred to identify with their home discipline, and they were sometimes (unfairly) accused of jumping on the latest bandwagon. Throughout the period, some scholars who might be identified by the observer as 'bioethicists' regarded themselves as philosophers, legal scholars, or social scientists inspired by the significance of the issues themselves. More recently, the emphasis in European funding has shifted from ELSA to a new acronym: RRI, or 'responsible research and innovation'. This has been defined by Rene von Schomberg as 'a transparent, interactive process by which societal actors and innovators become mutually responsive to each other with a view on the (ethical) acceptability, sustainability and societal desirability of the innovation process and its marketable products (in order to allow a proper embedding of scientific and technological advances in our society)'. 100 Ethical acceptability is further explained here as being in compliance with the values of the European Union charter on fundamental rights, such as the right to privacy. 101 Bioethics has also been subject to a 'global turn' thanks to the promotion of a new approach known as 'global research ethics' or 'global bioethics'. 102 What it means for ethics to be 'global' is not entirely clear, however. Nigel Dower has drawn a distinction between an ethic that is global in application and an ethic that is global in acceptance. 103 The first is arguably easier to achieve than the second, given the cultural differences at work in different parts of the world, but even an ethic that is global only in application is challenging. What does it mean for an ethics to apply globally? A further distinction needs to be drawn between extending, say, discussions of the just distribution of healthcare resources from the intrastate to the interstate arena on the one hand, and discussing issues that are global per se on the other. The latter include issues that of their very nature in principle affect the whole globe, as in the cases of climate change and global pandemics, or in discussions of the human genome as the common heritage of humanity.
In dealing with these issues the question arises as to whether theories of biomedical ethics that have been prominent in the west are adequate in other locations and contexts. Bioethicists have discussed whether Kantianism, utilitarianism, and virtue theory, for instance, can feasibly be applied on a global scale. 104 These theories, however, were not developed with bioethics specifically in mind, whereas theories that were, such as Beauchamp and Childress's principles of biomedical ethics, have been canvassed as providing a possible basis for a global bioethics. Raanan Gillon has argued that autonomy, beneficence, non-maleficence and justice are the basis for a global ethics, being universally accepted in some form. 105 But others have countered by arguing that these principles can be universally accepted only because they can be interpreted in different ways, so what appears to be agreement in fact masks profound disagreements. Søren Holm, in addition, has argued that the principlism framework may not travel well and reflects a particularly American perspective. 106 At the end of the 20th century and the beginning of the 21st there was a notable increase in attempts to move away from the dominance of individualistic thinking in bioethics, including the perceived pre-eminence of autonomy in the four principles. One reason for this was the development of biobank research, leading the World Health Organisation, for example, to say that the balance between the individual and the collective needed to be rethought. In speaking of genetic databases it said: '. . . the justification for a database is more likely to be grounded in communal value, and less on individual gain . . . it leads to the question whether the individual can remain of paramount importance in this context . . . And the achievement of optimal advances in the name of the collective good may require a reconsideration of the respective claims so as to achieve an appropriate balance between individual and collective interests, including those of ethnic minorities, from a multi-cultural perspective'. 107 This has been described as a communitarian turn. 108 This does not mean that individualistic thinking gave way to communitarian thinking, or that bioethicists suddenly became communitarians. It means that principles other than the Georgetown four gained prominence, such as the principle of solidarity, leading to the 2011 report of the Nuffield Council on Bioethics in which solidarity is described as an 'emerging principle' in bioethics. It is unusual for the Council to issue reports on particular principles rather than on specific issues; the report explored the ways in which solidarity might be applied. 109 The relationship between solidarity and justice was also a matter of investigation. The Nuffield report interpreted solidarity as a willingness to bear costs for another's good, distinct from altruism because of the involvement of reciprocity in a relationship of solidarity. Solidarity comes in different guises: there are distinctions to be drawn between face-to-face and mediated solidarity (as in an insurance company), and between the solidarity of a coalition and humanitarian solidarity. 110 The relationship between the communitarian turn and globalisation needs consideration. On the face of it, solidarity is associated with membership of groups or communities, and so might be thought naturally to combine with exclusion of the interests of those who are not members of the group.
The possibility of humanitarian solidarity, where the relevant group includes all human beings, is needed to counteract this potential problem.
The development of feminist bioethics was increasingly influential during the period. Following the establishment of the International Association of Bioethics in the early 1990s, Feminist Approaches to Bioethics 111 was established as an international network of feminist scholars, and the work of feminist bioethicists had considerable impact upon bioethics worldwide. The concerns of feminist bioethics to some extent overlapped with those of communitarians and also extended to global issues. The concept of relational autonomy, for example, emphasized the context of social relationships in which individuals exist, and provided a counterbalance to the picture of the autonomous agent as an isolated individual decision-maker. 112 For feminist bioethics, power relationships also provided an important focus. In the context of reproductive decision-making, whether concerning genetic testing or termination of pregnancy, the power of women to make a choice in relation to partners, clinicians and the prevailing legal system has to be taken into account. As regards global issues, questions of the global distribution of healthcare resources cannot ignore the stark differentials between men and women in some societies, evident for example in sexual and reproductive health and infant mortality statistics. 113 These ideas can be found in writers looking at bioethical issues from different perspectives. For example, Onora O'Neill, using a Kantian-inspired approach in addressing issues of transnational justice, argued that while we need a system of abstract reasoning, this need not be based on the notion of idealized autonomous agents, but rather on humans with limited capacities and varied vulnerabilities who interact. Idealized agents have traditionally been based on the model of men and are thus biased in favour of men. 114 These considerations, in turn, connect feminist, communitarian, and global approaches to the increasing emphasis on public health ethics during the past decade, with issues such as antibiotic resistance, climate change, and obesity now receiving far more attention in bioethics and beyond. But the extent to which this is a new phenomenon is debatable. Justice has always been one of Beauchamp and Childress's four principles of biomedical ethics, 115 and questions about the fair distribution of resources, such as organs and dialysis machines, were credited as a major influence behind the emergence of bioethics in the USA during the 1960s and 1970s. 116 What appears beyond doubt, however, is that the focus on new issues and concerns has fostered a timely re-evaluation of the relationship between individual and collective interests.
IV. CONCLUSIONS
The 'turns' and approaches discussed here represent trends that had the potential to affect the ways in which bioethicists worked, who they worked with, and what counted as 'bioethical' issues. They stemmed not only from the pre-existing commitments of those individuals and groups who engaged with bioethics, but also from responses both to funding initiatives that followed technological developments, such as whole genome sequencing, and to ongoing research assessment initiatives which continue to emphasise the social and economic impact of university research. New approaches are likely to emerge in response to more recent questions surrounding developments such as gene editing, 3-D printing, and biometric technologies, among other issues, but we should be wary of assuming what form they will take, or who will undertake them. By showing how the contours and influence of bioethics are connected to broader social, financial, and political concerns, history reminds us that its status and authority are likely to change in future. Arguably, the current climate appears less conducive for bioethics than at any period in its history. While the possibility and nature of expertise in bioethics has long been an issue, claims that 'red tape' is today stifling innovation, and a distrust of 'experts' in multiple sectors, threaten to undermine the goodwill which doctors and politicians showed towards bioethicists in the 1980s and 1990s. 117 At the same time, some bioethicists worry that the academic centres they helped establish face a diminished student intake and an uncertain future, with undergraduate tuition fees of £9,000 per year (at the time of writing) and the ongoing focus on research performance evaluation raising the prospect of universities attaching 'less value to the taught postgraduate courses that have educated so many health professionals in ethics'. 118 The lawyers, philosophers, and social scientists who look to engage with bioethics in years to come cannot presume their input will be welcomed or even deemed necessary, and may have to find new ways of asserting why it benefits doctors, scientists, and the public at large.
"Environmental Science",
"Philosophy"
] |
The Effect of Hydroxyapatite from Various Toothpastes on Tooth Enamel
The process of re-demineralisation is governed by the degree of mineral saturation of oral fluids. Due to positive changes in conditions, remineralisation can become the predominant process, leading to the healing of lesions. To improve remineralisation, it is necessary to increase the concentration of calcium and fluoride in oral fluids. For this purpose, fluorides have traditionally been used in various forms; their cariostatic mechanism can be explained by the formation of more acid-resistant fluorapatite. The aim of this paper was to demonstrate the importance of using toothpastes containing hydroxyapatite on tooth enamel and how they operate at the microscopic level by sealing the enamel and the enamel prism defects etched by acid. Specimens obtained from extracted teeth were treated with different toothpastes containing hydroxyapatite: Biorepair, Sensodyne Repair & Protect and Lacalut White & Repair. After treatment, the samples were examined under a SEM microscope: the control sample was compared with the treated sample, and then the treated samples were compared to each other. All three toothpastes had the expected result, forming a protective layer on the surface of the etched enamel, but in this study the Sensodyne toothpaste seems to be the most effective.
The health of the oral cavity depends not only on balanced nutrition but also on oral hygiene. Tooth cleaning should begin even before the eruption of the first milk teeth, each age requiring specific tooth care features to support the structural and functional development of the teeth [1,2].
The main factor in maintaining oral hygiene is first and foremost the correct dental brushing and the use of proper sanitising means: toothbrush, toothpaste, dental floss, mouthwash. Hydroxyapatite toothpastes have an increased efficiency due to the ability to remineralize tooth enamel [3][4][5].
Hydroxyapatite is one of the biomaterials representative of the resorbable material category, with a calcium phosphate composition. The use of calcium phosphate biomaterials for dental applications is due to the absence of toxic compounds and to their resemblance to the mineral component of the human skeleton. HA, the main crystalline component of the human skeleton, was first synthetically produced around 1970 and has been used since 1980 as a bioactive material [6,7].
HA is considered a biomaterial with a chemical structure very similar to that of human bone, since the main form of calcium in this biomaterial is the one found in bone tissue, and adhesion to bone is favoured by this resemblance in chemical composition. Hydroxyapatite (HA) is a naturally occurring mineral form of calcium apatite with the formula Ca5(PO4)3(OH), usually written Ca10(PO4)6(OH)2 to denote that the crystal unit cell comprises two entities [8]. The process of re-demineralisation is governed by the degree of mineral saturation of the oral fluids (saliva and plaque). Due to positive changes in conditions, remineralisation can become the predominant process, leading to the healing of lesions [9][10][11].
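As a quick numerical illustration of the two equivalent formula notations above, the following sketch (in Python, purely illustrative) checks that the stoichiometric Ca/P molar ratio, a common quality index for apatites, is the same for Ca5(PO4)3(OH) and Ca10(PO4)6(OH)2.

```python
# Stoichiometric Ca/P molar ratio of hydroxyapatite.
# Ca5(PO4)3(OH) doubled gives the unit-cell formula Ca10(PO4)6(OH)2,
# so the Ca/P ratio is identical for both notations.
for ca, p in [(5, 3), (10, 6)]:
    print(f"Ca{ca}(PO4){p}...: Ca/P = {ca / p:.2f}")   # 1.67 in both cases
```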
To improve remineralisation, it is necessary to increase the concentration of calcium and fluoride in oral fluids. For this purpose, fluorides have traditionally been used in varied forms and concurrently, the cariostatic mechanism can be explained by increasing the force of fluorapatites [12,13].
A significant decrease of carious processes in highly industrialised countries can be attributed to the widespread use of fluoride. This preventive effect is mainly due to the formation of calcium fluoride precipitates limiting demineralisation, while the level of fluoride required for remineralisation is assumed to be higher than that required to prevent the formation of lesions.
Nano-hydroxyapatite is considered one of the most biocompatible and bioactive materials and has gained acceptance in recent years in both general medicine and dental medicine [14]. While previous attempts to use hydroxyapatite clinically failed, the synthesis of hydroxyapatite with zinc carbonate proved to be an important, high-affinity alternative.
Nano-particles are similar in morphology and structure to the tooth enamel crystals. Recently, some studies have shown that nano-hydroxyapatites have the potential to repair dentin lesions. Currently, various formulas have been developed for the remineralisation of underlying lesions by products containing nano-hydroxyapatite, and early reports suggest remineralisation properties.
Tooth enamel is the tissue with the highest degree of mineralisation in the body, being at the same time the only tissue of ectodermal origin that mineralizes. The hardness of this layer, estimated on the Mohs scale, varies between 5 and 8. Generally, the highest hardness is found in the deep enamel layers on the lateral surfaces of the dental crown, providing the resilience of the enamel surface to mechanical stresses.
From a chemical point of view, enamel consists of 95% mineral substances, 1% organic substances and 4% water. Such a high percentage of mineral substances relative to the amount of water and organic substances is not found in any other part of the body. Approximately 90% of the mineral substances are calcium phosphates in the form of hydroxyapatite, Ca10(PO4)6(OH)2, a small part (3%) is fluorapatite, Ca10(PO4)6F2, and the rest is made up of carbonates, silicates and silicon.
The basic unit of the enamel is the enamel prism. The number of enamel prisms is not the same for each tooth. They are numerous in teeth with bulky crowns (about 12 million in the first upper molars) and less numerous in teeth with small crowns (about 2 million at lower central incisors). Prisms have an oblique trajectory toward the surface of the tooth. The diameter of a prism is on average 4 microns, and its length is variable. Some prisms extend from the surface of the enamel to the dentin, and others disappear along the way, being continued by other prisms.
Due to the accumulation of bacterial plaque on the enamel surface and the subsequent drop in local pH, the enamel demineralises and the enamel prisms become exposed; the prisms themselves will also demineralise if the low pH is maintained for a long time.
The aim of this paper was to demonstrate the importance of using toothpastes containing hydroxyapatite on tooth enamel and how they operate at the microscopic level by sealing the enamel and the enamel prism defects etched by acid [15][16]. Also, by using toothpastes with hydroxyapatite, incipient lesions on the enamel surface stagnate and, over time, even a partial or total recovery of the damaged surface can be achieved [17][18][19]. All toothpastes are beneficial for maintaining dental hygiene, but toothpastes containing hydroxyapatite, besides their role in hygiene, also play an important role in repairing the damaged enamel surface.
Experimental part
A total of 15 teeth, extracted for orthodontic or periodontal reasons (grade IV mobility), were used in this study and randomly divided into 3 groups of 5 teeth. Each tooth was divided vertically into two halves, thus obtaining 10 specimens per group. Each group of teeth was treated with one type of toothpaste containing hydroxyapatite: Group 1 was treated with Biorepair; Group 2 was treated with Sensodyne Repair & Protect; Group 3 was treated with Lacalut White & Repair.
The following working protocol was established: -Tooth preparation. After extraction, the teeth were cleaned of biological debris with a dental brush under a water jet, dried, and then exposed to UV bactericidal lamps for 30 minutes. After that, they were immersed in physiological serum until the entire batch was ready for treatment. Then the teeth were mounted in a support made of impression material.
-Each tooth was etched with 37% orthophosphoric acid for 1 minute. After this step, the teeth were washed for 20 seconds under running water, then air-dried.
-Tooth cutting: the teeth were cut using a diamond disk mounted on a contra-angle handpiece, under water cooling, resulting in two identical halves.
-The first half of each tooth was kept as the control sample, and the second half was brushed with the toothpaste chosen according to the group that the tooth was part of. The teeth were brushed twice a day, for two minutes, for two weeks. After each brushing, the teeth were rinsed under running water and then kept in physiological serum. After 2 weeks, the control samples and the treated samples were studied under the SEM microscope.
The toothpastes chosen in this study contain nano-hydroxyapatite. Hydroxyapatite has been used as a remineralising agent in toothpaste for the past three decades; in Japan, where it has been used since 1993, studies have shown a reduction in the cariogenic index among students who used this toothpaste. It is a crystalline calcium phosphate substance almost identical to natural hydroxyapatite, supplied as nanoparticles, which directly replaces minerals lost from demineralised enamel and fills microscopic cracks on the surface of the enamel [20][21][22][23][24]. In recent years, more oral hygiene products with hydroxyapatite have appeared on the market due to its benefits in remineralising enamel, increasing resistance to bacterial plaque adhesion and reducing dentinal hypersensitivity.
Group 1 was treated with the Biorepair paste. The simple and innovative idea of repairing the teeth with the hydroxyapatite that is physiologically found in their structure has become possible thanks to high-end nanoparticle technology. The achievement belongs to the researchers at the Coswell laboratories, in collaboration with researchers of the Department of Environmental and Biological Chemistry at the University of Bologna. The innovation patented by these two laboratories, the MICROREPAIR complex, contains microparticles of bioactive hydroxyapatite (similar to the one in the tooth structure) and zinc. These hydroxyapatite microparticles have the ability to integrate into dental enamel and dentin, penetrating into the smallest tooth imperfections and repairing them. Thus, imperfections, microcracks and demineralisations are corrected, and the surface of the dental enamel is rebuilt, brighter. The reconstructed enamel is protected from the destructive action of bacteria, acids, cavities, etc. On the surface of the tooth enamel, the newly deposited MICROREPAIR layer constitutes a natural, physiological, strong barrier against external factors (food debris, bacteria, increased acidity, etc.). These unique hydroxyapatite microcrystals have an increased chemical reactivity allowing the rapid remineralisation of enamel and dentin, also releasing calcium and phosphorus locally.
Group 2 was treated with Sensodyne Repair & Protect. This toothpaste combines a unique mineral formula, its main purpose being to relieve hot and cold pain within 30 seconds. Hydroxyapatite is deposited in the exposed dentine channels, preventing hot and cold thermal stimuli from reaching the nerve. Potassium citrate calms the pain experienced by the nerve endings, reducing dental discomfort. Fluoride protects against cavities and helps repair and protect the enamel.
Group 3 was treated with Lacalut White & Repair. This toothpaste is specially formulated for enamel mineralisation. It cares for and smooths the teeth and prevents their demineralisation, because it contains a special formula of active ingredients for dental care. Phosphate compounds whiten the teeth. Sodium fluoride in combination with hydroxyapatite (the main component of tooth enamel) improves the mineralisation process at the tooth surface. The tooth becomes smoother and more resistant.
Results and discussions
After treating the teeth with the aforementioned toothpastes, the samples were studied under the SEM microscope. SEM images are very useful to analyse the different samples [25][26][27]. The control sample was compared with the treated sample, and then the treated samples were compared to each other. Following the acid etching, the enamel surface was demineralised, simulating the demineralisation that can occur in the oral cavity due to poor hygiene.
Sample 1: Biorepair
In the SEM images taken at different magnifications, one can notice how the Biorepair toothpaste was deposited on the surface of the etched enamel, forming a protective layer. In areas with larger enamel defects, the deposition of hydroxyapatite microcrystals is noticeable. The toothpaste covers large cracks of 10-50 microns. Where the surface is eroded enough to open the dentinal canals, a thin layer of Biorepair is deposited; the layer that forms is thin and dense and closes the dentinal canaliculi. (Fig. 1 and Fig. 2: SEM images of the samples treated with Biorepair toothpaste, taken at different magnifications.)

Sample 2: Sensodyne Repair & Protect

When using the Sensodyne Repair & Protect toothpaste, one can notice that hydroxyapatite microcrystals have been deposited on the etched enamel prisms and on the enamel surface. The toothpaste covers both large and small cracks, the covering layer being thick and dense. The enamel surface treated with acid is restored; a large amount of reparative substance adhered to the surface of the exposed prisms. (Fig. 6 and Fig. 7: hydroxyapatite adhered to the surface of the exposed prisms.)

Sample 3: Lacalut White & Repair

When using the Lacalut toothpaste, one can also notice the deposition of the toothpaste on the enamel surface and the tendency to form a thin protective layer. (Fig. 8, Fig. 9 and Fig. 10: deposition of the toothpaste on the enamel surface and the tendency to form a thin protective layer.) At a higher magnification, one can notice the clear layout and shape of the exposed enamel prisms, but also the fact that the repair of the etched surface is not as effective as with the other toothpastes used. Sodium fluoride in combination with hydroxyapatite, the main component of tooth enamel, improves the mineralisation process at the tooth surface. The tooth becomes smoother and more resistant, and the small flaws in the enamel are repaired.
Conclusions
The use of toothpastes containing hydroxyapatite is effective in treating demineralised enamel surfaces and repairing small imperfections on the enamel surface [28].
All three toothpastes used had the expected result, forming a protective layer on the surface of the etched enamel.
In the case of the Biorepair toothpaste, deposits of hydroxyapatite microcrystals were observed at the level of the enamel defects.
The Sensodyne toothpaste was the most effective: in the studied samples, the deposition of a denser and thicker layer on the enamel surface was noticed, a layer that protects and recovers the enamel defects.
When using the Lacalut toothpaste, the deposited protective layer is thinner and one may notice enamel prisms which are not completely covered.
"Medicine",
"Materials Science"
] |
Technology of moisture-resistant chipboard using amino-formaldehyde binder
In this paper we study the possibility of using modified amino-formaldehyde resins in the production of moisture-resistant wood particle boards. As a modifier of amino-formaldehyde resins it is proposed to use a by-product of melamine production, melan, which is a light brown powder, insoluble in water. Melan is largely similar to melamine in its chemical composition and properties but, unlike the latter, it is a readily available raw material for the synthesis of polymers: commercial melamine has a high market value, whereas melan, as a by-product, is often available at no cost. Amino-formaldehyde resins modified with melan, despite its dark coloring, provide particleboards with a coloring identical to those obtained from conventional amino-formaldehyde resins, while the properties of the obtained materials allow them to be classified as moisture-resistant.
Introduction
Obtaining water-resistant board materials has been and remains one of the priority areas in the production of materials from wood and synthetic polymers.
It is known that amino-formaldehyde resins are the most preferable for the production of wood board materials, which have a reduced content of harmful substances, in particular formaldehyde and methanol, but do not have sufficient water resistance. It is possible to improve this index by chemical modification of aminoformaldehyde resins during or after their synthesis.
One of the most well-known and effective modifiers of amino-formaldehyde resins is melamine. Its effectiveness as a modifier has been repeatedly proven [1][2][3][4][5]. But it has a significant disadvantage - a fairly high market price, which makes this product not very accessible to most manufacturers of synthetic resins.
The optimal solution to this problem is to use, instead of pure melamine, a by-product of its production. This product is called melan; it is very similar to melamine in structure and properties and therefore also has a modifying effect on amino-formaldehyde resins, while being significantly cheaper and often available at no cost [6][7][8][9][10][11]. Consequently, the introduction of melan into amino-formaldehyde resins at the synthesis stage will reduce the cost of the finished resin by saving such expensive products as urea and melamine, that is, they can be partially replaced by melan [12][13][14][15][16]. And most importantly, melan, like melamine, can significantly increase the water resistance of amino-formaldehyde resins and, as a consequence, the water resistance of materials made with the modified binder.
The aim of the present studies was to develop a technology for obtaining water-resistant particleboards based on amino-formaldehyde resins modified with melan.
To achieve this aim, the following tasks were defined and carried out:
1. evaluation of the properties of amino-formaldehyde resins modified with melan;
2. development of rational modes of pressing moisture-resistant reinforced wood particle boards;
3. evaluation of the physical and mechanical characteristics of the obtained reinforced wood particle boards;
4. estimation of the production cost of the obtained reinforced wood particle boards.
Methodology and materials of the experiment
The object of the study in this work was reinforced wood particle boards. It was necessary to prove or disprove the effectiveness of amino-formaldehyde resins modified with melan in terms of the moisture resistance of the obtained materials [17][18][19][20][21][22].
All experiments were carried out in the laboratories of the FT9 Department of Chemistry and Chemical Technologies in Forestry, Mytishchi Branch of the Bauman Moscow State Technical University, which has all the laboratory equipment necessary for the synthesis of polymers, the analysis of their properties, the production of wood composite materials and the evaluation of their physical and mechanical properties.
To determine the optimal melan content, we synthesized modified resins with different amounts of melan; eventually it was decided to settle on 8, 14 and 20% melan. These resins participated in further, more detailed studies. Their basic properties were studied, which allowed the high quality of the modified resins to be judged.
For the manufacture of the particle boards, chips of fraction 7/5 were used, which is usually used to form the middle layer of reinforced wood particle boards. The chips were dried to a moisture content of 4%. Separation of the chips was carried out on a laboratory sorter that separates the factory chips into several fractions, including the -/10 fraction, which should be further crushed, and the dust fraction, which should be removed from the technological process. The size of the manufactured slabs was 300×130×16 mm.
Slabs were made using the average regimes adopted by most modern enterprises for the production of reinforced wood particle boards: pressing temperature 210 °C, holding time 0.25 min/mm, binder consumption 8.5 and 10.5%. The density of the boards was 700 and 900 kg/m3 [8][9][10][11].
Ammonium chloride was used as a hardener, in an amount of 1% of the resin dry matter. The curing time was determined at 20 °C and at 100 °C. Determination of curing at ambient temperature allows one to judge the viability (pot life) of the binder (modified resin + hardener); that is, it is important to know for how long the resin with the hardener introduced into it retains the fluidity and viscosity that allow the "tarring" of the wood particles to be conducted properly.
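For orientation, a minimal sketch of the mass balance per laboratory slab follows; it assumes (our assumption, not stated in the text) that the binder consumption refers to resin dry matter relative to the oven-dry mass of the wood particles, a common convention in particleboard practice.

```python
# Rough mass balance for one 300 x 130 x 16 mm laboratory slab, under the
# assumption that binder consumption (8.5 or 10.5%) is resin dry matter
# per oven-dry wood mass, and NH4Cl hardener is 1% of resin dry matter.

board_volume_m3 = 0.300 * 0.130 * 0.016        # slab dimensions in metres
for density in (700, 900):                     # target densities, kg/m3
    board_mass_kg = board_volume_m3 * density
    for binder_pct in (8.5, 10.5):
        # Resin solids contribute to the board mass, so the oven-dry wood
        # mass is approximately board mass / (1 + binder fraction).
        wood_mass_kg = board_mass_kg / (1 + binder_pct / 100)
        resin_dry_kg = wood_mass_kg * binder_pct / 100
        hardener_kg = resin_dry_kg * 0.01
        print(f"{density} kg/m3, {binder_pct}% binder: "
              f"resin {resin_dry_kg*1000:.0f} g, hardener {hardener_kg*1000:.1f} g")
```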
For the manufacture of the reinforced wood particle boards, three KFM resins were used, differing in the amount of melan added (8, 14 and 20%, denoted KFM-8, KFM-14 and KFM-20). The main indicators of these resins are presented in Table 1; among them, the viability of the binder is not less than 12 h, and the free formaldehyde content is 0.11, 0.08 and 0.09% for KFM-8, KFM-14 and KFM-20, respectively. The data in Table 1 show that the obtained resins have high physical and chemical characteristics, which will allow these resins to become a good basis for water-resistant amino-formaldehyde binders. Even with the hardener introduced, the modified binders retain a workable viscous state for 12 hours and more. The proposed resins are even more environmentally friendly in terms of free formaldehyde content than the well-known low-toxicity urea resin KF-MT-15, whose formaldehyde content is 0.15%.
The physical and mechanical properties of the obtained reinforced wood particle boards based on the KFM-8, KFM-14 and KFM-20 resins are presented in Tables 2-4. The analysis of the data in Tables 2-4 indicates that the boards obtained using amino-formaldehyde resins with different amounts of melan correspond to the quality indicators of moisture-resistant boards according to GOST 32399-2013 [14,15]. The tables also show that the type of binder, the amount of modifier, the binder consumption, as well as the density, affect the class of the boards. So, for example, a binder consumption of 10.5% and a density of 900 kg/m3 are required to obtain boards of grade P7 with high characteristics.
From the analysis of Tables 2-4, we can also see that some of the boards slightly deviate from the required standards, but these discrepancies will be eliminated during the finalization of the technology for the production of moisture-resistant reinforced wood particle boards [14,15].
Conclusions
1. The proposed technology for obtaining moisture-resistant particleboards has shown its effectiveness.
2. Melan, due to its similarity with melamine, is well suited as a modifier for the synthesis of amino-formaldehyde resins.
3. The modified binder allows moisture-resistant boards to be obtained, corresponding to GOST 32399-2013 and to formaldehyde toxicity class E1.
4. The proposed technology will produce board materials that can be widely used as structural and finishing materials.
"Materials Science"
] |
A machine learning-based classification model to support university students with dyslexia with personalized tools and strategies
Dyslexia is a specific learning disorder that causes issues related to reading, affecting around 10% of the worldwide population. It can compromise comprehension and memorization skills, and result in anxiety and lack of self-esteem if no support is provided. Moreover, this support should be highly personalized to be actually helpful. In this paper, a model to classify the most useful methodologies to support students with dyslexia has been created, with a focus on university students. The prediction algorithm is based on supervised machine learning techniques; starting from the issues that dyslexic students experience during their career, it is capable of suggesting customized support digital tools and learning strategies for each of them. The algorithm was trained and tested on data acquired through a self-evaluation questionnaire, which was designed and then spread to more than 1200 university students. It allowed 17 useful tools and 22 useful strategies to be detected. The results of the testing showed an average prediction accuracy higher than 90%, which rises to 94% if the 8 least-predictable tools/strategies are left out. In the light of this, it is possible to state that the implemented algorithm achieves the set goal and can, thus, help reduce the gap between dyslexic and non-dyslexic students. This achievement paves the way for a new modality of facing the problem of dyslexia by university institutions, which aims at modifying teaching activities toward students' needs, instead of simply reducing their study load or duties. This complies with the definition and the aims of inclusivity.
When the former is lacking but no problems are detected in the latter (for the Italian context, this means an intelligence quotient (IQ) equal to or greater than 85), the presence of dyslexia is declared. In the last decades, the advent and constant progress of biomedical engineering has raised the possibility of proposing innovative approaches. Firstly, the importance of analyzing neurological data was recognized, encouraging the use of functional magnetic resonance imaging (fMRI) to identify anomalies in the cerebral morphology of dyslexic subjects 8. Further, similar approaches have been proposed by evaluating the activation patterns gathered from electroencephalogram (EEG) tests and, in particular, the spectral features obtained while analyzing different brain areas 9.
Significant advances have been made not only in diagnostic instrumentation but also in diagnostic techniques, mainly thanks to artificial intelligence (AI), which has offered interesting new possibilities to analyze data, so much so that it can now be considered the effective turning point with respect to the most common practices. For example, machine learning (ML) algorithms were used to generate automatic predictions of the presence of dyslexia based on test results, as in 10, where an artificial neural network (ANN), fed with the outcomes of the Gibson test, was capable of identifying dyslexic subjects with an accuracy close to 90%. Similar outcomes were found in 11 and 12. The former showed that a support-vector machine (SVM) algorithm is able to discriminate reading disorders, again with a success rate of about 90%. The latter, instead, suggested how a fuzzy algorithm can help psychologists to detect potential cases of dyslexia. A human-driven machine learning algorithm was proposed in 13, where children at risk for dyslexia were identified with an accuracy of 99%. In addition, a screening tool named DysLexML, based on the combination of data gathered from eye movements during text reading and on the application of SVM, was implemented and associated with an accuracy of 97% 14. It is worth noting that, if on the one hand several efforts have been made to improve dyslexia diagnosis in children, on the other hand specific tools to detect and/or monitor dyslexia in adult subjects are still missing, with the exceptions of the LSC-SUA 15 and Adult Dyslexia Battery 16 tests which, however, do not exploit at all the potential offered by information technology (IT).
Unfortunately, when early diagnosis fails or is not performed, the issues caused by dyslexia during the learning process tend to be more severe and the probability of solving, or at least mitigating, them decreases considerably 6. In this case, developing specific support tools and strategies becomes of paramount importance in order to provide help properly and usefully. Again, IT and, in particular, AI can offer a wide variety of promising solutions. One of the most interesting was presented in 17, where an assistive reading tool was designed by combining read-aloud technologies and AI paradigms applied to eye tracking. A pilot study on 20 children, aged from 8 to 10 years, showed a 24% increase in a text comprehension score. In 18, an assistive digital platform was implemented in the Malay language; hidden Markov models and an ANN were used to make the platform self-adaptable to the learning environment. Another digital support tool, called ALEXZA, was introduced in 19. It helps young dyslexic students while reading, by using AI to recognize text from pictures and read it aloud, also suggesting common synonyms in case of unfamiliar words. A further online platform for e-learning has been proposed by 20 and tested on students aged from 8 to 12 years. This platform is able to adapt the methodology for providing the correct learning approach based on the user profile and progress. An interesting approach that explores an alternative way to offer support to dyslexic students was introduced in 21. Here, AI was employed to develop an augmentative and alternative communication (AAC) model, capable of classifying questions uniquely and of providing users with related pictograms. The study reported a decrease of up to 66% in the effort and time needed to interact among the users. Finally, the work 22 proposed an adaptive e-learning method able to detect the dyslexia type and to offer an appropriate learning methodology to the user. However, after the first identification, no further adaptations were applied, leading to a low possibility of customizing the methodology based on user needs and progress. From this overview, it appears clearly how several efforts have been made to help students with dyslexia by offering adaptive e-learning methods, but such efforts are largely addressed to students at primary schools (7-12 years old). On the contrary, no digital platform based on AI has been proposed for the university career, even though it is well known that the inclusion of students with dyslexia in higher education is one of the open challenges nowadays. An exception to this is represented by 23. It describes the project VRAIlexia, in which AI is employed, jointly with virtual reality (VR) 24, to develop a platform capable of offering personalized support to dyslexic students during their academic career.
The work here presented is framed within this project. In particular, it is aimed at building a classification model of the most useful digital tools and learning strategies, customized for each university student with dyslexia, based on the challenges they have encountered during their educational journey. The goal is to provide tailored support methodologies to each student, in order to fill the gap between dyslexics and non-dyslexics, which very often arises in the university years. ML techniques have been explored and proved to be an optimal tool to achieve this purpose. In the next section, the methodology is presented in detail, focusing on all the main aspects, from data collection and processing to the choice, training and testing of the algorithms. Then, in the "Results" section, the results are shown and discussed. The final section is left to the conclusions.
Data collection
To gather the data on which to train and then test the final ML prediction algorithm, a questionnaire was elaborated. It is divided into three main sections. The first one concerns aspects related to demography (age, gender, provenience, etc.) and dyslexia history (presence of relatives with dyslexia, possible problems during the study career, received support, etc.), and will not be taken into account in this work; however, a wide analysis of its information has been made in 25. The second section is composed of questions related to the issues that students may have experienced during their learning path. The third section, instead, contains questions about the supporting tools and strategies (or services) they have found most useful to mitigate learning problems. In Tables 1 and 2 (a) and (b), the complete lists of the questions asked in the sections of interest are reported.
In the second section, the participants were asked to express how severely they have been affected by each of the listed issues, by choosing an option among: "not at all", "very little", "little", "medium", "much" and "very much". In the third section, instead, the participants could express their opinion about the usefulness of each of the listed supporting tools and strategies, with the same options as above but with the addition of "never tried" and "don't know" for the answers related to the tools. This allows discrimination between a useless tool and an unknown one, so as not to bias the results. In both cases, an empty textbox was inserted, where the participants could add possible additional information.
The questionnaire was created by a group of psychologists with solid knowledge about dyslexia in the adult population. They initially sketched out a first list of items, based on their professional experience. Then, they interviewed a sample of twenty university students with dyslexia to refine such a list. Finally, another group of experts optimized the questionnaire.

(Table 1: list of the questions about the possible issues experienced by dyslexic students during their career, asked in the 2nd section of the questionnaire; each item answers the question "Have you ever experienced the following issues?")

The optimized version of the questionnaire was then published online and spread to people compliant with the following characteristics:
1. having a certified diagnosis of dyslexia;
2. being a native Italian speaker (native speakers of other languages were excluded, since each language has its own peculiar features, which cause problems of a different nature to dyslexic students; thus, it is not a proper approach to consider more than one language jointly 26);
3. being more than 18 years old;
4. attending university or having finished or abandoned it less than five years before filling in the questionnaire.
As previously mentioned, comorbidity with other SLDs is likely to occur. Since their presence could bias the answers, students with a certificate of the simultaneous presence of SLDs other than dyslexia have been discarded. Dyscalculia and dysgraphia will each be the object of two other, similar studies. Handling the three disorders separately should avoid biases and provide more targeted results.
The collection of the data, as well as all the experimental protocols employed in this research, were subjected to a double conformity check. Indeed, they were assessed first by the Ethics Committee of the University of Tuscia and then by the National University Conference of Disability Delegates (CNUD), an entity that represents the policy and activities of Italian universities related to students with SLDs and disabilities. The result of the assessment was positive in both cases. In addition, data collection was conducted according to the ethical standards outlined in the 1964 Declaration of Helsinki. The collected data were treated according to articles 13-14 of the GDPR 2016/679 of the European Union, to ensure that the privacy of the participants was respected. Specifically, data have been acquired and processed completely anonymously, and used only for research purposes. All the participants gave their informed consent before filling in the questionnaire, by digitally signing an agreement.
Prediction algorithm design
As anticipated, only the questionnaire items about the issues encountered by dyslexic students during their career, and about the tools and strategies they found most useful to face such problems, were taken into account in designing the classification model. In particular, the issues were used as input to train and then test the AI algorithms (the predictors), whereas the tools and strategies were used as output (the labels) for each observed sample. This choice, jointly with the nature of the available data, suggested relying on supervised ML techniques 27. Deep learning algorithms would indeed be likely to result in overfitting, whereas reinforcement learning algorithms would have no sequential data available on which to be trained 28. Furthermore, their higher complexity would not be justified for the addressed problem.
A preliminary choice that had to be made concerned whether to treat the output variables jointly or singularly and, in the first case, how to group them. The choice depends on whether the output variables, or some of them, are considered correlated to each other or not. Four options are meaningful: (i) all the variables are considered correlated and, thus, treated jointly (in this case, the labels would be vectors containing the 39 usefulness scores given to the 17 tools and the 22 strategies); (ii) all the tools and all the strategies are considered intra-correlated but not inter-correlated, so the variables are split into two groups (in this case, two predictions would be made, one using as label a 17-element vector with the scores given to the tools and the other using a 22-element vector with the scores given to the strategies); (iii) following a correlation criterion, some groups of variables are considered intra-correlated but not inter-correlated and are divided into n groups (in this case, n predictions would be made, each using as label a vector with the scores given to the tools/strategies within a specific group); (iv) the variables are considered uncorrelated (in this case, a single prediction would be made for each different tool/strategy, using the score given to that tool/strategy as label).

Even if it could be intuitively hypothesized that some of the tools or strategies listed in Table 2 are somehow correlated, no evidence is present in the literature about support methodologies that correlate to each other. Furthermore, the cross-correlation matrix $\rho_{X,Y}$ was calculated statistically, by assigning a score to the given answer about the usefulness of each tool/strategy and considering it as the value assumed by that variable, as explained in detail in the next subsection and as shown in Table 3. The chosen correlation criterion was Spearman's, since it is particularly suitable for ordinal variables like the considered ones. Thus:

$$\rho_{X,Y} = 1 - \frac{6 \sum_{i=1}^{N_{oss}} \big(R(x_i) - R(y_i)\big)^2}{N_{oss}\left(N_{oss}^2 - 1\right)} \qquad (1)$$

where $x_i$ and $y_i$ are the i-th observations of two generic output variables X and Y, namely the scores given to two tools or strategies, $R(\cdot)$ denotes their rank, and $N_{oss}$ is the number of available observations.

Figure 1 shows a graphical representation of $|\rho_{X,Y}|$, where the 17 tools have been indicated with the numbers from 1 to 17 and the 22 strategies with the numbers from 18 to 39, whereas the absolute value of Spearman's correlation coefficient, that is, each entry of $|\rho_{X,Y}|$, has been expressed with colors, whose values are derivable from the color bar. Most of the pairs of variables have a weak correlation. Only 4 of the 741 pairs, namely less than 0.54%, have a strong correlation, stated by $|\rho_{X,Y}| > 0.7$. Thus, option (iv), namely considering the output variables singularly, is the most meaningful and was chosen.

This choice also gave the possibility to use a different ML algorithm for the prediction of each variable, improving the overall accuracy. In fact, one of the algorithms could be the strongest in predicting a variable j but weaker than another in predicting a variable k. Thus, by using only the former, a worse accuracy would be obtained in predicting k, whereas by using only the latter, a loss of accuracy would be experienced in predicting j. Using the best-predicting algorithm for each variable, instead, led to the best achievable accuracy. The same consideration was applied to the algorithms' setup: the best setup of an algorithm for the prediction of one of the variables may not be the best
to predict another variable. Thus, different setups were chosen for every output variable. Four of the best-performing supervised ML algorithms for classification 29,30 were selected to implement the classification model, namely:
• random forest (RF);
• linear/logistic regression (LR);
• k-nearest neighbors (kNN);
• support-vector machines (SVM).
This choice was motivated also by the fact that these algorithms present different classification abilities 31. Thus, their joint use ensures the exhaustiveness of the performed research.
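As a concrete illustration of the correlation screening described above, a minimal Python sketch follows; it computes the Spearman cross-correlation matrix over the 39 output variables and counts the strongly correlated pairs. The `scores` array is a hypothetical stand-in for the real questionnaire data.

```python
# Sketch of the Spearman screening over the 39 output variables
# (usefulness scores of the 17 tools + 22 strategies).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
scores = rng.integers(0, 6, size=(1217, 39))   # placeholder for the real data

rho, _ = spearmanr(scores)                     # 39 x 39 correlation matrix
iu = np.triu_indices(39, k=1)                  # the 741 distinct variable pairs
strong = np.abs(rho[iu]) > 0.7                 # threshold used in the paper
print(f"strongly correlated pairs: {strong.sum()} / {iu[0].size}")
```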
Appropriate boosting techniques were applied when recommended. In particular, AdaBoost was used with RF and kNN, and gradient boosting with LR. These techniques were preferred to newer ones, such as XGBoost, since the complexity of the latter does not pay off in datasets of the size used here, and since AdaBoost has been demonstrated to work better in binary classification problems 32, as in the considered case (for the size of the dataset and the binarization of the problem, refer to the "Preprocessing of the data" section). No boosting technique was applied to SVM, since it is already a strong classifier, and weakening it only to strengthen it again with boosting 33 did not seem a reasonable solution for this work.
It is worth repeating that, for the reasons stated previously, the final prediction algorithm is not one of the four listed above with a particular setup, but a sort of super-algorithm that performs the prediction of each single tool or strategy by relying on the best-performing algorithm within the list, with the most performant setup for that specific tool/strategy.
Preprocessing of the data
The collected database was composed of 1259 answers to the questionnaire. Among them, 42 were discarded for several reasons, such as incompleteness, impossibility of verifying the presence of dyslexia, comorbidities with other SLDs, non-compliance with the four criteria set out in the "Data collection" section, and random filling. The participants falling into the last category were detected by the expert psychologists. However, in order not to perturb the results, they discarded only the very evident cases of random filling, for example questionnaires in which the same answer was given to more than 80% of the items, or a questionnaire in which the participant stated that they did not know any of the support tools. The remaining 1217 answers were preprocessed, in order to have a suitable data format for running the ML algorithms' training and testing. Thus, firstly, a score was assigned to each possible answer to the three groups of questions about encountered difficulties, support tools and support strategies, respectively, by following the equivalences shown in Table 3. Concerning the answers "never tried" and "don't know" in the support tools group, if more than 15% of the participants selected one of these two options for any question, then that tool was excluded from the analysis. Otherwise, the score was inferred by using the one equivalent to the most frequent answer, namely the mode of the scores given to the considered tool, in order not to decrease the number of samples. Only tool T4, namely the use of specific fonts for dyslexic students, did not pass the check and was excluded. The open commentaries of the students were also examined, but no significant additional information was found.
Before starting the training of the algorithms, a further analysis of the collected data was conducted, to verify their distribution. Concerning the input variables, namely the issues encountered by dyslexic students, a clear prevalence of scores between 1 and 5 was noticed. This trend is often observed in clinical samples of psychology studies; however, in this case, the presence of 0 scores is considerably lower than in other cases. The pool of psychologists that supported the experiment ascribed this to the lack of self-esteem that makes dyslexic students more pessimistic when they have to self-evaluate their problems 2. Further analysis will be performed about this. Thus, a switch from a 0-to-5 to a 1-to-5 scale was operated, by setting to 1 the few answers with score 0 or 1. After the reset of the scale, the distribution of the scores is approximately uniform for every issue. Concerning the output variables (namely the support tools and strategies), instead, the distribution is imbalanced for some of them, with a maximum ratio of almost 1/26 between the number of answers with the least given score and the number of answers with the most given one. Since the aim of the algorithm is to indicate whether a specific support methodology is useful or not, the scores of the output variables were thresholded, so as to obtain the desired binary response. In particular, scores lower than 2.5 were considered a statement of uselessness, whereas scores higher than 2.5 were considered a statement of usefulness. The selection of 2.5 as the threshold stems from the fact that this is the central value of the 0-to-5 score interval, thus ensuring that the same number of possible answers is assigned to the two classes "useful" and "useless". It is worth noting that, with this choice, the participants for which a certain tool/strategy is marked as useless actually answered that it is "not at all", "very little" or, at most, just "little" useful; conversely, the students for which a certain tool/strategy is marked as useful actually stated that it has a "very much", a "much" or, at least, a "medium" utility, which is reasonable. After the thresholding, the highest imbalance between the two classes decreased to a ratio of around 1/5, with a prevalence of usefulness statements. To deal with such an imbalance, at the moment of verifying the accuracy of the final algorithm, a different weight will be assigned depending on whether the methodology is predicted as useful or as useless, as explained later in detail.
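The preprocessing pipeline just described can be summarized in code; the sketch below is only indicative, with hypothetical column names, and assumes the raw answers are stored in a pandas DataFrame (strategy columns would be handled analogously to the tool columns, without the imputation step).

```python
# Indicative sketch of the preprocessing described above. Column names and
# the DataFrame layout are hypothetical, not the actual questionnaire fields.
import pandas as pd

SCORE = {"not at all": 0, "very little": 1, "little": 2,
         "medium": 3, "much": 4, "very much": 5}

def preprocess(answers: pd.DataFrame, issue_cols, tool_cols):
    df = answers.copy()
    for col in tool_cols:
        known = df[col].map(SCORE)             # "never tried"/"don't know" -> NaN
        if known.isna().mean() > 0.15:
            df = df.drop(columns=col)          # excluded, as tool T4 in the paper
            continue
        df[col] = known.fillna(known.mode()[0])  # impute with the modal score
    for col in issue_cols:                     # inputs: 0-to-5 -> 1-to-5 scale
        df[col] = df[col].map(SCORE).clip(lower=1)
    for col in [c for c in tool_cols if c in df.columns]:
        df[col] = (df[col] > 2.5).astype(int)  # binarize: 1 = useful, 0 = useless
    return df
```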
Training and testing of the ML algorithms
The four ML algorithms that compose the final prediction algorithm were trained and tested multiple times, using different setups, in order to find the one that allows the highest accuracy to be achieved. As explained above, this process was repeated independently for each support tool or strategy, to increase the overall accuracy of the final prediction algorithm.
Thus, first the dataset was randomly split in two, using 75% of it for the training and validation phase and the remaining 25% for the testing phase. Then, on the first group, stratified tenfold cross-validation was used in each single trial, so as to ensure that all the predictors and the labels are present in each partition of the collected data. This way, each fold is a smaller representation of the whole dataset and possible bias is avoided.
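In code, the split-and-validate protocol can be sketched as follows (scikit-learn; the arrays and the number of issue items are illustrative placeholders, not the real data).

```python
# Sketch of the evaluation protocol for a single tool/strategy label:
# 75/25 train-test split, then stratified 10-fold cross-validation on the
# training part. X and y are illustrative placeholders (the number of issue
# items, 23 here, is hypothetical).
import numpy as np
from sklearn.model_selection import train_test_split, StratifiedKFold

rng = np.random.default_rng(1)
X = rng.integers(1, 6, size=(1217, 23)).astype(float)  # 1-to-5 issue scores
y = rng.integers(0, 2, size=1217)                      # binarized usefulness

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
for i_train, i_val in skf.split(X_tr, y_tr):
    # fit each candidate algorithm/setup on i_train and score it on i_val,
    # accumulating the per-fold weighted accuracy of Eq. (2)
    pass
```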
The tested setups for each ML algorithm are shown below.
Random forest setups
To train and validate the RF algorithm, the bootstrap technique was used, by randomly considering one third of the variables at each decision split and repeating for 50 decision trees. Three options were considered for treating the input variables, namely as scores (ordinal variables), as numeric values, and as binary values obtained by thresholding the scores and considering a difficulty as present if its score is higher than the threshold and as absent otherwise. Different thresholds (Thr) were tried, that is, Thr = 1.5, Thr = 2.5, Thr = 3.5, and Thr = 4.5. Thus, a total of 6 setups was tested.
k-nearest neighbors setups
To train and test the kNN algorithm, the input variables were treated both as numeric values and as binary values, obtained with the same thresholding method described previously. The Euclidean and the Hamming distance were used in the first and in the second case, respectively. The considered values for the k parameter range from 7 to 39, with a step of 4. A total of 45 different setups (5 options for the input variables × 9 values of k) was, thus, considered.
Support vector machines setups
Three different kernels were considered in training the SVM algorithm, that is, linear, polynomial and radial basis function (RBF). Some preliminary tests had been carried out earlier to determine which degree of the polynomial kernel allowed the best performance, and the result was 2. Again, the input variables were treated both as numeric values and as binary values obtained with the same thresholding method described previously.
Thus, 15 different setups (3 options for kernels × 5 options for input) were tested. Summing up, 67 tests were carried out to find the best classification model for each of the 17 tools and of the 22 strategies. A total of 67 × (17 + 22) = 2613 trials was, thus, performed.
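The setup counts quoted above can be cross-checked with a few lines of arithmetic; note that the number of LR setups is not detailed in the text, so a single setup is assumed here in order to make the stated total match.

```python
# Cross-check of the setup grid sizes. The LR count is not detailed in the
# text; a single setup is assumed so that the stated total of 67 is matched.
rf_setups  = 2 + 4        # {scores, numeric} inputs + 4 binarization thresholds
knn_setups = 5 * 9        # 5 input encodings x k in {7, 11, ..., 39}
svm_setups = 3 * 5        # {linear, poly(2), RBF} x 5 input encodings
lr_setups  = 1            # assumption (not specified above)

total = rf_setups + knn_setups + svm_setups + lr_setups
print(total, total * (17 + 22))   # 67 setups -> 67 * 39 = 2613 trials
```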
To evaluate the performance of the algorithms, the overall weighted prediction accuracy (A) for each tool or strategy was calculated with the following formula:

$$A = \frac{1}{N_F} \sum_{f=1}^{N_F} \left( w_y \frac{N_f^{(C/y)}}{N^{(T/y)}} + w_n \frac{N_f^{(C/n)}}{N^{(T/n)}} \right) \qquad (2)$$

where $N_F$ is the number of folds used for cross-validation (namely 10), $N_f^{(C/y)}$ and $N_f^{(C/n)}$ are the numbers of correct predictions in the case of "useful" and "useless" algorithm response, respectively, $N^{(T/y)}$ and $N^{(T/n)}$ are the total numbers of tests performed in each fold in the case of "useful" and "useless" response, respectively, and $w_y$ and $w_n$ are the weights used to take into consideration the imbalance of the two classes, as previously explained. $w_y$ and $w_n$ were set to the normalized inverse frequencies of the "useful" and "useless" responses, so as to give a higher importance to less frequent predictions and vice versa. In addition, the F1-score was also calculated, so as to include precision and recall among the performance indexes.
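A possible implementation of Eq. (2) is sketched below; the grouping of "correct predictions" by the true class of each sample is our interpretation of the notation, and the weights are the normalized inverse class frequencies, as stated above.

```python
# Sketch of the weighted accuracy of Eq. (2). Assumes both classes occur
# in the data; class weights are the normalized inverse frequencies of the
# "useful" (1) and "useless" (0) responses.
import numpy as np

def weighted_accuracy(y_true_folds, y_pred_folds):
    """y_true_folds, y_pred_folds: lists of binary arrays, one per CV fold."""
    all_true = np.concatenate(y_true_folds)
    freq_useful = all_true.mean()
    inv = np.array([1.0 / (1.0 - freq_useful), 1.0 / freq_useful])
    w_n, w_y = inv / inv.sum()           # normalized inverse frequencies
    fold_accs = []
    for t, p in zip(y_true_folds, y_pred_folds):
        acc_y = np.mean(p[t == 1] == 1)  # correct among "useful" cases
        acc_n = np.mean(p[t == 0] == 0)  # correct among "useless" cases
        fold_accs.append(w_y * acc_y + w_n * acc_n)
    return float(np.mean(fold_accs))

# Example with two small folds:
yt = [np.array([1, 1, 0, 1]), np.array([0, 1, 1, 0])]
yp = [np.array([1, 0, 0, 1]), np.array([0, 1, 0, 0])]
print(f"A = {weighted_accuracy(yt, yp):.3f}")
```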
Final testing of the procedure
The final super-algorithm, composed of the most accurate ML algorithm with its best setup for each single tool and strategy, was then tested on the part of the dataset left for this purpose, by calculating its overall accuracy.
Finally, a further evaluation step was performed in a real scenario. A group of students answered the questions in Table 1. Their answers were input to the classification model, which output the best support tools and strategies for each of them. Then, the students tried all the suggested tools and strategies and were asked to state which of them were useful in studying and which were not. The response of each student was compared with the output of the classification model, to evaluate its accuracy. Note that, among the strategies listed in Table 2b, S10 was not taken into account yet, since it requires a larger time span to be verified; however, a student association has already been created and its verification is ongoing. For the same reason, strategies S11, S17 and S18 were not applied to an entire course, but only to some of its topics. S11 was verified by teaching some topics in class and some others online, or only by providing study material (books and notes) without the presence of a teacher. S17, instead, was verified by providing information about specific topics before the beginning of the lessons. Finally, S18 was tested by grouping topics into shorter sub-topic modules. A total of 102 students with dyslexia participated in this last evaluation step by answering the questionnaire. Among its questions, one asks which of the support tools and strategies listed in Table 2 had been extensively or systematically used before. Fifty-six students answered that they had not tried any of them. This group was chosen for the evaluation, since for them no bias should be present. Among them, 13 were not able to participate for personal reasons, whereas 43 took part in the experiment.
Results
The training and evaluation of the ML algorithms were carried out on the 1217 questionnaire answers that passed the selection procedure described in the "Preprocessing of the data" section. As said there, tool T4 was excluded from the analysis since more than 15% of the participants did not know it. Thus, a total of 16 tools and 22 strategies was effectively included in the analysis.
The weighted prediction accuracy and the F1-score of all the algorithms and their setups were then calculated, and the best-performing one for each single tool and strategy was selected. The selection criterion was: "the winner is the algorithm/setup that has the highest mean value of weighted accuracy and F1-score". Table 4 reports the names of the most effective algorithms and their best setups, jointly with the achieved prediction accuracy and F1-score, for each tool (a) and strategy (b). The use of these algorithms/setups constitutes the final classification model. Obviously, for the prediction of the i-th tool/strategy, the best-performing algorithm/setup for that tool/strategy will be run.
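The selection rule can be expressed compactly; in the sketch below, `results` is a hypothetical dictionary filled during cross-validation, mapping each label to per-(algorithm, setup) pairs of weighted accuracy and F1-score.

```python
# Per-label model selection: keep the (algorithm, setup) with the highest
# mean of weighted accuracy and F1-score. `results` is hypothetical.
def select_best(results):
    return {label: max(runs, key=lambda k: sum(runs[k]) / 2)
            for label, runs in results.items()}

results = {
    "T1": {("SVM", "RBF, numeric input"): (0.91, 0.94),
           ("RF", "binary input, Thr=2.5"): (0.88, 0.92)},
}
print(select_best(results))   # {'T1': ('SVM', 'RBF, numeric input')}
```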
The average weighted accuracy achieved globally and separately for the tools and the strategies is reported in Table 5, together with the standard deviation, the minimum and the maximum of the results, and the average F1-score.
The weighted accuracy in predicting the tools and the strategies is around 90% on average (88.7% for the former and 91.6% for the latter) and, globally, 90.4% is achieved, with a low standard deviation (0.079 and 0.088 for tools and strategies, respectively, and 0.084 globally). Thus, the implemented algorithm outputs around 9 correct predictions out of every 10 about the most useful support methodologies for university students with dyslexia, based on the issues they encountered. The good result is confirmed by the F1-score, which is 0.927 for the tools, 0.945 for the strategies and 0.938 overall. The highest accuracy achieved is 0.978 for the tools and 0.994 for the strategies. The lowest, instead, is 0.725 for the tools and 0.705 for the strategies, which are considerably worse than the best cases. However, from Table 4, it appears that only 4 of the 16 tools and 4 of the 22 strategies are predicted with an accuracy lower than 0.85. Thus, if this level of accuracy is considered insufficient, it makes sense to forgo predicting the usefulness of these 8 tools/strategies and concentrate on the remaining 30. Table 6 shows how the results change, with respect to Table 5, if the less predictable tools/strategies are not considered. The average accuracy rises to 92.7% for the tools, 94.8% for the strategies and 94.0% globally, with a standard deviation of 0.029, 0.043 and 0.039, respectively. The minimum accuracy achieved is now 0.855. The average F1-score also rises to 0.955 and 0.968 for tools and strategies, respectively, and to 0.966 globally, further confirming the quality of the implemented classification model. It is worth noting that, even when choosing the best performing ML algorithm with its best setup among all the tested ones, the achieved performance is considerably lower than when choosing a different algorithm for the prediction of each tool and strategy. In particular, SVM with numeric input and RBF kernel, which reached the highest average weighted accuracy (86.5% and 89.4%, by considering and discarding the performances under 85% accuracy, respectively), showed a performance around 4% lower than the implemented method. This justifies the choice made.
It is nevertheless interesting to compare the performance of the single algorithms. From Table 4 it can be noted that, as said, SVM is the algorithm that most often (21 times out of 38) outperforms the others, especially when the input is considered as numeric and the RBF kernel is used. k-NN follows (10 times out of 38), with k set at 31 (8 times) and 39 (twice) and, mostly, with numeric input. Then, random forest wins 7 times out of 38, always when the input is considered as a score. Logistic regression never outperforms the other algorithms.
Once the best prediction algorithm had been determined and implemented, it was tested on a real case. As said, the test consisted in comparing the support tools and strategies that a sample of 43 dyslexic students found useful or useless, after trying them while studying, with the prediction output by the algorithm when fed with the issues experienced by the students during their career. The results, reported in Table 7, confirmed that the proposed prediction algorithm performs very accurately. Among all the support methodologies (tools plus strategies) predicted as useful, more than 92% are actually useful and, among all those predicted as useless, almost 90% are actually so. Similar results were obtained for tools and strategies separately. This demonstrates that the algorithm can be profitably employed to predict the best support methodologies for dyslexic students.
Conclusions
This paper deals with the possibility of offering customized support to university students with dyslexia, by creating a classification model of the most useful digital tools and learning strategies for each of them individually, based on the issues they have generally encountered during their educational journey.
To this end, an AI algorithm has been implemented, based on state-of-the-art ML techniques. In particular, four ML algorithms with different setups have been trained and tested, and the best performing combination in predicting each tool/strategy has been chosen to predict that tool/strategy. To collect the data needed for algorithm training/testing, a questionnaire about the difficulties encountered while studying and the most helpful support methodologies was created and then distributed to dyslexic university students. Questionnaire items are answered with a Likert scale-based level of satisfaction, which was converted to a score between 0 and 5 and then, in the case of support methodologies, thresholded at half of the score range to obtain a yes/no response about their utility. The thresholding operation could introduce possible misclassifications but, for the scope of this work, it is a step that must be performed. Indeed, it is of primary importance to state clearly to students with dyslexia whether each tool/strategy can be useful for them or not, avoiding the noise that can arise from possible misunderstanding of the meaning of a utility score.
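As a concrete illustration of the thresholding step (the 0-5 scale and the half-range cut-off come from the text; whether a score exactly at the midpoint counts as useful is our assumption):

```python
def to_useful_response(score, score_max=5):
    """Binarize a 0..score_max utility score at half of the score range."""
    return "yes" if score > score_max / 2 else "no"

assert to_useful_response(4) == "yes"
assert to_useful_response(2) == "no"
```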
The results of the testing of the final algorithm show that the prediction accuracy, which was opportunely weighted to take into account the class imbalance of the considered variables, ranges from 72.5 to 97.8% for the tools and from 70.5 to 99.4% for the strategies, with averages of 88.7% and 91.6%, respectively. At a global level, the achieved average accuracy is 90.4%. In addition, the low standard deviation suggests that the prediction accuracy is around the average for the majority of the tools and strategies. Thus, it is meaningful to state that the implemented algorithm predicts correctly in around 9 cases out of 10. Its precision is also confirmed by the high F1-score (0.927 for the tools, 0.945 for the strategies and 0.938 globally).
The average prediction accuracy has also been recalculated after discarding those tools and strategies for which a value lower than 85% had been achieved, which are only 4 per category. In this case, the average accuracy rises to 92.7% for the tools, 94.8% for the strategies and 94.0% globally. In addition, the proposed algorithm has been tested on a real case, achieving a prediction accuracy higher than 90% in suggesting useful and useless support methodologies to a sample of 43 dyslexic students.
Comparisons with other approaches could not be made since, to the best of the authors' knowledge after a thorough literature review, this is the first approach that aims at applying AI to estimate the most appropriate support methodologies in a personalized way for each student. Indeed, AI has generally been used directly to create aid tools, regardless of student-specific needs [19][20][21] , or as a diagnostic tool [10][11][12][13][14] . The only study focused on the selection of a personalized learning experience based on different dyslexia types is the one proposed in 22 . However, a validation procedure for the algorithms used was not reported and prediction accuracy was not calculated. Furthermore, due to the absence of complete clinical reports of dyslexia made by experts, a comparison between the tools and strategies suggested by the implemented algorithm and those suggested by experts could not be made either. This represents a limitation of the validation procedure. Another limitation lies in the fact that the evaluation of the accuracy of the algorithm has been performed only in the short and medium term. A long-term evaluation process is needed and is being performed.
Despite these limitations, the obtained results prove that the implemented classification model can be successfully used to provide dyslexic students at university with personalized support tools and strategies, given the issues that have affected their learning path. This achievement opens the door to a new way of thinking and acting within university institutions about the problem of dyslexia, which aims at boosting inclusivity by changing the teaching modalities toward the affected students' needs, thanks to the use of new digital techniques and technologies, instead of simply decreasing the study load, giving more time to prepare exams, cutting some exams from the programme, etc., as has been done until now. This work will continue as provided for by the objectives of the above-mentioned VRAIlexia project ("Introduction" section): the predicted tools and strategies for each student will be tested further by the students themselves, so as to obtain feedback about their usefulness. Through such feedback, tools and strategies will be refined to meet the students' requirements and again tested by them. This process will be repeated until support tools and strategies are obtained which are fully personalized. Reinforcement learning techniques will be explored to try to reach this goal. At the finish line, a considerable step toward a real and fair inclusion within university of all students will have been taken.
Figure 1. Spearman's cross-correlation (absolute value) matrix of the scores given (as in Table 3) to the usefulness of tools and strategies. These are indicated with an ID number ranging from 1 to 17 (for the tools) and from 18 to 39 (for the strategies), whereas the correlation values are expressed with colors, whose values are mapped in the color bar on the right.
Table 2. List of the questions about the most useful tools (a) and strategies (b) for dyslexic students, asked in the 3rd section of the questionnaire. Each row pairs a tool or strategy ID with the question "Have you considered the following supporting tool/strategy as useful?".
Regression setups
Since output variables were binarized, logistic regression was used instead of linear regression. The characteristics of the LR algorithm required input variable scores to be treated solely as numeric values, thus 1 setup of this algorithm was tested.
Table 4. Best-predicting algorithms and setups for each tool (a) and strategy (b), and the accuracy achieved by each of them.
Table 5. Average, standard deviation, maximum and minimum accuracy achieved over all the tools, all the strategies and globally on both.
Table 6. Average, standard deviation, maximum and minimum accuracy achieved over all the tools, all the strategies and globally on both, calculated by excluding predictions with a weighted accuracy lower than 85%. | 8,815.8 | 2024-01-02T00:00:00.000 | [
"Education",
"Computer Science"
] |
Injectable glass polyalkenoate cements: evaluation of their rheological and mechanical properties with and without the incorporation of lidocaine hydrochloride
Lidocaine hydrochloride is used as an anesthetic in many clinical applications. This short communication investigates the effect of complete substitution of lidocaine hydrochloride for deionized (DI) water on the physico-chemical properties of two novel glass polyalkenoate cements. Substituting DI water with lidocaine hydrochloride resulted in cements with shorter working times but comparable setting times and mechanical properties. Fourier transform infrared spectroscopy confirmed that the setting reaction in cements containing DI water and lidocaine hydrochloride was completed within 24 h, post cement preparation and maturation. Further, it was explained that lidocaine hydrochloride binds to poly(acrylic) acid (PAA) due to electrostatic forces between the positively charged amino group of lidocaine hydrochloride and the carboxylic group of the PAA, resulting in a compact poly-complex precipitate.
Introduction
Bioactive glasses are implanted for tissue replacement or regeneration [1]. Such glasses elicit a biological response at their surface which stimulates cell growth and gene response for the formation of a bond between the material and living tissues [2,3].
Glass polyalkenoate cements (GPCs) are formed by an acid-base reaction between a water-soluble poly (acrylic) acid (PAA) and an acid-degradable fluoroalumino-silicate bioactive glass [4]. GPCs were initially developed in the early 1970s for use in restorative dentistry [5]. A poly-salt matrix is formed in GPCs through the degradation of the glass, leading to the release of free cations which associate with the carboxylic anions from the PAA [6]. The crosslinking mechanism is a continuous process during which acrylate networks are established, leading to an increase in strength over time [7,8]. The physical properties of GPCs have been shown to vary with alteration of the powder-liquid ratio, acid concentration, molecular weight of the PAA and methods of curing [9]. However, little research has been reported concerning the changes in the physical properties of GPCs upon the replacement of deionized (DI) water with a different agent.
Lidocaine hydrochloride is used as an anesthetic in many clinical applications [10,11]. It functions by inhibiting the flow of sodium ions into the membranes of neurons when activated through an exterior stimulant, causing temporary relief from pain [12]. Of the local anesthetics clinically available, lidocaine is the most widely used [12]. Substituting lidocaine for water in a GPC is a novel proposal, given the expansive range of its clinical use. Injection of the resultant GPC could provide stabilization to a fracture whilst relieving patient pain.
The objective of this preliminary study is to determine the change in mechanical and rheological properties in GPCs that result from the complete substitution of lidocaine hydrochloride for water in the starting composition. The objective is realized by utilizing two distinct aluminum-free glass compositions, BT101 and TA2, and observing any variation in the properties of GPCs prepared from them as a result of substituting lidocaine hydrochloride for DI water (table 1). The glasses were prepared by weighing out appropriate amounts of the analytical grade reagents (Fisher Scientific, Ottawa and Sigma-Aldrich, Oakville, Canada) and mixing them in a container. Platinum (Pt) crucibles and a Lindberg/Blue M model furnace (Lindberg/Blue M, Asheville, NC, USA) with a UP-550 controller were used for melting the powders. BT101 glass was melted at 1500°C for 1 h while TA2 glass was melted at 1650°C for 1.5 h. The melts were shock quenched in water to obtain frit, which was then dried in an oven (100°C, 1 h), ground using a ball mill (400 rounds per minute, 15 min), and sieved to obtain particle sizes of 45 μm and 20 μm<x<45 μm for TA2 and BT101, respectively. BT101 was then annealed at 630°C for 12 h to relieve internal stresses within the glass network and to extend the handling properties of the GPCs made from it. The furnace was programmed to reach the annealing temperature within 3 h and to cool down to room temperature (25±2°C) in a further 3 h. The glass powders of the selected compositions were then sieved and utilized for subsequent cement preparation and characterization. TA2 glass was not annealed because that would have resulted in handling properties longer than clinically applicable.
Cement preparation
Cement samples were prepared by mixing BT101 with poly(acrylic acid) (PAA 40, Mw=30 000, Advanced Healthcare Ltd, Tonbridge, UK) and TA2 with PAA 35 (Mw=55 000, Advanced Healthcare Ltd, Tonbridge, UK). The glasses were thoroughly mixed with their respective acids and DI water on a glass plate. The cement using BT101 was formulated at a P:L ratio of 1:2, where 1 g of glass was mixed with 1 g PAA 40 and 1 ml DI water. The cement using TA2 was formulated at a P:L ratio of 1:1.6, where 1 g of glass was mixed with 0.6 g PAA 35 and 1 ml DI water. The process was repeated with identical powder-liquid ratios for both glasses when formulating samples using lidocaine hydrochloride instead of DI water. Complete mixing was undertaken within 30 s at ambient room temperature (23±1°C). Cements were subsequently named BT101-W, BT101-L, TA2-W and TA2-L after the glasses and aqueous solvents (W for water, L for lidocaine) from which they were fabricated.
Working and net setting times
The working time (T w ) of the cements (n=5) was measured in ambient air (23±1°C) using a digital stopwatch, and was defined as the period of time from the start of mixing during which it was possible to manipulate the material without having an adverse effect on its properties [13].
The setting time (T s ) of each of the cements (n=5) was measured in accordance with ISO 9917 [13]. An empty mold with internal dimensions 10 mm×8 mm was placed on aluminum foil and filled to a level surface with mixed cement. Sixty seconds after mixing commenced, the entire assembly was placed on a metal block (8 mm×75 mm×100 mm) in an oven maintained at 37°C. Ninety seconds after mixing, a Vicat needle indenter (mass 400 g) was lowered onto the surface of the cement. The needle was allowed to remain on the surface for 5 s, the indent it made was then observed and the process was repeated every 30 s until the needle failed to make a complete circular indent when viewed at ×2 magnification.
Fourier transform infrared (FTIR) spectroscopic study
Three cement cylinders (6 mm high, 4 mm diameter) of both compositions were prepared and aged for 1 day in DI water. Powdered versions (∼0.3 g, <90 μm) of each cement were used as samples. Spectra were collected using a FTIR spectrometer (Spectrum One FTIR spectrometer, Perkin Elmer Instruments, USA) and background contributions were removed. The sample and the reference background spectra were collected for each cement formulation in ambient air (23±1°C). Analysis was performed over the wavenumber range 4000 to 650 cm −1 with a spectral resolution of 4 cm −1 . Measurements were performed by the attenuated total reflectance technique with a ZnSe crystal.
Evaluation of mechanical properties
Determination of compressive strength
The compressive strength (σ c ) of the four GPC compositions (section 2.2) was evaluated in ambient air (23±1°C) according to ISO 9917-1:2007 [13]. Cylindrical samples (4 mm Ø, 6 mm height, n=5) were tested after 1, 7 and 30 days of ageing (DI water, 37°C). Testing was undertaken on an Instron Universal Testing Machine (Instron Corp., Massachusetts, USA) using a ±2 kN load cell at a crosshead speed of 1 mm min −1 . The fracture load was noted for each sample. Compressive strength was calculated according to equation (1).
σ c = 4ρ/(πd 2 ) (1)
where ρ is the fracture load (N) and d is the sample diameter (mm).
Determination of biaxial flexural strength
The biaxial flexural strength (σ f ) was calculated according to equation (2):
σ f = (ρ/t 2 )[0.6305 ln(r/t) + 1.156] (2)
where ρ is the fracture load (N), t is the sample thickness (mm) and r is the radius of the support (mm).
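Both strength calculations reduce to one-line formulas. The sketch below (function names are ours) assumes that equation (2) is the standard ball-on-ring expression evaluated with a Poisson's ratio of 0.3; the constants 0.6305 and 1.156 follow from that assumption, which the text does not state explicitly:

```python
import math

def compressive_strength(load_N, diameter_mm):
    """Equation (1): sigma_c = 4*rho / (pi * d^2), in MPa (N/mm^2)."""
    return 4 * load_N / (math.pi * diameter_mm ** 2)

def biaxial_flexural_strength(load_N, thickness_mm, support_radius_mm):
    """Equation (2), ball-on-ring form with Poisson's ratio 0.3 assumed."""
    return (load_N / thickness_mm ** 2) * (
        0.6305 * math.log(support_radius_mm / thickness_mm) + 1.156)

# A 4 mm diameter cylinder failing at 250 N gives sigma_c of about 20 MPa:
print(compressive_strength(250, 4.0))
```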
Statistical analysis
A non-parametric Kruskal-Wallis H test was used to analyze the data. The Mann-Whitney U test was used to compare the relative means and to report statistically significant differences when P ≤ 0.05. Statistical analysis was performed using SPSS software (IBM SPSS Statistics 21, IBM Corp., Armonk, NY, USA).
Evaluation of rheological properties
The working and net setting times for BT101 and TA2 with DI water and lidocaine hydrochloride were evaluated and are presented in figure 1. The mean working times for BT101-W, BT101-L, TA2-W and TA2-L were recorded as ∼205, 137, 197 and 170 s, respectively (figure 1(a)). There was a statistically significant difference (P=0.050) between the working times for the DI water/lidocaine hydrochloride cement pairs. The net setting times were also recorded. BT101-W and BT101-L had a similar mean setting time of ∼1100 s (P=0.513). A similar trend was observed with the TA2 cement pair, as TA2-W and TA2-L had a setting time of ∼1140 s (P=0.513). The significant decrease in the initial working time can be attributed to complexation caused by the binding of lidocaine hydrochloride to PAA. Complexation of the polymer can cause conformational changes in the polymer chain leading to a decrease in viscosity [9]. In a study by Nurkeeva et al [15], the authors reported that lidocaine hydrochloride binds to PAA, leading to a significant decrease in the viscosity of the polymer. The binding occurred due to electrostatic forces and is accompanied by the formation of a compact poly-complex precipitate. The electrostatic forces can be explained by the formation of ionic bonds between the positively charged amino group of lidocaine hydrochloride and the carboxylic group of the PAA [15].
FTIR spectroscopic study
FTIR transmittance spectra of the cements are shown in figure 2 in the range 4000-650 cm −1 . Figure 2(a) shows the FTIR transmittance spectra for BT101-W and BT101-L, and figure 2(b) shows those for TA2-W and TA2-L. Both BT101 and TA2 cements exhibit similar FTIR bands, centered at ∼3300, 2100, 1550, 1400, 1320, 1170, 1060 and 960 cm −1 . This indicates that both materials have similar chemical bonds when observed one day post cement preparation and maturation.
The broad peak centered at 3268 cm −1 is assigned to hydrogen-bonded OH stretching vibrations of absorbed water within the poly-salt matrix [16]. The intensity of this peak was found to drop from ∼75% (BT101-W) to ∼70% transmittance (BT101-L) and from ∼82% (TA2-W) to ∼64% (TA2-L) when the water component was fully replaced with lidocaine. It is apparent that replacing DI water with lidocaine hydrochloride results in a poly-salt matrix containing larger amounts of water, evident from the drop in %transmittance. The spectra for all materials showed peaks surrounded by noise in the region 2300-1800 cm −1 ; therefore, the peaks in this region were not analyzed. The peaks centered at 1550, 1400, 1320, 1170, 1060 and 960 cm −1 were observed in a similar study utilizing tantalum-containing glasses, including the one used here [16]. The peaks centered at 1550, 1400 and 1320 cm −1 are assigned to the asymmetric/symmetric stretching vibrations of the dissociated carboxyl COO groups with the glass cations, for example Ca 2+ and Sr 2+ [17,18], confirming that the reaction between PAA and glass cations was completed within 24 h post cement preparation and maturation. The peaks centered at 1170 and 1060 cm −1 correspond to the Si-O-Si stretching vibrational mode [19]. The transmittance peak centered at 960 cm −1 corresponds to the Si-OH deformation vibration [20].
The FTIR spectra indicate that Ta-based cements contained more DI water than BT-based cements when the DI water phase was replaced with lidocaine hydrochloride, one day post cement preparation and maturation. This could be due to the presence of Ta 5+ ions in the cement matrix delaying the gelation process, thus facilitating larger amounts of water being absorbed by the poly-salt matrix [16] in the Ta-containing GPCs. The delay in the gelation process results from the network-forming role of Ta in these materials and its slow reactivity with PAA [16].
Evaluation of mechanical properties
The compressive strengths (σ c ) for the four cement compositions were tested over 1, 7, and 30 days and are presented in figure 3. Both BT101-W and TA2-W recorded their lowest σ c at 1 day, with values of ∼5 MPa, and their highest σ c at 30 days. Biaxial flexural strengths (σ f ) were also evaluated over 1, 7 and 30 days for the formulated GPCs and are presented in figure 4. The minimum and maximum σ f were recorded at 1 and 30 days for both BT101-W and TA2-W samples, with respective values of ∼5 (BT101-W) and ∼6 (TA2-W) MPa at 1 day and ∼11 (BT101-W) and ∼14 (TA2-W) MPa at 30 days. The increasing trend was found to be significant for both BT101-W (P=0.003) and TA2-W (P=0.003). Biaxial flexural strength results of BT101-L and TA2-L did not show any specific trend over time and the differences in the mean σ f values for both samples were found to be insignificant (P>0.05). There were, however, significant changes for certain maturation periods when comparing the differences between DI water and lidocaine hydrochloride samples. For BT101, the changes between 1 day (P=0.009) and 30 days (P=0.047) for the DI water/lidocaine pairs were significant. The TA2 DI water/lidocaine pairs had significant changes for the 7 day (P=0.009) and 30 day (P=0.009) maturation periods.
The increase in σ c and σ f over time for DI water-based cements is the expected trend seen in comparable GPCs in the literature [7,21,22] and is attributed to the continuous cross-linking process between the carboxyl groups from the polymer and the released cations from the glass [21,23,24]. The process is initiated by the release of protons from the PAA in the presence of water at neutral pH (equation (3)) [9]. The released protons attack the glass particles in an acid-base reaction that liberates the cations to form the acrylate networks. The σ c and σ f of lidocaine hydrochloride-containing GPCs, however, showed lower or comparable strengths during maturation. When lidocaine hydrochloride is mixed with PAA and deionized water, protons are released due to the formation of ionic bonds between the carboxylic groups of the PAA and the positively charged amino groups in lidocaine hydrochloride [15]. The ionic bond that forms slows down the cross-linking between the PAA and the cations from the glass. This behavior was shown to affect the strength of lidocaine hydrochloride-containing GPCs when compared to water-based GPCs, which agrees with the FTIR results obtained: the lidocaine hydrochloride-containing GPCs were found to contain more water, which could explain the reduced strengths recorded.
‒CH 2 ‒CH(COOH)‒ + H 2 O ⇌ ‒CH 2 ‒CH(COO − )‒ + H 3 O + (3)
Conclusion
To the best of our knowledge, this communication reports for the first time that the local anesthetic lidocaine hydrochloride can fully substitute for the water phase in GPC systems, resulting in cements with shorter working times but comparable setting times and mechanical properties. This may lead to the development of injectable GPCs with suitable working and setting times for various skeletal applications, such as wrist and shoulder fixation, which can also minimize pain for recipients. Due to the low strengths of the cements under study, the authors recommend the use of these cements in conjunction with plates or wires to offer additional fixation. Further ex-vivo and in-vivo studies are necessary to prove the suitability of the studied GPC systems for skeletal applications. | 3,563 | 2018-01-10T00:00:00.000 | [
"Materials Science"
] |
Genotyping-by-sequencing application on diploid rose and a resulting high-density SNP-based consensus map
Roses, which have been cultivated for at least 5000 years, are one of the most important ornamental crops in the world. Because of the interspecific nature and high heterozygosity in commercial roses, the genetic resources available for rose are limited. To effectively identify markers associated with QTL controlling important traits, such as disease resistance, abundant markers along the genome and careful phenotyping are required. Utilizing genotyping by sequencing technology and the strawberry genome (Fragaria vesca v2.0.a1) as a reference, we generated thousands of informative single nucleotide polymorphism (SNP) markers. These SNPs along with known bridge simple sequence repeat (SSR) markers allowed us to create the first high-density integrated consensus map for diploid roses. Individual maps were first created for populations J06-20-14-3דLittle Chief” (J14-3×LC), J06-20-14-3דVineyard Song” (J14-3×VS) and “Old Blush”דRed Fairy” (OB×RF) and these maps were linked with 824 SNPs and 13 SSR bridge markers. The anchor SSR markers were used to determine the numbering of the rose linkage groups. The diploid consensus map has seven linkage groups (LGs), a total length of 892.2 cM, and an average distance of 0.25 cM between 3527 markers. By combining three individual populations, the marker density and the reliability of the marker order in the consensus map were improved over a single population map. Extensive synteny between the strawberry and diploid rose genomes was observed. This consensus map will serve as the tool for the discovery of marker–trait associations in rose breeding using pedigree-based analysis. The high level of conservation observed between the strawberry and rose genomes will help further comparative studies within the Rosaceae family and may aid in the identification of candidate genes within QTL regions. A genetic map of rose DNA could help flower breeders develop ornamental crops that are more resistant to disease or have other desirable traits. Rose breeder David Byrne and molecular geneticist Patricia Klein, along with their colleagues from Texas A&M University in College Station, USA, bred five different strains of ‘diploid’ rose, each with two full sets of chromosomes, to create 234 offspring plants. Using the full genome from strawberry, a closely related species, as a reference, the researchers then looked for sites in the genome where either single DNA letters differed between individual offspring or where short sequences of DNA repeated themselves to create easily identifiable genetic markers. They created genetic maps of each parental cross, and then formed a consensus map that can now serve as a tool for future genetically guided breeding efforts of horticulturally important traits.
Introduction
Roses (Rosa spp.) are one of the most important and popular ornamental crops in the world today. Diverse plant growth types, flower colors, flower sizes/shapes, and fragrance all contribute to the commercial value of rose. Besides ornamental uses, roses also have medicinal, culinary, and cosmetic uses 1,2 . Rose is a very important ornamental plant in the US specialty crop market, with an annual value of about $400 million 3 . There are ~200 Rosa species within the Rosaceae family, of which about half are diploid (2x = 14). Among the more than 20,000 commercial rose cultivars 1 , most are either tetraploid (4x = 28), triploid (3x = 21), or diploid (2x = 14) 1,4 . Most cultivated roses are hybrids derived from 8 to 10 wild diploid and tetraploid species 5,6 . Though DNA amounts were found to vary among diploid rose sections, subgenera and cultivars, the diploid rose genome size was reported to be small among the angiosperms, about 0.78-1.29 pg/2C, which is about two to four times the size of that of Arabidopsis thaliana (L.) Heynh [7][8][9][10] .
Genetic maps have been constructed in rose using a range of markers including phenotypic (i.e. visible) traits, isozymes, random amplified polymorphic DNAs (RAPDs), restriction fragment length polymorphisms (RFLPs), amplified fragment length polymorphisms (AFLPs), sequence-tagged sites (STSs), microsatellites or simple sequence repeats (SSRs), and single nucleotide polymorphisms (SNPs) [11][12][13][14][15] . Effective linkage map construction requires polymorphic markers, which are evenly distributed across the genome or the region of interest, high marker coverage, and a low genotyping error rate 16 . Initial linkage maps created for diploid roses started with creating two parental maps using the pseudo-testcross strategy, one for the female and the other for the male. In addition, these maps were created using relatively small populations (100 or less) due to the varying fertility and germination abilities of different rose genotypes. The first several diploid rose genetic maps utilized morphological markers as well as molecular markers and had from less than a hundred to about three hundred markers covering about 300-500 cM for each parental map 11,[17][18][19] . Genetic map construction has also been conducted in tetraploid roses with various marker types 14,15,20 . More recently, the integrated map approach has been possible utilizing a greater number of markers resulting in longer map lengths. Linde et al. 21 developed an integrated diploid genetic map for rose using 233 markers covering 418 cM of the rose genome. For tetraploid rose, Yu et al. 22 integrated the homologous linkage groups from both parents with 74 SSRs and constructed an integrated map with length of 874 cM. Most recently, Vukosavljev et al. 14 and Bourke et al. 15 both created integrated linkage maps using the WagRhSNP 68K Axiom SNP array 23 . Beyond the individual maps, an unified diploid consensus map for rose was constructed in 2011 using 59 bridge markers to link four diploid rose populations 24 . This ICM (integrated consensus map) included 597 markers and covered a length of 530 cM on seven linkage groups. These mapping studies also revealed genes or QTLs associated with horticultural traits such as thorn density, leaf area, chlorophyll content, flower size, days to flowering, leaf size, and resistance to powdery mildew 19,21,25 .
Genomic comparative studies within the Rosaceae family have shown that the synteny and collinearity among Prunus, Malus, Pyrus, Fragaria, and Rosa is high 14,15,20,[26][27][28][29][30] . Strawberry and rose both belong to the Rosoideae subfamily of the Rosaceae with a base chromosome number of 7, and they have been shown to have a close genetic relationship 14,15,20,24,31 . Gar et al. 20 compared the collinearity among Rosa and Fragaria by positioning 70 rose EST markers on the strawberry pseudochromosomes. They found most of the markers mapped to one linkage group of Rosa were located on one Fragaria pseudo-chromosome. It was estimated that four major translocations and six inversions have occurred between the Rosa and Fragaria genomes since their divergence from a common ancestor about 62-82 million years ago 32 . With the recent release of a new version of the diploid Fragaria vesca genome v2.0.a1 (denoted as Fvb) [33][34][35] and improved sequencing technologies, synteny between Rosa and Fragaria can now be examined at a much higher resolution. In recent studies, the comprehensive collinearity between strawberry and rose was demonstrated utilizing the WagRhSNP 68K Axiom SNP array on tetraploid roses 14,15 . These studies described the detailed syntenic relationship between the seven strawberry and rose chromosomes, and revealed a reciprocal translocation, a major telomeric inversion, and another possible inversion differentiating the chromosomes of these two genera 14,15 .
The aim of this study was to use previously developed anchor SSRs 24 and SNPs generated from GBS to construct a dense integrated consensus map for several diploid rose populations. This diploid rose consensus map (ICD) enabled us to visualize the syntenic relationship between strawberry and diploid rose, and compare and validate marker order across populations. The development of a high-density consensus genetic map in rose will help identify QTLs, candidate genes, benefit marker-assisted selection, and facilitate the study of syntenic relationships across taxonomic groups.
Mapping populations
A highly black spot resistant breeding line, J06-20-14-3 (J14-3), derived from R. wichurana "Basye's Thornless" according to Dong et al. 47 , a moderately resistant cultivar "Old Blush" (OB), and three susceptible cultivars with excellent ornamental characteristics ("Little Chief" (LC), "Red Fairy" (RF), and "Vineyard Song" (VS)) were used to generate the three diploid populations (2n = 2x = 14) for linkage map construction (Table 1). Parents J14-3, OB, LC, RF, and VS also diverge in growth habit, horticultural characteristics and heat tolerance. These populations were grown in the field in College Station (30°36′5″N 96°18′52″W, 112 m elevation), TX, USA, which has a subtropical climate with mild winters and hot, humid summers, an average annual rainfall of 1018 mm, and spring, summer, fall, and winter average temperatures of 20, 29, 21, and 12°C, respectively 48 . One plant per seedling was planted on raised beds in rows oriented east to west in an open field in 2013 or 2014. Black landscape cloth weed barrier was placed around each plant for weed control. Each plant was hard pruned (reducing plant size by 50-75%) at the end of the winter in February/March and light pruned (reducing plant size by 25-40%) in both June and September to restrict plant size and induce new growth. Irrigation was applied as needed, but no chemical applications were made.
DNA extraction
DNA extraction was performed based on Doyle's 49 CTAB protocol with some minor modifications. The stock solution preparation and DNA extraction protocol can be found in Supplementary File 1. Unexpanded young leaves were collected up to 1/3 of the volume of a 2 mL screw-cap tube, placed in liquid nitrogen immediately, and stored at −80°C until extraction. After extraction, DNA samples were incubated with RNase at 37°C for 40 to 50 min and the isolated genomic DNA was then purified using the OneStep™ PCR Inhibitor Removal Kit (Zymo Research, Irvine, CA, USA) according to the manufacturer's protocol. DNA quantification was performed fluorometrically using a Qubit Fluorometer (Thermo Fisher Scientific, Rochester, NY, USA) or AccuBlue™ (Biotium, Hayward, CA, USA) according to the protocol from the manufacturer. All DNA samples were stored at −20°C.
SSR analysis
Forty SSR markers described by Spiller et al. 24 as bridge markers were analyzed on the five parental lines: J14-3, OB, RF, VS, and LC. The original SSR names were appended to include the ICM LG numbers 24 . Twenty-six (Table 2) of the 40 SSRs were polymorphic within the three mapping populations and were run on the progenies to determine the linkage groups according to the rose ICM 24 and used as quality control markers. The 10 µL PCR reaction mixture contained 2 µL of 2.5 ng/µL genomic DNA and 2 µL 5×GoTaq Reaction Buffer (Promega).
Table 1. Diploid rose parents of the three mapping populations and their response to black spot disease
Genotyping by sequencing and SNP detection
Genotyping by sequencing, or digital genotyping, was performed using the methylation sensitive restriction enzyme NgoMIV (G^CCGGC) according to the method described by Morishige et al. 42 Briefly, 250 ng rose DNA was digested with the restriction enzyme NgoMIV. Following digestion, multiplex identifier barcodes were ligated to the fragments, which were subsequently grouped into pools of 66 samples, each sample containing a unique 12 bp barcode. The pools were sheared by sonication to a target size of 250-300 bp followed by size selection on a 2% agarose gel. Following overhang fill-in, blunting and adenylation, the pools underwent ligation with an Illumina-specific adapter and were purified using Agencourt AMPure XP magnetic beads (Beckman Coulter, Indianapolis, IN, USA). The pools were then subjected to 20 cycles of PCR using Phusion high-fidelity polymerase (Thermo Fisher Scientific). Single-strand products were obtained using Dynabeads® (Thermo Fisher Scientific) then PCR-amplified for 14 cycles with Phusion polymerase to incorporate the Illumina bridge amplification sequence. Final PCR products were purified, quantified using PicoGreen® fluorescent dye (Quant-iT™ dsDNA Broad Range (BR) kit, Thermo Fisher Scientific), and diluted to 10 nM. Quality assessment of each template library was performed using an Agilent 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA). The template was sequenced on an Illumina HiSeq 2500 (Illumina, San Diego, CA, USA) using standard Illumina protocols. Single-end sequencing was carried out for 126 cycles. Only Illumina data that passed quality control (FastQC) were analyzed further. Reads for each parent and progeny were identified by their unique 12 bp barcode identifier and sorted into individual files using a custom python script. A 100% match to both the 12 bp barcode sequence and the partial NgoMIV restriction site was required to retain the reads from each sample. Following the sorting of reads to each individual sample, the 12 bp barcode on the 5′ end was trimmed and the reads were imported into the CLC Genomics Workbench v9.0 (Qiagen, Boston, MA, USA). Trimmed reads from each sample were mapped to the Fragaria vesca genome v2.0.a1 (Fvb) 34 . Parameters for read alignment were set at a mismatch cost = 2, insertion and deletion cost = 3, 50% minimum read length required to match the reference and a minimum of 75% similarity between the reads and the reference genome. Any reads that failed to align to the reference genome or aligned identically to more than one position were ignored. After the alignment, variant detection was performed to call SNPs. The parameters for SNP detection in the CLC Genomics Workbench were: at least 90% probability to detect a variant, a minimum read coverage of 15 to detect a SNP, a minimum SNP count of 3, a neighborhood radius = 5, a minimum central quality = 20, and a minimum neighborhood quality = 15. These parameters were applied to determine legitimate SNPs. The mapping and SNP files were exported in SAM and comma-separated-value (.csv) formats, respectively. Further SNP call analysis was performed using custom scripts written in python and perl. The scripts used for the GBS pipeline can be found in the Dryad Digital Repository, doi:10.5061/dryad.k2do5. SNP markers were named according to their physical position on the Fragaria vesca whole-genome v2.0.a1 Assembly & Annotation in the GDR database 51 .
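A simplified sketch of the read-sorting step described above; the barcode table is illustrative, the residual "CCGGC" is our reading of the G^CCGGC cut site, and the authors' actual script (archived in the Dryad repository) will differ in detail:

```python
# Demultiplex single-end GBS reads by exact 12 bp barcode plus the
# partial NgoMIV site expected at the fragment end after digestion.
BARCODES = {"ACGTACGTACGT": "J14-3", "TGCATGCATGCA": "OB"}  # illustrative
PARTIAL_SITE = "CCGGC"

def demultiplex(reads):
    """Yield (sample, read) pairs with the 5' barcode trimmed off;
    reads without a perfect barcode + site match are discarded."""
    for read in reads:
        barcode, rest = read[:12], read[12:]
        sample = BARCODES.get(barcode)
        if sample is not None and rest.startswith(PARTIAL_SITE):
            yield sample, rest
```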
As an example of this naming convention, SNP chr1_19.680628 is located on Fragaria vesca pseudo-chromosome 1 at position 19.680628 Mbp. Marker alleles were converted into genotype codes based on the possible CP population segregation types abxcd, efxeg, hkxhk, lmxll, and nnxnp, as described in the JoinMap® v4.1 manual 50 , using a custom python script.
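A sketch of both conventions, the marker-naming scheme and a simplified subset of the segregation-type assignment (allele phase and the abxcd/efxeg cases are omitted; function names are ours):

```python
def marker_name(chromosome, position_bp):
    """chr1, 19680628 -> 'chr1_19.680628' (position in Mbp)."""
    return f"{chromosome}_{position_bp / 1e6:.6f}"

def segregation_type(parent1, parent2):
    """Assign a JoinMap CP segregation code from two parental genotypes."""
    het1 = parent1[0] != parent1[1]
    het2 = parent2[0] != parent2[1]
    if het1 and het2:
        return "hkxhk"  # both parents heterozygous for the same alleles
    if het1:
        return "lmxll"  # only the seed parent heterozygous
    if het2:
        return "nnxnp"  # only the pollen parent heterozygous
    return None         # both homozygous: uninformative, discarded

assert marker_name("chr1", 19680628) == "chr1_19.680628"
assert segregation_type("AG", "GG") == "lmxll"
```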
Individual genetic linkage map construction
Individual linkage maps were first developed from the crosses J14-3×LC, J14-3×VS, and OB×RF independently using JoinMap® v4.1. SNPs were eliminated if both parents were homozygous, if one or both parents had no allele call at a given position, if there was too much missing data (>15% of the population size), if the segregation ratio was heavily skewed based on a χ 2 test (p ≤ 0.0005), or if any parental genotype did not follow what is described for a CP population in the JoinMap® v4.1 manual 50 . For the purpose of constructing the consensus rose map, after the application of the filtering criteria mentioned above and before importing markers into JoinMap® v4.1, 1014 SNPs that were common across all three diploid populations were appended with a "c" at the end of the marker name, and every effort was made to retain them throughout the mapping process regardless of the similarity of their segregation patterns. As for the rest of the markers, only one was kept if it co-segregated with other markers. Markers were grouped into the seven rose linkage groups with LOD values that varied from 5 to 15 (9, 11, 5, 9, 11, 11, 11; 7, 7, 7, 11, 11, 7, 7; and 15, 15, 14, 15, 14, 15, 14 for the seven LGs of OB×RF, J14-3×LC, and J14-3×VS, respectively). Each group was assigned to one of the seven rose linkage groups according to the anchor SSR markers with previously known linkage group positions 24 . The maps were constructed with the maximum likelihood mapping function. Poorly fitting markers that greatly inflated the map length or resulted in too many double recombinations were dropped during the mapping process. Graphical genotyping in Excel (E. van de Weg, personal communication (2016)) was used to check marker double recombinations. In addition, individuals with many unexpected alien alleles (>3%) or too many recombination events (either outcrosses or selfed progeny) were dropped before the final mapping. The common markers excluded during the mapping process were placed back onto the map in the final step after fixed-ordering all other markers, to facilitate map integration across the three populations. The final linkage maps were visualized with MapChart 2.3 52 .
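A sketch of the pre-mapping filters; the expected 1:1 ratio used below fits lmxll/nnxnp markers (other segregation types need their own expected ratios), and SciPy is assumed to be available:

```python
from scipy.stats import chisquare

def passes_filters(calls, expected=(0.5, 0.5), max_missing=0.15, p_min=0.0005):
    """calls: one genotype class per progeny, or None if missing.
    Rejects markers with >15% missing data, or with segregation
    skewed beyond p <= 0.0005 on a chi-square goodness-of-fit test."""
    observed = [c for c in calls if c is not None]
    if len(calls) - len(observed) > max_missing * len(calls):
        return False
    classes = sorted(set(observed))
    counts = [observed.count(c) for c in classes]
    if len(counts) != len(expected):
        return False  # unexpected number of genotype classes
    _, p = chisquare(counts, f_exp=[e * len(observed) for e in expected])
    return p > p_min
```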
Integrated consensus map construction and synteny comparison
After a map was constructed for each population, a total of 234 F 1 progeny with 824 common SNPs and 13 common SSRs were used to develop an integrated consensus map. Map integration was first attempted using the JoinMap® v4.1 "combine groups for map integration" function; however, due to reshuffling of marker order within each individual map and the extremely long computational time resulting from the large number of markers, consensus map construction within JoinMap® v4.1 was difficult. Therefore, MergeMap 53 was used to generate the consensus marker order using homologous LGs from the individual maps. The consensus map created in MergeMap was of higher quality than the consensus map created in JoinMap® v4.1 (data not shown) with regard to marker number and marker density. The integrated consensus map for the three diploid populations was designated ICD (integrated consensus map for diploid rose).
Genomic comparison between diploid rose and F. vesca was performed following construction of the rose ICD map. The comparison of these two genera was visualized using Circos 54 diagrams.
Mapping materials
Among all three mapping populations, 19 individuals were excluded during marker analysis and mapping due to an excessive number of alien alleles (suspicious outcrosses, >3%) or selfing events causing too many double recombinations. As a result, a total of 234 plants plus five parental lines were used to develop the linkage maps (Table 1).
Anchor SSR markers
Twenty-six out of 40 tested anchor SSRs were polymorphic in all three populations and thus used as quality control markers (Table 2; Supplementary Table 1), among which, six SSR markers failed to fit in the final maps though they were initially grouped into the expected LGs along with the SNP markers. On the final individual maps, 14, 13, and 18 SSRs were incorporated into the J14-3×LC, J14-3×VS, and OB×RF maps, respectively ( Table 2). These SSRs were distributed on all 7 LGs allowing us to assign each of the LGs according to the rose ICM 24 . After integrating all three maps, 20 SSR markers were present on the ICD, whereas 13 of them were present on at least two maps and 10 were shared across all three (Supplementary Figures 1-8).
SNP markers
The parents and progeny from the 3 rose populations were run on 5 lanes of an Illumina flow cell. A total of 99 Gb of sequence was obtained. Of the 255 progeny and 5 parents originally sequenced, only two progeny failed to sequence; these were removed from further analysis in addition to the 19 that were excluded during the mapping process as noted above. On average, 3.3 M reads were obtained for each sample and approximately 60.4% of the reads from each sample mapped to the F. vesca reference genome. After calling variants in the CLC Genomics Workbench, we initially obtained more than fifty thousand SNPs for each population (data not shown). However, after removing SNPs that were monomorphic, had too much missing data (>15% of the population size), or whose marker genotypes were not described in the JoinMap® v4.1 manual 50 , we retained ~7000 SNPs per population. An additional two thousand SNPs were eliminated due to strong segregation distortion (p < 0.0005), leaving ~5000 candidate SNPs, including 1014 that were common among the three populations, for mapping. During the mapping process, ~3500 SNPs were eliminated because of co-segregation or because they failed to fit in the final map. Fourteen to fifteen hundred SNPs were successfully mapped in each population, with hundreds of SNPs placed on each LG (Table 3). Among these, 824 SNPs common to at least two populations (192 common to all three) were retained to aid in map integration. The allele calls for the markers retained for each of the three individual mapping populations can be found in Supplementary Table 2.
Individual linkage map construction
A total of 14 SSR and 1567 SNP markers were mapped in the J14-3×LC population (464 cM) ( Supplementary Figures 3 and 4), 13 SSR and 1421 SNP markers were mapped in the J14-3×VS population (518 cM) (Supplementary Figures 5 and 6), and 18 SSR and 1533 SNP markers were mapped in the OB×RF population (524 cM) ( Supplementary Figures 7 and 8). Mean distance was calculated using the unique loci, where co-segregating markers were considered as one bin marker. The map density and mean distance across all the LGs varied from 1 to 4 markers per cM and 1-2.19 cM/bin marker, respectively. The largest gaps ranged from 3 to 15 cM (Table 4). Across the three populations, 837 markers (SNP + SSR) were shared between at least two populations and 203 markers (SNP + SSR) were shared across all three populations. These anchor markers were used to integrate the three individual maps.
Around 14% of the mapped markers showed segregation distortion (0.0005 < p < 0.05), with the distortion ratio varying among LGs and populations (Table 3). Segregation distortion was predominantly clustered into regions on LGs 2 and 6 of the J14-3×LC population, LGs 1 and 5 of the J14-3×VS population, and LGs 1, 2, and 6 of the OB×RF population. (Table 3 note: marker distortion was based on a χ 2 test (p < 0.05); LGs having more than 50 highly distorted markers are shown in bold for each population; "-" indicates distortion is not available for the consensus map.)
In total, 19 markers (3 SSR and 16 SNP) showed segregation distortion (p < 0.05) in two populations and none of the markers showed distortion in all three populations (data not shown). Overall, 206, 187, and 211 markers showed high skewness only in the J14-3×LC, J14-3×VS, and OB×RF populations, respectively (data not shown). The majority of the markers on the final individual population maps passed the goodness-of-fit test favoring the alleles from both parental lines, which indicates a good level of cross and self-compatibility among the parental materials.
Integrated consensus map for diploid rose (ICD) construction
The ICD was developed by combining the marker data from the three individual populations. Thirteen SSR and 824 SNP markers shared between at least two populations served as bridge markers to integrate the individual maps, resulting in a consensus map with 3527 markers (20 anchor SSRs and 3507 SNPs) and a map length of 892 cM (Tables 3 and 4; Supplementary Figures 1 and 2). The largest gap in the ICD was 11.2 cM on LG5. Overall marker density was 4 markers/cM, and there was, on average, one bin marker every 1 cM. The LGs ranged in size from 95 to 167 cM and marker number varied from 300 to 700. The largest linkage group was LG7 (167 cM), but LG2 had the highest marker density (6 markers/cM) and the smallest mean distance (1 cM/bin marker) among bin markers (Table 3; Supplementary Figure 9). Compared to the individual maps, the total map length was increased by nearly 390 cM although the map density and mean distance between markers were improved. Many markers mapped to the same locus due to their identical or similar segregation patterns; this occurred on every linkage group, with as many as 40 markers co-segregating at one locus (LGs 2, 4, 7) (Fig. 1).
The ICD was developed based on three bi-parental populations. The comparison of the LGs of all four different maps shows excellent collinearity with only a few rearrangements supporting the use of the GBS protocol for producing high quality markers for genetic map construction (Fig. 2, part of LG1 only; the complete LG1 and all other LGs can be found in Supplementary Figures 10-16; SNP markers and cM positions for the four maps mentioned in this paper can be found in Supplementary Table 3).
Synteny among diploid rose and Fragaria vesca
There was a high level of synteny among the LGs of diploid Rosa and strawberry (Fragaria vesca) (Fig. 3a). The F. vesca genome was used as the "proxy" genome for mapping and SNP detection since a rose reference genome is not presently available. When we grouped and mapped the SNP markers to their respective physical locations on the strawberry assembly, we detected one minor chromosomal inversion close to the telomere on Rosa LG6, one major inversion on Rosa LG7, and one translocation between diploid Rosa LGs 2 and 3 and Fragaria LGs 1 and 6 (Fig. 3b). To summarize, Fragaria pseudomolecules 7, 4, 3, 2, and 5 correspond to the Rosa ICD LGs 1, 4, 5, 6, and 7, respectively. The major translocation seen among LGs 2 and 3 of Rosa uncovered that Rosa LG2 is composed of F. vesca pseudomolecule 1 and a portion of 6, whereas the remainder of F. vesca pseudomolecule 6 makes up the majority of Rosa LG3. These patterns were consistent across the four maps constructed here (Table 5) and agree with previous studies 14,15 .
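The correspondence can be written down directly; the sketch below assigns a marker (named after its F. vesca position, as above) to its expected Rosa LG, with a hypothetical Fvb6 breakpoint coordinate, since the text does not give one:

```python
# Fvb pseudomolecule -> Rosa ICD LG, from the synteny described above:
# Fvb 7, 4, 3, 2, 5 map to Rosa LGs 1, 4, 5, 6, 7; Fvb1 plus part of
# Fvb6 form Rosa LG2, and the remainder of Fvb6 forms Rosa LG3.
SIMPLE = {"chr7": 1, "chr4": 4, "chr3": 5, "chr2": 6, "chr5": 7, "chr1": 2}

def expected_rosa_lg(marker, fvb6_breakpoint_mbp=20.0):  # breakpoint hypothetical
    chrom, pos_mbp = marker.split("_")
    if chrom != "chr6":
        return SIMPLE[chrom]
    return 2 if float(pos_mbp) < fvb6_breakpoint_mbp else 3

print(expected_rosa_lg("chr7_1.234567"))  # -> Rosa LG 1
```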
Single map construction
We constructed three individual genetic maps for two half-sib (J14-3×LC, J14-3×VS) and one unrelated (OB×RF) highly heterozygous F 1 populations. The pollen parents of the half-sib populations are related, as LC is a parent of VS. The breeding line J14-3 (derived from Rosa wichurana) differs from the other cultivated parents in various traits, including black spot resistance, growth type, horticultural characteristics, and heat tolerance. The initial breeding focused on combining the everblooming trait from the cultivated germplasm and the high black spot resistance and heat tolerance from R. wichurana into improved breeding selections, which were used in the crosses reported here. These three populations, together with other diploid segregating populations sharing common parents or linked via pedigree, will help us identify certain QTL and associated markers in our diploid breeding program. With time, some of these selections will be introgressed into the tetraploid rose germplasm within the breeding program.
All the maps contained seven LGs corresponding to the seven base pseudo-chromosomes in rose (x = 7). Moreover, consistent collinearity for the seven LGs among the three individual maps and the consensus map was observed (Fig. 2; Supplementary Figures 10-16), and the ordering of anchor SSR markers on our maps was consistent with the Rosa ICM map 24 . These results support the high quality and reliability of the maps generated in the present study. Compared to some recent non-SNP-based rose maps 22,24 , marker number and density were increased without length extension by using GBS to generate SNP markers and mapping them to the F. vesca genome assembly. The overall lengths of the three maps are on average ~200 cM shorter than that of Vukosavljev et al. 14 and 70 cM less than that produced by Bourke et al. 15 Approximately 7-10% of the initial markers generated from GBS were anchored to the single maps for each cross. Initial grouping of the remaining markers at LOD > 5 in JoinMap® v4.1 produced seven groups representing the seven rose chromosomes in each population. Markers with excessive numbers of double recombination events were eliminated as likely caused by sequencing error. The exclusion of a large proportion of GBS markers is common in other crops as well. For example, only about 10% of the SNPs produced by GBS were kept when constructing the strawberry map 46 , and 4.2% of the starting putative SNPs were retained for grapevine map construction 45 . For small populations, more markers can be incorporated into the consensus map by utilizing more individuals across the populations.
The 26 anchor SSR markers from the rose ICM 24 used in this study all initially grouped to their expected LGs, supporting the quality of the maps produced herein. However, only twenty of the anchor SSR markers were retained in the final ICD map. The order of most bridge markers was consistent between our maps and the ICM, though occasional marker order discrepancies were observed. This could be due to several factors, including segregation distortion, population size, parental genetic background, and scoring errors 22 . Markers displaying segregation distortion (~14%) were present on almost every LG in every population and tended to cluster within particular regions of the LGs. This is similar to what has been described in the past, where studies have found 20-22% of the markers on rose maps displaying segregation distortion. This is probably due to the interspecific nature of the crosses but could also be caused by gametophytic incompatibility or genotyping errors 11,22,24 .
We found that the marker density of LG2 was higher and that of LG3 (especially for the J14-3×VS and OB×RF populations) was lower than the other LGs. In addition, we observed some large gaps across the LGs. The gaps in the J14-3×VS and OB×RF populations were in the same regions on LG3. Several other large gaps were seen in LG5 for the J14-3×VS population and LGs 3 and 6 for the OB×RF population. This may be the result of not discovering polymorphisms against the strawberry genome, which was used as the "proxy" reference genome, or these regions could be predominantly homozygous. Alternatively, since we used a methylation sensitive restriction enzyme to digest the rose genomic DNA, low marker coverage would be expected in repetitive regions containing methylated residues, as is seen in other plant species 42 . The present results showed that Rosa LG2 was syntenic with Fragaria pseudomolecules 1 and 6, and Rosa LG3 was syntenic with the remaining portion of Fragaria pseudomolecule 6. Fewer markers were mapped onto Rosa LG3, and this could be because some of the Rosa LG3 markers were grouped with those from Rosa LG2. However, due to the lack of a rose reference genome at present, we cannot confirm this. In apple, because of a genome-wide duplication 55 , the first step in creating the linkage map was to assign groups manually according to the physical position of the markers 56 . As a rose whole-genome sequence becomes available, it will be possible to more accurately assign markers to groups, and to determine whether the low number of markers assigned and mapped to Rosa LG3 is due to the genetic nature of rose (e.g., repetitive regions) or an incorrect grouping issue.
A few minor marker inversions on LGs were observed among the individual maps and the consensus map (Supplementary Figures 10-16). This could be partly explained by the diverse genetic backgrounds of these populations. The inconsistency of some markers can be explained by tight linkage among marker pairs, inadequate (missing) data, and differences in segregation information among markers and populations 57 . Overall, however, no major chromosomal rearrangements were observed across populations.
Consensus map construction
Over eight hundred bridge markers linked the three individual maps into one consensus map containing 820 bin markers (3507 markers including those that co-segregated) covering 892 cM. A comparison between the rose ICD and ICM maps 24 showed that all the anchor SSRs mapped to the same linkage groups at similar locations. The total map length of the ICD map was longer and the marker number significantly higher than in previous studies 11,21,24 ; we extended the genome coverage (LG length) for LG2 to LG7, whereas the length of LG1 remained the same 24 . The ICD map contains approximately one bin marker per cM, which substantially increases the resolution of the rose genetic map.
Regions containing large gaps with no marker coverage in some individual maps were covered in the consensus map, including the lower 15 cM of LG3 and the middle 15 cM of LG5 for the J14-3×VS population and the upper 15 cM and lower 20 cM of LG6 for the OB×RF population. In addition, the marker coverage of LG3 was greatly improved in the ICD map compared with the individual maps. The extended length of the map may reflect improved coverage of the rose genome, although it is also possible that the genetic distances between markers and the lengths of the LGs were inflated by MergeMap 58 .
Markers with similar segregation patterns were distributed along each LG. The clustering of markers is likely explained by the large number of markers mapped on a relatively small number of individuals 24,46 . Few inversions were observed between the individual maps and the ICD, and these may be attributed to the small population sizes or to different recombination rates among populations 57,59 . Still, some gaps were evident. Because the gaps in LG3 and LG5 were located near the middle of the LG, they may be caused by the lack of markers covering heterochromatic pericentromeric regions 60 . This consensus map will serve as one of the basic components required in a pedigree-based QTL analysis (FlexQTL™) to facilitate marker-trait association studies 61 .
Synteny between Rosa and Fragaria
Synteny among several Rosaceae crops has been reported in many studies, including among Prunus crops themselves (almond, peach, apricot, and cherry) 26,28 , Prunus and Malus (apple) 28 , Prunus, Fragaria (strawberry), and Malus 30 , Fragaria and Prunus 27 , Malus and Pyrus (pear) 29 , and Rosa (rose) and Fragaria 14,15,20 . Our genome-wide comparative analysis, with thousands of SNPs mapped to the diploid Rosa LGs and physically located on the F. vesca (Fvb) genome, further confirms the high level of synteny between these two genomes. Rosa LGs 1, 4, 5, 6, and 7 are syntenic to Fragaria pseudomolecules 7, 4, 3, 2, and 5, respectively. In addition, a major translocation and fission/fusion occurred between Rosa LGs 2 and 3, with Rosa LG2 composed of Fragaria pseudomolecule 1 (one of the smallest strawberry pseudomolecules) combined with a part of Fragaria pseudomolecule 6 (one of the largest strawberry pseudomolecules) 35,46 . The remainder of Fragaria pseudomolecule 6 is syntenic to Rosa LG3. The syntenic relationship between Fragaria and Rosa supports the proposed evolutionary relationship among the Rosaceae genomes.
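The Rosa-Fragaria correspondences reported above can be captured in a small lookup table; the sketch below encodes them and translates a physical position to its expected Rosa LG. The breakpoint on Fvb6 separating the LG2 and LG3 portions, and which side of it maps to which LG, are not given in the text, so they appear here as hypothetical placeholders.

```python
# Synteny between Rosa LGs and F. vesca (Fvb) pseudomolecules, as reported above.
ROSA_TO_FVB = {
    1: ["Fvb7"],
    2: ["Fvb1", "Fvb6 (part)"],   # translocation/fusion
    3: ["Fvb6 (remainder)"],      # fission of Fvb6
    4: ["Fvb4"],
    5: ["Fvb3"],
    6: ["Fvb2"],
    7: ["Fvb5"],
}

FVB6_BREAKPOINT_BP = 20_000_000  # hypothetical placeholder; not from the paper

def expected_rosa_lg(fvb_chrom: str, pos_bp: int) -> int:
    """Map an F. vesca physical position to its expected Rosa LG."""
    simple = {"Fvb7": 1, "Fvb1": 2, "Fvb4": 4, "Fvb3": 5, "Fvb2": 6, "Fvb5": 7}
    if fvb_chrom in simple:
        return simple[fvb_chrom]
    if fvb_chrom == "Fvb6":
        # split between Rosa LG2 and LG3; side assignment is an assumption
        return 2 if pos_bp < FVB6_BREAKPOINT_BP else 3
    raise ValueError(f"unknown pseudomolecule {fvb_chrom}")

print(expected_rosa_lg("Fvb6", 5_000_000))  # -> 2 under the placeholder breakpoint
```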
Conclusion
By mapping sequence-based co-dominant markers (SSRs and SNPs), we have illustrated the highly conserved synteny between diploid Rosa and Fragaria and created a dense SNP-based consensus map for our rose germplasm. This high synteny will facilitate comparative genetic and QTL studies between the two species and provide a better understanding of the evolution of the Rosaceae. Although we successfully used the Fragaria reference genome to find SNPs in Rosa sequence data, the rose reference genome currently being developed will provide a better view of gene positions and improve the coverage and confidence of the maps created herein. The development of reliable genetic markers for desirable traits in rose will accelerate the introgression of important traits from wild diploid rose species into the genetic background of modern roses and allow the pyramiding of desired traits. The three mapping populations created for this study are segregating for a number of traits, including black spot disease response, growth type, plant architecture, and other horticultural traits. Therefore, the genetic maps created in this study will serve as a tool for QTL analysis of many important traits. Traits segregating in only one population can be mapped using the more traditional bi-parental QTL mapping approach, whereas traits segregating in multiple populations can be mapped using the ICD map and software such as FlexQTL™ 61 . The successful application of GBS to diploid rose may also inform work on tetraploid rose, although allele dosage remains a challenge to address.
Data availability
Custom Perl and Python scripts used in the bioinformatics processing for this project can be found in the Dryad Digital Repository, doi:10.5061/dryad.k2do5. Sequence files for all individual rose samples are available at the NCBI Short Read Archive under BioProject PRJNA412522, accessions SAMN07716066-SAMN07716304. | 8,344 | 2018-04-01T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Characterization of Atherosclerosis Formation in a Murine Model of Type IIa Human Familial Hypercholesterolemia
A murine genetic model of LDL-cholesterol- (LDL-C-) driven atherosclerosis, based on complete deficiencies of both the LDL receptor (Ldlr−/−) and the key catalytic component of the apolipoprotein B edisome complex (Apobec1−/−), which converts apoB-100 to apoB-48, has been extensively characterized. These gene deficiencies allow high levels of apoB-100 to be present and inefficiently cleared, thus leading to very high levels of LDL-C in mice on a normal diet. Many key features of atherosclerotic plaques observed in human familial hypercholesterolemia are found in these mice as they age through 72 weeks. The general characteristics include high levels of LDL-C in plasma and macrophage-related fatty streak formation in the aortic tree, which progressively worsens with age. More specifically, plaque found in the aortic sinuses contains a lipid core with relatively high numbers of macrophages and a smooth muscle cell α-actin- and collagen-containing cap, which thins with age. These critical features of plaque progression suggest that the Ldlr−/−/Apobec1−/− mouse line presents a superior model of LDL-C-driven atherosclerosis.
Introduction
Atherosclerosis is a self-sustaining inflammatory fibroproliferative disease that progresses in discrete stages and involves a number of cell types and effector molecules [1,2]. Lipid metabolic disorders, principally those dyslipidemias that lead to high LDL and triglyceride levels and/or low HDL levels, are heavily involved in the genesis and progression of atherosclerosis in humans, and thrombotic complications substantially contribute to the end-stage coronary arterial disease that sometimes accompanies atherosclerosis. Because of genetic differences in lipid metabolism between mice and humans, LDL and its precursors, VLDL and IDL, are rapidly cleared in mice. Thus, differing from humans, the cholesterol present in murine plasma is mostly carried in the atheroprotective HDL fraction. As a result, wild-type (WT) mice are more resistant than humans to dietary-induced elevation of LDL, and useful metabolic murine models of human atherosclerosis were therefore not prevalent until recent advances in in vivo gene targeting methods. Application of this technology then allowed generation of genetic strains of mice that possess some characteristics of human lipid metabolism.
While accelerated injury- and transplant-based atherosclerosis models, as well as spontaneous genetic models of Type II familial hypercholesterolemia (FH), the most frequent type of FH observed in humans that leads to atherosclerosis, are continually emerging, the most widely studied murine models of FH-mediated human atherosclerosis are the low density lipoprotein receptor-deficient (Ldlr−/−) [3] and apoE-deficient (Apoe−/−) [4][5][6] mouse lines. In the former case, by allowing LDL elevation in plasma via elimination of the apoB-100 receptors, Ldlr−/− mice provide a potential model of atherosclerosis driven by elevated LDL-associated cholesterol (LDL-C) levels. However, differing from the human disease, these mice, when placed on a normal chow diet, present with only 2-fold elevated cholesterol concentrations and develop atherosclerosis very slowly [3]. High-fat, high-cholesterol diets are required for the development of aortic atherosclerotic lesions in these mice [7], although on such diets it is VLDL-C, not LDL-C, that is extremely elevated. With regard to the Apoe−/− mouse strain, these mice display very high levels of plasma cholesterol and severe atherosclerosis, which is enhanced by a high-cholesterol diet [5]. However, the cholesterol is mainly associated with the VLDL and IDL lipoprotein fractions, not the LDL fraction as in humans. Thus, while nonetheless very valuable, neither of these models closely reflects the lipid profiles of the disease in humans. This is, in part, due to the presence of an enzyme in mouse liver, apobec-1, an RNA-specific cytidine deaminase. This enzyme is a catalytic component of an edisome that alters the apoB-100-encoding mRNA to an mRNA coding for apoB-48. Incorporation of apoB-48 into VLDL initiates its rapid clearance by scavenger receptors prior to conversion of VLDL into LDL particles. This editing, while also occurring in human intestine, does not take place in human liver, and its occurrence in mouse liver results in resistance to LDL elevation in mice.
VLDL-C-driven atherosclerotic plaques in Apoe−/− and Ldlr−/− mice are lipid-rich, abound in foam cells converted from macrophages [8], and contain a thinner fibrous cap and very little extracellular matrix [9]. In contrast, the mainly LDL-C-driven human atherosclerotic plaques are extracellular matrix-rich, abound in smooth muscle cells, and are erosion-prone and vulnerable [10]. It is not known whether such differences are due to species or to cholesterol profile.
Because of the limited utility of the Ldlr−/− and Apoe−/− mouse lines in terms of lipid metabolism, other murine models have been developed that focus on elevating apoB-100 levels in mice. Among these models are the apoB-100 transgenic strain ((Tg)APOB100+/+) [11] and mice with a combined (Tg)APOB100+/+/Ldlr−/− genotype [12]. While this latter model possesses important advantages in lipid metabolism, it should be emphasized that introducing other gene modifications into this line is difficult because the APOB100 loci are not fixed. A potentially important mouse model of human atherosclerosis with elevated LDL-C is one in which both the Ldlr and Apobec1 genes are deleted (Ldlr−/−/Apobec1−/−, hereafter referred to as L−/−/A−/−), since these mice lack the ability to convert apoB-100 to apoB-48 in the liver and are also defective in LDL clearance. These doubly deficient mice exhibit high levels of apoB-100 LDL-C, more closely mirroring the plasma lipid profiles of human Type II FH [13], and slowly and progressively present with severe spontaneous atherosclerosis on a normal chow diet. Thus, a good model is potentially available to further test the evolution of the atheroma and its in vivo relationships with specific proteins. While some initial characterizations of this murine model have been published, a thorough analysis of plaque development and progression is warranted. The current manuscript presents the results of an investigation in which the nature of the plaque has been characterized over the majority of the lifespan of the L−/−/A−/− mouse.
2.1. Mice.
L−/−/A−/− mice have been described previously [13,14]. These mice were back-crossed to C57Bl6/J mice (Jackson Laboratory, Bar Harbor, ME) for at least seven generations before cross-breeding. Each genotype was determined using PCR analysis of genomic DNA from ear punch biopsies. Male mice were used in all experiments. The mice were maintained on a low-fat diet for 12, 18, 24, 36, 48, 60, and 72 weeks. Mice were fasted at least 6 h and sacrificed at each time point by overinhalation of isoflurane, and their blood was then obtained with sodium citrate or heparin as an anticoagulant. Their hearts and whole aortic trees were removed for morphometric analyses after perfusion with isotonic saline. All animal experiments described herein were approved by the Institutional Animal Care and Use Committees (IACUC) at the University of Notre Dame and Hamamatsu University School of Medicine.
2.2. Lipid Analysis of Whole Plasma.
Plasma was separated from whole citrated blood and used for the measurement of total cholesterol and triglycerides employing the cholesterol CII kit (Wako Chemicals, Richmond, VA) and the GPO TRINDER kit (Sigma Diagnostics, St. Louis, MO), respectively. For the assay, a volume of 2 µL of plasma was mixed with 100 µL of the assay reagent from each kit and incubated at 37 °C for 30 min. The absorbances at 500 nm and 540 nm were obtained for total cholesterol and triglyceride, respectively, and concentrations were calculated using the standards present in each kit. These procedures were as described in our previous report [15].
2.3. FPLC Analysis of Whole Plasma.
A volume of 100 µL of plasma was analyzed by FPLC, using gel filtration on Superose 6 HR resin (Amersham Pharmacia Biotech, Piscataway, NJ). The samples were eluted at a flow rate of 0.5 mL/min with a column equilibration buffer, namely, 10 mM Tris-HCl/0.15 M NaCl/0.01% (w/v) EDTA, pH 7.4, as previously described [6]. Column fractions (500 µL; 36 fractions) were collected, and a 50 µL aliquot from each tube was added to 100 µL of cholesterol CII reagent for the determination of the cholesterol concentration in each fraction. These procedures were as described in our previous report [15].
2.4. Analysis of Atherosclerotic Lesions of Whole Aortic Trees.
After perfusion of the mice, the appearance of the aortic arch was photographed, and then the aortas were exposed and cut longitudinally, in situ, exposing the lumen. Whole aortic trees were removed and placed on glass slides with 150 µm spacers, lumen-side up. A glass slide was then placed over the lumens of the aortas to hold them in place during fixation. The aortas were then fixed in 10% normal buffered formalin for 16 hr at room temperature. They were rinsed with H2O and stained with Sudan IV (Sigma) solution (supersaturated Sudan IV in 38% 2-propanol) for 16 hr at 4 °C. A digital camera was used to capture the whole image, and the total numbers of pixels for whole aortas and plaque areas were measured using Adobe Photoshop 7.0, thus allowing calculation of the percentage of the total aortic surface area covered by plaque (a minimal scripted analogue of this calculation is sketched below). These procedures were as described in our previous report [15].
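The pixel-based quantification described above can equally be performed with a short script; the sketch below assumes two hypothetical binary masks (aorta and plaque) prepared by thresholding the photographs, which is not how the original analysis was done in Photoshop, but it computes the same percentage.

```python
import numpy as np

def percent_plaque_area(aorta_mask: np.ndarray, plaque_mask: np.ndarray) -> float:
    """Both inputs are boolean pixel masks of the same image: aorta_mask marks
    the whole aortic surface, plaque_mask the Sudan IV-positive area."""
    aorta_px = aorta_mask.sum()
    plaque_px = (plaque_mask & aorta_mask).sum()  # plaque counted only inside the aorta
    return 100.0 * plaque_px / aorta_px

# toy 4x4 example: 8 aortic pixels, 2 of them plaque -> 25%
aorta = np.array([[1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0]], dtype=bool)
plaque = np.array([[1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]], dtype=bool)
print(percent_plaque_area(aorta, plaque))  # 25.0
```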
2.5. Sections of Hearts.
Hearts were cut at the level of the lower edge of the atrium, and the lower region containing the aortic valve was fixed with periodate-lysine-paraformaldehyde (PLP) for 16 h at 4 °C. After fixation, some samples were directly embedded in Tissue-Tec OCT compound (Sakura Fine Tec, Torrance, CA), and others were processed and embedded in paraffin. A total of 30 serial sections (#1 to #30) were obtained at 4 µm thickness, proceeding from the aortic valve towards the ascending aorta. These procedures were as described in our previous report [15].
2.6. Histochemistry for Analysis of Plaque Progression in the Aortic Sinus.
Serial sections (#1, #11, and #21) were stained with hematoxylin II and eosin Y (H&E) (Richard Allen Scientific, Kalamazoo, MI) for morphometric analysis. Serial sections (#2, #12, and #22) were stained with Oil Red-O (Sigma) for detecting lipid accumulation in the plaque. For the determination of plaque size in the aortic sinuses, images were captured and quantified as above, and pixel counts were converted to µm² using a proper reference measurement (hemocytometer grid). Serial sections (#3, #13, and #23) were subjected to Masson's trichrome staining to identify collagen accumulation in the plaque. These procedures were as described in our previous report [15].
2.7. Immunohistochemistry.
All slides were deparaffinized and then blocked with avidin block, biotin block, and Peroxo-block (Zymed Laboratories, South San Francisco, CA) before incubation with specific antibodies. For fibrin(ogen) staining, the slides were blocked only with Peroxo-block. The subsequent procedures were as described in our previous report [15].
Results
It is known that female mice are more vulnerable to atherosclerosis than male mice due to sex steroid hormone fluctuations [16,17]. Thus, the progression of spontaneous atherosclerosis was assessed only in male mice maintained on a low-fat diet. The distribution of cholesterol in the various lipoprotein fractions, as determined from FPLC profiles of the type shown in Figure 1, is listed in Table 1 for mice aged up to 72 weeks. Sudan IV staining of whole aortic trees (Figure 3(a)) shows that lipid-containing plaque forms initially in the aortic sinuses and then spreads throughout the entire aortic tree. The percentage of the trees containing plaque progresses from approximately 2% in mice 12 weeks of age to approximately 60% at 72 weeks (Figure 3(b)), thus showing the extensive nature of plaque deposition in these mice. Aortic sinuses were microsectioned from L−/−/A−/− mice, and the sizes of the plaques in the aortic sinuses were measured on 3 equally spaced H&E-stained sections from each mouse aortic sinus. The data show that plaque continues to increase throughout the lifespans of the mice (Figure 3(c)). No significant plaque was found in young (12-week-old) Wt mice fed the same diet (Figure 4(a)). At 18 weeks of age, H&E staining shows a diffuse thickening of the neointima that is primarily due to foam cells and focal acellular areas above the media (Figure 4(b)). At 24 weeks of age, H&E staining demonstrates a well-formed fibrous cap overlaying a foam cell-laden core, together with arterial wall thickening, collagen development, and stretching of lamellae (Figure 4(c)). Anti-CD31 immunostaining of sectioned aortic sinuses from L−/−/A−/− mice at various ages is presented in Figure 7. At 24 weeks of age, anti-CD31 immunostaining (Figure 7(a)) indicates that an intact endothelium separates the lumen from the fibrous cap. At 36 weeks of age, anti-CD31 immunostains (Figure 7(b)) reveal an intact endothelium separating the lumen from the lesion. At 48 weeks of age, anti-CD31 immunostaining (Figure 7(c)) reveals multiple focal breakpoints within the endothelial layer. At 60 weeks of age, anti-CD31 immunostaining (Figure 7(d)) indicates the presence of focal rounded areas in the endothelium, with a small breakpoint, and bulging endothelial regions with underlying foam cells.
Antifibrin(ogen) immunostaining of sectioned aortic sinuses from L−/−/A−/− mice at various ages is presented in Figure 8. At 24 weeks of age, antifibrin(ogen) immunostains (Figure 8(a)) indicate that diffuse fibrin deposits are present in the subendothelial spaces and at the base of the lesion above the media. At 36 weeks of age, antifibrin(ogen) immunostains (Figure 8(b)) demonstrate variable fibrin(ogen) in the subendothelial region, as well as some focal, but faint, positive areas associated with the endothelium and in the lipid core above the media. At 48 weeks of age, antifibrin(ogen) immunostaining (Figure 8(c)) demonstrates that the subendothelial region is abundant in fibrin deposits. Patchy areas of fibrin are also associated with the endothelium, and fibrin is identified at breakpoints in the endothelium. Diffuse areas of fibrin are evident at the base of the lipid core above the media. At 60 weeks of age, antifibrin(ogen) immunostaining (Figure 8(d)) demonstrates fibrin deposition scattered throughout the subendothelium and core region. Antimacrophage immunostaining of sectioned aortic sinuses from L−/−/A−/− mice at various ages is presented in Figure 9. At 24 weeks of age, the antimacrophage immunostains (Figure 9(a)) demonstrate that the core consists predominantly of macrophages. At 36 weeks of age, antimacrophage immunostains (Figure 9(b)) show that the cap now contains scattered macrophages. At the broadest area of the cap, the macrophages appear as foam cells adjacent to the core. At 48 weeks of age, antimacrophage immunostaining (Figure 9(c)) shows macrophages within the thinned cap in the subendothelium, some associated with breakpoints of the endothelium underlying faintly positive foam cells. At 60 weeks of age, antimacrophage immunostaining (Figure 9(d)) reveals small clusters of macrophages in the endothelial and subendothelial areas.
Anti-SMA immunostaining of sectioned aortic sinuses from L−/−/A−/− mice at various ages is presented in Figure 10. At 24 weeks of age, anti-SMA immunostaining (Figure 10(a)) reveals a cellular, multilayered SMC α-actin-positive region associated with the fibrous cap, as well as the positive medial compartment. However, at the base of the lipid core, within the media, an area devoid of positive cells is evident, with a faintly positive region immediately underlying the core. Furthermore, several SMemb-positive cells are identified in the core, associated with an area devoid of SMA-positive cells (Figure 10(b)). At 36 weeks of age, anti-SMA immunostains (Figure 10(c)) indicate numerous single-layered positive cells at the endothelium. They consist of weakly to highly positive SMA foci among otherwise negative cells. The medial compartments of the arterial wall are populated with both SMA-positive and -negative cells. The inner medial compartment (below the lipid core) appears thickened and is mainly negative for SMA. Furthermore, several diffuse SMemb-positive foci are identified within the broad end of the foam cell-containing cap (Figure 10(d)). Additional foam cell clusters within the core are also positive. These areas within the core were negative for SMA. At 48 weeks of age, the anti-SMA immunostaining pattern is further altered (Figure 10). Sections from an advanced lesion are presented in Figure 11. H&E stains at two different depths (Figures 11(a) and 11(b)) show sections covered (Figure 11(a)) and not covered (Figure 11(b)) by plaque. The plaque contains an irregular cap with many broken areas (Figure 11(c)). Diffuse fibrin is also present in the remaining part of the plaque core (Figures 11(e) and 11(f)), and dense fibrin deposits are observed on the cap, colocalizing with ruptured areas identified from endothelial cell staining (Figure 11(e)). A small number of SMA-positive cells are observed in the cap area covered by plaque (Figure 11(g)), but the aortic wall is heavily stained for SMA-positive SMCs at a depth not covered by plaque (Figure 11(h)). SMemb-positive cells were detected in the cap area of the plaque, with light staining also in the aortic walls (Figure 11(i)). No expression of SMemb is observed in aortic wall areas in sections at a depth where plaque does not cover the wall (Figure 11(j)).
Discussion
Atherosclerosis in humans begins focally in lesion-prone vascular areas where blood flow is compromised [18]. This reduced flow then facilitates recruitment of monocytes to intimal locations via their interaction with cell adhesion molecules (CAMs), followed by monocyte differentiation to macrophages, which subsequently become foam cells. The disease then progresses through a series of AHA-defined classifications, from simple Type I to advanced Type VI lesions [19]. Complications of atherosclerosis include thinning and rupture of unstable plaques and aneurysm. Rupture of the plaque leads to thrombotic disease and possibly sudden death. Certain characteristics of plaques, including the size and composition of the lipid core, the structure and composition of the fibrous cap, apoptosis and/or dedifferentiation of collagen-synthesizing SMCs, and/or the presence of a local inflammatory process, predispose the plaque to disruption [20,21]. Plaque rupture is frequently observed in calcified plaques [22]. The nature and level of plasma lipoproteins, especially of HDL-C, are of predictive significance for the risk of coronary artery disease [23], and plasma HDL-C has been found to be a potent inverse risk factor for a variety of clinical endpoints of this disease [23][24][25]. On the other hand, high LDL levels are associated with risk of cardiovascular disease [26], mainly through formation of its oxidized product(s) [27]. OxLDL in the vessel wall originates from mild oxidation of vascular LDL [28], and circulating oxLDL stimulates vascular monocyte/macrophage infiltration [29], as well as vascular EC [30] and SMC [31,32] migration and proliferation. Thus, it has become clear that a variety of genes, including those that influence lipid and carbohydrate metabolism, those that influence the hemodynamic state of the organism, those that mediate many phases of the inflammatory response, and those that affect hemostasis, can affect the development and progression of atherosclerosis [33], and murine models of atherosclerosis are extensively employed to investigate the effects of gene alterations on the characteristics of this disease.
In this investigation, we have employed a mouse strain that carries a total combined deficiency of the Apobec1 and Ldlr genes in order both to eliminate hepatic production of apoB-48 and to inhibit clearance of the apoB-100-containing particles produced. This leads to a lipid profile in these mice that is very similar to that of atherosclerosis-prone humans with familial hypercholesterolemia, wherein the very high levels of cholesterol reside in LDL particles. These excessive LDL-C levels then predispose these mice, even on low-fat, low-cholesterol diets, to spontaneous development of atherosclerotic lesions, similar to the human condition. Whereas this mouse model has been introduced in a previous study [13], a systematic characterization of the lesions that develop has not been offered.
The lesions that spontaneously develop in L−/−/A−/− mice progressively worsen with time, beginning as fatty streaks in the proximal aortic regions as early as 12 weeks of age. These lesions spread to distal regions and, by 72 weeks of age, occupy >60% of the entire arterial tree. Intimal thickening occurs with foam cells and collagen-producing SMCs, which then progresses to a fibrous cap-containing atheroma overlaying a well-established necrotic core. The fibrous cap then thins and the core becomes calcified. We have only sporadically observed mice that experience sudden death, despite clear evidence that the cap progressively thins, loses collagen-producing SMCs, perhaps through dedifferentiation, and gains macrophages, which may degrade the stabilizing collagen within the cap. This may indicate that cap rupture is not a feature of this model, the usual situation for murine models of atherosclerosis, although evidence of cap erosion is present in this model. WHHL (Watanabe Heritable Hyperlipidemic) and WHHLMI (WHHL Myocardial Infarction) rabbits are also used as atherosclerosis model animals. The function of the LDL receptor is impaired in these rabbits due to spontaneously occurring mutations in Ldlr. Moreover, Apobec1 is not expressed in rabbit liver; thus, the majority of cholesterol is packaged in LDL fractions, similar to humans [34]. A small number of cap ruptures are observed in these rabbit models. Cap rupture and/or cap erosion can be asymptomatic, without thrombotic complications, even in humans, and while these animals exhibit cap rupture and cap erosion, it is technically very hard to detect sudden death due to thrombotic complications. Previously, we reported that prothrombin times and activated partial thromboplastin times were shorter in L−/−/A−/− mice than in Wt mice, and that platelets and von Willebrand factor in L−/−/A−/− mice were more activated [15]. However, considering that the average lifespan of L−/−/A−/− mice is almost equal to that of Wt mice, these mice might not expire from thrombotic complications following cap rupture and/or cap erosion, and the severity of thrombotic complications in humans might be much worse than in other animals.
Conclusions
In conclusion, the L−/−/A−/− model is very suitable for studying the features of LDL-C-driven spontaneous atherosclerosis in humans. It offers many advantages over the Apoe−/− and L−/− models in terms of generating the initial stages of the disease, and it also appears to incorporate many of the progressive features of the human disease, beginning with fatty streaks, developing into a clear human stage IV atheroma, and then into a thin-cap fibrous atheroma. While this model is well suited for investigating the genetic influences of a variety of pathways on atherosclerosis development and progression, generation of mice with further deficiencies is a complex and time-consuming process. Despite this, we have successfully developed desirable strains containing additional gene deficiencies. In our previous report, some serum parameters other than factors related to coagulation and fibrinolysis in L−/−/A−/− mice were reported [15]. A global assessment of this model, beyond the functions of any specific genes introduced, will also be the subject of future communications.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
No conflicts of interest are declared in this report. | 5,273.8 | 2018-06-07T00:00:00.000 | [
"Medicine",
"Biology"
] |
THE USE OF MACHINE LEARNING IN SITUATIONAL MANAGEMENT IN RELATION TO THE TASKS OF THE POWER INDUSTRY
The article discusses the application of machine learning methods (artificial neural networks (ANN) and genetic algorithms (GA)) to form management actions when applying the concept of situational management for intelligent support of strategic decision-making on the development of the energy sector. At the first stage, ANN are applied to classify extreme situations in the energy sector and to select the most effective management actions (preventive measures) in order to prevent a critical situation from developing into an emergency. Genetic algorithms are proposed for determining the weighting coefficients when training the ANN. An algorithm for constructing a classifier based on a neural network and a demonstration task using data on generation and consumption of the Unified Energy System of Siberia are presented.
INTRODUCTION
In connection with the spread of the concepts of the Smart Grid [1] and digital energy [2], both the application of modern information and telecommunication technologies and the improvement of the technological infrastructure, decisions on whose development are strategic, are relevant. At present, the issue of improving the technology of intelligent decision-making support to minimize the risk of adverse situations is urgent. One of the promising approaches in this direction is the use of machine learning technologies, which have shown good results in other areas [3][4][5]. The team represented by the authors has proposed to use the concept of situational management for intelligent support of such decisions, and to use a method based on the synthesis of artificial neural networks (ANN) [6][7] and genetic algorithms (GA) [8][9][10] to select management actions (or a sequence of preventive measures). The primary task in this case is the classification of extreme situations (ExS) in the energy sector in accordance with the scale "norm - pre-crisis (critical situations) - crisis (emergency situations)". Timely recommendations on the selection of preventive measures (management actions) can prevent the transition of critical situations to emergencies and allow a return to the normal state of the energy systems (ES). The article considers the modern interpretation of the concept of situational management, illustrated by studies assessing the state of the fuel and energy complex of Russia. Genetic algorithms are proposed for determining weight coefficients when training the ANN [9][10][11]. An algorithm for constructing a classifier based on a neural network and the results of solving the demonstration task using data on generation and consumption of the Unified Energy System of Siberia, obtained with the developed prototype software module, are presented.
SITUATIONAL MANAGEMENT
The concept of situational management was proposed by D.A. Pospelov and developed by him and his students in the 1970s and 1980s [12]. At that time, it could not be fully realized, both because of the insufficient power of computer technology and because of the unsatisfactory level of development of the theory and practice of artificial intelligence. The decline of interest in situational management in Russia in the 1990s, in addition to an objective change in external economic and political conditions, can be explained both by the "winter of artificial intelligence" and by the difficulties that developers encountered when trying to build models of complex management objects using the proposed approach. Nevertheless, at present a new round of interest in this area can be noted, which is reinforced by the availability of more advanced technology and the emergence of new methods and approaches (Intelligent Computing), including methods of semantic modeling [13].
In [14], the idea of situational management is used, the essence of which is the choice of managerial decisions, taking into account the current situation, from a certain set of admissible (typical, standard) control actions. Here the current situation C is understood as the totality of the current state of the object (state vector X) and its external environment (disturbance vector F), so that C = <X, F>. The concept of the complete situation S = <C, G> is also introduced, where C is the current situation and G is the control goal. In turn, the control goal G can be represented as the target situation Gg to which the existing current situation should be brought, so that S = <C, Gg>. Assuming that the current situation C belongs to some class Q′ and the target (given) situation Gg belongs to the class Q″, we seek a control (control action vector U) that belongs to the set of admissible controls Ωu and provides the required transformation of one class of situations into another: for C ∈ Q′ and Gg ∈ Q″, find U ∈ Ωu such that U transforms situations of class Q′ into situations of class Q″. Thus, situational management acts as a mapping: to the pair "current situation - target situation" it matches the desired result, the control U. In other words, in situational management the problem of choosing control actions is reduced to an adequate assessment of the state of the object and environment (which is complicated by the presence of uncertainty factors), assigning the corresponding current situation to one of the typical classes, and choosing a control (from a certain set of alternatives) that leads to the achievement of the management goal (the target situation) [12]. Currently, this concept is proposed for use in operational management. The authors propose to use it for strategic management in the energy sector, which is justified by the example of studies on the development of the country's fuel and energy complex taking into account energy security requirements [13,15].
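A minimal sketch of these definitions in code may make the mapping concrete; the class names, the three situation classes, and the rule table are illustrative assumptions, not part of the cited formalism.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Situation:
    """Current situation C = <X, F>: object state X and disturbances F."""
    X: tuple          # state vector of the object
    F: tuple          # disturbance vector of the external environment

@dataclass
class FullSituation:
    """Complete situation S = <C, Gg>: current situation plus target situation."""
    C: Situation
    Gg: str           # target class, e.g., "norm"

# Admissible controls, keyed by (current class, target class);
# the actions themselves are placeholders.
ADMISSIBLE_CONTROLS = {
    ("pre-crisis", "norm"): ["activate reserves", "reduce load"],
    ("crisis", "pre-crisis"): ["emergency generation", "load shedding"],
}

def choose_control(classify: Callable[[Situation], str], S: FullSituation) -> list:
    """Situational management as a mapping <current class, target class> -> U."""
    q_current = classify(S.C)
    if q_current == S.Gg:
        return []  # already in the target class; no action needed
    return ADMISSIBLE_CONTROLS.get((q_current, S.Gg), [])

# toy usage with a threshold classifier on the first state component
classify = lambda c: "norm" if c.X[0] > 0.9 else ("pre-crisis" if c.X[0] > 0.7 else "crisis")
S = FullSituation(C=Situation(X=(0.8,), F=(0.0,)), Gg="norm")
print(choose_control(classify, S))  # ['activate reserves', 'reduce load']
```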
THE PROBLEM OF CHOOSING CONTROL ACTIONS IN SITUATIONAL MANAGEMENT
Figure 1 shows the general scheme of studies of energy security (EB) problems from the point of view of situational management, i.e., assessing the state of the fuel and energy complex (FEC) under possible scenarios of EB threats, taking into account preventive measures aimed at increasing the level of EB. Let us compare this scheme with the approach described above [14]. Here S0 is the initial state of the FEC, which can be considered the current situation C; Ei are scenarios of possible extreme situations arising when EB threats are realized (analogous to the external influences F); Ap, Aq, and Al are sets of preventive, operational, and liquidation activities to prevent, neutralize, or mitigate the effects of an emergency situation (which can be seen as the set of related controls U); Sj is the FEC state after the emergency (realization of EB threats), taking into account the implemented activities Ap and/or Aq; and Sk is the FEC state after liquidation measures (Sj and Sk can be seen as analogous to the corresponding target situation Gg).
Until recently, the selection of management actions in these studies and the assessment of the effectiveness of the proposed solutions were carried out mainly by expert means.
In the works of O.M. Gerget [16][17][18][19], a bionic model of management actions was proposed, based on the synthesis of artificial neural networks, a genetic algorithm, and the generalized indicator method, which increases the efficiency of decision-making on the choice of management actions. Bionic models are understood as mathematical models built on the principles of functioning and organization of biosystems [16]. The bionic approach has been successfully applied in the field of medicine [17]. The authors, together with O.M. Gerget, proposed to adapt this approach to the tasks of the energy sector.
To evaluate and predict the effect of applying management actions, it is proposed to use a bionic model based on a combination of the generalized indicator method, neural networks (NS) [6], and the genetic algorithm (GA) [8]. Figure 2 shows the concept of the choice of management actions using machine learning. Each object of study is thus described by a bionic model of the form <NS, GA, I, A>, where A denotes the model tuning algorithms. The synthesis of neural, genetic, and informational (generalized indicator method) components in bionic models allows the subsystems to exchange information and pass the values of their characteristics as input influences. In our case, the evaluation of the effect of applying management actions on the basis of the bionic model can be used for a comprehensive assessment of the functional state of the system, depending on the a priori information received regarding the classification of the object of study into one of the classes of critical situations: normal, pre-crisis, or crisis. The generalized indicator method is associated with determining the homeostatic properties of the system and is not used for our tasks at this stage. Below are the results of a study of the possibility of applying the bionic approach to the tasks of the electric power industry.
ASSESSMENT OF THE POSSIBILITY OF USING NEURAL NETWORKS FOR THE CLASSIFICATION OF EXTREME SITUATIONS IN THE ENERGY SECTOR
We formulate a specific task to illustrate the possibility of using neural networks to classify extreme situations in the energy sector. The task concerns the generation and consumption of electricity in the Unified Energy System (UES) of Siberia. The UES of Siberia is located on the territory of the Siberian Federal District and partly of the Far Eastern Federal District. Its operational zone covers 12 subjects of the Russian Federation: the Republics of Altai, Buryatia, Tuva, and Khakassia; the Altai, Transbaikal, and Krasnoyarsk Territories; and the Irkutsk, Kemerovo, Novosibirsk, Omsk, and Tomsk regions. It includes 10 regional energy systems: Altai, Buryat, Chita, Irkutsk, Krasnoyarsk, Novosibirsk, Omsk, Tomsk, Khakass, and Kuzbass. The Altai energy system unites the Republic of Altai and the Altai Region, and the Krasnoyarsk energy system unites the Krasnoyarsk Territory and the Tyva Republic. As shown above, the choice of preventive measures is necessary to prevent a critical situation from becoming an emergency. In order to take advantage of them in time, it is proposed to use neural networks for the timely classification of extreme situations (ExS). The algorithm is therefore divided into 2 stages: 1) forecasting the parameters of the situation; 2) classification of the situation according to the predicted parameters. Having received information about the possible (since the network is capable of making mistakes) occurrence of an extreme situation, and knowing the class to which the ExS belongs and the date of its occurrence, the genetic algorithm can form a set of preventive measures distributed in time, from the current day to the day of occurrence of the ExS, naturally taking into account risks and assumptions. To build a classifier based on neural networks, the following steps must be performed (a compact sketch of such a two-stage pipeline follows this list):
• data preprocessing;
• choice of network topology;
• choice of methods for determining the number of hidden layers;
• selection of methods for determining the number of neurons in each hidden layer;
• selection of methods for initializing initial weights, etc.;
• selection of the network learning algorithm;
• selection of network performance assessment methods.
The algorithm for constructing a classifier based on neural networks is shown in Fig. 3. This algorithm can also be used to build a neural network that solves the prediction problem; in that case, the output of the neural network will be the predicted value of the indicator rather than the class label.
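A minimal sketch of the two-stage procedure (forecast, then classify) is given below; the forecast rule, the thresholds on the generation-to-consumption ratio, and the function names are illustrative assumptions, not values from the paper.

```python
import numpy as np

def forecast_ratio(history: np.ndarray, horizon: int = 7) -> np.ndarray:
    """Stage 1 placeholder: forecast the generation/consumption ratio.
    A naive persistence-of-trend forecast stands in for the trained ANN."""
    trend = history[-1] - history[-2]
    return history[-1] + trend * np.arange(1, horizon + 1)

def classify_situation(ratio: float) -> str:
    """Stage 2 placeholder: map the predicted ratio to a situation class.
    Thresholds are hypothetical."""
    if ratio >= 1.0:
        return "normal"
    if ratio >= 0.95:
        return "pre-crisis"
    return "crisis"

history = np.array([1.04, 1.03, 1.01, 0.99])   # toy normalized ratios
for day, r in enumerate(forecast_ratio(history), start=1):
    print(f"day +{day}: ratio={r:.3f} -> {classify_situation(r)}")
```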
One of the key stages in the operation of a neural network is the learning process, the aim of which is the selection of weighting coefficients. Mathematically, training a neural network amounts to minimizing the objective (error) function of the network. The widespread gradient descent method [20] has several drawbacks (low convergence rate, a large number of a priori parameters, the problem of local minima). An alternative to gradient descent is a genetic algorithm that minimizes the cost function of the neural network; it is based on the principle of natural selection and avoids many problems at the network training stage [16]. It is also possible to use the genetic algorithm to solve the problem of selecting a sequence of management actions for which the deviation of the predicted value of the integrated assessment (or of a separate indicator) of the electric power system from the desired (required) value is minimal.
DEMO EXAMPLE TO ILLUSTRATE THE POSSIBILITIES OF USING ANN
Formulation of the problem: it is necessary to predict the ratio of generation to electricity consumption 7 days in advance and to classify the forecasting results. There are three situation classes: normal, pre-crisis, and crisis. The data for the training sample (a time series) were taken from 01.01.2013 to 17.06.2019. A population of 2359 objects for training the network was formed. Each object is described by 2 parameters (features): the date and the amount of electricity production (MW·day) (Fig. 4). The entire sample was normalized to the range from 0 to 1. The network, with 6 inputs and one output, was trained on the resulting set.
To develop a prototype of the software module, a feedforward (direct signal propagation) architecture was chosen for the neural network, consisting of 4 layers: an input layer, 2 hidden layers, and an output layer. The input layer consists of 6 neurons, the first hidden layer of 5 neurons, the second hidden layer of 3 neurons, and the output layer of 1 neuron. As learning algorithms, it was proposed to use the genetic algorithm and the error backpropagation algorithm, and then to compare their results. A sigmoid function is used as the activation function of the hidden layers. Prototype development tools: the Python programming language and the PyCharm development environment.
Fig. 4. Initial data for the demo example.
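The 6-5-3-1 sigmoid network described above can be written compactly as a forward pass; the sketch below, with random initial weights, is a minimal stand-in for the prototype (whose exact implementation is not given in the paper), and it applies the sigmoid to the output layer as well for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)
LAYERS = [6, 5, 3, 1]  # topology from the paper: input, two hidden, output

# random initial weights and biases (to be optimized by GA or backpropagation)
weights = [rng.standard_normal((m, n)) for m, n in zip(LAYERS[:-1], LAYERS[1:])]
biases = [np.zeros(n) for n in LAYERS[1:]]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    """Propagate a (batch, 6) input through the 6-5-3-1 network."""
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(a @ W + b)
    return a

x = rng.random((4, 6))                    # 4 samples of 6 normalized inputs
print(forward(x, weights, biases).shape)  # (4, 1)
```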
RESULTS OF CALCULATIONS WITH THE PROTOTYPE OF THE SOFTWARE MODULE
The training of an ANN using a genetic algorithm is based on the idea of natural selection in living nature. In this model, the role of genes is played by sets of numbers, in our case the weights of the neural network. Mutation occurs by changing a weight coefficient by a random value. The range and probability of mutation are given as initial values, but they can also mutate during evolution. Crossover occurs by randomly mixing the neurons of two ANNs; the new ANN thus obtained takes a place in the population. The task of the genetic algorithm is to minimize the objective function, for which the average error of the neural network is used. GA parameters: minimum number of individuals in a population, 5; maximum number of individuals in a population, 30; initial number of places in the population, 10; probability of mutation, 0.1; mutation level, 0.5. Table 1 and Fig. 5, and Table 2 and Fig. 6, show the results of the prototype for the two cases, respectively. Comparison of the tables and graphs allows us to conclude that the results are comparable, which confirms the possibility of using a genetic algorithm to determine the weight coefficients of ANNs.
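A minimal sketch of such weight evolution, reusing the population and mutation parameters quoted above, is shown below; the crossover and selection details are simplified assumptions rather than the prototype's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
POP_INIT, POP_MAX = 10, 30         # population sizes from the paper
P_MUT, MUT_LEVEL = 0.1, 0.5        # mutation probability and level from the paper
SHAPES = [(6, 5), (5, 3), (3, 1)]  # weight matrices of the 6-5-3-1 network

def forward(x, genome):
    """Unpack a flat genome into weight matrices and run the sigmoid network."""
    a, i = x, 0
    for m, n in SHAPES:
        W = genome[i:i + m * n].reshape(m, n); i += m * n
        a = 1.0 / (1.0 + np.exp(-(a @ W)))
    return a

def fitness(genome, x, y):
    return np.mean((forward(x, genome) - y) ** 2)  # mean network error (GA objective)

def mutate(g):
    mask = rng.random(g.size) < P_MUT
    return g + mask * rng.uniform(-MUT_LEVEL, MUT_LEVEL, g.size)

def crossover(a, b):
    return np.where(rng.random(a.size) < 0.5, a, b)  # randomly mix parents' weights

def evolve(x, y, generations=200):
    n_genes = sum(m * n for m, n in SHAPES)
    pop = [rng.standard_normal(n_genes) for _ in range(POP_INIT)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(g, x, y))
        survivors = pop[: max(2, len(pop) // 2)]   # truncation selection
        while len(survivors) < POP_MAX:
            i, j = rng.choice(len(survivors), size=2, replace=False)
            survivors.append(mutate(crossover(survivors[i], survivors[j])))
        pop = survivors
    return min(pop, key=lambda g: fitness(g, x, y))

x = rng.random((32, 6)); y = rng.random((32, 1))   # toy normalized data
best = evolve(x, y)
print(f"best mean error: {fitness(best, x, y):.4f}")
```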
CONCLUSION
The possibility of using machine learning methods in the implementation of the concept of situational management, as applied to the tasks of the electric power industry, has been tested. In order to select a sequence of control actions, it is proposed to use a bionic model based on the synthesis of artificial neural networks and genetic algorithms. | 3,509.8 | 2019-01-01T00:00:00.000 | [
"Computer Science"
] |
A Relaxation Filtering Approach for Two-Dimensional Rayleigh–Taylor Instability-Induced Flows
In this paper, we investigate the performance of a relaxation filtering approach for the Euler turbulence using a central seven-point stencil reconstruction scheme. High-resolution numerical experiments are performed for both multi-mode and single-mode Rayleigh-Taylor instability (RTI) problems.
Introduction
Rayleigh-Taylor instability (RTI) is an interfacial hydrodynamic instability that occurs at the interface separating two fluids of different densities in the presence of relative acceleration [1]. Understanding the behavior of RTI-induced flows is of great importance because of the prevalence of such instability phenomena in many natural, industrial, and astrophysical systems with unstably stratified interfaces, such as coastal upwelling near the surface of the oceans [2], atmosphere and clouds [3], plasma physics such as magnetic or inertial confinement fusion implosions [4], the ignition of supernovae [5,6], air bubble formation in the blood of deep sea divers [7], premixed combustion [8,9], and many more. In general, the RTI phenomenon is one of the easiest hydrodynamic instabilities to observe; for example, if we invert a glass filled with water, the RTI occurs and makes the water fall [10,11]. Lord Rayleigh first described theoretically an instability that occurs in cirrus cloud formation when a dense fluid is supported by a lighter one in a gravitational field [12]. Later, Sir G. Taylor demonstrated the same instability experimentally for accelerated fluids [1], and in honour of their contributions this instability is named after Lord Rayleigh and Sir G. Taylor. A detailed overview of the applications of RTI phenomena, with definitions, physical interpretations, and terminologies, can be found in [13][14][15][16][17]. Although RTI is a part of many diverse areas of scientific research and a substantial body of work has been conducted on this topic, the choice of turbulence treatment remains central. For the simulation of a turbulent flow governed by Eulerian hyperbolic conservation laws, the ILES methodology can be a good choice, having shown good performance in resolving turbulent flows with shocks and discontinuities [45][46][47][48]. One popular ILES framework is to use an upwind scheme (e.g., a Weighted Essentially Non-Oscillatory (WENO) scheme) along with a Riemann solver to incorporate artificial dissipation through numerical truncation errors [49]. The upwind-biased and nonlinearly weighted WENO schemes are widely used in resolving highly compressible turbulent flows because of their robustness in capturing discontinuities in shock-dominated flows and their high order of accuracy in preserving turbulence features [50][51][52]. It should be mentioned that the development of improved WENO schemes is an active research field, and much work continues in this direction [53][54][55][56]. Another candidate modeling approach is the explicit filtering approach using relaxation filtering, which adds dissipation on the truncated scales in LES through a low-pass filter [57][58][59][60]. In this approach, an additional low-pass spatial filter is used to estimate the effect of unresolved scales. The selection of the relaxation filter also affects the solution field, and a significant body of literature is available on the formulation of suitable and efficient LES filters [61][62][63]. In this work, we use the sixth-order symmetric central scheme with a 7-point stencil Simpson's filter (SF7) as a relaxation filter for the RTI test case. We also implement an ILES scheme combined with the Roe and Rusanov Riemann solvers to compare with the results obtained by our relaxation filtering solver.
The main purposes of this paper are to simulate RTI-induced flows (for both single- and multi-mode perturbations) using a relaxation filtering approach in order to observe the resolution capability of this scheme, to analyze the flow behavior by observing the density field contours, and to compare the results obtained by the relaxation filtering scheme and the ILES scheme through kinetic energy (with and without density-weighted velocity) and power density spectra plots. The results show that the relaxation filtering scheme captures more scales in the inertial subrange, whereas the ILES scheme resolves more scales in the high-wavenumber regime. For both multi- and single-mode RTI, the kinetic energy spectra tend to follow the k^(-11/5) scaling law. On the other hand, the power density spectra are observed to align with k^(-7/5) at high resolution. More rigorous derivations and mathematical analyses of these scaling laws can be found elsewhere [64][65][66][67][68][69]. Also, the density contour plots for single-mode RTI reveal that the symmetry of the falling spike breaks at high resolution because of the formation of secondary instabilities from the smaller scales resolved at high resolution. The lower-resolution simulations, however, preserve the symmetry for all schemes, since numerical dissipation suppresses the formation of the secondary instabilities. It has also been seen that the filter strength of the relaxation filter allows us to add more or less dissipation to the solver, which in turn affects the flow behavior.
The rest of the paper is organized as follows. Section 2 gives a brief description of the governing equations. Section 3 illustrates the numerical methodology implemented in this study. In Section 4, we detail the results obtained by the numerical schemes used in our investigation along with the problem definitions of the RTI test problem. We demonstrate our findings through high-and coarse-resolution density field contour and density-weighted energy spectra plots. Section 5 gives the summary of our findings and conclusions.
Governing Equations
In our study, we consider the two-dimensional Euler equations in their conservative dimensionless form as the underlying governing equations for the Rayleigh-Taylor instability-induced flow evolution:

∂q/∂t + ∂F/∂x + ∂G/∂y = S, (1)

where F and G account for the inviscid flux contributions and S represents the gravitational term acting in the vertically downward direction (i.e., g = -1). The quantities included in q, F, G, and S are:

q = [ρ, ρu, ρv, ρe]^T,
F = [ρu, ρu² + p, ρuv, ρuh]^T,
G = [ρv, ρuv, ρv² + p, ρvh]^T,
S = [0, 0, ρg, ρvg]^T.

Here, ρ, p, e, u, and v are the density, pressure, total energy per unit mass, and the horizontal and vertical velocity components, respectively. The total enthalpy h and pressure p can be obtained from:

h = e + p/ρ, p = ρ(γ - 1)[e - (u² + v²)/2],

where γ = 7/5 is chosen as the ratio of specific heats in our study. We refer the reader to [50,70] for details on the development of the eigensystem of the equations for hyperbolic conservation laws.
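As a sanity check on these definitions, the sketch below assembles the conservative variables and fluxes from primitive quantities; it follows the standard 2D Euler forms reconstructed above (the equation block in the source was garbled, so the exact vectors are an assumption consistent with the surrounding text).

```python
import numpy as np

GAMMA = 7.0 / 5.0   # ratio of specific heats used in the paper
G_ACC = -1.0        # gravitational acceleration (vertically downward)

def euler_vectors(rho, u, v, p):
    """Return q, F, G, S for the 2D Euler equations with a gravity source."""
    e = p / (rho * (GAMMA - 1.0)) + 0.5 * (u**2 + v**2)  # total energy per unit mass
    h = e + p / rho                                      # total enthalpy
    q = np.array([rho, rho * u, rho * v, rho * e])
    F = np.array([rho * u, rho * u**2 + p, rho * u * v, rho * u * h])
    G = np.array([rho * v, rho * u * v, rho * v**2 + p, rho * v * h])
    S = np.array([0.0, 0.0, rho * G_ACC, rho * v * G_ACC])
    return q, F, G, S

q, F, G, S = euler_vectors(rho=1.0, u=0.1, v=-0.05, p=2.5)
print(q, S, sep="\n")
```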
Numerical Methods
To develop the computational algorithm for our test problem governed by hyperbolic conservation laws, we formulate a finite volume framework using different numerical strategies and schemes. In this section, we briefly introduce the numerical methods considered in the present study. We use the method of lines to cast our system of partial differential equations given in Equation (1) into the following semi-discrete ordinary differential equation in time:

dq_{i,j}/dt = £(q_{i,j}),

where q_{i,j} is the cell-averaged vector of dependent variables and £(q_{i,j}) represents the convective flux terms of the governing equation, which can be expressed in the discretized form:

£(q_{i,j}) = -(F_{i+1/2,j} - F_{i-1/2,j})/Δx - (G_{i,j+1/2} - G_{i,j-1/2})/Δy + S_{i,j}.

Here, F_{i±1/2,j} are the cell face flux reconstructions in the x-direction and G_{i,j±1/2} are the cell face flux reconstructions in the y-direction. We use the optimal third-order accurate total variation diminishing Runge-Kutta (TVDRK3) scheme [71] for the time integration:

q^(1) = q^n + Δt £(q^n),
q^(2) = (3/4) q^n + (1/4) q^(1) + (1/4) Δt £(q^(1)),
q^(n+1) = (1/3) q^n + (2/3) q^(2) + (2/3) Δt £(q^(2)),

where the time step Δt is obtained from the Courant-Friedrichs-Lewy (CFL) criterion:

Δt = min( η Δx / max(|u|, |u + a|, |u - a|), η Δy / max(|v|, |v + a|, |v - a|) ),

where a = sqrt(γp/ρ) is the speed of sound computed from the primitive flow variables. In our current investigation, we use η = 0.5 for all simulations (η ≤ 1 for numerical stability). For the cell face flux reconstructions, we have implemented the ILES and relaxation filtering modeling approaches on our test problem, which are discussed briefly in the subsequent sections.
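A minimal sketch of this time integrator, written against a generic right-hand-side function L(q) and using the stage coefficients reconstructed above, is shown below; the toy advection right-hand side is only an illustration.

```python
import numpy as np

def tvdrk3_step(q, L, dt):
    """One TVDRK3 step for dq/dt = L(q); q is any ndarray of cell averages."""
    q1 = q + dt * L(q)
    q2 = 0.75 * q + 0.25 * q1 + 0.25 * dt * L(q1)
    return q / 3.0 + (2.0 / 3.0) * q2 + (2.0 / 3.0) * dt * L(q2)

def cfl_dt(u, v, a, dx, dy, eta=0.5):
    """CFL-limited time step from the maximum local wave speeds."""
    sx = np.max(np.abs(u) + a)   # bounds max(|u|, |u +/- a|)
    sy = np.max(np.abs(v) + a)
    return eta * min(dx / sx, dy / sy)

# toy usage: first-order upwind linear advection dq/dt = -c dq/dx, periodic grid
c, dx = 1.0, 0.01
L = lambda q: -c * (q - np.roll(q, 1)) / dx
q = np.exp(-((np.arange(100) * dx - 0.5) ** 2) / 0.01)
q = tvdrk3_step(q, L, dt=0.5 * dx / c)
print(q.shape)
```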
ILES Approach
To develop our ILES framework, we first use the WENO interpolation scheme to reconstruct the left and right states at the cell boundaries. We then calculate the fluxes at the cell edges from the reconstructed left and right states using a Riemann solver. The finite volume framework for a system of Euler conservation equations usually requires a Riemann solver to treat the Riemann problem at each cell interface [72]. The damping characteristics of nonlinear WENO schemes act as an implicit filter to prevent energy accumulation near the grid cut-off [73,74]. In this work, we use the fifth-order accurate WENO scheme followed by two widely used Riemann solvers, the Roe and Rusanov solvers, to determine the flux at the cell boundaries.
WENO Reconstruction
The WENO scheme was first introduced in [75] for problems with shocks and discontinuities as an improvement over the essentially non-oscillatory (ENO) method [76,77]. In this work, we use an implementation of the WENO reconstruction using 7-point stencils (i.e., updating any quantity located at index i depends on the information coming from i - 3, i - 2, ..., i + 3), which can be written as:

q^L_{i+1/2} = w_0 [(1/3) q_{i-2} - (7/6) q_{i-1} + (11/6) q_i]
            + w_1 [-(1/6) q_{i-1} + (5/6) q_i + (1/3) q_{i+1}]
            + w_2 [(1/3) q_i + (5/6) q_{i+1} - (1/6) q_{i+2}],

q^R_{i-1/2} = w_0 [(1/3) q_{i+2} - (7/6) q_{i+1} + (11/6) q_i]
            + w_1 [-(1/6) q_{i+1} + (5/6) q_i + (1/3) q_{i-1}]
            + w_2 [(1/3) q_i + (5/6) q_{i-1} - (1/6) q_{i-2}]. (11)

Here, q^L_{i+1/2} and q^R_{i-1/2} are the left-state (positive) and right-state (negative) reconstructions, respectively, approximated at the midpoints between cell nodes; the left (L) and right (R) states correspond to the possibility of advection from both directions. Since the procedures are similar in the y-direction, we present stencil expressions only in the x-direction for the rest of this document. The w_k are the nonlinear WENO weights of the kth stencil, where k = 0, 1, ..., r and r = 2 for the WENO5 scheme. The nonlinear weights proposed by Jiang and Shu [78] in their classical WENO-JS scheme are:

w_k = α_k / (α_0 + α_1 + α_2), α_k = d_k / (β_k + ε)²,

but the nonlinear weights defined by the WENO-JS scheme are found to be more dissipative than many low-dissipation linear schemes, both in smooth regions and in regions around discontinuities or shock waves [79]. In our study, we have used an improved version of the WENO approach proposed by [80], often referred to as the WENO-Z scheme. One of the main reasons for selecting WENO-Z is its less dissipative behavior compared with the classical WENO-JS in capturing shock waves; there is also a smaller loss of accuracy at critical points with the improved nonlinear weights. The new nonlinear weights for the WENO-Z scheme are defined by:

w_k = α_k / (α_0 + α_1 + α_2), α_k = d_k [1 + (τ_5 / (β_k + ε))^p], τ_5 = |β_0 - β_2|,

where β_k and p are the smoothness indicator of the kth stencil and a positive integer, respectively. Here, ε = 1.0 × 10^(-20) is a small constant preventing division by zero, and p = 2 is set in the present study to obtain the optimal fifth-order accuracy at critical points. The expressions for β_k in terms of the cell values of q are given by:

β_0 = (13/12)(q_{i-2} - 2q_{i-1} + q_i)² + (1/4)(q_{i-2} - 4q_{i-1} + 3q_i)²,
β_1 = (13/12)(q_{i-1} - 2q_i + q_{i+1})² + (1/4)(q_{i-1} - q_{i+1})²,
β_2 = (13/12)(q_i - 2q_{i+1} + q_{i+2})² + (1/4)(3q_i - 4q_{i+1} + q_{i+2})².

The d_k are the optimal weights of the linear high-order scheme, given by:

d_0 = 1/10, d_1 = 6/10, d_2 = 3/10.
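The reconstruction above translates almost line-for-line into code; the sketch below implements the left-state WENO5-Z reconstruction for a 1D periodic array under the standard formulas reconstructed above (the garbled equations in the source are the caveat here).

```python
import numpy as np

def weno5z_left(q, eps=1.0e-20, p=2):
    """Left-state reconstruction q^L_{i+1/2} on a periodic 1D grid."""
    qm2, qm1, q0 = np.roll(q, 2), np.roll(q, 1), q
    qp1, qp2 = np.roll(q, -1), np.roll(q, -2)

    # candidate stencil reconstructions
    f0 = (1/3) * qm2 - (7/6) * qm1 + (11/6) * q0
    f1 = -(1/6) * qm1 + (5/6) * q0 + (1/3) * qp1
    f2 = (1/3) * q0 + (5/6) * qp1 - (1/6) * qp2

    # smoothness indicators
    b0 = (13/12) * (qm2 - 2*qm1 + q0)**2 + 0.25 * (qm2 - 4*qm1 + 3*q0)**2
    b1 = (13/12) * (qm1 - 2*q0 + qp1)**2 + 0.25 * (qm1 - qp1)**2
    b2 = (13/12) * (q0 - 2*qp1 + qp2)**2 + 0.25 * (3*q0 - 4*qp1 + qp2)**2

    # WENO-Z nonlinear weights (optimal weights d = 1/10, 6/10, 3/10)
    tau5 = np.abs(b0 - b2)
    a0 = 0.1 * (1 + (tau5 / (b0 + eps))**p)
    a1 = 0.6 * (1 + (tau5 / (b1 + eps))**p)
    a2 = 0.3 * (1 + (tau5 / (b2 + eps))**p)
    s = a0 + a1 + a2
    return (a0 * f0 + a1 * f1 + a2 * f2) / s

x = np.linspace(0, 1, 64, endpoint=False)
q = np.where(x < 0.5, 1.0, 0.125)       # a discontinuous profile
print(weno5z_left(q)[:5])
```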
Roe Riemann Solver
Within the Godunov framework [72], Roe developed an approximate Riemann solver, known as the Roe Riemann solver [81]. In our computational algorithm, we use the flux difference splitting (FDS) scheme of Roe [81], where the fluxes at the interface are computed in the x-direction by:

F_{i+1/2,j} = (1/2)(F^L_{i+1/2,j} + F^R_{i+1/2,j}) − (1/2) ∆F_{i+1/2,j}.

Here, ∆F is the dissipative flux difference, assembled from the characteristic decomposition of the jump between the right and left states, i.e., ∆F = Σ_k |λ_k| α_k r_k, where α_k are the wave strengths and r_k are the right eigenvectors of the Roe-averaged flux Jacobian. ∆ denotes the difference between the right and left states of the variables ρ, p, u, v (e.g., ∆u = u_R − u_L), and the eigenvalues are defined as λ_1 = |ũ|, λ_2 = |ũ + ã|, and λ_3 = |ũ − ã|, where ã is the speed of sound at the averaged state. In these equations, the tilde represents the density-weighted average, or the Roe average, between the left and right states. The Roe-averaged values can be found by:

ũ = (√ρ_L u_L + √ρ_R u_R)/(√ρ_L + √ρ_R),  ṽ = (√ρ_L v_L + √ρ_R v_R)/(√ρ_L + √ρ_R),
H̃ = (√ρ_L H_L + √ρ_R H_R)/(√ρ_L + √ρ_R),  ã = √((γ − 1)(H̃ − (ũ² + ṽ²)/2)),

where the left and right states of the un-averaged conserved variables are available from the WENO5 reconstruction described earlier. However, it was later realized that stationary expansion shocks are not dissipated appropriately by this method. To satisfy the entropy condition in expansion regions, Harten proposed the following fix [82], replacing the Roe-averaged eigenvalues by:

|λ_k| → (λ_k² + ε²)/(2ε) if |λ_k| < ε.

Here, ε = 2κã, where κ, a small positive number, is set to 0.1 in our computations. Similarly, in the y-direction, λ_1 = |ṽ|, λ_2 = |ṽ + ã|, and λ_3 = |ṽ − ã|, and the interfacial fluxes are estimated by:

G_{i,j+1/2} = (1/2)(G^L_{i,j+1/2} + G^R_{i,j+1/2}) − (1/2) ∆G_{i,j+1/2},

with ∆G assembled analogously from the y-direction characteristic decomposition.
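Since the detailed component expressions of the flux difference were lost in extraction, the sketch below only illustrates the two pieces stated explicitly in the text: the Roe (density-weighted) averaging and Harten's entropy fix. The function names and the use of the specific total enthalpy as an input are assumptions for illustration.

```python
import numpy as np

def roe_average(rL, uL, vL, hL, rR, uR, vR, hR, gamma=1.4):
    """Density-weighted (Roe) averages of the left/right states;
    hL, hR are the specific total enthalpies."""
    wL, wR = np.sqrt(rL), np.sqrt(rR)
    u = (wL*uL + wR*uR) / (wL + wR)
    v = (wL*vL + wR*vR) / (wL + wR)
    h = (wL*hL + wR*hR) / (wL + wR)
    a = np.sqrt((gamma - 1.0) * (h - 0.5*(u*u + v*v)))
    return u, v, h, a

def harten_fix(lam, a, kappa=0.1):
    """Harten's entropy fix: regularize eigenvalues near zero
    with eps = 2*kappa*a, as described in the text."""
    eps = 2.0 * kappa * a
    return np.where(np.abs(lam) < eps,
                    (lam*lam + eps*eps) / (2.0*eps),
                    np.abs(lam))
```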
Rusanov Riemann Solver
Rusanov proposed a Riemann solver based on the maximum local wave propagation speed [83], sometimes referred to as the local Lax-Friedrichs flux [84,85]. The expression for the Rusanov solver in the x-direction is:

F_{i+1/2,j} = (1/2)(F^L_{i+1/2,j} + F^R_{i+1/2,j}) − (c_{i+1/2}/2)(q^R_{i+1/2,j} − q^L_{i+1/2,j}),

where F^L and F^R are the flux components evaluated from the left and right reconstructed states, and the characteristic speed is c_{i+1/2} = ã + |ũ|. The density-weighted averages of the conserved variables can be calculated by Equation (18). Similarly, the expression for the Rusanov solver in the y-direction is:

G_{i,j+1/2} = (1/2)(G^L_{i,j+1/2} + G^R_{i,j+1/2}) − (c_{j+1/2}/2)(q^R_{i,j+1/2} − q^L_{i,j+1/2}),

where c_{j+1/2} = ã + |ṽ|.
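The Rusanov flux is simple enough to state directly in code. The following sketch assumes the left/right fluxes and states come from the WENO5 reconstruction above; the argument names are chosen for illustration.

```python
import numpy as np

def rusanov_flux(FL, FR, qL, qR, a_roe, u_roe):
    """Rusanov (local Lax-Friedrichs) flux in the x-direction from
    reconstructed left/right fluxes (FL, FR) and states (qL, qR)."""
    c = a_roe + np.abs(u_roe)  # local maximum wave speed
    return 0.5 * (FL + FR) - 0.5 * c * (qR - qL)
```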
Central Scheme with Relaxation Filtering (CS+RF) Approach
In our relaxation filtering approach, we consider a symmetric flux reconstruction using a purely central scheme (CS) combined with a low-pass spatial filter, in our case a 7-point stencil Simpson's filter (SF7), acting as a relaxation filter (RF). We denote this solver as CS+RF. For the cell interfacial reconstruction of the conserved quantity, the following symmetric, non-dissipative scheme is used [86] for the interpolation in the x-direction:

q_{i+1/2,j} = a_1 (q_{i,j} + q_{i+1,j}) + a_2 (q_{i−1,j} + q_{i+2,j}) + a_3 (q_{i−2,j} + q_{i+3,j}),

and similarly in the y-direction the conservative interpolation formula reads:

q_{i,j+1/2} = a_1 (q_{i,j} + q_{i,j+1}) + a_2 (q_{i,j−1} + q_{i,j+2}) + a_3 (q_{i,j−2} + q_{i,j+3}),

where the stencil coefficients of the sixth-order central interpolation are a_1 = 37/60, a_2 = −8/60, and a_3 = 1/60. Here, q_{i,j} represents the flow variables (at cell centers) given in Equation (4). The fluxes calculated from the relevant face quantities determined from these nodal values are then used in the discretized finite volume equation. In this approach, the explicit filtering removes frequencies above a selected cut-off threshold through the low-pass spatial filter. Such a low-pass filter is commonly used in explicit filtering approaches and can be considered a free modeling parameter with a specific order of accuracy and a fixed filtering strength [87]. The filtering operation is performed at the end of every time step to remove high-frequency content from the solution, which prevents spurious oscillations [58,88,89]. A discussion and analysis of the characteristics of different low-pass filters can be found in [90]. In our investigation, the sixth-order sequential RF for any quantity f is applied direction by direction as:

f̄_{i,j} = f_{i,j} − σ Σ_{k=−3}^{3} d_{|k|} f_{i+k,j},

where the discrete quantity f_{i,j} yields the filtered value f̄_{i,j}, and the filtering coefficients d_k are those of the SF7 filter listed in [49]. σ is a parameter that controls the filter dissipation strength in the range [0, 1], where σ = 0 indicates no filtering effect at all, i.e., completely non-dissipative, and σ = 1 indicates the strongest filtering effect, i.e., most dissipative, with complete attenuation at the grid cut-off wavenumber. The transfer function of the SF7 filter displays increasing dissipation as the parameter σ grows [49].
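A minimal sketch of the filtering step is given below. The generic stencil form follows the equation above; the coefficient values shown are the standard sixth-order 7-point filter coefficients (which reproduce the complete attenuation at the grid cutoff for σ = 1 described in the text) and are used only as a stand-in, since the paper's exact SF7 values are given in [49]. The periodic wrap-around via np.roll is also an illustrative simplification.

```python
import numpy as np

def relax_filter_x(f, sigma, d):
    """Apply a symmetric (2N+1)-point relaxation filter along x:
    f_bar_i = f_i - sigma * sum_k d_{|k|} f_{i+k}.
    d holds (d_0, d_1, ..., d_N); periodic wrap is assumed."""
    fbar = f.copy()
    N = len(d) - 1
    for k in range(-N, N + 1):
        fbar -= sigma * d[abs(k)] * np.roll(f, -k, axis=0)
    return fbar

# Placeholder coefficients: the standard sixth-order 7-point filter,
# normalized so that sigma = 1 fully damps the grid-cutoff wave.
d_standard = np.array([5/16, -15/64, 3/32, -1/64])
```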
Results
In this section, we present our numerical assessment of the modeling approaches outlined in the previous section for both the multi-mode and single-mode two-dimensional RTI test problems. We first illustrate the problem definitions of our test cases, followed by the results obtained with the different numerical solvers. We perform quantitative comparisons between the ILES and CS+RF models using density contours, density-weighted kinetic energy spectra, and compensated density-weighted kinetic energy spectra. For this comparative analysis, we obtain the high-resolution ILES and CS+RF solutions through a parallel computing approach based on the Message Passing Interface (MPI) framework [91,92]. A detailed discussion of the MPI methodology implemented in our study can be found in [49]. Using both high- and coarse-resolution simulation results, the scaling behaviors of the kinetic energy spectra are also investigated in this section.
Two-Dimensional RTI Test Problem: Case Setup
In our numerical experiments, we use a two-dimensional implementation of RTI with the aforementioned numerical schemes. In general, RTI arises at the interface of two fluids when a dense fluid is supported above a comparatively lighter fluid in a gravitational field, or when the interface is subject to a relative acceleration. Since numerous studies have found that many properties of RTI-induced flows, such as the overall growth rate of the mixing, the dissipation scales, and the velocity field, depend to some extent on the initial conditions of the flow domain [14,93-95], we consider both RTI with multi-mode (randomized) perturbation and RTI with single-mode perturbation in our study. We first focus on the case of randomized initial perturbation, where our computational domain is set to (x, y) ∈ [0, 0.5] × [−0.375, 0.375], with a heavy fluid of ρ = 2 above the interface, a light fluid of ρ = 1 below it, and a hydrostatic pressure field p(x, y) = 2.5 − ρy.
Here, L_y is set to 0.75, the amplitude of the perturbation is set to λ = 0.01, and α is a random number between 0 and 1. Since α is drawn independently at each grid point, the perturbation is an implicit function of both x and y. On the other hand, for the single-mode RTI, the computational domain is set to (x, y) ∈ [0, 0.5] × [−0.75, 0.75] with analogous initial conditions, where L_x and L_y are set to 0.5 and 1.5, respectively, with the same perturbation amplitude as in the multi-mode case, λ = 0.01. A similar two-dimensional RTI test problem setup has been used in various RTI studies [96,97]. Figure 1 shows the schematic of the computational domain for both cases of the RTI test problem, where it can be seen that the normalized gravity acts vertically downward in our problem definitions. We apply the periodic boundary condition on the left and right boundaries, and the reflective boundary condition on the top and bottom boundaries of our computational domain for both test setups. To better understand the boundary conditions used in our test domain, consider an arbitrary two-dimensional domain as illustrated in Figure 2. To apply the periodic and reflective boundary conditions for our 7-point stencil scheme, we take three ghost points on each of the four boundaries of our computational domain. For the periodic boundary condition on the left and right boundaries, the ghost point values of the time-dependent variables in the vector q (from Equation (2)) are copied from the opposite interior side, i.e.,

q_{1−i,j} = q_{N_x+1−i,j},  q_{N_x+i,j} = q_{i,j},  i = 1, 2, 3,

where j = −2, −1, ..., N_y + 3. On the other hand, our approximation for the reflective boundary condition mirrors the interior values about the boundary while negating the normal momentum component, e.g., at the bottom boundary,

q_{i,1−k} = q_{i,k} with (ρv)_{i,1−k} = −(ρv)_{i,k},  k = 1, 2, 3,

where i = 1, 2, ..., N_x (and analogously at the top boundary). For the parallelization, we decompose the domain in the y-direction and update the ghost points of each local domain by transferring information from the adjacent domains. Although we implement our reflective boundary conditions as defined by Equation (39), we note that such boundary conditions are often applied to the velocity rather than the momentum. We stress that simulations of unsteady compressible flows require accurate control of wave reflections at the boundaries of the computational domain, since such waves may propagate from the boundary and interact with the flow [98]. We plan to implement more accurate characteristics-based boundary conditions in future studies.
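The ghost-point logic can be summarized in a short sketch. The array layout (conserved variables in the last axis, three ghost layers per side) and the function name are assumptions for illustration; the normal momentum is negated at the reflective walls as described above.

```python
import numpy as np

NG = 3  # ghost layers for the 7-point stencil

def apply_bcs(q):
    """Fill ghost layers: periodic in x, reflective in y.
    q has shape (nx + 2*NG, ny + 2*NG, 4), holding
    (rho, rho*u, rho*v, rho*E); interior is q[NG:-NG, NG:-NG]."""
    # Periodic left/right: copy from the opposite interior side.
    q[:NG, :, :] = q[-2*NG:-NG, :, :]
    q[-NG:, :, :] = q[NG:2*NG, :, :]
    # Reflective bottom/top: mirror the interior rows and negate
    # the normal (y) momentum component rho*v.
    for g in range(NG):
        q[:, NG - 1 - g, :] = q[:, NG + g, :]
        q[:, NG - 1 - g, 2] *= -1.0
        q[:, -NG + g, :] = q[:, -NG - 1 - g, :]
        q[:, -NG + g, 2] *= -1.0
    return q
```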
RTI with Random (Multi-Mode) Perturbation
The nonlinear evolution of the Rayleigh-Taylor instability from multi-mode initial perturbations is studied through the density field contours and density-weighted kinetic energy spectra to assess the performance of the underlying modeling schemes. Figure 3 shows the time evolution of the density field at high resolution using the ILES-Roe scheme. Since it was shown in [99] that DNS and ILES give similar results for the global properties of RTI-induced mixing, we use the high-resolution ILES results to avoid the higher computational cost of DNS. It is also apparent in Figure 3 that several fine-scale structures are captured by the ILES-Roe scheme because of its capability to resolve the smaller scales in the high-wavenumber region. It can be observed that the mixing growth rate is uniform along the interface at t = 1.6, with multiple modes present. A similar behavior was reported in [21], where the authors found a uniform growth of the mixing region initially for an idealized initial condition, whereas the experimental results for the same test condition showed the presence of a dominant scale at the same time. In our study, even though some dominant scales or modes are present at t = 4.0, a considerable amount of unmixed fluid can still be seen at that time, indicating a slow mixing rate for this initial condition. In Figure 4, we present the density contour plots at coarse resolution obtained by the different ILES-Riemann solver combinations at the final time of our simulation, t = 4.0. A clear difference in the growth of scales, as well as in the mixing, can be seen between the two solvers. Since the Rusanov solver is more dissipative than the Roe solver [100], a different evolution of the scales in the flow field is expected. Similarly, we observe different flow field evolutions of the CS+RF scheme for different filtering strengths σ in Figure 5. Since a higher value of σ adds more dissipation, this solver induces a different amount of perturbation in the flow field over time than the solver with a lower value of σ. Hence, we plot the density-weighted kinetic energy spectra to obtain a better view of the performance of the different solvers [27,101-104]. To include these density effects, we define the energy spectrum built on the density-weighted velocity components

u_ρ(x, y, t) = √ρ(x, y, t) u(x, y, t),  v_ρ(x, y, t) = √ρ(x, y, t) v(x, y, t).

We then calculate the density-weighted kinetic energy spectrum as

E_ρ(k) = (1/N_y) Σ_j (1/2)(|û_ρ(k, y_j)|² + |v̂_ρ(k, y_j)|²),  (43)

where k refers to the wavenumber along the x-direction. We obtain the Fourier coefficients using a standard FFT algorithm [105],

û(k, y_j) = (1/N_x) Σ_i u(x_i, y_j) e^(−îkx_i),  (44)

where î refers to the unit imaginary number and (x_i, y_j) denotes the Cartesian grid. Since our domain is periodic only in the x-direction, our spectra calculations are averaged in the y-direction, as illustrated in Equation (43). The other statistical measures investigated in our study are the classical kinetic energy spectra and the power density spectra. The kinetic energy spectrum can be calculated using the following definition in wavenumber space [106]:

E(k) = (1/N_y) Σ_j (1/2)(|û(k, y_j)|² + |v̂(k, y_j)|²),

where the velocity coefficients û and v̂ are computed using the fast Fourier transform as in Equation (44). To quantify the scale content of the density field, we use the power spectrum, which reflects the average distribution of density over different scales at any given time in the simulation. This is given by

P(k) = (1/N_y) Σ_j |ρ̂(k, y_j)|²,

where ρ̂ denotes the Fourier coefficients of the density field.
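For illustration, a minimal Python sketch of the y-averaged, density-weighted kinetic energy spectrum and the power density spectrum follows. The FFT normalization convention is a choice, and the function names are ours, not the authors'.

```python
import numpy as np

def density_weighted_spectrum(rho, u, v):
    """Density-weighted kinetic energy spectrum E(k) along the
    periodic x-direction, averaged over y (arrays shaped (nx, ny))."""
    ur = np.sqrt(rho) * u
    vr = np.sqrt(rho) * v
    nx = rho.shape[0]
    uh = np.fft.fft(ur, axis=0) / nx  # Fourier coefficients in x
    vh = np.fft.fft(vr, axis=0) / nx
    ek = 0.5 * (np.abs(uh)**2 + np.abs(vh)**2)
    return ek[:nx // 2].mean(axis=1)  # y-average, one-sided in k

def power_density_spectrum(rho):
    """Power spectrum of the density field, y-averaged."""
    nx = rho.shape[0]
    rh = np.fft.fft(rho, axis=0) / nx
    return (np.abs(rh)**2)[:nx // 2].mean(axis=1)
```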
For the validation of our spectra plots, we follow the well-established theory for two-dimensional RTI systems [14,43,44]. In his seminal paper, Chertkov [43] proposed a phenomenological theory corresponding to the Bolgiano scaling [107], which can be summarized as a k^(−7/5) scaling law for density or temperature and a k^(−11/5) scaling law for velocity. In Figure 6, the spectra plots (on the left) obtained by the ILES and CS+RF schemes show a clear inertial subrange with the k^(−11/5) scaling, with the CS+RF results best aligned with the k^(−11/5) reference line. Moreover, we present the kinetic energy spectra without density weighting in Figure 7, which supports the conclusions drawn from the density-weighted spectra. To validate further, we plot the regular and compensated power density spectra in Figure 8, where it can be seen that the density spectra for the CS+RF scheme follow the k^(−7/5) scaling law.
The time evolution of the spectra shows similar statistical trends for all schemes. Therefore, we focus only on the results at the final time in our subsequent analyses. To compare the dissipation characteristics of the schemes, we place the density-weighted spectra of both ILES schemes in a single plot, and likewise for both CS+RF schemes with different filtering strengths σ, in Figure 9. It can be observed that the Rusanov solver is more dissipative than the Roe solver and that a higher σ value adds more dissipation. These findings are consistent with previous results in the literature. The spectra also follow the reference k^(−11/5) scaling. The density-weighted spectra compensated by k^(11/5) for the CS+RF scheme with different filtering strengths show that all the curves are flat across the inertial subrange. The kinetic energy spectra without density weighting in Figure 10 exhibit a similar trend to the density-weighted ones. On the other hand, the power density spectra in Figure 11 show that the k^(−7/5) scaling law is maintained for both sets of schemes. However, the compensated spectra indicate that the CS+RF scheme is more consistent with the scaling law than the ILES schemes. We present another set of density-weighted spectra at varying grid resolutions to compare the ILES and CS+RF schemes in Figure 12. It is apparent that CS+RF captures more scales in the inertial subrange than the ILES schemes. However, the CS+RF scheme reaches the effective grid cut-off scales earlier than the ILES schemes. This is because the CS+RF solvers apply the filtering once at the end of each time step, whereas the ILES solvers add dissipation implicitly within every flux reconstruction. As a result, the ILES schemes capture a wide range of scales at high wavenumbers even though they resolve comparatively fewer scales in the inertial subrange. Figure 12 also shows that the σ = 1.0 solver is the most dissipative among all solvers considered in this study, and that the σ = 0.4 solver captures more scales in the inertial subrange than the other solvers. The ILES-Roe solver resolves the widest range of high-wavenumber scales at both coarse and high resolution, which explains the appearance of very fine small-scale structures in the density field contours obtained by the ILES-Roe solver.

Figure 6. Time evolution of density-weighted kinetic energy spectra and compensated density-weighted kinetic energy spectra for the RTI problem with multi-mode perturbation obtained using different modeling approaches at a resolution of 16384 × 24576; (a) density-weighted spectra using ILES-Roe solver; (b) compensated density-weighted spectra using ILES-Roe solver; (c) density-weighted spectra using CS+RF (σ = 1.0) solver; (d) compensated density-weighted spectra using CS+RF (σ = 1.0) solver.

Figure 11. Comparison of ILES (ILES-Roe and ILES-Rusanov) models and CS+RF (σ = 1.0 and σ = 0.4) models for the RTI problem with multi-mode perturbation showing the power density spectra and compensated power density spectra at different resolutions; (a) power density spectra using ILES solvers; (b) compensated power density spectra using ILES solvers; (c) power density spectra using CS+RF solvers; (d) compensated power density spectra using CS+RF solvers.
RTI with Single-Mode Perturbation
Numerical simulation of flows with RTI is comparatively challenging because the instability grows from the small scales of the flow field [19,108]. Since analytical modeling is feasible for single-mode RTI, numerical studies of RTI with single-mode perturbations began very early [33,109] and are still pursued extensively to understand and explain the nature of RTI-induced flows [20,31,32,110-113]. For our analyses of the single-mode perturbation case, we first present the time evolution of the density field obtained by the ILES-Roe solver at a high resolution of 8192 × 24576 in Figure 13. Many studies have observed that the tips of the spikes of a single-mode RTI-induced flow maintain symmetry [14]. Yet in our simulation, the line of symmetry within the spike of the single-mode RTI in Figure 13 is broken by t = 4.5. This phenomenon was observed and well explained by Ramaprabhu et al. [31] at late times of RTI flow simulations, which the authors referred to as "chaotic mixing" in the late-time regime. In simulations of the Euler equations, it can also be seen in [96] that less dissipative schemes show this interface break-up, while more dissipative schemes suppress the instability. In our high-resolution simulation, small-scale structures appear at a very early stage and lead to a secondary instability, i.e., Kelvin-Helmholtz (KH) vortex formation, as well as chaotic mixing. For coarse or modest grid resolutions, however, the numerical viscosity suppresses the small-scale structures and preserves the symmetry, as can be seen in Figures 14-17. In Figure 14, we present the state of the density field at t = 2.7 (top row) and t = 4.5 (bottom row) for simulations using the ILES-Rusanov solver at different grid resolutions. The 256 × 768 and 1024 × 3072 resolution results hold the symmetry, but the 4096 × 12288 result shows the development of smaller scales at t = 2.7, which leads to the loss of symmetry at the final time, t = 4.5. Similar conclusions can be made for the ILES-Roe solver results in Figure 15; however, the loss of symmetry can be observed even in the 1024 × 3072 simulation, since the ILES-Roe solver is less dissipative than the ILES-Rusanov solver. For both CS+RF schemes, in Figures 16 and 17, the symmetry holds at lower resolutions and breaks at higher resolution. Since there is no physical viscosity in Euler simulations, we note that the break-up of the interface and the loss of symmetry may be due to the numerics. The loss of symmetry in RTI problems with increasing resolution has also been demonstrated in the literature (e.g., see [55,114,115]), and similar observations arise when higher-order numerical schemes are used. We also refer to [116,117] for illustrations of symmetry breaking and increased mixing in Richtmyer-Meshkov instability problems when solving the Euler equations. Figure 18 presents the density field plots at the final time, t = 4.5, to compare the performance of the ILES schemes. We can observe in Figure 18 that the symmetry is maintained at lower resolutions but starts to break as the resolution increases for both ILES solvers. Looking at the 4096 × 12288 results for both ILES solvers, the ILES-Roe result deviates more from symmetry than the ILES-Rusanov result because of the dissipative behavior of the ILES-Rusanov scheme.
Based on these findings, we can say that our two-dimensional simulation results are consistent with the findings in [31] for the three-dimensional RTI case. Additionally, the dimensionless Atwood number, defined as

A = (ρ_2 − ρ_1)/(ρ_2 + ρ_1),

is A = 1/3 ≈ 0.33 in our case (with ρ_2 = 2 and ρ_1 = 1), which is lower than 0.6 and thus indicates the formation of a reacceleration phase in the flow field due to the secondary KH instabilities. As suggested in the literature [31], these secondary instabilities can be responsible for the change in the usual behavior of the spikes in single-mode RTI flows. These findings are also supported by the work of Liska and Wendroff [96], who showed that less dissipative schemes result in an interface break-up, while the instability may be suppressed by highly dissipative schemes. The same behavior can be seen at the final time in Figure 19, where the higher-resolution results start to break the symmetry of the spike for both CS+RF schemes. However, the density fields obtained by the CS+RF and ILES schemes look different because of the different amounts of dissipation added to the system by the different solvers, which eventually leads to different evolutions of the flow field. To gain a more precise understanding of the simulation results, we next focus on the density-weighted kinetic energy spectra. The time evolution of the kinetic energy spectra in Figure 20 shows that the trends of the spectra are similar at the late stage of the simulation. The density-weighted spectra for the ILES-Roe scheme show an inertial subrange following the k^(−11/5) scaling law, while the kinetic energy spectra for the CS+RF scheme follow the k^(−11/5) scaling in Figure 21. To validate our findings further, we present the power density spectra for both the ILES-Roe and CS+RF (σ = 1.0) schemes in Figure 22; they display a good alignment with the k^(−7/5) reference line. Since the time evolution of the fields for both schemes follows a similar trend, we consider the solutions at the final time for the rest of our analysis. Figure 23 shows that the ILES-Rusanov solver is more dissipative than the ILES-Roe solver, as expected, and that the CS+RF scheme with σ = 1.0 is more dissipative than the σ = 0.4 solver. Interestingly, the density-weighted spectra for the CS+RF scheme tend to deviate from the reference k^(−11/5) line at high wavenumbers; however, the kinetic energy spectra in Figure 24 and the density-weighted spectra in Figure 25 clearly show that the CS+RF spectra follow the reference scaling laws. The ILES spectra also maintain an inertial subrange following the k^(−11/5) and k^(−7/5) laws. Finally, as in the multi-mode RTI case, we find that the CS+RF solver captures more scales in the inertial range than the ILES solvers, as shown in Figure 26, while the ILES solvers resolve more scales in the high-wavenumber region. This explains the different density field evolutions observed for the different solvers and the appearance of smaller scales with the ILES solvers than with the CS+RF solvers. Since the ILES-Roe solver is the least dissipative among all solvers, the density contours show that the ILES-Roe solution deviates most from symmetry.
Figure 20. Time evolution of density-weighted kinetic energy spectra and compensated density-weighted kinetic energy spectra for the RTI problem with single-mode perturbation obtained using different modeling approaches at a resolution of 8192 × 24576; (a) density-weighted spectra using ILES-Roe solver; (b) compensated density-weighted spectra using ILES-Roe solver; (c) density-weighted spectra using CS+RF (σ = 1.0) solver; (d) compensated density-weighted spectra using CS+RF (σ = 1.0) solver.
Figure 21. Time evolution of kinetic energy spectra and compensated kinetic energy spectra for the RTI problem with single-mode perturbation obtained using different modeling approaches at a resolution of 8192 × 24576; (a) kinetic energy spectra using ILES-Roe solver; (b) compensated kinetic energy spectra using ILES-Roe solver; (c) kinetic energy spectra using CS+RF (σ = 1.0) solver; (d) compensated kinetic energy spectra using CS+RF (σ = 1.0) solver.
Summary and Conclusions
In this paper, we have assessed the performance of a relaxation filtering approach using a central scheme (CS+RF) in resolving flows driven by the Rayleigh-Taylor hydrodynamic instability, and compared the simulation results with those obtained by two common ILES-Riemann solver schemes. To assess the performance of the solvers, we use density field contours and several spectra plots. We further analyze the resolution capacity of both the CS+RF and ILES schemes, as well as the flow behavior in high-resolution simulations with both approaches. To validate the observations from the field plots, we use statistical tools, i.e., kinetic energy and power density spectra at both high and coarse resolutions, which show consistency with existing results in the literature. In our investigation, we consider the two-dimensional RTI test problem with two different initial conditions. From the simulation results of both cases, we conclude that the CS+RF schemes capture more scales in the inertial subrange, whereas the ILES schemes resolve a wide range of scales in the high-wavenumber region. The ILES-Rusanov scheme is more dissipative than the ILES-Roe scheme; hence, the ILES-Roe scheme tends to deviate more from symmetry in the spike of the single-mode RTI case. It is also observed that the dissipation of the CS+RF scheme can be controlled by the parameter σ, which affects the perturbation as well as the evolution of the flow field. Furthermore, we observe that the kinetic energy spectra follow the k^(−11/5) scaling law for both the multi-mode and single-mode RTI cases, whereas the power density spectra align closely with the k^(−7/5) line at different resolutions. We also observe chaotic mixing at the late-time stage of the high-resolution single-mode RTI case, caused by the formation of secondary KH instabilities; the higher numerical dissipation at coarser resolutions suppresses these secondary instabilities, which explains the preserved symmetry in our coarse-resolution results. Overall, we believe this study of the relaxation filtering approach with a central scheme is a useful contribution to the numerical study of RTI-induced flows and to understanding the nature of the flow field as the instability evolves.
"Physics"
] |
Endoplasmic reticulum retention and degradation of a mutation in SLC6A1 associated with epilepsy and autism.
Mutations in SLC6A1, encoding γ-aminobutyric acid (GABA) transporter 1 (GAT-1), have recently been associated with a spectrum of epilepsy syndromes, intellectual disability, and autism in the clinic. However, the pathophysiology of these gene mutations is far from clear. Here we report a novel SLC6A1 missense mutation in a patient with epilepsy and autism spectrum disorder and characterize the molecular defects of the mutant GAT-1, from transporter protein trafficking to GABA uptake function, in heterologous cells and neurons. The heterozygous missense mutation (c.1081C>A (p.P361T)) in SLC6A1 was identified by exome sequencing. We thoroughly characterized the molecular pathophysiology underlying the clinical phenotypes, and performed EEG recordings and an autism diagnostic interview. The patient had neurodevelopmental delay, absence epilepsy, generalized epilepsy, and 2.5-3 Hz generalized spike and slow waves on EEG recordings. The impact of the mutation on GAT-1 function and trafficking was evaluated by 3H GABA uptake, structural simulation with machine learning tools, live-cell confocal microscopy, and protein expression in mouse neurons and non-neuronal cells. We demonstrate that the GAT-1(P361T) mutation destabilizes the global protein conformation and reduces total protein expression. The mutant transporter protein was localized intracellularly in the endoplasmic reticulum (ER), with an expression pattern very similar to that of cells treated with tunicamycin, an ER stress inducer. A radioactive 3H-labeled GABA uptake assay indicated that the mutation reduced the function of the mutant GAT-1(P361T) to a level similar to that of cells treated with GAT-1 inhibitors. In summary, this mutation destabilizes the mutant transporter protein, resulting in retention of the mutant protein inside cells and reduction of total transporter expression, likely via excessive endoplasmic reticulum-associated degradation. This likely reduces the number of functional transporters on the cell surface, which could in turn cause the observed reduction in GABA uptake. Consequently, malfunctioning GABA signaling may cause altered neurodevelopment and neurotransmission, such as enhanced tonic inhibition and altered cell proliferation in vivo. The pathophysiology due to severely impaired GAT-1 function may give rise to a wide spectrum of neurodevelopmental phenotypes, including autism and epilepsy.
Introduction
Autism or autism spectrum disorder (ASD) is a common childhood-onset neurodevelopmental condition with a strong genetic basis. The genetic architecture of ASD consists of rare de novo or inherited variants in hundreds of genes and common polygenic risks at thousands of loci. Genetic advances indicate ASD susceptibility genes are enriched for roles in early brain development and in cortical cell types [17], as well as in synaptic formation and function [13]. Importantly, ASD has a high comorbidity with epilepsy, suggesting common genetic and molecular susceptibility underlying both epilepsy and ASD [22]. This comorbidity also suggests findings from epilepsy may provide unique insights into understanding ASD.
The GABAergic pathway is likely a converging pathway for many gene mutations associated with ASD. This concept is rooted in the fact that multiple epilepsy syndromes are comorbid with ASD or autistic features [13,15]. SLC6A1, encoding γ-aminobutyric acid (GABA) transporter 1 (GAT-1), is one such gene commonly associated with epilepsy and ASD. This is not surprising, because GAT-1 is one of the major GABA transporters in the brain and a key component of GABA signaling. Impaired GAT-1 function may result in altered GABA levels and the excitation-inhibition imbalance that is a hallmark of autism [20,33]. GABA is a neurotrophic signal that is critical for early brain development, including regulation of neural stem cell proliferation [3,4]. It is plausible that impaired GABA signaling due to mutations in GABA A receptor genes or GAT-1 can affect fundamental properties of progenitor cells such as proliferation and differentiation. In epilepsy, impaired GABAergic signaling is a converging pathway of pathophysiology for epilepsy genes, including both ion channel and non-ion channel genes [21]. GAT-1 is a major subtype of the sodium- and chloride-dependent GABA transporters and is localized in GABAergic axons and nerve terminals. Unlike GABA A receptors, which directly conduct postsynaptic GABAergic currents, GAT-1 influences GABAergic synaptic transmission through clearance and re-uptake of GABA from the synapse [14].
Since the first report of SLC6A1 mutations in myoclonic atonic epilepsy (MAE), several studies have identified a number of mutations in SLC6A1 associated with two prominent features: intellectual disability (ID) and a wide spectrum of epilepsy [9,19]. A recent study also reported an SLC6A1 mutation causing a milder phenotype, characterized by a learning disorder without ID, nonspecific dysmorphisms, and an electroencephalogram (EEG) picture closely resembling that of myoclonic-atonic epilepsy, with brief absence seizures later on [38]. We previously reported SLC6A1(G234S) associated with Lennox-Gastaut syndrome (LGS) [8]. Because LGS is often associated with mutations in GABRB3, it is intriguing to find SLC6A1 also associated with LGS. Overlapping clinical and molecular phenotypes of mutations in SLC6A1 and GABRB3 are further suggested by our previous finding that a signal peptide variation in GABRB3 is associated with ASD with maternal transmission in multiple Caucasian families [13]. However, this area merits further elucidation.
In this study, we evaluated the impact of a novel mutation (P361T) associated with epilepsy and ASD by characterizing mutant protein trafficking and function in different cell types, including mouse neurons. Additionally, we thoroughly evaluated the patient's disease history, seizure phenotype, EEG, and ASD phenotype. We compared the wildtype and mutant transporters using protein structure modeling via machine learning-based prediction, a radioactive 3H GABA uptake assay, and protein expression and subcellular localization via confocal microscopy, in both heterologous cells and mouse cortical neurons. This study provides molecular mechanisms for how a defective GAT-1 can cause ASD in addition to epilepsy and expands our knowledge of the pathophysiology underlying the comorbidity of ASD and epilepsy.
Patient with autism and epilepsy
The patient and her unaffected family members were first recruited at the Epilepsy Center and then evaluated in the clinical psychology clinic of the Second Affiliated Hospital of Guangzhou Medical University. The collected clinical data included age of onset, a detailed developmental history, autistic behaviors, seizure types and frequency, response to antiepileptic drugs (AEDs), family history, and general and neurological examination results. Brain magnetic resonance imaging (MRI) scans were performed to exclude brain structural abnormalities. Video electroencephalography (EEG) was recorded repeatedly, and the results were reviewed by two qualified electroencephalographers.
Autistic features were assessed and diagnosed by psychologists using the Autism Diagnostic Interview-Revised (ADI-R) [51] and the Autism Diagnostic Observation Schedule-Generic (ADOS-G) [30]. Individuals with ADI-R and ADOS scores above the corresponding ASD cut-offs are considered to have ASD. To assess different aspects of the patient's behaviors, developmental skills, and neuropsychological development, the third edition of the Chinese Psychoeducational Profile (CPEP-3) (a modified version of the Psychoeducational Profile-Revised (PEP-3)) [48,49] and the Gesell Developmental Schedule were administered by the same psychologists. ASD was diagnosed according to the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) and the tenth edition of the International Classification of Diseases (ICD-10). A diagnosis of ASD is made when a patient meets the DSM-5 and ICD-10 criteria for deficits in all three areas: communication, social interaction, and repetitive behaviors. Epileptic seizures and epilepsy syndromes were diagnosed and classified according to the criteria of the Commission on Classification and Terminology of the International League Against Epilepsy (1989, 2001, and 2010).
This study was approved by the ethics committee of the Second Affiliated Hospital of Guangzhou Medical University, and written informed consent was obtained from the parents.
Genetic data analysis
Blood samples from the patient, her parents, and her brother were collected. Genomic DNA was extracted from peripheral blood using the Qiagen FlexiGene DNA Kit (Qiagen, Germany). The SureSelect Human All Exon 50 Mb kit (Agilent Technologies, Santa Clara, CA) was used to capture the exonic regions of the genome. The DNA samples were sequenced on an Illumina HiSeq 2000 system with 90-base-pair reads, and the massively parallel sequencing achieved an average depth of more than 125× and more than 98% coverage of the target region.
The cDNAs coding for GABA transporter 1

The plasmid cDNA encoding enhanced yellow fluorescent protein (EYFP)-tagged rat GAT-1 was subcloned into the expression vector pcDNA3.1(+). The patient's GAT-1 mutation was replicated via a standard molecular cloning process. The QuikChange site-directed mutagenesis kit was used to introduce the GAT-1(P361T) mutation into the wildtype GAT-1 cDNA. The product was then amplified by polymerase chain reaction, transformed into DH5α competent cells, and plated. A clone was chosen and grown overnight to replicate the cDNA. The GAT-1(P361T) mutation was confirmed by DNA sequencing. Both the wildtype and the mutant cDNAs were prepared with the Qiagen Maxiprep kit.
Polyethylenimine (PEI) transfection
Standard transfection protocols were performed using human embryonic kidney 293T (HEK293T) cells [8]. Twenty-four hours before transfection, HEK293T cells were split equally into plates. For transfection, 1 μg of cDNA was combined with Dulbecco's modified Eagle medium (DMEM) and a PEI/DMEM mixture. Transfected HEK293T cells were incubated for 48 h. After incubation, proteins were harvested as described below.
Western blot analysis of total GAT-1 protein

Briefly, HEK293T cells were seeded in 60-mm dishes 1 day before transfection to avoid cell detachment. Live, transfected cells were washed with phosphate-buffered saline (1× PBS, pH 7.4) three times, and the cells were then lysed in RIPA buffer (20 mM Tris, 20 mM EGTA, 1 mM DTT, 1 mM benzamidine), supplemented with 0.01 mM PMSF, 0.005 μg/mL leupeptin, and 0.005 μg/mL pepstatin, for 30 min at 4°C. The samples were then subjected to protein concentration determination, followed by SDS-PAGE. Membranes were incubated with primary rabbit polyclonal antibodies against GAT-1 (Alomone Labs, AGT-001, or Synaptic Systems, 274102, at 1:200 dilution).
Neuronal cultures and transfection in neurons
Mouse cortical neuronal cultures and transfection were prepared as previously described [24,26]. Mouse neurons were cultured from postnatal day 0 mouse pups. For western blot, neurons were plated at a density of 2 × 10⁵ in plating media containing 420 mL DMEM, 40 mL F12, 40 mL fetal bovine serum, 1 mL penicillin and streptomycin, and 0.2 mL L-glutamine (200 mM) for 4 h. Neurons were then maintained in Neurobasal media containing B27 supplement (50:1), L-glutamine (200 mM), and 1 mL penicillin and streptomycin. Neurons were transfected with 15 μg cDNA at day 5-7 in culture using calcium phosphate and were harvested 8-10 days after transfection. Four 100-mm dishes of neurons were transfected with either the wildtype or the mutant GAT-1 YFP cDNA in each experiment to ensure sufficient protein for immunoblotting, owing to the low transfection efficiency in neurons.
Radioactive 3H-labeled GABA uptake assay
The radioactive 3H-labeled GABA uptake assay in HEK293T and HeLa cells was modified from previous studies [8,28]. Briefly, cells were cultured in 5 mm² dishes 3 days before the GABA uptake experiment in DMEM with 10% fetal bovine serum and 1% penicillin/streptomycin. The cells were then transfected with equal amounts (1 μg) of the wildtype or mutant GAT-1(P361T) cDNA for each condition at 24 h after plating. The GABA uptake assay was carried out 48 h after transfection. The cells were incubated with preincubation solution for 15 min and then with preincubation solution containing 1 μCi/ml 3H GABA and 10 μM unlabeled GABA for 30 min at room temperature. After washing, the cells were lysed with 0.25 N NaOH for 1 h. Glacial acetic acid was added, and the lysates were then counted on a liquid scintillation counter with QuantaSmart. The flux of GABA (pmol/μg/min) was averaged over at least triplicates for each condition in each transfection, and the average count was taken as n = 1. The untransfected condition was taken as a baseline and subtracted from both the wildtype and mutant conditions. The pmol/μg/min value for the mutant was then normalized to the wildtype from each experiment, which was arbitrarily taken as 100%.
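As a sketch of the normalization pipeline described above (not the authors' actual analysis code), the conversion from scintillation counts to flux and the subsequent baseline subtraction and wildtype normalization might look as follows in Python; all argument names are hypothetical placeholders.

```python
def gaba_flux(cpm, standard_cpm, pmol_standard, protein_ug, minutes):
    """Convert counts per minute to GABA flux (pmol/ug/min) by
    normalizing to the standard CPM, protein amount, and flux time."""
    return (cpm / standard_cpm) * pmol_standard / (protein_ug * minutes)

def percent_of_wildtype(flux_sample, flux_untransfected, flux_wt):
    """Subtract the untransfected baseline and normalize to the
    wildtype flux, which is taken as 100%."""
    return 100.0 * (flux_sample - flux_untransfected) \
                 / (flux_wt - flux_untransfected)
```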
Live cell confocal microscopy and image acquisition
Live-cell confocal microscopy was performed using an inverted Zeiss laser scanning microscope (model LSM 510) with a 63×, 1.4 NA oil immersion objective, 2-2.5× zoom, and multi-track excitation. HEK293T cells were plated on poly-D-lysine-coated, glass-bottom imaging dishes at a density of 1-2 × 10⁵ cells and cotransfected with 2 μg of the wildtype or mutant GAT-1 plasmid and 1 μg pECFP-ER with PEI, based on our standard lab protocol. Cells were examined with excitation at 458 nm for ECFP and 514 nm for EYFP. All images were single confocal sections averaged over 8 scans to reduce noise, except when otherwise specified.
Protein structural modeling and machine learning tools
We simulated the impact of the mutation on the transporter protein with multiple machine learning tools. Tertiary structures of both the wildtype and the P361T-mutated GAT-1 protein were predicted by I-TASSER [52] and analyzed by MAESTROweb [29]. Details of the structural differences between the wildtype and mutant GAT-1 were illustrated on the modelled structure using DynaMut [39]. Analysis of self-aggregation and co-aggregation was conducted using PASTA 2.0 [43].
Data analysis
Numerical data are expressed as mean ± SEM. Proteins were quantified by Odyssey software, and data were normalized to loading controls and then to the wildtype transporter protein, which was arbitrarily taken as 1 in each experiment. The radioactivity of GABA uptake was measured in a liquid scintillation counter with QuantaSmart. The flux of GABA (pmol/μg/min) in the wildtype GAT-1 samples was arbitrarily taken as 100% in each experiment. The fluorescence intensities from confocal microscopy experiments were determined using MetaMorph imaging software, and the measurements were carried out in ImageJ as modified from previous descriptions [23,27,46]. For statistical significance, we used one-way analysis of variance (ANOVA) with the Newman-Keuls test or Student's unpaired t-test. In some cases, a one-sample t-test was performed (GraphPad Prism, La Jolla, CA). Statistical significance was taken as p < 0.05.
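For illustration, the one-sample test against the wildtype reference described above might be run as follows; the ratio values in this sketch are invented placeholders, not the paper's data.

```python
import numpy as np
from scipy import stats

# Hypothetical normalized expression ratios (mutant / wildtype),
# one value per independent experiment; NOT the paper's data.
ratios = np.array([0.25, 0.20, 0.21])

# One-sample t-test against the wildtype reference value of 1.
t, p = stats.ttest_1samp(ratios, popmean=1.0)
mean, sem = ratios.mean(), stats.sem(ratios)
print(f"mean = {mean:.2f} +/- {sem:.2f} (SEM), p = {p:.3g}")
```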
Mutation analysis identified P361T variation in SLC6A1 and the residue is conserved across species
Multiple mutations have been identified in the GAT-1 protein (Fig. 1a) [10,19,32]. These mutations are scattered throughout the transporter peptide. Protein sequence alignment indicates that P361T in GAT-1 occurs at a conserved residue located in the extracellular loop between the 7th and 8th transmembrane helices. The mutation was identified by whole exome sequencing in the proband but not in her unaffected parents and brother. Initially, all rare and potentially damaging variants were obtained through population- and functional impact-based filtration. Next, de novo variants and homozygous and compound heterozygous genotypes in the proband were screened. Finally, based on the clinical concordance between the previously reported phenotypes of the mutated genes and the phenotypic characteristics of the patient, a novel de novo heterozygous missense variation (RefSeq accession number NM_001348250: c.1081C>A/p.Pro361Thr) was identified in SLC6A1 (encoding GAT-1). The proband was confirmed to harbor the variant by Sanger resequencing, while her unaffected family members were not (Fig. 1b and c). Furthermore, this variation was absent from the general population in the 1000 Genomes Project, ExAC, ExAC-EAS, and GnomAD, and from our in-house panel of 296 normal controls. It was predicted to be "damaging" by SIFT (score = 0.0), "probably damaging" by PolyPhen-2 (score = 1.0), and "disease causing" by MutationTaster (score = 1). The pathogenicity of the novel SLC6A1 variant was assessed as likely pathogenic by American College of Medical Genetics and Genomics (ACMG) scoring. This SLC6A1 variant was the only de novo variant detected by whole exome sequencing. Additionally, four compound heterozygous variants in the BTN1A1, FLNC, C2CD3, and RYR1 genes were detected in the proband, but the clinical phenotypes of these four genes were not concordant with the clinical features of the proband, suggesting that none of these recessive variants were disease-causing. No homozygous variants were detected in the proband. Alignment of the encoded GAT-1 sequence shows that the proline residue is conserved across species (Fig. 1d).
Protein structural modeling suggests that the P361T mutation in the GAT-1 protein destabilizes the transporter conformation

We then predicted the impact of the mutation on transporter protein stability with several machine learning tools. Homology modelling of the P361T mutation in the GAT-1 protein, as shown in Fig. 2, was conducted using I-TASSER [52] with the homology template PDB ID 4M48. Residue P361T is colored red, where proline is mutated to threonine, which may trigger several conformational changes in GAT-1. Located at the extracellular loop between transmembrane domains, residue 361 is at the turn of two helices exposed on the surface of the protein's tertiary structure. Similar to the glycine-to-serine change we previously reported at residue 234 (G234S), associated with LGS [8], the additional hydroxyl group in threonine increases the side-chain polarity compared with the nonpolar pyrrolidine side chain of the wildtype proline. These polarity changes disturb the equilibrium of the transmembrane protein conformation, resulting in protein structure destabilization. Another consequence of the mutation is the breakage of hydrogen bonds between residue 361 and its neighboring residues 364 and 365 in the helix (red dashes in Fig. 2a and b). This destabilization hypothesis is also supported by predictions of the ΔΔG of the mutation using the machine learning-based protein stability prediction methods SDM [35], mCSM, DUET [36], INPS [2,40], DynaMut [39], and MAESTROweb [29]. As indicated in Fig. 2c and Supplementary Table 1 [29,36,37,39,40], nearly all the tools (six out of seven) predicted that the P361T mutation destabilizes the GAT-1 protein (Supplementary Table 1). Details of the structural differences between the wildtype proline and the mutant threonine were modelled by DynaMut interatomic interaction predictions. In addition, PASTA 2.0 [43] did not suggest any protein self-aggregation or co-aggregation from the perspective of energy changes.

Figure 1. a Schematic representation of GAT-1 protein topology and locations of GAT-1 variants previously identified in patients associated with a spectrum of epilepsy syndromes. GAT-1 is predicted to contain 12 transmembrane domains; P361 is located at the extracellular loop between the 7th and 8th transmembrane helices, and the positions of the variants are based on the published LeuT crystal structure. b Pedigree and genotype; the missense mutation was found only in the proband and not in the rest of the family members. c Chromatogram of PCR-Sanger sequencing of the proband and immediate family members; the arrow indicates the C-to-A transversion. d Amino acid sequence homology showing that proline (P) at residue 361 is highly conserved in SLC6A1 in humans (accession no. NP_003033.3) and across species (boxed region).
Clinical phenotypes of autism and epilepsy
The proband, a 6-year-old girl, was diagnosed with autism and epilepsy at 3.5 years of age. She was born at term by natural delivery to nonconsanguineous healthy parents. There was no history of ASD, epilepsy, developmental disorders, or other neurological disorders in her family members or other relatives. The patient had developmental delay in gross and fine motor skills and speech at 6 months. Two months later, she showed repetitive patterns of behavior, such as playing with hair, wringing her hands, tapping the desk, and grinding her teeth. Diminished social interactions occurred, such as poor eye contact and no attempt to interact with any member of the family, even her mother; she engaged only in solitary play. At 41.8 months of age, she was first evaluated in a psychology clinic with the ADI-R and ADOS, receiving scores in each domain much higher than the corresponding ASD cut-off values (Tables 1, 2 and 3), and was consequently diagnosed with ASD. The assessments with CPEP-3 and the Gesell schedule indicated regression in behaviors, developmental skills, and neuropsychological development (Tables 4 and 5). Additionally, the patient had her first seizure at 2 years of age without obvious predisposing factors. It was a series of repetitive absence seizure attacks with transient loss of consciousness for 3-5 s, accompanied by an atonic seizure with head drooping to one side and occasionally to the ground. Subsequently, similar seizures occurred more than 10 times per day. Her brain MRI was normal. Interictal EEG recordings showed 2.5-3.0 Hz generalized spike and slow waves (Fig. 3a), spike and slow waves in the bilateral prefrontal lobes, and slow waves (2.0-3.0 Hz) predominantly in the bilateral occipital area during both wakefulness and sleep (Fig. 3a-c). A diagnosis of generalized epilepsy was considered. The patient had not received AED therapy before 3.5 years of age. She was initially treated with valproate (VPA) at a dose of 20 mg·kg⁻¹·d⁻¹; the seizure frequency was significantly reduced, but the odontoprisis (teeth grinding) was still observed. She was then treated with levetiracetam (LEV) at a dose of 28.57 mg·kg⁻¹·d⁻¹. The seizures and odontoprisis disappeared, but she became irritable with frequent screaming. Finally, lamotrigine (LTG) was substituted for LEV at a dose of 4.16 mg·kg⁻¹·d⁻¹, and her condition became more stable than before. A recent EEG recording demonstrated that the generalized epileptic discharges had disappeared, but the focal EEG abnormalities did not show significant improvement (Fig. 3d, e).
GAT-1(P361T) had reduced total protein in both non-neuronal cells and mouse cortical neurons
Altered protein stability and enhanced protein degradation are common consequences of mutations in various genes, as has been demonstrated for multiple GABA A receptor mutations across multiple subunits [26]. We first determined the total expression of the mutant GAT-1(P361T) by transfecting mouse cortical neurons with YFP-tagged wildtype or mutant GAT-1 cDNAs (Fig. 4a) for 8 days. We also transfected the YFP-tagged wildtype or mutant GAT-1 cDNAs into HeLa cells for 48 h. In both neurons and HeLa cells, the wildtype GAT-1 YFP and the mutant GAT-1(P361T) YFP mainly migrated at 108 kDa, the size predicted for YFP-tagged GAT-1 and consistent with previous findings [5,7]. When immunoblotted with anti-GAT-1 antibody, a strong band was detected at 67 kDa. This is the endogenous GAT-1, which was not changed in neurons transfected with either the wildtype or the mutant GAT-1 cDNAs (data not shown), suggesting that there is no dominant negative effect of the mutant GAT-1(P361T) in neurons (Fig. 4). Compared with the wildtype, GAT-1(P361T) had reduced total protein expression in mouse cortical neurons (wt = 1, P361T = 0.22 ± 0.043) (Fig. 4a) and in HeLa cells (wt = 1, P361T = 0.41 ± 0.062) (Fig. 4b, c). This suggests a similar reduction of the total mutant GAT-1 protein level in neurons and non-neuronal cells.
GAT-1(P361T) mutant protein was retained inside the endoplasmic reticulum
We have previously shown that mutant GABA A receptor subunits are prone to retention inside the endoplasmic reticulum (ER) due to misfolding and glycosylation arrest [25,26]. Such ER retention-prone mutant proteins can show either a higher or a lower total protein level than their wildtype counterparts [25,26]. To evaluate the subcellular localization of GAT-1(P361T), we determined the intracellular localization of the mutant protein by coexpressing GAT-1 YFP or GAT-1(P361T) YFP with an ER marker, ER CFP [25]. Compared with the wildtype, the mutant GAT-1(P361T) had a stronger intracellular presence, colocalizing with the ER marker (Fig. 5a). Its expression pattern was very similar to that of the wildtype GAT-1 protein treated with the ER stress inducer tunicamycin (10 μg/ml for 16 h). The percentage of the GAT-1 fluorescence signal overlapping with the ER marker was higher for the mutant GAT-1(P361T) than for the wildtype (66.48 ± 2.23 vs 30.90 ± 3.26) (Fig. 5b) and was similar to that of the wildtype treated with tunicamycin (66.48 ± 2.23 vs 82.24 ± 5.43) (Fig. 5b). These data indicate that the mutant protein was more likely to be retained inside the ER, despite the reduced total amount of the mutant GAT-1(P361T) protein.

The reduced total expression of the mutant GAT-1(P361T) could consequently impair the overall function of GAT-1 owing to the reduced number of functional transporters. We therefore determined the function of the wildtype and the mutant GAT-1(P361T) in HEK cells by 3H GABA uptake assay. The flux was conducted in a preincubation solution containing 1 μCi/ml 3H GABA and 10 μM cold GABA at room temperature for 15 min. The counts per minute (CPM) were converted to pmol/μg/min by normalizing to the standard CPM, the protein concentration, and the flux time. The measurements for the mutant transporter were then normalized to the wildtype, which was taken as 100%. Compared with the wildtype, GAT-1(P361T) had reduced 3H GABA uptake in both HEK293T cells (wt = 100% vs 16.83 ± 4.1%) (Fig. 6a) and HeLa cells (wt = 100% vs 28.0 ± 5.58%) (Fig. 6b). The GAT-1(P361T) transport activity was similar to that of the wildtype GAT-1 treated with the GAT-1 inhibitors Cl-966 (100 μM) or NNC-711 (70 μM) for 30 min in both HEK293T and HeLa cells. This indicates that the P361T mutation reduced the transporter activity to a level similar to that of cells treated with the GAT-1 inhibitors Cl-966 or NNC-711.
Discussion
Mutations in SLC6A1 are associated with a wide spectrum of clinical phenotypes including autism and epilepsy

It has been previously reported that MAE and ID are the two prominent phenotypes of SLC6A1 mutations [9,19]. More recently, studies of the clinical manifestations associated with SLC6A1 variants indicate that these variants can give rise to a wide spectrum of epilepsy syndromes, ranging from focal to generalized epilepsy, as well as learning disorders and intellectual disability with or without epilepsy [2]. Our study supports the hypothesis that mutations in SLC6A1 can give rise to a wide spectrum of epilepsy phenotypes. We previously reported an SLC6A1 mutation associated with LGS. Here we are the first to report an SLC6A1 missense mutation causing ASD plus epilepsy, expanding the phenotype spectrum associated with SLC6A1 mutations and further supporting our previous hypothesis [19] that mutations in SLC6A1 are associated with a wide spectrum of phenotypes. However, the mechanisms underlying this phenotypic heterogeneity merit further elucidation.
SLC6A1 mutation-mediated phenotypes suggest a role for GAT-1 in early brain development

It has been reported that head circumference is increased in autistic toddlers [11,12]. The genetic risk factors for autism range from rare point mutations in genes encoding numerous synaptic proteins (such as contactin-associated protein-like 2, CNTNAP2; SH3 and multiple ankyrin repeat domains 3, SHANK3; and neuroligin 3, NLGN3), to gains or losses of DNA segments, termed copy number variations (for example, 16p11.2 and 15q11-q13), and to gross chromosomal rearrangements, which are estimated to occur in about 7% of autism cases [1,34]. ASD genes as a group are preferentially expressed in the late mid-fetal prefrontal cortex, with concentrated expression in layer V/VI cortical projection neurons [47]. Collectively, studies of both rare mutations and common variants highlight the relevance of early fetal brain development to the pathophysiology of ASD. Although the developmental profile of GAT-1 is unclear, it likely plays an important role in early brain development by affecting GABA signaling.
Impaired GABAergic signaling, a converging pathway in autism and epilepsy

GABA is a critical neurotrophic signal in early brain development [44,45] that modulates neuronal arbor elaboration and differentiation. In chick cortical and retinal cells, treatment with GABA increased the length and branching of neurites and augmented the density of synapses. In mammalian neurons, GABA A receptor antagonists reduced the dendritic outgrowth of cultured rat hippocampal neurons. In subsequent studies, the trophic effects of GABA have been reproduced by agents acting on GABA synthesis, receptor activation or blockade, intracellular Cl− homeostasis, or L-type Ca2+ channels. Similarly, conversion of GABA-induced excitation/depolarization into inhibition/hyperpolarization in newborn neurons leads to significant defects in their synapse formation and dendritic development in vivo [16]. It has been demonstrated that GABA A receptor activation impacts neurite growth in various systems [6,31,42], validating the critical role of GABA signaling in brain development; however, the expression profile of GAT-1 in the early brain and how impaired GAT-1 function affects early progenitor cells remain unknown.
Mutations in SLC6A1 cause clinical phenotypes similar to mutations in GABRB3, suggesting overlapping pathophysiology underlying mutations in SLC6A1 and GABRB3

Both GABAA receptors and GAT-1 are key components of the GABAergic signaling pathway. It is not surprising that mutations in genes encoding GABAA receptor subunits and GAT-1 are associated with the same clinical epilepsy phenotype. It is plausible that GABAA receptors and GABA transporters like GAT-1 work in concert to ensure an appropriate level of GABAergic neurotransmission as well as proper neurotrophic signaling during the progenitor cell stage. It has been demonstrated that GABRB3 affects cell proliferation and differentiation at the stem cell stage [3]. It is possible that mutations affecting either GABAA receptors or GABA transporters such as GAT-1 can impair GABAergic signaling and give rise to a similar clinical presentation. However, further study is merited to elucidate the similarities and differences of mutations in both genes, from functional evaluations to clinical phenotypes, especially regarding the impact on early progenitor cell differentiation, spatial localization, and neuronal maturation.
Mutant GAT-1(P361T) protein had reduced protein stability and reduced total protein expression

Our protein structure simulations, performed with machine learning-based prediction and other modeling tools, indicate that the P361T substitution results in the breakage of hydrogen bonds between residue 361 and its neighboring residues 364 and 365 in the helix. Collectively, the simulation data indicate that the mutation destabilizes the protein conformation. Our biochemical assays demonstrated that the GAT-1(P361T) mutation reduced the total protein expression in both heterologous cells and neurons, further supporting the hypothesis of reduced GAT-1 protein stability. Based on the wildtype and mutant protein expression patterns, it is likely that the mutation only caused a partial loss of function without clear dominant negative effects (data not shown).
Mutant GAT-1(P361T) transporter was mislocalized with increased ER retention
We previously demonstrated that mutant GABAA receptor subunits were retained inside the ER and were removed from the cells by ER-associated degradation, and that this is a major pathogenic mechanism for GABAA receptor subunit gene mutations. Because GAT-1 is a transmembrane protein, it is likely that at least some mutations in GAT-1 cause protein instability and impair trafficking. We evaluated the subcellular colocalization of GAT-1(P361T) with an ER marker, and also evaluated the colocalization of wildtype GAT-1 with the ER marker after treatment with the ER stress inducer tunicamycin (10 μg/ml). The GAT-1(P361T) expression profile was highly colocalized with the ER marker. The findings were similar to the expression pattern of the wildtype GAT-1 treated with tunicamycin or with brefeldin A, which blocks protein transport from the ER to the Golgi apparatus (data not shown). Our data indicate that the mutant GAT-1(P361T) transporter is subject to intracellular protein processing similar to that of many mutant GABAA receptor subunits, owing to a conserved protein quality control machinery inside cells [25,26]. The steady-state level of an ER-retained mutant protein can be higher or lower than that of its wildtype counterpart, depending on the intrinsic properties of the mutant protein that affect its degradation rate [24]. GAT-1(P361T) had reduced total protein expression in both neurons and non-neuronal cells, indicating reduced protein stability and enhanced disposal of the mutant protein, and most of the synthesized mutant transporters resided inside the ER. This finding is novel for SLC6A1 mutations but is consistent with our previous studies on multiple GABAA receptor subunit mutations associated with genetic epilepsy syndromes [25,27].
GAT-1(P361T) compromises the function of the transporter on GABA uptake

The GABA uptake assay is a gold standard for evaluating the function of GABA transporters. Our data indicate that GAT-1(P361T) substantially reduced GABA reuptake, down to the level of cells expressing wildtype GAT-1 treated with the GAT-1 inhibitors Cl-966 and NNC-711. Biochemical studies and confocal microscopy analysis indicate that the total amount of mutant protein was substantially reduced and that the remaining protein was likely retained inside the ER; consequently, there would be far fewer transporters at the cell surface and synapses to carry out GABA transport. This indicates that GAT-1(P361T) is a loss-of-function mutation, which could explain the associated disease phenotype in the patient carrying the mutation.
The implications of the impact of mutant GAT-1 on early brain development and neurodevelopmental disorders

It is likely that mutations in GAT-1 cause dysregulated cell proliferation and differentiation at the early progenitor cell stage, very similar to the impact of GABAA receptor mutations. Future studies with human patient-derived pluripotent stem cells and animal models have the potential to elucidate the pathological basis of these effects. Additionally, GAT-1 is a major target for seizure treatment. Tiagabine (TGB) is an inhibitor of GAT-1 and is widely used in focal epilepsy. How can loss-of-function mutations in GAT-1 cause epilepsy while inhibiting GAT-1 function paradoxically treats epilepsy? How does the malfunctioning GAT-1 affect tonic and phasic GABA-evoked currents? How will seizure suppression with GAT-1 inhibition affect cognition and neurodevelopment? Studies of GAT-1 knockout mice indicate increased tonic current but decreased amplitude of spontaneous miniature inhibitory postsynaptic currents (mIPSCs) [18,50]. Because the mutant GAT-1(P361T) resulted in reduced GABA uptake, this would likely lead to a higher ambient GABA concentration and enhanced tonic inhibition. Future work in mutation knockin mouse models, with a focus on early brain development, phasic versus tonic inhibition, tailored seizure treatment, and the correlation of seizure treatment with improvement of comorbidities such as autism and cognitive impairment, will be of particular interest; such models would also provide critical insights into changes in neurotransmission such as altered tonic and phasic inhibition. In summary, this study has characterized the clinical features of both the epilepsy and ASD phenotypes of the SLC6A1(P361T) mutation and identified the molecular defects with a multidisciplinary approach including 3H-GABA uptake assay and confocal microscopy. The study indicates that the mutation reduces GAT-1 total expression and GABA uptake, likely due to altered GAT-1 protein stability leading to enhanced GAT-1 protein degradation. Consequently, deficient GAT-1 function may alter neurodevelopment and neurotransmission, manifesting as ASD and epilepsy.
Supplementary information
Supplementary information accompanies this paper at https://doi.org/10.1186/s13041-020-00612-6. GABA flux was measured after 30 min of transport at room temperature. The influx of GABA, expressed in pmol/μg protein/min, was averaged from duplicates for each condition and for each transfection. The average of the duplicates was taken as n = 1. The untransfected condition was taken as the baseline flux, which was subtracted from both the wildtype and the mutant conditions. The pmol/μg protein/min in the mutant was then normalized to the wildtype from each experiment, which was arbitrarily set as 100%. (**p < 0.01 vs. wt, n = 4-5 different transfections) | 8,093.4 | 2020-05-12T00:00:00.000 | [
"Medicine",
"Biology"
] |
Application of Electron Paramagnetic Resonance Spectroscopy to Examine Free Radicals in Melanin Polymers and the Human Melanoma Malignum Cells
Studies of free radicals in melanin and in human melanoma malignum cells by X-band (9.3 GHz) electron paramagnetic resonance (EPR) spectroscopy are presented. The original results were compared with those published earlier. The aim of this work was the application of advanced spectral analysis to determine the properties of free radicals in melanin biopolymers obtained from different melanotic tumor cells and of free radicals existing in human melanoma cells. Magnetic spin-lattice interactions in the melanin samples were tested. The evolution of the lineshape of the tumor cell spectra with increasing microwave power was determined to confirm their complex free radical system, and useful shape parameters were proposed. The lineshape of the spectra of the melanotic tumor cells was analyzed. EPR spectra of free radicals in melanin isolated from different tumor cells, measured over a wide range of microwave power, were analyzed. The melanins were obtained from control tumor cells and from cells cultured with several antitumor substances. The usefulness of electron paramagnetic resonance spectroscopy was confirmed.
The aim of this work was the application of advanced spectral analysis to determine the properties of free radicals in melanin biopolymers obtained from melanotic tumor cells and of free radicals existing in human melanoma cells. Free radicals in the original melanin samples and in samples treated with several antitumor substances were studied. A physical method of free radical detection based on the paramagnetic character of melanins was used. The EPR spectra of the tested natural melanins were compared with those of model synthetic melanin polymers.
An innovative lineshape analysis and an examination of the influence of microwave power on the complex EPR spectra were performed. The results may be useful in the medical therapy of melanotic tumors. Both our published quantitative results [58-60] and original unpublished spectral results are presented.

[Figure: The EPR spectra of eumelanin (a) and pheomelanin (b). The measurements were done at low microwave power with an attenuation of 7 dB (microwave power of 14 mW). The melanin samples were studied in paper [40].]

The novelty of the present work, relative to our earlier papers [58-60], is the proposition of spectral parameters to examine the multicomponent EPR spectra as a sum of lines resulting from different types of free radicals existing in the melanotic A-2058 cells. The changes of these parameters with increasing microwave power for the EPR spectra of the control cells and of the cells cultured with valproic acid (VPA), 5,7-dimethoxycoumarin (DMC), and both valproic acid and 5,7-dimethoxycoumarin are presented.
The tested antitumor substances
The influence of the following substances on human melanoma malignum cells was examined: valproic acid (VPA, C8H16O2), 5,7-dimethoxycoumarin (DMC), and both VPA and DMC together. Chemical structures of the tested substances are shown in Figure 3 [61]. VPA and DMC were used as potential antitumor substances [61].
The tested human melanoma malignum cells
Three human malignant melanoma cell lines, A-2058, A-375, and G-361, were used in this study. The cells were also cultured with the antitumor substances: valproic acid (VPA), 5,7-dimethoxycoumarin (DMC), and both VPA and DMC. In our EPR studies, the measurements were performed on the same number of cells. The A-2058, A-375, and G-361 cells were obtained from LGC Promochem (Łomianki, Poland). A-2058 and A-375 cells were grown in Minimum Essential Medium Eagle (Sigma-Aldrich). G-361 cells were grown in McCoy's medium (Sigma-Aldrich). These media were supplemented with the following components: 10% fetal bovine serum (FBS, PAA), 100 U/ml penicillin (Sigma-Aldrich), 100 μg/ml streptomycin (Sigma-Aldrich), and 10 mM HEPES (Sigma-Aldrich). The cells were incubated at 37°C under 5% CO2. The incubation details were described in [58,60].
The human malignant melanoma cell lines were incubated with 1 mM VPA, 10 μM DMC, or their combination for 4 days (A-2058) or 7 days (A-375 and G-361). EPR spectra of free radicals in the A-2058 cells and in melanin isolated from the A-375 and G-361 cells were analyzed.
Isolation of melanin biopolymers from the melanotic cells
Melanin was isolated from the human melanoma malignum cells A-375 and G-361. The enzymatic isolation procedure was described in detail in papers [62,63]. The cells were lysed by incubation with 1% Triton X-100 (Sigma-Aldrich) for 1 hour at room temperature. Melanin was obtained by centrifugation of the lysates of the control cells and of the cells cultured with VPA, DMC, and both VPA and DMC. The concentrations of VPA and DMC were 1 mM and 10 μM, respectively. The remaining pellets were washed with phosphate buffer, resuspended in Tris-HCl buffer (50 mM, pH 7.4), and incubated for 3 h at 37°C. This Tris-HCl buffer contained sodium dodecyl sulfate (5 mg/ml) and proteinase K (0.33 mg/ml, Sigma-Aldrich). Melanin, as the insoluble pigment, was successively washed with 0.9% NaCl, methanol, and hexane, dried to a constant weight at 37°C, and stored in a glass desiccator over P2O5.
The model eumelanin
The model eumelanin, DOPA-melanin, was obtained by tyrosinase-catalyzed oxidation of 3,4-dihydroxyphenylalanine. The precursor (3,4-dihydroxyphenylalanine) was obtained from Sigma-Aldrich. The precursor was dissolved in 50 mM sodium phosphate buffer (pH 6.8) to a final concentration of 2 mM. After the addition of tyrosinase (100 U/ml), the reaction mixture was incubated for 48 h at 37°C. DOPA-melanin was obtained from the mixture by centrifugation (5000 × g, 15 min). The samples were washed with deionized water. Tyrosinase was removed from the melanin sample by treatment with SDS, methanol, and NaCl. Finally, the sample was rewashed with deionized water and dried to a constant weight at 37°C. This procedure was described in detail in [59,60].
EPR detection system
Free radicals in melanin biopolymers existing in different types of tumor cells and in model synthetic melanin were examined by electron paramagnetic resonance (EPR) spectroscopy. EPR spectra of melanin isolated from the cells and EPR spectra of the whole melanotic cells were measured. The first-derivative spectra were recorded by an X-band (9.3 GHz) EPR spectrometer produced by Radiopan (Poznań, Poland) with a numerical data acquisition system, the Rapid Scan Unit of Jagmar (Kraków, Poland) (Figure 4).
The cells or melanin samples in thin-walled glass tubes were placed in the resonance cavity, in the magnetic field produced by the electromagnet of the EPR spectrometer (Figure 5). In the magnetic field, Zeeman splitting appears [41,42]. Free radicals absorb microwaves according to the electron paramagnetic resonance condition [41,42]:

hν = g·μB·Br,

where h is the Planck constant, ν the microwave frequency, μB the Bohr magneton, g the spectroscopic g-factor, and Br the resonance magnetic induction.
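For illustration, a minimal sketch of this resonance condition is given below; the field value is taken from the 332-338 mT range quoted later for this spectrometer.

```python
H = 6.62607015e-34       # Planck constant, J*s
MU_B = 9.2740100783e-24  # Bohr magneton, J/T

def g_factor(freq_hz, b_res_tesla):
    # EPR resonance condition: h*nu = g * mu_B * B_r
    return H * freq_hz / (MU_B * b_res_tesla)

print(g_factor(9.3e9, 0.335))  # ~1.98, close to the free-radical value g ~ 2
```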
The absorption is proportional to the free radical concentration in the sample. The detailed determination of the free radical concentrations in the cells and melanin samples was described in [58-60].
For the measurements and spectral analysis, the professional spectroscopic programs of Jagmar (Kraków, Poland), LabVIEW 8.5 of National Instruments (USA), and Origin (USA) were used. The Silesian Medical University has the right to use these programs. The spectroscopic analysis program was prepared by Jagmar specifically for our EPR spectrometer; the other programs are widely available.
The parameters of the EPR measurements
The EPR spectra were measured with magnetic modulation of 100 kHz. The microwave frequency (ν) in the X-band (9.3 GHz) was measured by the MCM 101 detector of EPRAD (Poznań, Poland). The magnetic induction (B) in the range of 332-338 mT was measured by the NMR magnetometer of EPRAD (Poznań, Poland).
The maximal microwave power produced by the klystron in the microwave bridge of the EPR spectrometer was 70 mW. The measurements of the EPR spectra were done in the range of microwave power from 2.2 mW (attenuation of 15 dB) to 70 mW (attenuation of 0 dB). The microwave power was regulated by attenuation according to the formula [41,42]:

attenuation [dB] = 10 log10(Mo/M),

where M is the microwave power used for detection of the EPR spectrum and Mo is the maximal microwave power (70 mW).
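A small sketch of this attenuation relation, using the 70 mW maximal power of this spectrometer; it reproduces the 2.2 mW (15 dB) and 14 mW (7 dB) values quoted in the text.

```python
import math

M_MAX_MW = 70.0  # maximal klystron output of this spectrometer, mW

def power_from_attenuation(att_db, m_max=M_MAX_MW):
    """Microwave power M reaching the sample: attenuation = 10*log10(Mo/M)."""
    return m_max * 10 ** (-att_db / 10)

def attenuation_from_power(m_mw, m_max=M_MAX_MW):
    return 10 * math.log10(m_max / m_mw)

print(power_from_attenuation(15))  # ~2.2 mW, the lowest power used here
print(power_from_attenuation(7))   # ~14 mW, as quoted for the 7 dB spectra
```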
Analysis of the EPR spectra
The influence of microwave power in the range of 2.2-70 mW on the lineshape parameters of the EPR spectra of the tested samples was determined. A model first-derivative EPR spectrum with the values A1, A2, B1, and B2 is shown in Figure 6. The lineshape parameters were obtained as A1/A2, A1−A2, B1/B2, and B1−B2. The evolution of the proposed lineshape parameters with increasing microwave power gives information about the complex free radical system in the biological samples.
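The exact geometric definition of A1, A2, B1, and B2 comes from Figure 6, which is not reproduced here; the sketch below therefore assumes one plausible reading (A1/A2 as the positive/negative lobe amplitudes, B1/B2 as the field distances from each extremum to the zero-crossing) and is only an illustration of how such parameters might be extracted numerically.

```python
import numpy as np

def lineshape_parameters(b_mt, signal):
    """Extract A1, A2, B1, B2 from one first-derivative EPR line under the
    assumed geometry: A1/A2 are the positive/negative lobe amplitudes and
    B1/B2 the field distances from each extremum to the zero-crossing."""
    i_max, i_min = int(np.argmax(signal)), int(np.argmin(signal))
    a1, a2 = signal[i_max], -signal[i_min]
    lo, hi = sorted((i_max, i_min))
    seg = slice(lo, hi + 1)
    b_cross = b_mt[seg][np.argmin(np.abs(signal[seg]))]  # zero-crossing field
    b1, b2 = abs(b_cross - b_mt[i_max]), abs(b_mt[i_min] - b_cross)
    return a1 / a2, a1 - a2, b1 / b2, b1 - b2

b = np.linspace(332, 338, 2001)                       # field range used here, mT
x = (b - 335.0) / 0.4
print(lineshape_parameters(b, -x / (1 + x**2) ** 2))  # symmetric test line
```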
The influence of microwave power on the integral intensities (I) of the EPR spectra was determined. The integral intensity (I), i.e., the area under the absorption line, is proportional to the concentration of free radicals in the sample [41-43].
Because the EPR spectra were measured as the first derivative of absorption, the spectral lines were doubly integrated to calculate the integral intensity. The first integration gives the absorption spectrum; the second integration gives the area under the absorption line.
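A minimal numerical sketch of this double integration, applied to a synthetic first-derivative Lorentzian line; the baseline handling is a simplification of real spectral processing.

```python
import numpy as np

def integral_intensity(b_mt, deriv_signal):
    """Double integration of a first-derivative EPR spectrum: the first
    (cumulative) integration recovers the absorption line, the second
    integration gives its area, i.e., the integral intensity I."""
    db = np.gradient(b_mt)
    absorption = np.cumsum(deriv_signal * db)    # first integration
    absorption -= absorption[0]                  # crude baseline anchoring
    # second integration (trapezoid rule) -> area under the absorption line
    return float(np.sum(0.5 * (absorption[1:] + absorption[:-1]) * np.diff(b_mt)))

b = np.linspace(332, 338, 2001)
x = (b - 335.0) / 0.4
print(integral_intensity(b, -2 * x / (1 + x**2) ** 2))  # synthetic Lorentzian line
```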
The changes of the integral intensity (I) of the EPR line with increasing microwave power bring to light the spin-lattice interactions in the samples [41,42]. The integral intensity (I) of homogeneously broadened lines increases with increasing microwave power and, after reaching a maximal value, decreases with a further increase of the microwave power of the measurement [41,42]. Faster spin-lattice relaxation causes microwave saturation of the EPR line at higher microwave powers [41,42].
EPR spectra of free radicals in the human melanoma malignum A-2058 cells
Free radicals with strong EPR lines of g-factor near 2 were found in A-2058 human melanoma cells [58]. The EPR spectra of the A-2058 cells recorded with a microwave power attenuation of 7 dB are presented in Figure 7; the other spectra of these samples were presented in paper [58]. The EPR spectra are broad, nonsymmetrical lines (Figure 7). The broadening of the EPR lines of A-2058 cells is caused by dipolar interactions between free radicals. In this study we concentrated on the spin-lattice interactions in A-2058 cells and on their complex system of free radicals. The integral intensity (I) increased with microwave power (Figure 8); its decrease was not observed, but an approach to the maximum was visible (Figure 8). This means that relatively slower spin-lattice relaxation processes existed in A-2058 cells cultured with both VPA and DMC, compared with the control cells and the cells treated only with VPA or only with DMC. As one can see, the strongest effect on the magnetic interactions in A-2058 cells was caused by VPA and DMC used together in the cell culture. The changes of the lineshape parameters A1/A2, A1−A2, B1/B2, and B1−B2 with microwave power are shown in Figures 9-12, respectively.
All the tested lineshape parameters (A1/A2, A1−A2, B1/B2, and B1−B2) for the control A-2058 cells and for the A-2058 cells cultured with the antitumor substances (VPA, DMC, and both VPA and DMC) were not constant; their changes with microwave power were observed (Figures 9-12). The strongest changes were obtained for the parameters A1−A2 (Figure 10) and B1−B2 (Figure 12). The changes of the spectral shape parameters with microwave power were not regular (Figures 9-12). These nonregular changes confirmed the existence of several types of free radicals in the tested A-2058 cells, both in the control cells and in the cells treated with the antitumor substances used.

We proposed these shape parameters, A1/A2, A1−A2, B1/B2, and B1−B2, for checking the multicomponent character of the free radical system in cells. They also support the analysis of complex free radical systems in other paramagnetic samples, for example, drugs [64,65]. The EPR spectra of the cells were superpositions of several lines resulting from individual groups of free radicals. The microwave power influenced these EPR components differently, depending on the type of free radicals. The amplitudes (A), linewidths (ΔBpp), and integral intensities (I) of the component lines changed differently with microwave power, and the component EPR lines saturated at different microwave powers. All these facts resulted in the summary effect of nonregular changes of the shape parameters with the microwave power used during the measurements of the EPR spectra of A-2058 cells. The existence of several groups of free radicals in A-2058 cells was expected: o-semiquinone free radicals, biradicals, and free radicals formed, for example, by UV irradiation of the cells may exist in the A-2058 cells. The studies of the complex free radical system in tumor cells with application of the spectral shape analysis over a broad range of microwave power will be continued, and a numerical analysis of the components will be performed.

Besides the shape analysis proposed in this work, important qualitative results for free radicals in the human melanoma malignum A-2058 cells were obtained by us earlier [58]. It was pointed out that treatment with VPA, DMC, and both VPA and DMC decreased the free radical concentration in A-2058 cells [58]. This effect was strongest for VPA used together with DMC, so these substances were proposed as antitumor drugs [58]. The spectral parameter used in the present work, the integral intensity (I), was more precise than the amplitude (A) [58] for the examination of spin-lattice relaxation processes in A-2058 human melanoma cells.
EPR spectra of free radicals in melanin isolated from human melanoma malignum A-375 cells
Free radicals were also found in the melanin biopolymer isolated from the control A-375 cells and from the A-375 cells cultured with VPA, DMC, and both VPA and DMC. EPR spectra were measured for all the melanin samples. Exemplary EPR spectra of melanin isolated from A-375 cells cultured with VPA and DMC, recorded with a microwave power attenuation of 7 dB, are shown in Figure 13. The other EPR spectra of melanin originating from A-375 cells were shown in [60].
The parameters of the EPR spectra of the melanin obtained from A-375 cells changed with microwave power. In Figure 14, the influence of microwave power on the integral intensities (I) of the melanin obtained from A-375 cells cultured with VPA, DMC, and both VPA and DMC is compared. The changes with increasing microwave power of the integral intensities (I) of the melanin isolated from the control A-375 cells and from the A-375 cells cultured with VPA were published in our earlier paper [59].
The integral intensities (I) of the EPR lines of melanin isolated from A-375 cells treated with VPA increased with increasing microwave power (M/Mo), reached a maximum, and started to saturate (Figure 14). The EPR lines of melanin isolated from the control A-375 cells saturated at low microwave power [59]. Comparing the results for the EPR lines of melanin from the control A-375 cells [59] and from the A-375 cells cultured with VPA (Figure 14), it may be concluded that faster spin-lattice relaxation processes existed in melanin from the A-375 cells treated with VPA. Such an effect was not observed for the melanin isolated from A-375 cells cultured with DMC: the EPR lines of melanin from A-375 cells treated with DMC (Figure 14) saturated at a microwave power similar to that of the lines of melanin from the control A-375 cells [59]. The EPR lines of melanin obtained from A-375 cells treated with both VPA and DMC (Figure 14) saturated at a lower microwave power than the EPR lines of the melanin isolated from the control cells [59]; slower spin-lattice relaxation processes existed in melanin from A-375 cells cultured with both VPA and DMC than in the melanin from the control cells. o-Semiquinone free radicals mainly existed in the melanin samples from A-375 cells. The quantitative results were published in earlier papers [59,60]. A considerable decrease of the free radical concentration in melanin after treatment of A-375 cells with both VPA and DMC was observed [60]. The free radical concentration in melanin isolated from A-375 cells cultured with DMC was lower than in melanin from the cells cultured with VPA [60]. The changes of amplitudes (A) and linewidths (ΔBpp) with microwave power indicated homogeneous broadening of the EPR lines of melanin isolated from A-375 cells [60].
EPR spectra of free radicals in melanin isolated from human melanoma malignum G-361 cells
EPR lines of o-semiquinone free radicals were also measured for melanin isolated from G-361 human melanoma cells. The EPR spectra of melanin isolated from the control G-361 cells and from the G-361 cells treated with VPA, DMC, and both VPA and DMC, measured with a microwave power attenuation of 7 dB, are shown in Figure 15. The other spectra of these melanin samples were presented in paper [60]. A high level of noise was visible in these spectra (Figure 15), indicating that lower contents of free radicals were found in melanin from G-361 cells than from A-375 cells (Figure 13).
No influence of the antitumor substances VPA, DMC, or both VPA and DMC on the spin-lattice interactions in melanin obtained from G-361 human melanoma cells was observed.
The changes of the integral intensities (I) of the melanin from G-361 cells, for the control cells and for the cells cultured with VPA, DMC, and both VPA and DMC, with increasing microwave power (M/Mo) are compared in Figure 16. Similar correlations between integral intensity (I) and microwave power were visible for all the melanin samples (Figure 16). The antitumor drugs did not change the magnetic interactions in the melanin structures of G-361 cells.
The quantitative results of the EPR examination of melanin originating from G-361 cells were described in paper [60]. It was found that after treating G-361 cells with both VPA and DMC, the free radical concentration in melanin strongly decreased [60]. The free radical concentration in melanin isolated from G-361 cells cultured with DMC was higher than in melanin from the cells cultured with VPA [60]. The changes of amplitudes (A) and linewidths (ΔBpp) with microwave power indicated homogeneous broadening of the EPR lines of melanin isolated from G-361 cells [60]. Our present spin-lattice relaxation studies, based on the dependence of the integral intensities (I) on microwave power, confirmed the results obtained for melanin from G-361 cells from the changes of amplitude (A) with microwave power [60].
Conclusions
The existence of o-semiquinone free radicals in melanin from the human melanoma malignum cells was confirmed. Free radicals of melanin were mainly responsible for the EPR lines of the tested tumor cells. The free radical concentrations depended on the type of tumor cells. The antitumor drugs changed the free radical concentrations, and the changes depended on the drug amounts. The parameters and lineshape of the EPR spectra of melanin changed with increasing measuring microwave power. All the EPR lines of the tested melanins were very broad. Most of the spin-lattice relaxation processes in the melanin samples were characterized by long relaxation times, and their EPR lines saturated at low microwave powers. The analysis of the lineshape of the EPR spectra measured over a wide range of microwave power was useful for obtaining information about the complex free radical system in the melanin biopolymers. The spectral EPR results may be applied in the therapy of tumors containing melanin: the free radical concentrations in the tumors and the effect of antitumor substances on their values may be determined, and effective antitumor drugs, as those which decrease the free radical concentrations in the melanotic tumor cells, may be found spectroscopically. | 4,576.6 | 2017-03-01T00:00:00.000 | [
"Chemistry"
] |
Effect of compaction pressure on the thermal conductivity of UO2-BeO-Gd2O3 pellets
(U,Gd)O2 fuels are used in pressurized water reactors (PWR) to control the neutron population during the early life of the reactor, with the purpose of extending fuel cycles and reaching higher target burnups. Nevertheless, the incorporation of Gd2O3 in UO2 fuel decreases the thermal conductivity, leading to premature fuel degradation. This motivates the addition of beryllium oxide (BeO), which has a high thermal conductivity and is chemically compatible with UO2. Pellets were obtained from powder mixtures of UO2, Gd2O3, and BeO, with beryllium oxide contents of 2 and 3 wt% and the gadolinium oxide content fixed at 6 wt%. The pellets were compacted at 400, 500, 600, and 700 MPa and sintered under a hydrogen reducing atmosphere. The purpose of this study was to investigate the effect of BeO, Gd2O3, and compaction pressure on the thermal conductivity of UO2 pellets. The thermal diffusivity and conductivity of the pellets were determined from 298 K to 773 K, and the results obtained were compared with those of UO2 fuel pellets. The thermal diffusivity was determined by the Laser Flash and Thermal Quadrupole methods, and the thermal conductivity was calculated as the product of the thermal diffusivity, the specific heat capacity, and the density. The sintered density of the pellets was determined by the xylol penetration and immersion method. The results showed an increase in the thermal conductivity of the pellets with the addition of BeO and with the compaction pressure, compared with the values obtained for UO2 pellets.
INTRODUCTION
One of the most important limiting factors in water-cooled nuclear reactor operation is the maximum temperature reached by the fuel, so a higher fuel thermal conductivity is essential for improving reactor performance under normal operation and accident conditions. The UO2 pellet is the most widely used fuel in LWRs (light water reactors), but its low thermal conductivity, in the range of 2-8 W·m-1·K-1 [1], has a direct influence on the thermal behavior of the fuel during reactor operation, affecting the fuel center temperature, thermal expansion, fission gas release, gaseous swelling, etc.
Since there is great interest in improving the thermal conductivity of nuclear fuel, several studies are being carried out on oxide additions to increase the thermal conductivity of UO2 [2-7]. Studies aiming to extend the fuel burnup are also conducted with oxide additions to UO2.
These oxide addition processes are generally based on mechanical mixing, because it is an easy step to implement in the conventional UO2 fuel fabrication route of the nuclear industry.
Based on previous studies at the Nuclear Technology Development Center (CDTN) on the fuel compounds UO2-BeO and UO2-Gd2O3, the main additives employed here are BeO and Gd2O3, since the addition of small amounts of BeO significantly increases the fuel thermal conductivity [4,7], and Gd2O3 is widely used in the nuclear industry because it can compensate for the excess initial reactivity at the beginning of reactor life, promoting a higher fuel burnup [8,9].
However, the addition of Gd2O3 to UO2 leads to a decrease in the thermal conductivity of the fuel, making the amount of gadolinium oxide added a limiting parameter of the fuel performance [8,9].
Beryllium and gadolinium oxides have excellent properties, such as a high melting point, good behavior under irradiation, and chemical stability.
Ishimoto et al. [4] showed significant improvements in the thermal conductivity of UO2 that can be achieved with only 3.2 vol.% of BeO. Garcia et al. [6] demonstrated that UO2 thermal conductivity over the temperature range of 298.15 K to 523.15 K was improved by approximately 10% for each 1 vol.% of BeO added. Our research [10,11] has shown that the addition of 2 and 3 wt% of BeO in UO2 can lead to an increase in the thermal conductivity of 22% and 28% at 573 K, respectively.
The purpose of this paper is to investigate the effect of the compaction pressure on the thermal conductivity of UO2 pellets with BeO and Gd2O3 additions. All pellets were obtained by the conventional fabrication process: mixing, pressing, and sintering under a reducing atmosphere. The thermal diffusivity of the pellets was determined at 298 K by the Laser Flash method [12] and from 473 K to 773 K by the Thermal Quadrupole method [13]. The results for the UO2 and UO2-BeO-Gd2O3 pellets, such as density and thermophysical properties (thermal diffusivity, specific heat capacity, and thermal conductivity), are reported. The expanded uncertainty was estimated according to the ISO/BIPM Guide to the Expression of Uncertainty in Measurement (GUM) [14].
Pellet preparation and characterization
The BeO and Gd2O3 powders used in this work were supplied by Alfa Aesar (99.99% pure) and Sigma-Aldrich (99.98% pure), respectively, and the UO2 powder was provided by the Institute of Energy and Nuclear Research (IPEN).
The UO2 and UO2-BeO-Gd2O3 pellets were obtained by mixing the powders, pressing them into green pellets, and sintering under a hydrogen atmosphere. The powders of UO2, BeO, and Gd2O3 were mechanically homogenized for 4 h in a rotating apparatus for each BeO content (2 and 3 wt%) with 6 wt% Gd2O3. These powder mixtures were compacted at 400, 500, 600, and 700 MPa using a uniaxial hydraulic press. The green pellets were then sintered at 1700 °C for 3 h and presented final geometric dimensions of about 10 mm in diameter and 2-3 mm in thickness.
The density of the sintered pellets was determined by the xylol penetration and immersion method [15], and the mass of the pellets was measured using a Mettler AT201 balance, which has a resolution of 0.1 mg. The dependence of the sintered density of the UO2-BeO-Gd2O3 pellets on temperature was determined from linear thermal expansion data of UO2.
Thermophysical properties
The thermal diffusivity of the pellets was determined by two methods: Laser Flash [11], using an apparatus developed at CDTN [8], and Thermal Quadrupole [13], employing a diffusivimeter (Protolab, QuadruFlash 1200) manufactured in Brazil. The Laser Flash method was employed for the measurements carried out at room temperature, while the Thermal Quadrupole method was used in the temperature range of 473 to 773 K.
Before the measurements, the pellets were coated with a carbon film on both flat faces to improve the emissivity and uniform absorption of the laser beam. In both methods, the front surface of the pellets was subjected to a very short burst of radiant energy.
The thermal diffusivity results were normalized to 95% of the theoretical density using the following equations [4]:

α95 = αm (1 − 0.05β) / (1 − Pβ),  β = 2.6 − 0.5 (T/1000),

where α95 corresponds to the thermal diffusivity normalized to 95% of theoretical density (TD), αm to the determined thermal diffusivity, P to the pellet porosity, and T to the temperature.
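As a rough illustration of this normalization and of the conductivity product described in the next paragraph, the sketch below applies the modified-Loeb correction of the equations above and computes λ = α·ρ·cp; all numerical inputs are illustrative placeholders rather than the measured data of this work.

```python
# Illustrative sketch: porosity normalization (modified-Loeb form above) and
# the product lambda = alpha * rho * cp used in the next paragraph; all the
# numerical inputs are placeholders, not the measured data of this work.

def diffusivity_to_95td(alpha_measured, porosity, temp_k):
    beta = 2.6 - 0.5 * (temp_k / 1000.0)
    return alpha_measured * (1 - 0.05 * beta) / (1 - porosity * beta)

def thermal_conductivity(alpha_m2_s, density_kg_m3, cp_j_kg_k):
    """lambda = alpha * rho * cp, in W/(m*K)."""
    return alpha_m2_s * density_kg_m3 * cp_j_kg_k

alpha95 = diffusivity_to_95td(alpha_measured=2.9e-6, porosity=0.05, temp_k=573.0)
print(thermal_conductivity(alpha95, density_kg_m3=10_400.0, cp_j_kg_k=280.0))
```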
The specific heat capacity values were calculated by the law of mixtures [4] using reported specific heat data of the individual components [16-18]. The thermal conductivity of the fuel pellets was determined as the product of their thermal diffusivity, density, and specific heat capacity.

RESULTS AND DISCUSSION

Table 1 shows the geometric dimensions and sintered densities of the pellets for each compaction pressure. It can be observed from this table that the sintered density of the pellets was between 94% TD and 96% TD, with the exception of only one UO2-6wt%Gd2O3-3wt%BeO pellet pressed at 700 MPa. The maximum expanded relative uncertainty of the sintered pellet density was estimated at 2%, for a coverage probability of approximately 95% (k = 2). Furthermore, the density of the pellets increases with increasing compaction pressure. The determined specific heat capacity of the pellets is presented in Table 2, where an increase of the specific heat capacity of the pellets as a function of the BeO content can be seen, as expected. The expanded uncertainty adopted for the specific heat was assumed to be 2% [1]. Table 3 and Table 4 show the thermal diffusivities and the thermal conductivities, both normalized to 95% TD, as a function of temperature for all compaction pressures. For these results, the expanded relative uncertainty was estimated at 7.5% for the thermal diffusivity and 8.5% for the thermal conductivity. It is known that the addition of gadolinia to UO2 decreases the thermal diffusivity and thermal conductivity [8], as observed in Table 3 and Table 4, respectively. Table 4 also includes some results for replicated pellets, which indicated good reproducibility of the pellet fabrication process.
Characteristic graphs of the thermal conductivities are shown in Fig. 1 to Fig. 3, where a gradual increase of the thermal conductivity with increasing BeO content can be seen, as well as a decrease of these values with increasing temperature, following the same trend observed for the thermal diffusivity (not shown here).
CONCLUSION
To improve the performance of nuclear fuel, the thermal conductivity of UO2 pellets was investigated with the incorporation of additives in the form of beryllium and gadolinium oxides. In addition to the beryllium content, the compaction pressure of the UO2 and UO2-xBeO-6wt%Gd2O3 pellets (x = 2 and 3 wt%) was varied from 400 to 700 MPa. The additions of Gd and Be had the objectives of prolonging the burnup time of UO2 and of attenuating the decrease in the thermal conductivity of UO2 caused by the presence of gadolinium, respectively. The comparison between the thermal conductivity values obtained for the mixed-oxide pellets and for UO2 at all compaction pressures indicated that BeO increased the thermal conductivity, and that between pressures of 500 and 700 MPa there was no significant difference. For the compaction pressure of 400 MPa, over the temperature range measured, the thermal conductivity increased by about 8% at room temperature and by about 5% up to 773 K when comparing the pellets with 2 and 3 wt% of BeO. | 2,235.8 | 2021-07-25T00:00:00.000 | [
"Materials Science",
"Engineering",
"Physics"
] |
Optimize the Coverage Probability of Prediction Interval for Anomaly Detection of Sensor-Based Monitoring Series
Effective anomaly detection of sensing data is essential for identifying potential system failures. Because they require no prior knowledge or accumulated labels and provide an uncertainty presentation, probability prediction methods (e.g., Gaussian process regression (GPR) and relevance vector machine (RVM)) are especially well suited to performing anomaly detection for sensing series. Generally, one key parameter of such prediction models is the coverage probability (CP), which controls the judging threshold for the testing sample and is generally set to a default value (e.g., 90% or 95%). There are few criteria for determining the optimal CP for anomaly detection. Therefore, this paper designs a graphic indicator, the receiver operating characteristic curve of the prediction interval (ROC-PI), based on the definition of the ROC curve, which can depict the trade-off between the PI width and the PI coverage probability across a series of cut-off points. Furthermore, the Youden index is modified to assess the performance of different CPs, and the optimal CP is derived by minimizing it with the simulated annealing (SA) algorithm. Experiments conducted on two simulation datasets demonstrate the validity of the proposed method. In particular, an actual case study on a sensing series from an on-orbit satellite illustrates its significant performance in practical application.
Introduction
With the development of sensing and acquisition technology, more sensing data series of system conditions are available. Data mining and knowledge discovery on these sensing data series can help extract the fault or failure information they contain [1,2]. For practical applications, one of the most valuable strategies is to detect data which behave differently from the majority. The detected data are defined as anomalous in the domain of machine learning [3]. Anomaly detection is also the problem of finding items, events, or observations that do not conform to an expected pattern or a model of normal behavior [4]. The application fields of anomaly detection include network intrusion detection [5], financial fraud detection [6], medical sensor detection [7], and fault detection in industrial systems [8], among others. For system condition monitoring, the detected anomalous data can be excluded to prevent incorrect decision making. In particular, in the area of aeronautics and astronautics, system reliability and operation safety can be enhanced by anomaly detection in telemetry series (i.e., sensing data series).
There are now three broad categories of anomaly detection techniques, based on the availability of labels [4]. When a training dataset contains both normal and outlying instances, a supervised learning approach based on a standard classification algorithm can be established to detect anomalies [9]. There are, however, few existing criteria that are effective in estimating the performance of the PIs with different CPs. Therefore, this paper designs a graphic indicator of the receiver operating characteristic of PI (ROC-PI) on the basis of the ROC curve.
In detail, ROC-PI offers a graphical illustration of the trade-offs between PI coverage probability (PICP) and the width of the PI. PICP, also called the PI confidence level, is the probability that the testing targets lie within the PI provided by a prediction model. The width of the PI is represented by the CP of the prediction distribution. Moreover, three criteria (i.e., the point on the ROC curve closest to (0, 1), the Youden index, and the minimized cost criterion) have been developed to optimize the threshold point of an ROC curve [33]. For the ROC curve, (0, 1) is the ideal case for anomaly detection, so the point on the ROC curve closest to (0, 1) is optimal. Nevertheless, for the ROC-PI curve, the point (0, 1) means that the CP of the PI is not consistent with the PICP, which is unrealistic in real applications. Thus, an effective reference point like (0, 1) is difficult to determine, which makes the closest-to-(0, 1) criterion inappropriate for seeking the optimal point on the ROC-PI. The Youden index maximizes the vertical distance from the diagonal line; that is, it selects the point on the ROC curve farthest from the line of equality (the diagonal). Moreover, the Youden index is more generally used, with the advantages of reflecting the intention to maximize the correct classification rate and of being easy to calculate [34]. The third criterion considers cost and is rarely applied because it is difficult to implement. Given the properties of ROC-PI, a point on the diagonal reflects an effective estimation of the prediction distribution, whose CP is equal to the PICP, so the point on the ROC-PI curve closest to the line of equality is optimal within an acceptable range of CP. Namely, the optimal CP can be calculated by the modified Youden index. In addition, considering that the simulated annealing (SA) method has been utilized to solve this type of optimization problem [35-37], the Youden index is modified in this paper to determine the optimal CP based on the SA method.
On this basis, an improved method for anomaly detection with a probability prediction model is realized in this work. It is noted that the proposed method is not only suitable for GPR and RVM models, but can also be applied to other probability prediction models which can provide the distribution of new testing data. GPR and RVM are two typical probability prediction models with different advantages: RVM is a sparse model which can give a quick testing result, while GPR is a non-parametric model which can be trained quickly and flexibly. Therefore, in order to test our proposed method comprehensively, both of them are considered as testing models in this work. The experiments on the simulated data and on real spacecraft telemetry series validate the effectiveness and applicability of the proposed method.
Anomaly Detection Based on Prediction Interval
In statistical inference (specifically predictive inference), a PI is an estimate of an interval within which one or more future observations will fall with a certain probability, given what has already been observed. A confidence interval, by contrast, only provides bounds for a scalar population parameter, such as the population mean [38]. By way of comparison, a PI additionally accounts for the noise interference through the injected noise variance. Therefore, the PI is more effective for anomaly detection. Figure 1 shows the framework, whose important steps are data preprocessing, input data construction, prediction model training, and PI output.
1. Data preprocessing. In this step, erroneous data points are deleted and data normalization is performed by statistical analysis and min-max normalization, respectively. The signal amplitude is then restricted to the range from −1 to 1. It is noteworthy that preprocessing methods specific to particular application areas can also be applied in this step.
2. Input data construction. In this work, autocorrelation analysis is applied to obtain the embedding dimension used to construct the input matrix (a sketch of this step is given after this list).
3. Prediction model training. In this step, the initial model parameters and the optimization algorithms are determined first; then the prediction model is constructed based on the training data.
4. PI output. Combined with the sample distribution estimated by the one-step-ahead prediction model and the CP setting, a PI is constructed to reflect the normal range of a new monitoring point. The CP is set by default to, e.g., 90% or 95%.
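As a concrete illustration of step 2 above, the sketch below picks the embedding dimension from a simple autocorrelation cut-off and stacks sliding windows into an input matrix; the 1/e threshold and the function names are our own assumptions rather than a prescribed part of the method.

```python
import numpy as np

def embedding_dimension(series, threshold=1 / np.e):
    """Pick the window length as the first lag where the autocorrelation
    drops below a threshold (an assumed, simple stand-in for the paper's
    autocorrelation analysis)."""
    x = (series - series.mean()) / series.std()
    for lag in range(1, len(x) // 2):
        if np.corrcoef(x[:-lag], x[lag:])[0, 1] < threshold:
            return lag
    return len(x) // 2

def build_input_matrix(series, d):
    """Row i is the window [y_i, ..., y_{i+d-1}]; its target is y_{i+d}."""
    X = np.stack([series[i:i + d] for i in range(len(series) - d)])
    return X, series[d:]

t = np.linspace(0, 8 * np.pi, 400)
s = np.sin(t) + 0.05 * np.random.default_rng(0).standard_normal(t.size)
d = embedding_dimension(s)
X, y = build_input_matrix(s, d)
print(d, X.shape, y.shape)
```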
Based on the above steps, a PI is constructed as the threshold to judge whether a detected point is normal or abnormal. Given that prediction models with uncertainty presentation can provide the PI directly from the estimated data distribution, they are very suitable for the anomaly detection of time series (in this work, time series denote the sensing data series). An example of anomaly detection with a PI is shown in Figure 2, where the grey region is the 95% PI provided by the GPR model, and the four points beyond the corresponding range are labeled as anomalies. In this paper, two prediction models with uncertainty presentation (i.e., GPR and RVM) are applied to perform anomaly detection; they are introduced in the following two subsections.
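The following sketch reproduces the spirit of the Figure 2 example with scikit-learn's GPR: a noisy sine series with a few injected spikes, a 95% PI built from the predictive mean and standard deviation, and points outside the interval flagged as anomalies. The kernel choice and all numbers are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t = np.linspace(0, 6 * np.pi, 300)[:, None]
y = np.sin(t).ravel() + 0.1 * rng.standard_normal(t.shape[0])
y[[60, 150, 240]] += np.array([0.9, -0.8, 1.0])   # injected spikes

# WhiteKernel lets the model learn the noise level, so the predictive
# standard deviation below reflects a PI rather than only a CI
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(t, y)
mu, sd = gpr.predict(t, return_std=True)

cp = 0.95
z = norm.ppf(0.5 + cp / 2)                        # two-sided quantile for the CP
outside = (y < mu - z * sd) | (y > mu + z * sd)
print(np.where(outside)[0])                       # indices flagged as anomalous
```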
Prediction Interval Estimation Based on Gaussian Process Regression
A GP defines a collection of random variables, any finite combination of which obeys a joint Gaussian distribution [22]. Whereas a Gaussian distribution for a single random variable is characterized by its mean and variance, a Gaussian process is characterized by its mean function and covariance function, defined by Equations (1) and (2), respectively:

m(x) = E[f(x)],  (1)

k(x_i, x_j) = E[(f(x_i) − m(x_i))(f(x_j) − m(x_j))],  (2)

where E[·] is the expectation operator, x_i and x_j are different input variables, and k(x_i, x_j) reflects the relation between x_i and x_j. The most-used covariance function is the squared exponential function [39]:

k(x_i, x_j) = υ_0 exp(−(1/2) Σ_{l=1..d} ω_l (x_i^(l) − x_j^(l))²),  (3)

where υ_0 and ω_1, ω_2, ..., ω_d are hyper-parameters which need to be initialized: υ_0 is the model variance, and ω_l sets the distance scale along the lth input dimension. It is noted that users can define their own covariance function as long as it satisfies the nonnegative-definiteness condition. Generally, after normalization of the input variables, the mean function can be set to zero everywhere; in this case, the prior distribution of the GP is determined by the covariance function and its hyper-parameters. In practical applications, these initial hyper-parameters can be set randomly, ranging from 0 to 1, and the conjugate gradient method is adopted to optimize them. Consider the regression problem defined by the following equation:

y = f(x) + ε,  (4)

where x is a d-dimensional input variable, y is the target variable, f(x) describes the functional relationship between x and y, and ε is assumed to be additive white noise with variance σ_n². Some parametric models restrict the explicit form of f(x) with some unknown parameters. However, the GPR model only assumes that the function values f(x_1), ..., f(x_N) at different inputs obey a joint Gaussian distribution, so that they form a GP described by Equation (5):

f(x) ~ GP(m(x), k(x, x')).  (5)

One important property of a GP is described by Definition 1.
Definition 1.
The sum of two independent multivariate normal distributions (e.g., A and B) is also a multivariate normal distribution (e.g., C), whose mean and covariance are the sums of the means and covariances of A and B.
Based on the property of GP described by Definition 1, together with Equations (4) and (5), the target y also obeys a GP:

y ~ GP(m(x), k(x_i, x_j) + σ_n² δ_ij),  (6)

where δ_ij represents the Kronecker delta function, δ_ij = 1 only when i = j.

Then, suppose f* is the function value at one test input x* (multiple test inputs are also allowed). y and f* still obey a joint Gaussian distribution (again based on the above property of GP), namely:

[y; f*] ~ N([m; m*], [[A, E], [E^T, B]]),  (7)

where m is the mean vector of the training data and m* is the mean vector of the testing data. In addition, A is the covariance matrix constructed from the training set itself, which also incorporates the noise variance, A(i, j) = k(x_i, x_j) + σ_n² δ_ij, and N is the training size. E is the covariance vector of the training set with the testing input, E(i) = k(x_i, x*). Similarly, B is the covariance value of the testing input with itself, B = k(x*, x*). Another important property of GP is described by Definition 2.
Definition 2. For a multivariate normal distribution (e.g., C) constructed by two multivariate normal distributions (e.g., A and B), when a part of the observed value (e.g., C 1 ) is known, the probability distribution of another part of the observed value (e.g., C 2 ) is also a multivariate normal distribution whose property can be expressed by the corresponding information of A and B.
Based on Definition 2 and Equation (7), the marginal distribution of y can be derived as Equation (8), and the conditional distribution of y with known f* is given by Equation (9):

y ~ N(m, A),  (8)

y | f* ~ N(m + E B^{-1}(f* − m*), A − E B^{-1} E^T),  (9)

where N(·,·) represents a (joint) Gaussian distribution. Equation (8) thus indicates that y obeys the joint Gaussian distribution with mean vector m and covariance matrix A, and similarly for Equation (9). Then the posterior conditional distribution of f* can easily be inferred as

f* | y ~ N(m* + E^T A^{-1}(y − m), B − E^T A^{-1} E).  (10)

Accordingly, GPR can be applied for regression and prediction. Moreover, compared with single-point prediction, GPR can realize interval estimation with the set CP.
In detail, the GPR prediction output includes the mean µ* and variance σ_f*² of a normal distribution (Equation (10)). So, the related confidence interval (CI) at a certain CP is

CI = [µ* − z_CP σ_f*, µ* + z_CP σ_f*],  (11)

which reflects the range of the mean of a testing target, while the PI is the interval that additionally accounts for the noise interference, namely

PI = [µ* − z_CP (σ_f*² + σ_n²)^{1/2}, µ* + z_CP (σ_f*² + σ_n²)^{1/2}],  (12)

where z_CP is the two-sided standard normal quantile corresponding to the chosen CP.
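Equations (7)-(12) can be exercised directly with a few lines of linear algebra; the sketch below assumes a zero mean function and the squared exponential kernel of Equation (3), with illustrative hyper-parameter values rather than optimized ones.

```python
import numpy as np

def sqexp(Xa, Xb, v0=1.0, w=5.0):
    # squared exponential kernel of Equation (3), one-dimensional inputs
    return v0 * np.exp(-0.5 * w * (Xa[:, None] - Xb[None, :]) ** 2)

def gpr_posterior(X, y, X_star, noise_var):
    """Posterior mean/variance of f* (Equation (10)), zero mean function."""
    A = sqexp(X, X) + noise_var * np.eye(len(X))  # training covariance + noise
    E = sqexp(X, X_star)                          # train/test covariance
    B = sqexp(X_star, X_star)                     # test covariance
    mu = E.T @ np.linalg.solve(A, y)              # m* = 0 assumed
    var_f = np.diag(B - E.T @ np.linalg.solve(A, E))
    return mu, var_f, var_f + noise_var           # CI uses var_f; PI adds noise

X = np.linspace(0, 2 * np.pi, 40)
y = np.sin(X) + 0.05 * np.random.default_rng(1).standard_normal(40)
mu, var_ci, var_pi = gpr_posterior(X, y, np.array([1.0, 3.0]), noise_var=0.05**2)
z = 1.96  # ~95% CP
print(mu - z * np.sqrt(var_pi), mu + z * np.sqrt(var_pi))  # Equation (12)
```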
Prediction Interval Estimation Based on Relevance Vector Machine
Similar to the GPR model, the RVM is also proposed on the basis of the Bayesian framework [21], and it has the same functional form as the SVM, described by Equation (13):

y(x) = Σ_{i=1..N} ω_i K(x, x_i) + ω_0,  (13)

where K(x, x_i) is the kernel function, ω_i represents a weight of the model, and x_i is the ith training input with dimension d. N is the size of the training data, and x is the testing input.
However, performing maximum-likelihood estimation on ω may cause the serious problem of over-fitting. So, in order to constrain these weights, Tipping defines a zero-mean Gaussian prior distribution over ω:

p(ω | α) = Π_{i=0..N} N(ω_i | 0, α_i^{-1}),  (14)

where α is the hyper-parameter vector, α = {α_0, α_1, ..., α_N}. Obviously, there is a consistent one-to-one match between each weight and each hyper-parameter. In particular, the hyper-parameter value controls the influence of the prior distribution on the corresponding weight, which is also the main reason for the sparsity of the model. To complete the specification of this hierarchical prior, we must define hyperpriors over α, as well as over the noise variance σ². These quantities are examples of scale parameters, and suitable priors thereover are Gamma distributions [40]: p(α) = Π_{i=0..N} Gamma(α_i | a, b) and p(β) = Gamma(β | c, d), with β ≡ σ^{-2}. The likelihood of the target output is given by Equation (15),

p(t | ω, σ²) = (2πσ²)^{-N/2} exp(−‖t − Φω‖² / (2σ²)),  (15)

where Φ is the design matrix with rows φ(x_i) = [1, K(x_i, x_1), ..., K(x_i, x_N)]. Combining the likelihood with the prior, the posterior over the weights is Gaussian, p(ω | t, α, σ²) = N(µ, Σ), with

Σ = (A + σ^{-2} Φ^T Φ)^{-1},  µ = σ^{-2} Σ Φ^T t,

and the marginal likelihood is obtained by integrating out the weights; therefore, the likelihood distribution of the hyper-parameters is obtained as Equation (19):

p(t | α, σ²) = N(t | 0, σ² I + Φ A^{-1} Φ^T),  (19)

where A = diag(α_0, α_1, ..., α_N). The hyper-parameters α and σ² are estimated by iteration, which is not described in this section; please refer to [21] for the detailed computing process. Suppose the new testing point is x*, and the corresponding target is t*. Then p(t* | t) ~ N(µ*, σ*²), and the mean µ* and variance σ*² are given by

µ* = µ^T φ(x*),  (20)

σ*² = σ²_MP + φ(x*)^T Σ φ(x*),  (21)

where µ* represents the predictive mean of t* and σ*² indicates the predictive variance, which is the combination of two variance components: σ²_MP is the estimated noise variance, and φ(x*)^T Σ φ(x*) reflects the uncertainty of the weight estimation. Finally, the PI of the RVM can be constructed from µ* and σ*² in the same way as in Equation (12).
Analysis of PI Performance for Anomaly Detection with Different CPs
Based on the above description in Section 2, it is evident that the key step of prediction-based anomaly detection is constructing the PI, which is sensitive to the parameters of the prediction model as well as to the set CP. It must be noted that the model parameters are optimized within the Bayesian framework, while the CP is set by default from prior knowledge (e.g., 90% or 95%). Evidently, a higher CP yields a wider PI, which will cover more training samples; conversely, a lower CP corresponds to a narrower PI, which may contain fewer of the available samples. Therefore, setting a higher CP faces the challenge of a higher missing rate; otherwise, more false alarms may be produced. A GPR prediction example for a sine signal with noise is shown in Figure 3, where PIs with two common CPs show different abilities to cover the available data: the 90% PI is narrower than the 95% PI and can detect more future anomalous samples, whereas the 95% PI covers all samples, as shown in the enlarged figure, and will cause fewer false alarms. It is difficult to judge which performance is better, but there is no doubt that the set CP is particularly important for constructing an effective PI. Therefore, the CP should be optimized to balance the relationship between the missing rate and false alarms using the available training data. In reality, anomalous samples are scarce or expensive to obtain, so the traditional ROC curve indicator, which describes the relationship between sensitivity and (1 − specificity), cannot be applied in this case. Thus, this work focuses on estimating the performance of the PI with the available normal data and optimizing its performance to obtain an optimal CP.
Improved Anomaly Detection Framework with Optimal PI
As shown in Figure 1, the PI is computed with the set CP, and, following the analysis of Section 3.1, the performance of anomaly detection is generally influenced by the CP. Therefore, the PI performance with different CPs should be assessed in the training step; in particular, optimization algorithms can be applied to determine the optimal CP, which is then taken as an input parameter of the testing phase. Anomaly detection with the optimal PI is realized by the framework shown in Figure 4, which is divided into two parts: offline training and online testing.
Offline training
Offline training consists of two sections: hyper-parameters optimization and CP optimization. The hyper-parameters of one prediction model are optimized according to the model requirement. In detail, GPR trains its hyper-parameters by conjugate gradient method. In addition, RVM uses expectation maximization (EM) to optimize its model parameters. These contents were introduced in Sections 2.2 and 2.3.
CP optimization is the main focus and contribution of our work. Here the validation data set is used to determine the optimal PI. By reviewing the existing PI metrics (especially given the excellent ability of the ROC curve to estimate the performance of classification methods), this paper designs a graphic indicator (i.e., ROC-PI) to depict the trade-off between the PI width and PI coverage probability. Furthermore, the Youden index is modified to assess the detection performance with different CPs. In addition, SA is applied to optimize the modified Youden index. Based on these two-level optimizations, the prediction model with optimal hyper-parameters and CP is realized, and will be taken as the input of online testing.
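A minimal sketch of this CP optimization is given below, assuming the modified Youden index is the distance between the ROC-PI point (CP, PICP) and the line of equality, minimized by a basic SA loop; the cooling schedule, step size, and CP search range are illustrative assumptions, not the authors' settings.

```python
import math, random
import numpy as np
from scipy.stats import norm

def picp(y, mu, sd, cp):
    z = norm.ppf(0.5 + cp / 2)
    return np.mean((y >= mu - z * sd) & (y <= mu + z * sd))

def modified_youden(cp, y, mu, sd):
    # assumed form: distance of the ROC-PI point (CP, PICP) from the diagonal
    return abs(picp(y, mu, sd, cp) - cp)

def sa_optimal_cp(y, mu, sd, lo=0.80, hi=0.999, temp=1.0, cool=0.95, steps=200):
    cp = best_cp = random.uniform(lo, hi)
    cost = best = modified_youden(cp, y, mu, sd)
    for _ in range(steps):
        cand = min(hi, max(lo, cp + random.gauss(0, 0.02)))   # local move
        c = modified_youden(cand, y, mu, sd)
        if c < cost or random.random() < math.exp(-(c - cost) / temp):
            cp, cost = cand, c                                # accept move
            if c < best:
                best_cp, best = cand, c
        temp *= cool                                          # cooling schedule
    return best_cp

# usage on a synthetic validation set (mu, sd would come from GPR/RVM)
mu = np.sin(np.linspace(0, 2 * np.pi, 200)); sd = np.full(200, 0.1)
y_val = mu + 0.1 * np.random.default_rng(2).standard_normal(200)
print(sa_optimal_cp(y_val, mu, sd))
```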
Online testing
At the online testing stage, a sliding window is constructed by autocorrelation analysis, and the new samples are gradually added into the sliding window. The predicted mean value and variance for a new sample are obtained by a one-step-ahead prediction model. Then, the PI is constructed effectively to label the new sample with the optimal CP. By conducting these steps repeatedly, online testing is realized continuously.
It can be summarized from Figure 4 that the CP is optimized based on historical data. Moreover, only normal samples are used to obtain the optimal CP. In other words, our proposed framework is semi-supervised: failure patterns are not required in the training phase, and the method remains effective even when new failure patterns appear. This is particularly meaningful for industrial applications, especially in aerospace, where a large amount of normal data can be collected and the monitored data change very slowly. In this situation, the hyper-parameters and CP optimized offline have strong applicability. When the normal pattern of the monitored data is strongly time-varying, the hyper-parameter and CP optimization must be updated incrementally, which is not the focus of this work.
In the following subsections, the CP optimization is described in detail, including the analysis of some PI performance indexes, the design of ROC-PI, and its optimization.
Performance Estimation Indexes of PI
Only a limited number of indicators have been developed to quantitatively evaluate the performance of a PI [40,41]. Suppose that $y = \{y_1, y_2, y_3, \ldots, y_n\}$ is the testing target series and $n$ is the testing size. For the $i$th testing input, the PI of $y_i$ is $[L_i, U_i]$, where $L_i$ and $U_i$ are the lower and upper bounds of the PI, respectively. The related indicators are described as follows.
1. PI coverage probability (PICP). PICP, also called the PI confidence level, is the probability that the testing targets lie within the PI provided by the prediction model [42]. PICP is derived by Equation (22):

$$\mathrm{PICP} = \frac{1}{n}\sum_{i=1}^{n} c_i, \qquad c_i = \begin{cases} 1, & y_i \in [L_i, U_i] \\ 0, & \text{otherwise} \end{cases} \tag{22}$$

where $c_i$ takes only two values (0 and 1). Normally, a higher PICP corresponds to a lower false alarm rate; ideally, PICP should be very close to 1.
2. PI normalized average width (PINAW)
PINAW, also called the normalized mean prediction interval width (NMPIW), measures the width of the PI and is defined by Equation (23):

$$\mathrm{PINAW} = \frac{1}{n\,r}\sum_{i=1}^{n}\left(U_i - L_i\right), \qquad r = y_{\max} - y_{\min} \tag{23}$$

i.e., PINAW is the mean PI width normalized by the range of the testing targets. For anomaly detection, a PI that is too wide is useless for detecting anomalies. A similar indicator, the PI normalized root-mean-square width (PINRW) [43], has also been designed for performance estimation and is not described in detail here. From the definitions of PICP and PINAW, it is easy to see that they are two competing indicators: increasing PICP widens the PI and thus increases PINAW, and likewise a larger PINAW yields a better PICP. For anomaly detection in particular, 1 - PICP is the false alarm rate, whose best value is 0, while PINAW governs the detection ability of the PI: the smaller it is, the better anomalies can be detected. Therefore, a smaller PINAW and a larger PICP are both desirable when constructing PIs [42]. The coverage-width-based criterion (CWC) [41] was proposed to balance PICP against PINAW and is defined by Equation (24):

$$\mathrm{CWC} = \mathrm{PINAW}\left(1 + \sigma(\mathrm{PICP})\, e^{-\eta(\mathrm{PICP} - \mathrm{CP})}\right) \tag{24}$$

where $\sigma(\cdot)$ is the sigmoidal function, CP is the prior coverage probability set by the user, and $\eta$ is the controlling parameter that penalizes a PICP smaller than CP. Theoretically, PICP should be close to or larger than CP. For prediction-based anomaly detection, the mean and variance of a new sample are produced by the trained prediction model, so PINAW depends only on the adjustable CP; in other words, the performance of the PI can be measured through CP and PICP. Although CWC balances the PI width against PICP, it is not effective for anomaly detection. For example, suppose the candidate range of CP runs from 90% to 100%. Normally, the 90% PI has a better CWC, because a 90% CP yields a smaller PINAW, while the PICP corresponding to a 90% CP is generally larger than 90% (the growth rate of PICP usually slows as CP increases). In this case, CWC is invalid for determining the optimal CP for anomaly detection. Therefore, the definition of the ROC curve is taken as the basis of this work.
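For reference, the three indicators can be sketched as follows (NumPy assumed; the sigmoid form and the value of η in CWC follow the description above and [41,42], and are our assumptions where the original display equations were lost):

```python
# Sketch of the PI quality indicators of Equations (22)-(24).
import numpy as np

def picp(y, lower, upper):
    return np.mean((y >= lower) & (y <= upper))          # Equation (22)

def pinaw(y, lower, upper):
    return np.mean(upper - lower) / (y.max() - y.min())  # Equation (23)

def cwc(y, lower, upper, cp, eta=50.0):
    p, w = picp(y, lower, upper), pinaw(y, lower, upper)
    sig = 1.0 / (1.0 + np.exp(-(p - cp)))                # sigmoidal term
    return w * (1.0 + sig * np.exp(-eta * (p - cp)))     # Equation (24), one common form
```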
Receiver Operating Characteristic (ROC) Curve of Prediction Interval
A ROC curve [33,44] is a plot that depicts the trade-off between sensitivity and (1-specificity) across a series of cut-off points. One example of a ROC curve is shown in Figure 5.
As shown in Figure 5, the properties of a ROC curve can be summarized as follows.

1. The horizontal axis is the false positive rate (FPR), the proportion of negative samples incorrectly labeled positive. FPR ranges from 0 to 1; ideally, FPR equals 0.
2. The vertical axis is the true positive rate (TPR), which also ranges from 0 to 1. Ideally, TPR equals 1.
3. With the ROC curve, two or more classification methods can be visually compared in one figure.
In practice, there are not enough anomalous samples in the training step, so the ROC curve cannot be applied directly. Therefore, in this paper, a new indicator (ROC-PI) is designed on the basis of the ROC curve: the original vertical axis of sensitivity is replaced by PICP (which indicates the rate at which testing samples are covered), and the horizontal axis is replaced by the set CP (which represents the performance in terms of PINAW). One example of a ROC-PI curve based on an RVM model is given in Figure 6. As shown in Figure 6, the properties of ROC-PI are as follows.
1. PICP is the rate at which normal data are labeled normal, and a larger PICP means better performance. CP is the set prior coverage probability of the normal distribution. If the prediction model describes the distribution of each new sample well, PICP should be greater than or equal to CP; ideally, a point close to (1, 1) indicates better performance.
2. In general, at the initial stage of CP growth, more data are quickly covered, so PICP is larger than CP. However, at the late stage of CP growth, few points remain beyond the corresponding PI, so CP grows faster than PICP. Therefore, the point where PICP equals CP performs best within the effective range of CP.
3. Like the ROC curve, the ROC-PI can be applied to compare the performance of different models. In addition, the area under the ROC-PI curve has a meaning similar to the area under the curve (AUC). The diagonal line describes PICP equal to CP; in general, the ROC-PI lies above the diagonal.
The Youden index is defined as the difference between TPR and FPR and has been applied to select the optimal point on the ROC curve [45]. Based on the above analysis, the ROC and ROC-PI curves behave similarly, so the Youden index can be modified to suit this work.
On the basis of the Youden index definition, the difference between PICP and CP can also be used as a performance estimator. It is worth noting that CP is the set coverage probability of the PI, while PICP reflects its posterior coverage probability; ideally, PICP should be very close to or greater than CP. As shown in Figure 6, with increasing CP the difference between PICP and CP first becomes larger; in other words, most of the available samples are gradually covered by the constructed PI. Conversely, the difference becomes smaller at the late stage of CP growth, and may even become negative, because a further increase in PICP would require a significant widening of the PI. Accordingly, the optimal CP has the minimum absolute difference between PICP and CP. Namely, the evaluation function of PI performance is the modified Youden index defined by Equation (26):

$$J = \left|\mathrm{PICP} - \mathrm{CP}\right| \tag{26}$$
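The evaluation function itself is a one-liner; a sketch (the function name is ours):

```python
# Modified Youden index of Equation (26): the optimal CP minimizes the
# absolute gap between the posterior coverage (PICP) and the set CP.
def modified_youden(picp_value, cp):
    return abs(picp_value - cp)
```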
Optimization of the Coverage Probability of PI
For probabilistic prediction models (e.g., GPR and RVM), the prediction output defines a series of confidence values ordered from small to large, and the PI with confidence value $\alpha$ is defined as

$$\mathrm{PI}^{\alpha} = \left[\hat{y}_i - \gamma^{1-\alpha/2}\,\sigma_i,\; \hat{y}_i + \gamma^{1-\alpha/2}\,\sigma_i\right]$$

where $\gamma$ is the quantile value corresponding to its superscript.
Then, the normal range for a new sample can be described by $\mathrm{PI}^{\alpha}$. For normal distributions, $\mathrm{CP} = 1 - \alpha$. As the PI is an estimate of an interval within which one or more future observations will fall given what has already been observed, $\mathrm{PI}^{\alpha}$ indicates that a new observation will fall into the PI with probability $1 - \alpha$. For example, for $\alpha = 5\%$, $\mathrm{PI}^{0.05}$ means that a new observation falls into the PI with a probability of 95%. Obviously, with decreasing $\alpha$, the PI becomes wider and the probability of the PI covering a new observation becomes larger.
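Under the Gaussian assumption, CP and the quantile are interchangeable through the standard normal CDF; a sketch using SciPy (the function names are ours):

```python
# CP <-> two-sided Gaussian quantile, consistent with CP = 1 - alpha.
from scipy.stats import norm

def cp_to_quantile(cp):
    return norm.ppf(0.5 + cp / 2.0)   # e.g. cp=0.95 -> 1.96, cp=0.90 -> 1.645

def quantile_to_cp(z):
    return 2.0 * norm.cdf(z) - 1.0
```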
In this work, ROC-PI is proposed to describe the performance of the PI under different CPs, and the Youden index is modified to optimize the CP. In principle, the ROC-PI curve can be examined directly to select the optimal CP, but this requires computing PICP values under a whole series of CPs. In practice, a small CP cannot achieve a good PICP under the assumption that the prediction model describes the distribution of new samples well. Furthermore, since the modified Youden index is not an analytic formula, it cannot be optimized by the gradient descent method. Therefore, the CP optimization is realized by the SA optimization technique, which has been used to solve this type of problem [46]. The SA algorithm randomly explores the neighborhood of the current solution, escaping local minima by accepting, with a certain probability, new solutions that worsen the cost function; this probability is controlled by a parameter called the cooling temperature.
The training data set is divided into two sets, the training set and the validation set, which are used to train the prediction model and to optimize the CP, respectively. The detailed training procedures of the GPR and RVM models are given in Sections 2.2 and 2.3, so this section only gives the pseudocode of CP optimization with the SA algorithm, shown in Figure 7. In Figure 7, the evaluation function is the modified Youden index. Since the PI is described by a normal distribution, each quantile corresponds to a specific CP; e.g., 1.96 is the quantile corresponding to a CP of 95%, and 1.65 corresponds to 90%. Thus, we can search for the quantile that minimizes the evaluation function; the CP related to this quantile is then optimal for our task. The cooling temperature, which ranges from T_s to T_end, is set to allow uphill moves in the early iterations of the optimization, and the decay scale (DS) controls the cooling speed. In addition, the Metropolis step factor (SF) is applied to generate a new quantile through random perturbation. At each iteration, a new quantile is generated within the set range and PIs are constructed for it; the optimal CP, together with the minimum value of the modified Youden index, is the output of the algorithm.
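The following sketch re-expresses the Figure 7 pseudocode in Python. Here `picp_fn` is a hypothetical callable that rebuilds the validation PIs for a quantile z and returns their coverage, and the acceptance rule is the standard Metropolis criterion (our reading of the pseudocode, not a verbatim transcription):

```python
# Simulated annealing over the quantile z (CP = 2*Phi(z) - 1), minimizing
# the modified Youden index |PICP - CP| on the validation set.
import numpy as np
from scipy.stats import norm

def sa_optimize_cp(picp_fn, z0=1.96, z_lo=1.4, z_hi=4.0,
                   T_s=15.0, T_end=1.0, DS=0.85, SF=0.25, seed=0):
    rng = np.random.default_rng(seed)
    cost = lambda z: abs(picp_fn(z) - (2 * norm.cdf(z) - 1))  # modified Youden index
    z, c, T = z0, cost(z0), T_s
    while T > T_end:
        z_new = np.clip(z + SF * rng.standard_normal(), z_lo, z_hi)  # Metropolis step
        c_new = cost(z_new)
        # accept downhill moves always; uphill moves with temperature-controlled odds
        if c_new < c or rng.random() < np.exp((c - c_new) / T):
            z, c = z_new, c_new
        T *= DS                                    # cooling by the decay scale
    return 2 * norm.cdf(z) - 1, z, c               # optimal CP, quantile, cost
```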
Experimental Results and Analysis
In this paper, the experimental validation is performed in two parts. First, two simulated data sets with injected anomalous samples are used to measure the anomaly detection performance of the proposed method. Then, some typical telemetry series are used to verify its practicality and effectiveness in real applications.
The metrics are false positive rate (FPR), false negative ratio (FNR), and accuracy (ACC).
FPR
FPR is the ratio at which normal data are falsely detected and rejected:

$$\mathrm{FPR} = \frac{\mathrm{FP}}{\mathrm{FP} + \mathrm{TN}}$$

where FP (false positive) is the number of normal data samples regarded as anomalies, and FP + TN (true negative) is the total number of normal data samples.
FNR
FNR is the ratio at which abnormal data are detected in error and accepted:

$$\mathrm{FNR} = \frac{\mathrm{FN}}{\mathrm{TP} + \mathrm{FN}}$$

where FN is the number of abnormal data points detected as normal, and TP + FN is the number of anomalous data points. Normally, smaller FNR and FPR imply better anomaly detection performance. Generally, the normal and anomalous classes are unbalanced; moreover, FNR and FPR are contradictory. In order to estimate performance effectively with one indicator, ACC is utilized, defined by Equation (30):

$$\mathrm{ACC} = \frac{\mathrm{TP} + \mathrm{TN}}{\mathrm{TP} + \mathrm{TN} + \mathrm{FP} + \mathrm{FN}} \tag{30}$$

where FP + FN + TN + TP is the total number of detected data and TP + TN is the number detected correctly. Namely, accuracy (ACC) is the ratio of correctly detected normal and anomalous data to the total detected data.
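From a confusion matrix, the three metrics follow directly; a sketch (labels: 1 = anomaly, 0 = normal; the function name is ours):

```python
# Sketch of the evaluation metrics defined above.
import numpy as np

def detection_metrics(y_true, y_pred):
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fpr = fp / (fp + tn)                      # normal data falsely rejected
    fnr = fn / (tp + fn)                      # anomalies falsely accepted
    acc = (tp + tn) / (tp + tn + fp + fn)     # overall accuracy
    return fpr, fnr, acc
```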
Experiments on Simulated Data Sets
In order to evaluate the anomaly detection performance, two typical series, Keogh_Data and Ma_Data, are used in this subsection. Since the amount and location of the injected anomalies are known, quantitative evaluation results can be given.
Keogh_Data is a simulated data set that has been used to test three anomaly detection algorithms (IMM, TSA-Tree, and Tarzan) in [47], and many studies have since adopted it to verify algorithm performance [48,49]. Therefore, two abnormal series injected into Keogh_Data are used to estimate the performance of our proposed method, named Keogh_Data 1 and Keogh_Data 2, respectively. Keogh_Data 1 is generated by Equation (31), where t = 1, 2, 3, ..., N, N = 800, and n(t) is white Gaussian noise with zero mean and standard deviation 0.1. In addition, e_1(t) reflects the customized abnormal mode defined by Equation (32). Keogh_Data 2 is defined by Equation (33), where e_2(t) is the injected abnormal mode defined by Equation (34). Additionally, Ma_Data is generated from a stochastic process that was used to test the SVR algorithm [50].
Here, n(t) is again white Gaussian noise with zero mean and standard deviation 0.1, and e_3(t) is simulated white Gaussian noise with zero mean and variance 0.5. Examples of Keogh_Data 1, Keogh_Data 2, and Ma_Data are shown in Figure 8. In Figure 8, for each simulated series, the blue line represents the normal points generated by the equation of Y1, while the points marked by a red star are anomalies defined by the equation of Y2.
The quantitative results based on the optimal CP as well as the default CPs (90% and 95%) are shown in Tables 1 and 2, where the metrics are the mean values of ten random experiments. As shown in Tables 1 and 2, the PIs with the optimal CPs detect anomalies better. For example, for Keogh_Data 1, the optimal CP for the RVM model is 97.16%, whose ACC is 94.80%, while the ACCs of the PIs with the default values of 90.00% and 95.00% are 87.20% and 91.40%, respectively (trailing zeros are added only to keep the number of significant digits consistent); the relative improvements are therefore 8.72% and 3.72%. Correspondingly, for Keogh_Data 2, the optimal CP for the GPR model is 99.93%, whose ACC is 96.60%, while the ACCs of the PIs with the default values of 90.00% and 95.00% are 94.20% and 95.80%, respectively. It is noted that for Ma_Data the optimal CPs are 95.00% for both the GPR and RVM models. In other words, to obtain the best detection performance for different series, the CP should be optimized rather than set to a default value.
Experiments on Normal Telemetry Series
When a spacecraft operates in orbit, sensor-based monitoring information is encoded and transmitted to the ground center; this is the only basis on which ground monitoring personnel can judge the working performance of the on-orbit spacecraft. Anomaly detection on these series is therefore very meaningful for enhancing the reliability and safety of spacecraft systems. Given that the orbit of a spacecraft is generally regular, together with the regular change of system working modes, some telemetry series show a pseudo-periodic property. So, in this subsection, some typical satellite telemetry series from power subsystems are used to verify the validity of our work. As analyzed for the ROC-PI curve in Section 3.4, PICP generally increases sharply at small CP while its rate of increase slows at larger CP; this analysis allows us to determine the optimal CP with the smallest difference between PICP and CP. Therefore, some normal satellite series are first applied to test the effectiveness of our analysis of the ROC-PI curves, and their ROC-PI curves are also used to validate the effectiveness of the CP optimization by the SA method.
The power subsystem of a satellite is mainly composed of a solar array-battery system, a charge regulating circuit, a discharge regulating circuit, and a shunt regulation circuit. The typical monitored quantities are current, voltage, and temperature. So, three types of satellite telemetry series (solar array current, battery voltage, and solar array temperature from the power subsystem) were selected as the test sequences, shown in Figure 9. These series were resampled at one-minute intervals.
The training data size for the three series was 1000, and the embedded dimension used to construct the input matrix was determined by autocorrelation analysis. Due to the periodic property of these series, generating the validation set by resampling was not appropriate; thus, we simply selected the last 500 samples as the validation set. The size of the testing data was set to 2000.
The covariance function of GPR is the squared exponential function defined by Equation (3), which is very common for prediction. The hyper-parameters of the mean function were set to zero, and the initial hyper-parameters of the covariance function were set to random values between 0 and 1. For the RVM model, the kernel is a Gaussian kernel with width 8, and the initial estimated hyper-parameters were (1/N)^2, where N is the training size. For the SA algorithm, the prediction quantile ranges from 1.4 to 4, which corresponds to CPs from 90% to 99%. The initial quantile (Z_opt) is 1.96, whose corresponding CP is 95%. The Metropolis step factor (SF) is 0.25, the decay factor (DS) is 0.85, the initial cooling temperature (T_s) is 15, and the end cooling temperature (T_end) is 1. For the RVM and GPR models, the ROC-PI curves with different CPs for these three telemetry series are shown in Figures 10-12. As shown in Figures 10-12, the optimal PIs differ between data sets. Moreover, compared with GPR, the PI of RVM is relatively narrow. The PICP generally increases quickly at small CP and more slowly at large CP (as shown in Figures 10 and 11). However, in Figure 12, the PICP of RVM is smaller than CP over the whole CP range from 90% to 99%, indicating that the PI constructed by RVM performs relatively poorly for the solar array temperature. Under these circumstances, the CP optimization by SA selects a better CP to improve the PI performance of the RVM model. The optimal CPs for the different series based on SA are given in Table 3. Compared with the enlarged views in Figures 10-12, the CP optimized by SA is highly consistent with the optimal values intuitively shown in the ROC-PI curves. The CP optimization aims to keep a low false rate with high detection performance given what has already been observed; therefore, it can provide an effective PI for the subsequent anomaly detection.
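A hedged sketch of this model setup using scikit-learn follows (an assumption: the paper's own implementation differs, e.g., scikit-learn optimizes hyper-parameters with L-BFGS rather than conjugate gradients, and the embedded dimension below is a stand-in, not the paper's value):

```python
# GPR with a squared exponential (RBF) covariance, zero prior mean, and random
# initial hyper-parameters in (0, 1), trained on a 1000-sample window of lagged
# inputs for one-step-ahead prediction.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF, WhiteKernel

def embed(series, d):
    """Build the lagged input matrix and one-step-ahead targets."""
    X = np.array([series[i:i + d] for i in range(len(series) - d)])
    return X, series[d:]

rng = np.random.default_rng(1)
series = np.sin(np.linspace(0, 60, 1500)) + rng.normal(0, 0.05, 1500)
d = 10                                           # stand-in for the autocorrelation choice
X, y = embed(series, d)

kernel = ConstantKernel(rng.uniform(0, 1)) * RBF(rng.uniform(0, 1)) \
         + WhiteKernel(rng.uniform(0, 1))        # random initial hyper-parameters
gpr = GaussianProcessRegressor(kernel=kernel)    # zero prior mean by default
gpr.fit(X[:1000], y[:1000])                      # 1000 training samples
mean, std = gpr.predict(X[1000:1500], return_std=True)
```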
Experiments on Telemetry Series with Anomalies
In Section 4.2, the performance of the SA-based CP optimization was verified on three normal series containing no anomalous samples. In order to verify the anomaly detection performance, one telemetry series collected by a temperature sensor on another satellite is introduced as the test series, shown in Figure 13. Some abnormal samples appear on April 11th. It is noted that these anomalous samples are not much larger than normal ones, so they cannot be effectively detected by a fixed threshold, which is usually set much higher than normal values.
The GPR and RVM models are applied to detect the anomalies with the optimal CPs. Here the embedded dimension is 37, determined by the autocorrelation analysis method; the other parameters are consistent with the settings in Section 4.1. The training set contains the samples from April 8th and 9th, the validation set the points from April 10th, and the test set the samples from April 11th to 13th. The detailed design is shown in Figure 14. The optimal CPs of GPR and RVM based on SA are 98.41% and 97.30%, respectively. For comparison, we also depict the ROC-PI curves in Figure 15. As shown in Figure 15, the PICP of RVM and GPR increases sharply at small CP; for example, the PICP of RVM reaches 0.7 at a CP of 0.2. Meanwhile, as CP increases further, PICP increases slowly. This verifies the effectiveness of our analysis of ROC-PI. Moreover, in our proposed method, the intersection of the ROC-PI curve and the line of equality corresponds to the optimized CP, which can be applied to realize anomaly detection. In Figure 15, the intersections of the ROC-PI curves for the GPR and RVM models are highly consistent with the optimal CPs of 98.41% and 97.30% derived by the SA algorithm. Thus, our proposed method effectively obtains the optimal CP for anomaly detection.
Based on the hyper-parameter optimization of GPR and RVM, as well as the CP optimization, these two models can be applied to the anomaly detection. The detection results are shown in Figures 16 and 17, and the quantitative results are given in Table 4. As shown in Figure 15, the PICP increases quickly at small CP and more slowly as CP approaches 1; in particular, the optimal CP of GPR is larger than that of RVM, consistent with the result of the CP optimization by the SA method. As shown in Figures 16 and 17, the GPR and RVM models can effectively detect the anomalous samples, and the PI with the optimal CP is wider than those with the default CPs, which allows better anomaly detection. The results in Table 4 further describe the superiority of the PI with the optimal CP. For example, the ACC of the GPR model with a CP of 98.41% is 98.96%, better than the 98.33% and 98.75% obtained for CPs of 90% and 95%. A similar conclusion holds for the RVM model. It is noted that the improvement is not large, as these anomalous samples are considerably larger than the normal data. Moreover, one evident difference is that the PIs of GPR widen at the anomalous indexes, while the PIs of RVM are not influenced by these anomalous samples. The main reason is that the RVM model computes a new prediction by projecting the original input into the relevance vector space, whereas the GPR model directly computes the covariance of the testing set with the training set. This means that for longer anomalous fragments, the RVM model is more robust than the GPR model; on the other hand, the GPR model is more effective when there are no anomalous samples. It is very meaningful that both the GPR and RVM models can detect the real anomalous samples: based on these alarms, ground personnel can send telecommands to moderate the temperature and improve battery reliability before a fatal failure occurs.
Results Analysis and Discussion
In the first experiment, it is noted that the missing rate of each method, even with the optimal CPs, is relatively high. The main reason is that we label single points rather than whole fragments. In the second experiment, three telemetry series are applied to evaluate the performance of the SA-based CP optimization. The optimal CP is highly consistent with the ROC-PI graphical indicator. Moreover, the ROC-PI curves show that the optimal CP differs between series; a default CP therefore cannot adapt to every series, and detection performance would inevitably suffer.
In the third experiment, anomaly detection on a real sensor series is realized. The anomalous samples are larger than normal, so the detection rate is 100% for all CPs. However, detection with the optimal CP obtains better performance with relatively lower false alarm rates. Thus, the PI with the optimal CP generalizes better to unknown testing samples. Obviously, the improvement in ACC is smaller than in the experiments on simulated data sets, mainly because these anomalous samples are much larger than normal.
Moreover, in this work, the distributions of the simulated data sets and the telemetry series are similar across the training, validation, and testing sets, so the validation sets are not generated by resampling methods (e.g., cross-validation, hold-out, or bootstrapping). In other words, if the data sets are insufficient or imbalanced, some resampling method should be applied to generate the validation set.
For the ROC-PI curve, two cases are not discussed in this work: the PICP reaching 1 at a relatively small CP, and the CP reaching 1 with a small PICP. Both indicate that the PI generated by the prediction model cannot describe the distribution of the real values, which is inconsistent with our hypothesis. In the first case, the smallest CP for which PICP equals 1 is the optimal CP; in the second case, the prediction model cannot be applied for detection at all. These phenomena may occur in real applications and should be handled specially.
Conclusions
The contributions of this work can be summarized as follows: (1) the graphical indicator ROC-PI is proposed for the first time to measure model performance under different CPs, depicting the trade-off between PI width and PI coverage probability across a series of cut-off points; (2) the CP is optimized through the modified Youden index with the SA algorithm; (3) the improved anomaly detection method based on probabilistic prediction models is used to detect anomalies; (4) the detection performance of GPR and RVM is compared and analyzed; (5) actual in-orbit satellite telemetry data are labelled effectively by the GPR and RVM models.
Some work remains to be conducted in the future: (1) more prediction models can be applied to demonstrate the universality of this method; (2) the hyper-parameters within the prediction model should be optimized with the cost function of anomaly detection; (3) in particular, anomaly detection for other types of unmanned aerial vehicles should be considered.
"Computer Science",
"Engineering",
"Environmental Science"
] |
A New Unsteady Flow Control Technology of Centrifugal Compressor Based on Negative Circulation Concept
Introduction
Since the invention of the centrifugal compressor, it has been widely used in military and civil fields, such as auxiliary power units for aircraft [1], propulsion systems for unmanned aerial vehicles or missiles [2,3], turbochargers for vehicles [4], and fuel cells for automobiles [5,6]. With the development of modern industrial systems and rising technical requirements, the demand placed on compressor performance is gradually increasing. At present, centrifugal compressors are developing towards higher pressure ratio, higher efficiency, and greater stability. Compressor design technique has already evolved from one-dimensional empirical formulas to full three-dimensional flow field analysis [7][8][9]. In addition, blade optimization measures are widely used, such as leading-edge swept blades [10,11], split blades [12,13], tandem blades [14,15], and back-swept blades [16]. In recent years, artificial intelligence has further optimized the compressor structure and blade profile; Ma et al. [17] used NSGA-III algorithms to optimize a centrifugal compressor in a fuel cell system, significantly improving its pressure ratio and efficiency. Beyond blade profile optimization, flow control technology is beginning to attract attention as a means to further improve compressor performance.
Most of the flow control techniques currently applied to centrifugal compressors are concentrated in the tip region, mainly for two reasons: first, devices installed in the blade or hub are usually limited by space because of the compact structure of centrifugal compressors; second, the flow field structure at the centrifugal compressor tip is extremely complex due to the effects of meridional curvature and centrifugal force, and it greatly influences compressor performance [18][19][20][21][22][23]. For precisely these two reasons, our subsequent research on and application of flow control is also established at the blade tip. At present, the techniques applied to the tip region of centrifugal compressors are various, among which the most widely used is casing treatment, including self-recirculation casing treatment (SRCT) [24], circumferential grooves [25], bleed slots [26], axial grooves [27], nonaxisymmetric casing treatment [28], and ported shrouds [29]. Apart from casing treatment, other control methods, such as air injection [30], suction [31], and blade tip winglets [32], are applied relatively less often. Although compressor performance can be improved to some extent by these flow control techniques, the results obtained are often inconsistent. Taking SRCT as an example, Wang et al. [33] improved operating efficiency by 1.5% while increasing the stable operating range of the compressor by 20%; however, in Jung and Pelton's research [34], SRCT increased the stable working range of the compressor by 25%, but the efficiency decreased by 0.4%. Taking circumferential grooves as an example, Bareiß et al. [25], after applying this flow control technology to a centrifugal compressor, found that circumferential grooves improved the compressor pressure ratio without affecting its efficiency under any circumstances. The circumferential groove method was also used by Park et al. [35], who found that this technology can increase the stall margin of the compressor but decreases its efficiency. Similar conclusions can be drawn from the review article [36]. Therefore, when these flow control technologies are applied in practice, many questions arise: Which control method is the most suitable? What is the essential difference between different control methods? Will the effect of the same flow control method change when applied to different compressors? How should the control parameters of the same flow control method be optimized for different compressors?
The basic reason for the above questions is the lack of sufficient understanding of the flow theory and flow control mechanisms of the centrifugal compressor. To address this problem, some researchers have carried out modeling research on the compressor based on mathematical models and theoretical analysis [37][38][39][40][41][42][43]. Although these theories have made some achievements, the following problems remain: (1) some theories treat the compressor as an integral unit, lacking the details of the flow field and its complicated physical phenomena; (2) many models are based on cascade research, which may not be applicable to centrifugal compressors; (3) these one-dimensional models lack further support in describing complex flow structures; (4) some of the theories only predict the simple actions of vortices and cannot describe their complex motions; (5) most importantly, it is unclear how to guide flow control technology based on these models.
Because the internal flow structure of the compressor is complex, a detailed and complete theoretical description is difficult, so we focus on the flow structures that play a key role in compressor performance. As mentioned above, the flow field in the tip region of the impeller is particularly important in the centrifugal compressor. In particular, the TLV (tip leakage vortex) often has a decisive influence on the performance of the centrifugal compressor [21][22][23][44]. Tomita et al. [45] found that the blockage caused by TLV breakdown is a major cause of compressor stall, which may occur even at low flow rates. Schleer et al. [46] believe that a large TLV will cause compressor stall and affect performance throughout the operating range. Cao et al. [22] found a Kelvin-Helmholtz-type instability of the shear layer formed between the mainstream flow and the tip leakage flow, which forms a TLV with strong unsteadiness and pressure fluctuations that reduce the blade loading. Kaneko and Tsujita [47] found that the TLVs formed by the main blade and split blade reduce the blade loading and compressor efficiency. Therefore, considering the important influence of the TLV on the performance of centrifugal compressors, our research focuses on the TLV. Our approach is to optimize the flow at the compressor blade tip by suppressing the TLV, thereby improving compressor performance. The specific steps are as follows. First, in order to obtain regular and generalised conclusions, we conducted theoretical research on the TLV by introducing a two-dimensional vortex model. Second, we investigated the behaviour of the TLV vortex model under some external factors to provide a reference for our subsequent flow control technology. Then, following the theoretical results, we proposed a flow control technology applicable to centrifugal compressors, which we call the negative circulation flow control (NCFC) method. Finally, we used unsteady numerical simulation to compare the performance of the centrifugal compressor with and without flow control.
Introduction of a Two-Dimensional TLV Model
Through many numerical simulations and experimental studies, it has been found that the cross-sectional profile of the compressor TLV has an obvious elliptical structure [48][49][50], so we use an elliptical vortex model to describe the TLV rather than the circular model often adopted by many researchers. The region where vorticity is concentrated in a two-dimensional inviscid flow field is called a vortex patch; for example, the core of a Rankine vortex is the simplest circular vortex patch, often used to analyze vortex structures. In general, the boundary shape of a vortex patch of arbitrary shape changes continuously as the patch moves. There is one special case: an elliptical vortex patch with uniform vorticity rotates at a constant angular velocity while remaining unchanged in shape; this type of vortex is called a Kirchhoff elliptical vortex, as shown in Figure 1.
The constant angular velocity is expressed by the following formula:

$$\Omega = \frac{\omega\, a b}{(a + b)^2} \tag{1}$$

where $a$ is the major semiaxis, $b$ is the minor semiaxis, and $\omega$ is the uniform vorticity.
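As a quick numeric illustration of Equation (1), the values below are illustrative only, not taken from the paper:

```python
# Self-rotation rate and period of a Kirchhoff elliptical vortex patch.
import numpy as np

def kirchhoff_omega(a, b, vorticity):
    return vorticity * a * b / (a + b) ** 2   # constant angular velocity, Equation (1)

a, b, w = 2.0, 1.0, 100.0                     # semiaxes and uniform vorticity (sample)
Omega = kirchhoff_omega(a, b, w)
print(Omega, 2 * np.pi / Omega)               # rotation rate and rotation period
```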
Based on the elliptical vortex model, we can divide the flow field inside the centrifugal compressor into two parts: inside the elliptical vortex, the vorticity is known and the velocity field can be determined; the outside of the ellipse is treated as an elliptical column immersed in the compressor passage flow. Finally, the two parts are combined to give the total velocity field. We applied this method previously in our research on a blade-divergent passage, with favourable outcomes [51].
2.1. Outside the Elliptical Vortex. The governing equation for the stream function outside the elliptical vortex is

$$\nabla^2 \Psi^{o} = 0 \tag{2}$$

In Cartesian coordinates $(x, y)$, the ellipse boundary is

$$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1 \tag{3}$$

whose unit outer normal vector can be written as $\mathbf{n} = (\partial y/\partial s, -\partial x/\partial s)$, where $s$ is the surface arc length measured in the counterclockwise direction. When the elliptical vortex rotates counterclockwise, the velocity of a point on the vortex boundary is $\mathbf{V}_b = \Omega\, \mathbf{e}_z \times \mathbf{r}$. Thus, the normal velocity is $\mathbf{V}_b \cdot \mathbf{n} = -\Omega r\, \partial r/\partial s$, where $r^2 = x^2 + y^2$. On the other hand, the normal velocity can also be expressed with the stream function as $\partial \Psi/\partial s$. Equating the two expressions, we obtain

$$\frac{\partial \Psi}{\partial s} = -\Omega r \frac{\partial r}{\partial s} \tag{8}$$

By integrating the above equation, we can derive the boundary stream function

$$\Psi_b = -\frac{\Omega}{2} r^2 + C \tag{9}$$

Then, elliptic coordinates are used for convenience of derivation:

$$x = c \cosh\xi \cos\eta, \qquad y = c \sinh\xi \sin\eta \tag{10}$$

where the elliptic focus satisfies $c^2 = a^2 - b^2$ and the ellipse border is $\xi = \xi_0$. According to the CFD results [48][49][50], the velocity at the vortex boundary is very close to the blade tip velocity owing to the viscous force and the relative motion of the shroud. Therefore, we assume in Equation (11) that the boundary velocity magnitude equals $U$, where $U$ is approximately the blade tip velocity.
Then, we substitute $a = c\cosh\xi_0$ and $b = c\sinh\xi_0$ into Equation (11) to obtain Equation (12). Since the rotational direction of the vortex is opposite to that of the blade, a minus sign is added in front of $U$. Combining Equations (9) and (12), we obtain the boundary stream function expression (13). Furthermore, the external stream function must simultaneously satisfy the governing Equation (2), the boundary condition of Equation (13), and the condition that the velocity at infinity is 0, so its solution has the form of Equation (14). Comparing Equation (13) with (14), we can determine the constant in Equation (15). Then, the external stream function becomes

$$\Psi^{o} = \frac{U a b}{c^2}\, e^{2\xi_0}\, e^{-2\xi} \cos 2\eta \tag{16}$$
Considering that the elliptical vortex has a circulation $\Gamma = \pi a b \omega$, i.e., the vortex flux passing through the cross-sectional area of the ellipse, the external stream function can be expressed as Equation (17), where the superscript "o" means external and $\omega$ is the vorticity.
2.2. Inside the Elliptical Vortex. The governing equation of the stream function inside the elliptical vortex is

$$\nabla^2 \Psi^{i} = -\omega \tag{18}$$

The solution of the stream function has the form

$$\Psi^{i} = A x^2 + B y^2 + C \tag{19}$$

where the superscript "i" means internal. By plugging it into Equation (18), we get

$$2(A + B) = -\omega \tag{20}$$

The undetermined constants can be derived from the continuity of the normal and tangential velocity components on the vortex boundary. From the continuity of the normal velocity component (Equation (21)), one relation is determined according to Equations (3), (8), and (21), namely Equation (22); from the continuity of the tangential velocity (Equation (23)), which holds for any value of $\eta$, we get another relation, Equation (24). Combining Equations (20), (22), and (24), we obtain the undetermined constants in Equation (25) and, finally, the internal stream function in Equation (26).
Modeling Analysis of TLV under Imposed Strain
The flow field structures inside the compressor are complex; even a seemingly simple dynamic behaviour of the TLV may result from the combined effects of many different factors. Therefore, our approach is to use a theoretical expression to describe the overall behaviour of the TLV.
Although a particular state of motion may be a combination of many factors, the theory used to describe it should be concise and clear. Our specific approach is to apply some typical external loads, which can be expressed by mathematical models, to the above TLV model, and to discuss the specific behaviour of the model under these loads. We expect that this approach will help us understand the effects acting on the TLV and identify which factors affect its behaviour, such as other flow field structures or the configuration of the compressor itself, thereby guiding appropriate flow control measures to improve compressor performance. Four external strain effects are considered in this paper: the "Flow Passage Constriction Effect," the "Passage Vortex Squeeze Effect," the "Leakage Flow Translation Effect," and the "Additional Circulation Effect." As detailed introductions to the first three were given in our previous research [48], this paper only briefly describes them and mainly introduces the "Additional Circulation Effect."
3.1. Flow Passage Constriction Effect. To describe the "Flow Passage Constriction Effect," we use a strain flow function (Equation (27)), where $e$ is a strain rate strength determined here by the geometry of the flow passage. Applying it to the TLV model, we obtain Equation (28), where $\varepsilon = a/b$ is the axis ratio of the ellipse. The negative sign in the formula only reflects the rotational direction of the vortex, so the absolute value of the result better reflects the intensity ratio of the external factor to the vortex. Further analysis shows that the minimum value of the ratio of the external factor to the vortex intensity is $1/(\varepsilon + 1/\varepsilon + 2)$, depending on the axis ratio of the elliptical TLV, and there is no maximum value. This means that even a flow passage with small meridional curvature could potentially have a significant constricting effect on the TLV.
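A two-line numeric check of how this lower bound varies with the axis ratio (illustrative values only):

```python
# Minimum strain-to-vorticity ratio of the constriction effect, per the bound
# 1/(eps + 1/eps + 2) stated above, for a few axis ratios eps = a/b.
for eps in (1.0, 2.0, 4.0, 8.0):
    print(eps, 1.0 / (eps + 1.0 / eps + 2.0))
```

The bound shrinks as the ellipse elongates, so even weak strain can matter for a stretched TLV.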
3.2. Passage Vortex Squeeze Effect. The "Passage Vortex Squeeze Effect" can be described by a flow function (Equation (29)), where $e$ is determined here by the strength of the passage flow. Similar to the previous treatment, applying it to the TLV model gives Equation (30), whose value ranges from $-1/[2(\varepsilon + 1)]$ to $-1/[2(1/\varepsilon + 1)]$. The negative sign indicates that the effect of the passage vortex on the TLV is opposite to our setting, meaning that the passage vortex actually produces a suction effect rather than a squeeze effect on the TLV. Therefore, we believe that the passage vortex inside the compressor can increase the effective area of the TLV, thus accelerating the deterioration of the flow field.
3.3. Leakage Flow Translation Effect. To express the "Leakage Flow Translation Effect," we employed a simplified linear flow function (Equation (31)), where $e$ is a constant determined by the strength of the leakage flow. When it acts on the TLV model, the ratio of the leakage flow intensity to the TLV intensity can be obtained (Equation (32)); this formula, too, only gives a minimum value, $ab/[2(2b + a)]$. As the leakage flow is a necessary condition for producing the TLV, it is easy to understand that the ratio of its strength to the TLV strength has only a minimum value.
3.4. Additional Circulation Effect. As mentioned above, the behaviour of the TLV may be a combination of several factors, including the additional circulation effect. Figure 2 illustrates the generation mechanism of the "Additional Circulation Effect." As shown in the figure, the direction of the leakage flow and of the viscosity effect caused by the passage vortex is the same as the direction of rotation of the TLV. Therefore, the leakage flow and the passage vortex strengthen the TLV, as if an additional circulation were imposed around it; hence the name "Additional Circulation Effect." If we set the coordinate origin at the center of the elliptical vortex, a point vortex model can be used for the additional circulation, with the stream function

$$\Psi_{ace} = -\frac{\Gamma_0}{2\pi} \ln r$$

where the subscript "ace" means "Additional Circulation Effect" and $\Gamma_0$ is the intensity of the additional circulation, determined by the leakage flow and the passage vortex. According to the continuity condition and the TLV model, we obtain Equations (34) and (35). Multiplying Equation (34) by 2 and adding it to Equation (35) to eliminate the second term, the resulting expression gives the range of $\Gamma_0/\omega$. It can be seen that the "Additional Circulation Effect" depends on the long and short axes of the elliptical vortex. By analyzing the minimum value of $\Gamma_0/\omega$, we find that it is always greater than 1 as long as $b/a$ is not extremely small (corresponding to an extremely elongated ellipse). We can therefore conclude that in most cases the value of $\Gamma_0/\omega$ is greater than 1, which means that when the "Additional Circulation Effect" acts on the TLV, its influence can even exceed that of the TLV itself. An extremely narrow elliptical vortex can be generated only at a very small tip clearance; of course, in this case the leakage flow generated by the small gap will also be small, and the corresponding "Additional Circulation Effect" may be weakened at the same time. For most centrifugal compressors the clearance does not reach this situation, so we do not discuss this extreme case.
A Negative Circulation Flow Control (NCFC) Method
From Equations (28) and (32), it can be seen that the intensity ratios of the "Flow Passage Constriction Effect" and the "Leakage Flow Translation Effect" to the TLV have only minimum values, which means that these two effects are necessary conditions for affecting the TLV. Therefore, from a flow control point of view, we are not willing to spend much energy on controlling them. For the "Passage Vortex Squeeze Effect," if we take a conventional ellipse with axis ratio $\varepsilon = 2$, the maximum intensity ratio of the "Passage Vortex Squeeze Effect" to the TLV obtained from Equation (30) is 1/3. Comparing this value with the minimum value of $\Gamma_0/\omega$ (Equation (38)), which is greater than 1 in most cases, we can conclude that the influence of the "Additional Circulation Effect" on the TLV is much greater than that of the "Passage Vortex Squeeze Effect." Therefore, a proper flow control method should weaken or, better, inhibit the "Additional Circulation Effect." This is why we propose the negative circulation flow control (NCFC) method: its essence is to create an effect opposite to the "Additional Circulation Effect" in order to weaken the TLV. Figure 3 shows an NCFC device mounted on the shroud. A vortex generator that creates negative circulation, opposite in direction to the TLV, is located in a tube connected to a hole near the starting position of the TLV on the shroud. A certain number of such devices can be placed around the circumference of the casing, depending on the structure of the compressor.
Due to the relative motion between the NCFC devices and the blades, the NCFC method achieves unsteady flow control as the impeller rotates. Thus, the NCFC method has the two characteristics of unsteadiness and negative circulation. Its unsteady control frequency is

$$f_c = \frac{\omega_r}{60}\, N_c$$

where $\omega_r$ is the rotational speed (RPM) and $N_c$ is the number of NCFC devices. Similarly, the blade passing frequency (BPF) is

$$f_{BPF} = \frac{\omega_r}{60}\, N_b$$

where $N_b$ is the number of blades. From references [52][53][54][55], the unsteady fluctuation frequency of the TLV ($f_T$) is approximately between 40% and 100% of the BPF ($f_T \approx 0.4 \sim 1.0\, f_{BPF}$). On the other hand, researchers have found that unsteady flow control performs well when its frequency is close to the frequency of the controlled object [56][57][58]. Therefore, we require $f_c \approx f_T$. Because this article mainly focuses on the feasibility of the NCFC method, we choose $f_c \approx f_{BPF}$ here for simplification, which means $N_c = N_b$. Other frequencies can be studied in a similar way.
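With the blade count and rotational speed reported later in this paper (N_b = 10, 80,000 RPM), the frequency matching reduces to a one-line calculation (a sketch; the variable names are ours):

```python
# Control frequency versus blade passing frequency for N_c = N_b.
rpm, N_b = 80_000, 10          # rotational speed and blade count from the paper
f_bpf = rpm / 60 * N_b         # blade passing frequency, Hz
N_c = N_b                      # one NCFC device per blade passage
f_c = rpm / 60 * N_c           # unsteady control frequency, Hz
print(f_bpf, f_c)              # both about 13,333 Hz, i.e. f_c matches the BPF
```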
The Numerical Simulation Methods
The investigated microcentrifugal compressor is used in a 30 kW distributed power generation system. The impeller consists of 10 pairs of main and split back-swept blades ($N_b = 10$). The meridional channel shape is determined by a cubic Bezier curve. Table 1 shows the main geometric and aerodynamic parameters of the compressor.
To simulate the changing relative position of the NCFC devices and the blades as the impeller rotates, two very thin transition layers (layer 1 and layer 2 in Figure 4) are added between the NCFC device computational domain and the shroud. Figure 4 shows the computational grid of the compressor and the data exchange settings between the different computational domains. The total thickness of the additional layers is 0.075 mm, which is 1/8 of the tip clearance height of 0.3 mm. A full nonmatching (FNM) connection is used between the shroud and layer 1; the calculation domains of the impeller and layer 1 are solved in the rotating coordinate system, because FNM requires the coordinate systems on both sides to be consistent but does not require mesh matching. FNM is also set for data exchange between the NCFC device and layer 2, and these two domains are solved in the stationary coordinate system. Therefore, a rotor-stator (R-S) interaction is used to link layer 1 and layer 2.
Considering the relative motion between the NCFC device and the impeller, we adopt an unsteady numerical simulation of a single blade passage with Numeca/Fine. Since we use the same unsteady control frequency as the BPF, the number of NCFC devices equals the number of blades $N_b$, so one blade passage corresponds to one NCFC device. The total number of grid cells in the compressor calculation domain without flow control is approximately 900,000; the additional meshes for layer 1, layer 2, and the NCFC device add approximately 30,000 cells. The thickness of the first mesh layer near the wall is set to 0.001 mm. The Reynolds number is approximately $3.1 \times 10^5$, and the dimensionless wall distance $y^+$ varies from 1 to 3.
The time step is determined by the number of angular positions, which is set to 30 points; therefore, according to the rotational speed, the time step is $2.5 \times 10^{-6}$ s with 30 inner iterations. The domain scaling method is used for the rotor/stator interface. The rotational speed is 80,000 RPM; the impeller inlet is set to a total pressure of 101,325 Pa and a total temperature of 293 K, and the given static pressure at the outlet of the NCFC device is 101,325 Pa. The solid walls are set to the adiabatic no-slip condition, and the Spalart-Allmaras (S-A) model is employed as the turbulence model. If the compressor performance parameters change periodically, the calculation is assumed to be convergent. By continuously increasing the static pressure at the impeller outlet, the calculation eventually diverges; the state before divergence is defined as the near-stall condition. We have validated the effectiveness of the numerical simulation method on a low-speed commercial centrifugal compressor, shown in Figure 5. Figure 6 shows the comparison of the experimental and calculated results of this compressor at 100% and 80% design speed (the red circles corresponding to the experimental points are large, reflecting the error band). From the characteristic diagram, the numerical results agree well with the experimental results; therefore, the numerical simulation method used in this article is considered credible. More detailed experimental information can be found in references [53,57].
Compressor Performance with and without NCFC Devices
6.1. Overall Performance. For a comparative analysis, we simulated the compressor in four configurations: noncontrolled (NC), hole control (HC; holes of the same size and number as the NCFC devices are placed on the shroud), NCFC, and reverse NCFC (R-NCFC, generating a vortex opposite to that of the NCFC, as shown in Figure 4). The flow field in one blade passage period after calculation convergence is selected for analysis. The performance of the time-averaged flow field of the compressor over this period is shown in Figure 7. The figure shows that at high flow rates, these control methods have little effect on compressor efficiency, whereas over a wide range of flow rates including the design point (mass flow of 0.36 kg/s) they have a significant impact. In particular, the NCFC method significantly improves the efficiency of the compressor, with a maximum efficiency increase of 0.95%. Compared with the uncontrolled compressor, the efficiency with the HC method hardly changes, while the R-NCFC method reduces the efficiency by about 0.05%. It is therefore obvious that the NCFC method is optimal among these methods. As mentioned above, the NCFC method has the advantages of both unsteadiness and the negative circulation effect, whereas the HC method has only the unsteadiness characteristic.
We can therefore conclude that the negative circulation effect is beneficial for flow control; the fact that a reverse negative circulation effect (the R-NCFC method) actually reduces the efficiency of the compressor further supports this conclusion. Further observation of the compressor pressure characteristics shows that after applying these flow controls, the pressure ratio of the compressor does not change much: the pressure ratio of the controlled compressor decreases slightly, with a maximum decrease of about 0.05. The relative stall margin is defined as

$$SM_{rel} = \left(\frac{\pi_{cs}}{\pi_{ocs}} \cdot \frac{m_{ocs}}{m_{cs}} - 1\right) \times 100\%$$

where $\pi_{cs}$ and $m_{cs}$ are the total pressure ratio and mass flow rate of the compressor near the stall condition with control, and $\pi_{ocs}$ and $m_{ocs}$ are those near the stall condition without control. The $SM_{rel}$ is 6.9%, 5.2%, and -7.3% for the compressor with NCFC, HC, and R-NCFC, respectively. From Figure 7, it can also be seen that the operating ranges of the compressor under the NCFC and HC conditions are larger than those under the NC and R-NCFC conditions.
6.2. Flow Field Analysis of the Centrifugal Compressor. For this analysis, we use the same flow field data over one blade passing period as above. Figures 8-11 show the pressure distribution at 95% of the blade height of the compressor under no control and the three different controls. These cases are selected from the calculation point before the stall point, as shown by the dotted line in Figure 7; their mass flows and total pressure ratios are almost the same, so we consider the comparison credible. In addition, in order to present a good comparison, we denote the instants at which the blade tip high-pressure zone reaches its minimum and maximum extent as T0 and T3, respectively, under the different operating conditions. Moreover, to eliminate the differences between the compressor operating conditions and ensure comparability, a dimensionless static pressure coefficient CP is used, defined in terms of $p_t$, the transient static pressure, and $p_{inlet}$, the average static pressure at the inlet.
From Figure 8, it can be seen that the blade tip flow field of the uncontrolled compressor exhibits pressure fluctuations: the high-pressure zone reaches its maximum at T3 and then gradually decreases until T5, showing a fairly regular periodicity. From references [50,53,54], the formation and fluctuation of the high-pressure zone are mainly caused by the TLV. When the NCFC method is applied (Figure 9), the high-pressure zone at the blade tip is significantly reduced; from T0 to T5 there is almost no high-pressure zone compared with NC, which means that the flow passage blockage effect at the blade tip is weakened. Figure 12 shows the variation of the static pressure coefficient at two points on the blade tip during one blade passing period. It is clear from the figure that the NCFC curve is smoother than those of the other schemes, indicating that its pressure change has better continuity. For a better quantitative understanding of the pressure fluctuations, the variance of the static pressure coefficient at one point is defined as

$$S^2 = \frac{1}{N}\sum_{i=1}^{N}\left(CP_i - \overline{CP}\right)^2$$

where $CP_i$ is the instantaneous static pressure coefficient and $\overline{CP}$ is the average static pressure coefficient. Table 2 shows the variances of the static pressure coefficients at points A and B of Figure 12 under the different schemes. Combining the curves in Figure 12 with the data in Table 2, it can be concluded that the NCFC scheme has great advantages for pressure stabilisation.
Figure 10 shows the compressor with the HC method. It can be seen that only a small high-pressure zone appears at T2-T4. In addition, comparing the HC method with the NCFC and NC cases shows that the HC method is less effective at stabilising the pressure fluctuations. The R-NCFC method gives the worst flow field (Figure 11): the overall characteristics of the flow field are similar to those of the NC case, but the area of the high-pressure zone is larger than in NC, which means that the blockage effect increases.
Based on the above results, the NCFC method can effectively stabilize the blade tip flow field of the compressor, thus improving the stable operating range and efficiency of the compressor without affecting the total pressure ratio. The HC method also gives good results and is very useful for increasing the stall margin of the compressor. However, an excitation opposite to the mode of action of the NCFC, namely R-NCFC, deteriorates the flow field at the compressor blade tip and degrades the performance of the compressor.
The HC method uses the unsteady effect created by the relative motion of the impeller and the hole to realize periodic excitation; however, the efficiency of each individual excitation is not taken into account. The NCFC method not only exploits the same unsteady property as HC but also considers the efficiency of each excitation: the concept of negative circulation is used to force momentum exchange in the original flow field. Therefore, from the perspective of flow control, we propose strengthening the efficiency of each unsteady excitation, which is very beneficial for improving the overall effect of unsteady control.
Conclusions
In order to improve the performance of the centrifugal compressor, we carried out a theoretical study of the compressor TLV and proposed a two-dimensional vortex model. Through the analysis of this model, we put forward an approach based on the concept of negative circulation control and verified it by numerical simulation. The conclusions are as follows:
(1) The Kirchhoff elliptical vortex is introduced to represent the compressor TLV. A point vortex is added to the two-dimensional model to reveal an additional circulation effect caused by the leakage flow and the viscous effect of the passage vortex. The results show that the additional circulation effect exceeds that of the TLV itself in most cases and is also larger than the passage vortex squeeze effect.
(2) A negative circulation flow control device is designed and verified by numerical simulation. The NCFC method is found to greatly stabilize the flow field at the blade tip and to improve the stall margin and efficiency of the compressor without affecting its total pressure ratio.
(3) The effect of NCFC is better than that of HC. The former utilizes both unsteady excitation and the injection of negative circulation to improve momentum exchange, whereas the latter uses only the unsteady effect. It is therefore highly recommended to improve the efficiency of each unsteady jet/suction and separation flow interaction.
Nomenclature
A: Undetermined constant
a: Long axis
B: Undetermined constant
b: Short axis
C: Undetermined constant
CP: Dimensionless static pressure coefficient
\overline{CP}: Average static pressure coefficient
CP_i: Instantaneous static pressure coefficient
c: Elliptic focus
e: Strain rate strength
e_x: Unit vector in the x direction
e_y: Unit vector in the y direction
e_z: Unit vector in the z direction
N_c: Number of NCFC devices
n: Unit outer normal vector
n_x: x component of the unit outer normal vector
n_y: y component of the unit outer normal vector
p_t: Transient static pressure
p_inlet: Average static pressure at the inlet
r: Vector distance from the origin
r: Distance from the origin
SM_rel: Relative stall margin
s: Surface arc length
U: Main flow velocity
V: Strain rate matrix
V_b: Velocity of a point on the vortex border
x: x coordinate
y: y coordinate
Γ: Circulation of the elliptical vortex
Γ_o: Circulation of the point vortex
Figure 1: Schematic diagram of the Kirchhoff elliptical vortex model.
Figure 4: Calculation grid of the compressor and the data exchange settings.
Figure 5: (a) The microcentrifugal compressor used in the experiment; (b) the complete experimental setup, consisting of a trumpet-shaped intake, a throttle plate, and pressure tubes.
Figure 7: Time-averaged compressor performance over one blade period: (a) compressor efficiency comparison; (b) compressor total pressure ratio comparison.
Figure 12: Static pressure fluctuations in one blade passing period.
F: Genetic function
f_BPF: Blade passing frequency
f_c: Unsteady control frequency
f_T: Unsteady fluctuation frequency of the TLV
m_cs: Mass flow rate of the compressor near the stall condition with control
m_ocs: Mass flow rate of the compressor near the stall condition without control
N_b: Number of blades
Table 1: Main parameters of the microcentrifugal compressor in this study.
Table 2: Variance of the static pressure coefficients under different schemes.
"Engineering",
"Physics"
] |
An Iterative Nonlinear Filter Using Variational Bayesian Optimization
We propose an iterative nonlinear estimator based on the technique of variational Bayesian optimization. The posterior distribution of the underlying system state is approximated by a tractable variational distribution, approached iteratively via evidence lower bound optimization subject to a weighted Kullback-Leibler divergence penalty, where the penalty factor adjusts the step size of the iteration. Based on linearization, the iterative nonlinear filter is derived in closed form. The performance of the proposed algorithm is compared with several nonlinear filters from the literature using simulated target tracking examples.
Introduction
Bayesian estimation is widely applied across many areas of engineering, such as target tracking, aerial surveillance, intelligent vehicles, and machine learning [1]. In linear Gaussian systems, optimal state estimation is achieved by the Kalman filter as a closed-form solution. However, many real-world estimation problems are nonlinear, resulting in an analytically intractable posterior probability density function (PDF) for the state. Consequently, suboptimal approximation methods are sought to solve nonlinear estimation problems [2].
Many suboptimal techniques have been developed to solve nonlinear estimation problems; they may be divided into three categories. The first category, which includes the extended Kalman filter (EKF) [3], the iterated extended Kalman filter (IEKF) [4,5], and their variants [6,7], solves the state estimation problem by replacing the nonlinear functions with their linear approximations via Taylor expansion. The second category involves stochastic sampling methods: in the filtering process, a set of randomly sampled points with weights is adopted to approximate the PDF of the underlying state. For example, the particle filter (PF) [8][9][10] is a sequential Monte Carlo (SMC) stochastic sampling method, which approximates the PDF by particles sampled from a proposal distribution; the PF can be applied to nonlinear non-Gaussian systems. Markov chain Monte Carlo (MCMC) is another popular stochastic method, since it can achieve arbitrarily high accuracy using a large number of particles, sometimes at prohibitive computational expense. The techniques in the third category use deterministic sampling methods: nonlinear state PDFs are approximated by a set of fixed points and weights that represent the location and spread of the distribution. This category includes the unscented Kalman filter (UKF) [9,11,12], the cubature Kalman filter (CKF) [13,14], and the central difference Kalman filter (CDKF) [15].
Problem Formulation
Consider a general dynamic system with measurements as follows:

x_k = f_k(x_{k-1}) + \omega_k, (1)
z_k = h_k(x_k) + v_k, (2)
where f_k(·) denotes the state transition function and h_k(·) the mapping from the system state to the measurement; \omega_k and v_k are the process noise and the measurement noise, respectively. We assume that \omega_k and v_k are Gaussian and mutually independent, \omega_k \sim N(0, Q_k) and v_k \sim N(0, R_k).
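As a concrete illustration of the model in Eqs. (1) and (2), the following minimal sketch simulates a generic nonlinear state-space system with Gaussian noises; the transition and measurement functions are placeholders, not those of the paper's examples:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):          # example nonlinear state transition (illustrative only)
    return np.array([x[0] + 0.1 * x[1], 0.99 * x[1]])

def h(x):          # example nonlinear measurement: range to the origin
    return np.array([np.hypot(x[0], x[1])])

Q = 0.01 * np.eye(2)   # process noise covariance
R = np.array([[0.1]])  # measurement noise covariance

x = np.array([1.0, 0.5])
xs, zs = [], []
for k in range(50):
    x = f(x) + rng.multivariate_normal(np.zeros(2), Q)   # Eq. (1)
    z = h(x) + rng.multivariate_normal(np.zeros(1), R)   # Eq. (2)
    xs.append(x); zs.append(z)
```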
Assuming that the posterior PDF p(x_{k-1}|z_{k-1}) at time k-1 is available, the PDF of the predicted state is obtained from the Chapman-Kolmogorov equation:
p(x_k|z_{k-1}) = \int p(x_k|x_{k-1}) p(x_{k-1}|z_{k-1}) dx_{k-1}. (3)

Then, at time k, the posterior PDF is obtained from the measurement z_k by an application of the Bayes rule:

p(x_k|z_k) = \frac{p(z_k|x_k) p(x_k|z_{k-1})}{\int p(z_k|x_k) p(x_k|z_{k-1}) dx_k}. (4)

For linear Gaussian systems, it is well known that the optimal state estimate x_{k|k} and the corresponding error covariance P_{k|k} under the minimum-variance criterion are given by the conditional mean and covariance

x_{k|k} = \int x_k p(x_k|z_k) dx_k, (5)
P_{k|k} = \int (x_k - x_{k|k})(x_k - x_{k|k})^T p(x_k|z_k) dx_k. (6)

For nonlinear systems, the integral in Equation (4) is often intractable, and suboptimal approximations of the posterior PDF are needed. Most existing suboptimal algorithms adopt linearization or sampling techniques to approximate the posterior PDF p(x_k|z_k). We consider an iterative VB approach, in which the true PDF is approximated by a variational distribution obtained by iterative optimization of the ELBO. The proposed algorithm converts the intractable integration into a closed-form optimization and thereby improves estimation accuracy.
Evidence Lower Bound Maximization
The above nonlinear estimation problem can be solved in a VB framework. The log marginal PDF log p(z_k) can be decomposed using a variational distribution q(x_k|\psi_k) as

log p(z_k) = L(\psi_k) + D_{KL}[q(x_k|\psi_k) \| p(x_k|z_k)], (7)

where the KL divergence between the variational distribution q(x_k|\psi_k) and the true posterior PDF p(x_k|z_k) is

D_{KL}[q(x_k|\psi_k) \| p(x_k|z_k)] = \int q(x_k|\psi_k) \log \frac{q(x_k|\psi_k)}{p(x_k|z_k)} dx_k, (8)

and L(\psi_k) is the variational ELBO:

L(\psi_k) = \int q(x_k|\psi_k) \log \frac{p(x_k, z_k)}{q(x_k|\psi_k)} dx_k. (9)

The variational distribution q(x_k|\psi_k) is assumed Gaussian with unknown parameters \psi_k = \{x_{k|k}, P_{k|k}\} (to be estimated), where x_{k|k} is the mean and P_{k|k} is the covariance.
Please note that the posterior PDF p(x_k|z_k) needs to be closely approximated by a known distribution in nonlinear filtering. From Equation (7), the variational distribution q(x_k|\psi_k) equals the true posterior PDF p(x_k|z_k) exactly when the KL divergence is zero. Since log p(z_k) does not depend on \psi_k, minimizing the KL divergence, and thereby approximating the posterior PDF by the variational distribution, is equivalent to maximizing the ELBO, i.e.,

\psi_k^* = \arg\max_{\psi_k} L(\psi_k) = \arg\min_{\psi_k} D_{KL}[q(x_k|\psi_k) \| p(x_k|z_k)]. (10)

According to Equations (9) and (10), the intractable integration of Equation (7) is converted into the problem of maximizing the ELBO, which can be solved by the VB method.
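Since the variational distribution is Gaussian, the KL divergence being minimized has a closed form whenever the comparison distribution is also Gaussian. The following sketch evaluates the standard Gaussian-KL formula (a textbook identity, not taken from the paper):

```python
import numpy as np

def kl_gauss(mu0, P0, mu1, P1):
    """Closed-form KL divergence D_KL[N(mu0, P0) || N(mu1, P1)]."""
    d = mu0.size
    P1_inv = np.linalg.inv(P1)
    dmu = mu1 - mu0
    return 0.5 * (np.trace(P1_inv @ P0) + dmu @ P1_inv @ dmu
                  - d + np.log(np.linalg.det(P1) / np.linalg.det(P0)))

mu_q, P_q = np.array([0.0, 0.0]), np.eye(2)
mu_p, P_p = np.array([0.5, -0.2]), 1.5 * np.eye(2)
print(kl_gauss(mu_q, P_q, mu_p, P_p))   # > 0; zero iff the two match
```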
Proximal Iterative Nonlinear Filter
In this section, we derive a closed-form iterative procedure that maximizes the ELBO and thereby minimizes the KL divergence between the true posterior PDF p(x_k|z_k) and the variational distribution q(x_k|\psi_k).
Penalty Function Based on KL Divergence
Notice that the KL divergence D_{KL}[q(x_k|\psi_k) \| q(x_k|\psi_k^i)] is nonnegative for all q(x_k|\psi_k). Following [24], we adopt the proximal point algorithm and generate a sequence \psi_k^{i+1} via the iterative scheme

\psi_k^{i+1} = \arg\max_{\psi_k} \left\{ L(\psi_k) - \frac{1}{\beta^i} D_{KL}[q(x_k|\psi_k) \| q(x_k|\psi_k^i)] \right\}, (11)

where i denotes the iteration index and the penalty factor \beta^i adjusts the optimization step length. Roughly speaking, the ELBO is maximized as the KL divergence between the two variational distributions q(x_k|\psi_k) and q(x_k|\psi_k^i) approaches zero. Substituting the ELBO expression into Equation (11), the scheme can be rewritten as Equation (12). Please note that one iteration of this proximal method is equivalent to moving a step in the direction of the natural gradient [18]. The influence of the KL divergence on \psi_k^{i+1} is adjusted by \beta^i: the larger \beta^i, the weaker the influence of the KL penalty on \psi_k^{i+1}, and vice versa. In [18], \beta^i = 1 is assumed.
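A toy illustration of the role of the penalty factor: the sketch below maximizes a quadratic stand-in for the ELBO with a squared-distance stand-in for the KL penalty (both are illustrative assumptions, not the paper's exact objective) and shows that a larger β takes longer steps toward the maximizer:

```python
import numpy as np

# Toy 1-D version of the proximal scheme in Eq. (11):
#   psi_{i+1} = argmax_psi  L(psi) - (1/beta) * KL(psi, psi_i).
# L is a quadratic stand-in for the ELBO with maximum at psi = 3; the KL
# penalty between equal-variance Gaussians reduces to a squared distance.
L = lambda psi: -(psi - 3.0) ** 2
kl = lambda psi, psi_i: (psi - psi_i) ** 2

grid = np.linspace(-5, 8, 20001)
for beta in (0.5, 5.0):
    psi_i = 0.0
    for _ in range(5):
        obj = L(grid) - kl(grid, psi_i) / beta
        psi_i = grid[np.argmax(obj)]
    print(f"beta = {beta}: psi after 5 iterations = {psi_i:.3f}")
# A larger beta weakens the KL penalty, giving longer steps toward the
# ELBO maximum; a smaller beta yields more conservative updates.
```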
The Proximal Iterative Nonlinear Filter
The proximal iterative method is implemented via iterative minimization of the KL divergence, where the initial state is assigned an estimate from a core filter, e.g., a Bayesian filter. Here, the EKF is adopted as the core filter to predict and update the system state before the iterative optimization process. We propose a proximal iterative nonlinear filter combined with VB, called PEKF-VB, which is described and derived in the following.
Firstly, by substituting Equation (10) into Equation (12), we can rewrite the iterative optimization as Equation (13). Under the Gaussian assumptions for the process noise and the measurement noise, the variational distribution is of the form q(x_k|\psi_k) \sim N(x_k; x_{k|k}, P_{k|k}), given the prior state estimate x_{k-1|k-1} at time k-1. For the first term in Equation (14), the expectation E_q[\log p(z_k|x_k)], as a function of x_{k|k} and P_{k|k}, can be approximated linearly using its gradients with respect to (w.r.t.) x_{k|k} and P_{k|k}. Defining g(x_{k|k}, P_{k|k}) \triangleq E_q[\log p(z_k|x_k)], the gradients of g w.r.t. x_{k|k} and P_{k|k} follow from Bonnet's and Price's theorems (see Appendix A). The expectation E_q[\log p(z_k|x_k)] at time k is then maximized by gradient ascent in the variables x_{k|k} and P_{k|k}, with coefficients \alpha_k^i and \gamma_k^i at iteration i given by Equations (18) and (19), where H_k^i is the Jacobian matrix

H_k^i = \frac{\partial h_k(x)}{\partial x}\Big|_{x = x_{k|k}^i}. (20)

In other words, the coefficients \alpha_k^i and \gamma_k^i are recomputed from H_k^i evaluated at x_{k|k}^i in each iterative step. The detailed calculations of \alpha_k^i and \gamma_k^i are given in Appendix A.
where the operators tr(·) and |·| denote the trace and the determinant of a matrix, respectively. Using Equations (17)-(21), Equation (13) can be rewritten as Equation (22). Then \psi_k^{i+1} is maximized at a point (x, P) that can be calculated explicitly: setting the partial derivatives of the objective w.r.t. x and P to zero yields x_{k|k}^{i+1} and P_{k|k}^{i+1} in Equations (24) and (25), which show that the state estimate and the associated covariance are updated in each iteration through \alpha_k^i and \gamma_k^i, respectively. As shown in Figure 1, the complete iteration procedure consists of Equations (18), (19), (24) and (25).
Figure 1. The flow diagram of the proposed PEKF-VB algorithm: update the state and associated covariance by Eqs. (5) and (6); initialize the iteration; calculate the Jacobian matrix by Eq. (20) and the parameters \alpha_k^i and \gamma_k^i by Eqs. (18) and (19); update the iterative estimate and associated covariance by Eqs. (26) and (27); terminate on convergence.
We note that, in principle, the computational cost of Equations (24) and (25) can be slightly reduced by using the matrix inversion lemma [27]. As a result, Equations (24) and (25) can be rewritten as Equations (26) and (27), respectively.
The flow diagram of the proposed PEKF-VB algorithm is shown in Figure 1, and the detailed implementation of PEKF-VB is given in Algorithm 1.
Algorithm 1: The implementation of the PEKF-VB algorithm
1: Initialization (k = 0): state estimate x_0 and associated error covariance P_0; the number of iterations.
2: Compute the predicted state x_{k|k-1} and the associated error covariance P_{k|k-1}.
3: Compute the Jacobian H_k = \partial h_k(x)/\partial x |_{x = x_{k|k-1}}.
4: Update the state estimate x_{k|k} and the associated error covariance P_{k|k} by Eqs. (5) and (6).
5: Let x_{k|k}^1 = x_{k|k}^*, P_{k|k}^1 = P_{k|k}^*, and i = 1.
6: while not converged do
7:   Compute the parameters \alpha_k^{i+1} and \gamma_k^{i+1} by Equations (18) and (19).
8:   Compute the iterated state estimate x_{k|k}^{i+1} and its error covariance P_{k|k}^{i+1} by Equations (26) and (27).
9:   Let i = i + 1.
10: end while
11: Let k = k + 1; go back to Step 2.
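Since Equations (18), (19), (26) and (27) are not reproduced in this excerpt, the following sketch fills in the VB iterations of Algorithm 1 with an iterated-EKF-style relinearized update, which the proximal scheme with β_i = 1 closely resembles; it should be read as an assumption-laden illustration rather than the paper's exact update:

```python
import numpy as np

def pekf_vb_step(x_pred, P_pred, z, h, H_jac, R, n_iter=5):
    """One measurement update in the spirit of Algorithm 1 (sketch).

    Assumption: each VB iteration is written as an iterated-EKF-style
    relinearized update; the paper's Eqs. (18), (19), (26), (27) are not
    reproduced here.
    """
    x_i = x_pred.copy()
    for _ in range(n_iter):
        H = H_jac(x_i)                            # Jacobian at current iterate
        S = H @ P_pred @ H.T + R                  # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)       # gain
        # Relinearized update about x_i (reduces to the EKF when n_iter = 1)
        x_i = x_pred + K @ (z - h(x_i) - H @ (x_pred - x_i))
    P_i = (np.eye(len(x_pred)) - K @ H) @ P_pred  # covariance at final iterate
    return x_i, P_i
```

In use, this routine would replace steps 5-10 of Algorithm 1, with the EKF prediction supplying x_pred and P_pred at each scan.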
1. The VB method approximates the true posterior PDF by choosing from a parameterized family of variational distributions. In each iteration of PEKF-VB, the ELBO (9) increases; the ELBO is therefore a proper criterion for measuring the performance of the variational optimization. The closed-form ELBO of the proposed nonlinear filter, in which D_x and D_z denote the dimension of the state and of the measurement, respectively, is derived in Appendix B.
2. Apart from the KL divergence, Calvo and Oller's distance (COD) can be used as the penalty function in Equation (13); the corresponding filter is denoted CODEKF. The COD of two Gaussian distributions is given in Equation (31), where n is the dimension of the diagonal matrix T, \Delta\mu = \mu_2 - \mu_1, and T T^T = I. We replace the KL divergence in Equation (13) with Equation (31) to obtain the CODEKF iteration.
3. Since both PEKF-VB and CODEKF involve iterations within the VB framework to minimize the divergence between the posterior PDF and the variational distribution, their complexity is increased by the calculation of the Jacobian in each iteration.
4. In PEKF-VB, we use the KL divergence to measure the similarity between two distributions. Under Gaussian assumptions for the distributions, a closed-form solution for the variational distribution has been derived. However, the VB framework with the KL divergence also applies to non-Gaussian distributions; if no closed form exists, a Monte Carlo method can be used to approximate the divergence. Other measures of dissimilarity between probability distributions, such as the alpha-divergence, the Rényi divergence and the alpha-beta divergence, can also be used in the VB framework; see [29] and references therein. Unfortunately, in general, no computationally tractable form of the variational distribution can then be derived, and a Monte Carlo method has to be employed.
Numerical Simulations
In this section, we present two nonlinear estimation examples of 2D target tracking and a benchmark nonlinear filtering problem to illustrate the performance of PEKF-VB and CODEKF. We compare them with the EKF and the UKF. Performance is measured by the root-mean-squared error (RMSE) of the estimates and by the computational overhead.
Example 1: Range-bearing tracking. In this scenario, the underlying target motion is described by a constant turn (CT) model, with the state vector consisting of the 2D position and velocity components. As shown in Figure 2a, the target moves with initial state x_0 = (565.7, 29.99, 1166, -0.62)^T along a circular trajectory and is observed by a range-bearing sensor. The state transition matrix F_k in Equation (1) takes the standard CT form, and the measurement function h_k(x_k) in Equation (2) is the range-bearing map h_k(x_k) = [\sqrt{x_k^2 + y_k^2}, \arctan(y_k/x_k)]^T, with turn rate \dot{\theta} = -0.0333 rad/s. The covariances of the zero-mean Gaussian white noises \omega_k and \upsilon_k are Q_k = 1.5 G q G^T and R_k = diag(r^2, \phi^2), respectively, where q = I_{2\times 2},

G^T = [ T^2/2  T  0  0 ; 0  0  T^2/2  T ],

r = 35, and \phi = 0.5\pi/180. At each run, the track is initialised using the two-point method [1] with initial error covariance P_0 = diag([600, 100, 600, 100]). We let \beta^i = 1 and T = 1 s. The target moves for 80 scans (periods), and 1000 Monte Carlo runs are carried out. The RMSE plots of the EKF, UKF, CODEKF and PEKF-VB are shown in Figure 2b, where the black, red, green and blue curves correspond to PEKF-VB, CODEKF, UKF and EKF, respectively. In terms of RMSE, PEKF-VB is slightly better than CODEKF, and both PEKF-VB and CODEKF are better than the EKF and the UKF. Table 1 provides a quantitative comparison of the RMSE and execution time of the EKF, UKF, CODEKF and PEKF-VB. In Figure 3a we compare the RMSE performance of PEKF-VB for varying numbers of iterations; the RMSE decreases as the number of iterations increases. Figure 3b gives the computational overhead of PEKF-VB w.r.t. the number of iterations. Both results agree with intuition.
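A sketch of the Example 1 simulation setup follows, assuming the standard constant-turn transition matrix and the state ordering (x, ẋ, y, ẏ); the parameter values follow the text, while the matrix form is an assumption:

```python
import numpy as np

T, w = 1.0, -0.0333                       # scan period (s), turn rate (rad/s)
swt, cwt = np.sin(w * T), np.cos(w * T)
# Standard constant-turn transition matrix (assumed form).
F = np.array([[1, swt / w,        0, -(1 - cwt) / w],
              [0, cwt,            0, -swt],
              [0, (1 - cwt) / w,  1, swt / w],
              [0, swt,            0, cwt]])

G = np.array([[T**2 / 2, 0], [T, 0], [0, T**2 / 2], [0, T]])
Q = 1.5 * G @ np.eye(2) @ G.T             # process noise covariance
R = np.diag([35.0**2, (0.5 * np.pi / 180) ** 2])  # range/bearing noise

def h(x):                                 # range-bearing measurement
    return np.array([np.hypot(x[0], x[2]), np.arctan2(x[2], x[0])])

rng = np.random.default_rng(1)
x = np.array([565.7, 29.99, 1166.0, -0.62])
for k in range(80):
    x = F @ x + rng.multivariate_normal(np.zeros(4), Q)
    z = h(x) + rng.multivariate_normal(np.zeros(2), R)
```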
To illustrate the convergence of PEKF-VB, we present the ELBO for different numbers of iterations in Figure 4: Figure 4a shows the ELBO at the second scan, and Figure 4b shows the ELBO for all scans. Clearly, the ELBO increases as the number of iterations increases, showing that the iteration procedure of PEKF-VB converges. Example 2: Bearing-only tracking. In this scenario, tracking of a single target using measurements from a single bearing-only (ownship) sensor is considered. Since the measurement model is nonlinear in the target state, the sensor has to maneuver relative to the target in order to make the target observable [30,31]. Let x_k = [x_k, y_k, \dot{x}_k, \dot{y}_k] be the state of the target at time k, where (x_k, y_k) and (\dot{x}_k, \dot{y}_k) are the position and velocity, respectively. The target, with an initial range of 5 km (relative to the sensor) and an initial course of 220° clockwise (with the positive y-axis defined as 0°), is modeled by the constant velocity model of Equation (35).
where T = 1 min. The speed of the target is 0.1235 km/min. The sensor starts moving with a fixed speed of 0.1543 km/min and an initial course of 140° clockwise (with the positive y-axis defined as 0°). Please note that, for the bearing-only tracking problem, the sensor has to maneuver to be able to estimate the range of the target. Here we assume the sensor maneuvers from scan 14 to scan 17 and then moves with constant velocity from scan 18 to scan 40. The motion model of the sensor is given by Equation (36), with turn rate \dot{\theta} = 30°/min. Both the target and the sensor move for 40 scans; their trajectories are shown in Figure 5a.
The measurement function h_k(x_k) of the bearing-only sensor is the bearing from the sensor to the target, h_k(x_k) = \arctan[(x_k - x_s)/(y_k - y_s)], where (x_k, y_k) is the target position in the Cartesian coordinate system and (x_s, y_s) is the sensor position. The standard deviation of the measurement noise is 1°. The initial position of the target is randomly sampled at range r = 13 km with covariance P_0 parameterised by \sigma_m = \pi/180 rad, \Delta r = 2 km and \Delta v = 61.7 m/min. We let \beta^i = 1. Figure 5b shows the estimated target trajectories obtained by the EKF, UKF, CODEKF and PEKF-VB for a single run.
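A small sketch of the bearing-only measurement and its Jacobian, assuming bearings are measured clockwise from the +y axis as in the course convention above:

```python
import numpy as np

# Bearing-only measurement and its Jacobian w.r.t. the target state
# [x, y, x_dot, y_dot]; bearing measured clockwise from the +y axis
# (an assumption matching the stated course convention).
def bearing(x, sensor):
    dx, dy = x[0] - sensor[0], x[1] - sensor[1]
    return np.arctan2(dx, dy)

def bearing_jac(x, sensor):
    dx, dy = x[0] - sensor[0], x[1] - sensor[1]
    r2 = dx**2 + dy**2
    return np.array([[dy / r2, -dx / r2, 0.0, 0.0]])

sensor = np.array([0.0, 0.0])
x = np.array([3.0, 4.0, -0.1, 0.05])      # km, km/min (illustrative)
print(np.degrees(bearing(x, sensor)))     # bearing in degrees
```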
Based on 1000 Monte Carlo simulations, the RMSE comparison of the EKF, UKF, CODEKF and PEKF-VB is illustrated in Figure 6, where the number of iterations for PEKF-VB is 5.
• As we expected, with a fixed sensor trajectory, PEKF-VB makes the best use of the target observability provided by the sensor measurements and achieves better RMSE performance than the EKF and the UKF. Under the VB framework, the variational distribution approaches the true posterior PDF through the iterations of the proximal filter.
• The RMSE performance of CODEKF is also better than those of the EKF and the UKF because, for CODEKF, the Jacobian matrix of Equation (37) is updated in each iteration to minimize the COD. However, the RMSE performance of CODEKF is slightly worse than that of PEKF-VB.
• In the first few scans, the performance of the four filters is comparable. This is because, in this bearing-only tracking problem, the accumulated measurements in these scans do not yet provide enough information to any of the filters. The performance of CODEKF and PEKF-VB suffers when measurement data are very limited; as more measurements accumulate, both CODEKF and PEKF-VB extract more information via the iteration process, resulting in superior performance. Figure 7 shows the metric values versus the number of iterations for PEKF-VB and CODEKF. The KLD curve is close to zero after the fifth iteration, and the enlarged plots covering the sixth to the tenth iteration show that PEKF-VB converges faster than CODEKF. Example 3: A strongly nonlinear filtering problem. To further verify the proposed method, we run the EKF, UKF, CODEKF and PEKF-VB on the benchmark nonlinear problem of [32][33][34], where \nu_{k-1} and \omega_k are zero-mean Gaussian noises with variances Q_{k-1} and R_k, respectively. We let Q_k = 0.0001, R_k = 1 and the scan period T = 1. The simulation results are given in Figure 8: CODEKF and PEKF-VB give very similar results and outperform the EKF and the UKF. This is because the local linearization adopted by EKF-based filters is not a sufficient description of the strongly nonlinear nature of this example [34], whereas the VB iteration makes use of the measurement information as much as possible.
Conclusions
We have developed a proximal iterative nonlinear filter in which the posterior PDF is approximated by a parameterised variational distribution that is iteratively optimized in the VB framework. A weighted KL divergence is adopted as a proximal term in the iteration to ensure that convergence is achieved with a tight bound. The simulation results show that the proposed algorithm outperforms several existing algorithms in estimation accuracy, at the cost of an increased computational burden.

Appendix A. Derivation of the Parameters \alpha_k^i and \gamma_k^i

Since the system is Gaussian, the likelihood function is p(z_k | x_{k|k}^i) = N(z_k; h_k(x_{k|k}^i), R_k). The parameters \alpha_k^i in Equation (18) and \gamma_k^i in Equation (19) are derived from Equations (A2) and (A3), respectively.
By Bonnet's theorem [35], the gradient of the expectation of f(\xi) under a Gaussian distribution N(\xi|\mu, C) w.r.t. the mean \mu is the expectation of the gradient of f(\xi):

\nabla_{\mu} E_{N(\xi|\mu,C)}[f(\xi)] = E_{N(\xi|\mu,C)}[\nabla_{\xi} f(\xi)].

It follows that \alpha_k^i can be written in terms of the Jacobian matrix H_k^i of the measurement function evaluated at x_{k|k}^i. According to Price's theorem [36], the gradient of the expectation of f(\xi) under a Gaussian distribution N(\xi|\mu, C) w.r.t. the covariance C is given by the expectation of the Hessian of f(\xi) (up to a factor of one half):

\nabla_{C} E_{N(\xi|\mu,C)}[f(\xi)] = \frac{1}{2} E_{N(\xi|\mu,C)}[\nabla_{\xi} \nabla_{\xi}^T f(\xi)],

and \gamma_k^i follows similarly (A7).
Appendix B. Derivation of ELBO
From Equations (9) and (8), the ELBO decomposes into three expectation terms:

L(\psi_k) = E_q[\log p(z_k|x_k)] + E_q[\log p(x_k)] - E_q[\log q(x_k|\psi_k)]. (A8)

Since both the system noise and the measurement noise are Gaussian, the likelihood function p(z_k|x_k), the PDF p(x_k) and the PDF q(x_k|\psi_k) below are all Gaussian. Assuming that the state transition matrix and the measurement matrix are obtained by linearization, p(z_k|x_k) and p(x_k) can be written as linearized Gaussians, where F_{k-1} = \partial f(x)/\partial x |_{x = x_{k-1|k-1}} and H_k = \partial h(x)/\partial x |_{x = x_{k|k}}.
Assume that the state estimate is unbiased, i.e., E_q[x_{k-1}] = x_{k-1|k-1}; Equation (A10) can then be written approximately in linearized form. Since the estimate is unbiased, the ground truth x_k at time k can be expressed as x_k = x_{k|k} + \tilde{x}_{k|k} with E_q[\tilde{x}_{k|k}] = 0. Evaluating the three terms of Equation (A8) under these assumptions gives the first term E_q[\log p(z_k|x_k)] as Equation (A16), the second term E_q[\log p(x_k)] as Equation (A17), and the third term E_q[\log q(x_k|\psi_k)] as Equation (A18). Combining Equations (A16)-(A18), we obtain the ELBO of Equation (A19).
"Computer Science"
] |
Genomic Analysis of Non-B Nucleic Acids Structures in SARS-CoV-2: Potential Key Roles for These Structures in Mutability, Translation, and Replication?
Non-B nucleic acids structures have arisen as key contributors to genetic variation in SARS-CoV-2. Herein, we investigated the presence of defining spike protein mutations falling within inverted repeats (IRs) for 18 SARS-CoV-2 variants, discussed the potential roles of G-quadruplexes (G4s) in SARS-CoV-2 biology, and identified potential pseudoknots within the SARS-CoV-2 genome. Surprisingly, there was a large variation in the number of defining spike protein mutations arising within IRs between variants and these were more likely to occur in the stem region of the predicted hairpin stem-loop secondary structure. Notably, mutations implicated in ACE2 binding and propagation (e.g., ΔH69/V70, N501Y, and D614G) were likely to occur within IRs, whilst mutations involved in antibody neutralization and reduced vaccine efficacy (e.g., T19R, ΔE156, ΔF157, R158G, and G446S) were rarely found within IRs. We also predicted that RNA pseudoknots could predominantly be found within, or next to, 29 mutations found in the SARS-CoV-2 spike protein. Finally, the Omicron variants BA.2, BA.4, BA.5, BA.2.12.1, and BA.2.75 appear to have lost two of the predicted G4-forming sequences found in other variants. These were found in nsp2 and the sequence complementary to the conserved stem-loop II-like motif (S2M) in the 3′ untranslated region (UTR). Taken together, non-B nucleic acids structures likely play an integral role in SARS-CoV-2 evolution and genetic diversity.
Introduction
When we consider the structure of nucleic acids, our first thoughts are of the iconic B-form DNA double helix. However, nucleic acids are structurally diverse and can be found in a wide range of topologies and conformations within both living and non-living entities. Non-B nucleic acids have been identified as important regulators in fundamental biological processes and have emerged as novel therapeutic targets within infection and disease.
There is growing evidence that these non-canonical nucleic acids structures, such as G-quadruplexes (G4s), cruciforms, hairpins, and pseudoknots, may contribute to both the functional biology and mutational variability of humans, animals, plants, and microorganisms [1][2][3][4][5]. Inverted repeats (IRs) constitute a sequence of nucleotides followed downstream by its reverse complemented sequence, often separated by a 'loop' sequence. Viral origins of replication and bacterial plasmids are found to be enriched with IRs [6,7]. These IR sequences can fold into a hairpin stem-loop structure or palindrome in single-stranded nucleic acids, which can significantly contribute to genomic instability and mutation [8]. Furthermore, they have been implicated in a wide range of biological processes, such as replication, transcription, and DNA repair [9,10]. IRs also regulate RNA processing in animals and plants, and transcripts containing IRs are processed to produce small RNAs which silence genes [11,12]. IRs are also an important component of pseudoknots: a common structural motif in RNA formed of two nested stem-loops [13]. Pseudoknots have been found to be present in viruses, whereby they contribute to viral translation and replication and can also induce frameshifts [5]. Pseudoknots have also been shown to act as binding sites for proteins and may act as regulatory switches in response to environmental signals. They are highly conserved amongst viruses, and, as such, they are beginning to emerge as a potential antiviral target for SARS-CoV-2 [14].
G4s are four-stranded nucleic acids structures that arise in guanine-rich regions of RNA/DNA, and are formed in sequences composed of four runs of ≥two guanines separated by a nucleotide loop (e.g., GGATGGATGGATGG) [15]. Here, four guanines associate via Hoogsteen hydrogen bonding to form a G-tetrad. These G-tetrads stack upon one another and are stabilised by a metal cation (e.g., K + ) to form the G4 secondary structure. These structures have been gaining interest recently as antimicrobial targets, due to their demonstrable roles in the regulation of fundamental biological processes such as transcription, translation, replication, and alternative gene splicing [15]. Indeed, G4s have arisen as promising drug targets within bacteria, viruses, parasites, and fungi [16][17][18][19].
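As a rough illustration of the motif definition above, the following sketch scans a sequence for putative G4-forming patterns with a regular expression; it is a simplified stand-in for dedicated tools such as QGRS Mapper, and the run/loop thresholds are illustrative:

```python
import re

# Putative G4-forming-sequence scan: four runs of >= 2 guanines separated
# by loops of 1-12 nucleotides (simplified; thresholds are illustrative).
G4_PATTERN = re.compile(r"(G{2,})([ACGTU]{1,12}?G{2,}){3}")

def find_g4(seq):
    return [(m.start(), m.group()) for m in G4_PATTERN.finditer(seq.upper())]

print(find_g4("ttGGatGGatGGatGGcc"))   # matches the example motif above
```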
Goswami and colleagues recently highlighted that SARS-CoV-2 hot-spot mutations were significantly enriched within IRs in the Wuhan reference genome, and hypothesised that IRs could contribute to further mutational drive [20]. This hypothesis was confirmed in additional variants, but in-depth analyses of IRs in more recently identified variants have not been conducted [21]. Moreover, G4s have recently arisen as promising targets to treat SARS-CoV-2 infections [22]. The important roles of these non-canonical nucleic acids structures in SARS-CoV-2 are only just starting to become apparent. Thus, critical biological insights into the roles these structures may have in SARS-CoV-2 could help with understanding the biology of this virus and unveil novel druggable targets to treat these infections. In this article, we analyse SARS-CoV-2 genomes for the presence of IRs, pseudoknots, and G4s, with the aim of stimulating new schools of thought and identifying future experimental directions for the fields of nucleic acids biology and virology.
Selection of Sequences
Representative genomes for the currently circulating variant of concern (Omicron), formerly circulating variants of concern (Alpha, Beta, Gamma, and Delta), 9 formerly monitored variants (Epsilon, 20A, Kappa, Iota, 20B, Eta, Theta, Lambda, and Mu), and the Wuhan reference strain were analysed. The FASTA sequences for the entire genomes and the S genes encoding the SARS-CoV-2 spike glycoproteins for each were obtained from the National Center for Biotechnology Information (NCBI; last accessed 19 December 2022). Only complete genomes and sequences were used for analysis. Representative genomes were used as there was negligible variation between the locations of predicted non-B structures amongst all genomes from the same variant. The accession numbers of the genomes analysed can be found in Table S1 and the genome information can be found in the Supplementary Materials.
Detection of Mutations within IRs, Prediction of Pseudoknot Formation, and G4-Analysis
To quantify the number of predicted IRs within the S genes, the FASTA sequences were analysed using the Palindrome Analyser web server (http://palindromes.ibp.cz/#/en/index; last accessed on 19 December 2022 [23]) with the default settings (size: 6-30 bp, spacer: 0-10 bp, and mismatches: 0, 1). Defining, shared, and unique mutations were identified via CoVariants (https://covariants.org/; last accessed on 19 December 2022 [24]), which collates raw data provided by the Global Initiative on Sharing All Influenza Data (GISAID) [25]. Prior to post-analysis, FASTA sequences of the variants' S genes were aligned to the Wuhan reference sequence using Clustal Omega (EMBL-EBI) to account for any effects of the deletion mutations and differences in nucleotide number. Mutations were noted to have occurred within an IR only if the mutation site fell within the stem or loop region of the predicted IR. Pseudoknot formation was predicted using ProbKnot within the RNAstructure program as described previously [26,27]. Pseudoknot predictions were performed using 1 iteration and a minimum helix length of 3. The ProbKnot CT files containing the predicted pseudoknot structures are provided in the Supplementary Materials. The presence of G4-forming sequences in the SARS-CoV-2 genomes was determined via QGRS Mapper using the search options of max length = 30, minimum group size = 2, and loop size = 0-12 [28].
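For intuition about the IR detection step, the sketch below finds perfect inverted repeats by matching a candidate stem against its reverse complement appearing after a short spacer; it loosely mirrors the Palindrome Analyser settings (stem length, spacer, no mismatches) but is not that tool:

```python
# Minimal inverted-repeat (IR) scan; parameters are illustrative.
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(s):
    return s.translate(COMP)[::-1]

def find_irs(seq, stem=6, max_spacer=10):
    seq = seq.upper()
    hits = []
    for i in range(len(seq) - 2 * stem):
        left = seq[i:i + stem]
        for spacer in range(max_spacer + 1):
            j = i + stem + spacer
            if seq[j:j + stem] == revcomp(left):
                hits.append((i, spacer, left))   # stem start, spacer, stem
    return hits

print(find_irs("AAGGGCGCTTTAAGCGCCCAA"))  # toy sequence with one IR
```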
Statistical Analysis
Data comparing groups were first tested for normality via a Shapiro-Wilk normality test prior to analysis via either an unpaired Student's t-test or a one-way ANOVA, depending upon the number of groups compared. Statistical significance was defined as p < 0.05.
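A minimal sketch of this testing pipeline with SciPy, using hypothetical group data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-variant percentages of defining mutations inside IRs.
group_a = rng.normal(70, 8, size=10)
group_b = rng.normal(30, 8, size=10)

# Normality check first, as described above.
for g in (group_a, group_b):
    print("Shapiro-Wilk p =", stats.shapiro(g).pvalue)

# Two groups -> unpaired t-test; three or more -> one-way ANOVA.
print("t-test p =", stats.ttest_ind(group_a, group_b).pvalue)
group_c = rng.normal(50, 8, size=10)
print("ANOVA p =", stats.f_oneway(group_a, group_b, group_c).pvalue)
```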
There Is a Large Variation in the Number of Defining Mutations Falling within IRs between SARS-CoV-2 Variants
In-depth analyses of IRs in more recently identified SARS-CoV-2 variants have not been conducted [21]. Therefore, we first identified the presence of IRs in the entire genome and S genes of the currently circulating variant of concern (Omicron), formerly circulating variants of concern (Alpha, Beta, Gamma, and Delta), and nine formerly monitored variants (Epsilon, 20A, Kappa, Iota, 20B, Eta, Theta, Lambda, and Mu) to offer insight into whether SARS-CoV-2 was continuing to mutate as expected.
We found no significant difference in the number of IRs in the complete genome or in the S genes between variants (Table S1; Figure S1). Unexpectedly, we did find that the number of defining spike mutations occurring within IRs varied widely between variants (Table 1; Figure 1A). Defining spike mutations of the Delta (22.2%), 20B (25%), and Iota (33.3%) variants were least likely to be found within IRs, but spike mutations of the Beta (70%), Eta (77.7%), and Theta (71.4%) variants were frequently located within IRs (Table 1; Figure 1A). Regarding specific mutations, the D614, N501, ∆Y144, ∆G142, T478, N440, K417, Q498, and ∆H69/V70 mutations were most frequently found within IRs (Table 1). Interestingly, we also found that defining mutations shared by the variants (e.g., ∆H69/V70, ∆Y144, N501Y, and D614G) were significantly more likely to be found within IRs compared to those unique to a variant, such as A570D, T716I, and S982A in the Alpha variant and ∆E156/F157 and R158G in the Delta variant (Figure 1C,D). We also observed a preference for the defining mutation to be found within the stem rather than the loop of the IR (Figure 1B; Table S1). Thus, it appeared that IRs play an integral role in driving the mutational diversity of spike protein mutations amongst variants. However, why some mutations were preferentially found within IRs and not others was unknown. Many of the mutations under investigation have now been implicated in ACE2 binding, antibody neutralization, or both [29][30][31][32][33][34]. We observed that defining mutations contributing to ACE2 binding, such as ∆H69/V70, N501Y, and D614G, were regularly found to occur within IRs (in 100.0%, 90.0%, and 94.4% of instances where the mutation was present, respectively). Conversely, mutations significantly contributing to antibody neutralization, such as T19R, ∆E156, ∆F157, R158G, and G446S, were not found within IRs (Figure 1E,F; Table S2).
Pseudoknots Are Predicted to Occur near the Sites of Several Key Mutations
In the Wuhan reference strain, pseudoknot prediction algorithms determined the presence of potential pseudoknots within the sites where the A27S, E484K, and S704L mutations occur (Figure 1G).
Discussion
We found that there was significant diversity in the percentage of defining spike protein mutations occurring within IRs between variants. Mutations linked to infectivity were more likely to arise within IRs than those associated with antibody neutralization. Moreover, pseudoknots were predicted to form close to key spike protein mutations, and G4s were predicted to form within two conserved regions in the 3′ and 5′ UTRs.
The SARS-CoV-2 mutations found most likely to occur within IRs amongst all variants were the ∆H69/V70, N440K, N501Y, and D614G mutations; all of which have been implicated in increased fitness and infectivity [36][37][38][39][40]. Mutations implicated in ACE2 binding and propagation were found to frequently occur within IRs, whilst mutations involved in antibody neutralization and reduced vaccine efficacy were rarely found within IRs. Although not significant, these data suggest that mutations linked to antibody neutralization may occur more frequently outside of IRs and are probably evoked due to the external pressures of vaccines and antibodies, rather than spontaneous mutation. Surprisingly, the D614G mutation was found within an IR for all variants except for Omicron BA.2, BA.4, and BA.5. The D614G mutation has been shown to enhance infectivity but it has also been shown to enhance susceptibility to vaccines and antibody neutralization. Notably, the Omicron BA.2, BA.4, and BA.5 variants also display increased resistance to neutralizing antibodies [41][42][43][44]. However, this is unlikely to be due to the loss of this IR sequence and more likely to be due to the involvement of the S371F, D405N, R408S, F486V, and L452R mutations. However, of these mutations, only the F486V and L452R mutations were found within IRs, further supporting our claim that antibody neutralizing mutations occur with less frequency within IRs.
In the Wuhan reference strain, pseudoknot prediction algorithms determined the presence of potential pseudoknots within the sites where the A27S, E484K, and S704L mutations occur, and in 29 additional mutations amongst the variants tested. The E484 mutation is particularly noteworthy, as this mutation has been shown to arise with high frequency in the presence of antibodies [45]. However, whether the external influence exerted by antibodies can induce pseudoknot formation is unknown. It is well known that the pseudoknot in the ORF1 polyprotein of SARS-CoV-2 can induce frameshifts, whilst the conserved pseudoknot in the coronavirus 3′ UTR is involved in viral replication [46]. These are two key examples, but one can hypothesise that they are the tip of the iceberg and that pseudoknots have important roles throughout the entire viral genome. Future studies could investigate whether antibody binding acts as an environmental trigger for pseudoknot formation/prevention and whether this influences further mutational drive.
It was previously demonstrated that the SARS-CoV-2 genome contained fewer G4s than the SARS-CoV genome and this has been suggested to be energetically favourable, as G4s can represent a barrier to translation and replication [47,48]. Moreover, the frequency of G4s in a viral genome is associated with whether infection is chronic or acute [28]. The G4s in the nsp1, nsp3, nsp10, S, and N genes have previously been shown to form in vitro [48]. However, the roles these G4s might play in controlling the biological functions of these genes have not been fully addressed. Of particular interest are the G4-forming sequences found in the UTRs of SARS-CoV-2. Both predicted G4 sequences would be found on the negative strand. It has recently been identified that SARS-CoV-2 negative strands have protein-coding potential, and they are known to be involved in replication [49]. Thus, the negative strand may be targeted by G4-stabilising compounds to prevent translation of proteins on the negative sense strand and subsequent SARS-CoV-2 replication cycles. Indeed, several G4-stabilisers have been found to bind to SARS-CoV-2 RNA and G4-stabilising compounds have recently been demonstrated to be antiviral in mouse models of infection [50][51][52], highlighting the therapeutic potential of targeting G4s in SARS-CoV-2 infections.
It has recently been shown that the conserved SL1 region in the 5′ UTR of SARS-CoV-2 represents a potential drug target [53]. The authors demonstrated that a locked nucleic acid (LNA) antisense oligonucleotide against the SL1 region could inhibit viral translation, prevent lethality in mice expressing ACE2, and make SARS-CoV-2 vulnerable to non-structural protein 1 (Nsp1) translation suppression [53]. Chowdhury et al. recently demonstrated that LNA probes can promote disruption of the secondary G4 structure [54]. Therefore, it is likely that the LNA oligonucleotide used against the SL1 region could also disrupt the G4 predicted to form on the negative strand. This suggests that this conserved G4-forming sequence could be important in promoting viral translation, and molecules designed to disrupt this G4 might have therapeutic potential.
The S2M region has previously been described as a recombination hotspot in SARS-CoV-2 compared with other positive single-stranded RNA viruses [55]. It is well-established that G4s can contribute to genome instability, and it is likely that this G4-forming sequence in past variants has contributed to the genetic variability observed within the S2M region of the new variants. On another note, the presence of two potential TAGGGA microsatellites in close vicinity to this region probably also contributes to the genetic variability within this region due to their high mutation rates. Finally, interferon-β (IFN-β) can inhibit SARS-CoV-2 replication and Nsp2 has recently been shown to repress the translation of IFN-β [56]. The presence of G4-forming sequences in an mRNA can prevent translation, and the loss of the predicted sequences in the recent Omicron variants could provide some explanation for the increased replication of these variants. Thus, loss of the G4-forming sequence from nsp2 might enhance the translation of Nsp2 and promote replication.
Conclusions
Taken together, non-B nucleic acids structures are prevalent throughout the SARS-CoV-2 genome where they may play integral roles in promoting mutational diversity. Furthermore, it could be interesting to explore whether environmental pressures, such as the immune response and antibodies, influence the formation of IRs, G4s, and pseudoknots. Finally, targeting non-B nucleic acids structures in SARS-CoV-2 may disrupt viral biological processes and have therapeutic potential, although a much greater understanding of their biological roles in SARS-CoV-2 is required.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/genes14010157/s1. Figure S1: The total number of IRs in the whole genome and spike protein; Table S1: SARS-CoV-2 spike protein mutations within inverted repeats (IRs); Table S2: Mutations involved in ACE2 binding and antibody neutralization found within IRs; and Table S3.
"Biology"
] |
A Fuzzy-MOORA approach for ERP system selection
Article history: Received June 9, 2012; Accepted July 5, 2012; Available online July 6, 2012.
In today's global and dynamic business environment, manufacturing organizations face the tremendous challenge of expanding markets and meeting customer expectations. This compels them to lower total cost across the entire supply chain, shorten throughput times, reduce inventory, expand product choice, provide more reliable delivery dates and better customer service, improve quality, and efficiently coordinate demand, supply and production. In order to accomplish these objectives, manufacturing organizations are turning to enterprise resource planning (ERP) systems: enterprise-wide information systems that interlace all the necessary business functions, such as product planning, purchasing, inventory control, sales, finance and human resources, into a single system with a shared database. Thus, to survive in the global competitive environment, implementation of a suitable ERP system is mandatory; however, selecting the wrong ERP system may adversely affect the manufacturing organization's overall performance. Due to limitations in available resources, the complexity of ERP systems and the diversity of alternatives, it is often difficult for a manufacturing organization to select and install the most suitable ERP system. In this paper, two ERP system selection problems are solved using the fuzzy multi-objective optimization on the basis of ratio analysis (MOORA) method, and it is observed that in both cases SAP is the best solution. © 2012 Growing Science Ltd. All rights reserved.
Introduction
Enterprise resource planning (ERP) is a comprehensive integrated information system comprising several configurable modules to automate the flow of material, information and financial resources among all the functions within a manufacturing organization on a common database. Besides integration, its main aims are to enhance decision support, reduce asset costs, deliver more accurate and timely information, and attain higher flexibility with increased customer satisfaction. To survive in the global competitive environment, every organization now feels the need to augment its present operational structure with a suitable ERP system. Although implementing an ERP system is a cost-intensive and time-consuming task, its benefits are worthwhile. With careful selection of an ERP system, a manufacturing organization can expect to achieve significant advantages, including dramatic increases in responsiveness, productivity, on-time shipment and sales, as well as decreases in lead time, purchase cost, quality problems and inventory. Failure to select an appropriate ERP system may lead to failure of the project or weaken the organization's performance.
Nowadays, most manufacturing organizations do not only implement ERP systems themselves but also expect the full support of ERP software developers after installation, because ERP software is an expensive tool and requires well-defined contributions from both the manufacturing organization and the developer or its vendor/consultant firm. On the other hand, no ERP software on the market can fully meet the needs and expectations of all organizations, because every organization runs its own business with different strategies and goals. ERP vendors/developers use different hardware platforms, databases and operating systems, and some ERP software is compatible only with certain organizations' databases and operating systems. Nevertheless, many manufacturing organizations install their ERP systems hurriedly without fully understanding the implications for their business or the need for compatibility with overall organizational goals and strategies. Surprisingly, given the significant investment of resources and time, many organizations have not achieved success in ERP implementation: it is estimated that the failure rate of ERP implementations ranges from 40% to 60% or higher (Nazemi et al., 2012).
Therefore, the selection process for finding the most satisfactory ERP software among a set of feasible alternatives on the market should follow one of the proven multi-criteria decision-making (MCDM) methods. By following an MCDM-based methodology, the decision maker can strengthen the selection decision with respect to justifiability, accountability and reasonability, which are regularly seen as prerequisites of complex and risky decisions. In this paper, the application of the fuzzy multi-objective optimization on the basis of ratio analysis (MOORA) method is proposed to choose the best ERP systems for two manufacturing organizations. It is also shown that the fuzzy MOORA method is a simple, easily understood and accurate tool for solving decision-making problems with imprecise and vague evaluation data. Shyur (2003) developed an analytic network process (ANP)-based four-step semi-structured method for ERP system evaluation with interdependency relations among the considered criteria. Wei and Wang (2004) presented a comprehensive framework for combining objective data obtained from external professional reports with subjective data accumulated from internal interviews with vendors to select a suitable ERP project. Bernroider and Mitlöhner (2005) observed an increase in awareness of MCDM methods in the context of ERP project selection and concluded that, although the ERP selection decision problem may seem structured, formal MCDM methods are quite helpful in solving it. Wei et al. (2005) presented a comprehensive method for selecting a suitable ERP system based on a framework that constructs the objectives of ERP selection to support the business goals and strategies of an organization, identifies the appropriate attributes, and sets up a consistent evaluation standard to facilitate a group decision process. Lien and Liang (2005) proposed a three-phase ERP selection framework using the fuzzy analytic hierarchy process (FAHP) and observed that cost was the most important factor affecting the ERP system selection decision. Liao et al. (2007) proposed a similarity-degree-based algorithm to aggregate objective information about ERP systems from external professional organizations, which might be expressed in different linguistic terms; consistency and inconsistency indices were defined by considering the subjective information obtained from internal interviews with the ERP vendors, and a linear programming model was established for selecting the most suitable ERP system. Ayağ and Özdemir (2007) applied a fuzzy extension of the ANP method for selecting a suitable ERP system while considering the various interactions, dependencies and feedback between higher- and lower-level elements in the decision-making process. Liang and Lien (2007) presented a practical procedure combining the ISO 9126 standard and the FAHP approach to solve ERP selection problems. Lien and Chan (2007) proposed an FAHP method to develop a selection model for ERP systems. Karaarslan and Gundogar (2008) aimed to select the most appropriate ERP software from two candidates using the AHP method. Karsak and Özogul (2009) proposed an integrated decision framework for ERP software selection based on quality function deployment, fuzzy linear regression and zero-one goal programming, and observed that the developed methodology appears to be a sound investment decision-making tool for ERP systems as well as other information systems. Yazgan et al.
(2009) designed an artificial neural network-based model trained with ANP in order to calculate priority values for selecting the best ERP software. Cebeci (2009) presented an approach to select a suitable ERP system for a textile industry and used the FAHP method to compare the performance of ERP system solutions. Onut and Efendigil (2010) applied the AHP method and its fuzzy extension to obtain more decisive judgments by prioritizing criteria and assigning weights to the ERP system alternatives. Forslund and Jonsson (2010) studied the effects of decisions made in the ERP system lifecycle phases on supply chain performance management. Asgari et al. (2011) introduced a comprehensive framework for selecting an ERP system using fuzzy theory and fuzzy MCDM methods. Nikjoo et al. (2011) employed decision techniques combined with goal programming to choose an appropriate ERP system for a particular manufacturing organization. Rouyendegh and Erkan (2011) presented a comprehensive framework for selecting the best-suited ERP system using the AHP method, which can systematically construct the objectives of ERP system selection to support the business goals and strategies of the organization. Khaled and Idrissi (2011) addressed the question of how to choose the ERP solution that best suits a given small or medium enterprise; the Choquet integral was introduced as a new iterative learning-based approach intended to support enlightened decisions through the consideration of interdependencies among the adopted selection criteria. Huiqun and Guang (2012) developed a novel approach integrating AHP and rough set theory for ERP software selection and applied the fuzzy technique for order preference by similarity to ideal solution (FTOPSIS) method to obtain the final ranking of the ERP software alternatives. Although earlier researchers have focused their attention on selecting the most suitable ERP systems using complex mathematical tools, a need is still felt for a simple and easily understandable MCDM method to deal with ERP system selection problems.
Fuzzy MOORA method
The method of multi-objective optimization on the basis of ratio analysis (MOORA) was introduced by Brauers and Zavadskas (2006). Due to its simplicity and comprehensiveness, it has already been applied successfully in manufacturing (Chakraborty, 2010), construction engineering and management (Kracka et al., 2010; Brauers et al., 2008), and economics (Brauers & Ginevicius, 2009; Brauers & Zavadskas, 2010). MOORA is the process of simultaneously optimizing two or more conflicting attributes (objectives) subject to certain constraints. In a decision-making problem, the values of these objectives are measured for every decision alternative; this provides the basis for comparing the choices and consequently facilitates the selection of the best (most satisfactory) option. Multi-objective optimization techniques therefore seem to be an appropriate tool for ranking or selecting one or more alternatives from a set of feasible options based on multiple, usually conflicting, attributes. It has already been observed that the MOORA method is simple, stable and robust, and requires minimal mathematical calculation and computational time (Chakraborty, 2010; Brauers & Zavadskas, 2012).
In the MOORA method, the overall performance of each alternative is calculated as the difference between the sums of its normalized performances on the beneficial and non-beneficial criteria:

y_i = \sum_{j=1}^{g} x_{ij}^{*} - \sum_{j=g+1}^{n} x_{ij}^{*}, (1)

where x_{ij}^{*} is a dimensionless number in the interval [0,1] representing the normalized performance of the i-th alternative on the j-th criterion, g is the number of beneficial criteria, (n - g) is the number of non-beneficial criteria, and y_i is the overall performance of the i-th alternative with respect to all the criteria.
When priority weights are used to express the relative importance of one criterion over another, Eq. (1) can be rewritten as

y_i = \sum_{j=1}^{g} w_j x_{ij}^{*} - \sum_{j=g+1}^{n} w_j x_{ij}^{*}, (2)

where w_j is the weight of the j-th criterion, which can be determined by the AHP or the entropy method. The best alternative has the highest y_i value, while the lowest y_i value indicates the worst alternative.
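A compact sketch of the crisp MOORA ranking of Eqs. (1) and (2); the decision matrix, weights and criterion types below are hypothetical:

```python
import numpy as np

def moora(X, weights, beneficial):
    """Weighted MOORA ranking per Eqs. (1)-(2).

    X: (alternatives x criteria) decision matrix; beneficial: boolean mask.
    """
    # Vector normalization: divide each column by its Euclidean norm.
    Xn = X / np.sqrt((X ** 2).sum(axis=0))
    signs = np.where(beneficial, 1.0, -1.0)
    return (Xn * weights * signs).sum(axis=1)   # overall performance y_i

# Hypothetical ERP alternatives scored on 4 criteria (the last is cost).
X = np.array([[7.0, 8.0, 6.0, 400.0],
              [6.0, 7.0, 8.0, 350.0],
              [8.0, 6.0, 7.0, 500.0]])
w = np.array([0.3, 0.3, 0.2, 0.2])
y = moora(X, w, beneficial=np.array([True, True, True, False]))
print(y, "best:", int(np.argmax(y)))
```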
Fuzzy set theory provides a mathematical framework in which vague conceptual phenomena can be precisely studied (Zadeh, 1965). It has proven to be a valuable tool for strengthening the comprehensiveness and reasonableness of the decision-making process, and it is an important means of handling the ambiguity of concepts associated with decision makers' subjective judgments, including linguistic terms, satisfaction degrees and importance degrees, which are often vague.
A fuzzy set Ā in a universe of discourse X is characterized by a membership function μ_Ā(x) that associates with each element x in X a real number in the interval [0,1]; the value μ_Ā(x) is termed the grade of membership of x in Ā. The most commonly used fuzzy numbers are triangular and trapezoidal fuzzy numbers. Triangular fuzzy numbers are often used in applications because of their computational simplicity and added features. A triangular fuzzy number can be denoted Ā = (l, m, n), with membership function

μ_Ā(x) = (x - l)/(m - l) for l ≤ x ≤ m; (n - x)/(n - m) for m ≤ x ≤ n; and 0 otherwise.

In order to take full advantage of the MOORA method for solving decision-making problems with imprecise and vague data, a new variant of the MOORA method, i.e., fuzzy MOORA, is proposed here, consisting of the following procedural steps.
In order to take full advantage of the MOORA method for solving decision-making problems with imprecise and vague data, a new variant of the MOORA method, i.e. fuzzy MOORA, is proposed here, consisting of the following procedural steps.

Step 1: Based on the opinions of the decision makers, develop the fuzzy decision matrix in which each criterion value is measured using a triangular membership function:

$$\bar{X} = \left[\bar{x}_{ij}\right] = \left[\left(x_{ij}^{l},\; x_{ij}^{m},\; x_{ij}^{n}\right)\right] \qquad (4)$$

where x_ij^l, x_ij^m and x_ij^n respectively denote the lower, middle and upper values of the triangular membership function for the i-th alternative with respect to the j-th criterion.
Step 2: Normalize the fuzzy decision matrix using the vector normalization procedure. For this, the following equations are adopted (Stanujkic et al., 2012):

$$x_{ij}^{l*} = \frac{x_{ij}^{l}}{\sqrt{\sum_{i}\left[(x_{ij}^{l})^{2} + (x_{ij}^{m})^{2} + (x_{ij}^{n})^{2}\right]}} \qquad (5)$$

$$x_{ij}^{m*} = \frac{x_{ij}^{m}}{\sqrt{\sum_{i}\left[(x_{ij}^{l})^{2} + (x_{ij}^{m})^{2} + (x_{ij}^{n})^{2}\right]}} \qquad (6)$$

$$x_{ij}^{n*} = \frac{x_{ij}^{n}}{\sqrt{\sum_{i}\left[(x_{ij}^{l})^{2} + (x_{ij}^{m})^{2} + (x_{ij}^{n})^{2}\right]}} \qquad (7)$$
Step 3: Determine the weighted normalized fuzzy decision matrix by multiplying the normalized fuzzy ratings by the corresponding crisp criteria weights:

$$v_{ij}^{l} = w_j\, x_{ij}^{l*}, \qquad v_{ij}^{m} = w_j\, x_{ij}^{m*}, \qquad v_{ij}^{n} = w_j\, x_{ij}^{n*} \qquad (8)\text{--}(10)$$

For the development of the weighted normalized fuzzy decision matrix, fuzzy criteria weights may also be used, but this leads to more complex calculations.
Step 4: Calculate the overall ratings of the beneficial and non-beneficial criteria for each alternative. For the beneficial criteria, the overall ratings of an alternative for the lower, middle and upper values of the triangular membership function are computed as:

$$s_i^{+l} = \sum_{j \in B} v_{ij}^{l}, \qquad s_i^{+m} = \sum_{j \in B} v_{ij}^{m}, \qquad s_i^{+n} = \sum_{j \in B} v_{ij}^{n} \qquad (11)\text{--}(13)$$

On the other hand, for the non-beneficial criteria, the overall ratings of an alternative are calculated as:

$$s_i^{-l} = \sum_{j \in NB} v_{ij}^{l}, \qquad s_i^{-m} = \sum_{j \in NB} v_{ij}^{m}, \qquad s_i^{-n} = \sum_{j \in NB} v_{ij}^{n} \qquad (14)\text{--}(16)$$

where B and NB denote the sets of beneficial and non-beneficial criteria, respectively.

Step 5: Determine the overall performance index (S_i) for each alternative. For this, the overall ratings for the beneficial and non-beneficial criteria are defuzzified using the vertex method (Huiqun & Guang, 2012):

$$S_i = \sqrt{\frac{1}{3}\left[(s_i^{+l})^{2} + (s_i^{+m})^{2} + (s_i^{+n})^{2}\right]} - \sqrt{\frac{1}{3}\left[(s_i^{-l})^{2} + (s_i^{-m})^{2} + (s_i^{-n})^{2}\right]} \qquad (17)$$

Step 6: Rank the alternatives in descending order of the overall performance index, from the best to the worst. The alternative with the highest overall performance index is the most favorable choice.
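Steps 1-6 compose naturally into a single routine. The sketch below is our own assembly of the procedure in NumPy; in particular, the form of the vertex-method defuzzification in Eq. (17) is reconstructed from the description above rather than quoted, so treat it as an assumption:

```python
import numpy as np

def fuzzy_moora(F, weights, beneficial):
    """Fuzzy MOORA overall performance indices, Steps 1-6.

    F          : (m, n, 3) array; F[i, j] is the triangular rating
                 (lower, middle, upper) of alternative i on criterion j
    weights    : (n,) crisp criteria weights
    beneficial : (n,) boolean array, True for benefit-type criteria
    """
    F = np.asarray(F, dtype=float)
    beneficial = np.asarray(beneficial, dtype=bool)
    # Step 2, Eqs. (5)-(7): one vector-normalization denominator per criterion
    denom = np.sqrt((F ** 2).sum(axis=(0, 2)))            # shape (n,)
    V = F / denom[None, :, None]
    # Step 3, Eqs. (8)-(10): weight the normalized fuzzy ratings
    V = V * np.asarray(weights, dtype=float)[None, :, None]
    # Step 4, Eqs. (11)-(16): fuzzy sums over benefit / cost criteria
    s_plus = V[:, beneficial, :].sum(axis=1)              # shape (m, 3)
    s_minus = V[:, ~beneficial, :].sum(axis=1)
    # Step 5, Eq. (17): vertex-method defuzzification of each fuzzy sum
    def defuzz(s):
        return np.sqrt((s ** 2).sum(axis=1) / 3.0)
    return defuzz(s_plus) - defuzz(s_minus)               # sort descending (Step 6)
```

Step 1, the construction of F itself, is left to the decision makers' elicited ratings.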
Illustrative examples
While selecting the best ERP system for a specific manufacturing organization, the decision maker has to take into account various critical factors influencing its successful deployment. These important factors are listed in detail below.

a) Corporate vision
i) What major organizational changes has the ERP vendor/developer made recently?
ii) What major product changes does the organization foresee or has planned in the near future?
iii) What level of involvement do the executive staff have in the organization's daily operations?

b) Technology and system architecture
i) Is the technology robust enough to handle current and future transactions?
ii) Is the system's speed acceptable for daily use?
iii) Is the source code provided so that customizations or modifications can be made easily?
iv) Does the ERP system allow a number of database and server options?
v) Does the ERP system support multi-organization, multi-division and multi-location environments?

c) Product functionality
i) Does the ERP system meet the overall requirements?
ii) Is the menu structure easy to follow and understand?
iii) Are the help files easily accessible and easy for users to comprehend?
iv) Can the user customize the help files to meet organizational needs?
v) Is the product too complicated or sophisticated for the average user?
vi) Are useful standard reports available?

d) Product cost
i) Are the license costs justified given the offered functionalities?
ii) Is the required database affordable?
iii) Are the annual maintenance charges reasonable?
iv) What is the true implementation service-to-software ratio for the organization?
v) How quickly can payback be achieved?

e) Service and support
i) Is the team comfortable with the sales process and representative?
ii) Can the ERP vendor provide a complete turn-key solution?
iii) What type of project management is available?
iv) What type of training is available?
v) What are the average technical support staff's experience level and tenure in the organization?
vi) How quickly can non-critical software bugs be fixed?
vii) Does the ERP vendor offer business process re-engineering as part of the implementation process?
viii) Does the ERP system vendor have experience in other industries?

f) Vendor longevity
i) How many years has the organization been actively engaged in the software industry?
ii) When was the product's first release?
iii) Has the organization been consistently profitable over the years?
iv) Has there been recent turnover among the management staff?
v) Are customer references available?
Example 1
In this example, Huiqun and Guang (2012) considered four candidate ERP systems, i.e. CA-MANMAN/X, BAAN, SSA-BPCS and SAP R/3, and observed that, apart from cost, low risk, high quality, a flawless product, high reliability and on-time delivery would be the key indicators for evaluating the performance of an ERP system. Five important criteria for software selection were identified: risk (R), quality (Q), effectiveness (E), efficiency (EF) and user satisfaction (US); the AHP method was then employed to determine the weights of these five criteria as w_R = 0.364, w_Q = 0.271, w_E = 0.203, w_EF = 0.093 and w_US = 0.068. Using the triangular fuzzy membership function, the fuzzy decision matrix for this ERP system selection problem was developed, as given in Table 1, and the fuzzy TOPSIS method was applied to identify SAP R/3 as the best alternative, with SSA-BPCS as the worst ERP system.
For solving the same problem using the fuzzy MOORA method, the fuzzy decision matrix containing the criteria values expressed as triangular fuzzy numbers is first normalized using Eqns. (5)-(7). This normalized fuzzy decision matrix is shown in Table 2. Normalization is essential in an MCDM method to make the elements of the decision matrix dimensionless and comparable. Table 3 exhibits the weighted normalized fuzzy decision matrix, obtained by multiplying the normalized fuzzy criteria values by the corresponding crisp criteria weights. Among the five selection criteria, risk is the only non-beneficial attribute, for which a lower value is preferred; higher values are always desired for quality, effectiveness, efficiency and user satisfaction. Using Eqns. (11)-(13), the overall ratings of the beneficial criteria for the considered alternatives are calculated; similarly, applying Eqns. (14)-(16), the overall ratings of the non-beneficial criteria are computed. These overall ratings for the four alternative ERP systems are given in Table 4. The vertex method is then employed to defuzzify the overall ratings for the beneficial and non-beneficial criteria and derive the overall performance index of each alternative. The highest overall performance index is observed for alternative 4, signifying that SAP R/3 is the best ERP system to implement; alternative 3 (SSA-BPCS) is the least favored, while CA-MANMAN/X and BAAN are the intermediate choices. Huiqun and Guang (2012) obtained the same rankings for these ERP systems using the fuzzy TOPSIS method.
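As a usage sketch, the fuzzy_moora routine given earlier can be driven with the AHP weights quoted in the text. Since the Table 1 ratings are not reproduced here, the ratings below are random placeholders, not the actual study data:

```python
import numpy as np

rng = np.random.default_rng(42)
F = rng.uniform(1.0, 9.0, size=(4, 5, 3))   # 4 alternatives x 5 criteria
F.sort(axis=2)                              # enforce lower <= middle <= upper

weights = np.array([0.364, 0.271, 0.203, 0.093, 0.068])   # R, Q, E, EF, US
beneficial = np.array([False, True, True, True, True])    # risk is cost-type

S = fuzzy_moora(F, weights, beneficial)
print(np.argsort(S)[::-1])                  # alternative indices, best to worst
```

With the actual Table 1 ratings in place of the placeholders, this procedure should reproduce the ranking discussed above.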
Example 2
After thoroughly investigating the effects of all possible criteria on the final ERP system selection decision, Nikjoo et al. (2011) shortlisted five criteria according to their relative importance: business volume (BV), rate of market share (RMS), customization (C), user interface (UI), and support services and after-sales (SSS). Three alternative ERP systems, i.e. SAP, ORACLE and AXAPTA, were considered, and three experts familiar with ERP systems were asked to judge each alternative's satisfaction level on each criterion. They were also asked to rate the importance of each criterion using linguistic variables, and they expressed their opinions as trapezoidal fuzzy membership functions. These trapezoidal fuzzy numbers are transformed into triangular fuzzy membership functions, as given in Table 5. The linguistic variables associated with the corresponding criteria weights are then converted into crisp values and normalized as w_BV = 0.1823, w_RMS = 0.2138, w_C = 0.2078, w_UI = 0.1842 and w_SSS = 0.2119. The maximum weights are thus allocated to the rate of market share and the support services and after-sales criteria. It is interesting to note that all five criteria are beneficial in nature, i.e. higher values are desired. Using the fuzzy TOPSIS method, Nikjoo et al. (2011) identified SAP as the best ERP system for the considered organization.
When this ERP system selection problem is solved using the fuzzy MOORA method, the fuzzy decision matrix is first normalized, as given in Table 6, after which the weighted normalized fuzzy decision matrix is obtained, as exhibited in Table 7. As all five criteria are beneficial, only the overall ratings of the beneficial criteria need to be calculated for each alternative, as shown in Table 8. These overall ratings are then defuzzified using Eq. (17) to obtain the corresponding overall performance indices for the three ERP system alternatives. Based on these overall performance index values, the alternative ERP systems are ranked: SAP emerges as the best ERP system, followed by ORACLE, while AXAPTA obtains the last rank. It is worth mentioning that Nikjoo et al. (2011) derived the same rankings for these three alternatives. These results clearly suggest the feasibility, usefulness and accuracy of the fuzzy MOORA method in solving ERP system selection problems. It may be noted that SAP, founded in 1972, is the world's largest inter-enterprise software organization and the world's fourth-largest independent software supplier. SAP originally intended to provide customers with the ability to interact with a common corporate database across a comprehensive range of applications. These applications have gradually been assembled, and today many organizations, including IBM and Microsoft, use SAP products to run their own businesses. SAP applications, built around its latest R/3 system, provide the capability to manage financial, asset and cost accounting, production operations and materials, personnel, plants, and archived documents. Hence, the emergence of SAP as the best ERP system in both cases is not surprising.
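A brief note on the all-beneficial case in code: with no cost-type criteria, the second vertex term of Eq. (17) vanishes and the performance index reduces to the defuzzified beneficial sum, which the fuzzy_moora sketch handles without modification. The ratings below are again hypothetical placeholders for Table 5; only the weights come from the text:

```python
import numpy as np

rng = np.random.default_rng(7)
F = rng.uniform(1.0, 9.0, size=(3, 5, 3))   # 3 alternatives x 5 criteria
F.sort(axis=2)                              # enforce lower <= middle <= upper

weights = np.array([0.1823, 0.2138, 0.2078, 0.1842, 0.2119])
beneficial = np.ones(5, dtype=bool)         # all five criteria are benefit-type

S = fuzzy_moora(F, weights, beneficial)
print(S, np.argsort(S)[::-1])               # indices from best to worst
```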
Conclusions
In the present-day global competitive environment, manufacturing organizations need to implement suitable ERP systems to survive and achieve competitive advantage over their rivals. Evaluating and selecting an ERP system for a specific managerial function is often a cost-intensive and time-consuming task; on the other hand, the success of a manufacturing organization depends heavily on taking full advantage of the installed ERP system. The information related to the performance of the available ERP systems with respect to the various deciding factors is sometimes vague and imprecise, and can suitably be expressed as triangular fuzzy numbers; the choice of a suitable ERP system from a pool of feasible alternatives is therefore usually supported by fuzzy MCDM approaches. In this paper, the fuzzy multi-objective optimization on the basis of ratio analysis method is adopted to select the best ERP systems for two organizations, and it is observed that in both cases SAP is the best choice. As the proposed approach is simple, easy to understand and accurate, it can also be applied successfully to other managerial and strategic decision-making situations.
Fig. 1. A triangular fuzzy membership function: the triangle spans the lower and upper values of the input data, with its apex at the middle value.
Table 2. Normalized fuzzy decision matrix for example 1
Table 3. Weighted normalized fuzzy decision matrix for example 1
Table 5. Fuzzy decision matrix for the ERP system selection problem for example 2
Table 6. Normalized fuzzy decision matrix for example 2
Table 8. Ranking of ERP systems for example 2
"Business",
"Computer Science"
] |